In theory, a 2-GPU system can get close to 2× the performance of a single GPU. In practice, most software isn't optimized to split work across multiple GPUs efficiently. And even with optimally efficient software, the 2-GPU system will always lose a small amount of performance to the overhead of coordinating two devices instead of one: scheduling work, synchronizing results, and moving data between them.
It might be possible to reduce this overhead by treating the GPUs as a cluster and mediating between them with a purpose-built microprocessor, though I don't know whether this has been done.
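The scaling loss above can be sketched with an Amdahl's-law-style estimate. This is a hypothetical model, not a benchmark: the parallel fraction and per-device overhead numbers below are made-up illustrative values.

```python
def speedup(n_gpus, parallel_fraction, per_gpu_overhead):
    """Estimate speedup over one GPU for a workload that is only
    partly parallelizable, with a fixed coordination cost per extra GPU.
    All inputs are fractions of the single-GPU runtime (assumed values)."""
    serial = 1.0 - parallel_fraction          # part that can't be split
    parallel = parallel_fraction / n_gpus     # part that divides across GPUs
    overhead = per_gpu_overhead * (n_gpus - 1)  # coordination cost
    return 1.0 / (serial + parallel + overhead)

# With a 95%-parallel workload and 1% overhead per extra device:
print(speedup(1, 0.95, 0.01))  # 1.0 (baseline)
print(speedup(2, 0.95, 0.01))  # ~1.87, not 2.0
```

Even with generous assumptions, the second GPU never quite doubles throughput, which matches why real-world multi-GPU scaling falls short of the theoretical maximum.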