GPUs are designed to perform huge numbers of calculations extremely fast. The trade-off is that they are built to perform the same operation (e.g. A + B * C) over and over with differing input values*. They are also generally optimized for floating point operations, at the cost of making integer operations slower. GPUs also generally have dedicated memory that is designed to be extremely fast.
CPUs, on the other hand, are optimized for constantly switching between different operations, and generally for integer operations as well. This makes them ideal for performing business logic that often follows the flow of “if X do Y else do Z”, which a GPU is pretty bad at.
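To make that “same operation, different data” idea concrete, here is a minimal CUDA sketch (the kernel name `fma_kernel`, the array sizes, and the use of unified memory are just illustrative choices, not the only way to do it). A million copies of the same tiny A + B * C calculation run in parallel, one per GPU thread:

```cuda
#include <cstdio>

// Each GPU thread performs the exact same operation (A + B * C),
// just on a different element of the arrays.
__global__ void fma_kernel(const float* a, const float* b,
                           const float* c, float* d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        d[i] = a[i] + b[i] * c[i];
    }
}

int main() {
    const int n = 1 << 20;  // ~1 million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c, *d;
    // Unified memory keeps the example short; real code often
    // manages host/device copies explicitly.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    cudaMallocManaged(&d, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; c[i] = 3.0f; }

    // Launch one thread per element: a million "copies" of the
    // same calculation run at once.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    fma_kernel<<<blocks, threads>>>(a, b, c, d, n);
    cudaDeviceSynchronize();

    printf("d[0] = %f\n", d[0]);  // 1 + 2*3 = 7

    cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(d);
    return 0;
}
```

Notice there's no “if X do Y else do Z” branching in the kernel itself; that uniformity is exactly what lets the GPU run it so wide.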
While GPUs were designed for performing graphics operations, hence the `G`, the same pattern has proven extremely useful for many other types of calculations, such as physics, encryption, and artificial intelligence. You could make a dedicated processor just for artificial intelligence, physics, or encryption work, and it has been done, but the benefit is not especially high compared to using a GPU for the same work. Additionally, GPUs are much easier to find and are a well-known target for software to develop against, making it cheaper and easier to write the software you need.
Due to all of this, you can actually find specialized motherboards with lots of PCIe slots for lots of graphics cards. When performing these specialized types of operations, this is generally the best way to go: software can still mostly pretend it's running on readily available hardware, making it easy to program against, while being able to take advantage of insane computing power.
* When rendering computer graphics, every pixel on your screen covered by the same object needs to perform the same set of calculations with slightly different input values. There are other layers involved as well with similar patterns, but this simple example should make it fairly clear why the design is useful.
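As a sketch of that per-pixel pattern (assuming a toy color-gradient formula rather than real shading, with made-up names like `shade` and the output written as a PPM file just so there's something to look at), every thread below runs the identical function; only its (x, y) coordinate differs:

```cuda
#include <cstdio>

// Every pixel evaluates the identical formula; only its (x, y)
// coordinates differ, so the GPU can compute them all in parallel.
__global__ void shade(unsigned char* img, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h) {
        int i = (y * w + x) * 3;
        img[i + 0] = (unsigned char)(255 * x / (w - 1)); // red ramps across
        img[i + 1] = (unsigned char)(255 * y / (h - 1)); // green ramps down
        img[i + 2] = 128;                                // constant blue
    }
}

int main() {
    const int w = 640, h = 480;
    unsigned char* img;
    cudaMallocManaged(&img, w * h * 3);

    // One thread per pixel, arranged in a 2D grid of 16x16 blocks.
    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    shade<<<grid, block>>>(img, w, h);
    cudaDeviceSynchronize();

    // Dump the result as a PPM image so it can be viewed.
    FILE* f = fopen("out.ppm", "wb");
    fprintf(f, "P6\n%d %d\n255\n", w, h);
    fwrite(img, 1, w * h * 3, f);
    fclose(f);
    cudaFree(img);
    return 0;
}
```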