I’m oversimplifying, but you can think of a regular CPU core that normally runs your computer as a processor that does one calculation at a time. If you need to do the same multiplication to 100 pieces of data, you usually have to loop over each piece of data and do the multiplication over and over again. (Modern CPUs actually do allow some parallelization and can operate on multiple pieces of data at the same time, but the simplified picture is still useful for comparison.)
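To make that concrete, here’s a rough sketch of the CPU version: just a plain loop that visits each of the 100 values one at a time (the data and the “multiply by 2” operation are made up purely for illustration).

```
#include <cstdio>

int main() {
    // Hypothetical workload: the same multiplication applied to 100 values.
    float data[100];
    for (int i = 0; i < 100; ++i) data[i] = static_cast<float>(i);

    // The CPU walks through the elements one by one, repeating the operation.
    for (int i = 0; i < 100; ++i) {
        data[i] = data[i] * 2.0f;
    }

    printf("data[99] = %f\n", data[99]);  // prints 198.0
    return 0;
}
```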
GPUs, on the other hand, really excel at doing the same operation on many pieces of data at the same time. They typically have hundreds or even thousands of cores, allowing for massive parallelization, and a single instruction running on one core can operate on many inputs at once.
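Here’s roughly what that same job looks like on a GPU, sketched in CUDA (the kernel name and the single-block launch are just illustrative assumptions). Instead of one loop stepping through the elements, you launch 100 threads and each thread handles one element.

```
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread doubles exactly one element of the array.
__global__ void doubleAll(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f;
}

int main() {
    const int n = 100;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = static_cast<float>(i);

    // Copy the data to GPU memory.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per element; conceptually they all run at once.
    doubleAll<<<1, n>>>(dev, n);

    // Copy the results back.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[99] = %f\n", host[99]);  // prints 198.0
    return 0;
}
```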
When it comes to rendering scenes on your screen, you usually break an environment down into triangles and perform the same operations on points or regions defined by those triangles. For instance, you might want to blur far-away pixels and keep nearer pixels in focus, so you have a blurring function that needs to run over every pixel on the screen. The GPU lets you do that much more efficiently, because it’s the same calculation just spread across many different inputs.
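As a rough sketch of that blur idea in CUDA (the function name, the depth threshold, and the crude left/right averaging are all assumptions made up for illustration), the kernel below would be launched with one thread per pixel, and every thread runs the same small calculation on its own pixel.

```
#include <cuda_runtime.h>

// Crude depth-of-field pass: one thread per pixel. Pixels whose depth is
// beyond `focusDepth` get averaged with their left/right neighbours (a very
// rough blur); nearer pixels are copied through unchanged.
__global__ void blurFarPixels(const float *colorIn, float *colorOut,
                              const float *depth, int width, int height,
                              float focusDepth) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    if (depth[i] > focusDepth && x > 0 && x < width - 1) {
        // Far-away pixel: blur it by averaging with its horizontal neighbours.
        colorOut[i] = (colorIn[i - 1] + colorIn[i] + colorIn[i + 1]) / 3.0f;
    } else {
        // Near (in-focus) pixel, or an edge pixel: keep it as-is.
        colorOut[i] = colorIn[i];
    }
}
```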
GPUs aren’t just for graphics, though. They are also widely used for things like machine learning models and physics simulations. In those cases the problem just looks similar: one calculation that you need to apply across many different inputs. For that kind of work, GPUs are much more attractive than traditional CPUs.