Why can’t GPU clockspeeds match CPU clockspeeds?

3 Answers

Anonymous

Your device probably has somewhere between 1 and 8 CPU cores; 16 is about the most I’ve seen in consumer hardware.

A GeForce GTX 1080 Ti has 3584 GPU cores, and that’s a fairly normal number for a graphics card.
Those cores specialize in the math of graphics, which splits up easily across many cores, and that kind of math benefits more from having more workers than from having faster workers.

If they tried to make the thousands of GPU cores go as fast as a CPU, there would be too much heat and you’d burn down your computer.
Also, very few home users want to pay $300,000 for a GPU. And any animation company that actually would is probably better off buying hundreds of regular computers and networking them together.

Anonymous

They could, but they’d end up performing worse.

Higher clock speeds are good if you need to get through one specific task as fast as possible, but they’re not great if you need to get through a huge quantity of similar tasks as fast as possible. In that scenario you want to spend your power budget on more cores instead. GPUs are the second case taken to the extreme.

The bigger limit for high-end GPUs these days is power consumption, with recent flagship cards exceeding 350 watts. At some point you start generating heat in the chip faster than the heatsink can pull it out, and then the chip gets sad.

Power consumption in processors like CPUs and GPUs scales linearly with clock speed (2 GHz requires twice as much power as 1 GHz, all else being equal) but with the *square* of the voltage (transistors running at 1.3 V use about 17% more power than at 1.2 V). Increasing the clock speed generally requires an increase in voltage to get there and stay stable.

If we say, for example, that 1 GHz needs 1.2 V, 1.5 GHz needs 1.3 V, and 2 GHz needs 1.4 V, then each core running at 1.5 GHz generates about 75% more heat for 50% more performance, and at 2 GHz it generates about 2.7x the heat for 2x the performance. Pushing clock speeds up ends up *reducing* overall GPU performance because you can’t fit as many tiny cores under the same power budget.
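If you want to check that math yourself, here’s a minimal Python sketch of the rule above (power scaling roughly with clock × voltage²), using the made-up frequency/voltage pairs from this answer rather than any real chip’s numbers:

```python
# Rough sketch: dynamic power scales with frequency times voltage squared.
# The frequency/voltage pairs below are the made-up example numbers from this answer.

def relative_power(freq_ghz, volts):
    return freq_ghz * volts ** 2

baseline = relative_power(1.0, 1.2)          # 1 GHz at 1.2 V

for freq, volts in [(1.5, 1.3), (2.0, 1.4)]:
    ratio = relative_power(freq, volts) / baseline
    print(f"{freq} GHz @ {volts} V: {ratio:.2f}x the heat for {freq}x the performance")

# Prints roughly:
# 1.5 GHz @ 1.3 V: 1.76x the heat for 1.5x the performance
# 2.0 GHz @ 1.4 V: 2.72x the heat for 2.0x the performance
```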

Unlike CPU loads, GPU loads scale almost perfectly with more cores. The GPU is trying to solve for roughly 8 million pixels on a 4K screen 60 times per second, so there’s plenty of independent work to go around, and more cores is generally better than fewer, faster ones.
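To make “plenty of independent work” concrete, here’s a toy Python sketch. The shade_row function is a made-up stand-in for real shading math, and the worker pool here only spans CPU cores, but the idea of handing out independent chunks of a frame is the same one a GPU takes to thousands of cores:

```python
# Toy illustration: every row (really, every pixel) of a frame can be computed
# on its own, so the work splits cleanly across however many workers exist.
# shade_row() is pretend shading math, not a real graphics API.
from multiprocessing import Pool

WIDTH, HEIGHT = 3840, 2160            # a 4K frame, about 8.3 million pixels

def shade_row(y):
    # No row waits on any other row; each needs only its own coordinates.
    return sum((x * y) % 256 for x in range(WIDTH))

if __name__ == "__main__":
    with Pool() as workers:           # one worker per CPU core on this machine
        results = workers.map(shade_row, range(HEIGHT))
    print(f"did {WIDTH * HEIGHT:,} pixels' worth of work with no coordination needed")
```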

Anonymous

A CPU is like having 4 ultra-smart PhDs working on a math problem. A GPU is like having an army of middle schoolers working on a problem. Each has its use. If you have to do an extremely long and convoluted sequence of operations, it’s best to give it to the small group of fast PhDs. If you need to do a million simple arithmetic problems, it’s best to divide them across the army of middle schoolers. This is roughly what happens in computing: GPUs are very useful where you need many operations (each one relatively easy, like the matrix operations you see a lot in graphics and AI) done en masse, in parallel. CPUs are useful for programs where you need to blitz through a sequence of steps, even if that sequence is long.
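Here’s a rough Python sketch of that split. Both functions are made-up toy examples, not real CPU or GPU code, but they show the difference between one long dependent chain and a pile of independent little problems:

```python
# Toy contrast between the two kinds of work described above.

def long_dependent_chain(x, steps=1_000_000):
    # "PhD" work: every step needs the previous result, so there is nothing
    # to hand out in parallel -- one fast worker beats many slow ones.
    for _ in range(steps):
        x = (x * 3 + 1) % 997
    return x

def many_small_problems(values):
    # "Army of middle schoolers" work: each little calculation stands alone,
    # so more workers means more of them finished per second.
    return [v * v + 1 for v in values]

print(long_dependent_chain(7))
print(sum(many_small_problems(range(1_000_000))))
```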