Why can’t GPU clockspeeds match CPU clockspeeds?


They could, but they’d end up performing worse

Higher clock speeds are good if you need to get through one specific task as fast as possible, but they're not great if you need to get through a huge quantity of similar tasks as fast as possible. In that scenario you want to spend your power budget on more cores, and GPUs are that second approach taken to the extreme.

The bigger limit for high-end GPUs these days is power consumption, with recent ones exceeding 350 watts. At some point you start generating heat in the chip faster than the heatsink can pull it out, and then the chip gets sad.

Power consumption in processors like CPUs and GPUs scales linearly with clock speed (2 GHz requires twice as much power as 1 GHz, all else being equal) but with the *square* of the voltage (transistors running off 1.3V use about 17% more power than 1.2V ones). Increasing the clock speed generally requires an increase in voltage to keep the chip stable at that speed.
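For the curious, here's a rough back-of-the-envelope sketch of that relationship in Python. The formula ignores static/leakage power and the baseline values are illustrative, not measured from any real chip:

```python
# Rough sketch of dynamic power scaling: P ~ C * V^2 * f
# (the constant C cancels out when comparing against a baseline)

def relative_power(f_ghz, volts, f_base=1.0, v_base=1.2):
    """Power relative to an illustrative 1 GHz / 1.2 V baseline."""
    return (f_ghz / f_base) * (volts / v_base) ** 2

# The voltage-squared effect on its own: 1.3 V vs 1.2 V at the same clock
print(relative_power(1.0, 1.3))  # ~1.17, i.e. about 17% more power
```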

If we say 1 GHz needs 1.2V, 1.5 GHz needs 1.3V, and 2 GHz needs 1.4V, for example, then each core running at 1.5 GHz generates about 76% more heat for 50% more performance, and at 2 GHz about 2.7x the heat for 2x the performance. Increasing clock speeds ends up *reducing* overall GPU performance because you can't fit as many of those tiny cores under the same power budget.
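Plugging those example voltages (again, made-up illustrative numbers) into the same sketch shows why the trade goes the wrong way under a fixed power budget:

```python
def relative_power(f_ghz, volts, f_base=1.0, v_base=1.2):
    return (f_ghz / f_base) * (volts / v_base) ** 2

points = [(1.0, 1.2), (1.5, 1.3), (2.0, 1.4)]  # (GHz, volts) from the example above

for f, v in points:
    print(f"{f} GHz @ {v} V: {relative_power(f, v):.2f}x the heat for {f:.1f}x the performance")

# Fixed power budget: how many cores fit, and what total throughput do they give?
budget = 100.0  # arbitrary power units
for f, v in points:
    cores = budget / relative_power(f, v)
    print(f"{f} GHz: {cores:.0f} cores -> {cores * f:.0f} units of total throughput")
```

The 1 GHz cores come out ahead on total throughput (100 units vs roughly 85 at 1.5 GHz and 73 at 2 GHz), even though each individual core is slower.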

Unlike CPU loads, GPU loads scale almost perfectly with more cores: a 4K screen has about 8.3 million pixels to solve for, 60 times per second, so there's plenty of independent work to go around. More slower cores generally beat fewer faster ones.
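The sheer amount of independent work is easy to eyeball:

```python
# Back-of-the-envelope parallelism in 4K at 60 fps
width, height, fps = 3840, 2160, 60

pixels_per_frame = width * height           # ~8.3 million independent pixels
pixels_per_second = pixels_per_frame * fps  # ~500 million shading tasks per second

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{pixels_per_second:,} pixels per second")
```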
