Why do CPUs and GPUs need exponentially more power for marginal gains?


For example, the 7950X at 230 W is only about 6.5% faster than at 105 W. Why is all that extra power being wasted?


4 Answers

Anonymous 0 Comments

Just because we can squeeze more transistors into a silicon chip, it does not follow that the efficiency of each transistor correspondingly increases.

In fact, a lot of research goes into fighting the decreases in efficiency that miniaturization often brings.

Anonymous 0 Comments

Well, I’m not sure what data you’re looking at, but usually in these sorts of things it isn’t “the CPU draws 230 W”, it’s “the CPU can draw *up to* 230 W”. So you can’t assume that going from a 105 W rating to a 230 W rating actually represents that large an increase in sustained power draw.

There is also sometimes an issue with the “limits” being more like suggestions. Just because you set the system to “105 W” doesn’t mean the CPU is actually drawing 105 watts or less.

There is then a third issue: there are a few different ways to measure “watts”. 105 W may be a TDP (thermal design power) and the 230 W a “power to the socket”, the latter of which is generally a significantly larger number.

Anonymous 0 Comments

Imagine running a kitchen at a restaurant with, say, 5 chefs. Business is going well and you’re feeding, say, 25 customers an hour. Now you want to start scaling up and serving more customers. Simple enough: you hire an additional chef and that solves the problem, you can now serve 30 customers in the same time.

Now business is booming, and maybe a year down the line you think you can do the same thing: “if I hire a new chef, I can feed 5 more customers an hour”. So you try it and find that you’re only feeding maybe 1 or 2 more customers an hour. The problem is, there’s not enough counter space for all the chefs, and not enough pots, pans, and cooktops. Those need to be scaled as well. You also need more servers to keep up with the orders, a system to keep track of which chefs should cook which meals, etc.

The point is, to get the same gains you saw in the past, you need to do more than hire an additional chef.

Likewise for CPUs and GPUs. If more cores simply meant proportionally more performance, power requirements would scale linearly, but that’s not the whole picture. You need additional hardware and firmware to make sure all the cores can do their jobs. That gets much harder when you try to fit them all into the same space, and harder still when you want to make sure they’re not making mistakes.
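The kitchen analogy above is essentially Amdahl’s law: the coordination work (the shared counter space, the order tracking) doesn’t parallelize, so each added core buys less. A minimal sketch, assuming for illustration that 10% of the workload is serial (the real fraction varies by workload):

```python
def amdahl_speedup(cores, parallel_fraction):
    """Amdahl's law: overall speedup is capped by the serial portion of the work."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With 10% of the work stuck in "coordination" (the shared counters,
# pots, and pans), each extra chef/core helps less and less:
prev = amdahl_speedup(1, 0.9)
for n in (2, 4, 8, 16):
    cur = amdahl_speedup(n, 0.9)
    per_core = (cur - prev) / (n - n // 2)  # extra speedup per core added
    print(f"{n:2d} cores: speedup {cur:.2f}, gain per added core {per_core:.2f}")
    prev = cur
```

With these assumed numbers, going from 1 to 2 cores adds about 0.8x of speedup per core, while going from 8 to 16 adds only about 0.2x per core, even before counting the extra power the coordination circuitry itself burns.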

Anonymous 0 Comments

One reason is that to add more cores, you can’t just slap on another core; you must carefully build your circuits around that extra core. You need to add more wires, more scheduling and control circuits, etc. That adds propagation delays, more heat and energy loss, and scheduling delays to make sure everything works how it’s supposed to, when it’s supposed to.

So you’re adding a lot of performance with a new core, but then you take away a bit of that performance with the added circuitry that makes it all work, and you need more energy to run that circuitry too.
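There’s also a rough way to see why the last few percent of clock speed cost so much power: CMOS dynamic power scales roughly with capacitance × voltage² × frequency, and higher clocks generally need higher voltage to stay stable. A sketch with purely illustrative operating points (these are not measured 7950X values):

```python
# Rough CMOS dynamic power model: P ~ C * V^2 * f.
# The voltages and frequencies below are illustrative assumptions,
# not measured figures for any real chip.
def dynamic_power(freq_ghz, volts, cap=1.0):
    return cap * volts**2 * freq_ghz

low  = dynamic_power(5.0, 1.10)   # hypothetical "efficient mode" operating point
high = dynamic_power(5.4, 1.35)   # hypothetical "max boost" operating point

perf_gain  = 5.4 / 5.0 - 1        # ~8% more clock speed...
power_gain = high / low - 1       # ...for ~60% more dynamic power
print(f"perf +{perf_gain:.0%}, power +{power_gain:.0%}")
```

Because voltage enters squared, a small frequency bump that requires a voltage bump costs disproportionately more power, which is consistent with a chip being only a few percent faster at more than double the power budget.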