– What is limiting computer processors from operating beyond the current range of clock frequencies (roughly 3 to 5 GHz)?

First, Power Density (or Heat).

Processors got exponentially faster over the last 50 years thanks to “Moore’s Law” [https://en.wikipedia.org/wiki/Moore%27s_law](https://en.wikipedia.org/wiki/Moore%27s_law). This was an economic prediction made in 1965 that the number of transistors on a chip would continue to double every two years. It became a self-fulfilling prophecy because Intel built that schedule into its business plan. Having more transistors available lets you clock faster, because you can spend those transistors on fancy tricks such as deep pipelining.

EDIT: I got caught wearing my architecture hat. It’s important to note that smaller transistors are just plain faster, so during this period, even with no tricks, the circuits would just magically get about 1.4x faster every generation.
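
To get a feel for how much that compounds, here is a back-of-the-envelope sketch in Python. The 50-year span, the 2-year doubling, and the ~1.4x per-generation speedup are just the rough figures from the paragraphs above, used as illustrative assumptions rather than measurements of any particular chip family:

```python
# Rough compounding of Moore's Law: transistor count doubles every
# ~2 years, and smaller transistors switch ~1.4x faster per generation.
# Illustrative assumptions only, not data for a specific processor line.

YEARS = 50
YEARS_PER_GENERATION = 2
SPEED_GAIN_PER_GENERATION = 1.4

generations = YEARS // YEARS_PER_GENERATION                       # 25 generations
transistor_growth = 2 ** generations                               # ~33,000,000x more transistors
circuit_speed_growth = SPEED_GAIN_PER_GENERATION ** generations    # ~4,500x faster circuits

print(f"Generations:            {generations}")
print(f"Transistor count grows: ~{transistor_growth:,}x")
print(f"Circuit speed grows:    ~{circuit_speed_growth:,.0f}x")
```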

This doubling was possible because of “Dennard Scaling” [https://en.wikipedia.org/wiki/Dennard_scaling](https://en.wikipedia.org/wiki/Dennard_scaling), which at a high level means that, due to the physics of the transistors, power density stays constant as transistors shrink. This lets you fit twice as many transistors on a chip while using the same cooling mechanisms. However, this broke down around the mid-2000s. The graph here is a great illustration of it (haven’t read the rest of the article, but it’s probably good: [https://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck](https://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck)). Because Dennard scaling failed, we couldn’t use those extra transistors to clock chips faster, so the industry moved instead to multicore processors, each core clocked lower.
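
The classic Dennard argument can be written out numerically. The sketch below uses the idealized textbook scaling relations (dimensions, voltage, and capacitance all shrink by the same factor, frequency rises by it) and assumes dynamic power P = C·V²·f dominates; once supply voltage could no longer keep shrinking because of leakage, the last line stops coming out constant:

```python
# Idealized Dennard scaling across one process generation.
# Assumes dynamic power P = C * V^2 * f dominates (leakage ignored).

k = 1.4  # linear shrink factor per generation (~sqrt(2) => 2x transistor density)

capacitance = 1 / k    # C scales down with transistor dimensions
voltage     = 1 / k    # supply voltage scales down with dimensions
frequency   = k        # smaller transistors switch faster

power_per_transistor = capacitance * voltage**2 * frequency   # = 1/k^2
transistors_per_area = k**2                                    # twice as many per mm^2

power_density = power_per_transistor * transistors_per_area    # = 1.0, i.e. unchanged
print(f"Relative power density after one generation: {power_density:.2f}")
```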

Incidentally, multicore scaling has also run into limits because of the “Dark Silicon” problem [https://en.wikipedia.org/wiki/Dark_silicon](https://en.wikipedia.org/wiki/Dark_silicon): you can no longer afford to power all of a chip’s transistors at once. This has driven huge innovation in the field, with custom hardware blocks used for power efficiency rather than relying on a big general-purpose CPU.

Second, Power Efficiency.

Dynamic power scales linearly with frequency but quadratically with voltage, roughly P ≈ C·V²·f. [https://physics.stackexchange.com/questions/34766/how-does-power-consumption-vary-with-the-processor-frequency-in-a-typical-comput](https://physics.stackexchange.com/questions/34766/how-does-power-consumption-vary-with-the-processor-frequency-in-a-typical-comput) Running at a higher frequency requires a higher voltage; conversely, underclocking the processor lets you lower the voltage safely, so power drops roughly cubically with frequency. For similar total throughput, you might rather have several slower, cooler cores than a single blazing-fast, hot core.
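
A minimal sketch of that trade-off, assuming the standard dynamic-power model P ≈ C·V²·f and the simplification that the required voltage scales roughly linearly with frequency (the 5 GHz / 100 W baseline is an arbitrary illustrative number, not any real chip’s V/f curve):

```python
# Toy dynamic-power model: P ~ C * V^2 * f, with V assumed to scale
# roughly linearly with f, so power scales roughly with f^3.
# The 100 W at 5 GHz baseline is an arbitrary illustrative assumption.

def relative_power(freq_ghz, base_freq_ghz=5.0, base_power_w=100.0):
    scale = freq_ghz / base_freq_ghz
    return base_power_w * scale ** 3

one_fast_core    = relative_power(5.0)        # 100 W for 5 GHz of throughput
two_slower_cores = 2 * relative_power(2.5)    # 2 x 12.5 W = 25 W for the same 5 GHz total

print(f"One 5 GHz core:    {one_fast_core:.0f} W")
print(f"Two 2.5 GHz cores: {two_slower_cores:.0f} W")
```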

Third, the Memory Wall ([https://www.researchgate.net/publication/224392231_Mitigating_Memory_Wall_Effects_in_High-Clock-Rate_and_Multicore_CMOS_3-D_Processor_Memory_Stacks/figures?lo=1](https://www.researchgate.net/publication/224392231_Mitigating_Memory_Wall_Effects_in_High-Clock-Rate_and_Multicore_CMOS_3-D_Processor_Memory_Stacks/figures?lo=1))

Most of the speed increase has gone to logic, not memory. Your CPU gets way faster, but the backing memory doesn’t keep up. If your CPU triples in speed but your DRAM only improves by 1.4x, the CPU just ends up idling for long stretches waiting on memory. This is inefficient and results in poor relative performance gains, and the problem gets even worse with multicore processors all sharing that memory, which is why it’s still an active area of research.
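
An Amdahl-style toy model makes the point. The 3x CPU and 1.4x DRAM improvements come from the paragraph above; the assumption that 40% of the original run time is spent waiting on DRAM is an arbitrary illustrative number:

```python
# Toy memory-wall model: speeding up compute helps less and less when a
# fixed fraction of run time is spent waiting on memory (Amdahl's-law style).
# The 40% memory-stall fraction is an arbitrary illustrative assumption.

def effective_speedup(cpu_speedup, mem_speedup, mem_fraction=0.4):
    compute_time = (1 - mem_fraction) / cpu_speedup   # compute part, sped up a lot
    memory_time  = mem_fraction / mem_speedup         # memory part, barely sped up
    return 1 / (compute_time + memory_time)

# CPU gets 3x faster, DRAM only 1.4x faster:
print(f"Effective speedup: {effective_speedup(3.0, 1.4):.2f}x")  # ~2.06x, not 3x
```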
