Why do CPUs/CPU cores seem to have been capped at 3–4 GHz for almost a decade?




29 Answers

Anonymous 0 Comments

On top of everything else, I’d argue that CPU speeds have now reached the point where for most *consumer* applications, a faster CPU just isn’t worth the extra power draw and cooling demands.

Things like upgrading to an SSD rather than a mechanical hard drive give you a far better performance boost on modern systems than an extra 1 GHz would. Even for gaming, unless you’re targeting high frame rates (i.e. 120 fps and above), you’re likely to find that your performance is limited more by your GPU or HDD speed than by the CPU.

Also, programmers have become much better at spreading work across multiple cores rather than relying on a single CPU thread to do everything, and at moving some tasks (e.g. video encoding) onto the GPU, so again overclocking the CPU is likely to give you minimal gains unless you have a specific need for it.
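As a rough illustration of that "spread it across cores" idea, here's a minimal Python sketch using the standard multiprocessing module; the `cpu_bound_work` function and the input sizes are made-up placeholders, not anything from a real workload.

```python
from multiprocessing import Pool

def cpu_bound_work(n):
    # Placeholder for a CPU-heavy task (e.g. encoding one video chunk).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8  # eight independent chunks of work

    # Single-threaded: one core grinds through everything in sequence.
    serial = [cpu_bound_work(n) for n in jobs]

    # Multi-core: the same chunks are farmed out to a pool of worker
    # processes, so a 4- or 8-core CPU finishes much sooner even though
    # no individual core got any faster.
    with Pool() as pool:
        parallel = pool.map(cpu_bound_work, jobs)

    assert serial == parallel
```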

Anonymous 0 Comments

This has actually been the case for about two decades now. As others have pointed out, even some Pentium 4 CPUs were already at 3 GHz.

One of the fundamental issues in the chip industry today is that adding more cores is a much worse solution than speeding up single-core performance. Two cores capable of 1 GFLOPS each are much inferior to a single 2-GFLOPS core. The reason for this is [Amdahl’s Law](https://en.wikipedia.org/wiki/Amdahl%27s_law), if you’re keen on some further reading! 🙂
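To make that concrete, here's a small back-of-the-envelope sketch of Amdahl's Law in Python. The 80%-parallel figure is just an assumed example, not a measured number:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Speedup over one baseline core, per Amdahl's Law:
    S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

p = 0.8  # assume 80% of the program can run in parallel

# A single core that is simply twice as fast speeds up *everything*: 2.0x.
# Two 1x cores only help the parallel part:
print(amdahl_speedup(p, 2))   # ~1.67x -- worse than the single 2x core
print(amdahl_speedup(p, 16))  # ~4.0x  -- and it can never beat 5x, however many cores
```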

Unfortunately, as others have pointed out, increasing single-core speed is very hard.

Anonymous 0 Comments

3 GHz is twenty years old; the Pentium 4 broke that barrier for a commercially available CPU in 2002.

Anonymous 0 Comments

Shrinking circuits down any further causes too much quantum tunnelling. Basically, electrons normally stay flowing on the defined path (circuit). If those circuits are too close together, electrons can spontaneously jump across the gap through undesirable (but really cool!) quantum effects.

This limits how close the traces can be to each other, which limits how small you can shrink things. And if you don’t shrink things, you either get heat problems from the resistance electrons meet as they’re pushed through the circuit at high speed, or you end up with a chip that’s too large, causing timing, synchronization, or speed problems.

So clock speed isn’t the sole measure of performance any more. Parallelism, 3D layering, caches, and new or optimized instructions increase performance and capability in better ways than simply “turning up the speed knob”, like we used to in years past.

Anonymous 0 Comments

Same reason car engines have stayed at the typical 1.6–2.0 L instead of just lumping in a big V6/V8 for more power: efficiency from the same displacement keeps improving with technology, so you don’t need to go bigger.

Anonymous 0 Comments

Besides all the physics and electrical answers, CPUs are good enough to handle most tasks that the majority of users will throw at them.

Excel, Word, Outlook, web browsers: these things do not tax a CPU. Gamers want more, but that has always been true. Video production can be a hog, but that’s more niche, and they’ll follow similar paths to gamers.

Increased memory capacities, much faster SSDs, and improvements across all the peripheral components have reduced the need for raw CPU performance.

Anonymous 0 Comments

This isn’t really an answer to your question, but my i9-10850K normally runs at 5.1 GHz. The base speed is 3.60 GHz, but the new “AI” tech automatically does all the behind-the-scenes work for you, optimizing the settings based on temps and other stuff I didn’t care enough to look into. I think the newer-gen CPU focus is more on this kind of automatic optimization to push performance in a safe way in real time. That’s probably more impactful and efficient than just pushing the numbers higher.

Anonymous 0 Comments

Increasing the clock speed is by no means impossible, but it comes with significant cooling and energy costs. If Intel or AMD spent ten years and billions of dollars they could likely develop something that allowed significantly higher clock speeds, such as cooling built into the chip, but the question is whether there is any demand for a single, extremely expensive core with insanely high clock speeds.
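To get a feel for why those energy costs blow up, here's a hedged back-of-the-envelope using the standard CMOS dynamic-power relation P ≈ C·V²·f. The specific voltage and frequency numbers are illustrative assumptions, not any real CPU's spec:

```python
def dynamic_power(cap, voltage, freq):
    # Classic CMOS dynamic-power approximation: P ~ C * V^2 * f
    return cap * voltage**2 * freq

base = dynamic_power(cap=1.0, voltage=1.0, freq=4.0e9)

# Suppose pushing the clock 25% higher also needs ~10% more voltage
# to keep the chip stable (made-up but plausible numbers).
oc = dynamic_power(cap=1.0, voltage=1.1, freq=5.0e9)

print(f"{oc / base:.2f}x the power")  # ~1.51x the heat for 1.25x the clock
```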

You can basically divide software into multithreaded and single-threaded. Since increasing clock speed is so expensive, it turns out to be far more efficient to increase performance by running programs multithreaded (although it doesn’t scale perfectly; look up Amdahl’s Law). The enormous difference in performance pretty much means that anything that needs to run quickly will be multithreaded (or maybe built to run on a GPU), and for everything else performance isn’t important enough to be worth significantly higher operating costs.

Anonymous 0 Comments

Increasing clock speed increases power consumption, making a CPU harder to cool and more likely to throttle. There is also a hard limit on how high clock speed can get because of how fast an electrical signal can travel from one side of the processor to the other (rough numbers in the sketch below).

Clock speed is just one of many things that affect the performance of a CPU. Over the past decade clock speed hasn’t increased all that much, but processors are much faster than they were 10 years ago because of other improvements: the architecture can be improved so more calculations are done each clock cycle, the CPU can better predict what data it will need soon and load it into its cache for quick access, and parts of the CPU can be designed to be very efficient at specific tasks.
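That "how fast can a signal cross the chip" limit is easy to sanity-check. A rough sketch, assuming on-chip signals travel at something like half the speed of light (a loose ballpark, not a datasheet figure):

```python
C = 3.0e8               # speed of light in m/s
signal_speed = 0.5 * C  # very rough on-chip propagation speed (assumption)

for ghz in (1, 3, 5, 10):
    cycle_time = 1.0 / (ghz * 1e9)              # seconds per clock cycle
    reach_mm = signal_speed * cycle_time * 1e3  # distance covered in one cycle
    print(f"{ghz} GHz: {reach_mm:.0f} mm per cycle")

# At 5 GHz a signal only covers ~30 mm per cycle -- roughly the size of a
# large die -- so a signal can't cross the whole chip and back within a
# single tick. That's one reason clocks can't just keep climbing.
```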

Anonymous 0 Comments

It’s not true. Both Intel and AMD now offer desktop CPUs that reach or exceed 5 GHz.