Why do CPUs/CPU cores seem to have been capped at 3–4 GHz for almost a decade?


29 Answers

Anonymous 0 Comments

We (Intel specifically) learned the hard way that past a certain clock frequency (about 3 GHz), going any higher leads to unacceptable levels of heat and power draw (semiconductors actually ~~increase~~ lower their electrical resistance as temperature rises).

Instead, they figured that improving the instruction sets and having more cores at a lower clock was more efficient.

This was the main reason why the later Pentium 4s, despite their impressive clock speeds, actually underperformed against the Athlons of the time: that clock speed was wasted on an inefficient instruction pipeline (the steps the CPU takes to execute an instruction safely), and the chips drew too much power and ran too hot to achieve it.

EDIT: corrected a goof.

Anonymous 0 Comments

Also, because we can now fit far more physical cores on a chip, you don't need as much speed per core, since different tasks can be dedicated to different cores. It's like an electric car with 3 motors: all the motors help move the car forward, so you don't need 1 overpowered motor.

Anonymous 0 Comments

At the most basic level, you’ve got a conductor that can either have a charge or not at any given point in time. The limit on your clock speed is based on how fast you can switch between ‘charge’ and ‘no charge’.

Now, you can improve this speed by making ‘charge’ and ‘no charge’ closer together. That’s why you see the voltages on processors going down – it’s easier to switch between the two states when you have a shorter ‘distance’ to go.

Unfortunately, as you decrease voltage and increase switching speed, you run up against fundamental constraints on noise. There’s a low level of electrical ‘buzz’ going on all around you, including from the other components in your computer. Get too close to that and you can’t tell the difference between ‘charge’ and ‘no charge’.

This makes simply increasing clock speed exponentially more difficult. It becomes far more cost effective to come up with ways to do more at lower clock speeds.
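To make that concrete, here is a toy sketch (the voltage and noise figures are invented for illustration, not taken from any real process) of how a fixed noise floor leaves less and less room between the two states as the supply voltage drops:

```python
# Hypothetical illustration: a fixed amount of electrical noise eats a larger
# fraction of the logic swing as the supply voltage is reduced.
NOISE_V = 0.15  # assumed noise amplitude in volts

for supply_v in (5.0, 3.3, 1.8, 1.0, 0.7):
    # Treat anything above half the supply as 'charge' and below as 'no charge';
    # the usable margin is what's left after subtracting the noise.
    margin = supply_v / 2 - NOISE_V
    print(f"supply {supply_v:.1f} V -> margin between states {margin:.2f} V")
```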

Anonymous 0 Comments

The reason I learnt in my computer science classes is that signals need to travel through these components during each clock cycle. At 4 GHz, a clock cycle lasts 0.25 nanoseconds. Since a signal cannot travel faster than light (3 × 10^(8) m/s), the maximum distance it can cover during such a short clock cycle is 0.25 × 10^(-9) × 3 × 10^(8) = **7.5 cm**

Which means your component has to be small enough that the electrical signals never have to cover more distance than that within a clock cycle.

Hence why we don’t make CPUs/GPUs with a frequency ten times higher: it would involve making CPUs and GPUs about ten times smaller (in linear size) than they are now.

That reasoning is mostly about orders of magnitude. Maybe in practice these signals don’t travel at the speed of light, but at 60% or 80% of it; I have no idea. Also, maybe they need to travel two or three times across the CPU within a cycle; I don’t know either. But the general idea is that we can’t keep making CPUs and GPUs smaller and smaller, and thus we can’t keep increasing the frequency way past a few GHz, even though CPU frequencies increased by a factor of 1000 between the 1970s and today.
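A quick back-of-the-envelope script for the same order-of-magnitude argument (the 60% signal-speed figure is just an assumed example, as in the paragraph above):

```python
C = 3e8  # speed of light in m/s

def max_distance_cm(freq_hz, speed_fraction=1.0):
    # Farthest a signal moving at the given fraction of c can travel in one clock cycle.
    cycle_s = 1.0 / freq_hz
    return cycle_s * C * speed_fraction * 100  # metres -> centimetres

for ghz in (1, 4, 40):
    print(f"{ghz} GHz: {max_distance_cm(ghz * 1e9):.2f} cm at c, "
          f"{max_distance_cm(ghz * 1e9, 0.6):.2f} cm at 60% of c")
```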

Also there’s the heat issue. You can’t let your computer generate more heat than you can evacuate, or else you have to throttle your GPU/CPU temporarily to prevent it from overheating. Heat generation grows roughly with the square of the frequency. Hence, two CPUs at 1 GHz produce half the heat of one CPU at 2 GHz, although the two options are equivalent in terms of raw computational power. That’s why we multiply CPU cores instead of building architectures around a single really fast core.
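A tiny sketch of that comparison, taking the heat-grows-with-frequency-squared rule from the paragraph above at face value (real chips also depend on voltage and workload):

```python
def relative_heat(freq_ghz, cores=1):
    # Assumed rule of thumb from the answer above: heat per core ~ frequency squared.
    return cores * freq_ghz ** 2

print(relative_heat(2.0, cores=1))  # one core at 2 GHz  -> 4.0 units of heat
print(relative_heat(1.0, cores=2))  # two cores at 1 GHz -> 2.0 units, same total GHz
```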

The challenge, however, is taking maximum advantage of these parallelized architectures (i.e. architectures with several CPU cores instead of one single fast core), which lies in the developers’ hands. If a program has to run sequentially (where every calculation depends on the result of the previous one), it can’t take advantage of more than one core at a time. Minecraft is an example of that. Conversely, some tasks involve calculations that require no particular order, so those calculations can be shared between several cores. A good example is image rendering: each pixel can be calculated independently of the other pixels. If you share the calculations among several devices, each one can just do its own work without having to coordinate with the others. This is partly why GPUs exist in the first place, by the way; they are highly parallelized architectures suited for image rendering.
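A minimal sketch of that pixel-level independence in Python (the “rendering” here is just a stand-in brightness formula; multiprocessing.Pool is one of several ways to spread independent work across cores):

```python
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def render_pixel(index):
    # Stand-in for a real shading computation; each pixel depends only on its own coordinates.
    x, y = index % WIDTH, index // WIDTH
    return (x * y) % 256

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per available core by default
        pixels = pool.map(render_pixel, range(WIDTH * HEIGHT))
    print(len(pixels), "pixels rendered independently")
```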

Anonymous 0 Comments

Heat

Transistors dump a tiny bit of energy each time they switch, and the amount of energy depends on the transistor (its gate capacitance) and the square of the voltage you’re running it at. If you want a given transistor to switch faster, you need to increase the voltage.

A 10% increase in clock speed might require a 5% increase in voltage, which results in a 21% increase in heat generation, and the scaling gets worse the faster you try to go.
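Plugging those numbers into the usual dynamic-power relation (power ~ capacitance × voltage² × frequency), with the 5%-voltage-for-10%-clock trade-off assumed rather than measured:

```python
def relative_power(freq_scale, voltage_scale):
    # Dynamic switching power scales with frequency and the square of voltage.
    return freq_scale * voltage_scale ** 2

increase = relative_power(1.10, 1.05) - 1.0
print(f"+10% clock with +5% voltage -> about {increase:.0%} more heat")  # ~21%
```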

You can also make the transistor switch faster by making it smaller, but that also lets you pack more in, so your heat generation per unit area stays about the same.

We’ve instead switched to lots of medium sized cores running at moderate speeds which keeps temperatures down while still providing good performance on heavy workloads. Things that aren’t designed to run on multiple cores do suffer a bit, but there isn’t a ton you can do to improve that these days.

Anonymous 0 Comments

Since this is ELI5 and not a technology sub I’m gonna go real real simple.

If you wanna go faster you need more power. More power means more heat. Power and heat are bad. To mitigate this we can go smaller. But we are reaching the limits of how small we can go. Electrons (information) are quite literally jumping the gaps between the information pipelines. This leads to unacceptable levels of errors that the extra speed doesn’t make up for.

So, ELI5: we parallel-process as much as we possibly can now because we are hitting the limits of speed, and right now 4 GHz is the consumer-level sweet spot.

Anonymous 0 Comments

CPUs run in clock cycles per second (Hz), as you know. While more clock cycles per second is faster than fewer, this is only true if you have the exact same hardware. If you have a chip with 1000 transistors and one with 2, the one with 2 will technically be able to complete more cycles per second, but that doesn’t mean it does more computation.

TL;DR: Hz isn’t the only measure of CPU performance.
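As a toy illustration (the figures are invented, not from any real chips), the same point in numbers: throughput depends on both the clock and how much each cycle accomplishes:

```python
def ops_per_second(clock_ghz, work_per_cycle):
    # Rough throughput: clock rate times how much work each cycle gets done.
    return clock_ghz * 1e9 * work_per_cycle

# Hypothetical chips: a simple fast one vs. a wider, slower one.
print(ops_per_second(5.0, 1))  # 5.0e9 ops/s
print(ops_per_second(3.5, 4))  # 1.4e10 ops/s: lower clock, more computation
```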

Anonymous 0 Comments

In addition to what everyone else said, there are some fundamental physical limitations at play.

When you turn off the voltage in an electrical wire, there’s still some remnant current flowing through it (imagine being in a car: when you hit the brakes, it keeps moving for a while). This current lasts for a couple hundred ps (picoseconds), so the maximum theoretical frequency of conventional electronics is around 7.5 GHz. So even if we could create small enough CPUs/GPUs, we simply can’t move the signal fast enough through the wires.
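For rough scale (just inverting the frequency; the exact relation between settling time and usable clock rate is more involved than this):

```python
def cycle_period_ps(freq_ghz):
    # Duration of one clock cycle in picoseconds.
    return 1e12 / (freq_ghz * 1e9)

for ghz in (4, 7.5, 10):
    print(f"{ghz} GHz -> {cycle_period_ps(ghz):.0f} ps per cycle")
```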

There are a few areas of modern physics that are trying to overcome the issue. Instead of using electricity, these methods use light to move information. The two most promising options now are plasmonics (you shine light onto a metal and the light travels along the metal’s surface) and magnonics (you shine light on a magnet, you locally change its polarity and this “polarity wave” travels through the material).

ELI5 analogy: electrical wires are like a garden hose. When you turn off the water, there’s still some water dripping from the hose. Plasmonics and magnonics are like throwing water balloons.

Anonymous 0 Comments

Simple: diminishing returns relative to power consumption.

Multiple cores at a lower frequency are more efficient than a single core at a higher frequency. Eventually processors reached a sweet spot where frequency and power dissipation were at a point where very little additional performance was being gained.

Anonymous 0 Comments

Don’t forget the actual electric signal speed: electricity doesn’t travel that far in a 4 GHz cycle, and pushing it faster means you have to deal with clock skew.