Why are the GHz speeds in computers the same after such a long period of time?

2.95K views

In: Engineering

26 Answers

Anonymous 0 Comments

The greater the processor speed, the greater the power required for the processor to run.

The greater the power required, the greater the heat dissipated.

The more heat dissipated, the harder it becomes to make a reasonably sized machine (desktop/laptop) that is usable and does not feel like an oven.

So this sets a practical limit. Computer designers have instead elected to improve performance by increasing the number of processors while keeping each one at a (relatively) modest speed.
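To put rough numbers on that chain of reasoning: dynamic power grows with the clock frequency, and pushing the clock higher usually also needs a higher voltage, which power depends on quadratically. Here is a back-of-the-envelope sketch; the capacitance and voltage figures are made up for illustration, not taken from any real chip.

```python
# Dynamic power is roughly capacitance * voltage^2 * frequency.
# All numbers below are illustrative, not real chip data.

def dynamic_power_watts(capacitance_nf, voltage_v, frequency_ghz):
    """P ~ C * V^2 * f (units chosen so nF * V^2 * GHz comes out in watts)."""
    return capacitance_nf * voltage_v**2 * frequency_ghz

base = dynamic_power_watts(10, 1.0, 3.0)   # ~30 W at 3 GHz and 1.0 V
fast = dynamic_power_watts(10, 1.3, 5.0)   # ~85 W at 5 GHz, assuming it needs ~1.3 V
print(base, fast)
```

A roughly 1.7x clock bump ends up costing nearly 3x the power and heat, which is exactly the 'oven' problem above.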

Anonymous 0 Comments

The GHz ‘speed’ is cycles per second, but that tells us nothing about how much the computer can do with each cycle. Modern computers tend to do more with each cycle, and have multiple cores running at once. So even though they don’t appear to be getting faster, they get a lot more work done in the same amount of time. Imagine a little car vs a fleet of vans; they all drive at the same speed, but the vans deliver a lot more stuff in the same time.

As for *why* the speeds haven’t increased: you can improve performance either by speeding things up or by improving efficiency, and currently it’s easier to do the latter. Making anything go really quickly is hard; at larger scales this is generally self-evident, but it still applies at small scales. At their heart computers rely on moving electrons about; they’re really small so they go really fast, but there’s still a limit.
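If it helps to see that 'work per unit time' idea as arithmetic, here is a deliberately simplified sketch. Treating throughput as cores × clock × instructions-per-cycle (IPC) ignores a lot of real-world messiness, and the numbers are made up, but it shows how a chip can get much faster without the GHz figure moving.

```python
# Simplified model: work per second = cores * clock (GHz) * instructions per cycle.
# Real CPUs are far messier; the numbers here are illustrative only.

def throughput_ginstr_per_sec(cores, ghz, ipc):
    return cores * ghz * ipc

old_chip = throughput_ginstr_per_sec(cores=1, ghz=3.0, ipc=1)   # ~3 billion instructions/s
new_chip = throughput_ginstr_per_sec(cores=8, ghz=3.0, ipc=4)   # ~96 billion instructions/s
print(new_chip / old_chip)   # ~32x the work at the very same "GHz speed"
```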

Anonymous 0 Comments

In 2006 you could get one core at, say, 3.74 GHz that used 115 W of power, measured 135 mm², and cost $999. By 2015 you could get 6 cores, each at 4.0 GHz, using 140 W in total on an 82 mm² die, for $617.

That’s a big jump in performance while shrinking in size and using only about 23 W per core vs 115 W.
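Redoing that arithmetic explicitly, using only the figures quoted above:

```python
# Per-core power and combined clock, from the numbers quoted above.
power_2006_w, cores_2006, clock_2006_ghz = 115, 1, 3.74
power_2015_w, cores_2015, clock_2015_ghz = 140, 6, 4.0

print(power_2006_w / cores_2006)            # 115.0 W per core in 2006
print(round(power_2015_w / cores_2015, 1))  # ~23.3 W per core in 2015
print(cores_2015 * clock_2015_ghz)          # 24.0 GHz of combined clock vs 3.74 GHz
```

(Adding clocks across cores is a crude measure, but it gives a feel for how much more total work the 2015 part can do for similar power.)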

Anonymous 0 Comments

I think a lot of these explanations are too technical for this sub.

GHz is only one factor in how fast a computer is. Like how in cars, horsepower is only one of many factors that affect how fast the car is.

Nowadays, it’s easier to make computers faster by making them more efficient rather than by adding raw power.

Anonymous 0 Comments

Imagine you had a maid who comes in and cleans your house every day. Over time it turns out you make more of a mess so you make the maid work faster and faster. But realistically, there’s only so fast she can work. That’s the problem with clock speed. You can make it faster and faster, but there’s only so fast it can reasonably go.

A much better solution is to hire multiple maids and have them work at a reasonable pace. So while one cleans the kitchen, another is cleaning the living room, etc. Overall, the amount of work they can get through is more than one maid working really fast. This is like a CPU with multiple cores.

So basically, instead of struggling to make 1 CPU that runs at 10 GHz (which is really hard), manufacturers make a 4-core CPU where each core runs at, say, 2.5 GHz, for roughly the same overall throughput, and that’s far easier.
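For the curious, here is a minimal sketch of the 'several maids' idea using Python's standard library. The rooms, the fake workload, and the worker count of 4 are all made up for illustration.

```python
# Four worker processes (the "maids"), each handling one room at a modest pace.
from concurrent.futures import ProcessPoolExecutor

def clean_room(room):
    # Stand-in for real work: a dull, CPU-heavy loop.
    return sum(len(room) for _ in range(1_000_000))

rooms = ["kitchen", "living room", "bedroom", "bathroom"]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(clean_room, rooms))
    print(results)
```

Each process can run on its own core, so all four rooms get 'cleaned' at once instead of one very fast maid doing them in sequence.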

Anonymous 0 Comments

I’ll give you an analogy.

Let’s say you want to clean your kitchen.

Increasing the frequency (GHz) is sort of like you moving around and doing things faster, e.g. walking, picking things up etc. Now, you can get yourself moving pretty fast if, say, you drink a lot of coffee, but you will reach a limit.

Now to get around this limit we can do two things in our kitchen cleaning analogy.

Adding another person to help you clean is like adding another core. As time has moved on, the cores (or, in the analogy, the people) have got better at working together, e.g. not getting in each other’s way or blocking the sink. There is a limit with this too: think of trying to clean your kitchen with 20 people; you wouldn’t be able to manage that in normal circumstances at home.

And the other way to improve performance is to change how you accomplish a task. Back to the kitchen analogy: compare manually sweeping up the dust and crumbs with a brush versus using a vacuum cleaner, or adding in a dishwasher. Lots of the performance gains in processors these days also come from optimizing how they perform common sub-tasks that they run into.
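A rough software-level analogue of the brush-versus-vacuum point (this assumes numpy is installed, and the timings will vary by machine): hand a common sub-task to optimized, vectorized machinery instead of stepping through it one item at a time.

```python
# Summing ten million numbers: element-by-element loop ("brush")
# vs numpy's vectorized sum ("vacuum cleaner"). Requires numpy.
import time
import numpy as np

data = np.arange(10_000_000, dtype=np.float64)

t0 = time.perf_counter()
slow_total = sum(float(x) for x in data)   # one element at a time
t1 = time.perf_counter()
fast_total = data.sum()                    # vectorized, SIMD-friendly
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.2f}s, vectorized: {t2 - t1:.3f}s")
```

Same job, same clock speed, very different time taken, because the second version hands the sub-task to machinery built for it.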

I hope that clears it up a bit.

Anonymous 0 Comments

Imagine you had to blow into a straw. If you blew slowly, it’s pretty easy. If you blew hard you’d get a lot of resistance, but it’s possible. Now try blowing with your full strength. It’s very hard, right? Now try the same with 2 straws. It’s suddenly a lot easier to blow air out of them. What if you had 4 or even 8? It’s similar in computers. It becomes very hard to make the clock tick faster after a certain point (which seems to be about now). But it is fairly easy to add more things that do the ticking, as you have to solve logistical issues rather than technical ones.

So if you were over 5: making a CPU with a clock speed of, say, 8 GHz would require a lot of advanced physics, possibly a better understanding of quantum mechanics and so on (other comments explain this better). The main things you have to figure out when adding more cores are how to remove the heat (it’s a very small surface, and you can only conduct so much heat per square cm) and how to keep them all supplied with things to do. These are not easy things to tackle, but they are easier than increasing the clock speed. Now, this makes the job of a programmer harder, but apparently it doesn’t seem too bad for now.
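To see why heat removal is the sticking point, it helps to put rough numbers on the power density. These are ballpark figures, not a specific chip:

```python
# Ballpark power density of a desktop CPU die (illustrative numbers only).
power_w = 140          # typical high-end desktop CPU package power
die_area_cm2 = 1.5     # roughly 150 mm^2 of silicon

print(round(power_w / die_area_cm2))   # ~93 W per square cm to pull out

# For comparison, an electric stove burner is on the order of 10 W/cm^2,
# which is why CPU coolers are such serious lumps of metal.
```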

Anonymous 0 Comments

Physically, processors can only process so quickly. We’re limited by physics. We can only make things so small before we run into issues, and we can only transmit information so quickly with our current technology.

Imagine you’re an Olympic athlete. Let’s say you do the long jump. Due to the limitations of physics and of the human body, there’s only so far a person can really long jump. It can only be optimized so much before humans reach a ceiling where they just can’t set higher records for the long jump. The Olympic record for the men’s long jump, 8.90 meters, was set in 1968. That’s over 50 years ago!
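One of those physical limits can be put into plain arithmetic: in a single clock cycle, a signal cannot travel further than light does in that time (and electrical signals in silicon are slower still).

```python
# How far can a signal possibly travel in one clock cycle?
SPEED_OF_LIGHT_M_PER_S = 3.0e8

for ghz in (3, 10, 100):
    cycle_seconds = 1 / (ghz * 1e9)
    distance_cm = SPEED_OF_LIGHT_M_PER_S * cycle_seconds * 100
    print(f"{ghz} GHz: at most {distance_cm:.1f} cm per cycle")
# 3 GHz: 10.0 cm, 10 GHz: 3.0 cm, 100 GHz: 0.3 cm
```

At a few GHz a signal can only cover about 10 cm per tick; push the clock much higher and it can barely cross the chip itself.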

Anonymous 0 Comments

Used to be that we’d figure out how to cram twice as many transistors onto chips every year and a half or so; that’s what we call Moore’s law. Used to be, too, that those transistors at half the size also drew half the power, but at one point that stopped: you could still put twice the transistors on the chip, but it would also draw twice the power and heat up twice as much. Heat dissipation became the limiting factor, not the ability to make faster processors in itself.

Anonymous 0 Comments

In the past, when transistors got smaller, we could increase frequency and decrease voltage proportionally and have higher frequency at the same power (this is called Dennard scaling). However, we can’t lower the voltage any more (if we do, a different power component, leakage power, grows exponentially). If we increase the frequency without decreasing voltage, the power increases also, which means eventually the chip starts to melt. So we need to improve performance in other ways (e.g., by using those smaller transistors to make more cores running at the same speed). Look into ‘dark silicon’ for more info.
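Here is a toy version of that scaling argument as arithmetic. The scaling factors are the classic textbook ones (everything shrinks by roughly 0.7x per generation); they are illustrative, not measurements of any real process node.

```python
# Toy model: chip power ~ transistor count * capacitance * voltage^2 * frequency.
def chip_power(transistors, capacitance, voltage, frequency):
    return transistors * capacitance * voltage**2 * frequency

old     = chip_power(1e9, 1.0, 1.0, 1.0)   # baseline, arbitrary units
dennard = chip_power(2e9, 0.7, 0.7, 1.4)   # Dennard era: 2x transistors, 1.4x clock, same power
stuck   = chip_power(2e9, 0.7, 1.0, 1.4)   # voltage can no longer drop: ~2x the power
flat    = chip_power(2e9, 0.7, 1.0, 1.0)   # so clocks stay flat (and cores multiply) instead

print(old, dennard, stuck, flat)   # ~1.0e9, ~0.96e9, ~1.96e9, ~1.4e9
```

The 'stuck' case is the one where the chip starts to melt, which is why the extra transistors now go into more cores (or sit dark) rather than into a faster clock.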