Why are the GHz speeds in computers the same after such a long period of time?


In: Engineering

26 Answers

Anonymous 0 Comments

I’ll give you an analogy.

Let’s say you want to clean your kitchen.

Increasing the frequency (GHz) is sort of like you moving around and doing things faster, e.g. walking, picking things up, etc. Now, you can get yourself moving pretty fast, say if you drink a lot of coffee, but you will eventually reach a limit.

Now to get around this limit we can do two things in our kitchen cleaning analogy.

Adding another person to help you clean is like adding another core. As time has moved on, the cores (the people, in our analogy) have gotten better at working together, e.g. not getting in each other's way or blocking the sink. There is a limit with this too: think of trying to clean your kitchen with 20 people. You wouldn't be able to manage that in normal circumstances at home.

And the other way to improve performance is changing how you accomplish a task. Back to the kitchen analogy: compare manually sweeping up the dust and crumbs with a brush versus using a vacuum cleaner, or adding in a dishwasher. A lot of the performance gains in processors these days also come from optimizing how they perform common subtasks that they run into.

I hope that clears it up a bit.

Anonymous 0 Comments

Imagine you had to blow into a straw. If you blow slowly, it's pretty easy. If you blow hard you get a lot of resistance, but it's possible. Now try blowing with your full strength. It's very hard, right? Now try the same with 2 straws. It's suddenly a lot easier to blow air through them. What if you had 4, or even 8? It's similar in computers. It becomes very hard to make a chip tick-tock faster after a certain point (which seems to be about now), but it is fairly easy to add more things that do the tick-tocking, as you then have to solve logistical issues rather than physical ones.

So, if you were over 5: making a CPU with a clock speed of, say, 8 GHz would require a lot of advanced physics, possibly a better understanding of quantum mechanics and so on (other comments explain this better). The only things you have to figure out when sticking on more cores are how to remove the heat (it's a very small surface, and you can only conduct so much heat per square cm) and how to keep them all supplied with things to do. These are not easy problems to tackle, but they are easier than increasing clock speed. Now, this makes the programmer's job harder, but apparently it doesn't seem too bad for now.

Anonymous 0 Comments

Physically, processors can only process so quickly. We’re limited by physics. We can only make things so small before we run into issues, and we can only transmit information so quickly with our current technology.

Imagine you're an Olympic athlete. Let's say you do the long jump. Due to the limitations of physics and of the human body, there's only so far a person can really long jump. It can only be optimized so much before humans reach a ceiling where they just can't set higher records for the long jump. The Olympic record for the men's long jump, 8.90 meters, was set in 1968. That's over 50 years ago!

Anonymous 0 Comments

Used to be that we'd figure out how to cram twice as many transistors onto a chip every year and a half or so; that's what we call Moore's law. Used to be, too, that those transistors, at half the size, also drew half the power. At some point that stopped: you could still put twice the transistors on the chip, but it would now draw twice the power, and therefore heat up twice as much. Heat dissipation became the limiting factor, not the ability to make faster processors in itself.
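If you like seeing it in numbers, here is a tiny sketch of that shift. All the figures are made up for illustration, not real chip data:

```python
# Start with an illustrative chip: 1 million transistors, 1 power unit each.
transistors, power_per_transistor = 1_000_000, 1.0

# Old scaling era: double the transistors, but each draws half the power,
# so total chip power stays flat.
classic_power = (transistors * 2) * (power_per_transistor / 2)

# After that scaling broke down: double the transistors, same power each,
# so total chip power (and heat) doubles.
modern_power = (transistors * 2) * power_per_transistor

print(classic_power)  # 1000000.0 -> same total power as before
print(modern_power)   # 2000000.0 -> twice the power, twice the heat
```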

Anonymous 0 Comments

A processor core can't truly multitask. What it can do, however, is rapidly switch between different tasks. Pushing the clock speed (GHz) higher has diminishing returns: the effort and cost involved buy less and less extra performance. Instead, it's better to have multiple cores, each one capable of doing its own task, so that collectively the CPU as a whole is multitasking. This means you can play a video game, for instance, with one core dedicated to that and another handling a background task, so they aren't competing to use the same core.
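A minimal sketch of that idea in Python, using the standard library's worker pools (the task names are invented for illustration; for truly parallel CPU-bound work in Python you would use processes rather than threads, but the "one worker per task" picture is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(name):
    # Stand-in for real work: a game frame, a download, a background scan.
    return name + " done"

tasks = ["game", "music", "virus scan", "download"]

# A single core would run these by rapidly switching between them;
# a multi-core CPU can hand each task to its own core instead.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_task, tasks))

print(results)
```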

Anonymous 0 Comments

A lot of this has already been answered, but let me provide a bit of perspective from closer to the silicon level since I’m currently on an internship working with this issue. While clock speeds are important, they are not the only factor in computer performance. Thus, current designs aren’t focused solely on increasing clock speeds.

One of the main issues is simply heat. As we increase the rate at which transistors switch, the power required grows rapidly (roughly with frequency times the square of the supply voltage, and higher frequencies usually demand a higher voltage too), and the chip gets difficult to cool.

A more fundamental issue is that transistors and their associated wiring have capacitance, or the ability to store electrical charge. This effectively slows down the rate at which you can change a signal, because the charge stored in these reservoirs counteracts any change you make until it is drained. This makes a nice sharp clock edge flatten out and slows down rise times.

Lastly, it is difficult to design good interconnects. Even if we have a really high clock speed, it's not easy to design wires that can carry information at that speed. All wires have some capacitance and inductance, where energy is temporarily stored in electric and magnetic fields instead of being sent down the wire. Worse still, the amount of energy stored is frequency-dependent. This means that at higher clock speeds/frequencies a lot more energy is "lost" before getting to the end, so the signal that arrives is a lot weaker. On top of that, at higher frequencies signals on one wire start leaking into nearby wires (crosstalk), something you obviously want to avoid.
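To put the heat problem in rough numbers: dynamic switching power in CMOS logic scales roughly as P ≈ C·V²·f, and pushing frequency up usually also means raising the voltage. A sketch with made-up values (the capacitance and voltages below are illustrative, not real chip figures):

```python
def dynamic_power(capacitance, voltage, frequency):
    # Rough CMOS switching-power model: P ~ C * V^2 * f.
    return capacitance * voltage**2 * frequency

C = 1e-9  # effective switched capacitance in farads (illustrative)

base = dynamic_power(C, 1.0, 3e9)  # 3 GHz chip running at 1.0 V
fast = dynamic_power(C, 1.2, 5e9)  # 5 GHz chip, but needing 1.2 V

# ~67% more clock speed costs ~140% more power to dissipate as heat.
print(base, fast, fast / base)
```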

Anonymous 0 Comments

Let's pretend your family is moving across town and you have 5000 boxes of toys. The moving truck can only fit 50 boxes at a time, which means your dad will have to make 100 round trips. The fastest your dad can drive on the freeway is 65mph. Sure, he could drive at 100mph to make better time but damn it, he loves you too much to risk jail time. So instead of pushing himself to drive faster on the freeway, endangering himself and others, he decides it's a better idea to have your mom drive a second moving truck, also packed with 50 boxes of your awesome toys. Now the two of them only have to make 50 round trips each, halving the time it would've taken, all without the risk of a speeding ticket or going to prison! Now imagine how much faster the move would be if your parents also recruited your uncle Bob and aunt Sally to drive a third and a fourth moving truck. They would be able to move all of your toys 4 times faster than if your dad had to move everything by himself! To match this speed by himself, your dad would have to drive at 260mph, and the U-Haul down the street isn't renting out Koenigseggs yet.
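The arithmetic behind the analogy, using the same numbers:

```python
boxes, per_trip = 5000, 50
trips_total = boxes // per_trip  # 100 round trips' worth of boxes

for trucks in (1, 2, 4):
    trips_each = trips_total // trucks
    # To match n trucks with one truck, dad would need n times the speed.
    equivalent_speed = trucks * 65  # mph
    print(trucks, "trucks:", trips_each, "trips each, like one truck at",
          equivalent_speed, "mph")
```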

Anonymous 0 Comments

I haven't seen it here yet for some reason, but one of the biggest reasons is heat from power consumption. Processors become less power-efficient as clock speed rises, and past a certain point they simply get unsustainably hot.

For decades, if people’s computer programs were too slow, they would wait a year for processor speeds to increase in order to get a “free lunch”.

Anonymous 0 Comments

The greater the processor speed, the greater the power required for the processor to run.

The greater the power required, the greater the heat dissipated.

The more heat dissipated, the harder it becomes to make a reasonably sized machine (desktop/laptop) that is usable and does not feel like an oven.

So this sets a practical limitation. Computer designers have instead elected to improve performance by increasing the number of processors while keeping their speed (relatively) low.

Anonymous 0 Comments

The GHz 'speed' is cycles per second, but that tells us nothing about how much the computer can do with each cycle. Modern computers tend to do more with each cycle, and have multiple cores running at once. So even though they don't appear to be getting faster, they get a lot more work done in the same amount of time. Imagine a little car versus a fleet of vans: they all drive at the same speed, but the vans deliver a lot more stuff in the same time.

As for *why* the speeds haven't increased, you can improve performance either by speeding it up, or by improving efficiency. Currently it's easier to do the latter. Making anything go really quickly is hard; at larger scales this is generally self-evident, but it still applies at smaller scales. At their heart, computers rely on moving electrons about; they're really small so they go really fast, but there's still a limit.
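A rough way to see why "same GHz, much faster computer" holds: total throughput is roughly cores × instructions-per-cycle × clock speed. The numbers below are illustrative, not real CPU specs:

```python
def throughput(cores, ipc, ghz):
    # Billions of instructions per second, roughly: cores * IPC * clock.
    return cores * ipc * ghz

old = throughput(cores=1, ipc=1, ghz=3.0)  # older single-core chip at 3 GHz
new = throughput(cores=8, ipc=4, ghz=3.0)  # modern 8-core chip, same 3 GHz

print(old, new, new / old)  # same clock speed, ~32x the work per second
```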