Why are the GHz speeds in computers the same after such a long period of time?


In: Engineering

26 Answers

Anonymous 0 Comments

Simply put, the gazillions of parts inside the thinking chip are now so small that it’s not possible to build them much smaller. We’ve reached the limits of the laws of physics: they are down to billionths of a meter, so other methods of making computers work faster are now being used.

Anonymous 0 Comments

because whenever someone coins a rule of thumb like Moore’s Law, physics eventually stops cooperating and ruins it for everyone.

Anonymous 0 Comments

Ask yourself: when did GHz speeds start to taper off? Around 3-4 GHz, or three to four billion oscillations (clock cycles, or ‘state flips’) per second.

The local heat generated by the chips was becoming a problem: earlier CPUs could ‘fry’ themselves if the cooling was not mounted properly. I remember these problems around 1999-2005, along with the ‘bad caps’ era, when capacitors on the motherboard would randomly pop and fry.

So manufacturers replicated the single CPU first into multiple pipelines and then into multiple cores: fully parallel processing units on a single chip. But at the moment the software (the programs) lags behind, not always making optimal use of those parallel features.

For a long time (from the 1970s to the early 2000s) there was Moore’s Law, which predicted that the number of transistors on a chip would double roughly every two years, with speeds rising accordingly. But as features shrank toward the single-digit-nanometre scale, all kinds of ‘cross effects’ started appearing, with nanometre-scale silicon ‘wires’ leaking current and interfering with their neighbours. That is why the miniaturisation could not simply continue, and designers started spreading slower cores over a slightly larger area, which also helps with cooling.
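To put rough numbers on that doubling, here’s a tiny Python sketch. The starting point (the Intel 4004’s roughly 2,300 transistors in 1971) is a commonly cited figure, and the clean two-year doubling is an idealisation, so treat this as illustration only:

```python
# Idealized Moore's Law: transistor counts doubling every two years.
# Starting figure (~2,300 transistors, Intel 4004, 1971) is the
# commonly cited number, used here purely for illustration.
transistors = 2_300
for year in range(1971, 2023, 2):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2
```

Run it and the count reaches tens of billions by around 2020, which is roughly where the biggest real chips ended up.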

But separate from Intel’s x86 line, ARM was always a much more power-efficient architecture, based on RISC (Reduced Instruction Set Computing), so it became the chip of choice in mobile phones and embedded devices.

Anonymous 0 Comments

In the past, when transistors got smaller, we could increase frequency and decrease voltage proportionally and have higher frequency at the same power (this is called Dennard scaling). However, we can’t lower the voltage any more (if we do, a different power component, leakage power, grows exponentially). If we increase the frequency without decreasing voltage, the power increases also, which means eventually the chip starts to melt. So we need to improve performance in other ways (e.g., by using those smaller transistors to make more cores running at the same speed). Look into ‘dark silicon’ for more info.
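As a rough illustration of that trade-off, here’s a minimal Python sketch of the textbook dynamic-power relation P = C·V²·f and an idealised Dennard shrink; the scaling factors are the classic textbook values, not measurements of any real process:

```python
# Dynamic (switching) power of a CMOS circuit: P = C * V^2 * f.
def dynamic_power(c, v, f):
    return c * v * v * f

k = 1.4  # one idealized process shrink (~0.7x linear dimensions)

p_old = dynamic_power(c=1.0, v=1.0, f=1.0)        # normalized baseline
p_new = dynamic_power(c=1.0 / k, v=1.0 / k, f=k)  # ideal Dennard scaling:
                                                  # C and V fall by 1/k,
                                                  # f rises by k

print(p_new / p_old)  # ~1/k^2 = 0.51: per-transistor power halves,
                      # so k^2 more transistors fit in the same budget
```

Once voltage stops scaling (v stays at 1.0), raising f raises power linearly with no offsetting saving, which is exactly the wall described above.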

Anonymous 0 Comments

We’ve started to hit physical limits on how fast we can make processors go: the speed of electricity, the size of transistors, and so on. To compensate, we can design better processors that handle more specialised instructions. Consider something like the Nintendo Switch, or your mobile phone. The CPU might not be faster than a PC from 10 years ago, but it can handle a whole bunch of media that your old PC would have needed a powerful graphics card and sound card for. All those functions can now be built into a single CPU*, saving on space.

*I’ve avoided talking about cores, because I don’t think it’s all that relevant. The point is, a single chip can do more.

Anonymous 0 Comments

As clock speeds increase, so does heat generation. We’ve found that, for the same effective processing power, it’s better to add more individual cores at a moderate speed than to push a single core ever faster. There are diminishing returns as more cores are added, though, due to the increased overhead of combining the data calculated by each individual core.
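Those diminishing returns can be put into numbers with Amdahl’s Law: if a fraction p of a program can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A minimal Python sketch, using a made-up 90% parallel fraction:

```python
# Amdahl's Law: speedup of a program with parallel fraction p on n cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16, 64):
    print(f"{n:3d} cores -> {amdahl_speedup(0.9, n):.2f}x speedup")
# Even at 90% parallel, 64 cores give only ~8.8x, nowhere near 64x.
```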

Obligatory analogy: you don’t drive on the highway in 1st gear, because your engine would overheat and die. Instead, you use multiple gears to run your engine more efficiently: the engine turns slower, but it’s geared up so the car goes faster.

Anonymous 0 Comments

Idk if somebody mentioned this yet, but there will come a point where the transistor gates inside the CPU are so close together that electrons will quantum-tunnel across the gate whether it’s open or closed. It’s basically like a light switch turning itself on and off randomly, because the wires are so close together that the electricity jumps across the gap anyway. So there are upper limits, defined by the laws of physics, on how tightly we can pack the transistors.

There are downsides to adding cores rather than increasing speeds, but there might not be much of an option. Programmers write sets of instructions, and the CPU executes one, then the next, then the next. To use multiple cores you have to send different sets of instructions to different cores. It’s called threading, and it can be very difficult for a novice programmer to do correctly; it requires skill, knowledge and experience. But done right, it can be more useful to have multiple cores than one very powerful one.
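To see why threading trips people up, here’s a small Python sketch of the classic shared-counter race; the loop counts are arbitrary:

```python
# Two ways to bump a shared counter from several threads. The updates
# look innocent, but "read, add, write" is not atomic, so the unlocked
# version can silently lose increments.
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:          # safe: one thread updates at a time
                counter += 1
        else:
            counter += 1        # racy: increments can be lost

threads = [threading.Thread(target=increment, args=(100_000, True))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # with the lock: always 400000; without it, often less
```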

Anonymous 0 Comments

Imagine you had a maid who comes in and cleans your house every day. Over time it turns out you make more of a mess so you make the maid work faster and faster. But realistically, there’s only so fast she can work. That’s the problem with clock speed. You can make it faster and faster, but there’s only so fast it can reasonably go.

A much better solution is to hire multiple maids and have them work at a reasonable pace. So while one cleans the kitchen, another is cleaning the living room, etc. Overall, the amount of work they can get through is more than one maid working really fast. This is like a CPU with multiple cores.

So basically, instead of struggling to make one CPU that runs at 10 GHz (which is really hard), manufacturers instead make a 4-core CPU where each core runs at, say, 2.5 GHz, for roughly the same overall performance (as long as the work can be split across the cores), and that’s much easier.
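As a rough sketch of the ‘multiple maids’ idea in code, here’s one way to split a CPU-bound job across four worker processes in Python (processes rather than threads, because CPython’s GIL stops threads from running Python bytecode truly in parallel); the workload itself is made up:

```python
# Split one big CPU-bound job across 4 workers, one per core.
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    """A deliberately CPU-bound chunk of work."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [5_000_000] * 4   # four "rooms" for four "maids"
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(busy_sum, chunks))
    print(sum(results))
```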

Anonymous 0 Comments

They went from one core at, say, 3.74 GHz in 2006, using 115 W of power on a 135 mm² die and costing $999, to six cores each running at 4.0 GHz in 2015, using 140 W on an 82 mm² die and costing $617.

That’s a big jump in performance while dropping in die size, and only about 23 W per core versus 115 W.
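For what it’s worth, the per-core arithmetic behind that comparison is just:

```python
# Power per core, using the figures quoted above.
print(115 / 1)  # 2006: 115.0 W for the single core
print(140 / 6)  # 2015: ~23.3 W per core across six cores
```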

Anonymous 0 Comments

I think a lot of these explanations are too technical for this sub.

GHz is only one factor in how fast a computer is. Like how in cars, horsepower is only one of many factors that impacts how fast it is.

Nowadays, it’s easier to make computers faster by making them more efficient, rather than by adding raw clock speed.