Why are the GHz speeds in computers the same after such a long period of time?

2.98K views

In: Engineering

26 Answers

Anonymous 0 Comments

As clock speeds increase, so does heat generation. We’ve found that it’s better to add more individual cores at the same speed than to make each core faster, for the same effective processing power. There are diminishing returns as more cores are added, though, due to the increased overhead of combining the results calculated by each individual core.

Obligatory analogy: you don’t drive on the highway in 1st gear because your engine would overheat and die. Instead, you use multiple gears to run your engine more efficiently, letting the engine run slower but gearing it up to make the car go faster.
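The diminishing returns from adding cores can be roughly quantified with Amdahl’s law. A minimal sketch (the 10% serial fraction is an illustrative number, not a measurement):

```python
# Amdahl's law: if a fraction s of a program is inherently serial
# (coordination, combining results), then n cores can speed it up
# by at most 1 / (s + (1 - s) / n).
def amdahl_speedup(n_cores: int, serial_fraction: float) -> float:
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_cores)

# With just 10% serial overhead, each doubling of cores pays off
# less than the last, and the speedup can never exceed 1/s = 10x:
for n in (1, 2, 4, 8, 64):
    print(f"{n:2d} cores -> {amdahl_speedup(n, 0.10):.2f}x")
```

Even with 64 cores, the 10%-serial program above runs less than 9x faster than on one core.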

Anonymous 0 Comments

We’ve started to hit physical limits on how fast we can make processors go: the speed at which signals propagate, the size of transistors, etc. To compensate, we can design better processors that handle more specialized instructions. Consider something like the Nintendo Switch, or your mobile phone. The CPU might not be faster than a PC from 10 years ago, but it can handle a whole bunch of media that your old PC might have needed a dedicated graphics card and sound card for. All those functions can now be built into a single CPU*, saving on space.

*I’ve avoided talking about cores, because I don’t think it’s all that relevant. The point is, a single chip can do more.

Anonymous 0 Comments

In the past, when transistors got smaller, we could increase frequency and decrease voltage proportionally and have higher frequency at the same power (this is called Dennard scaling). However, we can’t lower the voltage any more (if we do, a different power component, leakage power, grows exponentially). If we increase the frequency without decreasing voltage, the power increases also, which means eventually the chip starts to melt. So we need to improve performance in other ways (e.g., by using those smaller transistors to make more cores running at the same speed). Look into ‘dark silicon’ for more info.
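The argument above can be put into numbers with the standard dynamic-power relation, P ≈ C·V²·f. A sketch with illustrative constants (not real chip values):

```python
# Dynamic (switching) power of a chip: P ~ C * V^2 * f,
# where C is switched capacitance, V is supply voltage,
# and f is clock frequency.
def dynamic_power(c: float, v: float, f_ghz: float) -> float:
    return c * v**2 * f_ghz

base = dynamic_power(1.0, 1.2, 3.0)     # a 3 GHz core at 1.2 V (made-up numbers)
faster = dynamic_power(1.0, 1.2, 6.0)   # double f at the same V: power doubles

# Under Dennard scaling, each shrink also lowered V, so doubling f
# while scaling V down by sqrt(2) kept power constant:
dennard = dynamic_power(1.0, 1.2 / 2**0.5, 6.0)  # same power as `base`
```

Once V can no longer drop (because leakage power explodes), that last line stops being available, and higher frequency means strictly higher power and heat.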

Anonymous 0 Comments

Ask yourself: when did GHz speeds start to taper off? Around 3 to 4 GHz, or three to four billion oscillations (cycles, or ‘state flips’) per second.

The local heat generated by the chips was becoming a problem; earlier CPUs could ‘fry’ themselves if the cooler was not mounted properly. I remember these problems around 1999-2005, along with the ‘bad caps’ era, when capacitors on the motherboard would randomly pop and fail.

So designers first duplicated parts of the CPU into multiple pipelines and then into multiple cores: fully parallel processing units on a single chip. But at the moment the software (the programs) lags behind, not always making optimal use of the parallel features.
For a long time (from the 1970s to the early 2000s) there was Moore’s Law, which predicted that the number of transistors on a chip would double roughly every two years. But as features shrank toward the single-digit-nanometer scale, all kinds of ‘cross effects’ started appearing, with nanometer-scale silicon ‘wires’ leaking current into and interfering with their neighbours. That is why the miniaturisation could not simply continue, and designers started spreading slower cores over a slightly larger area, which also helps with cooling.
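The point about software lagging behind is concrete: to benefit from multiple cores, a program has to be restructured to split its work into independent pieces. A minimal sketch in Python (function names are my own; a thread pool is used here for simplicity, though CPU-bound Python code would typically use processes):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n: int, n_workers: int) -> int:
    # The work must be explicitly split into independent chunks,
    # one per worker...
    step = n // n_workers
    chunks = [(k * step, (k + 1) * step if k < n_workers - 1 else n)
              for k in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # ...and the per-chunk results combined at the end. That
        # combining step is serial overhead extra cores can't help with.
        return sum(pool.map(partial_sum_squares, chunks))

assert parallel_sum_squares(1000, 4) == sum(i * i for i in range(1000))
```

Code that wasn’t written this way simply runs on one core, no matter how many the chip has.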

But separate from Intel, ARM was always a much more power-efficient architecture, based on RISC (Reduced Instruction Set Computing), so it became the chip design of choice in mobile phones and embedded devices.

Anonymous 0 Comments

because whenever some dipshit comes up with a made-up law or rule like Moore did, quantum physics changes the game and ruins it for everyone.

Anonymous 0 Comments

simply, the gazillions of parts inside the thinking chip are now so small that it’s not possible to build them any littler. we’ve reached the limits of the laws of physics. they are down to billionths of a meter, and so other methods of making computers work faster are now being used.