Why are the GHz speeds in computers the same after such a long period of time?

In: Engineering

26 Answers

Anonymous 0 Comments

Simply put, we can’t make them any smaller. For the longest time, and for most of the speed-up, computers got faster because we could make their most basic part (the transistor) smaller. The smaller the parts, the smaller the distance electricity had to travel to make things work.

But now we’ve gotten to the point where a transistor’s features are only a few nanometers, a few dozen atoms, across, and any smaller and it just won’t work.

Anonymous 0 Comments

Let’s say you own a deli making sandwiches. When you get a large order, there are two ways to finish the job quicker. One is to make each sandwich faster (clock speed, measured in GHz). The other is to hire more people making sandwiches at the same time (more cores). With current technology, it’s simply cheaper to hire more people than to make each person work faster.
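To make the analogy concrete, here’s a minimal sketch in Python (the `make_sandwich` workload and the worker count are my own toy assumptions, not anything from the answer above): the same batch of “orders” run on one worker, then spread across four.

```python
import time
from multiprocessing import Pool

def make_sandwich(order):
    """Toy stand-in for one unit of CPU-bound work."""
    total = 0
    for i in range(500_000):
        total += i * order
    return total

if __name__ == "__main__":
    orders = list(range(16))

    # One worker: every order goes through a single sandwich maker.
    start = time.perf_counter()
    serial = [make_sandwich(o) for o in orders]
    print(f"1 worker:  {time.perf_counter() - start:.2f}s")

    # Four workers: same orders, spread across four cores.
    start = time.perf_counter()
    with Pool(processes=4) as pool:
        parallel = pool.map(make_sandwich, orders)
    print(f"4 workers: {time.perf_counter() - start:.2f}s")

    assert serial == parallel  # same sandwiches, less wall-clock time
```

On a machine with at least four idle cores, the second run should finish in roughly a quarter of the time, which is the whole argument for “hire more people” in silicon form.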

Anonymous 0 Comments

For something to oscillate past a few GHz, the time between two clock edges has to be extremely short. That time is now *not long enough* for a signal to cross the entire length of the silicon chip. This means that one signal can “overtake” the one in front of it, which causes absolute confusion and merry hell, because we design chips to be “synchronous” (i.e. things happen at the same time everywhere).

Mainstream chips aren’t asynchronous (we can have multiple chips that aren’t in sync with each other, but a single chip tends to have one concept of “a clock signal” that ticks on and off regularly and makes everything else happen).

Past about 5 GHz, the length of the pulses needed for that clock signal means that, even at the speed of light, a pulse can’t make it across the physical length of the silicon chip before the next one starts its journey.
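As a sanity check on those numbers, here’s the back-of-the-envelope arithmetic (the figure of roughly half the speed of light for on-chip signals is a rough assumption of mine, not something from the answer):

```python
# Rough numbers: clock period at 5 GHz vs. distance a signal can cover.
FREQ_HZ = 5e9           # 5 GHz clock
C_M_PER_S = 3e8         # speed of light in vacuum
SIGNAL_FRACTION = 0.5   # assume on-chip signals travel at ~50% of c

period_s = 1 / FREQ_HZ  # one clock cycle: 0.2 ns
distance_m = C_M_PER_S * SIGNAL_FRACTION * period_s

print(f"Clock period:       {period_s * 1e9:.2f} ns")
print(f"Distance per cycle: {distance_m * 100:.1f} cm")
# ~3 cm in this idealized case; real wires add resistance, capacitance
# and gate delays, so the usable distance per cycle is far shorter, and
# a large die (~2 cm across) plus routing detours eats the whole budget.
```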

Making the chips asynchronous, shorter, or quicker actually makes things incredibly complex and liable to all kinds of problems if a bug is found later on. Not to mention that the higher the clock speed, the more heat given off (because the power required for more oscillations is greater), which means more cooling, more heat problems, and more interference.
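The heat claim follows from the standard dynamic-power relation for CMOS logic, P ≈ C · V² · f. A quick worked example with illustrative, made-up component values:

```python
# Dynamic switching power of CMOS logic scales as P = C * V^2 * f.
# Illustrative (made-up but plausible) values:
C_FARADS = 1e-9   # effective switched capacitance, 1 nF
V_VOLTS = 1.2     # core voltage

for freq_ghz in (3, 5):
    power_w = C_FARADS * V_VOLTS**2 * freq_ghz * 1e9
    print(f"{freq_ghz} GHz -> {power_w:.1f} W of switching power")

# Power grows linearly with frequency here, and in practice faster,
# because higher clocks usually also demand a higher voltage, and
# voltage enters as V^2.
```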

Pretty much, we’ve hit a physical boundary that you can only compensate for by making chips tinier and tinier (which has its own problems, not least in manufacturing), colder and colder (supercomputers are sometimes liquid-cooled, even immersion-cooled), or more and more complex to design, produce, run, program and diagnose.

Anonymous 0 Comments

We’re hitting the upper speed limit for processors because of hard limits like the speed of light, and we’re having trouble making things smaller because at that scale quantum mechanics starts doing unexpected or unwanted things, such as electrons tunneling straight through barriers that are supposed to stop them.

We are overdue for a discovery that will revolutionize computer processing yet again.

So for the past decade the focus hasn’t been to increase speed, but to increase efficiency. Processors are being made with increasing numbers of cores so they can do more at once, and bus speeds are increasing so the processor can talk to devices and RAM more quickly. All of these things translate to improved performance.

Anonymous 0 Comments

Increasing clock speed (measured in Hz) has become increasingly difficult thanks to things like substrate bleed (which is a whole other conversation), so the push instead has been to simply add more processor cores. As software development has matured and proper utilization of multiple cores has become commonplace, core count has steadily outpaced raw clock speed in value.

Clock speed used to be king because there was only a single “pipeline” at work in the processor. Stuff went in at one end, did what it had to do, and came out the other end. The faster you could get through the pipeline, the better. Modern processor architecture has added more and more pipelines running side by side. By spreading incoming work across multiple pipelines, the chip keeps everything flowing more smoothly than it could by stuffing it all through one.

Additional factors include the decreasing cost and size of what’s called *cache memory*. This is memory that sits on the processor itself and holds the data the processor is actively using; it’s far, far faster to access than system RAM. Between larger caches and more effective use of multiple processor cores, the importance of raw clock speed has dropped off sharply over the past 10 years.
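Here’s a small sketch of why cache locality matters (the array size is an arbitrary choice of mine, and the effect is much smaller in Python than in C, but the principle holds: sequential access is cache-friendly, random access is not):

```python
import random
import time

N = 2_000_000
data = list(range(N))

seq_idx = list(range(N))   # cache-friendly: neighbours in memory
rnd_idx = seq_idx.copy()
random.shuffle(rnd_idx)    # cache-hostile: jumps all over RAM

def timed_sum(indices):
    start = time.perf_counter()
    total = sum(data[i] for i in indices)
    return total, time.perf_counter() - start

total_seq, t_seq = timed_sum(seq_idx)
total_rnd, t_rnd = timed_sum(rnd_idx)
assert total_seq == total_rnd  # same work, different access order

print(f"sequential: {t_seq:.3f}s   random: {t_rnd:.3f}s")
```

The random-order pass is typically noticeably slower even though it does exactly the same additions, purely because the processor keeps missing its cache and waiting on main memory.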

Anonymous 0 Comments

We’re reaching size limitations. Our computers are so fast that the speed of electricity (a decent fraction of the speed of light) is itself hindering them. Thus we need smaller systems, but smaller systems run into trouble containing electrical flow, because quantum effects let current leak through barriers that should block it.