We’ve reached the limits of silicon. [New materials like carbon structures may be the way forward](https://medium.com/@laserboy/what-comes-after-silicon-a812847932a8). Quantum computing is still not fully beyond the laboratory, [but silicon may actually be an answer to pushing quantum computing out of those labs](https://www.forbes.com/sites/kevinanderton/2020/04/20/the-largest-roadblock-in-quantum-computing-has-been-passed-infographic/?sh=74236255a08f).
A lot of great answers here on how timing and thermals limit CPU clock speeds. One of the main ways we have overcome these is by making the electronics smaller (smaller transistors). I’d like to add an interesting phenomenon called quantum tunneling, which is one of the things that limits how small we can make electronics.
The simplified ELI5: imagine 2 wires carrying 2 signals (streams of electrons) right next to each other. In quantum mechanics there is a very small probability that an electron on one wire will spontaneously appear on the second wire. In almost every scenario this probability is so tiny (basically zero) that we can just ignore this effect. As you make the wires smaller and closer together down to the quantum level, the effect gets worse.
On a computer chip you have transistors that turn on and off. A transistor is basically a switch between 2 wires. When the transistor is “on” it connects both wires together and electrons can travel across, and when it is “off” it keeps the wires separate and blocks electrons from crossing. This is the basic building block of every computer. As transistors have gotten smaller and smaller, we’ve reached the point where even if the transistor is “off” we still have electrons jumping across the wires because of quantum tunneling.
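To put a rough number on that jumping, here’s a minimal Python sketch. It assumes a simple 1 eV rectangular barrier and the standard exponential (WKB-style) transmission estimate; real transistor barriers are messier, so treat the figures as order-of-magnitude only.

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E  = 9.109e-31    # electron mass, kg
EV   = 1.602e-19    # one electron-volt in joules

def tunneling_probability(barrier_ev: float, width_nm: float) -> float:
    """Transmission through a rectangular barrier: T ~ exp(-2*kappa*L)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Assumed 1 eV barrier, roughly the scale of an "off" transistor's barrier:
for width_nm in (5.0, 2.0, 1.0):
    print(f"{width_nm:.0f} nm barrier: T ~ {tunneling_probability(1.0, width_nm):.1e}")
# 5 nm barrier: T ~ 5.6e-23   (effectively zero)
# 2 nm barrier: T ~ 1.3e-09
# 1 nm barrier: T ~ 3.6e-05   (times billions of electrons = real leakage)
```

The point is the exponential: halving the barrier width doesn’t double the leakage, it multiplies it by many orders of magnitude.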
Companies have come up with innovative solutions to work around this, making the transistors into different shapes or using different materials to mitigate the effects of quantum tunneling. But now that we are at 5 nm process nodes, it is still one of the major factors preventing us from easily going smaller.
The limiting factor for speed is simply the maximum frequency response of silicon. Most of the responses you are getting are more about the bottlenecks to getting more throughput, which is a slightly different question than raw maximum speed.
It’s been 20 years since I took semiconductor physics in college, but I still remember that much, and nothing has changed there. If you want a CPU with a 10 GHz clock, you’re going to have to use something other than silicon.
Radio circuitry that operates at higher frequencies uses transistors made from different semiconductor materials with a higher maximum frequency response; gallium arsenide is one option. Silicon is still used for CPUs because we’ve gotten very good at making it with very few impurities, which allows us to make smaller transistors and pack them in.
In the simplest terms… When the paths the electrons follow are too close together, they have a tendency to “jump” to another path. Techniques to prevent jumping can help, but at some point you just can’t control the jumping and you start to lose efficiency. When you see things like a 10 nm or 5 nm process, this is roughly what it refers to: the distance between the paths. More paths = more processing and higher clock speed, but that has to be balanced against the distance those electrons are moving, because we base our clocking on state changes in a given amount of time, and when it takes longer to get from point A to point B, your calculation is slower.
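As a back-of-envelope illustration of that A-to-B limit, here’s a tiny sketch; the 2 cm die size and the half-light-speed signal propagation are assumed round numbers, not measurements of any real chip.

```python
# If a signal must cross the die within one clock cycle, the die size
# caps the clock. Both numbers below are illustrative assumptions.
C = 3.0e8                 # speed of light, m/s
signal_speed = 0.5 * C    # rough propagation speed in on-chip wiring
die_size = 0.02           # meters (~2 cm die)

print(f"ceiling: {signal_speed / die_size / 1e9:.1f} GHz")  # ~7.5 GHz
```

That’s one reason designers keep circuits that talk to each other physically close together.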
I’m ready for quantum computing.
Depends on whether you’re speeding up something that exists or designing something new.
**Overclocking:**
When we talk about GHz, we’re talking about a metric that combines the CPU’s multiplier with the front side bus speed of the motherboard it is currently slotted into.
With liquid nitrogen or other extreme cooling systems, researchers have been slowly pushing their way to 9 GHz. Increase your multiplier, speed up the FSB, push past stock voltage: this is overclocking.
Heat is a byproduct of the resistance of the components: they don’t conduct with 100% efficiency, so a portion of that electricity is dissipated as heat. On the box, the CPU will have a GHz rating; making it go faster than this starts with upping the voltage and ends with managing to keep it cool and stable.
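Here’s a minimal sketch of both ideas, the multiplier-times-bus clock and the voltage/heat relationship. The capacitance and voltage figures are illustrative, not taken from any real chip.

```python
def effective_clock_ghz(bus_mhz: float, multiplier: float) -> float:
    """Effective clock = bus (FSB/base) clock x multiplier."""
    return bus_mhz * multiplier / 1000.0

def dynamic_power_watts(cap_farads: float, volts: float, freq_hz: float) -> float:
    """Classic dynamic-power estimate: P ~ C * V^2 * f."""
    return cap_farads * volts ** 2 * freq_hz

print(effective_clock_ghz(100, 36))              # 100 MHz bus x 36 = 3.6 GHz
stock = dynamic_power_watts(2e-8, 1.20, 3.6e9)   # ~104 W at stock settings
oc    = dynamic_power_watts(2e-8, 1.40, 4.5e9)   # ~176 W overclocked
print(f"{oc / stock - 1:.0%} more heat for {4.5 / 3.6 - 1:.0%} more clock")
# 70% more heat for 25% more clock
```

Because voltage enters as a square, the heat grows much faster than the clock, which is why overclocking always ends up being a cooling problem.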
**Processor Design:**
So what is a CPU? Let’s just say it’s a crazy amount of tiny switches called transistors. It has other parts, but they’re not relevant here. The small transistors still used today, called MOSFETs, date from the late 1950s. In the 1970s Gordon Moore predicted that the number of transistors in ICs (read: computer chips) would double every 2 years as they got smaller and smaller.
Microprocessor engineers try to fit as much as they can in a given space. Most current gen CPUs have 3-4 billion transistors in them. When Moore noticed the trend, having a few thousand transistors in a single IC was state of the art. The GraphCore Colossus MK2 has about 60 billion in a single IC, but it’s a damn sight larger than your desktop CPU.
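Moore’s observation is easy to replay as plain arithmetic. Starting from the Intel 4004’s 2,300 transistors in 1971 and doubling every two years (an idealized model, not a claim about any particular chip):

```python
# Doubling every two years from the first commercial microprocessor:
transistors, year = 2300, 1971   # Intel 4004
while year < 2021:
    transistors *= 2
    year += 2
print(f"{year}: ~{transistors / 1e9:.0f} billion transistors")
# 2021: ~77 billion, the same order of magnitude as today's biggest chips
```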
So make the switches smaller and you can put more in. We’re approaching a horizon where the classical physics we design with breaks down. That has to do with quantum mechanics, but little to do with quantum computing. I can get into that if there’s interest, but the short version is that when we make them smaller than a given size, they stop working reliably.
There’s a physical gate speed limit here (read: switches be switching) that can only be overcome with exponentially more power; I’ll note that right around 2.8-3.2 GHz most chips start needing noticeably more power, which is one of the reasons stacking cores is more viable than a single faster core, especially in mobile tech. But there’s also the distance the electrical current can travel during one clock cycle: faster CPUs mean the signal has less and less time to travel before the next clock tick. Making faster CPUs means everything needs to be much smaller or closer together, or else the chip isn’t actually faster because it’s sitting around waiting for instructions. The speed of light is our limit here, so it’s very much an unbreakable barrier until someone proves otherwise.
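To make that light-speed budget concrete, here’s a quick sketch assuming (illustratively) that signals move at about half the speed of light in on-chip wiring:

```python
# Distance a signal covers in a single clock cycle at various clocks:
C = 3.0e8  # speed of light, m/s
for f_ghz in (1, 3, 5, 10):
    cm = 0.5 * C / (f_ghz * 1e9) * 100
    print(f"{f_ghz:>2} GHz: ~{cm:.1f} cm per cycle")
# 1 GHz: ~15.0 cm   3 GHz: ~5.0 cm   5 GHz: ~3.0 cm   10 GHz: ~1.5 cm
```

At 10 GHz the whole travel budget is about a centimeter and a half, and that’s before the transistors themselves spend any time switching.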
Size, Power, and electrical limitations.
Size: Currently there are only so many standards for CPU sockets. The amount of surface area to place components on is finite, so it becomes a game of optimization to fit as many components (mostly transistors) as possible on the CPU itself.
Power: This one is kind of a two-fer. You need to up the power the CPU uses as the clock rate goes up (for the most part; new advances in transistor and CPU design reset the power requirements a bit for a little while). In addition, more power means more heat, and heat can hurt the performance of the components on a CPU. This is mostly why your computer has cooling.
Electronics: The transistor works essentially by electrons hopping over a little wall. The smaller a transistor is, the more we can fit on a chip (and the less energy each one uses, technically). However, if the transistor gets too small, electrons will be able to slip through that wall on their own (which we don’t want).
It gets a bit more complex than that, but the rest is mostly these same issues compounding over multiple cores, layers of printed circuit board, and so on.
Adding to what others have said: from a computational perspective, the increase in performance if you run at, say, 6 GHz vs ~5 GHz (roughly the max any consumer-class CPU can run now without LN2) is not worth the cooling effort put into it.
Performance doesn’t scale linearly with increased power draw, and as you increase clocks you need even more power for each additional 0.1 GHz, which translates to even more cost in cooling systems.
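A toy model makes that concrete: past a certain point voltage has to rise roughly in step with frequency, and since dynamic power goes as C·V²·f, power then grows roughly with the cube of frequency. The constant below is picked only to give plausible-looking wattages, so treat the outputs as shapes, not specs.

```python
# Assumed cubic power model: P ~ k * f^3 once voltage scales with frequency.
def power_watts(f_ghz: float, k: float = 1.0) -> float:
    return k * f_ghz ** 3

for f in (4.0, 5.0, 6.0):
    step = power_watts(f + 0.1) - power_watts(f)
    print(f"{f:.1f} -> {f + 0.1:.1f} GHz: +{step:.1f} W (total {power_watts(f):.0f} W)")
# 4.0 -> 4.1 GHz: +4.9 W  (total 64 W)
# 5.0 -> 5.1 GHz: +7.7 W  (total 125 W)
# 6.0 -> 6.1 GHz: +11.0 W (total 216 W)
```

Each 0.1 GHz costs more watts than the last, while the performance gain per 0.1 GHz stays at best constant.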