: Why are computer CPUs the size they are? Wouldn’t making them bigger give way to more processing power without needing better technology?

Edit: My first post to blow up. Crazy.

Things in a CPU happen extremely quickly. The smaller/closer together things are, the more efficient they can be.

Bigger CPUs have more chances for stuff to go wrong in the manufacturing process. Plus, they’re already plenty big, compared to the transistors on them.

Computers with more than one CPU do exist, if you want a “bigger CPU”.

No. In fact, making a CPU smaller makes it faster. CPUs are already bumping against the speed of light limit (seriously), and limited in part by how fast signals can travel from one end to the other.

The other issue is that CPU manufacturing always has some chance of introducing a defect. The bigger the CPU, the bigger the chance that a defect lands somewhere inside it, and it’ll need to be thrown out. [See the picture](https://slideplayer.com/slide/5218297/16/images/3/Effect+of+Die+Size+on+Yield.jpg). So making a CPU smaller also makes it cheaper, because each defective CPU still costs money to make, and that cost has to be accounted for in the price of the good ones.
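To put toy numbers on that, here’s a back-of-the-envelope sketch using a simple Poisson yield model (the defect density is invented for illustration, not a real fab’s figure):

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2=0.1):
    """Fraction of dies expected to come out defect-free (simple Poisson model)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

for area_cm2 in (1, 2, 4, 8):
    print(f"{area_cm2} cm^2 die -> ~{poisson_yield(area_cm2):.0%} good dies")
# 1 cm^2 -> ~90%, 2 cm^2 -> ~82%, 4 cm^2 -> ~67%, 8 cm^2 -> ~45%
```

Even with that made-up defect density, doubling the die area a few times drops the share of usable chips from ~90% to under half, which is why big dies cost disproportionately more.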

Although electricity moves *very* fast (almost the speed of light) it still has to travel, and traveling takes time. Modern processors are cycling more than 3,000,000,000 times every second per core. Even a tiny tiny fraction of a second extra per cycle adds up. So, you want to pack everything as close together as possible. Just making the processor smaller with the same number of transistors makes it go faster because the electricity has less distance to travel.
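To put rough numbers on that (assuming the ~3 GHz clock above and a signal speed of about half the speed of light, both purely illustrative):

```python
# How far can a signal travel in one clock cycle?
clock_hz = 3e9                        # ~3 billion cycles per second
cycle_time_s = 1 / clock_hz           # ~0.33 nanoseconds per cycle

speed_of_light_m_per_s = 3e8
signal_speed_m_per_s = 0.5 * speed_of_light_m_per_s  # on-chip signals travel well below c

print(f"{cycle_time_s * speed_of_light_m_per_s * 100:.0f} cm per cycle at light speed")
print(f"{cycle_time_s * signal_speed_m_per_s * 100:.0f} cm per cycle for a realistic on-chip signal")
# ~10 cm and ~5 cm: only a few chip-widths, so every millimetre of wiring matters
```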

Making the processor smaller also makes it more efficient, since every bit of metal and semiconductor has some electrical resistance. If the electricity has less metal to pass through, it takes less power to run the CPU.
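A quick sketch of why shorter wires waste less power, using the textbook R = resistivity × length ÷ cross-section relation (the cross-section below is a made-up number, not a real interconnect dimension):

```python
RHO_COPPER = 1.7e-8          # ohm-metres, resistivity of copper
CROSS_SECTION_M2 = 1e-14     # hypothetical interconnect cross-section

def wire_resistance(length_m):
    """R = rho * L / A for a uniform wire."""
    return RHO_COPPER * length_m / CROSS_SECTION_M2

# Halve the wire length and you halve the resistance; the power burned as heat
# (P = I^2 * R) drops with it at the same current.
print(f"{wire_resistance(1e-3):.0f} ohms for a 1 mm wire")
print(f"{wire_resistance(0.5e-3):.0f} ohms for a 0.5 mm wire")
```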

But packing everything closer together creates a new problem: you’re also packing all of the heat closer together. Every little bit of metal or semiconductor generates heat with every cycle, and when you pack them close together it’s like putting a bunch of space heaters in one room. More heat means a bigger, more powerful heat sink.

When the components are too close together they also start “leaking” electrical signals between them, causing false signals which have to be re-checked, which of course slows down the computer. So without new technology they can’t be packed much closer together than they are.

So why not just build bigger chips with the same sized components, but more of them? Partly because of cost. More powerful CPUs will always be more expensive, and there’s a point when 90% of users don’t need anything that powerful and won’t be willing to spend the money on it. There *are* CPUs that are just larger, but they’re normally for special uses like servers. Those chips need special motherboards, which is another problem.

Most motherboards for your average user have some backwards compatibility. CPUs are made to fit the same socket as past generations, so your average user can upgrade to a better CPU without having to throw out half their computer. Even different mainstream CPU sockets are *around* the same size, so when you’re designing a motherboard you know how big the socket should be. Server CPUs also often need special RAM and more powerful power supplies, too. At some point it’s just not worth it to manufacture a larger CPU when few people are going to buy it. Or manufacturers build them anyway, but design them for completely different, niche uses and price them accordingly.

Physically larger CPUs also mean yet more heat. More components make even more heat, and it still has to go somewhere. Bigger, stronger heat sinks get expensive so, again, it becomes more than most users will spend or want to deal with. That may change as the market evolves, but given that 90% of users aren’t doing anything more computationally stressful than word processing and internet browsing…probably not soon.

TL;DR: Like shorter roads, smaller distances between components make the CPU faster. Physics problems like quantum tunneling and waste heat from electrical resistance limit how small components can be and how close together they can sit. Economics limits what people are willing to pay for, so just making CPUs with more components is not generally worth it for manufacturers. Current generations of processors are right about in the sweet spot of being as small as the CPU can physically be + having the most components (and therefore processing power) that people are willing to pay for. (And there is a wide range of CPU types, sizes, and costs for people to choose from.)

EDIT: Other comments below have pointed out the manufacturing process is cheaper with smaller dies because the odds of having a fatal flaw increase as the die size increases. That’s not something I was aware of! So those comments are totally worth reading (and upvoting).

EDIT2: Yes, Intel tends not to be backwards compatible and also sucks. This edit brought to you by the AMD Gang.

For the purpose of a CPU, the signals that matter are 1s and 0s – electric blips or no-blips.

But as long as the 1s and 0s are received and processed, it doesn’t matter if they are “big” or “small”, or carried down large hallways or small corridors.

Let’s say you are carrying boxes that either contain a signal or are empty. On old computers, huge boxes were carried by dump trucks running down big roads – noisy, inefficient and large.

On new computers, small boxes are carried by small scooters down narrow tunnels. Physically much smaller, and it requires much less power. That’s fine, since the only thing that matters is whether there is a signal in a box or not. You want the pathways and boxes to be as small as possible, as long as the signals are still received and registered correctly.

What I’m talking about here is really the die shrink process: [https://en.wikipedia.org/wiki/Die_shrink](https://en.wikipedia.org/wiki/Die_shrink), whereby an existing design is simply shrunk down in size. This is always a good thing. A huge part of making computers smaller, cooler and faster, and able to fit into phones, is this shrinking process.

Another question is whether to also make processors physically bigger at the same time as they shrink internally – smaller pathways for the signals, just more of them. That’s a trickier one.

It’s partly that there’s no need to: the actions of a CPU core, its “instruction set” so to speak, can be performed with a given layout, big or small, so there’s no reason to make an individual core physically bigger. Instead, as you shrink down the pathways within each core, you add more cores, letting you run more programs at the same time.

In that sense, CPUs have already been made “much bigger” than they used to be: they have more cores – if they had as few cores as before, the pathway shrinkage would have made them even tinier. You can also add things to processor cores, like a form of fast memory (cache) that makes them run faster, but there’s a limit to how well that improvement scales.
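As a toy illustration of “more cores = more things at once”, here’s a minimal Python sketch that spreads invented work across however many cores the machine has (the work function and job sizes are made up for the example):

```python
from multiprocessing import Pool, cpu_count

def busy_work(n):
    # A made-up CPU-bound task: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [5_000_000] * cpu_count()     # one chunk of work per core
    with Pool() as pool:                 # Pool starts one worker process per core by default
        results = pool.map(busy_work, jobs)
    print(f"finished {len(results)} jobs across {cpu_count()} cores")
```

Each extra core lets another chunk run in parallel, which is the payoff of spending the shrink “savings” on more cores rather than on bigger ones.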

Let’s say that for a CPU to function as a CPU, and perform its instruction set: [https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures](https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures) it needs to contain a spoon, a fork and a knife, regardless of their sizes. When people write programs, they do so presuming that these are the only tools in the CPU. You get great gains from shrinking the tools inside, but there’s no reason to add more utensils. However, as you shrink them, you get the space to add multiple CPU cores each containing a spoon, a fork and a knife.