Why are computer CPUs the size they are? Wouldn't making them bigger allow for more processing power without needing better technology?


Edit: My first post to blow up. Crazy.

In: Technology

14 Answers

Anonymous 0 Comments

Things in a CPU happen extremely quickly. The smaller/closer together things are, the more efficient they can be.

Anonymous 0 Comments

Bigger CPUs have more chances for stuff to go wrong in the manufacturing process. Plus, they’re already plenty big, compared to the transistors on them.

Computers with more than one CPU do exist, if you want a "bigger CPU".

Anonymous 0 Comments

No. In fact, making a CPU smaller makes it faster. CPUs are already bumping against the speed-of-light limit (seriously), and are limited in part by how fast signals can travel from one end to the other.

The other issue is that CPU manufacturing has a probability of having a defect in it. The bigger the CPU, the bigger the chance that a defect will be found inside it, and it’ll need to be thrown out. [See the picture](https://slideplayer.com/slide/5218297/16/images/3/Effect+of+Die+Size+on+Yield.jpg). So making a CPU smaller makes it cheaper, because each defective CPU still costs money to make, and the cost of those has to be accounted for in the price of the CPU.
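A rough way to see the yield effect is a simple Poisson defect model: if defects land randomly across the wafer at some average density, the chance that a die contains zero defects falls off exponentially with its area. This is only a sketch; the defect density and die areas below are made-up illustrative numbers, not real fab data.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to have zero defects under a simple Poisson model."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

d0 = 0.1  # assumed average defect density, defects per cm^2 (illustrative only)
for area in (1.0, 2.0, 4.0, 8.0):  # die areas in cm^2
    print(f"{area:3.0f} cm^2 die -> ~{poisson_yield(d0, area):.0%} of dies are good")
```

With these made-up numbers, a die 8x the size goes from about 10% scrap to more than half the dies being thrown away, and every scrapped die still had to be paid for.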

Anonymous 0 Comments

Although electricity moves *very* fast (almost the speed of light) it still has to travel, and traveling takes time. Modern processors are cycling more than 3,000,000,000 times every second per core. Even a tiny tiny fraction of a second extra per cycle adds up. So, you want to pack everything as close together as possible. Just making the processor smaller with the same number of transistors makes it go faster because the electricity has less distance to travel.
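To put a rough number on that: at the 3 GHz mentioned above, one clock cycle lasts about a third of a nanosecond, and even light in a vacuum only covers about 10 cm in that time (signals in real wiring are slower). A quick back-of-the-envelope sketch:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # an upper bound; signals in copper/silicon are slower

clock_hz = 3_000_000_000              # 3 GHz, the figure used above
cycle_s = 1 / clock_hz                # duration of one clock cycle
distance_m = SPEED_OF_LIGHT_M_PER_S * cycle_s

print(f"One cycle lasts {cycle_s * 1e9:.3f} ns")
print(f"A signal covers at most {distance_m * 100:.1f} cm per cycle")
```

So anything a signal has to reach within a single cycle has to sit within a few centimeters at the absolute best, and in practice much closer.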

That also makes it more efficient, since every bit of metal and semiconductor has some electrical resistance. If you have less metal for the electricity to pass through, it takes less power to run the CPU.

But that creates the problem that you’re also packing all of the heat closer together, too. Every little bit of metal or semiconductor is generating heat with every cycle, and when you pack them close together it’s like putting a bunch of space heaters in a room together. More heat means a bigger, more powerful heat sink.

When the components are too close together they also start “leaking” electrical signals between them, causing false signals which have to be re-checked, which of course slows down the computer. So without new technology they can’t be packed much closer together than they are.

So why not just build bigger chips with the same sized components, but more of them? Partly because of cost. More powerful CPUs will always be more expensive, and there’s a point when 90% of users don’t need anything that powerful and won’t be willing to spend the money on it. There *are* CPUs that are just larger, but they’re normally for special uses like servers. Those chips need special motherboards, which is another problem.

Most motherboards for your average user have some backwards compatibility. CPUs are made to fit within the same socket as past generations, so your average user can upgrade to a better CPU without having to throw out half their computer. Even different normal CPU sockets are *around* the same size so when you’re designing a motherboard you know how big the socket should be. Server CPUs also often need special RAM and more powerful power supplies, too. At some point it’s just not worth it to manufacture a larger CPU when few people are going to buy it. Or else, manufacture them but design them for completely different, niche uses and price them accordingly.

Physically larger CPUs also mean yet more heat. More components make even more heat, and it still has to go somewhere. Bigger, stronger heat sinks get expensive so, again, it becomes more than most users will spend or want to deal with. That may change as the market evolves, but given that 90% of users aren’t doing anything more computationally stressful than word processing and internet browsing…probably not soon.

TL;DR: Like shorter roads, smaller distances between components make the CPU go faster. Physics problems like quantum tunneling and waste heat from electrical resistance limit how small components can be and how close together they can sit. Economics limits what people are willing to pay for, so just making CPUs with more components is not generally worth it for manufacturers. Current generations of processors are right about in the sweet spot of being as small as the CPU can physically be while having the most components (and therefore processing power) that people are willing to pay for. (And there is a wide range of CPU types, sizes, and costs for people to choose from.)

EDIT: Other comments below have pointed out the manufacturing process is cheaper with smaller dies, because the odds of having a fatal flaw inside a die increase as the die size increases. That's not something I was aware of! So those comments are totally worth reading (and upvoting).

EDIT2: Yes, Intel tends not to be backwards compatible and also sucks. This edit brought to you by the AMD Gang.

Anonymous 0 Comments

For the purpose of a CPU, the signals that matter are 1s and 0s – electric blips or no-blips.

But as long as the 1s and 0s are received and processed, it doesn’t matter if they are “big” or “small”, or carried down large hallways or small corridors.

Let’s say you are carrying boxes that either contain a signal or are empty. On old computers, huge boxes were carried by dumptrucks running down big roads – noisy, inefficient and large.

On new computers, small boxes are carried by small scooters down narrow tunnels. Physically much smaller, requires much less power. But this is great, since the only thing that matters is whether there is a signal in a box or not. You want the pathways and boxes to be as small as possible, as long as they are still received and registered correctly.

What I'm talking about here is really the die shrink process: [https://en.wikipedia.org/wiki/Die_shrink](https://en.wikipedia.org/wiki/Die_shrink), whereby a given design is simply shrunk down in size. This is almost always a good thing. A huge part of making computers smaller, cooler, and faster, and able to fit in phones, is this shrinking process.
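As a rough sketch of what a shrink buys you: scale every linear dimension of a layout by some factor and its area shrinks by the square of that factor. The die area and shrink factor below are assumptions for illustration; real process nodes don't scale this cleanly.

```python
def shrunk_area(original_area_mm2: float, linear_shrink: float) -> float:
    """Area of the same layout after scaling every linear dimension by `linear_shrink`."""
    return original_area_mm2 * linear_shrink ** 2

old_area = 200.0  # assumed die area in mm^2 (illustrative only)
shrink = 0.7      # each dimension scaled to 70% of its old size (illustrative only)
print(f"{old_area:.0f} mm^2 design shrinks to ~{shrunk_area(old_area, shrink):.0f} mm^2")
```

The same design ends up with shorter signal paths, and roughly twice as many copies of it fit on the same wafer.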

Another question is whether to make processors physically bigger at the same time as they shrink internally: smaller pathways for the signals, just more of them. That's a trickier one.

It's partly that there's no need to: the actions of a CPU core, its "instruction set" so to speak, can be performed with a given layout, big or small, so there's no reason to make a single core physically larger. Instead, as you shrink down the pathways within each core, you add more cores, letting you run more programs at the same time.

In that sense, CPUs have already been made "much bigger" than they used to be: they have more cores. If they had as few cores as before, with the pathway shrinkage, they would be even tinier. You can also add things to processor cores, like a form of fast memory (cache) that makes them run faster, but there's a limit to how well that improvement scales.

Let’s say that for a CPU to function as a CPU, and perform its instruction set: [https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures](https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures) it needs to contain a spoon, a fork and a knife, regardless of their sizes. When people write programs, they do so presuming that these are the only tools in the CPU. You get great gains from shrinking the tools inside, but there’s no reason to add more utensils. However, as you shrink them, you get the space to add multiple CPU cores each containing a spoon, a fork and a knife.
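To make the "more cores, not a bigger core" point concrete, here's a minimal sketch of spreading independent tasks across however many cores the machine reports. The busy_work function is just a hypothetical stand-in for real work.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    """Stand-in for an independent task: sum the first n integers."""
    return sum(range(n))

if __name__ == "__main__":
    cores = os.cpu_count() or 1  # however many cores this CPU exposes
    with ProcessPoolExecutor(max_workers=cores) as pool:
        # Each task can land on a different core and run at the same time
        results = list(pool.map(busy_work, [10_000_000] * cores))
    print(f"Ran {len(results)} tasks across up to {cores} cores")
```

Each extra core is another slot for a task like this to run in parallel, which is the payoff of spending freed-up die area on more cores rather than on one physically bigger core.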

Anonymous 0 Comments

[removed]

Anonymous 0 Comments

We do do this. Well, sort of. I work for a big hosting company. There's both core count and speed: more cores make for a bigger physical processor, and more speed makes each core faster but hotter, so whether you're upping the clock or adding more cores, the whole processor generates more heat. There's a limit to how much heat you can put in one small space before things like melting (I have seen this) happen. Usually what we do to pack more processing into a box is get motherboards that can support multiple CPUs: dual socket is common, quad socket somewhat, and I believe you can even get 8 sockets. Each socket can support over a TB of memory too, so these systems get pretty big. The problem is those mobos are expensive, and the more you ramp a single system up, the faster the price climbs. So what we've done server-side is write our code in ways it can be split up and served across multiple servers instead of multiple cores, since either approach takes more work than a program that runs on a single core anyway.

As to why they don't simply increase performance per core: it takes a whole lot of engineering to make that happen, and either way you still have to deal with heat. It's more practical to scale horizontally than vertically these days.

Anonymous 0 Comments

Check out this video by the great Grace Hopper: https://www.youtube.com/watch?v=9eyFDBPk4Yw

The speed of light sounds really, really fast, until you're dealing with nanoseconds and microseconds. I'm sure everyone is familiar with GHz (gigahertz) being a very rough measure of processor speed. Consider a simple 3 GHz processor. That means that every nanosecond, this processor flips a switch (called a "clock") exactly 3 times. The clock is used to synchronize the entire processor and is almost a hard requirement in modern processor design (some clockless/asynchronous designs exist, but they are very complex).

Now imagine you build a large processor where the clock sits in the middle, memory is on one side, and the circuitry for adding two numbers is on the opposite side from the memory, with the memory and the addition circuit each 10cm away from the clock. With a design like this, if you did an addition and then wanted to store the result in memory, then at 3 GHz, by the time the clock signal reached the addition circuit, the clock would already be changing again at the source. To put this into distance:

* clock to addition: 10cm/1 clock cycle
* addition operation: typically 1 clock cycle
* addition to memory: 20cm/2 clock cycles

Basically, a bigger distance means more propagation delay, which means much more complexity in keeping the clock rate high without parts of the processor falling out of sync with the clock, or failing to "stabilize" on a result within the designated clock cycle. A smaller processor means you can push the "wiring" of the processor faster, even if the actual components (i.e., transistors) are still effectively fixed speed. For one more example, consider some circuit that adds numbers and is 10cm across. When you feed data into one side, assuming perfect component behavior, the result comes out the other side after at least 0.33 nanoseconds (typically much more). Now you invent something and shrink the design of this component to fit into 1cm. The result now comes back after just 0.033 nanoseconds. The reduction in propagation delay almost always gives a huge advantage, even if smaller transistors are individually a bit worse and cause more "stability delay".
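Redoing that arithmetic as a sketch (the 10 cm, 20 cm, and 1 cm distances come from the made-up layout above, and the speed of light is a best case; real signals are slower):

```python
C = 299_792_458            # speed of light in m/s: the best case for any signal
CLOCK_HZ = 3_000_000_000   # 3 GHz, as in the example above
CYCLE_S = 1 / CLOCK_HZ     # about 0.33 ns per clock cycle

# Distances from the hypothetical layout described above
legs_cm = {"clock to adder": 10, "adder to memory": 20, "shrunk 1 cm adder": 1}

for label, distance_cm in legs_cm.items():
    delay_s = (distance_cm / 100) / C  # pure travel time at light speed
    print(f"{label}: {delay_s * 1e9:.3f} ns, about {delay_s / CYCLE_S:.2f} cycles")
```

Even with nothing but travel time, the big layout spends whole clock cycles just moving signals around, while the shrunk adder gets its answer back in about a tenth of a cycle.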

Anonymous 0 Comments

Yes, you could do that, and there are companies making chips the size of dinner plates. The problem is it is very very expensive.

To make a CPU, you start with a piece of silicon about the size of a dinner plate and "print" your chip design on it as many times as it fits. So if you can put 1000 CPUs on your plate-sized silicon, now 1000 people pay for that batch rather than one person paying for the whole plate-sized piece of silicon.

Slightly more technical details: a bigger chip is great for parallel computing or lots of cores (running lots of things together), but not every application/chip would benefit from scaling like that. But even for basic Nvidia chips, the faster ones have a bigger die size (chip size).

Anonymous 0 Comments

Cost. To build a computer chip, you start with a circular, flat piece of silicon called a wafer. The wafer is a fixed cost, say $1000, and a fixed size, say 12″ in diameter.

Therefore, the cost of the chip is directly related to the size. If you can fit 10 chips on your wafer, then your chips would cost $100 each to make. But if you fit 100 chips on your wafer, then your chips would cost $10 each to make.
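The same arithmetic in a short sketch, with assumed numbers (a $1000 wafer, roughly 12″ across, ignoring defects and edge waste):

```python
import math

WAFER_COST = 1000.0       # assumed fixed cost per wafer, as in the example above
WAFER_DIAMETER_CM = 30.0  # roughly a 12-inch wafer
WAFER_AREA_CM2 = math.pi * (WAFER_DIAMETER_CM / 2) ** 2

for die_area_cm2 in (1.0, 4.0):
    dies_per_wafer = int(WAFER_AREA_CM2 // die_area_cm2)  # ignores edge waste
    print(f"{die_area_cm2:.0f} cm^2 die: ~{dies_per_wafer} per wafer, "
          f"~${WAFER_COST / dies_per_wafer:.2f} each to make")
```

In practice the defect issue from the other answers gets folded in too: you divide the wafer cost by the number of good dies, not just the number of dies that physically fit.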