: Why are computer CPUs the size they are? Wouldn’t making them bigger give way to more processing power without needing better technology?

Edit: My first post to blow up. Crazy.

In: Technology

14 Answers

Anonymous 0 Comments

Not posting this comment to diminish what /u/RhynoD or others have already said; what they said is absolutely true. CPU manufacturers want CPUs to be small, but not too small.

However, not all CPUs are the same size, and many higher-end CPUs are physically larger. In fact, the actual chip in consumer CPUs varies in size, even though the total size of the CPU package does not. What you see of the CPU is a bit of printed circuit board (the little green board that contains all the connections between the chip and the rest of the computer) and the heat spreader on top. As the name implies, the heat spreader takes the heat generated by the CPU chip and spreads it out to improve cooling.

Under the heat spreader is the CPU chip itself. Even though the board and heat spreader sizes don’t change, the chip underneath can vary. For example, [here is a size comparison between three different Intel CPUs](https://images.anandtech.com/doci/13400/9900K%20Mockup.jpg). The board and heat spreader remain the same size across all three CPUs. AMD and Intel have each settled on similar (although not identical) board and heat spreader sizes just so that both the low-end and high-end CPUs of a generation can all work with the same motherboards and other products like heatsinks.

Their professional-level CPUs, such as some Intel Xeons or AMD’s Threadripper/EPYC CPUs, have physically larger heat spreaders and boards in addition to having massive chips. Part of the reason certain Xeon chips cost over ten thousand dollars is that large single-chip CPUs have high failure rates, per /u/dale_glass’s comment, so they need to charge the price of a black-market kidney to make their money back. AMD’s high-end CPUs these days are actually a bunch of smaller CPU “chiplets” acting as a single CPU, which lets them pack a ton of CPU horsepower into a package you will only need to sell half of a kidney for.

Anonymous 0 Comments

The explanations posted are good, but I love this question because the actual math is so easy to understand.

Start with how big a processor is, which is **like an inch or two across**, right?

Then think about how fast they’re doing stuff. The fastest boast a “4.0 GHz” (gigahertz) clock speed, which is roughly the number of operations they can do in a second. That’s **4 billion operations per second**.

But then how long does that leave for a single operation? Quick conversion: it’s **250 picoseconds**.

Then, we know the speed of light, right? c is about **300,000 km/s**.

So the question is, how far can light, the fastest thing around, travel in 250 picoseconds? It turns out, when you crunch all the numbers, the answer is about **3 inches** (7.5 cm). Once you remember that the signal isn’t moving in a straight line and has to wind around in there, it’s easy to see why a processor can’t possibly be much bigger than an inch or two across and still run that fast!
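
If you want to check that arithmetic yourself, here it is in a few lines of Python (using the 4.0 GHz and speed-of-light figures above; real signals inside a chip travel quite a bit slower than light, which only makes the limit tighter):

```python
# Back-of-the-envelope: how far can a signal travel in one clock cycle?
c = 3.0e8          # speed of light in m/s (best case; on-chip signals are slower)
clock_hz = 4.0e9   # the 4.0 GHz example clock

period_s = 1 / clock_hz    # one clock cycle: 2.5e-10 s
distance_m = c * period_s  # how far light gets in that time

print(f"{period_s * 1e12:.0f} ps per cycle")          # 250 ps
print(f"{distance_m * 100:.1f} cm per cycle")         # 7.5 cm
print(f"{distance_m / 0.0254:.1f} inches per cycle")  # 3.0 inches
```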

Anonymous 0 Comments

The opposite is true.

Think about factories you want to connect with roads. The longer the roads, the longer it takes you to travel them. So to make things (data in a CPU) go from one factory to the other, you want the roads as short as possible.

But, if those factories are too close together, pollution (heat in a CPU) becomes a problem. So the only way to have those factories closer together is by making them produce less pollution (heat).

So there is a balance: cram everything on a CPU as close together as possible, but not so close that it overheats.

Anonymous 0 Comments

Cost. To build a computer chip you start with a flat, circular piece of silicon called a wafer. The wafer is a fixed cost, say $1000, and a fixed size, say 12″ in diameter.

Therefore, the cost of the chip is directly related to the size. If you can fit 10 chips on your wafer, then your chips would cost $100 each to make. But if you fit 100 chips on your wafer, then your chips would cost $10 each to make.
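
Here’s that arithmetic as a couple of lines of Python (the $1000 wafer and the chip counts are just the made-up example numbers above):

```python
def cost_per_chip(wafer_cost, chips_per_wafer):
    """Split the fixed wafer cost across every chip that fits on it."""
    return wafer_cost / chips_per_wafer

print(cost_per_chip(1000, 10))   # 100.0 -> big chips cost $100 each
print(cost_per_chip(1000, 100))  # 10.0  -> small chips cost $10 each
```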

Anonymous 0 Comments

Yes, you could do that, and there are companies making chips the size of dinner plates. The problem is that it is very, very expensive.

To make a CPU you start with a piece of silicon about the size of a dinner plate and “print” your chip design on it as many times as it fits. So if you can put 1000 CPUs on your plate-sized piece of silicon, now 1000 people pay for that batch rather than one person paying for the whole plate-sized piece of silicon.

*Slightly more technical details:* A bigger chip is great for parallel computing or lots of cores (running lots of things together), but not every application/chip would benefit from scaling like that. But even for basic Nvidia chips, the faster ones have a bigger die size (chip size).
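
For a rough feel of how die size trades off against how many chips fit on one of those plate-sized wafers, here’s a simple approximation in Python (it just divides wafer area by die area and knocks a bit off for the round edge; real foundry calculators also account for rectangular dies, scribe lines, and yield):

```python
import math

def approx_dies_per_wafer(wafer_diameter_mm, die_area_mm2, edge_loss=0.1):
    """Rough estimate: usable wafer area divided by die area,
    minus ~10% for partial dies lost at the round edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area * (1 - edge_loss) / die_area_mm2)

# 300 mm (~12 inch) wafer, small die vs. big die
print(approx_dies_per_wafer(300, 100))  # 636 dies of ~100 mm² each
print(approx_dies_per_wafer(300, 600))  # 106 dies of ~600 mm² each
```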

Anonymous 0 Comments

Check out this video by the great Grace Hopper: https://www.youtube.com/watch?v=9eyFDBPk4Yw

The speed of light sounds really, really fast, until you’re dealing with things on the order of nanoseconds or less. I’m sure everyone is familiar with GHz (gigahertz) and such being a very rough measure of processor speed. Consider a simple 3 GHz processor. This means that every nanosecond, this processor flips a switch (called the “clock”) exactly 3 times. The clock is used for synchronizing the entire processor and is almost a hard requirement in modern processor design (some clockless/asynchronous designs exist, but they are very complex).

Now imagine you build a large processor where the clock sits in the middle, memory is on one side, and the circuitry for adding two numbers is on the opposite side from memory. The memory and addition circuit are each 10 cm away from the clock. With a design like this, running at 3 GHz, if you performed an addition that you then wanted to put into memory, the clock signal would already have changed at the source by the time it reached the addition circuit. To put this into distances:

* clock to addition: 10 cm / 1 clock cycle
* addition operation: typically 1 clock cycle
* addition to memory: 20 cm / 2 clock cycles

Basically, bigger distance means more propagation delay, which makes it much harder to keep the clock rate of a processor high without components falling out of sync with the clock, or failing to “stabilize” on a result within the designated clock cycle. Smaller processor size means you can push the “wiring” of a processor faster, even if the actual components (i.e., transistors) are still effectively fixed speed. For one more example, consider some circuit that adds numbers and is 10 cm across. When you feed data into one side, assuming perfect component behavior and so on, the result comes out the other side in at least 0.33 nanoseconds, and typically much more. Now you invent something and shrink the design of this component down to 1 cm. The result comes back after just 0.033 nanoseconds. The reduction in propagation delay almost always gives a huge advantage, even if the smaller transistors are individually a bit worse and add some “stability delay”.
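
Here’s that propagation-delay math as a small Python sketch (it only counts straight-line travel at the speed of light, which is the absolute best case; real wires and transistor switching add a lot more delay):

```python
C = 3.0e8  # speed of light in m/s (optimistic upper bound for on-chip signals)

def propagation_delay_ns(distance_cm):
    """Minimum time for a signal to cross the given distance, in nanoseconds."""
    return (distance_cm / 100) / C * 1e9

def max_clock_ghz(distance_cm):
    """Highest clock rate where one cycle still covers that distance."""
    return 1 / propagation_delay_ns(distance_cm)

print(f"{propagation_delay_ns(10):.2f} ns")   # 0.33 ns to cross 10 cm
print(f"{propagation_delay_ns(1):.3f} ns")    # 0.033 ns to cross 1 cm
print(f"{max_clock_ghz(10):.0f} GHz ceiling") # ~3 GHz for the 10 cm design
print(f"{max_clock_ghz(1):.0f} GHz ceiling")  # ~30 GHz for the 1 cm design
```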

Anonymous 0 Comments

We do do this. Well, sort of. I work for a big hosting company. There’s both core count and clock speed to consider: more cores make for a physically bigger processor, and more speed makes each core faster but hotter, so whether you are upping the clock or adding more cores, the whole processor is generating more heat. There’s a limit to how much heat you can have in a single space before things like melting happen (I have seen this).

Usually what we do to pack more processing into a box is get motherboards that can support multiple CPUs. Dual socket is common, quad socket somewhat less so, and I believe you can even get 8 sockets; each socket can support over a TB of memory too, so these machines get pretty big. The problem is that those motherboards are expensive, and the more you ramp a single system up, the faster the price climbs. So what we’ve done server-side is write our code in ways that let it be split up and served across multiple servers instead of multiple cores, since both approaches take more work than a program that runs on a single core anyway.

As to why they don’t simply increase performance per core: it takes a whole lot of engineering to make that happen, and either way you still have to deal with heat. It’s more practical to scale horizontally than vertically these days.

Anonymous 0 Comments

For the purpose of a CPU, the signals that matter are 1s and 0s – electric blips or no-blips.

But as long as the 1s and 0s are received and processed, it doesn’t matter if they are “big” or “small”, or carried down large hallways or small corridors.

Let’s say you are carrying boxes that either contain a signal or are empty. On old computers, huge boxes were carried by dumptrucks running down big roads – noisy, inefficient and large.

On new computers, small boxes are carried by small scooters down narrow tunnels. Physically much smaller, requires much less power. But this is great, since the only thing that matters is whether there is a signal in a box or not. You want the pathways and boxes to be as small as possible, as long as they are still received and registered correctly.

What I’m talking about here is really the die shrink process ([https://en.wikipedia.org/wiki/Die_shrink](https://en.wikipedia.org/wiki/Die_shrink)), whereby an existing design is simply shrunk down in size. This is always a good thing. A huge part of making computers smaller, cooler, and faster, and of fitting them into phones, is this shrinking process.

Another question is making the processors bigger overall at the same time as they shrink internally: smaller pathways for the signals, just more of them. That’s a trickier one.

It’s partly that there’s no need to. The actions of a CPU core, the “instruction set” so to speak, can be performed with a given layout, big or small, so there’s no reason to add more area to a single core. Instead, as you shrink down the pathways for each core, you add more cores, letting you run more programs at the same time.

CPUs have hence already been made “much bigger” than they used to be, in the sense that they have more cores; if they had as few cores as before, with the pathway shrinkage, they would be even tinier. You can also add things to processor cores, like a form of fast memory (cache) that makes them run faster, but there’s a limit to how well the improvement scales.

Let’s say that for a CPU to function as a CPU and perform its instruction set ([https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures](https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures)), it needs to contain a spoon, a fork and a knife, regardless of their sizes. When people write programs, they do so presuming that these are the only tools in the CPU. You get great gains from shrinking the tools inside, but there’s no reason to add more utensils. However, as you shrink them, you get the space to add multiple CPU cores, each containing a spoon, a fork and a knife.