Why does the computational power of chips grow somewhat formulaically, without major spikes?


Moore’s Law, coined by Intel co-founder Gordon Moore, states that the number of transistors on a chip doubles every two years, which has held roughly true for several decades.
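That doubling compounds dramatically. A quick sanity check (the 1971 Intel 4004 transistor count is a well-known public figure; treat this as a rough illustration, not a precise prediction):

```python
# Rough check of Moore's Law compounding: doubling every two years.
start_year, end_year = 1971, 2021
doublings = (end_year - start_year) / 2   # 25 doublings over 50 years
growth = 2 ** doublings

# The Intel 4004 (1971) had roughly 2,300 transistors.
transistors_now = 2300 * growth           # ~77 billion

print(f"{doublings:.0f} doublings -> x{growth:,.0f} growth")
print(f"~{transistors_now:,.0f} transistors predicted for {end_year}")
```

That lands in the tens of billions, which is the right ballpark for the largest chips of the early 2020s.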

And there have also been somewhat formulaic increases in [frequency and core count.](https://i.imgur.com/XbMffI8.jpg)

I wonder what the holdup is that prevents power spikes. Why did they not quadruple the transistor count, or increase frequency further? When multi-core designs arrived and Intel built its dual-core, then quad-core processors, why did they not extrapolate the technology and build the twelve-core CPUs of today, or even the 48-core CPUs of the future, right then and there?


8 Answers

Anonymous 0 Comments

Oh, I know that one. It is because they generate heat, and they are so tiny that it is not easy to pack more and more of them into a chip.

Manufacturers have to figure out how to make transistors smaller so that more fit on the chip, and that takes time and research. The chips also need to be more heat efficient: as electricity runs through the transistors it generates heat, as with everything (that is why your PC has heat sinks and fans).

The way the physics works, you cannot just pack as many transistors as you want into a chip.

Anonymous 0 Comments

The chart uses a logarithmic scale. The spikes are giant on a linear scale; the data does not fit a smooth curve as neatly as this picture makes it look.

Anonymous 0 Comments

It takes time for electrical signals to move around. The larger (physically) the chip is, the lower the clock rate has to be. So for a given process size there’s a tradeoff between clock rate and size/count of cores. Even with modern CPUs, most of the die space is implementing giant L2/L3 caches that run at a lower clock rate than the CPU cores and L1 caches.
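That signal-speed limit can be sketched with back-of-envelope numbers. The ~20 cm/ns figure is a rough order-of-magnitude value for on-chip signal propagation (the exact speed depends on the wiring), so treat this as an illustration of the tradeoff, not real chip data:

```python
# Sketch: how far can a signal travel in one clock cycle?
# On-chip signals propagate at very roughly 20 cm per nanosecond
# (a fraction of the speed of light; order-of-magnitude only).
SPEED_CM_PER_NS = 20.0

def max_reach_cm(clock_ghz: float) -> float:
    """Distance a signal can cover in one cycle at the given clock."""
    cycle_ns = 1.0 / clock_ghz   # duration of one cycle in nanoseconds
    return SPEED_CM_PER_NS * cycle_ns

for ghz in (1, 3, 5):
    print(f"{ghz} GHz: signal reaches ~{max_reach_cm(ghz):.1f} cm per cycle")
# At 5 GHz a signal covers only ~4 cm per cycle, before even counting
# gate delays -- so a physically larger chip forces a lower clock.
```

This is why "just make the chip bigger and faster" runs into a wall: distance and clock rate pull in opposite directions.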

Heat dissipation is another problem: higher clock rates (and/or voltages) and more complex cores generate more heat. For consumer-level tech you have to be able to air-cool the CPU, so power dissipation much above 50-100 W is very hard to deal with.
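The clock/voltage part of that can be made concrete. Dynamic power in CMOS logic scales roughly as P = C·V²·f (switched capacitance times voltage squared times frequency); the absolute numbers below are invented for illustration, only the scaling matters:

```python
# Why modest clock/voltage bumps blow the heat budget:
# dynamic CMOS power scales roughly as P = C * V^2 * f.
# Capacitance and frequencies here are made-up illustrative values.
def dynamic_power(c_farads: float, volts: float, freq_hz: float) -> float:
    return c_farads * volts ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 3.0e9)   # baseline: 1.0 V at 3 GHz
hot  = dynamic_power(1e-9, 1.2, 4.5e9)   # +20% voltage, +50% clock
print(f"power ratio: {hot / base:.2f}x")  # ~2.16x the heat to dissipate
```

A 20% voltage bump and 50% clock bump more than doubles the heat, which is exactly the kind of spike an air cooler cannot absorb.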

Anonymous 0 Comments

1. This chart is logarithmic, which damps spikes.
2. This chart may not contain all examples. It does not specify its source data at all, so we don’t know whether data that fits these lines poorly was excluded.

Anonymous 0 Comments

It’s largely a result of expectations driving reality. Semiconductor manufacturers set their development to proceed at this pace based on Moore’s Law. Companies that couldn’t keep up were left in the dust. At this point there are only three companies left on the cutting edge. Any of these companies could theoretically put out a massively expensive CPU with, say, 1,000 cores, but nobody would buy it because there’s no software that really needs it. The hardware and software developed symbiotically at a certain pace set by Moore’s Law.

PS: Recently it has begun to look like Moore’s Law is no longer sustainable. In order to keep up, prices have risen substantially where previously they had fallen consistently. We’ve reached the point where even the three companies at the cutting edge can no longer sustain this pace.

Anonymous 0 Comments

I think one point you’re missing is that there are dozens of innovations required to improve each generation of chips.

Anonymous 0 Comments

Let’s take your specific example of a high-core-count multi-core CPU.

Say I’m starting with a dual-core design, so I have some idea of the hardware architecture and how the software might work.

I decide on a 64 core design target.

First, there’s a massively more complex inter-core data communication problem, which I have never designed before.

Then, I have to design around the timing problem, because signals on the chip only propagate about 20 cm per nanosecond.

Then I have to think about heat.

Then I have to think about system integration with memories and other nearby components.

Next, the part area is going to be something like 30-100 times as large as the 2-core part, because I’m using existing fabrication tech. This means I’m going to get dead parts on the silicon wafer much more often, maybe approaching 100% of them, which is terrible economics.
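The dead-parts economics above can be sketched with the classic Poisson yield model, where the fraction of defect-free dies falls as exp(−defect density × die area). The defect density below is an illustrative assumption, not real fab data:

```python
import math

# Why bigger dies are "terrible economics": with randomly scattered
# manufacturing defects, yield falls roughly as exp(-D * A)
# (Poisson yield model). D here is an invented illustrative value.
DEFECTS_PER_CM2 = 0.1

def yield_fraction(area_cm2: float) -> float:
    """Fraction of dies expected to have zero defects."""
    return math.exp(-DEFECTS_PER_CM2 * area_cm2)

small = yield_fraction(1.0)    # a small 1 cm^2 dual-core die
huge  = yield_fraction(50.0)   # a die ~50x larger
print(f"small die yield: {small:.1%}")   # ~90.5% good parts
print(f"huge die yield:  {huge:.1%}")    # ~0.7% -- nearly all dead
```

Because the area sits in an exponent, a 50x larger die doesn't cost 50x more in waste; it kills nearly every part on the wafer.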

Also, I may NEED bigger wafers. Which may not be available. And which are going to be very expensive, and very expensive to run in a fab line.

This list goes on and on. I’ve only scratched the surface. And all these things take time and money. LOTS of money, which is always in short supply.

Look into the history of the Connection Machines for an example of related difficulties.

Anonymous 0 Comments

So … many companies have done this, you’ve just not heard of them. Here’s one: https://www.cerebras.net/product-chip/

It’s possible you’re misunderstanding the nuance of Moore’s Law. It’s not simply that the “number of transistors in a chip doubles every two years”; it’s that the number of transistors *at the same cost* doubles every two years. Oversimplified: a 4-core smartphone CPU might cost (for example) $50 this year, so the same $50 will afford an 8-core CPU in two years. Meaning you get a higher-performance phone, and the whole phone still costs the same $200 each time. You could jump to a killer 64-core design instead, and end up with a $550 phone with one eighth the battery life, and find yourself no longer in the smartphone business.
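That “same cost” reading can be spelled out numerically. The $50 price and core counts below are the invented figures from the example above, not real product data:

```python
# The "same cost" reading of Moore's Law: transistor budget at a
# fixed price doubles every two years, so cost per transistor halves.
# The $50 CPU and 4-core starting point are invented illustrative
# figures, mirroring the oversimplified example in the text.
price = 50    # dollars buys you...
cores = 4     # ...a 4-core CPU this year

for years in (0, 2, 4):
    n = cores * 2 ** (years // 2)
    print(f"year +{years}: ${price} buys a {n}-core CPU")
```

The point is that progress shows up as more capability at a flat price, not as a sudden expensive monster chip.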