If we are running out of transistors to place on a chip then why not just make the chips bigger?

Basically the title. I was reading about how we will eventually reach a limit on the number of transistors we can place on a chip and will have to turn to quantum computing in the far future. So why not just make bigger chips to accommodate more transistors?

4 Answers

Anonymous 0 Comments

It’s expensive. Very big chips do exist, but they cost thousands of dollars each.

One issue is that as chips get bigger, the chance that any single chip contains a defect also gets bigger. Defects come from things like dust randomly landing on the wafer during manufacturing. If you average 5 defects per wafer and cut the wafer into 100 chips, you end up with 4 or 5 bad chips and 95–96 good ones, so each good chip costs roughly 1/95th of the wafer, or about 1%. If you cut the same wafer into 10 much bigger chips, those same 5 defects might ruin 3–5 of them (some chips catch more than one defect), leaving only 5–7 good chips, so each good chip costs 1/7th to 1/5th of the wafer, roughly 14–20%. So making the chips 10x bigger makes each good chip something like 13–19x more expensive, just from yield.
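
To make that math concrete, here’s a rough Python sketch of the same yield argument. It assumes defects land randomly and independently on the wafer (a Poisson model) and that any single defect kills the whole chip; the wafer cost and defect rate are made-up numbers for illustration only.

```python
import math

def yield_fraction(defects_per_wafer: float, chips_per_wafer: int) -> float:
    """Probability a chip has zero defects, assuming defects land
    uniformly at random (Poisson model) and one defect kills a chip."""
    defects_per_chip = defects_per_wafer / chips_per_wafer
    return math.exp(-defects_per_chip)

def cost_per_good_chip(wafer_cost: float, defects_per_wafer: float,
                       chips_per_wafer: int) -> float:
    """Wafer cost spread over the expected number of working chips."""
    good_chips = chips_per_wafer * yield_fraction(defects_per_wafer, chips_per_wafer)
    return wafer_cost / good_chips

# Illustrative numbers only: a $10,000 wafer with 5 random defects.
for chips in (100, 10):
    y = yield_fraction(5, chips)
    c = cost_per_good_chip(10_000, 5, chips)
    print(f"{chips:3d} chips/wafer: yield {y:5.1%}, cost per good chip ${c:,.0f}")

# Roughly 95% yield and ~$105 per good chip at 100 chips/wafer,
# but ~61% yield and ~$1,650 per good chip at 10 chips/wafer:
# the 10x bigger chip comes out about 16x more expensive here.
```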

There are also other issues where the transistors and wires in different areas of the wafer are not exactly the same (transistors near the edge of the wafer might have lower resistance, for example). This isn’t a big problem for small chips, since each chip comes from a single small area of the wafer, but a large chip might have some parts from one area and other parts from another. Mixing low-resistance and high-resistance transistors on the same chip can cause problems, which further reduces the number of good chips you get.

There are also a lot of design issues with larger chips. As the chip gets larger, you get more lag sending data from one corner of the chip to the opposite corner, so you have to spend more time and money designing ways to mitigate that lag. This is a bigger problem for CPUs, since they have to work through lists of millions or billions of instructions in order, so a small amount of lag on each instruction adds up very quickly.
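
A quick back-of-the-envelope sketch of why that distance matters: even assuming signals travel at half the speed of light (a generous, made-up figure; real on-chip wires are much slower because of RC delay), a signal can only cross a few centimeters per clock cycle at modern clock speeds.

```python
# Rough back-of-the-envelope: how far can a signal travel in one clock cycle?
# Assumed numbers for illustration; real on-chip wire delay is dominated by
# RC effects and is usually much worse than this optimistic estimate.

SPEED_OF_LIGHT_M_S = 3.0e8
SIGNAL_SPEED = 0.5 * SPEED_OF_LIGHT_M_S   # assume signals move at ~half of c

for clock_ghz in (1, 3, 5):
    cycle_time_s = 1 / (clock_ghz * 1e9)
    reach_mm = SIGNAL_SPEED * cycle_time_s * 1000
    print(f"{clock_ghz} GHz: at best ~{reach_mm:.0f} mm per cycle")

# At 5 GHz the optimistic limit is ~30 mm per cycle, comparable to the width
# of a large die, so crossing a big chip can eat a whole cycle or more even
# before realistic wire delay is taken into account.
```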

It’s less of an issue for something like a GPU, where you have a shorter list of instructions and lots of cores all working on the same instructions in parallel. If a piece of dust lands on one of those cores, you can just disable that core and sell the chip as a lower-core-count GPU. And if one core has resistance issues that make it run slower, you can slow down the whole GPU and sell it as a slower model. This is why only the GPUs with the full core count and the highest clocks are really expensive: they need a nearly perfect die.
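
Here’s a small sketch of how that kind of binning might look: test each die, count the working cores and the highest stable clock, then sell it under whatever product tier it fits. The tier names and cutoffs below are entirely made up for illustration, not any vendor’s real scheme.

```python
from dataclasses import dataclass

@dataclass
class Die:
    working_cores: int   # cores that passed defect testing
    stable_mhz: int      # highest clock all working cores sustain

def bin_die(die: Die) -> str:
    """Assign a hypothetical product tier based on test results.
    Tier names and cutoffs are invented for illustration."""
    if die.working_cores >= 80 and die.stable_mhz >= 2000:
        return "flagship"    # near-perfect die: full cores, full speed
    if die.working_cores >= 72:
        return "mid-tier"    # a few cores fused off, sold as a smaller GPU
    if die.working_cores >= 60:
        return "budget"      # more cores disabled and/or lower clocks
    return "scrap"           # too many defects to sell at all

print(bin_die(Die(working_cores=84, stable_mhz=2100)))  # flagship
print(bin_die(Die(working_cores=76, stable_mhz=1900)))  # mid-tier
print(bin_die(Die(working_cores=58, stable_mhz=1800)))  # scrap
```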
