What is physically different between a high-end CPU (e.g. Intel i7) and a low-end one (Intel i3)? What makes the low-end one cheaper?




In many cases, they are the same physical chip. The i3 just has defective sections turned off or slowed down. It’s cheaper because selling a partially functional chip at a discount is better than just throwing it away.

Usually i3/i5 chips are dies that aren’t good enough, or have damage, so they can’t be sold as an i7. Design-wise they are usually the same. Every die is tested, and depending on its properties it could become a desktop or a mobile chip, with 4 to 8 cores, with or without an iGPU. Usually the parts that aren’t used are disconnected from the rest of the die. There have been some rare cases where they didn’t do this and, if you got lucky, you could unlock extra CPU/GPU capability via firmware.

On a silicon wafer, the center usually yields the best quality; toward the edges, quality is usually lower, resulting in more CPUs where not all cores work.

The process to make computer chips isn’t perfect. Certain sections of the chip may not function properly.

They make dozens of chips on a single “wafer”, and then test them individually.

Chips that have defects or issues, like 1/8 cores not functioning, or a Cache that doesn’t work, don’t go to waste. They get re-configured into a lower tier chip.

In other words, a 6-core i5 is basically an 8-core i7 that has 2 defective cores.

(Just for reference, these defects and imperfections are why some chips overclock better than others. Every chip is slightly different.)
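The "reconfigure a defective die into a lower tier" idea above amounts to a simple decision rule. Here is a minimal sketch of that rule in Python; the core counts and tier names are illustrative, not Intel's actual binning criteria:

```python
# Hypothetical sketch of core-count binning: a die designed with 8 cores
# is assigned a product tier based on how many cores pass testing.
def bin_by_working_cores(cores_working):
    """Map the number of functional cores to an illustrative product tier."""
    if cores_working >= 8:
        return "i7 (8 cores)"
    elif cores_working >= 6:
        return "i5 (6 cores)"   # e.g. an 8-core die with 2 defective cores
    elif cores_working >= 4:
        return "i3 (4 cores)"
    else:
        return "scrap"          # too many defects to sell at all

# An 8-core die where 2 cores failed testing becomes a 6-core part:
print(bin_by_working_cores(6))  # i5 (6 cores)
```

The point of the sketch is that the die is the same either way; only the test result decides which box it ships in.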

Most replies seem to focus on a process often called binning: disabling and rerouting defective or underperforming parts of a chip so it “acts” as a lower-spec configuration.

However, this only works for specific lines of processors – in GPUs you often see this happening between the top-tier and sub-top tier of a line.

For the rest of the range, chips are actually designed to be physically different: most chips are modular, and cores and caches can be resized and modified independently during the design process. Cache in particular takes up a lot of space on the die, but is easily scalable to fit lower specs. Adding or removing caches, cores and other more “peripheral” circuits can lower the size (and failure rate) of chips without needing to design completely different chips.


edit: use proper term, no idea where I got “harvesting”, binning is def. the proper term.

Imagine a fancy bakery. Their main customers expect nothing but the best cakes possible, and they make them.

Every so often, they’ll mess up the frosting, and the entire cake isn’t worth the price. So instead of throwing the cake away, they’ll repackage it and sell it cheaper instead.

Non ELI5:

A CPU is just a lot of silicon transistors. And I mean a LOT. Billions, even. Imagine a sausage made of silicon, about as wide as your palm, which then gets sliced into thin discs called wafers. There are multiple chips on one wafer.

Silicon isn’t perfect, and often there’ll be a crack or imperfection right on top of a chip. So instead of throwing the whole wafer away, they’ll use what they have and sell it cheaper. Making these wafers is ridiculously expensive, so they have to use every little bit they can.


Most of the answers in this thread are incorrect, at least for the processors mentioned by OP. Intel Core processors vary in core count and cache size across the range, if not in actual architecture.

Throughout history there have occasionally been devices where a high-end and a low-end part were the same silicon, just with features disabled. That does not apply to the chips mentioned here.

If you were to crack open the chip and look at the inside in one of [these pictures](https://i.stack.imgur.com/Jl16e.jpg), you’d see that they are packed more full as the product tiers increase. The chips kinda look like shiny box regions in that style of picture.

If you cracked open some of the 10th generation dies, in the picture of shiny boxes perhaps you would see:

* The i3 might have 4 cores, and 8 small boxes for cache, plus large open areas
* The i5 would have 6 cores and 12 small boxes for cache, plus fewer open areas
* The i7 would have 8 cores and 16 small boxes for cache, with very few open areas
* The i9 would have 10 cores, 20 small boxes for cache, and no empty areas

The actual usable die area is published and unique for each chip. Even when they fit in the same socket, the lower-end chips have big vacant areas while the higher-end chips are packed full.

Imagine the job you want your processor to do is eating food. You know how I eat faster than you do? Part of that is having a bigger mouth (L1 cache), using bigger silverware (L2 cache), and having a larger plate (L3 cache). It’s also about making sure that I’m taking the right size bites, constantly chewing because I make sure that the next bite is ready to go into my mouth by the time I’m done chewing (hyperthreading and pipelining).

Guys, binning and architecture are not the same thing. Binning is used to determine the clock speed of a chip within the same family. The differences between i3 and i7 are not just limited to core/thread count. It’s also architectural. These have different features on the die that determine their capabilities.

Silicon area is expensive. Chip design is expensive. To make the numbers work, Intel makes building blocks of chip parts and can “print” different versions. A 4-core chip takes up half the wafer area of an 8-core chip and thus costs much less. There is a fixed cost to process one wafer, so if you can squeeze more CPUs onto a wafer, each is cheaper to make. This is different from having a 16-, 12-, 10- or 8-core design within a family where “bad” cores are marked unused and the chip is sold with a lower core count. Those chips still take up the silicon area of a 16-core chip, but instead of wasting them, they sell them with fewer cores.
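The economics here reduce to simple arithmetic: a wafer has a roughly fixed processing cost, so smaller dies mean more chips per wafer and a lower cost each. A rough sketch, where the wafer cost and die areas are made-up round numbers, not real Intel figures:

```python
# Rough wafer economics: fixed cost per wafer, so cost per chip scales
# with die area. All numbers are illustrative.
import math

WAFER_DIAMETER_MM = 300          # standard wafer size
WAFER_COST = 10_000              # hypothetical fixed cost to process one wafer

def chips_per_wafer(die_area_mm2):
    """Crude estimate: usable wafer area divided by die area.
    (Ignores edge losses and scribe lines for simplicity.)"""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area // die_area_mm2)

for name, area in [("4-core die", 90), ("8-core die", 180)]:
    n = chips_per_wafer(area)
    print(f"{name}: ~{n} per wafer, ~${WAFER_COST / n:.2f} each")
```

Halving the die area roughly doubles the chips per wafer, which roughly halves the manufacturing cost per chip.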

The other cost reduction is “binning”, where they test the chip at its full rated speed. If it does not pass, they test it at a slower clock speed, and keep dropping the speed until it passes. These lower-clocked parts are sold cheaper because they can’t run at their design speed.
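That step-down testing can be sketched as a short loop. The rated speeds and the pass criterion below are hypothetical placeholders, just to show the shape of the process:

```python
# Illustrative speed binning: test a chip at its design clock and keep
# stepping down until it passes, then sell it rated at that speed.
def bin_by_clock(passes_at, rated_speeds=(4.0, 3.6, 3.2, 2.8)):
    """passes_at: the highest clock (GHz) this particular die can sustain.
    rated_speeds: the speed grades offered, highest first (made up)."""
    for speed in rated_speeds:     # start at the full design speed
        if passes_at >= speed:     # "does it pass the test at this clock?"
            return speed           # sold at the first speed grade it passes
    return None                    # fails even the slowest bin: scrapped

print(bin_by_clock(3.7))  # a die that can't hold 4.0 GHz is sold as a 3.6 part
```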

There are lots of ways to save money once you’ve made the chip, but silicon area is the main driving factor. That is why they are always shooting for smaller transistor sizes: not just because smaller transistors can reduce power use, but because a smaller process means they can put more chips on a wafer.

Other than arbitrary pricing in a non-competitive market situation, the main thing that affects CPU pricing is the **number of non-defective CPUs per wafer**.

CPU manufacturing starts with a big cylinder of silicon. That cylinder is cut into discs, or wafers. That wafer is then engraved (via secret magics) with as many CPUs as they can fit. They can’t make bigger and bigger wafers, because that original cylinder of silicon still has to obey the laws of physics and thermodynamics and cools differently in the middle vs the outside. Imagine the difficulty of making a cupcake vs. a giant cake, where if you don’t do it juuuuuuust right, the outside will be burnt while the inside is still raw.

All else being equal, the more features a CPU has, the more transistors it requires, the more space it takes up on a wafer. More space = fewer CPUs per wafer. Furthermore, the more transistors a given CPU has, the greater chance of a defect being in there somewhere. Defects => fewer CPUs they can sell per wafer => higher costs.
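The “more transistors means more chance of a defect somewhere” point is often captured by the classic Poisson yield model, Y = e^(−D·A), where D is the defect density and A is the die area: the probability of a defect-free die falls exponentially with area. A quick sketch, with a made-up defect density:

```python
# Poisson yield model: the probability a die has zero defects falls
# exponentially with die area. The defect density here is illustrative.
import math

DEFECTS_PER_MM2 = 0.001   # hypothetical defect density (defects per mm^2)

def yield_fraction(die_area_mm2, defect_density=DEFECTS_PER_MM2):
    """Fraction of dies expected to be defect-free: Y = exp(-D * A)."""
    return math.exp(-defect_density * die_area_mm2)

small, large = 90, 180    # illustrative die areas in mm^2
print(f"small die yield: {yield_fraction(small):.1%}")   # higher yield
print(f"large die yield: {yield_fraction(large):.1%}")   # lower yield
```

So a bigger die loses twice: fewer candidates fit on the wafer, and a smaller fraction of those candidates come out defect-free.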

The main high-level feature differences between i3, i5, and i7 CPUs are clock speed, # of CPU cores, and size of the cache. # of cores and cache are basically directly responsible for the size of the CPU on a wafer. An i3 with 2 cores and 256K of cache will take up far, far less space than an i7 with 8 cores and 8MB of cache. Less space means more CPUs per wafer means less cost per CPU.

Others have touched on the idea of binning, where an i7 with 2 of its 8 cores defective is sold as a lower-core-count i5 or something like that, but that’s really secondary. Being able to make an i5 out of a partially defective i7 helps them recover waste from a wafer full of i7s, but that’s far, far less important than being able to get twice as many i5s out of a single wafer of non-defective chips in the first place. As their manufacturing process improves, the defect rate gets lower and lower, and they wouldn’t have enough defective CPUs to supply the more price-conscious end of the market. Binning is much more likely to be used to sell lower-clocked CPUs within the same general class.