Why does tech increase size by doubling?


Example: most SSD sizes go 1, 2, 4, 8, 16, 32, 64, …


3 Answers

Anonymous

Binary addressing. In binary, every time you add an extra digit, you double the range.

In decimal, every time you add a digit you increase the range by 10. So for instance one digit, `X` can be 0 to 9. Two digits, `XX` can be 0 to 99. Three digits are 0 to 999. And so on.
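The decimal pattern above is easy to check in code (a trivial sketch):

```python
# Each extra decimal digit multiplies the number of representable
# values by 10: d digits cover 0 through 10**d - 1.
for d in (1, 2, 3):
    print(f"{d} digit(s): 0 to {10**d - 1}")
```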

Computer memory has address lines, which are used to indicate which byte is being read or written. Every time you add another line you double the number of bytes that can be addressed.
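The same idea in binary, sketched quickly: n address lines can select 2^n distinct locations, so each added line doubles the range.

```python
# n address lines can select 2**n distinct byte locations,
# so every added line doubles the addressable range.
def addressable_bytes(address_lines):
    return 2 ** address_lines

for n in (8, 9, 10):
    print(f"{n} lines -> {addressable_bytes(n)} bytes")  # 256, 512, 1024
```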

In theory, you could have a chip with 1000 bytes of memory rather than 1024, but then you’d have to come up with something sensible to do when somebody tries to access a nonexistent position like 1011 (addresses 1000 through 1023 are valid bit patterns with no memory behind them). Might as well not deal with that trouble.
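To illustrate the point (a hypothetical sketch, not how real memory controllers are built): a 1000-byte chip needs an explicit range comparison, while a 1024-byte chip can treat every 10-bit pattern as valid.

```python
def is_valid_1000(addr):
    # A 1000-byte chip must reject addresses 1000..1023 even though
    # they fit in the same 10 address bits -- extra comparison logic.
    return 0 <= addr < 1000

def is_valid_1024(addr):
    # With 1024 bytes, any 10-bit pattern is a real location, so
    # masking to 10 bits is the only "check" needed.
    return addr == (addr & 0b1111111111)

print(is_valid_1000(1011), is_valid_1024(1011))  # False True
```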

Anonymous

For SSDs specifically (and mass storage devices like USB drives and hard drives), there is no particular technical reason. People tend to be familiar with these numbers from CPU/RAM sizes, so manufacturers use them. In reality they can and do make other values like [3TB](https://www.bestbuy.com/site/wd-blue-3tb-internal-sata-hard-drive-for-desktops/9312085.p?skuId=9312085), and are free to vary the exact number of bytes on disk depending on whatever the real constraints are (e.g. how small or reliable the parts can be made at the target cost).

CPU word sizes usually come in powers of 2, but again there isn’t really a fundamental technical reason for that. They are kind of stuck coming in multiples of 8 because we decided early on that 8 bits was special, so 8 and 16 made sense, but we could easily have had 24 and 48 between 32 and 64, and in fact a few computers did use those. Some very special-purpose processors used non-multiples of 8 when they did not need to co-operate with engineers who assume certain data types must always come in chunks of 8 bits, e.g. this [14-bit microcontroller](https://www.mouser.com/Semiconductors/Embedded-Processors-Controllers/Microcontrollers-MCU/_/N-a85i8?P=1yrfw6t).

There may have been some mild advantages to the numbers we chose, but they come from the larger context: what are the design tools like, how will these sizes interact with existing code when re-compiled for this architecture, how will it interact with binaries built for other architectures, what kind of production advantages/disadvantages do we get, … there’s a huge ecosystem of computer hardware and marketing that determines what gets made.

For RAM there is more of a reason. RAM is addressed in powers of 2 for the reason stated by u/dale_glass: it allows memory addresses to be a whole number of bits, so you don’t have to do any kind of numerical inequality checks to make sure you haven’t asked for more memory than exists. This helps simplify the circuitry and makes it faster. But we do know how to make RAM totals that aren’t a power of 2: pair a 4GB module with an 8GB module in one computer to get 12GB total. The operating system ends up doing the math to make sure you don’t go over 12GB when looking up memory addresses.
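A rough sketch of that bookkeeping (hypothetical; real memory controllers remap and interleave, but the bounds math is the same idea):

```python
GB = 1024 ** 3
MODULES = (4 * GB, 8 * GB)  # a 4GB stick plus an 8GB stick

def locate(addr):
    """Map a flat physical address to (module index, offset within module)."""
    offset = addr
    for i, size in enumerate(MODULES):
        if offset < size:
            return i, offset
        offset -= size
    # The OS/controller must refuse addresses past the installed total.
    raise ValueError("address beyond the 12GB of installed RAM")

print(locate(4 * GB))  # first byte of the 8GB module: (1, 0)
```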

Anonymous

Most human cultures like to describe things in “powers of 10” because the *decimal* number system caught on really well and we like it that way. This ends up meaning that nice “round” numbers end up being things like 10, 100, 1000… or billions or trillions, etc.

Computers, however, are built to think in *binary* which makes “powers of 2” (like 2, 4, 8, 16, etc.) the nice “round” numbers for computer systems.

—————————————

The reason that matters when it comes to companies choosing SSD sizes and whatnot is that, even though customers might prefer a nice “round” decimal number, the hardware itself is designed for the *binary* that computers speak.

To store a file, your computer essentially has to know the “street address” of every single “bit” of a file when you go to put something onto the SSD and/or read it later; when you “open a file” your computer is essentially saying “All right, this file says it is stored from address-0123 up to address-4321, let’s take a look and copy that all to RAM.”

So, the people who make SSDs essentially have to design a “post-office” capable of understanding all of the “street addresses” that the computer will end up using to deliver mail to/from those addresses. If you build an SSD with, say, 50 addresses… then you at the very least need a “post-office” that can understand addresses from 01, 02… up to 50… but that same post-office design is likely able to handle any other 2-digit address… anything from 00-to-99. But what if you need 150 addresses? Well, that’s too many addresses for a 2-digit (00-99) “post-office” to handle, but a 3-digit (000-999) “post-office” could handle it no sweat! But that means that 150 addresses is a bit of a waste when you’re already set up to handle 999… so why not just tell the guys building the roads to put in a few more mailboxes?

That same process happens with actual SSD design… except, because computers think in *binary* the street addresses look a little different. Instead of counting as (decimal) 0,1,2,3,4,5,6,7,8,9,10,11,12,13…you get something like (binary) 0,1,10,11,100,101,110,111,1000,1001,1010,1011,1100,1101… In effect, this means that *binary* “post-offices” get 2x as many available addresses every time they upgrade to handle one extra digit in the address. (As compared to the *decimal* post-offices in the previous paragraph which got a 10x improvement.)
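The counting pattern described above, sketched in code (binary gains 2x per extra digit, decimal gains 10x):

```python
# Decimal vs binary "street addresses": the same numbers, written
# the way a computer counts them.
for n in range(14):
    print(f"decimal {n:2} = binary {n:b}")

# Binary needs more digits, but each digit is cheap to add:
print(len(format(999, "b")))  # 10 binary digits for 3 decimal digits
```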

So, essentially, SSDs grow as “powers of 2” because *binary* “post-offices” grow as “powers of 2” and adding more mailboxes is relatively cheap/easy* to do if you already have a “post-office” that can handle it.

*(At other periods in time, other design constraints meant that adding more mailboxes wasn’t always plausible at the right consumer price point. For example, HDDs (for a certain combination of recording-head and spinning-platter technology) could maybe only fit so many mailboxes so tightly on a platter, and so we end up seeing things like a 7TB HDD even though the “post-office” built for that HDD could handle 8TB.)