Like storage is: 4 GB, 8 GB, 16 GB, 32 GB, 64 GB… and so on

Like the processor type is: 32-bit, 64-bit…

Like the RAM is: 2 GB, 4 GB, 8 GB?


At its simplest, a computer is just a bunch of transistors that are either on or off. So everything is a multiple of those two states. This is a gross oversimplification, but at the most basic level, it is still about measuring if a data bit is on or off.

Strictly speaking it doesn’t need to be anymore, and it’s just a convention.

However, in the early days of computing this was not so. Powers of two have the characteristic that they may be expressed as a one followed by a string of zeroes in binary. So 8 in binary is 1000, and 32 is 100000, etc. Numbers one less than a power of two are represented by strings of all ones. What this means is, if you are storing information in the form of a bunch of switches which can only be on (one) or off (zero), each new switch added to the string of switches in your memory bank doubles the total amount of memory you have available. With 6 switches you can store any number from zero (000000) to 63 (111111); adding one more switch raises the maximum to 127, and so on.
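You can see this pattern directly with Python's built-in binary formatting, as a quick sketch:

```python
# Powers of two are a 1 followed by zeroes in binary.
for n in [8, 32, 64]:
    print(bin(n))    # 0b1000, 0b100000, 0b1000000

# Numbers one less than a power of two are all ones.
print(bin(63))       # 0b111111  -- six switches, max value 63
print(bin(127))      # 0b1111111 -- one more switch doubles the range
```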

Memory is usually measured in bytes, and each byte is a string of 8 binary bits. Each byte is capable of storing any number from 0 to 255, so a 16k memory cartridge (actually a 16,384 byte cartridge, as that’s the power of 2 involved) can store 16,384 8-digit strings of ones or zeroes.
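The arithmetic behind "16k is really 16,384" is just powers of two, which a couple of lines can verify:

```python
# A byte is 8 bits, so it can hold 2**8 = 256 distinct values (0..255).
assert 2 ** 8 == 256

# "16k" of memory is 16 * 1024 bytes, i.e. 2**14 = 16,384 bytes.
assert 16 * 1024 == 2 ** 14 == 16384
```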

What computers do is perform operations on those strings, and turn the results into something human-readable. For instance, early byte-per-character encodings (like extended ASCII) could store one of 256 characters in any byte of memory: each English letter, punctuation mark, digit, space, and a handful of control characters had a specific string of bits which any program using that encoding would translate into that character. Later standards like Unicode extended the same idea to cover Japanese Kanji and the rest of the world's scripts, using more than one byte per character where needed. In this way, that 16k memory cartridge could store 16,384 characters of plain text, minus whatever additional code was necessary to make the program reading it recognize it as a string of typed characters.
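A small illustration of the byte-per-character idea, using Latin-1 (a one-byte encoding) since Python exposes the raw byte values directly:

```python
# Each character maps to one byte; Latin-1 covers 256 possible characters.
text = "Hi!"
data = text.encode("latin-1")

print(list(data))              # [72, 105, 33] -- the numbers behind the text
print(data.decode("latin-1"))  # back to "Hi!"

# At one byte per character, a 16,384-byte cartridge holds 16,384 characters.
```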

All of this is to say something fairly simple: you can talk using just about anything, as long as the people speaking agree beforehand what each symbol will mean. We agreed that specific arrangements of on and off switches, or ones and zeroes, would translate into the characters we use to write our languages. And so, we can talk using just long strings of switches.

real eli5: computers need to process a lot of information and the best way to represent that information is with binary numbers. the reason binary is used is because it is the simplest way to represent numbers on a microchip

it’s kind of strange but you can represent any integer (e.g. 7, 23, -50, …) in binary using just zeros and ones. zeros and ones can also be thought of as “off” and “on”, or “closed” and “open.” when a single bit (a zero or a one) is stored in the computer’s memory, there is a physical component that opens and closes to store that information for later. you can think of it kind of like how a light switch can be either on or off, just much smaller. we may think of data, memory, and information as abstract concepts but in this case they are physical

imagine developing the same component, but one that could handle more than 0 and 1. maybe it could represent 0, 1, and 2. or even 0 through 9. these exist but they are unnecessarily complicated and more expensive to manufacture. this is why you see multiples of two. it is because information at the bottom level is read and written in binary

Computers operate in binary, lots and lots of tiny switches are either on (1) or off (0).

Normally you count decimal numbers: 111 = 1×10^2 + 1×10^1 + 1×10^0

In binary, the number 111 = 1×2^2 + 1×2^1 + 1×2^0, which is 7.
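The decimal and binary readings above can be checked with Python's `int(string, base)`:

```python
# "111" read as decimal vs read as binary:
assert int("111", 10) == 1 * 10**2 + 1 * 10**1 + 1 * 10**0 == 111
assert int("111", 2)  == 1 * 2**2  + 1 * 2**1  + 1 * 2**0  == 7
```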

For CPUs, the 32/64 denotes the size of CPU registers. This is the fundamental unit of integer size that the computer uses for operations.

Computers have a number of address lines for addressing memory, and the number of addresses those lines can express is a power of two. Adding one more address line doubles the amount of memory that can be addressed, which is why RAM goes 2 -> 4 -> 8 -> 16, etc.
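The doubling per address line is easy to tabulate, as a quick sketch:

```python
# Each added address line doubles the addressable memory.
for lines in [30, 31, 32, 33, 34]:
    print(lines, "address lines ->", 2 ** lines // 2 ** 30, "GiB")
# 30 -> 1, 31 -> 2, 32 -> 4, 33 -> 8, 34 -> 16: the familiar RAM sizes
```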

Storage technically doesn't have to follow this numbering, and most drives are labeled in decimal gigabytes (10^9 = 1,000,000,000 bytes) rather than binary gigabytes (2^30 = 1,073,741,824 bytes). They're close, but not exactly the same.
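The gap between the two conventions adds up; a few lines make it concrete:

```python
# Drive makers count in decimal; operating systems often count in binary.
GB  = 10 ** 9    # 1,000,000,000 bytes (the "GB" on the box)
GiB = 2 ** 30    # 1,073,741,824 bytes (the binary "gibibyte")

print(GiB - GB)                    # 73741824 bytes of difference per unit
print(round(500 * GB / GiB, 1))    # 465.7 -- a "500 GB" drive in binary units
```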