think of 64 and 32 bit as packages handled by a post office
a 64 bit package can contain FAR more information than a 32 bit package. it’s like the difference between a postcard and a book
the computer is the post office and spends an equal amount of time sending and receiving 32- and 64-bit packages, but because a 64-bit package contains far more info than a 32-bit one, it has to move far fewer packages
imagine sending the novel “War and Peace” by postcard instead of one book
Well, 8 bit gives you 2^8 = 256 unique values. If you use these as byte addresses, you can only address 256 bytes. 2^16 gives you 65,536 bytes, which was a massive upgrade.
32 bit allows you to address 4 gigabytes, so this is effectively your maximum RAM size. 64 bit allows us to smash through that limit.
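If it helps to see that math written out, here's a rough Python sketch of the 2^n arithmetic (nothing in it comes from the original answer, it's just illustration):

```python
# With n address bits you can name 2**n distinct byte locations.
for bits in (8, 16, 32, 64):
    addressable_bytes = 2 ** bits
    print(f"{bits}-bit addresses: {addressable_bytes:,} addressable bytes")

# 8-bit  -> 256 bytes
# 16-bit -> 65,536 bytes (64 KB)
# 32-bit -> 4,294,967,296 bytes (4 GB)
# 64-bit -> 18,446,744,073,709,551,616 bytes, vastly more than any real machine installs
```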
A computer “thinks” about one number at a time (not really true, but this is ELI5).
On an 8-bit computer, that number can only go up to 255. On a 16 bit computer, that number can go all the way up to 65,535. On a 32 or 64 bit computer, it can go much, much higher.
This limits a lot of things the computer can do. An 8 bit computer might only be able to show 256 (or fewer!) colors on-screen at a time, which is not very many. A 32 bit computer can show millions.
If the computer can only count to 255, it might only be able to hold 255 different things in memory at once (not very many!). 32-bit Windows could use a maximum of 4GB of RAM, because that’s how high it could count. 64-bit Windows could theoretically use *billions* of GB of RAM.
(This is all very simplified, 8-bit systems had lots of ways to count higher than 255. But again, this is the ELI5 version.)
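For the curious, the “how high can it count” numbers above fall straight out of 2^bits − 1; a quick sketch:

```python
# The biggest value a register of a given width can hold is 2**bits - 1.
for bits in (8, 16, 32, 64):
    print(f"{bits}-bit: counts up to {2**bits - 1:,}")

# 8-bit:  255
# 16-bit: 65,535
# 32-bit: 4,294,967,295
# 64-bit: 18,446,744,073,709,551,615
```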
“Bits” are just what we call digits in a number that uses base-2 (binary) instead of base-10 (decimal). In our normal decimal number system, a three digit number can hold a thousand different values, from 000 up to 999. Every time you add a digit, you get 10x as many values you can represent.
In base-2, every extra bit doubles the number of values you can represent. A single bit can have two values: 0 and 1. Two bits can represent four unique values:
00 = 0
01 = 1
10 = 2
11 = 3
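If you want to see the doubling for yourself, here's a small sketch that just enumerates every pattern for a given width (itertools is only used to list the combinations):

```python
from itertools import product

# Each extra bit doubles how many distinct patterns (values) exist.
for bits in (1, 2, 3):
    patterns = ["".join(p) for p in product("01", repeat=bits)]
    print(f"{bits} bit(s): {len(patterns)} values -> {patterns}")
```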
When we talk about a computer being “8-bit” or “64-bit”, we mean the number of binary digits it uses to represent one of two things:
1. The size of a CPU register.
2. The size of a memory address.
On 8- and 16-bit machines, it usually just means the size of a register, and addresses can be larger (it’s complicated). On 32- and 64-bit machines, it usually means both.
CPU registers are where the computer does actual computation. You can think of the core of a computer as a little accountant with a tiny scratchpad of paper blindly following instructions and doing arithmetic on that scratchpad. Registers are that scratchpad, and the register size is the number of bits the scratchpad has for each number. On an 8-bit machine, the little accountant can effectively only count up to 255. To work with larger numbers, they would have to break them into smaller pieces and work on them a piece at a time, which is much slower. If their scratchpad had room for 32 bits, they could work with numbers up to about 4 billion with ease.
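As a rough illustration of “working a piece at a time” (not how any particular CPU literally does it, just the idea, with a made-up helper name), here's a sketch of adding two 16-bit numbers using only 8-bit additions:

```python
def add16_on_8bit(a, b):
    # Add two 16-bit numbers one byte at a time, carrying between steps,
    # just like column addition on paper.
    a_lo, a_hi = a & 0xFF, a >> 8        # split into low/high bytes
    b_lo, b_hi = b & 0xFF, b >> 8

    low = a_lo + b_lo                    # first 8-bit addition
    carry = low >> 8                     # did it overflow past 255?
    high = (a_hi + b_hi + carry) & 0xFF  # second 8-bit addition, plus the carry

    return (high << 8) | (low & 0xFF)

print(add16_on_8bit(300, 500))   # 800 - two small steps instead of one big one
```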
When the CPU isn’t immediately working on a piece of data, it lives in RAM, which is a much larger storage space. A computer has only a handful of registers but can have gigabytes of RAM. In order to get data from RAM onto registers and vice versa, the computer needs to know *where* in RAM to get it.
Imagine if your town only had a single street that everyone lived on. To refer to someone’s address, you’d just need a single number. If that number was only two decimal digits, then your town couldn’t have more than 100 residents before you lose the ability to send mail precisely to each person. The number of digits determines how many *different* addresses you can refer to.
To refer to different pieces of memory, the computer uses addresses just like the above example. The number of bits it uses for an address determines the upper limit for how much memory the computer can take advantage of. You could build more than 100 houses on your street, but if envelopes only have room for two digits, you couldn’t send mail to any of them. A computer with 16-bit addresses can only use about 64k of RAM. A computer with 32-bit addresses can use about 4 gigabytes.
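Here's a toy model of the street-address idea, with memory as one long row of numbered byte “houses” (the 16-bit figure is just for illustration):

```python
ADDRESS_BITS = 16
ram = bytearray(2 ** ADDRESS_BITS)   # 65,536 byte "houses" on one long street

address = 0x1234        # any 16-bit address fits in four hex digits
ram[address] = 42       # deliver a value to that house
print(ram[address])     # read it back: 42

# There is no way to write an address for house number 70,000: it doesn't
# fit in 16 bits, so that memory might as well not exist.
```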
So bigger registers and addresses let a computer work with larger numbers faster and store more data in memory. So why doesn’t every computer just have huge registers and addresses?
The answer is cost. At this level, we’re talking about actual electronic hardware. Each bit in a CPU register requires dedicated transistors on the chip, and each additional bit in a memory address requires more wires on the bus between the CPU and RAM. Older computers had smaller registers and busses because it was expensive to make electronics back then. As we’ve gotten better at making electronics smaller and cheaper, those costs have gone down, which enables larger registers and busses.
At some point, though, the usefulness of going larger diminishes. A 64-bit register can hold values up to about 18 quintillion, and a 64-bit address could (I think) uniquely point to any single letter in any book in the Library of Congress. That’s why we haven’t seen much interest in 128-bit computers (though there are sometimes special-purpose registers that size).
Electrical engineer, here. This is going to be more of an ELI12 answer.
So, let’s count in binary!
0000 is 0.
0001 is 1
0010 is 2
0011 is 3
0100 is 4
0101 is 5
0110 is 6
0111 is 7
1000 is 8
And so on. That means the rightmost bit is our ‘1’s place, the next bit to the left is our ‘2’s place, then the ‘4’s place, then the ‘8’s place. This is with 4 bits, where the highest we can count is 1111, which is 8+4+2+1 = 15. If we count from 0000,0000 to 1111,1111 we can count to 255.
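Here's that place-value idea as a tiny sketch (the helper name is made up):

```python
def bits_to_value(bit_string):
    # Add up each bit times its place value: 1s, 2s, 4s, 8s, ...
    total = 0
    for place, bit in enumerate(reversed(bit_string)):
        total += int(bit) * (2 ** place)
    return total

print(bits_to_value("1111"))      # 8 + 4 + 2 + 1 = 15
print(bits_to_value("11111111"))  # 255, the most 8 bits can count to
```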
So, when it comes to computers, picture a library where each page of a book receives a number. A 4 bit computer can count up to 16 pages (because 0000, or 0, is a number). An 8 bit computer can count up to 256 pages, and so on and so forth.
You still have to connect the physical hardware that can store them, but a 4 bit or 8 bit computer can only count up to 16 or 256 pages, even if you attach more hardware. A 32 bit computer can count 4,294,967,296 pages, which is a really big library. A 64 bit computer can count 18,446,744,073,709,551,616 pages.
That’s for the memory controller, which manages a library. The technical term is actually ‘memory pages’. But there are other instances where you’ll hear things measured in bit size.
…
An 8-bit number is one that can be between 0 and 255 (or, for signed 8-bit integers, -128 to 127). So if you’re doing math on 8-bit integers, 120 + 10 = -126 because it ‘loops back’. https://www.cs.auckland.ac.nz/references/unix/digital/AQTLTBTE/DOCU_031.HTM this explains more about bit size and integer (whole number), float (decimal number), and character (numbers that we translate to letters) types.
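If you want to see that loop-back happen, here's a rough sketch of how a two's-complement 8-bit result wraps (the helper is hypothetical, not from the linked doc):

```python
def wrap_to_int8(value):
    # Keep only the low 8 bits, then reinterpret as a signed number
    # in the range -128..127, the way an 8-bit register would.
    value &= 0xFF
    return value - 256 if value >= 128 else value

print(wrap_to_int8(120 + 10))   # -126: the result "loops back" past 127
```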
So, 32 bit and 64 bit computers refer to the memory controller. 8 and 16 bit video game consoles refer to the types of numbers they are best at counting with (though an 8 bit processor can count higher than 256 by using tricks! https://forums.nesdev.org/viewtopic.php?t=22713 )
…
You’ll also often hear about bit size with audio, i.e. 8 bit, 16 bit, 24 bit, and 32 bit digital audio. This refers to the number of distinct levels of volume that an audio signal can have.
Take a deep breath and at a constant volume go “EEEEEEEEEEE-AAAAAAAAA-OOOOOOOO”. Then stop. Then go “EEEEEEEEEEE-AAAAAAAAA-OOOOOOOO”. Then stop. This would (for purposes of explanation) be encoded as 1 bit audio, because it only has two possible volume levels even if it can have different pitches/frequencies to it.
Now repeat that exercise, but do your first EEEEEEEEEEE-AAAAAAAAA-OOOOOOOO at normal volume. Then your second quieter, then your third louder. This is 2 bit audio (00, 01, 10, 11) because you have four distinct volumes.
8 bit audio has 256 distinct levels of volume; 16 bit, 24 bit, and 32 bit have more distinct levels. (This is separate from the maximum frequency they can capture, or the highest pitch sound that can be recorded or reproduced, which has to do with sample rate and Nyquist frequencies. The Nyquist frequency is the highest frequency that can be reliably recorded. It is 1/2 the sample rate, so a 44.1kHz sample rate can only record/reproduce up to 22.05kHz sounds, which is pretty high pitched!)
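The numbers in that paragraph are just 2^bits volume levels and half the sample rate; a quick sketch:

```python
# Distinct volume levels for a given bit depth, plus the Nyquist limit.
for bits in (1, 2, 8, 16, 24):
    print(f"{bits}-bit audio: {2**bits:,} distinct volume levels")

sample_rate_hz = 44_100
print(f"Nyquist frequency at {sample_rate_hz} Hz: {sample_rate_hz / 2:.0f} Hz")
```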
…
You’ll hear about video signals encoded as 16 bit, 24 bit, 32 bit, and more. This is the same thing. 24 bit video is encoded as the red, green, and blue channels each having 8 bits, so red = 0 to 255, green = 0 to 255, and blue = 0 to 255. (32 bit adds a transparency layer of 0 to 255.) You can have 30 bit, where each channel gets 10 bits, so red = 0 to 1023, green = 0 to 1023, and blue = 0 to 1023, and then 36 bit, where each channel gets 12 bits, and so on and so forth.
More video bits means more distinct colors. Very high bit depths help artists work.
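Same arithmetic for color: 2^bits levels per channel, three channels multiplied together (a sketch, not tied to any particular video standard):

```python
for bits_per_channel in (8, 10, 12):
    per_channel = 2 ** bits_per_channel
    total_colors = per_channel ** 3
    print(f"{3 * bits_per_channel}-bit color: {per_channel:,} levels per channel, "
          f"{total_colors:,} total colors")

# 24-bit -> 16,777,216 colors; 30-bit -> ~1.07 billion; 36-bit -> ~68.7 billion
```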
And lastly, there is the use of bits with communication bandwidth. This gets highly specific to the thing being discussed. https://www.techpowerup.com/forums/threads/explain-to-me-how-memory-width-128-192-256-bit-etc-is-related-to-memory-amount.170588/ this thread explains it in context of graphics card memory. Edit: I can answer some specific questions about this if anyone’s curious, but it can get complicated! 🙂
Let’s say you want a savings account at the bank. There are two options:
The 32 bit option lets you have 4 digits for your balance. The most money you can have is $99.99. If you deposit $100, the extra penny is lost.
The 64 bit option lets you have 8 digits for your balance. The most money you can have is $999,999.99. If you deposit $1,000,000, the extra penny is lost.
64 bits lets you store more accurate numbers than 32 bits.
There’s way more to it than that, but that’s the ELI5 explanation.
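The bank-account idea maps loosely onto 32-bit vs. 64-bit floating-point numbers; here's a rough sketch using Python's standard struct module to squeeze a value into 32 bits and watch the “penny” disappear (the helper name and dollar figures are made up for illustration):

```python
import struct

def round_trip_float32(x):
    # Store a Python float (64-bit) in 32-bit form and read it back,
    # showing what a 32-bit "account" actually keeps.
    return struct.unpack("f", struct.pack("f", float(x)))[0]

balance_cents = 100_000_000          # $1,000,000.00 in cents
print(round_trip_float32(balance_cents + 1) == round_trip_float32(balance_cents))
# True with 32 bits: the extra "penny" is lost to rounding.
print((balance_cents + 1.0) == float(balance_cents))
# False with 64 bits: the penny survives.
```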
Imagine how big numbers can get with 5 digits. All the way to 99999! Now imagine how big numbers get with 10 digits. 9999999999! The second number is so much bigger! It’s actually about 100,000 times bigger than 99999.
A computer needs to put a number on each thing. With 32 bits (32 binary digits), computers can put numbers on about 4 billion things. With 64 bits, computers can put numbers on about 18 BILLION BILLION things.
When computers can put numbers on lots of things, they can do lots of stuff. This makes them faster since they don’t have to stop doing one thing to start doing another thing.
This question was asked 7 hours after you asked. I liked user Muffinshire’s explanation the most:
“Computers are like children – they have to count on their fingers. With two “fingers” (bits), a computer can count from 0 to 3, because that’s how many possible combinations of “fingers” up and down there are (both down, first up only, second up only, both up). Add another “finger” and you double the possible combinations to 8 (0-7). Early computers were mostly used for text so they only needed eight “fingers” (bits) to count to 255, which is more than enough for all the letters in the alphabet, all the numbers and symbols and punctuation we normally encounter in European languages. Early computers could also use their limited numbers to draw simple graphics – not many colours, not many dots on the screen, but enough.
So if you’re using a computer with eight fingers and it needs to count higher than 255, what does it do? Well, it has to break the calculations up into lots of smaller ones, which takes longer because it needs a lot more steps. How do we get around that? We build a computer with more fingers, of course! The jump from 8 “fingers” to 16 “fingers” (bits) means we can count to 65,535, so it can do big calculations more quickly (or several small calculations simultaneously).
Now as well as doing calculations, computers need to remember the things they calculated so they can come back to them again. It does this with its memory, and it needs to count the units of memory too (bytes) so it can remember where it stored all the information. Early computers had to do tricks to count bytes higher than the numbers they knew – an 8-bit computer wouldn’t be much use if it could only remember 256 numbers and commands. We won’t get into those now.
By the time we were building computers with 32 “fingers”, the numbers it could count were so high it could keep track of 4.2 billion pieces of information in memory – 4 gigabytes. This was plenty, for a while, until we kept demanding the computers keep track of more and more information. The jump to 64 “fingers” gave us so many numbers – 18 quintillion, or for memory space, 16 billion gigabytes! More than enough for most needs today, so the need to keep adding more “fingers” no longer exists.”
If you could only do 1-digit math, you could calculate things like 5 x 3, but to calculate 2-digit problems you would have to split them into single-digit steps: 12 x 45 = 10 x 40 + 10 x 5 + 2 x 40 + 2 x 5.
If you can calculate 2-digit math, you could do 12 x 45 directly, but 4-digit problems need to be split into steps.
Now for a 32-bit computer, it can calculate problems up to 32 bits in size (about 10 digits) immediately, but bigger problems need to be split into steps. A 64-bit computer can do problems up to twice as large in a single step.
For small problems it doesn’t make a difference. 4 x 5 will be done in a single step on any computer, no matter if it’s 8, 16, 32 or 64 bits. For bigger calculations it does get important.
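Here's a sketch of the “split it into steps” idea in binary: multiplying two 64-bit numbers using only 32-bit-sized pieces, exactly like the 12 x 45 decomposition above but in base 2^32 (the function name is made up):

```python
MASK32 = 0xFFFFFFFF

def mul64_in_32bit_pieces(a, b):
    # Split each operand into a high and low 32-bit half, form the four
    # partial products, then shift and add them back together.
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    return ((a_hi * b_hi) << 64) + ((a_hi * b_lo + a_lo * b_hi) << 32) + (a_lo * b_lo)

x, y = 0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF
print(mul64_in_32bit_pieces(x, y) == x * y)   # True: four small steps, one big answer
```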
Another important thing is memory addressing. The way RAM works is that each part of memory has a number address. A processor that can only handle 2 digit numbers could only recall 100 parts of memory. Similarly, a 32 bit chip is limited to about 4 GB of RAM. That’s the main reason why pretty much every computer nowadays is 64 bits.
There are still some old programs written to run on 32 bits which have the issue that they can’t use more than 4 GB of RAM, even if they’re running on a 64 bit machine with far more available.