A 64-bit OS is one designed to run on a 64-bit CPU.
A 32-bit OS is one designed to run on a 32-bit CPU.
So the real question is: what's the difference between a 32-bit and a 64-bit CPU?
And to be honest, it depends. It refers to the bit width of *some* part of the CPU. Often, it matches the length of numbers the CPU can operate on in a single instruction. In that case, a 32-bit CPU will be able to add one 32-bit number to another 32-bit number in a single instruction, while a 64-bit CPU can do the same with 64-bit numbers.
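To see what that width means in practice, here's a minimal sketch in Python. Python's own integers are arbitrary-precision, so the example masks results to mimic a fixed register width; the function name and the specific numbers are just illustrative.

```python
# A minimal sketch of fixed-width arithmetic. Plain Python ints have
# no size limit, so we mask the result to mimic a CPU register that
# is only `bits` wide (illustrative helper, not a real CPU model).
def add_fixed_width(a, b, bits):
    """Add two unsigned numbers as a CPU with bits-wide registers would."""
    mask = (1 << bits) - 1
    return (a + b) & mask

big = 3_000_000_000  # fits in 32 bits unsigned, but big + big does not

# On a 32-bit register the true sum (6,000,000,000) exceeds 2**32,
# so the result wraps around; on a 64-bit register it fits fine.
print(add_fixed_width(big, big, 32))  # 1705032704 (wrapped)
print(add_fixed_width(big, big, 64))  # 6000000000 (exact)
```

The wrap-around in the 32-bit case is exactly why larger numbers need either wider registers or multiple instructions on a narrower CPU.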
It also often relates to the memory address space available to the CPU. A 32-bit CPU uses 32-bit values as memory addresses, meaning that whenever it wants to look something up in memory, the location it wants to read is identified by a 32-bit number. Around 4 billion different 32-bit numbers are possible, so the CPU can address around 4 billion bytes, or 4GB, of memory.
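The 4GB figure falls straight out of the arithmetic. A quick back-of-the-envelope check, assuming byte-addressable memory (one byte per address):

```python
# How much memory can each address width reach, assuming one byte
# per address (the usual byte-addressable layout)?
addressable_32 = 2 ** 32  # number of distinct 32-bit addresses
addressable_64 = 2 ** 64  # number of distinct 64-bit addresses

print(addressable_32)            # 4294967296 bytes, i.e. ~4 billion
print(addressable_32 // 2 ** 30)  # 4 -> exactly 4 GiB
print(addressable_64 // 2 ** 60)  # 16 -> 16 EiB, far more than any real machine uses
```

In practice 64-bit CPUs don't wire up all 64 address bits, but even the subset they do use dwarfs the 4GB ceiling of a 32-bit address space.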
But sometimes it relates to neither of those. If you’re old enough to remember the Nintendo 64 games console, for example, it was *called* a 64-bit CPU, but it worked with 32-bit addresses and could do arithmetic on 32-bit numbers. However, unlike other contemporary games consoles, it had instructions which allowed it to do arithmetic on two *pairs* of 32-bit numbers at a time. So while adding A and B together, it could *also* add C and D together.
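The "two pairs at once" trick can be sketched like this: pack two 32-bit values into one 64-bit word, and a single 64-bit addition then processes both pairs. This is a hedged illustration of the general idea (packed/SIMD arithmetic), not the N64's actual instruction set; the helper names are invented, and real hardware also prevents a carry from one 32-bit lane spilling into the other, which this toy version does not.

```python
# Illustrative sketch of packed arithmetic: two 32-bit "lanes" inside
# one 64-bit word, added together with a single 64-bit addition.
# (Toy model only: real SIMD instructions isolate the lanes so an
# overflow in the low lane cannot leak a carry into the high lane.)
def pack(high, low):
    """Place two 32-bit values into one 64-bit word."""
    return (high << 32) | low

def unpack(word):
    """Split a 64-bit word back into its two 32-bit lanes."""
    return word >> 32, word & 0xFFFFFFFF

A, B, C, D = 10, 20, 30, 40

# One addition computes A+B in the high lane and C+D in the low lane.
result = pack(A, C) + pack(B, D)
print(unpack(result))  # (30, 70) -> A+B and C+D, computed together
```

This is the same principle behind modern SIMD extensions: one instruction, multiple independent pieces of data.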
There’s no hard and fast rule for exactly when a CPU is called 32-bit or 64-bit. It relates to *some* part of the CPU, but which, exactly, is open to interpretation.