When chips and programs went from 32 bit to 64 bit it was a huge leap forward. What is holding us back from going to 128 bit?

In: Technology

9 Answers

Anonymous 0 Comments

The main reason why it's an advantage to use more bits on the CPU is that you can calculate with larger numbers in one go. For example, if you wanted to add two 16 bit numbers on a Commodore 64 with its 8 bit CPU, you had to do two 8 bit additions. Likewise, if you wanted to read a number from memory, you had to use two 8 bit numbers, because the memory address was 16 bit (allowing for a maximum of 64k of RAM). With multiplication it's even worse: a 16 bit multiplication done with 8 bit hardware takes four 8 bit multiplications. And you have to do this all the time, because 8 bit numbers are tiny.
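To make that concrete, here is a rough C sketch of the same idea (the function name is mine, and a real 8 bit CPU like the 6502 would do this with an add-with-carry instruction rather than C code):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: adding two 16-bit numbers using only 8-bit operations,
 * roughly what an 8-bit CPU does with its add-with-carry instruction. */
uint16_t add16_with_8bit_ops(uint16_t a, uint16_t b) {
    uint8_t a_lo = a & 0xFF, a_hi = a >> 8;   /* split into low/high bytes */
    uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

    uint16_t lo_sum = (uint16_t)(a_lo + b_lo);      /* first 8-bit add */
    uint8_t  carry  = lo_sum > 0xFF;                /* did the low byte overflow? */
    uint8_t  res_lo = lo_sum & 0xFF;
    uint8_t  res_hi = (a_hi + b_hi + carry) & 0xFF; /* second 8-bit add, plus carry */

    return (uint16_t)((res_hi << 8) | res_lo);
}

int main(void) {
    printf("%u\n", (unsigned)add16_with_8bit_ops(300, 500)); /* prints 800 */
    return 0;
}
```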

Now fast forward a bit: 64 bit computers are the norm, so programmers can use instructions that do 64 bit operations in one go. 64 bit is huge: multiple billion billions. It's enough to address most of the bytes of RAM manufactured so far, to count the seconds from the big bang to the end of the universe, and to do scientific calculations to a higher degree of accuracy than anyone could possibly measure. There are a few applications where 128 bit numbers are used, such as databases and security related stuff, but it is not the norm. Most programmers will use nothing but 32 and 64 bit numbers.
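As a side note, programmers who genuinely need 128 bit integers can already get them on 64 bit hardware, because the compiler stitches them together out of 64 bit operations. A minimal sketch, assuming GCC or Clang on a 64 bit machine (both provide the non-standard `unsigned __int128` type):

```c
#include <stdio.h>

int main(void) {
    /* GCC/Clang extension: a 128-bit integer, emulated with pairs of
     * 64-bit instructions (add + add-with-carry) on x86-64. */
    unsigned __int128 big = (unsigned __int128)1 << 100;
    unsigned __int128 sum = big + big;   /* 2^101 */

    /* printf has no 128-bit format specifier, so print it as two 64-bit halves. */
    printf("high: %llu  low: %llu\n",
           (unsigned long long)(sum >> 64),
           (unsigned long long)sum);
    return 0;
}
```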

So for the CPU makers, the focus now is just to make sure that those numbers are processed faster: instead of doing one number at a time, CPUs can now do multiple smaller numbers at the same time. So instead of making one CPU that can do a 128 bit calculation in a single clock cycle, they would rather make a CPU that can do two 64 bit calculations in a single clock cycle.
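As a hypothetical illustration of "two 64 bit calculations at once" (assuming an x86-64 machine, where the SSE2 extension is always available): the `paddq` instruction, reachable from C through intrinsics, performs two independent 64 bit additions in one go:

```c
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdio.h>

int main(void) {
    /* Pack two 64-bit integers into each 128-bit register. */
    __m128i a = _mm_set_epi64x(1000000000000LL, 42);
    __m128i b = _mm_set_epi64x(2000000000000LL,  8);

    /* One instruction (paddq) performs both 64-bit additions. */
    __m128i sum = _mm_add_epi64(a, b);

    long long out[2];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("%lld %lld\n", out[0], out[1]);   /* 50 3000000000000 */
    return 0;
}
```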

Anonymous 0 Comments

To the best of my knowledge the main reason is a lack of necessity.
A 32 bit CPU could only address a total of 4 GB of system memory, since 2³² gives us 4,294,967,296 addresses.
A 64 bit CPU gives us 2⁶⁴, which is 18,446,744,073,709,551,616 values; that would allow us to use about 16 exabytes of system memory, and we're nowhere near requiring it.

Bear in mind that an Exabyte is 1024 Petabytes, a Petabyte is 1024 Terabytes and a terabyte is 1024 Gigabytes.
We’re a long way off.

The main issue people seem to have is that we think in decimal, where 64 is just 2 × 32.
Computers (hardware) live in binary, where each bit, or BInary digiT, can be a 0 or a 1.
So 1 bit is 2 addresses (0 or 1)

2 bit is 4 addresses (00,01,10,11)

3 bits would be 8 addresses (000, 001, 010, 011,100,101,110,111)
And so on.
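If you want to see that doubling for yourself, here is a tiny C sketch (purely illustrative arithmetic):

```c
#include <stdio.h>

int main(void) {
    /* Each extra bit doubles the number of distinct addresses: 2^n. */
    unsigned widths[] = {1, 2, 3, 8, 16, 32};
    for (int i = 0; i < 6; i++)
        printf("%2u bits -> %llu addresses\n", widths[i], 1ULL << widths[i]);

    /* 2^64 itself overflows a 64-bit shift, which is a small demonstration
     * of the very limit being discussed, so print it directly. */
    printf("64 bits -> 18446744073709551616 addresses\n");
    return 0;
}
```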

Anonymous 0 Comments

The primary advantage of going from 32 bit to 64 bit is that you get a much larger address space. 32 bit can address about 4 gigabytes – which is a lot, but still less than many media files. 64 bit can already address 18.4 exabytes – which is far more than you can usefully address in the first place.

There's also an inherent inefficiency to larger bit widths. The overwhelming majority of calculations your computer makes involve relatively small numbers, yet you need 32 or 64 parallel paths to accommodate these trivially small calculations when you could easily make do with far fewer circuits.

So while we could easily make 128 bit processors, there really isn’t much demand for them.

Anonymous 0 Comments

The big change from 32 bits to 64 bits was widening the addresses the memory controller deals with: instead of being limited to 4 GB of RAM, we can now use about 17 billion GB of RAM (16 exabytes). There isn't much point going up to 128 bits for a long while.

In the early console days, 8 bit vs 16 bit vs 32 bit was about the size of the data the processor could work with, but modern processors already work with data much wider than the addresses they can talk to. The AVX (Advanced Vector Extensions) instruction set, added in 2011, lets CPUs work with 128 bit data chunks; AVX2 expanded that to 256 bits, and AVX-512 is used on some Intel server processors to let them perform operations on 512 bit long data chunks.
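As a hedged illustration of what that looks like from C (this assumes an x86-64 machine with AVX2 and compiling with something like `gcc -mavx2`), one AVX2 instruction adds four 64 bit integers at once:

```c
#include <immintrin.h>   /* AVX/AVX2 intrinsics */
#include <stdio.h>

int main(void) {
    /* Four 64-bit integers packed into each 256-bit register. */
    __m256i a = _mm256_set_epi64x(4, 3, 2, 1);
    __m256i b = _mm256_set_epi64x(40, 30, 20, 10);

    /* One AVX2 instruction (vpaddq) does all four 64-bit additions. */
    __m256i sum = _mm256_add_epi64(a, b);

    long long out[4];
    _mm256_storeu_si256((__m256i *)out, sum);
    printf("%lld %lld %lld %lld\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
    return 0;
}
```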

We're unlikely to move to more than 64 bit memory addresses (most current processors only implement around 40 to 48 of those address bits anyway), so you will likely never see a "128 bit" processor, despite your processor handling 128 bit chunks of data on a regular basis.

Anonymous 0 Comments

The biggest leap forward when going from 32 bit to 64 bit was that we could natively address more than 4 GB of virtual memory in one process. There were also other changes made at the same time, such as doubling the number of registers, not only their size. The 64 bit architecture we now have is well suited for the tasks we have today, and we are several decades away from the same type of memory addressing issues we had back then. There are other issues with modern CPU architectures that are much more pressing than those that could be solved by going to a 128 bit architecture. That in itself does not prevent anyone from developing a 128 bit computer, and there are several out there, but they are mostly for research and specialty applications rather than mainstream use.

Anonymous 0 Comments

It wasn't really a huge leap forward. 64 bit mostly means you can use more memory (RAM); you can also do certain things in fewer "steps" compared to 32 bit, but my understanding is that that isn't too big a deal.

A 64 bit system can have around 16 exabytes of memory, which is about *500 million to 2 billion* times more memory than your computer has (2⁶⁴ bytes divided by a typical 8–32 GB of RAM).

So there's nothing "holding us back" per se; there is just no point.

Anonymous 0 Comments

I would also add, as a small insight (besides the great technical answers), that this is an exponential scale. 64 bit is twice the width of 32 bit and 128 bit is twice the width of 64 bit, so 4 times the width of 32 bit – but the range of values grows exponentially with the width: 2⁶⁴ is the square of 2³², not just double it.
Unless the development of computers and our needs also grow exponentially, it will take far longer to even want 128 bit after 64 bit than it took to want 64 bit after 32 bit.

Anonymous 0 Comments

Primarily, cost. The numbers refer to the processor's bit width, which is the number of individual bits you can store in a register, a single unit of on-board memory. Now, at first glance it may appear fairly simple to double the size of your registers, but this causes blow-ups throughout the rest of the processor. You need to double the width of the buses that connect the registers to the rest of the hardware, then double the size of the hardware controlling those buses. The ALU (the device that does all the actual math) is the hardest hit, since the increased bit width requires a redesign and significantly more hardware in order to keep completing most tasks in a single clock cycle (explaining why this is true, and the workarounds, is well beyond an ELI5), and I dread to imagine the hardware required for a 128-bit multiplier or floating point unit. In addition to the extra space required on the chip (which can cause timing issues) and the raw silicon required to make the extra transistors, you also need to power them, requiring a larger power supply and creating more waste heat, which raises the cost of the other parts of the processor as well.

Now, other people have covered why going from 64 to 128 bits isn't that much of a performance increase, so I'm going to address why the earlier bit width increases were significant.

First, the bit width places a strict limit on how large a number the computer can handle natively. For example, in a 4-bit system, the largest number that can be stored in a single register is 2^4 − 1, or 15. You literally couldn't count to 100 on a 4 bit system without a bit of software hacking to use another register to store the higher bits. One place where this restriction tended to be really obvious is the colour range of gaming consoles: early consoles were monochrome, and as the bit width increased, the number of colours an individual pixel could display increased, until we hit a point of diminishing returns at 32-bit colour, which can display more distinct colours than the human eye can detect.
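To see that limit in action, here is a small C sketch that simulates a 4 bit register by masking everything down to 4 bits; it wraps around long before it could ever reach 100:

```c
#include <stdio.h>

int main(void) {
    /* Simulate a 4-bit register by keeping only the lowest 4 bits (0-15). */
    unsigned counter = 0;
    for (int step = 0; step < 20; step++)
        counter = (counter + 1) & 0xF;   /* wraps back to 0 after 15 */

    /* After 20 increments a real counter would read 20,
     * but the 4-bit one has already wrapped around. */
    printf("4-bit counter after 20 increments: %u\n", counter);   /* prints 4 */
    return 0;
}
```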

Second, the bit width of a processor is also a factor in how its instruction set is designed, and wider processors can generally address more registers as well as more RAM. While registers are much more expensive than RAM chips, they are also significantly faster, and having more registers on board makes writing programs for that processor much easier. Early computers had only a single on-board register, while modern processors commonly have somewhere between 16 and 32 general-purpose registers. If you don't have at least 3 registers, doing something as simple as adding 2 numbers together and storing the result requires going to memory.

Third, jumping up the bit width also tends to be the point when other significant improvements in processor design get rolled out. Since increasing the bit width already requires a significant amount of hardware redesign, it's a good time to make other changes as well. Things like pipelining, branch prediction, out-of-order execution, hyperthreading, etc. tended to get added as part of a major new processor rollout. Again, this is well beyond the scope of an ELI5, but they all tend to make the processor faster without affecting the code the programmer has to write.

However, we are at the point where most of the gains in processor speed come from parallelization rather than from redesigning an individual processor. This creates a situation where two 64 bit processors working in parallel can probably do more work than one 128 bit processor, at a lower cost to both the consumer and the manufacturer. While not a trivial problem, the number of engineer hours required to make two existing chips talk to each other is a lot lower than the number required to design, test and figure out how to manufacture an entirely new chip.

Anonymous 0 Comments

Let's say you're a postal service that has an automatic mail sorter. You find that if you limit the number of characters and numbers in an address to at most 32, the automated sorter takes 0.01 seconds to sort each piece of mail. You sell letters with 32 blanks on them so the automated sorter scans them easily. Even if the address doesn't use all 32 blanks, it still has to check each blank and so can't go faster than 0.01 seconds. Since you serve a community that doesn't need more addresses than can be represented with 32 characters, this works quite well for you.

Then your community grows. Now 32 characters isn't enough to give each address a unique name, so you need to allow for more characters if you hope to serve all your new citizens. You decide to raise the limit on a letter from 32 to 64 characters. Now you can fit a much bigger address in the 64 blanks provided on your letter, but it takes the automated reader 0.02 seconds to read each letter, since it has to check twice as many blanks. This doubles the time it takes to sort each letter even if its address is 32 characters or fewer.

32, 64 or 128 bits in this instance refers to how your operating system references memory. A 32 bit system can reference 2^32 bytes, which is 4 gigabytes; a 64 bit system can reference 2^64 bytes, which is about 16 exabytes. Every single time you move memory around outside of the processor, you need to give its full address, and a bigger address means a bigger number to handle, which means more overhead. It's a trade-off we're not going to make until we need to.