When chips and programs went from 32-bit to 64-bit it was a huge leap forward. What is holding us back from going to 128-bit?

Anonymous

The main reason more bits on the CPU is an advantage is that you can calculate with larger numbers in one go. For example, if you wanted to add two 16-bit numbers on a Commodore 64 with its 8-bit CPU, you had to do two 8-bit additions. Likewise, if you wanted to read a number from memory, you had to use two 8-bit numbers, because the memory address was 16 bits (allowing for a maximum of 64K of RAM). With multiplication it's even worse: a 16-bit multiplication done with 8-bit operations takes four 8-bit multiplications. And you have to do this all the time, because 8-bit numbers are tiny.
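To make that concrete, here is a minimal C sketch of the carry chain an 8-bit CPU has to walk through to add two 16-bit numbers (the function name is just for illustration): add the low bytes, note whether they wrapped around, then add the high bytes plus the carry.

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 16-bit numbers using only 8-bit operations, the way an
   8-bit CPU does it: low bytes first, then high bytes plus the carry. */
uint16_t add16_with_8bit_ops(uint16_t a, uint16_t b) {
    uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
    uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

    uint8_t sum_lo = (uint8_t)(a_lo + b_lo);
    uint8_t carry  = sum_lo < a_lo;            /* did the low byte wrap around? */
    uint8_t sum_hi = (uint8_t)(a_hi + b_hi + carry);

    return (uint16_t)((uint16_t)sum_hi << 8 | sum_lo);
}

int main(void) {
    printf("%u\n", (unsigned)add16_with_8bit_ops(300, 500)); /* prints 800 */
    return 0;
}
```

A 16-bit CPU does all of that in a single add instruction, which is exactly why the wider word size was such a win.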

Now fast forward a bit: 64-bit computers are the norm, so programmers can use instructions that do 64-bit operations in one go. 64 bits is huge: about 18 billion billion values. That's enough to address most of the bytes of RAM manufactured so far, to count the seconds from the Big Bang to the end of the universe, and to do scientific calculations to a higher degree of accuracy than anyone could possibly measure. There are a few applications where 128-bit numbers are used, such as databases and security-related things, but they are not the norm. Most programmers use nothing but 32-bit and 64-bit numbers.
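For a sense of scale, here's a short C snippet comparing the 64-bit ceiling against the seconds-since-the-Big-Bang example above (the constants are rough, back-of-the-envelope figures, not precise cosmology):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The largest unsigned 64-bit value: 18,446,744,073,709,551,615 */
    uint64_t max64 = UINT64_MAX;

    /* Rough seconds since the Big Bang:
       ~13.8 billion years * ~31.6 million seconds per year */
    uint64_t big_bang_seconds = 13800000000ULL * 31557600ULL;

    printf("max 64-bit value:       %llu\n", (unsigned long long)max64);
    printf("seconds since Big Bang: %llu\n", (unsigned long long)big_bang_seconds);
    /* ~4.4e17 seconds -- comfortably inside the ~1.8e19 range of 64 bits */
    return 0;
}
```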

So for the CPU makers, the focus now is making sure those numbers are processed faster: instead of doing one number at a time, CPUs can do multiple smaller numbers at the same time (so-called SIMD: single instruction, multiple data). So instead of making a CPU that can do one 128-bit calculation in a single clock cycle, they would rather make a CPU that can do two 64-bit calculations in a single clock cycle.
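Here is a small illustration of that idea in C using x86 SSE2 intrinsics (this assumes an x86 CPU with SSE2; the intrinsics are real, the values are arbitrary): one 128-bit register holds two 64-bit numbers, and a single instruction adds both pairs at once.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Pack two 64-bit integers into each 128-bit register */
    __m128i a = _mm_set_epi64x(10, 20);
    __m128i b = _mm_set_epi64x(1, 2);

    /* One instruction (PADDQ) performs both 64-bit additions at once */
    __m128i sum = _mm_add_epi64(a, b);

    int64_t out[2];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("%lld %lld\n", (long long)out[0], (long long)out[1]); /* 22 11 */
    return 0;
}
```

This is exactly the trade-off described above: the hardware already has 128-bit-wide registers, but they are used to do several smaller operations in parallel rather than one giant one.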
