When chips and programs went from 32 bit to 64 bit it was a huge leap forward, what is holding us back from going to 128bit?

9 Answers

Anonymous

Primarily, cost. The numbers refer to the processor’s bit width, which is the number of individual bits you can store in a register, a single unit of on-board memory. Now, at first glance, it may appear fairly simple to double the size of your registers, but this causes blowups throughout the rest of the processor. You need to double the widths of the buses that connect the registers to the rest of the hardware, then double the size of the hardware controlling those buses. The ALU (the device that does all the actual math) is the hardest hit, as the increase in bit width requires a redesign and significantly more hardware in order to keep completing most tasks in a single clock cycle (explaining why this is true and the workarounds is well beyond an ELI5), and I dread to imagine the hardware required for a 128-bit multiplier or floating-point unit. Now, in addition to the extra space required on the chip (which can cause timing issues) and the raw silicon required to make the extra transistors, you also need to power them, requiring a larger power supply and creating more waste heat, raising the cost of the other parts of the processor as well.
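To get a feel for why a wide multiplier is so expensive, here’s a sketch of my own (not from the answer above): multiplying two 128-bit numbers using only 64-bit operations, the way software has to when the hardware is narrower than the operands. Splitting each value into two 64-bit "limbs" turns one multiply into four partial multiplies plus shifts and adds — a hint at how much extra circuitry a native 128-bit multiplier would need.

```python
MASK64 = (1 << 64) - 1

def mul128(a: int, b: int) -> int:
    """Multiply two 128-bit integers using 64-bit partial products."""
    a_lo, a_hi = a & MASK64, a >> 64
    b_lo, b_hi = b & MASK64, b >> 64
    # Four partial products; each is the result of a 64x64-bit multiply.
    lo_lo = a_lo * b_lo
    lo_hi = a_lo * b_hi
    hi_lo = a_hi * b_lo
    hi_hi = a_hi * b_hi
    # Recombine, shifting each partial product to its place value.
    return lo_lo + ((lo_hi + hi_lo) << 64) + (hi_hi << 128)

x = (1 << 127) + 12345
y = (1 << 100) + 6789
assert mul128(x, y) == x * y  # matches Python's big-integer multiply
```

Python’s integers are arbitrary-precision, so this is only an illustration of the limb arithmetic, not of real register behaviour.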

Now, other people have covered why going from 64 to 128 bits isn’t that much of a performance increase, so I’m going to address why the earlier bit-width increases were significant.

First, the bit width places a strict limit on how large a number the computer can understand. For example, in a 4-bit system, the largest number that can be stored in a single register is 2^4 − 1, or 15. You literally couldn’t count to 100 on a 4-bit system without a bit of software hacking to use another register to store the higher bits. One place where this restriction tended to be really obvious is the colour range of gaming consoles. Early consoles were monochrome, and as the bit width increased, the number of colours an individual pixel could display increased, until we hit a point of diminishing returns at 32-bit colour, which can display more distinct colours than the human eye can tell apart.
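A toy illustration of that limit (my own sketch, not from the answer): an n-bit register tops out at 2^n − 1, so 4 bits caps out at 15, and counting to 100 means chaining a second 4-bit register to hold the high bits — exactly the software hacking mentioned above.

```python
def max_unsigned(bits: int) -> int:
    """Largest unsigned value an n-bit register can hold."""
    return (1 << bits) - 1

assert max_unsigned(4) == 15     # can't even reach 16, let alone 100

def to_two_nibbles(n: int) -> tuple[int, int]:
    """Split a number across two 4-bit registers as (high, low)."""
    assert n <= max_unsigned(8), "even two 4-bit registers cap out at 255"
    return n >> 4, n & 0xF

hi, lo = to_two_nibbles(100)
assert (hi, lo) == (6, 4)        # 6 * 16 + 4 == 100
assert hi <= 15 and lo <= 15     # each half still fits a 4-bit register
```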

Second, the bit width of a processor is also a factor in how its instruction set is designed, and wider processors can generally address more registers as well as more RAM. While registers are much more expensive than RAM chips, they are also significantly faster, and having more registers available on board makes writing programs for that processor much easier as well. Early computers had only a single onboard register, while it’s common for modern computers to have between 16 and 32. If you don’t have at least 3 registers, doing something as simple as adding 2 numbers together and storing the result will require going to memory.

Third, jumping up the bit width also tends to be the point when other significant improvements in processor design get rolled out. Since increasing the bit width already requires a significant amount of hardware redesign, it’s a good time to make other changes as well. Things like pipelining, branch prediction, out-of-order execution, hyperthreading, etc. tended to get added as part of a major new processor rollout. Again, this is well beyond the scope of an ELI5, but they all tend to make the processor faster without affecting the code the programmer has to write.

However, we are at the point where most of the gains in processor speed are coming from parallelization rather than from redesigning an individual processor. This creates a situation where two 64-bit processors working in parallel can probably do more work than one 128-bit processor, at a lower cost to both the consumer and the manufacturer. While not a trivial problem, the number of engineer-hours required to make two existing chips talk to each other is a lot lower than the number required to design, test, and figure out how to manufacture an entirely new chip.
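One more sketch of mine (assumed, not from the answer) showing why wide hardware buys so little: adding two 128-bit numbers on a 64-bit machine takes just two 64-bit additions plus a carry, so software already composes wide operations cheaply out of narrow ones.

```python
MASK64 = (1 << 64) - 1

def add128(a_hi: int, a_lo: int, b_hi: int, b_lo: int) -> tuple[int, int]:
    """Add two 128-bit values given as (high, low) 64-bit halves."""
    lo = a_lo + b_lo
    carry = lo >> 64                     # did the low halves overflow 64 bits?
    hi = (a_hi + b_hi + carry) & MASK64  # wraps like real 64-bit hardware
    return hi, lo & MASK64

# 2**64 - 1 plus 1 should carry into the high half.
hi, lo = add128(0, MASK64, 0, 1)
assert (hi, lo) == (1, 0)
```

Real CPUs do the same thing with an add-with-carry instruction, which is why 128-bit integer math is already routine on 64-bit hardware.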
