Why don’t we have 128-bit OSes or CPUs?

25 Answers

Anonymous 0 Comments

We could, we just don’t need it yet. 64-bit gives far more than enough memory addresses. We upgraded from 32-bit to 64-bit so that more memory addresses could be used; it basically allows the computer to count much higher.
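A rough sketch of what that upgrade means, using Python just to do the arithmetic (assuming each address points to one byte):

```python
# N address bits can reach 2**N distinct byte addresses.
for bits in (32, 64):
    max_bytes = 2 ** bits
    print(f"{bits}-bit addresses: {max_bytes:,} bytes "
          f"(~{max_bytes / 2**30:,.0f} GiB)")

# 32-bit addresses: 4,294,967,296 bytes (~4 GiB)
# 64-bit addresses: 18,446,744,073,709,551,616 bytes (~17,179,869,184 GiB)
```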

Anonymous 0 Comments

Because performance does not scale linearly with the number of bits. Simply put, the number of use cases that need more bits is pretty limited. 64 bits is plenty for any number you’d typically store.

More bits are mostly useful when shoveling data around, such as copying big blocks of memory or performing some operation on a big block of memory. This is why graphics cards often have wider buses/more bits. But the GPU is a very specialized beast, while a CPU is much more general.

It would be like doing all transport by giant cargo ship. Sure, they are great at hauling huge loads, but rather ineffective for a grocery run.

Or, to use another example, it would be like building 10-lane highways for every road and street. It just isn’t worth the cost.
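To put a rough number on the “shoveling data around” point, here’s a purely illustrative Python sketch of how the transfer count shrinks as the data path gets wider (the 1 MiB size is an arbitrary choice):

```python
# Moving the same amount of data in wider chunks takes fewer operations.
# (Illustrative only: real CPUs and GPUs do this in hardware, not in Python.)
data_size = 1024 * 1024  # 1 MiB to copy

for width_bits in (8, 64, 128, 256):
    chunk_bytes = width_bits // 8
    transfers = data_size // chunk_bytes
    print(f"{width_bits:>3}-bit transfers: {transfers:>9,} operations")

#   8-bit transfers: 1,048,576 operations
#  64-bit transfers:   131,072 operations
# 128-bit transfers:    65,536 operations
# 256-bit transfers:    32,768 operations
```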

Anonymous 0 Comments

CPU performance is an art, not a science. Many factors affect performance besides how wide the architecture is, including memory speed, processing speed, parallel processing, compiler quality, and more. To get the benefit of an architecture that is 128 bits wide, compilers would have to fill up that width in parallel on enough cycles to make it worthwhile. Holding that back is the reality that useful processing includes branches and sequential dependencies. Sometimes the CPU guesses, guesses wrong, and the calculations are discarded; whenever work is done but discarded, the power is still dissipated, and power dissipation is ultimately the limit on CPU performance. It may be that future algorithms (maybe neural nets, or other AI) will be well suited to wider architectures, but currently it is too difficult to take advantage of 128 bits or wider.

Edit: I should say, too difficult in general, meaning that programs as a whole don’t benefit from a wider architecture. Programs with wide data objects can certainly benefit. For example, video screen memory is wide. But to calculate what to display on it, programs typically use at most 32-bit wide quantities.
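A concrete example of that “wide memory, narrow math” point: a common screen pixel format packs four 8-bit color channels into a single 32-bit value, so per-pixel arithmetic never needs anything wider. A minimal Python sketch (the RGBA layout here is just an assumed example):

```python
# A typical pixel is 32 bits: 8 bits each for red, green, blue and alpha.
def pack_rgba(r, g, b, a):
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel):
    return (pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_rgba(255, 128, 0, 255)   # a fully opaque orange pixel
print(hex(p))                     # 0xff8000ff -- fits in 32 bits
print(unpack_rgba(p))             # (255, 128, 0, 255)
```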

Anonymous 0 Comments

We do. Actually, CPUs even have 256-bit registers and instructions that operate on them.

It’s just, why would you ever use them? They’re useful for some people making ***EXTREMELY*** precise calculations (like, ungodly levels of precision), but unless your field explicitly works at that level of precision, you’d never need them.
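For a feel of what that extra precision looks like, here’s a minimal Python sketch: a 64-bit double carries roughly 15–16 significant decimal digits, while 128-bit (quad) precision carries roughly 34, which software can emulate when the hardware doesn’t provide it:

```python
from decimal import Decimal, getcontext

print(1 / 3)                    # 0.3333333333333333  (64-bit double, ~16 digits)

getcontext().prec = 34          # roughly the precision of a 128-bit quad float
print(Decimal(1) / Decimal(3))  # 0.3333333333333333333333333333333333
```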

Anonymous 0 Comments

Actual ELI5:

The number of bits is basically a limit on how high the computer can count with one number, and the *vast* majority of computer work doesn’t need more yet (see the sketch after the list for the exact values):

* 32 bit is just over 4 billion (a billion has 9 zeros)

* 64 bit is over 18 quintillion (a quintillion has 18 zeros)

* 128 bit is about 340 undecillion (an undecillion has 36 zeros)
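A minimal Python check of those limits (the largest unsigned value in N bits is 2**N − 1):

```python
for bits in (32, 64, 128):
    print(f"{bits:>3} bits: {2 ** bits - 1:,}")

#  32 bits: 4,294,967,295
#  64 bits: 18,446,744,073,709,551,615
# 128 bits: 340,282,366,920,938,463,463,374,607,431,768,211,455
```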