Why don’t we have 128-bit OSes or CPUs?



Anonymous 0 Comments

To add to the great answers here: it’s a common misconception that 32-bit is faster than 16-bit, 64-bit faster than 32-bit, and so on.

If software is specifically designed to take advantage of the larger integers, then yes, it’s faster; if not, it’s the same or slower.

I’m no engineer, but I think you could build a 16-bit processor that could handle higher clock speeds than a 64-bit CPU, while running 16-bit code, that is.

Anonymous 0 Comments

5-year-old answer: Imagine you are in math class and your teacher gives you math problems of a certain complexity, which yield answers of a certain length. You have to work on and turn in your homework on a piece of paper that can fit problems of that complexity. The catch is that your paper store only sells pages that are exponentially larger than the last size and twice or more the price of the last size.

In first grade you used 8-bit paper, in second grade you used 16-bit paper, and you kept upgrading your paper sizes over time until now, in 6th grade, you have the choice between 64-bit paper and 128-bit paper. The math homework your teacher gives you is too complex to fit on 32-bit paper, doesn’t completely fill the 64-bit paper, and your 128-bit page, being larger than the 64-bit page, can fit the problem with no issues but also with tons of wasted space. So the sensible decision would be to buy 64-bit paper until your math homework becomes too complex to fit on it. Also, since everyone else in your classroom is making the same decision to turn in their homework on the same sized paper, the paper maker says “business is boomin’!!!” and makes more paper, which makes it more affordable; meanwhile 128-bit paper stays a lot more expensive in comparison, to the point you might wanna try out quantum paper.

Anonymous 0 Comments

In some sense, we do! Many CPUs support [registers and instructions that can manipulate 512 bits at a time](https://en.wikipedia.org/wiki/AVX-512).

More accurately, these are *vector instructions*. A 512-bit vector addition instruction doesn’t add a pair of 512-bit numbers; instead it adds eight pairs of 64-bit numbers all at once (or sixteen pairs of 32-bit numbers, or thirty-two pairs of 16-bit numbers, or sixty-four pairs of 8-bit numbers).
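As a hedged illustration (this assumes a CPU and compiler with AVX-512F support, e.g. building with GCC or Clang and `-mavx512f`; the arrays are made-up example data), a 512-bit add of eight 64-bit lanes looks roughly like this in C:

```c
#include <immintrin.h>  /* x86 SIMD intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two arrays of eight 64-bit numbers each (hypothetical example data). */
    int64_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int64_t b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int64_t c[8];

    __m512i va = _mm512_loadu_si512(a);    /* load 512 bits (8 x 64-bit)  */
    __m512i vb = _mm512_loadu_si512(b);
    __m512i vc = _mm512_add_epi64(va, vb); /* eight 64-bit adds at once   */
    _mm512_storeu_si512(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%lld ", (long long)c[i]);
    printf("\n");
    return 0;
}
```

The single `_mm512_add_epi64` instruction does the work of eight ordinary 64-bit additions, which is exactly the “bulk processing of smaller numbers” use case rather than one giant 512-bit number.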

People don’t often decide to write programs that need numbers larger than will fit in 64 bits. (2^64 is roughly 18 billion billion.) They do often decide to write programs that perform bulk processing of large arrays of smaller numbers.

The main difference between 32-bit systems and 64-bit systems is *addressing*: every byte of memory has to have a unique number called an *address* identifying its location. 32-bit addressing gives you enough for about 4 billion bytes; when computers started using more than that around 2005-2010, we started upgrading our CPUs, OSes, and software to use 64-bit addressing.

64-bit addressing will run out when our computers hold about 18 billion billion bytes. We can multiply one of those factors by 1000 and divide the other by 1000 without changing the overall number, so that’s about 18 million trillion bytes. Our current largest computers have perhaps 10 trillion bytes of memory. So we’re roughly a factor of a million from the next upgrade.
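Here’s that arithmetic as a tiny sketch in plain C (nothing here depends on any particular machine):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 32-bit addresses: 2^32 distinct byte addresses */
    uint64_t bytes32 = 1ULL << 32;
    printf("32-bit address space: %llu bytes (~4 billion)\n",
           (unsigned long long)bytes32);

    /* 2^64 doesn't fit in a 64-bit integer, so use a double
     * (2^64 is a power of two, so the double holds it exactly). */
    double bytes64 = 18446744073709551616.0; /* 2^64 */
    printf("64-bit address space: %.1f billion billion bytes\n",
           bytes64 / 1e18);
    return 0;
}
```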

(Don’t ask about the 16-bit to 32-bit upgrade in the early 1990s. The memory architecture of 16-bit PCs was super messy and rather insane.)

Anonymous 0 Comments

We have parts of the CPU operating at 128/256/512 bits.

SSE/AVX/AVX-512 instructions work this wide.

They allow the CPU to process lots of 32/64-bit data in parallel.

Also, we have 128/256/384-bit wide memory buses.

So we kinda do have 128 bits or more in parts of the CPU.
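For the 128-bit flavor specifically, here’s a minimal sketch using SSE2 intrinsics (SSE2 is available on essentially every x86-64 CPU; the example data is made up):

```c
#include <emmintrin.h>  /* SSE2 intrinsics (128-bit registers) */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Four 32-bit numbers per 128-bit register (made-up example data). */
    int32_t a[4] = {1, 2, 3, 4};
    int32_t b[4] = {100, 200, 300, 400};
    int32_t c[4];

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i vc = _mm_add_epi32(va, vb);  /* four 32-bit adds in one go */
    _mm_storeu_si128((__m128i *)c, vc);

    for (int i = 0; i < 4; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}
```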

Anonymous 0 Comments

It’s exponential. 64-bit is a lot, and our current CPUs still aren’t really fully 64-bit; I believe most still use 48-bit virtual addressing. It’s going to take a while before we max out 64 bits.
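For a sense of scale, here’s what that 48-bit figure works out to (a small sketch, assuming the common x86-64 case of 48-bit virtual addresses):

```c
#include <stdio.h>

int main(void) {
    /* 48-bit virtual addresses, as commonly implemented on x86-64 */
    unsigned long long va48 = 1ULL << 48;
    printf("2^48 bytes = %llu (256 TiB of addressable space)\n", va48);
    return 0;
}
```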

128-bit is more than we will ever need. No, really: a perfect computer that needed 128 bits of memory addressing would boil the oceans before it could fill its memory.

Anonymous 0 Comments

We have GPUs with 128-bit and 256-bit memory buses, where how fast we can move memory makes a big difference. Most of the comments here are about general-purpose computation, where 128-bit integers don’t make a lot of sense, but in special-purpose fields like cryptography, 128- and 256-bit keys are actually on the small side.

Anonymous 0 Comments

Oh believe me we do. It’s just that it doesn’t make sense to mass produce and sell the fastest technology ever invented, until people want graphics better than Spider-Man 2 on Playstation.

Anonymous 0 Comments

Realistically, we shouldn’t even have 64-bit.

Some people, and probably Google, will tell you otherwise, with claims such as “32 bits were limited to 2^30 * 4 = 4 GB of RAM”.

Absolutely not: the original x86 was 16-bit and could access a wider address range through indexing (segmented addressing).

The only real reason we moved to 64bit was marketing shenanigans from AMD.

In reality it simply increased the cost and power consumption of processors.

64-bit would only be a good option if a considerable number of programs used 64-bit variables and (consequently) 64-bit arithmetic, since that would mean they could add/subtract/multiply/etc. 64-bit numbers in one cycle instead of two.
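To make the “one cycle instead of two” point concrete, here’s a rough sketch in C (the helper name is made up for illustration) of the extra work a 32-bit CPU does for a single 64-bit addition; on real hardware the compiler emits an add plus an add-with-carry:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: adding two 64-bit numbers using only 32-bit operations,
 * the way a 32-bit CPU has to do it (two adds plus a carry). */
static void add64_on_32bit(uint32_t a_lo, uint32_t a_hi,
                           uint32_t b_lo, uint32_t b_hi,
                           uint32_t *out_lo, uint32_t *out_hi) {
    uint32_t lo = a_lo + b_lo;            /* first 32-bit add           */
    uint32_t carry = (lo < a_lo) ? 1 : 0; /* did the low half overflow? */
    uint32_t hi = a_hi + b_hi + carry;    /* second add, plus the carry */
    *out_lo = lo;
    *out_hi = hi;
}

int main(void) {
    uint32_t lo, hi;
    /* 0x00000001FFFFFFFF + 1 = 0x0000000200000000 */
    add64_on_32bit(0xFFFFFFFFu, 0x00000001u, 1u, 0u, &lo, &hi);
    printf("result = 0x%08X%08X\n", (unsigned)hi, (unsigned)lo);
    return 0;
}
```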

In reality, the overwhelming majority of programs never use 64-bit variables, because they are unnecessary.

Which is why you should hope and expect that no transition to 128 bits is made for general-purpose computers.

Anonymous 0 Comments

Because we haven’t had the need yet (outside of very specific use cases).

Every bit you add doubles the amount of information you can work with. So going from 32-bit to 64-bit didn’t just double what we could address, it doubled it 32 times.

32-bit has a limit of 4 GB of addressable memory, and as you can probably tell, by today’s standards that’s not a lot anymore.

64-bit is capable of managing 17,179,869,184 GB. The biggest single file of data is currently held by CERN, reportedly over 2,000,000 GB (source unsure).

While 2 petabytes sounds like a lot, it’s still minuscule compared to what 64-bit is capable of.
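A quick sanity check on that 17,179,869,184 GB figure: it’s just 2^64 bytes expressed in binary gigabytes (2^64 / 2^30 = 2^34).

```c
#include <stdio.h>

int main(void) {
    /* 2^64 bytes expressed in GiB: 2^64 / 2^30 = 2^34 */
    unsigned long long gib = 1ULL << 34;
    printf("64-bit address space = %llu GiB\n", gib);  /* 17,179,869,184 */
    return 0;
}
```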

Anonymous 0 Comments

As others have mentioned, the *main* benefit is memory addressing: to access more memory efficiently, addresses really need to fit within the native “bit size” of the CPU. For example, this is why a 64-bit OS is needed: the OS gives the app addresses of where it put things for the app, or where the app can put things. If the OS were only 32-bit, it could only give 32-bit addresses to the app.
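A small way to see this from a program’s point of view (plain C; the numbers printed depend on whether you build it as a 32-bit or 64-bit program):

```c
#include <stdio.h>

int main(void) {
    /* A pointer holds a memory address; its size reflects the "bit size"
     * the program was built for (4 bytes = 32-bit, 8 bytes = 64-bit). */
    printf("pointer size: %zu bytes (%zu-bit addresses)\n",
           sizeof(void *), sizeof(void *) * 8);
    return 0;
}
```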

We don’t actually need 128-bit CPUs (or OSes) to perform math on 128-bit (or larger) integers; it just takes more steps for a 64-bit CPU to do math on values wider than 64 bits. When it comes to the larger (128-bit or higher) numbers, they are *so large* that they really fall into specialized realms like scientific research.
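As a hedged sketch of that (it relies on the `unsigned __int128` extension in GCC and Clang, which is not standard C), here’s 128-bit math happening on an ordinary 64-bit CPU; the compiler simply lowers each operation to several 64-bit instructions:

```c
#include <stdio.h>

int main(void) {
    /* (2^64 - 1)^2 needs 128 bits to hold the result. */
    unsigned __int128 a = (unsigned __int128)0xFFFFFFFFFFFFFFFFULL;
    unsigned __int128 b = a * a;

    /* There's no printf format for __int128, so print the two 64-bit halves. */
    printf("high half: %llx\n", (unsigned long long)(b >> 64));
    printf("low  half: %llx\n", (unsigned long long)b);
    return 0;
}
```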

The most common “consumer” use of large numbers is cryptography, such as secure web browser connections, VPNs, password managers, and keeping your data encrypted on disk (BitLocker or FileVault) and in the cloud. And even then, we take these large numbers and combine them with other data to make a new, short-lived, smaller number that does the actual encryption much faster.

For example, a (somewhat older, but still in use) form of secure web browsing uses a known RSA key of 1024 bits or more, plus some random data generated by the web browser and the web server when they first talk to each other (the “handshake”), to make a new, temporary 256-bit number for the connection, and then uses that number for AES-256 encryption, which is much faster than RSA (and many modern CPUs have instructions to make it even faster).