As others have mentioned, the *main* benefit is memory addressing: to access more memory efficiently, addresses really need to fit within the native “bit size” of the CPU. That’s why a 64 bit OS is needed, for example – the OS hands out addresses to the app, telling it where things were put or where the app can put things. If the OS were only 32 bit, it could only give the app 32 bit addresses.
We don’t actually need 128 bit CPUs (or a 128 bit OS) to do math on 128 bit (or larger) integers – it just takes a 64 bit CPU several extra steps to do math on values wider than 64 bits. And numbers that large (128 bits or more) are *so* big that they mostly show up in specialized realms like scientific research.
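As a rough illustration of those “extra steps”, here is a small Python sketch of how a 128 bit addition can be done as two 64 bit additions plus a carry, which is essentially what a 64 bit CPU has to do (the function name and the split into 64 bit halves are just for illustration):

```python
MASK64 = (1 << 64) - 1  # a 64 bit "register" can only hold values up to 2**64 - 1

def add128(a_lo, a_hi, b_lo, b_hi):
    """Add two 128 bit numbers, each given as a (low, high) pair of 64 bit halves."""
    lo = (a_lo + b_lo) & MASK64                  # first 64 bit add, keep only the low 64 bits
    carry = 1 if (a_lo + b_lo) > MASK64 else 0   # did the low half overflow?
    hi = (a_hi + b_hi + carry) & MASK64          # second 64 bit add, plus the carry
    return lo, hi

# Example: two 128 bit values split into 64 bit halves
x = 0xFFFFFFFFFFFFFFFF_0000000000000001
y = 0x00000000000000FF_FFFFFFFFFFFFFFFF
lo, hi = add128(x & MASK64, x >> 64, y & MASK64, y >> 64)
assert (hi << 64) | lo == (x + y) & ((1 << 128) - 1)
```

So the math still works – it just costs two adds (and carry bookkeeping) instead of one, and the gap gets wider for multiplication and division.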
The most common “consumer” use of large numbers is cryptography – secure web browser connections, VPNs, password managers, and keeping your data encrypted on disk (BitLocker or FileVault) and in the cloud. And even then, we take these large numbers and combine them with other data to make a new, short-lived, smaller number that does the actual encryption much faster.
For example, a (somewhat older, but still in use) form of secure web browser communication uses a known RSA key of 1024 bits or more, plus some random data that the web browser and the web server generate when they first talk to each other (the “handshake”), to make a new, temporary 256 bit number for that connection. That number is then used for AES-256 encryption, which is much faster than RSA (and many modern CPUs have instructions to make it faster still), as sketched below.
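Here is a very simplified Python sketch of that idea – this is *not* the real TLS key schedule; the variable names and the use of a plain SHA-256 hash are just for illustration of how a big, slow-to-use secret plus the handshake randoms get boiled down to one 256 bit key for fast symmetric encryption:

```python
import hashlib
import os

# Simplified illustration only – real TLS uses a more involved key derivation.
client_random = os.urandom(32)   # random bytes from the browser during the handshake
server_random = os.urandom(32)   # random bytes from the server during the handshake
shared_secret = os.urandom(48)   # stands in for the secret exchanged under the RSA key

# Hash the shared secret together with both randoms to get a 256 bit (32 byte) key.
session_key = hashlib.sha256(shared_secret + client_random + server_random).digest()

print(len(session_key) * 8)  # 256 – small enough to use with fast AES-256
```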