First thing to appreciate is that going from 32 bits to 64 bits isn't a doubling of "capacity". These numbers refer to the width of the basic unit of data. Each extra bit doubles the number of values you can represent, so a 32-bit word can hold (ELI5, loosely speaking) an integer between 0 and roughly 4 billion (2^32), while a 64-bit word can hold a number up to roughly 18 quintillion (2^64, i.e. about 4 billion times 4 billion).
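If you want to sanity-check those figures yourself, here's a quick sketch in plain Python (nothing library-specific, just arithmetic):

```python
# How many distinct values fit in a word of each width: 2**bits.
print(2**32)           # 4,294,967,296 -> roughly 4 billion
print(2**64)           # 18,446,744,073,709,551,616 -> roughly 18 quintillion

# Doubling the width doesn't double the capacity -- it multiplies
# it by 2**32, i.e. "4 billion times more", not "twice as much".
print(2**64 // 2**32)  # 4,294,967,296
```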
For example, think of colors.
1 bit color can show essentially black and white (2 gradations).
2 bit color can show up to 4 colors (fairly boring)
8 bit color can go up to 256 colors (all still easily distinguishable by a human)
16 bit color is about 65,000 colors (this allows for fairly good picture representation)
32 bit color is about 4 billion colors (far more shades than a regular human eye can distinguish)
Anything more than 32 bits is already overkill for colors, and the same goes for sound and so on (see the quick sketch below).
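Each depth in the list above works out to 2^bits distinct values. A minimal Python sketch (simplifying a bit: in practice "32-bit color" usually means 24 bits of color plus 8 bits of transparency):

```python
# Distinct colors at each bit depth from the list above: 2**bits.
for bits in (1, 2, 8, 16, 32):
    print(f"{bits:>2}-bit color: {2**bits:,} distinct values")

# Output:
#  1-bit color: 2 distinct values
#  2-bit color: 4 distinct values
#  8-bit color: 256 distinct values
# 16-bit color: 65,536 distinct values
# 32-bit color: 4,294,967,296 distinct values
```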
Beyond that point, increasing bit depth only really matters for scientific/industrial work where extreme precision is needed.