[ELI5] Why isn’t hexadecimal used for creating computer storage data? Why is it always in binary?

I’m asking this because data in game cartridges always seems to be shown in hexadecimal values instead of binary. I reckon maybe hexadecimal is more convenient than binary.

Hexadecimal is merely a representation of data; there’s nothing special about the nature of hexadecimal compared to other formats. It doesn’t enable data compression or anything like that.
The reason we often use hexadecimal notation when discussing values of bit-related things like bytes is simply because it makes the most sense: each hex digit covers exactly 16 values, which is exactly 4 bits. And this fits nicely into our 8-bit bytes: two hex digits can represent every value of a byte.
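
A quick sketch of that mapping in Python (the values are arbitrary, chosen just for illustration):

```
# Every byte value (0-255) is exactly two hex digits
# and exactly eight binary digits.
for value in (0, 77, 255):
    print(f"{value:3d} = 0x{value:02X} = 0b{value:08b}")

#   0 = 0x00 = 0b00000000
#  77 = 0x4D = 0b01001101
# 255 = 0xFF = 0b11111111
```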

Hexadecimal is easier for humans to read than binary because it doesn’t require writing out as many digits, while still being easy to translate to binary. But binary is easier to implement in electronics: 1 or 0, on or off. Operating in hexadecimal would require hardware capable of holding 16 different states.

Data storage and data transfer are both done in binary because our technologies all work with on/off signals. Hexadecimal (base-16) would require having 16 unique and differentiable signals (logic levels) rather than on/off.

Because there’s no such thing as a perfectly digital signal in real life (on/off isn’t absolute; there are always tiny fluctuations), logic levels don’t use exactly 0 volts or the maximum voltage to represent 0 and 1. Instead, they use threshold zones. With TTL technology, logic 0 is 0 volts to 0.8 volts, and logic 1 is 2 volts up to the supply voltage (usually 5 volts). Those threshold zones, and the gap between them, exist to prevent random fluctuations from flipping one logic level to another.
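
A toy sketch of those zones in Python (the 0.8 V and 2 V cut-offs are the TTL figures above; the function itself is hypothetical, just for illustration):

```
def ttl_logic_level(voltage: float) -> str:
    """Classify a voltage against standard TTL input thresholds."""
    if 0.0 <= voltage <= 0.8:
        return "logic 0"
    if 2.0 <= voltage <= 5.0:
        return "logic 1"
    return "undefined"  # the gap between zones absorbs noise

print(ttl_logic_level(0.3))  # logic 0
print(ttl_logic_level(2.7))  # logic 1
print(ttl_logic_level(1.4))  # undefined
```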

Since with base-16 you’d need 16 distinct logic levels, you’d need 16 different threshold zones to represent those logic levels as voltages. This poses even more of an issue when you consider that the entire allowable voltage range for a modern desktop CPU is roughly 0 volts to ~1.4 volts: dividing that into 16 zones leaves under 90 millivolts per level before you even add noise margins between them.

Hexadecimal is used to display binary values because it’s more compact to display on a screen, since two hexadecimal digits can be used to represent eight binary digits, or one byte.

Everything is binary to a computer. Anything else is just a convenience to humans.

It’s not 111111110000000000000000, it’s “red”. (24-bit colour)

It’s not 11011110101011011011111011101111, it’s 0xdeadbeef (binary to hex conversion)

It’s not 1100100, it’s 100. (binary to decimal conversion)

It’s not 01000101010011000100100100110101 it’s “ELI5” (text in binary)
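
You can reproduce those translations with a few lines of Python (standard library only):

```
print(f"{0xFF0000:024b}")    # 111111110000000000000000 (24-bit red)
print(f"{0xDEADBEEF:032b}")  # 11011110101011011011111011101111
print(0b1100100)             # 100
print(" ".join(f"{b:08b}" for b in "ELI5".encode("ascii")))
# 01000101 01001100 01001001 00110101
```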

Hexadecimal has a number range from 0 to 15 per “digit”, where the letters A through F stand for the values 10 through 15. This maps to a binary range from 0000 to 1111, which means each hexadecimal digit represents *exactly* 4 bits. Since bytes are 8 bits, that means 2 hex digits make up a byte. Very convenient for humans, but by itself nothing in your computer cares except the software that’s doing the conversion for the humans.
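
A small sketch of that 4-bit chunking (plain Python; the variable names are made up for illustration):

```
byte = 0xA7              # one byte, written as two hex digits
high = byte >> 4         # top 4 bits:    0xA = 1010
low  = byte & 0x0F       # bottom 4 bits: 0x7 = 0111
print(f"{byte:08b} -> {high:04b} {low:04b}")  # 10100111 -> 1010 0111
```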

Barring quantum computing and novelty devices, computer memory is always binary. If current is flowing through a circuit, that’s a 1; otherwise it’s a 0. It’s really hard to design something that is both as small and fast as current computer chips and can also take on more than two states.

Hexadecimal is often used to write out computer memory for human consumption because 4 binary bits correspond to exactly one hexadecimal digit. This lets you break the memory into easy chunks. When you change a digit in a hexadecimal number, you only change its 4 associated binary digits, not any of the digits around it. This does not hold for decimal.

Example:

-Hexadecimal 31 is 00110001 in binary. Hexadecimal 32 is 00110010 in binary. The first block of 4 binary digits did not change because we only changed the second hexadecimal digit.

-Decimal 31 is 00011111 in binary. Decimal 32 is 00100000 in binary. Both blocks of 4 binary digits changed despite only changing one decimal digit.
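
If you want to check that yourself, a two-line Python sketch:

```
print(f"{0x31:08b} -> {0x32:08b}")  # 00110001 -> 00110010 (only the low 4 bits changed)
print(f"{31:08b} -> {32:08b}")      # 00011111 -> 00100000 (both 4-bit blocks changed)
```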