[ELI5] Why isn’t hexadecimal used for creating computer storage data? Why is it always in binary?

I’m asking this because data in game cartridges always seems to be shown in hexadecimal values instead of binary. I reckon maybe hexadecimal is more convenient than binary.

In: Technology

10 Answers

Anonymous 0 Comments

Computers are made of transistors wired into NAND gates to create logic. Everything they do is in binary, so to properly and easily represent what a computer is storing, you need to use binary. Building a circuit with 16 distinct states instead of just ones and zeros would be pointlessly complex, because four simple binary circuits already give you 16 states.

Binary is a very inefficient notation, however – there are a lot of ones and zeros in even relatively small numbers. Converting binary to decimal is hard, and converting decimal back to binary is hard. Converting binary to hexadecimal is child’s play. Each hex digit represents four 1s or 0s, and always the same ones, no matter where it appears in the number. Memorize 16 four-digit patterns and you can convert any hex number to binary and any binary number to hex.

TL;DR: We use hexadecimal to represent binary because it is much more efficient than binary in terms of the number of characters needed to represent a number, and we use it instead of decimal because, with a very small amount of practice, converting between hex and binary is very simple.
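The “memorize 16 four-digit patterns” idea can be sketched in a few lines of Python (the lookup table is exactly those 16 patterns; the function name is just for illustration):

```python
# The 16 four-bit patterns, each mapped to its hex digit: 0000->0 ... 1111->f
PATTERNS = {format(i, "04b"): format(i, "x") for i in range(16)}

def bin_to_hex(bits):
    """Convert a binary string (length a multiple of 4) to hex, one nibble at a time."""
    return "".join(PATTERNS[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(bin_to_hex("11111111"))  # ff
print(bin_to_hex("0011"))      # 3
```

Note that each nibble converts independently of its neighbours, which is what makes the conversion so mechanical.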

Anonymous 0 Comments

We sort of do, sometimes.

As others have explained, hex is just another, more succinct way to represent binary data. And typically, data is stored in binary.

But there is at least one exception: multi-level cell (MLC) flash memory. Historically, flash memory stored binary data: 1 or 0. This is done by storing (or not storing) charge on the floating gate of a special transistor, turning it on or off.

But it is also possible to store varying levels of charge on that floating gate, making the transistor off, a little bit on, or a lot on. For example, you can store 4 levels of charge on a floating gate and effectively store two binary bits of data (00, 01, 10, 11).

It is possible (although very rarely done) to store 16 different levels of charge on one flash transistor, which would equate to 4 binary bits or one hexadecimal digit. I’ve only ever heard of one application that actually did this (an old phone answering machine). It’s not done because the more levels you try to use, the more likely it is that you’ll get read errors.
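The level-to-bits mapping described above can be sketched like this (a simplified model, not real flash firmware; the function names are hypothetical):

```python
# Model: a cell with 4 distinguishable charge levels stores exactly 2 bits,
# and one with 16 levels would store 4 bits (one hex digit).
def cell_to_bits(level, levels=4):
    """Map one multi-level cell reading to the bits it stores."""
    width = (levels - 1).bit_length()  # 4 levels -> 2 bits, 16 levels -> 4 bits
    return format(level, f"0{width}b")

print([cell_to_bits(l) for l in range(4)])        # ['00', '01', '10', '11']
print(cell_to_bits(10, levels=16))                # '1010' (one hex digit: a)
```

The trade-off the answer mentions is visible here: more levels pack more bits per cell, but the voltage gap between adjacent levels shrinks, so read errors get more likely.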

Anonymous 0 Comments

TL;DR: hex is a lie that we use to make binary wieldy. Binary is a lie we use to make the hardware almost wieldy.

# Binary is a lie.

For computers, “binary” doesn’t even really exist at the lowest level – it’s either “lots of power” (on) or “hardly any power” (off). The reason we have the concept of binary is that we do some funny things to make more than one power flow matter at a time. To facilitate this, we run multiple wires carrying power at a time. Most machines currently run 64 wires as a single unit. (Technically called a “word”, but moving on.)

Now, you asked explicitly about data storage. The truth is that the same concept applies. The hardware is set to either emit a high or low power signal when power is run through it. That’s it. That’s all it knows. (I’ll get to why you see the hexadecimal addresses in a bit)

Now, this kinda gets messy if you want to do a deep hardware dive. For now, I’ll just mention logic gates, which let you determine one output line based on two inputs. Typically your gates are AND, OR, NOT (single input, but worth mentioning), NAND, XOR, and XNOR. They pretty much do what they look like, with N meaning the output is flipped, and X meaning the output is 1 only if exactly one of the two inputs is 1, not if both are.
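The gates listed above can be modeled on single bits in a few lines (a sketch of the truth tables, not hardware):

```python
# Basic logic gates on single bits (0/1).
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))   # N: flipped AND
def XOR(a, b):  return a ^ b            # X: 1 only when exactly one input is 1
def XNOR(a, b): return NOT(XOR(a, b))   # flipped XOR: 1 when the inputs match

# NAND alone is universal; for example, NOT is just NAND with both inputs tied together:
assert all(NOT(a) == NAND(a, a) for a in (0, 1))
```

That last line hints at why the first answer singles out NAND gates: everything else can be built from them.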

# Correction: Binary is a very useful lie.

Now, most of us software people don’t actually want to think about it like wires. The physical wheres and hows are things that we don’t really want to work with, but on a low level, we still need them. We handle this by encoding all that hardware nonsense in binary. Binary can be written left to right, so it’s easier to hand write logic before you attempt an implementation. You can easily isolate values from meaning, or add spacing to separate segments of it, making it easier to work with for larger projects, like encoding things! It’s not *exactly* a number, but we can get it to act like one! We can ADD with it! We can even assign meaning to groups or individual bits, allowing us to, say, build instructions like “[11010101] 11001100, 00110011”. (I didn’t actually make this an instruction, sorry.)
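The “assign meaning to groups of bits” idea can be sketched concretely. Here the layout (an 8-bit opcode followed by two 8-bit operands) is made up for illustration, matching the shape of the example instruction above, not any real instruction set:

```python
# Pack an 8-bit opcode and two 8-bit operands into one 24-bit "instruction".
def pack(opcode, op1, op2):
    return (opcode << 16) | (op1 << 8) | op2

def unpack(word):
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

word = pack(0b11010101, 0b11001100, 0b00110011)
print(format(word, "024b"))  # 110101011100110000110011
```

The bits themselves carry no inherent meaning; the shift amounts and masks are what say “these 8 bits are the opcode”.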

# What about Hex?

Well… it turns out binary is a pain too. While it is better than trying to imagine all the hardware, it’s messy. Humans aren’t good with long strings of similar numbers. A small mistake can be difficult to catch – imagine trying to find the difference between 11101001101110010010101101100110 and 11101001110110010010101101100110. Looks like a pain, right?

So, we transform it. At this level, we don’t want to go particularly far from what the computer understands, but we need something that’s easier for us. So, we directly encode the bits. 4 bits works out well with our normal number system if we tack on a few letters, and it fits well with addressing so we use that. Encoding one of our earlier numbers:

|1110|1001|1011|1001|0010|1011|0110|0110|
|:-|:-|:-|:-|:-|:-|:-|:-|
|E|9|B|9|2|B|6|6|

we get E9B92B66. Now, compare that to E9D92B66. I bet you found the difference almost immediately, and it takes up much less visual space. So much easier to work with! Every two hex digits make a byte, eight hex digits make up 32 bits (the old standard for a word), and sixteen make up 64 bits, the current standard.
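The comparison above can be automated, which also shows that the hex and binary forms encode exactly the same difference:

```python
# The two 32-bit values from the example, written in binary.
a = 0b11101001101110010010101101100110
b = 0b11101001110110010010101101100110

print(format(a, "08x"), format(b, "08x"))  # e9b92b66 e9d92b66

# XOR leaves a 1 in every position where the two values disagree.
diff = a ^ b
print(format(diff, "032b"))
```

The single differing hex digit (B vs D) corresponds to exactly one differing 4-bit group, which is the whole point of the encoding.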

Conveniently, and as it will be for always and forever, every single one of those numbers is an easy multiple of four. (It’s a thing about addressing bits. Because of the way binary works, there’s very little incentive to explicitly force a change.)

So… we display it as hex.

Because we like it more.

Anonymous 0 Comments

Hexadecimal is just a representation: it’s base 16, whereas normal counting is base 10 and binary is base 2. That means 255 in decimal is equal to 0xFF in hexadecimal, which is equal to 11111111 in binary. They all represent exactly the same value.
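You can check the “same value, different notation” point directly, since Python accepts all three bases for integer literals:

```python
# One quantity, three notations: all three literals parse to the same integer.
assert 255 == 0xFF == 0b11111111

# Converting between the representations is just string formatting:
print(hex(255), bin(255))                  # 0xff 0b11111111
print(int("FF", 16), int("11111111", 2))   # 255 255
```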

The reason hexadecimal isn’t used outright is the physical nature in which data is stored. Magnets have two poles, north and south; transistors have two states, on or off; punch cards have two states, holes or no holes… Charles Babbage’s Analytical and Difference Engines both had decimal-based storage built out of gears in the 1830s. Anything with distinct, ordered states can be used as storage, but binary is the simplest, most reproducible unit. You can make decimal counters out of transistors, but it would take more parts, space, and energy than a binary system.

Anonymous 0 Comments

All current conventional computers are based on Boolean algebra, a form of mathematics that reduces all things to either a yes or a no – true or false. In computers, the actual machine uses this to create action: physical switches turning on or off to represent either 1 or 0.

There’s actually a really great video series by Crash Course on YouTube that does a WONDERFUL job of explaining machines even down to the most basic parts.

To answer your question in a more basic way… ALL hexadecimal is for OUR benefit. The machine still sees binary because there isn’t anything else. I…really can’t explain it better than that. But that video series is assembled by actual experts.

Anonymous 0 Comments

Barring quantum computing and novelty devices, computer memory is always in binary. If current is running through a circuit, that’s a 1, otherwise 0. It’s really hard to design something that is both as small/fast as current computer chips and can take on more than two states.

Hexadecimal is often used to write out computer memory for human consumption because 4 binary bits are exactly the same as one hexadecimal digit. This lets you break the memory into easy chunks. When you change a digit in a hexadecimal number, you only change its 4 associated binary digits, not any of the digits around it. This does not hold for decimal.

Example:

-Hexadecimal 31 is 00110001 in binary. Hexadecimal 32 is 00110010 in binary. The first block of 4 binary digits did not change because we only changed the second hexadecimal digit.

-Decimal 31 is 00011111 in binary. Decimal 32 is 00100000 in binary. Both blocks of 4 binary digits changed despite only changing one decimal digit.
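The two examples above can be verified in a couple of lines (the helper name is just for illustration):

```python
# Show an 8-bit value as two 4-bit groups (nibbles).
def as_bits(n):
    return format(n, "08b")

print(as_bits(0x31), as_bits(0x32))  # 00110001 00110010 - first nibble unchanged
print(as_bits(31), as_bits(32))      # 00011111 00100000 - both nibbles changed
```

The high nibble is stable when you bump a hex value by one, but not when you bump a decimal value, exactly as the answer describes.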

Anonymous 0 Comments

Everything is binary to a computer. Anything else is just a convenience to humans.

It’s not 111111110000000000000000, it’s “red”. (24 bit colour)

It’s not 11011110101011011011111011101111, it’s 0xdeadbeef (binary to hex conversion)

It’s not 1100100, it’s 100. (binary to decimal conversion)

It’s not 01000101010011000100100100110101 it’s “ELI5” (text in binary)

Hexadecimal has a number range from 0 to 15 per “digit”, where A through F are considered digits. This has a binary range from 0000 to 1111 which means each hexadecimal digit represents *exactly* 4 bits. Since bytes are 8 bits that means 2 hex digits is a byte. Very convenient for humans, but by itself nothing in your computer cares except the software that’s doing the conversion for the humans.
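The “two hex digits per byte” relationship shows up directly when you split a value like 0xdeadbeef into bytes:

```python
value = 0xDEADBEEF

# 32 bits of binary vs 8 hex digits for the same value.
print(format(value, "032b"))  # 11011110101011011011111011101111
print(format(value, "08x"))   # deadbeef

# Each byte is exactly two hex digits.
as_bytes = value.to_bytes(4, "big")
print([format(b, "02x") for b in as_bytes])  # ['de', 'ad', 'be', 'ef']
```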

Anonymous 0 Comments

Data storage and data transfer are all done in binary because our technologies all work with on/off signals. Hexadecimal (base-16) would require having 16 unique and differentiable signals (logic levels) rather than on/off.

Because there’s no such thing as a digital signal in real life (on/off isn’t real, there’s always tiny fluctuations) logic levels don’t use 0 volts or max voltage to represent 0 and 1. Instead, they use various thresholds and zones to represent different logic levels. With TTL technology, logic 0 is 0 volts to 0.8 volts, and logic high is 2 volts to the collector voltage (usually 5 volts). Those threshold zones exist to prevent random fluctuations from flipping from one logic level to another.

Since with base-16 you’d need 16 distinct logic levels, you’d need to have 16 different threshold zones to represent those logic levels as voltage. This poses even more of an issue when you consider that the allowable voltage range for modern desktop CPUs is 0 volts to ~1.4 volts.

Hexadecimal is used to display binary values because it’s more compact to display on a screen, since two hexadecimal digits can be used to represent eight binary digits, or one byte.

Anonymous 0 Comments

Hexadecimal is easier to read for humans than binary because it doesn’t require writing out as many digits, while still being easy to translate to binary. But binary is easier to do with electronics – 1 or 0, on or off. Operating in hexadecimal would require the ability to have 16 different states.

Anonymous 0 Comments

Hexadecimal is merely a representation of data; there’s nothing special about the nature of hexadecimal compared to other formats. It doesn’t enable data compression or anything like that.
The reason we often use hexadecimal notation when discussing values of bit-related things like bytes is simply because it makes the most sense. … And this fits nicely into our 8-bit bytes: two hex digits can represent every value of a byte.