How do computers KNOW what zeros and ones actually mean?

1.36K views

Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you guys hang up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because that’s like dumbing down the process of human communication to the mere fact that it relies on an alphabet.


47 Answers

Anonymous 0 Comments

They know what each string of 0’s and 1’s means because we tell them what it means.

And by we, I mean the people that designed the operating system.

Anonymous 0 Comments

A one is when there’s a lot of current running through a component. A zero is when there’s not a lot of current running through a component. That’s pretty much the physical element.

Anonymous 0 Comments

They know because apart from the main information, they also store a lot of additional data to provide context, and all that _context data_ is standardized. For example, when you save a picture to a .jpg file, there’s not only the ones and zeroes of the pixels themselves, but also a lot of parameters that indicate that the content of the file is indeed a picture, what size it is in pixels, if it has transparency, what program/camera was used to create the file, etc.
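Here’s a rough sketch in Python of that idea: a program can peek at the first few bytes of a file (its “magic number”) to guess what kind of data follows. The JPEG and PNG signatures below are the real ones, but the function and the file name are just for illustration.

```python
def guess_file_type(path):
    """Guess a file's type from its first few bytes (its "magic number")."""
    with open(path, "rb") as f:
        header = f.read(8)                       # the first 8 raw bytes
    if header.startswith(b"\xff\xd8\xff"):       # JPEG files start this way
        return "JPEG image"
    if header.startswith(b"\x89PNG\r\n\x1a\n"):  # PNG files start this way
        return "PNG image"
    return "unknown"

print(guess_file_type("photo.jpg"))              # hypothetical file name
```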

Anonymous 0 Comments

If you’ve heard of an 8-bit CPU, 64-bit CPU, etc., that’s how many bits long each number is as it’s being worked on by the CPU. The more bits, the bigger the number that can be represented. If you have 8 bits, you can store an unsigned integer up to 255, or a signed integer from -128 to 127 (the system usually used for storing negative integers is called [two’s complement](https://www.youtube.com/watch?v=4qH4unVtJkE)).
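To make that concrete, here’s a tiny Python sketch of reading the same 8 bits as unsigned versus two’s-complement signed (just an illustration of the idea, not how a CPU literally does it):

```python
bits = 0b11111110                    # the bit pattern 1111 1110

unsigned = bits                      # read as unsigned: 254
# Two's complement: if the top bit is set, the value is negative.
signed = bits - 256 if bits & 0b10000000 else bits

print(unsigned, signed)              # 254 -2
```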

The rest of the videos on that channel are relevant to this question; there’s also nandgame.com.

Basically everything is a number. A letter, for example, is just a number that’s the index to where the letter’s image is stored in a font table. An RGB color is just three numbers, one representing the brightness of each color channel.
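A quick Python illustration of “everything is a number” (the character codes come from the standard ASCII/Unicode table; the 0xRRGGBB packing is one common convention, not the only one):

```python
print(ord("A"))        # 65   -- the number that stands for 'A'
print(chr(65 + 2))     # 'C'  -- two places further along the table

red, green, blue = 255, 128, 0              # three brightness values, 0-255
packed = (red << 16) | (green << 8) | blue  # pack them into one number
print(hex(packed))                          # 0xff8000 -- an orange pixel
```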

Anonymous 0 Comments

The computer on its own doesn’t know what any of it means. *We* know what it means, and we know where we stored the data, so we can write a program that reads and interprets it in the correct way. A lot of that can be automated, because you can tell what type of file something is from the header. But again, that’s *our* way of interpreting the header, which is just a binary string, as meaning different file types and such.

The computer is, at a basic level, just following the instructions we give it to manipulate the data and, for instance, display things on the screen.

Anonymous 0 Comments

The computer itself doesn’t know. The code running on the computer decides. If the code says to add two things, the processor doesn’t care if the bits represent numbers or something else, it will add them as if they were numbers. If you add the bits that represent 2 (00000010) to the bits that represent ‘A’ (01000001), you’ll get some other bits: 01000011, that you can interpret as basically anything – as a number you’ll get 67 and as a letter you’ll get ‘C’, for example.

In other words, if the code says to display 01000111 as a number, you’ll see 71, and if it says to display it as a letter, you’ll see G.
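You can see the same thing in a couple of lines of Python (just a sketch of the idea):

```python
bits = 0b01000011          # the result of adding 2 and 'A' above
print(bits)                # 67  -- interpreted as a number
print(chr(bits))           # 'C' -- interpreted as a letter

bits = 0b01000111
print(bits, chr(bits))     # 71 G -- same bits, two readings
```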

This ability to reinterpret data as whatever you want is a really powerful concept in low-level programming; you can do a lot of neat tricks with it.

Most programmers don’t deal with this directly, though: they can say that some data is supposed to be a number, and they’ll get an error if they try to do letter operations on it. However, that type information is thrown away when generating the instructions the processor works with directly.

Anonymous 0 Comments

[Ben Eater](https://youtube.com/@BenEater?si=3lpQNMXHunQ-uu_q) on YouTube has a great series on building an 8-bit computer from scratch and a series on building a 6502-based breadboard computer. Both are worth a watch and will answer your questions.

Anonymous 0 Comments

Ben Eater has [a good series of videos on YouTube](https://www.youtube.com/playlist?list=PLowKtXNTBypGqImE405J2565dvjafglHU) in which he builds a simple computer, working from how to store a 0 or 1 and read it back later, to interpreting those stored numbers as instructions in a simple programming language.

Something like [this](https://youtu.be/7zffjsXqATg?si=DBYwBZJIg3MiL2OD) might be a good place to start as an example of how 0s and 1s can become a human-readable display. Assuming that you have four wires which represent a 4-bit binary number, he designs a circuit which will display that number on a 7-segment display.
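In software, the same idea looks roughly like this (the segment letters follow the usual a–g labeling, but this table is just an illustration, not his actual circuit):

```python
SEGMENTS = {                 # digit -> segments (a-g) that should light up
    0: "abcdef",  1: "bc",     2: "abdeg",  3: "abcdg",
    4: "bcfg",    5: "acdfg",  6: "acdefg", 7: "abc",
    8: "abcdefg", 9: "abcdfg",
}

def decode(value):
    value &= 0b1111                    # keep only the low 4 bits
    return SEGMENTS.get(value, "")     # values 10-15 left blank here

print(decode(0b0101))                  # "acdfg" -- the digit 5
```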

Anonymous 0 Comments

The key thing is that the computer isn’t looking at a single 1 or 0 at a time, but 32 or 64 of them at once. These represent numbers in binary, and when you design a CPU architecture, you define which number corresponds to which command. The wires carrying the number are routed to different places, according to your design document, to carry out the different commands in the CPU.
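A toy sketch of “this number means this command” (the opcode values and names here are made up for illustration; a real CPU defines its own table in its architecture manual):

```python
OPCODES = {
    0b0001: "ADD",      # add two values
    0b0010: "LOAD",     # load a value from memory
    0b0011: "STORE",    # store a value to memory
    0b1111: "HALT",     # stop the machine
}

instruction = 0b0001_0010          # high 4 bits: opcode, low 4 bits: operand
opcode  = instruction >> 4
operand = instruction & 0b1111

print(OPCODES[opcode], operand)    # ADD 2
```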

Other people build devices like screens and keyboards, and they all take specific numbers corresponding to commands that say “make this pixel red” or “make sure the CPU knows I pressed the F key”. There is a layer of translation (drivers) between the CPU and those devices that allows your computer to work with a variety of devices. For example, if the number 4 corresponds to the 4th pixel from the top on one brand of display but the 4th pixel from the bottom on another, the drivers tell the CPU that information. How? More numbers, of course!
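Here’s a made-up example of that layer of translation: two hypothetical displays that number their rows differently, hidden behind one function so the rest of the system doesn’t have to care:

```python
HEIGHT = 480   # rows on this hypothetical display

def to_physical_row(logical_row, counts_from_bottom):
    # Some panels count rows from the top, others from the bottom;
    # the driver hides that difference from the rest of the system.
    if counts_from_bottom:
        return HEIGHT - 1 - logical_row
    return logical_row

print(to_physical_row(4, counts_from_bottom=False))  # 4
print(to_physical_row(4, counts_from_bottom=True))   # 475
```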

Anonymous 0 Comments

> I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

Sounds like you’re talking about [logic gates](https://www.techspot.com/article/1830-how-cpus-are-designed-and-built-part-2/)
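For a feel of how those gates add up to the “routing” you’re imagining, here’s a tiny Python sketch: two gates (XOR and AND) combined into a half adder, the basic circuit that adds two bits:

```python
def XOR(a, b): return a ^ b   # "one or the other, but not both"
def AND(a, b): return a & b   # "both"

def half_adder(a, b):
    """Add two single bits; return (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")
```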