How do computers KNOW what zeros and ones actually mean?


Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you guys got hung up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because it’s like dumbing down the process of human communication to the mere fact of relying on an alphabet.


47 Answers

Anonymous 0 Comments

Context is how they know it. This is why every file has a format: MP3, PDF, etc. These formats essentially act like a map, and the computer has a legend for how to read and decipher it. So the operating system can tell that if it sees an MP3 file, it should start and end with a certain pattern, and everything between those patterns should be set up so that each set of bits decodes to a certain sound. It reads those numbers, compares them to the legend it is given, then sends the signals to the speakers to play the sound before moving on to the next set of 0s and 1s.
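A minimal sketch of the “legend” idea: many formats announce themselves with a fixed pattern of leading bytes (a “magic number”), and software compares against a table of known patterns. The three signatures below are real ones; the `identify` helper is invented for illustration.

```python
# Identifying a file's format from its first few bytes ("magic numbers").
# PNG, PDF, and ID3-tagged MP3 files really do begin with these patterns.

SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
    b"ID3": "MP3 audio (ID3 tag)",
}

def identify(data: bytes) -> str:
    """Compare the leading bytes against the 'legend' of known patterns."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(identify(b"%PDF-1.7 ..."))  # PDF document
print(identify(b"hello"))         # unknown
```

In practice software also uses the filename extension and metadata, but the principle is the same: a stored table of patterns gives meaning to otherwise anonymous bits.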

This context is used for everything, really. Even before the operating system starts, there is a BIOS that essentially boots up the OS. The makers of the BIOS and the OS worked together so the computer executes the OS startup properly.

Anonymous 0 Comments

There is a large table, created by CPU manufacturers. Every x bits (each a zero or one) is interpreted as a command from that table and executed by the CPU. There is a register that always points to the current command in memory, and once the CPU has executed it, the register moves on to the next command. When the computer starts, execution begins at the first memory spot, where your boot code will load and start your operating system code.

Commands usually have a fixed length that depends on the architecture, typically 8, 16, 32, or 64 bits. And some commands require additional information that comes right after them; for example, the command for adding two numbers requires information about which numbers to add and where to save the result.

As for the data itself, the computer doesn’t inherently know what it is, and if the CPU accidentally gets there, it will simply run it as commands (creating chaos). It’s up to the programmer to use the data in a way that makes sense. For example, if you have data representing a pixel map of an image, you can tell the CPU to put it in a special place from which the screen gets the data for its next frame. If you have data representing a letter, you need to first convert it into a pixel map (probably using font-rendering software) before placing it where the screen reads its images.
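The fetch–decode–execute cycle described above can be sketched as a toy loop. The instruction set here (opcodes 0, 1, 2) is invented for illustration; real CPUs have hundreds of opcodes, but the shape is the same: a register points at the current command, the command is looked up in a table, and execution moves on.

```python
# Toy sketch of the "large table" idea: each opcode is looked up and executed,
# and a register (pc) always points at the current command in memory.

memory = [
    1, 5,      # LOAD 5 -> put 5 in the accumulator
    2, 3,      # ADD 3  -> add 3 to the accumulator
    0,         # HALT
]

acc = 0        # accumulator register
pc = 0         # program counter: index of the current command

while True:
    opcode = memory[pc]
    if opcode == 0:          # HALT
        break
    elif opcode == 1:        # LOAD: the next cell holds the value
        acc = memory[pc + 1]
        pc += 2
    elif opcode == 2:        # ADD: the next cell holds the operand
        acc += memory[pc + 1]
        pc += 2
    else:
        raise RuntimeError("unknown opcode - a real CPU would run chaos here")

print(acc)  # 8
```

Note that `memory` is just a list of numbers: if `pc` wandered into a region holding data instead of code, the same loop would happily try to execute it, which is exactly the “chaos” the answer mentions.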

Anonymous 0 Comments

So what you’re missing is the idea of representation. It’s not really 0s and 1s; it’s various patterns of presences and absences. How to interpret a pattern is up to the particular program. If it’s a word processor, the number 220 might be a character; if it’s a graphics program, it might be a color; if it’s an audio program, a particular sound. The idea is that in different contexts, the same digital combinations are interpreted differently.

So if you opened up a video file in a program expecting a number, you’d get a very large number instead of a movie.
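A quick sketch of this: the same four bytes read as text, as an integer, and as a floating-point number. Nothing in the bytes themselves says which reading is “right”; the program chooses.

```python
import struct

# The same four bytes, interpreted three different ways.
raw = b"\x41\x42\x43\x44"

as_text = raw.decode("ascii")          # the bytes as ASCII characters
as_int = struct.unpack("<I", raw)[0]   # the same bits as a little-endian unsigned int
as_float = struct.unpack("<f", raw)[0] # the same bits as a 32-bit float

print(as_text)   # ABCD
print(as_int)    # 1145258561
print(as_float)  # roughly 781.0
```

This is exactly the “video file as a number” situation in miniature: hand the bits to a different interpreter and you get a wildly different answer from identical data.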

Anonymous 0 Comments

Computers “know” what to do with certain patterns of ones and zeros in the same way that a car “knows” to go in reverse when you move the shifting lever to the “R” position.

This is a highly simplified explanation that ignores a lot of the nuance, but in the same way that a car’s engine only turns in one direction at a given rate (it’s the shifting of gears that controls how that spinning is used, either by changing the direction or changing the gear ratio), a computer’s engine is also sending signals in one direction. What those zeroes and ones *actually* do is steer where that signal goes.

The way that this works is actually pretty simple: we have a gate called an AND gate, and this gate only allows a signal through if both of its inputs are turned on. Let’s say that a computer has two “bits” of memory: one in position one, and one in position two. In order to get the memory in position one, you would send the signal “10” to the computer, activating the AND gate for the first memory bit, but not the second one.

Taking this a step further, let’s say that your screen has two pixels; pixel one and pixel two. We want to turn pixel two on if memory bit one has a “1” inside it. Then, we send a “10” signal to the memory, while at the same time, sending a “01” signal to the display. Now, the “1” signal inside the first bit of memory is allowed to “flow through”, and then continues to “flow through” to the second pixel on the screen.

If the memory had a “0” instead, even though we sent a “10” signal to the memory, the first memory cell wouldn’t have a “1” to help activate the AND gate, and no signal would flow, meaning the pixel would stay turned off.

In all honesty, all a computer is actually is this relatively simple system just stacked and layered on top of itself a mind-boggling amount of times— which “place” to send a signal to (memory or display) is determined by another set of signals, what color to make the pixel is another set of signals, where to store memory is another.

This explanation, again, is very very simplified, and there are, in reality, many logic gates which serve different functions, hardware chips that use different gate architectures designed to accelerate specific functions, and so, so much overhead— not to mention the fact that computers operate in binary, meaning that a “10” would actually correlate to memory bit 2, not 1. But I think the core essence of it is this idea of “gates”, all of which work together to move the signal from one place to another by only allowing certain signals through to certain places.
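The gate-and-select scheme in this answer can be simulated in a few lines. This is a sketch of the idea only, assuming the same two-bit memory as above: each memory cell’s value passes through an AND gate with its select line, and the gate outputs are OR-ed onto one shared wire.

```python
# Each cell's output is ANDed with a "select" line; only the selected
# cell's value flows through onto the shared output wire.

def and_gate(a: int, b: int) -> int:
    return a & b

memory = [1, 0]   # two one-bit memory cells, as in the answer above

def read(select):
    out = 0
    for bit, sel in zip(memory, select):
        out |= and_gate(bit, sel)   # OR the gate outputs onto one wire
    return out

print(read([1, 0]))  # select cell one -> its stored 1 flows through
print(read([0, 1]))  # select cell two -> its stored 0 flows through
```

Stacking this pattern (selects choosing memory cells, other selects choosing pixels, and so on) is the “layered a mind-boggling amount of times” part.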

Anonymous 0 Comments

When it comes to stored data, like numbers, letters, RGB colors, etc., the 1s and 0s are interpreted by the software. The software knows what to interpret them as because the programmer programmed it that way. Each file format is like a different secret code for what those 1s and 0s mean. Sometimes the software might look at the filename’s extension (.html, .xls, .jpg, .png, etc.) and choose an interpretation method based on that. Sometimes the software will just do what it does, and sometimes the results won’t make sense.

But software itself is also 1s and 0s. Is it interpreted by other software? But then what interprets *that* software? Is it software all the way down? No! (Well, sometimes there are 2-3 layers of software.) At the bottom is the hardware.

Inside the CPU are a lot of electric buttons that are protected by electrical cutouts. When a part of software (usually 32 or 64 bits, but can be 8 or 16 on microprocessors) needs interpreting, the CPU plays ‘which hole does this shape fit into?’ but with electrical signals. When the matching hole is found, the electric button inside is activated. This turns on specific parts of the CPU for that function. This could be adding a few values together, telling the graphics card to change something on the screen, checking if 2 values are equal, reading input from the mouse or keyboard, etc.

Since it’s just electrical signals, every cutout can be tried at the same time. This makes it very fast to find the answer and activate the correct CPU bits, then move on to the next part of the software (it does this automatically).

a bit of additional info:

A “compiler” takes code and turns it into software. It knows what cutouts the CPU has and what the buttons do, and it puts the right 0s and 1s in the right order to activate the right CPU buttons to do what the code describes. Different CPUs have different buttons behind different cutouts, so code often has to be compiled for different processors separately. However, there are some standards, so consumer PCs are usually compatible.
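A toy sketch of that last step, translating human-readable mnemonics into the numeric patterns a CPU would match against its “cutouts”. The opcode numbers here are invented for illustration; a real assembler targets the actual encoding of a specific CPU.

```python
# A miniature "assembler": each mnemonic maps to an invented opcode number,
# and operands follow their opcode directly in the output.

OPCODES = {"HALT": 0, "LOAD": 1, "ADD": 2}

def assemble(lines):
    out = []
    for line in lines:
        parts = line.split()
        out.append(OPCODES[parts[0]])            # mnemonic -> opcode
        out.extend(int(p) for p in parts[1:])    # operands come right after
    return out

program = assemble(["LOAD 5", "ADD 3", "HALT"])
print(program)  # [1, 5, 2, 3, 0]
```

Because the opcode table differs between CPU families, the same source line assembles to different numbers on different processors, which is why code is compiled per target.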

Anonymous 0 Comments

The easiest answer? You tell it! When you’re programming (in typed languages, at least), you tell the computer, “Hey, this memory location has a 32-bit integer in it,” or “Use the value stored in this memory location as the intensity of red in this pixel,” etc.

Everyone else is also right of course, but this is how it works at a high level. Most people when programming or operating computers don’t tend to think about the logic gate or transistor level.
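A small sketch of “you tell it”: the format string below is the programmer’s declaration of what the bytes mean, here a 32-bit signed integer followed by four one-byte color channels. The layout is invented for illustration, not any real file or API format.

```python
import struct

# The format string "<iBBBB" is the programmer telling the computer:
# a little-endian 32-bit int, then four single bytes (say, R, G, B, A).

buffer = struct.pack("<iBBBB", -1234, 255, 128, 0, 255)

count, r, g, b, a = struct.unpack("<iBBBB", buffer)
print(count)         # -1234
print((r, g, b, a))  # (255, 128, 0, 255)
```

Unpack the same `buffer` with a different format string and you would get different, equally “valid” values; the declaration, not the bytes, carries the meaning.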

Anonymous 0 Comments

It’s a little above an ELI5, but this is perhaps the easiest ‘computer architecture’ course on YouTube:

It starts from ‘this is how a simple one-chip timer circuit works’, expands that into a ‘simple computer clock’, and then goes through each part of the entire design process, from +5V to running actual code.

The short answer to your question is a combination of good design and good timing. For any set of data bits in a computer, there’s another set of address bits that references them. So it’s not that the computer knows that a given set of bits is a pixel, it’s that when it calls for the bits that belong to that pixel’s address, those bits come up.

And the address is coded somewhere else in memory, which in turn has an address and set of instructions associated with it, and you can work backwards like so until you get to the very first bit the computer can ‘recognize’, the power button.

The term ‘booting up’ a computer comes from the word ‘bootstrap’, which comes from the old saying ‘lift one’s self up by one’s bootstraps’, an impossible thing to do.

The first ‘bit’ is power = on. That goes to a simple circuit that starts distributing power and turns on a slightly more complex set of instructions that do various tasks, and turn on even more complicated instructions, and so on and so on.

All of this is done by synchronizing each subsystem to a centralized clock, and using various control signals to turn chips on and off at the right time so that it can read from the central data bus that shuttles the bits around.

Anonymous 0 Comments

Imagine a treasure hunt in a city. You are the processor, which needs to do something to obtain a result. Throughout the city there are seemingly random post-it notes with words written on them. Those are all the words in the dictionary, and they may mean different things. For example, ‘left’ may mean move left, lift your left arm, or just the word ‘left’. You start at your home, see what’s written outside your door, and you just know that you have to keep going forward and execute everything you find on your path as a command.

You read the first note. It says “LEFT”. What is it, a direction to remember, an instruction to turn left right now, or just a word? Well, you know that you don’t have any instructions up to now, so it must be a new instruction. You turn left. You keep going and find another note: “SHOUT”. It must be another command, but you don’t know what to shout, so you keep it in mind and keep on going. Next note: “LEFT” again. What do you do now? You may say you should turn left, but you still have to complete the previous command, so you cannot start another. You then shout “left!”.

Both notes with the word left are indistinguishable, but the word means different things depending on your current state. That’s how computers know which meaning a datum has: data doesn’t mean anything by itself; for it to have a meaning, you have to take into account the current state of the machine.
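The treasure hunt above is a small state machine, and can be sketched directly. The word list and action strings are just the story retold in code; the point is that the same word, “LEFT”, is handled differently depending on the machine’s current state.

```python
# The same note ("LEFT") is a command or data depending on current state.

notes = ["LEFT", "SHOUT", "LEFT"]

actions = []
pending = None   # state: a command still waiting for its operand, or None

for word in notes:
    if pending == "SHOUT":
        actions.append(f"shout '{word.lower()}'")  # here, LEFT is just data
        pending = None
    elif word == "SHOUT":
        pending = "SHOUT"        # remember: the next word is what to shout
    elif word == "LEFT":
        actions.append("turn left")  # no pending command, so it's an instruction

print(actions)  # ['turn left', "shout 'left'"]
```

The two identical “LEFT” notes produce two different actions purely because `pending` (the machine’s state) differs when each is read.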

Anonymous 0 Comments

I’m getting in late but I don’t think the top answers are what OP is looking for.

The answer is context. In the ones and zeros language of computers the letter A is defined with the exact same pattern of ones and zeros as the number 65. When a calculator app uses that pattern it “knows” that it’s a number. When a texting app uses the pattern it’s a letter. Programmers define what the interpretation should be in the particular context.
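This A/65 coincidence is easy to check: the bit pattern 01000001 is both readings at once, and the code that uses it picks one.

```python
# One bit pattern, two readings: 65 to a calculator, 'A' to a texting app.

pattern = 0b01000001

print(pattern)       # 65  (read as a number)
print(chr(pattern))  # A   (read as an ASCII character)
```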

Anonymous 0 Comments

Well, the answer is that it “knows” because we tell it what to expect. Think of memory as a long long sequence of ones and zeros, and we break it into groups (bytes) with addresses that start at 0. When the computer starts, it knows what address to look at first, and what to expect there (code to start running to boot up). When running any program, we give it the starting address for that code, and when the code needs to look at data, the code supplies the data address and a description of the data that the programmer decided (look here, you’ll find an integer).
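The “long sequence of bytes with addresses” picture can be sketched like this. The addresses and layout below are invented for illustration; the point is that the code finds an integer at address 4 only because the programmer decided it would be there.

```python
# Memory as one long run of bytes, addresses starting at 0. The program
# "knows" a 32-bit int lives at address 4 only because we decided so.

memory = bytearray(16)                      # 16 bytes, addresses 0..15

value = 100000
memory[4:8] = value.to_bytes(4, "little")   # store a 32-bit int at address 4

read_back = int.from_bytes(memory[4:8], "little")
print(read_back)  # 100000
```

Read the wrong four bytes, or read them with a different description (say, as four separate characters), and you get a perfectly valid but meaningless answer, which is why the address and the description travel together in the code.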