How do computers KNOW what zeros and ones actually mean?


Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you guys got hung up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because that’s like dumbing down the process of human communication to the mere fact of relying on an alphabet.


47 Answers

Anonymous 0 Comments

Computers “know” what to do with certain patterns of ones and zeros in the same way that a car “knows” to go in reverse when you move the shifting lever to the “R” position.

This is a highly simplified explanation that ignores a lot of nuance. A car’s engine only turns in one direction at a given rate; it’s the shifting of gears that controls how that spinning is used, either by changing the direction or by changing the gear ratio. In the same way, a computer’s circuitry is constantly sending signals along, and what those zeros and ones *actually* do is steer where those signals go.

The way this works is actually pretty simple: we have a gate called an AND gate, and this gate only lets a signal through if both of its inputs are turned on. Let’s say that a computer has two “bits” of memory: one in position one, and one in position two. To read the memory bit in position one, you would send the signal “10” to the computer, activating the AND gate for the first memory bit, but not the second one.
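The selection trick above can be sketched in a few lines of Python. This is a toy model, not a hardware description: each bit’s stored value is ANDed with its select line, so only the selected bit’s value gets through.

```python
def and_gate(a: bool, b: bool) -> bool:
    """An AND gate passes a signal only when both inputs are on."""
    return a and b

# Two memory bits: bit one stores "1", bit two stores "0".
memory = [True, False]

# The "10" select signal: bit one's gate is activated, bit two's is not.
select = [True, False]

# Each bit's output is its stored value gated by its select line.
outputs = [and_gate(value, sel) for value, sel in zip(memory, select)]
print(outputs)  # [True, False]: only bit one's value flows through
```

Flipping the select signal to “01” would let bit two’s value through instead, which is all “addressing” memory really means at this level.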

Taking this a step further, let’s say that your screen has two pixels: pixel one and pixel two. We want to turn pixel two on if memory bit one has a “1” inside it. To do that, we send a “10” signal to the memory while, at the same time, sending a “01” signal to the display. Now the “1” signal inside the first bit of memory is allowed to “flow through”, and then continues to “flow through” to the second pixel on the screen.

If the memory had a “0” instead, even though we sent a “10” signal to the memory, the first memory cell wouldn’t have a “1” to help activate the AND gate, and no signal would flow, meaning the pixel would stay turned off.
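The memory-to-pixel path in the last two paragraphs is just two AND gates chained together, which can be sketched like this (again a toy model, with made-up function names):

```python
def and_gate(a: bool, b: bool) -> bool:
    return a and b

def pixel_state(stored: bool, mem_select: bool, pix_select: bool) -> bool:
    """The pixel lights only if both gates on the path are open."""
    signal = and_gate(stored, mem_select)  # memory gate: read the stored bit
    return and_gate(signal, pix_select)    # display gate: route it to the pixel

# Memory bit one holds "1"; "10" selects it, "01" selects pixel two.
print(pixel_state(True, True, True))   # True: pixel two turns on
print(pixel_state(False, True, True))  # False: memory held "0", pixel stays off
```

Note how a “0” anywhere on the path (in the stored bit or on either select line) is enough to keep the pixel dark, exactly as described above.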

In all honesty, all a computer is actually is this relatively simple system just stacked and layered on top of itself a mind-boggling amount of times— which “place” to send a signal to (memory or display) is determined by another set of signals, what color to make the pixel is another set of signals, where to store memory is another.
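The “which place to send a signal to” idea is what hardware folks call a demultiplexer. Here is a hedged one-bit sketch of it, built from nothing but an AND gate and a NOT gate, routing one data signal to either the memory line or the display line:

```python
def not_gate(a: bool) -> bool:
    return not a

def and_gate(a: bool, b: bool) -> bool:
    return a and b

def demux(data: bool, route_to_display: bool) -> dict:
    """Route `data` to memory when the select is off, to display when it is on."""
    return {
        "memory": and_gate(data, not_gate(route_to_display)),
        "display": and_gate(data, route_to_display),
    }

print(demux(True, False))  # {'memory': True, 'display': False}
print(demux(True, True))   # {'memory': False, 'display': True}
```

Stack enough of these selectors on top of one another and you get the layered routing the answer describes: one set of signals picks the destination, another picks the location within it, and so on.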

This explanation, again, is very, very simplified. In reality there are many logic gates that serve different functions, hardware chips that use different gate architectures designed to accelerate specific functions, and so, so much overhead. There’s also the fact that computers read those select signals as binary numbers, so a “10” would actually correspond to memory bit 2, not bit 1. But the core essence is this idea of “gates”, all of which work together to move the signal from one place to another by only allowing certain signals through to certain places.
