How do computers KNOW what zeros and ones actually mean?


Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you are hung up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because that’s like dumbing down the process of human communication to the mere fact of relying on an alphabet.


47 Answers

Anonymous

When it comes to stored data, like numbers, letters, RGB colors, etc., the 1s and 0s are interpreted by the software. The software knows how to interpret them because the programmer programmed it that way. Each file format is like a different secret code for what those 1s and 0s mean. Sometimes the software will look at the filename’s extension (.html, .xls, .jpg, .png, etc.) and choose an interpretation method based on that. Sometimes the software will just do what it does, and sometimes the results won’t make sense.
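You can see this in action: the exact same bytes come out as text, a number, or a pixel depending purely on which interpretation the software applies. A minimal sketch in Python (the byte values here are just an arbitrary example):

```python
import struct

# The same four bytes, interpreted three different ways.
data = bytes([0x48, 0x69, 0x21, 0x00])

as_text = data[:3].decode("ascii")      # three ASCII characters
as_int = struct.unpack("<I", data)[0]   # one little-endian 32-bit integer
as_rgb = (data[0], data[1], data[2])    # one RGB pixel (red, green, blue)

print(as_text)  # Hi!
print(as_int)   # 2189640
print(as_rgb)   # (72, 105, 33)
```

Nothing in the bytes themselves says which reading is “right”; the program picks one.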

But software itself is also 1s and 0s. Is it interpreted by other software? But then what interprets *that* software? Is it software all the way down? No! (Well, sometimes there are two or three layers of software.) At the bottom is the hardware.

Inside the CPU are a lot of electric buttons protected by electrical cutouts. When a piece of software (an instruction, usually 32 or 64 bits, but it can be 8 or 16 on small microcontrollers) needs interpreting, the CPU plays “which hole does this shape fit into?”, but with electrical signals. When the matching hole is found, the electric button inside is activated. That turns on the specific parts of the CPU for that function: adding a few values together, telling the graphics card to change something on the screen, checking whether two values are equal, reading input from the mouse or keyboard, etc.
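That “shape fits into a hole” decoding can be sketched in software. Here’s a toy decoder for a made-up instruction format (the opcodes, field widths, and operations are all invented for illustration; real CPUs do this with wiring, not a lookup table):

```python
# Toy instruction decoder: the opcode bits pick which "button"
# (operation) gets activated. Everything here is an invented example.
OPCODES = {
    0b0001: lambda a, b: a + b,        # ADD
    0b0010: lambda a, b: a - b,        # SUB
    0b0011: lambda a, b: int(a == b),  # compare for equality
}

def execute(instruction):
    # Top 4 bits: the opcode (which "hole" the shape fits).
    # Next two 6-bit fields: the operand values.
    opcode = (instruction >> 12) & 0xF
    a = (instruction >> 6) & 0x3F
    b = instruction & 0x3F
    return OPCODES[opcode](a, b)

# "ADD 5, 7" packed into one 16-bit instruction word:
print(execute((0b0001 << 12) | (5 << 6) | 7))  # 12
```

The dictionary lookup stands in for the cutouts: the bit pattern itself selects which operation runs.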

Since it’s just electrical signals, every cutout can be tried at the same time. That makes it very fast to find the match and activate the correct parts of the CPU, then move on to the next part of the software (it does this automatically).

A bit of additional info:

A “compiler” takes code and turns it into software. It knows what cutouts the CPU has and what the buttons do, and it puts the right 0s and 1s in the right order to activate the right CPU buttons to do what the code describes. Different CPUs have different buttons behind different cutouts, so code often has to be compiled separately for different processors. However, there are some standards, so consumer PCs are usually compatible.
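The compiler’s job of putting the right bits in the right order can be sketched as a toy “assembler” that pairs with the made-up instruction format above (again, the mnemonics and bit encoding are invented for illustration):

```python
# Toy "assembler": turns human-readable mnemonics into the bit
# patterns a hypothetical CPU's decoder would recognize.
ENCODING = {"ADD": 0b0001, "SUB": 0b0010, "CMP": 0b0011}

def assemble(line):
    mnemonic, a, b = line.split()
    # Pack opcode and two 6-bit operands into a 16-bit word.
    return (ENCODING[mnemonic] << 12) | (int(a) << 6) | int(b)

program = ["ADD 5 7", "SUB 9 4"]
machine_code = [assemble(line) for line in program]
print([f"{word:016b}" for word in machine_code])
```

A compiler targeting a different CPU would just use a different encoding table, which is why the same source code can be compiled for different processors.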
