How do computers KNOW what zeros and ones actually mean?

Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you guys are getting hung up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element (like a table, a maze, a system of levers, a punch card, etc.) that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into, for lack of a better word, different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because that’s like dumbing down the process of human communication to the mere fact that it relies on an alphabet.

47 Answers

Anonymous

So you seem to understand that there are combinations of 0s and 1s that represent different things: a color, a letter, etc.

There are sets of such combinations that define a computer’s ‘instructions’; a modern-day computer is a machine that executes sequences of such instructions. These instructions are things that the computer can do with other combinations of 0s and 1s; for instance, an instruction can tell the computer to add the value stored at one memory location to the value at another location, and store the result in a third location.
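
To make that concrete, here’s a tiny C sketch of a toy machine with exactly one instruction. The encoding (an opcode byte plus three address bytes) and the opcode value 0x01 are invented for this example; no real CPU works exactly this way, but the idea — an instruction is itself just a bit pattern that the machine pulls apart and acts on — is the same:

```c
/* A toy machine with one made-up instruction: "add mem[a] and mem[b],
 * store the result in mem[c]". The instruction itself is just a bit
 * pattern, packed here as an opcode byte plus three address bytes. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t memory[16] = {0};
    memory[0] = 5;   /* first operand lives at location 0 */
    memory[1] = 7;   /* second operand lives at location 1 */

    /* Encode "ADD mem[0], mem[1] -> mem[2]" as one 32-bit word. */
    uint32_t instruction = (0x01u << 24) | (0u << 16) | (1u << 8) | 2u;

    /* Decode: pull the bit pattern back apart into its fields. */
    uint8_t opcode = (instruction >> 24) & 0xFF;
    uint8_t a = (instruction >> 16) & 0xFF;
    uint8_t b = (instruction >> 8) & 0xFF;
    uint8_t c = instruction & 0xFF;

    /* Execute: 0x01 is our invented opcode for ADD. */
    if (opcode == 0x01)
        memory[c] = memory[a] + memory[b];

    printf("memory[2] = %u\n", (unsigned)memory[2]);  /* prints 12 */
    return 0;
}
```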

From such basic instructions, computer programs are built to do things with large sets of 0s and 1s. Some computer instructions read 1s and 0s from ‘ports’ that are part of the computer hardware; in order to make any sense of what is read, the computer must be programmed to expect a certain set of 1s and 0s to come in on that port. For instance, some old-time computers used to communicate with “dumb terminals” which sent codes for letters on the ports to which they were attached; ‘A’ was 0100 0001, ‘B’ was 0100 0010, and so on. This particular set of codes is named ASCII; there are other encodings that represent letters as well.
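
Here’s a small C sketch of what that “expectation” looks like: the same incoming bytes read once as raw numbers and once as ASCII letters. The byte values are genuine ASCII codes; the “port” is just simulated by an array here:

```c
/* The same three incoming bytes, read two ways. The bytes don't change;
 * only the program's expectation of what they encode does. */
#include <stdio.h>

int main(void) {
    unsigned char incoming[] = {0x41, 0x42, 0x43};  /* 0100 0001, 0100 0010, 0100 0011 */

    /* Expectation 1: raw numbers. */
    for (size_t i = 0; i < sizeof incoming; i++)
        printf("%u ", (unsigned)incoming[i]);       /* prints: 65 66 67 */
    printf("\n");

    /* Expectation 2: ASCII letters. */
    for (size_t i = 0; i < sizeof incoming; i++)
        printf("%c", incoming[i]);                  /* prints: ABC */
    printf("\n");
    return 0;
}
```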

If someone had connected some other machine to that port and it had transmitted the SAME codes, the computer could have read them. But if that machine had transmitted some other code, the computer would still have tried to read it as ASCII, and it wouldn’t have worked out well, because what was being input was not ASCII.

This illustrates the basic answer to your question: in order to interpret a set of codes as colors, letters, numbers, etc., the computer needs to have some designation of what the codes are. In limited circumstances the computer could try different interpretations and perhaps settle on one if the circumstances were right, but mostly the computer has to ‘know’ (be programmed) to expect particular sets of codes in particular situations.
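
One last C sketch to drive that home: a single fixed 32-bit pattern, read four different ways. None of the four readings is “the” meaning of those bits; whichever one is right depends entirely on what the program was told to expect. (The exact integer and float values you see will depend on your machine’s byte order; the bit pattern itself never changes.)

```c
/* One fixed 32-bit pattern, four readings: text, integer, float, pixel. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint8_t bytes[4] = {0x48, 0x69, 0x21, 0x00};   /* the pattern */

    /* Reading 1: ASCII text (0x00 conveniently terminates the string). */
    printf("as text : %s\n", (char *)bytes);       /* prints "Hi!" */

    /* Reading 2: a 32-bit unsigned integer. */
    uint32_t n;
    memcpy(&n, bytes, sizeof n);
    printf("as int  : %u\n", (unsigned)n);

    /* Reading 3: a 32-bit floating-point number. */
    float f;
    memcpy(&f, bytes, sizeof f);
    printf("as float: %g\n", f);

    /* Reading 4: an RGBA pixel, one byte per color channel. */
    printf("as pixel: R=%u G=%u B=%u A=%u\n",
           (unsigned)bytes[0], (unsigned)bytes[1],
           (unsigned)bytes[2], (unsigned)bytes[3]);
    return 0;
}
```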

I’m happy to expand on this if it’s helpful; let me know if you want further information.
