Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.
I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.
What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, a letter, a pixel, an RGB color, or any of the other types of data that computers are able to render.
*EDIT: A lot of you guys are getting hung up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).
I think that somewhere under the hood there must be a physical element (like a table, a maze, a system of levers, a punch card, etc.) that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into, for lack of a better word, different tunnels? One for letters, another for numbers, yet another for pixels, and so on?
I can’t make do with just the statement that computers speak in ones and zeros, because that’s like reducing the whole process of human communication to the mere fact that it relies on an alphabet.
Many of these answers miss the point of OP’s question (as I understand it), and definitely are not ELI5 level.
OP, the binary strings (and their hexadecimal equivalents) for functions, characters of text, etc. are defined in standards. The simplest reference is ASCII (https://www.rapidtables.com/code/text/ascii-table.html), so you can see what that looks like.
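For instance, here’s a quick Python sketch (just my own illustration of the idea): the bit pattern 01000001 is simply the number 65, and it only “means” the letter A because the ASCII standard says 65 maps to A.

```python
bits = "01000001"        # eight bits, nothing more
value = int(bits, 2)     # read the bits as a plain number
print(value)             # 65
print(chr(value))        # 'A' -- only because the ASCII standard maps 65 to 'A'
```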
Data is structured in defined block sizes and sequences to let the system “know” what a segment of code is for (“this next bit is for a character in a Word doc”), and the value passed to the system then has meaning and instructions (type an “A”).
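To make that “context gives the bits their meaning” point concrete, here’s another small Python sketch (again just an illustration, not how any particular program is written): the exact same three bytes get read as text, as one big number, or as an RGB color, depending only on what the program decides they are for.

```python
data = bytes([72, 105, 33])           # one fixed pattern of 24 bits

# Treated as ASCII text:
print(data.decode("ascii"))           # Hi!

# Treated as a single unsigned integer (big-endian):
print(int.from_bytes(data, "big"))    # 4745505

# Treated as an RGB color (one byte per channel):
r, g, b = data
print((r, g, b))                      # (72, 105, 33) -- a darkish green
```

The bits never change; only the rules the program applies to them do.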