How do computers KNOW what zeros and ones actually mean?

Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you guys are hung up on the word “know”, emphasizing that a computer does not know anything. Of course I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because that’s like dumbing down the whole process of human communication to the mere fact that it relies on an alphabet.

47 Answers

Anonymous 0 Comments

My instructor, back when I studied digital electronics, stated that a processor is nothing more than a million morons working at the speed of light.

What that means is that a processor, the computer chip that tells the computer what to do, can’t do anything unless it is told to do something. A processor has its logic gates hard-wired into the chip, and comes with built-in instructions known as firmware. Programs that you load are called software because they can be modified; firmware can’t be.

Think of the processor as a giant waterworks station, where a series of 1’s and 0’s tells the station which ‘gates’ to open (logic gates, in a computer). This directs the flow of water to physically do something, such as turn a light on, turn a light off, open some more gates, and so on. The station knows which gates to open because a 1 represents water ‘pressure’ (or voltage, in our computer chip) and a 0 represents no pressure. By opening a particular series of gates, the information carried by the water tells the station where to direct the flow, and in doing so determines the outcome of that flow. Note that peripherals such as printers all have their own internal waterworks as well; the computer’s water station uses a series of ‘pipes’ (a ‘bus’, in computer terms) to pass information on to whichever station needs to deal with it, and each peripheral has its own ‘firmware’ to understand the logic being passed along.

I know this is rather simplistic, but I hope it gives you some idea of how a computer processes information. Basically, you feed it a series of data, the processor interprets it, and the result directs something to happen. Some of this you see directly, such as an image changing on the screen; other parts are invisible, such as handling timers, interrupts, memory management and so on. But nothing can occur without the processor being told to do something. It is not automatic (it can be in an embedded system, but that’s a different discussion).

Your home computer cannot do anything without being told to, and that’s where the operating system comes in (think water station), which handles the ‘flow’ of water, or data, internally. Think of the operating system as a giant reservoir in which the amount of water (data) is fixed, and all it does is circulate it through the system, operating various gates and causing ‘work’, such as updating your screen as you type. The water station also handles external ‘flow’ (think of water coming in from an outside source, such as an external reservoir) and directs that data through the same set of gates. Regardless of what the software does, it all has to use the same set of gates. Like a physical water station (at least in our scenario), the processor can’t be modified; it is forced to use its hard-coded ‘infrastructure’ and can’t create new gates, only use the ones it already has access to. (Some gates are protected, but we won’t get into that, as I am trying to keep it simple lol.)
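
To make the ‘bits opening gates that direct the flow’ idea a bit more concrete, here is a minimal sketch in Python (an editor’s illustration, not part of the analogy above): a single control bit acts like a valve and decides which of two inputs reaches the output, using nothing but AND, OR and NOT.

```python
# A 2-to-1 multiplexer built only from basic gates: the control bit "sel"
# decides which input "flows" through to the output, like a valve in the
# waterworks analogy. Every value is a 0 or a 1.

def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def mux(a, b, sel):
    # If sel is 0 the output follows a; if sel is 1 it follows b.
    return OR(AND(a, NOT(sel)), AND(b, sel))

# Same two inputs, routed differently depending on the control bit:
print(mux(1, 0, sel=0))  # 1 -> input a flows through
print(mux(1, 0, sel=1))  # 0 -> input b flows through
```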

Anonymous 0 Comments

The most basic logical operations are the Boolean logic gates. Those are the core of all computing, and they can also be hardwired with actual cables. Let’s say the “AND” gate is what you use when you want something to happen only if two other things both happen. For example, you want a lamp to turn on if both the dog and the cat are at home. Say you have a dog sensor and a cat sensor, each giving an electric signal, and your lamp is wired in a way that it turns on only if both sensors are on. If you check the lamp and it’s on, you know that both animals are at home; otherwise either or both of them are out.
An “OR” gate is when either input is enough to turn the light on. You can hardwire that too, and it makes the previous setup a bit different: now the light will be on if either one of them is at home, but also if both of them are at home.
A “XOR” gate is such a lamp, but now it’s on only if exactly one animal is at home. If both are out or both are in, the lamp is off.

So you can physically wire the two sensors together with the lamp, and add switches here and there in such a way that if these particular switches are on, the whole thing works as an AND gate, but if this switch is off and that one is on, it works as a XOR gate.

So now we have basically built a primitive computer. The dog sensor and the cat sensor are the inputs. They each produce an input signal of either 0 (animal out) or 1 (animal in). The lamp is the output, producing an output signal of 1 or 0 (on or off). And the states of the switches are also 1s and 0s, defining whether today we want to run our circuit in AND mode or XOR mode or any other mode; that is determined by which combination of switches is on. The output lamp can then be an input signal for the next lamp, which takes this lamp and, say, the presence of the owner as its two inputs, again hardwired using switches.
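
(An editor’s aside: the sensor-and-lamp wiring above is easy to play with in code. This tiny Python sketch just tabulates the three lamps for every combination of the two sensors.)

```python
# dog and cat sensors: 1 = animal is at home, 0 = animal is out
def and_lamp(dog, cat):
    return dog & cat   # on only if both are home

def or_lamp(dog, cat):
    return dog | cat   # on if at least one is home

def xor_lamp(dog, cat):
    return dog ^ cat   # on if exactly one is home

print("dog cat AND OR XOR")
for dog in (0, 1):
    for cat in (0, 1):
        print(dog, cat, and_lamp(dog, cat), or_lamp(dog, cat), xor_lamp(dog, cat))
```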

Now, these basic operators are actually hardwired in a computer too. So when the 0s and 1s come to the processor (or to one little elemental unit of the processor), they come in a certain well-defined order. Some of the 0s and 1s represent the switches, so that one element in the processor is put into an AND state or OR state or XOR state or NOR state etc. The rest of the numbers represent the input signals. Then this element either lights up and becomes a 1, or stays 0, based on the inputs *and* the switches.

And this happens terribly fast, with millions of these basic units doing it in parallel.

So when it comes to a program, everything in it breaks down into such series of 0s and 1s: some of them tell the processor how to handle the rest, and some are the actual stuff to be handled.
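
Here is a rough sketch of that last point (an editor’s illustration in Python, with a made-up encoding, not how any real chip numbers its operations): the first two bits act as the “switches” that pick the operation, and the remaining bits are the inputs it operates on.

```python
# Made-up encoding: the first two bits choose the gate
# (00 = AND, 01 = OR, 10 = XOR), the last two bits are the inputs.
GATES = {
    "00": lambda a, b: a & b,
    "01": lambda a, b: a | b,
    "10": lambda a, b: a ^ b,
}

def tiny_unit(bits):
    mode = bits[:2]                    # the "switch" bits
    a, b = int(bits[2]), int(bits[3])  # the input signals
    return GATES[mode](a, b)

print(tiny_unit("0011"))  # AND of 1 and 1 -> 1
print(tiny_unit("1011"))  # XOR of 1 and 1 -> 0
```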

Anonymous 0 Comments

The way I finally understood it is that the transistors and their layout are not arbitrary. The computer isn’t decoding anything on the fly and “knowing” what it means; the individual transistors matter less than the combinations of them, which are set up so that a certain string of bits flows through them in a certain way.

Imagine you have the instruction 0000 0000. The creator of the computer sets up the first 2 digits as the operation; say “10” means addition. That unlocks the flow down one particular path for the rest of the digits (while the other paths are blocked).

You also have move instructions; “1100 0100” might mean “move the value from data storage area A to the graphics card” and “1101 0100” might mean “move the value from data storage area A to the calculator”. The data part of the instruction might be exactly the same between the two, but because the layout is set up differently, it gets used differently.
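
As a hedged sketch of that decoding idea (an editor’s Python illustration using the made-up bit patterns from this answer; in real hardware this is fixed wiring, not a software lookup), it would behave something like this:

```python
# Invented instruction set from the answer above: the leading bits pick a
# path, and the trailing bits ride along as data.
def decode(instruction):
    if instruction.startswith("1100"):
        return ("move A -> graphics card", instruction[4:])
    if instruction.startswith("1101"):
        return ("move A -> calculator", instruction[4:])
    if instruction.startswith("10"):
        return ("add", instruction[2:])
    return ("unknown", instruction)

print(decode("11000100"))  # ('move A -> graphics card', '0100')
print(decode("11010100"))  # ('move A -> calculator', '0100')
```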

Each instruction is meaningless in and of itself; there is nothing that makes one instruction inherently a colour or a letter.

If you want to know more, play the game “Turing Complete”.

Anonymous 0 Comments

I think you are referring to stuff stored in a computer’s memory. Modern computers *don’t* know what they have stored in memory, at least at the hardware level. You can store text, numbers and pieces of programs on the same stick of RAM. If you tell the CPU to read instructions from some address, it’ll happily try to do so without question.

It’s actually a huge problem with modern computers. Imagine you have a web browser that loads some webpage into memory. If an attacker manages to get your program to continue executing from the part of memory that contains the webpage, they can execute arbitrary code on your computer. The text of the webpage would look like gibberish to a human who saw it, but the job of the CPU is to execute instructions, not to question whether the data in memory was really meant to be displayed as text.

This just moves the question: who knows what the data means, if there is no special way to distinguish between types of data at the hardware level? The answer is that it’s the job of the operating system, the compiler and the programmer.

I’ve actually lied a bit about the CPU executing anything unquestioningly. By default it does, but in pretty much every case your operating system uses hardware support in your CPU to do some basic enforcement of what can be executed and what can’t.

As for distinguishing between text and numbers and pixels, it’s the programmer’s job to do it. If you wanted, you could load two bytes that correspond to some text stored in memory and ask the CPU to add them together, and it would do it as if they were two numbers. You just don’t do that on purpose, because why would you? Of course, programmers don’t write machine code by hand; they write code in some programming language, and the compiler is responsible for turning it into machine code. In most programming languages you specify what type each piece of data is. So let’s say you add two numbers in some programming language I made up. The compiler knows you are adding two numbers because you marked them as such, so when you look at the compiled machine code, it’ll probably load the two numbers from memory into the CPU, ADD them together and store the result somewhere in memory. If you added two pieces of text together, the compiler knows it needs to copy all the characters of the first text, then all the characters of the second text, and so on. It knows exactly how the characters of the text are stored and how to find out how long the texts are. If you try to add two things that don’t have adding implemented, you get an error when compiling, so well before you ever run the code. So in practice you often don’t store the type of the data anywhere; you just use the data in the right way.
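
To make that concrete, here is a small Python sketch (an editor’s illustration, not the commenter’s): the same two bytes read as text, added as two small numbers, and read as one bigger number. The bytes never change; only what the code does with them does.

```python
import struct

data = bytes([72, 105])   # two bytes sitting somewhere in memory

# ...read as text:
print(data.decode("ascii"))          # 'Hi'

# ...read as two small numbers and added together:
print(data[0] + data[1])             # 177

# ...read as one 16-bit number (big-endian):
print(struct.unpack(">H", data)[0])  # 18537
```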

Of course there are exceptions to everything I said; if you want, you can store the data however you want. Interpreted programming languages store information about the type of the data alongside the data itself. And if you are saving some data into a file, you might want to put some representation of its type in there too…

Anonymous 0 Comments

Two links for you:

[NANDgame](https://www.nandgame.com/) and [Nand2Tetris](https://www.nand2tetris.org/)

And as no doubt everyone else has said – the computer doesn’t “know” anything; it just looks at a group of 1’s and 0’s (8 or 16 or 32 or 64 of them in a row), and if they match a particular pattern, that triggers the circuit to do something – move them somewhere, flip them, add them to another byte, compare them with another byte, etc.

You can make a (very slow) computer with marbles as 1’s and gaps (no marble) as 0’s, or with all sorts of other mechanisms:

[https://www.youtube.com/results?search_query=mechanical+logic+gates](https://www.youtube.com/results?search_query=mechanical+logic+gates)

Anonymous 0 Comments

In the simplest possible terms, and referring only to meaning (which was your question), it’s exactly the same principle that’s in play in our alphabet.

How do the letters in the alphabet actually mean something?

* They don’t really, on their own
* But, if you put them in combinations, specific combinations of letters make words and concepts
* In the same way, for computers, combinations of 1s and 0s make meanings; it just takes more of them (since you only have two symbols, you don’t have as many distinct combinations for a given number of digits, so you need to add more digits to make room for more meanings – a quick count is sketched below)
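
A quick back-of-the-envelope on that counting point (an editor’s addition): with n two-state digits you get 2^n distinct patterns, so you just keep adding digits until there is room for everything you want to name.

```python
# Number of distinct patterns you can make with n binary digits:
for n in (1, 2, 4, 8):
    print(n, "bits ->", 2 ** n, "combinations")

# 8 bits already give 256 patterns, enough for every letter, digit and
# punctuation mark in a basic character set such as ASCII.
```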

As far as how they “know,” if you remember that 0 and 1 are really “the electricity on this wire is off” or “the electricity on this wire is on,” there’s a cool trick going on:

* The 0s and 1s can both mean something and *do* something (because electricity can *do* things) at the same time
* Through clever design, they’re basically wired up in such a way that this can be taken advantage of—imagine if letters in our alphabet were able to do work (like the electricity of the 0s and 1s is able to do work)
* You could build a dictionary using letters that would be able to “do things” like making use of its own definitions to accomplish things
* Like interpreting and acting on (“knowing”) these definitions

This is a VERRRY high-level explanation, but ELI5 basically demands that.

Anonymous 0 Comments

I think the answer you’re looking for comes down to hardware interfaces. Not sure if this can be ELI5, but maybe ELI15:

I’m going to skip over the parts you already know: 0 and 1 are really “on and off” in the transistors, and a string of them can mean things like a letter, a (larger) number, or a pixel in a game.

So how does it know whether a random string like 01101010 is a pixel, the number 106, a letter, or part of the physics calculation in a game?

This comes down to transistors and circuitry. If you’re aware of AND, OR and NOR gates, resistors, capacitors, etc., you know that you can build entire systems out of circuit components. Computers are just extremely complex versions of those things.

So where does the jump from 0s and 1s to complex circuitry like a computer take place? Well, this comes down to things like the BIOS (Basic Input/Output System), the operating system, and the firmware and drivers.

Again, without going into the details, people in the past figured out that certain combinations of AND/OR/NOR gates and signals let you turn data (0s and 1s) into a pixel, or interface with a keyboard, or turn it into an audio signal. The things people figured out back then get packaged up into BIOS firmware, drivers, or parts of the operating system.

So now that we’ve established ways to interface with the computer, and all the physical interface stuff is abstracted away (pre-packaged and invisible to the upper layers), we can do more from here. Computing is literally layers upon layers upon layers of abstraction, until you get near the top, where a programmer can edit human-readable code and compile (un-abstract) it back down to machine code.

Obviously there’s a lot more to this (this is an ELI15, after all), but hopefully it’s enough to bridge that unknown magical mystery clouding your head.

Anonymous 0 Comments

It “knows” based on context.

At the beginning there is a powered-off computer with a sleeping processor and a BIOS chip. As the computer is powered up, the BIOS ROM (Read-Only Memory) is connected to the processor’s bus and a “reset” signal is applied to one of the contact pins on the processor. The reset signal causes the processor to set its “program counter” register (the address of the next instruction to be read) to address 0x000000 – the beginning of the BIOS program. The first binary number (one byte in an 8-bit processor, 2 bytes in a 16-bit processor, 4 bytes in a 32-bit processor …) is an instruction, and the next bytes are interpreted either as further instructions or as addresses, depending on context. An instruction can stand alone (meaning the next number is the next instruction) or can take a set number of parameters (such as an address to read a number from, or one to write the result to). Each instruction is hard-wired into the processor: it makes the transistors, for example, increment the program counter to the address of the next instruction, perform a computation in the accumulator register, and/or many other things. A modern processor is very, VERY complicated and complex, with interrupts, pipelines, busses … lots of stuff. Find some videos that describe the functioning of a simple 8-bit computer; that can be understood by a mere mortal without years of studying.
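
A very loose sketch of that fetch-and-execute cycle (an editor’s toy example with an invented three-instruction machine, nothing like a real BIOS or CPU):

```python
# Toy machine: memory is a list of numbers, the program counter starts at
# address 0 after "reset", and each step reads an instruction plus an
# operand. Invented instruction set: 1 = load the next number into the
# accumulator, 2 = add the next number to the accumulator, 0 = halt.
memory = [1, 5, 2, 7, 0]   # load 5, add 7, halt
pc = 0                     # program counter
acc = 0                    # accumulator

while True:
    instruction = memory[pc]
    if instruction == 0:       # halt
        break
    operand = memory[pc + 1]
    if instruction == 1:       # load
        acc = operand
    elif instruction == 2:     # add
        acc += operand
    pc += 2                    # step past the instruction and its operand

print(acc)  # 12
```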

The machine-code program in the BIOS will set certain parameters, let the processor identify the connected disks and find the boot partition (where the next-stage machine-code program is located, which proceeds to load the operating system), and let the processor identify other parts of the system – the size and mapping of the RAM, the locations of graphics cards, network cards … A whole lot of work is done before the OS even starts loading. Once the OS has loaded, the computer can read files from disk and start processing their bytes according to context (the file extension in Windows, or a “magic number” – I am not kidding – at the beginning of the file in Linux or Unix …).
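
On the “magic number” point: many file formats really do begin with a fixed byte signature, and programs check it to decide how to treat the rest of the file. For example, every PNG image starts with the same eight bytes (the signature below is the real, published one; the little checker function is just an illustration):

```python
# PNG files begin with this fixed 8-byte signature; a program that sees it
# knows to interpret the following bytes as PNG image data.
PNG_SIGNATURE = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

def looks_like_png(path):
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIGNATURE
```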

Anonymous 0 Comments

Programmers program the meaning of those zeroes and ones. They don’t do anything and they don’t mean anything until they are defined by hardware and software engineers to mean something and do something. The pixels on your display are meaningless until someone defines some meaningful system for displaying information using the pixels, and someone seeing the displayed pixels agrees that they appear meaningful.

Anonymous 0 Comments

Most comments here seem to answer how a computer works, which doesn’t seem to be the question; the question is how a computer knows what data represents. And the answer to that is: it doesn’t.

Given just a memory address it isn’t possible to know what the data at that address represents.

For example, if address 10,000 contains the number 77, there’s no way to just tell what it represents. It *could* simply be the number 77, or it could be the letter ‘M’ for which the numerical representation is 77, or it could be the operation code 77 for the `dec ebp` instruction.

There’s no way to know what data represents except by looking at the context, i.e. the code that uses the data.

Sometimes you can get an idea by looking at the data. For example, if you find the number 1078530011 in memory, that *could* just be a number that happens to be 1078530011. But, coincidentally, that is also the floating-point encoding of pi, so there’s a good chance it actually *is* pi. You’d need to check the code that accesses it to be sure. And if you find that the numbers at some address happen to decode to the text “An error occurred, please try again”, then in all likelihood it *is* that text.

In the example of the number 77, it really could be anything. A reverse engineer would look at the code that accesses it. If the code turns out to store the most common letter in a word, he’ll know it represents the letter ‘M’; if the code tells the CPU to execute that address, he knows it’s the instruction ‘dec ebp’, and so on.
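
For the curious, here is that ambiguity in runnable form (an editor’s Python sketch of the examples above): the very same numbers printed as a number, as a letter, and as a 32-bit float.

```python
import struct

value = 77
print(value)       # 77  - just the number
print(chr(value))  # 'M' - the same 77 read as an ASCII character
# (In x86 machine code the single byte 0x4D, i.e. 77, is the `dec ebp` instruction.)

bits = 1078530011
# Reinterpret the integer's bit pattern as a 32-bit float:
print(struct.unpack("<f", struct.pack("<I", bits))[0])  # 3.1415927... - pi
```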