How do computers KNOW what zeros and ones actually mean?


Ok, so I know that the alphabet of computers consists of only two symbols, or states: zero and one.

I also seem to understand how computers count beyond one even though they don’t have symbols for anything above one.

What I do NOT understand is how a computer knows* that a particular string of ones and zeros refers to a number, or a letter, or a pixel, or an RGB color, and all the other types of data that computers are able to render.

*EDIT: A lot of you guys are hung up on the word “know”, emphasizing that a computer does not know anything. Of course, I do not attribute any real awareness or understanding to a computer. I’m using the verb “know” only figuratively, folks ;).

I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

I can’t make do with just the information that computers speak in ones and zeros, because it’s like dumbing down the process of human communication to the mere fact of relying on an alphabet.


47 Answers

Anonymous 0 Comments

‘Ones’ and ‘zeros’ are, in a sense, ‘on’ and ‘off’. In electronics they correspond to high and low voltage: if, for example, the voltage is above a certain threshold, the components treat it as a 1, and below that as a 0. These high and low voltage states arrive one after another in cycles, so two high-voltage states in a row, “11”, is read as the number 3 (a number built out of the binary digits 0 and 1).
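To make that last step concrete, here is a minimal Python sketch of how a string of high/low states becomes a number (the bit string “11” is just the example from above):

```python
# Interpreting two "high" states in a row ("11") as a binary number.
bits = "11"

value = 0
for bit in bits:
    value = value * 2 + int(bit)   # each new digit shifts the previous ones up by a power of 2

print(value)           # 3
print(int("11", 2))    # Python's built-in conversion agrees: 3
```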

Anonymous 0 Comments

It doesn’t know, and it doesn’t have to. The CPU has a very basic, general-purpose instruction set for things like addition, comparison, etc., operating on units of 32/64/… bits. You can orchestrate multiple of those basic instructions to build higher-level operations for your specific data type.

Want to compare two strings? Iterate over every 16-bit word from the starting address onward and check whether each pair is equal, until one of them is a zero (which marks the end of the string).
This means that even something that is a one-liner in your programming language can expand into a large number of those basic instructions.
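Here’s a rough Python sketch of that loop: compare two zero-terminated sequences unit by unit until a mismatch or a terminator is found. The data and the helper name are made up for illustration; a real CPU would do this with a handful of load/compare/branch instructions.

```python
def strings_equal(a, b):
    i = 0
    while True:
        if a[i] != b[i]:
            return False          # mismatch: not equal
        if a[i] == 0:
            return True           # both hit the 0 terminator together: equal
        i += 1

word_a = [72, 105, 0]             # "Hi" followed by a 0 terminator
word_b = [72, 105, 0]
print(strings_equal(word_a, word_b))  # True
```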

Anonymous 0 Comments

The computer knows nothing; it just manipulates its memory and registers the way the program instructions tell it to. It’s up to programmers to write the software so that this results in something meaningful. To the computer, an ‘A’ or a ‘B’ makes no difference: it’s just some pattern of bits written to some memory address so that some pixels light up on a screen.

Anonymous 0 Comments

Well, it’s not really 0 and 1; we use that notation so humans can make sense of it. What actually happens is that your computer’s components communicate using electrical signals: 1 is a strong pulse of electricity and 0 is a weak pulse.

Your computer receives a series of electric pulses from your keyboard or mouse and does a lot of computation inside by moving those signals through the CPU, GPU, memory, etc. Each component alters them in its own way, and in the end the result is sent back to your screen as a series of electric pulses.

Each component interacts with the electric pulses differently: your screen changes the color of its pixels, your memory stores them or passes them on to another component, your CPU and GPU perform instructions based on them and deliver the result back as electrical impulses, and so on.

How your computer identifies a series of 1s and 0s as a certain number or letter comes down to a sort of dictionary (or, better put, a series of instructions) that determines what different components should do with the pulses they receive. Right down at the most basic level, your computer is a very big collection of circuits that perform different computations depending on the electric pulses they receive, and the results they generate are translated by your interface devices into information that is useful to humans.
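A tiny Python illustration of that “dictionary” idea: the very same bit pattern means different things depending on which interpretation the software applies to it. The value 01000001 is just an example.

```python
byte = 0b01000001                 # the raw bits: 01000001

print(byte)                       # as a number:        65
print(chr(byte))                  # as ASCII text:      'A'
print((byte, 0, 0))               # as the red channel of an RGB pixel: (65, 0, 0)
```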

Anonymous 0 Comments

You are making a mistake we all make as humans.

Humanizing.

You watch an octopus “grab” something, and you think he is grabbing it with his “tentacle HAND”.

Actually, what the octopus is doing is leagues away from what we do when we grab something with our hands.

Same with the computer. You wonder how it “knows” what 1 and 0 mean, when actually it’s just electrical impulses navigating circuits at unthinkable speeds.

It’s simply wired to process impulses and lack of impulses, it doesn’t KNOW shit.

Anonymous 0 Comments

I built a simple calculator in minecraft with the help of a youtube video. Learned a lot about binary!

Anonymous 0 Comments

I see a lot of comments at the more abstract end, looking at software and compilation, so I’ll take a crack from the other end.

Let’s start near the beginning: we have an electrical device known as a “transistor”, which in very simplified terms can be used as an electronically controlled switch. It has the two ends we want to connect, plus a control input that determines whether the ends are connected. We could say that a high voltage on the control causes electricity to flow from end to end, while a low one leaves the ends unconnected.

This idea of a switch lets us perform logic operations based on high and low voltages (which we can assign the mathematical values 1 and 0) when we arrange transistors in certain ways: AND, OR, NOT, XOR, NAND, NOR, XNOR. We call these arrangements “logic gates”, and they serve as a level of abstraction built on top of individual transistors. For example, an AND gate has two inputs; when both inputs are 1 it outputs a 1, and otherwise it outputs a 0 (a la logical AND).

This leads us to binary, a representation of numbers where each digit can have one of two values, 1 or 0. It works just like how we represent base-10 numbers in daily life, where each digit can be 0-9 and represents a power of 10. In binary, each digit can be 1 or 0 and represents a power of 2. By associating a bunch of high/low voltages together, we can represent a number electronically.
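If it helps, here is a small sketch of those logic gates modelled in Python. Real gates are arrangements of transistors; here 1/0 simply stand in for high/low voltage.

```python
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a          # flips 1 -> 0 and 0 -> 1
def XOR(a, b):  return a ^ b
def NAND(a, b): return NOT(AND(a, b))

# Print a small truth table for two of the gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", AND(a, b), "XOR:", XOR(a, b))
```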

With the power of Boolean logic, which deals with math where values can only be 1 or 0 (“true” and “false”), we can produce more and more complex logic equations and implement them by connecting a bunch of logic gates together. We can thus hook together a bunch of gates to do cool stuff, like perform addition. For instance, we can represent the addition of two bits X and Y as X XOR Y. But oops, what if we try 1+1? 2 can’t exist in a single digit, so we add a second output to carry this information, known as a carry, which happens when X AND Y. Hooray, we’ve created what is known as a “half adder”! For multi-digit addition, we pass that carry on to the next place and use a different kind of adder called a “full adder”, which can take the carry from another adder as a third input. Chained together, these give us an adder that can add one group of bits to another, and thus we have designed a math-performing machine 🙂
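Here’s a sketch of that half adder and full adder in Python, plus a small ripple-carry adder built from them. The bit widths and variable names are just for illustration.

```python
def half_adder(x, y):
    return x ^ y, x & y                 # (sum, carry)

def full_adder(x, y, carry_in):
    s1, c1 = half_adder(x, y)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                  # (sum, carry_out)

def add_bits(a_bits, b_bits):
    """Add two little-endian lists of bits (least significant bit first)."""
    carry = 0
    result = []
    for x, y in zip(a_bits, b_bits):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    result.append(carry)                # final carry becomes the top bit
    return result

# 3 (binary 11) + 1 (binary 01) = 4 (binary 100)
print(add_bits([1, 1], [1, 0]))         # [0, 0, 1]  -> binary 100
```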

A CPU is ultimately made of these logic-performing building blocks that operate off of high/low voltage values which can be grouped together to form numbers (represented in binary) and work off of them.

The other comments covered a good deal of what happens above this at the software level. Software ultimately is a bunch of binary fed into the CPU (or GPU or other computing element). This binary is a sequence of numbers in a format that the CPU is logically designed to recognize and act on: perhaps the CPU looks at the first 8 bits (aka a byte) and sees that it is the number 13, and perhaps the CPU designer decided that seeing 13 means the CPU multiplies two values from some form of storage. That number 13 is “decoded” via logic circuits that ultimately lead to pulling values from storage and passing them to more logic circuits that perform the multiplication. This format for what certain values mean to a CPU is known as an instruction set architecture (ISA), and it serves as a contract between hardware and software. x86/x86_64 and the various generations of ARM are examples of ISAs. For example, the many x86_64 CPUs from Intel and AMD might all be implemented differently, with different arrangements of logic circuits and transistors, but they’re still designed to interpret software the same way via the ISA, so code written for x86_64 should be runnable on anything that implements it.
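As a toy picture of that fetch/decode/execute idea, here is a made-up “CPU” loop in Python. The instruction numbers (13 = multiply comes from the example above, the rest are invented), the register file, and the program are all hypothetical; real ISAs like x86_64 or ARM are far more involved.

```python
LOAD, MULTIPLY, HALT = 1, 13, 0         # made-up opcodes

registers = [0, 0]                      # two toy registers

program = [
    (LOAD, 0, 6),                       # put 6 into register 0
    (LOAD, 1, 7),                       # put 7 into register 1
    (MULTIPLY, 0, 1),                   # register 0 = register 0 * register 1
    (HALT, 0, 0),
]

pc = 0                                  # program counter
while True:
    opcode, a, b = program[pc]          # fetch
    if opcode == LOAD:                  # decode + execute
        registers[a] = b
    elif opcode == MULTIPLY:
        registers[a] = registers[a] * registers[b]
    elif opcode == HALT:
        break
    pc += 1

print(registers[0])                     # 42
```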

This is an extremely simplified look at how CPUs do what they do, but hopefully it sheds some light on “how” we cross between this world of software and hardware, and what this information means to a computer. Ultimately, it’s all just numbers with special meanings attached and clever use of electronics such as transistors giving us an avenue to perform mathematical operations with electricity. Sorry if it’s a bit rambly; it’s very late where I am and I need to sleep, but I couldn’t help but get excited about this topic.

Anonymous 0 Comments

While most answers here are correct (or correct enough 😉), I feel they don’t really answer OP’s question.

A computer “knows” what the zeros and ones mean because of their location in memory.

Computers have addressable memory. You can place zeros and ones at an address, and the address itself is also identified by zeros and ones. You could have the data “00110011” stored at address “11110011”.

Some areas in memory are special and are for specific things. There is an “instruction”-area, for instance, where the current instructions for the CPU are held.

If your zeros and ones are stored in the “instruction”-area, they are interpreted as instructions. The instruction “00000000”, for instance, means “add two numbers” on most desktop CPUs. The exact instructions differ by architecture (x86 is the most common architecture for desktop PCs).

Other areas in memory are mapped to other functions and components. You could, for instance, have an area in memory that maps to a sound chip. The sequence “00010001” written there could mean something like “play a sine wave at 8kHz”.

The specific instructions and special addresses available differ by architecture. A desktop PC has different instructions and special memory areas than a GameBoy.
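A toy Python model of that idea: one flat array of bytes, where *where* a pattern sits decides how it is used. The address ranges and the memory-mapped “sound” address are made up for illustration; real architectures define their own layouts.

```python
memory = [0] * 16

INSTRUCTION_AREA = range(0, 8)          # pretend: bytes here are CPU instructions
SOUND_REGISTER = 12                     # pretend: writes here go to a sound chip

def write(address, value):
    memory[address] = value
    if address in INSTRUCTION_AREA:
        print(f"address {address:2}: treat {value:08b} as a CPU instruction")
    elif address == SOUND_REGISTER:
        print(f"address {address:2}: send {value:08b} to the sound chip")
    else:
        print(f"address {address:2}: plain data, {value:08b}")

write(0, 0b00000000)                    # lands in the instruction area -> "add two numbers"
write(SOUND_REGISTER, 0b00010001)       # lands on the sound address -> "play a sine wave"
```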

Anonymous 0 Comments

>I think that somewhere under the hood there must be a physical element–like a table, a maze, a system of levers, a punchcard, etc.–that breaks up the single, continuous stream of ones and zeros into rivulets and routes them into–for lack of a better word–different tunnels? One for letters, another for numbers, yet another for pixels, and so on?

Not quite.

It’s more like buckets, and it’s not so much “for” any one type, it’s just a giant…pegboard. But each spot in the pegboard has a location, and you tell the computer to find that location and “read along” for x many more positions.
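A small Python sketch of that pegboard idea: memory is one big row of slots, and software picks a starting location and a length to read. The data and offsets here are arbitrary.

```python
pegboard = bytes([72, 101, 108, 108, 111, 0, 65, 66, 67])

start, length = 0, 5                    # "find that location and read along for x positions"
chunk = pegboard[start:start + length]

print(list(chunk))                      # [72, 101, 108, 108, 111]
print(chunk.decode("ascii"))            # the same slots read as text: "Hello"
```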

You should look up a video on a Turing machine at work. That’ll probably help, at least a little bit. Also: water computers and mechanical computers.

Anonymous 0 Comments

[nandgame.com](https://nandgame.com) guides you through building a computer from transistors and logic-gates all the way up to software-coding. Highly recommended, as reading the theory is one thing and doing it yourself just gives you a whole new level of understanding!

BUT, a quick and dirty explanation is that 1 & 0 are just how WE HUMANS visualise it for ourselves. The reality is that it is POWER & NO-POWER (similar to morse code’s short and long notes), and that is basically all that a computer “knows”.