How do computers know what to do with binary 1’s and 0’s?

I’m very interested in learning how computers work, but can’t seem to find the exact information I’m looking for. My understanding is, and **please** correct me if I’m wrong, that if you press the letter “A” on a keyboard, a circuit underneath will close, which sends electricity through wires, and based on the combination of voltages on those wires, the computer outputs an “A”. But how does the computer know what to do with voltages? What do the voltages represent? At what point does any of this information get converted into binary, and once it does, what happens?

I don’t expect someone to be able to explain this like I’m five. For me, it’s a difficult, but really interesting subject. Any clarification and dumbing down is appreciated! I’m really hoping to get a better grasp on my understanding of all this.

Edit: I should’ve made the title “How do computers work?” Still wondering how computers know what to do with 1’s and 0’s, though.

13 Answers

Anonymous 0 Comments

How much time do you have? I’m writing a book that will take you step by step, assuming only that you know opposites attract, and ending with a functioning CPU. Honestly, though, I’m still working on visual aids, so it’s gonna be tough for some parts. What would be a lot better is watching Ben Eater’s videos on YouTube. He has a few dozen long-format videos that start with how transistors work and end with an actual, functioning CPU that he builds on camera out of logic gates, explaining every step. No way you’ll get better than that without going to college for an electronics degree.

Anonymous 0 Comments

The steps in between pressing a key like “a” and it printing on your screen are so many, and rely on so many things, that it’s practically impossible to explain simply without massive oversimplification. Look into binary as a language and understand that at its most basic, all computers are interpreting large amounts of ones and zeros. Everything is ones and zeros. Even hardware works on this basic principle.

Anonymous 0 Comments

If I flash a flashlight at you in the pattern of Morse code, you can understand what I’m saying. The flashlight itself isn’t special, it’s the set of rules you and I have to understand the flashes.

Use that analogy for nearly anything a computer does. It just needs a set of rules given to it as to what these patterns mean. Another example is color-by-number coloring books. They have a table that instructs you to color red in some places, blue in others, all simply based on what number is there.

Usually these instructions come in the form of a driver software package. Is there a value in memory slot 1 that says 255-0-0? Then make the monitor pixel that corresponds to that memory location red.
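A tiny Python sketch of that color-by-number idea. The framebuffer contents and the color table here are hypothetical, just to show the “rule book” lookup:

```python
# Toy sketch: a "driver" that turns numbers stored in memory into pixel
# colors, exactly like the table in a color-by-number book.
framebuffer = [(255, 0, 0), (0, 0, 255)]  # (R, G, B) values sitting in memory

def color_name(rgb):
    # The rule book: which pattern of numbers means which color.
    names = {(255, 0, 0): "red", (0, 0, 255): "blue"}
    return names.get(rgb, "unknown")

for slot, value in enumerate(framebuffer):
    print(f"pixel {slot}: {color_name(value)}")
```

The numbers themselves carry no meaning; only the agreed-upon table does.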

Anonymous 0 Comments

CE here. It’s kind of a stretch to do this as an ELI5 (I spent a bunch of years in university for it) but:

Let’s step back for a bit. The 1’s and 0’s of software are literally the instructions that make sense to the CPU of your computer, or tablet, or mobile phone.

If you were to translate, simply for human readability, the binary 1/0s of software into something we could read and parse you’d get something like:

MOV AX, 12
MOV BX, 27
ADD AX, BX
CMP AX, 20
JG DEST

This would be assembly language. What this (simplified, and my assembly is rusty) example does is put the value 12 into one register, the value 27 into another, and add the two together. Because the ADD command on registers AX and BX puts the result back into AX (it’s a chip command thing), we can then compare the result against 20, and if it’s greater, we jump (or branch program execution) to the memory address given by DEST.
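To make that concrete, here’s a rough Python sketch of what the CPU does with those instructions. The register dict and the branch threshold just mirror the example above; they aren’t how real hardware is organized:

```python
# Toy model of the assembly example: registers are just named slots.
registers = {"AX": 0, "BX": 0}

registers["AX"] = 12                  # MOV AX, 12
registers["BX"] = 27                  # MOV BX, 27
registers["AX"] += registers["BX"]    # ADD AX, BX (result goes back into AX)

# "Is the result greater than 20? If so, jump to DEST."
# Here we just model the branch decision itself as a boolean.
take_branch = registers["AX"] > 20
print(registers["AX"], take_branch)   # 39 True
```

Each line above corresponds to one circuit in the CPU being triggered in turn.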

Now… all the high-level code we write in C, C++, C#, Java, etc. will, through various emulation, runtime, and compilation layers, boil down to 1’s and 0’s that are represented by assembly language like what I gave up there.

These assembly commands – or the binary they represent (or are compiled into) – are the literal 1’s and 0’s that light up certain circuits in the CPU.

The

MOV AX, 12

command tells the CPU: take this binary value and load it into register AX. Register AX is just a temporary holding spot where the CPU can read data from or write data to. The

ADD AX, BX

command translates to the binary that triggers a different circuit in the CPU to execute. When triggered, that circuit reads the voltages corresponding to the bits of registers AX and BX and does funky circuit stuff on them – puts the voltages through all the digital logic gates – that spits out a result that is the value in AX plus the value in BX, in binary. And then it copies the result back into AX.

The digital logic circuits being fired here are transistor circuits chained together to make fundamental logic gates: a simple logic gate takes two inputs – if both are 1, and the logic gate is an AND, then the output is also a 1. Every logic gate is, at the electrical level, built out of transistors.

All a CPU does is use very small, nanometer-scale implementations of these fundamental circuit designs, made into logic gates, made into circuits that implement various commands. When the CPU loads that MOV command from memory, it literally fires up a specific circuit that lets voltages flow from input lines to output lines. Rinse and repeat, the CPU goes on to the next command.

Modern CPUs are quite a bit more advanced than this, and we could get into caching and branch prediction and multiple cores, but essentially the binary 1’s and 0’s of software make the electronic circuits of the CPU do different things; the CPU is just doing a lot of these things very, very fast.

Keep asking questions, bridge keeper, I am not afraid.

Anonymous 0 Comments

The electronic components in a computer chip combine to form things called [logic gates](https://en.wikipedia.org/wiki/Logic_gate), which are devices that output a signal based on a combination of input signals. Each signal is either on (1) or off (0). You can think of them as little boxes that have some wires going in, and one wire coming out, and whether there’s electricity flowing on the wire coming out depends on the kind of box it is.

For example, an `AND` gate’s output signal will be on (1) if and only if both of its input signals are also on; otherwise, it’ll be off (0). An `OR` gate will be 1 if either or both of its input signals are 1; if both input signals are 0, the output will be 0. A `NOT` gate has only one input, and it inverts it — the output will be 0 if the input is 1, and vice versa.

These gates combine to make circuits that do specific things. One simple example is an [adder](https://en.wikipedia.org/wiki/Adder_(electronics)), which takes bits (0 or 1, represented by on or off signals) and combines them into multiple signals representing their sum.
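As a sketch of how gates combine into an adder, here is a one-bit full adder in Python. The gate functions mirror the AND/OR (and XOR, a close cousin) described above; the wiring is the standard half-adder-plus-half-adder construction:

```python
# Logic gates as tiny functions on bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b  # 1 when exactly one input is 1

def half_adder(a, b):
    # Adds two bits: returns (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    # Adds two bits plus an incoming carry, built from two half adders.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

# Add the 2-bit numbers 11 (3) and 01 (1), low bit first.
s0, c = full_adder(1, 1, 0)   # low bits:  1 + 1 = 0, carry 1
s1, c = full_adder(1, 0, c)   # high bits: 1 + 0 + carry = 0, carry 1
print(c, s1, s0)              # 1 0 0 -> binary 100 = 4
```

Chain 32 or 64 of these and you have the adder inside a real CPU, except there it’s transistors switching voltages instead of Python functions.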

Modern computers have tens of billions of transistors forming billions of logic gates, so you won’t be able to get a full picture of what’s happening by looking at simple circuits like this. But at the lowest level, this is how computers derive meaning from 1 and 0.

Anonymous 0 Comments

It’s good that you are aware that it is all just voltages. A lot of people think there are actual 1s and 0s going through the wires!

Let’s look at this the other way. When you press A on the keyboard, nothing really means anything until you see it on the screen. How did it get there? Well, as you are probably aware, the image on the screen is made up of lots of little lights, which turn on and off to make the image you see. Voltages are sent from the video card through the cable, and the monitor says… aha! I am getting voltage changes! And this pattern of voltages means to turn on these lights… and you see the “A”. Now, this happens very fast, because a voltage pattern is sent for every little light (millions of them) and for each color… and it does this 60 times per second or faster.

It is literally voltages all the way down until a photon is generated from the monitor lights and hits your eye.

Now you would ask how did the video card know what voltages to send? It’s really just another series of voltages that were sent to the video card from the cpu and memory.

When you pressed A, something called an interrupt occurred, which interrupts the CPU from what it is doing. The CPU says… aha! New voltages received, and I see this pattern… that means I need to stop what I’m doing and handle this voltage. The software sees the voltage and responds. The software you are running really doesn’t matter; whether it’s Notepad or Gmail, it’s all just voltages in memory, and there is a back-and-forth dance between the CPU and the memory. Some of these dance moves calculate stuff, and some send info to the video card.

Clear as mud?

Anonymous 0 Comments

> At what point do [the voltages] get converted into binary, and once it does, what happens?

Never. The computer is just a very complex calculator that can do, basically, [large matrix math](https://qph.cf2.quoracdn.net/main-qimg-c6acec260971027bde0869e697573954-pjlq), and by that I mean that the voltages are [huge complex chains](https://www.inquirer.com/resizer/UHD5ZP8WLLKS4Jl_bSGn_tkgNm0=/arc-anglerfish-arc2-prod-pmn/public/GWZL7BJHTRHJHFEZMZSFBK25QU.jpg) of transistors that will “flip” all the way down the line almost instantly, based on inputs from the keyboard and mouse. Your screen is a bunch of [lights](https://previews.123rf.com/images/tailex47/tailex471509/tailex47150900060/44696475-bright-blue-colored-smd-led-screen-close-up-background.jpg) that are controlled by a pattern of voltages that changes 60 times per second. The computer doesn’t “know” anything; it just propagates voltages to your screen, to your printer, to your speakers, and gets voltages from the keyboard, mouse, internet, camera, etc.

The 1’s and 0’s are used by us humans to explain what’s going on in there. First we interpret the voltages as 1 and 0, then we interpret 1’s and 0’s as letters and symbols, which can form words, which can form a programming language, and so on, so we can use “complex” concepts to explain what the transistors are “programmed” to do.

Anonymous 0 Comments

0’s and 1’s are grouped in multiples. A single bit almost never represents anything on its own; it is almost always the sequence in such a grouping that gives meaning.

The meaning is by convention.

Just like a pencil stroke on a piece of paper doesn’t necessarily mean anything on its own, three lines in the form of an A might mean ‘A’ because we all agreed to it, and by making combinations, we can create further meanings (words, sentences, …).

In the early days of computing there were multiple incompatible conventions, such as ASCII and EBCDIC; now we are mostly aligned.
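You can poke at the ASCII convention directly with Python’s built-ins, which convert between characters, numbers, and bit patterns:

```python
# The same 8 bits mean "the letter A" only because ASCII says so.
bits = "01000001"                # eight wires, on/off
number = int(bits, 2)            # interpret the pattern as a number
print(number)                    # 65
print(chr(number))               # A -- because ASCII maps 65 to 'A'
print(format(ord("A"), "08b"))   # and back again: 01000001
```

Under EBCDIC, the very same bit pattern would have meant something different; the bits never changed, only the agreed-upon table.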

Anonymous 0 Comments

They don’t really work with single 1’s and 0’s. You could build a computer that does, but generally they’ll use several in a row. You have several metal tracks carrying (or not carrying) current. On is 1. Off is 0.

If you have 8 tracks, there are 256 combinations of off and on, each representing a different number. A lot of old computers worked this way; this is what “8-bit” means when referring to a processor. 16-bit processors can deal with 65,536 different combinations, 32-bit with about 4 billion, but the principle is the same.
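You can check those counts directly; the number of distinct on/off patterns on n tracks is 2 to the power n:

```python
# Each track doubles the number of possible patterns.
print(2**8)    # 256 patterns on 8 tracks
print(2**16)   # 65536 on 16 tracks
print(2**32)   # 4294967296 on 32 tracks -- about 4 billion
```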

So, what do we do with these numbers? Well, we can access memory. Memory chips have a bunch of “address” pins and a bunch of “data” pins. Send a number to the address pins, and the memory chip sends the number stored at that location out over the data pins to the processor.

The processor sends a request for data at address 00000000 and works through addresses in increasing order. It receives, for example, 00101100. This means “look at the next 2 memory locations and add them.” Then it might receive 01011111, which means “store this number at the memory location given by the next memory location.” The numbers here are made up and chosen arbitrarily by me, but real processors have numbers assigned just as arbitrarily by a designer. There’s no particular reason to choose any number.
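Here’s a toy Python version of that fetch-and-execute loop. The opcodes are the made-up ones from the paragraph above, and the whole machine (one accumulator, ten memory slots) is invented for illustration:

```python
# Made-up opcodes, matching the bit patterns in the text above.
ADD_NEXT_TWO = 0b00101100  # "add the values in the next two memory locations"
STORE_AT     = 0b01011111  # "store the result at the address in the next location"

# A tiny memory: a program at the start, data and scratch space after it.
memory = [ADD_NEXT_TWO, 5, 7, STORE_AT, 9, 0, 0, 0, 0, 0]
accumulator = 0
pc = 0  # program counter: which address to fetch from next

while pc < 5:  # the program occupies addresses 0..4
    opcode = memory[pc]
    if opcode == ADD_NEXT_TWO:
        accumulator = memory[pc + 1] + memory[pc + 2]
        pc += 3
    elif opcode == STORE_AT:
        memory[memory[pc + 1]] = accumulator
        pc += 2

print(memory[9])  # 12 -- the sum of 5 and 7, stored at address 9
```

A real CPU does exactly this fetch-decode-execute cycle, just in hardware and billions of times per second.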