How do computers know what to do with binary 1’s and 0’s?

I’m very interested in learning how computers work, but can’t seem to find the exact information I’m looking for. My understanding is, and **please** correct me if I’m wrong, that if you press the letter “A” on a keyboard, a circuit underneath closes, which sends electricity through wires, and based on the combination of voltages on the wires, the computer outputs an “A”. But how does the computer know what to do with the voltages? What do the voltages represent? At what point does any of this information get converted into binary, and once it does, what happens?

I don’t expect someone to be able to explain this like I’m five. For me, it’s a difficult but really interesting subject. Any clarification and dumbing down is appreciated! I’m really hoping to get a better grasp of all this.

Edit: I should’ve made the title “How do computers work?” Still wondering how computers know what to do with 1’s and 0’s, though.

13 Answers

Anonymous

CE here. It’s kind of a stretch to do this as an ELI5 (I spent a bunch of years in university for it), but:

Let’s step back for a bit. The 1’s and 0’s of software are literally the instructions that make sense to the CPU of your computer, or tablet, or mobile phone.

If you were to translate, simply for human readability, the binary 1’s and 0’s of software into something we could read and parse, you’d get something like:

MOV AX, 12
MOV BX, 27
ADD AX, BX
CMP AX, 20
JG  DEST

This would be assembly language. What this (simplified, and my assembly is rusty) example does is put the value 12 into one register and the value 27 into another, add the two together (the ADD command on registers AX and BX puts the RESULT back into AX – it’s a chip command thing), then compare the result to 20 and, if it’s greater, jump (or branch program execution) to the memory address given by the label DEST.
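
To make “the 1’s and 0’s the CPU actually sees” concrete: hand-assembled for classic 16-bit x86 (other architectures encode things differently, and the jump offset here is left as a placeholder), those instructions become raw bytes like this:

    /* The example program as machine code bytes, assuming 16-bit x86 encoding. */
    unsigned char program[] = {
        0xB8, 0x0C, 0x00,  /* MOV AX, 12  ->  10111000 00001100 00000000 */
        0xBB, 0x1B, 0x00,  /* MOV BX, 27 */
        0x01, 0xD8,        /* ADD AX, BX */
        0x3D, 0x14, 0x00,  /* CMP AX, 20 */
        0x7F, 0x00         /* JG  DEST (jump offset left as a placeholder) */
    };

Each hex byte is just shorthand for eight wires being held at a high or low voltage.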

Now… all the high level code we write in C, C++, C#, Java etc. will, through various emulation, runtime and compilation layers, boil down to 1’s and 0’s that can be represented by assembly language like what I gave up there.
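
For example, a tiny C program along these lines (the variable names and the printf are mine, purely for illustration) could plausibly compile down to the assembly above:

    #include <stdio.h>

    int main(void) {
        int a = 12;        /* MOV AX, 12 */
        int b = 27;        /* MOV BX, 27 */
        a = a + b;         /* ADD AX, BX */
        if (a > 20)        /* CMP AX, 20 ... JG DEST */
            printf("jumped to DEST\n");
        return 0;
    }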

These assembly commands – or rather the binary they represent (or are assembled into) – are the literal 1’s and 0’s that light certain circuits in the CPU up.

The

MOV AX, 12

command tells the CPU: take this binary value and load it into register AX. Register AX is just a temporary holding spot that the CPU can read data from or write data to. The

ADD AX, BX

command translates to binary that triggers a different circuit in the CPU to execute. When triggered, that circuit reads the voltages corresponding to the bits of registers AX and BX and does funky circuit stuff on them – puts the voltages through all the digital logic gates – and that spits out a result: the value in AX plus the value in BX, in binary. And then it copies the result back into AX.

The digital logic circuits being fired here are transistor circuits chained together to make fundamental logic gates. A simple logic gate takes two inputs – if both are 1, and the logic gate is an AND, then the OUTPUT is also a 1. All logic gates ultimately boil down to low-level arrangements of transistors.
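
To show how gates become arithmetic, here’s a sketch in C (bitwise operators standing in for physical gates; the wiring is the textbook one-bit full adder, rippled across 4 bits) of how an ADD circuit can be built:

    #include <stdio.h>

    /* One-bit full adder built from XOR, AND and OR "gates":
       sum = a XOR b XOR carry_in
       carry_out = (a AND b) OR (carry_in AND (a XOR b)) */
    void full_adder(int a, int b, int cin, int *sum, int *cout) {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (cin & (a ^ b));
    }

    int main(void) {
        int a[4] = {1, 0, 1, 0};  /* 5 in binary, least significant bit first */
        int b[4] = {0, 1, 1, 0};  /* 6 */
        int sum[4], carry = 0;

        /* Ripple the carry through each bit, like the chained circuit does. */
        for (int i = 0; i < 4; i++)
            full_adder(a[i], b[i], carry, &sum[i], &carry);

        printf("%d%d%d%d carry %d\n", sum[3], sum[2], sum[1], sum[0], carry);
        /* prints 1011 carry 0, i.e. 5 + 6 = 11 */
        return 0;
    }

A real ALU is this same idea, just 32 or 64 bits wide and heavily optimized.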

All a CPU does is use very small, nanometer-scale implementations of these fundamental circuit designs, made into logic gates, made into circuits that implement various commands. When the CPU loads that MOV command from memory, it literally fires up a specific circuit that lets voltages flow from input lines to output lines. Rinse and repeat: the CPU goes on to the next command.
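
That load-decode-execute loop is simple enough to sketch in software. Here’s a toy simulator (the opcodes and memory layout are invented for illustration – a real CPU’s encoding looks nothing like this) running the example program from earlier:

    #include <stdio.h>

    /* Made-up opcodes, one per "circuit" the CPU can fire up. */
    enum { MOV_AX, MOV_BX, ADD, JG20, HALT };

    int main(void) {
        /* The program in "binary": each command is an opcode plus an operand. */
        int memory[] = { MOV_AX, 12,   /* address 0 */
                         MOV_BX, 27,   /* address 2 */
                         ADD,    0,    /* address 4 */
                         JG20,   10,   /* address 6: if AX > 20, jump to 10 */
                         HALT,   0,    /* address 8 */
                         HALT,   0 };  /* address 10: DEST */
        int ax = 0, bx = 0, pc = 0;    /* registers and program counter */

        for (;;) {
            int opcode  = memory[pc];      /* fetch the next command */
            int operand = memory[pc + 1];
            pc += 2;
            switch (opcode) {              /* decode: pick which circuit fires */
                case MOV_AX: ax = operand;              break;
                case MOV_BX: bx = operand;              break;
                case ADD:    ax = ax + bx;              break;
                case JG20:   if (ax > 20) pc = operand; break;
                case HALT:   printf("AX = %d\n", ax);   return 0;
            }
        }
    }

Rinse and repeat, millions or billions of times per second.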

Modern CPUs are quite a bit more advanced than this, and we can get into caching and branch prediction and multiple cores, but essentially the binary 1’s and 0’s of software make the electronic circuits of the CPU do different things; and the CPU is just doing a lot of these things very, very fast.

Keep asking questions, bridgekeeper – I am not afraid.
