What does the binary of a computer actually look like, and how does it understand those instructions?



Binary is just a way to represent numbers with only 0s and 1s. These correspond to the concept of on (1) and off (0) inside computer chips.

It’s called base-2 because there are only two digits. Our usual numbering is base 10, because it has 10 digits (0-9).

In binary, 10 is the same as 2 in base 10. 11 is 3 (the 1 on the right is worth 1, the 1 on the left is worth 2; add them up and you get 3). 100 is 4. 101 is 5. 110 is 6 (4+2). 111 is 7. 1000 is 8. And on and on.
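To see the pattern, here's a small sketch that converts a string of binary digits to a regular base-10 number, using the same "each digit is worth a power of 2" rule:

```python
# Each binary digit is worth a power of 2, counted from the right:
# 1, 2, 4, 8, 16, ...
def from_binary(bits: str) -> int:
    value = 0
    for digit in bits:
        value = value * 2 + int(digit)  # shift the value left, add the new digit
    return value

for bits in ["10", "11", "100", "101", "110", "111", "1000"]:
    print(bits, "=", from_binary(bits))  # 2, 3, 4, 5, 6, 7, 8
```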

Computers build calculations out of basic operations that look at whether a pair of digits is two 0s, a 1 and a 0, or two 1s, and produce a result from that.

I think you might be asking about what binary numbers physically are in a processor? A binary number is just 0 or 1, so it’s like a light switch. To represent it physically, you have a voltage that is either on or off. So, a wire might be at 5V to represent the binary number 1 or at 0V to represent the binary number 0. The circuits in the processor can perform operations on those voltage signals, forming new voltage signals, effectively providing a physical framework to represent the math and logic we want the processor to do.

Typically, each component that’s using some sort of binary code has multiple wires, and each wire carries one digit. So if there are 8 wires (8-bit) and the first 4 have voltage on them and the second 4 have no voltage, then that could represent 11110000. (Yes pedants, I know there are exceptions). Now there are two main things that 11110000 could represent. It could be a number, in this case 240. It could also be an instruction.
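You can model those 8 wires as a simple list of on/off values and read them back as one number (a sketch, not how real hardware is described, but the arithmetic is the same):

```python
# Model 8 wires as a list: True = voltage present, False = no voltage.
wires = [True, True, True, True, False, False, False, False]  # 11110000

# Read the wires together as one 8-bit number.
bits = "".join("1" if w else "0" for w in wires)
number = int(bits, 2)
print(bits, "as a number is", number)  # 11110000 as a number is 240
```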

Typically, we design processors in a way that they will often take an instruction together with a number. This makes sense: if you want to do “add 10” or “access memory address 42”, then having the instruction and the number together makes things simple.

So let’s think of a hypothetical processor that has 8 total wires for input and output. We could say that the first 4 wires are for the instruction, and the last 4 are for the number. Let’s say we want the chip to take whatever is in memory address 2 and put it into the processor’s register (the register is a place the processor holds numbers it’s currently working with). We could build the chip so that the command to do that load operation is 0100. So the full command 0100 0010 tells our processor to load from address 2. We could then make an add command and assign it to 0010. So if we want to add 1 to whatever’s in the current register, we would send 0010 0001 to the processor.
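Here’s that hypothetical 8-wire processor’s decoding step as a sketch (the opcodes 0100 and 0010 are the made-up ones from the example above, not from any real chip):

```python
# Hypothetical 4-bit opcodes from the example above.
LOAD = 0b0100  # load from a memory address into the register
ADD  = 0b0010  # add a number to the register

def decode(byte: int):
    """Split an 8-bit command into its opcode (top 4 wires)
    and its number (bottom 4 wires)."""
    opcode  = byte >> 4        # first 4 wires
    operand = byte & 0b1111    # last 4 wires
    return opcode, operand

op, arg = decode(0b01000010)   # the command 0100 0010
print(op == LOAD, arg)         # True 2 -> "load from address 2"
op, arg = decode(0b00100001)   # the command 0010 0001
print(op == ADD, arg)          # True 1 -> "add 1 to the register"
```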

So why is 0010 add? It’s because whoever builds the processor gets to build into it the operations the processor does. It’s built specifically so that when it sees that code, the wiring does the stuff that is required to add numbers together.

Fun fact, those numbers representing operations are called “opcodes”. If you’re curious you can look up opcode references for most popular processors and see what they are, though they are often written in hexadecimal. For example the “jump” command for the 6502 processor is 4C, which is 76 in decimal or 01001100 in binary.
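Those three forms (hex, decimal, binary) are just different ways of writing the same number, which you can check directly:

```python
jmp = 0x4C                 # the 6502 "jump" opcode, written in hexadecimal
print(jmp)                 # 76 — the same value in decimal
print(format(jmp, "08b"))  # 01001100 — the same value in binary
```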

EDIT: Let me add that there are sometimes special commands on a chip that are controlled by a single wire. There might be, for instance, a “halt” wire that controls whether the processor is allowed to run or not, so changing the voltage on that wire will enable or disable the processor. This varies based on what the manufacturer wants to have as a feature on the chip.

The ones and zeros are implemented as voltages. So individual wires will go from high to low voltage very very, VERY quickly. So it’s not really that there’s something that _understands_ the voltage, so much as it is the voltage that causes behaviors in the chips. So a 1 could, electrically, cause a transistor to conduct.

The whole ones and zeros thing is just a handy, human accessible, way of talking about it.

The computer’s brain thinks using two things: actions and items. This is its language.

Items are things. Actions do things with items. They can modify items or they can use items to create new items.

These items and actions are very small. The objects might just be a single letter or number. The actions will be something simple like: add or copy.

The real trick is that we can represent all things with numbers. For example, the value for the letter ‘A’ might be written as 01000001. However, this could just as easily represent the number 65, because 64 + 1 = 65. The computer’s brain doesn’t know or care about the difference.
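You can see this double meaning directly: the same eight bits are 65 when read as a number and ‘A’ when read as a character (this particular mapping is the standard ASCII one):

```python
byte = 0b01000001  # eight bits: 0100 0001
print(byte)        # 65  — the bits read as a number (64 + 1)
print(chr(byte))   # A   — the exact same bits read as a letter (ASCII)
```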

The actions are a lot like objects. They are also numbers. However the computer brain knows to treat them differently.

Now, if we combine actions and objects we get instructions. An instruction is one action plus at least one object.

Suppose we have two actions:

COPY = 0000 0001

ADD = 0000 0010

Suppose we have a few objects:

A = 0100 0001

B = 0100 0010

5 = 0000 0101

7 = 0000 0111

We could tell the computer things like:

ADD 5 7

(0000 0010 . 0000 0101 . 0000 0111)

We can even do

ADD 5 A

The brain can’t tell the difference. The meanings of the objects above don’t exist in the CPU. The numbers are just numbers.

So, at the hardware level, objects and actions get loaded into the CPU as instructions. The action number gets put in a certain spot and the object numbers get put in certain spots.

The CPU then activates. The wires connecting all three parts are designed in such a way that when the electricity has flowed through them, another object is created in a special place. Sometimes this makes a new object in an object spot; other times it just gets stored over an existing object. Then the CPU deactivates and loads the next action and objects. One of these objects may even be the result object of the last instruction.

A computer’s brain is doing this loop over and over again. Binary is used because it makes the wiring logic about as simple as it can practically be. (It’s a lot simpler to manage whether electrical signals are low/high rather than low/middle/high)
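That loop can be sketched in a few lines. This is a toy simulator using the made-up encoding from the actions and objects above (ADD as 0000 0010, COPY as 0000 0001), not any real instruction set:

```python
# A toy fetch-decode-execute loop with a hypothetical encoding.
COPY = 0b00000001
ADD  = 0b00000010

# Program in memory: the action byte, then its two object bytes.
# This is "ADD 5 7".
memory = [ADD, 0b00000101, 0b00000111]

pc = 0           # program counter: where the next instruction starts
result = None    # the "special place" the result object lands in
while pc < len(memory):
    action = memory[pc]                     # load the action number
    a, b = memory[pc + 1], memory[pc + 2]   # load the object numbers
    if action == ADD:
        result = a + b                      # the wiring that adds
    elif action == COPY:
        result = a                          # the wiring that copies
    pc += 3                                 # deactivate, move to the next instruction
print(result)  # 12
```

Real CPUs do exactly this kind of loop in hardware, billions of times per second.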

I highly recommend the book “But how do it know?”. It has a silly title but it’s by far the most approachable guide to how a cpu works that I’ve ever read. It’s short and you don’t need a math or CS education.

I always understood it as instructions. You can translate 0001 to a, 0010 to b, 0011 to c, etc. and get the full alphabet (look up ASCII/Unicode) – then you can have set instructions, say 000111001100 means add two numbers. This is done on the CPU and the result is put into an accumulator. So give it two numbers and the instruction (opcode) saying add them and boom, you get the result. This happens extremely fast, so you can do a huge number of operations in just seconds. It becomes very powerful. It’s not that simple though. The computer is abstracted into layers. The OS sits on the hardware and we interact with the OS, which then interacts with the kernel. The kernel tells the machine what to do with the instructions we send.