[ELI5] How computers work using only 1s and 0s?

8 Answers

Anonymous 0 Comments

Ones and zeros are combined to represent numbers. It’s easiest to show with an example:

* 1 = 1
* 10 = 2
* 11 = 3
* 100 = 4
* etc
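The counting pattern above can be sketched in a few lines of Python (the function name is just for illustration):

```python
# Convert a binary string like "100" to an ordinary number by hand:
# each digit position is worth double the one to its right.
def binary_to_decimal(bits):
    value = 0
    for bit in bits:              # scan left to right
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("1"))     # 1
print(binary_to_decimal("10"))    # 2
print(binary_to_decimal("11"))    # 3
print(binary_to_decimal("100"))   # 4
```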

A computer uses one number to describe the operation (e.g., add = 1, subtract = 2, etc.), and then two other numbers for the operands of the operation. The computer has memory and a fourth number is used to tell the computer which memory address gets the result. When you tell a computer what to do, you feed it these four numbers, all in binary. This group of four numbers (operation, operand, operand, result) is called an ‘instruction’. A program is made up of a series of these instructions.

It is, of course, a little more complicated than this but these are the basics.
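A toy version of that four-number instruction scheme can be sketched in Python. The opcode numbers follow the answer above (add = 1, subtract = 2), but the memory size and everything else here is made up for illustration:

```python
# Each instruction is four numbers: (operation, operand, operand, result address).
memory = [0] * 8                  # a tiny memory of 8 cells

def run(program):
    for op, a, b, dest in program:
        if op == 1:               # add
            memory[dest] = a + b
        elif op == 2:             # subtract
            memory[dest] = a - b

run([
    (1, 5, 3, 0),                 # add 5 + 3, store in cell 0
    (2, 9, 4, 1),                 # subtract 9 - 4, store in cell 1
])
print(memory[0], memory[1])       # 8 5
```

A real CPU does the same thing, except the four numbers arrive as patterns of bits rather than Python tuples.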

Anonymous 0 Comments

When it comes to doing math, computers use a different [radix](https://en.wikipedia.org/wiki/Radix), or base, which is a different system of notation. We’re used to the decimal radix (deci- being a prefix meaning ten), which has 10 digits (0-9) and in which 10 means ten. Adding another digit position multiplies by the base again, giving 100, or one hundred. But there are other radixes, like:

* octal, with only the digits 0-7, where 10 means eight and 100 means sixty-four
* hexadecimal, with the digits 0-F, where 10 means sixteen and 100 means two hundred and fifty-six
* and, used at the low level within computers, binary which has only the digits 0-1 where 10 means two and 100 means four
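As a quick sketch, Python’s built-in `int()` can parse the same notation in each of those bases, and they come out to exactly the values listed above:

```python
# "100" means something different in each radix.
print(int("100", 10))   # decimal     -> 100
print(int("100", 8))    # octal       -> 64
print(int("100", 16))   # hexadecimal -> 256
print(int("100", 2))    # binary      -> 4
```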

The thing is, math still works in other radixes, only the notation is different. For computers, binary is especially useful because each digit position can be represented with a simple on-off switch/light as either a 0 or a 1. That representation is the easiest and simplest for an electronic device to work with. But there’s a bit more – binary also allows for using [boolean logic](https://en.wikipedia.org/wiki/Boolean_algebra) to easily express logical concepts like AND, OR, and NOT at the same simple fundamental level.
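Python’s bitwise operators give a small taste of that boolean logic working directly on single bits (this is just a sketch of the idea, not how a chip is wired):

```python
# AND, OR, and NOT on single bits, as a logic circuit would compute them.
for a in (0, 1):
    for b in (0, 1):
        print(a, "AND", b, "=", a & b)
        print(a, "OR ", b, "=", a | b)

print("NOT 0 =", 1 - 0)   # NOT just flips a single bit
print("NOT 1 =", 1 - 1)
```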

So it’s really easy for the computer to work with things as a bunch of tiny switches, and then at the end, when we want output, that series of 1s and 0s can be interpreted as the characters that make up the decimal digits of a number, letters, a color, a sound, a duration to rumble a joystick, or whatever.
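That “same bits, different interpretation” idea can be sketched in Python, where the same two bytes read as text, as a number, or as a pixel brightness (the choices here are made up for illustration):

```python
# Two bytes: 01001000 01101001 in binary.
data = bytes([72, 105])

print(data.decode("ascii"))          # as text: "Hi"
print(int.from_bytes(data, "big"))   # as one number: 18537
print(data[0])                       # as a grayscale pixel brightness: 72
```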

Anonymous 0 Comments

Binary is as simple as 1’s being “on” and 0’s being “off”. Computers essentially run off instructions that depend on binary. I suggest googling and going from there.

Anonymous 0 Comments

First of all, think of computers as having the purpose of displaying the correct image on screen, printing your document correctly, responding to keyboard and mouse presses correctly, and, in general, controlling other devices (speakers, VR goggles, whatever) so that YOU can interact with them. All of it is for your benefit.

So how does a screen work? Well, every dot of color that lights up to form your image is controlled by a memory bit. If the bit is 1 (i.e. 5 volts in it), then the pixel lights up (5 volts in the pixel too). If the bit is 0, then the pixel is off (0 volts in it).

Your screen has a memory chip that matches the number of pixels, and whatever’s configured in that memory is displayed with the pixels.

So the computer’s processor and video card have the capability to flip 1’s and 0’s around, switching memory bits from 5v to 0v and back, millions of them very fast. Software just instructs the processor which memory bits to make 1 and 0, based on what you’re doing (typing, drawing, playing games, etc.).

The whole computer is a big math machine designed to work with 1’s and 0’s (5 volts and 0 volts) so that the attached devices show you the correct patterns of light and sound, which your brain then interprets as a game or a word processor software.
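A toy version of that screen-memory idea can be sketched in Python. The 4×4 size, the function name, and the `#`/`.` characters standing in for lit and dark pixels are all made up for illustration:

```python
# A tiny "screen": a grid of bits, 1 = pixel on, 0 = pixel off.
WIDTH, HEIGHT = 4, 4
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x, y, bit):
    framebuffer[y][x] = bit

# Software flips a few bits to draw a diagonal line...
for i in range(4):
    set_pixel(i, i, 1)

# ...and the display hardware turns each bit into light.
for row in framebuffer:
    print("".join("#" if bit else "." for bit in row))
```

Real screens store more than one bit per pixel so each dot can have a color and brightness, but the principle is the same: memory in, light out.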

Anonymous 0 Comments

I once heard it described this way: a computer is basically a complicated system of millions, if not billions—some huge number, anyway—of electrical switches. Those switches are either on or off, which is why they’re operated by binary code, which is only ever a 1 or a 0.

Anonymous 0 Comments

The same way your brain does. Your brain also works only with neurons on or off.

Neither actually works just using 1s and 0s. That is the form of the data that they use. But the physical structure of the brain and the processors is what allows the data to be processed in a meaningful way.

Anonymous 0 Comments

This in and of itself is not the answer, but PBS Digital Studios produced a fantastic collaboration series with the vlogbrothers’ Crash Course channel all about how computers work from the basic, binary level and it’s incredibly helpful with breaking down and demystifying this topic: https://www.youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo

Anonymous 0 Comments

They technically use only the 1s. In mathematics it is possible to perform addition, subtraction, multiplication and division using only addition. Sounds strange at first.

For addition:
Think of a wall of lightbulbs: light 5 bulbs up and that wall outputs a 5. Then use another wall with 4 lit bulbs to output a 4.

You can then set up some wires between the two lightbulb walls to add the result 5+4=9 and store it in lightbulb wall number 3, which ends up with 9 bulbs lit.

Subtraction, multiplication and division are just some maths trickery of multiple lightbulb walls being added together.
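Those lightbulb walls can be sketched in Python, with each wall as a list of lit bulbs (the names are made up for illustration):

```python
# Represent each number as a "wall" of lit bulbs: 5 is just five 1s.
wall_a = [1] * 5          # five lit bulbs  -> the number 5
wall_b = [1] * 4          # four lit bulbs  -> the number 4

# "Wiring" the two walls into a third just pools all the lit bulbs.
wall_c = wall_a + wall_b

print(len(wall_c))        # 9 bulbs lit: 5 + 4 = 9
```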

In a CPU we use a more complex version of the lightbulb, called a transistor, to hold a 1 value (on, like a lightbulb), and we create special combinations of transistors for addition, subtraction, multiplication and division.

We can do all types of maths by combining many transistors together. We can even build combinations for certain types of equations; that’s why certain chips are optimised for, say, bitcoin mining or encryption.