Eli5: Computers can calculate based on instructions. But how do you teach computers what does it mean to add something, multiply, divide, or perform any other operation?


Edit: Most of the answers here are wonderful and spot on.

For those who interpreted it differently due to my incorrect and brief phrasing, by ‘teaching’ I meant how does the computer get to know what it has to do when we want it to perform arithmetic operations (upon seeing the operators)?

And how does it do it? Like how does it ‘add’ stuff the same way humans do and give results which make sense to us mathematically? What exactly is going on inside?

Thanks for all the helpful explanations on programming, switches, circuits, logic gates, and the links!


43 Answers

Anonymous 0 Comments

If you really want to know look into half and full adders: [https://www.elprocus.com/half-adder-and-full-adder/](https://www.elprocus.com/half-adder-and-full-adder/)

But the quick version is: do you know those Japanese bamboo water decorations? One bamboo stick fills with water until the weight makes it tip over and spill, partly into a second bamboo stick, and so on.

If, let's say, each bamboo stick needs 3 spill cycles to make the next bamboo stick fill up and spill, then you basically have a water-based digital counter circuit that operates in base 3. An empty stick (or a completely full one, which is in the process of spilling) would be "0", and 1/3 and 2/3 full would be "1" and "2". After 4 "ticks" you could read the number 011 (in base 3) off those bamboo sticks, which is the number 4 in base 10. So we "taught" the bamboo sticks to count.
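Here's a tiny Python sketch of that bamboo counter (the `tick` function and the three-stick setup are just my illustration, not anything from a real circuit): each "stick" holds 0–2 units of water, and a third unit makes it spill one unit into the next stick.

```python
def tick(sticks):
    """Advance the base-3 water counter by one fill cycle (one 'tick')."""
    carry = 1  # one unit of water poured into the first stick
    for i in range(len(sticks)):
        sticks[i] += carry
        if sticks[i] == 3:   # stick tips over and spills
            sticks[i] = 0
            carry = 1        # one unit lands in the next stick
        else:
            carry = 0
    return sticks

sticks = [0, 0, 0]  # least-significant stick first
for _ in range(4):
    tick(sticks)
print(sticks)  # [1, 1, 0] -> reads as 011 in base 3, i.e. 4 in base 10
```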

Anonymous 0 Comments

A computer runs completely in binary. The simplest way to describe a computer is as a system that can do three things: read, write, and erase. Using 1s and 0s we can represent numbers and do arithmetic on them. At the hardware level this is done with logic gates, and with a programming language we can make the computer hold data in its memory and RAM and program it to do things like addition, subtraction, multiplication, and division. Imagine a group of cells, each holding a 1 or a 0 — say the binary number 1001 spread across four cells. Logic gates combine those cells, and it is through logic gates that we "tell" the computer that the symbol + means adding the contents of cells together. What I'm calling cells here are what computing calls bits.

Anonymous 0 Comments

You don’t.

Go low level enough (down to hardware away from software) and you’re getting into electronic engineering and solid state physics.

You don’t teach a light to turn on if you flip a switch – you’re using a fundamental force and manipulating it.

Anonymous 0 Comments

People are correct in saying you don’t teach it, but I think that misses something important, which is that you find things in “nature” that can be used to do addition. Just like you can use falling sand in an hourglass to tell time, you can use electricity in circuits to perform addition. If you arrange the circuits in a specific configuration (called a binary half adder) you can input electrical signals representing two binary digits and get the output of their sum. That being said, there is nothing requiring that computers be made of electronics, anything that can be used to do binary logic (e.g. turns on and off in response to something else being on and off) can be used to make an adder. There are videos of people making adders from marbles and dominoes. Electric circuits are used because they are much faster than anything else we can currently use. In the future we may have computers that use light for doing calculations instead.

Anonymous 0 Comments

There's a fun explanation in Remembrance of Earth's Past, also known as The Three-Body Problem (the book, not the famous physics conundrum). It describes how you could create a manual computer out of humans who make decisions based on the actions of the humans in front of them.

The example in the book shows 3 soldiers, each given a flag. The middle soldier is told to raise his flag only if both soldiers at his sides raise theirs. This of course emulates an AND gate — a circuit (built from transistors in real hardware) whose output goes high only when both inputs are high.
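That soldier scene is small enough to write out directly (the function name is just my label for it):

```python
def middle_soldier(left_flag, right_flag):
    """Raise the flag only if both neighbours raise theirs: an AND gate."""
    return left_flag and right_flag

for left in (False, True):
    for right in (False, True):
        print(left, right, "->", middle_soldier(left, right))
# Only (True, True) makes the middle soldier raise his flag.
```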

This scene in the book highlights really well that there’s not really any understanding at all, it’s simply a matter of arranging components that react in certain ways in long enough chains that complexity emerges…

By the way you know where else components arrange themselves in a sort of chain reaction that looks like conscious understanding but actually ends up being entirely based on an individual design? Your body 🙂

Anonymous 0 Comments

You start with something called a logic gate, which is a circuit built from transistors that produces a voltage either on or off based on the voltage status of one or more wires called inputs. Let’s start with a simple logic gate called OR. This logic gate will produce a voltage on the output wire if there is voltage on any of the input wires. Here is a truth table using 1 to indicate voltage, and 0 to indicate no voltage:
A B | OR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1

There are other gates called AND and NOT. AND produces a 1 if ALL of the inputs are 1. NOT takes a single input and produces the opposite state on the output.
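The three gates described so far can be modelled as small Python functions on 0/1 values, and you can reproduce the truth table above by looping over all inputs:

```python
def OR(a, b):
    return 1 if (a or b) else 0   # 1 if any input carries voltage

def AND(a, b):
    return 1 if (a and b) else 0  # 1 only if all inputs carry voltage

def NOT(a):
    return 1 - a                  # opposite of the single input

print("A B | OR AND")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", OR(a, b), " ", AND(a, b))
```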

These two numbers, 0 and 1, are the only two numbers that exist in a digital electronic circuit. However, following the laws of mathematics, you can convert any number, including negative numbers, into an encoding in this binary system. You can represent the number 2, for example, by writing it as 10, and 3 as 11. This means a number in a computer can be represented by two circuits next to each other, which together can equal 0 (00), 1 (01), 2 (10), or 3 (11). Bigger numbers simply require more parallel circuits. A processor stores these numbers in dedicated circuits called registers, which are built from logic gates and retain their values using logical mathematics to form a memory. There is more than one way to create a memory circuit, but it generally involves placing a number of logic gates into a feedback loop.
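To make the encoding concrete, here's a small sketch (the helper name `to_bits` is my own) that spreads a number across a fixed-width "register" of bits, most significant bit first:

```python
def to_bits(n, width=8):
    """Encode a non-negative integer as a list of bits in a fixed-width register."""
    return [(n >> i) & 1 for i in reversed(range(width))]

print(to_bits(2, width=2))  # [1, 0]  -> "10"
print(to_bits(3, width=2))  # [1, 1]  -> "11"
print(to_bits(2))           # [0, 0, 0, 0, 0, 0, 1, 0] in an 8-bit register
```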

Now that we have numbers and a place to store them, we need a final group of circuits built using this mathematical logic, called an ALU, or arithmetic logic unit. The ALU contains a decoder that selects operations based on instructions, and its heart is the adder. An adder is a compound logic gate built from AND, OR, and NOT which adds 2 binary bits, 1 or 0, and produces an output value and a carry bit. So if you add 1 and 0, you get a 1 with a 0 carry, and if you add 1 and 1, you get a 0 and a 1 carry: 1+1=10. There are many of these adder circuits built in parallel, one for each bit in the size of the register. Each carry bit gets added into the next adder in the line, going from right to left. So, with a carry bit added to the above example, 1+1+1=11. In computers, numbers are generally 8, 16, 32, or 64 bits in length. So really, for the mathematical equation 1+1=2: 00000001+00000001=00000010. That is everything you need to know to design a very basic computer that can add.
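The chain of adders just described can be sketched in Python using only AND, OR, and NOT (XOR is built out of those three), wired into an 8-bit ripple-carry adder. The function names are just labels for the illustration:

```python
def xor(a, b):
    # a XOR b = (a OR b) AND NOT (a AND b), built from the three basic gates
    return (a | b) & (1 - (a & b))

def full_adder(a, b, carry_in):
    """One adder stage: two bits plus a carry in, sum bit and carry out."""
    s = xor(xor(a, b), carry_in)
    carry_out = (a & b) | (carry_in & xor(a, b))
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists (least-significant bit first)."""
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 00000001 + 00000001, stored least-significant bit first:
bits, carry = ripple_add([1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0])
print(bits, carry)  # [0, 1, 0, 0, 0, 0, 0, 0] 0 -> 00000010, the number 2
```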

How do you provide the other functions? You rearrange the numbers so that everything becomes addition. How do you subtract? You don't. You make the subtrahend negative and add it to the minuend. For this, you need a way to make a negative number. We use something called two's complement. A negative number can be created from any positive number by flipping every bit in the register and adding 1. -1 starts from 1: 00000001, flipped: 11111110, and adding 1: 11111111. So -1=11111111, and 1-1 is 0000 0001 + 1111 1111 = 1 0000 0000 in an 8-bit register. Whoops, we only have 8 bits in our register. What about that extra 1 which carried out? Throw it away. 0000 0000. 1-1=0. Now you can add and subtract. We design the decoder in the ALU to produce these outputs selectively when the instruction input is an addition or subtraction instruction, which is part of a special code called an instruction set. Programs are written in a human-readable language and translated to this instruction set by a program called a compiler.
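That two's-complement trick can be checked directly in code. This sketch uses a bit mask to emulate an 8-bit register (the helper names are mine):

```python
MASK = 0xFF  # keeps only 8 bits, like a real 8-bit register

def negate(n):
    """Two's complement: flip every bit, then add 1."""
    return ((n ^ MASK) + 1) & MASK

def subtract(a, b):
    """a - b becomes a + (-b); the & MASK throws away the carried-out bit."""
    return (a + negate(b)) & MASK

print(format(negate(1), "08b"))  # 11111111, the 8-bit encoding of -1
print(subtract(1, 1))            # 0
print(subtract(5, 3))            # 2
```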

So how can we multiply and divide? Multiplication is accomplished by repeatedly adding one number to itself, to a count of the other number. Division is done by repeatedly subtracting the divisor from the dividend and keeping track of the count — like multiplication in reverse.
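Both ideas are easy to sketch for unsigned numbers (real hardware uses much faster circuits, but this mirrors the description above):

```python
def multiply(a, b):
    """a * b as repeated addition: add a to a running total, b times."""
    total = 0
    for _ in range(b):
        total = total + a
    return total

def divide(dividend, divisor):
    """Division as repeated subtraction; returns (quotient, remainder)."""
    count = 0
    while dividend >= divisor:
        dividend = dividend - divisor
        count = count + 1
    return count, dividend

print(multiply(3, 4))  # 12
print(divide(13, 4))   # (3, 1): 4 goes into 13 three times, remainder 1
```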

These are the basics of digital arithmetic. There are all sorts of optimizations and tricks that people much smarter than me have figured out, but this is enough to create a machine that can do basic math. According to the work of Turing, any machine that can do basic math can do *any* algorithm, given enough memory space and enough time. So we have just outlined a complete universal computer. The rest is just a matter of programming.

Hope this helps.

Anonymous 0 Comments

Nothing “means” anything to a computer. you give it symbols, it applies pre-defined operations to those symbols, and gives you symbols back. it knows nothing about the symbols, and doesn’t do any kind of “thinking” or assigning “meaning”.

As to your question about math operations, these are just pre-defined operations. Computers aren't taught to do math; they have instructions built into the hardware for many, many types of basic operations (in the olden days some of this lived in a separate device called a math coprocessor, with basic memory operations in the main processor). These are combined in various ways to perform more complex functions.

A computer program tells a processor how to use its built in functions to do something complex.

Anonymous 0 Comments

Look up the youtube channel [Ben Eater](https://www.youtube.com/@BenEater) he does a wonderful job of explaining how computers work on the hardware level. This is where the actual operations happen.

Anonymous 0 Comments

That really comes down to philosophy, not science. Computers are nothing more than machines with lots of parts. With the arrival of machine learning projects like ChatGPT, they can do some things well enough to fool us into thinking a human did them, like writing emails. However, with current AI it is still pretty clear to me that the AI doesn't really understand what it's doing. The guiding principle that makes large language models like ChatGPT work is nothing more than pattern recognition. They're just doing a very advanced version of what your phone does when it suggests the next word to type in a sentence. You can see this in practice by asking ChatGPT to do some basic arithmetic; sometimes it will get the answer right, and sometimes it will give you a wrong answer and be just as certain about it. That's not because it made an error in calculation — it just didn't have enough data on that particular math problem, or problems like it, to guess the right answer.

In the future, though, we could imagine an AI that really does form mental models of whatever it is learning about and is able to answer essentially any question a human could answer after having learned a topic. So, the question then becomes: does that count as understanding? This also brings up the question of what the difference is between a machine and a sentient being; if a computer can think and talk in a perfect imitation of a human, is the computer sentient? Or are we just machines? Really crazy stuff to think about.

Anonymous 0 Comments

The others have explained it quite well already. If you're really interested in a bit more background on how a computer works, try out [Turing Complete](https://store.steampowered.com/app/1444480/Turing_Complete/), a game where you virtually build a simple computer step by step and actually write little programs on it at the end.