Eli5 Where exactly do computers make decisions?


I understand the concept of coding, that it’s a bunch of if/then/unless kind of hardwired decisions. But where, PHYSICALLY, and how does this happen? This one I need actually explained like I’m five. I’ve never understood how code physically implements itself into fruition.


10 Answers

Anonymous 0 Comments

They kind of do it everywhere! Every transistor, of which there are billions in a computer, either turns “on” (conducts electricity) or “off” (does not conduct much) based on the voltages supplied to it.

The code you are probably thinking of is mostly executed in the microprocessor (CPU). Transistors are used to construct simple circuits that implement simple logical “decision making blocks” such as OR, AND, etc. These are commonly called logic gates.

The transistors are wired up into logic gates, which are constructed into more complex circuits, which can be constructed into even more complex circuits, etc. etc. It’s like building things out of Legos to make bigger more complicated Lego things, which can be combined to make even bigger more complicated ones, etc. etc.

There are quite a few other devices besides the CPU that are also "making decisions", i.e. executing code. It's just not the main program code you are thinking of, but driver code, firmware, etc. And all of those devices are constantly passing data and instructions to each other as they run.
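As a rough sketch of the "Lego" idea, you can model a transistor as a controlled switch in software and wire two of them into gates. This is a toy in Python, not real electronics; the function names are mine:

```python
# Toy model: a transistor passes current to its output only
# when its control input is on.
def transistor(signal: bool, control: bool) -> bool:
    return signal and control

# Two switches in series act as an AND gate.
def AND(a: bool, b: bool) -> bool:
    return transistor(transistor(True, a), b)

# Two switches in parallel act as an OR gate.
def OR(a: bool, b: bool) -> bool:
    return transistor(True, a) or transistor(True, b)

print(AND(True, False))  # False
print(OR(True, False))   # True
```

From here the "bigger Lego things" are just more of the same: gates wired into adders, adders and registers wired into a processor.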

Anonymous 0 Comments

I’ve wanted to teach this for ages, my time has come!

The first thing to understand is that a transistor is simply a switch operated by electricity. It’s got an input and an output, and there’s a control input that, if it sees voltage, will allow the current to flow to the output. So 2 inputs: signal and control, and 1 output.

Taking a step back, this is a form of the boolean logic for an AND gate. If there’s input voltage but the control is off, there’s nothing on the output. If the control is on but there’s no input voltage, then you’ve successfully sent no voltage to the output. If both the input AND the control are on, then the output is on.

This is an example of boolean AND. There are other “logic gates” for OR (either input makes the output turn on) and other logical operations.

It’s possible to chain these logical operations together to do binary math; for more detail start here: https://en.m.wikipedia.org/wiki/Adder_(electronics). For now, just trust me that you can create a physical circuit that, given 2 binary numbers, “happens” to produce their binary sum as its output. It’s a hard-wired circuit designed to add by the chip maker.
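For a taste of how that adder works, here is a minimal sketch in Python, with bitwise operators standing in for the XOR, AND and OR gates (the function names are mine, just for illustration):

```python
def full_adder(a, b, carry_in):
    """One column of binary addition, built only from gate operations."""
    s = a ^ b ^ carry_in                      # sum bit: XOR of all three
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry from AND/OR network
    return s, carry_out

def add4(x, y):
    """Add two 4-bit numbers given as lists of bits, least significant first,
    by rippling the carry from one full adder to the next."""
    carry = 0
    out = []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 (bits 1,1,0,0) + 5 (bits 1,0,1,0) -> 8 (bits 0,0,0,1)
bits, carry = add4([1, 1, 0, 0], [1, 0, 1, 0])
print(bits, carry)  # [0, 0, 0, 1] 0
```

The real circuit does exactly this, except each `full_adder` is a fixed clump of transistors and all the columns compute at once.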

Next, know that the processor is hard-wired to do this in a loop:

* Load next instruction and arguments into registers (immediately needed working memory)
* Execute the instruction by activating a circuit that performs the operation
* Store the output in a specified register

… over and over again.

So imagine you compile your code and it generates an add instruction. The processor, when it gets there, puts the two input numbers in the right place, “turns on” just the add circuit, and stores what comes up as its output.

Every operation in the instruction set is built this way (hard-wired in the chip), and your program is compiled down to that set of instructions: a list that tells the processor, over and over, to load, activate the physical circuit for the right operation, and store.
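That load/execute/store loop can be sketched as a toy in Python. The instruction names and register names here are invented for illustration, not from any real processor:

```python
# Hypothetical registers: the processor's immediately needed working memory.
registers = {"r0": 0, "r1": 0, "r2": 0}

# A compiled program is just a list of instructions like this.
program = [
    ("LOAD", "r0", 2),           # put the constant 2 into r0
    ("LOAD", "r1", 3),           # put the constant 3 into r1
    ("ADD",  "r2", "r0", "r1"),  # "turn on" the add circuit: r2 = r0 + r1
]

# The processor's endless loop, in miniature: load, execute, store.
for op, dest, *args in program:
    if op == "LOAD":
        registers[dest] = args[0]
    elif op == "ADD":
        registers[dest] = registers[args[0]] + registers[args[1]]

print(registers["r2"])  # 5
```

In hardware, the `if op == ...` part is itself a circuit (the decoder) that routes the register values into the right hard-wired block.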

That’s it, basically! It can get way more complicated but that’s the basics.

Anonymous 0 Comments

A transistor is a special kind of electrical switch. In a normal switched circuit, electricity can flow through when the switch is in the “on” position, and can’t flow through when it’s in the “off” position. A light switch works like this.

A transistor operates the same way, except there’s an extra wire leading to it. There are two sources of electricity going to the transistor, and both of them have to be “on” for electricity to come out the other end. Instead of you manually flicking the light switch to “on”, the electricity from the second wire does it.

So in a computer, they have all these transistors arranged together in a giant pattern. It’s set up like a million dominoes, each one ready to activate the next. But imagine that this set of dominoes has switches, like a train on a track (in fact, the dominoes themselves are basically the switches). And depending on which switches are flipped, Domino #1 can knock down either Domino #2 or Domino #3. So you know ahead of time, when you set up this complicated pattern, which switches need to be flipped to get certain answers.

Computer code gives directions on which switches to flip. Then when electricity passes through, the right dominoes all fall over, and depending on which path they take, that gives you your answer. Then they instantly reset (because it’s not actual physical dominoes, it’s just an electrical pulse running a giant maze).

Anonymous 0 Comments

A processor is made out of transistors. You can think of a transistor like a light switch: when you flip it, it lets electricity flow. If you hook two together, you can create “AND” logic, where both switch A *and* switch B have to be flipped in order for electricity to pass. We call these “logic gates”, and we can do other configurations for OR gates and so on, building up some complex logic.

An entire microchip is made out of billions of these transistors, all set up almost like dominoes: if a certain set of “input” transistors is turned on, they change the path electricity takes through the chip, usually switching other transistors in the process. Each of these input configurations is called an “instruction”, and is usually represented as a bunch of 1s and 0s (1 for “on”, 0 for “off”). A single instruction might be “ADD 1, 2”, where “ADD” switches the processor to ADD mode, which means the next two binary numbers it gets (0001 and 0010 in this case) will be sent to the part of the circuit that handles adding numbers.

When we program, we *could* type in the 1s and 0s, but that would be ridiculous. Instead we type in a programming language (the if/then/unless logic you mentioned in your post) and that gets “compiled” into the binary instructions the processor can understand.
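Just to make the “bunch of 1s and 0s” concrete, here is a hypothetical 12-bit encoding: a made-up 4-bit opcode followed by two 4-bit operands. Real instruction sets are laid out differently, but the idea is the same:

```python
# Made-up opcode table for this sketch only.
OPCODES = {"ADD": 0b0001}

def encode(op, a, b):
    """Pack an opcode and two small operands into one 12-bit word."""
    return (OPCODES[op] << 8) | (a << 4) | b

word = encode("ADD", 1, 2)
print(f"{word:012b}")  # 000100010010  ->  opcode 0001, operands 0001 and 0010
```

A compiler's job is, at the very bottom, to produce long runs of words like that one.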

Anonymous 0 Comments

It is a lot of interconnected switches that perform logical operations. Today they are made of transistors, but you could build them with mechanical switches; it would just be slower. Computers have been made with relays, which are electrically controlled switches.

A regular light switch just turns a lamp on and off with a single switch.

But in a staircase, you can turn it on and off by just flipping either switch. That is a logic function.

If we look at the positions of the two switches A and B, and call up 1 and down 0, one possible wiring gives:

A B Light
0 0 on
0 1 off
1 0 off
1 1 on

That means if they point in the same direction the lamp is on, and if they point in opposite directions it is off. This is the result of the wiring https://en.wikipedia.org/wiki/Multiway_switching#Traveler_system

As a logic circuit, this is a https://en.wikipedia.org/wiki/XNOR_gate

It is with connections like this between transistors, working as electrically controlled switches, that computers are built. As mentioned above, computers can also be made with electrically controlled mechanical switches, that is, relays. From the 1940s to the late 1950s computers were built this way. Vacuum tubes, and later transistors, were faster, and transistors were a lot smaller too, but the function was the same.

For simpler control systems, relays were used before the computer was invented, and they continued to be used after computers moved to transistors. They are still used in some systems, like railway signaling, where their ability to handle high voltage and electrical interference is an advantage; they do not need to be fast, they need to be reliable.

Take a look at [https://eater.net/8bit/](https://eater.net/8bit/), where transistors are explained at the physics level, then used to build logic gates, and then an 8-bit computer is built from logic chips. This will show you how a computer works physically and how you could design one.

Anonymous 0 Comments

Computers don’t make decisions when they run code. They do exactly what the code says.

The code is a sequential list of instructions that get loaded and run (mostly) in order. The list of commands that a processor knows is called its instruction set. These commands are quite simple; stuff like “load this value here”, “add these values and store it here”, “if this value is greater than zero, skip the next line”, “instead of the next instruction, go to this instruction instead”. The way that computers have conditional execution is usually a combination of those last two.

For instance, these instructions will return the bigger value between A and B:

* If A is greater than B, don’t read the next sentence.
* Return B.
* Return A.
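Those three lines can be run literally, with a tiny Python loop standing in for the processor. The instruction names here are invented for the sketch:

```python
def run_max(a, b):
    """Return the bigger of a and b using only a conditional skip,
    mimicking the three instructions above."""
    program = [
        ("SKIP_IF_GT", a, b),  # if a > b, don't execute the next line
        ("RETURN", b),
        ("RETURN", a),
    ]
    pc = 0  # which instruction is next
    while True:
        op, *args = program[pc]
        if op == "SKIP_IF_GT":
            pc += 2 if args[0] > args[1] else 1  # skip one line, or don't
        elif op == "RETURN":
            return args[0]

print(run_max(7, 3))  # 7
print(run_max(2, 9))  # 9
```

Every `if` in a high-level language eventually compiles down to a compare-and-skip (or compare-and-jump) like this.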

Anonymous 0 Comments

Computers think in binary numbers, and they use a type of algebra known as *boolean logic*. **This has nothing to do with ordinary arithmetic**. It was originally developed to make rules using two values, True and False, and four operators: OR, AND, XOR and NOT.

* False OR False = False
* False OR True = True
* True OR False = True
* True OR True = True

So the OR of two values is always True unless both values are False.

AND goes:

* False AND False = False
* False AND True = False
* True AND False = False
* True AND True = True

The AND operator gives False for every combination, unless both values are True.

XOR goes:

* False XOR False = False
* False XOR True = True
* True XOR False = True
* True XOR True = False

If both are the same it’s False, otherwise it’s True.

NOT is easiest: it’s simply the other one.

* NOT False = True
* NOT True = False
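All four tables above can be checked in a few lines of Python (on two booleans, the `!=` operator behaves exactly as XOR):

```python
from itertools import product

# Print every row of the OR, AND and XOR tables.
for a, b in product([False, True], repeat=2):
    print(a, b, "| OR:", a or b, "AND:", a and b, "XOR:", a != b)

# NOT is simply the other one.
print("NOT False =", not False, "| NOT True =", not True)
```

Running this reproduces the truth tables line by line.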

Once computer scientists realised they had to work in binary 0s and 1s, they discovered they could use Boolean algebra, replacing False = 0 and True = 1.

A logic gate is a device that has two or more inputs, combines them using one of the operators above and gives one output. The internal mechanism is a combination of switches; in the 1940s these were made of valves^(1) which gave way to transistors which gave way to silicon chips. You can also model these using a battery, some regular light switches and a bulb.

Remember the XOR table? If both inputs are the same it’s false, otherwise it’s true. So let’s make an XOR logic gate:

[https://imgur.com/YuSGzeR](https://imgur.com/YuSGzeR)

You need A or B set to 0 to allow current in, and A or B set to 1 to allow current out and complete the circuit. If both are the same, no light.

So that’s how you do it with mechanical switches, and you can make the same electronic circuit using valves, transistors or silicon.

===

Valves need to be heated, so they glow and possibly attract moths. Such a moth might touch two terminals at once, creating a short or a “bug in the system”.

It’s quite astonishing to think how much electronic miniaturization has advanced since World War 2. The chip in the new iPhone 14 has around 15 billion transistors on it; allowing that some logic gates are more complex, we might say three switches per gate, so 45 billion switches in all.

To make an iPhone 14 out of valves you might need 2 watts per valve, which works out to about 90 gigawatts, a few percent of the entire world’s electricity generation, all for one phone. Size? Allowing 1cm² per valve, it would cover about 4.5 square kilometres, the footprint of a small town.

So, not portable.
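Redoing the arithmetic of that estimate (45 billion valves at 2 watts and 1 cm² each) in Python:

```python
switches = 45e9            # estimated valves needed
power_w = switches * 2     # at 2 W per valve
area_m2 = switches * 1e-4  # at 1 cm^2 = 1e-4 m^2 per valve

print(power_w / 1e9)  # 90.0 -> about 90 gigawatts
print(area_m2 / 1e6)  # 4.5  -> about 4.5 square kilometres
```

Enormous either way, but it is town-sized rather than continent-sized.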

Anonymous 0 Comments

A processor has a “program counter”. This is basically a number that says which instruction is next. The CPU reads and executes the instruction at that location then increases the program counter.

A jump simply changes the program counter to a completely different number.

We also have conditional jumps. We can jump if the result of the previous instruction was zero, or negative.
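A minimal sketch of that program counter, with a conditional jump, in Python (the opcodes here are invented for illustration):

```python
acc = 0  # a single working register ("accumulator")
pc = 0   # the program counter: which instruction is next

program = [
    ("SET", 3),     # acc = 3
    ("DEC", None),  # acc = acc - 1
    ("JNZ", 1),     # jump back to instruction 1 if acc is not zero
    ("HALT", None),
]

while True:
    op, arg = program[pc]
    if op == "SET":
        acc = arg
    elif op == "DEC":
        acc -= 1
    elif op == "JNZ" and acc != 0:
        pc = arg   # a jump just overwrites the program counter
        continue   # skip the normal "increase the program counter" step
    elif op == "HALT":
        break
    pc += 1

print(acc)  # 0 (the loop counted 3, 2, 1, 0 and then fell through)
```

Loops, ifs, and function calls in high-level code all boil down to writing new values into the program counter like this.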

Anonymous 0 Comments

this is the best and most clear explanation i found

Anonymous 0 Comments

It’s all transistors in the CPU; all code is ultimately executed by logic gates, which are built from transistors. So it’s literally just electricity flowing through the transistors.