By this I mean: when you write code, what exactly gives it the power to do anything? Is it more code? 0’s and 1’s? And then, what gives *that* thing the power to do anything? At some stage I can only deduce that it must just be magic making it work, because it all boils down to something metal and plastic “talking to” an electric current, which doesn’t make sense to me at all.
Computer code is typically an abstraction of “machine code”, the binary language that is directly executed by the computer’s hardware. Those are the 0’s and 1’s that people talk about computers working with, and they in turn are symbolic of electric current (typically 0 is no current and 1 is current).
How does that electric current do anything? Those pulses of current go into computer chips, which rely on a component called a “transistor” to function. A transistor is a semiconductor device that can amplify or switch electric current.
Visualize a transistor as two wires going in and one wire going out. One of the incoming wires is the “source” and the other is the “gate”, with the outgoing wire being the “drain”. If a current comes in on the source but there is no current on the gate, then the current is blocked and there is no current on the drain. However, if a current comes in on both the source and the gate, then it is allowed to flow through and there is a current on the drain!
Using multiple transistors, it is possible to build binary logic operations. Everything computers do is built on this concept, and modern computers contain billions of transistors.
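To make that concrete, here is a minimal sketch in Python that models a transistor as exactly that kind of switch; the 1-bit “current” model and the function names are just for illustration:

```python
def transistor(source, gate):
    # Current reaches the drain only if the gate is "on".
    return source if gate else 0

def and_from_transistors(a, b):
    # Chain two transistors so supply current must pass BOTH gates:
    # a (simplified) AND operation built purely from switches.
    supply = 1
    stage1 = transistor(supply, a)   # blocked unless a is on
    return transistor(stage1, b)     # blocked unless b is also on

print(and_from_transistors(1, 1))  # 1: current flows all the way through
print(and_from_transistors(1, 0))  # 0: blocked at the second gate
```

Rearrange a few of these chains and you get the other logic operations; a chip is just doing this billions of times over.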
And that is how code has the power to do stuff.
There are three separate things:
* code
* inputs
* outputs
The outputs are the “power to do anything”. The outputs could be many things, like printing or drawing on a display, writing to a hard drive, writing to a data link (Ethernet), controlling solid state relays, driving analog outputs, etc.
The way the outputs are implemented is as varied as the outputs themselves. A basic discrete output is a pin on the processor that can be driven to a high or low voltage by writing to it.
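For a tangible example of such a discrete output, here is a hedged sketch using the RPi.GPIO library on a Raspberry Pi (it needs real hardware to run, and the pin number is arbitrary):

```python
import RPi.GPIO as GPIO

PIN = 18  # arbitrary pin chosen for illustration

GPIO.setmode(GPIO.BCM)        # address pins by their chip numbering
GPIO.setup(PIN, GPIO.OUT)     # configure the pin as an output
GPIO.output(PIN, GPIO.HIGH)   # pin is driven to ~3.3 V: your "1"
GPIO.output(PIN, GPIO.LOW)    # pin is driven to ~0 V: your "0"
GPIO.cleanup()
```

Writing a 1 or 0 from code really does move a physical voltage on a physical pin, which can light an LED, click a relay, and so on.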
It all boils down to basic binary math: two bits (1 or 0) and an operation. 1 AND 1 is 1. 1 OR 0 is 1. 1 NAND (not-and) 0 is 1. So you build your processor to do those things using basic electrical gates. Doing these operations in multiple steps, you can build up to basic math, and then advanced math. The processor has multiple layers of memory, registers and cache, where it can store the result of an operation to use it again in a later operation. Then you start to layer on abstractions: a certain series of bits will represent “A”, a different series of bits will represent “B”, and so on.
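As a quick sketch, Python’s bitwise operators expose those same basic operations, and the standard character codes show the “series of bits represents ‘A’” abstraction directly:

```python
print(1 & 1)        # AND:  1 and 1  -> 1
print(1 | 0)        # OR:   1 or 0   -> 1
print(1 - (1 & 0))  # NAND: 1 nand 0 -> 1 (NOT of AND, for single bits)

# The layered abstraction: an agreed-on bit pattern stands for a letter.
print(format(ord('A'), '08b'))  # 01000001 is "A"
print(format(ord('B'), '08b'))  # 01000010 is "B"
```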
Then over decades you layer on more and more abstractions, and you make your CPU faster and faster: putting the components closer to each other to limit how far electricity has to travel, finding better materials for electricity to travel through, adding more layers of caching, and so on. You add shortcuts for commonly used operations to make them faster. You can specialize certain processors for certain operations, essentially making them faster at those operations at the expense of others. Pretty soon it’s doing so much basic math, so fast, that it feels like magic.
This is why people sometimes say we electrocuted sand (silicon) and taught it to think.
Computer code that a human writes gets translated into “machine code.” Machine code is basically, as you say, ones and zeros. Those ones and zeros are stored in physical memory as on/off states, in other words, one of two electrical voltage levels (typically zero volts and a specific non-zero voltage). Those on/off voltage states propagate as electric signals through wires in the processor, memory, etc., which in turn activate or deactivate transistors, which then in turn allow or block current flowing to other components. These 1/0 voltage states control a complex network of transistor “switches,” and hence ultimately control the paths between electrical inputs (keyboard, touch screen, etc.) and outputs (display, speaker, etc.). This is the fundamental machinery of a computer.
Everything inside the computer really comes down to very, very tiny electrical switches.
The simplest thing we do with those switches is called a “logic gate”. Gates have “inputs” and “outputs”, and do very specific things based on whether each input has or does not have electricity applied to it.
A very simple one is an AND gate. It has two inputs and one output. The rules (sketched in code after this list) are:
* If there is no electricity at either input, it will make sure its output is “off”.
* If there is only electricity at one input, it will make sure its output is “off”.
* If there is electricity at BOTH inputs, it will make sure its output is “on”.
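Here is a tiny sketch of those three rules in Python; printing the truth table shows the output is “on” only in the last case:

```python
def and_gate(a, b):
    # Output is "on" (1) only when BOTH inputs have electricity (1).
    return 1 if (a == 1 and b == 1) else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))
# 0 0 -> 0
# 0 1 -> 0
# 1 0 -> 0
# 1 1 -> 1
```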
We learned to build useful machines we call “circuits” out of these “gates”. For example, an “adder” is a circuit that adds numbers. Here’s how that works:
* First, we decide how to use patterns of “on” and “off” on inputs to represent numbers.
* The “adder” has many inputs. Usually at least 8 inputs for one number and 8 inputs for the other number.
* You set up the first 8 input switches to represent the first number, then the second 8 input switches to represent the 2nd number.
* Inside the “adder”, lots of AND gates and other gates are wired together in a way that the 8 outputs flip their switches to represent the number that is the result.
So if you squint, this works just like a gate: you set up the “input” switches and get useful “output” switches. (The sketch below spells this wiring out.)
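Here is a hedged Python sketch of that 8-bit adder built only out of gate functions; inputs are lists of 8 bits, least significant bit first, which is an assumption made for illustration:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # One column of binary addition: two input bits plus a carry bit.
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add8(x_bits, y_bits):
    # Wire 8 full adders in a chain, passing the carry along.
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result

three = [1, 1, 0, 0, 0, 0, 0, 0]  # 3 as input switches, LSB first
five  = [1, 0, 1, 0, 0, 0, 0, 0]  # 5
print(add8(three, five))          # [0, 0, 0, 1, 0, 0, 0, 0], i.e. 8
```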
Computers also have “memory”, which is like a switch that can tell you if it’s on or off. So to store the number from our adder, we have to find 8 switches in memory and say, “Please flip these 8 switches to the same on/off status as these 8 output switches.” Then, later, if we want that number again, we can say, “Please connect these 8 switches to these input switches so they’ll have the same values.”
How does memory work? Gates and circuits! It has inputs to tell it, “What switches would you like me to tell you about?”. After you set up those switches, they are connected to a lot of other gates that flip their own output switches until eventually it connects the correct memory switch to an output switch. Think of it like a very complicated set of train tracks, where you want the train from a particular track to follow a very specific path to get to a very specific track on the “output”. You’d have to study the tracks and figure out how to rearrange all the switches to make that happen. That’s what’s going on inside memory and the CPU.
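In sketch form, that request/response contract looks something like this; the rows of “switches” and their stored values are invented for illustration:

```python
memory = [
    [0, 0, 0, 1, 0, 0, 0, 0],  # address 0: some stored byte
    [1, 0, 1, 0, 0, 0, 0, 0],  # address 1: another stored byte
]

def read(address):
    # "What switches would you like me to tell you about?"
    return memory[address]

def write(address, output_switches):
    # "Please flip these 8 switches to the same on/off status
    # as these 8 output switches."
    memory[address] = list(output_switches)

write(1, read(0))  # copy address 0's switches into address 1
print(read(1))     # [0, 0, 0, 1, 0, 0, 0, 0]
```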
So when you turn a computer on, usually its components spend a little bit of time turning every switch off or on according to how the factory says they should “start”.
A CPU is… a circuit! A very complicated one. The input is “an instruction”, but all that means is we decided if we set up its input switches to be a certain number, that means “I want you to add the two numbers I set up your other input pins to represent.” When that finishes, the next instruction is probably a set of numbers that means, “I want you to store the result of the last thing you did in the memory location with the address I set up your other inputs to represent.”
Those instructions, and the memory addresses/numbers each instruction needs for inputs, are what is in a program. So when the CPU starts, usually it’s configured to look in a hard-coded factory location for a program at the start. So it connects the switches in memory at that location to its input switches. Then that memory gets treated like an instruction. Then it connects the switches in memory at the *next* location to its input switches. THAT memory gets treated as an instruction. This keeps going.
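Here is a toy sketch of that fetch-and-execute loop in Python. The instruction format and opcode numbers are made up for illustration and don’t match any real CPU:

```python
ADD, STORE, HALT = 0, 1, 2    # invented opcodes

memory = {0: 7, 1: 35, 2: 0}  # data at addresses 0, 1, 2
program = [
    (ADD, 0, 1),   # add the numbers stored at addresses 0 and 1
    (STORE, 2),    # store the result of the last thing at address 2
    (HALT,),
]

pc = 0       # program counter: starts at the "factory" location
result = 0
while True:
    instruction = program[pc]   # connect this location to the inputs
    opcode = instruction[0]
    if opcode == ADD:
        result = memory[instruction[1]] + memory[instruction[2]]
    elif opcode == STORE:
        memory[instruction[1]] = result
    elif opcode == HALT:
        break
    pc += 1                     # move to the *next* location

print(memory[2])  # 42
```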
So the information about, say, your OS is somewhere on your disk at that factory-specified location. The CPU knows to copy instructions from that part of the disk into memory right away. Once the OS is running, it has its own rules about loading programs, and when you ask it to run something, it takes over telling the CPU which memory locations to connect to its inputs for instructions.
So it’s like a very, very complicated switchboard, where the machine itself is capable of flipping its own switches. Punchcards were a very direct analogy: the patterns of holes on a card were aligned with a set of switches, and if the paper had a hole the switch would go “down” and that set up patterns for the inputs of the circuits inside the computer. Later we figured out how to make the switches smaller, and how to use other kinds of switches to flip each other.
Code is a set of instructions at a useful level of abstraction for programmers. (I’ll be loose with the “levels” below.) Take an example: you want to add 1 to a variable x.
**High level code (Python)**: x = x + 1
**C-level code:** [Where we have a better sense of what x is and where it is represented in memory at this point] x = x + 1
**Compiled–OS level:** [You don’t really need this step here, because you’re not doing anything relating to the OS, but the OS is holding on to this current procedure and giving it a bit of a “scratchpad” in RAM for holding onto variables.]
**OS–Assembly level:** [Pull x from RAM and put it into a CPU register; think of registers as little “slots” that hold a number while it’s being used for a computation. Then put that “1” into another register. Then issue ADD R2, R0, R1, which puts the output into a third register (R2). Then save that output value back into what we called ‘x’.]
**Machine level:** ADD is really an opcode that is represented in a set of bytes/bits, like 00000100. There is a part of the CPU that knows when it sees “00000100”, it should set up the connections between different components to perform the adding operation with the registers it specified.
**Circuit level:** There is a tiny circuit that knows how to take a set of 0s and 1s represented as voltages (let’s say 5V is 1 and near 0V is 0, though this can get more complicated). Once all of those voltages are applied to the inputs, the output appears as another set of voltages. The general structure of this is an adder, which is really a combination of basic Boolean circuits (AND, OR, NOT).
To be clear, this is for a very basic computer and not really representative of how things work in practice – that gets really complicated with a bunch of shortcuts and other complications.
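One real way to peek at a middle layer of this stack: Python’s built-in dis module prints the interpreter-level instructions behind x = x + 1, and they mirror the load/add/store register steps described above (exact opcode names vary between Python versions):

```python
import dis

def bump(x):
    x = x + 1
    return x

dis.dis(bump)
# Prints instructions along the lines of:
#   LOAD_FAST x, LOAD_CONST 1, BINARY_ADD (BINARY_OP on newer
#   Pythons), STORE_FAST x: load, add, store, just like registers.
```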
Transistors have 3 connections: input, output, and switch. Apply constant power to the input, and when the switch is high, the output is high. These outputs are fed into other inputs and arranged to allow dynamic selection of operations, so addition versus multiplication can depend on a single input.
Code is fed to billions of transistors to actually do the operations. Binary doesn’t always mean 0 = low, 1 = high. That only applies to storage, i.e. RAM and hard drives. When processing, you have a clock. When the clock ticks, it checks to see if there has been a change: no change = 0, change = 1.
When writing code I’ll write “if(x==0) x=1;”
In C/C++, it’ll get turned into assembly (asm) when you compile it. The result translates directly to the binary the computer understands.
For JavaScript, there’s a parser that reads it and then takes the appropriate actions.
This is the Cliffs Notes of the Cliffs Notes; it’s a rather deep subject.
This may be easier to answer from the bottom up.
The fundamental thing in a computer is the electronic switch. They’re different from regular switches in that you turn them on and off with electricity instead of by hand.
If we agree that on=1 and off=0 we have a “yes” gate; the output just repeats your input.
That lets you build combinations of switches by using some switches to turn other switches on and off.
The basic setup is to build “logic gates”. We can easily change a switch so that a 1 input gives you a 0 output and vice versa (a NOT gate), or we can put two switches in series or parallel (an AND or OR gate), and so on.
If we combine logic gates we can get more complicated relationships. We can create feedback loops so that you only need to send a single 0 or 1 pulse to a circuit and then the circuit gets stuck in that position until you send the other pulse. Now we have a memory circuit.
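A hedged sketch of that feedback loop: two cross-coupled NOR gates form what’s usually called an SR latch, and re-evaluating the loop a few times until it settles (a simulation trick, since real circuits settle on their own) shows how a single pulse gets “stuck”:

```python
def NOR(a, b):
    return 1 - (a | b)

def sr_latch(set_pulse, reset_pulse, q=0):
    # Two cross-coupled NOR gates; iterate the feedback until stable.
    q_bar = 1 - q
    for _ in range(4):
        q = NOR(reset_pulse, q_bar)
        q_bar = NOR(set_pulse, q)
    return q

q = sr_latch(set_pulse=1, reset_pulse=0)  # a single "set" pulse...
print(q)                                  # 1: the circuit remembers
q = sr_latch(0, 0, q)                     # no pulses at all...
print(q)                                  # 1: still stuck at 1
q = sr_latch(0, 1, q)                     # ...until a "reset" pulse
print(q)                                  # 0
```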
Now we’ll add some timing circuits so that changes in our circuit happen in discrete steps.
Now we can do some cool stuff. We can build circuits that look at memory and use those bit positions as inputs to our logic circuits. We can take the outputs and send them back to memory.
That’s a primitive and really annoying-to-use computer. You program them by picking combinations of 1s and 0s (machine code) that will switch the rest of your circuits in the right way. The “right way” is something that gives you another set of 1s and 0s that you can interpret in some meaningful way.
That all sucks to use, so we spent several decades using those crappy programming “languages” to write programs that would take sort-of-English-looking stuff (source code) and convert it to the bits that circuits use (machine code).