Let’s start with two people, A and B. A and B work together to build the system that we refer to as a computer.
A focused on building the hardware and B focused on building the software. But before they set out on their respective tasks, they both came to an agreement: they will understand the same set of commands. B agreed that no matter what software they build, they will figure out a way to translate it into these commands, while A agreed to build the necessary physical components (transistors, wires, etc.) to make these commands work.
So both A and B went about doing their tasks independently of each other. This is only possible because of the agreement they have. Let’s call the set of commands they agreed upon the Super Important Agreement (or SIA).
B focused on understanding what the user wants and developing the necessary software, but as the user requirements became more and more complicated (for example, your phone is now expected to do the same tasks your laptop/PC does while having different capabilities), B had three issues: 1. how do I get all these programs to share resources in an efficient and fair manner, 2. how do I translate this huge number of programs into SIA, and 3. how do I write newer programs to meet user expectations?
The thing is, B had to do all of these simultaneously. So B hired C, D and E to do these tasks.
C agreed to take the responsibility of converting the programs to SIA and the only caveat was that the programs had to follow the rules set out by C.
D agreed to take the responsibility of making sure that all these programs co-existed together and shared resources fairly and efficiently.
E agreed to take the responsibility of writing the newer programs (much to C and D’s annoyance).
Now coming back to A, A had to figure out how to convert Silicon into the necessary hardware that could implement SIA.
So A had to somehow figure out 1. how to shape a material like silicon to meet the user needs (e.g. shrink the size) and 2. the structure of the building blocks needed to implement SIA.
A hired two guys, F and G. F agreed to take responsibility for figuring out the building blocks, while G agreed to take responsibility for the physical process, i.e. moulding the silicon to meet the user demands. F and G work on the understanding that G would provide a basic surface where F could come and sketch out the structure of the building blocks. This is similar to how we build Legos: you have an instruction booklet and a base plate. The instruction booklet tells you how to put the building blocks together, while the base lets you physically put them together.
F started using physical switches as the building blocks, but they were huge and failed frequently, so F came up with the idea of a basic building block called the transistor. The beauty of this block was that it could implement the functionality of those huge physical switches while allowing G to shrink or expand it as needed, and it had an added bonus: the smaller you shrink the transistor, the faster it becomes. So everyone agreed that from then on they would use transistors.
So F used transistors as the basic building block, and by putting these blocks in a specific order they could implement the SIA. They would figure out the structure and the physical location of each component and then communicate it to G. (Think of this as the instruction booklet that comes with the Lego set.)
G, in the meantime, focused on preparing the base layer, and then, using the instructions provided by F, put the blocks in the specified order, and voila, you have your physical hardware.
A then gets the physical hardware from G, and B gets the software from C, D and E and puts it on the hardware.
A -> Intel, Apple, ARM, AMD
B -> the software side as a whole (Microsoft, Apple, the Linux community)
C -> Compilers (GCC, the Python interpreter), and the rules that C sets out are your programming languages (see the small example after this list)
D -> Operating System (Windows, Mac, Ubuntu etc.)
E -> Software applications (Video Games, Word editor, etc.)
F -> Physical Design (Synopsys, Cadence)
G -> Backend physical design/foundry (TSMC, UMC)
SIA -> Instruction Set Architecture (ISA), e.g. x86, ARM, RISC-V
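To make the C/SIA part of the analogy concrete, here’s a tiny sketch using Python’s built-in `dis` module. It shows how one line of a high-level program gets translated into the much simpler commands the Python virtual machine understands; that’s bytecode rather than a real hardware ISA, but the “translate everything into an agreed set of simple commands” idea is the same.

```python
import dis

def add(a, b):
    return a + b

# Print the simple, agreed-upon commands this one line translates into.
# (Python bytecode, not a hardware ISA, but the principle is the same.)
dis.dis(add)
```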
A lot of people go right to the logic gates and stuff, but I’ve always found it most intuitive to think about it like this:
1. An electric circuit can be either on or off, meaning it can represent two states.
2. Similar to something like morse code, you can encode numbers so they’re represented by only two symbols, like, say, various patterns of just “1” and “0”, or, “on” and “off”.
3. Combine those two facts and you can now represent numbers with electricity. Like a row of LED lights where some are either on or off, which can be decoded back to the corresponding numbers based on the rules of our code.
4. You can also set up clever electric circuits that allow you to add two numbers. Like two sets of switches where you enter your number codes as the input and one set of LEDs that light up with the encoded representation of the added up result.
5. Once you can add up two numbers, you basically have the building block for all other mathematical operations (see the small sketch after this list).
6. Once you can do all mathematical operations with a circuit, you can basically have them perform any kind of function.
7. Make those circuits smaller and smaller so you can run more and more of them in parallel, and you can have them do more and more complex functions.
8. Now you have a computer.
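If you want to see steps 2–5 in code, here’s a minimal Python sketch (the function and variable names are made up for illustration): numbers are written as patterns of 1s and 0s, and addition is done with nothing but the kind of on/off logic a circuit can implement (XOR for the sum bit, AND/OR for the carry).

```python
def add_bits(a_bits, b_bits):
    """Add two numbers given as lists of 0/1 bits (least significant bit first),
    using only the on/off logic a real circuit implements."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        result.append(a ^ b ^ carry)          # XOR: the "sum" part of a full adder
        carry = (a & b) | (carry & (a ^ b))   # the carry passed to the next bit
    result.append(carry)
    return result

# 6 is 110 and 3 is 011 (written here least-significant-bit first)
print(add_bits([0, 1, 1], [1, 1, 0]))  # -> [1, 0, 0, 1], which is 9
```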
I recommend starting with these amazing videos by Sebastian Lague:
[https://www.youtube.com/playlist?list=PLFt_AvWsXl0dPhqVsKt1Ni_46ARyiCGSq](https://www.youtube.com/playlist?list=PLFt_AvWsXl0dPhqVsKt1Ni_46ARyiCGSq)
In the videos, he goes through, visually, and shows you how logic gates work. He uses a little program he wrote to make some gates, then combines them to make things like addition machines, data stores and such. In some videos, he even breadboards them so you can see them actually running in real life. He’s really really good at explaining these, so I recommend watching them first.
From then on, it ends up being layer on layer on layer. Every chip has these logic gates in it, and the right combination of gates lets the chips do things like store data, or take electrical signals and turn them into audio, or send electrical signals to the monitor, which in turn has its own logic gates that turn pixels on and off.
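If you want a tiny taste of that “right combination of gates” idea in code, here’s a Python sketch: start from a single NAND gate and build NOT, AND, OR and XOR out of it, which is essentially what the chips do (the Python functions are just a toy model, of course).

```python
def nand(a, b):            # the one "primitive" gate
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Truth table for XOR, built entirely out of NANDs
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_(a, b))
```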
It’s a bit like a car. Since the dawn of man people have made various inventions that get refined and miniaturized and repurposed by other people. Someone discovered stuff is flammable which in turn led to making metal which in turn led to making engines which in turn led to making a smaller engine which in turn led to.. and so on and so forth.
Computers are incredibly complex, if viewed as a whole, but once you realize “it’s all just logic gates”, you can start to hypothesize how a processor talks to your graphics card to tell it to turn on or read data or write data.
A lot of people gave more detailed and accurate answers that try to fully explain everything but I’ll try to explain it at a more basic level.
Computers at a very low level act kind of like a light switch. Flip the switch one way for “on”, flip it the other way for “off”. If you had 100 light bulbs, you could have 100 switches, one for each light bulb. But maybe you don’t want to deal with 100 individual light switches, so you start wiring them together. Maybe you set two switches up so that if either switch is flipped to “on”, the light bulb goes on. Maybe for another set, if either switch is “off”, the light stays off. You could then take those more complicated sets of switches and combine them to make something like a switch that turns off all of the lights in the kitchen and turns on all of the lights in the living room (there’s a tiny sketch of this just below).
This basically describes a computer; no matter what a computer actually does, at the lowest level it’s just a bunch of switches that are flipping in complicated ways. The ones and zeros everyone always talks about are the light bulbs being on or off. But the key to why anyone can actually program computers has to do with what people have done over a very long period of time to build up what building blocks can be used. Some of the earliest computers had literal switches that had to be flipped in certain patterns to tell the computer what to do, but now when you look at a computer screen it’s very difficult to see how exactly everything is being translated to ones and zeros. That’s because there are so many layers of building blocks between what you see and what the actual physical hardware is doing.
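Here’s a tiny sketch of those switch wirings in Python, with each switch as a True/False value (the names are made up, just for illustration):

```python
# "If either switch is on, the bulb goes on":
def either_on(switch_a, switch_b):
    return switch_a or switch_b

# "If either switch is off, the light stays off" (both must be on):
def both_needed(switch_a, switch_b):
    return switch_a and switch_b

# One combined switch that turns the kitchen off and the living room on:
def master_switch(flipped_on):
    return {"kitchen": not flipped_on, "living_room": flipped_on}

print(either_on(True, False))    # True
print(both_needed(True, False))  # False
print(master_switch(True))       # {'kitchen': False, 'living_room': True}
```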
Once a computer can do one thing, everyone can reuse that to do more complicated things. So companies that build the hardware will build in a few basic simple actions. You can tell the computer to add two numbers, or check if one number is higher than another, or copy a number from one place to another, things like that. They deal with making sure those actions get translated correctly to stuff that happens in the physical hardware. You can technically write a huge computer program with just those basic operations to directly tell the hardware what to do at all points, but that gets hard to manage pretty quickly.
Instead, higher level building blocks are created using those lower level building blocks. Once someone figures out how to use the add, check, and copy actions to build a loop function that does the same thing over and over until some condition is met, the programmer doesn’t need to know about add/check/copy, they just need to know about the loop. Once someone uses those higher level building blocks, called programming languages, to do something specific like show text on a screen, it can also be re-used. So now someone might not need to know about how to deal with the screen or how a font file works, they just know they have a function called draw text that they can use to do it. These are things like programming libraries, APIs, hardware drivers, etc.
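Here’s a rough Python sketch of that “loop built out of add/check/jump” idea. The instruction names and the three-step program are completely made up for illustration; they’re not a real instruction set.

```python
# A made-up machine: a few named storage slots and two basic actions.
memory = {"counter": 0, "limit": 5, "total": 0}

# The "program": a loop written only with add / check-and-jump style steps.
program = [
    ("ADD", "total", "counter"),              # step 0: total = total + counter
    ("ADD", "counter", 1),                    # step 1: counter = counter + 1
    ("JUMP_IF_LESS", "counter", "limit", 0),  # step 2: if counter < limit, go back to step 0
]

step = 0
while step < len(program):
    op = program[step]
    if op[0] == "ADD":
        value = memory[op[2]] if isinstance(op[2], str) else op[2]
        memory[op[1]] += value
        step += 1
    elif op[0] == "JUMP_IF_LESS":
        step = op[3] if memory[op[1]] < memory[op[2]] else step + 1

print(memory["total"])   # 0 + 1 + 2 + 3 + 4 = 10
```

A higher-level language just lets you write something like `for counter in range(5): total += counter`, and a compiler turns it into steps like the ones above.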
This process of making more and more complicated building blocks and reusing other people’s low level code has been going on basically since computers have existed. So today you might start up a brand new app, but it could be using a programming language that was created 20 years ago built on top of another programming language from 50 years ago on an operating system that is still building off of things created 60 years ago. The key part is that no matter how complicated all of that gets at a high level, in the end it all gets translated into a very long series of simple actions that your computer can do very quickly.
**CPU**
You could start by reading about transistors, which are what any processor is made of on the electronics side.
A transistor is a simple electronic component that acts as a switch. Yes, as simple as that.
Then, you can arrange those (in multiples of 8, up to 64 in parallel, one per bit) to create a logic circuit (to check for conditions and move the signal along a sub-circuit that does the action). Now your transistors have created a processor!
The thing is, a processor is wired to react to many specific conditions, and those “conditions” actually match (ELI5/tl;dr) what is called assembly language (or ASM). Your software! (You can read about UAL on Wikipedia for more.)
Don’t forget your software is stored in some kind of memory, which is also just electronics at the electrical level.
(Technically speaking, you could wire up a multiple of 8 buttons (up to 64) plus one “enter” button instead of using a memory module, then manually enter the next instructions with those buttons. This is a minimalist example. Now imagine you replace the buttons with light sensors: you’ve just created punched cards, or almost!)
If you are a programmer, you can see ASM as a programming language that is little more than a 1:1 parsing layer… a physical one, though.
ELI12: ASM is a programming language, a human language in a text file, which the processor won’t understand directly. However, ASM syntax is closely based on how the processor expects things to work, so there is software that converts it (roughly 1:1) into what the processor understands.
Each command is called an opcode and is listed in a document called a datasheet (technical documentation about how the whole device works). In the case of your modern computer, they all use the same set; otherwise you wouldn’t be able to install the same software across multiple processors.
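To make that concrete, here’s a toy “assembler” in Python. Each human-readable mnemonic maps 1:1 to a number the processor recognizes; the opcodes below are invented for illustration, and a real datasheet lists the actual numbers for a given chip.

```python
# Invented opcodes; a real chip's datasheet defines the real ones.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

source = [
    "LOAD",   # load a value into a register
    "ADD",    # add another value to it
    "STORE",  # write the result back to memory
    "HALT",   # stop
]

machine_code = [OPCODES[mnemonic] for mnemonic in source]
print([hex(b) for b in machine_code])   # ['0x1', '0x2', '0x3', '0xff']
```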
For example, your cellphone and your computer use different chip designs, which is why their software isn’t compatible.
**Outside world**
The CPU only works within itself, so the next step is to interact with something outside. Think about your mouse, your screen, … or even something as simple as an LED or a button. Otherwise, your CPU is going to be useless!
The CPU also contains instructions to directly control (or read) some of its pins (pins that are connected to those transistors you control!).
Then it’s a matter of knowing how to read from (or talk to) whatever is plugged into the pins you want to interact with.
An LED is a basic electronic component; it just reacts to electricity. Nothing fancy.
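Here’s a sketch of what “the CPU controls one of its pins” can look like in practice: on many small chips, a pin is just one bit in a memory-mapped register, so turning an LED on is a single write. The register address and pin number below are completely made up, and a Python dict stands in for the real hardware; the chip’s datasheet gives you the real values.

```python
GPIO_OUTPUT_REGISTER = 0x40020014   # hypothetical register address
LED_PIN_BIT = 1 << 5                # hypothetical pin number 5

fake_hardware = {GPIO_OUTPUT_REGISTER: 0}   # stand-in for the real register

def led_on():
    # Set the pin's bit -> voltage appears on the pin -> the LED lights up.
    fake_hardware[GPIO_OUTPUT_REGISTER] |= LED_PIN_BIT

def led_off():
    # Clear the bit -> the pin goes low -> the LED turns off.
    fake_hardware[GPIO_OUTPUT_REGISTER] &= ~LED_PIN_BIT

led_on()
print(bin(fake_hardware[GPIO_OUTPUT_REGISTER]))   # 0b100000
```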
But your screen, your mouse, … they are controlled by another CPU. So you need to know exactly what to send over those pins. Again, you will need to read the datasheet of the outside circuit to know how to control it.
In this case, it will be a mix of many things: USB/PCIe as the messaging technology, then the HID mouse specification (for the “commands”), and OpenGL/DirectX (for the graphics card).
So ya know how movies and stuff like to portray 0s and 1s floating around inside computers? Well, there aren’t zeros and ones inside the computer anywhere. You could take an arbitrarily powerful microscope, zoom in as much as you want and you won’t ever see 0s or 1s floating around anywhere. The 1s represent a physical charge, the 0s represent lack of a physical charge.
Those 0s and 1s are just ways to portray a very very tiny thing being charged or lacking a charge. It’s maybe worth noting that it isn’t always a charge that’s being talked about, but this is ELI5 and I don’t want to overcomplicate it.
Humans can look at these groups of charge/lack of charge as 1s and 0s because it’s easier for us to work with and allows us to view things at different levels of abstraction depending on what layer of the computer we’re considering: the groups of charged/uncharged transistors get represented as a sequence of 0s and 1s, every so many 0s and 1s can be represented as a hexadecimal number, every so many hexadecimal numbers can be represented as a machine-level instruction, groups of machine-level instructions can be represented as programming language lines, and groups of programming lines can be represented as apps or games or whatever else.
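Here’s a small Python sketch of that layering: the exact same pattern of charges can be read as raw bits, as hexadecimal, as a number, or as text, depending on which layer you’re looking from.

```python
pattern = 0b01001000_01101001        # 16 tiny charged/uncharged spots

print(bin(pattern))                                  # as raw 1s and 0s
print(hex(pattern))                                  # as hexadecimal: 0x4869
print(pattern)                                       # as a number: 18537
print(pattern.to_bytes(2, "big").decode("ascii"))    # as text: "Hi"
```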
Computers are layers upon layers upon layers, each layer providing a level of abstraction and a black box that will fulfil some function.
Fundamentally, computers are input and output systems. A GUI desktop takes inputs from various input devices (keyboards, mice, network, hard drives, etc.), outputs to a screen and perhaps speakers, and writes to hard drives and the network. The outputs of a computer will always be in the form of a digital signal (this is the 1s and 0s; I’ll get to this later), as digital is the fundamental unit of thought for a computer. If a computer wants to output an analogue signal, extra hardware is required for conversion.
Computers have many components to help them produce their output, but the heart is the CPU. It could actually be better thought of as the brain: the CPU is the piece of hardware that does all the thinking required to process input and generate the correct outputs. The way CPUs do this is not too complicated; all CPUs are instruction machines. They operate step by step and take in instructions in the form of strings of 1s and 0s that encode an instruction. These instructions come from a limited set, usually to perform some simple calculation using numbers stored in a very short term memory (called registers), or to read or write to/from longer term memory (this is what we think of as RAM, but these operations are also what allow the CPU to interface with the input/output devices that can access the “data bus”).

It can be difficult to imagine how the ability to do such simple operations would allow us to create something as complicated as a desktop computer, but with enough of these instructions in the right order it can be done. To help us produce the correct instructions we have invented tools that let us develop them more quickly and easily without getting lost in the 1s and 0s. For example, we can develop shorthands for common groupings of instructions and write programs to convert those shorthands into the appropriate CPU instructions (the shorthands are called programming languages and the programs that convert them are compilers). Then we can write complicated programs that help us write and manage programs on the CPU, give us easier ways to manage resources such as our input/output devices, let multiple programs “share” the main CPU by jumping back and forth between them, and much more; this is an operating system.
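That last idea, letting multiple programs “share” the CPU by jumping back and forth, can be sketched in a few lines of Python. The “programs” here are just lists of labelled steps, invented for illustration; a real operating system does the same kind of switching with real CPU state.

```python
program_a = ["a1", "a2", "a3", "a4"]
program_b = ["b1", "b2", "b3"]

def run_with_time_slices(programs, slice_size=2):
    """Run a few steps of each program in turn, like a very crude scheduler."""
    positions = [0] * len(programs)
    while any(pos < len(prog) for pos, prog in zip(positions, programs)):
        for i, prog in enumerate(programs):
            for _ in range(slice_size):
                if positions[i] < len(prog):
                    print("running", prog[positions[i]])
                    positions[i] += 1

run_with_time_slices([program_a, program_b])
# Steps come out interleaved: a1 a2 b1 b2 a3 a4 b3
```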
Let’s break this apart.
Computer processors are made from billions of tiny little switches called transistors. A transistor can be on or off. Conveniently, we can base a maths system around this: a base-2 system. What that boils down to is that we can represent information by a series of on and/or off transistors. By arranging them in the right way, we can make the transistors store long pieces of information. This is your computer’s memory. The processor receives a piece of information and can do things to it. It can do arithmetic operations (add, subtract, multiply, and divide), logic operations (>, <, =), and others (register shifting, counting the 1s or 0s). So basically, the processor takes commands and information, and outputs information back to memory.
When you initially turn on a computer, a bit of special memory is loaded into main memory and the processor runs it. This is the BIOS, or basic input-output system. It provides the necessary instructions to start up the hardware and put all the transistors in the right state. After this, your operating system is loaded into memory, and the processor begins to run it.
To provide you with a GUI, the processor and software together are doing two things: 1) creating what is displayed and 2) talking to the monitor.
Let’s start with 2. The monitor you use can vary, but the basic principle is the same. An array of tri-color lights is set up. Because of how our eyes work, a mix of monochromatic red, green and blue light can create almost any color we can perceive. So for each point of 3 colors on the display, we can control the brightness of each light to control its effective color and brightness. Put a bunch together, view them from far enough away, and your brain blurs them into an image. So the monitor is designed to receive a very specific set of information: the specific levels for each light across the array. This creates the image we see.
1) The processor is the one producing this stream of information. Let’s start basic and say it’s only doing the background wallpaper. That image is stored in memory (stored the same way the monitor will recreate it, as an array of RGB light values), and the processor takes each line of that image and turns it into the data stream it sends to the monitor. The processor also puts the mouse icon in a default position on the screen, tracks the x and y movements the mouse makes, and recreates them on the screen. The software will command the processor that during a specific part of the stream for the screen, instead of the background, it’s going to draw something new: a box with the start button in it. The processor knows the monitor’s resolution (because the monitor is also another computer that can talk to the big computer), so it knows how much space it’s working in, and when creating the stream for a particular x, y coordinate it will draw the box and button. It takes input from your mouse, and when you move the mouse to the same coordinates as the button and click, the processor recognizes the click in the right spot and executes the program associated with the button, according to its software. That’s how you interact with the screen in front of you.
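Here’s a very simplified Python sketch of points 1 and 2 together: the “screen” is just a grid of (red, green, blue) numbers, the processor overwrites part of that grid to draw a button, and a click is just a comparison of the mouse’s x, y against the button’s coordinates. All the sizes and colors are made up.

```python
WIDTH, HEIGHT = 20, 10
wallpaper = (30, 30, 120)                                # background color (R, G, B)
screen = [[wallpaper] * WIDTH for _ in range(HEIGHT)]    # the data sent to the monitor

# "Draw" a start-button box by overwriting part of the grid.
button = {"x": 2, "y": 6, "w": 6, "h": 3, "color": (200, 200, 200)}
for row in range(button["y"], button["y"] + button["h"]):
    for col in range(button["x"], button["x"] + button["w"]):
        screen[row][col] = button["color"]

# A mouse click is just an (x, y) pair; check whether it landed inside the box.
def clicked_on_button(mouse_x, mouse_y):
    return (button["x"] <= mouse_x < button["x"] + button["w"]
            and button["y"] <= mouse_y < button["y"] + button["h"])

print(clicked_on_button(4, 7))   # True  -> run the program tied to the button
print(clicked_on_button(15, 1))  # False -> the click landed on the wallpaper
```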
Imagine a CPU+RAM as a giant Excel spreadsheet. Each cell is just numbers, or equations that reference other numbers (all in 0/1 binary), but it’s all just numbers, and the CPU is built to do some basic operations on those numbers.
A program loads itself into the CPU, and it starts with the first command in A1, which can be any of an assortment of basic operations (see x86 assembly language).
The A1 box might say “add B3 to C3 and store in J45”, then A2 says “if J45 is > J46, place ‘55’ into M18, display M18 to the pixel”… which shows a red (55) pixel wherever M18 is drawn on screen. (All those numbers are made up, but so are the numbers in a CPU.) There is also a jump command to skip ahead/behind to repeat commands in code: “if J45 == 16, jump to command B43”.
We’ve given numbers in the CPU meaning: every single number is either a command, a value, a location, a letter, a color, etc. It’s all just numbers being moved around, billions and billions of numbers. Some numbers become letters, some numbers are decimals, some are integers, etc. It’s all about how we add meaning to the numbers via standards. (IEEE floating point tells us how to interpret a chunk of 0/1 numbers as a decimal number. ASCII tells us how to interpret a number as a character (letters). 24-bit color tells us how to turn numbers into colors. x86 tells us how to interpret numbers as CPU commands. All these standards are made up by humans to map 0/1 numbers into meaning.)
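A quick Python illustration of “meaning comes from the standard you read the number through”: the same number can be arithmetic, a letter, part of a color, or a raw bit pattern.

```python
value = 72   # just a number sitting in memory

print(value + 10)     # as arithmetic: simply a quantity (82)
print(chr(value))     # through ASCII: the letter 'H'
print((value, 0, 0))  # as the red channel of a 24-bit color: a dim red
print(bin(value))     # as the raw 0/1 pattern the hardware actually stores
```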
It starts as very simple CPU level commands, and eventually fills a grid of pixels on screen that you see, while in the parts you don’t see, it’s storing numbers and values it needs.
It’s storing the letters I’m typing into a row of memory and turning them into pixels you see. It has blocks of memory that describe the pattern for this font (a letter-to-pixel diagram), and when told to print the ASCII letter ‘S’, it can look up that pixel pattern.
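Here’s a toy version of that lookup in Python. The 5x5 pattern for ‘S’ is made up; a real font has much bigger, nicer patterns, but the idea of “letter in memory -> pixel pattern on screen” is the same.

```python
# A made-up 5x5 "font" entry for the letter S: 1 means the pixel is lit.
FONT = {
    "S": [
        [0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 1],
        [1, 1, 1, 1, 0],
    ],
}

def draw_letter(letter):
    for row in FONT[letter]:
        print("".join("#" if pixel else " " for pixel in row))

draw_letter("S")   # the stored pattern becomes pixels you can see
```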