Explain how a modern computer works at the most basic level.

How do you go from some silicon wafer with some metallic lines printed on it to having a GUI desktop. I don’t understand the intersection between hardware and software at the fundamental level.

Anonymous 0 Comments

The silicon wafer has billions of tiny switches on it. Some of those switches change depending on initial input, some of them change depending on the state of adjacent switches. With clever combinations of switches, you can do all sorts of math.

By doing math, you have taken an input signal and generated a different output signal. These input and output signals can do many things; one of those things is giving the coordinates of a pixel and values for the red, green and blue elements of that pixel.

So your GUI is, at its most basic level, a very long list of coordinates and the associated red, green and blue values for each of those coordinates.

That list and those values change in set ways depending on the inputs the computer receives, such as a mouse click in a certain location. In this way, the GUI can react to your inputs and get some manner of work done.
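If it helps to see that in code, here's a minimal sketch in Python. All the names are made up for illustration; it just shows a "GUI" as a list of pixel values that changes in a set way when it receives a click:

```python
# A "screen" as a list of pixel values: one (red, green, blue) per coordinate.
WIDTH, HEIGHT = 8, 8
framebuffer = {(x, y): (0, 0, 0) for x in range(WIDTH) for y in range(HEIGHT)}

def handle_click(x, y):
    """React to a mouse click by turning that pixel white."""
    framebuffer[(x, y)] = (255, 255, 255)

handle_click(3, 4)
print(framebuffer[(3, 4)])  # (255, 255, 255): the pixel the "GUI" now shows
```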

Anonymous 0 Comments

At the most basic level a computer is memory to store data, memory to store a program, a CPU to manipulate the data using directions from the program and a collection of “peripherals” (e.g., graphics, keyboard, Ethernet, HDMI, etc.) that connect these components to the real world. More on the hardware later.

The program at its most basic is 1s and 0s, and that is how early computers were programmed. This was a very slow, error-prone and tedious way to create a program. Over the years, more and more abstract ways of programming have evolved: assembly code, which really is just text labels for each of the binary commands, and "high-level" languages like FORTRAN, COBOL, Java, C, etc. Many such languages exist, and they have constructs like "if, then, else", "multiply", "add", "read" from memory and "write" to memory. These languages must be "compiled" into the binary machine code so the hardware knows what to do. Some compilation is static, that is, it is done once and the result is stored in program memory. Some is done at run time, compiling each time the program is activated.
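To make the "assembly is just labels for binary commands" point concrete, here's a toy sketch in Python. The mnemonics and opcodes are invented for illustration, not any real CPU's instruction set:

```python
# Toy illustration: assembly is just human-readable labels for the binary
# patterns the hardware executes. These opcodes are made up.
OPCODES = {
    "LOAD":  0b0001,   # read a value from memory into a register
    "ADD":   0b0010,   # add two registers
    "STORE": 0b0011,   # write a register back to memory
}

def assemble(program):
    """Translate text mnemonics into the numbers the machine actually runs."""
    return [OPCODES[line] for line in program]

print(assemble(["LOAD", "ADD", "STORE"]))  # [1, 2, 3]
```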

Programming now keeps building on itself. For example, instead of writing programs that each have to communicate with the peripheral units, manage memory, etc., computers have an operating system that provides standard functions all software can use. Likewise, graphics hardware has drivers that perform common graphics commands. In addition, languages like Python have enormous libraries of common functions that programmers can use to speed up software development.

Back to the hardware. Memory originally was very simple. For data, it was a set of circuits that could store the ones and zeros plus a connection to the CPU. For the program, it was a separate system that might use the same circuits as the data but had to be manually loaded; some computers used paper tape that was continuously read by the CPU, and other media existed too. Today memory is much more involved. The program and data are stored together on a hard drive (or maybe in the cloud). When the program is activated, parts of it are loaded into DRAM (these are the memory sticks you buy when you get the computer). Then, as the program runs, parts are loaded into caches on the silicon. The caches are hierarchical, ranging from large, slower ones that hold both program and data down to small, fast ones that hold data and instructions separately. These are what the CPU directly accesses.
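Here's a rough sketch of that hierarchy in Python. The cycle counts are ballpark illustrations of "small and fast" versus "large and slow," not measurements of any real chip:

```python
# Check the small fast cache first, then the larger slower one, then DRAM.
l1, l2, dram = {}, {}, {addr: addr * 2 for addr in range(1024)}

def load(addr):
    if addr in l1:
        return l1[addr], 4        # L1 hit: a few cycles
    if addr in l2:
        l1[addr] = l2[addr]       # promote into the faster cache
        return l1[addr], 40       # L2 hit: tens of cycles
    l2[addr] = l1[addr] = dram[addr]
    return l1[addr], 400          # miss both caches: hundreds of cycles

print(load(7))   # (14, 400): first access goes all the way to DRAM
print(load(7))   # (14, 4):   second access hits the fast L1 cache
```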

The CPU has circuits that typically implement arithmetic, Boolean logic, loads, stores and program flow (branches, jumps). In addition, the CPU has a memory management unit to help the operating system move data through the memory hierarchy as needed. It can use the load/store hardware to communicate with the peripheral elements.

The peripheral elements are also known as the I/O system. Graphics is its own beast. It has a large array of simple processors that work in parallel to go from commands to a "bitmap" of the image that you see on the screen. It has to redo all those calculations each time the screen changes, which today can be 120 times per second. That's a lot of math, and it's why graphics cards need a lot of power. Once the image is in memory, the graphics unit spits it out in a serial fashion to the HDMI port so your monitor can display it.
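A quick back-of-the-envelope sketch in Python (with an invented per-pixel rule) shows why that's a lot of math:

```python
# Every pixel of every frame gets recomputed. A real GPU does this with
# thousands of simple cores in parallel; this made-up per-pixel rule runs
# serially just to show the idea.
WIDTH, HEIGHT, FPS = 1920, 1080, 120

def render_frame(t):
    # One invented "shader": each pixel's brightness depends on position and time.
    return [(x + y + t) % 256 for y in range(HEIGHT) for x in range(WIDTH)]

# Even before any fancy effects, a 1080p screen at 120 frames per second means:
print(WIDTH * HEIGHT * FPS)  # 248832000 pixel values computed every second
```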

This is very superficial, but hopefully helpful. For you experts out there: I intentionally simplified things, so DM me if you want an in-depth debate about how things work.

Anonymous 0 Comments

It all starts with a silicon sandwich called a transistor. Silicon is what sand and glass are made of. In this sandwich, the goal is to decide whether electricity can get from one slice of "bread" to the other by applying voltage to the "meat" or not. Basically, bread 1 is electrified, and if you electrify the meat with an additional source of energy, you can electrify bread 2.

There are different types of sandwiches with different layers and if you connect them to each other you can do math with them.

Since we can stack billions of these sandwiches, and they can work really fast, we can perform incredibly complex operations with them, like playing Flappy Bird.
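If you want to see the sandwich idea as code, here's a toy model in Python. It's only an analogy (real transistors are analog devices), but it shows how chaining switches gives you logic:

```python
# A transistor as a sandwich-switch: "bread 1" (the source) can only reach
# "bread 2" (the output) when the "meat" (the gate) is electrified.
def transistor(source: bool, gate: bool) -> bool:
    return source and gate

# Connect two in series and you get an AND gate: current flows only if
# both controls are on.
def and_gate(a: bool, b: bool) -> bool:
    return transistor(transistor(True, a), b)

print(and_gate(True, True))   # True
print(and_gate(True, False))  # False
```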

Anonymous 0 Comments

Others have explained the bits and bobs of how the little parts use electricity and logic to make a decision machine. But what about the bigger parts — the ones you touch and look at?

A computer has only 5 parts: A Clock, Processor, Memory, Output, and Input.

A Clock keeps everything working together; information moves from place to place in time with the ticks. There are exceptions, but mostly it's a huge bucket brigade as the 1s and 0s move around the computer system.

A Processor (the CPU), sometimes one of many, takes its piece of information (10110011 etc.), follows the instructions for that piece, and makes it move somewhere else. And the next, and the next, all in time with the Clock.

Memory is the space where the CPU reads things from, and puts things when done. There are several types of memory, particularly Storage and Workspace. Think of a kitchen table. A recipe card (a program) is on the table (workspace – usually Random Access Memory – RAM). The CPU reads the recipe, goes to Storage (usually the Hard Drive), gets the stuff it needs and puts it on the table (RAM). Then the CPU follows the program to do stuff. And, some of that stuff goes to:

Output is where you get to see what’s happening. The output can be a video screen, printer, manufacturing device, blinky light thing, or whatever the computer can control in some way. And all of that happens because somewhere there was an:

Input is where you (or a sensor) tell the computer to do something. A mouse, keyboard, thermostat, position sensor, etc, makes a signal for the computer to detect. And what does it do then? It follows the program to print the letter “E” on the screen (output), shift the hydraulic cylinder a little bit, raise/lower the oven temperature, change the radio station, well … anything!

So you can have infinite variations of all these parts working together, sometimes nested inside each other, or working side by side. Often, both!
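Here's a toy sketch of those five parts working together, in Python. Everything in it is invented for illustration:

```python
# Each tick of the Clock, the Processor takes one item from Input, follows
# its "program," and sends the result to Output, using Memory as workspace.
inputs = list("hello")       # Input: keystrokes waiting to be processed
memory = []                  # Memory: the workspace (like RAM)
outputs = []                 # Output: what ends up on the "screen"

for tick in range(len(inputs)):      # the Clock: one step per tick
    key = inputs[tick]               # Input -> Processor
    memory.append(key)               # Processor stores it in Memory
    outputs.append(key.upper())      # the "program": capitalize and output

print("".join(outputs))  # HELLO
```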

But, no matter how much you click your mouse, you still won’t get redstone ore until you have an iron pick. That’s just part of the game.

Anonymous 0 Comments

Most of the answers I see are def not for a 5 yo.

Basically, digital electronic signals are, at their root, incredibly simple. Thing is, they are also blindingly fast. That means you can stack what they are doing until you have something not simple at all, but still pretty fast.

It all starts with what is called "Machine Language." It directly tells the hardware (silicon and wires) what to do. From there you can start building those stacks I mentioned above.

What software does is create and organize those early stacks in more and more complicated ways, until you reach a level where your computer has the GUI and everything else you have today.

BTW, it wasn’t until my 3rd PC, that I had an early Windows GUI.

Anonymous 0 Comments

Sebastian Lague has a wonderful series on YouTube titled [Exploring how Computers Work](https://www.youtube.com/watch?v=QZwneRb-zqA&list=PLFt_AvWsXl0dPhqVsKt1Ni_46ARyiCGSq) that starts at the logic gate level and works its way up to more complex computing concepts. It’s only four videos long now, but he regularly expands it. On top of that, he has made and released a piece of software called [Digital Logic Sim](https://github.com/SebLague/Digital-Logic-Sim) that lets you tinker with the logical concepts that make computer logic possible. I highly recommend both, as they work together in tandem.

Anonymous 0 Comments

There's an episode of "3 Body Problem" where they explain logic gates quite well. Put a million people with flags into a square. They've each been told an instruction like: if the person north of you shows their flag, show your flag; or, if the people to your left and south show theirs, don't show your flag. Then you tell the person at the bottom right to show their flag, and everyone follows suit depending on their instruction. That's one clock cycle.
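You can sketch that scene in a few lines of Python. Here each person's only rule is "show your flag if the person before you shows theirs," and one pass of the loop is one clock cycle:

```python
# A line of flag-people: raise the first flag, then let everyone apply
# their rule once per clock cycle and watch the signal ripple through.
people = [False] * 5      # nobody is showing a flag yet
people[0] = True          # tell the first person: show your flag

for cycle in range(len(people) - 1):         # one clock cycle per pass
    for i in range(len(people) - 1, 0, -1):  # back to front: one step per cycle
        people[i] = people[i - 1]            # each person applies their rule

print(people)  # [True, True, True, True, True]
```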

Then all the sand flows out of the bath … backwards. #hhgttg

Anonymous 0 Comments

There is a whole branch of computer science (which has existed long before digital computers, in fact) called theoretical computer science and it’s concerned with figuring out what types of problems can be computed by different theoretical machines.

Alan Turing came up with a theoretical machine which is capable of solving the most general class of problems that is possible to solve as far as we know, which we now call the Turing machine. A Turing machine consists of an infinite tape (like a movie tape, think of an infinite reel of paper with boxes on it) and a machine that can travel along that tape and be programmed to do a series of actions: it can read what is in the box it’s currently on top of, and based on what it reads, it can either choose to move left, move right, or write a new symbol into the box. Using just a machine as simple as that, you could theoretically compute any problem that a modern computer can!
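A Turing machine is simple enough that you can sketch one in a few lines of Python. This toy example flips every bit on its tape and then halts (the tape here is finite, which is the one liberty taken):

```python
# A tiny Turing machine: a tape, a head, and a rule table.
tape = list("10110") + ["_"]          # "_" marks a blank cell
head = 0
state = "flip"

# Rules: (state, symbol read) -> (symbol to write, move, next state)
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),
}

while state != "halt":
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))  # 01001_
```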

Of course, real computers are much more complex because they have to contend with the real life limitations of physics and time, but they have the same basic components: they have memory (like the infinite tape) and they have a machine which operates on that memory called the CPU.

Memory, in its basic form, can be anything as long as you can write data to it and read it back later. Computer memory accomplishes this by using billions and billions of little electrical components called transistors, which when powered one way, store/release an electric charge (AKA writing) and when powered another way, they let the signal through only if they are currently storing a charge (AKA reading). Transistors can only store an on or off state, which is why computers encode their data in binary, a number system with only two possible states for each digit.
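Here's a toy model of that in Python. The class is invented for illustration; real cells are transistor circuits, not Python objects, but the write-then-read-back idea is the same:

```python
# Memory at its most basic: a cell you can write to one way and read back
# later. Billions of cells, each holding one on/off state, make up real memory.
class BitCell:
    def __init__(self):
        self.charge = 0           # off by default

    def write(self, bit):         # "power it one way": store a charge
        self.charge = bit

    def read(self):               # "power it the other way": sense the charge
        return self.charge

# Eight cells hold one byte; the pattern of on/off states is the data.
cells = [BitCell() for _ in range(8)]
for cell, bit in zip(cells, "01000001"):  # binary for the number 65
    cell.write(int(bit))
print([c.read() for c in cells])  # [0, 1, 0, 0, 0, 0, 0, 1]
```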

The CPU is a bit more complicated, so I'll be simplifying heavily here, but essentially every CPU comes with a pre-defined set of basic instructions that it can do, such as add two numbers or read a number from some address in memory. For every one of those instructions, there is a circuit inside the CPU that performs it, and the basic building blocks of those circuits are called logic gates. Logic gates are electric components which take in one or more electric signals and output some logical combination of those signals. For example, an AND gate will output ON only if input 1 AND input 2 are both ON. An OR gate will output ON if either input 1 OR input 2 is ON. It may seem crazy, but using just simple logic like this, it is possible to make circuits that do advanced stuff like add two binary numbers together. If you want to see how it's done, you can look up logic diagrams for addition. The important thing about the CPU is that it can read and execute instructions from memory, which means that you can write programs for it, something you can't do with simpler, single-purpose electronics like scientific calculators or vending machines.
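Here's one of those logic diagrams for addition written out in Python, using gates as functions. This is the standard half-adder/full-adder construction, simplified:

```python
# Adding two bits using nothing but logic gates:
# XOR gives the sum bit, AND gives the carry.
def and_gate(a, b): return a & b
def or_gate(a, b):  return a | b
def xor_gate(a, b): return a ^ b

def half_adder(a, b):
    return xor_gate(a, b), and_gate(a, b)      # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_gate(c1, c2)                 # (sum, carry_out)

# 1 + 1 + 0 = binary 10: sum bit 0, carry bit 1
print(full_adder(1, 1, 0))  # (0, 1)
```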

Anyway, that’s a very simplified overview of what’s going on inside a computer. There are many complications to this in reality that we have made in order to make computers faster and more resilient to damage. If you’re still wondering how all of this turns into apps that you can see on your screen and click on with your mouse, that’s really just a matter of translating your mouse button clicks into electric signals for the CPU and then taking the output of the CPU and translating that into patterns of pixels lighting up on your screen.

Anonymous 0 Comments

So, all the metal lines on the silicon wafer are basically wires, and electric current flows through them. At one spot, the CPU, the current is switched either on or off (binary: 1 or 0) in super-fast succession (or, in multi-core CPUs, by multiple super-fast switches working together). The on-off pattern is interpreted by other parts of the computer, with sections of that data grouped into meaningful chunks (e.g., 100101010 might mean green on the display) that tell each component what to do.
The software does that interpreting. It's just patterns of electricity/no electricity that tell the CPU how to route those signals and tell every other part what they mean. It's usually stored on a hard drive, but firmware can also be component-specific.
Imagine a million 1/0 digits all lined up, with a software program on the hard drive telling each part of the computer what each section of code means and thus what to do with it. Each pixel on your screen gets a value: electricity switched by the CPU, with the software routing that value to the monitor and saying this pixel is green, this one is blue, this one is white, and so on. That's how a graphical user interface is made, and it's a convenient way to represent those gajillions of different instructions flying around at super-fast speeds.

Anonymous 0 Comments

Let's build from the bottom up. A diode is a section of silicon with two impurities added: one with extra electrons and one with a lack of electrons. This causes a region in the middle where the extra electrons flow into the area lacking them. The uneven charge makes a kind of gap. In one direction, electricity can easily jump over this gap, but in the other direction, the electricity pulls the gap further apart and it can't.

Slap two diodes together and you get a transistor. This allows electricity to flow from one end to the other, but only if the middle section has electricity applied. Now you have a switch. Put two together and you have an "and" gate: electricity can only flow if both middle parts have electricity applied.

Now you add a whole bunch of similar things together and we get to the high-level logic. The computer has registers, each full of 1s and 0s. These determine which logic gates get activated and which values are fed into them. There's a "program counter" that starts at the first entry of the program, then walks through the entries one at a time. Sometimes it jumps around, if the value in an entry tells it to go to some other entry.

Any further explanation would take hours, but that's the basic idea. You load a list with on/off bits. That list determines which circuits get turned on and which bits get connected to those circuits. Do that like a billion times and you get a computer.
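Here's a toy sketch of that last part in Python, with an invented three-instruction machine. The program counter walks the list, and a jump sends it back:

```python
# Each entry in the list selects a "circuit" to activate and a value to feed it.
program = [
    ("LOAD", 0),   # put 0 in the accumulator
    ("ADD",  1),   # add 1 to it
    ("JUMP", 1),   # jump back to entry 1
]

accumulator = 0
pc = 0                                 # the program counter
for _ in range(7):                     # a few clock ticks
    op, value = program[pc]
    if op == "LOAD":
        accumulator = value
        pc += 1
    elif op == "ADD":
        accumulator += value
        pc += 1
    elif op == "JUMP":
        pc = value                     # sometimes it jumps around

print(accumulator)  # 3: the ADD entry ran three times in those seven ticks
```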