There is a whole branch of computer science (which existed long before digital computers, in fact) called theoretical computer science, and it’s concerned with figuring out what kinds of problems can be computed by different theoretical machines.
Alan Turing came up with a theoretical machine capable of solving the most general class of problems that we know how to solve, which we now call the Turing machine. A Turing machine consists of an infinite tape (like a movie reel; think of an endless strip of paper divided into boxes) and a head that travels along that tape following a programmed set of rules: it reads the symbol in the box it’s currently on, and based on what it reads, it can move left, move right, or write a new symbol into the box. Using just a machine as simple as that, you could theoretically compute any problem that a modern computer can!
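To make that concrete, here’s a minimal sketch of a Turing machine in Python. Everything here is made up for illustration: the tape is a dictionary so it can grow in either direction, the head is just a number, and the program is a table mapping (current state, symbol read) to (symbol to write, direction to move, next state).

```python
# A minimal Turing machine sketch. The tape is a dict (missing boxes
# read as blank " "), and the program is a lookup table of rules.
def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    tape = dict(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")
        write, move, state = program[(state, symbol)]
        tape[head] = write                 # write a new symbol into the box
        head += 1 if move == "R" else -1  # then move left or right
    return tape

# A made-up example program: flip every 0/1 on the tape, halt at a blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

result = run_turing_machine(flipper, {0: "1", 1: "0", 2: "1"})
print("".join(result.get(i, " ") for i in range(3)))  # -> 010
```

Even this toy version has all the pieces the theory needs: read, write, move, and a table of rules. Real Turing machine programs are just bigger tables.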
Of course, real computers are much more complex because they have to contend with the real life limitations of physics and time, but they have the same basic components: they have memory (like the infinite tape) and they have a machine which operates on that memory called the CPU.
Memory, in its basic form, can be anything as long as you can write data to it and read it back later. Computer memory accomplishes this by using billions and billions of tiny electrical components called transistors, which, when powered one way, store or release an electric charge (that’s writing), and when powered another way, let a signal through only if they are currently storing a charge (that’s reading). Transistors can only store an on or off state, which is why computers encode their data in binary, a number system with only two possible states for each digit.
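Since each transistor only holds on or off, every number has to be spelled out as a row of those two states. A small sketch of how that encoding works (the function names here are just for illustration):

```python
# Each bit is just an on/off state, like one transistor's stored charge.
def to_bits(n, width=8):
    """Write a number as a row of on/off states (most significant bit first)."""
    return [(n >> i) & 1 for i in reversed(range(width))]

def from_bits(bits):
    """Read the on/off states back into a number."""
    value = 0
    for bit in bits:
        value = value * 2 + bit  # each digit is worth twice the one after it
    return value

print(to_bits(42))             # -> [0, 0, 1, 0, 1, 0, 1, 0]
print(from_bits(to_bits(42)))  # -> 42
```

The round trip (write, then read back) is exactly what memory does, just with electric charges instead of a Python list.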
The CPU is a bit more complicated, so I’ll be simplifying heavily here, but essentially every CPU comes with a pre-defined set of basic instructions that it can do, such as adding two numbers or reading a number from some address in memory. For every one of those instructions, there is a circuit inside the CPU that performs it, and the basic building blocks of those circuits are called logic gates. Logic gates are electric components which take in one or more electric signals and output some logical combination of those signals. For example, an AND gate will output ON only if input 1 AND input 2 are both ON. An OR gate will output ON if either input 1 OR input 2 is ON. It may seem crazy, but using just simple logic like this, it is possible to make circuits that do advanced stuff like add two binary numbers together. If you want to see how it’s done, you can look up logic diagrams for addition. The important thing about the CPU is that it can read and execute instructions from memory, which means that you can write programs for it, which is something you can’t do for simpler, single-purpose electronics like scientific calculators or vending machines.
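If you’re curious what those logic diagrams for addition boil down to, here’s a sketch of the idea in Python. The gates are modeled as tiny functions, a “full adder” combines them to add one column of binary digits, and chaining full adders (a so-called ripple-carry adder) adds whole numbers, just like carrying digits in grade-school addition:

```python
# Model each logic gate as a function on 0/1 signals.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b  # ON if exactly one input is ON

def full_adder(a, b, carry_in):
    # Add one column of binary digits: returns (sum bit, carry out).
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_binary(x_bits, y_bits):
    # Chain full adders from the rightmost column leftward, passing the
    # carry along, exactly like the circuit in a ripple-carry adder.
    carry = 0
    result = []
    for a, b in zip(reversed(x_bits), reversed(y_bits)):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)  # a final carry adds one more digit
    return list(reversed(result))

# 0101 (5) + 0011 (3) = 01000 (8)
print(add_binary([0, 1, 0, 1], [0, 0, 1, 1]))  # -> [0, 1, 0, 0, 0]
```

In a real CPU these aren’t functions but physical circuits made of transistors, and the “call” happens at the speed of electricity, which is how simple on/off logic turns into arithmetic.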
Anyway, that’s a very simplified overview of what’s going on inside a computer. In reality there are many extra complications, added over the years to make computers faster and more resilient. If you’re still wondering how all of this turns into apps that you can see on your screen and click on with your mouse, that’s really just a matter of translating your mouse button clicks into electric signals for the CPU, and then taking the output of the CPU and translating that into patterns of pixels lighting up on your screen.