The first big computers did not execute code from storage the way modern ones do. The very early ones were "hard-wired," meaning the hardware was built to do only one thing, no code required; you programmed it by connecting relays together in specific ways. Later machines used long strips of cardboard with holes punched in them that were fed through a reader. The presence of a hole in the punch card was detected by the reader and caused the computer to do something. These were also limited in what they could do. For instance, you could have a computer that could do nothing but solve systems of equations and was hard-wired to do that; you would then use punch cards to feed it the numbers you wanted it to solve.
Newer processors execute something we call "assembly code," which is sort of a modern version of the punch cards. The instructions are small, simple operations: add two numbers together, compare two numbers to see which is bigger, take this number and add 1 to it, jump to this particular spot in the code, and so on.
Code in Python or C++ isn't really executed by the CPU; it's a human-readable translation of assembly code. What a compiler does is take the source text and convert it to assembly, which is directly executable by the CPU but quite difficult to write by hand. For instance, making the computer do something simple like "if a > b then b = b + 1" in assembly requires many lines of code that can't be understood at a quick glance the way a Python program can. Assembly programs end up as a whole lot of small, detailed lines instead of fewer, more concise ones.
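To make that concrete, here is the one-line Python version next to the same logic spelled out one tiny step at a time. The mnemonics in the comments are invented for illustration, not any real CPU's instruction set:

```python
a, b = 7, 3

# High-level version: one readable line.
b_high = b
if a > b_high:
    b_high = b_high + 1

# Roughly the same logic as tiny CPU-style steps
# (hypothetical mnemonics in the comments):
R1 = a           # LOAD  R1, a   ; fetch a into a register
R2 = b           # LOAD  R2, b   ; fetch b into a register
if R1 > R2:      # CMP   R1, R2  ; compare, branch if not greater
    R2 = R2 + 1  #   ADD R2, 1   ; increment the register
b_low = R2       # STORE b, R2   ; write the register back to memory

print(b_high, b_low)  # both versions give 4
```

Five fiddly steps for one readable line, which is exactly why nobody wants to write big programs this way.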
What happened is that people figured out easier ways for humans to tell computers what they want them to do, such as the languages we all know and love today. They then wrote compilers in assembly code that could take those new languages and translate them into assembly, which allows for the complex programs we see today.
Assembly
Assembly can translate directly to signals received by the chip.
This can be as simple as flipping switches for the values. When the signals match a certain value, the chip performs a certain function.
What the chip does exactly depends on the instruction set architecture and the implementation.
Historically, although it may seem odd, “code” or algorithms existed prior to modern electronic computers. The concept behind coding came from mathematics and logic. In a sense, computers were designed around ideas already established in programming. Code existed before computers.
The raw form of computer instructions (commands) is simply binary numbers stored in a particular sequence. This is called "machine language" because a computer will take these numbers and 'run' the instructions they represent. Much of it is very basic stuff like "get this data from this location," "add the number stored here to the number stored there and put the result somewhere else," "store this number in this location," and so on.
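A sketch of that idea: a toy machine whose whole program is just a flat list of numbers, read a few at a time. The opcodes here are invented for illustration; real instruction sets encode things differently:

```python
memory = [0] * 8

# Program: nothing but numbers, read three at a time.
# opcode 1: mem[a] = b               ("store this number in this location")
# opcode 2: mem[a] = mem[a] + mem[b] ("add the number here to the number there")
program = [
    1, 0, 5,   # mem[0] = 5
    1, 1, 7,   # mem[1] = 7
    2, 0, 1,   # mem[0] = mem[0] + mem[1]
]

pc = 0  # program counter: which number we are looking at
while pc < len(program):
    opcode, a, b = program[pc], program[pc + 1], program[pc + 2]
    if opcode == 1:
        memory[a] = b
    elif opcode == 2:
        memory[a] += memory[b]
    pc += 3

print(memory[0])  # 12
```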
It is possible to manually write programs in machine language, as long as the writer translates the desired actions into these numbers, but this is laborious and error-prone, so only simple programs are feasible. This is the most basic form of programming.
The next form of electronic computer programming is called "assembly language." Basically, a program is written that converts simple human-readable text into machine language. This is pretty much a one-to-one translation, meaning a "load register A" command is translated to "0010" or similar. It is one step above writing in machine language, a bit easier to read and debug, but still very tedious.
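Because the translation is one-to-one, an assembler can be little more than a lookup table. A minimal sketch, with mnemonics and opcode numbers invented for illustration:

```python
# Each mnemonic maps one-to-one to an opcode number.
# (Invented mnemonics and values, not a real instruction set.)
OPCODES = {"STORE": 1, "ADD": 2}

def assemble(lines):
    """Translate 'STORE 0 5'-style text into a flat list of numbers."""
    machine_code = []
    for line in lines:
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])       # look up the opcode
        machine_code.extend(int(x) for x in operands)  # copy the operands
    return machine_code

source = [
    "STORE 0 5",
    "STORE 1 7",
    "ADD 0 1",
]
print(assemble(source))  # [1, 0, 5, 1, 1, 7, 2, 0, 1]
```

The text is easier for a human to read and debug, but each line still becomes exactly one machine instruction.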
From assembly, it then becomes feasible to write compilers for higher-level languages. A compiler is another program, designed to "translate" a specific language into machine code. These are the more familiar languages like BASIC, FORTRAN, C, and so on. The compilers are just sophisticated translators, but each one handles only its particular language.
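The key difference from an assembler is that one line of the higher-level language can expand into several machine instructions. A micro-"compiler" for a single statement shape, with the syntax and output mnemonics invented for illustration (real compilers handle whole languages and do vastly more):

```python
import re

def compile_stmt(stmt):
    """Translate '<var> = <var> + <number>' into assembly-style lines."""
    m = re.fullmatch(r"(\w+) = \1 \+ (\d+)", stmt)
    if not m:
        raise ValueError("only '<var> = <var> + <n>' is supported")
    var, n = m.group(1), m.group(2)
    return [
        f"LOAD R1, {var}",   # fetch the variable into a register
        f"ADD  R1, {n}",     # do the arithmetic
        f"STORE {var}, R1",  # write the result back
    ]

for line in compile_stmt("b = b + 1"):
    print(line)
```

One concise source line becomes three low-level steps, which is the whole point of a higher-level language.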
Computers are machines, and machines do the thing they are designed to do when you press a button.
Computers are machines which can do many different things, so they have a lot of buttons connected to all those different things.
We never 'taught' computers to read code; rather, we just wired all those buttons together and built a system in which we could write down all the button presses in order so that the computer does a more complicated thing, like those pianos that play themselves.
Then we built a computer for which one of the things it could do was interact with that button-reading system, so there's now a button for "go back five buttons" and a button for "if something looks like this, skip the next button," among others.
Now we can write complex code, and we never taught the computer anything; we simply designed it with built-in commands.
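The button-list idea above, including the "go back" and "skip the next button" buttons, can be sketched as a tiny loop. The button names are invented for illustration:

```python
# A written-down list of button presses, player-piano style.
presses = [
    ("SET", 0),      # counter = 0
    ("ADD", 1),      # counter += 1
    ("SKIP_IF", 3),  # if the counter looks like 3, skip the next button
    ("BACK", 2),     # otherwise go back 2 buttons (to the ADD)
    ("SHOW",),       # print the counter
]

counter = 0
i = 0  # which button we are pressing
while i < len(presses):
    button = presses[i]
    if button[0] == "SET":
        counter = button[1]
    elif button[0] == "ADD":
        counter += button[1]
    elif button[0] == "SKIP_IF":
        if counter == button[1]:
            i += 1          # skip the next button
    elif button[0] == "BACK":
        i -= button[1] + 1  # the +1 cancels the i += 1 below
    elif button[0] == "SHOW":
        print(counter)      # prints 3
    i += 1
```

With just those two "control" buttons, the list of presses can loop and make decisions, which is all a program really is.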