The computer’s built so that coded patterns of 1s and 0s physically open and close different paths for electricity, making it do different things. These coded patterns are called “machine code.”
The details of exactly what patterns are available, and what they mean, can be different for different models, brands, or kinds of computers. A CPU manufacturer typically publishes a manual with a complete specification of the patterns.
Working directly with the coded patterns the computer actually uses is inconvenient for human programmers. It would also be more efficient if the same program could be used on multiple models / brands / kinds of computers.
So people created programs (compilers, interpreters, shells, JIT’s) that allow the computer to “understand” English-like commands. This involves a “translation” process, sort of like translating from German to Italian. (Except the computer is, well, a computer, so it expects programmers to use perfect spelling and grammar, but will happily translate a buggy or completely nonsensical program as long as it’s grammatically correct.) It can be done in a few ways:
– A compiler works like translating a novel. A long program’s translated all at once, then the result’s saved in a file that the computer can run.
– An interpreter analyzes one “sentence” (line of code) at a time, runs that one, then moves on to the next. Sort of like translating a novel out loud as you read it.
– A shell lets you type a line of code, immediately runs it, then shows you the result. Sort of like a businessman or government official visiting a foreign country who brings along a human translator to translate each sentence as they say it.
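The compiler-versus-interpreter split above can be sketched in Python itself, using its built-in `compile` and `exec` functions (this is just an illustration, not how real compilers are built):

```python
# A tiny two-line "program", stored as plain text the way a programmer typed it.
source = "x = 2 + 3\ny = x * 10"

# Compiler-style: translate the whole program up front, then run the result.
compiled = compile(source, "<example>", "exec")
variables = {}
exec(compiled, variables)
print(variables["y"])  # 50

# Interpreter-style: take one line at a time, run it, then move to the next.
variables2 = {}
for line in source.splitlines():
    exec(line, variables2)
print(variables2["y"])  # 50
```

Both routes end up in the same place; the difference is whether the translation happens all at once or sentence by sentence.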
I’m sure someone can explain it better but:
At the most basic level, a computer is just a set of transistors. They are switches that can be “on” and “off”, which are translated to 1 and 0 respectively.
What programs do is define the context for what patterns of 1s and 0s mean in different scenarios. For example: 01000001 in binary is the decimal number 65. It can also be the uppercase letter “A”. It all depends on the context provided by the “header record” in the program, which tells the computer which way to interpret that specific pattern.
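You can see the same-bits-different-meaning idea in a couple of lines of Python (a sketch; the variable names here are made up for illustration):

```python
pattern = "01000001"  # the same eight bits in both readings

# Read the bits as a number (base 2):
as_number = int(pattern, 2)
print(as_number)  # 65

# Read the very same bits as a character (via its ASCII code):
as_letter = chr(as_number)
print(as_letter)  # A
```

The bits never change; only the program's decision about how to interpret them does.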
Programming languages offer a necessary shortcut that allows programmers to write many instructions with minimal effort. If all programmers had to write binary, there wouldn’t be many of us around.
Programs are usually written in a “programming language” that is easy for people to learn and use. Then there is “machine language”, basically a string of 1s and 0s arranged in complex patterns that a computer can understand.
In between the person and the computer is a special program called the “compiler”. It takes your programming language and turns it into machine language. It’s like an interpreter: if I need to talk to somebody who speaks Russian, but I only speak English, I have to find someone who speaks both to translate what I’m saying.
Why don’t programmers all just learn machine language? Well, it’s really, really hard. And it takes a long time to say anything. So the smartest ones who can speak it come up with a programming language that the rest of us can understand. Then the first things they do are write a compiler and publish a dictionary and rules of their language.
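Python doesn’t translate all the way down to machine code, but you can peek at the lower-level “bytecode” instructions it does translate to, which gives the flavor of why nobody wants to write at that level by hand. A sketch using the standard `dis` module (the exact instruction names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# List the names of the low-level instructions the interpreter will
# actually execute for this one-line function.
ops = [instr.opname for instr in dis.Bytecode(add)]
print(ops)  # e.g. ['RESUME', 'LOAD_FAST', 'LOAD_FAST', 'BINARY_OP', 'RETURN_VALUE']
```

One short, readable line of source turns into several terse instructions; real machine code is another level more verbose than this.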
Source: am programmer.
EDIT: If you’re also curious about what the 1s and 0s mean to the computer, check out some of the excellent engineering comments.