When we think of modern coding, we think of Python and Rust and Swift and Ruby and so on.
My question is more abstract. How exactly did computer scientists develop the original language used to tell a computer what to do and when to do it? How did they teach the computer to recognize that language?
Going even further than that, how did current languages get developed? Did they just rewrite the original computer code from scratch, or are all modern languages designed to communicate with some baseline of original code that all computers were programmed with?
At its core a computer only does one thing: it adds two numbers together really, really fast. By combining additions you can do more interesting things. If you add the same number several times you get multiplication; if you add a negative number you get subtraction; and if you add a negative to a positive, the result tells you which number is larger. By 1978 we had invented lots of clever ways to add two numbers together (as well as store and retrieve them), and that list of options, called the instruction set, became the 8086, which is pretty much still what we use today.
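To make that idea concrete, here is a minimal Python sketch (my own illustration, not how a CPU is actually wired) showing how multiplication, subtraction, and comparison can all be built out of nothing but addition:

```python
def multiply(a, b):
    # Multiplication as repeated addition: add a to a running total b times.
    result = 0
    for _ in range(b):
        result = result + a
    return result

def subtract(a, b):
    # Subtraction as adding a negative number.
    return a + (-b)

def larger(a, b):
    # Add a negative to a positive: the sign of the result
    # tells you which number was bigger.
    return a if a + (-b) > 0 else b

print(multiply(6, 7))   # 42
print(subtract(10, 3))  # 7
print(larger(5, 9))     # 9
```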
Yes, very smart people wrote code like this. Later on they added layers of interpretation to make the instructions more human-readable, then more layers to make them even more readable, then still more layers to make programmers more time-efficient. But even when you write in Python, your code is still ultimately executed as a series of those 8086 instructions (there are 117 different instructions) that the computer actually understands.
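You can even peek at one of those intermediate layers from Python itself. This small sketch uses the standard-library dis module to print the bytecode that Python compiles a function into. To be clear, bytecode is not the 8086 instruction set; it is one of the human-made layers in between, and the interpreter that runs it is itself an ordinary program built from the processor's native instructions:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions Python compiles this function into.
# The interpreter executes each of these, ultimately using the
# processor's own instruction set underneath.
dis.dis(add)
```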