Programming code tells the computer to do something, but what makes the code actually mean anything to the computer?


In: Engineering

36 Answers

Anonymous 0 Comments

A computer is just a machine. It doesn’t really understand what it has been instructed to do. It just follows a very long and complicated list of primitive commands, expressed as software.

Deep down, a computer can only do a very few things. Set a bit to 0. Set a bit to 1. Check the current value of a bit: 0 or 1.

The most primitive computer commands are IF, AND, OR, and XOR.

A programmer can instruct the computer to test the value of a bit using IF. IF the bit is a 1 (which we generally assign the value TRUE or ON), the computer will execute the next block of commands. IF the bit is 0, it will skip to the next command block.

The AND command compares two bits, and depending on their values will set a third bit to either 0 or 1. Binary computers are built so that

1 AND 1 = 1,
0 AND 1 = 0,
1 AND 0 = 0,
0 AND 0 = 0

Similarly, the OR and XOR commands will compare two bits and set the value of a third… but based on a different mapping.
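In Python terms, a quick sketch of those primitives (using Python’s bitwise operators to stand in for the hardware gates):

```python
# AND, OR, and XOR on single bits, written with Python's bitwise operators
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {a & b}   {a} OR {b} = {a | b}   {a} XOR {b} = {a ^ b}")

# IF: test a bit and run the next block of commands only when it is 1 (TRUE/ON)
bit = 1
if bit == 1:
    print("the bit is ON, so this block runs")
```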

Engineers have decided how they want to encode numbers and letters into long sequences of 0s and 1s. By convention (and there are different encoding practices), we generally represent the number 0 as 0000 0000, the number 1 as 0000 0001, the number 2 as 0000 0010, and so on. This is binary encoding.
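A quick Python sketch of that encoding:

```python
# Eight-bit binary encoding of the first few numbers
for n in range(4):
    print(n, "->", format(n, "08b"))
# 0 -> 00000000
# 1 -> 00000001
# 2 -> 00000010
# 3 -> 00000011
```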

As it turns out, once a computer knows how to execute IF, AND, OR, and XOR, it can be taught to perform Addition by combining each bit from two encoded numbers according to a sequence of primitive commands.

For Addition, this series of primitive commands is not so different from “1 plus 1 equals 0, carry the 1…”. Adding two eight-bit numbers can probably be expressed using a dozen primitive commands, maybe fewer if the programmer is very clever in writing the most efficient algorithm.
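Here is a rough Python sketch of such an adder, using nothing but AND (`&`), OR (`|`), and XOR (`^`) on individual bits; the shifting is just bookkeeping to pick each bit out:

```python
def add_8bit(a, b):
    result = 0
    carry = 0
    for i in range(8):                  # one step per bit, least significant first
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i                # XOR gives the sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))   # "carry the 1"
    return result                       # a ninth-bit carry is dropped, as in 8-bit hardware

print(add_8bit(1, 1))      # 2 -- "1 plus 1 equals 0, carry the 1"
print(add_8bit(200, 100))  # 44 -- the sum wraps around, just like an 8-bit register
```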

The computer doesn’t really know that the bits it is setting have any special meaning to us. It is just flipping switches on and off according to the commands it has received.

Of course, once it has been taught to perform Addition, it can be taught to perform Multiplication by doing Addition in a loop, through computational brute force. The same goes for Exponentiation, as Multiplication in a loop.
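A Python sketch of that brute-force layering, each operation built from the one below it:

```python
def multiply(a, b):
    total = 0
    for _ in range(b):        # Multiplication is just Addition in a loop
        total = total + a
    return total

def power(a, b):
    total = 1
    for _ in range(b):        # Exponentiation is just Multiplication in a loop
        total = multiply(total, a)
    return total

print(multiply(6, 7))   # 42
print(power(2, 10))     # 1024
```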

Modern software languages no longer require a programmer to deal with bitwise commands. All the primitive functions have been written into the hardware, the operating system, and the software compilers.

Programmers these days just say “Multiply A times B”, and the compilers convert that abstract intention into the very long series of machine instructions that deal with flipping the right switches to 0 and 1, in the right manner, and then sending the resulting value to output.

Anonymous 0 Comments

The code gets converted into another kind of code called machine code (often by way of a human-readable form called assembly language). The computer sends this code as inputs to electrical circuits. The circuits “understand” the code in the sense that the electrical inputs cause electricity to flow through different pathways of the circuit, causing different things to happen.

For example, it might cause one pixel to light up blue or another pixel to light up white. Or it might store values in memory (such as RAM), which lets your programs keep track of what’s going on. Or the electrical signals might cause ripples in the electromagnetic field, called radio waves, that send information to and from your Wi-Fi router.

Anonymous 0 Comments

A computer’s brain, the central processing unit (CPU), is a machine that can perform a limited number of operations. Collectively these operations are called “machine code” and include things like reading from and writing to memory (RAM), adding or subtracting numbers, comparing numbers to see if they’re equal, and running instructions from a particular place in memory.

For example, a simple CPU might read instructions from memory and map instruction number 1 to the add operation, number 2 to subtraction, number 3 to comparing two numbers, and number 4 to branch / run future instructions based on the results of a comparison. Since raw numbers don’t mean very much to humans, we assign names to the operations to make them easier to read. This is called “assembly code” and typically you’ll see operations like MOV, ADD, SUB, BRA.
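A Python sketch of that mapping, using the example numbers above:

```python
# The instruction numbers from the example, given human-readable names
MNEMONICS = {1: "ADD", 2: "SUB", 3: "CMP", 4: "BRA"}

machine_code = [1, 3, 4]                     # raw numbers, as the CPU sees them
for opcode in machine_code:
    print(opcode, "->", MNEMONICS[opcode])   # the assembly-code view
```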

Writing programs in assembly code is still pretty tedious, so some smart people came up with newer languages that are more readable to humans along with a special program called a compiler to convert the human-readable text into machine code. That way programmers can use familiar mathematical notation like `let x = 10` or `let y = x + z` rather than several lines of assembly code.
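As a toy illustration, here is a Python sketch of a “compiler” for one such line; the MOV/ADD mnemonics follow the style mentioned above, but the register name R0 and the output format are invented:

```python
# Turn one line like "y = x + z" into assembly-like instructions
def compile_line(line):
    dest, expr = line.split("=")
    a, b = expr.split("+")
    return [
        f"MOV R0, {a.strip()}",      # load the first value into a register
        f"ADD R0, {b.strip()}",      # add the second value to it
        f"MOV {dest.strip()}, R0",   # store the result
    ]

for instruction in compile_line("y = x + z"):
    print(instruction)
```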

It’s important to note that the computer doesn’t really understand anything like a sentient being. It simply performs whatever operations are fed into it very quickly. Everything builds from that premise.

One of my favorite professors walked into my Digital Logic class the first day carrying a 5 pound metal object called a “flip flop”, an early form of computer storage. He slammed it on the desk to get our attention. “This is a flip flop”, he said. “It stores one bit of information.” He then took a tiny CPU out of his breast pocket. “This is a Motorola 68000,” he continued. “It contains 30,000 flip flops. We’re going to learn how it works.” And we did.

Anonymous 0 Comments

A machine doesn’t “understand” anything. It just ends up being a series of circuits, gates, and switches that go on and off in whatever sequence we want. And we in turn create “meaningful” states to put the machine in. That’s as short an answer as I can think of.

Anonymous 0 Comments

At its most basic, a computer is just a machine with an on/off switch. 1 is the command to turn it on and 0 is the command to turn it off.

Modern computers are a complex assembly of bajillions of such machines, which can do a lot of different stuff based on the right on/off configuration.

The most direct way for humans to configure a computer to do their bidding is by feeding it the right string of on/off configurations as a bunch of 1s and 0s. This is what is known as machine language.

But due to the sheer complexity and huge number of machines involved in a modern computer, this is neither efficient nor human-friendly, so we developed a shortcut.

We assigned a bunch of frequently used on/off (0/1) configurations to words we can recognize. So, for example, 0100110101010 010110010 1010001011 1010111010 might be the configuration needed to display a simple apple on the screen, based on which pixels should be turned on and which should be turned off.

We are going to need that a lot, and we don’t want to waste time typing that long string of 1s and 0s each time, so we automate it by saving the whole string as “draw apple”. Nicely short, understandable, and easy to remember. This process of translating human words into 0s and 1s is called compiling. The next time we want an apple displayed, we just type “draw apple” and the compiler automatically transforms it into a bunch of 0s and 1s and sets the computer to the desired state.
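In Python terms, the whole idea is little more than a lookup table (the bit string here is the made-up one from above):

```python
# A toy "compiler": human words looked up and replaced by saved bit patterns
SHORTCUTS = {
    "draw apple": "0100110101010 010110010 1010001011 1010111010",
}

def compile_command(command):
    return SHORTCUTS[command]    # swap the word for the saved configuration

print(compile_command("draw apple"))
```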

Over time we managed to assemble a vast collection of such commands, enabling us to create more and more complex configurations in a relatively short and understandable way. This is the programming code we end up with.

Anonymous 0 Comments

At the lowest level, the CPU chip has a few storage slots, called registers (each typically holds one number whose length in bits matches the width the CPU is designed to handle), and a bunch of simple operations it knows how to do involving the registers. The operations are somewhat arbitrarily assigned number codes. So, for example, your CPU might have operations like:

001: Add a number n to register X and store it in register X

002: Put a number n in register X

003: Copy register X to register Y

004: Read memory location z to register X

005: Decrement register X

006: Skip the next instruction if register X is zero

And so on. These commands are based on operations that can be performed using hardware logic, such as AND, OR, and XOR, and by turning dedicated circuits in the CPU on and off. To run a program, we just feed the CPU a list of these operation codes and data to use in the operations. For example, a program might be:

002 008

005

001 005

Which, based on the instructions earlier, would put the number 8 in the X register, then decrement it to 7, then add 5. After the program is complete, if we were to read the value of the X register, it should be 12.
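Here is a minimal Python sketch of that toy CPU, implementing just the three opcodes the example program uses:

```python
def run(program):
    x = 0                              # register X
    i = 0
    while i < len(program):
        op = program[i]
        if op == 2:                    # 002: put a number n in register X
            x = program[i + 1]
            i += 2
        elif op == 1:                  # 001: add a number n to register X
            x = x + program[i + 1]
            i += 2
        elif op == 5:                  # 005: decrement register X
            x = x - 1
            i += 1
        else:
            raise ValueError(f"unknown opcode {op}")
    return x

print(run([2, 8, 5, 1, 5]))            # the example program above: prints 12
```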

Because the CPU operations are usually very simple, we use higher-level languages and compilers to convert easier-to-use commands into these very simple instructions.

I also recommend Ben Eater’s 8-bit computer videos if you want to understand it more deeply.

Anonymous 0 Comments

At its heart, you can think of a computer like a programmable calculator. It can add, subtract, tell if two numbers are the same, bigger, or smaller, and do different calculations depending on the result (if bigger, do A; if smaller, do B; and the calculations continue). More complicated operations can be built from these simple ones. We humans have devised ways of describing behavior and translating it into those calculations. This is basically a programming language.

It would be tough for a five-year-old, but not impossible, to just read one of these instructions for the computer and figure out what it does. Some of them are pretty much the same as you’d see in code. “C = A + B” could be “add C, A, B”, where C is the result and A and B are the values to add. This is then turned into a number that the computer understands.
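To make “turned into a number” concrete, here is a hedged sketch: the opcode number and field layout are invented, but real CPUs pack instructions into numbers in much this way:

```python
OPCODES = {"add": 1}                   # invented opcode number

def encode(op, dest, src1, src2):
    # Pack four 8-bit fields into one 32-bit number (an assumed layout)
    return (OPCODES[op] << 24) | (dest << 16) | (src1 << 8) | src2

# "add C, A, B" with C, A, B standing for register numbers 2, 0, 1
print(hex(encode("add", 2, 0, 1)))     # 0x1020001
```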

How the computer understands this number isn’t exactly easy for me to explain. It’s quite complicated, even though the math is often very easy. The principle isn’t that different from what was previously described, though. We’ve figured out how to use electricity to do basic math, and that number I mentioned is turned into that math. I’m too tired to explain from there, but that’s essentially how it works.

Anonymous 0 Comments

I used to ask this question all the time, and the answer was just “the compiler”. How does the compiler work? No one could answer that one.

Anonymous 0 Comments

At the human end we have high-level languages, which are easy for us to understand and write but which need to be interpreted or compiled to machine code before the computer can run them. Interpreted languages are slower but more immediate, meaning you can run them straight from the source code (classically BASIC, more recently Python), whereas compiled languages are faster but need to go through a compilation step to be turned into executable code.

The machine code is a sequence of instructions that the CPU understands, and it is typically different for each type of CPU. For example, an Intel processor can’t run the same code as an ARM processor, even though the source code can be the same; the compiler turns that source into binary machine code for the specific type of CPU. At the CPU level itself there are registers (the CPU’s own memory) and instruction decoders, which then execute the binary code.

A recent wrinkle is the Java Virtual Machine (JVM). Java source code is compiled into a special byte code that the JVM understands, and the JVM then issues CPU-specific code. So while Java source code is compiled, it is compiled to an intermediate form designed to run on the JVM; you then just need a JVM compiled for your specific CPU. This means languages like Java and their ilk can produce code that runs almost as fast as natively compiled code but will run on any CPU that has their virtual machine.
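Python (mentioned in an answer above) uses the same trick on a smaller scale: its source is compiled to byte code that Python’s own virtual machine executes, and the standard library lets you inspect it:

```python
import dis

def add(a, b):
    return a + b

dis.dis(add)    # prints the byte-code instructions Python's VM will execute
```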
