Programming code tells the computer to do something, but what makes the code actually mean anything to the computer?

In: Engineering

36 Answers

Anonymous 0 Comments

At the lowest level, the CPU chip has a few storage slots, called registers (each typically holding one number whose binary length equals the number of bits the CPU is designed to handle), and a bunch of simple operations it knows how to do involving those registers. The operations are somewhat arbitrarily assigned number codes. So for example, your CPU might have operations like:

001: Add a number n to register X and store it in register X

002: Put a number n in register X

003: Copy register X to register Y

004: Read memory location z to register X

005: Decrement register X

006: Skip the next instruction if register X is zero

And so on. These commands are based on operations that can be performed using hardware logic, such as AND, OR, and XOR, and by turning dedicated circuits in the CPU on and off. To run a program, we just feed the CPU a list of these operation codes and the data to use in the operations. For example, a program might be:

002 008

005

001 005

Which, based on the instructions earlier, would put the number 8 in the X register, then decrement it to 7, then add 5. After the program is complete, if we were to read the value of the X register, it should be 12.

Because the CPU operations are usually very simple, we use higher-level languages and compilers to convert easier-to-use commands into these very simple instructions.
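The toy instruction set above can be sketched as a small interpreter. The opcode numbers (001–006) and the three-instruction example program come from the answer; the dictionary-based registers and the tuple encoding of instructions are illustrative assumptions, not how real hardware stores them.

```python
# A minimal sketch of the toy CPU described above.
def run(program):
    regs = {"X": 0, "Y": 0}
    memory = {}          # hypothetical flat memory for opcode 004
    pc = 0               # program counter: index into the instruction list
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == 1:      # 001: add n to register X
            regs["X"] += args[0]
        elif op == 2:    # 002: put n in register X
            regs["X"] = args[0]
        elif op == 3:    # 003: copy register X to register Y
            regs["Y"] = regs["X"]
        elif op == 4:    # 004: read memory location z into register X
            regs["X"] = memory.get(args[0], 0)
        elif op == 5:    # 005: decrement register X
            regs["X"] -= 1
        elif op == 6:    # 006: skip next instruction if X is zero
            if regs["X"] == 0:
                pc += 1
    return regs

# The example program: put 8 in X, decrement it to 7, then add 5.
final = run([(2, 8), (5,), (1, 5)])
print(final["X"])  # 12
```

A real CPU does the same dispatch in wiring rather than in an if/elif chain: the bits of the opcode select which circuit gets activated.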

I also recommend Ben Eater’s 8-bit computer videos if you want to understand it more deeply.

Anonymous 0 Comments

A computer, at the most basic level, is just a machine with an on/off switch: 1 is the command to turn it on and 0 is the command to turn it off.

Modern computers are a complex assembly of bajillions of such machines, which can do a lot of different stuff given the right on/off configuration.

The most direct way for humans to configure a computer to do their bidding is to feed it the right string of on/off settings as a bunch of 1s and 0s. This is what is known as machine language.

But due to the sheer complexity and huge numbers of machines involved in a modern computer, this is not really efficient nor human friendly so we developed a shortcut.

We mapped a bunch of frequently used on/off (0/1) configurations to words we can recognize. So for example, 0100110101010 010110010 1010001011 1010111010 might be the configuration needed to display a simple apple on the screen, based on which pixels should be turned on and which should be turned off.

We are going to need that a lot, and we don’t want to waste time typing that long string of 1s and 0s each time, so we automate it by saving the whole string as “draw apple”. Nicely short, understandable and easy to remember. This process of translating human words into 0s and 1s is called compiling. So the next time we want an apple displayed, we just type in “draw apple” and the compiler automatically transforms it into a bunch of 0s and 1s and sets the computer to the desired state.
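The “save a long bit string under a short name” idea can be sketched as a lookup table. The bit string and the name “draw apple” are the made-up example from the answer; a real compiler does far more than substitution, so treat this purely as an illustration of the shortcut.

```python
# Illustrative sketch: short human words standing in for saved 0/1 configurations.
MACROS = {
    "draw apple": "0100110101010 010110010 1010001011 1010111010",
}

def compile_command(source):
    # Look the command up and substitute its saved bit configuration.
    return MACROS[source]

print(compile_command("draw apple"))
```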

Over time we managed to assemble a vast collection of such commands, enabling us to build more and more complex configurations in a relatively short and understandable way. This is the programming code we end up with.

Anonymous 0 Comments

A machine doesn’t “understand” anything. It just ends up being a series of circuits, gates and switches that go on and off in whatever sequence we want. And we, in turn, create “meaningful” states to put the machine in. That’s about as short an answer as I can think of.

Anonymous 0 Comments

A computer’s brain, the central processing unit (CPU), is a machine that can perform a limited number of operations. Collectively these operations are called “machine code” and include things like reading from and writing to memory (RAM), adding or subtracting numbers, comparing numbers to see if they’re equal, and running instructions from a particular place in memory.

For example, a simple CPU might read instructions from memory and map instruction number 1 to the add operation, number 2 to subtraction, number 3 to comparing two numbers, and number 4 to branch / run future instructions based on the results of a comparison. Since raw numbers don’t mean very much to humans, we assign names to the operations to make them easier to read. This is called “assembly code” and typically you’ll see operations like MOV, ADD, SUB, BRA.

Writing programs in assembly code is still pretty tedious, so some smart people came up with newer languages that are more readable to humans along with a special program called a compiler to convert the human-readable text into machine code. That way programmers can use familiar mathematical notation like `let x = 10` or `let y = x + z` rather than several lines of assembly code.
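What the compiler does with a line like `let y = x + z` can be sketched as a translation into assembly-style lines. The mnemonics MOV and ADD follow the answer; the scratch register R0 and the two-operand syntax are assumptions for illustration, not any real CPU’s assembly.

```python
# Toy sketch: one readable statement becomes several assembly-style instructions.
def compile_add(dest, a, b):
    return [
        f"MOV R0, {a}",    # load the first operand into a scratch register
        f"ADD R0, {b}",    # add the second operand to it
        f"MOV {dest}, R0", # store the result in the destination
    ]

for line in compile_add("y", "x", "z"):
    print(line)
```

The point is the fan-out: one human-readable line turns into several machine steps, and an assembler then turns each mnemonic into its instruction number.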

It’s important to note that the computer doesn’t really understand anything like a sentient being. It simply performs whatever operations are fed into it very quickly. Everything builds from that premise.

One of my favorite professors walked into my Digital Logic class the first day carrying a 5 pound metal object called a “flip flop”, an early form of computer storage. He slammed it on the desk to get our attention. “This is a flip flop”, he said. “It stores one bit of information.” He then took a tiny CPU out of his breast pocket. “This is a Motorola 68000,” he continued. “It contains 30,000 flip flops. We’re going to learn how it works.” And we did.

Anonymous 0 Comments

The code gets converted into another kind of code called assembly language. The computer can send the code as inputs to electrical circuits. The circuits “understand” the code in the sense that the electrical inputs cause electricity to flow through different pathways of the circuit, causing different things to happen.

For example, it might cause one pixel to light up blue or another pixel to light up white. Or it might store stuff in memory (such as RAM) which allows it to be used to keep track of what’s going on in your programs. Or the electrical signals might cause ripples in the electromagnetic field called radio waves that send information to and from your Wi-fi router.

Anonymous 0 Comments

A computer is just a machine. It really doesn’t understand what it has been instructed to do. It just follows a very long and complicated list of primitive commands, expressed as software.

Deep down, a computer can only do a very few things. Set a bit to 0. Set a bit to 1. Check the current value of a bit: 0 or 1.

The most primitive computer commands are IF, AND, OR and XOR.

A programmer can instruct the computer to test the value of a bit using IF. IF the bit is a 1 (which we generally assign the value TRUE or ON) the computer will execute the next block of commands. IF the bit is 0 it will skip to the next command block.

The AND command compares two bits, and depending on their values will set a third bit to either 0 or 1. Binary computers are built so that

1 AND 1 = 1,
0 AND 1 = 0,
1 AND 0 = 0,
0 AND 0 = 0

Similarly, the OR and XOR commands will compare two bits and set the value of a third… but based on a different mapping.
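The AND mapping above, and the different mappings for OR and XOR, can be checked for every pair of bits using Python’s bitwise operators, which implement exactly these rules.

```python
# Every combination of two bits, run through AND (&), OR (|) and XOR (^).
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {a & b}   "
              f"{a} OR {b} = {a | b}   "
              f"{a} XOR {b} = {a ^ b}")
```

OR sets the result bit to 1 if either input is 1; XOR sets it to 1 only if the inputs differ, which is what makes it useful for addition below.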

Engineers have decided how they want to encode numbers and letters as long sequences of 0s and 1s. By convention (and there are different encoding practices), we generally represent the number 0 as 0000 0000, the number 1 as 0000 0001, the number 2 as 0000 0010, and so on… this is binary encoding.

As it turns out, once a computer knows how to execute IF, AND, OR and XOR, it can be taught to perform Addition by comparing each bit from two encoded numbers following a sequence of primitive commands.

For Addition, this series of primitive commands is not so different from “1 plus 1 equals 0, carry the 1…”. Adding two eight-bit numbers can be expressed in a few dozen primitive commands, maybe fewer if the programmer is very clever about writing an efficient algorithm.
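The “carry the 1” procedure can be built from nothing but AND, OR and XOR: one full adder per bit position, chained into an eight-bit ripple-carry adder. This is a standard construction, sketched here in Python rather than in gates.

```python
# One bit position: sum and carry from AND (&), OR (|) and XOR (^) only.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                         # sum bit for this position
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry into the next position
    return s, carry_out

# Chain eight full adders, low bit first: a ripple-carry adder.
def add8(x, y):
    carry, result = 0, 0
    for i in range(8):
        a, b = (x >> i) & 1, (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result & 0xFF                          # eight bits: overflow wraps

print(add8(8, 5))      # 13
print(add8(200, 100))  # 44  (300 wraps around at 256)
```

Counting the gates, that is five per bit position, about forty primitive operations for eight bits, which is where the “a few dozen” estimate comes from.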

The computer doesn’t really know that the bits it is setting have any special meaning to us. It is just flipping switches on and off according to the commands it has received.

Of course, once it has been taught to perform Addition, it can be taught to perform Multiplication by doing Addition in a loop, through computational brute force. The same goes for Exponentiation, via Multiplication in a loop.
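That tower of brute force is short to write down: Multiplication as Addition in a loop, and Exponentiation as Multiplication in a loop. (Real hardware and libraries use much faster algorithms; this is the conceptual version.)

```python
# Multiplication by computational brute force: add a to itself b times.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a          # repeated Addition
    return total

# Exponentiation the same way: multiply by a, n times.
def power(a, n):
    result = 1
    for _ in range(n):
        result = multiply(result, a)   # repeated Multiplication
    return result

print(multiply(7, 6))  # 42
print(power(2, 10))    # 1024
```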

Modern software languages do not require a programmer to deal with bitwise commands any more. All the primitive functions have been written into the hardware, into the operating system software, and into the software compilers.

Programmers these days just say “Multiply A times B”, and the compilers convert that abstract intention into the very long series of machine instructions that deal with flipping the right switches to 0 and 1, in the right manner, and then sending the resulting value to output.

Anonymous 0 Comments

There is a process called compiling that turns human-readable code into machine code. Machine code is a set of very basic commands stored in binary. The computer interprets the 0s and 1s of binary as high and low voltages, which act as inputs to the electrical circuits that operate the device.

Source: am a computer engineering student and Minecraft redstone engineer

Anonymous 0 Comments

Others have explained how a language (Java, C, Python, Ruby, etc.) becomes machine language, which is 0s and 1s. To go even lower without getting deep: it’s just electrical impulses in a specific order that the computer acts on, billions of times per second.

Anonymous 0 Comments

A high-level programming language such as C++ needs to be compiled for the specific hardware, first into assembly and then into machine code. Assembly is just a set of basic commands the CPU understands, each encoded as a string of 0s and 1s. Those 0s and 1s tell the hardware what to do.

To understand how 0s and 1s tell a computer what to do, imagine a train that can end up at any of 128 destinations. How you set the switch tracks determines where the train ends up, and each destination is a different instruction, such as add, subtract, multiply or divide.
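The switch-track analogy maps directly onto instruction decoding: 7 binary switches select one of 2⁷ = 128 destinations, just as the bits of an opcode select one circuit in the CPU. The particular opcode numbers below are made up for the example.

```python
# Seven switch settings pick one of up to 128 destinations (instructions).
# These opcode assignments are hypothetical.
OPCODES = {
    0b0000001: "add",
    0b0000010: "subtract",
    0b0000011: "multiply",
    0b0000100: "divide",
}

def decode(bits):
    # Treat the seven switch settings, e.g. "0000011", as a binary number.
    return OPCODES.get(int(bits, 2), "unknown")

print(decode("0000011"))  # multiply
```

In hardware the “lookup” is done by decoder circuitry rather than a table, but the effect is the same: a bit pattern routes activity down exactly one path.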

Anonymous 0 Comments

So there are basically a lot of layers:

1. Programming code is converted to binary code, a series of 1s and 0s. These represent instructions to the processor. For simplicity, say every contiguous run of 32 bits in the series is one instruction.

2. An always-running program called the kernel (the core of the operating system) loads those instructions and feeds them to the processor.

3. The processor is an electronic circuit designed so that each different 32-bit pattern it is given produces the corresponding output.

4. The electronic circuit is made up of gates which perform Boolean logic, like inverting a bit (NOT gate), checking if two bits are both 1 (AND gate), and so on.

4.1 The gates are made up of transistors.

4.2 The transistors are made from semiconductor materials.

5. Basically any algorithm can be represented using these gates, and the output is used accordingly by the operating system/kernel.