How does coding physically work?


Like how exactly can a bunch of letters, numbers, and punctuation symbols make your computer do all kinds of things? Plus what happens inside the computer when it executes the code?

In: Technology

22 Answers

Anonymous 0 Comments

I’m gonna try to simplify this super hard:

Your computer runs on basic instructions: super simple operations from a set it knows how to use, but a set versatile enough that you can creatively do a lot of things when you combine them together right. It does this using a computational core (pretty much a calculator or adding machine) and a bank of limited memory, where, time after time, it reads something in, makes a calculation, then stores the result. Sometimes it pulls a value from further away in memory, or stores it in a specific location, using a network of electronic components (a bachelor’s in computer science or engineering should give you *half* of an idea of what all is required here).

When a human writes code, they use a sort of agreed-upon “language”. It has sets of instructions and conventions that follow VERY EXACT rules, and the human feeds their code into a compiler or interpreter, which then takes this human-written code and translates it into many, many steps of super simple operations that achieve the same thing, in instructions readable by the machine described above.

So where a human might write “print ‘x'”, the computer might get instructions to load memory addresses, update one in an output buffer, send further instructions for related operations, move on to the next instruction address, etc.
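
As a rough sketch (the “instructions” below are made up for illustration, not any real CPU’s or compiler’s), here is how one human-readable line might expand into a handful of simpler steps that a dumb loop can execute:

```python
# Made-up micro-instructions a toy translator might emit for: print("x")
program = [
    ("LOAD_CONST", "x"),     # fetch the value to print
    ("WRITE_OUTPUT", None),  # copy it into an output buffer
]

output_buffer = []
value = None
for op, arg in program:      # read instruction after instruction
    if op == "LOAD_CONST":
        value = arg                      # load a value
    elif op == "WRITE_OUTPUT":
        output_buffer.append(value)      # store it where the screen can see it

print(output_buffer)  # ['x']
```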

Anonymous 0 Comments

The letters, numbers, and punctuation (syntax) must first somehow be translated into a binary representation which your CPU is hard-wired to react to. This is the “machine code” or binary instructions that the CPU runs on. There are a great many approaches you can take to get from “syntax” to “machine code” (see compilers, interpreters, etc.).
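
If you want to see one such intermediate form for yourself, Python’s built-in dis module prints the bytecode instructions its interpreter actually runs (bytecode is not the CPU’s machine code, but it shows the same idea of breaking syntax down into simple numbered operations):

```python
import dis

def greet():
    print("hello")

# Disassemble the function into the simple instructions the interpreter executes.
dis.dis(greet)
```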

Anonymous 0 Comments

I will try to share how I understand it, but to the more knowledgeable people: please fix where I am wrong.

Basically, programming languages are at different levels. The one you are asking about is at the high level. You write code using defined rules, and those defined rules are built on a lower-level logic or programming language. Those defined “rules” are built on even lower-level rules, and so on until you get down to ones and zeros.

E.g. you write code to multiply 2 times 2, but you don’t need to write the function that actually does the calculation, as it is already defined; you may just say something like multiply(2, 2).
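
As a rough sketch of that layering (just an illustration in Python, not how any particular language really implements it), a multiply function one level down might be built from repeated addition, and the addition itself is handled by circuitry even further down:

```python
def multiply(a, b):
    # One layer lower: multiplication expressed as repeated addition.
    result = 0
    for _ in range(b):
        result += a   # '+' is in turn implemented by lower-level hardware
    return result

print(multiply(2, 2))  # 4
```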

I did my best here; I am not a developer, so this may be off. But I hope it gives you at least some idea.

Anonymous 0 Comments

A switch on a wall turns on the light bulb if you flip it up, and it turns off the light when you flip it down.

A little while ago, we figured out how to make a switch that’s controlled by electrical signals instead of your hand.

Turns out, when you put a bunch of them together cleverly, we can make just a small set of manual switches (that you turn on and off by hand) control a whole bunch of other switches and essentially do math, like adding numbers.

And we also figured out that we can wire a bunch of those together to do even more complex math and decision making.

That’s today’s computer hardware.

Now, this hardware (a very complex set of interconnected switches) can be built such that when the manual switches we control are set a certain way, it does something special, like turning on a pattern of lights (we call this a screen), or setting some other switches to represent the result of a mathematical calculation.

For example, if switches 3 and 4 are turned on, the hardware might be built so switch 7 turns on to indicate the sum of two numbers.
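
As a hedged sketch (simulating the idea in Python rather than wiring real switches), here is how two one-bit inputs can be combined by gate-like rules to produce a sum bit and a carry bit:

```python
def half_adder(a, b):
    # a and b are single bits (0 or 1), standing in for two input switches.
    sum_bit = a ^ b   # XOR: on when exactly one input is on
    carry = a & b     # AND: on only when both inputs are on
    return sum_bit, carry

print(half_adder(1, 1))  # (0, 1): in binary, 1 + 1 = 10
```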

Put a bunch of these together, and we can come up with a series of switch positions (program and data input) that result in a certain pattern of other switch settings and light patterns appearing (output).

Originally, we programmed this hardware using only the switch positions. Those would be represented as 0s and 1s (binary machine language). We called this software.

Soon enough, we realized that we can also build software that turns human-friendly keyword representations of the switch positions we want into actual switch positions (assembly language and assemblers).
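
A toy version of that idea (the mnemonics and bit patterns below are invented for illustration, not a real instruction set) might look like this:

```python
# Map human-friendly keywords to the bit patterns (switch positions) they stand for.
OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

def assemble(line):
    mnemonic, operand = line.split()
    return OPCODES[mnemonic] + format(int(operand), "04b")  # opcode + 4-bit operand

for line in ["LOAD 2", "ADD 3", "STORE 7"]:
    print(line, "->", assemble(line))
```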

Since then, we have built more and more complex software that translates something closer to what we speak in real life (if, else, while …) into switch positions (high-level languages and compilers).

So today, when we want to program the hardware, instead of writing down switch positions in 1s and 0s, we write code in languages like C, Java, Python, and JavaScript. They all eventually turn into switch positions in 1s and 0s and run on circuits that are made of switches that can be controlled by other switches.

Anonymous 0 Comments

Obviously it’s a *lot* more complicated than this, but code that a person writes on a computer ultimately just tells electricity where to go inside your computer’s processor to make certain outputs.

The words or letters or symbols (or combinations thereof) that you write in your chosen programming language are ultimately used by your computer as binary code, which is just 1s and 0s. Those 1s and 0s represent voltage levels. For example, 1 = some voltage and 0 = no voltage.

So when you save the code you’ve written as a program, you’re essentially just saving a sequence of 1s and 0s, or in other words, your code is a sequence of on and off voltage changes.
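
You can see this for yourself by dumping any saved file as bits (the filename below is just a placeholder for whatever program or file you pick):

```python
from pathlib import Path

# "program.bin" is a placeholder; any file on disk works the same way.
data = Path("program.bin").read_bytes()

# Each byte is 8 bits, and each bit ultimately corresponds to a voltage level.
for byte in data[:4]:
    print(format(byte, "08b"))
```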

When you actually run this code, you’re essentially telling your computer’s processor what to do with this series of voltage changes. These voltages go through logic gates in your processor, which turn the input voltages into output signals according to logical operations. Pack a few hundred million of those into a CPU and you’ve got yourself a computer that can browse reddit or send an email or play a game.

Anonymous 0 Comments

The computer can only execute a list of binary instructions. Imagine it like this:

The computer has a bunch of circuits for calculations: a circuit to add two binary numbers, a circuit to multiply two binary numbers, etc. It also has a memory circuit where binary numbers can be stored and retrieved again. All these circuits are connected to each other with switches. Imagine it like the switches in a railway: by putting in the correct control signals, the correct circuits are connected to each other in a specific way, e.g. to get a number from memory, perform a calculation, and store it back into memory.

The control signals are coordinated by a control unit. That unit reads instructions one by one from memory (an instruction is also just a binary number, but with an assigned special meaning; e.g. one number could mean “add two numbers”) and translates each into the correct control signals for the switches. So with the correct order of binary instructions in memory you can tell the computer to do anything you like, and the computer executes all of it by switching together the correct circuits.
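
A toy control unit might look like this (the opcodes below are invented purely for illustration, not any real machine’s):

```python
# "Memory" holding instructions and data as plain binary numbers.
memory = [0b0001, 5,   # 0001 = "load the next number"
          0b0010, 3,   # 0010 = "add the next number"
          0b0000]      # 0000 = "halt"

accumulator = 0
pc = 0                 # program counter: which memory cell to read next
while True:
    opcode = memory[pc]
    if opcode == 0b0000:      # halt
        break
    elif opcode == 0b0001:    # load
        accumulator = memory[pc + 1]
    elif opcode == 0b0010:    # add
        accumulator += memory[pc + 1]
    pc += 2                   # move on to the next instruction

print(accumulator)  # 8
```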

You could program a computer just by directly putting in those binary numbers as instructions. This is what was done in the earliest days of computers. But that is of course not very easy. The coding you see nowadays with text files is just a convenience for humans to understand the code better. Humans can read text well, but computers cannot directly understand code in a programming language.

Therefore computer scientists developed programs like compilers and interpreters that translate text to binary instructions. They literally go through the text character by character to detect what each line is supposed to mean and translate it into binary instructions that can then be executed as described above.
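
A tiny sketch of that first step (a scanner that walks through the text character by character and groups it into meaningful pieces, before any translation to binary happens) might look like this:

```python
def tokenize(source):
    tokens, current = [], ""
    for ch in source:             # literally character by character
        if ch.isdigit():
            current += ch         # keep building up a number
        else:
            if current:
                tokens.append(("NUMBER", current))
                current = ""
            if ch == "+":
                tokens.append(("PLUS", ch))
    if current:
        tokens.append(("NUMBER", current))
    return tokens

print(tokenize("12+34"))  # [('NUMBER', '12'), ('PLUS', '+'), ('NUMBER', '34')]
```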

Anonymous 0 Comments

There are some good answers here. If you really want to go in depth, you can play nandgame.com, watch the “Breadboard computer” video series on eater.net (or Ben Eater on YouTube), read “But how do it *know*” by J Clark Scott or read “Code” by Charles Petzold. I’m personally partial to the first book, but many say that the second one is better.

Anonymous 0 Comments

Physically? Inside a computer are transistors*. They work almost exactly like a light switch: hit one and it can be on or off. They’re arranged kinda like a game of 20 questions, where depending on the yes/no answers to the questions (the light being on or off) a different answer is reached. Now imagine the number of questions is more like 20 trillion.

At a certain point, interacting with the switches directly became too hard due to the sheer number of questions, so we used them to construct abstract systems that interact with all those questions for us, like many robots. Imagine an office building: you could, for example, draw a picture on the face of the building by turning its many lights on or off. We design the ‘robots’ (operating systems, programming languages, compilers) to make this easy for us to do, so we can just write ‘draw a circle’ and the robots translate that into which switches should be on or off to reach that result. This also explains most computer issues, as those arise because either A: the person wrote a command the robots didn’t understand, or B: the robots did something wrong with the message and it became garbled.

*Transistors don’t physically move, but contain electrons that shift around rather like a physical switch.

Anonymous 0 Comments

I will try to make it simple.

A program written in any kind of language is taken by a compiler (itself a set of instructions) that changes it into a logical set of 1’s and 0’s.
Those numbers are passed through the processor and stored in memory; the output is again 1’s and 0’s and is translated back into natural language or visual things shown on an output device.

Anonymous 0 Comments

So the brain of the computer is called the processor. Deep down it is just a very large set of microscopic wires and switches (formed to create logic gates) that are capable of performing really basic operations – for example, they can detect if 2 input wires have power and pass this power along to the output, or they can invert the power from input to output. From those simple structures, by combining them appropriately, you can create more and more advanced operations. Many talented engineers spend years of work designing and manufacturing these operation sets, creating modern processors. Modern CPUs have more than 100 million of those logic gates.
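
As a rough sketch in Python (just simulating the logic, not real circuitry), here is how those two basic operations can be combined into a more advanced one:

```python
def and_gate(a, b):
    # "Detect if both input wires have power and pass it along."
    return a & b

def not_gate(a):
    # "Invert the power from input to output."
    return 1 - a

def xor_gate(a, b):
    # A more advanced operation built only from the two basic ones above.
    return and_gate(not_gate(and_gate(a, b)),
                    not_gate(and_gate(not_gate(a), not_gate(b))))

print(xor_gate(0, 1), xor_gate(1, 1))  # 1 0
```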

Now, how does a written programming language talk to the processor? It can’t on its own, as they don’t speak the same language – they need some form of translation. This is what a compiler does: it translates the programming language into machine code that can be understood by the processor. This code is then taken by the operating system and run through a driver – a piece of software designed to communicate between the operating system and hardware components. The resulting machine code is in sets of 0’s and 1’s, and as you can probably guess, 0’s mean some wires will be set to no power and 1’s mean some wires will have power. The output wires, after the operation is complete, are run back through the driver to the operating system, where they can be translated into human-readable output.