Hardware Engineering -> Writing Software

In computer hardware engineering, how do the electricity and the little bits of silicon work, what’s the “lowest level” of computer engineering and how does that work, and how does that scale and translate up into writing software? i.e. hardware -> machine code -> assembly -> higher-level programming languages?

I think mostly I’m just curious about the bridge between software and hardware, and how to get there, but I have no idea where to start.

5 Answers

Anonymous

The lowest form of computer is a light switch. It has a single bit of input–the switch itself. It computes the identity function (i.e. the output is the same as the input). This output is sent to the one bit of display–the lights.

What you see in this “computer” is that everything is governed by basic circuitry. When electricity can flow it flows. Where it is blocked it stops. Input devices take some other phenomenon (e.g. the closing of a switch in this case) and turn it into an opportunity for electricity to flow or not. Output devices work the opposite, turning electricity into some physical phenomenon (light, in this case).

The next computer to look at is hallway/stairway light switches. These are wired up such that you can flip either of two switches and it’ll toggle the light. This improves on the previous computer by taking in *two* bits of input. The function being computed is now XOR (exclusive or, where the output is 1 if *exactly* one of the inputs is 1). The output is still a single bit–the lights.

At this level of computer the thing to recognize is that there are ways to arrange wires such that just by closing switches you can evaluate a simple function.
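To make that function explicit, here is the two-switch hallway circuit as a Python sketch–a truth table in code rather than a wiring diagram:

```python
def hallway_light(switch_a: bool, switch_b: bool) -> bool:
    """Two hallway switches wired so flipping either one toggles the light.

    This is XOR: the light is on exactly when the switches disagree.
    """
    return switch_a != switch_b

# Starting with both switches down (light off), flipping either switch
# toggles the light, no matter what state the other switch is in:
print(hallway_light(False, False))  # False (off)
print(hallway_light(True, False))   # True  (flipped A: on)
print(hallway_light(True, True))    # False (flipped B: off again)
```

The wiring accomplishes this with travelers between the two switches rather than with any logic chip, but the function it evaluates is the same.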

The next question to ask is “what if the output of this circuit wasn’t a light, but rather another switch?” All you need here is a switch that can be controlled electronically. The first choice here was relays, but those were superseded by vacuum tubes and later transistors. The physics varies between these, but the underlying idea of making an electronically controlled switch stays the same.

Once you start designing your computer where each unit can be used to drive later units you can start designing incredibly complex circuits through the magic of copy/paste (or abstraction, if you prefer the term). Instead of having to design everything in terms of wires you can design a few basic gates in terms of wires, then design more complicated gates in terms of those basic ones, and so on. For example, you might design a circuit that takes 16 inputs–two sets of 8 for two 8-bit numbers–and outputs their product. In designing this circuit perhaps you need a circuit to compute addition and another to execute a bit shift. You design those in terms of logic gates, and you design *those* in terms of wires. Thanks to abstraction you don’t have to think about wires when you’re designing something to multiply.
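That layering can be sketched in Python instead of wires. This is a toy model, assuming NAND as the one primitive “electronically controlled switch” pattern; everything else is built by composition, exactly in the spirit described above:

```python
# Toy sketch: one primitive gate (NAND), then gates from gates, then an
# adder from gates, then multiplication from addition plus bit shifts.

def nand(a, b): return not (a and b)   # the assumed primitive
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """One bit of addition, built only from the gates above."""
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

def add8(x_bits, y_bits):
    """Ripple-carry addition of two 8-bit numbers (lists of bools, LSB first)."""
    out, carry = [], False
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out  # carry out of the top bit is dropped, as in 8-bit hardware

def to_bits(n):      return [bool((n >> i) & 1) for i in range(8)]
def from_bits(bits): return sum(1 << i for i, b in enumerate(bits) if b)

def mul8(x, y):
    """Shift-and-add multiply: add x, shifted left i places, wherever bit i of y is set."""
    acc = to_bits(0)
    for i, bit in enumerate(to_bits(y)):
        if bit:
            acc = add8(acc, to_bits((x << i) & 0xFF))  # the "bit shift" circuit
    return from_bits(acc)

print(mul8(6, 7))  # 42
```

Note that `mul8` never mentions NAND: each layer only talks to the layer directly below it, which is the abstraction the paragraph above describes.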

At that level you have the capability to design circuits that can compute extremely complicated things, but you’re still stuck with a very rigid computer: it just does one thing. To get around that you design several circuits that do commonly-needed things, then you design one last circuit that takes an instruction (perhaps as an 8-bit number) and, based on that instruction, it connects memory cells (circuits you’ve designed in terms of logic gates) to specific pieces of your computer. This circuit acts like the old-timey switchboard operators for telephones.

When the “addition” instruction comes up, for example, it may connect registers A and B to the input of the adder and register C to the output. The adder does the one thing it’s designed to do, based on how it’s wired: it adds. This circuit that interconnects modules based on an instruction is what we’d call the instruction decoder. This is the stage at which instructions are given meaning in hardware: the “function” this circuit computes is “addition operator -> connect A, B, and C to the adder; multiplication operator -> connect A, B, and C to the multiplier; memory read A -> connect A to the memory bank, 1 to the read/write bit, and B to the address select, …” and so on.
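A decoder like that can be sketched in Python. The opcode numbers and register names here are invented for illustration, not taken from any real instruction set:

```python
# Hypothetical instruction decoder: the opcode selects which functional
# unit gets "wired" to registers A, B, and C. Opcodes are made up.

registers = {"A": 6, "B": 7, "C": 0}

def decode_and_execute(opcode):
    if opcode == 0x01:    # ADD: connect A and B to the adder, C to its output
        registers["C"] = (registers["A"] + registers["B"]) & 0xFF
    elif opcode == 0x02:  # MUL: connect A and B to the multiplier instead
        registers["C"] = (registers["A"] * registers["B"]) & 0xFF
    else:
        raise ValueError(f"unknown opcode {opcode:#x}")

decode_and_execute(0x01)
print(registers["C"])  # 13
decode_and_execute(0x02)
print(registers["C"])  # 42
```

In hardware the `if`/`elif` chain is itself just another combinational circuit: the opcode bits drive the select lines of multiplexers that route register outputs to one functional unit or another.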

To finish off that layer of the computer we need something that’s going to make the program advance, so for that we reach for a clock. There are various ways to set up circuits such that they trigger on the change of a signal. These circuits are used to advance the program from one instruction to the next. They can also be used in circuits that send serialized data down a wire.

Up until this point it is typical to look at the program as one thing and the data it operates on as something else entirely. Back with the original light switch the “program” was wires and the “data” was the position of switches. At this point when we look at the program it’s really just a sequence of numbers, just like the input and output data. What if these were the same thing? Leaning into this idea we move towards reading programs from the same physical media that we might pull data off of (e.g. a floppy disk, where the orientation of magnetic fields can result in the read-write head of the drive reading a 1 or 0 (being energized, closing a switch, or not)).
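The stored-program idea can be sketched as a toy machine in Python. The memory layout and opcodes below are invented for illustration; the point is only that instructions and data sit in the same flat memory, and a program counter walks through it one clock tick at a time:

```python
# Toy stored-program machine: program and data share one memory.
# Invented format: [opcode, src1, src2, dst] per instruction, 0x00 = HALT.

memory = [
    0x01, 10, 11, 12,   # ADD  mem[10] + mem[11] -> mem[12]
    0x02, 12, 12, 12,   # MUL  mem[12] * mem[12] -> mem[12]
    0x00, 0,            # HALT
    6, 7, 0,            # data at addresses 10, 11, 12
]

pc = 0                              # program counter, advanced each clock tick
while memory[pc] != 0x00:           # stop on HALT
    op, src1, src2, dst = memory[pc:pc + 4]
    if op == 0x01:
        memory[dst] = (memory[src1] + memory[src2]) & 0xFF
    elif op == 0x02:
        memory[dst] = (memory[src1] * memory[src2]) & 0xFF
    pc += 4                         # fetch the next instruction next tick

print(memory[12])  # 169, i.e. (6 + 7) squared
```

Because the program is just numbers in memory, it could have been loaded from a floppy disk–or written there by another program, which is exactly the door the next paragraph opens.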

Finally, with programs as data we make one last realization: programs take in and spit out data. What if we had a program whose input was one program and whose output was another? Specifically, what if the input was in a language that humans can understand more easily and the output was that same program in a format that computers can more readily work with (e.g. the actual sequence of instructions that go into the computer)? This is the notion of a compiler or interpreter (the difference being whether this translation happens all at once ahead of time or in real time as the program executes). Thanks to compilers we can write higher- and higher-level languages that take us even further from the underlying wires and switches (or traces and transistors now) that operate at the lowest level.
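A minimal compiler for an invented instruction format might look like this sketch–the mnemonics and opcode numbers are assumptions, not a real assembler:

```python
# Toy "compiler": translate human-readable mnemonics into a flat list of
# numbers a simple machine could execute. Mnemonics/opcodes are invented.

OPCODES = {"HALT": 0x00, "ADD": 0x01, "MUL": 0x02}

def compile_program(source: str) -> list:
    """Turn lines like 'ADD 10 11 12' into a flat instruction stream."""
    out = []
    for line in source.strip().splitlines():
        mnemonic, *args = line.split()
        out.append(OPCODES[mnemonic])
        out.extend(int(a) for a in args)
    return out

code = compile_program("""
ADD 10 11 12
MUL 12 12 12
HALT
""")
print(code)  # [1, 10, 11, 12, 2, 12, 12, 12, 0]
```

The input is text a human can read; the output is data a machine can fetch and decode. Real compilers add parsing, type checking, and optimization, but the input-program-to-output-program shape is the same.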

As a postscript, the above description gets you to about the 1980s. There have been a lot of developments since then that would extend this already long comment–I haven’t touched on things like microcode (the software running between your assembly and the hardware itself), register renaming, simultaneous multithreading, etc., all of which help improve performance. These are all the result of applying the same concepts described above.
