# Eli5: How does electricity = data?


We have electronics that can do a multitude of incredible things. I understand that data has to do with the binary system, but I can’t get my head around how electricity can be transferred into data… and how that data can then generate images and render graphics, etc. I mean, wut!? Plz hlp, brain cannot compute.


Electricity travels really fast. If I flip a light switch to make a connection, the lightbulb will have electricity flowing through it basically instantly, no matter how far away it is from the switch. However, all you can do is change whether the light is on or off; there’s no dimmer switch.

So say you wanted to send someone far away the text of the alphabet: “ABCDEFG…” You have a switch and the other person is looking at the lightbulb it’s connected to. Ahead of time, you would agree with the person that a certain pattern of on/off by the light means the letter “A”. A different pattern would mean “B”, and so on. It’s like Morse code. You can send a message by manually flicking the switch on/off; this is basically how the telegraph worked. However, that’s really slow. Eventually, we built machines that can flip a switch billions of times per second and can read the on/off signals coming in that fast and translate them back into text for us. The rest is translating that text into instructions for machines. Like every time your monitor refreshes (60+ times per second), your computer has to send it a list of instructions: “Row 1, column 1: turn on the lights to color this pixel white. Row 2, column 1: turn on the right lights to color this pixel black…” and so on.
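The “agreed pattern of on/off per letter” idea above can be sketched in a few lines of Python. The patterns here happen to be the standard ASCII bit patterns for A, B, and C, but the point is only that both sides agreed on them in advance:

```python
# Sender and receiver agree on a pattern of 8 on/off signals per letter.
PATTERNS = {
    "A": "01000001",  # these happen to be the ASCII bit patterns
    "B": "01000010",
    "C": "01000011",
}
DECODE = {bits: letter for letter, bits in PATTERNS.items()}

def send(message):
    """Turn a message into one long stream of on/off (1/0) signals."""
    return "".join(PATTERNS[ch] for ch in message)

def receive(stream):
    """Read the stream back 8 signals at a time and look up each letter."""
    return "".join(DECODE[stream[i:i + 8]] for i in range(0, len(stream), 8))

signals = send("ABC")
print(signals)           # 24 on/off states, flicked down one wire
print(receive(signals))  # "ABC" again on the far end
```

A real machine does exactly this, just with a switch flipping billions of times per second instead of a hand on a light switch.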

Binary is a number system like our decimal one, but it “rolls over” after 1 instead of after 9. Since there are only 1s and 0s, you can use electricity to represent them: something “energized” means 1 and “not energized” means 0. That lets you represent data in a form a computer understands (0s and 1s).

The most important devices in modern computer chips are MOSFETs, a class of transistor, although several other kinds of device are also used. There are many different MOSFET designs, and depending on how they are designed they can have different operational characteristics, which is why they’re popular. In computer chips, the MOSFETs used are designed to operate as electronically controlled switches with very fast on/off switching times.

This means that a small current can be used, via a MOSFET, to turn a much larger current on or off.

The simplest analogy for this might be the lever used to control the hydraulic cylinder on a log splitter. The control lever attaches to an oil valve that sends high-pressure hydraulic oil to one end of the cylinder or the other. This means a small force on the valve lever can control a much larger force on the splitting wedge.

Therefore one switch can be used to turn a dozen, or perhaps even a hundred, other switches on and off, and it can do so extremely quickly: in less than a nanosecond.

So, using one switch to turn another on and off seems a bit silly and redundant. But it turns out that if you use several hundred switches and wire them together in a particular way, you can start to do some pretty interesting things, like add/subtract, multiply, and divide binary numbers, or shuttle information around through a single wire by turning the current on and off in a specific order, like Morse code.
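Here’s a rough software sketch of “switches wired together in a particular way.” Every gate below is built from a single NAND function, much the way real chips build all their logic out of transistor switches, and stacking enough of them gives you binary addition:

```python
# One "switch" primitive: NAND. Everything else is wired up from it.
def nand(a, b):
    return 0 if (a and b) else 1

def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))
def or_(a, b):  return nand(inv(a), inv(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add two bits plus a carry: one column of binary addition."""
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

def add8(x, y):
    """Add two 8-bit numbers one column at a time, least significant first."""
    result, carry = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add8(14, 8))  # 22, computed entirely with NAND "switches"
```

An 8-bit adder like this takes a few dozen gates; a modern chip has billions of them doing the same trick.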

What does any of this have to do with binary, logic, or ones and zeroes?

Well, using ones and zeroes is just a convenient writing notation. A computer program doesn’t really care what a zero is. Computer programs only care about whether current is ON or OFF in a particular wire, pin, or circuit. This is similar to Morse code but not quite the same: in Morse code, brief pauses are understood to mean spaces between letters, while a computer would interpret a pause simply as OFF.

In binary we use 1 as a convention to represent “on” and a 0 to represent “off.”

Now, one of the neat properties of binary is that you can use a series of 1s and 0s to represent the familiar decimal numbers you’re used to, such as 14 (= 00001110 in 8-bit format). Binary-decimal conversion is a topic I won’t go into.
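If you do want to check the conversion yourself, Python’s built-in formatting does it in one line each way:

```python
# Decimal -> 8-bit binary string, and back again.
print(format(14, "08b"))   # "00001110"
print(int("00001110", 2))  # 14
```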

So let’s say you have one of those flat parallel-signal ribbon cables with 8 wires; let’s call them A–H.

If you wanted to input the number “14” into your electronic device, you could do something like:

Wire A = 0V

Wire B = 0V

Wire C = 0V

Wire D = 0V

Wire E = -3V

Wire F = -3V

Wire G = -3V

Wire H = 0V

Although technically speaking you’d need a 9th wire as a common drain or ground cable, so you have:

Wire J = +0.2V.

This is at a slightly higher voltage so that current will flow out through the common line, not back through A–D for example, which might otherwise be interpreted as them also being “ON”.
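Treating the wire voltages listed above as bits gives a number. In this example scheme, -3V counts as “on” (1), 0V as “off” (0), and wire A is the most significant bit:

```python
# Read the 8 wire voltages from the example above as one binary number.
ON = -3.0  # in this scheme, -3V means "on"
wires = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 0.0,
         "E": -3.0, "F": -3.0, "G": -3.0, "H": 0.0}

value = 0
for name in "ABCDEFGH":
    value = (value << 1) | (1 if wires[name] == ON else 0)
print(value)  # 14, i.e. the pattern 00001110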

Simple example: you have a light switch that runs all the way to your friend’s house. Light on = you’re home from work, come over and hang out; light off = you’re still at work. Congratulations, you have successfully sent information over a wire using electricity.

A computer essentially works the same way, turning an electric switch on and off really fast to send data. The way it does this is by using standard “protocols” to send and receive data. A simplified example of a protocol might be: “The first `on` means ‘start listening,’ and then every 1ms after that record if the wire is `on` or `off`, for a total of 8ms.” If both computers know this protocol, they can easily send data back and forth.
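That toy protocol is easy to simulate. Here the wire is just a list of its on/off state at each 1ms tick after the start signal was seen (the receiver function and the sample line are made up for illustration):

```python
# A toy receiver for the protocol above: after the "start" signal,
# sample the wire at each tick and record 8 on/off readings.
def read_packet(wire_states):
    """wire_states: the wire's on/off state at each tick after 'start'."""
    bits = wire_states[:8]  # record exactly 8 readings
    return "".join("1" if s else "0" for s in bits)

# Simulated wire: these are its states at each 1ms tick.
line = [False, True, False, True, True, False, False, True]
print(read_packet(line))  # "01011001"
```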

This might result in a “packet” of data being sent, such as `01011001` (with `1` meaning the wire is `on`, `0` meaning `off`). Computers use binary, so depending on what the receiving computer was listening for, that packet might be interpreted as the number `89`, the letter `Y`, or some kind of arbitrary instruction that the programmers of a certain piece of software decided on (such as your player stepping on a landmine in a video game).
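You can verify the “same bits, different meanings” point directly; the number and letter interpretations below are standard binary and ASCII:

```python
# One packet, two standard interpretations of the same 8 bits.
packet = "01011001"

as_number = int(packet, 2)  # read the bits as a binary number
as_letter = chr(as_number)  # read that number as an ASCII character
print(as_number)  # 89
print(as_letter)  # "Y"
```

The “landmine instruction” reading would be a third interpretation, defined entirely by whatever the game’s programmers agreed on.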

As far as images and graphics, you’re just sending basic data. If you want to send an image, the computer says “hey, start listening for 75,000 numbers I’m about to send you that will represent red, green, and blue values for an image,” and then proceeds to send those numbers. Graphics might send a list of points for the 3D geometry (again, just a list of numbers), or it might just say “you know the shape already because it’s in the program’s code, just draw that shape at X, Y, and Z position,” and sends those numbers.
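“An image is just a list of numbers” can be made concrete with a tiny made-up picture: a 2×2 image as red, green, blue values per pixel, flattened into one stream of numbers to send:

```python
# A 2x2 "image": each pixel is an (R, G, B) triple of 0-255 values.
image = [
    [(255, 255, 255), (0, 0, 0)],  # row 1: white pixel, black pixel
    [(255, 0, 0), (0, 0, 255)],    # row 2: red pixel, blue pixel
]

# Flatten it into the stream of plain numbers that actually gets sent.
stream = [value for row in image for pixel in row for value in pixel]
print(len(stream))  # 12 numbers for a 2x2 image
print(stream[:3])   # [255, 255, 255] -- the first (white) pixel
```

A real image just scales this up: the 75,000 numbers mentioned above would cover a 25,000-pixel image at three numbers per pixel.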

Of course there are millions of other things that can be sent, but it’s really just up to the programmers to determine how they want to communicate, and what numbers they want to send.

Electricity doesn’t “transfer into data” any more than ink on paper “transfers into” words. The electricity is just the physical medium we use to store or transmit that data. Humans are the ones that give it meaning.

Binary also isn’t that extraordinary. It’s just another way to represent numbers. We happen to use 10 digits because we have 10 fingers. Computers use a 2-digit number system because it neatly maps to two discrete states of electricity (on or off) and we can easily build components to work with that.

So electricity is the physical medium, and binary data is the abstract representation. The only step from there is to represent concepts like text or images, or even math or logic operations, using binary numbers, and thus electricity. And for that, we have different standards, protocols, instruction sets, file formats etc. for different use cases. A simple example is ASCII, a standard which lets us use groups of 8 binary digits to represent a variety of English letters, numbers and other symbols.
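ASCII in action, using a two-letter example: the same text exists as characters, as numbers, and as groups of 8 binary digits, and you can round-trip between all three:

```python
# The text "Hi" at three levels: characters, ASCII codes, binary digits.
text = "Hi"
codes = [ord(ch) for ch in text]              # characters -> numbers
bits = ["{:08b}".format(c) for c in codes]    # numbers -> 8-bit patterns
print(codes)  # [72, 105]
print(bits)   # ['01001000', '01101001']

# And back: the bit patterns alone are enough to recover the text.
print("".join(chr(int(b, 2)) for b in bits))  # "Hi"
```

Those bit patterns are exactly what ends up as on/off states on a wire or in a memory cell; the meaning lives in the standard, not in the electricity.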