# What’s the difference between analog and digital?


I’m pretty sure that analog signals are just a continuous stream of input, versus digital, which provides signals at discrete time steps. Why have we shifted from analog to digital for so many things? Wouldn’t a steady stream of information be of better use?

In: Technology

Really, the answer is ‘computers’. Digital processing of information is much, much, much easier to do in a consistent, reliable, reproducible, controlled, fault-tolerant, noise-resistant manner.

If we could build analog computers as good as digital computers, you may have a point.

For an ELI5, analog is like writing an essay, digital is like multiple choice.

Essays can give a much richer and deeper exploration into a single subject. But it is hard to switch context or explore multiple subjects in one essay.

Multiple choice is easy to context switch and is easier to mark. So with multiple short questions, one can switch subjects or areas within a subject easily.

Same with electronics – an analog circuit has to be fine-tuned to its application. It can be done, but once tuned, it isn’t very flexible. Analog systems are hard to build if they have to accept a wide variety of signals.

Digital is usually based on binary logic. Logic is easily chained, “stacked” or layered. So very complex structures and functionality can be built that can deal with many sources of input and be fairly easily upgraded and changed.
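To make the “chained, stacked, layered” idea concrete, here is a toy sketch (my own illustration, not from the answer above): every gate below is built from a single NAND primitive, and those gates are then stacked into a half adder.

```python
# Toy illustration: complex digital functions layered from one primitive.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    # XOR layered out of four NANDs
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple:
    """Add two bits, returning (sum, carry) -- gates stacked on gates."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Real processors are essentially this idea repeated billions of times, which is exactly the kind of layering that analog circuits resist.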

Your summary understanding is pretty good. The big fundamental reason behind the shift is noise/error.

If an analog signal gets distorted by noise, you can’t tell what it was beforehand; the noise is carried all the way to the end. If each step, each wire, each filter introduces a tiny bit of error, you are both limited in the number of processing steps before noise overwhelms the signal, and forced to use near-perfect, high-quality components that introduce as little noise as possible.

If you have a digital signal, where 0 is 0 volts and 1 is +5 volts, then even with 1.1 V of noise on the line you still know on arrival which level meant 0 and which meant 1. Run it through a buffer / Schmitt trigger (a special circuit that takes such “dirty” digital input, say +0.8 V and +4.4 V, and scrubs it, producing a clean 0 V or +5 V on output).
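A toy sketch of that “scrubbing” step (the voltage levels are just the ones from the example; a real Schmitt trigger also uses two thresholds for hysteresis, which this simple version omits):

```python
# Snap noisy digital samples back to clean logic levels.
# Single 2.5 V threshold: a simplification of a real Schmitt trigger.

V_HIGH, V_LOW = 5.0, 0.0

def scrub(noisy_volts):
    """Each noisy sample becomes a clean 0 V or 5 V level."""
    return [V_HIGH if v > 2.5 else V_LOW for v in noisy_volts]

# Over a volt of noise on the line still leaves the bits recoverable:
received = [0.8, 4.4, 1.1, 3.9]   # dirty 0, 1, 0, 1
print(scrub(received))            # [0.0, 5.0, 0.0, 5.0]
```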

This way, the only point where you lose quality is the one-time conversion from analog to digital – afterwards your data remains unchanged and undamaged no matter how many processing steps it undergoes, because every time it has a chance to get a little “dirty” it can be cleaned up. And if a given step risks introducing noise so bad the data can’t be “cleaned”, you just send more data, so that whatever was lost can be reconstructed from the redundant extras.
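A minimal sketch of “send more data so losses can be reconstructed”: the simplest possible redundancy scheme, a 3x repetition code with majority voting. (Real systems use far more efficient codes, e.g. Hamming or Reed–Solomon; this is just the idea.)

```python
# 3x repetition code: every bit is sent three times; the receiver
# takes a majority vote, so any single flipped copy is corrected.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

sent = encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
sent[1] = 0                # noise flips one copy...
sent[4] = 1                # ...and another
print(decode(sent))        # [1, 0, 1] -- the original is recovered
```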

That means cheap, tiny components, because you don’t care about a bit of noise. That means arbitrary media, because in analog, converting between electric current, light intensity, magnetic fields, magnetization of recording tape and so on was always tricky, as they never converted completely 1:1. What your tape recorder got through the microphone and wrote to tape was never identical to what it read back and replayed; similar, yes, but even if silence was still silence and max volume was still max volume, the bass was a little warbly, the really quiet parts were gone entirely, and so on. With digital, a “1” is always a “1” and a “0” is always a “0”; in particular, “0.98” is still a “1” and “0.2” is still a “0”. You know the inaccuracies are an error, and you can reconstruct the original just as it was digitized, simply by discarding the error.

In a digital system you’re transmitting the numerical representation of a signal. In an analog system you’re transmitting the signal directly.

Say you want to transmit one musical tone. A digital representation would look like this:

*Sing the musical note ‘A’ for 6 seconds.*

And you could then add extra information to make sure that whoever is receiving that information has understood you correctly:

*Sing the musical note ‘A’ for 6 seconds. That’s a frequency of 440 Hz for 6000 milliseconds.*

And then you could even add a checksum:

*Sing the musical note ‘A’ for 6 seconds. That’s a frequency of 440 Hz for 6000 milliseconds. The sum of all the digits in this instruction is 6+4+4+0+6+0+0+0 = 20.*

So even if the recipient has misheard you, the checksum probably isn’t going to match and he can ask you to repeat the instruction. So you now have pretty good protection against transmission errors.
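The digit-sum check from the example above can be sketched in a few lines (the message string and “format” are made up purely for illustration):

```python
# Digit-sum checksum: sum every digit in the message, as in the
# singing example above.

def digit_sum(message: str) -> int:
    return sum(int(ch) for ch in message if ch.isdigit())

original = "Sing note A for 6 seconds: 440 Hz for 6000 ms"
print(digit_sum(original))          # 6+4+4+0+6+0+0+0 = 20

# A mishearing ("440" -> "340") changes the checksum, so the
# recipient knows to ask for a repeat:
misheard = "Sing note A for 6 seconds: 340 Hz for 6000 ms"
print(digit_sum(misheard) == 20)    # False
```

Real protocols use stronger checksums (CRCs, cryptographic hashes) for the same reason: a mismatch signals a transmission error.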

That recipient can now go and forward this instruction to 100 other people, and even if he does so over a pretty bad phone line, the information isn’t going to change. Every single one of those people will sing the exact same note (as long as they can sing) for the exact same duration.

In an analog system you’d just sing the note A to someone for 6 seconds and ask them to repeat it. If that person then passes on the information to others (by singing the note to them), the information is always going to change a bit. And if those recipients pass on the information, it’s going to change even more.

> I’m pretty sure that analog signals are just a continuous stream of input, versus digital, which provides signals at discrete time steps.

Exactly right. Analog is like drawing something, and digital is like drawing that same thing via a game of connect-the-dots.

> Why have we shifted from analog to digital for so many things?

The trouble with analog is that every time you make a copy, that copy is slightly different from the original. Over multiple generations of copying, or with poorly made copying equipment, these differences can be easily seen/heard.

Digital is written entirely using on/off bits to represent 1s and 0s, which in turn represent larger numbers.

The great thing about bits is that you can either read the bit or you can’t, so there aren’t generational losses in copying.
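A toy illustration of generational loss (the noise figures are made up; the point is the shape of the result): each analog copy adds a little noise, while a digital copy of bits is exact.

```python
# Compare 100 generations of analog copying vs digital copying.
import random

random.seed(1)  # deterministic for the demonstration

def analog_copy(signal):
    # every analog copy adds a little random noise
    return [s + random.uniform(-0.05, 0.05) for s in signal]

def digital_copy(bits):
    # a bit either reads as 0 or 1 -- the copy is exact
    return list(bits)

analog = [0.0, 1.0, 0.5]
digital = [0, 1, 1]
for _ in range(100):            # 100 generations of copying
    analog = analog_copy(analog)
    digital = digital_copy(digital)

print(digital)                  # [0, 1, 1] -- unchanged
print(analog)                   # has drifted away from the original
```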

Digital media can also be scrambled and can include “checksum” bits. These two tools allow electronics to correct for bits that were incorrectly read.

> Wouldn’t a steady stream of information be of better use?

The discrete steps in modern digital signals are so small that they’re imperceptible. For example, the human ear can hear sound frequencies from about 20 Hz to 20 kHz, and the Nyquist–Shannon sampling theorem says that a digital audio format should sample at at least double the highest frequency to reproduce it accurately, so we’ve been using 44.1 kHz and 48 kHz audio for years… and on high-end products and in studios, they’re even doing 96 kHz and 192 kHz now.
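A quick sketch of why those rates are enough: at 44.1 kHz, a 440 Hz tone gets roughly 100 samples per cycle, and even a 20 kHz tone at the edge of hearing still gets just over 2, which is what the sampling theorem requires.

```python
# Samples per cycle at the CD-quality sampling rate.

SAMPLE_RATE = 44_100  # Hz

for freq in (440, 20_000):
    print(freq, "Hz ->", round(SAMPLE_RATE / freq, 1), "samples per cycle")
# 440 Hz -> 100.2 samples per cycle
# 20000 Hz -> 2.2 samples per cycle
```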

Analog signals are analogous to what they represent – for example, in an analog audio signal, the amplitude tells the loudspeaker how far to push or pull.

Digital signals represent the signal as a string of digits that can be used to reconstruct the original signal.

The reason why we bother with digital is that analog signals are vulnerable to noise, while digits can be transmitted exactly, as long as proper care is taken.

Sound waves are analog. Let’s say you have a device that directly turns sound waves (a gradual change in sound pressure over time) into an analog electrical signal, and you apply that signal to a wire, going to a speaker that vibrates along with the peaks and troughs of the signal, causing the sound waves to be reproduced.

This is the normal case: a microphone connected directly to a loudspeaker.

However, if someone speaks on one side of the planet, how would you transmit that to the other side of the planet?

One way would be a wire going all the way, to a speaker on the other side. But you would need to boost the signal perfectly along the way. Any kind of interference (noise) would directly alter the signal, and it’s hard to separate the signal from the interference.

It’s also less convenient to store analog signals, if you want to delay or repeat the sending. You can’t cut out the wire with the signal and send it in the mail. Gramophone records store analog signals, because the sound wave is directly represented on the platter. But gramophone records are not very efficient, and any degradation directly changes the signal.

You could directly record the analog signal as a magnetic field on a hard drive platter, instead of 0s and 1s, but working with this is far less flexible. RAM and SSDs are designed around discrete electrical charges.

Computer logic and programming would struggle a great deal with analog. It’s difficult to even imagine how you would calculate e.g. the maximum number of cars on a bridge, based on an analog representation of each car. It’s not impossible, just monumentally inconvenient.

Digital allows for perfect transmission, easy storage, easy processing.

In the sound realm, the analog signal of a voice is digitized through an ADC (analog-to-digital converter): [https://en.wikipedia.org/wiki/Analog-to-digital_converter](https://en.wikipedia.org/wiki/Analog-to-digital_converter) . To play it back on a speaker, the speaker cone has to vibrate based on an analog signal, which requires a DAC (digital-to-analog converter): [https://en.wikipedia.org/wiki/Digital-to-analog_converter](https://en.wikipedia.org/wiki/Digital-to-analog_converter)
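A minimal sketch of the ADC → DAC round trip (the 16-bit scale matches common signed audio; the input value is made up): quantize an analog sample to an integer code, then convert it back, with the error bounded by one quantization step.

```python
# Quantize an analog sample to a 16-bit integer (ADC) and back (DAC).

MAX_16BIT = 32767  # largest 16-bit signed sample value

def adc(sample: float) -> int:
    """Analog value in [-1.0, 1.0] -> 16-bit integer code."""
    return round(sample * MAX_16BIT)

def dac(code: int) -> float:
    """16-bit integer code -> analog value."""
    return code / MAX_16BIT

x = 0.123456
code = adc(x)
print(abs(dac(code) - x) < 1 / MAX_16BIT)   # True: error under one step
```

The rounding in `adc` is the one-time quality loss mentioned in the answers above; every digital step after it is exact.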