1’s and 0’s are represented by an “on” and an “off” signal, which lets digital computers do discrete operations quickly with essentially zero error.
Representing more values per signal means adding more ‘states’. It’s easy to tell whether something is “on” or “off”, but a third or fourth state squeezes the possibilities closer together, leaving more room for error.
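To get a feel for why that matters, here’s a rough Python sketch (the 0-to-1 signal swing and the noise level are made-up numbers, not real hardware specs): pack more levels into the same swing, keep the noise the same, and watch how often a reading lands on the wrong level.

```python
import random

def error_rate(levels, noise_sigma, trials=100_000):
    # Allowed values are spread evenly across a made-up 0-to-1 signal swing,
    # so more levels means the neighbouring states sit closer together.
    step = 1.0 / (levels - 1)
    errors = 0
    for _ in range(trials):
        sent = random.randrange(levels)                         # the level we meant to send
        received = sent * step + random.gauss(0, noise_sigma)   # same noise either way
        decoded = min(max(round(received / step), 0), levels - 1)  # snap to nearest level
        if decoded != sent:
            errors += 1
    return errors / trials

for n in (2, 4, 8):
    print(f"{n} levels: error rate ~ {error_rate(n, noise_sigma=0.15):.3%}")
```

With only two levels the noise almost never pushes a reading past the halfway point; with eight levels the same noise flips readings constantly.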
Analog computers have been built before. They use a continuous range of values instead of just two, and they can do a whole calculation in one ‘step’, but at the cost of precision. The trouble is that you end up trading accuracy against speed: the faster you run them, the more likely errors become. At high speeds your signals stop behaving like clean, steady levels and act more like waves travelling down the wire, splashing around (reflecting and interfering) as they go.
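Here’s a toy model of that precision problem, not how any real analog machine is built, just an illustration under the assumption that every operation picks up a tiny random wobble:

```python
import random

def analog_sum(values, noise_sigma=0.001):
    # Toy "analog adder": each addition picks up a little random wobble,
    # so the longer the calculation, the further the answer drifts.
    total = 0.0
    for v in values:
        total += v + random.gauss(0, noise_sigma)
    return total

values = [0.5] * 1000
exact = sum(values)            # a digital machine gets exactly 500.0, every time
noisy = analog_sum(values)
print(f"digital: {exact}  analog-ish: {noisy:.3f}  drift: {abs(noisy - exact):.3f}")
```

The digital answer is the same every run; the “analog” one drifts a little further the longer the calculation goes on.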
Digital computers have achieved absurdly high speeds with near-zero errors, something which analog computers can’t dream of.