It comes down to the components computers are made of: originally vacuum tubes, and now transistors, which have two stable states, on or off. One could certainly build a computer with components that had 3 or more states, but that's not how history turned out, and at this point there aren't compelling reasons to switch the basic logic underlying mainstream computing.
Because it is very easy to control electronic parts so they are fully on or fully off with a control signal that is itself just on or off. Having multiple possible levels on a wire is possible, but it requires a lot more parts, and changing state is slower. As a result, a signal in the computer with just two states, on or off, is faster, and that is binary.
You could use 4 binary bits (wires) to represent a 1-digit decimal number. But 4 bits can represent 16 different states, not just 10. So only 10/16 = 62.5% of the available states are used when a 4-bit number holds a single decimal digit.
The inefficiency gets larger with more digits: a 2-digit decimal number stored that way requires 8 bits, which give 2^8 = 256 combinations, for 100/256 ≈ 39% efficiency.
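To make that arithmetic concrete, here is a small Python sketch (the function name is my own) that computes how much of the available bit-state space an n-digit number stored this way actually uses:

```python
# Sketch: state-space efficiency of storing decimal digits in 4-bit groups.
# An n-digit decimal number uses 4*n bits, which can hold 2**(4*n) states,
# but only 10**n of those states are valid decimal values.
def bcd_efficiency(digits: int) -> float:
    used_states = 10 ** digits             # valid decimal combinations
    available_states = 2 ** (4 * digits)   # all combinations of 4*digits bits
    return used_states / available_states

for n in (1, 2, 3, 4):
    print(f"{n} digit(s): {bcd_efficiency(n):.1%} of states used")
# 1 digit(s): 62.5% of states used
# 2 digit(s): 39.1% of states used
# 3 digit(s): 24.4% of states used
# 4 digit(s): 15.3% of states used
```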
Doing maths is also a bit harder if you do not use all the available states. The result is that plain binary is a more efficient use of the resources you have. If you want to input and output decimal numbers, you can just convert them at that point and store them as binary inside the computer.
You can store data that way, and it is called binary-coded decimal (BCD). It can in some situations be easier to use, but for multiple reasons it is not as common today as it was in the past.
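If you are curious what BCD looks like next to plain binary, here is a minimal Python sketch (the helper names are my own, for illustration) that encodes each decimal digit as its own 4-bit group and compares it with the ordinary binary representation:

```python
def to_bcd(number: int) -> str:
    """Encode each decimal digit as its own 4-bit group (BCD)."""
    return " ".join(format(int(d), "04b") for d in str(number))

def from_bcd(bcd: str) -> int:
    """Decode space-separated 4-bit groups back to a decimal number."""
    return int("".join(str(int(group, 2)) for group in bcd.split()))

n = 42
print(to_bcd(n))        # 0100 0010  -> two 4-bit groups, one per digit
print(format(n, "b"))   # 101010     -> plain binary needs only 6 bits
assert from_bcd(to_bcd(n)) == n
```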
You can build a [decimal computer](https://en.wikipedia.org/wiki/Decimal_computer), and the first digital electronic computer, ENIAC from 1945, is an example. In some designs you had 10 wires for each digit, with only one of them on at a time. If you look at IBM, which was the largest computer manufacturer early on, they moved their whole product line to fully binary computers with the IBM System/360, introduced in 1964.
binary is a relatively simple coding: you can measure a voltage at one level, say 0V, and say that's 0, and another voltage, for example 5V, and say that's 1.
you can have a lot of slop in that and have a range of voltages that are acceptable for either value. so 0 can be anything from -0.5v to 0.5v. a 1 can be 4v to 6v, etc. if you read 3v on a line, you know it hasn’t stabilised yet and you need to wait or raise an error.
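as a sketch of that idea (the voltage bands here are just the ones from the example above), a receiver only has to sort a measured voltage into three cases:

```python
def decode_bit(voltage: float):
    """Classify a measured line voltage using the tolerance bands above.
    Anything between the bands means the line hasn't settled yet."""
    if -0.5 <= voltage <= 0.5:
        return 0
    if 4.0 <= voltage <= 6.0:
        return 1
    return None  # indeterminate: wait for the line to stabilise or raise an error

print(decode_bit(0.3))   # 0    -- well within the "0" band
print(decode_bit(4.8))   # 1    -- well within the "1" band
print(decode_bit(3.0))   # None -- mid-range, not yet a valid bit
```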
there’s no advantage in computing power from using multiple values, and binary is easily encoded in many other media. paper tape, magnetic tape/disks, switches, compact disks, etc. all easily encode binary, while many of them would struggle to encode more values, and there’s no real benefit to doing so.
some modern internal communications protocols do use multiple voltage values on the same line: they might use 4 distinct voltage levels to send 2 bits of data in one pulse, or 16 levels to send 4 bits, and so on. but this relies on more complex decoding at each end and is more susceptible to things like interference, where a small blip can momentarily push the pulse to the wrong level and corrupt all the bits in that pulse.
also, each extra bit per pulse doubles the number of levels you need, so you end up with diminishing returns.
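a rough sketch of why the returns diminish: doubling the level count each time halves the voltage gap between adjacent levels, which is your margin against noise. the 1 V total swing here is just an assumption for illustration:

```python
# Assumed 1.0 V total signal swing, purely for illustration.
SWING = 1.0

for bits_per_pulse in range(1, 6):
    levels = 2 ** bits_per_pulse   # 2, 4, 8, ... distinct voltage levels
    gap = SWING / (levels - 1)     # spacing between adjacent levels
    print(f"{bits_per_pulse} bit(s)/pulse: {levels:2d} levels, "
          f"{gap * 1000:.0f} mV between levels")
# 1 bit(s)/pulse:  2 levels, 1000 mV between levels
# 2 bit(s)/pulse:  4 levels, 333 mV between levels
# 3 bit(s)/pulse:  8 levels, 143 mV between levels
# 4 bit(s)/pulse: 16 levels, 67 mV between levels
# 5 bit(s)/pulse: 32 levels, 32 mV between levels
```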
so in the end, a ternary computer would not be able to do anything that a binary computer couldn’t, would be significantly more complex to build, and its data would be more limited or expensive to store. it’s not impossible, it’s just not the simplest option, and therefore more expensive and complex for no overall benefit.