Why do computers work in base 2, as opposed to base (higher number here)?


I realise (/think?) that CPUs essentially treat two different voltages as a 1 or 0, but what stops us from using 3 or more different voltages? Wouldn’t that exponentially increase the CPU’s throughput by allowing for decisions with greater than two outcomes to be calculated in one cycle? This would presumably mean that a LOT of stuff written for base 2 would need to be updated to base 3 (in this example), but I can’t imagine that’s the only reason we haven’t done this.

I feel like I’ve explained that poorly, but hopefully you get the gist.


20 Answers

Anonymous 0 Comments

A lot of effort has gone into getting them to work in base 2.

Early on, things were more open because the standard ideas hadn’t solidified yet. The Soviets did a lot of work on base-3 computers using a concept called balanced ternary. Analogue electronics at the time commonly used voltages above and below zero – so when it came to digital circuits, why not use zero, negative and positive?

The Soviet approach was to use base 3 but with symbols for +, 0 and -. So, for example, the number eleven would be stored as ++- (9 + 3 - 1).
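Here’s a rough Python sketch of that encoding (the function is just illustrative, obviously not how the Soviet hardware did it):

```python
def to_balanced_ternary(n: int) -> str:
    """Encode an integer using the symbols +, 0, - (digit values 1, 0, -1)."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3          # remainder 0, 1, or 2
        if r == 2:         # a digit of 2 becomes -1 with a carry into the next place
            digits.append("-")
            n += 1
        else:
            digits.append("+" if r == 1 else "0")
        n //= 3
    return "".join(reversed(digits))

print(to_balanced_ternary(11))  # ++-  (9 + 3 - 1)
```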

One of the advantages of the ternary approach was that it drastically reduced the number of transistors needed compared to a binary system. Ternary computers were consistently more powerful than equivalent binary computers.

The problem came with defining the basic operations, especially logic. This made ternary circuits more difficult to design, build and test.

Eventually, theoretical improvements in binary circuit design won out – it was easier to teach circuit design and it was easier to be sure your circuit was right (including design of computer programs to check your circuit design) for binary. The increased design effort for ternary wasn’t worth it.

When chips with thousands of transistors arrived, the lower transistor count, which was the only real advantage ternary had left, stopped mattering.

Anonymous 0 Comments

A CPU that uses 0, 1 and 2 is a ternary CPU. Such systems are used for AI and probability-based scientific research.

Increasing the number of outcomes is only useful in a limited set of specific cases, where the third value (2) generally represents “value not known yet”. Digital systems can simulate this by making an object wrapper, but that takes some space and resources. So adding a 2 is an efficient thing for probability-based evaluations.
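For example, here’s a minimal Python sketch of that wrapper idea, using None as the “not known yet” state:

```python
# Simulating a third "unknown" state on a binary machine with a wrapper.
# None plays the role of the extra value; the extra checks cost space and time.
from typing import Optional

def and3(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    """Three-valued (Kleene-style) AND: False dominates, None means 'unknown'."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

print(and3(True, None))   # None  (still unknown)
print(and3(False, None))  # False (known regardless of the unknown input)
```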

However, general user applications don’t require it at all, or need it at such a small scale that the benefit doesn’t cover the cost of including it in a general PC architecture.

Anonymous 0 Comments

When you get into the history of computing, you learn that almost everything about computing is the way it is because:

a) The first people to enter that particular area of computing decided to make it that way

b) There was no compelling reason to change, and inertia took over

Look at your keyboard. You know why the keys are in the order they are? Because mechanical typewriters had to move the most common letters farther apart so the arms wouldn’t jam up from typists going too fast; the keys were deliberately placed in an inefficient layout.

Why do computer keyboards, which don’t have this issue, have the same layout? Because early IT researchers decided to copy physical typewriters, and since then inertia has carried it along, despite a solid effort from Dvorak.

Anonymous 0 Comments

On/off is easy to represent accurately in electronics with considerable speed & scale.

But it isn’t really the base that matters, it’s having fixed, discrete values that lend themselves to representing operations, memory locations, and values – the things you want for writing software (like Turing machines).

Analog computers typically approximate a continuous range of values, but it’s much harder to make them programmable. They are good for some operations, e.g. amplification. Fixed-purpose versions can be found in practically all audio & radio equipment.

Anonymous 0 Comments

Boolean algebra works with base 2, and this is what our processors use to do calculations. For 3 or more voltages you would have to use a different algebra, or the third state would just be useless.
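To make that concrete: with two values the whole algebra falls out of a few tiny operations, while for three values you’d have to pick new definitions. The min/max/complement scheme below is one common convention (Kleene-style), not the only possible choice:

```python
# Boolean algebra over {0, 1}: everything a binary CPU computes reduces to these.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

# One possible algebra over {0, 1, 2}: min for AND, max for OR, 2-x for NOT.
and3 = lambda a, b: min(a, b)
or3  = lambda a, b: max(a, b)
not3 = lambda a: 2 - a

print(AND(1, 0), OR(1, 0), NOT(1))     # 0 1 0
print(and3(2, 1), or3(2, 0), not3(1))  # 1 2 1
```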

Anonymous 0 Comments

> Wouldn’t that exponentially increase the CPU’s throughput by allowing for decisions with greater than two outcomes to be calculated in one cycle?

A CPU operates all of its transistors every cycle, which means the number of potential outcomes depends more on how many transistors the CPU has than on the base it operates in. Consider adding 2 numbers: a simplified CPU does the addition in a single clock cycle, and it doesn’t matter if it’s binary or ternary.

A 32 bit binary CPU can add 2 numbers from 0 to 4294967295 (2^32 - 1) in a single cycle.

A 64 bit binary CPU can add 2 numbers from 0 to 18446744073709551615 (2^64 - 1) in a single cycle.

A 32 bit ternary CPU can add 2 numbers from 0 to 1853020188851840 (3^32 - 1) in a single cycle.

A 64 bit ternary CPU can add 2 numbers from 0 to 3433683820292512484657849089280 (3^64 - 1) in a single cycle.

The number of potential outcomes of the add operation is related to the bit length and the base the CPU is operating in. For the 32 bit binary CPU, it’s 2^32. For a 32 bit ternary CPU it’s 3^32. But as you can see from above, that’s really just saying the largest numbers you can add in a single operation. A 64 bit binary CPU far exceeds the potential outcomes from a 32 bit ternary CPU.
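If you want to verify those figures, a couple of lines of Python will do it (the largest value representable in n digits of base b is b^n - 1):

```python
# Largest representable value for each base/width combination above.
for base, width in [(2, 32), (2, 64), (3, 32), (3, 64)]:
    print(f"base {base}, {width} digits: 0 to {base**width - 1}")

# base 2, 32 digits: 0 to 4294967295
# base 2, 64 digits: 0 to 18446744073709551615
# base 3, 32 digits: 0 to 1853020188851840
# base 3, 64 digits: 0 to 3433683820292512484657849089280
```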

This means that the base the CPU operates in is pretty much irrelevant to how fast the CPU is; the speed instead depends on how many operations it takes to add 2 numbers together and how long each cycle is.

To get a CPU to add larger numbers together, you just need to cram more transistors onto the chip. Which means that in this case, the only factors that affect how fast a binary computer is compared to a ternary one are how fast the transistors are and how many you can cram onto a chip. The reason we don’t have ternary computers is simply that we’ve gotten extremely good at making fast, small, and reliable binary transistors. On the internet people often post stories about Soviet ternary computers being “superior”. There’s a reason why Russia today has no domestic chip-making industry, and why ternary computers are an academic pursuit at best.

Anonymous 0 Comments

There are a couple of different answers, but the real answer is: it’s more efficient that way. Ternary logic gates exist, ternary algebra exists and can easily (in constant time) be translated to Boolean data. We absolutely could base computers on base-3 logic, or even higher orders. But it’s more efficient to do it with base 2.
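As a sketch of that constant-time translation, each base-3 digit can simply be packed into two bits (one obvious encoding, not any kind of standard):

```python
# Pack each trit (0, 1, 2) into two bits: a constant-time mapping per digit.
def trits_to_bits(trits: list[int]) -> int:
    word = 0
    for t in trits:
        word = (word << 2) | t   # 0 -> 00, 1 -> 01, 2 -> 10
    return word

print(bin(trits_to_bits([2, 0, 1])))  # 0b100001
```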

Transistors are the fundamental building block of electronic logic; you need them (or some equivalent) for an electronic computer to do anything, and the binary version is the most compact, fastest, and most energy efficient. This means you can fit more in a smaller space with fewer heat issues.

Everything else follows from that. If you come out with a ternary transistor that is proven to make better processors than the binary version, you’ll revolutionize the industry.

Anonymous 0 Comments

Because balanced ternary gets no love. For logical operations, you get a true and a false like you do today, but what do you do with the third state? A maybe? A shrug?

Anonymous 0 Comments

The reality is that computers don’t really work in base 2…

Each computer has a specific instruction set – which is the set of things that it can do. The number of instructions that it can do depends on the specific computer, but it will generally have instructions like:

add a, b

which means add the numbers a and b together. For a modern computer, the values a and b are numbers of a specific size – or specific number of bits. They might be 8 bits, 16 bits, 32 bits, or 64 bits.

So if they are 8 bits, each number can be from 0 to 255.

The computer’s processor has an arithmetic unit that does the actual addition, and it does the addition all at once, not one bit at a time.

From a programmer perspective, you could write something like:

add 15, 3

add 00001111, 00000011

add 0x0F, 0x03

Those are just three ways of specifying the same number.
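You can check this in basically any language; in Python, for instance, the three spellings compare equal:

```python
# Decimal, binary, and hex literals are just different spellings of one number.
print(15 == 0b00001111 == 0x0F)                       # True
print(15 + 3, 0b00001111 + 0b00000011, 0x0F + 0x03)   # 18 18 18
```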

Anonymous 0 Comments

At the most basic level, your computer is built out of transistors. It’s easiest to think of a transistor as a voltage controlled switch, where the voltage present at the gate terminal determines if the current is allowed to flow between the source and drain terminals. This maps nicely onto binary & the boolean logic most computers are built with.

However, that’s not actually true: transistors are voltage controlled *resistors*, with the voltage at the gate terminal determining the resistance between the source and drain terminals. This means it is possible to build circuits that map different voltage levels onto different mathematical values. However, this isn’t really done in practice, because 1) any logical system that can be implemented with more than 2 values can always be implemented with just 2, and 2) maintaining consistent voltage levels inside a circuit is a fairly hard problem even with just 2 levels, and it gets worse with more.

It also wouldn’t increase throughput. The limitation isn’t in the underlying data, but in how the results are calculated. Comparisons are done by subtraction: if A - B = 0, then A and B must be equal; if the result is positive, then A > B, and if negative, then A < B. This means that in 90% of cases, the computer is already capable of producing the kind of results you are thinking of. The remaining few cases tend to be best implemented as fuzzy logic problems anyway, so any computer that can deal with floating point can already do that kind of thing. Switching to a three-value system isn’t actually going to give the computer any new capabilities or make it faster, so there is no reason to.
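In code, that comparison-by-subtraction trick looks something like this (a simplified sketch of what the hardware’s zero/sign flags do, not actual CPU code):

```python
def compare(a: int, b: int) -> str:
    """Mimic how a CPU compares: subtract, then inspect the result's sign."""
    d = a - b
    if d == 0:
        return "equal"                       # zero flag set
    return "greater" if d > 0 else "less"    # sign of the difference

print(compare(7, 7))  # equal
print(compare(9, 4))  # greater
print(compare(2, 4))  # less
```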