Why can’t computers understand decimal number systems?

In: Technology

7 Answers

Anonymous 0 Comments

There is something called binary-coded decimal (BCD), which uses 4 bits to store each digit 0-9, but doing math on those values is a bit of a pain. Some older CPUs had dedicated instructions for that math (x86 still has them, but they are disabled in 64-bit mode).
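For the curious, here is a rough Python sketch of the idea (the helper names are made up for illustration): each decimal digit gets its own 4-bit nibble, and the last line hints at why plain binary addition breaks on BCD values.

```python
def to_bcd(n: int) -> int:
    """Pack a non-negative integer into BCD, one 4-bit nibble per decimal digit."""
    bcd, shift = 0, 0
    while True:
        bcd |= (n % 10) << shift  # store the lowest digit in the next nibble
        n //= 10
        shift += 4
        if n == 0:
            return bcd

def from_bcd(bcd: int) -> int:
    """Unpack a BCD value back into an ordinary integer."""
    n, place = 0, 1
    while bcd:
        n += (bcd & 0xF) * place  # read one 4-bit digit at a time
        bcd >>= 4
        place *= 10
    return n

print(hex(to_bcd(598)))            # 0x598 -- each hex digit is one decimal digit
print(from_bcd(0x598))             # 598
print(hex(to_bcd(9) + to_bcd(1)))  # 0xa, not 0x10 -- plain binary addition
                                   # doesn't carry between decimal digits
```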

Anonymous 0 Comments

Decimal computers were built in the past, but it takes 4 wires to encode 10 states, and those same 4 wires can encode 16 binary states. Getting 60% more values out of however many wires you have is a big efficiency improvement, so binary became the most common system.

Converting binary numbers to decimal is easy, so it rarely matters that the representation inside the computer differs from the output it shows users.
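As a quick sketch of how little work that conversion is, here it is in Python (the function name is just for illustration; the built-ins already do this):

```python
def to_binary_string(n: int) -> str:
    """Convert a non-negative integer to its binary representation by hand."""
    digits = []
    while n:
        digits.append(str(n % 2))  # peel off the lowest bit
        n //= 2
    return "".join(reversed(digits)) or "0"

print(to_binary_string(598))   # 1001010110
print(int("1001010110", 2))    # 598 -- and back again
print(format(598, "b"))        # same answer via the built-in
```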

Anonymous 0 Comments

Computers don’t “understand” anything really, including binary.

They store data using binary, but binary is just a way of representing a number. The quantity 256 is the same thing whether you write it in decimal, in binary, or as a count of bananas.

Computers certainly can work with numbers in other bases, decimal included. In the guts of the program, the programmer tells the computer what type of input to expect and then translates that into an underlying storage type (some number of binary digits, in the simple case). That is the form operations (adding, etc.) are done in. Then, when the programmer wants to show the value to a user, they might display it as a decimal number.
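A rough Python illustration of that round trip (the variable names are invented for the example): decimal text comes in, the math happens on a binary-backed integer, and decimal text goes back out.

```python
user_input = "256"        # arrives from the user as decimal characters
n = int(user_input)       # parsed into an integer, stored in binary internally
total = n + 100           # arithmetic happens on the binary value
print(total)              # rendered back to decimal text: 356
print(format(total, "b")) # the same value shown in binary: 101100100
```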

Anonymous 0 Comments

Digital logic uses 2 states to implement numbers (and functionality as well). Unfortunately, 10 is not a power of 2, so most decimal fractions (0.1, for example) become infinite repeating binary fractions.

Infinite expansions can't be stored, so the value gets rounded to a finite binary fraction. That introduces a rounding error, and depending on the task those errors can add up.
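You can see that rounding in any language with standard floating point; here is a quick Python demonstration (the decimal module line shows one common workaround, base-10 arithmetic done in software at the cost of speed):

```python
# 0.1 and 0.2 have no finite binary representation, so the stored values
# are slightly off, and the error surfaces in a simple sum.
print(0.1 + 0.2)        # 0.30000000000000004
print(0.1 + 0.2 == 0.3) # False

# Base-10 arithmetic in software avoids the binary rounding entirely.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```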

Yes, computers suck at exact fractional arithmetic. They are slightly wrong almost every time, but at least they are fast.

In principle one could implement decimal logic based on 10 different states (e.g., 10 voltage levels), but this is far too complicated for a minuscule benefit.

Anonymous 0 Comments

Not an expert, but basically computers store data in the form of something called bits. These bits have values of 0 or 1, true or false, on or off. (The reason bits have only two values is that they are represented by a voltage difference, usually something like 0 versus 5 volts.) So when computers store numbers, they use a system in which any number can be written with just two digits: the binary system. When we enter a number in the decimal system, the computer converts it into binary and stores it.
Also, technically the computer doesn't understand any number system; it is simply coded to work with certain information in a certain way.

Anonymous 0 Comments

Because the electrical signals on the wires are digital and binary (on or off). If you wanted a decimal signal, you would need 10 distinct voltage levels. Analogue computers did exist in the past, especially for chemical processes, but they were difficult to keep stable: there is noise on the wires and other disturbances.

Anonymous 0 Comments

Computers don’t “understand” anything, but as to why they use binary – it’s easy to wire.

At a physical level, computers represent the 0s and 1s of binary as different states in an electrical circuit (usually *not* just 0 = "off" and 1 = "on", for the record). You could use more states if you were willing to build more complex circuits, but it gets progressively harder and there's little reason to, so we don't.