Why can’t computers calculate decimals as floats properly?


Anonymous

You can, we just have to understand what “properly” means in this context. To represent a larger span of numbers in the same number of bits, you have to sacrifice accuracy. This means that, in a sense, each float represents a *range* of numbers rather than a single number: many different decimal numbers map to the same float representation.
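A small sketch of this in Python (whose floats are 64-bit IEEE 754 doubles): two decimal literals that differ in their written digits can round to the exact same stored float.

```python
# Many decimal strings map to one float: both literals below round to the
# same 64-bit double, because a float stores the *nearest representable*
# binary value, not the decimal you typed.
a = 0.1
b = 0.1000000000000000055511151231257827  # the nearest double to 0.1, written out

print(a == b)       # True: same stored bits
print(f"{a:.20f}")  # shows the stored value is not exactly 0.1
```

Printing `0.1` with extra digits reveals the stored value, which is slightly above one tenth.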

So when you give the computer a number, tell it to represent that number as a float, and then do some arithmetic (or even just try to get your original number back), the answer you get may look “wrong.” But it is not really wrong; it is right according to the agreed-upon rules for how floats are supposed to work. The computer is doing everything properly; you have simply used a format that cannot express the exact answer you want.
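The classic illustration, sketched in Python: each operand is rounded to the nearest double when it is stored, and the sum is rounded again, so the result follows the float rules exactly while differing from the decimal answer.

```python
import math

# Each literal is rounded to the nearest double, then the sum is rounded
# again. The result is correct *under float rules*, not under decimal rules.
x = 0.1 + 0.2
print(x)         # 0.30000000000000004
print(x == 0.3)  # False: 0.3 rounds to a different double

# The usual workaround: compare with a tolerance instead of exact equality.
print(math.isclose(x, 0.3))  # True
```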

When you choose to use floats, you must accept that you will lose some accuracy in whatever operations you perform, and either account for that (for example, by comparing with a tolerance rather than with exact equality) or use a different number format.
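As a sketch of what “a different number format” can mean, Python ships two exact alternatives in its standard library: `decimal.Decimal` for base-10 arithmetic and `fractions.Fraction` for exact ratios. Both trade speed and memory for exactness.

```python
from decimal import Decimal
from fractions import Fraction

# Decimal does exact base-10 arithmetic, so 0.1 + 0.2 really is 0.3.
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Fraction is exact for any ratio of integers.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Note that `Decimal` should be constructed from strings, not floats: `Decimal(0.1)` would inherit the float's rounding error.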
