Why isn’t 0.1 + 0.2 == 0.3 in most programming languages?


0.1 + 0.2 == 0.3 evaluates to false in most programming languages because the result comes out as 0.30000000000000004 rather than exactly 0.3. What is the reason behind this?
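For example, in a Python session:

    >>> 0.1 + 0.2 == 0.3
    False
    >>> 0.1 + 0.2
    0.30000000000000004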



Anonymous

Computers do not work in the decimal system; they work in binary. What you ask for looks perfectly fine in decimal: 0.1 + 0.2 is just 1/10 + 2/10, whole numbers over a power of ten, which is easy to calculate. But try to write 0.1 as a whole number over a power of two and it does not quite work. You might start with 1.6/16, but 1.6 is not a whole number. 3.2/32 does not work either. You can go up to 102.4/1024 and it still does not work. You may even spot the pattern and figure out that it never works: the denominator 10 has a factor of 5, and no power of 2 can absorb it.

At some point the computer just gives up as it runs out of digits, and it has to round the number to a fixed number of binary digits. So you end up with rounding errors in your calculations.
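You can see the rounding directly in Python: passing a float to decimal.Decimal prints the exact binary fraction that the float actually stores. A minimal sketch:

    from decimal import Decimal

    # Decimal(float) shows the exact value the 64-bit binary double stores:
    # the nearest fraction whose denominator is a power of two.
    print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
    print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125
    print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875

    # The small rounding errors in 0.1 and 0.2 add up, so the sum lands on a
    # different double than the one the literal 0.3 rounds to:
    print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125

Both 0.1 and 0.2 get rounded slightly up, so their sum overshoots, while the literal 0.3 gets rounded slightly down; the two sides of the comparison end up as two different doubles, and the equality test fails.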
