Why isn’t 0.1 + 0.2 == 0.3 in most programming languages?


0.1 + 0.2 == 0.3 evaluates to false in most programming languages because the result comes out as something like 0.30000000000000004. What’s the reason behind this?

In: Engineering

5 Answers

Anonymous

The problem is that tenths cannot be stored exactly in a binary floating point representation. Most decimal fractions therefore carry a tiny rounding error, which normally isn’t shown when the number is printed. Typing 0.1 doesn’t give you a value that is *precisely* 0.1, and the same goes for 0.2 and 0.3: each literal is rounded independently to the nearest value the format can hold. Those rounding errors don’t cancel out when you add, so the stored 0.1 plus the stored 0.2 (rounded once more after the addition) lands just above the stored 0.3, and the exact comparison fails.
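You can see the hidden error by asking for more digits than the default printout shows. A minimal sketch in Python (any language using IEEE 754 doubles stores the same values):

```python
# Printing with extra digits reveals the rounding error hidden in each literal.
print(f"{0.1:.20f}")        # 0.10000000000000000555
print(f"{0.2:.20f}")        # 0.20000000000000001110
print(f"{0.3:.20f}")        # 0.29999999999999998889

# The stored 0.1 and 0.2 are both slightly too big, and their sum rounds to a
# value just above the stored 0.3, so the exact comparison fails.
print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441
print(0.1 + 0.2 == 0.3)     # False
```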

If you want exactness, use integers (e.g., count in cents rather than dollars) or a decimal type designed to sidestep the problem. Note that even languages aimed at numerical work, like FORTRAN or MATLAB, use binary floating point under the hood, so the usual practice there is to compare values within a small tolerance rather than testing for exact equality.
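A rough sketch of those workarounds in Python (the cent amounts are made up purely for illustration):

```python
from decimal import Decimal
import math

# Option 1: scaled integers, e.g. work in whole cents instead of dollars.
a_cents, b_cents = 10, 20                  # 0.10 and 0.20, stored exactly
print(a_cents + b_cents == 30)             # True

# Option 2: a decimal type that stores base-10 fractions exactly.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Option 3: keep binary floats but compare within a small tolerance,
# which is the usual practice in numerical code.
print(math.isclose(0.1 + 0.2, 0.3))        # True
```

Which option fits depends on the job: exact decimal types are slower but right for money, while tolerance comparisons are the norm in scientific code where binary floats are fine.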
