The problem is that tenths cannot be represented exactly in binary floating point. Every decimal fraction like this carries a small rounding error that normally isn't displayed: typing 0.1 doesn't yield a value that is *precisely* 0.1, and the same goes for 0.2 and 0.3. The error also doesn't scale linearly with the size of the number: the error in 0.2 is not twice as large as the error in 0.1 just because 0.2 is twice the size of 0.1.
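A quick sketch in Python makes the hidden error visible: printing the values with more digits than the default display shows that none of the three numbers is stored exactly, and that the classic comparison fails.

```python
# The stored value of 0.1 is the nearest binary fraction, not exactly 0.1.
# Printing with extra decimal places reveals the hidden rounding error.
print(f"{0.1:.20f}")  # 0.10000000000000000555
print(f"{0.2:.20f}")  # 0.20000000000000001110
print(f"{0.3:.20f}")  # 0.29999999999999998890

# The per-number errors don't cancel out, so the obvious check fails:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```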
If you want exact results, you have to use integers, or a decimal arithmetic type that sidesteps the problem. Alternatively, use a language that's actually intended for numerical work, like FORTRAN or MATLAB or whatever.
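Both workarounds can be sketched in Python: the standard-library `decimal` module does exact base-10 arithmetic (as long as you construct values from strings, not floats), and scaling to integers, e.g. counting cents instead of dollars, avoids fractions entirely.

```python
from decimal import Decimal

# Exact decimal arithmetic: build values from strings so no
# binary rounding ever happens.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Or scale to integers: 0.10 + 0.20 dollars becomes 10 + 20 cents,
# which integers store exactly.
total_cents = 10 + 20
print(total_cents)  # 30
```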