Why isn’t 0.1+0.2 == 0.3 in most programming languages?


0.1+0.2 == 0.3 evaluates to false in most programming languages because the result of 0.1+0.2 is something like 0.30000000000000004. What’s the reason behind this?


Because programming languages are giving you an approximation of decimal numbers. (There’s a piece of hardware called a [Floating Point Unit](https://en.wikipedia.org/wiki/Floating-point_unit) in most systems that *also* has these limitations, so in many ways the language is just lazily giving you access to that; but the real question is “what sparked the design of this unit anyway? It seems a bit of an odd way to design something.”)
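As a quick demonstration (a Python sketch, but any language using standard IEEE 754 doubles behaves the same way):

```python
# Both 0.1 and 0.2 are stored as the nearest representable binary
# fraction, so the sum picks up a tiny error.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
```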

Floating point numbers span a huge range, at the expense of accuracy.

They take approximately the form:

x * 2^y

similar to “scientific notation”, something you may remember from your school days:

x * 10^y

The other limitation is that you have a finite number of places for each number, let’s say 8ish (in real life, most of a floating point number is devoted to `x` [called the ‘mantissa’] rather than the exponent). Some numbers can’t be represented this way. For base 10, consider 1/3. You can’t actually represent it exactly in scientific notation with a limited number of places; you’re left with:

33333333 * 10^-8

but when you perform operations on that, it won’t come out as you expect: 1/3 + 1/3 + 1/3 = .99999999
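You can reproduce that base-10 analogy with Python’s `decimal` module by limiting it to 8 significant digits (just an illustration of the rounding, not how floats actually work):

```python
from decimal import Decimal, getcontext

getcontext().prec = 8          # keep only 8 significant digits, like the example above
third = Decimal(1) / Decimal(3)
print(third)                   # 0.33333333
print(third + third + third)   # 0.99999999 -- not 1
```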

Now floating point is a little different from that; it’s:

(1.0 + M) * 2^(E-127)

(in addition there’s a sign bit that can be thought of as “is negative”)

.1 (or 1/10) is like 1/3 in floating point. It’s one of those numbers that exists logically but can’t be represented exactly. Like .3333… in decimal, 0.1 in binary is 0.0**0011**… (the bold bit repeats, just as the threes did). This error in representation flows through all your operations.
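You can see the rounded-off repeating pattern by pulling the bit fields apart yourself. Here’s a sketch for the 32-bit format described above (1 sign bit, 8 exponent bits, 23 mantissa bits, which is the standard IEEE 754 single-precision layout):

```python
import struct

# Reinterpret 0.1 as a 32-bit IEEE 754 float and split out the fields.
bits = struct.unpack(">I", struct.pack(">f", 0.1))[0]
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF

print(f"sign={sign}  E-127={exponent - 127}  mantissa={mantissa:023b}")
# The mantissa prints 10011001100110011001101: the repeating 1001 1001...
# pattern, with the last bit rounded up because we ran out of room.

# Reassemble (1.0 + M) * 2^(E-127) to see what is actually stored:
print((1 + mantissa / 2**23) * 2.0 ** (exponent - 127))  # 0.10000000149011612
```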

However, you’re trading these small inaccuracies for a much larger range of numbers. If you naively used your 32 bits as fixed point, 16 bits before the decimal point and 16 bits after, you’d get about **4.8 decimal digits** in each direction (and that’s *unsigned*). Going floating point, the largest value you can represent is about 3.4 × 10^38, so roughly **38 digits** of range.

So, basically the idea is that instead of being able to represent up to about 65536, you can represent up to 340282350000000000000000000000000000000, just not very accurately.
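To put rough numbers on that tradeoff, here’s a small sketch comparing a hypothetical 16.16 fixed-point layout with the largest finite 32-bit float (both use exactly 32 bits):

```python
# A 16.16 fixed-point number is just an integer scaled by 2^-16, so its
# largest value is a little under 65536 and its step size never changes.
fixed_max  = (2**32 - 1) / 2**16      # 65535.99998474121
fixed_step = 2**-16                   # ~0.0000153, the same everywhere

# Largest finite IEEE 754 single-precision value: (2 - 2^-23) * 2^127.
float_max = (2 - 2**-23) * 2.0**127   # 3.4028234663852886e+38

print(fixed_max, fixed_step)
print(float_max)
```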

Most languages have Fixed, Decimal, or Money types to represent decimal values exactly in situations where these small inaccuracies will ruin your day.
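In Python, for example, that’s the `decimal` module (other languages have their own equivalents, such as `BigDecimal` in Java or SQL’s `DECIMAL` column type):

```python
from decimal import Decimal

# Built from strings, these are exact base-10 values, so the comparison
# that started this whole question comes out the way you'd expect.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                   # False (binary floats)
```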
