Eli5: Programmers, why does “0.15 == 0.150000000000000001” evaluate to True after 15 zeros?

5 Answers

Anonymous 0 Comments

There are a few different ways that computers store numbers. One very common way is called “floating point”. It’s used in most calculators, in things like Excel spreadsheets, and in lots of other places.

Floating point has some really big advantages over other ways of storing numbers. It can store huge numbers – greater than the number of atoms in the universe – and tiny numbers – smaller than the size of a subatomic particle. Custom hardware allows for really quick calculations with floating point. Supercomputers use it. You might have heard of supercomputers having a certain number of “megaflops”. That stands for “million floating point operations per second”.
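
For a sense of scale, here’s a quick sketch in Python (assuming its built-in float, which on most systems is a 64-bit IEEE 754 double):

>>> import sys
>>> sys.float_info.max   # largest finite float, roughly 1.8 x 10^308
1.7976931348623157e+308
>>> sys.float_info.min   # smallest positive normal float, roughly 2.2 x 10^-308
2.2250738585072014e-308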

But floating point has one big drawback: precision. It cannot store numbers precisely, because each number is limited to a fixed number of bytes. In your example, the number 0.150000000000000001 is represented in 32-bit floating point as

00111110000110011001100110011010

And the number 0.15 is represented as

00111110000110011001100110011010

They are the same. Once the numbers have been converted into the floating point format, the computer has no way to tell whether the original was 0.150000000000000001 or 0.15 or any other number very close to 0.15.
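
If you want to check this yourself, here’s a minimal sketch in Python that packs both values into the 32-bit float format with the standard `struct` module and prints the bit pattern (the same single-precision layout shown above):

>>> import struct
>>> def bits32(x):
...     # pack as a 32-bit float, then show the raw bit pattern
...     return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")
...
>>> bits32(0.15)
'00111110000110011001100110011010'
>>> bits32(0.150000000000000001)
'00111110000110011001100110011010'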

I wish I could use that meme of Jenna Fischer with two pictures saying “head office needs you to find the difference between these two numbers” … “They’re the same number”.

Edit: I did it. Possibly the worst meme ever created: https://i.imgflip.com/6iy40i.jpg

Anonymous 0 Comments

When creating software, you have two choices:

– (a) Track decimal numbers to limited accuracy. Pros: Very fast due to hardware support. Always uses a very small, pre-defined amount of memory (8 bytes, 4 bytes, or if you’re an ML hipster, 2 bytes). Good enough for most purposes. Cons: Sometimes new programmers don’t understand, or experienced programmers forget, that the roundoff is a thing. They’re surprised when 0.15 == 0.150000000000000001. Sometimes roundoff causes fundamental issues in real programs (for example, fractal explorer programs tend to break if you zoom in really far), but usually you see more minor issues that you can work around (for example, instead of checking “is x = 5” you check “is x between 4.999999 and 5.000001”, because roundoff might have caused it to be slightly off). A sketch of that kind of tolerance check follows this list.
– (b) Track decimal numbers exactly. Pros: No unexpected behavior or information loss. Cons: Could use an unlimited amount of memory — someone could write 15.000000000000 … (1 billion zeros) … 00000001 and you’d need many millions of bytes to store that single number. Calculations need a bunch of loops and checking to deal with numbers of different sizes, which is slow. It’s unclear how to handle square roots or other calculations that have an unlimited number of decimals (typically in addition to square roots, programmers expect general purpose languages to have built-in support for trigonometry, logarithms, and exponentials).
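
Here’s a minimal sketch of that tolerance-style check from option (a), using `math.isclose` from Python’s standard library (the 0.1 + 0.05 value is just an illustration of roundoff, not from the question):

>>> import math
>>> x = 0.1 + 0.05         # roundoff makes this slightly more than 0.15
>>> x == 0.15
False
>>> math.isclose(x, 0.15)  # "is it within a small tolerance?" instead of exact equality
True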

Most programming languages make option (a) the default. Some programming languages also allow option (b), for example via Java’s `BigDecimal` or Python’s `decimal.Decimal`.

Here’s what happens when I try your code in Python:

>>> 0.15 == 0.150000000000000001
True

And using the `decimal` library:

>>> from decimal import Decimal
>>> Decimal("0.15") == Decimal("0.150000000000000001")
False

Anonymous 0 Comments

Short answer: It’s because numbers are stored in binary on computers.

Long answer: decimal numbers are represented by adding up powers of 10. 0.15 is actually 1/10 + 5/100. But there are some numbers that can’t be accurately represented that way. Have you ever done 1/3 on a calculator and seen 0.3333333333…? It goes on forever because it’s not possible to exactly represent 1/3 as a sum of powers of 10 (technically powers of 1/10). It’s 3/10 + 3/100 + 3/1000 + 3/10000 … forever.

Now back to 0.15. We can represent that exactly with powers of 1/10, but computers have to store it in powers of 1/2 because they work in binary. You can come close… 0.15 = 1/8 + 1/64 + 1/128 + …, but it can’t be represented exactly: 0.15 is the base-2 equivalent of a repeating decimal. So the computer stores it as close as it can, and when you convert it back to a readable base-10 format, you get a little bit left over because of the loss of precision.
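
You can actually see that stored value and the leftover in Python; `Decimal` and `Fraction` from the standard library can display exactly what the computer keeps for 0.15 (a sketch, assuming Python’s usual 64-bit float):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Decimal(0.15)    # the exact value the computer actually stores for 0.15
Decimal('0.1499999999999999944488848768742172978818416595458984375')
>>> Fraction(0.15)   # the same stored value as an exact ratio (the denominator is 2**55)
Fraction(5404319552844595, 36028797018963968)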

Anonymous 0 Comments

To represent a number such as “0.150000000000000001”, you need to use a data format known as “floating point”. Without getting into too many technical details, the floating point format lets you represent very small and very large numbers at the cost of accuracy. In a sense, a floating point number doesn’t represent a specific number, but a range of numbers.

When you take a number like 0.150000000000000001 and convert it to floating point format, the limitations on precision essentially round it down to 0.15.

EDIT:

As an addendum, it is not technically accurate to say the computer rounds down to 0.15. Rather, both 0.150000000000000001 and 0.15 become the same number in floating point format. According to [this calculator](https://www.h-schmidt.net/FloatConverter/IEEE754.html), they both become 0.1500000059604644775390625 in 32-bit floating point.
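
You can reproduce that calculator’s number in Python by round-tripping 0.15 through the 32-bit float format with the standard `struct` module and then printing the exact stored value (a sketch, not the only way to do it):

>>> import struct
>>> from decimal import Decimal
>>> as_float32 = struct.unpack("f", struct.pack("f", 0.15))[0]  # squeeze 0.15 into 32 bits
>>> Decimal(as_float32)                                         # exact value of what was stored
Decimal('0.1500000059604644775390625')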

Anonymous 0 Comments

It has to do with number precision – how many digits the particular language and data type you’re using can store and use in comparisons. In some languages and data types, it can be as few as about 7 significant digits (the SQL float data type, for example), so this would also evaluate as true:

0.15 == 0.15000001

Anything beyond the last digit of precision is lost. Thus, even though you’re typing the above, the computer effectively sees something like this:

0.15 == 0.1500000

And those are obviously the same, so the result is True.
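
If you want to see this outside of SQL, here’s a rough Python sketch that forces the values through 32-bit (“single”) precision, which carries about 7 significant digits, using the standard `struct` module:

>>> import struct
>>> def to_float32(x):
...     # round-trip through the 32-bit float format to throw away extra precision
...     return struct.unpack("f", struct.pack("f", x))[0]
...
>>> to_float32(0.15) == to_float32(0.15000001)
True
>>> to_float32(0.15) == to_float32(0.151)   # a difference big enough to survive
False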