Why is floating point called floating point?



I tried to Google that but it didn’t help, so please be patient with me. I found this:

“The term floating point is derived from the fact that there is no fixed number of digits before and after the decimal point; that is, the decimal point can float. There are also representations in which the number of digits before and after the decimal point is set, called fixed-point representations. In general, floating-point representations are slower and less accurate than fixed-point representations, but they can handle a larger range of numbers.”

That doesn’t make sense to me. The decimal point stays where it is. What am I missing here?


Say a floating point number can hold 5 digits, so this number is valid:

12345.

But so is this:

1.2345

The point moves around.

Compare it to fixed point to understand it.

Say you have 5 digits, point fixed at 3 digits.
You could have numbers like 19.323, 27.999, 55.001 … and so on: anything between 0.001 and 99.999.

With floating point, you can have:
98765. or 9874.9 or 0.00001
Now you can make numbers in a much larger range! You just have to additionally store where the point is.
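The two schemes above can be sketched in a few lines of Python (the function names are made up for illustration; this assumes the 5 digits are stored as a plain integer):

```python
# Fixed point: 5 digits with the point fixed after the first two,
# so a stored integer 0..99999 always means digits / 1000.
def fixed_value(digits):
    return digits / 1000

# Floating point: the same 5 digits, plus a separately stored
# shift that says where the point goes.
def floating_value(digits, shift):
    return digits * 10 ** shift if shift >= 0 else digits / 10 ** -shift

fixed_value(19323)        # 19.323 -- but the range stops at 99.999
floating_value(98765, 0)  # 98765
floating_value(1, -5)     # 0.00001 -- far outside the fixed range
```

The only extra cost of the floating version is storing `shift` alongside the digits.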

Suppose you wanted to represent the numbers 20 and 0.2 using only 4 digits. You could do it this way, keeping the decimal point in the middle:

20.00 and 00.20
But if you wanted to represent 20,000 or 0.00000002, you’d be in trouble. Floating point solves that by using one digit to represent how far you have to shift the decimal point to get the actual number. So

20 is 2.00 (+1)

0.2 is 2.00 (-1)

20,000 is 2.00 (+4)

etc. The system above would let you represent any number from 0.000000001 to 9,990,000,000 with good accuracy using just four digits. This is all built on the mathematics of exponents, and in a real computer it covers a much wider range and happens in binary rather than decimal, but this is an ELI5 explanation.
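The shift trick above can be sketched in Python; `decode` and `encode` are hypothetical names (the real thing happens in binary, in hardware):

```python
import math

# value = mantissa * 10**shift, as in "2.00 (+1)" above:
# three significant digits plus one signed exponent digit.
def decode(mantissa, shift):
    return mantissa * 10 ** shift if shift >= 0 else mantissa / 10 ** -shift

def encode(x):
    shift = math.floor(math.log10(abs(x)))  # how far the point has to move
    mantissa = round(x / 10 ** shift, 2)    # keep 3 significant digits
    return mantissa, shift

decode(2.00, 1)   # 20.0
decode(2.00, -1)  # 0.2
encode(20000)     # (2.0, 4)
```

With the shift running from -9 to +9, this covers roughly 1.00 * 10^-9 up to 9.99 * 10^9, matching the range above.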

2357.0 can be written as

* 2357.0
* 235.7*10^1
* 23.57*10^2
* 2.357*10^3

So by using exponential notation you can write the same number multiple ways, and the decimal point moves.
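Python's `e` format shows the same idea: whatever the input, it normalizes to exactly one digit before the point and moves the exponent to match.

```python
# Scientific ("e") formatting always normalizes to d.dddddd * 10^nn.
format(2357.0, 'e')   # '2.357000e+03'
format(235.7, 'e')    # '2.357000e+02'
format(0.02357, 'e')  # '2.357000e-02'
```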

The floating-point numbers computers use are binary, but you can do it in decimal too.

Let's put a single digit before the decimal point and call it a; a can be negative too. There are seven digits after the decimal point, called ddddddd. The two-digit exponent of 10 is called nn, with a value between -38 and +38.

The limits are picked so the result is close to the limits of a single-precision floating-point number, which requires 32 bits = 4 bytes to store.



So the format is now

a.ddddddd * 10^nn. So we always use 10 digits to store a number.

10 is then 1.0000000*10^1

-10 is then -1.0000000*10^1

0.1 = 1.0000000*10^-1

2357= 2.3570000*10^3

1,234,567,890 is 1.2345679*10^9. That is not exactly the same number, because you would need 8 decimals; the last stored 8 was rounded up to 9 because of the following 9.
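Python's `decimal` module can act out this format: a context with `prec=8` keeps 8 significant digits (the a plus the seven d's), so it rounds 1,234,567,890 exactly as described. (This is a decimal stand-in for the format above; the exponent limits are left at the module's defaults.)

```python
from decimal import Context

ctx = Context(prec=8)  # 1 digit before the point + 7 after
x = ctx.create_decimal(1234567890)
str(x)  # '1.2345679E+9' -- the 8th decimal forced a round-up
```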

You can have very large number like

1.0000000*10^38 =100000000000000000000000000000000000000

1.0000000*10^-38 =0.00000000000000000000000000000000000001

So it is called floating point because where the decimal point sits floats around, controlled by the 10^nn part. The format makes it possible to store very large numbers, and numbers very close to zero, in a fixed size.


The fixed size makes storing the number in the computer quite simple. The hardware that does calculation is also relatively simple and fast. The drawback is that you have limited precision.

So 1.2345679*10^9 + 1 = 1.2345679*10^9. You need to add 100 to see a change: 1.2345679*10^9 + 100 = 1.2345680*10^9.
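The same 8-significant-digit `decimal` context shows the vanishing +1 (again a sketch of the decimal format above, not of real binary floats):

```python
from decimal import Decimal, Context

ctx = Context(prec=8)
x = ctx.create_decimal(1234567890)  # Decimal('1.2345679E+9')
ctx.add(x, Decimal(1))              # Decimal('1.2345679E+9') -- the +1 vanished
ctx.add(x, Decimal(100))            # Decimal('1.2345680E+9') -- now it shows
```

Adding 1 changes a digit the format has no room to keep, so the sum rounds right back to where it started.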

So floating point is a compromise between program complexity, calculation speed, and precision. It is very useful if you know what you are doing, but if you do not, calculations can have unexpected results.

If I asked you the speed of light, are you more likely to say 300,000,000 m/s or 3.0 * 10^8 m/s? Probably the latter.

With floating point numbers, the numbers are actually represented in scientific notation. This allows a floating point number to be quite large, representing incredibly large values, or very tiny, representing very small fractions.

When we write something in scientific notation, we move the decimal point. This is the floating point that we talk about.

Computers can only hold a certain number of digits in a fixed space. Computers use binary, but we can illustrate with base-10.

If I have space for 6 digits, I can store any number from 0-999999. But I want to store fractional numbers. I can decide to put a decimal point in an arbitrary place. So if I want to store numbers accurate to 1/100, I can add a decimal in a *fixed* place. And have numbers from 0.00 to 9999.99.

This works well in specific cases, but we lose the ability to store larger numbers. So instead we take the first digit and decide that it represents the position of the decimal point. We can then store any number from 0-99999 (only 5 digits) and shift it left or right depending on that extra digit. This gives us a range from 0.0099999 to 9999900.0: a much larger range, with a small loss of precision. Often this is a trade-off we're happy to make.
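A sketch of this 6-digit scheme, assuming the position digit is stored with a bias of 7 (the bias value is my assumption, picked so the range matches the 0.0099999-9999900.0 figures above; real binary formats use a biased exponent the same way):

```python
BIAS = 7  # assumed: stored digit 0..9 means a shift of -7..+2

def decode(position_digit, mantissa):
    """position_digit: 0-9, mantissa: 0-99999."""
    shift = position_digit - BIAS
    return mantissa * 10 ** shift if shift >= 0 else mantissa / 10 ** -shift

decode(9, 99999)  # 9999900 -- the largest value
decode(0, 99999)  # 0.0099999 -- the smallest full-precision value
```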