Why is floating point called floating point?


I tried to Google that but it didn’t help, so please be patient with me. I found this:

“The term floating point is derived from the fact that there is no fixed number of digits before and after the decimal point; that is, the decimal point can float. There are also representations in which the number of digits before and after the decimal point is set, called fixed-point representations. In general, floating-point representations are slower and less accurate than fixed-point representations, but they can handle a larger range of numbers.”

That doesn’t make sense to me. The decimal point stays where it is. What am I missing here?


6 Answers

Anonymous

Computers can only hold a certain number of digits in a fixed amount of space. They actually use binary, but we can illustrate the idea with base-10.

If I have space for 6 digits, I can store any whole number from 0-999999. But suppose I want to store fractional numbers. I can decide to put a decimal point in an arbitrary but *fixed* place: if I want numbers accurate to 1/100, I put it two places from the right and can store anything from 0.00 to 9999.99.
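To make that concrete, here is a minimal fixed-point sketch in Python. The function names (encode_fixed, decode_fixed) are made up for illustration; the point is just that the stored value is a plain integer count of hundredths, and the decimal point always goes back in the same fixed place.

```python
def encode_fixed(value: float) -> int:
    """Store a value as an integer count of hundredths (0.00 to 9999.99)."""
    scaled = round(value * 100)
    if not 0 <= scaled <= 999999:
        raise ValueError("value out of range for 6 fixed-point digits")
    return scaled

def decode_fixed(stored: int) -> float:
    """Recover the value by putting the decimal point back in its fixed spot."""
    return stored / 100

print(decode_fixed(encode_fixed(1234.56)))  # 1234.56
print(decode_fixed(encode_fixed(0.07)))     # 0.07
```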

This works well in specific cases, but we lose the ability to store larger numbers. So instead, we take the first digit and decide that it represents the position of the decimal point. We can then store any number from 0-99999 (only 5 digits) and shift its decimal point left or right depending on that first digit. Now 99999 with the maximum downward shift is 0.0099999, and with the maximum upward shift it is 9999900.0, so a much larger range can be represented at the cost of a small loss of precision. Often that's a trade-off we're happy to make. The decimal point is no longer fixed in one spot; it "floats" to wherever the exponent digit puts it, which is where the name comes from.
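Here is a toy version of that floating-point scheme, again in Python with made-up function names, assuming the leading digit is an exponent biased by 7 so that it reproduces the 0.0099999 and 9999900.0 endpoints above:

```python
BIAS = 7  # exponent digit 0 means x10^-7, digit 9 means x10^+2

def encode_float(exponent_digit: int, significand: int) -> int:
    """Pack the parts into one 6-digit number, e.g. (9, 99999) -> 999999."""
    assert 0 <= exponent_digit <= 9 and 0 <= significand <= 99999
    return exponent_digit * 100000 + significand

def decode_float(stored: int) -> float:
    """Split off the exponent digit and shift the decimal point accordingly."""
    exponent_digit, significand = divmod(stored, 100000)
    return significand * 10.0 ** (exponent_digit - BIAS)

print(decode_float(encode_float(0, 99999)))  # about 0.0099999 (max downward shift)
print(decode_float(encode_float(9, 99999)))  # 9999900.0       (max upward shift)
print(decode_float(encode_float(7, 12345)))  # 12345.0         (no shift)
```

Real hardware does the same thing in binary (a sign bit, an exponent field, and a significand), but the idea is identical: one part of the stored number says where the point goes.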
