I tried to Google that, but it didn’t help, so please be patient with me. I found this definition:
“The term floating point is derived from the fact that there is no fixed number of digits before and after the decimal point; that is, the decimal point can float. There are also representations in which the number of digits before and after the decimal point is set, called fixed-point representations. In general, floating-point representations are slower and less accurate than fixed-point representations, but they can handle a larger range of numbers.”
That doesn’t make sense to me. The decimal point stays where it is. What am I missing here?
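To make my confusion concrete, here is a small Python sketch (my own attempt at illustrating what the quote seems to describe, so it may miss the point): the same digit string with different exponents puts the decimal point in different positions.

```python
# The digit string "6022" is the same in every value below; only the
# exponent changes, which moves ("floats") where the decimal point sits.
same_digits = [6.022e-2, 6.022e0, 6.022e3]
for v in same_digits:
    print(v)  # 0.06022, then 6.022, then 6022.0

# Scientific notation makes the digits/exponent split explicit.
print(f"{6022:.3e}")  # 6.022e+03
```

Is this the idea, that the stored digits are fixed but the exponent decides where the point goes?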