It’s an arbitrary rule. Rounding is a pretty fundamental operation to learn, but the ‘5’ is the one ambiguous case, since it sits exactly halfway between the two options, so we just pick a rule and that’s that.
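For what it’s worth, different systems pick different conventions, which shows how arbitrary the choice really is. A quick Python sketch (these behaviors are standard in Python 3, where the built-in `round()` uses “round half to even” rather than the grade-school rule):

```python
from decimal import Decimal, ROUND_HALF_UP

# Python 3's built-in round() uses "round half to even" (banker's rounding),
# so ties go to the nearest even integer:
print(round(6.5), round(7.5))  # 6 8

# The grade-school "round half up" rule is available via the decimal module:
print(Decimal("6.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 7
```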
However, one would probably want to look at the first derivative of the numbers (where possible) to decide whether to round up or down, i.e. if the number was trending up, round up; if it was trending down, round down.
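That trend-based rule isn’t a standard one, but here’s a minimal sketch of the idea in Python; the function name and the single-previous-reading interface are invented for illustration:

```python
import math

def round_with_trend(value: float, previous: float) -> int:
    """Round to the nearest integer, breaking exact .5 ties in the
    direction the reading is moving (a sketch of the idea above)."""
    lower = math.floor(value)
    frac = value - lower
    if frac > 0.5:
        return lower + 1
    if frac < 0.5:
        return lower
    # Exact tie: use the sign of the first difference as the tiebreaker.
    return lower + 1 if value >= previous else lower

print(round_with_trend(6.5, 6.2))  # 7 -- trending up, so round up
print(round_with_trend(6.5, 6.8))  # 6 -- trending down, so round down
```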
When math is applied in real-world applications, the numbers involved usually come from some kind of measurement or observation, so it’s assumed that there was some loss of precision. For example, a temperature measurement of 6.32 might actually be 6.32182374, but the instrument that measured it can only resolve 2 decimal digits. Your question, I think, is why a number like 6.5 is always rounded up when it’s in the exact middle between 6 and 7. It’s only in the exact middle if it’s **exactly** 6.5. Even 6.50000000000001 is closer to 7 than to 6. So because of this loss of precision in real-world numbers, it makes sense to always round up when a number ends with a 5 digit.
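A small illustration of that point (the specific numbers here are made up): suppose the true value is 6.50182374 but the instrument only reports two decimal digits. The reading looks like a perfect tie between 6 and 7, yet the underlying value was already closer to 7, so the round-half-up convention recovers the right integer:

```python
from decimal import Decimal, ROUND_HALF_UP

true_value = Decimal("6.50182374")               # what actually happened
reading = true_value.quantize(Decimal("0.01"))   # 2-digit instrument reports 6.50

# The reading looks like an exact tie, but the true value was strictly
# closer to 7; rounding the tie up picks the right integer:
print(reading.quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 7
print(round(float(true_value)))                                # 7, matching
```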