Why do we round up 0.5000 to 1 instead of rounding down to 0?
Zero is exactly zero, but .5 is *not* zero, so a rounding rule that accounts for it is *more accurate* than one that just drops it (say, rounding 1.205 to 1.21 rather than truncating to 1.20).
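To make that 1.205 example concrete, here's a small Python sketch using the standard `decimal` module (the float `1.205` isn't stored exactly in binary, so `Decimal` strings are used to keep the arithmetic honest):

```python
from decimal import Decimal, ROUND_HALF_UP

# Round 1.205 to two decimal places with the everyday "half rounds up" rule.
price = Decimal("1.205")
rounded = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)  # 1.21
```

Truncating instead would give 1.20, which is off by the full half-cent; rounding up is off by nothing at all here, which is the "more accurate" point above.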
Because .0000 to .4999 rounds down, while .5000 to .9999 rounds up. That way an equal share of values rounds up as rounds down.
.49999…, that is, .49 with the 9 repeating forever, is exactly the same number as .5. Infinity is weird.
.49999… rounds down.
5 rounds up.
We literally split five in half for this.
Granted, in practice .49999… is rare, while .5 is common. But it was as good a reason as any for something that is completely arbitrary but had to be agreed upon in order for the community of mathematicians to work together.
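For anyone skeptical that .49999… really equals .5, the standard algebra argument runs like this:

```latex
\begin{align*}
x   &= 0.4999\ldots \\
10x &= 4.999\ldots \\
10x - x &= 4.999\ldots - 0.4999\ldots = 4.5 \\
9x  &= 4.5 \quad\implies\quad x = 0.5
\end{align*}
```

The repeating tails cancel in the subtraction, leaving an exact value with no infinity left over.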
Actually, we do! Sometimes. It’s a method called “Banker’s Rounding” (also known as round-half-to-even), where a tie at exactly 0.5 is rounded to the nearest *even* digit. Over many values this balances out the rounding better, since ties go down about as often as they go up.
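A quick way to see the balancing effect in Python, whose built-in `round()` happens to use this rule (note this is an illustration of the idea, not a claim about any particular bank's practice):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Python 3's round() already does round-half-to-even: ties go to the even digit.
print(round(0.5), round(1.5), round(2.5))  # 0 2 2

# Over a run of ties, always rounding up drifts high; half-to-even stays balanced.
ties = [Decimal("0.5"), Decimal("1.5"), Decimal("2.5"), Decimal("3.5")]
half_up   = sum(t.quantize(Decimal("1"), rounding=ROUND_HALF_UP)   for t in ties)
half_even = sum(t.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN) for t in ties)
print(half_up)    # 10 -- overshoots; the true sum of the ties is 8
print(half_even)  # 8  -- matches the true sum
```

That systematic drift is exactly why accumulating totals (interest, invoices, statistics) favor the half-to-even convention.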
There has to be a convention, even if it’s arbitrary. We have chosen upwards.
However, it does make some sense. All the other decimals 0.5xxx are greater than a half, so they go upwards. For consistency, 0.5000 ought to as well.