Why do we round up 0.5000 to 1 instead of rounding down to 0?

34 Answers

Anonymous 0 Comments

Zero is net zero, but .5 is *not* zero, so a rounding rule that accounts for it is *more accurate* than one that ignores it (as in the case of, say, 1.205 being rounded to 1.21).

Anonymous 0 Comments

There has to be a convention, even if it’s arbitrary. We have chosen upwards.

However, it does make some sense. All the other decimals 0.5xxx are greater than half, so they go upwards. For consistency, 0.5000 ought to as well.

Anonymous 0 Comments

.49999…, that is, .49 with the 9 repeating forever, is exactly the same as .5. Infinity is weird.

.49999… rounds down.

.5 rounds up.

We literally split .5 in half for this.

Granted, in practice, .49999… is rare, while .5 is common. But it was as good a reason as any for something that is completely arbitrary but had to be agreed upon in order for the community of mathematicians to work together.
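
(If you want to sanity-check that .49999… really equals .5, here is a quick sketch in Python using exact fractions and the usual geometric-series formula; it is only an illustration of the standard argument, nothing specific to this thread.)

```python
from fractions import Fraction

# .49999... is 4/10 plus the geometric series 9/100 + 9/1000 + ...
# That tail sums to (9/100) / (1 - 1/10), so exact fractions can check the claim.
value = Fraction(4, 10) + Fraction(9, 100) / (1 - Fraction(1, 10))
print(value)  # prints 1/2
```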

Anonymous 0 Comments

Because .0000 to .4999 round down, while .5000 to .9999 round up. That way, an equal number of values round each way.
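
(A quick way to see that tally, sketched in Python with the decimal module; the four decimal places just mirror the .0000 to .9999 ranges above.)

```python
from decimal import Decimal, ROUND_HALF_UP

# Round every four-digit fraction .0000 .. .9999 to the nearest whole number,
# with ties going up, and count which direction each one went.
down = up = 0
for k in range(10000):
    x = Decimal(k) / Decimal(10000)  # exact values .0000, .0001, ..., .9999
    if x.quantize(Decimal("1"), rounding=ROUND_HALF_UP) == 0:
        down += 1
    else:
        up += 1
print(down, up)  # prints 5000 5000 -- an equal split
```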

Anonymous 0 Comments

There are many different ways of rounding. The one you mention is just the one most commonly taught in school. https://en.m.wikipedia.org/wiki/Rounding

Anonymous 0 Comments

Actually, we do! Sometimes. It’s a method called “Banker’s Rounding” (if memory serves), where anything ending in .5 is rounded to the nearest even number. This balances out the rounding better.
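
(For what it’s worth, Python 3’s built-in round() uses this round-half-to-even rule, so it is easy to see in action; a minimal sketch with the standard library, not a claim about any other language.)

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Python 3's round() sends ties to the nearest even number ("banker's rounding").
print(round(0.5), round(1.5), round(2.5), round(3.5))  # 0 2 2 4

# The decimal module lets you name the rule explicitly.
for s in ("0.5", "1.5", "2.5", "3.5"):
    print(s, "->", Decimal(s).quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))
```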

Anonymous 0 Comments

There are many rounding methods to choose from, not just up or down. For school and lots of other common usage, the usual way is just to round 0.5 to 1. It is arbitrary, but it is what has been chosen.

The way you were probably taught to round is not actually “half up”, either. What would you round -0.5 to: 0 or -1? If the answer is -1, you are not rounding up, because -1 is less than -0.5. Rounding 0.5 to 1 and -0.5 to -1 is rounding half *away from zero*.

In some situations, always rounding away from zero can be problematic. If you do not have an equal amount of negative and positive numbers, the rounding will, on average, push the result away from zero.

A way to help fix that is rounding half to even. So 2.5 becomes 2 but 3.5 becomes 4. That way, if your values are evenly distributed, the rounding errors average out to zero. It is still not always preferred, because the values you have might not be distributed like that.

The standard for computer floating-point numbers (think decimal numbers) is to round half to even. Floating point uses binary fractions, so it is not exactly decimal halves being rounded, but it is still an example of where always rounding away from zero could introduce error, and half to even is one way to reduce it.
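
(To make that bias argument concrete, here is a small sketch comparing “half away from zero” with “half to even” using Python’s decimal module; the evenly spaced .5 ties are just a toy data set, not a claim about real workloads.)

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# A handful of positive ties: their true total is 8.0.
ties = [Decimal("0.5"), Decimal("1.5"), Decimal("2.5"), Decimal("3.5")]

def rounded_total(rule):
    # Round each tie to a whole number under the given rule, then sum.
    return sum(x.quantize(Decimal("1"), rounding=rule) for x in ties)

print(sum(ties))                       # 8.0 -- exact total
print(rounded_total(ROUND_HALF_UP))    # 10  -- half away from zero drifts upward
print(rounded_total(ROUND_HALF_EVEN))  # 8   -- half to even lands on the true total here
```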

https://en.wikipedia.org/wiki/Rounding
