Why do we round up 0.5000 to 1 instead of rounding down to 0?


34 Answers

Anonymous 0 Comments

Zero is net zero, but .5 is *not* zero, so rounding it upward accounts for that half instead of discarding it. Keeping track of the half is *more accurate* than ignoring it (for example, 1.205 rounds to 1.21 rather than being cut down to 1.20).
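As a small sketch of this answer's 1.205 example, Python's `decimal` module lets you choose the rounding rule explicitly, so you can compare rounding the half up against simply dropping it:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

# Round 1.205 to two decimal places, sending the trailing 5 upward.
rounded = Decimal("1.205").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)    # 1.21

# Truncating instead throws the half away entirely.
truncated = Decimal("1.205").quantize(Decimal("0.01"), rounding=ROUND_DOWN)
print(truncated)  # 1.20
```

`Decimal` is used here because the literal `1.205` has no exact binary float representation, so float-based rounding of it can behave surprisingly.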

Anonymous 0 Comments

There has to be a convention, even if it’s arbitrary. We have chosen upwards.

However, it does make some sense. Every other decimal beginning 0.5 (0.5001, 0.51, and so on) is greater than one half, so it rounds upwards. For consistency, 0.5000 ought to as well.
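Half-up is only one possible convention. As a quick illustration, Python's built-in `round()` actually uses a different one, round-half-to-even ("banker's rounding"), while the `decimal` module can apply the half-up rule this answer describes:

```python
from decimal import Decimal, ROUND_HALF_UP

# Built-in round() breaks ties toward the nearest even integer.
print(round(0.5), round(1.5), round(2.5))  # 0 2 2

# The half-up convention always sends exact halves upward.
for x in ("0.5", "1.5", "2.5"):
    print(Decimal(x).to_integral_value(rounding=ROUND_HALF_UP))  # 1, 2, 3
```

Banker's rounding exists precisely because always rounding halves up introduces a slight upward bias over many values; breaking ties toward even averages that bias out.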
