Why do we round up 0.5000 to 1 instead of rounding down to 0?

Anonymous

There are many rounding methods to choose from, not just up or down. For school and many other everyday uses, the convention is simply to round 0.5 up to 1. It is arbitrary, but it is what has been chosen.

Rounding 0.5 to 1 is not really "rounding half up", though. What would you round -0.5 to: 0 or -1? If the answer is -1, you are not rounding up, because -1 is less than -0.5. Rounding 0.5 to 1 and -0.5 to -1 is "rounding half away from zero".
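As an illustration (a minimal sketch; the helper names here are my own, not standard functions), the two tie-breaking rules can be written out in Python:

```python
import math

def round_half_up(x):
    # Ties go toward positive infinity: 0.5 -> 1, but -0.5 -> 0.
    return math.floor(x + 0.5)

def round_half_away_from_zero(x):
    # Ties go away from zero: 0.5 -> 1 and -0.5 -> -1.
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

print(round_half_up(0.5), round_half_up(-0.5))    # 1 0
print(round_half_away_from_zero(-0.5))            # -1
```

The two rules only disagree on negative ties, which is exactly the point of the question above.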

In some situations, consistently rounding ties away from zero can be a problem. If you do not have an equal mix of negative and positive numbers, the rounding will, on average, shift your results away from zero.
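To see that drift, here is a small check using Python's decimal module, whose ROUND_HALF_UP mode breaks ties away from zero:

```python
from decimal import Decimal, ROUND_HALF_UP

# All-positive ties: rounding every one away from zero inflates the mean.
ties = [Decimal("0.5"), Decimal("1.5"), Decimal("2.5"), Decimal("3.5")]
rounded = [t.quantize(Decimal("1"), rounding=ROUND_HALF_UP) for t in ties]

print(sum(ties) / 4)     # 2    (true mean)
print(sum(rounded) / 4)  # 2.5  (mean after rounding: biased upward)
```

Every tie got pushed up, so the average moved away from zero even though the data was "fair".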

A way to reduce that bias is rounding half to even ("banker's rounding"). So 0.5 becomes 0, 1.5 becomes 2, 2.5 becomes 2, and 3.5 becomes 4. That way, if your numbers are evenly distributed, the rounding errors cancel out on average. It is still not perfect, because the values you have might not be distributed like that.
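Python's built-in round() happens to use round-half-to-even, so the same set of all-positive ties no longer drifts:

```python
# round() breaks ties toward the nearest even integer.
ties = [0.5, 1.5, 2.5, 3.5]
print([round(t) for t in ties])         # [0, 2, 2, 4]
print(sum(round(t) for t in ties) / 4)  # 2.0 -- matches the true mean
```

Half the ties round up and half round down, so the errors cancel.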

The standard for computer floating-point numbers (think decimal numbers) is to round to nearest, ties to even. Since they are binary fractions, the ties are not exactly decimal halves, but it is still an example of a place where always rounding away from zero would introduce error, and rounding to even is one way to reduce it.
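For example, in Python (whose floats are IEEE 754 doubles), a value like 2.675 is not stored exactly, so what looks like a decimal tie is not one in binary:

```python
from decimal import Decimal

# The nearest double to 2.675 is slightly *below* 2.675, so rounding to
# two places gives 2.67, not 2.68 -- there was never a true tie to break.
print(Decimal(2.675))   # 2.6749999999999998...
print(round(2.675, 2))  # 2.67
```

This surprises people who expect "round half up" behavior, but the float was already a little less than 2.675 before round() ever saw it.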

https://en.wikipedia.org/wiki/Rounding
