Why do we cut off the digits beyond the significant figures if they’re more accurate


Hey, when we round to significant figures, why do we completely get rid of the remaining decimals even though they have more accurate information?

Ex. 1.23 × 4.84 = 5.9532, but we would make it 5.95 based on sig figs, even though those last two decimals are closer to the exact answer. Why is this? I know it’s less accurate, but it seems like we’re losing valuable accuracy by dropping those digits (even if they’re not perfect, keeping them should be closer).
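For concreteness, here is a minimal sketch of that rounding step in Python (the language is my choice, and the helper round_sig is made up for illustration, not a standard function):

```python
from math import floor, log10

def round_sig(x, sig=3):
    # Hypothetical helper: round x to `sig` significant figures
    # by placing the rounding position relative to the leading digit.
    return round(x, sig - 1 - floor(log10(abs(x))))

product = 1.23 * 4.84           # the example from the question, about 5.9532
print(product)                  # ~5.9532 (floating point may show a tiny tail)
print(round_sig(product, 3))    # 5.95, the 3-sig-fig result
```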

In: Mathematics

17 Answers

Anonymous 0 Comments

Let’s say I have a scale. The smallest thing it can measure is 0.01 grams. So, if I put one gizmo on the scale and the scale says 1.23 grams, I don’t really know anything about the digits beyond the 3. Maybe the true weight of the gizmo is 1.234 grams, but the scale isn’t accurate enough, so it just says 1.23 grams.

So, when I multiply the 1.23 by 4.84, I get 5.9532 grams. If I use that number, it looks like I know the weight down to 0.0001 grams. But I don’t! Maybe the true weight was 1.234, so when I multiplied it, the true value should have been 5.97256, not 5.9532. The first digit, 5, is correct; the tenths digit, 9, is correct; the hundredths digit, 5, is not quite correct; and the remaining digits, 32, are total nonsense. If I leave them in, it looks like I know the weight down to the ten-thousandth of a gram, when I don’t even know the hundredth of a gram exactly. So we drop them and keep only the digits we are at least somewhat confident about.
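Here is a minimal sketch of that reasoning, assuming Python and a scale that rounds readings to the nearest 0.01 g (the bracket 1.225 to 1.235 is my own illustration, not something stated in the answer):

```python
reading = 1.23             # what the scale shows
low, high = 1.225, 1.235   # plausible true weights hiding behind a "1.23" reading
factor = 4.84

print(reading * factor)    # ~5.9532, which looks precise to 0.0001 g
print(low * factor)        # ~5.929, but the true product could be this low...
print(high * factor)       # ~5.9774, ...or this high
# All three results agree on 5.9, start to disagree in the hundredths place,
# and share nothing beyond that, so the trailing digits carry no real information.
```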
