Why do we cut off significant figures if they’re more accurate


Hey, when we round to significant figures, why do we completely get rid of the remaining decimals even though they have more accurate information?

Ex. 1.23 * 4.84 = 5.9532, but we would make it 5.95 based on sig figs, even though those last two decimals are closer to the answer. Why is this? I know it's less accurate, but it seems like we're losing valuable accuracy (even if it's not perfect, it should be closer).

In: Mathematics

17 Answers

Anonymous 0 Comments

The question is what 1.23 and 4.84 mean. Is each an exact value, so 1.23 = 1.230000000000000…, or is it a measurement where the instrument only shows 2 decimals?

A 2-decimal measurement would have an accuracy of ±0.005, because the real value might be any number between 1.225 and 1.235.

4.84 could be anywhere between 4.835 and 4.845.

If you measure an area, the maximum area is 1.235 * 4.845 = 5.983575 and the minimum is 1.225 * 4.835 = 5.922875.

So you can see that the area is somewhere between 5.922875 and 5.983575, and 5.9532 is very close to the average of those two numbers (the exact average is 5.953225). Because you only measure with 2 decimals of accuracy, you do not know the exact area, just the range.
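The interval reasoning above can be sketched in a few lines of Python, using the example's values:

```python
# Each 2-decimal measurement x really means "somewhere in x - 0.005 .. x + 0.005".
lo_a, hi_a = 1.23 - 0.005, 1.23 + 0.005   # 1.225 .. 1.235
lo_b, hi_b = 4.84 - 0.005, 4.84 + 0.005   # 4.835 .. 4.845

area_min = lo_a * lo_b                    # smallest area consistent with the measurements
area_max = hi_a * hi_b                    # largest area consistent with the measurements
midpoint = (area_min + area_max) / 2      # ~5.953225, very close to 1.23 * 4.84 = 5.9532

print(area_min, area_max, midpoint)
```

Any area in that range is consistent with what you actually measured, which is why reporting all the digits of 5.9532 would overstate what you know.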

So the simple way to handle this is to count significant digits and give the answer as 5.95, because if you add more decimals, the implication is that your measurement is accurate to the last decimal.

So 5.95 is an approximation.

You can do better: 5.983575 / 5.95 ≈ 1.0056 and 5.922875 / 5.95 ≈ 0.9954, so you can say that the value is 5.95 with a relative uncertainty of roughly +0.56% / −0.46%.
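Those ratios are quick to check, again with the bounds from the example:

```python
# Relative uncertainty of the rounded answer, using the bounds computed earlier.
area_min, area_max = 5.922875, 5.983575
reported = 5.95                        # the sig-fig answer

rel_above = area_max / reported - 1.0  # how far the upper bound sits above 5.95 (~+0.56%)
rel_below = 1.0 - area_min / reported  # how far the lower bound sits below 5.95 (~-0.46%)

print(rel_above, rel_below)
```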

If 1.23 = 1.2300000000000… and 4.84 = 4.840000000…, then the exact result is 5.9532.

For any measurement of the real world, there is always a level of accuracy. So writing 1.23 versus 1.2300 is a way to describe the accuracy of the measurement.

If you do a long calculation with multiple steps, you should keep all the decimals and only round the final output to the appropriate number of decimals. It can be hard to know what the appropriate error is. You might later learn how propagation of uncertainty works mathematically.
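To see why you keep the extra digits between steps, here is a small comparison; the third measurement 2.17 is made up for illustration:

```python
# Multi-step calculation: round only the final result.
# 2.17 is a hypothetical third measurement, not from the example above.
a, b, c = 1.23, 4.84, 2.17

full = a * b / c            # keep every digit internally
bad = round(a * b, 2) / c   # rounding mid-calculation throws digits away

print(round(full, 3))       # 2.743 (all digits kept until the end)
print(round(bad, 3))        # 2.742 (the intermediate rounding shifted the result)
```

The two paths disagree in the third decimal even in this tiny example; over many steps, intermediate rounding errors can accumulate further.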

The important part is that the final result should not imply that the input measurements had higher accuracy than they had in reality. The number of decimals is an implicit way to describe accuracy.
