Scientific notation and why it’s necessary

Because every science course I’ve taken so far has left me more confused than ever.

Scientific notation is used to represent really small or really large numbers, or numbers in cases where it really matters how many digits you can be sure are accurate.

To write a number in scientific notation, you take the number – let’s say 1867 – and rewrite it with the first digit in the ones column and the rest after the decimal point. In this case, 1.867. Then you multiply it by the power of ten that gets you back to the original number – the exponent is the number of places you moved the decimal. Here you moved the decimal three places to the left, so it’s ×10^3.
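If you want to check this procedure, most programming languages can do the conversion for you. A minimal Python sketch (the `:e` format spec is built into Python; the variable name is just for illustration):

```python
# Python's ":e" format spec writes a number as a mantissa
# times a power of ten, exactly like the procedure above.
value = 1867
print(f"{value:.3e}")   # prints 1.867e+03, meaning 1.867 x 10^3
```

The `e+03` at the end is just a compact way of writing “×10^3”.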

To give an example of where this is useful: writing 0.0000000000782 g requires counting zeroes and leaves a lot of room for error. Doing math with a number that small is pretty unintuitive anyway, so converting it to 7.82×10^-11 makes it a lot easier to work with without making it any harder to picture. *Note – most places in the world don’t use separators in their large or small numbers, so all the zeroes blur together like that.
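The same formatting trick handles the tiny-number case – Python counts the decimal shifts so nobody has to count zeroes by hand:

```python
# A very small number in scientific notation: the ":e" format
# spec figures out the negative power of ten automatically.
small = 0.0000000000782
print(f"{small:.2e}")   # prints 7.82e-11, meaning 7.82 x 10^-11
```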

It can also be helpful where significant figures matter – that is, how many digits you actually know from your data versus how many just showed up in the math. If you measure 10.0 grams of something and you know that 1/3 of it by weight is green, the answer you’d calculate is 3.33333333333333…g of green. However, your scale is only so accurate – maybe you actually have 10.00478083984 grams total. You have no way of knowing that anything past 3.33 is correct, so you don’t write those extra digits.
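This significant-figures cutoff is easy to sketch in Python too: the `:g` format spec keeps a chosen number of significant digits and throws the rest away (the scenario and numbers follow the example above):

```python
# Keep only three significant figures, which is all the
# 10.0 g measurement actually supports.
total = 10.0           # the scale only reads to three significant figures
green = total / 3      # the raw float is 3.3333333333333335...
print(f"{green:.3g}")  # prints 3.33
```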

In a case where you end up with a very large number, like 1 888 967, and you only know that the first three digits are accurate, how do you write that? If you write 1 890 000, it still kind of implies that you’re sure of those zeroes. The clearest way to write it is 1.89×10^6.
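That rounding-to-three-significant-figures step can be sketched the same way – two digits after the decimal point in `:e` notation means three significant figures total:

```python
# Round a large number to three significant figures by
# formatting it in scientific notation with two decimals.
n = 1_888_967
print(f"{n:.2e}")   # prints 1.89e+06, meaning 1.89 x 10^6
```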
