First of all, an example: the mean age of the children in a test is 12.93, with a standard deviation of 0.76.
Now, maybe I am just overthinking this, but everything I Google gives me a big convoluted explanation of what standard deviation is without addressing the kiddie pool I’m standing in.
Edit: you guys have been fantastic! This has all helped tremendously; if I could hug you all, I would.
In my opinion, the easiest way of doing it: Think of the standard deviation as the average* distance you can expect any one of those children’s ages to fall from the mean. If you plucked one kid from the test at random, that’s about how far you could expect their age to be from the average age of the group.
^(*This is technically a lie, since the standard deviation is based on squared differences, not just differences. However, this is the best “kiddie pool” answer I can think of that doesn’t make things way more complicated than they need to be, and ends up being pretty close to the actual answer.)
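To see the difference between the “average distance” intuition and the real standard deviation, here’s a small sketch with a made-up list of ages (the numbers are invented for illustration, roughly matching the example mean of 12.93):

```python
import math

# Hypothetical ages, chosen so the mean lands near 12.93
ages = [12.0, 12.5, 13.0, 13.5, 14.0, 12.6, 12.9]
mean = sum(ages) / len(ages)

# The "kiddie pool" intuition: average distance from the mean
avg_distance = sum(abs(a - mean) for a in ages) / len(ages)

# The actual (population) standard deviation:
# square the differences, average them, then take the square root
std_dev = math.sqrt(sum((a - mean) ** 2 for a in ages) / len(ages))

print(round(avg_distance, 2))  # 0.49
print(round(std_dev, 2))       # 0.61
```

The two numbers come out close but not identical, which is exactly the footnote’s point: squaring before averaging weights big deviations more heavily, so the standard deviation is usually a bit larger than the plain average distance.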
With a normal (bell-curve) distribution, about 68% of results fall within one standard deviation of the mean, and about 95% fall within two standard deviations.
So if a test had an average score of 85, and the standard deviation was 5, then you know the majority of the class got a score in the 80s, and very few had scores >95 or <75.
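You can check those percentages for the test-score example directly with Python’s standard library, assuming the scores really are normally distributed with mean 85 and standard deviation 5:

```python
from statistics import NormalDist

# Assumed normal distribution of test scores: mean 85, std dev 5
scores = NormalDist(mu=85, sigma=5)

# Fraction of students scoring within one standard deviation (80 to 90)
within_one = scores.cdf(90) - scores.cdf(80)

# Fraction within two standard deviations (75 to 95)
within_two = scores.cdf(95) - scores.cdf(75)

print(f"{within_one:.1%}")  # 68.3%
print(f"{within_two:.1%}")  # 95.4%
```

So roughly two thirds of the class lands in the 80s, and only about 5% of scores fall below 75 or above 95.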