IQ scores are based on how an average person performs, adjusted for age and expressed using the statistical tool of standard deviations.
This means that a child and an adult handing in a test with exactly the same answers will get different scores: we expect less of a child, and what is average for an adult is exceptional for a 12 year old.
The Q in IQ is “quotient” and refers to that adjustment.
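Historically that "quotient" was literal: early tests divided a test-taker's "mental age" (the age level their answers matched) by their actual age. Here is a minimal sketch of that old ratio formula, purely for illustration; modern tests no longer compute scores this way and the function name is made up:

```python
# Historical "ratio IQ" (illustrative only; modern deviation IQ works differently)
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Score = 100 * mental age / chronological age."""
    return 100 * mental_age / chronological_age

# A 12-year-old answering like a typical 15-year-old scores above average:
print(ratio_iq(15, 12))  # 125.0
# The same answers from a 15-year-old are exactly average:
print(ratio_iq(15, 15))  # 100.0
```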
The average/median result in a given population should be an IQ of 100, with everyone receiving a score based on how much better or worse they did relative to that average.
An IQ of 100 splits the population in half: of those who did not score exactly 100, half score higher and half score lower.
There are actually two different scoring systems, one using a standard deviation of 15 points (Wechsler) and the other 16 points (Stanford-Binet).
Explaining exactly how standard deviations work may be a bit too much here, but the upshot is simple: the farther away from 100 you get, the fewer people have such a score.
With a standard deviation of 15, an IQ of about 110 puts you above roughly three quarters of the population, an IQ of 125 puts you above roughly 95%, and an IQ of 75 puts you below roughly 95%.
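As a rough illustration of that percentile math (a sketch only, assuming scores really follow a normal distribution with mean 100 and, on the Wechsler scale, a standard deviation of 15):

```python
import math

def iq_percentile(iq: float, sd: float = 15) -> float:
    """Fraction of the population expected to score below `iq`,
    assuming a normal distribution with mean 100."""
    z = (iq - 100) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(iq_percentile(110), 2))  # ~0.75: above about three quarters of people
print(round(iq_percentile(125), 2))  # ~0.95: above about 95%
print(round(iq_percentile(75), 2))   # ~0.05: below about 95%
```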
The further away from the middle you get, the rarer the score becomes. It is also where the score becomes less and less reliable.
At about 190 or 196 (depending on the scale) you would be a one-in-a-billion genius, if anyone seriously suggested we could measure that accurately.
As the scale nears 200 or 0 we get into scores that would hypothetically be so rare that we might be looking at one person out of the entire population of the world, or one out of all the humans who have ever lived. So while the scale might in theory go that far, the thing we would hope to measure is not something we are likely to ever find or have the tools to measure.
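The same normal-distribution assumption shows why those extreme scores stop being meaningful. A sketch of the expected rarity (upper tail only, and assuming the distribution really stays normal that far out, which is itself doubtful):

```python
import math

def expected_rarity(iq: float, sd: float = 15) -> float:
    """Roughly '1 in N' people expected to score at or above `iq`,
    assuming a normal distribution with mean 100."""
    z = (iq - 100) / sd
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # erfc keeps precision in the far tail
    return 1 / tail

print(f"{expected_rarity(190):.1e}")         # ~1.0e+09: about one in a billion (Wechsler)
print(f"{expected_rarity(196, sd=16):.1e}")  # same rarity on the Stanford-Binet scale
print(f"{expected_rarity(200):.1e}")         # ~7.8e+10: rarer than anyone alive today
```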
None of that takes into account how flawed many people think the whole notion of IQ really is, due to its cultural biases.