Most IQ tests are scored on a normal distribution curve where 100 is set as the average (it should have nothing to do with age). To be at zero would suggest that you were literally the least intelligent person in an infinitely large population. It just doesn't really have a meaningful mathematical interpretation in the real world.
There's a very deep misunderstanding of what IQ is and how it relates to intelligence. Basically, IQ numbers reflect how well you did on an IQ test. The researchers then extrapolate that into a supposed level of intelligence using a lot of other data, including environmental factors such as education level, language, reading comprehension, and so on.
It’s very debatable how any of that actually extrapolates to intelligence, if it does at all.
For someone to have an IQ of 0, they would have to fail every question on the test (which should have been custom-made for the particular population being tested; there is no single universal IQ test). That is pretty difficult, since every test has some very easy problems and questions. Essentially, you would have tested someone who was totally incapable of interacting with the given test, like giving a blind person a written test. And as in that example, it says a lot more about you as the examiner than about the person being tested.
It is actually.
IQ tests have a mean of about 100 points and a standard deviation of about 15 points. So someone with IQ = 0 would be 6.67 standard deviations below the mean.
This gives a p-value in the neighborhood of 10^-11, meaning you'd expect roughly one such person in a population of around 10^11 people. So it's possible; we just don't have enough people right now.
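If you want to check that arithmetic, here is a minimal Python sketch, assuming the usual 100-point mean, 15-point standard deviation, and a perfectly normal distribution:

```python
from math import erfc, sqrt

MEAN, SD = 100, 15                     # typical Wechsler-style scaling
z = (MEAN - 0) / SD                    # IQ 0 sits 100/15 ≈ 6.67 SDs below the mean

# One-sided tail probability of a standard normal, via the complementary error function
p = 0.5 * erfc(z / sqrt(2))

print(f"z ≈ {z:.2f}")                  # ≈ 6.67
print(f"P(IQ <= 0) ≈ {p:.1e}")         # ≈ 1.3e-11
print(f"about 1 person in {1/p:.1e}")  # ≈ 7.7e10, i.e. on the order of 10^11 people
```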
IQ is measured by comparing your result to that of an average person, adjusted for age and scaled using the statistical tool of standard deviations.
This means that a child and an adult handing in a test with exactly the same answers will get different scores. You expect less of a child, and what is average for an adult is exceptional for a 12-year-old.
The Q in IQ is “quotient” and refers to that adjustment.
The average/median result in a given population should be an IQ of 100, with everyone receiving a score based on how much better or worse they did relative to that average.
An IQ of 100 splits the population in half: of those who did not score exactly 100, half score higher and half score lower.
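For a rough picture of where the "quotient" came from and how modern tests replace it, here is a small Python sketch. The ratio formula (mental age over chronological age) is the historical textbook version, the deviation formula is how current tests work in principle, and all the specific numbers below are made up for illustration:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Historical 'quotient': mental age divided by chronological age, times 100."""
    return 100 * mental_age / chronological_age

def deviation_iq(raw_score: float, age_group_mean: float, age_group_sd: float,
                 points_per_sd: float = 15) -> float:
    """Modern approach: compare a raw score to the test-taker's own age group."""
    z = (raw_score - age_group_mean) / age_group_sd
    return 100 + points_per_sd * z

# A 12-year-old performing like a typical 16-year-old on the old ratio scale:
print(ratio_iq(mental_age=16, chronological_age=12))                   # ≈ 133

# The same raw answers scored against different age groups (made-up norms):
print(deviation_iq(raw_score=60, age_group_mean=45, age_group_sd=10))  # 122.5 as a child
print(deviation_iq(raw_score=60, age_group_mean=60, age_group_sd=10))  # 100.0 as an adult
```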
There are actually two different scoring systems, one using a standard deviation of 15 points (Wechsler) and the other 16 points (Stanford-Binet).
Explaining how standard deviations work may be a bit too much here, but the result is simply that the farther away from 100 you get, the fewer people will have such a score.
If you have an IQ of 125 you score higher than roughly 95% of the population, and if you have an IQ of 75 you score lower than roughly 95% of the population.
The further away from the middle you get, the rarer the score becomes. It is also where the score becomes less and less reliable.
At about 190 or 196 (depending on the scale) you would be a one-in-a-billion genius, assuming anyone could credibly claim to measure this accurately.
As the scale nears 200 or 0 we get into scores that would hypothetically be so rare that we might be looking at one person out of the entire population of the world, or one out of all the humans who have ever lived. So while the scale might in theory go that far, the thing we would hope to measure is not something we are likely to ever find, or have the tools to measure.
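Those 190/196 figures can be sanity-checked by inverting the normal distribution for a one-in-a-billion tail. A quick sketch using Python's standard statistics module, with 15 and 16 being the two standard deviations mentioned above:

```python
from statistics import NormalDist

one_in_a_billion = 1e-9

# How many standard deviations above average is a 1-in-a-billion result?
z = NormalDist().inv_cdf(1 - one_in_a_billion)   # ≈ 6.0

for sd in (15, 16):                              # Wechsler vs. Stanford-Binet scaling
    print(f"SD {sd}: IQ ≈ {100 + z * sd:.0f}")   # ≈ 190 and ≈ 196
```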
All of that does not take into account how flawed many people think the whole notion of IQ really is, due to its cultural biases.
IQ is a normalized measure. You get a bunch of people to take a test, which produces a raw score. Then you find the average of the raw scores. Anybody who scored this average gets assigned an IQ of 100. People who scored 1 standard deviation below the average raw score get an IQ of 85. People who scored 2 standard deviations below get an IQ of 70.
To get an IQ of 0, you would need to score 6 2/3 standard deviations below the mean. (Also, note that a negative IQ is technically possible: 7 standard deviations below the mean should yield an IQ of -5. There's no sense in which 0 is the absolute bottom for IQ.) On a typical test, this simply isn't possible. Consider a test with a mean raw score of 75 and a standard deviation of 25. The lowest raw score you can get is 0, but that's only 3 standard deviations below the mean, equating to an IQ of 55. If you really wanted, you could add a lot of very easy questions to the test, which would give you the ability to produce a very low IQ if someone happened to get all of them wrong.
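As a minimal sketch of that normalization (the function name and the 15-points-per-standard-deviation scaling are just conventional choices; the raw-score numbers come from the example above):

```python
def iq_from_raw(raw_score: float, mean_raw: float, sd_raw: float,
                points_per_sd: float = 15) -> float:
    """Deviation IQ: 100 plus 15 points for every standard deviation above the mean."""
    z = (raw_score - mean_raw) / sd_raw
    return 100 + points_per_sd * z

print(iq_from_raw(75, mean_raw=75, sd_raw=25))   # 100.0 -- exactly average
print(iq_from_raw(50, mean_raw=75, sd_raw=25))   # 85.0  -- one SD below the mean
print(iq_from_raw(0,  mean_raw=75, sd_raw=25))   # 55.0  -- this test's floor, 3 SDs below
```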
But this isn’t really done because it’s unlikely that such a person even exists. If whatever IQ purports to measure (probably not “general intelligence”) is actually normally distributed, we should expect to see someone 6 2/3 standard deviations below the mean in only 1 out of every 100 billion people. That gives pretty low odds of such a person even existing and extremely low odds of you actually testing them.