How is it that in the U.S., surveys of 1,000 people are accepted as representative of the entire country?


I’ve noticed most U.S. polls query around 1,000 people, and sometimes even fewer. Somehow that qualifies for headlines like “Americans say…” or “Most Americans…” How is it acceptable that roughly 0.0003% of the population is treated as representative of the whole?


48 Answers

Anonymous 0 Comments

With a large sample size like 1,000, you get a pretty good estimate of the average without needing a larger sample. Pretend you have a coin and do not know the odds of getting heads. Flip it once and get heads: that might imply heads comes up 100% of the time. Flip it 3 more times and get tails each time, and now heads looks only 25% likely. Flip the coin 1,000 times and you will end up close to 50/50, and another 10,000 flips won’t change that result, except maybe to push it a little closer to 50/50.
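You can see this settle down in a quick simulation. Here’s a minimal sketch (hypothetical Python, just illustrating the coin-flip intuition above, with a made-up `estimate_heads` helper):

```python
import random

def estimate_heads(n_flips, p_heads=0.5):
    """Flip a simulated coin n_flips times and return the observed fraction of heads."""
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

random.seed(1)
for n in (4, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7} flips -> estimated chance of heads: {estimate_heads(n):.3f}")
```

With only a handful of flips the estimate bounces around a lot; by 1,000 flips it hovers near 0.5, and the extra 99,000 flips barely move it.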

It also depends on the poll: asking 1,000 people a question in one location could give a different result than asking in another, so the sample group needs to contain the right mix of people for the question being asked.

Anonymous 0 Comments

If you have a *really* good sample selection method and can get members of most relevant demographics in roughly the same proportion as they exist in the United States as a whole (for example, on an issue of race, a sample that’s roughly 60% non-Hispanic White, 18% Hispanic, 12% Black, and 5% Asian), then you can say that 1,000 people are representative of the country.

But even with decent or mediocre sampling methods, 1,000 people is still enough to get you roughly in the ballpark of the right answer for your survey, which is good enough for most use cases. Averaging 1,000 people together will generally mute the rarer, more extreme opinions.
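Turning those proportions into an actual recruiting target is just multiplication. A hypothetical sketch (the shares are the rough figures mentioned above, plus an assumed “Other” bucket so they sum to 1):

```python
# Hypothetical target shares and a 1,000-person sample.
target_shares = {
    "Non-Hispanic White": 0.60,
    "Hispanic": 0.18,
    "Black": 0.12,
    "Asian": 0.05,
    "Other": 0.05,
}
sample_size = 1_000

# How many respondents to recruit from each group so the sample mirrors the population.
quota = {group: round(share * sample_size) for group, share in target_shares.items()}
print(quota)  # {'Non-Hispanic White': 600, 'Hispanic': 180, 'Black': 120, 'Asian': 50, 'Other': 50}
```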

Anonymous 0 Comments

Because if you can get a truly random sample that is representative of the overall population you’re surveying, 1000 people is enough that your results become generalizable.

Anonymous 0 Comments

The surveys include a lot of control questions; sometimes half the questions in a normal survey are control questions. They ask people about their race, their income, their age, their political affiliation, where they live, how many kids they have, etc. This way the pollsters can correct for skews in who they happened to survey. For example, if they find that 60% of the people they surveyed were female, they can weight the male answers more heavily. The same goes for location and income brackets. This makes the surveys far more accurate.

We also accept that the results of these surveys are not perfectly accurate. There are ways of calculating a confidence interval for each number in the result: the range the researchers are confident the real answer falls within. Journalists often leave that interval out when they write up a survey, except in the most serious news publications, but policy makers and competent journalists pay close attention to it.
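Here’s a minimal sketch of that weighting idea, assuming the hypothetical 60/40 sample split mentioned above and made-up answer rates for each group:

```python
# Suppose the sample came back 60% female / 40% male, but the population is ~50/50.
sample_share = {"female": 0.60, "male": 0.40}
population_share = {"female": 0.50, "male": 0.50}

# Each respondent gets weight = population share / sample share for their group,
# so underrepresented groups count for more and overrepresented ones for less.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
print(weights)  # {'female': 0.833..., 'male': 1.25}

# Weighted estimate of "percent who answered yes", with hypothetical yes-rates per group.
yes_rate = {"female": 0.55, "male": 0.45}
weighted_yes = sum(sample_share[g] * weights[g] * yes_rate[g] for g in sample_share)
print(f"Weighted estimate: {weighted_yes:.2%}")  # matches what a 50/50 population mix would give
```

The weights simply rescale each group back to its population share before the answers are averaged.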

Anonymous 0 Comments

The key is that it’s (supposed to be) random, and that the poll acknowledges it’s not a precise result.

It’s easier to understand if you think about it in the context of rolling dice.

Imagine I tell you I am going to roll a die, and I won’t tell you how many sides it has: it could be a normal six-sided die, a twenty-sided die, a four-sided die, or whatever. But I will tell you the result of each roll.

How many times would I need to roll before you could safely tell me, with say 95% certainty, how many sides the die had? Even if I were going to roll it a billion times, after a relatively small number of rolls (100, 500, 1,000) that all came up between 1 and 6, you’d be able to say pretty confidently that it was a d6.
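To put a rough number on that intuition, here’s a hypothetical back-of-the-envelope check: if the die were actually a d20, the chance that every roll so far landed in 1–6 shrinks geometrically with each roll.

```python
# If the mystery die were really a d20, a single roll lands in 1-6 with probability 6/20.
# The chance that *every* roll so far landed in 1-6 is that probability to the power of the roll count.
p_single = 6 / 20

for n_rolls in (5, 10, 25, 100):
    p_all_low = p_single ** n_rolls
    print(f"{n_rolls:>3} rolls all in 1-6, if it were a d20: probability {p_all_low:.2e}")
```

By 25 rolls the “it’s secretly a d20” explanation is already vanishingly unlikely, which is why you don’t need anywhere near a billion rolls.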

Same thing here. Even if there are 300 million people, if you ask a thousand of them and have reason to believe they represent a random-enough sample of the population, you can extrapolate from their responses to the bigger population with confidence.

Anonymous 0 Comments

Imagine you have an enormous amount of trail mix. Like, an Olympic-sized pool of trail mix. You want to get a rough idea of the constituent snacks and their proportions.

If you scoop out a bucket’s worth, you can count and sort the snacks and find out what’s in the trail mix without examining the entire pool.

Anonymous 0 Comments

As others have said, if the sample set is a good mixture, then it can be indicative of the overall population’s mindset. The problem is that many polls nowadays have their result in mind before they start, so they find people who are more inclined to answer the way the pollster wants. A lot of science is done this way now. It used to be “here’s money, do science and let me know what you came up with”; now it’s often “here’s money, prove that xxx is true,” and most are inclined to deliver the desired result just to secure funding for other projects.

Anonymous 0 Comments

If it’s entirely random, like randomly drawn Social Security numbers (with some other method for people who don’t have one) and participation is enforced, sort of like the census letter you get in the mail, I think it’s pretty accurate. If it’s on a website, or a news channel, or limited to people in a certain place, it’s not going to be accurate. Even randomly calling landline phones isn’t an accurate representation, because certain types of people don’t even have one.

Anonymous 0 Comments

Much the same way a chef can figure out what’s wrong with the soup by tasting a spoonful, rather than drinking the whole pot. Or, put another way, there’s a joke my professor used to tell: “If you don’t believe in random sampling, next time you need a blood test, tell the doctor to take it all.” Opinion polling does the same thing: it reliably determines a group’s opinion on an issue (the soup) from a small sample of the population (the spoonful).

Imagine that you’re in charge of planning a party for your neighborhood, and you’re trying to decide whether to buy hamburgers or hot dogs. You expect 100 people will show up, so you knock on ten random doors, and ask. The first ten answers come back: three people want hot dogs, seven people want hamburgers.

Mathematically, we can draw conclusions about all 100 people from those ten. It’s *possible* you got really lucky/unlucky and found the only seven people who like hamburgers in the whole neighborhood, but it’s extremely unlikely. In fact, there are statistical equations that can tell us exactly how unlikely it is.

Polling companies use those equations to figure out how many people they need to survey to get a good estimate. If you ask 1,000 randomly selected people from the entire country whether they prefer hot dogs or hamburgers, then 95% of the time the result will be within about +/- 3 percentage points of the answer for the whole country.
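The formula behind that number is the standard margin of error for a proportion. A minimal sketch, assuming simple random sampling and the worst-case split of 50/50:

```python
import math

def margin_of_error(sample_size, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / sample_size)

for n in (100, 400, 1_000, 2_000, 10_000):
    print(f"n = {n:>6,}: margin of error ~ +/- {margin_of_error(n):.1%}")
```

Because the margin shrinks like the square root of the sample size, quadrupling the sample only halves the error, which is why pollsters rarely bother going much beyond a thousand or two respondents.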