When you conduct a survey, how do you know how many people is enough?




12 Answers

Anonymous 0 Comments

Beyond a certain point, just increasing the number of people surveyed doesn’t buy much extra accuracy. Randomness in who gets surveyed is the most important factor for avoiding bias. Pay special attention to the questions and how the survey is delivered; I have often noticed surveys and surveyors posing leading questions.

Anonymous 0 Comments

You more or less don’t. Any subset of a population is unlikely to perfectly represent the ENTIRE population, especially when selection bias exists, and it is very difficult to avoid (people who respond to surveys are already a subset potentially skewed towards certain types). Regardless, generally the more subjects you have, and the more varied their backgrounds/demographics/etc., the better.

This is part of why it’s so important to include details of how the survey was conducted: the participants, demographics, and so on. That way, when you *interpret* the results, you are aware of the limitations and possible blind spots or biases. This interpretation and awareness are necessary to get a good idea of what the data really says, as it is very often mishandled or misrepresented when talked about.

Anonymous 0 Comments

It depends on a lot of factors, including the size of the effect you are trying to detect and the level of certainty you want to achieve. The size of the population you are studying doesn’t really matter unless it is so small (or your sample so large) that your sample takes in a large fraction of the population.

There will usually be biases that do not vary with sample size. For example, there is often a “social desirability bias”, in which people are less likely to give responses that are seen as embarrassing (e.g. admitting that they have cheated on partners). This will have the same effect on your results regardless of whether you ask 100 people or 1 million. So there are usually diminishing returns from increasing the sample size.
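Those diminishing returns are easy to see in numbers. As a rough sketch (my own illustration, not from this answer: it assumes a simple random sample, 95% confidence, and the worst-case proportion p = 0.5), the margin of error shrinks like 1/√n, so each extra order of magnitude of respondents buys less and less precision:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a surveyed proportion of n responses."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000, 1_000_000):
    # Going from 10,000 to 1,000,000 respondents only improves precision
    # from about +/-1 point to about +/-0.1 point.
    print(f"n = {n:>9,}: +/- {margin_of_error(n) * 100:.2f} percentage points")
```

Note that this only describes sampling error; the social desirability bias described above is unaffected by n, which is exactly the point.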

It’s also important to remember that in many types of research, the goal is not to collect statistical evidence but to understand particular people or perspectives in detail. Many researchers use qualitative methods such as in-depth interviews and focus groups to understand how people think and make decisions. It is not generally feasible to conduct these with large sample sizes. Similarly, doctors will often publish case studies about individual patients if there is something particularly unusual about them.

Anonymous 0 Comments

There are statistical formulas to determine how big a sample should be to yield significant results, based on various factors including the population size and the standard deviation.

The bigger your sample size, the more accurately your results will reflect reality.

However, this also depends a lot on your surveying method, the way the questions are written, and how you choose to distribute the survey. There is inherent bias in how you conduct a survey even if you choose people at random, so you have to explain a crystal-clear methodology in your study.

Anonymous 0 Comments

There are predetermined formulas.  Obviously, in a perfect world, you would get the opinions of 100% of your target group.  But you’ll have neither the funding nor the ability to get everyone to respond. 

So instead, using the predetermined formulas, you’ll try to catch a percentage of the people in a certain group.  Imagine you want to know what type of soda left-handed, red-headed men like.  You look up population statistics and determine that one million of those people exist.  Then, you check the formula and it says that polling 2% of them will give you a good determination of what the average response is.  You then find twenty thousand left-handed, red-headed men, ask them what their favorite soda is, and you’ve got your answer.

Of course, some people or companies don’t do this.  Instead, they’ll poll an abnormally small or closely situated group and then release those results, even though they’re inaccurate.

For example, if you want to find out whether Americans love Star Wars or Star Trek more, but you only advertise your poll on a Star Wars forum, your results are obviously going to be skewed.

Anonymous 0 Comments

You can do an a priori power analysis, but personally I don’t like them because they involve estimating an expected effect size, and coming up with that estimate can be pretty subjective. In general, getting “as many as you can” in the most representative way possible is seen as good enough. The flip side is that a very large sample means a lot of statistical power, so you need to be careful about over-reliance on p values.
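To make that concrete, here is a minimal a priori power-analysis sketch (my own illustration, assuming a two-group comparison of means with equal group sizes and a normal approximation). The effect size `d` is the guessed standardized difference, which is exactly the subjective input this answer is complaining about:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group to detect a standardized effect d at given alpha/power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs far fewer people than a "small" one (d = 0.2):
print(n_per_group(0.5))  # about 63 per group
print(n_per_group(0.2))  # about 393 per group
```

Halving the guessed effect size roughly quadruples the required sample, which is why a shaky effect-size estimate makes the whole calculation shaky.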

Anonymous 0 Comments

The first part is statistical significance and margin of error. There is a formula that will help you find the sample size needed given a population size, based on what margin of error you want.

There are also convenient lookup tables.

For lots of surveys, that’s enough. But if you want to be more accurate given demographics, you would use stratified sampling.

This involves dividing the population into distinct subgroups (strata) that share similar characteristics, and then randomly sampling from each stratum proportionally to their subset of the population.

So let’s say one stratum is “people who agree Die Hard is a Christmas movie” and it’s 78% of all people. When we figure out our sample size for significance, we’ll need to make sure 78% of those people are the Die Hard folks.
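Proportional allocation like that is straightforward to sketch (the stratum shares and total sample size here are the ones from the Die Hard example; everything else is my own illustration):

```python
def allocate(total_n: int, strata: dict) -> dict:
    """Split a total sample size across strata in proportion to population share."""
    return {name: round(total_n * share) for name, share in strata.items()}

quotas = allocate(400, {
    "Die Hard is a Christmas movie": 0.78,
    "Die Hard is not a Christmas movie": 0.22,
})
print(quotas)  # 312 respondents from the first stratum, 88 from the second
```

In practice you then sample randomly *within* each stratum until each quota is filled; naive rounding can also make the quotas miss the total by one or two, which real survey software corrects for.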

Anonymous 0 Comments

Here’s a calculator, https://www.qualtrics.com/blog/calculating-sample-size/

As you can see, you provide your requirements for accuracy and so on, at which point you might ask: how do I choose those? But hopefully you’re a bit further ahead on that.

More on Wikipedia: https://en.m.wikipedia.org/wiki/Sample_size_determination

Anonymous 0 Comments

Ideally, depending on the kind and subject of the survey, there is a bunch of experimental precedent of the kind “we did the survey so-and-so often with so-and-so many people and we were able to predict the results of a later study to such-and-such precision,” and that will tell you how many you need for a certain amount of accuracy. In reality it’s often more of a “OK, how many people can we reasonably get to answer this in X time, and how small can we reasonably claim our error to be given that sample size?” situation.

Election polls, pre and post, are usually on the order of 1000 and up. You get a lot of potential candidates so you can just go big.

A study named “Acceptance of membership in the furry community of female siblings among males with Latino mothers and absent Polynesian fathers in rural Finnish communities” might find it a lot harder to get a representative sample size and will make do with what they get.

Anonymous 0 Comments

I think all the answers here are missing two really important questions you need to consider.

First: how are you doing your sampling? People seem to be assuming you’re using a random sample. But that’s not the only way to do a sample. It’s usually regarded as a gold standard, but it’s not always the best or a feasible option. (As other people have said, it’s rarely possible to do a *truly* random sample. Very often bias in your sample is a bigger problem than sample size.)

Sample size can easily be a matter of looking at the results and judging whether you’re likely to learn more by increasing it.

Second: how reliable and accurate do you need your results to be?

Other people have talked about how you can calculate the margin of error for a survey for a certain confidence level. But what’s the right margin of error for you? What’s the right confidence level? Do you need to be 90% confident or 99% confident?
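The cost of that choice is easy to quantify. A sketch (my own illustration, assuming a simple random sample of n = 1,000 and worst-case p = 0.5) of how the margin of error widens as you demand more confidence from the same sample:

```python
import math
from statistics import NormalDist

def margin(n: int, confidence: float, p: float = 0.5) -> float:
    """Margin of error for a proportion at a chosen confidence level."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z * math.sqrt(p * (1 - p) / n)

for conf in (0.90, 0.95, 0.99):
    # Same 1,000 respondents; only the confidence requirement changes.
    print(f"{conf:.0%} confidence: +/- {margin(1000, conf) * 100:.2f} points")
```

For a fixed sample, going from 90% to 99% confidence roughly widens the interval from about ±2.6 to about ±4.1 points, so the “right” confidence level really is a trade-off, not a free parameter.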

That depends on how your research is going to be used. There are rules of thumb for this in science, though they’re not uncontroversial. I do research in a business context, where I’ll be asking questions like: how important is the decision you’re going to make? If you get it wrong, what are the consequences? Can you easily change things if you get them wrong? Do you have other evidence supporting your results?