how does the gap between percentages work?

I don’t know if the question is well phrased, but here is my doubt: sometimes (in medicine, in my case) you can read numbers like “between 30 and 80 percent of the patients have a relapse”.
Isn’t a percentage an average in itself?
If I do an experiment with 100 people, shouldn’t I get an exact percentage of people who react in a certain way? How can I not know whether it’s 30 or 80?

27 Answers

Anonymous 0 Comments

>If I do an experiment with 100 people, shouldn’t I have an exact percentage of people who react in certain way?

Yes, but then if you do it again with a *different* 100 people, you’ll get a different answer.

>“between 30 and 80 percent of the patients have a relapse”

means that the outcome for whichever group of 100 patients you happen to pick will land somewhere between 30% and 80%. You’re right that this doesn’t narrow things down very much.
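A quick simulation makes this concrete. The 0.5 “true” relapse probability below is purely illustrative, not from any real study; the point is just that repeating the same experiment with a different group of 100 gives a different percentage each time:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_RATE = 0.5  # hypothetical underlying relapse probability (illustrative only)

def run_study(n_patients=100):
    """Count how many of n_patients relapse, given the true rate."""
    return sum(1 for _ in range(n_patients) if random.random() < TRUE_RATE)

# Repeat the "same" study five times, each with a fresh group of 100.
# With 100 patients, the count IS the percentage.
results = [run_study() for _ in range(5)]
print(results)  # five percentages that vary from study to study
```

Every run samples from the same underlying rate, yet the observed percentages scatter around it, which is exactly why a single group of 100 can’t pin the number down.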

Anonymous 0 Comments

Because a single study is not a finding. It might suggest a trend, but it is not a finding on its own.

Many experiments need to be run under many different conditions before you can even establish a trend, let alone a finding.

Anonymous 0 Comments

The gap between percentages is simply the range of values between two percentages. In the case of your example, if a report says that between 30 and 80 percent of patients have a relapse, it means that the actual percentage falls somewhere within that range.

In medical studies, researchers often don’t have data on every single person in a population, so they collect data from a sample of people instead. This sample may not perfectly represent the entire population, and so the percentages calculated from the sample may vary slightly from the true percentages for the whole population.

Additionally, the percentage of people experiencing a certain outcome can be affected by various factors such as age, gender, genetics, lifestyle habits, and medical history, among others. Therefore, the percentage of people who experience a particular outcome can vary from person to person.

The range of percentages provided in reports reflects the uncertainty that comes with estimating percentages from a sample. A larger sample size typically leads to a smaller range and a more accurate estimate of the true percentage in the population.
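A sketch of how that range shrinks with sample size, using the standard normal-approximation confidence interval (an assumption on my part; the answer above doesn’t name a specific method):

```python
import math

def ci_95(successes, n):
    """95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Same observed rate (50%), increasing sample sizes:
for n in (50, 200, 1000):
    lo, hi = ci_95(n // 2, n)
    print(f"n={n:5d}: {lo:.1%} to {hi:.1%}")
# n=   50: 36.1% to 63.9%
# n=  200: 43.1% to 56.9%
# n= 1000: 46.9% to 53.1%
```

The observed percentage never changes, but the interval around it tightens as the sample grows, which is the “smaller range” the answer describes.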

Anonymous 0 Comments

If you flip a coin twice and get heads both times, does that mean that coin will continue to give heads in the future 100% of the time?
That would be nuts.

You can do a study and count how many times something happened.
Then you have to predict how it will do in the future.
A huge and fundamental part of science is really just being able to make predictions.
It’s that bridge between “this is what *did* happen” and “this is what *will* happen” where the uncertainty comes from.
When you’re making predictions based on statistical evidence, there is always going to be some amount of uncertainty because, hey, maybe it was just a coincidence.

Note that the level of uncertainty goes down the more trials you do (the more times you flip your coin).
Flipping a coin twice and getting heads both times doesn’t really tell you much.
If you flip a coin 10 times and get heads 7 times, you still don’t have enough data to say with confidence that the coin is biased (it could just be a coincidence).
If you flip a coin 100 times and get heads 70 times, though, *now* you’ve got enough data to confidently say that the coin is biased towards heads (maybe the tails side is a bit heavier).
Specifically you can say with 95% confidence that the coin will continue to give heads between 61% and 79% of the time.
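The 61%-79% figure above can be checked directly, assuming it comes from the usual normal-approximation interval (the answer doesn’t say which method it used):

```python
import math

heads, flips = 70, 100
p_hat = heads / flips  # observed rate: 0.7

# 95% interval: estimate plus/minus 1.96 standard errors
se = math.sqrt(p_hat * (1 - p_hat) / flips)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"{low:.0%} to {high:.0%}")  # 61% to 79%
```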

Anonymous 0 Comments

This depends. There may be multiple studies, and someone might report that the success rate was between 30% and 80%.

Or, based on one or multiple studies, someone might say that the true success rate is between 30% and 80% (with 95% confidence).

That part in parentheses — “with 95% confidence” (or 98%, 99%, and so on) — is sometimes unstated, but it’s not sleight of hand. There is an amazing result in statistics called the central limit theorem (CLT) that allows you to ‘bound’ the range of particular numbers.

There are some limitations to the CLT. It’s only approximately true, and only applies when you have a lot of data. (Also, it won’t work if your numbers have a very weird distribution, but that goes way beyond ELI5.) But it works for a huge number of statistics (including success rates).
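A rough illustration of what the CLT buys you, with simulated data (the 0.55 “true” rate is made up for the demo): repeated sample success rates pile up in a bell curve around the true rate, with a spread you can predict in advance, and that predictable spread is what lets you bound the range.

```python
import random
import statistics

random.seed(0)

def sample_rate(n=100, p=0.55):
    """One study: the success rate observed across n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n)) / n

# Run the study 10,000 times and look at how the observed rates distribute.
rates = [sample_rate() for _ in range(10_000)]

# CLT prediction: mean near p, spread near sqrt(p*(1-p)/n).
print(round(statistics.mean(rates), 2))   # close to 0.55
print(round(statistics.stdev(rates), 3))  # close to sqrt(0.55*0.45/100), about 0.050
```

Because the spread is predictable without running 10,000 studies, one study plus the CLT is enough to state a range like “between X% and Y% with 95% confidence”.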
