eli5: how does the probability of something being wrong (e.g., a false negative on an at-home/rapid antigen test) change when you do the same test multiple times?


if there is a 98% chance the results will be correct (making it ~1/50 times it will be incorrect), would taking the test twice make the chance of it being incorrect ~1/100? then after three times ~1/200? or would it not change at all and still be 1/50?


10 Answers

Anonymous 0 Comments

Depends on the real cause of the false negative.

Is it a problem where X% of the time the solution/tester is just bad and won’t come up positive? If so, then multiple tests may be more accurate.

Is it a problem where you’re on the edge of its detection window? If so, then multiple tests may be more accurate.

Or is it a problem where you’re below the level it can detect at all?

Suppose your test is 100% sensitive at levels of 1,000 or higher, 50% sensitive at 800 or higher, and 0% sensitive at 400 or lower. If you’re at 300, taking multiple tests won’t do the trick. But if the ultra-fancy gold-standard test is sensitive down to 5, then during the clinical validation of the kit you would have been recorded as a person who had the sickness but didn’t test positive on the kit.

It all depends on the cause of the false negative, and it’s impossible to know whether it really is a false negative without running the ultra-sensitive test that accurately answers the question anyway.
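The threshold model in this answer can be sketched in a few lines. The numbers (1,000 / 800 / 400) are the ones above; the function name `rapid_test` and the three-repeat simulation are illustrative assumptions, not a real kit’s behavior:

```python
import random

def rapid_test(level, trials=100_000):
    """Hypothetical detection model: always positive at 1000+,
    50% sensitive at 800+, never positive below 800 (including
    the 400-and-lower floor from the answer above)."""
    def one_test():
        if level >= 1000:
            return True
        if level >= 800:
            return random.random() < 0.5
        return False
    # Estimate the chance that at least one of three repeated tests is positive.
    hits = sum(any(one_test() for _ in range(3)) for _ in range(trials))
    return hits / trials

# Below the detection floor, repeating never helps:
print(rapid_test(300))   # 0.0
# At a level where each test is a coin flip, three tries catch most cases:
print(rapid_test(900))   # ≈ 0.875, i.e. 1 - 0.5**3
```

The point of the sketch: repeating only helps when each individual test has some nonzero chance of firing.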

Anonymous 0 Comments

It depends on properties of the test: this isn’t enough information for an answer.

If the test is failing for reasons that are independent from one test to another (say, there was a manufacturing defect in one test), then you’d have independent probabilities of failure, and the chance would be (1-0.98)(1-0.98) = ~0.04% of both being wrong (i.e., about 2% of 2%).

But if they’re wrong for some shared underlying reason (say, your sample doesn’t contain the specific antigen the test looks for), then one test failing would all but guarantee the other fails too, and the chance would still be about 2%.

More likely, both failure modes are possible, and the probability is somewhere between these extremes.
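The two extremes in this answer can be written out directly. The function `combined_false_negative` and its `rho` interpolation are illustrative assumptions (a toy mixture, not a real correlation model), but the endpoint numbers match the answer above:

```python
def combined_false_negative(p_fn, n, rho=0.0):
    """Chance that n repeated tests ALL miss, for a toy model:
    rho=0 -> failures fully independent: p_fn ** n
    rho=1 -> failures perfectly correlated: one miss implies all miss
    Intermediate rho linearly interpolates between the two extremes."""
    independent = p_fn ** n
    correlated = p_fn
    return (1 - rho) * independent + rho * correlated

print(combined_false_negative(0.02, 2, rho=0.0))  # ≈ 0.0004 (0.04%)
print(combined_false_negative(0.02, 2, rho=1.0))  # 0.02 (still 2%)
```

Real tests sit somewhere between the two endpoints, which is why the honest answer is a range rather than a single number.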

Anonymous 0 Comments

Doesn’t change. The more often you repeat the test, the closer your observed rate gets to the real probability, but the probability itself stays the same. A handful of tests is also far too small a sample to see it: if it’s 1 in 50, you’d need at least 50 tests to even expect one failure.

Anonymous 0 Comments

As others have mentioned, this is not really how error works, especially in medical tests. Not every false result is the same; in medical testing we generally prefer false positives to false negatives. Why? Imagine being the person who didn’t have cancer but went through chemo, and compare that to the person who DOES have cancer but doesn’t get treatment; I’d take the unnecessary chemo every time. So medical tests are designed, as much as is physically possible, to have no false negatives and only false positives. This is why step 1 is always to repeat the test, to help rule out a false positive. For more on error in testing, check out the Veritasium video on [Bayes’ theorem](https://youtu.be/R13BD8qKeTg)

Anonymous 0 Comments

It depends on the desired outcome (confirmation bias). Say you believe you’re sick, you take a test that has a 95% chance of giving you the right result, and you’re actually healthy. Take it twice and there’s roughly a 10% chance (1 − 0.95² ≈ 9.75%) that at least one of them will show you’re sick. If you believe you’re healthy, then a single test showing you’re not sick is enough, so the chance of getting the answer you want is 99.75% (1 − 0.05²), because you pick whichever result suits you.
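The two numbers in this answer fall out of the same 95%-per-test assumption, just read in opposite directions:

```python
p_right = 0.95  # each test gives the "right" result 95% of the time (assumed)

# If you're hunting for ANY test that confirms you're sick, you're misled
# whenever at least one of the two tests errs:
at_least_one_wrong = 1 - p_right ** 2
print(at_least_one_wrong)       # ≈ 0.0975 (~10%)

# If a single "healthy" result satisfies you, you're only misled when
# BOTH tests err:
at_least_one_right = 1 - (1 - p_right) ** 2
print(at_least_one_right)       # ≈ 0.9975 (99.75%)
```

Same tests, same error rate; the difference is only in which result you decide counts.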
