Why is the specificity of a test defined by the true negative rate?


For a testing method, there are two standard measures: **sensitivity** (the true positive rate, P(tested positive given that the situation is true)) and **specificity** (the true negative rate, P(tested negative given that the situation is not true)).

My question is: why isn't **specificity** instead defined as something like P(the situation is true given a positive test result)? Doesn't that also tell us whether untargeted situations trigger a positive?
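For concreteness, here are the quantities involved written with the usual confusion-matrix counts (TP, FP, TN, FN); this shorthand is standard notation, not something from the original post:

```latex
% Sensitivity: how often the test is positive when the situation is true
\text{sensitivity} = P(\text{test}+ \mid \text{situation true}) = \frac{TP}{TP + FN}

% Specificity: how often the test is negative when the situation is not true
\text{specificity} = P(\text{test}- \mid \text{situation false}) = \frac{TN}{TN + FP}

% The alternative quantity asked about (it conditions on the test result instead)
P(\text{situation true} \mid \text{test}+) = \frac{TP}{TP + FP}
```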


3 Answers


Let’s think of it another way: Imagine you are sick with a respiratory illness. You suspect you may have COVID, so you take an antigen test for COVID.

Let’s imagine first that you do in fact have COVID. *Sensitivity* is the test accurately telling you so. If it tells you that you don’t have COVID when you in fact do, that’s a False Negative (Type II Error). It’s literally telling you how good the test is at detecting the thing.

On the other hand, let’s imagine you don’t have COVID, and instead have something else, like Norovirus. *Specificity* is the test accurately ruling out COVID in the case where it doesn’t apply. If it tells you that you do have COVID when you in fact don’t, that’s a False Positive (Type I Error). It’s literally telling you how good the test is at only responding to the thing and not being triggered by something else.

In both cases, sensitivity and specificity are defined by the probability of the test being correct given the underlying reality. The false negative and false positive rates are their complements: 1 minus sensitivity and 1 minus specificity, respectively.
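Here is a minimal sketch of how those quantities would be computed from a 2×2 table of test outcomes; the counts below are made up purely for illustration:

```python
# Hypothetical counts from testing 1000 people (illustrative numbers only)
TP = 90   # have COVID, test positive
FN = 10   # have COVID, test negative      (false negative, Type II error)
FP = 45   # don't have COVID, test positive (false positive, Type I error)
TN = 855  # don't have COVID, test negative

# Sensitivity and specificity condition on the underlying reality:
sensitivity = TP / (TP + FN)   # P(test positive | actually sick)    = 0.90
specificity = TN / (TN + FP)   # P(test negative | actually healthy) = 0.95

# The quantity in the question conditions on the test result instead:
p_sick_given_positive = TP / (TP + FP)   # P(actually sick | test positive) ≈ 0.667

print(sensitivity, specificity, p_sick_given_positive)
```

Note that the last number also depends on how many of the people tested actually had the disease, whereas the first two describe the test's behaviour regardless of who takes it, which is one reason the test itself is characterized by sensitivity and specificity.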
