Why is the specificity of a test defined by the true negative rate?


For a testing method, there are **sensitivity** (the true positive rate = P(tested positive given that the situation is true)) and **specificity** (the true negative rate = P(tested negative given that the situation is not true)).

My question is: why isn't **specificity** defined as something like P(the situation is true given a positive test result)? Doesn't that also tell us whether untargeted situations trigger a positive?


3 Answers

Anonymous

Think about the denominator for specificity. It includes all of the people who don't have the disease/condition, which means it counts the true negatives *and* the false positives. If a test is not very specific, there will be a lot of false positives, i.e. people getting positive results despite not having the disease, or because they have a different disease.
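
Written out as formulas (using the standard confusion-matrix counts TP, FP, TN, FN, which the answer describes in words rather than symbols):

$$
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}
$$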

The P(disease given positive test result) is the positive predictive value. This changes depending on how many people have the disease in the first place: if the disease is really rare, then most positive results are false positives; if the disease is really common, then most positive results are true positives. The sensitivity and specificity don't depend on how common the disease is.
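
As a rough sketch of that last point (the numbers below are made up for illustration, not from the original post), here is how the positive predictive value moves with prevalence while sensitivity and specificity stay fixed:

```python
# Illustrative numbers only: a test with 99% sensitivity and 99% specificity.
sensitivity = 0.99  # P(test positive | has disease)
specificity = 0.99  # P(test negative | does not have disease)

def positive_predictive_value(prevalence):
    """P(has disease | test positive), via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Rare disease: most positives are false positives.
print(positive_predictive_value(0.001))  # ~0.09
# Common disease: most positives are true positives.
print(positive_predictive_value(0.2))    # ~0.96
```

The same test gives a very different answer to "how likely am I to actually have it, given a positive result?" depending only on how common the disease is.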
