Why is the specificity of a test defined by the true negative rate?


For a testing method, there is **sensitivity** (the true positive rate, P(tested positive given that the situation is true)) and **specificity** (the true negative rate, P(tested negative given that the situation is not true)).
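In confusion-matrix terms (using the common TP/FN/TN/FP count abbreviations, which aren't in the definitions above), those work out to roughly:

```latex
\text{sensitivity} = P(\text{test}^{+} \mid \text{condition}^{+}) = \frac{TP}{TP + FN},
\qquad
\text{specificity} = P(\text{test}^{-} \mid \text{condition}^{-}) = \frac{TN}{TN + FP}
```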

My question is: why is **specificity** not defined by something like P(the situation is true given a positive test result)? Doesn't that also tell us whether untargeted situations trigger a positive result?

In: Biology

3 Answers

Anonymous

Misread the OP

If you had a lopsided sample with lots of "true" observations, a measure like P(the situation is true given a positive test) would not look bad when it is supposed to look bad: even if you simply guessed that every observation is true, it would still score well.
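A quick numerical sketch of that point (the counts below are made up for illustration): with 90 truly positive cases and 10 truly negative ones, a "test" that labels everything positive gets P(situation true | tested positive) = 0.9, yet its specificity is 0, which is what exposes it as useless.

```python
# Illustrative (made-up) lopsided sample: 90 truly positive cases, 10 truly negative.
# A useless "test" that calls every case positive.
true_labels = [True] * 90 + [False] * 10
test_results = [True] * 100  # always "positive"

tp = sum(t and r for t, r in zip(true_labels, test_results))       # true positives
fn = sum(t and not r for t, r in zip(true_labels, test_results))   # false negatives
tn = sum(not t and not r for t, r in zip(true_labels, test_results))  # true negatives
fp = sum(not t and r for t, r in zip(true_labels, test_results))   # false positives

sensitivity = tp / (tp + fn)            # P(tested positive | situation true)      -> 1.0
specificity = tn / (tn + fp)            # P(tested negative | situation not true)  -> 0.0
p_true_given_positive = tp / (tp + fp)  # the quantity proposed in the question    -> 0.9

print(sensitivity, specificity, p_true_given_positive)
```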
