When measuring the sensitivity and specificity of a test, how do scientists know the true prevalence of disease within the tested population?


Sorry if the title was poorly worded. Basically, I understand that sensitivity is the proportion of people who test positive out of all the people tested who actually have the disease, but I don’t know how scientists know who truly has the disease and who doesn’t without a test in the first place? It feels like a chicken-and-egg situation.

In: Biology

You have to have a test that you already know is very good (i.e. a “gold standard”) to test against.
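To make that concrete, here’s a minimal sketch of the calculation: give everyone both tests, treat the gold standard’s result as the truth, and count up the four possible outcomes. (The function name and the specific numbers below are made up for illustration.)

```python
# Hypothetical example: evaluating a new test against a "gold standard" test.
# Each person gets both tests; the gold standard's result is treated as truth.

def sensitivity_specificity(results):
    """results: list of (new_test_positive, gold_standard_positive) pairs."""
    tp = sum(1 for new, gold in results if new and gold)          # true positives
    fn = sum(1 for new, gold in results if not new and gold)      # false negatives
    tn = sum(1 for new, gold in results if not new and not gold)  # true negatives
    fp = sum(1 for new, gold in results if new and not gold)      # false positives
    sensitivity = tp / (tp + fn)  # fraction of diseased people the new test catches
    specificity = tn / (tn + fp)  # fraction of healthy people the new test clears
    return sensitivity, specificity

# 90 diseased people (per the gold standard): new test catches 81 of them.
# 110 healthy people: new test correctly clears 99 of them.
data = ([(True, True)] * 81 + [(False, True)] * 9
        + [(False, False)] * 99 + [(True, False)] * 11)
print(sensitivity_specificity(data))  # → (0.9, 0.9)
```

The key point is that nothing in the arithmetic requires knowing the truth directly — it only requires trusting the gold standard enough to use its answers as the reference.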

I think there are ways to account for the fact that your gold standard isn’t perfect in the calculation, but it’s been a while since I learned this stuff, so I don’t remember the details.
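One standard correction in this spirit — and it speaks to the “true prevalence” in the title — is the Rogan–Gladen estimator, which backs out the true prevalence from the fraction of people who test positive, given a test’s known sensitivity and specificity. A sketch (the input numbers are made up):

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Rogan-Gladen estimator: correct the apparent (test-positive)
    prevalence for an imperfect test with known sensitivity/specificity.

        true_prev = (apparent + specificity - 1) / (sensitivity + specificity - 1)
    """
    p = (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(p, 0.0), 1.0)  # clamp to a valid proportion

# Hypothetical: 14% test positive with a test that is 90% sensitive
# and 95% specific. Some of those positives are false positives, so the
# true prevalence is lower than you'd naively expect from sensitivity alone.
print(round(rogan_gladen(0.14, 0.90, 0.95), 4))
```

Intuitively, the formula subtracts off the expected false positives and rescales by how well the test separates the two groups.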

ETA: In the case of something like COVID, where there was no standard test, they have been estimating by looking at a combination of other things ([“CT evidence of pneumonia in a pandemic area (not perfect as there are other causes of pneumonia, even in a pandemic) to a combination of clinical features, imaging, and positive follow-up testing”](https://www.cap.org/member-resources/articles/how-good-are-covid-19-sars-cov-2-diagnostic-pcr-tests)) and treating that as the “standard”.