Some of the most common **misconceptions about confidence intervals** are:
* “There is a 95% chance that the true population mean falls within the confidence interval.” *(FALSE)*
* “The mean will fall within the confidence interval 95% of the time.” *(FALSE)*
While I do know the true definition of confidence intervals, I wonder why the statements above are not true?
Confidence intervals are calculated from samples of a population. Each interval is centered on the sample mean and extends up and down from it by the standard error of the mean (the sample standard deviation divided by the square root of the sample size), multiplied by a critical value determined by the confidence level. Every sample produces a different mean and standard error, and thus a different interval.

If you calculated a CI for every possible sample from some population, the proportion of those intervals that contain the true population mean would equal the confidence level (e.g. computing a 95% CI for every possible sample means 95% of the resulting CIs would contain the true population mean). The statements above are therefore not true for any single sample CI: from that one sample alone, you have no way to judge whether your particular interval is among the 95% that capture the true mean or the 5% that miss it.
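To see the coverage interpretation concretely, here is a minimal simulation sketch (the normal population, its parameter values, and the variable names are just illustrative assumptions, not part of the original answer): draw many samples, build a 95% t-based interval from each, and count how often the true mean is captured.

```python
# Coverage simulation sketch: repeatedly sample from an assumed normal
# population, build a 95% t-interval each time, and track how many
# intervals contain the known true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, true_sd = 10.0, 2.0      # population parameters chosen for illustration
n, n_trials, conf = 30, 10_000, 0.95

covered = 0
for _ in range(n_trials):
    sample = rng.normal(true_mean, true_sd, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)             # standard error of the mean
    t_crit = stats.t.ppf((1 + conf) / 2, df=n - 1)   # critical value for a 95% interval
    lo, hi = m - t_crit * se, m + t_crit * se
    covered += (lo <= true_mean <= hi)               # did this interval catch the true mean?

print(f"Coverage over {n_trials} intervals: {covered / n_trials:.3f}")  # roughly 0.95
```

The printed coverage lands near 0.95, but notice that inside the loop, any single interval either contains the true mean or it does not, and nothing in that one sample tells you which case you are in.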