When is it appropriate to use confidence intervals?


Ok, I have always been kind of confused by CIs and p-values. I understand how it makes sense to use them when we are randomly sampling from a population and want to generalize from the sample to that population.

What I don’t understand is whether we can use them in other cases too.
Let’s say I want to benchmark an application and measure its runtime. Let’s say I repeat this measurement 10 times and get a certain mean and standard deviation. So far so good, but now I am not sure: can I calculate a CI on this data? Using z-value tables I obviously get a result that looks somewhat reasonable (with something like this https://www.mathsisfun.com/data/confidence-interval-calculator.html ), but does it even make sense in this context? And if it does, what does it represent in the context of my program?
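For concreteness, here is a minimal sketch of the calculation I mean, in Python. The runtime numbers are made up for illustration, and the 1.96 is just the z-value for a 95% interval that calculators like the one linked above use:

```python
import math
import statistics

# Placeholder runtimes in seconds for 10 runs (not real measurements)
runtimes = [1.92, 2.05, 1.98, 2.11, 2.03, 1.95, 2.08, 2.00, 1.97, 2.06]

n = len(runtimes)
mean = statistics.mean(runtimes)
sd = statistics.stdev(runtimes)   # sample standard deviation (n - 1 in the denominator)

z = 1.96                          # z-value for a 95% interval, as in the online calculator
half_width = z * sd / math.sqrt(n)
print(f"mean = {mean:.3f} s, 95% CI ≈ [{mean - half_width:.3f}, {mean + half_width:.3f}]")
```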


2 Answers

Anonymous

You should be using a t-distribution when you don’t know the standard deviation of the population and have to estimate it from the sample. T-distributions are wider and account for some of the uncertainty you’re describing. They don’t solve every problem, though: for instance, if the underlying distribution is sufficiently fat-tailed, you’ll never get a true estimate of the standard deviation, because it might not even exist.
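As a rough sketch of the difference, here is the same interval computed with a t critical value instead of the fixed 1.96. The runtimes are the same placeholder numbers from the question, and this assumes scipy is available:

```python
import math
import statistics
from scipy.stats import t

# Same placeholder runtimes as in the question (not real measurements)
runtimes = [1.92, 2.05, 1.98, 2.11, 2.03, 1.95, 2.08, 2.00, 1.97, 2.06]

n = len(runtimes)
mean = statistics.mean(runtimes)
sd = statistics.stdev(runtimes)      # sample standard deviation (n - 1)

t_crit = t.ppf(0.975, df=n - 1)      # two-sided 95% critical value with 9 degrees of freedom (~2.26)
half_width = t_crit * sd / math.sqrt(n)
print(f"mean = {mean:.3f} s, 95% t-interval ≈ [{mean - half_width:.3f}, {mean + half_width:.3f}]")
```

With only 10 measurements, the t critical value (about 2.26) is noticeably larger than 1.96, so the t-based interval is wider, reflecting the extra uncertainty from estimating the standard deviation from the sample.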
