Why do researchers choose to use the “P-Value” rule in data analysis?


They say 0.014 (the p-value) is a “significant number”. Says who? Why? Isn’t any number “significant” if the distribution of data points is mostly around that area?


9 Answers

Anonymous

You always report a p-value with respect to a significance level α. In principle α can be any threshold you like; what it designates is the probability that, when there is truly no effect, random chance alone still produces data that your test flags as a significant effect (a false positive).

The most common choice of α is 0.05, which corresponds to a 95% confidence level: if there were really no effect, you would only see data this extreme about 5% of the time. If your p-value comes out below 0.05, you reject the null hypothesis (no effect) at the 0.05 significance level. As ELI5 as I can make it: the result you measured would be very unlikely to happen by pure fluke, so you have good (but not certain) grounds to believe the effect is real.
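To make that decision rule concrete, here is a small sketch (my own example, not part of the answer above) that computes an exact two-sided binomial p-value for a coin-flip experiment and compares it to α = 0.05:

```python
import math

def binom_p_value(k, n, p0=0.5):
    """Two-sided p-value for observing k heads in n flips,
    under the null hypothesis that P(heads) = p0."""
    # Probability of every possible outcome under the null
    pmf = [math.comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    # Sum the probabilities of all outcomes at least as unlikely as k
    return sum(p for p in pmf if p <= pmf[k])

alpha = 0.05
p = binom_p_value(62, 100)  # 62 heads in 100 flips of a supposedly fair coin
print(f"p-value = {p:.4f}")
print("reject null (coin looks biased)" if p < alpha else "fail to reject null")
```

Here 62 heads in 100 flips gives a p-value of roughly 0.02, so you would reject the fair-coin hypothesis at α = 0.05 but not at α = 0.01.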

You are correct that 0.05 is arbitrary. You could pick 90% confidence instead, in which case a p-value below 0.1 would count as significant at α = 0.1. Or if your p-value comes in below 0.01, you would say the result is significant at α = 0.01. If you get a very, very small p-value, you generally report the smallest conventional α you clear, to emphasize how unlikely a fluke would be. 0.05 is chosen mostly because 95% certainty is widely accepted as a high level of confidence. But as anyone who plays Dungeons & Dragons or any other game with a 20-sided die can tell you, events with 5% probability happen all the time. With studies reporting p-values constantly, you're bound to get some fluke measurements in the mix.
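That last point, that 5%-probability flukes happen all the time, is easy to demonstrate with a quick simulation (my own illustration): run many "studies" of a genuinely fair coin and count how often a no-effect study still clears a rough 0.05 cutoff:

```python
import random

random.seed(42)

n_studies = 10_000
false_positives = 0

for _ in range(n_studies):
    # Each "study": 100 flips of a genuinely fair coin (no real effect)
    heads = sum(random.random() < 0.5 for _ in range(100))
    # Crude two-sided cutoff: for 100 fair flips, |heads - 50| >= 10
    # has probability ~0.057 under the null, close to alpha = 0.05
    if abs(heads - 50) >= 10:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of no-effect studies looked 'significant'")
```

Roughly 1 in 20 of these no-effect studies comes out "significant", which is exactly the false-positive rate that α promises: publish enough studies at α = 0.05 and some flukes are guaranteed.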
