How do statistical tests prove significance?


I did a biology undergraduate degree and often wrote reports where we would statistically analyse our results. A p-value of less than 0.05 shows that the results are statistically significant. How do these tests actually know the data is significant? For example, we might look at correlation and get a significant positive correlation between two variables. Given that the variables can be literally anything, how does doing a few statistical calculations determine significance? I always thought there must be more nuance, since the actual variables can be so many different things. The same test might show me a significant relationship for two sociological variables and also for two mathematical ones, when those variables are so different?

In: Mathematics

17 Answers

Anonymous

“Significance” in this context doesn’t mean “this is true”; it means “the chance we’d see results like this by pure luck is pretty damn low”. The p-value is essentially a measure of how likely it is that the results you got were just a fluke – that there’s no pattern at all and the data just happened to come out looking like there was. This is why the tests don’t care what the variables actually are, sociological or mathematical: they only look at the numbers in the abstract, comparing how big the apparent effect is against the spread you’d expect from random chance alone. The bigger the effect relative to that spread, the lower the p-value, because it’s unusual for pure randomness to produce results that far from “no pattern”. It could still happen, which is why the p-value isn’t 0 – all you’re doing is saying “the chance that random noise produced *these* results is sufficiently low that we can treat the correlation as significant”.
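You can see this “how often would pure luck look like my data” idea directly with a permutation test. Here’s a minimal sketch in Python, assuming numpy and scipy are available; the data is made up for illustration. Shuffling one variable destroys any real pairing, so counting how often the shuffled data produces a correlation as strong as the observed one is, in effect, computing a p-value by brute force:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical example data: two variables with a modest linear relationship.
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)

# Observed correlation and the p-value from the standard test.
r_obs, p_value = stats.pearsonr(x, y)

# The same idea made explicit: shuffle y so any real pairing with x is
# destroyed, then count how often pure chance produces a correlation at
# least as strong as the one we observed.
n_perm = 10_000
r_null = np.array([
    stats.pearsonr(x, rng.permutation(y))[0] for _ in range(n_perm)
])
p_perm = np.mean(np.abs(r_null) >= abs(r_obs))

print(f"observed r = {r_obs:.3f}, pearsonr p = {p_value:.4f}, "
      f"permutation p = {p_perm:.4f}")
```

The two p-values come out close to each other, which is the point: the textbook formula is just a fast way of answering the same shuffling question.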

Also, there are cases where a p-value of 0.05 is still too high to be confident the correlation is actually there. In some fields, results won’t be considered significant until the p-value is below 0.01, or even lower.
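A small simulation (again assuming numpy and scipy) shows why 0.05 can be too lenient: if you test enough variables that genuinely have no relationship, about 1 in 20 of them will cross the 0.05 line by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 1,000 experiments on pure noise: x and y are independent by construction,
# so every "significant" result here is a false positive.
n_experiments = 1_000
false_positives = 0
for _ in range(n_experiments):
    x = rng.normal(size=30)
    y = rng.normal(size=30)
    _, p = stats.pearsonr(x, y)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5% of the tests to land under 0.05 by chance alone.
print(f"false positives at p<0.05: {false_positives}/{n_experiments}")
```

That roughly-5% false-positive rate is exactly what the 0.05 threshold means, and it’s why fields that run many tests at once demand stricter cutoffs.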
