What is a p value and a null hypothesis in scientific research and how significant are they



I’m starting to get into science a lot more these days but I do not know what p values and null hypothesis are.

Appreciate the help. Thank you.


Let’s say you want to understand whether the temperature of the day relates to how many times a day my dog farts.

My null hypothesis is: the day’s temperature does not affect how much my dog farts, i.e. any apparent relation is just chance.

My aim (alternative hypothesis) is to reject that statement, so I can show that the temperature DOES affect my dog’s farts. I want 95% confidence, so I set a significance level of 5%: if my p-value comes out below 0.05, I can reject my null hypothesis.

Say I run this test and record the data for a year. I plug in all the values and get a p-value of 0.5: a 50:50 chance of seeing data like mine even if there is no relation at all. Sadly, then, I cannot reject my null hypothesis, and cannot show that the temperature and my dog’s farts are related.
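That test can be sketched as a permutation test, using only the Python standard library. A permutation test directly implements the idea above: shuffle the data to break any real relationship, and see how often chance alone produces a correlation at least as strong as the one observed. The temperatures and fart counts below are invented purely for illustration.

```python
import random

random.seed(0)

temps = [14, 18, 21, 25, 9, 30, 17, 22, 11, 27]   # daily temperature (made up)
farts = [3, 4, 4, 5, 2, 6, 3, 5, 2, 6]            # farts per day (made up)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

observed = pearson(temps, farts)

# Under the null hypothesis the pairing of days is arbitrary, so shuffle
# one variable many times and count how often chance alone matches or
# beats the observed correlation. That fraction is the p-value.
extreme = 0
trials = 10_000
shuffled = farts[:]
for _ in range(trials):
    random.shuffle(shuffled)
    if abs(pearson(temps, shuffled)) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"r = {observed:.2f}, p = {p_value:.4f}")
```

With this made-up (strongly related) data the p-value comes out tiny, so the null would be rejected; with noisy real-world data you might well get the p ≈ 0.5 outcome described above instead.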

Going to start off with some relevant definitions:

Dependent variable — the thing that’s being measured, a response variable (e.g. heart rate)

Independent variable — the thing that’s being manipulated or set by the researchers; the variable that is hypothesized to cause a change in the *dependent* variable (e.g. a drug treatment)

Population — the group being tested. Important to note that the conclusion can only be generalized to the population of the study. If the experiment (“sample population”) only includes men of Asian descent aged 45 and over, then the conclusion cannot be assumed to extend to men of other backgrounds, women, young men, children, etc.

The alternative hypothesis is the research question in the form of a true/false statement (This drug affects heart rate).

The null hypothesis is the “blank.” It assumes there is no relation between the independent and dependent variables (This drug has no effect on heart rate). With no evidence, we default to treating the null hypothesis as true. The experiment aims to disprove the null hypothesis in favor of the alternative.*

The p-value is the probability of getting a result at least as extreme as the one observed, under the assumption that the null hypothesis is true — that there is no relation between the variables.

A small p-value means that it would be very unlikely to observe this result by random chance; therefore, it is likely that something is causing it. In a well-designed experiment, the cause can be attributed to the independent variable.

*Just because a result is not significant does not necessarily mean that the null hypothesis is definitively true; it just means we did not find evidence to say otherwise. The same goes for the alternative. Just because a result is significant does not mean it is the end-all explanation; we just have evidence to support the conclusion. That’s not a go-ahead for all you conspiracy theorists out there to say “Gotcha!” If the results can be observed time and time again, then that’s more and more evidence to support the explanation.

And just to add, in most of science, we need a p-value of less than .05, which means we’re only accepting a 5 percent chance of being wrong (saying there’s an effect when there isn’t one).
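The drug-and-heart-rate example above can be sketched the same way: if the drug did nothing, the group labels would be meaningless, so reshuffling the labels shows how often a mean difference this large appears by chance. All numbers below are invented for illustration.

```python
import random

random.seed(1)

control = [72, 75, 70, 78, 74, 71, 76, 73]   # resting heart rate, bpm (made up)
treated = [66, 69, 64, 70, 67, 65, 68, 66]   # after the hypothetical drug (made up)

observed_diff = sum(control) / len(control) - sum(treated) / len(treated)

# Null hypothesis: the drug has no effect, so which group a person is in
# shouldn't matter. Reshuffle the pooled values into two fake "groups"
# and count how often the fake difference is at least as big.
pooled = control + treated
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:8], pooled[8:]
    diff = sum(a) / 8 - sum(b) / 8
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(f"mean difference = {observed_diff:.2f} bpm, p = {p_value:.4f}")
```

If `p_value` lands below the .05 cutoff mentioned above, we reject the null and attribute the difference to the independent variable (the drug), assuming the experiment was well designed.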

I’m learning about this this week in Statistics class. It’s all very interesting! As much as the text, PowerPoint, lecture and homework taught me, I still got a lot out of the Khan Academy video about it. I recommend it!

https://www.khanacademy.org/math/statistics-probability/significance-tests-one-sample/more-significance-testing-videos/v/hypothesis-testing-and-p-values

5-year-old explanation first:

Null hypothesis: the thing we are testing does not cause a big difference.

P-value: the chance that we’d see results like these from chance alone, not because of a real difference.

Confidence level: how sure I want to be that it wasn’t just by chance that we got these results.

Effect size: how big the effect of the thing we are testing was.

Adult explanation:

Null hypothesis — “There is no significant difference between the things being tested.” It means basically exactly that phrase. We reject it if we find there is a big difference.

P-value — a scale from 0 to 1, or 0% to 100%. It tells us: if there really were no difference, what is the chance of getting results this extreme? A p of .05 means a 5% chance. (It cannot determine whether a hypothesis is true or whether results are important.)

Confidence level — normally it’s 95%, meaning that if you reject the null hypothesis, you’re accepting a 5% chance that your rejection is a mistake.

YOU SHOULD NOT USE ONLY THE P-VALUE! It does not tell you the size of the effect; it can basically only tell you how unlikely it is that your results came from random chance alone. Relying on it by itself is considered inappropriate, and statisticians have had to plead with researchers to stop doing it. [https://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503](https://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503)

1. **Example**: I want to know if loud noises make people perform worse on a math test. So NULL HYPOTHESIS: loud noises do not cause worse math test results.
2. Now I set a confidence level, i.e. how sure I want to be that my analysis gives a reasonably correct assessment. The standard is 95%.
3. So now we run the tests and have all the data. We do what’s called a hypothesis test, and wow, there are a lot of different tests for these, and they get complicated.
4. So we get a p-value, and we hope it’s less than .05. If it is, that means there is a significant difference between loud noises and no loud noises.
5. But the p-value doesn’t tell us how big the difference is, just that it’s significant. A common measure of effect size I’ve used is COHEN’S D. The effect sizes range from .2 (small) and .5 (medium) to .8 (large) and 1.3 (very large).
6. If my effect size is .2, yeah, I can say LOOK GUYS, AUDIO REALLY AFFECTS TEST RESULTS!! But in reality it’s not by a whole lot.
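The Cohen’s d step can be sketched in a few lines of Python. The test scores below are invented, and the simple pooled standard deviation assumes equal group sizes.

```python
import statistics

quiet = [82, 85, 78, 90, 84, 88, 80, 86]   # math scores, quiet room (made up)
noisy = [80, 83, 77, 88, 82, 85, 79, 84]   # math scores, loud noises (made up)

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard
    deviation (simple average of variances; equal group sizes assumed)."""
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    pooled = ((sa ** 2 + sb ** 2) / 2) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled

d = cohens_d(quiet, noisy)
print(f"Cohen's d = {d:.2f}")   # rough scale: .2 small, .5 medium, .8 large
```

Even if the noise difference came out statistically significant, a d near .2 would mean the practical effect is small, which is exactly the point of step 6.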

THINGS TO LOOK OUT FOR: Oftentimes researchers will pull intellectual tricks to hide the fact that their results aren’t as significant as they wanted. Here are some things they will do:

* Lower the confidence level to something much lower, like 85%.
* Use slippery wording when the p is greater than .05, like “provisionally significant” or “bordering on significant.”
* Not emphasize the effect size, not show the effect size, etc.

When we’re doing science, we’re trying to prove that our new idea is _wrong_, not right.

The Null Hypothesis is what we knew before we had our new idea, i.e. what we’ll see if our new idea _is_ wrong.

The _p_ value is the probability of getting results at least this extreme by luck alone, assuming the null hypothesis is true. When we design an experiment, we usually aim for a p of less than 5% (0.05).

Example. A coin is equally likely to flip heads or tails (this is the null hypothesis), but I think my coin always flips tails (my hypothesis). If I plan to flip once, and it’s a tail, I’ve “disproven” the null hypothesis and “proven” my hypothesis, but there was a 50% chance it was a fair coin that just happened to flip tails, so no one would care about my result. So, instead, I plan to flip the coin six times. That way, if I flip all tails, there’s only about a 1.6% chance that it was luck (p ≈ 0.016); even five tails in a row would already be about 3%, under the usual 5% cutoff.
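The arithmetic behind the coin example, as a quick check: under the null hypothesis (fair coin), the chance of n tails in a row is 0.5 to the nth power.

```python
# Probability of flipping n tails in a row with a fair coin.
for n in (1, 5, 6):
    p = 0.5 ** n
    print(f"{n} tails in a row: p = {p:.4f}")
# One flip gives p = 0.5, nowhere near significant; five flips give
# p ≈ 0.031 and six give p ≈ 0.016, both under the usual 0.05 cutoff.
```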

Note that we might flip five tails and a head. In that case, we’ve failed to prove the coin always flips tails (yay, science!), so we failed to prove our hypothesis, and the null hypothesis remains. So we make a new hypothesis, design a new experiment, and …