Lots of people are giving you kind of decent answers but are missing some nuance. Let me help. I got an A in stats for my bio degree.
* A p-value is **one tool of many** to quantify the “usefulness” of scientific data.
* In stats, every way of teasing the data around has tradeoffs and assumptions. Certain kinds of data do funny things to any standard formula, and in some cases can break it entirely. So it is important to note that statisticians **select tools carefully** based on the data they expect to see (and in some cases, based on the data they actually got). That said, a p-value is more like a screwdriver or ratchet than a saw: you will end up using that tool on lots of jobs. Another very, very common tool you should be familiar with is the **confidence interval**, which is also sometimes used to express whether a result is significant or not (the first sketch below this list shows both side by side).
* p-values are compared against an *alpha* value, which by convention is 0.05. This is why you **constantly** see “(p<0.05)”. But in reality, this cutoff is mostly an arbitrary choice of the scientific community. We have apparently collectively decided that a 4.9% false-positive risk is acceptable and a 5.1% risk is not. In some fields, the alpha is 0.01 because we want to be **really sure**.
* p-values do **not** necessarily track with effect size. Say you design a drug to lower blood pressure, give it to 10,000 people, and set aside another 5,000 as a control. 9,800 of the treated people have their blood pressure lowered by 2 points compared to control, and the rest show no change. Without actually doing the math, that will probably generate a significant result. But do you really care? The effect size is not clinically meaningful. Who would use that drug? (The second sketch below this list shows this trap with numbers.)
* p-values and confidence intervals only deal with [one of the two kinds of sampling error](https://support.minitab.com/en-us/minitab/21/help-and-how-to/statistics/basic-statistics/supporting-topics/basics/type-i-and-type-ii-error/). Good scientists also do a *power analysis* in many cases when choosing their sample sizes, meaning that they are thinking ahead about the relative risk of type I versus type II errors (the last sketch below this list shows one).
* p-values don’t deal with other problems in methodology. If I take bad measurements, if my measurements don’t actually mean what I think they mean, if I do some math wrong, etc., those problems totally bypass the p-value. A p-value is calculated on the assumption that we’re doing everything else correctly and above board.
* In general, it is not only OK but actually encouraged to carefully plan a study by doing some analysis before even collecting data. p-hacking is generally done **after** data are collected. One way to avoid this is by telling everyone (or a trusted authority monitoring your study) what your plan is ahead of time so that you can’t just change it as soon as your results aren’t to your liking. This is called pre-registration.
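To make the confidence-interval and alpha points concrete, here is a minimal sketch in Python (my choice of language and libraries, not anything from the original question): a two-sample t-test on made-up blood pressure numbers, with the p-value compared against alpha and a confidence interval computed for the same difference.

```python
# Minimal sketch: p-value vs. alpha, plus a confidence interval,
# for an invented two-group blood pressure comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=140, scale=15, size=50)  # hypothetical control group
treated = rng.normal(loc=132, scale=15, size=50)  # hypothetical treated group

t_stat, p_value = stats.ttest_ind(treated, control)
alpha = 0.05
verdict = "significant" if p_value < alpha else "not significant"
print(f"p = {p_value:.4f} -> {verdict} at alpha = {alpha}")

# 95% confidence interval for the difference in means (pooled-variance
# version, matching the equal-variance t-test above).
n1, n2 = len(treated), len(control)
diff = treated.mean() - control.mean()
pooled_var = ((n1 - 1) * treated.var(ddof=1)
              + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"95% CI for the difference: ({diff - t_crit * se:.1f}, {diff + t_crit * se:.1f})")
```

Note how the two tools agree: if the 95% CI for the difference excludes zero, the two-sided test at alpha = 0.05 comes out significant, which is why you see either one used to claim significance.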
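Here is the “significant but clinically meaningless” trap from the blood-pressure bullet, again with invented numbers: a 2-point average drop, large samples, a tiny p-value, and a tiny effect size.

```python
# Minimal sketch: a trivial effect becomes "significant" with big samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=140, scale=15, size=5_000)   # hypothetical controls
treated = rng.normal(loc=138, scale=15, size=10_000)  # mean only 2 points lower

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: a standardized effect size, which the p-value ignores.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (control.mean() - treated.mean()) / pooled_sd

print(f"p = {p_value:.1e}")           # astronomically small: "significant!"
print(f"Cohen's d = {cohens_d:.2f}")  # ~0.13, a trivial effect in practice
```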
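And the power-analysis point as a sketch, using statsmodels (again my library choice): solve for the per-group sample size needed to detect a given effect, so both kinds of error are budgeted for before any data are collected.

```python
# Minimal sketch: choosing a sample size up front via power analysis.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # smallest Cohen's d we care about detecting
    alpha=0.05,       # accepted type I error rate
    power=0.80,       # 1 - beta, i.e. accepted type II error risk
)
print(f"Need ~{n_per_group:.0f} subjects per group")  # roughly 64
```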
[Here](https://blog.minitab.com/en/adventures-in-statistics-2/understanding-hypothesis-tests-significance-levels-alpha-and-p-values-in-statistics) is a good general article about p-values that shows the actual normal distribution curve.