Why does a study attempting to replicate a nominally significant effect (p ≈ 0.05), using the same sample size as the original study, have only a 50% chance of rejecting the null?


3 Answers

Anonymous 0 Comments

Can you reword your question using proper sentence syntax? And what is the specific journal being referenced?

Anonymous 0 Comments

This needs a little more explanation from you, OP.

But as I interpret it, you're essentially asking: "why is a follow-up study using essentially the same setup not nearly as confident of having found something?"

That might be because some other effect influenced the first study (basically an unidentified systematic error), or something like that.

Anonymous 0 Comments

So first off, there is no real ELI5 answer: the question is derived from an academic journal, and the true answer is rooted in mathematical statistics and/or Bayesian statistics. (Experience with time series will also help.) A formal proof is way beyond ELI5, so I will skip it and do some hand-waving.

But to try to explain: we have two studies that are identical in every respect, and the studies themselves are not biased in any way (which makes u/Faleya's answer an answer to a different question, one you have not asked). Why, if study 1 produces an estimate exactly equal to the critical value, is the chance that study 2 rejects (that is, that its estimate lands above the critical value) 50%?

Let me ask you a question: **what is the best estimate we have after we conducted study 1? The result from study 1, right?** **So we could use the result from study 1 to predict the result of study 2.** Assuming for a moment that everything is normally distributed, the expected value of study 2's estimator is the estimate from study 1. Or, more formally: E[S(X_2) | X_1] = S(X_1) = Z, where Z is the critical value and S is the sample mean. Because the normal distribution is symmetric around its mean, half of the probability mass lies above the mean and half below. Thus, if we predict study 2 using the data from study 1, we obtain a 50/50 chance of rejecting before study 2 is even conducted.
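Here is a minimal Monte Carlo sketch of that claim. I'm assuming a one-sided z-test at alpha = 0.05 with the test statistic standardized so its standard error is 1; the setup is illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One-sided z-test at alpha = 0.05: reject when z > z_crit (~1.645).
z_crit = stats.norm.ppf(0.95)

# Hand-waving step: study 1 landed exactly on the critical value, and we
# take that estimate as our best guess of the true effect. Study 2 then
# has the same sample size, so its z-statistic is N(z_crit, 1).
n_sims = 1_000_000
z2 = rng.normal(loc=z_crit, scale=1.0, size=n_sims)

reject_rate = np.mean(z2 > z_crit)
print(f"Estimated rejection rate for study 2: {reject_rate:.3f}")  # ~0.500
```

Because study 2's sampling distribution is centered exactly on the critical value, by symmetry half of its possible outcomes fall above it, so the simulated rejection rate hovers around 0.5, matching the formal argument above.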

Why is this (specifically the part in bold) allowed? This is the hand-waving part. It seems weird and counterintuitive. At this point you have to trust me and the authors on this statement:

If we barely reject (or barely fail to reject) a null hypothesis, and we wish to do a follow-up study to determine whether the null is truly rejected, repeating the same experiment does not help at all. We need to do a meta-analysis, increase the sample size, or switch to Bayesian statistics (using the results from study 1 as a prior) to obtain better information.
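To see why pooling helps while a same-size rerun does not, here is a hedged sketch under the same standardized one-sided z-test as above. The fixed-effect pooling of two equal-size studies is my own illustration of the meta-analysis route, not the paper's specific method:

```python
import numpy as np
from scipy import stats

# Hypothetical scenario: two studies, each landing exactly at the one-sided
# alpha = 0.05 boundary. Pooling the raw data averages the two estimates but
# shrinks the standard error by sqrt(2), so the pooled z grows by sqrt(2).
z1 = z2 = stats.norm.ppf(0.95)       # each study's z ~ 1.645
z_pooled = (z1 + z2) / np.sqrt(2)    # ~2.326

p_single = stats.norm.sf(z1)         # ~0.05 for either study alone
p_pooled = stats.norm.sf(z_pooled)   # ~0.01 for the combined evidence
print(f"single-study p: {p_single:.3f}, pooled p: {p_pooled:.4f}")
```

Roughly speaking, a normal-normal Bayesian update that uses study 1's result as the prior for study 2 (with equal precisions and a flat initial prior) arrives at the same pooled posterior, which is why the answer lists the Bayesian route alongside meta-analysis.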