A Bayesian perspective on the Reproducibility Project: Psychology

Abstract

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors, which can quantify evidence for the null hypothesis as well as against it, for a large subset (N = 72) of the original papers and their corresponding replication attempts. Our computation accounts for the likely scenario that publication bias distorted the originally published results. Overall, 75% of the studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempt provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that more widespread adoption of Bayesian methods is desirable.
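
For readers who want to compute a comparable quantity themselves, the sketch below evaluates the default one-sample JZS Bayes factor of Rouder et al. (2009) by numerical integration. This is an illustrative Python implementation, not the authors' code: the function name jzs_bf10 and the Cauchy prior scale are our own choices, and the paper's analysis additionally corrected for publication bias, which this sketch omits.

    # Minimal sketch of a default (JZS) Bayes factor for a one-sample t test.
    # Assumption: a Cauchy(0, r) prior on standardized effect size under H1,
    # equivalent to an inverse-gamma(1/2, r^2/2) mixing prior on g.
    import numpy as np
    from scipy import integrate

    def jzs_bf10(t, n, r=np.sqrt(2) / 2):
        """Return BF10 for a one-sample t statistic `t` with sample size `n`."""
        nu = n - 1  # degrees of freedom
        # Marginal likelihood under H0 (effect size fixed at zero),
        # up to a constant that cancels in the ratio below.
        m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

        def integrand(g):
            # Likelihood at mixing parameter g, weighted by the
            # inverse-gamma(1/2, r^2/2) prior density on g.
            return ((1 + n * g) ** -0.5
                    * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                    * np.sqrt(r**2 / (2 * np.pi))
                    * g ** -1.5 * np.exp(-r**2 / (2 * g)))

        m1, _ = integrate.quad(integrand, 0, np.inf)
        return m1 / m0

    # Example: a "just significant" result carries only ambiguous evidence,
    # while a large t from a large sample clears the strong-evidence bar.
    print(jzs_bf10(t=2.0, n=30))   # near 1: ambiguous
    print(jzs_bf10(t=4.5, n=100))  # well above 10: strong evidence

Under this (assumed) default prior, a t statistic near the conventional significance cutoff yields a Bayes factor close to 1, which illustrates the abstract's point that many "significant" original findings carried only weak evidence.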

BibTeX

@article{etz_vandekerckhove:2016:Reproducibility,
    title   = {{A} {B}ayesian perspective on the {R}eproducibility {P}roject: {P}sychology},
    author  = {Etz, Alexander and Vandekerckhove, Joachim},
    year    = {2016},
    journal = {PLoS ONE},
    volume  = {11},
    pages   = {e0149794}
}