Need to find a replication partner, or collaborator? There’s an online platform for that

Do researchers need a new "Craigslist"? We were recently alerted to a new online platform called StudySwap by one of its creators, who said it was partly inspired by one of our posts. The platform creates the kind of "online marketplace" that researchers have previously called for, connecting scientists with willing partners – for instance, a team looking for someone to replicate its results, or a lab looking for results it can replicate. As co-creators Christopher Chartier of Ashland University and Randy McCarthy of Northern Illinois University tell us, having a place where researchers can find each other more efficiently "is in everyone’s best interest."

Psychology’s reproducibility problem is exaggerated – say psychologists

In August 2015, a team of 270 researchers reported the largest-ever single-study audit of the scientific literature. Led by Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia, the Reproducibility Project attempted to replicate studies in 100 psychology papers. According to one of several measures of reproducibility, just 36% could be confirmed; by another statistical measure, 47% could. Not so fast, says Daniel Gilbert, a psychologist at Harvard University. Because of the way the Reproducibility Project was conducted, its results say little about the overall reliability of the psychology papers it tried to validate, he argues. "The number of studies that actually did fail to replicate is about the number you would expect to fail to replicate by chance alone — even if all the original studies had shown true effects."
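Gilbert's argument is essentially about statistical power: even if every underlying effect were real, a replication with modest power will miss the p < .05 threshold some of the time. Below is a minimal simulation sketch of that point; the numbers (a true effect of d = 0.4 and 50 participants per group) are purely illustrative and are not taken from the Reproducibility Project.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumptions (not the Reproducibility Project's actual values):
# every original finding reflects a true effect of d = 0.4, and each
# replication uses 50 participants per group.
true_d, n_per_group, n_replications = 0.4, 50, 100

failures = 0
for _ in range(n_replications):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value >= 0.05:  # counted as a "failed" replication under the p < .05 criterion
        failures += 1

print(f"{failures}/{n_replications} replications failed despite every effect being real")
```

With these assumptions, roughly half of the replications come back non-significant through sampling noise alone, which is the kind of baseline failure rate Gilbert is pointing to.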

Evaluating replicability of laboratory experiments in economics

The reproducibility of scientific findings has been called into question. To contribute data about reproducibility in economics, we replicate 18 studies published in the American Economic Review and the Quarterly Journal of Economics in 2011–2014. All replications follow predefined analysis plans publicly posted prior to the replications, and have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We find a significant effect in the same direction as the original study for 11 replications (61%); on average the replicated effect size is 66% of the original. The reproducibility rate varies between 67% and 78% for four additional reproducibility indicators, including a prediction market measure of peer beliefs.
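The 90%-power criterion is easiest to see as a sample-size calculation: given the effect size reported in the original study, how many participants does the replication need so that a two-sided test at the 5% level detects it 90% of the time? Here is a minimal sketch using statsmodels' power solver for a two-sample t-test; the original effect size is assumed for illustration, not taken from any of the 18 studies.

```python
# Sketch of the sample-size logic behind the "at least 90% power" criterion,
# for a two-sample t-test. The effect size is assumed for illustration only.
from statsmodels.stats.power import TTestIndPower

original_d = 0.5  # Cohen's d from a hypothetical original study

n_per_group = TTestIndPower().solve_power(effect_size=original_d,
                                          alpha=0.05,
                                          power=0.90,
                                          alternative='two-sided')
print(f"Need about {n_per_group:.0f} participants per group")
```

With d = 0.5 this works out to roughly 85 participants per group; smaller original effects drive the required sample size up quickly.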

A Bayesian Perspective on the Reproducibility Project: Psychology

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors – a quantity that can express the strength of evidence for a hypothesis, but equally for the null hypothesis – for a large subset (N = 72) of the original papers and their corresponding replication attempts.
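One common way to obtain such a Bayes factor from a reported t-test is the default JZS prior of Rouder and colleagues (not necessarily the exact approach of this paper). The minimal sketch below assumes pingouin's bayesfactor_ttest helper, and the t statistic and group sizes are invented for illustration rather than drawn from any of the 72 papers.

```python
# Minimal sketch: default (JZS-prior) Bayes factor for a two-sample t-test result.
# The t statistic and group sizes are invented for illustration.
import pingouin as pg

t_observed, n1, n2 = 2.3, 40, 40   # hypothetical replication result
bf10 = pg.bayesfactor_ttest(t_observed, n1, n2)

print("BF10 =", bf10)  # values above 1 favor the effect, values below 1 favor the null
```

Unlike a p-value, a Bayes factor well below 1 can be read as positive evidence for the null, which is what makes it useful for judging replication attempts that come back non-significant.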

How Many Replication Studies are Enough?

Researchers on social media ask at what point replication efforts go from useful to wasteful. The problem of irreproducibility in science has gained widespread attention, but one aspect that is discussed less often is how to strike the right balance between replicating existing findings and building on well-established ones to move a field forward.