Let's see that again

A few years ago, the question of whether scientific papers are reproducible would have been an odd thing to see in a newspaper. But not anymore: both the popular media and the journals themselves have been grappling with the topic, amid reports that far too many results can't be replicated. Large-scale efforts have begun to examine key papers in experimental psychology, among other areas. Reports from the biopharma industry about the number of interesting biology papers that don't hold up have stirred alarm as well. But as far as I can tell, chemistry has largely escaped the current rounds of criticism.

Reproducibility: Crisis or Not?

Here are the results of a Nature survey on reproducibility in the scientific literature. They themselves admit that it's a "confusing snapshot", but it shows that we're still arguing about what "reproducibility" means. 52% of the respondents (over 1,500 scientists) said that there was "a significant crisis", though, so this issue is on people's minds. Interestingly, chemists were among the most confident in the literature of their own field (as were physicists and engineers). At the same time, chemists had the highest proportion of respondents who said that they'd been unable to reproduce someone else's experiment. I don't think that's necessarily a contradiction, though. Chemistry is a field with lower barriers to replication than many others, and we also probably attempt more replications in general.

1,500 scientists lift the lid on reproducibility

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Contextual sensitivity in scientific reproducibility

Scientific progress requires that findings can be reproduced by other scientists. However, there is widespread debate in psychology (and other fields) about how to interpret failed replications. Many have argued that contextual factors might account for several of these failed replications. We analyzed 100 replication attempts in psychology and found that the extent to which the research topic was likely to be contextually sensitive (varying in time, culture, or location) was associated with replication success. This relationship remained a significant predictor of replication success even after adjusting for characteristics of the original and replication studies that previously had been associated with replication success (e.g., effect size, statistical power). We offer recommendations for psychologists and other scientists interested in reproducibility.

Money-back guarantees for non-reproducible results?

Money-back guarantees are generally unheard of in biomedicine and healthcare. Recently, the US provider Geisinger Health System, in Pennsylvania, started a programme to give patients their money back if they were dissatisfied. That came as quite a surprise. Soon thereafter, the chief medical officer at Merck delivered an even bigger one, proposing an "incentive-based approach" to non-reproducible results in response to what he termed a "reproducibility crisis" that "threatens the entire biomedical research enterprise."