Here are the results of a Nature survey on reproducibility in the scientific literature. The authors themselves admit that it's a "confusing snapshot", but it shows that we're still arguing about what "reproducibility" means. Still, 52% of the respondents (over 1,500 scientists) said that there was "a significant crisis", so this issue is on people's minds. Interestingly, chemists were among the most confident in the literature of their own field (along with physicists and engineers). At the same time, chemists had the highest proportion of respondents who said that they'd been unable to reproduce someone else's experiment. I don't think that's necessarily a contradiction, though: chemistry has lower barriers to replication than many other fields, and we probably also attempt more replications in general.
More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.
Money-back guarantees are generally unheard of in biomedicine and healthcare. Recently, the US provider Geisinger Health System, in Pennsylvania, started a programme to give patients their money back if they were dissatisfied, which came as quite a surprise. Soon thereafter, the chief medical officer at Merck proposed an even bigger step: an "incentive-based approach" to non-reproducible results, aimed at what he termed a "reproducibility crisis" that "threatens the entire biomedical research enterprise."
The scientific community is bustling with projects to make published results more reliable. Efforts are under way to establish checklists, to revamp training in experimental design, and even to fund disinterested scientists to replicate others' experiments. A more efficient strategy would be to rework current incentives to put less emphasis on high-impact publications, but those systems are entrenched, and public funders and universities are ill-prepared for that scale of change. To catalyse change, industry must step up to the plate. I have learned this first hand, as head of the Structural Genomics Consortium (SGC), a research charity funded by business, government and other charities. If more companies contributed funds and expertise to efforts such as ours, I believe it would create a system that rewards science that is both cutting-edge and reproducible.
Researchers on social media are asking at what point replication efforts go from useful to wasteful. The problem of irreproducibility in science has gained widespread attention, but one aspect that is discussed less often is how to strike the right balance between replicating findings and building on well-established ones to move a field forward.
The 2016 GBSI Summit, "Research Reproducibility: Innovative Solutions to Drive Quality", welcomed premier life-science thought leaders, including Arizona State University biomarker researcher Joshua LaBaer, MD, PhD, and science correspondent and moderator Richard Harris (currently on leave from National Public Radio as a visiting scholar this spring at Arizona State University), to explore the forces driving the reproducibility problem and its impacts.