You download the data and complete your analysis with ample time to spare. Then, just before deadline, your collaborator lets you know that they've "fixed a data error". Now, you have to do your analysis all over again. This is the reproducibility horror story.
Reproducible Science: Promoting Open Science
University of Minnesota School of Public Health Assistant Professor Julian Wolfson was named an associate editor for reproducibility for the Journal of the American Statistical Association (JASA). The appointment is in support of the journal’s new requirement for authors to submit scientific code and data for review along with their papers.
Scientists, public servants, and patient advocates alike increasingly question the validity of published scientific results, endangering the public’s acceptance of science. Here, I argue that emerging flaws in the integrity of the peer review system are largely responsible. Distortions in peer review are driven by economic forces and enabled by a lack of accountability of journals, editors, and authors. One approach to restoring trust in the validity of published results may be to establish basic rules that render peer review more transparent, such as publishing the reviews (a practice already embraced by some journals) and monitoring not only the track records of authors but also of editors and journals.
"Promoting Responsible Scientific Research" is the title of a new report just released by the American Academy of Microbiology, a component of ASM. It grew out of an Academy colloquium held last October to tackle an issue that is unfortunately becoming well known both inside and outside scientific circles—the lack of rigor in science. I am delighted that the Academy and ASM are taking on this difficult issue and am grateful to all the participants, the Academy steering committee, and especially to Dr. Arturo Casadevall of Johns Hopkins University, who chaired the colloquium.
Sanjay Srivastava’s joke syllabus ("A Joke Syllabus With a Serious Point: Cussing Away the Reproducibility Crisis," The Chronicle, August 15) and Lee Jussim's blog post on Psychology Today about educating psychology students in light of the reproducibility crisis led me to reflect on my department’s recent curriculum changes. We have retooled or created from scratch multiple courses that engage something few of my colleagues seem to consider relevant to the problem: intellectual history. They instead hold firmly to the dictates of positivism, insisting that better training in the methods of science will be the source of rescue. Where does this prejudice come from? Might it be time to pave a new way?
Amid discussions around scientific reproducibility, the leading biomedical journal Cell will introduce a redesigned methods section to help authors clearly communicate how experiments are conducted. The first papers using Structured, Transparent, Accessible Reporting (STAR) Methods, a format that incorporates guidelines promoted by reagent-labeling and animal-experimentation initiatives, appear in Cell on August 25. The format will then be adopted by other Cell Press journals over the next year, starting with Cell Systems in the fall.
A major contributor to the scientific reproducibility crisis has been that the results from homogeneous, single-center studies do not generalize to heterogeneous, real world populations. Multi-cohort gene expression analysis has helped to increase reproducibility by aggregating data from diverse populations into a single analysis. To make the multi-cohort analysis process more feasible, we have assembled an analysis pipeline which implements rigorously studied meta-analysis best practices. We have compiled and made publicly available the results of our own multi-cohort gene expression analysis of 103 diseases, spanning 615 studies and 36,915 samples, through a novel and interactive web application. As a result, we have made both the process of and the results from multi-cohort gene expression analysis more approachable for non-technical users.
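The core statistical step behind aggregating cohorts as described above is a meta-analysis of per-study effect sizes. As a hypothetical illustration (the pipeline's actual implementation is not given here), the following sketch pools one gene's effect sizes across three cohorts using the standard DerSimonian-Laird random-effects model; the example effect sizes and variances are invented:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes via the DerSimonian-Laird
    random-effects model; returns (pooled estimate, standard error)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw  # fixed-effect mean
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Example: one gene's log fold-changes (and variances) from three cohorts
pooled, se = dersimonian_laird([0.8, 1.1, 0.5], [0.04, 0.09, 0.06])
print(round(pooled, 3), round(se, 3))
```

The random-effects model is the usual choice for heterogeneous, multi-center data because tau^2 explicitly models between-study variation rather than assuming all cohorts estimate one identical effect.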
Reproducibility is a defining feature of science. Lately, however, serious concerns have been raised regarding the extent to which the results of research, especially biomedical research, are easily replicated. In this Editorial, we discuss to what extent reproducibility is a significant issue in chemical research and then suggest steps to minimize problems involving irreproducibility in chemistry.
To make replication studies more useful, researchers must make more of them, funders must encourage them, and journals must publish them. No scientist wants to be the first to try to replicate another's promising study: much better to know what happened when others tried it. Long before replication or reproducibility became major talking points, scientists had strategies to get the word out. Gossip was one. Researchers would compare notes at conferences, and a patchy network would be warned about whether a study was worth building on. Or a vague comment might be buried in a related publication. Tell-tale sentences would start "In our hands", "It is unclear why our results differed …" or "Interestingly, our results did not …".
The Alan Turing Institute Symposium on Reproducibility for Data-Intensive Research was held on 6-7 April 2016 at the University of Oxford. It was organised by senior academics, publishers and library professionals representing the Alan Turing Institute (ATI) joint venture partners (the universities of Cambridge, Edinburgh, Oxford, UCL and Warwick), the University of Manchester, Newcastle University and the British Library. The key aim of the symposium was to address the challenges around reproducibility of data-intensive research in science, social science and the humanities. This report presents an overview of the discussions and makes some recommendations for the ATI to take forward.