It is important for users of research to know how likely it is that reported findings are true. The Social Science Replication Project finds that, in high-powered replication experiments, only 13 of 21 high-profile findings could be replicated. Investigating the factors that contribute to reliable results offers new opportunities for the social sciences.
If the results in a published study can’t be replicated in subsequent experiments, how can you trust what you read in scientific journals? One international group of researchers is well aware of this reproducibility crisis and has been striving to hold scientists accountable. In its most recent test, the group attempted to replicate 21 studies published between 2010 and 2015 in two of the top scientific journals, Science and Nature. Only 13 of the replications produced the same results as the original studies.
In January, Bruce Beutler, an immunologist at University of Texas Southwestern Medical Center and winner of the 2011 Nobel Prize in Physiology or Medicine, emailed Science editor-in-chief Jeremy Berg to report that attempts to replicate the findings in "MAVS, cGAS, and endogenous retroviruses in T-independent B cell responses" had weakened his confidence in the original results. The paper had found that virus-like elements in the human genome play an important role in the immune system’s response to pathogens. Although Beutler and several co-authors requested retraction right off the bat, the journal discovered that two co-authors disagreed, which Berg told us drew out the retraction process. In an attempt to resolve the situation, the journal waited for Beutler’s lab to perform another replication attempt. Those findings were inconclusive, and the dissenting authors continued to push back against retraction.
For research methods to be consistent, accessible and reproducible, we need universal, widely understood standards for research that all scientists adhere to. NPL has been responsible for maintaining fundamental standards and units for more than 100 years and is now engaged in pioneering work to create a set of “gold standards” for scientific methodologies, materials, analyses and protocols. These standards will be based on exhaustive testing at a large number of laboratories, carried out in tandem with industry and with national and international standardisation organisations.
To address this critical need, the Laura and John Arnold Foundation has awarded a grant to a coalition of groups representing the international Earth and space science community, convened by the American Geophysical Union (AGU), to develop standards that will connect researchers, publishers, and data repositories in the Earth and space sciences. The goal is to enable FAIR (findable, accessible, interoperable, and reusable) data – a concept first developed by Force11.org – on a large scale, which will accelerate scientific discovery and enhance the integrity, transparency, and reproducibility of these data. The resulting set of best practices will include: metadata and identifier standards; data services; common taxonomies; landing pages at repositories that expose the metadata and standard repository information; standard data citation; and standard integration into editorial peer-review workflows.
A big part of this problem has to do with what’s been called a “reproducibility crisis” in science – many studies, when run a second time, don’t come up with the same results. Scientists are worried about this situation, and high-profile international research journals have raised the alarm too, calling on researchers to put more effort into ensuring their results can be reproduced rather than only striving for splashy, one-off outcomes. Concerns about irreproducible results in science resonate outside the ivory tower as well, because a lot of this research translates into information that affects our everyday lives.