Mistakes in peer-reviewed papers are easy to find but hard to fix, report David B. Allison and colleagues: "In the course of assembling weekly lists of articles in our field, we began noticing more peer-reviewed articles containing what we call substantial or invalidating errors. These involve factual mistakes or veer substantially from clearly accepted procedures in ways that, if corrected, might alter a paper's conclusions."
Researchers at Sweden's Karolinska Institute and Royal Institute of Technology have developed a new data analysis workflow for shotgun mass spectrometry that could help improve the technique's quantitative reproducibility. Detailed in a paper published this month in Molecular & Cellular Proteomics, the approach uses a new quality scoring system that allows for more reliable recovery of missing data points across multiple mass spec runs.
Today the Federation of American Societies for Experimental Biology (FASEB) issued Enhancing Research Reproducibility, a set of recommendations aimed at promoting the reproducibility and transparency of biomedical and biological research.
Experimental results that don’t hold up to replication have caused consternation among scientists for years, especially in the life and social sciences (SN: 1/24/15, p. 20). In 2015 several research groups examining the issue reported on the magnitude of the irreproducibility problem. The news was not good.
Independently verifying research can help science regain its credibility, argues Laurie Zoloth. John Ioannidis's paper "Why Most Published Research Findings Are False" was published in August 2005 in PLOS Medicine and became one of the journal's most-cited articles. While climate sceptics, anti-vaccination campaigners and the rest of the pseudo-science community have dined out on the paper, arguably it has been a shot in the arm for science.
Brian Nosek of the University of Virginia and the Center for Open Science talks with EconTalk host Russ Roberts about the Reproducibility Project.