Posts about popular news

Building research evidence towards reproducibility of animal research

Since our debut in late 2006, PLOS ONE has strived to promote best practices in research reporting as a way to improve reproducibility. We have supported initiatives for increased transparency, as well as the gathering of evidence that can inform improvements in the quality of reporting in research articles. In line with this commitment, PLOS ONE collaborated in a randomized controlled trial (RCT) testing the impact of an intervention that asked authors to complete a reporting checklist at the time of manuscript submission. The results of this trial have recently been posted on bioRxiv (1) and are a further step toward building the evidence base needed to inform editorial interventions that improve reporting quality.

Before reproducibility must come preproducibility

The lack of standard terminology means that we do not clearly distinguish between situations in which there is not enough information to attempt repetition, and those in which attempts do not yield substantially the same outcome. To reduce confusion, I propose an intuitive, unambiguous neologism: ‘preproducibility’. An experiment or analysis is preproducible if it has been described in adequate detail for others to undertake it. Preproducibility is a prerequisite for reproducibility, and the idea makes sense across disciplines.

Scientific Research: Reproducibility and Bias in Chemistry

When scientists can recreate research results published by other scientists, the research is considered reproducible. But what happens when the results don’t match? Then the initial research is non-reproducible. Reproducibility, or the lack of it, seems straightforward: it suggests that an experimental result is either valid or invalid. In fact, researchers affiliated with Stanford University, Tufts University, and the University of Ioannina in Greece concluded in 2005 that most published research findings are false. How do those invalid results end up in scientific papers? A group of Stanford researchers concluded that, in many cases, bias is to blame.
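That 2005 conclusion rests on simple arithmetic relating pre-study odds, statistical power, and the false-positive rate. Here is a minimal sketch of the bias-free core of that calculation (the full model in the paper also accounts for bias and multiple teams); the function name and all parameter values below are illustrative assumptions, not figures from the paper:

```python
# Sketch of the positive-predictive-value arithmetic behind the 2005
# "most findings are false" argument. Illustrative only.

def ppv(prior_odds, power, alpha):
    """Probability that a 'significant' finding reflects a true effect.

    prior_odds: pre-study odds that a tested relationship is real
    power:      1 - beta, chance a real effect reaches significance
    alpha:      false-positive rate when the null is true
    """
    true_positives = power * prior_odds
    false_positives = alpha  # per unit of null relationships tested
    return true_positives / (true_positives + false_positives)

# Example: an exploratory field where 1 in 10 tested hypotheses is real,
# power is low (0.35), and alpha is the conventional 0.05.
print(ppv(prior_odds=0.1, power=0.35, alpha=0.05))  # ~0.41
```

With those (assumed) inputs, fewer than half of the statistically significant findings are true: low prior odds plus low power are enough to tip a literature toward mostly false positives, even before any bias is added.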

Dealing with the reproducibility crisis: what can ECRs do about it?

Unless you’ve been living under a rock (no judgment, by the way), I’m sure you’ve heard about the reproducibility crisis in scientific research. In 2016, two posts on this blog covered the main causes of irreproducibility and what can be done about them, and how we can reform scientific publishing to value integrity. To briefly recap, a study published in PLOS Biology estimated that half of preclinical research is not reproducible. The estimated price tag on this irreproducibility is alarming: a whopping $28 billion. In my opinion, however, the most troubling cost of this crisis is its impact on public trust in science.

The dismal science remains dismal, say scientists

The paper inhales more than 6,700 individual pieces of research, gathered into meta-analyses that themselves encompass 64,076 estimates of economic outcomes. That’s right: it’s a meta-meta-analysis. And in this case, Doucouliagos never met a meta-analysis he didn’t dislike. Of the fields covered in this corpus, half were statistically underpowered: the studies were too small to reliably detect the effects they reported. And most of the ones that were adequately powered overestimated the size of the effect they purported to show. Economics has a profound effect on policymaking and our understanding of human behavior. For a science, this is, frankly, dismal.
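A short simulation makes the link between those two findings concrete. The sketch below is my own illustration, not the paper’s analysis, and every parameter value in it is an assumption; it shows the "winner’s curse": when studies are underpowered, the estimates that do clear the significance bar are, on average, far larger than the true effect.

```python
import numpy as np

# Illustrative simulation: many identical two-group studies of a small
# true effect, analyzed with a two-sided z-test at alpha = 0.05.
rng = np.random.default_rng(0)
true_effect = 0.2        # true standardized mean difference (assumed)
n = 30                   # per-group sample size -> low power (assumed)
sims = 100_000

se = np.sqrt(2 / n)                      # std. error of the mean difference
estimates = rng.normal(true_effect, se, sims)
z = estimates / se
significant = np.abs(z) > 1.96           # which studies reach significance

print(f"power: {significant.mean():.2f}")                            # ~0.12
print(f"mean estimate, all studies: {estimates.mean():.2f}")         # ~0.20
print(f"mean estimate, significant: {estimates[significant].mean():.2f}")  # ~0.59
```

Under these assumptions only about 12% of studies reach significance, and those that do report an effect nearly three times the true size. A literature filtered on statistical significance therefore overstates effects, which is exactly the pattern the meta-meta-analysis describes.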