Transparency and Reproducibility in Economics Research

There is growing interest in research transparency and reproducibility in economics and other scientific fields. We survey existing work on these topics within economics and discuss the evidence suggesting that publication bias, inability to replicate, and specification searching remain widespread problems in the discipline. We next discuss recent progress in this area, including improved research design, study registration and pre-analysis plans, disclosure standards, and open sharing of data and materials, drawing on experiences in both economics and other social sciences. We discuss areas where consensus is emerging on new practices as well as approaches that remain controversial, and we speculate about the most effective ways to make economics research more accurate, credible, and reproducible in the future.

Cancer Research Is Broken

There’s a replication crisis in biomedicine—and no one even knows how deep it runs. Many science funders share Sean Parker’s antsiness over all the wasted time and money. In February, the White House announced its plan to put $1 billion toward a similar objective—a “Cancer Moonshot” aimed at making research more techy and efficient. But recent studies of the research enterprise reveal a more confounding issue, one that won’t be solved with bigger grants and increasingly disruptive attitudes. The deeper problem is that much of cancer research in the lab—maybe even most of it—simply can’t be trusted. The data are corrupt. The findings are unstable. The science doesn’t work.

A practical guide for improving transparency and reproducibility in neuroimaging research

Recent years have seen an increase in alarming signals regarding the lack of replicability in neuroscience, psychology, and other related fields. To avoid a widespread crisis in neuroimaging research and a consequent loss of credibility in the public eye, we need to improve how we do science. This article aims to be a practical guide for researchers at any stage of their careers, helping them make their research more reproducible and transparent while minimizing the additional effort this requires. The guide covers three major topics in open science (data, code, and publications), offering practical advice and highlighting advantages of adopting open research practices that go beyond improved transparency and reproducibility.

Checklists vs. checkmate: Reproducibility key to premium surgery success

Traditionally, checkmate is a position in the game of chess in which a player’s king is in check with no way to remove the threat. The king cannot be captured, so the game ends when the king is checkmated. No premium surgeon ever wants to be checkmated at any stage of the surgical process, from preoperative to intraoperative to postoperative. Other etymologies suggest that checkmate once signified being “ambushed,” a feeling many of us have experienced in our surgical careers. One way to avoid becoming a checkmated surgeon is to create “checklists” covering everything from the first patient encounter to the final postoperative visit. Checklists can bring reproducibility to a surgical process that already yields successful outcomes in a premium surgeon’s practice.

Reproducibility in research results: the challenges of attributing reliability

Studies indicate that more than half of the experiments involving clinical trials of new drugs and treatments cannot be reproduced. John Ioannidis, at Stanford University in the US, goes further, arguing that most published research findings are actually false. Ioannidis is the author of a mathematical model predicting that the smaller the sample and the less stringent the experimental methodology, definitions, outcomes, and statistical analyses, the greater the probability of error. Furthermore, studies involving financial or other competing interests, or conducted in high-profile fields, are also more prone to false results.
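To make that prediction concrete, the following minimal sketch computes the positive predictive value (PPV) at the core of Ioannidis’s 2005 analysis in its simplest, bias-free form; the function name and the parameter values below are illustrative assumptions, not figures taken from the article.

    # PPV that a claimed finding is true, in the no-bias case of
    # Ioannidis (2005): PPV = power * R / (power * R + alpha),
    # where R is the pre-study odds that the probed relationship is
    # real, alpha is the Type I error rate, and power = 1 - beta.
    def ppv(R, alpha=0.05, power=0.80):
        return (power * R) / (power * R + alpha)

    # Illustrative (assumed) scenarios:
    print(round(ppv(0.50), 2))              # well-powered test of a plausible effect -> 0.89
    print(round(ppv(0.05, power=0.20), 2))  # underpowered test of a long shot -> 0.17

Lower power (the footprint of a small sample) and lower pre-study odds both pull the PPV down, which is exactly the dependence the model describes.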

Ten Major Errors in Obesity Research Discussed

A paper from investigators at the University of Alabama at Birmingham, recently published in Obesity, identifies key statistical errors commonly seen in obesity research and discusses how to identify and avoid them. “Our goal is to provide researchers and reviewers with a tutorial to improve the rigor of the science in future obesity studies,” said Brandon George, Ph.D., statistician in the University of Alabama at Birmingham Office of Energetics. “Investigators who conduct primary research may find the paper useful to read or share with statistical collaborators to obtain a deeper understanding of statistical issues, avoid making the discussed errors, and increase the reproducibility and rigor of the field. Editors, reviewers, and consumers will find valuable information allowing them to properly identify these common errors while critically reading the work of others.”