Posts about reproducible papers (old posts, page 5)

An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014-2017)

Serious concerns about research quality have catalyzed a number of reform initiatives intended to improve transparency and reproducibility and thus facilitate self-correction, increase efficiency, and enhance research credibility. Meta-research has evaluated the merits of individual initiatives; however, this may not capture broader trends reflecting the cumulative contribution of these efforts. In this study, we evaluated a broad range of indicators related to transparency and reproducibility in a random sample of 198 articles published in the social sciences between 2014 and 2017. Few articles indicated availability of materials (15/96, 16% [95% confidence interval, 9% to 23%]), protocols (0/103), raw data (8/103, 8% [2% to 15%]), or analysis scripts (3/103, 3% [1% to 6%]), and no studies were pre-registered (0/103). Some articles explicitly disclosed funding sources (or lack thereof; 72/179, 40% [33% to 48%]) and some declared no conflicts of interest (32/179, 18% [13% to 24%]). Replication studies were rare (2/103, 2% [0% to 4%]). Few studies were included in evidence synthesis via systematic review (6/96, 6% [3% to 11%]) or meta-analysis (2/96, 2% [0% to 4%]). Slightly less than half the articles were publicly available (95/198, 48% [41% to 55%]). Minimal adoption of transparency and reproducibility-related research practices could be undermining the credibility and efficiency of social science research. The present study establishes a baseline that can be revisited in the future to assess progress.

Replication Redux: The Reproducibility Crisis and the Case of Deworming

In 2004, a landmark study showed that an inexpensive medication to treat parasitic worms could improve health and school attendance for millions of children in many developing countries. Eleven years later, a headline in the Guardian reported that this treatment, deworming, had been "debunked." The pronouncement followed an effort to replicate and re-analyze the original study, as well as an update to a systematic review of the effects of deworming. This story made waves amidst discussion of a reproducibility crisis in some of the social sciences. This paper explores what it means to "replicate" and "reanalyze" a study, both in general and in the specific case of deworming. The paper reviews the broader replication efforts in economics, then examines the key findings of the original deworming paper in light of the "replication," "reanalysis," and "systematic review." The paper also discusses the nature of the link between this single paper's findings, other papers' findings, and any policy recommendations about deworming. This example provides a perspective on the ways replication and reanalysis work, the strengths and weaknesses of systematic reviews, and whether there is, in fact, a reproducibility crisis in economics.

Open and Reproducible Research on Open Science Framework

By implementing more transparent research practices, authors have the opportunity to stand out and showcase work that is more reproducible, easier to build upon, and more credible. Scientists gain by making work easier to share and maintain within their own laboratories, and the scientific community gains by making underlying data or research materials more available for confirmation or making new discoveries. The following protocol gives authors step‐by‐step instructions for using the free and open source Open Science Framework (OSF) to create a data management plan, preregister their study, use version control, share data and other research materials, or post a preprint for quick and easy dissemination.
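The published protocol walks through these steps in the OSF web interface. For readers who prefer scripting, a rough sketch of the data-sharing step using the third-party osfclient Python package (not part of the protocol itself) might look like the following; the project id, token placeholder, and file paths are illustrative assumptions, and the exact calls should be checked against the osfclient documentation.

# Minimal sketch, assuming an existing OSF project and the osfclient package
# (pip install osfclient). The project id "abc12", the token placeholder, and
# the file paths below are hypothetical examples, not values from the protocol.
from osfclient import OSF

osf = OSF(token="YOUR_OSF_PERSONAL_ACCESS_TOKEN")  # token created in OSF account settings
project = osf.project("abc12")                     # five-character OSF project id
storage = project.storage("osfstorage")            # default OSF storage provider

# Upload an analysis script and a data file so collaborators and reviewers
# can retrieve the exact versions associated with the manuscript.
with open("analysis/clean_data.py", "rb") as fp:
    storage.create_file("code/clean_data.py", fp, update=True)

with open("data/raw_survey.csv", "rb") as fp:
    storage.create_file("data/raw_survey.csv", fp, update=True)

# Confirm what is now stored in the project.
for f in storage.files:
    print(f.path)

The same project can then be registered (to preregister the study) or linked to a preprint through the OSF web interface, as the protocol describes; the script only automates the file-sharing portion.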

Rigor, Reproducibility, and Responsibility: A Quantum of Solace

Lack of reproducibility in biomedical science is a serious and growing issue. Two publications, in 2011 and 2012, along with other analyses, documented failures to replicate key findings and other fundamental flaws in high-visibility research articles. This triggered action among funding bodies, journals, and other change-agents. Here, I examine well-recognized and underrecognized factors that contribute to experimental failure and suggest individual and community approaches that can be used to attack these factors and eschew the SPECTRE of irreproducibility.

Encouraging Reproducibility in Scientific Research of the Internet

Reproducibility of research in Computer Science (CS), and in the field of networking in particular, is a well-recognized problem. For several reasons, including the sensitive and/or proprietary nature of some Internet measurements, the networking research community pays limited attention to the reproducibility of results, instead tending to accept papers that appear plausible. This article summarises a 2.5-day Dagstuhl seminar on Encouraging Reproducibility in Scientific Research of the Internet held in October 2018. The seminar discussed challenges to improving the reproducibility of scientific Internet research, and developed a set of recommendations that we as a community can undertake to initiate a cultural change toward reproducibility of our work. It brought together people from both academia and industry to set expectations and formulate concrete recommendations for reproducible research. This iteration of the seminar was scoped to computer networking research, although the outcomes are likely relevant for a broader audience from multiple interdisciplinary fields.