The Quest for Reproducible Science: Issues in Research Transparency and Integrity

A pre-conference event of the American Library Association's annual conference: "The credibility of scientific findings is under attack. While this crisis has several causes, none is more common or correctable than the inability to replicate experimental and computational research. This preconference will feature scholars, librarians, and technologists who are attacking this problem through tools and techniques to manage data, enable research transparency, and promote reproducible science. Attendees will learn strategies for fostering and supporting transparent research practices at their institutions."

Evaluating replicability of laboratory experiments in economics

The reproducibility of scientific findings has been called into question. To contribute data about reproducibility in economics, we replicate 18 studies published in the American Economic Review and the Quarterly Journal of Economics in 2011-2014. All replications follow predefined analysis plans publicly posted prior to the replications, and have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We find a significant effect in the same direction as the original study for 11 replications (61%); on average the replicated effect size is 66% of the original. The reproducibility rate varies between 67% and 78% for four additional reproducibility indicators, including a prediction market measure of peer beliefs.
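As a rough illustration of the power requirement described above, here is a minimal Python sketch of a sample-size calculation for a two-sample t-test design; the effect size is a hypothetical placeholder, not a value taken from any of the replicated studies.

```python
# Sketch: sample size per group needed to detect an original effect with 90%
# power at the 5% significance level, assuming a two-sample t-test design.
# The effect size below is a hypothetical placeholder (Cohen's d), not a value
# taken from the replicated studies.
from statsmodels.stats.power import TTestIndPower

original_effect = 0.5  # hypothetical original effect size (Cohen's d)

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=original_effect,
    alpha=0.05,              # 5% significance level
    power=0.90,              # at least 90% power
    ratio=1.0,               # equal group sizes
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 85-86 for d = 0.5
```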

Psychology’s reproducibility problem is exaggerated – say psychologists

In August 2015, a team of 270 researchers reported the largest ever single-study audit of the scientific literature. Led by Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia, the Reproducibility Project attempted to replicate studies in 100 psychology papers. According to one of several measures of reproducibility, just 36% could be confirmed; by another statistical measure, 47% could. Not so fast, says Daniel Gilbert, a psychologist at Harvard University. Because of the way the Reproducibility Project was conducted, its results say little about the overall reliability of the psychology papers it tried to validate, he argues. "The number of studies that actually did fail to replicate is about the number you would expect to fail to replicate by chance alone — even if all the original studies had shown true effects."
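To make the statistical intuition behind this dispute concrete, the sketch below models replication outcomes as binomial draws: even if every original finding were true, replications with limited power would still fail at some rate. The power value is an illustrative assumption, not an estimate from the Reproducibility Project.

```python
# Sketch: if each of 100 replication attempts detects a true effect with some
# probability ("power"), the number of successful replications is binomial.
# The assumed power below is illustrative, not an estimate from the project.
from scipy.stats import binom

n_replications = 100
assumed_power = 0.5  # hypothetical average power of the replication attempts

expected = n_replications * assumed_power
low, high = binom.interval(0.95, n_replications, assumed_power)
print(f"Expected successful replications: {expected:.0f}")
print(f"Central 95% range: {low:.0f} to {high:.0f}")
```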

Research Software Sustainability: Report on Knowledge Exchange workshop

The report introduces software sustainability, provides definitions, makes clear that software is not the same as data, and illustrates aspects of sustainability across the software lifecycle. The recommendations state that improving software sustainability requires a number of changes: some technical and others societal, some small and others significant. We must start by raising awareness of researchers' reliance on software. This goal will become easier if we recognise the valuable contribution that software makes to research and reward those who invest their time in developing reliable and reproducible software.

ACM SIGMOD 2016 Reproducibility Guidelines

SIGMOD Reproducibility has three goals: to highlight the impact of database research papers, to enable easy dissemination of research results, and to enable easy sharing of code and experimental set-ups. In short, the goal is to help build a culture in which sharing the results, code, and scripts of database research is the norm rather than the exception. The challenge is to do this efficiently, which means building technical expertise in doing better research by making it repeatable and shareable. The SIGMOD Reproducibility Committee is here to help you with this.

Statistical Challenges in Assessing and Fostering the Reproducibility of Scientific Results: Summary of a Workshop

The workshop summarized in this report was designed not to address the social and experimental challenges but instead to focus on statistical issues: improper data management and analysis, inadequate statistical expertise, incomplete data, and the difficulty of applying sound statistical inference to the available data.

Transparency and Openness Promotion (TOP) Guidelines

Transparency, open sharing, and reproducibility are core features of science, but they are not always part of daily practice. Journals can increase the transparency and reproducibility of research by adopting the TOP Guidelines. TOP comprises eight modular standards, each with three levels of increasing stringency. Journals select which of the eight transparency standards they wish to adopt and choose a level of implementation for each adopted standard. These features provide flexibility for adoption across disciplines while still establishing community standards.
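As a rough sketch of how this modular, leveled adoption might be recorded, the snippet below lists the eight TOP standards alongside one hypothetical journal's chosen levels; the journal and its selections are invented for illustration.

```python
# Sketch: the eight TOP standards, each adoptable at a level of increasing
# stringency (0 = not implemented, 1-3 = increasingly strict). The example
# journal and its chosen levels are hypothetical.
TOP_STANDARDS = [
    "Citation standards",
    "Data transparency",
    "Analytic methods (code) transparency",
    "Research materials transparency",
    "Design and analysis transparency",
    "Preregistration of studies",
    "Preregistration of analysis plans",
    "Replication",
]

# Hypothetical adoption profile: standards not listed stay at Level 0.
example_journal_policy = {
    "Data transparency": 2,
    "Analytic methods (code) transparency": 1,
    "Preregistration of studies": 1,
}

for standard in TOP_STANDARDS:
    level = example_journal_policy.get(standard, 0)
    print(f"{standard}: Level {level}")
```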

University of Washington's eScience Institute Guidelines for Reproducible & Open Science

Our working definition of reproducible research is that another investigator can replicate a research result. Our focus is data science and the reproducibility of computational studies and analyses of digital data. This note summarizes best practices that facilitate reproducible research in data science (and computational science more generally). It is expected that all research conducted with funding from the DSE will be performed in accordance with these guidelines to the extent possible.

Janiform Papers Demo (pdbf: portable database files)

PDBF documents are a hybrid format: they are a valid PDF and a valid HTML page at the same time. You can now optionally add a VirtualBox OVA file containing a complete operating system to the PDBF document. Yes, this means that the resulting file is a valid PDF, HTML, and OVA file at the same time. If you change the file extension to PDF and open it with a PDF viewer, you can see the static part of the document.
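A minimal sketch of how one might probe such a polyglot file from the outside: it checks only the surface signatures each container needs (a %PDF- header near the start, HTML markup, a tar structure for the OVA) and assumes nothing about PDBF's internal layout; whether a given PDBF file passes the tar check depends on how the OVA part is embedded.

```python
# Sketch: probe a file for the surface signatures of the three formats a PDBF
# document can embody (PDF header near the start, HTML markup, tar container
# for the OVA). This assumes nothing about PDBF's internal layout, and the tar
# check may not succeed for every polyglot arrangement.
import sys
import tarfile

def probe(path):
    with open(path, "rb") as f:
        data = f.read()  # fine for a sketch; large OVA-bearing files may warrant streaming
    looks_like_pdf = b"%PDF-" in data[:1024]     # PDF readers expect the header near the start
    looks_like_html = b"<html" in data.lower()   # browsers tolerate leading bytes before markup
    looks_like_ova = tarfile.is_tarfile(path)    # an OVA package is a tar archive
    return looks_like_pdf, looks_like_html, looks_like_ova

if __name__ == "__main__":
    pdf, html, ova = probe(sys.argv[1])
    print(f"PDF header: {pdf}, HTML markup: {html}, tar/OVA container: {ova}")
```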