The Recomputation Manifesto

Replication of scientific experiments is critical to the advance of science. Unfortunately, the discipline of Computer Science has never treated replication seriously, even though computers are very good at doing the same thing over and over again. Not only are experiments rarely replicated, they are rarely even replicable in a meaningful way. Scientists are being encouraged to make their source code available, but this is only a small step. Even in the happy event that source code can be built and run successfully, running the code is a long way from replicating the experiment it was used for. I propose that the discipline of Computer Science must embrace replication of experiments as standard practice. I propose that the only credible technique to make experiments truly replicable is to provide copies of virtual machines in which the experiments are validated to run. I propose that tools and repositories should be made available to make this happen. I propose to be one of those who makes it happen.

noWorkflow: a Tool for Collecting, Analyzing, and Managing Provenance from Python Scripts

We present noWorkflow, an open-source tool that systematically and transparently collects provenance from Python scripts, including data about the script's execution and about how the script evolves over time. During the demo, we will show how noWorkflow collects and manages provenance, as well as how it supports the analysis of computational experiments. We will also encourage attendees to use noWorkflow for their own scripts.
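To make that workflow concrete, here is a minimal sketch: a small script of the kind noWorkflow traces, followed by a hypothetical command-line session. The now run, now show, and now diff commands reflect noWorkflow's documented interface, but flags and output may vary by version; the script name and data are illustrative only.

    # experiment.py -- a deliberately small script of the kind noWorkflow traces.
    # (Illustrative sketch; commands in the comments below follow noWorkflow's
    # documentation and may differ across versions.)
    import statistics

    def summarize(values):
        """Return the mean of a list of measurements."""
        return statistics.mean(values)

    if __name__ == "__main__":
        data = [12.1, 11.8, 12.4, 12.0]  # stand-in for real experimental data
        print("mean:", summarize(data))

    # Hypothetical session:
    #   $ pip install noworkflow
    #   $ now run experiment.py   # execute the script while capturing provenance
    #   $ now show                # inspect the recorded trial
    #   $ now diff 1 2            # compare two trials as the script evolves

The diff step speaks directly to the tool's stated goal of capturing how a script evolves over time: each execution is stored as a trial, so any two trials can be compared.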

Using ReproZip for Reproducibility and Library Services

Achieving research reproducibility is challenging in many ways: there are social and cultural obstacles as well as a constantly changing technical landscape that makes replicating and reproducing research difficult. Users face challenges in reproducing research across different operating systems, in using different versions of software across long projects and among collaborations, and in using publicly available work. The dependencies required to reproduce the computational environments in which research happens can be exceptionally hard to track – in many cases, these dependencies are hidden or nested too deeply to discover, and are thus impossible to install on a new machine, which keeps adoption of reproducible practices low. In this paper, we present ReproZip, an open-source tool that helps overcome the technical difficulties involved in preserving and replicating research, applications, databases, software, and more. We examine current use cases of ReproZip, ranging from the digital humanities to machine learning. We also explore potential library use cases for ReproZip, particularly in digital libraries and archives, liaison librarianship, and other library services. We believe that libraries and archives can leverage ReproZip to deliver more robust reproducibility and repository services, as well as enhanced discoverability and preservation of research materials, applications, software, and computational environments.
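As a rough illustration of how ReproZip addresses the hidden-dependency problem, the sketch below pairs a stand-in script with the tool's documented trace/pack/unpack cycle. The reprozip trace, reprozip pack, and reprounzip docker subcommands come from the project's documentation, but exact options may differ across versions; analysis.py and params.json are hypothetical names.

    # analysis.py -- a hypothetical stand-in for a research script to preserve.
    import json

    if __name__ == "__main__":
        with open("params.json") as f:          # illustrative input file
            params = json.load(f)
        print("result:", params["scale"] * 2)

    # Documented ReproZip cycle (shell; options may differ by version):
    #   $ reprozip trace python analysis.py     # record the files, libraries, and
    #                                           # environment details the run touches
    #   $ reprozip pack analysis.rpz            # bundle the trace into one archive
    #   ...then, on a different machine:
    #   $ reprounzip docker setup analysis.rpz ./analysis
    #   $ reprounzip docker run ./analysis      # replay the run in a container

The key design point is that tracing observes the run itself, so dependencies too hidden or deeply nested for a person to enumerate are still captured in the .rpz archive.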

Here's the three-pronged approach we're using in our own research to tackle the reproducibility issue

A big part of this problem has to do with what’s been called a “reproducibility crisis” in science: many studies, if run a second time, don’t come up with the same results. Scientists are worried about this situation, and high-profile international research journals have raised the alarm too, calling on researchers to put more effort into ensuring their results can be reproduced, rather than only striving for splashy, one-off outcomes. Concerns about irreproducible results in science resonate outside the ivory tower as well, because a lot of this research translates into information that affects our everyday lives.

Promoting transparency and reproducibility in Behavioral Neuroscience: Publishing replications, registered reports, and null results

The editors of Behavioral Neuroscience have been discussing several recent developments in the landscape of scientific publishing. The discussion was prompted, in part, by reported issues of reproducibility and concerns about the integrity of the scientific literature. Although enhanced rigor and transparency in science are certainly important, a related issue is that increased competition and the focus on novel findings have impeded the extent to which the scientific process is cumulative. We have decided to join the growing number of journals that are adopting new reviewing and publishing practices to address these problems. In addition to our standard research articles, we are pleased to announce three new categories of articles: replications, registered reports, and null results. In joining other journals in psychology and related fields to offer these publication types, we hope to promote higher standards of methodological rigor in our science. This will ensure that our discoveries are based on sound evidence and that they provide a durable foundation for future progress.

Sustainable computational science: the ReScience initiative

Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing, and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and feel confident that their research is reproducible. But this is not exactly true. Jonathan Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular for peer review. Existing journals have been slow to adapt: source code is rarely requested and hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new, open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of traditional scientific journals. ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests.