This year, SMI will also introduce an Award for Reproducibility, granted to authors of accepted papers who are willing to provide a complete open-source implementation of their algorithm. The reproducibility stamp does not affect the reviewing process or the requirements for acceptance. Awarded papers will receive an additional 5 to 10 minutes of presentation time to give a live demo and will be recognized during the SMI closing ceremony. More information will be available on the website soon.
Reproducible Science: Promoting Open Science
A ReproZip demo has been accepted at SIGMOD 2016: "ReproZip: Computational Reproducibility With Ease." F. Chirigati, R. Rampin, D. Shasha, and J. Freire.
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors, a quantity that expresses the relative evidence for one hypothesis over another and so can quantify evidence for the null hypothesis as well, for a large subset (N = 72) of the original papers and their corresponding replication attempts.
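For intuition about how a Bayes factor can favor the null, here is a toy illustration (not the paper's actual analysis): a binomial test of H0: p = 0.5 against H1: p uniform on (0, 1), where the marginal likelihoods have closed forms. The function name and data are made up for the example.

    from math import comb

    # Toy illustration: Bayes factor BF01 comparing H0: p = 0.5
    # against H1: p ~ Uniform(0, 1), for k successes in n trials.
    def bayes_factor_01(k, n):
        # Marginal likelihood under H0: Binomial(n, 0.5) evaluated at k.
        m0 = comb(n, k) * 0.5 ** n
        # Marginal likelihood under H1: integrating the binomial
        # likelihood over a uniform prior on p gives exactly 1 / (n + 1).
        m1 = 1.0 / (n + 1)
        return m0 / m1

    # Data close to chance favor the null: 52 successes in 100 trials.
    print(bayes_factor_01(52, 100))  # ~7.4, moderate evidence for H0

Values of BF01 above 1 favor the null, which is the sense in which a Bayes factor, unlike a p-value, can express evidence for the null hypothesis.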
Researchers on social media are asking at what point replication efforts go from useful to wasteful. The problem of irreproducibility in science has gained widespread attention, but one aspect that is discussed less often is how to strike the right balance between replicating findings and building on well-established ones to move a field forward.
Recent years have seen an increase in alarming signals about the lack of replicability in neuroscience, psychology, and related fields. To avoid a widespread crisis in our field and a consequent loss of credibility in the public eye, we need to improve how we do science. This article is intended as a practical guide for researchers at any stage of their careers, helping them make their research more reproducible and transparent while minimizing the additional effort this might require. The guide covers three major topics in open science (data, code, and publications), offering practical advice and highlighting advantages of adopting more open research practices that go beyond improved transparency and reproducibility.
For the past decade, scientists have been worried about the so-called replication crisis. Enter the Preclinical Reproducibility and Robustness channel, which launched in the first week of February with the goal of publishing the results of replication studies. The journal wants to hold scientists accountable for their work.
The 2016 GBSI Summit, "Research Reproducibility: Innovative Solutions to Drive Quality," welcomed premier life science thought leaders, including Arizona State University biomarker researcher Joshua LaBaer, MD, PhD, and science correspondent and moderator Richard Harris (currently on leave from National Public Radio as a visiting scholar this spring at Arizona State University), to explore the driving forces behind, and profound impacts of, these issues.
This blog post is part of the Love Your Data campaign (#LYD16), a global, cross-institution awareness campaign for open data, research reproducibility, and research data management. The post features ReproMatch and ReproZip as important tools for achieving reproducibility.
Finding a relevant reporting guideline for a study can be very difficult. Here we introduce a pilot experiment, starting with some of the BMC-series journals, which aims to address this issue.
ReproZip was featured in a post on the Library of Congress's digital preservation blog, the Signal. The author, Genevieve Havemeyer-King, writes: "ReproZip is a tool being developed at NYU 'aimed at simplifying the process of creating reproducible experiments from command-line executions', and could be something to consider as an alternative to many costly web-archiving services for preservation of internet-based projects and applications."
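For readers unfamiliar with the tool, the standard ReproZip workflow described in its documentation has three steps: trace, pack, and unpack. A minimal sketch follows, where run_experiment.sh is a hypothetical stand-in for any command-line experiment.

    # Trace the experiment, recording the files, libraries, and
    # environment it touches (run_experiment.sh is a placeholder).
    reprozip trace ./run_experiment.sh

    # Pack the trace into a self-contained .rpz bundle.
    reprozip pack my_experiment.rpz

    # On another machine, unpack and rerun, e.g. with the Docker unpacker.
    reprounzip docker setup my_experiment.rpz my_experiment/
    reprounzip docker run my_experiment/

Other unpackers (such as vagrant or directory) follow the same setup/run pattern, which is what lets a single .rpz bundle be replayed across different environments.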