A blog post from C. Titus Brown on how he and his co-authors made one of their papers replicable.
As part of this project, in collaboration with Philippe Bonnet, we are using (and extending) our infrastructure to support the SIGMOD Repeatability effort. The case studies below illustrate how authors can create provenance-rich, reproducible papers, and how reviewers can both reproduce the experiments and perform workability tests, including packaging an experiment on a distributed database system (link in title).
This site serves as a repository for experiments related to database research. It currently supports the submission and review of results published at PVLDB and ACM SIGMOD.
Research at Duke University in genomics aimed to fight cancer by looking for gene patterns that would predict which drugs would best attack a particular cancer (replacing trial-and-error treatment; it was considered a breakthrough). The research turned out to be wrong, due to flaws later found by statisticians; had the work been reproducible, the errors could have been caught earlier and patients could have been spared misguided treatment.
NY article discussing the issues with scientific reproducibility: "Why? One simple answer is that it takes a lot of time to look back over other scientists’ work and replicate their experiments. Scientists are busy people, scrambling to get grants and tenure. As a result, papers that attract harsh criticism may nonetheless escape the careful scrutiny required if they are to be refuted."
Professor Matthias Troyer (ETH Zurich) and his collaborators have published a number of papers whose results are fully reproducible. He uses VisTrails both to carry out the experiments and to package them for publication.