In summary, my little experiment has shown that the reproducibility of Python scripts requires preserving the original environment, which fortunately is not so difficult over a time span of four years, at least if everything you need is part of the Anaconda distribution. I am not sure I would have had the patience to reinstall everything from source, given an earlier bad experience. The purely computational part of my code was surprisingly robust under updates to its dependencies. But the plotting code wasn't, as matplotlib has introduced backwards-incompatible changes in a widely used function. Clearly the matplotlib team prepared this transition carefully, issuing a deprecation warning before making the breaking change. For properly maintained client code, this can probably be dealt with.
Replicability is a core principle of the scientific method. However, several scientific disciplines have suffered crises in confidence caused, in large part, by attitudes toward replication. This work reports on the value the computing education research community associates with studies that aim to replicate, reproduce or repeat earlier research. The results were obtained from a survey of 73 computing education researchers. An analysis of the responses confirms that researchers in our field hold many of the same biases as those in other fields experiencing a crisis in replication. In particular, researchers agree that original works, that is, novel works that report new phenomena, have more impact and are more prestigious. They also agree that originality is an important criterion for accepting a paper, making such work more likely to be published. Furthermore, while the respondents agree that published work should be verifiable, they doubt this standard is widely met in the computing education field and are not eager to perform the work of verifying others' work themselves.
Developing effective information retrieval models has been a long-standing challenge in Information Retrieval (IR), and significant progress has been made over the years. With the increasing number of developed retrieval functions and the release of new data collections, it becomes more difficult, if not impossible, to compare a new retrieval function with all existing retrieval functions over all available data collections. To tackle this problem, this paper describes our efforts on constructing a platform that aims to improve the reproducibility of IR research and facilitate the evaluation and comparison of retrieval functions.
A high-quality search strategy is considered an essential component of systematic reviews, yet many reviews do not contain reproducible search strategies. It is unclear whether low reproducibility spans medical disciplines, is affected by librarian/search specialist involvement, or has improved with increased awareness of reporting guidelines.
This commentary provides a brief history of the U.S. funding initiatives associated with promoting multiscale modeling of the physiome since 2003. One such U.S.-led effort is the Interagency Modeling and Analysis Group (IMAG) Multiscale Modeling Consortium (MSM). Though IMAG and the MSM have generated much interest in developing MSM models of the physiome, challenges associated with model and data sharing in biomedical, biological and behavioral systems still exist. Since 2013, the IEEE EMBS Technical Committee on Computational Biology and the Physiome (CBaP TC) has supported discussions on promoting model reproducibility through publication. This Special Issue on Model Sharing and Reproducibility is a realization of the CBaP TC discussions. Though open questions remain on how we can further facilitate model reproducibility, accessibility and reuse by the worldwide community for different biomedical domain applications, this special issue provides a unique demonstration of both the challenges and opportunities of publishing reproducible computational models.
The International Working Group on Antibody Validation (IWGAV), an independent group of international scientists with diverse research interests in the field of protein biology, today announced the publication, in the online issue of Nature Methods, of initial strategies developed to address a critical unmet need for antibody specificity, functionality and reproducibility. The IWGAV is the first initiative of its size and scope to establish strategic recommendations for antibody validation for both antibody producers and users. Thermo Fisher Scientific, the world leader in serving science, provided financial support to the IWGAV in 2015 to spearhead the development of industry standards and help combat the common challenges associated with antibody specificity and reproducibility.