Reprowd: Crowdsourced Data Processing Made Reproducible

Crowdsourcing is a multidisciplinary research area including disciplines like artificial intelligence, human-computer interaction, databases, and social science. One of the main objectives of the AAAI HCOMP conferences is to bring together researchers from different fields and provide them opportunities to exchange ideas and share new research results. To facilitate cooperation across disciplines, reproducibility is a crucial factor, but unfortunately it has not gotten enough attention in the HCOMP community.

Conducting Reproducible Research with Umbrella: Tracking, Creating, and Preserving Execution Environments

Publishing scientific results without the detailed execution environments describing how the results were collected makes it difficult or even impossible for the reader to reproduce the work. However, the configurations of the execution environments are too complex to be described easily by authors. To solve this problem, we propose a framework facilitating the conduct of reproducible research by tracking, creating, and preserving the comprehensive execution environments with Umbrella. The framework includes a lightweight, persistent and deployable execution environment specification, an execution engine which creates the specified execution environments, and an archiver which archives an execution environment into persistent storage services like Amazon S3 and Open Science Framework (OSF). The execution engine utilizes sandbox techniques like virtual machines (VMs), Linux containers and user-space tracers to create an execution environment, and allows common dependencies like base OS images to be shared by sandboxes for different applications. We evaluate our framework by utilizing it to reproduce three scientific applications from epidemiology, scene rendering, and high energy physics. We evaluate the time and space overhead of reproducing these applications, and the effectiveness of the chosen archive unit and mounting mechanism for allowing different applications to share dependencies. Our results show that these applications can be reproduced using different sandbox techniques successfully and efficiently, even though the overhead and performance vary slightly.
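The abstract's central idea is that an execution environment can be captured as a lightweight, deployable specification whose dependencies are pinned precisely enough for an engine to recreate them and for an archiver to preserve them. The sketch below illustrates that idea in Python; the field names (`os`, `software`, `data`, `cmd`) and the `shared_dependencies` helper are illustrative assumptions for this digest, not Umbrella's actual schema or API.

```python
import json

# A hypothetical environment specification in the spirit of Umbrella:
# OS, software, and data dependencies are each pinned by name, version,
# and a content checksum, so an execution engine can recreate the exact
# environment and an archiver can push it to persistent storage.
spec = {
    "comment": "hypothetical epidemiology workflow",
    "os": {"name": "centos", "version": "6.6", "checksum": "abc123"},
    "software": {
        "python": {"version": "2.7.9", "checksum": "def456"},
    },
    "data": {
        "input.csv": {"source": "https://example.org/input.csv",
                      "checksum": "789aaa"},
    },
    "environ": {"PYTHONPATH": "/opt/deps"},
    "cmd": "python simulate.py input.csv",
}

def shared_dependencies(a, b):
    """Return the OS/software dependencies two specs have in common,
    which an engine could mount once and share across sandboxes."""
    shared = {}
    for section in ("os", "software"):
        if a.get(section) == b.get(section):
            shared[section] = a[section]
    return shared

# A second application with different data but the same base OS image:
# the shared sections need to be archived and unpacked only once.
spec2 = dict(spec,
             data={"scene.json": {"source": "https://example.org/scene.json",
                                  "checksum": "bbb111"}},
             cmd="python render.py scene.json")
shared = shared_dependencies(spec, spec2)
```

The checksum-per-dependency layout is what makes the specification both persistent (an archived copy can be verified byte-for-byte) and shareable (two specs that pin the same base OS image can mount a single cached copy), matching the paper's evaluation of archive units and mounting.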

PRUNE: A Preserving Run Environment for Reproducible Scientific Computing

Computing as a whole suffers from a crisis of reproducibility. Programs executed in one context are astonishingly hard to reproduce in another context, resulting in wasted effort by people and general distrust of results produced by computers. The root of the problem lies in the fact that every program has implicit dependencies on data and execution environment which are rarely understood by the end user. To address this problem, we present PRUNE, the Preserving Run Environment. In PRUNE, every task to be executed is wrapped in a functional interface and coupled with a strictly defined environment. The task is then executed by PRUNE rather than the user to ensure reproducibility. As a scientific workflow evolves in PRUNE, a growing but immutable tree of derived data is created. The provenance of every item in the system can be precisely described, facilitating sharing and modification between collaborating researchers, along with efficient management of limited storage space. We present the user interface and the initial prototype of PRUNE, and demonstrate its application in matching records and comparing surnames in U.S. Censuses.
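The mechanism described above, wrapping each task in a functional interface so that derived data is immutable and every item carries precise provenance, can be sketched in a few lines of Python. The names here (`run_task`, `STORE`, `provenance_key`) are illustrative assumptions for this digest, not PRUNE's actual interface; the point is that a result's identity is derived from the task, its declared environment, and its inputs, so identical invocations reuse the stored item instead of recomputing it.

```python
import hashlib
import json

# Immutable content-addressed store: provenance key -> (provenance, value).
# Entries are only ever added, mirroring PRUNE's growing immutable tree.
STORE = {}

def provenance_key(func_name, env, inputs):
    """Derive a stable key from the task name, declared environment,
    and inputs, so the same invocation always maps to the same item."""
    blob = json.dumps([func_name, env, inputs], sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def run_task(func, env, inputs):
    """Execute func under a declared environment via the framework
    (not directly by the user); reuse previously derived results."""
    key = provenance_key(func.__name__, env, inputs)
    if key not in STORE:  # compute only if not already derived
        value = func(*inputs)
        STORE[key] = ({"task": func.__name__,
                       "env": env,
                       "inputs": inputs}, value)
    return key, STORE[key][1]

# A toy task in the spirit of the paper's surname-comparison demo.
def normalize_surname(name):
    return name.strip().upper()

env = {"python": "3.x"}
key, result = run_task(normalize_surname, env, ["  smith "])
```

Because the key covers the environment as well as the inputs, changing either one yields a new item rather than overwriting an old one, which is what lets collaborators share and modify a workflow while its full derivation history stays intact.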

Vive la Petite Différence! Exploiting Small Differences for Gender Attribution of Short Texts

This article describes a series of experiments on gender attribution of Polish texts. The research was conducted on the publicly available corpus called "He Said She Said", consisting of a large number of short texts from the Polish version of Common Crawl. As opposed to other experiments on gender attribution, this research takes on the task of classifying relatively short texts, authored by many different people. For the sake of this work, the original "He Said She Said" corpus was filtered in order to eliminate noise and apparent errors in the training data. In the next step, various machine learning algorithms were developed in order to achieve better classification accuracy. Interestingly, the results of the experiments presented in this paper are fully reproducible, as all the source codes were deposited in the open platform Gonito.net. Gonito.net allows for defining machine learning tasks to be tackled by multiple researchers and provides the researchers with easy access to each other's results.

Moving Towards Model Reproducibility and Reusability

This commentary provides a brief history of the U.S. funding initiatives associated with promoting multiscale modeling of the physiome since 2003. One such effort in the United States is the Interagency Modeling and Analysis Group (IMAG) Multiscale Modeling Consortium (MSM). Though IMAG and the MSM have generated much interest in developing MSM models of the physiome, challenges associated with model and data sharing in biomedical, biological and behavioral systems still exist. Since 2013, the IEEE EMBS Technical Committee on Computational Biology and the Physiome (CBaP TC) has supported discussions on promoting model reproducibility through publication. This Special Issue on Model Sharing and Reproducibility is a realization of the CBaP TC discussions. Though open questions remain on how we can further facilitate model reproducibility, accessibility and reuse by the worldwide community for different biomedical domain applications, this special issue provides a unique demonstration of both the challenges and opportunities for publishing reproducible computational models.

Proposal for first validating antibody specificity strategies to publish in Nature Methods

The International Working Group on Antibody Validation (IWGAV), an independent group of international scientists with diverse research interests in the field of protein biology, today announced the publication of initial strategies developed to address a critical unmet need for antibody specificity, functionality and reproducibility in the online issue of Nature Methods. The IWGAV is the first initiative of its size and scope to establish strategic recommendations for antibody validation for both antibody producers and users. Thermo Fisher Scientific, the world leader in serving science, provided financial support to the IWGAV in 2015 to spearhead the development of industry standards and help combat the common challenges associated with antibody specificity and reproducibility.

A Framework for Improving the Quality of Research in the Biological Sciences

The American Academy of Microbiology convened a colloquium to discuss problems in the biological sciences, with emphasis on identifying mechanisms to improve the quality of research. Participants from various disciplines made six recommendations: (i) design rigorous and comprehensive evaluation criteria to recognize and reward high-quality scientific research; (ii) require universal training in good scientific practices, appropriate statistical usage, and responsible research practices for scientists at all levels, with training content regularly updated and presented by qualified scientists; (iii) establish open data at the time of publication as the standard operating procedure throughout the scientific enterprise; (iv) encourage scientific journals to publish negative data that meet methodologic standards of quality; (v) agree upon common criteria among scientific journals for retraction of published papers, to provide consistency and transparency; and (vi) strengthen research integrity oversight and training. These recommendations constitute an actionable framework that, in combination, could improve the quality of biological research.

MBoC Introduces Author Checklist to Enhance Research Reproducibility

Molecular Biology of the Cell (MBoC) has developed a checklist for authors to help them ensure that their work can be reproduced by others. In so doing, the journal is following the recommendations in the 2015 whitepaper by the ASCB Reproducibility Task Force. The checklist was developed by a committee of MBoC Editorial Board members chaired by Editor Jean Schwarzbauer and including Associate Editors Rick Fehon, Carole Parent, Greg Matera, Alex Mogilner, and Fred Chang with input from Editor-in-Chief David Drubin and other members of the board.

Reproducibility and Variation of Diffusion Measures in the Squirrel Monkey Brain, In Vivo and Ex Vivo

Animal models are needed to better understand the relationship between diffusion MRI (dMRI) and the underlying tissue microstructure. One promising model for validation studies is the common squirrel monkey, Saimiri sciureus. This study aims to determine (1) the reproducibility of in vivo diffusion measures both within and between subjects; (2) the agreement between in vivo and ex vivo data acquired from the same specimen; and (3) normal diffusion values and their variation across brain regions.