Software to improve transfer and reproducibility of cell culture methods

Cell culture is a vital component of laboratories throughout the scientific community, yet the absence of standardized protocols and documentation practices challenges laboratory efficiency and scientific reproducibility. We examined the effectiveness of a cloud-based software application, CultureTrax®, as a tool for standardizing and transferring a complex cell culture protocol. The software workflow and template were used to electronically format a cardiomyocyte differentiation protocol and share a digitally executable copy with a different lab user. Although the protocol was unfamiliar to the recipient, they executed the experiment using CultureTrax alone and successfully derived cardiomyocytes from human induced pluripotent stem cells. This software tool significantly reduced the time and resources required to effectively transfer and implement a novel protocol.

A deafening silence: a lack of data and reproducibility in published bioacoustics research?

A study of 100 papers from five journals that make use of bioacoustic recordings shows that only a minority (21%) deposit any of the recordings in a repository, in supplementary materials, or on a personal website. This lack of deposition hinders re-use of the raw data by other researchers, prevents the reproduction of a project's analyses and the confirmation of its findings, and impedes progress within the broader bioacoustics community. We make some recommendations for researchers interested in depositing their data.

Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail

Replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using the original code and data, and model reproducibility, or independent researchers' ability to recreate a model without the original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors omitting crucial information from scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only the relevant bits of code.

To Clean or Not to Clean: Document Preprocessing and Reproducibility

Web document collections such as WT10G, GOV2, and ClueWeb are widely used for text retrieval experiments. Documents in these collections contain a fair amount of non-content-related markup in the form of tags, hyperlinks, and so on. Published articles that use these corpora generally do not provide specific details about how this markup information is handled during indexing. However, this question turns out to be important: through experiments, we find that including or excluding metadata in the index can produce significantly different results with standard IR models. More importantly, the effect varies across models and collections. For example, metadata filtering is found to be generally beneficial when using BM25 or language modeling with Dirichlet smoothing, but can significantly reduce retrieval effectiveness if language modeling is used with Jelinek-Mercer smoothing. We also observe that, in general, the performance differences become more noticeable as the amount of metadata in the test collections increases. Given this variability, we believe that the details of document preprocessing are significant from the point of view of reproducibility. In a second set of experiments, we also study the effect of preprocessing on query expansion using RM3. In this case, once again, we find that it is generally better to remove markup before using documents for query expansion.
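
To make the preprocessing choice concrete, the sketch below indexes the same toy document with and without markup removal and scores it with plain BM25. This is a minimal illustration of the kind of metadata filtering the abstract describes, not code from the paper; strip_markup, the toy document, and the parameter defaults (k1 = 1.2, b = 0.75) are illustrative assumptions.

    import math
    import re
    from collections import Counter

    def strip_markup(text):
        """Crude metadata filtering: drop tags, collapse whitespace."""
        text = re.sub(r"<[^>]+>", " ", text)
        return re.sub(r"\s+", " ", text).strip().lower()

    def bm25(query, doc_tokens, doc_freqs, n_docs, avg_len, k1=1.2, b=0.75):
        """BM25 score of one tokenized document (non-negative idf variant)."""
        tf = Counter(doc_tokens)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            df = doc_freqs.get(term, 1)
            idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
            norm = tf[term] + k1 * (1.0 - b + b * len(doc_tokens) / avg_len)
            score += idf * tf[term] * (k1 + 1.0) / norm
        return score

    # Toy corpus: the same document indexed raw and with markup removed.
    raw = '<a href="nav.html">home</a> <title>retrieval test</title> language model smoothing'
    docs = {"raw": raw.lower().split(), "cleaned": strip_markup(raw).split()}
    doc_freqs = Counter(t for tokens in docs.values() for t in set(tokens))
    avg_len = sum(len(tokens) for tokens in docs.values()) / len(docs)

    query = ["language", "model"]
    for name, tokens in docs.items():
        print(name, round(bm25(query, tokens, doc_freqs, len(docs), avg_len), 3))

In this toy setting the two indexings differ only through length normalization; in the full collections the abstract describes, indexed markup also shifts term statistics, which is why the effect varies across BM25, Dirichlet, and Jelinek-Mercer smoothing.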

Editorial: Revised Guidelines to Enhance the Rigor and Reproducibility of Research Published in American Physiological Society Journals

A challenge in modern research is the common inability to repeat novel findings published in even the most “impact-heavy” journals. In the great majority of instances, this may simply be due to a failure of the published manuscripts to include (and the publisher to require) comprehensive information on experimental design, methods, reagents, or the in vitro and in vivo systems under study. Failure to accurately reproduce all environmental influences on an experiment, particularly those using animals, also contributes to the inability to repeat novel findings. The most common reason for failures of reproducibility may well lie in the rigor and transparency with which methodology is described by authors. Another reason may be the reluctance of more established investigators to break with traditional methods of data presentation. However, one size does not fit all when it comes to data presentation, particularly because of the wide variety of data formats presented in the individual disciplines represented by journals. Thus, some flexibility needs to be allowed. The American Physiological Society (APS) has made available guidelines for transparent reporting that it recommends all authors follow (https://www.physiology.org/author-info.promoting-transparent-reporting; https://www.physiology.org/author-info.experimental-details-to-report). These are just some of the efforts being made to facilitate the communication of discovery in a transparent manner, which complement what has been a strength of the discipline for many years: the ability of scientists and the scientific literature to self-correct (8).

Experimental deception: Science, performance, and reproducibility

Experimental deception has not been seriously examined in terms of its impact on reproducible science. I demonstrate, using data from the Open Science Collaboration’s Reproducibility Project (2015), that experiments involving deception have a higher probability of not replicating and smaller effect sizes than experiments without deception procedures. This trend is possibly due to missing information about the context and performance of agents in the studies in which the original effects were generated, leading either to compromised internal validity or to an incomplete specification and control of variables in replication studies. Of special interest are the mechanisms by which deceptions are implemented and how these present challenges for the efficient transmission of critical information from experimenter to participant. I rehearse possible frameworks that might form the basis of a future research program on experimental deception and make some recommendations as to how such a program might be initiated.