Reproducible Publications at AGILE Conferences

The council of the Association of Geographic Information Laboratories in Europe (AGILE) provides funding to support a new AGILE initiative. "Reproducible Publications at AGILE Conferences" will develop protocols for publishing reproducible research in AGILE conference publications. The aim is to support and improve the way we describe our science and to enhance the usefulness of AGILE conference publications to the wider community. The potential benefits include greater research transparency, more citations of published articles, and increased relevance of the conference in the field. The funding will support a workshop attended by domain experts to develop author and reviewer guidelines that will be presented at the AGILE 2019 conference. The initiative members are Daniel Nüst (Institute for Geoinformatics, University of Münster, Münster, Germany), Frank Ostermann (Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede, The Netherlands), Rusne Sileryte (Faculty of Architecture and the Built Environment, Delft University of Technology, Delft, The Netherlands), Carlos Granell (Institute of New Imaging Technologies, Universitat Jaume I de Castellón, Castellón, Spain), and Barbara Hofer (Interfaculty Department of Geoinformatics - Z_GIS, University of Salzburg, Salzburg, Austria).

Conducting Replication Studies With Confidence

Although essential to the development of a robust evidence base for nurse educators, the concepts of replication and reproducibility have received little attention in the nursing education literature. In this Methodology Corner installment, the concepts of study replication and reproducibility are explored in depth. In designing, conducting, and documenting studies in nursing education, researchers are encouraged to make design choices that improve the replicability of their studies and the reproducibility of their findings. [J Nurs Educ. 2018;57(11):638–640.] There has been considerable discussion in the professional literature about questionable research practices that raise doubt about the credibility of research findings (Shrout & Rodgers, 2018) and limit their reproducibility (Shepherd, Peratikos, Rebeiro, Duda, & McCowan, 2017). This discussion has led to what scientists term a replication crisis (Goodman, Fanelli, & Ioannidis, 2016). Although investigators in various disciplines have offered suggestions to address this crisis (Alvarez, Key, & Núñez, 2018; Goodman et al., 2016; Shrout & Rodgers, 2018), similar discussions or reports of replication within the nursing education literature are limited, despite a call for replication studies (Morin, 2016). Consequently, the focus of this article is on replication and reproducibility. The topic is important, given that the hallmark of good science is being able to replicate or reproduce findings (Morin, 2016). Replication serves to provide “stability in our knowledge of nature” (Schmidt, 2009, p. 92).

Software to improve transfer and reproducibility of cell culture methods

Cell culture is a vital component of laboratories throughout the scientific community, yet the absence of standardized protocols and documentation practices challenges laboratory efficiency and scientific reproducibility. We examined the effectiveness of a cloud-based software application, CultureTrax®, as a tool for standardizing and transferring a complex cell culture protocol. The software workflow and template were used to electronically format a cardiomyocyte differentiation protocol and share a digitally executable copy with a different lab user. Although the protocol was unfamiliar to the recipient, they executed the experiment using CultureTrax alone and successfully derived cardiomyocytes from human induced pluripotent stem cells. The software tool significantly reduced the time and resources required to transfer and implement a novel protocol.
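
To make the idea of a digitally executable protocol concrete, the following minimal sketch represents a cell culture protocol as structured, shareable data rather than free-text lab notes. It is illustrative only: the field names, reagents, and JSON layout are assumptions and do not reflect CultureTrax's actual data model.

# Minimal sketch (assumed structure, not CultureTrax's format): a cell culture
# protocol as structured data that another lab can load and execute step by step.
from dataclasses import dataclass, field
from typing import List
import json

@dataclass
class Step:
    day: int
    action: str        # e.g. "Change medium"
    reagent: str = ""
    notes: str = ""

@dataclass
class Protocol:
    name: str
    cell_line: str
    steps: List[Step] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the protocol so a recipient lab can execute it verbatim."""
        return json.dumps(
            {
                "name": self.name,
                "cell_line": self.cell_line,
                "steps": [vars(s) for s in self.steps],
            },
            indent=2,
        )

protocol = Protocol(
    name="Cardiomyocyte differentiation (illustrative)",
    cell_line="hiPSC",
    steps=[
        Step(day=0, action="Seed cells", reagent="mTeSR1"),
        Step(day=1, action="Start differentiation", reagent="CHIR99021 in RPMI/B27"),
    ],
)
print(protocol.to_json())

Sharing a serialized record of this kind, rather than a prose description, is what allows a recipient who has never seen the protocol to execute it step by step, which is the transfer scenario the study describes.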

Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail

Replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies of the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using the original code and data, and model reproducibility, or independent researchers' ability to recreate a model without the original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors failing to provide crucial information in scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only the relevant bits of code.
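
The distinction can be illustrated with a small example that is not taken from the paper: replication amounts to re-running the authors' own script on their code and data and checking that the output matches, whereas reproduction means rebuilding the model from its published description alone. The sketch below, a deterministic leaky integrate-and-fire neuron, is meant only to show that the relevant detail needed for reproduction is the update rule and the parameter values, not any particular codebase; the model and parameters are assumptions for illustration.

# Illustrative sketch of the replication/reproduction distinction (not from the paper).
# Replication: re-running the authors' script with their code and data.
# Reproduction: reimplementing the model from the published description alone.

def lif_spike_count(tau_m=0.02, v_thresh=-0.050, v_reset=-0.065,
                    v_rest=-0.065, r_m=1e7, i_ext=2e-9,
                    dt=1e-4, t_max=1.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    Everything needed to reproduce this result is in the signature and the
    update rule below -- the kind of 'relevant detail' a report should include.
    """
    v = v_rest
    spikes = 0
    for _ in range(int(t_max / dt)):
        dv = (-(v - v_rest) + r_m * i_ext) / tau_m
        v += dv * dt
        if v >= v_thresh:
            v = v_reset
            spikes += 1
    return spikes

print(lif_spike_count())  # deterministic: same parameters give the same output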

To Clean or Not to Clean: Document Preprocessing and Reproducibility

Web document collections such as WT10G, GOV2, and ClueWeb are widely used for text retrieval experiments. Documents in these collections contain a fair amount of non-content-related markup in the form of tags, hyperlinks, and so on. Published articles that use these corpora generally do not provide specific details about how this markup is handled during indexing. However, this question turns out to be important: through experiments, we find that including or excluding metadata in the index can produce significantly different results with standard IR models. More importantly, the effect varies across models and collections. For example, metadata filtering is found to be generally beneficial when using BM25 or language modeling with Dirichlet smoothing, but can significantly reduce retrieval effectiveness when language modeling is used with Jelinek-Mercer smoothing. We also observe that, in general, the performance differences become more noticeable as the amount of metadata in the test collections increases. Given this variability, we believe that the details of document preprocessing are significant from the point of view of reproducibility. In a second set of experiments, we study the effect of preprocessing on query expansion with RM3. Here, once again, we find that it is generally better to remove markup before using documents for query expansion.
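
As an illustration of the preprocessing choice at stake (a minimal sketch, not the exact pipeline used in these experiments), the code below tokenizes a toy web document with and without its markup. The tag and attribute tokens that survive in the first case are the kind of metadata whose inclusion or exclusion the study finds can shift retrieval effectiveness.

# Minimal sketch of the markup-handling decision made before indexing.
# The document and tokenization are illustrative, not the experimental setup.
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only text content, discarding tags and attributes."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

doc = ('<html><head><title>Budget report</title></head>'
       '<body><a href="http://example.org/tax">fiscal policy</a> overview</body></html>')

# Option 1: keep markup -- tag and attribute tokens end up in the index.
raw_tokens = re.findall(r"\w+", doc.lower())

# Option 2: strip markup first -- only content terms are indexed.
parser = TextExtractor()
parser.feed(doc)
clean_tokens = re.findall(r"\w+", " ".join(parser.chunks).lower())

print(raw_tokens)    # includes 'html', 'href', 'http', 'example', ...
print(clean_tokens)  # ['budget', 'report', 'fiscal', 'policy', 'overview']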