The examined paper presents a surrogate model for HPC networks. The authors have uploaded their artifact to Zenodo, which ensures long-term retention of the artifact; the paper can thus receive the Artifacts Available badge. The artifact allows easy re-running of the experiments behind two figures and produces textual output for one table, and all dependencies are documented. The software in the artifact runs correctly with minimal intervention and is relevant to the paper, earning the Artifacts Evaluated–Functional badge. The experimental results in two figures and one table are reproduced, earning the Results Reproduced badge. Furthermore, since the artifact is also maintained on GitHub, the paper is assigned the Artifacts Evaluated–Reusable badge.
A permanent artifact is available for the paper. The artifact provides a simple script to reproduce the results. Because the results in the paper come from simulations, they run without issues on different hardware.
Experimentation in Software Engineering has grown over recent decades as a way to provide evidence for theories and technologies. Across the life cycle of a controlled experiment, several artifacts are used, reused, and even produced. Such artifacts are mostly in the form of data, which should favor the reproducibility of these experiments. In this context, reproducibility can be defined as the ability to reproduce a study, and it yields benefits such as methodology and data reuse. Despite these recognized benefits, researchers face several challenges in making their experiments reproducible. To overcome them, we argue that Open Science practices related to provenance, preservation, and curation can help improve reproducibility. Therefore, in this paper, we propose an open-science-based Framework for managing controlled-experiment research artifacts, with the goal of making such experiments de facto reproducible. To this end, we plan to integrate different models associated with open science practices into the Framework.
Hydrological models are essential in water resources management, but the expertise required to operate them often exceeds that of potential stakeholders. We present an approach that facilitates the dissemination of hydrological models, and its implementation in the Model INTegration (MINT) framework. Our approach follows principles from software engineering to create software components that expose only the model functionality of interest to users while abstracting away implementation complexity, and to generate metadata for these model components. This methodology makes the models more findable, accessible, interoperable, and reusable, in support of the FAIR principles. We showcase our methodology and its implementation in MINT using two case studies. We illustrate how the models SWAT and MODFLOW are turned into software components by hydrology experts, and how users without hydrology expertise can find, adapt, and execute them. The two models differ in the processes they represent and in their design and structure. Our approach also benefits expert modelers by simplifying model sharing and the execution of model ensembles. MINT is a general modeling framework that uses artificial intelligence techniques to assist users, and it is released as open-source software.
Reproducibility is an important feature of science: experiments are retested, and analyses are repeated. Trust in findings increases when consistent results are achieved. Despite the importance of reproducibility, these efforts often involve significant work, and some published findings may not be reproducible due to oversights or errors. In this paper, we examine a wide range of features of scholarly articles published in computer science conferences and journals and test how they correlate with reproducibility. We collected data from three different sources that labeled publications as either reproducible or irreproducible and employed statistical significance tests to identify features of those publications that hold clues about reproducibility. We found the readability of the scholarly article and the accessibility of its software artifacts through hyperlinks to be strong signals of reproducibility.
Increasing the reproducibility of research should be a top priority. Great work is already being done, but more is needed to combine efforts and maximize their impact toward true reproducibility reform.