MLDev: Data Science Experiment Automation and Reproducibility Software

In this paper we explore the challenges of automating experiments in data science. We propose an extensible experiment model as a foundation for integrating different open-source tools for running research experiments. We implement our approach in MLDev, a prototype open-source software package, and evaluate it in a series of experiments with promising results. A comparison with other state-of-the-art tools highlights the novelty of our approach.
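To make the idea of an extensible experiment model concrete, here is a minimal hypothetical sketch of a stage-based pipeline in Python. The names (Stage, Experiment) and the state-passing convention are illustrative assumptions for this note, not MLDev's actual API.

```python
# Hypothetical sketch of an extensible experiment model (illustrative
# names only; this is NOT MLDev's actual API). Each stage is a named,
# reusable unit, so external open-source tools can be wrapped as stages.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Stage:
    name: str
    run: Callable[[Dict], Dict]  # consumes and returns experiment state

@dataclass
class Experiment:
    stages: List[Stage] = field(default_factory=list)

    def execute(self) -> Dict:
        state: Dict = {}
        for stage in self.stages:
            print(f"running stage: {stage.name}")
            state = stage.run(state)
        return state

# Two toy stages; real stages might wrap data preparation or training tools.
exp = Experiment([
    Stage("prepare", lambda s: {**s, "data": [1, 2, 3]}),
    Stage("train", lambda s: {**s, "model": sum(s["data"])}),
])
print(exp.execute())
```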

Reproducibility challenges in the search for antibacterial compounds from nature

Reproducibility of the reported antibacterial activities of plant extracts has long remained questionable. Although plant-related factors should be carefully considered in serious pharmacognostic research, they are often not addressed in research papers. Here we highlight the challenges in reproducing antibacterial activities of plant extracts. Plants with reported antibacterial activities of interest were identified through a literature review. Antibacterial activities against Escherichia coli and Klebsiella pneumoniae were tested using extract solutions in 10% DMSO and in acetone. The compositions of the working solutions from both solvents were established using LC-MS analysis. In addition, the availability of details likely to affect reproducibility was evaluated in the articles that reported antibacterial activities of the studied plants. Inhibition of bacterial growth at MICs of 256–1024 μg/mL was observed in only 15.4% of identical plant species; these values were 4–16-fold higher than those reported earlier. Further, 18.2% of related plant species had MICs of 128–256 μg/mL. Moreover, 29.2% and 95.8% of the extracts were soluble to sparingly soluble in 10% DMSO and acetone, respectively. Extract solutions in both solvents showed similar qualitative compositions, though with differing quantities of the corresponding phytochemicals. Details regarding season and growth state at collection were missing in 65% and 95% of the evaluated articles, respectively. Likewise, the solvents used to dissolve the extracts were not reported in 30% of the articles, and 40% of them used unidentified bacterial isolates. Reproducibility of previously reported activities of plant extracts thus depends on multiple factors, and collective approaches are necessary to address the highlighted challenges.

Fixing dependency errors for Python build reproducibility

Software reproducibility is important for re-usability and the cumulative progress of research. An important manifestation of unreproducible software is a software build whose outcome changes over time. While enhancing code reuse, the use of open-source dependency packages hosted on centralized repositories such as PyPI can have adverse effects on build reproducibility. Frequent updates to these packages often mean that their latest versions introduce breaking changes for applications that use them. Large Python applications risk their historical builds becoming unreproducible because of the widespread use of Python dependencies and the lack of uniform practices for dependency version specification. Manually fixing dependency errors requires expensive developer time and effort, while automated approaches face the challenges of parsing unstructured build logs, finding transitive dependencies, and exploring an exponential search space of dependency versions. In this paper, we investigate how open-source Python projects specify dependency versions and how their reproducibility is affected by dependency packages. We propose PyDFix, a tool to detect and fix unreproducibility in Python builds caused by dependency errors. PyDFix is evaluated on two bug datasets, BugSwarm and BugsInPy, both built from real-world open-source projects. PyDFix analyzes a total of 2,702 builds, identifying 1,921 (71.1%) of them as unreproducible due to dependency errors. Of these, PyDFix provides a complete fix for 859 (44.7%) builds and a partial fix for a further 632 (32.9%) builds.
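The underlying failure mode is easy to demonstrate. The sketch below (our illustration, not PyDFix's implementation) uses the packaging library to show how a loose version specifier, written when a dependency was at 1.x, later resolves to a breaking major release, while an exact pin keeps a historical build stable.

```python
# Minimal sketch (not PyDFix's code) of why loose dependency specifiers
# break historical builds: a range declared years ago happily admits a
# later major release with breaking changes.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

loose = SpecifierSet(">=1.0")      # as a project might declare: somelib>=1.0
pinned = SpecifierSet("==1.2.3")   # an exact, reproducible pin
latest = Version("3.0.0")          # a later release with breaking changes

print(latest in loose)   # True  -- the old build now resolves a breaking version
print(latest in pinned)  # False -- pinning keeps the historical build stable
```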

Improving the reproducibility of findings by updating research methodology

The literature discusses causes of the low reproducibility of scientific publications. Our article adds another major cause: uncritical adherence to accepted research procedures. This is evident in (1) anachronistically requiring researchers to ground their work in a theoretical background even when the cited studies have not been tested for reproducibility; (2) conducting studies that suffer from a novelty-effect bias; and (3) forcing researchers who use data mining methods and field-based theory, with no preliminary theoretical rationale, to present a theoretical background that allegedly guided their work as a precondition for publishing their findings. Research validity in relation to the above problems can be increased by the following means: (1) conducting a longitudinal study on the same participants, and only on them; (2) shortening the time between laboratory experiments and those on humans, based on cost-benefit considerations anchored in ethical norms; (3) reporting the theoretical background in a causal, modular format; and (4) giving incentives to those who meet the above criteria while moderating the pressure for fast output.

Fostering Reproducibility, Reusability, and Technology Transfer in Health Informatics

Computational methods can transform healthcare. In particular, health informatics with artificial intelligence has shown tremendous potential when applied to various fields of medical research and has opened a new era for precision medicine. The development of reusable biomedical software for research or clinical practice is time-consuming and requires rigorous compliance with the quality requirements defined by international standards. However, research projects rarely implement such measures, which hinders smooth technology transfer to the research community or to manufacturers, as well as reproducibility and reusability. Here, we present a guideline for quality management systems (QMS) in academic organizations that incorporates the essential components while confining the requirements to an easily manageable effort. It provides a starting point for implementing a QMS tailored to specific needs with little effort, and it greatly facilitates technology transfer in a controlled manner, thereby supporting reproducibility and reusability. Ultimately, the emerging standardized workflows can pave the way for accelerated deployment in clinical practice.

Tool-based Support for the FAIR Principles for Control Theoretic Results: The "Automatic Control Knowledge Repository"

In 2016, a consortium of scientists and organizations proposed a collection of guiding principles for the management of scientific data under the acronym FAIR (Findability, Accessibility, Interoperability, Reusability). Like many other disciplines, control theory is affected by the (mostly unintended) disregard of these principles and to some degree also suffers from a reproducibility crisis. The situation in this discipline, however, relates more to software than to classical numerical data. In particular, since computational methods such as simulation, numerical approximation, and computer algebra play an important role, the reproducibility of results relies on implementation details, which are typically out of scope for written papers.
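As a concrete illustration of how implementation details decide reproducibility, the sketch below (our example, not from the paper) integrates the Lorenz system twice with SciPy, varying only the solver's relative tolerance; the trajectories diverge, so a paper that omits this setting cannot be reproduced exactly.

```python
# Minimal sketch (assumed example, not from the paper): two runs of the
# same simulation differing only in solver tolerance diverge visibly,
# showing why unreported implementation details hinder reproducibility.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, y, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, v, z = y
    return [sigma * (v - x), x * (rho - z) - v, x * v - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 2000)

loose = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-3)
tight = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)

# After the chaotic trajectories decorrelate, even the sign of x may differ.
print("final x (rtol=1e-3):", loose.y[0, -1])
print("final x (rtol=1e-9):", tight.y[0, -1])
```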