Forum papers are thought-provoking opinion pieces or essays founded in fact, sometimes containing speculation, on a civil engineering topic of general interest and relevance to the readership of the journal. The views expressed in this Forum article do not necessarily reflect the views of ASCE or the Editorial Board of the journal.
In software engineering, the developers’ joy of decomposing and recomposing microservice-based applications has led to an enormous wave of microservice artefact technologies. To understand them better, researchers perform hundreds of experiments and empirical studies on them each year. Improving the reuse and reproducibility of these studies requires two ingredients: a system to automate repetitive experiments, and a research data management system with an emphasis on making research reproducible. Both frameworks are now available via the Microservice Artefact Observatory (MAO) and Renku. In this paper, we explain the current capabilities of MAO as a global federated research infrastructure for determining software quality characteristics. Moreover, we emphasise the integration of MAO with Renku to demonstrate what a reproducible end-to-end experiment workflow involving globally distributed research teams looks like.
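The two ingredients named above can be illustrated with a minimal, hypothetical sketch: a harness that automates repeated experiment runs and serialises the full record for archival in a data management system. All names and the placeholder measurement are illustrative assumptions, not the actual MAO or Renku API.

```python
import json

def run_experiment(trial: int) -> dict:
    # Hypothetical stand-in for a real benchmark run; an actual MAO
    # experiment would deploy and probe real microservice artefacts.
    latency_ms = 100 + (trial % 3)  # deterministic placeholder measurement
    return {"trial": trial, "latency_ms": latency_ms}

def run_campaign(repetitions: int) -> dict:
    # First ingredient: automating the repetitive experiment runs.
    results = [run_experiment(t) for t in range(repetitions)]
    latencies = [r["latency_ms"] for r in results]
    return {
        "repetitions": repetitions,
        "results": results,
        "mean_latency_ms": sum(latencies) / len(latencies),
    }

record = run_campaign(6)
# Second ingredient: persisting the complete record alongside the code
# (e.g. as a dataset in a system such as Renku) so the run is reproducible.
serialized = json.dumps(record, indent=2)
print(record["mean_latency_ms"])  # → 101.0
```

The point of the sketch is the separation of concerns: the automation layer makes reruns cheap, while the serialised record makes each rerun verifiable by other teams.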
In the social sciences, replication has received increased attention in recent years, driven by a growing understanding that studies must be repeatable for research to be credible. Replication allows for determining the validity of scientific conclusions. This FORS Guide provides an overview of the concept of replication. Moreover, it provides some practical recommendations to social science researchers regarding what replication materials should include and elaborates on the role of scientific journals in encouraging replications.
The need for open, reproducible science is of growing concern in the twenty-first century, with multiple initiatives like the widely supported FAIR principles advocating for data to be Findable, Accessible, Interoperable and Reusable. Plant ecological and evolutionary studies are not exempt from the need to ensure that the data upon which their findings are based are accessible and allow for replication in accordance with the FAIR principles. However, the collection and curation of herbarium specimens, a foundational aspect of studies involving plants, is commonly neglected by authors. Without publicly available specimens, huge numbers of studies that rely on the field identification of plants are fundamentally not reproducible. We argue that the collection and public availability of herbarium specimens is not only good botanical practice but is also fundamental in ensuring that plant ecological and evolutionary studies are replicable, and thus scientifically sound. Data repositories that adhere to the FAIR principles must ensure that the original data are traceable to, and re-examinable at, their empirical source. To secure replicability and adherence to the FAIR principles, substantial changes are needed to restore the practice of collecting and curating specimens, to educate students about their importance, and to properly fund the herbaria which house them.
Although quantitative image biomarkers (radiomics) show promising value for cancer diagnosis, prognosis, and treatment assessment, these biomarkers still lack reproducibility. In this systematic review, we aimed to assess the progress in radiomics reproducibility and repeatability in recent years. Four hundred fifty-one abstracts were retrieved according to our search pattern criteria, with publication dates ranging from 2017/05/01 to 2020/12/01. Each abstract including the keywords was independently screened by four observers. Forty-two full-text articles were selected for further analysis. Patient population data, radiomic feature classes, feature extraction software, image preprocessing, and reproducibility results were extracted from each article. To support the community with a standardized reporting strategy, we propose a specific reporting checklist to evaluate the feasibility of reproducing each study. Many studies continue to under-report essential reproducibility information: all but one clinical study and all but two phantom studies failed to report at least one important image-acquisition item. The studies included in this review indicate that all radiomic features are sensitive to image acquisition, reconstruction, tumor segmentation, and interpolation. However, the degree of sensitivity is feature-dependent; for instance, textural features were, in general, less robust than statistical features. Radiomics repeatability, reproducibility, and reporting quality can be substantially improved with regard to feature extraction software and settings, image preprocessing and acquisition, and cutoff values for stable feature selection. Our proposed radiomics reporting checklist can serve to simplify and improve reporting and, eventually, guarantee the possibility to fully replicate and validate radiomic studies.
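The "cutoff values for stable feature selection" mentioned above can be sketched in a minimal, hypothetical form: comparing feature values from a test-retest pair of extractions and keeping only features whose coefficient of variation falls below a cutoff. The feature names, values, and the 10% cutoff are illustrative assumptions, not taken from the review.

```python
def stable_features(scan1: dict, scan2: dict, cov_cutoff: float = 0.10) -> list:
    """Keep features whose test-retest coefficient of variation <= cutoff."""
    stable = []
    for name in scan1:
        a, b = scan1[name], scan2[name]
        mean = (a + b) / 2
        # Sample standard deviation of two observations: |a - b| / sqrt(2).
        sd = abs(a - b) / 2 ** 0.5
        if mean != 0 and sd / abs(mean) <= cov_cutoff:
            stable.append(name)
    return stable

# Hypothetical test-retest extraction of two features from the same lesion.
scan1 = {"mean_intensity": 50.0, "glcm_entropy": 4.0}
scan2 = {"mean_intensity": 51.0, "glcm_entropy": 6.0}
kept = stable_features(scan1, scan2)
print(kept)  # → ['mean_intensity']
```

In this toy example the statistical feature survives the cutoff while the textural one does not, mirroring the review's observation that textural features tend to be less robust; real studies typically use intraclass correlation over many patients rather than a two-scan coefficient of variation.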
Reproducibility and transparency can be regarded (at least in experimental research) as a hallmark of research. The ability to reproduce research results in order to check their reliability is an important cornerstone of research that helps to guarantee the quality of research and to build on existing knowledge. The digital turn has brought more opportunities to document, share and verify research processes and outcomes. Consequently, there is an increasing demand for more transparency with regard to research processes and outcomes. This fits well with the open science agenda requiring, amongst other things, open software, open data, and open access to publications, even if openness alone does not guarantee reproducibility. The purpose of this Knowledge Exchange activity was to explore current practices and barriers in the area of research reproducibility, with a focus on the publication and dissemination stage. We wanted to determine how technical and social infrastructures can support future developments in this area. In this work, we defined research reproducibility as cases where data and procedures shared by the authors of a study are used to obtain the same results as in their original work. We captured the views of research funding organisations, research performing organisations, learned societies, researchers, academic publishers, and infrastructure and service providers.
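The definition of reproducibility given above, rerunning the shared procedure on the shared data and obtaining the same results, can be made concrete with a small, hypothetical check that compares reported numbers against recomputed ones within a numerical tolerance. The data and result names are illustrative assumptions.

```python
import math

def reproduces(original: dict, recomputed: dict, rel_tol: float = 1e-9) -> bool:
    # Per the definition above: the study reproduces if rerunning the
    # shared procedure on the shared data yields the same results.
    if original.keys() != recomputed.keys():
        return False
    return all(math.isclose(original[k], recomputed[k], rel_tol=rel_tol)
               for k in original)

# Hypothetical shared data and the results reported in the original study.
shared_data = [2.0, 4.0, 6.0]
reported = {"mean": 4.0, "max": 6.0}

# Rerunning the (shared) procedure on the shared data.
rerun = {"mean": sum(shared_data) / len(shared_data), "max": max(shared_data)}
print(reproduces(reported, rerun))  # → True
```

A tolerance-based comparison rather than strict equality is a deliberate choice: floating-point results legitimately differ across platforms, which is one of the technical barriers such infrastructures must accommodate.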