Posts about reproducible paper (old posts, page 7)

Improving the reproducibility of findings by updating research methodology

The literature discusses causes of the low reproducibility of scientific publications. Our article adds another major cause: uncritical adherence to accepted research procedures. This is evident in: (1) anachronistically requiring researchers to ground their work in a theoretical background even when the studies cited have not been tested for reproducibility; (2) conducting studies that suffer from a novelty-effect bias; (3) forcing researchers who use data mining methods and field-based theory, with no preliminary theoretical rationale, to present a theoretical background that allegedly guided their work as a precondition for publication of their findings. Research validity in relation to the above problems can be increased by the following means: (1) conducting a longitudinal study on the same participants, and only on them; (2) shortening the time period between laboratory experiments and experiments on humans, based on cost–benefit considerations anchored in ethical norms; (3) reporting the theoretical background in a causal, modular format; (4) giving incentives to those who meet the above criteria while moderating the pressure for fast output.

Fostering Reproducibility, Reusability, and Technology Transfer in Health Informatics

Computational methods can transform healthcare. In particular, health informatics combined with artificial intelligence has shown tremendous potential when applied across fields of medical research and has opened a new era for precision medicine. The development of reusable biomedical software for research or clinical practice is time-consuming and requires rigorous compliance with the quality requirements defined by international standards. However, research projects rarely implement such measures, hindering smooth technology transfer to the research community or to manufacturers, as well as reproducibility and reusability. Here, we present a guideline for quality management systems (QMS) for academic organizations that incorporates the essential components while confining the requirements to an easily manageable effort. It provides a starting point for implementing a QMS tailored to specific needs with little overhead, greatly facilitates technology transfer in a controlled manner, and thereby supports reproducibility and reusability. Ultimately, the emerging standardized workflows can pave the way for accelerated deployment in clinical practice.

Tool-based Support for the FAIR Principles for Control Theoretic Results: The "Automatic Control Knowledge Repository"

In 2016, a collection of guiding principles for the management of scientific data was proposed by a consortium of scientists and organizations under the acronym FAIR (Findability, Accessibility, Interoperability, Reusability). Like many other disciplines, control theory is affected by the (mostly unintended) disregard of these principles and, to some degree, also suffers from a reproducibility crisis. The specific situation in that discipline, however, relates more to software than to classical numerical data. In particular, since computational methods such as simulation, numeric approximation, and computer algebra play an important role, the reproducibility of results relies on implementation details that are typically out of scope for written papers.
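To make the last point concrete, here is a minimal sketch (not taken from the paper or the repository) of how unreported implementation details, such as the choice of integrator and its tolerances, can change a simulated result; the test system, solver settings, and SciPy usage below are illustrative assumptions only.

```python
# Illustrative sketch (assumption, not from the paper): the same model and
# initial condition, simulated with two different solver configurations.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu=5.0):
    """Van der Pol oscillator, a standard moderately stiff test problem."""
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

y0 = [2.0, 0.0]
t_span = (0.0, 30.0)

# Same model, same initial condition -- different solver and tolerances.
loose = solve_ivp(van_der_pol, t_span, y0, method="RK45", rtol=1e-3, atol=1e-6)
tight = solve_ivp(van_der_pol, t_span, y0, method="LSODA", rtol=1e-10, atol=1e-12)

print("final state (RK45, loose tolerances): ", loose.y[:, -1])
print("final state (LSODA, tight tolerances):", tight.y[:, -1])
```

Unless such settings are archived alongside the paper, a reader rerunning the nominally identical simulation may obtain visibly different trajectories, which is exactly the kind of gap the repository aims to close.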

A computational reproducibility study of PLOS ONE articles featuring longitudinal data analyses

Computational reproducibility is a cornerstone of sound and credible research. In complex statistical analyses, such as the analysis of longitudinal data, reproducing results is far from simple, especially if no source code is available. In this work we aimed to reproduce the analyses of longitudinal data in 11 articles published in PLOS ONE. Inclusion criteria were the availability of data and author consent. We investigated the types of methods and software used and whether we were able to reproduce the data analysis using open source software. Most articles provided overview tables and simple visualisations. Generalised Estimating Equations (GEEs) were the most popular statistical models among the selected articles. Only one article used open source software, and only one published part of the analysis code. Replication was difficult in most cases and required reverse engineering of results or contacting the authors. For three articles we were not able to reproduce the results, and for another two only parts of them. For all but two articles we had to contact the authors to be able to reproduce the results. Our main lesson is that reproducing papers is difficult if no code is supplied, which places a high burden on those conducting the reproductions. Open data policies in journals are good, but to truly boost reproducibility we suggest adding open code policies.
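Since GEEs were the most common model in the sampled articles, the sketch below shows, purely as an illustration independent of the 11 articles (the data are simulated and all variable names are invented), how such an analysis can be expressed entirely in open source software using Python's statsmodels.

```python
# Hedged illustration: fitting a Generalised Estimating Equation (GEE)
# with open-source tools only. The longitudinal data are simulated here
# and do not correspond to any of the studied articles.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_subjects, n_visits = 50, 4
subject = np.repeat(np.arange(n_subjects), n_visits)
visit = np.tile(np.arange(n_visits), n_subjects)
group = np.repeat(rng.integers(0, 2, n_subjects), n_visits)
# A subject-level shift induces the within-subject correlation GEEs account for.
baseline = np.repeat(rng.normal(0, 1, n_subjects), n_visits)
outcome = 1.0 + 0.5 * visit + 0.8 * group + baseline + rng.normal(0, 1, subject.size)

df = pd.DataFrame({"subject": subject, "visit": visit, "group": group, "y": outcome})

# Exchangeable working correlation is a common default for repeated measures.
model = smf.gee("y ~ visit + group", groups="subject", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```

Publishing a script of this kind alongside the data would remove most of the reverse engineering the study describes.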

A meta-review of transparency and reproducibility-related reporting practices in published meta-analyses on clinical psychological interventions (2000–2020)

Meta-analysis is a powerful and important tool for synthesizing the literature on a research topic. Like other kinds of research, meta-analyses must be reproducible to comply with the principles of the scientific method. Furthermore, reproducible meta-analyses can be easily updated with new data and reanalysed with new and more refined analysis techniques. We attempted to empirically assess the prevalence of transparency and reproducibility-related reporting practices in published meta-analyses from clinical psychology by examining a random sample of 100 meta-analyses. Our purpose was to identify the key points that could be improved, with the aim of providing recommendations for carrying out reproducible meta-analyses. We conducted a meta-review of meta-analyses of psychological interventions published between 2000 and 2020. We searched the PubMed, PsycInfo and Web of Science databases. A structured coding form to assess transparency indicators was created based on previous studies and existing meta-analysis guidelines. We found major issues concerning: the reporting of fully reproducible search procedures, specification of the exact method used to compute effect sizes, the choice of weighting factors and estimators, the lack of availability of the raw statistics used to compute the effect sizes, the lack of interoperability of available data, and the near-total absence of analysis script sharing. Based on our findings, we conclude with recommendations intended to improve the transparency, openness, and reproducibility-related reporting practices of meta-analyses in clinical psychology and related areas.
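As a concrete illustration of two of the under-reported details flagged above, the exact effect size formula and the weighting scheme, the following hedged sketch computes Hedges' g per study and pools the estimates with fixed-effect inverse-variance weights; the three studies and all numbers are invented for the example and are not drawn from the reviewed meta-analyses.

```python
# Hedged illustration of the kind of detail the review flags as under-reported:
# which effect size formula and which weighting scheme were used.
# Shown here: Hedges' g per study and a fixed-effect inverse-variance pool.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardised mean difference with the small-sample (Hedges) correction."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

studies = [  # (mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl), invented
    (12.1, 10.4, 3.2, 3.0, 40, 38),
    (15.3, 13.9, 4.1, 4.4, 25, 27),
    (11.0, 10.8, 2.9, 3.1, 60, 61),
]

gs, vs = zip(*(hedges_g(*s) for s in studies))
weights = 1 / np.array(vs)                     # inverse-variance weights
pooled = np.sum(weights * np.array(gs)) / np.sum(weights)
se_pooled = np.sqrt(1 / np.sum(weights))
print(f"pooled g = {pooled:.3f} (SE {se_pooled:.3f})")
```

Stating these choices explicitly, and sharing the corresponding script and raw statistics, is precisely what would let a later team update or reanalyse such a meta-analysis.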

The perception of reproducibility in a small cohort of scientists in Europe

Reproducibility is an essential feature of all scientific outcomes. Scientific evidence can only attain its status as reliable if it is replicated, but well-conducted replication studies face an uphill battle to be performed, and little attention and dedication have been devoted to publishing the results of replication attempts. We therefore asked a small cohort of researchers about their attempts to replicate results from other groups, as well as from their own laboratories, and about their general perception of the issues concerning reproducibility in their field. We also asked how they perceive the venues, i.e. journals, available for communicating and discussing the results of these attempts. To this end, we pre-registered and shared a questionnaire among scientists at diverse levels. The results indicate that, in general, replication attempts of respondents' own protocols are quite successful (with over 80% reporting never or rarely having problems with their own protocols). Although the majority of respondents had tried to replicate a study or experiment from another lab (75.4%), the median self-reported success rate was 3 (on a 1-5 scale), while the median estimate of replication success in their field was 5 (on a 1-10 scale). The majority of respondents (70.2%) also perceive journals as unwelcoming of replication studies.