Posts about reproducible papers (old posts, page 10)

The ReproRubric: Evaluation Criteria for the Reproducibility of Computational Analyses

The computational reproducibility of analytic results has been discussed and evaluated in many different scientific disciplines, all of which share one finding: analytic results are far too often not reproducible. There are numerous examples of reproducibility guidelines for various applications; however, a comprehensive assessment tool for evaluating the individual components of the research pipeline has been unavailable. To address this need, COS developed the ReproRubric, which defines multiple Tiers of reproducibility based on criteria established for each critical stage of the typical research workflow, from initial design of the experiment through final reporting of the results.
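To make the tiered, stage-by-stage idea concrete, here is a minimal Python sketch of how such a rubric could be represented and scored. The stage names, criteria, and tier thresholds below are illustrative assumptions, not the actual contents of the COS ReproRubric.

```python
# Hypothetical sketch of a tiered reproducibility rubric, loosely inspired by
# the ReproRubric described above. Stages, criteria, and tier cut-offs are
# illustrative assumptions, not the actual COS rubric.

RUBRIC = {
    "design":    ["preregistered protocol", "documented hypotheses"],
    "data":      ["raw data archived", "data dictionary provided"],
    "analysis":  ["analysis code shared", "software versions recorded"],
    "reporting": ["results linked to code", "all outputs reported"],
}

def stage_tier(criteria_met: int, criteria_total: int) -> int:
    """Map the fraction of satisfied criteria in a stage to a tier (0-3)."""
    fraction = criteria_met / criteria_total
    if fraction == 1.0:
        return 3          # all criteria met for this stage
    if fraction >= 0.5:
        return 2
    if fraction > 0.0:
        return 1
    return 0              # no criteria met

def overall_tier(assessment: dict) -> int:
    """An analysis is only as reproducible as its weakest pipeline stage."""
    return min(
        stage_tier(len(met), len(RUBRIC[stage]))
        for stage, met in assessment.items()
    )

# Example: code and versions are shared, but raw data are not archived.
assessment = {
    "design":    ["preregistered protocol", "documented hypotheses"],
    "data":      ["data dictionary provided"],
    "analysis":  ["analysis code shared", "software versions recorded"],
    "reporting": ["results linked to code"],
}
print(overall_tier(assessment))  # -> 2, limited by the data and reporting stages
```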

Facilitating Replication and Reproducibility in Team Science: The 'projects' R Package

The contemporary scientific community places a growing emphasis on the reproducibility of research. The projects R package is a free, open-source endeavor created to facilitate reproducible research workflows. It adds to existing software tools for reproducible research and introduces several practical features that are helpful to scientists and their collaborative research teams. For each individual project, it supplies an intuitive framework for storing raw and cleaned study data sets, and it provides script templates for protocol creation, data cleaning, data analysis, and manuscript development. Internal databases of project and author information are generated and displayed, and manuscript title pages containing author lists and their affiliations are automatically generated from that database. File management tools allow teams to organize multiple projects. When used on a shared file system, the package lets multiple researchers contribute harmoniously to the same project with fewer interruptions, reducing the frequency of misunderstandings and the need for status updates.
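The core idea, a fixed folder layout plus script templates per project, is easy to picture with a short sketch. The Python code below is a hypothetical illustration of that scaffolding pattern; the folder names, template files, and function are assumptions and are not the 'projects' package's actual layout or API.

```python
# A minimal Python sketch of the scaffolding idea behind the 'projects' R
# package: one directory per project with fixed places for raw data, cleaned
# data, and script templates. Names and template contents are illustrative
# assumptions, not the package's actual layout or API.

from pathlib import Path

TEMPLATES = {
    "01_protocol.md":   "# Protocol\n",
    "02_cleaning.R":    "# Data cleaning script\n",
    "03_analysis.R":    "# Data analysis script\n",
    "04_manuscript.md": "# Manuscript draft\n",
}

def scaffold_project(root: Path, name: str) -> Path:
    """Create the standard folder layout and script templates for a project."""
    project = root / name
    for sub in ("data_raw", "data_clean", "scripts", "manuscript"):
        (project / sub).mkdir(parents=True, exist_ok=True)
    for filename, content in TEMPLATES.items():
        target = project / "scripts" / filename
        if not target.exists():          # never overwrite a collaborator's work
            target.write_text(content)
    return project

# On a shared drive, every collaborator sees the same predictable structure.
scaffold_project(Path("shared_drive"), "trial_042")
```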

The Brazilian Reproducibility Initiative

Most efforts to estimate the reproducibility of published findings have focused on specific areas of research, even though science is usually assessed and funded on a regional or national basis. Here we describe a project to assess the reproducibility of findings in biomedical science published by researchers based in Brazil. The Brazilian Reproducibility Initiative is a systematic, multi-center effort to repeat between 60 and 100 experiments: the project will focus on a set of common laboratory methods, repeating each experiment in three different laboratories. The results, due in 2021, will allow us to estimate the level of reproducibility of biomedical science in Brazil, and to investigate what the published literature can tell us about the reproducibility of research in a given area.

Is it Safe to Dockerize my Database Benchmark?

Docker seems to be an attractive solution for cloud database benchmarking, as it simplifies the setup process through pre-built images that are portable and simple to maintain. However, using Docker for benchmarking is only valid if it has no effect on measurement results. Existing work has so far focused only on the performance overheads that Docker directly induces for specific applications. In this paper, we study the indirect effects of dockerization on the results of database benchmarking. Among other findings, our results clearly show that containerization has a measurable and non-constant influence on measurement results and should therefore only be used after careful analysis.
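The kind of careful analysis the abstract calls for can start with a simple paired comparison: run the same workload natively and inside a container, repeat it several times, and look at the spread rather than a single measurement (since the reported influence is non-constant). The sketch below illustrates that procedure; the benchmark command and Docker image names are placeholders, not the paper's setup.

```python
# A hedged sketch of a native-vs-container sanity check for a benchmark.
# The workload command and Docker image below are placeholders.

import statistics
import subprocess
import time

NATIVE_CMD = ["./run_benchmark.sh"]                      # placeholder workload
DOCKER_CMD = ["docker", "run", "--rm", "bench:latest"]   # placeholder image

def timed_runs(cmd, repetitions=10):
    """Return wall-clock durations of repeated benchmark runs."""
    durations = []
    for _ in range(repetitions):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        durations.append(time.perf_counter() - start)
    return durations

for label, cmd in (("native", NATIVE_CMD), ("docker", DOCKER_CMD)):
    runs = timed_runs(cmd)
    print(f"{label}: mean={statistics.mean(runs):.2f}s "
          f"stdev={statistics.stdev(runs):.2f}s")
```

Comparing both the means and the standard deviations helps separate a constant overhead (which may be acceptable) from variance introduced by containerization itself.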

The Dagstuhl Beginners Guide to Reproducibility for Experimental Networking Research

Reproducibility is one of the key characteristics of good science, but it is hard to achieve in experimental disciplines such as Internet measurement and networked systems. This guide provides advice to researchers, particularly those new to the field, on designing experiments so that their work is more likely to be reproducible and to serve as a foundation for follow-on work by others.
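One practice consistent with this kind of advice is to record the environment and parameters of every experiment run alongside its results, so that others can later reconstruct the setup. The sketch below shows one way to do that; the specific fields captured are an assumption, not a checklist taken from the guide itself.

```python
# A small sketch of recording run metadata for a networking experiment.
# The exact fields are illustrative assumptions, not the guide's checklist.

import json
import platform
import subprocess
import sys
from datetime import datetime, timezone

def experiment_metadata(params: dict) -> dict:
    """Collect machine, software, and parameter details for an experiment run."""
    try:
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        commit = "unknown"
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "hostname": platform.node(),
        "os": platform.platform(),
        "python": sys.version,
        "code_commit": commit,
        "parameters": params,
    }

# Example: record the parameters of a hypothetical measurement run.
metadata = experiment_metadata({"duration_s": 60, "packet_size": 1500, "seed": 42})
with open("run_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```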