A paper that analyzes terminology related to reproducible research -- exploring differences and patterns among the terms -- and aims to resolve some contradictions.
Reproducibility and repeatability dramatically increase the value of scientific experiments, but remain two challenging goals for experimenters. Just as the LAMP stack considerably eased web developers' lives, in this paper we advocate the need for an analogous software stack to help experimenters conduct reproducible research. We propose the EnosStack, an open source software stack especially designed for reproducible scientific experiments. EnosStack makes it easy to describe experimental workflows meant to be reused, while abstracting the underlying infrastructure that runs them. Being able to switch an experiment from a local deployment to a real testbed greatly lowers code development and validation time. We describe the abstractions that have driven its design, before presenting a real experiment we deployed on Grid'5000 to illustrate its usefulness. We also provide all the experiment code, data, and results to the community.
In this case study, the authors present one library’s work to help increase awareness of reproducibility and to build capacity for their institution to improve the reproducibility of ongoing and future research.
In recent years, evidence has emerged from disciplines ranging from biology to economics that many scientific studies are not reproducible. This evidence has led to declarations in both the scientific and lay press that science is experiencing a “reproducibility crisis,” and that this crisis has significant impacts on both science and society, including misdirected effort, funding, and policy implemented on the basis of irreproducible research. In many cases, academic libraries are the natural organizations to lead efforts to implement recommendations from journals, funders, and societies to improve research reproducibility. In this editorial, we introduce the reproducibility crisis, define reproducibility and replicability, and then discuss how academic libraries can lead institutional support for reproducible research.
This chapter is written to help undergraduate students better understand the role of replication in psychology and how it applies to the study of social behavior. We briefly review various replication initiatives in psychology and the events that preceded our renewed focus on replication. We then discuss challenges in interpreting the low rate of replication in psychology, especially social psychology. Finally, we stress the need for better methods and theories to learn the right lessons when replications fail.
Jeffrey Spies, Ph.D., is the Co-Founder and Chief Technology Officer of the non-profit Center for Open Science. In this presentation, Dr. Spies discusses motivations, values, and common experiences of researchers and scholars in research and publication processes. Spies explores biases toward confirmatory research to the exclusion of exploratory research, funding and reward incentives that conflict with scholarly values, and the costs of delayed research publication -- as measured in human lives. This critical approach to ethics and values in research and publication raises the question, “Where would we be if this [publishing] system were a little more reproducible, a little more efficient?” and asks for an examination of values as revealed by our practice: are we implying that some lives matter more than others? Spies discusses how open output [open access] and open workflow policies and practices help scholars align their scholarly practices more closely with their scholarly values. For more information: Center for Open Science: https://cos.io Open badges: https://cos.io/our-services/open-science-badges Open Science Framework: https://cos.io/our-products/open-science-framework PrePrint servers: https://cos.io/our-products/osf-preprints/ Registered Reports: https://cos.io/rr Transparency and Openness Promotion Guidelines: https://cos.io/our-services/top-guidelines