To determine the reproducibility of psychological meta-analyses, we investigated whether we could reproduce 500 primary study effect sizes drawn from 33 published meta-analyses based on the information given in the meta-analyses, and whether recomputations of primary study effect sizes altered the overall results of the meta-analysis.
We describe a project-based introduction to reproducible and collaborative neuroimaging analysis. Traditional teaching on neuroimaging usually consists of a series of lectures that emphasize the big picture rather than the foundations on which the techniques are based. The lectures are often paired with practical workshops in which students run imaging analyses using the graphical interface of specific neuroimaging software packages. Our experience suggests that this combination leaves the student with a superficial understanding of the underlying ideas, and an informal, inefficient, and inaccurate approach to analysis. To address these problems, we based our course around a substantial open-ended group project. This allowed us to teach: (a) the computational tools needed for reproducible work, such as the Unix command line, structured code, version control, automated testing, and code review; and (b) a clear understanding of the statistical techniques used for a basic analysis of a single run in an MR scanner. The emphasis we put on the group project showed the importance of standard computational tools for accuracy, efficiency, and collaboration. The projects were broadly successful in engaging students in working reproducibly on real scientific questions. We propose that a course on this model should be the foundation for future programs in neuroimaging. We believe it will also serve as a model for teaching efficient and reproducible research in other fields of computational science.
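The working style the course teaches can be illustrated with a small sketch: structured, documented code paired with an automated test, instead of point-and-click analysis. The function and data below are hypothetical examples, not taken from the course materials.

```python
# Illustrative sketch of structured code with an automated test
# (a hypothetical preprocessing step, not the course's actual code).

def detrend(signal):
    """Remove the mean from a time series, a basic preprocessing step."""
    mean = sum(signal) / len(signal)
    return [x - mean for x in signal]

def test_detrend():
    # An automated test documents and checks the expected behaviour,
    # so later changes that break it are caught immediately.
    assert detrend([1.0, 2.0, 3.0]) == [-1.0, 0.0, 1.0]

test_detrend()
```

Because the analysis step is a plain function with a test, it can be versioned, reviewed, and rerun identically by any collaborator.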
In recent years, biomedical research has faced increased scrutiny over issues related to the reproducibility and quality of scientific findings (1-3). In response to this scrutiny, funding institutions and journals have implemented top-down policies for grant and manuscript review. While a positive step forward, the long-term merit of these policies is questionable given their emphasis on completing a checklist of items rather than fundamentally reassessing how scientific investigation is conducted. Moreover, the top-down style of management used to institute these policies is arguably ineffective at engaging the scientific workforce to act on these issues. To meet current and future biomedical needs, new investigative methods are warranted that emphasize collective thinking, teamwork, and shared knowledge, and that cultivate change from the bottom up. Here, a perspective is discussed on a new approach to biomedical investigation within the individual laboratory that emphasizes collaboration and quality.
Results from cognitive neuroscience have been cited as evidence in courtrooms around the world, and their admissibility has been a challenge for the legal system. Unfortunately, the recent reproducibility crisis in cognitive neuroscience, which suggests that published studies in the field may not be as trustworthy as expected, has made the situation worse. Here we analysed how irreproducible results in the cognitive neuroscience literature could compromise the standards for the admissibility of scientific evidence, and pointed out how the open science movement may help alleviate these problems. We conclude that open science benefits not only the scientific community but also the legal system, and society in a broad sense. We therefore suggest that both scientists and practitioners follow open science recommendations and uphold the best available standards in order to serve as good gatekeepers in their own fields. Moreover, scientists and practitioners should collaborate closely to maintain the effective functioning of the law's entire gatekeeping system.
Reproducibility of modeling is a problem for any machine learning practitioner, whether in industry or academia. The consequences of an irreproducible model can include significant financial costs, lost time, and even damage to personal reputation (if results cannot be replicated). This paper first discusses the problems we encountered while building a variety of machine learning models, and then describes the framework we built to tackle model reproducibility. The framework comprises four main components (data, feature, scoring, and evaluation layers), each of which is itself composed of well-defined transformations. This enables us not only to exactly replicate a model but also to reuse the transformations across different models. As a result, the platform has dramatically increased the speed of both offline and online experimentation while also ensuring model reproducibility.
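The layered design described above can be sketched as a pipeline of composable, deterministic transformations. This is a minimal illustration under stated assumptions, not the authors' implementation: the class, layer names, and toy transformations below are all hypothetical.

```python
# Minimal sketch of a pipeline built from well-defined transformations
# (data, feature, and scoring layers). All names here are illustrative,
# not the framework described in the paper.
from dataclasses import dataclass, field
from typing import Callable, List

Transform = Callable[[list], list]

@dataclass
class Pipeline:
    """Chains layer transformations in a fixed, replayable order."""
    steps: List[Transform] = field(default_factory=list)

    def add(self, step: Transform) -> "Pipeline":
        self.steps.append(step)
        return self

    def run(self, data: list) -> list:
        for step in self.steps:
            data = step(data)
        return data

# Each layer is a pure function, so rerunning the pipeline on the same
# input reproduces the result exactly, and layers can be reused elsewhere.
def data_layer(rows):     # e.g. drop invalid records
    return [r for r in rows if r is not None]

def feature_layer(rows):  # e.g. scale features
    return [r * 2 for r in rows]

def scoring_layer(rows):  # e.g. apply a fitted model
    return [r + 1 for r in rows]

pipeline = Pipeline().add(data_layer).add(feature_layer).add(scoring_layer)
print(pipeline.run([1, None, 3]))  # → [3, 7]
```

Because every step is an explicit, side-effect-free transformation, the same `feature_layer` can be dropped into a different model's pipeline unchanged, which is the reuse property the abstract highlights.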
An increasing number of studies, surveys, and editorials highlight experimental and computational reproducibility and replication issues that appear to pervade most areas of modern science. This perspective examines some of the multiple and complex causes of what has been called a "reproducibility crisis," which can impact materials, interface/(bio)interphase, and vacuum sciences. Reproducibility issues are not new to science, but they are now appearing in new forms requiring innovative solutions. Drivers include the increasingly multidisciplinary, multimethod nature of much advanced science, the increased complexity of the problems and systems being addressed, and the large amounts and multiple types of experimental and computational data being collected and analyzed in many studies. Sustained efforts are needed to address the causes of reproducibility problems that can hinder the rate of scientific progress and lower public and political regard for science. The initial efforts of the American Vacuum Society to raise awareness of a new generation of reproducibility challenges and provide tools to help address them serve as examples of mitigating actions that can be undertaken.