Posts about reproducible paper (old posts, page 5)

Simple changes of individual studies can improve the reproducibility of the biomedical scientific process as a whole

We developed a new probabilistic model to assess how recommendations for rectifying the reproducibility crisis (publishing both positive and 'negative' results and increasing statistical power) affect competing objectives, such as discovering causal relationships, avoiding the publication of false positive results, and reducing resource consumption. In contrast to recent publications, our model quantifies the impact of each individual suggestion not only for a single study but also, in particular, their interplay and consequences for the scientific process as a whole. We show that higher-powered experiments can save resources in the overall research process without generating excess false positives. The better the quality of the pre-study information and its exploitation, the more likely this beneficial effect is to occur. Additionally, we quantify the adverse effects of neglecting good practices in the design and conduct of hypothesis-based research, and of omitting the publication of 'negative' findings. Our contribution is a plea for adherence to, and reinforcement of, good scientific practice and for the publication of 'negative' findings.
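The power/false-positive tradeoff this abstract describes can be illustrated with the standard positive predictive value (PPV) relation, which links statistical power, the significance threshold, and the quality of pre-study information. The sketch below is an illustrative simplification, not the authors' model, and the parameter values are hypothetical.

```python
def ppv(prior, power, alpha=0.05):
    """Positive predictive value: the fraction of claimed positive
    findings that are actually true.

    prior: pre-study probability that the tested hypothesis is true
           (higher when pre-study information is good and well exploited)
    power: 1 - beta, probability of detecting a true effect
    alpha: significance threshold (false-positive rate)
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# Raising power improves PPV without touching the false-positive rate:
underpowered  = ppv(prior=0.1, power=0.2)   # ~0.31
well_powered  = ppv(prior=0.1, power=0.8)   # 0.64

# Better pre-study information (a higher prior) amplifies the benefit:
good_prior    = ppv(prior=0.3, power=0.8)   # ~0.87
```

Under this simplified relation, a well-powered study turns more of its published positives into true positives, consistent with the abstract's claim that higher power need not generate excess false positives.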

Editorial: Data repositories, registries, and standards in the search for valid and reproducible biomarkers

The paucity of major scientific breakthroughs leading to new or improved treatments, and the inability to identify valid and reproducible biomarkers that improve clinical management, have produced a crisis of confidence in the validity of our pathogenic theories and the reproducibility of our research findings. This crisis in turn has driven changes in standards for research methodologies and prompted calls for the creation of open-access data repositories and the preregistration of research hypotheses. Although we should embrace the creation of repositories and registries, and the promise of greater statistical power, reproducibility, and generalizability of research findings they afford, we should also recognize that they alone are no substitute for sound design in minimizing study confounds, and no guarantee of the validity of our pathogenic theories, findings, and biomarkers. One way, and perhaps the only sure way, of knowing that we have a valid understanding of brain processes and disease mechanisms in human studies is to experimentally manipulate variables and predict their effects on outcome measures and biomarkers.

Practical guidelines for rigor and reproducibility in preclinical and clinical studies on cardioprotection

We refer to the recent guidelines for experimental models of myocardial ischemia and infarction [279], and now aim to provide practical guidelines to ensure rigor and reproducibility in preclinical and clinical studies on cardioprotection. In line with those guidelines [279], we define rigor as standardized, state-of-the-art design, conduct, and reporting of a study, which is in turn a prerequisite for reproducibility, i.e., replication of the results by another laboratory performing exactly the same experiment.

ReproServer: Making Reproducibility Easier and Less Intensive

Reproducibility in the computational sciences has been stymied by the complex and rapidly changing computational environments in which modern research takes place. While many espouse reproducibility as a value, the challenge of making it happen (both for one's own work and when testing the reproducibility of others') often outweighs the benefits. A few reproducibility solutions have been designed and implemented by the community. In particular, the authors are contributors to ReproZip, a tool that enables computational reproducibility by tracing research and bundling it together with the environment in which it takes place (e.g. one's computer or server). In this white paper, we introduce ReproServer, a tool for unpacking ReproZip bundles in the cloud. ReproServer takes an uploaded ReproZip bundle (.rpz file) or a link to one, and lets users unpack and run it in the cloud via their browser, allowing them to reproduce colleagues' work without having to install anything locally. This lowers the barrier to reproducing others' work, which will aid reviewers in verifying the claims made in papers and help researchers reuse previously published work.

Open Science Badges in the Journal of Neurochemistry

The Open Science Framework (OSF) has the mission of increasing openness, integrity, and reproducibility in research. The Journal of Neurochemistry became a signatory of the Transparency and Openness Promotion guidelines in 2016, which provide eight modular standards (Citation Standards, Data Transparency, Analytic Methods/Code Transparency, Research Materials Transparency, Design and Analysis Transparency, Study Pre-registration, Analysis Plan Transparency, Replication) with increasing levels of stringency. Furthermore, OSF recommends and offers a collection of practices intended to make scientific processes and results more transparent and available, in a standardized way, for reuse by people outside the research team. These include making research materials, data, and laboratory procedures freely accessible online to anyone. This editorial announces the decision of the Journal of Neurochemistry to introduce Open Science Badges, maintained by the Open Science Badges Committee and by the Center for Open Science (COS). The Open Science Badges, visual icons placed on publications, certify that an open practice was followed and signal to readers that an author has shared the corresponding research evidence, thus allowing an independent researcher to understand how to reproduce the procedure.

Assessment of the impact of shared brain imaging data on the scientific literature

Data sharing is increasingly recommended as a means of accelerating science by facilitating collaboration, transparency, and reproducibility. While few oppose data sharing philosophically, a range of barriers deter most researchers from implementing it in practice. To justify the significant effort required for sharing data, funding agencies, institutions, and investigators need clear evidence of benefit. Here, using the International Neuroimaging Data-sharing Initiative, we present a case study that provides direct evidence of the impact of open sharing on brain imaging data use and resulting peer-reviewed publications. We demonstrate that openly shared data can increase the scale of scientific studies conducted by data contributors, and can recruit scientists from a broader range of disciplines. These findings dispel the myth that scientific findings using shared data cannot be published in high-impact journals, suggest the transformative power of data sharing for accelerating science, and underscore the need for implementing data sharing universally.