Editorial: Data repositories, registries, and standards in the search for valid and reproducible biomarkers

The paucity of major scientific breakthroughs leading to new or improved treatments, and the inability to identify valid and reproducible biomarkers that improve clinical management, have produced a crisis of confidence in the validity of our pathogenic theories and the reproducibility of our research findings. This crisis in turn has driven changes in standards for research methodologies and prompted calls for the creation of open‐access data repositories and the preregistration of research hypotheses. Although we should embrace the creation of repositories and registries, and the promise of greater statistical power, reproducibility, and generalizability of research findings they afford, we should also recognize that they alone are no substitute for sound design in minimizing study confounds, and no guarantee of the validity of our pathogenic theories, findings, and biomarkers. One way, and perhaps the only sure way, of knowing that we have a valid understanding of brain processes and disease mechanisms in human studies is to experimentally manipulate variables and predict their effects on outcome measures and biomarkers.

Practical guidelines for rigor and reproducibility in preclinical and clinical studies on cardioprotection

We refer to the recent guidelines for experimental models of myocardial ischemia and infarction [279], and now aim to provide practical guidelines to ensure rigor and reproducibility in preclinical and clinical studies on cardioprotection. In line with the above guidelines [279], we define rigor as standardized, state-of-the-art design, conduct, and reporting of a study, which is in turn a prerequisite for reproducibility, i.e., replication of results by another laboratory performing exactly the same experiment.

ReproServer: Making Reproducibility Easier and Less Intensive

Reproducibility in the computational sciences has been stymied by the complex and rapidly changing computational environments in which modern research takes place. While many espouse reproducibility as a value, the challenge of achieving it (both for one's own work and for testing the reproducibility of others' work) often outweighs the benefits. A few reproducibility solutions have been designed and implemented by the community. In particular, the authors are contributors to ReproZip, a tool that enables computational reproducibility by tracing and bundling together research in the environment in which it takes place (e.g., one's computer or server). In this white paper, we introduce ReproServer, a tool for unpacking ReproZip bundles in the cloud. ReproServer takes an uploaded ReproZip bundle (.rpz file) or a link to one, and users can then unpack it in the cloud via their browser, allowing them to reproduce colleagues' work without having to install anything locally. This lowers the barrier to reproducing others' work, which will aid reviewers in verifying the claims made in papers and in reusing previously published research.
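As a rough illustration of the workflow described above, the sketch below drives the documented reprozip and reprounzip command-line tools from Python: it traces an experiment, packs it into an .rpz bundle, and then unpacks and re-runs it locally with Docker. The script name run_analysis.sh and the bundle and directory names are placeholders; ReproServer would replace the local unpacking step with an upload of the .rpz file (or a link to it) through the browser.

```python
# Minimal sketch of the ReproZip packaging workflow, assuming reprozip and
# reprounzip (with its Docker plugin) are installed; ./run_analysis.sh is a
# placeholder for the experiment being captured.
import subprocess

def pack_experiment() -> None:
    # Trace the experiment: reprozip records the files, libraries, and
    # environment the run touches.
    subprocess.run(["reprozip", "trace", "./run_analysis.sh"], check=True)
    # Bundle the traced run into a self-contained .rpz archive.
    subprocess.run(["reprozip", "pack", "my_experiment.rpz"], check=True)

def unpack_and_rerun() -> None:
    # Locally, reprounzip rebuilds the environment (here via Docker) and
    # re-executes the run; ReproServer performs analogous steps in the cloud.
    subprocess.run(["reprounzip", "docker", "setup",
                    "my_experiment.rpz", "unpacked"], check=True)
    subprocess.run(["reprounzip", "docker", "run", "unpacked"], check=True)

if __name__ == "__main__":
    pack_experiment()
    unpack_and_rerun()
```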

Building research evidence towards reproducibility of animal research

Since our debut in late 2006, PLOS ONE has strived to promote best practices in research reporting as a way to improve reproducibility in research. We have supported initiatives towards increased transparency, as well as the gathering of evidence that can inform improvements in the quality of reporting in research articles. In line with this commitment, PLOS ONE collaborated in a randomized controlled trial (RCT) testing the impact of an intervention that asked authors to complete a reporting checklist at the time of manuscript submission. The results from this trial have recently been posted on bioRxiv (1) and provide a further step toward building the evidence base needed to inform editorial interventions for improving reporting quality.

Open Science Badges in the Journal of Neurochemistry

The mission of the Open Science Framework (OSF) is to increase openness, integrity, and reproducibility in research. The Journal of Neurochemistry became a signatory of its Transparency and Openness guidelines in 2016, which provide eight modular standards (Citation Standards, Data Transparency, Analytic Methods/Code Transparency, Research Materials Transparency, Design and Analysis Transparency, Study Pre‐registration, Analysis Plan Transparency, Replication) with increasing levels of stringency. Furthermore, OSF recommends and offers a collection of practices intended to make scientific processes and results more transparent and available in a standardized way for reuse by people outside the research team. These include making research materials, data, and laboratory procedures freely accessible online to anyone. This editorial announces the decision of the Journal of Neurochemistry to introduce Open Science Badges, maintained by the Open Science Badges Committee and the Center for Open Science (COS). The Open Science Badges, visual icons placed on publications, certify that an open practice was followed and signal to readers that an author has shared the corresponding research evidence, thus allowing an independent researcher to understand how to reproduce the procedure.

Assessment of the impact of shared brain imaging data on the scientific literature

Data sharing is increasingly recommended as a means of accelerating science by facilitating collaboration, transparency, and reproducibility. While few oppose data sharing philosophically, a range of barriers deter most researchers from implementing it in practice. To justify the significant effort required for sharing data, funding agencies, institutions, and investigators need clear evidence of benefit. Here, using the International Neuroimaging Data-sharing Initiative, we present a case study that provides direct evidence of the impact of open sharing on brain imaging data use and resulting peer-reviewed publications. We demonstrate that openly shared data can increase the scale of scientific studies conducted by data contributors, and can recruit scientists from a broader range of disciplines. These findings dispel the myth that scientific findings using shared data cannot be published in high-impact journals, suggest the transformative power of data sharing for accelerating science, and underscore the need for implementing data sharing universally.