Replication is the scientific gold standard that enables the confirmation of research findings. Concerns related to publication bias, flexibility in data analysis, and high-profile cases of academic misconduct have led to recent calls for more replication and systematic accumulation of scientific knowledge in psychological science. This renewed emphasis on replication may pose specific challenges to cross-cultural research due to inherent practical difficulties in emulating an original study in other cultural groups. The purpose of the present article is to discuss how the core concepts of this replication debate apply to cross-cultural psychology. Replications in cross-cultural research are distinct in that they require examinations of bias and equivalence in manipulations and procedures, and in that the targeted research populations may differ in meaningful ways. We identify issues in current psychological research (analytic flexibility, low power) and possible solutions (preregistration, power analysis), and discuss ways to implement best practices in cross-cultural replication attempts.
Scientific instruments are at the heart of the scientific process, from 17th‐century telescopes and microscopes, to modern particle colliders and DNA sequencing machines. Nowadays, most scientific instruments in biomedical research come from commercial suppliers, and yet, compared to the biopharmaceutical and medical devices industries, little is known about the interactions between scientific instrument makers and academic researchers. Our research suggests that this knowledge gap is a cause for concern.
The lack of standard terminology means that we do not clearly distinguish between situations in which there is not enough information to attempt repetition, and those in which attempts do not yield substantially the same outcome. To reduce confusion, I propose an intuitive, unambiguous neologism: ‘preproducibility’. An experiment or analysis is preproducible if it has been described in adequate detail for others to undertake it. Preproducibility is a prerequisite for reproducibility, and the idea makes sense across disciplines.
A fundamental challenge for open science is how best to create and share documents containing computational results. Traditional methods involve maintaining the code, generated tables and figures, and text as separate files and manually assembling them into a finished document. As projects grow in complexity, this approach can lead to procedures that are error-prone and hard to replicate. Fortunately, new tools are emerging to address this problem, and librarians who provide data services are ideally positioned to offer training. In the workshop we'll use RStudio to demonstrate how to create a "compilable" document containing all the text elements (including bibliography), as well as the code required to create embedded graphs and tables. We'll demonstrate how the process facilitates making revisions when, for example, a reviewer has suggested a change or when the underlying data have been updated. We'll also demonstrate the convenience of integrating version control into the workflow using RStudio's built-in support for git.
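The "compilable" document workflow described in this abstract can be illustrated with a minimal R Markdown sketch (the filenames, chunk labels, and data columns below are hypothetical, chosen only to show the structure; the YAML header and chunk syntax assume the standard RStudio/knitr toolchain):

````markdown
---
title: "Analysis Report"
output: html_document
bibliography: references.bib
---

Because the tables and figures are generated at knit time, editing the
underlying data file and re-knitting regenerates every embedded result.

```{r summary-table}
data <- read.csv("measurements.csv")  # hypothetical data file
knitr::kable(summary(data))           # embedded table
```

```{r trend-plot}
plot(data$time, data$value)           # embedded graph; columns are illustrative
```
````

Rendering with the Knit button in RStudio (or `rmarkdown::render()`) produces the finished HTML document, and the single `.Rmd` source file is what gets tracked in git.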
Top climate scientists say their field can improve its transparency. A group of researchers presented their findings on reproducibility in climate science to the National Academies of Sciences, Engineering, and Medicine yesterday as part of a months-long examination of scientific transparency.
For my Honors Senior Project, I read five research papers in the field of computational biology and attempted to reproduce the results. However, for the most part, this proved a challenge, as many details vital to utilizing relevant software and data had been excluded. Using Geir Kjetil Sandve's paper "Ten Simple Rules for Reproducible Computational Research" as a guide, I discuss how authors of these five papers did and did not obey these rules of reproducibility and how this affected my ability to reproduce their results.