In recent years, there’s been increasing awareness of a problem across many scientific fields—the problem of reproducibility. Can experiments be repeated (or "reproduced") to arrive at the same result? Evidence is piling up that the answer, all too often, is no. This makes it difficult to know which results we can confidently rely on, and which are spurious.
"It is entirely within the realm of possibility that the creation of a new publishing platform, focused on hosting formal replications alongside these review-style evaluations of method, would provide a new and more focused home for this type of discussion. Overall, implementing such a system would vastly improve the accessibility of research: both by providing links to peer-reviewed replications which have not been filtered by the file drawer, and, more directly, by enabling an overview of replication information at a glance."
Reproducibility material (data and code) for 'Direct and Indirect Welfare Chauvinism as Party Strategies: An Analysis of the Danish People’s Party', Scandinavian Political Studies.
A new version of ReproZip has been released, adding bug fixes and new commands related to distributed and server-based experiments.
This week in science, academia and publishing for reproducibility.
We compared the repeatability, reproducibility (intra- and inter-measurer similarity), separative power, and subjectivity (measurer effect on results) of four morphometric methods frequently used in ichthyological research: the "traditional" caliper-based (TRA) and truss-network (TRU) distance methods, and two geometric methods that compare landmark coordinates on the body (GMB) and scales (GMS).