A new study in which researchers tested the credibility of past investigations reached a sobering conclusion: scientific research is not always reliable. Few of the past studies could be replicated, suggesting that some findings are either too biased or too idiosyncratic to stand as established results.
Reproducible Science: Promoting Open Science
A decade ago, John P.A. Ioannidis published a provocative and much-discussed paper arguing that most published research findings are false. It’s starting to look like he was right.
A study that sought to replicate 100 findings published in three prominent psychology journals has found that, across multiple criteria, independent researchers could replicate less than half of the original findings. In some cases this may call into question the validity of some scientific findings, but it may also point to the difficulty of conducting effective replications and achieving reproducible results.
Research findings advance science only if they are significant, reliable and reproducible. Scientists and journals must publish robust data in a way that renders it optimally reproducible. Reproducibility has to be incentivized and supported by the research infrastructure but without dampening innovation.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web.
In collaboration with the University of Washington (UW) and Berkeley, and under the sponsorship of the Moore and Sloan foundations, NYU is working on a new initiative to 'harness the potential of data scientists and big data'. As part of this initiative, we aim to increase awareness of best practices for sharing, preservation, provenance, and reproducibility across the UW, NYU, and Berkeley campuses and to encourage their adoption.
The entire field of particle physics is set to switch to open-access publishing, a milestone in the push to make research results freely available to readers.
Ph.D. dissertation, Department of Computer Science, Stanford University, 2012: "By understanding the unique challenges faced during research programming, it becomes possible to apply techniques from dynamic program analysis, mixed-initiative recommendation systems, and OS-level tracing to make research programmers more productive. This dissertation characterizes the research programming process, describes typical challenges faced by research programmers, and presents five software tools that I have developed to address some key challenges."
When a cancer study is published in a prestigious peer-reviewed journal, the implication is the findings are robust, replicable, and point the way toward eventual treatments. Consequently, researchers scour their colleagues' work for clues about promising avenues to explore. Doctors pore over the pages, dreaming of new therapies coming down the pike. Which makes a new finding that nine out of 10 preclinical peer-reviewed cancer research studies cannot be replicated all the more shocking and discouraging.
A blog post by C. Titus Brown on how he and his co-authors made one of their papers replicable.