Developing effective information retrieval models has been a long-standing challenge in Information Retrieval (IR), and significant progress has been made over the years. With the increasing number of retrieval functions being developed and the release of new data collections, it becomes difficult, if not impossible, to compare a new retrieval function with all existing retrieval functions over all available data collections. To tackle this problem, this paper describes our efforts to construct a platform that aims to improve the reproducibility of IR research and facilitate the evaluation and comparison of retrieval functions.
A high-quality search strategy is considered an essential component of systematic reviews, yet many reviews do not contain reproducible search strategies. It is unclear whether low reproducibility spans medical disciplines, is affected by librarian/search-specialist involvement, or has improved with increased awareness of reporting guidelines.
Reproducing palaeontological results depends on unrestricted access to fossils described in the literature, allowing others to re-examine or reinterpret them. Museums have policies and protocols for keeping materials in the public trust, but access to privately owned fossil collections can be a problem.
Across many scientific fields, the "reproducibility crisis" has been a subject of extensive discussion in the past few years. Hundreds of psychologists attempted to replicate 100 studies as part of the Reproducibility Project: Psychology and reported that fewer than half of the replication attempts succeeded. In biomedicine, a study from the biotech firm Amgen tried to re-create the results of 53 "landmark" preclinical cancer studies and reproduced only six of them. Amid growing concern about research reliability, funders including the National Institutes of Health (NIH) have called for a greater effort to make research reproducible through transparent reporting of the methods researchers use to conduct their investigations.
The ongoing dialogue has included the role of improperly validated research reagents, such as antibodies, with blame falling on reagent vendors, researchers, and journals alike. This article highlights how the lack of consistent research on antibody validation has contributed to the reproducibility crisis, and describes, from the perspective of Cell Signaling Technology (CST), the role vendors can play in making research more robust and reproducible.
The lack of reproducibility of preclinical experimentation has implications for sustaining trust in and ensuring the viability and funding of the academic research enterprise. Here I identify problematic behaviors and practices and suggest solutions to enhance reproducibility in translational research.