
The Promise of Policy: In Theory
If a journal will publish your research only if it is completely reproducible, you have two choices: meet the policy requirements, or find another journal. That is how policy works. In an ideal world, that is.
If a funder asks you to make your research completely reproducible before you receive funding, then you simply make it reproducible. Policy incentives are strong motivators for changing scientific behaviour. In an ideal world.
The Reality of Policy: Limits and Loopholes
In the real world, policy does drive behavioural change, but it is not a perfect solution that guarantees all research is reproducible. For example, when journal editors endorsed reporting guidelines, many journals added a formal requirement to upload a completed reporting checklist together with the submitted manuscript.
However, editors do not always check whether the checklist is present; if it is present, they do not always check whether it was completed truthfully; and even a truthfully completed checklist does not guarantee that all the information needed to reproduce a study has been provided.
Clinical Trials: A Case in Point
In the field of clinical medicine, trials involving humans must be registered before they are published. In fact, they must be registered before they have started, although that is not always easy to check. Medical journals have agreed that any randomised clinical trial submitted for publication must be registered, for example on a website such as clinicaltrials.gov.
However, according to the WHO, 37% of registered clinical trials in Europe alone had already started patient inclusion before their registration date. Furthermore, although trial registration aims to improve the reproducibility of trials (as well as protect trial participants) by ensuring openness of methods and results, registrations are not always clear enough to enable reproduction, and the lack of available results makes it impossible to assess whether reproducing or replicating a trial leads to similar findings.
Open Data: Policy Without Enforcement
A third example is funders' requirement that researchers promise to properly manage and share their data. Even though funders ask researchers to publish their results in gold open-access journals or to share their data, it is rarely checked whether researchers keep these promises. And when it is checked, funds are rarely actually withheld for non-compliance with open science policies. Even when open science practices are followed, that does not always mean studies are reproducible.
Two Key Challenges of Policy-Based Reform
These examples show the two main challenges of policy change as a strategy to improve reproducibility.
First, policy changes will only improve the reproducibility of research if the consequences of following or ignoring a policy genuinely affect researchers and their careers. This requires thinking about carrots and sticks. A rejection from one journal carries little weight if another, equally prestigious journal is less strict in its policies. By contrast, if datasets and code count as research outputs valued equally with publications, that may be a strong incentive to publish them openly.
The second challenge is that it is difficult to actually measure and monitor the reproducibility of research. Sharing data may be a prerequisite for reproducibility, but not every shared dataset leads to a reproducible study.
What OSIRIS Is Doing
OSIRIS is working with funders, institutes and journals to investigate which policies may improve reproducibility. We do this by, for example, striving for consensus on how to measure reproducibility.
To enable this, we have developed a checklist for measuring reproducibility. Only with such a measure can we assess whether policies on open science practices actually lead to more reproducible research. We are also investigating whether and how checklists improve reproducibility when they are used in an editorial context or in a funding process. We will keep you posted on our progress on these topics.
Reproducibility isn’t guaranteed by policy; it’s enabled by culture, accountability, and the right incentives. At OSIRIS, we’re working to turn well-meaning mandates into meaningful outcomes. By developing practical tools, such as reproducibility checklists, and testing how they function in real-world editorial and funding settings, we aim to bridge the gap between policy and practice. Because in the end, making research reproducible isn’t just about rules; it’s about responsibility. Stay tuned as we share what works, what doesn’t, and what’s next.
For updates on other OSIRIS activities, visit our website to read our blogs, check our events section, and follow us on social media to discover what’s new and how you can get involved!