Reproducibility: why it matters and how it can be nurtured
October 26, 2021
Torie Eva, Catriona Fennell, Katie Eve
At Elsevier, we support reproducibility of research through a wide range of initiatives
When you are seeking out a new recipe, it’s good to know that others have tried it and produced similar results to the original chef. You know it’s reproducible, and therefore you can trust the recipe. The same goes for research – but the stakes are astronomically higher.
Reproducibility – the repeatability of research findings to enable research and knowledge to progress – is vital and underpins trust in science. However, many challenges exist in relation to reproducing research. At Elsevier, we take our role as a steward of trust seriously; in this article, we explore reproducibility challenges and Elsevier’s activities to address them.
Complex challenges in reproducibility
Reproducibility faces many challenges, and they vary considerably between research fields.
While most researchers absolutely support the need for reproducible research, attitudes towards the practices that facilitate it can vary, as can the incentives for those practices. Take data sharing: some researchers may simply be cautious about sharing their data openly or concerned about data misinterpretation. Meanwhile, in certain disciplines like medicine, there are barriers to data sharing relating to privacy, obtaining consent from patients and research subjects, and issues with data de-identification. Similarly, researchers may not want to be associated with negative or null results, and this reluctance is compounded by reward systems that traditionally prioritize high-impact findings.
Some disciplines and studies present unique challenges. For example, psychological studies can be more difficult to reproduce due to natural variations in human behavior and low statistical power, while the use of AI in research brings specific challenges. To reproduce computational experiments, researchers need to share code, software and computing details, as well as provide substantial information on the data and its provenance, descriptions of methods, and detailed specifications of the hyperparameters used to generate results.
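To make the computational case concrete, here is a minimal sketch, in Python, of recording the details a computational experiment would need to be rerun: hyperparameters, a random seed, and the computing environment. The function and field names here are our own illustrative choices, not a formal standard.

```python
import json
import platform
import random
import sys

def capture_run_manifest(hyperparams, seed):
    """Build a manifest of the details needed to rerun a computational
    experiment: hyperparameters, random seed and computing environment.
    (Illustrative sketch; the field names are our own, not a standard.)"""
    random.seed(seed)  # fix the source of randomness before the experiment runs
    return {
        "hyperparameters": hyperparams,       # everything used to generate results
        "random_seed": seed,
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),      # OS and architecture details
    }

# Record the manifest alongside the results, e.g. as JSON in the data repository.
manifest = capture_run_manifest({"learning_rate": 0.01, "epochs": 20}, seed=42)
print(json.dumps(manifest, indent=2))
```

Archiving such a manifest with the shared code and data gives another researcher the specifics this paragraph describes, rather than leaving them to guess at settings.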
Also, the production of research itself is subject to natural variation and honest human error: experimental mistakes or wrong conclusions are an inherent risk when conducting research. For example, an unstable reagent or contaminated sample can yield differing results. We should avoid stigmatizing researchers when certain studies cannot be easily reproduced, given that it is rare for this to be caused by a researcher falsifying results with the intention to deceive. If researchers are stigmatized, authors may become less willing to share their data and methods with others and less inclined to alert the journal if they are unable to repeat their own previous work.
Lastly, the questions researchers seek to address are increasingly complex, and the corresponding studies are potentially expensive to design and implement, which again has knock-on effects for reproducibility.
Reproducibility efforts at Elsevier
It is evident that responsibility for reproducibility does not land at the feet of a single stakeholder group. It requires collaboration among funders, institutions, publishers and researchers to provide and fund education and incentives, set standards and change behaviors on reproducibility of research across the research process, from idea to publication.
A prerequisite for reproducibility is full transparency in how studies and experiments were undertaken, the methods and protocols involved, and the results obtained. We draw here on our experience with new approaches Elsevier has trialed to nurture reproducibility and thereby maintain the integrity of the scholarly record.
Supporting transparent methodology
Our journals incentivize researchers to be transparent in their methodology, which in turn supports reproducibility by allowing others to follow a clear method. In this vein, Cell Press launched STAR Methods, which defines the features of a robust, reproducible method: structured, transparent, accessible reporting. STAR Methods has proved highly successful, and key elements have already been rolled out to 1,500 journals.
We also offer entire journals dedicated to methods transparency. STAR Protocols publishes complete, authoritative and consistent instructions on how to conduct experiments. MethodsX publishes small but important customizations to methods. Such journals not only give authors an incentive to share this information but also help other researchers be more efficient.
Encouraging open research data sharing
Research data is the foundation on which scientific knowledge is built. Access to the research data that underpins published findings helps ensure the research can be successfully reproduced. As a key pillar of open science, and to uphold research integrity, we promote and encourage open research data sharing practices and incentives for authors. These include:
Providing products and platforms that incentivize researchers to share their research data in a way that is structured but easy to deploy.
Promoting FAIR data principles to ensure that data is findable, accessible, interoperable and reusable. We are a founding member of Force11, which developed the FAIR principles.
Implementing journal policies that encourage researchers to share research data transparently or provide a data availability statement.
Integrating data sharing into our submission workflows, thereby using our journals to incentivize researchers to share data. For example, we encourage data sharing at the point a researcher submits their paper to our journals. We found that simply making it easy for authors to share data, and reminding them early in the publication process, doubled the amount of data sharing, supporting reproducibility.
Investing in journals that publish data outputs, such as SoftwareX and Data in Brief.
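As a small illustration of what FAIR-aligned sharing looks like in practice, here is a sketch of a dataset metadata record. The field names and the DOI are hypothetical, chosen only to show how a persistent identifier (findable), an open licence (accessible and reusable), and a standard format (interoperable) travel with the data.

```python
import json

# A minimal, illustrative metadata record loosely following the FAIR
# principles (field names and identifier are hypothetical, not a formal
# standard): a persistent identifier makes the data findable, an open
# licence makes it accessible and reusable, and a standard open format
# keeps it interoperable.
record = {
    "identifier": "doi:10.0000/example-dataset",  # hypothetical DOI
    "title": "Example measurement dataset",
    "creator": "A. Researcher",
    "format": "text/csv",                         # a standard, open format
    "license": "CC-BY-4.0",
    "description": "Raw measurements underlying the published figures.",
}
print(json.dumps(record, indent=2))
```

A record like this, deposited alongside the data itself, is what allows another group to find, interpret and reuse the dataset without contacting the original authors.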
Peer review innovation
Initiatives such as Registered Reports and Results Masked Review aim for work to be judged on the merits of the research question and methodology, not the findings.
Registered Reports requires authors to submit and commit to their protocols before experiments are conducted. The journal then accepts the paper in principle, based on whether editors believe the protocol has merit, and commits to publishing the research regardless of the results.
With Results Masked Review, the experiments have already taken place, but the reviewers are first sent the paper with the results masked. Both of these models prevent publication bias and enhance transparency, thereby ensuring that results aren’t skewed in pursuit of publication.
Inclusion & diversity
Finally, inclusion and diversity in research are crucial for reproducibility. For example, if clinical trials are carried out only on men, the results may well diverge when the trials are repeated with women.
This has clear implications for society: products of research must take diversity into account. Elsevier has undertaken a range of activities to support inclusion and diversity in science, including developing a Gender Equality resource center providing free access to research, data, and tools related to gender; supporting gender balance across our editorial boards and the research community; and increasing awareness of these issues through our gender reports. Furthermore, publishers’ management of the peer review process includes providing tools and information to help editors to find the most relevant reviewers and increase diversity in the peer reviewer pool. As we recently explained, expertise and diversity in the peer review process reduces bias and increases scientific rigor, in turn enhancing reproducibility.
Looking ahead: a stakeholder collaboration
For many years, Elsevier has been experimenting with a range of solutions to meet reproducibility challenges. However, our pilots have met with differing results and degrees of success. For instance, journals publishing negative results and replication studies have so far seen low take-up from researchers, while our data journals have been highly successful.
We will nevertheless continue our work to promote reproducibility. In terms of data sharing, for example, our ambition is that during the course of 2022, we will require authors to link to their datasets or provide data availability statements across the majority of our journals. We continue to test and learn from the results of our journal and article pilots, while promoting our ongoing innovative projects like Registered Reports.
However, publishers are but one stakeholder, and reproducibility practices also need to be promoted by other parts of the research ecosystem. By their nature, journals operate far along in the research process and are therefore limited in their ability to encourage reproducibility at critical earlier junctures.
While some researchers are already championing reproducibility, the research community as a whole needs education, rewards and incentives to embark on practices that encourage reproducibility from the outset of their research projects. Stakeholders who operate at these earlier stages of research development, including funders and institutions, have important opportunities to influence and appropriately fund and incentivize researchers on reproducibility, complementary to the role of journals.
As outlined in the manifesto for reproducible science, stakeholders across the research community must work together to address reproducibility challenges, collaborating and aligning to build a positive research culture that rewards and integrates reproducibility practices. This could include agreeing on standards that support reproducibility, developing incentives – or even mandates – for authors to share data, and encouraging researchers to publish negative and null results.
We should examine fundamental questions, including:
What would be gained if research was fully reproducible?
What changes regarding research, publishing incentives and infrastructure would be required to make this possible, and which of these would have the greatest impact?
How can we resolve ongoing researcher concerns around practices such as data sharing?
We know we have more work to do to ensure the research we publish is as robust and reproducible as possible. We will continue this endeavor via our journals and services, and we look forward to partnering with the research community as part of this process.