
Evolving the research assessment landscape

January 25, 2023

Zoë Genova

A panel of experts discusses research assessment reform at the roundtable convened by Science|Business and Elsevier in Brussels.

Dr Nick Fowler, Elsevier’s Chief Academic Officer, lends his perspective on the pathway forward for research assessment reform

In recent years, there has been a call to change research assessment practices to ensure the fair and accurate reflection of the quality and diverse impact of research. After much debate and discussion, research assessment reform is now a top priority for the scientific community in Europe and beyond. The Agreement on Reforming Research Assessment, published in July 2022 as a result of a co-creation process led by Science Europe and the European University Association with the support of the European Commission, calls for basing assessment primarily on qualitative judgement, supported by the responsible use of quantitative indicators. Signatories of the document commit to research assessment reform and review by the end of 2027.

To help guide the path forward for research assessment reform, Science|Business and Elsevier convened a roundtable discussion in October with representatives from EU institutions, government agencies, funders and international experts in the field, meeting in Brussels and virtually. Participants discussed the balance of quantitative and qualitative factors in the evaluation process, respecting diversity, developing international cooperation, and taking research peculiarities into account while maintaining a concrete framework for assessment systems.

The workshop resulted in a report with recommendations and takeaways for the future of research assessment.

I sat down with Dr Nick Fowler, Chief Academic Officer at Elsevier, to reflect on the outcomes of the roundtable and his thoughts on the future of research assessment.

The first takeaway of the workshop focuses on rebalancing the use of metrics, as opposed to the current system’s heavy reliance on publications in prestigious journals. What’s missing from the current research evaluation system?

My first thought on this is to be careful not to throw the baby out with the bath water. Meaning, metrics and qualitative narratives are inextricably linked, and you can’t reject the whole journal publishing system or indicators when working on reform. The thought that we should completely do away with metrics and instead have peer review doesn’t take into account that articles are already peer reviewed and qualitatively assessed. It would be hugely inefficient to ignore or, effectively, duplicate this system. It is critical to recognize that published articles are themselves outputs that have already passed through a qualitative peer review judgment, and that each citation reflects a subjective decision by an individual to cite that article.

That said, there are three clear areas for improvement with the current system:

  1. The breadth of academic outputs that are assessed, because the scope of outputs aside from journal articles is still relatively limited. In the humanities, for instance, there aren’t any databases that can tell you how many poems or paintings or plays exist, and therefore it is actually difficult to count those outputs or assess their impact or quality. The same can be said for datasets and computer code.

  2. A knowledge of the environment and culture behind the production of research. This element considers how research itself is conducted, independently of the research product. There is an increasing desire to understand diversity, inclusivity and equity in the research environment. Is the output produced mostly by White men? How nurturing are the researchers? Does a researcher always put their own name first on a paper while those who contributed are relegated or made invisible? Is it clear what contributions were made and by whom? Are they enabling reproducibility? This element aims to identify and therefore foster an inclusive, diverse and equitable research culture.

  3. The impact of research on society. Ultimately, universities and researchers would like to know what the impact of their research actually is. For example, DNA is a double helix — so what? How does that knowledge benefit society? How do you begin to attribute the impact of research to the individuals and institutions that conducted it? Can I say this work saved 10,000 lives? In considering societal impact, it’s important to take into account time lags and multiple contributions over time. We don’t have reliable aggregated indicators to show the relationship between the research itself and its outcomes. Social, economic and environmental indicators can help show the relationship between research and real-world impacts. The UN Sustainable Development Goals are the closest we have to a framework for measuring this impact, but much more work is needed in this area.

What feedback have you received from the scientific community on the current research evaluation system?

The feedback generally comes from those seeking to change the evaluation system rather than from the researchers themselves. Some comments and concerns include:

  • “Citations aren’t a good indicator of impact.”

  • “Bad work is often cited.”

  • “How do you know if 100 citations reflect good research as opposed to an article gaining popularity and therefore more citations?”

  • “There is too much time-consuming administrative work.”

There is a sense that critics also believe publishers want to defend the journal impact factor, but this criticism is misplaced. The academic community initially came up with the impact factor to determine which journals to submit their work to. It provided a measure to help authors identify journals that, while being a good fit for their topic, also attract the most citations, which in turn helps them advance in their careers. The Coalition for Advancing Research Assessment (CoARA) recommends that research assessment reform include reducing dependency on the impact factor. Indeed, the impact factor was designed to provide a quality indicator about journals, not researchers. At Elsevier, we are developing indicators that aim to provide insights about people beyond the number of citations of the journal in which they published. Those insights aim to address a broader range of researchers’ outputs, as well as the culture that researchers foster and the impact they have on society. They should also take into account contextual factors like the need to take career breaks to care for children or other family members.

The need for broader indicators is common across researchers, university leaders and funders.

Another recommendation is that although there is a need to change the existing system, the path should be an evolution rather than a revolution that completely replaces the old model. What could a progressive and collaborative evolution look like?

A progressive evolution starts with being clear about what you are trying to evaluate and the questions you are trying to answer. For instance, if you are trying to evaluate whether a researcher should be promoted, you might focus on understanding their academic impact: Where are they published? How often is their work cited? To determine how inclusive and nurturing a researcher is, you might want to look at indicators that show if a researcher is helping others. Do they win funding or distribute it? Do they take part in extracurricular activities? Are they great teachers, great at talking to industry, or great at communicating to society? What is the economic and social impact of their work?

Each question requires different indicators. Most indicators have some kind of functional benefit if the context is right, but no indicator will fit all circumstances. Such an evolution also requires the development of the data sets that underpin these indicators.

Considering the push for both the reform of existing assessment systems and the promotion of open science in Europe, how do you think this momentum for dual change could unfold in the research ecosystem?

Research assessment reform is designed to drive open science. This reform is increasingly likely to include measures of open access, i.e., whether a researcher’s data is openly available, and researchers will likely gain recognition for that. There is a relationship between making articles open and gaining more citations. This may incentivize researchers to publish open access in the existing system. The openness of data sets could lead to more grants, and then transparency in who has received which grant leads to publicly available data sets. The more data is made openly available, the more can be mined from that data. This moves things along right across the chain. For example, when health outcome data is more open, more can be done to assess health impact overall and tie this information back to researchers. By moving across the chain, open access, open data and open research assessment can progress together under the umbrella of open science.

Finally, one more conclusion is that all knowledgeable stakeholders should be involved in the transition to new research assessment systems. What role can publishers and data analytics companies like Elsevier play?

In general, it’s important to remember that publishers and data analytics companies have a wealth of experience in this area. This is what we do. Commercial providers like Elsevier have a role to play if change is to be achieved globally because the scale of investment required to have reliable, accurate global standards is very large. Information companies like Elsevier are part of the global scientific ecosystem, not separate from it. Publishers can support implementation of the core principles of the reform and experiment with the academic community on innovative ideas. This is what we are already doing with a good number of academic partners across the globe who are keen to investigate and co-create new approaches to assessment. We have sophisticated expertise and decades of experience gained from working with governments, funders and universities around the world. We want to share this to enable researchers to solve difficult problems. We welcome the change research assessment reform will bring and look forward to continuing to work together.

Contributor

Zoë Genova