
Advancing responsible research assessment: Elsevier’s story

May 17, 2021

By Linda Willems, Holly Falk-Krzesinski, PhD

From DORA and the Leiden Manifesto to ICSR and its new Lab – supporting evaluation and metrics is a key goal for Elsevier

As we explore in the first article of this two-part series, the idea that the assessment of researchers and their outcomes should be “responsible” has taken on a new resonance in recent years.

In 2020, Elsevier demonstrated its commitment to responsible research metrics and evaluation by publicly declaring support for the Leiden Manifesto (LM) and the San Francisco Declaration on Research Assessment (DORA).

But what does that support mean in practice? In this article, Elsevier’s Holly Falk-Krzesinski discusses the implications for Elsevier, librarians, and researchers. She also highlights some of the other assessment-related activities that Elsevier has initiated.

Many in the research ecosystem have long held the view that responsible research assessment looks beyond bibliometrics such as publication and citation metrics.

It’s certainly a point that Dr Holly Falk-Krzesinski, Elsevier’s Vice President of Research Intelligence, drives home whenever she speaks publicly on the topic. “I stress the importance of including measures that address the broad range of assessment-related questions; for example, funding, collaboration, usage, commercialization, and the adoption of innovation. As more diverse data sources and metrics are used, a richer and more informative picture of research emerges.”

According to Holly, for Elsevier, the term research evaluation not only encompasses the approaches used to assess the performance and practices of people, programs, institutions, and even nations, but also how that data is used to inform strategic decision-making: “It can include formal and informal evidence frameworks, methods and tools, narrative evidence on societal impact, etc.”

And when Elsevier talks about research metrics, it’s referring to scientometric, bibliometric or ‘altmetric’ indicators and the rankings that draw upon them. “Research metrics are an important part of the research evaluation landscape – they help us understand the current position and shape research strategies.”

Holly adds: “Critically, not all evaluation uses metrics and not all metrics are useful for evaluation. But there is a space where they overlap, and that is where we think about responsible use of research metrics in research evaluation.”

Aligning with DORA and the Leiden Manifesto

Elsevier has taken an active role in driving that responsible use over the past few years; for example, by sharing journal metrics and performance indicators on journal homepages, by championing the use of a “basket” of metrics, and by partnering with the research community to enable open science.

Increasingly, Elsevier’s solutions offer easy-to-access guidance on research metrics; for example, descriptions of metrics and their formulas are embedded in SciVal, and most data is downloadable for further examination.

Importantly, 2019 saw Elsevier establish the International Center for the Study of Research (ICSR), with its mission to further the study of research and contribute to the evidence base supporting the practice of research strategy, evaluation and policy. Holly, who serves on ICSR’s advisory board, explains: “It’s really at the heart of all we do here at Elsevier, and libraries and librarians are important stakeholders.”

In 2020, Elsevier and ICSR took two additional and very public steps. They endorsed the Leiden Manifesto for Research Metrics (LM) and its 10 principles to guide research evaluation. And they announced that they had signed the San Francisco Declaration on Research Assessment (DORA) and would make the reference lists of all Elsevier articles openly available via CrossRef.

Holly says: “There are other, what I call ‘manifestatements’, on research assessment and metrics out there; for example, the Metric Tide and the Hong Kong Principles. But one of the key reasons we selected DORA and LM is that their recommendations are actionable. And in the case of DORA, it’s a manifesto by the people, for the people – that’s really important to us.”

She adds: “When we announced the news, some people wondered ‘why now?’; after all, DORA was launched in 2012. The reason is simple – we did not want to align with either LM or DORA until we were in a position to comply with all their recommendations. That time has come.”

Holly points to the “order of magnitude” involved in preparing for something like the DORA signing. “Elsevier has more than 2,200 journals. Many of those are published in partnership with the community, so we weren’t the only ones involved in the decision-making process. We also needed to ensure we could comply with all the DORA requirements first; for example, sharing citations with CrossRef. We are talking about a lot of citations, so we had to develop, then test a process that would allow us to do that. And CrossRef needed the right tech capabilities to accept our data and make it available. That all only fell into place just before we signed.”
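To make the citation-sharing piece concrete: once a publisher deposits reference lists with CrossRef and allows them to be exposed, anyone can retrieve them through CrossRef's public REST API. The sketch below is purely illustrative – it shows a reader-side check, not Elsevier's internal deposit process, and the DOI used is a placeholder.

```python
# A minimal sketch: checking whether an article's reference list is openly
# available via CrossRef's public REST API. The DOI below is a placeholder.
import requests

def get_open_references(doi: str) -> list:
    """Fetch the deposited reference list for a DOI from the CrossRef API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    message = resp.json()["message"]
    # The "reference" field is present only when the publisher has deposited
    # the reference list and permitted CrossRef to expose it publicly.
    return message.get("reference", [])

refs = get_open_references("10.1016/j.example.2020.123456")  # placeholder DOI
print(f"{len(refs)} openly available references")
```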

What “alignment” means in practice

According to Holly, Elsevier’s journal metric CiteScore perfectly illustrates that commitment. “It complies with many of the Leiden Manifesto principles. For example, principle 10 says that indicators must be scrutinized and regularly updated. CiteScore was launched in 2016 and, by 2020, we had revised it based on expert and user feedback. Principle 4 says that data collection and analytical processes have to be open, transparent and simple. The new version we released following that feedback is fully transparent, with the methodology openly available. Principle 5 says that those evaluated must be allowed to verify data and analysis. We support that too – the underlying data are freely available for verification purposes without a subscription to Scopus.”
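For context, the openly published methodology behind the revised CiteScore is, in outline, a simple ratio over a four-year window: for a reporting year y, citations received in years y−3 through y by eligible documents published in that same window are divided by the number of those documents:

\[
\text{CiteScore}_{y} = \frac{\text{citations received in years } y-3 \text{ to } y \text{ by documents published in years } y-3 \text{ to } y}{\text{number of documents published in years } y-3 \text{ to } y}
\]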

Holly continues: “The Leiden Manifesto states that the best decisions are based on a combination of quantitative and qualitative evidence and the highest quality data. Scopus combines a comprehensive, curated abstract and citation database with enriched data and linked scholarly content so it’s easy to find relevant and trusted research, identify experts, and access reliable data, metrics and analytical tools to support decision-making.”

But Holly adds a note of warning: “These manifestatements are often imbued with meaning not explicit in the text, so it’s important to clarify what they do say, and what they don’t. NONE propose a ban on the use of existing metrics – only an end to their inappropriate use. NONE call for a ban on the creation of new research metrics – they ask only that they are designed according to best practices. And NONE claim that metrics themselves are responsible for how they are used – they state that responsibility rests with those who use them or guide their use.”

She adds: “Compliance is a slippery concept: the best that anyone can do is strive to honor the spirit (if not the letter) of these statements. That’s true whether you are a research impact librarian or an organization such as Elsevier.”

Launching the ICSR Lab

Another big step for Elsevier has been the launch of the ICSR Lab, a new platform that provides researchers with a free-to-use sandbox to pursue projects that are non-commercial and scholarly in nature. Holly explains: “It’s powerful as a computational platform, which makes it perfect for analyzing large, structured datasets. And it’s extensive due to the size and breadth of the datasets that it contains. For example, we’ve included all Scopus abstracts and metadata, as well as SciVal datasets and metadata, and PlumX metrics data from Scopus. We’ve also added the gender disambiguation information used in Scopus profiles, which is great for research into gender diversity. And we are constantly looking to add new datasets.”

She adds: “It not only allows for powerful analysis on the part of investigators, students and early career researchers, it also offers reproducibility by design – other researchers can conduct reproducibility studies on the projects done there.”
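To give a flavor of the kind of large-scale analysis Holly describes, here is a hypothetical sketch in PySpark. The Lab's actual schema is not documented here, so the table and column names below are illustrative assumptions, not the ICSR Lab's real datasets.

```python
# A hypothetical sketch of the style of analysis a computational sandbox like
# the ICSR Lab enables. Table and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gender-diversity-example").getOrCreate()

# Assume a curated table of publication-author records with an inferred gender
# field, similar in spirit to the Scopus-derived datasets described above.
authors = spark.table("scopus_author_records")  # hypothetical table name

# Count distinct publications per year by inferred author gender.
(authors
    .groupBy("pub_year", "inferred_gender")
    .agg(F.countDistinct("publication_id").alias("n_pubs"))
    .orderBy("pub_year")
    .show())
```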

Holly spoke about Elsevier's latest evaluation and metrics initiatives during the webinar Advancing responsible research metrics: implications for librarians and their users, held in May 2021 (timestamp 27:08 – 44:40). View the webinar.

Contributors

Linda Willems

Holly Falk-Krzesinski, PhD

Dr Holly Falk-Krzesinski is Vice President, Research Intelligence, on the Global Strategic Networks team.