
What’s in your basket? Evaluating journals in the modern age

5 June 2018

By Andrew Plume, PhD


© istockphoto.com/alexkich

The continuing evolution of journal citation metrics

Why journal citation metrics matter

As an early career researcher many years ago, I was faced with a choice. Towards the end of my PhD research, I had gathered enough data to tell an interesting story and set about drafting a manuscript for journal publication. With the prospect of needing to find a “good” post-doc position looming, I knew that the choice of journal to submit to was key. Aside from considerations of journal scope and readership, I needed to publish in a venue whose standing would rub off on my article: in the long months or years before readership or citations might start to accrue to my work, the simple fact of my association with this journal would help evidence my merit as a researcher.

At that time, the only objective indicator of journal “importance” or “influence” was the Impact Factor (IF). And so it was that I found myself working my finger down a list of journals in my field by descending IF, weighing my chances of acceptance against the “payday” of publication in a high-IF title.

Invented by the late Eugene Garfield in 1972, the IF was not originally created with this purpose in mind. Instead, Garfield needed to ensure that his expanding citation index (now known as Clarivate’s Web of Science) was covering the most important scholarly literature, and used a journal citation metric to help him select sources. Later, the IF was adapted along similar lines by the library community to assist with collection development decisions. It was only later - in the 1990s - that the IF began to be conflated with research evaluation at the level of the individual researcher (a trend that Garfield himself was wary of). The pervasiveness of the IF in individual researcher evaluation was most elegantly demonstrated in a 2015 study published in PLoS One, which used functional neuroimaging to show that the reward centre in a scientist’s brain is activated in anticipation of a publication, and increasingly so when the publication venues depicted have higher IFs.

Metrics never exist in isolation, but instead reflect the goals of the evaluative frameworks we place them in, which in turn mirror the cultural backdrop. So it is in research, where metrics such as the h-index reflect the reliance of research assessment approaches on productivity and excellence, which in turn mirrors the underlying “publish or perish” culture bred of increasing competition for scarce resources.

Journal citation metrics and the basket of metrics

As an editor, you are likely only too familiar with the controversies that have persisted around the “use and abuse” of the IF. You are also likely to spend a considerable proportion of your time wearing your editorial hat thinking and talking about the IF, and considering what it means for your leadership and the strategic direction of your journal. But it’s important to recognise that in the meantime the world is moving on: in response to some well-known criticisms of the IF, a variety of alternative journal citation metrics have become available in recent years, three of which are based on Elsevier’s Scopus database: SNIP, SJR and, most recently, CiteScore (first released in 2016, the latest values were published in May 2018).

These journal citation indicators now form part of a so-called “basket of metrics” – a growing selection of recognised measures that may also include article-, researcher- and institution-based metrics, and those based on citations, usage, peer review outcomes or societal impact. The basket of metrics is flexible and may never be complete, but shifts dynamically based on what is important to measure and what can be measured. Our guidance for you as editors is the same as that we offer anyone using metrics for evaluative purposes, and takes the form of two “golden rules”: firstly, recognising that metrics may emphasise different things, always select more than one metric, choosing them so that they complement each other; and secondly, always use metrics alongside informed judgement.

To assess the recent performance of a journal we might, for example, select CiteScore as a size-independent measure of average citation performance, total citations (the CiteScore numerator) as a size-dependent indicator of citation popularity, and the CiteScore percentile rank in the journal’s subject category as a relative view of the journal’s standing within the field. Alongside these, however, it is also important to consider the journal’s success in meeting author and reader needs, such as through community feedback or survey work (or even the informal feedback you receive in your role as editor from authors, reviewers and your editorial network). Only in this way does our evaluation become meaningful and helpful.
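To make that distinction concrete, here is a minimal sketch in Python. The journal counts, the category scores and the percentile calculation are all invented for illustration; they are not Scopus data, and the official CiteScore and percentile methodology may differ in detail (at the time of writing, CiteScore was described as citations received in a calendar year to documents published in the preceding three years, divided by the number of those documents).

```python
# Illustrative sketch only: the counts below are invented, not Scopus data,
# and the official CiteScore/percentile methodology may differ in detail.

def citescore_like(citations_in_year: int, docs_in_window: int) -> float:
    """Size-independent average: citations received in one year to documents
    published in the preceding window, divided by the number of documents."""
    return citations_in_year / docs_in_window

def percentile_rank(value: float, category_values: list[float]) -> float:
    """Share of journals in the subject category scoring at or below `value`
    (a simple stand-in for the official percentile calculation)."""
    at_or_below = sum(1 for v in category_values if v <= value)
    return 100 * at_or_below / len(category_values)

# Hypothetical journal: 1,200 citations this year to 400 documents published
# in the preceding three-year window.
total_citations = 1200                          # size-dependent "popularity"
average = citescore_like(total_citations, 400)  # size-independent average: 3.0

# Hypothetical CiteScore-like values for all journals in the same category.
category = [0.8, 1.1, 1.9, 2.4, 3.0, 3.6, 5.2]

print(f"Total citations: {total_citations}")
print(f"Average per doc: {average:.1f}")
print(f"Percentile rank: {percentile_rank(average, category):.0f}")
```

The point of the sketch is simply that the size-dependent numerator, the size-independent average and the relative percentile each answer a different question, which is why the “golden rules” ask for more than one of them, used alongside informed judgement.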

Beyond citations: What next in journal metrics?

As noted earlier, the basket of metrics may continue to expand and change composition based on what is important to measure and what can be measured. There is an increasing desire, for example, to reward researchers for their hitherto largely unsung efforts in peer review, and the increasing popularity of services that expose such contributions may lead to widely adopted metrics of the volume and “quality” of peer review conducted. Similarly, a critical aspect in the decision-making process for many authors when evaluating journals for their next manuscript submission is the speed of the publication process, as measured from submission to online publication and, as a subset of this, the time elapsed from submission to first decision. In the online world, article-level usage (downloads, as a measure of interest and readership) has also risen to prominence in services such as Plum Metrics. Such indicators are calculable at journal level even now; for example, many journals display publication speeds on their homepages, but there has been no push to standardise this into a metric, as choices must be made and defended (Should speeds be represented as a mean or as a median value? Should they include invited review content? Should they be shown in days or in weeks?).
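To show why those choices matter, here is a small sketch using made-up submission and publication dates (none of these figures come from a real journal). It simply contrasts the mean and the median of the same five turnaround times, reported in days and in weeks.

```python
# Illustrative sketch with made-up dates: how the mean/median and
# days/weeks choices change a reported "publication speed".
from datetime import date
from statistics import mean, median

# Hypothetical (submission, online publication) dates for five articles.
articles = [
    (date(2018, 1, 4),  date(2018, 3, 1)),
    (date(2018, 1, 15), date(2018, 2, 26)),
    (date(2018, 2, 2),  date(2018, 6, 20)),   # one slow outlier
    (date(2018, 2, 9),  date(2018, 3, 30)),
    (date(2018, 3, 1),  date(2018, 4, 18)),
]

days = [(published - submitted).days for submitted, published in articles]

print(f"Mean:   {mean(days):.0f} days ({mean(days) / 7:.1f} weeks)")
print(f"Median: {median(days):.0f} days ({median(days) / 7:.1f} weeks)")
# The single slow paper pulls the mean well above the median, which is why
# the mean-or-median choice has to be made and defended before publication
# speed can become a standardised metric.
```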

Less easily measured are indicators of societal impact, or research reproducibility, though both are points of very active discussion in research today, not only in the context of journals but across levels of aggregation reaching from individual researchers through institutes and research fields to countries and broad knowledge domains. I would like to invite you to offer your views on the most important aspects of your journal that you feel could or should be metricised, and how this could make your life as an editor easier – I look forward to hearing your thoughts (please comment below)!

Contributor


Andrew Plume, PhD

President, International Centre for the Study of Research (ICSR)

Elsevier
