Measuring a journal’s impact

Journal-level metrics

Metrics have become a fact of life in many, if not all, fields of research and scholarship. In an age of information abundance (often termed 'information overload'), shorthand signals for where in the ocean of published literature to focus our limited attention have become increasingly important.

Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics encapsulated in two “golden rules”. Those are: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. This second rule acknowledges that performance cannot be expressed by any single metric, as well as the fact that all metrics have specific strengths and weaknesses. Therefore, using multiple complementary metrics can help to provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment.

On this page we introduce some of the most popular citation-based metrics employed at the journal level. Where available, they are featured in the "Journal Insights" section on Elsevier journal homepages, which links through to an even richer set of indicators on the Journal Insights homepage.

CiteScore metrics

CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract and citation database of peer-reviewed literature.

CiteScore is calculated as the number of citations received over four years by a journal's documents (articles, reviews, conference papers, book chapters, and data papers), divided by the number of those same document types indexed in Scopus and published in those same four years. For more details, see the CiteScore FAQ.
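Schematically, taking 2023 as an illustrative metric year, the calculation is:

\[
\mathrm{CiteScore}_{2023} = \frac{\text{citations received in 2020--2023 by documents published in 2020--2023}}{\text{number of documents published in 2020--2023}}
\]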

CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in May of the following year, permitting a real-time view of how the metric builds as citations accrue. Once fixed, the other CiteScore metrics are also computed, contextualising this score with rankings and other indicators to allow comparison.

CiteScore metrics are:

  • Current: A monthly CiteScore Tracker keeps you up to date on the latest progression towards the next annual value, making the next CiteScore more predictable.

  • Comprehensive: Based on Scopus, the leading scientific citation database.

  • Clear: Values are transparent and can be reproduced down to the individual articles in Scopus.

The scores and underlying data for nearly 26,000 active journals, book series and conference proceedings are freely available at www.scopus.com/sources, via a widget (available on each source page on Scopus.com), or via the Scopus API.
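For programmatic access, a minimal sketch along the following lines may help. The endpoint path, the view parameter and the response field names reflect the Scopus Serial Title API as commonly documented, but they are assumptions to verify against the Elsevier Developer Portal, and the API key and ISSN are placeholders:

```python
import requests

# Placeholders: a real API key from the Elsevier Developer Portal and the
# ISSN of the journal of interest are needed.
API_KEY = "YOUR_API_KEY"
ISSN = "0000-0000"

# Serial Title API lookup by ISSN; the CITESCORE view is assumed to return
# the citeScoreYearInfoList block with current and tracker values.
resp = requests.get(
    f"https://api.elsevier.com/content/serial/title/issn/{ISSN}",
    params={"apiKey": API_KEY, "view": "CITESCORE"},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Parse defensively, since the exact response shape should be confirmed
# against the API documentation.
for entry in resp.json().get("serial-metadata-response", {}).get("entry", []):
    scores = entry.get("citeScoreYearInfoList", {})
    print(entry.get("dc:title"), scores.get("citeScoreCurrentMetric"))
```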

SCImago Journal Rank (SJR)

SCImago Journal Rank (SJR) is based on the concept of a transfer of prestige between journals via their citation links. Drawing on an approach similar to the Google PageRank algorithm, which assumes that important websites are linked to from other important websites, SJR weights each incoming citation to a journal by the SJR of the citing journal, with a citation from a high-SJR source counting for more than a citation from a low-SJR source. Like CiteScore, SJR accounts for journal size by averaging across recent publications and is calculated annually. SJR is also powered by Scopus data and is freely available alongside CiteScore at www.scopus.com/sources.
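To make the prestige-transfer idea concrete, here is a minimal sketch of a PageRank-style iteration over a journal-to-journal citation matrix. It illustrates the underlying principle only, not the actual SJR algorithm (which adds further refinements such as size normalisation), and the citation matrix and damping factor are invented for the example:

```python
# Simplified PageRank-style prestige transfer between journals.
# citations[i][j] = number of citations from journal i to journal j.

def prestige_scores(citations, damping=0.85, iterations=100):
    n = len(citations)
    scores = [1.0 / n] * n  # start with equal prestige everywhere
    for _ in range(iterations):
        new = [(1.0 - damping) / n] * n
        for i in range(n):
            out = sum(citations[i])
            if out == 0:
                continue  # journal i cites nothing, so passes on no prestige
            for j in range(n):
                # Journal i distributes its prestige across the journals it
                # cites, in proportion to how often it cites each of them.
                new[j] += damping * scores[i] * citations[i][j] / out
        scores = new
    return scores

# Journal 0 is cited most heavily by the other two, so it accumulates the
# highest prestige; a citation from it is then worth more than one from
# journals 1 or 2.
example = [
    [0, 2, 1],
    [5, 0, 1],
    [4, 1, 0],
]
print(prestige_scores(example))
```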

Source Normalized Impact per Paper (SNIP)

Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts for field-specific differences in citation practices. It does so by comparing each journal's citations per publication with the citation potential of its field, defined as the set of publications citing that journal. SNIP therefore measures contextual citation impact and enables direct comparison of journals in different subject fields, since the value of a single citation is greater for journals in fields where citations are less likely, and vice versa. SNIP is calculated annually from Scopus data and is freely available alongside CiteScore and SJR at www.scopus.com/sources.
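Schematically, the calculation divides the journal's citations per paper by its field's citation potential:

\[
\mathrm{SNIP} = \frac{\text{the journal's citations per publication}}{\text{citation potential of the journal's field}}
\]

The denominator is what normalises away field differences: a journal in a field where citing is sparse is not penalised relative to one in a field where long reference lists make citations abundant.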

Journal Impact Factor (JIF)

Journal Impact Factor (JIF) is calculated by Clarivate Analytics as the number of citations received in a given year to a journal's publications from the previous two years (linked to the journal, but not necessarily to specific publications), divided by the number of "citable" items the journal published in those two years. Owing to the way in which citations are counted in the numerator and the subjectivity of what constitutes a "citable item" in the denominator, JIF has received sustained criticism for many years for its lack of transparency and reproducibility and for the potential for manipulation of the metric. Available for only 11,785 journals (Science Citation Index Expanded plus Social Sciences Citation Index, as of December 2019), JIF is based on an extract of Clarivate's Web of Science database and includes citations that could not be linked to specific articles in the journal, so-called unlinked citations.
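In formula form, taking 2023 as an illustrative year:

\[
\mathrm{JIF}_{2023} = \frac{\text{citations received in 2023 to items published in 2021 and 2022}}{\text{number of citable items published in 2021 and 2022}}
\]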

h-index

Although originally conceived as an author-level metric, the h-index (and some of its numerous variants) has come to be applied to higher-order aggregations of research publications, including journals. A composite of productivity and citation impact, the h-index is defined as the greatest number h such that h publications have each received at least h lifetime citations. Being bounded at the upper limit only by total productivity, the h-index favours older and more productive authors and journals. As it can only ever rise, it is also insensitive to recent changes in performance. Finally, the effort needed to increase the h-index does not scale linearly: an author with an h-index of 2 need only publish a 3rd paper and have all three papers cited at least 3 times to reach an h-index of 3, whereas an author with an h-index of 44 must publish a 45th paper and have it and the other 44 each attain 45 citations before progressing to an h-index of 45. The h-index is therefore of limited usefulness for distinguishing between authors, since most have single-digit h-indices.
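As a minimal illustration of the definition, the following sketch computes an h-index from a list of per-publication citation counts (the function and sample data are invented for the example; the same logic applies to an author or a journal):

```python
def h_index(citation_counts):
    """Return the greatest h such that h publications have >= h citations each."""
    # Walk down the counts from most- to least-cited while each paper still
    # has at least as many citations as its rank in the ordering.
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
print(h_index([3, 3, 2]))         # 2: the third paper has fewer than 3 citations
```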