Journal of Informetrics

Volume 6, Issue 1, January 2012, Pages 36-43

Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods

https://doi.org/10.1016/j.joi.2011.08.002

Abstract

This paper presents an empirical analysis of two different methodologies for calculating national citation indicators: whole counts and fractionalised counts. The aim of our study is to investigate the effect on relative citation indicators when citations to documents are fractionalised among the authoring countries. We have performed two analyses: a time series analysis of one country and a cross-sectional analysis of 23 countries. The results show that all countries’ relative citation indicators are lower when fractionalised counting is used. Further, the difference between whole and fractionalised counts is generally greatest for the countries with the highest proportion of internationally co-authored articles. In our view there are strong arguments in favour of using fractionalised counts to calculate relative citation indexes at the national level, rather than using whole counts, which is the most common practice today.

Highlights

► We present an analysis of whole counts and fractionalised counts in calculating national citation indicators.
► The citation index of all countries is lower when using fractionalised counting than when using whole counts.
► The difference is generally greatest for the countries with the highest proportion of internationally co-authored articles.

Introduction

Citation indicators play a prominent role in the assessment of the competitiveness of national research systems. In this paper we will examine the methodological basis for such indicators, specifically the overall citation rate of a country (average citation rate for all publications), which is usually interpreted as an indicator of a country's general scientific performance. This indicator is considered very important by many policymakers and has a great impact on public perception of the comparative position of a nation's research system. In this paper we will demonstrate how two different methods of calculating the relative citation index of a country – namely whole and fractionalised counting methods – affect the results, and we will look at the conclusions that may be drawn concerning national scientific impact. Although this study focuses on citation indicators at the national level, the results are equally relevant for analyses at other levels, such as universities, departments or research groups.

Over the years there have been many discussions concerning the methodological basis for national bibliometric indicators (and indicators more generally). A prime example is the dispute about the “decline of British science” that appeared at the end of the 1980s. In several papers Ben Martin and his colleagues at the Science Policy Research Unit (SPRU) showed that the UK's share of world publications decreased during the 1970s and the first half of the 1980s (Irvine, Martin, Peacock, & Turner, 1985). However, this conclusion was challenged by others who argued that by using alternative methodologies a rather different picture emerged (see e.g. Leydesdorff, 1988). The adequacy of different measurement principles was heavily debated, for example with regard to the type of publications to be included and how to count papers involving international co-authorship. The debate reveals that the bibliometric community at the time was far from reaching consensus on the best way of measuring national scientific performance. In 1994 Martin concluded that there was no simple way of unambiguously establishing the relative position of a country's science system and whether that position is improving or deteriorating: “Instead we are confronted with a slightly bewildering mass of possible indicators, all of them imperfect to a greater or lesser extent” (Martin, 1994).

While the debate about the “decline of British science” focused on the measurement of publication output by whole and fractional counting of author addresses, here we turn to citation indicators. National citation indicators are based on the set of publications that have at least one author address from a country. In the most basic version, a national indicator is simply calculated as the total number of citations of a country's publications divided by the total number of publications. A relative citation index is constructed by dividing this average by the corresponding worldwide average. This basic version of the indicator can be found in standard bibliometric products produced by Thomson Reuters such as the National Science Indicators (NSI). However, there are several problems related to calculating national citation indicators this way. It is a well-known fact that there are large differences in average citation rates among the various scientific disciplines and subfields (e.g. Hurt, 1987). Garfield (1979) used the term “citation potential” to describe this difference, referring to the fact that the average number of references per paper is significantly lower in, for example, mathematics than in biochemistry. Moreover, there are significant differences in national scientific specialisation profiles (Glänzel, 2000). This means that countries with high relative publication activity in highly cited fields will have a comparative advantage.
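Written out (the notation below is ours and is added only to make the calculation explicit): if a country has P publications with citation counts c_1, …, c_P, the basic indicator and its relative version are

\[
\mathrm{CPP} = \frac{1}{P}\sum_{i=1}^{P} c_i, \qquad
\mathrm{RCI} = \frac{\mathrm{CPP}}{\mathrm{CPP}_{\mathrm{world}}},
\]

where CPP_world is the same average computed over all publications in the database. An RCI above 1 signals citation impact above the world average, but without any correction for the differences in field citation rates described above.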

Over the years various normalisation procedures have been developed for the construction of citation indicators; e.g. involving reference standards based on journal and subfield averages (Schubert and Braun, 1986, Schubert and Braun, 1996, Schubert et al., 1988, Vinkler, 1986). The methodological principles underlying citation and publication indicators have been the topic of extensive discussion. Recently we have seen a revitalisation of the debate concerning methods for measuring scientific performance bibliometrically. One issue of debate is the adequacy of various methods for calculating publication indicators (Gauffriau & Larsen, 2005). Another concerns the methodological basis for normalisations of citation indicators. Traditional indicators such as the “crown indicator” (citations per publication divided by the field-based world average, CPP/FCSm) (Moed et al., 1995, van Raan, 2000) correct for differences among fields by using existing classification schemes. An issue of debate is whether normalisation should be calculated at the aggregated level as a ratio of sums, as in the crown indicator described above, or as a sum of ratios at the publication level, as suggested by Lundberg (2007) and more recently by Opthof and Leydesdorff (2010). It is interesting to note that the Centre for Science and Technology Studies (CWTS) at Leiden University has produced a new crown indicator based on normalisation at the publication level (Waltman, van Eck, van Leeuwen, Visser, & van Raan, 2010).
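To make the distinction concrete (again in our own notation; this is a schematic rendering of the two schemes, not the exact formulas of the cited papers), let c_i denote the citations received by publication i and e_i the expected citation rate of its field, publication year and document type. The two normalisations are then

\[
\text{ratio of sums:}\quad \frac{\sum_{i=1}^{n} c_i}{\sum_{i=1}^{n} e_i},
\qquad
\text{sum of ratios:}\quad \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i}.
\]

The two coincide when all e_i are equal (a single reference standard) but diverge for multidisciplinary units, where the ratio of sums gives more weight to publications in highly cited fields.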

Recently, Zitt and Small (2008), Zitt (2010), and Leydesdorff and Opthof (2010) have proposed normalisation procedures based on the characteristics of the citing publications. Their aim is to normalise for the variability of citing practices between fields by using a classification-free approach. These are referred to as citing-side or source normalisations, in contrast to the traditional cited-side normalisation.
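In its simplest form (a simplified sketch; the cited papers develop more elaborate weightings), citing-side normalisation weights each incoming citation by the length of the reference list of the citing paper: a citation from a publication with R_j references counts as 1/R_j rather than 1, so the fractional citation score of publication i becomes

\[
\tilde{c}_i = \sum_{j \rightarrow i} \frac{1}{R_j},
\]

where the sum runs over the publications j that cite i. Citations from reference-dense fields are thereby discounted automatically, without recourse to a field classification scheme.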

In bibliometric analyses, credit for publications is ascribed to countries (or other units such as institutions or departments). There are different ways of doing this (see e.g. Gauffriau & Larsen, 2005) and the principles and limitations of the various methods are well known. Most producers of bibliometric analyses apply whole counting of publications in the calculation of citation indicators, which means that each country in internationally co-authored publications receives full credit for its participation. In contrast, fractionalised publication counting, in which a country is credited a fraction of a publication equal to the fraction of the author addresses from that country, is rarely applied. With the exception of a few studies, the question concerning the use of whole and fractionalised counts in the calculation of citation indicators has received little attention. van Hooydonk (1997) showed that the citation impact of a researcher could be dramatically affected by using fractional instead of whole counting procedures. Recently, Leydesdorff and Bornmann (2011) applied fractional counting of citations to journals as a means of normalising for differences in citation averages among disciplines. This was done by using the citing-side fractionalisation method described above.

This article adds to the discussion by analysing the difference between whole and fractionalised counting of publications in the construction of relative citation indexes at the country level. We are interested in examining how the relative citation index of a country is influenced by the counting methods used and to what extent this re-ranks countries compared to rankings based on whole counts. Given that the share of publications involving international co-authorship is large and growing, this is an important and timely issue to address. In the fractionalisation we have used the principle that each of the addresses of a paper is weighted as 1/N of a publication, where N is the total number of addresses (cf. Section 2).

It should be noted that our approach differs from the method applied by Leydesdorff and Bornmann (2011) in comparing journals. Leydesdorff and Bornmann (2011) apply a citing-side normalisation in which citations are fractionalised according to the number of references in the citing publications; their purpose is field normalisation. We have applied a traditional cited-side approach in order to normalise for field variations in citing practices. We also apply fractional citation counts, but unlike Leydesdorff and Bornmann (2011), our purpose is to distribute authorship credits among countries. Hence, there is a difference between fractional citation counting (each citation carries a different weight to the cited publication) and fractional attribution of citations to author addresses (the citations to a publication are split between all author addresses). Our study focuses on the latter.
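As a concrete illustration of fractional attribution of publications and citations to countries (a minimal sketch in Python on invented toy records; the actual analysis is run on the Thomson Reuters database described in Section 2), each address carries a weight of 1/N, and both the publication and its citations are distributed over the authoring countries with these weights:

from collections import defaultdict

# Toy publication records: author-country addresses and citation counts.
# (Invented example data, not taken from the study.)
publications = [
    {"addresses": ["NO", "NO", "US"], "citations": 12},  # 3 addresses -> each weighs 1/3
    {"addresses": ["NO"],             "citations": 4},   # single-address paper, full weight
    {"addresses": ["SE", "NO"],       "citations": 0},
]

def fractional_counts(pubs):
    """Credit each country 1/N of the publication and 1/N of its citations per address."""
    pub_credit = defaultdict(float)  # fractionalised publication counts
    cit_credit = defaultdict(float)  # fractionalised citation counts
    for pub in pubs:
        n = len(pub["addresses"])
        for country in pub["addresses"]:
            pub_credit[country] += 1.0 / n
            cit_credit[country] += pub["citations"] / n
    return pub_credit, cit_credit

pub_credit, cit_credit = fractional_counts(publications)
for country in sorted(pub_credit):
    print(country, round(pub_credit[country], 2), round(cit_credit[country], 2),
          round(cit_credit[country] / pub_credit[country], 2))

For these toy records Norway receives 2/3 + 1 + 1/2 ≈ 2.17 fractional publications instead of the 3 it would receive under whole counting; a relative index is then obtained by dividing the resulting (field-normalised) citation rate by the corresponding world reference value.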

Section snippets

Data and methods

We have used bibliometric data from the Thomson Reuters database at the Swedish Research Council (covering the Science Citation Index Expanded, Social Sciences Citation Index, and Arts & Humanities Citation Index), which contains publications published between 1982 and 2008.

Results

First, we calculated national citation indexes for a single country: Norway. The results are provided in Fig. 1, which shows annual citation indexes for the period 1981–2008. In addition to field-normalised citation indexes based on whole and fractionalised counting methods, the figure also shows the results of calculations using the NSI standard indicator (also based on whole counting). As described above, the latter indicator is simply calculated as the total number of citations of a

Discussion

In this paper we have analysed the results of using different counting methods in the calculation of national citation indicators. The citation index of the analysed countries decreases when fractionalised counts, rather than whole counts, are used, and the decrease varies from 0.41 to 0.01 points. Although nations with high scientific impact remain highly cited, and vice versa, ranking is altered and some countries are more affected by calculation using fractionalised counts than other

References (37)

  • W. Glänzel. Science in Scandinavia: A bibliometric approach. Scientometrics (2000).
  • M.A. Harsanyi. Multiple authors, multiple problems – Bibliometrics and the study of scholarly collaboration: A literature review. Library and Information Science Research (1993).
  • H. Herbertz. Does it pay to cooperate? A bibliometric case study in molecular biology. Scientometrics (1995).
  • G. van Hooydonk. Fractional counting of multiauthored publications: Consequences for the impact of authors. Journal of the American Society for Information Science (1997).
  • J. Irvine et al. Charting the decline in British science. Nature (1985).
  • U. Kronman et al. The bibliometric database at the Swedish Research Council – Contents, methods and indicators (2010).
  • L. Leydesdorff. Problems with the “measurement” of national scientific performance. Science and Public Policy (1988).
  • L. Leydesdorff et al. How fractional counting of citations affects the impact factor: Normalization in terms of differences in citation potentials among fields of science. Journal of the American Society for Information Science and Technology (2011).

The article is based on results presented at the 11th International Conference on Science and Technology Indicators, Leiden University, The Netherlands, 8–11 September 2010.
