Sir

Johannes Stegmann [1] presents an apparently simple method to calculate (not "evaluate") journal impact factors (JIFs) for journals that do not receive an official JIF through the Journal Citation Reports (JCR) of the Institute for Scientific Information (ISI). Unfortunately, there are some technical limitations to its applicability.
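
For reference, the standard two-year definition underlying such calculations can be written as follows (the symbols C and A are introduced here purely for illustration):

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}}{A_{Y-1} + A_{Y-2}},
\]

where \(C_{Y}\) is the number of citations received in year \(Y\) by items the journal published in the two preceding years, and \(A_{Y-1}\) and \(A_{Y-2}\) are the numbers of citable items published in those years. Any approximate JIF derived from other databases has to estimate both of these quantities.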

ISI's citation indexes include references to all kinds of bibliographical material (including books and low-profile journals). An essential condition, however, is that these items are cited by a controlled set of source journals. This implies that self-citations of non-source journals will not be included in the citation indexes. Although there is a lot of variation between individual journals, a substantial part of the citations a journal receives generally consists of self-citations; in many cases a journal is its own single biggest source of citations.

Whereas for top-ranking journals receiving tens of thousands of citations the percentage of self-citations generally remains low, for a large part of the other journals a self-citation rate between 10% and 25% appears to be typical, especially for the more recent years on which JIF calculations are based. The total number of citations available for constructing JIFs would therefore be underestimated to some degree. For example, deleting self-citations from the totals of the six source-journal examples given by Stegmann would decrease their impact factors for 1996 by between a minimal 1.7% (for Molecular Medicine which, having started in 1994, can be considered a special case) and a more substantial 12.9% (for International Journal of Developmental Biology). More striking examples are easily found (for example, 21.3% for Molecular and Biochemical Parasitology or 30% for International Journal of Leprosy).
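
The size of this effect can be sketched with the notation introduced above (S denoting the self-citations contained in the citation count C; again a purely illustrative symbol):

\[
\mathrm{JIF}'_{Y} = \frac{C_{Y} - S_{Y}}{A_{Y-1} + A_{Y-2}} = \Bigl(1 - \frac{S_{Y}}{C_{Y}}\Bigr)\,\mathrm{JIF}_{Y},
\]

so the relative decrease simply equals the self-citation share \(S_{Y}/C_{Y}\): a journal whose counted citations contain 15% self-citations would see its calculated JIF drop by 15%.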

Calculating the number of source items per journal may also be more difficult than Stegmann suggests. If the exact number of source articles of a specific journal can be found in the SciSearch database, then that journal is probably already an ISI source journal with an official JIF. Databases such as Medline do not necessarily cover all of their source journals completely (especially if their contents are multidisciplinary, as is the case with Nature, for example). And, as Stegmann acknowledges, the criteria for counting source articles may vary between databases. Besides, many journals are not represented at all in the major databases, so physical access to these journals would often be a prerequisite.

Assuming that JIFs are appropriate value indicators for scientific publications [2], one may argue that an approximate JIF is preferable to nothing, even if both numerator and denominator are inaccurate. Apart from this issue, one should bear in mind that the somewhat inaptly named JIF indicates the average value attributed to individual research articles on the basis of the journal in which they are published, rather than the total impact of that journal. It would seem obvious that, when two journals have an identical JIF but one annually publishes 2,000 papers and the other a mere 20, their overall impact on the scientific community cannot seriously be considered equal.
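
A back-of-the-envelope comparison makes the point; the JIF value of 2.0 below is assumed purely for illustration:

\[
2{,}000 \times 2.0 \approx 4{,}000 \quad\text{versus}\quad 20 \times 2.0 \approx 40
\]

citations attracted by one annual volume over the JIF census window: a hundredfold difference in overall impact despite identical impact factors.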

On the other hand, an important drawback of the unreserved attention currently given to citation counts lies in the enormous gap between popular research areas (with many thousands of authors, papers and citations) and less popular ones (with far fewer of each). The danger seems real that, to improve their status (and funding), scientists would be tempted to neglect or abandon altogether the less popular research topics in favour of more rewarding ones. That might lead to an impoverishment of scientific knowledge in many domains.