Extracted from the Editorial published in Proceedings of Singapore Healthcare, Vol. 24, No. 1, 2015.

By Assoc Prof Lo Yew Long, Head, Neurology (SGH Campus), NNI and Chief Editor, Proceedings of Singapore Healthcare.
Co-authored by Dr Ng Heok Hee, SingHealth Academy.

Every year, more than US$200 billion is pumped into biomedical research worldwide. The primary and most popular way of communicating these research outcomes to the public is via publication in peer-reviewed journals.

Traditionally, the academic value of an article is largely determined by the number of citations it receives in other articles. Such citations may be attributed to the article itself, to its author, or to the journal in which it was originally published. This sounds simple enough, but over the years the straightforward act of counting citations has evolved into a complex exercise requiring computational or mathematical algorithms.

The advancement of technology has certainly facilitated the measurement of citations. The publication of the landmark Journal Citation Reports by Thomson Reuters introduced several indices for calculating journal citations, the most popular being the Impact Factor, which is the ratio of citations received to the total number of articles published by a journal.
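
As a minimal sketch, the commonly used two-year formulation divides the citations a journal receives in a given year to its articles from the preceding two years by the number of citable items it published in those two years; the figures below are invented for illustration:

    def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        # Two-year Impact Factor: citations received this year to items published
        # in the previous two years, divided by the citable items from those years.
        return citations_to_prev_two_years / citable_items_prev_two_years

    # e.g. 1,200 citations in 2014 to articles published in 2012-2013,
    # with 400 citable items published over 2012-2013, gives an Impact Factor of 3.0
    print(impact_factor(1200, 400))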

Thomson Reuters also introduced the Eigenfactor score, which rates the importance of a journal to the scientific community based on the number of incoming citations, weighted according to the rank of the citing journals. The Journal Citation Reports is currently the most widely used database among similar service providers.
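
The idea of rank-weighted citation counting can be illustrated with a deliberately simplified sketch; the weights and counts below are invented, and the actual Eigenfactor computation iterates over the entire journal citation network rather than using fixed weights:

    # Toy illustration: a citation from a highly ranked journal counts for more
    # than one from a lesser-ranked journal. Weights and counts are invented.
    journal_weight = {"Journal A": 0.9, "Journal B": 0.4, "Journal C": 0.1}
    incoming_citations = {"Journal A": 30, "Journal B": 10, "Journal C": 5}

    weighted_score = sum(journal_weight[j] * n for j, n in incoming_citations.items())
    print(weighted_score)  # 31.5, dominated by citations from the highly ranked Journal A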

However, as with any quantitative methodology that produces a ranking, there will inevitably be attempts to manipulate the data for a more favourable result. These include publishing articles such as review papers that are more likely to be frequently cited, limiting the total number of articles per issue, or mandating self-citation of papers from the same journal. In addition, it is difficult to compare impact factors across subspecialties, as each has its own frequency of citations unique to its field.

"Journal publication is only one part of the equation.   Other important research outcomes that may not be published include patents, technological innovations and policy changes that affect healthcare significantly."  

Apart from indices that focus on journals, there are others that throw the spotlight on authors instead. To track both the quality and quantity of an author’s (or group of authors’) research, Jorge Hirsch introduced the h-index. Essentially, an author has an h-index of h if h of his or her papers have each been cited at least h times.
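
A minimal sketch of the computation (the citation counts below are invented for illustration):

    def h_index(citation_counts):
        # Largest h such that at least h papers have been cited h or more times each.
        h = 0
        for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # e.g. five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4
    print(h_index([10, 8, 5, 4, 3]))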

Nevertheless, as with journal indices, the h-index has limitations in measuring the quality of an author’s research. These include an overemphasis on the age of the journal and the length of the author’s career, a failure to account for the number, order and relative contribution of each author (thus encouraging self-citation), and poor comparability across fields. In reality, one can remain a minor contributor and still garner a relatively high h-index.

Since the original study by Hirsch, other author indices have been introduced and are still being developed.

The question remains: how do we accurately measure the impact of a publication? Behind this issue lies the challenge of ascertaining the true meaning of impact. Journal publication is only one part of the equation. Other important research outcomes that may not be published include patents, technological innovations and policy changes that affect healthcare significantly.

While administrators and grant-disbursing agencies prefer the simplicity of indices for assessing the prospects and academic value of a paper or author, the truth is that a single number will never comprehensively reflect these qualities. Researchers themselves have spoken up, calling for a de-emphasis on indices and a renewed focus on scientific content and a broader range of considerations.

In conclusion, the search for a single ‘ideal’ publication index is more of an academic exercise than a true reflection of the value of research outcomes. Without doubt, good sense should be exercised in the application of any such index.