2 Sep 2014

Publish or perish? But always with impact

There's more to judging impact than the size of your impact factor, argues Nicola Beesley.

We’re all told our research must have impact, and as a veterinary researcher early in my career this is something that already plays on my mind. But how is the impact of research measured? Are these measures fair?

When considering academic impact (rather than economic or societal), publications are often used as a measure of success, and the higher the journal’s impact factor the better, right? I for one am not so sure.

Bigger isn’t always better

When choosing where to submit papers for publication, I have found that impact factor plays some part in the decision, but picking a journal where a paper will be read by the appropriate audience is equally important. For example, most vets in clinical practice tend to read veterinary-specific journals, e.g. The Veterinary Record (2012 impact factor 1.803, compared with Nature’s 38.597). Even with the advent of open access journals, in my experience these are rarely read in veterinary practice.

Let’s consider how impact factors are calculated. In any given year, the impact factor of a journal is the average number of citations received that year per paper published in that journal during the previous two years. So an impact factor of 1 means that, on average, articles published in the previous two years have each been cited once. Or does it?
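This definition is just arithmetic, and a quick sketch (with entirely made-up citation and article counts) makes it concrete:

```python
# 2-year impact factor for a hypothetical journal in 2014:
# citations received in 2014 to items published in 2012-2013,
# divided by the number of citable items published in 2012-2013.

citations_in_2014 = {2012: 150, 2013: 210}  # hypothetical citation counts
citable_items = {2012: 90, 2013: 110}       # hypothetical article counts

impact_factor = sum(citations_in_2014.values()) / sum(citable_items.values())
print(impact_factor)  # 360 / 200 = 1.8
```

Note that this is an average over all papers in the journal, which is exactly where the trouble starts.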
In 2005 a self-evaluation by Nature [1] discovered that 25% of its papers contributed 89% of its impact factor for 2003, and in 2004 the majority of Nature’s papers received fewer than 20 citations. Clearly some papers will be cited numerous times, including publications of genomes, new experimental techniques and software programs. Do these outliers make a difference? Perhaps not; they may just cause a “blip” in the impact factor of a journal for a year or two. A further evaluation of a sample of 100 journals (including physical, chemical, biological, earth sciences and engineering journals) found an r² value of 0.94 between impact factor and the five-year median of citations (a measure that is robust to outliers) [2].
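To see why a robust measure matters, consider a hypothetical journal in which a handful of papers attract most of the citations. The mean citation count (which is what the impact factor is) sits far above the median, and the typical paper looks nothing like the average:

```python
import statistics

# Invented per-paper citation counts for one journal: most papers are
# barely cited, while two outliers dominate the total.
citations = [0, 1, 1, 2, 2, 3, 3, 5, 120, 450]

print(statistics.mean(citations))    # 58.7 -- what the impact factor reflects
print(statistics.median(citations))  # 2.5  -- what a typical paper receives
```

The Nature figures above are the real-world version of this toy example: a minority of papers carrying the bulk of the citations.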

It’s also worth remembering that retracted papers still contribute to impact factors [3]. Indeed, I can see why examples of poor or controversial research might be cited to emphasise the value of one’s own research.
The 2005 Nature self-evaluation further revealed a difference in citation rates between disciplines, with papers relating to cancer and molecular and cell biology cited far more often than those relating to physics, for example. This probably extends to sub-disciplines as well.
Finally, it’s interesting to note that when “independent” parties have tried to replicate impact factors, they have found discrepancies between their values and the published values [4].

Proceed with caution

The European Association of Science Editors released a statement on impact factors in 2007, recommending that they are used ‘only – and cautiously – for measuring and comparing the influence of entire journals…not for the assessment of researchers or research programmes’ [5].
Eugene Garfield, who first proposed the idea of an impact factor in 1955, discusses many of the issues I’ve talked about in an article from 2006 [6]. None of his arguments particularly convince me that the positives of impact factors outweigh the negatives (but you may feel differently!). In his conclusion he quotes Hoeffel, who states that:
‘impact factor is not a perfect tool to measure the quality of articles, but there is nothing better, and it has the advantage of already being in existence and is therefore a good technique for scientific evaluation’ [7].
This statement doesn’t really fit with my philosophy as a scientist – surely we should be striving towards alternatives. Alternative citation measures that can be used to rank scientists against each other include the h-index, the m-index and the g-index. These are just a few examples, and as a report by the Joint Committee on Quantitative Assessment of Research discusses, they too are by no means perfect [8].
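As a sketch of how one of these works: the h-index is simply the largest h such that a researcher has h papers each cited at least h times. The citation counts below are invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

Even this simple definition shows why such measures are imperfect: the h-index ignores how far the top papers exceed the threshold, which is one of the criticisms the g-index tries to address.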

Wider impact

There are, of course, other impacts research can have apart from publication. The University of Liverpool is one of only a handful of veterinary schools in the UK and the majority of teaching is given by staff involved in cutting edge research. This means that new graduates (105 this summer) go out into practice armed with up-to-date knowledge. There is a steep learning curve for new graduates, but many “older” vets appreciate them as a source of new knowledge.
Vet students also carry out research with us, or work in the various diagnostic laboratories and animal hospitals during their extramural studies (EMS). Veterinary surgeons in practice come back to the University for CPD and postgraduate qualifications. We also actively engage with the public to talk about our research, such as at food safety outreach events and art exhibitions.

Indeed, the new(ish) concept of “Altmetrics” uses other measures, not just citation counts, to gauge impact. These include numbers of views and downloads of articles, discussion in the news or on social media, and recommendations of the articles to others [9]. Perhaps these, or a variation of these, will be a better way of measuring all the different ways research can have impact.
This is by no means an exhaustive review of the papers and opinions that exist on impact. However, in true scientific spirit, I am optimistic that my research will be judged not only on the impact factor of the journal it is (hopefully!) published in, but also on the alternative means I have at my disposal to generate IMPACT.
Nicola Beesley graduated from the University of Liverpool with a degree in Veterinary Science and a Masters in Veterinary Parasitology. After a couple of years working as a veterinary surgeon in small animal practice, the lure of research got the better of her, and she returned to Liverpool, where she is now studying for a PhD. Her research concentrates on the population genetics of liver fluke (www.liv.ac.uk/liver-fluke), a parasite of increasing welfare and economic importance in sheep and cattle in the UK.
[1] Editorial (2005) Not-so-deep impact. Nature 435, 1003–1004

1 comment:

  1. Lots of citations do not necessarily equal quality work. According to Google, the retracted article by Wakefield et al. on the link between MMR vaccine and autism has 1,066 citations! Most of them of course say things like: “This contradicts an earlier publication by Wakefield et al.”. I suppose it certainly had impact, just not in a good way…


The Institute of Infection and Global Health.