Sunday, May 19, 2013

The end of the impact factor

Recently, the American Society for Cell Biology (ASCB) and the journal Science both publicly proclaimed that the journal impact factor (IF) is bad for science. The ASCB statement argues that IFs limit meaningful assessment of scientific impact, both for published articles and especially for other scientific products. The Science statement goes further, claiming that assessments based on IFs lead researchers to alter their research trajectories and try to game the system rather than focus on the important questions that need answering.


Impact factors: tale of the tail
The impact factor is calculated by Thomson Reuters and is simply the number of citations received in a given year by the articles a journal published in the previous two years, divided by the number of articles published over that two-year span. It is thus a snapshot of a particular type of 'impact'. There are technical problems with this metric: for example, citations accumulate at different rates across subdisciplines. More importantly, as all publishers and editors know, IFs generally rise and fall with the extreme tail of the distribution of citation counts. For a smaller journal, it takes just one heavily cited paper to make the IF jump. For example, if a journal published 300 articles over the two years and a single paper accumulates 300 citations, its IF can jump by roughly 1, which can alter the optics. In ecology and evolution, journals with IFs greater than 5 are usually viewed as top journals.
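As a concrete illustration, here is a minimal sketch of that two-year calculation. The numbers are hypothetical, chosen only to show how a single heavily cited paper can move a smaller journal's IF by roughly a full point.

```python
# Minimal sketch of the two-year impact factor described above.
# All counts are hypothetical and chosen only for illustration.

def impact_factor(citations, articles):
    """Citations received this year to articles published in the previous
    two years, divided by the number of articles published in those years."""
    return citations / articles

articles = 300      # articles published over the two-year window
citations = 1200    # citations those articles received this year

print(round(impact_factor(citations, articles), 2))            # 4.0
# One extra paper that alone accumulates 300 citations:
print(round(impact_factor(citations + 300, articles + 1), 2))  # 4.98, a jump of roughly 1
```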

Regardless of these issues, the main concern expressed by the ASCB and Science is that a journal-level metric should not be used to assess an individual researcher's impact. Should a researcher publishing in a high-IF journal be rewarded (promotion, raise, grant funding, etc.) if their paper is never cited? What about their colleague who publishes in a lower-IF journal but accrues a high number of citations?

Given that rewards are, in part, based on the journals we publish in, researchers try to game the system by writing articles for certain journals, and journals try to attract papers that will accrue citations quickly. Journals with rising IFs usually see large increases in the number of submissions, as researchers are desperate to have high-IF papers on their CVs. Some researchers send papers to journals in descending order of IF, without regard for the actual fit of the paper to the journal. The result is an overloaded peer-review system.

Rise of the altmetric
The alternative metrics (altmetrics) movement aims to shift journal and article assessment from one based on journal citation metrics to a composite of measures that includes page views, downloads, citations, discussions on social media and blogs, and mainstream media stories. Altmetrics attempt to capture a more holistic picture of an article's impact. Below is a screenshot from a PLoS ONE paper, showing an example of altmetrics:

When such information is made available, the impact of an individual article is no longer the journal IF, but rather how the article actually performs. Altmetrics are particularly important for subdisciplines where much of the impact occurs beyond the ivory towers of academia. For example, the journal I am an Editor for, the Journal of Applied Ecology, tries to reach out to practitioners, managers and policy makers. When an article is taken up by these groups, they do not return citations, but they do share and discuss it. Accounting for this type of impact has been an important issue for us. In fact, even though our IF may be equivalent to that of other, non-applied journals, our articles are viewed and downloaded at a much higher rate.

The future
Soon, how articles and journals are assessed for impact will be very different. Organizations such as Altmetric have developed new scoring systems that take these different types of impact into account. Further, publishers have been experimenting with altmetrics, and future online articles will be intimately linked to how they are being used (e.g., seeing tweets while viewing the article).
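To make the idea of a composite score concrete, here is a small sketch of how several article-level measures might be folded into a single number. This is not Altmetric's actual algorithm (which is proprietary); the metric names and weights are invented purely for illustration.

```python
# Illustrative sketch only: a simple weighted sum over article-level metrics.
# The weights below are hypothetical and exist only to show the idea of a
# composite article-level score; real systems weight sources differently.

HYPOTHETICAL_WEIGHTS = {
    "citations": 5.0,
    "downloads": 0.05,
    "page_views": 0.01,
    "tweets": 0.25,
    "blog_mentions": 2.0,
    "news_stories": 4.0,
}

def composite_score(article_metrics):
    """Combine several usage measures for one article into a single number."""
    return sum(HYPOTHETICAL_WEIGHTS.get(metric, 0.0) * count
               for metric, count in article_metrics.items())

example_article = {"citations": 12, "downloads": 800, "tweets": 30, "blog_mentions": 2}
print(composite_score(example_article))  # 60 + 40 + 7.5 + 4 = 111.5
```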

Once the culture shifts to one that bases assessment on individual article performance, where you publish should become less important, and journals will be free to focus on an identity based on content rather than citations. National systems that currently hire, fund and promote faculty based on the journals they publish in need to rethink their assessment schemes carefully.

May 21st, 2013 Addendum:

You can sign the declaration against Impact Factors by clicking on the logo below:


5 comments:

Flo said...

Hi Marc,

Have you heard of Impact Story? http://impactstory.org/
They try to develop article-level stats, and they include broader impact, e.g. on social networks.

[disclaimer: I personally know one of the founders].

Devin said...

It's really encouraging and interesting that J. App. Eco. places so much value on the "non-academic" use of their articles. It would be ideal if the importance of those articles could somehow be incorporated into grant apps, tenure packets, etc. In my field (systematics), dichotomous keys, systematic/taxonomic revisions and (to a lesser degree) phylogenies are often published in journals with low IFs and are rarely cited. But these have enormous importance; they are essential for nearly any organismal biologist and are used on a daily basis by managers, etc. We need a way to communicate the "impact" of these types of work to administrators and granting agencies.

Marc Cadotte said...

No, I hadn't, but I'm looking into it now!

Marc Cadotte said...

Hi Devin, I think that there will be a culture shift to one where those other impacts have a place. NSF is kind of there already with its strong emphasis on broader impacts.

Tim Vines said...

I agree with many things in this post, but I feel I have to contest the idea that the peer-review system is overloaded. The tragedy of the commons paper makes a number of verbal arguments about why the review system might be overloaded, but it doesn't contain any empirical evidence that this is true.

Since a peer-review system in danger of imminent collapse would be a major problem, I tried to test this with data from Molecular Ecology (where I work). The paper itself is here: http://bit.ly/hwKoJe (apologies for the firewall). The main result is that submissions to Mol Ecol doubled between 2001 and 2010, but the number of unique reviewers we used over that period increased to keep pace (http://www.molecularecologist.com/naturefigs/). If the system were getting overloaded, the line would asymptote as we were forced to go back to the same reviewers again and again. Of course, this isn't definitive proof that the system isn't overloaded elsewhere, but I still think the crisis rhetoric should be tempered: one of the few relevant studies does not match the current narrative.

For what it's worth, I've also written a blog post (http://bit.ly/wE1HAn) discussing why senior academics might be more inclined to feel that the system is overburdened, as they get an order of magnitude more review requests than less prominent researchers.

Apologies for the self citations...

Tim Vines