Thursday, July 18, 2013

The tide is turning against impact factors

Bruce Alberts, Editor-in-Chief of Science, has a powerful editorial, "Impact Factor Distortions". Here are the beginning and the end.
This Editorial coincides with the release of the San Francisco Declaration on Research Assessment (DORA), the outcome of a gathering of concerned scientists at the December 2012 meeting of the American Society for Cell Biology. To correct distortions in the evaluation of scientific research, DORA aims to stop the use of the "journal impact factor" in judging an individual scientist's work. The Declaration states that the impact factor must not be used as "a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions." DORA also provides a list of specific actions, targeted at improving the way scientific publications are assessed, to be taken by funding agencies, institutions, publishers, researchers, and the organizations that supply metrics. These recommendations have thus far been endorsed by more than 150 leading scientists and 75 scientific organizations, including the American Association for the Advancement of Science (the publisher of Science). ...
As a bottom line, the leaders of the scientific enterprise must accept full responsibility for thoughtfully analyzing the scientific contributions of other researchers. To do so in a meaningful way requires the actual reading of a small selected set of each researcher's publications, a task that must not be passed by default to journal editors.

2 comments:

  1. I don't think that impact factors are the problem.

    Impact factors seem to do a good job of encoding the way I would rank journals in fields where I'd consider myself an expert, e.g.

    Nature ~ Science > Nature Phys ~ Nature Chem > PRL ~ JACS > PRB ~ JPC > CPL ~ JPCM > Physica ~ Synth Met

    (They also have the revealing feature on CVs that anyone quoting them to 5 s.f. is outing themselves as a poorly trained or thoughtless scientist.)

    Impact factors don't contain new information for journals you know. But often one is asked to evaluate work published in journals one doesn't read. E.g. we can be pretty confident that Stem Cells (IF=7.7) is a better place to publish than Tissue Eng Regen Med (IF=0.3). (Some care is needed, though, as there are exceptions, e.g. Acta Cryst C is an important journal of record with a low IF.)
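    (For context on what these numbers mean, the two-year impact factor for a given year is, roughly,

    \[
    \mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2},
    \]

    so an IF of 7.7 just says that a recent article there picks up about 7 or 8 citations in its first couple of years. With numerator and denominator counts this small and this noisy from year to year, quoting the ratio beyond about two significant figures conveys nothing.)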

    I think the problem is the exercise of ranking the quality of science PRIOR to publication. Two more-or-less randomly selected referees with personal biases and interests are not a good way to do this - indeed, peer review isn't really designed to do this (I think PRL abuses this a lot, whereas Nature editors do a much better job of deciding the editorial line of the journal themselves). So the problem is ranking work on where it is published, not on the content of or reaction to the work.

    Really the bigger issue is that journals are a 19th-century solution to the problem of disseminating science. In particular, they are based on a model that assumes journal pages are a scarce resource (true if you have to print them and ship them around the planet, but not in the internet age).

    I would argue that the arXiv is much closer to the best solution. But it is not clear how to add peer review to that model; I think solving that is the central problem in scientific publishing. Open access journals are still a halfway house IMHO. We also need a cultural change to stop ranking scientists by who has the most papers in Nature.

  2. The problem is that society has been oversupplying research labor for multiple decades. I've become much more aware of it since coming to Australia, and particularly recently. The private university system means that there are only ever a few positions, far fewer than the number of applicants. This means that HR committees (and funding committees) must make arbitrary decisions between similarly qualified candidates. This has very naturally fueled the rise of arbitrary metrics. The problem that periodically arises when Fellows return to salaried teaching only makes it more competitive here, on top of the already fierce competition due to oversupply.

    The problem is systemic, and requires a systemic solution. Getting rid of journal impact factors will just leave space for another arbitrary metric to rise, because there is a current need for arbitrary metrics!

    The problem in Australia, BTW, could be helped by a simple but unlikely reform: simply require that anyone taking up a research fellowship permanently resign their continuing post, if they hold one! This way, pressure will ease on junior- and mid-career researchers, whose current oversupply is driving this unfortunate fashion. It will also improve Australian science education, by forcing anyone with a continuing position (presumably the best, in the sense that juries are presumably just) to teach.

    In the meantime, I will probably just continue to try and max out the arbitrary metric, because that's what will help me get a job in an oversupplied market.
