Publishing research articles is the bedrock of science.
Knowledge advances through testing hypotheses, and the only way such advances
are communicated to the broader community of scientists is by writing up the
results in a report and sending it to a peer-reviewed journal. The assumption
is that papers passing through this review filter report robust and solid
science.
Of course, this is not always the case. Many papers include
questionable methodology or data, or are poorly analyzed. And in a small
minority, the authors actually fabricate or misrepresent data. As Retraction Watch often reminds us,
we need to be vigilant against bad science creeping into the published
literature.
Why should we care about bad science? Erroneous results or incorrect
conclusions in scientific papers can lead other researchers astray and result
in bad policy. Take, for example, the much-flogged case of Andrew Wakefield, a
since-discredited researcher who published a paper linking autism to vaccines. The
paper was so flawed that it did not stand up to basic scrutiny and was rightly
retracted (though how it passed through peer review in the first place remains an
astounding mystery). Even so, this incredibly bad science invigorated an anti-vaccine movement in Europe and North America that is responsible for the re-emergence of childhood diseases that should have been eradicated. This bad science is
responsible for hundreds of deaths.
[Image credit: Huffington Post]
Of course, most bad science will not result in death. But bad
articles waste time and money when researchers go down blind alleys or spend
effort rebutting flawed work. The important thing is that researchers have
avenues available to question and criticize published work. Nowadays this usually
means that papers are criticized through two channels. The first is blogs
(and other social media). Researchers can communicate their concerns and
opinions about a paper to the audience that reads their blog or follows them on
social media. A classic example is the blog post by Rosie Redfield criticizing a paper published in Science that claimed to have discovered bacteria that used arsenic as a food source.
However, there are a few problems with this avenue. The first is
that it is not clear the correct audience is being reached. For example,
if you normally blog about your cat, and your blog followers are fellow cat
lovers, then a seemingly random post about a bad paper will likely fall on deaf
ears. Second, the authors of the original paper may not see your critique and
so do not have a fair opportunity to rebut your claims. Finally, your criticism is
not peer-reviewed and so flaws or misunderstandings in your writing are less
likely to be caught.
Unlike the relatively new blog medium, the second option is
as old as scientific publishing itself: writing a commentary that is published in the
same journal (often with an opportunity for the authors of the original
article to respond). These commentaries are usually reviewed and target the
correct audience, namely the scientific community that reads the journal.
However, some journals do not have a commentary section and so this
avenue is not available to researchers.
Caroline and I experienced this recently when we enquired
about the possibility of writing a commentary on a published article that
contained flawed analyses. The Editor responded that they do not publish
commentaries on their papers! I am an Editor-in-Chief, and I routinely deal with
letters sent to me that criticize papers we publish. This is an important part of
the scientific process. We investigate all claims of error or wrongdoing, and if
the concerns appear valid but do not meet the threshold for a retraction, we
invite the correspondents to write a commentary (and invite the original authors to write a
response). This option is so critical to science that its importance cannot be overstated.
Bad science needs to be criticized, and the broader community of scientists
should feel that they have opportunities to check and critique publications.
I can see many reasons why a journal
might not bother with commentaries: to save page space for articles, because
they are seen as petty squabbles, and so on. But I would argue that scientific journals have
important responsibilities to the research community, and one of them must be to
hold the papers they publish accountable and to allow sound, reasoned
criticism of potentially flawed work.
Looking over the author guidelines of the 40 main ecology and evolution journals (apologies if I missed statements; author guidelines can be very verbose), I found that only 24 had a clear statement about publishing commentaries on previously published papers. While they use differing names for these commentary-type articles, they all clearly spell out guidelines for submitting a critique of an article and how such critiques are handled. I call these 'Group A' journals. The Group A journals treat post-publication peer critique as an important part of their publishing philosophy and should be seen as holding themselves to a higher ethical standard.
Next are the 'Group B' journals. These five journals had unclear statements about publishing commentaries on previously published papers, but they appeared to have article types that could be used for commentary and critique. It could very well be that these journals do welcome critiques of papers, but they need to state this clearly.
The final class, the 'Group C' journals, did not have any clear statements about welcoming commentaries or critiques. These 11 journals might accept critiques, but they did not say so. Further, there was no indication of an article type that would allow commentary on previously published material. If these journals do not allow commentary, I would argue that they should re-evaluate their publishing philosophy. A journal that did away with peer review would rightly be ostracized and seen as not fully scientific, and I believe that post-publication criticism is just as essential as peer review.
I highlight these differences not to shame specific journals, but to argue that we need a set of universal standards to guide all journals. Most journals now adhere to standards for data accessibility and competing-interest statements, and I think they should also feel pressure to adopt a standardized set of protocols for dealing with post-publication criticism.
10 comments:
Interesting exercise.
How much do comment policies vary across the journals that have them?
Comment policies - do you mean, as in PLoSOne where you can comment directly on the articles?
Hi Jeremy - The author instructions are quite vague, but my sample size of two (I have two published commentaries) shows great variation. At one journal, my commentary was published after only a cursory review by the editors; the authors of the critiqued paper decided it was better not to respond (the flaws were fatal), and this lack of response played no part in the decision to publish my commentary. At the other, the Editor said they would publish mine only after the authors supplied a response; if the authors did not respond, they would not publish mine. They also sent it out for review, but I found the reviewer comments rather vague, and it was not clear whether the reviewers really understood the paper and my commentary.
I recently led a commentary on a paper published in Science. We had substantial problems with the authors' interpretations of the results. Science rejected the comment outright, providing no reason for their decision, and we were advised to post our comment online in the journal's unedited eLetters section.
I think most people give up at this point, but we decided to see if another journal was willing to host our comment (after peer review, of course). I just got the reviews back yesterday, and after a few minor revisions our comment is expected to be published in the Journal of Pollination Ecology. Interestingly, one of our reviewers indicated that they were also a reviewer of the original Science paper. This reviewer had major reservations about the paper and had advised that it be rejected from Science; they were surprised when they saw it published. In this case, I think Science sacrificed quality for the sake of a flashy headline. It's very unfortunate that this happens, and perhaps worse that the journal appears unwilling to remain accountable. Anyway, I thought it might be worth mentioning here that sometimes a journal will host comments on a paper that appeared in another journal, although this is probably quite rare.
Hey Charlotte - thanks for your perspective. That's really interesting, and an approach I hadn't thought of. Kudos on following through on this - I think that there is a tendency to wonder whether writing a response is a good use of limited time (especially when the paper is published somewhere less high profile than Science).
I'll put aside being miffed that we weren't deemed worthy of being in the top 40 and add our policy here. In which box does this put us? :)
Emilio (EIC, Biotropica)
From http://onlinelibrary.wiley.com/journal/10.1111/%28ISSN%291744-7429/homepage/ForAuthors.html:
"Commentary (up to 2000 words): an authoritative opinion on current issues in ecology or conservation, or a thought-provoking commentary on a previously published paper."
Excellent post, and something I've thought about a lot. Thanks for writing this. I submitted a commentary to a 'Group B' journal on a flawed paper they had recently published, and it was rejected without review 'because it didn't fit the journal's scope'. I gave up on it then because, as other commenters have mentioned, I didn't have the time to try and convince another journal to publish something that wasn't really relevant to them.
Hi Emilio - not the top 40 per se, but the 40 that popped into my head and that I was able to access with questionable internet in China. Yours is a 'Group A' journal; the guidelines clearly state that you have an article type for addressing a paper published in your journal.
Hi Manu - That's too bad. I think commentaries should be 'no-brainers' and only require technical review. Further, they don't have to count against the impact factor if they are clearly defined as a letter or commentary, so I don't understand why journals avoid them.
Thanks Marc! I was just joking about being miffed...