
Friday, March 4, 2016

Pulling a fast one: getting unscientific nonsense into scientific journals. (or, how PLOS ONE f*#ked up)

The basis of all science is that we can explain the natural world through observation and experiment. Unanswered questions and unsolved riddles are what drive scientists, and with every observation and hypothesis test, we are that much closer to understanding the universe. However, looking to supernatural causes for Earthly patterns is not science and has no place in scientific inquiry. If we relegate knowledge to divine intervention, then we fundamentally lose the ability to explain phenomena and provide solutions to real-world problems.

Publishing in science is about leaping over numerous hurdles. You must satisfy the demands of reviewers and editors, who usually require that methodologies and inferences meet strict and ever-evolving criteria, because science should be advancing. But sometimes people are able to 'game the system' and get junk science into scientific journals. Usually this happens through improper use of the peer-review system or invented data, but papers do not normally get into journals while concluding that simple patterns conform to divine intervention.

Such is the case in a recent paper published in the journal PLOS ONE. It is a fairly pedestrian paper about human hand anatomy, yet the authors conclude that anatomical structures provide evidence of a Creator. Since other primates show a slight difference in tendon connections, they reason, a Creator must be responsible for the human hand (or at least for the slight, minor modification from earlier shared ancestors). Obviously this is lazy science and an embarrassment to anyone who works as an honest scientist. But more importantly, it calls into question not only the Editor who handled this paper (Renzhi Han, Ohio State University Medical Center), but also PLOS ONE's publishing model. PLOS ONE handles thousands of papers and requires authors to pay the costs of publishing. This may just be an aberration, a freak one-off, but the implications of this seismic f$@k up should cause the Editors of PLOS to re-evaluate their publishing model.

Monday, September 8, 2014

Edicts for peer reviewing

Reviewing is a rite of passage for many academics. But for most graduate students and postdocs, it is also a bit of a trial by fire, since reviewing skills are usually assumed to be gained osmotically rather than through any specific training. Unfortunately, the reviewing system seems ever more troubled for reviewers and authors alike (slow, poor quality, unpredictable). Concerns about modern reviewing pop up every few months, and different solutions have been proposed to the difficulties of finding qualified reviewers and maintaining the quality of reviews, including publishing an instructional guide, taking alternative approaches (PeerJ, etc.), or skipping peer review altogether (arXiv). Still, in the absence of a systematic overhaul of the peer-review system, an opinion piece in The Scientist by Matthew A. Mulvey and Dean Tantin provides a rather useful guide for new reviewers and a useful reminder for experienced ones. If you are going to do a review (and you should, if you are publishing papers), you should do it well.
From "An Ecclesiastical Approach to Peer Review" 
"The Golden Rule
Be civil and polite in all your dealings with authors, other reviewers, editors, and so on, even if it is never reciprocated.
As a publishing scientist, you will note that most reviewers break at least a few of the rules that follow. Sometimes that is OK—as reviewers often fail to note, there is more than one way to skin a cat. As an author you will at times feel frustrated by reviews that come across as unnecessarily harsh, nitpicky, or flat-out wrong. Despite the temptation, as a reviewer, never take your frustrations out on others. We call it the “scientific community” for a reason. There is always a chance that you will be rewarded in the long run. 
The Cardinal Rule
If you had to publish your review, would you be comfortable doing so? What if you had to sign it? If the answer to either question is no, start over. (That said, do not make editorial decisions in the written comments to the authors. The decision on suitability is the editors’, not yours. Your task is to provide a balanced assessment of the work in question.) 
The Seven Deadly Sins of sub-par reviews
  1. Laundry lists of things the reviewer would have liked to see, but have little bearing on the conclusions.
  2. Itemizations of styles or approaches the reviewer would have used if they were the author.
  3. Direct statements of suitability for publication in Journal X (leave that to the editor).
  4. Vague criticism without specifics as to what, exactly, is being recommended. Specific points are important—especially if the manuscript is rejected.
  5. Unclear recommendations, with little sense of priority (what must be done, what would be nice to have but is not required, and what is just a matter of curiosity).
  6. Haphazard, grammatically poor writing. This suggests that the reviewer hasn’t bothered to put in much effort.
  7. Belligerent or dismissive language. This suggests a hidden agenda. (Back to The Golden Rule: do not abuse the single-blind peer review system in order to exact revenge or waylay a competitor.) 
Vow silence
The information you read is confidential. Don’t mention it in public forums. The consequences to the authors are dire if someone you inform uses the information to gain a competitive advantage in their research. Obviously, don’t use the findings to further your own work (once published, however, they are fair game). Never contact the authors directly.
Be timely
Unless otherwise stated, provide a review within three weeks of receiving a manuscript. This old standard has been eroded in recent years, but nevertheless you should try to stick to this deadline if possible. 
Be thorough
Read the manuscript thoroughly. Conduct any necessary background research. Remember that you have someone’s fate in your hands, so it is not OK to skip over something without attempting to understand it completely. Even if the paper is terrible and in your view has no hope of acceptance, it is your professional duty to develop a complete and constructive review.
Be honest
If there is a technique employed that is beyond your area of expertise, do the best you can, and state to the editor (or in some cases, in your review) that although outside your area, the data look convincing (or if not, explain why). The editor will know to rely more on the other reviewers for this specific item. If the editor has done his or her job correctly, at least one of the other reviewers will have the needed expertise.
Testify
Most manuscript reviews cover about a page or two. Begin writing by briefly summarizing the state of the field and the intended contribution of the study. Outline any major deficits, but refrain from indicating if you think they preclude publication. Keep in mind that most journals employ copy editors, so unless the language completely obstructs understanding, don’t bother criticizing the English. Go on to itemize any additional defects in the manuscript. Don’t just criticize: saying that X is a weakness is not the same as saying the authors should address weakness X by providing additional supporting data. Be clear and provide no loopholes. Keep in mind that you are not an author. No one should care how you would have done things differently in a perfect world. If you think it helpful, provide additional suggestions as minor comments—the editor will understand that the authors are not bound to them.
Judgment Day
Make a decision as to the suitability of the manuscript for the specific journal in question, keeping in mind their expectations. Is it acceptable in its current state? Would a reasonable number of experiments performed in a reasonable amount of time make it so, or not? Answering these questions will allow you to recommend acceptance, rejection, or major/minor revision. 
If the journal allows separate comments to the editor, here is the place to state that in your opinion they should accept and publish the paper as quickly as possible, or that the manuscript falls far below what would be expected for Journal X, or that Y must absolutely be completed to make the manuscript publishable, or that if Z is done you are willing to have it accepted without seeing it again. Good comments here can make the editor’s job easier. The availability of separate comments to the editor does not mean that you should provide only positive comments in the written review and reserve the negative ones for the editor. This approach can result in a rejected manuscript being returned to the authors with glowing reviewer comments. 
Resurrection
A second review is not the same as an initial review. There is rarely any good reason why you should not be able to turn it around in a few days—you are already familiar with the manuscript. Add no new issues—doing so would be the equivalent of tripping someone in a race during the home stretch. Determine whether the authors have adequately addressed your criticisms (and those of the other reviewers, if there was something you missed in the initial review that you think is vital). In some cases, data added to a revised manuscript may raise new questions or concerns, but ask yourself if they really matter before bringing them up in your review. Be willing to give a little if the authors have made reasonable accommodation. Make a decision: up or down. Relay it to the editor. 
Congratulations. You’ve now been baptized, confirmed, and anointed a professional manuscript reviewer."

Monday, August 26, 2013

Everything you wanted to know about peer review (but no one mentions)

The British Ecological Society has published an introduction to successful reviewing, so here's a short list of additional, less noted observations about the reviewing process.

For example, excitement for reviewing is inversely proportional to the number of reviews you have done:
  • When you are first asked, reviewing feels like a great honour. It is one of the first signs that some group larger than your lab or department recognizes your existence. You will spend an unreasonable amount of time perfecting your review.
[Figure: "This plot would not survive peer review."]
  • The novelty will wear off, and your enthusiasm upon receiving a review request will decline, usually in relation to your increasing workload. 
  • Sadly, the urgent need to complete a review may also wane. You will probably submit the first review early, but after that… 
Despite declines in enthusiasm, review quality usually increases with the number of reviews you have done. Practice and experience make a difference. It is also a confidence boost to see your suggestions actually instituted and valued by the authors or editors.

Manuscripts fall broadly into only a few categories. They might be deeply flawed and unpublishable, and therefore easy to review; or they might be uniformly excellent, and therefore also easy to review. But these are the least common types you will encounter. Most manuscripts have both strengths and weaknesses and fall somewhere on the spectrum between "accept" and "reject". These are the papers that take the most time, since you must weigh the flaws against the strengths, agonize over what changes to suggest, which suggestions might get the authors around the biggest issues, and what recommendation to give the editor. It's also easy to fall into Monday-morning quarterbacking and make impractical suggestions: why didn't you design your experiment like this? Why didn't you measure that? These points might be reasonable and relevant, but it is important to be clear about what is within the scope of a revision and what is a bigger-picture problem.

Reviewing is, of course, an important service to ecology. It can also make a number of subtle contributions to your own professional development. Once the novelty of someone caring about your opinion has worn off, the best parts of reviewing may be the things you don't expect.
  • For example, one of the best parts of reviewing a paper in the same area as your research is seeing what literature the authors cite and how they cite it; some real gems you've missed can show up.
  • Reviewing a paper that falls so exactly in your body of knowledge that you feel completely qualified is a great feeling. It’s nice to be reminded that you have (mostly) mastered a topic you care about.
  • When you are asked to review a paper that combines a topic or method you are well-versed in with ideas, systems, or methodologies you are not familiar with, it can be truly eye-opening. The funnest papers to review are the ones where you think, "I never thought of that!"
  • Reviewing can give you the clarity to recognize the weaknesses in your own work.

Thursday, August 15, 2013

Everything you ever wanted to know about peer review, but were afraid to ask

The thing about peer review is that there isn't much of an education process. Maybe you've published a paper or two and experienced the process as an author, and then you're asked to start reviewing for other authors. It's a bit like the telephone game: you mimic the reviews you received, maybe noting what you liked and avoiding what you didn't. But that's often all you have to go on, and when you're just beginning, a little advice might come in handy. To that end, the British Ecological Society has just published a pretty useful Peer Review 101 text. This should be required reading for new reviewers.

http://www.britishecologicalsociety.org/wp-content/uploads/Publ_Peer-Review-Booklet.pdf

Tuesday, December 30, 2008

Review or publish: the curse of the commons

Need we be concerned about the volume and quality of manuscript reviews for journal submissions? In a recent editorial in Ecology Letters, Michael Hochberg and colleagues answer yes, we should be. They argue that manuscript reviewing is suffering from a tragedy of the commons, in which growing submission rates to top journals are overburdening potential reviewers. This overburdening has two causes. First, researchers tend to send their manuscripts to journals based on impact factors, regardless of the appropriateness of the manuscript for the receiving journal. Second, authors view negative reviews as stochastic happenstance and, in the rush to resubmit, do little to improve their manuscripts.

While the concerns are real, and the authors do suggest common-sense approaches to publishing (e.g., choose appropriate journals and get colleagues to review drafts, something most of my colleagues already do), there is little discussion of what incentives could be offered. The curse of the commons arises when individual motives do not benefit the greater good, so incentives could be used to alter those motives and thereby benefit the larger community.

A number of journals now reward reviewing with free access or free color figures in future publications, or even offer payment. Perhaps the move towards short, rapid-turnaround publications is part of the problem, and we should be valuing longer, more detailed papers (the classic quantity-versus-quality problem). Whatever the potential solutions, it is promising to see journals, especially top-ranked ones like Ecology Letters, discussing these issues.

Michael E. Hochberg, Jonathan M. Chase, Nicholas J. Gotelli, Alan Hastings, and Shahid Naeem (2009). The tragedy of the reviewer commons. Ecology Letters, 12(1), 2-4. DOI: 10.1111/j.1461-0248.2008.01276.x