Friday, April 10, 2020

Can skipping the peer-review process be a legitimate way to communicate science?

Science is an approach to inquiry and knowledge production that provides an irreplaceable way of evaluating empirical claims, and it is a specific and particular thing. Beyond the experiments and data collection, science must be communicated in order to shape knowledge and inform humanity’s understanding of the world around us, as well as potential solutions to problems of our own making. The gold standard for communicating scientific findings is peer review: the process by which research articles are assessed by other experts, who act as gatekeepers determining whether papers should be published and how much revision they require. These reviewers look for flaws in logic, methodology, and inference, and ensure that findings are set in their proper context.

Peer review is not perfect, but it is necessary and can always be improved. There is, however, another question: is it always needed? Are there legitimate reasons for a scientist to skip the peer-review process?

To me, there can be legitimate reasons to skip the peer-review process, but the goals should be clear, and we need to acknowledge that the conclusions and inferences will always be in doubt. Yet advancing the scientific understanding of some phenomenon and communicating it to other experts might not be the goal; take this blog post, for example. There are other communication objectives that do not necessarily require peer review.

Here are three non-peer-reviewed communication pathways that I’ve personally pursued. I’m not including blogs and other social media here, because I think they differ in their goals and objectives. These are communication approaches you might want to consider:

1-    You might want to capture a broader public readership, telling a story in a way that engages a non-specialist audience. For example, you might want to extend your science into a call for policy or societal change, or to draw the attention of the public and policymakers to a critical issue. I was recently a co-author on several papers that attempted to do this: one on the need to protect the Tibetan Plateau, and another on the globally uneven distribution of the readership and submissions of applied ecology papers.
2-    You might want to target a specific audience that does not need access to the peer-reviewed literature, especially agencies and NGOs that need specific guidance and summaries of best practice. The grey literature is a rich and diverse set of communication pathways, but it is neither well captured in journals nor permanently available (something we’ve been trying to overcome with the British Ecological Society!).
3-    You may want to publish information or findings that are desperately needed and extremely time-sensitive. I recently decided to skip the typical peer-review pipeline to release analyses showing that governmental responses to COVID-19 quickly resulted in significant drops in air pollution, across six different air pollutants, in the cities affected in February. I published the findings on this blog and posted the manuscript to EarthArXiv.

Why would I do this, especially when I am reporting the outcomes of hypothesis tests and data analyses? I did submit the manuscript to Science, and it was quickly rejected; I’m sure legitimate biogeochemists and atmospheric chemists are already submitting better analyses. However, I told myself before submission that if it was rejected, I would immediately go to plan B, which I did. I felt that the need to engage in this conversation, and to shine a light on policy decisions that could lead to reduced pollution, was too important for me to pursue the lengthy peer-review process, especially in an area outside my own research. So my plan B was to post to a preprint server and blog about it. My hope is that it will spur more discussion and further analyses.

In some ways, these alternative vehicles for communicating science have been an experiment for me, but I have the luxury of doing this given that I now have a mature research program and a rather large group. It is important to evaluate how we value non-peer-reviewed material or, more importantly, how you use it to tell the story of your contributions to society and your impact. While we clearly need to distinguish peer-reviewed from non-reviewed material, and there is no replacing the impact of peer review, we should view non-peer-reviewed material more positively, as a way to mobilize knowledge and engage other communities in discussion. As scientists, we need to think carefully about when and how to communicate, and about the value of this communication to both society and our careers. But certainly, these alternative forms of scientific communication can help make the broader-impact statements on grant and tenure applications more compelling.

We are ultimately evaluated primarily on our peer-reviewed science, as it should be, but we can better tell the story of our contributions with a complementary minority of other communication types. I would go so far as to say that a scientist who only publishes peer-reviewed articles might be missing important opportunities to share their knowledge and have an impact on societally important issues.

Excluding blog posts and tweets, about 30% of my contributions are not peer-reviewed. If I include blog posts, then I’d guess I’m at about a 1:1 ratio of peer-reviewed to not. But I am at a stage in my career where this is less risky to do. Pursuing alternative communication forms needs to be non-linear: you need more peer-reviewed articles up front to establish your credibility, which then frees you to pursue other intellectual endeavours and modes of communication. Perhaps more importantly, you will have established that you are knowledgeable and a trusted authority, meaning that your non-peer-reviewed writings have greater impact.

Regardless, many of us got into this business to expand our collective understanding of the world around us or to make the world a better place. Neither of these goals is achievable if we are not communicating to non-scientists.

Monday, October 17, 2016

Reviewing peer review: gender, location and other sources of bias

For academic scientists, publications are the primary currency of success, so peer review is a central part of scientific life. When discussing peer review, it’s always worth remembering that since it depends on ‘peers’, broader issues across ecology are often reflected in issues with peer review. A series of papers from Charles W. Fox and coauthors Burns, Muncy, and Meyer does a great job of illustrating this point, showing how diversity issues in ecology are writ small in the peer review process.

The journal Functional Ecology provided the authors with up to 10 years of data on the submission, editorial, and review process (between 2004 and 2014). These data provide a unique opportunity to explore how factors such as gender and geographic locale affect the peer-review process and its outcomes, and how this has changed over the past decade.

Author and reviewer genders were assigned using an online database that includes 200,000 names, each with an associated probability reflecting the genders for that name. The geographic locations of editors and reviewers were also identified based on their profiles. There are some clear limitations to this approach, particularly that Asian names had to be excluded. Still, 97% of names were present in the database, and 94% of those names were associated with a single gender more than 90% of the time.
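The threshold rule described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ actual code: the tiny name-probability table and the function name `assign_gender` are invented for this example, standing in for the 200,000-name database.

```python
# Hypothetical miniature of the name-gender database: each name maps to
# gender probabilities. Real databases (e.g. genderize.io-style services)
# return similar probability estimates per name.
NAME_DB = {
    "alice":   {"female": 0.99, "male": 0.01},
    "charles": {"male": 0.98, "female": 0.02},
    "robin":   {"male": 0.55, "female": 0.45},  # ambiguous name
}

def assign_gender(name, threshold=0.90):
    """Assign a gender only if one gender's probability exceeds the
    threshold; otherwise return None (name ambiguous or absent)."""
    probs = NAME_DB.get(name.lower())
    if probs is None:
        return None  # name not in the database (e.g. excluded names)
    gender, p = max(probs.items(), key=lambda kv: kv[1])
    return gender if p > threshold else None
```

Under this rule, "Alice" and "Charles" are assigned with confidence, while an ambiguous name like "Robin" is left unassigned rather than guessed, which matches the conservative >90% criterion the papers report.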

Many—even most—of Fox et al.’s findings are in line with what has already been shown regarding the causes and effects of gender gaps in academia. But they are interesting nonetheless. Some of the gender gaps seem to be tied to age: senior editors were all male, and although women make up 43% of first authors on papers submitted to Functional Ecology, they are only 25% of senior authors.

Implicit biases in identifying reviewers are also fairly common: far fewer women were suggested than men, even when female authors or female editors were identifying reviewers. Female editors did invite more female reviewers than male editors did ("Male editors selected less than 25 percent female reviewers even in the year they selected the most women, but female editors consistently selected ~30–35 percent female"). Female authors also suggested slightly more female reviewers than male authors did.

Some of the statistics are great news: there was no effect of author gender or editor gender on how papers were handled or on their chances of acceptance, for example. Further, the mean scores given to papers by male and female reviewers did not differ, so reviewer gender isn’t affecting your paper’s chance of acceptance. And when the last or senior author on a paper is female, a greater proportion of all the authors on the paper are female too.

The most surprising statistic, to me, was a small (2%) but consistent effect of handling-editor gender on male reviewers’ behaviour: they were less likely to respond to review requests, and less likely to agree to review, when the editor making the request was female.

That there are still observable effects of gender in peer review, despite increasing awareness of the issue, should tell us that the effects of other, less-discussed forms of bias are probably similar or greater. Fox et al. hint at this when they show how important geographic locale is to reviewer choice: editors overwhelmingly over-selected reviewers from their own geographic locality. This is not surprising, since social and professional networks are geographically structured, but it can have the effect of making science more insular. Other sources of bias—race, country of origin, language—are more difficult to measure from these data, but hopefully the results from these papers are reminders that such biases can have measurable effects.

From Fox et al. 2016a.