
Wednesday, May 1, 2024

Encouraging ethical publishing

Scientists establish their credentials and reputations by publishing peer-reviewed articles. Asking answerable questions, collecting unbiased empirical evidence to evaluate those questions, and passing through the gauntlet of peer review to publish the findings are the hallmarks of science. Essentially, publishing in a peer-reviewed scientific journal means that you are a scientist. However, the publishing landscape is replete with ethical, moral, practical, reputational, and economic decisions.

Deciding where to publish is a complex and multifaceted process. The considerations typically include:


1) Journal impact factor (and there has been a lot written about this).

2) Breadth of journal/topic area (is your article of general interest, or better suited to informing a more specific audience?).

3) Cost to publish (Open access charges, page charges, etc.).

4) Article match (does the journal tend towards robust experiments, observational data, or theory?).

5) Editorial board (people you respect or who are knowledgeable about your area of research).

6) Experience with journal (a place you’ve published before).

7) Who is doing the publishing and who realizes the benefit of your work (we don’t discuss this enough).


The recent article by Receveur and colleagues, titled “David versus Goliath: Early career researchers in an unethical publishing system” and published in Ecology Letters, argues that individual researchers need to make better publishing decisions in order to support a more ethical publishing landscape. They come at this from the point of view of early career researchers (ECRs), who are disproportionately affected by publishing decisions, but nothing in their article is exclusive to ECRs. In fact, I’d say that these discussions about publishing are best served by including the entire community.


Before I dig a little more into Receveur et al.’s suggested path forward, I will say a bit about my own publishing philosophy. I now send my articles only to society-owned or non-profit journals. My more general-appeal manuscripts go to Science or PNAS (both society journals). I do not review for Springer Nature, Wiley, Elsevier, etc., unless the publication is a society one. I do not want my labour, effort, and creativity to be turned into someone else’s profit; if indirect benefits arise, I want them to serve academic communities. I came to this philosophy slowly over time, but it solidified probably five or six years ago, as I watched new Nature journals being created without any meaningful contribution back to the communities they purportedly serve. As members of my working groups and collaborations can attest, I do make my perspective known, though I won’t hold up others’ publishing decisions.


The Receveur et al. general guidelines are a good set of rules to follow, though some aspects could use more detail. For example, they state that decisions should be made on whether publications are ethical or not, but they never really set the parameters for what counts as ‘ethical’. They do cite profits as one consideration, highlighting some of the profits made by publishers: Elsevier’s profits were two orders of magnitude higher than Wiley’s. Does this mean Wiley is much more ethical than Elsevier? Maybe, or maybe not.


What does ethical publishing look like?

- The journal follows the ethical guidelines laid out by COPE (the Committee on Publication Ethics). This means that the publication has transparent processes and business practices, and bases decisions on anonymous peer review.

- Academics/researchers should be the ones making both the operational and strategic decisions for the journal.

- Editorial boards are populated by active researchers in the field, and these boards should be diverse and representative (gender, geography, career stage, etc.).

- The journal’s primary mandate is not to generate profits for a company, but rather to advance scientific knowledge.

- Proceeds made by the publication feed back into the scientific community.


As a result of these ethical imperatives:

- Journals should be society owned and managed. Even if the journals are published by for-profit publishers, society ownership indicates that oversight is likely not profit-driven, and that proceeds go back into supporting the community.

- If the journal is not owned by a society, then a non-profit publisher again ensures that profit is not the primary motivation influencing decision-making.


For an example that I am intimately familiar with[i]: the British Ecological Society, which owns eight ecological journals plus a grey literature repository, partners with Wiley to publish them. Wiley obviously has a profit mandate, but the Society negotiates publishing contracts that prioritize benefits to BES members, and it retains all decision-making power over its publications.


Moving forward

As Receveur and colleagues argue, there needs to be a culture change. I wholeheartedly agree. Right now, many academics support a perverse system that does not have our best interests in mind. Building on the Receveur et al. recommendations, what should we do as individuals? 


- Publish in society or non-profit journals.

- Publish in journals that adhere to ethical standards.

- Evaluate the quality of candidates’ contributions for positions or promotion, rather than the venues those contributions appear in.

- Choose to serve on society or non-profit journal editorial boards rather than on publisher-owned for-profit ones.

- Only review for society or non-profit journals.

- Value service to society or non-profit journal editorial boards and reviewing in hiring, promotion, and annual progress evaluations.


Finally, Receveur and colleagues point to an invaluable resource for determining which journals are owned by societies or non-profit organizations: the DAFNEE database of ethical journals.


This is a discussion that academics need to have more broadly, and it needs to influence hiring, tenure, awards, and grant committees, so that we are cognizant of individual and shared ethical publishing behaviour.




[i] Note that I am the Editor-in-Chief of Ecological Solutions and Evidence and the Chair of Applied Ecology Resources, two newer BES publication projects. Before this, I was the Editor of Journal of Applied Ecology. So, I have been intimately entangled with the BES-Wiley relationship for years, I might not have a completely objective perspective, and I have developed friendships with people on both sides of this.

Wednesday, May 13, 2020

Publication Partners: a COVID-19 publication assistance program in conservation science


Researchers around the world are trying to keep up with work duties and responsibilities while being required to stay at home. For some people this means caring for young children or other family members, devising homeschooling, switching courses to online delivery, scheduling meetings with team members, taking on new duties from superiors, and perhaps worrying about job security. It is natural that these people may feel overwhelmed and that routine tasks, like checking references or proofreading manuscripts, might seem insurmountable.

However, for others, COVID-19 lockdowns have resulted in more time to push projects to completion and clear out backlogs. The impact of COVID-19 restrictions on individuals is therefore unequal.

These unequal impacts affect mental wellbeing and career trajectories, and they come on top of the desperate necessity that conservation science continue. We win by having a greater diversity of experts communicating with one another.

Publication Partners is an attempt to address some of this COVID-19 impact inequality and to ensure that conservation science is still being published, by assisting people with their manuscript preparation. It is a match-making service for the conservation community, bringing researchers struggling with their current working conditions together with those who feel they have extra capacity and are willing to help others in this difficult time. A partner might be asked for publication advice, to assist with manuscript editing, to help sort and check references, to organize tasks for revisions, or to prepare figures.

The idea is that a Publication Partner would normally contribute less than would be expected for authorship and thus will be listed in the acknowledgments of the resulting paper. Publication Partners will match volunteers with those requesting support.

To volunteer or request a partner, please see this document with contact instructions.

As a journal editor, I see this as a valuable and much needed assistance strategy. And I’m not alone. Many of the most important conservation journals have signaled their support and welcome submissions using this service. The journals supporting Publication Partners include (please note that the list of journals is being updated and so will change over time):

 *Thanks to Bill Sutherland for sharing his thoughts on this post.

Friday, May 27, 2016

How to deal with poor science?

Publishing research articles is the bedrock of science. Knowledge advances through testing hypotheses, and the only way such advances are communicated to the broader community of scientists is by writing up the results in a report and sending it to a peer-reviewed journal. The assumption is that papers passing through this review filter report robust and solid science.

Of course this is not always the case. Many papers include questionable methodology and data, or are poorly analyzed. And a small minority actually fabricate or misrepresent data. As Retraction Watch often reminds us, we need to be vigilant against bad science creeping into the published literature.



Why should we care about bad science? Erroneous results or incorrect conclusions in scientific papers can lead other researchers astray and result in bad policy. Take, for example, the well-flogged Andrew Wakefield, a since-discredited researcher who published a paper linking autism to vaccines. The paper is so flawed that it does not stand up to basic scrutiny and was rightly retracted (though how it passed peer review remains an astounding mystery). However, this incredibly bad science invigorated an anti-vaccine movement in Europe and North America that is responsible for the re-emergence of childhood diseases that should have been eradicated. This bad science is responsible for hundreds of deaths.


Of course most bad science will not result in death. But bad articles waste time and money when researchers go down blind alleys or work to rebut papers. The important thing is that there are avenues available for researchers to question and criticize published work. Nowadays papers are usually criticized through two channels. The first is blogs (and other social media). Researchers can communicate their concerns and opinions about a paper to the audience that reads their blog or through social media shares. A classic example was the blog post by Rosie Redfield criticizing a paper published in Science that claimed to have discovered bacteria that used arsenic as a food source.

However, there are a few problems with this avenue. First, it is not clear that the correct audience is being targeted. For example, if you normally blog about your cat, and your blog followers are fellow cat lovers, then a seemingly random post about a bad paper will likely fall on deaf ears. Second, the authors of the original paper may not see your critique and so do not have a fair opportunity to rebut your claims. Finally, your criticism is not peer-reviewed, so flaws or misunderstandings in your writing are less likely to be caught.

Unlike the relatively new blog medium, the second option is as old as scientific publication: writing a commentary that is published in the same journal (often with an opportunity for the authors of the original article to respond). These commentaries are usually reviewed and target the correct audience, namely the scientific community that reads the journal. However, some journals do not have a commentary section, and so this avenue is not always available to researchers.

Caroline and I experienced this recently when we enquired about the possibility of writing a commentary on a published article that contained flawed analyses. The Editor responded that they do not publish commentaries on their papers! I am an Editor-in-Chief, and I routinely deal with letters criticizing papers we publish. This is an important part of the scientific process. We investigate all claims of error or wrongdoing, and if the concerns appear valid but do not meet the threshold for a retraction, we invite the correspondents to write a commentary (and invite the original authors to write a response). This option is so critical to science that its importance cannot be overstated. Bad science needs to be criticized, and the broader community of scientists should feel that they have opportunities to check and critique publications.


I can see many reasons why a journal might not bother with commentaries (to save page space for articles, they’re seen as petty squabbles, etc.), but I would argue that scientific journals have important responsibilities to the research community, and one of them must be to hold the papers they publish accountable and to allow sound and reasoned criticism of potentially flawed papers.

Looking over the author guidelines of the 40 main ecology and evolution journals (apologies if I missed statements; author guidelines can be very verbose), only 24 had a clear statement about publishing commentaries on previously published papers. While they all had different names for these commentary-type articles, they all clearly spelled out that there is a process for publishing a critique of an article and how they handle it. I call these 'Group A' journals. The Group A journals hold post-publication peer critique as an important part of their publishing philosophy and should be seen as having a higher ethical standard.



Next are the 'Group B' journals. These five journals had unclear statements about publishing commentaries on previously published papers, but they appeared to have article types that could be used for commentary and critique. It may well be that these journals do welcome critiques of papers, but they need to state this clearly.


The final class, the 'Group C' journals, had no clear statements welcoming commentaries or critiques. These 11 journals might accept critiques, but they did not say so. Further, there was no indication of an article type that would allow commentary on previously published material. If these journals do not allow commentary, I would argue that they should re-evaluate their publishing philosophy. A journal that did away with peer review would rightly be ostracized and seen as not a fully scientific journal, and I believe that post-publication criticism is just as essential as peer review.


I point out these differences not to shame specific journals, but to highlight that we need a set of universal standards to guide all journals. Most journals now adhere to a set of standards for data accessibility and competing-interest statements, and I think that they should also feel pressure to accept a standardized set of protocols for dealing with post-publication criticism.

Friday, March 4, 2016

Pulling a fast one: getting unscientific nonsense into scientific journals. (or, how PLOS ONE f*#ked up)

The basis of all of science is that we can explain the natural world through observation and experiments. Unanswered questions and unsolved riddles are what drive scientists, and with every observation and hypothesis test, we are that much closer to understanding the universe. However, looking to supernatural causes for Earthly patterns is not science and has no place in scientific inquiry. If we relegate knowledge to divine intervention, then we fundamentally lose the ability to explain phenomena and provide solutions to real world problems.

Publishing in science is about leaping over numerous hurdles. You must satisfy the demands of reviewers and Editors, who usually require that methodologies and inferences satisfy strict and ever-evolving criteria; science should be advancing. But sometimes people are able to 'game the system' and get junk science into scientific journals. Usually this happens through improper use of the peer-review system or invented data, but papers do not normally get into journals while concluding that simple patterns conform to divine intervention.

Such is the case with a recent paper published in the journal PLOS ONE. It is a fairly pedestrian paper about human hand anatomy, yet the authors conclude that anatomical structures provide evidence of a Creator. They conclude that since other primates show a slight difference in tendon connections, a Creator must be responsible for the human hand (or at least for the slight, minor modification from earlier shared ancestors). Obviously this is lazy science and an embarrassment to anyone who works as an honest scientist. But more importantly, it calls into question not only the Editor who handled this paper (Renzhi Han, Ohio State University Medical Center), but also PLOS ONE's publishing model. PLOS ONE handles thousands of papers and requires authors to pay the costs of publishing. This may just be an aberration, a freak one-off, but the implications of this seismic f$@k up should cause the Editors of PLOS to re-evaluate their publishing model.

Sunday, May 19, 2013

The end of the impact factor

Recently, both the American Society for Cell Biology (ASCB) and the journal Science publicly proclaimed that the journal impact factor (IF) is bad for science. The ASCB statement argues that IFs limit meaningful assessment of scientific impact for published articles and especially for other scientific products. The Science statement goes further, claiming that assessments based on IFs lead researchers to alter research trajectories and try to game the system rather than focussing on the important questions that need answering.


Impact factors: tale of the tail
The impact factor, calculated by Thomson Reuters, is simply the number of citations a journal received in the previous two years, divided by the number of articles published over that time span. It is thus a snapshot of a particular type of 'impact'. There are technical problems with this metric; for example, citations accumulate at different rates across subdisciplines. More importantly, and as all publishers and editors know, IFs generally rise and fall with the extreme tail of the distribution of citation counts. For a smaller journal, it takes just one heavily cited paper to make the IF jump. For example, if a journal publishes one paper that accumulates 300 citations and it published just 300 articles over the two years, its IF jumps by 1, which can alter the optics. In ecology and evolution, IFs greater than 5 are usually viewed as marking top journals.
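
In symbols, the two-year impact factor of journal j in year y can be written as follows (a standard formulation, consistent with the description above):

```latex
\mathrm{IF}_{j,y} = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
```

where C_y(y-k) is the number of citations received in year y to articles the journal published in year y-k, and N_{y-k} is the number of articles published in year y-k. The tail effect follows directly from this ratio: one extra paper drawing 300 citations against a denominator of 300 articles adds 300/300 = 1 to the IF.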

Regardless of these issues, the main concern expressed by the ASCB and Science is that a journal-level metric should not be used to assess an individual researcher's impact. Should a researcher publishing in a high-IF journal be rewarded (promotion, raise, grant funding, etc.) if their paper is never cited? What about their colleague who publishes in a lower-IF journal but accrues a high number of citations?

Given that rewards are based, in part, on the journals we publish in, researchers try to game the system by writing articles for certain journals, and journals try to attract papers that will accrue citations quickly. Journals with increasing IFs usually see large increases in the number of submissions, as researchers are desperate to have high-IF papers on their CVs. Some researchers send papers to journals in descending order of IF without regard for the actual fit of the paper to the journal. This overloads the peer-review system.

Rise of the altmetric
The alternative metrics (altmetrics) movement aims to replace journal and article assessment based on journal citation metrics with a composite of measures that includes page views, downloads, citations, discussions on social media and blogs, and mainstream media stories. Altmetrics attempt to capture a more holistic picture of the impact of an article. PLoS ONE article pages, for example, display these measures alongside each paper.
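
As a toy illustration only (my own sketch; the weights are invented assumptions, not Altmetric's or PLoS's actual algorithm), a composite article-level score is essentially a weighted sum over several kinds of attention:

```python
# Toy composite altmetric score: a weighted sum of attention counts.
# These weights are illustrative assumptions, not any provider's real values.
weights = {"views": 0.001, "downloads": 0.005, "citations": 1.0,
           "blog_posts": 0.5, "tweets": 0.05, "news_stories": 2.0}

def composite_score(counts):
    """Return the weighted sum of the attention counts for one article."""
    return sum(weights[k] * counts.get(k, 0) for k in weights)

article = {"views": 12000, "downloads": 800, "citations": 14,
           "blog_posts": 3, "tweets": 120, "news_stories": 1}
print(composite_score(article))  # 39.5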

By making such information available, the measure of an individual article is no longer the journal IF, but how the article actually performs. Altmetrics are particularly important for subdisciplines whose maximal impact lies beyond the ivory towers of academia. For example, the journal I am an Editor for, the Journal of Applied Ecology, tries to reach out to practitioners, managers, and policy makers. When an article is taken up by these groups, they do not return citations, but they do share and discuss the paper. Accounting for this type of impact has been an important issue for us. In fact, even though our IF may be equivalent to those of other, non-applied journals, our articles are viewed and downloaded at a much higher rate.

The future
Soon, how articles and journals are assessed for impact will look very different. Organizations such as Altmetric have developed new scoring systems that take into account these different types of impact. Further, publishers have been experimenting with altmetrics, and future online articles will be intimately linked to how they are being used (e.g., seeing tweets while viewing the article).

Once the culture shifts to assessment based on individual article performance, where you publish should become less important, and journals will be free to focus on an identity based on content rather than citations. National systems that currently hire, fund, and promote faculty based on the journals they publish in need to carefully rethink their assessment schemes.

May 21st, 2013 Addendum:

You can sign the declaration against Impact Factors online.


Tuesday, May 25, 2010

The successful launch of MEE

Usually, I view the release of a new journal with some skepticism. There are so many journals, and it feels like academics are over-parsing fields, isolating researchers who should be communicating. However, sometimes a journal comes along for which the need is obvious, and the community responds to its arrival. Such is the case with the British Ecological Society's newest journal, Methods in Ecology and Evolution, started by Rob Freckleton. Dedicating a journal to methods papers is a great idea. This era of ecology and evolution is defined by rapid advances in experimental, technological, and computational tools, and keeping track of these advances is difficult. A single journal should make finding such papers easier, but, more importantly, it provides a home for methodological and computational ecologists and evolutionary biologists, which will hopefully spur greater communication and interaction, fostering more rapid development of tools.

Two issues have been published, and they are populated by good, entertaining articles. I especially enjoyed the one by Bob O'Hara and Johan Kotze on why you shouldn't log-transform count data. As a researcher, I've done this (instead of using a GLM with a proper distribution), and as an editor, I've allowed it. It has always felt wrong somehow, and this paper shows that it is.
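
Their point is easy to demonstrate with a few lines of simulation. Here is a minimal Python sketch using numpy and statsmodels (O'Hara and Kotze worked in R, so this is an analogue of the idea, not their code): the log(y+1) regression gives distorted coefficient estimates, while the Poisson GLM recovers the true values.

```python
# Simulated count data: compare log(y+1) OLS with a Poisson GLM (log link).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 2, n)
y = rng.poisson(np.exp(0.5 + 1.0 * x))  # true intercept 0.5, true slope 1.0

X = sm.add_constant(x)

# The problematic habit: add 1 (zeros can't be logged), then fit least squares.
ols_fit = sm.OLS(np.log(y + 1), X).fit()

# The better approach: model the counts directly with a Poisson GLM.
glm_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

print("log(y+1) OLS:", ols_fit.params)  # typically biased away from (0.5, 1.0)
print("Poisson GLM: ", glm_fit.params)  # close to the true (0.5, 1.0)
```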

The early success of the journal is not just the product of the good papers it has already published, but also of its savvy use of electronic communication. The journal tweets on Twitter, links fans through Facebook, blogs about recent advances in methods from other journals, and posts podcast and videocast interviews with authors. These casts give readers access to the authors' own explanations of how their methods can be used.

I am excited about this new journal and hope it has a great impact on the publication of methodological papers.

Saturday, October 17, 2009

The making of an open era

With the availability of open access (OA) journals, academics now have a choice to make when deciding where to send their manuscripts. The idealistic version of OA publishing represents a 'win-win' for researchers. Authors ensure the widest possible audience for their work, and research has shown a citation advantage for OA papers. The other side of the 'win-win' scenario is that researchers, no matter where they are or how rich their institution, get immediate access to high-caliber research papers.

However, not all researchers have completely embraced OA journals. There are two commonly articulated concerns. The first is that many OA journals are not indexed, most notably in Thomson Reuters' Web of Knowledge, meaning that a paper will not show up in topic searches, nor will its citations be tracked. I, for one, do not like the idea of a company determining which journals deserve inclusion, thus affecting our choice of journals to submit to.

The second concern is that some OA journals are expensive to publish in. This is especially true of the more prestigious OA journals. Even though such journals often allow cash-strapped authors to request a cost deferment, the perception is that you generally need to allocate significant funds for publishing in OA journals. This cost may be justifiable for inclusion in a journal like PLoS Biology, because of its level of readership and visibility. However, there are other, newer, profit-driven journals that see the OA model as a good business model, with little overhead and the opportunity to charge $1000-2000 per article.

I think that, with the rise of Google Scholar and of tools to assess impact (e.g., Publish or Perish), the first concern is fading: articles can now be found and their influence assessed through multiple sources. The second concern is a little more serious, and a broad-scale solution is not readily apparent.

[Figure: Number of Open Access journals by year]

Regardless, OA journals have proliferated in the past decade. Using the directory of biology OA journals, I show above that the majority of OA journals appeared after 2000. Some have not been successful, faltering after a few volumes, such as the World Wide Web Journal of Biology, which published nine volumes, the last in 2004. I am fairly confident that not all of these journals can be successful, but I hope that enough are. By having real OA options, especially higher-profile journals, research and academia benefit as a whole.

Which journals become higher profile and come to be viewed as attractive places to submit a paper is a complex process, depending on a strong and dedicated editorial staff and on the emergent properties of the articles submitted. I hope that researchers out there really consider OA journals as a venue for some of their papers and become part of the 'win-win' equation.

Friday, October 2, 2009

How to keep up on your favorite journals

Researchers live busy lives. You spend your waking hours writing grant proposals, running experiments, analyzing data, writing papers, preparing lectures, supervising students, and attending committee meetings, not to mention taking care of your personal life. Often the activity that slips to the bottom of this list is keeping up with the current literature. How should one maximize one's ability to efficiently peruse recent publications? I think the best approach is to use journal RSS feeds (RSS stands for Really Simple Syndication). RSS is a web format that allows publishers to syndicate the abstracts of papers as they are published online.

The simplest way to do this is to make sure you have a Google account and use Google Reader. On a journal's website, click the RSS icon; you'll be sent to the journal's feed page, where a subscription option at the top lets you select Google.

When you click 'Subscribe Now', it prompts you to choose the Google homepage or Reader; I use Reader, but that just depends on your preference. You can subscribe to as many journals as you want, and I think all the major ones have RSS set up. To keep up on recently published papers, you simply go to your Google Reader and scroll through the journals you have subscribed to. Or, if you check more often, the reader keeps a list of the most recent items from all your subscriptions. No more e-mail alerts and no more visiting a bunch of different journal pages.
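
If you'd rather script it than use a reader, the same feeds can be scanned programmatically. Here is a minimal Python sketch using the feedparser library (the feed URLs below are placeholders; substitute the ones linked from each journal's website):

```python
# Scan the most recent items from a few journal RSS feeds.
# Requires: pip install feedparser. The URLs are placeholders, not real feeds.
import feedparser

feeds = {
    "Journal of Applied Ecology": "https://example.org/japplecol/rss",
    "Methods in Ecology and Evolution": "https://example.org/mee/rss",
}

for journal, url in feeds.items():
    feed = feedparser.parse(url)
    print(f"\n{journal}")
    for entry in feed.entries[:5]:  # five most recent items
        print(f"  {entry.title}")
        print(f"    {entry.link}")
```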

By the way, you can also subscribe to this blog in the same way (see 'subscribe to' links on side panel).

Friday, February 20, 2009

Increased access to science, but who gets to publish?

What role will open access (OA) journals play as science publishing increasingly moves to the internet and involves a more diverse array of participants? In a recent short article in Science, Evans and Reimer tried to answer this using citation rates from 8,253 journals, examining trends in citation shifts. They found that researchers from wealthier countries were not likely to shift to citing OA journals, while researchers from poorer countries did. The authors conclude that the overall shift to citing OA journals has been rather modest, but that these journals have increased inclusion for researchers at institutions in poorer countries that cannot afford commercial subscriptions. However, there is an unfortunate flip side to the OA model: paying to publish. Most OA journals recoup the lack of subscription earnings by placing the financial onus on the publishing scientists. This means that while researchers from poorer countries can now read and cite current articles in OA journals, they are still limited in publishing in them. True, most OA journals allow costs to be deferred for researchers lacking funds, but there is usually a cap on how often this can be done.

Evans, J. A., & Reimer, J. (2009). Open Access and Global Participation in Science. Science, 323(5917), 1025. DOI: 10.1126/science.1154562

Tuesday, December 30, 2008

Review or publish; the curse of the commons

Need we be concerned about the volume and quality of manuscript reviews for journal submissions? In a recent editorial published in Ecology Letters, Michael Hochberg and colleagues answer yes, we should be. They argue that manuscript reviewing is suffering from a tragedy of the commons, in which growing submission rates to top journals are overburdening potential reviewers. This overburdening has two causes. First, researchers tend to send their manuscripts to journals based on impact factors, regardless of the appropriateness of the manuscript for the receiving journal. Second, authors view negative reviews as stochastic happenstance and, in the rush to resubmit, do little to improve their manuscripts.

While the concerns are real, and the authors do suggest common-sense approaches to publishing (i.e., choose appropriate journals and get colleagues to review drafts, something most of my colleagues do), there is little discussion of what incentives could be offered. The curse of the commons arises when individual motives do not benefit the greater good; incentives could thus be used to alter motives, potentially benefiting the larger community.

A number of journals now offer free access or free color figures in future publications in exchange for reviewing, or even offer payment. Perhaps the move towards shorter, rapid-turnaround publications is part of the problem, and we should be valuing longer, more detailed papers (the classic quantity vs. quality problem). Whatever the potential solutions, it is promising to see journals, especially top-ranked ones like Ecology Letters, discussing these issues.

Hochberg, M. E., Chase, J. M., Gotelli, N. J., Hastings, A., & Naeem, S. (2009). The tragedy of the reviewer commons. Ecology Letters, 12(1), 2-4. DOI: 10.1111/j.1461-0248.2008.01276.x