Showing posts with label publishing.

Wednesday, April 12, 2017

The most "famous" ecologists (and some time wasting links) (Updated)

(Update: This has gotten a lot more attention than I expected. Since it was first posted, the top 10 list has been updated twice based on commenters' suggestions. You can also see everyone we looked up here. I probably won't update this again, because there is a little time wasting, and then there is a lot of time wasting :) )

At some point my officemates Matthias and Pierre and I started playing the 'who is the most famous ecologist' game (instead of, say, doing useful work), particularly looking for ecologists with an h-index greater than 100. An h-index of 100 means that the scientist has 100 publications with at least 100 citations each, while their remaining papers each have no more than 100 citations. Although the h-index is controversial, it is readily available and reasonably captures scientists with above-average citations per paper and high productivity. We restricted ourselves to living researchers only. We used Publish or Perish to query Google Scholar (which now believes everyone using the internet in our office may be a bot).

We identified only 12 ecologists at level 100 or greater. For many researchers in specialized subfields, an h-index this high is probably not achievable. The one commonality among these names seems to be that they either work on problems of broad importance and interest (particularly climate change and human impacts on the landscape) or were fundamental to one or more areas of work. They were also all men, and so we tried to identify the top 12 women ecologists. (We tried as best we could, using lists here and here to compile our search.) The top women ecologists tended to have been publishing for an average of 12 years less than the male ecologists (44 vs. 56 years), which may explain some of the rather jarring difference. The m-index is the h-index divided by years publishing, and so standardizes for differences in career age.
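For anyone curious how these numbers fall out of a citation list, here is a minimal sketch in Python of both indices (the citation counts and the 20-year career in the example are invented for illustration; Publish or Perish computes this for you from Google Scholar):

def h_index(citations):
    # h = the largest h such that at least h papers have h or more citations each
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_index(citations, years_publishing):
    # m-index = h-index divided by the number of years publishing
    return h_index(citations) / years_publishing

example_citations = [250, 180, 120, 95, 60, 40, 12, 3]  # invented counts
print(h_index(example_citations))       # 7
print(m_index(example_citations, 20))   # 0.35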

(It's difficult to get these kinds of analyses perfect due to common names, misspellings in citations, the different databases used, etc. It's clear that for people with long publication lists, there is a good amount of variance depending on how that value is estimated.)

Other links: 
(I've been meaning to post some of these, but haven't otherwise had the time or space for them.)
Helping graduate students deal with imposter syndrome (Link). Honestly, it's not only graduate students who suffer from imposter syndrome, and it is always helpful to get more advice on how to escape the feeling that you've lucked into something you aren't really qualified for.

A better way to teach the Tree of Life (Link). This paper has some great ideas that go beyond identifying common ancestors or memorizing taxonomy.

Analyzing the scientists who are on Twitter (Link).

Recommendation inflation (Link). Are there any solutions to an arms race of positivity?  


Tuesday, January 24, 2017

The removal of the predatory journal list means the loss of necessary information for scholars.

We at EEB & Flow periodically post about trends and issues in scholarly publishing, and one issue that we keep coming back to is the existence of predatory Open Access journals. These are journals that abuse a valid publishing model to make a quick buck, operating with clearly substandard practices that subvert the normal scholarly publishing pipeline (for example, see: here, here and here). In identifying those journals that, through their publishing model and activities, are predatory, we have relied heavily on Beall's list of predatory journals. This list was created by Jeffrey Beall, with the goal of providing scholars with the information needed to make informed decisions about which journals to publish in and to avoid those that likely take advantage of authors.

As of a few days ago, the predatory journal list has been taken down and is no longer available online. Rumour has it that Jeffrey Beall removed the list in response to threats of lawsuits. This is really unfortunate, and I hope that someone who is dedicated to scholarly publishing will assume the mantle.

However, for those who still wish to consult the list, an archive of it still exists online - found here.

Monday, October 17, 2016

Reviewing peer review: gender, location and other sources of bias

For academic scientists, publications are the primary currency for success, and so peer review is a central part of scientific life. When discussing peer review, it’s always worth remembering that since it depends on ‘peers’, broader issues across ecology are often reflected in issues with peer review. A series of papers from Charles W. Fox--and coauthors Burns, Muncy, and Meyer--do a great job of illustrating this point, showing how diversity issues in ecology are writ small in the peer review process.

The journal Functional Ecology provided the authors with up to 10 years of data on the submission, editorial, and review process (between 2004 and 2014). These data provide a unique opportunity to explore how factors such as gender and geographic locale affect the peer review process and its outcomes, and how this has changed over the past decade.

Author and reviewer genders were assigned using an online database (genderize.io) that includes 200,000 names, each with an associated probability reflecting the gender linked to that name. The geographic locations of editors and reviewers were also identified based on their profiles. There are some clear limitations to this approach, particularly that Asian names had to be excluded. Still, 97% of names were present in the genderize.io database, and 94% of those names were associated with a single gender >90% of the time.
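For a sense of how this kind of name-based assignment works, here is a rough sketch of querying the genderize.io API and applying a confidence threshold like the one described above (the 0.9 cutoff and the example names are my own illustration, not the authors' actual pipeline):

import requests

def assign_gender(name, threshold=0.9):
    # Query genderize.io; return a gender only if the name is confidently
    # associated with a single gender, otherwise None (i.e., exclude it).
    resp = requests.get("https://api.genderize.io", params={"name": name})
    resp.raise_for_status()
    data = resp.json()  # e.g. {"name": "charles", "gender": "male", "probability": 0.99, ...}
    if data.get("gender") and data.get("probability", 0) >= threshold:
        return data["gender"]
    return None

for reviewer in ["Charles", "Jamie", "Kim"]:  # illustrative names only
    print(reviewer, assign_gender(reviewer))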

Many—even most—of Fox et al.’s findings are in line with what has already been shown regarding the causes and effects of gender gaps in academia. But they are interesting, nonetheless. Some of the gender gaps seem to be tied to age: senior editors were all male, and although females make up 43% of first authors on papers submitted to Functional Ecology, they are only 25% of senior authors.

Implicit biases in identifying reviewers are also fairly common: far fewer women were suggested than men, even when female authors or female editors were identifying reviewers. Female editors did invite more female reviewers than male editors did ("Male editors selected less than 25 percent female reviewers even in the year they selected the most women, but female editors consistently selected ~30–35 percent female"). Female authors also suggested slightly more female reviewers than male authors did.

Some of the statistics are great news: there was no effect of author gender or editor gender on how papers were handled and their chances of acceptance, for example. Further, the mean score given to a paper by male and female reviewers did not differ – reviewer gender isn’t affecting your paper’s chance of acceptance. And when the last or senior author on a paper is female, a greater proportion of all the authors on the paper are female too.

The most surprising statistic, to me, was that there was a small (2%) but consistent effect of handling editor gender on the likelihood that male reviewers would respond to review requests: they were less likely to respond, and less likely to agree to review, if the editor making the request was female.

That there are still observable effects of gender in peer review despite an increasing awareness of the issue should tell us that the effects of other forms of less-discussed bias are probably similar or greater. Fox et al. hint at this when they show how important the effect of geographic locale is on reviewer choice. Overwhelmingly editors over-selected reviewers from their own geographic locality. This is not surprising, since social and professional networks are geographically driven, but it can have the effect of making science more insular. Other sources of bias – race, country of origin, language – are more difficult to measure from this data, but hopefully the results from these papers are reminders that such biases can have measurable effects.

From Fox et al. 2016a. 

Tuesday, June 14, 2016

Rebuttal papers don’t work, or citation practices are flawed?

Brian McGill posted an interesting follow-up to Marc’s question about whether journals should allow post-publication review in the form of responses to published papers. I don’t know that I have any more clarity as to the answer after reading both (excellent) posts. Being idealistic, I think that when there are clear errors, they should be corrected, and that editors should be invested in identifying and correcting problems in papers in their journals. Based on the discussions I’ve had with co-authors about a response paper we’re working on, I’d also like to believe that rebuttals can produce useful conversations, and ultimately be illuminating for a field. But pragmatically, Brian McGill pointed out that rebuttals rarely seem to make an impact (citing Banobi et al. 2011). Often this was because flawed papers continued to be cited, and those citations "were either rather naive or the paper was being cited in a rather generic way".

Citations are possibly the most human part of writing scientific articles. Citations form a network of connections between research and ideas, and are the written record of progress in science. But they're also one of the clearest points at which biases, laziness, personal relationships (both friendships and feuds), taxonomic biases, and subfield myopia are apparent. So why don't we focus on improving citation practices? 

Ignoring more extreme problems (coercive citations, citation fraud, how to cite supplementary materials, data and software), as the literature grows more rapidly and the pressure to publish increases, we have to acknowledge that it is increasingly difficult to know the literature thoroughly enough to cite broadly. A couple of studies found that only 60-70% of citations could be scored as accurate (Todd et al. 2007; Teixeira et al. 2013). (Whether you see that as too low or pretty high depends on your personality.) Key problems were the tendency to cite 'lazily' (citing reviews or synthetic pieces rather than delving into the literature within them) or 'naively' (citing high-profile pieces in an offhand way without considering rebuttals and follow-ups, a key point of the Banobi et al. piece). At least one limited analysis (Drake et al. 2013) showed that citations tended to be much more accurate in higher-IF journals (>5), perhaps (speculating) due to better peer review or copy editing.

Todd et al (2007) suggest that journals institute random audits of citations to ensure authors take greater care. This may be a good idea that is difficult to institute in journals where peer reviewers are already in short supply. It may also be useful to have rebuttal papers considered as part of the total communication surrounding a paper - the full text would include them, they would be automatically downloaded in the PDF, there would be a tab (in addition to author information, supplementary material, references, etc) for responses. 

More generally - why don't we learn how to cite well as students? The vast majority of advice on citation practices that a quick Google search turns up concerns avoiding plagiarism and stylistic conventions. Some of it is philosophical, but I have never heard a deep discussion of questions like: 'What’s an appropriate number of citations for an idea?'; 'For a manuscript?'; 'How deep do I cite? (Do I need to go back to Darwin?)'. It would be great if there were a consensus advice publication on best practices in citation, of the sort the BES is so good at producing.

Which is to say, that I still hope that rebuttals can work and be valuable.

Friday, May 27, 2016

How to deal with poor science?

Publishing research articles is the bedrock of science. Knowledge advances through testing hypotheses, and the only way such advances are communicated to the broader community of scientists is by writing up the results in a report and sending it to a peer-reviewed journal. The assumption is that papers passing through this review filter report robust and solid science.

Of course this is not always the case. Many papers include questionable methodology and data, or are poorly analyzed. And a small minority actually fabricate or misrepresent data. As Retraction Watch often reminds us, we need to be vigilant against bad science creeping into the published literature.



Why should we care about bad science? Erroneous results or incorrect conclusions in scientific papers can lead other researchers astray and result in bad policy. Take for example the well-flogged Andrew Wakefield, a since discredited researcher who published a paper linking autism to vaccines. The paper is so flawed that it does not stand up to basic scrutiny and was rightly retracted (though how it could have passed through peer review is an astounding mystery). However, this incredibly bad science invigorated an anti-vaccine movement in Europe and North America that is responsible for the re-emergence of childhood diseases that should have been eradicated. This bad science is responsible for hundreds of deaths.

From Huffington Post 

Of course most bad science will not result in death. But bad articles waste time and money as researchers go down blind alleys or work to rebut flawed papers. The important thing is that there are avenues available to researchers to question and criticize published work. Nowadays this usually means that papers are criticized through two channels. The first is through blogs (and other social media): researchers can communicate their concerns and opinions about a paper to the audience that reads their blog or through social media shares. A classic example was the blog post by Rosie Redfield criticizing a paper published in Science that claimed to have discovered bacteria that used arsenic as a food source.

However, there are a few problems with this avenue. First is that it is not clear that the correct audience is being targeted. For example, if you normally blog about your cat, and your blog followers are fellow cat lovers, then a seemingly random post about a bad paper will likely fall on deaf ears. Secondly, the authors of the original paper may not see your critique and do not have a fair opportunity to rebut your claims. Finally, your criticism is not peer-reviewed and so flaws or misunderstandings in your writing are less likely to be caught.

Unlike the relatively new blog medium, the second option is as old as scientific publication – writing a commentary that is published in the same journal (and often with an opportunity for the authors of the original article to respond). These commentaries are usually reviewed and target the correct audience, namely the scientific community that reads the journal. However, some journals do not have a commentary section and so this avenue is not available to researchers.

Caroline and I experienced this recently when we enquired about the possibility of writing a commentary on a published article that contained flawed analyses. The Editor responded that they do not publish commentaries on their papers! I am an Editor-in-Chief, and I routinely deal with letters sent to me that criticize papers we publish. This is an important part of the scientific process. We investigate all claims of error or wrongdoing, and if the concerns appear valid but do not meet the threshold for a retraction, we invite the correspondents to write a commentary (and invite the original authors to write a response). This option is so critical to science that it cannot be overstated. Bad science needs to be criticized, and the broader community of scientists should feel that they have opportunities to check and critique publications.


I can see many reasons why a journal might not bother with commentaries (to save page space for articles, because they’re seen as petty squabbles, etc.), but I would argue that scientific journals have important responsibilities to the research community, and one of them must be to hold the papers they publish accountable and to allow sound and reasoned criticism of potentially flawed papers.

Looking over the author guidelines of the 40 main ecology and evolution journals (apologies if I missed statements - author guidelines can be very verbose), only 24 had a clear statement about publishing commentaries on previously published papers. While they all had different names for these commentary-type articles, they all clearly spelled out a set of guidelines for publishing a critique of an article and how they handle it. I call these 'Group A' journals. The Group A journals hold peer critique after publication as an important part of their publishing philosophy and should be seen as having a higher ethical standard.



Next are the 'Group B' journals. These five journals had unclear statements about publishing commentaries of previously published papers, but they appeared to have article types that could be used for commentary and critique. It could very well be that these journals do welcome critiques of papers, but they need to clearly state this.


The final class, 'Group C' journals, did not have any clear statements about welcoming commentaries or critiques. These 11 journals might accept critiques, but they did not say so. Further, there was no indication of an article type that would allow commentary on previously published material. If these journals do not allow commentary, I would argue that they should re-evaluate their publishing philosophy. A journal that did away with peer review would rightly be ostracized and seen as not a fully scientific journal, and I believe that post-publication criticism is just as essential as peer review.


I highlight the differences between journals not to shame specific journals, but rather to highlight that we need a set of universal standards to guide all of them. Most journals now adhere to a set of standards for data accessibility and competing-interest statements, and I think they should also feel pressure to adopt a standardized set of protocols for dealing with post-publication criticism.

Friday, March 4, 2016

Pulling a fast one: getting unscientific nonsense into scientific journals. (or, how PLOS ONE f*#ked up)

The basis of all of science is that we can explain the natural world through observation and experiments. Unanswered questions and unsolved riddles are what drive scientists, and with every observation and hypothesis test, we are that much closer to understanding the universe. However, looking to supernatural causes for Earthly patterns is not science and has no place in scientific inquiry. If we relegate knowledge to divine intervention, then we fundamentally lose the ability to explain phenomena and provide solutions to real world problems.

Publishing in science is about leaping over numerous hurdles. You must satisfy the demands of reviewers and Editors, who usually require that methodologies and inferences satisfy strict and ever-evolving criteria - science should be advancing. But sometimes people are able to 'game the system' and get junk science into scientific journals. Usually this happens through improper use of the peer review system or by inventing data, but papers do not normally get into journals while concluding that simple patterns conform to divine intervention.

Such is the case with a recent paper published in the journal PLOS ONE. It is a fairly pedestrian paper about human hand anatomy, and the authors conclude that anatomical structures provide evidence of a Creator. They conclude that since other primates show a slight difference in tendon connections, a Creator must be responsible for the human hand (or at least for the slight, minor modification from earlier shared ancestors). Obviously this is lazy science and an embarrassment to anyone who works as an honest scientist. But more importantly, it calls into question not only the Editor who handled this paper (Renzhi Han, Ohio State University Medical Center), but also PLOS ONE's publishing model. PLOS ONE handles thousands of papers and requires authors to pay the costs of publishing. This may just be an aberration, a freak one-off, but the implications of this seismic f$@k up should cause the Editors of PLOS to re-evaluate their publishing model.

Friday, November 6, 2015

Science in China –feeding the juggernaut*

For those of us involved in scientific research, especially those that edit journals, review manuscripts or read published papers, it is obvious that there has been a fundamental transformation in the scientific output coming from China. Both the number and quality of papers have drastically increased over the past 5-10 years. China is poised to become a global leader in not only scientific output, but also in the ideas, hypotheses and theories that shape modern scientific investigation.

I have been living in China for a couple of months now (and will be here for 7 months more), working in a laboratory at Sun Yat-sen University in Guangzhou, and I have been trying to identify the reasons for this shift in scientific culture in China. I see evidence that China will soon be a science juggernaut (or already is), and there are clear reasons why. Here are some of the reasons I believe China has become a science leader; they hold lessons for other national systems.

The reasons for China’s science success:

1.      University culture.

China is a country with a long history of scholarly endeavours. We can look to the philosophical traditions of Confucius 2500 years ago as a prime example of the respect and admiration for scholarly traditions. Though modern universities are younger in China than elsewhere (the oldest being about 130 years old), China has invested heavily in building universities throughout the country. In the mid-1990s, the government built 100 new universities, and China now graduates more than 6 million students from undergraduate programs every year.
Confucius (551-479 BC), the grand-pappy of all Chinese scholars

This rapid increase in the number of universities means that many are very modern, with state-of-the-art facilities. This availability of infrastructure has fostered the growth of new colleges, institutes and departments, meaning that new faculty and staff have been hired. Many departments that I have visited have large numbers of younger Assistant and Associate Professors, many trained elsewhere, who approach scientific problems with energy and new ideas.
My new digs


2.      Funding

From my conversations with various scientists, labs here are typically very well funded. With the expansion in the number of universities there seems to have been an expansion in the funds available for research projects. Professors need to write a fair number of grant proposals to have all of their projects funded, but success rates seem relatively high, with larger grants available to more senior researchers. This is in stark contrast to other countries, where funding is inadequate. In the USA, National Science Foundation funding rates are often below 10% (only 1 in 10 proposals is funded). This abysmal funding rate means that good, well-trained researchers either cannot realize their ideas or spend too much of their time applying for funding. In China, new researchers are given opportunities to succeed.


3.      Collaboration

Chinese researchers are very collaborative. There are several national level ecological research networks (e.g., dynamic forest plots) that involve researchers from many institutions, as well as international collaborative projects (e.g., BEF China). In my visits to different universities, Chinese researchers are very eager to discuss shared research interests and explore the potential for collaboration. Further, there are a number of funding schemes to get students, postdocs and junior Professors out of China and into foreign labs, which promotes international collaboration. Collaborations provide the creative capital for new ideas, and allow for larger, more expansive research projects.

4.      Environmental problems

It is safe to say that the environment in China has been greatly impacted by economic growth and development over the past 30 years. This degradation of the environment has made ecological science extremely relevant to the management of natural resources and dealing with contaminated soil, air and water. Ecological research appears to have a relatively high profile in China and is well supported by government funding and agencies.

5.      Laboratory culture

In my lab in Canada, I give my students a great deal of freedom to pursue their own ideas and allow them much latitude in how they do it. Some students say that they work best at night, others in spurts, and some just like to have four-day weekends every week. While Chinese students seem equally able to pursue their own ideas and interests, they tend to face stricter requirements about how they do their work. Students are often expected to be in the lab from 9-5 (at least), often six days a week. This expectation is not seen as demanding or unreasonable (as it probably would be in the US or Canada), but rather as in line with general expectations for success (see next point).

Labs are larger in China. The lab I work in has about 25 Masters students and a further 6 PhD students, plus postdocs and technicians. Further, labs typically have a head professor and several Assistant or Associate Professors. When everyone is there every day, a vibe and culture emerges that is not possible if everyone is off doing their own thing.

The lab I'm working in -"the intellectual factory"

Another major difference is that there is a clear hierarchy of respect. Masters students are expected to respect and listen to PhD students, PhD students respect postdocs and so on up to the head professor. This respect is fundamental to interactions among people. As it has been described to me, the Professor is not like your friend, but more like a father that you should listen to.

What’s clear is that lab culture and expectations are built around the success of the individual people and the overall lab. And success is very important –see next point.


6.      Researcher/student expectations

I left the expectations on researchers for last because this needs a longer and more nuanced discussion. My own view of strict expectations has changed since coming to China, and I can now see the motivating effect these can have.

For Chinese researchers it is safe to say that publications are gold. Publishing papers, and especially the type of journal those papers appear in, determines career success in a direct way. A Masters student is required to publish one paper, which can be in a local Chinese journal. A PhD student is required to publish two papers in international journals. PhD students who receive a 2-year fellowship to travel to foreign labs are required to publish a paper from that work as well. For researchers to get a professor position, they must have a certain number of publications in high-impact international journals (e.g., Impact Factor above 5).

Professors are not immune from these types of expectations. Junior professors are not tenured, nor can they get tenure until they qualify for the next tier, so they need to publish constantly. To get a permanent position as a full professor or group leader, they need a certain number of high-impact papers. For funding applications, their publication records are quantified (number of papers and impact factors of journals) and must surpass some threshold.

Of course, in any country your publication record is the most important component of your success as a researcher, but in China the expectations are clearly stated.

While there are pros and cons to such a reward-based system, and certainly the pressure can be overwhelming, I’ve witnessed the results of this system. Students are extremely motivated and have a clear idea of what it means to be successful. To get two publications in a four-year PhD requires a lot of focus and hard work; there is no time for drifting or procrastinating.

So why has Chinese science been so successful? Because a number of factors have coalesced around, and support, a generally high demand for success. Regardless of the institutional and funding resources available, this success is only truly realized because of researchers' desire to exceed strict expectations. And they are doing so wonderfully.

*over the next several months I will write a series of posts on science and the environment in China

Thursday, May 28, 2015

Are scientists boring writers?


I was talking with an undergrad who is doing her honours project with me about the papers she’s reading, and she mentioned how difficult (or at least slow going) she’s found some of them. The papers are mostly reviews or straightforward experimental studies, but I remember feeling the same way as an undergraduate. Academic science writing uses its own language, and until you are familiar with the terms, phrases and article structure, it can be hard going. Some areas, for example theoretical papers, even have their own particular dialects (you don’t see the phrase “mean-field approximation” in widespread usage, for example). Grad school has the advantage of providing total immersion in the language, but for many students, lots of time, guidance and patience are necessary to understand the primary literature. But does scientific language have to be boring?

A recent blog piece argues that academic science writing needs to fundamentally change because it is boring, repetitive, and uninspired, and that as a result the scientific paper needs to evolve. The post quotes a biologist at the University of Amsterdam, Filipe Branco dos Santos: he feels that the problem is rooted in the conservative nature of scientists, which leads them to replicate the same article structure over and over again. Journals act as gatekeepers of article style too – submission requirements enforce the inclusion of particular sections (Introduction, Methods, Results, Discussion, etc.) and determine everything from word counts, figure numbers and text size to title structure and length. Reviewers and editors are within their rights to require stylistic changes. The piece includes a few tips for better article writing: choose interesting titles, write in the active voice, use short sentences, avoid jargon, include a lay summary. It’s difficult to disagree with those points, but unfortunately the article makes no attempt to suggest what, precisely, we should be doing differently. Still, it suggests that consideration of the past, present and future of scientific writing is necessary.

One glaring issue with the post is that the argument that scientists are stuck in a pattern established hundreds of years ago ignores just how much science papers have changed stylistically. Scientific papers are a very old phenomenon – the oldest journal, Philosophical Transactions of the Royal Society, was first published in 1665. The early papers were not formatted in the introduction / methods / results / discussion style of today, and were often excerpts from letters or reports.

From the first issue, “Of the New American Whale-fishing about the Bermudas” begins:

“Here follows a relation, somewhat more divertising, than the precedent Accounts; which is about the new Whale-fishing in the West Indies about the Bermudas, as it was delivered by an understanding and hardy Sea-man, who affirmed he had been at the killing work himself.”

Ecological papers written in the early 1900s are also strikingly different in style than those today. Sentences are long and complex, words like “heretofore”, “therefore”, and “thus” find frequent usage, and the language is rather flowery and descriptive.

From a paper in the Botanical Gazette in 1913, the first sentence:

“Plant geographers and climatologists have long been convinced that temperature is one of the most important conditions governing the distribution of plants and animals, but very little has as yet been accomplished toward finding out what sort of quantitative relationships may exist between the nature of floral and faunal associations and the temperature conditions that are geographically concomitant therewith.”

While this opening makes perfect sense and establishes the question to be dealt with in the paper, it probably wouldn’t make it past review without comment.

Some of my favourite examples that highlight how much ecological papers have changed come from R.H. Whittaker’s papers. He is clearly an avid (and verbose) naturalist and his papers are peppered with evocative phrases. For example, “If, for example, one stands on a viewpoint in the Southern Appalachian Mountains in the autumn, one sees a complex varicoloured mantle of vegetation covering the mountain topography” and “The student of vegetation seeks to construct systems of abstraction by which relationships in this mantle of vegetation may be comprehended.” Indeed!

Today, in contrast, academic science writing is minimalist – it is direct and focused, and clarity is prized. Sentences are typically shorter, with a single focal thought, and the aim is a clear narrative without the peripatetic asides common in older work. These shifts in style reflect the prevailing thoughts about how to balance the role of scientific papers as a communication device versus as a contribution to the scientific record. It seems that science papers may be boring now because authors and editors would rather a paper be a little dry than unclear or difficult to replicate. (Of course, some papers manage to be both boring and confusing, so this is not always successful....) Modern papers have a lot of modern bells and whistles too. The move away from physical copies of papers to pdfs, online-only colour versions and supplementary information has made sharing results easier and more comprehensive than ever.

If there is going to be a revolution in academic science writing, it will probably be tied to the ongoing technological changes in science and publishing. The technology is certainly already present to make science more interactive for the reader, which might make it less boring. It is already possible to include videos or gifs in online supplements (a great example being this puppet show explaining Diversitree). More seriously, supplements can include data, computer code used for analyses or simulations, and additional results. It’s possible to integrate GitHub repositories with a paper’s analyses, or to link markdown scripts used to produce the manuscript. The one limitation of these approaches is that they aren’t included in the main text, and so most people never see them. It’s only a matter of time before we move towards a paper format that includes embedded elements (extending current online versions that include links to referenced papers). One could imagine plots that can be manipulated, or interactive maps that let you explore a study site through satellite images of its vegetation and terrain.

Increasingly interactive papers might make it more fun to work through a paper, but a paper must stand alone without them. For me, the key to a well-written paper is that there is always a narrative or purpose to the writing. Papers should establish a focus and ensure connections between thoughts and paragraphs are always obvious to the reader. The goal is to never lose the reader in the details, because the bigger picture narrative can be read between the lines. That said, I rarely remember if a paper is boringly written: I remember the quality of the ideas and the science. I would always take a paper with interesting ideas and average writing over a stylish paper with no substance. So perhaps academic science writing is an acquired (or learned) taste, and certainly that taste could be improved, but it's clear that science writing is constantly evolving and will continue to do so.

Wednesday, May 20, 2015

I'll take 'things that have nothing to do with my research' for $400


I guess I do have a couple papers with the word fire in their titles?
And to Burns and Trauma's credit, this is a nicely formatted email and the reasons to publish with them are pretty convincing :-)

Monday, February 9, 2015

Can an algorithm tell you where to submit your next paper?


Choosing where to submit a manuscript is a difficult proposition. For a strong manuscript, you might hope for a high-profile, high-impact journal, but you also have to weigh the hope (or need) for rapid publication, preferably without too many cycles of rejection, revision, and resubmission.

Maybe you saw this around Twitter, but a little over a week ago one paper offered a solution to the "where to publish?" puzzle. Published in PLoSONE (the journal chosen using their algorithm), authors Salinas and Munch provided one answer in "Optimizing the Submission Decision Process".

Surveys usually show that journal impact factor is the highest priority for authors, and it is a typical measure of paper success. Recognizing the importance of citations--and of journal impact factors as an indirect predictor of them--the authors use a Markov decision process to determine the submission sequence that maximizes total citations. The model is a race against time, where delays reduce total citations and, worse, increase the probability that a paper will be scooped (and therefore have minimal citation value). They also considered a more complex model that maximizes citations while minimizing delays due to rejection, resubmission and revisions.
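To make the flavour of that trade-off concrete, here is a toy calculation of expected citations for a submission sequence, given each journal's acceptance probability, decision time, and expected citations if accepted, with a small monthly risk of being scooped. The journal names and numbers are invented, and this is only a back-of-the-envelope sketch, not Salinas and Munch's actual Markov decision process:

# Toy expected-citation calculation for a submission sequence.
# All journal names and values are invented for illustration.
journals = {
    # name: (acceptance probability, months to decision, expected citations if accepted)
    "FancyJournal": (0.10, 4, 60),
    "SolidJournal": (0.40, 3, 25),
    "MegaJournal":  (0.70, 2, 12),
}

def expected_citations(sequence, monthly_scoop_risk=0.01):
    # Submit down the list until acceptance, discounting for the chance
    # of being scooped while the paper sits in review.
    total, p_unpublished, months_elapsed = 0.0, 1.0, 0
    for name in sequence:
        p_accept, months, cites = journals[name]
        months_elapsed += months
        p_not_scooped = (1 - monthly_scoop_risk) ** months_elapsed
        total += p_unpublished * p_accept * p_not_scooped * cites
        p_unpublished *= (1 - p_accept)
    return total

print(expected_citations(["FancyJournal", "SolidJournal", "MegaJournal"]))  # ~18 citations
print(expected_citations(["MegaJournal"]))                                  # ~8 citations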

The top choice of journals for the first model (maximize citations), were Ecology Letters, Science, and Molecular Ecology Resources. For the second model, the top journals to balance citations and time loss, were Molecular Ecology Resources, PLoSONE, and Ecology Letters.

Finally, if you know how willing you are to trade off the number of times you submit your paper before acceptance against the number of citations it will receive, you can choose between several strategies. One option is a path that involves submission to a high-impact journal (Ecology Letters in particular, possibly Ecological Monographs), accepting that you may need to resubmit your paper several times but will gain high citations. Alternatively, you could choose a journal such as PLoSONE, where resubmission is unlikely and the citation rate is moderate. Finally, many specialized journals may be faster, but provide relatively few citations (Fig 3 below).
From Salinas and Munch 2015.
So what the authors get right is that choosing where to submit is a difficult task. Choosing journals is a skill that a scientist hones over a career. Graduate students have the hardest time, I think, not having experience with the underlying complexities (e.g. this journal is slow, this journal prefers experimental work rather than simulation approaches, Science will probably reject you, but at least it will be very fast...). Students usually have to rely on supervisors and more experienced collaborators precisely because they lack informed priors. That being said, the approach from this paper strikes me as a silly (and just bad) way of choosing journals.

The biggest reason is that even though everyone chooses "impact factor" as their primary criterion for choosing a journal in a survey, in practice impact factor is innately balanced against manuscript quality. Sure, there's the odd soul who always starts at Science and works their way down, but most researchers have a reasonably unbiased view of their manuscript's quality, and journal choice is conditioned on that estimate. (More commentary on this from Marcel Holyoak and others, here). So it's really about maximizing citations given the quality of a particular piece of work, and that implies authors must have knowledge of the journals in their field, not a simplistic algorithm.

Scooping doesn't strike me as the biggest concern for most ecologists either. There is a cost of declining novelty, perhaps, but it would be a rare ecological paper that lost all citation value because something similar had been published slightly earlier. (Or so I think. Is scooping a big issue in ecology?) 

Additionally, citations simply aren't the only concern for researchers, especially early-career people. The quality of the journals you publish in has important implications. Sending all of your papers to PLoSONE to reduce the time to publication while maximizing citations, while apparently a viable strategy, won't do a lot for a career application (not to pick on PLoSONE, which I think has an important role, but it isn't usually the first-choice journal for ecological research). Publishing in prestigious journals is usually considered an indicator of research quality.

Journal choice will probably always be a subjective, imperfect behaviour. Even if a more complicated algorithm could be constructed, there are too many subjective inputs--paper quality, subject importance and novelty, journal quality--for the choice to be made so simplistically.

Tuesday, February 3, 2015

Predatory open access journals: still keep'n it classy

As most academics are aware, there are hundreds of predatory open access journals that try to trick authors into submitting, charge exorbitant fees, and do not ensure that articles are peer reviewed or live up to basic scientific standards. The most celebrated cases are journals that embarrassingly publish nonsensical fake papers. I don't know why, but I sometimes go to these journals' websites to see what they publish or who is on their editorial boards. I received such an e-mail this morning from SOJ Genetic Science, published by Symbiosis, a recognized predatory publisher. This journal, unlike others, actually has a single published issue with an editorial! I thought: "wow, are they trying to be legitimate?"; then I read the editorial. It is probably best described as a nonsensical diatribe about genetics, lacking any real connection to modern genetic theory. Here is my favourite paragraph:



Predatory open access journals: still keep'n it classy.