Showing posts with label publishing.

Friday, May 9, 2014

Scaling the publication obstacle: the graduate student’s Achilles’ heel

There is no doubt that graduate school can be extremely stressful and overwhelming. Increasingly, evidence points to these grad school stressors contributing to mental health problems (articles here and here). Many aspects of grad school contribute to self-doubt and unrelenting stress: Is there a job for me afterwards? Am I as smart as everyone else? Is what I’m doing even interesting?

But what seems to really exacerbate grad school stress is the prospect of trying to publish*. The importance of publishing can’t be dismissed. To be a scientist, you need to publish. There are differing opinions about what makes a scientist (e.g., is it knowledge, job title, etc.), but it is clear that if you are not publishing, then you are not contributing to science. This is what grad students hear, and it is easy to see how statements like this do not help with the pressure of grad school.

There are other aspects of the grad school experience that are important, like teaching, taking courses, outreach activities, and serving on University committees or in leadership positions. These other aspects can be rewarding because they expand the grad school experience. There is also the sense that they are under your control and that the rewards are more directly influenced by your efforts. Here, then, publishing is different: the publication process does not feel like it is under your control, and the rewards are not necessarily commensurate with your efforts.

Cartoon by Nick Kim, Massey University, Wellington, accessed here

Given the publishing necessity, how then can grad students approach it with as little trauma as possible? The publication process is experienced differently by different people: some seem able to shrug off negative experiences, while others internalize them and let them gnaw away at their confidence. There is no magic solution to making the publishing experience better, but here are some suggestions and reassurances.

1) It will never be perfect! I find myself often telling students to just submit already. There is a tendency to hold on to a manuscript and read and re-read it. Part of this is the anxiety of actually submitting it, and procrastination is a result of that anxiety. But often students say that it doesn’t feel ready, or that they are unhappy with part of the discussion, or that it is not yet perfect. Don’t ever convince yourself that you will make it perfect – you are setting yourself up for a major disappointment. Referees ALWAYS criticize, even when they say a paper is good. There is always room for improvement, and you should view review as part of the process that improves papers. If you think of it this way, then criticisms feel less personal (i.e., why didn’t they think it was perfect too?) and more constructive, and you are at peace with submitting something that is less than perfect.

2) Let's dwell on part of the first point: reviewers ALWAYS criticize. It is part of their job. It is not personal. Remember, the reviewers are putting time and effort into your paper, and their comments should be used to make the product better. Reviewers are very honest and will tell you exactly what could be done to improve a manuscript. They are not attacking you personally, but rather assessing the manuscript. 

3) Building on point 2, the reviewers may not always be correct or provide the best advice. It is OK to state why you disagree with them. You should always appreciate their efforts (unless they are unprofessional), but you don’t have to always agree with them.

4) Not every paper is a literary masterpiece. Effective scientific communication is sometimes best served by very concise and precise papers. If you have an uncomplicated, relatively simple experiment, don’t make it more complex by writing 20 pages. Notes, Brevia, and Forum papers are all legitimate contributions.

5) Not every paper should be a Science or Nature paper (or whatever the top journals are in a given subdiscipline). Confirmatory or localized studies are helpful and necessary. Large meta-analyses and reviews are not possible without published evidence. Students should try to think about how their work is novel or broadly general (this is important for selling yourself later on), but it is OK to acknowledge that your paper is limited in scope or context, and to send it to the appropriate journal. It takes practice to fit papers to the best journals, so ask colleagues where they would send yours. This journal matching can save time and trauma.

6) And here is the important one: rejection is OK, natural, and normal. We all get rejections – all of us. Your rejection is not abnormal, you don’t suck more than others, and your experience has been shared by all the best scientists. When your paper is reviewed and then rejected, there is usually information in the reviews that will help you revise the work before submitting it elsewhere. Many journals are inundated with papers and are looking for reasons to reject. In the journal I edit, we accept only about 18% of submissions, so it doesn’t take much to reject a paper. This is unfortunate, but currently unavoidable (though with the changing publishing landscape, this norm may change). Rejection is hard, but don’t take it personally, and feel free to express your rage to your friends.



Publishing is a tricky, but necessary, business for scientists. When you are having problems with publishing, don’t internalize them. Instead, complain about them to your friends and colleagues. They will undoubtedly have had very similar experiences. Students can be hesitant to share rejections with other students because they feel inferior, but sharing can be therapeutic. When I was a postdoc at NCEAS, the postdocs would share quotes from their worst rejection letters. What would normally have been a difficult, confidence-bashing experience became a supportive, reassuring one.

Publishing is necessary, but it is also very stressful, and it can feed low confidence and the feeling that grad school is overwhelming. I hope that the pointers above can help make the experience less onerous. When you do get that acceptance letter telling you that your paper will be published, hang on to it. Celebrate, know that you have been rewarded for your hard work, and move on from the rejections.


*I should state that my perspective is from science, and my views on publishing are very much informed by the publishing culture in science. I have no way of knowing whether the pressures in the humanities or economics are the same as those facing science students.

Tuesday, February 18, 2014

P-values, the statistic that we love to hate

P-values are an integral part of most scientific analyses, papers, and journals, and yet they come with a hefty list of concerns and criticisms from frequentists and Bayesians alike. An editorial in Nature (by Regina Nuzzo) last week provides a good reminder of some of the more concerning issues with the p-value. In particular, she explores how the obsession with "significance" creates issues with reproducibility and significant but biologically meaningless results.

Ronald Fisher, inventor of the p-value, never intended it to be used as a definitive test of “importance” (however you interpret that word). Instead, it was an informal barometer of whether a test hypothesis was worthy of continued interest and testing. Today though, p-values are often used as the final word on whether a relationship is meaningful or important, on whether the test or experimental hypothesis has any merit, even on whether the data are publishable. For example, in ecology, significance values from a regression or species distribution model are often presented as the results.

This small but troubling shift away from the original purpose of p-values is tied to concerns about false alarms and the replicability of results. One recent suggestion for increasing replicability is to make p-values more stringent – to require that they be less than 0.005. But the point the author makes is that although p-values are typically interpreted as “the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true”, this doesn't actually mean that a p-value of 0.01 in one study is exactly consistent with a p-value of 0.01 found in another study. P-values are not consistent or comparable across studies, because the likelihood that there was a real (experimental) effect to start with alters the likelihood that a low p-value is just a false alarm (figure). The more unlikely the test hypothesis, the more likely a p-value of 0.05 is a false alarm. Data mining in particular will be (unwittingly) sensitive to this kind of problem. Of course, one is unlikely to know the odds of the test hypothesis, especially a priori, making it even more difficult to think correctly about and use p-values.
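To make that point concrete, here is a minimal sketch in Python of the false-alarm calculation, using assumed values (alpha = 0.05, power = 0.8, and a few illustrative prior probabilities) rather than anything taken from the Nature piece:

```python
# Sketch: how often a "p < 0.05" result is a false alarm, given the prior
# plausibility of the test hypothesis. Alpha and power are assumptions chosen
# for illustration (alpha = 0.05, power = 0.8), not values from the editorial.

def false_alarm_probability(prior_true, alpha=0.05, power=0.8):
    """P(hypothesis is false | result is significant), by Bayes' rule."""
    true_positives = power * prior_true          # real effects that get detected
    false_positives = alpha * (1 - prior_true)   # null effects that slip through
    return false_positives / (true_positives + false_positives)

for prior in (0.5, 0.25, 0.05):  # toss-up, long-shot, and very unlikely hypotheses
    print(f"prior P(true) = {prior:.2f} -> "
          f"P(false alarm | significant) = {false_alarm_probability(prior):.2f}")
```

With these assumptions, a toss-up hypothesis gives a false-alarm rate of about 6% among significant results, but for the very unlikely hypothesis more than half of the significant results are false alarms.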

from: http://www.nature.com/news/scientific-method-statistical-errors-1.14700#/b5
The other oft-repeated criticism of p-values is that a highly significant p-value may still be associated with a tiny (and thus possibly meaningless) effect size. The obsession with p-values is particularly strange then, given that the question “how large is the effect?” should be more important than “is it significant?”. Ignoring effect sizes leads to a trend of studies reporting highly significant results with arguably meaningless effect sizes. This creates the odd situation that publishing well requires high-profile, novel, and strong results – but one of the major tools for identifying these results is flawed. The editorial lists a few suggestions for moving away from the p-value – including having journals require that effect sizes and confidence intervals be included in published papers, requiring statements to the effect of “We report how we determined our sample size, all data exclusions (if any), all manipulations and all measures in the study” in order to limit data-mining, or of course moving to a Bayesian framework, where p-values are near heresy. The best advice though, is quoted from statistician Steven Goodman: “The numbers are where the scientific discussion should start, not end.”
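As a quick, hypothetical illustration of the effect-size point (the numbers below are invented, and a plain t-test stands in for whatever analysis a real study would use): with a large enough sample, a biologically trivial difference produces a very small p-value.

```python
# Sketch: a trivially small effect becomes "highly significant" with a big
# enough sample. All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000
control = rng.normal(loc=10.00, scale=2.0, size=n)
treatment = rng.normal(loc=10.05, scale=2.0, size=n)  # a 0.5% shift in the mean

result = stats.ttest_ind(treatment, control)
effect = treatment.mean() - control.mean()
print(f"difference in means = {effect:.3f} (Cohen's d ~ {effect / 2.0:.3f})")
print(f"p-value = {result.pvalue:.3g}")  # tiny p-value despite a negligible effect
```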

Monday, October 21, 2013

Is ecology really failing at theory?


“Ecology is awash with theory, but everywhere the literature is bereft”. That is Sam Scheiner's provocative start to his editorial about what he sees as a major threat to modern ecology. The crux of his argument is simple – theory is incredibly important, it allows us to understand, to predict, to apply, to generalize. Ecology began as a study rooted in system-specific knowledge or natural history in the early 1900s, and developed into a theory-dominated field in the 1960s, when many great theoreticians came to the forefront of ecology. But today, he fears that theory is dwindling in importance in ecology. To test this, he provides a small survey of ecological and evolutionary journals for comparison (Ecology Letters, Oikos, Ecology, AmNat, Evolution, Journal of Evolutionary Biology), recording papers from each journal as either containing no theory, being ‘theory motivated’, or containing theory (either tests of, development of, or reviews of theory). The results showed that papers in ecological journals on average include theory only 60% of the time, compared to 80% for evolutionary papers. Worse, ecological papers seem to be more likely to develop theory than to test it. Scheiner’s editorial (as the title makes clear) is an indictment of this shortcoming of modern ecology.

Plots made based on the data table in Scheiner 2013. Results combined for all evolution and all ecology papers: the proportion of papers in each category, where categories starting with "Theory" refer to theory-containing papers.
Plots made based on the data table in Scheiner 2013. Results for papers from individual journals: the proportion of papers of each type, where categories starting with "Theory" refer to theory-containing papers.
This is not the kind of conclusion that I find myself arguing against too often. And I mostly agree with Scheiner: theory is the basis of good science, and ecology has suffered from a lack of theoretical motivation for work, or from pseudo-theoretical motivation (e.g. productivity-diversity and intermediate diversity patterns that may lack an explanatory mechanism). But I think the methods and interpretation, and perhaps some lack of recognition of differences between ecological and evolutionary research, make the conclusions a little difficult to embrace fully. There are three reasons for this – first, is this brief literature review a good measure of how and why we use theory as ecologists? Second, do counts of papers with or without theory really scale into impact or harm? And third, is it fair to directly compare ecological and evolutionary literature, or are there differences in the scope, motivations, and approaches of these fields?

If we are being truly scientific, this might be a good time to point out that the 95% confidence intervals for the percentage of ecology papers with theory overlap with the confidence intervals for the percentage of evolutionary papers with theory, suggesting the difference that is the crux of the paper is not significant. [Thanks to a commenter for pointing out this difference is likely significant.] While significant at the 5% level, the amount of overlap is enough that whether this difference is meaningful is less clear. (I would accept an argument that this is due to small sample sizes, though.) The results also show that choice of journal makes a big difference in terms of the kinds of papers found within – Ecology Letters and AmNat had more theoretical or theory-motivated papers, while Oikos had more tests of theory and Ecology had more case studies. This sort of unspoken division of labour between journals means that the amount of theory varies greatly. And most ecologists recognize this – if I write a theory paper, it will undoubtedly be targeted to a journal that commonly publishes theory papers. So to more fully represent ecology, a wider variety of journals and more papers would be helpful. Still, Scheiner's counterargument would likely be that even non-theory papers (case studies, etc.) should include more theory.
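For readers who want to see why overlapping 95% confidence intervals do not automatically rule out a significant difference, here is a minimal sketch. Only the 60% and 80% figures come from the post; the per-field sample size of 60 papers is a hypothetical stand-in for Scheiner's actual counts.

```python
# Sketch: overlapping 95% CIs for two proportions can coexist with a significant
# two-sample test. The 60% vs 80% figures come from the post; the sample sizes
# are hypothetical stand-ins for Scheiner's actual counts.
from math import sqrt
from statistics import NormalDist

def wald_ci(p, n, z=1.96):
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def two_proportion_p_value(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

eco_p, eco_n = 0.60, 60   # ecology: proportion of papers containing theory
evo_p, evo_n = 0.80, 60   # evolution: proportion of papers containing theory

print("ecology 95%% CI:   %.2f-%.2f" % wald_ci(eco_p, eco_n))
print("evolution 95%% CI: %.2f-%.2f" % wald_ci(evo_p, evo_n))
print("two-proportion test p = %.3f" % two_proportion_p_value(eco_p, eco_n, evo_p, evo_n))
```

With these assumed counts, the two intervals overlap (roughly 0.48-0.72 versus 0.70-0.90) while the two-proportion test still returns p of about 0.017.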

It may be that the proportion of papers that include theory is not a good measure of theory’s importance or inclusion in ecology in general. For example, Scheiner states, “All observations presuppose a theoretical context...the simple act of counting individuals and assessing species diversity relies on the concepts of ‘individual’ and ‘species,’ both of which are complex ideas”. While absolutely true, does this suggest that any paper with a survey of species’ distributions needs to reference theory related to species concepts? What is the difference between acknowledging theory with a citation and discussing it at length? In neither case is the paper “bereft” of theory, but it is not clear from the methods how this difference was dealt with. As well, I think that the ecological literature contains far more papers about applied topics, natural history, and system-specific reports than evolutionary biology does. Applied literature is an important output of ecology and, as Scheiner states, builds undeniably on years of theoretical background. On the other hand, a paper using gap analysis to assess the efficacy of existing reserves in protecting diversity is important and yet may not have a clear role for a theoretical section (though it will no doubt cite some theoretical and methodological studies). Does this make it somehow of less value to ecology than a paper explicitly testing theory? In addition, case reports and data *are* a necessary part of the theoretical process, since they provide the raw observations on which to build or refine theory. In many ways, Scheiner's editorial is a continuation of the ongoing tension between theory and empiricism that ecology has always faced.

The point I did agree strongly with is that ecology is prone to missing the step between theory development and data collection, i.e. theory testing. Far too few papers test existing theories before the theoreticians have moved on to some new theory. The balance between data collection, theory development, and theory testing is probably more important than the absolute number of papers devoted to one or the other.

Scheiner’s conclusion, though, is eloquent and easy to support, no matter how you feel about his other conclusions: “My challenge to you is to examine the ecological literature with a critical eye towards theory engagement, especially if you are a grant or manuscript reviewer. Be sure to be explicit about the theoretical underpinnings of your study in your next paper…Strengthening the ecological literature by engaging with theory depends on you.”

Monday, September 30, 2013

Struggling (and sometimes cheating) to catch up

Scientific research is maturing in a number of developing nations, which are trying to join North American and European nations as recognized centres of research. As recent stories show, the pressure to fulfill this vision--and to publish in English-language, international journals--has led to some large-scale schemes to commit academic fraud, in addition to cases of run-of-the-mill academic dishonesty.

In China, a widely-discussed incident involved criminals with a sideline in the production of fake journal articles, and even fake versions of existing medical journals, in which authors could buy slots for their articles. China has been criticized for widespread academic problems for some time; for example, in 2010 the New York Times published a report suggesting academic fraud (plagiarism, falsification or fabrication) was rampant in China and would hold the country back in its goal to become an important scientific contributor. In the other recent incident, four Brazilian medical journals were caught “citation stacking”, where each journal cited the other three excessively, thus avoiding notice for obvious journal self-citation while still increasing their impact factors. These four journals were among 14 that had their impact factors suspended for a year; other possible incidents were flagged but could not be proven, involving an Italian, a Chinese, and a Swiss journal.

There are some important facts that might provide context to these outbreaks of cheating. Both Brazil and China are nations where, to be a successful scientist in the national system, you need to prove that you are capable of success on the world stage. This is probably a tall order in countries where scientific research has not traditionally had an international profile and most researchers do not speak English as their first language. In particular, it leads to a focus on measures of success that are comparable across the globe, such as journal impact factors. In China, there is great pressure to publish in journals included on the Science Citation Index (SCI), a list of leading international journals. When researcher, department, and university success is quantified with impact factors and SCI publications, it becomes a numbers game, a GDP of research. Further, bonuses for publications in high-caliber journals can double a poorly-paid researcher’s salary: a 2004 survey found that for nearly half of Chinese researchers, performance-based pay made up more than 50 percent of their income. In Brazil, the government similarly emphasizes publications in Western journals as evidence of researcher quality.

It’s easy to dismiss these problems as specific to China or Brazil, and there are some aspects of the issue that are naturally country-specific. On the other hand, if you peruse Ivan Oransky’s Retraction Watch website, you’ll notice that academic dishonesty leading to article retraction is hardly restricted to researchers from developing countries. At the moment, the leading four countries in retractions due to fraud are the US, Germany, Japan, and then China, suggesting that Western science isn’t free from guilt. But in developing nations the conditions are ripe to produce fraud: nationalistic ambition funnelled into pressure on national scientists to succeed on the international stage; a disproportionate focus on metrics of international success; high monetary rewards to otherwise poorly paid individuals for achieving these measures of success; and the reality that it is particularly difficult for researchers who were educated in a less competitive scientific system, and who may lack English language skills, to publish in top journals. The benefits of success for these researchers are large, but the obstacles preventing their success are also huge. Combine that with a measure of success (impact factor, h-index) that is open to being gamed and essentially relies on honesty and shared scientific principles, and it is not surprising that the system fails.

Medical research was at the heart of both of these scandals, probably because the stakes (money, prestige) are high. Fortunately (or not) for ecology and evolutionary biology, the financial incentives for fraud are rather smaller, and thus organized academic fraud is probably less common. But the ingredients that seem to lead to these issues – national pressure to succeed on the world stage, difficulty in obtaining such success, and reliance on susceptible metrics – would threaten any field of science. And issues of language and culture are so rarely considered by English-language science that it can be difficult for scientists from smaller countries to integrate into global academia. There are really two ways for the scientific community to respond to these issues of fraud and dishonesty – either treat these nations as second-class scientific citizens and assume their research may be unreliable, or else be available and willing to play an active role in their development. There are a number of ways the latter could happen. For example, some reputable national journals invite submissions from established international researchers to improve the visibility of their journals. In some nations (Portugal, Iceland, the Czech Republic, etc.), international scientists review funding proposals, so that an unbiased and external voice on the quality of work is provided. Indeed, the most hopeful fact is that top students from many developing nations attend graduate school in Europe and North America, and then return home with the knowledge and connections they gained. Obviously this is not a total solution, but we need to recognize fraud as a problem affecting and interacting with all of academia, rather than solely an issue of a few problem nations.

Monday, June 10, 2013

The slippery slope of novelty

Coming up with a novel idea in science is actually very difficult. The many years during which smart people have thought, researched, and written about ecology and evolution mean that there aren’t many easy openings remaining. If you are lucky (or unlucky) enough to know someone with an encyclopedic knowledge of the literature, it quickly becomes apparent that only rarely has an idea not been suggested somewhere in the history of the discipline. Mostly, science results from careful steps, not novel leaps and bounds. The irony is that to publish in a top journal, a researcher must convince the editor and reviewers that they are making a novel contribution.

There are several ways of thinking about the role of the novelty criterion – first, the effect it has had on research and publishing, but also, more fundamentally, how difficult it is to even define scientific novelty in practice. Almost every new student spends considerable effort attempting to come up with a completely "novel" idea, but a strict definition of novelty – research that is completely different from anything published in the field in the past – is nearly impossible to satisfy. Science is incrementally built on a foundation of existing knowledge, so new research mostly differs from past research in terms of scale and extent. Let's say that extent characterizes how different an idea must be from a previous one to be novel. Is neutral theory different enough from island biogeography (another, earlier explanation for diversity that doesn’t rely on species differences) to be considered novel? Most people would suggest that it is distinct enough to be novel, but clearly it is not unrelated to the works that came before it. What about biodiversity and ecosystem functioning? Does the fact that its results are converging with expectations from niche theory (ecological diversity yields greater productivity, etc.) take away from its original, apparent novelty?

Then there is the question of scale, which considers the relation of a new idea to those found in other disciplines or at previous points in time. For example, when applying ideas that originate in other disciplines, the similarity of the application or the relatedness of the other discipline alters our conclusions about novelty. Applying fractals to ecology might be considered more novel than introducing particular statistical methods, for example. Priority (were you first?) is probably the first thing considered when evaluating scientific novelty. But ideas are so rarely unconnected to the work that came before them that we end up evaluating novelty as a matter of degree. The most common value judgment seems to be that re-inventing an obscure concept first described many years ago is more novel than re-inventing an obscure concept that was described recently.

In practice, then, the working definition of novelty may be something like ‘an idea or finding that doesn’t exist in the average body of knowledge in the field’. The problem with this is that not everyone has an average body of knowledge – some will be aware of every obscure paper written 50 years ago, and for them nothing is novel. Others have a lesser knowledge or a more generous judgement of novelty, and for them many things seem important and new. A great deal of the inconsistency in the judgement of papers for a journal with a novelty criterion results simply from inconsistent judgement of novelty. This is one of the points that Goran Arnqvist makes in his critique of the use of novelty as a criterion for publishing (also, best paper title in recent memory). Novelty is a slippery slope. It forces papers to be “sold”, and so overvalues flashy and/or controversial conclusions and undervalues careful replication and modest advances. Worse, it ignores the truth about science, which is that science is built on tiny steps founded in the existing knowledge from hundreds of labs and thousands of papers – and that we've never really come up with a consistent way to evaluate novelty.


(Thanks to Steve Walker for bringing up the original idea)

Sunday, May 19, 2013

The end of the impact factor

Recently, both the American Society for Cell Biology (ASCB) and the journal Science publicly proclaimed that the journal impact factor (IF) is bad for science. The ASCB statement argues that IFs limit meaningful assessment of scientific impact for published articles and especially for other scientific products. The Science statement goes further, and claims that assessments based on IFs lead researchers to alter research trajectories and try to game the system rather than focussing on the important questions that need answering.


Impact factors: tale of the tail
The impact factor, calculated by Thomson Reuters, is simply the number of citations a journal receives in a given year to the articles it published in the previous two years, divided by the number of articles published over that time span. Thus it is a snapshot of a particular type of 'impact'. There are technical problems with this metric – for example, citations accumulate at different rates across different subdisciplines. More importantly, and as all publishers and editors know, IFs generally rise and fall with the extreme tail of the distribution of the number of citations. For a smaller journal, it takes just one heavily cited paper to make the IF jump. For example, if a journal publishes one paper that accumulates 300 citations and it published just 300 articles over the two years, then its IF jumps by 1, which can alter the optics. In ecology and evolution, journals with IFs greater than 5 are usually viewed as top journals.
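As a toy version of that arithmetic (all numbers below are hypothetical, not taken from any real journal):

```python
# Sketch of the two-year impact factor arithmetic. All numbers are hypothetical.

def impact_factor(citations_this_year_to_recent_items, items_published_prev_two_years):
    """Citations received this year to items from the previous two years,
    divided by the number of items published in those two years."""
    return citations_this_year_to_recent_items / items_published_prev_two_years

articles = 300               # items published over the two-year window
baseline_citations = 600     # citations this year to those items
print(impact_factor(baseline_citations, articles))        # 2.0

# One blockbuster paper picking up 300 citations lifts the IF by a full point.
print(impact_factor(baseline_citations + 300, articles))  # 3.0
```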

Regardless of these issues, the main concern expressed by the ASCB and Science is that a journal-level metric should not be used to assess an individual researcher's impact. Should a researcher publishing in a high-IF journal be rewarded (promotion, raise, grant funding, etc.) if their paper is never cited? What about their colleague who publishes in a lower-IF journal, but accrues a high number of citations?

Given that rewards are, in part, based on the journals we publish in, researchers try to game the system by writing articles for certain journals, and journals try to attract papers that will accrue citations quickly. Journals with increasing IFs usually see large increases in the number of submissions, as researchers are desperate to have high-IF papers on their CVs. Some researchers send papers to journals in decreasing order of IF, without regard for the actual fit of the paper to the journal. This results in an overloaded peer-review system.

Rise of the altmetric
The alternative metrics (altmetrics) movement aims to shift journal and article assessment from one based on journal citation metrics to a composite of measures that includes page views, downloads, citations, discussions on social media and blogs, and mainstream media stories. Altmetrics attempt to capture a more holistic picture of an article's impact. Below is a screenshot from a PLoS ONE paper, showing an example of altmetrics:

When such information is available, the measure of an individual article's impact is no longer the journal IF, but rather how the article actually performs. Altmetrics are particularly important for subdisciplines where maximal impact is beyond the ivory towers of academia. For example, the journal I am an Editor for, the Journal of Applied Ecology, tries to reach out to practitioners, managers and policy makers. If an article is taken up by these groups, they do not return citations, but they do share and discuss these papers. Accounting for this type of impact has been an important issue for us. In fact, even though our IF may be equivalent to other, non-applied journals, our articles are viewed and downloaded at a much higher rate.

The future
Soon, how articles and journals are assessed for impact will be very different. Organizations such as Altmetric have developed new scoring systems that take into account all the different types of impact. Further, publishers have been experimenting with altmetrics and future online articles will be intimately linked to how they are being used (e.g., seeing tweets when viewing the article).

Once the culture shifts to one that bases assessment on individual article performance, where you publish should become less important, and journals can feel free to focus on an identity that is based on content and not citations. National systems that currently hire, fund and promote faculty based on the journals they publish in, need to carefully rethink their assessment schemes.

May 21st, 2013 Addendum:

You can sign the declaration against Impact Factors by clicking on the logo below:


Friday, May 3, 2013

Navigating the complexities of authorship: Part 2 – author order


Authorship can be a tricky business. It is easy to establish agreed-upon rules within, say, your lab or among frequent collaborators, but with large collaborations, multiple authorship traditions can cause tension. Different groups may not even agree on who should be included as an author (see Part 1), much less the order in which authors should appear. The number of authors per paper has steadily increased over time, reflecting broad cultural shifts in science. Research is now more collaborative, relying on different skill sets and expertise.


 Average number of authors per publication in computer science, compiled by Sven Bittner


Within large collaborations are researchers who have contributed to differing degrees, and author order needs to reflect these contribution levels. But this is where things get complicated. In different fields of study, or even among sub-disciplines, there are substantial differences in cultural norms for authorship. According to Tscharntke and colleagues (2007), there are four main author order strategies:

1. Sequence determines credit (SDC), where authors are ordered according to contribution.
2. Equal contribution (EC), where authors are ordered alphabetically to give equal credit.
3. First-last-author emphasis (FLAE), where the last author is viewed as being very important to the work (e.g., the lab head).
4. Percent contribution indicated (PCI), where contributions are explicitly stated.

The main approaches in ecology and evolutionary biology are SDC and FLAE, though journals are increasingly requiring PCI, regardless of order scheme. This seems like a good compromise allowing the two main approaches (SDC & FLAE) to persist without confusing things. However, PCI only works if people read these statements. Grant applications and CVs seldom contain this information, and the perspective from these two cultures can bias career-defining decisions.

I work in a general biology department with cellular and molecular biologists who wholeheartedly follow FLAE. They may say things like “I need X papers with me as last author to get tenure”. As much as I probe them about how they determine author order in multi-lab collaborations, it is not clear to me how exactly they do this. I know that all the graduate students appear towards the front in order of contribution, but the supervisor professors appear in reverse order starting from the back. Obviously an outsider cannot disentangle the meaning of such ordering schemes without knowing who the supervisors were.

The problem is especially acute when we need to consider how much people have contributed in order to assign credit (see Part 3 on assigning credit). With SDC, you know that author #2 contributed more than the last author. With FLAE, you have no way of knowing this. Did the supervisor fully participate in carrying out the research and writing the paper? Or did they offer a few suggestions and funding? There are cases where the head of a ridiculously large lab appears as last author on dozens of publications a year, and grumbling from those labs insinuates that the professor hasn’t even read half the papers.

Under SDC, this person should appear as the last author, reflecting this minimal contribution, but this shouldn’t give the person some sort of additional credit.

In my lab, I try to enforce a strict SDC policy, which is why I appear as second author on a number of multi-authored papers coming out of my lab. I do need to be clear about this when my record is being reviewed in my department, or else they will think some undergrad has a lab somewhere. Even with this policy, there are complexities, such as collaborations with other labs that follow FLAE, as many European colleagues do. I have two views on this, which may be mutually exclusive. 1) There is a pragmatic win-win, where I get to be second author, some other lab head gets the last position, and there is no debate about who deserves that last position. But 2) this enters morally ambiguous territory, where we each may receive elevated credit depending on whether people read the order through SDC or FLAE.

I guess the win-win isn’t so bad, but it would be nice if there were an unambiguous criterion directing author order. And the only one that is truly unambiguous is SDC – with EC (alphabetical) for all the authors after the first couple in large collaborations. The recent paper by Adler and colleagues (2011) is a perfect example of how this should work.


References:


Adler, P. B., E. W. Seabloom, E. T. Borer, H. Hillebrand, Y. Hautier, A. Hector, W. S. Harpole, L. R. O’Halloran, J. B. Grace, T. M. Anderson, J. D. Bakker, L. A. Biederman, C. S. Brown, Y. M. Buckley, L. B. Calabrese, C.-J. Chu, E. E. Cleland, S. L. Collins, K. L. Cottingham, M. J. Crawley, E. I. Damschen, K. F. Davies, N. M. DeCrappeo, P. A. Fay, J. Firn, P. Frater, E. I. Gasarch, D. S. Gruner, N. Hagenah, J. Hille Ris Lambers, H. Humphries, V. L. Jin, A. D. Kay, K. P. Kirkman, J. A. Klein, J. M. H. Knops, K. J. La Pierre, J. G. Lambrinos, W. Li, A. S. MacDougall, R. L. McCulley, B. A. Melbourne, C. E. Mitchell, J. L. Moore, J. W. Morgan, B. Mortensen, J. L. Orrock, S. M. Prober, D. A. Pyke, A. C. Risch, M. Schuetz, M. D. Smith, C. J. Stevens, L. L. Sullivan, G. Wang, P. D. Wragg, J. P. Wright, and L. H. Yang. 2011. Productivity Is a Poor Predictor of Plant Species Richness. Science 333:1750-1753.

Tscharntke T, Hochberg ME, Rand TA, Resh VH, Krauss J (2007) Author Sequence and Credit for Contributions in Multiauthored Publications. PLoS Biol 5(1): e18. doi:10.1371/journal.pbio.0050018







Thursday, April 11, 2013

Navigating the complexities of authorship: Part 1 – inclusion


One of the highlights of grad school is publishing your very first papers in peer-reviewed journals. I can still remember the feeling of seeing my first paper appear in print (yes, on paper and not as a pdf). But what a novice scientist should not be fretting over is which colleagues should be included as authors and whether they are breaking any norms. The two things that should be avoided are including as authors those who did not substantially contribute to the work, and excluding those who deserve authorship. There have been controversial instances where breaking these authorship rules caused uncomfortable situations. None of us would want someone writing a letter to a journal arguing that they deserved authorship. Nor is it comfortable to see someone squirming out of authorship, arguing they had minimal involvement, when an accusation of fraud has been levelled against a paper. How to determine who should be an author can be difficult.






Even though I spell out my own rules below, it is important to be flexible and to understand that different types of papers and differing situations can have an impact on this decision. That said, you do not want to be arbitrary in this decision. For example, if two people contribute similar amounts to a paper, you do not want to include only one because you personally dislike the other. You should have a benchmark for inclusion that can be defended. The cartoon above highlights the complexity and arbitrariness of authorship –and the perception that there are many instances of less than meritorious inclusion.

Journals do have their own guidelines, and many now require statements about contributions, but even these can be vague, still making it difficult to assess how much individuals actually contributed. When I discuss issues of authorship with my own students, I usually reiterate the criteria from Weltzin et al. (2006). I use four criteria to evaluate contribution:
1)   Origination of the idea for the study. This would include the motivation for the study, developing the hypotheses and coming up with a plan to test hypotheses.
2)   Running the experiment or data collection. This is where the blood, sweat and tears come in.
3)   Analyzing the data. Basically moving from a database to results, including deciding on the best analyses, programming (or using software) and dealing with inevitable complexities, issues and problems.
4)   Writing the paper. Putting everything together can sometimes be the most difficult part, and external motivation can be important.

My basic requirements for authorship are that one of these steps was not possible without a key person, or else there was a person who significantly contributed to more than one of these. Such requirements mean that undergraduates assisting with data collection do not meet the threshold for authorship. Obviously these are idealized and different types of studies (e.g., theory or methodological papers) do not necessarily have all these activities. Regardless, authors must have contributed in a meaningful way to the production of this research and should be able to defend it. All authors need to sign off on the final product.

While this system is idealized, there are still complexities making authorship decisions difficult or uncomfortable. Here are three obvious ones –but there are others.

Data sharing
Large, synthetic analyses require multiple datasets and some authors are loath to share their hard work without credit. This is understandable, as a particular dataset could be the product of years of work. But when is inclusion for authorship appropriate? It is certainly appropriate to offer authorship if the questions being asked in the synthesis overlap strongly with planned analyses for the dataset. Both the data owner and the synthesis architect have a mutual interest in fostering collaboration. In this case every effort should be made to include the data owner in the analyses and writing of the manuscript.

When is it not appropriate to include data owners as authors? First and foremost, if the data are publicly available, then they are there for further independent investigation. No one would offer authorship to each originator of a gene sequence in GenBank. Secondly, if the dataset has already been used in many publications and has fulfilled its intended goals, then it should be made available without authorship strings. I’ve personally seen scientists reserve the right of authorship for the use of datasets that are both publicly available and satisfied their intended purpose long ago.

The basic rule of thumb should be that if the dataset is recent and still being analyzed, and if the owner has an interest in examining similar questions, then authorship should be offered –with the caveat that additional work is required, beyond simply supplying the data.

Idea ontogeny
I thought about labeling this section ‘idea stealing’ but thought that wasn’t quite right. An idea is a complex entity. It lives, dies and morphs. It is fully conceivable to listen to a news story about agricultural subsidies, which somehow spurs an idea about ecosystem dynamics. We all have conversations with colleagues and go to talks, and these interactions can morph into new scientific ideas, even subconsciously. We need to be careful and acknowledge how much an idea came from a direct conversation with another scientist. Obviously, if a scientist says “you should do this experiment…”, then you need to acknowledge them and perhaps turn your idea into a collaboration.

Funding
Now here is the tricky one. Often people are authors because they control the purse strings. Yes, a PI has done an excellent job of securing funding, and should be acknowledged for this. If the study is a part of a funded project, where the PI developed the original idea, then the PI fully deserves to be included. However, if the specific study is independent from the funded project in terms of ideas and work plan, but uses funding from this project, then this contribution belongs in the acknowledgements and does not deserve authorship. There are cases where the PI of an extremely large lab gets dozens of papers a year, always appearing last in the list of authors (see part 2 on author order -forthcoming), and it is legitimate to view their contributions skeptically. Their relationship to many of the papers is likely financial and they probably couldn’t defend the science. I had a non-ecologist colleague ask me if it was still the case that graduate students in ecology produce papers without their advisors, to which I said yes (Caroline has several papers without me as an author).

Clearly there are cultural differences among subdisciplines. However, I do feel that authorship norms need to be robust and enforced. Cheaters (those gratuitously appearing on numerous papers – see part 3 on assigning credit; also forthcoming) reap the rewards and benefits of authorship, with little cost. It is disingenuous to list authors who have not had a substantial input into the publication, and the lead author is responsible for the accuracy of the authorship list. The easiest way to ensure that authors are really authors is to make an effort to include them in various aspects of the paper. For example, give them every opportunity to provide feedback – send them the first results and early drafts, have Skype or phone meetings with them to get their input, and incorporate that input. Ultimately, we all should walk away from a collaboration feeling like we have contributed and made the paper better, and we should be proud to talk about it with other colleagues.


Many of these ideas were directly informed by this great paper by Weltzin and colleagues (2006):

Weltzin, J. F., Belote, R. T., Williams, L. T., Keller, J. K. & Engel, E. C. (2006) Authorship in ecology: attribution, accountability, and responsibility. Frontiers in Ecology and the Environment, 4, 435-441.

http://www.esajournals.org/doi/abs/10.1890/1540-9295(2006)4%5B435:AIEAAA%5D2.0.CO%3B2 

Thursday, January 17, 2013

Who are you writing your paper with?


Choosing who you work with plays an important role in who you become as a scientist. Every grad student knows this is true about choosing a supervisor, and we’ve all heard the good, the bad, and the ugly when it comes to student-advisor stories. But writing a paper with collaborators is like the supervisor-supervisee relationship writ small. Working with coauthors can be the most rewarding or the most frustrating process, or both. Ultimately, the combination of personalities involved merges in such a way as to produce a document that is usually more (but sometimes less) than the sum of its parts. The writing process and collaborative interactions are fascinating to consider all on their own.

Field Guide to Coauthors
The Little General
The Little General is willing to battle till the death for the paper to follow his particular pet idea. Regardless of the aim or outcome of an experiment, a Little General will want to connect it to his particular take on things. Two Little Generals on a paper can spell disaster.
The Silent Partner
These are the middle authors, the suppliers of data and computer code, people who were involved in the foundations of the work, but not actively a part of the writing process.
The Nay-sayer
These are the coauthors who disagree, seemingly on principle, with any attempt to generalize the paper. Given free rein, such authors can prevent a work from having any generality beyond the particular system and question in the paper. These authors do help a paper become reviewer-proof, since every statement left in the paper is well-supported.

The Grammar Nazi
The Grammar Nazi returns your draft of the paper covered in edits, but he has mostly corrected for grammar and style rather than content. This is not the worst coauthor type, although it can be annoying, especially if these edits are mostly about personal taste.
The Snail
This is the coauthor that you just don’t hear from. You can send them reminder emails, give them a phone call, pray to the gods, but they will take their own sweet time getting anything back to you. (And yes, they are probably really busy).

 The Cheerleader
The Cheerleader can encourage you through a difficult writing process or fuel an easy one. These are the coauthors who believe in the value of the work and will help motivate you through multiple edits, rejections, or revisions, as needed.
The Good Samaritan
The Good Samaritan is a special type of person. They aren’t authors of your manuscript, but they read it for you out of pure generosity. They might provide better feedback and more useful advice than any of your actual coauthors. They always end up in the acknowledgements, but you often feel like you owe them more.
The Sage
The Sage is probably your supervisor or some scientific silverback. They read your manuscript and immediately know what’s wrong with it, what it needs, and distill this to you in a short comment that will change everything. The Sage will improve your work infinitely, and make you realize how far you still have to go.

There are probably lots of other types that I haven't thought of, so feel free to describe them in the comments. And, it goes without saying that if you coauthored a paper with me, you were an excellent coauthor with whom I have no complaints. Especially Marc Cadotte, who is often both Cheerleader and Sage :)

Thanks to Lanna Jin for the amazing illustrations!














Friday, October 26, 2012

Open access: where to from here?

Undoubtedly, readers of this blog have: a) published in an open access (OA) journal; b) debated the merits of an OA journal; and/or c) received spam from shady, predatory OA journals (I know when my grad students have 'made it' when they tell me they got an e-mail invite to submit to the Open Journal of the Latest Research Keyword). Now that we have had OA journals operating for several years, it is a good time to ask about their meaningfulness for research and researchers. Bob O'Hara has recently published an excellent reflection on OA in the Guardian newspaper, and it deserves to be read and discussed. Find it here.

Sunday, March 11, 2012

On rejection: or, life in academia


I guess it’s not surprising, given that I’ve written about failure in science, that I would write a post about rejection as well. Actually, I’m not so interested in writing about rejection as I am in hearing how people have learned to deal with it.  

Academia is a strange workplace. It’s stocked with bright people who’ve been successful throughout their previous academic endeavours (with some exceptions*). For the most part, they haven’t faced too much criticism of their intellectual abilities. But in academia you will spend your career being questioned and criticized, in large part by your peers. You will constantly be judged (with every submitted manuscript, grant application, or tenure review). And this is the universal truth about academia: you will be rejected. And for some (many?) people, that's a difficult thing to accept.

Rejection may be so painful in part because it can be hard to interpret. After all, it’s an old trope that rejection is a normal part of academia. But how much rejection is normal, when is it just a numbers game and when is it a sign of professional failing? Let alone the fact that rejection depends on a shifting academic landscape where available funding, journal quotas, and research caliber are always changing. So I’m curious: does the ability to deal with rejection factor into academic success? Are some people, based on personality, more likely to weather rejections successfully, and does this translate into academic success? Or is the development of a thick skin just the inevitable outcome of an academic life?

*A couple of the people I know who are generally unfazed by rejections would say that they deal well with rejection because they weren’t particularly great students and so academic failure isn’t new or frightening to them. 

Thursday, November 17, 2011

Google Scholar will track your citations

In case you haven't noticed, Google Scholar is now offering "My citations", which tracks citations and calculates indices for your papers. Setting it up looks straightforward and fast, making it another alternative to ISI Web of Science and other services. Let the h-index one-upmanship begin...
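For anyone curious what those indices actually compute, here is a minimal sketch of the standard h-index calculation – your h is the largest number such that that many of your papers each have at least that many citations (the citation counts below are made up):

```python
# Sketch of the h-index: the largest h such that h papers each have at least
# h citations. The citation counts are made up.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))  # -> 5
```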

Tuesday, October 4, 2011

The four types of failure, or how to fail in science

As scientists, we’re all wrong, at least sometimes. The question is, how are we wrong?

The arsenic bacteria saga, which we’ve discussed on this blog before, is turning out to be a very public example of failure in science. First announced at a NASA press conference in December 2010, authors led by Felisa Wolfe-Simon shared their discovery of a bacterium capable of replacing the phosphorus in its DNA with arsenic, suggesting the possibility of life in phosphorus-limited conditions. This apparently momentous discovery was published in Science, and was met with disbelief and severe criticism. Critics throughout the blogosphere and academic departments began to compile a comprehensive list of failings on the part of the paper (eight technical criticisms were published in Science), and as a result of the intense focus on the paper, the lead author is no longer associated with the lab group where the research was carried out. This is failure at its worst: the science was flawed and it drew immediate and intense censure. This is the kind of failure that most young scientists fear: judgment, intense criticism, career-long repercussions. But it’s also probably the least common type of failure in science.

However, it’s arguable that the saddest form of failure is the opposite of this: when a paper is right – innovative, ahead of its time – but somehow never receives the attention it deserves. There are lots of famous examples of scientific obscurity, with Gregor Mendel being the poster child for scientists who toil for years in anonymity. In ecology, for example, papers that considered species as equivalent (a la neutral theory) to explain coexistence were around in the 1950s-1960s, but received little attention. Other papers suggesting variation in the environment as a possible mechanism for plant coexistence were published prior to Chesson and Huntly's influential paper, yet remain essentially uncited. Most researchers can name at least one paper that foreshadows the direction the field would take many years later, yet is unacknowledged and poorly cited. There are many reasons that papers can be under-recognized – they are written by scientists outside of the dominant geographical areas or social networks, or by scientists who lack the ability to champion their ideas, either in writing or in person. In some instances the intellectual climate may not be conducive to an idea that, at a later time, will take off.

If that is the saddest type of failure, then the best type of failure is when being wrong inspires an explosion of new research and new ideas. Rather than causing an implosion, as the arsenic-bacteria paper did, these wrong ideas reinvigorate their field. Great examples in ecology include Steve Hubbell’s Unified Neutral Theory of Biodiversity, which, although rightly criticized for its flaws, produced a high-quality body of literature debating its merits and shortcomings. When Jared Diamond (1975) proposed drawing conclusions about community assembly processes based on patterns of species co-occurrence, the disagreement, led by Dan Simberloff, ultimately led to the current focus on null models. Cam Webb’s hypothesis that there should be a relationship between phylogenetic patterns in communities and the importance of different processes in structuring those communities sparked a decade-long investigation into the link between phylogenetic information and community assembly. Although Webb’s hypothesis proved too simplistic, it still informs current research. This is the kind of failure on which you can build a career, particularly if you are willing to continually revisit and develop your theory as the body of evidence against it grows.

However, the most common form of failure occurs when a paper is published that is wrong, yet no one notices or, worse, cares. For every paper that blows up to the proportions of the arsenic bacteria paper, or inspires years of new research, there are hundreds of papers that just fade away, poorly cited and poorly read. Is it better to fail quietly, or to take the chance at public failure, with all its risks and rewards?

Wednesday, June 29, 2011

The reality of publishing papers

This is in response to my undergrads, who ask me "Have you published any of the stuff we're working on yet?" practically every week. To which my response invariably is "not yet".



Wednesday, June 9, 2010

Another reason why a new publishing model is needed...

The finances and ethics of scientific publishing are complex, and there is an inherent tension between commercial publishers and academics and their institutions. On the one hand, we as scientists are (most often) using public money to carry out research, usually in the public interest, and then we typically publish in for-profit journals that restrict public access to our publications. Authors seldom see any of the financial return from publisher profits. On the other hand, publishers provide a level of distribution and visibility for our work which individual authors could not match. In previous posts I have discussed Open Access publications, but there is another reason to consider other publication models. Recently, Nature Publishing Group notified the University of California system of an impending 400% increase in the cost of their publications. The UC administration has responded with an announced plan to boycott NPG publications. The announcement rightly points out that a 400% increase is not feasible given the current plight of library budgets, especially in California, and that scientists in the UC system disproportionately contribute to publishing, reviewing and editing NPG publications and thus are the engine for NPG profits. (See a nice story about the boycott in The Chronicle of Higher Education.)

This is just the latest symptom of the growing tension between publishing and academia, and is a stark reminder that other publishing models need to be actively supported. Perhaps the UC system could invest in open access publishers in lieu of NPG's outrageous costs? Something has to give, and perhaps the UC boycott will remind libraries that they hold the purse strings and could be the greatest driving force for change.

Tuesday, May 25, 2010

The successful launch of MEE

Usually, I view the release of a new journal with some skepticism. There are so many journals, and it feels like academics are over-parsing fields, isolating researchers who should be communicating. However, sometimes a journal comes along and it is obvious that there is a need and the community responds to its arrival. Such is the case with the British Ecological Society's newest journal, Methods in Ecology and Evolution, started by Rob Freckleton. Dedicating a journal to methods papers is a great idea. This era of ecology and evolution is defined by rapid advances in experimental, technological and computational tools, and keeping track of these advances is difficult. Having a single journal should make finding such papers easier, but more importantly it provides a home for methodological and computational ecologists and evolutionary biologists, which will hopefully spur greater communication and interaction, fostering more rapid development of tools.

Two issues have been published and they have been populated by good, entertaining articles. I especially enjoyed the one by Bob O'Hara and Johan Kotze on why you shouldn't log-transform count data. As a researcher, I've done this (instead of using a GLM with a proper distribution) and as an editor, I've allowed this, but it has always felt wrong somehow, and this paper shows that it is.

The early success of the journal is not just the product of the good papers it has already published, but also of its savvy use of electronic communication. They tweet on Twitter, link fans through Facebook, blog about recent advances in methods from other journals, and post podcast and videocast interviews with authors. These casts give readers access to authors' own explanations of how their methods can be used.

I am excited about this new journal and hope it has a great impact on the publication of methodological papers.

Saturday, October 17, 2009

The making of an open era

With the availability of open access (OA) journals, academics now have a choice to make when deciding where to send their manuscripts. The idealistic version of OA journals represents a 'win-win' for researchers. The researchers publishing their work ensure the widest possible audience, and research has shown a citation advantage for OA papers. The other side of the 'win-win' scenario is that researchers, no matter where they are or how rich their institution, get immediate access to high-caliber research papers.

However, not all researchers have completely embraced OA journals. There are two commonly articulated concerns. The first is that many OA journals are not indexed, most notably in Thomson Reuters Web of Knowledge, meaning that a paper will not show up in topic searches, nor will its citations be tracked. I, for one, do not like the idea of a company determining which journals deserve inclusion, thus affecting our choice of journals to submit to.

The second concern is that some OA journals are expensive to publish in. This is especially true for the more prestigious OA journals. Even though such journals often give cash-strapped authors the ability to request a cost deferment, the perception is that you generally need to allocate significant funds for publishing in OA journals. This cost may be justifiable for inclusion in a journal like PLoS Biology, because of the level of readership and visibility. However, there are other, newer, profit-driven journals that see the OA model as a good business model, with little overhead and the opportunity to charge $1000-2000 per article.

I think that, with the rise of Google Scholar and of tools to assess impact (e.g., Publish or Perish), the first concern is becoming less serious: there are now other ways to find articles and track their citations. The second concern is a little more serious, and a broad-scale solution is not readily apparent.

Number of Open Access journals

Regardless, OA journals have proliferated in the past decade. Using the directory of biology OA journals, I show above that the majority of OA journals have appeared since 2000. Some have not been successful, faltering after a few volumes – the World Wide Web Journal of Biology, for example, published nine volumes, the last in 2004. I am fairly confident that not all of these journals can be successful, but I hope that enough are. By having real OA options, especially higher-profile journals, research and academia benefit as a whole.

Which journals become higher profile and are viewed as attractive places to submit a paper is a complex process, depending on a strong and dedicated editorial staff and on the emergent properties of the articles submitted. I hope that researchers out there really consider OA journals as a venue for some of their papers and become part of the 'win-win' equation.