Friday, May 31, 2013

Some ways you might not expect your research to be used

Most ecologists recognize that ecological knowledge is a tool, with useful applications to conservation and management, recreation, and ecosystem goods and services. Many of us have even written or said something suggesting uses for our work, however unlikely. But few ecologists expect their research to be cited in support of military applications or of the superiority of capitalism.

For example, a recent editorial in the New York Times detailed how conservation of biodiversity became part of American Cold War strategy. In those days, the American military was considering the potential of ‘environmental warfare’, and the research of Charles Elton, who wrote of the dangers of simplifying landscapes by reducing biodiversity, resonated with strategists. They advocated maintaining biodiversity in food supplies and stockpiles (wisdom which transcends the military motivation). Ecological research into invasive species has also informed the US military in modern times. For example, the report "Invasive Threats to the American Homeland" considers the possibility of introduced species being used as terrorist weapons. Such introduced species might be crop parasites or vectors for human diseases, theoretically inflicting economic, structural, and human costs.

Sometimes attempts to adapt research to other uses fall rather short of the mark. Evolutionary biology is not unfamiliar with this: the misapplication of evolution to social Darwinism, for example, and some of the ideas touted in evolutionary psychology misrepresent evolutionary theory. This can happen in ecology too. A recent PNAS paper presented the result that evolutionary diversity increases ecosystem productivity. One writer in the Washington Post blogging community presented this finding as evidence that capitalist concepts like division of labour are found even in nature. It is difficult to accept the link the writer attempts to make (the title is rather over the top as well: “Darwin’s free market wisdom: division of labor starts in the genes”). The writer states that nature wouldn’t exhibit a relationship between diversity and higher productivity if it weren’t optimal, so “[t]he same findings would also appear to suggest that species, like humans, are not all created equal and some are more adept at certain tasks than others.” Therefore, apparently, capitalism is superior to communism.

This kind of thing makes me think that Darwin was lucky that he did not live to see his words and ideas so frequently misquoted and misapplied (although he certainly suffered this during his own lifetime). This is the danger of sending an idea or result into the world: you no longer fully control how it is used and understood. A successful idea is one that, for better or worse, has an independent life. 

(There are probably many misapplications or unusual uses of ecology and evolution that I haven't thought of. If you think of other examples, feel free to mention them in the comments.)

Wednesday, May 29, 2013

Has academic advancement changed your point of view?


We regret to inform you that your paper has not been accepted
as a graduate student:
[reaction gif]

as a postdoc:

[reaction gif]

as a professor:
[reaction gif]


We are pleased to inform you that your paper has been accepted
as a graduate student:
[reaction gif]

as a postdoc:
[reaction gif]

as a professor:
[reaction gif]

Monday, May 27, 2013

Evidence for the evolutionary diversity-productivity relationship at several scales


John J. Stachowicz, Stephanie J. Kamel, A. Randall Hughes, and Richard K. Grosberg. Genetic Relatedness Influences Plant Biomass Accumulation in Eelgrass (Zostera marina). The American Naturalist, Vol. 181, No. 5 (May 2013), pp. 715-724

Ecology is increasingly recognizing the value of non-species-based measures of diversity in relation to ecosystem services, community diversity and invasibility, and conservation activities. One result is that we are seeing increasingly strong and interesting experimental evidence for the importance of genetic diversity in understanding how populations, species, and communities are structured. Two recent papers are good examples of how our understanding is progressing.

For example, research has now clearly demonstrated a relationship between ecosystem functioning and evolutionary history, and well-designed experiments can begin to explore the mechanisms that underlie the ecosystem functioning-evolutionary diversity link. The oft-demonstrated correlation between evolutionary diversity and productivity is usually explained by assuming that ecological similarity and evolutionary relatedness are connected. Diverse communities are often thought to have lower niche overlap (i.e. higher complementarity), but these experiments often rely on highly distinct species (such as a grass and a N-fixer), which could over-emphasize the importance of this relationship. In Cadotte (2013), independent manipulations of phylogenetic diversity and species richness allow the author to separately explore the roles of complementarity and selection effects (the increased likelihood that a highly productive species will be present as species richness increases).
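As an aside for readers unfamiliar with the metric: the most common measure of evolutionary diversity, Faith's PD, is simply the total branch length of the phylogeny connecting a set of species. A minimal sketch (with a made-up tree and branch lengths, not Cadotte's data) shows how two plots with identical species richness can have very different PD, which is what makes it possible to manipulate the two independently:

```python
# Hypothetical tree: each node maps to (parent, length of the branch above it).
tree = {
    "grass_1": ("n1", 1.0),
    "grass_2": ("n1", 1.0),
    "legume": ("root", 3.5),
    "n1": ("root", 2.5),
}

def faith_pd(species):
    """Total branch length of the subtree connecting `species` to the root."""
    edges = set()
    for sp in species:
        node = sp
        while node in tree:   # walk tip-to-root, collecting branches
            edges.add(node)   # each node indexes the branch directly above it
            node = tree[node][0]
    return sum(tree[e][1] for e in edges)

# Same richness (2 species), very different evolutionary diversity:
print(faith_pd({"grass_1", "grass_2"}))  # 4.5 (close relatives)
print(faith_pd({"grass_1", "legume"}))   # 7.0 (distant relatives)
```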

The experiment involved old field plots, planted with between 1 and 4 species chosen from a pool of 17 possible species; evolutionary diversity (high, medium, or low) and species richness were manipulated to include all possible combinations. The study found a much stronger relationship between phylogenetic diversity (PD) and biomass production than between species richness and biomass production, but this isn't especially novel. What is interesting is that it could also identify how selection effects and complementarity were driving this response. High levels of complementarity were associated with high levels of PD: polyculture plots with high complementarity values were much more likely to show transgressive overyielding. Plots with close relatives had a negative or negligible complementarity effect (negative values suggesting competitive or other inhibitory interactions). There was also evidence for a selection effect, best captured by an abundance-weighted measure of evolutionary diversity (IAC) that reflects the abundance of closely related species in a plot. Together, PD and IAC explained 60% of the variation in biomass production.
From Cadotte (2013).
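For readers wondering how complementarity and selection effects are actually separated: the standard approach is the additive partition of Loreau and Hector (2001). I'm assuming the paper uses something along these lines; the sketch below is illustrative only, with invented numbers rather than anything from the study.

```python
import numpy as np

def additive_partition(mono, mixed, expected_ry):
    """Loreau-Hector partition: net biodiversity effect = complementarity + selection.
    mono: monoculture biomass M_i of each species
    mixed: observed biomass of each species in the mixture
    expected_ry: expected relative yields (e.g. 1/N for equal planting)
    """
    M, Yo, ry = (np.asarray(x, float) for x in (mono, mixed, expected_ry))
    n = len(M)
    d_ry = Yo / M - ry                   # deviation from expected relative yield
    net = Yo.sum() - (ry * M).sum()      # net biodiversity effect
    comp = n * d_ry.mean() * M.mean()    # complementarity effect
    sel = n * np.mean((d_ry - d_ry.mean()) * (M - M.mean()))  # selection effect
    return net, comp, sel

# Hypothetical 2-species mixture, planted 50:50
net, comp, sel = additive_partition(mono=[300, 100], mixed=[200, 40],
                                    expected_ry=[0.5, 0.5])
print(net, comp, sel)  # ~40.0, 13.3, 26.7 -- net effect = comp + sel
```

Here the mixture overyields overall (net effect of 40), and the partition shows that most of that comes from the high-yield species dominating (selection) rather than from niche differences (complementarity).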

The second study asks the same question – what is the relationship between biomass production and genetic diversity – but within populations. Stachowicz et al. (2013) looked at genetic relatedness among individuals in monocultures of the eelgrass Zostera marina and its relationship to productivity. Variation within a species has many of the same implications as variation within a community – high intraspecific variation might increase complementarity, and diverse assemblages might also contain more productive genotypes, leading to a selection effect. On the other hand, it is possible that closely related, locally adapted genotypes might be most productive despite their low genotypic variation.

Looking at past experimental data, Stachowicz et al. found – as in most community-level experiments – a strong relationship between genetic relatedness and biomass/density in eelgrass beds; genotypic richness (i.e. the number of genotypes) tended to be a poorer predictor of productivity. However, the relationship ran in the opposite direction to that usually seen – increasing relatedness predicted higher biomass. This is difficult to explain, since it goes against the expected direction of complementarity or selection effects. Possibly cooperative or facilitative relationships are important in eelgrass monocultures. Data obtained from field surveys (rather than experiments) suggested an alternative: these studies may not have covered a large enough range of relatedness. The field data covered a much larger range of relatedness values and showed a unimodal relationship (below), indicating that the productivity-relatedness relationship has an optimum, with highly related or highly diverse assemblages being less productive. Although further work needs to be done, this is an intriguing possibility.
From Stachowicz et al. (2013). Grey dots represent range of relatedness values from experimental data only, compared to range covered by field survey.

At some scales, ecologists are now refining what we know about popular research questions, while at others we are just scratching the surface. Stachowicz et al. suggest that our expectations should differ as we scale up or down: “the slope and direction of the relationship between genetic differentiation and ecological functioning might depend on the genetic scale under consideration”.


(Disclaimer - obviously Marc Cadotte was my PhD supervisor until very recently. But I think it's a nice paper, regardless, and worth a post :) )

Sunday, May 19, 2013

The end of the impact factor

Recently, both the American Society for Cell Biology (ASCB) and the journal Science publicly proclaimed that the journal impact factor (IF) is bad for science. The ASCB statement argues that IFs limit meaningful assessment of scientific impact for published articles and especially for other scientific products. The Science statement goes further, claiming that assessments based on IFs lead researchers to alter research trajectories and try to game the system rather than focusing on the important questions that need answering.


Impact factors: tale of the tail
The impact factor is calculated by Thomson Reuters and is simply the number of citations a journal has received in the previous two years, divided by the number of articles published over that time span. Thus it is a snapshot of a particular type of 'impact'. There are technical problems with this metric – for example, citations accumulate at different rates across different subdisciplines. More importantly, and as all publishers and editors know, IFs generally rise and fall with the extreme tail of the distribution of citation counts. For a smaller journal, it takes just one heavily cited paper to make the IF jump. For example, if a journal published just 300 articles over the two years and one paper accumulates 300 citations, that single paper raises the IF by a full point, which can alter the optics. In ecology and evolution, journals with IFs greater than 5 are usually viewed as top journals.
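The arithmetic is simple enough to put in a few lines of code (the numbers below are invented to match the example above):

```python
def impact_factor(citations, articles):
    """Citations received to items from the previous two years, divided by
    the number of citable items published in those two years."""
    return citations / articles

baseline = impact_factor(citations=900, articles=300)           # IF = 3.0
with_one_hit = impact_factor(citations=900 + 300, articles=300)  # IF = 4.0
# A single 300-citation paper moves this journal's IF by a full point.
```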

Regardless of these issues, the main concern expressed by the ASCB and Science is that a journal-level metric should not be used to assess an individual researcher's impact. Should a researcher publishing in a high IF journal be rewarded (promotion, raise, grant funded, etc.) if their paper is never cited? What about their colleague who publishes in a lower IF journal, but accrues a high number of citations?

Given that rewards are, in part, based on the journals we publish in, researchers try to game the system by writing articles for certain journals and journals try to attract papers that will accrue citations quickly. Journals with increasing IFs usually see large increases in the number of submissions, as researchers are desperate to have high IF papers on their CVs. Some researchers send papers to journals in the order of their IFs without regard for the actual fit of the paper to the journal. This results in an overloaded peer-review system.

Rise of the altmetric
The alternative metrics (altmetrics) movement aims to shift journal and article assessment from journal citation metrics to a composite of measures that includes page views, downloads, citations, discussions on social media and blogs, and mainstream media stories. Altmetrics attempt to capture a more holistic picture of an article's impact. Below is a screenshot from a PLoS ONE paper, showing an example of altmetrics:

By making such information available, the impact of an individual article is not the journal IF anymore, but rather how the article actually performs. Altmetrics are particularly important for subdisciplines where maximal impact is beyond the ivory towers of academia. For example, the journal I am an Editor for, the Journal of Applied Ecology, tries to reach out to practitioners, managers and policy makers. If an article is taken up by these groups, they do not return citations, but they do share and discuss these papers. Accounting for this type of impact has been an important issue for us. In fact, even though our IF may be equivalent to other, non-applied journals, our articles are viewed and downloaded at a much higher rate.
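At its simplest, the kind of composite described above is just a weighted sum over different activity counts. The sketch below is a toy illustration only – the weights are invented, and real providers such as Altmetric use their own, more sophisticated scoring:

```python
# Hypothetical weights: a news story counts for more than a tweet, etc.
WEIGHTS = {"views": 0.001, "downloads": 0.01, "citations": 1.0,
           "blog_posts": 0.5, "tweets": 0.05, "news_stories": 2.0}

def composite_score(counts):
    """Weighted sum of activity counts for one article."""
    return sum(WEIGHTS[k] * v for k, v in counts.items())

# An applied paper: few citations, but heavy practitioner uptake
applied_paper = {"views": 12000, "downloads": 3000, "citations": 4,
                 "blog_posts": 3, "tweets": 80, "news_stories": 2}
print(composite_score(applied_paper))  # 55.5 -- high score despite few citations
```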

The future
Soon, how articles and journals are assessed for impact will be very different. Organizations such as Altmetric have developed new scoring systems that take into account all the different types of impact. Further, publishers have been experimenting with altmetrics and future online articles will be intimately linked to how they are being used (e.g., seeing tweets when viewing the article).

Once the culture shifts to one that bases assessment on individual article performance, where you publish should become less important, and journals can feel free to focus on an identity based on content rather than citations. National systems that currently hire, fund, and promote faculty based on the journals they publish in need to carefully rethink their assessment schemes.

May 21st, 2013 Addendum:

You can sign the declaration against Impact Factors by clicking on the logo below:


Wednesday, May 15, 2013

Holding fast to a good(?) idea

One of my favourite lists on the internet is tucked away in the credits for the PHYLIP software. PHYLIP was authored by Joe Felsenstein, a professor at the University of Washington and an expert on methods of phylogenetic inference. PHYLIP is a free package of programs for inferring phylogenies, and probably the oldest widely distributed phylogenetics package. Programs like PHYLIP made phylogenetic approaches easily accessible to ecologists and evolutionary biologists. Apparently it took years to get from the idea for PHYLIP to funding, and Felsenstein memorializes this with his “No thanks to” list (below). The list includes reviewers and panels from the US Dept of Energy, NSF, and NIH that turned down his proposals and made comments like "The work has the potential to define the field for many years to come.... All agreed that the proposal is somewhat vague. There was also some concern that the proposed work is too ambitious.”

[screenshot of Felsenstein's “No thanks to” list]

There are obvious responses to this list, mostly relating to the short-sightedness of funding agencies, meaningless requirements for ‘broader impacts’, the fact that proposals might be improved through the process of multiple failed applications, and of course the benefit of being long-established and respected when posting such lists on your website. But what I always wonder is: how long do you hold on to an idea, a proposal, or a manuscript that is repeatedly rejected, before you give up on it?

This question is interesting to me for a couple of reasons. Firstly, because personality is so intertwined with confidence about an idea’s success. We all know people who would argue that all their ideas are Nature-worthy, criticism be damned. Other people need to be convinced of the merit of their own ideas. Past success probably helps with judgment – experience in identifying good ideas builds confidence in your ability to do so again. But what is the line between self-confidence and self-delusion? Secondly, it is a reminder that lots of good ideas and good papers were rejected many times. In any case, I am curious whether people tend to give up on an idea because they became discouraged at the prospects of getting it published, because they lost faith in the idea, or a combination of both.

Friday, May 10, 2013

Love the lab you’re with or find the lab you love? Being happy in grad school.

Every grad student is unhappy at some point; existential angst is basically required, hence the success of PhD Comics. A surprisingly common cause of grad school unhappiness is the feeling of having diverged from the path a student wants to be on - that they are somehow in the wrong lab, learning the wrong thing, or working with the wrong person. That they dislike their research. Some people might argue that few people start in their dream job, but grad school is more like an apprenticeship than a 9-5 job: a place to obtain skills and experiences rather than a source of income.

Every unhappy student is different in their own way, but there are a few predictable causes. The path between undergrad and grad student is highly stochastic. Most undergraduates make choices about grad school while under-informed about their options and unclear on where their interests truly lie. Choosing a lab for a PhD or Masters is a huge commitment for an undergrad who has had comparatively limited interactions with ecological research. Academic labs tend to be so specialized that even if an undergrad has had the opportunity and motivation to interact with a number of labs as a student, they have experienced only a tiny fraction of the areas available for study. Students in schools with general biology programs, rather than specialized EEB departments, may be even more limited in the ecological experiences they can have. I can’t help but think that few undergrads are really equipped to make a definite, informed decision about what they want to spend the next 5 years (or more) of their lives doing. Even if they are, a successful graduate student should grow as a researcher, and their interests will naturally expand or shift. Expanding interests and changing foci are part of a successful graduate experience, but what initially felt like a good fit may suddenly feel less comfortable. Students may also end up in uncomfortable fits simply because their choices for grad school were limited by geographical constraints or the availability of funded positions, causing them to compromise on their interests.

My own experience moving from undergrad to PhD student was pretty much in line with this. I knew I wanted to go to grad school and I felt reasonably prepared – I had good grades and three years of research experience in a lab, and I researched and contacted a few potential supervisors – but I hadn’t specialized in ecology and didn’t exactly know what my interests were. It took more than a year as a PhD student, reading deep into the literature and taking classes, to realize that what I was really interested in was completely different from what I was supposed to be doing. This was accompanied by a period of unhappiness and confusion – I had apparently gotten what I wanted (grad school, funding, etc.), but it wasn’t what I wanted after all. No one prepared me for this possibility. Eventually, but with some hassle, I changed labs and was lucky to have the opportunity to get the skills I really wanted.

I don’t think this outcome is anyone’s “fault”. I think most departments and many supervisors are sympathetic to these sorts of graduate student issues. Formal advising of undergraduates at the department level, in addition to the usual informal advising that grad students and advisors provide, should focus on guiding students in determining what they want from grad school (or whether they really want grad school at all!) and how to identify areas of interest and programs/supervisors that would suit them. In particular, students need guidance on how to contact potential supervisors and discuss a supervisor’s expectations and approach, what changing interests mean at the laboratory and department level, and what resources are available for students who wish to obtain particular skills not available in the lab. Supervisors benefit too when their students are informed and more likely to be happy and engaged.

The lab rotation system, which some departments have, also seems like a good way to expose students to their options (although I have no personal experience with it). In addition, when grad student funding comes through the department, rather than from individual supervisors, students can change labs with less difficulty. Some supervisors have very relaxed approaches to grad student projects, allowing students to explore their interests well outside of the lab’s particular approach. But other supervisors (or funding sources) are very much organized around a particular project, making it difficult for students to do anything but the project they were hired to work on.

So what is a student who realizes they want to be working with a different system, approach, sub-discipline, or supervisor supposed to do? How much does unhappiness really matter in the long run? This depends a lot on what a student wants to get out of grad school and what they need to achieve it. One thing students need to do is elucidate what they hope to achieve as a grad student. Though a student may ultimately be unsatisfied with some aspects of their position, they may be able to gain the experiences they want from grad school regardless. There are many tangible and intangible skills students learn in grad school. Students may decide that they want particular quantitative skills (statistics, ArcGIS, coding and modeling experience, etc.) for the job market; if these aren’t available, change may be necessary. On the other hand, even if a student is less interested in the particular system they are working in, it may be possible to obtain experimental and technical skills that are transferable elsewhere. If students wish to remain in academia but realize they are interested in a different subdiscipline than the one they work in, one consideration is whether it will be easier to make the shift now than when finding a postdoc and attempting to convince a potential employer that their knowledge is transferable. This is a difficult question – having read a large number of Ecolog post-doctoral position ads, it seems that a request for system-specific experience occurs in about 50% of ads, while the need for a particular skill set (say, Python and R, or experimental design) tends to be mentioned in every ad. So if you want to go from a protist-microcosm PhD to a postdoc in kangaroo ecology, it is difficult to predict how well your experimental design skills will outweigh your lack of understanding of Australian ecosystems.

Of course, there is no one-size-fits-all answer about what to do. Sometimes, unhappiness will pass, sometimes it won’t. Students need to be proactive above all. The truth is that sometimes it is better to be willing to drop out, to change labs, or take other drastic action. Students commonly fall victim to the sunk-cost fallacy, the idea that they’ve spent 2 years on this degree, so they might as well not “waste” it. Sometimes it is worth sticking it out, but there should be no stigma in deciding not to.

Tuesday, May 7, 2013

Testing the utility of trait databases

Cordlandwehr, Verena, Meredith, Rebecca L., Ozinga, Wim A., Bekker, Renée M., van Groenendael, Jan M., and Bakker, Jan P. 2013. Do plant traits retrieved from a database accurately predict on-site measurements? Journal of Ecology 101.

We are increasingly moving towards data-sharing and the development of online databases in ecology. Any scientist today can access trait data for thousands of species, global range maps, gene sequences, population time series, or fossil measurements. Regardless of arguments for or against, the fact that massive amounts of ecological data are widely available is changing how research is done.

For example, global trait databases (TRY is probably the best known) allow researchers to explore trait-based measures in communities, habitats, or ecosystems without requiring that the researchers have actually measured the traits of interest in the field. And while few researchers would suggest that this is superior to making the measurements in situ, the reality is that there are many situations where trait data are required but the researcher cannot collect them. In these cases, online databases are like a one-stop shop for data. But despite the increasing frequency of citations for trait databases, until now there has been little attempt to quantify how well database values act as proxies for observed trait values. How much should we be relying on these databases?

There are many well-documented reasons why an average trait value might differ from an individual value: intraspecific differences result from plasticity, genotype differences, and age or stage differences, all of which may vary meaningfully between habitats. How much this variation actually matters to trait-based questions is still up for debate, but it clearly affects the value of such databases. To look at this question, Cordlandwehr et al. (2013) examined how average trait values calculated with values from a northwest European trait database (LEDA) corresponded with average trait values calculated using in situ measurements. Average trait values were calculated across several spatial scales and habitat types. The authors looked at plant communities growing in 70 2m x 2m plots in the Netherlands, divided between wet meadow and salt marsh habitats. In each community, they measured three very common plant traits: canopy height (CH), leaf dry matter content (LDMC), and specific leaf area (SLA).

In situ measurements were aggregated such that the trait value for a given plot was the median value of all individuals measured there; for each habitat, it was the median value of all individuals measured in the habitat. The authors then calculated average trait values (weighted by species abundance) across all species for each community (2m x 2m plot) and each habitat (wet meadow vs. salt marsh), and compared the community or habitat averages as calculated using the in situ values versus the regional database values.
From Cordlandwehr et al. 2013. Habitat-level traits at site scale plotted against habitat-level traits calculated using trait values retrieved from a database. 
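The abundance-weighted averages at the heart of this comparison are community-weighted means (CWM). A minimal sketch, with entirely made-up abundances and SLA values, shows how a database-derived CWM can drift from the in situ CWM when local individuals deviate from species-wide averages:

```python
import numpy as np

def community_weighted_mean(abundances, traits):
    """Abundance-weighted mean trait value for one plot."""
    a, t = np.asarray(abundances, float), np.asarray(traits, float)
    return (a * t).sum() / a.sum()

# Hypothetical 3-species salt marsh plot
abundance = [10, 5, 1]
sla_in_situ = [14.0, 19.5, 27.0]   # measured on individuals in the plot
sla_database = [20.0, 21.0, 27.5]  # species averages from a database like LEDA
print(community_weighted_mean(abundance, sla_in_situ))   # ~16.5
print(community_weighted_mean(abundance, sla_database))  # ~20.8
# The database CWM overestimates SLA if the habitat filters for tougher leaves.
```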

The authors found that the correspondence between average trait values calculated using in situ versus database values varied with the scale of aggregation, the type of trait, and the particular habitat. For example, leaf dry matter content varied very little, but SLA was variable. The mesic habitat (wet meadow) was easier to predict from database values than the salt marsh habitat, probably because salt marshes are stressful environments likely to impose a strong environmental filter on individuals, biasing local trait values. While rank differences in species trait values tended to be maintained regardless of the source of data, intraspecific variation was high enough to lead to over- or under-prediction when database values were relied on. Most importantly, spatial scale mattered a lot. In general, database values at the habitat scale were reasonable predictors of observed traits. However, the authors strongly cautioned against scaling such database values down to the community level, or indeed using averaged values of any type at that scale: “From the poor correspondence of community-level traits with respect to within-community trait variability, we conclude that neither average trait values of species measured at the site scale nor those retrieved from a database can be used to study processes operating at the plot scale, such as niche partitioning and competitive exclusion. For these questions, it is strongly recommended to rigorously sample individual plants at the plot scale to calculate functional traits per species and community.”

There are two conclusions I take from this. First, the correlation between sampling effort and payoff is still (as usual) high: it may be easier to get traits from a database, but it is not usually better. Second, studies like this allow us to find a middle ground between unquestioning acceptance and automatic criticism of trait databases: they help scientists develop a nuanced view that acknowledges both strengths and weaknesses. And that's a valuable contribution for a study to make.


Friday, May 3, 2013

Navigating the complexities of authorship: Part 2 - author order


Authorship can be a tricky business. It is easy to establish agreed-upon rules within, say, your lab or among frequent collaborators, but with large collaborations, multiple authorship traditions can cause tension. Different groups may not even agree on who should be included as an author (see Part 1), much less what order authors should appear in. The number of authors per paper has steadily increased over time, reflecting broad cultural shifts in science. Research is now more collaborative, relying on different skill sets and expertise.


 Average number of authors per publication in computer science, compiled by Sven Bittner


Within large collaborations are researchers who have contributed to differing degrees, and author order needs to reflect these contribution levels. But this is where things get complicated. In different fields of study, or even among sub-disciplines, there are substantial differences in cultural norms for authorship. According to Tscharntke and colleagues (2007), there are four main author order strategies:

  1. Sequence determines credit (SDC), where authors are ordered according to contribution.
  2. Equal contribution (EC), where authors are ordered alphabetically to give equal credit.
  3. First-last-author emphasis (FLAE), where the last author is viewed as being very important to the work (e.g., lab head).
  4. Percent contribution indicated (PCI), where contributions are explicitly stated.

The main approaches in ecology and evolutionary biology are SDC and FLAE, though journals are increasingly requiring PCI, regardless of order scheme. This seems like a good compromise, allowing the two main approaches (SDC & FLAE) to persist without confusing things. However, PCI only works if people read these statements. Grant applications and CVs seldom contain this information, and the perspectives of these two cultures can bias career-defining decisions.

I work in a general biology department with cellular and molecular biologists who wholeheartedly follow FLAE. They may say things like “I need X papers with me as last author to get tenure”. As much as I probe them about how they determine author order in multi-lab collaborations, it is not clear to me exactly how they do this. I know that all the graduate students appear towards the front in order of contribution, but the supervising professors appear in reverse order starting from the back. Obviously an outsider cannot disentangle the meaning of such ordering schemes without knowing who the supervisors were.

The problem is especially acute when we need to consider how much people have contributed in order to assign credit (see Part 3 on assigning credit). With SDC, you know that author #2 contributed more than the last author. With FLAE, you have no way of knowing this. Did the supervisor fully participate in carrying out the research and writing the paper? Or did they offer a few suggestions and funding? There are cases where the head of a ridiculously large lab appears as last author on dozens of publications a year, and grumbling from those labs insinuates that the professor hasn’t even read half the papers.

Under SDC, such a person should appear as the last author, reflecting this minimal contribution, and that position shouldn’t confer any additional credit.

In my lab, I try to enforce a strict SDC policy, which is why I appear as second author on a number of multi-authored papers coming out of my lab. I do need to be clear about this when my record is being reviewed in my department, or else they will think some undergrad has a lab somewhere. Even with this policy, there are complexities, such as collaborations with labs that follow FLAE, as with many European colleagues. I have two views on this, which may be mutually exclusive. 1) There is a pragmatic win-win, where I get to be second author and some other lab head gets the last position, and there is no debate about who deserves that last position. But 2) this enters morally ambiguous territory, where we each may receive elevated credit depending on whether people read the order through SDC or FLAE.

I guess the win-win isn’t so bad, but it would be nice if there were an unambiguous criterion directing author order. And the only one that is truly unambiguous is SDC – with EC (alphabetical order) for all the authors after the first few in large collaborations. The recent paper by Adler and colleagues (2011) is a perfect example of how this should work.


References:


Adler, P. B., E. W. Seabloom, E. T. Borer, H. Hillebrand, Y. Hautier, A. Hector, W. S. Harpole, L. R. O’Halloran, J. B. Grace, T. M. Anderson, J. D. Bakker, L. A. Biederman, C. S. Brown, Y. M. Buckley, L. B. Calabrese, C.-J. Chu, E. E. Cleland, S. L. Collins, K. L. Cottingham, M. J. Crawley, E. I. Damschen, K. F. Davies, N. M. DeCrappeo, P. A. Fay, J. Firn, P. Frater, E. I. Gasarch, D. S. Gruner, N. Hagenah, J. Hille Ris Lambers, H. Humphries, V. L. Jin, A. D. Kay, K. P. Kirkman, J. A. Klein, J. M. H. Knops, K. J. La Pierre, J. G. Lambrinos, W. Li, A. S. MacDougall, R. L. McCulley, B. A. Melbourne, C. E. Mitchell, J. L. Moore, J. W. Morgan, B. Mortensen, J. L. Orrock, S. M. Prober, D. A. Pyke, A. C. Risch, M. Schuetz, M. D. Smith, C. J. Stevens, L. L. Sullivan, G. Wang, P. D. Wragg, J. P. Wright, and L. H. Yang. 2011. Productivity Is a Poor Predictor of Plant Species Richness. Science 333:1750-1753.

Tscharntke T, Hochberg ME, Rand TA, Resh VH, Krauss J (2007) Author Sequence and Credit for Contributions in Multiauthored Publications. PLoS Biol 5(1): e18. doi:10.1371/journal.pbio.0050018







Thursday, May 2, 2013

Why pattern-based hypotheses fail ecology: the rise and fall of ecological character displacement

Yoel E. Stuart, Jonathan B. Losos, Ecological character displacement: glass half full or half empty?, Trends in Ecology & Evolution, Available online 26 March 2013

Just as ecology is beginning to refocus on integrating evolutionary dynamics and community ecology, a paper from Yoel Stuart and Jonathan Losos (2013) suggests that perhaps the best-known eco-evolutionary hypothesis – ecological character displacement (ECD) – needs to be demoted in popularity. They review the existing evidence for ECD and in the process illustrate the rather typical path that research into pattern-based hypotheses seems to take.

ECD developed during the period when competition was at the forefront of ecological thought. During the 1950s-1960s, Connell, Hutchinson, and MacArthur produced their influential ideas about competitive coexistence. At the same time, Brown and Wilson (1956) first described ecological character displacement. ECD involves, first, competition for limited resources and, second, selection for resource partitioning in response, driving populations to diverge in resource use. Ecological competition drives adaptive evolution in resource usage, resulting in either exaggerated divergence in sympatry or trait overdispersion. ECD fell in line with a competition-biased worldview and integrated ecology and evolution, and so quickly became entrenched: the ubiquity of trait differences between sympatric species seemed to support its predictions. Pfennig and Pfennig (2012) go so far as to say ‘Character displacement...plays a key, and often decisive, role in generating and maintaining biodiversity.’

One problem was that tests of ECD tended to make it a self-fulfilling prophecy. Differences in resource usage are expected when coexisting species compete; therefore, if differences in resource usage are observed, competition is assumed to be the cause. In the ideal test, divergent sympatric species would be found experimentally to compete, and ECD could be invoked as the proximate cause of divergence. But the argument was also made that when divergent sympatric species were not found to compete, this too was evidence of ECD, since “ghosts of competition past” could have led to divergence so complete that competition no longer occurred. This made it rather difficult to disprove ECD.

There was pushback against these problems in the 1970s, but interestingly, ECD didn’t fall out of favour. A familiar pattern took form: initial ecstatic support, followed by critical papers, which were in turn rebutted by new experimental studies. Theoretical models either supported or rebutted the hypothesis depending on the assumptions involved. In response to the large literature, several influential reviews were written (Schluter (2000), Dayan and Simberloff (2005)) that appeared to suggest at least partial support for ECD from existing data. Rather than dimming interest in ECD, debate kept it relevant for 40+ years. And continued relevance translated to the image of ECD as a longstanding (hence important) idea. Stuart and Losos carry out a new evaluation of the existing evidence for ECD using Schluter and McPhail’s (1992) six criteria, drawing on both the papers from the two previous reviews and more recent studies. Their results suggest that strong evidence for ECD is nearly non-existent, with only 5% of all 144 studies meeting all six criteria. (Note: this isn't equivalent to suggesting that ECD itself is nearly non-existent, just that current support is limited. The paper includes a good discussion of some of the possible reasons that ECD has rarely been observed.)
From Stuart and Losos (2013). Fraction of cases from Schluter 2000, Dayan and Simberloff 2005, and this study that meet either 4 or all 6 of the criteria for ECD.

The authors note that there are many explanations for this finding of weak support: the study of evolution in nature is difficult, particularly given the dearth of long-term studies, and the six criteria are very difficult to fulfill. But they also make an important, much more general point: character displacement patterns can result from multiple processes other than competition, so patterns on their own are not diagnostic. Conversely, legitimate ecological character displacement may not produce the predicted trait overdispersion. The story of the rise and fall of ECD applies to many pattern-driven ecological hypotheses. There are many axiomatic relationships you learn about in introductory courses: hump-shaped productivity-diversity relationships, the intermediate disturbance hypothesis, ECD, etc. These have guided hypothesis formation and testing for 40 years and have become entrenched in the literature despite criticism. And similarly, there are recent papers suggesting that long-standing pattern-based hypotheses are actually wrong or at least misguided (e.g. 1, 2, 3, etc). Why? Because pattern-driven hypotheses lack mechanism, usually relying on some sort of common-sense description of a relationship. The truth is that the same pattern may result from multiple processes. Further, a single process can produce multiple patterns. So a pattern means very little without the appropriate context.

So have we wasted 40 years of time, energy, and resources jousting at windmills? Probably not: data and knowledge are arrived at in many ways, and observing patterns is important - it is the source of the information from natural systems that we use to develop hypotheses. But it is encouraging that ecology now seems to be recognizing that pattern-based hypotheses (and particularly the focus on patterns as proof of those hypotheses) ask the right questions but focus on the wrong answers.
Long-term studies of Darwin's finches have provided strong evidence for ECD.