The role of history in ecology can be tough to generalize. For a neo-ecologist, 5 years may be a long time scale; for a restoration ecologist, 50 years might be; for a paleoecologist, millions of years could matter. But multiple talks today argued that without considering history we are lost. Whether it is using climate history to understand how the effects of past climate change are still being felt today (from Jens-Christian Svenning), or the oft-mentioned debate about whether local biodiversity is truly in decline, the past is necessary to understand our changing planet. [The various papers on this debate came up in at least 3 of the talks I saw]. Regardless of which way the diversity relationship goes, Frederic De Laender pointed out that loss of functioning due to increased environmental stress meant that local communities were changing anyways.
Related to this is the question of how ecologists should incorporate and understand the role of human history in their studies. Are humans simply a disturbance? A covariate in a statistical analysis? Or an intrinsic component of ecology across the globe? What is a baseline for ‘naturalness’ in the absence of humans anyways? Further, human records can be potentially misleading as ecological research tools. For example, P. Szabo showed that the popular conception in the Czech Republic (based on archival data) is that beech forests are the true ‘natural’ forest, and coniferous forests were simply the result of forestry plantations. Policy reflects this and promotes preservation of broad leafed forests. However, analysis of paleo-pollen data showed that in fact spruce and other conifers appeared to have dominated for thousands of years in some regions. What then is the true ‘natural’ forest? From Emily Southgate’s fascinating talk showing how the development of an oil refinery in the 1800s and its impacts could be investigated using historical land surveys, to maps showing the still-unfilled ranges of European tree species, the legacy of the past is clear in present day data.
Tuesday, August 30, 2016
Monday, August 29, 2016
#EcoSummit2016 Day 1 - Reconciling the warp and weft of ecology
For the first time since 2008 I didn’t make it to ESA, and instead I get to attend my first EcoSummit, here in Montpellier. Participants represent a more European contingent than the typical ESA, which is a great opportunity to see a slightly different group of people and topics.
Two plenary talks were particularly memorable for me. First, Sandra Diaz gave a really elegant talk that spanned from patterns of functional diversity to the philosophy of ecology. A woven carpet provided the central analogy. A carpet includes the warp – the underlying structure of the carpet – and the weft – the supplementary threads that produce the designs. Much like species, a great diversity of colors and patterns arises from the weft, but the warp provides the underlying structure. The search for a small number of general functional relationships is one way ecologists can look for the structural fabric of life. Much like Phil Grime, an earlier speaker, Diaz has attempted to identify generalities in ecology. It's worth reading the paper she discussed for much of her talk, which attempts to describe a global spectrum of plant function (Diaz et al. 2016). Diaz noted, however, that your focus should be determined by your questions. And you need both details and generalities if you want to provide predictions at a global scale but with a local resolution.
The other plenary of note was from Stephen Hubbell (it actually preceded Diaz's talk), and it provided a contrasting approach. Hubbell discussed a number of detailed analyses to derive a general conclusion about processes maintaining tropical tree diversity. Data from Barro Colorado Island provides information about changes in growth rates, abundances, presence/absence, and distances between species. It shows seemingly large shifts in abundance and composition through time. And Hubbell (in a fairly provocative mood) suggested that it shows that 'community ecology is a failure'. I would argue against that statement, and what Hubbell really seemed to be saying is that expectations of equilibrium and equilibrium models (Lotka-Volterra, etc.) are not useful. Instead, factors such as weak stabilizing mechanisms and demographic stochasticity may be enough to understand high diversity regions.
Tuesday, July 26, 2016
Summer hiatus, back for EcoSummit
As you may have noticed, the EEB & Flow is taking a much needed summer break.
We'll be back for the EcoSummit Congress from Montpellier, France starting Aug. 29th. :-)
Monday, July 18, 2016
The Forest, the Trees, and the Phylo-diversity Jungle
with Florent Mazel
As has been a recurrent topic on the blog recently (here, and here and elsewhere), it is difficult to know when it is appropriate and worthwhile to write responses to published papers. Further, a number of journals don't provide clear opportunities for responses even when they are warranted. And maybe, even when published, most responses won't make a difference anyways.
Marc Cadotte and I and our coauthors experienced this first hand when we felt a paper of ours had been misconstrued. We wanted to provide a useful, positive response, but whether the time investment was worthwhile was unclear. The journal then informed us they didn't publish responses. We tried instead to write a 'News and Views' piece for the journal, which it ultimately declined to publish. And really, a response piece is at cross-purposes with the usual role of N&V (positive editorials). In the end, rather than spend more time on this, we made the manuscript available as a preprint, found here.
The initial response was to a publication in Ecography by Miller et al. (2016) [citations below]. Their paper does a nice job of asking how well 32 phylo-diversity metrics and nine null models discriminate between community assembly mechanisms. The authors first simulated communities under three main assembly rules: competitive exclusion, habitat filtering, and neutral assembly. They then tested which combinations of metrics and null models yielded the best statistical performance. Surprisingly, only a fraction of phylo-diversity metrics and null models exhibited both high statistical power and low Type I error rates. Miller et al. conclude that, for this reason, some metrics and null models proposed in the literature should be avoided when asking whether filtering and competition play an important role in structuring communities. This is a useful extension for the eco-phylogenetic literature. However, the authors also argue that their results show that a framework for phylo-diversity metrics introduced in a paper by myself and coauthors (Tucker et al. 2016) was subjective and should not be used.
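To make the 'metric plus null model' machinery concrete, here is a minimal sketch (in Python, purely for illustration – it is not Miller et al.'s code, and the simple species-shuffling null and the function names are my own assumptions) of how one phylo-diversity metric (mean pairwise distance, MPD) is typically compared against a null distribution to give a standardized effect size:

```python
import numpy as np

def mpd(dist_matrix, present):
    """Mean pairwise phylogenetic distance among the species present."""
    idx = np.where(present)[0]
    sub = dist_matrix[np.ix_(idx, idx)]
    # average over off-diagonal pairs only
    return sub[np.triu_indices(len(idx), k=1)].mean()

def ses_mpd(dist_matrix, present, n_null=999, seed=None):
    """Standardized effect size of MPD under a simple species-shuffling null.

    Negative SES is usually read as phylogenetic clustering (e.g. filtering),
    positive SES as overdispersion (e.g. competitive exclusion).
    """
    rng = np.random.default_rng(seed)
    obs = mpd(dist_matrix, present)
    n_spp = dist_matrix.shape[0]
    null = np.empty(n_null)
    for i in range(n_null):
        perm = rng.permutation(n_spp)              # shuffle species identities
        null[i] = mpd(dist_matrix[np.ix_(perm, perm)], present)
    return (obs - null.mean()) / null.std()

# toy example: 6 species in the pool, a random symmetric distance matrix,
# and a local community containing species 0, 1 and 3
rng = np.random.default_rng(1)
d = rng.uniform(1, 10, size=(6, 6))
d = (d + d.T) / 2
np.fill_diagonal(d, 0)
community = np.array([1, 1, 0, 1, 0, 0], dtype=bool)
print(ses_mpd(d, community, seed=1))
```

A test like this is run for every metric-and-null-model combination on communities simulated under known assembly rules; statistical power is how often the filtered or competitively structured communities are flagged, and Type I error is how often the neutral ones are.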
What was disappointing is that there is a general issue (how can we best understand phylogenetic metrics for ecology?) that could benefit from further discussion in the literature.
Metrics can be analysed and understood in two ways: (1) by grouping them based on their underlying properties (e.g. by comparing mathematical formulations); and (2) by assessing context-dependent behaviour (e.g. by comparing metric performance in relation to particular questions). The first approach requires theoretical and cross-disciplinary studies to summarize the main dimensions along which phylo-diversity metrics vary, while the second provides a field-specific perspective to quantify the ability of a particular metric to test a particular hypothesis. These two approaches have different aims, and their results are not necessarily expected to be identical.
One reason there are so many metrics is that they have been pooled across community ecology, macroecology and conservation biology. The questions typically asked by conservationists and macroecologists, for example, differ from those of community ecologists. Different metrics frequently perform better or worse for different types of problems. The second approach to metrics provides a solution to this problem by explicitly simulating the processes of interest for a given research question (e.g. vicariance or diversification processes in macroecological research), and selecting the most appropriate metric for the task. The R package presented by Miller et al., as well as others (e.g. Pearse et al. 2015), helps to facilitate this approach. And it can be very useful to a field when this is done thoroughly.
But this approach has some limitations as well - it is inefficient and sensitive to choices made in the simulation process. It also doesn't provide a framework or context in which to understand results. The general approach fills this need: the Tucker et al. paper took this approach and classified 70 phylo-diversity metrics along three broad mathematical dimensions: richness, divergence and regularity--the sum, mean and variance of phylogenetic distances among species of assemblages, respectively. This framework is analogous to a system for classifying functional diversity metrics (e.g. Villéger et al. 2008), allowing theoretical linkages between phylogenetic and functional approaches in ecology. We also carried out extensive simulations to corroborate the metric behaviour classification system across different assembly scenarios.
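As a toy illustration of those three dimensions (a sketch based only on the definitions just given, not the Tucker et al. code – real metrics such as Faith's PD, MPD or VPD differ in their details), the richness, divergence and regularity axes can be read off the pairwise phylogenetic distances within an assemblage as their sum, mean and variance:

```python
import numpy as np

def tripartite_dimensions(dist_matrix, present):
    """Sum, mean and variance of pairwise phylogenetic distances
    among the species present in one assemblage (hypothetical helper)."""
    idx = np.where(present)[0]
    pairs = dist_matrix[np.ix_(idx, idx)][np.triu_indices(len(idx), k=1)]
    return {"richness": pairs.sum(),      # richness-type dimension
            "divergence": pairs.mean(),   # divergence-type dimension
            "regularity": pairs.var()}    # regularity-type dimension

# three species with pairwise distances 2, 6 and 6:
d = np.array([[0., 2., 6.],
              [2., 0., 6.],
              [6., 6., 0.]])
print(tripartite_dimensions(d, np.array([True, True, True])))
# sum = 14, mean ~ 4.67, variance ~ 3.56
```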
The minor point to me is that, although Miller et al. concluded this tripartite framework performed poorly, their results appear to provide independent support for the tripartite classification system. (And this is despite some methodological differences, including using a clustering algorithm instead of an ordination approach for metric grouping.) The vast majority of metrics used by Miller et al. on their simulated communities group according to this richness-divergence-regularity classification system (see our Fig 2 vs. Miller et al.'s Fig 1B). And metrics like HAED and EED, which stem from a mathematical combination of the richness and regularity dimensions, are expected to sometimes cluster with richness (as observed by Miller et al., but noted as evidence against our framework), and sometimes with regularity. There is specific discussion of this type of behaviour in Tucker et al. (2016).
Miller et al. Fig 1B. "Dendrogram of intercorrelations among the phylogenetic community structure metrics, including species richness itself (labeled richness). Group 1 metrics focus on mean relatedness; Group 2 on nearest-relative measures of community relatedness; and Group 3 on total community diversity and are particularly closely correlated with species richness. Four metrics, PAE, EED, IAC, and EAED show variable behavior. They do not consistently cluster together or with each other, and we refer to their placement as unresolved. The branches of the dendrogram are colored according to the metric classifications proposed by Tucker et al. (2016): green are "regularity" metrics, pink are "richness" metrics, and yellow are "divergence" metrics."
The major point is that dismissing general approaches can lead to more confusion about phylogenetic metrics, leading users to create even more metrics (please don't!), to conclude that particular metrics should be discarded, or to adopt hard-to-interpret metrics because some study found they were highly correlated with a response. Context is necessary.
I think both approaches have utility, and importantly, both approaches benefit each other. On one hand, detailed analyses of metric performance offer a valuable test of the broader classification system, using alternative simulations and codes. On the other hand, broad syntheses offer a conceptual framework within which results of more focussed analyses may be interpreted.
For example, comparing Miller et al.'s results with the tripartite framework provides some additional interesting insight. They found that metrics closely aligned with only a single dimension are not the best indicators of community assembly. In their results, the metrics with the best statistical performance are sometimes Rao's quadratic entropy and IntraMPD. Because of the general framework, we know that these are classified as 'hybrid' metrics that include both the richness and divergence dimensions of phylogenetic diversity. Taking it one step further, because the general framework connects with functional ecology metrics, we can compare their findings about Rao's QE/IntraMPD to results using corresponding dimensions in the functional trait literature. Interestingly, functional ecologists have found that community assembly processes can alter multiple dimensions of diversity (e.g. both richness and divergence) (Botta-Dukát and Czúcz 2016), which may provide insight into why a hybrid metric is useful for understanding community assembly.
In summary, there is both a forest and individual trees, and both of these are valid approaches. I hope that we can continue to complement broad-scale syntheses with question- and hypothesis-specific studies, and that as a result the field can be clarified.
References:
Botta-Dukát, Z. and Czúcz, B. 2016. Testing the ability of functional diversity indices to detect trait convergence and divergence using individual-based simulation. - Methods Ecol. Evol. 7: 114–126.
Bryant, J. A. et al. 2008. Microbes on mountainsides: contrasting elevational patterns of bacterial and plant diversity. - Proc. Natl. Acad. Sci. U. S. A. 105: 11505–11.
Graham, C. H. and Fine, P. V. A. 2008. Phylogenetic beta diversity: linking ecological and evolutionary processes across space in time. - Ecol. Lett. 11: 1265–1277.
Hardy, O. 2008. Testing the spatial phylogenetic structure of local communities: statistical performances of different null models and test statistics on a locally neutral community. - J. Ecol. 96: 914–926.
Isaac, N. J. B. et al. 2007. Mammals on the EDGE: conservation priorities based on threat and phylogeny. - PLoS One 2: e296.
Kraft, N. J. B. et al. 2007. Trait evolution, community assembly, and the phylogenetic structure of ecological communities. - Am. Nat. 170: 271–283.
Miller, E. T. et al. 2016. Phylogenetic community structure metrics and null models: a review with new methods and software. - Ecography. DOI: 10.1111/ecog.02070
Pavoine, S. and Bonsall, M. B. 2011. Measuring biodiversity to explain community assembly: a unified approach. - Biol. Rev. 86: 792–812.
Pearse, W. D. et al. 2014. Metrics and Models of Community Phylogenetics. - In: Modern Phylogenetic Comparative Methods and Their Application in Evolutionary Biology. Springer Berlin Heidelberg, pp. 451–464.
Pearse, W. D. et al. 2015. pez : phylogenetics for the environmental sciences. - Bioinformatics 31: 2888–2890.
Tucker, C. M. et al. 2016. A guide to phylogenetic metrics for conservation, community ecology and macroecology. - Biol. Rev. Camb. Philos. Soc. doi: 10.1111/brv.12252.
Vellend, M. et al. 2010. Measuring phylogenetic biodiversity. - In: McGill, A. E. M. B. J. (ed), Biological diversity: frontiers in measurement and assessment. Oxford University Press, pp. 193–206.
Villéger, S. et al. 2008. New multidimensional functional diversity indices for a multifaceted framework in functional ecology. - Ecology 89: 2290–2301.
Webb, C. O. et al. 2002. Phylogenies and Community Ecology. - Annu. Rev. Ecol. Evol. Syst. 33: 475–505.
Winter, M. et al. 2013. Phylogenetic diversity and nature conservation: where are we? - Trends Ecol. Evol. 28: 199–204.
Thursday, June 30, 2016
The pessimistic and optimistic view of BEF experiments?
The question of the value of biodiversity-ecosystem function (BEF) experiments—their results, their relevancy—has become a heated one in the literature. An extended argument over the last few years has debated the assumption that local biodiversity is in fact in decline (e.g. Vellend et al. 2013; Dornelas et al. 2014; Gonzalez et al. 2016). If biodiversity isn't disappearing from local communities, the logical conclusion would be that experiments focussed on the local impacts of biodiversity loss are less relevant.
Two papers in the Journal of Vegetation Science (Wardle 2016 and Eisenhauer et al. 2016) continue this discussion regarding the value of BEF experiments for understanding biodiversity loss in natural ecosystems. From reading both papers, it seems as though, broadly speaking, the authors agree on several key points: that results from biodiversity-ecosystem functioning experiments don't always match observations about species loss and functioning in nature, and that nature is much more complex, context-dependent, and multidimensional than typical BEF experimental systems. (The question of whether local biodiversity is declining may be more contested between them.)
BEF experiments typically involve randomly assembled plant communities containing either the full complement of species, or subsets containing different numbers of species. Communities containing fewer species are meant to provide information about the loss of species diversity in a system. Functions (often including, but not limited to, primary productivity or biomass) are eventually measured and analysed in relation to treatment diversity. Although some striking results have come out of these types of studies (e.g. Tilman and Downing 1996), they can vary a fair amount in their findings (Cardinale et al. 2012).
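As a cartoon of that design (an illustrative sketch only, with made-up numbers – not a model of any particular experiment), one can draw random subsets of a species pool at several richness levels, assign each plot a 'function' value built from species contributions plus a complementarity bonus and noise, and then regress function against treatment richness:

```python
import numpy as np

rng = np.random.default_rng(42)

pool_size = 16                                    # species in the full mixture
species_effect = rng.normal(5.0, 1.5, pool_size)  # each species' contribution to biomass

def community_function(members, complementarity=0.4, noise=1.0):
    """Toy 'ecosystem function' for one plot: average species contribution
    scaled by richness, a complementarity bonus, and random noise."""
    richness = len(members)
    return (species_effect[members].mean() * np.sqrt(richness)
            + complementarity * richness
            + rng.normal(0.0, noise))

# 20 replicate plots at each richness treatment, assembled at random
richness_levels = [1, 2, 4, 8, 16]
plots = [(r, community_function(rng.choice(pool_size, size=r, replace=False)))
         for r in richness_levels for _ in range(20)]

x = np.log([r for r, _ in plots])
y = np.array([f for _, f in plots])
slope, intercept = np.polyfit(x, y, 1)            # function ~ log(richness)
print(f"fitted: function = {intercept:.2f} + {slope:.2f} * log(richness)")
```

The fitted diversity-function slope in a toy like this depends entirely on the assumptions baked into community_function, which is one way of seeing why the design choices discussed below matter so much.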
David Wardle's argument is that BEF experiments differ a good deal from natural systems: in natural systems, BEF relationships can take different forms and explain relatively little variation, and so extrapolating from existing experiments seems uninformative. In nature, changes in diversity are driven by ecological processes (invasion, extinction), and experiments involving randomly assembled communities and randomly lost species do nothing to simulate these processes. Wardle seems to feel that the popularity of typical BEF experiments has come at the cost of more realistic experimental designs. This is something of a zero-sum argument (although in some funding climates that may be true...). But it is true that big BEF experiments tend to be costly and take time and labour, meaning that there is an impetus to publish as much as possible from each one. Given that BEF experiments have already changed drastically in design once, in response to criticisms about their inability to disentangle complementarity vs. portfolio effects, it seems the field is not inflexible about design.
Eisenhauer et al. agree in principle that current experiments frequently lack a realistic design, but suggest that there are plenty of other types of studies (looking at functional diversity or phylogenetic diversity, for example, or using random loss of species) being published as well. For them too, there is value in having multiple similar experiments: this allows meta-analysis and the aggregation of comparisons, and will eventually help to tease apart the important mechanisms. Further, realism is difficult to obtain in the absence of a baseline for a "natural, untouched, complete system" from which to remove species.
The point that Eisenhauer et al. and Wardle appear to agree on most strongly is that real systems are complex, multi-dimensional and context-dependent. Making the leap from a BEF experiment with 20 plant species to the real world is inevitably difficult. Wardle sees this as a massive limitation; Eisenhauer et al. see it as a strength. Inconsistencies between experiments and nature are information that highlights when context matters. By having controlled experiments in which you vary context (such as by manipulating both nutrient levels and species richness), you can begin to identify mechanisms.
Perhaps the greatest problem with past BEF work is the tendency to oversimplify the interpretation of results – to conclude that 'loss of diversity is bad' with less attention to 'why', 'where', or 'when'. The best way to address this depends on your view of how science should progress.
Wardle, D. A. (2016), Do experiments exploring plant diversity–ecosystem functioning relationships inform how biodiversity loss impacts natural ecosystems?. Journal of Vegetation Science, 27: 646–653. doi: 10.1111/jvs.12399
Eisenhauer, N., Barnes, A. D., Cesarz, S., Craven, D., Ferlian, O., Gottschall, F., Hines, J., Sendek, A., Siebert, J., Thakur, M. P., Türke, M. (2016), Biodiversity–ecosystem function experiments reveal the mechanisms underlying the consequences of biodiversity change in real world ecosystems. Journal of Vegetation Science. doi: 10.1111/jvs.12435
Additional References:
Vellend, Mark, et al. "Global meta-analysis reveals no net change in local-scale plant biodiversity over time." Proceedings of the National Academy of Sciences 110.48 (2013): 19456-19459.
Dornelas, Maria, et al. "Assemblage time series reveal biodiversity change but not systematic loss." Science 344.6181 (2014): 296-299.
Gonzalez, Andrew, et al. "Estimating local biodiversity change: a critique of papers claiming no net loss of local diversity." Ecology (2016).
Tilman, David, and John A. Downing. "Biodiversity and stability in grasslands." Ecosystem Management. Springer New York, 1996. 3-7.
Cardinale, Bradley J., et al. "Biodiversity loss and its impact on humanity."Nature 486.7401 (2012): 59-67.
Tuesday, June 14, 2016
Rebuttal papers don’t work, or citation practices are flawed?
Brian McGill posted an interesting follow-up to Marc's question about whether journals should allow post-publication review in the form of responses to published papers. I don't know that I have any more clarity as to the answer to that question after reading both (excellent) posts. Being idealistic, I think that when there are clear errors, they should be corrected, and that editors should be invested in identifying and correcting problems in papers in their journals. Based on the discussions I've had with co-authors about a response paper we're working on, I'd also like to believe that rebuttals can produce useful conversations, and ultimately be illuminating for a field. But pragmatically, Brian McGill pointed out that rebuttals rarely seem to make an impact (citing Banobi et al. 2011). Many times this was because citations of the flawed papers continued, and "were either rather naive or the paper was being cited in a rather generic way".
Citations are possibly the most human part of writing scientific articles. Citations form a network of connections between research and ideas, and are the written record of progress in science. But they're also one of the clearest points at which biases, laziness, personal relationships (both friendships and feuds), taxonomic biases, and subfield myopia are apparent. So why don't we focus on improving citation practices?
Ignoring more extreme problems (coercive citations, citation fraud, how to cite supplementary materials, data and software), as the literature grows more rapidly and pressure to publish increases, we have to acknowledge that it is increasingly difficult to know the literature thoroughly enough to cite broadly. A couple of studies found that 60-70% of citations were scored as accurate (Todd et al. 2007; Teixeira et al. 2013). (Whether you see that as too low or pretty high depends on your personality.) Key problems were the tendency to cite 'lazily' (citing reviews or synthetic pieces rather than delving into the literature within them) or 'naively' (citing high-profile pieces in an offhand way without considering rebuttals and follow-ups – a key point of the Banobi et al. piece). At least one limited analysis (Drake et al. 2013) showed that citations tended to be much more accurate in higher-IF journals (>5), perhaps (speculating) due to better peer review or copy editing.
More generally - why don't we learn how to cite well as students? A quick Google search shows that the vast majority of advice on citation practices concerns avoiding plagiarism and stylistic matters. Some of it is philosophical, but I have never heard a deep discussion of questions like: 'What's an appropriate number of citations for an idea? For a manuscript?'; 'How deep do I cite? (Do I need to go back to Darwin?)'. It would be great if there were a consensus advice publication on best practices in citation, like the sort the BES is so good at.
Which is to say, that I still hope that rebuttals can work and be valuable.
Friday, May 27, 2016
How to deal with poor science?
Publishing research articles is the bedrock of science. Knowledge advances through testing hypotheses, and the only way such advances are communicated to the broader community of scientists is by writing up the results in a report and sending it to a peer-reviewed journal. The assumption is that papers passing through this review filter report robust and solid science.
Of course this is not always the case. Many papers include questionable methodology and data, or are poorly analyzed. And a small minority actually fabricate or misrepresent data. As Retraction Watch often reminds us, we need to be vigilant against bad science creeping into the published literature.
Why should we care about bad science? Erroneous results or incorrect conclusions in scientific papers can lead other researchers astray and result in bad policy. Take, for example, the well-flogged Andrew Wakefield, a since-discredited researcher who published a paper linking autism to vaccines. The paper is so flawed that it does not stand up to basic scrutiny and was rightly retracted (though how it could have passed through peer review is an astounding mystery). However, this incredibly bad science invigorated an anti-vaccine movement in Europe and North America that is responsible for the re-emergence of childhood diseases that should have been eradicated. This bad science is responsible for hundreds of deaths.
Of course most bad science will not result in death. But bad articles waste time and money if researchers go down blind alleys or work to rebut papers. The important thing is that there are avenues available to researchers to question and criticize published work. Nowadays this usually means that papers are criticized through two channels. First is through blogs (and other social media). Researchers can communicate their concerns and opinions about a paper to the audience that reads their blog or through social media shares. A classic example was the blog post by Rosie Redfield criticizing a paper published in Science that claimed to have discovered bacteria that used arsenic as a food source.
However, there are a few problems with this avenue. First is that it is not clear that the correct audience is being targeted. For example, if you normally blog about your cat, and your blog followers are fellow cat lovers, then a seemingly random post about a bad paper will likely fall on deaf ears. Secondly, the authors of the original paper may not see your critique and do not have a fair opportunity to rebut your claims. Finally, your criticism is not peer-reviewed, and so flaws or misunderstandings in your writing are less likely to be caught.
Unlike the relatively new blog medium, the second option is as old as scientific publication – writing a commentary that is published in the same journal (and often with an opportunity for the authors of the original article to respond). These commentaries are usually reviewed and target the correct audience, namely the scientific community that reads the journal. However, some journals do not have a commentary section, and so this avenue is not available to researchers.
Caroline and I experienced this recently when we enquired about the possibility of writing a commentary on an article that was published and contained flawed analyses. The Editor responded that they do not publish commentaries on their papers! I am an Editor-in-Chief and I routinely deal with letters sent to me that criticize papers we publish. This is an important part of the scientific process. We investigate all claims of error or wrongdoing and, if the concerns appear valid but do not meet the threshold for a retraction, we invite the correspondents to write a commentary (and invite the original authors to write a response). This option is so critical to science that it cannot be overstated. Bad science needs to be criticized, and the broader community of scientists should feel that they have opportunities to check and critique publications.
I can see that there are many reasons why a journal might not bother with commentaries – to save page space for articles, because they're seen as petty squabbles, etc. – but I would argue that scientific journals have important responsibilities to the research community, and one of them must be to hold the papers they publish accountable and allow for sound and reasoned criticism of potentially flawed papers.
Looking over the author guidelines of the 40 main ecology and evolution journals (and apologies if I missed statements – author guidelines can be very verbose), only 24 had a clear statement about publishing commentaries on previously published papers. While they all had differing names for these commentary type articles, they all clearly spelled out that there was a set of guidelines to publish a critique of an article and how they handle it. I call these 'Group A' journals. The Group A journals hold peer critique after publication as an important part of their publishing philosophy and should be seen as having a higher ethical standard.
Next are the 'Group B' journals. These five journals had unclear statements about publishing commentaries of previously published papers, but they appeared to have article types that could be used for commentary and critique. It could very well be that these journals do welcome critiques of papers, but they need to clearly state this.
The final class, 'Group C' journals, did not have any clear statements about welcoming commentaries or critiques. These 11 journals might accept critiques, but they did not say so. Further, there was no indication of an article type that would allow commentary on previously published material. If these journals do not allow commentary, I would argue that they should re-evaluate their publishing philosophy. A journal that did away with peer review would be rightly ostracized and seen as not fully scientific, and I believe that post-publication criticism is just as essential as peer review.
I highlight the differences in journals not to shame specific journals, but rather to highlight that we need a set of universal standards to guide all journals. Most journals now adhere to a set of standards for data accessibility and competing interest statements, and I think that they should also feel pressured into accepting a standardized set of protocols to deal with post-publication criticism.
Wednesday, May 25, 2016
Thoughts on successful postdoc-ing
Unlike grad school, postdoc positions start and end without much fanfare. If grad students are apprentices, postdocs are the journeymen/women of the trade. (Wikipedia defines journeymen as… “considered competent and authorized to work in that field as a fully qualified employee… [but] they are not yet able to work as a self-employed master craftsman.”) Though short compared to a PhD, postdoc jobs are an important stepping stone towards a 'real' job, be that another postdoc, or a position inside or outside of academia. There’s less advice out there about being successful as a postdoc, and often you are on your own to figure things out. I’m finishing a first postdoc this week, and moving on to a second one, and while I think the last 2 years worked out well, they took their own, unexpected path. Some of this is good advice that I was given, some comes from experience or observation, some I even manage to follow :-) *
Choose carefully. If you have some choice, be strategic in choosing a postdoc job. Decide what the position is going to accomplish for you: that may be expanding your skill set, such as by learning a new experimental system or additional analytical techniques; improving your current skills by working with an expert; being involved in high profile research; or being in a certain locale for various reasons. Beware projects too far from your current skill set – the risk is that the learning curve may be so steep that you will be barely competent at the end, and have little to show for your time. Of course, you might decide to use a postdoc to pursue interdisciplinary work, or move away from your dissertation work, in which case this is a risk worth taking.
Because postdocs are short, it may seem as though having a good fit with your supervisor is less important. Don't assume that your new supervisor will be broadly similar in approach to your previous supervisor (or an improvement). Mismatched expectations between supervisors and postdocs seem pretty common, and it's important to get an understanding of what your role is beforehand. The variation in expectations from supervisor to supervisor is huge - from those that require time sheets and expect strict hours, to those that give you total autonomy. Does your supervisor see postdocs as colleagues? 9-5 employees? Advanced students? Lab managers? Talk to friends, colleagues, and students. This may depend on the source of funding as well - will you be working on a specific existing project with specific timelines (common in the US, where many postdocs are funded off of NSF grants), or are you funded by a fellowship and therefore more independent?
Get to know your neighbours. Once you’ve chosen and started your postdoc, the most important thing to do is to establish connections in your lab and department immediately. I cannot emphasize this enough. Don’t wait to settle in, or get on top of some papers, or hope people in the hallway will introduce themselves. Postdoc positions are short, and in many departments postdocs are isolated, not students but not really faculty. This can lead to feelings of disconnection, loneliness, and frustration. Seek out the other postdocs - join or organize postdoc social events, go to lab meetings and journal clubs, get the department to maintain an active postdoc email list. Not only will this give you a sense of belonging, but now you have people to talk to (and sometimes rant to), with whom to navigate administrative issues, and potential collaborators. Postdocs are an invaluable resource for job applications as well: they usually have the most up-to-date experience on the job market, and can provide great feedback on job applications and practice job talks. For example, the postdocs in my current department built an exhaustive list of potential questions asked during academic interviews, and shared interview horror stories over drinks.
Mental health and life balance. Postdocs don’t get the kinder, gentler approach sometimes given to grad students and people expect you to stand on your own. This can reignite imposter syndrome. There is no easy solution to this, but some combination of taking care of yourself, working on that mythical thick skin, and highlighting the positive events in your life can help.
Time management continues to become more important, at least for me. More than in grad school, you have to actively decide how much work you want to be doing. There is always something that you *could* be working on, so scheduling when things will get done, based on priority, energy, etc., is important. In addition, people start inviting you to things or asking for your input on projects. Learn to say no. Be strategic about your time management – it's flattering to be wanted, but time is limited and not all invitations are of equal value towards your specific goals.
Practice professional networking. On the other hand, don’t say no to everything: networking and the opportunities it creates are very helpful. Focus on the professional areas that are of interest to you, but consider joining and being active in ESA sections (including the Early Career section) or other relevant organizations; organize workshops or symposia at conferences; host invited speakers. If your department hosts an external seminar series, take advantage (nicely!) of the revolving cast of scientists. They are a great way to make connections with people whose work you admire, and even speakers you have less in common with are great practice for networking skills. From experience, if you have breakfast with a different visiting speaker every week, you will quickly improve your description of your research and your ability to keep a conversation going (also, you will become an expert on your city’s breakfast places). These are helpful skills to have for faculty interviews, for talking to the media and press, even for telling your family what you do.
Take initiative. You are your own advocate now. If you wish you could learn something, or be invited to a working group, or get teaching experience, look into making it happen yourself. This may include organizing working groups (many provide competitive funding, for example, iDiv/sDiv, CIEE (Canada), the new NCEAS, SESYNC), applying for small grants and other project funding on your own, recruiting undergraduates and mentoring them, organizing or co-teaching courses.
Similarly, don't stop learning new things. Inertia gets higher the less time you have, and it can be hard to find the time to pick up the next skill.
Publish. Focus on publishing (if you are interested in academic jobs) – this may be obvious, but publishing is more important than ever as a postdoc. You need to show that you are independently able to produce work after leaving your PhD lab. This counters the 'maybe they just had a good supervisor' concern. It can be hard to find time to work on both current and past projects, but try to. From experience (and illustrated by the periodic emails from my PhD supervisor), the longer your dissertation chapters sit around, the less likely they are to ever be published…
Know what your dream job is, and apply for it if you see it. Be willing to move on if something better comes up. Postdocs usually have to think in the short-term, because most funding is in 1-2 year increments. So keep an eye on new sources of funding/positions. Make decisions based on your needs (be they career-related, family-related, whatever): it’s easy to feel guilty moving on from one unfinished position to another, but the reality is that postdocs are temporary and fleeting.
I was told to start applying for jobs as early as I felt reasonably qualified. The logic was that the best practice for job interviews is doing actual job interviews, and further, it is better to fail when it doesn’t matter, rather than when it is your dream job.
Friday, May 6, 2016
What’s so great about Spain? Assessing UNESCO World Heritage inequality.
Some places are more valuable than others. We often regard places as being of high or unique value if they possess high biological diversity, ancient cultural artefacts and structures, or outstanding geological features. These valuable places deserve special recognition and protection. The sad reality is that when we are driven by immediate needs and desires, these special places are lost.
The natural world, and the wonderful diversity of plants and animals, is on the losing end of a long and undiminished conflict with human population growth, development, and resource extraction. We don’t notice it when there is ample natural space, but as nature becomes increasingly relegated to a few remaining places, we place a high value on them.
The same can be said for places with significant cultural value. Ancient temples, villages, and works of human achievement are too valuable to lose, and often only a few remnants remain to connect us to the past.
In either case, natural or cultural, when they’re gone, we lose a part of us. These special places tell us about ourselves: where we come from, how the world shaped us, and what unites all of humanity. Why did the world cry out in a united voice when the Taliban destroyed the Buddhas of Bamiyan in 2001, even though many of those concerned people were not Buddhist? The answer is simple: the expansion of Buddhism out of India along ancient trade routes tells us why many Asian nations share a common religion. Such places tell us about ourselves, the differences that interest us, and the similarities that bind us. The same can be said about the global outcry over the recent destruction of the ancient city of Palmyra by ISIS.
Before and after photos of the taller of the Buddhas of Bamiyan. Image posted by Carl Montgomery, CC BY-SA 3.0.
Similarly, the natural world tells us about ourselves. It has constantly shaped and influenced what it means to be human: our desires, our fears, and how we interact with nature are products of our evolution. If I flash a picture of a car to my 500-student ecology class, very few students, if any, screech in fear. But if I flash a photo of a hissing cobra or a close-up of a spider, invariably a bunch of students squirm, gasp, or scream. Rationally, this is an odd response, since cars are the leading cause of death and injury in many western countries, while snakes and spiders kill very few people in Canada.
These special places deserve recognition and protection, and that is what the UNESCO World Heritage designation is meant to achieve. To earn the designation, countries must nominate sites that represent unique and globally significant contributions to world heritage and that are adequately protected to ensure their long-term existence. World Heritage sites are amazing places. They represent the gems of our shared global heritage, they need to be protected in perpetuity, and they should be accessible to all people. That said, some sites I have visited seem to be loved too much, with high visitation rates degrading parts of them.
Examples of UNESCO World Heritage sites. A) The Great Wall of China. B) The Gaoligong Mountains, part of the Three Parallel Rivers of Yunnan. C) Angkor Wat in Cambodia. D) An example of a site that may be too loved - Lijiang in Yunnan. All photos by Shirley Lo-Cadotte and posted on our family travel blog - All The Pretty Places.
UNESCO World Heritage sites should also be representative. What I mean by this is that they should be designated regardless of national borders. Heritage sites are found on all continents and across most countries, though a number of politically unstable countries (e.g., Liberia and Somalia) do not possess Heritage sites, likely because they lack the organization or resources to undertake the designation application process and the governance to ensure a site is adequately protected. But there are substantial differences in the number of World Heritage sites across nations[1]. Some countries, because of inherent priorities, national pride, resources, or expertise, are better able to identify sites and persuade UNESCO that a particular place deserves designation.
The distribution of the number of UNESCO World Heritage sites across countries, and the top ten countries.
Why do we see such disparity in the number of World Heritage sites, where many countries have few sites and a few countries have many? This is a difficult question to answer, and to do so I took an empirical approach. I combined data on the number of sites per country with Gross Domestic Product (GDP)[2], country size[3], and country population size[4]. I then ran simple statistical analyses to figure out what predicts the number of Heritage sites, and identified those countries that are greatly over-represented by Heritage sites and those that are very under-represented. A couple of things to note: the variables in the best statistical models were all log-transformed, I excluded World Heritage sites that spanned more than one country, and I did not include countries without any Heritage sites. The data and R code have been posted to Figshare and are freely available.
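For readers who want the gist of the analysis, here is a minimal sketch in R. It assumes a hypothetical data file and column names (country, sites, gdp, area, pop); it is an approximation of the models described here, not the code posted to Figshare.

```r
# Minimal sketch of the analysis described above. The file and column names
# (country, sites, gdp, area, pop) are placeholders, not the Figshare files.
heritage <- read.csv("unesco_sites_by_country.csv")

# Single-predictor models; all variables log-transformed, and only countries
# with at least one Heritage site are included (as described above).
m_gdp  <- lm(log(sites) ~ log(gdp),  data = heritage)
m_area <- lm(log(sites) ~ log(area), data = heritage)
m_pop  <- lm(log(sites) ~ log(pop),  data = heritage)

# Compare the fits of the three predictors
sapply(list(GDP = m_gdp, Area = m_area, Population = m_pop),
       function(m) summary(m)$r.squared)
```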
All three of GDP, area, and population size predicted the number of World Heritage sites. It is important to note that these three country measures are not strongly correlated with one another (only moderately so). So, larger, richer, and more populous countries had more World Heritage sites. This makes sense: big countries should contain more unique sites by chance alone, and more populous countries tend to have a longer history of organized states and so should possess more cultural relics (especially China). GDP is harder to explain, but high-GDP countries should have robust national parks or other bureaucratic structures that assess and protect important sites, making them easier to document and justify for UNESCO. GDP is quite interesting, because it is the single best measure for predicting the number of Heritage sites, better than population size and area. Further, neither country density (population/area) nor productivity (GDP/population) is a strong predictor of the number of Heritage sites.
The relationships between the number of World Heritage sites and GDP, area, and population. Note that the axes are all log-transformed.
While these relationships make sense, it is also clear that countries do not all sit close to the regression line: some are well above it, meaning they have more Heritage sites than predicted, while others are below it and have fewer sites than predicted. When I combine the measures in different combinations and look for the best single statistical explanation for the number of World Heritage sites, I find that the best model includes GDP and population size plus their interaction (meaning that population size matters more in high-GDP countries). For aficionados, this model explains about 65% of the variation in the number of Heritage sites.
Now, we can identify those countries that are over- or under-represented by UNESCO World Heritage sites according to how far above or below the predicted line they fall (technically, by looking at the statistical residuals).
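Continuing the sketch above (same placeholder data and column names), the best-supported model and the residual-based ranking might look something like this:

```r
# Best model described above: GDP, population size, and their interaction
m_best <- lm(log(sites) ~ log(gdp) * log(pop), data = heritage)
summary(m_best)$r.squared   # roughly 0.65 in the analysis described above

# Positive residuals: more sites than predicted (over-represented);
# negative residuals: fewer sites than predicted (under-represented)
heritage$residual <- resid(m_best)
head(heritage[order(-heritage$residual), c("country", "residual")])  # most over-represented
head(heritage[order(heritage$residual),  c("country", "residual")])  # most under-represented
```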
The top five over-represented countries are all European, which means that, given their GDP and population size, these countries have more World Heritage sites than expected. At the other extreme, the under-represented countries come from more diverse regions, including Africa, the Middle East, and Southeast Asia.
An interesting comparison to think about is Germany and Indonesia. Germany has more World Heritage sites than expected (residual = +0.61) and is a moderately sized, high-GDP country. Let me say, I like Germany, I’ve been there half a dozen times, and it has beautiful landscapes and great culture. However, does it deserve so much more World Heritage recognition than Indonesia, which has fewer sites than expected (residual = -0.63)? Indonesia has spectacular landscapes, immense biodiversity, and great cultural diversity and history. To put it in perspective, Germany has 35 World Heritage sites and Indonesia has just 8.
To answer the question in the title of this post: what’s so great about Spain? Well, it not only has beautiful and diverse natural landscapes and a rich cultural history, it also appears to have the infrastructure in place to identify and protect these sites. Its place at the top of the ranking of World Heritage sites relative to GDP and population means that Spain’s natural and cultural wonders are in good hands. For the countries at the other end of the spectrum, however, having relatively few World Heritage sites is probably not a sign that they are uninteresting or have little to offer the world; rather, it points to something more alarming. These places lack the financial capacity or national will to fully recognize those places that are of value to the whole world. The problem is that the globally important heritage that does exist in these places is at risk of being lost. These under-represented countries serve as a call to the whole world, not just to help identify and protect heritage sites, but to aid these countries with the infrastructure and human well-being that empower them to prioritize their natural and cultural heritage.
[1] Here I use the 2015 database of World Heritage sites.
[2] From: The United Nations 2014 GDP estimates.
[4] From: The United Nations 2016 projections.
Wednesday, May 4, 2016
The future of community phylogenetics
Community phylogenetics has received plenty of criticism over the last ten years (e.g. Mayfield and Levine 2010; Gerhold et al. 2015). Much of the criticism is tied to concerns about pattern-based inference, the use of proxy variables, and untested assumptions. These issues are hardly unique to community phylogenetics, and I think that few ideas are solely 'good' or solely 'bad'. They are useful in moulding our thinking as ecologists and inspiring new directions of thought. Many influential ideas in ecology have bobbled in confidence through time, but remain valuable nonetheless [e.g. interspecific competition, character displacement (Schoener 1982; Strong et al. 1979)]. But still, it can be hard to see exactly how to use phylogenetic distances to inform community-level analyses in a rigorous way. Fortunately, there is research showing exactly this. The key, to me at least, is to avoid treating a phylogeny as just another matrix to analyze, and instead to consider and test the mechanisms that might link the outcome of millions of years of evolution to community-level interactions.
A couple of potential approaches for moving questions about community phylogenetics forward are discussed below. The first is to consider the mechanisms behind pattern-based inference and ask whether its assumptions hold.
1) Phylogenies and traits - testing assumptions about proxy value
As you will know if you have read the introductory paragraph of many community phylogenetics papers, Charles Darwin was the first to highlight that two closely related species might interact differently than two distantly related species. People have tested this hypothesis in many ways in various systems, with mixed results. One of the most important directions forward is to make explicit the assumptions behind such ideas and to test them experimentally. That is, do phylogenetic distances between species capture trait divergence and, ultimately, ecological divergence?
Because evolutionary divergence should relate to feature divergence (sensu Faith), the most direct question to ask is whether functionally important trait differences increase with increasing phylogenetic distance. For example, Kelly et al. (2014) found that “close relatives share more features than distant relatives but beyond a certain threshold increasingly more distant relatives are not more divergent in phenotype”, although in a limited test based only on patristic distances. This suggests that at short distances, phylogenetic distance may be a reasonable proxy for feature divergence, but that the relationship is not useful for making predictions about distant relatives.
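As a concrete (if simplified) illustration of this kind of assumption test, the sketch below simulates a tree and a continuous trait and asks whether trait divergence increases with phylogenetic distance. It is only a toy example under Brownian-motion assumptions, not the analysis from Kelly et al. (2014).

```r
# Toy sketch: does trait divergence increase with phylogenetic distance?
# Simulated tree and Brownian-motion trait, purely for illustration.
library(ape)    # phylogenies, patristic distances, trait simulation
library(vegan)  # Mantel test

set.seed(1)
tree  <- rcoal(50)                         # hypothetical 50-species tree
trait <- rTraitCont(tree, model = "BM")    # hypothetical continuous trait

phylo_dist <- cophenetic(tree)             # pairwise patristic distances
trait      <- trait[rownames(phylo_dist)]  # match species order
trait_dist <- dist(trait)                  # pairwise trait divergence

# Correlation between the two distance matrices
mantel(as.dist(phylo_dist), trait_dist)

# Visual check: is the relationship roughly linear, or does it saturate
# beyond some threshold (as Kelly et al. 2014 report for real data)?
plot(as.vector(as.dist(phylo_dist)), as.vector(trait_dist),
     xlab = "Phylogenetic distance", ylab = "Trait divergence")
```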
Phylogenies and coexistence/competition. Ecological questions about communities are rarely about traits alone. The key assumption behind many early analyses was that closely related species share more similar *niches*, and so compete more strongly than distantly related species. Thus the question is one step removed from trait evolution, asking instead how phylogenetic divergence translates into fitness differences or interaction strength. Not surprisingly, current papers suggest a fairly mixed, less predictable relationship between phylogenetic relatedness and competitive outcomes.
Recent findings have ranged from “Stabilising niche differences were unrelated to phylogenetic distance, while species’ average fitness showed phylogenetic structure” (California grassland plants; Godoy et al. 2014), to no signal in either fitness or niche differences (green algae; Narwani et al. 2013), to both stabilizing and fitness differences increasing with phylogenetic distance when species are sympatric (Mediterranean annual plants; Germain et al. 2016). Given constraints, tradeoffs, and convergence of strategies, it is not surprising that simply inferring the importance of competition from patterns along a phylogenetic tree is not generally possible (Kraft et al. 2015; blogpost).
2) Phylogenies and the regional species pool
More interesting than testing for proxy value is to think about the mechanisms that tie evolution and community dynamics together. A key role for evolution in questions about community ecology is to ask what we can learn about the regional species pool, from which local communities are assembled. What information about the history of the lineages in a regional species pool informs the composition of local communities?
The character of the regional species pool is determined in part by the evolutionary history of the region, and this can in turn greatly constrain the evolutionary history of the community (Bartish et al. 2010). The abundance of past habitat types may alter the species pool, while certain communities may act as 'museums' harbouring particular clades. For example, Bartish et al. (2016) found that the lineages represented in different habitat types within a region differ in the evolutionary history they capture, with communities in dry habitats disproportionately including lineages from dry epochs, and similarly for wet habitats. Here, considering the phylogeny provides insight into the evolutionary component of an ecological idea like 'environmental filtering'.
Similarly, species pools are formed by both ecological processes (dispersal and constraints on dispersal) and evolutionary ones (extinctions, speciation in situ), and one suggestion is that appropriate null models for communities may need to consider both ecological and evolutionary processes (Pigot and Etienne, 2015).
Invasive species should also be considered in the context of both evolution and ecology. Gallien et al. (2016) found that “currently invasive species belong to lineages that were particularly successful at colonizing new regions in the past.”
I think using phylogenies in this way is philosophically in line with ideas like Robert Ricklefs' 'regional community' concept. The recognition is that a single time scale may be limiting for understanding ecological communities.
References:
- Mayfield, M. M. and Levine, J. M. (2010). Opposing effects of competitive exclusion on the phylogenetic structure of communities. Ecology Letters 13: 1085-1093.
- Gerhold, P., et al. (2015). Phylogenetic patterns are not proxies of community assembly mechanisms (they are far better). Functional Ecology 29: 600-614.
- Schoener, T. W. (1982). The controversy over interspecific competition: despite spirited criticism, competition continues to occupy a major domain in ecological thought. American Scientist 70: 586-595.
- Strong, D. R., Jr., Szyska, L. A. and Simberloff, D. S. (1979). Test of community-wide character displacement against null hypotheses. Evolution: 897-913.
- Kelly, S., Grenyer, R. and Scotland, R. W. (2014). Phylogenetic trees do not reliably predict feature diversity. Diversity and Distributions 20: 600-612.
- Godoy, O., Kraft, N. J. B. and Levine, J. M. (2014). Phylogenetic relatedness and the determinants of competitive outcomes. Ecology Letters 17: 836-844.
- Narwani, A., et al. (2013). Experimental evidence that evolutionary relatedness does not affect the ecological mechanisms of coexistence in freshwater green algae. Ecology Letters 16: 1373-1381.
- Germain, R. M., Weir, J. T. and Gilbert, B. (2016). Species coexistence: macroevolutionary relationships and the contingency of historical interactions. Proceedings of the Royal Society B 283: 20160047.
- Kraft, N. J. B., Godoy, O. and Levine, J. M. (2015). Plant functional traits and the multidimensional nature of species coexistence. PNAS.
- Bartish, I. V., et al. (2010). Species pools along contemporary environmental gradients represent different levels of diversification. Journal of Biogeography 37: 2317-2331.
- Bartish, I. V., Ozinga, W. A., Bartish, M. I., Wamelink, G. W. and Hennekens, S. M. (2016). Different habitats within a region contain evolutionary heritage from different epochs depending on the abiotic environment. Global Ecology and Biogeography.
- Pigot, A. L. and Etienne, R. S. (2015). A new dynamic null model for phylogenetic community structure. Ecology Letters 18: 153-163.
- Gallien, L., Saladin, B., Boucher, F. C., Richardson, D. M. and Zimmermann, N. E. (2016). Does the legacy of historical biogeography shape current invasiveness in pines? New Phytologist 209: 1096-1105.