How communities assemble has long been a question of great interest to ecologists. However, studies of community assembly are often thwarted by the large temporal and spatial scales over which assembly processes occur, making experimental tests of assembly theory difficult. As a result, researchers are often forced to rely
on observational data and make inferences about the mechanisms at play from
patterns alone. While historical assembly research focused on inferring evidence of competition or environmental filtering from patterns of species co-occurrence, more recent
research often looks at patterns of phylogenetic or trait similarity in a
community to answer these questions.
Not surprisingly, when methods rely heavily on observational data they are open to criticism: one of the most important outcomes of early community
assembly literature was the recognition that patterns that appeared to support
a hypothesis about competition or environmental filtering could in fact result by
random chance. This ultimately led to the
widespread incorporation of null models, which are meant to simulate patterns
that might be observed by random chance (or other processes not under study),
against which the observed data can be compared. Patterns of functional and
phylogenetic information in communities can also be compared against null
expectations, to test whether observed phylogenetic or functional over- or under-dispersion could have arisen by chance alone. However, while null models are an important tool in assembly research, they are sometimes treated as a foolproof solution to all of its problems.
In a new paper, Francesco de Bello states
frankly “whilst reading null-model methods applied in the literature (indeed
including my work), one may have the impression of reading a book of magic
spells”. While null models are increasingly sophisticated, allowing researchers
to determine which processes to control for and which to leave out, de Bello
suggests that the decision to include or omit particular factors from a null
model can be unclear, making it difficult to interpret results or to compare them across studies. Further, results from null models may not mean what researchers
expect them to mean.
Using the example of functional diversity (FD; variation in
trait values among species in a community), de Bello illustrates how null models may have different meanings than expected. Ideally, a null model for FD should produce random values of FD against which the observed values can be compared. The difference between observed and random results is interpreted using the standardized effect size (SES: the observed FD minus the mean of the null FD values, divided by the standard deviation of the null values). SES values >0 indicate that traits are more
divergent than expected by chance, suggesting competition structures
communities. If SES<0, traits are more convergent than expected by chance,
suggesting environmental conditions structure communities. Finally, if SES ~0, then
trait values aren’t different from random. However, de Bello shows that the SES
is driven by the observed FD values, because the ‘random’ FD values are
dependent on the pool of observations sampled. This means that the values the
null model produces are ultimately dependent on the observed values, even though inferences are made by comparing the null and observed values as though they were independent.
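To make this concrete, here is a minimal sketch in Python of how a reshuffling null model and the SES are typically computed. The `functional_diversity` and `ses_fd` functions, and the mean-pairwise-distance FD index, are illustrative choices of mine, not de Bello's code:

```python
import numpy as np

rng = np.random.default_rng(42)

def functional_diversity(traits):
    """One simple FD index: mean pairwise trait distance among species."""
    d = np.abs(traits[:, None] - traits[None, :])
    n = len(traits)
    return d[np.triu_indices(n, k=1)].mean()

def ses_fd(community_traits, pool_traits, n_null=999):
    """SES = (observed FD - mean null FD) / sd of null FD.

    The null model reshuffles species identities: each null community is
    the same size as the observed one, drawn at random from the species
    pool. Note that the pool itself comes from the observed data, which is
    exactly the dependence de Bello highlights.
    """
    observed = functional_diversity(community_traits)
    k = len(community_traits)
    null_fd = np.array([
        functional_diversity(rng.choice(pool_traits, size=k, replace=False))
        for _ in range(n_null)
    ])
    return (observed - null_fd.mean()) / null_fd.std()
```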
For example, consider a situation where you are building a null model of community structure for plant communities found along
two vegetation belts. If the null model is constructed using all the plant
communities, regardless of the habitat they are found in, the resulting null FD value will be higher, since dissimilar species from different vegetation belts are randomly combined into the same null communities. If null models are constructed separately for each vegetation belt, the null FD value is lower, since species within a belt are more similar to one another. The magnitude of the difference
between the null model and the observed values, and further, the biological
conclusions one would take from this study, would therefore depend on which null model was
constructed.
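Continuing the sketch above with hypothetical trait values (not data from the paper), we can see how the choice of species pool changes the null expectation, and therefore the SES, for the very same observed community:

```python
# Hypothetical traits for two vegetation belts: species within a belt are
# similar; species from different belts are very dissimilar.
belt_a = rng.normal(loc=0.0, scale=1.0, size=30)
belt_b = rng.normal(loc=10.0, scale=1.0, size=30)

community = belt_a[:10]  # an observed community from the first belt

# Pool spanning both belts: dissimilar species get shuffled into the same
# null communities, inflating null FD and pushing the SES strongly negative.
print(ses_fd(community, np.concatenate([belt_a, belt_b])))

# Pool restricted to the community's own belt: null FD is much lower, and
# the SES for the same community sits near zero.
print(ses_fd(community, belt_a))
```

Under the pooled null model the community looks strongly convergent ("filtered"); under the belt-specific null model it looks unremarkable.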
De Bello’s findings make important points about the limitations of null models, particularly for functional diversity, but likely for other response variables as well. The type of null model he explores is relatively simplistic (reshuffling of species among sites), and the observation that the species pool affects the null model is not new (Shipley & Weiher, 1995).
However, even sophisticated and complex null models need to be biologically
relevant and interpretable, and null models are still frequently used incorrectly. Although he mentions it only briefly, de Bello also notes another problem with studies of community assembly: popular indices like FD, PD, and others may not always correctly distinguish between different assembly mechanisms (Mouchet et al. 2010; Mayfield & Levine, 2010), something that null models do not control for.
6 comments:
So, the Narcissus Effect (Colwell & Winkler 1984, IIRC) is rediscovered yet again...
Time to refight the null model wars!
http://oikosjournal.wordpress.com/2011/06/01/why-ecologists-should-refight-the-null-model-wars/
http://oikosjournal.wordpress.com/2012/02/02/cool-new-oikos-papers/
Hi Jeremy - it's true that arguments about null models have been going on for a really long time (e.g. Strong et al. 1984, etc), and seem to surface and disappear periodically. What's interesting to me is that modern approaches to assembly (phylogenetic and functional diversity) are prone to the same fallacies we recognized in species-assembly studies, and yet seem to have avoided the same level of scrutiny in terms of null models. You're right, it's probably a good time for the null model issue to come to the forefront again.
Whoops, meant to include this post in my previous comment:
http://oikosjournal.wordpress.com/2012/02/07/drilling-down-vs-scaling-up/
As for why modern phylogenetic and functional approaches aren't recognized as suffering from the same problems as old approaches, one natural hypothesis is that most of the people using these shiny new quantitative tools are largely unfamiliar with the older literature.
So what's worse: repeating an old mistake because you're not aware of recent refutations? Or repeating an old mistake because you're totally unaware of the old literature?
One of the many great things about being a Morin lab alum is that I had to take Peter's grad course in community ecology, which demands that you both read and think critically about a lot of old stuff (and recent stuff too, of course). Thereby preventing you from making both sorts of mistakes: those born of undue reverence for the older literature and those born of ignorance of it.
It's interesting you say that about Morin's course. I feel the same way about the grad course in community ecology I took with Peter Abrams here at U of T: reading classic literature helped me think much more clearly about why community ecology is where (and what) it is today.
It may not be a coincidence that they're both good to learn from, and not just because they're both very good ecologists. Peter and Peter are rough contemporaries, and in part they're teaching the stuff that they learned during their own training. They both have a skeptical streak. Peter M. doesn't buy any idea that hasn't worked in an experiment, and Peter A. doesn't buy any idea that hasn't been rigorously demonstrated mathematically. So they know the history, but they know it in a 'warts and all' way and they don't revere it. For both of them, I suspect teaching the history of the field is very much a way to try to keep today's students from repeating the mistakes of the past.
Peter A. is probably the bigger contrarian, though--much of his work is dedicated to attacking widely-held intuitions, especially those derived from MacArthur's work. Even I think that contrarianism isn't always manifested in the best way (some of his work puts too much emphasis on unnecessary complications; I don't believe in introducing complexity just for complexity's sake or just because "it might change the answer"). But I think his body of work deserves huge respect and much of it is essential reading for every community ecologist. In my filing cabinets of reprints (yes, I'm old enough to have several of these), there are more Peter Abrams papers than papers by anyone else.
I have recently become very aware of these types of environmental issues because of the business we are in. It seems like these two Peters kind of complement each other. One believes a theory must be validated by an experiment, whilst the other believes maths are necessary for validation. Maybe combine the two together and if a theory passes muster with both of these guys it can be assumed that they have come up with a very strong case!