One of the most vociferous debates in community ecology began in the 1970s, between Jared Diamond and Dan Simberloff (and colleagues), over whether 'checkerboard patterns' of bird distributions provided evidence for interspecific competition. It was an early and particularly heated example of the pattern-versus-process debate that continues in various forms today. Diamond (1975) proposed that the distribution of birds in the Bismarck Archipelago, and particularly the fact that some pairs of bird species never co-occurred on the same islands (producing a checkerboard pattern), was evidence that competition between species limited their distributions. The problem with using this checkerboard pattern as evidence of competition, as Connor and Simberloff (1979) subsequently pointed out, was that a null model was necessary to determine whether it actually differed from random patterns of apparent non-independence between species pairs. Further, other mechanisms (different habitat requirements, speciation, dispersal limitation) could also produce non-independence between species pairs. The original debate may have died down, but the null-model methodology for communities suggested by Connor and Simberloff has greatly influenced modern ecological methods, and continues to be debated and modified to this day.
The original null model of bird distributions in the Bismarck Archipelago involved a binary community matrix (rows representing islands, columns representing species), with 0s and 1s recording species absences and presences; all the 1s in a row are the species present on that island. The null model randomly shuffled the 0s and 1s while maintaining island richness (row sums) and species range sizes (column sums). The authors of a new paper in Ecology admit that these original null models didn't accurately capture what Diamond meant by a "checkerboard pattern". This is interesting in part because two of the authors (E.F. Connor and Dan Simberloff) led the debate against Diamond and introduced the binary matrix approach for generating null expectations, so there is a little bit of a 'mea culpa' here. The authors note that the earlier null models captured patterns of non-overlap between species' distributions, but did not distinguish non-overlap between species with overlapping ranges from non-overlap between species that simply occurred on sets of geographically distant islands (referred to here as 'regional allopatry'). The original binary matrix approach didn't consider the spatial proximity of species' ranges.
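To make the shuffling concrete, here is a minimal sketch in Python of a fixed-fixed randomization via 2x2 submatrix swaps. This is the 'sequential swap' style widely used in later null-model work, not necessarily Connor and Simberloff's exact algorithm, and the toy matrix and swap count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def swap_randomize(matrix, n_swaps=10_000):
    """Shuffle a binary island-by-species matrix while preserving row sums
    (island richness) and column sums (species range sizes), by flipping
    randomly chosen 2x2 'checkerboard' submatrices."""
    m = matrix.copy()
    n_islands, n_species = m.shape
    for _ in range(n_swaps):
        r = rng.choice(n_islands, size=2, replace=False)
        c = rng.choice(n_species, size=2, replace=False)
        sub = m[np.ix_(r, c)]
        # A [[1,0],[0,1]] or [[0,1],[1,0]] submatrix can be flipped
        # without changing any row or column total.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = 1 - sub
    return m

# Toy matrix: 4 islands (rows) x 5 species (columns)
obs = np.array([[1, 0, 1, 0, 1],
                [0, 1, 0, 1, 0],
                [1, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]])
null = swap_randomize(obs)
assert (null.sum(axis=1) == obs.sum(axis=1)).all()  # island richness preserved
assert (null.sum(axis=0) == obs.sum(axis=0)).all()  # species range sizes preserved
```

Because every accepted swap flips a 2x2 checkerboard submatrix, the row and column totals are invariant by construction, which is exactly the constraint the original null model imposed.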
With this in mind, the authors re-analyzed checkerboard patterns in the Bismarck Archipelago in a way that controls for regional allopatry. True checkerboarding was defined as: “a congeneric or within-guild pair with exclusive distribution, co-occurrence in at least one island group, and geographic ranges that overlap more or significantly more than expected under an hypothesis of pairwise independence”. This definition is closer to Jared Diamond's original meaning, so a null model that captures it is probably a better test of the original hypothesis. The authors measured the overlap of the convex hulls defining species' ranges and, when randomizing the binary matrix, added the further restriction that species could occur only within the island groups where they were actually found (rather than being shuffled across all islands, as before).
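One way to picture the restriction, as a loose sketch rather than the authors' exact procedure (the convex-hull overlap test is omitted, and the island groups, matrix, and iteration counts below are all hypothetical), is to confine swaps to islands within the same island group and then compare an exclusivity statistic against the restricted null distribution:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def n_exclusive_pairs(m):
    """Count species pairs that never co-occur on any island."""
    return sum(1 for i, j in combinations(range(m.shape[1]), 2)
               if not np.any(m[:, i] & m[:, j]))

def restricted_swap_randomize(matrix, groups, n_swaps=1_000):
    """Swap null model confined within island groups: both islands in a
    swap must share a group, so a species never gains a presence in a
    group where it was not actually recorded."""
    m = matrix.copy()
    group_rows = [np.flatnonzero(groups == g) for g in np.unique(groups)]
    usable = [rows for rows in group_rows if len(rows) >= 2]
    for _ in range(n_swaps):
        rows = usable[rng.integers(len(usable))]
        r = rng.choice(rows, size=2, replace=False)
        c = rng.choice(m.shape[1], size=2, replace=False)
        sub = m[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = 1 - sub
    return m

# Hypothetical data: 6 islands in 2 island groups, 4 species
obs = np.array([[1, 0, 1, 0],
                [0, 1, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 1],
                [1, 0, 1, 1],
                [0, 1, 1, 0]])
groups = np.array([0, 0, 0, 1, 1, 1])

# Monte Carlo comparison of the observed statistic to the restricted null
null_counts = [n_exclusive_pairs(restricted_swap_randomize(obs, groups))
               for _ in range(199)]
observed = n_exclusive_pairs(obs)
p = (1 + sum(n >= observed for n in null_counts)) / 200
print(f"observed exclusive pairs: {observed}, one-tailed p = {p:.3f}")
```

Because both islands in any swap share a group, no species is ever shuffled into a group where it never occurred, which is the spirit of the authors' restriction.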
Even with these clarified and more precise null models, the results remain consistent: true checkerboarding appears to occur rarely relative to chance expectations. Of course, this doesn't mean that competition is not important, but “Rather, in echoing what we said many years ago, one can only conclude that, if they do compete, competition does not strongly affect their patterns of distribution among islands.” More generally, the endurance of this particular debate says a lot about the longstanding tension in ecology over the value and wealth of information captured by ecological patterns, and the limitations and caveats that come with such data. There is also a subtle message about the limitations of null models: they are often treated as a magic wand for dealing with observed patterns, but null models are limited by our own understanding (or ignorance) of the processes at play and our interpretation of their meaning.
4 comments:
"There is also a subtle message about the limitations of null models: they are often treated as a magic wand for dealing with observed patterns, but null models are limited by our own understanding (or ignorance) of the processes at play and our interpretation of their meaning."
The Duhem-Quine thesis comes to mind. From Wikipedia:
"The Duhem–Quine thesis (also called the Duhem–Quine problem, after Pierre Duhem and Willard Van Orman Quine) is that it is impossible to test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses)."
The problem is in deriving predictions from hypotheses. Here the hypothesis of no competition requires several background assumptions for the statistical test to be a good test of the 'no competition' hypothesis. There are many drastically different datasets that could result from a non-competitive system.
But Duhem-Quine is true of all science...why single out null models? I think the problem has more to do with how null models are routinely used than with how they could (or should) be used. Null model results are usually only effective as part of a much larger body of evidence in support of (or against) a particular hypothesis. From Wikipedia again,
"One solution to the dilemma thus facing scientists is that when we have rational reasons to accept the background assumptions as true (e.g. scientific theories via evidence) we will have rational—albeit nonconclusive—reasons for thinking that the theory tested is probably wrong if the empirical test fails."
Hey Steve - I don't disagree with you. And I'm not singling out null models per se! I think the issue is more that null models are routinely applied in a general manner without consideration of their assumptions, interpretations, etc. There are lots of other tools similarly (mis)used; null models just relate to this discussion.
I figured you'd agree. I just thought the D-Q angle is cool.
Question: has anyone ever actually simulated data from a spatial competition model and looked at the patterns thereby generated and how they change as you tweak model parameters? Because that's surely what you want to do to validate any observation-based method. The scientific question of interest here is actually not ultimately "is the observed spatial distribution of species a checkerboard or just random"? It's "what, if anything, can you infer about the biological processes that generated the observed spatial distribution of species from the spatial distribution itself?" Any approach to answering that question ought to be validated not by testing whether it can distinguish checkerboards or any other "pattern" from "noise", but by asking if it can distinguish spatial distributions generated by one process or combination of processes from spatial distributions generated by other processes or combinations of processes.
I know just a couple of people have done this for variance partitioning methods in metacommunity ecology (Ben Gilbert is one), with very negative answers. Basically, using variance partitioning methods to try to infer whether metacommunities are source-sink or species sorting or whatever looks like a terrible idea.
If nobody's done the same for checkerboard distributions (maybe nestedness too), somebody really should. I'm sure the answer would be negative (i.e. you'd find that you can't infer anything about process from pattern here), but somebody should still do it if it hasn't yet been done. And it wouldn't even be hard.
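For what it's worth, a toy version of the validation described above might look like the following. Everything here is hypothetical: the pairwise exclusion rule, the colonization probability, and the choice of Stone and Roberts' C-score as the pattern statistic are all invented for illustration:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

def simulate_community(n_islands=30, n_species=10, p_col=0.4, competition=False):
    """Assemble islands by independent colonization; optionally apply a
    crude exclusion rule in which a colonist fails whenever its designated
    competitor is already resident."""
    m = np.zeros((n_islands, n_species), dtype=int)
    for isl in range(n_islands):
        for sp in rng.permutation(n_species):
            if rng.random() < p_col:
                competitor = sp ^ 1  # species 0-1, 2-3, ... form competitor pairs
                if competition and m[isl, competitor]:
                    continue  # excluded by the resident competitor
                m[isl, sp] = 1
    return m

def c_score(m):
    """Stone and Roberts' C-score: mean checkerboard units over all species
    pairs, (r_i - S)(r_j - S), where S is the number of shared islands."""
    vals = []
    for i, j in combinations(range(m.shape[1]), 2):
        s = int(np.sum(m[:, i] & m[:, j]))
        vals.append((m[:, i].sum() - s) * (m[:, j].sum() - s))
    return float(np.mean(vals))

neutral = [c_score(simulate_community()) for _ in range(200)]
competitive = [c_score(simulate_community(competition=True)) for _ in range(200)]
print(f"mean C-score, no competition: {np.mean(neutral):.1f}; "
      f"with exclusion: {np.mean(competitive):.1f}")
```

The interesting question is whether the two C-score distributions remain separable once other processes (habitat differences, dispersal limitation) are layered into the simulation; if they collapse together, pattern alone can't identify process, which is exactly the negative answer anticipated above.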