Showing posts with label theories. Show all posts

Thursday, January 18, 2018

A general expectation for the paradox of coexistence

There are several popular approaches to the goal of finding generalities in ecology. One is essentially top-down: searching for generalities across ecological patterns in multiple places and at multiple scales, and then attempting to understand the underlying mechanisms (e.g. metabolic scaling theory and allometric approaches). Alternatively, the approach can be bottom-up: considering multiple models or multiple individual mechanisms and finding generalities in the patterns or relationships they predict.

A great example of generalities from multiple models is a recent paper published in PNAS (Sakavara et al. 2018). It relies on, links together, and adds to our understanding of community assembly and the effects of competition on the distribution of niches in communities. In particular, it adds support to the assertion that, across a wide variety of models, combinations of either highly similar or highly divergent species can coexist.

Work published in 2006 by Scheffer and van Nes played an important early role towards a reconciliation of neutral theory and niche-based approaches. They used a Lotka-Volterra model to highlight that communities could assemble with clusters of coexisting, similar species evenly spaced along a niche axis (Figure 1). Neutrality, or at least near-neutrality, could result even when dynamics were determined by niche differences. [Scheffer, van Nes, and Remi Vergnon also provide a nice commentary on the Sakavara et al. paper found here].
Fig. 1: From Scheffer and van Nes, emergent 'lumpiness' in communities.
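Scheffer and van Nes's mechanism is easy to reproduce in miniature. The sketch below is our own toy version (not their published code): Lotka-Volterra competition among species spaced along a single niche axis, with competition declining as a Gaussian function of niche distance. All parameter values are illustrative.

```python
import math

# Toy Lotka-Volterra competition along a niche axis, after Scheffer & van Nes
# (2006). NOT their published code; every parameter value here is illustrative.
S = 51
mu = [i / (S - 1) for i in range(S)]        # evenly spaced niche positions in [0, 1]
sigma = 0.15                                # niche width of the Gaussian kernel
r, K, dt = 1.0, 1.0, 0.1

# Competition is strongest between species with similar niche positions.
A = [[math.exp(-(mu[i] - mu[j]) ** 2 / (2 * sigma ** 2)) for j in range(S)]
     for i in range(S)]

n = [0.01] * S                              # all species start rare
for _ in range(1000):                       # simple Euler integration
    comp = [sum(A[i][j] * n[j] for j in range(S)) for i in range(S)]
    n = [max(0.0, n[i] + dt * r * n[i] * (1 - comp[i] / K)) for i in range(S)]

survivors = [i for i in range(S) if n[i] > 1e-3]
print(len(survivors))
```

Plotting the surviving abundances against niche position in runs like this tends to show clumps of similar species along the axis, though how quickly the lumpiness develops depends on the kernel width and on boundary effects.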
Scheffer and van Nes's results might, however, be due to the specifics of the L-V model rather than representing a general and biologically realistic expectation. Sakavara et al. address this issue using a mechanistic consumer-resource model in "Lumpy species coexistence arises robustly in fluctuating resource environments". Under this model, originally from Tilman's classic work with algae, the number of coexisting species is limited by the number of resources that limit growth. For two species to coexist, for example, two limiting resources must be present, and the species must experience a tradeoff in their competitive abilities for those resources. Coexistence can occur when each species is limited more by the resource on which it is the better competitor (Figure 2). Such a model, in which resources limit coexistence, leads to the expectation that communities will assemble to maximize the dissimilarity of species.
Fig 2. From Sakavara et al. (2018).
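To make the constant-resource baseline concrete, here is a hedged numerical sketch of Tilman's R* rule (our own illustration, with made-up parameter values): under Monod growth, a species' R* for a resource is the level at which growth just balances mortality, and the two-species tradeoff described above amounts to each species having the lower R* on a different resource.

```python
# Illustrative sketch of Tilman's R* rule; all parameter values are made up.
# Under Monod growth g(R) = g_max * R / (k + R) and mortality m, a species'
# R* solves g(R*) = m, giving R* = m * k / (g_max - m).

def r_star(g_max, k, m):
    return m * k / (g_max - m)

m = 0.1
# Trade-off: species A has a low half-saturation constant (strong competitor)
# on resource 1 but a high one on resource 2; species B is the mirror image.
spA = {'R1': r_star(1.0, 0.2, m), 'R2': r_star(1.0, 0.8, m)}
spB = {'R1': r_star(1.0, 0.8, m), 'R2': r_star(1.0, 0.2, m)}

# Coexistence on two essential resources requires each species to be the
# better competitor (lower R*) for a different resource.
tradeoff = spA['R1'] < spB['R1'] and spB['R2'] < spA['R2']
print(tradeoff)  # → True
```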
Such a result occurs when resources are supplied constantly, but in reality rates of resource supply may well be cyclical or unpredictable. Will community assembly be similar (resulting in patterns of limiting similarity) when resource supply is variable? Or will clumps of similar species be able to coexist? Sakavara et al. considered this question using consumer-resource models of competition with two fluctuating limiting resources. They simulated the dynamics of 300 competing species assigned different trait values along a trait gradient; here the traits were the half-saturation constants for the two limiting resources, linked by a tradeoff between a species' half-saturation constant for one resource and that for the other.
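The ingredients of that simulation can be sketched in miniature. The code below is our own toy version, not the authors' model: a chemostat with two essential resources whose supplies fluctuate in antiphase, and a handful of consumers whose half-saturation constants trade off between the two resources. Every parameter value is invented.

```python
import math

# Toy sketch only -- NOT Sakavara et al.'s simulation. Two essential resources
# fluctuate in antiphase; consumers follow Liebig's law of the minimum with
# Monod kinetics; all parameter values are illustrative.
S = 11                                  # number of competing species
g_max, D = 1.0, 0.25                    # max growth rate; dilution/mortality rate
k1 = [0.1 + 0.08 * i for i in range(S)]            # half-saturation for resource 1
k2 = [k1[-1] + k1[0] - k for k in k1]              # tradeoff: good on R1 <=> poor on R2

def growth(i, R1, R2):
    # growth set by the more limiting of the two essential resources
    return g_max * min(R1 / (k1[i] + R1), R2 / (k2[i] + R2))

n = [0.01] * S                          # initial consumer abundances
R1 = R2 = 1.0                           # initial resource concentrations
dt, period, amp, supply = 0.02, 30.0, 0.8, 1.0

t = 0.0
for _ in range(10000):                  # simple Euler integration
    s1 = supply * (1 + amp * math.sin(2 * math.pi * t / period))
    s2 = supply * (1 - amp * math.sin(2 * math.pi * t / period))
    g = [growth(i, R1, R2) for i in range(S)]
    use = sum(g[i] * n[i] for i in range(S))       # total consumption (yield = 1)
    R1 = max(0.0, R1 + dt * (D * (s1 - R1) - use))
    R2 = max(0.0, R2 + dt * (D * (s2 - R2) - use))
    n = [max(0.0, n[i] + dt * n[i] * (g[i] - D)) for i in range(S)]
    t += dt

print([round(x, 3) for x in n])         # which trait values persist under fluctuation
```

Plotting the surviving abundances against k1 (the trait axis) for different fluctuation periods is the rough analogue of the paper's Figure 3.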

What they found is strikingly similar to the results of Scheffer and van Nes, and dissimilar to the results that emerge when resources are constant. Clumps of coexisting species emerged along the trait axis. When resource fluctuations occurred rapidly, only fairly specialized species survived in these clumps (R* values that were high for either resource 1 or resource 2, rather than intermediate). But when fluctuations were less frequent, clusters of species also survived at intermediate points along the trait axis. In all cases, however, the community organized into clumps of very similar coexisting species (see Figure 3). This appears to occur because the fluctuating resources make the system's conditions non-stationary: similar sets of species can coexist because the system varies between those species' requirements for persistence and growth.

Fig. 3. "Lumpy species coexistence". The y-axis shows the trait value (here, the R*) of species present under a 360-day periodicity of resource supply.
Using many of the dominant models of competition in ecology, it is clearly possible to explain the coexistence of both similar and dissimilar species. This is true across approaches, from the Lotka-Volterra results of Scheffer and van Nes, to Tilman's R* resource competition, to Chessonian coexistence (2000). It provides a unifying expectation upon which further research can build. Perhaps the paradox of the plankton is not really a paradox anymore?

Friday, November 25, 2016

Can coexistence theories coexist?

These days, the term ‘niche’ manages to cover both incredibly vague and incredibly specific ideas. All the many ways of thinking about an organism’s niche fill the literature, with various degrees of inter-connection and non-independence. The two dominant descriptions in modern ecology (the last 30 years or so) come from ‘contemporary niche theory’ and ‘modern coexistence theory’. Contemporary niche theory developed from consumer-resource theory, in which organisms interact via use of shared resources (though it has expanded to incorporate predators, mutualists, etc.). Analytical tools such as ZNGIs and R* values can be used to predict the likelihood of coexistence (e.g. Tilman 1981, Chase & Leibold 2003). Modern coexistence theory is rooted in Peter Chesson’s 2000 ARES review (and earlier work), and describes coexistence in terms of the fitness and niche components that allow positive population growth.

On the surface these two theories share many conceptual similarities, particularly the focus on measuring niche overlap for coexistence. [Chesson’s original work explicitly connects the R* values from Tilman’s work to species’ fitnesses in his framework as well]. But as a new article in Ecological Monographs points out, the two theories are separated in the literature and in practice. The divergence started with their theoretical foundations: niche theory relied on consumer-resource models and an explicit, mechanistic understanding of organisms’ resource usage, while coexistence theory was presented in terms of Lotka-Volterra competition models and so is phenomenological (e.g. the mechanisms determining competition coefficient values are not directly measured). The authors note, “This trade-off between mechanistic precision (e.g. which resources are regulating coexistence?) and phenomenological accuracy (e.g. can they coexist?) has been inherited by the two frameworks….”

There are strengths and weaknesses to both approaches, and both have been used in important ecological studies. So it's surprising that they are rarely mentioned in the same breath. Letten et al. answer an important question: when directly compared, can we translate the concepts and terms of contemporary niche theory into modern coexistence theory, and vice versa?

Background - when is coexistence expected? 
Contemporary niche theory (CNT), for the simplest case of two limiting resources: for each species, you must know the consumption or impact it has on each resource, the ratio at which the two resources are supplied, and the ZNGIs (zero net growth isoclines, which delimit the resource conditions under which a species can grow). Coexistence occurs when the species are better competitors for different resources, when each species has a greater impact on its more limiting resource, and when the supply ratio of the two resources doesn’t favour one species over the other. (Simple!)

For modern coexistence theory (MCT), stable coexistence occurs when the combination of fitness differences and niche differences between species allow both species to maintain positive per capita growth rates. As niche overlap decreases, increasingly small fitness differences are necessary for coexistence.
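The MCT criterion can be written as a one-line inequality. In the common two-species formulation, with niche overlap rho and a fitness ratio f2/f1, stable coexistence requires rho < f2/f1 < 1/rho. The sketch below (with invented numbers, not values from any empirical system) just evaluates that condition.

```python
# Sketch of the two-species coexistence condition from modern coexistence
# theory (Chesson 2000): stable coexistence when the fitness ratio sits
# inside bounds set by niche overlap rho. Example values are invented.

def coexist(rho, fitness_ratio):
    """rho < f2/f1 < 1/rho: smaller overlap tolerates larger fitness gaps."""
    return rho < fitness_ratio < 1 / rho

print(coexist(0.5, 1.2))   # → True: modest fitness difference, low overlap
print(coexist(0.9, 1.5))   # → False: overlap too high for this fitness gap
```

This makes the statement in the text concrete: as niche overlap rho shrinks, the interval (rho, 1/rho) widens, so larger fitness differences can still permit coexistence.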

Fig 1, from Letten et al. The criteria for coexistence under modern coexistence theory (a) and contemporary niche theory (b).  In (a), f1 and f2 reflect species' fitnesses. In (b) "coexistence of two species competing for two substitutable resources depends on three criteria: intersecting ZNGIs (solid red and blue lines connecting the x- and y-axes); each species having a greater impact on the resource from which it most benefits (impact vectors denoted by the red and blue arrows); and a resource supply ratio that is intermediate to the inverse of the impact vectors (dashed red and blue lines)."

So how do these two descriptions of coexistence relate to each other? Letten et al. demonstrate that:
1) Changing the supply rates of resources (for CNT) impacts the fitness ratio (the equalizing term in MCT). This is a nice illustration of how the environment affects the fitness ratios of species in MCT.

2) Increasing overlap of the impact niche between two species under CNT corresponds to increasing niche overlap under MCT. When two species have similar impacts on their resources, there should be very high niche overlap (a weak stabilizing term) under MCT as well.

3) When two species' ZNGIs converge (i.e. the conditions necessary for positive growth rates become similar), both the stabilizing and equalizing terms in MCT are affected. However, this has little meaningful effect on coexistence (since niche overlap increases, but fitness differences decrease as well).

This is a helpful advance because Letten et al. make the two frameworks speak the same (mathematical) language. Further, it connects a phenomenological framework with a (more) mechanistic one. The stabilizing-equalizing framework (MCT) has been incredibly useful as a way of understanding why we see coexistence, but it is not meant to predict coexistence in new environments or with new combinations of species. On the other hand, contemporary niche theory can be predictive, but is unwieldy and information-intensive. Reconciling the similarities in how both frameworks think about coexistence may be one way forward.

Letten, Andrew D., Ke, Po-Ju, Fukami, Tadashi. 2016. Linking modern coexistence theory and contemporary niche theory. Ecological Monographs: 557-7015. http://dx.doi.org/10.1002/ecm.1242
(This is a monograph for a reason, so I am just covering the major points Letten et al. provide in the paper. It's definitely worth a careful read as well!)

Tuesday, August 11, 2015

#ESA100 Declining mysticism: predicting restoration outcomes.

Habitat restoration literature is full of cases where the outcomes of restoration activities are unpredictable, or where multiple sites diverge from one another despite identical initial restoration activities. This apparent unpredictability is often attributed to undetected variation in site conditions or history, and so restoration outcomes take on a mystical quality, as though the true factors affecting restoration were just beyond our intellect. These types of idiosyncrasies have led some to question whether restoration ecology can be a predictable science.

Photo credit: S. Yasui


The oral session “Toward prediction in the restoration of biodiversity”, organized by Lars Brudvig, showed how restoration ecologists are changing our understanding of restoration, and shedding light on the mystical qualities of success. What is clear from the assembly of great researchers and fascinating talks in this session is that recent ecological theories and conceptual developments are making their way into restoration. Each of the eight talks I saw (I had to miss the last two of the ten) added a novel take on how we predict and measure success, and the factors that influence it. From the incorporation of phylogenetic diversity to assess success (Becky Barak) to measuring dispersal and establishment limitation (Nash Turley), and from priority effects (Katie Stuble) to plant-soil feedbacks (Jonathan Bauer), it is clear that predicting success is a multifaceted problem. Further, as Jeffrey Matthews' talk on trajectories showed, even idiosyncratic restoration trajectories can be grouped into types (e.g., increasing diversity vs. plateauing diversity), and the relevant factors can then be determined.


What was most impressive about this session was the inclusion of coexistence theory and basic demography in understanding how species perform in restoration. Two talks in particular, one from Loralee Larios on coexistence theory and the other from Dan Laughlin on predicting fitness from trait-by-environment interactions, shed new light on predicting restoration. Both of these talks showed how species traits and local environmental conditions influence species’ demographic responses and the outcome of competition. They revealed how basic ecological theory can be applied to restoration; more importantly, and perhaps under-appreciated, they showed how our basic assumptions about traits and about interactions with other species and the environment require ground-truthing to be applicable to important applied problems.

Tuesday, January 27, 2015

50 years of applying theory to ecological problems: where are we now?

Fifty years ago, the seminal volume ‘The Genetics of Colonizing Species’, edited by Herbert G. Baker and G. Ledyard Stebbins, was published, and it marked a new phase for the nascent sciences of ecology and evolutionary biology: applying theories and concepts to understanding applied issues. Despite the name, this book was not really about genetics (though there were several excellent genetics chapters); what it was really about was the collective flexing of the post-modern-synthesis intellectual muscles. Let’s back up for a minute.

The modern synthesis, largely overlooked and forgotten by modern course syllabi, is the single most important event in ecology and evolution since the publication of Darwin’s Origin of Species. Darwin’s concepts of evolution stand as dogma today, but after publishing his book, Darwin and others recognized that he lacked a crucial mechanism: how organismal characteristics are passed on from parent to offspring. He assumed that, whatever the mechanism, offspring varied in small ways from their parents and that there was continuous variation across a population.

For some 30 years, from about 1900 to 1930, evolution via natural selection was thought disproven. With the rediscovery of Mendel’s garden pea breeding experiments in 1900, many influential biologists of the day believed that genetic variation was discontinuous, coming in ‘either-or’ states, and that abrupt changes typified the appearance of new forms. Famously, this thinking led to the belief that ‘hopeful monsters’ were produced, with some becoming new species instantaneously. This model of speciation was referred to as ‘saltationism’.

Of course there were heretics, most notably the statisticians who worked with continuous variation (e.g., Karl Pearson and Ronald Fisher), who refuted the claims made by saltationists in the 1920s. Some notable geneticists changed their position on saltationism because their experiments and observations provided evidence that natural selection was important (most notably T. H. Morgan). However, it wasn't until around WWII that the war was won. A group of scientists working on disparate phenomena published a series of books from 1937 to 1950 that showed how genetics was completely compatible with Darwinian natural selection and could explain a wide variety of observations from populations to biogeography to paleontology. These ‘architects’ and their books were: Theodosius Dobzhansky (Genetics and the Origin of Species); Ernst Mayr (Systematics and the Origin of Species); E. B. Ford (Mendelism and Evolution); George Gaylord Simpson (Tempo and Mode in Evolution); and G. Ledyard Stebbins (Variation and Evolution in Plants). With this, they unified biology, and thus the modern synthesis was born.
Now back to the edited volume. With such a powerful theory, it made sense that there should be a theoretical underpinning to applied ecological problems. The book grew out of a symposium held in Asilomar, California, Feb. 12-16, 1964, organized by C. H. Waddington, who originally saw an opportunity to bring together thinkers on population genetics. But the book became so much more. According to Baker and Stebbins:
“…the symposium … had as its object the bringing together of geneticists, ecologists, taxonomists and scientists working in some of the more applied phases of ecology –such as wildlife conservation, weed control, and biological control of insect pests.”

Thus the goal was really about modern science and its ability to inform ecological management. The invitees included a few of the ‘architects’ (Dobzhansky, Mayr, and Stebbins) and their academic or intellectual progeny, including many of the most important thinkers in ecology and evolution in the 1960s and 70s (Wilson, Lewontin, Sakai, Birch, Harper, etc.).

Given the importance of the Genetics of Colonizing Species in establishing the role that theory might play for applied ecology, it is important to reflect on two important questions: 1) How much have our basic theories advanced in the last 50 years; and perhaps more importantly, 2) has theory provided key insights to solving applied problems?

This book is the fodder for a graduate seminar course I am teaching, and these two questions are the focus of our comparing the chapters to modern papers. Over the next couple of months, students in this course will be contributing blog posts that examine the relationship between the classic chapters and modern work, and they will muse on these two questions. Hopefully by the end of this ongoing dialogue, we will have a better feeling of whether basic theory has advanced our ability to solve applied problems.

Monday, October 21, 2013

Is ecology really failing at theory?


“Ecology is awash with theory, but everywhere the literature is bereft”. That is Sam Scheiner's provocative start to his editorial about what he sees as a major threat to modern ecology. The crux of his argument is simple – theory is incredibly important, it allows us to understand, to predict, to apply, to generalize. Ecology began as a study rooted in system-specific knowledge or natural history in the early 1900s, and developed into a theory-dominated field in the 1960s, when many great theoreticians came to the forefront of ecology. But today, he fears that theory is dwindling in importance in ecology. To test this, he provides a small survey of ecological and evolutionary journals for comparison (Ecology Letters, Oikos, Ecology, AmNat, Evolution, Journal of Evolutionary Biology), recording papers from each journal as either containing no theory, being ‘theory motivated’, or containing theory (either tests of, development of, or reviews of theory). The results showed that papers in ecological journals on average include theory only 60% of the time, compared to 80% for evolutionary papers. Worse, ecological papers seem to be more likely to develop theory than to test it. Scheiner’s editorial (as the title makes clear) is an indictment of this shortcoming of modern ecology.

Plots made from the data table in Scheiner 2013. Top: results combined for all evolution and all ecology papers; bottom: results for papers from individual journals. Bars show the proportion of papers in each category; categories beginning with "Theory" refer to theory-containing papers.
This is not the kind of conclusion that I find myself arguing against too often. And I mostly agree with Scheiner: theory is the basis of good science, and ecology has suffered from a lack of theoretical motivation for work, or pseudo-theoretical motivation (e.g. productivity-diversity, intermediate diversity patterns that may lack an explanatory mechanism). But I think the methods and interpretation, and perhaps some lack of recognition of the differences between ecological and evolutionary research, make the conclusions a little difficult to embrace fully. There are three reasons for this. First, is this brief literature review a good measure of how and why we use theory as ecologists? Second, do counts of papers with or without theory really translate into impact or harm? And third, is it fair to directly compare the ecological and evolutionary literatures, or are there differences in the scope, motivations, and approaches of these fields?

If we are being truly scientific, this might be a good time to point out that the 95% confidence intervals for the percentage of ecology papers with theory overlap with those for evolutionary papers. [Thanks to a commenter for pointing out that the difference is nonetheless likely significant; overlapping confidence intervals do not imply a non-significant difference.] Still, while significant at the 5% level, the overlap is large enough that it is less clear whether the difference is meaningful (and I would accept an argument that this is due to small sample sizes). The results also show that the choice of journal makes a big difference in the kinds of papers found within: Ecology Letters and AmNat had more theoretical or theory-motivated papers, while Oikos had more tests of theory and Ecology had more case studies. This sort of unspoken division of labour between journals means that the amount of theory varies greatly, and most ecologists recognize this: if I write a theory paper, it will undoubtedly be targeted at a journal that commonly publishes theory papers. So to represent ecology more fully, a wider variety of journals and more papers would be helpful. Still, Scheiner's counterargument would likely be that even non-theory papers (case studies, etc.) should include more theory.
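A small statistical aside: overlapping 95% confidence intervals do not by themselves imply a non-significant difference between two proportions. The sketch below uses invented counts (Scheiner's actual per-journal counts are in his data table, which I am not reproducing) to show the standard two-proportion z-test.

```python
import math

# Two-proportion z-test on invented counts (60/100 vs 80/100 papers with
# theory). The counts are illustrative only, chosen to match the ~60% vs ~80%
# figures quoted in the text, not Scheiner's actual sample sizes.

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2)) # pooled standard error
    return (p1 - p2) / se

z = two_prop_z(60, 100, 80, 100)
print(abs(z) > 1.96)   # → True: significant at the 5% level despite CI overlap
```

The general point stands regardless of the exact counts: two intervals can overlap while the test on the difference is still significant, because the standard error of a difference is smaller than the sum of the two individual margins.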

It may be that the proportion of papers that include theory is not a good measure of theory’s importance or inclusion in ecology in general. For example, Scheiner states, “All observations presuppose a theoretical context...the simple act of counting individuals and assessing species diversity relies on the concepts of ‘individual’ and ‘species,’ both of which are complex ideas”. While absolutely true, does this suggest that any paper with a survey of species’ distributions needs to reference theory related to species’ concepts? What is the difference between acknowledging theory used via a citation and more involved discussion of theory? In neither of these cases is the paper “bereft” of theory, but it is not clear from the methods how this difference was dealt with. As well, I think that ecological literature contains far more papers about applied topics, natural history, and system-specific reports than evolutionary biology. Applied literature is an important output of ecology, and as Scheiner states, builds undeniably on years of theoretical background. But on the other hand, a paper about the efficacy of existing reserves in protecting diversity using gap analysis is both important and may not have a clear role for a theoretical section (but will no doubt cite some theoretical and methodological studies). Does this make it somehow of less value to ecology than a paper explicitly testing theory? In addition, case reports and data *are* a necessary part of the theoretical process, since they provide the raw observations on which to build or refine theory. In many ways, Scheiner's editorial is a continuation of the ongoing tension between theory and empiricism that ecology has always faced.

The point I did agree strongly with is that ecology is prone to missing the step between theory development and data collection, i.e. theory testing. Far too few papers test existing theories before the theoreticians have moved on to some new theory. The balance between data collection, theory development, and theory testing is probably more important than the absolute number of papers devoted to one or the other.

Scheiner’s conclusion, though, is eloquent and easy to support, no matter how you feel about his other conclusions: “My challenge to you is to examine the ecological literature with a critical eye towards theory engagement, especially if you are a grant or manuscript reviewer. Be sure to be explicit about the theoretical underpinnings of your study in your next paper…Strengthening the ecological literature by engaging with theory depends on you.”

Monday, December 26, 2011

Rumors of community ecology’s death were greatly exaggerated: reflections on Lawton 1999

In 1999, John Lawton, eminent British ecologist, published a lament for the state of community ecology entitled “Are there general laws in ecology?” Cited more than 600 times, Lawton’s paper forced a re-evaluation of community ecology’s value, success, and even future existence. Other scientists at the time seemed to agree, with papers starting with phrases like “Although community ecology is a struggling science…” and “Given the lack of general laws in ecology…”. Lawton appeared to be suggesting that community ecology be abandoned for the generality of macroecology or the structure of population ecology.

An important point to be made is that Lawton was simply giving a particularly public expression of ecology’s growing pains. In 1999, ecology was at a crossroads between the traditional approach of in-depth system-based study, with its fairly single-minded focus on competition as an explanation for patterns (e.g., Cooper 1993 ‘The Competition Controversy in Community Ecology’ Biology and Philosophy 8: 359-384), and emergent approaches and explanations like neutrality, macroecology, spatial ecology, ecophylogenetics, and improved computational and molecular methods. There was also growing dissent about ecology’s philosophical approach (e.g., Peters 1991 ‘A Critique for Ecology’; Haila and Heininen 1995 ‘Ecology: A New Discipline for Disciplining’ Social Text 42: 153-171): ecologists tended to ignore the Popperian approach, which requires falsification of existing hypotheses, instead looking for support for an existing hypothesis, or at least looking for patterns without considering alternative mechanisms. Not only this, but the applications for ecology were clearer than ever: the Intergovernmental Panel on Climate Change was meeting, and the ecological consequences of human actions were perhaps more obvious than they had ever been. But ecologists were failing at providing solutions. Lawton argued, correctly, that in 1999 ecologists could provide little insight into how a community might change in structure and function in response to changing climate.

Although everyone should read Lawton’s paper, a simple synthesis of his concerns would be this – that community ecology is too contingent, communities are too complex, and therefore community ecology cannot formulate any laws, cannot make predictions, cannot be generalized from one system to another. This makes community ecology suspect as a science (physics being the most common example of an “ideal” science), and certainly not very useful. Lawton suggests that population ecology, where only a few models of growth could explain the majority of species’ dynamics, or macroecology, which focuses on the most general, large-scale patterns, were a better example of how ecology should be practiced.

Community ecology, rather than dying, has experienced an incredible surge in popularity, with a large contingent represented at meetings and in journal publications. Ecology itself is also thriving, with ecology programs among the fastest growing in universities. So what, if anything, has changed? Has ecology addressed Lawton’s criticisms?

Two major things happened in the late 1990s and early 2000s that helped ecologists see beyond this general malaise. The first was that a number of well-thought-out alternative ecological mechanisms explaining community membership were published. Before the late 90s, community ecologists looked for evidence of competition in patterns of community composition, either among locales or through time following disturbance. When local competition was insufficient to explain patterns, researchers likely cited, but did not test, other mechanisms. Or if they did test other mechanisms, say predation, it was as an alternative, mutually exclusive mechanism. The new publications, drawing on previous ideas and concepts, formalized assembly mechanisms like neutral processes or metacommunity dynamics, where uneven fitnesses in a heterogeneous landscape can affect local coexistence. More than simply supplying alternative mechanisms, these allowed for a synthesis in which multiple mechanisms operate simultaneously to affect coexistence. Probably the most emblematic paper of this renewed excitement is Peter Chesson’s 2000 ‘Mechanisms of maintenance of species diversity’, published in Annual Review of Ecology and Systematics. This paper, cited over a thousand times, offers a way forward with a framework that includes competitive and niche differences but can also account for neutral dynamics.

A second major development that rejuvenated ecology was the creation of technological and statistical tools enabling broad-scale synthetic research. Suddenly the search for general explanations, Lawton’s most piercing criticism, became more common and more successful. With the advent of online databases, meta-analytic procedures, and centers that foster synthetic research (e.g., the National Center for Ecological Analysis and Synthesis), ecologists routinely test hypotheses that transcend local idiosyncrasies. Often the capstone publication on a particular hypothesis is no longer a seminal experiment, but rather a meta-analysis that combines all the available information to assess how strongly and how often a particular mechanism affects patterns.

While these theoretical and technological developments have been essential ingredients in this ecological rejuvenation, there has also been a subtle shift in the philosophical approach to what ecological theory can and should do. Criticism in the 1990s (e.g., Peters 1991 ‘A Critique for Ecology’) centered on the inability of ecological theory to make accurate predictions. The concept of science common in ecology in the 1990s was that a rigorous, precise science (i.e., one with laws) results in the ability to accurately predict species composition and species abundances given a set of mechanisms. This view of ecological science has been criticized as simplistic ‘physics-envy’ (e.g., see Massimo Pigliucci’s PhD dissertation ‘Dangerous habits: examining the philosophical baggage of biological research’, published by the University of Tennessee in 2003). The subtle philosophical change has been a move from law = prediction to law = understanding. This is as true for physics as it is for ecology. We don’t expect a physicist to predict precisely where a falling feather will land, but we do expect to understand completely why it landed where it did based on fundamental processes. (For more on the contrast between prediction and understanding, see Wilhelm Windelband’s nomothetic and idiographic knowledge.)


While the feather example above is simplistic, it is telling. In reality a physicist can produce probability contours of where the feather is likely to land, which could be very focused on a calm day or broad on a windy one. This is exactly what ecologists do. Once they understand how differing mechanisms come together to shape diversity, they make probabilistic predictions about the outcome of a set of known mechanisms.

Ecology today is as vibrant as ever. This is not a result of finding new laws that proved Lawton incorrect. Rather, ecologists now have a more sophisticated understanding of how various mechanisms operate in concert to shape diversity. Moreover, conceptual, technological and philosophical revolutions have fundamentally changed what ecologists do and what they are trying to explain. It is a great time to be an ecologist.

Lawton, J. H. (1999). Are there general laws in ecology? Oikos, 84(2), 177-192.


By Marc Cadotte and Caroline Tucker