By Marc Cadotte and Caroline Tucker
For their centennial, the ESA is asking its members to identify the ecological milestones of the last 100 years, and they’ve asked The EEB & Flow to consider this question as a blog post. There are many to choose from – ecology has grown from an amateur mix of natural history and physiology into a relevant and mature discipline. Part of this growth rests on major theoretical developments from great ecologists like Clements, Gleason, MacArthur, Whittaker, Wilson, Levins, Tilman and Hubbell. These people provided the ideas needed to move ecology into new territory. But ideas alone are not enough in the absence of the necessary tools and methods. Instead, we argue that modern ecology would not exist without statistics.
The most cited paper in ecology and evolutionary biology is a methodological one: Felsenstein’s 1985 paper on confidence limits on phylogenies in Evolution, cited over 26,000 times (Felsenstein, 1985). Statistics is the backbone that ecology develops around. Every new statistical method potentially opens the door to new ways of analyzing data and perhaps new hypotheses. To this end, we show how seven statistical methods changed ecology.
1. P-values and Hypothesis Testing – Setting standards for evidence.
Ecological papers in the early 1900s tended to be data-focused. And that data was analyzed in statistically rudimentary ways. Data was displayed graphically, perhaps with a simple model (e.g. regression) overlaid on the plot. Scientists sometimes argued that statistical tests offered no more than confirmation of the obvious.
At the same time, statistics were undergoing a revolution focused on hypothesis testing. Karl Pearson started it, but Ronald Fisher (Fisher 1925), and later Pearson’s son Egon together with Jerzy Neyman (Neyman & Pearson 1933), produced the theories that would change ecology. These men gave us the p-value – the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true – and gave us the modern view of hypothesis testing: that a scientist should attempt to reject a null hypothesis in favour of some alternative hypothesis.
It’s amazing to think that these concepts are now rote memorization for first-year students, having become so ingrained in modern science. Hypothesis testing using some pre-specified level of significance is now the default method for weighing evidence. The questions asked, the choices about sample size and experimental design, and the evidence deemed necessary to answer those questions were all framed in the shadow of these new methods. p-values are no longer the only approach to hypothesis testing, but it is incontestable that Pearson and Fisher laid the foundations for modern ecology. (See Biau et al 2010 for a nice introduction).
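As a minimal illustration (with invented numbers), a two-sample t-test in R returns exactly this kind of p-value:

```r
# Plant heights (cm) under two treatments (made-up data)
control    <- c(12.1, 13.4, 11.8, 12.9, 13.1, 12.5)
fertilized <- c(14.2, 13.8, 15.1, 14.6, 13.9, 14.8)

# Null hypothesis: no difference in mean height between treatments.
# The reported p-value is the probability of a difference at least this
# extreme arising if the null hypothesis of no effect were true.
t.test(control, fertilized)
```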
2. Multivariate statistics: Beginning to capture ecological complexity.
Because the first statistical tests arose from agricultural studies, they were designed to test for differences among treatments or deviations from known distributions. They applied powerfully to experiments manipulating relatively few factors and measuring relatively few variables. However, these types of analyses did not easily permit investigation of the complex patterns and mechanisms observed in natural communities.
Often what community ecologists have in hand are multiple datasets describing communities, including species composition and abundance, environmental measurements (e.g. soil nutrients, water chemistry, elevation, light, temperature, etc.), and perhaps the distances between communities. And what researchers want to know is how compositional (multi-species) change among communities is determined by environmental variables. We shouldn’t understate the importance of this type of analysis: in one tradition of community ecology, we would simply analyze changes in richness or diversity. But communities can show a lack of variation in diversity even when they are being actively structured: diversity is simply the wrong currency.
Many of the first forays into multivariate statistics came through measuring the compositional dissimilarity, or distance, between communities; Jaccard (Jaccard, 1901) and Bray and Curtis (Bray & Curtis, 1957) were early ecologists who invented distance-based measures. Correlating compositional dissimilarity with environmental differences required ordination techniques. Principal Component Analysis (PCA) was actually invented by Karl Pearson around 1900, but computational limitations constrained its use until the 1980s. Around this time, other methods began to emerge that ecologists started to employ (Hill, 1979; Mantel, 1967). The development of new methods continues today (e.g. Peres-Neto & Jackson, 2001), and the use of multivariate analysis is a community ecology staple.
There are now full texts dedicated to the implementation of multivariate statistical tests with ecological data (e.g., Legendre & Legendre, 1998). Further, there are excellent resources available in R (more on this later), especially the package vegan (Oksanen et al., 2008), which implements most major multivariate methods. Going forward, it is clear that multivariate techniques will continue to be reassessed and improved (e.g. Guillot & Rousset, 2013), and there will be a greater emphasis on articulating multivariate hypotheses and perhaps using multivariate techniques to predict communities (Laughlin, 2014) – not just explain variation.
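As a sketch of a typical workflow, using vegan’s built-in varespec and varechem datasets:

```r
library(vegan)
data(varespec, varechem)  # lichen pasture communities + soil chemistry

# Compositional dissimilarity between communities (Bray-Curtis)
comm.dist <- vegdist(varespec, method = "bray")

# Ordinate communities with non-metric multidimensional scaling
ord <- metaMDS(varespec, trace = FALSE)

# Ask which environmental variables correlate with compositional change
envfit(ord, varechem)
```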
3. Null models: Disentangling patterns and processes.
Ecology occurs over large spatial and temporal scales, and so it has always relied heavily on observational data. Gathering observational data is often much easier than doing experimental work at the same spatial or temporal scale, but it is also complicated to analyze: variation from a huge number of unmeasured variables could well weaken patterns or create unexpected ones. Still, the search for patterns drove the analysis of observational data – patterns along environmental gradients, patterns in species co-occurrences, patterns in traits. The question of what represented a meaningful pattern was harder to answer.
It seems that ecology could not go on looking at patterns forever, but it took some heated arguments to finally change this. The ‘null model wars’ revolved around Jared Diamond’s putative assembly rules for birds on islands (Diamond 1975), which relied on a “checkerboard” pattern of species co-occurrences. The argument for null models was led by Connor and Simberloff (Connor & Simberloff 1979) and later joined by Nicholas Gotelli (e.g. Gotelli & Graves 1996). A null model, they pointed out, was necessary to determine whether observed patterns of bird distribution were actually different from random patterns of apparent non-independence between species pairs. Further, other ecological mechanisms (different habitat requirements, speciation, dispersal limitation) could also produce non-independence between species pairs. The arguments about how to appropriately formulate null models have never completely ended, but null models now drive ecological analyses. Tests of species-area relationships, phylogenetic diversity within communities, limiting similarity of body sizes or traits, global patterns of diversity, species co-occurrences, niche overlaps, and nestedness in networks likely all include a null model of some sort.
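A minimal sketch of this logic, using vegan’s checkerboard statistic on an invented presence-absence matrix and a null model that preserves row and column sums:

```r
library(vegan)

# A made-up presence-absence matrix: 10 sites x 10 species
set.seed(1)
comm <- matrix(rbinom(100, 1, 0.4), nrow = 10,
               dimnames = list(paste0("site", 1:10), paste0("sp", 1:10)))

# Compare the observed checkerboard statistic to a null distribution built
# by shuffling the matrix while keeping row and column totals fixed
oecosimu(comm, nestedchecker, method = "quasiswap", nsimul = 999)
```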
The null model wars have been referred to as a difficult and contentious time for ecology. Published work (representing significant amounts of time and funding) perhaps needed to be re-evaluated to differentiate between true and null ecological patterns. But despite these growing pains, null models have forced ecology to mature beyond pattern-based analyses to more mechanistic ones.
4. Spatial statistics: Adding distance and connectivity.
Spatially-explicit statistics and models seem like an obvious necessity for ecology. After all, the movement of species through space is an immensely important part of their life history, and most ecologically relevant characteristics of landscapes – resources, climate, habitat – vary through space. Despite this, until quite recently ecological models tended to assume that species and processes were distributed uniformly through space, and that species’ movement was uniform or random. The truism that points close together in space should, all else being equal, be more similar than distant points may be obvious, but acting on it involved a degree of statistical complexity and computing power that was difficult to achieve.
Fortunately for ecology, the late 1980s and early 1990s were a time of rapid computing developments that enabled the incorporation of increasing spatial complexity into ecological models (Fortin & Dale 2005). Existing methods – some ecological, some borrowed from geography – finally became practical with the available technology, including nearest neighbour distances, Ripley’s K, variograms, and the Mantel test (Fortin, Dale & ver Hoef 2002). Ideas now fundamental to ecology, such as connectivity, edge effects, spatial scale (“local” vs. “regional”), spatial autocorrelation, and spatial pattern (non-random, non-uniform spatial distributions), are the inheritance of this development. Many fields of ecology have incorporated spatial methods or even owe their development to spatial ecology, including metacommunities, landscape ecology, conservation and management, invasive species, disease ecology, population ecology, and population genetics. Pierre Legendre asked in his seminal paper on the topic (Legendre 1993) whether space was trouble or a new paradigm. It is clear that space was an important addition to ecological analyses.
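For instance, a Mantel test asks whether two distance matrices are correlated; here is a minimal sketch (the site coordinates are invented purely for illustration):

```r
library(vegan)
data(varespec)

# Hypothetical site coordinates, made up for this example
set.seed(2)
coords <- cbind(x = runif(nrow(varespec)), y = runif(nrow(varespec)))

# Does compositional dissimilarity increase with geographic distance?
comm.dist <- vegdist(varespec, method = "bray")
geo.dist  <- dist(coords)
mantel(comm.dist, geo.dist, permutations = 999)
```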
5. Measuring diversity: rarefaction and diversity estimators.
How many species are there in a community? This is a question that inspires many biologists, and yet one that is very difficult to answer. Cryptic, dormant, rare and microscopic organisms are often undersampled, and accurate estimates of community diversity need to deal with these undersampled species.
Communities may seem to have different numbers of species simply because some have been sampled more thoroughly than others: unequal sampling effort can distort real differences or similarities in species numbers. For example, in some recent analyses of plant diversity using the freely available species occurrence data from GBIF, we found that Missouri seems to have the highest plant diversity – a likely outcome of the fact that the Missouri Botanical Garden routinely samples local vegetation and makes the data available. Methods for estimating diversity under equalized sampling effort were developed by a number of ecologists (Howard Sanders, Stuart Hurlbert, Dan Simberloff, and Ken Heck) in the 1960s and 1970s, resulting in modern rarefaction techniques.
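A minimal sketch using vegan’s rarefy() and its built-in BCI dataset:

```r
library(vegan)
data(BCI)  # tree counts in 50 Barro Colorado Island plots

# Rarefy every plot down to the number of stems in the smallest sample,
# so that richness is compared at an equalized sampling effort
raremax <- min(rowSums(BCI))
rarefy(BCI, sample = raremax)
```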
Sampling effort was one problem, but ecologists also recognized that even with equivalent sampling effort, we are likely to miss rare and cryptic species. Most notably, Anne Chao and Ramon Margalef developed a series of diversity estimators in the 1980s-1990s. These estimators place emphasis on the numbers of rare species, because rare species give insight into the unobserved ones: all things being equal, the community with more rare species likely has more unobserved species. Such estimators are particularly important when we need to estimate the ‘true’ diversity from a limited number of samples. For example, researchers at Sun Yat-sen University in Guangzhou, China, recently performed metagenomic sampling of almost 2000 soil samples from a 500x1500 m forest plot. From these samples they applied all of the known diversity estimators and concluded that there are about 40,000 species of bacteria and 16,000 species of fungi in this forest plot! This level of diversity is truly astounding; without genetic sampling and the suite of diversity estimators, we would have no way of knowing that this amazing, complex world exists beneath our feet.
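As an illustration, the classic Chao1 estimator adds f1^2 / (2*f2) to the observed richness, where f1 and f2 are the numbers of species seen exactly once and exactly twice. A hand-rolled version alongside vegan’s estimateR():

```r
library(vegan)
data(BCI)

# Classic Chao1: observed richness plus f1^2 / (2 * f2)
# (vegan uses a bias-corrected variant)
chao1 <- function(x) {
  x  <- x[x > 0]
  f1 <- sum(x == 1)  # species seen once
  f2 <- sum(x == 2)  # species seen twice
  length(x) + f1^2 / (2 * f2)
}
chao1(colSums(BCI))  # estimate for the pooled 50 plots

# vegan's estimateR() returns Chao1 and ACE with standard errors
estimateR(colSums(BCI))
```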
As we move forward, researchers are measuring diversity in new ways, quantifying phylogenetic and functional diversity, and we will need new methods to estimate these for entire communities and habitats. Anne Chao and colleagues have recently published a method to estimate true phylogenetic diversity (Chao et al., 2014).
6. Hierarchical and Bayesian modelling: Understanding complex living systems.
Each previous section reinforces the fact that ecology has embraced statistical methods that allow it to incorporate complexity. Accurately fitting models to observational data might require large numbers of parameters with different distributions and complicated interconnections. Hierarchical models offer a bridge between theoretical models and observational data: they can account for missing or biased data, latent (unmeasured) variables, and model uncertainty. In short, they are ideal for the probabilistic nature of ecological questions and predictions (Royle & Dorazio, 2008). The computational and conceptual tools have greatly advanced over the past decade, with a number of good computer programs (e.g., BUGS) available and several useful texts (e.g., Bolker 2008).
The usage of these types of models has been closely (but not exclusively) tied to Bayesian approaches to statistics. Much has been written about Bayesian statistics, and no little controversy, most of it beyond the scope of this post. The focus is on assigning a probability distribution to a hypothesis (the prior distribution), which can be updated sequentially as more information is obtained. Such an approach has natural similarities to management and applied practice in ecology, where expert or existing knowledge is already incorporated into decision making and predictions, if informally. Often, too, hierarchical models can be tailored to fit our hypotheses better than traditional univariate statistics can. For example, species occupancy or abundance can be modelled as probabilities based on detection error, environmental fit and dispersal likelihood.
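A toy sketch of this sequential updating, using a conjugate beta-binomial model of site occupancy (detection error ignored, all numbers invented):

```r
# Prior belief about the probability that a site is occupied:
# Beta(1, 1), i.e. uniform over [0, 1]
a <- 1; b <- 1

# Season 1: the species is detected at 3 of 10 surveyed sites
a <- a + 3;  b <- b + (10 - 3)

# Season 2 updates the same posterior: 7 detections at 20 sites
a <- a + 7;  b <- b + (20 - 7)

# Posterior mean and 95% credible interval for occupancy probability
a / (a + b)
qbeta(c(0.025, 0.975), a, b)
```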
There is much that can be said about hierarchical and Bayesian statistical models, and their incorporation into ecology is still in progress. The promise of these methods – that the complexity inherent in ecological processes can be more closely captured by statistical models, and that model predictions will improve as a result – is one of the most important developments in recent years.
7. The availability, community development and open sharing of statistical methods.
The availability of and access to statistical methods today is unparalleled in any time in human history, and that is largely because of the program R. There was a time, not long ago, when a researcher might have had to purchase a new piece of software to perform a specific analysis, or wait years for new analyses to become available. The reasons for this new availability of statistical methods are threefold. First, R is freely available, without any fees limiting access. Second, the community of users contributes to it, meaning that the specific analyses required for different questions are available, and often formulated to handle the most common types of data. Finally, new methods appear in R as they are developed: cutting-edge techniques are immediately available, further fostering their use and scientific advancement.
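As a trivial illustration, a community-contributed method is often just a command away (vegan used here as the example):

```r
install.packages("vegan")   # fetch a contributed package from CRAN
library(vegan)              # and use it immediately
library(help = "vegan")     # browse the methods it implements
```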
References
Bolker, B. M. (2008). Ecological models and data in R. Princeton University Press.
Bray, J. R., & Curtis, J. T. (1957). An Ordination of the Upland Forest Communities of Southern Wisconsin. Ecological Monographs, 27(4), 325–349. doi:10.2307/1942268
Chao, A., Chiu, C.-H., Hsieh, T. C., Davis, T., Nipperess, D. A., & Faith, D. P. (2014). Rarefaction and extrapolation of phylogenetic diversity. Methods in Ecology and Evolution. doi:10.1111/2041-210X.12247
Connor, E.F. & Simberloff, D. (1979) The assembly of species communities: chance or competition? Ecology, 60, 1132-1140.
Diamond, J.M. (1975) Assembly of species communities. Ecology and evolution of communities (eds M.L. Cody & J.M. Diamond), pp. 324-444. Harvard University Press, Massachusetts.
Felsenstein, J. (1985). Confidence limits on phylogenies: An approach using the bootstrap. Evolution, 39, 783–791.
Fisher, R.A. (1925) Statistical methods for research workers. Oliver and Boyd, Edinburgh.
Fortin, M.-J. & Dale, M. (2005) Spatial Analysis: A guide for ecologists. Cambridge University Press, Cambridge.
Fortin, M.-J., Dale, M. & ver Hoef, J. (2002) Spatial analysis in ecology. Encyclopedia of Environmetrics (eds A.H. El-Shaawari & W.W. Piegorsch). John Wiley & Sons.
Gotelli, N.J. & Graves, G.R. (1996) Null models in ecology. Smithsonian Institution Press Washington, DC.
Guillot, G., & Rousset, F. (2013). Dismantling the Mantel tests. Methods in Ecology and Evolution, 4(4), 336–344. doi:10.1111/2041-210x.12018
Hill, M. O. (1979). DECORANA — A FORTRAN program for Detrended Correspondence Analysis and Reciprocal Averaging. Ithaca, NY: Cornell University.
Jaccard, P. (1901). Etude comparative de la distribution florale dans une portion des Alpes et du Jura. Bulletin de la Société Vaudoise des Sciences Naturelles, 37, 547–579.
Laughlin, D. C. (2014). Applying trait-based models to achieve functional targets for theory-driven ecological restoration. Ecology Letters, 17(7), 771–784. doi:10.1111/ele.12288
Legendre, P. (1993) Spatial autocorrelation: trouble or new paradigm? Ecology, 74(6), 1659–1673.
Legendre, P., & Legendre, L. (1998). Numerical Ecology. Amsterdam: Elsevier Science B. V.
Mantel, N. (1967). The detection of disease clustering and a generalized regression approach. Cancer Research, 27, 209–220.
Neyman, J. & Pearson, E.S. (1933) On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society A, 231, 289–337.
Oksanen, J., Kindt, R., Legendre, P., O’Hara, R., Simpson, G. L., Stevens, M. H. H., & Wagner, H. (2008). Vegan: Community Ecology Package. Retrieved from http://vegan.r-forge.r-project.org/
Peres-Neto, P. R., & Jackson, D. A. (2001). How well do multivariate data sets match? The advantages of a Procrustean superimposition approach over the Mantel test. Oecologia, 129, 169–178.
Royle, J. A., & Dorazio, R. M. (2008). Hierarchical Modeling and Inference in Ecology. Academic Press.
6 comments:
Ecologists study only statistics, not mathematics (calculus, linear algebra, etc.), so they confuse statistics with theory. Statistical methods are empirical methods, not theory. MacArthur, Tilman and Hubbell etc. did no or very little statistics.
Hi Hans, I don't think that this post implies that a statistical method is a theory, rather that ecologists deal with messy data and new statistical methods have opened new doors. Some ecologists do theory, all ecologists do stats...
Great list - interesting to reflect about these innovations for understanding the development of ecology as a scientific field, but I think also pure stats courses would often profit from a bit of historical perspective. My experience is that it really aids the learning process to understand where a method is coming from and what it contributed at its time, even if it may be outdated now.
Thanks Florian. I agree, stats courses when I was an undergrad tended to be a bit dry with a lot of memorization. The history is really pretty fascinating - between Karl Pearson and Ronald Fisher, the null models, and Bayesian vs Frequentist approaches, there's lots of interesting conflict, for example.
Hans - Our reference to MacArthur and Tilman, etc., was only to point out that theoretical developments (consumer-resource models, R*, neutral theory) often get attention as fundamental moments for ecology.
You don't hear as much about statistical developments, hence the focus of this post. Only a subset of ecologists do theory (mathematical model development), but the majority use statistics in some form. Even a theory person like Hubbell might use statistics, if only to compare the fit of observed species abundance distributions with those predicted by his theory, etc.
I agree that sometimes the terminology overlaps - for example, I've heard different people state that they do "ecological modelling", some of whom were theoreticians, some of whom built statistical models such as SDMs.