Are those who do not
learn from (ecological) history doomed to repeat it?
A pervasive view within ecology is that discovery tends to
be inefficient and that ideas reappear as vogue pursuits again and again. For
example, the ecological implications of niche partitioning re-emerge as an
important topic in ecology every decade or so. Niche partitioning was well
represented in ecological literature of the 1960s and 1970s, which focused theoretical and experimental attention on how
communities were structured through resource partitioning. It
would be fair to say that the evolutionary causes and the ecological
consequences of communities structured by niche differences were among the
most important concepts in community ecology during that time. Fast-forward 30
years, and biodiversity and ecosystem functioning (BEF) research has slowly come to the conclusion that niche
partitioning explains the apparent relationship between species diversity and
ecosystem functioning. Some of the findings in the BEF literature could be criticized as simply being rediscoveries of classical theory and experimental evidence that already existed. How does
one interpret these cycles? Are they a failure of ecological progress or
evidence of the constancy of ecological mechanisms?
Ecology is such a young science that this process of
rediscovery seems particularly surprising. Most of the fundamental theory in
ecology arose during this early period: from the 1920s (Lotka, Volterra) and 1930s (Gause) to the 1960s (Wilson, MacArthur,
May, Lawton, etc.). There are several reasons why this was the foundational
period for ecological theory – the science was undeveloped, so there was a void
that needed filling. Ecologists in those years had often been trained in other
disciplines that emphasized mathematical and scientific rigor, so the theory
that developed was in the best scientific tradition, with analytically resolved
equations meant to describe the behaviour of populations and communities. Most
of the paradigms we operate in today owe much to this
period, including an inordinate focus on predator-prey and competitive
interactions and on plant communities, and the use of Lotka-Volterra and consumer-resource models. So
when ecologists reinvent the wheel, is this
foundation of knowledge to blame because it is flawed or incomplete? Or does ecology, in
its education and practice, fail to maintain contact with the knowledge base that already
exists? (Spoiler alert: the answer is going to be both.)
Modern ecologists face the unenviable task of prioritizing
and decoding an exponentially growing body of literature. Ecologists in the
1960s could realistically read all the literature pertaining to community
ecology during their PhD studies, something that is impossible today. Classic papers can also be harder to access than new ones: old papers are less likely to be available
online, and when they are, the quality of the documents is often poor. The
style of some of these papers also makes them difficult for
readers used to the succinct and direct writing more common today. The cumulative
effect of all of this is that we read very little older literature and instead
find papers that are cited by our peers.
True, some fields may have grown up or begun apart from a
base of theory that would have been useful during their development. But it would
also be unfair to ignore the fact that ecology’s foundation is full of cracks. Certain
interactions are much better explored than others. Models of two-species
interactions stand in for complex ecosystems. Lotka-Volterra and related
consumer-resource models make a number of potentially unrealistic assumptions,
and their parameter space has often been incompletely explored. We seem to lack a
hierarchical framework or synthesis of what we do know (although a few people
have tried; Vellend 2010). When models are explored in depth, as Peter Abrams has done in many papers, we discover the complexity and possible futility
of ecological research: anything can result from complex dynamics. The cynic,
then, would argue that models can predict anything (or worse, nothing). This is
unfair, since most modelling papers test hypotheses by manipulating a single
parameter associated with a likely mechanism, but it hints at the limits of current theory.
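To make those simplifications concrete, here is a minimal sketch of the classic two-species Lotka-Volterra competition model, written in standard textbook notation rather than drawn from any particular paper mentioned above:

\[
\frac{dN_1}{dt} = r_1 N_1\left(1 - \frac{N_1 + \alpha_{12} N_2}{K_1}\right),
\qquad
\frac{dN_2}{dt} = r_2 N_2\left(1 - \frac{N_2 + \alpha_{21} N_1}{K_2}\right),
\]

where the \(N_i\) are population sizes, the \(r_i\) intrinsic growth rates, the \(K_i\) carrying capacities, and the \(\alpha_{ij}\) constant competition coefficients. The simplifications are baked in: linear density dependence, fixed pairwise coefficients, no explicit resource dynamics, and a closed, homogeneous environment, exactly the sorts of assumptions that are easy to question in the field.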
So the bleakest view would be this: the body of knowledge that makes up ecology is inadequate and poorly
structured. There is little in the way of synthesis, and though we know many,
many mechanisms that can occur, we
have less understanding of those that are likely
to occur. Developing areas of ecology often have a tenuous connection to the
existing body of knowledge, and if they eventually connect with and contribute
to the central body, it is through an inefficient, repetitive process. For
example, a number of papers have remarked that invasion biology has dissociated
itself from mainstream ecology, reinventing basic mechanisms. The most
optimistic view is that when we discover similar mechanisms multiple times, we
gain increasing evidence for their importance. Further, each cycle of
rediscovery reinforces that there are a finite number of mechanisms that
structure ecological communities (maybe just a handful). When we use the same
sets of mechanisms to explain new patterns or processes, in some ways it is a
relief to realize that new findings fit logically with existing knowledge. For
example, niche partitioning has long been used to explain co-occurrence, and
with the new focus on ecosystem functioning, it has lent itself as an
efficacious explanation there as well. But the question remains: how much of what we do is
inefficient and repetitive, and how much is advancing our basic understanding
of the world?
By Caroline Tucker & Marc Cadotte