Wednesday, October 15, 2014

Putting invasions into context

How can we better predict invasions?

Ernesto Azzurro, Victor M. Tuset, Antoni Lombarte, Francesc Maynou, Daniel Simberloff, Ana Rodríguez-Pérez and Ricard V. Solé. External morphology explains the success of biological invasions. Ecology Letters (2014) 17: 1455–1463.

Fridley, J. D. and Sax, D. F. (2014), The imbalance of nature: revisiting a Darwinian framework for invasion biology. Global Ecology and Biogeography, 23: 1157–1166. doi: 10.1111/geb.12221

Active research programs into invasion biology have been ongoing since the 1990s, but their results make clear that while it is sometimes possible to explain invasions post hoc, it is very difficult to predict them. Darwin’s naturalization hypothesis gets so much press in part because it was the first statement of the now-common acknowledgement that the struggle for existence should be strongest amongst closely related species, implying that ‘invasive species must somehow be different than native species to be so successful’. Defining more generally what this means for invasive species in terms of niche space, trait space, or evolutionary history has had at best mixed results.

A couple of recent papers come to the similar, but subtly different, conclusion that predicting invasion success is really about recognizing context. For example, Azzurro et al. point out that despite the usual assumption that species’ traits reflect their niches, trait approaches to invasion that focus on identifying traits associated with invasiveness have not been successful. Certainly invasive species may be more likely to show certain traits, but these are often very weak from a predictive standpoint, since many non-invasive species also have these traits. Morphological approaches may still be useful, but the authors argue that the key is to consider the morphological (trait) space of the invaders in the context of the morphological space used by the resident communities.
Figure 1. From Azzurro et al. 2014. A resident community uses morphospace as delimited by the polygon in (b). Invasive species may fill morphospace within the same area occupied by the community ((c) or (d)) or may use novel morphospace (e). Invasiveness should be greatest in situation (e).
The authors use as an illustration the largest known invasion by fish: the invasion of the Mediterranean Sea after the construction of the Suez Canal, an event known as the ‘Lessepsian migration’. They hypothesize that a new species entering a community that fills some defined morphospace will face one of three scenarios (Figure 1): 1) it will fall within the existing morphospace and occupy less morphospace than its closest neighbour; 2) it will fall within the existing morphospace but occupy more morphospace than its closest neighbour; or 3) it will occupy novel morphospace compared to the existing community. The prediction is that invasion success should be highest for this third group, for whom competition should be weakest. Their early results are encouraging, if not perfect: 73% of species located outside of the resident morphospace became abundant or dominant in the invaded range. (Figure 2)
Figure 2. From Azzurro et al. 2014. Invasion success of fish to the Mediterranean Sea in relation to morphospace, over multiple historical periods. Invasive (red) species tended to exist in novel morphospace compared to the resident community.
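The core test here, whether an invader falls inside or outside the polygon delimiting resident morphospace, can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the resident coordinates below are invented, and a real analysis would use ordination axes derived from measured morphological traits.

```python
# Hypothetical sketch: classify an invader as inside or outside the
# resident community's morphospace polygon (cf. Figure 1).
# Species are points in a 2-D morphological ordination (invented data).

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive if b is left of o->a)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: counter-clockwise convex hull of 2-D points."""
    pts = sorted(set(points))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def outside_morphospace(residents, invader):
    """True if the invader's trait point falls outside the residents' hull."""
    hull = convex_hull(residents)
    n = len(hull)
    # inside (or on the edge) iff the point is left of every CCW hull edge
    return any(cross(hull[i], hull[(i + 1) % n], invader) < 0 for i in range(n))

# Resident fish community occupying a square patch of morphospace
residents = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5)]

print(outside_morphospace(residents, (0.4, 0.6)))  # within resident space: False
print(outside_morphospace(residents, (2.0, 2.0)))  # novel morphospace: True
```

Under the paper's prediction, species for which this test returns True (scenario 3) should be the ones most likely to become abundant or dominant in the invaded range.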
A slightly different approach to invasion context comes from Jason Fridley and Dov Sax, who re-envision invasion in terms of evolution via the Evolutionary Imbalance Hypothesis (EIH). In the EIH, the context for invasion success is the characteristics of the invaders' home range. If, as Darwin postulated, invasion success is simply the natural outcome of natural selection, then considering the context for natural selection may be informative.

In particular, the postulates of the EIH are that 1) Evolution is contingent and imperfect, thus species are subject to the constraints of their histories; 2) The degree to which species are ecologically optimized increases as the number of ‘evolutionary experiments’ increases, and with the intensity of competition (“Richer biotas of more potential competitors and those that have experienced a similar set of environmental conditions for a longer period should be more likely to have produced better environmental solutions (adaptations) to any given environmental challenge”); and 3) Similar sets of ecological conditions exist around the world. When biotas from such regions are mixed, some species will have higher fitness and may become invasive.

Figure 3. From Fridley and Sax, 2014.
How to apply this rather conversational set of tenets to actual invasion research? A few factors can be considered when quantifying the likelihood of invasion success: “the amount of genetic variation within populations; the amount of time a population or genetic lineage has experienced a given set of environmental conditions; and the intensity of the competitive environment experienced by the population.” In particular, the authors suggest using phylogenetic diversity (PD) as a measure of the evolutionary imbalance between regions. They show for several regions that the maximum PD in a home region is a significant predictor of the likelihood of species from that region becoming invasive. The obvious issue with max PD being used as a predictor is that it is a somewhat imprecise proxy for “evolutionary imbalance” and one that correlates with many other things (including often species richness). Still, the application of evolutionary biology to a problem often considered to be primarily ecological may make for important advances. 
Figure 4. From Fridley and Sax 2014. Likelihood of becoming invasive vs. max PD in the species' native region.
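As a toy version of this kind of analysis, one could fit a single-predictor logistic regression of invasion outcome on home-region maximum PD. Everything below (the PD values, the outcomes, and the plain gradient-ascent fit) is a hypothetical illustration, not the authors' data or their statistical method.

```python
# Hypothetical sketch: logistic fit of invasion outcome vs. max PD of
# the home region (invented data; simple gradient ascent, not the
# model actually used by Fridley and Sax).
import math

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Fit P(invasive) = 1 / (1 + exp(-(b0 + b1*x))) by gradient ascent
    on the log-likelihood (concave, so this converges for small lr)."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

# Hypothetical species pool: scaled max PD of each home region, and
# whether species from that region became invasive elsewhere (0/1)
max_pd   = [0.2, 0.5, 0.8, 1.1, 1.5, 1.9, 2.3, 2.8]
invasive = [0,   0,   0,   1,   0,   1,   1,   1]

b0, b1 = fit_logistic(max_pd, invasive)
print(b1 > 0)  # positive slope: higher home-region PD, higher invasion odds
```

A plot of the fitted curve against max PD would look qualitatively like Figure 4: the probability of becoming invasive rising with the evolutionary "maturity" of the source region.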

Monday, October 6, 2014

What is ecology’s billion dollar brain?

(*The topic of the billion dollar proposal came up with Florian Hartig (@florianhartig), with whom I had an interesting conversation on the idea*)

Last year, the European Commission awarded 1 billion dollars to a hugely ambitious project to recreate the human brain using supercomputers. If successful, the Human Brain Project would revolutionize neuroscience. (Although skepticism remains as to whether this project is more of a pipe dream than a reasonable goal). For ecology and evolution, where infrastructure costs are relatively low (compared to, say, a Large Hadron Collider), 1 billion dollars means that there is essentially no financial limitation on your proposal, so nearly any project, experiment, analysis, dataset, or workforce is within the realm of possibility. The European Commission call was for a proposal for research to occur over 10 years, meaning that the constraints on project length (usually driven by grant terms and graduate student theses) are low. So if you could write a proposal upon which there are essentially no constraints at all, what would it be for? (*if you think that 10 years is too limiting for a proper long-term study, feel free to assume you can set up the infrastructure in 10 years and run it for as long as you want).

The first thing I recognized was that in proposing the 'ultimate' ecological project, you're implicitly stating how you think ecology should be done. For example, you could focus on the most general questions and start from the bottom. If this is the case, it might be most effective to ask a single fundamental question. It would not be unreasonable to propose to measure metabolic rates under standardized conditions for every extant species, and to develop a database of parameter values for them. This would be the most complete ecological database ever, which certainly seems like an achievement.

But perhaps you choose something that is still of general importance but less simplistic, and run a standardized experiment in multiple systems. This has been effective for the NutNet project. Propose to run replicate experiments with top-of-the-line warming arrays on plant communities in every major ecosystem. Done for 10 years, over a reasonably large scale, with data recorded on physiology and important life history events, this might provide some ability to predict how warming temperatures are affecting ecosystems. 

The alternative is to embrace ecological complexity (and the ability to deal with complexity that 1 billion dollars offers). Given the analytic power, equipment, and man hours that 1 billion dollars can buy, you could record every single variable (biotic, abiotic, weather) in a particular system (say, a wetland) for every second of every day. If you don’t simply drown in the data you’ve gathered, maybe you can reconstruct that wetland, predicting every property from the details. While that may seem a bit extreme, if you are a complexity-fatalist, you start to recognize that even the general experiments are quickly muddied by complexity. Even that simple, general list of species' metabolic parameters quickly spirals into complexity. Does it make sense to use only one set of standardized conditions? After all, conditions that are reasonable for a rainforest tree are meaningless for an ocean shark or a tundra shrub. Do you use the mean condition for each ecosystem as the standard, knowing that species may only interact with the variance or extremes in those conditions (such as desert annuals that bloom after rains, or bacteria that use cyst stages to avoid harsh environments)? What about ontogenetic or plastic differences? Intraspecific differences?

It's probably best then to realize that there is no perfect ecological experiment. The interesting thing about the Human Brain Project is that neuroscience is more like ecology than many scientific fields - it deals with complex organic systems with emergent properties and great variability. What ecology needs, ever so simplistically, is more data and better models. Maybe, like neuroscience, we should request a supercomputer that could locate and incorporate all ecological data ever collected, across fields (natural history, forestry, agronomy, etc.) and recognize the connections between those data, based on geography, species, or scale. This could both give us the most sophisticated possible data map, showing where the data gaps exist, and where areas are data-rich and ready for model development. Further, it could (like the Human Brain) begin to develop models for the interconnections between data.

Without too many billion dollar calls going on, this is only a thought experiment, but I have yet to find someone who had an easy answer for what they would propose to do (ecologically) with 1 billion dollars. Why is it so difficult?

Monday, September 15, 2014

Links: Reanalyzing R-squares, NSF pre-proposals, and the difficulties of academia for parents

First, Will Pearse has done a great job of looking at the data behind the recent paper on declining R2 and p-values in ecology, and his reanalysis suggests a much weaker relationship between R2 values and time (explaining only 4% of variation, rather than the 62% reported). Because the variance is both very large within years and not equal through time, a linear model may not be ideal for capturing this relationship.
Thanks @prairiestopatchreefs for linking this.
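One way to see why unequal variance matters: an ordinary least-squares slope treats every yearly mean equally, while weighting each year by the inverse of its variance down-weights the noisy years, and the two fits can disagree. The yearly means and variances below are invented for illustration; this is not Pearse's data or his reanalysis method.

```python
# Hypothetical sketch: OLS vs. inverse-variance-weighted slope for a
# regression of mean reported R2 on publication year (invented data).

def wls_slope(x, y, w):
    """Weighted least-squares slope of y on x with weights w."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

years   = [1950, 1970, 1990, 2010]
mean_r2 = [0.78, 0.62, 0.55, 0.50]
var_r2  = [0.01, 0.02, 0.05, 0.10]   # larger within-year spread in later years

ols = wls_slope(years, mean_r2, [1.0] * len(years))
wls = wls_slope(years, mean_r2, [1.0 / v for v in var_r2])
print(ols, wls)  # both negative here, but the slopes differ
```

With equal weights the two estimates coincide; the point is simply that heteroscedastic yearly means can pull an unweighted linear fit around, which is why a plain linear model of R2 against year can overstate the trend.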

From the Sociobiology blog, something that most US ecologists would probably agree on: the NSF pre-proposal program has been around long enough (~3 years) to judge on its merits, and it has not been an improvement. In short, pre-proposals are supposed to use a 5-page proposal to allow NSF to identify the best ideas and then invite those researchers to submit a full proposal similar to the traditional application. Joan Strassmann argues that not only is this program more work for applicants (you must write two very different proposals in short order if you are lucky enough to advance), it offers very few benefits for them.

The reasons for the gender gap in STEM academic careers get a lot of attention, and rightly so given the continuing underrepresentation of women. The demands of parenthood often receive some of the blame. The Washington Post is reporting on a study that considers parenthood from the perspective of male academics. The study took an interview-based, sociological approach, and found that the "majority of tenured full professors [interviewed] ... have either a full-time spouse at home who handles all caregiving and home duties, or a spouse with a part-time or secondary career who takes primary responsibility for the home." But the majority of these men also said they wanted to be more involved at home. As one author put it, academic science doesn’t just have a gender problem, but a family problem: men or women, if they want to have families, are likely to face significant challenges.

On a lighter note, if you've ever joked about PNAS' name, a "satirical journal" has taken that joke and run with it. PNIS (Proceedings of the Natural Institute of Science) looks like the work of bored post-docs, which isn't necessarily a bad thing. The journal has immediately split into two subjournals: PNIS-HARD (Honest and Real Data) and PNIS-SOFD (Satirical or Fake Data), which have rather interesting readership projections:

Friday, September 12, 2014

Do green roofs enhance urban conservation?

Green roofs are now commonly included in the design of new public and private infrastructure, bolstered by energy savings, environmental recognition and certification, bylaw compliance, and in some cases tax or other direct monetary incentives (e.g., here). While green roofs clearly provide local environmental benefits, such as reduced albedo (sunlight reflectance), storm water retention, CO2 sequestration, etc., green roof proponents also frequently cite biodiversity and conservation enhancement as a benefit. This last claim has not been broadly tested, but existing data was assessed by Nicholas Williams and colleagues in a recent article published in the Journal of Applied Ecology.

Williams and colleagues compiled all available literature on biodiversity and conservation value of green roofs and they explicitly tested six hypotheses: 1) Green roofs support higher diversity and abundance compared to traditional roofs; 2) Green roofs support comparable diversity and composition to ground habitat; 3) Green roofs using native species support greater diversity than traditional green roofs; 4) Green roofs aid in rare species conservation; 5) Green roofs replicate natural communities; and 6) Green roofs facilitate organism movement through urban areas.

Photo by: Marc Cadotte

What is surprising is that, given the abundance of papers on green roofs in ecology and environmental journals, very few have quantitatively assessed these hypotheses. What is clear is that green roofs support greater diversity and abundance than non-green roofs, but we know very little about how green roofs compare to other remnant urban habitats in terms of species diversity, ecological processes, or rare species. Further, while some regions are starting to require that green roofs try to maximize native biodiversity, relatively few comparisons exist; those that do reveal substantial benefits for biodiverse green roofs.

How well green roofs replicate ground or natural communities is an important question, with insufficient evidence. It is important because, according to the authors, there is some movement to use green roofs to offset lost habitat elsewhere. This could represent an important policy shift, and one that may ultimately lead to lost habitats being replaced with lower quality ones. This is a policy direction that simply requires more science.

There is some evidence that green roofs, if designed correctly, could aid in rare species conservation. However, green roofs, which by definition are small patches in an inhospitable environment, may assist rare species management in only a few cases. The authors caution that enthusiasm for using green roofs to assist with rare species management needs to be tempered by designs that are biologically and ecologically meaningful to target species. They cite an example where green roofs in San Francisco were designed with a plant that is an important food source for an endangered butterfly, the Bay Checkerspot, which currently persists in a few fragmented populations. The problem is that the maximum dispersal distance of the butterfly is about 5 km, and there are no populations within 15 km of the city. These green roofs have the potential to aid in rare species conservation, but only if coupled with additional management activities, such as physically introducing the butterfly to the green roofs.

Overall, green roofs do provide important environmental and ecological benefits in urban settings. Currently, very few studies document the ways in which green roofs provide ecological processes and services, enhance biodiversity, replicate other ground-level habitats, or aid in biodiversity conservation. As the prevalence of green roofs increases, we will need scientifically valid ecological understanding of green roof benefits to better engage with municipal managers and affect policy.

Williams, N., Lundholm, J., & MacIvor, J. (2014). Do green roofs help urban biodiversity conservation? Journal of Applied Ecology DOI: 10.1111/1365-2664.12333

Monday, September 8, 2014

Edicts for peer reviewing

Reviewing is a rite of passage for many academics. But for most graduate students or postdocs, it is also a bit of a trial by fire, since reviewing skills are usually assumed to be gained osmotically rather than through any specific training. Unfortunately, the reviewing system seems ever more complicated for reviewers and authors alike (slow, poor quality, unpredictable). Concerns about modern reviewing pop up every few months, and different solutions to the difficulties of finding qualified reviewers and the quality of modern reviews have been proposed, including publishing an instructional guide, taking alternative approaches (PeerJ, etc.), or skipping peer review altogether (arXiv). Still, in the absence of a systematic overhaul of the peer review system, an opinion piece in The Scientist by Matthew A. Mulvey and Dean Tantin provides a rather useful guide for new reviewers and a useful reminder for experienced reviewers. If you are going to do a review (and you should, if you are publishing papers), you should do it well.
From "An Ecclesiastical Approach to Peer Review" 
"The Golden Rule
Be civil and polite in all your dealings with authors, other reviewers, editors, and so on, even if it is never reciprocated.
As a publishing scientist, you will note that most reviewers break at least a few of the rules that follow. Sometimes that is OK—as reviewers often fail to note, there is more than one way to skin a cat. As an author you will at times feel frustrated by reviews that come across as unnecessarily harsh, nitpicky, or flat-out wrong. Despite the temptation, as a reviewer, never take your frustrations out on others. We call it the “scientific community” for a reason. There is always a chance that you will be rewarded in the long run. 
The Cardinal Rule
If you had to publish your review, would you be comfortable doing so? What if you had to sign it? If the answer to either question is no, start over. (That said, do not make editorial decisions in the written comments to the authors. The decision on suitability is the editors’, not yours. Your task is to provide a balanced assessment of the work in question.) 
The Seven Deadly Sins of sub-par reviews
  1. Laundry lists of things the reviewer would have liked to see, but have little bearing on the conclusions.
  2. Itemizations of styles or approaches the reviewer would have used if they were the author.
  3. Direct statements of suitability for publication in Journal X (leave that to the editor).
  4. Vague criticism without specifics as to what, exactly, is being recommended. Specific points are important—especially if the manuscript is rejected.
  5. Unclear recommendations, with little sense of priority (what must be done, what would be nice to have but is not required, and what is just a matter of curiosity).
  6. Haphazard, grammatically poor writing. This suggests that the reviewer hasn’t bothered to put in much effort.
  7. Belligerent or dismissive language. This suggests a hidden agenda. (Back to The Golden Rule: do not abuse the single-blind peer review system in order to exact revenge or waylay a competitor.) 
Vow silence
The information you read is confidential. Don’t mention it in public forums. The consequences to the authors are dire if someone you inform uses the information to gain a competitive advantage in their research. Obviously, don’t use the findings to further your own work (once published, however, they are fair game). Never contact the authors directly.
Be timely
Unless otherwise stated, provide a review within three weeks of receiving a manuscript. This old standard has been eroded in recent years, but nevertheless you should try to stick to this deadline if possible. 
Be thorough
Read the manuscript thoroughly. Conduct any necessary background research. Remember that you have someone’s fate in your hands, so it is not OK to skip over something without attempting to understand it completely. Even if the paper is terrible and in your view has no hope of acceptance, it is your professional duty to develop a complete and constructive review.
Be honest
If there is a technique employed that is beyond your area of expertise, do the best you can, and state to the editor (or in some cases, in your review) that although outside your area, the data look convincing (or if not, explain why). The editor will know to rely more on the other reviewers for this specific item. If the editor has done his or her job correctly, at least one of the other reviewers will have the needed expertise.
Most manuscript reviews cover about a page or two. Begin writing by briefly summarizing the state of the field and the intended contribution of the study. Outline any major deficits, but refrain from indicating if you think they preclude publication. Keep in mind that most journals employ copy editors, so unless the language completely obstructs understanding, don’t bother criticizing the English. Go on to itemize any additional defects in the manuscript. Don’t just criticize: saying that X is a weakness is not the same as saying the authors should address weakness X by providing additional supporting data. Be clear and provide no loopholes. Keep in mind that you are not an author. No one should care how you would have done things differently in a perfect world. If you think it helpful, provide additional suggestions as minor comments—the editor will understand that the authors are not bound to them.
Judgment Day
Make a decision as to the suitability of the manuscript for the specific journal in question, keeping in mind their expectations. Is it acceptable in its current state? Would a reasonable number of experiments performed in a reasonable amount of time make it so, or not? Answering these questions will allow you to recommend acceptance, rejection, or major/minor revision. 
If the journal allows separate comments to the editor, here is the place to state that in your opinion they should accept and publish the paper as quickly as possible, or that the manuscript falls far below what would be expected for Journal X, or that Y must absolutely be completed to make the manuscript publishable, or that if Z is done you are willing to have it accepted without seeing it again. Good comments here can make the editor’s job easier. The availability of separate comments to the editor does not mean that you should provide only positive comments in the written review and reserve the negative ones for the editor. This approach can result in a rejected manuscript being returned to the authors with glowing reviewer comments. 
A second review is not the same as an initial review. There is rarely any good reason why you should not be able to turn it around in a few days—you are already familiar with the manuscript. Add no new issues—doing so would be the equivalent of tripping someone in a race during the home stretch. Determine whether the authors have adequately addressed your criticisms (and those of the other reviewers, if there was something you missed in the initial review that you think is vital). In some cases, data added to a revised manuscript may raise new questions or concerns, but ask yourself if they really matter before bringing them up in your review. Be willing to give a little if the authors have made reasonable accommodation. Make a decision: up or down. Relay it to the editor. 
Congratulations. You’ve now been baptized, confirmed, and anointed a professional manuscript reviewer."

Monday, August 25, 2014

Researching ecological research

Benjamin Haller. 2014. "Theoretical and Empirical Perspectives in Ecology and Evolution: A Survey". BioScience; doi:10.1093/biosci/biu131.

Etienne Low-Décarie, Corey Chivers, and Monica Granados. 2014. "Rising complexity and falling explanatory power in ecology". Front Ecol Environ 2014; doi:10.1890/130230.

A little navel-gazing is good for ecology. Although it may sometimes seem otherwise, ecology spends far less time evaluating its approach than simply doing research. Obviously we can't spend all of our time navel-gazing, but the field as a whole would benefit greatly from ongoing conversations about its strengths and weaknesses.

For example, consider the issue of theory vs. empirical research. Although this issue has received attention and arguments ad nauseum over the years (including here, 1, 2, 3), it never completely goes away. And even though there are arguments that it's not an issue anymore, that everyone recognizes the need for both, if you look closely the tension continues to exist in subtle ways. If you have participated in a mixed reading group, did the common complaint “do we have to read so many math-y papers?” ever arise; or equally, “do we have to read so many system-specific papers and just critique the methods?” Theory and empirical research don't see eye to eye as closely as we might want to believe.

The good news? Now there is some data. Ben Haller did a survey on this topic that just came out in BioScience. This paper does the necessary task of getting some real data on the theory/data debate, beyond the philosophical arguments. Firstly, he defines empirical research as involving the gathering and analysis of real-world data, while theoretical research does not gather or analyze real-world data but instead involves mathematical models, numerical simulations, and other such work. The survey included 614 scientists from related ecology and evolutionary biology fields, representing a global (rather than just North American) perspective.

The conclusions are short, sweet and pretty interesting: "(1) Substantial mistrust and tension exists between theorists and empiricists, but despite this, (2) there is an almost universal desire among ecologists and evolutionary biologists for closer interactions between theoretical and empirical work; however, (3) institutions such as journals, funding agencies, and universities often hinder such increased interactions, which points to a need for institutional reforms."
For interpreting the plots – the empirical group represents respondents whose research is completely or primarily empirical; the theoretical group's research is mostly or completely related to theory, while the middle group does work that falls equally into both types. Maybe the results don't surprise anyone – scientists still read papers, collaborate, and coauthor papers mostly with others of the same group. What is surprising is that this trend is particularly strong for the empirical group. For example, nearly 80% of theorists have coauthored a paper with someone in the empirical group while only 42% of empiricists have coauthored at least one paper with a theorist. Before we start throwing things at empiricists, it should be noted that this could relate to a relative scarcity of theoretical ecologists, rather than insularity on the part of the empiricists. However, it is interesting that while the responses to the question “how should theory and empiricism coexist together?” across all groups agreed that “theoretical work and empirical work would coexist tightly, driving each other in a continuing feedback loop”, empirical scientists were significantly more likely to say “work would primarily be data-driven; theory would be developed in response to questions raised by empiri­cal findings.”

Most important, and maybe concerning, is that the survey found no real effect of age, stage or gender – i.e. existing attitudes are deeply ingrained and show no sign of changing.

Why is it so important that we reconcile the theoretical/empirical issue? The paper “Rising complexity and falling explanatory power in ecology” offers a pretty compelling reason in its title. Ecological research is getting harder, and we need to marshal all the resources available to us to continue to progress.

The paper suggests that ecological research is experiencing falling mean R2 values: values in published papers have fallen from above 0.75 prior to 1950 to below 0.5 in today's papers.
The worrying thing is that as a discipline progresses and improves, you might predict an improving ability to explain ecological phenomena. For comparison, criminology was found to show no decline in R2 values as that field matured through time. Why don’t we have that?

During the same period, however, it is notable that the average complexity of ecological studies also increased – the number of reported p-values is 10x larger on average today compared to the early years (where usually only a single p-value relating to a single question was reported). 

The fall in R2 values and the rise in reported p-values could mean a number of things, some worse for ecology than others. The authors suggest that R2 values may be declining as a result of exhaustion of “easy” questions (“low hanging fruit”), increased effort in experiments, or a change in publication bias, for example. The low hanging fruit hypothesis may have some merit – after all, studies from before the 1950s were mostly population biology with a focus on a single species in a single place over a single time period. Questions have grown increasingly more complex, involving assemblages of species over a greater range of spatial and temporal scales. For complex sciences, this fits a common pattern of diminishing returns: “For example, large planets, large mammals, and more stable elements were discovered first”.

In some ways, ecologists lack a clear definition of success. No one would argue that ecology is less effective now than it was in the 1920s, for example, and yet a simplistic measure of success (R2) might suggest that ecology is in decline. Any bias between theorists and empiricists is obviously misplaced, in that any definition of success for ecology will require both.

Thursday, August 14, 2014

#ESA2014 Day 4: Battle Empiricism vs Theory

You are our only hope!(?)
First off, the Theory vs. Empiricism Ignite session was a goldmine for quotes:

“In God we trust, all others bring data” (H. Edwards Deming)
“Models are our only hope” (Greg Dwyer)
"Nature represents a special part of parameter space" (Jay Stachowicz)

The Theory vs. Empiricism Ignite session was designed in response to an impromptu survey at ESA last year that found that 2/3 of an audience did not believe there are general laws in ecology. Speakers were asked to choose whether an empirical paper or a theoretical paper would be most important for ecology, and to defend their choice, perhaps creating some entertaining antagonism along the way.

There wasn't actually much antagonism to be had: participants were mostly conciliatory and hardly controversial. Despite this, the session was entertaining and also insightful, but perhaps not in the way I expected. First though, I should say that I think the conversation could have used some definitions of the terms (“theory”, “empiricism”). We throw these terms around a lot, but they mean different things to different people. What counts as theory to a field-based scientist may be considered no more than a rule of thumb or statistical model by a pure theoretician. Data from a microcosm might not count as experimental evidence to a fieldwork-oriented ecologist.

The short talks included examples and arguments as to how theoretical or empirical science is a necessary and valuable contributor to ecological discoveries. That was fine, but the subtext from a number of talks turned out to be more interesting. The tension, it seemed, was not about whether theory is useful or empiricism is valuable, but about which one is more important. Should theory or empiricism be the driver of ecological research? (Kudos to Fred Adler for the joke that theory wants to be a demanding queen ant with empiricists as the brainless order-following workers!) And, implicitly, funding should follow whichever work is deemed most worthy. Thus empiricists bemoan the lack of funding for natural history, while theoreticians argue that pure theory is even harder to get grants for. The question of which one should lead research was sadly left mostly unanswered (and 5 minutes per person didn't offer much space for a deeper discussion). 

Of course there was the inevitable call for reconciliation of the two areas, for some way to bridge the arrogance and ignorance (to paraphrase Brad Cardinale) holding them apart. Or, perhaps all ecologists should be renaissance scientists who have mastered theory and empiricism equally. Hard to say. For me, it seems wise to consider the example of ecological subfields that have found a balance and feedback between theory and data. Areas such as disease ecology and population biology incorporate models and experiments successfully, for example. Why do other fields like community ecology or conservation biology struggle so much more?

#ESA2014 - Day 3 bringing together theory and empiricism

I was tied up in a session all afternoon, so most of the interesting comments below are from Topher Weiss-Lehman, who caught what sounds like a pretty thought-provoking session on theory and conservation biology, with talks from Hugh Possingham and David Ackerly. This theme of bringing theory and empiricism together permeated a number of talks, including the session I moderated on using microbes in theoretical ecology and applying theory to microbial ecology (although at the moment, the distance between those things still feels large).

The most thought-provoking talk I saw was Peter Chesson's, on "Diversity maintenance: new concepts and theory for communities as multiple-scale entities". Chesson discussed his discomfort with how his coexistence theory is sometimes applied (I suppose that is one definition of success, that you see your ideas misused). His concerns align with those of many ecologists on the question of how to define and research an ecological community. Is the obsession with looking at 'local' communities limiting and misguided, particularly when paired with the unrealistic assumption that such communities are closed systems? Much like Ricklefs's well-known paper on defining a 'regional community', Chesson suggests we move to a multi-scale emphasis for community ecology.

Rather than calculating coexistence in a local community, Chesson argued that ecologists should begin to think about how coexistence mechanisms vary in strength across multiple spatial scales. For example, is frequency dependence more important at smaller or larger scales? He used a concept similar to Ricklefs's regional community, in which a larger extent encompasses a number of increasingly smaller-scale communities. The regional community likely includes environmental gradients, and species distributions that vary across them. Chesson presented some simulations based on a multi-scale model of species interactions to illustrate the potential of his multi-scale coexistence theory framework. The model appears to bring together Chesson's work on coexistence mechanisms--including the importance of fitness differences (here with fitness calculated at each scale as the change in density over a time step) and stabilizing forces, and the invasion criterion (where coexistence is signalled by a positive growth rate from low density)--and his scale-transition theory work. This is a natural advance, and a sensible way of recognizing the scale-dependent nature of coexistence mechanisms. The approach allows ecologists to drop their obsession with defining some spatial area as "the community", and a regional community reduces the importance of the closed-system assumption. My one wish is that there be some discussion of how this concept fits with existing ideas about scale and communities in ecology. For example, how compatible are existing larger-scale approaches like macroecology/biogeography, and other theoretical paradigms like metacommunity theory, with this?  
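For readers unfamiliar with the invasion criterion mentioned above, here is a minimal numerical sketch. It uses a hypothetical two-species Lotka-Volterra competition model with invented parameters as a stand-in (this is not Chesson's multi-scale model): each species coexists if its per-capita growth rate is positive when it invades, at low density, a community where the other species sits at its resident equilibrium.

```python
import numpy as np

# Hypothetical two-species Lotka-Volterra competition model -- an
# illustrative stand-in, not Chesson's actual multi-scale framework.
r = np.array([1.0, 1.0])          # intrinsic growth rates
alpha = np.array([[1.0, 0.6],     # competition coefficients: alpha[i, j]
                  [0.7, 1.0]])    # is the effect of species j on species i
K = np.array([100.0, 100.0])      # carrying capacities

def invasion_growth_rate(invader):
    """Per-capita growth rate of `invader` at low density, with the other
    species (the resident) at its single-species equilibrium K."""
    resident = 1 - invader
    n = np.zeros(2)
    n[resident] = K[resident]     # resident sits at carrying capacity
    # dN_i/dt / N_i = r_i * (1 - sum_j alpha_ij * N_j / K_i)
    return r[invader] * (1 - alpha[invader] @ n / K[invader])

igr = [invasion_growth_rate(i) for i in (0, 1)]
coexist = all(g > 0 for g in igr)   # mutual invasibility -> coexistence
print(igr, coexist)
```

With interspecific competition (0.6, 0.7) weaker than intraspecific competition (1.0), both invasion growth rates come out positive, so the criterion diagnoses coexistence.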

#Notes from Topher Weiss-Lehman

Applied Theory

I spent the morning of my third day at ESA in a symposium on Advancing Ecological Theory for Conservation Biology. Hugh Possingham started out with a call for more grand theories in a talk titled "Theory for conservation decisions: the death of bravery." Possingham argued for the development of theory tailored to the needs of conservation managers, identifying the SLOSS debate as an example of the scientific community agonizing over the answer to a question no managers were asking. He described the type of theory he meant as simple and easily applicable, rather than relying on intensive computer simulations that managers are unlikely to be able to use for their own systems. Possingham is right that conservation managers need theory to help guide decisions over where and what species to protect; however, I can't help but think about the scientific advances that arose specifically as a result of the SLOSS debate and computational models. The talk left me wondering if theoretical ecology, like other scientific fields, could be split into basic and applied theory.

The other talks in the session approached the topic of theory for conservation from a number of perspectives. Justin Kitzes discussed the ways in which macroecology can inform conservation concerns, and Annette Ostling explored how niche and neutral community dynamics affect extinction debts. H. Reşit Akçakaya provided a wonderful example of the utility of computer simulations for conservation issues, presenting results predicting the extinction risk of species under climate change via simulations based on niche modeling coupled with metapopulation dynamics. Jennifer Dunne then explored how the network structure of food webs changed as a result of human arrival and hunting in several systems. The session ended with a presentation by David Ackerly calling for a focus on disequilibrium dynamics in ecology. Ackerly made a compelling case for the importance of considering disequilibrium dynamics, particularly when making predictions of species' reactions to climate change or habitat alteration. However, the most memorable part of his talk for me was the last 5 minutes or so, when he suggested that we reconsider what conservation success should mean. Since systems are changing and will continue to change, Ackerly argued that setting conservation goals based on keeping systems the way they are is setting ourselves up for failure. Instead, we need to understand that systems are transitioning, and that while we have a crucial role in deciding what they might transition into, we can't and shouldn't try to stop them from changing.

The talks today gave me lots of ideas and new papers to read, but they also left me pondering more questions on the philosophy of science (what we do, why we do it, and what our goals should be) than I expected.

Tuesday, August 12, 2014

#ESA2014: Day two, what are we measuring and how?

It's probably in part because I attended sessions along similar lines today, but I noticed a common theme played across a number of talks. Ecological data are in some ways becoming very complex - a single analysis may include traits, phylogenetic distances, taxonomic information, and climate and soil variables, possibly at multiple spatial scales. How to combine disparate data appropriately, and how to determine the comparable "scales" across which to measure each variable, is more important than ever. But it is still difficult to determine what an appropriate comparison actually is.

Studies of intraspecific variation frequently have to determine how to measure and compare variables (i.e., do you measure intraspecific trait variation at the genotype level, the individual level, etc.?). For example, a nice talk by Jessica Abbott on the effects of intraspecific variation in genetic relatedness and trait similarity on competition among eelgrass hit upon exactly this point. Trait similarity among genotypes was unrelated to their degree of genetic relatedness, and traits, not relatedness, were the clearest predictor of competitive success. A number of the talks I saw today incorporated intraspecific variation, including a couple of excellent talks on Daphnia by Sarah Duple and Chris Holmes. Both Daphnia talks found considerable intraspecific trait variation but weak relationships between that variation and competitive interactions or diversity. These talks were all nice examples of how empirical work can relate to larger ecological theory, and they found fairly mixed evidence for the importance of intraspecific variation. There are many reasons why intraspecific variation is not always strongly tied to ecological processes - it may simply have low explanatory power, for example. But it is also interesting to consider the issues that arise as we ask questions at ever smaller and more precise scales. How do we distinguish a low importance of intraspecific variation, or trait variation, or phylogenetic variation, from an incorrect scale of measurement? Asking questions with multiple measures opens up new and important issues - how should we measure genetic relatedness to be truly comparable to trait variation at intraspecific or interspecific scales? How does combining mismatched variables (intraspecific trait values with interpolated large-scale environmental values, for example) affect the explanatory power of those variables? 
Given the increasingly multi-faceted nature of ecological analyses, it seems important that we consider these questions.
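To make the scale-of-measurement question concrete, one simple starting point is to partition trait variance into among-species and within-species (intraspecific) components. The sketch below uses invented trait values and a basic two-level decomposition; it is not the analysis from any of the talks above.

```python
import statistics

# Toy illustration of one measurement-scale question: how much trait
# variance sits within vs. among species. Trait values are invented.
traits = {
    "sp_A": [3.1, 3.4, 2.9, 3.2],
    "sp_B": [5.0, 5.3, 4.8, 5.1],
    "sp_C": [3.9, 4.2, 4.0, 3.8],
}

# Among-species component: variance of the species means.
species_means = [statistics.mean(v) for v in traits.values()]
among = statistics.pvariance(species_means)

# Within-species (intraspecific) component: mean variance of individuals
# around their own species mean.
within = statistics.mean(statistics.pvariance(v) for v in traits.values())

# Fraction of total variance that is intraspecific; whether this fraction
# is "large" depends on the scale at which the question is posed.
frac_intraspecific = within / (within + among)
print(round(frac_intraspecific, 3))
```

Here most variance is among species, so intraspecific variation would look unimportant at this scale; sampled at a finer scale (e.g., within one species), the same data would tell a different story.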

#Lauren Shoemaker
I started Day 2 of ESA attending talks focusing on quantifying coexistence mechanisms and the role of intraspecific competition in coexistence. Yue Li and Peter Chesson started the day presenting work quantifying the storage effect in three desert winter annuals in Arizona’s Goldwater Range. This work highlighted the methodology for quantifying the storage effect in empirical systems—which was refreshing for me since I spend so much time thinking about spatial storage mechanisms in simplified, theoretical systems.

In the same session, Peter Adler presented his work with Chengjin Chu examining the strength of stabilizing niche differences and fitness differences. When stabilizing niche differences are too low relative to fitness differences, competitive exclusion occurs, while high stabilizing niche differences allow coexistence. Using long-term demographic data on perennial grasses from five communities, they found that all species exhibited high niche differences and low fitness differences, implying strongly stabilized coexistence. In all communities, stabilizing niche differences likely resulted from recruitment. The high niche differentiation highlights the need for a stronger focus on intraspecific density dependence, and for more models of coexistence with explicit intraspecific competition.

In the afternoon, Louie Yang argued that ecologists as a whole need to more explicitly consider changes in species interactions through time, especially with the increasing effects of climate change. Using the example of 17-year cicada cycles, he showed that questions of "bottom-up or top-down" are often really bottom-up and then top-down when viewed in a temporally explicit framework. He even ended his talk with an excellent analogy comparing historic artwork and ecology - a hard analogy to pull off!

As an added bonus, I finished the day with a long list of paper citations to look up and read after the conference.