Monday, October 6, 2014

What is ecology’s billion dollar brain?

(*The topic of the billion dollar proposal came up with Florian Hartig (@florianhartig), with whom I had an interesting conversation on the idea*)

Last year, the European Commission awarded 1 billion dollars to a hugely ambitious project to recreate the human brain using supercomputers. If successful, the Human Brain Project would revolutionize neuroscience. (Although skepticism remains as to whether this project is more of a pipe dream than a reasonable goal.) For ecology and evolution, where infrastructure costs are relatively low (compared to, say, a Large Hadron Collider), 1 billion dollars means that there is essentially no financial limitation on your proposal, so nearly any project, experiment, analysis, dataset, or workforce is within the realm of possibility. The European Commission call was for research to occur over 10 years, meaning that the usual constraints on project length (driven by grant terms and graduate student theses) are also low. So if you could write a proposal with essentially no constraints at all, what would it be for? (*If you think that 10 years is too limiting for a proper long-term study, feel free to assume you can set up the infrastructure in 10 years and run it for as long as you want.*)

The first thing I recognized was that in proposing the 'ultimate' ecological project, you're implicitly stating how you think ecology should be done. For example, you could focus on the most general questions and start from the bottom. If this is the case, it might be most effective to ask a single fundamental question. It would not be unreasonable to propose to measure metabolic rates under standardized conditions for every extant species, and to develop a database of parameter values for them. This would be the most complete ecological database ever assembled, and that certainly seems like an achievement. 

But perhaps you choose something that is still of general importance but less simplistic, and run a standardized experiment in multiple systems. This has been effective for the NutNet project. Propose to run replicate experiments with top-of-the-line warming arrays on plant communities in every major ecosystem. Done for 10 years, over a reasonably large scale, with data recorded on physiology and important life history events, this might provide some ability to predict how warming temperatures are affecting ecosystems. 

The alternative is to embrace ecological complexity (and the ability to deal with complexity that 1 billion dollars offers). Given the analytic power, equipment, and man hours that 1 billion dollars can buy, you could record every single variable--biotic, abiotic, weather--in a particular system (say, a wetland) for every second of every day. If you don’t simply drown in the data you’ve gathered, maybe you can reconstruct that wetland and predict every property from the details. While that may seem a bit extreme, if you are a complexity-fatalist you start to recognize that even the general experiments are quickly muddied by complexity. Even that simple, general list of species' metabolic parameters quickly spirals into complexity. Does it make sense to use only one set of standardized conditions? After all, conditions that are reasonable for a rainforest tree are meaningless for an ocean shark or a tundra shrub. Do you use the mean condition for each ecosystem as the standard, knowing that species may only interact with the variance or extremes in those conditions (such as desert annuals that bloom after rains, or bacteria that use cyst stages to avoid harsh environments)? What about ontogenetic or plastic differences? Intraspecific differences?

It's probably best then to realize that there is no perfect ecological experiment. The interesting thing about the Human Brain Project is that neuroscience is more like ecology than many scientific fields - it deals with complex organic systems with emergent properties and great variability. What ecology needs, ever so simplistically, is more data and better models. Maybe, like neuroscience, we should request a supercomputer that could locate and incorporate all ecological data ever collected, across fields (natural history, forestry, agronomy, etc.), and recognize the connections between those data, based on geography, species, or scale. This could give us the most sophisticated possible data map, showing both where data gaps exist and where areas are data-rich and ready for model development. Further, it could (like the Human Brain Project) begin to develop models for the interconnections between data. 

Without too many billion dollar calls going on, this is only a thought experiment, but I have yet to find someone who has an easy answer for what they would propose to do (ecologically) with 1 billion dollars. Why is it so difficult?

Monday, September 15, 2014

Links: Reanalyzing R-squares, NSF pre-proposals, and the difficulties of academia for parents

First, Will Pearse has done a great job of looking at the data behind the recent paper on declining R2 values and rising numbers of p-values in ecology, and his reanalysis suggests that there is a much weaker relationship between R2 values and time (only 4% rather than 62% as reported). Because the variance is both very large within years and unequal through time, a linear model may not be ideal for capturing this relationship.
Thanks @prairiestopatchreefs for linking this.
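
To make that statistical point concrete, here is a minimal sketch with made-up numbers (not Pearse's code or the actual dataset): an ordinary least-squares trend treats every yearly mean R2 equally, while a weighted fit down-weights the years where the yearly summary is based on few, highly variable papers.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical yearly summaries: mean reported R2 per year, with more papers
# (and therefore less spread) in recent years. Purely illustrative numbers.
rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
n_papers = np.linspace(5, 100, years.size).astype(int)      # papers per year
true_trend = 0.6 - 0.001 * (years - 1950)                    # a weak decline
mean_r2 = rng.normal(true_trend, 0.15 / np.sqrt(n_papers))   # unequal variance

X = sm.add_constant(years)
ols = sm.OLS(mean_r2, X).fit()                    # treats every year equally
wls = sm.WLS(mean_r2, X, weights=n_papers).fit()  # down-weights noisy years

print(f"OLS slope: {ols.params[1]:.5f}, trend R2: {ols.rsquared:.2f}")
print(f"WLS slope: {wls.params[1]:.5f}, trend R2: {wls.rsquared:.2f}")
```

With data like these, the unweighted and weighted fits can give noticeably different slopes and apparent explanatory power, which is the heart of the reanalysis argument.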

From the Sociobiology blog, something that most US ecologists would probably agree on: the NSF pre-proposal program has been around long enough (~3 years) to judge on its merits, and it has not been an improvement. In short, the pre-proposal system asks applicants to submit a 5-page proposal, which NSF uses to identify the best ideas and then invite those researchers to submit a full proposal similar to the traditional application. Joan Strassmann argues that not only is this program more work for applicants (you must write two very different proposals in short order if you are lucky enough to advance), it offers very few benefits for them.

The reasons for the gender gap in STEM academic careers get a lot of attention, and rightly so given the continuing underrepresentation of women. The demands of parenthood often receive some of the blame. The Washington Post is reporting on a study that considers parenthood from the perspective of male academics. The study took an interview-based, sociological approach, and found that the "majority of tenured full professors [interviewed] ... have either a full-time spouse at home who handles all caregiving and home duties, or a spouse with a part-time or secondary career who takes primary responsibility for the home." But the majority of these men also said they wanted to be more involved at home. As one author said, “Academic science doesn’t just have a gender problem, but a family problem...men or women, if they want to have families, are likely to face significant challenges.”

On a lighter note, if you've ever joked about PNAS' name, a "satirical journal" has taken that joke and run with it. PNIS (Proceedings of the Natural Institute of Science) looks like the work of bored post-docs, which isn't necessarily a bad thing. The journal has immediately split into two subjournals: PNIS-HARD (Honest and Real Data) and PNIS-SOFD (Satirical or Fake Data), which have rather interesting readership projections:


Friday, September 12, 2014

Do green roofs enhance urban conservation?

Green roofs are now commonly included in the design of new public and private infrastructure, bolstered by energy savings, environmental recognition and certification, bylaw compliance, and in some cases tax or other direct monetary incentives (e.g., here). While green roofs clearly provide local environmental benefits, such as reduced albedo (sunlight reflectance), storm water retention, CO2 sequestration, etc., green roof proponents also frequently cite biodiversity and conservation enhancement as a benefit. This last claim has not been broadly tested, but existing data was assessed by Nicholas Williams and colleagues in a recent article published in the Journal of Applied Ecology.

Williams and colleagues compiled all available literature on biodiversity and conservation value of green roofs and they explicitly tested six hypotheses: 1) Green roofs support higher diversity and abundance compared to traditional roofs; 2) Green roofs support comparable diversity and composition to ground habitat; 3) Green roofs using native species support greater diversity than traditional green roofs; 4) Green roofs aid in rare species conservation; 5) Green roofs replicate natural communities; and 6) Green roofs facilitate organism movement through urban areas.

Photo by: Marc Cadotte


What is surprising is that, given the abundance of papers on green roofs in ecology and environmental journals, very few have quantitatively assessed any of these hypotheses. What is clear is that green roofs support greater diversity and abundance compared to non-green roofs, but we know very little about how green roofs compare to other remnant urban habitats in terms of species diversity, ecological processes, or rare species. Further, while some regions are starting to require that green roofs try to maximize native biodiversity, there are relatively few such comparisons, and those that exist reveal substantial benefits for biodiverse green roofs.

How well green roofs replicate ground or natural communities is an important question, with insufficient evidence. It is important because, according to the authors, there is some movement to use green roofs to offset lost habitat elsewhere. This could represent an important policy shift, and one that may ultimately lead to lost habitats being replaced with lower quality ones. This is a policy direction that simply requires more science.

There is some evidence that green roofs, if designed correctly, could aid in rare species conservation. However, green roofs, which by definition are small patches in an inhospitable environment, may assist rare species management in only a few cases. The authors caution that enthusiasm for using green roofs to assist with rare species management needs to be tempered by designs that are biologically and ecologically meaningful to target species. They cite an example where green roofs in San Francisco were designed with a plant that is an important food source for an endangered butterfly, the Bay Checkerspot, which currently persists in a few fragmented populations. The problem is that the maximum dispersal distance of the butterfly is about 5 km, and there are no populations within 15 km of the city. These green roofs have the potential to aid in rare species conservation, but this needs to be coupled with additional management activities, such as physically introducing the butterfly to the green roofs.

Overall, green roofs do provide important environmental and ecological benefits in urban settings. Currently, very few studies document the ways in which green roofs provide ecological processes and services, enhance biodiversity, replicate other ground-level habitats, or aid in biodiversity conservation. As the prevalence of green roofs increases, we will need scientifically valid ecological understanding of green roof benefits to better engage with municipal managers and affect policy.

Williams, N., Lundholm, J., & MacIvor, J. (2014). Do green roofs help urban biodiversity conservation? Journal of Applied Ecology DOI: 10.1111/1365-2664.12333

Monday, September 8, 2014

Edicts for peer reviewing

Reviewing is a rite of passage for many academics. But for most graduate students or postdocs, it is also a bit of a trial by fire, since reviewing skills are usually assumed to be gained osmotically, rather than through any specific training. Unfortunately, the reviewing system seems ever more complicated for reviewers and authors alike (slow, poor quality, unpredictable). Concerns about modern reviewing pop up every few months, and different solutions to the difficulties of finding qualified reviewers and maintaining the quality of modern reviews have been suggested (including publishing an instructional guide, taking alternative approaches (PeerJ, etc.), or skipping peer review altogether (arXiv)). Still, in the absence of a systematic overhaul of the peer review system, an opinion piece in The Scientist by Matthew A. Mulvey and Dean Tantin provides a rather useful guide for new reviewers and a useful reminder for experienced reviewers. If you are going to do a review (and you should, if you are publishing papers), you should do it well. 
From "An Ecclesiastical Approach to Peer Review" 
"The Golden Rule
Be civil and polite in all your dealings with authors, other reviewers, editors, and so on, even if it is never reciprocated.
As a publishing scientist, you will note that most reviewers break at least a few of the rules that follow. Sometimes that is OK—as reviewers often fail to note, there is more than one way to skin a cat. As an author you will at times feel frustrated by reviews that come across as unnecessarily harsh, nitpicky, or flat-out wrong. Despite the temptation, as a reviewer, never take your frustrations out on others. We call it the “scientific community” for a reason. There is always a chance that you will be rewarded in the long run. 
The Cardinal Rule
If you had to publish your review, would you be comfortable doing so? What if you had to sign it? If the answer to either question is no, start over. (That said, do not make editorial decisions in the written comments to the authors. The decision on suitability is the editors’, not yours. Your task is to provide a balanced assessment of the work in question.) 
The Seven Deadly Sins of sub-par reviews
  1. Laundry lists of things the reviewer would have liked to see, but have little bearing on the conclusions.
  2. Itemizations of styles or approaches the reviewer would have used if they were the author.
  3. Direct statements of suitability for publication in Journal X (leave that to the editor).
  4. Vague criticism without specifics as to what, exactly, is being recommended. Specific points are important—especially if the manuscript is rejected.
  5. Unclear recommendations, with little sense of priority (what must be done, what would be nice to have but is not required, and what is just a matter of curiosity).
  6. Haphazard, grammatically poor writing. This suggests that the reviewer hasn’t bothered to put in much effort.
  7. Belligerent or dismissive language. This suggests a hidden agenda. (Back to The Golden Rule: do not abuse the single-blind peer review system in order to exact revenge or waylay a competitor.) 
Vow silence
The information you read is confidential. Don’t mention it in public forums. The consequences to the authors are dire if someone you inform uses the information to gain a competitive advantage in their research. Obviously, don’t use the findings to further your own work (once published, however, they are fair game). Never contact the authors directly.
Be timely
Unless otherwise stated, provide a review within three weeks of receiving a manuscript. This old standard has been eroded in recent years, but nevertheless you should try to stick to this deadline if possible. 
Be thorough
Read the manuscript thoroughly. Conduct any necessary background research. Remember that you have someone’s fate in your hands, so it is not OK to skip over something without attempting to understand it completely. Even if the paper is terrible and in your view has no hope of acceptance, it is your professional duty to develop a complete and constructive review.
Be honest
If there is a technique employed that is beyond your area of expertise, do the best you can, and state to the editor (or in some cases, in your review) that although outside your area, the data look convincing (or if not, explain why). The editor will know to rely more on the other reviewers for this specific item. If the editor has done his or her job correctly, at least one of the other reviewers will have the needed expertise.
Testify
Most manuscript reviews cover about a page or two. Begin writing by briefly summarizing the state of the field and the intended contribution of the study. Outline any major deficits, but refrain from indicating if you think they preclude publication. Keep in mind that most journals employ copy editors, so unless the language completely obstructs understanding, don’t bother criticizing the English. Go on to itemize any additional defects in the manuscript. Don’t just criticize: saying that X is a weakness is not the same as saying the authors should address weakness X by providing additional supporting data. Be clear and provide no loopholes. Keep in mind that you are not an author. No one should care how you would have done things differently in a perfect world. If you think it helpful, provide additional suggestions as minor comments—the editor will understand that the authors are not bound to them.
Judgment Day
Make a decision as to the suitability of the manuscript for the specific journal in question, keeping in mind their expectations. Is it acceptable in its current state? Would a reasonable number of experiments performed in a reasonable amount of time make it so, or not? Answering these questions will allow you to recommend acceptance, rejection, or major/minor revision. 
If the journal allows separate comments to the editor, here is the place to state that in your opinion they should accept and publish the paper as quickly as possible, or that the manuscript falls far below what would be expected for Journal X, or that Y must absolutely be completed to make the manuscript publishable, or that if Z is done you are willing to have it accepted without seeing it again. Good comments here can make the editor’s job easier. The availability of separate comments to the editor does not mean that you should provide only positive comments in the written review and reserve the negative ones for the editor. This approach can result in a rejected manuscript being returned to the authors with glowing reviewer comments. 
Resurrection
A second review is not the same as an initial review. There is rarely any good reason why you should not be able to turn it around in a few days—you are already familiar with the manuscript. Add no new issues—doing so would be the equivalent of tripping someone in a race during the home stretch. Determine whether the authors have adequately addressed your criticisms (and those of the other reviewers, if there was something you missed in the initial review that you think is vital). In some cases, data added to a revised manuscript may raise new questions or concerns, but ask yourself if they really matter before bringing them up in your review. Be willing to give a little if the authors have made reasonable accommodation. Make a decision: up or down. Relay it to the editor. 
Congratulations. You’ve now been baptized, confirmed, and anointed a professional manuscript reviewer."

Monday, August 25, 2014

Researching ecological research

Benjamin Haller. 2014. "Theoretical and Empirical Perspectives in Ecology and Evolution: A Survey". BioScience; doi:10.1093/biosci/biu131.

Etienne Low-Décarie, Corey Chivers, and Monica Granados. 2014. "Rising complexity and falling explanatory power in ecology". Front Ecol Environ; doi:10.1890/130230.

A little navel gazing is good for ecology. Although it may seem otherwise, ecology spends far less time evaluating its approach than simply doing research. Obviously we can't spend all of our time navel-gazing, but the field as a whole would benefit greatly from ongoing conversations about its strengths and weaknesses. 

For example, consider the issue of theory vs. empirical research. Although this issue has received attention and arguments ad nauseam over the years (including here, 1, 2, 3), it never completely goes away. And even though there are arguments that it's not an issue anymore, that everyone recognizes the need for both, if you look closely the tension continues to exist in subtle ways. If you have participated in a mixed reading group, did the common complaint “do we have to read so many math-y papers?” ever arise; or equally, “do we have to read so many system-specific papers and just critique the methods?” Theory and empirical research don't see eye to eye as closely as we might want to believe.

The good news? Now there is some data. Ben Haller did a survey on this topic that just came out in BioScience. This paper does the necessary task of getting some real data on the theory/data debate, moving beyond the philosophical and the argumentative. Firstly, he defines empirical research as being involved in the gathering and analysis of real-world data, while theoretical research does not gather or analyze real-world data, but instead involves mathematical models, numerical simulations, and other such work. The survey included 614 scientists from related ecology and evolutionary biology fields, representing a global (rather than North American) perspective.

The conclusions are short, sweet and pretty interesting: "(1) Substantial mistrust and tension exists between theorists and empiricists, but despite this, (2) there is an almost universal desire among ecologists and evolutionary biologists for closer interactions between theoretical and empirical work; however, (3) institutions such as journals, funding agencies, and universities often hinder such increased interactions, which points to a need for institutional reforms."
 
For interpreting the plots – the empirical group represents respondents whose research is completely or primarily empirical; the theoretical group's research is mostly or completely theoretical, while the middle group does work that falls equally into both types. Maybe the results don't surprise anyone – scientists still read papers, collaborate, and coauthor papers mostly with others of the same group. What is surprising is that this trend is particularly strong for the empirical group. For example, nearly 80% of theorists have coauthored a paper with someone in the empirical group while only 42% of empiricists have coauthored at least one paper with a theorist. Before we start throwing things at empiricists, it should be noted that this could relate to a relative scarcity of theoretical ecologists, rather than insularity on the part of the empiricists. However, it is interesting that while respondents across all groups, when asked “how should theory and empiricism coexist together?”, agreed that “theoretical work and empirical work would coexist tightly, driving each other in a continuing feedback loop”, empirical scientists were significantly more likely to say “work would primarily be data-driven; theory would be developed in response to questions raised by empirical findings.”

Most important, and maybe concerning, is that the survey found no real effect of age, stage or gender – i.e. existing attitudes are deeply ingrained and show no sign of changing.

Why is it so important that we reconcile the theoretical/empirical issue? The paper “Rising complexity and falling explanatory power in ecology” offers a pretty compelling reason in its title. Ecological research is getting harder, and we need to marshal all the resources available to us to continue to progress. 

The paper suggests that ecological research is experiencing falling mean R2 values. Values in published papers have fallen from above 0.75 prior to 1950 to below 0.5 in today's papers.
The worrying thing is that as a discipline progresses and improves, you might predict that the result would be an improving ability to explain ecological phenomena. For comparison, criminology was found to show no decline in R2 values as that field matured through time. Why don’t we have that? 

During the same period, however, it is notable that the average complexity of ecological studies also increased – the number of reported p-values is 10x larger on average today compared to the early years (where usually only a single p-value relating to a single question was reported). 

The fall in R2 values and the rise in reported p-values could mean a number of things, some worse for ecology than others. The authors suggest that R2 values may be declining as a result of exhaustion of “easy” questions (“low hanging fruit”), increased effort in experiments, or a change in publication bias, for example. The low hanging fruit hypothesis may have some merit – after all, studies from before the 1950s were mostly population biology with a focus on a single species in a single place over a single time period. Questions have grown increasingly more complex, involving assemblages of species over a greater range of spatial and temporal scales. For complex sciences, this fits a common pattern of diminishing returns: “For example, large planets, large mammals, and more stable elements were discovered first”.
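
One way to see how the "low hanging fruit" explanation could produce this pattern is with a toy simulation (my own illustration, not an analysis from the paper): when a response is dominated by a single strong driver, a one-predictor model explains a lot of variance, but when the same amount of signal is spread across many weak drivers, any single predictor explains very little, even though nothing about the underlying science has gotten worse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # observations per hypothetical study

def single_predictor_r2(effects):
    """R2 obtained by regressing the response on the first predictor only."""
    X = rng.normal(size=(n, len(effects)))   # predictors
    y = X @ effects + rng.normal(size=n)     # response plus noise
    slope, intercept = np.polyfit(X[:, 0], y, 1)
    residuals = y - (slope * X[:, 0] + intercept)
    return 1 - residuals.var() / y.var()

# An "easy" question: one dominant driver (e.g., a single-species response)
print("one strong driver:", round(single_predictor_r2(np.array([3.0])), 2))

# A "complex" question: ten weak drivers of roughly equal importance
print("ten weak drivers :", round(single_predictor_r2(np.full(10, 1.0)), 2))
```

Under these assumptions the first case yields an R2 near 0.9 and the second near 0.1, which is consistent with the idea that more complex, multi-driver questions naturally come with lower per-test explanatory power.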

In some ways, ecologists lack a clear definition of success. No one would argue that ecology is less effective now than it was in the 1920s, for example, and yet a simplistic measure of success (R2) might suggest that ecology is in decline. Any bias between theorists and empiricists is obviously misplaced, in that any definition of success for ecology will require both.  

Friday, August 15, 2014

#ESA2014 Day 4: Battle Empiricism vs Theory

You are our only hope!(?)
First off, the Theory vs. Empiricism Ignite session was a goldmine for quotes:

“In God we trust, all others bring data” (H. Edwards Deming)
“Models are our only hope” (Greg Dwyer)
"Nature represents a special part of parameter space" (Jay Stachowicz)

The Theory vs. Empiricism Ignite session was designed in response to an impromptu survey at ESA last year that found that two-thirds of an audience did not believe that there are general laws in ecology. Speakers were asked to choose whether an empirical paper or a theoretical paper would be most important for ecology, and to defend their choice, perhaps creating some entertaining antagonism along the way. 

There wasn't actually much antagonism to be had: participants were mostly conciliatory and hardly controversial. Despite this, the session was entertaining and also insightful, but perhaps not in the way I expected. First though, I should say that I think the conversation could have used some definitions of the terms (“theory”, “empiricism”). We throw these terms around a lot but they mean different things to different people. What counts as theory to a field-based scientist may be considered no more than a rule of thumb or a statistical model by a pure theoretician. Data from a microcosm might not count as experimental evidence to a fieldwork-oriented ecologist.

The short talks included examples and arguments as to how theoretical or empirical science is a necessary and valuable contributor to ecological discoveries. That was fine, but the subtext from a number of talks turned out to be more interesting. The tension, it seemed, was not about whether theory is useful or empiricism is valuable, but about which one is more important. Should theory or empiricism be the driver of ecological research? (Kudos to Fred Adler for the joke that theory wants to be a demanding queen ant with empiricists as the brainless order-following workers!) And funding should follow the most worthy work. Thus empiricists bemoan the lack of funding for natural history, while theoreticians argue that pure theory is even harder to get grants for. The question of which one should lead research was sadly mostly unanswered (and 5 minutes per person didn't offer much space for a deeper discussion). 

Of course there was the inevitable call for reconciliation of the two areas, for some way to breach the arrogance and ignorance (to paraphrase Brad Cardinale) holding them apart. Or, perhaps all ecologists should be renaissance scientists, who have mastered theory and empiricism equally. Hard to say. For me, the wisest path is to consider the ecological subfields that have found a balance and feedback between theory and data. Areas such as disease ecology or population biology incorporate models and experiments successfully, for example. Why do some other fields, like community ecology or conservation biology, struggle so much more?

Thursday, August 14, 2014

#ESA2014 - Day 3 bringing together theory and empiricism

I was tied up in a session all afternoon, so most of the interesting comments below are from Topher Weiss-Lehman, who caught what sounds like a pretty thought-provoking session about theory and conservation biology, with talks from Hugh Possingham and David Ackerly. This concept of bringing theory and empiricism together permeated a number of talks, including the session I moderated on using microbes in theoretical ecology and applying theory to microbial ecology (although at the moment, the distance between those things still feels large).

The most thought-provoking talk I saw was Peter Chesson's, on "Diversity maintenance: new concepts and theory for communities as multiple-scale entities". Chesson discussed his discomfort with how his coexistence theory is sometimes applied (I suppose that is one definition of success, that you see your ideas misused). His concerns align with those of many ecologists on the question of how to define and research an ecological community. Is the obsession with looking at 'local' communities limiting and misguided, particularly when paired with the ridiculous assumption that such communities are closed systems? Much like Ricklefs' well-known paper on defining a 'regional community', Chesson suggests we move to a multi-scale emphasis for community ecology.

Rather than calculating coexistence in a local community, Chesson argued that ecologists should begin to think about how coexistence mechanisms vary in strength across multiple spatial scales. For example, is frequency dependence more important at smaller or larger scales? He used a concept similar to the idea of Ricklefs' regional community, in which a larger extent encompasses a number of increasingly smaller-scale communities. The regional community likely includes environmental gradients, and species distributions that vary across them. Chesson presented some simulations based on a multi-scale model of species interactions to illustrate the potential of his multi-scale coexistence theory framework. The model appears to bring together Chesson's work on coexistence mechanisms--including the importance of fitness differences (here with fitness calculated at each scale as the change in density over a time step) and stabilizing forces, and the invasion criterion (where coexistence has a signal of a positive growth rate from low density)--and his scale-transition theory work. This is a clear advance, and a sensible way of recognizing the scale-dependent nature of coexistence mechanisms. His approach allows ecologists to drop their obsession with defining some spatial area as "the community", and a regional community decreases the importance of the closed-system assumption. My one wish is that there were some discussion of how this concept fits with existing ideas about scale and communities in ecology. For example, how compatible are existing larger-scale approaches like macroecology/biogeography and other theoretical paradigms like metacommunity theory with this?  
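
For readers less familiar with the invasion criterion mentioned above, here is a minimal sketch of its standard textbook form (my notation, not taken from Chesson's talk): fitness over a time step is the proportional change in density, and coexistence by mutual invasibility requires each species to show a positive long-term growth rate when rare, with its competitors at their resident densities:

\[
\lambda_i(t) = \frac{N_i(t+1)}{N_i(t)}, \qquad
\bar{r}_i = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \ln \lambda_i(t), \qquad
\text{coexistence requires } \bar{r}_i > 0 \text{ for every invader } i.
\]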

#Notes from Topher Weiss-Lehman

Applied Theory
I spent the morning of my third day at ESA in a symposium on Advancing Ecological Theory for Conservation Biology. Hugh Possingham started out with a call for more grand theories in a talk titled “Theory for conservation decisions: the death of bravery.” Possingham argued for the development of theory tailored to the needs of conservation managers, identifying the SLOSS debate as an example of the scientific community agonizing over the answer to a question no managers were asking. He described the type of theory he meant as simple and easily applicable, rather than relying on intensive computer simulations that managers are unlikely to be able to use for their own systems. Possingham is right that conservation managers need theory to help guide them in decisions over where and what species to protect; however, I can’t help but think about the scientific advances that arose specifically as a result of the SLOSS debate and computational models. The talk left me wondering if theoretical ecology, like other scientific fields, could be split into basic and applied theory.

The other talks in the session approached the topic of theory for conservation from a number of perspectives. Justin Kitzes discussed the ways in which macroecology can inform conservation concerns and Annette Ostling explored how niche and neutral community dynamics affect extinction debts. H. Resit Akcakaya provided a wonderful example of the utility of computer simulations for conservation issues. He presented results predicting the extinction risk of species due to climate change via simulations based on niche modeling coupled with metapopulation dynamics. Jennifer Dunne then explored how the network structure of food webs changed as a result of human arrival and hunting in several systems. The session ended with a presentation by David Ackerly calling for a focus on disequilibrium dynamics in ecology. Ackerly made a compelling case for the importance of considering disequilibrium dynamics, particularly when making predictions of species reactions to climate change or habitat alteration. However, the most memorable part of his talk for me was the last 5 minutes or so. He suggested that we reconsider what conservation success should mean. Since systems are changing and will continue to change, Ackerly argued that setting conservation goals based on keeping systems the way they are is setting ourselves up for failure. Instead, we need to understand that systems are transitioning and that while we have a crucial role in deciding what they might transition into, we can’t and shouldn’t try to stop them from changing.

The talks today gave me lots of ideas and new papers to read, but they also left me pondering more questions on the philosophy of science (what we do, why we do it, and what our goals should be) than I expected.


Wednesday, August 13, 2014

#ESA2014: Day two, what are we measuring and how?

It's probably in part because I attended sessions along similar lines today, but I noticed a common theme playing across a number of talks. Ecological data is in some ways becoming very complex - a single analysis may include traits, phylogenetic distances, taxonomic information, and climate and soil variables, possibly at multiple spatial scales. How to combine disparate data appropriately, and how to determine the comparable "scales" across which to measure each variable, is more important than ever. But it is still difficult to determine what an appropriate comparison actually is.

Studies of intraspecific variation frequently have to determine how to measure and compare variables. (i.e. Do you measure intraspecific trait variation at the genotype level, the individual level, etc?) For example, a nice talk by Jessica Abbott on the effects of intraspecific variation in genetic relatedness and trait similarity on intraspecific competition in eelgrass hit upon exactly this point. There was no relationship between trait similarity between genotypes and their degree of genetic relatedness. Traits, not relatedness, were the clearest predictor of competitive success. A number of the talks I saw today incorporated intraspecific variation, including a couple of excellent talks on Daphnia by Sarah Duple and Chris Holmes. Both of the Daphnia talks found evidence of substantial intraspecific trait variation in Daphnia but weak relationships between that variation and competitive interactions or diversity. These talks were all nice examples of how empirical work can relate to larger ecological theory, and found fairly mixed evidence for the importance of intraspecific variation. There are many reasons why intraspecific variation is not always strongly tied to ecological processes - intraspecific variation may simply have low explanatory power, for example. But it is also interesting to consider the issues that arise as we ask questions at ever smaller and more precise scales. How do we distinguish a low importance of intraspecific variation, or trait variation, or phylogenetic variation from an incorrect scale of measurement? Asking questions with multiple measures opens up new and important issues - how should we measure genetic relatedness to be truly comparable to trait variation at intraspecific or interspecific scales? How does combining mismatched variables (intraspecific trait values with interpolated large scale environmental values, for example) affect the explanatory power of those variables? Given the increasingly multi-faceted nature of ecological analyses it seems important that we consider these questions.


#Lauren Shoemaker
I started Day 2 of ESA attending talks focusing on quantifying coexistence mechanisms and the role of intraspecific competition in coexistence. Yue Li and Peter Chesson started the day presenting work quantifying the storage effect in three desert winter annuals in Arizona’s Goldwater Range. This work highlighted the methodology for quantifying the storage effect in empirical systems—which was refreshing for me since I spend so much time thinking about spatial storage mechanisms in simplified, theoretical systems.

In the same session, Peter Adler presented his work with Chengjin Chu examining the strength of stabilizing niche differences and fitness differences. When stabilizing niche differences are too low relative to fitness differences, competitive exclusion occurs, while strong stabilizing niche differences allow coexistence. Using long-term demographic data on perennial grasses from five communities, they found that all species exhibited high niche differences and low fitness differences, implying strongly stabilized coexistence. For all communities, stabilizing niche differences likely resulted from recruitment. The high niche differentiation highlights the need for a stronger focus on intraspecific density dependence and for more models of coexistence with explicit intraspecific competition.
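
As a rough reminder of the framework behind results like these (the general condition from modern coexistence theory, not the specific model used in the talk): writing niche overlap as \(\rho\), so that stabilizing niche differences are \(1-\rho\), and the fitness ratio of the two competitors as \(\kappa_1/\kappa_2\), both species persist only when the fitness advantage of the superior competitor is too small to overwhelm the stabilization:

\[
\rho < \frac{\kappa_1}{\kappa_2} < \frac{1}{\rho}.
\]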

In the afternoon, Louie Yang argued that ecologists as a whole need to more explicitly consider changes in species interactions through time, especially with the increasing effects of climate change. Using the example of 17-year cicada cycles, he showed that questions of “bottom-up or top-down” are often really bottom-up and then top-down when viewed in a temporally explicit framework. He even ended his talk with an excellent analogy comparing historic artwork and ecology—a hard analogy to pull off!

As an added bonus, I finished the day with a long list of paper citations to look up and read after the conference.

Tuesday, August 12, 2014

#ESA2014: Day 1, just getting started

First off, apparently I wrote that I would be 'live blogging ESA'. Actually, all that means is that I'm alive, I'm blogging, and I'm at ESA. :-)

Secondly, several other people will be giving snippets from their days this week, including Lauren Shoemaker, and Geoff Legault (below).

The first day is always more about the experience than the content: you are often lost, have no firm idea of where you need to be, and are constantly running into friends and acquaintances. It's great, but not conducive to settling into talks.

For that reason, I'll just mention the experiences that I found most exciting today. First, I saw a number of Ignite talks. These are a recent addition to ESA and are basically 5-minute talks using slides that advance every 15 seconds. This requires a certain ability on the part of the speaker to be brief and yet informative, minimalist but not inaccurate, practiced but not robotic. I thought that many of the speakers in the Ecosystems in the Third National Climate Assessment session achieved this. One speaker, Linda Joyce, said: "if you want to feel like a graduate student again, sign up for an Ignite talk." Presumably because it makes you feel nerves like you haven't felt in years!

Joyce gave a great talk, as did others. Some of the conversation around the ecosystem assessment fell into the discourse that ecosystems provide services, and services imply people. Are ecosystem assessments only about people? Obviously this is too challenging a topic for a 5-minute talk, but it certainly sparked further discussion, as it was meant to.

The second session of interest to me was an organized symposium in which early career scientists gave talks about their work. The central thread was simply that all of the speakers were pre-tenure academics. This really worked as a theme to tie the session together. At the end, the speakers answered questions briefly about their careers, advice, and research. Their best advice was really very good, if in line with what you hear about attempting a job in academia. Find mentors. Set boundaries between your professional and personal life. Say no sometimes, if it means maintaining some sort of sanity (e.g., travel less, have more time with your family). A point that came up multiple times was simply that you have to have passion for science, have to love talking about your work. Having something you're passionate about is better than having ten things you are lukewarm on. And always find people to collaborate with, to talk with, to support.

Finally, there are many paths to success. And failure is universal, but not final.

(My favourite quote - someone who mentioned measuring effort in 'undergraduate work hours')

#Lauren Shoemaker

ESA had some excellent talks to start the 99th conference in Sacramento, California. I stayed in Community Assembly and Neutral Theory for several talks before running back and forth between the Hyatt, Sheraton, and conference center (missing the first few minutes of several talks).

In Community Assembly, Maria Stockenreiter gave a fantastic talk on community assembly in phytoplankton communities, building on the theory of Miller et al. (2009) examining the role of unsuccessful invaders in shaping communities. Even unsuccessful invaders within a community can alter environmental conditions or species distributions such that an unsuccessful invasion can exclude a current or future potentially successful invader. Maria tested this theory using two phytoplankton communities—a lab strain with no shared ecological history and a Gull Lake community with shared history. While all invaders were unsuccessful in the experiments, they had large effects on community diversity. Unsuccessful invasion decreased diversity in the lab strains but increased diversity in the Gull Lake community, showing both the “ghost effect” of competition and the role of shared ecological histories.

In Paleoecology, Matthew Knope examined the functional diversity-taxonomic diversity relationship for marine animals over the past 500 million years. It was fun to think of a relationship I usually consider only in the present day over such a long timescale. Matthew categorized marine animals according to their location in a discrete 3-dimensional niche space (tiering on the sea floor, feeding mode, and motility). The data show that the amount of functional diversity was far lower than expected based on taxonomic diversity until only recently. Additionally, I was amazed to see a consistent trend (from 3 different mass extinctions in the dataset) that mass extinctions promote functional diversity 10-20 million years post-extinction, leading to even higher functional diversity than pre-extinction.

Back at the convention center in the Biodiversity I session, Pascal Niklaus examined whether interspecific vertical canopy space partitioning promoted productivity in subtropical forests. While light is a directional resource, creating a large advantage to being tall, Pascal found that vertical niche partitioning still occurred when comparing monocultures to multi-species assemblages. Species in higher-diversity communities also had narrower niches, and similar species shifted their vertical leaf biomass niche, but only in shaded treatments. Vertical niche partitioning did, indeed, promote higher ecosystem function.

#Geoff Legault
I arrived in Sacramento this afternoon so I did not get a chance to see many talks (though I did enjoy Meghan Duffy’s talk about possible hydra effects in Daphnia). I did, however, see a number of excellent posters, particularly one by Nick Rasmussen on the interactive effects of density and phenology on the recruitment of toads. I was impressed by his use of mesocosms to directly manipulate these factors, and he made a compelling case for the idea that the degree of synchrony in hatching can determine which form of intraspecific competition dominates recruitment.


Monday, August 4, 2014

#ESA2014 : Getting ready for (and surviving) ESA

There is less than one week until ecology's largest meeting. ESA's annual meeting starts August 10th in Sacramento, California, and it can be both exciting and overwhelming in its size and scope. Here are a few suggestions for making it a success.

Getting ready for ESA.
Sure, things start in a week and you're scheduled for a talk/poster/meeting with a famous prof, but you haven't started preparing yet.

First off, no point beating yourself up for procrastinating: if you've been thinking about your presentation but doing other projects, you might be in the company of other successful people.

If you're giving a talk, and have given one before or are an old hand at this sort of thing, go ahead and put it together the night before your talk. One benefit for the truly experienced or gifted speaker is that this talk will never sound over-rehearsed.

Regardless, all speakers should try for a talk that is focused, with a clear narrative and argument, and within the allotted time. (Nothing is more awkward for everyone involved than watching the moderator have to interrupt a speaker.) The good news is that ESA audiences will probably be a) educated to at least a basic level on your topic, and b) generous with their attention and polite with their questions. This blog has some really practical advice on putting together an academic talk.
If at all possible, practice in front of a friendly audience ahead of time.

The questions after your talk will vary, and if you're lucky they will relate to future directions, experimental design, quantitative double-checks, and truly insightful thoughts. However, there are other common questions that you should recognize: the courtesy question (good moderators have a few in hand), the "tell-me-how-it-relates-to-my-work" question, and the wandering unquestion.

Giving a poster is much different than giving a talk, and it has pros and cons. First, you have to have it finished in time to have it printed, so procrastination is less possible. Posters are great if you want one-on-one interactions with a wide range of people. You have to make your poster attractive and interesting: this always means don't put too much text on your poster. The start of this pdf gives some nice advice on getting the most out of your poster presentation.

For both posters and presentations, graphics and visual appeal make a big difference. Check out the blog, DeScience, which has some great suggestions for science communication.

Academic meetings. These run the gamut from collaborators that you're just catching up with, to strangers that you have contacted to discuss common scientific interests. If scientists whose research activities and interests overlap with your own are attending ESA, it never hurts to try to meet with them. Many academics are generous with their time, especially for young researchers. If they say yes, come prepared for the conversation. If necessary, review their work that relates to your own. Come prepared to describe your interests and the project/question/experiment you are looking for advice on. It can be very helpful to have some specific questions in mind, in order to facilitate the conversation.

What to wear. Impossible to say. Depending on who you are and where you work normally, you can wear anything from torn field gear and binos to a nice dress or suit (although not too many people will be in suits).

Surviving ESA.
ESA can be very large and fairly exhausting. The key is to pace yourself and take breaks: you don't need to see talks all day long to get your money's worth from ESA. Prioritize the talks that you want to see based on things like speaker or topic. Sitting in on topics totally different from those you study can be quite energizing as well. In this age of smartphones, the e-program is invaluable.

Social media can help you find popular or interesting sounding talks, or fill you in on highlights you missed. This year the official hashtag on twitter is #ESA2014.

One of the most important things you can do is be open to meeting new people, whether through dinner and lunch invites, mixers, or other organized activities. Introverts might cringe a little, but the longest lasting outcome from big conferences is the connections you make there.

Eat and try to get some sleep.

**The EEB & Flow will be live-blogging during ESA 2014 in Sacramento, as we have for the last few years. See everyone in Sacramento!**