Thursday, April 24, 2014

Data merging: are we moving forward or dealing with Frankenstein's monster?


I’m sitting in the Sydney airport waiting for my delayed flight – which gives me some time to ruminate on the mini-conference I am leaving. The conference, hosted by the Centre for Biodiversity Analysis (CBA) and CSIRO in Australia on "Understanding biodiversity dynamics using diverse data sources", brought together several fascinating thinkers working in disparate areas including ecology, macroecology, evolution, genomics, and computer science. The goal of the conference was to see if merging different forms of data could lead to greater insights into biodiversity patterns and processes.

Happy integration

On the surface, it seems uncontroversial to say that bringing together different forms of data promotes new insights into nature. However, this only works if the data we combine meaningfully complement one another. When researchers bring together data, there are under-appreciated risks, and the effort can end in trying to combine data that make weird bedfellows.
Weird bedfellows

The risks include data that mismatch in the scale of observation, so that meaningful variation is missed. Data are often generated according to certain models with specific assumptions, and these data-generation steps can be misunderstood by end-users, resulting in inappropriate uses of the data. Further, different data types may be combined in standard statistical models, but the linkages between them are often much more subtle and nuanced, requiring alternative models.

These issues arise because researchers now have unprecedented access to numerous large data sets. Whether these are large trait data sets, spatial locations, spatial environmental data, genomes, or historical data, they are all built with specific underlying uses, limitations and assumptions.

Despite these concerns, the opportunity and power to address new questions is greatly enhanced by multiple types of data. One thing I gained from this meeting is that there is a new world of biodiversity analysis and understanding emerging, driven by smart people doing smart things with multiple data types. We will soon live in a world where the data and analytical tools allow researchers to truly combine multiple processes to predict species' distributions, or to move from evolutionary events in deep history to modern day ecological patterns.


Wednesday, April 23, 2014

Guest Post: You teach science, but is your teaching scientific? (Part I)

The first in a series of guest posts about using scientific teaching, active learning, and flipping the classroom by Sarah Seiter, a teaching fellow at the University of Colorado, Boulder. 

For a faculty member, teaching can sometimes seem like a chore – your lectures compete with smartphones and laptops. Some students see themselves as education “consumers” and haggle over grades. STEM (science, technology, engineering, and math) faculty have a particularly tough gig – students need substantial background to succeed in these courses, and often arrive in the classroom unprepared. Yet, the current classroom climate doesn’t seem to be working for students either. About half of STEM college majors ultimately switch to a non-scientific field. It would be easy to frame the problem as one of culture – and we do live in a society that doesn’t always value science or education. However, reforming STEM education might not require social change; it could instead be accomplished using our own scientific training. In the past few years a movement called “scientific teaching” has emerged, which uses quantitative research skills to make the classroom experience better for instructors as well as students.

So how can you use your research skills to boost your teaching? First, you can use teaching techniques that have been empirically tested and rigorously studied, especially a set of techniques called “active learning”. Second, you can collect data on yourself and your students to gauge your progress and adjust your teaching as needed, a process called “formative assessment”. While this can seem daunting, it helps to remember that as a researcher you’re uniquely equipped to overhaul your teaching, using the skills you already rely on in the lab and the field. Like a lot of paradigm shifts in science, using data to guide your teaching seems pretty obvious after the fact, but it can be revolutionary for you and your students.

What is Active Learning:

There are a lot of definitions of active learning floating around, but in short, active learning techniques force students to engage with the material while it is being taught. More importantly, students practice the material and make mistakes while they are surrounded by a community of peers and instructors who can help. There are a lot of ways to bring active learning strategies to your classroom, such as clicker response systems (handheld devices that let students take short quizzes throughout the lecture). Case studies are another tool: students read about scientific problems and then apply the information to real world problems (medical and law schools have been using them for years). I’ll get into some more examples of these techniques in post II; there are lots of free and awesome resources that will allow you to try active learning techniques in your class with minimal investment.

Formative Assessment:

The other way data can help you overhaul your class is through formative assessment: a series of small, frequent, low-stakes assessments of student learning. A lot of college courses use what’s called summative assessment – one or two major exams that test a semester’s worth of material, with a few labs or a term paper for balance. If your goal is to see whether your students learned anything over a semester, this is probably sufficient. This is also fine if you’re trying to weed out underperforming students from your major (but seriously, don’t do that). But if you’re interested in coaching students towards mastery of the subject matter, it probably isn’t enough to just tell them how much they learned after half the class is over. If you think about learning goals the way we think of fitness goals, this is like asking students to qualify for the Boston marathon without giving them any times for their training runs.

Formative assessment can be done in many ways: weekly quizzes or taking data with classroom clicker systems. While a lot of formative assessment research focuses on measuring student progress, instructors have lots to gain by measuring their own pedagogical skills. There are a lot of tools out there to measure improvement in teaching skills (K-12 teachers have been getting formatively assessed for years), but even setting simple goals for yourself (“make at least 5 minutes for student questions”) and monitoring your progress can be really helpful. Post III will talk about how to do (relatively) painless formative assessment in your class.

How does this work and who does it work for:

Scientific teaching is revolutionary because it works for everyone, faculty and students alike. However, it has particularly useful benefits for some types of instructors and students.

New Faculty: inexperienced faculty can achieve results as good as or better than experienced faculty by using evidence-based teaching techniques. In a study at the University of Colorado, physics students taught by a graduate TA using scientific teaching outperformed those taught by an experienced (and well-loved) professor using a standard lecture style (you can read the study here). Faculty who are not native English speakers, or who are simply shy, can get a lot of leverage out of scientific teaching techniques, because doing in-class activities relieves the pressure to deliver perfect lectures.
Test scores between a lecture-taught physics section and a section taught using active learning techniques.

Seasoned Faculty: For faculty who already have their teaching style established, scientific teaching can spice up lectures that have become rote, or help you address concepts that you see students struggle with year after year. Even if you feel like you have your lectures completely dialed in, consider whether you’re using the most cutting-edge techniques in your lab, and whether your classroom deserves the same treatment.

Students also stand to gain from scientific teaching, and some groups of students are particularly poised to benefit from it:
Students who don’t plan to go into science: Even in majors classes, most of the students we teach won’t go on to become scientists. But skills like analyzing data and writing convincing evidence-based arguments are useful in almost any field. Active learning trains students to be smart consumers of information, and formative assessment teaches students to monitor their own learning – two skills we could stand to see more of in any career.

Students Who Love Science: Active learning can give star students a leg up on the skills they’ll need to succeed as academics, for all the reasons listed above. Occasionally really bright students will balk at active learning, because having to wrestle with complicated data makes them feel stupid. While it can feel awful to watch your smartest students struggle, it is important to remember that real scientists have to confront confusing data every day. For students who want research careers, learning to persevere through messy and inconclusive results is critical.

Students who struggle with science: Active learning can be a great leveler for students who come from disadvantaged backgrounds. A University of Washington study showed that active learning and student peer tutoring could eliminate achievement gaps for minority students. If part of the reason you got into academia was to make a difference in educating young people, here is one empirically supported way to do that.

Are there downsides?

Like anything, active learning involves tradeoffs. While the overwhelming evidence suggests that active learning is the best way to train new scientists (the White House even published a report calling for more of it!), there are sometimes roadblocks to scientific teaching.

Content Isn’t King Anymore: Working with data or applying scientific research to policy problems takes more class time, so instructors can cover fewer examples. In active learning, students are developing scientific skills like experimental design or technical writing, but after spending an hour hammering out an experiment to test the evolution of virulence, they often feel like they’ve only learned about “one stupid disease”. However, there is lots of evidence that covering topics in depth is more beneficial than surveying many topics. For example, high schoolers who studied a single subject in depth for more than a month were more likely to declare a science major in college than students who covered more topics.

Demands on Instructor Time: I actually haven’t found that active learning takes more time to prepare – case studies and clickers take up a decent amount of class time, so I spend less time prepping and rehearsing lectures. However, if you already have a slide deck you’ve been using for years, developing clicker questions and class exercises requires an upfront investment of time. Formative assessment can also take more time, although online quiz tools and peer grading can help take some of the pressure off instructors.

If you want to learn more about the theory behind scientific teaching there are a lot of great resources on the subject:

These podcasts are a great place to start:
http://americanradioworks.publicradio.org/features/tomorrows-college/lectures/

http://www.slate.com/articles/podcasts/education/2013/12/schooled_podcast_the_flipped_classroom.html

This book is a classic in the field:
http://www.amazon.com/Scientific-Teaching-Jo-Handelsman/dp/1429201886

Monday, April 21, 2014

Null models matter, but what should they look like?

Neutral Biogeography and the Evolution of Climatic Niches. Florian C. Boucher, Wilfried Thuiller, T. Jonathan Davies, and Sébastien Lavergne. The American Naturalist, Vol. 183, No. 5 (May 2014), pp. 573-584

Null models have become a fundamental part of community ecology. For the most part, this is an improvement over our null-model-free days: patterns are now interpreted with reference to patterns that might arise through chance and in the absence of the ecological processes of interest. Null models today are ubiquitous in tests of phylogenetic signal, in analyses of species co-occurrence, and in models of species distribution–climate relationships. But even though null models are a success in that they are widespread and commonly used, there are problems. In particular, there is a disconnect between how null models are chosen and interpreted and what information they actually provide. Unfortunately, simple and easily applied null models tend to be favoured, but they are often interpreted as though they are complicated, mechanism-explicit models.

The new paper “Neutral Biogeography and the Evolution of Climatic Niches” from Boucher et al. provides a good example of this problem. The premise of the paper is straightforward: studies of phylogenetic niche conservatism tend to rely on simple null models, and as a result may misinterpret what their data show, because of the type of null model used. The study of phylogenetic niche conservatism and niche evolution is becoming increasingly popular, particularly studies on how species' climatic niches evolve and how climate niches relate to patterns of diversity. In a time of changing climates, there are also important applications looking at how species respond to climatic shifts. Studies of changes in climate niches through evolutionary time usually rely on a definition of the climate niche based on empirical data – more specifically, the mean position of a given species along a continuous abiotic gradient. Because this is not directly tied to physiological measurements, climate niche data may also capture the effect of dispersal limitation or biotic interactions. Hence the need for null models; however, the null models used in these studies primarily flag changes in climate niche that result from random drift or selection in a varying environment. These null models use Brownian motion (a "random walk") to answer questions about whether niches are more or less similar than expected due to chance, or whether a particular model of niche evolution is a better fit to the data than a model of Brownian motion.
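To make that null concrete, here is a minimal sketch of a Brownian motion null in Python (all parameter values, and the "observed" divergence, are invented for illustration): the niche position takes many small, independent steps, and replicate walks give a null distribution for how much divergence drift alone produces.

```python
import numpy as np

rng = np.random.default_rng(42)

def brownian_niche(n_steps=1000, sigma=0.1, start=20.0):
    """Simulate a climate niche position (e.g. mean temperature occupied)
    evolving as a Brownian random walk: each step adds independent
    Gaussian noise, so variance accumulates linearly through time."""
    steps = rng.normal(loc=0.0, scale=sigma, size=n_steps)
    return start + np.cumsum(steps)

# Null distribution: how much niche divergence does drift alone
# produce after n_steps? Replicate the walk many times.
finals = np.array([brownian_niche()[-1] for _ in range(999)])
divergence_null = np.abs(finals - 20.0)

# Compare a hypothetical observed divergence against the null.
observed = 5.0  # assumed value, for illustration only
p = (np.sum(divergence_null >= observed) + 1) / (len(divergence_null) + 1)
print(f"P(divergence >= {observed} under Brownian motion) = {p:.3f}")
```

Anything judged "more divergent than this null" is then, in practice, read as evidence of niche evolution – which is exactly the interpretive leap the paper questions.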

The authors suggest that the reliance on Brownian motion is problematic, since these simple null models cannot distinguish between patterns of climate niches that arise simply through speciation and migration, with no selection on climate niches, and those that are the result of true niche evolution. If this is true, conclusions about niche evolution may be suspect, since they depend on the null model used. The authors used a neutral, spatially explicit model (an "alternative neutral biogeographic model") that simulates dynamics driven only by speciation and migration, with species being neutral in their dynamics. This provides an alternative model of the patterns that may arise in climate niches among species despite the absence of direct selection on the trait. The paper then asks whether climatic niches exhibit phylogenetic signal when they arise via neutral spatial dynamics; whether gradualism is a reasonable neutral expectation for the evolution of climatic niches on geological timescales; and whether constraints on climatic niche diversification can arise simply through bounded geographic space. Simulations of the neutral biogeographic model used a gridded “continent” with variable climate conditions: each cell has a carrying capacity, and species move via migration and split into two species either by point mutation, or by vicariance (a geographic barrier appears, leading to divergence of two populations). Not surprisingly, their results show that even in the absence of any selection on species’ climate niches, patterns can result that differ greatly from a simple Brownian motion-based null model. That is, the simple null model (Brownian motion) often concluded that results from the more complex neutral model were different from the random/null expectation. This isn't a problem per se. The problem is that, at present, anything different from the Brownian motion null tends to be interpreted as a signal of niche evolution (or conservatism). Obviously that is not correct.
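For intuition about the neutral model itself, here is a drastically simplified, one-dimensional toy version (my own sketch, not the authors' code: their model uses a two-dimensional continent, variable carrying capacities, and vicariant as well as point-mutation speciation). Even with purely neutral replacement, migration, and speciation, species end up occupying contiguous stretches of the climate gradient, so their "climate niches" look structured:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# A toy neutral community on a 1-D "continent": each cell holds one
# individual (carrying capacity of 1, for simplicity) and has a fixed
# climate value along a smooth gradient.
n_cells = 200
climate = np.linspace(0.0, 30.0, n_cells)
species = np.zeros(n_cells, dtype=int)   # all cells start as species 0
next_id = 1

m = 0.5      # chance a death is filled by a migrant from a neighbour
nu = 0.001   # point-mutation speciation probability

for step in range(200_000):
    i = rng.integers(n_cells)            # a random individual dies
    if rng.random() < nu:
        species[i] = next_id             # point-mutation speciation
        next_id += 1
    elif rng.random() < m:
        j = min(max(i + rng.choice([-1, 1]), 0), n_cells - 1)
        species[i] = species[j]          # colonised from a neighbour
    # otherwise: replaced locally by the same species (no change)

# "Climate niche" of each surviving species: mean climate of its cells.
niches = defaultdict(list)
for cell, sp in enumerate(species):
    niches[sp].append(climate[cell])
for sp, vals in sorted(niches.items()):
    if len(vals) >= 5:                   # skip very rare species, for brevity
        print(f"species {sp}: niche mean = {np.mean(vals):.1f}, n = {len(vals)}")
```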

This paper is focused on the issue of choosing null models for studies of climate niche evolution, but it fits into a current of thought about the problems with how ecologists are using null models. It is one thing to know that you need and want to use a null model, but it is much more difficult to construct an appropriate null model and to interpret its output correctly. Null models (such as the Brownian motion null model) are often so simplistic that they are straw man arguments – if ecology isn't the result of only randomness, your null model is pretty likely to be a poor fit to the data. On the other hand, the more specific and complex the null model is, the easier it is to throw the baby out with the bathwater. Given how much data is interpreted in the light of null models, choosing and interpreting null models needs to be more of a priority.

Thursday, April 3, 2014

Has science lost touch with natural history, and other links

A few interesting links, especially about the dangers of when one aspect of science, data analysis, or knowledge receives inordinate focus.

A new article in Bioscience repeats the fear that natural history is losing its place in science, and that natural history's contributions to science have been devalued. "Natural history's place in science and society" makes some good points about the many contributions that natural history has made to science, and it is fairly clear that natural history is given less and less value within academia. As always though, the issue is finding ways to value useful natural history contributions (museum and herbarium collections, Genbank contributions, expeditions, citizen science) in a time of limited funds and (over)emphasis on the publication treadmill. Nature offers its take here, as well.

An interesting opinion piece on how the obsession with quantification and statistics can go too far, particularly in the absence of careful interpretation. "Can empiricism go too far?"

And similarly, does Big Data have big problems? Though focused on applications for the social sciences, there are some interesting points about the space between "social scientists who aren’t computationally talented and computer scientists who aren’t social-scientifically talented", and again, the need for careful interpretation. "Big data, big problems?"

Finally, a fascinating suggestion about how communication styles vary globally. Given the global academic society we exist in, it seems like this could come in handy. The Canadian one seems pretty accurate, anyways. "These Diagrams Reveal How To Negotiate With People Around The World." 

Thursday, March 27, 2014

Are we winning the science communication war?

Since the time that I was a young graduate student, there have been constant calls for ecologists to communicate more with the public and policy makers (Norton 1998, Ludwig et al. 2001). The impetus for these calls is easy to understand – we are facing serious threats to the maintenance of biodiversity and ecosystem health, and ecologists have the knowledge and facts that are needed to shape public policy. To some, it is unconscionable that ecologists have not done more advocacy, while others see a need to better educate ecologists in communication strategies. While the reluctance of some ecologists to engage in public communication could be due to a lack of skills that training could overcome, the majority likely have a deeper unease. Like all academics, ecologists have many demands on their time, but are evaluated by research output. Adding another priority to their already long list of priorities can seem overwhelming. More fundamentally, many ecologists are in the business of expanding our understanding of the world. They see themselves as objective scientists adding to global knowledge. To these ‘objectivists’, getting involved in policy debates, or becoming advocates, undermines their objectivity.

Regardless of these concerns, a number of ecologists have decided that public communication is an important part of their responsibilities. Ecologists now routinely sit on the boards of different organizations, give public lectures, write books and articles for the public, work more on applied problems, and testify before governmental committees. Part of this shift comes from organizations, such as the Nature Conservancy, which have become large, sophisticated entities with communication departments. But, the working academic ecologist likely talks with more journalists and public groups than in the past.

The question remains: has this increased emphasis on communication yielded any changes in public perception or policy decisions? As someone who has spent time in elementary school classrooms teaching kids about pollinators and conservation, I have been surprised by the level of environmental awareness in both the educators and the children. More telling are surprising calls for policy shifts from governmental organizations. Here in Canada, morale has been low because of a federal government that has not prioritized science or conservation. However, signals from international bodies and the US seem to be promising for the ability of science to positively influence policy.

Two such policy calls are extremely telling. Firstly, the North American Free Trade Agreement (NAFTA) – an agreement among the governments of Mexico, Canada, and the USA that normally deals with economic initiatives and disagreements – announced the formation of a committee to explore measures to protect monarch butterflies. The committee will consider instituting toxin-free zones, where the spraying of chemicals will be prohibited, as well as the construction of a milkweed corridor from Canada to Mexico. NAFTA made this announcement because of declining monarch numbers and calls from scientists for a coordinated strategy.

The second example is the call from 11 US senators to combat the spread of Asian carp. Asian carp have invaded a number of major rivers in the US, and their spread has been of major concern to scientists. The 11 senators have taken this scientific concern seriously, requesting federal money and asking that the Army Corps of Engineers devise a way to stop the Asian carp spread.


There seems to be promising anecdotal evidence that issues of scientific concern are influencing policy decisions. This signals a potential shift; maybe scientists are winning the public perception and policy war. But the war is by no means over. There are still major issues (e.g., climate change) that require more substantial policy action. Scientists, especially those who are effective and engaged, need to continue to communicate with public and policy audiences. Every scientifically informed policy decision should be seen as a signal that audiences are willing to listen to scientists and that communicating science can work.



References
Ludwig, D., Mangel, M. & Haddad, B. (2001). Ecology, conservation, and public policy. Annual Review of Ecology and Systematics, 32, 481-517.

Norton, B. G. (1998). Improving ecological communication: the role of ecologists in environmental policy formation. Ecological Applications, 8, 350-364.


Monday, March 24, 2014

Debating the p-value in Ecology

It is interesting that p-values still garner so much ink: it says something about how engrained and yet embattled they are. This month’s Ecology issue explores the p-value problem with a forum of 10 new short papers* on the strengths and weaknesses, defenses and critiques, and various alternatives to “the probability (p) of obtaining a statistic at least as extreme as the observed statistic, given that the null hypothesis is true”.

The defense of p-values is led by Paul Murtaugh, who provides the opening and closing arguments. Murtaugh, who has written a number of good papers about ecological data analysis and statistics, takes a pragmatic approach to null hypothesis testing and p-values. He argues that p-values are not flawed so much as they are regularly and egregiously misused and misinterpreted. In particular, he demonstrates mathematically that alternative approaches to the p-value – notably confidence intervals and information-theoretic criteria (e.g. AIC) – simply present the same information in slightly different fashions. This is a point that the contribution by Perry de Valpine supports, noting that all of these approaches are simply different ways of displaying likelihood ratios, and that the argument that one is inherently superior ignores their close relationship. In addition, although acknowledging that cutoff values for significant p-values are logically problematic (why is a p-value of 0.049 so much more important than one of 0.051?), Murtaugh notes that cutoffs reflect decisions about acceptable error rates and so are not inherently meaningless. Further, he argues that ΔAIC cutoffs for model selection are no less arbitrary.
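That equivalence is easy to see in practice. Here is a small sketch (simulated data; the effect size and sample size are arbitrary choices of mine) showing all three summaries for one regression slope, using Python's statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)        # modest true effect, assumed

X = sm.OLS(y, sm.add_constant(x)).fit()  # model with the predictor
null = sm.OLS(y, np.ones_like(y)).fit()  # intercept-only model

print(f"slope p-value : {X.pvalues[1]:.4f}")
print(f"slope 95% CI  : {X.conf_int()[1]}")     # excludes 0 iff p < 0.05
print(f"delta AIC     : {null.aic - X.aic:.2f}")
```

For nested models differing by one parameter, the three line up: the 95% CI excludes zero exactly when p < 0.05, which (via the likelihood ratio) corresponds approximately to the full model's AIC beating the intercept-only model's by about 1.84 units.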

The remainder of the forum is a back and forth argument about Murtaugh’s particular points and about the merits of the other authors’ chosen methods (Bayesian, information-theoretic and other approaches are represented). It’s quite entertaining, and this forum is really a great idea that I hope will feature in future issues. Some of the arguments are philosophical – are p-values really “evidence”, and is it possible to “accept” or “reject” a hypothesis using their values? Murtaugh does downplay the well-known problem that p-values summarize the strength of evidence against the null hypothesis, and do not assess the strength of evidence supporting a hypothesis or model. This makes them prone to misinterpretation (most students in intro stats want to say “the p-value supports the alternate hypothesis”), or at best to interpretation via stilted statements.

Not surprisingly, Murtaugh receives the most flak for defending p-values from researchers working in alternate worldviews like Bayesian and information-theoretic approaches. (His interactions with K. Burnham and D. Anderson (AIC) seem downright testy. Burnham and Anderson in fact start their paper “We were surprised to see a paper defending P values and significance testing at this time in history”...) But having this diversity of authors plays a useful role, in that it highlights that each approach has its own merits and best applications. Null hypothesis testing with p-values may be most appropriate for testing the effects of treatments in randomized experiments, while AIC values are useful when we are comparing multi-parameter, non-nested models. Bayesian methods, similarly, may be better suited to some problems than others. The focus on the “one true path” to statistics may be behind some of the current problems with p-values: they were used as a standard to help make ecology more rigorous, but the focus on p-values came at the expense of reporting effect sizes, making predictions, or careful experimental design.

Even at this point in history, it seems like there is still lots to say about p-values.

*But not open-access, which is really too bad. 

Monday, March 17, 2014

How are we defining prediction in ecology?

There is an ongoing debate about the role of wolves in altering ecosystem dynamics in Yellowstone, which has stimulated a number of recent papers, and apparently inspired an editorial in Nature. Entitled “An Elegant Chaos”, the editorial reads a bit like an apology for ecology’s failure at prediction, suggesting that we should embrace ecology’s lack of universal laws and recognize that “Ecological complexity, which may seem like an impenetrable thicket of nuance, is also the source of much of our pleasure in nature”.

Most of the time, I also fall squarely into the pessimistic “ecological complexity limits predictability” camp. And concerns about prediction in ecology are widespread and understandable. But there is also something frustrating about the way we so often approach ecological prediction. Statements such as “It would be useful to have broad patterns and commonalities in ecology” feel incomplete. Is it that we really lack “broad patterns and commonalities in ecology”, or has ecology adopted a rather precise and self-excoriating definition for “prediction”? 

We are fixated on achieving particular forms of prediction (either robust universal relationships, or else precise and specific numerical outputs), and perhaps we are failing at achieving these. But on the other hand, ecology is relatively successful in understanding and predicting qualitative relationships, especially at large spatial and temporal scales. At the broadest scales, ecologists can predict the relationships between species numbers and area, between precipitation, temperature and habitat type, between habitat types and the traits of species found within, between productivity and the general number of trophic levels supported. Not only do we ignore this foundation of large-scale predictable relationships, but we ignore the fact that prediction is full of tradeoffs. As a paper with the excellent title, “The good, the bad, and the ugly of predictive science” states, any predictive model is still limited by tradeoffs between: “robustness-to-uncertainty, fidelity-to-data, and confidence-in-prediction…. [H]igh-fidelity models cannot…be made robust to uncertainty and lack-of-knowledge. Similarly, equally robust models do not provide consistent predictions, hence reducing confidence-in-prediction. The conclusion of the theoretical investigation is that, in assessing the predictive accuracy of numerical models, one should never focus on a single aspect.” Different types of predictions have different limitations. But sometimes it seems that ecologists want to make predictions in the purest, trade-off free sense - robustness-to-uncertainty, fidelity-to-data, and confidence-in-prediction - all at once. 

In relation to this, ecological processes tend to be easier to represent in a probabilistic fashion, something that we seem rather uncomfortable with. Ecology is predictive in the way medicine is predictive – we understand the important cause and effect relationships, many of the interactions that can occur, and we can even estimate the likelihood of particular outcomes (of smoking causing lung cancer, of warming climate decreasing diversity), but predicting how a human body or ecosystem will change is always inexact. The complexity of multiple independent species, populations, genes, traits, all interacting with similarly changing abiotic conditions makes precise quantitative predictions at small scales of space or time pretty intractable. So maybe that shouldn’t be our bar for success. The analogous problem for an evolutionary biologist would be to predict not only a change in population genetic structure but also the resulting phenotypes, accounting for epigenetics and plasticity too. I think that would be considered unrealistic, so why is that where we place the bar for ecology? 

In part the bar for prediction is set so high because the demand for ecological knowledge – given habitat destruction, climate change, extinction, and a myriad of other changes – is so great. But in attempting to fulfill that need, it may be worth acknowledging that predictions in ecology occur on a hierarchy, from relationships at the broadest scale that we can be most certain about, down to the finest scale of interactions and traits and genes, where we are less certain. If we see events as occurring with different probabilities, and our knowledge of those probability distributions declines the farther down that hierarchy we travel, then our predictive ability will decline as well. New research fills in missing or poorly understood relationships, but at the finest scales, prediction may always be limited.

Tuesday, March 11, 2014

The lifetime of a species: how parasitism is changing Darwin's finches

Sonia Kleindorfer, Jody A. O’Connor, Rachael Y. Dudaniec, Steven A. Myers, Jeremy Robertson, and Frank J. Sulloway. (2014). Species Collapse via Hybridization in Darwin’s Tree Finches. The American Naturalist, Vol. 183, No. 3, pp. 325-341.

Small Galapagos tree finch, Camarhynchus parvulus.
Darwin’s finches are some of the best-known examples of how ecological conditions can cause character displacement and even lead to speciation. Continuing research on the Galapagos finches has provided the exceptional opportunity to follow post-speciation communities and explore how changes in ecological processes affect species and the boundaries between them. Separate finch species have been maintained in sympatry on the islands because various barriers maintain the species' integrity, preventing hybrids from occurring (e.g. species' behavioural differences) or being recruited (e.g. low fitness). As conditions change though, hybrids may be a source of increased genetic variance and novel evolutionary trajectories and selection against them may weaken. Though speciation is interesting in its own right, it is not the end of the story: ecological and evolutionary pressures continue and species continue to be lost or added, to adapt, or to lose integrity.

A fascinating paper by Kleindorfer et al. (2014) explores exactly this issue among the small, medium, and large tree finches (Camarhynchus spp.) of Floreana Island, Galapagos. Large and small tree finches first colonized Floreana, with the medium tree finch speciating on the island from a morph of the large tree finch. This resulted in three sympatric finch species that differ in body and beak size, but otherwise share very similar behaviour and appearance. However, ecological and environmental conditions have not remained constant on Floreana since observations in the 1800s: a parasite first observed on the island in 1997, Philornis downsi, has taken up residence and has caused massive nestling mortality (up to 98%) in the tree finches. Since parasite density is correlated with tree finch body size, the authors predicted that high parasite intensity should be linked to declining recruitment of the large tree finch. If females increasingly prefer smaller mates, there may also be increased hybridization, particularly if there is some advantage to having mixed parental ancestry. To test this, the authors sampled tree finch populations on Floreana in both 2005 and 2010. Parasite numbers increase with high precipitation, so by combining museum records (collected between 1852 and 1906, when no parasites were present), 2005 sampling records (dry conditions, lower parasite numbers), and 2010 sampling records (high rainfall, high parasite numbers), they could examine a gradient of parasite effects. They measured a number of morphological variables, collected blood for genotyping, estimated individual age, measured parasite intensity in nests, and observed mate choice.

Philornis downsi: the larval stage parasitizes nestlings. (A Google image search will provide some more graphic illustrations.)
For each time period, morphological measurements were used to cluster individuals into putative species. The museum specimens from the 1800s had 3 morphologically distinguishable populations, the true small, medium and large tree finch species usually written about. In 2005 there were still 3 distinct clusters, but the morphological overlap between them had increased. By 2010, the year with the highest parasite numbers, there were only two morphologically distinguishable populations. Which species had disappeared? Although recent studies have labelled the two populations as the “small” tree finch and “large” tree finch, the authors found that the 2010 “large” population is much smaller than the true large tree finches collected in 1852-1906, suggesting perhaps the large tree finch was no longer present. Genetic population assignment suggested that despite morphological clustering, there were actually only two distinct species on Floreana in 2005 and 2010: it appeared that the large tree finch species had gone extinct, and the boundary between the small and medium tree finch species had become porous, leading to morphologically intermediate hybrids.

The question, then, is whether the extinction of the large tree finch and the collapse of the boundary between small and medium tree finches can be attributed to the parasite, and to the changing selective pressures associated with it. Certainly there were clear changes in size structure (from larger birds to smaller birds) and in recruitment (from few young hybrids to many young hybrids) between the low parasite year (2005) and the high parasite year (2010). Strikingly, parasite loads in nests were much lower for hybrids and the smaller-bodied population than for the larger-bodied population (figure below). Compared to their large-bodied parents, hybrids somehow avoided parasite attack even in years with high parasite densities (2010). When parasite loads are high, hybrid offspring have a fitness advantage, as evidenced by the large number of young hybrids in 2012. The collapse of the large tree finch population is likely a product of parasite pressures as well, as females selected smaller mates with comparatively lower parasite loads. Despite the apparent importance of the parasites in 2010, the existence of only a few older hybrid individuals, and the greater morphological distance between populations seen in the 2005 survey (a low parasite period), suggests that selection for hybrids varies greatly through time. Though the persistence of the Philornis parasite on Floreana may prevent re-establishment of the large tree finch, changing parasite densities and other selective pressures may continue to cause the boundaries of the remaining finch populations to overlap and retract in the future. The story of Darwin's finches is even more interesting if we consider that it doesn't stop at character displacement but continues to this day.
From Kleindorfer et al 2014: Philornis parasite intensity in nests sampled in 2005 (lower parasite) and 2010/2012 (higher parasite), for nests of the small-bodied (population 1), intermediate hybrid, and larger-bodied (population 2) individuals.

Friday, March 7, 2014

EEB & Flow inclusion in Library of Congress Web Archives

I just received this email the other day, and nearly deleted it as another spam email (along with fake conference invites, obscure journal submission invites, and offers to make millions). But apparently it's legit, and the US Library of Congress has been archiving web sites for some time. They are now building a collection of science blogs (link, link), which is a pretty cool idea, and we're excited to be part of it :-)

Monday, February 24, 2014

Evolution at smaller and smaller scales: a role for microgeographic adaptation in ecology?

Jonathan L. Richardson, Mark C. Urban, Daniel I. Bolnick, David K. Skelly. 2014. Microgeographic adaptation and the spatial scale of evolution. Trends in Ecology & Evolution, 19 February 2014.

Among other trends in ecology, there seems to be a strong movement towards re-integrating ecological and evolutionary dynamics, and towards partitioning ecological dynamics to finer and finer scales (e.g. intraspecific variation). So it was great to see a new TREE article on “Microgeographic adaptation and the spatial scale of evolution”, which promised to contribute to both topics.

In this paper, Richardson et al. attempt to define and quantify the importance of small-scale adaptive differences that can arise between even neighbouring populations. These are given the name “microgeographic adaptation”, defined as trait differences across fine spatial scales that lead to fitness advantages in an individual’s home site. The obvious question is what spatial scale “microgeographic” refers to, and the authors define it very precisely as “the dispersal neighborhood … of the individuals located within a radius extending two standard deviations from the mean of the dispersal kernel of a species”. (More generally, they forward an argument for a unit – the ‘wright’ – that would measure adaptive divergence through space relative to dispersal neighbourhoods.) The concept of microgeographic adaptation feels like it is putting a pretty fine point on already existing ideas about local adaptation, and the authors acknowledge that it is a special case of adaptation at scales where gene flow is usually assumed to be high. They also suggest that microgeographic adaptation has received almost no recognition; it is probably fairer to say that the working assumption has been that gene flow at fine scales is large enough to swamp out local selective differences, even though many ecologists could name examples of trait differences between populations in close proximity.
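Read operationally, that definition is simple to compute. A minimal sketch (the dispersal distances below are simulated stand-ins; real estimates would come from mark-recapture or parentage data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observed natal dispersal distances (metres) for a species;
# the lognormal parameters here are invented for illustration.
distances = rng.lognormal(mean=4.0, sigma=0.6, size=500)

# The "microgeographic" scale of Richardson et al.: the neighbourhood
# within a radius two standard deviations beyond the dispersal kernel's mean.
radius = distances.mean() + 2 * distances.std()
print(f"dispersal neighbourhood radius = {radius:.0f} m")
```

Adaptive divergence between populations closer together than this radius is what the paper would count as microgeographic.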

From Richardson et al. (2014). One example of microgeographic adaptations.
Indeed, despite the general disregard for fine-scale evolutionary differences, they note that there are some historical and more recent examples of microgeographic variation. For example, Robert Selander found that despite the lack of physical barriers to movement, mice in neighbouring barns show allelic differences, probably due to territorial behaviour. As you might expect, microgeographic adaptations result when migration is effectively lower than expected given geographic distance and/or selection is stronger (as when neighbouring locations are very dissimilar). A variety of mechanisms are proposed, including the usual suspects – strong natural selection, landscape barriers, habitat selection, etc.

A list of the possible mechanisms leading to microgeographic adaptation is rather less interesting than questions about how to quantify the importance and commonness of microgeographic adaptation, and especially about its implications for ecological processes. At the moment, there are just a few examples and fewer still studies of the implications, making it difficult to say much. Because of either the lack of existing data and studies or else the paper's attempt to be relevant to both evolutionary biologists and ecologists, the vague discussion of microgeographic differences as a source of genetic variation for restoration or response to climate change, and mention of the existing—but primarily theoretical—ecological literature feels limited and unsatisfying. The optimistic view is that this paper might stimulate a greater focus on (fine) spatial scale in evolutionary biology, bringing evolution and ecology closer in terms of shared focus on spatial scale. For me though, the most interesting questions about focusing on smaller and smaller scales (spatial, unit of diversity (intraspecific, etc)) are always about what they can contribute to our understanding. Does complexity at small scales simply disappear as we aggregate to larger and larger scales (a la macroecology) or does it support greater complexity as we scale up, and so merit our attention? 

Tuesday, February 18, 2014

P-values, the statistic that we love to hate

P-values are an integral part of most scientific analyses, papers, and journals, and yet they come with a hefty list of concerns and criticisms from frequentists and Bayesians alike. An editorial in Nature (by Regina Nuzzo) last week provides a good reminder of some of the more concerning issues with the p-value. In particular, she explores how the obsession with "significance" creates issues with reproducibility and significant but biologically meaningless results.

Ronald Fisher, inventor of the p-value, never intended it to be used as a definitive test of “importance” (however you interpret that word). Instead, it was an informal barometer of whether a test hypothesis was worthy of continued interest and testing. Today though, p-values are often used as the final word on whether a relationship is meaningful or important, on whether the test or experimental hypothesis has any merit, even on whether the data are publishable. For example, in ecology, significance values from a regression or species distribution model are often presented as the results.

This small but troubling shift away from the original purpose of p-values is tied to concerns about false alarms and the replicability of results. One recent suggestion for increasing replicability is to make p-values more stringent – to require that they be less than 0.005. But the point the author makes is that although p-values are typically interpreted as “the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true”, this doesn't mean that a p-value of 0.01 in one study is exactly consistent with a p-value of 0.01 found in another study. P-values are not consistent or comparable across studies because the likelihood that there was a real (experimental) effect to start with alters the likelihood that a low p-value is just a false alarm (figure). The more unlikely the test hypothesis, the more likely a p-value of 0.05 is a false alarm. Data mining in particular will be (unwittingly) sensitive to this kind of problem. Of course, one is unlikely to know the odds of the test hypothesis, especially a priori, making it even more difficult to correctly think about and use p-values.

from: http://www.nature.com/news/scientific-method-statistical-errors-1.14700#/b5
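The arithmetic behind this point is simple enough to sketch. Below is a back-of-the-envelope version (my own simplification of the argument, assuming a screening threshold of p < 0.05 and 80% power, rather than Nuzzo's exact calibration at p ≈ 0.05): the rarer real effects are among the hypotheses we test, the more often a "significant" result is a false alarm.

```python
def false_alarm_prob(prior_true, alpha=0.05, power=0.8):
    """Probability that a 'significant' result is a false alarm,
    given the prior probability that the tested effect is real."""
    p_sig_and_false = alpha * (1 - prior_true)   # true null, p < alpha
    p_sig_and_true = power * prior_true          # real effect detected
    return p_sig_and_false / (p_sig_and_false + p_sig_and_true)

for prior in (0.05, 0.25, 0.5):   # long-shot, plausible, and 50-50 hypotheses
    print(f"prior P(effect real) = {prior:.2f} -> "
          f"P(false alarm | significant) = {false_alarm_prob(prior):.2f}")
```

Under these assumptions, a long-shot hypothesis (5% prior) makes over half of "significant" results false alarms, while a 50-50 hypothesis makes only about 6% of them false alarms – the same p-value, very different meanings.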
The other oft-repeated criticism of p-values is that a highly significant p-value may still be associated with a tiny (and thus possibly meaningless) effect size. The obsession with p-values is particularly strange, then, given that the question "how large is the effect?" should be more important than just answering “is it significant?". Ignoring effect sizes leads to a trend of studies showing highly significant results with arguably meaningless effect sizes. This creates the odd situation that publishing well requires high profile, novel, and strong results – but one of the major tools for identifying these results is flawed. The editorial lists a few suggestions for moving away from the p-value – including having journals require that effect sizes and confidence intervals be included in published papers, requiring statements to the effect of “We report how we determined our sample size, all data exclusions (if any), all manipulations and all measures in the study” in order to limit data-mining, or of course moving to a Bayesian framework, where p-values are near heresy. The best advice though, is quoted from statistician Steven Goodman: “The numbers are where the scientific discussion should start, not end.”
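The effect-size point is easy to demonstrate: with a large enough sample, a difference far too small to matter biologically still earns an impressive p-value. A quick simulated example (group means and sample sizes are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Two groups differing by a trivial amount relative to their spread.
a = rng.normal(loc=0.00, scale=1.0, size=200_000)
b = rng.normal(loc=0.02, scale=1.0, size=200_000)

t, p = stats.ttest_ind(a, b)
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var() + b.var()) / 2)
print(f"p = {p:.2g}, Cohen's d = {cohens_d:.3f}")  # highly 'significant', tiny effect
```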

Monday, February 10, 2014

Ecological progress, what are we doing right?

A post from Charles Krebs' blog called "Ten limitations on progress in ecology" popped up a number of times on social media last week. Krebs is an established population ecologist who has been working in the field for a long time, and he suggests some important problems leading to a lack of progress in ecology. These concerns range from a lack of jobs and funding for ecologists, to the fracturing of ecology into poorly integrated subfields. Krebs' post is a continuation of the ongoing conversation about limitations and problems in ecology, which has been up for discussion for decades. And as such, I agree with many of the points being made. But it reminded me of something I have been thinking about for a while, which is that it seems much more rare to see ecology’s successes listed. For many ecologists, it is probably easier to come up with the problems and weaknesses, but I think that's more of a cognitive bias than a sign that ecology is inescapably flawed. And that’s unfortunate: recognizing our successes and advances also helps us improve ecology. So what is there to praise about ecology, and what successes can we build on?

Despite Krebs’ concerns about the lack of jobs for ecologists, it is worth celebrating how much ecology has grown in numbers and recognition as a discipline. The first ESA annual meeting in 1914 had 307 attendees; in recent years attendance has been somewhere between 3000 and 4000 ecologists. Ecology is also increasingly diverse. Ecology and Evolutionary Biology departments are now common in big universities, sometimes replacing Botany and/or Zoology programs. On a more general level, the idea of “ecology” has increasing recognition by the public. Popular press coverage of issues such as biological invasions, honeybee colony collapses, wolves in Yellowstone, and climate change has at least made the work of ecologists slightly more apparent.

Long-term ecological research is probably more common and more feasible now than it has ever been. There are long-term fragmentation, biodiversity, and ecosystem function studies, grants directed at LTER, and a dedicated institute (the National Ecological Observatory Network (NEON)) funded by the NSF for long-term ecological data collection. (Of course, not all long-term research sites have had an easy go of things – see the Experimental Lakes Area in Canada.)

Another really positive development is that academic publishing is becoming more inclusive – not only are there more reputable open access publishing options for ecologists, but the culture is also changing to one where data are made available online for broad access, rather than privately controlled. Top journals are reinforcing this trend by requiring that data be published in conjunction with publications.

Multi-disciplinary collaboration is more common than ever, both because ecology naturally overlaps with geochemistry, mathematics, physics, physiology, and others, and also because funding agencies are rewarding promising collaborations. For example, I recently saw a talk where dispersal was considered in the context of wind patterns based on meteorological models. It felt like this sort of mechanistic approach provided a much fuller understanding of dispersal than the usual kernel-based model.

Further, though subdisciplines of ecology have at times lost connection with the core knowledge of ecology, some subfields have taken paths that are worth emulating, integrating multiple areas of knowledge, while still making novel contributions to ecology in general. For example, disease ecology is multidisciplinary, integrating ecology, fieldwork, epidemiological models and medicine with reasonable success.

Finally, more than ever, the complexity of ecology is being equalled by the available methods. The math, the models, the technology, and the computing resources are now up to the task. If you look at papers from ecology’s earliest years, statistics and models were restricted to simple regressions or ANOVAs and differential equations that could be solved by hand. Though there is uncertainty associated with even the most complex model, our ability to model ecological processes is higher than ever. Technology allows us to observe changes in alleles, to reconstruct phylogenetic trees, and to count species too small to even see. If used carefully and with understanding, we have the tools to make and continue making huge advances.

Maybe there are other (better) positive advances that I’ve overlooked, but it seems that – despite claims to the contrary – there are many reasons to think that ecology is a growing, thriving discipline. Not perfect, but successfully growing with the technological, political, and environmental realities.
Ecology may be successfully growing, but it's true that the timing is rough...

Tuesday, February 4, 2014

Competition and mutualism may be closely related: one example from myrmecochory


Robert J. Warren II, Itamar Giladi, Mark A. Bradford 2014. Competition as a mechanism structuring mutualisms. Journal of Ecology. DOI: 10.1111/1365-2745.12203.

As ecologists usually think about them, competition and mutualism are very different types of interactions. Competition has a negative effect on resource availability for a species, while mutualism should have a positive impact on resource availability. Mutualisms involve interactions between two or more species, and as such are biotic in nature. While the typical definition of the fundamental niche includes all (and only) the abiotic conditions necessary for a population’s persistence, with the realized niche showing those areas that are suitable once biotic interactions are considered (Pulliam 2000), mutualisms are a reminder that the niche is not as simple as we might hope. Mutualisms may be necessary for a population’s persistence, as in the case of obligate pollinators, and so some biotic interactions might be “fundamental”. More complicated still, species may compete for mutualist partners – plant species for pollinators, for example. If the mutualist partner is considered a resource, mutualism and competition may not be so far apart after all.

The relationship between competition and mutualism is probably most acknowledged in terms of pollinators – patterns of staggered flowering in a plant community arise in part to decrease simultaneous demand for limited pollinator resources. Another possibly fundamental biotic resource is dispersers, which may be necessary for the population persistence of some species. In Warren, Giladi, and Bradford (2014), the authors attempt to extend this idea of competition for mutualist partners to ant-mediated seed dispersal, or myrmecochory. Myrmecochorous plant species are common in a number of regions of the world. They rely on ants to move their seeds, which helps increase the distance between parent and offspring (and thus decrease competition), lower seed predation, and introduce seeds to novel habitats. Ant species that disperse these seeds benefit from the high-energy seed attachment (elaiosome) provided by the plant. While myrmecochorous plants are dependent on ants for successful dispersal, most ants do not rely solely on elaiosomes for food; further, there are fewer seed-dispersing ant species than there are ant-requiring plant species. As a result, competition for ants between myrmecochorous species is a reasonable hypothesis. If there is competition for mutualist partners, the predictions are that species either outcompete rivals by making their seeds more attractive, or else decrease the intensity of competition by staggering seed release.

Warren et al. tested these predictions for eastern North American woodland perennials: at least 50 plant species rely on ant dispersal in this region, but a much smaller number of ant species actually disperse seeds. This dearth of mutualist partners implies that competition for ant dispersers should be particularly strong. One way to successfully monopolize a mutualist is to ensure that the timing of seed release is coordinated with ant availability and attraction: in fact, comparisons between myrmecochorous and non-myrmecochorous plant species suggest that those requiring ants set seed earlier, when ant attraction to seeds is higher (insect prey become more attractive later in the season). To look at competition among myrmecochorous species, the authors asked whether seed size (and thereby attractiveness to ants) was staggered through time. Smaller myrmecochore seeds should, for example, become available when larger and more attractive seeds are not in competition. This prediction held – small, less attractive seeds were available earlier in the season than the larger, more attractive later seeds. The authors then experimentally tested whether small and large seeds were in competition for ants and differed in their success in attracting them. Using weigh boats secured to the forest floor, the researchers provided either i) only small myrmecochore seeds, ii) only large seeds, or iii) a combination of both seed sizes. Not that surprisingly, the presence of large seeds inhibited the removal of smaller, less attractive seeds by as much as 100% (i.e. no small seeds were removed).

The authors do a nice job of showing that species differ in their success in attracting ant dispersers, and that species with differing seed attractiveness appear to partition the season in such a way as to maximize their success. Whether this likely competition for dispersers extends to impact the species’ spatial distributions, or whether species are prevented from co-occurring by competition for mutualists, is less clear, and an interesting future direction. The authors also hypothesize that dispersers, rather than pollinators, may drive the timing of flowering and seed production in a system, an alternative to the usual assumption that pollinators, not dispersers, are the more important drivers of evolution. More generally, the paper is a reminder that, at least for some species, biotic interactions are fundamental to the niche. Or, even more likely, that the determinants of the fundamental and realized niches aren’t so very distinct. And that’s a reminder that has value for many areas of ecology, from species distribution models to invasive species research.

Wednesday, January 29, 2014

Guest post: One way to quantify ecological communities

This is a guest post by Aspen Reese, a graduate student at Duke University, who, in addition to studying the role of trophic interactions in driving secondary succession, is interested in how ecological communities are defined. Below she explains one possible way to explicitly define communities; note that communities must be represented as networks for the calculations below.

Because there are so many different ways of defining “community”, it can be hard to know what, exactly, we’re talking about when we use the term. It’s clear, though, that we need to take a close look at our terminology. In her recent post, Caroline Tucker offers a great overview of why this is such an important conversation to have. As she points out, we aren’t always completely forthright in laying out the assumptions underlying the definition used in any given study or subdiscipline. The question remains then: how to function—how to do and how to communicate good research—in the midst of such a terminological muddle?

We don’t need a single, objective definition of community (could we ever agree? And why should we?). What we do need, though, are ways to offer transparent, rigorous definitions of the communities we study. Moreover, we need a transferable system for quantifying these definitions.

One way we might address this need is to borrow a concept from the philosophy of biology, called entification. Entification is a way of quantifying thingness. It allows us to answer the question: how much does my study subject resemble an independent entity? And, more generally, what makes something an entity at all?

Stanley Salthe (1985) gives us a helpful definition: Entities can be defined by their boundaries, degree of integration, and continuity (Salthe also includes scale, but in a very abstract way, so I’ll leave that out for now). What we need, then, is some way to quantify the boundedness, integration, and continuity of any given community. By conceptualizing the community as an ecological network*—with a population of organisms (nodes) and their interactions (edges)—that kind of quantification becomes possible.

Consider the following framework: 

Boundedness
Communities are discontinuous from the environment around them, but how discrete that boundary is varies widely. We can quantify this discreteness by measuring the number of nodes that don’t have interactions outside the system relative to the total number of nodes in the system (Fig. 1a). 

Boundedness = (Total nodes without external edges)/(Total nodes)

Integration
Communities exhibit the interdependence and connections of their parts—i.e. integration. For any given level of complexity (which we can define as the number of constitutive part types, i.e. nodes (McShea 1996)), a system becomes more integrated as the networks and feedback loops between the constitutive part types become denser and the average path length decreases. Therefore, degree of integration can be measured as one minus the average path length (or average distance) between two parts relative to the total number of parts (Fig. 1b).

Integration = 1 - ((Average path length)/(Total nodes))

Continuity
All entities endure, if only for a time. And all entities change, if only due to entropy. The more similar a community is to its historical self, the more continuous it is. Using networks from two time points, a degree of continuity is calculated with a Jaccard index as the total number of interactions unchanged between both times relative to the total number of interactions at both times (Fig. 1c).

Continuity = (Total edges - changed edges)/(Total edges)
Fig 1. The three proposed metrics for describing entities—(A) boundedness, (B) integration, and (C) continuity—and how to calculate them. 
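
To make these metrics concrete, here is a minimal sketch of how they could be computed in Python with the networkx library, assuming the community is stored as an undirected graph whose node set also includes any external actors the community interacts with. The function names and arguments are my own choices for illustration, not part of Salthe's framework.

import networkx as nx

def boundedness(G, members):
    # Fraction of community nodes whose interactions all stay inside
    # the community. G is the full interaction network (including
    # external actors); members is the set of nodes we have defined
    # as belonging to the community.
    internal = [n for n in members
                if all(nbr in members for nbr in G.neighbors(n))]
    return len(internal) / len(members)

def integration(G, members):
    # One minus the average shortest path length among community
    # nodes, relative to the number of community nodes. Assumes the
    # community subgraph is connected; networkx raises an error if not.
    sub = G.subgraph(members)
    return 1 - nx.average_shortest_path_length(sub) / sub.number_of_nodes()

def continuity(edges_t1, edges_t2):
    # Jaccard index on the interaction (edge) sets at two time
    # points: edges present at both times over all edges seen at
    # either time. Edges are treated as unordered species pairs.
    e1 = {frozenset(e) for e in edges_t1}
    e2 = {frozenset(e) for e in edges_t2}
    return len(e1 & e2) / len(e1 | e2)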

Let’s try this method out on an arctic stream food web (Parker and Huryn 2006). The stream was measured for trophic interactions in June and August of 2002 (Fig. 2). If we exclude detritus and consider the waterfowl as outside the community, we calculate that the stream has a degree of boundedness of 0.79 (i.e. ~80% of its species interact only with other species included in the community), a degree of integration of 0.98 (i.e. the average path length is very close to 1), and a degree of continuity of 0.73 (i.e. almost 3/4 of the interactions are constant over the course of the two months). It’s as easy as counting nodes and edges—not too bad! But what does it mean?
Fig. 2: The food web community in an arctic stream over summer 2002. Derived from Parker and Huryn (2006). 
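
For intuition about the counting, the sketch above can be applied to a made-up miniature web (the species names and links here are invented for illustration; they are not the Parker and Huryn data):

G = nx.Graph()
G.add_edges_from([
    ("algae", "mayfly"), ("algae", "caddisfly"),
    ("mayfly", "trout"), ("caddisfly", "trout"),
    ("trout", "waterfowl"),  # waterfowl treated as outside the community
])
members = {"algae", "mayfly", "caddisfly", "trout"}

boundedness(G, members)  # 3/4 = 0.75: only trout interacts externally
integration(G, members)  # 1 - (8/6)/4, roughly 0.67

june = [("algae", "mayfly"), ("mayfly", "trout"), ("caddisfly", "trout")]
august = [("algae", "mayfly"), ("mayfly", "trout"), ("algae", "caddisfly")]
continuity(june, august)  # 2 shared edges / 4 total edges = 0.5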

Well, compare the arctic stream to a molecular example. Using a simplified network (Burnell et al. 2005), we can calculate the entification of the cellular respiration pathway (Fig. 3). We find that for the total respiration system, including both the aerobic and anaerobic pathways, boundedness is 0.52 and integration is 0.84. The continuity of the system is likely equal to 1 at most times because both pathways are active, and their makeup is highly conserved. However, if one were to test for the continuity of the system when it switches between the aerobic and the anaerobic pathway, the degree of continuity drops to 0.6.
Fig. 3: The anaerobic and aerobic elements of cellular respiration, one part of a cell’s metabolic pathway. Derived from Burnell et al. (2005)
Contrary to what you might expect, the ecological entity showed greater integration than the molecular pathway. This makes sense, however, since molecular pathways are more linear, which increases the average shortest distance between parts, thereby decreasing integration. In contrast, the continuity of molecular pathways can be much higher when considered in aggregate. In general, we would expect the boundedness scores for ecological entities to be fairly low, but with large variation between systems. The low boundedness score of the molecular pathway reflects the fact that we are only exploring a small part of the metabolic pathway and including ubiquitous molecules (e.g. NADH and ATP).

Here are three ways such a system could improve community ecology. First, the process can highlight interesting ecological aspects of a system that aren't immediately obvious. For example, food webs display much higher integration when parasites are included, and a recent call (Lafferty et al. 2008) to include these organisms highlights how closer attention to under-recognized parts of a network can drastically change our understanding of a community. Or consider how the recognition that islands, which have clear physical boundaries, may have low boundedness due to their reliance on marine nutrient subsidies (Polis and Hurd 1996) revolutionized how we study them. Second, this methodology can help a researcher find a research-appropriate, cost-effective definition of the study community that also maximizes its degree of entification. A researcher could use sensitivity analyses to determine what effect changing the definition of her community would have on its characterization. Then, when confronted with the criticism that a certain player or interaction was left out of her study design, she could respond with an informed assessment of whether the inclusion of further parts or processes would actually change the character of the system in a quantifiable way. Finally, the formalized process of defining a study system will facilitate useful conversation between researchers, especially those who have used different definitions of communities. It will allow for more informed comparisons between systems that are similar in these parameters, or help indicate a priori when systems are expected to differ strongly in their behavior and controls.

Communities, or ecosystems for that matter, aren’t homogeneous; they don’t have clear boundaries; they change drastically over time; we don’t know when they begin or end; and no two are exactly the same (see Gleason 1926). Not only are communities unlike organisms, but it is often unclear whether or not communities or ecosystems are units of existence at all (van Valen 1991). We may never find a single objective definition for what they are. Nevertheless, we work with them every day, and it would certainly be helpful if we could come to terms with their continuous nature. Whatever definition you choose to use in your own research—make it explicit and make it quantifiable. And be willing to discuss it with your peers. It will make your, their, and my research that much better.

Monday, January 27, 2014

Gender diversity begets gender diversity for invited conference speakers


There are numerous arguments for why the academic pipeline leaks - i.e. why women are increasingly less represented at higher academic ranks. Among others, the suggestion has been made that there can be simple subconscious biases in the image that accompanies the idea of "a full professor" or "a seminar speaker". A useful new paper by Arturo Casadevall and Jo Handelsman provides some support for this idea. The authors identified invited talks at academic conferences as an example of important academic career events, which provide multiple benefits and external recognition of a researcher’s work. However, a number of studies have shown that women are less represented as invited speakers, both proportionally and in absolute numbers. To explore this further, the authors asked whether the presence or absence of women as conveners of sessions at American Society for Microbiology (ASM) meetings affected the number of female invited speakers. Conveners for ASM meetings are involved in the selection of speakers, either directly or in consultation with program committee members. The two annual meetings run by the ASM involve 4000-6000 attendees, of whom women constitute approximately 40% (37% when only full members were considered). Despite this nearly 40% female membership, for sessions where all conveners were male, the percentage of invited speakers who were female was consistently near 25%. While explanations for this sort of poor representation of women in academia are often structural, the authors show that, in this case, simple changes might shift this statistic. If one or more women were conveners for a session, the proportion of female invited speakers in that session rose to around 40%, in line with women’s general representation in the ASM. The authors don’t offer precise explanations for these striking results, but note that women conveners may be more likely to be aware of gender and may make a conscious effort to invite female speakers. Implicit biases, our “search images”, may unconsciously favour males, but these results are positive in suggesting that even small changes and greater awareness can make a big difference.

The proportion of invited speakers in a session who are female from 2011-2013, for the two annual meetings (GM & ICAAC) organized by the ASM. Compare black bars - no female conveners - and grey bars - at least one female convener.

Tuesday, January 21, 2014

A multiplicity of communities for community ecology

Community ecologists have struggled with some fundamental issues for their discipline. A longstanding example is that we have failed to formally and consistently define our study unit – the ecological community. Textbook definitions are often broad and imprecise: for example, according to Wikipedia "a community...is an assemblage or associations of populations of two or more different species occupying the same geographical area". The topic of how to define the ecological community is periodically revived in the literature (for example, Lawton 1999; Ricklefs 2008), but in practice, papers rely on implicit but rarely stated assumptions about "the community". And even if every paper spent page space attempting to elucidate what it is we mean by “community”, little consistency would be achieved: every subdiscipline relies on its own communally understood working definition.

In their 1994 piece on ecological communities, Palmer and White suggested “that community ecologists define community operationally, with as little conceptual baggage as possible…”. It seems that ecological subdisciplines have each operationalized some definition of "the community", but one weakness of doing so is that the conceptual basis for these communities is often obscured. Even if a community is simply where you lay your quadrat, you are making particular assumptions about what a community is. Making assumptions to delimit a community is not in itself problematic: the problem arises when results are interpreted without keeping those conceptual assumptions in mind. And understanding what assumptions each subfield is making is certainly far more important than simply fighting, unrealistically, for consistent definitions across every study and field.
 
Defining ecological communities.
Most definitions of the ecological community vary in terms of only a few basic characteristics (figure above) that are required to delimit *their* community. Communities can be defined to require that a group of species co-occur in space and/or time, and this group of species may or may not be required to interact. For example, a particular subfield might define communities simply in terms of co-occurrence in space and time, without requiring that interactions be explicitly considered or measured. This is not to say researchers don't believe that such interactions occur, just that they are not important for the research. Microbial "communities" tend to be defined as groups of co-occurring microbes, but interspecific interactions are rarely measured explicitly (for practical reasons). Similarly, a community defined as "neutral" might be studied in terms of characteristics other than species interactions. Studies of succession or restoration might require that species interact in a given space, but since species composition has changed or is changing through time, temporal co-occurrence is less important as an assumption. Subdisciplines that incorporate all three characteristics include theoretical approaches, which tend to be very explicit in defining communities, and food web studies, which similarly require that species coexist and interact in space and time. On the other hand, a definition such as “[i]t is easy to define local communities wherein species interact by affecting each other’s demographic rates” (Leibold et al. 2004) does not include any explicit relationship of those species with space, making it possible to consider regionally coexisting species.

How you define the scale of interest is perhaps more important in distinguishing communities than the particulars of space, time, and interactions. Even if two communities are defined as having the same components, a community studied at the spatial or temporal scale relevant to zooplankton is far different from one studied in the same locale and under the same particulars, but with interest in the freshwater fish community. The scale of interactions considered by a researcher interested in a plant community might include a single trophic level, while a food web ecologist would expand that scale of interactions to consider all the trophic levels.

The final consideration relates to the historical debate over whether communities are closed, discrete entities, as they are often modelled in theoretical exercises, or porous, overlapping ones. The assumption in many studies tends to be that communities are discrete and closed, as it is difficult to model communities or food webs without such simplifying assumptions about what enters and leaves the system. On the other hand, some subdisciplines must explicitly assume that their communities are open to invasion and inputs from external communities. Robert Ricklefs, in his 2008 Sewall Wright Address, made one of the more recent calls for a move away from unrealistic closed communities and toward the acceptance that communities are really composed of the overlapping regional distributions of multiple organisms, and are not local or closed in any meaningful way.

These differences matter most when comparing or integrating results that used different working definitions of "the community". It seems more important to note possible incompatibilities between working definitions than to force some one-size-fits-all definition on everything. In contrast to Palmer and White, the focus should not be on ignoring the conceptual, but rather on recognizing the relationship between practice and concept. For example, microbial communities are generally defined as species co-occurring in space and time, without explicit interactions having to be shown. While this is sensible from a practical perspective, a problem arises when theory and literature from other areas that assume interactions are occurring are directly applied to microbial communities. Only by embracing this multiplicity of definitions can we piece together existing data and evidence across subdisciplines to more fully understand “community ecology” in general.

Monday, January 13, 2014

The generosity of academics

A cool tumblr, http://academickindness.tumblr.com/, gives credit to the often under-acknowledged kindness of academics. It’s a topic I sometimes think about, because the culture of academia (at least in ecology) has always seemed to me to be driven by generous interactions.

Most of us have a growing lifetime acknowledgement list, starting at the earliest point in our careers. After four years in my PhD, my thesis’ acknowledgements included other graduate students and lab mates, post-docs, undergrads, faculty at several institutions, and my supervisor. Almost everyone on this list expected nothing in exchange for their time and knowledge. Of course there are exceptions: people who refuse to share their data, rarely interact with strangers, have little time for grad students, or are difficult to interact with. But they are exactly that, exceptions. Instead, one-sided interactions regularly occur. Where else could you email a stranger, hoping they will meet with you at a conference to talk about your research? Or have a distant lab mail you cultures to replace ones that died? Or email the creator of an R package because you can’t figure out where your data is going wrong, and get a detailed reply? These are not untypical interactions in academia.

The lower down the academic ladder you are, the more you benefit from (and maybe rely on) the kindness of busy people: committee members, collaborators, lab managers. Busy, successful faculty members, for example, took time to meet with me many times, kindly and patiently answering my questions. I can think of two reasons for this atmosphere. First, most ecologists are simply passionate about their science. They like to think about it, talk about it, and exchange ideas with other people who are similarly inclined. The typical visit of an invited speaker includes hours and hours of meetings and meals with students, and most seem to relish this. Like most believers, they have a little of the zeal of the converted. Second, many of the structures of academic science rely heavily on goodwill and generosity. For example, reviews of journal submissions rely entirely on a system of volunteerism. That would be untenable for most businesses, but has survived this far in academic publishing. Grad student committees, although they have some value for tenure applications, are mostly dependent on the golden rule (I’ll be on your student’s committee if you’ll be on mine). And then there are supervisor/supervisee relationships. These obviously vary between personalities, universities, and countries, but good supervisors invest far more time and energy than the bare minimum necessary to get publications and highly qualified personnel out of the relationship. That we rely on these interactions so heavily becomes most apparent when they fail: when you wait months on a paper because there are no reviewers, or when your supervisor disappears, progress stops.

Of course, this sort of system only lasts if everyone feels like they gain some benefit, and everyone feels like the weight on them is fair. The ongoing problems with the review system suggest that this isn’t always true. Still, the posts on academickindness.tumblr.com are a reminder that altruism is still alive and well in academia.