Monday, November 7, 2016

What is a community ecologist anyways?

I am organizing a ‘community ecology’ reading group, and someone asked me whether I thought focusing on communities was a little restrictive. And no, the thought had never crossed my mind, which I realized is because I internally define community ecology as a large set of things including ‘everything I work on’ :-) When people ask me what I do, I usually say I’m a community ecologist.

Obviously community ecology is the study of ecological communities (as a theoretical ideal, “the complete set of organisms living in a particular place and time”, an ecological community sensu lato; Vellend 2016). But in practice it's very difficult to define the boundaries of a community (Ricklefs 2008), and the relevant scales of time and space are rather flexible.

So I suppose my working definition has been that a community ecologist studies groups of organisms and seeks to understand them in terms of ecological processes. There is flexibility in terms of spatial and temporal scale, number and type of trophic levels, interaction type and number, and response variables of interest. It’s also true that this definition could encompass much of modern ecology…

On the other hand, a colleague argued that only the specific study of species interactions should be considered ‘community ecology’: e.g. pollination ecology, predator-prey interactions, competition, and probably food web and multi-trophic interactions.

Perhaps my definition is so broad as to be uninformative, and my colleague's is too narrow to capture the breadth of the field. But it is my interest in community ecology that leads me to sometimes think about larger spatial and temporal scales. Maybe that's what community ecologists have in common: the flexibility needed to deal with the complexities of ecological communities.

Monday, October 17, 2016

Reviewing peer review: gender, location and other sources of bias

For academic scientists, publications are the primary currency for success, and so peer review is a central part of scientific life. When discussing peer review, it’s always worth remembering that since it depends on ‘peers’, broader issues across ecology are often reflected in issues with peer review. A series of papers from Charles W. Fox and coauthors Burns, Muncy, and Meyer does a great job of illustrating this point, showing how diversity issues in ecology are writ small in the peer review process.

The journal Functional Ecology provided the authors with up to 10 years of data (2004–2014) on its submission, editorial, and review processes. These data provide a unique opportunity to explore how factors such as gender and geographic locale affect the peer review process and its outcomes, and how this has changed over the past decade.

Author and reviewer genders were assigned using an online database (genderize.io) that includes 200,000 names, each with a probability reflecting how strongly the name is associated with a given gender. The geographic locations of editors and reviewers were also identified from their profiles. There are some clear limitations to this approach, particularly that Asian names had to be excluded. Still, 97% of names were present in the genderize.io database, and 94% of those names were associated with a single gender more than 90% of the time.
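For the curious, the mechanics of this kind of name-based assignment are straightforward. Here is a minimal sketch (my own, not the authors' actual pipeline) using the public genderize.io API, with a 0.9 probability cutoff mirroring the >90% threshold described above:

```python
# Minimal sketch of name-based gender assignment via genderize.io.
# Not the authors' code; threshold and error handling are illustrative.
import requests

def assign_gender(first_name, min_probability=0.9):
    """Return a gender guess for a first name, or None if uncertain."""
    resp = requests.get("https://api.genderize.io", params={"name": first_name})
    resp.raise_for_status()
    data = resp.json()
    # 'gender' is null for names absent from the database; names whose
    # dominant gender falls below the cutoff are treated as unassignable.
    if data["gender"] is None or data["probability"] < min_probability:
        return None
    return data["gender"]

# e.g. assign_gender("Charles") -> "male"
```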

Many—even most—of Fox et al.’s findings are in line with what has already been shown regarding the causes and effects of gender gaps in academia. But they are interesting, nonetheless. Some of the gender gaps seem to be tied to age: senior editors were all male, and although females make up 43% of first authors on papers submitted to Functional Ecology, they are only 25% of senior authors.

Implicit biases in identifying reviewers are also fairly common: far fewer women were suggested than men, even when female authors or female editors were identifying reviewers. Female editors did invite more female reviewers than male editors did ("Male editors selected less than 25 percent female reviewers even in the year they selected the most women, but female editors consistently selected ~30–35 percent female"). Female authors also suggested slightly more female reviewers than male authors did.

Some of the statistics are great news: there was no effect of author gender or editor gender on how papers were handled and their chances of acceptance, for example. Further, the mean score given to a paper by male and female reviewers did not differ – reviewer gender isn’t affecting your paper’s chance of acceptance. And when the last or senior author on a paper is female, a greater proportion of all the authors on the paper are female too.

The most surprising statistic, to me, was a small (2%) but consistent effect of handling editor gender on male reviewers: they were less likely to respond to review requests, and less likely to agree to review, when the editor making the request was female.

That there are still observable effects of gender in peer review, despite increasing awareness of the issue, should tell us that the effects of other, less-discussed forms of bias are probably similar or greater. Fox et al. hint at this when they show how important the effect of geographic locale is on reviewer choice. Editors overwhelmingly over-selected reviewers from their own geographic locality. This is not surprising, since social and professional networks are geographically driven, but it can have the effect of making science more insular. Other sources of bias – race, country of origin, language – are more difficult to measure from these data, but hopefully the results from these papers are reminders that such biases can have measurable effects.

[Figure from Fox et al. 2016a.]

Thursday, October 6, 2016

When individual differences matter - intraspecific variation in 2016

Maybe it is just confirmation bias, but there seems to have been an upswing in the number of cool papers on the role of intraspecific variation in ecology. For example, three new papers highlight the importance of variation among individuals for topics ranging from conservation to coexistence to community responses to changing environments. All are worth a deeper read.

‘An Anthropocene map of genetic diversity’ asks how intraspecific variation is distributed globally, a simple but important question. Genetic diversity within a species is an important predictor of its ability to adapt to changing environments. For many species, however, genetic diversity may be declining as populations shrink, become fragmented, or experience strong selection related to human activities. Quantifying a baseline for global genetic diversity is an important goal. Further, with the rise of ‘big data’ (as people love to brand it), it is now an accessible one: there are millions of genetic sequences in GenBank, many with associated GPS coordinates.
Many of the global patterns in genetic diversity agree with those seen for other forms of diversity: for example, some of the highest levels are observed in the tropical Andes and Amazonia, there is a peak at mid-latitudes, and human presence seems to be associated with reduced genetic diversity.
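To make the underlying computation concrete, here is a toy sketch of the general approach: bin georeferenced sequences into grid cells and average pairwise nucleotide differences within species. Everything here (cell size, record format, the bare-bones distance measure) is illustrative; the paper's actual pipeline handles alignment and equal-area cells far more carefully:

```python
# Toy estimate of intraspecific genetic diversity per grid cell:
# average pairwise differences within each species, then across the
# species present in the cell. Illustrative only.
from collections import defaultdict
from itertools import combinations

def prop_diff(a, b):
    """Proportion of differing sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def diversity_by_cell(records, cell_deg=4.0):
    """records: iterable of (species, lat, lon, aligned_seq) tuples."""
    groups = defaultdict(list)  # (cell, species) -> sequences
    for sp, lat, lon, seq in records:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        groups[(cell, sp)].append(seq)

    per_cell = defaultdict(list)  # cell -> per-species diversity values
    for (cell, sp), seqs in groups.items():
        if len(seqs) < 2:
            continue  # need at least two sequences to compare
        pairs = list(combinations(seqs, 2))
        per_cell[cell].append(sum(prop_diff(a, b) for a, b in pairs) / len(pairs))

    return {cell: sum(v) / len(v) for cell, v in per_cell.items()}
```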

[Figure from Miraldo et al. (2016): map of uncertainty. Areas in green represent high sequence availability and taxonomic coverage (of all species known to be present in a cell); all other colors represent areas lacking important data.]
The resulting data set represents ~5,000 species, so naturally the rarest and least charismatic species are underrepresented. The authors identify this global distribution of ignorance, highlighting just how small our big data still is.

Miraldo, Andreia, et al. "An Anthropocene map of genetic diversity." Science 353.6307 (2016): 1532-1535.


In ‘How variation between individuals affects species coexistence’, Simon Hart et al. do the much-needed work of asking how intraspecific variation fits into coexistence theory. Their results reinforce the suggestion that, in general, intraspecific variation should make coexistence more difficult, since it increases the dominance of superior competitors and reduces niche differentiation between species. (Note that this contrasts with the argument Jim Clark has made based on individual trees, e.g. Clark 2010.)
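The nonlinear-averaging intuition behind this result can be demonstrated in a few lines. The model below is my own toy illustration, not Hart et al.'s: because fecundity declines nonlinearly (convexly) with an individual's sensitivity to competition, the mean fecundity of variable individuals differs from the fecundity of the average individual (Jensen's inequality), so intraspecific variation shifts population-level competitive ability; the paper shows that this shift generally benefits the superior competitor. All parameter values here are invented:

```python
# Toy illustration of nonlinear averaging under competition.
# Parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def fecundity(c, competitors=100, lam=50):
    """Beverton-Holt-style response: seed output declines with c,
    an individual's sensitivity to competition."""
    return lam / (1 + c * competitors)

c_mean, c_sd = 0.05, 0.02
c_indiv = rng.normal(c_mean, c_sd, size=100_000).clip(min=0)

print("fecundity of the average individual:", fecundity(c_mean))
print("mean fecundity of variable individuals:", fecundity(c_indiv).mean())
# The second number is larger: because the response is convex in c,
# variation among individuals inflates average performance under
# competition, changing competitive outcomes at the population level.
```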

Hart, Simon P., Sebastian J. Schreiber, and Jonathan M. Levine. "How variation between individuals affects species coexistence." Ecology Letters (2016).


Evolutionary rescue is an interesting topic (see work from Andy Gonzalez and Graham Bell for more details), highlighting the ability of populations to adapt to stressors and changing environments, provided enough underlying additive genetic variation and time are available. It has been suggested that phenotypic plasticity can reduce the chance of evolutionary rescue, since it reduces selection on genetic traits. Alternatively, by increasing survival time following environmental change, plasticity may aid evolutionary rescue. Ashander et al. use a theoretical approach to explore how plasticity interacts with changes in environmental conditions (their mean and their predictability, i.e. autocorrelation) to affect extinction risk, and so the chance of evolutionary rescue. Their results provide insight into how the predictability of new environments, through its effect on stochasticity, in turn changes extinction risk and rescue.
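A rough sketch of the ingredients (mine, not Ashander et al.'s actual eco-evolutionary model, and omitting evolution entirely): the environment follows an AR(1) process whose autocorrelation rho sets its predictability, plasticity b closes some fraction of the phenotype-environment gap, and the remaining squared mismatch depresses growth. All parameter values are invented:

```python
# Sketch of plasticity in an autocorrelated environment after an
# abrupt shift. Purely illustrative; no evolution is modeled.
import numpy as np

rng = np.random.default_rng(0)

def simulate(rho=0.5, b=0.5, shift=3.0, steps=200, n0=500.0,
             r_max=0.1, load=0.02):
    """Final population size, or 0.0 if the population went extinct."""
    env, n = 0.0, n0
    for _ in range(steps):
        # AR(1) environment: stationary mean `shift`, unit variance,
        # autocorrelation rho (higher rho = more predictable)
        env = shift + rho * (env - shift) + rng.normal() * np.sqrt(1 - rho**2)
        mismatch = (1 - b) * env  # plasticity closes a fraction b of the gap
        n *= np.exp(r_max - load * mismatch**2)  # mismatch load on growth
        if n < 1:
            return 0.0
    return n

# Compare outcomes across predictability and plasticity, e.g.:
# [[simulate(rho=r, b=b) for r in (0.0, 0.5, 0.9)] for b in (0.0, 0.5)]
```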


Tuesday, September 20, 2016

The problematic effect of small effects

Why do ecologists often get different answers to the same question? Depending on the study, for example, the relationship between biodiversity and ecosystem function can be positive, negative, or absent (e.g. Cardinale et al. 2012). Ecologists explain this in many ways: experimental issues and differences, context dependence, and so on. However, it may also be due to an even simpler issue: the statistical implications of small effect sizes.

This is the point that Lemoine et al. make in an interesting new report in Ecology. Experimental data from natural systems (e.g. warming experiments, BEF experiments) are often highly variable, with low replication and frequently small effect sizes. Perhaps it is not surprising that we see contradictory outcomes, because data with small true effect sizes are prone to high Type S error (the chance of obtaining the wrong sign for an effect) and high Type M error (the amount by which an effect size must be overestimated in order to be significant). Contradictory results arise from these statistical issues, combined with the fact that papers published early on may simply have found significant effects by chance (the Winner's Curse).

Power is the probability of correctly rejecting a false null hypothesis (H0). The power of ecological experiments increases with sample size (N), since uncertainty in the data decreases with increasing N. However, if your true effect size is small, studies with low power have to substantially overestimate the effect size to obtain a significant p-value. This follows from the fact that if the variation in your data is large and your effect size is small, the critical value for a significant test statistic is quite large. Thus, for your results to be significant, you need to observe an effect larger than this critical value, which will be much larger than the true effect size. It's a catch-22 for small effect sizes: if your estimate is close to the true effect, it very well may not be significant; if you have a significant result, you may be overestimating the effect size.
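This catch-22 is easy to see in a quick simulation. The sketch below (illustrative numbers, not from the paper) runs many underpowered two-group experiments with a small true effect and looks only at the 'significant' ones:

```python
# Simulate the Winner's Curse: with a small true effect and low power,
# significant estimates are exaggerated (Type M) and occasionally
# wrong-signed (Type S). Illustrative parameters, not from Lemoine et al.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, sd, n, n_exp = 0.2, 1.0, 10, 20_000

estimates, pvals = [], []
for _ in range(n_exp):
    treat = rng.normal(true_effect, sd, n)
    control = rng.normal(0.0, sd, n)
    _, p = stats.ttest_ind(treat, control)
    estimates.append(treat.mean() - control.mean())
    pvals.append(p)

estimates, pvals = np.array(estimates), np.array(pvals)
sig = pvals < 0.05
print(f"power: {sig.mean():.2f}")  # most experiments 'fail'
print(f"mean significant estimate: {estimates[sig].mean():.2f} "
      f"(true effect is {true_effect})")
print(f"Type M (exaggeration): {np.abs(estimates[sig]).mean() / true_effect:.1f}x")
print(f"Type S (wrong sign): {(estimates[sig] < 0).mean():.2f}")
```

Run this and the estimates that cross the significance bar sit several times above the true effect: exactly the pattern of inflated early findings described above.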

[Figure from Lemoine et al. 2016.]
There is no easy solution to this issue, but the authors make some useful suggestions. First, it's really the variability of your data, more than the sample size, that raises Type M error. So if your data set is small but beautifully behaved, this may not be a huge issue for you (but you must be working in a highly atypical system). If you can increase your replication, that is the obvious solution. The other solutions they see involve cultural shifts in how we publish statistical results. As many others have, the authors suggest we move away from reliance on p-values as a pass/fail tool for results. In addition to reporting p-values, they suggest we report effect sizes and their uncertainty, and that this be done for all variables regardless of whether the results are significant. Type M error and power analyses can be reported in a fashion meant to inform the interpretation of results: “However, low power (0.10) and high Type M error (2.0) suggest that this effect size is likely an overestimate. Attempts to replicate these findings will likely fail.”
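Statements like the one quoted can be produced with a 'retrodesign' calculation in the style of Gelman and Carlin (2014), the source of the Type S/Type M terminology: given a plausible true effect size A and the standard error s of your estimate, compute power, Type S, and Type M. Below is a Python translation of their published R function; the inputs are made up for illustration:

```python
# Retrodesign-style power / Type S / Type M calculation (after Gelman &
# Carlin 2014). A and s below are illustrative, not from any real study.
import numpy as np
from scipy import stats

def retrodesign(A, s, alpha=0.05, n_sims=100_000, seed=0):
    """A: assumed true effect size; s: standard error of the estimate."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2)
    # Probability the estimate is significant in either direction
    power = (1 - stats.norm.cdf(z - A / s)) + stats.norm.cdf(-z - A / s)
    # Probability a significant estimate has the wrong sign
    type_s = stats.norm.cdf(-z - A / s) / power
    # Expected exaggeration of significant estimates (simulated)
    est = A + s * rng.standard_normal(n_sims)
    type_m = np.abs(est[np.abs(est) > s * z]).mean() / A
    return power, type_s, type_m

power, type_s, type_m = retrodesign(A=0.5, s=0.5)
print(f"power={power:.2f}, Type S={type_s:.3f}, Type M={type_m:.1f}")
```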

Lemoine, N. P., Hoffman, A., Felton, A. J., Baur, L., Chaves, F., Gray, J., Yu, Q. and Smith, M. D. (2016), Underappreciated problems of low replication in ecological field studies. Ecology. doi: 10.1002/ecy.1506