Thursday, June 30, 2016

The pessimistic and optimistic view of BEF experiments?

The question of the value of biodiversity-ecosystem function (BEF) experiments—their results and their relevance—has become a heated one in the literature. An extended argument over the last few years has debated the assumption that local biodiversity is in fact in decline (e.g. Vellend et al. 2013; Dornelas et al. 2014; Gonzalez et al. 2016). If biodiversity isn't disappearing from local communities, the logical conclusion would be that experiments focused on the local impacts of biodiversity loss are less relevant.

Two papers in the Journal of Vegetation Science (Wardle 2016 and Eisenhauer et al. 2016) continue this discussion regarding the value of BEF experiments for understanding biodiversity loss in natural ecosystems. From reading both papers, it seems as though, broadly speaking, the authors agree on several key points: that results from biodiversity-ecosystem functioning experiments don't always match observations about species loss and functioning in nature, and that nature is much more complex, context-dependent, and multidimensional than typical BEF experimental systems. (The question of whether local biodiversity is declining may be more contested between them.)

Biodiversity-ecosystem function experiments typically involve randomly assembled plant communities containing either the full complement of species or subsets containing different numbers of species. Communities containing fewer species are meant to provide information about the loss of species diversity from a system. Functions (often including, but not limited to, primary productivity or biomass) are eventually measured and analysed in relation to treatment diversity. Although some striking results have come out of these types of studies (e.g. Tilman and Downing 1996), they can vary a fair amount in their findings (Cardinale et al. 2012).
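
For readers unfamiliar with what "analysed in relation to treatment diversity" usually looks like in practice, here is a minimal, purely illustrative sketch. It is not the analysis from any particular experiment; the simulated plot data, the column names, and the log-linear form of the model are all assumptions made for the example.

```python
# A minimal sketch of a typical BEF analysis, not any specific experiment's methods.
# Hypothetical plots are sown at different species richness levels and a measured
# function (here, aboveground biomass) is modelled against (log) sown diversity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical example data: plots sown at 1, 2, 4, 8, or 16 species.
rng = np.random.default_rng(1)
richness = np.repeat([1, 2, 4, 8, 16], 10)
biomass = 200 + 80 * np.log(richness) + rng.normal(0, 40, richness.size)
plots = pd.DataFrame({"richness": richness, "biomass": biomass})

# Fit the diversity-function relationship and inspect the slope estimate.
fit = smf.ols("biomass ~ np.log(richness)", data=plots).fit()
print(fit.summary().tables[1])
```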

David Wardle's argument is that BEF experiments differ a good deal from natural systems: in natural systems, BEF relationships can take different forms and explain relatively little variation, and so extrapolating from existing experiments seems uninformative. In nature, changes in diversity are driven by ecological processes (invasion, extinction), and experiments involving randomly assembled communities and randomly lost species do nothing to simulate these processes. Wardle also seems to feel that the popularity of typical BEF experiments has come at the cost of more realistic experimental designs. This is something of a zero-sum argument (although in some funding climates it may well be true...). But it is true that big BEF experiments tend to be costly and take time and labour, meaning that there is an impetus to publish as much as possible from each one. And given that BEF experiments have already changed drastically in design once, in response to criticisms about their inability to disentangle complementarity and portfolio effects, they do not seem inflexible about design.

Eisenhauer et al. agree in principle that current experiments frequently lack a realistic design, but suggest that plenty of other types of studies (looking at functional diversity or phylogenetic diversity, for example, or using non-random loss of species) are being published as well. For them, there is also value in having multiple similar experiments: this allows meta-analysis and the aggregation of comparable results, and will eventually help to tease apart the important mechanisms. Further, realism is difficult to obtain in the absence of a baseline for a "natural, untouched, complete system" from which to remove species.

The point that Eisenhauer et al. and Wardle appear to agree on most strongly is that real systems are complex, multidimensional and context-dependent. Making the leap from a BEF experiment with 20 plant species to the real world is inevitably difficult. Wardle sees this as a massive limitation; Eisenhauer et al. see it as a strength. Inconsistencies between experiments and nature are information, highlighting when context matters. By running controlled experiments in which you vary context (such as by manipulating both nutrient level and species richness), you can begin to identify mechanisms; a rough sketch of what that looks like follows.
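
To make the "vary the context" point concrete, here is a hedged sketch of how a factorial design crossing sown richness with a nutrient-addition treatment could be analysed. The experiment, the treatment levels, and the effect sizes are hypothetical; the point is simply that a significant interaction term is the statistical signal that the diversity-function relationship depends on context.

```python
# A minimal sketch of probing context-dependence in a controlled design, assuming a
# hypothetical experiment that crosses sown richness with a nutrient treatment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
richness = np.tile(np.repeat([1, 2, 4, 8, 16], 6), 2)
nutrients = np.repeat(["ambient", "fertilised"], richness.size // 2)
# Assume (for illustration) a weaker diversity effect under fertilisation.
slope = np.where(nutrients == "fertilised", 30.0, 80.0)
biomass = 200 + slope * np.log(richness) + rng.normal(0, 40, richness.size)
plots = pd.DataFrame({"richness": richness, "nutrients": nutrients, "biomass": biomass})

# The interaction term tests whether the diversity-function slope differs with context.
fit = smf.ols("biomass ~ np.log(richness) * nutrients", data=plots).fit()
print(fit.summary().tables[1])
```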

Perhaps the greatest problem with past BEF work is the tendency to oversimplify the interpretation of results – to conclude that 'loss of diversity is bad' with less attention to 'why', 'where', or 'when'. The best way to address this depends on your view of how science should progress.

Wardle, D. A. (2016), Do experiments exploring plant diversity–ecosystem functioning relationships inform how biodiversity loss impacts natural ecosystems?. Journal of Vegetation Science, 27: 646–653. doi: 10.1111/jvs.12399

Eisenhauer, N., Barnes, A. D., Cesarz, S., Craven, D., Ferlian, O., Gottschall, F., Hines, J., Sendek, A., Siebert, J., Thakur, M. P., Türke, M. (2016), Biodiversity–ecosystem function experiments reveal the mechanisms underlying the consequences of biodiversity change in real world ecosystems. Journal of Vegetation Science. doi: 10.1111/jvs.12435

Additional References:
Vellend, Mark, et al. "Global meta-analysis reveals no net change in local-scale plant biodiversity over time." Proceedings of the National Academy of Sciences 110.48 (2013): 19456-19459.

Dornelas, Maria, et al. "Assemblage time series reveal biodiversity change but not systematic loss." Science 344.6181 (2014): 296-299.

Gonzalez, Andrew, et al. "Estimating local biodiversity change: a critique of papers claiming no net loss of local diversity." Ecology (2016).

Tilman, David, and John A. Downing. "Biodiversity and stability in grasslands." Ecosystem Management. Springer New York, 1996. 3-7.

Cardinale, Bradley J., et al. "Biodiversity loss and its impact on humanity." Nature 486.7401 (2012): 59-67.

Tuesday, June 14, 2016

Rebuttal papers don’t work, or citation practices are flawed?

Brian McGill posted an interesting follow-up to Marc's question about whether journals should allow post-publication review in the form of responses to published papers. I don't know that I have any more clarity on the answer to that question after reading both (excellent) posts. Being idealistic, I think that when there are clear errors, they should be corrected, and that editors should be invested in identifying and correcting problems in papers in their journals. Based on the discussions I've had with co-authors about a response paper we're working on, I'd also like to believe that rebuttals can produce useful conversations, and ultimately be illuminating for a field. But, pragmatically, Brian McGill pointed out that rebuttals rarely seem to make an impact (citing Banobi et al. 2011). In many cases this was because citations of the flawed papers continued, and "were either rather naive or the paper was being cited in a rather generic way".

Citations are possibly the most human part of writing scientific articles. Citations form a network of connections between research and ideas, and are the written record of progress in science. But they're also one of the clearest points at which biases, laziness, personal relationships (both friendships and feuds), taxonomic biases, and subfield myopia are apparent. So why don't we focus on improving citation practices? 

Ignoring the more extreme problems (coercive citations, citation fraud, how to cite supplementary materials, data and software), we have to acknowledge that as the literature grows more rapidly and pressure to publish increases, it is increasingly difficult to know the literature thoroughly enough to cite broadly. A couple of studies found that 60-70% of citations were scored as accurate (Todd et al. 2007; Teixeira et al. 2013); whether you see that as too low or pretty high depends on your personality. Key problems were the tendency to cite 'lazily' (citing reviews or synthetic pieces rather than delving into the literature within) and 'naively' (citing high-profile pieces in an offhand way without considering rebuttals and follow-ups – a key point of the Banobi et al. piece). At least one limited analysis (Drake et al. 2013) showed that citations tended to be much more accurate in higher impact factor journals (IF > 5), perhaps (speculating) due to better peer review or copy editing.

Todd et al. (2007) suggest that journals institute random audits of citations to ensure authors take greater care. This may be a good idea, but one that is difficult to institute in journals where peer reviewers are already in short supply. It may also be useful to treat rebuttal papers as part of the total communication surrounding a paper: the full text would include them, they would be automatically downloaded with the PDF, and there would be a tab (alongside author information, supplementary material, references, etc.) for responses.

More generally – why don't we learn how to cite well as students? A quick Google search shows that the vast majority of advice on citation practices concerns avoiding plagiarism and stylistic matters. Some of it is philosophical, but I have never heard a deep discussion of questions like 'What's an appropriate number of citations for an idea? For a manuscript?'; 'How deep do I cite? (Do I need to go back to Darwin?)'. It would be great if there were a consensus publication on best practices in citation, of the sort the BES is so good at producing.

Which is to say that I still hope rebuttals can work and be valuable.