Thursday, October 30, 2014

Deconstructing creationist "scientists"

I’ve been fascinated by creationism since I first moved to Tennessee, home of the Scopes “monkey” trial, over twelve years ago. And though I’ve been away from Tennessee for about seven years now, creationism still fascinates me. I find it interesting not because creationists’ arguments are persuasive or scientifically credible (they’re absolutely not), but rather as a social, or maybe psychological, phenomenon. Why, in the light of so much compelling evidence, do otherwise intelligent people hold on to something that contradicts the record of life that surrounds us? I’m a biologist because I find the tapestry of life full of wonder and richness, with an amazing story to tell.

But what fascinates me most of all are trained scientists, holding legitimate PhDs, who take up the cause of creationism. This is interesting from two angles: first, the ‘scientists’ themselves (more on them later), and second, the organizations that support and fund their operations. Creationist organizations readily adopt and promote these scientists-turned-creationists, even though they routinely belittle and try to undermine working scientists. It’s like the Republican party dismissing the Hollywood elite as not real Americans while proudly flaunting Chuck Norris or Clint Eastwood. When PhDs are on the side of creationism, they are great scholars with meaningful expertise; when they are against creationism (as are 99% of working scientists), they are elitists and part of a conspiracy.

Enter the latest parade of creationist scientists, whose authority is meant to persuade the public, at an ‘Origin Summit’ at Michigan State University in a few days. The first thing you see are four bespectacled PhDs, granted authority by the fact that they are PhD ‘scientists’. They are: Gerald Bergman, Donald DeYoung, Charles Jackson, and John Sanford. But, unfortunately for them, not all scientists are created equal.




What makes a scientist? That is not easily answered, but education is one element, and having a PhD from a recognized program and university is a good start. But being trained is not enough; there needs to be some sort of evaluation by the broader scientific community. First and foremost, a scientist needs to communicate their research findings to other scientists by publishing papers in PEER-REVIEWED academic publications. Peer-reviewed means that experts on the topic will examine your paper closely, especially the experimental design and analysis, and provide criticisms. All papers are criticized at this stage, but those with especially egregious problems will not be published. Scientists are also evaluated by other scientists when applying for research funds, being considered for promotion (for example, your record and papers are typically sent to 5-8 scientists so they can evaluate the meaningfulness of your contributions), or being considered for scientific awards.

Table 1: How to know that you are doing science.

So then, the ability to publish and survive scrutiny is paramount to being a successful scientist. Of course, someone who subscribes to science-as-conspiracy will say: “wait, then scientists control who gets to be a scientist, and so those with new or controversial ideas will be kept out of the club”. The next thing to understand, then, is what makes a scientist “famous” within the scientific community. The most famous scientists of all time are those who overturned scientific orthodoxy: the trailblazers who came up with better explanations of nature. Many scientists appreciate new ideas and new theories, but work on them has to be scientifically robust in terms of methodology and analysis.

Now back to our Origin Summit scientists: how do they compare to normal expectations for a successful scientist? We will use the average expectations for an academic scientist to get tenure as our benchmark (Table 1). First, Gerald Bergman, biologist. He has a staggering number of degrees, some from legitimate institutions (e.g., Wayne State University), and some from unaccredited places with dubious legal standing (e.g., Columbia Pacific University). He had a real faculty position at Bowling Green State University but was denied tenure in 1979. He claims that he was fired because of his anti-evolution religious beliefs (his claim, which to me says his creationism cannot be science). He went to court, and, long story short, he lost because he misrepresented his PhD to get the job in the first place. More important to our story here: what was his record? Fortunately for us, scientific publications, like the fossil record, accurately reflect historical events. Looking through scholarly search engines for the period 1976-1980 (when he would have been making a case for tenure), I could only find one publication credited to G.R. Bergman, and it appears to be a published version of his dissertation on reducing recidivism among criminal offenders. Published theses are seldom peer reviewed, and this is certainly not biology. He does not meet our basic expectations for the scientific authority he is promoted as.



Next is Donald DeYoung, astronomer. He is a professor in the Department of Science and Mathematics at Grace College, a Christian post-secondary institution. It has some accreditation, especially for some programs such as counselling and business. It’s not fully accredited, but it seems to be a legitimate Christian school. I searched for legitimate peer-reviewed publications, which was tricky because there also exists another D. B. DeYoung on the math/astronomy side of the business. If we ignore his non-peer-reviewed books, there may be only one legitimate publication, from 1975 in the Journal of Chemical Physics, looking at a particular iron isotope, and nothing to do with the age of the Earth or evolution. One paper, so he does not meet our expectations.

Third is Charles Jackson, with a PhD in education. There is nothing meaningful on this guy to suggest he is a scientist by any stretch of the imagination. Next.

Finally, we have John Sanford, a geneticist. Now we are getting somewhere! How can a person who studies the basic building blocks of life deny their role in shaping it? He is a plant breeder and worked at an experimental agriculture station associated with Cornell University. I found about a dozen real papers published in scientific journals from his pre-tenure time. None are actually on evolution; they seem to be largely about pollen fertilization and transfer, and crop production. His publications definitely changed later in his tenure, from basic plant breeding to creationist works. Most interestingly, he has a paper on a computer simulator called Mendel’s Accountant, published in 2007, that simulates genetic mutation and population fitness, the basic stuff of evolution, but which can presumably be used to support his theories about mutations causing ‘devolution’ rather than fueling real evolution. I read the paper. The genetic theory underpinning it is not in line with modern theory, and this is further evidenced by the scant referencing of the rich genetics literature. Most of the models and assumptions seem to be made de novo, to suit the simulation platform, instead of the simulator fitting what is actually understood about genetic mechanisms. I assume this is why the paper is published not in a genetics journal but in a computer science one, and one that is not listed in the main scientific indexing services (often how we judge a journal to be legitimate). Regardless of the scientific specifics, Sanford is a legitimate scientist, and he is the one person I would love to ask deep questions about his understanding of the material he talks about.
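For readers unfamiliar with this kind of software: at its core, a mutation-fitness simulator is conceptually simple. Here is a minimal sketch in Python, and emphatically not Sanford's model; the population size, mutation rate, and selection coefficient are arbitrary illustrative values. Individuals accumulate deleterious mutations, and selection weights reproduction by fitness:

```python
import math
import random

def poisson(rng, lam):
    """Sample from a Poisson distribution (Knuth's method, fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate(pop_size=200, generations=100, mu=1.0, s=0.01, seed=42):
    """Toy Wright-Fisher simulation with deleterious mutations.

    Each individual is just a count of mutations; fitness = (1 - s) ** count.
    Parents are sampled in proportion to fitness, and each offspring gains
    Poisson(mu) new mutations. Returns mean fitness per generation.
    """
    rng = random.Random(seed)
    pop = [0] * pop_size                     # mutation counts per individual
    mean_fitness = []
    for _ in range(generations):
        weights = [(1 - s) ** m for m in pop]
        pop = [rng.choices(pop, weights)[0] + poisson(rng, mu)
               for _ in range(pop_size)]
        mean_fitness.append(sum((1 - s) ** m for m in pop) / pop_size)
    return mean_fitness

fitness_trajectory = simulate()
```

Whether mean fitness declines in a model like this depends entirely on the assumed distribution of mutational effects (all deleterious, in this toy) and on population size; that is exactly the kind of assumption that has to match real genetics for a simulator's conclusions to mean anything.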

The one thing to remember is that a PhD does not make one an expert in everything. I have a PhD in ecology and evolution, but I am not competent in basic physiology, for example, and would not (and should not) present myself as an authority to a broader public who may not know the difference between phylogeny and physiology.


So, at the end of the day, here is another creationist conference with a panel of scientific experts. One of the four actually deserves to be called that, and even then, he is likely to be talking about material he has not actually published on or researched. There is a reason why creationist organizations have a tough time getting real scientists on board and are instead relegated to mostly failed hacks: there isn’t a scientific underpinning to creationist claims.

Monday, October 27, 2014

Making multi-authored papers work

Collaborative writing is almost unavoidable for ecologists; single-authored papers are practically a novelty these days, given the dominance of data-sharing, multidisciplinary projects, and large-scale experiments. And frankly, despite the inevitable frustrations of dealing with co-authors, collaborative writing tends to make a manuscript better. Co-authors help prevent things from getting too comfortable: too reliant on favourite references, myopic arguments, or slightly inaccurate definitions.

The easiest collaborative writing, I think, involves small numbers of authors. Writing with large groups of people – and for me that’s probably anything over 5 – has unique difficulties and challenges. Collaborative writing with large groups has two types of challenges: first, the problems innate in attempting to find consensus from many competing opinions; second, the logistical constraints and challenges that arise with having many authors attempting to contribute to a single manuscript.

I’ve recently been lead author/wrangler on a manuscript with 15 authors. It seems to be turning out really well, mostly because all of the authors are interested and invested: all 15 have made significant contributions to the text. I’m by no means an expert on the topic of large collaborations, but I wanted to share some of the things I learned (or wish I had known to start with). All of this assumes that the writing process is indeed collaborative; if it is actually one or two main authors and a bunch of non-writing authors this may be much simpler (if prone to its own set of frustrations).

Process: It’s important to determine how things are going to be done early on, and to keep everyone updated on how that process is going. If parts of the manuscript will be split up, or if certain figures and analyses will be done by particular people, that should be determined early on and reasonable timelines agreed on. Whoever is managing or leading should keep in touch with all of the authors with updates and timelines, so the project doesn’t fall off the radar. Some thought should really go into what software you will be using, since once you’ve committed it’s difficult to switch. A lot of the time, frankly, you’re limited by the lowest common denominator: programs need to be broadly available (usually free, or else very common) and not require a higher level of technical skill than most authors are comfortable with. This is the downside of using LaTeX or GitHub, for example. It’s easier (better?) to use an inferior program than to have half of the authors struggle with the learning curve on the program you chose. For that reason, programs that centrally host files, papers, and analyses, like Dropbox, Google Drive, or folders hosted on a private server, are popular. As with every part of this process: version control, version control, version control. GitHub is the most common choice for version control of software code. Dropbox allows you to revert to older versions of files, but with limits (unless you’re paying for the pro version, I think).

The more people that are involved, the more variation to expect from your plans: deadlines will be missed, key people will be on holidays, and not everyone will feel the same level of urgency. Note: if you give 10 academics a deadline, 1 person will be early, 7 will finish in the final hours before the deadline, and the rest will want an extension. Consider having explicit deadlines for important milestones, but assume you’ll need to provide some flexibility.

Editing and revising: In the best case scenario, writing with a large number of people is like having an extensive peer review before the paper ever gets published. If you can satisfy each of these experts, the chances of the manuscript making it through peer review unscathed are much higher.

When sending a draft out for edits and revisions from multiple authors it may be helpful to be clear on what you are hoping for from this revision. What should the other authors focus on? Scientific merit, appropriate references, clarity and structure, and/or grammar and style? It may be that any or all opinions are welcome, but getting edits of prose tense or “which” vs. “that” may not be helpful on an early draft.

I’m not sure there is really a perfect program for collaborative writing/editing that fits the ‘lowest common denominator’ requirement. Optimally, a program would be free or very common, require little in the way of installation, and allow real-time co-authoring, commenting, version control, and easy import and export. Problems with compatibility between different operating systems, for example, can seem minor with a single user but turn into a nightmare when a document is being opened across many different systems and versions. For smaller papers, many academics simply email a copy of the manuscript (often as MS Word or a PDF) around to the authors, and that’s workable for 3 or 4 sets of comments. But dealing with 15 conflicted copies of a manuscript sounds like hell. Using Google Docs/Google Drive was the compromise choice, and it mostly fulfilled our needs, with some irritations. The benefits include that Google Docs now has different editing modes: 'editing', 'suggesting', and 'viewing'. Only 'editing' allows direct changes to be made to the text. The 'suggesting' mode is more like ‘track changes’ in MS Word, and allows co-authors to comment on, add, or delete text in such a way that the main author can later choose to accept or reject each suggestion. The biggest benefit of Google Docs is that co-authors can edit at the same time, in real time, so the comments tended to be very conversational, with each co-author able to respond to other co-authors’ suggestions. This really helps identify where there is consensus or differing opinions among authors. A particular downside was that some authors prefer being able to edit offline, or in general to follow the process they are most comfortable with. It also seems that restructuring a manuscript is more difficult in a shared document, where others might disagree, than on a personal copy.

If a few authors dislike collaborative editing, you will still end up with a few conflicting copies, no matter how hard you try to avoid them. There are probably better ways, although I haven’t figured them out yet, and I hope someone will comment. For users of LaTeX, there is an online collaborative program, writeLaTeX, that might be useful. Also, though I’ve never tried it, penflip looks pretty promising as an alternative to Google Docs.

No matter what program you use, you’ll end up with many comments and edits, and often conflicting opinions. I think it’s usually best to defer to the subject matter expert: if a co-author wrote the seminal paper on the topic, consider what they say. That said, without a strong vision, many-authored papers can be unfocused, and trying to make everyone happy almost certainly will make no one happy. After taking into consideration all the comments and expert opinions, in the end the main author has the power :)

Postscript - Authorship order/inclusion/exclusion is always difficult when so many people are involved. Some advice here; also NutNet has some rather well thought out authorship guidelines.

Wednesday, October 15, 2014

Putting invasions into context

How can we better predict invasions?

Azzurro, E., Tuset, V. M., Lombarte, A., Maynou, F., Simberloff, D., Rodríguez-Pérez, A. and Solé, R. V. (2014), External morphology explains the success of biological invasions. Ecology Letters, 17: 1455-1463.

Fridley, J. D. and Sax, D. F. (2014), The imbalance of nature: revisiting a Darwinian framework for invasion biology. Global Ecology and Biogeography, 23: 1157–1166. doi: 10.1111/geb.12221

Active research programs in invasion biology have been ongoing since the 1990s, but their results make clear that while it is sometimes possible to explain invasions post hoc, it is very difficult to predict them. Darwin’s naturalization hypothesis gets so much press in part because it was the first statement of the common acknowledgement that the struggle for existence should be strongest amongst closely related species, implying that ‘invasive species must somehow be different than native species to be so successful’. Defining more generally what this means for invasive species in terms of niche space, trait space, or evolutionary history has had at best mixed results.

A couple of recent papers come to similar, though distinct, conclusions: predicting invasion success is really about recognizing context. For example, Azzurro et al. point out that despite the usual assumption that species’ traits reflect their niches, trait approaches to invasion that focus on identifying traits associated with invasiveness have not been successful. Certainly invasive species may be more likely to show certain traits, but these are often very weak from a predictive standpoint, since many non-invasive species also have these traits. Morphological approaches may still be useful, but the authors argue that the key is to consider the morphological (trait) space of the invaders in the context of the morphological space used by the resident communities.
Figure 1. From Azzurro et al. 2014. A resident community uses morphospace as delimited by the polygon in (b). Invasive species may fill morphospace within the area already occupied by the community (c or d) or may use novel morphospace (e). Invasiveness should be greatest in situation (e).
The authors use as an illustration the largest known invasion by fish: the invasion of the Mediterranean Sea following the construction of the Suez Canal, an event known as the ‘Lessepsian migration’. They hypothesize that a new species entering a community that fills some defined morphospace will face one of three scenarios (Figure 1): 1) it will fall within the existing morphospace and occupy less morphospace than its closest neighbour; 2) it will fall within the existing morphospace but occupy more morphospace than its closest neighbour; or 3) it will occupy novel morphospace compared to the existing community. The prediction is that invasion success should be highest for this third group, for whom competition should be weakest. Their early results are encouraging, if not perfect: 73% of species located outside of the resident morphospace became abundant or dominant in the invaded range (Figure 2).
Figure 2. From Azzurro et al. 2014. Invasion success of fish entering the Mediterranean Sea in relation to morphospace, over multiple historical periods. Invasive (red) species tended to occupy novel morphospace compared to the resident community.
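Operationally, this kind of classification boils down to a geometric test: does an invader's trait combination fall inside the region of morphospace spanned by the residents? Below is a minimal two-dimensional sketch of that test using a convex hull. This is my own illustration, not the authors' actual analysis, and the "residents" coordinates are invented points on two hypothetical morphological axes:

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means B is left of OA."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(point, hull):
    """True if point lies inside or on a counter-clockwise hull polygon."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], point) >= 0 for i in range(n))

# invented example: residents' positions on two morphological axes
residents = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
hull = convex_hull(residents)
novel = not inside_hull((3, 3), hull)   # invader outside resident morphospace
```

In practice the morphospace has more than two dimensions and the hull is built on ordination axes, but the logic is the same: invaders flagged as "novel" are the ones predicted to escape competition.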
A slightly different approach to invasion context comes from Jason Fridley and Dov Sax, who revisit invasion in terms of evolution with the Evolutionary Imbalance Hypothesis (EIH). In the EIH, the context for invasion success is the characteristics of the invaders' home range. If, as Darwin postulated, invasion success is simply the natural expectation of natural selection, then considering the context for natural selection may be informative.

In particular, the postulates of the EIH are that 1) Evolution is contingent and imperfect, thus species are subject to the constraints of their histories; 2) The degree to which species are ecologically optimized increases as the number of ‘evolutionary experiments’ increases, and with the intensity of competition (“Richer biotas of more potential competitors and those that have experienced a similar set of environmental conditions for a longer period should be more likely to have produced better environmental solutions (adaptations) to any given environmental challenge”); and 3) Similar sets of ecological conditions exist around the world. When species from these separately evolved biotas are mixed, some will have higher fitness and may become invasive.

Figure 3. From Fridley and Sax, 2014.
How can this rather conversational set of tenets be applied to actual invasion research? A few factors can be considered when quantifying the likelihood of invasion success: “the amount of genetic variation within populations; the amount of time a population or genetic lineage has experienced a given set of environmental conditions; and the intensity of the competitive environment experienced by the population.” In particular, the authors suggest using phylogenetic diversity (PD) as a measure of the evolutionary imbalance between regions. They show for several regions that the maximum PD in a home region is a significant predictor of the likelihood of species from that region becoming invasive. The obvious issue with using max PD as a predictor is that it is a somewhat imprecise proxy for “evolutionary imbalance”, and one that correlates with many other things (often including species richness). Still, the application of evolutionary biology to a problem often considered to be primarily ecological may make for important advances.
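As a reminder of the metric involved: Faith's PD is simply the total branch length of the phylogeny connecting a set of species. A minimal sketch, where the tree structure and branch lengths are made up purely for illustration:

```python
def faith_pd(parents, tips):
    """Faith's phylogenetic diversity: total branch length spanned by a set
    of tips. `parents` maps node -> (parent, branch_length); the root does
    not appear as a key."""
    visited = set()
    pd = 0.0
    for tip in tips:
        node = tip
        # climb toward the root, stopping once we hit an already-counted branch
        while node in parents and node not in visited:
            visited.add(node)
            parent, branch_length = parents[node]
            pd += branch_length
            node = parent
    return pd

# made-up tree: root -(1.0)- A; A -(2.0)- t1; A -(3.0)- t2; root -(6.0)- t3
tree = {
    "A": ("root", 1.0),
    "t1": ("A", 2.0),
    "t2": ("A", 3.0),
    "t3": ("root", 6.0),
}
region_pd = faith_pd(tree, ["t1", "t2"])   # 2 + 3 + 1 = 6.0
```

Max PD for a region would be computed over the region's full species pool; the EIH expectation is that pools spanning more evolutionary history export more invaders.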
Figure 4. From Fridley and Sax 2014. Likelihood of becoming invasive vs. max PD in the species' native region.

Monday, October 6, 2014

What is ecology’s billion dollar brain?

(*The topic of the billion dollar proposal came up with Florian Hartig (@florianhartig), with whom I had an interesting conversation on the idea*)

Last year, the European Commission awarded 1 billion dollars to a hugely ambitious project to recreate the human brain using supercomputers. If successful, the Human Brain Project would revolutionize neuroscience. (Although skepticism remains as to whether this project is more of a pipe dream than a reasonable goal.) For ecology and evolution, where infrastructure costs are relatively low (compared to, say, a Large Hadron Collider), 1 billion dollars means that there is essentially no financial limitation on your proposal: nearly any project, experiment, analysis, dataset, or workforce is within the realm of possibility. The European Commission call was for a proposal for research to occur over 10 years, meaning that the constraints on project length (usually driven by grant terms and graduate student theses) are low. So if you could write a proposal upon which there are essentially no constraints at all, what would it be for? (*If you think that 10 years is too limiting for a proper long-term study, feel free to assume you can set up the infrastructure in 10 years and run it for as long as you want.)

The first thing I recognized was that in proposing the 'ultimate' ecological project, you're implicitly stating how you think ecology should be done. For example, you could focus on the most general questions and start from the bottom. If so, it might be most effective to ask a single fundamental question. It would not be unreasonable to propose to measure metabolic rates under standardized conditions for every extant species, and to develop a database of parameter values for them. This would be the most complete ecological database ever, and that certainly seems like an achievement.

But perhaps you choose something that is still of general importance but less simplistic, and run a standardized experiment in multiple systems. This approach has been effective for the NutNet project. Propose to run replicated experiments with top-of-the-line warming arrays on plant communities in every major ecosystem. Done for 10 years, over a reasonably large scale, with data recorded on physiology and important life history events, this might provide some ability to predict how warming temperatures are affecting ecosystems.

The alternative is to embrace ecological complexity (and the ability to deal with complexity that 1 billion dollars offers). Given the analytic power, equipment, and person-hours that 1 billion dollars can buy, you could record every single variable (biotic, abiotic, weather) in a particular system (say, a wetland) for every second of every day. If you don't simply drown in the data you've gathered, maybe you can reconstruct that wetland and predict its every property from the details. While that may seem a bit extreme, if you are a complexity fatalist, you start to recognize that even the general experiments are quickly muddied by complexity. Even that simple, general list of species' metabolic parameters quickly spirals into complexity. Does it make sense to use only one set of standardized conditions? After all, conditions that are reasonable for a rainforest tree are meaningless for an ocean shark or a tundra shrub. Do you use the mean condition for each ecosystem as the standard, knowing that species may only interact with the variance or extremes of those conditions (such as desert annuals that bloom after rains, or bacteria that use cyst stages to avoid harsh environments)? What about ontogenetic or plastic differences? Intraspecific differences?

It's probably best, then, to realize that there is no perfect ecological experiment. The interesting thing about the Human Brain Project is that neuroscience is more like ecology than many scientific fields: it deals with complex organic systems with emergent properties and great variability. What ecology needs, ever so simplistically, is more data and better models. Maybe, like neuroscience, we should request a supercomputer that could locate and incorporate all ecological data ever collected, across fields (natural history, forestry, agronomy, etc.), and recognize the connections between those data based on geography, species, or scale. This could give us the most sophisticated possible data map, showing where data gaps exist and which areas are data-rich and ready for model development. Further, it could (like the Human Brain Project) begin to develop models for the interconnections between data.

Without too many billion dollar calls going on, this is only a thought experiment, but I have yet to find someone who had an easy answer for what they would propose to do (ecologically) with 1 billion dollars. Why is it so difficult?

Monday, September 15, 2014

Links: Reanalyzing R-squares, NSF pre-proposals, and the difficulties of academia for parents

First, Will Pearse has done a great job of looking at the data behind the recent paper on declining r2 and p-values in ecology, and his reanalysis suggests that there is a much weaker relationship between r2 values and time (only 4%, rather than the 62% reported). Because the variance is both very large within years and not equal through time, a linear model may not be ideal for capturing this relationship.
Thanks @prairiestopatchreefs for linking this.
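For intuition on why unequal variance matters: when each year contributes a different number (and spread) of r2 values, an ordinary unweighted regression can be dominated by a few noisy years. Down-weighting those years is one standard remedy. A toy sketch, with invented data rather than anything from the actual reanalysis:

```python
def weighted_linefit(x, y, w):
    """Weighted least-squares fit of y = a + b*x.
    Weights would typically be 1/variance of each point, e.g. each year's
    mean r-squared weighted by how variable that year's values were."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

# invented example: a flat trend plus one noisy, poorly sampled early year
years = [1990, 1995, 2000, 2005, 2010]
r2 = [0.9, 0.5, 0.5, 0.5, 0.5]        # the 1990 point is an outlier
weights = [0.1, 1.0, 1.0, 1.0, 1.0]   # low weight = high within-year variance
a, b = weighted_linefit(years, r2, weights)
```

With equal weights the outlier produces a steep apparent decline; with the noisy year down-weighted, the fitted slope shrinks toward the (flat) truth, which is the qualitative point of the reanalysis.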

From the Sociobiology blog, something that most US ecologists would probably agree on: the NSF pre-proposal program has been around long enough (~3 years) to judge on its merits, and it has not been an improvement. In short, pre-proposals use a 5-page proposal to allow NSF to identify the best ideas and then invite those researchers to submit a full proposal similar to the traditional application. Joan Strassmann argues that not only is this program more work for applicants (you must write two very different proposals in short order if you are lucky enough to advance), it offers very few benefits for them.

The reasons for the gender gap in STEM academic careers get a lot of attention, and rightly so given the continuing underrepresentation of women. The demands of parenthood often receive some of the blame. The Washington Post is reporting on a study that considers parenthood from the perspective of male academics. The study took an interview-based, sociological approach, and found that the "majority of tenured full professors [interviewed] ... have either a full-time spouse at home who handles all caregiving and home duties, or a spouse with a part-time or secondary career who takes primary responsibility for the home." But the majority of these men also said they wanted to be more involved at home. As one author said, “Academic science doesn’t just have a gender problem, but a family problem...men or women, if they want to have families, are likely to face significant challenges.”

On a lighter note, if you've ever joked about PNAS' name, a "satirical journal" has taken that joke and run with it. PNIS (Proceedings of the Natural Institute of Science) looks like the work of bored post-docs, which isn't necessarily a bad thing. The journal has immediately split into two subjournals: PNIS-HARD (Honest and Real Data) and PNIS-SOFD (Satirical or Fake Data), which have rather interesting readership projections:


Friday, September 12, 2014

Do green roofs enhance urban conservation?

Green roofs are now commonly included in the design of new public and private infrastructure, bolstered by energy savings, environmental recognition and certification, bylaw compliance, and in some cases tax or other direct monetary incentives (e.g., here). While green roofs clearly provide local environmental benefits, such as reduced albedo (sunlight reflectance), storm water retention, CO2 sequestration, etc., green roof proponents also frequently cite biodiversity and conservation enhancement as a benefit. This last claim has not been broadly tested, but existing data were assessed by Nicholas Williams and colleagues in a recent article published in the Journal of Applied Ecology.

Williams and colleagues compiled all available literature on biodiversity and conservation value of green roofs and they explicitly tested six hypotheses: 1) Green roofs support higher diversity and abundance compared to traditional roofs; 2) Green roofs support comparable diversity and composition to ground habitat; 3) Green roofs using native species support greater diversity than traditional green roofs; 4) Green roofs aid in rare species conservation; 5) Green roofs replicate natural communities; and 6) Green roofs facilitate organism movement through urban areas.

Photo by: Marc Cadotte


What is surprising is that, given the abundance of papers on green roofs in ecology and environmental journals, very few quantitatively assess any of these hypotheses. What is clear is that green roofs support greater diversity and abundance than non-green roofs, but we know very little about how green roofs compare to remnant urban habitats in terms of species diversity, ecological processes, or rare species. Further, while some regions are starting to require that green roofs try to maximize native biodiversity, there are relatively few comparisons; those that exist reveal substantial benefits for biodiverse green roofs.

How well green roofs replicate ground or natural communities is an important question, with insufficient evidence. It is important because, according to the authors, there is some movement to use green roofs to offset lost habitat elsewhere. This could represent an important policy shift, and one that may ultimately lead to lost habitats being replaced with lower quality ones. This is a policy direction that simply requires more science.

There is some evidence that green roofs, if designed correctly, could aid in rare species conservation. However, green roofs, which by definition are small patches in an inhospitable environment, may assist rare species management in only a few cases. The authors caution that enthusiasm for using green roofs to assist with rare species management needs to be tempered by designs that are biologically and ecologically meaningful to target species. They cite an example where green roofs in San Francisco were designed with a plant that is an important food source for an endangered butterfly, Bay Checkerspot, which currently persists in a few fragmented populations. The problem was that the maximum dispersal distance of the butterfly is about 5 km, and there are no populations within 15 km of the city. These green roofs have the potential to aid in rare species conservation, but it needs to be coupled with additional management activities, such as physically introducing the butterfly to the green roofs.

Overall, green roofs do provide important environmental and ecological benefits in urban settings. Currently, very few studies document the ways in which green roofs provide ecological processes and services, enhance biodiversity, replicate other ground-level habitats, or aid in biodiversity conservation. As the prevalence of green roofs increases, we will need a scientifically valid ecological understanding of green roof benefits to better engage with municipal managers and affect policy.

Williams, N., Lundholm, J., & MacIvor, J. (2014). Do green roofs help urban biodiversity conservation? Journal of Applied Ecology DOI: 10.1111/1365-2664.12333

Monday, September 8, 2014

Edicts for peer reviewing

Reviewing is a rite of passage for many academics. But for most graduate students or postdocs, it is also a bit of a trial by fire, since reviewing skills are usually assumed to be gained osmotically, rather than through any specific training. Unfortunately, the reviewing system seems ever more complicated for reviewers and authors alike (slow, poor quality, unpredictable). Concerns about modern reviewing pop up every few months, along with different solutions to the difficulties of finding qualified reviewers and improving the quality of modern reviews (including publishing an instructional guide, taking alternative approaches (PeerJ, etc.), or skipping peer review altogether (arXiv)). Still, in the absence of a systematic overhaul of the peer review system, an opinion piece in The Scientist by Matthew A. Mulvey and Dean Tantin provides a rather useful guide for new reviewers and a useful reminder for experienced reviewers. If you are going to do a review (and you should, if you are publishing papers), you should do it well.
From "An Ecclesiastical Approach to Peer Review" 
"The Golden Rule
Be civil and polite in all your dealings with authors, other reviewers, editors, and so on, even if it is never reciprocated.
As a publishing scientist, you will note that most reviewers break at least a few of the rules that follow. Sometimes that is OK—as reviewers often fail to note, there is more than one way to skin a cat. As an author you will at times feel frustrated by reviews that come across as unnecessarily harsh, nitpicky, or flat-out wrong. Despite the temptation, as a reviewer, never take your frustrations out on others. We call it the “scientific community” for a reason. There is always a chance that you will be rewarded in the long run. 
The Cardinal Rule
If you had to publish your review, would you be comfortable doing so? What if you had to sign it? If the answer to either question is no, start over. (That said, do not make editorial decisions in the written comments to the authors. The decision on suitability is the editors’, not yours. Your task is to provide a balanced assessment of the work in question.) 
The Seven Deadly Sins of sub-par reviews
  1. Laundry lists of things the reviewer would have liked to see, but have little bearing on the conclusions.
  2. Itemizations of styles or approaches the reviewer would have used if they were the author.
  3. Direct statements of suitability for publication in Journal X (leave that to the editor).
  4. Vague criticism without specifics as to what, exactly, is being recommended. Specific points are important—especially if the manuscript is rejected.
  5. Unclear recommendations, with little sense of priority (what must be done, what would be nice to have but is not required, and what is just a matter of curiosity).
  6. Haphazard, grammatically poor writing. This suggests that the reviewer hasn’t bothered to put in much effort.
  7. Belligerent or dismissive language. This suggests a hidden agenda. (Back to The Golden Rule: do not abuse the single-blind peer review system in order to exact revenge or waylay a competitor.) 
Vow silence
The information you read is confidential. Don’t mention it in public forums. The consequences to the authors are dire if someone you inform uses the information to gain a competitive advantage in their research. Obviously, don’t use the findings to further your own work (once published, however, they are fair game). Never contact the authors directly.
Be timely
Unless otherwise stated, provide a review within three weeks of receiving a manuscript. This old standard has been eroded in recent years, but nevertheless you should try to stick to this deadline if possible. 
Be thorough
Read the manuscript thoroughly. Conduct any necessary background research. Remember that you have someone’s fate in your hands, so it is not OK to skip over something without attempting to understand it completely. Even if the paper is terrible and in your view has no hope of acceptance, it is your professional duty to develop a complete and constructive review.
Be honest
If there is a technique employed that is beyond your area of expertise, do the best you can, and state to the editor (or in some cases, in your review) that although outside your area, the data look convincing (or if not, explain why). The editor will know to rely more on the other reviewers for this specific item. If the editor has done his or her job correctly, at least one of the other reviewers will have the needed expertise.
Testify
Most manuscript reviews cover about a page or two. Begin writing by briefly summarizing the state of the field and the intended contribution of the study. Outline any major deficits, but refrain from indicating if you think they preclude publication. Keep in mind that most journals employ copy editors, so unless the language completely obstructs understanding, don’t bother criticizing the English. Go on to itemize any additional defects in the manuscript. Don’t just criticize: saying that X is a weakness is not the same as saying the authors should address weakness X by providing additional supporting data. Be clear and provide no loopholes. Keep in mind that you are not an author. No one should care how you would have done things differently in a perfect world. If you think it helpful, provide additional suggestions as minor comments—the editor will understand that the authors are not bound to them.
Judgment Day
Make a decision as to the suitability of the manuscript for the specific journal in question, keeping in mind their expectations. Is it acceptable in its current state? Would a reasonable number of experiments performed in a reasonable amount of time make it so, or not? Answering these questions will allow you to recommend acceptance, rejection, or major/minor revision. 
If the journal allows separate comments to the editor, here is the place to state that in your opinion they should accept and publish the paper as quickly as possible, or that the manuscript falls far below what would be expected for Journal X, or that Y must absolutely be completed to make the manuscript publishable, or that if Z is done you are willing to have it accepted without seeing it again. Good comments here can make the editor’s job easier. The availability of separate comments to the editor does not mean that you should provide only positive comments in the written review and reserve the negative ones for the editor. This approach can result in a rejected manuscript being returned to the authors with glowing reviewer comments. 
Resurrection
A second review is not the same as an initial review. There is rarely any good reason why you should not be able to turn it around in a few days—you are already familiar with the manuscript. Add no new issues—doing so would be the equivalent of tripping someone in a race during the home stretch. Determine whether the authors have adequately addressed your criticisms (and those of the other reviewers, if there was something you missed in the initial review that you think is vital). In some cases, data added to a revised manuscript may raise new questions or concerns, but ask yourself if they really matter before bringing them up in your review. Be willing to give a little if the authors have made reasonable accommodation. Make a decision: up or down. Relay it to the editor. 
Congratulations. You’ve now been baptized, confirmed, and anointed a professional manuscript reviewer."

Monday, August 25, 2014

Researching ecological research

Benjamin Haller. 2014. "Theoretical and Empirical Perspectives in Ecology and Evolution: A Survey". BioScience; doi:10.1093/biosci/biu131.

Etienne Low-Décarie, Corey Chivers, and Monica Granados. 2014. "Rising complexity and falling explanatory power in ecology". Front Ecol Environ 2014; doi:10.1890/130230.

A little navel-gazing is good for ecology. Although it may seem otherwise, ecology spends far less time evaluating its approach than it does simply doing research. Obviously we can't spend all of our time navel-gazing, but the field as a whole would benefit greatly from ongoing conversations about its strengths and weaknesses.

For example, consider the issue of theory vs. empirical research. Although this issue has received attention and arguments ad nauseam over the years (including here: 1, 2, 3), it never completely goes away. And even though there are arguments that it's not an issue anymore, that everyone recognizes the need for both, if you look closely the tension continues to exist in subtle ways. If you have participated in a mixed reading group, did the common complaint “do we have to read so many math-y papers?” ever arise, or equally, “do we have to read so many system-specific papers and just critique the methods?” Theory and empirical research don't see eye to eye as closely as we might want to believe.

The good news? Now there is some data. Ben Haller did a survey on this topic that just came out in BioScience. This paper does the probably necessary task of getting some real data, beyond the philosophical arguments, about the theory/data debate. Firstly, he defines empirical research as involving the gathering and analysis of real-world data, while theoretical research does not gather or analyze real-world data but instead involves mathematical models, numerical simulations, and other such work. The survey included 614 scientists from related ecology and evolutionary biology fields, representing a global (rather than just North American) perspective.

The conclusions are short, sweet and pretty interesting: "(1) Substantial mistrust and tension exists between theorists and empiricists, but despite this, (2) there is an almost universal desire among ecologists and evolutionary biologists for closer interactions between theoretical and empirical work; however, (3) institutions such as journals, funding agencies, and universities often hinder such increased interactions, which points to a need for institutional reforms."
 
For interpreting the plots – the empirical group represents respondents whose research is completely or primarily empirical; the theoretical group's research is mostly or completely related to theory, while the middle group does work that falls equally into both types. Maybe the results don't surprise anyone – scientists still read papers, collaborate, and coauthor papers mostly with others of the same group. What is surprising is that this trend is particularly strong for the empirical group. For example, nearly 80% of theorists have coauthored a paper with someone in the empirical group, while only 42% of empiricists have coauthored at least one paper with a theorist. Before we start throwing things at empiricists, it should be noted that this could reflect a relative scarcity of theoretical ecologists, rather than insularity on the part of the empiricists. However, it is interesting that while responses across all groups to the question “how should theory and empiricism coexist together?” agreed that “theoretical work and empirical work would coexist tightly, driving each other in a continuing feedback loop”, empirical scientists were significantly more likely to say “work would primarily be data-driven; theory would be developed in response to questions raised by empirical findings.”

Most important, and maybe most concerning, is that the survey found no real effect of age, stage, or gender – i.e. existing attitudes are deeply ingrained and show no sign of changing.

Why is it so important that we reconcile the theoretical/empirical issue? The paper “Rising complexity and falling explanatory power in ecology” offers a pretty compelling reason in its title. Ecological research is getting harder, and we need to marshal all the resources available to us to continue to progress.

The paper suggests that ecological research is experiencing falling mean R² values: values in published papers have fallen from above 0.75 prior to 1950 to below 0.5 in today's papers.
The worrying thing is that as a discipline progresses and improves, you might predict an improving ability to explain ecological phenomena. For comparison, criminology showed no decline in R² values as that field matured through time. Why don't we have that?

During the same period, however, it is notable that the average complexity of ecological studies also increased – the number of reported p-values is 10x larger on average today than in the early years (when usually only a single p-value, relating to a single question, was reported).
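As a refresher on the statistic at the heart of this argument (not taken from the paper itself), R² is the proportion of variance in the response explained by a fitted model. A minimal sketch, using made-up toy data and a simple one-predictor regression of the kind that dominated pre-1950 ecology:

```python
import numpy as np

# Toy data: a single predictor, as in many early ecological studies
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares fit of y = a*x + b
a, b = np.polyfit(x, y, 1)
y_hat = a * x + b

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))  # -> 0.998
```

A near-perfect fit like this is easy with one species, one site, one predictor; as models add predictors and span more scales, the variance left unexplained by any single relationship tends to grow, which is one way to read the declining-R² pattern.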

The fall in R² values and the rise in reported p-values could mean a number of things, some worse for ecology than others. The authors suggest that R² values may be declining as a result of exhaustion of “easy” questions (“low hanging fruit”), increased effort in experiments, or a change in publication bias, for example. The low hanging fruit hypothesis may have some merit – after all, studies from before the 1950s were mostly population biology with a focus on a single species in a single place over a single time period. Questions have grown increasingly more complex, involving assemblages of species over a greater range of spatial and temporal scales. For complex sciences, this fits a common pattern of diminishing returns: “For example, large planets, large mammals, and more stable elements were discovered first”.

In some ways, ecologists lack a clear definition of success. No one would argue that ecology is less effective now than it was in the 1920s, for example, and yet a simplistic measure of success (R²) might suggest that ecology is in decline. Any bias between theorists and empiricists is obviously misplaced, in that any definition of success for ecology will require both.

Thursday, August 14, 2014

#ESA2014 Day 4: Battle Empiricism vs Theory

You are our only hope!(?)
First off, the Theory vs. Empiricism Ignite session was a goldmine for quotes:

“In God we trust, all others bring data” (H. Edwards Deming)
“Models are our only hope” (Greg Dwyer)
“Nature represents a special part of parameter space” (Jay Stachowicz)

The Theory vs. Empiricism Ignite session was designed in response to an impromptu survey at ESA last year, which found that two-thirds of an audience did not believe that there are general laws in ecology. Speakers were asked to choose whether an empirical paper or a theoretical paper would be most important for ecology, and to defend their choice, perhaps creating some entertaining antagonism along the way.

There wasn't actually much antagonism to be had: participants were mostly conciliatory and hardly controversial. Despite this, the session was entertaining and also insightful, but perhaps not in the way I expected. First though, I should say that I think the conversation could have used some definitions of the terms (“theory”, “empiricism”). We throw these terms around a lot, but they mean different things to different people. What counts as theory to a field-based scientist may be considered no more than a rule of thumb or statistical model by a pure theoretician. Data from a microcosm might not count as experimental evidence to a fieldwork-oriented ecologist.

The short talks included examples and arguments as to how theoretical or empirical science is a necessary and valuable contributor to ecological discoveries. That was fine, but the subtext from a number of talks turned out to be more interesting. The tension, it seemed, was not about whether theory is useful or empiricism is valuable, but about which one is more important. Should theory or empiricism be the driver of ecological research? (Kudos to Fred Adler for the joke that theory wants to be a demanding queen ant with empiricists as the brainless order-following workers!) And funding should follow the most worthy work. Thus empiricists bemoan the lack of funding for natural history, while theoreticians argue that pure theory is even harder to get grants for. The question of which one should lead research was sadly mostly unanswered (and 5 minutes per person didn't offer much space for a deeper discussion). 

Of course there was the inevitable call for reconciliation of the two areas, for some way to overcome the arrogance and ignorance (to paraphrase Brad Cardinale) holding them apart. Or, perhaps all ecologists should be renaissance scientists who have mastered theory and empiricism equally. Hard to say. For me, it is wise to consider the example of ecological subfields that have found a balance and feedback between theory and data. Areas such as disease ecology and population biology incorporate models and experiments successfully, for example. Why do other fields like community ecology or conservation biology struggle so much more?

#ESA2014 - Day 3 bringing together theory and empiricism

I was tied up in a session all afternoon, so most of the interesting comments below are from Topher Weiss-Lehman, who caught what sounds like a pretty thought-provoking session on theory and conservation biology, with talks from Hugh Possingham and David Ackerly. This concept of bringing theory and empiricism together permeated a number of talks, including the session I moderated on using microbes in theoretical ecology and applying theory to microbial ecology (although at the moment, the distance between those things still feels large).

The most thought-provoking talk I saw was Peter Chesson's, on "Diversity maintenance: new concepts and theory for communities as multiple-scale entities". Chesson discussed his discomfort with how his coexistence theory is sometimes applied (I suppose that is the definition of success, that you see your ideas misused). His concerns align with those of many ecologists on the question of how to define and research an ecological community. Is the obsession with looking at 'local' communities limiting and misguided, particularly when paired with the ridiculous assumption that such communities are closed systems? Much like Ricklefs's well-known paper on defining a 'regional community', Chesson suggests we move to a multi-scale emphasis for community ecology.

Rather than calculating coexistence in a local community, Chesson argued that ecologists should begin to think about how coexistence mechanisms vary in strength across multiple spatial scales. For example, is frequency dependence more important at smaller or larger scales? He used a concept similar to Ricklefs's regional community, in which a larger extent encompasses a number of increasingly smaller-scale communities. The regional community likely includes environmental gradients, and species distributions that vary across them. Chesson presented some simulations based on a multi-scale model of species interactions to illustrate the potential of his multi-scale coexistence theory framework. The model appears to bring together Chesson's work on coexistence mechanisms -- including the importance of fitness differences (here with fitness calculated at each scale as the change in density over a time step) and stabilizing forces, and the invasion criterion (where coexistence has a signal of a positive growth rate from low density) -- and his scale-transition theory work. This is an obvious advance, and a sensible way of recognizing the scale-dependent nature of coexistence mechanisms. His approach allows ecologists to drop their obsession with defining some spatial area as "the community", and a regional community decreases the importance of the closed-system assumption. My one wish is that there were some discussion of how this concept fits with existing ideas about scale and communities in ecology. For example, how compatible are existing larger-scale approaches like macroecology/biogeography, and other theoretical paradigms like metacommunity theory, with this?
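For readers less familiar with the invasion criterion mentioned above, the standard statement from modern coexistence theory (my gloss, not taken from the talk) is that each species must be able to increase from low density while the other species remain at their typical abundances:

```latex
\bar{r}_i \;=\; \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T}
  \ln \frac{N_i(t+1)}{N_i(t)} \;>\; 0 ,
```

where \(\bar{r}_i\) is the long-term average low-density growth rate of invader \(i\) and \(N_i(t)\) its density at time \(t\). Coexistence is inferred when every species in the community satisfies this criterion as the invader; Chesson's multi-scale proposal, as I understood it, amounts to asking at which spatial scales this quantity is positive and which mechanisms contribute to it at each scale.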

#Notes from Topher Weiss-Lehman

Applied Theory
I spent the morning of my third day at ESA in a symposium on Advancing Ecological Theory for Conservation Biology. Hugh Possingham started out with a call for more grand theories in a talk titled “Theory for conservation decisions: the death of bravery.” Possingham argued for the development of theory tailored to the needs of conservation managers, identifying the SLOSS debate as an example of the scientific community agonizing over the answer to a question no managers were asking. He described the type of theory he meant as simple and easily applicable, rather than relying on intensive computer simulations that managers are unlikely to be able to use for their own systems. Possingham is right that conservation managers need theory to help guide decisions over where and what species to protect; however, I can’t help but think about the scientific advances that arose specifically as a result of the SLOSS debate and computational models. The talk left me wondering if theoretical ecology, like other scientific fields, could be split into basic and applied theory.

The other talks in the session approached the topic of theory for conservation from a number of perspectives. Justin Kitzes discussed the ways in which macroecology can inform conservation concerns, and Annette Ostling explored how niche and neutral community dynamics affect extinction debts. H. Resit Akakaya provided a wonderful example of the utility of computer simulations for conservation issues. He presented results predicting the extinction risk of species due to climate change via simulations based on niche modeling coupled with metapopulation dynamics. Jennifer Dunne then explored how the network structure of food webs changed as a result of human arrival and hunting in several systems. The session ended with a presentation by David Ackerly calling for a focus on disequilibrium dynamics in ecology. Ackerly made a compelling case for the importance of considering disequilibrium dynamics, particularly when making predictions of species reactions to climate change or habitat alteration. However, the most memorable part of his talk for me was the last 5 minutes or so. He suggested that we reconsider what conservation success should mean. Since systems are changing and will continue to change, Ackerly argued that setting conservation goals based on keeping systems the way they are is setting ourselves up for failure. Instead, we need to understand that systems are transitioning, and that while we have a crucial role in deciding what they might transition into, we can’t and shouldn’t try to stop them from changing.

The talks today gave me lots of ideas and new papers to read, but they also left me pondering more questions on the philosophy of science (what we do, why we do it, and what our goals should be) than I expected.