Wednesday, May 14, 2014

Addressing the mental health problem in academia


The Guardian UK is publishing an insightful series this May called “Mental health: a university crisis”, as part of Mental Health Month. Although mental health issues for undergraduates are the focus of a variety of different services and programs at most universities, the Guardian includes a unique focus on the issues of academics—graduate students, postdocs, professors and other researchers—for whom it seems that mental health issues are disproportionately common.

The whole series is an important read, and comes at the issue from many different perspectives. A recent survey of university employees unsurprisingly found that academics have higher stress levels than other university employees, which they attribute to heavy workloads (!), lack of support (from the department or otherwise), and, particularly for early career researchers, feelings of isolation. One particularly insightful piece (with the tagline "I drink too much and haven't had a good night's sleep since last year. Why? Research") argues that academics face problems peculiar to the profession that can lead to mental health issues. There are the typical stressors that many high-stress jobs share – the ever-regenerating to-do list, and the many teaching, research, and service tasks that academics need to accomplish. But academia also seems to attract a high proportion of intense, perfectionistic, passionate people willing to go the extra mile (and encouraged to, given the difficult job market). Worse, research is a creative, even emotional activity – there are highs and lows and periods of intense work that come at the expense of everything else. Ideas are personal, and so the separation between person and research is very slim. The result is often a lack of work-life balance that might produce academic success, but strains mental health. Mental health issues in turn have dire implications for most research activities, since the symptoms – loss of motivation, concentration and clarity of thought – affect crucial academic skills.

If such issues are so common in academia (imposter syndrome is nearly ubiquitous among graduate students, and anxiety, depression, and panic attacks are also common), why are most of the lecturers and postdocs writing about their mental health experiences for the Guardian choosing to be anonymous? It still seems common to simply downplay or hide problems with stress and mental illness (in the linked study, 61% of academics with mental health problems say their colleagues are unaware of their problems). This may be a reflection of the fact that academia is focused on individual performance and individual reputation. Colleagues choose to work with you, to invite you to their department, to hire you, based in no small part on your reputation. Admitting to having suffered from mental illness can feel like adding an obstacle to the already difficult academic landscape. For many, admitting to struggling can feel like failure, particularly since everyone around them seems to be managing the harsh conditions just fine (whether or not that is really true). Academic workdays have less structure than most, which can be isolating. Academics can keep unpredictable hours, disappear for days, send emails at 2 am, sleep at work, and be unkempt and exhausted without much comment; as a result, it can be difficult to identify those colleagues who are at risk (compared to those who are simply unconventional :-) ).

It will be interesting to see where the Guardian series goes. Mental health issues in academia are in many ways subject to the same forces that have affected women and minorities looking for inclusion in academia – subtle comments or stigma, and a lack of practical support. I remember once hearing a department chair express disgust at a co-author who had failed to respond to emails because they were “certifiably crazy; in a mental hospital”. No doubt that was exactly the response the co-author was hoping to avoid. More subtle but more common is lip service to work-life balance that is counterbalanced by proud references to how hard one, or one’s lab, works. There is nothing wrong with working hard, but maybe we should temper our praise of sleeping in the lab and coming in every holiday and weekend. It happens and it may be necessary, but is that the badge of honour we really want to claim? It would be sad if the nature of academia – its competitiveness and atmosphere of masochism (“my students are in the lab on Christmas”) – limits progress.

Friday, May 9, 2014

Scaling the publication obstacle: the graduate student’s Achilles’ heel

There is no doubt that graduate school can be extremely stressful and overwhelming. Increasingly, evidence points to these grad school stressors contributing to mental health problems (articles here and here). Many aspects of grad school contribute to self-doubt and unrelenting stress: is there a job for me after? am I as smart as everyone else? is what I’m doing even interesting?

But what seems to really exacerbate grad school stress is the prospect of trying to publish*. The importance of publishing can’t be dismissed. To be a scientist, you need to publish. There are differing opinions about what makes a scientist (e.g., is it knowledge, job title, etc.), but it is clear that if you are not publishing, then you are not contributing to science. This is what grad students hear, and it is easy to see how statements like this do not help with the pressure of grad school.

There are other aspects of the grad school experience that are important, like teaching, taking courses, outreach activities, and serving on University committees or in leadership positions. These other aspects can be rewarding because they expand the grad school experience. There is also the sense that they are under your control and that the rewards are more directly influenced by your efforts. Here, then, publishing is different. The publication process does not feel like it is under your control, and the rewards are not necessarily commensurate with your efforts.

Cartoon by Nick Kim, Massey University, Wellington, accessed here

Given the publishing necessity, how then can grad students approach it with as little trauma as possible? The publication process will be experienced differently by different people: some seem able to shrug off negative experiences, while others internalize them and let those experiences gnaw away at their confidence. There is no magic solution to making the publishing experience better, but here are some suggestions and reassurances.

1) It will never be perfect! I often find myself telling students to just submit already. There is a tendency to hold on to a manuscript and read and re-read it. Part of this is the anxiety of actually submitting it, and procrastination is a result of that anxiety. But often students say that it doesn’t feel ready, or that they are unhappy with part of the discussion, or that it is not yet perfect. Don’t ever convince yourself that you will make it perfect – you are setting yourself up for a major disappointment. Referees ALWAYS criticize, even when they say a paper is good. There is always room for improvement, and you should view peer review as part of the process that improves papers. If you think of it this way, then criticisms are less personal (i.e., why didn’t they think it was perfect too?) and feel more constructive, and you can be at peace with submitting something that is less than perfect.

2) Let's dwell on part of the first point: reviewers ALWAYS criticize. It is part of their job. It is not personal. Remember, the reviewers are putting time and effort into your paper, and their comments should be used to make the product better. Reviewers are very honest and will tell you exactly what could be done to improve a manuscript. They are not attacking you personally, but rather assessing the manuscript. 

3) Building on point 2, the reviewers may not always be correct or provide the best advice. It is OK to state why you disagree with them. You should always appreciate their efforts (unless they are unprofessional), but you don’t have to always agree with them.

4) Not every paper is a literature masterpiece. Effective scientific communication is sometimes best served by very concise and precise papers. If you have an uncomplicated, relatively simple experiment, don’t make it more complex by writing 20 pages. Notes, Brevia, and Forum papers are all legitimate contributions.

5) Not every paper should be a Science or Nature paper (or whatever the top journals are in a given subdiscipline). Confirmatory or localized studies are helpful and necessary. Large meta-analyses and reviews are not possible without published evidence. Students should try to think about how their work is novel or broadly general (this is important for selling yourself later on), but it is OK to acknowledge that your paper is limited in scope or context, and to just send it to the appropriate journal. It takes practice to match papers to the best-fitting journals, so ask colleagues where they would send it. This journal matching can save time and trauma.

6) And here is the important one: rejection is OK, natural, and normal. We all get rejections – and I do mean all of us. Your rejection is not abnormal, you don’t suck more than others, and your experience is shared by even the best scientists. When your paper is reviewed and then rejected, there is usually helpful information that should be useful in revising your work to submit elsewhere. Many journals are inundated with papers and are looking for reasons to reject. In the journal I edit, we accept only about 18% of submissions, and so it doesn’t take much to reject a paper. This is unfortunate, but currently unavoidable (though with the changing publishing landscape, this norm may change). Rejection is hard, but don’t take it personally, and feel free to express your rage to your friends.



Publishing is a tricky, but necessary, business for scientists. When you are having problems with publishing, don’t internalize it. Instead, complain about it to your friends and colleagues. They will undoubtedly have very similar experiences. Students can be hesitant to share rejections with other students because they feel inferior, but sharing can be therapeutic. When I was a postdoc at NCEAS, the postdocs would share quotes from their worst rejection letters. What would normally have been a difficult, confidence-bashing experience became a supportive, reassuring one.

Publishing is necessary, but it is also very stressful and can add to low confidence and the feeling that grad school is overwhelming. I hope that the pointers above can help make the experience less onerous. But when you do get that acceptance letter telling you that your paper will be published, hang on to that. Celebrate and know that you have been rewarded for your hard work, but move on from the rejections.


*I should state that my perspective is from science, and my views on publishing are very much informed by the publishing culture in science. I have no way of knowing whether the pressures in the humanities or economics are the same as those facing science students.

Tuesday, April 29, 2014

Unexpected effects of global warming in novel environments: butterflies emerge later in warming urban areas.

There is now ample evidence that warming temperatures cause advances in the timing of organismal activity (i.e., phenology). Studies have shown that rising temperatures are responsible for earlier plant leafing and flowering (Miller-Rushing & Primack 2008, Wolkovich et al. 2012), pest insect emergence and abundance (Roos et al. 2011), and even local species loss and reduced diversity (Willis et al. 2008). One emerging expectation from global warming studies is that insects should emerge earlier since winters are milder and spring temperatures are warmer. This expectation should hold so long as high temperatures or other environmental stressors don’t adversely affect the insects. And the concern about shifts in emergence and insect activity is the potential for mismatches between plant flowering and the availability of pollinators (Willmer 2012) – if insects emerge too soon, they may miss the flowers.

Photo by Marc Cadotte


In a forthcoming paper in Ecology, Sarah Diamond and colleagues study 20 common butterfly species across more than 80 sites in Ohio. These sites span a rural-to-urban gradient. Instead of finding earlier emergence in warmer places, which were typically urban areas, they found that a number of species were delayed in warmer urban areas. Even though the butterflies might emerge earlier in warmer rural habitats, they were adversely affected in urbanized areas.

These results highlight the need to consider multiple sources of stress from different types of environmental change. Observations from a few locales or from controlled experiments may not lead to conclusions about the interactive influences of warming and urbanization, and that's why this study is so important. It observes a counter-intuitive result because of the influence of multiple stressors.

A next step should be to determine if pollinator-plant interactions are being disrupted in these urban areas. The reason we should care so much about pollinator emergence is that pollinators provide a key ecological service by pollinating wild, garden, and agricultural plants, as well as being an important food source for other species. A mismatch in timing can disrupt these important interactions.

References

Diamond S.E., Cayton H., Wepprich T., Jenkins C.N., Dunn R.R., Haddad N.M. & Ries L. (2014). Unexpected phenological responses of butterflies to the interaction of urbanization and geographic temperature. Ecology. DOI: 10.1890/13-1848.1

Miller-Rushing A.J. & Primack R.B. (2008). Global warming and flowering times in Thoreau's Concord: a community perspective Ecology, 89, 332-341.

Roos J., Hopkins R., Kvarnheden A. & Dixelius C. (2011). The impact of global warming on plant diseases and insect vectors in Sweden. Eur J Plant Pathol, 129, 9-19.

Willis C.G., Ruhfel B., Primack R.B., Miller-Rushing A.J. & Davis C.C. (2008). Phylogenetic patterns of species loss in Thoreau's woods are driven by climate change. Proceedings of the National Academy of Sciences, 105, 17029-17033.

Willmer P. (2012). Ecology: pollinator-plant synchrony tested by climate change. Curr. Biol., 22, R131-R132.

Wolkovich E.M., Cook B.I., Allen J.M., Crimmins T.M., Betancourt J.L., Travers S.E., Pau S., Regetz J., Davies T.J., Kraft N.J.B., Ault T.R., Bolmgren K., Mazer S.J., McCabe G.J., McGill B.J., Parmesan C., Salamin N., Schwartz M.D. & Cleland E.E. (2012). Warming experiments underpredict plant phenological responses to climate change. Nature, 485, 494-497.



Thursday, April 24, 2014

Data merging: are we moving forward or dealing with Frankenstein's monster


I’m sitting in the Sydney airport waiting for my delayed flight – which gives me some time to ruminate on the mini-conference I am leaving. The conference, hosted by the Centre for Biodiversity Analysis (CBA) and CSIRO in Australia, on "Understanding biodiversity dynamics using diverse data sources", brought together several fascinating thinkers working in disparate areas including ecology, macroecology, evolution, genomics, and computer science. The goal of the conference was to see if merging different forms of data could lead to greater insights into biodiversity patterns and processes.

Happy integration

On the surface, it seems uncontroversial to say that bringing together different forms of data really does promote new insights into nature. However, this only works if the data we combine meaningfully complement one another. When researchers bring together data, there are under-appreciated risks, and the result can be an attempt to combine data that make weird bedfellows.
Weird bedfellows

The risks include data that mismatch in the scale of observation, resulting in meaningful variation being missed. Data are often generated according to certain models with specific assumptions, and these data-generation steps can be misunderstood by end-users, resulting in inappropriate uses of the data. Further, different data may be combined in standard statistical models, but the linkages between data types are often much more subtle and nuanced, requiring alternative models.

These issues arise because researchers now have unprecedented access to numerous large data sets. Whether these are large trait data sets, spatial locations, spatial environmental data, genomes, or historical data, they are all built with specific underlying uses, limitations and assumptions.

Despite these concerns, the opportunity and power to address new questions is greatly enhanced by multiple types of data. One thing I gained from this meeting is that a new world of biodiversity analysis and understanding is emerging, driven by smart people doing smart things with multiple types of data. We will soon live in a world where the data and analytical tools allow researchers to truly combine multiple processes to predict species' distributions, or to move from evolutionary events in deep history to modern-day ecological patterns.


Wednesday, April 23, 2014

Guest Post: You teach science, but is your teaching scientific? (Part I)

The first in a series of guest posts about using scientific teaching, active learning, and flipping the classroom by Sarah Seiter, a teaching fellow at the University of Colorado, Boulder. 

For a faculty member, teaching can sometimes seem like a chore – your lectures compete with smartphones and laptops. Some students see themselves as education “consumers” and haggle over grades. STEM (science, technology, engineering, and math) faculty have a particularly tough gig – students need substantial background to succeed in these courses, and often arrive in the classroom unprepared. Yet the current classroom climate doesn’t seem to be working for students either. About half of STEM college majors ultimately switch to a non-scientific field. It would be easy to frame the problem as one of culture – and we do live in a society that doesn’t always value science or education. However, reforming STEM education might not take social change; it could instead be solved using our own scientific training. In the past few years a movement called “scientific teaching” has emerged, which uses quantitative research skills to make the classroom experience better for instructors as well as students.

So how can you use your research skills to boost your teaching? First, you can use teaching techniques that have been empirically tested and rigorously studied, especially a set of techniques called “active learning”. Second, you can collect data on yourself and your students to gauge your progress and adjust your teaching as needed, a process called “formative assessment”. While this can seem daunting, it helps to remember that as a researcher you’re uniquely equipped to overhaul your teaching, using the skills you already rely on in the lab and the field. Like a lot of paradigm shifts in science, using data to guide your teaching seems pretty obvious after the fact, but it can be revolutionary for you and your students.

What is Active Learning:

There are a lot of definitions of active learning floating around, but in short, active learning techniques force students to engage with the material while it is being taught. More importantly, students practice the material and make mistakes while they are surrounded by a community of peers and instructors who can help. There are a lot of ways to bring active learning strategies to your classroom, such as clicker response systems (handheld devices that allow students to take short quizzes throughout the lecture). Case studies are another tool: students read about scientific problems and then apply the information to real world problems (medical and law schools have been using them for years). I’ll get into some more examples of these techniques in post II; there are lots of free and awesome resources that will allow you to try active learning techniques in your class with minimal investment.

Formative Assessment:

The other way data can help you overhaul your class is through formative assessment, a series of small, frequent, low-stakes assessments of student learning. A lot of college courses use what’s called summative assessment – one or two major exams that test a semester’s worth of material, with a few labs or a term paper for balance. If your goal is to see whether your students learned anything over a semester, this is probably sufficient. This is also fine if you’re trying to weed out underperforming students from your major (but seriously, don’t do that). But if you’re interested in coaching students towards mastery of the subject matter, it probably isn’t enough to just tell them how much they learned after half the class is over. If you think about learning goals the way we think of fitness goals, this is like asking students to qualify for the Boston marathon without giving them any times for their training runs.

Formative assessment can be done in many ways: weekly quizzes or taking data with classroom clicker systems. While a lot of formative assessment research focuses on measuring student progress, instructors have lots to gain by measuring their own pedagogical skills. There are a lot of tools out there to measure improvement in teaching skills (K-12 teachers have been getting formatively assessed for years), but even setting simple goals for yourself (“make at least 5 minutes for student questions”) and monitoring your progress can be really helpful. Post III will talk about how to do (relatively) painless formative assessment in your class.

How does this work and who does it work for:

Scientific teaching is revolutionary because it works for everyone, faculty and students alike. However, it has particularly useful benefits for some types of instructors and students.

New Faculty: inexperienced faculty can achieve results as good as or better than experienced faculty by using evidence-based teaching techniques. In a study at the University of Colorado, physics students taught by a graduate TA using scientific teaching outperformed those taught by an experienced (and well loved) professor using a standard lecture style (you can read the study here). Faculty who are not native English speakers, or who are simply shy, can get a lot of leverage from scientific teaching techniques, because doing in-class activities relieves the pressure to deliver perfect lectures.
Test scores between a lecture-taught physics section and a section taught using active learning techniques.

Seasoned Faculty: For faculty who already have their teaching style established, scientific teaching can spice up lectures that have become rote, or help you address concepts that you see students struggle with year after year. Even if you feel like you have your lectures completely dialed in, consider whether you’re using the most cutting-edge techniques in your lab, and whether your classroom deserves the same treatment.

Students also stand to gain from scientific teaching, and some groups of students are particularly poised to benefit from it:
Students who don’t plan to go into science: Even in majors classes, most of the students we teach won’t go on to become scientists. But skills like analyzing data and writing convincing, evidence-based arguments are useful in almost any field. Active learning trains students to be smart consumers of information, and formative assessment teaches students to monitor their own learning – two skills we could stand to see more of in any career.

Students Who Love Science: Active learning can give star students a leg up on the skills they’ll need to succeed as academics, for all the reasons listed above. Occasionally really bright students will balk at active learning, because having to wrestle with complicated data makes them feel stupid. While it can feel awful to watch your smartest students struggle, it is important to remember that real scientists have to confront confusing data every day. For students who want research careers, learning to persevere through messy and inconclusive results is critical.

Students who struggle with science: Active learning can be a great leveler for students who come from disadvantaged backgrounds. A University of Washington study showed that active learning and student peer tutoring could eliminate achievement gaps for minority students. If part of the reason you got into academia was to make a difference in educating young people, here is one empirically proven way to do that.

Are there downsides?

Like anything, active learning involves tradeoffs. While the overwhelming evidence suggests that active learning is the best way to train new scientists (the White House even published a report calling for more of it!), there are sometimes roadblocks to scientific teaching.

Content Isn’t King Anymore: Working with data or applying scientific research to policy problems takes more class time, so instructors can cover fewer examples. In active learning, students are developing scientific skills like experimental design or technical writing, but after spending an hour hammering out an experiment to test the evolution of virulence, they often feel like they’ve only learned about “one stupid disease”. However, there is lots of evidence that covering topics in depth is more beneficial than surveying many topics. For example, high schoolers who studied a single subject in depth for more than a month were more likely to declare a science major in college than students who covered more topics.

Demands on Instructor Time: I actually haven’t found that active learning takes more time to prepare – case studies and clickers take up a decent amount of class time, so I spend less time prepping and rehearsing lectures. However, if you already have a slide deck you’ve been using for years, developing clicker questions and class exercises requires an upfront investment of time. Formative assessment can also take more time, although online quiz tools and peer grading can help take some of the pressure off instructors.

If you want to learn more about the theory behind scientific teaching there are a lot of great resources on the subject:

These podcasts are a great place to start:
http://americanradioworks.publicradio.org/features/tomorrows-college/lectures/

http://www.slate.com/articles/podcasts/education/2013/12/schooled_podcast_the_flipped_classroom.html

This book is a classic in the field:
http://www.amazon.com/Scientific-Teaching-Jo-Handelsman/dp/1429201886

Monday, April 21, 2014

Null models matter, but what should they look like?

Neutral Biogeography and the Evolution of Climatic Niches. Florian C. Boucher, Wilfried Thuiller, T. Jonathan Davies, and Sébastien Lavergne. The American Naturalist, Vol. 183, No. 5 (May 2014), pp. 573-584

Null models have become a fundamental part of community ecology. For the most part, this is an improvement over our null-model-free days: patterns are now interpreted with reference to patterns that might arise through chance and in the absence of the ecological processes of interest. Null models today are ubiquitous in tests of phylogenetic signal, patterns of species co-occurrence, and models of species distribution–climate relationships. But even though null models are a success in that they are widespread and commonly used, there are problems – in particular, there is a disconnect between how null models are chosen and interpreted and what information they actually provide. Unfortunately, simple and easily applied null models tend to be favoured, but they are often interpreted as though they are complicated, mechanism-explicit models.

The new paper “Neutral Biogeography and the Evolution of Climatic Niches” from Boucher et al. provides a good example of this problem. The premise of the paper is straightforward: studies of phylogenetic niche conservation tend to rely on simple null models, and as a result may misinterpret what their data show. The study of phylogenetic niche conservation and niche evolution is becoming increasingly popular, particularly studies on how species' climatic niches evolve and how climate niches relate to patterns of diversity. In a time of changing climates, there are also important applications looking at how species respond to climatic shifts. Studies of changes in climate niches through evolutionary time usually rely on a definition of the climate niche based on empirical data – more specifically, the mean position of a given species along a continuous abiotic gradient. Because this is not directly tied to physiological measurements, climate niche data may also capture the effect of dispersal limitations or biotic interactions. Hence the need for null models; however, the null models used in these studies primarily flag changes in climate niche that result from random drift or selection in a varying environment. These types of null models use Brownian motion (a "random walk") to answer questions about whether niches are more or less similar than expected due to chance, or else whether a particular model of niche evolution is a better fit to the data than a model of Brownian motion.
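To make the Brownian-motion null concrete, here is a minimal sketch (mine, not from the paper) of how such a null is typically built: under Brownian motion, tip trait values are multivariate normal with covariance proportional to the branch length species share, so one can simulate many such datasets and ask whether the observed spread of climate niches is unusual. The four-species tree, the rate, and the niche values below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ultrametric tree for 4 species (total depth = 1.0).
# C[i, j] is the branch length shared by species i and j; under Brownian
# motion, tip trait values are multivariate normal with covariance
# proportional to C.
C = np.array([
    [1.0, 0.6, 0.0, 0.0],
    [0.6, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.3],
    [0.0, 0.0, 0.3, 1.0],
])

# Made-up climate niches: each species' mean position along a standardized
# climatic gradient.
observed = np.array([0.1, -0.4, 1.6, 0.9])

sigma2 = 1.0  # assumed Brownian rate per unit branch length (illustrative)
root = 0.0    # assumed ancestral value (drops out of the variance comparison)

# Null distribution of among-species niche variance under pure Brownian drift.
null_tips = rng.multivariate_normal(np.full(4, root), sigma2 * C, size=10_000)
null_var = null_tips.var(axis=1, ddof=1)

obs_var = observed.var(ddof=1)
p_divergent = (null_var >= obs_var).mean()

print(f"observed niche variance = {obs_var:.2f}")
print(f"P(null variance >= observed) = {p_divergent:.3f}")
# A very small value would usually be read as niches more divergent than
# Brownian drift predicts; a value near 1 as niches more conserved.
```

Boucher et al.'s point is that a pattern flagged as "non-Brownian" by this kind of test need not reflect selection on the niche at all – neutral speciation and migration across a spatially structured landscape can produce it too.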

The authors suggest that the reliance on Brownian motion is problematic, since these simple null models cannot actually distinguish between patterns of climate niches that arise simply due to speciation and migration but no selection on climate niches, and those that are the result of true niche evolution. If this is true, conclusions about niche evolution may be suspect, since they depend on the null model used. The authors used a neutral, spatially explicit model (known as an "alternative neutral biogeographic model") that simulates dynamics driven only by speciation and migration, with species being neutral in their dynamics. This provides an alternative model of patterns that may arise in climate niches among species, despite the absence of direct selection on the trait. The paper then looks at whether climatic niches exhibit phylogenetic signal when they arise via neutral spatial dynamics; whether gradualism is a reasonable neutral expectation for the evolution of climatic niches on geological timescales; and whether constraints on climatic niche diversification can arise simply through bounded geographic space. Simulations of the neutral biogeographic model used a gridded “continent” with variable climate conditions: each cell has a carrying capacity, and species move via migration and split into two species either by point mutation, or else by vicariance (a geographic barrier appears, leading to divergence of two populations). Not surprisingly, their results show that even in the absence of any selection on species’ climate niches, patterns can result that differ greatly from a simple Brownian motion-based null model. So the simple null model (Brownian motion) often concluded that results from the more complex neutral model were different from the random/null expectation. This isn't a problem per se. The problem is that current interpretations of the Brownian motion model may take anything different from null as a signal of niche evolution (or conservation). Obviously that is not correct.

This paper is focused on the issue of choosing null models for studies of climate niche evolution, but it fits into a current of thought about the problems with how ecologists are using null models. It is one thing to know that you need and want to use a null model, but it is much more difficult to construct an appropriate null model, and interpret the output correctly. Null models (such as the Brownian motion null model) are often so simplistic that they are straw man arguments – if ecology isn't the result of only randomness, your null model is pretty likely to be a poor fit to the data. On the other hand, the more specific and complex the null model is, the easier it is to throw the baby out with the bathwater. Given how much data is interpreted in the light of null models, it seems that choosing and interpreting null models needs to be more of a priority.

Thursday, April 3, 2014

Has science lost touch with natural history, and other links

A few interesting links, especially about the dangers of when one aspect of science, data analysis, or knowledge receives inordinate focus.

A new article in Bioscience repeats the fear that natural history is losing its place in science, and that natural history's contributions to science have been devalued. "Natural history's place in science and society" makes some good points as to the many contributions that natural history has made to science, and it is fairly clear that natural history is given less and less value within academia. As always though, the issue is finding ways to value useful natural history contributions (museum and herbarium collections, GenBank contributions, expeditions, citizen science) in a time of limited funds and (over)emphasis on the publication treadmill. Nature offers its take here, as well.

An interesting opinion piece on how the obsession with quantification and statistics can go too far, particularly in the absence of careful interpretation. "Can empiricism go too far?"

And similarly, does Big Data have big problems? Though focused on applications for the social sciences, there are some interesting points about the space between "social scientists who aren’t computationally talented and computer scientists who aren’t social-scientifically talented", and again, the need for careful interpretation. "Big data, big problems?"

Finally, a fascinating suggestion about how communication styles vary globally. Given the global academic society we exist in, it seems like this could come in handy. The Canadian one seems pretty accurate, anyways. "These Diagrams Reveal How To Negotiate With People Around The World." 

Thursday, March 27, 2014

Are we winning the science communication war?

Since the time that I was a young graduate student, there have been constant calls for ecologists to communicate more with the public and policy makers (Norton 1998, Ludwig et al. 2001). The impetus for these calls is easy to understand – we are facing serious threats to the maintenance of biodiversity and ecosystem health, and ecologists have the knowledge and facts that are needed to shape public policy. To some, it is unconscionable that ecologists have not done more advocacy, while others see a need to better educate ecologists in communication strategies. While the reluctance of some ecologists to engage in public communication could be due to a lack of skills that training could overcome, the majority likely have a deeper unease. Like all academics, ecologists have many demands on their time, but are evaluated by research output. Adding another priority to their already long list of priorities can seem overwhelming. More fundamentally, many ecologists are in the business of expanding our understanding of the world. They see themselves as objective scientists adding to global knowledge. To these ‘objectivists’, getting involved in policy debates, or becoming advocates, undermines their objectivity.

Regardless of these concerns, a number of ecologists have decided that public communication is an important part of their responsibilities. Ecologists now routinely sit on the boards of different organizations, give public lectures, write books and articles for the public, work more on applied problems, and testify before governmental committees. Part of this shift comes from organizations, such as the Nature Conservancy, which have become large, sophisticated entities with communication departments. But the working academic ecologist also likely talks with more journalists and public groups than in the past.

The question remains: has this increased emphasis on communication yielded any changes in public perception or policy decisions? As someone who has spent time in elementary school classrooms teaching kids about pollinators and conservation, I have been surprised by the level of environmental awareness in both the educators and the children. More telling are surprising calls for policy shifts from governmental organizations. Here in Canada, morale has been low because of a federal government that has not prioritized science or conservation. However, signals from international bodies and the US seem promising for the ability of science to positively influence policy.

Two such policy calls are extremely telling. Firstly, the North American Free Trade Agreement (NAFTA) – which includes the governments of Mexico, Canada, and the USA, and which normally deals with economic initiatives and disagreements – announced that it will form a committee to explore measures to protect monarch butterflies. The committee will consider instituting toxin-free zones, where the spraying of chemicals will be prohibited, as well as the construction of a milkweed corridor from Canada to Mexico. NAFTA made this announcement because of declining monarch numbers and calls from scientists for a coordinated strategy.

The second example is the call from 11 US senators to combat the spread of Asian carp. Asian carp have invaded a number of major rivers in the US, and their spread has been of major concern to scientists. The 11 senators have taken this scientific concern seriously, requesting federal money and asking that the Army Corps of Engineers devise a way to stop the Asian carp spread.


There seems to be promising anecdotal evidence that issues of scientific concern are influencing policy decisions. This signals a potential shift; maybe scientists are winning the public perception and policy war. But the war is by no means over. There are still major issues (e.g., climate change) that require more substantial policy action. Scientists, especially those who are effective and engaged, need to continue to communicate with public and policy audiences. Every scientifically informed policy decision should be seen as a signal that audiences are willing to listen to scientists and that communicating science can work.



References
Ludwig D., Mangel M. & Haddad B. (2001). Ecology, conservation, and public policy. Annual Review of Ecology and Systematics, 32, 481-517.

Norton B.G. (1998). Improving ecological communication: the role of ecologists in environmental policy formation. Ecological Applications, 8, 350-364.


Monday, March 24, 2014

Debating the p-value in Ecology

It is interesting that p-values still garner so much ink: it says something about how engrained and yet embattled they are. This month’s Ecology issue explores the p-value problem with a forum of 10 new short papers* on the strengths and weaknesses, defenses and critiques, and various alternatives to “the probability (p) of obtaining a statistic at least as extreme as the observed statistic, given that the null hypothesis is true”.

The defense of p-values is led by Paul Murtaugh, who provides the opening and closing arguments. Murtaugh, who has written a number of good papers about ecological data analysis and statistics, takes a pragmatic approach to null hypothesis testing and p-values. He argues p-values are not flawed so much as they are regularly and egregiously misused and misinterpreted. In particular, he demonstrates mathematically that alternative approaches to the p-value, particularly the use of confidence intervals or information-theoretic criteria (e.g. AIC), simply present the same information as p-values in slightly different fashions. This is a point that the contribution by Perry de Valpine supports, noting that all of these approaches are simply different ways of displaying likelihood ratios, and that the argument that one is inherently superior ignores their close relationship. In addition, although acknowledging that cutoff values for significant p-values are logically problematic (why is a p-value of 0.049 so much more important than one of 0.051?), Murtaugh notes that cutoffs reflect decisions about acceptable error rates and so are not inherently meaningless. Further, he argues that ΔAIC cutoffs for model selection are no less arbitrary.
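The "same information, different packaging" argument is easy to see with a toy calculation. The sketch below (mine, not from the forum papers) compares two nested Gaussian linear models – intercept only versus intercept plus slope – and computes the classical F-test p-value and the ΔAIC from the same two residual sums of squares; the data are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Made-up data: a response with a weak dependence on one predictor.
n = 50
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Two nested models: intercept-only vs intercept + slope.
rss0 = rss(np.ones((n, 1)), y)
rss1 = rss(np.column_stack([np.ones(n), x]), y)

# Null hypothesis test: F statistic and p-value for the added slope.
F = (rss0 - rss1) / (rss1 / (n - 2))
p = stats.f.sf(F, 1, n - 2)

# AIC for each model from the maximized Gaussian log-likelihood
# (parameter count includes the residual variance).
def aic(rss_val, k):
    loglik = -0.5 * n * (np.log(2 * np.pi * rss_val / n) + 1)
    return 2 * k - 2 * loglik

delta_aic = aic(rss0, 2) - aic(rss1, 3)

print(f"F = {F:.2f}, p = {p:.4f}, deltaAIC = {delta_aic:.2f}")
# Both summaries are computed from the same pair of residual sums of squares,
# i.e. the same likelihood ratio: for this nested comparison, a smaller
# p-value always corresponds to a larger deltaAIC.
```

Whether you then apply a p < 0.05 cutoff or a ΔAIC > 2 cutoff, you are drawing an (arbitrary) line through the same underlying quantity, which is essentially Murtaugh's point.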

The remainder of the forum is a back and forth argument about Murtaugh’s particular points and about the merits of the other authors’ chosen methods (Bayesian, information theoretic and other approaches are represented). It’s quite entertaining, and this forum is really a great idea that I hope will feature in future issues. Some of the arguments are philosophical – are p-values really “evidence” and is it possible to “accept” or “reject” a hypothesis using their values? Murtaugh does downplay the well-known problem that p-values summarize the strength of evidence against the null hypothesis, and do not assess the strength of evidence supporting a hypothesis or model. This can make them prone to misinterpretation (most students in intro stats want to say “the p-value supports the alternate hypothesis”) or else interpretation in stilted statements.

Not surprisingly, Murtaugh receives the most flak for defending p-values from researchers working in alternate worldviews like Bayesian and information-theoretic approaches. (His interactions with K. Burnham and D. Anderson (AIC) seem downright testy. Burnham and Anderson in fact start their paper “We were surprised to see a paper defending P values and significance testing at this time in history”...) But having this diversity of authors plays a useful role, in that it highlights that each approach has its own merits and best applications. Null hypothesis testing with p-values may be most appropriate for testing the effects of treatments in randomized experiments, while AIC values are useful when we are comparing multi-parameter, non-nested models. Bayesian approaches may similarly be better suited to some problems than others. This focus on the “one true path” to statistics may be behind some of the current problems with p-values: they were used as a standard to help make ecology more rigorous, but the focus on p-values came at the expense of reporting effect sizes, making predictions, or careful experimental design.

Even at this point in history, it seems like there is still lots to say about p-values.

*But not open-access, which is really too bad. 

Monday, March 17, 2014

How are we defining prediction in ecology?

There is an ongoing debate about the role of wolves in altering ecosystem dynamics in Yellowstone, which has stimulated a number of recent papers, and apparently inspired an editorial in Nature. Entitled “An Elegant Chaos”, the editorial reads a bit like an apology for ecology’s failure at prediction, suggesting that we should embrace ecology’s lack of universal laws and recognize that “Ecological complexity, which may seem like an impenetrable thicket of nuance, is also the source of much of our pleasure in nature”.

Most of the time, I also fall squarely into the pessimistic “ecological complexity limits predictability” camp. And concerns about prediction in ecology are widespread and understandable. But there is also something frustrating about the way we so often approach ecological prediction. Statements such as “It would be useful to have broad patterns and commonalities in ecology” feel incomplete. Is it that we really lack “broad patterns and commonalities in ecology”, or has ecology adopted a rather precise and self-excoriating definition for “prediction”? 

We are fixated on achieving particular forms of prediction (either robust universal relationships, or else precise and specific numerical outputs), and perhaps we are failing at achieving these. But on the other hand, ecology is relatively successful in understanding and predicting qualitative relationships, especially at large spatial and temporal scales. At the broadest scales, ecologists can predict the relationships between species numbers and area, between precipitation, temperature and habitat type, between habitat types and the traits of species found within, between productivity and the general number of trophic levels supported. Not only do we ignore this foundation of large-scale predictable relationships, but we ignore the fact that prediction is full of tradeoffs. As a paper with the excellent title, “The good, the bad, and the ugly of predictive science” states, any predictive model is still limited by tradeoffs between: “robustness-to-uncertainty, fidelity-to-data, and confidence-in-prediction…. [H]igh-fidelity models cannot…be made robust to uncertainty and lack-of-knowledge. Similarly, equally robust models do not provide consistent predictions, hence reducing confidence-in-prediction. The conclusion of the theoretical investigation is that, in assessing the predictive accuracy of numerical models, one should never focus on a single aspect.” Different types of predictions have different limitations. But sometimes it seems that ecologists want to make predictions in the purest, trade-off free sense - robustness-to-uncertainty, fidelity-to-data, and confidence-in-prediction - all at once. 

In relation to this, ecological processes tend to be easier to represent in a probabilistic fashion, something that we seem rather uncomfortable with. Ecology is predictive in the way medicine is predictive – we understand the important cause and effect relationships, many of the interactions that can occur, and we can even estimate the likelihood of particular outcomes (of smoking causing lung cancer, of warming climate decreasing diversity), but predicting how a human body or ecosystem will change is always inexact. The complexity of multiple independent species, populations, genes, traits, all interacting with similarly changing abiotic conditions makes precise quantitative predictions at small scales of space or time pretty intractable. So maybe that shouldn’t be our bar for success. The analogous problem for an evolutionary biologist would be to predict not only a change in population genetic structure but also the resulting phenotypes, accounting for epigenetics and plasticity too. I think that would be considered unrealistic, so why is that where we place the bar for ecology? 

In part the bar for prediction is set so high because the demand for ecological knowledge, given habitat destruction, climate change, extinction, and a myriad of other changes, is so great. But in attempting to fulfill that need, it may be worth acknowledging that predictions in ecology occur on a hierarchy, from relationships at the broadest scale that we can be most certain about, down to the finest scale of interactions and traits and genes where we may be less certain. If we see events as occurring with different probabilities, and our knowledge of those probability distributions declining the farther down that hierarchy we travel, then our predictive ability will decline as well. New research fills in the missing or poorly understood relationships, but at the finest scales, prediction may always be limited.