
Monday, September 8, 2014

Edicts for peer reviewing

Reviewing is a rite of passage for many academics. But for most graduate students or postdocs, it is also a bit of a trial by fire, since reviewing skills are usually assumed to be gained osmotically rather than through any specific training. Unfortunately, the reviewing system seems ever more troubled for reviewers and authors alike (slow, poor quality, unpredictable). Concerns about modern reviewing pop up every few months, along with proposed solutions to the difficulties of finding qualified reviewers and maintaining review quality (including publishing an instructional guide, taking alternative approaches (PeerJ, etc.), or skipping peer review altogether (arXiv)). Still, in the absence of a systematic overhaul of the peer review system, an opinion piece in The Scientist by Matthew A. Mulvey and Dean Tantin provides a rather useful guide for new reviewers and a useful reminder for experienced ones. If you are going to do a review (and you should, if you are publishing papers), you should do it well. 
From "An Ecclesiastical Approach to Peer Review" 
"The Golden Rule
Be civil and polite in all your dealings with authors, other reviewers, editors, and so on, even if it is never reciprocated.
As a publishing scientist, you will note that most reviewers break at least a few of the rules that follow. Sometimes that is OK—as reviewers often fail to note, there is more than one way to skin a cat. As an author you will at times feel frustrated by reviews that come across as unnecessarily harsh, nitpicky, or flat-out wrong. Despite the temptation, as a reviewer, never take your frustrations out on others. We call it the “scientific community” for a reason. There is always a chance that you will be rewarded in the long run. 
The Cardinal Rule
If you had to publish your review, would you be comfortable doing so? What if you had to sign it? If the answer to either question is no, start over. (That said, do not make editorial decisions in the written comments to the authors. The decision on suitability is the editors’, not yours. Your task is to provide a balanced assessment of the work in question.) 
The Seven Deadly Sins of sub-par reviews
  1. Laundry lists of things the reviewer would have liked to see, but have little bearing on the conclusions.
  2. Itemizations of styles or approaches the reviewer would have used if they were the author.
  3. Direct statements of suitability for publication in Journal X (leave that to the editor).
  4. Vague criticism without specifics as to what, exactly, is being recommended. Specific points are important—especially if the manuscript is rejected.
  5. Unclear recommendations, with little sense of priority (what must be done, what would be nice to have but is not required, and what is just a matter of curiosity).
  6. Haphazard, grammatically poor writing. This suggests that the reviewer hasn’t bothered to put in much effort.
  7. Belligerent or dismissive language. This suggests a hidden agenda. (Back to The Golden Rule: do not abuse the single-blind peer review system in order to exact revenge or waylay a competitor.) 
Vow silence
The information you read is confidential. Don’t mention it in public forums. The consequences to the authors are dire if someone you inform uses the information to gain a competitive advantage in their research. Obviously, don’t use the findings to further your own work (once published, however, they are fair game). Never contact the authors directly.
Be timely
Unless otherwise stated, provide a review within three weeks of receiving a manuscript. This old standard has been eroded in recent years, but nevertheless you should try to stick to this deadline if possible. 
Be thorough
Read the manuscript thoroughly. Conduct any necessary background research. Remember that you have someone’s fate in your hands, so it is not OK to skip over something without attempting to understand it completely. Even if the paper is terrible and in your view has no hope of acceptance, it is your professional duty to develop a complete and constructive review.
Be honest
If there is a technique employed that is beyond your area of expertise, do the best you can, and state to the editor (or in some cases, in your review) that although outside your area, the data look convincing (or if not, explain why). The editor will know to rely more on the other reviewers for this specific item. If the editor has done his or her job correctly, at least one of the other reviewers will have the needed expertise.
Testify
Most manuscript reviews cover about a page or two. Begin writing by briefly summarizing the state of the field and the intended contribution of the study. Outline any major deficits, but refrain from indicating if you think they preclude publication. Keep in mind that most journals employ copy editors, so unless the language completely obstructs understanding, don’t bother criticizing the English. Go on to itemize any additional defects in the manuscript. Don’t just criticize: saying that X is a weakness is not the same as saying the authors should address weakness X by providing additional supporting data. Be clear and provide no loopholes. Keep in mind that you are not an author. No one should care how you would have done things differently in a perfect world. If you think it helpful, provide additional suggestions as minor comments—the editor will understand that the authors are not bound to them.
Judgment Day
Make a decision as to the suitability of the manuscript for the specific journal in question, keeping in mind their expectations. Is it acceptable in its current state? Would a reasonable number of experiments performed in a reasonable amount of time make it so, or not? Answering these questions will allow you to recommend acceptance, rejection, or major/minor revision. 
If the journal allows separate comments to the editor, here is the place to state that in your opinion they should accept and publish the paper as quickly as possible, or that the manuscript falls far below what would be expected for Journal X, or that Y must absolutely be completed to make the manuscript publishable, or that if Z is done you are willing to have it accepted without seeing it again. Good comments here can make the editor’s job easier. The availability of separate comments to the editor does not mean that you should provide only positive comments in the written review and reserve the negative ones for the editor. This approach can result in a rejected manuscript being returned to the authors with glowing reviewer comments. 
Resurrection
A second review is not the same as an initial review. There is rarely any good reason why you should not be able to turn it around in a few days—you are already familiar with the manuscript. Add no new issues—doing so would be the equivalent of tripping someone in a race during the home stretch. Determine whether the authors have adequately addressed your criticisms (and those of the other reviewers, if there was something you missed in the initial review that you think is vital). In some cases, data added to a revised manuscript may raise new questions or concerns, but ask yourself if they really matter before bringing them up in your review. Be willing to give a little if the authors have made reasonable accommodation. Make a decision: up or down. Relay it to the editor. 
Congratulations. You’ve now been baptized, confirmed, and anointed a professional manuscript reviewer."

Monday, August 25, 2014

Researching ecological research

Benjamin Haller. 2014. "Theoretical and Empirical Perspectives in Ecology and Evolution: A Survey". BioScience; doi:10.1093/biosci/biu131.

Etienne Low-Décarie, Corey Chivers, and Monica Granados. 2014. "Rising complexity and falling explanatory power in ecology". Front Ecol Environ; doi:10.1890/130230.

A little navel-gazing is good for ecology. Although it may sometimes seem otherwise, ecology spends far less time evaluating its approach than it does simply doing research. Obviously we can't spend all of our time navel-gazing, but the field as a whole would benefit greatly from ongoing conversations about its strengths and weaknesses. 

For example, take the issue of theory vs. empirical research. Although this issue has received attention and arguments ad nauseam over the years (including here, 1, 2, 3), it never completely goes away. And even though there are arguments that it's not an issue anymore, that everyone recognizes the need for both, if you look closely the tension continues to exist in subtle ways. If you have participated in a mixed reading group, did the common complaint “do we have to read so many math-y papers?” ever arise, or equally, “do we have to read so many system-specific papers and just critique the methods?” Theory and empirical research don't see eye to eye as closely as we might want to believe.

The good news? Now there is some data. Ben Haller did a survey on this topic that just came out in BioScience. This paper does the necessary task of moving the theory/data debate beyond philosophy and argument by gathering some real data. Firstly, he defines empirical research as involving the gathering and analysis of real-world data, while theoretical research gathers or analyzes no real-world data and instead involves mathematical models, numerical simulations, and other such work. The survey included 614 scientists from related ecology and evolutionary biology fields, representing a global (rather than solely North American) perspective.

The conclusions are short, sweet and pretty interesting: "(1) Substantial mistrust and tension exists between theorists and empiricists, but despite this, (2) there is an almost universal desire among ecologists and evolutionary biologists for closer interactions between theoretical and empirical work; however, (3) institutions such as journals, funding agencies, and universities often hinder such increased interactions, which points to a need for institutional reforms."
 
For interpreting the plots: the empirical group represents respondents whose research is completely or primarily empirical; the theoretical group's research is mostly or completely theoretical; and the middle group does work that falls equally into both types. Maybe the results don't surprise anyone – scientists still read papers, collaborate, and coauthor papers mostly with others of the same group. What is surprising is that this trend is particularly strong for the empirical group. For example, nearly 80% of theorists have coauthored a paper with someone in the empirical group, while only 42% of empiricists have coauthored at least one paper with a theorist. Before we start throwing things at empiricists, it should be noted that this could reflect a relative scarcity of theoretical ecologists, rather than insularity on the part of the empiricists. However, it is interesting that while respondents in all groups most often answered the question “how should theory and empiricism coexist together?” with “theoretical work and empirical work would coexist tightly, driving each other in a continuing feedback loop”, empirical scientists were significantly more likely to say “work would primarily be data-driven; theory would be developed in response to questions raised by empirical findings.”

Most important, and maybe most concerning, is that the survey found no real effect of age, career stage, or gender – i.e., existing attitudes are deeply ingrained and show no sign of changing.

Why is it so important that we reconcile the theoretical/empirical issue? The paper “Rising complexity and falling explanatory power in ecology” offers a pretty compelling reason in its title. Ecological research is getting harder, and we need to marshal all the resources available to us to continue to progress. 

The paper suggests that ecological research is experiencing falling mean R² values: mean reported R² in published papers has fallen from above 0.75 prior to 1950 to below 0.5 today.
The worrying thing is that as a discipline progresses and improves, you might predict an improving ability to explain ecological phenomena. For comparison, criminology showed no decline in R² values as that field matured through time. Why don't we have that? 

During the same period, however, it is notable that the average complexity of ecological studies also increased – the number of reported p-values is 10x larger on average today compared to the early years (when usually only a single p-value relating to a single question was reported). 

The fall in R² values and the rise in reported p-values could mean a number of things, some worse for ecology than others. The authors suggest that R² values may be declining as a result of exhaustion of “easy” questions (“low-hanging fruit”), increased effort in experiments, or a change in publication bias, for example. The low-hanging fruit hypothesis may have some merit – after all, studies from before the 1950s were mostly population biology with a focus on a single species in a single place over a single time period. Questions have grown increasingly more complex, involving assemblages of species over a greater range of spatial and temporal scales. For complex sciences, this fits a common pattern of diminishing returns: “For example, large planets, large mammals, and more stable elements were discovered first”.
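As a rough illustration of the kind of trend analysis behind such claims, here is a toy sketch in Python: bin studies by publication decade and track mean reported R². The data are fabricated for illustration only – this is not the authors' dataset or code.

```python
# Toy sketch: bin hypothetical studies by publication decade and
# report the mean R^2 per decade. All data here are fabricated.
import random
from collections import defaultdict

random.seed(0)
# Hypothetical (year, R^2) pairs, with R^2 drifting downward over time
studies = [(year, min(1.0, max(0.0, random.gauss(0.8 - 0.005 * (year - 1940), 0.1))))
           for year in random.choices(range(1930, 2011), k=500)]

by_decade = defaultdict(list)
for year, r2 in studies:
    by_decade[10 * (year // 10)].append(r2)

for decade in sorted(by_decade):
    values = by_decade[decade]
    print(f"{decade}s: mean R^2 = {sum(values) / len(values):.2f} (n = {len(values)})")
```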

In some ways, ecologists lack a clear definition of success. No one would argue that ecology is less effective now than it was in the 1920s, for example, and yet a simplistic measure of success (R²) might suggest that ecology is in decline. Any bias between theorists and empiricists is obviously misplaced, in that any definition of success for ecology will require both.  

Monday, August 4, 2014

#ESA2014: Getting ready for (and surviving) ESA

There is less than one week until ecology's largest meeting. ESA's annual meeting starts August 10th in Sacramento, California, and it can be both exciting and overwhelming in its size and scope. Here are a few suggestions for making it a success.

Getting ready for ESA.
Sure, things start in a week and you're scheduled for a talk/poster/meeting with a famous prof, but you haven't started preparing yet.

First off, no point beating yourself up for procrastinating: if you've been thinking about your presentation but doing other projects, you might be in the company of other successful people.

If you're giving a talk and have given one before, or are an old hand at this sort of thing, go ahead and put it together the night before your talk. One benefit for the truly experienced or gifted speaker is that this talk will never sound over-rehearsed.

Regardless, all speakers should try for a talk that is focused, with a clear narrative and argument, and within the allotted time. (Nothing is more awkward for everyone involved than watching the moderator have to interrupt a speaker.) The good news is that ESA audiences will probably be a) educated to at least a basic level on your topic, and b) generous with their attention and polite with their questions. This blog has some really practical advice on putting together an academic talk.
If at all possible, practice in front of a friendly audience ahead of time.

The questions after your talk will vary, and if you're lucky they will relate to future directions, experimental design, quantitative double-checks, and truly insightful thoughts. However, there are other common questions that you should recognize: the courtesy question (good moderators have a few in hand), the "tell-me-how-it-relates-to-my-work" question, and the wandering unquestion.

Giving a poster is quite different from giving a talk, and it has pros and cons. First, you have to have it finished in time to have it printed, so procrastination is less possible. Posters are great if you want one-on-one interactions with a wide range of people. You have to make your poster attractive and interesting: this always means don't put too much text on your poster. The start of this pdf gives some nice advice on getting the most out of your poster presentation.

For both posters and presentations, graphics and visual appeal make a big difference. Check out the blog, DeScience, which has some great suggestions for science communication.

Academic meetings. These run the gamut from collaborators that you're just catching up with, to strangers that you have contacted to discuss common scientific interests. If scientists whose research activities and interests overlap with yours are attending ESA, it never hurts to try to meet with them. Many academics are generous with their time, especially for young researchers. If they say yes, come prepared for the conversation. If necessary, review their work that relates to your own. Come prepared to describe your interests and the project/question/experiment you were looking for advice on. It can be very helpful to have some specific questions in mind, in order to facilitate the conversation.

What to wear. Impossible to say. Depending on who you are and where you work normally, you can wear anything from torn field gear and binos to a nice dress or suit (although not too many people will be in suits).

Surviving ESA.
ESA can be very large and fairly exhausting. The key is to pace yourself and take breaks: you don't need to see talks all day long to get your money's worth from ESA. Prioritize the talks that you want to see based on things like speaker or topic. Sitting in on topics totally different from those you study can be quite energizing as well. In this age of smartphones, the e-program is invaluable.

Social media can help you find popular or interesting-sounding talks, or fill you in on highlights you missed. This year the official hashtag on Twitter is #ESA2014.

One of the most important things you can do is be open to meeting new people, whether through dinner and lunch invites, mixers, or other organized activities. Introverts might cringe a little, but the longest lasting outcome from big conferences is the connections you make there.

Eat and try to get some sleep.

**The EEB & Flow will be live-blogging during ESA 2014 in Sacramento, as we have for the last few years. See everyone in Sacramento!**

Tuesday, July 15, 2014

What papers influenced your journey as an ecologist?

For ESA’s centennial year, the society is running a pretty cool series called “The Paper Trail”. A variety of ecologists write about the particular paper or papers that catalyzed their research paths. Sometimes the papers are valuable for bringing up particular questions; sometimes they facilitate the connection of particular ideas.

William Reiner provides some insight into the value of this exercise: “What are some of the generalizations one can deduce from this paper trail? For me there are five. First, in ecology one cannot take too large a view of the problem one is addressing. Second, it is useful to step out of one's science into others to gain useful new ways of addressing questions. Collaboration with others outside one's field facilitates this complementarity. Third, teaching provides a useful forum for developing one's ideas. Fourth, there is no literature that is too old to have no value for current issues. And fifth, one must take time to read to be a thoughtful, creative scholar.”

In general, people are writing about papers that either specifically related to their own research at the time and opened their eyes to something new, or else broadly inspired or fascinated them at a critical time. (For Lee Frelich, this was reading The Vegetation of Wisconsin: An Ordination of Plant Communities at 12 years old.) I probably fall into the second group. My undergrad degree was in general biology and math, so although I had taken a couple of ecology courses, I knew essentially nothing about the fundamentals of the ecological literature. So I was an impressionable PhD student, and I read a lot of papers. When I started, my plan was to do something related to macroecology, and the first paper I remember being excited about was James H. Brown’s 1984 “On the Relationship between Abundance and Distribution of Species”. It is everything a big-idea paper should be – confident, persuasive, suggesting that simple tradeoffs may allow us to predict broad ecological patterns. And while with time I have come to feel that some of the logic in the paper is flawed, or at least unsupported, it is definitely a reminder of how exciting thinking big can be (and 1870 citations suggest others agree).

The next paper was R.H. Whittaker’s “Gradient analysis of vegetation” (1967). There is a lot to recommend in Whittaker’s work, in particular the fact that it straddles modern and traditional ecology so well. He introduces early multidimensional analyses of plant ecology and asks what an ecological community is, while also showing such a clear passion for natural history.

Finally, and perhaps not surprisingly, the biggest influence was probably Chesson (2000), “Mechanisms of maintenance of species diversity”. The value of the ideas in this paper is that they can be (and have been) applied to many modern ecological questions. In many ways, this felt like the most important advance in ecological theory in some time. It is also the sort of paper that you can read many times (and probably have to) and still find something new every time.

Of course, there are many other papers that could be on this list, and I’ve probably overlooked something. Also, this makes me miss having free time to read lots of papers :)

Monday, May 19, 2014

Guest Post: You teach science, but is your teaching scientific? Part 2: Flipping your class.

The second in a series of guest posts about using scientific teaching, active learning, and flipping the classroom by Sarah Seiter, a teaching fellow at the University of Colorado, Boulder. 

When universities first opened in the middle ages, lecturing was the most cutting-edge information technology available to a professor – books were copied by hand, so the fastest way to transfer information was to talk at your students (see the awesome TED talk below for a breakdown of how universities can and should change). Lecturing is still the default at most universities, and faculty spend hours developing their lecture skills. But studies have shown over and over again that lecturing is one of the worst possible ways to get students to learn. This means that our most accomplished scientists are working like crazy to master a method of teaching that is straight-up medieval.
Lecturing isn’t going away any time soon, but you can do a lot for your classes by incorporating active learning techniques, sometimes called “flipping” a class. The main feature of a flipped class is that students do the knowledge acquisition (the lecture-like part of the course) at home, and then do “homework” in the classroom, where the instructor and peers can help them apply the knowledge.

Flipped Classroom Fears:

Instructors often imagine a Lord of the Flies-style scenario when they start flipping their classrooms, but this isn’t usually the case. In fact, most students are so conditioned to sit quietly in class that it can be difficult to get them to talk about the material. However, there are a few things you can do to get students in the frame of mind for productive discussion.
Flipping your classroom will probably not result in chaos. Nobody is going to smash the conch shell and kill Piggy, but they might learn something.
  • Start small: If you’re just getting into transforming your class, it can be helpful to start with something small, like flipping once a week.
  • Get extra staff: Since group work is key to a flipped classroom, it helps to have extra staff to facilitate peer discussions. If you have graduate TAs, consider deputizing them to lead group exercises. If your university has an undergrad TA program, get as many undergrad TAs as you can and spend a day training them on how to ask good questions and facilitate conversations.
  • Explain to students why flipping works: Students will sometimes complain if they’re used to sitting passively in lecture, and they’re suddenly forced to do homework in class. But flipping builds skills that they’ll need in the workplace or graduate school, so reemphasizing what they’re gaining can help get them to buy in. 

Tools For Flipping: Case Studies

Case studies usually involve taking scientific data or ideas and applying them to a real-world situation (medical, law, and business schools have been using them for years). Case studies are all over the internet; the largest clearinghouse is the National Case Study Library, which is searchable by topic and age, includes teaching notes for each case, and can be a great place to get started (the American Museum of Natural History, the National Geographic Society, the Smithsonian, and the Understanding Evolution project at Berkeley also have great resources).

Picking Case Studies: Some case studies are purely hypothetical, but I tend to gravitate to those that use real data from published studies, like this one on the evolution of skin color, which uses studies from a lot of disciplines to build to a conclusion, or this one on conservation corridors and meta-populations. A lot of case studies open with a fictional story, but this approach is a little corny for me, and I’d rather focus on the real scientists and their questions (the narrative case studies can also get weird, like this paternity case study that could double as a great Maury episode). In general, just pick things that work for you and your students. 

DIY Case Studies: If you have papers that you already like to teach, then consider turning them into a case study. To do this, I usually write an intro briefly framing the problem or question. Then I give students actual graphs from the paper with follow up questions to help them process the information. It is OK if the study has a few confusing elements; while we often want a clear story to present to our classes, there’s great evidence that using “messy” data builds scientific skills. You may have to modify graphs, or remake them for extra readability. This could mean re-labeling axes to remove jargon (e.g. in a paper on insects, “instar” becomes “developmental stage”). It might mean dropping some treatments (you don’t need 10 nitrogen treatments to understand eutrophication). Usually I follow every graph with 2-3 questions that follow a basic format:
  • Question 1: Ask students to detect any trends or differences in the graph.
  • Question 2: Have students think of an explanation for the results.
  • Question 3: Ask students to apply their “findings” to the question or problem posed in the case study.
The above formula is just a starting place so add or alter questions to suit your needs. Sometimes I’ll use two or three graphs, and use the formula above. I usually end with a question that ties all the graphs together, like asking them to recommend a policy solution, or contrast the findings of different researchers.

Using Case Studies In Class: You can prepare students for a case either through a short lecture or through a homework assignment or reading quiz (this can be done using classroom management software like Blackboard or Sakai). Once students have the background, have them break into groups of two to three and work through the questions. It can be helpful to stop every few minutes to go through the answers (some case studies build on earlier questions, so early feedback is key). A great feature of case studies is that they can take nearly an entire class period, so you can go an entire day without having to lecture.

Clicker Questions 

The other main tool for flipping your classroom is clicker questions. Clickers are basically a real-time poll of your students, so you can check how they are learning. Most instructors use them for participation points, rather than grading them for correctness (this encourages students to jump in and grapple with material, and not worry about making mistakes). Your university might have a set of clickers that you can borrow, or you can have students use laptops, tablets, and smartphones in place of clickers with apps like Poll Everywhere, GoSoapbox, Pinnion, or Socrative (these have different features and price points, so see what works for you). For a more comprehensive list of clicker tools, see this article from a team at Princeton.

Writing Good Clicker Questions: Good clicker questions should encourage discussion and force students to apply their knowledge, not just test what they remember. This can mean using information to make recommendations, doing a calculation, or making predictions about the outcome of an experiment. Standard clickers only allow for multiple-choice questions, but other web-based tools will allow your students to write free responses, draw graphs, or give other types of answers. There are lots of great web resources on how to design clicker questions (see the resources below). The slide show below shows some clicker questions we used in our flipped evolution class at CU Boulder.



Using Clickers In Class: Once you have your clicker questions written, you’ll need to deploy them in class. Below is a basic blueprint for how to run a clicker question:

1. Tell students to break into groups and get ready to discuss a clicker question.
2. Give students about a minute to discuss the question, and open whatever clicker software you’re using. You’ll usually hear a 30-second surge in talking that dies down after about a minute. At that point, give students a warning and then close out the clicker question.
3. Now you can show the results of the clicker poll and start to unpack the question. If your questions are challenging, you should be getting a significant number of wrong answers; seeing a wide range of answers means you’re doing it right. Usually if at least 10% of your students get a question wrong, it is worth discussing that question in depth.
4. Make sure students can articulate why right answers are right and why wrong answers are wrong. You can call on groups to get them to explain their answers (this is nicer than cold-calling individual students). If nobody wants to talk about wrong answers, say something like “why might someone think that B is a tempting answer?” so that nobody has to admit to being wrong in front of their peers.
5. It can be helpful to follow up with another question asking students to apply the material in a different way.

In conclusion, flipping your classroom can be done pretty cheaply, and without much more work than lecturing. This post is really just a starting place, and there are a ton of great resources on the web to take you further. I’ve compiled just a few of them below. Good luck and happy flipping!

By Sarah Seiter


Resources

Videos on Flipped Classrooms:
https://www.youtube.com/watch?v=EMhJcwvmamY

Resources for Clicker Qs:
Clicker Question Guides from University of Colorado Boulder
http://www.colorado.edu/sei/documents/clickeruse_guide0108.pdf
http://www.slideshare.net/stephaniechasteen/writing-great-clicker-questions
Vanderbilt:
http://cft.vanderbilt.edu/guides-sub-pages/clickers/


Wednesday, May 14, 2014

Addressing the mental health problem in academia


The Guardian UK is publishing an insightful series this May called “Mental health: a university crisis”, as part of Mental Health Month. Although mental health issues for undergraduates are the focus of a variety of different services and programs at most universities, the Guardian includes a unique focus on the issues of academics—graduate students, postdocs, professors and other researchers—for whom it seems that mental health issues are disproportionately common.

The whole series is an important read, and comes at the issue from many different perspectives. A recent survey of university employees unsurprisingly found that academics have higher stress levels than other university employees, which respondents attribute to heavy workloads (!), lack of support (from the department or otherwise), and, particularly for early-career researchers, feelings of isolation. One particularly insightful piece (with the tagline "I drink too much and haven't had a good night's sleep since last year. Why? Research") argues that academics face particularly unique problems leading to mental health issues. There are typical issues that many high-stress jobs include—the ever-regenerating to-do list, and the many teaching, research, and service tasks that academics need to accomplish. But academia also seems to attract a high proportion of intense, perfectionistic, passionate people willing to go the extra mile (and encouraged to, given the difficult job market). Worse, research is a creative, even emotional activity – there are highs and lows and periods of intense work that come at the expense of everything else. Ideas are personal, and so the separation between person and research is very slim. The result is often a lack of work-life balance that might produce academic success, but strains mental health. Mental health issues further have dire implications for most research activities, since the symptoms – loss of motivation, concentration, and clarity of thought – affect crucial academic skills.

If such issues are so common in academia (there’s a form of anxiety ubiquitous among graduate students, the imposter syndrome; other common illnesses include anxiety, depression, and panic attacks), why are most of the lecturers and postdocs writing about their mental health experiences for the Guardian choosing to be anonymous? It still seems common to simply downplay or hide problems with stress and mental illness (in the linked study, 61% of academics with mental health problems say their colleagues are unaware of their problems). This may be a reflection of the fact that academia is focused on individual performance and individual reputation. Colleagues choose to work with you, to invite you to their department, to hire you, based in no small part on your reputation. Admitting to having suffered from mental illness can feel like adding an obstacle to the already difficult academic landscape. For many, admitting to struggling can feel like failure, particularly since everyone around them seems to be managing the harsh conditions just fine (whether or not that is really true). Academic workdays have less structure than most, which can be isolating. Academics can keep unpredictable hours, disappear for days, send emails at 2 am, sleep at work, and be unkempt and exhausted without much comment; as a result, it can be difficult to identify those colleagues who are at risk (compared to those who are simply unconventional :-) ).

It will be interesting to see where the Guardian series goes. Mental health issues in academia are in many ways subject to the same problems that have affected women and minorities looking for inclusion in academia – subtle comments or stigma, and a lack of practical support. I remember once hearing a department chair, disgusted with a co-author who had failed to respond to emails, dismiss them as “certifiably crazy; in a mental hospital”. No doubt that was exactly the response the co-author was hoping to avoid. More subtle but more common is lip service to work-life balance that is counterbalanced by proud references to how hard one or one’s lab works. There is nothing wrong with working hard, but maybe we should temper our praise of sleeping in the lab and coming in every holiday and weekend. It happens and it may be necessary, but is that the badge of honour we really want to claim? It would be sad if the nature of academia, its competitiveness and atmosphere of masochism (“my students are in the lab on Christmas”), limited progress.

Friday, May 9, 2014

Scaling the publication obstacle: the graduate student’s Achilles’ heel

There is no doubt that graduate school can be extremely stressful and overwhelming. Increasingly, evidence points to these grad school stressors contributing to mental health problems (articles here and here). Many aspects of grad school contribute to self-doubt and unrelenting stress: is there a job for me after? am I as smart as everyone else? is what I’m doing even interesting?

But what seems to really exacerbate grad school stress is the prospect of trying to publish*. The importance of publishing can’t be dismissed. To be a scientist, you need to publish. There are differing opinions about what makes a scientist (e.g., is it knowledge, job title, etc.), but it is clear that if you are not publishing, then you are not contributing to science. This is what grad students hear, and it is easy to see how statements like this do not help with the pressure of grad school.

There are other aspects of the grad school experience that are important, like teaching, taking courses, outreach activities, and serving on university committees or in leadership positions. These other aspects can be rewarding because they expand the grad school experience. There is also the sense that they are under your control and that the rewards are directly influenced by your efforts. Here, then, publishing is different. The publication process does not feel like it is under your control, and the rewards are not necessarily commensurate with your efforts.

Cartoon by Nick Kim, Massey University, Wellington, accessed here

Given the publishing necessity, how then can grad students approach it with as little trauma as possible? The publication process is experienced differently by different people: some seem able to shrug off negative experiences, while others internalize them and let those experiences gnaw away at their confidence. There is no magic solution to making the publishing experience better, but here are some suggestions and reassurances.

1) It will never be perfect! I find myself often telling students to just submit already. There is a tendency to hold on to a manuscript and read and re-read it. Part of this is the anxiety of actually submitting it, and procrastination is a result of anxiety. But often students say that it doesn’t feel ready, or that they are unhappy with part of the discussion, or that it is not yet perfect. Don’t ever convince yourself that you will make it perfect – you are setting yourself up for a major disappointment. Referees ALWAYS criticize, even when they say a paper is good. There is always room for improvement, and you should view the review process as part of what improves papers. If you think of it this way, then criticisms are less personal (i.e., why didn’t they think it was perfect too?) and feel more constructive, and you are at peace with submitting something that is less than perfect.

2) Let's dwell on part of the first point: reviewers ALWAYS criticize. It is part of their job. It is not personal. Remember, the reviewers are putting time and effort into your paper, and their comments should be used to make the product better. Reviewers are very honest and will tell you exactly what could be done to improve a manuscript. They are not attacking you personally, but rather assessing the manuscript. 

3) Building on point 2, the reviewers may not always be correct or provide the best advice. It is OK to state why you disagree with them. You should always appreciate their efforts (unless they are unprofessional), but you don’t have to always agree with them.

4) Not every paper is a literary masterpiece. Effective scientific communication is sometimes best served by very concise and precise papers. If you have an uncomplicated, relatively simple experiment, don’t make it more complex by writing 20 pages. Notes, Brevia, and Forum papers are all legitimate contributions.

5) Not every paper should be a Science or Nature paper (or whatever the top journals are in a given subdiscipline). Confirmatory or localized studies are helpful and necessary. Large meta-analyses and reviews are not possible without published evidence. Students should try to think about how their work is novel or broadly general (this is important for selling yourself later on), but it is OK to acknowledge that your paper is limited in scope or context, and to just send it to the appropriate journal. It takes practice to fit papers to the best journals, so ask colleagues where they would send it. This journal matching can save time and trauma.

6) And here is the important one: rejection is OK, natural, and normal. We ALL get rejections. Your rejection is not abnormal, you don’t suck more than others, and your experience has been shared by all the best scientists. When your paper is reviewed and then rejected, there is usually helpful information that should be useful in revising your work to submit elsewhere. Many journals are inundated with papers and are looking for reasons to reject. At the journal I edit, we accept only about 18% of submissions, and so it doesn’t take much to reject a paper. This is unfortunate, but currently unavoidable (though with the changing publishing landscape, this norm may change). Rejection is hard, but don’t take it personally, and feel free to express your rage to your friends.



Publishing is a tricky, but necessary, business for scientists. When you are having problems with publishing, don’t internalize it. Instead, complain about it to your friends and colleagues. They will undoubtedly have very similar experiences. Students can be hesitant to share rejections with other students because they feel inferior, but sharing can be therapeutic. When I was a postdoc at NCEAS, the postdocs would share quotes from their worst rejection letters. What would normally have been a difficult, confidence-bashing experience became a supportive, reassuring one.

Publishing is necessary, but it is also very stressful, potentially adding to low confidence and the feeling that grad school is overwhelming. I hope that the pointers above can help make the experience less onerous. When you do get that acceptance letter telling you that your paper will be published, hang on to that. Celebrate, and know that you have been rewarded for your hard work, but move on from the rejections.


*I should state that my perspective is from science, and my views on publishing are very much informed by the publishing culture in science. I have no way of knowing whether the pressures in the humanities or economics are the same as those for science students.

Wednesday, April 23, 2014

Guest Post: You teach science, but is your teaching scientific? (Part I)

The first in a series of guest posts about using scientific teaching, active learning, and flipping the classroom by Sarah Seiter, a teaching fellow at the University of Colorado, Boulder. 

For a faculty member, teaching can sometimes seem like a chore – your lectures compete with smartphones and laptops, and some students see themselves as education “consumers” and haggle over grades. STEM (science, technology, engineering, and math) faculty have a particularly tough gig – students need substantial background to succeed in these courses, and often arrive in the classroom unprepared. Yet the current classroom climate doesn’t seem to be working for students either: about half of STEM college majors ultimately switch to a non-scientific field. It would be easy to frame the problem as one of culture – and we do live in a society that doesn’t always value science or education. However, reforming STEM education might not take social change; rather, it could be accomplished using our own scientific training. In the past few years a movement called “scientific teaching” has emerged, which uses quantitative research skills to make the classroom experience better for instructors as well as students.

So how can you use your research skills to boost your teaching? First, you can use teaching techniques that have been empirically tested and rigorously studied, especially a set of techniques called “active learning”. Second, you can collect data on yourself and your students to gauge your progress and adjust your teaching as needed, a process called “formative assessment”. While this can seem daunting, it helps to remember that as a researcher you’re uniquely equipped to overhaul your teaching, using the skills you already rely on in the lab and the field. Like a lot of paradigm shifts in science, using data to guide your teaching seems pretty obvious after the fact, but it can be revolutionary for you and your students.

What is Active Learning:

There are a lot of definitions of active learning floating around, but in short, active learning techniques force students to engage with the material while it is being taught. More importantly, students practice the material and make mistakes while they are surrounded by a community of peers and instructors who can help. There are a lot of ways to bring active learning strategies to your classroom, such as clicker response systems (handheld devices that allow students to take short quizzes throughout the lecture). Case studies are another tool: students read about scientific problems and then apply the information to real-world problems (medical and law schools have been using them for years). I’ll get into some more examples of these techniques in post II; there are lots of free and awesome resources that will allow you to try active learning techniques in your class with minimal investment.

Formative Assessment:

The other way data can help you overhaul your class is through formative assessment: a series of small, frequent, low-stakes assessments of student learning. A lot of college courses use what’s called summative assessment – one or two major exams that test a semester’s worth of material, with a few labs or a term paper for balance. If your goal is to see whether your students learned anything over a semester, this is probably sufficient. It is also fine if you’re trying to weed out underperforming students from your major (but seriously, don’t do that). But if you’re interested in coaching students towards mastery of the subject matter, it probably isn’t enough to just tell them how much they learned after half the class is over. If you think about learning goals the way we think of fitness goals, this is like asking students to qualify for the Boston marathon without giving them any times for their training runs.

Formative assessment can be done in many ways: weekly quizzes or taking data with classroom clicker systems. While a lot of formative assessment research focuses on measuring student progress, instructors have lots to gain by measuring their own pedagogical skills. There are a lot of tools out there to measure improvement in teaching skills (K-12 teachers have been getting formatively assessed for years), but even setting simple goals for yourself (“make at least 5 minutes for student questions”) and monitoring your progress can be really helpful. Post III will talk about how to do (relatively) painless formative assessment in your class.

How does this work and who does it work for:

Scientific teaching is revolutionary because it works for everyone, faculty and students alike. However, it has particularly useful benefits for some types of instructors and students.

New Faculty: inexperienced faculty can achieve results as good as or better than those of experienced faculty by using evidence-based teaching techniques. In a study at the University of Colorado, physics students taught by a graduate TA using scientific teaching outperformed those taught by an experienced (and well-loved) professor using a standard lecture style (you can read the study here). Faculty who are not native English speakers, or who are simply shy, can get a lot of leverage from scientific teaching techniques, because doing in-class activities relieves the pressure to deliver perfect lectures.
Test scores for a lecture-taught physics section and a section taught using active learning techniques.

Seasoned Faculty: For faculty who already have their teaching style established, scientific teaching can spice up lectures that have become rote, or help you address concepts that you see students struggle with year after year. Even if you feel like you have your lectures completely dialed in, consider whether you’re using the most cutting-edge techniques in your lab, and whether your classroom deserves the same treatment.

Students also stand to gain from scientific teaching, and some groups of students are particularly poised to benefit from it:
Students who don’t plan to go into science: Even in majors classes, most of the students we teach won’t go on to become scientists. But skills like analyzing data and writing convincing evidence-based arguments are useful in almost any field. Active learning trains students to be smart consumers of information, and formative assessment teaches students to monitor their own learning – two skills we could stand to see more of in any career.

Students Who Love Science: Active learning can give star students a leg up on the skills they’ll need to succeed as academics, for all the reasons listed above. Occasionally really bright students will balk at active learning, because having to wrestle with complicated data makes them feel stupid. While it can feel awful to watch your smartest students struggle, it is important to remember that real scientists have to confront confusing data every day. For students who want research careers, learning to persevere through messy and inconclusive results is critical.

Students who struggle with science: Active learning can be a great leveler for students who come from disadvantaged backgrounds. A University of Washington study showed that active learning and student peer tutoring could eliminate achievement gaps for minority students. If you got into academia partly because you wanted to make a difference in educating young people, here is one empirically proven way to do that.

Are there downsides?

Like anything, active learning involves tradeoffs. While the overwhelming evidence suggests that active learning is the best way to train new scientists (the White House even published a report calling for more of it!), there are sometimes roadblocks to scientific teaching.

Content Isn’t King Anymore: Working with data or applying scientific research to policy problems takes more class time, so instructors can cover fewer examples. In active learning, students are developing scientific skills like experimental design or technical writing, but after spending an hour hammering out an experiment to test the evolution of virulence, they often feel like they’ve only learned about “one stupid disease”. However, there is lots of evidence that covering topics in depth is more beneficial than doing a survey of many topics. For example, high schoolers who studied a single subject in depth for more than a month were more likely to declare a science major in college than students who covered more topics.

Demands on Instructor Time: I actually haven’t found that active learning takes more time to prepare – case studies and clickers take up a decent amount of class time, so I spend less time prepping and rehearsing lectures. However, if you already have a slide deck you’ve been using for years, developing clicker questions and class exercises requires an upfront investment of time. Formative assessment can also take more time, although online quiz tools and peer grading can help take some of the pressure off instructors.

If you want to learn more about the theory behind scientific teaching there are a lot of great resources on the subject:

These podcasts are a great place to start:
http://americanradioworks.publicradio.org/features/tomorrows-college/lectures/

http://www.slate.com/articles/podcasts/education/2013/12/schooled_podcast_the_flipped_classroom.html

This book is a classic in the field:
http://www.amazon.com/Scientific-Teaching-Jo-Handelsman/dp/1429201886

Thursday, April 3, 2014

Has science lost touch with natural history, and other links

A few interesting links, especially about the dangers that arise when one aspect of science, data analysis, or knowledge receives inordinate focus.

A new article in BioScience repeats the fear that natural history is losing its place in science, and that natural history's contributions to science have been devalued. "Natural history's place in science and society" makes some good points about the many contributions that natural history has made to science, and it is fairly clear that natural history is given less and less value within academia. As always, though, the issue is finding ways to value useful natural history contributions (museum and herbarium collections, GenBank contributions, expeditions, citizen science) in a time of limited funds and (over)emphasis on the publication treadmill. Nature offers its take here, as well.

An interesting opinion piece on how the obsession with quantification and statistics can go too far, particularly in the absence of careful interpretation. "Can empiricism go too far?"

And similarly, does Big Data have big problems? Though focused on applications for the social sciences, there are some interesting points about the space between "social scientists who aren’t computationally talented and computer scientists who aren’t social-scientifically talented", and again, the need for careful interpretation. "Big data, big problems?"

Finally, a fascinating suggestion about how communication styles vary globally. Given the global academic society we exist in, it seems like this could come in handy. The Canadian one seems pretty accurate, anyways. "These Diagrams Reveal How To Negotiate With People Around The World." 

Tuesday, February 18, 2014

P-values, the statistic that we love to hate

P-values are an integral part of most scientific analyses, papers, and journals, and yet they come with a hefty list of concerns and criticisms from frequentists and Bayesians alike. An editorial in Nature (by Regina Nuzzo) last week provides a good reminder of some of the more concerning issues with the p-value. In particular, she explores how the obsession with "significance" creates issues with reproducibility and significant but biologically meaningless results.

Ronald Fisher, inventor of the p-value, never intended it to be used as a definitive test of “importance” (however you interpret that word). Instead, it was an informal barometer of whether a test hypothesis was worthy of continued interest and testing. Today, though, p-values are often used as the final word on whether a relationship is meaningful or important, on whether the test or experimental hypothesis has any merit, even on whether the data are publishable. For example, in ecology, significance values from a regression or species distribution model are often presented as the results. 

This small but troubling shift away from the original purpose of p-values is tied to concerns about false alarms and the replicability of results. One recent suggestion for increasing replicability is to make p-values more stringent – to require that they be less than 0.005. But the point the author makes is that although p-values are typically interpreted as “the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true”, this doesn't actually mean that a p-value of 0.01 in one study is exactly consistent with a p-value of 0.01 found in another study. P-values are not consistent or comparable across studies because the likelihood that there was a real (experimental) effect to start with alters the likelihood that a low p-value is just a false alarm (figure). The more unlikely the test hypothesis, the more likely a p-value of 0.05 is a false alarm. Data mining in particular will be (unwittingly) sensitive to this kind of problem. Of course, one is unlikely to know the odds of the test hypothesis, especially a priori, making it even more difficult to correctly think about and use p-values. 

from: http://www.nature.com/news/scientific-method-statistical-errors-1.14700#/b5
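To see why the prior odds matter, here is a minimal sketch (my own illustration, not from the editorial) applying Bayes' rule; the 80% statistical power is an assumed value chosen only for the example.

```python
# A toy Bayes calculation: how often is a "significant" result (p < alpha)
# a false alarm, given the prior probability that the tested effect is real?
# The power value is an assumption for illustration.

def false_alarm_rate(prior_real, alpha=0.05, power=0.8):
    """P(effect is NOT real | result is significant)."""
    true_pos = prior_real * power           # real effects that reach significance
    false_pos = (1 - prior_real) * alpha    # null effects that reach significance
    return false_pos / (true_pos + false_pos)

for prior in (0.5, 0.2, 0.05):  # toss-up, long shot, very unlikely hypothesis
    print(f"P(effect real) = {prior:.2f} -> "
          f"P(false alarm | p < 0.05) = {false_alarm_rate(prior):.2f}")
# The less plausible the hypothesis, the more likely a p < 0.05 is a false alarm.
```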
The other oft-repeated criticism of p-values is that a highly significant p-value may still be associated with a tiny (and thus possibly meaningless) effect size. The obsession with p-values is particularly strange, then, given that the question "how large is the effect?" should be more important than just answering “is it significant?". Ignoring effect sizes leads to a trend of studies showing highly significant results with arguably meaningless effect sizes. This creates the odd situation that publishing well requires high-profile, novel, and strong results – but one of the major tools for identifying these results is flawed. The editorial lists a few suggestions for moving away from the p-value – including having journals require that effect sizes and confidence intervals be included in published papers, requiring statements to the effect of “We report how we determined our sample size, all data exclusions (if any), all manipulations and all measures in the study” in order to limit data mining, or of course moving to a Bayesian framework, where p-values are near heresy. The best advice, though, is quoted from statistician Steven Goodman: “The numbers are where the scientific discussion should start, not end.”
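To make the effect-size point concrete, here is a toy simulation (my own, with made-up numbers, assuming NumPy and SciPy are available): with a large enough sample, an effect far too small to matter biologically still comes out “highly significant”.

```python
# Toy demonstration: a negligible effect becomes "highly significant"
# once the sample is large enough. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000                                        # very large sample per group
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)  # true effect: 0.02 SD

t_stat, p_value = stats.ttest_ind(treated, control)
effect_sd = treated.mean() - control.mean()        # effect size in SD units

print(f"p = {p_value:.2g}, effect size ~ {effect_sd:.3f} SD")
# Expect p far below 0.05, yet an effect of ~0.02 SD is arguably meaningless.
```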

Monday, February 10, 2014

Ecological progress, what are we doing right?

A post from Charles Krebs' blog called "Ten limitations on progress in ecology" popped up a number of times on social media last week. Krebs is an established population ecologist who has been working in the field for a long time, and he suggests some important problems leading to a lack of progress in ecology. These concerns range from a lack of jobs and funding for ecologists, to the fracturing of ecology into poorly integrated subfields. Krebs' post continues an ongoing conversation about the limitations and problems of ecology, one that has been underway for decades, and I agree with many of the points being made. But it reminded me of something I have been thinking about for a while: it seems much rarer to see ecology's successes listed. For many ecologists, it is probably easier to come up with the problems and weaknesses, but I think that's more of a cognitive bias than a sign that ecology is inescapably flawed. And that's unfortunate: recognizing our successes and advances also helps us improve ecology. So what is there to praise about ecology, and what successes can we build on?

Despite Krebs' concerns about the lack of jobs for ecologists, it is worth celebrating how much ecology has grown in numbers and recognition as a discipline. The first ESA annual meeting in 1914 had 307 attendees; in recent years attendance has been somewhere between 3000 and 4000 ecologists. Ecology is also increasingly diverse. Ecology and Evolutionary Biology departments are now common in big universities, sometimes replacing Botany and/or Zoology programs. On a more general level, the idea of “ecology” is increasingly recognized by the public. Popular press coverage of issues such as biological invasions, honeybee colony collapses, wolves in Yellowstone, and climate change has at least made the work of ecologists slightly more apparent.

Long-term ecological research is probably more common and more feasible now than it has ever been. There are long-term fragmentation, biodiversity, and ecosystem function studies, grants directed at LTER sites, and a dedicated institute (the National Ecological Observatory Network (NEON)) funded by the NSF for long-term ecological data collection. (Of course, not all long-term research sites have had an easy go of things – see the Experimental Lakes Area in Canada.)

Another really positive development is that academic publishing is becoming more inclusive – not only are there more reputable open access publishing options for ecologists, but the culture is also changing to one where data is available online for broad access, rather than privately controlled. Top journals are reinforcing this trend by requiring that data be archived alongside publications.

Multi-disciplinary collaboration is more common than ever, both because ecology naturally overlaps with geochemistry, mathematics, physics, physiology, and others, and also because funding agencies are rewarding promising collaborations. For example, I recently saw a talk where dispersal was considered in the context of wind patterns based on meteorological models. It felt like this sort of mechanistic approach provided a much fuller understanding of dispersal than the usual kernel-based model.
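
For readers unfamiliar with the distinction, here is a toy sketch (entirely my own, with invented parameter values, not the model from the talk) contrasting a fitted dispersal kernel with a crude mechanistic alternative:

```python
# A toy contrast (my illustration, not the talk's model; all parameter values
# are invented) between a fitted dispersal kernel and a crude mechanistic,
# wind-driven alternative.
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # simulated seeds

# Kernel-based: resample distances from a fitted exponential kernel.
# The kernel summarizes past observations but encodes no mechanism.
mean_distance = 50.0  # metres, assumed fit
kernel_dist = rng.exponential(mean_distance, n)

# Mechanistic: distance emerges from release height, fall speed, and wind,
# so it can respond to changed wind conditions (a fitted kernel cannot).
release_height = 10.0                   # metres
fall_speed = 0.5                        # metres per second
wind_speed = rng.weibull(2.0, n) * 6.0  # horizontal wind, m/s (assumed)
mech_dist = wind_speed * release_height / fall_speed

print(f"kernel median: {np.median(kernel_dist):.0f} m, "
      f"mechanistic median: {np.median(mech_dist):.0f} m")
```

The mechanistic version responds to changed winds, release heights, or seed traits; a fitted kernel can only restate the conditions it was fitted under, which is roughly why the mechanistic approach felt so much fuller.
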

Further, though subdisciplines of ecology have at times lost their connection with the core knowledge of ecology, some subfields have taken paths worth emulating, integrating multiple areas of knowledge while still making novel contributions to ecology in general. For example, disease ecology is multidisciplinary, integrating ecology, fieldwork, epidemiological models, and medicine with reasonable success.

Finally, more than ever, the complexity of ecology is being matched by the available methods: the math, the models, the technology, and the computing resources are now sufficient. If you look at papers from ecology's earliest years, statistics and models were restricted to simple regressions, ANOVAs, and differential equations that could be solved by hand. Though there is uncertainty associated with even the most complex model, our ability to model ecological processes is better than ever. Technology allows us to observe changes in alleles, to reconstruct phylogenetic trees, and to count species too small to even see. Used carefully and with understanding, these tools let us make, and continue making, huge advances.

Maybe there are other (better) positive advances that I’ve overlooked, but it seems that – despite claims to the contrary – there are many reasons to think that ecology is a growing, thriving discipline. Not perfect, but successfully growing with the technological, political, and environmental realities.
Ecology may be successfully growing, but it's true that the timing is rough...

Monday, January 27, 2014

Gender diversity begets gender diversity for invited conference speakers


There are numerous arguments for why the academic pipeline leaks - i.e. why women are increasingly less represented at higher academic ranks. Among others, the suggestion has been made that simple subconscious biases shape the image that accompanies the idea of "a full professor" or "seminar speaker". A useful new paper by Arturo Casadevall and Jo Handelsman provides some support for this idea. The authors identified invited talks at academic conferences as an example of important academic career events, which provide multiple benefits and external recognition of a researcher’s work. However, a number of studies have shown that women are less represented as invited speakers, both proportionally and in absolute numbers. To explore this further, the authors asked whether the presence or absence of women as conveners for American Society for Microbiology (ASM) meetings affects the number of female invited speakers. Conveners for ASM meetings are involved in the selection of speakers, either directly or in consultation with program committee members. The two annual meetings run by the ASM involve 4000-6000 attendees, of whom women constitute approximately 40% of members (37% when only full members are considered). Despite this nearly 40% female membership, for sessions where all conveners were male, the percentage of invited speakers who were female was consistently near 25%. While explanations for this sort of underrepresentation of women in academia are often structural, the authors show that in this case simple changes might move the statistic. If one or more women were conveners for a session, the proportion of female invited speakers in that session rose to around 40%, in line with women’s general representation in the ASM. The authors don’t offer precise explanations for these striking results, but note that women conveners may be more likely to be aware of gender balance and may make a conscious effort to invite female speakers. Implicit biases, our “search images”, may unconsciously favour males, but these results are positive in suggesting that even small changes and greater awareness can make a big difference.

 
The proportion of invited speakers in a session who were female, 2011-2013, for the two annual meetings (GM & ICAAC) organized by the ASM. Black bars: no female conveners; grey bars: at least one female convener.
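
To get a feel for how unlikely a 25% vs. 40% gap is to arise by chance, here is a quick two-proportion z-test sketch; the counts below are hypothetical stand-ins, not the paper's actual data:

```python
# Hypothetical counts, for illustration only (not from Casadevall &
# Handelsman): testing whether a 25% vs. 40% share of female speakers
# could plausibly arise by chance.
from statsmodels.stats.proportion import proportions_ztest

female_speakers = [50, 80]   # all-male convener sessions vs. mixed sessions
total_speakers = [200, 200]  # assumed number of invited speakers per group
z, p = proportions_ztest(female_speakers, total_speakers)
print(f"z = {z:.2f}, p = {p:.4f}")  # for these made-up counts, p ~ 0.001
```
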

Monday, January 13, 2014

The generosity of academics

A cool tumblr (http://academickindness.tumblr.com/) gives credit to the often under-acknowledged kindness of academics. It’s a topic I sometimes think about, because the culture of academia (at least in ecology) has always seemed to me to be driven by generous interactions.

Most of us have a growing lifetime acknowledgement list starting at the earliest point in our careers. After four years in my PhD, my thesis’ acknowledgements included other graduate students and lab mates, post-docs, undergrads, faculty at several institutions, and my supervisor. Almost everyone on this list expected nothing in exchange for their time and knowledge. Of course there are exceptions: people who refuse to share their data, rarely interact with strangers, have little time for grad students, or are difficult to interact with. But they are rare. Instead, generously one-sided interactions occur all the time. Where else could you email a stranger, hoping they will meet with you at a conference to talk about your research? Or have a distant lab mail you cultures to replace ones that died? Or email the creator of an R package because you can’t figure out where your data is going wrong, and get a detailed reply? These are not unusual interactions in academia.

The lower you are on the academic ladder, the more you benefit from (maybe rely on) the kindness of busy people – committee members, collaborators, lab managers. Busy, successful faculty members, for example, took time to meet with me many times, kindly and patiently answering my questions. I can think of two reasons for this atmosphere. First, most ecologists are simply passionate about their science. They like to think about it, talk about it, and exchange ideas with other people who are similarly inclined. The typical visit of an invited speaker includes hours and hours of meetings and meals with students, and most seem to relish this. Like most believers, they have a little of the zeal of the converted. Second, many of the structures of academic science rely heavily on goodwill and generosity. For example, reviews of journal submissions rely entirely on a system of volunteerism. That would be untenable for most businesses, but it has survived this far in academic publishing. Grad student committees, although they have some value for tenure applications, are mostly dependent on the golden rule (I’ll be on your student’s committee if you’ll be on mine). And then there are supervisor/supervisee relationships. These obviously vary between personalities, universities, and countries, but good supervisors invest far more time and energy than the bare minimum necessary to get publications and highly qualified personnel out of the relationship. That we rely so heavily on these interactions becomes most apparent when they fail: when you wait months on a paper because there are no reviewers, or when your supervisor disappears, progress stops.

Of course, this sort of system only lasts if everyone feels like they gain some benefit, and everyone feels like the weight on them is fair. The ongoing problems with the review system suggest that this isn’t always true. Still, the posts on academickindness.tumblr.com are a reminder that altruism is still alive and well in academia.

Thursday, December 19, 2013

More links for 2013: the 'new' conservation, the IPCC report in haiku, and more.

Conservation science has been on the receiving end of some harsh criticisms in the last couple of years, particularly from the current chief scientist of the Nature Conservancy, Peter Kareiva (e.g. 1). He and his colleagues have suggested that conservation science needs to be redefined and refocused on human-centred benefits and values if it is to be successful. Some pushback, in the form of a TREE article from Dan Doak et al., suggests that reframing conservation in terms of its human benefits is not the best or only solution.

In a similar vein, another new paper in TREE asks what issues the conservation community should be addressing. A short-list of 15 highly specific problems that should be addressed soon includes the exploitation of Antarctica, the rapid geographic expansion of macroalgal cultivation for biofuels, and the loss of rhinos and elephants.

Even if the official IPCC report proves too long or dry for the average person to read before the end of the year, there is also a haiku version. The pretty watercolour illustrations don't make the report any more cheerful, unfortunately.

Finally, a new journal, "Elementa: Science of the Anthropocene", seems positioned to focus precisely on these kinds of issues. According to their website:

"Elementa is a new, open-access, scientific journal founded by BioOne, Dartmouth, Georgia Tech, the University of Colorado Boulder, the University of Michigan, and the University of Washington.
Elementa represents a comprehensive approach to the challenges presented by this era of accelerated human impact, embracing the concept that basic knowledge can foster sustainable solutions for society....Elementa publishes original research reporting on new knowledge of the Earth’s physical, chemical, and biological systems; interactions between human and natural systems; and steps that can be taken to mitigate and adapt to global change."


It will be interesting to see how it develops.

Thursday, December 5, 2013

What can the future of ecology learn from the past?

Ecology has been under pressure to mature and progress as a discipline several times in its short life, always in response to looming environmental threats and the perception that ecological knowledge could be of great value. This happened notably in the 1960s, when calls for ecology to be more applicable arose in relation to the publication of Silent Spring and fears about nuclear power and the Manhattan Project. Voices in academia, government, and the public called for ecology to become a “Big Science”, and to focus on bigger scales (the ecosystem) and bigger questions. And yet, “[Silent Spring] brought ecology as a word and concept to the public…A study committee, prodded by the publication of the book, reported to the ESA that their science was not ready to take on the responsibility being given to it.”

Arguably ecology has grown a lot since then: there have been advances in statistical approaches, in spatial and temporal considerations, in the mechanistic understanding of multiple processes, in the number and type of systems and species studied, and in the applications being considered. But it is once again facing a call (one that frankly has been ongoing for a number of years) to quickly progress as a science. The Anthropocene has proven an age of human-mediated environmental change, in which threats to species and ecosystems from warming, habitat loss and fragmentation, extinction, and invasion abound. Never has (applied) ecology appeared more relevant as a discipline to the general public and government. This is reflected in the increasing inclusion of buzzwords like “climate change”, “restoration”, “ecosystem services”, “biodiversity hotspot”, or “invasion” as keys to successful self-justification. Also similar to the 1960s is the feeling that ecology is not ready or able to meet the demand. Worse, the time ecology has to respond is more limited than ever.

This first point--that ecology isn't ready--is repeated in the must-read Nature editorial by Georgina Mace, the outgoing president of the British Ecological Society. The globe is in trouble from climate change, disease, overpopulation, and the loss of habitat and biodiversity, and Mace argues that ecology in its current form is incapable of responding to the need. She suggests that unless ecology evolves, it will fail as a discipline. Despite the growth of ecology that followed the 1960s, it is still a 'small' discipline: collaborations are mostly intra-disciplinary, data has been privately controlled, and the tendency remains to specialize on a particular system or organism of interest. However, this 'small' approach provides very little insight into the big problems of today - particularly understanding and predicting the effects of global change on ecosystems and multispecies assemblages. To Mace, the solution, the undeniable necessity, is for ecology to get bigger. In particular, collaborations need to be broader and larger, with data sharing and availability (“big data”) the default. Ecological models and experiments/observations should be scaled up so that we can understand ecosystem effects and identify general trends across species or systems. In this new 'big' ecology, “[g]oals would be shaped by scientists, policy-makers and users of the resulting science, rather than by recent publishing trends”. Making research more interdisciplinary and including end-product users would allow the most important questions to receive the attention they deserve.

The difficulty with the looming environmental crises and the pressure on ecology to grow is that the important decisions have to be made rapidly, and perhaps without complete information. Often scientific progress is afforded the time for slow progression and self-correction. After all, change is costly and risky: it requires reinvesting effort and funding and may or may not pay off, so science (including ecology) is often conservative. For example, a conservative mind would note that Mace’s suggestions are not without uncertainty and risk. Big data, for example, is acknowledged to have its strengths and its weaknesses; it may or may not be the cure-all it is touted to be. Regardless of the amount of data, good questions still need to be asked, and data, no matter how high its quality, may not be appropriate for some questions. Context is often so important in ecology that attempts to combine data for meta-analysis may be questionable. Long-running arguments within ecology reflect the fear that making ecological research more useful for applications and interdisciplinary questions may come at the expense of basic research and theory. It seems then that ecology is in an even worse scenario than Mace suggests, since not only must ecology change in order to respond to need, but it must also predict, with incomplete information, which future path will be most effective.

So ecological science is at an important juncture, with choices to make about future directions, limits on the information with which to make those choices, little time to make them, and much pressure to make them correctly. Perhaps we can take some comfort from the fact that ecology has been here before, though. There are lessons we can draw from ecology’s last identity crisis, both the successes and the failures. The last round resulted in ecology gaining legitimacy as a science and being integrated into policy and governance (the EPA, environmental assessments, etc.). It appears, particularly in some countries, that ecology is more difficult to sell to policy-makers and government today, but at the very least ecology has established a toehold it can take advantage of. Ecology also tried to focus on bigger scales in the 1960s--the concept of the 'ecosystem' resulted from that time--but the criticism was that the new ideas about ecosystems and evolutionary ecology weren't well integrated into ecological applications, and so their effect wasn't as broad as it could have been. Concepts like ecosystem services and function today integrate ecosystem science into applied outputs, and the cautionary tale is the value of balancing theoretical and applied development. It also seems that ecology must first consider what its duty as a science is to society (Mace’s assumption being that we have a great duty to be of value), since that is the key determinant of which path we decide to take. Then we can consider what we have done right in recent years and what we have done wrong, and decide where to go from here.
Page from "Silent Spring", Rachel Carson.