
Thursday, January 4, 2018

Some of the best advice on the internet: several years of links

I started off the New Year with a much-needed bookmark reorganization and purge, which also gave me a chance to re-read some of the links I've held onto (sometimes for years). There's an ever-increasing amount of useful content on the internet, but these have proven to be some of the most helpful, concrete, and lasting guides for navigating a scientific life.

I thought I'd collate the list here with the hope others might find some of these useful.

How to make it as an early career researcher and new faculty: 
Identity and academia:
  • I think most of us took different and often interesting routes to science. For example, I grew up in an evangelical Christian family, took a number of years to finally start my undergrad, and had no particular knowledge of ecology when I began my BSc. I wanted to be a vet, but now I'm an ecologist. Close enough :) So I like to hear about the many different routes by which scientists found science (SEAS).
  • Overcoming imposter syndrome - there are many websites devoted to the topic, but this one provides particularly concrete steps to overcoming this common problem. 
  • No one is perfect, and feedback can hurt - why feedback hurts and how to overcome it. And no, it isn't enough to say, 'grow a thicker skin' (The Thesis Whisperer).
  • Diversify EEB - a useful list of women and minorities working in EEB, worth keeping in mind when making nominations, selecting reviewers, and making various invitations. 
  • And it's worth remembering that there is a dark side (one slightly bitter take on it). (Fear and Loathing in Academia)
Mentoring and leadership:
Computing/Data management:
Data visualization:
  • There are some really beautiful infographics about science from Eleanor Lutz here (Tabletop Whale).
  • Information is Beautiful - infographics for inspiration
  • Show me Shiny - some great examples of how R Shiny has transformed data visualization and interaction.
  • If you are familiar with Edward Tufte's influential work on data visualization, you can use R to produce similar plots here. (Lukasz Piwek)
Teaching:
Miscellaneous links:

Wednesday, December 13, 2017

More authors, more joy?


It seems that ecologists have been complaining that no one writes single author manuscripts anymore since at least the 1960s. de Solla Price predicted in 1963:
"…the proportion of multi-author papers has accelerated steadily and powerfully, and it is now so large that if it continues at the present rate, by 1980 the single-author paper will be extinct”
Fortunately, an interesting new editorial in the Journal of Applied Ecology has the data (from their archives of published and submitted papers) to evaluate whether this disastrous outcome has actually occurred.

It turns out that Price was wrong about single-author extinction, although he hadn't misread the trends. Since the 1970s, the proportion of single-authored papers at the journal has declined to less than 4%, and the mean number of authors has risen to more than 5 (Figure 1).

Fig. 1 (from the Journal of Applied Ecology editorial).
It's also notable that single-authored papers are cited significantly less often and are 2.5x less likely to be accepted (!). (If that statistic doesn't make you want to gather some coauthors, nothing will). These trends agree with others reported in the literature.

The authors hypothesize that a number of factors drive this result. Ecology has gotten 'bigger' in many ways: analyses are less likely to focus on single populations or species and more likely to be replicated through space and/or time. This increased breadth requires more students or assistants to aid with experimental or field work, or collaborations with other labs to bring such data together. Similarly, ecological data collection and analyses often require multiple types of specialized knowledge, whether statistical, mathematical, technological, or systems-based. And by relying on multiple researchers to play specialized roles, the overall quality of a manuscript may be higher than a jack-of-all-trades could manage alone. The authors also suggest that the growing number of ecologists, the more international scope of many research activities, and more democratic approaches to authorship have increased the mean number of co-authors.

What makes these results particularly interesting is that I think there is still something of a cachet to the sole-authored paper. The conceit is that writing a sole-authored paper means you have a fully realized research plan and are accomplished enough to bring it to fruition by yourself. But these stats at least seem to suggest that you're better off with a few friends :)


Barlow, J., Stephens, P. A., Bode, M., Cadotte, M. W., Lucas, K., Newton, E., Nuñez, M. A. & Pettorelli, N. On the extinction of the single-authored paper: The causes and consequences of increasingly collaborative applied ecological research. J Appl Ecol. 55(1). doi.org/10.1111/1365-2664.13040

Friday, October 6, 2017

Blogging about science for yourself

In case you missed it, a new paper in Royal Society Open Science from seven popular ecology blogs discusses the highlights and values of science community blogging. It provides some insights into the motivations behind posting and the reach and impacts that result. It's a must-read if you've considered or already have a blog about science.

It was nice to see how universal the 'pros' of blogging seem to be – the things I most appreciate about contributing to a blog are pretty similar to the things the authors here reported too. According to the archives, I've been posting here since 2010, when I was a pretty naïve PhD student interacting with the ecological literature for the first time. I miss, actually, the degree of enthusiasm and wonder I had upon encountering ideas for the first time. I just started a faculty job this fall, and I think that the blog allowed me to explore and experiment with ideas as I figured out where I was going as a scientist (which is still an ongoing process).

As Saunders et al. note, one of the other major upsides to blogging is the extent to which it produces networking and connections with colleagues. In a pretty crowded job market, I think it probably helped me, although only as a complement to the usual suspects (publications, 'fit', research plans, interviewing skills). Saunders et al. also mentioned blogging as relevant to NSF's Broader Impacts section, which I actually hadn't considered. Beyond that, the greatest benefit by far for me is that forcing oneself to post regularly and publicly is amazing practice for writing about science.

Despite these positives, I don't necessarily think a science blog is for everyone, and there are definitely things to consider before jumping into it. It can be hard to justify posting on a blog when your to-do list overflows, and not everyone will (understandably) think that's a good use of their time. There is a time commitment and degree of prioritisation required that is difficult. This is one reason that having co-bloggers can be a lifesaver. It is also true that while writing a blog is great practice, it probably selects for people able to write quickly (and perhaps without perfectionistic tendencies).

When students ask me about blogging, they often hint at concerns about sharing their ideas and writing. It can be really difficult to put your ideas and writing out there (why invite more judgement and criticism?), and this can feed back into imposter syndrome (speaking from my own experience). For a long time, minorities, women, and students have been under-represented in ecology blogs, and I think this may be a contributor to that. It's nice to see more women blogging these days, and hopefully there is a positive feedback from increasing the visibility of under-represented groups.

In any case, this paper was especially timely for me, because I've spent the past few months re-evaluating whether to keep blogging, and it provided a reminder of the positive impacts that are easy to overlook.

Monday, July 31, 2017

Novelty or prediction or something else?

There is an interesting editorial at eLife from Barak Cohen on "How should novelty be valued in science?" It connects to discussions on this blog and elsewhere about the most efficient and effective path for science (Cohen suggests a focus on predictive ability).

One relevant question is how 'understanding' differs from 'predicting' and whether a focus on 'prediction' can produce perverse incentives too, as the focus on novelty has.

[This pessimistic image about perverse incentives from Edwards and Roy (2017), and the discussion from Mike Taylor, seemed an appropriate addition.]

Friday, June 2, 2017

Image in academia

Not many seminar speakers are introduced with a discussion of their pipetting skills. When we talk about other scientists, we discuss their intelligence, their rigour, their personality, above and beyond their learned skills. Most people have an image of what a scientist should be, and judge themselves against this idealized vision. There are a lot of unspoken messages exchanged in science and academia. It's easy to think that the successful scientists one interacts with are just innately intelligent, confident, passionate, and hard-working. No doubt imposter syndrome owes a lot to this one-sided internalization of the world. After all, you don't feel like you fulfill these characteristics because you have evidence of your own personal struggles, but not those of everyone else. 

"Maybe no one will notice".
The most enlightening conversation I had this year (really! Or at least a close tie with discovering that PD was originally discussed as a measure of homologous characters…) was with a couple of smart, accomplished female scientists, in which we all acknowledged that we, not infrequently, suffered from feeling totally out of our depth. It is hard to admit our failings or perceived inadequacies, for fear we'll be branded with them. But it's really helpful for others to see that reality is different from the image we've projected. If everyone is an imposter, no one is. There is something to be said for confidence when scientists are presenting consensus positions to the public, but on the other hand, I think that being open about the human side of science is actually really important. 

For those who already feel like outsiders in academia, perhaps because they (in terms of race, gender, orientation, social and economic background, etc.) differ from the dominant stereotype of a 'scientist', it probably doesn't take much to feel alienated and ultimately leave. Students have said things to me along the lines of "I love ecology but I don't think I will try to continue in academia, because academia is too negative/aggressive/competitive". Those are legitimate reasons to avoid the field, but I always try to acknowledge that I feel the same way too sometimes. It's helpful to know that others feel the same way, and that having this kind of feeling (e.g. that you aren't smart enough, or don't have a thick enough skin) isn't a sign that you don't actually belong. Similarly, it's easy to see finished academic papers and believe that they are produced in a single perfect draft, and that writing a paper should be easy. But for 99% of people, that is not true: a paper is the outcome of maybe 10 rounds of heavy editing, several rounds of peer review, and perhaps even a copy-editor. Science is inherently a work-in-progress, and that's true of scientists as well.

The importance of personal relationships and mentorship in providing realistic images of science should be emphasized. Mentorship by people who are particularly sympathetic (by personal experience or otherwise) to the difficulties individuals face is successful precisely for this reason. This might be why blog posts on the human side of academia are comparatively so popular: we're all looking for evidence that we are not alone in our experiences. (Meg Duffy writes nice posts along these lines, e.g. 1, 2.) And though the height of the blogosphere might be over, the ability of blog posts to provide insight into the humanity of academia might be their most important value.

Wednesday, April 12, 2017

The most "famous" ecologists (and some time wasting links) (Updated)

(Update: This has gotten lots more attention than I expected. Since first posted, the top 10 list has been updated 2 times based on commenters' suggestions. You can also see everyone we looked up here. I probably won't update this again, because there is a little time wasting, and there is a lot of time wasting :) )

At some point my officemates Matthias and Pierre and I started playing the 'who is the most famous ecologist' game (instead of, say, doing useful work), particularly looking for ecologists with an h-index greater than 100. An h-index of 100 means that the scientist has at least 100 publications with at least 100 citations each, while their remaining papers each have no more than 100 citations. Although the h-index is controversial, it is readily available and reasonably captures scientists who have above-average citations per paper and high productivity. We restricted ourselves to living researchers. We used Publish or Perish to query Google Scholar (which now believes everyone using the internet in our office may be a bot).

We identified only 12 ecologists at level 100 or greater. For many researchers in specialized subfields, an h-index this high is probably not achievable. The one commonality in these names seems to be that they either work on problems of broad importance and interest (particularly climate change and human impacts on the landscape) or else were fundamental to one or more areas of work. They were also all men, and so we tried to identify the top 12 women ecologists. (We tried as best we could, using lists here and here to compile our search.) The top women ecologists tended to have been publishing for an average of 12 years less than the male ecologists (44 vs. 56 years), which may explain some of the rather jarring difference. The m-index is the h-index divided by years publishing, and so standardizes for differences in career age.
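
(Not from the original game, but as a concrete illustration of the definitions above, here is a minimal Python sketch of how the h-index and m-index can be computed from a list of per-paper citation counts. The citation counts below are invented for the example.)

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_index(citations, years_publishing):
    """The m-index: h-index divided by career length in years."""
    return h_index(citations) / years_publishing

# Invented citation counts for six papers:
papers = [250, 120, 45, 33, 8, 2]
print(h_index(papers))       # 5 (five papers each have at least 5 citations)
print(m_index(papers, 20))   # 0.25
```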

(It's difficult to get these kinds of analyses perfect due to common names, misspellings in citations, different databases used, etc. It's clear that for people with long publication lists, there is a good amount of variance depending on how that value is estimated.)

Other links: 
(I've been meaning to post some of these, but haven't otherwise had the time or space for it…)
Helping graduate students deal with imposter syndrome (Link). Honestly, not only graduate students suffer from imposter syndrome, and it is always helpful to get more advice on how to escape the feeling that you've lucked into something you aren't really qualified for. 

A better way to teach the Tree of Life (Link). This paper has some great ideas that go beyond identifying common ancestors or memorizing taxonomy.

Analyzing which scientists are on Twitter (Link). 

Recommendation inflation (Link). Are there any solutions to an arms race of positivity?  


Thursday, March 9, 2017

Data management for complete beginners

Bill Michener is a longtime advocate of data management and archiving practices for ecologists, and I was lucky to catch him giving a talk on the topic this week. It clarified for me the value of formalizing data management plans for institutions and lab groups, but also the gap between recommendations for best practices in data management and the reality in many labs.

Michener started his talk with two contrasting points. First, we are currently deluged by data: there is more data available to scientists now than ever, perhaps 45,000 exabytes by 2020. On the other hand, scientific data is constantly lost. The longer it has been since a paper was published, the less likely its data can be recovered (one study he cited showed that data had a half-life of 20 years). There are many causes of data loss, some technological, some due to changes in sharing and publishing norms. The rate at which data is lost may be declining, though. We're in the middle of a paradigm shift in terms of how scientists see our data. Our vocabulary now includes concepts like 'open access', 'metadata', and 'data sharing'. Many related initiatives (e.g. GenBank, Dryad, Github, GBIF) are fairly familiar to most ecologists. Journal policies increasingly ask for data to be deposited into publicly available repositories, computer code is increasingly submitted during the review process, and many funding agencies now require statements about data management practices.
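
(To make that half-life concrete, here is a quick back-of-the-envelope sketch, my addition rather than Michener's, assuming simple exponential decay of data availability:)

```python
# Fraction of datasets still recoverable after t years, assuming
# exponential decay of data availability with a 20-year half-life.
def fraction_recoverable(t_years, half_life=20):
    return 0.5 ** (t_years / half_life)

print(fraction_recoverable(20))  # 0.5: half the data is gone after 20 years
print(fraction_recoverable(40))  # 0.25: three quarters gone after 40 years
```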

This has produced huge changes in typical research workflows over the past 25 years. But data management practices have advanced so quickly that there's a danger some researchers will begin to feel they are unattainable, given the level of time, expertise, or effort involved. I feel that data management is sometimes presented as a series of unfamiliar (and often changing) tools and platforms, and this can make it seem hard to opt in. It's important to emphasize that good data management is possible without particular expertise, and in the absence of cutting-edge practices and tools. What I liked about Michener's talk is that it presented practices as modular ('if you do nothing else, do this') and as incremental. Further, I think the message was that this paradigm shift is really about moving from a mindset in which data management is done post hoc ('I have a bunch of data, what should I do with it?') to considering how to treat data from the beginning of the research process.

Hierarchy of data management needs.

Once you make it to 'Share and archive data', you can follow some of these great references.

Hart EM, Barmby P, LeBauer D, Michonneau F, Mount S, Mulrooney P, et al. (2016) Ten Simple Rules for Digital Data Storage. PLoS Comput Biol 12(10): e1005097. doi:10.1371/journal.pcbi.1005097

Mills, J. A., et al. (2015) Archiving Primary Data: Solutions for Long-Term Studies. Trends in Ecology & Evolution 30(10): 581-589.

https://software-carpentry.org//blog/2016/11/reproducibility-reading-list.html (lots of references on reproducibility)

Mislan, K. A. S., Heer, J. M., and White, E. P. (2016) Elevating the Status of Code in Ecology. Trends in Ecology & Evolution 31(1): 4-7.


Thanks to Matthias Grenié for discussion on this topic.

Tuesday, January 24, 2017

The removal of the predatory journal list means the loss of necessary information for scholars.

We at EEB & Flow periodically post about trends and issues in scholarly publishing, and one issue that we keep coming back to is the existence of predatory Open Access journals. These are journals that abuse a valid publishing model to make a quick buck, employing practices that are clearly substandard and meant to subvert the normal scholarly publishing pipeline (for example, see: here, here and here). In identifying those journals that, through their publishing model and activities, are predatory, we have relied heavily on Beall's list of predatory journals. This list was created by Jeffrey Beall with the goal of providing scholars with the information needed to make informed decisions about which journals to publish in, and to avoid those that likely take advantage of authors.

As of a few days ago, the predatory journal list has been taken down and is no longer available online. Rumour has it that Jeffrey Beall removed the list in response to threats of lawsuits. This is really unfortunate, and I hope that someone who is dedicated to scholarly publishing will assume the mantle.

However, for those who still wish to consult the list, an archive of it still exists online, found here.

Friday, January 20, 2017

True, False, or Neither? Hypothesis testing in ecology.

How science is done is the outcome of many things, from training (both institutional and lab-specific), reviewers' critiques and requests, historical practices, subdiscipline culture and paradigms, to practicalities such as time, money, and trends in grant awards. 'Ecology' is the emergent property of thousands of people pursuing paths driven by their own combinations of these and other motivators. Not surprisingly, the path of ecology sways and stalls, and in response, papers pop up continuing the decades-old discussion about philosophy and best practices for ecological research.

A new paper from Betini et al. in Royal Society Open Science contributes to this discussion by asking why ecologists don't test multiple competing hypotheses (allowing efficient falsification, a la Popper, or "strong inference"). Ecologists rarely test multiple competing hypotheses: Betini et al. found that only 21 of 100 randomly selected papers tested 2 hypotheses, and only 8 tested more than 2. Multiple hypothesis testing is a key component of strong inference, and the authors hearken back to Platt's 1964 paper "Strong Inference" as to why ecologists should adopt it. 
From Platt: "Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method-oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations."
[An aside to say that Platt was a brutally honest critic of the state of science and his grumpy complaints would not be out of place today. This makes reading his 1964 paper especially fun. E.g. “We can see from the external symptoms that there is something scientifically wrong. The Frozen Method. The Eternal Surveyor. The Never Finished. The Great Man With a Single Hypothesis. The Little Club of Dependents. The Vendetta. The All-Encompassing Theory Which Can Never Be Falsified.”]
Betini et al. list a number of common intellectual and practical biases that likely prevent researchers from using multiple hypothesis testing and strong inference. These range from confirmation bias and pattern-seeking to the fallacy of factorial design (which leads to unreasonably high replication requirements, including for uninformative combinations). But the authors are surprisingly unquestioning about the utility of strong inference and multiple hypothesis testing for ecology. For example, Brian McGill has a great post highlighting the importance and difficulties of multi-causality in ecology: many non-trivial processes drive ecological systems (see also). 

Another salient point is that falsification of hypotheses, which is central to strong inference, is especially unserviceable in ecology. There are many reasons that an experimental result could be negative and yet not result in falsification of a hypothesis. Data may be faulty in many ways outside of our control, due to inappropriate scales of analyses, or because of limitations of human perception and technology. The data may be incomplete (for example, from a community that has not reached equilibrium); it may rely inappropriately on proxies, or there could be key variables that are difficult to control (see John A. Wiens' chapter for details). Even in highly controlled microcosms, variation arises and failures occur that are 'inexplicable' given our current ability to perceive and control the system.

Or the data might be accurate but there are statistical issues to be concerned about, given many effect sizes are small and replication can be difficult or limited. Other statistical issues can also make falsification questionable – for example, the use of p-values as the ‘falsify/don’t falsify’ determinant, or the confounding of AIC model selection with true multiple hypothesis testing.

Instead, I think it can be argued that ecologists have relied more on verification: accumulating multiple results supporting a hypothesis. This is slower, logically weaker, and undoubtedly results in mistakes too. Verification is most convincing when effect sizes are large; e.g. David Schindler's Lake 226, which provided a single but decisive example of phosphorus supplementation causing eutrophication. Unfortunately, small effect sizes are common in ecology. There also isn't a clear process for dealing with negative results when a field has relied on verification: how much negative evidence is required to remove a hypothesis from use, versus merely leading to caveats or modifications?

Perhaps one reason Bayesian methods are so attractive to many ecologists is that they reflect the modified approach we already use: developing priors based on our assessment of evidence in the literature, particularly verifications but also evidence that falsifies (for a better discussion of this mixed approach, see Andrew Gelman's writing). This is exactly where Betini et al.'s paper is especially relevant: intellectual biases and practical limitations matter even more outside the strict rules of strong inference. It seems important for ecologists to address these biases as much as possible. In particular, we need better training in philosophical, ethical, and methodological practices; priors, which may frequently be amorphous and internal, should be externalized using meta-analyses and reviews that express the state of knowledge in an unbiased fashion; and we should strive to formulate hypotheses that are specific and to identify their implicit assumptions.

Friday, January 13, 2017

87 years ago, in ecology

Louis Emberger was an important French plant ecologist in the first half of the last century, known for his work on the assemblages of plants in the Mediterranean.

For example, the plot below is his published diagram showing minimum temperature of the coolest month versus a 'pluviometric quotient' capturing several aspects of temperature and precipitation, from:

Emberger, L. (1930) La végétation de la région méditerranéenne. Rev. Gén. Bot. 42.

Note this wasn't an unappreciated or ignored paper: it has received a couple hundred citations, continuing up to the present day. Further, updated versions of the diagram have appeared in more recent years (see bottom).

So it's fascinating to see the eraser marks and crossed out lines, this visualisation of scientific uncertainty. The final message from this probably depends on your perspective and personality:
  • Does it show that plant-environment modelling has changed a lot or that plant environmental modelling is still asking about the same underlying processes in similar ways?
  • Does this highlight the value of expert knowledge (still cited) or the limitations of expert knowledge (eraser marks)? 
It's certainly a reminder of how lucky we are to have modern graphical software :)



E.g. updated in Hobbs, Richard J., D. M. Richardson, and G. W. Davis. "Mediterranean-type ecosystems: opportunities and constraints for studying the function of biodiversity." Mediterranean-Type Ecosystems. Springer Berlin Heidelberg, 1995. 1-42.

Thanks to Eric Garnier for finding and sharing the original Emberger diagram and the more recent versions.

Wednesday, November 16, 2016

The value of ecology through metaphor

The romanticized view of an untouched, pristine ecosystem is unrealistic; we now live in a world where every major ecosystem has been impacted by human activities. From pollution and deforestation to the introduction of non-native species, our activity has influenced natural systems around the globe. At the same time, ecologists have largely focused on 'intact' or 'natural' systems in order to uncover the fundamental operations of nature. Ecological theory abounds with explanations for ecological patterns and processes. However, given that the world is increasingly human-dominated and urbanized, we need a better understanding of how biodiversity and ecosystem function can be sustained in the presence of human domination. If our ecological theories provide powerful insights into ecological systems, then human-dominated landscapes are where they are most desperately needed to solve problems.

This demand to solve problems is not unique to ecology; other scientific disciplines measure their value in terms of direct contributions to human well-being. The most obvious is human biology. Human biology has transitioned from gross morphology, to physiology, to the molecular mechanisms controlling cellular function, and all of these tools provide powerful insights into how humans are put together and how our bodies function. Yet, as much as these tools are used to understand how healthy people function, human biologists often stay focussed on how to cure sick people. That is, the proximate value ascribed to human biology research is in its ability to cure disease and improve peoples' lives. 


In ecology, our sick patients are heavily impacted and urbanized landscapes. Understanding how natural systems function can provide insights into strategies to improve degraded ecosystems. This value of ecological science manifests itself in shifts in funding and publishing. We now have synthesis centres that focus on the human-environment interaction (e.g., SESYNC). Journals that publish applied solutions to ecological and environmental problems (e.g., Journal of Applied Ecology, Frontiers in Ecology and the Environment, etc.) have gained in prominence over the past decade. But more can be done.


We should keep the 'sick patient' metaphor in the back of our minds at all times and ask how our scientific endeavours can help improve the health of ecosystems. I was once a graduate student who pursued purely theoretical tests of how ecosystems are put together, and now I am the executive editor of an applied journal. I think that ecologists should feel like they can develop solutions to environmental problems, and that their underlying science gives them a unique perspective on improving the quality of life of our sick patients. 

Monday, November 7, 2016

What is a community ecologist anyways?

I am organizing a 'community ecology' reading group, and someone asked me whether I thought focusing on communities was a little restrictive. And no, the thought never crossed my mind. Which, I realized, is because I internally define community ecology as a large set of things, including 'everything I work on' :-) When people ask me what I do, I usually say I'm a community ecologist.

Obviously community ecology is the study of ecological communities (as a theoretical ideal, "the complete set of organisms living in a particular place and time" is an ecological community sensu lato; Vellend 2016). But in practice, it's very difficult to define the boundaries of what a community is (Ricklefs 2008), and the scale of time and space is rather flexible.

So I suppose my working definition has been that a community ecologist researches groups of organisms and understands them in terms of ecological processes. There is flexibility in terms of spatial and temporal scale, number and type of trophic levels, interaction type and number, and response variables of interest. It's also true that this definition could encompass much of modern ecology…

On the other hand, a colleague argued that only the specific study of species interactions should be considered as ‘community ecology’: e.g. pollination ecology, predator-prey interactions, competition, probably food web and multi-trophic level interactions. 

Perhaps my definition is so broad as to be uninformative, and my colleague's is too narrow to include all areas. But it is my interest in community ecology that leads me to sometimes think about larger spatial and temporal scales. Maybe that's what community ecologists have in common: the flexibility needed to deal with the complexities of ecological communities.

Monday, October 17, 2016

Reviewing peer review: gender, location and other sources of bias

For academic scientists, publications are the primary currency for success, and so peer review is a central part of scientific life. When discussing peer review, it's always worth remembering that since it depends on 'peers', broader issues across ecology are often reflected in issues with peer review. A series of papers from Charles W. Fox and coauthors Burns, Muncy, and Meyer do a great job of illustrating this point, showing how diversity issues in ecology are writ small in the peer review process.

The journal Functional Ecology provided the authors with up to 10 years of data on the submission, editorial, and review process (between 2004 and 2014). These data provide a unique opportunity to explore how factors such as gender and geographic locale affect the peer review process and outcomes, and also how this has changed over the past decade.

Author and reviewer genders were assigned using an online database (genderize.io) that includes 200,000 names, each with an associated probability reflecting how often the name belongs to each gender. The geographic locations of editors and reviewers were also identified from their profiles. There are some clear limitations to this approach, particularly that Asian names had to be excluded. Still, 97% of names were present in the genderize.io database, and 94% of those names were associated with a single gender >90% of the time.
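
(Fox et al.'s exact pipeline isn't described here; as a rough sketch of the general approach, the genderize.io API can be queried by first name. The endpoint and response fields below match the service's public documentation as I understand it, but treat the details as assumptions.)

```python
import requests  # third-party HTTP library

def guess_gender(first_name):
    """Query genderize.io for a first name (the free tier is rate-limited)."""
    resp = requests.get("https://api.genderize.io", params={"name": first_name})
    resp.raise_for_status()
    data = resp.json()
    # Expected response shape, e.g.:
    # {"name": "charles", "gender": "male", "probability": 0.99, "count": 12345}
    return data.get("gender"), data.get("probability")

print(guess_gender("Charles"))  # e.g. ('male', 0.99)
```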

Many—even most—of Fox et al.’s findings are in line with what has already been shown regarding the causes and effects of gender gaps in academia. But they are interesting, nonetheless. Some of the gender gaps seem to be tied to age: senior editors were all male, and although females make up 43% of first authors on papers submitted to Functional Ecology, they are only 25% of senior authors.

Implicit biases in identifying reviewers are also fairly common: far fewer women were suggested than men, even when female authors or female editors were identifying reviewers. Female editors did invite more female reviewers than male editors ("Male editors selected less than 25 percent female reviewers even in the year they selected the most women, but female editors consistently selected ~30–35 percent female"). Female authors also suggested slightly more female reviewers than male authors did.

Some of the statistics are great news: there was no effect of author gender or editor gender on how papers were handled and their chances of acceptance, for example. Further, the mean score given to a paper by male and female reviewers did not differ – reviewer gender isn’t affecting your paper’s chance of acceptance. And when the last or senior author on a paper is female, a greater proportion of all the authors on the paper are female too.

The most surprising statistic, to me, was that there was a small (2%) but consistent effect of handling editor gender on the likelihood that male reviewers would respond to review requests: they were less likely to respond, and less likely to agree to review, if the editor making the request was female.

That there are still observable effects of gender in peer review despite an increasing awareness of the issue should tell us that the effects of other forms of less-discussed bias are probably similar or greater. Fox et al. hint at this when they show how important the effect of geographic locale is on reviewer choice. Overwhelmingly editors over-selected reviewers from their own geographic locality. This is not surprising, since social and professional networks are geographically driven, but it can have the effect of making science more insular. Other sources of bias – race, country of origin, language – are more difficult to measure from this data, but hopefully the results from these papers are reminders that such biases can have measurable effects.

From Fox et al. 2016a. 

Tuesday, September 6, 2016

Examples of pre-interview questions

Last year, several postdocs at my institute (including me) were applying for faculty positions at North American institutions. Frequently, before on-campus interviews, a 'long list' of people is asked to take part in phone/Skype interviews, before a short list for campus visits is decided on. Since this step is now so common, the postdocs put together an informal list of all the questions people had been asked during this initial interview*.

I found the list helpful. The usual caveats apply - different types of institutes and search committees will have different priorities and focus on different types of questions (e.g. teaching vs. research). Thinking about the answers to these questions ahead of time can be helpful for developing a vision of how you approach teaching and research, and being clear in how you communicate that.

(*Thanks to Iris Levin for originally curating this list)

Big picture questions:
Why X institution?
What do the liberal arts mean to you? Why are you interested in a career at a liberal arts college?
Tell us about contributing to XX college’s emphasis on liberal arts in practice, interdisciplinary and/or international aspects of education
How will our Biology Dept enhance your teaching and research?

Teaching focused questions:
General approach
What courses are you best suited to teach, and how would you teach them?
What does a typical day in your class look like?
What do you feel you would add to graduate and undergraduate training in the department?
What is the biggest challenge in teaching?
You will teach X course every semester, how would you keep it exciting?
How would you teach a lab differently for introductory, intermediate or advanced students?

Specifics about courses
How would you teach X class?
What sort of interdisciplinary and/or first-year seminar course would you teach?
What sort of non-majors course would you teach? How would you teach it differently for non-majors vs. majors?
What new course(s) would you develop and how?
Tell us about your approach to teaching an XXX course for students who have had one introductory biology course
Tell us about incorporating quantitative and analytical reasoning into an XXX course
Tell us about using open-ended, inquiry-based group work in an introductory biology course

Research focused questions:
Approach and interests
Briefly summarize your most significant research contribution.
Tell us about your research program
You work on xyz – how would you conduct your research here?
How do you see your research complementing that of others in the department, and what do you view as your unique strengths?
Where do you see yourself in 5 years? Where do you see yourself in 10 years?
Who would you collaborate with here? 
How would you collaborate with faculty and bridge different fields?
What sort of projects would you do with graduate students? 
How would undergrads be involved with your research and what would the outcomes be?
Tell us about your approach to mentoring undergraduates in research

Funding
What sources of funding would you pursue to support your research program?
What grants would you apply to? 

Integration with teaching?
What contributions would your research make to these courses?
How would you involve students in your research outside or inside the classroom?

Misc (what type of colleague would you be?):
How would you contribute to the larger campus community?
How do you address diversity in your teaching and research?
What do you feel you can contribute to efforts to cultivate a wide diversity of people and perspectives at XX College?
Describe what you know about X college, how you would fit in, and any concerns.
How do you deal with conflict?
What has been the biggest obstacle in your professional development?


If you have more to add, please comment!

Friday, September 2, 2016

Science in many languages.

The lingua franca of biology is English, although through history it has variously been Latin, German, or French. Communication is fundamental to the modern scientific landscape, and English dominates the international ecological community. To be indexed by SCOPUS, a journal must be written at least in part in English. All major ecological journals are published in English, and clear, understandable writing is unquestionably an advantage in having work published. Large international conferences are usually conducted in English. Sometimes there is no translation for a key word, and the English version is used directly, regardless of the language of the conversation. Even base commands in coding languages like R are in English. There is an undeniable but sometimes unmentioned advantage to being a native English speaker in science.

A common language is inevitable and necessary to communicate in a time of global connectivity, but it is also necessary to acknowledge that many scientists speak English as a second (or third, or fourth) language, and barriers can arise as a result. The activation energy to move between languages is high, and it can take longer to read and write. But sometimes the costs are more subtle: for example, students may be less likely to give oral talks at conferences as a result of concerns about being understood. Even if they are relatively proficient, the question period after talks is difficult, since questions are often spoken quickly, are not clear, and are expressed in a variety of accents. That's a difficult situation to address directly, but there are ways to facilitate communication across a variety of English proficiencies. And many of these are simply good practices for communication in any language.

First: slow down. Some of us are guiltier than others, but if you speak too fast, you lose listeners. This is another reason to consciously try to breathe and relax during presentations and lectures. Some people speak so quickly that even the native English speakers have trouble following along. Now imagine listening to that talk while needing a little extra processing time.

When you give lectures and presentations, make sure that the slides and the verbal component both provide the overall message. I’ve followed talks in French and Spanish before, because the slides were well-composed (and in English). If someone misses something you say, it should be possible to follow the important points by the slides alone. And vice versa. This is good advice for any talk. Don’t be boring, but also be aware of when overuse of idioms or culture-specific references prevent understanding.

Sometimes fluent English speakers unknowingly dominate conversations because they speak faster and may be more confident in expressing themselves. In group activities like workshops and meetings, allow breaks in the conversation so that non-native speakers (or just less dominating personalities and quieter people) have a chance to express themselves as well.

An ear for accents comes from practice listening. Practice speaking improves accent. It’s a mutually beneficial relationship.

Also, remember that culture and language interact. English is interesting in that we have no pronouns differentiating between formal and informal relationships (we have ‘you’, not ‘tu’/‘vous’, etc.). This can make English speakers seem informal and friendly, or disrespectful, depending on the context. Keep this context in mind when interpreting interactions.

Tuesday, June 14, 2016

Rebuttal papers don’t work, or citation practices are flawed?

Brian McGill posted an interesting follow-up to Marc's question about whether journals should allow post-publication review in the form of responses to published papers. I don't know that I have any more clarity as to the answer after reading both (excellent) posts. Being idealistic, I think that when there are clear errors, they should be corrected, and that editors should be invested in identifying and correcting problems in papers in their journals. Based on the discussions I've had with co-authors about a response paper we're working on, I'd also like to believe that rebuttals can produce useful conversations, and ultimately be illuminating for a field. But pragmatically, Brian McGill pointed out that rebuttals rarely seem to make an impact (citing Banobi et al. 2011). Often this was because citations of flawed papers continued, and "were either rather naive or the paper was being cited in a rather generic way".

Citations are possibly the most human part of writing scientific articles. Citations form a network of connections between research and ideas, and are the written record of progress in science. But they're also one of the clearest points at which biases, laziness, personal relationships (both friendships and feuds), taxonomic biases, and subfield myopia are apparent. So why don't we focus on improving citation practices? 

Ignoring the more extreme problems (coercive citations, citation fraud, how to cite supplementary materials, data, and software), as the literature grows more rapidly and pressure to publish increases, we have to acknowledge that it is increasingly difficult to know the literature thoroughly enough to cite broadly. A couple of studies found that only 60-70% of citations were scored as accurate (Todd et al. 2007; Teixeira et al. 2013) (whether you see that as too low or pretty high depends on your personality). Key problems were the tendency to cite 'lazily' (citing reviews or synthetic pieces rather than delving into the literature within) and 'naively' (citing high-profile pieces in an offhand way without considering rebuttals and follow-ups, a key point of the Banobi et al. piece). At least one limited analysis (Drake et al. 2013) showed that citations tended to be much more accurate in higher-IF journals (>5), perhaps (speculating) due to better peer review or copy editing. 

Todd et al (2007) suggest that journals institute random audits of citations to ensure authors take greater care. This may be a good idea that is difficult to institute in journals where peer reviewers are already in short supply. It may also be useful to have rebuttal papers considered as part of the total communication surrounding a paper - the full text would include them, they would be automatically downloaded in the PDF, there would be a tab (in addition to author information, supplementary material, references, etc) for responses. 

More generally: why don't we learn how to cite well as students? The vast majority of advice on citation practices that a quick Google search turns up concerns avoiding plagiarism and stylistic matters. Some of it is philosophical, but I have never heard a deep discussion of questions like, 'What's an appropriate number of citations for an idea? For a manuscript? How deep do I cite? (Do I need to go back to Darwin?)'. It would be great if there were a consensus advice publication on best practices in citation, of the sort the BES is so good at producing.

Which is to say, that I still hope that rebuttals can work and be valuable.

Friday, May 27, 2016

How to deal with poor science?

Publishing research articles is the bedrock of science. Knowledge advances through testing hypotheses, and the only way such advances are communicated to the broader community of scientists is by writing up the results in a report and sending it to a peer-reviewed journal. The assumption is that papers passing through this review filter report robust and solid science.

Of course this is not always the case. Many papers include questionable methodology and data, or are poorly analyzed. And a small minority actually fabricate or misrepresent data. As Retraction Watch often reminds us, we need to be vigilant against bad science creeping into the published literature.



Why should we care about bad science? Erroneous results or incorrect conclusions in scientific papers can lead other researchers astray and result in bad policy. Take, for example, the well-flogged Andrew Wakefield, a since-discredited researcher who published a paper linking autism to vaccines. The paper is so flawed that it does not stand up to basic scrutiny and was rightly retracted (though how it passed through peer review is an astounding mystery). However, this incredibly bad science invigorated an anti-vaccine movement in Europe and North America that is responsible for the re-emergence of childhood diseases that should have been eradicated. This bad science is responsible for hundreds of deaths.


Of course most bad science will not result in death. But bad articles waste time and money as researchers go down blind alleys or work to rebut papers. The important thing is that there are avenues available to researchers to question and criticize published work. Nowadays, papers are usually criticized through two channels. First is through blogs (and other social media). Researchers can communicate their concerns and opinions about a paper to the audience that reads their blog or through social media shares. A classic example was the blog post by Rosie Redfield criticizing a paper published in Science that claimed to have discovered bacteria that used arsenic as a food source.

However, there are a few problems with this avenue. First, it is not clear that the correct audience is being targeted. For example, if you normally blog about your cat, and your blog followers are fellow cat lovers, then a seemingly random post about a bad paper will likely fall on deaf ears. Second, the authors of the original paper may not see your critique, and so do not have a fair opportunity to rebut your claims. Finally, your criticism is not peer-reviewed, and so flaws or misunderstandings in your writing are less likely to be caught.

Unlike the relatively new blog medium, the second option is as old as scientific publication: writing a commentary that is published in the same journal (often with an opportunity for the authors of the original article to respond). These commentaries are usually reviewed and target the correct audience, namely the scientific community that reads the journal. However, some journals do not have a commentary section, and so this avenue is not available to researchers.

Caroline and I experienced this recently when we enquired about the possibility of writing a commentary on a published article that contained flawed analyses. The Editor responded that they do not publish commentaries on their papers! I am an Editor-in-Chief, and I routinely deal with letters sent to me that criticize papers we publish. This is an important part of the scientific process. We investigate all claims of error or wrongdoing, and if the concerns appear valid but do not meet the threshold for a retraction, we invite the correspondents to write a commentary (and invite the original authors to write a response). This option is so critical to science that its importance cannot be overstated. Bad science needs to be criticized, and the broader community of scientists should feel that they have opportunities to check and critique publications.


I can see many reasons why a journal might not bother with commentaries (to save page space for articles, because they're seen as petty squabbles, etc.), but I would argue that scientific journals have important responsibilities to the research community, and one of them must be to hold the papers they publish accountable and to allow sound and reasoned criticism of potentially flawed papers.

Looking over the author guidelines of the 40 main ecology and evolution journals (apologies if I missed statements; author guidelines can be very verbose), only 24 had a clear statement about publishing commentaries on previously published papers. While they all had differing names for these commentary-type articles, they all clearly spelled out that there is a set of guidelines for publishing a critique of an article and how they handle it. I call these 'Group A' journals. The Group A journals hold peer critique after publication as an important part of their publishing philosophy and should be seen as having a higher ethical standard.



Next are the 'Group B' journals. These five journals had unclear statements about publishing commentaries of previously published papers, but they appeared to have article types that could be used for commentary and critique. It could very well be that these journals do welcome critiques of papers, but they need to clearly state this.


The final class, 'Group C' journals, did not have any clear statements about welcoming commentaries or critiques. These 11 journals might accept critiques, but they did not say so. Further, there was no indication of an article type that would allow commentary on previously published material. If these journals do not allow commentary, I would argue that they should re-evaluate their publishing philosophy. A journal that did away with peer review would rightly be ostracized and seen as not a fully scientific journal, and I believe that post-publication criticism is just as essential as peer review.


I highlight the differences among journals not to shame specific ones, but rather to highlight that we need a set of universal standards to guide all journals. Most journals now adhere to a set of standards for data accessibility and competing interest statements, and I think they should also feel pressure to adopt a standardized set of protocols for dealing with post-publication criticism.