As scientists, we’re all wrong, at least sometimes. The question is, how are we wrong?
The arsenic bacteria saga, which we’ve discussed on this blog before, is turning out to be a very public example of failure in science. First announced at a NASA press conference in December 2010, authors led by Felisa Wolfe-Simon shared their discovery of a bacterium capable of replacing the phosphorus in its DNA with arsenic, suggesting the possibility of life in phosphorus-limited conditions. This apparently momentous discovery was published in Science and met with disbelief and severe criticism. Critics throughout the blogosphere and academic departments began to compile a comprehensive list of the paper’s failings (eight technical comments were published in Science), and as a result of the intense scrutiny, the paper’s lead author is no longer associated with the lab group where the research was carried out. This is failure at its worst: the science was flawed, and it drew immediate and intense censure. This is the kind of failure that most young scientists fear: judgment, intense criticism, career-long repercussions. But it’s also probably the least common type of failure in science.
However, it’s arguable that the saddest form of failure is the opposite of this: when a paper is right, innovative, even ahead of its time, but somehow never receives the attention it deserves. There are many famous examples of scientific obscurity, with Gregor Mendel being the poster child for scientists who toil for years in anonymity. In ecology, for example, papers treating species as equivalent (à la neutral theory) to explain coexistence were around in the 1950s and 1960s, but received little attention. Other papers suggesting environmental variation as a possible mechanism for plant coexistence were published prior to Chesson and Huntly's influential paper, yet remain essentially uncited. Most researchers can name at least one paper that foreshadows the direction the field would take many years later, yet is unacknowledged and poorly cited. There are many reasons a paper can be under-recognized: its authors work outside the dominant geographical areas or social networks, or lack the ability to champion their ideas, either in writing or in person. In some instances the intellectual climate simply may not be conducive to an idea that, at a later time, will take off.
If that is the saddest type of failure, then the best type of failure is when being wrong inspires an explosion of new research and new ideas. Rather than causing an implosion, as the arsenic-bacteria paper did, these wrong ideas reinvigorate their field. Great examples in ecology include Steve Hubbell’s Unified Neutral Theory of Biodiversity, which, although rightly criticized for its flaws, produced a high-quality body of literature debating its merits. When Jared Diamond (1975) proposed drawing conclusions about community assembly processes from patterns of species co-occurrence, the ensuing disagreement, led by Dan Simberloff, ultimately produced the current focus on null models. Cam Webb’s hypothesis that there should be a relationship between phylogenetic patterns in communities and the importance of different processes in structuring those communities sparked a decade-long investigation into the link between phylogenetic information and community assembly. Although Webb’s hypothesis proved too simplistic, it still informs current research. This is the kind of failure on which you can build a career, particularly if you are willing to continually revisit and develop your theory as the body of evidence against it grows.
However, the most common form of failure occurs when a paper is published that is wrong, yet no one notices or, worse, cares. For every paper that blows up to the proportions of the arsenic bacteria paper, or inspires years of new research, there are hundreds of papers that simply fade away, poorly cited and poorly read. Is it better to fail quietly, or to take the chance at public failure, with all its risks and rewards?
6 comments:
Nice post. I wonder whether, in the current scientific world filled with blogging and tweeting, some papers will be cited and read more than others because of how much the author(s) advertise their paper on blogs, Twitter, and the like.
Heather Piwowar's research in PLoS ONE suggested that papers that openly share their data are cited more than papers that don't provide their data.
Very interesting post, but I would quibble with some details. In particular, what little-cited papers are you thinking of that preceded Chesson and Huntly? Because I can think of some MUCH-cited papers that preceded them (Hutchinson 1961, for starters!). That Hutchinson had incorrect reasons for claiming that environmental variation promotes coexistence does not mean he didn't make the claim, or that no one noticed it. Further, while I think it's great that you and the folks in your lab take Chesson and Huntly '97 as a landmark paper (so do I), the truth is that folks who do are a small minority of all ecologists. Take it from me, Chesson and Huntly unfortunately failed to kill off, or even make much of a dent in, the zombie ideas they were attacking. Their paper, not the related papers preceding it, is an example of a great paper not getting the attention it deserves.
As for "productive failures" like Hubbell's neutral theory or Webb et al.'s ideas about phylogenetic relatedness and coexistence, there's definitely a debate to be had about the difference between productive failures (false ideas that prompt much research of lasting value), mere fads (ideas that prompt much research of no lasting value), and unproductive failures (false ideas that prompt much research of no lasting value, thereby distracting attention from more productive avenues and wasting everyone's time, money, and effort). I agree there are such things as productive failures, but I'm not sure I agree that Hubbell 2001 or Webb et al. 2002 are good examples. I would argue, for instance, that much of the research prompted by Hubbell involves ecologists "learning the hard way": that is, relearning things that evolutionary biologists learned decades ago, such as what kind of data can or cannot detect non-zero selection coefficients.
Hey Jeremy, thanks for your comments! It was definitely an ambitious topic for a post, so I'm not surprised that some of the details might be arguable. Some of the work out of Dan Cohen's lab (1994 in Plant Species Biology) seems to precede Chesson and Huntly, but received about four citations, so that was what I had in mind. However, he was certainly a successful scientist in evolutionary ecology, so I didn't want to name him and make him sound like he had "failed". That said, probably both our labs do give Chesson and Huntly's paper more importance than most ecologists.
I totally agree with your last points about the difference between productive failures and unproductive failures; however, I'd disagree about which category Webb's and Hubbell's works fall into. I'd be curious which works you would consider productive failures.
Your example of Diamond (1975) is a good one. It both prompted a very useful conceptual and methodological debate that had a long-lasting effect on ecologists (though the effect eventually wore off) and ultimately led to a sustained experimental effort to measure competition in the field.
What a most depressing title.