The decline effect, explored in an article by Jonah Lehrer in the New Yorker, refers to a temporal decline in the size of an observed effect: for example, the therapeutic value of antidepressants appears to have declined threefold since the original trials.

This may be a result of selective reporting – scientists focus on results that are novel and interesting, even if they are in fact simply statistical outliers or, worse, the product of unconscious human bias. This is a troubling possibility; humans – scientists or not – are proficient pattern finders, but our subconscious (or conscious) beliefs influence what we search for. Lehrer argues that replication – the process of carrying out additional, comparable but independent studies – isn’t an effective part of the scientific method. After all, if study results are biased and replications don’t agree, how can we know what to trust?
Even if the decline effect is rampant, does it represent a failure of replicability? Lehrer states that replication is flawed because “it appears that nature often gives us different answers”. As ecologists, though, we know that nature doesn’t give different answers; we ask it different questions (or the same question in different contexts). Ecology is complex and context-dependent, and replication is about investigating the general role of a mechanism that may have been studied only in a specific system, organism, or process. Additional studies will likely produce slightly or greatly different results, and ideally these accumulate into a comprehensive understanding of the effect. The real danger is that scientists, the media, and journals over-emphasize the significance of initial, novel results, which haven’t been (and may never be) replicated.
Is there something wrong with the scientific method (which is curiously never defined in the article)? The decline effect hardly seems like evidence that we’re all wasting our time as scientists – for one, the fact that “unfashionable” results are still publishable suggests that replication is doing what it’s supposed to do: correcting for unusual outcomes and converging on something close to the average effect size. True, scientists are not infallible, but the strength of the scientific process today is that it doesn’t operate at the individual level: it relies on a scientific community made up of peers, reviewers, editors, and co-authors, and this, hopefully, encourages greater accuracy in our conclusions.
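To make the statistical point concrete, here is a minimal simulation sketch (my own illustration, not anything from Lehrer’s article; the “true” effect size, sample size, and significance threshold are made-up values). It shows how reporting only the initial studies that clear statistical significance inflates the apparent effect, while unfiltered replications regress back toward the real average – producing exactly the kind of “decline” the article describes.

# Illustrative only: hypothetical effect size, sample size, and threshold.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true standardized effect size
N = 30              # observations per study
CRITICAL_T = 2.0    # rough two-sided 5% threshold at this sample size

def run_study():
    """Simulate one study: return the estimated effect and its t-statistic."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    return mean, mean / se

# "Initial" findings: only studies reaching significance get reported.
published = [effect for effect, t in (run_study() for _ in range(5000)) if t > CRITICAL_T]

# Replications: every study counts, significant or not.
replications = [run_study()[0] for _ in range(5000)]

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean published initial effect: {statistics.mean(published):.2f}")    # inflated
print(f"mean replication effect:       {statistics.mean(replications):.2f}") # near the truth

Under these assumptions the selectively reported “initial” effect comes out roughly double the true value, while the replications average close to it – no mysterious force required, just selective reporting followed by regression to the mean.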