A new paper from Betini et al. in Royal Society Open Science contributes to this discussion by asking why ecologists don’t test multiple competing hypotheses (allowing efficient falsification à la Popper, or “strong inference”). Ecologists rarely test multiple competing hypotheses: Betini et al. found that only 21 of 100 randomly selected papers tested two hypotheses, and only 8 tested more than two. Multiple hypothesis testing is a key component of strong inference, and the authors hearken back to Platt’s 1964 paper “Strong Inference” to argue why ecologists should adopt it.
From Platt: “Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method-oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations.”
[An aside to say that Platt was a brutally honest critic of the state of science, and his grumpy complaints would not be out of place today. This makes reading his 1964 paper especially fun. E.g. “We can see from the external symptoms that there is something scientifically wrong. The Frozen Method. The Eternal Surveyor. The Never Finished. The Great Man With a Single Hypothesis. The Little Club of Dependents. The Vendetta. The All-Encompassing Theory Which Can Never Be Falsified.”]

Betini et al. list a number of common intellectual and practical biases that likely prevent researchers from using multiple hypothesis testing and strong inference. These range from confirmation bias and pattern-seeking to the fallacy of factorial design, which leads to unreasonably high replication requirements, including of uninformative factor combinations (see the sketch below). But the authors are surprisingly unquestioning about the utility of strong inference and multiple hypothesis testing for ecology. For example, Brian McGill has a great post highlighting the importance and difficulties of multi-causality in ecology – many non-trivial processes drive ecological systems (see also).
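To make the replication problem concrete, here is a minimal sketch (my own illustration, not from Betini et al.; all numbers are assumptions chosen for the example) of how the experimental units required by a full factorial design grow multiplicatively with each added factor:

```python
# Illustration: replication required by a fully crossed factorial design.
n_levels = 2              # levels per factor, e.g. treatment vs. control
replicates_per_cell = 5   # modest within-cell replication

for n_factors in range(1, 7):
    cells = n_levels ** n_factors          # factor combinations to test
    total_units = cells * replicates_per_cell
    print(f"{n_factors} factors -> {cells} cells, {total_units} units")
```

With just six two-level factors, a fully crossed design already demands 320 experimental units, many of them spent on factor combinations of little ecological interest.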
Another salient point is that falsification of hypotheses, which is central to strong inference, is especially problematic in ecology. There are many reasons an experimental result could be negative and yet not falsify a hypothesis. Data may be faulty in ways outside of our control: because of inappropriate scales of analysis, or because of limitations of human perception and technology. The data may be incomplete (for example, from a community that has not reached equilibrium); they may rely inappropriately on proxies; or there could be key variables that are difficult to control (see John A. Wiens' chapter for details). Even in highly controlled microcosms, variation arises and failures occur that are 'inexplicable' given our current ability to perceive and control the system.
Or the data might be accurate, but there are statistical issues to worry about, given that many effect sizes in ecology are small and replication can be difficult or limited (a hypothetical power simulation below illustrates this). Other statistical issues can also make falsification questionable – for example, the use of p-values as the ‘falsify/don’t falsify’ determinant, or the conflation of AIC model selection with true multiple hypothesis testing.
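As a hedged illustration of the replication point, consider a simple power simulation (my own sketch with assumed numbers, not an analysis from the paper): when the true effect is small and replication is low, a conventional test usually fails to reject the null, so a ‘negative’ result is weak grounds for falsification.

```python
# Hypothetical power simulation: a real but small effect is usually
# missed at low replication, so p > 0.05 is weak evidence against
# the hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
effect_size = 0.3   # small standardized effect (assumed)
n_reps = 10         # replicates per group (assumed)
n_sims = 10_000

rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_reps)
    treatment = rng.normal(effect_size, 1.0, n_reps)
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:
        rejections += 1

print(f"Power at n={n_reps} per group: {rejections / n_sims:.2f}")
# Roughly 0.10: the true effect goes undetected about 90% of the time.
```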
Perhaps one reason Bayesian methods are so attractive to many ecologists is that they reflect the modified approach we already use – developing priors based on our assessment of evidence in the literature, particularly verifications but also evidence that falsifies (for a better discussion of this mixed approach, see Andrew Gelman's writing). This is exactly where Betini et al.'s paper is especially relevant – intellectual biases and practical limitations matter even more outside of the strict rules of strong inference. It seems important for ecologists to address these biases as much as possible. In particular, we need better training in philosophical, ethical and methodological practices; priors, which are frequently amorphous and internal, should be externalized through meta-analyses and reviews that express the state of knowledge in unbiased fashion; and we should strive to formulate hypotheses that are specific and to identify their implicit assumptions.
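As a rough sketch of what ‘externalizing’ a prior might look like, assuming a normal-normal conjugate model in which a meta-analytic estimate serves as the prior (all numbers invented for illustration):

```python
# Sketch: a meta-analytic estimate as an explicit prior, updated by
# new data via standard normal-normal conjugate formulas.
import numpy as np

# Prior from a (hypothetical) meta-analysis of published effect sizes
prior_mean, prior_sd = 0.25, 0.10

# New (hypothetical) experimental observations of the same effect
data = np.array([0.05, 0.18, 0.12, 0.30, 0.22, 0.15])
obs_sd = 0.15        # assumed known observation-level SD
n = len(data)

# Precision-weighted combination of prior and data
prior_prec = 1 / prior_sd**2
data_prec = n / obs_sd**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data.mean()) / post_prec
post_sd = post_prec ** -0.5

print(f"Posterior effect estimate: {post_mean:.3f} +/- {post_sd:.3f}")
```

The point is not this particular model but that the prior becomes an explicit, criticizable quantity rather than an amorphous internal judgment.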