Most research findings are false


It can be proven that most claimed research findings are false.

So says John Ioannidis in a short and highly readable article in PLoS Medicine (free access). He postulates that the probability of research findings being wrong is higher when:

1. The sample sizes are small
2. The estimated effects are small
3. There is flexibility in research designs, definitions, outcomes, etc.
4. There are financial or other interests and prejudices
5. The scientific field is 'hot', i.e. many scientific teams work against the clock to beat the competition.
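Ioannidis's claim rests on a simple Bayesian calculation: the post-study probability that a 'significant' finding is true (the positive predictive value, PPV) depends on the prior plausibility of the hypothesis, the study's power, and the significance level. A minimal sketch of that calculation, with illustrative numbers of my own choosing rather than figures from the article:

```python
def ppv(prior, power, alpha=0.05):
    """Probability that a statistically significant finding is actually true.

    PPV = (1 - beta) * pi / ((1 - beta) * pi + alpha * (1 - pi)),
    where pi is the prior probability the relationship is true,
    (1 - beta) is the power, and alpha is the significance level.
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A 'hot' field full of long-shot hypotheses tested by small,
# underpowered studies: most claimed findings are false.
print(round(ppv(prior=0.01, power=0.2), 3))   # -> 0.039

# A well-powered study of an a priori plausible hypothesis:
# most claimed findings hold up.
print(round(ppv(prior=0.5, power=0.8), 3))    # -> 0.941
```

The specific priors and power levels here are hypothetical; the point is only that the five factors above all push the prior or the power down, and hence the PPV down with them.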

He is right, but there is less to his thesis than meets the eye.

1. Not all scientific findings are born equal. Just because a few new papers based on a handful of observations claim that X has an effect on Y does not mean their findings are accepted as God-given truth. Some papers are more convincing than others, and readers assign each finding some probability of being wrong. The five factors above do increase the probability that a published finding is wrong, but they also increase the probability that the results will be taken with a pinch of salt.

2. Small effects are rarely important in a 'practical' sense - as economists would say, small effects are unlikely to have important 'policy implications'. The probability that a finding is 'false' diminishes when the estimated effect is large (as per Ioannidis's second corollary). Even if you argue that scientific results are often consumed by readers who are unable to assess them critically, it is unlikely that 'wrong' findings about small effects will have damaging consequences.

To cut a long story short, most research findings are indeed wrong. But does the ratio of false findings to total findings tell us much? Weight these findings by their credibility and importance, and the world of scientific discovery looks rosy again.
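The weighting point can be made concrete: the raw share of false findings can be high while the credibility-weighted share is low, provided readers heavily discount weak studies. A toy sketch with entirely invented findings and weights:

```python
# Each entry: (is_false, credibility weight a careful reader assigns).
# All numbers are invented for illustration.
findings = [
    (True,  0.1),  # small-sample result from a 'hot' field: heavily discounted
    (True,  0.1),
    (True,  0.2),
    (False, 0.9),  # large, well-designed study: taken seriously
    (False, 0.8),
]

# Unweighted: 3 of 5 findings are false.
raw_false_rate = sum(is_false for is_false, _ in findings) / len(findings)

# Weighted by credibility: false findings carry little weight.
weighted_false_rate = (sum(w for is_false, w in findings if is_false)
                       / sum(w for _, w in findings))

print(raw_false_rate)                  # -> 0.6
print(round(weighted_false_rate, 2))   # -> 0.19
```

So 'most findings are false' and 'the credible body of findings is mostly sound' can both be true at once, which is the post's point.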

Here's a WSJ article on the matter. HT to Ben Muse.

1 comment:

  1. Anonymous Says:

    "The five factors above do increase the probability that a published finding is wrong, but they also increase the probability the results will be taken with a pinch of salt."

Yet lots of very influential research in economics is based on small samples. E.g. in much of the literature on economic growth, the sample is a cross-section of country growth rates, which is a starting sample of under 100 once you knock out sub-Saharan Africa and countries dependent primarily on resource extraction.