Eeny, meeny, miney, mo—children use these funny rhymes to select one of their peers by chance. The chosen lucky—or unlucky—one then has to chase the others in a game of hide-and-seek or prove his or her courage in some other contest. What children do while playing, life does every day, choosing the lucky individual to win the lottery or the unlucky one to die in an accident or from a disease.
Humankind has always looked for ways to predict these things—by consulting oracles or fortune-tellers, interpreting natural phenomena or asking some benevolent deity for a sign. But it was science that developed the most effective tool: statistical analysis, which predicts the odds that a certain event will take place under a given set of circumstances.
Over time, statistical analyses have gained huge importance in health, politics, economics and many other aspects of society. Medical statistical studies aim to predict, for instance, the chance of developing cancer after exposure to certain substances or the likelihood of developing breast cancer when a particular gene carries a certain mutation. Economists use statistical analysis and market indicators to decide whether interest rates should be raised or lowered. And lawmakers rely on such studies when, for instance, they decide whether a certain food additive is harmful and should therefore be banned.
Statistics reverberated through the media recently when the US National Institutes of Health (NIH) halted a clinical trial in July on hormone replacement therapy (HRT) and recommended that all 16 000 participating women immediately stop taking these drugs. The fact that HRT is linked to some increased health risks is not new, but the NIH revealed quite alarming numbers. Their study found that, out of 10 000 women taking a combination of estrogen and progestin for 5 years, eight more will develop breast cancer than among those not taking the hormones, another eight will have strokes and eight more will develop blood clots in their lungs. On the plus side, the study found that six fewer women will develop colorectal cancer and five fewer will have hip fractures. Twenty-four in 10 000 women with an increased risk of cancer, stroke and blood clots might not sound significant. But extrapolating the numbers to the ∼6 million women in the USA alone who have been prescribed HRT produces an overall figure of 14 400 more women at risk, so the NIH's decision to stop the HRT study is obviously the right one.
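The extrapolation above is simple enough to check. A minimal sketch, using only the figures quoted in this paragraph (the category names are labels chosen here for illustration):

```python
# Excess cases per 10 000 women taking combined HRT for 5 years,
# as quoted from the NIH study above.
excess_per_10000 = {
    "breast cancer": 8,
    "stroke": 8,
    "pulmonary blood clots": 8,
}

extra_risk = sum(excess_per_10000.values())     # 24 additional cases per 10 000
population = 6_000_000                          # ~US women prescribed HRT
extra_cases = extra_risk * population // 10_000 # scale the rate up to the population

print(extra_cases)  # 14400 more women at risk
```

The offsetting benefits quoted in the study (six fewer colorectal cancers, five fewer hip fractures per 10 000) would scale up the same way, which is why the net balance of risks, not the headline figure alone, matters for policy.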
This is a prime example of how such studies and statistics influence our lifestyle and how they have largely replaced the belief in fate or the actions of some higher power. Of course, statistical analysis has dramatically improved public health—studies have clearly shown that smoking increases the risk of lung cancer and cardiovascular disease, and they have led to a ban on many environmental pollutants and carcinogenic substances, such as DDT or the use of lead in paint or as a fuel additive. But statistics is a double-edged sword that can just as easily be exploited to create consumer scare stories or to draw conclusions that serve only particular interests. Women's and health magazines feature several pages variably informing their readers that, for example, coffee may or may not increase their risk of a heart attack, that drinking wine may or may not decrease their risk of cardiovascular disease or that electromagnetic emissions from cell phones may or may not increase their risk of brain cancer, depending on the latest study on these topics. And the number of statistical analyses showing that the release of carbon dioxide into the atmosphere is a cause of global climate change is matched by the number of studies showing that it is not.
This is not to say that the research is sloppy; rather, such contradictions indicate either that a causal link is missing or that the conclusions drawn from these studies are highly questionable. Many articles on the health risks or benefits of coffee, red wine or whatever else is the latest fad are based on medical studies performed by scientists and funded by scientific agencies. But despite these credentials, they do not necessarily create consumer confidence, particularly once they become a source of news stories.
The problem is statistics itself. The tool is so powerful that it is possible to 'prove' almost anything—for instance, that Londoners are more likely to suffer a heart attack than New Yorkers, or that French people are more likely to develop colorectal cancer than Russians. Often the result misses the real cause: maybe the French are more likely to develop colorectal cancer because they live longer on average than Russians; maybe Londoners are more likely to suffer a heart attack because the London Underground is more packed during rush hour, and thus more stressful, than New York's subway trains. There are so many confounding factors that it is often impossible to establish a link on the basis of statistical analysis alone. Furthermore, even convincing statistical data supported by scientific evidence, as in the NIH's HRT study, does not necessarily call for drastic measures. HRT effectively prevents the onset of osteoporosis in women at risk and is a huge relief for women who suffer from the sometimes debilitating symptoms of the menopause. The only firm conclusion so far from the debate surrounding HRT is that physicians have to be much more aware of their patients' specific needs when prescribing it.
Statistics gives hints and directions, but it is only a starting point for further research into a topic, to generate experimental data and ultimately establish a link between cause and effect. Lawmakers, too, have to keep these limitations of statistics in mind when they make decisions to protect citizens from potential harm. Or, as mathematicians say, 'Never believe a statistical analysis that you haven't done yourself.'
