Journal of Epidemiology and Community Health. 2007 Nov;61(11):931.

When do we know enough to recommend action? The need to be bold but not reckless

Paula Braveman 1
PMCID: PMC2465621  PMID: 17933948

Our scientific training instils the need to recognise and explicitly acknowledge the limitations of our findings as the basis for policy recommendations. It is a matter of ethics (being truthful) and of our reputations as scientists. When we present our results, we therefore take pains to state the caveats, such as potential biases, lack of statistical significance and uncertain generalisability, that could alter conclusions. “On the one hand this, but on the other hand that” rarely provides guidance for practical decisions, however, and policymakers generally tune this out.

My colleagues and I were recently commissioned to make recommendations regarding a large public programme targeting particular health inequalities, which were not narrowing despite years of programme efforts. Our task was to recommend whether/how the programme should change. We reviewed literature, made site visits, and consulted programme staff, key informants, and a community advisory board. Although we have always tried to make our research relevant to policy, this was a different proposition altogether. Neither “On the one hand…” nor “More research is needed...” would be helpful to the decision‐makers, yet more is unknown than known about the causes and prevention of the health inequalities of concern. It seemed clear that the largely clinical, downstream approaches used to date were not yielding results, but most literature on alternative interventions was methodologically weak. There were biologically plausible hypotheses with some, but not conclusive, supporting evidence, suggesting promising but largely untested alternatives focused at least somewhat more upstream than the existing programme model. Lacking definitive evidence of effective interventions to reduce the health inequalities of concern, how could we responsibly recommend action?

Our thinking evolved while wrestling with this dilemma. Cost was a prime consideration; acting on misguided recommendations could waste scarce resources, and disparities might even widen. At the same time, these potential costs should be weighed against the continuing human and economic costs of the status quo—that is, persistent large disparities in serious health outcomes. This kind of trade‐off is rarely considered. Like others,1,2 we realised that “gold standard” evidence of effectiveness from randomised controlled trials is rarely available for upstream interventions targeting root causes of health inequalities such as low educational attainment, poverty and racism, and the disempowerment they foster; only downstream approaches such as medical care methods generally have such evidence to back them.

It is reckless to recommend a direction for which there is no scientific basis, especially if there are well‐substantiated alternatives. It is another thing entirely, however, to recommend an approach that has: (1) strong biological plausibility based on current knowledge of relevant causal pathways; (2) some, albeit inconclusive, evidence of effectiveness for the desired purpose, which is at least as strong as evidence supporting existing/alternative approaches; (3) likely feasibility; and (4) a well‐documented role in improving other important outcomes (in this case, other related health inequalities). Acquiring solid knowledge about the effectiveness of upstream approaches requires testing them on a large scale in diverse populations and settings, using the most rigorous designs possible, which calls for creativity. We need bold, but not reckless, experiments to test the most promising, plausible, and theoretically sound interventions to reduce health inequalities, and this requires enlisting policymakers. Perhaps the way could be paved by increasing policymakers' understanding of the limitations of the downstream approaches that have predominated for decades, with costs incommensurate with outcomes. Others have struggled with this challenge,3,4,5 and hopefully many more will wrestle with it in the future, providing guidance not only for researchers but for those enlightened policymakers who use research to inform their work.

References

1. Jackson N, Waters E. Criteria for the systematic review of health promotion and public health interventions. Health Promot Int 2005;20:367–374.
2. Kelly MP, Bonnefoy J, Morgan A, et al. The development of the evidence base about the social determinants of health. Geneva: World Health Organization Commission on Social Determinants of Health, Measurement and Evidence Knowledge Network, 2006.
3. Anderson LM, Brownson RC, Fullilove MT, et al. Evidence‐based public health policy and practice: promises and limits. Am J Prev Med 2005;28(Suppl):226–230.
4. Macintyre S. Evidence based policy making. BMJ 2003;326:5–6.
5. Kaufman JS, Kaufman S, Poole C. Causal inference from randomized trials in social epidemiology. Soc Sci Med 2003;57:2397–2409.
