American Journal of Public Health. 2015 Apr;105(4):665–669. doi: 10.2105/AJPH.2014.302433

Old Myths, New Myths: Challenging Myths in Public Health

Sarah M Viehbeck 1,, Mark Petticrew 1, Steven Cummins 1
PMCID: PMC4358183  PMID: 25713962

Abstract

Myths are widely held beliefs and are frequently perpetuated through telling and retelling. We examined 10 myths in public health research and practice. Where possible, we traced their origins, interrogated their current framing in relation to the evidence, and offered possible alternative ways of thinking about them. These myths focus on the nature of public health and public health interventions, and the nature of evidence in public health. Although myths may have some value, they should not be privileged in an evidence-informed public health context.


Myths are “beliefs held to be true despite substantial refuting evidence.”1(p447) They are frequently perpetuated through telling and retelling, and periodically there is a need to “bust” myths by examining them in relation to current evidence. In the public health context, this has been done for myths about systematic reviews,2,3 tobacco control,4 and obesity.1

THE MYTHS

The myths discussed here are focused on the nature of public health and public health interventions, and the nature of evidence in public health.

1. Public Health Interventions Inevitably Represent a “Nanny State”

It has been argued that public health interventions that seek to “interfere” with freedoms or “intervene” in the lives of citizens represent “nanny statism.” Nanny statism arises as a result of governments “telling people how to live their lives.”5(p146) The tension between libertarian and utilitarian perspectives on interventions has been discussed by the Nuffield Council on Bioethics, which recommends that the role of the state in relation to public health interventions should justify the intensity of intervention with the rationale and strength of evidence for intervening.5(pxix)

There are alternatives; recent policy directions of the current British government have suggested that a preferred approach is to “nudge” citizens toward healthy choices6 rather than to “nanny” the public by overly regulating environments. Although there is some evidence that nudging may work in shaping consumer behavior, there are also cases in which increased regulation may be the more effective route to achieving the scale of change needed to support population health outcomes.7 Many of the greatest public health achievements globally have been the result of regulatory efforts by governments to shape environments and industry behavior,8,9 and such efforts should be seen as components of the government’s role as a steward of the public’s health rather than as “nannying.”10

Nanny statism is not inevitably a bad thing; in some cases, state interventions are more effective than individualized approaches for problems that are not addressed, or are addressed inequitably, when left to individual choice. For example, in a recent systematic review in the area of alcohol, Martineau et al. found evidence of effectiveness across a number of different population-level interventions, many of them policy-oriented, with benefits generally outweighing harms, suggesting a role for government intervention.11

2. Prevention Should Require Fewer Resources

Benjamin Franklin’s expression “an ounce of prevention is worth a pound of cure” has become a widely stated idiom for public health,12–14 with some making the corollary assumption that prevention should also require fewer resources than other areas of health spending.15,16 Compared with the health care system, public health receives a small proportion of the overall investment in health and preventive services. In Canada, public health accounts for approximately 5% of health expenditures nationally17—it is closer to 3% in the United States.18 These figures capture a range of public health interventions beyond health promotion, disease prevention, and health inspection and may also include basic infrastructure and human resource costs.18

The comparatively small investment could lead to the false conclusion that public health interventions are therefore “cheap” (particularly when compared with health care interventions) and should require fewer resources. Although cost-effectiveness evidence for many public health interventions remains relatively limited (e.g., Owen et al.),19,20 and only 15% of the interventions examined yielded cost savings, it is clear that public health interventions can benefit from significant and sustained upstream investment. Consider, for example, the relationship between large-scale, well-resourced public health interventions such as the Massachusetts and California tobacco control programs and the corresponding large-scale population-level outcomes.21,22

3. The Only Job of Public Health Research Is to “Translate” Basic Science Discoveries

There is a critical role for public health in translational research and knowledge translation. Over the past 10 years, a considerable literature has accumulated on the distinctions among diffusion, dissemination, and knowledge translation, and on the nature of the public health evidence base and how it is generated to achieve greater impact.23 An increased focus on investing not only in basic biomedical research and the efficacy of highly controlled interventions, but also in implementation science (e.g., why public health interventions work, for whom, under what conditions, and at what cost), is imperative if the fields of dissemination and implementation research are to flourish.24

That said, translating public health research into action is not the only role for public health research. If the pipeline model of research is taken literally, public health research falls toward the end of a developmental pathway designed to move from bench to bedside and perhaps on to populations. Beyond the relevance and knowledge translation challenges already described along the pipeline,25,26 the model implies that the only role of public health research is to translate knowledge generated earlier in the pipeline, rather than also generating discoveries and new knowledge within the public health field itself, independent of scaling up from basic biomedical research or clinical interventions.

Basic theoretical and methodological work within public health is also needed to drive the field forward,27 as is work that challenges existing assumptions underpinning population health interventions.28 Furthermore, this myth may devalue developmental public health research, such as feasibility studies that are at too early a stage to be translated, or replication of intervention research; it may also place too great an emphasis on the impact of single studies that have yet to become part of a body of knowledge or to be synthesized with other evidence in, for example, a systematic review.

4. Ten Percent of an Intervention’s Cost Should Be Spent on Evaluation

One widely held and pervasive myth is that 10% of any program budget should be spent on evaluation. In unpacking this myth, it is helpful to think of evaluation as an intervention, with costs and benefits like any other intervention, at which point the 10% figure looks less convincing. We would not argue that 10% of a health budget should be spent on hip replacements or another medical intervention without a formal needs assessment. Some evaluations need more than 10%, and some less. The 10% rule of thumb is partly driven by funders’ expectations: Economic Opportunity Studies expects applicants to build in an evaluation budget that is 10% to 15% of the total budget.29

However, the exact figure fluctuates. It could be more than 10%: according to the US Administration for Children and Families, “A useful rule of thumb is to estimate that your evaluation will cost approximately 15 to 20 percent of your total program budget.”30(p30) It could be less: the W. K. Kellogg Foundation says that evaluation costs can be 5% to 7% of a project’s budget,31 and the International Labour Organization suggests a minimum of 2%.32 For some projects, the appropriate figure is not 10%, and may even be 0%: some interventions may not be worth evaluating, for example, if the intervention is implemented in such a way that an informative evaluation is not possible, or if a decision has already been taken about the program’s future and findings are unlikely to have relevance.33
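The scatter among these rules of thumb is easy to see with a little arithmetic. The sketch below is illustrative only: the $500,000 program budget is a hypothetical figure, not from the article, and each percentage range is simply the one cited in the text.

```python
# Illustrative sketch: evaluation budget ranges implied by the rules of
# thumb cited in the text, applied to one hypothetical program budget.
rules_of_thumb = {
    "Economic Opportunity Studies": (0.10, 0.15),
    "US Administration for Children and Families": (0.15, 0.20),
    "W. K. Kellogg Foundation": (0.05, 0.07),
    "International Labour Organization (minimum)": (0.02, 0.02),
}

program_budget = 500_000  # hypothetical program budget, in dollars

for funder, (low, high) in rules_of_thumb.items():
    # Convert each percentage range into a dollar range for this budget.
    print(f"{funder}: ${program_budget * low:,.0f} to ${program_budget * high:,.0f}")
```

For the same hypothetical program, the cited rules of thumb imply evaluation budgets anywhere from $10,000 (2%) to $100,000 (20%), a tenfold spread that underlines why no single percentage can serve as a universal rule.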

Overall, it is more likely that the budget for evaluation will depend on the nature of the intervention, its stage of development, its size, the value of the information that the evaluation is expected to provide, and how much is already known about the intervention. European Union guidance suggests that for large-scale relatively routine programs the budgets required for evaluation will be a small proportion of the program resources (normally less than 1%). On the other hand, for interventions that are relatively innovative in character, and in which evaluation has a strong learning and participatory aspect, the costs could constitute a relatively high proportion of program resources. Recent work by Leviton et al.34,35 has proposed a Systematic Screening and Assessment method to extend existing evaluability assessment methodologies by identifying which innovations are more likely to be effective and thereby most worth evaluating. The approach aims to improve allocation of evaluation resources, while also linking evaluation explicitly to issues of likely impact and scalability of interventions.

5. Benefits of Prevention Interventions Take a Long Time to Accrue

Although the burden of chronic disease is increasing globally, a frequently cited reason why prevention investments are harder to “sell” politically is that health impacts accrue too far in the future, potentially exceeding the time horizon of a typical electoral cycle.36 Furthermore, investment decisions may necessarily be influenced by crises: the short-term impacts of, say, an infectious disease outbreak, compared with the sustained impacts that increases in chronic disease prevalence may have over time.

Although there is no doubt that the long-term health benefits of prevention interventions may take time to be demonstrated, and that a longer-term perspective on health outcomes is critical to ensuring the long-term health of populations,35 there are examples of prevention interventions that have had significant impacts over relatively short timescales. The “top 10” public health achievements in the United States from 2001 to 2010 all involved interventions that demonstrated public health impact in a period of 10 years or less.9 Examples include workplace smoking bans, which led to reduced respiratory and sensory symptoms as well as reductions in hospital admissions for acute myocardial infarction within a relatively short period of their introduction,37 reductions in mortality following seat belt and air bag use,38 and reductions in smoking prevalence following the introduction of taxation measures.39 Furthermore, repeals of public health measures can also result in rapid behavior changes: in Finland, within 1 year of a policy change that reduced the tax on alcohol, alcohol-related deaths increased.40

To provide evidence on health impacts attributable to interventions, appropriate surveillance mechanisms must be in place to detect both exposure to the intervention and outcome measures. Such monitoring is a key input to ensuring that premature conclusions are not made regarding effectiveness or broader harms and benefits of interventions.

6. “If It Only Helps 1 Person, Then It's Worth Doing”

It is sometimes asserted that public health and health care interventions that are obviously beneficial are worth doing anyway, irrespective of their effectiveness, because if they “help just 1 person. . . .” One of us heard this view expressed about the World Health Organization Healthy Cities program, where debate about what effect it might have had was rejected as unimportant by a practitioner who argued that if it “had saved just 1 life” then it was worth it.

The myth is particularly prevalent in the case of screening programs, where the logic of trying to identify early cases of disease seems unassailable. However, for the argument to be true, it would require that no expenditure is too great to save a life. Even ineffective or harmful interventions are likely to save just 1 life in a large enough population. This does not make them “worth” doing.

7. The Public Health Evidence Base Is Weak

The argument is often made that the public health evidence base is “weak” compared with other areas (medicine for example). The assumption is that a large part of this perceived weakness is attributable to the lack of studies using randomized controlled trial (RCT) designs that, when taken as a body of evidence, will enable causal inferences to be made. There are several assumptions here that can be challenged. One is that the lack of trials is an indicator of a weak evidence base. Another is that there is something different, and perhaps “softer,” about public health science in its inability to marshal a large body of experimental evidence to inform decision-making. Many fields, however, have been built largely on observational evidence—astronomy, for example. In others, observational evidence is often used with caveats to inform major policy decisions (such as transport and climate change).

In public health, the use of nonexperimental methods is often entirely appropriate and sufficient, particularly when causal chains are short and effect sizes are large.41 Valuable evidence about public health interventions can be gathered through RCTs, non-RCTs, and many other types of research, particularly mixed-methods approaches, depending on the question and the level of certainty required from the answer.42 The prevailing view that every nonrandomized study in public health is either weak, or is simply a failed attempt at an RCT, needs to be challenged, particularly in view of the need to better address issues of external validity, which persist within some highly controlled studies. Defining observational evidence by what it is not (“nonrandomized”) is also simplistic. Observational studies are not simply “nonrandomized studies,” any more than RCTs are “nonobservational” studies.

8. Every Gap in the Evidence Base Needs Filling

We often point to the gaps in the public health evidence base as a problem. However, some of these gaps are problematic and some are not. Every evidence base—even in the hard sciences—has, and should have, gaps. Some gaps are there for a reason and should never be filled because they represent minor or unimportant questions, or low priorities, or are simply not fillable—because they represent unanswerable questions—such as whether, if all societies were completely reorganized along egalitarian lines, they would be more healthy.

Pointing to such research gaps implies that every intervention needs to be evaluated, and this cannot be the case. The “payback” from some evaluations in terms of the knowledge likely to be gained is likely to be so low that they are not worth conducting.32 What is needed is a greater focus, not on identifying gaps, but on identifying gaps that, when filled, will yield the greatest payback in terms of public health.43

9. There Is a Hierarchy of Evidence

One of the criticisms of the application of evidence-based thinking to public health is that it depends on the existence of a hierarchy of evidence, which may underprivilege public health research because of the relative absence of RCT evidence. This argument is often repeated and entirely spurious. The perhaps surprising fact is that there is no such thing as a hierarchy of evidence (outside such straw-man arguments).

What there is, however, is a hierarchy of evidence of efficacy or effectiveness.44 This was developed by the Canadian Task Force on the Periodic Health Examination to help decide on priorities when searching for studies to answer clinical questions about effectiveness, and was subsequently adopted by the US Preventive Services Task Force. Its origins have been said to derive from Campbell and Stanley’s seminal text on evaluation.45 It was not intended to be an overarching hierarchy of evidence for all types of questions. Yet, the original focus of the hierarchy—on effectiveness—is generally overlooked, and it is often assumed that it is a once-and-for-all hierarchy of evidence. Quite simply, there is no such thing.

10. Public Health Interventions Should Only Be Based on Research

The gap between research and its use in practice and policy exists in many disciplines, including public health.46 The phrase “evidence-based” may imply to practitioners and policymakers that researchers expect decisions to be based only on research evidence and nothing else. Such a worldview is not only unrealistic, but also may in fact obstruct public health action that will occur in the absence of research. To paraphrase Muir Gray, decisions should be based on the best evidence available as opposed to the best evidence possible.47

Thinking about evidence-based public health only in relation to research is inconsistent with more nuanced descriptions of evidence-based public health, which focus on the processes needed to ground public health decision-making in a wide evidence base that both informs decisions and interventions and evaluates them once implemented.48 This broadened understanding is more deliberately inclusive of the many inputs into public health decision-making alongside research evidence.49–51

In public health, the adaptation of interventions to local context is also often important. An intervention that is solely evidence-based, taken off the shelf, and implemented without adequate adaptation to context may prove less effective or may not be adjusted for critical implementation factors. As Green and Hawe et al. have each noted, a shift toward greater appreciation of contextualized evidence is needed: from “best practices” to “best processes,”52 and toward understanding and maintaining the “active ingredients” within interventions, preserving the functional (rather than compositional) fidelity that contributes to their effectiveness and implementation.53

REVIEW

We have presented a selection of salient and pervasive myths in public health research and practice, tracing their origins and arguing why they should not be accepted as truths. As the field continues to develop and debate, more myths will likely be generated, and we fully acknowledge that a larger set of myths exists in our field.

Limitations

The main limitation of the article is that the selected myths are not comprehensive: they were chosen by the authors for their relevance, informed by informal consultations with colleagues and drawing on relevant literature. Clearly, not all myths in public health have been covered, and some myths not discussed here have already been given thorough treatment elsewhere, for example: the myth that RCTs can only address questions of intervention effectiveness and not questions of external validity, such as “for whom” and “under what circumstances” interventions are effective41,54; the myth that public health relates only to the health sector55; the myth that waiting lists should exist only for health care56; and the myth that public health interventions are always beneficial.57

As new data become available, we encourage public health practitioners and researchers to dispel myths when opportunities arise, as part of a commitment to evidence-informed public health practice and to the evolution of public health training programs and curricula.

Conclusions

Myths are perpetuated by telling and retelling. They also have value: in other contexts they are a source of comfort, helping people explain where they come from and why the world is as it is. However, they should have no such privileged place in evidence-informed public health.

Acknowledgments

S. M. Viehbeck was funded through an Emerging Researcher Award through the Population Health Improvement Research Network and was supported as a Visiting Scholar to the London School of Hygiene and Tropical Medicine in 2013 by the Public Health Research Consortium. M. Petticrew receives support from the United Kingdom National Institute for Health Research (UK-NIHR) and the Medical Research Council Methodology Research Programme. S. Cummins is supported by a UK-NIHR Senior Research Fellowship.

The authors gratefully acknowledge the contributions of the anonymous reviewers to an earlier version of the article. The authors acknowledge Erica Di Ruggiero, Nancy Edwards, Penny Hawe, and Ken McLeroy for contributing to discussions that informed the ideas in the article.

Note. The views and opinions expressed herein are those of the authors and do not necessarily reflect those of the UK-NIHR or the UK Department of Health.

Human Participant Protection

No protocol approval was needed for this article because no human participants were involved.

References

1. Casazza K, Fontaine K, Astrup A, et al. Myths, presumptions, and facts about obesity. N Engl J Med. 2013;368(5):446–454. doi: 10.1056/NEJMsa1208051.
2. Petticrew M. Systematic reviews from astronomy to zoology: myths and misconceptions. BMJ. 2001;322(7278):98–101. doi: 10.1136/bmj.322.7278.98.
3. Moat KA, Lavis JN, Wilson MG, Røttingen JA, Bärnighausen T. Twelve myths about systematic reviews for health system policymaking rebutted. J Health Serv Res Policy. 2013;18(1):44–50. doi: 10.1258/jhsrp.2012.011175.
4. Frieden TR, Blakeman D. The dirty dozen: 12 myths that undermine tobacco control. Am J Public Health. 2005;95(9):1500–1505. doi: 10.2105/AJPH.2005.063073.
5. Nuffield Council on Bioethics. Public Health: Ethical Issues. Cambridge, England: Nuffield Council on Bioethics; 2007.
6. Haynes L, Service O, Goldacre B, Torgerson D. Test, Learn, Adapt: Developing Public Policy With Randomised Controlled Trials. London, England: The Cabinet Office Behavioural Insights Team; 2012.
7. Marteau TM, Ogilvie D, Roland M, Suhrcke M, Kelly M. Judging nudging: can nudging improve population health? BMJ. 2011;342:d228. doi: 10.1136/bmj.d228.
8. World Health Organization. WHO Framework Convention on Tobacco Control. 2013. Available at: http://www.who.int/fctc/en. Accessed July 24, 2013.
9. Domestic Public Health Achievements Team, Centers for Disease Control and Prevention. Ten great public health achievements—United States, 2001–2010. MMWR Morb Mortal Wkly Rep. 2011;60(19):619–623.
10. Jochelson K. Nanny or Steward? The Role of Government in Public Health. London, England: The King’s Fund; 2005.
11. Martineau F, Tyner E, Lorenc T, Petticrew M, Lock K. Population-level interventions to reduce alcohol-related harm: an overview of systematic reviews. Prev Med. 2013;57(4):278–296. doi: 10.1016/j.ypmed.2013.06.019.
12. McGraw-Hill Dictionary of American Idioms and Phrasal Verbs. 2002. Available at: http://idioms.thefreedictionary.com/ounce+of+prevention+is+worth+a+pound+of+cure. Accessed July 24, 2013.
13. Canadian Public Health Association to the Standing Senate Committee on Social Affairs, Science and Technology. Looking Back, Looking Forward: Public Health Within a Federal–Provincial/Territorial Health Transfer Agreement. Ottawa, ON: Canadian Public Health Association; 2011.
14. Department of Health and Human Services, Centers for Disease Control and Prevention. The “Ounce of Prevention” Campaign. 2008. Available at: http://www.cdc.gov/ounceofprevention. Accessed July 24, 2013.
15. Satcher D. The prevention challenge and opportunity. Health Aff (Millwood). 2006;25(4):1009–1011. doi: 10.1377/hlthaff.25.4.1009.
16. Woolf SH. The power of prevention and what it requires. JAMA. 2008;299(20):2437–2439. doi: 10.1001/jama.299.20.2437.
17. Canadian Institute for Health Information. National Health Expenditure Trends, 1975 to 2012. Ottawa, ON: Canadian Institute for Health Information; 2012:40.
18. Committee on Valuing Community-Based, Non-Clinical Prevention Programs, Board on Population Health and Public Health Practice, Institute of Medicine. An Integrated Framework for Assessing the Value of Community-Based Prevention. Washington, DC: Institute of Medicine of the National Academies; 2012.
19. Owen L, Morgan A, Fischer A, Ellis S, Hoy A, Kelly M. The cost-effectiveness of public health interventions. J Public Health (Oxf). 2012;34(1):37–45. doi: 10.1093/pubmed/fdr075.
20. Public Health Agency of Canada. Investing in Prevention: The Economic Perspective. Key Findings From a Survey of the Recent Evidence. Ottawa, ON: Public Health Agency of Canada; 2009.
21. Koh HK, Judge C, Robbins H, Cobb Celebucki C, Walker D, Connolly G. The first decade of the Massachusetts Tobacco Control Program. Public Health Rep. 2005;120(5):482–495. doi: 10.1177/003335490512000503.
22. Lightwood JM, Dinno A, Glantz S. Effect of the California Tobacco Control Program on personal health care expenditures. PLoS Med. 2008;5(8):e178. doi: 10.1371/journal.pmed.0050178.
23. Green LW, Ottoson JM, Garcia C, Hiatt RA. Diffusion theory, knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health. 2009;30:151–174. doi: 10.1146/annurev.publhealth.031308.100049.
24. Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–1281. doi: 10.2105/AJPH.2012.300755.
25. Green L. Making research relevant: if it is an evidence-based practice, where’s the practice-based evidence? Fam Pract. 2008;25(suppl 1):i20–i24. doi: 10.1093/fampra/cmn055.
26. Westfall JM, Mold J, Fagnan L. Practice-based research—“blue highways” on the NIH roadmap. JAMA. 2007;297(4):403–406. doi: 10.1001/jama.297.4.403.
27. Potvin L, Gendron S, Bilodeau A, Chabot P. Integrating social theory into public health practice. Am J Public Health. 2005;95(4):591–595. doi: 10.2105/AJPH.2004.048017.
28. Frohlich KL, Potvin L. Transcending the known in public health practice: the inequality paradox: the population approach and vulnerable populations. Am J Public Health. 2008;98(2):216–221. doi: 10.2105/AJPH.2007.114777.
29. The Evaluation Part of a Proposal Budget. Washington, DC: Economic Opportunity Studies; n.d.
30. Office of Planning, Research and Evaluation. The Program Manager’s Guide to Evaluation. 2nd ed. Washington, DC: Administration for Children and Families; 2010:6–12.
31. W. K. Kellogg Foundation Evaluation Handbook. Battle Creek, MI: W. K. Kellogg Foundation; 2010.
32. International Labour Organization. Partnerships and field support. 2013. Available at: http://www.ilo.org/pardev/lang--en/index.htm. Accessed July 24, 2013.
33. Ogilvie D, Cummins S, Petticrew M, White M, Jones A, Wheeler K. Assessing the evaluability of complex public health interventions: five questions for researchers, funders, and policymakers. Milbank Q. 2011;89(2):206–225. doi: 10.1111/j.1468-0009.2011.00626.x.
34. Leviton L, Gutman A. Overview and rationale for the systematic screening and assessment method. New Dir Eval. 2010;125:7–31.
35. Leviton LC, Kettel Khan L, Rog D, Dawkins N, Cotton D. Evaluability assessment to improve public health policies, programs, and practices. Annu Rev Public Health. 2010;31:213–233. doi: 10.1146/annurev.publhealth.012809.103625.
36. Graham H. Where is the future in public health? Milbank Q. 2010;88(2):149–168. doi: 10.1111/j.1468-0009.2010.00594.x.
37. Callinan JE, Clarke A, Doherty K, Kelleher C. Legislative smoking bans for reducing secondhand smoke exposure, smoking prevalence and tobacco consumption. Cochrane Database Syst Rev. 2010;14(4):CD005992. doi: 10.1002/14651858.CD005992.pub2.
38. Crandall CS, Olson L, Sklar D. Mortality reduction with air bag and seat belt use in head-on passenger car collisions. Am J Epidemiol. 2001;153(3):219–224. doi: 10.1093/aje/153.3.219.
39. Jha P, Chaloupka FJ. The economics of global tobacco control. BMJ. 2000;321(7257):358–361. doi: 10.1136/bmj.321.7257.358.
40. Koski A, Sirén R, Vuori E, Poikolainen K. Alcohol tax cuts and increase in alcohol-positive sudden deaths: a time-series intervention analysis. Addiction. 2007;102(3):362–368. doi: 10.1111/j.1360-0443.2006.01715.x.
41. Craig P, Cooper C, Gunnell D, et al. Using natural experiments to evaluate population health interventions: new Medical Research Council guidance. J Epidemiol Community Health. 2012;66(12):1182–1186. doi: 10.1136/jech-2011-200375.
42. Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Soc Sci Med. 2012;75(12):2299–2306. doi: 10.1016/j.socscimed.2012.08.032.
43. Robinson KA, Saldanha IJ, Mckoy NA. Frameworks for Determining Research Gaps During Systematic Reviews. Methods Future Research Needs Reports, no. 2. Rockville, MD: Agency for Healthcare Research and Quality; 2011.
44. Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003;57(7):527–529. doi: 10.1136/jech.57.7.527.
45. Glasziou P, Vandenbroucke J, Chalmers I, Lind J. Assessing the quality of research. BMJ. 2004;328(7430):39–41. doi: 10.1136/bmj.328.7430.39.
46. Green LW, Mercer SL. Can public health researchers and agencies reconcile the push from funding bodies and the pull from communities? Am J Public Health. 2001;91(12):1926–1929. doi: 10.2105/ajph.91.12.1926.
47. Muir Gray JA. Evidence-Based Healthcare: How to Make Health Policy and Management Decisions. New York, NY; Edinburgh, Scotland: Churchill Livingstone; 1998.
48. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201. doi: 10.1146/annurev.publhealth.031308.100134.
49. Sweet M, Moynihan R. Improving Population Health: The Uses of Systematic Reviews. New York, NY: Milbank Memorial Fund, Centers for Disease Control and Prevention; 2007.
50. Ciliska D, Thomas H, Buffett C. An Introduction to Evidence-Informed Public Health and a Compendium of Critical Appraisal Tools. Hamilton, ON: National Collaborating Centre for Methods and Tools; 2008.
51. Bosch-Capblanch X, Lavis JN, Lewin S, et al. Guidance for evidence-informed policies about health systems: rationale for and challenges of guidance development. PLoS Med. 2012;9(3):e1001185. doi: 10.1371/journal.pmed.1001185.
52. Green LW. From research to “best practices” in other settings and populations. Am J Health Behav. 2001;25(3):165–178. doi: 10.5993/ajhb.25.3.2.
53. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ. 2004;328(7455):1561–1563. doi: 10.1136/bmj.328.7455.1561.
54. Moore L, Moore GF. Public health evaluation: which designs work, for whom and under what circumstances? J Epidemiol Community Health. 2011;65(7):596–597. doi: 10.1136/jech.2009.093211.
55. Adelaide Statement on Health in All Policies: Moving Towards a Shared Governance for Health and Well-Being. Adelaide, Australia: World Health Organization, Government of South Australia; 2010.
56. Edwards NC, Riley BL. Can we develop wait lists for public health issues? CMAJ. 2006;174(6):794–796. doi: 10.1503/cmaj.050731.
57. Macintyre S, Petticrew M. Good intentions and received wisdom are not enough. J Epidemiol Community Health. 2000;54(11):802–803. doi: 10.1136/jech.54.11.802.
