The causal impact framework is a conceptual framework encompassing internal validity, external validity, and population intervention effects; we argue that attending to all three can help us produce evidence of greater utility to public health decision-making.
To improve the health of a population, public health research should consider factors that can be changed, particularly exposures that are potential targets of intervention. Thus, useful public health research will focus on identifying causes of health and disease, rather than simply associated risk factors. Such causal public health (and clinical) research designs generally focus on ensuring internal validity1: generating an accurate estimate of a causal effect for the people in the study, such as that obtained from a double-masked randomized controlled trial with no loss to follow-up. In such a trial setting, we are typically interested in contrasting two exposure distributions: (1) what if everyone were assigned to the experimental treatment, and (2) what if everyone were assigned to the comparison treatment?
This focus on internal validity informs the conduct and reporting of randomized trials and the framing of observational data analysis. Yet, if our goal is to understand the potential impact of a specific public health policy in a real population, establishing internal validity is merely a first step; we must also consider external validity and population intervention impact, which together we describe as the causal impact framework.
A MOTIVATING EXAMPLE
Here we focus on generating evidence for public health action beginning with an individually randomized trial (or observational study conducted in place of a trial). In particular, consider a randomized trial of antiretroviral agents for preexposure prophylaxis for HIV prevention (PrEP), in which an active arm is compared with a placebo in a group of HIV-negative volunteers who are followed up through regular clinic visits to identify new HIV infections. With perfect follow-up, a comparison of unadjusted survival curves between the active and placebo trial arms will yield a valid estimate of the causal effect of assignment to PrEP on risk of HIV acquisition in the study sample.2
The difference in outcomes between participants assigned to each treatment arm, allowing for differential adherence, is typically referred to as the effectiveness of the treatment. We often prioritize effectiveness of an intervention strategy over its efficacy (effect under perfect adherence or in laboratory settings) because in the real world we cannot force people to adhere to an intervention. Accordingly, effectiveness is sometimes referred to as the public health effect of the treatment. However, further work is needed to translate effectiveness to an estimate of the impact of an intervention in a real population.
EXTERNAL VALIDITY
An essential ingredient of the causal impact framework is the ability to generalize or transport to populations beyond that under study; thus, we must consider possible differences in characteristics between our study sample and our target population, the population in which we want to implement the intervention.3 If the study sample is a random sample of the target population, external validity is guaranteed in expectation. However, study samples and target populations nearly always differ systematically because of factors such as inclusion and exclusion criteria in a trial and self-selection into studies through the informed consent process. These differences may modify the impact of the intervention in the target population, leading to external validity bias. In the trial just described, if PrEP is more effective at preventing HIV in women than in men, and if women were more likely to participate in the trial than men, then PrEP could appear more effective in the trial than it would be in the target population.
In such a simple case, the effectiveness of PrEP in the target population can be estimated by standardization. But if the effectiveness of the intervention varied because of a complex combination of several variables (e.g., gender, age, race), the joint distribution of which differed between the study sample and the target population, then model-based strategies are recommended.3 Of course, if one or more of those variables were unmeasured, no quantitative approach would be guaranteed to provide an accurate estimate of the true population effect (a close parallel to the problem of uncontrolled confounding in an observational study). There is a growing literature on quantitative methods to “generalize” results from a study sample to the population from which the sample was selected,3 and to “transport” results from a study sample to a different population entirely.4 However, even if we successfully estimate the effectiveness of a proposed intervention in a target population, in the causal impact framework we must make efforts to understand the effect of the intervention under real-world conditions.
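To make the standardization idea concrete, the toy calculation below reweights stratum-specific trial effects by the target population's covariate distribution. All numbers are invented for illustration: the risk differences and gender proportions are assumptions, not results from any actual PrEP trial.

```python
# Hypothetical stratum-specific risk differences (PrEP vs. placebo) from a trial,
# assuming the effect is modified by gender (illustrative values only):
trial_rd = {"women": -0.06, "men": -0.02}

# Assumed gender distribution in the study sample vs. the target population:
trial_prop = {"women": 0.70, "men": 0.30}
target_prop = {"women": 0.50, "men": 0.50}

def standardized_rd(stratum_rd, stratum_prop):
    """Average stratum-specific risk differences, weighted by a population's
    covariate distribution (standardization over a single modifier)."""
    return sum(stratum_rd[s] * stratum_prop[s] for s in stratum_rd)

rd_sample = standardized_rd(trial_rd, trial_prop)    # effect in the study sample: -0.048
rd_target = standardized_rd(trial_rd, target_prop)   # effect in the target population: -0.040
```

Because women (for whom the assumed effect is larger) are overrepresented in the study sample, the naive trial estimate overstates the benefit expected in the target population, which is exactly the external validity bias described above.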
POPULATION INTERVENTION EFFECTS
As noted earlier, in our trial setting, we are likely contrasting two exposure distributions: (1) what if everyone were assigned to PrEP, and (2) what if everyone were assigned to placebo? But the real-world effects of a population-level PrEP intervention (e.g., a countrywide policy to promote PrEP to all people who meet certain criteria) may well differ from what is estimated in a specific trial. On one hand, not everyone will be targeted by PrEP campaigns, nor will everyone targeted choose to take the treatment, and adherence may be better in a trial because of Hawthorne effects. On the other hand, preventing a single HIV infection with PrEP may prevent subsequent transmission events, a dependency not likely captured in a small study sample. Finally, implementation challenges may lead to adaptation of the intervention when scaling up from a study to a population.
Nonexperimental settings raise additional challenges: observational public health research often focuses on effects of harmful exposures, rather than on interventions to limit such exposures. Using the results of a study of the effect of a harmful exposure (e.g., smoking) to estimate the potential effect of a population intervention to reduce prevalence of that exposure (e.g., mass campaign for smoking prevention) requires articulation of assumptions about the intervention in question and its side effects,5,6 and careful estimation (possibly using the g-methods of Robins, as in Westreich6). Population intervention effects,5,7 which can be thought of informally as causal effects tied to contrasts between the observed population and exposure distributions under realistic interventions, are of key importance to the causal impact framework goal of translating scientific results into policy-relevant findings. Such effects may serve as more natural inputs into cost-effectiveness and decision-theoretic models than typically reported study results.
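As an informal sketch of the kind of contrast a population intervention effect represents, the calculation below compares the observed marginal risk with the risk under a realistic shift in exposure prevalence. Every quantity is invented for illustration, and the calculation assumes no confounding; the g-methods cited above exist precisely to handle the confounded, longitudinal settings this sketch ignores.

```python
# Hypothetical population intervention effect: contrast the observed risk with
# the risk under a campaign that halves smoking prevalence (invented numbers).
p_smoke_observed = 0.30   # assumed observed smoking prevalence
risk_if_smoke = 0.15      # assumed outcome risk among smokers
risk_if_not = 0.05        # assumed outcome risk among nonsmokers

def marginal_risk(p_exposed, r_exposed, r_unexposed):
    """Population-average risk at a given exposure prevalence,
    assuming no confounding (a simplification for illustration)."""
    return p_exposed * r_exposed + (1 - p_exposed) * r_unexposed

risk_observed = marginal_risk(p_smoke_observed, risk_if_smoke, risk_if_not)
risk_under_campaign = marginal_risk(p_smoke_observed / 2, risk_if_smoke, risk_if_not)
population_intervention_effect = risk_under_campaign - risk_observed  # -0.015
```

Unlike the usual "everyone exposed vs. no one exposed" contrast, this effect is anchored to the observed exposure distribution and a feasible intervention, which is why such estimates can feed more naturally into cost-effectiveness and decision models.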
REMARKS
There are numerous approaches for the estimation of policy-relevant public health effects, including large-scale representative and pragmatic randomized trials and comparative interrupted time series.1 Despite calls for wider adoption of these methods,1 traditional randomized trials and nonexperimental studies remain central to the production of evidence for public health practice. Such studies are valuable, but typically focus centrally on questions of internal validity, ignoring external validity and population intervention impact. Considering all three, as in the causal impact framework, may help us produce research more relevant to policymaking and, thus, help produce a Public Health of Consequence.
Footnotes
See also Galea and Vaughan, p. 973.
REFERENCES
1. Victora CG, Habicht JP, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health. 2004;94(3):400–405. doi:10.2105/ajph.94.3.400.
2. Murnane PM, Brown ER, Donnell D, et al. Estimating efficacy in a randomized trial with product nonadherence: application of multiple methods to a trial of preexposure prophylaxis for HIV prevention. Am J Epidemiol. 2015;182(10):848–856. doi:10.1093/aje/kwv202.
3. Cole SR, Stuart EA. Generalizing evidence from randomized clinical trials to target populations: the ACTG 320 trial. Am J Epidemiol. 2010;172(1):107–115. doi:10.1093/aje/kwq084.
4. Hernán MA, VanderWeele TJ. Compound treatments and transportability of causal inference. Epidemiology. 2011;22(3):368–377. doi:10.1097/EDE.0b013e3182109296.
5. Fleischer NL, Fernald LC, Hubbard AE. Estimating the potential impacts of intervention from observational data: methods for estimating causal attributable risk in a cross-sectional analysis of depressive symptoms in Latin America. J Epidemiol Community Health. 2010;64(1):16–21. doi:10.1136/jech.2008.085985.
6. Westreich D. From exposures to population interventions: pregnancy and response to HIV therapy. Am J Epidemiol. 2014;179(7):797–806. doi:10.1093/aje/kwt328.
7. Hubbard AE, van der Laan MJ. Population intervention models in causal inference. Biometrika. 2008;95(1):35–47. doi:10.1093/biomet/asm097.
