Published in final edited form as: Prev Sci. 2014 Dec;15(6):789–798. doi: 10.1007/s11121-013-0429-z

Research Priorities for Economic Analyses of Prevention: Current Issues & Future Directions

D. Max Crowley, Laura Griner Hill, Margaret R. Kuklinski, Damon E. Jones

Abstract

In response to growing interest in economic analyses of prevention efforts, a diverse group of prevention researchers, economists, and policy analysts convened a scientific panel on “Research Priorities in Economic Analysis of Prevention” at the 19th annual conference of the Society for Prevention Research. The panel articulated four priorities that, if followed in future research, would make economic analyses of prevention efforts easier to compare and more relevant to policymakers and community stakeholders. These priorities are: (1) increased standardization of evaluation methods, (2) improved economic valuation of common prevention outcomes, (3) expanded efforts to maximize evaluation generalizability and impact, and (4) enhanced transparency and communicability of economic evaluations. In this paper we define three types of economic analyses in prevention, provide context and rationale for these four priorities and related sub-priorities, and discuss the challenges inherent in meeting them.

Keywords: Economic Analysis, Benefit-Cost, Economics of Prevention, Prevention Efficiency


Growing recognition of the costs that mental, emotional, and behavioral disorders impose on society, in conjunction with the current fiscal climate, has underscored the need for high-quality economic evaluations of evidence-based prevention programs (Kilburn & Karoly, 2008; Kuklinski, Briney, Hawkins, & Catalano, 2011; Miller & Hendrie, 2008; O’Connell, Boat, & Warner, 2009). Such analyses may be used to inform policy, guide community prevention efforts, and increase programmatic efficiency (i.e., return on investment; Aos et al., 2011; Aos, Lieb, Mayfield, Miller, & Pennucci, 2004; Crowley, Jones, Greenberg, Feinberg, & Spoth, 2012; Kuklinski et al., 2012). While some rigorous economic evaluations of prevention programs are available, existing work illustrates the complexities that arise in conducting them. Of central concern to researchers and those hoping to use their findings is the host of assumptions required to carry out these complex studies. When assumptions are not explicitly articulated or are applied inconsistently, research findings lack transparency and are difficult to compare, which in turn limits the utility of this research for decision making (Foster, Dodge, & Jones, 2003; Foster, Porter, Ayers, Kaplan, & Sandler, 2007). Consequently, the field would greatly benefit from clear research priorities and best practices that would increase the comparability, transparency, and impact of future efforts.

In this paper we identify and elaborate on research priorities that could guide future economic analyses of prevention. These priorities were initially articulated as part of a scientific panel (Research Priorities in Economic Analysis of Prevention; Crowley & Hill, 2011) held at the 19th annual meeting of the Society for Prevention Research. We intend this document to be a starting point for much-needed dialogue around how to facilitate high-quality economic evaluations of prevention programs. Panelists at the meeting identified four priorities: (1) increased standardization of evaluation methods, (2) improved economic valuation of common prevention outcomes, (3) expanded efforts to maximize evaluation generalizability and impact, and (4) enhanced transparency and communicability of economic evaluations. We first provide a brief background on economic analyses in prevention research. We then discuss these priorities, identify challenges to progress in each area, and propose suggestions for meeting those challenges.

Economic Analyses in Prevention Research

Economic analyses include a variety of techniques for quantifying the resources required to install, run, and maintain prevention efforts, and they frequently link resource investments to prevention outcomes (Drummond, 2005; Weinstein, Siegel, Gold, Kamlet, & Russell, 1996). Here we describe three analytic approaches often employed to study such investments, including cost analysis, cost-effectiveness analysis, and benefit-cost analysis (Haddix, Teutsch, & Corso, 2003). Each can be employed to understand the costs and outcomes experienced by different stakeholders affected by new programs, practices, or policies.

Cost Analysis

Cost analyses are the foundation of all economic evaluations of prevention programs (Haddix et al., 2003); they identify, quantify, and value the resources consumed by a new or existing effort (Anderson, Bowland, Cartwright, & Bassin, 1998; Foster et al., 2007). In economic terms, costs represent the value lost when resources are used for one purpose rather than another, generally referred to as an opportunity cost. Cost analyses go beyond accounting exercises and budget evaluations, which may only consider direct labor, materials, and capital investment, by valuing all resource inputs involved in carrying out a program or policy (e.g., volunteer time and the value of donated space). Researchers conducting cost analyses also capture a program’s direct costs as well as major secondary or induced costs, such as participant transportation to a parenting program, which would typically fall outside the scope of an accounting approach. Analysts must be as inclusive as possible and define what constitutes a “cost” as early as possible in an evaluation. Without accurate and robust cost estimates, more complex economic analyses can be misleading and result in erroneous conclusions about the economic value of a program or policy (Crowley et al., 2012; Drummond, 2005; Haddix et al., 2003).

Cost-Effectiveness Analysis

Cost-effectiveness analysis (CEA) goes beyond cost analysis by linking resource investments to program outcomes. It is often employed to understand the cost of achieving a unit of outcome, such as the cost per case of disease prevented (Brouwer, Koopmanschap, & Rutten, 1997; Russell, Gold, Siegel, Daniels, & Weinstein, 1996). CEA is generally used to compare the costs of different interventions or intervention components, for example, to compare the cost per case of reduced teen substance use across two different effective prevention programs (Dino, Horn, Abdulkadri, Kalsekar, & Branstetter, 2008; Russell et al., 1996). CEAs typically include calculation of incremental cost-effectiveness ratios, which represent the marginal investment required to achieve an additional unit of improvement in an outcome (e.g., a 1% drop in underage drinking arrests, or one additional quality-adjusted life year). Different programs can be compared to determine which requires the fewest resources per gain in outcome (Gold, Stevenson, & Fryback, 2002). CEAs are useful for considering program investments in relation to program outcomes expressed in their naturally occurring units. However, they can be unwieldy when programs affect multiple outcomes, which is a common characteristic of evidence-based prevention that targets early risk and protective factors (Kuklinski et al., 2012).
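
In standard notation, the incremental cost-effectiveness ratio described above compares an intervention (subscript 1) to its comparator (subscript 0):

```latex
\mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0}
```

where C denotes cost and E effectiveness measured in natural units, so the ratio reads as additional dollars per additional unit of outcome gained.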

Benefit-Cost Analysis

A third analytic technique, benefit-cost analysis (BCA), is similar to CEA in that it links resource investments to program outcomes. In BCA, however, outcomes are monetized, that is, translated from their naturally occurring units (e.g., a case of addiction averted) into dollars or another currency. Dollars reflect the costs avoided or revenues generated because of an intervention. Because benefits are expressed in monetary terms and capture outcomes accruing over many years to multiple stakeholders, it is relatively easy to sum across outcomes to calculate the cumulative long-term benefit of a program or policy, which makes this tool potentially powerful and attractive to stakeholders interested in allocating scarce resources efficiently. Results of BCAs are generally summarized in terms of the program’s costs, benefits, and net present value (i.e., a program’s benefits minus its costs, after discounting and adjusting for inflation). This net present value is generally considered the “gold standard” summary measure of a BCA. Studies also typically report the benefit-cost ratio (i.e., benefits divided by costs per participant), commonly referred to as the return on investment from a prevention effort. Beyond these summary estimates, rigorous economic evaluations can be used to identify which intervention components contribute most to a program’s costs and which outcomes contribute most to its economic benefits.
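
In standard form, these summary measures are discounted sums over the analytic horizon T, where B_t and C_t are the benefits and costs accruing in year t and r is the discount rate:

```latex
\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},
\qquad
\mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t}
```

A program is judged economically favorable when its NPV exceeds zero (equivalently, when its BCR exceeds one).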

Although BCAs in the field of prevention were initially employed to evaluate early childhood education programs (Karoly, 1998), in recent years they have been used to assess prevention programs addressing a broad set of issues for youth at different ages (Lee et al., 2012), including substance abuse, delinquency, and health. However, prevention scientists have used BCAs infrequently, in part because of the complexity of monetizing prevention outcomes. The Society for Benefit-Cost Analysis has drafted principles and standards for conducting these complex analyses, which may be useful to prevention scientists (Zerbe, Davis, Garland, & Scott, 2010). The Pew Center on the States is also assisting state governments in applying BCAs to help ensure that scarce resources are being invested in effective and cost-beneficial programs (The Pew Charitable Trusts, 2012).

Research Priorities in Economic Analysis of Prevention

While examples of each of these analytic techniques are available in prevention research, their execution is not yet guided by a common framework for estimating costs or benefits (Foster et al., 2003; Wolfenstetter, 2011). Panelists identified four priority areas and ten sub-priorities within those areas that are intended to foster integration of various approaches and strengthen the utility of economic analyses to prevention research. We summarize these priorities in Table 1 and discuss them in the remainder of this paper. We begin each section by identifying issues of concern and then turn to priorities for advancing this work.

Table 1.

Research Priorities for Economic Analyses of Prevention Programs, Policies and Practices

  1. Increased standardization of methods
      1.1 Consistent approach for categorizing costs
      1.2 Measurement of comparison group costs
      1.3 Evaluation of uncertainty
  2. Improved estimates of prevention’s economic value
      2.1 Update, refine, and broaden current estimates of distal outcomes
      2.2 Estimate the value of proximal prevention outcomes
      2.3 Better understanding of the impact of scale-up on prevention efficiency
  3. Maximizing evaluation generalizability and impact
      3.1 Economic analysis of prevention in real-world settings
      3.2 Identify promising areas to conduct economic analysis
  4. Improve transparency and communicability
      4.1 Develop guidelines to support openness and usability
      4.2 Employ multiple perspectives

Increase Standardization of Methods

The first priority area pertains to increasing standardization of the methods used in economic analyses of prevention. The failure of current economic evaluations of prevention programs to follow a consistent methodology has made it difficult to compare findings across studies. Methodological inconsistency has arisen largely because economic analyses have been adapted to and developed within various disciplines (e.g., business, environmental, transportation, and health economics), each with its own methodological traditions and issues of concern. To date, economic analyses in prevention have been influenced by these different disciplines rather than being bound by a single, widely accepted analytic framework. For example, BCA studies may differ in whether they focus on actual versus projected benefits, or whether they include quality-of-life gains from prevention outcomes or limit benefits to those that are tangible and easily monetized. These key decisions have a direct effect on the conclusions reached, as well as on the ability to compare results across studies (see Beatty, 2009; Kilburn & Karoly, 2008).

Economic analyses are also driven by multiple assumptions that may differ across research studies (Diehr, Yanez, Ash, Hornbrook, & Lin, 1999; Drummond, 2005). For instance, there is great debate about the appropriate rate for discounting future benefits to present-value terms (that is, acknowledging that a dollar received far in the future is not worth as much as a dollar received today, and discounting accordingly). Analysts typically use annual discount rates ranging from 3% to 7%, and the choice can dramatically affect study conclusions (Claxton et al., 2006; Gravelle, Brouwer, Niessen, Postma, & Rutten, 2007; Lazaro, 2002; Weinstein et al., 1996). For instance, a researcher applying a 5% annual discount rate to a preschool program whose economic benefits do not appear until participants enter the workforce will reach a much different result than one applying a 3% rate. Similarly, decisions about which outcomes should be valued, which inflation index is appropriate, and which costs should be included differ across studies and may result in widely differing conclusions. Greater consensus with respect to analytic procedures and assumptions would increase the comparability and rigor of economic evaluations in prevention. SPR panelists identified three standardization sub-priorities: (1) develop a consistent approach for categorizing costs, (2) measure comparison group costs accurately, and (3) employ methods for evaluating estimate uncertainty. A brief illustration of the discount-rate issue follows.
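
The sensitivity to the discount rate can be shown with a minimal sketch; the program parameters below (a $10,000 earnings benefit realized 15 years after a preschool program) are hypothetical and chosen only for illustration:

```python
# Hypothetical example: a preschool program whose economic benefit
# ($10,000 per participant in earnings gains) is not realized until
# participants enter the workforce, roughly 15 years post-program.

def present_value(future_benefit: float, rate: float, years: int) -> float:
    """Discount a benefit received `years` from now to present-value terms."""
    return future_benefit / (1.0 + rate) ** years

benefit, horizon = 10_000.0, 15
for rate in (0.03, 0.05, 0.07):
    print(f"{rate:.0%} discount rate: ${present_value(benefit, rate, horizon):,.0f}")

# The same nominal benefit is worth roughly $6,400 at 3% but only about
# $3,600 at 7% -- nearly a twofold difference across the common range of rates.
```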

(1) Consistent Approach for Categorizing Costs

Different types of prevention programs have different associated costs. Organizing cost data can be overwhelming (Foster et al., 2007; Yates, 1996), particularly when the goal is to determine the economic cost of large-scale prevention efforts that receive support from and interact with multiple service systems (schools, public health agencies, universities, extension services; Crowley et al., 2012; Kuklinski et al., 2011). Greater consistency in the way costs are assessed would increase the comparability and utility of cost analyses (French, Salomé, Sindelar, & McLellan, 2002; Yates, 1994).

Micro-costing procedures, sometimes referred to as an ‘ingredients-based’ approach, are increasingly employed in prevention research; they group costs into functional activities or cost categories (Yates, 1994). Prevention researchers have incorporated these methods into early frameworks for guiding cost analyses of prevention initiatives (Crowley et al., 2012; Foster et al., 2007), but more work is needed to facilitate robust and comparable cost estimates.

Developing categories that have broad applicability to prevention efforts is one approach that could be employed to increase consistency across economic analyses of prevention. For example, many prevention programs require investments in system readiness, including training and technical assistance, which can be separated from ongoing prevention program operating costs (Crowley et al., 2012). Monitoring and evaluation constitute another cost category relevant to many types of prevention programs, particularly as the need for accountability increases. Within a prevention program type (e.g., early childhood education, juvenile justice, substance abuse prevention), it may be possible to achieve even greater consistency in cost categories. Program evaluators should balance the need for detail about resource consumption with the need for applicability across a wide range of prevention efforts. For example, not every prevention effort requires a cost category for participant incentives, but most would include a category for recruitment costs, and incentives could be subsumed into this more general category.
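
A minimal sketch of this kind of ingredients-based bookkeeping; the categories follow the scheme discussed above, and all figures are hypothetical:

```python
# Hypothetical ingredients-based cost summary for a single program site.
cost_categories = {
    "system readiness (training, technical assistance)": 12_500.00,
    "program operation (staff time, materials)": 48_200.00,
    "recruitment (including participant incentives)": 6_300.00,
    "monitoring and evaluation": 9_000.00,
    "donated resources (volunteer time, space)": 7_750.00,  # an economic, not accounting, cost
}

total_cost = sum(cost_categories.values())
n_participants = 400
print(f"Total economic cost: ${total_cost:,.2f}")
print(f"Cost per participant: ${total_cost / n_participants:,.2f}")
```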

A related issue concerns the treatment of accounting versus economic costs, from both theoretical and practical standpoints: accounting costs refer to budgetary expenditures, whereas economic costs include the opportunity costs of resource allocation. Disagreement within the field about how to account for volunteer time illustrates these issues; the question is particularly important to community-based prevention efforts that rely heavily on volunteers (Crowley et al., 2012; Kuklinski et al., 2012). Some argue that volunteer time should be valued at the cost of the volunteer’s leisure time (i.e., the actual wage rate); others contend this time should be valued at the wage-plus-benefits rate for the job category of the volunteer work performed (targeted wages); and still others argue that the net cost is zero because costs are offset by the benefits the individual enjoys from volunteering. The relevance of volunteer costs also depends on the cost perspective. From a societal perspective, where all resource use is considered, volunteer costs are relevant. From a taxpayer perspective, the focus is on costs incurred by taxpayers, so volunteer time is of less concern. This example highlights the need to delineate accounting and economic costs.
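
The competing valuation rules can be made concrete with a small sketch (the hours and wage rates are hypothetical):

```python
# Three contested approaches to valuing 200 hours of volunteer time.
hours = 200
actual_wage = 15.00    # volunteer's own wage rate (leisure-time valuation)
targeted_wage = 22.50  # wage plus benefits for the job category performed

valuations = {
    "leisure time (actual wage)": hours * actual_wage,
    "targeted wage (wage + benefits)": hours * targeted_wage,
    "net-zero (offset by benefits to the volunteer)": 0.0,
}
for rule, value in valuations.items():
    print(f"{rule}: ${value:,.2f}")
# The same labor input is valued at $3,000, $4,500, or $0 depending on the rule.
```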

(2) Measurement of Comparison Group Costs

Prevention program effects are typically defined in relation to a comparison group, with the highest research standard being a randomized controlled trial. The relevant costs are those reflecting the incremental or marginal prevention activity undertaken, and in well-controlled studies the calculation of incremental costs for intervention versus control groups may be straightforward. However, estimation of prevention costs in real-world service settings is more complicated, since comparison groups may be exposed to no intervention or to a variety of prevention efforts with a range of costs. In such service contexts, comparison group members are likely exposed to some prevention effort, which has an associated cost (e.g., Ringwalt et al., 2008), rendering invalid the assumption of no intervention, and thus no cost (Ramsey et al., 2005). Evaluators must carefully define and estimate treatment and comparison group prevention program exposure and use this to guide cost analyses (Hawkins et al., 2008; Kuklinski et al., 2012). It is quite possible for a program’s incremental or marginal implementation costs to be lower than its initial implementation costs (i.e., start-up costs and capacity building). This distinction is important for communities considering whether to replace an existing prevention program with an alternative evidence-based program. For instance, Safe and Drug Free Schools (SDFS) provided funding for the vast majority of US schools to deliver school-based substance abuse prevention efforts (Noble, 2002). When a school adopted a new evidence-based program, it could discontinue an existing program and reallocate its SDFS funding to the new program, so the marginal cost was much lower than the total implementation cost.

(3) Evaluation of Uncertainty

It is essential that researchers evaluate how analytic assumptions and related uncertainty influence final cost, cost-effectiveness, or benefit-cost results (Briggs, Sculpher, & Buxton, 1994; Haddix et al., 2003). Uncertainty is inherent in economic analyses and arises from multiple sources. Model uncertainty concerns issues such as whether all relevant costs have been captured in the analysis or whether benefits models are well conceptualized. Measurement uncertainty concerns precision in measurement and issues such as sample representativeness. Parameter uncertainty concerns decisions such as which discount rate to use. Variability in some assumptions may have little impact (i.e., the estimates are robust), but changing others may greatly influence results, so it is important to quantify the effects of uncertainty. Two methods are most often employed. First, analysts can conduct post-hoc sensitivity analyses by varying parameters to examine the effect of parameter uncertainty on results (Claxton et al., 2006). For instance, a one-way sensitivity analysis can be used to test the implications of a particular assumption (e.g., a 3% versus 5% discount rate). The second method, Monte Carlo analysis, offers the chance to examine the effect of multiple sources of measurement and parameter uncertainty simultaneously. This powerful tool simulates an economic analysis through a large number of iterations, making random draws of parameter estimates from their sampling distributions. Monte Carlo analysis can provide a much greater understanding of the implications of uncertainty, which is useful for decision makers (Aos et al., 2004; Doubilet, Begg, Weinstein, Braun, & McNeil, 1985). For instance, these analyses can be used to estimate the likelihood that an investment in a prevention program will result in a positive return, as well as the likelihood that it will not.
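
A minimal Monte Carlo sketch in this spirit; the parameter distributions (normally distributed costs and benefits, a uniform discount rate, a ten-year lag to benefits) are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2013)
n_sims = 100_000

# Hypothetical parameter distributions (all values invented):
cost_pp = rng.normal(loc=800.0, scale=100.0, size=n_sims)       # cost per participant
benefit_pp = rng.normal(loc=2_400.0, scale=900.0, size=n_sims)  # undiscounted benefit
rate = rng.uniform(0.03, 0.07, size=n_sims)                     # discount-rate uncertainty
years_to_benefit = 10

# Net present benefit per participant under each simulated draw:
net_benefit = benefit_pp / (1.0 + rate) ** years_to_benefit - cost_pp

print(f"Mean net benefit per participant: ${net_benefit.mean():,.0f}")
print(f"Probability of a positive return: {(net_benefit > 0).mean():.1%}")
```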

Improved Estimates of Prevention’s Economic Value

Prevention efforts generally seek to reduce risk factors and strengthen protective factors to enhance positive youth development (e.g., reduce substance abuse, obesity, and violence; increase educational attainment; Durlak, 1998; Hawkins, Catalano, & Miller, 1992). As noted, the goal of benefit-cost analysis in prevention is to estimate the economic or dollar value of program effects in terms of increased revenues or cost savings in the near and long term. Some prevention outcomes have direct financial consequences. Reducing crime, for example, leads directly to reductions in costs to the criminal justice system and to crime victims. Other prevention outcomes achieve financial benefits because they are linked theoretically and empirically to increased revenues or avoided costs. For example, prevention effects on education can be monetized because there is broad-based scholarly agreement that increased education leads, on average, to increased earnings (Heckman, 2006; Karoly, 1998). Increased earnings benefit program participants as well as broader society through increased tax revenue (Mishan & Quah, 2007).

In their latest benefit-cost analysis of prevention and youth development programs, Aos and colleagues monetize a broad set of prevention outcomes based on their extensive review of the literature, including, for example, reduced crime, reduced substance use initiation, and better mental health (Lee et al., 2012). This work represents a major step forward in efforts to value prevention outcomes, but much work remains. The SPR panel identified three valuation-related sub-priorities: (1) the need to update and refine current estimates of distal prevention outcomes, (2) the need to estimate the value of proximal prevention outcomes, and (3) the need to better understand how the value of prevention efforts is influenced as they go to scale.

(1) Update, Refine, and Broaden Current Estimates of Distal Outcomes

Estimates of prevention’s long-term effects rely on strong theory, supported by empirical data, linking present to future behavior. Because it is not always feasible to follow participants over the long term, benefit-cost analyses typically incorporate projection or simulation models linking today’s prevention outcomes to avoided costs or increased revenues over many years. The strongest models rely on a foundation of empirical studies and national databases. For example, benefits from increased educational attainment are informed by national earnings data from the Current Population Survey and fringe benefits data from the Bureau of Labor Statistics (Bureau of Labor Statistics and the Census Bureau, 2012). Long-term benefits from programs that prevent youth smoking rely on a body of research indicating that delaying smoking initiation in adolescence reduces the likelihood of adult smoking, as well as CDC data establishing causal links between smoking and health concerns (U.S. Department of Health and Human Services, 2012). National databases and scholarly evidence are not static; analysts should therefore routinely update models in light of current evidence, particularly as population norms change.

Analysts also should refine estimates as more nuanced data become available. For example, evidence may emerge showing that the value of prevention varies across groups that differ in risk (Foster, Jones, & the Conduct Problems Prevention Research Group, 2006; Reynolds, Temple, White, Ou, & Robertson, 2011). Developments in addiction science indicate that different substances are abused by different populations and that the consequences of abuse vary across groups and drugs (e.g., racial disparities in sentencing for drug crimes; Hansen, Oster, Edelsberg, Woody, & Sullivan, 2011; Kautt & Spohn, 2002; Nicosia, Pacula, Kilmer, Lundberg, & Chiesa, 2009). Benefits models should evolve in line with advances in research findings and data sources.

The range of monetizable outcomes should also expand with scientific advances. Just as benefit-cost analyses have broadened from an initial focus on early childhood intervention to a broad set of youth development programs, they will need to continue to grow as effective prevention programs expand their reach. For instance, effective prevention of HIV or obesity, as well as promotion of social-emotional learning, should be associated with future cost savings.

(2) Estimate the Value of Proximal Prevention Outcomes

A large body of research demonstrates the importance of intervening early in development to maximize the impact of prevention efforts (Heckman, Moon, Pinto, Savelyev, & Yavitz, 2010; Heckman, 2006), yet many benefit-cost analyses do not place an economic value on proximal prevention outcomes. Instead, they examine the long-term implications of outcomes such as higher high school graduation rates or lower crime (Belfield, Nores, Barnett, & Schweinhart, 2006). For example, follow-up evaluations of home visiting programs two decades post-intervention show significant differences in education and criminal justice system costs as well as increased participant employment and earnings, but most do not present savings estimates for shorter-term outcomes (Heckman et al., 2010; Karoly, 1998). Miller, Levy, Spicer, and Taylor (2006) are an exception: by estimating the short-term costs associated with underage drinking, they allow a more complete accounting of program impact. Evaluations that incorporate both short- and long-term monetary consequences would provide a more nuanced, complete, and compelling picture of the economic benefit of preventive interventions.

Further research might incorporate contingent valuation methods to reveal individuals’ willingness to pay for short-term outcomes such as improved relationship quality or decreased personal suffering, where cost data are largely nonexistent. For example, Corso and colleagues found that respondents valued the prevention of a single child-maltreatment death at about $15 million based on results from a willingness-to-pay survey (Corso, Fang, & Mercy, 2011). These findings were in sharp contrast to earlier research using a human capital approach, which estimated that each death from child maltreatment costs society an average net present value of $1 million in unrealized future wages (Corso, Mercy, Simon, Finkelstein, & Miller, 2007). Careful use of contingent valuation techniques can supplement information gained from human capital approaches.

(3) Better Understanding of the Impact of Scale-up on Prevention Efficiency

A major goal of prevention science is to achieve large-scale prevention efforts capable of making a sustained public health impact (Olds, Hill, O’Brien, Racine, & Moritz, 2003; Spoth & Greenberg, 2005; Welsh, Sullivan, & Olds, 2009). As prevention efforts expand and require increasing resources, concern about the economic value of prevention grows (Flay et al., 2005; O’Connell et al., 2009). It is important to understand how a prevention program’s effectiveness and economic value change as it goes to scale and serves much broader populations (e.g., from losses in implementation quality or gains in economies of scale; Aos et al., 2011; Olds et al., 2003; Welsh et al., 2009). In particular, there is a need to understand the economic value of cultivating a prevention programming infrastructure capable of adopting, implementing, and sustaining evidence-based preventive interventions (Crowley et al., 2012). Specific questions include whether the additional reach provided by these systems justifies their cost; whether there are economies of scale that result in lower per-participant costs; and whether there are optimal configurations for allocating resources to different system areas (Anderson et al., 1998; Crowley et al., 2012; Foster, Prinz, Sanders, & Shapiro, 2008).

Additionally, researchers need to further explore how variability in programming dynamics such as recruitment efficacy, intervention fidelity, and participant engagement affects the economic value of scaled prevention efforts. For example, decision makers may wish to know whether some aspects of programs can be trimmed to reduce costs (Akerlund, 2000; August, Bloomquist, Lee, Realmuto, & Hektner, 2006; Gruen et al., 2008). Use of large sample sizes, sub-group analyses to tailor findings to specific populations, and state-of-the-art approaches for statistical control (e.g., propensity analyses; Foster, 2003; McCaffrey, Ridgeway, & Morral, 2004) may enhance confidence in model estimates. Specifically, employing different approaches for statistical control may allow researchers to evaluate program impact with quasi-experimental data and thus understand how taking a program to scale will ultimately influence its benefits for different groups.

Maximizing Evaluation Generalizability & Impact

Economic evaluations are useful in guiding decisions about where to allocate scarce public resources in real-world settings (Cookson, Drummond, & Weatherly, 2009; Hoffmann et al., 2002), yet many economic evaluations are conducted in the context of carefully controlled research trials. Thus there is a need for greater evaluation of programs operating in actual delivery settings (August et al., 2006; Crowley et al., 2012; Flay et al., 2005). The tradeoff for increasing the external validity of such analyses is a reduced capacity to exert design control (e.g., randomization) to maintain adequate internal validity and make reliable causal inferences (Crowley, Coffman, Feinberg, & Spoth, in press; Foster, 2003; Hill, Goates, & Rosenman, 2010). Because of the limited funding available for economic analyses of prevention programs, panelists agreed that prevention scientists need to target specific domains of prevention that are likely to demonstrate the often assumed, but rarely evaluated, efficiencies gained from prevention over treatment strategies. Two sub-priorities related to maximizing the generalizability and impact of economic analyses in prevention include (1) identifying approaches that allow researchers to estimate costs and benefits of prevention programs in real-world settings, and (2) identifying promising substantive areas of prevention programming that would benefit most from economic analysis.

(1) Economic Analyses of Prevention in Real World Settings

Economic analyses of efficacy trials are unlikely to generalize to real-world settings (Drummond, 2005; Karoly, 2010). For example, the loss of fidelity when prevention programs are translated to communities may result in smaller program effects (Aos et al., 2004; Elliott & Mihalic, 2004); the same is true when programs are delivered to populations different from those targeted in clinical trials. Few studies evaluate how such dynamics influence the value of prevention efforts, in part because analytic methods for design control (i.e., experimental designs) generally limit our ability to estimate ‘real-world’ program effects accurately. However, econometric and statistical approaches offer additional opportunities to maintain the internal validity of causal inferences, and thus of economic evaluations, in non-experimental program evaluation. Such approaches include propensity score matching, inverse probability weighting, and principal stratification based on the potential outcomes framework (Cole & Hernan, 2008; Frangakis & Rubin, 2002; Lunceford & Davidian, 2004; Rosenbaum & Rubin, 1983, 1984). These approaches may be used to create treatment and comparison groups from quasi-experimental or observational data that approximate those found in randomized controlled trials and to control for selection effects that threaten the validity of causal inference in non-experimental evaluations (Hill et al., 2010). Employing such methods allows researchers to loosen design control and evaluate the effects of implementation variability, which is more consistent with real-world implementation of prevention programs; a brief sketch follows.
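
A minimal inverse-probability-weighting sketch under the potential-outcomes framing described above. The data are synthetic, the selection mechanism is invented, and a real evaluation would also check overlap and covariate balance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic observational data: units self-select into a prevention
# program as a function of baseline risk (a selection effect).
risk = rng.normal(size=n)
p_enroll = 1 / (1 + np.exp(-(0.8 * risk - 0.2)))
treated = rng.binomial(1, p_enroll)
outcome = 2.0 - 0.5 * treated + 1.0 * risk + rng.normal(size=n)  # true effect: -0.5

# A naive comparison of means is confounded by risk:
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Estimate propensity scores, then form inverse-probability weights:
X = risk.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
w = treated / ps + (1 - treated) / (1 - ps)

# Weighted difference in means (Hajek-style IPW estimator):
ipw = (np.sum(w * treated * outcome) / np.sum(w * treated)
       - np.sum(w * (1 - treated) * outcome) / np.sum(w * (1 - treated)))

print(f"Naive estimate: {naive:+.2f}   IPW estimate: {ipw:+.2f}   (truth: -0.50)")
```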

(2) Identify Promising Areas to Conduct Economic Analyses

Prevention scientists have successfully developed evidence-based programs in a variety of domains, including substance abuse, violence, obesity, and conduct disorder (Bierman et al., 2004; Greenberg, Domitrovich, & Bumbarger, 2001; O’Connell et al., 2009). Some substantive areas are more developed than others, and the level of efficacy and effectiveness evidence can vary greatly; some areas may have only one or two ‘promising programs’ that have yet to undergo full randomized controlled trials. To maximize the value of allocating limited research funding to economic analyses, it is important to acknowledge that some areas and programs have a greater level of readiness to undergo such evaluations. Further, some programs are effective but have had low uptake by local stakeholders and may not be ideal candidates for economic evaluation (e.g., Ringwalt et al., 2008). Lastly, because of the cost of conducting economic analyses and the need for appropriate comparison groups, there is value in seeking out ongoing trials of multiple programs that are being delivered in real-world settings. As a result, in identifying promising areas of prevention that should undergo economic analyses, it is valuable to consider (1) the maturity of the research area, (2) the adoptability of the program, and (3) the possibility of leveraging ongoing multi-program trials.

Improve Transparency & Communicability

Economic evaluations are often information intensive: they require multiple types of analyses to estimate program costs and benefits, and they require researchers to make numerous assumptions that can greatly affect findings (Cookson et al., 2009). Ultimately, the complexity of such evaluations can undermine their credibility (Adler & Posner, 1999; Williams, 1972). Further, technical aspects of economic analysis are difficult to translate to decision makers in straightforward terms without glossing over important details. In light of these concerns, the panel identified a fourth priority area pertaining to the need for improved transparency and communicability of economic analyses of preventive interventions. In particular, the panel identified two sub-priorities: (1) develop guidelines that support analytic transparency and (2) consider the value of prevention programs from multiple analytic perspectives.

(1) Develop Guidelines to Support Openness & Usability

A major priority identified by the panel was the need for a set of guidelines that clearly describe which aspects of an economic analysis should be reported in papers and reports. Specifically, certain aspects of economic evaluations are vulnerable to researchers’ analytic assumptions, and those areas should be described in a uniform manner and tested with sensitivity analyses (Haddix et al., 2003; Lave & Joshi, 1996). In these cases each assumption should be clearly described and defended. Further, there was general agreement that, in situations where space or time is limited, such descriptions could be provided through freely accessible online appendices. The development of such guidelines has standardized and improved reporting of both randomized controlled and quasi-experimental trials (e.g., Transparent Reporting of Evaluations with Non-Randomized Designs; Armstrong et al., 2008). Drummond and Jefferson (1996) provide guidelines for authors and reviewers who are evaluating the quality of economic analyses in manuscript submissions; their early work should serve as a starting point for developing guidelines that standardize and increase the transparency of reporting economic analyses. Standardization would also facilitate meta-analyses and would increase the number of systematic reviews eligible for placement in review repositories such as the UK’s Health Economic Evaluations Database (Cochrane Collaboration, 2012).

Economic analyses can produce a variety of estimates, many of which may be qualified based upon sensitivity analyses (Drummond, 2005; Haddix et al., 2003). As a result, it can be difficult to communicate evaluation results effectively. The panel agreed that this translational process could be facilitated through the development of a standardized approach for communicating the results of economic analyses. Such an approach would involve a clear outline that could be provided to policy makers regarding what assumptions were made, how they were tested, and how they influenced the results. The Consolidated Standards of Reporting Trials (CONSORT) statement, which requires authors to complete a checklist of over 20 items in reporting clinical trials and has been adopted by over 150 medical and psychological journals (Armstrong et al., 2008), exemplifies a standardized reporting approach that provides multiple advantages. It offers needed guidance for authors and simplifies reporting; increases transparency; enables comparison across studies; and eases the interpretation of methods and results. Economic analyses of prevention would benefit from similar reporting guidelines.

In addition to the need for greater transparency, the field may benefit from working with decision scientists and communication experts to understand how findings can be presented to provide the greatest utility to decision makers. For instance, the Washington State Institute for Public Policy has refined a ‘portfolio-based approach’ for facilitating policy-makers’ investment decisions around crime prevention. This approach and others could be tested in small field studies through initiatives to facilitate evidence-based policy making (e.g., The Pew Center on the States’ Results First Project, The Coalition for Evidence-Based Policy, The Vera Institute of Justice’s Cost-Benefit Analysis Unit).

(2) Employ Multiple Perspectives

In economic analysis, researchers select an analytic perspective that provides a reference point for the rest of the analysis (Drummond, 2005; Weinstein et al., 1996). Often researchers take a societal perspective, but a societal perspective alone may not provide decision makers with enough information to guide their resource allocations. In response, panelists identified the need to consider multiple analytic perspectives within the same evaluation. In particular, beyond a societal perspective, it is ideal to consider subsets of individuals (e.g., taxpayers, direct service recipients). For instance, a cost to society is not always a cost to the taxpayer. Prevention programs often rely on in-kind donations and volunteer labor to carry out the effort successfully, costs that the taxpayer does not incur (August et al., 2006; Gruen et al., 2008; Savaya & Spiro, 2011). While it is important to quantify these resources from a societal perspective, as they may be essential to the effort’s success, taxpayers may also be more likely to support an initiative if they realize they are not shouldering the total cost alone. Further, a benefit to society may not always result in benefits to participants. Economic evaluations that assess program impact on participants allow future participants to make more informed decisions before enrolling. Ultimately, all relevant stakeholders’ perspectives should be considered in order to maximize the utility of economic analysis and facilitate communication; the sketch below shows how the tally shifts with perspective.
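
A minimal sketch of perspective-dependent accounting (all figures hypothetical):

```python
# Who bears each cost determines which perspective counts it.
costs = {
    "taxpayer-funded staff and materials": 54_500.00,
    "in-kind donations (space, supplies)": 5_000.00,
    "volunteer labor (targeted-wage valuation)": 4_500.00,
}

societal_cost = sum(costs.values())  # all resources consumed, regardless of payer
taxpayer_cost = costs["taxpayer-funded staff and materials"]  # public outlays only

print(f"Societal perspective: ${societal_cost:,.2f}")
print(f"Taxpayer perspective: ${taxpayer_cost:,.2f}")
```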

Future Directions

Resources for prevention efforts are growing scarcer, both because of tightening budgets and because of the flood of new, untested prevention programs competing with existing evidence-based efforts. Thus it is essential to build a strong evidence base that demonstrates the economic efficiency of rigorous preventive intervention. To facilitate this work, the four priority areas identified by the panel can serve as a guide for developing best practices for conducting and communicating economic analyses of prevention. However, it is important to remember that a primary goal of economic analysis is to enable allocation of resources in a manner that reflects stakeholder and societal values, and that methodological practices reflect decisions about those values. Specifically, evaluators should be transparent not only about the methods used, but also about the values that underlie analytic assumptions. For instance, presentation of multiple stakeholder perspectives, or of multiple sets of results based on a variety of assumptions, may encourage policy makers to consider more than bottom-line summary measures (e.g., sensitivity analyses in which the estimated economic value of pain and suffering is and is not included). Furthermore, a blind focus on improving and standardizing methods may obscure both the difficulties and the opportunities of these evaluations in practice. In particular, one of the greatest benefits of economic analyses is that they force researchers to conduct systems analyses that would otherwise be neglected; this requires a degree of innovation and flexibility that any new standards should take into account. In this context, by attending to the priorities and issues outlined here, prevention researchers can refine the analytic tools needed to develop more efficient programs, provide a deeper understanding of the costs required to take evidence-based prevention programs to scale, and make the case for the value of prevention more persuasively.

Acknowledgments

We gratefully acknowledge the insight of all panel members involved in the SPR 2011 and 2012 meetings, including Steve Aos, Washington State Institute for Public Policy; Jon Baron, Coalition for Evidence-Based Policy; Lynn Karoly, RAND Corporation; and Beverlie Fallik, Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration. We also appreciate the contributions of Lauren Supplee, Office of Research Planning and Evaluation, Department of Health and Human Services, and Kenneth Dodge, Center for Child & Family Policy, Duke University.

Contributor Information

D. Max Crowley, Duke University.

Laura Griner Hill, Washington State University.

Margaret R. Kuklinski, University of Washington.

Damon E. Jones, The Pennsylvania State University.

References

  1. Adler MD, Posner EA. Rethinking Cost-Benefit Analysis. The Yale Law Journal. 1999;109(2):165. doi: 10.2307/797489.
  2. Akerlund KM. Prevention program sustainability: The state’s perspective. Journal of Community Psychology. 2000;28(3):353–362. doi: 10.1002/(SICI)1520-6629(200005)28:3<353::AID-JCOP9>3.0.CO;2-6.
  3. Anderson D, Bowland B, Cartwright W, Bassin G. Service-Level Costing of Drug Abuse Treatment. Journal of Substance Abuse Treatment. 1998;15(3):201–211. doi: 10.1016/S0740-5472(97)00189-X.
  4. Aos S, Lee S, Drake EK, Pennucci A, Klima T, Miller M, Burley M. Return on Investment: Evidence-Based Options to Improve Statewide Outcomes. 2011. Retrieved from http://www.wsipp.wa.gov/pub.asp?docid=11-07-1201.
  5. Aos S, Lieb R, Mayfield J, Miller M, Pennucci A. Benefits and Costs of Prevention and Early Intervention Programs for Youth. 2004. Retrieved from http://www.wsipp.wa.gov/pub.asp?docid=04-07-3901.
  6. Armstrong R, Waters E, Moore L, Riggs E, Cuervo LG, Lumbiganon P, Hawe P. Improving the reporting of public health intervention research: advancing TREND and CONSORT. Journal of Public Health. 2008;30(1):103–109. doi: 10.1093/pubmed/fdm082.
  7. August GJ, Bloomquist ML, Lee SS, Realmuto GM, Hektner JM. Can Evidence-Based Prevention Programs be Sustained in Community Practice Settings? The Early Risers’ Advanced-Stage Effectiveness Trial. Prevention Science. 2006;7:151–165. doi: 10.1007/s11121-005-0024-z.
  8. Beatty AS. Strengthening benefit-cost analysis for early childhood interventions: workshop summary. Washington, DC: National Academies Press; 2009.
  9. Belfield C, Nores M, Barnett S, Schweinhart L. The High/Scope Perry Preschool Program: Cost-benefit analysis using data from the age-40 follow-up. Journal of Human Resources. 2006;41(1):162–190.
  10. Bierman KL, Coie JD, Dodge KA, Foster EM, Greenberg MT, Lochman JE, Pinderhughes EE. The Effects of the Fast Track Program on Serious Problem Outcomes at the End of Elementary School. Journal of Clinical Child & Adolescent Psychology. 2004;33:650–661. doi: 10.1207/s15374424jccp3304_1.
  11. Briggs A, Sculpher M, Buxton M. Uncertainty in the economic evaluation of health care technologies: The role of sensitivity analysis. Health Economics. 1994;3:95–104. doi: 10.1002/hec.4730030206.
  12. Brouwer WBF, Koopmanschap MA, Rutten FFH. Productivity costs in cost-effectiveness analysis: numerator or denominator: a further discussion. Health Economics. 1997;6(5):511–514. doi: 10.1002/(SICI)1099-1050(199709)6:5<511::AID-HEC297>3.0.CO;2-K.
  13. Bureau of Labor Statistics and the Census Bureau. Current Population Survey (CPS). 2012. Retrieved from http://www.census.gov/cps/
  14. Claxton K, Sculpher M, Culyer A, McCabe C, Briggs A, Akehurst R, Brazier J. Discounting and cost-effectiveness in NICE - stepping back to sort out a confusion. Health Economics. 2006;15(1):1–4. doi: 10.1002/hec.1081.
  15. Cochrane Collaboration. Health Economic Evaluations Database (HEED) extended to all Cochrane contributors. Campbell & Cochrane Economics Methods Group; 2012. Retrieved from http://www.cochrane.org/news/blog/health-economic-evaluations-database-heed-extended-all-cochrane-contributors.
  16. Cole S, Hernan M. Constructing Inverse Probability Weights for Marginal Structural Models. American Journal of Epidemiology. 2008;168(6):656–664. doi: 10.1093/aje/kwn164.
  17. Cookson R, Drummond M, Weatherly H. Explicit incorporation of equity considerations into economic evaluation of public health interventions. Health Economics, Policy and Law. 2009;4(2):231. doi: 10.1017/S1744133109004903.
  18. Corso P, Mercy J, Simon T, Finkelstein E, Miller T. Medical Costs and Productivity Losses Due to Interpersonal and Self-Directed Violence in the United States. American Journal of Preventive Medicine. 2007;32(6):474–482.e2. doi: 10.1016/j.amepre.2007.02.010.
  19. Corso PS, Fang X, Mercy JA. Benefits of Preventing a Death Associated With Child Maltreatment: Evidence From Willingness-to-Pay Survey Data. American Journal of Public Health. 2011;101(3):487–490. doi: 10.2105/AJPH.2010.196584.
  20. Crowley DM, Jones DE, Greenberg MT, Feinberg ME, Spoth R. Resource Consumption of a Diffusion Model for Prevention Programs: The PROSPER Delivery System. Journal of Adolescent Health. 2012;50(3):256–263. doi: 10.1016/j.jadohealth.2011.07.001.
  21. Diehr P, Yanez D, Ash A, Hornbrook M, Lin DY. Methods for analyzing health care utilization and costs. Annual Review of Public Health. 1999;20(1):125–144. doi: 10.1146/annurev.publhealth.20.1.125.
  22. Dino G, Horn K, Abdulkadri A, Kalsekar I, Branstetter S. Cost-Effectiveness Analysis of the Not On Tobacco Program for Adolescent Smoking Cessation. Prevention Science. 2008;9(1):38–46. doi: 10.1007/s11121-008-0082-0.
  23. Doubilet P, Begg CB, Weinstein MC, Braun P, McNeil BJ. Probabilistic sensitivity analysis using Monte Carlo simulation: A practical approach. Medical Decision Making. 1985;5(2):157–177. doi: 10.1177/0272989X8500500205.
  24. Drummond MF. Methods for the economic evaluation of health care programmes. Oxford; New York: Oxford University Press; 2005.
  25. Drummond MF, Jefferson T. Guidelines for authors and peer reviewers of economic submissions to BMJ. BMJ. 1996;313:275–283. doi: 10.1136/bmj.313.7052.275.
  26. Durlak JA. Common risk and protective factors in successful prevention programs. American Journal of Orthopsychiatry. 1998;68(4):512–520. doi: 10.1037/h0080360.
  27. Elliott DS, Mihalic S. Issues in Disseminating and Replicating Effective Prevention Programs. Prevention Science. 2004;5:47–53. doi: 10.1023/B:PREV.0000013981.28071.52.
  28. Flay BR, Biglan A, Boruch RF, Castro FG, Gottfredson D, Kellam S, Ji P. Standards of Evidence: Criteria for Efficacy, Effectiveness and Dissemination. Prevention Science. 2005;6:151–175. doi: 10.1007/s11121-005-5553-y.
  29. Foster EM. Propensity Score Matching. Medical Care. 2003;41:1183–1192. doi: 10.1097/01.MLR.0000089629.62884.22.
  30. Foster EM, Dodge KA, Jones D. Issues in the Economic Evaluation of Prevention Programs. Applied Developmental Science. 2003;7:76–86. doi: 10.1207/S1532480XADS0702_4.
  31. Foster EM, Jones D, and the Conduct Problems Prevention Research Group. Can a Costly Intervention Be Cost-effective? An Analysis of Violence Prevention. Archives of General Psychiatry. 2006;63:1284–1291. doi: 10.1001/archpsyc.63.11.1284.
  32. Foster EM, Porter MM, Ayers TS, Kaplan DL, Sandler I. Estimating the Costs of Preventive Interventions. Evaluation Review. 2007;31(3):261–286. doi: 10.1177/0193841X07299247.
  33. Foster EM, Prinz RJ, Sanders MR, Shapiro CJ. The costs of a public health infrastructure for delivering parenting and family support. Children and Youth Services Review. 2008;30(5):493–501. doi: 10.1016/j.childyouth.2007.11.002.
  34. Frangakis C, Rubin D. Principal Stratification in Causal Inference. Biometrics. 2002;58(1):21–29. doi: 10.1111/j.0006-341X.2002.00021.x.
  35. French MT, Salomé HJ, Sindelar JL, McLellan AT. Benefit-Cost Analysis of Addiction Treatment: Methodological Guidelines and Empirical Application Using the DATCAP and ASI. Health Services Research. 2002;37:433–455. doi: 10.1111/1475-6773.031.
  36. Gold MR, Stevenson D, Fryback DG. HALYs and QALYs and DALYs, oh my: similarities and differences in summary measures of population health. Annual Review of Public Health. 2002;23(1):115–134. doi: 10.1146/annurev.publhealth.23.100901.140513.
  37. Gravelle H, Brouwer W, Niessen L, Postma M, Rutten F. Discounting in economic evaluations: stepping forward towards optimal decision rules. Health Economics. 2007;16(3):307–317. doi: 10.1002/hec.1168.
  38. Greenberg MT, Domitrovich C, Bumbarger B. The prevention of mental disorders in school-aged children: Current state of the field. Prevention & Treatment. 2001;4. doi: 10.1037/1522-3736.4.1.41a.
  39. Gruen R, Elliott J, Nolan M, Lawton P, Parkhill A, Mclaren C, Lavis J. Sustainability science: an integrated approach for health-programme planning. The Lancet. 2008;372(9649):1579–1589. doi: 10.1016/S0140-6736(08)61659-1.
  40. Haddix AC, Teutsch SM, Corso PS. Prevention effectiveness: a guide to decision analysis and economic evaluation. Oxford; New York: Oxford University Press; 2003.
  41. Hansen RN, Oster G, Edelsberg J, Woody GE, Sullivan SD. Economic Costs of Nonmedical Use of Prescription Opioids. The Clinical Journal of Pain. 2011;27(3):194–202. doi: 10.1097/AJP.0b013e3181ff04ca.
  42. Hawkins JD, Catalano RF, Arthur MW, Egan E, Brown EC, Abbott RD, Murray DM. Testing Communities That Care: The Rationale, Design and Behavioral Baseline Equivalence of the Community Youth Development Study. Prevention Science. 2008;9(3):178–190. doi: 10.1007/s11121-008-0092-y.
  43. Hawkins JD, Catalano RF, Miller JY. Risk and protective factors for alcohol and other drug problems in adolescence and early adulthood: Implications for substance abuse prevention. Psychological Bulletin. 1992;112(1):64–105. doi: 10.1037/0033-2909.112.1.64.
  44. Heckman J. Skill Formation and the Economics of Investing in Disadvantaged Children. Science. 2006;312(5782):1900–1902. doi: 10.1126/science.1128898.
  45. Heckman J, Moon SH, Pinto R, Savelyev PA, Yavitz A. The rate of return to the HighScope Perry Preschool Program. Journal of Public Economics. 2010;94:114–128. doi: 10.1016/j.jpubeco.2009.11.001.
  46. Hill LG, Goates SG, Rosenman R. Detecting Selection Effects in Community Implementations of Family-Based Substance Abuse Prevention Programs. American Journal of Public Health. 2010;100(4):623–630. doi: 10.2105/AJPH.2008.154112.
  47. Hoffmann C, Stoykova BA, Nixon J, Glanville JM, Misso K, Drummond MF. Do Health-Care Decision Makers Find Economic Evaluations Useful? The Findings of Focus Group Research in UK Health Authorities. Value in Health. 2002;5:71–78. doi: 10.1046/j.1524-4733.2002.52109.x.
  48. Karoly LA. Investing in our children: what we know and don’t know about the costs and benefits of early childhood interventions. Santa Monica, CA: RAND; 1998.
  49. Karoly LA. Principles and standards for benefit-cost analysis of early childhood interventions. RAND; 2010.
  50. Kautt P, Spohn C. “Crack”-ing down on black drug offenders? Testing for interactions among offenders’ race, drug type, and sentencing strategy in federal drug sentences. Justice Quarterly. 2002;19(1):1–35. doi: 10.1080/07418820200095151.
  51. Kilburn MR, Karoly LA. The Economics of Early Childhood Policy: What the Dismal Science Has to Say About Investing in Children. 2008. Retrieved from http://www.rand.org/pubs/occasional_papers/OP227.html.
  52. Kuklinski MR, Briney JS, Hawkins JD, Catalano RF. Cost-Benefit Analysis of Communities That Care Outcomes at Eighth Grade. Prevention Science. 2012. doi: 10.1007/s11121-011-0259-9.
  53. Lave LB, Joshi SV. Benefit-Cost Analysis in Public Health. Annual Review of Public Health. 1996;17(1):203–219. doi: 10.1146/annurev.pu.17.050196.001223.
  54. Lazaro A. Theoretical arguments for the discounting of health consequences: where do we go from here? PharmacoEconomics. 2002;20(14):943–961. doi: 10.2165/00019053-200220140-00001.
  55. Lee S, Aos S, Drake EK, Pennucci A, Miller GE, Anderson L. Return on Investment: Evidence-Based Options to Improve Statewide Outcomes, April 2012 Update. Olympia, WA: Washington State Institute for Public Policy; 2012. Retrieved from http://www.wsipp.wa.gov/pub.asp?docid=12-04-1201.
  56. Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Statistics in Medicine. 2004;23:2937–2960. doi: 10.1002/sim.1903.
  57. McCaffrey DF, Ridgeway G, Morral AR. Propensity Score Estimation With Boosted Regression for Evaluating Causal Effects in Observational Studies. Psychological Methods. 2004;9(4):403–425. doi: 10.1037/1082-989X.9.4.403.
  58. Miller T, Hendrie D. Substance Abuse Prevention Dollars and Cents: A Cost-Benefit Analysis (DHHS Pub. No. (SMA) 07-4298). Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration; 2008.
  59. Miller T, Levy D, Spicer R, Taylor D. Societal costs of underage drinking. Journal of Studies on Alcohol. 2006;67:519–528. doi: 10.15288/jsa.2006.67.519.
  60. Mishan EJ, Quah E. Cost-benefit analysis. London; New York: Routledge; 2007.
  61. Nicosia N, Pacula R, Kilmer B, Lundberg R, Chiesa J. The Costs of Methamphetamine Use. Santa Monica, CA: RAND; 2009.
  62. Noble PV. Safe and drug free schools. New York: Novinka Books; 2002.
  63. O’Connell ME, Boat TF, Warner KE. Preventing mental, emotional, and behavioral disorders among young people: progress and possibilities. Washington, DC: National Academies Press; 2009.
  64. Olds D, Hill P, O’Brien R, Racine D, Moritz P. Taking preventive intervention to scale: The Nurse-Family Partnership. Cognitive and Behavioral Practice. 2003;10:278–290. doi: 10.1016/S1077-7229(03)80046-9.
  65. Ramsey S, Willke R, Briggs A, Brown R, Buxton M, Chawla A, Reed S. Good Research Practices for Cost-Effectiveness Analysis Alongside Clinical Trials: The ISPOR RCT-CEA Task Force Report. Value in Health. 2005;8(5):521–533. doi: 10.1111/j.1524-4733.2005.00045.x.
  66. Reynolds AJ, Temple JA, White BAB, Ou SR, Robertson DL. Age 26 Cost-Benefit Analysis of the Child-Parent Center Early Education Program. Child Development. 2011;82(1):379–404. doi: 10.1111/j.1467-8624.2010.01563.x.
  67. Ringwalt CL, Hanley S, Vincus AA, Ennett ST, Rohrbach LA, Bowling JM. The prevalence of effective substance use prevention curricula in the nation’s high schools. The Journal of Primary Prevention. 2008;29(6):479–488. doi: 10.1007/s10935-008-0158-4.
  68. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55. doi: 10.1093/biomet/70.1.41.
  69. Rosenbaum PR, Rubin DB. Reducing Bias in Observational Studies Using Subclassification on the Propensity Score. Journal of the American Statistical Association. 1984;79:516–524. doi: 10.2307/2288398.
  70. Russell LB, Gold MR, Siegel JE, Daniels N, Weinstein MC. The Role of Cost-effectiveness Analysis in Health and Medicine. JAMA. 1996;276(14):1172–1177. doi: 10.1001/jama.1996.03540140060028.
  71. Savaya R, Spiro SE. Predictors of Sustainability of Social Programs. American Journal of Evaluation. 2011. doi: 10.1177/1098214011408066.
  72. Spoth R, Greenberg MT. Toward a Comprehensive Strategy for Effective Practitioner-Scientist Partnerships and Larger-Scale Community Health and Well-Being. American Journal of Community Psychology. 2005;35(3–4):107–126. doi: 10.1007/s10464-005-3388-0.
  73. The Pew Charitable Trusts. Results First. 2012. Retrieved from http://www.pewstates.org/projects/results-first-328069.
  74. U.S. Department of Health and Human Services. Preventing Tobacco Use Among Youth and Young Adults: A Report of the Surgeon General. Atlanta, GA: Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health; 2012. Retrieved from http://www.surgeongeneral.gov/library/reports/preventing-youth-tobacco-use/full-report.pdf.
  75. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276(15):1253–1258. doi: 10.1001/jama.1996.03540150055031.
  76. Welsh BC, Sullivan CJ, Olds DL. When Early Crime Prevention Goes to Scale: A New Look at the Evidence. Prevention Science. 2009;11(2):115–125. doi: 10.1007/s11121-009-0159-4.
  77. Williams A. Cost-benefit analysis: Bastard science? And/or insidious poison in the body politick? Journal of Public Economics. 1972;1(2):199–225. doi: 10.1016/0047-2727(72)90002-3.
  78. Wolfenstetter SB. Conceptual Framework for Standard Economic Evaluation of Physical Activity Programs in Primary Prevention. Prevention Science. 2011. doi: 10.1007/s11121-011-0235-4.
  79. Yates BT. Toward the incorporation of costs, cost-effectiveness analysis, and cost-benefit analysis into clinical research. Journal of Consulting and Clinical Psychology. 1994;62(4):729–736. doi: 10.1037/0022-006X.62.4.729.
  80. Yates BT. Analyzing costs, procedures, processes, and outcomes in human services. Thousand Oaks, CA: Sage Publications; 1996.
  81. Zerbe RO, Davis T, Garland N, Scott T. Toward principles and standards in the use of benefit-cost analysis. Seattle, WA: Benefit-Cost Analysis Center, Evans School of Public Affairs, University of Washington; 2010. Retrieved from http://evans.washington.edu/files/Final-Principles-and%20Standards-Report--6_23_2011.pdf.
