Author manuscript; available in PMC: 2010 Mar 11.
Published in final edited form as: Appl Dev Sci. 2003 Apr 1;7(2):76–86. doi: 10.1207/S1532480XADS0702_4

Issues in the Economic Evaluation of Prevention Programs

E Michael Foster 1, Kenneth A Dodge 2, Damon Jones 3
PMCID: PMC2836594  NIHMSID: NIHMS146871  PMID: 20228955

Abstract

Economic analysis plays an increasingly important role in prevention research. In this article, we describe one form of economic analysis, a cost analysis. Such an analysis captures not only the direct costs of an intervention but also its impact on the broader social costs of the illness or problem targeted. The key question is whether the direct costs are offset by reductions in the other, morbidity-related costs, such as the use of expensive services. We begin by describing how economists think about costs. We then outline the steps involved in calculating the costs of delivering an intervention, including both implicit and explicit costs. Next we examine methods for estimating the morbidity-related costs of the illness or problem targeted by the intervention. Finally, we identify the challenges one faces when conducting such an analysis. Throughout the article, we illustrate key points using our experiences with evaluating the Fast Track intervention, a multiyear, multicomponent intervention targeted to children at risk of emotional and behavioral problems.


Policymakers and researchers increasingly recognize that economic analysis is a key component of prevention research. This recognition is apparent in efforts to make prevention and services research more policy relevant. The National Institute of Mental Health’s plan (National Advisory Mental Health Council, 1996) for prevention research states that

A critical component of community-based trials, with important policy implications, is determination of the cost-benefit or cost-effectiveness of preventive interventions…. Too little attention, however, has been focused on conducting cost-benefit or cost-effectiveness analyses that can reliably demonstrate whether prevention interventions indeed save money and improve health. Such information is of great importance to policy makers. Cost-benefit or cost-effectiveness analyses need to become an integral part of NIMH-supported prevention effectiveness trials. (pp. 17–18)

A second advisory group highlighted this theme as well (National Advisory Mental Health Council’s Clinical Treatment and Services Research Workgroup, 1999). It recommended that the National Institute of Mental Health (NIMH) “improve measures and analyses of costs in intervention studies” and that “large intervention studies include a cost-effectiveness component” (p. 35).

Economic evaluation is valued because it answers an essential question: “Is the prevention program ‘worth it’ in a financial sense?” Such an assessment is critical. An intervention may produce statistically significant impacts on targeted outcomes, but without an economic evaluation, one cannot judge whether the intervention is a good use of society’s limited resources. The presumption is often that an effective prevention program is also cost-effective, but such may not be the case, especially when the analyst accounts for the full costs of the program (Russell, 1986). Other features of prevention programs work against establishing cost-effectiveness. Unless they are very well targeted, such programs often involve program expenditures on individuals who would not develop the illness or condition even in the absence of the program. Furthermore, many programs involve expenditures in the present, whereas the benefits emerge over time (indeed, often far into the future) and have lower present values as a result.

The answer to the previously posed question, therefore, is seldom obvious. Economic evaluation has other benefits as well. It allows the analyst or policymaker to compare disparate programs in terms of a common outcome metric (e.g., net benefits or quality of life). Furthermore, because economic evaluation explicitly discounts future costs or benefits (or well-being), it also provides a way to compare interventions providing benefits that differ in timing.

An economic evaluation may take one of several forms: benefit–cost analysis, cost-effectiveness analysis, and cost-utility analysis. Perhaps best known is benefit–cost analysis. A benefit–cost analysis provides a full accounting of the resource implications of an intervention, policy, or program. One measures both the costs and benefits of the intervention and then calculates net benefits—that is, the benefits of the intervention less its costs. If the net benefits are positive, then the intervention is desirable.

A second form of economic evaluation is cost-effectiveness analysis. Although the term cost-effectiveness is often used as a synonym for economic evaluation, cost-effectiveness analysis actually refers to a specific form of such an evaluation. Unlike benefit–cost analysis, cost-effectiveness analysis does not require one to measure outcomes in dollar terms. Rather, the outcome measures remain in their natural metric (e.g., a 1-point difference on a symptom checklist or a percentage point reduction in the number of teenagers giving birth). The analyst then compares interventions or programs in terms of their added (or incremental) costs per added unit of the outcome measure (Zerbe & Dively, 1994). One could calculate such ratios for a variety of outcome measures.
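
To make the ratio concrete, the following sketch computes an incremental cost-effectiveness ratio for two hypothetical programs; the function name and all dollar and outcome figures are invented for illustration and are not estimates from any study discussed here.

    # Minimal sketch of an incremental cost-effectiveness ratio (ICER).
    # All figures are hypothetical.

    def icer(cost_new, cost_old, effect_new, effect_old):
        """Added cost per added unit of the outcome measure."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    # Program B costs $3,500 per child and lowers a symptom checklist by
    # 5 points; program A costs $2,000 and lowers it by 3 points.
    print(icer(3500, 2000, 5, 3))  # 750.0 dollars per additional point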

A third form of economic evaluation, cost-utility analysis, is actually a specific form of cost-effectiveness analysis. The measure of effectiveness is an index of overall well-being based on respondent ratings of several dimensions of well-being. The scores on the different dimensions are then combined using weights that reflect the relative desirability of different combinations of the attributes. Those weights reflect caregiver or other stakeholder preferences for the attributes involved. A familiar measure of this sort is the quality-adjusted life year (Drummond, O’Brien, Stoddart, & Torrance, 1997).

Regardless of the form of economic evaluation chosen, the foundation of each is a good estimate of an intervention’s costs. As discussed subsequently, such an estimate should include both the implicit and explicit costs of the intervention and should not be limited to the costs borne by the agency or program supporting the intervention.

As highlighted in the NIMH services report, however, an analysis of costs should move beyond the direct costs of the program to include morbidity-related costs. In particular, it should attempt to answer the question posed by the Clinical Treatment and Services Research Workgroup: “What is the cost to society as a whole, not simply to the treatment system, of leaving mental illness untreated?” (p. 35).

A full discussion of the various forms of economic analysis is beyond the scope of this article. Rather, we outline the steps required to conduct such a cost analysis, broadly defined. We illustrate our discussion based on our experiences estimating the costs of Fast Track, the largest prevention study ever funded by NIMH (Conduct Problems Prevention Research Group [CPPRG], 1992, 2000). The cost analysis of Fast Track captures the direct costs of the intervention as well as morbidity-related costs, such as the costs of related services. These services include special education and (somatic) health services as well as mental health services and substance abuse treatment. Reductions in these costs represent benefits of the intervention, and the cost analysis assesses whether these reductions exceed or offset the direct costs of the intervention itself.1

This article has five sections. The first briefly describes the Fast Track intervention. The second provides background on how economists think about costs. The third examines the means by which one might measure the direct costs of an intervention. The fourth examines morbidity-related costs. The final section considers controversial topics in economic evaluations.

The Fast Track Intervention

Fast Track is an ongoing, multisite, randomized clinical trial designed to prevent the onset of serious conduct disorder and chronic violent crime in adolescence. It is being implemented in 55 schools at four geographic sites with three cohorts of elementary-school-age boys and girls. Children were assigned to intervention or control groups in kindergarten through randomization at the school level. The intervention includes a universal component as well as indicated components targeted to “early starters”—children exhibiting pervasive conduct problems in early childhood (Lochman & CPPRG, 1995). The ongoing 10-year intervention involves the child and his or her teacher, parents, tutors, mentors, and peers (Bierman, Greenberg, & CPPRG, 1996). Consistent with research on the origins of behavior problems, the intervention is multifaceted, targeting developmental risks associated with the early initiation of conduct problems. Intervention components focus both on building the child’s behavioral and cognitive skills and on changing patterns of interaction with important people in the child’s social environment (family, school, and peers; McMahon, Slough, & CPPRG, 1996). The components of the intervention can be organized into three levels:

  1. Universal prevention support provided at the population level (within intervention schools).

  2. Standard selective prevention provided to families of children identified as high risk during the initial kindergarten screening.

  3. Additional individualized selective support provided to high-risk children and families based on criterion-referenced assessments over time.

Analyses to date indicate that children assigned to the intervention condition have fared more favorably than children assigned to the control condition. At the universal level (i.e., all children in intervention schools contrasted with all children in control schools), the intervention group demonstrated less aggressive and more socially competent behavior after 1 year (CPPRG, 1999a). At the indicated level (i.e., the high-risk children assigned to receive intensive services contrasted with high-risk children assigned as controls), the intervention group demonstrated better outcomes across a range of behaviors targeted during the intervention. In particular, intervention children demonstrated better skills in reading, social problem solving, and understanding emotion after 1 year. Furthermore, their parents used less coercive discipline strategies (CPPRG, 1999b). By the end of the 3rd year of intervention, the intervention group demonstrated less aggressive behavior in the classroom and at home, was less likely to have been placed into special education, and was less likely to demonstrate serious conduct problems (CPPRG, 2002a). Furthermore, these findings held across ethnic, gender, and socioeconomic groups, suggesting some generalizability of the effects (CPPRG, 2002b). Finally, the positive effects continued through the end of fourth grade, and mediation analyses indicated that positive intervention effects on antisocial outcomes could be accounted for by planned intervening effects on parenting behavior and children’s social–cognitive abilities (CPPRG, 2002).

How Do Economists Think About Costs?

When considering how economists define and measure costs, one should remember four principles. The first principle is that the costs of a program or intervention vary depending on the perspective from which they are assessed. For example, an intervention targeted to women on welfare may reduce their subsequent use of welfare. The reduction in those expenditures represents savings to taxpayers. That same reduction, however, represents a cost to the participants (Plotnick, 1999).

Costs and benefits can be assessed from any of several perspectives. Economists, however, emphasize the societal perspective, which encompasses the perspectives of all groups, such as intervention participants and taxpayers. In some instances, the effects of a program on different groups offset each other. In the case of reduced welfare use, the only (net) societal cost involves program administration: The gain to taxpayers offsets the losses borne by the former recipients. In other instances, the societal perspective diverges from narrower perspectives. For example, payments made for a mental health service may not equal the costs of producing that service (Hargreaves, Shumway, Hu, & Cuffel, 1998). Those charges are the “costs” for the agency or program that pays for the services, but they may be a poor proxy for societal costs. This divergence exists for several reasons. Because of market imperfections, payments made by some clients may implicitly subsidize other clients. The privately insured, for example, may subsidize the uninsured. As a result, payments made on behalf of the latter may understate the costs to society of the services involved.

The societal perspective represents the bottom line for economists—it is used to gauge the “efficiency” or overall desirability of a societal allocation of resources. The second through fourth principles of economists’ view of costs correspond to this broader, societal perspective.

A second principle is that economists measure costs in terms of opportunity costs, the value of a resource in its next best use (Gold, Russell, Siegel, & Weinstein, 1996). In many ways, this emphasis on foregone uses is what distinguishes an economist’s approach from that of an accountant. This difference is most apparent in instances where a cost (or resource use) generates no bookkeeping entry. As an example, volunteer time requires no payment by the agency sponsoring an intervention. The time involved, however, has a value in alternative uses—the volunteer could spend that time at work or in leisure activities (or even volunteering at another program). These implicit time costs also might involve the time of program participants. Although economists may disagree somewhat as to how that time should be valued, they generally agree that such costs should be included. Other implicit costs include the value of donated space.

A third principle shaping economists’ reckoning of costs is that some costs are indirect or morbidity related. In a prevention program targeted to the mental health of children, these costs are particularly important. Children with emotional and behavioral problems are frequently involved in many child-serving sectors, and the costs of the services involved are potentially enormous. In many cases, these costs are actually reduced by a prevention program and so represent areas of so-called cost offset. For example, improvements in a child’s mental health may reduce his or her use of health services or the use of mental health services by his or her parents (Foster & Bickman, 2000) or expenditures in the child-welfare sector (Foster, Connor and Nguyen, 2001). On the other hand, a prevention program may link families to these services and so increase their use (and related expenditures) as a result (e.g., the Starting Early, Starting Smart program; Karoly, Kilburn, Bigelow, Caulkins, & Cannon, 2001). In some cases, these indirect costs may not be immediately apparent. For example, an intervention may reduce school dropout. Although this effect has obvious benefits, it also creates costs related to resources used while the individuals remain in school.

A fourth feature of an economist’s view of costs is that marginal costs are the costs that matter (Warner & Luce, 1982). By “marginal” an economist means costs that change as a result of the activity involved. Consider, for example, an intervention that affects the use of special education. The relevant costs are those above and beyond the costs of education in a regular classroom—after all, the latter would be incurred even if the child were not in special education (unless the intervention affects the likelihood that a child leaves school altogether).

The notion of marginal costs is particularly salient when one is considering the costs of expanding an established intervention. In this case, the relevant margin is that of providing the intervention to an additional child. From this perspective, the costs of developing the intervention have already been incurred; they are “sunk” costs. As a result, these so-called first-copy costs are irrelevant (Gold et al., 1996).2

A final point worth noting is that cost estimates that reflect these four principles are often somewhat speculative. This point may surprise readers who assume that good accounting can easily document actual costs. Some costs are subtle, and other costs have an uncertain dollar value. Estimating opportunity costs involves specifying the foregone activities to which the resources would have been devoted. For example, in valuing a participant’s time, one must speculate whether that person would have been working for pay or enjoying leisure in the absence of the program. Valuing teachers’ time is particularly difficult in the case of a classroom intervention; teachers may not be paid extra to participate but must carve out time for a program during the regular school day while still delivering 100% of the regular curriculum. Other examples involve the value of borrowed space and donated equipment. In some instances, data on a comparison group can resolve these uncertainties (e.g., the opportunity costs of participant time); in others (e.g., the value of donated resources used in the intervention), they cannot.

In the next section, we consider how these principles would be applied to estimating the direct costs of a prevention program.

Measuring and Valuing the Direct Costs of a Prevention Program

Gold et al. (1996) identified three steps in measuring the costs of an intervention or service: identifying the resources involved, measuring their use, and valuing the resources used in dollar terms. We examine each of these steps for the direct costs of the program.

Note that the following discussion presumes that the prevention program of interest is being evaluated and that research (or evaluation) and service delivery are conducted by the same unit. As a result, the two activities share space and administration; furthermore, some individuals work on both tasks. Although common, this sharing not only raises issues about blinding the individuals involved to the intervention status of participants but also complicates estimating the costs of the intervention. These personnel must track their allocation of time between intervention and research. This task might involve time sheets that relevant personnel complete weekly. Because retrospective reports may be unreliable, these sheets ideally would be completed prospectively. For other shared resources, such as the costs of space, one can either track the use of space or divide the costs between the two activities based on other information (as discussed later).

Identifying Resources Involved

Consistent with the economic principles identified previously, we want to capture all of the resources involved in delivering an intervention. This accounting includes implicit costs (those resources for which no explicit payments are made), such as parental time and donated space. Time contributed by volunteers also would be included.

Table 1 enumerates the different resources used in delivering an intervention. The explicit costs of the intervention involve both fixed and variable costs. Fixed costs are those costs that do not change as the number of participants expands. In this case, fixed costs include the costs of facilities. Variable costs, on the other hand, depend on the number of participants.

Table 1.

Resources Used in Intervention Delivery

  Explicit: Variable                  Explicit: Fixed     Implicit
  Personnel                           Space               Parent time (a)
  Supplies                            Utilities           Teacher time (a)
  Travel                              Administration
  Incentives–parents                  Equipment           Volunteer time
  Incentives–teachers                 Training            Other space costs
  Participant’s out-of-pocket costs

  (a) Net of any incentives paid.

Measuring Resource Use

Information on the resources involved can be drawn from several sources. Principal among these are project budgets, which identify the resources used as well as their costs to the project. For some resources (particularly implicit costs), additional information would be needed from other sources, such as parental or mentor reports of time use.

Valuing the Resources Used in Dollar Terms

Explicit costs are naturally expressed in dollar terms. The challenge here is to allocate these costs between intervention delivery and other activities, such as research. Valuing implicit costs in dollar terms often requires additional information.

Explicit costs

For many interventions, labor costs are a primary component of explicit costs. These costs can be calculated by using budget information on wages and salaries and on fringe benefits. Total labor costs would be allocated to the intervention based on the division of time use reported on the time sheets (discussed previously). Individuals devoting their time exclusively to research could be ignored or, if one were interested in the total costs of research, included in a separate tabulation. Note that administrative labor costs are included in the fixed costs allocated subsequently.

Next, one would estimate other variable costs, such as supplies and materials. To the extent these resources could be related directly to intervention delivery, expenditures would be included in the costs of the intervention. Expenditures on items that could not be linked to either the intervention or research (e.g., photocopying costs that were not tracked) could be included in (joint) fixed costs that are allocated as described later.

Next, one would allocate fixed costs, including those costs that could not be divided between the intervention and research. Principal among these are space costs, including utilities and telecommunication costs. One could allocate the costs of space used by specific personnel in the same proportions as their time. However, this would leave other space used by intervention and project personnel (such as conference rooms and meeting space) unallocated. For that reason, following Hargreaves et al. (1998), we recommend that all space and similar shared costs be allocated based on the overall distribution of personnel time (and resulting costs) between the intervention and evaluation.3
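
As a minimal sketch of this allocation rule, shared fixed costs can be split in proportion to the personnel costs derived from the time sheets; the function and all dollar amounts below are hypothetical illustrations, not project figures.

    # Split shared fixed costs (space, utilities, untracked photocopying)
    # in proportion to personnel costs for intervention versus research.
    # The dollar amounts are hypothetical.

    def allocate_shared(shared_costs, labor_intervention, labor_research):
        share = labor_intervention / (labor_intervention + labor_research)
        return {"intervention": shared_costs * share,
                "research": shared_costs * (1 - share)}

    print(allocate_shared(80_000, 300_000, 200_000))
    # {'intervention': 48000.0, 'research': 32000.0}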

Note that some costs involve resources that are purchased in a given year but that are used by project staff over several years. These costs include equipment costs, such as computers. These costs can be amortized over time by using standard accounting principles. Also included in this category are training costs. Project staff may be trained in a given year but work with program participants over time. As a result, some portion of their training should be attributed to future years. Using an estimate of the average amount of time personnel remain with a project, one could amortize those costs as well.
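
The sketch below illustrates two common amortization rules: straight-line amortization and an annuity-based equivalent annual cost that spreads the purchase price while discounting. The purchase price, useful life, and 3% rate are assumptions chosen for illustration.

    # Amortize a one-time purchase (e.g., computers or staff training)
    # over its useful life. All figures are hypothetical.

    def straight_line(cost, years):
        return cost / years

    def equivalent_annual_cost(cost, years, rate):
        # Annual amount whose present value over `years` at `rate` equals `cost`.
        annuity_factor = (1 - (1 + rate) ** -years) / rate
        return cost / annuity_factor

    print(straight_line(3_000, 3))                           # 1000.0 per year
    print(round(equivalent_annual_cost(3_000, 3, 0.03), 2))  # 1060.59 per year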

As discussed previously, not all explicit costs can be tracked on project budgets. These costs include out-of-pocket costs of participation borne by families. Included here are transportation costs as well as baby-sitting costs for a participant’s siblings. One could estimate those costs by having parents complete a short questionnaire at a few intervention sessions.

These explicit costs represent the costs of the intervention to taxpayers (or other funding source) and participants. They also are part of the costs of the intervention to society.

Implicit costs

Implicit costs are primarily of two types—time and space. The latter involves space used by an intervention for which no payments are made, such as classrooms used for evening parent training.4 One could argue that the opportunity cost of this space is often zero: These groups are conducted after the normal business day or at a time when the space would not otherwise be used. This point is debatable, however, and one might consider the sensitivity of one’s conclusions to this assumption. One estimate of the opportunity cost is the rent one would pay for similar space in the community.

Time costs represent a second type of implicit cost. In an intervention like Fast Track, these costs involve parental time related to the indicated components of the intervention as well as teacher time related to the classroom-level intervention.5 Parents receive incentive payments, but those payments may not fully compensate them for their time. Although family groups were scheduled at convenient times, parental participation reduces leisure time. Such time, however, is not without value. Because parents conceivably could work during those hours, they pay an implicit price for their leisure (in terms of reduced wages). This suggests that their leisure time is worth at least as much as their wage rate.6 For that reason, following Gorsky, Haddix, and Schaffer (1996), we recommend that one value parental time using parents’ wage rate.7 One could calculate these costs using the results of a brief survey of parents concerning time spent on intervention-related activities and their wage rate. (To avoid double-counting costs, one would include only the amount by which these costs exceed any incentive payments made.)
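
A minimal sketch of this valuation rule follows, assuming hypothetical survey responses for hours, wages, and incentive payments.

    # Value parental time at the wage rate, net of incentive payments
    # (which already appear in explicit costs). Figures are hypothetical.

    def parent_time_cost(hours, hourly_wage, incentives_paid):
        gross = hours * hourly_wage
        # Only the excess over incentives is added, to avoid double counting.
        return max(gross - incentives_paid, 0.0)

    print(parent_time_cost(hours=20, hourly_wage=12.0, incentives_paid=150.0))
    # 90.0 -> implicit time cost for this family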

A second implicit labor cost involves volunteer time. In the case of Fast Track, volunteer mentors worked with intervention children as they entered later grades. Like parental time costs, these costs could be estimated by using wage rates reported by mentors. In the case of Fast Track, however, some mentors were paid, and a sensible alternative is to value the time of all mentors using the wages paid to compensated mentors.

A third implicit cost involves the time teachers spend preparing for the intervention that is uncompensated. Although Fast Track provides a modest incentive payment, this payment only partially reimburses teachers for their time. We ask teachers to estimate preparation time, and, following the same logic used to value parental time, we value that time using their wage rate.8

A final, related implicit cost involves the time teachers spend delivering the universal intervention. The opportunity cost of that in-class time is the value of lessons or materials that were foregone as a result. How might one approximate the value of that time? One estimate would be the value taxpayers place on that time. One proxy for that value is the salary and fringe benefits teachers earn during that time period, calculated as the appropriate percentage of their salary. A different approach would be to assume that the teacher is still responsible for 100% of the regular curriculum, so all of the time that the teacher devotes to the prevention curriculum occurs as a result of increased efficiency and elimination of noncurricular activities (e.g., breaks). In such a case, the relevant marginal costs are zero.

These implicit costs are borne by the persons involved and represent components of the direct social costs of a prevention program.

Measuring and Valuing the Morbidity-Related Costs of an Intervention

As discussed previously, a second type of cost involves morbidity-related costs—namely, the costs of alternative services used. The first step in measuring these costs is identifying the resources (or services) involved. Having done that, we consider the means for measuring and valuing each service.

Identifying Resources Involved

The list of potential services and resources one might include is endless. Because research resources are not limitless, one has to prioritize based on what one knows about the prevention program and the population it targets. Possible criteria include the potential magnitude of costs involved as well as whether one would expect any relation between them and the intervention.

In the case of Fast Track, we targeted special education and health services, including behavioral health services. We focused on these services because the potential expenditures were so large and because of the expected relation to the intervention. Preliminary results showed a substantial reduction in the use of special education (CPPRG, 1998). We also focused on the use of health services because of the link between conduct disorder and the use of those services. We included mental health services because prior research links externalizing behavioral problems to high-cost service use9 and because preliminary analyses showed relatively high rates of service use among study participants (Jones, Dodge, Foster, & Nix, 2002).

Measuring Resource Use

Potential sources of information on the use of special education include self- or parental reports. Because of concerns about the accuracy of such reports, the Fast Track study relies on school record reviews. These reviews are conducted during the summer and collect information such as whether a child has an individualized education plan as well as the type of special education received (e.g., resource room). The review also records the amount of time a study participant receives each type of service (e.g., the child spends 60% of his or her time in a regular classroom and the remainder in a resource room). This level of detail is often necessary because an intervention might affect not only whether an individual received special education services but also the type received.

Research on the validity and reliability of parental reports of children’s health and mental health service use indicates that parents are fairly accurate in terms of identifying the types of services received (especially for services in intensive—and expensive—settings; Bean et al., 2000; Breda, 1996). Parental reports of the volume of services are much less reliable. Furthermore, parents often do not know total payments on services (including those made by insurers). As a result, record reviews are essential to measuring the costs of health and mental health services (Hargreaves et al., 1998).

Fast Track is reviewing records of medical and behavioral health services with a four-step process. First, each summer, parents of program participants complete a short instrument describing the use of (somatic) health and behavioral health services.10 That instrument records the name and address of any provider that the parent indicates provided his or her child with services. The instrument captures mental health services broadly defined—that is, families are asked about specialty mental health services (such as outpatient therapy) as well as related services, such as family preservation services provided by community agencies. The instrument also asks about foster care. After a provider is identified, the interviewer asks the parent to complete an authorization for record release from that provider.11

In the second step, project staff contact agencies or providers that have been identified to schedule and then complete an agency-level interview with an administrator at that facility. That interview is semistructured and provides information on the types of children and adolescents served and on the full range of services provided. During the interview with the administrator, the project staff inquire how and when they can access client records.

As a third step, a trained research assistant visits each service provider named by the parent. During the visit, the research assistant records the dates, number of visits, and types of services received by the child. In addition, the reviewer examines billing records to determine charges and source of payment (parent, private health insurance, Medicaid, state government, write-off, or other). The research assistant uses a series of forms that were designed for the study to record the necessary information. As a final step, information that is collected is transcribed, coded, and transformed into variables stored for analysis.

Valuing the Resources Used in Dollar Terms

For each type of service, one can convert measures of service use into dollar values using per-unit costs. There are two potential sources of per-unit costs (Wolff, 1998). The first involves supplemental data, such as national data on the costs of a service (such as an emergency room visit). An alternative is to rely on billing or budgetary information for the specific service provider or agency involved. The use of supplemental data has several advantages. The first is ease of use. Obtaining billing or accounting records from the providers involved (e.g., a social service agency) may be time-consuming, and some agencies or providers may refuse to provide the records. Furthermore, supplemental data may provide nationally representative estimates, which may improve the external validity of study findings.

As a source of per-unit costs, such data have limitations as well. Analyses of actual costs or expenditures from the providers involved may allow for greater disaggregation of services. For example, one is unlikely to find estimates of the costs of special education at a highly disaggregated level. As a result, one may have to categorize services into cruder categories, potentially masking the effect of an intervention (one that shifts individuals between more and less intensive services within a given category). In addition, supplemental data do not necessarily imply national data. For example, one might be able to obtain estimates of the costs of family preservation services for a handful of programs around the country. Those figures may be neither nationally representative nor descriptive of family preservation as delivered in the study community. An added problem is that national figures may be quite dated and, in an area where policy and service delivery are changing rapidly, rather inaccurate as a result. Sturm et al. (2000), for example, derived per-unit costs from older Medicaid data describing services delivered under fee-for-service arrangements. As a result, the applicability of these figures to services delivered in a managed care environment is rather limited. Because nationally representative and current data are not widely available for many of the services considered, the Fast Track study attempts to estimate actual costs incurred at the study sites.

Special education

Fast Track is planning to estimate the costs of special education using budget information from the school districts involved and the Resource-Cost Model (Hartman, 1983; Hartman & Fay, 1996). The Resource-Cost Model is a procedure for dividing a school into program units and tracking the allocation of resources across those units. The resources are then costed out by using actual average payments for resources or standardized costs taken from secondary sources. This procedure produces an estimated cost for general education and for each type of special education service provided. One could combine these per-unit costs with reports of time spent in each service type to estimate the costs for each study child, as sketched below. Any reductions in the use and costs of special education would represent savings to taxpayers and to society.
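
The per-child calculation might look like the following sketch; the per-unit costs and placement categories are invented for illustration, not Resource-Cost Model output from any district.

    # Combine per-placement annual costs with record-review time shares.
    # All dollar figures and placement categories are hypothetical.

    ANNUAL_COST = {"regular": 6_000, "resource_room": 10_500}

    def sped_cost(time_shares):
        """time_shares maps placement type to fraction of the school day."""
        return sum(ANNUAL_COST[p] * share for p, share in time_shares.items())

    # A child in a regular classroom 60% of the day, resource room 40%:
    print(sped_cost({"regular": 0.6, "resource_room": 0.4}))  # 7800.0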

Health and behavioral health services

As with special education, our estimates of the costs of health services will rely on local budget and expenditure data. As noted, the record review will provide information on the use of the health and behavioral health services as well as billing information. This information includes payments made by parents of study participants or by insurers, such as Medicaid. Those payments are adequate for the purpose of calculating cost savings (or benefits) from the payor’s perspective. When judged from the societal perspective, however, these payments only approximate actual opportunity costs. To estimate net benefits from a social perspective, one needs to adjust charge data, especially for expensive inpatient care. One can do this using charge-to-cost ratios reported in Medicare cost reports submitted to the Health Care Financing Administration (and maintained in the Health Care Provider Cost Report Information System). At this point, there is not a good means for doing so for outpatient services (see Hargreaves et al., 1998).

For some services, billing information may not be available. Behavioral health services may be financed through block grants. As a result, management information systems in drug and alcohol treatment facilities (for example) may not provide billing information (Cartwright, 1998). In those cases, we attempt to develop estimates of “slot” costs (average cost per patient) using information on total program costs and the total number of patients.
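
Two of the valuation rules just described reduce to simple computations, sketched below with invented figures: adjusting charges toward costs with a cost-to-charge ratio, and computing slot costs where billing data are absent.

    # Hypothetical figures throughout.

    def charges_to_costs(total_charges, cost_to_charge_ratio):
        # Adjust (inpatient) charges toward opportunity costs using a
        # ratio derived from Medicare cost reports.
        return total_charges * cost_to_charge_ratio

    def slot_cost(total_program_costs, total_patients):
        # Average cost per patient when billing records are unavailable.
        return total_program_costs / total_patients

    print(charges_to_costs(12_000, 0.55))  # 6600.0
    print(slot_cost(450_000, 90))          # 5000.0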

As noted previously, our services instrument also includes foster care. One can value those resources in dollar terms using relevant budget information, such as foster care payment levels and administrative costs. Any reductions in the use and costs of these services would represent savings to the payors involved and to society.

Discussion

This article describes the steps necessary to estimate the full economic costs of an intervention. This process involves identifying the resources involved, measuring their use, and valuing the resources used in dollar terms. We describe each of these steps in detail. The resulting costs include both the direct and morbidity-related costs of the intervention. These figures could be decomposed into implicit and explicit costs and could be presented from any of several perspectives, including that of society as a whole.

The costs identified in this manner would extend over time. In many cases, the direct costs would be concentrated in the early years of the study when the intervention was actually delivered. Morbidity-related costs would extend well into the future. The stream of costs could be summarized into a single figure by using discounting, a method for converting future dollar amounts into present value (Zerbe & Dively, 1994).
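
A minimal sketch of discounting follows, assuming a 3% rate and an invented stream of net costs.

    # Present value of a stream of net costs; stream[t] is the net cost
    # t years from now. The rate and the stream are assumptions.

    def present_value(stream, rate=0.03):
        return sum(c / (1 + rate) ** t for t, c in enumerate(stream))

    # $5,000 in direct costs now, $1,000 in averted services in each of
    # the following six years:
    print(round(present_value([5_000] + [-1_000] * 6), 2))  # -417.19
    # Negative net present cost: discounted savings exceed direct costs.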

Note that such an analysis should not result in a single figure. As we have discussed, key decisions must be made at several points in the process. These decisions involve various, plausible alternatives for measuring resource use or valuing those resources in dollar terms. In several cases (e.g., the value of parental time), no alternative is completely convincing. As a result, one should avoid presenting a single “bottom-line” estimate. Rather, economists typically present a range of estimates, calculated for each of the competing assumptions or figures. These “sensitivity analyses” indicate the degree to which the cost estimates are robust across a range of plausible assumptions. Furthermore, a confidence interval for the net cost figure should be provided as well.
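
As a sketch of such a sensitivity analysis, one might recompute net costs under each plausible assumption about a contested input, here the value of parental time; all labels and dollar figures are illustrative.

    # One-way sensitivity analysis over the valuation of parental time.
    # Direct costs, averted services, and scenario values are hypothetical.

    def net_cost(direct_costs, parental_time_value, averted_services):
        return direct_costs + parental_time_value - averted_services

    scenarios = {
        "time valued at zero": 0,
        "time valued at minimum wage": 2_500,
        "time valued at parents' own wages": 6_000,
    }
    for label, time_value in scenarios.items():
        print(label, "->", net_cost(40_000, time_value, 35_000))
    # The spread of results (5,000 to 11,000 here) shows how robust the
    # bottom line is to this single assumption.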

Supplemental analyses also might examine variation in the impact of the intervention. For example, the savings might be relatively great for children with more severe problems (who cost society a great deal), or, alternately, they might be relatively great for those children whose futures could be brightest. This range in savings might not be random and could be accounted for in statistical models that use pretreatment factors as moderators.

The resulting net cost figure would have many uses. That figure could be used to determine whether future cost savings offset the initial direct expenditures for a given intervention or to compare two interventions producing rather different profiles of costs over time. The cost figures could be combined with measures of program effectiveness to generate cost-effectiveness ratios.12

The limitations of such an analysis should be noted as well. These limitations primarily involve the scope with which benefits are measured. First, the benefits of the intervention are captured only in terms of cost savings; the study described does not include a range of outcomes that a more extensive study could measure in dollar terms, including future employment and earnings for both the child and the parents, who must care for the child. The losses for the parent may approach those that have been estimated in the past for families of children with disabilities or chronic illness. Furthermore, it does not include intangible outcomes, such as emotional well-being. Both of these might be incorporated in a full benefit–cost analysis. Such an analysis would include a broader range of program benefits (and not just averted costs) and would enumerate any intangible benefits measured (even if they are not converted into dollars).

In estimating the full costs of Fast Track, we have encountered a series of difficult issues that represent areas for future research. In some cases, potential solutions exist but are not universally accepted. In other instances, the research literature provides no clear guidance. We discuss our experiences with these issues here.

Projecting Future Costs and Benefits

To this point, the cost analyses described include only those costs that can be directly measured during the study period. If a study ends when a child reaches age 18, for example, then the analysis ignores future costs such as those involving social services or incarceration. For a prevention program that alters a child’s developmental trajectory, the reduction in these costs could be enormous.

One way to address this issue might be to estimate future costs using secondary analyses of existing longitudinal studies, such as the National Longitudinal Survey of Youth or the Panel Study of Income Dynamics. These data could be used to link behaviors and outcomes observed for intervention study participants (e.g., school performance or delinquency) to future costs (e.g., costs of incarceration during adulthood). One could project differences in future costs based on treatment–control differences observed for key outcomes during adolescence and the link between adolescent and adult behaviors observed in the secondary data.
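
A hedged sketch of this two-step projection follows, using simulated data in place of the longitudinal surveys named above; the variable names, magnitudes, and fitted relation are all invented for illustration.

    # Step 1: in secondary data, fit a relation between an adolescent
    # outcome and adult costs. Step 2: apply it to the trial's impact.
    import numpy as np

    rng = np.random.default_rng(0)
    delinquency = rng.normal(size=500)                 # adolescent measure
    adult_costs = 20_000 + 4_000 * delinquency + rng.normal(0, 5_000, 500)
    slope, intercept = np.polyfit(delinquency, adult_costs, 1)

    trial_impact = -0.3  # treatment-control difference in the measure
    print(round(slope * trial_impact, 2))
    # projected per-child change in adult costs (roughly -1,200 here)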

Although the value of projections is apparent, potential problems remain. First, the projection models presume that the relations they represent generalize to participants in the intervention being evaluated. Many interventions, however, are targeted to special populations; even a “universal” intervention like that included in Fast Track may be targeted to a group of relatively high-risk children (e.g., living in poor neighborhoods). As a result, a valid concern is whether the estimated model properly projects the future behavior of intervention participants and their control-group counterparts. The impact of school performance, for example, may be different in high-poverty areas. As a result, analyses based on secondary data may exaggerate (or understate) future treatment–control differences generated by observed differences in school performance.

A second concern involves the causal nature of the parameters of the projection model. Given that any poststudy treatment–control differences are inferred from secondary data, the implicit assumption is that improving school performance among intervention participants would increase earnings to the extent that high- and low-performing individuals differ in the secondary data. The analyst assumes that the difference between the two groups is due to school performance per se—in other words, that the estimated impact of school performance on earnings is causal. If this assumption is false, then projected differences between treatment and control groups may be incorrect. Suppose, for example, that in the analysis of secondary data, school performance is correlated with unmeasured family background. In that case, the estimated impact of school performance on future costs captures the effect of school performance as well as that of family background. As a result, that estimate may overstate the future benefits of the intervention, which has improved school performance but may not have changed the unmeasured background characteristics.

Global Measures of Effectiveness

Most analyses of costs and benefits adopt a variable-level approach. That is, the models test whether individual variables (e.g., mental health service use, incarceration) covary or change over time. The result is a description of variables rather than persons and, consequently, an emphasis on prevention and policy to change variables instead of persons. This approach has disadvantages in circumstances in which the variables intercorrelate in complex or nonlinear ways, and it is problematic for interventionists who work with persons, not variables.

Consider the case where an intervention alters both high school dropout rates (which have a projected dollar outcome based on one set of studies) and arrest rates (which have a different projected dollar outcome based on different studies). It is likely that these two outcomes are correlated, however. In the absence of a study that captures the economic outcomes for a group for which both variables are measured, it is unclear whether the economic outcomes should be summed or combined in some other manner.

A different approach is the person-level profile approach, in which a more holistic or global measure of an outcome is defined. Economic analyses have rarely taken this approach, with the recent exception of Nagin (1999). In considering the economic evaluation of crime-prevention programs, Nagin argued that “successful intervention is tantamount to saving a human life and should be valued accordingly” (p. i).

The concept of individual lifelong careers in crime was put forth by Blumstein, Cohen, Roth, and Visher (1986), who traced the involvement of individuals through the commission of crimes, adjudication, incarceration, and then recidivism. Work by Cohen (1998) indicated that these career criminals cost society between $1.3 million and $2 million each. A prevention program might be evaluated in terms of its ability to prevent individuals from embarking on this costly career path. The Fast Track program, for example, might cost $40,000 per high-risk participant. At that cost, only 3% of the participants would need to be prevented from entering this career path (relative to controls) for the program to be cost-beneficial.
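
To make the arithmetic behind this break-even figure explicit: using Cohen’s (1998) lower-bound estimate of $1.3 million per career criminal, the break-even prevention rate is $40,000 ÷ $1,300,000 ≈ 0.031, or roughly 3 averted criminal careers per 100 high-risk participants; at the $2 million upper bound, the rate falls to $40,000 ÷ $2,000,000 = 0.02.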

More research needs to be completed on the person-level approach before it will be universally accepted. Particularly important is the question of whether the costs of career criminals to society can be attributed exclusively to the commission of crime or, alternately, to the correlates of crime (e.g., the family context in which these individuals are raised).

Costs of Prevention as Disseminated by a Community Agency

Researchers and policymakers recognize that the impact of an intervention under ideal conditions (“efficacy”) may differ from that under real-world conditions (“effectiveness”). This distinction extends to cost estimates as well. The costs of an intervention as delivered in a research study may differ substantially from those incurred when the intervention is disseminated. The methodology described earlier emphasized the importance of removing research-related expenses from the cost estimates, but there are broader issues than separating personnel costs into research and intervention delivery. When disseminated, an intervention may be delivered using a rather different set of resources. Rather than a university facility, for example, the intervention may be delivered in space rented by a local agency. One would expect the costs involved to differ from those of university facilities, if only because the local agency may be located in a poor neighborhood. Intervention personnel may have different backgrounds or pay scales. This difference is most apparent in the case of university faculty who serve as researchers and program administrators. Although their administrative responsibilities would have to be fulfilled in the real world, one would not hire PhD-level staff for those tasks.

Furthermore, a community agency or provider is more likely to use donated resources. For example, Head Start requires local grantees to match federal funds with local resources, which can be donated space or other resources. Although the value of these resources might be included in sensitivity analyses, their increased use creates greater uncertainty surrounding the costs of an intervention as disseminated.

Apart from allowing for differences in resources used, the task of estimating the costs of an intervention as implemented is likely to be more complex. One wrinkle is that the agencies or providers involved may offer a multitude of programs. Separating shared costs (such as administrative expenses) attributable to the intervention of interest is likely to be difficult.

In sum, this discussion highlights several areas for future research. Although projections have many benefits, models that capture causal relations and that generalize to the populations often targeted by interventions remain to be developed. Another challenge is to understand the true costs of crime for career criminals beyond the costs to society that are attributable to the backgrounds of these individuals. A final challenge involves estimating the costs of an intervention as disseminated. As interventions move into the real world, it is important that the actual costs incurred be included in program evaluations. This information could be useful in a revised assessment of cost-effectiveness or in understanding differences between effectiveness and efficacy.

Footnotes

1

The actual Fast Track study considers costs not discussed here, such as costs incurred in the juvenile justice system. The economic evaluation moves beyond costs to capture benefits of the intervention, such as increased future earnings. This information will be used to conduct a full benefit–cost study.

2

That is, they are irrelevant for answering the question of what the intervention costs for a given participant. Those costs, however, are not irrelevant for all economic questions. As an example, one might calculate the rate of return on the initial investment made to develop an intervention.

3

One could use the breakdown either of labor costs or of full-time equivalents. The relative accuracy of each depends on whether higher paid workers use proportionately more space.

4

The use of such space might involve some payment, such as cleaning fees. Generally, those payments represent only a small portion of the costs of the space involved.

5

Note that this ignores the opportunity costs of the child’s time. Economists have no good way to value the time of children (Gold et al., 1996).

6

This assumption is not without controversy. By using wage data, the analyst potentially builds the biases present in labor markets into his or her analysis. For example, if African American workers earn less because of discrimination, then their time is undervalued in a cost study. Unless one accounts for discrimination, an intervention targeted to black families will appear less costly than one targeted to white families (Gold et al., 1996).

7

Hargreaves et al. (1998) argued that this practice overstates costs in cases where parents are unlikely to work. For this reason, one might examine the sensitivity of one’s findings to the handling of parental time costs. Sensitivity analyses are described later.

8

Another possibility is that the teachers would spend this time preparing other material. In that case, the marginal cost of participation is zero. (In that case, the costs of Fast Track should include the value of those foregone activities.) The handling of these costs represents another area for sensitivity analyses.

9

Among children served under the multisite Comprehensive Community Mental Health Services for Children and Their Families Program, for example, conduct-related diagnoses were the most common (Foster, Kelsch, Kamradt, Sosna, & Yang, 2001).

10

That instrument is a modified version of the Services Assessment for Children and Adolescents, an instrument developed for the Utilization, Need, Outcomes, and Costs for Child and Adolescent Populations study (UNOCCAP; Bean et al., 2000; Stiffman et al., 2000).

11

In most cases, this is a generic authorization form. However, some providers require their own authorization form.

12

See Gold et al. (1996) for a discussion of which costs to include in calculating cost-effectiveness ratios.

Contributor Information

E. Michael Foster, Pennsylvania State University.

Kenneth A. Dodge, Duke University.

Damon Jones, Pennsylvania State University.

References

  1. Bean DL, Leibowitz A, Rotheram-Borus MJ, Duan N, Horwitz S, et al. False-negative reporting and mental health services utilization: Parents’ reports about child and adolescent services. Mental Health Services Research. 2000;2:239–249.
  2. Bierman KL, Greenberg MT, the Conduct Problems Prevention Research Group. Integrating social skill training interventions with parent training and family-focused support to prevent conduct disorder in high risk populations: The FAST Track Multi-Site Demonstration Project. In: Ferris CF, Grisso T, editors. Understanding aggressive behavior in children. New York: The New York Academy of Sciences; 1996. pp. 256–264.
  3. Blumstein A, Cohen J, Roth JA, Visher CA, editors. Criminal careers and career criminals. Washington, DC: National Research Council, National Academy Press; 1986.
  4. Breda CS. Parent and institutional agreement on children’s use of mental health services. Evaluation & Program Planning. 1996;19:165–173.
  5. Cohen MA. The monetary value of saving a high-risk youth. Journal of Quantitative Criminology. 1998;14:5–33.
  6. Conduct Problems Prevention Research Group. A developmental and clinical model for the prevention of conduct disorders: The FAST Track Program. Development and Psychopathology. 1992;4:509–527.
  7. Conduct Problems Prevention Research Group. Results of the Fast Track Prevention Trial. Paper presented as part of a symposium at the Life History Research Society Meeting; Seattle, WA. 1998 May.
  8. Conduct Problems Prevention Research Group. Initial impact of the Fast Track Prevention Trial for Conduct Problems: I. The high-risk sample. Journal of Consulting and Clinical Psychology. 1999a;67:631–647.
  9. Conduct Problems Prevention Research Group. Initial impact of the Fast Track Prevention Trial for Conduct Problems: II. Classroom effects. Journal of Consulting and Clinical Psychology. 1999b;67:648–657.
  10. Conduct Problems Prevention Research Group. Merging universal and indicated prevention programs: The Fast Track Model. Addictive Behaviors. 2000;25:913–927. doi: 10.1016/s0306-4603(00)00120-9.
  11. Conduct Problems Prevention Research Group. Evaluation of the first three years of the Fast Track Prevention Trial with children at high risk for adolescent conduct problems. Journal of Abnormal Child Psychology. 2002a;30:19–35. doi: 10.1023/a:1014274914287.
  12. Conduct Problems Prevention Research Group. Predictor and moderator variables associated with positive Fast Track outcomes at the end of third grade. Journal of Abnormal Child Psychology. 2002b;30:37–52.
  13. Conduct Problems Prevention Research Group. Using the Fast Track randomized prevention trial to test the early-starter model of the development of serious conduct problems. 2002. Unpublished manuscript. doi: 10.1017/s0954579402004133.
  14. Drummond MF, O’Brien B, Stoddart G, Torrance GW. Methods for the economic evaluation of health care programmes. 2nd ed. Oxford, England: Oxford University Press; 1997.
  15. Foster EM, Bickman L. Refining the costs analyses of the Fort Bragg evaluation: The impact of cost offset and cost shifting. Mental Health Services Research. 2000;2:13–25. doi: 10.1023/a:1010139823791.
  16. Foster EM, Connor T, Nguyen H. A comparison of services delivered and costs incurred in a system of care and traditional service system. Paper presented at A System of Care for Children’s Mental Health: Expanding the Research Base, 11th Annual Research Conference; Tampa, FL. 2001 Feb.
  17. Foster EM, Kelsch CC, Kamradt B, Sosna T, Yang Z. Expenditures and sustainability in systems of care. Journal of Emotional and Behavioral Disorders. 2001;9:53–62.
  18. Gold MR, Russell LB, Siegel JE, Weinstein MC. Cost-effectiveness in health and medicine. New York: Oxford University Press; 1996.
  19. Gorsky RD, Haddix AC, Schaffer PA. Cost of an intervention. In: Haddix AC, Teutsch SM, Schaffer PA, Dunet DO, editors. Prevention effectiveness: A guide to decision analysis and economic evaluation. New York: Oxford University Press; 1996. pp. 57–75.
  20. Hargreaves WA, Shumway M, Hu T-W, Cuffel B. Cost-outcome methods for mental health. New York: Academic Press; 1998.
  21. Hartman WT. Projecting special education costs. In: Chambers JG, Hartman WT, editors. Special education policies: Their history, implementation and finance. Philadelphia: Temple University Press; 1983. pp. 241–288.
  22. Hartman WT, Fay TA. Cost-effectiveness of instructional support teams in Pennsylvania. Journal of Education Finance. 1996;21:555–580.
  23. Jones DE, Dodge KA, Foster EM, Nix RL, the Conduct Problems Prevention Research Group. Early identification of children at risk for costly mental health service use. Prevention Science. 2002;3:247–256. doi: 10.1023/a:1020896607298.
  24. Karoly LA, Kilburn MR, Bigelow JH, Caulkins JP, Cannon JS. Assessing costs and benefits of early childhood intervention programs: Overview and application to the Starting Early Starting Smart program. Santa Monica, CA: Rand; 2001.
  25. Lochman JE, the Conduct Problems Prevention Research Group. Screening of child behavior problems for prevention programs at school entry. Journal of Consulting and Clinical Psychology. 1995;63:549–559. doi: 10.1037//0022-006x.63.4.549.
  26. McMahon RJ, Slough N, the Conduct Problems Prevention Research Group. Family-based intervention in the FAST Track Program. In: Peters RD, McMahon RJ, editors. Preventing childhood disorders, substance use, and delinquency. Thousand Oaks, CA: Sage; 1996. pp. 90–110.
  27. National Advisory Mental Health Council. A plan for prevention research for the National Institute of Mental Health. Washington, DC: National Institutes of Health, National Institute of Mental Health; 1996.
  28. National Advisory Mental Health Council’s Clinical Treatment and Services Research Workgroup. Bridging science and service. Washington, DC: National Institutes of Health, National Institute of Mental Health; 1999.
  29. Plotnick R. Using benefit-cost analysis to assess child abuse prevention and intervention programs. Child Welfare. 1999;78:381–407.
  30. Russell LB. Is prevention better than cure? Washington, DC: Brookings Institution; 1986.
  31. Stiffman AR, Horwitz SM, Hoagwood K, Compton W, Cottler L, Bean D, et al. The Service Assessment for Children and Adolescents (SACA): Adult and child reports. Journal of the American Academy of Child and Adolescent Psychiatry. 2000;39(8):1032–1039. doi: 10.1097/00004583-200008000-00019.
  32. Sturm R, Ringel J, Bao C, Stein B, Kapur K, Zhang W, Zeng F. National estimates of mental health utilization and expenditures for children in 1998. Santa Monica, CA: Rand; 2000. Working Paper 205.
  33. Warner KE, Luce BR. Cost-benefit and cost-effectiveness analysis in health care. Ann Arbor, MI: Health Administration Press; 1982.
  34. Wolff N. Measuring costs: What is counted and who is accountable? Disease Management and Clinical Outcomes. 1998;1:114–128.
  35. Zerbe RO Jr, Dively DD. Benefit-cost analysis in theory and practice. New York: HarperCollins College; 1994.
