
Economic Evaluation in Implementation Science: Making the Business Case for Implementation Strategies

Andria B Eisman a, Amy M Kilbourne b,c, Alex R Dopp d, Lisa Saldana e, Daniel Eisenberg f,g

Abstract

Implementation researchers have made notable progress in developing and testing implementation strategies (i.e., highly specified methods used to help providers improve uptake of mental health evidence-based practices; EBPs). Yet implementation strategies are not widely applied in healthcare organizations to improve delivery of EBPs. Economic considerations are a key factor influencing the use of implementation strategies to deliver and sustain mental health EBPs, in part because many health care leaders are reluctant to invest in ongoing implementation strategy support without knowing the return on investment. Comparative economic evaluation of implementation strategies provides critical information for payers, policymakers, and providers to make informed decisions about whether specific strategies are an efficient use of scarce organizational resources. Currently, few implementation studies include implementation cost data, and even fewer conduct comparative economic analyses of implementation strategies. This summary introduces clinicians, researchers, and other health professionals to economic evaluation in implementation science. We provide an overview of different economic evaluation methods and discuss differences between economic evaluation in health services research and implementation science. We also highlight approaches and frameworks to guide economic evaluation of implementation, provide an example for a cognitive-behavioral therapy program, and discuss recommendations.

Keywords: implementation science, cost-effectiveness analysis, cost-benefit analysis, economic evaluation, evidence-based practice, community mental health services

1. Background: Introduction to Economic Evaluation

Over the last several decades, many highly effective evidence-based practices (EBPs) have been developed to address mental health challenges, thanks to considerable investment in intervention research (Nathan & Gorman, 2015; Raghavan, 2018). This development, however, has not been accompanied by widespread, routine use of EBPs by clinicians and other practitioners (Onken, Carroll, Shoham, Cuthbert, & Riddle, 2014). Bridging this gap between research and practice is imperative to realizing the benefits of EBPs and improving the public’s mental health. Implementation science addresses this gap through the study of methods that promote systematic uptake of EBPs into routine practice in order to improve health services and, ultimately, patient and public health outcomes (Eccles & Mittman, 2006).

To achieve this goal, implementation science researchers have made notable progress in developing effective implementation strategies to improve EBP uptake. Implementation strategies are “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice” (Proctor, Powell, & McMillen, 2013, p. 2). Key examples of implementation strategies used in psychiatric research operate at the clinical level (e.g., audit and feedback), system level (e.g., identifying and preparing champions of EBPs), and policy level (e.g., performance incentives). Implementation strategies, however, can range in intensity and complexity; they may include multiple components (e.g., tailoring the EBP packaging and training, and providing ongoing expert consultation to enhance EBP integration) that can be time intensive and expensive to deploy. In addition, few community-based mental health organizations have allocated resources to support implementation strategies, and cost is often cited as a barrier to implementation and sustainment (Bond et al., 2014; Roundfield & Lang, 2017). Consequently, economic considerations are a key factor influencing the use of implementation strategies to deliver and sustain EBPs.

While implementation science as a field has grown exponentially over the past two decades, including research on creating, refining, and testing implementation strategies, economic evaluation of implementation is nascent. Health care leaders are reluctant to invest in ongoing implementation strategy support without knowing the return on investment. Cost data on implementation strategies are crucial if payers, policymakers, and providers are to use them. Currently, few implementation studies (less than 10%) include information about implementation costs, and even fewer conduct comparative economic analyses of implementation strategies (Powell et al., 2019; Vale, Thomas, MacLennan, & Grimshaw, 2007).

This summary introduces clinicians, researchers, and other health professionals to economic evaluation in implementation science. We provide an overview of different economic evaluation methods used in mental health services research and discuss differences between economic evaluation in health services and implementation science. In particular, we focus on economic evaluation of implementation strategies, which are highly specified methods used to support uptake of effective practices. Understanding the costs of implementation strategies is essential for determining the costs and resources needed to scale up and spread effective practices. We also highlight key frameworks used to guide economic evaluation of implementation, provide an example for a cognitive-behavioral therapy program, and discuss recommendations.

1.1. Standard methods for economic evaluation

Economic evaluation refers to the comparative analysis of the costs and consequences of alternative courses of action. In mental health care, economic evaluation is used to inform decisions about whether specific programs at the clinic level, and implementation strategies at the organizational, provider, and patient (consumer) levels, are an efficient use of resources (Drummond, Sculpher, Torrance, O’Brien, & Stoddart, 2005). The purpose of economic evaluation is to quantify the costs and consequences of specific interventions, with the larger goal of determining efficient resource allocation. Within the broader category of economic evaluation, there are several specific methods, including cost-effectiveness and cost-utility analysis, cost-benefit analysis, and budget impact analysis. These methods are closely related, differing mainly in the metric used to capture health-related effects of the intervention and in the unit of analysis (e.g., patient level in cost-effectiveness analysis, county level in budget impact analysis).

1.1.1. Cost-Effectiveness and Cost-Utility Analysis

The goal of cost-effectiveness and cost-utility analysis is to estimate how much it costs to gain a particular unit of health. In cost-effectiveness analysis, the health unit can be any health-related outcome, such as depression-free days or a point on a clinical severity scale (Russell, Gold, Siegel, Daniels, & Weinstein, 1996). Cost-utility analysis is a more specific form of cost-effectiveness analysis in which health is summarized as a composite measure of mortality and morbidity, most commonly the quality-adjusted life year (QALY). In the context of an implementation study, the health-related outcome might instead be a fidelity score or another indicator of implementation success (Dopp, Mundey, Beasley, Silovsky, & Eisenberg, 2019). In that case, the more flexible, generic cost-effectiveness analysis can be used to accommodate the implementation-specific outcome. The disadvantage is that the main result, the cost-effectiveness ratio, cannot necessarily be compared to results from other studies. In general, cost-effectiveness and cost-utility analyses require a threshold ratio that demarcates the line between cost-effective and not cost-effective, and determining such a threshold is difficult with context-specific outcomes (e.g., how much should be spent to achieve a unit increase in fidelity?).
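To make the arithmetic concrete, the sketch below computes an incremental cost-effectiveness ratio (ICER) for two implementation conditions and compares it to a willingness-to-pay threshold. All values, including the threshold, are hypothetical assumptions for illustration and do not come from any study cited here.

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER).
# All numbers are hypothetical, for illustration only.

def icer(cost_new: float, cost_old: float, effect_new: float, effect_old: float) -> float:
    """Incremental cost per unit of health (or implementation outcome) gained."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison: enhanced vs. standard implementation support,
# with effectiveness measured in depression-free days per patient.
ratio = icer(cost_new=1200.0, cost_old=800.0, effect_new=30.0, effect_old=20.0)
print(f"ICER: ${ratio:.2f} per depression-free day gained")

# A strategy is deemed cost-effective if its ICER falls below a chosen
# willingness-to-pay threshold; as noted in the text, such thresholds are
# hard to set for context-specific outcomes such as fidelity.
THRESHOLD = 50.0  # hypothetical dollars per depression-free day
print("Cost-effective" if ratio < THRESHOLD else "Not cost-effective")
```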

1.1.2. Cost-Benefit Analysis

Cost-benefit analysis quantifies all outcomes, including health, in monetary terms (Cellini & Kee, 2015). This allows costs to be compared directly to the benefits of any program, and allows benefits from multiple outcomes to be pooled; if the benefits exceed the costs, the intervention is an efficient use of resources. Cost-benefit analysis is used infrequently in health services research, perhaps because of disagreement in the field about how to monetize health outcomes. Another reason may be that cost-effectiveness or cost-utility analysis is generally sufficient to guide policy decisions if the goal is only to compare health interventions to each other. Some notable exceptions illustrate the value of conducting cost-benefit analysis in contexts where important outcomes include not just health but also other outcomes such as crime (Dopp et al., 2018; Hoomans & Severens, 2014).
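A minimal sketch of the cost-benefit logic follows, assuming hypothetical monetized outcomes; it shows how benefits from multiple domains (here, health and crime reduction) can be pooled and compared directly to costs.

```python
# Minimal cost-benefit sketch: all outcomes are monetized, so benefits from
# multiple domains can be pooled and compared to costs. Figures are hypothetical.

costs = 50_000.0                    # program plus implementation costs
benefits = {                        # monetized outcomes, by domain
    "health": 40_000.0,
    "crime_reduction": 25_000.0,
}

net_benefit = sum(benefits.values()) - costs
bc_ratio = sum(benefits.values()) / costs

print(f"Net benefit: ${net_benefit:,.0f}")    # > 0 implies efficient use of resources
print(f"Benefit-cost ratio: {bc_ratio:.2f}")  # > 1 is the equivalent decision rule
```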

1.1.3. Budget impact analysis

Budget impact analysis focuses on the financial impact of an intervention or program on the budgets of specific payers, such as a state Medicaid program (Mauskopf et al., 2007). This narrower perspective stands in contrast to the full societal perspective, which incorporates all relevant costs and benefits and is recommended for the economic evaluation methods described above (Sanders et al., 2016). While this narrower perspective can ignore costs or benefits that matter from the broad societal standpoint, it has the practical value of explicitly addressing the budgetary concerns of payers. Thus, this approach fits well with implementation science contexts, where financial factors are often central to whether programs and services are adopted and sustained (Humphreys, Wagner, & Gage, 2011).
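The sketch below illustrates the budget impact calculation from a single payer's perspective over a multi-year horizon. The uptake projections, per-patient costs, and cost offsets are hypothetical assumptions.

```python
# Minimal budget impact sketch from a single payer's perspective
# (e.g., a state Medicaid program). All inputs are hypothetical.

patients_treated = {1: 100, 2: 250, 3: 400}   # projected uptake by year
cost_per_patient = 900.0                      # payer cost of the new service
offsets_per_patient = 300.0                   # avoided spending (e.g., fewer hospitalizations)

for year, n in sorted(patients_treated.items()):
    net_impact = n * (cost_per_patient - offsets_per_patient)
    print(f"Year {year}: net budget impact ${net_impact:,.0f}")
```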

1.1.4. Summary of standard methods for economic evaluation

The methods described above have been developed and deployed primarily in the context of directly evaluating clinical or health services and programs. These methods have served that purpose well and are being used routinely to inform policy and practice decisions about which services and programs represent good value. As we discuss in the following section, the principles remain the same but there are important new considerations when extending economic evaluation methods to implementation science.

1.2. Economic evaluation in implementation science

1.2.1. Overview

Economic evaluation has traditionally focused exclusively on the costs of interventions or EBPs themselves at the patient level, not on the costs of the resources needed to implement them. In contrast, economic evaluation in implementation science explicitly addresses these implementation costs, including the costs of implementation strategies used to deploy and sustain behavioral interventions. Economic evaluation of implementation refers to the systematic evaluation of the outcomes that a specific implementation strategy (or set of competing strategies) achieves and the costs of achieving them (Raghavan, 2018). In implementation science, cost is a primary consideration from an organizational or system perspective when implementing EBPs (Proctor et al., 2011). Implementation strategies can vary in their resource requirements and intensity (e.g., computerized automated audit and feedback versus ongoing face-to-face provider consultation to prepare program champions). Comparative economic evaluation of implementation strategies provides vital information for decision makers to guide implementation decisions that will help achieve organizational goals within the context of resource constraints (Powell et al., 2019). Researchers, practitioners, and organizations increasingly recognize the critical role costs play in using implementation strategies to improve uptake of EBPs. Ignoring these costs can have considerable consequences, including inefficiencies in program delivery and exacerbated inequities in program access, which is contrary to the very purpose of implementation science (Hoomans & Severens, 2014). Organizations benefit from evidence that supports (or refutes) investment in specific strategies as an efficient use of resources, and this can help prioritize implementation efforts (Bauer, Damschroder, Hagedorn, Smith, & Kilbourne, 2015; Proctor et al., 2011; Raghavan, 2018).

1.2.2. Differences between implementation economic evaluation and traditional health services research

Traditionally, economic evaluations of health services (see Drummond, Sculpher, Claxton, Stoddart, & Torrance, 2015; Gold, 1996; Muennig, 2008) compare incremental differences in costs and health-related patient outcomes among discrete clinical practices (e.g., two or more interventions in a randomized controlled trial; RCT). Such methods can inform implementation efforts, for example by providing information about potential economic impact when deciding whether to implement an intervention. However, when researchers wish to examine the economic impact of a particular implementation strategy or set of strategies, this represents an extension of traditional economic evaluation methods (see Table 1).

Table 1: Characteristics of economic evaluations for interventions versus implementation

| Characteristic | Economic evaluation of interventions | Economic evaluation of implementation | Approach in exemplar study^a |
| --- | --- | --- | --- |
| Relevant costs | Discrete costs of intervention | Expansive costs of intervention (i.e., costs for replication) plus implementation strategy | Calculated LC costs in terms of direct (i.e., training, consultation, administration) and indirect (i.e., lost opportunities for alternative activities) expenses |
| Relevant benefits | Clinical outcomes | Implementation, service, and clinical outcomes | Examined (1) clinician competence (implementation outcome) and (2) trauma-related mental health symptoms (clinical outcome) |
| Time horizon | Variable, but can be brief (< 1 year) | Often multi-year; can include short-term implementation and long-term sustainment | Measured costs and benefits across each 12-to-14-month LC; will examine sustainment outcomes in future research |
| Perspective | Variable, but full societal perspective often encouraged | Health care system perspective is often most relevant | Measured costs and benefits from the perspective of mental health service agencies |
| Study design | Research methods chosen to maximize internal validity, rigor, comprehensiveness | "Minimum acceptable" research methods; must be pragmatic and feasible for practice settings | Used a pre-post evaluation design; collected simple, inexpensive measures of costs (e.g., budgets) and outcomes (e.g., self-report) |
| Impact of context | Minimized; often standardized interventions delivered in ideal settings | Variable; often multi-site studies with variability in intervention and implementation across settings | Collected data from 13 LCs in 5 states; used sensitivity analysis to examine influence of variation in number of participants, number of LC sessions, and billing rates |
| Relevant decision-makers | Health care payers (invest in clinical care) | Health system payers (invest in infrastructure) | Recommended that administrators in mental health service agencies consider the findings in their decision-making |

^a Description of the approach taken in an exemplar economic evaluation of an implementation strategy (Dopp et al., 2017). LC = learning collaborative, the implementation strategy examined in the exemplar study; we discuss this study in more detail in the text.

Economic evaluations in implementation science compare the costs of different implementation strategies to the outcomes of those strategies. Outcomes include those relevant to implementation research questions (e.g., acceptability, fidelity, reach) instead of, or in addition to, service outcomes (e.g., safety, equity, patient-centeredness) and clinical outcomes (e.g., functioning, satisfaction, symptoms). Specifically, we consider implementation costs to include all processes and strategies required to implement an EBP (Raghavan, 2018). We focus on implementation strategy costs because implementation strategies are designed to enhance uptake of EBPs in clinical and community settings. As mentioned, implementation strategy costs can vary widely depending on the method or compilation of methods used. These costs include intervention replication costs that would be incurred by those adopting the program (Gold et al., 2003); costs related to intervention tailoring or adaptation; costs to engage practitioners and participants; and any other costs related to implementation strategies, such as training, supplies, and labor for activities such as provider coaching or consultation (Ritzwoller, Sukhanova, Gaglio, & Glasgow, 2009). Economic evaluation of implementation strategies can inform not only the allocation of organizational resources for clinics but also widespread scale-up of these strategies (Raghavan, 2018).
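As a rough illustration of the cost categories enumerated above, the sketch below accumulates implementation expenses into a simple ledger. The category names and dollar amounts are our hypothetical choices, not a standard costing taxonomy.

```python
# Hypothetical implementation cost ledger mirroring the categories named in
# the text: replication, adaptation/tailoring, engagement, and strategy costs
# such as training and consultation labor. Figures are illustrative only.

from collections import defaultdict

ledger = defaultdict(list)  # category -> list of (description, amount)

def record(category, description, amount):
    """Log one implementation expense under a cost category."""
    ledger[category].append((description, amount))

record("replication", "manuals and materials for adopting sites", 2_000.0)
record("adaptation", "tailoring the EBP to the local context", 1_500.0)
record("engagement", "recruiting practitioners and participants", 800.0)
record("strategy", "trainer time for provider coaching", 4_200.0)
record("strategy", "consultation calls (labor)", 2_600.0)

total = sum(amount for items in ledger.values() for _, amount in items)
for category, items in sorted(ledger.items()):
    subtotal = sum(amount for _, amount in items)
    print(f"{category:>12}: ${subtotal:,.0f} ({subtotal / total:.0%} of total)")
```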

2. Approaches for economic evaluation in implementation science

2.1. Pragmatic approaches

Pragmatic approaches are needed for evaluating implementation strategies because these strategies involve costs at multiple levels (patient, provider, and system/organization). Researchers and practitioners increasingly recognize the need for more widespread application of economic evaluation to understand the cost-effectiveness of implementation strategies (Reeves, Edmunds, Searles, & Wiggers, 2019). Implementation strategies must compete for the same resources as other interventions; identifying, measuring, and valuing the resources required to deploy implementation strategies is critical to informing efficient use of these scarce resources (Reeves et al., 2019). Because implementation science is concerned with the translation of research to real-world clinical settings, approaches to economic evaluation must be both rigorous and pragmatic for healthcare organizations. Community-based practice organizations may face additional challenges in capturing implementation costs, as they may lack the capacity to collect detailed cost-related data such as might be obtained through electronic health records. Tools for conducting such evaluations, including costing guides and frameworks that can be used by practitioners and implementation scientists, are critical to achieving desired outcomes. These tools include surveys and tracking forms, time-motion surveys, and tools integrated into electronic health records to track time spent on implementation activities (Kilbourne et al., 2018; Liu et al., 2009). Costing guides include the World Health Organization's Choosing Interventions that are Cost Effective (CHOICE; World Health Organization, 2003), the Cost of Implementing New Strategies (COINS, discussed in more detail later in this paper; Saldana, Chamberlain, Bradford, Campbell, & Landsverk, 2014), and the practical costing guide described by Ritzwoller et al. (2009). Applying these approaches can help strengthen the science of economic evaluation of implementation by standardizing costing procedures that are flexible enough to suit a variety of implementation settings and user-friendly enough to be widely applied (Raghavan, 2018).
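A minimal sketch of the kind of calculation that tracking forms or time-motion surveys feed: logged activity hours are converted to labor costs. The roles, wages, and hours shown are hypothetical; real evaluations would use local wage and fringe rates.

```python
# Sketch of a pragmatic labor-cost calculation driven by a simple time log,
# as a tracking form or time-motion survey might produce. All figures are
# hypothetical assumptions.

time_log = [
    # (role, implementation activity, hours)
    ("clinician",  "EBP training session", 6.0),
    ("clinician",  "consultation call",    1.0),
    ("supervisor", "fidelity review",      2.0),
]

hourly_cost = {"clinician": 45.0, "supervisor": 60.0}  # wage plus fringe, hypothetical

total = sum(hours * hourly_cost[role] for role, _activity, hours in time_log)
print(f"Labor cost of implementation activities: ${total:,.2f}")
```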

2.2. Mixed Methods Approaches

Recently, implementation researchers have begun to recognize how the results of traditional (quantitative) economic evaluations are limited by a remaining “qualitative residual” of contextual information and stakeholders’ perspectives (Dopp et al., 2019). This information is essential to the interpretation of monetary values, particularly in implementation research, where costs and outcomes depend on the context in which implementation takes place. To address this important limitation, researchers recommend that implementation science embrace a mixed-methods research agenda that merges traditional quantitative approaches with qualitative methods for economic evaluation (see Jefferson et al., 2014; Starr, 2014), such as interviews and ethnographic fieldwork. To assist implementation scientists in making use of mixed methods in this research context, Dopp et al. (2019) provided an adapted taxonomy of mixed-method studies relevant to economic evaluation and an adapted version of reporting standards for economic evaluations that incorporates mixed methods. Implementation researchers can maximize the contributions of their economic evaluations by incorporating detailed, context-specific qualitative data that help tell the full story of the costs and outcomes of implementation.

3. Frameworks to Guide Economic Evaluation of Implementation

As the field of implementation science has blossomed, so has the number of frameworks that can guide implementation research and practice (Tabak, Chambers, Hook, & Brownson, 2018). A number of these frameworks identify costs as a significant factor in implementation approaches and outcomes. Among them, the EPIS framework (Aarons, Hurlburt, & Horwitz, 2011) highlights the role of funding and costs in both the inner context (i.e., the environment within the implementing system, organization, or agency) and the outer context (e.g., the sociopolitical and external regional environment in which the implementation is embedded), across all four phases of the implementation process. Similarly, the Consolidated Framework for Implementation Research (CFIR; Damschroder et al., 2009) references costs across multiple domains of the implementation process, including (1) the cost of the intervention, (2) external policies and incentives in the outer setting, and (3) available resources in the inner setting. More generally, the Interactive Systems Framework (ISF; Wandersman et al., 2008) denotes funding as one of four key outer context variables that influence implementation outcomes. These frameworks, however, do not provide guidance for estimating the cost of implementation.

Frameworks for measuring implementation costs must be flexible enough to allow for variation in the costs expended and resources available throughout the full implementation process. Unlike typical accounting methods for assessing intervention costs (e.g., for medications, equipment, etc.), assessment of implementation costs is activity-driven, based on the effort required to complete implementation tasks (Raghavan, 2018). Consistent with this activity-driven assessment, the Stages of Implementation Completion (SIC; Chamberlain, Brown, & Saldana, 2011; Liu et al., 2009; Saldana et al., 2014) is an eight-stage tool for assessing implementation processes and milestones. Stages range from Engagement with the program implementers (Stage 1) to the site delivering the intervention with Competency (Stage 8), and span three phases of implementation: pre-implementation, implementation, and sustainment. Each stage comprises a range of activities that vary in complexity depending on the intervention being adopted. Thus, the SIC lends itself well to providing a framework for implementation costs.

Accordingly, the Cost of Implementing New Strategies (COINS; Saldana et al., 2014) approach was developed as an adjunct to the SIC. Alongside SIC observations, as implementation activities are completed by a site, the direct (e.g., cost of purchasing new technology equipment to monitor fidelity) and indirect (e.g., time away from other responsibilities to prepare for the new intervention) expenses necessary to complete each task are recorded. Using COINS, differences in cost structures (i.e., when and how much resource allocation occurs) have been identified between implementation conditions. Patterns of resource allocation have also been identified, showing differences between direct expenditures and the effort needed to complete tasks at each implementation stage. Such information can be highly beneficial for decision makers seeking to avoid both over- and under-estimation of “what it takes” to implement a new intervention, and thus to plan more accurately for success. The SIC and COINS have been developed into a web-based tool (https://sic.oslc.org/) for data collection and for tracking implementation progress and outcomes.
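The sketch below illustrates, in simplified form, how COINS-style direct and indirect expenses might be mapped onto SIC stages. Only the two stage names given in the text (Engagement, Stage 1; Competency, Stage 8) come from the source; the middle entry and all dollar amounts are placeholders.

```python
# Simplified sketch of COINS-style cost mapping onto SIC stages.
# Stage names other than Engagement and Competency, and all dollar
# figures, are placeholders rather than actual SIC/COINS content.

from dataclasses import dataclass

@dataclass
class StageCosts:
    """COINS-style expense record for one SIC stage (simplified)."""
    stage: int
    name: str
    phase: str        # pre-implementation, implementation, or sustainment
    direct: float     # e.g., equipment purchased to monitor fidelity
    indirect: float   # e.g., staff time away from other responsibilities

    @property
    def total(self) -> float:
        return self.direct + self.indirect

stages = [
    StageCosts(1, "Engagement", "pre-implementation", direct=500.0, indirect=1_200.0),
    StageCosts(4, "(placeholder mid-implementation stage)", "implementation",
               direct=3_000.0, indirect=2_500.0),
    StageCosts(8, "Competency", "sustainment", direct=800.0, indirect=1_000.0),
]

for s in stages:
    print(f"Stage {s.stage}, {s.name} ({s.phase}): "
          f"${s.total:,.0f} total (${s.direct:,.0f} direct, ${s.indirect:,.0f} indirect)")
```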

4. Considerations for economic evaluation of implementation

4.1. Business case analysis for economic evaluation of implementation

Business case analysis is essential to capture the overall picture of the value of implementation strategies from the perspectives of the health care leaders who will ultimately choose whether to resource implementation support over time. A fundamental part of the business case analysis is determining the cost of the implementation strategies from the perspective of healthcare payers, in order to make the case for sustaining implementation strategy support over time, especially in the absence of other resources such as research funding. Leatherman et al. (2012) developed an approach for assessing the business case that can be applied to implementation strategy studies in mental health. Specifically, the business case includes mixed methods analyses to determine the return on investment of implementation strategies from the perspective of the health care payer (e.g., Medicaid) or organization, notably by ascertaining implementation strategy costs as well as the overall impact of the clinical interventions delivered through these strategies. In a business case analysis, the potential cost savings of deploying implementation strategies are calculated along with the value added by the clinical intervention across a wide range of outcomes, including improved provider outcomes (e.g., engagement, reduced burnout, satisfaction, and reduced turnover) in addition to improved consumer outcomes (e.g., satisfaction, symptom management, and reduction of critical service encounters such as hospitalizations). For example, value can be estimated using methods derived from traditional cost-effectiveness analyses, such as using changes in depressive symptom scores to determine the number of depression-free days (Lave, Frank, Schulberg, & Kamlet, 1998), or changes in average provider tenure (number of FTE-years of providers). Additional research is needed, however, to better understand how to monetize implementation-related outcomes in order to produce more credible and useful business case analyses.
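A minimal return-on-investment sketch under the business case logic described above: monetized provider and consumer gains are set against the cost of implementation strategy support. Every figure is a hypothetical assumption; a real business case analysis would monetize outcomes from local data.

```python
# Sketch of a simple return-on-investment calculation for implementation
# strategy support from a payer's or organization's perspective.
# All figures are hypothetical.

strategy_cost = 60_000.0            # implementation strategy support over one year

monetized_gains = {
    "avoided hospitalizations": 45_000.0,
    "reduced provider turnover": 30_000.0,  # e.g., avoided recruitment/onboarding costs
    "depression-free days":      15_000.0,  # symptom change converted and valued
}

net_gain = sum(monetized_gains.values()) - strategy_cost
roi = net_gain / strategy_cost
print(f"Net gain: ${net_gain:,.0f}; ROI: {roi:.0%}")
```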

4.2. Economic evaluation and health disparities

Economic evaluation of implementation is especially important for treatment and prevention programs delivered within organizations serving marginalized or underserved populations. These organizations often are under significant financial strain and must make resource allocation decisions in the context of competing demands and scarce resources. While EBPs delivered in such settings may aim to reduce health disparities, they risk failing to reach desired outcomes and may even exacerbate disparities if insufficient attention is given to resources required to effectively implement them. Additional resources may be required to tailor interventions to ensure that EBPs are appropriate and applicable for the context and target population while keeping key program elements intact (Stirman, Miller, Toder, & Calloway, 2013). In addition to implementation and behavioral outcomes, the equity impacts of investment in implementation strategies can also be evaluated; this provides important information about the overall intervention success in the context of ameliorating health disparities (Reeves et al., 2019). Understanding the resources needed to achieve desired outcomes (e.g., improved mental health, reduced substance use) is critical to achieving program goals and long-term sustainability in low-resource communities. In addition, economic evaluation of implementation strategies to address health disparities has notable potential to move the field forward; resource constraints have the potential to spur innovation and may help accelerate new discoveries in implementation science that are efficient and suitable for delivery in a wide range of clinical contexts (Yapa & Bärnighausen, 2018).

5. Case study: Learning Collaboratives to Implement CBT

For this example (Dopp, Hanson, Saunders, Dismuke, & Moreland, 2017), we focus on the method used to compare costs with clinical and implementation outcomes (see Table 1 for more information about the evaluation approach taken in the study). As part of an NIMH-funded study, Dopp et al. (2017) evaluated the cost-effectiveness of learning collaboratives (LCs), an implementation strategy in which teams from different organizations and professional roles (e.g., clinicians, supervisors, senior leaders) work together to implement and sustain an EBP (Ebert, Amaya-Jackson, Markiewicz, Kisiel, & Fairbank, 2012). In this case, LC participants worked to implement Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT; Cohen, Mannarino, & Deblinger, 2006) through a combination of pre-LC activities (e.g., formation of agency teams), 2–3 in-person training sessions (spread over 12–14 months), and “action periods” between trainings (e.g., consultation calls, engagement in Plan-Do-Study-Act cycles). Using program budgets and pre-post data from 13 completed collaboratives (574 clinicians and 1,410 youth), the authors calculated cost-effectiveness for two key outcomes: (1) clinician TF-CBT competency (measured on a clinician-rating scale) and (2) youth trauma-related mental health (i.e., youth- and parent-rating scales for traumatic stress and depression symptoms). Total LC costs in 2015 USD, including direct (i.e., training, consultation, and administration) and indirect (i.e., lost opportunities for alternative activities such as service delivery) expenses, were $11,523 per clinician or $4,684 per youth who received treatment. Cost-effectiveness ratios revealed that (1) costs of $18,679 were associated with each unit increase on the TF-CBT competency scale and (2) costs of $5,318 to $6,548 were associated with each unit decrease in youth trauma-related symptoms. The authors interpreted these cost-effectiveness ratios by comparing them to thresholds ($3,000 for TF-CBT competence; $39,000 for trauma-related symptoms) derived from relevant economic studies.
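A back-of-envelope check of the reported figures: dividing the cost per clinician by the cost per unit of competence recovers the implied average competence gain, and comparing the ratio to its threshold reproduces the authors' qualitative conclusion. The implied-gain derivation is ours, for illustration, and was not reported in the study.

```python
# Back-of-envelope check of the reported cost-effectiveness ratios.
# The per-unit cost and threshold come from the study as summarized above;
# the implied average outcome change per clinician is derived here.

cost_per_clinician = 11_523.0        # total LC cost per clinician (2015 USD)
cost_per_competence_unit = 18_679.0  # reported cost-effectiveness ratio
competence_threshold = 3_000.0       # threshold from relevant economic studies

# Implied average gain on the TF-CBT competency scale per clinician:
implied_gain = cost_per_clinician / cost_per_competence_unit
print(f"Implied competence gain: {implied_gain:.2f} units per clinician")

# The ratio far exceeds the threshold, so competence alone is not
# cost-effective, consistent with the authors' conclusion.
print("Favorable" if cost_per_competence_unit <= competence_threshold else "Unfavorable")
```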

Several aspects of this study illustrate the challenges of applying economic evaluation methods to implementation research. First, the indirect costs incurred by LC participants attending trainings and consultation activities were about four times as high as the direct costs, which highlights the importance of comprehensive cost measurement for implementation strategies. Second, when comparing their results to the aforementioned thresholds, the authors concluded that the impact of LC participation on clinician competence did not produce favorable cost-effectiveness ratios, yet reductions in youth mental health symptoms demonstrated high cost-effectiveness. Given the established link between clinician competence and outcomes in evidence-based practices (e.g., Beidas & Kendall, 2010), the authors concluded that investment in clinician training, while not cost-effective when considered in isolation, likely contributed to the competent delivery of TF-CBT to youth in this study. This illustrates the challenges inherent in interpreting the results of economic evaluations that focus only on implementation outcomes, whose value can be difficult to document, as opposed to clinical outcomes. Last, as in many implementation research studies, the practice-based nature of this project resulted in several methodological characteristics (e.g., pre-post design, use of pragmatic self-report measures) that differ from the highly controlled RCTs often favored in traditional economic evaluations. Methodological quality is critical to producing high-quality economic evidence in any field, but this study demonstrates valuable opportunities to learn from large-scale implementation initiatives that are already being undertaken.

6. Discussion

Although evidence-based practices for mental health treatment exist, they reach few of the populations in need. Implementation strategies are key to enhancing uptake of EBPs in organizations but can be resource intensive. Healthcare payers and organizations benefit from information that supports or refutes the use of specific implementation strategies as an efficient use of organizational resources. Yet economic evaluation of implementation remains an understudied area of implementation science.

Economic evaluation of implementation strategies is essential to bridging the research-to-practice gap; it provides stakeholders and decision makers with vital information to guide implementation decisions that will support efficient allocation of resources to promote uptake and sustainability of EBPs, ultimately improving mental health care. Currently, there is a dearth of research investigating costs of implementation, and even less research comparing the relative costs and benefits of using one strategy over another (Powell et al., 2019). The current summary seeks to advance economic evaluation of implementation strategies through discussing useful approaches and frameworks to guide this investigation. We also provide examples of applying implementation science frameworks to the study of costs and comparative economic approaches for mental health treatment programs in communities.

The field of implementation science is inherently transdisciplinary, and incorporating economic evaluation into implementation science requires extending traditional approaches. As in the evaluation of other implementation outcomes, it will be critical to position findings within relevant contextual information and stakeholder perspectives (e.g., those of providers, healthcare organizations, and patients). Approaches such as mixed methods are vital to ascertaining a complete, well-rounded understanding of cost information from providers, practices, health systems, and policy makers (Dopp et al., 2019). As the field evolves, we must address key issues, including identifying implementation outcomes that are generalizable and determining reasonable return-on-investment thresholds for implementation and service outcomes, such as fidelity (an implementation outcome) or wait time (a service outcome). Such thresholds are not currently available in the health economics literature, presenting challenges for implementation researchers (see Dopp et al., 2017). Other issues include identifying and handling sources of uncertainty (i.e., via sensitivity analyses) when conducting comparative economic evaluation of implementation strategies, and developing approaches to evaluating the equity impact of implementation strategies (Dopp et al., 2019; Reeves et al., 2019). Finally, we need to consider what perspective the economic evaluation should take (i.e., health system or societal), given the numerous stakeholders involved in implementation efforts (Muennig, 2008).
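As a small illustration of the sensitivity analyses mentioned above, the sketch below varies one uncertain input (the number of participating clinicians) and recomputes the cost per clinician trained; the fixed and variable cost figures are hypothetical.

```python
# One-way sensitivity sketch: vary a single uncertain input and observe
# the effect on cost per clinician trained. Figures are hypothetical.

fixed_costs = 40_000.0                # trainers, materials, administration
variable_cost_per_clinician = 350.0   # per-participant expenses

for n_clinicians in (20, 40, 60):
    per_clinician = fixed_costs / n_clinicians + variable_cost_per_clinician
    print(f"{n_clinicians} clinicians -> ${per_clinician:,.0f} per clinician")
```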

Moving the field forward will require implementation and mental health services researchers, stakeholder organizations, and practitioners to develop an understanding of economic evaluation, and economists to develop an understanding of implementation science (Hoomans & Severens, 2014; Raghavan, 2018; Starr, 2014). Continued development and application of resources (e.g., toolkits, interdisciplinary collaborations) is needed to support research that provides practical approaches to economic evaluation of EBP implementation. Such resources would add to the growing array of implementation science initiatives (Darnell et al., 2017), paving the way for more innovative, contextually valid, and impactful studies that effectively bridge the research-to-practice gap.

HIGHLIGHTS.

  • Economic considerations influence implementation of evidence-based mental health practices.

  • Decision makers and other stakeholders need cost information to make implementation decisions.

  • Economic evaluation of implementation strategies differs from traditional economic evaluation.

  • Economic evaluation of implementation is critical for efficient use of organizational resources.

Acknowledgments

This research was supported in part by grants from the National Institute on Drug Abuse (K01DA044279) and the National Institute of Mental Health (R01MH114203, R01MH099898). The views expressed in this article are those of the authors and do not necessarily represent the views of the VA or other public entities.


References

  1. Aarons G, Hurlburt M, & Horwitz S (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38, 4–23. 10.1007/s10488-010-0327-7
  2. Bauer M, Damschroder L, Hagedorn H, Smith J, & Kilbourne A (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3, 32. 10.1186/s40359-015-0089-9
  3. Beidas R, & Kendall P (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17(1), 1–30. 10.1111/j.1468-2850.2009.01187.x
  4. Bond G, Drake R, McHugo G, Peterson A, Jones A, & Williams J (2014). Long-term sustainability of evidence-based practices in community mental health agencies. Administration and Policy in Mental Health and Mental Health Services Research, 41(2), 228–236. 10.1007/s10488-012-0461-5
  5. Chamberlain P, Brown C, & Saldana L (2011). Observational measure of implementation progress in community based settings: The Stages of Implementation Completion (SIC). Implementation Science, 6, 116. 10.1186/1748-5908-6-116
  6. Cellini S, & Kee J (2015). Cost-effectiveness and cost-benefit analysis. In Hatry H & Wholey J (Eds.), Handbook of practical program evaluation. Jossey-Bass.
  7. Cohen JA, Mannarino A, & Deblinger E (2006). Treating trauma and traumatic grief in children and adolescents. New York: Guilford Press.
  8. Damschroder L, Aron D, Keith R, Kirsh S, Alexander J, & Lowery J (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(50), 1–15. 10.1186/1748-5908-4-50
  9. Darnell D, Dorsey C, Melvin A, Chi J, Lyon A, & Lewis C (2017). A content analysis of dissemination and implementation science resource initiatives: What types of resources do they offer to advance the field? Implementation Science, 12(1), 137. 10.1186/s13012-017-0673-x
  10. Dopp A, Coen A, Smith A, Reno J, Bernstein D, Kerns S, & Altschul D (2018). Economic impact of the statewide implementation of an evidence-based treatment: Multisystemic Therapy in New Mexico. Behavior Therapy, 49(4), 551–566. 10.1016/j.beth.2017.12.003
  11. Dopp A, Hanson R, Saunders B, Dismuke C, & Moreland A (2017). Community-based implementation of trauma-focused interventions for youth: Economic impact of the learning collaborative model. Psychological Services, 14(1), 57–65. 10.1037/ser0000131
  12. Dopp A, Mundey P, Beasley L, Silovsky J, & Eisenberg D (2019). Mixed-method approaches to strengthen economic evaluations in implementation research. Implementation Science, 14(2). 10.1186/s13012-018-0850-6
  13. Drummond M, Sculpher M, Claxton K, Stoddart G, & Torrance G (2015). Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press.
  14. Drummond M, Sculpher M, Torrance G, O’Brien B, & Stoddart G (2005). Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press.
  15. Ebert L, Amaya-Jackson L, Markiewicz JM, Kisiel C, & Fairbank JA (2012). Use of the Breakthrough Series Collaborative to support broad and sustained use of evidence-based trauma treatment for children in community practice settings. Administration and Policy in Mental Health and Mental Health Services Research, 39(3), 187–199. 10.1007/s10488-011-0347-y
  16. Eccles M, & Mittman B (2006). Welcome to Implementation Science. Implementation Science, 1(1), 1. 10.1186/1748-5908-1-1
  17. Edejer T, Baltussen R, Adam T, Hutubessy R, Acharya A, Evans D, & Murray C (Eds.). (2003). Making choices in health: WHO guide to cost-effectiveness analysis. Geneva, Switzerland: World Health Organization.
  18. Gold M (1996). Cost-effectiveness in health and medicine. New York: Oxford University Press.
  19. Hoomans T, & Severens JL (2014). Economic evaluation of implementation strategies in health care. Implementation Science, 9, 168. 10.1186/s13012-014-0168-y
  20. Humphreys K, Wagner T, & Gage M (2011). If substance use disorder treatment more than offsets its costs, why don’t more medical centers want to provide it? A budget impact analysis in the Veterans Health Administration. Journal of Substance Abuse Treatment, 41(3), 243–251. 10.1016/j.jsat.2011.04.006
  21. Jefferson T, Austen S, Sharp R, Ong R, Lewin G, & Adams V (2014). Mixed-methods research: What’s in it for economists? The Economic and Labour Relations Review, 25(2), 290–305. 10.1177/1035304614530819
  22. Kilbourne A, Smith S, Choi S, Koschmann E, Liebrecht C, Rusch A, … Almirall D (2018). Adaptive School-based Implementation of CBT (ASIC): Clustered-SMART for building an optimized adaptive implementation intervention to improve uptake of mental health interventions in schools. Implementation Science, 13(119), 1–15. 10.1186/s13012-018-0808-8
  23. Lave JR, Frank RG, Schulberg HC, & Kamlet MS (1998). Cost-effectiveness of treatments for major depression in primary care practice. Archives of General Psychiatry, 55(7), 645–651. 10.1001/archpsyc.55.7.645
  24. Leatherman S, Metcalfe M, Geissler K, & Dunford C (2012). Integrating microfinance and health strategies: Examining the evidence to inform policy and practice. Health Policy and Planning, 27(2), 85–101. 10.1093/heapol/czr014
  25. Liu C-F, Rubenstein LV, Kirchner JE, Fortney JC, Perkins MW, Ober SK, … Chaney EF (2009). Organizational cost of quality improvement for depression care. Health Services Research, 44(1), 225–244. 10.1111/j.1475-6773.2008.00911.x
  26. Mauskopf JA, Sullivan SD, Annemans L, Caro J, Mullins CD, Nuijten M, … Trueman P (2007). Principles of good practice for budget impact analysis: Report of the ISPOR Task Force on Good Research Practices—Budget Impact Analysis. Value in Health, 10(5), 336–347. 10.1111/j.1524-4733.2007.00187.x
  27. Muennig P (2008). Cost-effectiveness analyses in health: A practical approach. San Francisco: Jossey-Bass.
  28. Nathan PE, & Gorman JM (Eds.). (2015). A guide to treatments that work (4th ed.). Oxford: Oxford University Press.
  29. Gold MR, Siegel JE, Russell LB, & Weinstein MC (2003). Cost-effectiveness in health and medicine. New York: Oxford University Press.
  30. Onken L, Carroll K, Shoham V, Cuthbert B, & Riddle M (2014). Reenvisioning clinical science: Unifying the discipline to improve the public health. Clinical Psychological Science, 2(1), 22–34. 10.1177/2167702613497932
  31. Powell B, Fernandez M, Williams N, Aarons G, Beidas RS, Lewis C, … Weiner BJ (2019). Enhancing the impact of implementation strategies in healthcare: A research agenda. Frontiers in Public Health, 7, 3. 10.3389/fpubh.2019.00003
  32. Proctor E, Powell B, & McMillen J (2013). Implementation strategies: Recommendations for specifying and reporting. Implementation Science, 8(1), 139. 10.1186/1748-5908-8-139
  33. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, … Hensley M (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. 10.1007/s10488-010-0319-7
  34. Raghavan R (2018). The role of economic evaluation in dissemination and implementation research. In Brownson R, Colditz G, & Proctor E (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 89–106). New York: Oxford University Press.
  35. Reeves P, Edmunds K, Searles A, & Wiggers J (2019). Economic evaluations of public health implementation-interventions: A systematic review and guideline for practice. Public Health, 169, 101–113. 10.1016/j.puhe.2019.01.012
  36. Ritzwoller DP, Sukhanova A, Gaglio B, & Glasgow R (2009). Costing behavioral interventions: A practical guide to enhance translation. Annals of Behavioral Medicine, 37(2), 218–227. 10.1007/s12160-009-9088-5
  37. Roundfield K, & Lang J (2017). Costs to community mental health agencies to sustain an evidence-based practice. Psychiatric Services, 68(9), 876–882. 10.1176/appi.ps.201600193
  38. Russell LB, Gold MR, Siegel JE, Daniels N, & Weinstein MC (1996). The role of cost-effectiveness analysis in health and medicine. Panel on Cost-Effectiveness in Health and Medicine. JAMA, 276(14), 1172–1177.
  39. Saldana L, Chamberlain P, Bradford W, Campbell M, & Landsverk J (2014). The Cost of Implementing New Strategies (COINS): A method for mapping implementation resources using the Stages of Implementation Completion. Children and Youth Services Review, 39, 177–182. 10.1016/j.childyouth.2013.10.006
  40. Sanders G, Neumann P, Basu A, Brock D, Feeny D, Krahn M, … Ganiats TG (2016). Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: Second Panel on Cost-Effectiveness in Health and Medicine. JAMA, 316(10), 1093–1103. 10.1001/jama.2016.12195
  41. Starr MA (2014). Qualitative and mixed-methods research in economics: Surprising growth, promising future. Journal of Economic Surveys, 28(2), 238–264. 10.1111/joes.12004
  42. Stirman S, Miller C, Toder K, & Calloway A (2013). Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science, 8(65), 1–12.
  43. Tabak R, Chambers D, Hook M, & Brownson R (2018). The conceptual basis for dissemination and implementation research. In Dissemination and implementation research in health: Translating science to practice (2nd ed., pp. 73–88). New York: Oxford University Press.
  44. Vale L, Thomas R, MacLennan G, & Grimshaw J (2007). Systematic review of economic evaluations and cost analyses of guideline implementation strategies. The European Journal of Health Economics, 8(2), 111–121.
  45. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, & Blachman M (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181. 10.1007/s10464-008-9174-z
  46. Yapa HM, & Bärnighausen T (2018). Implementation science in resource-poor countries and communities. Implementation Science, 13(1), 154. 10.1186/s13012-018-0847-1
