Author manuscript; available in PMC 2024 Mar 1.
Published in final edited form as: J Am Coll Radiol. 2023 Mar;20(3):292–298. doi: 10.1016/j.jacr.2022.11.018

How to Perform Economic Evaluation in Implementation Studies: Imaging-Specific Considerations and Comparison of Financial Models

Stella K Kang 1,2,*, Heather T Gold 2
PMCID: PMC10112005  NIHMSID: NIHMS1882891  PMID: 36922103

Abstract

Economic evaluation for implementation science merits unique considerations for the local context, including its main audience of local decision makers. This local focus contrasts with traditional methods for developing coverage policy for medical tests and interventions, which typically emphasize benefits and costs more broadly, for society. Regardless of the strength of evidence backing the efficacy or effectiveness of a clinical intervention, local context is paramount when implementing evidence-based practices. Understanding the costs throughout the process of implementing a program informs decisions about whether to plan for and adopt the program, how to sustain it, and whether to scale it up widely. To guide economic evaluation for implementation of evidence-based imaging practices, we describe approaches that consider local stakeholders' needs and connect these with outcomes of cost and clinical utility. Illustrative examples of implementation strategies and economic evaluation are explored in cancer screening and care delivery.

Introduction

Implementation science evaluates the most effective strategies to increase the uptake of evidence-based practices (EBPs), whether focused on efficacious treatments, diagnostic tests, or clinical guidelines. There are myriad potential implementation strategies, including incentives, assessment of barriers and facilitators to implementing an EBP, and evaluation and adaptation of physical equipment (1). Given the key role of enterprises or practices in deciding whether to enact and sustain implementation efforts for EBPs, there are unique perspectives and methodological questions to consider when presenting the fiscal side of the implementation plan. However, approaches for economic evaluation tend to be incomplete or limited in scope and highly varied (2). Consider a hypothetical example of evaluating implementation strategies for evidence-based imaging in the initial staging of a cancer. Consideration of the costs could vary widely depending on whether the perspective includes only the healthcare sector (e.g., educational training, audit/feedback for providers) or also the patients (e.g., time or transportation, especially if the EBP or implementation strategy requires more than one visit to an imaging facility because more than one imaging modality is needed).

Economic evaluation in implementation science is conducted inconsistently, if at all, yet represents a high-priority need because costs are a potential barrier to implementing EBPs. Guidance for both local and broad policy decisions requires a more complete accounting of costs, depending upon perspective and with consideration of the implementation phase. Namely, decision makers may be more or less interested in capturing the costs of the pre-implementation/planning phase, the implementation phase (or de-implementation for low-value practices), the EBP phase, and/or downstream follow-up. Funding opportunities for implementation science have grown over the last 20 years (3), but implementation relies first on evidence of an intervention or program's effectiveness; then, for implementation to be feasible, decision makers must pragmatically consider the costs and value of the implementation process while prioritizing EBPs.

Despite its practicality, economic evaluation of implementing EBPs is unusual outside of research, and methodological uniformity is still under development. While some forms of cost analysis have focused heavily on budget impact or return on investment, an adapted form of cost-effectiveness analysis (CEA) could capture the most meaningful costs, as well as clinical utility, in a methodologically rigorous fashion for more clinically meaningful comparisons across implementation strategies. In this article, we summarize current expert guidance on such economic evaluation for implementation studies and apply its concepts to particular examples of implementation science for evidence-based imaging practices. As prototypical cases, we explore the economic evaluation of implementing evidence-based imaging practices in cancer screening and care delivery.

Types of Economic Evaluation Used in Implementation Science

Costs are one possible outcome measure when evaluating implementation strategies for a local practice or network, and may in turn affect other implementation outcomes such as sustainability or penetration. As with any other EBP, the selection of implementation strategies for imaging EBPs should ideally be evidence based (Table 1) (1).

Table 1.

Implementation strategies and examples in imaging

| Type of Implementation Strategy* | Example of Implementation Strategy | Example of Application in Radiology |
| --- | --- | --- |
| Provide interactive assistance | Web-based automated scheduling assistant for patients | Imaging appointments are offered at the correct interval, with targeted messaging to encourage completion of follow-up scans in lung cancer screening |
| Develop stakeholder interrelationships | Identify and prepare clinical champions for appropriate use of imaging | Clinic directors are designated champions for guideline adherence at faculty meetings, to decrease advanced imaging follow-up after diagnosis of papillary thyroid microcarcinoma |
| Train and educate stakeholders | Conduct ongoing training of providers on imaging options and clinical indications | A clinic's nurse practitioner is trained and coached on guiding patient decisions for cancer screening in high-risk individuals |
| Prepare patients to be active participants in decisions | Provide decision aids or decision coaches to improve evidence-based practices | A decision aid about percutaneous ablation is evaluated both as a website showing pictures of procedures and as a printed pamphlet |
| Promote adaptability | Identify aspects of tools or strategies that could be redesigned to fit local needs while retaining core elements | In response to the lack of smartphone use by many patients in a particular setting, a telephone-based system is used |
| Place innovation on fee-for-service lists/formularies | Gain reimbursement for technology that bolsters evidence-based practices | Previously tested software for tracking and messaging about evidence-based imaging follow-up receives a procedural code for reimbursement |
| Create incentives | Offer additional pay for innovative technology shown to improve the quality of care | A department chair offers extra payments for reaching goals for reporting criteria [e.g., according to Response Evaluation Criteria in Solid Tumors (RECIST)] |
* Implementation strategies are selected from a compilation described by Powell et al (1).

We will use the term "new strategy" to describe implementation strategies evaluated against a comparator, the existing or "standard approach." Budget impact analysis, CEA, and return on investment (ROI) analysis are the major types of economic evaluation reported in the health care delivery and implementation science literature. A budget impact analysis calculates the total costs of an implementation strategy and EBP from the payer perspective within a given timeframe, often a period of at least one year (4). The goal of the budget impact analysis is to consider whether the organization has the financial resources to move forward with the new intervention (i.e., to evaluate affordability). Thus, health-related benefits, typical overhead costs, and long-term costs are not considered (4, 5). The total potential population affected might be included as a subanalysis in budget impact analysis. The budget impact model can be constructed alongside an economic evaluation that does consider health consequences, such as CEA (6). CEA quantifies the difference in health outcomes and the difference in costs when comparing the new strategy to the current approach, and thus yields the value of the new strategy as a cost per health-outcome unit, or the "[health] bang for the buck." Meanwhile, an ROI analysis incorporates the gains or losses that may be realized from an investment in a new strategy, considering the costs of enacting the strategy, reductions in process costs, and/or gains in revenue. For a payer or health system, ROI may guide consideration of competing strategies or technologies that improve care delivery. We focus mostly on guiding principles for economic evaluation generally, with adaptations from CEA specifically, for implementation science, given the more straightforward application of budget impact and ROI analysis to this field.
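
To make these distinctions concrete, the minimal sketch below computes all three quantities from one set of purely hypothetical inputs; the patient volume, per-patient costs, uptake rates, and reimbursement figure are illustrative assumptions rather than data from the literature, and the effectiveness unit is the implementation-oriented one discussed later (patients completing the EBP) rather than life-years.

```python
# Contrasting budget impact, ROI, and an implementation-adapted ICER.
# All inputs below are illustrative assumptions, not data from this article.

eligible_patients = 2_000          # patients affected per budget year (assumed)
cost_per_patient_new = 180.0       # implementation strategy + EBP, per patient (assumed)
cost_per_patient_current = 120.0   # standard approach, per patient (assumed)
uptake_new = 0.65                  # proportion completing the EBP, new strategy (assumed)
uptake_current = 0.45              # proportion completing the EBP, current approach (assumed)
revenue_per_completed_exam = 250.0 # reimbursement per completed exam (assumed)

# Budget impact: total incremental spending for the payer in one budget cycle.
budget_impact = eligible_patients * (cost_per_patient_new - cost_per_patient_current)

# ROI: net financial gain (or loss) divided by the incremental investment.
additional_completions = eligible_patients * (uptake_new - uptake_current)
incremental_revenue = additional_completions * revenue_per_completed_exam
roi = (incremental_revenue - budget_impact) / budget_impact

# Implementation-adapted cost-effectiveness: incremental cost per additional
# patient completing the EBP, in place of cost per life-year gained.
icer = budget_impact / additional_completions

print(f"Budget impact: ${budget_impact:,.0f} per year")
print(f"ROI: {roi:.2f} net gain per dollar invested")
print(f"ICER: ${icer:,.0f} per additional patient completing the EBP")
```

Note that the same inputs can make a strategy look unattractive by ROI (a net financial loss) while still yielding a defensible cost per additional EBP completion, which is why the analysis type should match the decision being made.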

Guidance for Conducting Economic Evaluation in Implementation Studies

There are over 100 frameworks for guiding implementation studies, and few include costs as an outcome. Proctor et al mentioned implementation costs as a potential outcome in one framework, though without much detail or guidance (7). While CEA methods are well defined and widely applied, traditional CEA guidelines may not suit the purpose of evaluating implementation strategies (8). The goals of implementation studies often differ from those of traditional cost-effectiveness analyses in several ways, making adaptation necessary. First, stakeholders in implementation are most often interested in a shorter analytic timeframe than the lifetime horizon recommended for standard CEA (8). Practical considerations for economic modeling of implementing imaging-based practices include a framework of costs per phase of implementation and per site or location (Figure 1). In standard CEA, the usual reported outcomes are cumulative costs and life expectancy, the latter expressed in life-years or, when adjusted for quality of life, in quality-adjusted life-years. For decision makers gauging the acceptability and feasibility of implementation strategies, a one-year time horizon would be highly relevant, and a slightly longer timeframe (3–5 years) can often be analyzed in addition, to ensure the capture of important benefits (and costs) that take longer to manifest but are still direct consequences of implementing an EBP (e.g., the proportion of cancer cases that present for curative treatment, or the costs of late-stage diagnosis compared with earlier-stage diagnosis). Furthermore, instead of life-years gained, the uptake of the EBP is often of greater interest, since gains in life-years can take much longer to be realized and are biased toward severe acute conditions and against preventive measures.
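
As a minimal sketch of this cost-per-phase, per-site accounting, the code below accumulates hypothetical phase costs for two assumed sites over 1-, 3-, and 5-year horizons, treating the planning and implementation phases as one-time costs and sustainment as recurring; the site names, phase labels, and dollar amounts are all illustrative assumptions.

```python
# Cost-per-phase, per-site accounting over short horizons (cf. Figure 1).
# Phase labels follow the EPIS stages; all amounts are hypothetical.

site_costs = {
    "clinic_A": {"exploration": 4_000, "preparation": 8_000,
                 "implementation": 15_000, "sustainment": 10_000},
    "clinic_B": {"exploration": 3_000, "preparation": 5_000,
                 "implementation": 12_000, "sustainment": 9_000},
}

# Modeling assumption: the first three phases are incurred once, while
# sustainment (ongoing EBP delivery and support) recurs every year.
ONE_TIME = {"exploration", "preparation", "implementation"}

def total_cost(horizon_years: int) -> float:
    """Sum one-time phase costs plus recurring sustainment costs across sites."""
    total = 0.0
    for costs in site_costs.values():
        for phase, cost in costs.items():
            total += cost if phase in ONE_TIME else cost * horizon_years
    return total

# Horizons relevant to local decision makers: 1 year, then 3-5 years.
for years in (1, 3, 5):
    print(f"{years}-year horizon: ${total_cost(years):,.0f} across both sites")
```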

Figure 1. EPIS Process Framework (Stages of Implementation) and Corresponding Considerations for Costs (15)

As an example, a standard CEA of lung cancer screening showed the small gains in life expectancy over a lifetime horizon that are typical of widely recommended cancer screening when widespread adherence is assumed (9). Both standard CEA and economic evaluation focused upon implementation would ideally account for the total costs of the implementation strategies for increased screening (i.e., the EBP) among eligible patients. From an implementation standpoint, the costs related to gains in adherence to the screening recommendations would be of interest, while outcomes may focus upon the gains in early-stage cases and the proportion of eligible patients in the subpopulation of interest undergoing CT screening.

Although no specific guidelines have been released on conducting economic analysis in implementation studies, guidelines for formal CEA indicate that the outcomes selected should reflect health rather than only costs (e.g., return on investment) (2, 8, 10, 11). For example, the main outcome of an economic evaluation for an implementation strategy could be the cost per patient undertaking the evidence-based practice. An example in cancer care delivery might be a strategy of clinical decision support to increase use of the appropriate imaging workup for initial staging, where the relevant outcome is the cost per patient to provide education to clinicians about using the clinical decision support system. The cost per life-year gained could be a relevant outcome if the selected implementation strategy (i.e., clinical decision support) and its downstream impact are associated with evidence on longer-term health outcomes. In contrast, a budget impact analysis would entail a cross-sectional assessment of the prevalence of the condition or the size of the relevant patient population to account for total costs, with consideration of the upper and lower bounds on prevalence estimates to understand the impact on potential total costs.
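
A minimal sketch of that bounding exercise, with a hypothetical covered population, prevalence bounds, and per-case cost (none drawn from this article), shows how uncertainty in prevalence propagates directly to the total budget impact:

```python
# Budget impact bounded by lower and upper prevalence estimates.
# All inputs are hypothetical assumptions.

population = 50_000                 # covered lives in the local system (assumed)
prevalence = (0.010, 0.015, 0.022)  # lower bound, base case, upper bound (assumed)
cost_per_case = 420.0               # decision-support education + workup per patient (assumed)

for label, p in zip(("lower bound", "base case", "upper bound"), prevalence):
    affected = population * p
    print(f"{label}: {affected:,.0f} patients -> ${affected * cost_per_case:,.0f} total")
```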

The Importance of Perspective and Challenges in Choosing the Most Relevant Costs

In economic evaluation, there are several major analytic or stakeholder perspectives to consider: society, patients, the health care system, and the healthcare sector (which includes all costs accruing to the sector, not just the system). Expert panels have recommended using a societal or healthcare sector perspective because of their comprehensive assessment of financial impact (8); the societal perspective is broad and includes any cost or benefit of the implementation strategy, the EBP, downstream impacts, and effects on other, non-health sectors. In reality, the types of patient costs included in societal analyses vary tremendously, and generally only the most frequent and impactful (i.e., large) costs are estimated for inclusion in economic evaluation. In particular, it may be important to consider patient costs such as transportation and lost wages for time related to the implementation strategy and the EBP intervention (2). In terms of downstream consequences, the societal perspective considers the differences in benefits and harms of improving uptake of an EBP, the potential downsides of any increased healthcare utilization (such as procedures, medications, or visits), and potential spillover effects beyond healthcare. The healthcare sector perspective focuses on the costs of all healthcare services and outcomes, ideally using nationally representative Medicare reimbursement rates, time-and-motion studies, or microcosting to represent the cost to the U.S. healthcare system for each discrete visit or procedure. For implementation science, the analytic perspective could focus on the local health system, individual practices, hospital(s), or network for the relevant costs that would inform local decisions and allow one to tailor considerations to local patient needs, staffing, or facilities.
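
In effect, the chosen perspective acts as a filter on which cost categories enter the analysis. A minimal sketch, with hypothetical cost categories and per-patient amounts, illustrates the idea:

```python
# The analytic perspective determines which cost categories are included.
# Categories and amounts are illustrative assumptions.

costs = {
    # item: (per-patient cost in dollars, sector bearing the cost)
    "imaging_exam_reimbursement": (150.0, "healthcare"),
    "clinician_training_time":    (40.0,  "healthcare"),
    "patient_transportation":     (25.0,  "patient"),
    "patient_lost_wages":         (60.0,  "patient"),
}

def per_patient_cost(perspective: str) -> float:
    """Societal perspective includes all sectors; narrower ones filter by sector."""
    included = {"healthcare", "patient"} if perspective == "societal" else {perspective}
    return sum(amount for amount, sector in costs.values() if sector in included)

print(f"Healthcare sector: ${per_patient_cost('healthcare'):.2f} per patient")
print(f"Patient:           ${per_patient_cost('patient'):.2f} per patient")
print(f"Societal:          ${per_patient_cost('societal'):.2f} per patient")
```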

The approach for choosing implementation strategies to optimize EBP use would thus incorporate the perspective of the entity paying for the implementation strategies themselves, rather than only for the services comprising the EBP. The costs of implementation that may be most relevant are therefore those that affect the local health care system and its patients. Identifying the relevant costs is exceedingly important for budgeting sufficiently, but it is also tied to the barriers and facilitators identified in that local system. Costs directed toward mitigating the biggest barriers and leveraging the top facilitators of the EBP can create an organized and systematic basis for program evaluation and revision or adaptation. Meanwhile, costs to patients for the implementation strategies can be challenging to anticipate.

Multilevel and complex multifaceted interventions in particular will require precise point estimates with probable ranges of costs. O'Leary et al recommended systems mapping and stakeholder engagement methods for thorough and accurate estimation of implementation strategy costs, and for setting up a system to guide data collection (12). For example, a strategy for increasing evidence-based screening and follow-up completion may take a multilevel approach combining training of clinical staff, telehealth decision coaching for patients, and automated text messaging. Costs may include the decision coach's time and patient time lost from work to participate in a 45-minute session. The local costs of telehealth equipment and technical support for telehealth and automated text messaging may vary depending on existing infrastructure, and clinic and enterprise leaders would help incorporate the necessary components into the model. Erroneous assumptions about the availability of resources or staffing, or about the time taken to administer the intervention, and the resulting underestimates of cost would affect the economic analysis results. Eventually, data collected on each implementation strategy and EBP would serve to revise estimates and inform local decision makers who could adopt and sustain the implementation program. Examples of evaluating the costs of increasing colorectal cancer screening uptake have been illustrative in multicomponent evaluation (13, 14).
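
As a minimal microcosting sketch of this multilevel example, the code below sums ingredient costs (coach time, patient time, messaging, staff training, and platform support); every rate, duration, and volume is an illustrative assumption that local stakeholders would replace with their own data.

```python
# Microcosting a multilevel strategy: decision coaching + staff training +
# telehealth + automated texting. All rates and volumes are hypothetical.

session_minutes = 45
coach_hourly_wage = 55.0              # decision coach, including fringe (assumed)
patient_hourly_wage = 28.0            # value of patient time lost from work (assumed)
patients_per_year = 300               # coached patients annually (assumed)
staff_training_cost = 6_000.0         # one-time training of clinical staff (assumed)
telehealth_platform_annual = 4_500.0  # licensing/support; may be partly offset
                                      # by existing infrastructure (assumed)
texting_cost_per_patient = 1.20       # automated messaging (assumed)

# Variable cost of one coaching session, counting coach and patient time.
per_session = (session_minutes / 60) * (coach_hourly_wage + patient_hourly_wage)

annual_variable = patients_per_year * (per_session + texting_cost_per_patient)
first_year_total = annual_variable + telehealth_platform_annual + staff_training_cost

print(f"Cost per coached patient: ${per_session + texting_cost_per_patient:.2f}")
print(f"First-year program cost:  ${first_year_total:,.0f}")
```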

Aligning Expectations with Stakeholders When Selecting and Evaluating Outcomes

The decision makers who would fund programs to increase EBP uptake represent a key stakeholder position for economic modeling in implementation studies; from this perspective, the costs associated with hiring and training personnel, maintaining the availability of a service and resources, facilities and equipment, and other opportunity costs may be substantial. Indeed, sustainability is a recognized implementation outcome (7). Aligning perspectives with these decision makers may be challenging because of uncertainty about 1) how much the chosen implementation strategy will influence the most important outcomes, 2) the costs attributable to each strategy, and 3) which key outcomes and time horizon to select for these endpoints. Because the observed costs and outcomes may differ from initial estimates and merit further critical analysis, careful selection of data points and time horizon with stakeholders will allow new information to be incorporated into re-appraisal and decision making. Finally, the selection of intermediate outcomes for each multilevel intervention may be key to sustaining the program. A simple example is completing the guideline-recommended evaluation of abnormal results on cancer screening, where an intervention may focus on scheduling a follow-up appointment and placing an order for the indicated procedure. The intermediate outcome of scheduled follow-up may be measured, along with the final outcome of completed evaluation. However, other steps that must follow scheduling may still lead to failure to complete the evaluation. If other implementation strategies are in place but not well connected or measured as part of the process from stakeholders' points of view, the reasons for failure to improve the target endpoint may remain unclear. Given such gaps in process measures, support by decision makers to continue the overarching program may falter because of an apparent lack of benefit from increasing staff for appointments and orders. Therefore, for evaluating a health program for implementation, analyses that forecast only return on investment or budget impact are less informative when the goal is improving EBP use.
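
A minimal sketch of this follow-up cascade, with hypothetical step-completion probabilities, shows how measuring only the intermediate outcome (scheduling) can hide where patients are actually lost:

```python
# Tracking intermediate vs. final outcomes along a follow-up cascade.
# Step-completion probabilities are hypothetical assumptions.

cascade = [
    ("abnormal result identified",        1.00),
    ("follow-up appointment scheduled",   0.80),  # intermediate outcome targeted
    ("appointment attended",              0.85),
    ("indicated procedure completed",     0.90),  # final outcome
]

patients = 1_000.0
for step, completion_rate in cascade:
    patients *= completion_rate
    print(f"{step:36s}: {patients:6.0f} patients remain")

# If only scheduling is measured, losses at the later steps stay invisible,
# which can make the overall program look ineffective to decision makers.
```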

Another dimension of studying the successes or challenges of implementation is local context. Without carefully considering the local context and each cost associated with the selected implementation strategies, there is potential to over- or under-estimate the favorability of implementation. Under-estimating economic favorability stems from over-forecasting costs or under-projecting benefits of implementation strategies. Illustrative root causes of underestimating economic favorability might include: 1) a focus on life expectancy instead of cross-sectional improvements in uptake of an EBP or short-term benefits to quality of life; 2) assuming small gains in uptake of the EBP based on the general population, when a substantial subpopulation at particularly high risk may have large gains in uptake at the same cost; 3) failing to account for other benefits of heightening EBP uptake (e.g., elevating the patient or clinician experience, or increased engagement and therefore greater likelihood of adhering to a recommendation); 4) opportunistic timing of a strategy leading to increased uptake, such as designing a strategy to coincide with an incentive for another service; and 5) attributing costs to the implementation strategy when they are in fact offset by existing resources. For example, an automated referral to a program of hepatocellular carcinoma surveillance in viral hepatitis B and C patients, coupled with free mental health counseling sessions or partner pharmacies that address related needs, might result in greater EBP uptake than typically reported for automated electronic referral programs because of the multifaceted approach. The evidence-based estimate of uptake could then be revised as a trial proceeds. As another example, the projected costs of training a group of clinic staff in an intervention might be reduced unexpectedly by an adaptation in which trained staff then train their peers, reducing the number of hours spent by staff in offsite training sessions.
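
As a minimal sketch of root cause 2 above, the code below contrasts a naive estimate that applies the general-population uptake gain to everyone with a subgroup-aware estimate; the group sizes, uptake gains, and strategy cost are hypothetical.

```python
# Root cause 2: pooling a high-gain subgroup with the general population
# understates favorability. All numbers are hypothetical assumptions.

groups = {
    # name: (eligible patients, absolute uptake gain from the strategy)
    "general population": (4_500, 0.03),
    "high-risk subgroup": (500,   0.25),
}
strategy_cost = 50_000.0  # same total strategy cost under either assumption (assumed)

# Naive estimate: everyone gains at the general-population rate.
general_gain_rate = groups["general population"][1]
naive_completions = sum(n for n, _ in groups.values()) * general_gain_rate

# Subgroup-aware estimate: each group gains at its own rate.
aware_completions = sum(n * g for n, g in groups.values())

print(f"Naive:          ${strategy_cost / naive_completions:,.0f} per added completion")
print(f"Subgroup-aware: ${strategy_cost / aware_completions:,.0f} per added completion")
```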

On the other hand, over-estimating economic favorability can result from underestimating costs or over-estimating the clinical benefits of the strategies. Common sources include 1) costs that are missed or forgotten for some aspect of an implementation strategy because of insufficient guidance by stakeholders, whether related to staff time or to additional services that result as a consequence of an implementation strategy; and 2) over-estimated benefits of EBP uptake when population risk, healthcare access, or adherence differ from the local context.

Conclusion

Implementation is a continual process, requiring feedback, re-exploration, and potentially adaptation to ensure that practices are optimized according to internal and external context. Whenever key barriers or facilitators for any stakeholder perspective are not detected or closely evaluated, there is potential for misrepresentation and poor forecasting of the outcomes (including costs, benefits, and harms). Continued re-evaluation of factors such as fidelity or penetration may facilitate an accurate understanding of how favorable an implementation approach is over time and improve estimates beyond the initial period of implementation, when those involved are still in the learning phase. On the other hand, fidelity to the intended strategies could degrade over time for various reasons, or sustainability could prove low despite initial success and well-selected strategies. Mis-estimation of costs, benefits, and harms could lead to difficulties with continued, sustained uptake of the EBP; loss of fidelity to the implementation approach because of a lack of resources; and unexpected decreases in health benefits over time.

Table 2.

Comparison of Types of Cost Analysis

| Cost Analysis Method | Goal | Costs Included | Results | Examples of Typical Users |
| --- | --- | --- | --- | --- |
| Budget impact analysis | Assess affordability of a strategy for a real-world patient population, such as a local health system | Costs to the local system or practices of administering and implementing the intervention(s) | Estimate of costs associated with implementing the intervention(s) for a given budget cycle; scenario-specific results | Decision makers in a health system deciding whether implementing the intervention(s) locally falls within budget |
| Return on investment | Assess the financial return of a strategy (i.e., net financial gains or losses) for a real-world patient population, such as a local health system | Costs to the local system or practices of administering and implementing the intervention(s) | Ratio of financial gains over costs of the investment; applies estimated revenue and costs associated with implementing the intervention(s) for a given budget cycle | Decision makers in a health system deciding whether implementing the intervention(s) may increase revenue, or how long it may take to break even on a care delivery investment |
| Cost-effectiveness analysis | Assess alternative strategies for comparative differences in effectiveness (e.g., life expectancy benefit) and associated costs at a broad population level (typically the healthcare sector or society) | Costs of any services associated with the strategies and downstream impacts; a societal perspective also includes costs to patients for time, transportation, and other needs created by interventions | Incremental cost-effectiveness ratio; time horizon for analysis is often lifetime | Policy makers and payers selecting among alternative programs that improve health outcomes, given the programs' different costs |
| Economic evaluation for implementation science | Assess alternative implementation strategies and associated interventions for comparative differences in effectiveness (e.g., life expectancy benefit) and costs for a local patient population | Costs to the local system of administering and implementing the intervention(s), as well as costs to patients for time, transportation, etc. | Ratio of benefits and harms over a short time horizon (e.g., 1–5 years); an incremental cost-effectiveness ratio could be calculated | Decision makers for systems or practices considering implementation |

Acknowledgments

This work was supported by the National Cancer Institute (P.I. Kang, R01CA262375). Dr. Kang also receives other grant support from the NIH/NCI, NIH/National Institute of Dental and Craniofacial Research, and the Doris Duke Foundation for unrelated work, and royalties from Wolters Kluwer for unrelated work. Dr. Gold reports no financial support for related work. The authors declare no conflict of interest.

References

1. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
2. Gold HT, McDermott C, Hoomans T, Wagner TH. Cost data in implementation science: categories and approaches to costing. Implement Sci. 2022;17(1):11.
3. Neta G, Sanchez MA, Chambers DA, Phillips SM, Leyva B, Cynkin L, et al. Implementation science in cancer prevention and control: a decade of grant funding by the National Cancer Institute and future directions. Implement Sci. 2015;10:4.
4. Sullivan SD, Mauskopf JA, Augustovski F, Jaime Caro J, Lee KM, Minchin M, et al. Budget impact analysis-principles of good practice: report of the ISPOR 2012 Budget Impact Analysis Good Practice II Task Force. Value Health. 2014;17(1):5–14.
5. Mauskopf JA, Sullivan SD, Annemans L, Caro J, Mullins CD, Nuijten M, et al. Principles of good practice for budget impact analysis: report of the ISPOR Task Force on good research practices-budget impact analysis. Value Health. 2007;10(5):336–47.
6. Yagudina RI, Kulikov AU, Serpik VG, Ugrekhelidze DT. Concept of combining cost-effectiveness analysis and budget impact analysis in health care decision-making. Value Health Reg Issues. 2017;13:61–6.
7. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.
8. Sanders GD, Neumann PJ, Basu A, Brock DW, Feeny D, Krahn M, et al. Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: Second Panel on Cost-Effectiveness in Health and Medicine. JAMA. 2016;316(10):1093–103.
9. Black WC, Gareen IF, Soneji SS, Sicks JD, Keeler EB, Aberle DR, et al. Cost-effectiveness of CT screening in the National Lung Screening Trial. N Engl J Med. 2014;371(19):1793–802.
10. Saldana L, Ritzwoller DP, Campbell M, Block EP. Using economic evaluations in implementation science to increase transparency in costs and outcomes for organizational decision-makers. Implement Sci Commun. 2022;3(1):40.
11. Husereau D, Drummond M, Augustovski F, de Bekker-Grob E, Briggs AH, Carswell C, et al. Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) statement: updated reporting guidance for health economic evaluations. Value Health. 2022;25(1):3–9.
12. O'Leary MC, Hassmiller Lich K, Frerichs L, Leeman J, Reuland DS, Wheeler SB. Extending analytic methods for economic evaluation in implementation science. Implement Sci. 2022;17(1):27.
13. Kim KE, Tangka FKL, Jayaprakash M, Randal FT, Lam H, Freedman D, et al. Effectiveness and cost of implementing evidence-based interventions to increase colorectal cancer screening among an underserved population in Chicago. Health Promot Pract. 2020;21(6):884–90.
14. Conn ME, Kennedy-Rea S, Subramanian S, Baus A, Hoover S, Cunningham C, et al. Cost and effectiveness of reminders to promote colorectal cancer screening uptake in rural federally qualified health centers in West Virginia. Health Promot Pract. 2020;21(6):891–7.
15. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.
