Author manuscript; available in PMC: 2022 Mar 1.
Published in final edited form as: J Subst Abuse Treat. 2020 Dec 3;122:108217. doi: 10.1016/j.jsat.2020.108217

Effectiveness of value-based purchasing for substance use treatment engagement and retention

Sharon Reif a, Maureen T Stewart a, Maria E Torres a,b, Margot T Davis a, Beth Mohr Dana a,c, Grant A Ritter a
PMCID: PMC8380407  NIHMSID: NIHMS1651755  PMID: 33509415

Abstract

Introduction:

Many people drop out of substance use disorder (SUD) treatment within the first few sessions, suggesting the need for innovative strategies to address early dropout. We examined the effectiveness of incentive-based contracting for Maine’s publicly funded outpatient (OP) and intensive outpatient (IOP) SUD treatment to determine its potential for improving treatment engagement and retention.

Methods:

Maine’s incentive-based contract with OP and IOP treatment agencies funded by the federal block grant created a natural experiment in which we could compare treatment engagement and retention against a group of state-licensed treatment agencies that were not part of the incentive-based contract. We used administrative data for OP (N=18,375) and IOP (N=5,986) SUD treatment admissions from FY2005–FY2011 to capture trends before and after the FY2008 contract implementation date. We estimated multivariable difference-in-difference logistic regression models following propensity score matching of clients.

Results:

Two-thirds (66%) of OP admissions engaged in treatment, defined as 4+ treatment sessions, and 85% of IOP admissions satisfied the similar criterion of 4+ treatment days. About 40–45% of OP admissions reached the threshold for retention, defined as 90 days in treatment. Treatment completion was attained by 50–58% of IOP admissions. For OP, the incentive and nonincentive groups did not differ significantly in treatment engagement (AOR=1.28, DID=5.9%, p=.19), and the difference in 90-day retention was significant but in the direction opposite to what we hypothesized (AOR=0.80, DID = –4.6%, p=.0003). For IOP, the incentive group had a significant, though small, increase in treatment engagement (AOR=1.52, DID=5.5%, p=.003), but the corresponding increase in treatment completion was not significant (AOR=1.12, DID=2.7%, p=.53). In all models, individual-level variables were strong predictors of outcomes.

Conclusion:

We found little to no impact of the incentive-based contract on the treatment engagement, retention, and completion measures, adding to the body of evidence that shows few or null results for value-based purchasing in SUD treatment programs. The limited success of such efforts is likely to reflect the bandwidth that providers and programs have to focus on new endeavors, the importance of the incentive funding to their bottom line, and forces beyond their immediate control.

Keywords: Substance use disorder, Treatment, Value-based purchasing, Treatment engagement, Treatment retention, Incentives

1. Introduction

Many studies have demonstrated the link between staying in treatment and improved outcomes for people with substance use disorder (SUD) (Garner, Godley, Funk, Lee, & Garnick, 2010; Simpson, 2004; Yang et al., 2013; Zhang, Friedmann, & Gerstein, 2003). However, research has also shown that treatment participation drops precipitously after the first few sessions (Hoffman, Ford, Tillotson, Choi, & McCarty, 2011), with less than half of those who initiated treatment attaining early treatment engagement (Garnick, Lee, Horgan, & Acevedo, 2009; Liu et al., 2020; NCQA, 2020). Although dropout may occur for a variety of reasons related to the program (e.g., location or hours) or individual (e.g., not recognizing the need for treatment) (Laudet, Stanick, & Sands, 2009), research highlights the importance of organizational and policy approaches in effecting change (Hoffman et al., 2011; Institute of Medicine, 2001, 2006; McCarty et al., 2007). With research recognizing treatment engagement and longer retention as key factors to increasing the likelihood of moving a person toward recovery, programs must find innovative ways to improve these treatment measures for people with SUD.

Treatment providers can use performance measurement as a tool to establish their goals, assess their performance on these goals, and drive improvements. It is important for providers to enhance their understanding of client participation in treatment and to inspire programs to seek ways to improve. The National Committee for Quality Assurance (NCQA), as part of its Healthcare Effectiveness Data and Information Set (HEDIS) measures, has certified the performance measure of treatment engagement, defined as the proportion of clients with several additional visits within 30 days after initiating treatment. Numerous health plans (NCQA, 2020), the Centers for Medicare and Medicaid Services (CMS, 2020), and many states have adopted this performance measure. The engagement measure emphasizes the importance of patients’ going beyond the first several visits to truly benefit from treatment (Garnick et al., 2009; McCorry, Garnick, Bartlett, Cotter, & Chalk, 2000). Treatment engagement has been linked to improved outcomes, such as improved employment and reduced arrests and mortality (Dunigan et al., 2014; Garnick et al., 2014; Harris, Humphreys, Bowe, Tiet, & Finney, 2010; Paddock et al., 2017). Retention in treatment has not been systematically implemented as a performance measure to date, but it is an important correlate of improved outcomes (Garner et al., 2010; Simpson, 2004; Yang et al., 2013; Zhang et al., 2003).

Performance-based contracting is a value-based purchasing (VBP) approach that structures payment contingent upon meeting specified levels of performance, aligning payment with quality improvement (Institute of Medicine, 2007; Rosenthal, Fernandopulle, Song, & Landon, 2004). The underlying theory is that a financial reward or penalty can be used as a lever that drives potential recipients to change their behaviors (Institute of Medicine, 2007). The payer chooses the specific measure(s) to reward, which highlight what the payer believes are most important to improve, and which the providers ideally have the ability to control (Hodgkin et al., 2020). Such contracts are expected to drive change in performance by the appeal of a reward (or the desire not to be penalized). Performance measure data enable programs to evaluate where improvements are needed, in part to receive the reward. This becomes a tradeoff between effort to improve performance, presuming that it is even feasible, and the reward that comes from doing so. The payer uses penalties to indicate that failure to improve on a given measure has consequences.

Health care has incorporated VBP to improve quality of care and control costs. However, the literature evaluating these approaches remains limited, is largely based in the medical area, and shows mixed results (Jha, 2013; Markovitz & Ryan, 2017). VBP initiatives have lagged in behavioral health care. A recent review of VBP efforts in behavioral health since 1997 identified only 17 initiatives across a range of VBP models; more than half addressed substance use treatment and most relied on process measures (Carlo, Benson, Chu, & Busch, 2020). Evaluation of these efforts showed some promise (Garner et al., 2012; Garnick et al., 2017; Haley, Dugosh, & Lynch, 2011; Klein, Lloyd, & Asper, 2016; M. T. Stewart, Horgan, Garnick, Ritter, & McLellan, 2013; M. T. Stewart et al., 2018; R. E. Stewart, Lareef, Hadley, & Mandell, 2017; Stuart et al., 2017; Unutzer et al., 2012; Vandrey, Stitzer, Acquavita, & Quinn-Stabile, 2011).

Treatment engagement and retention, specifically, have been rewarded measures in several VBP studies of outpatient SUD treatment, with mixed results. In Delaware, a performance-based contract reduced waiting time and increased length of stay in outpatient treatment (M. T. Stewart et al., 2013). In 1992, Maine created the first substance use performance-based contract in the U.S.; it produced some improvements but also unintended consequences, such as gaming or cherry-picking, where programs treated fewer severely ill clients to improve measured performance (Commons, McGuire, & Riordan, 1997; Lu, 1999; Lu, Ma, & Yuan, 2003; Shen, 2003). Following feedback on its original design, Maine revised its approach in 2007 (Brucker & Stewart, 2011; M. T. Stewart et al., 2018). Initial analysis of this revised performance-based contract found unexpectedly lower retention in incentivized programs compared to nonincentivized programs (Brucker & Stewart, 2011). However, that analysis examined only the first year of the contract and did not use a matched comparison group, so its results may not be robust.

Given the limited uptake of VBP in SUD treatment systems and the mixed results to date, additional evidence is needed to understand whether VBP is an effective tool for improving outcomes for clients in SUD treatment. This paper hypothesizes that financial incentives improve treatment effectiveness as indicated by measures of engagement and retention. To conduct a more comprehensive study of Maine’s revised incentive-based approach, we applied a rigorous quasi-experimental design using a matched sample of treatment and comparison clients. We examined treatment engagement and retention among clients who were or were not exposed to the incentive-based contract, and considered possible changes in these outcomes over time by analyzing multiple years of data before and after Maine implemented the contract. Specifically, we examined treatment engagement and retention in Maine’s publicly funded outpatient and intensive outpatient SUD treatment settings from 2005 through 2011 and tested the hypothesis that financially incentivizing these performance measures drove improvements in the programs.

2. Methods

2.1. Maine’s incentive-based contract

Maine’s incentive-based contract (IC), which began in July 2008 and continued through 2016, was required for all Maine outpatient (OP) and intensive outpatient (IOP) nonmethadone agencies that received federal Substance Abuse Treatment and Prevention (SATP) block grant funds, administered by the Maine Office of Substance Abuse and Mental Health Services (SAMHS). SAMHS also licensed all agencies in the state that provided SUD treatment and billed Medicaid for their services, some of which did not receive SATP block grant funds. This created a natural experiment: agencies that were SATP-funded, and thus incentivized, could be compared with a group of agencies that did not receive block grant funds and were not under the incentive contract.

The incentive-based contract included five performance measures, each with penalty and incentive thresholds determined by FY2006 performance on the same measures and the expected ability to improve over time. The rewarded measures included two access measures (waiting time for first appointment, and time from first contact to first face-to-face visit) (M. T. Stewart et al., 2018) and two types of retention measures (treatment engagement for both OP and IOP, coupled with either retention [OP] or completion [IOP], described further below), which research has linked to improved outcomes. The state followed a NIATx learning collaborative approach (e.g., Roosa, Scripa, Zastowny, & Ford, 2011) to develop the measures and obtain provider buy-in. The contract also established expectations for delivered units of service for all outpatient services and clients, not just state-contracted units, to ensure that programs maintained the agreed-upon utilization and to reduce the likelihood of client selection to game the measures. Programs could earn or lose up to 9% of their base payment for reaching or failing to reach performance thresholds: ±1% for each access and retention measure, and ±5% for units of service. The state calculated performance monthly, separately for adult and adolescent clients; this study focuses on adult clients. The state made payment or penalty adjustments to the contract quarterly. The state facilitated some technical assistance efforts during the early years of the program; it hosted learning collaboratives and NIATx training (e.g., plan-do-study-act cycles of rapid improvement) and engaged with programs around their data and performance. Additional detail is available elsewhere (Brucker & Stewart, 2011; M. T. Stewart et al., 2018).
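The payment arithmetic described above is straightforward; a minimal sketch (the function name and threshold encoding are ours, not part of the contract documentation):

```python
def payment_adjustment(base_payment, measure_hits, units_hit):
    """Quarterly payment adjustment under the incentive-based contract.

    measure_hits: list of four entries, one per access/retention measure,
        each +1 (incentive earned), -1 (penalty), or 0 (neither).
    units_hit: +1, -1, or 0 for the units-of-service expectation.
    Each measure is worth +/-1% of base payment; units of service +/-5%,
    bounding the total adjustment at +/-9%.
    """
    pct = 0.01 * sum(measure_hits) + 0.05 * units_hit
    return base_payment * pct
```

For example, a program hitting all four measure thresholds and the units-of-service expectation would earn the full 9% of its base payment; missing all of them would cost 9%.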

2.2. Data source and matched sample

The data for this study are from SAMHS admission and discharge records provided by all licensed OP and IOP SUD treatment agencies in Maine from FY2005 through FY2011. We excluded sole practitioners and agencies with fewer than 30 clients annually. The study period spans three years before the incentive contract went into effect at the beginning of FY2008 through four years following its implementation. SAMHS required agencies to report admission and discharge data for all clients regardless of payer (e.g., block grant, Medicaid, private-pay, self-pay). The study compares the outcomes of clients in agencies subject to the incentive-based contract (IC group) with clients from a comparison group of state-licensed agencies that were not subject to the contract (non-IC group). In this natural experiment, neither clients nor programs were randomly assigned to the incentive groups, so we used quasi-experimental methods.

We developed the initial analysis file by linking admission and discharge records based on agency, client identifier, and admission date. To address possible imbalances and selection bias due to the lack of randomization, we developed propensity scores based on pre-period agency and client characteristics and used them to match IC with non-IC admissions. We matched within quintile of propensity score and admission year, using random sampling with replacement. All IC admissions were matched successfully, and we achieved good balance for all pre-period agency and client characteristics except referral from criminal justice. The final matched sample consisted of 18,375 OP records and 5,986 IOP records. Additional detail about the sample and approach is available elsewhere (M. T. Stewart et al., 2018). The Brandeis University Institutional Review Board approved this research.
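The matching step above can be sketched as follows. This is an illustrative reconstruction, not the study’s actual code: the column names (`ic`, `adm_year`) and the logistic propensity specification are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_by_quintile(df, covariates, seed=0):
    """Match each IC admission to a non-IC admission drawn (with replacement)
    from the same propensity-score quintile and admission year."""
    rng = np.random.default_rng(seed)
    # Estimate the propensity of being in an IC agency from pre-period covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["ic"])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])
    df["quintile"] = pd.qcut(df["pscore"], 5, labels=False)
    matched = []
    for _, cell in df.groupby(["quintile", "adm_year"]):
        ic_rows = cell[cell["ic"] == 1]
        pool = cell[cell["ic"] == 0]
        if len(ic_rows) == 0 or len(pool) == 0:
            continue  # no usable comparison in this stratum
        # Draw one comparison admission per IC admission, with replacement.
        draw = pool.sample(n=len(ic_rows), replace=True,
                           random_state=int(rng.integers(0, 2**31 - 1)))
        matched.append(pd.concat([ic_rows, draw]))
    return pd.concat(matched, ignore_index=True)
```

Sampling with replacement means a single comparison admission can serve as the match for more than one IC admission, which is why the clustering within quintiles must later be accounted for in the regression models.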

2.3. Variables

The incentivized contract defined our dependent variables separately for OP and IOP, but in both settings they reflect engagement and longer-term treatment duration. For OP agencies, we defined engagement as attending four or more (4+) treatment sessions and the longer-term measure as remaining in treatment for 90 days or longer. For IOP agencies, which have shorter lengths of treatment by design, we defined engagement as attending four or more (4+) days of treatment and the longer-term measure as treatment completion. Encounter-level data with service dates were not collected, so no time frame was associated with these measures. We evaluated each measure independently; that is, the 90-day retention and treatment completion measures were not contingent on the 4+ session or 4+ day measures. In each case, we hypothesized that the IC group would improve more over time (i.e., after the IC implementation in FY2008) than the non-IC group.

We controlled for potential confounding variables used in many SUD treatment studies to adjust for demographics and potential influence on treatment engagement and retention. We obtained these data from the SAMHS admission and discharge forms: sociodemographics (age, gender, education, marital status); health insurance (Medicaid as primary payment source at admission); criminal justice system (CJS) referral; mental illness/disorder; substance use severity; receipt of wraparound services; prior treatment episodes; and primary drug (that led to the current treatment admission—alcohol, opioids, and other drugs).

2.4. Statistical analysis

Treatment admission was the unit of observation. We stratified all analyses by OP and IOP since we defined the outcome measures differently for the two settings.

We used a difference-in-difference (DID) analytic approach (Meyer, 1995) with the propensity score matched control group, to adjust for unobserved factors or potential confounders that might have coincided with IC implementation. The DID effect is equal to the pre-post change in the retention measure for the IC group minus the pre-post change in the measure for the non-IC group. A significant DID effect indicates that the two groups changed differently over time. A positive DID effect for the retention measure, for instance, indicates that the IC was associated positively with this outcome; e.g., a proportionally greater percentage of IC clients stayed in treatment after contract implementation compared to the non-IC group.
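As a concrete illustration, the unadjusted DID for OP 90-day retention can be computed directly from the Table 1 rates (the adjusted estimate in Table 2, −4.57, comes instead from the regression model):

```python
# Unadjusted difference-in-difference for OP 90-day retention,
# using the Table 1 rates (percent of admissions retained 90+ days).
ic_pre, ic_post = 42.3, 43.2    # incentive group, pre/post contract
nic_pre, nic_post = 39.9, 44.5  # non-incentive group, pre/post contract

# DID = (pre-post change in the IC group) - (pre-post change in the non-IC group)
did = (ic_post - ic_pre) - (nic_post - nic_pre)
print(round(did, 1))  # -3.7: the IC group gained less than the non-IC group
```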

We estimated the adjusted DID effects of IC for each outcome measure using multivariable logistic regression models with independent variables for period of admission (pre: FY2005–FY2007, post: FY2008–FY2011), IC agency (y/n), the interaction between the two, and our set of potential confounders. The interaction term represented the DID effect, estimating the differential increment of change in outcome over time among IC clients. To account for the clustering of observations within propensity score matching quintiles, we estimated the DID logistic regression models using generalized estimating equations (GEE) (Zeger, Liang, & Albert, 1988).

To illustrate how the incentivized contract changed the probability of a positive outcome, Figure 1 provides treatment/time period–based probability estimates, derived from our multivariable logistic regression models, for a hypothetical client with the following characteristics: male, aged 25–34, never married, HS/GED education, primary expected payer is not Medicaid, not referred via the CJS, primary drug is alcohol, comorbid mental disorder, no wraparound services, prior SUD treatment, and average SUD severity score. The DID derived from these probabilities estimates how much more likely such a client would be to have a positive outcome under the IC contract.
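The probability estimates behind Figure 1 follow the standard logistic transformation; a minimal sketch with placeholder coefficients (not the fitted model’s actual values):

```python
import math

def prob(intercept, b_ic, b_post, b_int, ic, post, covar_term=0.0):
    """Predicted probability from a DID logistic model for a fixed covariate
    profile; covar_term is the summed coefficient*value contribution of the
    hypothetical client's characteristics (zero here for simplicity)."""
    z = intercept + b_ic * ic + b_post * post + b_int * ic * post + covar_term
    return 1 / (1 + math.exp(-z))

# Placeholder coefficients on the log-odds scale (illustrative only).
coefs = dict(intercept=-0.3, b_ic=0.11, b_post=0.19, b_int=-0.22)
p = {(ic, post): prob(**coefs, ic=ic, post=post) for ic in (0, 1) for post in (0, 1)}

# DID on the probability scale: the IC group's pre-post change minus the
# non-IC group's pre-post change, as plotted in Figure 1.
did = (p[1, 1] - p[1, 0]) - (p[0, 1] - p[0, 0])
```

With a negative interaction coefficient, as in the OP retention model, the four predicted probabilities yield a negative DID for the hypothetical client.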

Figure 1: Pre-Post Rates by Group, by Retention and Completion Measures: adjusted estimates of percent of admissions for hypothetical “standard” client*

3. Results

3.1. Sample characteristics

The characteristics of the matched sample are as expected for clients in public addiction treatment. The average age at admission was close to 35, the majority were male, about two-thirds were unemployed at admission, and Medicaid was the primary payer for about half of admissions. About half of the sample had CJS involvement at admission. Alcohol was the most common primary substance (40–50% with slight variations by setting and IC group), followed by opioids (23–31% with slight variations by setting and IC group). About 70% had been in SUD treatment previously. The majority had a comorbid mental disorder.

3.2. Outpatient treatment

3.2.1. Attended 4+ OP treatment sessions

Two-thirds of the OP admissions to IC agencies attended 4 or more treatment sessions in both pre- and post-periods (65.7% and 66.1%, respectively) (Table 1). A similar two-thirds (66.2%) of OP admissions to non-IC agencies attended 4 or more treatment sessions in the pre-period, but this decreased to 58.7% in the post-period.

Table 1:

Treatment retention and completion by IC group and time period, matched sample, unadjusted rates.

                          OP admissions                        IOP admissions
  Group and        Attended 4+     Retention 90+      Attended 4+     Completed
  time period a    sessions        days               days            treatment
                   N        %      N        %         N       %       N        %
  IC b
   Pre             7,179   65.7     7,635   42.3      1,950  84.5     2,040   52.5
   Post           10,722   66.1    11,100   43.2      3,879  84.5     3,944   50.4
  Non-IC b
   Pre             7,142   66.2     7,635   39.9      1,845  87.5     2,041   58.5
   Post           10,457   58.7    11,100   44.5      3,822  82.8     3,945   57.0

a Pre = FY2005–2007, Post = FY2008–2011.

b IC = incentive group; Non-IC = non-incentive group.

Further, multivariate regression modeling found no difference in attending 4 or more treatment sessions between the IC and non-IC groups over time (as indicated by the “INC x POST” interaction term in Table 2, AOR=1.28, DID=5.9%, p=.1893). Individual characteristics of age, gender, marital status, education, Medicaid, CJS referral, receipt of wraparound services, prior SUD treatment, and SUD severity were all significant predictors of attending 4+ sessions. Figure 1a offers a graphical representation of the probabilities of a positive outcome, pre/post and by IC (solid line) and non-IC (dotted line) group, for the hypothetical client described in Methods (Section 2.4).

Table 2:

Logistic regression results and difference-in-difference estimates, outpatient admissions.

Attended 4+ sessions (Engagement) 90+ days in treatment (Retention)

Characteristic* AOR Lower CL Upper CL p AOR Lower CL Upper CL p

IC*Post 1.28 0.88 1.86 .1893 0.80 0.71 0.91 .0003
Incentive Group (IC) 0.97 0.82 1.15 .7638 1.12 1.07 1.16 <.0001
Admission in Post Period (Post) 0.76 0.55 1.04 .0823 1.21 1.13 1.28 <.0001

Age (referent = 18–24)
 25–34 1.13 1.06 1.20 .0001 1.23 1.13 1.34 <.0001
 35–44 1.20 1.08 1.33 .0005 1.46 1.40 1.52 <.0001
 45+ 1.48 1.35 1.61 <.0001 1.58 1.43 1.73 <.0001
Female 0.86 0.82 0.89 <.0001 0.99 0.93 1.05 .7617
Marital status (ref = married)
 Never married 0.89 0.80 0.99 .0302 0.93 0.88 0.99 .0276
 Separated, divorced, widowed 0.84 0.77 0.92 .0002 0.91 0.83 1.01 .0722
Education (ref = HS grad/GED)
 Less than high school grad/GED 0.85 0.77 0.93 .0003 0.96 0.86 1.06 .4280
 Some college 1.05 0.95 1.15 .3163 1.08 1.02 1.15 .0100
Medicaid as expected payment source 1.18 1.10 1.26 <.0001 1.30 1.16 1.45 <.0001
Criminal justice system as primary referral source 1.48 1.34 1.63 <.0001 1.52 1.41 1.65 <.0001
Primary substance (ref = alcohol)
 Opioids 0.96 0.90 1.03 .2725 1.49 1.38 1.62 <.0001
 Other 0.98 0.89 1.08 .7324 1.04 0.97 1.12 .2531
Comorbid mental disorder 1.05 0.99 1.11 .0972 1.04 0.97 1.11 .3104
Received wrap-around services during tx 2.30 2.17 2.44 <.0001 1.94 1.78 2.12 <.0001
Prior SUD treatment 1.13 1.03 1.24 .0115 1.06 0.97 1.16 .2272
SUD severity 0.91 0.85 0.98 .0082 0.90 0.85 0.95 <.0001

Difference-in-difference estimate (p) 5.92 .1893 −4.57 .0003
* Individual characteristics at admission; AOR = adjusted odds ratio

3.2.2. Stayed in OP treatment at least 90 days

More than 40% of OP admissions in the IC group stayed in treatment at least 90 days in both periods (42.3% and 43.2%, respectively) (Table 1). The non-IC group was slightly lower in the pre-period (39.9%) but increased to 44.5% in the post-period.

Contrary to our hypothesis, the multivariate regression model (Table 2) showed a significant difference between the groups over time, with the IC group in the post-period less likely to have stayed in treatment 90+ days (AOR=0.80, DID = –4.6%, p=.0003). Individual characteristics of age, marital status, education, Medicaid, CJS referral, primary substance, receipt of wraparound services, comorbid mental disorder and SUD severity were significant. Figure 1b illustrates the four probabilities resulting in a DID of –4.7% for our hypothetical client.

3.3. Intensive outpatient treatment

3.3.1. Attended 4+ IOP treatment days

A high proportion of IOP admissions attended 4 or more days of treatment, with 84.5% of the IC group doing so in both periods (Table 1). The non-IC group also had high rates, with 87.5% in the pre-period, and 82.8% in the post-period.

This was the only one of the four outcomes for which the multivariate regression model showed a significant positive DID effect, as hypothesized (Table 3, AOR=1.52, DID=5.5%, p=.0033), indicating that the increment of change over time was significantly higher for the IC group than for the non-IC group. Individual characteristics of age, marital status, education, Medicaid, CJS referral, comorbid mental disorder, and SUD severity were significant. As before, Figure 1c illustrates the four probabilities resulting in a DID of 5.5% for our hypothetical client.

Table 3:

Logistic regression results and difference-in-difference estimates, intensive outpatient admissions.

Attended 4+ days (Engagement) Completed treatment (Completion)

Characteristic* AOR Lower CL Upper CL p AOR Lower CL Upper CL p

IC*Post 1.52 1.15 2.01 .0033 1.12 0.79 1.58 .5332
Incentive Group (IC) 0.77 0.63 0.95 .0148 0.77 0.59 0.99 .0411
Admission in Post Period (Post) 0.70 0.53 0.92 .0107 0.91 0.72 1.17 .4729

Age (referent = 18–24)
 25–34 1.13 0.96 1.33 .1531 1.27 1.13 1.43 <.0001
 35–44 1.13 1.01 1.26 .0329 1.32 1.12 1.55 .0009
 45+ 1.36 1.16 1.58 <.0001 1.61 1.34 1.94 <.0001
Female 0.95 0.83 1.10 .5189 1.05 0.95 1.15 .3254
Marital status (ref = married)
 Never married 0.89 0.77 1.02 .0990 0.80 0.71 0.90 .0002
 Separated, divorced, widowed 0.77 0.67 0.89 .0005 0.84 0.77 0.91 <.0001
Education (ref = HS grad/GED)
 Less than high school grad/GED 0.84 0.77 0.91 <.0001 0.82 0.73 0.93 .0013
 Some college 1.04 0.91 1.18 .5969 1.15 1.01 1.30 .0326
Medicaid as expected payment source 0.86 0.75 0.99 .0313 0.76 0.70 0.84 <.0001
Criminal justice system as primary referral source 1.47 1.15 1.89 .0022 1.56 1.38 1.77 <.0001
Primary substance (ref = alcohol)
 Opioids 1.05 0.92 1.19 .5050 0.99 0.88 1.11 .8131
 Other 0.93 0.82 1.06 .2971 0.98 0.87 1.09 .6538
Comorbid mental disorder 0.87 0.81 0.94 .0002 0.70 0.62 0.78 <.0001
Received wrap-around services during tx 1.23 0.90 1.68 .1917 1.57 1.39 1.77 <.0001
Prior SUD treatment 0.88 0.70 1.10 .2692 0.77 0.67 0.88 .0001
SUD severity 0.92 0.89 0.95 <.0001 0.89 0.85 0.94 <.0001

Difference-in-difference estimate (p) 5.52 .0033 2.74 .5332
* Individual characteristics at admission; AOR = adjusted odds ratio

3.3.2. Completed IOP treatment

About half of the IC group completed intensive outpatient treatment in both periods (52.5% and 50.4%, respectively), compared to about 57% of the non-IC group (58.5% and 57.0%, respectively) (Table 1). Our multivariate regression model showed no difference between the groups over time (Table 3, AOR=1.12, DID=2.7%, p=.5332). Individual characteristics of age, marital status, education, Medicaid, CJS referral, comorbid mental disorder, wraparound services, prior SUD treatment, and SUD severity were significant. Figure 1d illustrates the probabilities resulting in a DID of 2.7% for our hypothetical client.

4. Discussion

This quasi-experimental study relied on a natural experiment afforded by the implementation of an incentive-based contract alongside comparable licensed SUD treatment programs that were not incentivized. Our methods were rigorous: we used propensity score matching to adjust for potential differences between clients who attended IC and non-IC programs, reducing the likelihood of a biased sample. The results show little to no impact of the incentive-based contract on the treatment engagement, retention, and completion measures. Prior analyses had similar findings for the incentivized measures of access under the same incentive-based contract in Maine (M. T. Stewart et al., 2018).

Only for treatment engagement in IOP was there a significant finding in the hypothesized positive direction. However, this finding reflects stability in the IC group rather than improvement over time, in contrast to the decline we saw in the non-IC group. While the OP 90-day retention measure was also significant, the difference in trends ran in the wrong direction, suggesting that the non-IC group improved over time and the IC group did not. Neither OP treatment engagement nor IOP treatment completion differed by incentive group. Further, all estimated differences were small; thus the effects, even for the significant finding in the expected direction, were minimal. These results show that the incentivized contract was not associated with improvement in treatment engagement or retention, adding to the body of evidence that finds limited or null results for VBP in SUD treatment programs.

Research has identified treatment engagement and retention as appropriate process measures for VBP programs, as early indicators of successful SUD treatment outcomes. By definition, engagement and retention require an interaction with a treatment program and a provider, thus we might reasonably expect that programs and providers may be able to affect these measures if they make organizational or practice changes (Hoffman et al., 2011; Institute of Medicine, 2001, 2006; McCarty et al., 2007). On the surface, it seems that program and provider efforts could drive engagement and retention, such as appointment reminders or flexible scheduling. Yet many factors are likely to influence if and how incentives work, including the role of nonmutable characteristics of the client, provider, and program; external forces that determine capability to remain engaged in treatment; and competing demands on providers and programs.

As our findings show, and consistent with many other studies of treatment retention, client factors such as age, gender, and insurance status may primarily drive whether someone returns to treatment early on or stays in treatment. This highlights that individual-level “predisposing” and “enabling” resources are essential to how people decide whether to seek care (Anderson, 1995), and presumably whether they remain in care, even when considering aspects of the treatment system and external environment. Further, client preferences related to therapeutic alliance (Meier, Barrowclough, & Donmall, 2005) or aspects of the treatment program itself may drive the decision to engage in or continue treatment. Longer retention and treatment completion may depend on external factors as well, including other obligations such as employment or childcare, out-of-pocket costs, how long insurance will pay for treatment, treatment accessibility, and transportation availability.

VBP relies on the presumption that organizations and providers can change their approaches to meet the benchmarks for a given measure. If individual characteristics drive treatment retention, it seems unlikely that an incentive will lead providers to implement operational changes with a demonstrable impact. With multiple factors related to treatment retention, programs and providers would need to address multiple elements together to increase the likelihood of program-wide changes (McCarty et al., 2007). However, we do not know how much information about the incentives the state communicated to individual providers versus the program itself. In addition, heterogeneity of the client population (i.e., case mix) makes it hard for a program to change retention across the full caseload (R. E. Stewart et al., 2017). Other studies have demonstrated the difficulty of a provider affecting client retention, reflecting these same concerns (Garnick et al., 2009). A review of 15 studies that included length of stay as a VBP measure identified 10 positive results related to incentives, but 6 with negative results (R. E. Stewart et al., 2017). More broadly, VBP efforts in substance use treatment have difficulty meeting key criteria for success (Hodgkin et al., 2020).

Systems changes such as VBP are not implemented in a vacuum. The success of such efforts is likely to reflect the bandwidth that providers and programs have to focus on new endeavors, the importance of the funding to their bottom line, and forces beyond their immediate control (Davis, Torres, Nguyen, Stewart, & Reif, 2019). Organizational readiness to change also drives treatment programs’ ability to innovate (Kelly, Hegarty, Barry, Dyer, & Horgan, 2017; Simpson, Joe, & Rowan-Szal, 2007). Innovation requires time and attention and, particularly with programs such as Maine’s incentive-based contract that allow flexibility in how to respond, a fair amount of insight and new ways of thinking about what might make a difference in client engagement and retention.

Maine introduced its VBP approach during a time of societal and state-level strain, including the Great Recession of 2007–2009, Maine's contraction of Medicaid services and enrollees, and the opioid crisis, as well as ongoing difficulties maintaining services in a poor, rural, and geographically dispersed state. The incentive-based contract may not have been a high priority at a time when program mergers and closures were not uncommon. In addition, treatment programs have many funders; an incentive contract under a single funder, with a reward or penalty that is a small portion of a program's overall budget, may not be enough to elicit behavior change (Hodgkin et al., 2020; M. T. Stewart et al., 2018). Further, because addiction treatment programs may operate on limited resources, and operational change is likely to require investment, relatively small incentives are likely to be insufficient to encourage changes in program operations (McLellan, 2011).

We should note some limitations to this study. Although the control group was quite similar to the treatment group, and we used propensity score matching to reduce differences, the control group programs were likely different in unobserved ways, since they did not receive state funding for SUD treatment. We had no data on other characteristics of the programs, such as their approach to engaging with or discharging clients. The state designed the incentive-based contract with provider input about measures and benchmarks, but it did not adjust either the design or the benchmarks over time. VBP is likely to work best when it rewards both meeting a benchmark and improving from baseline, whereas the Maine approach used only the benchmark. Since the state implemented the performance contract in 2008, these data are somewhat dated. We do not expect that the hypothesized underlying relationship between incentives and performance has changed since 2008, but other factors may have changed. For example, with a larger proportion of clients who are opioid dependent, the levers available to providers to address engagement and retention may differ. We could not examine this hypothesis, although we did include opioids as a primary substance in the models.

Despite these limitations, this study's quasi-experimental design, which examined trends before and after implementation of the incentive-based contract to account for secular changes, was a strength. These findings suggest that treatment retention is complex and, as such, may not be an ideal measure for VBP; VBP efforts in SUD treatment remain a challenge.

Highlights.

  • Substance use treatment engagement and retention are linked to improved outcomes.

  • High dropout rates suggest that innovation is needed to improve outcomes.

  • Value-based purchasing in substance use treatment has limited but mixed results.

  • Quasi-experimental study of the effectiveness of an incentive-based contract.

  • No effect of the incentive-based contract on treatment engagement or retention.

Acknowledgments:

We are grateful for the research assistance of AnMarie Nguyen. The study team would like to acknowledge and thank the Maine substance use treatment providers who participated in this study, as well as the Maine Office of Substance Abuse and Mental Health Services, Ruth Blauer, and the Maine Association of Substance Abuse Providers. Without the cooperation and support of these individuals and organizations, this research would not have been possible. Earlier versions of this paper were presented at AcademyHealth, the Addiction Health Services Research conference, the College on Problems of Drug Dependence, and the Research Society on Alcoholism.

Funding: This work was supported by the National Institute on Drug Abuse at the National Institutes of Health [grant numbers R01 DA033402 and P30 DA035772]. The funder had no role in study design; collection, analysis and interpretation of data; writing of the report; or the decision to submit the article for publication.

REFERENCES

  1. Andersen RM (1995). Revisiting the behavioral model and access to medical care: Does it matter? Journal of Health and Social Behavior, 36(1), 1–10.
  2. Brucker D, & Stewart MT (2011). Performance-based contracting and differences in substance abuse treatment access and retention measures. Journal of Behavioral Health Services & Research, 38(3), 383–397.
  3. Carlo AD, Benson NM, Chu F, & Busch AB (2020). Association of alternative payment and delivery models with outcomes for mental health and substance use disorders: a systematic review. JAMA Netw Open, 3(7), e207401.
  4. CMS. (2020). Overview of Substance Use Disorder Measures in the 2020 Adult and Health Home Core Sets. Retrieved from: https://www.medicaid.gov/medicaid/quality-ofcare/downloads/performance-measurement/factsheet-sud-adult-core-set.pdf. (Accessed May 19 2020).
  5. Commons M, McGuire TG, & Riordan MH (1997). Performance contracting for substance abuse treatment. Health Services Research, 32(5), 631–650.
  6. Davis MT, Torres ME, Nguyen A, Stewart MT, & Reif S. (2019). Improving quality and performance in substance use treatment programs: What is being done and why is it so hard? Journal of Social Work, epub ahead of print, 13 Aug 2019.
  7. Dunigan R, Acevedo A, Campbell K, Garnick DW, Horgan CM, Huber A, … Ritter GA (2014). Engagement in outpatient substance abuse treatment and employment outcomes. Journal of Behavioral Health Services & Research, 41(1), 20–36.
  8. Garner BR, Godley MD, Funk RR, Lee MT, & Garnick DW (2010). The Washington Circle continuity of care performance measure: predictive validity with adolescents discharged from residential treatment. Journal of Substance Abuse Treatment, 38(1), 3–11.
  9. Garner BR, Godley SH, Dennis ML, Hunter BD, Bair CM, & Godley MD (2012). Using pay for performance to improve treatment implementation for adolescent substance use disorders: results from a cluster randomized trial. Archives of Pediatric and Adolescent Medicine, 166(10), 938–944.
  10. Garnick DW, Horgan CM, Acevedo A, Lee MT, Panas L, Ritter GA, … Bean-Mortinson J. (2017). Influencing quality of outpatient SUD care: Implementation of alerts and incentives in Washington State. Journal of Substance Abuse Treatment, 82, 93–101.
  11. Garnick DW, Horgan CM, Acevedo A, Lee MT, Panas L, Ritter GA, … Wright D. (2014). Criminal justice outcomes after engagement in outpatient substance abuse treatment. Journal of Substance Abuse Treatment, 46(3), 295–305.
  12. Garnick DW, Lee MT, Horgan CM, & Acevedo A. (2009). Adapting Washington Circle performance measures for public sector substance abuse treatment systems. Journal of Substance Abuse Treatment, 36(3), 265–277.
  13. Haley SJ, Dugosh KL, & Lynch KG (2011). Performance contracting to engage detoxification-only patients into continued rehabilitation. Journal of Substance Abuse Treatment, 40(2), 123–131.
  14. Harris AH, Humphreys K, Bowe T, Tiet Q, & Finney JW (2010). Does meeting the HEDIS substance abuse treatment engagement criterion predict patient outcomes? Journal of Behavioral Health Services & Research, 37(1), 25–39.
  15. Hodgkin D, Garnick DW, Horgan CM, Busch AB, Stewart MT, & Reif S. (2020). Is it feasible to pay specialty substance use disorder treatment programs based on patient outcomes? Drug and Alcohol Dependence, 206, 107735.
  16. Hoffman KA, Ford JH, Tillotson CJ, Choi D, & McCarty D. (2011). Days to treatment and early retention among patients in treatment for alcohol and drug disorders. Addictive Behavior, 36(6), 643–647.
  17. Institute of Medicine. (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press.
  18. Institute of Medicine. (2006). Improving the Quality of Health Care for Mental and Substance Use Conditions. Washington, DC: National Academies Press.
  19. Institute of Medicine. (2007). Rewarding Provider Performance - Aligning Incentives in Medicare. Washington, DC: National Academies Press.
  20. Jha AK (2013). Time to get serious about pay for performance. JAMA, 309(4), 347–348.
  21. Kelly P, Hegarty J, Barry J, Dyer KR, & Horgan A. (2017). A systematic review of the relationship between staff perceptions of organizational readiness to change and the process of innovation adoption in substance misuse treatment programs. Journal of Substance Abuse Treatment, 80, 6–25.
  22. Klein AA, Lloyd KD, & Asper LM (2016). The feasibility of implementing a pay-for-performance program in the treatment of alcohol/drug addiction: Implementation and initial results. Journal of Hospital Administration, 5(5), 1.
  23. Laudet AB, Stanick V, & Sands B. (2009). What could the program have done differently? A qualitative examination of reasons for leaving outpatient treatment. Journal of Substance Abuse Treatment, 37(2), 182–190.
  24. Liu J, Storfer-Isser A, Mark TL, Oberlander T, Horgan C, Garnick DW, & Scholle SH (2020). Access to and engagement in substance use disorder treatment over time. Psychiatric Services, epub 24 Feb 2020.
  25. Lu M. (1999). Separating the true effect from gaming in incentive-based contracts in health care. Journal of Economic Management Strategy, 8(3), 383–431.
  26. Lu M, Ma CT, & Yuan L. (2003). Risk selection and matching in performance-based contracting. Health Economics, 12(5), 339–354.
  27. Markovitz AA, & Ryan AM (2017). Pay-for-performance: disappointing results or masked heterogeneity? Medical Care Research and Review, 74(1), 3–78.
  28. McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, … Cotter F. (2007). The Network for the Improvement of Addiction Treatment (NIATx): enhancing access and retention. Drug and Alcohol Dependence, 88(2–3), 138–145.
  29. McCorry F, Garnick DW, Bartlett J, Cotter F, & Chalk M. (2000). Developing performance measures for alcohol and other drug services in managed care plans. Washington Circle Group. Joint Commission Journal of Quality Improvement, 26(11), 633–643.
  30. McLellan AT (2011). Considerations on performance contracting: a purchaser's perspective. Addiction, 106(10), 1731–1732.
  31. Meier PS, Barrowclough C, & Donmall MC (2005). The role of the therapeutic alliance in the treatment of substance misuse: a critical review of the literature. Addiction, 100(3), 304–316.
  32. Meyer BD (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statistics, 13(2), 151–161.
  33. NCQA. (2020). Initiation and Engagement of Alcohol and Other Drug Abuse or Dependence Treatment (IET). Retrieved from: https://www.ncqa.org/hedis/measures/initiation-and-engagement-of-alcohol-and-other-drug-abuse-or-dependence-treatment/. (Accessed May 19 2020).
  34. Paddock SM, Hepner KA, Hudson T, Ounpraseuth S, Schrader AM, Sullivan G, & Watkins KE (2017). Association between process-based quality indicators and mortality for patients with substance use disorders. Journal of Studies on Alcohol and Drugs, 78(4), 588–596.
  35. Roosa M, Scripa JS, Zastowny TR, & Ford JH, 2nd. (2011). Using a NIATx based local learning collaborative for performance improvement. Evaluation and Program Planning, 34(4), 390–398.
  36. Rosenthal MB, Fernandopulle R, Song HR, & Landon B. (2004). Paying for quality: providers' incentives for quality improvement. Health Affairs (Millwood), 23(2), 127–141.
  37. Shen Y. (2003). Selection incentives in a performance-based contracting system. Health Services Research, 38(2), 535–552.
  38. Simpson DD (2004). A conceptual framework for drug treatment process and outcomes. Journal of Substance Abuse Treatment, 27(2), 99–121.
  39. Simpson DD, Joe GW, & Rowan-Szal GA (2007). Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment, 33(2), 201–209.
  40. Stewart MT, Horgan CM, Garnick DW, Ritter G, & McLellan AT (2013). Performance contracting and quality improvement in outpatient treatment: effects on waiting time and length of stay. Journal of Substance Abuse Treatment, 44(1), 27–33.
  41. Stewart MT, Reif S, Dana BM, Nguyen A, Torres ME, Davis MT, … Horgan CM (2018). Incentives in a public addiction treatment system: Effects on waiting time and selection. Journal of Substance Abuse Treatment, 95, 1–8.
  42. Stewart RE, Lareef I, Hadley TR, & Mandell DS (2017). Can we pay for performance in behavioral health care? Psychiatric Services, 68(2), 109–111.
  43. Stuart EA, Barry CL, Donohue JM, Greenfield SF, Duckworth K, Song Z, … Huskamp HA (2017). Effects of accountable care and payment reform on substance use disorder treatment: evidence from the initial 3 years of the alternative quality contract. Addiction, 112(1), 124–133.
  44. Unutzer J, Chan YF, Hafer E, Knaster J, Shields A, Powers D, & Veith RC (2012). Quality improvement with pay-for-performance incentives in integrated behavioral health care. American Journal of Public Health, 102(6), e41–e45.
  45. Vandrey R, Stitzer ML, Acquavita SP, & Quinn-Stabile P. (2011). Pay-for-performance in a community substance abuse clinic. Journal of Substance Abuse Treatment, 41(2), 193–200.
  46. Yang Y, Knight K, Joe GW, Rowan-Szal GA, Lehman WEK, & Flynn PM (2013). The influence of client risks and treatment engagement on recidivism. Journal of Offender Rehabilitation, 52(8), 544–564.
  47. Zeger SL, Liang K-Y, & Albert PS (1988). Models for longitudinal data: a generalized estimating equation approach. Biometrics, 1049–1060.
  48. Zhang Z, Friedmann PD, & Gerstein DR (2003). Does retention matter? Treatment duration and improvement in drug use. Addiction, 98(5), 673–684.