Author manuscript; available in PMC 2019 Dec 1. Published in final edited form as: J Subst Abuse Treat. 2018 Sep 7;95:1–8. doi: 10.1016/j.jsat.2018.09.002

Incentives in a Public Addiction Treatment System: Effects on Waiting Time and Selection

Maureen T Stewart a,*, Sharon Reif a, Beth Dana a, AnMarie Nguyen a, Maria Torres a,b, Margot Davis a, Grant Ritter a, Dominic Hodgkin a, Constance M Horgan a
PMCID: PMC6324836  NIHMSID: NIHMS1507248  PMID: 30352665

Abstract

Program-level financial incentives are used by some payers as a tool to improve the quality of substance use treatment. However, evidence of effectiveness is mixed, and performance contracts may have unintended consequences such as creating barriers for more challenging clients who are less likely to meet benchmarks. This study investigates the impact of a performance contract on waiting time for substance use treatment and on client selection. Admission and discharge data from publicly funded Maine outpatient (OP) and intensive outpatient (IOP) substance use treatment programs (N=38,932 clients) were used. In a quasi-experimental pre-post design, admission data from incentivized (IC) and non-incentivized (non-IC) programs in the pre-period (FY 2005–2007) were compared to the post-period (FY 2008–2012) using propensity score matching and multivariate difference-in-differences regression. Dependent variables were waiting time (incentivized) and client selection (severity: history of mental disorders and substance use severity, not incentivized). Despite financial incentives designed to reduce waiting time for substance use treatment among state-funded outpatient programs, average waiting time for treatment increased in the post-period for both IC and non-IC groups, as did client severity. There were no significant differences in waiting time between IC and non-IC groups over time. Increases in client severity over time, with no group differences, indicate that programs did not restrict access for more challenging clients. Adequate funding and other approaches to improve quality may be beneficial.

Keywords: Value-based purchasing, pay-for-performance, performance contracting, substance use treatment, quality of care, access to care

1. Introduction

Substance use disorders are a top cause of death and disability in the United States. Many people do not access treatment (SAMHSA, 2017) and for those who do, about half have relapsed within 6 months (National Institute on Drug Abuse, 2018). Improving quality of substance use treatment is imperative. One strategy is to align payment with performance, an approach that has been implemented in medical care (Centers for Medicare and Medicaid Services, 2007, 2016; Institute of Medicine, 2007) and to a lesser extent in substance use treatment (Bremer, Scholle, Keyser, Houtsinger, & Pincus, 2008; R. E. Stewart, Lareef, Hadley, & Mandell, 2017).

Performance contracts are one tool that aims to align providers’ incentives to deliver quality care and maximize income with purchasers’ goals to control costs while providing quality care (Custers, Hurley, Klazinga, & Brown, 2008). This is a shift from traditional payment systems for SUD treatment, where state agencies provide a fixed amount of funding to treatment agencies (commonly called block grants), and from approaches that insurers might use such as fee-for-service, both of which lack incentives for delivery of high-quality care (Robinson, 2001). Block grants are fixed payments based on volume and are not linked to quality of care. In contrast, a performance contract links payment at least partially to performance. In implementing performance contracts, it is important to ensure that they do not result in unintended consequences that limit access to care, such as selection of individuals who will be less expensive to care for or more likely to have better outcomes (Charland, 2007).

Performance contracting programs vary in many ways, including how incentives are targeted (e.g., programs, clinicians, teams); how performance is measured (e.g., meeting a target, improving from baseline, a combination of the two); and how incentives are structured (e.g., bonus payment, penalty). This variation likely contributes to mixed findings regarding effectiveness of financial incentives (Markovitz & Ryan, 2017); thus, it is important to study different types of incentive programs to identify best practices.

Studies of financial incentives in the medical field have shown mixed results with modest improvement on some measures and not others (Damberg et al., 2014; Eijkenaar, Emmert, Scheppach, & Schöffski, 2013; Markovitz & Ryan, 2016; Van Herck et al., 2010). While performance contracting in substance use treatment is not common, a few studies have found mixed results (Bremer et al., 2008). As in the medical field, it is difficult to summarize studies of incentives in SUD treatment because of variation in treatment setting, incentive design, populations and measures.

Contrary to expectations, early findings from a non-randomized pilot test of pay-for-performance in UK drug treatment programs found that patients were less likely to enter and complete treatment if they went to providers that volunteered for the pilot study, compared to non-participating providers (Mason et al., 2015). A study of an early initiative in Maine substance use treatment used data self-reported by programs, without a comparison group, and found evidence of reduced drug use following implementation of a performance contracting program, particularly in programs more dependent on the state for funding (Commons, McGuire, & Riordan, 1997). Later analyses revealed possible unintended consequences, suggesting that programs selected individuals who were more likely to meet the performance measures by admitting less severely ill clients (Shen, 2003), although a subsequent study of the same data suggested these changes in admission were due to appropriately referring more severely ill patients to higher levels of care (Lu, Ma, & Yuana, 2003).

Mixed results were identified in two performance contracting programs implemented in Delaware: one with incentives targeted to detoxification programs and one targeted to outpatient treatment programs. The analysis of the Delaware detox initiative identified improvement in detox occupancy and some improvement in transition to treatment, but no effect on retention in treatment following detox (Haley, Dugosh, & Lynch, 2011). Beyond the methodological limitations of a pre-post design without controls, generalizability is limited in that the detox programs did not taper patients with opioid addiction to pharmacotherapy, the evidence-based treatment for opioid use disorder. The Delaware initiative in outpatient treatment, which provided both program-level incentives and penalties, was associated with increased outpatient utilization (McLellan, Kemp, Brooks, & Carise, 2008), as well as shorter waiting time for treatment, improved engagement, and longer lengths of stay, suggesting improved quality of care (M. T. Stewart, Horgan, Garnick, Ritter, & McLellan, 2013).

Further evidence of a positive effect from performance contracting was seen in a study of adolescent substance use treatment, where clients were more likely to initiate services in programs exposed to financial incentives (Lee et al., 2012). In another randomized trial, financial incentives were found to improve implementation of evidence-based practices in adolescent substance use treatment (Garner et al., 2012). On the other hand, a randomized trial of incentives and provider alerts in Washington State addiction treatment programs did not improve quality of care, and qualitative findings indicate that providers may have been overwhelmed by other changes in the environment and therefore unable to attend to the incentive program (Garnick et al., 2017).

Despite the lack of conclusive research on performance contracting, initiatives to develop and expand these approaches continue. There is much to learn about the different settings and features of payment design that are likely to influence results (Conrad, 2016).

1.1. Access to Substance Use Treatment

Timely access to evidence-based SUD treatment is critical to improving health. Shorter waiting times have been shown to increase the likelihood of attendance in substance use treatment (Claus & Kindleberger, 2002; Festinger, Lamb, Kountz, Kirby, & Marlowe, 1995; Simpson, Joe, & Brown, 1997; Stasiewicz & Stalker, 1999). Despite its importance, access to care is often limited by long waiting times (Guerrero, Fenwick, Kong, Grella, & D’Aunno, 2015; Pollini, McCall, Mehta, Vlahov, & Strathdee, 2006; Stasiewicz & Stalker, 1999), which are associated with adverse events including overdose (Pollini et al., 2006). This is particularly problematic for a disorder in which most people do not seek treatment on their own (SAMHSA, 2015); it is therefore key for treatment to be available when individuals are open to it. Treatment delays may result in a reduction in treatment initiation or adherence.

Performance contracting may improve access to SUD treatment because many of the activities necessary to improve access, such as changing admissions systems and expanding capacity, are within the control of treatment organizations.

1.2. Performance contracting in Maine’s public addiction treatment system

After the first Maine performance contract had mixed results (Commons et al., 1997; Lu et al., 2003), the state revised its approach and implemented a second-generation performance contract in 2007 for outpatient and intensive outpatient non-methadone programs that received public funds. In the first Maine performance contract, outpatient, residential, and detoxification programs were evaluated on 24 measures related to utilization, outcomes, and special populations. Financial rewards were promised; penalties were not used. Data were reported annually by individual programs for state-funded clients only. Timely feedback on performance was not available to the programs. Program performance was used to refine contract allocations and payment types, and low-performing programs submitted corrective action plans, but payments were not immediately adjusted (Commons et al., 1997). In the second-generation performance contract, the state made a concerted effort to eliminate problems identified in the first-generation program by using fewer measures, making more timely payments so that the incentives would be salient, and putting larger incentives on utilization rates to discourage client selection.

Preliminary research on this second-generation performance contract found no effect on waiting time, client engagement, or length of stay (Brucker & Stewart, 2011). However, findings were limited because waiting time data were only included for 2008, the first year of the contract; waiting time was measured as a binary variable indicating whether the access standards were met, not as length of time; and the comparison admissions were not matched on patient characteristics. Therefore, the Brucker study was not able to determine whether statewide performance on waiting time, in the absence of incentives, changed over time or differentially by program participation in the performance contract.

This paper takes a more rigorous approach to this question by examining whether the second Maine performance contract achieved its intended outcomes, using a matched control group and methods that control for historical changes via a pre-post difference-in-differences design. Further, this paper explores the risk of unintended consequences of the performance contract. Specifically, the paper addresses the following research questions: (1) Did the Maine performance contract result in shorter wait times to access SUD treatment (an incentivized measure) in incentivized agencies? (2) Did the Maine performance contract result in cherry-picking or client selection, reducing the proportion of clients with greater mental health or substance use severity, as a proxy for clients who are less likely to meet the incentivized measures?

2. Methods

2.1. Setting

The Maine performance contract included five performance measures, each of which had penalty and incentive thresholds, based on a treatment program’s FY2006 performance on the same measures and its expected ability to improve over time (Table 1). The full performance contract approach is described here, although this paper focuses only on the waiting time measure, which is conceptually distinct from retention. The incentivized measures included two measures of waiting time and two retention measures that have been shown to be linked to improved outcomes (Simpson et al., 1997). The contract also incentivized units of service delivered for all outpatient addiction treatment services, not just state-contracted units. The units-of-service goals were determined prior to the start of the contract, and all programs were expected to meet that goal (i.e., 90–100% of expected units of service) for the base contract amount. It was possible to exceed that goal (i.e., >100%), in which case the program would receive a bonus for increasing the number of clients served.

Table 1:

Maine performance contract requirements

| Contract requirement | Penalty target | Incentive target | Payment amount** |
| --- | --- | --- | --- |
| Units of service (contracted # admissions) | <90% | >100% | ±5% |
| Access (days waiting): 1st contact to face-to-face session | >5 days | <2 days | ±1% |
| Access (days waiting): Assessment to 1st treatment | >14 days | <7 days | ±1% |
| Retention (% admissions): Attend 4+ sessions | <50% of clients | >65% | ±1% |
| Retention (% admissions): Stay 90 days or more* | <30% of clients | >40% | ±1% |

** Payment is a percentage of the base contract amount.

* Stay 90 days for OP; treatment completion for IOP.

The performance contract addressed the potential for decreased performance by having both a reward for performance that met the contract goal and a penalty when a minimum performance standard was not met. Programs could earn or lose up to 9% of their base payment for reaching or failing to reach targets.
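To make these payment mechanics concrete, the sketch below applies the Table 1 thresholds to a hypothetical program’s performance. The thresholds come from Table 1; the function name, inputs, and the assumption that the five adjustments simply sum are illustrative, since the contract’s exact calculation rules are not spelled out here.

```python
# Illustrative sketch of the Maine contract's payment adjustment (Table 1).
# Thresholds are from the table; names and aggregation rules are assumptions.

def payment_adjustment(units_pct, days_contact_to_session, days_assess_to_tx,
                       pct_attend_4plus, pct_stay_90):
    """Return the adjustment as a fraction of the base contract amount."""
    adj = 0.0
    # Units of service: +/-5% of base, the largest-weighted measure.
    if units_pct > 1.00:
        adj += 0.05
    elif units_pct < 0.90:
        adj -= 0.05
    # Access: first contact to face-to-face session (+/-1%).
    if days_contact_to_session < 2:
        adj += 0.01
    elif days_contact_to_session > 5:
        adj -= 0.01
    # Access: assessment to first treatment (+/-1%).
    if days_assess_to_tx < 7:
        adj += 0.01
    elif days_assess_to_tx > 14:
        adj -= 0.01
    # Retention: attend 4+ sessions (+/-1%).
    if pct_attend_4plus > 0.65:
        adj += 0.01
    elif pct_attend_4plus < 0.50:
        adj -= 0.01
    # Retention: stay 90+ days (OP) / treatment completion (IOP) (+/-1%).
    if pct_stay_90 > 0.40:
        adj += 0.01
    elif pct_stay_90 < 0.30:
        adj -= 0.01
    return adj  # ranges from -0.09 to +0.09, matching the 9% cap in the text

# Example: exceeds the units goal but misses both access targets.
print(payment_adjustment(1.02, 6, 15, 0.60, 0.35))  # 0.05 - 0.01 - 0.01 = 0.03
```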

The state chose utilization of all outpatient services (units of service) as the measure with the highest reward and penalty, to reduce the likelihood that programs might “game” their results. By measuring performance for all clients, the state removed the incentive to shift clients to a different payer (e.g. shifting from state-funded to Medicaid) or create other barriers for more challenging clients in order to exclude the client from the performance contract.

Performance was calculated separately for adult and adolescent clients; this study focuses on adult clients because only adults were included in the performance contract itself. Performance measures differed for outpatient and intensive outpatient programs; analyses here are conducted separately by program type. Contracts were renewed annually, performance calculated monthly, and payments made quarterly for the appropriate penalty or incentive. Although the original goal was to modify targets over time, changes were never made to the waiting time and retention targets.

The performance contract was required for all programs that received public funds through the federal Substance Abuse Treatment and Prevention (SATP) block grant, administered by the Maine Office of Substance Abuse and Mental Health Services (SAMHS). SAMHS also licenses and collects data from all programs in the state that provide substance use treatment and bill Medicaid for their services. This provided a research opportunity by allowing comparisons of clients in programs that were incentivized (IC group) and not incentivized (non-IC group) under the performance contract. While programs not under the performance contract may differ from those participating (e.g., by size and profit status), propensity score matching that included program characteristics was used to ensure the clients were similar in the two groups. In the first year of the performance contract, FY 2008, there were 18 incentivized programs; the median number of admissions per IC program was 103 (range 24–378). In FY 2011, the last year of our study, there were 16 incentivized programs; the median number of admissions per IC program was 118 (range 65–305).

2.2. Data source

The study used admission and discharge data from FY 2005 through FY 2011 that all licensed adult outpatient (OP) and intensive outpatient (IOP) substance use treatment programs in Maine are required to submit to the state. This time-frame captures the period three years before the performance contract went into effect (FY 2008) and four years following. Data are reported for all clients regardless of payer; clients were funded by the block grant, Medicaid, private-pay and self-pay.

2.3. Comparison group and propensity score matching

With participation in the performance contract determined by Maine well before this study was in place, the research design is quasi-experimental. The initial analytic file was developed by selecting all adult admissions in OP or IOP treatment and linking them to discharge records by program, client identifier, and admission date. The initial analytic sample thus consisted of 38,932 records: 24,721 from incentive programs (IC group) and 14,211 from non-incentive programs (non-IC group).

Since programs were not randomly assigned to the performance contract, it was hypothesized that the IC and non-IC groups might differ on program and client characteristics even before the introduction of incentives. To assess possible selection bias, groups were compared on program-level (program size, multi-site, profit status) and admission-level characteristics (demographics, type of substance use, use of selected services, and all dependent variables) in the pre-period. The sample sizes were large enough to yield small p-values for most comparisons. Therefore, the magnitude of the differences between the groups was assessed using Cohen’s d effect sizes (Chinn, 2000; Cohen, 1988). An imbalance was defined as an absolute value of Cohen’s d greater than 0.20, which corresponds to the upper threshold of a small effect size (Cohen, 1988).
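A minimal sketch of this balance check follows (variable names are illustrative):

```python
import numpy as np

def cohens_d(x_ic, x_non_ic):
    """Cohen's d: standardized mean difference with a pooled SD (Cohen, 1988)."""
    n1, n2 = len(x_ic), len(x_non_ic)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(x_ic, ddof=1) +
                         (n2 - 1) * np.var(x_non_ic, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(x_ic) - np.mean(x_non_ic)) / pooled_sd

def imbalanced(x_ic, x_non_ic, threshold=0.20):
    # The paper flags |d| > 0.20, the upper bound of a "small" effect.
    # For characteristics summarized as odds ratios, Chinn (2000) gives the
    # conversion d = ln(OR) * sqrt(3) / pi.
    return abs(cohens_d(x_ic, x_non_ic)) > threshold
```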

In the pre-period, prior to matching, the IC and non-IC groups were imbalanced on all program-level characteristics and on several admission-level characteristics (OP: Medicaid, receipt of wrap-around services, medical care referred/provided, substance use severity score, and primary drug; IOP: legal involvement, treatment intensity, and substance use severity score). To address the imbalances, propensity score matching was used (D’Agostino, 1998). Independent variables were selected using forward stepwise selection (p<.25 to enter/stay in the model); the predictive ability of each model was evaluated using the c-statistic (OP: 0.70; IOP: 0.74). Only pre-period admissions were used to construct the propensity score model, and that model was then applied to the full sample to obtain a propensity score for each admission, i.e., the conditional probability of being in the IC group. Each IC admission was matched to one non-IC admission based on quintile of propensity score and admission year, using random sampling with replacement. All IC admissions were successfully matched, and good balance was achieved for all characteristics except age and IOP referral from criminal justice. See the appendix for a comparison of pre-period characteristics between groups before and after matching. The final analytic matched sample consisted of 26,722 OP and 12,210 IOP admissions.
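A sketch of this matching procedure, under stated assumptions, is shown below. The covariates in PS_FORMULA and all column names are hypothetical stand-ins for the stepwise-selected variables, and the sketch assumes every quintile-by-year cell contains at least one non-IC admission.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical covariates; the paper selected variables by forward stepwise selection.
PS_FORMULA = "ic ~ program_size + multi_site + for_profit + age + female + medicaid"

# 1. Fit the propensity model on pre-period admissions only (FY 2005-2007)...
ps_model = smf.logit(PS_FORMULA, data=df[df["period"] == "pre"]).fit()
# 2. ...then score every admission in the full sample; the score is the
#    conditional probability of being in the IC group.
df["pscore"] = ps_model.predict(df)
df["ps_quintile"] = pd.qcut(df["pscore"], 5, labels=False)

# 3. Match each IC admission to one non-IC admission drawn at random, with
#    replacement, from the same propensity-score quintile and admission year.
rng = np.random.default_rng(0)
non_ic = df[df["ic"] == 0]
matches = []
for (q, yr), ic_cell in df[df["ic"] == 1].groupby(["ps_quintile", "adm_year"]):
    pool = non_ic[(non_ic["ps_quintile"] == q) & (non_ic["adm_year"] == yr)]
    matches.append(pool.sample(len(ic_cell), replace=True, random_state=rng))
matched = pd.concat([df[df["ic"] == 1]] + matches)
```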

2.4. Dependent variables

The principal dependent variable was waiting time for treatment, a measure of access to care. The Maine performance contract used two measures of waiting time: number of days from first contact to intake, and number of days from intake to first treatment. Because agency policies differed in how these dates were coded, waiting time was calculated as the number of days from the client’s first contact with the program (by phone or in person) to the first day of treatment.
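In pandas terms, assuming hypothetical date columns, this is simply:

```python
# Days from first contact (phone or in person) to the first day of treatment;
# column names are hypothetical stand-ins for the administrative fields.
df["days_waiting"] = (df["first_treatment_date"] - df["first_contact_date"]).dt.days
```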

As noted earlier, programs may try to exclude more severely ill clients who might have a harder time meeting the benchmarks for the incentivized measures. Clients with mental illness and/or more severe substance use are less likely to attend appointments, so programs may try to admit fewer clients with these characteristics. Therefore, to obtain a fuller picture of the effect of the performance contract on access to care, selection issues were examined. The hypothesis was that IC programs might select less severely ill clients, who would be more likely to meet the incentivized measures, in order to receive incentive payments. To test this, the study examined two measures of client severity: mental illness/disorder history and substance use severity.

Mental illness/disorder history was operationalized as admissions that identified a co-occurring DSM-IV mental illness diagnosis, any outpatient mental health visit in the past 12 months, or any psychiatric hospitalization in the past two years.

The substance use severity measure is a validated composite index that captures multiple dimensions of substance use that are readily available in administrative data, including frequency and duration of substance use, use of “hard” drugs (e.g., cocaine, heroin, methamphetamine), intravenous drug use, employability, employment status, and income (McCamant, Zani, McFarland, & Gabriel, 2007; McFarland, Deck, McCamant, Gabriel, & Bigelow, 2005). Each dimension is given equal weight in calculating the substance use severity score, which ranges between zero and one, with higher scores denoting greater severity; specific severity levels were not constructed.
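A minimal sketch of constructing both severity measures follows. All column names are hypothetical, and the equal-weight averaging assumes each dimension has already been scaled to [0, 1], as in the cited validation work (McCamant et al., 2007).

```python
# Mental illness/disorder history: any of the three indicators above.
df["mi_history"] = ((df["dsm_iv_mh_dx"] == 1)
                    | (df["mh_outpatient_12mo"] == 1)
                    | (df["psych_hosp_2yr"] == 1)).astype(int)

# Substance use severity: equal-weight mean of the dimensions listed above,
# each pre-scaled to [0, 1]; higher scores denote greater severity.
severity_dims = ["use_frequency", "use_duration", "hard_drug_use",
                 "iv_drug_use", "employability", "unemployed", "low_income"]
df["su_severity"] = df[severity_dims].mean(axis=1)
```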

2.5. Independent variables

The regression models contained terms for period of admission (pre: FY 2005–2007; post: FY 2008–2012), IC group (yes/no), and the interaction between the two. The key independent variable is the interaction term. If this is statistically significant, it indicates an association between IC implementation and the outcome in question (a “difference in the differences”).
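Schematically (notation added here for clarity; shown for a linear outcome, with the count and binary outcomes using the same right-hand side on the log and logit scales per Section 2.7), the model for outcome $Y_{ij}$ of admission $i$ in program $j$ is:

```latex
Y_{ij} = \beta_0 + \beta_1\,\mathrm{IC}_j + \beta_2\,\mathrm{Post}_t
       + \beta_3\,(\mathrm{IC}_j \times \mathrm{Post}_t)
       + X_{ij}'\gamma + \varepsilon_{ij},
```

where $X_{ij}$ holds the covariates described in Section 2.6 and $\beta_3$ is the key interaction coefficient.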

2.6. Covariates

The analytic models controlled for potential confounding variables. Sociodemographic factors include age, gender, education, marital status, and unemployment at admission. Other covariates include health insurance, criminal justice system referral, receipt of wraparound services, and prior treatment episodes. Health insurance was defined as whether Medicaid was the primary payment source at admission. Criminal justice system (CJS) referrals identified admissions that were referred by correctional facilities, such as county jails, state/federal courts, probation offices, and drug courts. Wraparound services were indicated if childcare, transportation, employment, housing, financial, legal, academic, and/or vocational services were provided during treatment. Primary drug represents the substance that led to the current treatment admission and is classified into three categories: alcohol (reference group), opioids, and other drugs. The regression models also controlled for mental illness/disorder history and substance use severity score, as defined previously. When mental illness or SUD severity was modeled as the dependent variable, that variable was excluded as a covariate in the respective regression.

2.7. Statistical analysis

A difference-in-differences (DID) analytic approach (Meyer, 1995) with a propensity score matched control group was used to test the impact of the performance contract on waiting time and selection of clients in OP and IOP substance use treatment programs. DID models adjust for unobserved factors which might have coincided with IC implementation. The DID statistic is equal to the pre-post change in the measure for the IC group minus the pre-post change in the measure for the non-IC group. A positive DID statistic indicates that the IC was associated positively with the outcome variable, e.g. faster growth or slower decline.
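Expressed in group means, the DID statistic described above is:

```latex
\mathrm{DID} = \left(\bar{Y}_{\mathrm{IC},\,\mathrm{post}} - \bar{Y}_{\mathrm{IC},\,\mathrm{pre}}\right)
             - \left(\bar{Y}_{\text{non-IC},\,\mathrm{post}} - \bar{Y}_{\text{non-IC},\,\mathrm{pre}}\right)
```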

The unit of observation was the treatment admission. All analyses were stratified by OP and IOP, since incentives might not have the same effect on OP and IOP admissions, and the rewarded measures were conceptualized differently in each. To account for the clustering of observations within each propensity score matching stratum, the DID regression models were estimated using generalized estimating equations (GEE) (Zeger, Liang, & Albert, 1988). A negative binomial regression was used for days waiting for treatment because it is a count variable, a logistic regression model was employed for the binary history of mental illness, and OLS was used for the continuous substance use severity outcome. For the logistic regression, because the impact of the interaction term may vary for different levels of covariates, the mean interaction effect was also calculated using the Ai-Norton method (Ai & Norton, 2003; Karaca-Mandic, Norton, & Dowd, 2012). To see if the effect of the incentives was stronger in years immediately following implementation and later dissipated, sensitivity analyses with dummy variables for each year were conducted. Hierarchical random effects models, which included program-level factors in addition to client-level covariates, were tested, but this effort produced quasi-complete separation and these models were not pursued further. As a result, the study could not tease out specific program-level factors that might drive differences, although it controlled for program-level clustering.
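A minimal sketch of this estimation strategy using statsmodels follows. Covariate and column names are hypothetical, and the paper’s exact specifications, working correlation choice, and the Ai-Norton marginal-effect step are not reproduced.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Shared right-hand side: IC group, period, their interaction, and covariates.
RHS = ("ic * post + C(age_group) + female + C(marital) + C(educ) + medicaid"
       " + cjs_referral + C(primary_drug) + wraparound + prior_tx")

# op_df: the matched outpatient analytic file (the IOP file is run separately).
# Cluster on the propensity-score matching stratum, per the paper's GEE setup.
common = dict(groups="ps_stratum", data=op_df,
              cov_struct=sm.cov_struct.Exchangeable())

# Days waiting: count outcome -> negative binomial GEE.
m_wait = smf.gee(f"days_waiting ~ {RHS} + mi_history + su_severity",
                 family=sm.families.NegativeBinomial(), **common).fit()

# Mental disorder history: binary outcome -> logistic GEE (the outcome is
# dropped from its own covariate list; Ai-Norton correction not shown).
m_mi = smf.gee(f"mi_history ~ {RHS} + su_severity",
               family=sm.families.Binomial(), **common).fit()

# Substance use severity: continuous outcome -> linear (Gaussian) GEE.
m_sev = smf.gee(f"su_severity ~ {RHS} + mi_history",
                family=sm.families.Gaussian(), **common).fit()

print(m_wait.params["ic:post"])  # the difference-in-differences coefficient
```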

3. Results

3.1. Sample characteristics and program financial performance

The sample reflects the high-need clients often seen in public addiction treatment, with high levels of unemployment, criminal justice involvement, and mental disorders (Table 2). Many are poor, as indicated by Medicaid enrollment. Alcohol and opioids were the most common primary substances used, with other drugs less common. Clients are primarily men in OP, and close to half are men in IOP. Mean age is 34. Most are high school graduates and many have never been married. For 70% of clients, this is not their first SUD treatment episode. Table 3 describes the financial results of the incentivized contract over this time period. On average, programs earning a bonus received about three percent on top of their base contract.

Table 2:

Sample description. Characteristics of substance use treatment admissions to outpatient and intensive-outpatient programs in incentive (IC) and non-incentive (Non-IC) agencies

Outpatient programs: N = 26,722 (matched sample); intensive outpatient programs: N = 12,210 (matched sample). Values are % of admissions unless otherwise noted.

| Characteristic | OP IC | OP Non-IC | IOP IC | IOP Non-IC |
| --- | --- | --- | --- | --- |
| Age at admission: 18–24 | 23.9 | 24.0 | 24.3 | 21.6 |
| Age at admission: 25–34 | 34.4 | 34.5 | 34.1 | 32.5 |
| Age at admission: 35–44 | 22.5 | 22.0 | 21.1 | 22.6 |
| Age at admission: 45+ | 19.2 | 19.5 | 20.5 | 23.3 |
| Age at admission (yr.), mean (SD) | 34.4 (11.2) | 34.4 (11.3) | 34.6 (11.5) | 35.6 (11.8) |
| Male | 60.7 | 60.6 | 57.1 | 49.8 |
| Marital status: never married | 48.7 | 47.2 | 45.4 | 45.5 |
| Marital status: separated/divorced/widowed | 26.0 | 25.4 | 26.8 | 29.5 |
| Marital status: married/cohabiting | 25.3 | 27.5 | 27.7 | 25.0 |
| Education: <HS | 26.0 | 26.6 | 24.2 | 20.6 |
| Education: HS/GED | 52.1 | 50.9 | 49.9 | 48.9 |
| Education: >HS | 21.8 | 22.5 | 25.9 | 30.6 |
| Unemployed | 60.3 | 60.6 | 66.5 | 63.8 |
| Medicaid primary payment source | 50.8 | 55.0 | 49.8 | 50.4 |
| Referral from criminal justice system | 31.0 | 24.4 | 20.4 | 19.9 |
| Criminal justice system involvement | 58.0 | 54.1 | 51.6 | 47.7 |
| Prior SUD treatment | 71.5 | 69.2 | 71.6 | 69.2 |
| Primary substance: alcohol | 44.4 | 49.3 | 39.9 | 49.0 |
| Primary substance: opioids | 35.0 | 30.2 | 45.1 | 35.0 |
| Primary substance: other drugs | 20.7 | 20.5 | 15.0 | 16.0 |
| Mental disorder diagnosis or history of psychiatric admission | 53.5 | 50.8 | 65.8 | 58.0 |

Table 3:

Maine performance contract average payments per contract

| | 2008* | 2009 | 2010 | 2011 |
| --- | --- | --- | --- | --- |
| Number of contracts | 16 | 15 | 17 | 17 |
| Base contract amount, mean (SD) | $193,616 ($146,214) | $208,789 ($146,297) | $193,180 ($137,815) | $193,180 ($137,815) |
| Available incentives, mean (SD) | $13,052 ($9,889) | $18,791 ($13,167) | $17,385 ($12,405) | $17,385 ($12,405) |
| Proportion of base contract received, among programs earning bonus | 103% | 101% | 103% | 103% |
| Proportion of base contract received, among programs penalized | 98% | 99% | 97% | 98% |
| Range of penalty and bonus | 96%–105% | 99%–102% | 96%–108% | 98%–104% |

* 2008 covered 3 quarters only.

3.2. Waiting time for treatment

3.2.1. Outpatient treatment

Waiting time increased during the study period in both IC and non-IC programs. In the pre-period, prior to implementation of the performance contract, IC clients waited an average of 7.5 days for treatment, increasing to 12.1 days in the post-period. Non-IC programs started with a longer average waiting time (8.7 days), which increased to 11.0 days over the same period (Table 4).

Table 4:

Dependent variables: Waiting time and client severity by IC group and time period

| | IC Pre | IC Post | Non-IC Pre | Non-IC Post |
| --- | --- | --- | --- | --- |
| Outpatient treatment (N) | 7,635 | 11,100 | 7,635 | 11,100 |
| Access: days waiting for treatment, mean (SD) | 7.52 (11.2) | 12.10 (15.9) | 8.72 (17.6) | 11.00 (16.4) |
| Client severity: history of mental disorder (%) | 47.8 | 57.3 | 43.6 | 55.7 |
| Client severity: substance use severity (0–1 scale, higher is more severe), mean (SD) | 0.27 (0.14) | 0.28 (0.14) | 0.26 (0.14) | 0.29 (0.14) |
| Intensive outpatient treatment (N) | 2,041 | 3,945 | 2,041 | 3,945 |
| Access: days waiting for treatment, mean (SD) | 5.30 (12.33) | 8.44 (12.36) | 3.69 (12.69) | 9.63 (20.00) |
| Client severity: history of mental disorder (%) | 58.26 | 69.66 | 55.56 | 59.29 |
| Client severity: substance use severity (0–1 scale, higher is more severe), mean (SD) | 0.30 (0.13) | 0.33 (0.13) | 0.30 (0.14) | 0.31 (0.14) |

There was no difference in the increase in waiting time between the IC and non-IC groups in multivariate regression analyses (Table 5) [β = .06, DID = .68, p = .55], indicating that the performance contract did not differentially affect waiting times in IC versus non-IC programs. Sensitivity analyses indicated that the results reported here were consistent both immediately following introduction of the program and in later years (data not shown). Several client factors were associated with longer waiting time: being older, unmarried, having less education, and referral from the criminal justice system. Having prior drug/alcohol treatment episodes and being female were associated with shorter waiting time.

Table 5:

GEE model results and difference-in-differences estimates. Cells show estimate (SE).

| Variable | OP: days waiting | OP: mental disorder history | OP: substance use severity | IOP: days waiting | IOP: mental disorder history | IOP: substance use severity |
| --- | --- | --- | --- | --- | --- | --- |
| Intercept | 1.82** (0.07) | −1.55** (0.09) | 0.11** (0.01) | 0.75** (0.16) | −1.27** (0.26) | 0.18** (0.01) |
| Incentive group | 0.00 (0.07) | 0.20 (0.13) | 0.00 (0.00) | 0.55** (0.12) | 0.17 (0.26) | −0.01 (0.00) |
| Post period | 0.50** (0.10) | 0.21 (0.12) | 0.00 (0.02) | 0.88** (0.15) | −0.05 (0.36) | 0.00 (0.01) |
| Incentive × Post | 0.06 (0.10) | −0.07 (0.20) | −0.01 (0.00) | −0.30 (0.19) | 0.44 (0.40) | 0.01 (0.01) |
| Age 25–34 (Ref: 18–24) | 0.14** (0.04) | 0.07** (0.03) | 0.02** (0.00) | 0.35** (0.07) | 0.07 (0.07) | 0.02** (0.01) |
| Age 35–44 | 0.11** (0.03) | 0.12** (0.04) | 0.04** (0.00) | 0.11** (0.09) | 0.18** (0.09) | 0.02** (0.01) |
| Age 45+ | 0.09* (0.04) | 0.25** (0.05) | 0.05** (0.01) | 0.15** (0.08) | 0.18** (0.05) | 0.02** (0.00) |
| Female | −0.04* (0.02) | 0.57** (0.03) | 0.00 (0.00) | 0.02 (0.06) | 0.66** (0.06) | 0.00 (0.00) |
| Never married (Ref: married/cohabiting) | 0.05* (0.02) | 0.19** (0.04) | 0.04** (0.00) | −0.02 (0.06) | 0.04 (0.07) | 0.03** (0.00) |
| Separated/divorced/widowed | 0.05* (0.02) | 0.19** (0.03) | 0.04** (0.00) | 0.11 (0.05) | 0.07 (0.07) | 0.03** (0.00) |
| <High school (Ref: HS) | 0.06* (0.02) | 0.00** (0.06) | 0.03** (0.00) | 0.12* (0.05) | −0.13** (0.08) | 0.02** (0.00) |
| >High school | −0.04 (0.04) | 0.18** (0.03) | −0.03** (0.00) | −0.13** (0.08) | 0.32** (0.07) | −0.03** (0.00) |
| Medicaid as expected primary payment | −0.05 (0.07) | 0.09 (0.06) | 0.04** (0.00) | −0.03 (0.10) | 0.24* (0.10) | 0.03** (0.01) |
| Primary referral source: criminal justice system | 0.40** (0.08) | −0.44** (0.04) | −0.03** (0.00) | 0.85** (0.15) | −0.43** (0.07) | 0.01 (0.01) |
| Primary drug: opioids (Ref: alcohol) | −0.04 (0.05) | −0.37** (0.03) | 0.09** (0.01) | 0.10 (0.11) | −0.40** (0.07) | 0.09** (0.00) |
| Primary drug: other | 0.07 (0.05) | −0.01** (0.10) | 0.06** (0.01) | −0.03 (0.10) | −0.11** (0.08) | 0.05** (0.01) |
| Received traditional wrap-around services during treatment | 0.12 (0.08) | −0.02 (0.03) | 0.05** (0.01) | 0.57** (0.16) | 0.46** (0.11) | 0.03** (0.01) |
| Prior drug/alcohol treatment episodes | −0.20** (0.07) | 0.47** (0.04) | 0.01** (0.00) | −0.18 (0.11) | 0.56** (0.06) | 0.01 (0.01) |
| Mental disorder history | −0.01 (0.01) | -- | 0.05** (0.00) | −0.22* (0.09) | -- | 0.04** (0.00) |
| Substance use severity | 0.25 (0.29) | 3.33** (0.26) | -- | 0.22 (0.42) | 2.53** (0.25) | -- |
| Marginal effect of interaction: INC × Post, mean (SE) | −0.02 (0.04) | -- | -- | 0.07 (0.08) | -- | -- |
| Difference-in-differences estimate (p value) | 0.68 (0.55) | −1.89 (0.72) | −0.009 (0.06) | −0.11 (0.11) | 10.4 (0.27) | 0.008 (0.46) |

* p<0.05; ** p<0.01

3.2.2. Intensive outpatient treatment

Waiting time for IOP treatment also increased over time. In the pre-period, clients in the IC programs waited an average of 5.3 days for admission, increasing to 8.4 days in the post-period. Clients in the non-IC programs waited almost 4 days for admission in the pre-period, increasing sharply to an average of 9.6 days in the post-period (Table 4).

Similar to the outpatient analysis, the difference in the growth of waiting time between the IC and non-IC groups was not statistically significant in multivariate regression analyses (Table 5) [β = −.30, DID = −.11, p = .11]. As with the outpatient admissions, the performance contract did not differentially affect waiting time for IOP admissions. Factors associated with longer waiting time for IOP admissions include being in an IC program, admission in the post-period, older age, receiving wraparound services during treatment, and referral from the criminal justice system. Only a history of a mental disorder or previous psychiatric admission was associated with shorter waiting time for IOP admissions.

3.3. Selection of clients

3.3.1. Outpatient treatment

Bivariate analyses indicate that clients treated in IC programs before the performance contract had slightly higher average substance use severity than clients in non-IC programs during this time (Table 4). Substance use severity increased in both groups over time. The proportion of clients with a history of mental disorders increased substantially in both groups: from 48% in the pre-period to 57% in the post-period in IC programs, and from 44% to 56% in non-IC programs. Multivariate regression did not identify a significant difference in the change in the proportion of clients with a history of mental disorder or in the level of substance use severity (Table 5).

3.3.2. Intensive outpatient treatment

Severity of clients admitted to intensive outpatient treatment also increased over time in both groups. Admissions in the IC programs continued to exhibit higher levels of substance use and mental health severity than those in non-IC programs (Table 4). Multivariate regressions did not identify a significant difference in change over time between the IC and non-IC programs for history of mental health diagnosis or the composite substance use severity measure among intensive outpatient programs (Table 5).

4. Discussion

While performance contracting programs are increasingly common and many payers are moving toward adopting these strategies, there are few rigorous studies with long time horizons and strong research designs to indicate the effect of organization-level incentives, particularly for SUD treatment. This study employed rigorous analytic methods to examine a performance contract that was designed to improve upon previous shortcomings. Findings are contrary to expectations for the contract at the time it was implemented. This analysis of performance contracting in a public addiction treatment system found an increase in the incentivized waiting time measure across all programs; however, the change in waiting time was not significantly different in incentivized versus non-incentivized programs. With sufficient power to identify fairly small differences, these results suggest that any true difference between the groups was quite small.

This study also looked for evidence of client selection as an unintended effect. Evidence of selection problems was not identified in this performance contract, an encouraging finding. Several possible explanations arise: (1) the design of this performance contract put additional weight on the utilization measure, which successfully discouraged the “cherry-picking” of clients; (2) providers are intrinsically motivated to provide the best care possible to all individuals suffering from SUD and thus do not want to cherry-pick; or (3) the contract was too small in the context of programs’ overall operating budgets to warrant attention at a level that encouraged selection bias.

The lack of effect of the performance contract on waiting time for substance use treatment is similar to results of a study of incentive payments for addiction treatment programs in Washington State (Garnick et al., 2017) and to studies in other fields, including long-term care (Werner, Kolstad, Stuart, & Polsky, 2011) and inpatient hospital care (Ryan, Sutton, & Doran, 2014). The design of the incentive program, changes in the population over time, and changes in the environment likely contributed to the lack of effect.

4.1. Design of the performance contract

Although the performance contract tried to incentivize change, the programs may have lacked sufficient funding from it to support investment in programmatic changes that might have made a bigger difference. Programs were eligible, on average, to receive about $16,000 in addition to the state’s base contract, a fairly small amount in the context of a program’s operating costs, although it represented up to 9% of the base contract. Programs could also lose money and receive less than the base contract. Among programs that received bonuses, the bonuses averaged about 3% of the base contract. Programs that lost money lost an average of 3% of the base contract, in addition to not earning their potential bonus payments. Across all programs, the state ended up paying out about the same amount of money as it would have without the performance contract. These payment rates may have been too low to support the investment needed to reduce waiting time, or even to slow the increase in waiting time that occurred across all programs, and thereby improve quality of care.

When the performance contract was initially conceived, the target measures were expected to change over time; however, this change was never implemented, and the measures and benchmarks remained the same over the study period. Since the measures did not change, programs may have paid less attention to the contract over time. In addition, there may have been a mismatch in the level of the incentive: the financial incentive went to the programs, while many of the changes needed to succeed under the contract had to be made by individual providers, and direct service staff and clinicians may not have been aware of the performance contract. In this case, an incentive to the individual may have been more effective (Rosenthal & Dudley, 2007; M. T. Stewart et al., 2013; Vandrey, Stitzer, Acquavita, & Quinn-Stabile, 2011).

4.2. Environmental changes

The environment in Maine changed dramatically over the seven-year study period, and these changes likely contributed to the observed increases in waiting time for SUD treatment. Use of illicit drugs and alcohol rose to some of the highest rates in the country (SAMHSA, 2016). This increase in substance use, combined with the state’s severe economic recession that occurred contemporaneously with the rollout of the performance contract, resulted in more people in need of treatment. At the same time, the state had fewer resources available for substance use treatment as several treatment programs closed or merged over this period. This increase in need, combined with a decrease in available programs and complicated by the rural nature of Maine, likely contributed to our finding that wait times increased for both IC and non-IC programs over the study period.

4.3. Limitations

The performance contract measured performance quarterly; for a state agency this was a relatively quick turnaround, but monthly payments may have been more salient (Conrad & Perry, 2009). The incentives were targeted at the program level, but many actions that could be taken in response to the performance contract (e.g., flexible scheduling of intakes) required clinicians to make changes; improving the alignment of incentives might have produced different results.

The targets were not updated during the study period, so programs may not have been attending carefully to performance. Conducting the study in Maine highlights another set of challenges. Maine is a largely rural state with a small, predominantly white population, relatively few treatment programs, and little to no access to public transportation, all of which may affect access to care. Thus, the experience here may not generalize to other locations.

The study is limited in several ways. Analyses rely on administrative data and thus lack broader context. Program-level factors that might influence the effect of financial incentives were not examined, although controls for clustering of clients within programs were used. Further, if the patients in the control group were systematically different from the treatment group in unobserved ways, this could bias the findings. This potential was addressed by the use of propensity score matching and difference-in-differences models (to control for confounding trends). Despite the strength of these methods, remaining unobserved differences could affect the validity of conclusions. Finally, this paper focuses on the impact of the performance contract on waiting time and client selection. A forthcoming paper will examine the effect of the performance contract on retention in treatment.

4.4. Conclusion

Financial incentives may be one way to help focus programs on quality of care, but in this case they were not a sufficient lever to improve quality, as indicated here by wait times to access treatment. This study adds context to knowledge about performance contracting overall, and deepens our understanding of its limits in substance use treatment programs, which operate in a challenging environment.

The increase in waiting time that occurred under the performance contract may reflect the difficulties of implementing new interventions, particularly those that require systemic changes in contracting and budgeting by treatment programs, in the context of uncertain times. This performance contract was implemented during an economic recession and a challenging political environment that made program survival questionable. The performance contract may have been a low priority during this period, even if programs conceptually supported the effort as a way to improve care.

Highlights.

  • Financial incentives for substance use treatment programs failed to influence waiting time for services

  • No evidence that programs engaged in selection of clients who were more likely to meet the incentivized measures, as client severity increased over time in all programs

  • Adequate funding and other approaches to improve quality of substance use treatment may be necessary

Acknowledgements:

The study team would like to acknowledge and thank the Maine substance use treatment providers who participated in this study as well as the Maine Office of Substance Abuse and Mental Health Services, Ruth Blauer, and the Maine Association of Substance Abuse Providers. Without the cooperation and support of these individuals and organizations, this research would not have been possible.

Funding: This study was funded by the National Institute on Drug Abuse (R01 DA033402) with additional support from the Brandeis-Harvard NIDA Center (P30 DA035772). Preliminary findings were presented at the Addiction Health Services Research Conference in October 2014, at the AcademyHealth Annual Research Meeting in June 2015, and at the College on Problems of Drug Dependence in 2014 and 2015.



References

  1. Ai C, & Norton EC (2003). Interaction terms in logit and probit models. Economics Letters, 80(1), 123–129.
  2. Bremer RW, Scholle SH, Keyser D, Houtsinger JVK, & Pincus HA (2008). Pay for performance in behavioral health. Psychiatric Services, 59(12), 1419–1429.
  3. Brucker DL, & Stewart M (2011). Performance-based contracting within a state substance abuse treatment system: a preliminary exploration of differences in client access and client outcomes. The Journal of Behavioral Health Services & Research, 38(3), 383–397. doi: 10.1007/s11414-010-9228-5
  4. Centers for Medicare and Medicaid Services. (2007). Report to Congress: Plan to Implement a Medicare Hospital Value-Based Purchasing Program. Retrieved from https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/downloads/HospitalVBPPlanRTCFINALSUBMITTED2007.pdf
  5. Centers for Medicare and Medicaid Services. (2016). Quality Payment Program: Delivery System Reform, Medicare Payment Reform, & MACRA: The Merit-Based Incentive Payment System (MIPS) & Alternative Payment Models (APMs). Retrieved from https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html
  6. Charland K (2007). Pay for performance comes to Medicare in 2009. Healthcare Financial Management, 61, 60–64.
  7. Chinn S (2000). A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistics in Medicine, 19, 3127–3131.
  8. Claus RE, & Kindleberger LR (2002). Engaging substance abusers after centralized assessment: predictors of treatment entry and dropout. Journal of Psychoactive Drugs, 34(1), 25–31. doi: 10.1080/02791072.2002.10399933
  9. Cohen J (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
  10. Commons M, McGuire TG, & Riordan MH (1997). Performance contracting for substance abuse treatment. Health Services Research, 32(5), 631–650.
  11. Conrad DA, & Perry L (2009). Quality-based financial incentives in health care: can we improve quality by paying for it? Annual Review of Public Health, 30, 357–371.
  12. Custers T, Hurley J, Klazinga NS, & Brown AD (2008). Selecting effective incentive structures in health care: A decision framework to support health care purchasers in finding the right incentives to drive performance. BMC Health Services Research, 8, 66. doi: 10.1186/1472-6963-8-66
  13. D’Agostino RB (1998). Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17, 2265–2281.
  14. Damberg CL, Sorbero ME, Lovejoy SL, Martsolf GR, Raaen L, & Mandel D (2014). Measuring success in health care value-based purchasing programs: Findings from an environmental scan, literature review, and expert panel discussions. Rand Health Quarterly, 4(3).
  15. Eijkenaar F, Emmert M, Scheppach M, & Schöffski O (2013). Effects of pay for performance in health care: A systematic review of systematic reviews. Health Policy, 110(2–3), 115–130. doi: 10.1016/j.healthpol.2013.01.008
  16. Festinger DS, Lamb RJ, Kountz MR, Kirby KC, & Marlowe D (1995). Pretreatment dropout as a function of treatment delay and client variables. Addictive Behaviors, 20(1), 111–115. doi: 10.1016/0306-4603(94)00052-Z
  17. Garner BR, Godley SH, Dennis ML, Hunter BD, Bair CL, & Godley MD (2012). Using pay for performance to improve treatment implementation for adolescent substance use disorders: Results from a cluster randomized trial. Archives of Pediatrics & Adolescent Medicine, 166(10), 938–944. doi: 10.1001/archpediatrics.2012.802
  18. Garnick DW, Horgan CM, Acevedo A, Lee MT, Panas L, Ritter GA, . . . Bean-Mortinson J (2017). Influencing quality of outpatient SUD care: Implementation of alerts and incentives in Washington State. Journal of Substance Abuse Treatment, 82, 93–101.
  19. Guerrero EG, Fenwick K, Kong Y, Grella C, & D’Aunno T (2015). Paths to improving engagement among racial and ethnic minorities in addiction health services. Substance Abuse Treatment, Prevention, and Policy, 10(1), 40.
  20. Haley SJ, Dugosh KL, & Lynch KG (2011). Performance contracting to engage detoxification-only patients into continued rehabilitation. Journal of Substance Abuse Treatment, 40(2), 123–131. doi: 10.1016/j.jsat.2010.09.001
  21. Institute of Medicine. (2007). Rewarding Provider Performance: Aligning Incentives in Medicare. Washington, D.C.: National Academies Press.
  22. Karaca-Mandic P, Norton EC, & Dowd B (2012). Interaction terms in nonlinear models. Health Services Research, 47(1pt1), 255–274.
  23. Lee MT, Garnick DW, O’Brien PL, Panas L, Ritter GA, Acevedo A, . . . Godley MD (2012). Adolescent treatment initiation and engagement in an evidence-based practice initiative. Journal of Substance Abuse Treatment, 42(4), 346–355.
  24. Lu M, Ma C.-t. A., & Yuana L (2003). Risk selection and matching in performance-based contracting. Health Economics, 12, 339–354.
  25. Markovitz AA, & Ryan AM (2016). Pay-for-performance: Disappointing results or masked heterogeneity? Medical Care Research and Review, 1077558715619282.
  26. Markovitz AA, & Ryan AM (2017). Pay-for-performance: Disappointing results or masked heterogeneity? Medical Care Research and Review, 74(1), 3–78.
  27. Mason T, Sutton M, Whittaker W, McSweeney T, Millar T, Donmall M, . . . Pierce M (2015). The impact of paying treatment providers for outcomes: difference-in-differences analysis of the ‘payment by results for drugs recovery’ pilot. Addiction, 110, 1120–1128. doi: 10.1111/add.12920
  28. McCamant LE, Zani BG, McFarland BH, & Gabriel RM (2007). Prospective validation of substance abuse severity measures from administrative data. Drug and Alcohol Dependence, 86, 37–45.
  29. McFarland BH, Deck DD, McCamant LE, Gabriel RM, & Bigelow DA (2005). Outcomes for Medicaid clients with substance abuse problems before and after managed care. The Journal of Behavioral Health Services & Research, 32(4), 351–367.
  30. McLellan AT, Kemp J, Brooks A, & Carise D (2008). Improving public addiction treatment through performance contracting: The Delaware experiment. Health Policy, 87(3), 296–308.
  31. Meyer BD (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statistics, 13(2), 151–161.
  32. National Institute on Drug Abuse. (2018). Principles of Drug Addiction Treatment: A Research-Based Guide (Third Edition). Retrieved from https://www.drugabuse.gov/publications/principles-drug-addiction-treatment-research-based-guide-third-edition/principles-effective-treatment
  33. Pollini RA, McCall L, Mehta SH, Vlahov D, & Strathdee SA (2006). Non-fatal overdose and subsequent drug treatment among injection drug users. Drug and Alcohol Dependence, 83(2), 104–110. doi: 10.1016/j.drugalcdep.2005.10.015
  34. Robinson JC (2001). Theory and practice in the design of physician payment incentives. Milbank Quarterly, 79(2), 149–177, III.
  35. Rosenthal MB, & Dudley RA (2007). Pay-for-performance: will the latest payment trend improve care? JAMA, 297(7), 740–744.
  36. Ryan A, Sutton M, & Doran T (2014). Does winning a pay-for-performance bonus improve subsequent quality performance? Evidence from the Hospital Quality Incentive Demonstration. Health Services Research, 49(2), 568–587.
  37. SAMHSA. (2015). Drug Facts: Nationwide Trends. The National Survey on Drug Use and Health. Retrieved from https://www.drugabuse.gov/publications/drugfacts/nationwide-trends
  38. SAMHSA. (2016). National Survey on Drug Use and Health: Comparison of 2002–2003 and 2013–2014 Population Percentages. Retrieved from http://www.samhsa.gov/data/sites/default/files/NSDUHsaeLongTermCHG2014/NSDUHsaeLongTermCHG2014.htm
  39. SAMHSA. (2017). Receipt of Services for Substance Use and Mental Health Issues among Adults: Results from the 2016 National Survey on Drug Use and Health. Retrieved from https://www.samhsa.gov/data/sites/default/files/NSDUH-DR-FFR2-2016/NSDUH-DR-FFR2-2016.htm
  40. Shen Y (2003). Selection incentives in a performance-based contracting system. Health Services Research, 38(2), 535–552.
  41. Simpson DD, Joe GW, & Brown BS (1997). Treatment retention and follow-up outcomes in the Drug Abuse Treatment Outcome Study (DATOS). Psychology of Addictive Behaviors, 11(4), 294–307. doi: 10.1037/0893-164x.11.4.294
  42. Stasiewicz PR, & Stalker R (1999). Brief report: a comparison of three “interventions” on pretreatment dropout rates in an outpatient substance abuse clinic. Addictive Behaviors, 24(4), 579–582. doi: 10.1016/S0306-4603(98)00082-3
  43. Stewart MT, Horgan CM, Garnick DW, Ritter GA, & McLellan A (2013). Performance contracting and quality improvement in outpatient treatment: effects on waiting time and length of stay. Journal of Substance Abuse Treatment, 44(1), 27–33.
  44. Stewart RE, Lareef I, Hadley TR, & Mandell DS (2017). Can we pay for performance in behavioral health care? Psychiatric Services, 68(2), 109–111. doi: 10.1176/appi.ps.201600475
  45. Van Herck P, De Smedt D, Annemans L, Remmen R, Rosenthal MB, & Sermeus W (2010). Systematic review: Effects, design choices, and context of pay-for-performance in health care. BMC Health Services Research, 10(1), 1–13. doi: 10.1186/1472-6963-10-247
  46. Vandrey R, Stitzer ML, Acquavita SP, & Quinn-Stabile P (2011). Pay-for-performance in a community substance abuse clinic. Journal of Substance Abuse Treatment, 41(2), 193–200.
  47. Werner R, Kolstad J, Stuart E, & Polsky D (2011). The effect of pay-for-performance in hospitals: lessons for quality improvement. Health Affairs, 30(4), 690–698.
  48. Zeger SL, Liang K-Y, & Albert PS (1988). Models for longitudinal data: a generalized estimating equation approach. Biometrics, 44(4), 1049–1060.
