Abstract
We evaluate the effects of a performance contract (PC) implemented in Delaware in 2001, and of participation in quality improvement (QI) programs, on waiting time for treatment and length of stay (LOS) using client treatment episode level data from Delaware (n = 12,368) and Maryland (n = 147,151) for 1998–2006. Results of difference-in-differences analyses indicate waiting time declined 13 days following the PC, after controlling for client characteristics and historical trends. Participation in the PC together with a formal QI program was associated with a decrease of 20 days. LOS increased 24 days under the PC and 22 days under the PC and QI programs combined, after controlling for client characteristics. The PC and QI program were associated with improvements in LOS and waiting time, although we cannot determine which aspects of the programs (incentives, training, monitoring) resulted in these changes.
Keywords: Performance contract, pay-for-performance, financial incentives, quality improvement, substance abuse treatment
1. INTRODUCTION
The Institute of Medicine recommended using performance measures and payment systems to align payment with quality improvement (Institute of Medicine, 2001, 2006; Pearson, Schneider, Kleinman, Coltin, & Singer, 2008). The Affordable Care Act (ACA) of 2010 includes pilot programs to redesign payment systems. Pay-for-performance (P4P) programs are used frequently in general medical care by private insurers (Rosenthal, Landon, Normand, Frank, & Epstein, 2006), Medicare (Centers for Medicare and Medicaid Services, 2003, 2007; Kahn, Ault, Isenstein, Peotetz, & Van Gelder, 2006; Ryan, 2009) and Medicaid (Felt-Lisk, Gimm, & Peterson, 2007; Kuhmerker & Hartman, 2007) to improve quality of care and control costs. Performance contracts (PC), a form of P4P focused at the organizational level, are being implemented on a limited basis in behavioral health (Bremer, Scholle, Keyser, Houtsinger, & Pincus, 2008). However, there is little empirical evidence that P4P leads to quality improvement in general medicine (Rosenthal & Frank, 2006; Pearson et al., 2008) or behavioral health (Bremer et al., 2008).
Recent reviews examining effects of financial incentives on quality of general medical care have found little evidence of the effectiveness of P4P (Christianson, Leatherman, & Sutherland, 2008; Mehrotra, Damberg, Sorbero, & Teleki, 2009; Petersen, Woodard, Urech, Daw, & Sookanan, 2006; Rosenthal & Frank, 2006). However, there are many different types of incentives, target behaviors and outcomes, and contingencies potentially relevant to healthcare improvement, and to date only a few implementations of P4P have been studied (Mehrotra et al., 2009). Many of those studies are limited to pre-post analyses that fail to control for secular trends (An et al., 2008; J. Berthiaume, Chung, Ryskina, Walsh, & Legorreta, 2006; J. T. Berthiaume, Tyler, Ng-Osorio, & LaBresh, 2004; Campbell et al., 2007; Doran et al., 2006; Greene et al., 2004; Levin-Scherz, DeVita, & Timbie, 2006; Nahra, Reiter, Hirth, Shermer, & Wheeler, 2006; Reiter, Nahra, Alexander, & Wheeler, 2006; Sautter et al., 2007; Whyte & Ansley, 2008). Other studies have failed to control for other initiatives implemented simultaneously with P4P programs (Beaulieu & Horrigan, 2005; Campbell et al., 2007; Greene et al., 2004; Levin-Scherz et al., 2006).
Evidence of quality improvement from PC in behavioral health is mixed. Analyses of the Maine PC identified improved performance on some measures for programs with higher than average revenue from the PC (Commons, McGuire, & Riordan, 1997). However, a control group was not included and later analyses incorporating a control group found changes may have been due to misreporting of client severity (Lu & Ma, 2006). Shen identified a problem with cherry-picking of clients under the Maine PC (Shen, 2003), but in a more comprehensive examination of the Maine treatment system, Lu and Ma found much of the selection was explained by better matching of clients to treatment modalities (Lu, Ma, & Yuan, 2003). In an initial evaluation of the 2001 Delaware PC, McLellan and colleagues identified improvements in utilization rates and “systematic increases” in client participation in treatment following the PC (McLellan, Kemp, Brooks, & Carise, 2008).
Building on the initial program-level evaluation by McLellan and colleagues, we examine the impact of the change in payment system from a global payment to a performance contract for Delaware outpatient substance abuse treatment programs using more detailed client-level data. We examine the effect of participation in the PC and of participation in parallel quality improvement (QI) initiatives, adding a comparison group of programs from another state not subject to the PC. We exploit the sequential implementation of the PC and QI initiatives to estimate the effects of the PC alone and of the PC in conjunction with QI initiatives on waiting time for treatment (WT) and length of stay (LOS), while controlling for case mix and historical trends.
2. METHODS
2.1 Setting: The Delaware Addiction Treatment System
Delaware’s community-based alcohol and other drug (AOD) treatment programs are primarily funded through state funds and federal block grants. The Division of Substance Abuse and Mental Health (DSAMH) allocates block grant funds provided by the Substance Abuse and Mental Health Services Administration (SAMHSA), along with additional state funds, to AOD treatment programs for detoxification, inpatient, residential and outpatient treatment services. From 2001–2007 DSAMH contracted with five organizations to deliver outpatient AOD treatment services at 11 locations, which provided approximately 2,000 episodes of AOD treatment annually.
2.2.1 Intervention: The Delaware Performance Contract
Prior to implementing the PC, Delaware paid AOD programs using a global budget: a fixed monthly installment, one twelfth of an annual amount determined prospectively through negotiations between the program and the state (Kemp, 2006). In 2000, DSAMH began to incorporate considerations of quality in purchasing. The move to the PC represented an attempt to incentivize provision of quality care without dictating specifically how care should be provided.
Delaware’s PC has been described previously (McLellan et al., 2008); we highlight key features here. Under the PC, programs were required to use at least one evidence-based practice; beyond this, the state did not dictate how to provide services. State funding was increased 5% with the introduction of the PC. Organizations reported to the state by entering information into a spreadsheet and were paid monthly based on three performance measures: capacity utilization, client participation and treatment completion. Programs were initially given a six-month hold-harmless period during which performance data were collected and submitted but payments were not affected. Beginning in January 2002, programs continued to be paid monthly, but each allotment was increased or reduced based on the program’s measured performance in the previous month. It is important to note that turnaround time for reimbursement was designed to be very fast in order to strengthen the incentive’s impact.
Previous analyses found no evidence that programs engaged in “cherry-picking” of clients more likely to meet the performance measures; rather, the severity of the population appeared to increase over time following the PC (McLellan et al., 2008). We examined our own data for changes in client severity over time and reached a similar conclusion: the population appeared to become more severe over time (analyses not shown).
2.2.2 Performance Measures
Performance measures, established in the contract between the state and the AOD treatment organizations, are the foundation of the PC. The treatment programs reported monthly to the state by entering information into a spreadsheet designed for the PC. The three performance measures are defined as follows:
Capacity Utilization: a program-level measure of the proportion of the program’s capacity used in a month. The numerator is the number of clients enrolled in the program during the month. The denominator is negotiated between the program and the state and is based on the program’s size, geographic location and costs (McLellan et al., 2008). Because the denominator was negotiated, the measure is not entirely objective: a savvy program director might negotiate a smaller denominator to make the target rate easier to achieve. It was not possible to examine whether this occurred with the available data.
Active Participation: a client-level measure of attendance at treatment. Clients met this measure by attending a specified minimum number of treatment sessions each month. The minimum number of sessions required for a client to meet this criterion declined as the client progressed through treatment, in a clinically sensible manner, as shown in Table 1, Treatment Participation Requirements.
Program Completion: a measure of retention in treatment to completion, defined as active participation in treatment for a minimum of 60 days, completion of the major goals of the treatment plan, and submission of four consecutive weekly urine samples free of alcohol and illegal drugs. Programs were eligible for $100 bonus payments for each client completing the program. However, this measure was subject to budget constraints: once a program reached its annual budgetary maximum (between $3,000 and $9,000), it did not receive program completion payments for additional clients. We do not analyze this measure because the bonus was not available for every client, and complete data were not available from the state because programs stopped tracking this information once they reached the annual maximum.
Table 1. Program capacity utilization targets (columns 1–3) and treatment participation requirements (columns 4–7)

| Target rate 2001–2002 | Target rate 2003–2007 | Payment (% of contract amount) | Client treatment phase | Participation requirement | % of clients required to meet target | Payment (% of contract amount) |
| --- | --- | --- | --- | --- | --- | --- |
| 80% | 90% | 100% | Phase 1 | 2 visits / week | 50% | 1% |
| 70% – 79% | 80% – 89% | 90% | Phase 2 | 4 visits / month | 60% | 1% |
| 60% – 69% | 70% – 79% | 70% | Phase 3 | 4 visits / month | 70% | 1% |
| 50% – 59% | 60% – 69% | 50% | Phase 4 | 2 visits / month | 80% | 1% |

Treatment participation payments are conditional on achieving the capacity utilization requirement. An additional 1% payment is made when the program meets all four participation targets.
2.2.3 Payments
Delaware’s PC required programs to maintain a 90% capacity utilization rate to receive the full monthly payment. Both penalties and rewards were included. If an organization did not meet the utilization target, its payment was reduced accordingly, by up to 50%. If an organization met the utilization target and its clients met the active participation targets, the organization was eligible for bonus payments of up to 5% beyond the contract amount. Table 1 shows target rates and payments for the utilization and active participation measures.
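To make the payment arithmetic concrete, the sketch below (our illustration, not the state’s actual payment spreadsheet) computes the fraction of the monthly contract amount implied by Table 1. The function name and the treatment of utilization below the lowest published tier are our assumptions.

```python
def monthly_payment_fraction(utilization, participation_targets_met, year):
    """Fraction of the monthly contract amount paid under the Delaware PC,
    per the schedule in Table 1. Illustrative only.

    utilization: program capacity utilization for the month (0.0-1.0).
    participation_targets_met: how many of the four phase-specific active
        participation targets the program met (0-4).
    year: contract year; target rates rose in 2003.
    """
    # Capacity utilization tiers: (minimum rate, fraction of contract paid).
    if year >= 2003:
        tiers = [(0.90, 1.00), (0.80, 0.90), (0.70, 0.70), (0.60, 0.50)]
    else:  # 2001-2002 target rates were 10 percentage points lower
        tiers = [(0.80, 1.00), (0.70, 0.90), (0.60, 0.70), (0.50, 0.50)]
    # Table 1 is silent below the lowest tier; we assume the 50% floor
    # implied by the text ("reduced ... by up to 50%").
    base = 0.50
    for minimum, fraction in tiers:
        if utilization >= minimum:
            base = fraction
            break
    # Participation bonuses (1% per target met, plus 1% for meeting all
    # four) are conditional on meeting the full utilization target.
    bonus = 0.0
    if base == 1.00:
        bonus = 0.01 * participation_targets_met
        if participation_targets_met == 4:
            bonus += 0.01
    return base + bonus

# Example: in 2004, a program at 92% utilization whose clients met all four
# participation targets would receive 105% of its monthly contract amount.
print(monthly_payment_fraction(0.92, 4, 2004))  # 1.05
```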
2.2.4 Quality Improvement Initiatives
Following PC implementation, Delaware outpatient treatment programs began participating in two QI programs: the Network for the Improvement of Addiction Treatment (NIATx) in 2004 and Advancing Recovery in 2006. NIATx is based on the rapid-cycle change concept and promotes four aims: reduce waiting time, reduce no-shows, increase admissions and increase continuation of care (McCarty, Gustafson, Capoccia, & Cotter, 2009; McCarty et al., 2007). Advancing Recovery is designed to help programs work with state agencies to change administrative and clinical systems and implement evidence-based practices. In our analyses it is important to account for participation in QI programs in order to distinguish their effects from those of the PC.
2.3 Data
Analyses are based on matched treatment and comparison groups and rely on administrative data from Delaware and Maryland as well as personal interviews with program CEOs. State agencies provided admission-level data, including demographic information, primary drug used and frequency of use, for all adult clients treated in publicly funded outpatient AOD treatment programs between 1998 and 2006 in Delaware (n = 12,368) and Maryland (n = 147,151). The Delaware data used in this analysis were not part of the performance contract data, and therefore we do not expect that programs had an incentive to misreport them. Delaware did oversee programs’ data collection efforts under the PC, so we expect the data collected after the PC was implemented to be more accurate. All data were collected by the states through standard reporting systems. Data collection and coding procedures in both states were examined, and discharge dates were calculated using standard methods. Interviews, approximately one hour in length and focusing on how programs responded to the PC, were conducted by the first author with the CEOs of the four Delaware organizations providing outpatient AOD services. Findings from these interviews informed interpretation of the quantitative analyses. The study was approved by Brandeis University’s Institutional Review Board.
2.4 Creation of the matched sample
Sample construction was based on one-to-one matching of 12,368 Delaware clients with comparison Maryland subjects. We stratified the Maryland and Delaware databases into cells based on eight factors (admission year, gender, race, marital status, employment, type and frequency of primary drug, and previous treatment for alcohol or drugs) and randomly selected Maryland clients from each cell to match the number of Delaware clients in the cell. The substance use variables were included to eliminate differences in measures of severity or type of use between the states. Previous treatment was included because it has been shown to be an important predictor of treatment outcomes.
For matching we employed the SAS procedure SurveySelect (SAS Institute Inc, 2009). We excluded 691 Maryland admissions due to negative or otherwise unreasonable LOS values, leaving 146,460 admissions for selecting the matched sample. We excluded 926 Delaware admissions (7.5%) because of missing data on one or more of the stratifying variables; the remaining Delaware sample consisted of 11,442 admissions. Table 2 describes the matched sample. As expected, the samples are almost identical on all characteristics, with the exception of the proportion of clients reporting no substance use within 30 days before admission: 41% in Delaware versus 44% in Maryland (p < .001). Although this is not a small difference, the sample is otherwise well matched and the difference is unlikely to drive results; in addition, this variable is included in the regression models to control for it.
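For readers interested in the mechanics, the cell-by-cell selection can be sketched in a few lines of pandas (the actual matching was done with the SAS SurveySelect procedure; the column names below are illustrative, not those in the state files):

```python
import pandas as pd

# Stratifying variables from Section 2.4 (illustrative column names).
STRATA = ["adm_year", "gender", "race", "marital", "employment",
          "primary_drug", "use_freq", "prior_aod_treatment"]

def match_comparison_sample(de: pd.DataFrame, md: pd.DataFrame,
                            seed: int = 0) -> pd.DataFrame:
    """From each stratification cell, randomly draw as many Maryland
    admissions as there are Delaware admissions in that cell."""
    needed = de.groupby(STRATA).size()

    def draw(cell: pd.DataFrame) -> pd.DataFrame:
        n = int(needed.get(cell.name, 0))
        # Sample without replacement; cells with too few Maryland
        # admissions would need special handling (not shown).
        return cell.sample(n=min(n, len(cell)), random_state=seed)

    return md.groupby(STRATA, group_keys=False).apply(draw)

# md_matched = match_comparison_sample(de_admissions, md_admissions)
```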
Table 2. Characteristics of the matched Delaware and Maryland samples

| Item | Delaware N | Delaware % | Maryland N | Maryland % | p value |
| --- | --- | --- | --- | --- | --- |
| **Demographics** | | | | | |
| Male | 8901 | 77.8 | 8901 | 77.8 | 1.0 |
| White | 6671 | 58.3 | 6671 | 58.3 | 1.0 |
| Married | 1716 | 15.0 | 1716 | 15.0 | 1.0 |
| No longer married | 2384 | 20.8 | 2384 | 20.8 | 1.0 |
| In work force | 7196 | 62.9 | 7196 | 62.9 | 1.0 |
| **Frequency of drug use** | | | | | |
| Weekly | 2837 | 24.8 | 2801 | 24.5 | 0.6 |
| Daily | 1921 | 16.8 | 1828 | 16.0 | 0.1 |
| Monthly | 1985 | 17.4 | 1894 | 16.6 | 0.1 |
| No use in past month | 4699 | 41.1 | 5076 | 44.4 | < .0001 |
| **Primary drug used** | | | | | |
| Cocaine | 2397 | 21.0 | 2401 | 21.0 | 1.0 |
| Opiates | 927 | 8.1 | 930 | 8.1 | 1.0 |
| Marijuana | 3484 | 30.5 | 3494 | 30.5 | 0.9 |
| Alcohol | 4328 | 37.8 | 4338 | 37.9 | 0.9 |
| Other drugs | 163 | 1.4 | 163 | 1.4 | 1.0 |
| **Previous treatment for addiction** | | | | | |
| Yes | 7460 | 65.2 | 7460 | 65.2 | 1.0 |
| **Admission year** | | | | | |
| 1998 | 340 | 3.0 | 296 | 2.6 | 0.1 |
| 1999 | 354 | 3.1 | 381 | 3.3 | 0.3 |
| 2000 | 464 | 4.1 | 481 | 4.2 | 0.6 |
| 2001 | 1245 | 10.9 | 1246 | 10.9 | 1.0 |
| 2002 | 1710 | 14.9 | 1721 | 15.0 | 0.8 |
| 2003 | 1787 | 15.6 | 1775 | 15.5 | 0.8 |
| 2004 | 1913 | 16.7 | 1922 | 16.8 | 0.9 |
| 2005 | 1786 | 15.6 | 1825 | 16.0 | 0.5 |
| 2006 | 1843 | 16.1 | 1795 | 15.7 | 0.4 |
| **Admission year category** | | | | | |
| 1998 – 2000 | 1158 | 10.1 | 1158 | 10.1 | 1.0 |
| 2001 – 2003 | 4742 | 41.4 | 4742 | 41.4 | 1.0 |
| 2004 – 2006 | 5542 | 48.4 | 5542 | 48.4 | 1.0 |
2.5 Dependent Variables
Client waiting time (WT) is defined as the number of days between the day the individual first contacted the program and the admission day. This variable was included in the Maryland data and was calculated in the Delaware data. Just over 1.5% of admissions in Delaware were missing one or more of the data elements required to calculate WT and were excluded from the analysis. Twenty-six admissions in Delaware had admission and discharge on the same date and were also excluded. None of the Maryland admissions had missing WT.
Client length of stay (LOS) was calculated by subtracting admission date from discharge date and adding one day so that clients admitted and discharged on the same day have a LOS of one day. Records missing LOS were excluded from the analysis, resulting in the exclusion of 304 records, 2.8% of the Delaware admissions. Records with LOS greater than or equal to 338 days, the 95th percentile of the Delaware data, were also excluded (n=1135).
Reporting of discharge dates is often a concern. A client might show multiple admissions and discharges within a short period of time, which would appropriately be grouped as a single episode of care for analysis. The Delaware data indicated that readmission within 30 days of discharge occurred 220 times, representing fewer than 2% of Delaware admissions. Identifying readmissions in the Maryland data was not possible without a unique client identifier, but since Delaware records were not combined it would have been inappropriate to combine them in Maryland.
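A short sketch (with illustrative column names, not our production code) shows how WT, LOS and the 30-day readmission flag just described can be derived:

```python
import pandas as pd

def derive_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Derive waiting time and length of stay as defined in Section 2.5.
    Assumes datetime columns first_contact, admit_date, discharge_date
    and a client_id column (names are hypothetical)."""
    df = df.copy()
    # Waiting time: days from first contact to admission.
    df["wt"] = (df["admit_date"] - df["first_contact"]).dt.days
    # LOS: discharge minus admission plus one day, so that same-day
    # admission and discharge yields a LOS of one day.
    df["los"] = (df["discharge_date"] - df["admit_date"]).dt.days + 1
    # Flag readmissions within 30 days of a prior discharge (per client);
    # rows without a prior discharge evaluate to False.
    df = df.sort_values(["client_id", "admit_date"])
    prev_discharge = df.groupby("client_id")["discharge_date"].shift()
    df["readmit_30d"] = (df["admit_date"] - prev_discharge).dt.days <= 30
    return df
```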
2.6 Independent variables
Independent variables include client demographics, employment status, criminal justice involvement, and type and severity of drug use. These variables were chosen because they represent factors likely to influence treatment access and LOS. In addition, indicator variables for state and time period were included in the model. The post period is divided into two segments: the PC-only period (2002–2003) and the PC and QI period (2004–2006). Thus, within Delaware, the indicator for the earlier period (2002–2003) marks treatment in a program under the PC only, and the indicator for the later period (2004–2006) marks treatment in a program under both the PC and QI initiatives.
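In regression form (our notation, with client $i$, program $j$ and admission period $t$), the specification estimated in the next section is:

$$y_{ijt} = \beta_0 + \beta_1\,\mathrm{DE}_j + \beta_2\,\mathrm{Post1}_t + \beta_3\,\mathrm{Post2}_t + \beta_4\,(\mathrm{DE}_j \times \mathrm{Post1}_t) + \beta_5\,(\mathrm{DE}_j \times \mathrm{Post2}_t) + X_{ijt}'\gamma + \varepsilon_{ijt}$$

where $y_{ijt}$ is WT or LOS, $\mathrm{DE}_j$ indicates a Delaware program, $\mathrm{Post1}_t$ and $\mathrm{Post2}_t$ indicate admission in 2002–2003 and 2004–2006 respectively, and $X_{ijt}$ collects the client characteristics; $\beta_4$ and $\beta_5$ are the difference-in-differences estimates of the PC-only and PC-plus-QI effects.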
2.7 Statistical Models
Analyses employed multilevel linear regressions, modeling WT and LOS as linear functions of state, time, interactions between state and time, client demographics, employment, educational status, previous treatment, criminal justice involvement, and type and frequency of substance used; analyses were completed using SAS version 9.2 (SAS Institute Inc, 2009). The interactions between the indicator for Delaware admissions and the two time-period indicators are the main variables of interest, as they represent, for Delaware clients, the effects of the PC alone and of the PC in conjunction with QI initiatives. The linear regression models were estimated using generalized estimating equations, which account for clustering of clients within programs.
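The analyses were run in SAS; a minimal Python sketch of the same specification, using statsmodels’ GEE implementation with an exchangeable working correlation to account for clustering of clients within programs, would look like the following (variable names and the `analysis_df` data frame are illustrative assumptions, not the authors’ code):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Difference-in-differences specification for waiting time (Model 1).
# delaware, pc_only (2002-2003) and pc_qi (2004-2006) are 0/1 indicators;
# the delaware:pc_only and delaware:pc_qi interactions carry the effects
# of interest.
model = smf.gee(
    "wt ~ delaware * (pc_only + pc_qi) + female + white + hispanic"
    " + C(marital) + C(employment) + C(living) + C(education)"
    " + prior_mh + prior_aod + C(legal) + C(primary_drug) + C(use_freq)",
    groups="program_id",          # cluster clients within programs
    data=analysis_df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```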
2.8 Sensitivity Tests
The comparison group was included to control for temporal trends not due to the payment system change; however, SA treatment programs in the two states may differ in important ways. In addition, with a relatively short baseline WT, Maryland had little room to change in response to temporal trends. Therefore, despite client matching, because of differences in baseline WT between Delaware and Maryland (Table 3) and variation across states in SA treatment, the comparison group may not serve as intended. We therefore conducted a sensitivity analysis of WT and LOS based solely on Delaware’s admissions.
Table 3. Waiting time and length of stay by admission year, Delaware and matched Maryland sample

Waiting Time*

| Year | Delaware N | Mean | SD | Median | Maryland N | Mean | SD | Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1998 | 311 | 34.7 | 31.0 | 20 | 268 | 2.9 | 1.0 | 8 |
| 1999 | 307 | 32.0 | 27.0 | 22 | 376 | 4.3 | 1.0 | 9 |
| 2000 | 418 | 20.8 | 12.0 | 22 | 480 | 5.4 | 1.0 | 10 |
| 2001 | 1102 | 22.0 | 15.0 | 21 | 1240 | 7.2 | 1.0 | 12 |
| 2002 | 1571 | 17.9 | 9.0 | 20 | 1713 | 10.3 | 6.0 | 14 |
| 2003 | 1654 | 13.1 | 6.0 | 18 | 1770 | 9.7 | 6.0 | 13 |
| 2004 | 1823 | 9.5 | 1.0 | 16 | 1922 | 9.1 | 6.0 | 11 |
| 2005 | 1711 | 8.5 | 1.0 | 16 | 1824 | 9.5 | 6.0 | 12 |
| 2006 | 1792 | 6.2 | 1.0 | 14 | 1793 | 9.0 | 5.0 | 11 |

Length of Stay**

| Year | Delaware N | Mean | SD | Median | Maryland N | Mean | SD | Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1998 | 240 | 125.5 | 80.2 | 101 | 240 | 186.8 | 78.4 | 194 |
| 1999 | 358 | 122.3 | 72.6 | 107 | 358 | 112.1 | 89.9 | 87 |
| 2000 | 448 | 144.3 | 79.7 | 128 | 448 | 103.9 | 84.3 | 81 |
| 2001 | 1175 | 102.2 | 80.8 | 80 | 1175 | 116.1 | 86.3 | 99 |
| 2002 | 1648 | 113.2 | 82.4 | 92 | 1648 | 109.0 | 87.8 | 86 |
| 2003 | 1693 | 119.3 | 82.9 | 100 | 1693 | 106.3 | 85.6 | 84 |
| 2004 | 1826 | 119.2 | 87.7 | 94 | 1826 | 111.5 | 86.2 | 91 |
| 2005 | 1741 | 122.7 | 79.5 | 104 | 1741 | 114.1 | 88.9 | 92 |
| 2006 | 1736 | 115.9 | 78.3 | 95 | 1736 | 115.5 | 84.1 | 100 |

\* Excludes records with waiting time > 105 days, the 95th percentile of Delaware data.
\*\* Excludes admissions with LOS > 338 days, the 95th percentile of Delaware data.
The Delaware PC implemented the active participation criteria to incentivize programs to reach out to poorly attending clients and re-engage them in treatment. Under the prior global budget payment system it was not unusual for outpatient programs to keep clients officially on the rolls even though they had stopped attending treatment. Outpatient programs may choose not to discharge a client because a readmission is likely and a discharge would require redoing paperwork. In addition, counselors may hide behind “full” caseloads to avoid accepting new patients, even when visit frequency is very low. Thus the shift to the PC represents a change in incentives for reporting discharge dates that cannot be teased apart from the real change in LOS.
In the LOS analysis the pre-period is therefore limited to 2001, after the change in reporting incentives went into effect. By limiting the pre-period we identify change in LOS due to change in treatment patterns without confounding from changes in reporting incentives. Although Maryland programs have a similar incentive not to discharge clients, this incentive did not change in Maryland over the study period. Maryland programs therefore likely report longer-than-actual LOS, which would bias our estimates toward the null and may make it more difficult to identify improved LOS in Delaware.
3. RESULTS
3.1 Waiting time
Table 3 shows WT for outpatient treatment in Delaware and Maryland by admission year. In 2001, mean WT in Delaware was 22 days, down from 34 days in 1998. In 2001, mean WT in Maryland was 7 days, having increased steadily from 2.9 days in 1998.
Results of multivariate regression analyses are displayed in Table 4. Model 1 examines the change in WT based on the entire sample, including both Delaware and Maryland clients. The interaction coefficient for Delaware admissions in 2002–2003 is −13.3 (p < .0001), indicating the PC reduced WT by 13 days. The interaction coefficient for Delaware admissions in 2004–2006 is −20.0 (p < .0001), indicating that the PC in conjunction with QI initiatives reduced WT by 20 days. Although there are likely many differences between Delaware and Maryland SA treatment programs, sensitivity analyses excluding Maryland data show similar results.
Table 4. Regression results for waiting time (days). Model 1: pre-post with comparison group (n = 21,977); Model 2: pre-post only (n = 10,597). Reference categories in parentheses.

| Parameter | Model 1 Estimate | SE | p | Model 2 Estimate | SE | p |
| --- | --- | --- | --- | --- | --- | --- |
| Intercept | 6.05 | 1.05 | <.0001 | 22.92 | 4.56 | <.0001 |
| **State and Admission Year (Maryland / 1998–2001)** | | | | | | |
| Delaware | 16.68 | 3.97 | <.0001 | | | |
| Admission in 2002–2003 | 4.16 | 1.30 | 0.001 | | | |
| Admission in 2004–2006 | 3.31 | 1.17 | 0.005 | | | |
| Delaware admission 2002–2003 | −13.27 | 2.36 | <.0001 | −9.09 | 1.95 | <.0001 |
| Delaware admission 2004–2006 | −20.04 | 2.22 | <.0001 | −16.71 | 1.89 | <.0001 |
| **Client Characteristics** | | | | | | |
| Female (male) | 0.92 | 0.40 | 0.02 | 0.87 | 0.41 | 0.03 |
| White (non-white) | 2.15 | 0.63 | 0.001 | 1.29 | 0.81 | 0.11 |
| Hispanic (non-Hispanic) | −1.43 | 0.90 | 0.11 | −1.69 | 1.52 | 0.27 |
| **Marital Status (single)** | | | | | | |
| Married | −0.30 | 0.32 | 0.35 | −0.08 | 0.45 | 0.86 |
| No longer married | −0.13 | 0.36 | 0.72 | 0.45 | 0.59 | 0.45 |
| **Employment Status (employed)** | | | | | | |
| Not in workforce | 0.75 | 0.66 | 0.25 | 0.32 | 0.81 | 0.69 |
| Unemployed | −1.18 | 1.14 | 0.30 | −1.38 | 1.11 | 0.22 |
| **Living Situation at admission (independent)** | | | | | | |
| Homeless | −0.56 | 0.86 | 0.51 | −0.36 | 1.16 | 0.76 |
| Dependent living | −0.55 | 0.77 | 0.48 | −0.38 | 2.30 | 0.87 |
| **Education level (less than HS graduate)** | | | | | | |
| HS education | 0.55 | 0.47 | 0.24 | 0.49 | 0.45 | 0.27 |
| Some college | −0.62 | 0.50 | 0.22 | −0.66 | 0.57 | 0.25 |
| **Clinical History** | | | | | | |
| Previous treatment for mental health (no) | 0.29 | 0.35 | 0.42 | 0.38 | 0.28 | 0.18 |
| Previous AOD treatment (no) | −0.51 | 0.93 | 0.58 | −0.57 | 0.98 | 0.56 |
| **Current Legal Involvement (none)** | | | | | | |
| Charges pending | 1.01 | 1.48 | 0.50 | 1.12 | 1.47 | 0.45 |
| Convicted | 3.68 | 0.88 | <.0001 | 3.80 | 0.88 | <.0001 |
| **Primary drug at admission (alcohol)** | | | | | | |
| Marijuana | 0.01 | 0.40 | 0.98 | −0.41 | 0.79 | 0.61 |
| Cocaine | −1.29 | 0.42 | 0.002 | −0.92 | 0.60 | 0.12 |
| Opiates | −2.50 | 0.57 | <.0001 | −2.26 | 0.86 | 0.01 |
| Other drug | −1.91 | 0.78 | 0.01 | −1.86 | 1.27 | 0.14 |
| **Frequency of use of primary drug (not in last month)** | | | | | | |
| Daily | −2.24 | 0.60 | 0.0002 | −1.63 | 0.67 | 0.02 |
| Weekly | −1.26 | 0.50 | 0.01 | −0.86 | 0.71 | 0.22 |
| Monthly | −0.48 | 0.52 | 0.36 | −0.20 | 0.88 | 0.82 |

Models adjusted for clustering of clients in programs.
Model 2, presenting results of our sensitivity analysis excluding Maryland data, gives −9.09 as the parameter estimate for 2002–2003, indicating that clients admitted to AOD treatment in Delaware in 2002–2003 waited nine fewer days for treatment than Delaware clients admitted between 1998 and 2001. The equivalent parameter estimate for 2004–2006 is −16.71, indicating almost a 17-day improvement in this later period. These estimates do not control for temporal trends.
Findings from personal interviews with CEOs of Delaware AOD treatment programs indicate that the financial incentives in the PC played a significant role in clinicians’ and managers’ day-to-day activities. For example, clinicians began to actively remind clients to attend treatment sessions, and administrators tried to oversubscribe clients to group treatment sessions in order to meet the active participation criteria. In addition, the interviews indicated that financial pressure from the PC led CEOs to attend carefully to the QI programs, which they recognized could help their organizations improve WT and, therefore, improve performance on the PC.
3.2 Change over time in LOS
As shown in Table 3, average LOS in Delaware outpatient AOD treatment in 2001 was 102 days. This increased by 13% to almost 116 days by 2006. This increase contrasts sharply with the slight fluctuation in LOS for clients in Maryland programs, which was 116 days for clients admitted in 2001, declined to 106 days for clients admitted in 2003, and returned to almost 116 days for clients admitted in 2006.
In multivariate analyses we examined change in LOS in Delaware from 1998–2006 using 1998–2001 as the pre-period and did not identify state-dependent changes in LOS over time (results not shown). However, the data do not allow us to tease apart changes in reported LOS, driven by the PC’s new incentives for reporting discharges in Delaware, from changes in actual LOS brought about by the PC. Model 1, shown in Table 5, limits the pre-period to 2001 only, after the state began actively auditing attendance of clients in treatment, and identifies an increase in LOS of 24 days in 2002–2003 (p = .02) and 22 days in 2004–2006 (p = .01). Because of the many potential and unmeasured differences between Delaware and Maryland, we also examined change in LOS in Delaware over time without the Maryland comparison group (Model 2). This model shows LOS increased by 16 days in 2002–2003 (p = .11) and by 20 days in 2004–2006 (p = .01).
Table 5. Regression results for length of stay (days). Model 1: pre-post with comparison group (n = 19,672); Model 2: pre-post only (n = 9,491). Reference categories in parentheses.

| Parameter | Model 1 Estimate | SE | p | Model 2 Estimate | SE | p |
| --- | --- | --- | --- | --- | --- | --- |
| Intercept | 126.29 | 3.87 | <.0001 | 86.18 | 10.82 | <.0001 |
| **State and Admission Year (Maryland / 2001)** | | | | | | |
| Delaware | −30.37 | 12.73 | 0.02 | | | |
| Admission in 2002–2003 | −7.81 | 3.20 | 0.01 | | | |
| Admission in 2004–2006 | −1.82 | 3.47 | 0.60 | | | |
| Delaware admission 2002–2003 | 24.35 | 10.83 | 0.02 | 16.62 | 10.30 | 0.11 |
| Delaware admission 2004–2006 | 22.05 | 8.46 | 0.01 | 20.26 | 7.68 | 0.01 |
| **Client Characteristics** | | | | | | |
| Female (male) | 2.48 | 2.41 | 0.30 | 0.83 | 2.14 | 0.70 |
| White (non-white) | 6.99 | 1.94 | 0.00 | 7.47 | 2.94 | 0.01 |
| Hispanic (non-Hispanic) | 7.39 | 5.08 | 0.15 | 12.24 | 7.64 | 0.11 |
| **Marital Status (single)** | | | | | | |
| Married | 4.74 | 1.96 | 0.02 | 8.43 | 2.40 | 0.00 |
| No longer married | 2.11 | 2.25 | 0.35 | 4.58 | 2.73 | 0.09 |
| **Employment Status (employed)** | | | | | | |
| Not in workforce | −10.76 | 4.17 | 0.01 | −3.06 | 2.77 | 0.27 |
| Unemployed | −14.48 | 2.93 | <.0001 | −12.45 | 3.09 | <.0001 |
| **Living Situation at admission (independent)** | | | | | | |
| Homeless | −15.23 | 6.24 | 0.01 | −27.35 | 5.82 | <.0001 |
| Dependent living | −0.88 | 3.38 | 0.79 | −3.95 | 9.41 | 0.67 |
| **Education level (less than HS graduate)** | | | | | | |
| HS education | 4.45 | 1.96 | 0.02 | 4.42 | 1.88 | 0.02 |
| Some college | 5.97 | 2.98 | 0.04 | 5.61 | 2.86 | 0.05 |
| **Clinical History** | | | | | | |
| Previous treatment for mental health (no) | −4.22 | 2.99 | 0.16 | −5.54 | 2.64 | 0.04 |
| Previous AOD treatment (no) | 5.41 | 1.40 | 0.00 | 4.95 | 1.34 | 0.00 |
| **Current Legal Involvement (none)** | | | | | | |
| Charges pending | 19.99 | 7.09 | 0.00 | 20.35 | 7.03 | 0.00 |
| Convicted | 17.48 | 5.12 | 0.00 | 19.44 | 5.25 | 0.00 |
| **Primary drug at admission (alcohol)** | | | | | | |
| Marijuana | 1.78 | 1.56 | 0.25 | 4.22 | 1.46 | 0.00 |
| Cocaine | −19.07 | 3.33 | <.0001 | −11.45 | 3.22 | 0.00 |
| Opiates | −24.36 | 3.44 | <.0001 | −14.02 | 4.44 | 0.00 |
| Other drug | 0.75 | 7.58 | 0.92 | 14.32 | 9.81 | 0.14 |
| **Frequency of use of primary drug (not in last month)** | | | | | | |
| Daily | −20.45 | 2.71 | <.0001 | −14.92 | 3.16 | <.0001 |
| Weekly | −15.39 | 2.65 | <.0001 | −8.67 | 3.68 | 0.02 |
| Monthly | −10.20 | 2.95 | 0.00 | −7.23 | 5.47 | 0.19 |

Models adjusted for clustering of clients in programs.
4. DISCUSSION
Seven of the eight programs serving public clients in Delaware began participating in the QI programs in 2004. Two of the main foci of the QI programs are to decrease WT and improve treatment continuation (McCarty et al., 2007). Interestingly, and likely due to the effects of the PC incentives in place since 2001, WT was already declining and LOS was already increasing before the introduction of the QI programs.
The continued decrease in WT and increase in LOS in Delaware from 2004 through 2006 is most likely attributable to a combination of the PC and QI, or may be the result of additional experience with the PC. In interviews, several CEOs of AOD treatment organizations in Delaware stated that the PC provided the financial motivation to attend carefully to the QI programs. These findings suggest that the PC resulted in shorter WT and longer LOS, and that the combination of the PC and QI had an even larger effect on both measures. Both the interviews and the data support the idea that the willingness to adopt some of the QI procedures was due in part to the financial incentives created by the PC. In other words, the PC may have set the occasion for programs to engage in any new, clinically sensible behavior that would help them meet their financial goals.
It is likely that a number of factors influenced improvements in LOS, and it was not possible to specify which of the many factors in operation were most responsible for the observed changes. Perhaps clients who enter treatment when they are ready stay longer in treatment. In addition, because of the financial pressure of the PC, programs were actively working to improve LOS by reminding clients of appointments and incentivizing them to attend treatment.
Because Delaware discharge data are audited by the state and Maryland data are not, the Maryland data are likely biased toward longer LOS, biasing our estimates toward the null. We can therefore be reasonably confident that LOS truly increased in Delaware, since we observed an increase relative to Maryland despite the differences in reporting incentives.
Regardless of the mechanism by which WT and LOS improved, the key point is that the PC provided the incentives and program accountability that set the stage for improvements. Ultimately clients benefit, because longer LOS is associated with improved outcomes.
4.1 Study Limitations
This analysis is based on a PC in Delaware, one of the smallest states in the nation; PCs may work differently in larger states. However, if, as in Delaware, the sponsor achieves provider buy-in, the success may be replicated in larger states or in counties. It is important to learn as much as possible about the PC implemented in Delaware, as a number of other states are considering or beginning to implement similar incentive programs.
We examined AOD treatment episode data, which can be unreliable because clients frequently discontinue treatment without a formal discharge and therefore are not available to provide required information. Fortunately, we relied almost exclusively on data collected at admission, with discharge date the one exception. Discharge date may be of better quality in Delaware because, with the implementation of the PC, the state began formal auditing of patient records for active participation, as it was used in PC payments (Kemp, 2006). This incentive did not exist in Maryland during the comparison period, and one would expect Maryland programs to be less conscientious in removing clients from their rolls. The effect of this discrepancy is to bias the between-state LOS comparison toward the null: LOS in Maryland may be recorded as longer than it really is, and our estimate of increased LOS in Delaware may be understated. Thus, although there were limitations in our ability to verify these key data, we are confident in our conclusions because any such systematic but unmeasured effect would have worked against finding a difference. Put differently, given that there was almost certainly a bias toward reporting longer LOS in Maryland, the significantly longer LOS reported in Delaware under auditing conditions suggests that the PC had a robust effect.
There are likely many differences between Delaware and Maryland SA treatment programs and although we matched clients based on their demographic characteristics, potential differences remain unmeasured. Although using a single state as a comparison is not an ideal control group, the magnitude of the effect combined with interview findings suggests the PC did influence WT and LOS. Sensitivity analyses also suggest an association between the PC and shorter WT and longer LOS.
4.2 Conclusion
Although financial incentives have been suggested as a way to improve quality of medical care (Institute of Medicine, 2001, 2006), few studies have identified improvements associated with financial incentives (Christianson et al., 2008; Mehrotra et al., 2009; Petersen et al., 2006; Rosenthal & Frank, 2006). Building on a previous evaluation of the Delaware PC (McLellan et al., 2008), this study identified improvements in treatment: shorter waiting time and increased LOS following implementation of the PC. The focus of AOD treatment programs may explain some of the success of the Delaware PC in the absence of similar findings in general medical or behavioral health settings. Financial incentives for hospitals and medical groups often cover a number of conditions, while outpatient AOD treatment services are more focused, dealing primarily with addiction to alcohol and drugs. In addition, the design of the Delaware PC, with relatively immediate (monthly) and financially significant consequences to the organization for clinically relevant changes in patient behavior, is likely important in the success of this PC. Future research should examine more closely the design of successful PC systems.
This study suggests there may be a synergistic effect between performance contracting and quality improvement programs, and implementing both types of programs in concert should be considered. It will be important to control for, or estimate the effect of, other QI programs in place at the same time as a PC. In addition, future PC programs should be designed with a control group in mind from the beginning to more accurately estimate the effect of the PC.
The move to the PC was positive for the purchaser: more individuals received services for a 5% increase in expenditures. Specifically, there were more than 650 additional admissions in 2006 as compared to 2001. Although programs could have endeavored to improve utilization, reduce WT and increase LOS without a PC, this had not occurred in Delaware prior to the PC. The Delaware PC set new expectations for programs, held programs accountable and provided leeway in how to achieve those expectations. Programs responded to the PC in different ways: by participating in QI, by incentivizing clients to attend treatment, and by incentivizing clinicians. By creating accountability and a market for innovation where previously there were no explicit financial incentives, the PC and QI programs were associated with longer length of stay and shorter waiting time for treatment. We cannot determine which aspects of the programs (for example, the financial incentives, additional attention and monitoring, or training in quality improvement) resulted in these improvements; future research should examine these aspects in more detail.
Acknowledgments
This research was supported by NIDA Grant 5F31DA22822-2 and NIDA Grant P50 DA010233. Preliminary findings were presented at the Addiction Health Services Research meetings, October 28–30, 2009, in San Francisco, CA, and October 25–27, 2010, in Lexington, KY, and at the AcademyHealth Annual Research Meeting, June 28–30, 2009, in Chicago, IL. We thank Meredith Rosenthal for helpful comments on this paper. Our thanks are extended to Jack Kemp, former Director of the Division of Substance Abuse and Mental Health in Delaware, the Delaware Division of Substance Abuse and Mental Health, and the Maryland Alcohol and Drug Abuse Administration for providing the data.
References
- An LC, Bluhm JH, Foldes SS, Alesci NL, Klatt CM, Center BA, Manley MW. A randomized trial of a pay-for-performance program targeting clinician referral to a state tobacco quitline. Archives of Internal Medicine. 2008;168(18):1993–1999. doi:10.1001/archinte.168.18.1993.
- Beaulieu ND, Horrigan DR. Putting smart money to work for quality improvement. Health Services Research. 2005;40(5):1318–1334. doi:10.1111/j.1475-6773.2005.00414.x.
- Berthiaume J, Chung R, Ryskina K, Walsh J, Legorreta A. Aligning financial incentives with quality of care in the hospital setting. Journal of Healthcare Quality. 2006;28(2):36–44. doi:10.1111/j.1945-1474.2006.tb00601.x.
- Berthiaume JT, Tyler PA, Ng-Osorio J, LaBresh KA. Aligning financial incentives with “get with the guidelines” to improve cardiovascular care. American Journal of Managed Care. 2004;10(7):501–504.
- Bremer RW, Scholle SH, Keyser D, Houtsinger JVK, Pincus HA. Pay for performance in behavioral health. Psychiatric Services. 2008;59(12):1419–1429. doi:10.1176/ps.2008.59.12.1419.
- Campbell S, Reeves D, Kontopantelis E, Middleton E, Sibbald B, Roland M. Quality of primary care in England with the introduction of pay for performance. New England Journal of Medicine. 2007;357(2):181–190. doi:10.1056/NEJMsr065990.
- Centers for Medicare and Medicaid Services. 2003. Available at: http://www.cms.hhs.gov/HospitalQualityInits/35_HospitalPremier.asp. Accessed March 20, 2009.
- Centers for Medicare and Medicaid Services. Report to Congress: Plan to Implement a Medicare Hospital Value-Based Purchasing Program. 2007.
- Christianson JB, Leatherman S, Sutherland K. Lessons from evaluations of purchaser pay-for-performance programs: a review of the evidence. Medical Care Research and Review. 2008;65(6):5S–35S. doi:10.1177/1077558708324236.
- Commons M, McGuire TG, Riordan MH. Performance contracting for substance abuse treatment. Health Services Research. 1997;32(5):631–650.
- Doran T, Fullwood C, Gravelle H, Reeves D, Kontopantelis E, Hiroeh U, Roland M. Pay-for-performance programs in family practices in the United Kingdom. New England Journal of Medicine. 2006;355(4):375–384. doi:10.1056/NEJMsa055505.
- Felt-Lisk S, Gimm G, Peterson S. Making pay-for-performance work in Medicaid. Health Affairs (Web Exclusive). 2007;26:w516–w527. doi:10.1377/hlthaff.26.4.w516.
- Greene RA, Beckman H, Chamberlain J, Partridge G, Miller M, Burden D, Kerr J. Increasing adherence to a community-based guideline for acute sinusitis through education, physician profiling, and financial incentives. American Journal of Managed Care. 2004;10(10):670–678.
- Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academies Press; 2001.
- Institute of Medicine. Improving the Quality of Health Care for Mental and Substance Use Conditions. Washington, D.C.: National Academies Press; 2006.
- Kahn CN III, Ault T, Isenstein H, Peotetz L, Van Gelder S. Snapshot of hospital quality reporting and pay-for-performance under Medicare. Health Affairs. 2006;25:148–162. doi:10.1377/hlthaff.25.1.148.
- Kemp J. Personal communication. 2006.
- Kuhmerker K, Hartman T. Pay-for-Performance in State Medicaid Programs: A Survey of State Medicaid Directors and Programs. The Commonwealth Fund; 2007.
- Levin-Scherz J, DeVita N, Timbie J. Impact of pay-for-performance contracts and network registry on diabetes and asthma HEDIS measures in an integrated delivery network. Medical Care Research and Review. 2006;63(1):14S–28S. doi:10.1177/1077558705284057.
- Lu M, Ma CT. Financial incentives and gaming in alcohol treatment. Inquiry. 2006;43(1):34–53. doi:10.5034/inquiryjrnl_43.1.34.
- Lu M, Ma CT, Yuan L. Risk selection and matching in performance-based contracting. Health Economics. 2003;12(5):339–354. doi:10.1002/hec.734.
- McCarty D, Gustafson D, Capoccia V, Cotter F. Improving care for the treatment of alcohol and drug disorders. Journal of Behavioral Health Services & Research. 2009;36(1):52–60. doi:10.1007/s11414-008-9108-4.
- McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, Cotter F. The Network for the Improvement of Addiction Treatment (NIATx): enhancing access and retention. Drug and Alcohol Dependence. 2007;88(2–3):138–145. doi:10.1016/j.drugalcdep.2006.10.009.
- McLellan AT, Kemp J, Brooks A, Carise D. Improving public addiction treatment through performance contracting: the Delaware experiment. Health Policy. 2008;87(3):296–308. doi:10.1016/j.healthpol.2008.01.010.
- Mehrotra A, Damberg CL, Sorbero MES, Teleki SS. Pay for performance in the hospital setting: what is the state of the evidence? American Journal of Medical Quality. 2009;24(1):19–28. doi:10.1177/1062860608326634.
- Nahra TA, Reiter KL, Hirth RA, Shermer JE, Wheeler JR. Cost-effectiveness of hospital pay-for-performance incentives. Medical Care Research and Review. 2006;63(1 Suppl):49S–72S. doi:10.1177/1077558705283629.
- Pearson S, Schneider E, Kleinman K, Coltin K, Singer J. The impact of pay-for-performance on health care quality in Massachusetts, 2001–2003. Health Affairs. 2008;27(4):1167–1176. doi:10.1377/hlthaff.27.4.1167.
- Petersen LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Annals of Internal Medicine. 2006;145(4):265–272. doi:10.7326/0003-4819-145-4-200608150-00006.
- Reiter K, Nahra T, Alexander J, Wheeler J. Hospital responses to pay-for-performance incentives. Health Services Management Research. 2006;19(2):123–134. doi:10.1258/095148406776829086.
- Rosenthal MB, Frank RG. What is the empirical basis for paying for quality in health care? Medical Care Research and Review. 2006;63(2):135–157. doi:10.1177/1077558705285291.
- Rosenthal MB, Landon BE, Normand SLT, Frank RG, Epstein AM. Pay for performance in commercial HMOs. New England Journal of Medicine. 2006;355(18):1895–1902. doi:10.1056/NEJMsa063682.
- Ryan AM. Effects of the Premier Hospital Quality Incentive Demonstration on Medicare patient mortality and cost. Health Services Research. 2009;44:821–842.
- SAS Institute Inc. SAS version 9.2. Cary, NC; 2009.
- Sautter KM, Bokhour BG, White B, Young GJ, Burgess JF, Berlowitz D, Wheeler JRC. The early experience of a hospital-based pay-for-performance program. Journal of Healthcare Management. 2007;52(2):95–107.
- Shen Y. Selection incentives in a performance-based contracting system. Health Services Research. 2003;38(2):535–552. doi:10.1111/1475-6773.00132.
- Whyte BS, Ansley R. Pay for performance improves rural EMS quality: investment in prehospital care. Prehospital Emergency Care. 2008;12(4):495–497. doi:10.1080/10903120802290810.