Author manuscript; available in PMC: 2018 Oct 1.
Published in final edited form as: J Subst Abuse Treat. 2017 Aug 2;81:44–52. doi: 10.1016/j.jsat.2017.07.014

Building Capacity for Continuous Quality Improvement (CQI): A Pilot Study

Sarah B Hunter 1, Carolyn M Rutter 1, Allison J Ober 1, Marika S Booth 1
PMCID: PMC5599160  NIHMSID: NIHMS898145  PMID: 28847454

Abstract

Background and objective

Little is known about the feasibility, effectiveness, and sustainability of CQI approaches in substance use disorder treatment settings.

Methods

In the initial phase of this study, eight programs were randomly assigned to receive a CQI intervention or to a waitlist control condition to obtain preliminary information about potential effectiveness. In the second phase, the initially assigned control programs received the CQI intervention to gain additional information about intervention feasibility while sustainability was explored among the initially assigned intervention programs.

Results and conclusions

Although CQI was feasible and sustainable, demonstrating its effectiveness using administrative data was challenging, suggesting the need to better align performance measurement systems with CQI efforts. Further, although the majority of staff were enthusiastic about utilizing this approach and reported provider and patient benefits, many noted that dedicated time was needed in order to implement and sustain it.

Keywords: continuous quality improvement, substance use treatment, mixed qualitative and quantitative methods, pilot study

1. Introduction

Substance use disorders (SUDs) are a significant health problem. An estimated 21.7 million individuals in the United States aged 12 or older needed substance use treatment in the past year (Center for Behavioral Health Statistics and Quality, 2016). Substance use disorders have numerous health, legal, and social consequences; the National Institute on Drug Abuse (NIDA) estimates substance use disorders cost more than $700 billion annually in health care, crime, and productivity costs (National Institute on Drug Abuse, 2015).

In 2006, the Institute of Medicine (IOM) recommended the institution of quality improvement strategies for improving SUD care (Institute of Medicine, 2006). Continuous quality improvement (CQI) is one potential strategy for sustainable quality improvement. Definitions of CQI vary, but three minimum features are systematic, data-guided activities; design with local conditions in mind; and iterative development and testing (Rubenstein et al., 2014). CQI could be an effective strategy for improving SUD treatment because examining data on an ongoing basis could help motivate program staff to try new strategies to improve outcomes, and tailoring these approaches to local conditions could increase staff buy-in and engagement in the process (Roosa, Scripa, Zastowny, & Ford, 2011; Schierhout et al., 2013).

Although CQI has been a successful approach to improving quality and outcomes in the U.S. manufacturing sector, where it was initially developed, results have been mixed when it is practiced in health care. Systematic reviews report varying levels of effectiveness (e.g., Schouten et al., 2008; Nadeem et al., 2013). Some researchers have reported that CQI initiatives fail about half of the time (Humphreys et al., 2012). Furthermore, it is often unclear what CQI entails. One review examined studies of Plan-Do-Study-Act (PDSA) cycles and found that few studies adhered to the method's key features: an iterative cycle framework, prediction-based and small-scale testing, use of data over time, and documentation (Taylor et al., 2014).

Moreover, relatively little is known about CQI effectiveness in nontraditional health care settings, such as community-based SUD treatment. One exception is the well-studied Network for the Improvement of Addiction Treatment (NIATx) approach (Gustafson et al., 2013; Hoffman, Ford, Choi, Gustafson, & McCarty, 2008; McCarty et al., 2007; Quanbeck et al., 2011). NIATx sought to improve specific treatment processes (i.e., wait times, program admissions, and retention) by using five principles to support organizational change. The five NIATx principles include PDSA cycles along with the following: understand and involve the customer; fix key problems; pick a powerful change leader; and get ideas from outside the organization (Hoffman et al., 2012). The NIATx approach was shown to reduce wait times and increase retention but not to increase admissions, indicating some success in utilizing CQI methods to improve treatment processes (Gustafson et al., 2013; McCarty et al., 2007). However, the NIATx effort required data tracking on a number of different processes, which many participating agencies reported as burdensome (Wisdom et al., 2006), indicating concerns about feasibility. Also, little is known about how CQI influences longer-term provider and patient outcomes in SUD treatment settings, and how often CQI is sustained once a researcher-led intervention ends.

Given the lack of understanding about CQI feasibility, effectiveness, and sustainability in community-based SUD treatment settings under less intensive conditions than NIATx, this study set out to pilot test a more modest CQI approach. Specifically, this CQI approach did not attempt all five NIATx principles, but rather focused on one: the use of PDSA cycles. Additionally, this study assessed staff and patient outcomes, rather than focusing solely on treatment process indicators. Also, this CQI approach engaged multiple levels within the SUD treatment setting, including those closest to the point of care (i.e., clinical staff), to select an area for improvement rather than focusing on pre-determined treatment process changes outlined by NIATx (e.g., reducing wait time). This approach may be particularly relevant because previous studies have noted the importance of adapting CQI efforts to local priorities to enhance effectiveness (Schierhout et al., 2013).

Our study was guided by a stage-based treatment development approach (Rounsaville, Carroll, & Onken, 2001). The CQI intervention had been previously developed (Chinman, Hunter & Ebener, 2012) but had not been rigorously studied. Key research questions were whether this CQI approach was feasible and sustainable and whether preliminary evidence of its effectiveness could be generated. To determine feasibility, CQI training participation, the extent of CQI implementation, and staff perceptions of the approach were assessed. To measure preliminary effectiveness, a group-randomized pilot study was conducted in which programs that received one year of CQI training and technical assistance were compared to programs that did not receive the intervention. Study hypotheses were that the CQI intervention would lead to staff- and patient-level improvements. CQI sustainability was explored by examining whether CQI activities were continued one year following the end of the CQI intervention.

2. Material and Methods

2.1. Design

Figure 1 outlines the study design. Eight treatment programs were randomized to receive either an immediate or delayed CQI intervention that lasted for one year (Hunter, Ober, Paddock, Hunt, & Levan, 2014). Randomization was stratified by service modality, with two residential and two outpatient programs assigned to the immediate intervention (labeled "Cohort 1") and two residential and two outpatient programs assigned to the wait-list control condition (labeled "Cohort 2"). Preliminary effectiveness was examined by comparing Cohort 1 and Cohort 2 outcomes after the first intervention year. Feasibility was examined utilizing data from both cohorts pre- and post-intervention. Sustainability was explored by examining data collected from Cohort 1 participants one year after the intervention had ended.

Figure 1. Study Design.


Note: A modified version of this figure was first published under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (see Hunter et al., 2014).

2.2. Study setting

The study setting was a large non-profit SUD treatment agency in Los Angeles County that received a mix of public and private funding. The programs served a diverse patient population (i.e., 60% male; 26% White, 29% African American, 40% Hispanic, and 5% Asian and/or other), with each program treating an average of 197 residential patients or 320 outpatients annually. Staff from eight of the agency’s largest SUD treatment programs were asked to participate in the study, representing four residential and four outpatient programs.

2.3. Participants

As part of the intervention, two staff from each program were asked to attend monthly CQI meetings held at the agency’s headquarters, one member representing administrative staff (program director and/or clinical supervisor) and one clinical team member (i.e., counselor). Following randomization, program directors were asked to select a counselor who had been employed at least one year to attend the meetings. The size of the clinical staff at the eight sites ranged from 2 – 12 (median = 7).

2.4. CQI intervention

This intervention focused primarily on the use of Plan-Do-Study-Act (PDSA) cycles for quality improvement. The intervention incorporated an empowerment evaluation approach (Fetterman & Wandersman, 2005), where clinical program staff primarily led the development and execution of the “CQI Actions”, that is, specific improvement strategies, rather than organizational leadership or outside entities. The CQI actions were based on a systematic assessment of process and outcome data as part of the “Plan” phase of PDSA. Participating staff were supported by monthly in-person meetings facilitated by study investigators (SBH and AJO) and agency leadership (i.e., the Quality Assurance Coordinator).

The process and outcome data included admission rates, patient length of stay, patient satisfaction, and patient discharge status, along with agency or program funder benchmarks (e.g., 80% of patients will stay in treatment at least 30 days). The “Plan” phase consisted of two 90-minute monthly meetings to discuss the process and outcome data from the past two fiscal years. In these meetings, program staff reviewed program data and compared it to other agency programs and to established benchmarks to identify areas in which the program was performing better than, at, or worse than expected, guided by worksheets that prompted staff to engage in these activities.

At the third monthly meeting, staff were instructed to identify a process or outcome indicator to address with an improvement strategy (labeled a “CQI Action”). The identification process included an assessment of how feasible it would be to execute a change in their program over the next 90 days. Next, staff were instructed to specify key CQI Action tasks, including who would be responsible for the tasks, what resources would be needed, and the timeline. Extra meeting time was allocated for program staff to describe their CQI Action plan to the other meeting attendees before implementation (i.e., 180 minutes was allocated for this meeting).

Following a decision to execute the CQI Action, participating staff continued to meet monthly with the CQI support team (study PIs and agency leadership) and staff from other participating programs to discuss progress, share lessons learned, and receive guidance on the next PDSA phases. During these monthly meetings, staff from each program were asked to report on what changes they had attempted, how staff and/or patients had responded to the changes, what their next steps were, and whether they needed further assistance.

During each meeting, staff had access to worksheets from the CQI implementation toolkit (Hunter, Ebener, Chinman, Ober, & Huang, 2015) that helped to document their program’s progress in each PDSA phase. Once staff documented completion of the “Act” phase, they were congratulated and asked to consider developing a new CQI Action using the process and outcome data they had initially reviewed or more recent information about program performance. More information about the CQI Actions is documented elsewhere (Hunt, Hunter, & Levan, 2017).

A timeline outlining the initiation and end of the intervention by cohort and alignment with study data is provided in Figure 2. Following baseline assessments and randomization, the first cohort received the intervention. A year later, Cohort 2 received the intervention. Staff data were collected annually, immediately prior to the start of the intervention and at the end of intervention delivery. Length of stay data were examined for a period prior to the study and compared to the intervention period when the CQI Actions were taking place. Discharge status was available by agency fiscal year (i.e., July–June).

Figure 2. Study timeline including key data collection and intervention activities.


Note: Shaded portions refer to the admission periods for patients utilized for the length of stay (LOS) outcome.

2.5. Measures

We first present the measures related to the feasibility and sustainability study aims and then the staff and patient outcome measures related to the effectiveness aim.

2.5.1. Feasibility

2.5.1.1. CQI meeting attendance

Attendance at the monthly in-person CQI meetings was monitored. For programs receiving the intervention, we tracked whether an administrative representative and at least one clinical staff member from each program attended the monthly CQI meetings.

2.5.1.2. Number of PDSA cycles accomplished

Investigators coded how many PDSA phases (1–4, representing the Plan, Do, Study, and Act phases, respectively) and how many total PDSA cycles program staff completed. This was accomplished by reviewing the worksheets that staff completed as part of the CQI intervention. The information was verified using field notes from the monthly meetings where staff reported on progress and from the semi-structured interviews completed by trained field interview staff at the 12- and 24-month time points, which asked about status with regard to the PDSA cycle.

2.5.1.3. Staff perceptions of CQI

As part of the interviews, staff rated their enthusiasm about CQI participation on a one- to ten-point scale, where “1” represented “not at all enthusiastic” and “10” represented “very enthusiastic”. After the intervention, staff rated how difficult it had been personally to work on the program’s CQI Action, with “1” representing “easy” and “10” representing “most difficult”. Following these ratings, an open-ended question asked respondents to explain the reason for their rating. After the intervention, participants were also asked what worked well, whether they thought CQI had an impact in their programs, and what key issues or barriers arose.

Following the interviews, staff were prompted to complete a web-based survey. The survey questions included items from the Innovation Attribute scale (Moore & Benbasat, 1991), which was designed to measure perceptions of adopting a new innovation. The following subscales were used: relative advantage over usual practices (5 items; α = 0.90; e.g., “Using CQI improves the quality of the work I do”); compatibility (3 items; α = 0.86; e.g., “Using CQI is compatible with all aspects of my work”); ease of use (4 items; α = 0.84; e.g., “CQI is clear and understandable”); and observability/demonstrability (4 items; α = 0.79; e.g., “The results of using CQI are apparent to me”). These subscales are consistent with intervention feasibility definitions (e.g., see Proctor et al. (2011)). Response options ranged from “1” (representing “extremely disagree”) to “7” (representing “extremely agree”), with higher values indicating perceptions that CQI was more advantageous than existing practices, compatible with the agency, easy to use, and that its impact was readily observable/demonstrable.

2.5.2. CQI sustainability

At 24-months, Cohort 1 staff were asked in the interviews whether the different components of the CQI intervention had continued, such as maintenance of their CQI Action, the development of new CQI Actions, and CQI meeting participation.

2.5.3. Staff outcomes

2.5.3.1. Staff attrition

Staff employment rates at each of the programs were monitored to track attrition rates throughout the study using the agency employment records. Prior to data collection and randomization, staff rosters at each of the eight selected programs were obtained. Attrition was calculated as the percentage of workers employed by the program at pre-intervention who were no longer on the rosters at 12-months.
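As a concrete illustration, the attrition calculation described above could be carried out as in the following minimal sketch; the roster identifiers are hypothetical and not taken from the agency's records.

```python
# Minimal sketch of the staff attrition calculation; roster contents are hypothetical.
baseline_roster = {"s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"}  # staff at pre-intervention
followup_roster = {"s01", "s02", "s04", "s06", "s07", "s08"}                # staff still employed at 12 months

departed = baseline_roster - followup_roster
attrition_rate = 100 * len(departed) / len(baseline_roster)
print(f"Attrition: {attrition_rate:.1f}%")  # 25.0% in this toy example
```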

2.5.3.2. Job satisfaction

The job satisfaction scale consisted of six items that were part of the Texas Christian University’s Survey of Organizational Functioning (TCU SOF) instrument (Lehman, Greener, & Simpson, 2002). An example item is “You are satisfied with your present job”. Response options ranged from “1” for “Strongly Disagree” to “5” for “Strongly Agree”. Average scores were multiplied by 10 to rescale final scores from 10 to 50 (e.g., an average response of 2.6 became “26”).
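For example, the scoring described above (item average multiplied by 10) could be computed as in this sketch; the item responses shown are illustrative only.

```python
# Sketch of the job satisfaction scoring: average the six items (rated 1-5) and rescale by 10.
item_responses = [3, 2, 3, 2, 3, 3]                 # hypothetical responses to the six TCU SOF items
average = sum(item_responses) / len(item_responses)
job_satisfaction_score = average * 10               # final score ranges from 10 to 50
print(round(job_satisfaction_score, 1))             # average of about 2.67 -> 26.7
```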

2.5.3.3. Morale

Two subscales of the Maslach Burnout Inventory – Human Services Survey (Maslach & Jackson, 1996) were used to assess job morale: the emotional exhaustion subscale and the personal accomplishment subscale. The emotional exhaustion scale consisted of nine items (e.g., “I feel emotionally drained from my work”). The personal accomplishment scale consisted of eight items (e.g., “I deal very effectively with the problems of my clients”). Response options ranged from “0” representing “never” to “6” representing “every day”. Responses were summed and scores were categorized into low, average, and high levels based on scale norms (i.e., for the exhaustion scale, Low <= 16, Average = 17–26, High >= 27; for the accomplishment scale, Low <= 31, Average = 32–38, High >= 39). Low morale is characterized by high scores on the exhaustion scale and low scores on the personal accomplishment scale.
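A sketch of this categorization logic, using the cutoffs cited above, is shown below; the respondent totals are illustrative.

```python
# Categorize summed subscale scores into low/average/high per the norms above.
def exhaustion_level(total):
    if total <= 16:
        return "low"
    if total <= 26:
        return "average"
    return "high"

def accomplishment_level(total):
    if total <= 31:
        return "low"
    if total <= 38:
        return "average"
    return "high"

# Example respondent: low exhaustion and high accomplishment, i.e., not a low-morale profile.
print(exhaustion_level(14), accomplishment_level(40))
```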

2.5.4. Patient outcomes

2.5.4.1. Length of stay

Patient length of stay refers to the time from treatment admission to the date of last service. The proportions of clients staying 3 days or more, 30 days or more, and 90 days or more were examined. Three days or more refers to patients who were successfully engaged following admission. Thirty- and 90-day stays refer to measures that were aligned with reporting requirements for both the outpatient and residential programs. These criteria were used rather than mean length of stay because the average value may be subject to bias due to censoring (i.e., some patients had not completed treatment by the end of the observation period).
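The retention proportions could be derived from admission and last-service dates as in the following sketch; the records and their layout are hypothetical.

```python
from datetime import date

# Hypothetical (admission date, date of last service) pairs for one program.
episodes = [
    (date(2013, 2, 4), date(2013, 5, 20)),
    (date(2013, 3, 1), date(2013, 3, 2)),
    (date(2013, 4, 10), date(2013, 7, 30)),
]

lengths_of_stay = [(last - admit).days for admit, last in episodes]

# Proportion of patients meeting each retention criterion.
for threshold in (3, 30, 90):
    proportion = sum(los >= threshold for los in lengths_of_stay) / len(lengths_of_stay)
    print(f"{threshold}+ days: {proportion:.0%}")
```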

2.5.4.2. Treatment completion status

Treatment completion status at discharge was coded by each patient’s primary treatment counselor. Positive treatment compliance was derived by summing the proportion of patients who successfully completed the program and the proportion of patients who left the program prior to completion with satisfactory progress. These data were aggregated at the program level each fiscal year.

2.6. Data collection procedures

2.6.1. Administrative data

Data were collected from the agency on staffing, patient length of stay and treatment completion status across pre-specified 12-month intervals. The staffing information was aggregated and shared with the research team before the intervention, and at the 12-month time point.

2.6.2. Staff interviews

Staff selected to participate in the CQI intervention were interviewed by trained field interview staff by phone at baseline, 12- and 24-months. The baseline interviews were conducted before randomization.

2.6.3. Staff surveys

Program administrative staff and clinical staff were asked to complete a web-based survey at three time-points: baseline, 12-months, and 24-months. The baseline surveys were conducted before randomization.

Following the three data collection periods, participating staff received $25. Study procedures were approved by the research organization’s Institutional Review Board prior to data collection.

2.7. Analytic strategy

2.7.1. Staff attrition

Attrition was evaluated using a difference-in-differences approach, comparing changes in attrition between cohorts from baseline to 12 months using t-tests.

2.7.2. Qualitative data

All interviews were recorded by professional field interview staff and transcribed. Next, two members of the research team independently reviewed each transcript. The researchers derived themes that were common within and across the interviews and resolved discrepancies before summary analysis was conducted. The main themes were based on the interview questions and a priori study hypotheses, and sub-themes were derived from participant responses. Definitions of the main and sub-theme codes were documented.

2.7.3. Survey data

Responses from staff who participated in the CQI monthly meetings were examined. CQI feasibility was evaluated using the difference between pre- and post-intervention responses across both cohorts. CQI effectiveness was evaluated using a difference-in-differences approach, comparing changes in job satisfaction responses from baseline to 12 months by cohort using t-tests. The proportions of responses in the different job morale categories were compared by cohort over time.
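As an illustration of the difference-in-differences comparison used for the staff survey measures (and analogously for attrition in 2.7.1), the sketch below uses hypothetical change scores and SciPy's two-sample t-test; none of the values are study data.

```python
import numpy as np
from scipy import stats

# Hypothetical baseline-to-12-month change scores in job satisfaction (not study data).
cohort1_change = np.array([-4, 0, -3, -2, -5, -1])   # immediate-intervention programs
cohort2_change = np.array([-6, -2, -4, -1, -3, -2])  # wait-list control programs

# The difference in mean change between cohorts is the difference-in-differences estimate;
# a two-sample t-test assesses whether it differs from zero.
diff_in_diff = cohort1_change.mean() - cohort2_change.mean()
t_stat, p_value = stats.ttest_ind(cohort1_change, cohort2_change)
print(diff_in_diff, t_stat, p_value)
```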

2.7.4. Patient length of stay

Length of stay was calculated using data from two six-month time periods corresponding to: 1) a pre-intervention period (February–August 2012); and 2) a first intervention period when CQI Actions were taking place in Cohort 1 (February–August 2013). Chi-square tests were used.
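For example, the chi-square comparison of the 30-day retention proportion between periods could be set up as below; the counts are illustrative, chosen only to be roughly consistent with the Cohort 1 figures later reported in Table 3.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows are periods, columns are (met 30-day criterion, did not meet it).
pre_period = [307, 82]            # ~78.9% of 389 admissions
intervention_period = [324, 79]   # ~80.4% of 403 admissions

chi2, p_value, dof, expected = chi2_contingency([pre_period, intervention_period])
print(chi2, p_value)
```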

2.7.5. Patient treatment completion status

The percentage of patients with positive compliance was compared across cohorts using chi-square tests. These data were available by agency fiscal year (July–June).

3. Results

3.1. Programs and participants

Representatives from all eight programs participated in interviews at baseline, 12, and 24 months. Among the 24 administrative and clinical staff employed in the four Cohort 1 programs at pre-intervention, 9 (37.5%) participated in CQI meetings and interviews (i.e., 4 program directors and 5 clinical staff members). Among the 33 staff employed in Cohort 2 programs at 12 months, 9 (27.2%) participated in CQI meetings and interviews (i.e., 4 program directors and 5 clinical staff members). One Cohort 2 administrative staff member left during Year 2, which limited the sample of respondents with repeated assessments.

Fourteen of the 18 staff who participated in the CQI intervention completed the pre- and post-surveys, for a 78% response rate. There were no statistically significant differences in response rates between cohorts.

3.2. Intervention feasibility

Assessment included CQI meeting participation, PDSA completion rates and information from interviews and surveys. Illustrative quotes related to feasibility are displayed in Table 1.

Table 1.

Illustrative quotes from respondents regarding feasibility, impact and sustainability

Theme Quote
Feasibility: Enthusiasm “… in the beginning, it was like, what did we get ourselves into. But, as each month went by, and it was explained to us easily, it became a little bit easier for us to understand what this was all about.”
Feasibility: Ease/Difficulty “…It was just giving it a chance. And [the CQI facilitators] broke it down to A, B, C, steps for us. It wasn’t a bunch of big words, talking at you, a bunch of graphics, pretty colors. It was, here it is. Very plain and simple. Start here, finish here. Write it down. Look at it. Analyze it and move forward. It was very simple.”
Feasibility: Facilitators
 Staff and Leadership Support and Buy-in “Getting the complete and total buy in from program directors. Directors having the ability to disseminate to the staff … once everyone fully understood that it wasn’t just extra work, possibly decrease some things, then everybody got involved.”
 Format and Technical Assistance “This [CQI project] gave a guideline with the follow-up meetings and the communication and working with other team members, or sites, that were on the project it allowed them to maintain focus on the change that they were trying to make. That was the most helpful. A lot of times we put stuff down on paper, and it doesn’t always pan out like that. So, to be able to make those adjustments, it had a lot to do with the feedback from the assistance we received from [the CQI facilitators] and other team members on the project.”
 Team Work “What worked well is when we got all the staff involved. Not just me. I happened to get involved. I think it works better when you come up with a CQI action as the whole team. Not just what do I think, or me and [another staff member] think, but what does everybody think.”
 Understanding of Patient Benefit “It might sound pretty funny, but I feel awesome. I truly do because, I got to do something that not only is going to change my clients’ outcome of their recovery, but it’s something that my company is willing to implement company-wide, and I get to be a part of that. That’s pretty awesome for me.”
Feasibility: Barriers
 Time “In this particular field, it’s hard to make the time for it. Even though it was once a month, it was still an hour and a half out of the day where we could be running a group, I could be meeting with clients, I could be doing intakes or writing notes. Meeting with my clinical director and talking about it again, everything takes away from the client. That’s the bottom line.”
 Staff Resistance “We are overwhelmed with what we’re already doing. Too much change is coming on. We’re overwhelmed. We’re a little resistant to change right now. I think that we just need to stay afloat, keep on working on the [CQI Action], and then get feedback.”
 Resources (other than time) “Always fiscal challenges that prevent from moving as quickly as you’d like. Imagine fiscal challenges will always be there. The process of things that needs to happen because we’re part of a huge organization. Steps that need to happen. Takes longer than you would like.”
Perceptions of CQI Impact
 Impact on Program/Staff “From what I saw, staff awareness. Bringing them in the loop of what their services and how their services are translated financially. Gave them opportunity to see where they were at. Staff awareness.”
 Impact on Patient Outcomes “It helped with retention … I could stay in touch with those clients better and they could get to know me better and gain some trust with me and could retain them.”
“Improvement in patient satisfaction. Satisfied patients. Patients are getting the services they need, working with counselors. They’re being treated well. They’re lighting up while they’re here.”
 Created Sustainable Process or Action “We ask questions like “is there anything that you feel can be done different?”. We use the CQI techniques that we have learned from [CQI facilitators], and we have adapted it in-house; getting others input, seeing questions that we may have wanted to ask/other topics that we might have wanted to bring up for the [CQI Action] and how we can improve it.”
Sustainability: Facilitators
 Ongoing Training and Reinforcement “When we have turnover, maybe do a training again … maybe a refresher (for continuing staff). Constant refreshing. Similar to retail staff or hospital staff where they have mandatory training.”
“Probably the PDSA cycles need to be reinforced with the staff-the positive results. If there’s more pressure from the corporate office, follow-through. We have so many projects we have to work on, so it is like which ones do I have to focus on. I think it is possible to maintain CQI at our agency, we just need a little extra pushing. ”
 Maintain Staff Support and Buy-in “It’s a matter of getting consensus with the staff and coming to an agreement that this is what we need to do and this is what we’re going to do. And having the staff buy in it. It takes teamwork.”
 Make Part of Routine Procedures “Now, it’s [the CQI action] automatic. When you start to overthink it, there’s so much that comes in that you don’t start. “We could do this”, “we could do that”, and “we can do”. Well let’s just do. And that’s what CQI helped us with. We do a lot of brainstorming. But then there’s a solution and there’s a plan, do, act. We’ve really adapted to that.”
 Adequate Staffing “It’s [maintaining the CQI action] possible. When we get more staff, absolutely.”
Sustainability: Barriers
 Time “Just time. It’s time. If we’re going to implement a project, I need to pull in my staff. If I want them to be productive, then I’m taking them away from billable service. It’s dedicated time to the project.”
 Staff Turnover “Staff turnover. Nothing to do about those – some people want to go back to school. Have to bring someone in new, get them acclimated. There’s always going to be medical leaves, family emergencies. There’s nothing we can do about it.”

3.2.1. Participant attendance

As planned, typically two staff members from each program attended the monthly CQI meetings. Ten group meetings were held over the course of the 12-month intervention period. The average number of meetings attended by participants was 6.62 (SD = 3.07), with no differences found between Cohorts 1 and 2.

3.2.2. Number of PDSA cycles completed

Seven of the eight programs completed at least one PDSA cycle during the intervention period, with half of the programs completing two PDSA cycles. The mean number of PDSA phases completed, where one cycle consists of four phases (i.e., Plan, Do, Study, and Act), was 6.63 (minimum = 2, median = 8, maximum = 9). The one site that did not complete a PDSA cycle had its Program Director turn over twice during the intervention period.

3.2.3. Perceptions of CQI

Prior to the intervention, enthusiasm was relatively high (M = 8.50; SD = 1.85). Following the intervention, enthusiasm decreased but remained relatively high, significantly above the scale mid-point (M = 7.93; SD = 2.24). Regarding perceptions of difficulty, most participants reported that implementing CQI was relatively easy (M = 4.09, SD = 2.30). Table 2 shows pre- and post-intervention values on the Innovation Attribute measure. Pre-intervention perceptions were somewhat positive, indicating CQI was perceived as more advantageous, compatible, easy to use, and demonstrable. Post-intervention, perceptions were statistically significantly different from pre-intervention on all scales, demonstrating more positive attitudes. On average, perceptions increased by one point on the seven-point scale. The greatest increase was seen on the results demonstrability subscale.

Table 2.

Staff perceptions of CQI’s innovation attributes pre- and post-intervention (n = 14)

Outcome Pre-Intervention M (SD) Post-Intervention M (SD) Mean Difference (SD) p-value
Innovation Attributes (overall) 4.80 (0.68) 5.80 (0.72) 1.01 (0.90) 0.001
 Relative Advantage 4.81 (0.73) 5.63 (0.86) 0.81 (0.95) 0.007
 Compatibility 4.79 (0.83) 5.64 (0.91) 0.86 (1.01) 0.007
 Ease of Use 4.73 (0.75) 5.93 (0.94) 1.19 (1.10) 0.001
 Results Demonstrability 4.81 (0.74) 6.12 (0.74) 1.31 (0.97) <0.001

Note: Scale range 1–7 with higher score indicating more positive value

CQI facilitators that emerged were: 1) Staff and leadership buy-in; 2) CQI format and technical assistance provided; 3) team work; and 4) understanding of patient benefit. CQI barriers were: 1) Time; 2) staff resistance; and 3) resources (other than time). Staff from at least two programs mentioned each of these facilitators and barriers.

When staff were asked about their perceptions of impact, three themes emerged: 1) patient satisfaction and patient retention (reported by 7 programs); 2) staffing (reported by 6 programs); and 3) service delivery improvements and a sustainable process (i.e., PDSA cycles; reported by 5 programs). More specifically, several respondents stated that CQI had benefitted patients by improving retention in treatment and patient satisfaction. Participants also thought that the CQI Actions had a positive impact on program staff, such as holding them accountable to activities, raising awareness about resources and finances, and creating a better work environment. With regard to the CQI process, respondents expressed a desire to use PDSA cycles again.

3.3. Intervention effectiveness

Intervention effectiveness was examined by comparing change from baseline to 12-months between the Cohort 1 and Cohort 2 programs.

3.3.1. Staff attrition

No difference in staff attrition was found (Cohort 1: 25%; Cohort 2: 29%).

3.3.2. Job satisfaction

Job satisfaction scores were similar in Cohort 1 (M = 42.22, SD = 5.40) and Cohort 2 (M = 43.33; SD = 8.50) at baseline. The average change in Cohort 1 (M = −2.50; SD = 4.31) and in Cohort 2 (M = −3.06; SD = 5.31) was similar, and the average difference in these changes (M = 0.56; SD = 4.84) was not statistically significant. Job satisfaction ratings tended to decline in both groups, with a greater decline among the Cohort 2 programs.

3.3.3. Morale

Emotional exhaustion

At baseline, 63% of the Cohort 1 respondents’ exhaustion ratings were in the “low” category, as were 80% of Cohort 2 respondents’ ratings. Only one person in each group changed category over time: one respondent in Cohort 1 shifted from a “high” rating to an “average” rating, and one respondent in Cohort 2 shifted from an “average” to a “low” rating.

Personal accomplishment

For this measure, we compared across cohorts the percentage of respondents whose scores fell in the “low” category as opposed to the “average” or “high” categories. At baseline, 38% of the Cohort 1 ratings were in the “low” category whereas 80% of Cohort 2 ratings were in the “low” category. Only one person in each group changed category over time: one respondent in each group shifted to a lower rating at the 12-month time point.

3.3.4. Patient length of stay

Table 3 presents the percentage of patients in each study condition over time that met the three retention criteria. Patients in both conditions had similar and relatively high rates of meeting the 3-days-or-more criterion, and this rate changed little from baseline to the 12-month period. However, patients in the Cohort 1 programs had significantly higher rates of meeting the 30-days-or-more and 90-days-or-more criteria at baseline. The Cohort 1 rates did not change significantly over time, whereas the rates among the Cohort 2 programs improved but never reached the levels achieved by the Cohort 1 programs.

Table 3.

Patient length of stay by experimental group and intervention period for the 3-, 30- and 90-days or more intervals

Group, Criteria Pre-Intervention n % Intervention n % Diff p-value
Cohort 1: 3 days or more 389 96.7 403 97.3 0.6 0.621
Cohort 1: 30 days or more 78.9 80.4 1.5 0.600
Cohort 1: 90 days or more 49.9 49.1 −0.7 0.822
Cohort 2: 3 days or more 251 97.6 288 96.5 −1.1 0.454
Cohort 2: 30 days or more 66.9 74.7 7.7 0.047
Cohort 2: 90 days or more 39.0 43.1 4.0 0.335

3.3.5. Patient treatment completion status

The average percentage discharged with positive compliance was initially higher in the Cohort 1 programs (baseline: Cohort 1 = 53.9%; Cohort 2 = 39.6%). The average difference-in-differences, however, was small, with the Cohort 1 percentage slightly decreasing over time but still remaining significantly higher than that of Cohort 2 at 12 months (Cohort 1 = 49.9%; Cohort 2 = 40.1%).

3.4. Sustainability

At the 24-month time point, respondents from the Cohort 1 programs reported sustaining six out of the eight activities that were developed as part of the intervention. The sustained actions included: increased gender-specific programming, a more systematic intake process, new procedures for transitioning patients to aftercare, creation of a standardized curriculum, establishment of monthly staff meetings, supervision and trainings, and development of a family group. The two discontinued CQI Actions were the new patient orientation group and monitoring units of service. Staff reported discontinuing the two actions because they were no longer needed.

When asked how CQI could be sustained, respondents reported: ongoing training, monitoring and reinforcement; staff support and buy-in; creation of policies and procedures; and allocating staff time (see Table 1). Barriers to continuing CQI included time, resources, and staff support.

4. Discussion

This study found that CQI is feasible to implement in community-based SUD treatment settings. Participating staff remained enthusiastic after the intervention and rated CQI as relatively easy to implement. Most programs implemented more than one PDSA cycle over one year. Aspects such as staff and organizational buy-in, technical assistance, teamwork, and recognizing benefits to patients were cited as CQI facilitators. Participants also identified a few barriers, including lack of time, resources, and staff resistance. Regarding effectiveness, there were no significant changes over time in staff outcomes between the two experimental conditions. Similarly, patient outcomes changed little over time in both groups, although the intervention group had higher rates both pre- and post-intervention.

This study contributes to the literature in several important ways. First, in accordance with recommendations generated from systematic reviews (Nadeem et al., 2013; Schouten et al., 2008), this manuscript provides specific details about the CQI intervention to facilitate identification of the key components. The detailed description of the intervention and related implementation toolkit will support future research and replication efforts. Additionally, this study used an innovative study design that maximized the opportunity to examine feasibility, effectiveness and sustainability within a relatively short study timeframe (i.e., two years). This design is highly applicable to the National Institutes of Health’s intervention testing funding mechanism (R34) that provides three years of research support. Examining feasibility, effectiveness and sustainability within one trial may reduce the time between intervention development and translation into real world practice settings, which is well documented to be slow (Balas & Boren, 2000), especially with regard to community based SUD treatment (Lamb, Greenlick, & McCarty, 1998).

This study is also unique in that we examined staff and patient outcomes and used mixed methods to examine feasibility and effectiveness. In a previous review, approximately half of CQI studies used only self-reported staff measures (Schouten et al., 2008). The varying understanding of CQI and the pressure organizations feel to participate in such efforts may result in biased assessments (Counte & Meurer, 2001). We utilized two strategies proposed by Counte and Meurer (2001) to address this concern: we included respondents at both the administrator and clinician levels, and we assessed the extent of CQI implementation through program documentation, staff reports, and in-depth interviews conducted by trained field staff rather than the CQI researchers, thereby reducing the potential for bias. Moreover, we collected survey and administrative data at the organizational and patient levels to explore whether the intervention influenced staff and patient outcomes.

The feasibility results show that CQI can be implemented in SUD treatment settings. All but one program accomplished at least one PDSA cycle, and half of the programs completed two PDSA cycles within the one-year intervention period. The one program that did not complete a PDSA cycle experienced administrative turnover twice during the year, preventing consistent participation and support.

In general, staff perceptions were positive, and staff remained enthusiastic after the intervention. Roosa et al. (2011) also reported that SUD treatment providers found participating in quality improvement to be of high value to patients and staff. The positive experience with CQI may have an additional benefit down the line, as Schierhout et al. (2013) reported that prior positive experiences with CQI contributed to collective efficacy, an important mechanism for CQI effectiveness. Although this intervention did not appear to impact staff retention, a longer-term study with a larger sample might be better positioned to examine this question.

In terms of developing CQI for SUD treatment settings, staff identified several feasibility barriers. For example, staff suggested that there needed to be dedicated and billable time for CQI, which would require an administrative change in terms of how contracts are organized and performance standards are measured. Staff resistance was also noted as an issue, particularly for programs that were already undergoing changes or periods of disruption due to leadership changes or staff turnover. Moreover, during the study period the agency was making broader procedural modifications as a result of Medicaid changes. These barriers highlight the need for CQI to be internalized into the process of care, not to be viewed as something outside the current practice or in addition to the current workload.

Regarding sustainability, 75% of the CQI Actions were continued for another year, demonstrating that the activities appeared valuable to program staff and that the changes were maintained without continued external support. However, staff suggested more assistance would increase CQI activity in their programs and argued for allocated time to help support its use. More specifically, the current environment, in which CQI is not a “billable” activity, prevented some administrative staff, and especially clinical staff, from devoting more time to CQI activities. These findings are consistent with the emerging literature suggesting that external contextual factors such as reimbursement, as well as internal contextual factors including staff training and organizational support, are needed to sustain evidence-informed practices in community service settings (Aarons, Hurlburt, & Horwitz, 2011; Hunter, Han, Slaughter, Godley, & Garner, 2017).

Although our findings are not conclusive, this is not surprising given the modest intensity of the intervention, the exploratory design with a small sample size, and the available data. For example, the CQI intervention lasted one year, with the launch of the CQI Actions occurring in approximately the third or fourth month of that period, after staff had studied existing process and outcome data and developed a vetted plan to make a change. One might not expect changes in patient outcomes to be immediate given that the average time in treatment was over 90 days. Moreover, despite favorable self-reported ratings of CQI, more objective measures such as attrition rates, job satisfaction, and morale ratings did not appear to be influenced by the intervention, perhaps because the CQI Actions were not specifically designed to address these issues.

4.1. Limitations

There are several study limitations. First, the use of patient outcome data to explore CQI effectiveness proved challenging. For example, pinpointing the time points most closely associated with CQI’s hypothesized impact was difficult. Moreover, the program data were often aggregated, making it difficult to disentangle for such purposes. For example, discharge data were only available by agency fiscal year due to limitations in the electronic record systems, which did not align with intervention timing. A second study limitation was that many of the data from staff were based on self-report. Third, despite randomization, there were baseline differences between the two experimental conditions. We used difference-in-differences approaches to account for this, but they do not address potential ceiling or floor effects. Moreover, a lack of clinical improvements is not uncommon: a recent systematic review of quality improvement collaboratives based on 24 articles found that the greatest impact was at the provider level and that patient-level findings were less robust (Nadeem et al., 2013).

4.2. Conclusions

It is feasible to implement CQI in community-based SUD treatment settings. Detailed documentation of the CQI approach used in this study is available for future use and replication efforts, as recommended in prior research. Clinical as well as program management staff were enthusiastic about engaging in CQI and reported that they thought it improved the patient experience. However, the time and staffing needed to conduct CQI in these typical treatment settings were noted as potential barriers. Moreover, demonstrating CQI’s impact on providers and patients using administrative data sources was challenging. These findings suggest more work may be needed to align performance measurement systems with CQI efforts in order to empirically demonstrate effects.

Highlights.

  • Continuous quality improvement is feasible to implement in community based substance use treatment

  • Substance use treatment staff were enthusiastic about utilizing continuous quality improvement approaches but reported the need for dedicated time to practice it

  • More research is needed to determine the impact of continuous quality improvement on providers and patients

Acknowledgments

This paper was produced as part of funding from the National Institute on Drug Abuse (R34 DA032041). The content is solely the responsibility of the authors and does not necessarily represent the official views of NIDA or the National Institutes of Health. The authors would like to thank all of the participating treatment program staff and patients, without whom this research would not have been possible. The authors express appreciation to Chau Pham, who coordinated treatment staff data collection, and Tiffany Hruby for assistance with manuscript preparation.


Conflict of Interest

All authors report no conflict of interest and no relevant financial interests, activities, relationships, and affiliations.

References

  1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
  2. Balas E, Boren SA. Managing clinical knowledge for health care improvement. In: Yearbook of Medical Informatics. Geneva, Switzerland: International Medical Informatics Association; 2000. pp. 65–70.
  3. Center for Behavioral Health Statistics and Quality. Key substance use and mental health indicators in the United States: Results from the 2015 National Survey on Drug Use and Health (HHS Publication No. SMA 16-4984, NSDUH Series H-51). 2016. Retrieved May 8, 2017, from https://www.samhsa.gov/data/sites/default/files/NSDUH-FFR1-2015/NSDUH-FFR1-2015/NSDUH-FFR1-2015.pdf.
  4. Chinman M, Hunter SB, Ebener P. Employing continuous quality improvement in community-based substance abuse programs. International Journal of Health Care Quality Assurance. 2012;25(7):604–617. doi: 10.1108/09526861211261208.
  5. Counte MA, Meurer S. Issues in the assessment of continuous quality improvement implementation in health care organizations. International Journal for Quality in Health Care. 2001;13(3):197–207. doi: 10.1093/intqhc/13.3.197.
  6. Fetterman DM, Wandersman A. Empowerment evaluation principles in practice. New York, NY: The Guilford Press; 2005.
  7. Gustafson DH, Quanbeck AR, Robinson JM, Ford JH 2nd, Pulvermacher A, French MT, McCarty D. Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction. 2013;108(6):1145–1157. doi: 10.1111/add.12117.
  8. Hoffman KA, Ford JH 2nd, Choi D, Gustafson DH, McCarty D. Replication and sustainability of improved access and retention within the Network for the Improvement of Addiction Treatment. Drug and Alcohol Dependence. 2008;98(1–2):63–69. doi: 10.1016/j.drugalcdep.2008.04.016.
  9. Hoffman KA, Green CA, Ford JH 2nd, Wisdom JP, Gustafson DH, McCarty D. Improving quality of care in substance abuse treatment using five key process improvement principles. Journal of Behavioral Health Services and Research. 2012;39(3):234–244. doi: 10.1007/s11414-011-9270-y.
  10. Humphreys J, Harvey G, Coleiro M, Butler B, Barclay A, Gwozdziewicz M, Hegarty J. A collaborative project to improve identification and management of patients with chronic kidney disease in a primary care setting in Greater Manchester. BMJ Quality and Safety. 2012;21(8):700–708. doi: 10.1136/bmjqs-2011-000664.
  11. Hunt P, Hunter SB, Levan D. Continuous quality improvement in substance abuse treatment facilities: How much does it cost? Journal of Substance Abuse Treatment. 2017;77:133–140. doi: 10.1016/j.jsat.2017.02.001.
  12. Hunter SB, Ebener P, Chinman M, Ober AJ, Huang CY. Promoting Success: A Getting to Outcomes guide to implementing continuous quality improvement for community service (TL-179-NIDA). Santa Monica, CA: RAND Corporation; 2015.
  13. Hunter SB, Han B, Slaughter ME, Godley SH, Garner BR. Predicting evidence-based treatment sustainment: Results from a longitudinal study of the Adolescent-Community Reinforcement Approach. Implementation Science. 2017;12(1):75. doi: 10.1186/s13012-017-0606-8.
  14. Hunter SB, Ober AJ, Paddock SM, Hunt PE, Levan D. Continuous quality improvement (CQI) in addiction treatment settings: Design and intervention protocol of a group randomized pilot study. Addiction Science & Clinical Practice. 2014;9:4. doi: 10.1186/1940-0640-9-4.
  15. Institute of Medicine. Improving the quality of health care for mental and substance-use conditions. Washington, DC: National Academies Press; 2006.
  16. Lamb S, Greenlick MR, McCarty D. Bridging the gap between research and practice: Forging partnerships with community-based drug and alcohol treatment. Washington, DC: National Academies Press; 1998.
  17. Lehman WE, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22(4):197–209. doi: 10.1016/s0740-5472(02)00233-7.
  18. Maslach C, Jackson SE. Maslach Burnout Inventory—Human Services Survey (MBI-HSS). In: Maslach C, Jackson SE, Leiter MP, editors. MBI manual. 3rd ed. Palo Alto, CA: Consulting Psychologists Press; 1996. p. 5.
  19. McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, Cotter F. The Network for the Improvement of Addiction Treatment (NIATx): Enhancing access and retention. Drug and Alcohol Dependence. 2007;88(2–3):138–145. doi: 10.1016/j.drugalcdep.2006.10.009.
  20. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research. 1991;2(3):192–222.
  21. Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: A systematic literature review. Milbank Quarterly. 2013;91(2):354–394. doi: 10.1111/milq.12016.
  22. National Institute on Drug Abuse. Trends & statistics. 2015. Retrieved May 17, 2016, from https://www.drugabuse.gov/related-topics/trends-statistics.
  23. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health. 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7.
  24. Quanbeck AR, Gustafson DH, Ford JH 2nd, Pulvermacher A, French MT, McConnell KJ, McCarty D. Disseminating quality improvement: Study protocol for a large cluster-randomized trial. Implementation Science. 2011;6:44. doi: 10.1186/1748-5908-6-44.
  25. Roosa M, Scripa JS, Zastowny TR, Ford JH 2nd. Using a NIATx based local learning collaborative for performance improvement. Evaluation and Program Planning. 2011;34(4):390–398. doi: 10.1016/j.evalprogplan.2011.02.006.
  26. Rounsaville BJ, Carroll KM, Onken LS. A stage model of behavioral therapies research: Getting started and moving on from stage I. Clinical Psychology: Science and Practice. 2001;8:133–142.
  27. Rubenstein L, Khodyakov D, Hempel S, Danz M, Salem-Schatz S, Foy R, Shekelle P. How can we recognize continuous quality improvement? International Journal for Quality in Health Care. 2014;26(1):6–15. doi: 10.1093/intqhc/mzt085.
  28. Schierhout G, Hains J, Si D, Kennedy C, Cox R, Kwedza R, Lonergan K. Evaluating the effectiveness of a multifaceted, multilevel continuous quality improvement program in primary health care: Developing a realist theory of change. Implementation Science. 2013;8(1):119. doi: 10.1186/1748-5908-8-119.
  29. Schouten LM, Hulscher ME, van Everdingen JJ, Huijsman R, Grol RP. Evidence for the impact of quality improvement collaboratives: Systematic review. BMJ. 2008;336(7659):1491–1494. doi: 10.1136/bmj.39570.749884.BE.
  30. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Quality and Safety. 2014;23(4):290–298. doi: 10.1136/bmjqs-2013-001862.
  31. Wisdom JP, Ford JH 2nd, Hayes RA, Edmundson E, Hoffman K, McCarty D. Addiction treatment agencies’ use of data: A qualitative assessment. Journal of Behavioral Health Services and Research. 2006;33(4):394–407. doi: 10.1007/s11414-006-9039-x.
