Author manuscript; available in PMC: 2016 Jul 1.
Published in final edited form as: Drug Alcohol Depend. 2015 Apr 9;152:230–238. doi: 10.1016/j.drugalcdep.2015.03.033

Effects of a Strategy to Improve Offender Assessment Practices: Staff Perceptions of Implementation Outcomes

Wayne N Welsh 1, Hsiu-Ju Lin 2, Roger H Peters 3, Gerald J Stahler 1, Wayne EK Lehman 4, Lynda AR Stein 5, Laura Monico 6, Michele Eggers 2, Sami Abdel-Salam 7, Joshua C Pierce 2, Elizabeth Hunt 3, Colleen Gallagher 8, Linda K Frisman 2
PMCID: PMC4458146  NIHMSID: NIHMS679740  PMID: 25896737

Abstract

Background

This implementation study examined the impact of an organizational process improvement intervention (OPII) on a continuum of evidence-based practices related to assessment and community reentry of drug-involved offenders: Measurement/Instrumentation, Case Plan Integration, Conveyance/Utility, and Service Activation/Delivery.

Methods

To assess implementation outcomes (staff perceptions of evidence-based assessment practices), a survey was administered to correctional and treatment staff (n=1509) at twenty-one sites randomly assigned to an Early- or Delayed-Start condition. Hierarchical Linear Models with repeated measures were used to examine changes in evidence-based assessment practices over time, and organizational characteristics were examined as covariates to control for differences across the 21 research sites.

Results

Results demonstrated significant intervention and sustainability effects for three of the four assessment domains examined, although stronger effects were obtained for intra- than inter-agency outcomes. No significant effects were found for Conveyance/Utility.

Conclusions

Implementation interventions such as the OPII represent an important tool to enhance the use of evidence-based assessment practices in large and diverse correctional systems. Intra-agency assessment activities that were more directly under the control of correctional agencies were implemented most effectively. Activities in domains that required cross-systems collaboration were not as successfully implemented, although longer follow-up periods might afford detection of stronger effects.

Keywords: Assessment, Correctional Treatment, Implementation, Evidence-Based Practice

1. INTRODUCTION

The use of evidence-based practices for assessment, case planning, and service delivery for offenders, particularly those in transition from correctional custody to community treatment, is not widespread (Belenko and Peugh, 2005; Friedmann et al., 2007; Henderson et al., 2008, 2009; Pelissier et al., 2007; Taxman et al., 2007a, 2007b). Improved assessment processes for offenders reentering the community have the potential to increase access to services and better match service delivery to assessed needs, thereby improving the likelihood of successful outcomes. For example, comprehensive screening and assessment of drug-involved offenders can expedite placement in treatment, reduce treatment dropout, and reduce recidivism (Shaffer, 2011).

Evidence-based assessment practices in criminal justice settings were a major focus of the Blending Initiative, a collaborative effort by the National Institute on Drug Abuse (NIDA) and the Substance Abuse and Mental Health Services Administration (SAMHSA) to improve the diffusion of research into practice. Through this initiative, the treatment planning M.A.T.R.S. guidelines (Measurable, Attainable, Time-limited, Realistic and Specific treatment objectives) were developed to promote the use of evidence-based instruments and the activation of appropriate treatment services (Condon et al., 2008; Garner, 2009; NIDA, 2012; Rossello et al., 2010; Stilen et al., 2007). The guidelines have been used to align evidence-based practices endorsed by NIDA and SAMHSA, including assessment of persons in the criminal justice system, and have also been used by the United Nations’ “Treatnet” network to help disseminate evidence-based practices internationally (Garner, 2009; Rossello et al., 2010).

A continuum of four core assessment practice domains (Measurement/Instrumentation, Case Plan Integration, Conveyance/Utility, Service Activation/Delivery) was identified for use in the current study to provide practical focal areas in which to implement the M.A.T.R.S. guidelines (Shafer et al., 2014). Despite their potential utility, these assessment practices are rarely implemented effectively with substance-involved offenders in correctional and community reentry programs (Peters et al., in press; Taxman et al., 2007a). The first domain, Measurement/Instrumentation, highlights the importance of using valid and reliable instruments to identify client strengths and needs and to prioritize those in need of services (Hiller et al., 2011; Peters et al., 2000). The second domain, Case Plan Integration, emphasizes that individualized treatment plans should address the unique needs of each person involved in the assessment process. While studies highlight the importance of matching treatment plans to individual needs for effective programming, these practices are seldom implemented in correctional settings (Lowenkamp and Latessa, 2005; Taxman and Thanner, 2006; Taxman et al., 2007c). The third domain, Conveyance/Utility, focuses on sharing assessment results, case plans, and client needs with community treatment providers (Fletcher et al., 2009; Moore and Mears, 2003; Taxman et al., 2007a; Wenzel et al., 2004). The final domain, Service Activation/Delivery, addresses strategies by which community treatment agencies deliver services based on valid assessment information (Belenko, 2006; Mellow and Christian, 2008; Taxman, 2004).

The Organizational Process Improvement Intervention (OPII), the focus of the current study, was designed to improve evidence-based assessment in these four core assessment domains. The OPII was one of three major projects in the Criminal Justice Drug Abuse Treatment Studies (CJDATS), a five-year, multi-site national research collaborative funded by the National Institute on Drug Abuse. CJDATS focused on improving implementation of evidence-based approaches for assessment and treatment of drug abuse within criminal justice settings (see also Ducharme et al., 2013; Shafer et al., 2014). Each of the CJDATS studies included some form of “change team” charged with implementation. Interventions involving the use of change teams have demonstrated effectiveness in improving the uptake and sustainability of evidence-based practices (Aarons et al., 2011; Capoccia et al., 2007; Damschroder et al., 2009, 2011; Edmonson, 2003; Lehman et al., 2009; McCarty et al., 2007; Proctor et al., 2009; Roosa et al., 2011).

Proctor and colleagues (2009) identified four levels of change within their conceptual model of implementation: individual, group/team, organization, and systems. Within individuals, key factors influencing change include knowledge, skill, and expertise. Within groups or teams, change is often related to cooperation, coordination and shared knowledge among team members. Within organizations, change is influenced by agency structure, strategy and culture. Within systems, reimbursement, legal, and regulatory policies are often key factors influencing change. While client outcomes (efficacy or effectiveness) are typically the focus of randomized clinical trials, implementation research focuses attention on more proximal implementation and service outcomes (Proctor et al., 2011). Implementation outcomes refer to the effects of deliberate and purposive actions to implement new treatments, practices, and services (Proctor et al., 2011). Service outcomes refer to standards of care for service delivery such as efficiency, safety, and equity. In the current study, our focus was on implementation outcomes.

Implementation outcomes are important for at least three reasons (Proctor et al., 2011). First, they serve as indicators of whether an intervention was implemented successfully or not. Second, implementation outcomes are proximal indicators of implementation processes. Third, implementation outcomes serve as critical preconditions for attaining desired changes in subsequent service and client outcomes. Proctor et al. (2011) emphasize that implementation outcomes should be assessed based on stakeholders’ knowledge of or direct experience with various dimensions of the change to be implemented. Staff perceptions of the change to be implemented are critically important, as agency personnel can, through their values, behaviors, and interactions with clients, colleagues, and supervisors, constitute some of the strongest barriers to or facilitators of change (Aarons et al., 2011).

Staff perceptions of the acceptability, appropriateness, feasibility, and sustainability of any planned change are particularly important (Proctor et al., 2011). Acceptability refers to the perception among stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory. Appropriateness is the perceived fit, relevance, or compatibility of an evidence-based practice for a given setting and/or problem. Feasibility is the degree to which an innovation can be successfully used within a given agency or setting. A specific innovation may be perceived as appropriate in that it is compatible with a program’s mission, but it may be viewed as unfeasible due to resource or training requirements. Sustainability is the extent to which a newly implemented practice is maintained or institutionalized within a service setting’s ongoing, stable operations. As Proctor et al. (2011) note, the construct of sustainability has so far received little attention in empirical studies of implementation. Other implementation outcomes such as costs, fidelity, and penetration are also relevant (Damschroder and Hagedorn, 2011), but were beyond the scope of the current study.

Using a cluster randomized trial design (Campbell et al., 2012), the CJDATS Collaborative, including nine research centers in locations around the country, examined whether the OPII resulted in the improved use of evidence-based assessment practices across the four core domains. We predicted that Early-Start sites that received the intervention would show greater improvements in staff perceptions of evidence-based assessment practices than Delayed-Start sites that did not receive the intervention during the same time period.

2. MATERIALS AND METHODS

2.1 Overview of the Intervention

The OPII was designed to provide a structured protocol to improve the use of evidence-based assessment practices in correctional settings. A detailed description of the intervention and the study design is available in a published protocol paper (Shafer et al., 2014). Following Proctor et al.’s (2009, 2011) conceptual model, the evidence-based practices targeted by the OPII were the four core assessment domains (Measurement/Instrumentation, Case Plan Integration, Conveyance/Utility, and Service Activation/Delivery), and the implementation strategy was a facilitated change team approach. Within each site, a local change team involving correctional agency staff (prison, jail, probation, or parole) and one or more community treatment partners identified by the correctional agency was formed to develop and implement strategic improvement plans. Change teams included 6 to 10 individuals, primarily correctional personnel with responsibility for offender assessment, treatment planning and referral functions. Community-based treatment agencies were also represented (typically 1–2 persons per team).

Change team leaders were middle- or upper-level correctional managers who had direct access to the director of the correctional agency. In consultation with the correctional agency, each research center employed an external facilitator (e.g., a professional consultant or trainer) who had experience working with correctional agencies and/or guiding strategic planning teams. Facilitators were not recruited from any of the participating agencies so as to reduce risk of perceived bias. The facilitator maintained communication with all members of the change team, helped conduct meetings, and provided assistance to the team in carrying out the activities of each phase.

An initial kick-off meeting was held with each of the local change teams to provide an overview of the goals, phases, and activities of the OPII. Change teams were briefed and technical assistance was provided by the facilitator as needed. Change teams met in-person at least once per month; in-person meetings were supplemented with telephone conference calls and group e-mails. Cross-site fidelity was encouraged through a detailed Facilitators’ Manual (Shafer and Hiller, 2010), weekly facilitator calls, weekly activity reports submitted by the facilitator to a secure web-portal, a monthly checklist submitted to an executive committee comprised of several lead researchers, and monthly researcher calls.

The OPII consisted of five phases: Team Development (1–2 months); Needs Assessment (3–4 months); Process Improvement Planning (3–4 months); Implementation (6 months); and Follow-Up/Sustainability (3 months). During the Team Development phase, change teams reviewed the aims of the study, elected a team leader, and set up a regular meeting schedule. In the Needs Assessment phase, the change teams engaged in guided activities to identify high-priority agency needs across the four core assessment domains.

During the Process Improvement Planning phase, change teams developed a plan to address the needs identified in the previous phase. Each change team developed a purpose statement articulating the team’s goals in one or more of the four core assessment domains; identified measurable objectives for each goal; and determined specific action steps for each goal. Each team used a Process Improvement Planning Worksheet to identify specific tasks, responsibilities, performance measures, and due dates. Facilitators and team leaders identified 2–3 change team members with relevant experience and formed subgroups to address each goal. Change teams focused on a median of three goals each. Across all 21 sites, nine goals (13%) addressed Measurement/Instrumentation; six goals (9%) addressed Case Plan Integration; 36 goals (52%) addressed Conveyance/Utility; and 18 goals (26%) addressed Service Activation/Delivery. The Process Improvement Plan was presented to the correctional agency director for approval or modification before proceeding to the next phase.

During the Implementation phase, each site implemented its improvement plan with technical assistance from the facilitator. The planning worksheet was reviewed and updated regularly to monitor progress toward the team’s goals. At the end of this phase, a team report summarizing progress was presented to the agency director for approval. During the Sustainability phase, change teams formulated plans to help maintain new assessment practices after the intervention formally ended (Shafer et al., 2014).

2.2 Research Design

This was a multi-site cluster randomized design, where each of the research centers participating in the CJ-DATS collaborative recruited at least two research sites to form the clusters. Cluster randomized designs are well suited to studies in which the intervention is targeted at the organizational rather than at the client level (Campbell et al., 2012; Glynn et al., 2007). Each independent cluster (k = 21) consisted of a correctional agency (e.g., prison, parole, or probation) partnered with one or more community-based treatment programs receiving client referrals from that agency. Within each center, prior to baseline data collection, clusters were randomly assigned to an Early-Start condition or to a Delayed-Start condition using the randomization function in Excel. Delayed-Start sites served as a comparison in that they continued to conduct their normal assessment practices while Early-Start sites received the intervention through the end of the Implementation phase (Shafer et al., 2014).
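
The study itself used the randomization function in Excel; purely as an illustration of the within-center allocation logic described above, the assignment could be scripted as follows (the center and site names are hypothetical, not the study's sites).

```python
# Illustrative sketch of within-center cluster randomization (the study used Excel's
# randomization function; the center and site names below are hypothetical).
import random

centers = {
    "Center 1": ["Site A", "Site B"],
    "Center 2": ["Site C", "Site D", "Site E"],
}

random.seed(2024)  # fixed seed so the allocation can be reproduced
assignment = {}
for center, sites in centers.items():
    shuffled = sites[:]
    random.shuffle(shuffled)
    # Assign roughly half of each center's clusters to Early-Start, the rest to Delayed-Start.
    half = len(shuffled) // 2
    for site in shuffled[:half]:
        assignment[site] = "Early-Start"
    for site in shuffled[half:]:
        assignment[site] = "Delayed-Start"

print(assignment)
```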

2.3 Participants

Correctional agency directors agreed that staff could be recruited to participate. Individual offenders were not recruited as subjects. Prior to beginning data collection, all research centers received approval from their Institutional Review Boards and study participants provided informed consent. Surveys were administered to 1,509 respondents at 21 sites randomly assigned to an Early- or Delayed-Start condition. Respondents included change team members as well as other correctional agency staff who were directly involved in conducting assessments, preparing case plans, or referring offenders to community-based treatment.

At Early-Start sites, surveys were administered to staff at three key points where we expected to see improved implementation of evidence-based assessment practices over time: (a) at baseline, immediately following the initial kick-off meeting but prior to the start of the intervention; (b) at the end of the Implementation phase, when each site had concluded the execution of its process improvement plan; and (c) at the end of the follow-up or “Sustainability” phase, when each site developed plans to maintain new practices. At Delayed-Start sites, staff completed surveys at the first two intervals only. Wherever possible, data were collected from the same individuals at each time point. During survey administration to change team members, research assistants were present on-site to provide guidance and answer any questions regarding the wording of survey items. Surveys of other staff were administered in person if the respondent was employed at the same facility where the change team meetings were held, or, in other cases, via an anonymous mail survey returned directly to researchers in a sealed postage-paid envelope.

2.4 Measures

2.4.1 Dependent Measure

The Staff Perceptions of Assessment survey (Table 1) was designed for this study to measure the four core assessment domains: (1) Measurement/Instrumentation (4 items, α=0.82), (2) Case Plan Integration (5 items, α=0.83), (3) Conveyance/Utility (4 items, α=0.86), and (4) Service Activation/Delivery (2 items, α=0.75). Because the thrust of the OPII was to improve the use of evidence-based assessment practices in correctional systems, survey items were primarily tailored toward correctional personnel (including classification officers, correctional counselors, and drug and alcohol treatment counselors) who were involved in assessment, case planning, or referral activities. Community-based treatment personnel who served on change teams (typically 1–2 per team) were instructed to answer survey questions about assessment practices from their agency perspective, based on the assessment information received by the treatment agency from the referring correctional agency. The survey took about 5 minutes to complete.

Table 1.

Staff Perceptions of Assessment Survey – Items and Subscale Reliabilities

Subscales and Items Alpha
Measurement/Instrumentation (4 items): This dimension is concerned with the breadth and quality of instruments that a correctional agency uses to identify the strengths, weaknesses, and service needs of substance-using offenders. α=.82
1. In our agency, the assessment process adequately identifies substance abuse treatment and other service needs.
2. In our agency, staff who conduct assessments are adequately trained.
3. The assessment instruments used in our agency are easy to read, interpret, and use.
4. I am satisfied with the instruments used in our agency to assess offender needs.
Case Plan Integration (5 items): This dimension is concerned with the extent to which the correctional case plan explicitly addresses service needs. It also seeks to gauge efficacy and suitability to the needs of the offender as called for in the written problem statement, goals, objectives, and suggested interventions. α=.83
5. In our agency, information from the assessment process is included in the case plan.
6. In our agency, it is clear who develops the case plan.
7. Offenders actively participate in the development of the case plan.
8. I am satisfied with the content of the case plans developed in our agency.
9. I am satisfied with the format of the case plans developed in our agency.
Conveyance/Utility (4 items): This dimension is concerned with the extent to which community-based treatment programs receive the information contained in the corrections agency case plan and with the degree to which the programs find the information useful in arranging services for clients. α=.86
10. Case plans are sent to the community treatment programs to which clients are referred.
11. The case plans are useful to community treatment providers in developing treatment plans and providing services to clients.
12. Community treatment providers communicate with our agency on the usefulness of the case plans.
13. The recommendations of the case plan are used by community treatment programs in delivering services to the clients referred from our agency.
Service Activation/Delivery (2 items): This dimension is concerned with whether the client is engaged in community treatment, with the type and nature of services received, and with communication between agencies about the treatment. α=.75
14. Staff at community treatment programs to which our agency refers clients provide us with information about the progress of those clients.
15. Staff at our agency and staff at community treatment programs to which we refer clients are in general agreement as to the services that clients need.

Note. All items were measured with five-point Likert scales where 1=Disagree Strongly, 2=Disagree, 3=Neither Agree nor Disagree, 4=Agree, and 5=Agree Strongly.
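
The subscale reliabilities reported in Table 1 are Cronbach's alpha coefficients. A minimal computational sketch is shown below (not the study's analysis code); it assumes a pandas DataFrame with one column per item, and the item responses are invented for illustration.

```python
# Minimal sketch of Cronbach's alpha for a subscale (not the study's analysis code).
# Assumes a pandas DataFrame with one column per survey item (1-5 Likert responses).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses for the four Measurement/Instrumentation items (5 respondents).
df = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2],
    "item2": [4, 4, 3, 5, 2],
    "item3": [5, 4, 2, 4, 3],
    "item4": [4, 5, 3, 4, 2],
})
print(round(cronbach_alpha(df), 2))
```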

To examine construct validity, confirmatory factor analysis was performed on the second half of a randomly split baseline sample (n=464), comparing a model derived from exploratory factor analysis with the theory-based model (i.e., the four assessment domains). Fit indices included the Root Mean Square Error of Approximation (RMSEA) and the Comparative Fit Index (CFI; Bentler, 1990; Hooper et al., 2008; Kenny, 2014; Kline, 2005; Steiger, 1990). An RMSEA of 0.01, 0.05, 0.08, or 0.10 suggests an excellent, good, adequate, or poor fit, respectively (Kenny, 2014; MacCallum et al., 1996). The CFI can vary between 0 and 1, with values closer to 1 indicating a better fitting model (Bentler, 1990; Kenny, 2014; Kline, 2005; Steiger, 1990); a CFI ≥ 0.90 is often recommended (Hu and Bentler, 1999). A four-factor model with item #13 reloaded onto the Conveyance/Utility subscale (rather than onto the 3-item Service Activation/Delivery subscale defined a priori) had the best model fit (CFI=0.918, RMSEA=0.085).
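
For reference, one common formulation of these two indices, in terms of the fitted model and the null (baseline) model chi-square statistics, is shown below; these are standard textbook definitions (see, e.g., Bentler, 1990; Kenny, 2014), not quantities computed for this study.

```latex
\[
\mathrm{RMSEA} \;=\; \sqrt{\frac{\max\!\left(\chi^{2}_{\text{model}} - df_{\text{model}},\, 0\right)}{df_{\text{model}}\,(N-1)}},
\qquad
\mathrm{CFI} \;=\; 1 \;-\; \frac{\max\!\left(\chi^{2}_{\text{model}} - df_{\text{model}},\, 0\right)}{\max\!\left(\chi^{2}_{\text{null}} - df_{\text{null}},\, 0\right)}
\]
```

Here N is the sample size; smaller RMSEA and larger CFI indicate better fit, consistent with the cutoffs cited above.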

2.4.2 Independent Measure

The independent variable was Study Condition, which compared Early- to Delayed-Start sites. Although we randomly assigned sites to the Early-Start or Delayed-Start condition, it was important to examine group balance on potential confounding factors to ensure that randomization had successfully generated equivalent groups. In addition to examining group balance on all measured site-level demographics (Table 2), we also identified and examined group balance on site-level organizational characteristics known to influence the results of change efforts (e.g., Aarons et al., 2011; Greenhalgh et al., 2004; Lehman et al., 2002; Proctor et al., 2009).

Table 2.

Site-Level Demographics

Early-Start Sites (N=10) Delayed-Start Sites (N=11) All Sites (N=21)

Percent Mean SD Percent Mean SD Percent Mean SD p
Respondent Type 100.0 100.0 100.0
 Correctional Director 4.1 7.7 6.0 .477
 Correctional Staff 38.2 37.0 37.6 .939
 Treatment Director 6.2 8.5 7.4 .701
 Treatment Staff 51.5 46.8 49.0 .774
Race 100.0 100.0 100.0
 African American 22.8 19.3 21.0 .727
 White 71.7 67.9 69.7 .712
 Other 5.5 12.8 9.3 .150
Gender 100.0 100.0 100.0
 Female 61.6 57.6 59.5 .703
 Male 38.4 42.4 40.5 .703
Education 100.0 100.0 100.0
 High School 14.5 12.4 13.4 .732
 Bachelors/Associates 52.3 49.9 51.0 .794
 Post Graduate (MA/PhD) 33.2 37.7 35.6 .592
Ethnicity 100.0 100.0 100.0
 Hispanic 6.6 12.5 9.6 .439
 Non-Hispanic 93.4 87.5 90.4 .439
Age 43.85 5.48 41.70 5.51 42.72 5.47 .381
Years in Corrections or Treatment 8.92 3.29 8.57 3.45 8.74 3.30 .815
Years at current employer 6.53 3.19 6.11 3.09 6.31 3.07 .757
Hours per week worked 38.78 2.04 39.37 2.19 39.09 2.10 .530
Direct client contact hours per week 21.05 5.78 20.13 3.74 20.57 4.72 .667
Number of Clients Seen Per Week 26.10 16.04 29.98 15.42 28.14 15.45 .579
Active Caseload 45.22 22.33 36.31 17.04 40.55 19.76 .314
* p < .05.

2.4.3 Covariates

The Baseline Survey of Organizational Characteristics (BSOC) used in this study was specifically designed to assess organizational characteristics in correctional treatment settings. The BSOC was based primarily on subscales from the Organizational Readiness for Change (ORC) survey and the Survey of Organizational Functioning (SOF) (Broome et al., 2009; Lehman et al., 2002). Both have been widely validated for use in correctional settings and have demonstrated good psychometric properties. The BSOC included twenty-nine scales organized into five sections: (1) Needs/Pressures for Change, (2) Resources, (3) Staff Attributes, (4) Organizational Climate, and (5) Other (e.g., Support for Evidence-Based Practices). Demographics (e.g., age, race, ethnicity, gender, education, work experience) were also collected. This survey takes about 45–50 minutes to complete. All subscales were examined as possible covariates in order to adjust for potential differences between Early-Start and Delayed-Start sites. Results comparing Early- and Delayed-Start sites on all twenty-nine BSOC subscales are not shown but are available from the corresponding author upon request.

Only one BSOC scale, the 5-item Support scale (α = .79), revealed a significant difference (p < .053) between the Early-Start (mean = 35.2, SD = 3.11) and Delayed-Start sites (mean = 33.2, SD = 0.92), and it was thus entered as a covariate. The Support scale (Shortell et al., 2004) assesses perceived organizational support for change efforts; items include “Senior management in your organization strongly supports your work.” Because the Support scale was examined as a control variable rather than a main effect, we were not interested in interpreting its coefficient, and it was not centered in the HLM analyses.

2.5 Analyses

Mixed effects models, also known as Hierarchical Linear Models (Hedeker et al., 1994; Raudenbush and Bryk, 2002), were used to examine the effects of Study Condition (Early- or Delayed-Start) on staff perceptions of assessment practices over time. Mixed effects models account for between-cluster as well as within-cluster covariance structures, which was necessary because staff perceptions were measured repeatedly at three time points and staff were nested within clusters (sites). Mixed effects models do not require an equal number of observations for all participants, but instead allow use of all available cases when estimating effects. Site and research center were treated as random effects, while Study Condition, Interval, and their interaction (Study Condition × Interval) were treated as fixed effects. Model fit was examined by inspecting residuals and goodness-of-fit indices (e.g., −2 log-likelihood and the Akaike and Bayesian information criteria).
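
As an illustration only (not the authors' analysis code), a simplified version of such a model can be fit in Python with statsmodels. The column names (score, condition, interval, support, site) and the input file are hypothetical, and this sketch omits the center-level random effect and the AR(1) residual structure used in the study.

```python
# Simplified sketch of a mixed effects (hierarchical linear) model for staff perceptions.
# Hypothetical long-format data: one row per respondent per interval, with columns
# score, condition, interval, support, and site.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("staff_perceptions.csv")  # hypothetical input file

model = smf.mixedlm(
    "score ~ C(condition) * C(interval) + support",  # fixed effects, including the interaction
    data=df,
    groups=df["site"],                               # random intercept for each site (cluster)
)
result = model.fit(reml=True)
print(result.summary())
```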

Two a priori planned contrasts tested the effects of the intervention and its sustainability. As opposed to overall (omnibus) F-tests for interaction terms, planned contrasts isolate specific Study Condition × Interval mean comparisons and provide more statistically powerful tests (Maxwell and Delaney, 2004:745). Sequential Bonferroni adjustments (Holm, 1979; Miller, 1981; Rice, 1989) were used to correct for inflated Type I error rates due to multiple comparisons. To test intervention effects, we compared the change in Early-Start sites from Interval A (baseline) to Interval B (end of implementation) with the change in Delayed-Start sites from Interval D (baseline 1) to Interval E (baseline 2). For Contrast #1, therefore, Intervention Effect = [(B−A) − (E−D)]. Next, to examine sustainability effects for Early-Start sites, we compared the change from Interval A (baseline) to Interval C (end of follow-up) for the Early-Start sites with the change from Interval D (baseline 1) to Interval E (baseline 2) for the Delayed-Start sites. A significantly higher rate of change between C and A, versus E and D, would provide evidence for the sustainability of the intervention. For Contrast #2, therefore, Sustainability Effect = [(C−A) − (E−D)].
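
The contrast arithmetic and the sequential Bonferroni (Holm) adjustment can be sketched as follows; the cell means and p-values in this example are placeholders, not estimates from the study.

```python
# Illustrative sketch of the two planned contrasts and the Holm (sequential Bonferroni)
# adjustment. All numbers below are placeholders, not estimates from this study.
from statsmodels.stats.multitest import multipletests

# Hypothetical cell means for one assessment domain.
A, B, C = 34.0, 37.0, 36.5   # Early-Start: baseline, end of implementation, end of follow-up
D, E = 34.5, 34.0            # Delayed-Start: baseline 1, baseline 2

intervention_effect = (B - A) - (E - D)    # Contrast #1
sustainability_effect = (C - A) - (E - D)  # Contrast #2
print(intervention_effect, sustainability_effect)

# Holm adjustment applied to a set of raw contrast p-values (placeholders).
raw_p = [0.001, 0.008, 0.056, 0.027]
reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
print(adjusted_p, reject)
```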

3. RESULTS

Following CONSORT guidelines for cluster randomized designs (Campbell et al., 2012), Figure 1 displays the number of surveys distributed and returned for each study condition and time interval. Response rates on the Staff Perceptions survey were 91.7% for the Early-Start group (639 of 697 forms returned) and 96.6% for the Delayed-Start group (872 of 903 forms returned). After exclusion of records from respondents who were recorded as being in “neither” study condition (1 record) or who were missing critical identifiers such as Interval (1 record), a total of 1,509 of 1,511 records were available for analyses. Site-level characteristics for each group and for the study sample as a whole are presented in Table 2. Early- and Delayed-Start sites did not differ on these variables. The observed means for each dependent variable by study condition and interval are shown in Table 3. A general upward trend is suggested in the Early-Start sites over time, although significance tests of the planned contrasts are needed to test hypotheses about improvement over time.

Figure 1. Study Design, Data Collection Points, and Response Rates.

Table 3.

Dependent Variables: Observed Means by Study Condition and Interval

Assessment Domains              Early-Start Sites                             Delayed-Start Sites
                                A (BL)         B (IMP)        C (FU)          D (BL1)        E (BL2)
Measurement/Instrumentation     34.21 (8.13)   37.46 (7.08)   36.97 (8.19)    34.77 (7.74)   34.27 (7.79)
Case Plan Integration           34.46 (7.43)   37.02 (7.30)   37.34 (7.19)    35.20 (6.80)   34.46 (6.95)
Conveyance/Utility              29.89 (7.65)   31.67 (8.43)   30.29 (8.78)    28.33 (8.86)   28.60 (8.51)
Service Activation/Delivery     30.60 (8.36)   31.59 (9.40)   31.76 (10.42)   33.10 (8.90)   32.18 (8.63)

Note. Means shown are observed means from the Mixed Effects Models. Standard deviations are shown in parentheses. Significance tests for the planned contrasts testing change over time within the mixed effects models are shown in Table 4. Letters (A, B, etc.) refer to data collection points for each study condition: A = Early-Start/Baseline; B = Early-Start/End of Implementation; C = Early-Start/End of Follow-up; D = Delayed-Start/Baseline 1; and E = Delayed-Start/Baseline 2.

Results for the overall Mixed Effects models are presented in Table 4. Tests of the planned contrasts supported significant Intervention and Sustainability effects for three of the four assessment domains: Measurement/Instrumentation, Case Plan Integration, and Service Activation/Delivery. Neither effect was significant for Conveyance/Utility. Support, entered as a control variable, predicted Measurement/Instrumentation only. The planned-contrast coefficients indicate that Intervention and Sustainability effects were strongest for Measurement/Instrumentation (b = 4.76 and 3.55, respectively) and Case Plan Integration (b = 4.10 and 3.02, respectively). The effects for Service Activation/Delivery, although significant, were smaller in magnitude (b = 2.65 and 2.12, respectively). All random effects were non-significant, suggesting that there was little additional variance to be explained by differences between sites or centers.

Table 4.

Mixed Effects Models: Overall Results and Planned Contrasts

Measurement/Instrumentation Case Plan Integration Conveyance/Utility Service Activation/Delivery
Fixed Effects
F p F p F p F p
Study Condition   0.31 0.586   0.04 0.853   0.01 0.925   0.27 0.617
Interval 10.85 0.001 *** 10.38 0.001 ***   6.33 0.002 **   2.09 0.124
Study Condition * Interval 20.21 0.001 *** 16.81 0.001 ***   5.88 0.003 **   4.19 0.015 *
Support   6.00 0.025 *   2.73 0.115   0.73 0.404   1.30 0.272
Planned Contrasts
b SE p b SE p b SE p b SE p
Intervention Effect: (B−A) − (E−D)   4.76 0.79 0.001 ***   4.10 0.74 0.001 ***   1.93 0.88 0.056   2.65 0.93 0.008 **
Sustainability Effect: (C−A) − (E−D)   3.55 0.78 0.001 ***   3.02 0.74 0.001 ***   0.43 0.88 0.620   2.12 0.96 0.027 *
Random Effects b SE p b SE p b SE p b SE p
 Center   1.80 2.52 0.476   3.72 3.34 0.265   4.89 3.77 0.194 10.88 6.97 0.119
 Site   3.96 2.41 0.100   4.20 2.36 0.075   3.46 2.10 0.099   3.85 2.52 0.126
Residual (AR1)
 AR1 diagonal 56.15 2.26 0.001 *** 44.87 1.76 0.001 *** 63.29 2.49 0.001 *** 68.51 2.70 0.001 ***
 AR1 rho   0.51 0.03 0.001 ***   0.44 0.03 0.001 ***   0.46 0.03 0.001 ***   0.43 0.03 0.001 ***

Note. For the Planned Contrasts, letters (A, B, etc.) refer to data collection points for each study condition: A = Early-Start/Baseline; B = Early-Start/End of Implementation; C = Early-Start/End of Follow-up; D = Delayed-Start/Baseline 1; and E = Delayed-Start/Baseline 2. For all contrasts, the earlier interval (e.g., A or D) was coded as 0; the later interval (e.g., B, C, or E) was coded as 1; the Early-Start group was coded as 0 and the Delayed-Start group was coded as 1.

* p < 0.05; ** p < 0.01; *** p < 0.001. P values shown in the table have been adjusted using the sequential Bonferroni procedure.

4. DISCUSSION

Staff perceptions of implementation outcomes are critically important, as agency personnel can, through their values, attitudes, and behaviors, act as important facilitators of or barriers to change (Aarons et al., 2011; Proctor et al., 2011). The study findings supported the effectiveness of an implementation intervention intended to improve the use of evidence-based assessment practices for incarcerated offenders re-entering the community. Consistent with hypotheses, significant intervention effects were found for Measurement/Instrumentation, Case Plan Integration, and Service Activation/Delivery, and these improvements were sustained in the Early-Start sites through the end of the follow-up period. Contrary to expectations, no significant improvements in Conveyance/Utility were found.

Interviews with correctional staff suggested that the ability of the change teams to impact intra-agency policy and practice may explain in part the stronger intervention effects observed for Measurement/Instrumentation (b = 4.76) and Case Plan Integration (b = 4.10). As detailed qualitative analyses are in progress (Pankow et al., 2014; Shafer et al., 2014), only brief excerpts are presented here. In reference to changes in these domains, staff indicated that they relied more on intra- than inter-agency coordination: “…it’s a matter of senior people making decisions…saying to subordinates this is a new process and we are going to…do it.” In contrast, intervention effects related to Conveyance/Utility (b = 1.93) and Service Activation (b = 2.65) were more dependent on inter-agency coordination: “…everything else has been coming along good as far as the Measurement …and Case Plans…it’s starting to become routine, but the tricky part is…Conveyance. That’s been hard.” One correctional counselor stated, “…within the…institution they seem to be doing the forms…facilitating release…uploading the information, and then it seems to stop…they just haven’t figured out how to get that to the community.” Improved inter-agency collaboration may thus be one prerequisite for improvements in Conveyance/Utility. However, we cannot rule out the possibility that our intervention paid insufficient attention to the hand-off between the correctional agency and the community agency. Formative research could further identify inter-agency factors that influence the uptake, utilization, and sustainability of planned changes in offender assessment practices (Brown and Gerhardt, 2002; Stetler et al., 2006; Welsh and Harris, 2012), and help modify the OPII for this purpose.

Although significant improvements in Conveyance/Utility were not found, several planned changes to assessment practices were still in relatively early stages of implementation when the study ended. At several sites, implementation of new procedures to improve the electronic conveyance of assessment information to community treatment providers had not yet begun, despite approval of the policies and funds to do so. Optimal time frames for planning, executing, and measuring planned changes to assessment practices deserve closer attention, and future studies would benefit from examining whether intervention effects are immediate or delayed, abrupt or gradual, and whether they persist or are temporary (Wagner et al., 2002).

Other study limitations are related to the number of sites (21) (Raudenbush, 1997; Spybrook and Raudenbush, 2009). For example, although agency-level support for change at baseline was entered as a control variable, we were unable to examine interactions with study condition and change over time (e.g., 3-way interactions). To explore how organizational characteristics influence change over time, a greater number of research sites would be needed. Similarly, although we controlled for site-level differences in analyses, it is still possible that other unmeasured site- or agency-level characteristics could have influenced the results. In addition, outcomes perceived by treatment agency staff may differ from those perceived by correctional agency staff, although larger samples of treatment agencies and staff would be needed to examine this possibility. In the current study, we were primarily interested in outcomes for correctional agencies and their treatment partners in aggregate (i.e., sites).

As Proctor et al. (2011) explain, implementation studies primarily focus on implementation or service outcomes rather than client-level outcomes. As such, this study examined specific implementation outcomes (staff perceptions of assessment practices). However, client-based outcomes may also be useful to more fully measure the long term effects of implementation interventions such as the OPII (see Shafer et al., 2013). Future studies could benefit from examining relationships between implementation and client outcomes, although the logistical challenges of measuring both in the same study are substantial and the time lag between observable implementation and client outcomes is often considerable (Proctor et al., 2011).

Other outcomes are also being examined as part of the larger CJDATS study (Shafer, 2013; Shafer et al., 2014). These include the degree of success at each site in achieving its targeted goals, measured by content analyses of site reports prepared by criminal justice partners at the end of the implementation phase. Another paper is examining changes in the use of evidence-based practices over time as assessed by researcher ratings of a sample of client case plans. A third paper is examining treatment agency staff perceptions of improvements in assessment practices, although the number of treatment agency personnel surveyed at each site was quite small and analyses will rely on other data sources, including interviews. Other papers are examining the functioning of multiagency change teams (Melnick et al., 2015) and cross-site fidelity (Stein et al., 2015).

The intervention and sustainability effects found in this study provide a foundation for future studies targeting the improved implementation of evidence-based assessment practices for offenders reentering the community. Successful reentry for offender populations is often predicated on effective assessment, case planning and sharing of information between correctional and community treatment agencies (ONDCP, 2014). Implementation interventions involving change teams represent an important tool to enhance the use of evidence-based assessment practices in these large and diverse systems. Further research is still needed, however, to better understand the effective ingredients of implementation interventions such as the OPII, including the structure and process of change teams, organizational variables, and other factors that may influence implementation outcomes.

Highlights.

  • An intervention to improve assessment for drug-involved offenders was tested.

  • Outcomes included Measurement/Instrumentation and Case Plan Integration.

  • Evidence-based assessment practices in correctional systems were improved.

  • Stronger effects were obtained for intra- than inter-agency outcomes.

Acknowledgments

The authors gratefully acknowledge the collaborative contributions by NIDA; the Coordinating Center, AMAR International, Inc.; and the Research Centers participating in CJ-DATS. The Research Centers include: Arizona State University and Maricopa County Adult Probation (U01DA025307); University of Connecticut and the Connecticut Department of Correction (U01DA016194); University of Delaware and the New Jersey Department of Corrections (U01DA016230); University of Kentucky and the Kentucky Department of Corrections (U01DA016205); National Development and Research Institutes, Inc. and the Colorado Department of Corrections (U01DA016200); University of Rhode Island, Rhode Island Hospital and the Rhode Island Department of Corrections (U01DA016191); Texas Christian University and the Illinois Department of Corrections (U01DA016190); Temple University and the Pennsylvania Department of Corrections (U01DA025284); and the University of California at Los Angeles and the Washington State Department of Corrections (U01DA016211).

Role of funding source

This study was funded under a cooperative agreement from the U.S. Department of Health and Human Services, National Institutes of Health, National Institute on Drug Abuse (NIH/NIDA), with support from the Substance Abuse and Mental Health Services Administration (SAMHSA) and the Bureau of Justice Assistance, US Department of Justice. NIDA program officials participated in the conceptualization and monitoring of the research reported here. The views and opinions expressed in this report are those of the authors and should not be construed to represent the views of NIDA nor any of the sponsoring organizations, agencies, CJ-DATS partner sites, or the U.S. government.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributors

W. Welsh directed and participated in the conceptualization, data collection, analyses and writing of the manuscript. W. Welsh and H. Lin had primary responsibility for data analyses and preparation of tables and figures, with assistance from J. Pierce. R. Peters, J. Stahler, and E. Hunt reviewed the literature and participated in writing the introduction and discussion sections. W. Lehman, L. Stein, and S. Abdel-Salam participated in writing the methods sections and editing the manuscript. C. Gallagher provided consultation on the relevance of the study for correctional policy and participated in editing the manuscript. L. Monico and M. Eggers were primarily responsible for the qualitative analyses. L. Frisman participated in the conceptualization of the study, provided resources for the data analyses, and participated in writing and editing the manuscript. All authors contributed to and have approved the final manuscript.

Conflict of interest

None of the authors report a conflict of interest that could have influenced or be perceived to have influenced this work.

References

  1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23. doi: 10.1007/s10488-010-0327-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Belenko S. Assessing released inmates for substance-abuse-related service needs. Crime Delinq. 2006;52:94–113. [Google Scholar]
  3. Belenko S, Peugh J. Estimating drug treatment needs among state prison inmates. Drug Alcohol Depend. 2005;77:269–281. doi: 10.1016/j.drugalcdep.2004.08.023. [DOI] [PubMed] [Google Scholar]
  4. Bentler PM. Comparative fit indexes in structural models. Psychol Bull. 1990;107:238–46. doi: 10.1037/0033-2909.107.2.238. [DOI] [PubMed] [Google Scholar]
  5. Broome KM, Knight DK, Edwards JR, Flynn PM. Leadership, burnout, and job satisfaction in outpatient drug-free treatment programs. J Subst Abuse Treat. 2009;37:160–170. doi: 10.1016/j.jsat.2008.12.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Brown K, Gerhardt M. Formative evaluation: an integrative practice model and case study. Personnel Psychol. 2002;55:951–983. [Google Scholar]
  7. Campbell MK, Piaggio G, Elbourne DR, Altman DG. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661. doi: 10.1136/bmj.e5661. [DOI] [PubMed] [Google Scholar]
  8. Capoccia VA, Cotter F, Gustafson DH, Cassidy E, Ford J, Madden L, Owens BH, Farnum SO, McCarty D, Molfenter T. Making “stone soup”: how process improvement is changing the addiction treatment field. Jt Comm J Qual Patient Saf. 2007;33:95–103. doi: 10.1016/s1553-7250(07)33011-0. [DOI] [PubMed] [Google Scholar]
  9. Condon TP, Miner LL, Balmer CW, Pintello D. Blending addiction research and practice: strategies for technology transfer. J Subst Abuse Treat. 2008;35:156–160. doi: 10.1016/j.jsat.2007.09.004. [DOI] [PubMed] [Google Scholar]
  10. Damschroder L, Aron D, Keith R, Kirsh S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Damschroder LJ, Hagedorn HJ. A guiding framework and approach for implementation research in substance use disorders treatment. Psychol Addict Behav. 2011;25:194–205. doi: 10.1037/a0022284. [DOI] [PubMed] [Google Scholar]
  12. Ducharme LJ, Chandler RK, Wiley TRA. Implementing drug abuse treatment services in criminal justice settings: introduction to the CJDATS study protocol series. Health Justice. 2013;1:5. doi: 10.1186/2194-7899-1-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Edmonson AC. Speaking up in the operating room: how team leaders promote learning in interdisciplinary action teams. J Manag Stud. 2003;40:1419–1452. [Google Scholar]
  14. Fletcher BW, Lehman WEK, Wexler HK, Melnick G, Taxman FS, Young DW. Measuring collaboration and integration activities in criminal justice and substance abuse treatment agencies. Drug Alcohol Depend. 2009;103S:S54–S64. doi: 10.1016/j.drugalcdep.2009.01.001. [DOI] [PubMed] [Google Scholar]
  15. Friedmann PD, Taxman FS, Henderson CE. Evidence-based treatment practices for drug-involved adults in the criminal justice system. J Subst Abuse Treat. 2007;32:267–277. doi: 10.1016/j.jsat.2006.12.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Garner BR. Research on the diffusion of evidence-based treatments within substance abuse treatment: A systematic review. J Subst Abuse Treat. 2009;36:376–399. doi: 10.1016/j.jsat.2008.08.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Glynn RJ, Brookhart MA, Stedman M, Avorn J, Solomon DH. Design of cluster-randomized trials of quality improvement interventions aimed at medical care providers. Med Care. 2007;45:S38–43. doi: 10.1097/MLR.0b013e318070c0a0. [DOI] [PubMed] [Google Scholar]
  18. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629. doi: 10.1111/j.0887-378X.2004.00325.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Hedeker D, Gibbons RD, Flay BR. Random-effects regression models for clustered data with an example from smoking prevention research. J Consult Clin Psychol. 1994;62:757–765. doi: 10.1037//0022-006x.62.4.757. [DOI] [PubMed] [Google Scholar]
  20. Henderson CE, Taxman FS, Young D. A Rasch model analysis of evidence based treatment practices used in the criminal justice system. Drug Alcohol Depend. 2008;93:163–175. doi: 10.1016/j.drugalcdep.2007.09.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Henderson CE, Young DW, Farrell J, Taxman FS. Associations among state and local organizational contexts: use of evidence-based practices in the criminal justice system. Drug Alcohol Depend. 2009;103:S23–S32. doi: 10.1016/j.drugalcdep.2008.12.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Hiller ML, Belenko S, Welsh W, Zajac G, Peters RH. Screening and assessment: an evidence-based process for the management and care of adult drug-involved offenders. In: Leukefeld CG, Gregrich J, Gullotta T, editors. Handbook on Evidence-Based Substance Abuse Treatment Practice in Criminal Justice Settings. Springer; New York: 2011. pp. 45–62. [Google Scholar]
  23. Holm S. A simple sequentially rejective multiple test procedure. Scand J Stat. 1979;6:65–70. [Google Scholar]
  24. Hooper D, Coughlan J, Mullen MR. Structural equation modeling: Guidelines for determining model fit. EJBRM. 2008;6:53–60. [Google Scholar]
  25. Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6:1–55. [Google Scholar]
  26. Kenny DA. Measuring model fit. 2014 Oct 6; Available at: http://davidakenny.net/cm/fit.htm.
  27. Kline RB. Principles and Practices of Structural Equation Modeling. Guilford; New York: 2005. [Google Scholar]
  28. Lehman WEK, Fletcher BW, Wexler HK, Melnick G. Organizational factors and collaboration and integration activities in criminal justice and drug abuse treatment agencies. Drug Alcohol Depend. 2009;103S:S65–S72. doi: 10.1016/j.drugalcdep.2009.01.004. [DOI] [PubMed] [Google Scholar]
  29. Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. J Subst Abuse Treat. 2002;22:197–209. doi: 10.1016/s0740-5472(02)00233-7. [DOI] [PubMed] [Google Scholar]
  30. Lowenkamp CT, Latessa EJ. Developing successful re-entry programs: lessons learned from the “What Works” research. Correct Today. 2005 Apr;:72–77. [Google Scholar]
  31. MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996;1:130–149. [Google Scholar]
  32. Maxwell SE, Delaney HD. Designing Experiments and Analyzing Data: A Model Comparison Perspective. Taylor and Francis; New York: 2004. [Google Scholar]
  33. McCarty D, Gustafson DH, Wisdom JP, Ford J, Dongseok C, Molfenter T, Capoccia V, Cotter F. The network for the improvement of addiction treatment (NIATx): enhancing access and retention. Drug Alcohol Depend. 2007;88:138–145. doi: 10.1016/j.drugalcdep.2006.10.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Mellow J, Christian J. Transitioning offenders to the community: a content analysis of reentry guides. J Offender Rehabil. 2008;47:339–355. [Google Scholar]
  35. Melnick G, McKendrick K, Lehman W. Feasibility of multiagency change teams involving the Department of Corrections and community substance abuse treatment agencies. Prison J in press. [Google Scholar]
  36. Miller RG. Simultaneous Statistical Inference. McGraw Hill; New York: 1981. [Google Scholar]
  37. Moore GE, Mears DP. Research Report. Urban Institute; Washington, D.C: 2003. Voices from the Field: Practitioners Identify Key Issues in Corrections-Based Drug Treatment. [Google Scholar]
  38. Muhr T, Friese S. User’s Manual for Atlas.ti 5.0. 2. Scientific Software Development; Berlin: 2004. [Google Scholar]
  39. National Institute on Drug Abuse. Treatment Planning MATRS: Utilizing the Addiction Severity Index (ASI) to Make Required Data Collection Useful. 2012. Available at: http://www.drugabuse.gov/blending-initiative/treatment-planning-matrs.
  40. Office of National Drug Control Policy. In-Custody Treatment and Offender Reentry. 2014 Available at: http://www.whitehouse.gov/ondcp/in-custody-treatment-and-reentry.
  41. Pankow J, Yang Y, Knight K, Lehman W. Optimizing continuity-of-care opportunities to reduce health risks: shared qualitative perspectives from CJDATS 2 research. Addict Sci Clin Pract. 2014;10(Suppl. 1):A46. [Google Scholar]
  42. Pelissier B, Jones N, Cadigan T. Drug treatment aftercare in the criminal justice system: a systematic review. J Subst Abuse Treat. 2007;32:311–320. doi: 10.1016/j.jsat.2006.09.007. [DOI] [PubMed] [Google Scholar]
  43. Peters RH, Rojas L, Bartoi MG. Screening and assessment of co-occurring disorders in the justice system. SAMHSA’s National GAINS Center for Behavioral Health and Justice Transformation; Delmar NY: in press. [Google Scholar]
  44. Peters RH, Greenbaum PE, Steinberg ML, Carter CR, Ortiz MM, Fry BC, Valle SK. Effectiveness of screening instruments in detecting substance use disorders among prisoners. J Subst Abuse Treat. 2000;18:349–358. doi: 10.1016/s0740-5472(99)00081-1. [DOI] [PubMed] [Google Scholar]
  45. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36:24–34. doi: 10.1007/s10488-008-0197-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Raudenbush SW. Statistical analysis and optimal design for cluster randomized trials. Psychol Methods. 1997;2:173–185. doi: 10.1037/1082-989x.5.2.199. [DOI] [PubMed] [Google Scholar]
  48. Raudenbush SW, Bryk AS. Hierarchical Linear Models: Applications and Data Analysis Methods. Second. Sage; Newbury Park, CA: 2002. [Google Scholar]
  49. Rice WR. Analyzing tables of statistical tests. Evol. 1989;43:223–225. doi: 10.1111/j.1558-5646.1989.tb04220.x. [DOI] [PubMed] [Google Scholar]
  50. Rogers E. Diffusion of Innovations. Fifth. Free Press; New York: 2003. [Google Scholar]
  51. Roosa M, Scripa JS, Zastowny TR, Ford JH. Using a NIATx based local learning collaborative for performance improvement. Eval Program Plann. 2011;34:390–398. doi: 10.1016/j.evalprogplan.2011.02.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Rossello J, Rawson RA, Zarza MJ, Bellows A, Busse A, Saenz E, Freese T, Shawkey M, Carise D, Ali R, Ling W. United Nations Office on Drugs and Crime International Network of Drug Dependence Treatment and Rehabilitation Resource Centers: Treatnet. Subst Abuse. 2010;31:251–263. doi: 10.1080/08897077.2010.514243. [DOI] [PubMed] [Google Scholar]
  53. Shafer MS. Effectiveness of an Organizational Process Improvement Intervention (OPII) for improving the assessment, case planning and referral processes for offenders; Paper presented at the Addiction Health Services Research (AHSR) conference; Portland. October 25, 2013.2013. [Google Scholar]
  54. Shafer MS, Hiller M. Facilitator Manual. Arizona State University, Center for Applied Behavioral Health Policy; Phoenix, AZ: 2010. Improving Best Practices in Assessment and Service Planning: Organizational Process Improvement Intervention (OPII) [Google Scholar]
  55. Shafer MS, Prendergast M, Melnick G, Stein LA, Welsh WN, the CJDATS Assessment Workgroup A cluster randomized trial of an organizational process improvement intervention for improving the assessment and case planning of offenders: a Study Protocol. Health Justice. 2014;2:1. doi: 10.1186/2194-7899-2-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Shaffer DK. Looking inside the black box of drug courts: a meta-analytic review. Justice Q. 2011;28:493–521. [Google Scholar]
  57. Shortell SM, Marsteller JA, Lin M, Pearson ML, Wu S, Mendel P, Cretin S, Rosen M. The role of perceived team effectiveness in improving chronic illness care. Med Care. 2004;42:1040–1048. doi: 10.1097/00005650-200411000-00002. [DOI] [PubMed] [Google Scholar]
  58. Spybrook J, Raudenbush SW. An examination of the precision and technical accuracy of the first wave of group-randomized trials funded by the Institute of Education Sciences. Educ Eval Policy Anal. 2009;31:298–318. [Google Scholar]
  59. Steiger JH. Structural model evaluation and modification. Multivariate Behav Res. 1990;25:173–180. doi: 10.1207/s15327906mbr2502_4. [DOI] [PubMed] [Google Scholar]
  60. Stein LAR, Soenksen S, Welsh W, Clair M, Abdel-Salam S, Monico L, Clarke JG, Friedmann P, Gallagher C. Implementation of Organizational Change to Enhance Assessment Practices: Fidelity to Process. University of Rhode Island; 2015. Manuscript in preparation. [Google Scholar]
  61. Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, Kimmel B, Sharp ND, Smith JL. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(Suppl 2):S1–S8. doi: 10.1111/j.1525-1497.2006.00355.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Stilen P, Carise D, Roget N, Wendler A. Treatment planning MATRS: Utilizing the Addiction Severity Index (ASI) to Make Required Data Collection Useful. Kansas City, MO: Mid-America Addiction Technology Transfer Center in residence at the University of Missouri-Kansas City; 2007. [Google Scholar]
  63. Taxman FS. Reducing recidivism through a seamless system of care: components of effective treatment, supervision, and transition services in the community. In: Knight K, Farabee D, editors. Treating Addicted Offenders: A Continuum of Effective Practices. Civic Research Institute; Kingston, NJ: 2004. pp. 32-1–32-12. [Google Scholar]
  64. Taxman FS, Cropsey KL, Young DW, Wexler H. Screening, assessment, and referral practices in adult correctional settings: a national perspective. Crim Justice Behav. 2007a;34:1216–1234. doi: 10.1177/0093854807304431. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Taxman FS, Perdoni M, Harrison LD. Drug treatment services for adult offenders: the state of the state. J Subst Abuse Treat. 2007b;32:239–254. doi: 10.1016/j.jsat.2006.12.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Taxman FS, Thanner M. Risk, need, and responsivity (RNR): it all depends. Crime Delinq. 2006;52:28–51. doi: 10.1177/0011128705281754. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Taxman FS, Young D, Wiersema B, Rhodes A, Mitchell S. The National criminal justice treatment practices survey: methods and procedures. J Subst Abuse Treat. 2007c;32:225–238. doi: 10.1016/j.jsat.2007.01.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27:299–309. doi: 10.1046/j.1365-2710.2002.00430.x. [DOI] [PubMed] [Google Scholar]
  69. Welsh WN, Harris PW. Criminal Justice Policy and Planning. Fourth. Elsevier/Anderson; Cincinnati: 2012. [Google Scholar]
  70. Wenzel SL, Turner SF, Ridgely MS. Collaborations between drug courts and service providers: characteristics and challenges. J Crim Justice. 2004;32:253–263. [Google Scholar]
