Author manuscript; available in PMC: 2021 Jun 1.
Published in final edited form as: Adm Policy Ment Health. 2021 Mar;48(2):233–249. doi: 10.1007/s10488-020-01066-7

Critical Factors Influencing Interorganizational Relationships Between Juvenile Probation and Behavioral Health Agencies

Wayne N Welsh 1, Richard Dembo 2, Wayne E K Lehman 3, John P Bartkowski 4, Leah Hamilton 1, Carl G Leukefeld 5, Tisha Wiley 6
PMCID: PMC7854784  NIHMSID: NIHMS1608319  PMID: 32666324

Abstract

Although interorganizational relationships (IORs) are essential to the effective delivery of human services, very little research has examined relationships between juvenile justice agencies and behavioral health providers, and few studies have identified the most critical organizational and individual-level characteristics influencing IORs. Across 36 sites, juvenile probation officials (n = 458) and community behavioral health providers (n = 91) were surveyed about characteristics of their agencies, themselves, and IORs with each other. Generalized Linear Mixed Models were used to analyze the data. The strongest predictors included Perceived Organizational Support and individual Adaptability. Implications for research, theory and practice are discussed.

Keywords: Interorganizational Relationships, Implementation, NIDA, Juvenile Justice, Juvenile Probation, Behavioral Health


Interorganizational relationships (IORs) are essential to the effective coordination and delivery of human services, and they occupy a central place in conceptual frameworks of implementation science (Aarons, Hurlburt & Horwitz, 2011; Palinkas, Fuentes, Finno, Garcia, Holloway, & Chamberlain, 2014; Palinkas, Holloway, Rice, Brown, Valente, & Chamberlain, 2013). Recent studies have illustrated a need to better understand the complexities of IORs and their impacts on service delivery for justice-involved individuals (Welsh, Knudsen et al., 2016; Welsh, Prendergast, Knight, Knudsen, Monico, et al., 2016).

A focus on relationships between juvenile justice agencies and behavioral health agencies is critical to advance understanding of the factors associated with the effective delivery of substance use services to justice-involved youth engaged in substance misuse. Research has indicated that 11% or fewer of substance-using youth receive any treatment (Lipari, Park-Lee, & Van Horn, 2016; Winters, Botzet, & Fahnhorst, 2011). Roughly 70% of arrested juveniles have previously misused substances, and over one third present with a substance use disorder, in part because of their early onset of substance use. Although such youth are at high risk for recidivism, juvenile justice agencies often exhibit a lack of interorganizational ties to behavioral health providers and minimal use of evidence-based screening instruments (see Knight et al. 2016 for review). Consequently, the need to delineate the factors that influence effective coordination of substance use treatment services cannot be overstated. The present study explored individual and agency-level factors that predict key dimensions of service coordination between juvenile justice and community-based behavioral health care providers, and identified key facilitators and barriers affecting IORs across individuals and agencies in 36 sites participating in a National Institutes of Health (NIH)/National Institute on Drug Abuse (NIDA) cooperative partnership.

Background of the Study

JJ-TRIALS is an acronym for Juvenile Justice: Translational Research on Interventions for Adolescents in the Legal System. It is a cooperative effort involving research centers at Columbia University, Emory University, Mississippi State University, Temple University, Texas Christian University, the University of Kentucky, their juvenile justice agency and behavioral health partners, and a coordinating center at Chestnut Health Systems in Illinois (Knight et al., 2016). JJ-TRIALS focused on improving services for youth on probation with substance misuse issues, including tracking their involvement along a service continuum of interconnected steps: screening, assessment, service referral, initiating and engaging in treatment, and follow-up care (Belenko et al., 2017).

The overall study design was grounded in the Aarons et al. (2011) Exploration-Preparation-Implementation-Sustainment (EPIS) framework, which identifies major influences on the adoption of evidence-based practices (Knight et al., 2016). Although the current paper examined baseline data from a staff survey only, the larger study employed a head-to-head cluster randomized trial whereby sites were randomly assigned to one of two conditions: a Core or Enhanced Implementation Intervention (Knight et al., 2016). Prior to randomization, staff in both conditions were trained on best practices for screening, assessment, referral and treatment for substance use and the use of data to make decisions and develop goals.

Literature Review

In this section, we discuss some of the major theoretical and empirical foundations related to interorganizational relationships between justice agencies and behavioral health providers. In particular, we are interested in implementation science approaches that focus on IORs as crucial factors influencing the use of evidence-based practices. Wherever possible, we have emphasized literature that focuses on juvenile justice populations and settings. However, there is a relative dearth of such research in juvenile (as opposed to adult) justice settings, and key studies do not always separate neatly into studies of adult versus juvenile populations (Aarons & Palinkas; Aarons et al., 2011; Becan et al., 2020; Palinkas et al., 2014).

IORs between justice agencies and other service providers (e.g., education, behavioral health, substance abuse treatment) can vary considerably in terms of formality and structure (Fletcher et al., 2009; see also Bitar & Gee, 2010; Taxman, Henderson, & Belenko, 2009). For example, IORs differ regarding the nature and frequency of service integration, types of linkages (e.g., strategic alliances, joint ventures, cross-sector partnerships, coalitions, and consortia), and degree of system cohesion (e.g., Cropper, Ebers, Huxham, & Ring, 2008; Fletcher, Lehman, Wexler, Melnick, Taxman, & Young, 2009; Horwath & Morrison, 2007; Parmigiani & Rivera-Santos, 2011).

Interorganizational networks among youth-serving organizations may include children’s mental health and social service providers, youth drug treatment agencies, juvenile courts and probation, and schools (e.g., Aarons & Palinkas, 2007; Bowser et al., 2018; Chuang & Lucio, 2011; Lawrence, 1995; Smith & Mogro-Wilson, 2008). For juvenile justice agencies, resource sharing between partners, adherence to a structured collaboration strategy, and clearly defined organizational roles are often keys to forming strong IORs (Chuang & Wells, 2010; Gil-Garcia, Schneider, Pardo, & Cresswell, 2005; Rivard, Johnsen, Morrissey, & Starrett, 1999; Roman, Butts, & Roman, 2011).

Research often describes interorganizational infrastructures composed of five key components: information exchanges, cross-agency client referrals, networking protocols, interorganizational councils, and integrated services (Howell, Kelly, Palmer, & Mangum, 2004). Such infrastructures are dynamic systems that often focus on critical transition points for clients. Therefore, robust interorganizational networks assist clients through developmental, service, and systemic transitions (Polgar, Cabassa, & Morrissey, 2016). The broader external environment (local, regional, and state politics; funding mechanisms; laws and policies) also may influence the structure and quality of interorganizational networks (Palinkas et al., 2014).

Although prior research has shed light on the structure and dynamics of IORs across diverse human service agencies, very limited research has examined service coordination between justice agencies (e.g., probation and parole) and behavioral health providers, and few studies have focused on identifying critical organizational and individual-level characteristics that influence IORs. Below, we briefly review research findings on the key organizational and individual influences on IORs.

Organizational Characteristics Influencing IORs

Several studies have described the characteristics of organizations that actively engage in IORs. Lehman, Fletcher, Wexler, and Melnick (2009) examined organizational factors related to collaboration between criminal justice agencies (community corrections, jails, and prisons) and substance abuse treatment programs. Collaboration and integration were assessed with a measure developed by Fletcher et al. (2009), using data from the National Criminal Justice Treatment Practices Survey (NCJTPS; Taxman et al., 2007) and based on Konrad’s (1996) hierarchical services integration framework. A cooperation and coordination scale included low-structure activities such as sharing information, agreeing to similar requirements for program eligibility, written agreements for sharing space, and holding joint staffings. A collaboration and consolidation scale represented higher-structure activities such as joint policy and procedure manuals, pooled funding, and sharing budgetary and operational oversight. Analyses examined organizational activities related to integration and collaboration with substance abuse treatment agencies separately for prisons, jails, and community corrections agencies. Overall, facilities with more formal and structured collaborative activities tended to be larger, served more specialized populations with a greater diversity of needs, and offered more substance abuse treatment and correctional programming, indicating that as more services are needed, collaborative relationships are more likely to develop to address those needs.

In a key study examining the role of collaborations in implementing evidence-based practices (EBP) among public agencies serving abused and neglected youth (Palinkas et al., 2014), analyses of semi-structured interview data revealed that effective interorganizational implementation collaborations were associated with the availability of funding to hire staff and sustain the EBP; the size of the county in which the collaboration took place (while larger counties tended to have more resources, members in smaller counties may be more familiar with each other); the presence of shared clients across agencies; and governmental mandates. Organizational characteristics that contributed to collaboration included a common language between organizations, common recognition of the problem and agreement on the issues to be solved, common goals and values, organizational commitment to collaboration, equitable division of labor between collaborating organizations, and supportive leadership.

Individual Characteristics Influencing IORs

Aarons et al. (2011) and Proctor et al. (2007) have emphasized the need to examine staff-level characteristics as important influences on interagency collaboration and implementation outcomes. To date, however, little empirical research has examined whether some staff characteristics might make individuals better suited for, or more receptive to, group and collaborative work than others. For example, older staff with more experience and education may be more open to innovation and collaboration than younger, less experienced staff (Aarons et al., 2011). Potentially important individual-level influences suggested by extant research include demographic factors (gender, age, race/ethnicity, education) as well as the experiences of staff related to their professional roles, including adaptability, openness to change, communication skills, job satisfaction, efficacy, stress, and burnout (Aarons et al., 2011; Courtney, Joe, Rowan-Szal, & Simpson, 2007; Fuller, Rieckmann, Nunes, Miller Arfken, Edmundson, & McCarty, 2007; Klein & Sorra, 1996; Knight et al., 2019; Simpson, Joe, & Rowan-Szal, 2007; Smith and Mogro-Wilson, 2007).

In a study of relationships between adult probation/parole (P/P) personnel and community-based drug treatment providers across 20 different sites, survey data examined individual predictors of service coordination (Welsh, Prendergast, et al., 2016). For the P/P cohort, the strongest predictors of IORs with treatment providers were efficacy (+) and burnout (−). Higher job satisfaction correlated (+) with frequency of communication, while lower levels of education were associated with lower levels of coordination. For the treatment provider cohort, the strongest correlates of IORs with P/P were staff adaptability (+), job satisfaction (+) and burnout (−). White respondents (−) were less likely to perceive high levels of resource dependence while Black respondents perceived higher levels (+) of effectiveness of relationship and quality of communication. Males reported greater frequency of communication (+) with P/P personnel than females. Similar to an earlier study of child welfare/substance abuse treatment partnerships (Smith and Mogro-Wilson 2007), individual-level variables accounted for a greater portion of the explained variance (67–90%) than organizational variables (12–33%). A focus on the individual as well as organizational characteristics influencing IORs thus seems critical, especially in juvenile justice settings where there has been a dearth of such research.

Informed by implementation science literature, the major aim of this study was to explore individual and agency-level factors associated with key dimensions of IORs (e.g., frequency and quality of communication, effectiveness of relationship, etc.) between juvenile justice (JJ) agencies and community-based Behavioral Health (BH) providers, and to identify the strongest facilitators/barriers to IORs. Doing so can enhance theory in this area, but also allow agency leaders and researchers to target critical variables for change prior to or during an intervention that requires interorganizational coordination to improve client outcomes.

Methods

Study Design

Baseline staff survey data were collected from juvenile justice staff and behavioral health staff participating in the JJ-TRIALS study (Knight et al., 2016). The baseline staff survey examined IORs between local juvenile justice agencies and community adolescent behavioral health service providers. Staff respondents were recruited from 36 sites in 7 states (Florida, Georgia, Kentucky, Mississippi, New York, Pennsylvania, and Texas). Sites were defined as a county or regional service region. Each research site included a juvenile justice office, such as a county youth court or probation department, and at least one community-based behavioral health services agency. Specific criteria for inclusion included the following (see also Knight et al., 2016): (a) each probation agency served (at least in part) youth who were under JJ supervision in the community and who were eligible to receive behavioral health services in non-secure settings; (b) the site included at least one treatment provider in service areas where the JJ agency did not provide treatment directly; and (c) the site had a minimum of 8 staff.

Participants

Prior to the random assignment of sites to either experimental or comparison conditions (Knight et al., 2016), each site formed an Interorganizational Workgroup composed of 8–12 representatives from JJ and BH agencies. Composition of the workgroups included JJ leadership (e.g., Chief Probation Officer), BH leadership (e.g., Program Director), other JJ and BH agency staff, and other key stakeholders who might be involved in subsequent process improvement efforts (e.g., Juvenile Court Administrator, JJ Data Manager). The interorganizational workgroup participated in research planning at each site, including setting up dates for initial Leadership and Staff Orientations where researchers explained the purpose of the study and solicited participation in the baseline staff survey. The interorganizational workgroup also helped identify relevant staff to invite to participate in the project at each site (Knight et al., 2016).

Informed consent was obtained from all staff who participated (Knight et al., 2016). Each JJ-TRIALS research center was given some latitude in the mode of survey administration (e.g., online or in-person administration), and each set its own parameters for sending survey reminders per the procedures approved by their local Institutional Review Boards (IRBs). Some participants completed the baseline survey with a Qualtrics-based web survey, while others completed paper surveys in person at the Orientation meetings or mailed back paper surveys distributed during these orientation meetings. No financial incentives were provided to any respondents. Administration time for the survey was approximately 30–45 minutes.

Lastly, and as expected due to the overall cooperative agreement study design and parameters developed by NIDA, participation by state juvenile justice agencies was required (Knight et al., 2016). As per the grant award requirements, juvenile probation staff were the primary research participants. Consequently, juvenile justice staff outnumbered behavioral health agency staff participants, because behavioral health staff were recruited and consented later, during subsequent study implementation. A total of 458 juvenile justice staff and 91 behavioral health staff (n = 549) consented and completed the baseline survey. Response rates were 88% for JJ staff and 72% for BH staff. Breakdowns by race (see Table 1) included 71% White, 24% African American, and 5% other. Only 12% of the sample reported Hispanic ethnicity. By gender, the sample consisted of 39% males and 61% females. Only 2% of the sample had a high school education only; 57% had a B.A. or Associates degree; 41% had a M.A. or Ph.D. degree. The sample ranged in age from 23 to 69 years (mean = 41.5). Years of experience in the field ranged from 0–40 (M = 14.6); years with current employer ranged from 0–39 (M = 11.0).

Table 1.

Descriptive Statistics: Individual- and Site-Level Variables

Total (N=549) JJ (N=458) BH (N=91) JJ v. BH
N Mean S.D. N Mean S.D. N Mean S.D. t or F p
Independent Variables: Sites (L2) t p
 M Quality 36 32.15 2.36 36 32.15 2.36 31 32.46 2.22 −0.79 .435
 M Innovation and Flexibility 36 29.12 2.90 36 29.12 2.90 31 29.41 2.67 −0.60 .554
 M Performance Feedback 36 29.24 2.59 36 29.24 2.59 31 29.53 2.46 −0.68 .499
 M Organizational Climate 36 29.98 2.36 36 29.98 2.36 31 30.27 2.15 −0.75 .459
 M Communication 36 32.67 3.65 36 32.67 3.84 31 32.72 3.88 −0.08 .935
 M Stress 36 33.74 4.88 36 33.74 4.88 31 33.38 4.93 0.44 .664
 M Encourages Innovation 36 40.25 3.12 36 40.24 3.12 31 40.04 3.21 0.41 .683
 M Program Needs 36 34.15 4.26 36 34.15 4.26 31 34.24 4.30 −0.12 .904
Independent Variables: Subjects (L1) F p
 Perceived Organizational Support 29.06 5.92 28.39 5.66 32.49 6.06 38.77 .001 *
 Satisfaction 39.82 6.56 39.48 6.26 41.56 7.77 7.74 .006 *
 Adaptability 38.82 5.99 38.46 5.96 40.58 5.83 9.59 .002 *
 Race
  White (0=N, 1=Y) 0.71 0.45 0.71 0.45 0.70 0.46 0.04 .842
 Hispanic (0=N, 1=Y) 0.12 0.32 0.13 0.33 0.06 0.23 3.68 .055
 Gender (0=F, 1=M) 0.39 0.49 0.41 0.49 0.26 0.44 8.00 .005 *
 Highest Degree
  Bachelors/Associates (0=N, 1=Y) 0.57 0.50 0.63 0.48 0.22 0.41 56.32 .001 *
 Age 41.54 10.11 41.28 9.63 42.83 12.21 1.74 .187
#Years Experience 14.61 8.65 15.03 8.48 12.50 9.26 6.52 .011 *
Dependent Variables F p
 Resource Exchanges 36.59 9.02 36.63 8.86 36.43 9.88 0.03 .854
 Resource Needs 41.66 8.26 41.22 8.40 43.86 7.14 7.62 .006 *
 Effectiveness of Relationship 39.44 8.43 38.87 8.34 42.24 8.35 12.34 .001 *
 Quality of Communications 42.66 9.83 42.05 10.06 45.55 8.08 9.68 .002 *
 Challenges to Coordination 39.39 9.79 39.21 9.84 40.26 9.55 0.86 .354
 Frequency of Communication 13.58 9.12 12.84 9.03 17.13 8.77 17.19 .001 *
* p < .05. M = Site Mean.

Note: Due to multicollinearity (r > .70), several independent variables were dropped prior to conducting HLM analyses: Race - African American (0=N, 1=Y), Race - Other (0=N, 1=Y), Highest Degree – High School Only (0=N, 1=Y), Highest Degree - Post Graduate (0=N, 1=Y), and # Years with Current Employer.

Measures

Independent measures.

The scales comprising the staff survey were selected based on their relevance to research and theory. All scales demonstrated good reliability and validity across prior studies (see Broome, Knight, Edwards, & Flynn, 2009; Garner, Knight, & Simpson, 2007; Lehman, Greener, Rowan-Szal, & Flynn, 2012; Lehman, Greener, & Simpson, 2002; Shortell et al., 2004; Taxman, Young, Wiersema, Rhodes, & Mitchell, 2007). Scores on the five-point Likert scales generally ranged from 10 to 50, with scores calculated so that a higher score indicated a more positive appraisal of the construct.
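The scale-construction logic described above (summed five-point Likert items, with internal consistency reported as Cronbach's alpha) can be sketched as follows. This is an illustrative computation only, not the authors' code; the function name and sample data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents answering a 4-item, 5-point Likert scale
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(6, 4))
alpha = cronbach_alpha(scores)
```

In practice, higher alphas (e.g., the .74–.92 range reported for these scales) indicate that items covary enough to justify summing them into a single scale score.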

As is common practice in multilevel studies, scales were separated into organizational and individual variables (Marsh et al., 2012; Raudenbush & Bryk, 2002). Scholars of organizational behavior argue that the construct of organizational climate arises through individuals’ shared perception of their work unit or organization (e.g., Anderson, 1982; Owens, 1987; Van Bruggen, Lilien & Kacker, 2002; Welsh, Stokes & Greene, 2000). These group-level perceptions are substantively different from individual characteristics, and are thus better partitioned and analyzed through a nested design (i.e., individuals within organizations) in order to minimize random error (Marsh et al., 2012). Organizational variables were represented by aggregated ratings by staff of some characteristic of the unit or organization (e.g., “Staff members at your unit work together as a team”). The reference point for agency-level measures was the organization (Marsh et al., 2012). In contrast, if questions asked the respondent about their own individual characteristics or experiences (e.g., “You are satisfied with your present job”), those items and scales were treated as individual-level (level-1) variables.

Organizational variables.

The agency-level survey questions probed aspects related to organizational change, drawing particularly on prior studies of public or health service agencies (Aarons et al., 2011; Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004; Lehman, Greener, & Simpson, 2002; Proctor, Landsverk, Aarons, Chambers, & Glisson, 2009; Welsh, Prendergast, et al., 2016). In addition to identifying the kind of agency (Juvenile Justice or Behavioral Health), other measures addressed organizational climate and organizational functioning.

Prior to aggregating individual data, the degree to which individuals at a given site were substantially in agreement with one another was examined using the Intraclass Correlation (ICC; Hofmann, Griffin, & Gavin, 2000). The ICC is defined as the proportion of observed variance that can be explained by between-groups differences. Scales with small ICCs (e.g., <.10) suggest lower consensus among respondents within a given site, while scales with higher ICCs suggest that staff ratings are similar within a given site, which lends credence to the aggregation of individual level perceptions to form the organizational variables (Hofmann, Griffin & Gavin., 2000; Van Bruggen, Lilien & Kacker, 2002). Most ICCs were within acceptable ranges (reported below), although the ICC for one scale (Encouragement of Innovation) was somewhat low (ICC = .05), suggesting caution in interpreting these results. Because of theoretical interest, this measure of Leadership was retained (Raudenbush, 1997; Spybrook & Raudenbush, 2009).
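A minimal sketch of the ICC described above, using the one-way ANOVA decomposition ICC(1) = (MSB − MSW) / (MSB + (m − 1)·MSW), where m is the average group size. This is an assumed illustration, not the authors' code; their exact estimator may differ, and the function name and example data are hypothetical.

```python
import numpy as np

def icc1(groups):
    """ICC(1): proportion of variance attributable to between-site differences,
    computed from a one-way ANOVA on a list of per-site score arrays."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)          # total respondents
    k = len(groups)                          # number of sites
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)   # between-sites
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)        # within-sites
    msb = ssb / (k - 1)
    msw = ssw / (n - k)
    m = n / k                                # average site size
    return (msb - msw) / (msb + (m - 1) * msw)
```

Sites whose members rate the organization similarly yield higher ICCs, which (as the text notes) supports aggregating individual perceptions into site-level variables.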

Organizational climate measures were based on Patterson and colleagues’ Organizational Climate measure (Patterson et al., 2005). Scales included an overall summary score for Organizational Climate (15 items; α =.92; ICC = .14), as well as its three constituent subscales: a 4-item measure of Organizational Quality (α =.83; ICC = .08), a 6-item measure of Innovation and Flexibility (α =.92; ICC = .13), and a 5-item measure of Performance Feedback (α =.84; ICC = .13). However, issues with multicollinearity required us to drop the three constituent climate subscales. Multicollinearity is a particular concern with level-2 variables, due to the smaller sample sizes that are characteristic of higher levels of analysis (Raudenbush, 1997; Raudenbush & Bryk, 2002). The three constituent subscales were highly inter-correlated with each other (r = .62 – .93) and with the overall (15-item) measure of organizational climate (r = .64 – .83); thus we retained the more meaningful overall climate scale.

We also used several scales from the Texas Christian University Survey of Organizational Functioning and Leadership (TCU Institute of Behavioral Research, 2013). All questions were asked using 5-point Likert scales, ranging from Disagree strongly (1) to Agree strongly (5). This tool assessed four important constructs: Communication (6 items; α =.87; ICC = .15), Stress (4 items; α =.87; ICC = .13), Encouragement of Innovation (4 items; α =.92; ICC = .05), and Program Needs (7 items; α =.90; ICC = .17). The Communication scale addressed respondents’ perceptions of how well they can communicate with their management and vice versa (e.g., “The formal communication channels work very well here”). The Stress section assessed respondents’ perceptions of how work-related stress impacts organizational effectiveness (e.g., “Staff members are under too many pressures to do their jobs effectively”). This scale was reverse scored, so that higher scores reflected less stress. The Encouragement of Innovation section addressed the degree to which respondents perceived that their immediate supervisor encouraged new ideas and opportunities for change (e.g., “Your supervisor encourages staff to try new ways to accomplish their work”). Finally, the Program Needs section asked respondents to reflect on whether their organization adequately provided youth with necessary services (e.g., “Your organization needs additional guidance in assessing youth needs”). This scale was reverse scored, so that higher scores reflected fewer perceived needs.
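The reverse scoring mentioned above is a simple transformation: on a 5-point item, responses are mapped 1↔5 and 2↔4 so that higher scores reflect less stress (or fewer perceived needs). A hypothetical helper, shown only for illustration:

```python
def reverse_score(x: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Reverse a Likert response: 1<->5, 2<->4, 3 stays 3 on a 5-point scale."""
    return scale_max + scale_min - x
```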

The Communications subscale was dropped from multivariate analyses due to multicollinearity with other level-2 variables (Raudenbush, 1997; Raudenbush & Bryk, 2002). Communication was highly correlated with the overall measure of Organizational Climate (r = .83). Organizational Climate was retained due to its prominent status in implementation science models (Aarons et al., 2011; Glisson & Green, 2006; Proctor et al., 2009) and empirical examinations of IORs (Cropper et al., 2008; Lehman et al., 2009; Palinkas et al., 2013, 2014).

Individual variables.

Individual level variables asked respondents to answer questions about their individual characteristics and experiences. Basic demographics such as gender, ethnicity, race, age, and education level were included as covariates. Other questions asked whether respondents were from juvenile justice or treatment provider agencies, current job level (e.g., agency director, supervisor, probation officer, case manager, counselor, and support/other), and length of employment (in years) at both at their current job and in the field in general.

Due to multicollinearity among categorical independent variables, several variables were culled prior to beginning HLM analyses: Race - African American (0=N, 1=Y), Race - Other (0=N, 1=Y), Highest Degree - Post Graduate (0=N, 1=Y), and Number of Years with Current Employer. The Race variable thus assessed White (1=Yes, 0=No) and the Education variable assessed whether the respondent had a Bachelor’s or Associate degree only (1=Yes, 0=No). No further issues of multicollinearity were detected.1
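The multicollinearity screen described above (identifying predictor pairs with r > .70 and dropping one member of each pair) might look like the following sketch. `collinear_pairs` is a hypothetical helper, not the authors' code, and the variable names are illustrative.

```python
import numpy as np

def collinear_pairs(X, names, threshold=0.70):
    """Flag predictor pairs whose absolute Pearson correlation exceeds the cutoff."""
    r = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)  # columns = predictors
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) > threshold:
                flagged.append((names[i], names[j], round(float(r[i, j]), 2)))
    return flagged
```

Analysts would then retain the substantively more meaningful variable from each flagged pair, as was done here for the race and education dummies.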

As noted above, three scales derived from the Patterson et al. (2005) and TCU instruments were conceptualized as individual-level (level-1) rather than agency-level (level-2) measures (Marsh et al., 2012), since the point of reference in the question’s wording was the individual (e.g., “My organization really cares about my well-being”) rather than the unit or organization (e.g., “This organization is always looking to achieve the highest standards of quality”). The eight-item Perceived Organizational Support scale (α =.90) evaluated participants’ perception of how much help and support they received from their organization (Rhoades, Eisenberger & Armeli, 2001). Examples included “My organization strongly considers my goals and values” and “Help is available from my organization when I have a problem.” The Satisfaction scale (6 items; α =.82) addressed the individual’s feelings of satisfaction within their organization (TCU Institute of Behavioral Research, 2013). Examples of items included “You are satisfied with your present job” and “You feel appreciated for the job you do.” The Adaptability scale (4 items; α = .74) addressed an individual’s ability to take on new ideas or practices within their work role (TCU Institute of Behavioral Research, 2013). Examples included “You are willing to try new ideas even if some staff members are reluctant” and “Learning and using new procedures are easy for you.”

Dependent measures.

The 28-item IOR survey was administered as a part of the baseline staff and leadership survey for both juvenile justice and treatment agencies. The IOR survey instrument was adapted from the Van de Ven and Ferry (1980) validated instrument, which measures relationships among human service agencies. The IOR survey also overlaps with the IOR survey used in the CJ-DATS II studies (Welsh, Prendergast, et al., 2016), which examined adult criminal justice and treatment agency relationships. Except for Frequency of Communications, discussed below, all of the items were assessed by using multiple Likert-scale questions (e.g., 1 = not at all to 5 = very much).

Prior to examining hypotheses, Factor Analyses (FA) examined the validity and psychometric properties of the IOR scales. Principal Axis Factoring with Oblique Rotation (to allow the factors to correlate) was used to examine whether the factor structure of the subscales might be improved. The final solution (28 items) was a very good fit to the data. FA extracted 6 factors accounting for 61.4% of the variance. The Kaiser-Meyer-Olkin measure of sampling adequacy = .882 (indicating a strong fit), and the null hypothesis of Bartlett’s Test of Sphericity was rejected (as desired): χ2 = 9334.18 (p < .001). All of the communalities were > .40. Interpretation of the rotated pattern matrix proceeded as recommended (Tabachnick & Fidell, 2013). The overall solution was very clean (i.e., no cross-loadings > .30) and highly interpretable. The IOR items are shown in Table 2, along with factor loadings and subscale reliabilities.

Table 2.

IOR Items, Factor Loadings and Subscale Reliabilities

Survey Subscales and Items (factor loading for each item shown in parentheses)

1. Resource Exchanges (5 items; alpha = .85)
 a. To what extent does your agency send youth with alcohol/drug problems to the local treatment provider? (.64)
 b. To what extent does your agency send results from screening youth for alcohol/drug problems to the local treatment provider? (.75)
 c. To what extent does your agency send results from full assessments of youth for alcohol or drug problems to the local treatment provider? (.74)
 d. To what extent does your agency receive information from the local treatment provider about whether referred youth initiated treatment? (.76)
 e. To what extent does your agency receive information from the local treatment provider about whether referred youth participated/completed treatment? (.78)
2. Resource Needs (2 items; alpha = .81)
 a. For the local treatment provider to attain its goals, to what extent does it need services, resources or support from your organization? (.79)
 b. To attain your agency’s goals, to what extent does your agency need services, resources, or support from the local treatment provider? (.84)
3. Effectiveness of Relationship (9 items; alpha = .92)
 a. To what extent does the treatment agency carry out commitments it agreed to with your agency? (.65)
 b. To what extent do you feel the relationship between your agency and this treatment agency is productive? (.79)
 c. To what extent is the time and effort spent in developing and maintaining the relationship with this treatment agency worthwhile? (.65)
 d. To what extent are you satisfied with the relationship between your agency and this treatment agency? (.79)
 e. How well informed are you about the specific goals and services that are provided by this treatment agency? (.78)
 f. To what extent does your agency collaborate with this treatment agency in planning delivery of services to youth? (.76)
 g. Think of one person in the treatment agency with whom you have had the most contact. How well are you acquainted with this person? (.68)
 h. When you wanted to communicate with persons in this agency during the past six months, would you characterize your communications with persons in this treatment agency as high quality? (.72)
 i. When you wanted to communicate with persons in this agency during the past six months, were they willing to engage in frank, open and civil discussion? (.68)
4. Quality of Communication (2 items; alpha = .88)
 a. When you wanted to communicate with persons in this agency during the past six months, how much difficulty have you had in getting in touch with them?* (.77)
 b. When you wanted to communicate with persons in this agency during the past six months, how often did your messages ‘get lost’ or not get a follow through response, such as a return call or email?* (.79)
5. Challenges to Collaboration (6 items; alpha = .89)
 a. To what extent is your agency’s relationship with the treatment provider hampered by concerns about youth confidentiality and release of information?* (.73)
 b. To what extent is your agency’s relationship with the treatment provider hampered by different sources of funding?* (.74)
 c. To what extent is your agency’s relationship with the treatment provider hampered by different staff backgrounds or experiences?* (.77)
 d. To what extent is your agency’s relationship with the treatment provider hampered by time constraints due to your other responsibilities?* (.75)
 e. To what extent is your agency’s relationship with the treatment provider hampered by your own agency’s electronic records or data system?* (.77)
 f. To what extent is your agency’s relationship with the treatment provider hampered by conflicting organizational goals?* (.74)
6. Frequency of Collaboration (4 items; alpha = .88)
 a. During the past six months, how frequently have you sent or received material (of any kind) by mail, courier, or fax with anyone in this treatment agency? (.75)
 b. During the past six months, how frequently have you had personal face-to-face contact with anyone in this treatment agency? (.69)
 c. During the past six months, how frequently have you exchanged telephone calls with anyone in this treatment agency? (.90)
 d. During the past six months, how frequently have you exchanged emails with anyone in this treatment agency? (.86)

Notes. Survey items shown are from the Juvenile Justice Staff Survey version; the same items were posed to Behavioral Health staff (with slightly different wording to reflect membership in different agencies). All subscales utilized a 5-point Likert Scale (1 = Not at all to 5 = Very much/Very frequently) except for Frequency of Collaboration, which utilized a nine-point ordinal scale (Never = 0 to About Every Day = 8) based upon the original scale created by Van de Ven and Ferry (1980).

* = reverse scored. Factor loadings < .30 were omitted to ease interpretability.

Nine questions asked about the Effectiveness of the Relationship (α =.92) between juvenile justice and treatment providers. Questions addressed how familiar each partner was about the other agency’s goals and services, follow-through with commitments, productivity of the relationship, the value of the time and effort put into the relationship between justice and treatment agencies, and satisfaction with the relationship.

The six items on the Challenges to Collaboration scale (α =.89) addressed potential areas where each agency may find difficulties in working with their partner. These included issues of confidentiality, differing funding streams, differences in the backgrounds of staff from each agency, differences in organizational goals, data management system issues, and time management issues. All items were reverse scored, so that higher scores reflected fewer perceived challenges.

The five-item Resource Exchange scale (α = .85) gathered information on what resources were exchanged between agencies, including how often clients were sent to treatment agencies from juvenile justice agencies, how often results from screening or full assessments were sent to the treatment agencies, and how often treatment providers sent information to the juvenile justice agencies about client initiation, participation and completion of treatment.

It was also critical to assess each agency’s perceived need for collaboration with the other agency. Two items asked about Resource Needs (α = .81): the extent to which treatment providers needed services, resources, or support from juvenile justice agencies to achieve their goals, and vice versa. Two questions (both reverse-scored) in the Quality of Communication scale (α = .88) asked about the difficulty experienced in contacting partner agencies and the frequency of ‘missed or lost’ messages.
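Reverse scoring on a 5-point Likert scale simply reflects responses around the scale midpoint, so that higher scores on every item point in the same substantive direction. A small illustrative helper (not from the study’s materials):

```python
import numpy as np

def reverse_score(raw, low=1, high=5):
    """Reverse-score Likert responses: on a 1-5 scale, 1 -> 5 and 5 -> 1,
    so that higher values indicate the more favorable pole."""
    return (low + high) - np.asarray(raw)

print(reverse_score([1, 2, 3, 4, 5]))  # [5 4 3 2 1]
```

Applied to the two Quality of Communication items, high raw difficulty or frequent lost messages become low communication-quality scores.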

Finally, the Frequency of Communication scale (4 items; α =.88) asked how often the two agencies were in contact with one another, in person, by phone, by mail, and by e-mail. Unlike the other scales, this scale used a nine-point ordinal scale, following Van de Ven and Ferry (1980). For example, one item asked “During the past six months, how frequently have you exchanged telephone calls with anyone in this agency?” and response categories included zero times (0), 1 time (1), About 2 times (2), About 3 times (3), About every month (4), About every two weeks (5), About every week (6), About every 2–3 days (7) and About every day (8).

Analyses

HLM analyses accounted for the nested structure of the data (individuals were nested within agencies, which were nested within sites). While a full 3-level HLM model was not feasible due to the small number of agencies within each site, it was critical that data analyses accounted for the unique membership of each case within its appropriate agency and site. A 2-level HLM model with a small number of predictors entered at level-2 demonstrated adequate statistical power (> .80) for the proposed analyses (for details about a priori power analyses, see Knight et al., 2016).

HLM analyses examined individual (level-1) and agency-level (level-2) predictors of IOR, controlling for the nesting of agencies within sites. We hypothesized that key organizational and individual characteristics would be significantly associated with six dimensions of IORs between juvenile probation and community behavioral health providers. Because six dependent variables were analyzed, sequential Bonferroni corrections were applied to control for inflated type-1 error due to multiple comparisons (Tabachnick & Fidell, 2013). Hierarchical Linear Modeling (HLM) (IBM SPSS v. 25, Generalized Linear Mixed Models) was used to examine the effects of individual and organizational characteristics on the six dependent variables. HLM techniques allow for the estimation of random effects (e.g., between-site differences) and entry of covariates at both the individual (L1) and site (L2) levels of analysis. We minimized total random error by allowing the error to vary separately for the L1 and L2 variables (Van Bruggen, Lilien & Kacker, 2002). In addition, HLMs do not require a balanced design; they do not assume an equal number of observations for all participants, and they allow the use of all available data when estimating effects (Hedeker, Gibbons, & Flay, 1994; Raudenbush & Bryk, 2002).2
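The sequential Bonferroni correction is commonly implemented as Holm’s step-down procedure; the paper does not specify its exact variant, so the following is a standard sketch for a family of six tests:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's sequential (step-down) Bonferroni procedure. P-values are
    tested from smallest to largest against alpha / (m - rank); once one
    test fails, it and all larger p-values are retained. Returns reject
    flags in the original order of `pvals`."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all remaining (larger) p-values are retained
    return reject

# Six dependent variables -> six familywise-corrected significance tests
print(holm_bonferroni([0.001, 0.011, 0.02, 0.04, 0.2, 0.6]))
```

Holm’s procedure controls the familywise error rate at alpha while being uniformly more powerful than the classical (single-threshold) Bonferroni correction.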

Variables entered at the individual level (Level 1) included demographics (e.g., age, gender, race, education), Perceived Organizational Support, Satisfaction, and Adaptability. Organizational-level scales (e.g., Organizational Climate, Stress, Encourages Innovation, and Program Needs) were entered at Level 2. In order to minimize the possibility of capitalizing on random (chance) error, a common problem with stepwise regression techniques (Weisburd & Gill, 2013), all independent variables were entered into regression equations using forced-entry methods (Cohen, Cohen, West & Aiken, 2002; Tabachnick & Fidell, 2013). Overall model fit was confirmed by inspecting observed versus predicted residuals and examining goodness of fit indices (e.g., −2 log likelihood, Akaike, Bayesian).
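The goodness-of-fit indices mentioned above (−2 log likelihood, Akaike, Bayesian) are related by simple complexity penalties; a sketch with hypothetical values (not the study’s actual likelihoods):

```python
import math

def fit_indices(neg2ll, k, n):
    """Akaike (AIC) and Bayesian (BIC) information criteria computed from a
    model's -2 log likelihood (`neg2ll`), number of estimated parameters k,
    and sample size n. Smaller values indicate better fit after penalizing
    model complexity; BIC penalizes extra parameters more heavily."""
    aic = neg2ll + 2 * k
    bic = neg2ll + k * math.log(n)
    return aic, bic

# Hypothetical comparison: a richer model is preferred only if its
# penalty-adjusted fit improves on the simpler model's.
print(fit_indices(neg2ll=4200.0, k=16, n=549))
print(fit_indices(neg2ll=4235.0, k=3, n=549))
```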

Results

Because of our interest in the dyadic relationships between juvenile justice (JJ) and behavioral health (BH) respondents, initial analyses examined descriptive statistics for the two types of agencies separately, as well as for the total sample (Table 1). T-tests or F-tests were used to examine univariate differences between the two samples. We strongly caution, however, that only multivariate results can correctly identify true between-subjects differences: univariate analyses fail to control for the shared variance among variables in the equation and fail to control the experiment-wise type-1 error rate across 23 independent tests of mean differences between the two subsamples (Cohen et al., 2002; Tabachnick & Fidell, 2013). On the eight site-level (level-2) variables, no differences between the two subsamples were found. On the nine individual (level-1) variables, however, several differences emerged. On average, BH respondents reported higher levels of Perceived Organizational Support, Satisfaction, and Adaptability (Table 1). Several demographic differences were also identified. A greater percentage of JJ respondents were male (41%) compared to BH respondents (26%). JJ respondents (63%) were also much more likely than BH respondents (22%) to have at least a BA or Associate's degree. JJ respondents also reported greater experience overall (mean = 15.0 yr.) than BH respondents (mean = 12.5 yr.). The observed differences between the two subsamples indicated that it was desirable to control for the potential influence of Agency Type (JJ v. BH) as a covariate in the multivariate (HLM) analyses.

Univariate results (Table 1) also identified potential differences between the two subsamples on the six dependent variables. Compared to the BH subsample, the JJ respondents reported lower mean levels of Resource Needs (41.2 v. 43.9), Effectiveness of Relationship (38.9 v. 42.2), Quality of Communications (42.0 v. 45.6), and Frequency of Communications (12.8 v. 17.1). These results suggest some caution in conducting multivariate (HLM) analyses with the total (combined) sample. Entering a covariate to control for between-subjects differences (Agency Type) in multivariate analyses is usually the preferred course of action in such cases, because doing so enables the researcher to maintain a larger overall sample and greater statistical power (Tabachnick & Fidell, 2013). In addition, because the dependent variables are dyadic ratings of relationships between the two agencies, results including rather than excluding the BH respondents are more meaningful (Van de Ven & Ferry, 1980).

To examine the possibility that HLM results for the combined sample might have been unduly influenced by the small numbers of BH respondents (n=91), we analyzed the HLM results separately for the combined sample (n=549) and for the JJ subsample only (n=458). The results were virtually identical. Using the standard 95% confidence interval, there were only 4 differences in statistical significance out of 84 coefficients (4.8%).3 Due to very minor differences between the two samples, and the importance of including BH respondents in the dyadic ratings of relationships, we present the results for the combined sample in Table 3.

Table 3.

HLM Results: Effects of Organizational and Individual Variables on Interorganizational Relationships

Resource Exchanges Resource Needs Effectiveness
Fixed Effects b SE t p b SE t p b SE t p
Level 2 (Sites)
M Org Climate 0.53 0.35 1.50 .152 −0.01 0.17 −0.06 .955 0.16 0.40 0.41 .683
M Stress 0.12 0.15 0.77 .456 −0.12 0.11 −1.06 .304 0.15 0.15 1.04 .318
M Encourages Innovation −0.61 0.22 −2.77 .016* −0.11 0.12 −0.86 .422 −0.23 0.17 −1.32 .249
M Program Needs −0.08 0.19 −0.45 .655 0.14 0.12 1.24 .231 −0.17 0.19 −0.87 .392
Level 1 (Individuals)
Perceived Org. Support 0.26 0.07 3.48 .001* 0.09 0.09 1.05 .297 0.18 0.09 1.95 .054
Satisfaction −0.03 0.08 −0.40 .690 0.12 0.08 1.64 .102 0.12 0.07 1.84 .068
Adaptability 0.05 0.07 0.63 .527 0.14 0.06 2.46 .014* 0.12 0.05 2.20 .028*
Race (1 = White, 0 = Other) 0.18 1.04 0.17 .865 0.58 0.95 0.61 .546 −0.43 1.07 −0.40 .691
Hispanic (1 = Y, 0 = N) 1.46 0.96 1.51 .131 1.66 1.25 1.32 .187 0.22 0.97 0.22 .824
Gender – Male (1=Y, 0=N) 2.36 0.70 3.34 .001* −1.14 0.72 −1.58 .115 0.38 0.63 0.61 .544
Highest Degree – BA/Assoc. (1 = Y, 0 = N) −0.84 0.71 −1.18 .238 0.26 0.82 0.32 .748 0.36 0.70 0.51 .611
Age (yr.) 0.07 0.05 1.28 .201 0.02 0.05 0.29 .776 0.10 0.05 1.87 .064
Years in Probation/Tx −0.11 0.06 −1.83 .069 −0.05 0.06 −0.86 .392 −0.09 0.06 −1.53 .130
Agency Type (1=JJ, 0=BH) 2.40 1.26 1.91 .058 −1.50 0.81 −1.84 .066 −0.80 1.07 −0.74 .459
Random Effects b SE Z p b SE Z p b SE Z p
Site (L2) 15.64 5.53 2.83 .005* 4.14 2.42 1.71 .087 16.82 5.53 3.04 .002*
Residual 64.34 4.23 15.21 .001* 62.17 4.12 15.11 .001* 53.20 3.50 15.18 .001*
Within-Subjects Var. .81 .94 .76
Between-Subjects Var. .19 .06 .24
Quality of Communications Challenges to Collaboration Frequency of Communications
Fixed Effects b SE t p b SE t p b SE t p
Level 2 (Sites)
M Org Climate −0.16 0.35 −0.45 .658 −0.72 0.26 −2.78 .023* −0.36 0.29 −1.24 .238
M Stress 0.01 0.17 0.04 .965 −0.24 0.10 −2.26 .066 −0.07 0.12 −0.58 .580
M Encourages Innovation 0.12 0.24 0.50 .626 0.28 0.29 0.97 .338 0.17 0.22 0.77 .453
M Program Needs −0.01 0.21 −0.05 .958 −0.52 0.15 −3.54 .003* −0.09 0.15 −0.63 .537
Level 1 (Individuals)
Perceived Org. Support 0.40 0.09 4.35 .001* 0.24 0.11 2.07 .040* −0.02 0.10 −0.45 .804
Satisfaction −0.05 0.09 −0.59 .554 −0.04 0.09 −0.40 .688 −0.02 0.08 −0.21 .836
Adaptability −0.06 0.07 −0.89 .372 −0.03 0.10 −0.30 .762 0.10 0.09 1.11 .270
Race (1 = White, 0 = Other) 0.22 0.80 0.27 .788 −0.91 1.06 −0.85 .394 −0.12 0.89 −0.13 .896
Hispanic (1 = Y, 0 = N) −1.32 1.55 −0.85 .396 −3.22 1.41 −2.28 .024* 0.75 1.02 0.74 .461
Gender – Male (1=Y, 0=N) −0.27 0.86 −0.31 .754 −1.68 0.73 −2.30 .022* 0.67 0.65 1.03 .302
Ed. - BA (1 = Y, 0 = N) −0.56 0.94 −0.60 .553 1.24 0.97 1.29 .199 0.42 0.93 0.44 .657
Age (yr.) 0.04 0.06 0.73 .466 0.01 0.05 0.10 .917 0.07 0.05 1.36 .174
Years in Probation/Tx 0.04 0.06 0.63 .530 −0.00 0.06 −0.01 .990 −0.06 0.06 −1.00 .322
Agency Type (1=JJ, 0=BH) −0.77 1.12 −0.69 .493 −0.04 1.16 −0.03 .974 −4.28 1.41 −3.03 .003*
Random Effects b SE Z p b SE Z p b SE Z p
Site (L2) 17.70 6.41 2.76 .006* 8.35 4.07 2.05 .040* 11.23 4.32 2.60 .009*
Residual 78.22 5.22 14.99 .001* 85.24 5.68 15.00 .001* 70.85 4.73 14.98 .001*
Within-Subjects Var. .82 .91 .86
Between-Subjects Var. .18 .09 .14

Notes. HLM analyses were conducted with SPSS v. 25 Generalized Linear Mixed Models (GLMM). All variables were entered using the forced entry method.

* p < .05.

Table 3 reports results from the forced-entry HLM models; Table 4 summarizes the statistically significant coefficients (at both the .05 and .10 levels to aid interpretation). Across all sites, JJ respondents reported less frequency of communications with BH partners than vice versa (p < .003). Two other results just failed to reach statistical significance. Compared to their BH partners, JJ respondents perceived slightly greater Resource Exchanges (p < .058) and slightly fewer Resource Needs (p < .066).

Table 4.

Summary of Significant Coefficients from HLM Analyses

Predictor: direction of effect, dependent variable (p-value)

M Org. Climate (L2): − Challenges to Collaboration (p < .023)*
M Stress (L2): − Challenges to Collaboration (p < .066)±
M Encourages Innovation (L2): − Resource Exchanges (p < .016)*
M Program Needs (L2): − Challenges to Collaboration (p < .003)*
Perceived Org. Support: + Resource Exchanges (p < .001)*; + Effectiveness of Relationship (p < .054)±; + Quality of Communications (p < .001)*; + Challenges to Collaboration (p < .040)*
Satisfaction: + Effectiveness of Relationship (p < .068)±
Adaptability: + Resource Needs (p < .014)*; + Effectiveness of Relationship (p < .028)*
Hispanic (1 = Y, 0 = N): − Challenges to Collaboration (p < .024)*
Gender (1 = M, 0 = F): + Resource Exchanges (p < .001)*; − Challenges to Collaboration (p < .022)*
Age: + Effectiveness of Relationship (p < .064)±
Years in Probation/Tx: − Resource Exchanges (p < .069)±
Agency Type (1 = JJ, 0 = BH): + Resource Exchanges (p < .058)±; − Resource Needs (p < .066)±; − Frequency of Communications (p < .003)*

Notes. * indicates effects that were statistically significant at p < .05. ± indicates effects that were statistically significant at p < .10. The “+” or “−” sign indicates the direction of effect. L2 = Level-2 (site) level variables.

Results (Tables 3 and 4) indicated that the strongest predictor of IORs, by far, was Perceived Organizational Support. Organizational support was positively correlated with Resource Exchanges, Quality of Communications, and (fewer) Challenges to Collaboration (reverse-scored); it did not quite reach statistical significance in predicting Effectiveness (p < .054). The next strongest predictor of IORs was Adaptability, with higher scores on Adaptability positively associated with higher perceived Resource Needs (p < .014) and greater Effectiveness of Relationship (p < .028) between juvenile justice and behavioral health partners.

Two other individual (level-1) variables significantly predicted two dimensions of IOR. Male respondents perceived greater Resource Exchanges between partners; and Male and Hispanic respondents (compared to females and non-Hispanics) perceived greater Challenges to Collaboration. Other demographic variables (Race, Education, and Years of Experience) failed to reach statistical significance in any of the models.

Most of the organizational (L2) measures failed to reach statistical significance, with three exceptions. Agencies with higher scores on Encourages Innovation reported fewer Resource Exchanges with their interorganizational partners. In addition, and also contrary to expectations, agencies that reported fewer Program Needs also perceived greater Challenges to Collaboration. Finally, respondents who perceived a stronger Organizational Climate in their own agency also tended to perceive greater Challenges to Collaboration with other agencies.

Discussion

The major aim of this study was to explore individual and agency-level factors associated with key dimensions of IORs between juvenile justice (JJ) agencies and community-based Behavioral Health (BH) providers, and identify the strongest facilitators/barriers to IOR. In addition to enhancing implementation science and IOR theory, agency leaders can then identify critical factors that may be targeted for change prior to (or during) an intervention that requires interorganizational coordination to improve client outcomes.

Two key results were reported. First, the strongest predictors of IORs were Perceived Organizational Support and Adaptability. While most of the organizational (L2) measures failed to reach statistical significance, two counterintuitive findings emerged: (a) Agencies rated higher on Encouragement of Innovation reported fewer Resource Exchanges with their partners; and (b) agencies that reported fewer Program Needs perceived greater Challenges to Collaboration.

These two counterintuitive findings merit some attention. These findings might indicate some conflicts between the missions of JJ and BH agencies and may suggest particular ways of interpreting innovation. Innovation may be understood as creative approaches enlisted within an agency—departures from treatment as usual—rather than novel strategies that are jointly created among partnering agencies (e.g., Lehman et al., 2002; 2012). Thus, an organizational culture predicated on workplace and programmatic silos (i.e., compartmentalization) may not view IORs as fertile ground for innovation given the different missions of JJ agencies around public safety and supervision and BH agencies around treatment and rehabilitation. Quite the contrary, such partnerships might be seen as potentially stifling through the introduction of new protocols and compromises that may be demanded by coordination among agencies. It also bears mention that the perceived need for collaboration and the perceived benefits of interorganizational partnerships may be undervalued by JJ and perhaps more strongly endorsed among behavioral health workers, the latter of whom have long been favorably disposed toward coalitions (e.g., Nelson, 1994), but are underrepresented in our sample.

Moreover, the negative associations of Innovation and Program Needs with measures of IORs could be particular to the agencies and individuals sampled in this study (especially juvenile probation). In agencies where resources are seen as adequate and innovation is encouraged internally rather than between agencies, juvenile probation officers (compared to treatment providers) may tend to view IORs as less of a necessity or even an opportunity to leverage additional resources for their clients. Such factors may encourage internal cooperation rather than outreach to treatment providers. These findings deserve further consideration in collaborative efforts between juvenile justice and behavioral health agencies (e.g., recruiting and selecting staff to participate in interorganizational work groups).

Role conflict and/or differences in professional role orientations (punishment v. rehabilitation) may also partly explain these counterintuitive findings. For probation officers, professional orientation refers to attitudes toward offenders and interactions with offenders (Whitehead & Lindquist, 1992). Over time and to varying degrees, probation officers may develop role conflict due to perceived inconsistencies in the goals of offender supervision: for example, to enforce the legal requirements of supervision; to assist the offender in successful community adjustment; or to carry out the policies of the supervising agency (Clear & Latessa, 1993). Meanwhile, behavioral health staff are concerned with treatment and rehabilitation, which may be seen as conflicting with the goals of JJ staff. Because agencies vary in their support for different role orientations, organizational culture is a key influence on professional role orientations (Clear & Latessa, 1993). Organizational cultures and professional role orientations are reinforced through education, training, and socialization. Though not measured directly in this study, prior studies (Welsh, Prendergast, et al., 2016; Taxman et al., 2007) have reported a positive association between support for rehabilitation and perceived effectiveness of IORs. Once again, such characteristics have implications not only for selection (who the agency picks to serve on an interorganizational task force) but organizational culture (how officers should be trained and rewarded for interorganizational collaboration).

Behavioral health agencies and their staff, in contrast to juvenile probation, tend to perceive a greater need for collaboration and, as a byproduct, often have greater experience with interorganizational groups (Palinkas et al., 2014; Smith & Mogro-Wilson, 2007). Although the association between Agency Type and Resource Needs was not statistically significant (p < .066) in this study, results suggest a potential imbalance between the two agencies in their organizational needs for cooperation. Juvenile probation is the largest source of referrals for many treatment providers, especially in large urban areas, and thus behavioral health partners are likely to enter interorganizational discussions with much more immediate and direct perceptions of benefits (e.g., increased numbers of clients and billing reimbursements) v. costs (e.g., increased time and effort associated with collaborative tasks). Again, it may be prudent in any future collaborative efforts to identify and target such perceived imbalances for change early on. Juvenile justice staff, for instance, might benefit from a clear articulation of expected, measurable benefits for themselves and their clients (e.g., a reduced likelihood of recidivism).

The strongest predictor of IORs was Perceived Organizational Support. These findings support previous research emphasizing the importance of leadership in organizational change efforts (Aarons et al., 2011; Glisson & Hemmelgarn, 1997; Lehman et al., 2009; Rhoades et al., 2001), but amplify the significance of perceived support from one’s own supervisor in IOR efforts. Given the additional workload that accompanies participation in IOR efforts (in this study alone, interorganizational teams worked for 18 months or more on interorganizational goals), employees might expect some tangible reward for the extra effort they expend. Even if the value of working toward shared goals is acknowledged, that alone may be insufficient to offset the need to be recognized, rewarded and supported by one’s own supervisor. Otherwise, collaborative tasks may be perceived as simply another set of time-consuming tasks heaped upon probation employees who already are wrestling with the burdens of high caseloads.

Similarly, Adaptability was significantly associated not only with perceived Effectiveness of Relationship, but also with Resource Needs, suggesting that individuals who are high in Adaptability are also more likely to perceive that working with other agencies can benefit one’s own agency. Holding constant other influences, an individual’s willingness and capacity to take on different (and perhaps demanding) interorganizational tasks is a critical factor deserving of attention. It would seem prudent not only to support and reward individuals for their efforts toward IORs, but to select and nurture those willing to work outside the normal boundaries and parameters of their own agency. In this sense, both individual (willingness, capacity) and organizational (recognition, support) factors are intertwined (Welsh, Prendergast, et al., 2016).

The results have several important implications for research, theory and practice. Findings should help researchers and practitioners identify relevant targets for change in new or ongoing IORs between juvenile justice and behavioral health agencies. For example, perceived organizational support was critical across most of the IOR outcomes; simply talking with one another in the absence of organizational support is unlikely to produce improved outcomes.

Findings also suggest that it is critical to measure multiple dimensions of IOR, not simply mechanisms of communication and collaboration. Results showed that different dimensions of IOR were associated with different sets of correlates, and several of the relationships appeared counter-intuitive. Thus, implications for research include the need for studies to further delineate the relevant correlates in order to build more cohesive models of IOR, and to further understand the correlates that appear to be counter-intuitive. Collaborative networks can benefit from greater awareness about different dimensions of IOR (e.g., resource needs and exchanges) and targeted improvements in specific dimensions of IOR (e.g., Welsh, Knudsen et al., 2016).

For practical considerations, the results demonstrate a need to assess IOR in order to improve service delivery and client outcomes across different transition points in the services continuum (Belenko et al., 2017). To do so, it may be critical to address potential imbalances in IOR. For example, if BH needs JJ much more than vice versa (referrals), this power imbalance may influence the nature of IORs. In such instances, partners need to be aware of imbalances and focus on how to achieve shared goals (e.g., recovery and recidivism – or other shared goals such as improved client health, family relationships, etc.), as exemplified by initiatives such as Recovery Oriented Systems of Care (ROSC) (Sheedy & Winter, 2009) and Partners for Recovery (PFR) (SAMHSA, 2014).

There were several limitations to this study. First, the small number of sites, and small sample size of BH staff within sites, reduced the explained variance at the site (level-2) level. Relatedly, the ICC for one scale (Encourages Innovation) was somewhat low. Although there were few differences in survey responses between the two groups, future studies would benefit from larger samples of BH participants in order to facilitate additional subgroup analyses. Randomized block designs could also help ensure that sufficiently large samples of all subgroups of interest are included in the research design a priori (Weisburd & Gill, 2013). Second, several standard errors reported in the tables were somewhat high, potentially affecting the reliability of some of the estimates. Third, it was not possible at this time to collect information on additional constructs suggested by the EPIS model (Aarons et al., 2011; Becan et al., 2020), including measures of the external environment of the sites (e.g., variations in local, regional, and state politics; funding mechanisms; health and justice policies). Fourth, it could be beneficial to examine cross-level interactions between level-1 (individual) characteristics (e.g., age, length of employment) and level-2 (organizational) variables, although such studies would require larger samples of agencies at level-2 in order to ensure adequate statistical power (Raudenbush, 1997; Raudenbush & Bryk, 2002). Fifth, generalization to other juvenile justice and behavioral health organizations is limited since a purposive sample of motivated juvenile probation agencies and their community behavioral health partners participated in the study.

Conclusion

Although conceptual models of implementation in human service agencies emphasize the critical influence of IORs on organizational change efforts and client outcomes, few studies have empirically examined the individual-level and organizational characteristics associated with IORs, and scarcely any studies have examined service coordination between juvenile probation agencies and community-based behavioral health providers. A focus on relationships between juvenile justice and behavioral health agencies is critical in order to understand and improve the factors associated with the effective delivery of substance abuse treatment services to justice-involved youth. The strongest predictors of IORs in this study were Perceived Organizational Support and Adaptability, suggesting the importance of individual as well as organizational influences on IORs (see also Smith & Mogro-Wilson, 2007; Welsh, Prendergast, et al., 2016). Results also suggested a certain predisposition toward intra-organizational innovation and problem-solving; thus future studies would be wise to identify and target such influences for change prior to implementing evidence based practices that require interorganizational service coordination between juvenile justice and behavioral health agencies.

Funding:

This study was funded under the Juvenile Justice Translational Research on Interventions for Adolescents in the Legal System project (JJ-TRIALS) cooperative agreement, funded by the National Institute on Drug Abuse (NIDA), National Institutes of Health (NIH). The authors gratefully acknowledge the collaborative contributions of NIDA and support from the following grant awards: Chestnut Health Systems (U01DA036221); Columbia University (U01DA036226); Emory University (U01DA036233); Mississippi State University (U01DA036176); Temple University (U01DA036225); Texas Christian University (U01DA036224); and University of Kentucky (U01DA036158). The NIDA Science Officer on this project is Tisha Wiley. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the NIDA, NIH, or the participating universities or juvenile justice systems.

Footnotes

Conflict of Interest: The authors declare that they have no conflict of interest.

Ethical approval:

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent:

Informed consent was obtained from all individual participants included in the study.

1. The variance inflation factor (VIF) and tolerance are closely related statistics for diagnosing collinearity (tolerance is the reciprocal of VIF). If any VIF value exceeds 4.0, or any tolerance is less than 0.2, then there is a problem with multicollinearity (Hair et al., 2010). In the present study, no VIF value exceeded 2.5, and no tolerance value was less than 0.5.
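The VIF/tolerance diagnostic described in footnote 1 is straightforward to compute directly. A minimal sketch in Python follows; the `vif` helper and the simulated predictors are illustrative assumptions, not the study's actual code or data:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 is obtained by regressing
    column j on the remaining columns (with an intercept).
    Tolerance is simply the reciprocal, 1 / VIF.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # OLS fit
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Hypothetical example: two nearly collinear predictors should be
# flagged (VIF > 4.0), while an independent predictor stays near 1.0.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)   # almost a copy of x1
x3 = rng.normal(size=200)               # unrelated predictor
X = np.column_stack([x1, x2, x3])
vifs = vif(X)
tolerances = 1.0 / vifs
```

Under the Hair et al. (2010) rule of thumb used in the study, `x1` and `x2` would fail the VIF > 4.0 / tolerance < 0.2 screen, while `x3` would pass.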

2. As shown in Table 1, five of the 36 sites did not have valid survey data for BH respondents. As noted, HLM mixed models do not require a balanced design and allow the use of all available data when estimating effects.

3. Specifically, Gender was a significant predictor of Resource Needs in the JJ sample (p < .042), but not in the combined sample (p < .115). Adaptability was a significant, positive predictor of Effectiveness in the combined sample (p < .028), but not in the JJ-only sample (p < .211). Gender (p < .024) and Hispanic (p < .022) were significant predictors of Challenges to Collaboration in the total sample, but not in the JJ sample (p < .074 and p < .058, respectively). HLM results for the JJ sample only are available from the corresponding author upon request.

References

  1. Aarons GA, Hurlburt M, & Horwitz SM (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38, 4–23. doi: 10.1007/s10488-010-0327-7
  2. Aarons GA, & Palinkas L (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34, 411–419. doi: 10.1007/s10488-007-0121-3; Bai Y, Wells R, & Hillemeier MM (2009). Coordination between child welfare agencies and mental health service providers, children’s service use, and outcomes. Child Abuse & Neglect, 33(6), 372–381.
  3. Anderson CS (1982). The search for school climate: A review of the literature. Review of Educational Research, 52, 368–420.
  4. Becan JE, Fisher JH, Johnson ID, Bartkowski JP, Seaver R, Gardner SK, Aarons GA, Renfro TL, Muiruri R, Blackwell L, Piper KN, Wiley TA, & Knight DK (published online, 11 January 2020). Improving substance use services for juvenile justice-involved youth: Complexity of process improvement plans in a large scale multi-site study. Administration and Policy in Mental Health and Mental Health Services Research, 1–14. doi: 10.1007/s10488-019-01007-z
  5. Belenko S, Knight D, Wasserman GA, Dennis ML, Wiley T, Taxman FS, Oser C, Dembo R, Robertson AA, & Sales J (2017). The juvenile justice behavioral health services cascade: A new framework for measuring unmet substance use treatment services needs among adolescent offenders. Journal of Substance Abuse Treatment, 74, 80–91. doi: 10.1016/j.jsat.2016.12.012
  6. Bitar GW, & Gee R (2010). Integrating public health and public safety in the criminal justice system: An overview of behavioral health services, including alcohol/other drug disorders. Alcoholism Treatment Quarterly, 28(2), 163–175. doi: 10.1080/07347321003648448
  7. Bowser D, Henry BF, Wasserman GA, Knight D, Gardner S, Krupka K, Grossi B, Cawood M, Wiley T, & Robertson A (2018). Comparison of the overlap between juvenile justice processing and behavioral health screening, assessment and referral. Journal of Applied Juvenile Justice Services, 97–125. http://npjs.org/jajjs/
  8. Broome KM, Knight DK, Edwards JR, & Flynn PM (2009). Leadership, burnout, and job satisfaction in outpatient drug-free treatment programs. Journal of Substance Abuse Treatment, 37, 160–170. doi: 10.1016/j.jsat.2008.12.002
  9. Chuang E, & Lucio R (2011). Interorganizational collaboration between child welfare agencies, schools, and mental health providers and children’s mental health service receipt. Advances in School Mental Health Promotion, 4(2), 4–15. doi: 10.1080/1754730X.2011.9715625
  10. Chuang E, & Wells R (2010). The role of inter-agency collaboration in facilitating receipt of behavioral health services for youth involved with child welfare and juvenile justice. Children and Youth Services Review, 32(12), 1814–1822. doi: 10.1016/j.childyouth.2010.08.002
  11. Clear TR, & Latessa EJ (1993). Probation officers’ roles in intensive supervision: Surveillance versus treatment. Justice Quarterly, 10, 441–462. doi: 10.1080/07418829300091921
  12. Cohen J, Cohen P, West SG, & Aiken LS (2002). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  13. Cooper HS (2008). Interorganizational relationships among providers of public social services for emotionally disturbed children in rural east Texas. Dissertation.
  14. Courtney KO, Joe GW, Rowan-Szal GA, & Simpson DD (2007). Using organizational assessment as a tool for program change. Journal of Substance Abuse Treatment, 33, 131–137. doi: 10.1016/j.jsat.2006.12.024
  15. Cropper S, Ebers M, Huxham C, & Ring PS (Eds.) (2008). The Oxford handbook of inter-organizational relations. New York: Oxford University Press.
  16. Fletcher BW, Lehman WEK, Wexler HK, Melnick G, Taxman FS, & Young DW (2009). Measuring collaboration and integration activities in criminal justice and drug abuse treatment agencies. Drug and Alcohol Dependence, 103, S54–S64. doi: 10.1016/j.drugalcdep.2009.01.001
  17. Fuller BE, Rieckmann T, Nunes EV, Miller M, Arfken C, Edmundson E, & McCarty D (2007). Organizational readiness for change and opinions toward treatment innovations. Journal of Substance Abuse Treatment, 33, 183–192. doi: 10.1016/j.jsat.2006.12.026
  18. Garner BR, Knight K, & Simpson DD (2007). Burnout among corrections-based drug treatment staff: Impact of individual and organizational factors. International Journal of Offender Therapy and Comparative Criminology, 51, 510–522. doi: 10.1177/0306624X06298708
  19. Gil-Garcia J, Schneider C, Pardo T, & Cresswell A (2005). Interorganizational information integration in the criminal justice enterprise: Preliminary lessons from state and county initiatives. Proceedings of the 38th Annual Hawaii International Conference on System Sciences. doi: 10.1109/hicss.2005.338
  20. Glisson C, & Hemmelgarn A (1997). The effects of organizational climate and interorganizational coordination on the quality and outcomes of children’s service systems. Child Abuse & Neglect, 22(5), 401–421. doi: 10.1016/S0145-2134(98)00005-2
  21. Glisson C, & Green P (2006). The effects of organizational culture and climate on the access to mental health care in child welfare and juvenile justice systems. Administration and Policy in Mental Health and Mental Health Services Research, 33(4), 433–448. doi: 10.1007/s10488-005-0016-0
  22. Glisson C, & Schoenwald S (2005). The ARC organizational and community intervention strategy for implementing evidence-based children’s mental health treatments. Mental Health Services Research, 7(4), 243–259. doi: 10.1007/s11020-005-7456-1
  23. Greenhalgh T, Robert G, Macfarlane F, Bate P, & Kyriakidou O (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly, 82, 581–629. doi: 10.1111/j.0887-378X.2004.00325.x
  24. Hair JF, Black WC, Babin BJ, Anderson RE, & Tatham RL (2010). Multivariate data analysis (7th ed.). New York: Pearson.
  25. Hedeker D, Gibbons RD, & Flay BR (1994). Random-effects regression models for clustered data with an example from smoking prevention research. Journal of Consulting and Clinical Psychology, 62, 757–765.
  26. Hofmann DA, Griffin MA, & Gavin MB (2000). The application of hierarchical linear modeling to organizational research. In Klein KJ & Kozlowski SJ (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 467–511). San Francisco, CA: Jossey-Bass.
  27. Horwath J, & Morrison T (2007). Collaboration, integration and change in children’s services: Critical issues and key ingredients. Child Abuse & Neglect, 31(1), 55–69. doi: 10.1016/j.chiabu.2006.01.007
  28. Howell JC, Kelly MR, Palmer J, & Mangum RL (2004). Integrating child welfare, juvenile justice, and other agencies in a continuum of services. Child Welfare, 83(2), 143–156.
  29. Klein KJ, & Sorra JS (1996). The challenge of innovation implementation. Academy of Management Review, 21, 1055–1080. doi: 10.2307/259164
  30. Knight DK, Belenko S, Wiley T, Robertson AA, Arrigona N, Dennis M, Bartkowski JP, McReynolds LS, Becan JE, Knudsen HK, Wasserman GA, Rose E, DiClemente R, Leukefeld C, & the JJ-TRIALS Cooperative (2016). Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS): A cluster randomized trial targeting system-wide improvement in substance use services. Implementation Science, 11, 57. doi: 10.1186/s13012-016-0423-5
  31. Knight DK, Joe GW, Morse DT, Smith C, Knudsen H, Johnson I, Wasserman GA, Arrigona N, McReynolds LS, Becan JE, Leukefeld C, & Wiley TRA (2019). Organizational context and individual adaptability in promoting perceived importance and use of best practices for substance use. The Journal of Behavioral Health Services & Research, 46(2), 192–216. doi: 10.1007/s11414-018-9618-7
  32. Konrad EL (1996). A multidimensional framework for conceptualizing human services integration initiatives. New Directions for Evaluation, 69, 5–19. doi: 10.1002/ev.1024
  33. Lawrence R (1995). Controlling school crime: An examination of interorganizational relations of school and juvenile justice professionals. Juvenile & Family Court Journal, 46(3), 3–15. doi: 10.1111/j.1755-6988.1995.tb00819.x
  34. Lehman WEK, Fletcher BW, Wexler HK, & Melnick G (2009). Organizational factors and collaboration and integration activities in criminal justice and drug abuse treatment agencies. Drug and Alcohol Dependence, 103(Supplement 1), S65–S72. doi: 10.1016/j.drugalcdep.2009.01.004
  35. Lehman WEK, Greener JM, & Simpson DD (2002). Assessing organizational readiness for change. Journal of Substance Abuse Treatment, 22, 197–209. doi: 10.1016/S0740-5472(02)00233-7
  36. Lehman WEK, Greener JM, Rowan-Szal GA, & Flynn PM (2012). Organizational readiness for change in correctional and community substance abuse programs. Journal of Offender Rehabilitation, 51, 96–114. doi: 10.1080/10509674.2012.633022
  37. Lipari RN, Park-Lee E, & Van Horn S (2016). America’s need for and receipt of substance use treatment in 2015. The CBHSQ Report: September 29, 2016. Rockville, MD: Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration.
  38. Marsh HW, Ludtke O, Nagengast B, Trautwein U, Morin AS, Abduljabbar AS, & Koller O (2012). Classroom climate and contextual effects: Conceptual and methodological issues in the evaluation of group-level effects. Educational Psychologist, 47, 106–124. doi: 10.1080/00461520.2012.670488
  39. Mendel P, Meredith L, Schoenbaum M, Sherbourne C, & Wells K (2008). Interventions in organizational and community context: A framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health and Mental Health Services Research, 35(1), 21–37. doi: 10.1007/s10488-007-0144-9
  40. Monico LB, Mitchell SG, Welsh W, Link N, Hamilton L, Malvini Redden S, & Friedmann PD (2016). Developing effective interorganizational relationships between community corrections and community treatment providers. Journal of Offender Rehabilitation, 55(7), 484–501. doi: 10.1080/10509674.2016.1218401
  41. Nelson G (1994). The development of a mental health coalition: A case study. American Journal of Community Psychology, 22, 229–255. doi: 10.1007/BF02506864
  42. Owens RG (1987). Organizational behavior in education (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
  43. Palinkas LA, Fuentes D, Finno M, Garcia AR, Holloway IW, & Chamberlain P (2014). Inter-organizational collaboration in the implementation of evidence-based practices among public agencies serving abused and neglected youth. Administration and Policy in Mental Health and Mental Health Services Research, 41, 74–85. doi: 10.1007/s10488-012-0437-5
  44. Palinkas LA, Holloway IW, Rice E, Brown H, Valente T, & Chamberlain PA (2013). Influence network linkages across treatment conditions in randomized controlled trials. Implementation Science, 8, Article 133. doi: 10.1186/1748-5908-8-133
  45. Parmigiani A, & Rivera-Santos M (2011). Clearing a path through the forest: A meta-review of interorganizational relationships. Journal of Management, 37(4), 1108–1136. doi: 10.1177/0149206311407507
  46. Patterson MG, West MA, Shackleton VJ, Dawson JF, Lawthom R, Maitlis S, Robinson DL, & Wallace AM (2005). Validating the organizational climate measure: Links to managerial practices, productivity and innovation. Journal of Organizational Behavior, 26, 379–408. doi: 10.1002/job.312
  47. Polgar MF, Cabassa LJ, & Morrissey JP (2016). How community organizations promote continuity of care for young people with mental health problems. Journal of Behavioral Health Services & Research, 43(2), 200–213. doi: 10.1007/s11414-014-9409-8
  48. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, & Mittman B (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36, 24–34. doi: 10.1007/s10488-008-0197-4
  49. Raudenbush SW (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2, 173–185. doi: 10.1037/1082-989X.2.2.173
  50. Raudenbush SW, & Bryk AS (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: SAGE.
  51. Rhoades L, Eisenberger R, & Armeli S (2001). Affective commitment to the organization: The contribution of perceived organizational support. Journal of Applied Psychology, 86, 825–836. doi: 10.1037/0021-9010.86.5.825. (These 8 items are the short version of a longer scale that was originally published as: Eisenberger R, Huntington R, Hutchison S, & Sowa D (1986). Perceived organizational support. Journal of Applied Psychology, 71, 500–507. doi: 10.1037/0021-9010.71.3.500.)
  52. Rivard J, Johnsen M, Morrissey J, & Starrett B (1999). The dynamics of interorganizational collaboration: How linkages develop for child welfare and juvenile justice sectors in a system of care demonstration. Journal of Social Service Research, 25(3), 61–82. doi: 10.1300/J079v25n03_05
  53. Roman JK, Butts JA, & Roman CG (2011). Evaluating systems change in a juvenile justice reform initiative. Children and Youth Services Review, 33, S41–S53. doi: 10.1016/j.childyouth.2011.06.012
  54. Sheedy CK, & Whitter M (2009). Guiding principles and elements of recovery-oriented systems of care: What do we know from the research? HHS Publication No. (SMA) 09–4439. Rockville, MD: Center for Substance Abuse Treatment, Substance Abuse and Mental Health Services Administration. https://www.naadac.org/assets/2416/sheedyckwhitterm2009_guiding_principles_and_elements.pdf. Accessed 23 Dec 2019.
  55. Shortell SM, Marsteller JA, Lin M, Pearson ML, Wu S, Mendel P, Cretin S, & Rosen M (2004). The role of perceived team effectiveness in improving chronic illness care. Medical Care, 42, 1040–1048.
  56. Simpson DD, Joe GW, & Rowan-Szal GA (2007). Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment, 33, 201–209. doi: 10.1016/j.jsat.2006.12.022
  57. Smith BD, & Mogro-Wilson C (2007). Multi-level influences on the practice of inter-agency collaboration in child welfare and substance abuse treatment. Children and Youth Services Review, 29, 545–556. doi: 10.1016/j.childyouth.2006.06.002
  58. Smith B, & Mogro-Wilson C (2008). Inter-agency collaboration: Policy and practice in child welfare and substance abuse treatment. Administration in Social Work, 32(2), 5–24. doi: 10.1300/J147v32n02_02
  59. Spybrook J, & Raudenbush SW (2009). An examination of the precision and technical accuracy of the first wave of group-randomized trials funded by the Institute of Education Sciences. Educational Evaluation and Policy Analysis, 31, 298–318. doi: 10.3102/0162373709339524
  60. Substance Abuse and Mental Health Services Administration (SAMHSA) (2014). Partners for Recovery (PFR) fact sheet: Serving people by improving systems. Rockville, MD: Center for Substance Abuse Treatment, Substance Abuse and Mental Health Services Administration. https://www.samhsa.gov/sites/default/files/pfr_fact_sheet.pdf. Accessed 23 Dec 2019.
  61. Sullivan BA (2012). Inter-organizational relationships of health partnerships: Characteristics of the Fulton County SPARC Program. Journal of Health and Human Services Administration, 35, 44–70.
  62. Tabachnick BG, & Fidell LS (2013). Using multivariate statistics (6th ed.). Boston, MA: Pearson.
  63. Taxman FS, Young DW, Wiersema B, Rhodes A, & Mitchell S (2007). The national criminal justice treatment practices survey: Multilevel survey methods and procedures. Journal of Substance Abuse Treatment, 32, 225–238. doi: 10.1016/j.jsat.2007.01.002
  64. Taxman FS, Henderson CE, & Belenko S (2009). Organizational context, systems change, and adopting treatment delivery systems in the criminal justice system. Drug and Alcohol Dependence, 103, S1–S6. doi: 10.1016/j.drugalcdep.2009.03.003
  65. TCU Institute of Behavioral Research (2013). Survey of Organizational Functioning and Leadership (SOFL), TCU Forms/SOFL-sg (1/22/13). Fort Worth, TX: Institute of Behavioral Research.
  66. Van Bruggen GH, Lilien GL, & Kacker M (2002). Informants in organizational marketing research: Why use multiple informants and how to aggregate responses. Journal of Marketing Research, 39, 469–478.
  67. Van de Ven AH, & Ferry DL (1980). The interorganizational field. In Van de Ven AH & Ferry DL (Eds.), Measuring and assessing organizations (pp. 296–346). New York, NY: John Wiley.
  68. Walker JS, & Sanders B (2011). The community supports for wraparound inventory: An assessment of the implementation context for wraparound. Journal of Child and Family Studies, 20(6), 747–757. doi: 10.1007/s10826-010-9432-1
  69. Weisburd D, & Gill C (2013). Block randomized trials at places: Rethinking the limitations of small N experiments. Journal of Quantitative Criminology, 30, 97–112. doi: 10.1007/s10940-013-9196-z
  70. Welsh WN (2000). The effects of school climate on social disorder. Annals of the American Academy of Political and Social Science, 567, 88–107.
  71. Welsh WN, Prendergast M, Knight K, Knudsen HK, Monico L, Gray J, Abdel-Salam S, Redden SM, Link N, Hamilton L, Shafer M, & Friedmann PD (2016). Correlates of interorganizational service coordination in community corrections. Criminal Justice and Behavior, 43(4), 483–505. doi: 10.1177/0093854815607306
  72. Welsh WN, Knudsen HK, Knight K, Ducharme L, Pankow J, Urbine T, Lindsey A, Abdel-Salam S, Wood J, Monico L, Link N, Albizu-Garcia C, & Friedman PD (2016). Effects of an organizational linkage intervention on inter-organizational service coordination between probation/parole agencies and community treatment providers. Administration and Policy in Mental Health and Mental Health Services Research, 43, 105–121. doi: 10.1007/s10488-014-0623-8
  73. Welsh WN, Stokes R, & Greene JR (2000). A macro-level model of school disorder. Journal of Research in Crime and Delinquency, 37, 243–283.
  74. Whitehead JT, & Lindquist CA (1992). Determinants of probation and parole officer professional orientation. Journal of Criminal Justice, 20, 13–24. doi: 10.1016/0047-2352(92)90031-4
  75. Winters KC, Botzet AM, & Fahnhorst T (2011). Advances in adolescent substance abuse treatment. Current Psychiatry Reports, 13(5), 416–421. doi: 10.1007/s11920