Author manuscript; available in PMC: 2021 Jul 1.
Published in final edited form as: Adm Policy Ment Health. 2020 Jul;47(4):569–580. doi: 10.1007/s10488-020-01024-3

Sustainment of Trauma-Focused and Evidence-Based Practices Following Learning Collaborative Implementation

Sarah A Helseth 1, Samuel O Peer 2, Funlola Are 3, Alyssa M Korell 2, Benjamin E Saunders 3, Sonja K Schoenwald 4, Jason E Chapman 4, Rochelle F Hanson 3
PMCID: PMC7260081  NIHMSID: NIHMS1565706  PMID: 32090298

Abstract

Purpose:

Given the need to develop and validate effective implementation models that lead to sustainable improvements, we prospectively examined changes in attitudes, behaviors, and perceived organizational support during and after statewide Community-Based Learning Collaboratives (CBLCs) promoting trauma-focused evidence-based practices (EBPs).

Methods:

Participants (N = 857; i.e., 492 clinicians, 225 brokers, and 140 senior leaders) from 10 CBLCs completed surveys pre- and post-CBLC; a subsample (n = 146) completed a follow-up survey approximately two years post-CBLC.

Results:

Results indicated (a) medium, sustained increases in clinician-reported use of trauma-focused EBPs, (b) medium to large, sustained increases in perceived organizational support for trauma-focused EBPs, and (c) trivial to small, sustained increases in perceived organizational support for EBPs broadly. In contrast, clinician-reported overall attitudes towards EBPs decreased to a trivial degree pre- to post-CBLC, but then increased to a small, statistically significant degree from post-CBLC to follow-up. Notably, the degree of perceived improvements in organizational support for general and trauma-focused EBPs varied by professional role.

Conclusions:

Findings suggest the CBLC implementation strategies may both increase and sustain provider practices and organizational support for EBPs, particularly those EBPs a CBLC explicitly targets.

Keywords: Learning Collaboratives, sustainability, evidence-based practices, Trauma-Focused Cognitive-Behavioral Therapy


Of the 13% to 20% of children in the United States who experience mental health disorders (Centers for Disease Control and Prevention, 2013), many are unable to access high quality, evidence-based practices (EBPs; Hoagwood et al., 2014; Hurlburt et al., 2004; Merikangas et al., 2010). This is of particular concern among trauma-exposed youth; an estimated 70% of American youth witness or experience violence during their lifetime (Saunders & Adams, 2014), with conferred risk for significant behavioral health problems (e.g., Fergusson, Boden, & Horwood, 2008). Eager to increase access to trauma-focused EBPs, researchers, public health officials, and service providers are examining how to effectively train providers to sustainably deliver EBPs with fidelity. One increasingly used set of implementation strategies is the Learning Collaborative (LC) model.

Learning Collaborative Implementation Model

Originally based on quality improvement collaborative models designed to implement a variety of evidence-based treatments in healthcare service systems (e.g., the Breakthrough Series Collaborative model; Institute for Healthcare Improvement, 2003), the LC model engages participants operating at multiple levels of an agency or organization (i.e., direct service providers, supervisors, administrators) to create a supportive infrastructure that can help sustain delivery of an EBP. LCs aim to cultivate both individual and organizational change by providing training in an EBP within the context of ongoing, moderate-intensity implementation support (see the Learning Collaborative Toolkit; Markiewicz, Ebert, Ling, Amaya-Jackson, & Kisiel, 2006). The LC combines didactic and experiential activities, as well as ongoing evaluation of specific quality improvement outcomes (e.g., patient engagement, provider adherence and competence), to facilitate effective and sustainable delivery of a specific EBP. LCs typically include in-person trainings (often called learning sessions), training cases, and consultation or coaching with an expert in the EBP to address individual and organizational implementation barriers (Nadeem, Olin, Hill, Hoagwood, & Horwitz, 2013).

The National Child Traumatic Stress Network (NCTSN) has widely utilized the LC model (Ebert, Amaya-Jackson, Markiewicz, Kisiel, & Fairbank, 2012; Nadeem, Olin, Hill, Hoagwood, & Horwitz, 2014) for large-scale training and implementation of trauma-focused EBPs, such as Trauma-Focused Cognitive Behavioral Therapy (TF-CBT; Cohen, Mannarino, & Deblinger, 2017). Despite growing use of LCs, evidence of their effectiveness in supporting the initial and sustained implementation of EBPs is somewhat limited. For example, in Nadeem and colleagues’ (2014) review of LCs, among the 16 LC studies that focused on implementation of EBPs in behavioral health settings, only eight reported outcomes related to the “sustained use of new practices after the conclusion of the LC.” Among the handful of published LC studies, just two have examined large scale efforts to disseminate TF-CBT via the LC model. Ebert and colleagues (Ebert et al., 2012) reported on the NCTSN’s first application of the LC model to disseminate TF-CBT to 109 staff across 11 organizations. The authors concluded that the LC model facilitated skillful delivery of TF-CBT with fidelity over time, with all 11 organizations’ directors reporting continued delivery of TF-CBT at their organization at the one-year follow-up. Importantly, the evidence for sustained use of TF-CBT at the one-year follow-up was based on a simple yes/no from organization directors rather than clinicians themselves.

More recently, Lang and colleagues have reported on two separate statewide initiatives to create a trauma-informed child welfare system (Lang, Campbell, Shanley, Crusto, & Connell, 2016) and to utilize LCs to disseminate TF-CBT (Lang, Franks, Epstein, Stover, & Oliver, 2015) through the Department of Children and Families, an integrated state agency that includes child welfare, behavioral health, prevention, juvenile justice, and education. Results indicated small-to-moderate significant improvements among child welfare workers in trauma-informed knowledge, practice, and collaboration across the initial two-year implementation period (Lang et al., 2016). In a second statewide initiative, participation in year-long LCs for clinical providers was associated with pre-to-post improvements in reported use of TF-CBT practices, attitudes towards EBPs, organizational readiness to provide trauma-informed care, and reductions in child trauma symptoms (Lang et al., 2015). Of note, these LCs to train clinical providers in TF-CBT were conducted separately from the child-welfare trainings described above (Lang et al., 2016).

In a subsequent manuscript, Lang and colleagues (Lang, Randall, Delaney, & Vanderploeg, 2017) reported that TF-CBT practices were sustained among agencies participating in one of their LCs for up to nine years post-implementation. However, 'sustainment' was defined by the increased number of agencies in the state that offered TF-CBT over a nine-year period, not by whether LC-trained providers continued to deliver TF-CBT with quality after the LC ended. Collectively, these findings suggest that LC participation is associated with initial adoption of trauma-focused practices and may promote sustained use among organizations; however, to our knowledge, no studies to date have reported on within-participant outcomes, leaving open the question of whether the LC model is associated with sustainment of TF-CBT practices following cessation of LC-related supports.

Community-based Learning Collaborative Implementation Model

By design, LCs typically train clinicians within a single agency or organization. As such, LCs may have a limited impact on the broader community of public service sectors and service organizations involved in the identification, assessment, and treatment of trauma-exposed youth, such as child welfare or juvenile justice. In addition, the LC framework does not include any mechanism to increase awareness of or local demand for EBPs. To address these limitations, Hanson and colleagues (Hanson et al., 2016; Saunders & Hanson, 2014) augmented the LC model to include (a) training of both clinical and broker professionals (i.e., nonclinical professionals who identify and link children with mental health service providers) from (b) three organizational levels (frontline providers [e.g., therapists, case workers], supervisors, and senior leaders [agency directors/program managers]) (c) representing multiple key service organizations within a targeted community (e.g., state, county). This enhanced model, called the Community-Based Learning Collaborative (CBLC), sought to connect local professionals from multiple service systems (i.e., mental health, child welfare, juvenile justice) in an effort to increase both the supply of and demand for EBPs. Finally, CBLC activities were designed to promote the development and sustainment of interprofessional collaborations and inter-organizational relationships (Hanson et al., 2016), which have been shown to improve patient care in medical settings (Hammick, Freeth, Koppel, Reeves, & Barr, 2007; Zwarenstein, Goldman, & Reeves, 2009) and to increase utilization of mental health services (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Hurlburt et al., 2004). These studies suggest that proactive cultivation of interprofessional relationships may drive local demand for EBPs, but their impact on sustained implementation remains unclear.
In sum, the CBLC’s inclusion of multiple disciplines, across professional levels and youth service systems in all training and implementation activities, with the intent to build interorganizational collaboration, distinguishes the current work from prior trauma-focused state-wide LC initiatives (Ebert et al., 2012; Lang et al., 2016, 2015).

Promoting Sustainment of Practices and Support for EBP

Sustainability, defined as "continued use of program components at sufficient intensity for the sustained achievement of desirable program goals and population outcomes" (Shelton, Cooper, & Wiltsey-Stirman, 2018), may be the most relevant measure of effective implementation because it reflects how well an organization continues to implement a new program or practice after the withdrawal of support from a training team (see Proctor et al., 2011; Scheirer & Dearing, 2011). In a sense, it represents the transition of a novel practice into standard practice. Operational definitions and indicators of sustainability vary widely (for reviews, see Shelton et al., 2018; Wiltsey-Stirman et al., 2012), as they typically reflect study-specific criteria, which can complicate comparisons across studies. Some require strict protocol adherence for a site to achieve sustained implementation, whereas others specify that certain intervention components be maintained while allowing adaptations or natural drift from the original model. Likewise, sustainability outcomes can be assessed at individual (e.g., client outcomes, provider practices), organizational, or community levels. Sustainability can be influenced by a variety of factors, including organizational climate (Glisson et al., 2008; Glisson & Green, 2011) and staff turnover (Aarons & Sawitzky, 2006; Aarons, Sommerfeld, Hecht, Silovsky, & Chaffin, 2009; Woltmann et al., 2008). Overall, studies conducted to date suggest that strategies to support implementation, including sustained practice use, should seek to change attitudes and practices of both individual providers and their organizations.
Importantly, while a number of studies have examined the effects of specific implementation support strategies on sustained use of EBPs (Brown et al., 2014; Chamberlain et al., 2012; Glisson et al., 2010; Lang et al., 2017), LCs and their impact on sustained practices across providers and service agencies have not been the focus of much research to date.

The Present Study

Evidence of the effectiveness of LCs and CBLCs to disseminate and implement EBPs in community-based mental health service systems is limited but hints at their promise to support implementation. Furthermore, few studies have examined whether LC or CBLC participation promotes sustained implementation of the newly learned practice among participating providers. The present study offers a first step in examining this question, using data from (1) a statewide initiative comprising 10 CBLCs to disseminate and implement trauma-focused EBPs and from (2) a follow-up study with a subsample of these CBLC participants.

Hypotheses.

We hypothesized that pre- to post-CBLC ratings from clinicians would indicate significantly increased use of TF-CBT practices (H1) and more positive attitudes towards EBPs in general (H2; e.g., Aarons, 2005; Aarons et al., 2010). Furthermore, we hypothesized that clinicians and senior leaders would perceive significantly increased support within their organizations for TF-CBT, pre- to post-CBLC (H3), and that ratings from all participants (i.e., clinicians, brokers, senior leaders; Beidas et al., 2018) would indicate significantly increased support within their organizations for EBPs generally (H4). Finally, we predicted that, among the subsample of follow-up study participants, the increased use of, improved attitudes toward, and perceptions of organizational support for trauma-focused and other EBPs observed pre- to post-CBLC would remain after the cessation of CBLC activities (H5), thus providing preliminary evidence for sustainment of CBLC-related training gains.

Methods

Study Context and Setting

Project BEST and our statewide CBLC initiative have been discussed in detail elsewhere (Hanson et al., 2016, 2018, 2019; Saunders & Hanson, 2014). Briefly, Project BEST was a statewide implementation effort that used CBLCs to train clinicians, brokers, and senior leaders from community child welfare, child advocacy and mental health organizations to utilize evidence-based, trauma-focused practices (i.e., TF-CBT). The overarching Project goal was to ensure equal access to high-quality, evidence-based mental health assessment and psychosocial treatment for all abused children in the state. Guided by the empirical literature (i.e., EPIS framework; Aarons, Hurlburt, & Horwitz, 2011), Project BEST team members developed the CBLC model (described above) to provide systematic training, ongoing implementation support, and promote interprofessional relationships that would facilitate sustainment of training outcomes among individuals and their organizations. Salient Project BEST implementation outcomes (Hanson et al., 2018, 2019) included high completion rates, perceived utility of CBLC components, small to moderate significant pre- to post-CBLC decreases in barriers to child trauma treatment, and small to large significant pre- to post-CBLC increases in participant-reported use of trauma-focused EBPs, interprofessional collaboration, and perceived utilization of evidence-based child trauma treatment.

Research Design

The present non-controlled, prospective study utilized survey and implementation data collected from Project BEST. Project BEST data were collected from all participants at pre-CBLC baseline (T1) and upon completion of their CBLC (T2). Additional data for the present study were collected from a separate research study that explored inter-organizational relationships, interprofessional collaboration, and select implementation outcomes with a subset of Project BEST participants. This separate research study, FIPC, collected additional information at later timepoints and allowed our team to observe whether changes in attitudes, practices, and perceptions were sustained over time (i.e., T3).

Participants

A total of 10 CBLCs were completed between 2007 and 2016, averaging 85.70 attendees per CBLC (SD = 17.05, range: 60–111). Roughly 13% of participants (n = 131) dropped out or discontinued their participation in the CBLC, with a plurality citing change of employment (44%) as their reason for termination. Participants for Project BEST included those who participated in one of the 10 CBLCs. Because this was a training initiative rather than a research study, pre- and post-surveys were a required component of the CBLC, and participant demographic information was not routinely collected. Project BEST participants (N = 857) were from 153 organizations, spanning the departments of mental health (41%), social services (25%), child advocacy (10%), juvenile justice (5%), and other organizations (19%). The sample included 492 clinicians (57%), 225 brokers (26%), and 140 senior leaders (16%).

Project BEST participants were actively recruited to participate in FIPC from March 2015 through January 2016 via email, letters, and phone calls; senior leaders were asked to facilitate contact with current and former agency staff to help our team reach CBLC attendees who may have changed jobs (i.e., snowball sampling). There were no eligibility requirements for FIPC participants, with the exception that they had previously participated in a Project BEST CBLC. FIPC had a targeted enrollment of n = 150, equally distributed across clinicians, brokers, and senior leaders (i.e., 50 of each; for more information, see Hanson et al., 2016). However, of the Project BEST participants, only 146 participants opted to enroll in the follow-up FIPC study. These participants represented all 10 CBLCs, with 95 clinicians (65%), 25 brokers (17%), and 26 senior leaders (18%). Demographic data showed FIPC participants were predominantly women (93%), White (64%) or African-American (35%), and Non-Hispanic (98%). On average, FIPC participants were 43.80 years old (SD = 10.40). Most held postgraduate degrees (89%) and were employed by local agencies, including departments of mental health (42%), social services (14%), child advocacy centers (14%), juvenile justice (7%), schools (6%), private practices (5%), or other organizations (12%). FIPC participants reported working for their current agency an average of 9.30 years (SD = 8.56 years), though responses varied widely (i.e., range: 1 month–31 years).

The present study analyzed data from 404 Project BEST participants, including a subsample of 119 FIPC participants. For the T1–T2 analyses (see below), participants were included in the present study if they (1) had the opportunity to complete all relevant T1 and T2 surveys (i.e., one CBLC was not administered T2 surveys, and its attendees were excluded from analyses) and (2) provided at least partial survey data at both T1 and T2 (i.e., 60 CBLC attendees did not provide any survey data at either time point and were thus excluded from the present analyses). Of the 146 participants recruited for FIPC, only 119 provided at least partial survey data at both T2 and T3; their data were analyzed to measure sustained attitudes and practices at follow-up.

Analysis of Project BEST and FIPC participant attributes that may have influenced follow-up results indicated that their rostering rates (i.e., the percentage of participants who attended all learning sessions and completed consultation calls and, for clinicians, also completed at least two TF-CBT training cases; 58% and 64%, respectively) were not statistically different, χ2(1) = 1.99, p = .16. Rostering rates also did not significantly differ among participant roles (i.e., clinicians [59%], brokers [25%], and senior leaders [17%]), χ2(2) = 1.66, p = .44, indicating receipt of equivalent training.

Research Procedures

All data were collected in accordance with APA Ethical Guidelines for Research and approved by [institution]’s Institutional Review Board. Study outcome measures were collected via online surveys (i.e., Survey Monkey) at each time point. The average length of time between T1 and T2 data collection for Project BEST was 16.40 months (SD = 5.05). Among the subsample of FIPC participants who provided T3 data, the average length of time between T2 and T3 data collection was 21.23 months (SD = 10.48), and the average length of time between T1 and T3 data collection was 37.76 months (SD = 12.60).

Measures

TF-CBT Practices Scale (TPS).

Derived from the Clinical Practices Questionnaire (CPQ; Deblinger, Cohen, Runyon, & Hanson, 2005), the TPS is a self-report measure that includes 44 items assessing the frequency with which clinicians use TF-CBT-specific clinical practices (TPS available upon request from second author). Items assess the structure and content of therapy sessions, including psychoeducation, personal safety skills, coping and stress management, trauma-focused intervention, and behavior management skills training. Items were rated on a 6-point Likert scale (i.e., 0 = “0% of the time” to 5 = “81%−100% of the time”) and demonstrated excellent internal consistency in the current sample (α = .91). TPS items were averaged to create a total score, with higher scores indicating more frequent self-reported use of TF-CBT practices. Of note, our team has previously published on TPS outcomes for a subsample of the present sample (i.e., 572 participants from five of our 10 CBLCs; Hanson et al., 2019); all other measures and all FIPC data are novel.

Evidence-Based Practice Attitude Scale (EBPAS).

The EBPAS (Aarons et al., 2010) is a 15-item, empirically validated measure of attitudes towards common EBPs; only clinicians completed the EBPAS. Subscales assess the appeal of EBPs, requirements to adopt EBPs, openness to new practices or EBPs, and divergence from EBPs in favor of one’s own clinical experience. Items were rated on a 5-point Likert scale (i.e., 1 = “not at all” to 5 = “to a very great extent”) and were averaged for a total score, with higher scores representing more favorable attitudes towards EBPs. We found evidence of good internal consistency (α = .82), which aligned with previously reported reliability statistics (α = .76; Aarons et al., 2010).

TF-CBT in Your Organization (TFO).

This 18-item scale was designed for Project BEST to assess clinicians’ and clinical senior leaders’ perceptions of their organizations’ support for TF-CBT practices (Hanson et al., 2016); brokers did not complete the TFO, as their organizations did not usually provide treatment services. Respondents indicated on a 5-point Likert scale (i.e., 1 = “strongly disagree” to 5 = “strongly agree”) the extent to which they agreed with each statement. Each item began with the phrase “In our organization…”, followed by a statement about perceived support for TF-CBT implementation at the organization: e.g., “…clinical supervisors provide regular clinical supervision on TF-CBT” or “…senior leaders collect and report staff metrics that track the use of TF-CBT with appropriate clients”. The TFO demonstrated excellent internal consistency with the present sample (α = .93), prompting our use of a total mean score.

About Your Organization (AYO).

This 18-item scale was also developed as part of Project BEST, to capture CBLC participants’ perceptions about their organizations’ support for EBPs in general; all participants completed this measure. Participants rated items using a 5-point Likert scale (i.e., 0 = “strongly disagree” to 4 = “strongly agree”). Items began with “In our organization…”, and assessed aspects of EBP support: e.g., “…regular assessment of clients using standardized measures is part of everyday practice”, “…we regularly collect and use feedback from clients about the services we provide”, “…learning new skills and being on the cutting edge are highly valued” (AYO available upon request from second author). Negatively worded items were reverse scored prior to generating total mean scores; higher AYO scores indicated stronger organizational support for EBPs. AYO items demonstrated good internal consistency (α = .82).
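The internal-consistency coefficients reported for these measures (e.g., α = .91 for the TPS, α = .82 for the AYO) follow the standard Cronbach's alpha formula, α = k/(k − 1) × (1 − Σ item variances / total-score variance). As an illustration only (the study computed reliabilities in SPSS; the respondent data below are hypothetical), a minimal Python sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents x 3 positively correlated Likert items
scores = np.array([
    [4, 4, 5],
    [2, 3, 2],
    [5, 4, 5],
    [1, 2, 1],
    [3, 3, 4],
    [4, 5, 4],
], dtype=float)
alpha = cronbach_alpha(scores)
```

With strongly intercorrelated items like these, alpha lands in the "excellent" range; reverse-scored items (as on the AYO) must be recoded before the matrix is passed in.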

Analytic Plan

Data were analyzed using SPSS 25 software (IBM Corp., 2017).

Preliminary Analyses

Nesting.

Average months between T1–T2 and T2–T3 surveys did not correlate significantly with any study variables and thus were not included in subsequent analyses. The potential effect of CBLC cohort on variables of interest (i.e., AYO, EBPAS, TFO, TPS) was modeled using fixed effects and found to be nonsignificant (ps = .20–.40), and it was therefore not included in the final analyses. Modeling the potential nesting effects of participants within organizations was not possible because most organizations contributed only one or two participants (i.e., the mode and median number of Project BEST participants per organization were 1 and 2, respectively). Consequently, all analyses were conducted at the participant level (Hox, 2010).

Missing data.

Of the 857 eligible Project BEST participants, 849 (99%) completed at least part of the pre-CBLC survey measures in the present analysis. These included 492 clinicians, 218 brokers, and 139 senior leaders. In contrast, post-CBLC surveys were completed, at least partially, by 245 clinicians, 85 brokers, and 74 senior leaders, totaling 404 participants (48%). Independent t-tests confirmed that post-CBLC survey completers and non-completers did not significantly differ on any pre-CBLC measured variables (i.e., AYO, EBPAS, TFO, and TPS scores; ps = .08–.69).

Of the 146 participants who enrolled in the FIPC study, 119 (82%) completed at least part of the follow-up survey measures at T3, including 83 clinicians, 17 brokers, and 19 senior leaders. To assess potential differences between Project BEST participants who did and did not participate in the FIPC study and provide T3 data, independent t-tests of pre- to post-CBLC change scores on variables of interest (i.e., AYO, EBPAS, TFO, and TPS) were computed. Results indicated that the two groups did not significantly differ in pre- to post-CBLC changes on the AYO, EBPAS, or TFO (ps = .40–.68). However, the groups differed to a small but statistically significant degree in their pre- to post-CBLC change on TPS scores, t(173) = −2.53, p = .01, mean difference = −0.41, g = −0.44. Specifically, Project BEST clinicians who did not participate in the FIPC study had significantly greater TPS improvements (M = 0.77, SD = 0.95) than those who participated in FIPC (M = 0.36, SD = 0.88). Follow-up t-tests indicated these clinicians differed significantly only on their pre-CBLC TPS scores, t(138) = 2.73, p = .007, mean difference = 0.32, g = 0.25, such that FIPC participants had higher pre-CBLC TPS scores than FIPC non-participants. In contrast, these groups of clinicians did not differ significantly on post-CBLC scores (p = .25). Notwithstanding these differences, TPS scores increased significantly pre- to post-CBLC for Project BEST participants who did, t(43) = 1.95, p = .03, d = 0.30, and did not participate in FIPC, t(130) = 8.35, p < .001, d = 0.73.

Among those who submitted survey data, not all participants completed each survey item. Across all pre-CBLC survey items, 226 cases (27%) and 2,959 values (6%) had missing data. Little’s test indicated these pre-CBLC data were missing completely at random (MCAR) for all three types of participants (i.e., clinicians, χ2[8,153] = 7,706.20, p = 1.00; brokers, χ2[340] = 335.89, p = .55; and senior leaders, χ2[165] = 159.41, p = .61). Similarly, across all submitted post-CBLC survey items, 134 cases (33%) and 3,301 values (13%) had missing data. Little’s test indicated these post-CBLC data were also MCAR for all three types of participants (i.e., clinicians, χ2[4,355] = 4,438.81, p = .18; brokers, χ2[208] = 195.60, p = .72; and senior leaders, χ2[88] = 53.17, p = 1.00). Finally, across all submitted follow-up (T3) survey items, 71 cases (60%) and 1,046 values (12%) had missing data. Once again, Little’s test indicated these T3 data were MCAR for all three types of participants (i.e., clinicians, χ2[3,102] = 2,445.35, p = 1.00; brokers, χ2[56] = 63.07, p = .24; and senior leaders, χ2[22] = 3.74, p = 1.00).

Given the aforementioned fractions of MCAR data, multiple imputation (MI; m = 20; Dong & Peng, 2013) was conducted at each timepoint to replace missing values for any measure a participant partially completed. All analyses were conducted with and without MI; because the magnitude, direction, and statistical significance of the results were essentially unchanged, only the MI results are reported.
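Combining results across the m = 20 imputed datasets conventionally follows Rubin's rules: the pooled point estimate is the mean of the per-imputation estimates, and its total variance adds the average within-imputation variance to an inflated between-imputation variance. A minimal sketch with hypothetical numbers (not study data; the study's pooling was done in SPSS):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m per-imputation estimates and variances via Rubin's rules.

    Returns (pooled estimate, total variance), where total variance is
    W + (1 + 1/m) * B: W = mean within-imputation variance,
    B = between-imputation variance of the estimates.
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = est.size
    q_bar = est.mean()            # pooled point estimate
    w = var.mean()                # within-imputation variance
    b = est.var(ddof=1)          # between-imputation variance
    return q_bar, w + (1 + 1 / m) * b

# Hypothetical mean scale scores from m = 20 imputed datasets
rng = np.random.default_rng(0)
means = 4.5 + rng.normal(0, 0.02, size=20)   # per-imputation means
ses_sq = np.full(20, 0.06 ** 2)              # per-imputation squared SEs
q, total_var = pool_rubin(means, ses_sq)
```

Because the between-imputation term is always non-negative, the pooled variance is never smaller than the average within-imputation variance, which is what prevents MI from overstating precision.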

Primary Analyses

Implementation: Pre- to post-CBLC analyses.

To assess hypothesized pre- to post-CBLC changes in the variables of interest (as measured by AYO, EBPAS, TFO, and TPS scores), data from the full sample of Project BEST participants were compared at T1 and T2 using a series of one-tailed paired-samples t-tests. For hypothesized CBLC-related changes on the TFO and AYO, follow-up one-tailed paired t-tests were conducted for each participant role that completed each measure (i.e., TFO: clinicians and clinical senior leaders; AYO: clinicians, brokers, and senior leaders), given evidence that perceptions of organizational climate vary significantly by professional role (Beidas et al., 2018).
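The core computation here, a one-tailed paired t statistic with a paired Cohen's d, can be sketched as follows. This is illustrative only: the study ran its tests in SPSS, and the scores below are simulated, not study data.

```python
import numpy as np

def paired_t_one_tailed(pre, post):
    """One-tailed paired t-test (H1: post > pre) plus paired Cohen's d.

    Returns (t, d). Compare t against the one-tailed critical value for
    df = n - 1 (e.g., about 1.68 for df = 49 at alpha = .05), or feed it
    into a t-distribution CDF for an exact p-value.
    """
    diff = np.asarray(post, float) - np.asarray(pre, float)
    n = diff.size
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
    d = diff.mean() / diff.std(ddof=1)  # d for paired data: mean diff / SD of diffs
    return t, d

# Simulated pre/post scores for 50 hypothetical clinicians
rng = np.random.default_rng(1)
pre = rng.normal(3.9, 0.8, size=50)          # pre-training scale scores
post = pre + rng.normal(0.6, 0.5, size=50)   # post-training gain
t_stat, d = paired_t_one_tailed(pre, post)
```

In practice a library routine such as scipy.stats.ttest_rel (with a one-sided alternative) would replace the hand-rolled statistic; the manual version just makes the arithmetic behind Tables 1 and 2 explicit.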

Sustainment: Post-CBLC to follow-up analyses.

To test our hypotheses around sustainability of attitudes and practices following CBLC completion, we first conducted paired samples t-tests using T2 and T3 data from FIPC participants. Next, we ran non-inferiority tests using 90% confidence intervals of Cohen’s ds based on the above t-tests (Steiger, 2004; Wuensch, 2012) to evaluate whether any significant T1 to T2 improvements in attitudes toward, use of, and perceptions of organizational support for trauma-focused EBPs were sustained from T2 to T3 (i.e., T3 scores could remain practically equivalent or significantly higher than T2 scores). Follow-up paired t- and non-inferiority tests for role-specific AYO and TFO scores were only computed when cell sizes (n ≥ 10) provided sufficient power.
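The non-inferiority logic above can be illustrated with a short sketch. Note the hedges: this uses a large-sample normal approximation to the confidence interval for Cohen's d rather than the noncentral-t method of Steiger (2004) that the study applied, and both the change scores and the illustrative margin of −0.2 are hypothetical.

```python
import numpy as np

def d_ci_90(diff: np.ndarray):
    """Paired Cohen's d with an approximate 90% CI.

    Uses the large-sample normal approximation SE(d) ~ sqrt(1/n + d^2/(2n));
    the study instead used noncentral-t intervals (Steiger, 2004).
    """
    n = diff.size
    d = diff.mean() / diff.std(ddof=1)
    se = np.sqrt(1 / n + d ** 2 / (2 * n))
    z = 1.645  # two-sided 90% normal quantile
    return d, (d - z * se, d + z * se)

# Hypothetical T2->T3 change scores for 82 participants
rng = np.random.default_rng(2)
diff = rng.normal(0.0, 0.9, size=82)
d, (lo, hi) = d_ci_90(diff)

# "Sustained" (non-inferior) if the CI lower bound clears the margin,
# i.e., any decline is too small to count as meaningful loss of the gain.
margin = -0.2  # illustrative margin, not the study's criterion
sustained = lo > margin
```

The decision rule mirrors the text: T3 scores may sit anywhere at or above the T2 level (or even above it, as with the EBPAS), so long as the interval's lower bound excludes a decline larger than the margin.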

Results

Pre-to-Post CBLC Results: Project BEST Sample (see Table 1)

Table 1.

Changes in Project BEST Sample from Pre- to Post-Community-Based Learning Collaborative (i.e., T1 and T2)

Measure | n | T1 M (SD)^a | T2 M (SD)^a | t^a | p^a,b | d
TPS | 242 | 3.88 (0.84) | 4.52 (0.87) | 9.63 | < .001 | 0.62
EBPAS | 245 | 2.99 (0.77) | 2.84 (0.73) | −2.17 | .03 | −0.14
TFO | 272 | 3.04 (0.70) | 3.39 (0.62) | 8.56 | < .001 | 0.52
 Clinician | 245 | 3.03 (0.72) | 3.35 (0.61) | 7.51 | < .001 | 0.48
 Senior Leader | 27 | 3.15 (0.54) | 3.77 (0.50) | 5.23 | < .001 | 1.01
AYO | 404 | 3.46 (0.56) | 3.51 (0.56) | 2.13 | .02 | 0.11
 Clinician | 245 | 3.56 (0.54) | 3.57 (0.55) | 0.43 | .67 | 0.03
 Broker | 85 | 3.23 (0.51) | 3.35 (0.49) | 2.38 | .01 | 0.26
 Senior Leader | 74 | 3.37 (0.61) | 3.45 (0.62) | 1.81 | .04 | 0.21

Note. TPS = TF-CBT Practices Scale (completed by clinicians only); EBPAS = Evidence-Based Practice Attitude Scale (completed by clinicians only); TFO = TF-CBT in Your Organization (completed by clinicians and clinical senior leaders); AYO = About Your Organization (completed by clinicians, brokers, and senior leaders).

^a = based on pooled multiple imputation data (m = 20). ^b = one-tailed.

TF-CBT use and organizational support.

As hypothesized (H1), clinicians’ TPS scores significantly increased pre- to post-CBLC, t(241) = 9.63, p < .001, reflecting a medium-sized gain in self-reported use of TF-CBT practices (d = 0.62). Also as hypothesized (H3), clinicians’ and senior leaders’ perceived organizational support for TF-CBT practices (as measured by TFO scores) significantly increased to a medium degree pre- to post-CBLC, t(271) = 8.56, p < .001, d = 0.52. Follow-up, role-specific analyses revealed that perceived organizational support for TF-CBT practices increased significantly pre- to post-CBLC for both clinicians, t(244) = 7.51, p < .001, and senior leaders, t(26) = 5.23, p < .001. However, the magnitude of these improvements varied notably by professional role, such that pre- to post-CBLC gains in TFO scores were medium for clinicians (d = 0.48) but large for senior leaders (d = 1.01).

Individual attitudes toward and organization support for EBPs in general.

Contrary to our hypothesis (H2), clinicians’ attitudes towards EBPs, as measured by the EBPAS, decreased significantly from pre- to post-CBLC, t(244) = −2.17, p = .03, although the magnitude of this change was trivial (d = −0.14). In contrast, and as hypothesized (H4), AYO scores for the overall Project BEST sample increased significantly pre- to post-CBLC, t(403) = 2.13, p = .02, although the perceived gain in organizational support for EBPs was likewise trivial (d = 0.11). Yet, as with TFO scores, pre- to post-CBLC changes in AYO scores varied notably across professional roles. Specifically, AYO scores did not change significantly for clinicians, t(244) = 0.43, p = .67, d = 0.03, but increased significantly, to a small degree, for both brokers, t(84) = 2.38, p = .01, d = 0.26, and senior leaders, t(73) = 1.81, p = .04, d = 0.21.

Outcomes Sustained at Follow-Up: FIPC Sample (see Table 2)

Table 2.

Changes in FIPC Subsample from Post-Community-Based Learning Collaborative (T2) to Follow-Up (T3)

Measure  n  T2 M (SD)  T3 M (SD)  t^a  p^b  d  Noninferiority d 90% CI^c
TPS 82 4.52 (0.98) 4.44 (0.88) −0.52 .60 −0.04 [−0.17, 0.09]
EBPAS 83 2.86 (0.71) 3.18 (0.46) 2.17 .04 0.24 [0.05, 0.42]
TFO 92 3.45 (0.60) 3.44 (0.61) −0.14 .89 −0.01 [−0.19, 0.16]
 Clinician 83 3.42 (0.62) 3.41 (0.61) −0.13 .90 −0.01 [−0.19, 0.17]
 Senior Leader 9 3.74 (0.37) 3.72 (0.56) −0.07 .95 −0.02
AYO 119 3.48 (0.52) 3.53 (0.51) 0.72 .48 0.07 [−0.09, 0.22]
 Clinician 83 3.61 (0.54) 3.64 (0.48) 0.52 .60 0.06 [−0.12, 0.24]
 Broker 17 3.23 (0.53) 3.34 (0.58) 0.83 .41 0.20 [−0.19, 0.59]
 Senior Leader 19 3.57 (0.29) 3.66 (0.24) 0.83 .41 0.19 [−0.19, 0.57]

Note. TPS = TF-CBT Practices Scale (completed by clinicians only); EBPAS = Evidence-Based Practice Attitude Scale (completed by clinicians only); TFO = TF-CBT in Your Organization (completed by clinicians and clinical senior leaders); AYO = About Your Organization (completed by clinicians, brokers, and senior leaders).

^a Based on pooled multiple imputation data (m = 20).

^b One-tailed.

^c Role-specific noninferiority tests were conducted only when n ≥ 10.

TF-CBT use and organizational support.

Consistent with hypotheses (H5), clinicians’ TPS scores did not change significantly from post-CBLC to follow-up, t(81) = −0.52, p = .60, d = −0.04. Moreover, noninferiority testing showed that follow-up TPS scores were statistically noninferior to post-CBLC scores (d 90% CI [−0.17, 0.09]), suggesting that pre- to post-CBLC increases in self-reported TF-CBT practices were sustained at the follow-up assessment. Similarly, perceived organizational support for TF-CBT practices (as measured by TFO scores) did not differ significantly between post-CBLC and follow-up, t(91) = −0.14, p = .89, d = −0.01, and was statistically noninferior, d 90% CI [−0.19, 0.16]. This pattern was repeated in role-specific analyses for both clinicians, t(82) = −0.13, p = .90, d = −0.01, 90% CI [−0.19, 0.17], and the clinical senior leaders who completed the TFO post-CBLC and at follow-up, t(8) = −0.07, p = .95, d = −0.02. Thus, results consistently supported sustainment of pre- to post-CBLC gains in perceived organizational support for TF-CBT practices across professional roles.
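The noninferiority conclusions above turn on whether the lower bound of the 90% CI for the follow-up effect size stays above a prespecified margin. A rough sketch of that logic is below, with two stated assumptions: the CI uses a large-sample normal approximation (the article's CIs were derived from the noncentral t distribution; Steiger, 2004), and the margin of d = −0.20 is an illustrative value, not one stated in the text:

```python
import math

MARGIN = -0.20  # assumed illustrative noninferiority margin; not stated in the article

def dz_ci90(dz, n):
    """Approximate 90% CI for a within-subject d_z (normal approximation).

    A stand-in for the noncentral-t CIs reported in the article.
    """
    se = math.sqrt(1.0 / n + dz ** 2 / (2.0 * n))
    half = 1.645 * se  # z critical value for a two-sided 90% interval
    return dz - half, dz + half

def noninferior(dz, n, margin=MARGIN):
    """Declare noninferiority if the CI's lower bound exceeds the margin."""
    lower, _ = dz_ci90(dz, n)
    return lower > margin

# Overall TFO at follow-up: d = -0.01, n = 92
lo, hi = dz_ci90(-0.01, 92)  # approx (-0.18, 0.16), close to the reported [-0.19, 0.16]
```

Under these assumptions the approximation lands near most of the reported intervals, but it is only a sketch of the inferential logic, not the authors' computation.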

Individual attitudes toward and organizational support for EBPs in general.

In contrast to the significant pre- to post-CBLC decrease in clinicians’ attitudes towards EBPs in general (as measured by EBPAS scores), these attitudes increased, on average, from post-CBLC to follow-up to a small but statistically significant degree, t(82) = 2.17, p = .04, d = 0.24, 90% CI [0.05, 0.42]. Also consistent with hypotheses, perceived organizational support for EBPs in general (as measured by AYO scores) did not differ significantly between post-CBLC and follow-up for the overall FIPC subsample, t(118) = 0.72, p = .48, d = 0.07, and was statistically noninferior, d 90% CI [−0.09, 0.22]. This pattern occurred for all three CBLC roles: clinicians, t(82) = 0.52, p = .60, d = 0.06, 90% CI [−0.12, 0.24]; brokers, t(16) = 0.83, p = .41, d = 0.20, 90% CI [−0.19, 0.59]; and senior leaders, t(18) = 0.83, p = .41, d = 0.19, 90% CI [−0.19, 0.57]. In other words, pre- to post-CBLC improvements in perceived organizational support for EBPs in general were sustained across professional roles at follow-up.

Discussion

The present study analyzed survey data collected during and after Project BEST, a statewide implementation effort to train clinicians, brokers, and senior leaders from different community organizations to utilize evidence-based, trauma-focused clinical practices (e.g., TF-CBT). The CBLC model’s inclusion of multiple disciplines across professional levels and youth service systems in all training/implementation activities, with the intent to build interorganizational collaboration, distinguishes the current work from prior statewide initiatives to implement TF-CBT (Ebert et al., 2012; Lang et al., 2016, 2015). A subset of those participants (i.e., the FIPC subsample) provided follow-up data, on average, 21 months post-CBLC—nearly 38 months after initial baseline assessments were completed—allowing for evaluation of sustained clinical practices and organizational support. We hypothesized that clinicians would report pre- to post-CBLC improvements in their self-reported use of TF-CBT skills (H1) and attitudes towards EBPs in general (H2). We also hypothesized that clinicians and senior leaders would endorse improved organizational support for TF-CBT (H3), and that all Project BEST participants (i.e., clinicians, brokers, and senior leaders) would perceive increased support for EBPs in general at their organizations (H4). Finally, we examined whether hypothesized pre- to post-CBLC changes were sustained over time (H5). Study hypotheses were generally supported, suggesting that positive changes noted pre- to post-CBLC were sustained.

Changes in Trauma-Specific Practices and Organizational Support

Our results suggest that the CBLC model may promote changes in clinicians’ use of, and perceived organizational support for, trauma-focused EBPs. Project BEST clinicians self-reported a significant, medium-sized increase in their use of TF-CBT practices to treat trauma-exposed youth from pre- to post-CBLC (i.e., T1 to T2). This result is consistent with our prior work examining self-reported TF-CBT practices in a subsample of the present sample (i.e., participants from 5 of the 10 CBLCs examined herein; Hanson et al., 2018) and, more broadly, with prior research linking LCs to improved clinical practices (for a review, see Daily et al., 2018). Ratings by Project BEST clinicians and senior leaders, respectively, indicated medium and large increases in perceived support for TF-CBT at their organizations from pre- to post-CBLC, suggesting that integrated training for clinical service providers and organizational leadership may promote organization-level support for the targeted practice. Together, these results suggest that CBLCs may be an effective way not only to train professionals in trauma-focused service delivery, but also to maximize perceived organizational support for trauma-specific EBPs. It is, however, important to note that senior leaders reported roughly twice as much improvement in organizational support for TF-CBT as clinicians. This type of variability across roles has been noted in our own Project BEST work (Hanson et al., 2019), as well as in prior research (e.g., Beidas et al., 2018), and highlights the importance of obtaining information across all levels of an organization.

Changes in Attitudes toward and Organizational Support for EBPs

In contrast to hypotheses, clinicians reported significantly less positive attitudes towards EBPs in general immediately post-CBLC, although the decline was trivial in magnitude. This counters work by Lang and colleagues (2015), who found that mental health providers’ attitudes towards TF-CBT improved post-LC. The EBPAS (Aarons et al., 2010) itself offers one possible explanation for these divergent findings: it does not assess attitudes towards a specific EBP, such as TF-CBT, and thus may not have captured changes in attitudes towards TF-CBT specifically. A second potential explanation lies in differences in perceived organizational support for EBPs across participant roles. Previous research indicates that providers’ attitudes towards EBPs are related to several factors, including knowledge of and training in EBPs, organizational characteristics, and broader economic and political contextual factors (Glisson et al., 2008). While our findings indicated an overall post-CBLC increase in organizational support for EBPs, closer examination of the data showed that brokers and senior leaders, but not clinicians, reported these increases. Thus, consistent with prior research (e.g., Beidas et al., 2018), participant roles may influence perceptions of organizational support, which may in turn shape attitudes towards EBPs. More work is needed to examine whether CBLC training in a specific EBP elevates perceptions of organizational support for EBPs broadly, and whether tailoring implementation initiatives to directly address attitudinal barriers towards a specific EBP is warranted.

Sustained Use of TF-CBT and EBPs

A second aim of this study was to examine sustained use of TF-CBT and EBPs. Analyses conducted at T3, an average of 21 months after completion of the CBLC, indicated that FIPC participants across all roles (clinicians, senior leaders, and brokers) reported sustained organizational support for EBPs. This finding is particularly important given the critical role of organizational support in EBP sustainment (Aarons et al., 2011; Swain, Whitley, McHugo, & Drake, 2010), which CBLCs are thought to target by engaging individuals from different professional roles, both within and across organizations.

Specific to trauma-focused EBPs, clinicians and senior leaders also reported sustained organizational support for TF-CBT. These findings align with conceptual models of sustainment, which emphasize organizational and leadership support for EBPs as important for sustaining “buy-in” and continued use (Aarons et al., 2011). From a policy and practice perspective, these data underscore the importance of engaging not only clinicians, but also senior leaders throughout implementation and looking for ways to bolster intra-organizational support for EBPs early on. The CBLC model may have further fostered organizational support for TF-CBT implementation by connecting local organizations with state-level leadership and support. More specifically, in 2012, we established a formal partnership with state-level departments of social services and mental health to conduct CBLCs across the state; some departments offered financial assistance to help their employees attend CBLC learning sessions (see Hanson et al., 2019). Thus, it is possible that this formal partnership resulted in ‘top down’ support for this statewide initiative. Furthermore, this combination of clinician, senior leader, and broker engagement in the CBLC, within the context of state and local leadership, support, and financial assistance, may have contributed to sustained organizational support for TF-CBT among CBLC participants. Emerging research has begun to identify how such policy and funding initiatives may influence implementation and sustainment of EBPs (Jaramillo, Sossa, & Mendoza, 2019).

Though clinicians’ attitudes towards EBPs decreased from pre- to post-CBLC, their attitudes improved significantly by T3. We speculate that sustained organizational support for EBPs in general, and for TF-CBT specifically, may have contributed to clinicians’ more favorable attitudes towards EBPs by T3. Such findings are in line with prior research reporting positive relations between organizational characteristics (e.g., organizational culture and climate) and provider attitudes towards and use of EBPs (Aarons & Sawitzky, 2006; Aarons et al., 2009).

Also consistent with prior research, clinicians sustained their pre- to post-CBLC gains in self-reported TF-CBT practices over time. Ebert and colleagues (2012) similarly found that LC participation was associated with increased self-reported use of TF-CBT, with organizations’ leaders reporting those gains sustained one year post-LC. The present work extended these findings by examining within-subject sustainment of self-reported TF-CBT use nearly two years post-implementation, and by including brokers in CBLC activities to examine whether their inclusion might support sustained EBP use. Indeed, as with clinicians and senior leaders, brokers also reported sustained organizational support for EBPs. Though more rigorous study designs are needed (e.g., observation of clinical practices rather than self-report, inclusion of a comparison condition), these findings support the notion that the enhanced implementation strategies in the CBLC model (Hanson et al., 2018, 2019), including its added focus on developing and fostering interprofessional collaboration, may help promote sustained use of an EBP such as TF-CBT. These findings also align with the multilevel framework described by Shelton et al. (2018), which highlights the individual and organizational factors that influence sustained use of a specific intervention, such as TF-CBT, as well as the broader contextual factors that may influence sustainability across systems (Shelton et al., 2018; Wiltsey-Stirman et al., 2012).

Limitations and Conclusions

The present findings must be considered in the context of several key limitations. First, our study utilized a prospective observational design that lacked both random assignment and a comparison condition, making it impossible to infer causal relationships between CBLC participation and our outcomes of interest. Though less scientifically rigorous, observational designs capitalize on ongoing dissemination and implementation efforts, providing insight into their feasibility and effectiveness in real-world, non-research settings (West, Cham, & Liu, 2014). Second, we relied exclusively on self-report measures, most of which were designed specifically for this study. Our results would be strengthened by data from other sources (e.g., observational coding systems, record reviews, qualitative interviews, fidelity of TF-CBT implementation) and by established measures, both to ensure method variance did not influence the findings and to provide a richer context in which to interpret them. Of note, current efforts are focused on psychometric evaluation of the Project BEST-specific measures (i.e., TPS, TFO, AYO). Third, we were unable to model nesting of participants within specific organizations, due to our data’s considerable singularity; as a consequence, the unmodeled nesting may have significantly impaired our de facto statistical power and related hypothesis testing. Similarly, our data’s relative singularity prohibited parsing apart participant and organization effects. Finally, we limited our definition of sustainment to whether any CBLC-related improvements in self-reported clinical practices or organizational support remained after the cessation of CBLC activities (i.e., whether pre- to post-CBLC gains were retained at the follow-up assessment). We did not examine other valuable sustainability outcomes, such as provider fidelity in delivering TF-CBT (Shelton et al., 2018) or any patient-level outcomes. Given the complexity of some multi-component EBPs, like TF-CBT, additional supports after training will likely be needed to sustain high-fidelity delivery and, in turn, positive patient outcomes.

The present findings highlight the promise of collaborative training approaches (i.e., LCs, CBLCs) to disseminate clinical practices and improve organizational support for EBPs, like TF-CBT. While our findings suggested that these approaches may promote sustained changes in individual and organizational practices, more rigorous research studies are needed to develop, compare, and refine strategies for effective, sustainable dissemination and implementation. Several research teams are actively examining the utility of LC models, both alone (Amaya-Jackson et al., 2018) and in comparison with other training approaches (Herschell et al., 2015). The present findings, in concert with forthcoming innovative work, will contribute to our growing knowledge base about how to efficiently and effectively disseminate EBPs to community-based mental health service systems, and importantly, how to sustain these practices once active implementation is completed.

Acknowledgments

This research was supported in part by grants from the Duke Endowment (1790-SP) and the National Institute of Mental Health (R34 MH104470) awarded to Dr. Hanson. Manuscript preparation was supported in part by a National Institute on Alcohol Abuse and Alcoholism T32 Fellowship (T32 AA007459; PI: Monti) awarded to Dr. Helseth.

Footnotes

Please note that a portion of these findings were presented as a poster at the 2017 ABCT Conference.

Informed consent was obtained from all individual participants included in the study.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee (MUSC Institutional Review Board for Human Research) and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

The authors declare that we have no conflict of interest.

References

  1. Aarons GA (2005). Measuring provider attitudes toward evidence-based practice: Consideration of organizational context and individual differences. Child and Adolescent Psychiatric Clinics of North America, 14(2), 255–271. doi: 10.1016/j.chc.2004.04.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Aarons GA, Glisson C, Hoagwood KE, Kelleher K, Landsverk J, & Cafri G (2010). Psychometric properties and U.S. national norms of the Evidence-Based Practice Attitude Scale (EBPAS). Psychological Assessment, 22(2), 356–365. doi: 10.1037/a0019188 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aarons GA, Hurlburt M, & Horwitz SM (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health, 38(1), 4–23. doi: 10.1007/s10488-010-0327-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Aarons GA, & Sawitzky AC (2006). Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health, 33(3), 289–301. doi: 10.1007/s10488-006-0039-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, & Chaffin MJ (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77(2), 270–80. doi: 10.1037/a0013223 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Amaya-Jackson L, Hagele D, Sideris J, Potter D, Briggs EC, Keen L, … Socolar R (2018). Pilot to policy: Statewide dissemination and implementation of evidence-based treatment for traumatized youth. BMC Health Services Research, 18(1), 589. doi: 10.1186/s12913-018-3395-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Beidas RS, Williams NJ, Green PD, Aarons GA, Becker-Haimes EM, Evans AC, … Marcus SC (2018). Concordance between administrator and clinician ratings of organizational culture and climate. Administration and Policy in Mental Health, 45(1), 142–151. doi: 10.1007/s10488-016-0776-8 [DOI] [PubMed] [Google Scholar]
  8. Brown CH, Chamberlain P, Saldana L, Padgett C, Wang W, & Cruden G (2014). Evaluation of two implementation strategies in 51 child county public service systems in two states: Results of a cluster randomized head-to-head implementation trial. Implementation Science, 9(1), 134. doi: 10.1186/s13012-014-0134-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Centers for Disease Control and Prevention. (2013). Mental health surveillance among children - United States, 2005–2011. MMWR Morbidity and Mortality Weekly Report, 62(Suppl 2), 1–35. [PubMed] [Google Scholar]
  10. Chamberlain P, Roberts R, Jones H, Marsenich L, Sosna T, & Price JM (2012). Three collaborative models for scaling up evidence-based practices. Administration and Policy in Mental Health, 39(4), 278–90. doi: 10.1007/s10488-011-0349-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Cohen JA, Mannarino AP, & Deblinger E (2017). Treating trauma and traumatic grief in children and adolescents (2nd ed.). New York: Guilford Press. [Google Scholar]
  12. Daily S, Tout K, Douglass A, Miranda B, Halle T, Agosti J, … Doyle SR (2018). Culture of Continuous Learning Project: A Literature Review of the Breakthrough Series Collaborative (BSC), OPRE Report #2018–28. Washington, DC: Retrieved from www.acf.hhs.gov/opre [Google Scholar]
  13. Deblinger E, Cohen JA, Runyon M, & Hanson RF (2005). Clinical Practices Questionnaire. Unpublished instrument.
  14. Dong Y, & Peng C-YJ (2013). Principled missing data methods for researchers. SpringerPlus, 2(1), 222. doi: 10.1186/2193-1801-2-222 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Ebert L, Amaya-Jackson L, Markiewicz JM, Kisiel C, & Fairbank JA (2012). Use of the breakthrough series collaborative to support broad and sustained use of evidence-based trauma treatment for children in community practice settings. Administration and Policy in Mental Health, 39(3), 187–99. doi: 10.1007/s10488-011-0347-y [DOI] [PubMed] [Google Scholar]
  16. Fergusson DM, Boden JM, & Horwood LJ (2008). Exposure to childhood sexual and physical abuse and adjustment in early adulthood. Child Abuse & Neglect, 32(6), 607–619. doi: 10.1016/j.chiabu.2006.12.018 [DOI] [PubMed] [Google Scholar]
  17. Fixsen DL, Naoom SF, Blase KA, Friedman RM, & Wallace F (2005). Implementation research: A synthesis of the literature. Tampa, FL. Retrieved from http://nirn.fmhi.usf.edu [Google Scholar]
  18. Glisson C, & Green PD (2011). Organizational climate, services, and outcomes in child welfare systems. Child Abuse & Neglect, 35(8), 582–591. doi: 10.1016/j.chiabu.2011.04.009 [DOI] [PubMed] [Google Scholar]
  19. Glisson C, Schoenwald SK, Hemmelgarn AL, Green PD, Dukes D, Armstrong KS, & Chapman JE (2010). Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology, 78(4), 537–50. doi: 10.1037/a0019160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Glisson C, Schoenwald SK, Kelleher K, Landsverk J, Hoagwood KE, Mayberg S, … Research Network on Youth Mental Health. (2008). Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate, and service structure. Administration and Policy in Mental Health, 35(1–2), 124–33. doi: 10.1007/s10488-007-0152-9 [DOI] [PubMed] [Google Scholar]
  21. Hammick M, Freeth D, Koppel I, Reeves S, & Barr H (2007). A best evidence systematic review of interprofessional education: BEME Guide no. 9. Medical Teacher, 29(8), 735–51. doi: 10.1080/01421590701682576 [DOI] [PubMed] [Google Scholar]
  22. Hanson RF, Saunders BE, Peer SO, Ralston E, Moreland AD, Schoenwald S, & Chapman J (2018). Community-based learning collaboratives and participant reports of interprofessional collaboration, barriers to, and utilization of child trauma services. Children and Youth Services Review, 94, 306–314. doi: 10.1016/j.childyouth.2018.09.038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Hanson RF, Saunders BE, Ralston E, Moreland AD, Peer SO, & Fitzgerald MM (2019). Statewide implementation of child trauma-focused practices using the community-based learning collaborative model. Psychological Services, 16(1), 170–181. doi: 10.1037/ser0000319 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Hanson RF, Schoenwald SK, Saunders BE, Chapman J, Palinkas LA, Moreland AD, & Dopp AR (2016). Testing the Community-Based Learning Collaborative (CBLC) implementation model: a study protocol. International Journal of Mental Health Systems, 10(1), 52. doi: 10.1186/s13033-016-0084-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Herschell AD, Kolko DJ, Scudder AT, Taber-Thomas S, Schaffner KF, Hiegel S. a., … Mrozowski S (2015). Protocol for a statewide randomized controlled trial to compare three training models for implementing an evidence-based treatment. Implementation Science, 10(1), 133. doi: 10.1186/s13012-015-0324-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Hoagwood KE, Olin SS, Horwitz S, McKay M, Cleek A, Gleacher A, … Hogan M (2014). Scaling up evidence-based practices for children and families in New York State: Toward evidence-based policies on implementation for state mental health systems. Journal of Clinical Child and Adolescent Psychology, 43(2), 145–57. doi: 10.1080/15374416.2013.869749 [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Hox JJ (2010). Multilevel analysis: Techniques and applications (2nd ed., Vol. 31). New York, NY: Routledge. [Google Scholar]
  28. Hurlburt MS, Leslie LK, Landsverk J, Barth RP, Burns BJ, Gibbons RD, … Zhang J (2004). Contextual predictors of mental health service use among children open to child welfare. Archives of General Psychiatry, 61(12), 1217–24. doi: 10.1001/archpsyc.61.12.1217 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. IBM Corp. (2017). IBM SPSS Statistics for Windows, Version 25.0. Armonk, NY: IBM Corp. [Google Scholar]
  30. Institute for Healthcare Improvement. (2003). The Breakthrough Series: IHI’s collaborative model for achieving breakthrough improvement. Boston, MA: Retrieved from www.IHI.org [Google Scholar]
  31. Jaramillo JÁ, Sossa JWZ, & Mendoza GLO (2019). Barriers to sustainability for small and medium enterprises in the framework of sustainable development—Literature review. Business Strategy and the Environment, 28(4), 512–524. doi: 10.1002/bse.2261 [DOI] [Google Scholar]
  32. Lang JM, Campbell K, Shanley P, Crusto CA, & Connell CM (2016). Building capacity for trauma-informed care in the child welfare system: Initial results of a statewide implementation. Child Maltreatment, 21(2), 113–124. doi: 10.1177/1077559516635273 [DOI] [PubMed] [Google Scholar]
  33. Lang JM, Franks RP, Epstein C, Stover C, & Oliver JA (2015). Statewide dissemination of an evidence-based practice using Breakthrough Series Collaboratives. Children and Youth Services Review, 55, 201–209. doi: 10.1016/j.childyouth.2015.06.005 [DOI] [Google Scholar]
  34. Lang JM, Randall KG, Delaney M, & Vanderploeg JJ (2017). A model for sustaining evidence-based practices in a statewide system. Families in Society, 98(1), 18–26. doi: 10.1606/1044-3894.2017.5 [DOI] [Google Scholar]
  35. Markiewicz JM, Ebert L, Ling D, Amaya-Jackson L, & Kisiel C (2006). Learning Collaborative Toolkit In National Center for Child Traumatic Stress. Los Angeles, CA. [Google Scholar]
  36. Merikangas KR, He J-P, Brody D, Fisher PW, Bourdon K, & Koretz DS (2010). Prevalence and treatment of mental disorders among US children in the 2001–2004 NHANES. Pediatrics, 125(1), 75–81. doi: 10.1542/peds.2008-2598 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Nadeem E, Olin SS, Hill LC, Hoagwood KE, & Horwitz SM (2013). Understanding the components of quality improvement collaboratives: A systematic literature review. The Milbank Quarterly, 91(2), 354–94. doi: 10.1111/milq.12016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Nadeem E, Olin SS, Hill LC, Hoagwood KE, & Horwitz SM (2014). A literature review of learning collaboratives in mental health care: Used but untested. Psychiatric Services (Washington, D.C.), 65(9), 1088–99. doi: 10.1176/appi.ps.201300229 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Proctor EK, Silmere H, Raghavan R, Hovmand P, Aarons GA, Bunger A, … Hensley M (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health, 38(2), 65–76. doi: 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Saunders BE, & Adams ZW (2014). Epidemiology of traumatic experiences in childhood. Child and Adolescent Psychiatric Clinics of North America, 23(2), 167–84, vii. doi: 10.1016/j.chc.2013.12.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Saunders BE, & Hanson RF (2014). Innovative methods for implementing evidence-supported interventions for mental health treatment of child and adolescent victims of violence In Reece RM, Hanson RF, & Sargent J (Eds.), Treatment of child abuse: Common ground for mental health, medical, and legal practitioners (2nd ed., pp. 235–245). Baltimore, MD: Johns Hopkins University Press. [Google Scholar]
  42. Scheirer MA, & Dearing JW (2011). An agenda for research on the sustainability of public health programs. American Journal of Public Health, 101(11), 2059–2067. doi: 10.2105/AJPH.2011.300193 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Shelton RC, Cooper BR, & Wiltsey-Stirman S (2018). The sustainability of evidence-based interventions and practices in public health and health care. Annual Review of Public Health, 39(1), 55–76. doi: 10.1146/annurev-publhealth-040617-014731 [DOI] [PubMed] [Google Scholar]
  44. Steiger JH (2004). Beyond the F test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis. Psychological Methods, 9(2), 164–82. doi: 10.1037/1082-989X.9.2.164 [DOI] [PubMed] [Google Scholar]
  45. Swain K, Whitley R, McHugo GJ, & Drake RE (2010). The sustainability of evidence-based practices in routine mental health agencies. Community Mental Health Journal, 46(2), 119–129. doi: 10.1007/s10597-009-9202-y [DOI] [PubMed] [Google Scholar]
  46. West SG, Cham H, & Liu Y (2014). Causal inference and generalization in field settings: Experimental and quasi-experimental designs In Reis HT & Judd CM (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 49–80). New York, NY: Cambridge University Press. [Google Scholar]
  47. Wiltsey-Stirman S, Kimberly J, Cook N, Calloway A, Castro F, & Charns M (2012). The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implementation Science, 7(1), 17. doi: 10.1186/1748-5908-7-17 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Woltmann EM, Whitley R, McHugo GJ, Brunette M, Torrey WC, Coots L, … Drake RE (2008). The role of staff turnover in the implementation of evidence-based practices in mental health care. Psychiatric Services, 59(7), 732–7. doi: 10.1176/ps.2008.59.7.732 [DOI] [PubMed] [Google Scholar]
  49. Wuensch KL (2012). Using SPSS to obtain a confidence interval for Cohen’s d. Retrieved from http://core.ecu.edu/psyc/wuenschk/SPSS/CI-d-SPSS.pdf
  50. Zwarenstein M, Goldman J, & Reeves S (2009). Interprofessional collaboration: Effects of practice-based interventions on professional practice and healthcare outcomes In Zwarenstein M (Ed.), Cochrane Database of Systematic Reviews (Vol. 3). Chichester, UK: John Wiley & Sons, Ltd. doi: 10.1002/14651858.CD000072.pub2 [DOI] [PubMed] [Google Scholar]
