Author manuscript; available in PMC: 2019 Jul 1.
Published in final edited form as: Behav Ther. 2017 Dec 15;49(4):509–524. doi: 10.1016/j.beth.2017.12.004

A mixed methods study of the stages of implementation for an evidence-based trauma intervention in schools

Erum Nadeem 1, Lisa Saldana 2, Jason Chapman 2, Holle Schaper 2
PMCID: PMC6020145  NIHMSID: NIHMS929191  PMID: 29937254

Abstract

A mixed methods study was conducted to examine the implementation process of 26 urban school-based mental health clinics that took part in a training and implementation support program for an evidence-based school trauma intervention. The implementation process was observed using the Stages of Implementation Completion (SIC) measure. Qualitative interviews were conducted with clinic leaders in order to gain insight into clinic processes related to the SIC. Results showed that almost all of the clinics engaged in some activities related to pre-implementation (engagement, feasibility, and readiness), but only 31% of the sites formally started delivering the program to youth. Completing more Pre-Implementation activities, particularly those related to readiness, predicted program start-up. Qualitative analysis comparing clinics that implemented the program to those that did not revealed critical differences in decision-making processes, leadership strategies, and the presence of local champions for the program. This study documents the patterns of clinic behavior that occur as part of large-scale training efforts, suggests some unique challenges that arise in schools, and highlights the importance of engaging in particular implementation activities (i.e., readiness planning, stakeholder consensus and planning meetings) as part of program start-up. Findings indicate that pre-implementation and readiness-related consultation should be employed as part of broad-scale implementation and training efforts.

Keywords: Implementation, school mental health, trauma, evidence-based practice


Over the past three decades, there has been substantial research identifying effective interventions for children with mental health needs. There now exists a range of evidence-based treatments that include classroom-based interventions and parent training for disruptive behavior disorders, cognitive behavioral treatments for internalizing problems (depression, anxiety, traumatic stress), medications, and combined interventions (Hoagwood et al., 2014; Kendall, 2012). There also are empirically-supported practices to enhance service quality, such as standardized screening and assessment tools, measurement feedback systems, and engagement strategies (Bickman, 2008; Burns & Hoagwood, 2004; Hoagwood et al., 2014; McClellan & Werry, 2003; Silverman & Hinshaw, 2008). Within this landscape, states, cities, and localities are prioritizing closing the gap between research and practice by actively supporting the use of evidence-based programs and practices (EBPs) for youth and families served in the public sector.

In fact, over twenty states are actively providing training and implementation support to their mental health providers to promote the use of EBPs and medication practices (Bruns et al., 2008; Essock et al., 2009). An increasing number (e.g., CA, CO, HI, MI, OH, CT) also have contracts with purveyors to assist their agencies in implementation of specific EBPs (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). Models used to disseminate EBPs range from free training with no monetary incentives or penalties, to specific service provision contracts where agencies respond to requests for proposals. These initiatives often include implementation supports, such as expert clinical consultation for clinicians, leadership support, data collection and accountability, and fidelity monitoring, all of which have been repeatedly recognized in the literature as critical elements in successful program implementation (Aarons et al., 2011; Damschroder et al., 2009; Wandersman et al., 2012). Despite the recent system-level attention to EBPs, there is still much to be learned about the key processes for successful implementation, including the necessary steps to effectively transport EBPs into usual care, and how to measure if they have occurred well (Aarons et al., 2011; Horwitz & Landsverk, 2010; Saldana et al., 2012).

One predominant implementation model is for the state or other local entities to offer free training to mental health clinics, which is often followed up with expert consultation to clinicians, clinic leadership, and supervisors. Many of these training programs set expectations for program completion (e.g., attendance on consultation calls, completion of a certain number of cases, tracking clinical outcomes) (Gleacher et al., 2011; Hoagwood et al., 2014). The current study examined implementation process and outcomes in an EBP training initiative that combined free training with ongoing clinical and leadership consultation to support the implementation of an evidence-based trauma program delivered by school mental health clinics.

Implementation Process

In studying implementation process, there is recognition that implementation involves a complex set of activities and interactions, including planning, training, quality assurance, and exchanges among developers, system leaders, front line staff, and consumers (Mittman, 2011; Saldana & Chamberlain, 2012). There is also consensus that implementation is likely a non-linear, recursive process of stages that can take two or more years (Blase, 2010; Fixsen & Blase, 2009), with achievement of competency strongly influenced by the implementation methods selected throughout the process (Mihalic et al., 2004). These complexities pose unique challenges to measuring implementation processes, particularly when interventions from one system (e.g., mental health) are integrated within a different system (e.g., schools). Activities undertaken in the earliest stages of implementation around feasibility and readiness planning may be particularly critical to successful program start-up and longer-term success in cross-system efforts. The current paper builds on research focused on understanding which steps in the implementation process are essential to effectively transport EBPs within large-scale training initiatives (Saldana, 2014), and extends this research by observing the process of EBP implementation within co-located school-based mental health services.

Measuring implementation processes

The Stages of Implementation Completion (SIC; Saldana, Chamberlain, Wang, & Brown, 2012) is a measure of implementation process and milestones first developed as part of a randomized controlled implementation trial (Chamberlain et al., 2011; Chamberlain, 2010). That trial compared two implementation strategies to support program start-up of an EBP for youth referred to out-of-home care with severe behavior problems, by non-early adopting counties in two states (Brown et al., 2014). Mapping onto the EPIS implementation framework (Exploration, Preparation, Implementation, Sustainment; Aarons et al., 2011), the SIC defines the completion of implementation activities across eight stages (Engagement through Competency) that span three phases of implementation: pre-implementation, implementation, and sustainability. The SIC is used to monitor when implementation activities are completed by a newly adopting organization. Outcomes consistently suggest that pre-implementation behavior, as measured by the SIC, predicts successful program implementation (Saldana et al., 2015).

To fill the gap of limited tools available to assess implementation process, the SIC has been adapted for a range of practices including mental health, school prevention programs, primary care interventions, substance abuse treatments, and large state system initiatives (Chamberlain et al., 2016; Saldana, 2017; Saldana et al., 2015). SIC results repeatedly demonstrate that when there is variability among sites attempting to implement (i.e., when there is not a prescribed timing of implementation activities), pre-implementation and implementation behavior predict successful program start-up and the achievement of competency in program delivery (Saldana et al., 2015). As such, in addition to comparing the effectiveness of different implementation strategies, the SIC has been used in implementation research to monitor implementation progress, assess what organizations do and do not do during their program implementations, and guide implementation efforts. The current study uses an adapted version of the SIC as part of a mixed methods study to observe the implementation of an EBP for treating students exposed to trauma in the schools.

School-Based Implementation

The current project examines the implementation activities of school-based mental health clinics taking part in a free training program offered by the New York City School-based Mental Health (NYC SBMH) Committee. Due to a shared objective among their membership, agency representatives chose to address the impact of trauma by offering training in the Cognitive Behavioral Intervention for Trauma in Schools (CBITS) (Jaycox, 2003; Stein et al., 2003). CBITS is a 10-session group intervention, with 1-3 individual sessions focused on trauma narrative work. The intervention shares core components with other evidence-based treatments for traumatized youth, and is often considered by school districts to be a Tier 2 (targeted) or Tier 3 (intensive) intervention within multi-tiered systems of supports (Nadeem et al., 2014). CBITS had particular appeal to the NYC SBMH Committee in that it is tailored to the school context, has a track record of use with diverse populations (Jaycox, 2003; Stein et al., 2003; Stein et al., 2002), and has demonstrated impact on both academics and symptoms (Kataoka et al., 2011).

Within the EBP implementation context, school-based mental health services provide unique opportunities and challenges. Due to the high levels of unmet need for mental health care among children in the community (Kataoka et al., 2002; Knopf et al., 2008) and the fact that 70-80% of children that receive any mental health services are served at school, schools are touted as ideal settings for access to services. However, being in an educational setting poses complexity for school-based mental health clinics, as their host settings face multiple, competing priorities, and are under pressure to meet academic performance thresholds. Co-located mental health programs are therefore faced with challenges related to budget cuts, staff and leadership turnover, and competing initiatives from two different settings – the schools and their larger clinic or agency (Owens et al., 2014). Financially, many of the daily activities that school-based mental health providers perform in order to effectively align mental health and educational goals, and meet the needs of the school community, are not readily reimbursable through Medicaid billing. This includes coordination of logistics, outreach activities, and ongoing communication with parents and school personnel (Cammack et al., 2014; Owens et al., 2014).

In addition to these common school mental health service challenges, practitioners implementing CBITS may need to raise trauma awareness among principals and teachers in order to build buy-in for a new program. Some may also need to deploy new screening and consent procedures, and identify school partners who can provide critical logistical support throughout the process. Existing research on CBITS suggests that these challenges are mitigated by leadership support, alignment of CBITS with school needs and priorities, existence of local champions, connections between CBITS practitioners, data monitoring, and ongoing implementation support to clinicians (Langley et al., 2010; Nadeem et al., 2011).

To date, the majority of CBITS implementation efforts have been taken on by individual sites (e.g., school districts, community mental health agencies) pursuing trauma services. There has been no study of CBITS implementation within the context of a large-scale training initiative for school-based mental health clinics, and limited studies of school-based mental health EBP implementation in general (Owens et al., 2014). The NYC SBMH CBITS training initiative presented a unique opportunity to examine implementation processes and outcomes using the SIC within two important contexts: co-located school-based mental health clinics, and large-scale training and implementation initiatives. Using mixed quantitative and qualitative methods, the primary goals of the study were to 1) characterize the implementation activities and processes that occur within mental health clinics participating in a large-scale school mental health training effort, 2) determine which processes relate to initial implementation outcomes, and 3) utilize qualitative interview data to provide insights into the dynamic implementation processes that may underlie clinics' implementation behaviors as measured by the SIC.

Method

Participants

Participants included 26 NYC school-based mental health clinic sites, which provide co-located school-based mental health services. The clinics represented 26 distinct community mental health agencies. On average, the clinics had 15.9 (SD = 14.18) full time equivalent (FTE) staff, and 75.02% of the clients received Medicaid (SD = 18.3). There was no cap on the number of providers a clinic could send to the CBITS training; the number ranged from one to seven providers per site (M = 2.72, SD = 1.62), for a total of 68 providers. Across the 26 sites, providers were 87% female, and predominantly licensed clinical social workers (LCSWs; 76.3% LCSWs, 18.6% psychologists, 3.4% psychiatrists, and 1.7% school psychologists). Fifty-two percent of the providers were White, 28.3% Latino, 8.3% African American, 6.7% Asian American, and 5% biracial or other. The majority of providers (65%) provided services in one school (M = 1.91, SD = 2.33), and they provided 26.67 direct service hours, on average, per week in the schools (SD = 13.16).

In addition to our quantitative analysis focused on the 26 clinic sites, the study used a sequential, explanatory design in which one representative from each clinic site took part in a qualitative interview designed to provide additional explanatory information about each site's implementation process and outcomes on the SIC. Qualitative and quantitative data were analyzed separately and integrated at the interpretation stage (Creswell et al., 2011; Creswell & Plano Clark, 2007). At each site, the point person identified for the project was invited to take part in the interview. If that person felt unable to speak to the implementation process, they nominated an alternate. Participants included 18 program directors/supervisors and eight lead clinicians. This sample was 80% female, 61.5% White, 11.5% Black, and 27% Latino. The average age of the interviewees was 44.5 (SD = 12.8). Sixty-five percent were LCSWs, 23% had doctoral degrees, and 12% were child psychiatrists.

Procedures

The NYC SBMH Committee's role is to support NYC community mental health clinics that offer co-located mental health services in schools, and to serve as an advocacy group for the needs of these providers. The group meets bimonthly in person with representation from the school-based mental health clinics, the New York City Department of Education, the New York City Department of Health and Mental Hygiene, and the New York State Office of Mental Health. Over the years, the committee has partnered with content experts, treatment purveyors, and others to provide training and support tailored to the needs of school-based providers.

As noted above, there was recognition among the membership of the committee, as well as interest from state and city policymakers, in providing access to high quality evidence-based trauma services for children and adolescents. In response to this need and interest, the NYC SBMH Committee partnered with the first author to sponsor a series of free trainings and consultation in the Cognitive Behavioral Intervention for Trauma in Schools (CBITS) (Jaycox, 2003). The first author is a national CBITS expert trainer and has extensive experience training and consulting on a range of child-focused EBPs across New York State. The training program and readiness requirements were modeled on other programs that have been offered by the New York State Office of Mental Health, and were co-developed with state leaders and the NYC SBMH Committee, with consultation from the national CBITS training team (Gleacher et al., 2010; Hoagwood et al., 2014; Olin, Nadeem, et al., 2015). Presentations and written materials on CBITS were provided at the Committee meetings and via email to all participating sites. The materials delineated essential readiness requirements for implementing CBITS (i.e., partnership with school sites, strategies for identifying students to screen and assess for traumatic stress, commitment of staff time) and provided sample timelines and billing guidance.

The training program was aligned with empirically and theoretically derived predictors of successful implementation. Attention was paid to the factors from the exploration and adoption phases of implementation that would allow sites to understand essential commitments for the CBITS program (Aarons et al., 2011). The need for multi-level involvement of staff (Fixsen et al., 2013), expert consultation for both clinicians and leadership that extended beyond the initial workshop training (Davis et al., 1995; Ferlie & Shortell, 2001; Nadeem et al., 2013; Raghavan et al., 2008), and ongoing engagement of the city leadership and the purveyors (Aarons et al., 2011; Fixsen et al., 2013) were also incorporated. Specifically, interested sites were asked to make several commitments in order to participate in the training program: identify a point person/lead person at the site, have the ability to implement one group at their site over the academic year, and participate in biweekly expert clinical phone consultation for six months. No specific requirements were made with respect to fidelity and outcome monitoring. However, clinics were strongly encouraged to track client outcomes and support clinician fidelity using methods that suited their clinic procedures. The first author or her research staff answered all questions from sites via email and phone and encouraged sites to consider their readiness for implementation and participation in the training program. Forty-five sites were invited to participate in the CBITS training program, and twenty-six signed up by completing an application that included written commitments for each of the readiness requirements.

The training program consisted of a 6-hour web-based training that clinicians completed individually (certificates of completion were collected), and a one-day in-person training in CBITS that included role-plays and opportunities to practice clinical skills for each of the CBITS sessions. In order to encourage multi-level involvement, clinicians, their supervisors, and clinic leadership were invited to attend. Clinics were given three options for training dates, which all occurred in November. The training was followed by six months of biweekly clinician phone calls led by the lead authors, and monthly leadership calls. The leadership calls included both the lead authors and representatives from the SBMH Committee leadership in order to encourage communication between organizations, intervention experts, and city leaders. Clinician calls focused on review of CBITS components, discussion of emerging clinical issues, and problem-solving implementation challenges. Leadership calls included discussion of issues clinicians were facing, strategies for buy-in, strategies for addressing billing issues, discussion of fidelity, and outcome monitoring. Clinics were given access to all tools that are typically used during CBITS implementation (e.g., clinician self-rating forms, independent rater fidelity checklists, pre- and post- symptom and functioning measures). They also had access to sample progress notes, consent forms, and outreach materials (e.g., letters for parents, teachers, and principals, trauma education PowerPoint presentations, videos, and written materials) that could be tailored to their context. These materials were housed free of charge on the CBITS website. In addition to scheduled calls, individual site consultation was available as needed. This support was provided by both the first author and the SBMH Committee leadership, and tended to address issues related to buy-in from school administration and logistical support or billing. Additional consultation was provided upon request during the following summer and academic year.

Measures

Implementation Process

The SIC is comprised of eight stages, with sub-activities within each stage that range from engagement with the developers to development of organizational competency in delivery of the EBP (see Table 1). The SIC monitors completion of specific implementation activities within each stage and phase, yielding three scores: (1) Duration, the time taken for completion of implementation activities; (2) Proportion, the percentage of activities completed; and (3) Final Stage, the furthest point in the implementation process achieved. The SIC successfully predicts implementation outcomes, including successful program start-up (Brown et al., 2014). Pre-Implementation (Phase 1) includes Stages 1-3. Implementation (Phase 2) includes Stages 4-7. Stage 8 measures the beginning of the sustainment phase of implementation, with activities related to achieving competency in EBP delivery.

Table 1. SIC Stages with Sample Activities from CBITS Adaptation.
Phase / Stage / Sample Activities (date of occurrence)

Pre-implementation
  1. Engagement: Interest indicated
  2. Consideration of Feasibility: Point person nominated; Stakeholder meeting #1 (buy-in, feasibility & fit)
  3. Readiness Planning: Staff sequence/timeline/hire plan review; Referral/eligibility process approved

Implementation
  4. Staff Hired and Trained: First staff hired or assigned; Clinical/program training
  5. Fidelity Monitoring in Place: Determination of fidelity monitoring or QA methods; First accessed fidelity support tools
  6. Services and Consultation Begin: First consult call; First session of first group per agency
  7. Ongoing Services, Consultation, Fidelity, Feedback: Met threshold of scheduled calls

Sustainability
  8. Competency (Certification): Showed evidence of measuring pre-post symptoms

Note. SIC = Stages of Implementation Completion.

The current study included an adaptation of the SIC to measure the implementation of the CBITS intervention in the context of the NYC initiative. In collaboration with the SIC development team, the implementation steps for CBITS as part of this initiative were defined, and activities within stages were adapted to measure this process. The SIC adaptation process followed the typical procedures, described in detail by Saldana and colleagues (2017), utilized by the SIC team to operationalize and measure implementation efforts, including detailing what was expected for "completion" of implementation activities. A total of 36 implementation activities were defined across the eight Stages. Reliability of the CBITS SIC activities was estimated using a dichotomous Rasch measurement model (Smith, 2001). Rasch separation reliability, interpreted similarly to internal consistency, was .89 for activities across the three phases and .68 for the activities in the Pre-Implementation phase. For Pre-Implementation, this indicates that the activities are suitable for discriminating two distinct levels of implementation. Table 1 provides sample stages and activities. SIC data were collected in real time by the first author and her research team. The team logged dates for each SIC activity based on when the site reported, during phone calls and email communication (at least monthly), that the activity had occurred. The dates that activities were completed (not simply started) were recorded. Proportion scores were calculated as the number of site-driven activities completed divided by the total number of site-driven activities possible. Duration scores were calculated as the time elapsed from the date of the first site-driven activity to the date of the last site-driven activity. Scores are calculated for each Stage as well as for each Phase. Data were collected for the academic year in which the study was conducted, and collection continued for an additional year because some sites noted that they ran out of time to start a group in the first academic year.
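To make the Proportion and Duration scoring concrete, the brief Python sketch below computes both scores from a log of activity completion dates. The activity names, dates, and data structure are hypothetical illustrations, not the study's actual data or the official SIC scoring procedures.

```python
from datetime import date

# Hypothetical activity log for one clinic site: activity -> completion date (None = not completed).
# The real CBITS SIC defined 36 activities across eight stages; only a few are mocked up here.
site_log = {
    "interest_indicated": date(2013, 9, 5),
    "point_person_nominated": date(2013, 9, 20),
    "stakeholder_meeting_1": None,
    "staff_timeline_review": date(2013, 10, 15),
    "referral_process_approved": None,
}

def proportion_score(log):
    """Share of site-driven activities completed (completed / total possible)."""
    completed = sum(1 for d in log.values() if d is not None)
    return completed / len(log)

def duration_score(log):
    """Days elapsed from the first to the last completed site-driven activity."""
    dates = sorted(d for d in log.values() if d is not None)
    return (dates[-1] - dates[0]).days if len(dates) >= 2 else 0

print(f"Proportion: {proportion_score(site_log):.2f}")   # 0.60 for this mock log
print(f"Duration (days): {duration_score(site_log)}")    # 40 for this mock log
```

In the study itself, these scores were computed separately for each Stage and each Phase, using only site-driven activities.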

Implementation Outcome

The primary outcome for the study was program start-up, as indicated by whether or not the clinic was able to begin one CBITS group. Because CBITS is a time-limited school-based intervention, it is extremely rare that a group that is started does not finish. This outcome can be conceptualized as initial implementation under the adoption category according to the guidelines established by Proctor and colleagues (2011).

Qualitative Interviews

As noted above, qualitative interviews were completed in order to expand our understanding of the implementation processes undertaken by each clinic site. Interviews were conducted after the first year of CBITS implementation; sites that reported that they were attempting to implement CBITS in Year 2 were interviewed last. Of specific interest was being able to characterize clinics' experiences and decision-making processes across each of the stages of the SIC. The interview was co-developed by the NYC research team, the NYC SBMH Committee, and the SIC team using an iterative process involving both the research and community partners. Once this team established the interview protocol, additional feedback was obtained from practitioners familiar with CBITS and school-based implementation issues. The final questions were organized across stages of the SIC (engagement, consideration of feasibility, readiness planning, staffing and training, service provision, fidelity monitoring, and competency), which aligns with the EPIS framework's phases and multi-level determinants of implementation success (e.g., organizational support, leadership support, innovation-setting fit; Aarons et al., 2011). Sample questions include: What was it about the CBITS program that piqued your interest to learn more?; What steps or process did you go through at your agency or clinic BEFORE signing up for the training in order to determine if CBITS would be feasible or a good fit for you?; What were the factors that went into considering whether it was feasible?; How did you decide which staff would be trained, what schools, eligibility criteria for the program, etc.?; To what extent was your agency or clinic leadership involved in your initial decision-making and readiness process? For clinics that implemented, additional questions were asked about how barriers were surmounted. For those that did not implement CBITS, both barriers and the strategies attempted to overcome them were elicited, as well as factors that led to a decision not to implement. In addition, feedback about CBITS and the consultation calls was sought, and plans and experience with fidelity monitoring, competency, and outcomes were queried. A research project manager with training in qualitative interviewing conducted all interviews.

Analysis

Quantitative analyses, focused on group differences and frequencies, were conducted using SPSS. The primary question regarding predictors of program implementation was first addressed through bivariate analyses comparing implementing and non-implementing sites on the duration of, and number of, activities completed within each stage. This was followed up with a logistic regression in order to identify the relative contribution of these variables in predicting program start-up.
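The study reports using SPSS; as a rough illustration of the same analytic steps (group comparisons followed by logistic regression), the sketch below uses Python with scipy and statsmodels instead. The data frame, values, and variable names are invented placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical site-level data (one row per clinic); all values are illustrative only.
df = pd.DataFrame({
    "started": [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],          # began a CBITS group?
    "feasibility_acts": [5, 5, 4, 5, 4, 5, 4, 4, 5, 3, 4, 4],  # Stage 2 activity counts
    "readiness_acts": [7, 6, 5, 7, 4, 5, 3, 4, 6, 4, 5, 3],    # Stage 3 activity counts
})

# Bivariate comparison: do implementers complete more readiness activities?
impl = df.loc[df["started"] == 1, "readiness_acts"]
non_impl = df.loc[df["started"] == 0, "readiness_acts"]
t, p = stats.ttest_ind(impl, non_impl)
print(f"Readiness activities: t = {t:.2f}, p = {p:.3f}")

# Logistic regression: stage activity counts jointly predicting program start-up.
X = sm.add_constant(df[["feasibility_acts", "readiness_acts"]])
fit = sm.Logit(df["started"], X).fit(disp=False)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```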

Qualitative transcripts were coded using Atlas.ti (Muhr, 1998). Using a modified grounded theory approach, preliminary codes were derived from the interview guide (which was based on the SIC stages), from empirically supported multilevel predictors of implementation outcomes in the literature on which the SIC is based (e.g., Aarons et al., 2011), and from emergent topics in the transcripts themselves. Through an iterative process, the authors discussed and refined a list of codes by independently coding transcripts and discussing code definitions and application (Bernard, 2006). First, two primary coders (the lead author and a research team member) conducted open coding of transcripts, guided by the semi-structured interview and the implementation processes that have been identified in the literature. The research team then jointly generated a working code list. Coders then independently coded half of the transcripts and met with the research team to discuss expanding, collapsing, or eliminating codes until there was a refined list of mutually agreed upon codes. Once the final code list was agreed upon, coding of all transcripts was conducted by one primary coder (the trained research team member) and reviewed by the research team. Coding ambiguities were discussed until consensus was reached.

The final coded themes centered on explicating key processes pertaining to each of the SIC phases. These included rationale for signing up for CBITS training (fit with client and clinic needs vs. general interest); leadership involvement in decision making (thoughtful and involved vs. quick); leadership involvement in implementation (guidance vs. no major guidance); stakeholder involvement (school leaders, clinic leaders, families); feasibility considerations; billing issues; having a local champion for CBITS; and feedback on the intervention (ease of use, fit with the setting). Codes also included reasons for not implementing CBITS (preference for other EBPs, time, clinic or staff instability, lack of buy-in).

Results

Patterns of Clinic Behavior on the SIC

Descriptive information

Table 2 depicts clinic site behavior across each implementation Stage measured by the SIC. For Stages 1, 2, and 4 (Engagement, Feasibility, Staff Trained), all sites completed at least one activity. For Stage 3 (Readiness), the vast majority of sites completed at least one activity. In Stages 5 through 8, the percentage of sites that completed any activities varied: 46% of sites began Services and Consultation, 27% engaged in Ongoing Services and Consultation, and 15% engaged in any activities related to the establishment of Adherence and Fidelity Monitoring or to Competency.

Table 2. Patterns of Clinic Behavior on the SIC (N = 26).
Phase Stage % sites with any activities in this stage % activities completed within each stage
Pre-implementation 1. Engagement 100% 100%
2. Feasibility 100% 91%
3. Readiness 96% 70%
Implementation 4. Staff Trained 100% 82%
5. Adherence & Fidelity Monitoring Established 15% .13%
6. Services & Consultation Begin 46% .35%
7. Ongoing Services & Consultation 27% .15%
Sustainability 8. Competence 15% .08%

As shown in Table 2, either all or the vast majority of activities within Stages 1 and 2 (Engagement and Feasibility) were completed. Eighty-two percent of the activities associated with Staff Training were completed, followed by 70% of the Readiness activities. However, less than 1% of the activities in some of the later Stages were completed. Organized by the broad SIC Phases, the average percentage of activities completed by each site within Phase 1 (Pre-Implementation: Stages 1-3) was 82% (SD = 13%; out of 14 activities). The average percentage of Phase 2 activities (Implementation: Stages 4-7) completed was 38% (SD = 26%; out of 15 activities). Very few Phase 3 activities (early Sustainment: Stage 8) were completed (M = .08%; SD = .23%).

In order to further explore the types of activities sites engaged in, completion rates of each of the SIC activities were examined. Clear patterns emerged indicating that sites frequently completed tasks required by the SBMH Committee in order to take part in the training program. Activities that required engagement of clinic staff, clinic leadership, and school partners (e.g., stakeholder meetings, staffing plans) were less frequently observed. Table 3 provides a summary of site engagement in illustrative activities across the first four stages of the SIC. These particular stages were selected because of the low frequency of activities in later stages. In Stage 1 (Engagement), all sites completed all engagement activities, as these were requirements for participation in the training program. In Stage 2 (Feasibility), far fewer sites held stakeholder meetings (54%) than indicated their commitment to running a CBITS group (100%). In Stage 3 (Readiness), there was more variability; however, a similar pattern emerged in which almost all sites completed the web training, but relatively few sites engaged in additional planning activities with their teams (58%) or planned for CBITS program referral and eligibility procedures (27%). Finally, in Stage 4 (Staff Training), all sites took part in training, but only 62% of those sites reported formally assigning clinicians to run a CBITS group.

Table 3. Completion rates for select SIC Activities (Stages 1-4).
SIC Stage Selected Activities % of sites that completed activity
1. Engagement Interest indicated 100%
2. Feasibility Committed to running one group* 100%
Stakeholder Meeting (buy-in, feasibility, fit) 54%
3. Readiness Presented web training certificates* 96%
Staff assignment/timeline/planning review 58%
Stakeholder Meeting 2 (planning) 42%
Program eligibility/referral reviews 27%
4. Staff Trained Clinician training 100%
Therapist assigned 62%
* Required by the School-Based Mental Health Committee

Predictors of Program Start-up

The next research question focused on the rate of program start-up and predictors of program start-up (i.e., starting one CBITS group) based on Phase 1 (Pre-Implementation) activities. Overall, 31% (n = 8) of sites started CBITS groups. Group difference comparisons were made for the number of activities completed and the duration of time spent in Phase 1 overall, and in each of the first four SIC stages. Although Stage 4 is not part of Phase 1, it was included in this analysis because it represents the assignment of staff to duties related to CBITS. Consistent with previous research, sites that started CBITS completed more Phase 1 (Pre-Implementation) activities than those that did not, t(24) = 4.32, p < .001 (M = 13.25, SD = 1.4 vs. M = 10.67, SD = 1.41). No group differences were found in the duration of time spent in Phase 1. There were also no group differences in SIC activities or our primary outcome based on clinic demographics. Examination of Stages 1 through 4 revealed significant group differences in the duration of time spent in Stage 4 (Staff Hired and Trained), such that implementers took less time than non-implementers. There also were trends suggesting that the implementing sites spent more time in Stage 1 (Engagement) and Stage 2 (Feasibility) than the non-implementing sites. With respect to number of activities, implementers completed more Stage 2 (Feasibility) and Stage 3 (Readiness) activities than non-implementers (see Table 4).

Table 4. Mean differences in duration and number of activities completed between implementing and non-implementing sites for SIC Stages 1 through 4.
Implementers (n = 8) M (SD)   Non-Implementers (n = 18) M (SD)   t(24)
Duration (days)
1. Engagement 230.75 (250.81) 117.89 (45.02) -1.89+
2. Feasibility 29.88 (31.12) 9.72 (24.37) -1.79+
3. Readiness 56.50 (34.23) 62.72 (133.33) 0.13
4. Staff Hired/Trained 43.38 (33.66) 121.33 (79.33) 2.65*
# Activities completed
1. Engagement 2 (0) 2(0) n/a
2. Feasibility 4.88 (.35) 4.39 (.50) -2.47*
3. Readiness 6.38 (1.06) 4.28 (1.13) -4.45***
4. Staff Hired/Trained 3.63 (.74) 3.11 (.83) -1.50
* p < .05, *** p < .001, + p < .10

To assess the relative contribution of these activities in predicting program start-up, a logistic regression analysis was conducted to determine whether the total number of activities in each of Stages 2 through 4 predicted program start-up. Stage 1 was not included because all sites completed all activities. Results demonstrated that completing more Stage 3 (Readiness) activities predicted CBITS program start-up (OR = 8.86, p < .05; see Table 5). It should be noted that because the confidence interval for this estimate did cross one (although barely), the odds ratio should be interpreted with caution (Szumilas, 2010).
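For readers checking the figures in Table 5, the odds ratio and its interval follow directly from the reported coefficient and standard error for the Readiness predictor; the reconstruction below assumes a standard Wald-type 95% interval, which the paper does not explicitly state.

\[
\mathrm{OR} = e^{B} = e^{2.18} \approx 8.86, \qquad 95\%\ \mathrm{CI} \approx e^{\,B \pm 1.96 \times SE(B)} = e^{\,2.18 \pm 1.96 \times 1.12} \approx (0.99,\ 79.9)
\]

The lower limit sitting just below one is what motivates the cautious interpretation noted above.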

Table 5. Total number of activities in Stages 1 through 4 predicting CBITS start-up.
Odds Ratio B (SE) 95% C.I.
Constant .001 -7.50 (6.48)
# Stage 2 Activities (Feasibility) .49 -.723 (2.21) .006, 36.96
# Stage 3 Activities (Readiness) 8.86* 2.18 (1.12) .99, 79.87
# Stage 4 Activities (Staff Trained) .59 -.52 (1.37) .041, 8.62
* p < .05

Qualitative Interviews

Qualitative results are organized around processes underlying the SIC activities that appeared to differentiate those who implemented CBITS (n = 8) from those who did not (n = 18). Very few themes were entirely present in one group and not the other; however, the themes presented highlighted qualitative differences in key processes. Table 6 provides a summary of key themes in sites that implemented CBITS and those sites that did not. These are organized across four broad themes: decision-making, leadership guidance and resources, clinician behavior and perceptions, and feedback on the intervention. Although feedback on CBITS itself did not differ across the groups, it was retained because ease of use and helpfulness are important to frontline clinicians (Damschroder et al., 2009; Panzano & Roth, 2006).

Table 6. Summary of major themes differentiating implementers and non-implementers.

Used CBITS (n = 8)

Decision-making
  • Thoughtful/involved; involved multiple staff members
  • Rationale for signing up related to fit (agency mission, client needs), and perceived need for EBPs and/or trauma services

Leadership and Resources
  • Stayed involved post-training
  • Guiding process
  • Providing staff with resources (e.g., time)
  • Billing/funding was addressed

Clinician behavior and perceptions
  • Individual, motivated clinicians exhibited "champion" behavior
      • Engaged school principals
      • Obtained support of clinic leadership

Feedback on the intervention
  • Sites perceived need for CBITS and liked the intervention
  • Sites with implementation experience discussed adaptations

Never started a CBITS group (n = 18)

Decision-making
  • Quick; involved fewer staff members
  • Sometimes via email
  • Rationale for signing up related to fit (agency mission, client needs), and perceived need for EBPs and/or trauma services
  • Rationale at some sites also reflected general interest

Leadership and Resources
  • Minimal involvement during active implementation
  • Many reported not knowing what happened

Clinician behavior and perceptions
  • Individual clinicians perceived barriers that were insurmountable
      • School or clinic buy-in
      • Billing
      • Priorities/caseload issues

Feedback on the intervention
  • Sites perceived need for CBITS and liked the intervention

Decision-making processes

The first broad domain in which sites tended to differ was in their descriptions of their decision-making process during the Engagement stage of the SIC. In the eight sites that successfully implemented their first CBITS groups, participants described a seemingly more thoughtful and involved process. One site described the process, “It had to go all the way to the people who sign off on the bills and invoices. So it was the program administration, Medical Director, and the Vice President of Services. Then it was selling them on it, and giving proposals. Then, they advocated up the chain to get approval.” Other sites described similar processes lasting from “several days” to “a few weeks” that involved discussions with frontline staff, supervisors, and leadership in the respective organizations. Another site described a “group of managers at different level” as being core in the decision-making process, and noted the importance of being “mission aligned.”

In contrast, sites that did not ultimately implement CBITS during the training program described decision-making processes that seemed either quick, or that involved fewer clinic personnel. For example, one participant described, "Basically, it was his decision. And then we followed through." Another supervisor noted, "All that it entails was just [name of another administrator] and I having a conversation and deciding if this would be a good fit. Then basically telling our clinicians that we're sending them to the training." Another noted, "I got an email about the trainings, and I asked the program supervisors who to send." The process was often undertaken in a couple of days, and sometimes was conducted via email.

Sites across both categories articulated a rationale for signing up for the training that was related to innovation-setting fit and/or a perceived need for EBP training. However, only sites that did not implement CBITS described general interest, meaning that they signed up for the training based on a broad sense that it could be helpful or interesting for their clinicians. Related to innovation-setting fit, the majority of sites made reference to a need for trauma-focused services. One clinician articulated the level of need, "Where my school is, everybody pretty much had trauma. I took my whole caseload and I gave them all the screenings regardless of age or diagnosis. Every one of my patients came up as positive based on the criteria for this." Another noted the same need and pointed to the ease of use of the intervention, "We do a lot of trauma. It seemed pretty easy to implement. It was laid out nicely, so I could run it in my school without a huge amount of legwork." Another site noted the appeal of the evidence base for CBITS, "This was sort of a nice marriage of evidence-based trauma work that I knew had good both theoretical and research grounding, but now looking at how to apply it into a school environment." Of note, sites that implemented CBITS often linked their rationale to the decision-making process, and none of them noted that they signed up out of general interest. One site that did not implement the model shared a perspective that is representative of general interest, "I think the opportunity to be part of any trainings is enticing for our staff. We don't always get that. Something that was specific to schools was interesting to us as well."

Leadership guidance and resources during implementation

Another area in which there was differentiation between implementers and non-implementers related to leadership involvement and the resources provided to clinicians. Sites that implemented CBITS tended to have leadership that guided implementation and/or provided resources. In these cases, leadership helped determine who to assign to run CBITS groups, sometimes by selecting schools that were most likely to buy in, and that had service structures already in place. One administrator described, "We had already had partnerships with these schools. They knew how we worked and wanted groups. We're already pulling kids out of their classrooms. We didn't anticipate there being any problems." Guidance and support also came in the form of leadership involvement in helping clinicians to secure buy-in from the schools, and in encouraging them to hold stakeholder meetings. A clinic administrator described the process, "We went over and around with calls and emails going back and forth almost on a daily basis, and a lot of meetings. And me visiting sites and meeting with the clinicians." Another administrator discussed the importance of having systems in place, "We have been running clinics in these schools for many years. We already know when we can get kids out or not and have access to children's schedules. Because we have medical and mental health and dental clinics in the schools, we know exactly what to do and how to bill by now." Another supervisor talked about having involvement in recruitment strategies, "We had a team meeting and discussed ways to handle getting parental consents. Our ultimate decision was a letter home to all fifth grade parents, letting them know that we'd be doing an initial workshop in the class." Another salient resource was leadership support related to time, "We just made a commitment. Our agency has a commitment to training. We have the flexibility to carve out time for things that we think are important and could strengthen our service model."

In many non-implementing sites, there were reports of clinicians being left to determine how and if to proceed with CBITS on their own. In fact, some clinic leaders reported that they did not know what happened after the initial in-person CBITS training. One clinician described this process, “There wasn't much in terms of meeting after that. I may have spoken in supervision to brainstorm. They weren't that involved, and it was ultimately up to me whether or not I implemented it.” One administrator noted, “They are the experts on their schools and we left it to them to determine if it would work or not.” It should be noted that these group differences were not absolute, and there was some evidence of leadership involvement, guidance, and resources among non-implementers. Some non-implementers pointed to practical reasons why they were unable to launch CBITS, such as running out of time within the current academic year, “We felt that if we screened, we wouldn't be able to start the group until the next year.”

Clinician behavior and perceptions

The role of clinician motivation, interest, and behavior, and the presence of a champion for CBITS, also appeared to be a differentiating factor. Clinicians' advocacy for the program played a role both in getting initial CBITS training and in implementing the group. One clinician noted, “I talked to my supervisor and he agreed that it would be helpful [to attend the training]. And then I thought it would be helpful for the other clinics, so I then also talked to the person who oversaw the rest of the clinic, and she allowed one of her clinics to come.” At other sites, administrators described champion behavior by their clinicians, “She found the program and spearheaded an effort of advocacy to the administration. I supported her in that process, but took a step back. She was motivated and passionate so I wanted her to get all the credit she could.” Another site director commented on another clinician, “She wasn't technically called the senior clinician, but she took this role and wanted to do programming. It wasn't even assigned to her. She also talked to the school principals for the program. I don't think, without her making those efforts, CBITS would be in our agency.”

In many of the non-implementing sites, "champion" behavior on the part of clinicians was far less salient. Instead, these clinicians described barriers that were seemingly insurmountable related to school or clinic buy-in, billing challenges, and time (i.e., overwhelming caseloads). One clinician described the lack of buy-in, "I presented it to our school-based support team, which is the principal, the assistant principal, guidance counselors, social worker, psychologist, and support staff. The social worker and psychologist and guidance counselor all thought it was a good idea. But the administration, I quote, said 'our kids don't have problems. Don't worry about it.'" Other site leaders described that they were unable to bill for the groups, "Our larger agency does not allow us to bill for groups [even though billing codes for groups exist]." Another clinician, who indicated strong interest in wanting to use CBITS, noted school and clinic barriers, particularly related to large caseloads and time, "It was a combination. I don't want to put it all on the principal, because I think there were also my own barriers with time. I am in two schools, and at one school I have 50 clients on my caseload."

Feedback on the intervention

The fourth major theme had to do with the intervention itself. First, there were no codes indicating dissatisfaction with the CBITS training or the intervention itself. Many sites cited its relevance, with one site supervisor stating, "I would say ninety percent of the kids we work with, no matter what the diagnosis is, have some experience with violence or trauma in their life. We're able to implement [CBITS] regardless of the original problem." Several sites noted the intervention's ease of use, engaging activities, and its focus on diversity. One clinician noted, "I thought it was easy to use, and I liked that it had been developed with populations that are similar to what we see in Bronx." A site administrator noted, "It was very helpful to see that we were able to improve outcomes for so many students."

Participants from two sites indicated some preference for trauma-focused cognitive behavioral therapy (TF-CBT) over CBITS due to its individual-focused treatment model and history of prior use at the site. However, the primary innovation-specific issues raised had to do with time constraints and minor adaptations, and only the sites that actually implemented the program raised these issues. One supervisor noted, "They didn't realize it would take so much time. We have to do same-day documentation and don't have an hour to plan the CBITS group. Ideally, it would be great if our productivity numbers were reduced because we're learning a new model." Within-session adaptations were minimal and primarily had to do with alternative ways of teaching a skill or additional time needed to explain concepts, "With a lot of the manualized treatments, the way the material is delivered isn't accessible for students with speech and language differences and different levels of attention and self-regulation."

Discussion

As policymakers and clinic leaders have been emphasizing adoption of EBPs, there is a need for empirical research that elucidates the key processes for effectively transporting EBPs into usual care and identifies which processes are most critical (Aarons et al., 2011; Horwitz & Landsverk, 2010; Saldana et al., 2012). The current study used mixed methods to observe implementation processes and outcomes in a citywide training program for school-based mental health clinics in CBITS, an evidence-based treatment for students with trauma symptoms in schools (Jaycox, 2003). The study offered important insight into the implementation activities that take place among clinic sites taking part in large training initiatives, and into some of the unique implementation challenges that school-based mental health providers may face.

Patterns of implementation behavior and rates of program start-up in the current study, as measured by the SIC, were consistent with previous implementation efforts of EBPs for children's mental health. A program start-up rate of about 31 percent, while low, is not dissimilar to rates found in other studies of adoption or initial uptake of new practices (Brown et al., 2014; Olin, Chor, et al., 2015; Saldana et al., 2012; Saldana et al., 2015). While we cannot make clear attributions about the reasons that few sites were able to implement CBITS, our data suggest that sites were challenged to engage in more complex planning activities and clinic and school stakeholder engagement. There could also be unique barriers related to trauma (e.g., comfort of school personnel and clinicians). However, there are indications that trauma awareness in schools has grown in recent years (Overstreet & Chafouleas, 2016), and there appeared to be high levels of perceived need and enthusiasm for the intervention. It is also worth noting that 46% of clinic sites did begin some aspect of service delivery, which is consistent with, and actually higher than, rates in some other implementation efforts (e.g., Saldana et al., 2015). This suggests that some sites started to take steps towards running groups (e.g., steps towards screening or consent), but may have run out of time in the school calendar or faced a larger barrier.

Similar to other studies, site pre-implementation behavior predicted program start-up, with completion of Readiness activities (e.g., recruitment review, establishing referral criteria and processes) proving to be particularly important to the success of implementation (e.g., Brown et al., 2014; Saldana et al., 2012). Thus, beyond an initial decision and commitment, active steps were needed in order to develop the necessary infrastructure to support CBITS. The consistency of this finding has particular significance for the scale-up of innovations by mandate or via free training and support programs: the decision alone is insufficient for success. The variability in clinic sites' engagement in these readiness activities underscores the importance of organizations' climate, commitment, and hands-on leadership for program success, all of which have been identified in both mental health and school systems (e.g., Aarons & Sommerfeld, 2012; Glisson & Green, 2011; Hoy et al., 2002). Policymakers should also attend to the additional multi-step processes and time constraints with which school-based providers contend.

Beyond completing more Readiness activities, there were noteworthy patterns in the types of activities that were observed. Overall, it appeared to be relatively easier for sites to complete the prerequisites of the training program (complete the web training, commit to running a group, attend the training) than it was for them to engage in intensive activities such as stakeholder meetings and program planning (determining how to screen clients, holding meetings with clinic and school teams, assigning specific clinicians to run the groups). Again, these results suggest that in large-scale trainings, sites may engage in the simplest implementation activities out of concern over missing an opportunity. Interestingly, unlike some other training programs that have been offered in New York State (Gleacher et al., 2011; Hoagwood et al., 2014; Olin, Nadeem, et al., 2015), the current rollout did not include completion requirements set at the state level. Rather, the effort emerged through a partnership process based on local interest among New York City providers, suggesting that perhaps, after the initial engagement phase, clinic behavior was motivated by internal rather than external incentives and disincentives.

Future efforts using such training models would benefit from focused coaching on completion of Readiness activities in addition to the clinical training itself. In this process, there are roles for EBP trainers, researchers, and the state and local entities that are promoting implementation of new practices. In fact, many purveyor groups already provide such targeted technical assistance. Because we have increasing evidence for cross-cutting critical activities across EBPs (Saldana et al., 2015), it may be possible to coach clinic sites to put into place implementation supports needed to accomplish particular activities. This process could help sites (and those promoting EBPs) to make sound decisions for the particular site, even if it means delaying or choosing not to proceed. While cross-site consultation calls focused on leadership issues and clinician support have a role in this process, there is also evidence to suggest the robustness of individual site coaching related to initial implementation (Gustafson et al., 2013).

Of note, our findings suggest that clinic sites that not only achieved program start-up, but also achieved competency in program delivery (i.e., completed Stage 8), were the only clinics that engaged in any fidelity monitoring activities (i.e., Stage 5). Of the teams that started a CBITS program but did not achieve competency, the majority completed one Stage 5 activity (determining what fidelity monitoring indicator would be used), but did not routinely access or seek out tools that were made freely available. Although the small number of sites that made it to Stage 5 precluded rigorous analysis, duration outcomes suggest that sites that were successful spent a considerable amount of time establishing their fidelity monitoring systems, compared to those that were not successful. This finding extends previous research on the SIC, suggesting that there is value in delineating fidelity and outcome monitoring processes clearly and thoughtfully. It underscores the importance of coaching and guidance on effective measurement of fidelity within a site. Moreover, the critical role of accountability and outcome monitoring has been highlighted in research on CBITS, and in quality improvement and other domains (Nadeem et al., 2011; Nembhard, 2009; Schoenwald et al., 2008; Wandersman et al., 2012).

Use of the SIC as a guide for qualitative interviews with sites offered the potential to examine differences in site behavior with more nuance. Although previous work has demonstrated differences in SIC scores between successful and unsuccessful implementers (Brown et al., 2014), responses from decision makers, supervisors, and lead clinicians in the current study provide evidence for why these differences might exist, once again pointing to the critical role of leadership support and implementation climate (e.g., Aarons & Sommerfeld, 2012; Glisson & Green, 2011; Glisson & James, 2002; Hoy et al., 2002). The thoughtfulness that went into decision-making appeared to drive successful implementation behavior, including both the time taken to reach a decision and the content considered in making it. Further, who was involved in decision-making appeared to influence implementation success, with sites that were more inclusive and collaborative in their decision-making being most likely to succeed. Similarly, sites in which leadership was involved in the Readiness and Feasibility stages appeared to choose their school sites and clinicians effectively and to leverage existing service delivery structures (e.g., choosing schools where they had access to student schedules or prior experience running groups). These sites also provided resources to clinicians, perhaps most importantly, time. While these factors have cross-cutting relevance across service sectors, this kind of thoughtful and knowledgeable leadership support appears critical for providers working in non-specialty mental health settings.

Notably, it was evident that some leaders recognized and supported their champion clinicians. Local champions have long been cited as important factors in successful implementation across school and community settings (Damschroder et al., 2009; Durlak & DuPre, 2008), and in this case they demonstrated the critical interplay of frontline staff and clinic leadership in supporting implementation (Fixsen et al., 2013). In contrast, there were sites in which clinicians were enthusiastic about CBITS but could not overcome barriers. Because it is a group intervention focused on sensitive clinical issues, CBITS can require outreach efforts to enhance school staff buy-in and hands-on school staff support to address logistical issues (Langley et al., 2010). As such, CBITS likely represented a significant shift in practice for many. These shifts, along with common school mental health barriers related to unreimbursed clinician time (Cammack et al., 2014; Owens et al., 2014), were seemingly insurmountable in the absence of strong teamwork and clinic and/or school leadership. Notably, there was little feedback on CBITS itself, most likely because of the limited experience many of the sites had with the intervention. However, there appeared to be a strong perception of its ease of use and relevance across implementers and non-implementers, factors that typically facilitate implementation (Damschroder et al., 2009; Panzano & Roth, 2006).

There are several limitations to the current study. First, this study was linked to a specific citywide rollout that emerged from a partnership between city and state agencies and local clinics. This is important because our observations may not reflect the behavior of clinics trained under other implementation frameworks or clinics that seek out EBP training directly from the purveyor. In fact, after the study period, two of the study sites sought out and paid for CBITS program training and implementation support themselves and were able to successfully start their groups. This underscores the non-linear processes inherent in implementation (Chamberlain et al., 2011) and highlights that, for some sites, a decision not to implement during this specific rollout was a sound one that led to a more successful effort later. Another limitation is that the sample size was relatively small and represented a self-selected group. As such, while we found patterns that should be examined in future research, some of our quantitative findings should be interpreted with caution (e.g., Nemes et al., 2009). With respect to generalizability, we note that participation rates (58%) were in line with those found in a statewide study of adoption behavior among clinics for training programs offered by the New York State Office of Mental Health (Olin, Chor, et al., 2015).

Conclusions

There are several implications of the current study, which provides important insight into clinic behavior that may occur as part of large-scale trainings, particularly for school-based mental health clinics. First, while encouraging widespread participation in training may have benefits for spreading knowledge, enhancing clinician skills, and increasing acceptance of EBPs, clinic leaders should be thoughtful about what they sign up for. The differences in clinic engagement in key Readiness activities suggest the need for leadership and coaching around service planning, needs and capacity assessments, and multilevel stakeholder and staff engagement. Coaching efforts could focus both on practice-specific issues (e.g., creating buy-in around trauma interventions) and on more universal issues such as sustaining leadership involvement and helping leaders recognize and empower local champions at their sites.

Second, the SIC itself could potentially be deployed as a feedback tool that delineates the core activities, allowing for continuous assessment of milestones. When it becomes clear that a site is having difficulty engaging in pre-implementation activities, or when there are early indicators that the implementation-setting fit is poor, sites could be encouraged to reassess their plans and delay or revisit implementation at a later time. Site leaders, implementation coaches, purveyors, and intermediaries from states or localities could consider embedding such a process into implementation efforts in order to facilitate thoughtful resource allocation. Finally, given that this is one of the first studies to use the SIC to examine a school mental health program, it would be of great value to deploy systematic observation of the school implementation process in other contexts. This would help to further delineate the unique challenges school-based mental health providers face and to advocate for their unique implementation support needs.
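To make the feedback idea concrete, the following minimal Python sketch computes stage-wise completion proportions from a hypothetical activity checklist and flags stages that may warrant coaching. The stage names, activity labels, and 50% threshold are invented for illustration only; they are not the SIC instrument, its items, or its scoring rules.

# Illustrative sketch only: a hypothetical stage-wise completion summary
# that a coach or site leader could review as a simple feedback dashboard.
# Stages, activities, and the flag threshold are assumptions for demonstration.

from collections import OrderedDict

# Hypothetical checklist: which activities a single site has completed, by stage.
site_activities = OrderedDict([
    ("Engagement",  {"initial contact": True, "agreed to consider program": True}),
    ("Feasibility", {"reviewed staffing": True, "reviewed referral flow": False}),
    ("Readiness",   {"stakeholder planning meeting": False,
                     "screening plan in place": False,
                     "clinicians assigned to groups": False}),
])

def stage_proportions(activities):
    """Return the proportion of completed activities within each stage."""
    return {stage: sum(done.values()) / len(done) for stage, done in activities.items()}

for stage, prop in stage_proportions(site_activities).items():
    flag = "  <- consider targeted coaching or delaying start-up" if prop < 0.5 else ""
    print(f"{stage}: {prop:.0%} of activities complete{flag}")

Reviewed periodically, a summary of this kind could make stalled pre-implementation activity visible early, consistent with the milestone-monitoring approach described above.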

Taken as a whole, the study provides guidance to those deciding how to deploy their resources. Offering EBP training without investing in other aspects of the implementation process could waste resources that might be better allocated to other areas of service improvement, such as the development of infrastructure and support systems; sites with higher “readiness” may benefit most from investments in longer-term competence and EBP sustainment. The study also serves as a reminder that school-based EBP implementation is indeed challenging. However, despite the need to engage a broad range of stakeholders and navigate complex multisystem service delivery structures, implementation is possible and appears to occur at rates similar to other practices in complex systems (Brown et al., 2015). Sites that take the time to discuss the decision amongst themselves, include key decision makers in those discussions, and give thoughtful consideration to their engagement and readiness processes are the most likely to succeed.


Highlights.

  • Mixed methods study examining EBP implementation for 26 school mental health clinics

  • Completing more pre-implementation activities predicted program start-up

  • Qualitative analysis revealed the critical role of leadership and stakeholder involvement

Acknowledgments

Funding: K01 MH083694 (PI: Nadeem); R01 MH097748 (PI: Saldana).


References

  1. Aarons G, Hurlburt M, Horwitz S. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
  2. Aarons G, Sommerfeld D. Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child & Adolescent Psychiatry. 2012;51(4):423–431. doi: 10.1016/j.jaac.2012.01.018.
  3. Bernard HR. Research methods in anthropology: Qualitative and quantitative approaches. 4th ed. Lanham, MD: AltaMira Press; 2006.
  4. Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry. 2008;47(10):1114. doi: 10.1097/CHI.0b013e3181825af8.
  5. Blase KA, Fixsen DL, Duda MA, Metz AJ, Naoom SF, Van Dyke MK. Implementation challenges and successes: Some big ideas. Paper presented at the Blueprints for Violence Prevention Conference; San Antonio, TX; 2010.
  6. Brown CH, Chamberlain P, Saldana L, Padgett C, Wang W, Cruden G. Evaluation of two implementation strategies in 51 child county public service systems in two states: Results of a cluster randomized head-to-head implementation trial. Implementation Science. 2014;9(1):134–168. doi: 10.1186/s13012-014-0134-8.
  7. Bruns EJ, Hoagwood KE, Hamilton JD. State implementation of evidence-based practice for youths, part I: Responses to the state of the evidence. Journal of the American Academy of Child and Adolescent Psychiatry. 2008;47(4):369–373. doi: 10.1097/CHI.0b013e31816485f4.
  8. Burns BJ, Hoagwood KE. Preface. Evidence-based practice, part I: Research update. Child and Adolescent Psychiatric Clinics of North America. 2004;13(4):xi–xiii. doi: 10.1016/j.chc.2004.11.001.
  9. Cammack NL, Brandt NE, Slade E, Lever NA, Stephan S. Funding expanded school mental health programs. In: Weist MD, Lever NA, Bradshaw CP, Owens JS, editors. Handbook of school mental health. New York: Springer; 2014. pp. 17–30.
  10. Chamberlain P, Brown C, Saldana L. Observational measure of implementation progress in community based settings: The Stages of Implementation Completion (SIC). Implementation Science. 2011;6(1):116. doi: 10.1186/1748-5908-6-116.
  11. Chamberlain P, Feldman SW, Wulczyn F, Saldana L, Forgatch M. Implementation and evaluation of linked parenting models in a large urban child welfare system. Child Abuse and Neglect. 2016;53:27–39. doi: 10.1016/j.chiabu.2015.09.013.
  12. Chamberlain P, Saldana L, Brown CH, Leve L. Implementation of Multidimensional Treatment Foster Care in California: A randomized control trial of an evidence-based practice. In: Fogel MRDS, editor. Using evidence to inform practice for community and organizational change. Chicago, IL: Lyceum Books; 2010. pp. 218–234.
  13. Creswell JW, Klassen AC, Plano Clark VL, Smith KC. Best practices for mixed methods research in the health sciences. Bethesda, MD: National Institutes of Health; 2011.
  14. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Thousand Oaks, CA: Sage; 2007.
  15. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4(1):50–64. doi: 10.1186/1748-5908-4-50.
  16. Davis D, Thomson M, Oxman A, Haynes B. Changing physician performance: A systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700–705. doi: 10.1001/jama.1995.03530090032018.
  17. Durlak JA, DuPre EP. Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41(3-4):327–350. doi: 10.1007/s10464-008-9165-0.
  18. Essock S, Covell N, Leckman-Westin E, Lieberman J, Sederer L, Kealey E, Finnerty M. Identifying clinically questionable psychotropic prescribing practices for Medicaid recipients in New York State. Psychiatric Services. 2009;60(12):1595–1602. doi: 10.1176/ps.2009.60.12.1595.
  19. Ferlie E, Shortell S. Improving the quality of health care in the United Kingdom and the United States: A framework for change. Milbank Quarterly. 2001;79(2):281–315. doi: 10.1111/1468-0009.00206.
  20. Fixsen D, Blase K, Metz A, Van Dyke M. Statewide implementation of evidence-based programs. Exceptional Children. 2013;79(2):213–230. doi: 10.1177/001440291307900206.
  21. Fixsen D, Naoom S, Blase K, Friedman R, Wallace F. Implementation research: A synthesis of the literature (Louis de la Parte Florida Mental Health Institute Publication #231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005.
  22. Fixsen DL, Blase KA. Implementation: The missing link between research and practice. NIRN Implementation Brief. 2009;1.
  23. Gleacher AA, Nadeem E, Moy A, Whited A, Albano AM, Wang R, … Hoagwood KE. The Evidence Based Treatment Dissemination Center: An innovative statewide dissemination effort. Journal of Emotional and Behavioral Disorders. 2010;19(1):1–11. doi: 10.1177/1063426610367793.
  24. Gleacher AA, Nadeem E, Moy AJ, Whited AL, Albano AM, Radigan M, … Hoagwood KE. Statewide CBT training for clinicians and supervisors treating youth: The New York State Evidence Based Treatment Dissemination Center. Journal of Emotional and Behavioral Disorders. 2011;19(3):182–192. doi: 10.1177/1063426610367793.
  25. Glisson C, Green P. Organizational climate, services, and outcomes in child welfare systems. Child Abuse & Neglect. 2011;35(8):582–591. doi: 10.1016/j.chiabu.2011.04.009.
  26. Glisson C, James LR. The cross-level effects of culture and climate in human service teams. Journal of Organizational Behavior. 2002;23(6):767–794. doi: 10.1002/job.162.
  27. Gustafson DH, Quanbeck AR, Robinson JM, Ford JH, Pulvermacher A, French MT, … McCarty D. Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction. 2013;108(6):1145–1157. doi: 10.1111/add.12117.
  28. Hoagwood KE, Olin SS, Horwitz SM, McKay M, Cleek A, Gleacher A, … Kuppinger A. Scaling up evidence-based practices for children and families in New York State: Toward evidence-based policies on implementation for state mental health systems. Journal of Clinical Child & Adolescent Psychology. 2014;43(2):145–157. doi: 10.1080/15374416.2013.869749.
  29. Horwitz SM, Landsverk J. Methodological issues in child welfare and children's mental health implementation research. Administration and Policy in Mental Health. 2010;38:1–3. doi: 10.1007/s10488-010-0316-x.
  30. Hoy WK, Smith PA, Sweetland SR. The development of the organizational climate index for high schools: Its measure and relationship to faculty trust. The High School Journal. 2002;86(2):38–49.
  31. Jaycox LH. Cognitive-Behavioral Intervention for Trauma in Schools. Longmont, CO: Sopris West Educational Services; 2003.
  32. Kataoka SH, Jaycox LH, Wong M, Nadeem E, Langley A, Tang L, Stein BD. Effects on school outcomes in low-income minority youth: Preliminary findings from a community-partnered study of a school trauma intervention. Ethnicity and Disease. 2011;21(3):71.
  33. Kataoka SH, Zhang L, Wells KB. Unmet need for mental health care among U.S. children: Variation by ethnicity and insurance status. American Journal of Psychiatry. 2002;159(9):1548–1555. doi: 10.1176/appi.ajp.159.9.1548.
  34. Kendall PC, editor. Child and adolescent therapy: Cognitive-behavioral procedures. 4th ed. New York: Guilford Press; 2012.
  35. Knopf D, Park MJ, Mulye TP. The mental health of adolescents: A national profile, 2008. San Francisco, CA; 2008.
  36. Langley AK, Nadeem E, Kataoka SH, Stein BD, Jaycox LH. Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health. 2010;2(3):105–113. doi: 10.1007/s12310-010-9038-1.
  37. McCellan J, Werry J. Evidence-based treatments in child and adolescent psychiatry: An inventory. Journal of the American Academy of Child & Adolescent Psychiatry. 2003;42(12):1388–1400. doi: 10.1097/01.chi.0000092322.84052.88.
  38. Mihalic S, Fagan A, Irwin K, Ballard D, Elliott D. Blueprints for violence prevention. Washington, DC; 2004.
  39. Mittman B. Partnering for improvement across research, practice, and policy: The case of implementation research in health. Paper presented at the VA Greater Los Angeles Healthcare System; Los Angeles, CA; 2011.
  40. Muhr T. ATLAS.ti. Berlin: Scientific Software Development; 1998.
  41. Nadeem E, Gleacher A, Beidas RS. Consultation as an implementation strategy for evidence-based practices across multiple contexts: Unpacking the black box. Administration and Policy in Mental Health and Mental Health Services Research. 2013;40(6):439–450. doi: 10.1007/s10488-013-0502-8.
  42. Nadeem E, Jaycox L, Kataoka SH, Langley AK, Stein BD. Going to scale: Experiences implementing a school-based trauma intervention. School Psychology Review. 2011;40(4):549–568.
  43. Nadeem E, Jaycox LH, Langley AK, Wong M, Kataoka SH, Stein BD. Effects of trauma on students: Early intervention through the Cognitive Behavioral Intervention for Trauma in Schools. In: Weist MD, Lever NA, Bradshaw CP, Owens JS, editors. Handbook of school mental health. New York: Springer; 2014. pp. 145–157.
  44. Nembhard IM. Learning and improving in quality improvement collaboratives: Which collaborative features do participants value most? Health Services Research. 2009;44(2 Pt. 1):359–378. doi: 10.1111/j.1475-6773.2008.00923.x.
  45. Nemes S, Jonasson JM, Genell A, Steineck G. Bias in odds ratios by logistic regression modelling and sample size. BMC Medical Research Methodology. 2009;9(1):56–60. doi: 10.1186/1471-2288-9-56.
  46. Olin SS, Chor KHB, Weaver J, Duan N, Kerker BD, Clark LJ, … Horwitz SM. Multilevel predictors of clinic adoption of state-supported trainings in children's services. Psychiatric Services. 2015;66(5):484–490. doi: 10.1176/appi.ps.201400206.
  47. Olin SS, Nadeem E, Gleacher AT, Weaver J, Weiss D, Hoagwood KE, Horwitz SM. What predicts clinician dropout from state-sponsored Managing and Adapting Practice training. Administration and Policy in Mental Health and Mental Health Services Research. 2015:1–12. [E-pub ahead of print]. doi: 10.1007/s10488-015-0709-y.
  48. Overstreet S, Chafouleas SM. Trauma-informed schools: Introduction to the special issue. School Mental Health. 2016;8(1):1–6. doi: 10.1007/s12310-016-9184-1.
  49. Owens JS, Lyon AR, Brandt NE, Warner CM, Nadeem E, Spiel C, Wagner M. Implementation science in school mental health: Key constructs in a developing research agenda. School Mental Health. 2014;6(2):99–111. doi: 10.1007/s12310-013-9115-3.
  50. Panzano PC, Roth D. The decision to adopt evidence-based and other innovative mental health practices: Risky business? Psychiatric Services. 2006;57(8):1153–1161. doi: 10.1176/ps.2006.57.8.1153.
  51. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, … Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7.
  52. Raghavan R, Bright C, Shadoin A. Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Science. 2008;3(1):26. doi: 10.1186/1748-5908-3-26.
  53. Saldana L. The Stages of Implementation Completion for evidence-based practice: Protocol for a mixed methods study. Implementation Science. 2014;9(43):1–11. doi: 10.1186/1748-5908-9-43.
  54. Saldana L, Bennett I, Powers D, Vredevoogd M, Grover T, Schaper H, Campbell M. Scaling implementation of Collaborative Care for Depression: Adaptation of the Stages of Implementation Completion. Manuscript submitted for publication. 2017. doi: 10.1007/s10488-019-00944-z.
  55. Saldana L, Chamberlain P. Supporting implementation: The role of Community Development Teams to build infrastructure. American Journal of Community Psychology. 2012;50:334–346. doi: 10.1007/s10464-012-9503-0.
  56. Saldana L, Chamberlain P, Wang W, Brown CH. Predicting program start-up using the Stages of Implementation measure. Administration and Policy in Mental Health and Mental Health Services Research. 2012;39(6):419–425. doi: 10.1007/s10488-011-0363-y.
  57. Saldana L, Schaper H, Campbell M, Chapman J. Standardized measurement of implementation: The Universal SIC. Implementation Science. 2015;10(1):A73. doi: 10.1186/1748-5908-10-S1-A73.
  58. Schoenwald SK, Chapman JE, Kelleher K, Hoagwood KE, Landsverk J, Stevens J, … Research Network on Youth Mental Health. A survey of the infrastructure for children's mental health services: Implications for the implementation of empirically supported treatments (ESTs). Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(1-2):84–97. doi: 10.1007/s10488-007-0147-6.
  59. Silverman WK, Hinshaw SP. The second special issue on evidence-based psychosocial treatments for children and adolescents: A 10-year update. Journal of Clinical Child & Adolescent Psychology. 2008;37(1):1–7. doi: 10.1080/15374410701818293.
  60. Smith EV Jr. Evidence for the reliability of measures and validity of measure interpretation: A Rasch measurement perspective. Journal of Applied Measurement. 2001;2:281–311.
  61. Stein BD, Jaycox LH, Kataoka SH, Wong M, Tu W, Elliott MN, Fink A. A mental health intervention for schoolchildren exposed to violence: A randomized controlled trial. JAMA. 2003;290(5):603–611. doi: 10.1001/jama.290.5.603.
  62. Stein BD, Kataoka SH, Jaycox LH, Wong M, Fink A, Escudero P, Zaragoza C. Theoretical basis and program design of a school-based mental health intervention for traumatized immigrant children: A collaborative research partnership. Journal of Behavioral Health Services and Research. 2002;29(3):318–326. doi: 10.1007/BF02287371.
  63. Szumilas M. Explaining odds ratios. Journal of the Canadian Academy of Child and Adolescent Psychiatry. 2010;19(3):227–229.
  64. Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: Tools, training, technical assistance, and quality assurance/quality improvement. American Journal of Community Psychology. 2012;50(3-4):445–459. doi: 10.1007/s10464-012-9509-7.
