Author manuscript; available in PMC: 2019 Jul 1.
Published in final edited form as: Behav Ther. 2018 Jan 6;49(4):538–550. doi: 10.1016/j.beth.2018.01.001

Group CBT for externalizing disorders in urban schools: Effect of training strategy on treatment fidelity and child outcomes

Ricardo Eiraldi 1,2, Jennifer A Mautone 1,2, Muniya S Khanna 1, Thomas J Power 1,2, Andrew Orapallo 1, Jaclyn Cacia 1, Billie S Schwartz 1, Barry McCurdy 3, Jacqueline Keiffer 4, Cynthia Paidipati 5, Rebecca Kanine 1, Manju Abraham 1, Shelby Tulio 1, Lauren Swift 1, Shannon N Bressler 1, Beatriz Cabello 1, Abbas F Jawad 1,2
PMCID: PMC6020147  NIHMSID: NIHMS932783  PMID: 29937256

Abstract

Public schools are an ideal setting for the delivery of mental health services to children. Unfortunately, services provided in schools, and especially in urban schools, have been found to produce little or no significant clinical improvement. Studies with urban school children seldom report on the effects of clinician training on treatment fidelity and child outcomes. This study examines the differential effects of two levels of school-based counselor training: training workshop with basic consultation (C) vs. training workshop plus enhanced consultation (C+), on treatment fidelity and child outcomes. Fourteen school staff members (counselors) were randomly assigned to C or C+. Counselors implemented a group cognitive behavioral therapy protocol (Coping Power Program, CPP) for children with or at risk for externalizing behavior disorders. Independent coders coded each CPP session for content and process fidelity. Changes in outcomes from pre- to post-treatment were assessed via a parent psychiatric interview and interviewer-rated severity of illness and global impairment. Counselors in C+ delivered CPP with significantly higher levels of content and process fidelity than counselors in C. Both C and C+ resulted in significant improvement in interviewer-rated impairment; the conditions did not differ from each other with regard to impairment. The conditions also did not differ with regard to pre- to post-treatment changes in diagnostic severity level. School-based behavioral health staff in urban schools are able to implement interventions with fidelity and clinical effectiveness when provided with ongoing consultation. Enhanced consultation resulted in higher fidelity but did not result in better student outcomes compared to basic consultation. Implications for resource allocation decisions in staff training in evidence-based practices (EBPs) are discussed.

Keywords: Dissemination and implementation, evidence-based practice, training, fidelity, consultation, urban schools


Urban public schools have become a common setting for the delivery of mental health services to children and may be an ideal context through which to narrow service disparities (Rones & Hoagwood, 2000). The school is a convenient location where services can often be provided at subsidized cost to families (Taras, 2004). Benefits of providing mental health services in schools include the ability to implement interventions in the very environment in which most symptoms are triggered (Masia-Warner et al., 2005) and to incorporate protocol-specific interventions, with peer and teacher involvement, as needed for generalizability (Evans, 1999). However, research suggests that services provided in low-income urban schools often result in little to no effect on child outcomes (Farahmand, Grant, Polo, Duffy, & DuBois, 2011).

A meta-analysis of the effectiveness of services provided in under-resourced schools found that treatment effect sizes ranged from small to negative (Farahmand, Grant, Polo, Duffy, & DuBois, 2011). The studies in the meta-analysis included interventions delivered at the universal, selected, and indicated levels. Unfortunately, the study did not report on the potential moderating effects of clinician training on treatment fidelity or outcomes. This is a common problem in the literature; the majority of effectiveness studies conducted in schools do not report treatment fidelity (Flaspohler, Meehan, Maras, & Keller, 2012). Without this information, it is not possible to ascertain whether the disappointing reported outcomes of school-based interventions are due to problems with the interventions, problems with the way therapists were trained, or problems with the way the interventions were implemented (Shirk & Peterson, 2013).

Reporting fidelity is important because there is a strong relationship between quality of support for counselors and treatment fidelity, and between fidelity and treatment outcomes (Durlak & DuPre, 2008). Multiple reviews of the literature on implementation of innovations indicate that training and technical assistance (i.e., consultation) for counselors have a direct impact on fidelity and treatment outcomes (Durlak & DuPre, 2008; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). Effectiveness trials of Multisystemic Therapy (MST) found that providing training with ongoing supervision to community-based therapists increased their proficiency with MST and that more intensive ongoing supervision was related to greater therapist adherence to the intervention protocol and better youth outcomes (Henggeler et al., 2002; Schoenwald, Sheidow, & Chapman, 2009).

Ongoing consultation provided after an initial training workshop has been found to lead to higher levels of fidelity than participation in the initial training workshop alone (Beidas & Kendall, 2010; Webster-Stratton, Reid, & Marsenich, 2014). In an influential review of the literature on evidence-based training in cognitive behavioral therapy, Rakovshik and McManus (2010) found evidence supporting the need to provide theoretical instruction delivered during an initial training workshop followed by experiential interactive training through practice cases, and co-therapy or supervision. A number of studies have demonstrated that teaching approaches that make use of active learning strategies (e.g., modeling, role plays) are much more effective than traditional approaches such as reading a training manual or attending a training lecture (Beidas & Kendall, 2010). Previous effectiveness studies have included consultation components that address session preparation and implementation barriers, and others that encourage self-reflection (Shernoff, Lakind, Frazier, & Jakobsons, 2015) and provide performance feedback (Bickman, Kelley, Breda, de Andrade, & Riemer, 2011; Webster-Stratton et al., 2014), among several other components. Despite the agreement that ongoing consultation support is needed, there is no consensus regarding what specifically constitutes an effective consultation package.

In a comprehensive review of the literature on dissemination and implementation (D&I) of mental health interventions, Herschell and colleagues found that multicomponent training strategies involving an initial workshop followed by ongoing consultation and performance feedback enhanced clinicians' clinical skills, adherence, knowledge, utilization rates, and clinical outcomes much more consistently than workshops alone (Herschell et al., 2010). Thus, although progress has been made in understanding the important components of ongoing consultation, it is not yet clear which components (or whether all of them) are necessary to achieve acceptable levels of fidelity and child outcomes (Beidas & Kendall, 2010; Herschell et al., 2010; Nadeem et al., 2013).

Studies examining the impact of training strategies on implementation effectiveness in schools are rare, with a few exceptions. Lochman and colleagues (2009) found that training including individualized problem solving and feedback resulted in higher school counselor fidelity to the Coping Power Program than did training without individualized feedback.

At an even more basic level, school-based behavioral health staff typically do not receive adequate training and consultation on implementing evidence-based practices (EBPs; Accurso, Taylor, & Garland, 2011). Ongoing support, careful session preparation, and attention to implementation barriers are especially important for counselors with little training in EBPs and for those operating in under-resourced settings, where implementation barriers pose a significant risk to service sustainability (Edmunds et al., 2013; Eiraldi, Benjamin Wolk, Locke, & Beidas, 2015).

This study compares the relative effectiveness of two levels of consultation (Edmunds et al., 2013; Nadeem et al., 2013) on treatment fidelity and child outcomes in under-resourced urban schools. A basic, less intensive level, labeled Consultation (C), consisted of: (a) an initial training workshop and manual and (b) ongoing didactics for session preparation. The second, an enhanced, more intensive level, labeled Consultation plus (C+), consisted of: (a) an initial training workshop and manual, (b) ongoing didactics for session preparation, (c) encouragement of self-reflection about one's own performance, (d) goal-setting for fidelity, (e) tailored problem-solving for barriers to implementation, and (f) video review discussion and feedback. The C+ level was designed to incorporate the currently suggested "full package" of consultation components related to therapist treatment fidelity and child clinical outcomes. Counselors were school-based staff, and the EBP was the Coping Power Program (CPP; Lochman, Boxmeyer, Powell, Qu, Wells, & Windle, 2009) for children with externalizing behavior problems. The study contributes to the knowledge base regarding critical components of consultation for achieving strong fidelity and clinical outcomes in urban schools.

The study has two main objectives: (a) to test which consultation strategy (C vs. C+) results in higher content and process fidelity for CPP and (b) to test which consultation strategy results in better child outcomes (changes in diagnostic status and impairment). We also assessed the relationship between changes in diagnostic status and fidelity. Based on the extant literature (e.g., Herschell et al., 2010), we expected that C+ would result in higher content and process fidelity than C. We also hypothesized that C+ would result in greater reductions in diagnostic severity level and impairment than C, and that changes in diagnostic status would be associated with fidelity.

Methods

Design

Data originate from a cluster randomized clinical trial (RCT) that sought to test the effectiveness of C and C+ in the implementation of school-wide positive behavioral interventions and supports (SWPBIS; Sailor, Dunlap, Sugai, & Horner, 2011) with mental health supports at Tier 2 (Eiraldi et al., 2014). Data for this paper originate from the first three years of Tier 2 interventions.

Randomization

The random assignment occurred at the school level (Eiraldi et al., 2014). The six schools were divided into two strata based on their baseline school climate (Collaborative Responsibility Empowering School Teams [CREST]; McCausland, Hales, & Reinhardtsen, 1997) score. Stratum 1 included three schools with baseline CREST scores below the median CREST and Stratum 2 included three schools with baseline CREST scores above the median. A stratified random assignment list of the six schools to the two conditions was generated by the study statistician.

Participants

Participants in the study were: (a) 14 school personnel (86% female) who were nominated by their administrators and volunteered to serve as counselors implementing Tier 2 evidence-based interventions and (b) 119 children (74% male). The racial breakdown of counselors was 36% non-Latino White, 36% African American, and 28% mixed race; 50% were Latino. All counselors had a Master's degree in counseling, social work, or education. Ten of the counselors (71%) worked in the school as a therapist, school counselor, or social worker; the other counselors included two teachers, a climate facilitator, and a dean of students. Counselors had, on average, approximately 17 years of work experience (range = 2–38). Seven counselors were assigned to each condition.

Forty-eight percent of the children were African American, 35% White, 10% mixed race, and 7% Other. Fifty-four percent of the children were of Hispanic/Latino ethnic background. There were no differences between conditions (C, C+) in ethnic (p = .401) or racial (p = .103) composition. Children came from one of four K-8 schools, a K-4 school, or a K-5 school that were randomly assigned to condition (C, C+). The schools where the study took place were located within the same general area of the city, an area with one of the highest levels of extreme poverty in the state. One hundred percent of students were eligible for free or subsidized lunch. The smallest school had 459 students; the largest school had 859 students. Mean enrollment was 791 students (SD = 60.8) in C schools and 568 students (SD = 117.8) in C+ schools.

Inclusion and diagnostic criteria

Children were included in the study if they met screening and diagnostic criteria. Students screened positive if they were in grades 3–8, were referred by a member of the school leadership team because of behavior problems, had a score ≥ 1 standard deviation above the mean on the Conduct Problems scale of the Strengths and Difficulties Questionnaire (SDQ; Goodman et al., 2000), and had a diagnosis of oppositional defiant disorder (ODD) or conduct disorder (CD) at the positive or intermediate (at-risk) level according to the computerized parent version of the NIMH Diagnostic Interview Schedule for Children, Fourth Edition (NIMH C-DISC-IV; Shaffer et al., 2000). A breakdown of the number and percentage of children meeting criteria for ODD, CD, and comorbid disorders is presented in Table 1. Three children did not meet criteria for ODD or CD but were included in the study due to their elevated Conduct Problems scores on the SDQ and continuing concern by the referring leadership team member about the children's externalizing behavior at school. A large number of children included in the study also met diagnostic criteria for attention-deficit/hyperactivity disorder (ADHD), anxiety disorders, and Major Depressive Episode (MDE). Twelve percent of the children met diagnostic criteria for post-traumatic stress disorder (PTSD) at the intermediate or positive level.

Table 1.

NIMH C-DISC IV Diagnoses at Baseline

Disorder | Negative n (%) | Intermediate n (%) | Positive n (%)
Oppositional Defiant Disorder | 3 (2.5%) | 60 (50.4%) | 56 (47.1%)
Conduct Disorder | 63 (52.9%) | 41 (34.5%) | 15 (12.6%)
Attention Deficit-Hyperactivity Disorder | 12 (10.1%) | 59 (49.6%) | 48 (40.3%)
Panic Disorder | 108 (90.8%) | 10 (8.4%) | 1 (0.8%)
Generalized Anxiety Disorder | 99 (83.2%) | 16 (13.4%) | 4 (3.4%)
Post-traumatic Stress Disorder | 99 (83.2%) | 14 (11.8%) | 6 (5.0%)
Separation Anxiety Disorder | 69 (58.0%) | 36 (30.3%) | 14 (11.8%)
Specific Phobia Disorder | 86 (72.3%) | 30 (25.2%) | 3 (2.5%)
Major Depressive Episode | 86 (72.3%) | 31 (26.1%) | 2 (1.7%)
Dysthymic Disorder | 114 (95.8%) | 5 (4.2%) | 0 (0%)

Note: NIMH C-DISC IV = National Institute of Mental Health Computerized Diagnostic Interview Schedule for Children, Fourth Edition.

Training Strategies (C & C+)

Consultation has been defined as "a process of interaction between two professionals, the consultant, who is a specialist, and the consultee, who invokes the consultant's help in a current work problem" (Caplan & Caplan, 1993, p. 11), as cited in Edmunds, Beidas, and Kendall (2013). We wanted to train counselors using strategies that have been shown to work, including teaching the theoretical approach underlying the intervention, demonstration and practice of skills, and performance feedback (Showers, Joyce, & Bennett, 1987).

During year 1 of the project, licensed psychologists with expertise in the treatment of externalizing behavior disorders conducted an initial training workshop with counselors and members of the research team prior to the implementation of the Tier 2 intervention. The training consisted of an eight-hour workshop that included discussion of the theoretical background for the Tier 2 intervention (described below), its development (theoretical rationale, key components, efficacy/effectiveness findings), and a detailed review of the intervention sessions (content, structure, process, implementation challenges). Training included both didactic and active learning activities such as small group discussions, role-plays, behavior rehearsals, watching video-recorded sessions, and demonstration of techniques (Beidas & Kendall, 2010; Kolb, 1984). Counselors were administered the Knowledge of Evidence Based Services Questionnaire (KEBSQ; Stumpf, Higa-McMillan, & Chorpita, 2009) following the training workshop. They were required to score at the ≥ 80% level on the Disruptive Behavior and Attention/Hyperactivity sections of the KEBSQ before they could start implementing CPP. Counselors in C (N = 7) scored 62% (SD = .037) and those in C+ (N = 7) scored 61% (SD = .023) at initial administration. Counselors were given a review of the material they had difficulty with until they were able to score at the ≥ 80% level.

In subsequent project years, psychology pre-doctoral interns and advanced graduate students in school and clinical psychology, supervised by licensed psychologists, conducted a briefer retraining in the school setting with returning counselors in order to provide a refresher and facilitate consolidation of knowledge (Rakovshik & McManus, 2010). Counselors who were new to the project were trained individually in the school setting using the same material covered at the beginning of the project, but the training sessions were spaced out over several weeks to fit their schedules.

After the initial training workshop, consultants (pre-doctoral interns or advanced graduate students from the research team) provided weekly C or weekly C+ consultation sessions to counselors. Licensed psychologists and postdoctoral psychology fellows provided weekly supervision to the consultants.

Basic Consultation (C)

We wanted to provide counselors in both conditions with a minimum level of support that would enable those with little or no prior exposure to mental health EBPs to be able to conduct groups with relatively well-developed skills. After the initial training workshop, consultants met with counselors for 20-minute weekly on-site consultation sessions throughout the intervention period. Consultation sessions included: (a) discussion of referrals to the groups, (b) dealing with logistical issues (e.g., where and when to hold groups), and (c) didactic session preparation for upcoming sessions. Counselors with experience conducting at least 2 CPP groups participated in streamlined 15-minute on-site consultation sessions. Prior to the first consultation session, the consultants provided video-recorded samples of effective implementation of the main components of the treatment to the counselors. Periodically, the consultants encouraged counselors to watch the video clips at their convenience.

Enhanced Consultation (C+)

Consultants provided basic consultation (didactic session preparation), plus 20-minute, in-person, individual enhanced consultation to counselors in the C+ condition. Counselors who had previously implemented CPP received a more streamlined 15-minute version of coaching.

The enhanced consultation strategy was developed based on adult learning characteristics (e.g., propensity to learn from experience, capacity to reflect on one's performance and apply knowledge, self-motivation; Knowles, 1980; Merriam, 2004; Rakovshik & McManus, 2010). The enhanced consultation strategy included (a) self-reflection (Denton & Hasbrouck, 2009); (b) goal setting (Locke & Latham, 2002); (c) video review discussion and feedback; (d) performance feedback (Kluger & DeNisi, 1996); and (e) tailored problem solving for dealing with implementation barriers. For self-reflection, the counselor was asked to reflect on the previous session (e.g., "How do you think you did during the last session?"; "Were there any problems?"; "What do you think went right with your delivery of the content material?"; "What do you think did not go right?"). Second, consultants and counselors discussed session goals (tailored for the needs of each group) as well as content fidelity goals (counselors were expected to reach 80% fidelity in each session) and the rationale for the goal (i.e., because research suggests that the treatment is more likely to be effective at higher fidelity; Locke & Latham, 2002). Then, the consultant provided the counselor with fidelity data for the previous group session and noted whether the fidelity threshold was achieved. Fidelity data were shared with counselors with regard to content fidelity (i.e., the material the counselor was supposed to cover in session) as measured by a content fidelity checklist (CFC) filled out by the consultant, and process fidelity (i.e., how well the counselor delivered the session) as measured by the Process Fidelity Checklist (Lochman et al., 2009), also filled out by the consultant.
Next, the consultant showed two brief video clips from the previous session that contained examples of well-executed CPP objectives from the agenda and two brief video clips that contained ineffective or less than optimal execution of objectives from the agenda. The counselor was asked to reflect on the videos, and the consultant and counselor jointly discussed strategies for improvement. Finally, the consultant conducted tailored problem solving for dealing with implementation barriers (e.g., how to ensure continuing attendance by specific children; how to deal with the behavior of specific children in the group).

Supervision of consultants

Licensed psychologists, or postdoctoral fellows supervised by licensed psychologists, provided approximately 60 minutes of supervision to the consultants each week. Supervision sessions covered all of the C and C+ consultation sessions conducted by each consultant that week. The supervision focused on providing performance feedback after observing and evaluating the C or C+ session in vivo or after watching a video recording of the session. Supervision aimed at ensuring that consultants (a) accurately and consistently communicated CBT principles, (b) stayed within the protocol limits for each condition (i.e., to prevent bleed), (c) encouraged and positively reinforced counselors for their efforts, and (d) delivered didactics consistently across groups.

Supervision followed procedures similar to enhanced consultation (described above). For example, consultants were asked to reflect on the C or C+ session, and the supervisor and the consultant together reviewed specific sections of the basic or enhanced consultation session with regard to effective (e.g., effective use of praise) and less than effective (e.g., difficulty connecting with the counselor; not being clear in teaching a particular concept) performance. Additionally, supervisors provided feedback about differentiation between conditions. In other words, consultants received feedback about whether they used any elements of enhanced consultation during C sessions, and the supervisor and consultant problem-solved strategies to ensure that conditions remained distinct.

Tier 2 Intervention – Coping Power Program

The Coping Power Program (CPP; Curry, Wells, Lochman, Craighead, & Nagy, 2003; Lochman & Wells, 2004) is a cognitive-behavioral, multi-component group intervention for elementary and middle school students with, or at risk for, externalizing behavior disorders. In addition to anger management, CPP includes units on goal-setting, emotional awareness, relaxation training, social skills training, problem solving, and handling peer pressure. In the original version of CPP, the program is delivered across two years (i.e., 8 sessions in the first year of intervention and 25 sessions in the second year); however, most of the content is taught during the first 8 sessions. Studies using an earlier, briefer (12-session) version of CPP (Anger Coping) reported significant reductions in aggressive behavior at post-intervention among targeted aggressive boys, compared to untreated aggressive boys and normal controls (Lochman & Curry, 1986; Lochman, 1985). In a pilot study, our team modified CPP for use in under-resourced urban schools (Eiraldi et al., 2016). For this study, we implemented the modified version of CPP, consisting of 14 weekly 45-minute sessions, in order to ensure feasibility of implementation. In making program adaptations, we preserved all main components of the protocol.

Children who were absent for a session typically did not receive an individual make-up session, although previous session content was routinely reviewed with the group at the start of the subsequent session. Twenty-eight CPP groups were conducted with an average of 4.4 children per group.

Measures

Fidelity

Treatment fidelity is defined as the degree to which practices or interventions are delivered as planned or designed (Perepletchikova & Kazdin, 2005). The construct of fidelity has been conceptualized with two main components: content (i.e., the extent to which primary components of intervention are delivered) and process (i.e., the quality with which the intervention was delivered; Power et al., 2005). Both content fidelity and process fidelity have been linked to improved patient outcomes for multiple mental health disorders and for services provided in different clinical settings (Herschell, Kolko, Baumann, & Davis, 2010; Rakovshik & McManus, 2010).

Group sessions were video-recorded to assess content fidelity and process fidelity. An independent coder (IC) who was not involved with consultation or coaching coded all available video-recorded sessions for content and process fidelity (152 sessions for condition C and 167 sessions for condition C+, representing 78% and 82% of total sessions, respectively). Some sessions were not video-recorded due to technological problems. A second IC double-coded 50 (33%) sessions from condition C and 56 (34%) from condition C+ in order to obtain inter-rater reliability for content and process fidelity measures.

The content fidelity checklist (CFC) listed program components for each session. A "yes" or "no" response was used to indicate whether a content area was covered during a session. We added up all the "yes" responses and divided them by the total number of items in order to obtain the average fidelity for each session. A score of 80% or above indicated an acceptable level (Breitenstein et al., 2010) of content fidelity for each session. Total fidelity for the group was the sum of fidelity for each session divided by the total number of sessions. Total fidelity for the program was the average fidelity at the item level for all groups. Inter-rater reliability was evaluated using Kappa coefficients, which represent agreement between two observers while taking into account the fact that observers sometimes agree or disagree simply by chance (Viera & Garrett, 2005). Inter-rater reliability for content fidelity ranged from .70 to .82 across all content areas and sessions.
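The content fidelity scoring rules above can be sketched as follows. This is an illustrative sketch, not the study's code; the function names and example data are invented, and only the arithmetic (proportion of "yes" items per session, an 80% acceptability threshold, and a group score averaged across sessions) follows the description above.

```python
# Illustrative sketch of the content-fidelity scoring described in the text.
# Each session is represented as a list of yes/no (True/False) checklist
# responses, one per content item.

ACCEPTABLE = 0.80  # acceptability threshold cited from Breitenstein et al. (2010)

def session_fidelity(responses):
    """Proportion of checklist items covered ("yes") in one session."""
    return sum(responses) / len(responses)

def group_fidelity(sessions):
    """Mean of per-session fidelity scores for one group."""
    return sum(session_fidelity(s) for s in sessions) / len(sessions)

# Example: a 10-item session with 9 items covered scores 0.90, above threshold
covered = [True] * 9 + [False]
print(session_fidelity(covered))                 # 0.9
print(session_fidelity(covered) >= ACCEPTABLE)   # True
```

A group's total fidelity is then simply the mean of its session scores, e.g. `group_fidelity([[True, True], [True, False]])` yields 0.75.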

Process fidelity was measured using a modified version of a 14-item checklist developed by John Lochman and colleagues (Lochman et al., 2009). The Process Fidelity Checklist (PFC) assesses multiple domains of implementation quality and therapist competence. We eliminated items from the measure that referred to the behavior of children in the group in order to focus only on the behavior of the counselor. This resulted in a 12-item checklist rated on a 5-point scale ranging from 1 = "Poor" to 5 = "Exceeds expectations." To evaluate inter-rater reliability for process fidelity, intra-class correlations (ICCs) were computed. The ICC for the total process fidelity score was .65 for C and .52 for C+, both in the moderate range (Koo & Li, 2016). The total mean and mean item scores were used in analyses related to process fidelity.

Training differentiation

To examine whether any elements of enhanced consultation were implemented in the basic consultation condition, supervisors completed a two-item rating scale (item 1: the consultant asked the counselor to briefly discuss what went right or wrong in the previous session and to reflect on his/her own performance; item 2: the consultant discussed the fidelity data and provided positive reinforcement for steps completed and corrective feedback). The supervisor rated each item after watching the video recording of consultation sessions for condition C. Each item used a 3-point Likert-type scale (0 = none, 1 = some discussion, 2 = thorough discussion). Contamination for a given session was indicated by a rating of 1 (some discussion) or 2 (thorough discussion) on either item.
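The contamination rule above is a simple decision: a C-condition session is flagged if either of the two items receives a nonzero rating. A minimal sketch, with invented function names and example ratings (not the study's code):

```python
# Sketch of the training-differentiation (contamination) rule described above.
# item_ratings holds the two supervisor ratings for one C-condition session,
# each on the 0-2 scale (0 = none, 1 = some discussion, 2 = thorough discussion).

def session_contaminated(item_ratings):
    """A session is contaminated if either item is rated 1 or 2."""
    return any(rating >= 1 for rating in item_ratings)

print(session_contaminated([0, 0]))  # False: no enhanced-consultation elements
print(session_contaminated([0, 2]))  # True: thorough discussion on item 2
```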

Child diagnostic status

Parents were interviewed in English (N = 105, 88%) or in Spanish (N = 14, 12%) via the NIMH Diagnostic Interview Schedule for Children, Computer Version, 4th Edition (NIMH C-DISC-IV), for 9 disorders: externalizing/disruptive behavior disorders (ADHD, Conduct, and Oppositional Defiant), anxiety disorders (Social Phobia, Separation Anxiety, Panic, Generalized Anxiety, and PTSD), and mood disorders (Major Depression/Dysthymia). The NIMH C-DISC-IV (Shaffer et al., 1996; Shaffer, Fisher, Lucas, Dulcan, & Schwab-Stone, 2000) is a highly structured, diagnostic interview with good psychometric properties that is commonly used in epidemiologic and clinical studies. There are no significant differences between the English and Spanish versions of the instrument with regard to content or psychometric properties (Bravo et al., 2001). The structured nature of the interview does not allow for subjective interpretation, therefore eliminating the need for diagnostic reliability checks or inter-rater reliability checks (Shaffer et al., 2000). The instrument reports three levels of diagnostic severity for each disorder: Positive, Intermediate (at-risk), or Negative. The NIMH C-DISC-IV was administered at pre- and post-intervention.

Impairment

After completing the parent structured interview, the interviewer completed the Clinical Global Impression Severity (CGI-S; Guy, 1976) scale and the Children's Global Assessment Scale (CGAS; Shaffer et al., 1983), based on a printout of the interview containing a list of diagnoses for each child and on notes taken during the interview.

The CGI is used in studies to measure symptom severity and treatment response on a 7-point scale ranging from 1 (normal) to 7 (among the most severely ill patients). The CGI has three global subscales (Severity of Illness, Global Improvement, Efficacy Index). Ratings are based on observed and reported symptoms, behavior, and functioning during the past seven days. The CGI has adequate psychometric properties (Guy, 1976). We used the CGI Severity score to measure pre- to post-treatment changes in severity of illness, specifically for the primary NIMH C-DISC-IV diagnosis.

The CGAS is appropriate for children ages 4 to 16 years. It uses a scale from 1 (extreme impairment) to 100 (no impairment) to measure global impairment during the past month. The CGAS has shown high retest reliability (.65–.95) and has been found to be sensitive to changes in impairment (Shaffer et al., 1983). We used the CGAS to measure pre- to post-treatment change in global impairment for all presenting problems and disorders combined.

Data Analytic Plan

Descriptive statistics (means, standard deviations, medians, and ranges) for content and process fidelity were calculated for each group and for all three years combined. Content and process fidelity scores were compared between the two training strategies (C, C+) using nonparametric methods, specifically Wilcoxon rank sum two-sample tests, because the distributions of content and process fidelity scores were skewed and the assumption of equal variances between the two groups was rejected (F = .0016 and F = .0016, respectively).

We wanted to assess whether there were associations between implementation strategy (C, C+) and changes in diagnostic status (improved, no change, worsened) for oppositional defiant disorder (ODD) and conduct disorder (CD). Given the relatively small number of children with CD in each condition, we analyzed data for ODD only. To examine this association, we classified each participant’s diagnostic status at each time point (pre-, post-treatment) on the following ordinal scale: 0 = Negative (no diagnosis); 1 = Intermediate (at risk for diagnosis); 2 = Positive (presence of diagnosis). A change score was computed for each participant by subtracting diagnostic status at pre-treatment from diagnostic status at post-treatment. We compared pre-treatment and post-treatment ODD diagnostic status frequencies between the two strategies using chi-square tests. McNemar’s chi-squared test was used to compare the proportion of participants whose diagnostic status improved from pre- to post-treatment with the proportion whose status worsened. The association of process fidelity and content fidelity with changes in diagnostic status was examined using the Kruskal-Wallis test (a nonparametric test), with Wilcoxon rank sum two-sample tests for post hoc analysis.
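The change-score coding and the Kruskal-Wallis comparison described above can be sketched in Python. All values below are hypothetical (the study used SAS); the point is the mechanics: subtract ordinal pre from post status, split fidelity by change group, then test.

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical ordinal diagnostic status: 0 = Negative, 1 = Intermediate, 2 = Positive
pre  = np.array([2, 1, 1, 2, 0, 1, 2, 2, 1, 1])
post = np.array([1, 1, 0, 2, 0, 2, 1, 2, 0, 1])
change = post - pre  # negative = improved, 0 = no change, positive = worsened

# Hypothetical per-child group content fidelity (percent of items delivered with fidelity)
content_fidelity = np.array([92, 80, 95, 78, 85, 70, 90, 75, 88, 82])

groups = [content_fidelity[change < 0],   # improved
          content_fidelity[change == 0],  # no change
          content_fidelity[change > 0]]   # worsened

# Kruskal-Wallis test: do fidelity distributions differ across the three change groups?
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```

A significant Kruskal-Wallis result would then be followed up with pairwise Wilcoxon rank sum tests, as in the reported post hoc analysis.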

We also wanted to assess whether there were associations between implementation strategy (C, C+) and improvement in clinical global impression for the primary diagnosis (ODD, CD), as measured by the CGI, and a reduction in global impairment for all presenting problems combined, as measured by the CGAS. We used the generalized estimating equation (GEE), a between-group, repeated measures, population-based approach (Wang, 2014), to examine changes from pre- to post-treatment in CGAS and CGIS ratings between the two conditions (C vs. C+) after adjusting for pre-treatment ratings using Proc Genmod in SAS (SAS Institute Inc., 2014). Statistical significance was set at p < .05.

Results

An examination of the two-item training differentiation measure for 195 C sessions showed that only 6 sessions (3.1%) received a rating of 1 (some discussion) or 2 (thorough review), indicating that very few sessions in the basic consultation condition included elements of enhanced consultation.

Mean content fidelity (percentage of items implemented with fidelity) and mean process fidelity (mean item score on the process fidelity items) were calculated for all CPP groups across both implementation strategies. For the C and C+ strategies combined, content fidelity (318 sessions) had a mean (SD) of 82.6% (21.04) and a median of 85%, and process fidelity (319 sessions) had a mean (SD) of 4.05 (0.56) and a median of 4.20. For content fidelity, the C+ condition (166 sessions) had a mean (SD) of 87.1% (17.94) and a median of 100%, and the C condition (152 sessions) had a mean (SD) of 77.7% (23.07) and a median of 83.0%. For process fidelity, the C+ condition (167 sessions) had a mean (SD) of 4.16 (.45) and a median of 4.20, and the C condition (152 sessions) had a mean (SD) of 3.39 (.63) and a median of 4.10.

The nonparametric Wilcoxon test was used to compare percent content fidelity and mean process fidelity (mean item score) between the C and C+ conditions. Content fidelity for C+ (87.6%) was significantly higher than content fidelity for C (77.75%; two-sided Pr > |Z| with continuity correction, p < .0001). The mean process fidelity item score for C+ (4.16) was significantly higher than that for C (3.92; two-sided Pr > |Z| with continuity correction, p = .0069).

Next, we examined whether change in child diagnostic status differed by condition for children for whom we had pre- and post-treatment NIMH C-DISC-IV data (73/119, or 61%). Children with pre- and post-treatment data did not differ from children without post-treatment data on baseline diagnosis of ODD (χ2 = 0.93, p = .817). Of the 73 children with ODD data, 41 (56.2%) were in condition C; at pre-treatment, 2 were ODD negative, 23 were intermediate, and 15 were positive. Thirty-two (43.8%) were in condition C+; at pre-treatment, 1 was ODD negative, 13 were intermediate, and 18 were positive. There was no statistically significant difference in the frequencies of ODD diagnosis at pre-treatment between the two conditions (χ2(2) = 2.81, p = .2454). At post-treatment, in condition C, 8 children were ODD negative, 14 were intermediate, and 19 were positive; in condition C+, 4 children were ODD negative, 16 were intermediate, and 12 were positive. There was no statistically significant difference in the frequencies of ODD diagnosis at post-treatment between the two conditions (χ2(2) = 2.00, p = .3739). For each of the two conditions, McNemar’s test was used to analyze pre- to post-treatment changes in ODD status. We then examined whether change in participant diagnostic status was associated with content fidelity (CF) and process fidelity (PF). Across both conditions, 18 children improved in their ODD diagnostic status (M [SD]: CF = 86% [15%]; PF = 4.17 [0.51]), 34 children experienced no change (M [SD]: CF = 83% [19%]; PF = 4.06 [0.63]), and 11 children worsened (M [SD]: CF = 77% [18%]; PF = 3.67 [0.49]). No significant differences in process fidelity means were found between the three groups (Kruskal-Wallis χ2(2) = 2.0925, p = .3513). However, there were significant differences in content fidelity means between the three groups (Kruskal-Wallis χ2(2) = 6.48, p = .039).
A post hoc analysis using the Wilcoxon test showed that both the improved group and the no-change group had significantly higher content fidelity scores than the worsened group (two-sided p = .023 and .021, respectively).

We examined differences in pre-treatment severity of illness scores and pre-treatment global impairment scores between the C and C+ conditions using independent two-sample t-tests. The two conditions had similar mean CGI scores at pre-treatment (C: M = 3.93, SD = .716 vs. C+: M = 4.13, SD = .665; t[144] = 1.72, p = .0872). However, the C condition had significantly higher mean CGAS ratings at pre-treatment (M = 53.51, SD = 6.086), indicating less impairment, than the C+ condition (M = 50.19, SD = 6.688; t[144] = 3.14, p = .0021). Generalized estimating equations (GEE; Proc Genmod in SAS, SAS Institute Inc., 2014) were used to examine pre- to post-treatment changes in CGIS and CGAS between the two conditions, adjusting for ratings at pre-treatment. The independent terms in the GEE model were time (pre-/post-treatment), condition (C/C+), and the time*condition interaction; the pre-treatment measurement was included as a covariate. For both CGIS and CGAS, after adjusting for pre-treatment ratings, neither the time*condition interaction nor the condition term was significant. However, time was significant for both outcomes: after adjusting for pre-treatment ratings, CGIS decreased by .49 (SE = .15, Pr > |Z| = .0009) and CGAS increased by 3.76 (SE = 1.08, Pr > |Z| = .0005). The results suggest that both conditions produced significant pre- to post-treatment changes in CGIS and CGAS (see Table 2).

Table 2.

Pre- to post-treatment changes in CGIS and CGAS ratings using generalized estimating equations (GEE)

CGIS
                  Estimate   SE     95% CI         Z        Pr > |Z|
Intercept         .88        .29    .30, 1.46      2.98     .003
Pre-Measurement   .78        .07    .63, .92       10.59    <.0001
Time              −.49       .15    −.78, −.20     −3.32    .0009
Condition         .04        .04    −.03, .12      1.22     .223
Condition*Time    −.29       .22    −.72, .13      −1.36    .173

CGAS
                  Estimate   SE     95% CI         Z        Pr > |Z|
Intercept         4.35       2.70   −.94, 9.64     1.61     .11
Pre-Measurement   .92        .05    .82, 1.02      18.28    <.0001
Time              3.76       1.08   1.64, 5.87     3.49     .0005
Condition         −.27       .23    −.71, .17      −1.20    .232
Condition*Time    1.06       1.58   −2.03, 4.15    .67      .503

Note: CGIS = Clinical Global Impression – Severity of Illness; CGAS = Children’s Global Assessment Scale.

Finally, we used a Spearman correlation coefficient to assess the relationship between counselors’ prior mental health experience in years (M = 12.11, SD = 13.142) and content fidelity. The correlation was not statistically significant (Spearman correlation coefficient = −.36, p = .2278), indicating no significant association between counselor mental health experience and content fidelity.
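A Spearman correlation of this kind can be sketched as follows. The counselor data below are hypothetical (the study had 14 counselors; these values are illustrative only).

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical years of prior mental health experience for 14 counselors
experience = np.array([0, 0, 2, 3, 5, 6, 8, 10, 12, 15, 20, 25, 30, 38])

# Hypothetical mean content fidelity per counselor (percent)
fidelity = np.array([80, 90, 76, 85, 88, 70, 92, 81, 79, 86, 74, 83, 77, 84])

# Spearman's rho is rank-based, so it does not assume a linear relationship
rho, p = spearmanr(experience, fidelity)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

Spearman's rho is appropriate here because experience is heavily skewed (SD larger than the mean) and only a monotonic association is of interest.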

Discussion

The study sought to test two training strategies for the implementation of CPP. We wanted to determine whether a more comprehensive consultation strategy, consisting of an initial training workshop, a treatment manual, consultation for session preparation, self-reflection, goal setting, video review and discussion, and performance feedback (C+), would result in higher levels of content and process fidelity and better child outcomes than a less comprehensive strategy consisting of an initial training workshop, a treatment manual, and consultation for session preparation (C).

Both consultation strategies resulted in content fidelity scores generally higher than the reported fidelity level for most community-based studies (Durlak & DuPre, 2008). As expected, C+ resulted in higher content fidelity compared to C. C+ also led to higher process fidelity scores, compared to C. However, although statistically significant, the difference in process fidelity between conditions (mean item score of 4.16 vs. 3.92 on a scale ranging from 1 to 5) was likely not clinically meaningful.

With regard to the second aim, no differences were found between conditions in changes in diagnostic status or level of global impairment. However, children in both the C and C+ conditions showed reductions in clinician ratings of illness severity for the primary diagnosis (ODD, CD) and in overall global impairment. Because these changes could not be evaluated against a treatment-as-usual control condition, the reductions should be interpreted with caution.

An examination of training contamination between conditions showed that only 3.1% of consultation sessions in condition C included any elements of enhanced consultation (i.e., self-reflection, discussing fidelity data, providing corrective feedback). As such, consultants appear to have adhered to their assigned training components, which lends credibility to the fidelity and functional impairment findings. Also, we did not find a relationship between counselor years of mental health experience and implementation fidelity.

A direct association between content and process fidelity and changes in child outcomes was not confirmed. However, an analysis of the proportions of students who improved, did not change, or worsened in relation to group content fidelity suggests that sessions implemented with higher fidelity were associated with better child outcomes. Future randomized clinical trials aimed at assessing the direct impact of fidelity on child outcomes seem warranted.

The findings warrant several observations pertaining to training and implementation of EBPs in urban school settings. Results showed that school counselors, with little or no prior exposure to mental health EBPs, implemented group treatment for children with externalizing behavior problems with fidelity after receiving training and ongoing expert support. Also, both consultation strategies resulted not only in moderate to high levels of content and process fidelity, but may have also resulted in improvement in clinical outcomes, which has been cited as the “acid test” of any supervision or consultation approach in community settings (Ellis & Ladany, 1997).

The results of the study reaffirm the importance of ongoing initiatives, led by school-based therapists and researchers, to bring mental health evidence-based practices to schools (Adelman & Taylor, 2006; Atkins et al., 2006; Duchnowski & Kutash, 2009), in particular efforts aimed at connecting school-wide systems of support, such as SWPBIS, with mental health supports (Anello et al., 2016; Barrett, Eber, & Weist, 2013; Bradshaw, Bottiani, Osher, & Sugai, 2014).

Several observations are also warranted with regard to the amount and nature of the training provided to school counselors. First, studies have shown that more extensive training of community counselors appears to be better than less extensive training (Rakovshik & McManus, 2010; Webster-Stratton et al., 2014). The findings from this study are consistent with previous findings with regard to content and process fidelity, but not with regard to child outcomes. If the goal of a school-based service is to maximize the fidelity of intervention implementation, a multicomponent training strategy (Herschell et al., 2010) appears to be indicated: an initial training workshop conducted by well-trained coaches, a treatment manual, ongoing consultation for session preparation, support with implementation barriers, targets for implementation fidelity, encouragement of clinician self-reflection, and performance feedback. If, on the other hand, the goal is to deliver services that reach acceptable levels of implementation fidelity, then a training strategy consisting of an initial training workshop, a treatment manual, and ongoing didactic session preparation may be sufficient. Underlying both strategies is the need to provide ongoing training support for the duration of the intervention, which is difficult to achieve in an era of scarce financial resources for urban schools and public mental health systems (Maag & Katsiyannis, 2010; Stewart et al., 2016).

The pre- to post-treatment reductions in impairment scores were considerably larger than in previous studies (e.g., Farahmand, Grant, Polo, Duffy, & DuBois, 2011). No pre- to post-treatment changes in diagnostic status were found in either condition. The lack of observed effects on diagnostic severity level might be the result of the outcome measure used: the NIMH C-DISC-IV has a restricted range for detecting change. Indeed, a measure with only three diagnostic levels (negative, intermediate, positive) was probably not sensitive enough to detect subtle change in diagnostic presentation in a group of children with multiple comorbidities.

It is noteworthy that the outcomes with regard to fidelity and functional impairment were obtained by a group of counselors with little to no prior experience implementing mental health EBPs. In fact, 29% of counselors did not have any prior experience providing mental health interventions to students. It is widely accepted within D&I studies that the main goal of consultation in community settings should be to ensure acceptable implementation fidelity, given the strong connection between fidelity and clinical outcomes (Schoenwald, Carter, Chapman, & Sheidow, 2008). The results of the study support this approach to implementation. The fact that counselors may have been able to obtain positive clinical outcomes with regard to impairment while implementing the intervention with high levels of fidelity is encouraging for future D&I initiatives in urban schools.

Limitations

The results of the study should be considered in light of several limitations. First, we recognized the potential for student outcomes to be correlated (nested) within school, within school personnel, or within CPP group, and such correlations may produce biased results. In studies with a larger number of schools, the nested nature of students within schools, counselors, and CPP groups ought to be addressed in the analyses. Second, the absence of a treatment-as-usual condition limits the interpretation of pre- to post-treatment reductions in symptom severity and impairment. Third, inter-rater reliability was not assessed for either impairment measure; as such, the results for the CGI and CGAS should be interpreted with caution. Fourth, the relatively low and variable reliability of the process fidelity measure reflects the challenges of assessing process fidelity in school practice. Fifth, by design, counselors in condition C received less consultation time (20 minutes) than counselors in condition C+ (40 minutes); it is possible that some of the effects of C+ relative to C on fidelity resulted from counselors simply having more time with consultants. Sixth, the measurement of clinical outcomes did not include instruments able to detect subtle changes in behavior and performance in home and school settings, which is a noteworthy limitation (Arora et al., 2016).

Future Studies

The study provided initial findings regarding the effectiveness of two training strategies on implementation and child outcomes. Future studies should assess the cost-effectiveness of the training strategies so that schools and mental health agencies that provide services in schools can make more informed decisions about the pros and cons of each training strategy vis-à-vis cost (Durlak & DuPre, 2008). The study design did not permit an examination of the relative contribution of each training component to the overall outcome. Future dismantling studies should assess the relative contribution of each component (Edmunds et al., 2013).

Conclusions

In conclusion, an enhanced consultation approach appears more effective in addressing implementation fidelity than a basic consultation approach. Both consultation strategies appear effective for improving child functional impairment. The results of the study are encouraging for future efforts to bring mental health EBPs to urban schools, given that both consultation strategies resulted in high levels of fidelity and may have contributed to lower functional impairment.

Highlights.

  • Enhanced consultation resulted in higher content and process fidelity compared to basic consultation.

  • Both consultation approaches resulted in reduced interviewer-rated severity of illness and global impairment.

  • Consultation conditions did not differ with regard to severity of illness or global impairment.

Acknowledgments

The National Institute of Child Health and Human Development funded the research reported in this publication, Award Number R01HD073430. This study is registered in ClinicalTrials.Gov, Identifier NCT01941069.

Abbreviations

C: Consultation
C+: Enhanced consultation
CPP: Coping Power Program
D&I: Dissemination and implementation
EBP: Evidence-based practice
RCT: Randomized clinical trial

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Conflict of Interest

All authors declare no potential conflict of interest pertaining to this manuscript.

References

  1. Accurso EC, Taylor RM, Garland AF. Evidence-based Practices Addressed in Community-based Children’s Mental Health Clinical Supervision. Training and Education in Professional Psychology. 2011;5:88–96. doi: 10.1037/a0023537. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Adelman HS, Taylor L. Mental health in schools and public health. Public Health Reports. 2006;121:294–298. doi: 10.1177/003335490612100312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Anello V, Weist M, Eber L, Barrett S, Cashman J, Rosser M, Bazyk S. Readiness for Positive Behavioral Interventions and Supports and School Mental Health Interconnection: Preliminary Development of a Stakeholder Survey. Journal of Emotional and Behavioral Disorders. 2016;25:1–14. doi: 10.1177/1063426616630536. [DOI] [Google Scholar]
  4. Arora PG, Connors EH, George MW, Lyon AR, Wolk CB, Wiest MD. Advancing Evidence-Based Assessment in School Mental Health: Key Priorities for applied research agenda. Clinical Child and Family Psychology Review. 2016;19:271–284. doi: 10.1007/s10567-016-0217-y. [DOI] [PubMed] [Google Scholar]
  5. Atkins MS, Frazier SL, Birman D, Adil JA, Jackson M, Graczyk PA, McKay MM. School-based mental health services for children living in high poverty urban communities. Administration and Policy in Mental Health. 2006;33:146–159. doi: 10.1007/s10488-006-0031-9. [DOI] [PubMed] [Google Scholar]
  6. Barrett S, Eber L, Weist M. Advancing education effectiveness: Interconnecting school mental health and school-wide positive behavior support. 2013 Retrieved from http://www.pbis.org/common/cms/files/pbisresources/Final-Monograph.pd.
  7. Beidas RS, Kendall PC. Training Therapists in Evidence-Based Practice: A Critical Review of Studies From a Systems-Contextual Perspective. Clinical Psychology Science and Practice. 2010;17:1–30. doi: 10.1111/j.1468-2850.2009.01187.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bickman L, Kelley SD, Breda C, de Andrade AR, Riemer M. Effects of routine feedback to counselors on mental health outcomes of youths: results of a randomized trial. Psychiatric Services. 2011;62:1423–1429. doi: 10.1176/appi.ps.002052011. [DOI] [PubMed] [Google Scholar]
  9. Bradshaw CP, Bottiani JH, Osher D, Sugai G. The integration of positive behavioral interventions and supports and social emotional learning. In: Weist MD, Lever NA, Bradshaw CP, Owens JS, editors. Handbook of School Mental Health: Research, training, practice and policy. Second. New York: Springer; 2014. pp. 101–118. [Google Scholar]
  10. Bravo M, Ribera J, Rubio-Stipec M, Canino G, Shrout P, Ramirez R, Martinez Taboas A. Test-retest reliability of the Spanish version of the Diagnostic Interview Schedule for Children (DISC-IV) Journal of Abnormal Child Psychology. 2001;29:433–444. doi: 10.1023/A:1010499520090. [DOI] [PubMed] [Google Scholar]
  11. Breitenstein SM, Gross D, Garvey C, Hill C, Fogg L, Resnick B. Implementation Fidelity in Community-Based Interventions. Research in Nursing & Health. 2010;33:164–173. doi: 10.1002/nur.20373. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Caplan G, Caplan RB. Mental Health Consultation and Collaboration. San Francisco, CA: Jossey-Bass; 1993. [Google Scholar]
  13. Curry JF, Wells KC, Lochman JE, Craighead WE, Nagy PD. Cognitive-behavioral intervention for depressed, substance-abusing adolescents: development and pilot testing. Journal of the American Academy of Child and Adolescent Psychiatry. 2003;42:656–665. doi: 10.1097/01.CHI.0000046861.56865.6C. doi: http://dx.doi.org/10.1097/01.CHI.0000046861.56865.6C. [DOI] [PubMed] [Google Scholar]
  14. Denton CA, Hasbrouck J. A Description of Instructional Coaching and its Relationship to Consultation. Journal of Educational and Psychological Consultation. 2009;19:150–175. doi: 10.1080/10474410802463296. [DOI] [Google Scholar]
  15. Duchnowski AJ, Kutash K. Integrating PBS, mental health services, and family driven care. In: Sailor W, Dunlap G, Sugai G, Horner R, editors. Handbook of positive behavior support. New York: Springer; 2009. pp. 203–231. [Google Scholar]
  16. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41:327–350. doi: 10.1007/s10464-008-9165-0. [DOI] [PubMed] [Google Scholar]
  17. Edmunds JM, Beidas RS, Kendall PC. Dissemination and Implementation of Evidence-Based Practices: Training and Consultation as Implementation Strategies. Clinical Psychology (New York) 2013;20:152–165. doi: 10.1111/cpsp.12031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Eiraldi R, Power TJ, Schwartz BS, Keiffer JN, McCurdy BL, Mathen M, Jawad AF. Examining Effectiveness of Group Cognitive-Behavioral Therapy for Externalizing and Internalizing Disorders in Urban Schools. Behavior Modification. 2016;40:611–639. doi: 10.1177/0145445516631093. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Eiraldi R, Benjamin Wolk C, Locke J, Beidas R. Clearing hurdles: the challenges of implementation of mental health evidence-based practices in under-resourced schools. Advances in School Mental Health Promotion. 2015;8:124–145. doi: 10.1080/1754730X.2015.1037848. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Eiraldi R, McCurdy B, Khanna M, Mautone J, Jawad AF, Power TJ, Sugai G. A cluster randomized trial to evaluate external support for the implementation of positive behavioral interventions and supports by school personnel. Implementation Science. 2014;9(12) doi: 10.1186/1748-5908-9-12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Ellis MV, Ladany N. Inferences concerning supervisees and clients in clinical supervision: An integrative review. In: Watkins CEJ, editor. Handbook of psychotherapy supervision. New York: Wiley; 1997. pp. 447–507. [Google Scholar]
  22. Evans SW. Mental health services in schools: utilization, effectiveness, and consent. Clinical Psychology Review. 1999;19:165–178. doi: 10.1016/S0272-7358(98)00069-5. [DOI] [PubMed] [Google Scholar]
  23. Farahmand K, Grant KE, Polo AJ, Duffy SN, DuBois DL. School-based mental health and behavioral programs for low-income, urban youth: A systematic and meta-analytic review. Clinical Psychology Science and Practice. 2011;18:372–390. [Google Scholar]
  24. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature (FMHI Pub. No. 231) Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network; 2005. [Google Scholar]
  25. Flaspohler PD, Meehan C, Maras MA, Keller KE. Ready, willing, and able: Developing a support system to promote implementation of school-based prevention programs. American Journal of Community Psychology. 2012;50:428–444. doi: 10.1007/s10464-012-9520-z. [DOI] [PubMed] [Google Scholar]
  26. Goodman R, Ford T, Simmons H, Gatward R, Meltzer H. Using the Strengths and Difficulties Questionnaire (SDQ) to screen for child psychiatric disorders in a community sample. The British Journal of Psychiatry. 2000;177:534–539. doi: 10.1192/bjp.177.6.534. [DOI] [PubMed] [Google Scholar]
  27. Guy W. The clinical global impression scale. As published in The ECDEU Assessment Manual for Psychopharmacology-Revised (DHEW Publ No ADM 76-338) in U.S. Department of Health, Education, and Welfare Public Health Service, Alcohol, Drug Abuse, Mental Health Administration, NIMH Psychopharmacology Research Branch. Division of Extramural Research; Rockville, MD: 1976. pp. 218–222. [Google Scholar]
  28. Henggeler SW, Schoenwald SK, Liao JG, Letourneau EJ, Edwards DL. Transporting efficacious treatments to field settings: the link between supervisory practices and therapist fidelity in MST programs. Journal of Clinical Child and Adolescent Psychology. 2002;31:155–167. doi: 10.1207/S15374424JCCP3102_02. [DOI] [PubMed] [Google Scholar]
  29. Herschell AD, Kolko DJ, Baumann BL, Davis AC. The role of therapist training in the implementation of psychosocial treatments: a review and critique with recommendations. Clinical Psychology Review. 2010;30:448–466. doi: 10.1016/j.cpr.2010.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Kluger AN, DeNisi A. The Effects of Feedback Interventions on Performance: A Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory. Psychological Bulletin. 1996;119:254–284. doi: http://dx.doi.org/10.1037/0033-2909.119.2.254. [Google Scholar]
  31. Knowles MS. The modern practice of adult education: From pedagogy to andragogy. 2. New York: Cambridge University Press; 1980. [Google Scholar]
  32. Kolb DA. Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall; 1984. [Google Scholar]
  33. Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. Journal of Chiropractic Medicine. 2016;15:155–163. doi: 10.1016/j.jcm.2016.02.012. doi: http://dx.doi.org/10.1016/j.jcm.2016.02.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Lochman JE, Boxmeyer C, Powell N, Qu L, Wells KC, Windle M. Dissemination of the Coping Power program: importance of intensity of counselor training. Journal of Consulting and Clinical Psychology. 2009;77:397–409. doi: 10.1037/a0014514. doi:2009-08093-003 [pii] 10.1037/a0014514. [DOI] [PubMed] [Google Scholar]
  35. Lochman JE, Curry J. Effects of social problem solving training and self-instruction training with aggressive boys. Journal of Clinical Child Psychology. 1986;15:159–164. doi: 10.1207/s15374424jccp1502_8. [DOI] [Google Scholar]
  36. Lochman JE, Wells KC. The coping power program for preadolescent aggressive boys and their parents: outcome effects at the 1-year follow-up. Journal of Consulting and Clinical Psychology. 2004;72:571–578. doi: 10.1037/0022-006X.72.4.571. [DOI] [PubMed] [Google Scholar]
  37. Lochman JE. Effects of different treatment lengths in cognitive behavioral interventions with aggressive boys. Child Psychiatry and Human Development. 1985;16:45–56. doi: 10.1007/BF00707769. [DOI] [PubMed] [Google Scholar]
  38. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation. A 35-year odyssey. American Psychologist. 2002;57:705–717. doi: 10.1037//0003-066x.57.9.705. [DOI] [PubMed] [Google Scholar]
  39. Maag JW, Katsiyannis A. School-Based Mental Health Services: Funding Options and Issues. Journal of Disability Policy Studies. 2010;21:173–180. doi: 10.1177/1044207310385551. [DOI] [Google Scholar]
  40. Masia-Warner C, Klein RG, Dent HC, Fisher PH, Alvir J, Albano AM, Guardino M. School-based intervention for adolescents with social anxiety disorder: results of a controlled study. Journal of Abnormal Child Psychology. 2005;33:707–722. doi: 10.1007/s10802-005-7649-z.
  41. McCausland SG, Hales LW, Reinhardtsen JM. CREST: Technical Report. Vancouver, WA: Educational Service District 112; 1997.
  42. Merriam S. The changing landscape of adult learning theory. In: Comings J, Garner B, Smith C, editors. Review of adult learning and literacy: Connecting research, policy and practice. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 199–220.
  43. Nadeem E, Gleacher A, Beidas RS. Consultation as an implementation strategy for evidence-based practices across multiple contexts: unpacking the black box. Administration and Policy in Mental Health. 2013;40:439–450. doi: 10.1007/s10488-013-0502-8.
  44. Perepletchikova F, Kazdin AE. Treatment integrity and therapeutic change: Issues and research recommendations. Clinical Psychology: Science and Practice. 2005;12:365–383. doi: 10.1093/clipsy/bpi045.
  45. Pianta RC, Mashburn AJ, Downer JT, Hamre BK, Justice L. Effects of web-mediated professional development resources on teacher-child interactions in pre-kindergarten classrooms. Early Childhood Research Quarterly. 2008;23:431–451. doi: 10.1016/j.ecresq.2008.02.001.
  46. Power TJ, Blom-Hoffman J, Clarke AT, Riley-Tillman TC, Kelleher C, Manz PH. Reconceptualizing intervention integrity: A partnership-based framework for linking research with practice. Psychology in the Schools. 2005;42:495–507. doi: 10.1002/pits.20087.
  47. Rakovshik SG, McManus F. Establishing evidence-based training in cognitive behavioral therapy: A review of current empirical findings and theoretical guidance. Clinical Psychology Review. 2010;30:496–516. doi: 10.1016/j.cpr.2010.03.004.
  49. Sailor W, Dunlap G, Sugai G, Horner R, editors. Handbook of Positive Behavior Support. New York: Springer; 2011.
  50. SAS Institute Inc. SAS 9.4, SAS/STAT 14.1 User's Guide: Procedures. Cary, NC: SAS Institute Inc; 2014.
  51. Schoenwald SK, Carter RE, Chapman JE, Sheidow AJ. Therapist adherence and organizational effects on change in youth behavior problems one year after multisystemic therapy. Administration and Policy in Mental Health. 2008;35:379–394. doi: 10.1007/s10488-008-0181-z.
  52. Schoenwald SK, Hoagwood K. Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services. 2001;52:1190–1197. doi: 10.1176/appi.ps.52.9.1190.
  53. Schoenwald SK, Sheidow AJ, Chapman JE. Clinical supervision in treatment transport: effects on adherence and outcomes. Journal of Consulting and Clinical Psychology. 2009;77:410–421. doi: 10.1037/a0013788.
  54. Shaffer D, Gould MS, Brasic J, Ambrosini P, Fisher P, Bird H, Aluwahlia S. A children's global assessment scale (CGAS). Archives of General Psychiatry. 1983;40:1228–1231. doi: 10.1001/archpsyc.1983.01790100074010.
  55. Shaffer D, Fisher P, Dulcan MK, Davies M, Piacentini J, Schwab-Stone ME, Regier DA. The NIMH Diagnostic Interview Schedule for Children Version 2.3 (DISC-2.3): description, acceptability, prevalence rates, and performance in the MECA Study. Methods for the Epidemiology of Child and Adolescent Mental Disorders Study. Journal of the American Academy of Child and Adolescent Psychiatry. 1996;35:865–877. doi: 10.1097/00004583-199607000-00012.
  56. Shaffer D, Fisher P, Lucas CP, Dulcan MK, Schwab-Stone ME. NIMH Diagnostic Interview Schedule for Children Version IV (NIMH DISC-IV): description, differences from previous versions, and reliability of some common diagnoses. Journal of the American Academy of Child and Adolescent Psychiatry. 2000;39:28–38. doi: 10.1097/00004583-200001000-00014.
  57. Shernoff ES, Lakind D, Frazier SL, Jakobsons L. Coaching early career teachers in urban elementary schools: A mixed-method study. School Mental Health. 2015;7:6–20. doi: 10.1007/s12310-014-9136-6.
  58. Shirk SR, Peterson E. Gaps, bridges, and the bumpy road to improving clinic-based therapy for youth. Clinical Psychology: Science and Practice. 2013;20:107–113. doi: 10.1111/cpsp.12026.
  59. Showers B, Joyce B, Bennett B. Synthesis of research on staff development: a framework for future study and a state-of-the-art analysis. Educational Leadership. 1987;45:77–87.
  60. Stewart RE, Adams DR, Mandell DS, Hadley TR, Evans AC, Rubin R, Beidas RS. The perfect storm: Collision of the business of mental health and the implementation of evidence-based practices. Psychiatric Services. 2016;67:159–161. doi: 10.1176/appi.ps.201500392.
  61. Stumpf RE, Higa-McMillan CK, Chorpita BF. Implementation of evidence-based services for youth: Assessing provider knowledge. Behavior Modification. 2009;33:48–65. doi: 10.1177/0145445508322625.
  62. Taras HL. School-based mental health services. Pediatrics. 2004;113:1839–1845. doi: 10.1542/peds.113.6.1839.
  63. Viera AJ, Garrett JM. Understanding interobserver agreement: The kappa statistic. Family Medicine. 2005;37:360–363.
  64. Wang M. Generalized estimating equations in longitudinal data analysis: A review and recent developments. Advances in Statistics. 2014; Article ID 303728. doi: 10.1155/2014/303728.
  65. Webster-Stratton CH, Reid MJ, Marsenich L. Improving therapist fidelity during implementation of evidence-based practices: Incredible Years program. Psychiatric Services. 2014;65:789–795. doi: 10.1176/appi.ps.201200177.
