Abstract
Objective
Progress bringing evidence-based practice (EBP) to community behavioral health (CBH) has been slow. This study investigated feasibility, acceptability, and fidelity outcomes of a program to implement transdiagnostic cognitive therapy (CT) across diverse CBH settings, in response to a policy shift toward EBP.
Method
Clinicians (n = 348) from 30 CBH programs participated in workshops and 6 months of consultation. Clinician retention was examined to assess feasibility, and clinician feedback and attitudes were evaluated to assess implementation acceptability. Experts rated clinicians’ work samples at baseline, mid-, and end-of-consultation with the Cognitive Therapy Rating Scale (CTRS) to assess fidelity.
Results
Feasibility was demonstrated through high program retention (i.e., only 4.9% of clinicians withdrew). Turnover of clinicians who participated was low (13.5%) compared to typical CBH turnover rates, even during the high-demand training period. Clinicians reported high acceptability of EBP and CT, and self-reported comfort using CT improved significantly over time. Most clinicians (79.6%) reached established benchmarks of CT competency by the final assessment point. Mixed-effects hierarchical linear models indicated that CTRS scores increased significantly from baseline to the competency assessment (p < .001), on average by 18.65 points. Outcomes did not vary significantly between settings (i.e., outpatient vs. other).
Conclusions
Even clinicians motivated by policy change rather than self-nomination may feasibly be trained to deliver a case conceptualization-driven EBP with high levels of competency and acceptability.
Public Health Significance
Expanding access to EBPs in community settings has been a long-sought but slow-moving goal, and the Beck Community Initiative suggests a practical model for increasing access to EBP in a large CBH network.
Keywords: cognitive therapy, implementation, community mental health, competency, fidelity
Introduction
Despite the existence of evidence-based practices (EBPs) for a range of behavioral health concerns and populations (Butler, Chapman, Forman, & Beck, 2006; Chorpita et al., 2011a), uptake in practice contexts has been slower than anticipated (President's New Freedom Commission on Mental Health, 2003; Chorpita et al., 2015). Although policy makers have issued mandates, provided incentives, and devoted billions of dollars to bring EBPs to community behavioral health (CBH) in the United States (Institute of Medicine, 2001; McHugh & Barlow, 2010) and around the world (Clark, 2011; Layard, 2006), the majority of CBH services continue to have little or no relation to practices supported by empirical evidence (Zima et al., 2005; Creed, Stirman, Evans, & Beck, 2014a). Even among the notable successful translations from academia to practice settings, vestiges of the translation gap remain. Studies have typically focused on clinicians who self-selected or were nominated for training, and that training has often targeted a single presenting problem or manualized treatment implemented in one level of care (Karlin et al., 2010; Merrill, Tolbert, & Wade, 2003; Miller, Yahne, Moyers, Martinez, & Pirritano, 2004; Scheeres, Wensing, Knoop, & Bleijenberg, 2008). Policy-driven change in CBH systems, however, may require uptake of EBP across clinicians, presenting problems, and levels of care that together represent a more complex reality. The current study examines the acceptability, feasibility, and fidelity outcomes of a policy-driven effort to implement a transdiagnostic EBP across multiple CBH settings and levels of care.
Although the availability of a large assortment of empirically supported therapies (ESTs) may at first seem beneficial, selecting and allocating resources to train providers and implement numerous ESTs is emerging as a significant challenge for behavioral health systems (Chorpita, Bernstein, & Daleiden, 2011b). For example, in a structured comparison of clients in a state-wide CBH system and participants from 437 randomized controlled trials of ESTs, Chorpita et al. (2011b) demonstrated that only 14% of children could be matched to an appropriate EST based on five basic client characteristics (target problem, age, gender, ethnicity, setting). When the matching criteria were simplified to three characteristics (target problem, age, gender), the needs of 71% of youth could be met, but only through state-wide implementation of nine ESTs. Even this comprehensive (though hypothetical) investment of resources would leave almost a third of the state's youth to be served by usual care. In reality, learning multiple protocols may not be feasible, taxing the finite resources of community clinicians with limited returns for people receiving services (Chorpita et al., 2011b; Garland, Bickman, & Chorpita, 2010).
In contrast to an EST (i.e., a specified treatment with evidence of its efficacy for a given problem in specified circumstances), EBP integrates the best available research, clinical expertise, and characteristics of the individual receiving services into an evidence-driven approach that draws upon, but does not necessarily mirror, an EST (APA Presidential Task Force on Evidence-Based Practice, 2006). Transdiagnostic or common elements approaches to treating psychopathology have been developed, emphasizing core features that cut across psychiatric disorders and may serve as appropriate treatment targets (e.g., Barlow, Allen, & Choate, 2004; Chorpita & Daleiden, 2009). This approach is consistent with the Research Domain Criteria project (Insel et al., 2010) of the National Institute of Mental Health and is reflected in a number of recent treatment protocols (Norton & Philipp, 2008). A transdiagnostic approach has also shown promise when implemented within CBH settings (McEvoy & Nathan, 2007; McFarr et al., 2014; Weisz et al., 2012), and represents a compelling area of research for meeting the diverse needs of populations served by CBH.
Another possible barrier to bridging the gap between research and practice is the complexity of the CBH system. To successfully train clinicians in CBH settings to use EBPs in their regular practice, implementation efforts must be flexible and multifaceted. Frameworks for implementation (e.g., Consolidated Framework for Implementation Research: Damschroder et al., 2009; Exploration, Adoption/Preparation, Implementation, Sustainment: Aarons, Hurlburt, & Horwitz, 2011) have highlighted the importance of recognizing and responding to various aspects of the settings in which implementation occurs (e.g., level of care) and characteristics of the individuals being trained (e.g., attitudes about EBP) when developing implementation processes to encourage effective and sustained learning of the EBP.
Flexibility may be particularly important when participation is driven by policy change within a system, rather than self-selection or nomination. Previous research has demonstrated that highly motivated, self-selected samples of clinicians in the community can be successfully trained to deliver Cognitive Therapy (CT) and Cognitive Behavior Therapy (CBT), and that these interventions can lead to symptom change (e.g., Karlin et al., 2010; Merrill et al., 2003; Scheeres et al., 2008). However, the growing force of policy-driven change in large systems (e.g., Institute of Medicine, 2001; Patient Protection and Affordable Care Act of 2010) may lead to implementation efforts directed toward a broader range of clinicians who may not have self-selected for participation. The feasibility, acceptability, and effectiveness of implementing an EBP among these clinicians have yet to be demonstrated.
One example of a transdiagnostic EBP approach that has been implemented in a variety of CBH settings following policy-driven change is the Beck Community Initiative (BCI; Creed et al., 2014a; Stirman, Buchhofer, McLaulin, Evans, & Beck, 2009). Beginning in 2007, the Philadelphia Department of Behavioral Health and Intellectual disAbility Services (DBHIDS), a large publicly funded mental health system that serves more than 120,000 people annually, began to support the large-scale implementation of EBPs in their CBH system (Stirman et al., 2009). The BCI was the first of these initiatives; it aims to advance the quality of care for persons in recovery by infusing CT^1 principles and skills into services delivered by the Philadelphia CBH network. The BCI incorporates a variety of implementation strategies, including training and intensive consultation in CT, ongoing technical support, stakeholder engagement, and attention to organization and system-level factors that influence CT implementation (Creed et al., 2014a; Stirman et al., 2010). Providers across many different levels of care learn to use cognitive case-conceptualization to apply CT flexibly across diagnoses with the diverse clients they serve, and to navigate barriers to the uptake of CT that may be present in their specific work setting.
To determine the success of implementation efforts like the BCI, Proctor and colleagues (2011) identified a number of outcomes that differ from those examined in other community-based studies such as effectiveness research. In contrast to clinical research, implementation research focuses on “effects of deliberate and purposive actions to implement new treatments, practices, and services” (Proctor et al., 2011, p. 65). In other words, the research question becomes whether providers in real-world settings can deliver the EBPs that were developed in controlled research settings (Sholomskas et al., 2005). This study focuses on three implementation outcomes: feasibility, acceptability, and fidelity (Proctor et al., 2011). Feasibility has been defined as the degree to which an intervention can be successfully delivered in a given setting as assessed by recruitment, retention, or participation rates. Acceptability reflects stakeholder perceptions, based on direct experience with an intervention, that the intervention is agreeable, satisfactory, or suitable. These attitudes can facilitate or interfere with the uptake of interventions, influencing whether they will be implemented as intended (Aarons et al., 2012) or sustained over time (Palinkas et al., 2013). Fidelity is defined as the extent to which interventions are implemented as treatment developers intended. Fidelity has been measured more frequently in the literature than any of the other implementation outcomes, perhaps in part because of its shared importance in efficacy, effectiveness, and process research (Proctor et al., 2011).
This study extends the literature by evaluating the implementation of an evidence-based, case conceptualization-driven intervention, rather than a disorder-specific protocol, delivered across many levels of care. To our knowledge, this study is the first that examines the degree to which implementation may be successful following policy change in under-resourced community mental health settings. Specifically, the current paper presents implementation outcomes for clinicians trained in individual CT in the first seven years of the BCI (2007-2014). The study aims were to: 1) establish the feasibility of retaining CBH clinicians in an intensive CT training and consultation process and of collecting audio-recorded work samples for evaluation, 2) evaluate clinicians’ perceptions of the acceptability of EBPs and, specifically, the BCI CT training, 3) investigate whether community clinicians are able to deliver CT with fidelity after participation in training and consultation, and 4) examine whether these outcomes differ for clinicians in traditional outpatient settings versus those in other settings.
Method
Setting
To date, the BCI has partnered in the training of staff in a variety of roles (e.g., therapists, line staff in the therapeutic milieu, peer specialists) across 42 programs in the DBHIDS network (Creed et al., 2014b; Pontoski et al., in press). The current paper presents implementation outcomes for the 30 programs in the BCI in which clinicians were trained to provide CT in individual sessions. Fourteen programs provided general outpatient services for a broad range of presenting problems (n = 166 clinicians) and 16 programs provided individual CT in settings outside of the traditional general outpatient model (n = 182 clinicians). The non-traditional settings included six school-based programs, four substance abuse programs (two methadone-assisted treatment, one intensive outpatient, and one targeted outpatient services), three residential treatment programs, two Assertive Community Treatment teams, and one day program for people with serious mental illness.
Participants
Within the evaluation period, 348 clinicians completed program evaluation measures and attended a workshop or submitted an audio recording to begin the training process. Table 1 provides background information about the clinicians who participated.
Table 1.
| Characteristic | | N | % |
|---|---|---|---|
| Highest earned degree | PhD/PsyD | 13 | 4.0 |
| | MD | 9 | 2.8 |
| | Masters | 291 | 90.1 |
| | Bachelor's | 10 | 3.1 |
| Years since degree earned | 0-3 | 142 | 50.9 |
| | 4-10 | 77 | 27.6 |
| | 11-20 | 36 | 12.9 |
| | 21 or more | 24 | 8.6 |
| Disciplines represented | Counseling | 74 | 23.0 |
| | Education | 12 | 3.7 |
| | Psychiatry / medicine | 9 | 2.8 |
| | Psychology | 67 | 20.8 |
| | Social work | 116 | 36.0 |
| | Couples/family therapy | 9 | 2.8 |
| | Other | 35 | 10.8 |
| License status | Licensed | 111 | 40.2 |
| | Unlicensed | 165 | 59.8 |
Procedures
A detailed description of the BCI training model for clinicians is available for review (see Creed et al., 2014a), and a summary is provided here for context. At the beginning of each program's involvement in the BCI, clinicians learned the basics of CT, including case conceptualization, intervention, and treatment planning, through several intensive in-person workshops (22 hours total). Next, clinicians participated in six months of weekly, two-hour consultation groups focused on the application of CT with clients on their regular caseloads.
Over the course of the consultation phase, clinicians were required to submit at least 15 recorded sessions (with appropriate assent/consent) to demonstrate ongoing use of CT with clients. At the conclusion of the consultation phase, clinicians were eligible to submit audio for the assessment of their competency in CT. In addition to in-person feedback provided during consultation, clinicians received detailed written feedback on two of these audio submissions. Feedback focused on areas of relative strength and weakness for each item and the session as a whole, followed by specific suggestions to improve their skills. (For more information about the feedback process, see Creed et al., 2013).
At the conclusion of the active training phase, clinicians were expected to continue meeting as an internal peer supervision group to build skills and prevent drift from the model. Once the program's initial cohort of clinicians transitioned to an internal supervision group, an ongoing web-based training was made available to additional program clinicians to increase CT capacity in the program and address turnover. Web-based training clinicians (n = 133) completed an online training analogous to the in-person workshops, then joined their program's ongoing supervision group for support in applying CT with their clients. After six months of participation in the internal supervision groups and submission of at least 15 recorded sessions, these additional clinicians became eligible to submit audio for the assessment of competency.
Measures
Feasibility
Three aspects of feasibility were assessed: clinician retention, attendance at internal consultation groups, and collection of audio recordings. Retention of clinicians and attendance at consultation groups were evaluated using the attendance tracking forms for the consultation groups. If a clinician stopped attending the BCI, data were gathered from the designated group leader about the reason for the withdrawal (i.e., left the agency, no longer eligible to participate [e.g., promoted and no longer seeing clients], still at the agency and no longer wishes to participate). The percentage of clinicians who submitted at least the minimum required number of audio recordings was also calculated to evaluate the feasibility of the use of audio recordings to track progress and assess fidelity over time.
Acceptability
To assess clinicians’ attitudes toward EBPs, the Evidence-Based Practice Attitude Scale (EBPAS), a 15-item self-report measure that evaluates clinicians’ beliefs about the utility of EBP, perceived barriers, and institutional requirements (Aarons et al., 2010), was administered. The EBPAS has been found to have adequate internal consistency, with a total scale alpha of .74 (Aarons et al., 2010). In the current sample, coefficient alpha ranged from .70 to .78 across the three time points at which the EBPAS was administered (i.e., pre-workshop, post-workshop, and end-of-consultation).
To measure acceptability of the BCI, clinicians in the live training answered eight questions providing feedback about their experience post-workshop. To promote candid feedback, responses were collected anonymously (i.e., data were linked to the program being trained, but not to the individual completing the measure). Five categorical questions assessed clinician perceptions using “yes,” “maybe,” or “no” responses (see Table 3). Three additional questions were rated on a 0-6 Likert scale, with higher scores indicating more comfort using CT, better quality of training, and greater difficulty of material (see Table 2). The question about clinician comfort using CT was repeated at mid- and end-of-consultation to assess changes in clinicians’ self-reported comfort using CT over the training period.
Table 3.

| Question^a | All Settings: Yes | All: Maybe | All: No | Outpatient: Yes | Outpatient: Maybe | Outpatient: No | Other: Yes | Other: Maybe | Other: No |
|---|---|---|---|---|---|---|---|---|---|
| 4) Relevant | 92.5% | 6.7% | 0.7% | 90.3% | 8.1% | 1.6% | 94.4% | 5.6% | 0.0% |
| 5) Pace^b | 16.4% | 76.1% | 7.5% | 21.0% | 72.6% | 6.5% | 12.5% | 79.2% | 8.3% |
| 6) Improve | 77.6% | 20.1% | 2.2% | 80.6% | 17.7% | 1.6% | 75.0% | 22.2% | 2.8% |
| 7) Equipped | 91.1% | 5.9% | 3.0% | 90.3% | 4.8% | 4.8% | 91.8% | 6.8% | 1.4% |
| 8) Refer | 91.2% | 6.6% | 2.2% | 93.7% | 3.2% | 3.2% | 89.0% | 9.6% | 1.4% |

^a Questions: 4) Were the training topics relevant to your work with clients? 5) How would you rate the pace of the training sessions? 6) Do you feel that using the techniques used in training will help your clients get better? 7) Do you feel that the training you have received will enable you to become a better-equipped therapist? 8) Would you refer another therapist to attend this training workshop?

^b Response options for Question 5 were “Too Fast,” “Just Right,” and “Too Slow” (reported in the Yes, Maybe, and No columns, respectively).
Table 2.

| Question^a | All Settings M (SD) | Outpatient M (SD) | Other M (SD) |
|---|---|---|---|
| 1) Quality | 5.14 (0.96) | 5.29 (0.95) | 5.01 (0.95) |
| 2) Comfort | 3.94 (1.35) | 4.00 (1.43) | 3.89 (1.28) |
| 3) Difficulty | 1.72 (1.32) | 1.67 (1.34) | 1.76 (1.31) |

Note: Ratings were collected post-workshop; Likert items were on a 0-6 scale.

^a Questions: 1) How would you rate the overall quality of the training you received? 2) How comfortable do you feel applying what you have learned in training to your client sessions? 3) How difficult did you find the training material to learn?
Fidelity
The Cognitive Therapy Rating Scale (CTRS; Young & Beck, 1980) was used to evaluate clinicians’ recorded work samples for competency. The CTRS is the observer-rated measure most frequently used to evaluate competency in case conceptualization-driven cognitive therapy (Beck, 1995, 2011). Each of the 11 items is scored on a 7-point Likert scale that ranges from 0 (Poor) to 6 (Excellent), yielding a total score with a range of 0 to 66. The CTRS includes items that assess general therapy skills (e.g., interpersonal effectiveness), CT-specific skills, and case conceptualization. A total score of 40 or higher is used to represent competent delivery of CT in clinical trials (Shaw et al., 1999). The CTRS has demonstrated adequate internal consistency and inter-rater reliability (Vallis, Shaw, & Dobson, 1986), and strong inter-rater agreement for competency based on the total score (Williams, Moorey, & Cobb, 1991). Raters were doctoral-level CT experts who were required to demonstrate calibration prior to rating clinician work samples. Regular reliability meetings held among all instructors to prevent rater drift resulted in high inter-rater reliability for the CTRS total score (ICC = .84).
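As a minimal sketch of this scoring scheme (our illustration only, with hypothetical item ratings, not part of the study's tooling):

```python
# CTRS scoring sketch: 11 items rated 0 (Poor) to 6 (Excellent), summed
# to a 0-66 total; 40 or higher marks competent delivery of CT.
COMPETENCY_THRESHOLD = 40  # benchmark from Shaw et al. (1999)

def ctrs_total(item_scores: list[int]) -> int:
    """Sum the 11 CTRS item ratings into a total score."""
    assert len(item_scores) == 11
    assert all(0 <= s <= 6 for s in item_scores)
    return sum(item_scores)

ratings = [4, 3, 5, 4, 4, 3, 4, 4, 3, 4, 4]  # one hypothetical work sample
total = ctrs_total(ratings)                  # 42
print(total, total >= COMPETENCY_THRESHOLD)  # 42 True
```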
Instructors providing consultation to the training groups used the CTRS to rate several work samples for each clinician: a baseline (i.e., treatment as usual) session rated prior to the beginning of consultation, a mid-consultation session, and an end-of-consultation session. Clinicians who did not demonstrate competency (i.e., total score ≥40) by the end-of-consultation date were allowed to submit additional work samples for evaluation of competency, to reflect any gains in CT skills acquired with additional peer supervision and practice.
Data analysis
Statistical analyses were conducted using SPSS version 22 (IBM, Armonk, NY, USA) and HLM version 7 (Scientific Software International, Inc., Skokie, IL, USA). Study data were managed using REDCap electronic data capture tools hosted at the University of Pennsylvania (Harris et al., 2009).
Percentages of clinicians who completed the full training program or who withdrew prior to one of the assessment points, as well as their reasons for ending participation, were examined to assess feasibility of retaining clinicians in the BCI. In addition, the percentage of groups attended and the number of audio submissions made were examined to explore the feasibility of conducting ongoing consultation groups and collecting work samples in CBH. To assess overall acceptability of the BCI, average scores on Likert feedback items and percentages of clinicians responding “yes,” “maybe,” and “no” to categorical items were examined. To assess whether the feasibility and acceptability outcomes varied between settings (i.e., outpatient versus other), t-tests were conducted for continuous and Likert scale outcomes; Pearson's Chi-Square tests were conducted for categorical outcomes, and, in cases where cells’ expected values were less than 5, Fisher's Exact Tests were used.
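As a hedged illustration of these between-setting comparisons (the study ran its tests in SPSS 22; the Python/scipy sketch below uses simulated data and made-up cell counts):

```python
# Between-setting comparisons: t-test for continuous/Likert outcomes,
# chi-square for categorical outcomes, and Fisher's exact test when any
# expected cell count is below 5. All values are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
outpatient = rng.normal(84.6, 18.8, size=166)  # e.g., % of groups attended
other = rng.normal(84.6, 18.8, size=182)
t_stat, p_t = stats.ttest_ind(outpatient, other)

# 2x2 table: rows = setting; columns = met the submission requirement or not
table = np.array([[113, 53], [124, 58]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    odds_ratio, p_fisher = stats.fisher_exact(table)  # small-cell fallback
```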
Acceptability of the BCI was investigated by examining clinician EBPAS scores and comfort using CT. Fidelity was investigated by examining clinician CTRS scores. To examine changes in scores over time, four mixed-effects hierarchical linear regression models using restricted maximum likelihood estimation were conducted to accommodate the longitudinal and nested nature of the data (i.e., three assessment points nested within clinicians, and clinicians nested within programs). Setting (a dichotomous variable with traditional outpatient equal to 1 and any other setting equal to 0) was added to all HLMs as a grand-mean centered, program-level predictor to assess whether the training was equally effective in non-traditional and traditional settings. To control for the potential impact of missing data due to clinicians not completing training, completion (a dichotomous variable with completers equal to 0) was included as a grand-mean centered, clinician-level predictor in the three HLMs that included clinician-level data (Hedeker & Gibbons, 1997; see description of levels below)^2. Bonferroni-adjusted alpha levels of .0125 (.05/4) were used to protect against Type I errors due to multiple comparisons. For all HLMs, global pseudo-R2 effect size statistics were computed (Peugh, 2010).
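To make the adjustment and centering concrete, here is a minimal Python sketch (our illustration, not the study's SPSS/HLM code; the data frame and column names are hypothetical):

```python
# Minimal sketch of the Bonferroni adjustment and grand-mean centering
# (hypothetical data; the study itself used SPSS 22 and HLM 7).
import pandas as pd

ALPHA = 0.05 / 4  # four planned HLMs -> adjusted alpha of .0125

df = pd.DataFrame({
    "setting": [1, 0, 1, 0],     # 1 = traditional outpatient, 0 = other
    "completion": [0, 1, 0, 0],  # 0 = completer, per the coding above
})
# Grand-mean centering subtracts the overall mean from each predictor,
# so model intercepts are interpretable at the sample-average value.
df["setting_c"] = df["setting"] - df["setting"].mean()
df["completion_c"] = df["completion"] - df["completion"].mean()
```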
EBPAS scores were examined using a three-level HLM with three assessment points (pre-workshop, post-workshop, and end-of-consultation; level 1) nested within clinicians (level 2), and clinicians nested within programs (level 3). Comfort ratings, which were collected anonymously, were modeled using a two-level HLM with three assessment points (post-workshop, mid-consultation, and end-of-consultation; level 1) nested within programs (level 2).
To estimate CT skill development, two different three-level models (with time [level 1] nested within clinicians [level 2], nested within programs [level 3]), were performed using three CTRS assessment points (Model 1: baseline, mid-consultation, and end-of-consultation; Model 2: baseline, mid-consultation, and competency assessment point). Model 1 was used to examine CT skill development from baseline to the set end-point of the intensive training (i.e., intercept = end-of-consultation score). Model 2 was used to examine how CT skills developed over time when skill acquisition was not limited to a set training end date (i.e., intercept = competency assessment score). For clinicians who submitted additional work samples, the final work sample score was used as the competency assessment value. For all other clinicians, the competency assessment value was the same as their end-of-consultation score. In Model 2, the length of time from baseline to competency assessment point was extended to account for the additional time to submission of supplemental work samples (i.e., an average of 1 month after end-of-consultation).
The amount of change in CTRS scores from baseline to mid-consultation was observed to be greater than the amount of change from mid-consultation to end-of-consultation and from mid-consultation to the competency assessment point. As such, assessment point in the Level 1 model was log10 transformed. Akaike's Information Criterion (AIC) calculations confirmed that the model using the log-transformed assessment point as a predictor of CTRS scores approximated the relation between assessment point and CTRS scores better than the model using the untransformed assessment point.
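As a sketch of this comparison (an assumption-laden translation: the study fit three-level models in HLM 7, whereas this simplified two-level version uses statsmodels and simulated data):

```python
# Compare raw vs. log10-transformed assessment point as the time
# predictor via AIC. Models are fit by ML (reml=False) so their fixed
# effects, and hence their AICs, are comparable. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clin = 100
df = pd.DataFrame({
    "clinician": np.repeat(np.arange(n_clin), 3),
    "time": np.tile([1.0, 2.0, 3.0], n_clin),  # baseline, mid, end
})
# Mimic the observed pattern: larger early gains, smaller later gains
df["ctrs"] = 21 + 28 * np.log10(df["time"]) + rng.normal(0, 5, 3 * n_clin)
df["log_time"] = np.log10(df["time"])

m_raw = smf.mixedlm("ctrs ~ time", df, groups=df["clinician"]).fit(reml=False)
m_log = smf.mixedlm("ctrs ~ log_time", df, groups=df["clinician"]).fit(reml=False)
print(m_raw.aic, m_log.aic)  # the lower AIC favors the log-time model
```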
Results
Feasibility
Of the 348 clinicians who began the BCI training, 274 (78.7%) completed the full workshop plus 6 months of consultation. Only 13 clinicians (3.7% of total sample) began the workshop but did not complete it. Thirty-four clinicians (9.8% of total sample) stopped participation between the workshop and mid-consultation, while 27 clinicians (7.8% of total sample) stopped participation between mid-consultation and end-of-consultation. No significant differences in time of discontinuation were found based on setting, χ2(3) = 3.76, p = .29.
Among the 74 clinicians who did not complete the training, 47 clinicians (13.5% of total sample) left their agencies before having the opportunity to complete six months of training, and 10 clinicians (2.9% of total sample) ended participation because they were no longer eligible to participate (e.g., promoted and no longer seeing clients for training cases). Only 17 clinicians (4.9% of total sample) did not complete training because they chose to withdraw from the BCI. A Fisher's exact test showed that reasons for clinician withdrawal did not vary significantly between settings (p = .88). Clinicians who completed the training and those who did not complete the training did not differ significantly on baseline CTRS scores (t[123.43] = 1.12, p = .27), mid-consultation CTRS scores (t[295] = .95, p = .34), pre-workshop EBPAS scores (t[72.13]=.51, p=.62) or post-workshop EBPAS scores (t[136] = 1.41, p = .16).
On average, clinicians attended 84.6% of consultation groups (SD = 18.82). The percentage of consultation groups attended did not differ significantly between settings (t[163] = .56; p = .58). Among clinicians who completed the training, 84.3% submitted at least the required number of audio sessions (M = 16.24, SD = 8.61, range = 2-69). Of the full sample, 68.1% submitted the required number of audio sessions (M = 13.47, SD = 9.52, range = 0-69). Neither the number of work samples submitted, t(340.5) = 1.80, p = .07, nor whether clinicians submitted the required number of sessions, χ2(1) = .72, p = .42, varied between settings.
Acceptability
The three-level HLM (global pseudo-R2 = .82) indicated that the predicted average end-of-consultation EBPAS score was 3.03 out of 4, SE = 0.14, t(28) = 22.36, p < .001. EBPAS score did not change significantly over time, b100 = −0.02, SE = 0.02, t(28) = −.87, p = .39. Setting did not significantly influence the average end-of-consultation EBPAS score, b001 = −0.10, SE = 0.27, t(28) = −0.37, p = .72, or the rate of change, b101 = −0.004, SE = 0.04, t(28) = −.10, p = .92. Both the average end-of-consultation EBPAS score, χ2(12) = 33.07, p = .001, and the rate of change of the EBPAS, χ2(12) = 26.90, p = .008, varied significantly across programs.
Clinicians’ average ratings of the training quality and degree of comfort in using CT were moderate to high, and the difficulty of the material was perceived to be low (see Table 2). Clinicians’ ratings of quality, t(133) = −1.69, p = .09, comfort, t(133) = −0.48, p = .64, and difficulty, t(133) = 0.43, p = .67, did not differ between settings. Further, clinicians’ categorical responses indicated that they believed the workshop content was relevant to their everyday work, CT techniques were likely to lead to client improvement, and their clinical skills had improved as a result of the training. Moreover, clinicians reported a high level of satisfaction with the pace of training and a high likelihood that they would refer other clinicians to the training (see Table 3). Fisher's exact tests showed that clinician responses about relevance (p = .50), client improvement (p = .69), clinical skill set (p = .47), pace of training (p = .40), and referring another therapist (p = .27) did not vary across settings.
The two-level HLM assessing changes over time in clinicians’ average rating of comfort using CT skills (global pseudo-R2 = .20) demonstrated that the predicted average rating of comfort at the end-of-consultation was 4.62 out of 6, SE = 0.16, t(25) = 28.70, p < .001, with ratings of comfort increasing significantly over time, b10 = 0.12, SE = 0.03, t(25) = 4.04, p < .001. Setting did not significantly influence either the average end-of-consultation comfort score, b01 = −0.41, SE = 0.33, t(25) = −1.26, p = .22, or the rate of change in comfort, b11 = −0.07, SE = 0.06, t(25) = −1.22, p = .23. Neither the average end-of-consultation score, χ2(22) = 27.71, p = .19, nor the rate of change, χ2(22) = 19.51, p > .50, varied significantly across programs.
Fidelity
Mean CTRS scores for clinicians who reached each time point were 21.33 (SD = 7.68) at baseline, 33.05 (SD = 7.11) at mid-consultation, 38.72 (SD = 8.96) at end-of-consultation, and 41.20 (SD = 7.86) at the competency assessment point. Among the 274 clinicians who completed training, 163 (59.5%) reached competency by the end of training. Sixty-nine clinicians submitted additional work samples, and 55 of those clinicians reached competency, for a total of 218 clinicians (79.6%) who demonstrated competency in CT. The majority of clinicians who reached competency through an additional work sample submitted one additional recording (83.9%), though eight clinicians (14.3%) submitted two additional samples to reach competency, and one clinician (1.8%) submitted three additional work samples.
Results of the HLM to assess change in CTRS scores from baseline to end-of-consultation (global pseudo-R2 = .85) indicated that the predicted average end-of-consultation CTRS score was 38.32, SE = 0.66, t(28) = 58.49, p < .001. For this model, the CTRS score increased significantly over time, b100 = 35.32, SE = 1.72, t(28) = 20.57, p < .001, on average by 10.63 points from baseline to mid-consultation and by 6.22 points from mid-consultation to the end-of-consultation. Setting did not significantly influence the average end-of-consultation CTRS score, b001 = 0.84, SE = 1.30, t(28) = 0.65, p = .52, or the rate of change of the CTRS scores over time, b101 = 3.10, SE = 3.37, t(28) = 0.92, p = .37. Neither the average end-of-consultation CTRS score, χ2(15) = 24.81, p = .05, nor the rate of change in CTRS score, χ2(15) = 27.57, p < .02, varied significantly across programs.
The HLM to assess change in CTRS scores from baseline to the competency assessment point (global pseudo-R2 = .86) indicated that the predicted average competency assessment CTRS score was 40.39, SE = 0.57, t(28) = 70.51, p < .001. For this model, the CTRS score increased significantly over time, b100 = 36.28, SE = 1.36, t(28) = 26.60, p < .001, with an average increase from mid-consultation to the competency assessment point that was 1.8 points greater than the increase from mid- to end-of-consultation in the previous model. Setting did not significantly influence the average competency assessment score, b001 = 1.82, SE = 1.14, t(28) = 1.60, p = .12, nor the rate of change of the CTRS scores, b101 = 4.87, SE = 2.67, t(28) = 1.83, p = .08. Neither the average competency assessment CTRS, χ2(15) = 19.77, p = .18, nor the rate of change in CTRS score, χ2(15) = 20.78, p = .14, varied significantly across programs.
For all three-level HLMs, completion status did not significantly influence the intercept (end-of-consultation CTRS: b010 = −3.70, SE = 1.93, t(28) = −1.92, p = .07; competency assessment CTRS: b010 = −4.33, SE = 2.09, t(28) = −2.07, p = .05; EBPAS: b010 = −.44, SE = .52, t(28) = −.86, p = .40) or the slope (end-of-consultation CTRS: b110 = −4.87, SE = 4.08, t(28) = −1.19, p = .24; competency assessment CTRS: b110 = −5.84, SE = 4.03, t(28) = −1.45, p = .16; EBPAS: b110 = −.06, SE = .08, t(28) = −.71, p = .49). These results indicate that the intercept and slope estimates were not biased by completion status.
Discussion
The science-practice gap in behavioral health has been widely acknowledged, and policy-makers have begun to reshape expectations and issue mandates for use of EBP in CBH systems, but little prior research has examined implementation at this intersection of policy and practice. As these policy mandates continue to shape the landscape of CBH, the acceptability and feasibility of training clinicians to deliver EBPs with fidelity have become questions of great importance (Persons & Silberschatz, 1998; Ruzek & Rosen, 2005). This study suggests that the large-scale implementation of transdiagnostic, case conceptualization-driven CT is both feasible and acceptable to clinicians working in diverse CBH settings. Further, these community-based clinicians demonstrated levels of competency in CT commensurate with those demonstrated in efficacy trials, regardless of whether therapy was delivered in a traditional setting such as an outpatient clinic or in one of the assorted less traditional settings in which behavioral health care may be delivered (e.g., schools, Assertive Community Treatment teams, addictions services). The BCI represents one of few efforts to implement an EBP that is transdiagnostic in nature (e.g., Chorpita & Daleiden, 2009; McEvoy & Nathan, 2007; Southam-Gerow et al., 2013; Weisz et al., 2012) and is among the first large-scale programs to target diverse age populations across levels of care. The finding that most clinicians were able to achieve competency, regardless of the context in which they provided services, should alleviate some concerns about the feasibility of preparing the mental health workforce to deliver EBPs competently.
With regard to feasibility, overall retention in this intensive training program was high; the vast majority (78.7%) of clinicians completed the full training and consultation process, a completion rate comparable to those found in recent studies of clinicians who self-selected for training (e.g., Miller et al., 2004; Sholomskas et al., 2005). Among clinicians who did not complete the training program, the majority (63.5% of those who did not complete or 13.5% of the total sample) were lost to staff turnover rather than a decision to withdraw from the BCI. This turnover rate is lower than typical rates of turnover in CBH, which range from 30-60% annually (Mor Barak, Nissley, & Levin, 2001), suggesting that even during a high-intensity training period, participation in the BCI may be associated with less turnover than would typically be expected in a CBH system. The BCI has developed a number of specific strategies aimed at engaging and retaining clinicians, including kickoff celebrations, ongoing solicitation and incorporation of stakeholder feedback, quarterly meetings with key personnel throughout the training process to check in about progress, and tailoring of the training to each program's level of care and population (see Creed et al., 2014a for a detailed discussion).
Additionally, fidelity monitoring may function as a protective factor against turnover when it is presented to clinicians as supportive consultation (Aarons, Sommerfeld, Hecht, Silovsky, & Chaffin, 2009a; Beidas et al., 2015). Fidelity monitoring in the BCI was accomplished via work sample review, with an average of 16 sessions submitted by clinicians who completed the training program and an average of 13 sessions submitted by the overall sample. Clinicians in outpatient and less traditional settings submitted similar quantities of audio recordings, suggesting that fidelity monitoring through work sample evaluation was feasible across settings. Given the economic and pragmatic challenges of fidelity monitoring in implementation research (Proctor et al., 2009, 2011), the feasibility and potential benefit of retaining EBP-trained therapists may allay some concerns about the return on the investment, both economically and for the outcomes of individuals receiving services (Aarons, Wells, Zagursky, Fettes, & Palinkas, 2009b).
The BCI was also found to be acceptable to clinicians, who were highly satisfied with the training and its effect on their practice. Clinicians’ attitudes toward EBP were within the high average range prior to beginning training (i.e., within the upper limit of one standard deviation of the national average; Aarons et al., 2010) and remained within that limit during their participation in the BCI. This high acceptability may have been a key factor in the success of the training program with regard to training community clinicians to randomized controlled trial (RCT)-level competence in CT. One possible explanation for these relatively positive attitudes may be that the clinicians in this study were, on average, early in their careers. Given that training in evidence-based practices in graduate programs has increased in recent years, early career clinicians may have more exposure to EBP during training, as well as increased openness to learning EBPs. Future research examining the relations among clinician characteristics (e.g., early or later career status), attitudes towards evidence-based practice, and training outcomes may help identify clinicians most likely to benefit from EBP training.
Clinician comfort with the delivery of CT significantly increased over the training period, suggesting that clinicians across settings not only developed competency in delivering CT, but also felt comfortable doing so. Comfort using CT skills may increase a clinician's likelihood to utilize these skills over time. As acceptability may be a key contextual factor that influences the degree of sustainability of a new evidence-based practice (Palinkas et al., 2013), this relation represents an important area for future research.
Average EBPAS scores and their rate of change over time varied significantly across programs, suggesting that program-level characteristics may lead to variability in clinician attitudes toward EBPs. Indeed, recent implementation studies have identified many potential program-level barriers and facilitators to implementation that might account for variation across programs, including workload, program leadership, readiness for change, and the social context and culture of the program (e.g., Aarons et al., 2012; Aarons et al., 2009b; Glisson et al., 2008). Although information about program-level barriers and facilitators to implementation was not systematically collected for all programs in the current study, past research by our group in a subset of these programs identified workload, productivity demands, and negative staff reactions to change as potential barriers to implementation (Stirman et al., 2013).
Finally, the training program was found to lead to a high degree of clinician fidelity, with 59.5% of clinicians reaching competency by the set end-of-consultation point and 79.6% of clinicians demonstrating competency by the competency assessment point. Scores increased significantly over the course of training, with average scores coming within 2 points of the established threshold for CT competency (Shaw et al., 1999) by end-of-consultation and surpassing this threshold when clinicians were given additional time to build their skills. Thus, the training succeeded in facilitating competent CT skill acquisition in community clinicians, despite the fact that participation was driven by policy rather than self-nomination and despite the wide variation in clinical presentations and populations served by these clinicians. Allowing clinicians additional time to build competency past the end-of-training date allowed a notably higher number of clinicians to demonstrate the required level of skill, which suggests that flexibility may be important in improving individual clinicians’ trajectories and perhaps even in retaining those clinicians in the EBP implementation. Setting did not significantly influence CTRS scores or rates of change in CTRS scores, suggesting that working in nontraditional settings (e.g., school-based services, residential treatment) did not negatively impact clinician skill acquisition or delivery of CT. Rather, the BCI demonstrated strong training outcomes across settings.
Although this study has several strengths, findings should be considered within the context of its limitations. First, characteristics of individual programs that might have impacted attitudes about evidence-based practice were not assessed, so we can only speculate about individual program characteristics that may lead to these differences. Future research on specific organizational characteristics of programs that may influence clinician attitudes is recommended to identify modifiable program characteristics that could potentially foster the development and maintenance of positive attitudes toward EBP.
Second, although using expert ratings of competency from audiotaped sessions strengthened fidelity measurement, this resource-intensive approach may limit generalizability and scalability. However, evidence suggests that CBT is a cost-effective EBP (Domino et al., 2009; Lynch et al., 2010; Vos et al., 2005), which over the longer term might lead to a meaningful return on investment. Furthermore, other research with BCI clinicians suggests that reviewing short segments of audio and providing feedback in a supportive group context, rather than listening to full sessions for each individual clinician, is an effective and efficient strategy to improve competence (Stirman et al., 2015).
Another limitation was the use of instructor ratings, rather than blind ratings, which may have introduced rater bias. However, successful collection and rating of work sample data in implementation research is rare, and the supervisory relationship between rater and clinician may have been key to the BCI's ability to collect audio samples and keep clinicians actively engaged in training after receiving constructive feedback. Feedback from an instructor with whom the clinician had regular weekly contact may have been easier for the clinician to receive and incorporate into practice than feedback from a blind rater. Further, instructors demonstrated high reliability with the rater group, which suggests that their ratings did not differ substantially from those of their colleagues who were not involved in direct clinician consultation.
Finally, the current study examined CT skill acquisition during and immediately following the intensive training phase, but did not examine whether these skills were sustained in practice over longer periods of time. Future research examining sustained practice of CT is necessary in order to evaluate the long-term impact of the training on the CBH system.
The resounding call for EBPs has been echoing for years, but uptake of those practices in CBH has not kept pace (Zima et al., 2005; Creed et al., 2014b). Implementation of transdiagnostic EBPs may present an opportunity to respond to that call by offering an alternative to the challenge faced by providers in selecting and allocating resources to implement numerous ESTs (Chorpita et al., 2011b). To the extent that these implementation programs may offer some protective effect against staff turnover or provide cost-effective treatment, the return on investment may also be financially appealing to providers and networks. Perhaps most compelling, offering communities access to effective, evidence-based services is an important component of social justice for individuals who depend upon CBH services. The current study suggests that as public policies change to create the impetus for EBP implementation across systems, community-based clinicians with the appropriate supports may rise to the challenge to competently deliver case conceptualization-driven CT for diverse clients and settings, in a manner that is feasible within the CBH system and acceptable to the clinicians being trained.
Acknowledgments
The authors wish to thank the Philadelphia Department of Behavioral Health and Intellectual disAbility Services, the research assistants and the Beck Community Initiative instructors at the Aaron T. Beck Psychopathology Research Center, and the clinicians, administrators, and people in recovery whose participation in training and feedback allowed us to refine the Beck Community Initiative. Funding for this project was supported by the Philadelphia Department of Behavioral Health and Intellectual disAbility Services, and by grants from NIMH (T32 MH083745-03, Beck; F32-MH-103955, Benjamin).
Footnotes
CT is specified to distinguish this approach from the broader term, CBT, signaling that case conceptualization is used to guide intervention according to the cognitive model, as originally developed by Beck (Beck, Rush, Shaw, & Emery, 1979; Christon, McLeod, & Jenson-Doss, 2015; Persons, 1989). Over the past four decades, the cognitive model and theory have greatly expanded from an early focus on depression, and they now represent a set of common evidence-based principles that can be applied transdiagnostically, across the spectrum of psychological disorders (for a description, see Haigh & Beck, 2014).
For the three-level HLMs, the equation was:

$$
\begin{aligned}
\text{OUTCOME}_{ijk} = {} & \beta_{000} + \beta_{001}(\text{SETTING}_k) + \beta_{010}(\text{COMPLETION}_{jk}) + \beta_{011}(\text{COMPLETION}_{jk} \times \text{SETTING}_k) \\
& + \beta_{100}(\text{TIME}_{ijk}) + \beta_{101}(\text{TIME}_{ijk} \times \text{SETTING}_k) + \beta_{110}(\text{TIME}_{ijk} \times \text{COMPLETION}_{jk}) \\
& + \beta_{111}(\text{TIME}_{ijk} \times \text{COMPLETION}_{jk} \times \text{SETTING}_k) \\
& + e_{0jk} + e_{1jk}(\text{TIME}_{ijk}) + r_{00k} + r_{01k}(\text{COMPLETION}_{jk}) + r_{10k}(\text{TIME}_{ijk}) \\
& + r_{11k}(\text{TIME}_{ijk} \times \text{COMPLETION}_{jk}) + \varepsilon_{ijk}.
\end{aligned}
$$

For the two-level HLM, the equation was:

$$
\text{OUTCOME}_{ik} = \beta_{00} + \beta_{01}(\text{SETTING}_k) + \beta_{10}(\text{TIME}_{ik}) + \beta_{11}(\text{SETTING}_k \times \text{TIME}_{ik}) + r_{0k} + r_{1k}(\text{TIME}_{ik}) + e_{ik}.
$$
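For readers who prefer code to equations, the two-level model maps roughly onto the following statsmodels call (our sketch with simulated data; the authors fit these models in HLM 7, and the random-effects structure here is a simplification):

```python
# Rough analogue of the two-level HLM: comfort ratings nested in
# programs, with a random intercept and random time slope per program.
# Simulated placeholder data; in the study, time was coded so that the
# intercept fell at end-of-consultation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for k in range(30):                      # 30 programs
    setting = k % 2                      # 1 = outpatient, 0 = other
    for t in range(3):                   # post-workshop, mid, end
        rows.append({"program": k, "setting": setting, "time": t,
                     "comfort": 3.9 + 0.12 * t + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)
df["setting_c"] = df["setting"] - df["setting"].mean()  # grand-mean center

# Fixed effects estimate beta_00..beta_11; re_formula adds r_0k and r_1k
model = smf.mixedlm("comfort ~ time * setting_c", df,
                    groups=df["program"], re_formula="~time")
result = model.fit(reml=True)  # the study used REML estimation
print(result.summary())
```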
References
- Aarons GA, Glisson C, Green P, Hoagwood K, Kelleher KJ, Landsverk JA. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: A United States national study. Implementation Science. 2012;7(1):56. doi: 10.1186/1748-5908-7-56. doi:10.1186/1748-5908-7-56. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aarons GA, Glisson C, Hoagwood K, Kelleher K, Landsverk J, Cafri G. Psychometric properties and U.S. national norms of the Evidence-Based Practice Attitude Scale (EBPAS) Psychological Assessment. 2010;22:356–365. doi: 10.1037/a0019188. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aarons GA, Hurlburt M, Horwitz SM. Advancing a Conceptual Model of Evidence-Based Practice Implementation in Public Service Sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, Chaffin MJ. The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: evidence for a protective effect. Journal of Consulting and Clinical Psychology. 2009a;77(2):270–280. doi: 10.1037/a0013223. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aarons GA, Wells RS, Zagursky K, Fettes DL, Palinkas LA. Implementing evidence-based practice in community mental health agencies: A multiple stakeholder analysis. American Journal of Public Health. 2009b;99:2087–2095. doi: 10.2105/AJPH.2009.161711. doi: 0.2105/AJPH.2009.161711. [DOI] [PMC free article] [PubMed] [Google Scholar]
- APA Presidential Task Force on Evidence Based Practice Evidence based practice in psychology. American Psychologist. 2006;61:271–285. doi: 10.1037/0003-066X.61.4.271. [DOI] [PubMed] [Google Scholar]
- Barlow DH, Allen LB, Choate ML. Towards a unified treatment for emotional disorders. Behavior Therapy. 2004;35:205–230. doi: 10.1016/j.beth.2016.11.005. [DOI] [PubMed] [Google Scholar]
- Beck AT, Rush AJ, Shaw BF, Emery G. Cognitive therapy of depression. Guilford; New York: 1979. [Google Scholar]
- Beck JS. Cognitive Therapy: Basics and Beyond. Guilford Press; New York: 1995. [Google Scholar]
- Beck JS. Cognitive Behavior Therapy: Basics and Beyond. Guilford Press; New York: 2011. [Google Scholar]
- Beidas RS, Marcus S, Wolk CB, Powell B, Aarons GA, Evans AC, Mandell DS. A prospective examination of clinician and supervisor turnover within the context of implementation of evidence-based practices in a publicly-funded mental health system. Administration and Policy in Mental Health and Mental Health Services Research. 2015 doi: 10.1007/s10488-015-0673-6. doi: http://dx.doi.org/10.1007/s10488-015-0673-6. [DOI] [PMC free article] [PubMed]
- Butler AC, Chapman JE, Forman EM, Beck AT. The empirical status of cognitive-behavioral therapy: A review of meta-analyses. Clinical Psychology Review. 2006;26:17–31. doi: 10.1016/j.cpr.2005.07.003. [DOI] [PubMed] [Google Scholar]
- Chorpita BF, Bernstein A, Daleiden EL. Empirically guided coordination of multiple evidence-based treatments: An illustration of relevance mapping in children's mental health services. Journal of Consulting and Clinical Psychology. 2011b;79:470–480. doi: 10.1037/a0023982. [DOI] [PubMed] [Google Scholar]
- Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology. 2009;77(3):566–579. doi: 10.1037/a0014565. doi: 10.1037/a0014565. [DOI] [PubMed] [Google Scholar]
- Chorpita BF, Daleiden EL, Ebesutani C, Young J, Becker KD, Nakamura BJ, Starace N. Evidence-based treatment of children and adolescents: An updated review of indicators of efficacy and effectiveness. Clinical Psychology: Science & Practice. 2011a;18:154–172. [Google Scholar]
- Chorpita BF, Park A, Tsai K, Korathu-Larson P, Higa-McMillan CK, Nakamura BJ, The Research Network on Youth Mental Health Balancing effectiveness with responsiveness: Therapist satisfaction across different treatment designs in the child STEPS randomized effectiveness trial. Journal of Consulting and Clinical Psychology. 2015;83:709–718. doi: 10.1037/a0039301. [DOI] [PubMed] [Google Scholar]
- Christon LM, McLeod BD, Jenson-Doss A. Evidence-based assessment meets evidence-based treatment: An approach to science-informed case conceptualization. Cognitive and Behavioral Practice. 2015;22:36–48. [Google Scholar]
- Clark DM. Implementing NICE guidelines for the psychological treatment of depression and anxiety disorders: The IAPT experience. International Review of Psychiatry. 2011;23:375–384. doi: 10.3109/09540261.2011.606803. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Creed TA, Benjamin C, Feinberg B, Evans AC, Beck AT. Beyond the label: Relationship between community therapists’ self-report of a cognitive behavioral therapy orientation and observed skills. Administration and Policy in Mental Health Services Research. 2014b doi: 10.1007/s10488-014-0618-5. doi: 10.1007/s10488-014-0618-5. [DOI] [PubMed] [Google Scholar]
- Creed TA, Jager-Hyman S, Pontoski K, Feinberg B, Rosenberg Z, Evans AC, Beck AT. The Beck Initiative: Training school-based mental health staff in Cognitive Therapy. The International Journal of Emotional Education. 2013;5:49–66. [Google Scholar]
- Creed TA, Stirman SW, Evans AC, Beck AT. A model for implementation of cognitive therapy in community mental health: The Beck Initiative. The Behavior Therapist. 2014a;37:56–64. [Google Scholar]
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4:50. doi: 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Domino ME, Foster EM, Vitiello B, Kratochvil CJ, Burns BJ, Silva SG, March JS. Relative cost-effectiveness of treatments for adolescents depression: 36-week results from the TADS randomized trial. Journal of the American Academy of Child & Adolescent Psychiatry. 2009;48:711–720. doi: 10.1097/CHI.0b013e3181a2b319. [DOI] [PubMed] [Google Scholar]
- Garland AF, Bickman L, Chorpita BF. Change what? Identifying quality improvement targets by investigating usual mental health care. Administration & Policy in Mental Health & Mental Health Services Research. 2010;37:15–26. doi: 10.1007/s10488-010-0279-y. doi: 10.1007/s10488-010-0279-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Glisson C, Landsverk J, Schoenwald SK, Kelleher K, Hoagwood KE, Mayberg S, Green P. Assessing the Organizational Social Context (OSC) of mental health services: Implications for implementation research and practice. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(1):98–113. doi: 10.1007/s10488-007-0148-5. [DOI] [PubMed] [Google Scholar]
- Haigh EA, Beck AT. Advances in cognitive theory and therapy: The generic cognitive model. Annual Review of Clinical Psychology. 2014;10:1–24. doi: 10.1146/annurev-clinpsy-032813-153734. [DOI] [PubMed] [Google Scholar]
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) - A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Information. 2009;42:377–381. doi: 10.1016/j.jbi.2008.08.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hedeker D, Gibbons RD. Application of Random-Effects Pattern-Mixture Models for Missing Data in Longitudinal Studies. Psychological Methods. 1997;2(1):64–78. [Google Scholar]
- IBM Corp. Released . IBM SPSS Statistics for Windows, Version 22.0. IBM Corp.; Armonk, NY: 2013. [Google Scholar]
- Insel T, Cuthbert B, Garvey M, Heinssen R, Pine DS, Quinn K, Wang P. Research domain criteria (RDoC): Toward a new classification framework for research on mental disorders. American Journal of Psychiatry. 2010;167:748–751. doi: 10.1176/appi.ajp.2010.09091379. [DOI] [PubMed] [Google Scholar]
- Institute of Medicine . Crossing the quality chasm: A new health system for the 21st century. Author; Washington, DC: 2001. [PubMed] [Google Scholar]
- Karlin BE, Ruzek JI, Chard KM, Eftekhari A, Monson CM, Hembree EA, Foa EB. Dissemination of evidence-based psychological treatments for posttraumatic stress disorder in the veterans’ health administration. Journal of Traumatic Stress. 2010;23(6):663–673. doi: 10.1002/jts.20588. [DOI] [PubMed] [Google Scholar]
- Layard R. The case for psychological treatment centres. British Journal of Medicine. 2006;332:1030–1032. doi: 10.1136/bmj.332.7548.1030. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lynch FL, Striegel-Moore RH, Dickerson JF, Perrin N, DeBar L, Wilson GT, Kraemer HC. Cost-effectiveness of guided self help treatment for recurrent binge eating. Journal of Consulting and Clinical Psychology. 2010;78:322–333. doi: 10.1037/a0018982. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McEvoy PM, Nathan P. Effectiveness of cognitive behaviour therapy for diagnostically heterogeneous groups: A benchmarking study. Journal of Consulting and Clinical Psychology. 2007;75:344–350. doi: 10.1037/0022-006X.75.2.344.
- McFarr L, Brown LA, Holler R, Jackson L, Ramirez U, Morgan W. Cognitive behavior therapies in Southern California. The Behavior Therapist. 2014;37:117–121.
- McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments. American Psychologist. 2010;65:73–84. doi: 10.1037/a0018121.
- Merrill KA, Tolbert VE, Wade WA. Effectiveness of cognitive therapy for depression in a community mental health center: A benchmarking study. Journal of Consulting and Clinical Psychology. 2003;71:404–409. doi: 10.1037/0022-006X.71.2.404.
- Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology. 2004;72:1050–1062. doi: 10.1037/0022-006X.72.6.1050.
- Mor Barak ME, Nissly JA, Levin A. Antecedents to retention and turnover among child welfare, social work, and other human service employees: What can we learn from past research? A review and meta-analysis. Social Service Review. 2001;75(4):625–661.
- Norton PJ, Philipp LM. Transdiagnostic approaches to the treatment of anxiety disorders: A quantitative review. Psychotherapy: Theory, Research, Practice, Training. 2008;45:214–226. doi: 10.1037/0033-3204.45.2.214.
- Palinkas LA, Weisz JR, Chorpita BF, Levine B, Garland AF, Hoagwood KE, Landsverk J. Continued use of evidence-based treatments after a randomized controlled effectiveness trial: A qualitative study. Psychiatric Services. 2013;64(11):1110–1118. doi: 10.1176/appi.ps.004682012.
- Patient Protection and Affordable Care Act, 42 U.S.C. § 18001 et seq. 2010.
- Persons JB. The case formulation approach to cognitive behavioral therapy. Guilford Press; New York: 1989.
- Persons JB, Silberschatz G. Are results of randomized controlled trials useful to psychotherapists? Journal of Consulting and Clinical Psychology. 1998;66(1):126–135. doi: 10.1037//0022-006x.66.1.126.
- Peugh JL. A practical guide to multilevel modeling. Journal of School Psychology. 2010;48(1):85–112. doi: 10.1016/j.jsp.2009.09.002.
- Philadelphia Department of Behavioral Health and Intellectual disAbility Services. DBHIDS divisions. 2015. Retrieved from http://dbhids.org/divisions/
- Pontoski K, Cunningham A, Schultz L, Jager-Hyman S, Sposato R, Evans AC, Creed TA. Using a cognitive behavioral framework to train staff serving individuals who experience chronic homelessness. American Journal of Community Psychology, in press.
- President's New Freedom Commission on Mental Health. Achieving the promise: Transforming mental health care in America. Final Report. U.S. Department of Health and Human Services; Rockville, MD: 2003. (DHHS Pub. No. SMA-03-3832)
- Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36:24–34. doi: 10.1007/s10488-008-0197-4.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7.
- Scheeres K, Wensing M, Knoop H, Bleijenberg G. Implementing cognitive behavioral therapy for chronic fatigue syndrome in a mental health center: A benchmarking study. Journal of Consulting and Clinical Psychology. 2008;76:163–171. doi: 10.1037/0022-006X.76.1.163.
- Scientific Software International, Inc. HLM, Version 7. Scientific Software International; Skokie, IL.
- Shaw BF, Elkin I, Yamaguchi J, Olmsted M, Vallis TM, Dobson KS, Imber SD. Therapist competence ratings in relation to clinical outcome in cognitive therapy of depression. Journal of Consulting and Clinical Psychology. 1999;67:837–846. doi: 10.1037//0022-006x.67.6.837.
- Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, Ball SA, Nuro KF, Carroll KM. We don't train in vain: A dissemination trial of three strategies of training clinicians in cognitive behavioral therapy. Journal of Consulting and Clinical Psychology. 2005;73:106–115. doi: 10.1037/0022-006X.73.1.106.
- Southam-Gerow MA, Daleiden E, Chorpita B, Bae C, Mitchell C, Faye M, Alba M. MAPping Los Angeles County: Taking an evidence-informed model of mental health care to scale. Journal of Clinical Child and Adolescent Psychology. 2013;43:190–200. doi: 10.1080/15374416.2013.833098.
- Stirman SW, Bhar S, Spokas M, Brown G, Creed T, Perivoliotis D, Beck AT. Training and consultation in evidence-based psychosocial treatments in public mental health settings: The ACCESS model. Professional Psychology: Research and Practice. 2010;41:48–56. doi: 10.1037/a0018099.
- Stirman SW, Buchhofer R, McLaulin JB, Evans AC, Beck AT. The Beck Initiative: A partnership to implement cognitive therapy in a community behavioral health system. Psychiatric Services. 2009;60:1302–1304. doi: 10.1176/appi.ps.60.10.1302.
- Stirman SW, Pontoski K, Xhezo R, Evans AC, Beck AT, Crits-Christoph P, Creed TA. A non-randomized comparison of strategies for consultation in a community-academic training program to implement an evidence-based psychotherapy. Administration and Policy in Mental Health and Mental Health Services Research. 2015. doi: 10.1007/s10488-015-0700-7.
- Vallis TM, Shaw BF, Dobson KS. The Cognitive Therapy Scale: Psychometric properties. Journal of Consulting and Clinical Psychology. 1986;54:381–385. doi: 10.1037/0022-006X.54.3.381.
- Vos T, Haby MM, Magnus A, Mihalopoulos C, Andrews G, Carter R. Assessing cost-effectiveness in mental health: Helping policy-makers prioritize and plan services. Australian and New Zealand Journal of Psychiatry. 2005;39:701–712. doi: 10.1080/j.1440-1614.2005.01654.x.
- Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, Gibbons RD. Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth. Archives of General Psychiatry. 2012;69(3):274–282. doi: 10.1001/archgenpsychiatry.2011.147.
- Williams RM, Moorey S, Cobb J. Training in cognitive behavior therapy: Pilot evaluation of a training course using the Cognitive Therapy Scale. Behavioural Psychotherapy. 1991;19:373–376. doi: 10.1017/S0141347300014075.
- Young JE, Beck AT. Cognitive Therapy Scale: Rating manual. Unpublished manuscript, Center for Cognitive Therapy, University of Pennsylvania; Philadelphia, PA: 1980.
- Zima BT, Hurlburt MS, Knapp P, Ladd H, Tang L, Duan N, Wells K. Quality of publicly-funded outpatient specialty mental health care for common childhood psychiatric disorders in California. Journal of the American Academy of Child & Adolescent Psychiatry. 2005;44:130–144. doi: 10.1097/00004583-200502000-00005.