Author manuscript; available in PMC 2011 Jun 1.
Published in final edited form as: Clin Psychol Rev. 2010 Mar 1;30(4):448–466. doi: 10.1016/j.cpr.2010.02.005

The Role of Therapist Training in the Implementation of Psychosocial Treatments: A Review and Critique with Recommendations

Amy D Herschell a, David J Kolko a, Barbara L Baumann a, Abigail C Davis b
PMCID: PMC2872187  NIHMSID: NIHMS183935  PMID: 20304542

Abstract

Evidence-based treatments (EBTs) are underutilized in community settings, where consumers are often seen for treatment. Underutilization of EBTs may be related to a lack of empirically informed and supported training strategies. The goals of this review are to understand the state of the literature on training therapists in psychotherapy skills and to offer recommendations to improve research in this area. Results of this review of 55 studies evaluating six training methods indicate that multi-component trainings have been studied most often and have most consistently demonstrated positive training outcomes relative to other training methods. Studies evaluating the utility of reading, self-directed trainings, and workshops have documented that these methods do not routinely produce positive outcomes. Workshop follow-ups help to sustain outcomes. Little is known about the impact of train-the-trainer methods. Methodological flaws, factors that may influence training outcomes, and future directions are also reviewed.

Keywords: therapist training, implementation, dissemination, psychosocial treatments

The hope that mental health problems can be successfully ameliorated is supported by the availability of an increasing number of psychosocial treatment approaches with established efficacy (e.g., Silverman & Hinshaw, 2008). For example, efficacious treatment programs have been reported to address developmental disorders, behavioral and emotional disorders, substance abuse, eating disorders, personality disorders, and psychotic disorders, among others (e.g., Eyberg, Nelson, & Boggs, 2008; Scogin, Welsh, Hanson, Stump, & Coates, 2005). However, these approaches continue to be underutilized in community settings (Street, Niederehe, & Lebowitz, 2000), where millions of consumers receive mental health services each year (National Advisory Mental Health Council [NAMHC], 2001; Ringel & Sturm, 2001).

Recognition of underutilization has led expert groups and professional organizations to advocate for the dissemination, implementation, and testing of evidence-based treatments (EBTs) in community settings (e.g., NAMHC, 2001). Several clinical initiatives have been launched and substantial federal funding has been invested to disseminate EBTs (e.g., National Registry of Evidence-Based Programs and Practices, http://www.nrepp.samhsa.gov/; National Child Traumatic Stress Network, www.NCTSNet.org). Some have even encouraged or mandated the use of EBTs within state Medicaid programs (Reed & Eisman, 2006). Advances in technology have made training protocols easily available online (e.g., http://www.behavioraltech.com/ol/; http://tfcbt.musc.edu/). States also have invested funding and resources to disseminate EBTs to community therapists. For example, the state of California established a clearinghouse of EBTs (http://www.cachildwelfareclearinghouse.org/) and the New York State Office of Mental Health has launched an initiative to train therapists in cognitive behavioral therapy (http://www.omh.state.ny.us/omhweb/ebp/). Within these implementation efforts, training has been conducted extensively across various treatment modalities, settings, and therapist groups (McHugh & Barlow, 2010).

The field’s current focus on EBT dissemination has highlighted both the need for effective implementation strategies, and the lack of data on knowledge transfer and implementation topics (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Gotham, 2004). In general, significant implementation difficulties and a lack of demonstrated clinical success have been reported in transitioning treatments from university to community settings (National Institute of Mental Health, 1998; President’s New Freedom Commission on Mental Health, 2003). Ironically, the field lacks comprehensive empirical guidelines to support the transfer of EBTs to community therapists. Little empirical attention has been paid to those who provide community care and how to most effectively train them to implement psychosocial interventions, including EBTs (Carroll, 2001; Luborsky, McLellan, Diguer, Woody, & Seligman, 1997).

Scope and Definitions of Evidence-Based Treatment

More than 10 years ago, the American Psychological Association's Division 12 (Clinical Psychology) Task Force on Promotion and Dissemination of Psychological Procedures (1995) offered recommendations to promote efforts to define, study and evaluate, teach, and disseminate EBTs (then labeled empirically validated treatments and later renamed empirically supported treatments; Chambless et al., 1998; Chambless et al., 1996). These recommendations included increasing the availability of empirically supported interventions, enforcing guidelines for their documentation, and distributing information about effective services to professionals, the public, and the media. The Division 12 Task Force spurred a movement toward EBT, which has generated enthusiasm, controversy, and concern. Notable were concerns related to a possible over-focus on manualized treatments and under-appreciation of common factors and patient diversity. More recently, and perhaps in response, the American Psychological Association's Presidential Task Force on Evidence-based Practice (2006) broadened the conceptualization of this topic and offered the following definition: "Evidence-based practice in psychology (EBPP) is the integration of best available research with clinical expertise in the context of patient characteristics, culture, and preferences" (p. 273). Even more recently, Kazdin (2008) defined EBTs as interventions or techniques that have "produced therapeutic change in controlled trials" (p. 147), and evidence-based practice (EBP) as a broader term referring to "clinical practice that is informed by evidence about interventions, clinical expertise, and patient needs, values, and preferences, and their integration in decision-making about individual care" (p. 147).
These definitions extend prior descriptions that emphasize related concepts, but also incorporate alternative approaches and guidelines (Spring, 2007). The breadth of definitions reported is apparent in several recent studies and reviews (Luongo, 2007; Schoenwald, Kelleher, & Weisz, 2008) and just what constitutes an EBT is still a matter of debate.

A common thread to the debate is that regardless of exactly what constitutes an EBPP, EBT, or EBP, there is a continuing need to transfer science into practice, which will require effective and efficient methods for transferring to therapists the skills and knowledge needed to conduct empirically informed psychotherapies (Fixsen et al., 2005; Gotham, 2004). Moreover, all psychotherapies are "soft technologies" (Hemmelgarn, Glisson, & James, 2006), meaning that they are malleable and rely extensively on people (therapists), which further complicates their implementation.

Models for Dissemination of EBT

In conjunction with refinements in these definitions, models or conceptual frameworks for the dissemination and implementation of EBTs have been proposed to guide efforts to change existing service systems and enhance overall outcomes for consumers of mental health services. The National Institute of Mental Health (2002) has defined dissemination as “the targeted distribution of information to a specific audience,” and has defined implementation as “the use of strategies to introduce or adapt evidence-based mental health interventions within specific settings” (PA-02-131; p. 1). These concepts have been examined and incorporated in models designed to guide the communication of new technologies using various methods or strategies (e.g., Berwick, 2003; Gotham, 2004; Greenhalgh, Robert, MacFarlane, Bate, & Kyriakidou, 2004). All of these models acknowledge the importance of understanding and enhancing the methods by which new knowledge can be conveyed and incorporated for routine application.

Researchers within industrial-organizational psychology have offered conceptual frameworks for transferring knowledge, which vary in their complexity (for an overview see Machin, 2002), though many summarize the transfer process into three time frames: what happens before, during, and after training. Ford and colleagues maintain that three factors impact learning, retention, generalization, and maintenance of skills: trainee characteristics (before), training design (during), and work environment or organizational setting (after; Baldwin & Ford, 1988; J. K. Ford & Weissbein, 1997). Trainee characteristics include factors such as previous knowledge and skill, ability, motivation, self-efficacy, and personality. Training design factors are the focus of this review and include the structure and format of training, incorporation of learning principles into training, sequencing of training, and the job relevance of training content. Work environment factors include constraints and opportunities to use the trained skills, support from supervisors and peers, and organizational culture and climate. Successful transfer of training to the work setting is not solely determined by any one factor. Instead, transfer of training is a complex, multi-level process. This review will focus on training design as one part of a larger process because the successful transfer of psychosocial innovations, including EBTs, from university or research clinics to community clinics will require an understanding of the effectiveness of current training methods for assisting post-graduate professionals to implement new treatment approaches.

The Role of Training in EBTs

Community therapists and administrators have acknowledged a need for initial training as well as ongoing support, consultation, and supervision in EBT, but, at the same time, acknowledge a lack of time, support, and opportunities for learning new skills (Essock et al., 2003; Herschell, Kogan, Celedonia, Gavin, & Stein, 2009). Concerns have also been raised about the relevance and utility of existing educational programs for professional psychologists (see Sharkin & Plageman, 2003). Recent guidelines for training practitioners in the use of EBTs emphasize the importance of using specialized techniques designed to engage, train, and support practitioners in their use of new technologies, such as review of treatment manuals, exposure to intensive didactic training workshops, and participation in closely supervised training cases (Bruns et al., 2008; Hyde, Falls, Morris, & Schoenwald, 2003). In support of this objective, the goals of this review are to characterize the state of the literature on training therapists in psychotherapy and to offer recommendations to improve research in this area.

Previous Reviews and Context for Present Review

Previous reviews of therapist training studies have indicated that there is little empirical evidence to confirm that training improves therapist skills to a level that results in competent administration of treatment (e.g., Alberts & Edelstein, 1990; J. D. Ford, 1979). Instead, skill acquisition is assumed rather than confirmed (Alberts & Edelstein, 1990; J. D. Ford, 1979). Just as Alberts and Edelstein (1990), who reviewed studies from 1979 to 1990, picked up where Ford (1979) left off (he reviewed 1960 to 1978), the current review begins where Alberts and Edelstein (1990) left off and continues through February 2010.

Alberts and Edelstein's (1990) review included studies divided into two clusters: training in "traditional," process-related skills (e.g., empathy, attending, open-ended questions) and training in "complex verbal skill repertoires" (e.g., clinical assessment, case conceptualization). Participants most often were graduate students in clinical, counseling, or school psychology, and techniques were studied within the context of the larger graduate training program. Reviewed studies focused on training discrete skills. A combination of didactic instruction, modeling, feedback, and practice (rehearsal) was important for skill acquisition. Methodological flaws originally noted by Ford (1979) continued to be identified as problematic in the Alberts and Edelstein (1990) review, including a lack of validated dependent variables and control groups, as well as little attention devoted to interactions among therapist characteristics, target behaviors, training techniques, and instructors' credentials. Additional concerns included the use of single-method measurement strategies, a lack of in vivo skills assessments, limited external validity of skill assessments, and no follow-up evaluations to assess maintenance.

More specific reviews will be highlighted as they apply to particular areas of this review. For example, Miller and Binder (2002) completed a review focused on training issues related to the use of treatment manuals, and Vandecreek and colleagues (1990) completed a review of psychology continuing education. Stein and Lambert (1995) reviewed the literature on the impact of graduate training in psychotherapy on therapist behavior and patient outcome. In that same year, Holloway and Neufeldt (1995) reviewed research on clinical supervision. Similar to Alberts and Edelstein (1990), this review is meant to provide a comprehensive evaluation of the broad therapist training literature. Just as treatment techniques have advanced in the last 15 years, training strategies have expanded, likely due, in part, to increased interest in the dissemination of EBTs to community settings. This expansion is reflected in the current review by the inclusion of studies of community-based clinicians rather than the graduate-level trainees included by Alberts and Edelstein (1990).

Methods

Search Strategy

Relevant studies were identified using four computer-assisted searches: Psychological Abstracts (PsycINFO), the Educational Resources Information Center (ERIC) database, the Social Science Citation Index (SSCI), and Medline. Several keywords and author names were used as search terms, such as adherence, community, community clinician, community therapist, continuing education, dissemination, empirically supported, evidence-based, evidence-based treatment, fidelity, implementation, interventions, mental health, psychotherapy training, substance abuse, training, transporting, and workshop. Broad terms were used in an effort to be as inclusive as possible. Similar to Greenhalgh, Robert, MacFarlane, Bate, and Kyriakidou (2004), a snowball method (searching references of included articles) produced a large number of valuable citations that were not obtained by other search mechanisms. Multiple studies were identified with the snowball method within specific areas, such as substance abuse, training community psychiatric nurses to deliver psychosocial interventions for patients with schizophrenia and their families, residential care facilities for persons with mental retardation, and motivational interviewing. Extreme care was taken to be as thorough as possible; however, some studies may have been missed, in part because of the lack of consistency in terms used to describe training-related constructs and the diversity of training studies. For example, some studies included in this review were part of a larger implementation effort, so that training-related hypotheses were one of multiple questions addressed (e.g., Hunter et al., 2005; Squires, Gumbley, & Storti, 2008). Also, training studies were found across a variety of treatment areas (e.g., child, adult, substance abuse, mental health), which complicated the search.

Inclusion and Exclusion Criteria

Given that the aim of this review was to understand therapist training within the mental health field from 1990 (the publication date of the Alberts & Edelstein review) through February 2010, we included studies published in that time frame that focused on training mental health providers (e.g., social workers, psychologists, psychiatrists, nurses, support staff) in a mental health intervention (e.g., Cognitive Behavior Therapy [CBT], Dialectical Behavior Therapy [DBT], psychoeducation) for a clinical population (e.g., DSM-IV diagnosis, substance abuse/addictions, child maltreatment). Studies focused on training other professionals to apply mental health techniques to a general population were excluded (e.g., teacher training in problem solving for a general classroom population). Similarly, given the focus on skill-building, studies of training directed toward mental health policies and practices in schools (e.g., in-service training to facilitate classroom inclusion; Johnson & Knight, 2000) were excluded. Studies that included only medical residents, primary care physicians, or graduate students also were excluded, given that the intent was to characterize the training of community-based mental health providers. Considering that the focus of this review was on training (not treatment), we included studies that implemented any psychosocial treatment, regardless of its evidence base. Finally, we included only published empirical studies. Unpublished dissertations, conceptual articles, and recommendation papers were excluded.

Classification Criteria

To provide a measure of methodological rigor, studies in this review have been classified according to Nathan and Gorman’s (2002, 2007) criteria, similar to a special section in the Journal of Clinical Child and Adolescent Psychology (2008, Volume 37, Number 1). Nathan and Gorman developed these criteria as a tool for evaluating the methodological rigor of research studies. Studies are classified as one of six types.

Type 1 studies are the most rigorous. They involve a randomized, prospective clinical trial. These studies use comparison groups, random assignment, blind assessments, clear inclusion and exclusion criteria, state-of-the-art diagnostic methods, sufficient sample size and power, and clearly described statistical methods. Type 2 studies are clinical trials in which an intervention is made, but some aspects of the Type 1 requirements are missing. They have some significant flaws (e.g., no blind assessment, lack of random assignment, short period of observation) but are not necessarily fatally flawed. Type 2 studies do not merit the same consideration as Type 1 studies, but often do provide useful information. Type 3 studies are clearly methodologically limited. This group includes studies that are uncontrolled or that use pre-post or retrospective designs. Type 3 studies are often aimed at providing pilot data and include case-control studies, open treatment studies, and retrospective data collection. Type 4 studies are reviews with secondary data analysis, such as meta-analyses. Type 5 studies are reviews without secondary data analysis. Type 6 studies include case studies, essays, and opinion papers (Nathan & Gorman, 2002, 2007). Only Types 1 through 3 were included in this review.

To ensure accuracy of classifications, two independent raters coded each study. Classification agreement was reached for 89% of the studies (49 of 55). When an inconsistency was noted (6 of the 55 studies; 11%) or questions arose, the study was reviewed and discussed, and consensus was reached with a third rater in order to classify it correctly. Similarly, to ensure accuracy of the information presented in the tables, each study was reviewed and coded by the primary author. Afterward, a second, independent rater checked 100% of the table information. When inconsistencies were noted, each was reviewed and discussed, and consensus was reached.

Results

Summary of the Literature: Designs and Types of Investigations

Fifty-five studies evaluating training techniques or methods were identified. Methodologies of these studies were diverse, and included both quantitative and qualitative designs ranging from single-subject designs to randomized controlled trials. As is demonstrated in Tables 1 through 6, each of which includes studies of a different training method (i.e., written materials [Table 1], self-directed training techniques [Table 2], workshops [Table 3], workshop follow-ups [Table 4], pyramid training [Table 5], multicomponent training packages [Table 6]), only a few of the 55 studies incorporated what would be considered rigorous methodologies. For example, 14 (25%) studies used a group comparison design, 29 (53%) used a pre/post design, 5 (10%) used a single-subject design, and 5 (10%) used a survey design. Two (4%) studies used alternative designs (e.g., time series). Nine of the 14 (64%) comparison studies used random assignment. Of all the included studies, 19 (35%) included a follow-up, 24 (44%) utilized a multi-method assessment strategy, and 26 (47%) included standardized measures. In addition, study sample sizes ranged from 6 to 3,558 depending on the methodology employed (e.g., single subject versus survey research) and typically were small for group comparison studies (i.e., approximately 20 per group). According to Nathan and Gorman's (2002, 2007) classification system, only 6 (11%) studies were considered Type 1; 20 (36%) were considered Type 2, and 29 (53%) were considered Type 3. Similar to problems noted in previous reviews (e.g., Alberts & Edelstein, 1990; J. D. Ford, 1979), these studies suffered from several limitations, including (a) a lack of control groups, (b) no measurement of training integrity, (c) poor measurement methods, (d) short follow-ups, and (e) a lack of in vivo assessments. Additional methodological limitations included a lack of random assignment, standardized measurement, follow-up assessments, and patient-level data.
Consistent with our focus on training community therapists to use an EBT, diverse treatment approaches are included, focusing, for example, on substance abuse treatment for adults, motivational interviewing, residential care for persons with mental retardation, psychosocial and CBT interventions for patients with schizophrenia and their families, and time-limited dynamic psychotherapy.

Table 1.

Summary of Studies Examining the Effectiveness of Written Materials (e.g., treatment manuals) as a Training Method

Columns: Nathan & Gorman (2002) criteria (type) | Author(s) | Sample | Design (random assignment; number and primary comparison groups; follow-up) | Measurement method (domain; type; standardized measures with psychometrics) | Findings & comments

1 Dimeff et al. (2009) n = 174 drug and mental health treatment providers Group Comparison Yes 3:
1. Manual
2. Online training
3. Instructor led 2-day workshop
1 and 3 months (30 and 90 days) A, C, I, K, S, Sat BO, SR No 2>3>1
Relative to other conditions, those who read the treatment manual had smaller improvements in knowledge, self-efficacy, competence, and adherence and lower satisfaction
3 Ducharme & Feldman (1992), study 1 n = 9 direct care staff Single-subject multiple baseline NA 0 6 months G, S BO No Written material had little effect on skill
2 Kelly et al. (2000) n = 74 AIDS Service Organizations directors Group comparison Yes 3:
1. Manual,
2. Manual+2-day workshop,
3. Manual+2-day workshop + follow-up consultation
6 and 12 months O, P SR No Condition 1 resulted in the least frequent adoption of the intervention; Condition 3 resulted in more frequent adoption of the intervention
3>2>1
3 Rubel et al. (2000) n = 44 clinicians and researchers Pre/Post No 0 None K, S SR Yes No differences in those who read versus did not read the treatment manual
2 Sholomskas et al. (2005) n = 78 full-time, substance abuse counselors Group Comparison No 3:
1. Manual,
2. Manual + website,
3. Manual + didactic seminar + supervised casework
3 months F, I, K, P, S BO, SR Yes Slight improvements in knowledge, adherence, and skill after reading; improvements did not reach criterion mastery levels
3>2>1

Note. Measurement domains: A = Attitudes, C = Confidence, F = Treatment fidelity or adherence, G = Generalization, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, O = Organizational resources and characteristics, P = Practices or techniques used, S = Skills/competence, Sat = Satisfaction/acceptability. Measurement types: BO = Behavior observation, SR = Self-report.

Table 6.

Summary of Studies Examining the Effectiveness of Multicomponent Training Packages as a Training Method

Columns: Nathan & Gorman (2002) criteria (type) | Author(s) | Sample | Design (random assignment; number and primary comparison groups; follow-up) | Measurement method (domain; type; standardized measures with psychometrics) | Findings & comments

3 Bein et al. (2000) n = 16; psychiatrists (8) & psychologists (8) Pre/Post No 0 None F BO Yes Majority of therapists did not achieve basic competence in TLDP
3 Brooker & Butterworth (1993) n = 10 community psychiatric nurses Pre/Post No 0 None A, S BO, SR Yes No significant differences
3 Classen et al. (1997) n = 24 therapists from 10 oncology centers and 2 university medical centers Pre/Mid/Post No 0 None K SR No Improvement in written responses for all 4 dimensions of the model: affect, personalization, coping, and group interaction
1 Crits-Christoph et al. (1998) n = 65 therapists; 202 Patients with cocaine dependence Group Comparison Yes 3:
1. Cognitive Therapy,
2. Supportive-Expressive Dynamic Therapy,
3. Individual Drug Counseling
None K, S BO, PR, SR Yes Adherence and competence increased for all groups
1>2=3;
only cognitive therapy demonstrated learning that carried over from training case to training case
3 Gamble et al. (1994) n = 12 community-based mental health nurses Pre/Mid/Post No 0 None A, K, S SR No Improvements in knowledge of and attitudes towards family work and schizophrenia
3 Henry, Strupp et al. (1993) n = 16 psychiatrists (8) and psychologists (8); 84 patients Pre/Post No 0 None S, T BO Yes Increased adherence to technical intervention
3 Hughes & Halek (1991) n = 24 psychiatric nurses Survey No 0 None C, K SR No Overall had a greater understanding of psychotherapy and greater confidence that they were competent with individuals and groups
2 Hunter et al. (2005) n = 16 substance abuse treatment staff Pre/Post No 2:
1. 3 intervention sites,
2. 2 comparison sites selected with similar client flow rates
1 year A, K, M, Sat (with job) SR Yes Due to low n, statistical analyses were not completed. Knowledge, attitudes, job satisfaction, and morale improved over time for the intervention, but not the comparison group
3 Hurley et al. (2006) n = 221 direct care staff Pre/Post No 0 None F, K, P, S BO, SR No 79% of staff completed all training sessions and scored 80% or higher on knowledge and skill tests; 30% reduction in critical incidents for youth post-implementation
3 Jameson et al. (2007) n = 38 mental health professionals Survey No 0 None A, K, S, Sat SR No Positive results for: change in therapeutic practice (especially in client-therapist relationship and intrapersonal dynamics)
Positive results also reported for therapeutic effectiveness; small groups better promoted skill development in comparison to readings and lectures
3 Lancashire et al. (1997) n = 12 community psychiatric nurses, 33 patients Ongoing assessment for nurses, Pre/Post for patients No 0 None Cl, S BO, SR Yes Patients reported improvements in total symptom ratings and in positive and affective symptoms. No change was noted in negative symptoms.
3 Leff & Gamble (1995) n = 43 community mental health nurses Pre, 2 month, post No 0 None A, K SR Yes Nurses increased knowledge and positive attitudes about schizophrenia
1 Lochman et al. (2009) n = 49 school counselors, 531 children Group Comparison Yes 3:
1. enhanced training,
2. basic training
3. comparison
None Cl, I BO, SR Yes The more intensive the clinician training, the better the outcomes for children treated by the trained school counselors
1>2>3
3 Milne et al. (1999) n = 20 psychologists, psychiatrists, and mental health nurses; 20 patients Pre/During/Post training (clinicians) and therapy (patients) No 0 None Cl, G, S BO, SR Yes Statistically significant increase in therapist competence following training and patient improvement in coping
3 Milne et al. (2000) n = 48 mental health practitioners Pre/Post 3-month follow-up No 0 3 months C, G, K, S, Sat BO, SR Yes High satisfaction reported with the workshop format; skill and knowledge improved from pre to post, generalization across clients, time, and setting were reported by participants.
2 Milne et al. (2003) n = 25 mental health professionals Group Comparison, Focus Group No 2:
1. Training
2. Waitlist control Group
None G, I, K, S, Sat BO, PR, SR Yes High satisfaction reported with training content, process, and trainers
Significant improvement in paper and pencil knowledge test and video-presented skill tests; generalization 6–9 months after training reported
2 Morgenstern et al. (2001) n = 29 front- line substance abuse counselors Group Comparison Yes 2:
1. CBT Training,
2. Control group
None A, C, F, S, Sat BO, SR Yes Counselors responded well to the CBT content and manualized format of the training; adequate skill levels were reached
1>2
3 Moss et al. (1991) n = 11 therapists 2-year Follow-up No 0 2 year F, S BO No Training gains maintained over 2 years.
2 Myles & Milne (2004) n = 90 mental health professionals Pre/Post with 3-month pre-training baseline No 2:
1. CBT Training,
2. Participants served as their own controls by having double baseline assessment
3 month G, I, K, Sat SR Yes High satisfaction reported with the acceptability and effectiveness of training
Knowledge increased according to written and video completed assessments
Maintained use of CBT techniques at 3 month follow-up
3 Ryan et al. (2005) n = 137 nurses Survey No 0 None K, P, S, Sat SR No Experience of and satisfaction with the course were rated high
Nurses reported enhancement of general and behavior therapy skills
2 Strosahl et al. (1998) n = 18 therapists, and 321 patients Group Comparison No 2:
1. Training
2. No training
None Cl, Sat, T PR, SR No Clients of trained clinicians were more likely to finish treatment, agree with the clinician and report better clinical outcomes

Note. Measurement domains: A = Attitudes, C = Confidence, Cl = Clinical outcome, F = Treatment fidelity or adherence, G = Generalization, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, M = Job morale, OR = Organizational readiness for change, P = Practices or techniques used, S = Skills/competence, Sat = Satisfaction/acceptability, SP = Supportive practices, T = Therapeutic interaction/rapport/working alliance. Measurement types: BO = Behavior observation, PR = Patient report of therapist behavior, SR = Self-report.

Table 2.

Summary of Studies Examining the Effectiveness of Self-directed Training Techniques (e.g., computer assisted, video review) as a Training Method

Columns: Nathan & Gorman (2002) criteria (type) | Author(s) | Sample | Design (random assignment; number and primary comparison groups; follow-up) | Measurement method (domain; type; standardized measures with psychometrics) | Findings & comments

1 Dimeff et al. (2009) n = 174 drug and mental health treatment providers Group Comparison Yes 3:
1. Manual
2. Online training
3. Instructor led 2-day workshop
1 and 3 months (30 and 90 days) A, C, I, K, S, Sat BO, SR No 2>3>1
Though sometimes comparable to other conditions, findings favored online training, with the highest effect sizes in knowledge, competence, and adherence at post and 90-day follow-up
1 Miller, Yahne et al. (2004) n = 140 substance abuse counselors Group Comparison Yes 5:
1. Workshop,
2. Workshop + practice feedback,
3. Workshop + individual coaching sessions,
4. Workshop + feedback + coaching,
5. Waitlist control of self-guided training
4 and 8 months K, P, S BO, SR Yes Manuals and videotape insufficient for behavior change; ongoing support is necessary for maintenance of gains

4>2>3>1>5
1 Moyer et al. (2008) n=129 behavioral health providers Group Comparison Yes 3:
1. Workshop
2. Workshop + feedback + consult calls
3. Waitlist control of self-guided training
4, 8, and 12 months S BO Yes The addition of feedback and consult calls to the workshop did not result in greater performance; skills declined by the 4 month follow-up; Self-directed techniques did not result in skill improvement.
(1=2)>3
3 National Crime Victims Research and Treatment Center (2007) n = 3,558 mental health professionals Pre/Post No 0 None K, Sat SR No 36.3% overall increase in knowledge; high satisfaction
2 Sholomskas et al. (2005) n = 78 full-time, substance abuse counselors Group Comparison No 3:
1. Manual,
2. Manual + website,
3. Manual + didactic seminar + supervised casework
3 months F, I, K, P, S BO, SR Yes Website training offered a cost-effective strategy, but it was only slightly more effective than the manual only condition

3>2>1
3 Suda & Miltenberger (1993) n = 6 group home staff; 11 patients with moderate to severe mental retardation Single subject multiple baseline NA Conditions: Baseline, Instruction and goal setting, Self-management, Self-management and feedback None F, G, Sat BO, SR Yes Instruction and goal setting insufficient; self-management sufficient for 4 staff whereas the remaining 2 staff needed feedback
3 Worrall & Fruzzetti (2009) N = 56 therapists Post-training clinician survey No 0 None Sat SR No Participants were able to use the technology and reported high satisfaction with its usefulness.

Note. Measurement domains: A = Attitudes, C = Confidence, F = Treatment Fidelity or adherence, G = Generalization, I = Implementation Difficulty or Barrier – Anticipated or Actual, K = Knowledge, S = Skills/competence, Sat = satisfaction/acceptability; Measurement Types: BO = Behavior observation, SR = Self-report.

Table 3.

Summary of Studies Examining the Effectiveness of Workshops as a Training Method

Nathan & Gorman (2002) Criteria Author(s) Sample Design Measurement Method Findings & Comments

Type Random Assign # and Primary Comparison Group(s) Follow-up Domain Type Standardized Measures with psychometrics

3 Anderson, & Youngson (1990) part 1 n = 40 senior clinical staff Pre/Post No 0 None A, K SR No Increased knowledge, no change in attitude
1 Baer et al. (2009) n = 144 community counselors Group Comparison Yes 2:
1. 2-day workshop
2. context tailored training
3 months K, OR, S, Sat, SP BO, SR Yes High attendance at workshops; Few differences in groups: equal skill improvement and satisfaction; 2-day workshop less costly.
3 Baer et al. (2004) n = 22 clinicians Pre/Post No 0 2 months S BO, SR Yes Statistically significant gains in skill from pre to post; some gains maintained at follow-up (not all); only 8 clinicians considered proficient at follow-up.
3 Byington, et al. (1997) n = 50 rehabilitation and general counselors Pre/Post No 0 None K, S SR Yes Statistical significance only on knowledge, not application measures
2 Chagnon et al (2007) n = 78 helpers serving youth clientele Group Comparison Yes 2:
1. Training,
2. Control
6 months A, K, S SR Yes 15% gain in knowledge and skills in training group immediately post- training; gains declined by 6 month follow-up
2 DeViva (2006) n = 60 mental health professionals and students Group Comparison Yes 2:
1. 3-hour workshop,
2. 6-hour workshop
None F, O, Sat BO, SR No No difference in 3-hour versus 6-hour workshop formats
Each associated with significant change in participant behavior assessed via role-play immediately post training.
1 Dimeff et al. (2009) n = 174 drug and mental health treatment providers Group Comparison Yes 3:
1. Manual
2. Online training
3. Instructor led 2-day workshop
1 and 3 months (30 and 90 days) A, C, I, K, S, Sat BO, SR No 2>3>1
A 2-day workshop resulted in self-efficacy, satisfaction, and demonstrated skills comparable to online training, though knowledge gains from online training were significantly higher than from the workshop
3 Freeman & Morris (1999) n = 12 CPS workers Pre/Post No 0 3 months K, S BO, SR No Training improved knowledge, but not skills
3 Gregoire (1994) n = 37 child welfare workers Post-test w/retrospective reporting for pre- No 0 None A, P SR No Positive impact on attitudes and self-report practice change
3 Jensen-Doss et al. (2007) n=66 mental health practitioners, 84 youth Pre/Post No 0 3 months A, P SR, Ch Yes Training improved therapist attitudes about the treatment, little impact on behavior
3 McVey et al. (2005) n = 3,315 health care practitioners and educators Pre/Post No 0 None C, K SR No Increases in perceived knowledge and confidence
3 Miller & Mount (2001) n = 24 counselors Pre/Post No 0 4 months A, F, I, P, S BO, SR Yes Training did not impact patient response
1 Miller, Yahne et al. (2004) n = 140 substance abuse counselors Group Comparison Yes 5:
1. Workshop,
2. Workshop + practice feedback,
3. Workshop + individual coaching sessions,
4. Workshop + feedback + coaching,
5. Waitlist control of self-guided training
4 and 8 months K, P, S BO, SR Yes Found initial improvements in clinician skill after workshop completion; skills decreased and were comparable to the waitlist control at 4 month follow-up

4>2>3>1>5
1 Moyer et al. (2008) n=129 behavioral health providers Group Comparison Yes 3:
1. Workshop
2. Workshop + feedback + consult calls
3. Waitlist control of self-guided training
4, 8, and 12 months S BO Yes The addition of feedback and consult calls to the workshop did not result in greater performance; skills declined by the 4 month follow-up; Self-directed techniques did not result in skill improvement.

(1=2)>3
3 Neff et al. (1999) n = 837 providers of medical, psychiatric, nursing, and social services Pre/Post No 3:
1. Full-day (N=514)
2. Half- day(N=209), and
3. Brief (1–3 hrs) (N=114) workshops
None A, K, P SR No Dose effect evidenced in that longer trainings produced change
3 Oordt et al. (2009) n=82 mental health professionals Pre/Post No 0 6 months A, P SR No Increases in confidence and intent to change practice behavior were reported
3 Rubel et al. (2000) n = 44 clinicians and researchers Pre/Post No 0 None K, S SR Yes Knowledge and motivational interviewing statements increased over time
3 Russell et al. (2007) n = 175 Department of Defense/Department of Veterans Affairs clinicians; 72 clients recruited by 8 participating clinicians Post-training clinician survey; archival patient chart review No 0 None P, Sat SR, Ch Yes – patient measures High satisfaction with the training was reported by clinicians
Statistically significant improvements were reported for 4 of 4 patient measures; chart review was completed by 8 clinicians who had participated in training on their own clients
3 Saitz et al. (2000) n = 87 clinicians Pre/Post; Follow-up survey No 0 Up to 5 years A, C, K, P, Sat SR No In the follow-up interview, high levels of confidence, satisfaction, and desirable treatment practices were reported. In the pre/post comparisons, no changes were evidenced in knowledge or confidence; attitudes were slightly higher, though no statistical test was used

Note. Measurement domains: A = Attitudes, C = Confidence, F = Treatment Fidelity or adherence, I = Implementation Difficulty or Barrier – Anticipated or Actual, K = Knowledge, O = openness to learning, OR = Organizational Readiness for Change, P = Practices or techniques used, S = Skills/competence, Sat = satisfaction/acceptability, SP = Supportive practices; Measurement Types: BO = Behavior observation, Ch = Chart Review, SR = Self-report.

Table 4.

Summary of Studies Examining the Effectiveness of Workshop Follow-ups (observation, feedback, consultation, coaching) as a Training Method

Nathan & Gorman (2002) Criteria Author(s) Sample Design Measurement Method Findings & Comments

Type Random Assign # and Primary Comparison Groups Follow- up Domain Type Standardized measures with psychometrics

3 Ducharme & Feldman (1992) study 2 n = 7 staff members Single-subject multiple baseline NA 0 None G, S BO No General case training produced criterion levels of generalization
3 Hawkins & Sinha (1998) n = 109 clinicians Pre/Post No 0 None K SR No The best predictors of knowledge were (highest first): reading, peer support/consultation, study group attendance, time spent applying the treatment; expert consultation became more important as training progressed
2 Kelly et al. (2000) n = 74 AIDS Service Organizations directors Group Comparison Yes 3:
1. Manual,
2. Manual+2-day workshop,
3. Manual+2-day workshop + follow-up consultation
6 and 12 months P, O SR No Condition 3 resulted in more frequent adoption of the intervention

3>2>1
1 Miller, Yahne et al. (2004) n = 140 substance abuse counselors Group Comparison Yes 5:
1. Workshop,
2. Workshop + practice feedback,
3. Workshop + individual coaching sessions,
4. Workshop + feedback + coaching,
5. Waitlist control of self-guided training
4 and 8 months K, P, S BO, SR Yes Addition of feedback and/or coaching improved retention of proficiency

4>2>3>1>5
2 Milne et al. (2002) n = 56 nurses, case managers, social workers, occupational therapists Group Comparison No 2:
1. Standard training (n=45),
2. Relapse prevention (n=11)
2 months A, I, K SR No Relapse prevention group evidenced greater training transfer than control group

2>1
1 Moyer et al. (2008) n=129 behavioral health providers Group Comparison Yes 3:
1. Workshop
2. Workshop + feedback + consult calls
3. Waitlist control of self-guided training
4, 8, and 12 months S BO Yes The addition of feedback and consult calls to the workshop did not result in greater performance; skills declined by the 4 month follow-up; Self-directed techniques did not result in skill improvement.
(1=2)>3
3 Parsons & Reid (1995) n = 10 residential staff supervisors Single-subject multiple baseline NA 0 None A, S BO, SR No Supervisor feedback enhanced maintenance of staff members’ teaching skills
2 Parsons et al. (1993) n = 13 direct care staff Pre/Post No 0 None K, S, Sat BO, SR No Improvement in knowledge with observation and feedback; improvement in skill with extended feedback; when skills were applied within the existing client program, clients made gains in skill acquisition
3 Schoener et al. (2006) N = 10 community clinicians; 28 clients with co-occurring mental health and substance use disorders Pre/Post with Multiple data points – interrupted time series design No 0 None S BO Yes Increases in responses consistent with motivational interviewing; however, motivational interviewing proficiency was lower than in more controlled studies (Baer et al., 2004; Miller et al., 2004)
2 Sholomskas et al. (2005) n = 78 full-time, substance abuse counselors Group Comparison No 3:
1. Manual,
2. Manual + website,
3. Manual + didactic seminar + supervised casework
3 months F, I, K, P, S BO, SR Yes 3>2>1

Note. Measurement domains: A = Attitudes, F = Treatment Fidelity or adherence, G = Generalization, I = Implementation Difficulty or Barrier – Anticipated or Actual, K = Knowledge, O = Organizational Resources and Characteristics; P = Practices or techniques used, S = Skills/competence, Sat = satisfaction/acceptability; Measurement Types: BO = Behavior observation, SR = Self-report.

Table 5.

Summary of Studies Examining the Effectiveness of Pyramid Training as a Training Method

Nathan & Gorman (2002) Criteria Author(s) Sample Design Measurement Method Findings & Comments

Type Random Assign Follow-up # and Primary Comparison Groups Domain Type Standardized measures with psychometrics

3 Anderson, & Youngson (1990) part 2 n = 40 managers and senior staff Pre/Post No None 0 A, K SR No Increased knowledge; no change in attitude
3 Demchak & Browder (1990) n = 6 group home supervisors and aides; 6 residents with profound mental retardation Single subject multiple baseline NA None 0 S BO No Seems as if training effect was “watered down” from supervisor to staff
3 Shore et al. (1995) n = 8 Supervisors and direct care staff for state residential facility for persons with MR; 6 patients Single-subject multiple baseline NA None 0 S BO No Improvements noticed in client behavior following pyramidal training intervention with supervisors

Note. Measurement domains: A = Attitudes, K = Knowledge, S = Skills/competence; Measurement Types: BO = Behavior observation, SR = Self-report.

Investigators have evaluated a variety of different training approaches, methods, and issues. Some have investigated the utility of specific training techniques such as workshops and computer-assisted training (e.g., S. E. Anderson & Youngson, 1990; Byington, Fischer, Walker, & Freedman, 1997; Caspar, Berger, & Hautle, 2004). Others have investigated the effectiveness of extended training curricula with multiple components (e.g., Bein, Anderson, Strupp, Henry, Schacht et al., 2000; Brooker & Butterworth, 1993; Henry, Strupp, Butler, Schacht, & Binder, 1993). Investigators also have examined whether individuals with diverse training backgrounds can implement mental health techniques (e.g., Brooker et al., 1994; Hawkins & Sinha, 1998) and the importance of treatment adherence and competence (e.g., Barber, Crits-Christoph, & Luborsky, 1996; Huppert et al., 2001; Multon, Kivlighan, & Gold, 1996).

What follows is a summary of the key details and results of studies that have been conducted to examine six different training methods. A summary of the specific details of the studies in each section is shown in Tables 1–6. A study was included in multiple tables if the study’s aims addressed more than one topic area. For example, Sholomskas et al. (2005) evaluated the effectiveness of written materials (Table 1), self-directed training (Table 2), and workshop follow-ups as training techniques (Table 4); therefore, this study was included in each of the three mentioned tables. The discussion section more definitively reviews the overall findings, key limitations, and practice and research implications of this literature.

Treatment Manuals and Written Materials

Description of Studies

Five studies were reviewed that focused on the utility of simply reading materials (e.g., a treatment manual; see Table 1). One of these studies was considered a Type 1 study; two were considered Type 2 studies, and two were considered Type 3 studies. Three of these studies were group comparisons in which sample sizes ranged from 74 to 174 (M = 109), two of which included random assignment. One study used a pre-post comparison, and the fifth used a single-subject design. Four of the five studies included follow-up assessments after training (Dimeff et al., 2009; Ducharme & Feldman, 1992; Kelly et al., 2000; Sholomskas et al., 2005). Studies examined a variety of assessment domains (e.g., knowledge, skills) using behavior observation (3 of 5 studies) and self-report (4 of 5 studies) methods. Two studies included standardized assessment measures that had acceptable psychometric ratings.

Summary of Findings

Despite variations in study quality, findings were consistent in demonstrating that reading treatment manuals and materials may be necessary, but not sufficient, for skill acquisition and adoption of a psychosocial treatment (e.g., Dimeff et al., 2009; Ducharme & Feldman, 1992; Kelly et al., 2000; Rubel, Sobell, & Miller, 2000). That is, these studies found that reading often resulted in knowledge changes, but the changes were short-lived and smaller than those of therapists participating in more intensive trainings (e.g., Sholomskas et al., 2005). Reading may be used as a “first step” to introduce information about a psychosocial treatment, but reading alone does not result in significant changes in skills or treatment mastery, as indicated in Table 1.

Limitations of Studies Reviewed

In the two (Type 2) group comparison studies, reading a manual was compared with training programs that differed both in the modality and number of hours in training, with additional hours being spent in more intensive trainings compared to reading (Kelly et al., 2000; Sholomskas et al., 2005). This pairing makes it difficult to tease out whether group differences were related to the modality or intensity (increased dose) of training. These studies also are limited in that participating therapists were often taking part in larger implementation efforts (e.g., Sholomskas et al., 2005), so it is not known if findings would generalize to other groups of therapists in potentially less innovative settings.

Self-Directed Training Techniques

Description of Studies

This category included seven studies that focused on an individual’s ability to acquire information or skills by independently interacting with training materials (e.g., computer, videotape review; see Table 2). Three studies were considered Type 1 studies; one was considered a Type 2 study, and three were considered Type 3 studies. Study sample sizes ranged from 6 (single-subject design) to 3,558 (survey design). Four studies included a group comparison (three with random assignment); one study included pre- and post-testing; one study included only post-testing; and the final study was a single-subject, multiple baseline design. Four of the seven studies included follow-up assessments (Dimeff et al., 2009; W. R. Miller, Yahne, Moyers, Martinez, & Pirritano, 2004; Moyers et al., 2008; Sholomskas et al., 2005). Assessment primarily focused on knowledge and skill using behavior observation and standardized, self-report methods.

Summary of Findings

As shown in Table 2, self-management strategies were rated favorably by learners (Worrall & Fruzzetti, 2009) and found to be a cost-effective method to increase knowledge (e.g., National Crime Victims Research & Treatment Center, 2007; Sholomskas et al., 2005). However, when stringent assessment methods were used, self-management was found to work only for some therapists (e.g., Suda & Miltenberger, 1993) and was only slightly more effective than reading written materials at improving knowledge (W. R. Miller et al., 2004; Sholomskas et al., 2005). One study that reported substantial knowledge increases (National Crime Victims Research & Treatment Center, 2007) also had substantial methodological flaws. The study relied on a single, study-developed knowledge questionnaire administered during the web-based instruction. No comparison, randomization, follow-up, or multi-method assessment was completed. Also, of the 9,149 people who began the web-based training, only 3,558 (39%) completed it; therefore, results should be interpreted cautiously. In contrast, Dimeff and colleagues’ (2009) findings favored online training, which produced the highest effect sizes for knowledge, competence, and adherence at post-training and 90-day follow-up in comparison to written materials or workshop training, a result partially attributed to the sophistication of the online training methods. Despite variations in methodological rigor, each of these studies demonstrated some improvements in discrete knowledge or skills; however, many acknowledged that these improvements were slight (e.g., Miller, Yahne et al., 2004) and that self-management strategies cannot be used as a solitary training technique. Instead, they are better used within a larger training curriculum that involves expert consultation (Dimeff et al., 2009).

Limitations of Studies Reviewed

While the focus of these studies was similar (i.e., self-directed and motivated instruction), the specific training techniques differed. Dimeff and colleagues (2009) reported on an interactive online training experience for Dialectical Behavior Therapy. Miller and colleagues (2004) focused on the use of reading a treatment manual supplemented with videotape review. The National Crime Victims Research and Treatment Center (2007) focused on the utility of a web-based training program for Trauma-Focused Cognitive Behavioral Therapy. Sholomskas et al. (2005) focused on the utility of a web-based training program for Cognitive Behavioral Therapy, and Suda and Miltenberger (1993) studied the impact of instruction, goal setting, self-monitoring, self-evaluation, and self-praise on positive staff interactions with consumers. Worrall and Fruzzetti (2009) focused on participants viewing and rating mock Dialectical Behavior Therapy sessions. This variation allows for few conclusions about any one of these methods (e.g., web-based training, manual and videotape review) because there are not enough studies on any one topic. Representativeness also is questionable in that one study in particular (W. R. Miller et al., 2004) intended to determine whether training could change therapist behavior under optimal conditions; thus, therapists were well motivated and perhaps less representative of the larger group of therapists. Only one of these studies included patients (Suda & Miltenberger, 1993), and only four included follow-ups (Dimeff et al., 2009; W. R. Miller et al., 2004; Moyers et al., 2008; Sholomskas et al., 2005). The length of time between post-assessment and follow-up was short (1, 3, 4, 8, or 12 months). The inclusion of client outcomes as well as longer follow-ups would strengthen the methodology of these studies and any conclusions that could be drawn from them.

Workshops

Description of Studies

Nineteen studies were reviewed that included an examination of the effectiveness of workshops as a sole training technique (see Table 3). Four studies were considered Type 1 studies, two studies were considered Type 2 studies, and the remaining 13 studies were Type 3. Study sample size ranged from 12 to 3,315 (median = 50). Eleven studies included pre- and post-workshop testing. Another study included a post-test only with retrospective reporting of a pre-assessment. Each of the six group comparison studies randomly assigned participants to different training methods or to a control group. The final study relied on a clinician survey at the end of training as well as a review of patient charts. Eight of the nineteen studies (42%) included a follow-up assessment varying from 1 month to up to 5 years after training. Studies examined a variety of assessment domains (e.g., attitudes, knowledge, organizational readiness, practice, satisfaction) using predominantly self-report methods, many of which were not standardized (n = 8). Seven studies supplemented self-report with behavior observation measures; one study supplemented self-report with chart review. Interestingly, some of the studies that used behavior observation methods used simulated clients as part of their assessment strategy (Baer, Rosengren, Dunn, Wells, & Ogle, 2004; Baer et al., 2009; DeViva, 2006; Dimeff et al., 2009; Freeman & Morris, 1999; W. R. Miller & Mount, 2001b; W. R. Miller et al., 2004). Each study had participants interact with a simulated client (typically an actor); those interactions were audio- or video-recorded and later coded. This assessment strategy offers a practical, yet methodologically rigorous, method for behavior observation assessment of knowledge and skill acquisition as well as treatment adherence (Russell, Silver, Rogers, & Darnell, 2007). Another study included work samples (Moyers et al., 2008).

Summary of Findings

Similar to the research focused on medical and psychology continuing education (Davis et al., 1999; Davis, Thomson, Oxman, & Haynes, 1992), the studies reviewed herein demonstrated that workshop training formats generally do little to change behavior (Saitz, Sullivan, & Samet, 2000). Most often, workshop attendance resulted in increased knowledge, but not in significant changes in attitude, application of knowledge, or clinical skills assessed via behavior observation (S. E. Anderson & Youngson, 1990; Byington et al., 1997; Freeman & Morris, 1999; McVey et al., 2005; Rubel et al., 2000). Two studies (both Type 3) found that therapists reported improvements in practice (i.e., case worker assessment of substance abuse, assessment and treatment of suicidal behavior; Gregoire, 1994; Oordt, Jobes, Fonseca, & Schmidt, 2009). Other studies, using more stringent assessment methods, behavior observation (Miller & Mount, 2001) and chart review (Jensen-Doss, Cusack, & de Arellano, 2007), found that training did not impact patient response or provider behavior, respectively. Well-controlled studies by Miller, Yahne and colleagues (Type 1, 2004), Moyers and colleagues (Type 1, 2008), and Chagnon et al. (Type 2, 2007) found initial improvements in therapist skill (e.g., an increase in motivational interviewing statements) after completion of a workshop; however, over time, skills decreased and were comparable to those in the untrained therapist group four months post training (Table 3). Similarly, Baer and colleagues (2009) found that even though workshop participants’ gains were significantly better than those of a context-tailored training group at post, the groups were equivalent at follow-up.

Length of workshop training varied considerably, and there may be a relation between training time and training outcome. For example, Neff et al. (1999) found that one- to three-hour trainings were insufficient to produce changes in knowledge, practice, or attitude. Instead, longer trainings produced change, though four-hour sessions yielded the same benefit as a full-day training (Neff et al., 1999), suggesting a possible saturation point for participants. DeViva (2006) found no differences between 3- and 6-hour trainings on the same topic. Some have spent as many as 10 to 15 hours in workshop training with no change in behavior (Byington et al., 1997; Jensen-Doss et al., 2007). Recent training studies (e.g., W. R. Miller et al., 2004; Sholomskas et al., 2005) have shown that increases in skill and knowledge of motivational interviewing techniques may be present immediately following the workshop, but that without ongoing support (e.g., individualized feedback, continued consultation), gains can be reversed (Baer et al., 2009; W. R. Miller et al., 2004).

Limitations of Studies Reviewed

Each study examined the impact of workshop training; however, the techniques that were integrated into the workshop format varied (e.g., didactic vs. role-play), and the length of each training varied from 1 hour (Neff et al., 1999) to 15 hours (Byington et al., 1997; W. R. Miller & Mount, 2001a), complicating comparisons across studies. Similar to the other categories of studies, results were compromised by knowledge tests with unknown psychometric properties and ceiling effects (e.g., S. E. Anderson & Youngson, 1990), low response rates (S. E. Anderson & Youngson, 1990), and a lack of focus on the impact of training on consumers, with two notable exceptions (W. R. Miller & Mount, 2001b; Russell et al., 2007). Another study suffered methodologically from combining instrument development and assessment of knowledge and practice within the same study and reporting on both simultaneously (Byington et al., 1997).

Workshop Supplements (observation, feedback, consultation, coaching)

Description of Studies

Ten studies were reviewed that focused on the effectiveness of workshop follow-ups (see Table 4). Two studies were considered Type 1 studies, four studies were considered Type 2 studies, and four studies were considered Type 3 studies. Five studies used a group comparison method; two studies used a single-subject design; two studies used a pre-post test design; and one study used a pre-post, interrupted time series design. Three of the five group comparison studies included random assignment. Five of the 10 studies included a follow-up assessment, which varied in duration from 2 months (Milne, Westerman, & Hanner, 2002) to 12 months (Kelly et al., 2000). Studies examined a variety of assessment domains (e.g., knowledge, attitudes, practice, satisfaction) by means of behavior observation and predominantly non-standardized, self-report methods. In many cases, researchers took care to evaluate knowledge, skill, and practice using a multi-method strategy, including videotaped role-plays that were coded for skill acquisition (Sholomskas et al., 2005) and actual therapy sessions (Moyers et al., 2008; Schoener, Madeja, Henderson, Ondersma, & Janisse, 2006).

Summary of Findings

As highlighted in Table 4, active, behaviorally oriented training techniques like those employed in these studies (e.g., feedback, behavioral rehearsal/role-play, supervision) were found to be effective, particularly when used in combination (e.g., Miller et al., 2004). In two well-designed studies comparing workshops to workshops with additional consultation, the additional consultation resulted in more frequent adoption of an innovation (Type 2, Kelly et al., 2000) and improved retention of skill proficiency (Type 1, W. R. Miller et al., 2004). In contrast, Moyers and colleagues (2008) found no additional benefit to providing feedback and up to six consultation calls after providers had participated in a two-day workshop.

Ducharme and Feldman (Type 2, 1992) found that a “general case” training strategy, in which multiple examples are chosen and reviewed that represent nearly all possible client responses, produced criterion levels of generalization even without the use of other strategies. Timing also is important: Hawkins and Sinha (Type 3, 1998) found that expert consultation became more important once therapists had acquired a reasonable amount of knowledge. Parsons and Reid (Type 2, 1995) found that supervisor feedback enhanced maintenance of staff members’ teaching skills. Milne, Westerman, and Hanner (Type 2, 2002) examined the utility of a relapse prevention module and found that, in comparison to a group receiving standard training, the relapse prevention group demonstrated greater knowledge and generalization of training across behaviors and clients.

Limitations of Studies Reviewed

Many of the studies included in this category were designed with treatment implementation challenges in mind. Accordingly, they focused on inclusion of ‘real world’ therapists and compared various practical training methods. One caveat is that the included ‘real world’ therapists may not be representative of a more general therapist group because those included in these studies were reportedly highly motivated to learn the treatment. For example, in order to participate in the Miller, Yahne, and colleagues (2004) study and learn motivational interviewing, therapists had to travel to Albuquerque, New Mexico, for an initial training and then submit work samples of actual client counseling sessions. These study inclusion/exclusion criteria may have excluded more representative community therapists.

Pyramid Training (Cascade, Train-the-trainers)

Description of Studies

Training only supervisors or a select group of staff, who then train other staff, has been studied under different names, including the “Pyramid” (e.g., Demchak & Browder, 1990), “train-the-trainer,” and “Cascading” models (e.g., S. E. Anderson & Youngson, 1990). As shown in Table 5, three studies focused on this method, all of which were considered Type 3 studies; two used single-subject, multiple baseline designs, and one used pre- and post-training comparisons of knowledge (n = 40) without random assignment. None of these studies included follow-up assessments. The pre- and post-training study utilized non-standardized self-report measures to assess attitudes and knowledge (S. E. Anderson & Youngson, 1990). In contrast, the single-subject design studies used behavior observation methods to assess skills (Demchak & Browder, 1990; Shore, Iwata, Vollmer, Lerman, & Zarcone, 1995).

Summary of Findings

In a single-subject design study, Demchak and Browder (1990) evaluated the utility of training supervisors who were told to replicate the training with their staff. Both supervisors and staff improved their skills, and patient improvement was evident under both; however, staff did not evidence as many gains as their supervisors. In another single-subject design study, Shore and colleagues (1995) noted improvements in staff skill and decreases in client self-injurious behavior following a pyramidal training intervention with supervisors. They also concluded that training supervisors in addition to staff improves staff performance (Shore et al., 1995). Anderson and Youngson (1990) demonstrated increases in staff knowledge about sexual abuse after implementing a cascade training. Studies with large sample sizes and scientifically rigorous designs are needed to determine whether training only supervisors results in effective training for a broader group of practitioners and their clients.

Limitations of Studies Reviewed

Training a small number of staff seems to be a fairly common “real-world” practice, particularly considering how time- and cost-effective it can be; however, there currently are few data to support its use. Each of the three studies reviewed in this category should be considered a preliminary or pilot study. They were either single-subject design studies (Demchak & Browder, 1990; Shore et al., 1995) or severely methodologically flawed (S. E. Anderson & Youngson, 1990). Available data depicted in Table 5 suggest that there may be improvements in therapist knowledge (S. E. Anderson & Youngson, 1990) as well as client behavior (Shore et al., 1995); however, Demchak and Browder (1990) found better outcomes for supervisors than for their staff, which led them to comment that the training effect seemed to be “watered down” from supervisor to staff. Larger, more representative, and methodologically rigorous replications that include follow-up assessments are needed to confirm study results and the utility of this training method.

Multi-component Training Packages

Description of Studies

Twenty-one studies focused on the effectiveness of multi-component training packages (see Table 6) consisting of several training methods in one protocol. Components often include: 1) a treatment manual, 2) multiple days of intensive workshop training, 3) expert consultation, 4) live or taped review of client sessions, 5) supervisor trainings, 6) booster training sessions, and 7) completion of one or more training cases. The studies reported widely different sample sizes (10 to 221). Two studies were considered Type 1 studies, five were considered Type 2 studies, and 14 were considered Type 3 studies. Five studies were group comparisons, three of which randomized participants; twelve studies included pre- and post-testing; three studies involved surveys; and one study frequently assessed practitioners and used pre- and post-testing for patients. Four studies included follow-up assessments from 3 months to 2 years after training. Studies examined a variety of assessment domains (e.g., knowledge, attitudes, practice, satisfaction) using predominantly self-report and behavior observation methods.

Summary of Findings

Nineteen of the 21 studies reviewed demonstrated improvements in measured outcomes, as indicated in Table 6. However, it is difficult to generalize study findings given that each study included different training protocols (with different components and timelines) and assessed different constructs. For example, Morgenstern et al. (Type 1, 2001) found that counselors responded well to the CBT content and manualized format of the training and that adequate skill levels were reached. Henry, Strupp, et al. (1993) found that their year-long training program successfully changed therapists’ technical skills in Time-Limited Dynamic Psychotherapy: increases were observed in emphasis on the expression of in-session affect, exploration of the therapeutic relationship, an improved participant-observer stance, and use of open-ended questions. Similarly, Lochman and colleagues (2009) found that their more intensive training condition resulted in substantial treatment benefits for children treated by trained school counselors in comparison to less intensively trained counselors and a comparison condition. The two studies that showed little to no gains were Type 3 studies (Bein, Anderson, Strupp, Henry, Schacht et al., 2000; Brooker & Butterworth, 1993). In a year-long training with multiple components (reading, 100 hours of seminars, small group supervision, audio- and video-tape review) for Time-Limited Dynamic Psychotherapy, the majority of therapists did not achieve basic competence in the model (56% did not conduct a case with at least minimal skill; Bein, Anderson, Strupp, Henry, Schacht et al., 2000). In another study, community psychiatric nurses completing a six-month training evidenced little change in their attitudes about schizophrenia and their preference for behavior therapy over time (Brooker & Butterworth, 1993).

Limitations of Studies Reviewed

Only five of the 21 studies reviewed were group comparisons. The others (including pre-post testing and single-subject designs) do not control for therapist maturation effects, which might partly account for better post-training outcomes. The timing of assessment also might be important. Of the studies that conducted follow-up assessments, the longest follow-up was one year post-training, which limits our understanding of the maintenance of training gains and treatment sustainability. Similarly, it is unclear how long or how many cases might be needed for skills to be consolidated. As in other groups of studies, many of these have limited generalizability in that only therapists who were experienced in and interested in the approach being trained were included. For example, in one study (Crits-Christoph et al., 1998) therapists were selected to participate by supervisors on the basis of background education and training, letters of reference, and audiotaped samples of their work, which were rated for quality. Of those who applied for specific treatment trainings, as few as 50% were accepted.

Discussion

This empirical review of 55 studies evaluating six therapist training methods found differences in both the number of studies devoted to specific training methods and their respective effectiveness. Multiple studies have been conducted on multi-component training packages (21), workshops (19), and workshop follow-ups (9). Fewer studies have been completed on the utility of pyramid (train-the-trainer) models (3), reading (5), and self-directed trainings (7). Not only have multi-component training packages been studied most often, they also have most consistently demonstrated positive training outcomes relative to other training methods. Conversely, studies evaluating the utility of reading, self-directed trainings, and workshops have documented that these methods do not routinely produce positive outcomes. Workshop follow-ups help to sustain outcomes. Little is known about the impact of pyramid or train-the-trainer methods.

The literature is limited by a lack of methodological rigor and by multiple “pilot” studies characterized by small sample sizes, limited power, and the absence of comparison groups, random assignment, standardized assessment measures, and routine follow-up assessments. The inclusion of therapists who may not be representative of those providing services in community agencies also has compromised the conclusions that can be drawn within this area of investigation. Few follow-ups have been conducted, and those that have been conducted are generally of short duration. Patient outcomes also are rarely included in studies. Therefore, we are unable to understand treatment sustainability or the impact of training on patient outcomes. Despite significant methodological flaws, what follows is a brief summary of some key lessons learned from this research, including: a) the level of effectiveness for a variety of training methods, b) factors that appear to influence training outcome, c) methodological concerns, and d) recommendations for therapist training and training research.

Effectiveness of Different Training Methods

To date, the most common way to train community therapists in new treatment approaches like EBTs has been to ask them to read written materials (e.g., treatment manuals) or attend workshops. There is little to no evidence that either of these approaches results in positive, sustained training outcomes (i.e., increases in skill and competence). The most positive result of reading written materials was a slight increase in knowledge. Of the five studies reviewed that examined the utility of reading, none demonstrated significant behavior change or increases in competence. In fact, one study found no differences between those who read versus did not read a treatment manual (Rubel et al., 2000). In terms of workshop attendance, this review confirms what others (e.g., Davis et al., 1999; VandeCreek et al., 1990; Walters, Matson, Baer, & Ziedonis, 2005) have found: while workshop participants sometimes demonstrate increases in skills and (more often) knowledge, workshops are not sufficient for enabling therapists to master skills (Sholomskas et al., 2005), maintain skills over time (Baer et al., 2004; W. R. Miller et al., 2004), or impact patient outcome (W. R. Miller & Mount, 2001a).

Additional information is needed on the effect of self-directed training methods on therapist skills. For example, web-based trainings are cost-effective, convenient, and well liked by participants (National Crime Victims Research & Treatment Center, 2007); however, only a small amount of data supports their effectiveness (Dimeff et al., 2009). One study found a 36.3% increase in knowledge after completion of a web-based training (National Crime Victims Research & Treatment Center, 2007); however, this finding is based on a Type 3 study with significant methodological flaws. Using more rigorous methodology, Dimeff and colleagues (2009; Type 1) demonstrated increases in knowledge, competence, and adherence at post-training and 90-day follow-up using a sophisticated online learning method. In contrast, Sholomskas et al. (2005), a Type 2 study, found that web-based training was only slightly more effective than reading a treatment manual. There simply is not yet enough evidence to draw a conclusion about the utility of this training technique. Additional information on the interactive nature of the online method and other technologies (e.g., podcasts, archived webinars) will be important to gather, given their potential broad application.

Workshop follow-ups that included observation, feedback, consultation, and/or coaching have improved adoption of the innovation (Type 2; Kelly et al., 2000), retention of proficiency (Type 1; W. R. Miller et al., 2004), and client outcome (Type 2; Parsons, Reid, & Green, 1993), compared to workshops alone. Essentially, there does not seem to be a substitute for expert consultation, supervision, and feedback for improving skills and increasing adoption. The challenge is that these methods are resource intensive, as they require the availability of expert consultation, clinical supervisors, and therapist time, all of which are costly for community-based mental health agencies. The implementation field needs to determine: a) how to sequence learning activities to be cost-effective without compromising training and treatment outcome, and b) how to use technology more effectively. Participants report liking web-based training (e.g., National Crime Victims Research & Treatment Center, 2007); perhaps we can capitalize on technology to increase the availability of expert consultation. Additionally, utilizing cost-effective training methods initially might reduce the amount of expert consultation and supervision needed later. Hawkins and Sinha (1998) found that consultations appeared to be more effective for therapists with a reasonable amount of pre-training knowledge, but this result is tentative given the methodological flaws of this Type 3 study. If these results were replicated, one strategy might be for therapists to complete a web-based training prior to attending a workshop. Once competency standards for knowledge and skill were met, the therapist could participate in conference calls with an expert trainer and therapists from other agencies as a form of group supervision. Afterward, the therapist could receive individual supervision and expert consultation on selected cases.
This type of training approach might minimize costs and maximize the potential for skill acquisition by sequencing training activities, imposing competency standards, and utilizing internet technology.

Pyramid or train-the-trainer training methods also have the potential to be time- and cost-effective; however, this method has received the least rigorous examination, limited to only three studies (S. E. Anderson & Youngson, 1990; Demchak & Browder, 1990; Shore et al., 1995). The ultimate question that remains is whether, even if effects are watered down from supervisors to therapists, the improvements for consumers are still clinically meaningful. Chamberlain and colleagues currently are conducting a large-scale, prospective study examining the effects of using a cascading model to transfer components of Multidimensional Treatment Foster Care (NIMH Grant #060195) from a research setting (the Oregon Social Learning Center) to the foster care system in San Diego. Initially, the original developers of the model will train and supervise staff in San Diego to implement the model; in the second training iteration, the developers will have substantially less involvement. Similarly, Chaffin and colleagues are examining the utility of a cascading model for implementing in-home family preservation/family reunification services (NIMH Grant #001334). Providers from a well-trained model seed program will serve as implementation agents for sequential implementations at other agencies. Studies like these will contribute to a better understanding of the utility of cascading models as a training technique.

The familiar tone of Bickman’s (1996) observation that “more is not always better” resonates in studies examining the effectiveness of multi-component training packages. Of the twenty-one studies in this area, the large majority found positive training outcomes. However, two studies (Bein, Anderson, Strupp, Henry, Schacht et al., 2000; Brooker & Butterworth, 1993) found that therapists did not achieve even basic competence in the treatment approach after extended (e.g., year-long) training initiatives. One study (Crits-Christoph et al., 1998) found that only one of three therapies (CBT) demonstrated learning that carried over from training case to training case. This is somewhat disappointing given the substantial resources invested; however, it highlights the need to understand the utility of specific components of these training packages and the ease of training specific approaches.

Additional information is also needed on the approaches in which therapists should be trained. Chorpita and Weisz (e.g., Chorpita, Becker, & Daleiden, 2007) have focused on comparing the benefits of training therapists in a modular-based treatment versus individual EBTs, which will help to inform this area. As these authors have suggested, training therapists in one conceptual approach, rather than in multiple individual EBTs, may have broader applicability and be better received by therapists.

Influences on Training Outcome Not Included in this Review

This review focused on the training design component of Transfer of Training Models (Machin, 2002); that is, the focus was on outcomes of what happens during training. However, the two remaining components of the model, what happens before (therapist characteristics) and after (organizational setting) training, are equally important, a point highlighted by those implementing EBTs (Chaffin, 2006). For example, Bruns and colleagues (2008) maintain that a supportive organizational context and clinical supervisors who are trained to supervise EBTs are critical to the success of EBT implementation.

Therapist Characteristics

Therapist characteristics are often mentioned as key factors in treatment implementation and dissemination. After all, the characteristics of those who receive the training and provide the treatment could affect implementation on multiple levels, such as treatment competence (Siqueland et al., 2000) and client outcomes (Vocisano et al., 2004). Most EBTs have been developed by and for doctoral-level clinical professionals (e.g., clinical psychologists, psychiatrists) within defined theoretical orientations (e.g., behavioral, cognitive-behavioral). In contrast, community mental health centers employ primarily masters-level therapists to provide most of the mental health therapy (Garland, Kruse, & Aarons, 2003; Weisz, Chu, & Polo, 2004). Therapists report that their theoretical orientation is “eclectic” (e.g., Addis & Krasnow, 2000; Kazdin, 2002; Weersing, Weisz, & Donenberg, 2002) and that they value the quality of the therapeutic alliance over the use of specific techniques (Shirk & Saiz, 1992).

Small sample sizes and a lack of random assignment hinder our ability to determine the degree to which therapist characteristics are important and which characteristics in particular need to be addressed by trainers. Therapists are a diverse group with different learning histories, training backgrounds, and preferences. Understanding more about how to tailor training to maximize learning outcomes for diverse groups will be an important academic endeavor. Studies that randomly assign therapists to different training conditions could control for characteristics that are common in research studies, such as high motivation and interest in the treatment approach, while examining factors that could be addressed, such as knowledge, caseload size, and supervisor support, each of which has been suggested to influence training outcomes. Examining therapist characteristics seems to be a missed opportunity within the existing research. Much more could be learned if researchers conducted studies of therapists or, at a minimum, included moderator analyses in their existing implementation studies.

Organizational Factors

Organizational difficulties are commonly cited in discussion sections and conceptual papers as challenges that have to be overcome in order to implement EBT (e.g., Bailey, Burbach, & Lea, 2003; Fadden, 1997); however, organizational factors are seldom studied. When they are examined, it seems that they are sometimes included at the end of a study to potentially account for findings (e.g., post study interviews; W. R. Miller et al., 2004; Schoener et al., 2006). Also missing in this literature are multiple studies on how organizational interventions (Glisson & Schoenwald, 2005) could be used to enhance implementation successes. This may be an emerging area of study (e.g., Glisson et al., 2008; Gotham, 2006; Schoenwald, Chapman et al., 2008).

Glisson and colleagues (Glisson, Dukes, & Green, 2006) developed the Availability, Responsiveness, and Continuity (ARC) organizational intervention strategy to improve services in child welfare and juvenile justice systems, which is now being used to support the implementation of Multisystemic Therapy in rural Appalachia (Glisson & Schoenwald, 2005). Similarly, the Addiction Technology Transfer Center of New England has implemented an organizational change strategy, Science to Service Laboratory, in 54 community-based substance abuse treatment agencies in New England (Squires et al., 2008) since 2003.

Clinical Supervision

Finding appropriate training and supervision has been cited as a primary barrier to the dissemination of EBT (Connor-Smith & Weisz, 2003; Essock et al., 2003). Extensive reviews have been completed on the methodological limitations of research on clinical supervision (Ellis, Ladany, Krengel, & Schult, 1996) as well as the efficacy of supervision in training therapists (Holloway & Neufeldt, 1995). The links between clinical supervision and therapist efficacy and treatment adherence have rarely been studied (Ellis et al., 1996; Lambert & Ogles, 1997), with a few notable exceptions (e.g., Henggeler, Melton, Brondino, Scherer, & Hanley, 1997; Henggeler, Schoenwald, Liao, Letourneau, & Edwards, 2002). These existing studies indicate that: a) training supervisors facilitates improvements in staff performance, b) supervision increases therapist knowledge of and proficiency with complex therapeutic procedures, c) supervisor expertise is positively correlated with therapist treatment adherence, d) supervisor rigidity (an overemphasis on the analytic process and treatment principles) is associated with low therapist adherence, e) supervisor feedback enhances the maintenance of staff members’ skills, and f) supervisors benefit from receiving specific instruction on how to supervise others in addition to instruction on treatment content. Even fewer studies have examined the relation of therapist performance and client outcome to clinical supervision (Holloway & Neufeldt, 1995). A better understanding of how supervisors should be trained and included in the implementation process is needed.

Methodological Concerns

Lack of Theory to Drive Implementation Research

This emerging area of research appears to be suffering from a lack of theory-driven studies. Researchers (Glisson & Schoenwald, 2005; Gotham, 2004) have highlighted the value of understanding the complex environment of which these training efforts are a part. Despite these recommendations, there remains a lack of systematic investigations tied together by a strong theoretical framework. Perhaps there is value in looking to other disciplines with similar missions to identify potentially relevant theoretical frameworks. For example, the medical field has tried to implement evidence-based practices; the field of behavioral health may benefit from incorporating organizational theories from this work, such as complex adaptive systems from complexity science (R. A. Anderson, Crabtree, Steele, & McDaniel, 2005; Scott et al., 2005).

Uneven Implementation

This literature seems to be largely composed of a few significant dissemination/implementation efforts within specific topic areas, such as behavioral family therapy for schizophrenia, substance abuse treatments including motivational interviewing, DBT, and behavioral interventions for individuals with developmental disabilities in residential treatment facilities. Considering that studies on these topics dominate a small implementation literature, generalizations across treatment approaches are difficult and questionable. For example, it is unclear whether results from training studies focused on implementing family therapy for schizophrenia might be applicable to training studies focused on implementing motivational interviewing. Perhaps the method and dose of training necessary for adequate skill acquisition (competence in a treatment) is specific to each treatment. More intensive treatment approaches, and those requiring skills significantly different from a therapist’s current skill set, may require more intensive training methods or doses than less intensive approaches or those similar to therapists’ existing skill sets.

Alternatively, our observation that the literature is dominated by a few significant dissemination/implementation efforts may be due to the snowball search method employed, in which the reference section of each identified article was reviewed for studies that might be included in this review. To guard against this potential bias, several keywords and databases were used, as indicated in the methods section of this paper. Also, the reference sections of all relevant articles were reviewed, even those not included in this review (e.g., reference sections of conceptual papers). Therefore, although plausible, it appears unlikely that the snowball search method biased the selection of studies for inclusion in this review.

Measurement of Too Few Constructs

As previously noted, trainers often seek to improve knowledge and skill; however, knowledge acquisition appears to be easier to demonstrate and is more commonly assessed than skill acquisition. The few studies that have assessed both knowledge and skill have found that these constructs do not always increase at the same rate, nor do they always positively correlate. Freeman and Morris (1999) found statistically significant improvement on a knowledge measure, but not on a clinical vignette in which the application of knowledge had to be demonstrated. Similarly, Byington et al. (1997) found that knowledge improvements were evident, but improvements in applying concepts were not. Reporting only knowledge can lead to a more optimistic or skewed (Baer et al., 2009) view of training outcome than is accurate.

Exclusive Reliance on Therapist Self-report

Therapist self-report is commonly used to evaluate response to training; however, studies that have examined the validity of therapist self-report have found that therapists’ reports of their own behavior (e.g., clinical proficiency) and of patient improvement were more optimistic than behavior observations (Gregorie, 1994; W. R. Miller & Mount, 2001b; W. R. Miller et al., 2004). Behavior observation ratings present challenges to studies (e.g., cost, time, sample adequacy), but poor concordance between therapist and observer ratings suggests that therapist reports may be a supplement to, but not a substitute for, observer ratings (Carroll, Nich, & Rounsaville, 1998, p. 307). In one study by Carroll et al. (2000), 741 sessions were rated by a therapist and an independent rater. For 71% of those sessions, therapists’ ratings were higher (more optimistic) than the independent raters’; 26% of ratings were identical; and 6% of ratings were higher for independent raters than for therapists.

Lack of Rigor in Study Design and Scope

As mentioned previously, multiple methodological flaws limit the conclusions that can be drawn from these studies. There also is significant heterogeneity among therapists, training methods, training protocols, interventions trained, and constructs assessed. This variability, combined with a lack of methodological rigor in completed studies, significantly complicates this area of inquiry. While this review sought to organize the literature in a meaningful way by using an established classification system (Nathan & Gorman, 2002, 2007), the categorization of studies should not be treated as sacrosanct. Nathan and Gorman’s classification system is not the only system available for classifying research methodologies (e.g., Bliss, Skinner, Hautau, & Carroll, 2008); however, it is the most comprehensive and widely disseminated system with regard to rank ordering research methods by degree of scientific rigor. For example, Bliss and colleagues (2008) describe different research methodologies but do not rank order them.

Research Directions

We are just beginning to understand how to train community therapists in psychosocial treatment skills. Thus far, some methods appear to be more effective in changing knowledge and skill (e.g., multi-component training packages, feedback, consultation, supervision) than others (e.g., reading a treatment manual, attending workshops). The former methods are notable for their individualized approach, although they also carry other requirements or limitations (e.g., time, cost, intensity). Few studies have directly compared different methods, which may be one of the main directions for further work. One key question is which method most efficiently achieves initial therapist skill acquisition. Perhaps an even more important question is whether ongoing training and consultation (feedback) are necessary to achieve therapist adoption. An ongoing study by Chaffin and colleagues (NIMH Grant #065667) is evaluating the role of ongoing fidelity monitoring in the implementation of an EBT at the state level. Results may help to determine whether this component is essential to maintaining good adherence to a treatment model and, ultimately, improved client outcomes. Similar research might also examine the benefits of different training activities, such as supervisor training or the use of live coaching/consultation.

Complex but important questions originally proposed in the review by Ford (1979) remain unanswered, including: a) What is the minimal therapist skill proficiency level that could serve as a valid cutting point for predicting success or failure in training?, b) Are there certain complex interpersonal skills underlying treatment approaches that should be considered prerequisites for training?, and c) Is there a way to match trainees with a training method to produce better training outcomes? Even simpler questions remain, such as: d) What educational level (e.g., M.A./M.S., M.S.W., Ph.D.) is necessary to benefit from training?, e) What is the impact of therapist training on client outcomes?, f) How well do trained skills generalize from training cases to ‘real-world’ clients?, g) Is the impact of training transient or long-term?, and h) What program/agency or organizational mechanisms/structures/resources are needed to maximize the likelihood of successful therapist acquisition and adoption of a psychosocial treatment? To address some of these unanswered questions, Kolko and colleagues are currently completing a randomized effectiveness trial (NIMH Grant #074737) to understand the potential benefits of training therapists who are diverse in background (BA vs. MA/MS/MSW) and service setting (in-home, family-based, outpatient) in one EBT for child physical abuse, Alternatives for Families: A Cognitive Behavioral Therapy. This same study will provide information on therapist knowledge, skills, attitudes, and real-world practices, and the impact of these factors on family outcomes. It also will provide information on supervisor and organizational characteristics that impact implementation over time. Perhaps these efforts, as well as some of those included in this review, reflect a shift toward applying increasingly rigorous methods to the study of psychosocial treatment implementation.
Notably, five of the six studies included in this review that were rated as Type 1 studies were published after 2004 (Baer et al., 2009; Dimeff et al., 2009; Lochman et al., 2009; W. R. Miller et al., 2004; Moyers et al., 2008).

In summary, surprisingly little research has been conducted to evaluate methods for training therapists to implement a broad array of psychotherapy techniques. Research on training methods should move beyond examinations of workshop training toward developing and testing training models that are innovative, practical, and resource-effective. Large-scale, methodologically rigorous trials that include representative clinicians, patients, and follow-up assessments are necessary to provide sufficient evidence of effective training methods and materials. Without such trials, the field will continue to try to disseminate evidence-based treatments without evidence-based training strategies.

Ultimately, the current national focus on dissemination requires researchers to examine two issues together: 1) how well community therapists can be trained to effectively retain and implement new psychotherapy skills and knowledge, and 2) whether the application of these new skills and knowledge increases positive outcomes for clients when delivered in community settings. Attention to the integration of these complementary objectives will hopefully promote advances in training technologies that can play a significant role in advancing the mental health competencies of community therapists and enhancing the quality of care delivered in everyday practice settings. Just as “evidence-based medicine should be complemented by evidence-based implementation” (Grol, 1997), so too should evidence-based psychosocial treatments be complemented by evidence-based implementation.

Footnotes

1. The term “therapist” is used broadly and is meant to include professionals who provide psychological services to populations with clinically significant mental or behavioral health difficulties, including counselors, clinical social workers, psychologists, psychiatrists, and all other mental or behavioral health clinicians.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributor Information

Amy D. Herschell, Email: HerschellAD@upmc.edu.

David J. Kolko, Email: KolkoDJ@upmc.edu.

Barbara L. Baumann, Email: BaumannBL@upmc.edu.

Abigail C. Davis, Email: abbiedavis618@hotmail.com.

References

  1. Addis ME, Krasnow AD. A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting & Clinical Psychology. 2000;68(2):331–339. doi: 10.1037//0022-006x.68.2.331. [DOI] [PubMed] [Google Scholar]
  2. Alberts G, Edelstein B. Therapist training: A critical review of skill training studies. Clinical Psychology Review. 1990;10:497–511. [Google Scholar]
  3. American Psychological Association Presidential Task Force on Evidence-based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61:271–285. [Google Scholar]
  4. Anderson RA, Crabtree BF, Steele DJ, McDaniel RR. Case Study Research: The View From Complexity Science. Qualitative Health Research. 2005;15(5):669–685. doi: 10.1177/1049732305275208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Anderson SE, Youngson SC. Introduction of a child sexual abuse policy in a health district. Children and Society. 1990;4:401–419. [Google Scholar]
  6. Baer JS, Rosengren DB, Dunn CW, Wells EA, Ogle RL. An evaluation of workshop training in motivational interviewing for addiction and mental health clinicians. Drug & Alcohol Dependence. 2004;73(1):99–106. doi: 10.1016/j.drugalcdep.2003.10.001. [DOI] [PubMed] [Google Scholar]
  7. Baer JS, Wells EA, Rosengren DB, Hartzler B, Beadnell B, Dunn C. Agency context and tailored training in technology transfer: A pilot evaluation of motivational interviewing training for community counselors. Journal of Substance Abuse Treatment. 2009;37:191–202. doi: 10.1016/j.jsat.2009.01.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bailey R, Burbach FR, Lea SJ. The ability of staff trained in family interventions to implement the approach in routine clinical practice. Journal of Mental Health. 2003;12(2):131–141. [Google Scholar]
  9. Baldwin T, Ford JK. Transfer of training: A review and directions for future research. Personnel Psychology. 1988;41:63–105. [Google Scholar]
  10. Barber JP, Crits-Christoph P, Luborsky L. Effects of therapist adherence and competence on patient outcome in brief dynamic therapy. Journal of Consulting and Clinical Psychology. 1996;64:619–622. doi: 10.1037//0022-006x.64.3.619. [DOI] [PubMed] [Google Scholar]
  11. Bein E, Anderson T, Strupp HH, Henry WP, Schacht TE, Binder JL, et al. The effects of training in time-limited dynamic psychotherapy: Changes in therapeutic outcome. Psychotherapy Research. 2000;10:119–132. doi: 10.1080/713663669. [DOI] [PubMed] [Google Scholar]
  13. Berwick DM. Disseminating innovations in health care. Journal of the American Medical Association. 2003;289:1969–1975. doi: 10.1001/jama.289.15.1969. [DOI] [PubMed] [Google Scholar]
  14. Bickman L. A continuum of care: More is not always better. American Psychologist. 1996;51(7):689–701. doi: 10.1037//0003-066x.51.7.689. [DOI] [PubMed] [Google Scholar]
  15. Bliss SL, Skinner CH, Hautau B, Carroli EE. Articles published in four school psychology journals from 2000 to 2005: An analysis of experimental/intervention research. Psychology in the Schools. 2008;45(6):483–498. [Google Scholar]
  16. Brooker C, Butterworth T. Training in psychosocial intervention: The impact on the role of community psychiatric nurses. Journal of Advanced Nursing. 1993;18:583–590. doi: 10.1046/j.1365-2648.1993.18040583.x. [DOI] [PubMed] [Google Scholar]
  17. Brooker C, Falloon I, Butterworth A, Goldberg D, Graham-Hole V, Hillier V. The outcome of training community psychiatric nurses to deliver psychosocial intervention. British Journal of Psychiatry. 1994;165:222–230. doi: 10.1192/bjp.165.2.222. [DOI] [PubMed] [Google Scholar]
  18. Bruns EJ, Hoagwood KE, Rivard JC, Wotring J, Marsenich L, Carter B. State implementation of evidence-based practice for youths, part II: Recommendations for research and policy. Journal of the American Academy for Child and Adolescent Psychiatry. 2008;47(5):499–504. doi: 10.1097/CHI.0b013e3181684557. [DOI] [PubMed] [Google Scholar]
  19. Byington K, Fischer J, Walker L, Freedman E. Evaluating the effectiveness of a multicultural counseling and assessment training. Journal of Applied Rehabilitation Counseling. 1997;28:15–19. [Google Scholar]
  20. Carroll KM. Constrained, confounded, and confused: Why we really know so little about therapists in treatment outcome research. Addiction. 2001;96:203–206. doi: 10.1046/j.1360-0443.2001.9622032.x. [DOI] [PubMed] [Google Scholar]
  21. Carroll KM, Nich C, Rounsaville BJ. Utility of therapist session checklists to monitor delivery of coping skills treatment for cocaine abusers. Psychotherapy Research. 1998;8(3):307–320. [Google Scholar]
  22. Carroll KM, Nich C, Sifry RL, Nuro KF, Frankforter TL, Ball SA, et al. A general system for evaluating therapist adherence and competence in psychotherapy research in the addictions. Drug & Alcohol Dependence. 2000;57:225–238. doi: 10.1016/s0376-8716(99)00049-6. [DOI] [PubMed] [Google Scholar]
  23. Caspar F, Berger T, Hautle I. The right view of your patient: A computer-assisted, individualized module for psychotherapy training. Psychotherapy: Theory, Research, Practice, Training. 2004;41(2):125–135. [Google Scholar]
  24. Chaffin M. Organizational culture and practice epistemologies. Clinical Psychology: Science and Practice. 2006;13(1):90–93. [Google Scholar]
  25. Chagnon F, Houle J, Marcoux I, Renaud J. Control-Group study of an intervention training program for youth suicide prevention. Suicide and Life-Threatening Behavior. 2007;37(2):135–144. doi: 10.1521/suli.2007.37.2.135. [DOI] [PubMed] [Google Scholar]
  26. Chambless DL, Baker MJ, Baucom DH, Beutler LE, Calhoun KS, Crits-Christoph P, et al. Update on empirically validated therapies II. The Clinical Psychologist. 1998;51:3–15. [Google Scholar]
  27. Chambless DL, Sanderson WC, Shoham V, Johnson SB, Pope KS, Crits-Christoph P, et al. An update on empirically validated therapies. The Clinical Psychologist. 1996;49:5–18. [Google Scholar]
  28. Chorpita BF, Becker KD, Daleiden EL. Understanding the common elements of evidence-based practice: Misconceptions and clinical examples. Journal of the American Academy of Child and Adolescent Psychiatry. 2007;46:647–652. doi: 10.1097/chi.0b013e318033ff71. [DOI] [PubMed] [Google Scholar]
  29. Conner-Smith JK, Weisz JR. Applying treatment outcome research in clinical practice: Techniques for adapting interventions to the real world. Child and Adolescent Mental Health. 2003;8:3–10. doi: 10.1111/1475-3588.00038. [DOI] [PubMed] [Google Scholar]
  30. Crits-Christoph P, Siqueland L, Chittams J, Barber JP, Beck AT, Frank A, et al. Training in cognitive, supportive-expressive, and drug counseling therapies for cocaine dependence. Journal of Consulting and Clinical Psychology. 1998;66:484–492. doi: 10.1037//0022-006x.66.3.484. [DOI] [PubMed] [Google Scholar]
  31. Davis DA, Thomson MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. The impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? Journal of the American Medical Association. 1999;282:867–874. doi: 10.1001/jama.282.9.867. [DOI] [PubMed] [Google Scholar]
  32. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: A review of randomized controlled trials. Journal of the American Medical Association. 1992;268:1111–1117. [PubMed] [Google Scholar]
  33. Demchak M, Browder DM. An evaluation of the pyramid model of staff training in group homes for adults with severe handicaps. Education & Training in Mental Retardation. 1990;25(2):150–163. [Google Scholar]
  34. DeViva J. The effects of full-day and half-day workshops for health care providers in techniques for increasing resistant clients’ motivation. Professional Psychology: Research & Practice. 2006;37:83–90. [Google Scholar]
  35. Dimeff LA, Koerner K, Woodcock EA, Beadnell B, Brown MZ, Skutch JM, et al. Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behaviour Research and Therapy. 2009;47:921–930. doi: 10.1016/j.brat.2009.07.011. [DOI] [PubMed] [Google Scholar]
  36. Ducharme J, Feldman N. Comparison of staff training strategies to promote generalization. Journal of Applied Behavior Analysis. 1992;25:165–179. doi: 10.1901/jaba.1992.25-165. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Ellis MV, Ladany N, Krengel M, Schult D. Clinical supervision research from 1981 to 1993: A methodological critique. Journal of Counseling Psychology. 1996;43:35–50. [Google Scholar]
  38. Essock SM, Goldman HH, Van Tosh L, Anthony WA, Appell CR, Bond GR, et al. Evidence-based practices: Setting the context and responding to concerns. Psychiatric Clinics of North America. 2003;26:919–938. doi: 10.1016/s0193-953x(03)00069-8. [DOI] [PubMed] [Google Scholar]
  39. Eyberg SM, Nelson MM, Boggs SR. Evidence-based psychosocial treatments for children and adolescents with disruptive behavior. Journal of Clinical Child and Adolescent Psychology. 2008;37(1):215–238. doi: 10.1080/15374410701820117. [DOI] [PubMed] [Google Scholar]
  40. Fadden G. Implementation of family interventions in routine clinical practice following staff training programs: A major cause for concern. Journal of Mental Health. 1997;6:599–612. [Google Scholar]
  41. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. [Google Scholar]
  42. Ford JD. Research on training counselors and clinicians. Review of Educational Research. 1979;49:87–130. [Google Scholar]
  43. Ford JK, Weissbein DA. Training of transfer: An updated review. Performance Improvement Quarterly. 1997;10:22–41. [Google Scholar]
  44. Freeman KA, Morris TL. Investigative interviewing with children: Evaluation of the effectiveness of a training program for child protective service workers. Child Abuse & Neglect. 1999;23(7):701–713. doi: 10.1016/s0145-2134(99)00042-3. [DOI] [PubMed] [Google Scholar]
  45. Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: What’s the use? Journal of Behavioral Health Services & Research. 2003;30(4):393–405. doi: 10.1007/BF02287427. [DOI] [PubMed] [Google Scholar]
  46. Glisson C, Dukes D, Green P. The effects of the ARC organizational intervention on caseworker turnover, climate, and culture in children’s service systems. Child Abuse & Neglect. 2006;30:855–880. doi: 10.1016/j.chiabu.2005.12.010. [DOI] [PubMed] [Google Scholar]
  47. Glisson C, Schoenwald S. The ARC organizational and community intervention strategy for implementing evidence-based children’s mental health treatments. Mental Health Services Research. 2005;7(4):243–259. doi: 10.1007/s11020-005-7456-1. [DOI] [PubMed] [Google Scholar]
  48. Glisson C, Schoenwald SK, Kelleher K, Landsverk J, Hoagwood KE, Mayberg S, et al. Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate, and service structure. Administration & Policy in Mental Health. 2008;35:124–133. doi: 10.1007/s10488-007-0152-9. [DOI] [PubMed] [Google Scholar]
  49. Gotham HJ. Diffusion of mental health and substance abuse treatments: Development, dissemination, and implementation. Clinical Psychology: Science and Practice. 2004;11:160–176. [Google Scholar]
  50. Gotham HJ. Advancing the implementation of evidence-based practices into clinical practice: How do we get there from here? Professional Psychology: Research & Practice. 2006;37(6):606–613. [Google Scholar]
  51. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly. 2004;82(4):581–629. doi: 10.1111/j.0887-378X.2004.00325.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Gregoire TK. Assessing the benefits and increasing the utility of addiction training for public child welfare workers: A pilot study. Child Welfare. 1994;73:68–81. [PubMed] [Google Scholar]
  53. Grol R. Personal paper: Beliefs and evidence in changing clinical practice. BMJ. 1997;315:418–421. doi: 10.1136/bmj.315.7105.418. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Hawkins KA, Sinha R. Can front-line clinicians master the conceptual complexities of dialectical behavior therapy? An evaluation of a State Department on Mental Health training program. Journal of Psychiatric Research. 1998;32:379–384. doi: 10.1016/s0022-3956(98)00030-2. [DOI] [PubMed] [Google Scholar]
  55. Hemmelgarn AL, Glisson C, James LR. Organizational culture and climate: Implications for services and interventions research. Clinical Psychology: Science & Practice. 2006;13(1):73–89. [Google Scholar]
  56. Henggeler SW, Melton GB, Brondino MJ, Scherer DG, Hanley JH. Multisystemic therapy with violent and chronic juvenile offenders and their families: The role of treatment fidelity in successful dissemination. Journal of Consulting & Clinical Psychology. 1997;65(5):821–833. doi: 10.1037//0022-006x.65.5.821. [DOI] [PubMed] [Google Scholar]
  57. Henggeler SW, Schoenwald SK, Liao JG, Letourneau EJ, Edwards DL. Transporting efficacious treatments to field settings: The link between supervisory practices and therapist fidelity in MST programs. Journal of Clinical Child and Adolescent Psychology. 2002;31(2):155–167. doi: 10.1207/S15374424JCCP3102_02. [DOI] [PubMed] [Google Scholar]
  58. Henry WP, Strupp HH, Butler SF, Schacht TE, Binder JL. Effects of training in time-limited dynamic psychotherapy: Changes in therapist behavior. Journal of Consulting and Clinical Psychology. 1993;61:434–440. doi: 10.1037//0022-006x.61.3.434. [DOI] [PubMed] [Google Scholar]
  59. Herschell AD, Kogan JN, Celedonia KL, Gavin J, Stein BD. Understanding community mental health administrators’ perspectives on evidence-based treatment implementation. Psychiatric Services. 2009;60:985–988. doi: 10.1176/appi.ps.60.7.989. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Holloway EL, Neufeldt SA. Supervision: Its contributions to treatment efficacy. Journal of Consulting and Clinical Psychology. 1995;63:207–213. doi: 10.1037//0022-006x.63.2.207. [DOI] [PubMed] [Google Scholar]
  61. Hunter SB, Watkins KE, Wenzel S, Gilmore J, Sheehe J, Griffin B. Training substance abuse treatment staff to care for co-occurring disorders. Journal of Substance Abuse Treatment. 2005;28(3):239–245. doi: 10.1016/j.jsat.2005.01.009. [DOI] [PubMed] [Google Scholar]
  62. Huppert JD, Bufka LF, Barlow DH, Gorman JM, Shear MK, Woods SW. Therapists, therapist variables, and cognitive-behavioral therapy outcome in a multicenter trial for panic disorder. Journal of Consulting & Clinical Psychology. 2001;69(5):747–755. doi: 10.1037//0022-006x.69.5.747. [DOI] [PubMed] [Google Scholar]
  63. Hyde PS, Falls K, Morris JA, Schoenwald SK. Turning knowledge into practice: A manual for behavioral health administrators and practitioners about understanding and implementing evidence-based practices. Boston: Technical Assistance Collaborative, Inc; 2003. [Google Scholar]
  64. Jensen-Doss A, Cusack KJ, de Arellano MA. Workshop-based training in Trauma-focused CBT: An in-depth analysis of impact on provider practices. Community Mental Health Journal. 2007;44(4):227–244. doi: 10.1007/s10597-007-9121-8. [DOI] [PubMed] [Google Scholar]
  65. Johnson G, Knight R. Developmental antecedents of sexual coercion in juvenile sexual offenders. Sexual Abuse: A Journal of Research and Treatment. 2000;12(3):165–178. doi: 10.1177/107906320001200301. [DOI] [PubMed] [Google Scholar]
  66. Kazdin AE. Psychosocial treatments for conduct disorder in children and adolescents. In: Nathan PE, Gorman JM, editors. A guide to treatments that work. 2. London: Oxford University Press; 2002. pp. 57–85. [Google Scholar]
  67. Kazdin AE. Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist. 2008;63(3):146–159. doi: 10.1037/0003-066X.63.3.146. [DOI] [PubMed] [Google Scholar]
  68. Kelly JA, Heckman TG, Stevenson LY, Williams PN, Ertl T, Hays RB, et al. Transfer of research-based HIV prevention interventions to community service providers: Fidelity and adaptation. AIDS Education and Prevention. 2000;12(5 Suppl):87–98. [PubMed] [Google Scholar]
  69. Lambert MJ, Ogles BM. The effectiveness of psychotherapy supervision. In: Watkins CEJ, editor. Handbook of psychotherapy supervision. Hoboken, NJ: John Wiley & Sons; 1997. pp. 421–446. [Google Scholar]
  70. Lochman JE, Boxmeyer C, Powell N, Qu L, Wells K, Windle M. Dissemination of the Coping Power Program: Importance of intensity of counselor training. Journal of Consulting and Clinical Psychology. 2009;77(3):397–409. doi: 10.1037/a0014514. [DOI] [PubMed] [Google Scholar]
  71. Luborsky L, McLellan A, Diguer L, Woody G, Seligman DA. The psychotherapist matters: Comparison of outcomes across twenty-two therapists and seven patient samples. Clinical Psychology: Science & Practice. 1997;4(1):53–65. [Google Scholar]
  72. Luongo G. Re-thinking child welfare training models to achieve evidence-based practices. Administration in Social Work. 2007;31(2):87–96. [Google Scholar]
  73. Machin MA. Planning, managing, and optimizing transfer of training. In: Kraier K, editor. Creating, implementing, and managing effective training and development: State-of-the-art lessons for practice. San Francisco: Jossey-Bass; 2002. pp. 263–301. [Google Scholar]
  74. McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of the current efforts. American Psychologist. 2010;65(2):73–84. doi: 10.1037/a0018121. [DOI] [PubMed] [Google Scholar]
  75. McVey GL, Davis R, Kaplan AS, Katzman DK, Pinhas L, Geist R, et al. A community-based training program for eating disorders and its contribution to a provincial network of specialized services. International Journal of Eating Disorders. 2005;37:35–40. doi: 10.1002/eat.20114. [DOI] [PubMed] [Google Scholar]
  76. Miller SJ, Binder JL. The effects of manual-based training on treatment fidelity and outcome: A review of the literature on adult individual psychotherapy. Psychotherapy: Theory, Research, Practice, Training. 2002;39(2):184–198. [Google Scholar]
  77. Miller WR, Mount KA. A small study of training in motivational interviewing: Does one workshop change clinician and client behavior? Behavioural & Cognitive Psychotherapy. 2001a;29:457–471. [Google Scholar]
  78. Miller WR, Mount KA. A small study of training in motivational interviewing: Does one workshop change clinician and client behavior? Behavioural & Cognitive Psychotherapy. 2001b;29:457–471. [Google Scholar]
  79. Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology. 2004;72(6):1050–1062. doi: 10.1037/0022-006X.72.6.1050. [DOI] [PubMed] [Google Scholar]
  80. Milne D, Westerman C, Hanner S. Can a “Relapse Prevention” module facilitate the transfer of training? Behavioural and Cognitive Psychotherapy. 2002;30(3):361–364. [Google Scholar]
  81. Morgenstern J, Morgan TJ, McCrady BS, Keller DS, Carroll KM. Manual-guided cognitive-behavioral therapy training: A promising method for disseminating empirically supported substance abuse treatments to the practice community. Psychology of Addictive Behaviors. 2001;15(2):83–88. [PubMed] [Google Scholar]
  82. Moyers TB, Manuel JK, Wilson PG, Hendrickson SML, Talcott W, Durand P. A randomized trial investigating training in motivational interviewing for behavioral health providers. Behavioural and Cognitive Psychotherapy. 2008;36:149–162. [Google Scholar]
  83. Multon KD, Kivlighan DM Jr, Gold PB. Changes in counselor adherence over the course of training. Journal of Counseling Psychology. 1996;43:356–363. [Google Scholar]
  84. Nathan PE, Gorman JM. A guide to treatments that work. 2. New York: Oxford University Press; 2002. [Google Scholar]
  85. Nathan PE, Gorman JM. A guide to treatments that work. 3. New York: Oxford University Press; 2007. [Google Scholar]
  86. National Advisory Mental Health Council. Blueprint for change: Research on child and adolescent mental health: A report by the National Advisory Mental Health Council’s Workgroup on Child and Adolescent Mental Health Intervention Development and Deployment. Bethesda, MD: National Institutes of Health/National Institute of Mental Health; 2001. [Google Scholar]
  87. National Crime Victims Research & Treatment Center. TF-CBT Web: First Year Report. Charleston, SC: Medical University of South Carolina; 2007. [Google Scholar]
  88. National Institute of Mental Health. Bridging science and service: A report by the National Advisory Mental Health Council’s Clinical Treatment and Services Research Workgroup. 1998. [Google Scholar]
  89. National Institute of Mental Health. Dissemination and implementation research in mental health. Washington, DC: 2002. [Google Scholar]
  90. Neff JA, Amodei N, Martinez C, Jr, Ingmundson P. HIV/AIDS mental health training for health care providers: An evaluation of three models. American Journal of Orthopsychiatry. 1999;69(2):240–246. doi: 10.1037/h0080425. [DOI] [PubMed] [Google Scholar]
  91. Oordt MS, Jobes DA, Fonseca VP, Schmidt SM. Training mental health professionals to assess and manage suicidal behavior: Can provider confidence and practice behaviors be altered? Suicide and Life-Threatening Behavior. 2009;39(1):21–32. doi: 10.1521/suli.2009.39.1.21. [DOI] [PubMed] [Google Scholar]
  92. Parsons MB, Reid DH. Training residential supervisors to provide feedback for maintaining staff teaching skills with people who have severe disabilities. Journal of Applied Behavior Analysis. 1995;28(3):317–322. doi: 10.1901/jaba.1995.28-317. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Parsons MB, Reid DH, Green CW. Preparing direct service staff to teach people with severe disabilities: A comprehensive evaluation of an effective and acceptable training program. Behavioral Residential Treatment. 1993;8(3):163–185. [Google Scholar]
  94. President’s New Freedom Commission on Mental Health. Achieving the promise: Transforming mental health care in America. 2003. [Google Scholar]
  95. Reed GM, Eisman E. Uses and misuses of evidence: managed care, treatment guidelines, and outcome measurement in professional practice. In: Goodheart CD, Kazdin AE, Sternberg RJ, editors. Evidence-based psychotherapy: Where practice and research meet. Washington, DC: American Psychological Association; 2006. [Google Scholar]
  96. Ringel JS, Sturm R. National estimates of mental health utilization and expenditures for children in 1998. Journal of Behavioral Health Services & Research. 2001;28:319–333. doi: 10.1007/BF02287247. [DOI] [PubMed] [Google Scholar]
  97. Rubel EC, Sobell LC, Miller WR. Do continuing education workshops improve participants’ skills? Effects of a motivational interviewing workshop on substance-abuse counselors’ skills and knowledge. Behavior Therapist. 2000;23(4):73–77. [Google Scholar]
  98. Russell MC, Silver SM, Rogers S, Darnell JN. Responding to an identified need: A joint Department of Defense/Department of Veterans Affairs training program in eye movement desensitization and reprocessing (EMDR) for clinicians providing trauma services. International Journal of Stress Management. 2007;14(1):61–71. [Google Scholar]
  99. Saitz R, Sullivan LM, Samet JH. Training community-based clinicians in screening and brief intervention for substance abuse problems: Translating evidence into practice. Substance Abuse. 2000;21(1):21–31. doi: 10.1080/08897070009511415. [DOI] [PubMed] [Google Scholar]
  100. Schoener EP, Madeja CL, Henderson MJ, Ondersma SJ, Janisse J. Effects of motivational interviewing training on mental health therapist behavior. Drug and Alcohol Dependence. 2006;82:269–275. doi: 10.1016/j.drugalcdep.2005.10.003. [DOI] [PubMed] [Google Scholar]
  101. Schoenwald SK, Chapman JE, Kelleher K, Hoagwood KE, Landsverk J, Stevens J, et al. A survey of the infrastructure for children’s mental health services: Implications for the implementation of empirically supported treatments. Administration & Policy in Mental Health. 2008;35:84–97. doi: 10.1007/s10488-007-0147-6. [DOI] [PubMed] [Google Scholar]
  102. Schoenwald SK, Kelleher K, Weisz JR. Building bridges to evidence-based practice: The MacArthur Foundation Child System and Treatment Enhancement Projects (Child STEPS). Administration and Policy in Mental Health and Mental Health Services Research. 2008;35:66–72. doi: 10.1007/s10488-007-0160-9. [DOI] [PubMed] [Google Scholar]
  103. Scogin F, Welsh D, Hanson A, Stump J, Coates A. Evidence-based psychotherapies for depression in older adults. Clinical Psychology: Science and Practice. 2005;12(3):222–237. [Google Scholar]
  104. Scott J, Tallia A, Crosson JC, Orzano AJ, Stroebel C, DiCicco-Bloom B, et al. Social network analysis as an analytic tool for interaction patterns in primary care practices. Annals of Family Medicine. 2005;3(5):443–448. doi: 10.1370/afm.344. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Sharkin BS, Plageman PM. What do psychologists think about mandatory continuing education? A survey of Pennsylvania practitioners. Professional Psychology: Research & Practice. 2003;34(3):318–323. [Google Scholar]
  106. Shirk SR, Saiz CC. Clinical, empirical, and developmental perspectives on the therapeutic relationship in child psychotherapy. Development and Psychopathology. 1992;4:713–728. [Google Scholar]
  107. Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, Ball SA, Nuro KF. We don’t train in vain: A dissemination trial of three strategies of training clinicians in cognitive behavioral therapy. Journal of Consulting and Clinical Psychology. 2005;73(1):106–115. doi: 10.1037/0022-006X.73.1.106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Shore BA, Iwata BA, Vollmer TR, Lerman DC, Zarcone JR. Pyramidal staff training in the extension of treatment for severe behavior disorders. Journal of Applied Behavior Analysis. 1995;28(3):323–332. doi: 10.1901/jaba.1995.28-323. [DOI] [PMC free article] [PubMed] [Google Scholar]
  109. Silverman WK, Hinshaw SP. The second special issue on evidence-based psychosocial treatments for children and adolescents. Journal of Clinical Child and Adolescent Psychology. 2008;37(1):1–8. doi: 10.1080/15374410701818293. [DOI] [PubMed] [Google Scholar]
  110. Siqueland L, Crits-Christoph P, Barber JP, Butler SF, Thase M, Najavits L, et al. The role of therapist characteristics in training effects in cognitive, supportive-expressive, and drug counseling therapies for cocaine dependence. Journal of Psychotherapy Practice & Research. 2000;9(3):123–130. [PMC free article] [PubMed] [Google Scholar]
  111. Spring B. Evidence-based practice in clinical psychology: What it is, why it matters; what you need to know. Journal of Clinical Psychology. 2007;63(7):611–631. doi: 10.1002/jclp.20373. [DOI] [PubMed] [Google Scholar]
  112. Squires DD, Gumbley SJ, Storti SA. Training substance abuse treatment organizations to adopt evidence-based practices: The Addiction Technology Transfer Center of New England Science to Service Laboratory. Journal of Substance Abuse Treatment. 2008;34(3):293–301. doi: 10.1016/j.jsat.2007.04.010. [DOI] [PubMed] [Google Scholar]
  113. Stein DM, Lambert MJ. Graduate training in psychotherapy: Are therapy outcomes enhanced? Journal of Consulting and Clinical Psychology. 1995;63:182–196. doi: 10.1037//0022-006x.63.2.182. [DOI] [PubMed] [Google Scholar]
  114. Street LL, Niederehe G, Lebowitz BD. Toward greater public health relevance for psychotherapeutic intervention research: An NIMH workshop report. Clinical Psychology Science & Practice. 2000;7(2):127–137. [Google Scholar]
  115. Suda KT, Miltenberger RG. Evaluation of staff management strategies to increase positive interactions in a vocational setting. Behavioral Residential Treatment. 1993;8(2):69–88. [Google Scholar]
  116. Task Force on Promotion and Dissemination of Psychological Procedures. Training in and dissemination of empirically-validated psychological treatments. The Clinical Psychologist. 1995;48:3–23. [Google Scholar]
  117. VandeCreek L, Knapp S, Brace K. Mandatory continuing education for licensed psychologists: Its rationale and current implementation. Professional Psychology Research and Practice. 1990;21(2):135–140. [Google Scholar]
  118. Vocisano C, Klein DN, Arnow B, Rivera C, Blalock JA, Rothbaum B, et al. Therapist variables that predict symptom change in psychotherapy with chronically depressed outpatients. Psychotherapy: Theory, Research, Practice, Training. 2004;41(3):255–265. [Google Scholar]
  119. Walters ST, Matson SA, Baer JS, Ziedonis DM. Effectiveness of workshop training for psychosocial addiction treatments: A systematic review. Journal of Substance Abuse Treatment. 2005;29:283–293. doi: 10.1016/j.jsat.2005.08.006. [DOI] [PubMed] [Google Scholar]
  120. Weersing VR, Weisz JR, Donenberg GR. Development of the Therapy Procedures Checklist: A therapist-report measure of technique use in child and adolescent treatment. Journal of Clinical Child and Adolescent Psychology. 2002;31(2):168–180. doi: 10.1207/S15374424JCCP3102_03. [DOI] [PubMed] [Google Scholar]
  121. Weisz JR, Chu BC, Polo AJ. Treatment Dissemination and Evidence-Based Practice: Strengthening Intervention Through Clinician-Researcher Collaboration. Clinical Psychology: Science and Practice. 2004;11(3):300–307. [Google Scholar]
  122. Worrall JM, Fruzzetti A. Improving peer supervision ratings of therapist performance in dialectical behavior therapy: An internet-based training system. Psychotherapy Theory, Research, Practice, Training. 2009;46(4):476–479. doi: 10.1037/a0017948. [DOI] [PubMed] [Google Scholar]