Author manuscript; available in PMC: 2021 Jul 1.
Published in final edited form as: J Subst Abuse Treat. 2020 Apr 25;114:108014. doi: 10.1016/j.jsat.2020.108014

Initial Testing of a Computer-Based Simulation Training Module to Support Clinicians’ Acquisition of CBT Skills for Substance Use Disorder Treatment

Nadine R Mastroleo a, Laura Humm b, Callon M Williams a, Brian D Kiluk c, Ariel Hoadley d, Molly Magill d
PMCID: PMC7361509  NIHMSID: NIHMS1593480  PMID: 32527511

Abstract

Cognitive behavioral therapy (CBT) is one of the most common and effective treatments for substance use disorders (SUD); however, effective delivery of CBT depends on a wide variety of nuanced skills that require practice to master. We created a computer-based simulation training system to support student trainees in developing the skills needed to apply CBT effectively for clients with SUDs. CBT: Introducing Cognitive Behavioral Therapy is an interactive, role-play simulation that provides opportunities for clinician trainees to hone their skills through repeated practice and real-time feedback before application in a clinical setting. This is the first study that tests whether such a simulation improves trainee skills for the treatment of clients with SUDs. Graduate students (N = 65; social work, clinical psychology) completed standardized patient (SP) interviews, were randomized to the simulation training program or manual comparison condition (Project MATCH manual), and completed SP interviews three months post-baseline. Using general linear models, results indicated a significant time × group effect, with students assigned to the simulation training program showing greater improvement in “extensiveness” and “skillfulness” ratings across three skill categories: general agenda setting (p = .03), explaining CBT concepts (p = .007), and understanding of CBT concepts (p = .001). However, manual comparison participants showed greater improvement than simulation trainees in “assessing primary drug use” (p range = .013–.024). No changes in extensiveness or skillfulness of motivational interviewing (MI) style were observed. This pilot test of CBT: Introducing Cognitive Behavioral Therapy offers support for use of this novel technology as a potential approach to scale up CBT training for students, and perhaps clinicians, counseling people with SUDs.

Keywords: CBT, training, simulation module, skills, technology-based

1. Introduction

Cognitive behavioral therapy (CBT) is one of the most commonly used and extensively studied treatments to reduce and/or eliminate alcohol and other substance use and associated problems. CBT emphasizes exposure to and practice of skills and coping behaviors to manage problems and implement learned behaviors to avoid relapse and maintain abstinence. Scholars and researchers have manualized these approaches in an effort to ensure the dissemination of empirically supported treatment approaches for the reduction of alcohol and other drug use (e.g., coping skills training, increasing non-use-related activities, drug-refusal skills; Carroll et al., 1998; Kadden et al., 1992; Monti, Abrams, Kadden, & Cooney, 1989). Several meta-analyses demonstrating a significant and moderate effect over minimal and no-treatment control conditions support the efficacy of CBT for substance use reduction (e.g., Gooding & Tarrier, 2009; Magill & Ray, 2009; Magill et al., 2019). Most of the literature focuses on dissemination of CBT through clinical research studies, while little research has focused on the most effective methods to train clinicians in CBT (Sholomskas et al., 2005). As a result, few established approaches exist for training clinicians to deliver CBT.

1.1. Therapist training in CBT

Research is required to identify the specific training techniques that are most effective for the development of CBT competencies associated with the dissemination and implementation of CBT for substance use treatment. A gap exists between empirically supported substance use treatments and those used in community settings (Morgenstern, Morgan, McCrady, Keller, & Carroll, 2001), and this may be associated with a lack of empirically supported approaches to therapist training. As noted, the majority of published studies focus on testing CBT and emphasize a) selecting therapists with experience and commitment to the treatment approach, b) using an intensive didactic training seminar and role-plays to build skills, and c) closely supervising training and implementation of skills (e.g., Crits-Christoph et al., 1998; DeRubeis, Hollon, Evans, & Bemis, 1982; Hepner, Hunter, Paddock, Zhou, & Watkins, 2011; Waltz, Addis, Koerner, & Jacobson, 1993). However, these training methods have rarely been applied within the practicing clinical community. Rather, dissemination has been limited to distributing manuals and/or brief didactic training with little supervision or competency evaluation. This is also true of other evidence-based substance use treatments, such as motivational interviewing (MI; Miller & Rollnick, 2002, 2012) and contingency management (Oluwoye et al., 2019), where researchers have noted both the importance of training approaches (e.g., building clinical competencies through active learning strategies) and implementation support by management staff and clinical training programs (Hartzler & Rabun, 2014).

Research suggests that both computer-based and face-to-face classroom training result in similar gains in knowledge and skills in CBT for counselors (Larson et al., 2013; Morgenstern et al., 2001; Sholomskas et al., 2005; Weingardt, Cucciare, Bellotti, & Lai, 2009; Weingardt, Villafranca, & Levin, 2006). However, research has also shown that without supervision aimed at promoting skill utilization and development, general knowledge of CBT does not promote clinicians’ adoption and utilization of CBT skills and may reduce the benefit clients receive from treatment (King et al., 2002; Mannix et al., 2006; Westbrook, Sedgwick, Bennett-Levy, Butler, & McManus, 2008). Bennett-Levy and colleagues (2008) found that understanding theory and knowledge can come from reading and didactic training materials, while modeling was the bridge between knowledge and practical CBT implementation. Role-play was most helpful in embedding the skills into practice. It is this role-play and interactive practice that requires in-person, face-to-face interactions to enable trainees to learn and refine the skills associated with effective delivery of CBT (Bennett-Levy & Perry, 2009). Past studies of counselor training in MI suggest the same: efficacious integration into practice requires a combination of training, observation, feedback, and coaching (Madson, Loignon, & Lane, 2009; Miller et al., 2004).

1.2. Web- and computer-based training approaches

Training approaches for therapists now include web-based and computer-delivered instructional materials; however, there is little investigation of the specific training techniques that are most effective for developing CBT competencies. Weingardt et al. (2009) used eight online CBT modules and four weekly group supervision meetings (via web-conferencing) to train community therapists. Participants demonstrated improvements in CBT knowledge and self-efficacy; however, the authors did not test CBT skills per se. Additional studies have tested varying levels of trainer involvement, including a 10-day training course plus supervision (Westbrook et al., 2008) and a 20-week program delivered via videoconferencing (Rees & Gillam, 2001), finding that clinicians were receptive to each approach and reported high levels of satisfaction and confidence in delivering CBT. Studies that have incorporated individual feedback and supervision of trainees’ CBT skills show that trainees’ skillfulness improved and that their self-efficacy in using CBT as a treatment approach increased (e.g., Morgenstern et al., 2001; Sholomskas et al., 2005). Studies show that CBT training and supervision can enhance CBT skills, regardless of the didactic training modality used (e.g., in-person, computer based; Morgenstern et al., 2001; Sholomskas et al., 2005). These studies also highlight the need for a high level of resources, both through initial training and supervision, to successfully disseminate CBT for treatment adherence (Bennett-Levy & Perry, 2009; Morgenstern et al., 2001; Sholomskas et al., 2005).

However, it remains unknown whether a computer-based training model can effectively train a large number of clinicians, because previous studies have limited computer-based training to didactic materials alone. Computer-based training has been successful in several areas of health care (e.g., Anger et al., 2001; Fleming et al., 2009; Issenberg et al., 1999; Salas, Wilson, Burke, & Priest, 2005; Tulsky et al., 2011). Therefore, the utility of a computer-based training module with ongoing feedback and coaching to train clinicians in a specific manual-guided treatment approach is an important step toward the dissemination of empirically supported interventions, such as CBT for substance users. A computer-based training module may be a more feasible approach than face-to-face training given that traditional clinical training approaches are both time and cost intensive.

The current study conducted an initial test of a computer-based simulation module, CBT: Introducing Cognitive Behavioral Therapy, to train clinical graduate students in the delivery of CBT for substance use. Specifically, we tested whether students assigned to practice talking with the fictional client in the simulation training module demonstrated a greater increase in CBT skills over a three-month intervention period than a comparison group provided with a training manual. Additionally, we evaluated trainee acceptance of and satisfaction with the simulation module to assess the feasibility of this training approach.

2. Materials and methods

2.1. Sample and setting

Participants were 65 graduate students enrolled in either a Masters in Social Work (n = 51) or PhD in Clinical Psychology (n = 14) program at a large, research-intensive university in the Northeast. Of the 65 participants, 52 (80.0%) were female. Participants were primarily white (81.5%), followed by 7.7% black/African American, 3.1% Asian, 1.5% Native Hawaiian or other Pacific Islander, and 3.1% identifying as some other race, with 9.2% identifying as Hispanic. The mean age of the sample was 28.46 years (SD = 8.51). The majority of the sample were first- (50.8%) and second-year (33.8%) students. A minority of students had received prior training in CBT (n = 11, 16.9%), substance use disorders (n = 11, 16.9%), and MI (n = 21, 32.3%).

2.2. Procedure

2.2.1. Participant recruitment

Part- and full-time students were recruited through email and through in-class recruitment by the first author. Individuals who agreed to participate were presented with informed consent forms and the baseline questionnaires. Once enrolled, participants were randomized to either the experimental simulation training program or the manual comparison group (MATCH manual; Kadden et al., 1992). Similar to prior training studies (e.g., Morgenstern et al., 2001), two-thirds of the students were randomized to the simulation training condition (n = 40), with the remaining one-third assigned to the comparison condition (n = 25). Participants were then scheduled (via email) to complete two in-person role-plays with standardized patients.

2.2.2. Standardized patient (SP) role-plays

Participants in both conditions completed in-person role-plays with trained standardized patients (SPs). SPs have been commonly used in various healthcare settings to assess clinical and counseling skills (e.g., Cleland, Abe, & Rethans, 2009; Lane & Rollnick, 2007). In this study, Client A was voluntarily seeking help to reduce alcohol use; Client B was mandated to treatment following a recent DUI arrest. The first author and a faculty member in the university Theater Department trained the SPs, covering the SP role, character development, and ways to interact in session. SPs were trained actors, and study trainings lasted five hours. A total of five actors (three female, two male) conducted all SP sessions (the same actors at baseline and follow-up). Participants conducted role-plays at baseline and follow-up with Clients A and B (resulting in a total of four role-plays). Each participant was given a brief description of the SP, which included name, demographic characteristics, reason for attending therapy, and general information about alcohol and other drug use. Participants were also told that they were entering the therapy session approximately 20 minutes into the session (after having gathered general information about the client) and to spend the remaining 25 minutes of the appointment explaining CBT and discussing the client’s most recent use of alcohol. Participants were given handouts that included a diagram of a theoretical model of CBT and a blank change plan sheet (a form used to document plans for changing current alcohol use behaviors) and were told that they were free to use these materials. Participants were emailed and scheduled for follow-up SP role-plays three months after their baseline role-plays. All SP sessions were audio recorded for observational rating of clinical skills.

2.2.3. Computer-based simulation training module

Participants in the simulation training condition were given access to the computer-based interactive training module, CBT: Introducing Cognitive Behavioral Therapy, and told to complete 10 role-plays (lasting about 20 minutes each) using the simulation. CBT: Introducing Cognitive Behavioral Therapy includes a brief didactic training guide; a simulated conversation with the fictional patient, Tanisha Mosley; and integrated feedback and scoring. The e-learning materials provided 1) a brief guide to developing a relationship with clients and best practices for introducing CBT as a treatment option, 2) a brief overview of Tanisha’s background, 3) a description of how to use the simulation software, and 4) an explanation of how the session would be scored. During the simulated sessions with Tanisha, participants had the opportunity to explain why CBT was chosen as a treatment option, describe the core tenets of CBT, provide examples and diagrams of CBT, describe the role of homework in future sessions, and discuss what can be expected in a CBT session (see Figure 1). Each time a participant began a simulated session, the computer randomly selected one of three versions of Tanisha Mosley. Version A of the simulated patient was passive in her participation, version B was inquisitive, and version C was suspicious of the treatment plan. As the participant talked with Tanisha, the on-screen coach provided visual cues (e.g., thumbs-up) and written guidance to help participants quickly identify and adjust to mistakes (see Figure 2).
At the end of each simulated session, participants were provided with scores that rated their effectiveness in introducing the basic tenets of CBT to the patient: creating a collaborative environment, maintaining a conversational tone, displaying empathy, individualizing CBT for the simulated patient, ensuring patient understanding, setting an agenda for the session, and assigning homework in a clear and collaborative manner (see Figure 3).

Figure 1:

Traditional e-learning provides participants with access to a training guide and scenario details.

Figure 2:

The simulated conversation interface allows participants to select from hundreds of statements, including both appropriate and inappropriate choices. The on-screen help coach provides in-the-moment feedback to take advantage of teachable moments.

Figure 3:

After-session scores and feedback help participants identify areas of strength and improvement.

Of the 35 participants provided with the simulation, 30 participants completed the protocol (three users did not launch or engage with the module; two users launched the module but did not begin a simulated conversation). Of the 30 completers, 18 completed or exceeded the protocol expectations and 12 participants completed between two and nine conversations. Participants launched the simulation 374 times (an average of 10.68 times per participant) and completed 279 simulated sessions (an average of 7.97 simulated sessions per participant). Participants spent 170 minutes engaging with the e-learning materials (an average of 4.86 minutes per participant) and 3,362 minutes performing simulated sessions (an average of 96.05 minutes per participant) for a total of 3,532 minutes (an average of 100.91 minutes per student). Participant scores on the simulation ranged from 0 to 99, with an overall average score of 55.07 (SD = 28.08) and the average highest score (calculated from each participant’s highest score across simulation plays) of 74.57 (SD = 35.88). Participants in this condition were not given the Project MATCH manual during the trial, but were offered a copy after completing all components of the research study.
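The per-participant averages above are totals divided by the 35 participants who were given the simulation; a quick sketch recomputing three of them (all values taken from the text) reproduces the reported figures:

```python
participants = 35                       # students given access to the simulation
sessions = 279                          # completed simulated sessions (total)
elearning_min, session_min = 170, 3362  # minutes in e-learning vs. simulated sessions

avg_sessions = sessions / participants                    # ~7.97 sessions per participant
avg_elearning = elearning_min / participants              # ~4.86 minutes per participant
avg_total = (elearning_min + session_min) / participants  # ~100.91 minutes per participant
```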

2.2.4. Comparison condition

Participants in the comparison condition were given copies of the Project MATCH CBT coping skills therapy manual (Kadden et al., 1992) and told to read the manual for training. Specific guidelines were given to read pages 19–53, as those were the pages that most closely aligned with the computer-based training simulation. Participants were asked to underline, highlight, and make notes in the margins of the book. Prior to completing follow-up role-plays, we examined these notes to evaluate engagement with the training manual and, therefore, adherence to the comparison condition. All but one participant returned the manual. In our content analysis of the manuals, we found that 40% were returned with little to no markup (defined as writing on fewer than 50% of pages). Approximately 20% of manuals were marked up significantly, and sections 1 through 3 (Introduction to Coping Skills Training [pp. 21–26], Coping with Cravings [pp. 27–34], and Managing Thoughts about Alcohol and Drinking [pp. 35–38]) had the most notations. Participants in this condition were offered access to the training simulation at the conclusion of the study.

2.3. Measures

2.3.1. Counselor Activity Self-Efficacy Scales (CASES; Lent, Hill, & Hoffman, 2003)

The CASES is a 15-item self-administered questionnaire that assesses task self-efficacy, or capability to perform counseling tasks under normative conditions. Basic helping skills were divided into three components: a) exploration skills, b) insight skills, and c) action skills. Insight skills (6 items) consisted of capabilities, such as challenging client inconsistencies, using self-involving immediacy statements, and offering interpretations. Exploration skills (5 items) encompassed skills, such as basic communication competencies, reflecting feelings, and using restatements. Finally, action skills (4 items) consisted of skills providing relatively structured interventions, such as information giving and homework assignments. The CASES demonstrated good internal consistency with Cronbach’s alpha of .885 for insight, .851 for exploration, and .833 for action skills.

2.3.2. Intervention satisfaction

All participants completed an intervention satisfaction questionnaire at follow-up aimed at assessing their experiences with the assigned training (i.e., simulation or manual). The questionnaire was developed specifically for this project based on past studies examining participant satisfaction with treatments for alcohol use (Monti et al., 2016). Likert-type items, scored from 0 (not at all) to 3 (very), focused on how helpful the training (i.e., simulation or manual) was in preparing the participant to deliver CBT. Specific topics included “developing relationships with clients,” “setting agendas,” “explaining CBT,” and “dealing with client resistance.” The second set of items focused on the specifics of each training, with some questions common to both conditions, such as, “How engaging was the training?” and “How likely are you to recommend this training?” Questions specific to the simulation asked, for example, “How realistic was Tanisha Mosley?”; “Did you use the help coach?”; and “Did you use the scores?” Similarly, items specific to the manual were included for participants in the manual comparison condition (e.g., “How much did you want to read it again?”; “How helpful was the content?”).

2.3.3. CBT adherence and competence measure: Yale Adherence and Competence Scale (YACS)

Therapist skills were rated using a validated adherence and monitoring tool, the YACS (Carroll et al., 2000), which includes a detailed manual describing a range of topics or interventions typically covered during the course of CBT for SUDs. We used an abbreviated version of the YACS that included five items: two core skills (assessment of primary drug use, agenda setting), two CBT skills (explained CBT concepts, understood CBT concepts), and one motivational item (MI style). YACS items are nonorthogonal; thus, rated skills can be scored under multiple categories simultaneously. Each item is rated along two dimensions: extensiveness (i.e., quantity—the frequency and extent to which that intervention was present) and skillfulness (i.e., quality—the skill with which the therapist delivered the intervention). Extensiveness ratings were scored on a 7-point Likert scale (i.e., 1 = did not occur; 2 = occurred only once; 3 = occurred infrequently and without depth; 4 = occurred occasionally, possibly with depth; 5 = occurred quite a bit, usually with some depth/detail; 6 = occurred considerably, almost always in detail; 7 = occurred extensively and with great detail/depth). Skillfulness ratings represent the overall effectiveness of the therapist in demonstrating each skill (8-point Likert scale; 0 = did not occur; 1 = very poor; 2 = poor; 3 = acceptable; 4 = adequate; 5 = good; 6 = very good; 7 = excellent). The primary advantage of the YACS is that it allows adherence and competence to be assessed independently, recognizing that adherence to a treatment protocol does not necessarily guarantee skillfulness (Carroll et al., 2000).

2.4. Rater training

For the current study, two bachelor’s-level raters received roughly 30 hours of training from the last author. Rater training followed standard procedures in three phases: 1) brief didactic overview, including review of manual and required readings (e.g., Carroll et al., 2000); 2) group coding practice with corrective feedback; and 3) individual coding practice with group corrective feedback. Rater proficiency and ongoing project reliability were defined as intraclass correlation coefficient (ICC; two-way mixed; single measure) values of .75 or above (Cicchetti, 1994).
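The reliability criterion above (ICC; two-way mixed, single measure) can be computed from the mean squares of a two-way ANOVA without replication. As a minimal sketch, assuming the consistency form of the coefficient, often written ICC(3,1), with an illustrative function name and data layout:

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measures.

    `ratings` is a list of rows, one per double-coded interview
    (subject), each row holding one score per rater.
    """
    n = len(ratings)        # subjects (interviews)
    k = len(ratings[0])     # raters
    grand = sum(x for row in ratings for x in row) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)  # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)  # between raters

    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Against the Cicchetti (1994) benchmarks, a returned value of .75 or above would meet the study’s proficiency criterion. Note that absolute-agreement variants of the ICC penalize systematic offsets between raters, which this consistency form does not.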

2.5. Data collection procedures

Data collection occurred in two steps. First, raters listened to audio-recorded interviews to discern the overall content of the interview. On a second listen, raters classified each therapist behavior with an intervention code while taking detailed notes about skill level. Once the second pass through the interview was completed, raters tallied the frequency of every item category and assessed the extensiveness and skillfulness of each using the manual as a guide (Carroll et al., 2000). Inter-rater reliability was assessed at four-month intervals via a 20% (N = 35) random sample of interviews. Inter-rater reliability was maintained via weekly coding laboratory meetings in which project members participated in group coding exercises, discussed difficult items, and resolved rater discrepancies (items with “poor” rater agreement).

2.6. Analysis plan

In our initial analysis, we examined demographic and self-efficacy scores between the simulation training and manual comparison groups. Next, we examined student descriptor data with means, standard deviations, and percentage estimates. We conducted descriptive analysis of the intervention satisfaction items, and for items common to both treatment conditions, independent-measures t-tests were used to compare mean scores. For analysis of student training outcomes in the mandated and volunteer client scenarios, we utilized analysis of variance with a two-by-two factorial design (condition by time). Analyses were conducted on the five items of interest, as well as by extensiveness and skillfulness outcomes. Where we observed significant condition by time effects, we calculated effect size estimates (η² = .01 “small”, .06 “medium”, .14 “large”; Cohen, 1988). Finally, we conducted post hoc analysis that considered prior experience in CBT (yes/no) as a moderator of the experimental condition effect in relation to the CBT skillfulness outcomes.
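For a balanced two-by-two design, η² for the condition-by-time interaction is the interaction sum of squares divided by the total sum of squares. A minimal sketch under that assumption (illustrative code and data layout, not the authors’ analysis scripts):

```python
def interaction_eta_squared(cells):
    """Eta-squared for the A x B interaction in a balanced 2x2 design.

    `cells[a][b]` is the list of scores for level a of factor A
    (condition) and level b of factor B (time).
    """
    scores = [x for row in cells for cell in row for x in cell]
    grand = sum(scores) / len(scores)
    n = len(cells[0][0])  # scores per cell (balanced design assumed)
    cell_means = [[sum(c) / len(c) for c in row] for row in cells]
    a_means = [sum(row) / len(row) for row in cell_means]  # marginals over B
    b_means = [sum(cell_means[i][j] for i in range(2)) / 2 for j in range(2)]

    ss_total = sum((x - grand) ** 2 for x in scores)
    # Interaction effect in each cell: cell mean minus both marginal effects.
    ss_ab = n * sum(
        (cell_means[i][j] - a_means[i] - b_means[j] + grand) ** 2
        for i in range(2) for j in range(2)
    )
    return ss_ab / ss_total
```

A returned value of .14 or above would count as a “large” effect under the Cohen (1988) benchmarks noted above.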

3. Results

3.1. Student sample

Independent sample t-tests and chi-square tests revealed no significant differences between groups on any demographic items or self-efficacy scores at baseline.

3.2. Intervention helping and satisfaction scores

Overall, participants in the training simulation condition reported that the simulation helped them with “explaining CBT” and “setting agendas.” Compared to the manual comparison condition, participants rated the simulation training as significantly more helpful for explaining CBT (t(37) = 2.72, p = .011). Participants in the simulation training condition also reported higher satisfaction than those in the comparison condition when asked, “How engaging was the training?” and “How likely are you to recommend this training?” (see Table 1).

Table 1.

Participant training satisfaction ratings (N = 65).

Variable Full Sample Mean (SD) Simulation Training Mean (SD) Comparison Manual Mean (SD) Sig
Developing Relationships with Clients 1.59 (.75) 1.63 (.84) 1.50 (.52) n/s
Setting Agendas 2.10 (.64) 2.19 (.68) 1.92 (.52) n/s
Explaining CBT 2.31 (.73) 2.48 (.75) 1.92 (.52) .011
Dealing with Client Resistance 1.28 (.76) 1.33 (.78) 1.17 (.72) n/s
Overall Training Engagement 1.67 (.97) 2.04 (.84) 0.92 (0.79) .000
Overall Recommend Training 1.95 (1.05) 2.20 (1.11) 1.42 (0.67) .032
Overall Prepared to work with client with substance use disorder 1.80 (.58) 1.84 (.62) 1.70 (0.48) n/s

Note. Simulation Training N = 40; Manual Comparison N = 25; Likert-type items, scored from 0 (Not at All) to 3 (Very)

3.3. Reliability results

Table 2 shows reliability estimates for a randomly selected subsample of student interviews (N = 35). With the exception of extensiveness of MI style (ICC = .32), ICC values were “good” or “excellent” for both extensiveness and skillfulness ratings (Cicchetti, 1994).

Table 2.

Reliability and descriptive information.

Code ICC¹ Min² Max² Mean² (SD)²
Assess Primary Drug Use
 Extensiveness .91 0 6 3.88 2.11
 Skill .91 0 5 2.27 1.12
Agenda Setting
 Extensiveness .85 0 6 3.24 1.56
 Skill .73 0 5 2.31 .99
Explain CBT Concepts
 Extensiveness .91 0 6 3.19 1.85
 Skill .74 0 4 2.06 1.05
Understood CBT Concepts
 Extensiveness .79 0 6 1.41 1.41
 Skill .88 0 4 1.58 1.24
MI Style
 Extensiveness .32 3 6 5.76 .56
 Skill .88 1 4 2.66 .85
¹ Reliability estimates based on N = 35 double-coded interviews.

² Descriptive data based on N = 173 interviews.

Notes. Cicchetti (1994) suggests the following guidelines for assessing reliability of observational coding systems: ICC of .75 or above = excellent; .60-.74 = good; .40-.59 = fair; below .40 = poor. Poor reliability items shown in bold.

3.4. Changes in student demonstration of core skills and MI style

Tables 3 and 4 show student training outcomes in the simulation training and manual comparison conditions. Among the core skills examined, extensiveness of “assessment of primary drug use” did not differ between conditions for the mandated and voluntary scenarios. However, the manual comparison condition showed a significantly greater increase in assessment skillfulness post-training (mandated η² = .155; voluntary η² = .138). For agenda setting, the simulation training condition showed significantly greater increases in extensiveness (mandated η² = .121; voluntary η² = .148), but not skillfulness. Because client-centered elements had been integrated into the CBT training simulation, we also considered changes in MI style post-training. No differences were observed between conditions.

Table 3.

Student extensiveness of core, CBT, and MI skills.

Baseline Mean (SD) Follow-Up Mean (SD) F p
Mandated Client Role-Play
Assess Primary Drug Use
 Treatment Simulation 4.77 (1.50) 3.23 (2.07) 3.79 .059
 Comparison Manual 4.15 (1.72) 4.00 (1.68)
Agenda Setting
 Treatment Simulation 2.88 (1.53) 3.96 (1.66) 5.09 .030
 Comparison Manual 3.46 (1.33) 3.00 (.913)
Explained CBT Concepts
 Treatment Simulation 2.77 (1.68) 4.19 (1.50) 8.88 .007
 Comparison Manual 3.23 (2.39) 2.77 (1.69)
Understood CBT Concepts
 Treatment Simulation 1.19 (1.39) 2.27 (1.40) 6.22 .017
 Comparison Manual 1.15 (1.34) .923 (1.19)
Motivational Interviewing Style
 Treatment Simulation 5.69 (0.55) 5.85 (0.37) 2.32 .136
 Comparison Manual 5.92 (0.28) 5.77 (0.60)
Voluntary Client Role-Play
Assess Primary Drug Use
 Treatment Simulation 3.80 (2.12) 2.24 (2.60) 2.78 .104
 Comparison Manual 4.17 (1.99) 4.17 (1.85)
Agenda Setting
 Treatment Simulation 3.16 (1.11) 3.92 (1.78) 6.07 .019
 Comparison Manual 3.67 (1.23) 2.75 (1.60)
Explained CBT Concepts
 Treatment Simulation 3.20 (1.71) 4.36 (1.55) 8.18 .007
 Comparison Manual 2.75 (1.71) 2.33 (1.70)
Understood CBT Concepts
 Treatment Simulation 1.04 (1.48) 2.44 (1.23) 12.07 .001
 Comparison Manual 1.33 (1.50) 1.00 (1.13)
Motivational Interviewing Style
 Treatment Simulation 5.76 (0.66) 5.64 (0.76) 1.05 .312
 Comparison Manual 5.58 (0.67) 5.83 (0.66)

Notes. CBT = cognitive behavioral therapy. MI = motivational interviewing. Extensiveness ratings on a 7-point Likert scale.

Table 4.

Student skillfulness on core, CBT, and MI skills.

Baseline Mean (SD) Follow-Up Mean (SD) F p
Mandated Client Role-Play
Assess Primary Drug Use
 Treatment Simulation 2.69 (0.62) 2.04 (1.18) 6.79 .013
 Comparison Manual 2.38 (0.65) 2.77 (1.09)
Agenda Setting
 Treatment Simulation 2.04 (1.04) 2.54 (1.03) 1.58 .216
 Comparison Manual 2.46 (1.13) 2.38 (.506)
Explained CBT Concepts
 Treatment Simulation 1.81 (0.85) 2.54 (0.95) 5.99 .019
 Comparison Manual 2.00 (1.22) 1.77 (1.17)
Understood CBT Concepts
 Treatment Simulation 1.38 (1.20) 2.42 (1.03) 9.12 .005
 Comparison Manual 1.23 (1.09) .923 (1.04)
Motivational Interviewing Style
 Treatment Simulation 2.38 (0.80) 2.85 (0.88) 3.77 .060
 Comparison Manual 2.62 (1.12) 2.35 (0.77)
Voluntary Client Role-Play
Assess Primary Drug Use
 Treatment Simulation 2.32 (1.14) 1.20 (1.35) 5.59 .024
 Comparison Manual 2.42 (0.90) 2.50 (1.00)
Agenda Setting
 Treatment Simulation 2.40 (0.82) 2.48 (1.00) 1.56 .219
 Comparison Manual 2.67 (0.78) 2.25 (0.75)
Explained CBT Concepts
 Treatment Simulation 2.24 (0.93) 2.48 (0.82) 0.81 .375
 Comparison Manual 2.00 (1.13) 1.83 (1.19)
Understood CBT Concepts
 Treatment Simulation 1.24 (1.20) 2.48 (1.05) 8.48 .006
 Comparison Manual 1.50 (1.17) 1.33 (1.23)
Motivational Interviewing Style
 Treatment Simulation 3.00 (0.71) 2.80 (0.82) 0.22 0.64
 Comparison Manual 2.83 (0.58) 2.50 (0.80)

Notes. CBT = cognitive behavioral therapy. MI = motivational interviewing. Skillfulness ratings on an 8-point Likert scale.

3.5. Changes in student demonstration of module-trained CBT skills

Tables 3 and 4 show changes in student training outcomes in the simulation training and manual comparison conditions from baseline to follow-up. The simulation module targeted the introduction and explanation of CBT skills for treating substance use. For the explained CBT concepts skill, extensiveness ratings significantly increased among participants in the simulation training compared to the comparison condition from baseline to follow-up (mandated η² = .183; voluntary η² = .189). Skillfulness ratings also significantly increased in the mandated scenario (η² = .140), but not the volunteer scenario. Finally, extensiveness and skillfulness in checking clients’ understanding of CBT concepts significantly improved for the simulation compared to the comparison condition (i.e., understood CBT concepts; extensiveness mandated η² = .144; extensiveness voluntary η² = .256; skillfulness mandated η² = .198; skillfulness voluntary η² = .195).

3.6. Post hoc analysis: Changes in core and CBT skills by prior experience

A subset of the sample (16.9%) reported prior experience with CBT. In a post hoc analysis, we considered this factor as a potential moderator of the training simulation’s effects on CBT skillfulness outcomes. We entered reported prior CBT experience (yes/no) individually as a between-subject factor in the ANOVA models, and we calculated within-group pre-post d effect sizes for descriptive comparison. Analyses of effect moderation by this student factor were nonsignificant. Effect sizes for explained CBT concepts were as follows: skillfulness no prior experience = .517 (mandated), skillfulness prior experience = .953 (mandated); and skillfulness no prior experience = –.098 (voluntary), skillfulness prior experience = .977 (voluntary). Effect sizes for understood CBT concepts were as follows: skillfulness no prior experience = 1.011 (mandated), skillfulness prior experience = .928 (mandated); and skillfulness no prior experience = 1.162 (voluntary), skillfulness prior experience = –.381 (voluntary).
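The within-group pre-post d values reported above can be computed along these lines. This is a minimal sketch: the study does not specify its exact d formula, so the pooled-SD variant and the ratings data here are assumptions for illustration:

```python
import statistics

def cohens_d(pre, post):
    """Within-group pre-post Cohen's d using a pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    s1, s2 = statistics.stdev(pre), statistics.stdev(post)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

# Made-up skill ratings for illustration only:
baseline = [1, 2, 1, 2, 3]
follow_up = [2, 3, 3, 2, 4]
print(round(cohens_d(baseline, follow_up), 3))  # 1.195
```

A negative d under this convention (as in the –.098 and –.381 values above) indicates a mean decline from baseline to follow-up.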

4. Discussion

The current study is the first to test a computer-based simulation module designed to train mental health clinicians to deliver CBT for substance users. We also examined trainees’ acceptance of the simulation. Given that most computer-delivered training approaches include only didactic materials, the interactive module tested in this study has the potential to be more acceptable and effective than existing computer-based approaches. Overall, trainees rated the simulation as more engaging and more helpful for learning CBT skills than trainees in the manual comparison condition rated the manual. Participants who used the simulation training module outperformed those in the treatment manual comparison condition on most CBT skills post-training, across both mandated and voluntary SPs. This finding offers initial support for the training simulation as a stand-alone approach to training clinicians to deliver CBT for substance use disorders.

4.1. Counselor trainees seem to like using the simulation module

This training has potential to assist novice mental health clinicians in learning substance abuse–specific CBT. However, trainees must accept and be willing to use the computer-based module. Our evidence indicates that this acceptance is strong across all domains, as participants who were assigned to the computer-based training module scored the acceptability of and overall satisfaction with the training significantly higher than participants who were assigned to read the Project MATCH manual. While students still rated the Project MATCH manual favorably and reported that the manual was helpful in offering guidance and support in learning the skills, Project MATCH manual scores were low on “training engagement” and “recommending the training manual”. As our results show, then, specific guidance in delivering empirically supported treatment approaches is of interest to students in clinical training programs (e.g., Luebbe, Radcliffe, Callands, Green, & Thorn, 2007).

Although we were unable to compare the current training simulation to the traditional didactic models used in the past, our participants rated their satisfaction with and acceptability of the computer-based training simulation as high. Other studies have also found that individuals reported being satisfied with computer-based training approaches (e.g., Bennett-Levy et al., 2008; Rees & Gillam, 2001). There are a number of reasons trainees might prefer computer-based over face-to-face training approaches. First, computer-based training allows trainees to get real-time feedback, rather than waiting for an instructor or supervisor to listen to a recorded session and offer feedback, which often occurs long after the session has been completed. Additionally, conducting role-plays with someone other than a classmate or friend creates an experience closer to real-life clinical work. Overall, the simulation was rated as more engaging than the manual approach, and participants gave it higher scores on “recommending it to others”. Together, our data indicate that participants were satisfied with the simulation program and accepted it as a means of training for CBT.

4.2. Clinicians’ skills improved after using the simulation

The training simulation outperformed the manual when examining the impact on CBT skills, regardless of client type (voluntary or mandatory). The coding process indicated higher scores for the simulation training group compared to the manual comparison group. It is important to note that the higher scores may also be because students in the simulation training condition were able to perform an average of eight role-plays (i.e., completed interactions with the training simulations), while students who read the treatment manual had no formal opportunities to practice CBT skills. The training simulation targeted specific aspects of introducing CBT to a client and offered precise language and opportunities to practice these CBT skills, which likely resulted in the improved scores from baseline to follow-up in the demonstration of CBT skills.

Another important finding is that the manual group did not improve their CBT skills from baseline to follow-up. In fact, in a number of domains the manual comparison group showed slightly iatrogenic effects when demonstrating their skills with the SPs. This finding is important, given the prevalence of manuals across empirically based treatments; however, it should be interpreted with caution given our small sample size in the comparison condition. Because all study participants were clinical trainees, we are unable to determine whether we would see similar outcomes with practicing clinicians. The only category in which the manual group did improve was “assessing primary drug use” (which was not included in the simulation didactic materials or practice sessions); here the manual group also outperformed the simulation training group. Perhaps the Project MATCH manual provided a framework and detailed guidance that supported the development of this specific skill. This finding is consistent with a study by Morgenstern and colleagues (2001), in which the Project MATCH manual was used to train substance use counselors in the delivery of CBT. Results of that study suggest that the manual gave counselors techniques to help clients learn coping skills and appeared to improve counselors’ clinical work, possibly by offering structure for sessions and a therapeutic focus. However, in our sample of novice trainees, the manual was insufficient to improve skillfulness and extensiveness of CBT skills across almost all domains; this finding is consistent with prior research indicating that manual-only training is insufficient for improving counselors’ CBT skills (Sholomskas et al., 2005).
Although the Project MATCH manual is a well-established training guide that offers an initial framework for discussing substance use and describing CBT to clients, it is clear that clinical trainees need some practice and feedback on their CBT skills to become competent in delivering CBT.

4.3. Implications for training and CBT skill maintenance

This study offers a number of additional implications for training and assisting clinicians in maintaining skills for the delivery of CBT. First, the web-based simulation program is a practical approach for clinical training programs to supplement face-to-face training approaches. The stand-alone simulation training module can be integrated into educational programs, potentially improving the dissemination of empirically supported treatments for clinicians and increasing the availability of high-quality training for clinicians in rural and geographically remote areas. Graduate students in mental health fields (e.g., psychology, social work) are often initially exposed to empirically supported treatments during their graduate work; therefore, training activities such as the CBT simulation module may impact whether and how these students disseminate and implement CBT in clinical practice. Research on transferring MI into clinical practice (e.g., see Madson et al., 2009) offers support for CBT training and implementation. More research is needed to understand the clinical utility of integrating this (or similar) training simulation into course work in clinical graduate programs. Nevertheless, given our current findings, the specific feedback that the simulation offers to students may be effective in teaching critical clinical skills. Further, the simulation may help to maintain skills for current CBT clinicians seeking re-training and continued support. Although we did not test this specific population in this trial, clinicians who have not delivered CBT to patients with substance use concerns, or were never trained to work with this population, may find the training simulation an easy way to gain exposure to new content and a way to practice CBT skills.

4.4. Limitations

There are several key limitations to the current study. The first is the small sample size of clinical graduate students and the lack of heterogeneity among the sample. Because our study was conducted at one institution, our results cannot be generalized to other training programs. Although participants were graduate students in both social work and clinical psychology, expanding research to include a more diverse group of clinical trainees, particularly those in other disciplines who may encounter substance users in their clinical practice (e.g., nursing, general medicine), is an important next step to evaluate both the acceptability and efficacy of this training simulation. Because the current study was limited to graduate students, we did not include practicing clinicians or individuals who had already completed their training programs. Research is needed to determine whether this simulation may be a viable approach to offering continuing education for practitioners. The low ICC scores for MI extensiveness are another limitation; however, this finding is likely due to limited variability in scores, as is often the case in observational coding studies with a restricted score range. Further, it is not clear whether the 40% of comparison condition participants with minimal mark-up to their manual did not read the material, or simply chose not to write in the manual. Nevertheless, using the Project MATCH manual as the comparison condition allowed us to evaluate how the simulation may differ from materials readily available to clinicians. Other studies using manuals and handbooks as treatment approaches have reported similar results from content analysis (e.g., Turrisi et al., 2009).

5. Conclusion

Overall, the training simulation supported the development of clinical trainees’ CBT skills for clients with substance use concerns. Trainees were highly satisfied with the training module, offering support for the idea that the module could be adopted as either a stand-alone or supplemental training tool in clinical training programs. This training module and its real-time feedback may also offer continued training for practitioners in clinical settings or practitioners transitioning into working with clients with substance use disorders.

Highlights.

  • First test of acceptance and efficacy of a CBT simulation training module for SUD.

  • Results support the simulation module as an acceptable CBT SUD training approach.

  • Simulation module significantly improved CBT skills compared to a control condition.

  • Simulation training offers utility to enhance training approaches for CBT for SUD.

Acknowledgements

This work was supported by the National Institute on Alcohol Abuse and Alcoholism [Grant number R44AA023719].


References

  1. Anger WK, Rohlman DS, Kirkpatrick J, Reed RR, Lundeen CA, & Eckerman DA (2001). cTRAIN: a computer-aided training system developed in SuperCard for teaching skills using behavioral education principles. Behav Res Methods Instrum Comput, 33(2), 277–281. [DOI] [PubMed] [Google Scholar]
  2. Bennett-Levy J, McManus F, & Westling B (2008). Are some therapist training methods better than others? It depends on what you’re training. Paper presented at the European Association of Behavioural and Cognitive Therapy Conference, Helsinki, Finland. [Google Scholar]
  3. Bennett-Levy J, & Perry H (2009). The promise of online cognitive behavioural therapy training for rural and remote mental health professionals. Australasian Psychiatry, 17, S121–S124. [DOI] [PubMed] [Google Scholar]
  4. Carroll KM, Connors GJ, Cooney NL, DiClemente CC, Donovan DM, Kadden RR, … Zweben A (1998). Internal validity of Project MATCH treatments: discriminability and integrity. J Consult Clin Psychol, 66(2), 290–303. [DOI] [PubMed] [Google Scholar]
  5. Carroll KM, Nich C, Sifry RL, Nuro KF, Frankforter TL, Ball SA, … Rounsaville BJ (2000). A general system for evaluating therapist adherence and competence in psychotherapy research in the addictions. Drug and Alcohol Dependence, 57, 225–238. [DOI] [PubMed] [Google Scholar]
  6. Cicchetti DV (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284–290. [Google Scholar]
  7. Cleland JA, Abe K, & Rethans JJ (2009). The use of simulated patients in medical education: AMEE Guide No 42. Med Teach, 31(6), 477–486. [DOI] [PubMed] [Google Scholar]
  8. Cohen J (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. [Google Scholar]
  9. Crits-Christoph P, Siqueland L, Chittams J, Barber JP, Beck AT, Frank A, … Woody G (1998). Training in cognitive, supportive-expressive, and drug counseling therapies for cocaine dependence. J Consult Clin Psychol, 66(3), 484–492. [DOI] [PubMed] [Google Scholar]
  10. DeRubeis RJ, Hollon SD, Evans MD, & Bemis KM (1982). Can psychotherapies for depression be discriminated? A systematic investigation of cognitive therapy and interpersonal therapy. Journal of Consulting and Clinical Psychology, 50(5), 744–756. [DOI] [PubMed] [Google Scholar]
  11. Fleming M, Olsen D, Stathes H, Boteler L, Grossberg P, Pfeifer J, … Skochelak S (2009). Virtual reality skills training for health care professionals in alcohol screening and brief intervention. Journal of the American Board of Family Medicine, 22, 387–398. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Gooding P, & Tarrier N (2009). A systematic review and meta-analysis of cognitive-behavioural interventions to reduce problem gambling: hedging our bets? Behav Res Ther, 47(7), 592–607. doi: 10.1016/j.brat.2009.04.002 [DOI] [PubMed] [Google Scholar]
  13. Hepner KA, Hunter SB, Paddock SM, Zhou AJ, & Watkins KE (2011). Training addiction counselors to implement CBT for depression. Adm Policy Ment Health, 38(4), 313–323. doi: 10.1007/s10488-011-0359-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Issenberg SB, McGaghie WC, Hart IR, Mayer JW, Felner JM, Petrusa ER, … Ewy GA (1999). Simulation technology for health care professional skills training and assessment. JAMA, 282(9), 861–866. [DOI] [PubMed] [Google Scholar]
  15. Kadden RR, Carroll KM, Donovan DM, Cooney NL, Monti PM, Abrams D, … Hester R (1992). Cognitive-behavioral coping skills therapy manual: A clinical research guide for therapists treating individuals with alcohol abuse and dependence. (ADM 92–1895). Washington, DC: Department of Health and Human Services. [Google Scholar]
  16. King M, Davidson O, Taylor F, Haines A, Sharp D, & Turner R (2002). Effectiveness of teaching general practitioners skills in brief cognitive behaviour therapy to treat patients with depression: randomised controlled trial. BMJ, 324(7343), 947–950. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Lane C, & Rollnick S (2007). The use of simulated patients and role-play in communication skills training: a review of the literature to August 2005. Patient Educ Couns, 67(1–2), 13–20. doi: 10.1016/j.pec.2007.02.011 [DOI] [PubMed] [Google Scholar]
  18. Larson MJ, Amodeo M, Locastro JS, Muroff J, Smith L, & Gerstenberger E (2013). Randomized trial of web-based training to promote counselor use of cognitive behavioral therapy skills in client sessions. Subst Abus, 34(2), 179–187. doi: 10.1080/08897077.2012.746255 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Lent RW, Hill CE, & Hoffman MA (2003). Development and validation of the counselor activity self-efficacy scales. Journal of Counseling Psychology, 50, 97–108. [Google Scholar]
  20. Luebbe AM, Radcliffe AM, Callands TA, Green D, & Thorn BE (2007). Evidence-based practice in psychology: Perceptions of graduate students in scientist-practitioner programs. Journal of Clinical Psychology, 63(7), 643–655. [DOI] [PubMed] [Google Scholar]
  21. Magill M, & Ray LA (2009). Cognitive-behavioral treatment with adult alcohol and illicit drug users: a meta-analysis of randomized controlled trials. J Stud Alcohol Drugs, 70(4), 516–527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Magill M, Ray LA, Kiluk B, Hoadley A, Bernstein M, Tonigan JS, & Carroll KM (2019). A meta-analysis of cognitive-behavioral therapy for alcohol or other drug use disorders: Treatment efficacy by contrast condition. Journal of Consulting and Clinical Psychology, 87(12), 1093–1105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Mannix KA, Blackburn IM, Garland A, Gracie J, Moorey S, Reid B, … Scott J (2006). Effectiveness of brief training in cognitive behaviour therapy techniques for palliative care practitioners. Palliat Med, 20(6), 579–584. doi: 10.1177/0269216306071058 [DOI] [PubMed] [Google Scholar]
  24. Monti PM, Abrams DB, Kadden RM, & Cooney NL (1989). Treating Alcohol Dependence: A Coping Skills Training Guide. New York, NY: Guilford. [Google Scholar]
  25. Monti PM, Mastroleo NR, Barnett NP, Colby SM, Kahler CW, & Operario D (2016). Brief motivational intervention to reduce alcohol and HIV/sexual risk behavior in emergency department patients: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 84, 580–591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Morgenstern J, Morgan TJ, McCrady BS, Keller DS, & Carroll KM (2001). Manual-guided cognitive-behavioral therapy training: a promising method for disseminating empirically supported substance abuse treatments to the practice community. Psychol Addict Behav, 15(2), 83–88. [PubMed] [Google Scholar]
  27. Oluwoye O, Kriegel L, Alcover KC, McPherson S, McDonell MG, & Roll JM (2019). The dissemination and implementation of contingency management for substance use disorders: A systematic review. Psychol Addict Behav. doi: 10.1037/adb0000487 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Rees CS, & Gillam D (2001). Training in cognitive-behavioural therapy for mental health professionals: A pilot study of videoconferencing. Journal of Telemedicine and Telecare, 7, 300–303. [DOI] [PubMed] [Google Scholar]
  29. Salas E, Wilson KA, Burke CS, & Priest HA (2005). Using simulation-based training to improve patient safety: what does it take? Jt Comm J Qual Patient Saf, 31(7), 363–371. [DOI] [PubMed] [Google Scholar]
  30. Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, Ball SA, Nuro KF, & Carroll KM (2005). We don’t train in vain: a dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. J Consult Clin Psychol, 73(1), 106–115. doi: 10.1037/0022-006X.73.1.106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Tulsky JA, Arnold RM, Alexander SC, Olsen MK, Jeffreys AS, Rodriguez KL, … Pollak KI (2011). Enhancing communication between oncologists and patients with a computer-based training program: a randomized trial. Ann Intern Med, 155(9), 593–601. doi: 10.7326/0003-4819-155-9-201111010-00007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Waltz J, Addis ME, Koerner K, & Jacobson NS (1993). Testing the integrity of a psychotherapy protocol: assessment of adherence and competence. J Consult Clin Psychol, 61(4), 620–630. [DOI] [PubMed] [Google Scholar]
  33. Weingardt KR, Cucciare MA, Bellotti C, & Lai WP (2009). A randomized trial comparing two models of web-based training in cognitive-behavioral therapy for substance abuse counselors. J Subst Abuse Treat, 37(3), 219–227. doi: 10.1016/j.jsat.2009.01.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Weingardt KR, Villafranca SW, & Levin C (2006). Technology-based training in cognitive behavioral therapy for substance abuse counselors. Subst Abus, 27(3), 19–25. doi: 10.1300/J465v27n03_04 [DOI] [PubMed] [Google Scholar]
  35. Westbrook D, Sedgwick-Taylor A, Bennett-Levy J, Butler G, & McManus F (2008). A pilot evaluation of a brief CBT training course: Impact on trainees’ satisfaction, clinical skills, and patient outcomes. Behavioural and Cognitive Psychotherapy, 36, 569–579. [Google Scholar]
