Author manuscript; available in PMC: 2024 May 1.
Published in final edited form as: Adm Policy Ment Health. 2022 Dec 30;50(3):392–399. doi: 10.1007/s10488-022-01246-7

Implementation of a Low-Cost, Multi-component, Web-Based Training for Trauma-Focused Cognitive-Behavioral Therapy

Brigid R Marriott 1, Matthew D Kliethermes 2, J Curtis McMillen 3, Enola K Proctor 4, Kristin M Hawley 5
PMCID: PMC10461596  NIHMSID: NIHMS1923218  PMID: 36583811

Abstract

Effective, interactive trainings in evidence-based practices remain expensive and largely inaccessible to most practicing clinicians. To address this need, the current study evaluated the impact of a low-cost, multi-component, web-based training for Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT) on clinicians’ TF-CBT knowledge, strategy use, adherence and skill. Clinician members of a practice-based research network were recruited via email and randomized to either an immediate training group (N = 89 assigned) or waitlist control group (N = 74 assigned) that was offered access to the same training after six months, with half of each group further randomized to receive or not receive incentives for participation. Clinicians completed assessments at baseline, 6 months, and 12 months covering TF-CBT knowledge, strategy use, and for a subset of clinicians (n = 28), TF-CBT adherence and skill. Although significant differences in overall TF-CBT skillfulness and readiness were found, there were no significant differences between the training and waitlist control group on TF-CBT knowledge and strategy use at six months. However, there was considerable variability in the extent of training completed by clinicians. Subsequent post-hoc analyses indicated a significant, positive association between the extent of training completed by clinicians and clinician TF-CBT knowledge, strategy use, demonstrated adherence and skill across the three TF-CBT components, and overall TF-CBT readiness. We also explored whether incentives predicted training participation and found no differences in training activity participation between clinicians who were offered an incentive and those who were not. Findings highlight the limitations of self-paced web-based trainings. Implications for web-based trainings are discussed.

Keywords: Implementation, Web-based training, Knowledge, Skill, TF-CBT


Observational studies and surveys of practicing clinicians indicate that few evidence-based practices (EBPs) have become everyday practice (e.g., Beidas et al., 2015; Borntrager et al., 2013). Continuing education and training, required for most practicing clinicians, could be a useful strategy for increasing implementation of EBPs (e.g., Powell et al., 2015). Trainings that utilize multiple methods, particularly those that incorporate active, behaviorally oriented learning strategies (e.g., role-plays, consultation, feedback), have demonstrated improvement in clinician EBP knowledge, use, and competence (e.g., Frank et al., 2020; Herschell et al., 2010). Nevertheless, such multi-component trainings typically require substantial time and monetary investment, making them inaccessible to most practicing clinicians (Powell et al., 2013).

Web-based training may be a promising avenue to decrease cost and increase accessibility of EBP trainings. Recent reviews indicate that clinicians who participate in web-based trainings gain both knowledge and skill from pre- to post-training (Frank et al., 2020; Jackson et al., 2018), with online formats found to be as effective as and less costly than in-person formats (Valenstein-Mah et al., 2020). Further, the effects of external incentives (e.g., money, praise, public recognition) on reducing barriers and increasing training participation warrant further research (Friedberg, 2015). Offering clinicians an incentive may help to offset some of the costs associated with missing work to attend trainings and may improve training engagement. A pilot study exploring the effects of a financial and social incentive on the implementation of CBT found both strategies to be feasible and acceptable (Beidas et al., 2017).

In sum, ongoing training appears to be a promising strategy for closing the research-to-practice gap, but effective trainings remain expensive and largely inaccessible to most practicing clinicians. For the present study, we developed a low-cost, largely web-based, multi-component training protocol for Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT), an evidence-based treatment for children and adolescents who have experienced trauma or traumatic grief (Cohen et al., 2016). The overarching purpose of this study was to examine the effectiveness of this multi-component, web-based training in TF-CBT among a sample of clinicians randomized to receive the free training immediately or following a six-month waitlist.1 Within each group, clinicians were further randomized to receive an incentive or no incentive for training. This study was guided by the following aims: (1) to examine whether training was associated with clinicians’ TF-CBT knowledge, strategy use, adherence and skill and (2) to determine whether modest financial incentives increased training participation.

Method

Recruitment

Clinicians (N = 614) who were part of a practice-based research network in a Midwestern state and lived or worked in a county catchment area targeted for health promotion and support by the Missouri Foundation for Health were emailed a brief description of the training opportunity. Interested clinicians (N = 301) were offered enrollment if they met the inclusion criteria: (1) reliable access to high-speed internet, (2) having treated three or more children with a significant trauma history in the past year, and (3) having submitted therapy claims for reimbursement to the state’s Medicaid authority in the past year. Clinicians (N = 163) formally enrolled in the study via online consent and completion of a web-based pre-training assessment.

Randomization to Training and Incentive Groups

Using true random assignment with groups not constrained to be equal, clinicians (N = 163) were randomized to either an immediate training group (TG; N = 89) or a six-month waitlist (WL; N = 74) in order to examine the effects of the specific web-based training program relative to usual training otherwise available to clinicians. Clinicians were also randomized to an incentive group (N = 90; TG: n = 53, WL: n = 37) or no incentive group (N = 72; TG: n = 35, WL: n = 37); one clinician’s incentive status was missing. Clinicians in the incentive group could receive up to $100 total ($20 for completing the TF-CBTWeb course and $20 for attending each of four online webinars). Clinicians in the no incentive group did not receive money for completing training activities.

Procedures

All participants completed a pre-training, web-based assessment. Clinicians assigned to the TG were immediately provided online access, mailed all supporting materials (e.g., manual, toolkit), and encouraged to complete all training activities within six months. After six months, WL clinicians were offered the opportunity to participate in the training. Clinicians in both the TG and WL were asked to complete web-based assessments at 6 months (70.8% response rate for TG and 70.3% for WL) and 12 months (66.3% response rate for TG and 48.6% for WL) after the baseline assessment. Clinicians who did not complete the 12-month assessment were offered a very brief assessment 3 months later (i.e., 15 months after baseline); four opted to complete it. Clinicians received $5 for each assessment completed regardless of assigned incentive group. TF-CBT role-play assessments (N = 28) were also conducted with a selected subset of clinicians from the TG (n = 17) and WL (n = 10). Clinicians were selected using stratified purposeful sampling based on level of training completed, to sample clinicians who completed all, most, some, and none of the training. All clinicians who participated in the role-play assessments received $50 for participating. Clinician participation rate and sample representativeness of the role-play assessments are described in Marriott et al. (2022). In sum, clinicians with a master’s degree were more likely to participate, but no significant differences were found for other key demographic variables, practice characteristics, TF-CBT knowledge, TF-CBT strategy use, or the extent of training completed. The Institutional Review Boards of the University of Missouri and Washington University in St. Louis approved all study procedures.

Web-based TF-CBT Training Protocol

Reviews of the education and learning literatures (e.g., Bryan et al., 2009; Cucciare et al., 2008) and the clinician training literature (Beidas & Kendall, 2010; Herschell et al., 2010; Powell et al., 2013) guided the development of the multi-component training protocol. To keep the training protocol low-cost, each training component was provided to participating clinicians for free and, as noted below, learning partners were assigned based on geography to minimize transportation time and costs. The training comprised eight activities:

  1. Participants were provided with a free copy of the TF-CBT treatment manual (Cohen et al., 2006) and asked to read it;

  2. All participants were able to access the online 10-hour introductory TF-CBT training (TF-CBTWeb1.0; https://tfcbt.musc.edu) at any time regardless of condition;

  3. Four live webinars presented by the TF-CBT developers covering topics that these expert trainers felt were most critical for accurate understanding and implementation of TF-CBT and that were normally covered in in-person TF-CBT workshops. These webinars were recorded and online access was provided for anyone unable to participate during the live webinar;

  4. Weekly emailed TF-CBT clinical and implementation tips generated by the treatment developers and the investigative team;

  5. On-line discussion forum with other trainees moderated and supervised by a certified TF-CBT trainer;

  6. Four brief video demonstrations of TF-CBT components delivered by a certified TF-CBT trainer. These demonstrations were selected and designed in consultation with the treatment developers and a certified TF-CBT trainer to provide coverage of critical TF-CBT components typically covered during in-person trainings;

  7. Toolkit of supplementary TF-CBT training materials (e.g., clinical measures, handouts for clients); and

  8. An assigned learning partner with whom to discuss and practice TF-CBT skills. Learning partners were assigned by the investigative team based on geography (i.e., closest clinician to them) and provided with specific role-plays and discussion topics designed by the treatment developers and investigative team to approximate those typically completed during live trainings.

Measures

TF-CBT Strategy Use Survey

Clinicians rated their use of 34 prescribed treatment strategies with a recent case (seen within the last 3 months) that represented their usual treatment approach for trauma, on a 5-point Likert scale from “Never” to “Almost Always.” We developed this measure by adapting an existing treatment strategies survey to reflect use of the core components of TF-CBT (Cho et al., 2019). In the current study, we only evaluated clinicians’ prescribed TF-CBT strategy use (i.e., those strategies that are advised or recommended in TF-CBT, such as the trauma narrative), yielding a total score by averaging the 34 prescribed TF-CBT strategy items. Internal consistency for this measure was excellent in the current study (α = 0.93).

Knowledge of TF-CBT (Heck et al., 2015; National Crime Victims Research & Treatment Center, 2007)

This test consisted of 34 multiple choice questions measuring clinicians’ knowledge of general CBT, trauma, and TF-CBT components (e.g., trauma narrative). The current test is a shortened version of the Heck et al. (2015) Knowledge of TF-CBT test; we abbreviated the test based on communication with the authors of the measure. Knowledge test score is calculated as the percent of questions that are correct.

TF-CBT Adherence and Skill: TF-CBT Role-Play Assessment (Marriott et al., 2022)

We conducted a role-play assessment with a subset of clinicians to assess clinicians’ adherence and skill in three core TF-CBT components (description and rationale of TF-CBT; conducting the trauma narrative with a youth; cognitively processing trauma with a caregiver), as well as clinicians’ overall skillfulness in implementing TF-CBT strategies during the role-play, extensiveness or thoroughness in covering the relevant TF-CBT components, and readiness to effectively implement TF-CBT. Role-play assessments were conducted in-person by trained graduate research assistants and videotaped. Assessments were later coded by a certified TF-CBT trainer and three clinical psychology graduate students using a scale of 0 to 5 (0 = not present, 1 = present but unacceptable skillfulness, 3 = present with acceptable skillfulness, 5 = present with superior skillfulness). Coders were unaware of the clinicians’ assigned training group and level of training completed. An initial evaluation of the reliability and validity of this assessment was promising, with good interrater reliability (ICC M = 0.71, SD = 0.15 for individual items; ICC M = 0.71, SD = 0.18 for overall TF-CBT items) and performance on this assessment positively associated with TF-CBT knowledge and amount of training completed (Marriott et al., 2022).

Training Participation

Clinicians reported the extent to which they participated in each of the eight training activities on a 5-point Likert scale (0=“Not at all”, 2=“Somewhat”, 4=“Completely”). An extent of training completed score was calculated by summing the eight training activity items.
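As a minimal illustration of the scoring rule above (item labels are invented for this sketch, not the study's wording), the extent-of-training score is simply the sum of the eight 0–4 ratings, giving a possible range of 0–32:

```python
# Hypothetical sketch: extent-of-training score as the sum of eight
# 0-4 Likert ratings (item labels are illustrative, not study items).
ratings = {"manual": 4, "tfcbt_web": 2, "webinars": 3, "tips": 4,
           "forum": 0, "videos": 2, "toolkit": 3, "partner": 0}
extent_score = sum(ratings.values())  # possible range: 0 (none) to 32 (all completed)
print(extent_score)  # -> 18
```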

TF-CBT Protocol Implementation

Clinicians were asked to self-report the number of youth trauma cases with whom they had started or completed the entire TF-CBT treatment protocol in the past three months. This item was administered to TG clinicians who indicated they had begun the TF-CBT training at the 6-month assessment point, and to TG and WL clinicians who had begun training at the 12-month assessment point.

Statistical Analysis Plan

All analyses were conducted using IBM SPSS Statistics 27. Data and materials from this study are not publicly available, but may be requested from the corresponding author. To evaluate differences in training outcomes between the TG and WL at six months and compare training completion between the incentive versus no incentive groups, we performed independent samples t-tests or non-parametric Mann-Whitney U Tests when non-normality of the response variable was present. We also conducted dependent samples t-tests or non-parametric Wilcoxon-Signed Ranks Tests to examine changes in TF-CBT knowledge and strategy use from pre- to post-training for the full, combined sample. Because of the results of the intent-to-train analyses and considerable variability in the extent of training completed by clinicians, we subsequently performed post-hoc analyses to explore the association between extent of training completed, the number of started or finished TF-CBT cases at post-training, and training outcomes. Due to non-normality and small sample size, the non-parametric correlation coefficient, Kendall’s Tau, was used to explore these associations.
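To make the analysis plan concrete, the sketch below reproduces the three classes of tests described above (between-group Mann-Whitney U, paired Wilcoxon signed-rank, and Kendall's Tau dose-response correlation) using SciPy on fabricated data. The original analyses were run in SPSS; all variable names and values here are hypothetical:

```python
# Illustrative sketch of the analysis plan using SciPy on fabricated data;
# values are simulated and do not reproduce the study's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tg_knowledge = rng.normal(72, 11, size=40)   # simulated training-group scores
wl_knowledge = rng.normal(70, 10, size=35)   # simulated waitlist scores

# Between-group comparison at six months (Mann-Whitney U for non-normal outcomes)
u_stat, u_p = stats.mannwhitneyu(tg_knowledge, wl_knowledge,
                                 alternative="two-sided")

# Pre- to post-training change for the combined sample (Wilcoxon signed-rank)
pre = rng.normal(68, 10, size=50)
post = pre + rng.normal(4, 6, size=50)       # simulated paired post scores
w_stat, w_p = stats.wilcoxon(pre, post)

# Dose-response association (Kendall's Tau between training extent and outcome)
extent = rng.integers(0, 33, size=50)        # summed 8-item extent score (0-32)
tau, tau_p = stats.kendalltau(extent, post)

print(f"Mann-Whitney U = {u_stat:.2f}, p = {u_p:.3f}")
print(f"Wilcoxon W = {w_stat:.2f}, p = {w_p:.3f}")
print(f"Kendall's tau = {tau:.2f}, p = {tau_p:.3f}")
```

Nonparametric tests are used throughout because the outcome distributions were non-normal and, for the role-play subsample, the n was small.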

Results

Missing Data Comparison

Less than 5% of the pre-training data were missing, but more than 5% of data were missing at post-training assessments. We found no significant results for Little’s MCAR test, suggesting data were missing completely at random. We conducted independent samples t-tests and χ2 tests to compare clinicians with and without missing data on pre-training variables (e.g., knowledge, demographic, practice) at each assessment point. We found a statistically significant difference at the 6-month (t(160) = 2.19, p = .03) and 12-month assessment (t(160) = 2.36, p = .02) on hours worked per week; clinicians with missing data worked more hours at pre-training than those without missing data. WL clinicians were also significantly more likely to have missing data at the 12-month assessment than were TG clinicians, χ2(1) = 6.01, p = .01.

Participants

Clinicians (N = 163) were primarily female (n = 121, 74.2%), Caucasian (n = 144, 88.3%), and master’s level (n = 115, 70.6%). Clinicians had an average age of 47.9 years (SD = 11.7). Clinicians were predominantly licensed as LPCs (n = 72, 44.2%) and LCSWs (n = 60, 36.8%); employed in private individual practice (n = 57, 35.0%), outpatient/community mental health centers (n = 45, 27.6%), and/or private group practice (n = 38, 23.3%); and had a primary theoretical orientation of cognitive and/or behavioral (n = 82, 50.3%).

Evaluation of Training Outcomes

Training participation rates have previously been reported (McMillen et al., 2016). Notably, around a quarter of clinicians reported not participating in any training component at the follow-up survey. Participation rates were highest for the weekly emailed TF-CBT tips, online introductory TF-CBT training, and toolkit. Clinicians underutilized the training components with the most support for behavior change (i.e., meeting with a learning partner and using the online discussion forum). We did not find any statistically significant differences between the TG and WL clinicians on TF-CBT knowledge score or strategy use at six-months (see Table 1). For the role-play assessments (N = 28), TG clinicians demonstrated more adherence and skill on average across the three TF-CBT components (t(26) = 3.12, p = .004) and greater overall TF-CBT skillfulness (U = 36.00, p = .006) and readiness (U = 34.50, p = .004) throughout the role-play assessment than WL clinicians at six months. We did not find a significant difference between groups for the overall TF-CBT Extensiveness item.

Table 1.

Comparison of the training and waitlist control group on training outcomes at six-month time-point

Variables                                    TG M (SD)        WL M (SD)        t        d

Prescribed TF-CBT Strategy Use               2.76 (0.72)      2.54 (0.61)      1.22     0.33
Overall TF-CBT Component Adherence & Skill   1.29 (0.56)      0.71 (0.61)      3.12**   1.21

Variables                                    TG M (SD)        WL M (SD)        U        r

TF-CBT Knowledge Score                       72.18% (11.13%)  70.29% (9.64%)   1347.50  0.10
TF-CBT Skillfulness                          1.88 (0.99)      0.55 (0.93)      36.00**  0.56
TF-CBT Extensiveness                         1.12 (1.11)      0.36 (0.81)      60.00    0.35
TF-CBT Readiness                             1.94 (1.03)      0.55 (0.93)      34.50**  0.57

In analyses examining pre- to post-training changes for the full sample, TF-CBT knowledge scores increased significantly from pre-training (Mdn = 67.65%, SD = 10.08%) to post-training (Mdn = 73.53%, SD = 10.98%; Z = −3.48, p < .001). Self-reported TF-CBT strategy use also increased significantly from pre-training (Mdn = 2.61, SD = 0.51) to post-training (Mdn = 2.87, SD = 0.66; Z = −2.06, p = .04).

TG = Training Group; WL = Waitlist Control Group
** p < .01

Incentives on Training Completion

We found no significant difference in the extent to which clinicians reported participating in the TF-CBTWeb course (U = 1066.00, z = −0.63, p = .53) between the incentive (M = 2.04, Mdn = 2.0, SD = 1.66) and no incentive group (M = 1.83, Mdn = 2.0, SD = 1.84). There was also no significant difference between groups in the extent to which clinicians reported participating in the live webinars (U = 944.50, z = −1.43, p = .15; Incentive: M = 2.28, Mdn = 3.0, SD = 1.82; No incentive: M = 1.78, Mdn = 1.5, SD = 1.70). Across both groups, clinicians on average reported participating “somewhat” in the TF-CBTWeb course and the webinars.

Post-Hoc Analyses

To examine the relation between the extent of training completed, number of started or completed TF-CBT cases, TF-CBT knowledge, and self-reported TF-CBT strategy use for the full sample, we performed bivariate correlations. There was a significant, positive correlation between extent of training completed and post-training TF-CBT knowledge (τ = 0.17, n = 84, p = .03) and post-training self-reported prescribed TF-CBT strategy use (τ = 0.29, n = 53, p < .01); completing more training was associated with higher knowledge scores and more prescribed TF-CBT strategy use. We also found a significant, positive relation between number of started or completed TF-CBT cases and post-training self-reported prescribed TF-CBT strategy use, τ = 0.21, n = 51, p = .04, with starting or completing more TF-CBT cases related to greater TF-CBT strategy use. The association between TF-CBT knowledge and number of TF-CBT cases was not significant (τ = 0.06, n = 64, p = .56).

The association between extent of training completed and demonstrated adherence and skill on the role-play assessment was also investigated. There was a significant, positive correlation between demonstrated adherence and skill across the three TF-CBT components and extent of training completed (τ = 0.28, n = 28, p = .048), with more training completed related to more demonstrated adherence and skill on average across the TF-CBT components. We also observed a significant, positive relation between extent of training completed and overall demonstrated TF-CBT readiness, τ = 0.36, n = 28, p = .02; more training completed was related to greater overall readiness to effectively implement TF-CBT. No statistically significant correlations were found between extent of training completed and demonstrated overall TF-CBT skill (τ = 0.28, n = 28, p = .08) or extensiveness (τ = 0.24, n = 28, p = .15). WL clinicians did not report on the number of TF-CBT cases they had started or finished over the last three months at the time of the role-play assessment due to the setup of the survey at the 6-month assessment, so correlations between number of TF-CBT cases and demonstrated adherence and skill were not calculated.

Discussion

The current study evaluated a web-based training for TF-CBT, designed to approximate interactive, multicomponent trainings in EBPs. We found no differences between the training group (TG) and waitlist control group (WL) on TF-CBT knowledge and strategy use at the intent-to-train point. There was large variability in the extent of training completed among clinicians across both training groups, with around one-quarter of clinicians not completing any training and the training components with the most support for behavior change, such as the learning partners and online discussion forum, having the lowest levels of participation (McMillen et al., 2016). Post-hoc analyses indicated a significant relationship between the extent of training completed by clinicians and clinician TF-CBT knowledge and self-reported TF-CBT strategy use, with more training completed associated with greater TF-CBT knowledge and TF-CBT strategy use. Thus, these nonsignificant findings may be due to the TG clinicians not completing a sufficient dose of the training protocol.

Interestingly, we did find that TG clinicians showed more adherence and skill on average across the three TF-CBT treatment components, and more overall TF-CBT Skillfulness and Readiness during the role-play assessment, than WL clinicians. Nevertheless, the skill demonstrated by TG clinicians often did not reach an acceptable level, with average ratings (M = 1.29) far below the 70% cutoff score criterion (average rating of 3.5) used in previous studies (Beidas et al., 2012). This may be due in part to the considerable variability in training participation. Indeed, similar to the findings for TF-CBT knowledge and strategy use, we found a positive association between extent of training completed and demonstrated adherence and skill across the three TF-CBT components and overall readiness to effectively implement TF-CBT. Further, clinicians may not have had enough opportunity (e.g., only having one or two new trauma cases) to use their newly acquired TF-CBT skills before the role-play assessments. Skill development may necessitate an interplay of ongoing support as well as experience using these new skills (Jackson et al., 2017).

While there were few between-group differences, clinicians’ TF-CBT knowledge scores and self-reported use of TF-CBT strategies significantly increased from pre- to post-training. Similar to the underwhelming adherence and skill findings, clinicians still only averaged 72% correct at post-training, below the 80% proficiency cut-off often used in other training evaluations (Beidas & Kendall, 2010; Sholomskas et al., 2005). Further, while clinicians’ self-reported use of TF-CBT strategies increased following training, their use remained on average between ‘sometimes’ and ‘frequently’. These findings corroborate prior research that trainings often improve clinician knowledge but rarely change clinician behaviors without substantial ongoing support (Beidas & Kendall, 2010; Frank et al., 2020; Herschell et al., 2010). Additionally, post-hoc analyses revealed that starting or completing more TF-CBT cases was related to greater post-training TF-CBT strategy use, suggesting that the opportunity to use TF-CBT with cases may lead to greater use and underscoring the importance of clinicians having training cases with whom to use TF-CBT during training efforts.

Because numerous barriers exist to clinicians being able to participate in EBP trainings, we also examined the impact of modest financial incentives on training participation. We found no differences in extent of participation in the training activities between the incentive and no incentive group. Prior reviews indicate that community-based clinicians prefer to participate in EBP trainings that fit with their schedules, do not take time away from clients, and are not costly (Herschell et al., 2014), but we found most clinicians in the current study completed less than half of the training and numerous clinicians did not complete any training. Our findings corroborate previous studies that have found web-based trainings to have low completion rates (Liyanagunawardena et al., 2013) and that the time most clinicians report being willing to invest in training falls short of the training requirements necessary to learn EBPs (Powell et al., 2013). One reason for the low training completion rates in this study may be the lack of accountability and interpersonal interaction in a self-paced web-based training. The current training relied on clinicians to direct their own learning and to choose which training components to complete. While web-based trainings may provide a more accessible and flexible alternative for many clinicians, the ideal balance between allowing flexibility and keeping clinicians on track to complete EBP trainings remains unknown. Suggestions to increase accountability provided by clinicians in the qualitative portion of the larger study included having the training be more directive about what needs to be completed and when, and tying continuing education credits to meeting deadlines (McMillen et al., 2016). Future studies may wish to examine how these and other accountability and engagement strategies influence training engagement and outcomes.

Our findings of significant differences between the TG and WL clinicians on the role-play assessment at six months but no differences in TF-CBT knowledge or strategy use were surprising. Differences in the measurement approach for the role-play assessment compared to the TF-CBT knowledge and strategy use measures may account for these results. The role-play assessment was conducted in-person and performance was observationally coded, while the TF-CBT knowledge test and strategy use measure were administered through a web-based survey that was completed more than once by clinicians, so a practice effect may also have impacted these scores. The strategy use measure may have also been subject to social desirability bias once participants learned those strategies were part of TF-CBT. Finally, while TG clinicians had higher scores on the role-play assessment, these scores were still low overall, with TF-CBT items averaging a rating of being present but with an unacceptable level of skillfulness (Ms = 1.29–1.94). Thus, TG clinicians were more adherent and demonstrated more TF-CBT items in their role-play assessment than WL clinicians, but the TG clinicians were not delivering these techniques more skillfully. This is in line with a recent training review that found mixed results around whether EBP training increased competence, with the authors suggesting that training may increase skill acquisition and adherence but may not increase competence or skillfulness in delivering EBP components (Valenstein-Mah et al., 2020). Future studies should focus on dismantling EBP training protocols to determine which training components are necessary to attain proficient EBP knowledge, skill and behavior change (Jackson et al., 2017).

Some limitations of the current study bear mention. First, around a quarter of clinicians had not completed any training at the intent-to-train point, and there was substantial variability in training participation rates among clinicians. These variable participation rates make it difficult to disentangle whether the training itself failed to impact training outcomes or whether clinicians failed to engage in a high enough dose of the training to benefit. Second, although clinicians were under no external pressure (e.g., no mandates, no agency oversight) to respond in any way, self-report is subject to numerous well-known biases (e.g., response bias, social desirability; Dillman, 2000; Hurlburt et al., 2010). Third, there were sharply declining response rates from baseline to 12 months, though these response rates were within the range of previous studies (Barnett et al., 2017; Hawley et al., 2009). In addition, only a small subset of clinicians participated in the role-play assessments, limiting the statistical power for the role-play assessment analyses. Fourth, several measures used in the study have not been evaluated in other samples. Thus, these results should be interpreted with caution; future research should evaluate the reliability and validity of these measures.

Conclusion

The current study evaluated an inexpensive, accessible approximation of the gold standard of training in EBPs. Following the training, clinicians were modestly knowledgeable in TF-CBT, demonstrated a minimal level of skill, and reported using TF-CBT strategies sometimes to frequently in their practice. While the current study did not find many between-group differences, we did find significant increases in knowledge and TF-CBT strategy use pre- to post-training. The mixed and nonsignificant findings as well as low training completion rates in the current study highlight the limitations of self-paced web-based trainings. Without the incorporation of strategies to increase clinician engagement and accountability, the potential of web-based trainings may be limited. Understanding how to better engage clinicians in completing web-based trainings and in implementing EBPs in their practice is needed and critical to improving mental health care.

Acknowledgements

The authors would like to thank the anonymous reviewer for their suggestion to consider post-hoc correlations between the extent of training completed and clinician outcomes.

Funding

This research was supported by a grant from the Missouri Foundation for Health, the Center for Mental Health Services Research (P30 MH068579) and the Washington University Institute of Clinical and Translational Sciences (NIH/NCRR U54 RR024992).

Footnotes

Conflict of interest The authors declare that they have no conflict of interest.

Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the University of Missouri and Washington University in St. Louis Institutional Review Boards and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Study procedures were approved by the Institutional Review Boards of the University of Missouri and Washington University in St. Louis.

Informed Consent Informed consent was obtained from all individual participants included in the study.

1. Other aspects of this implementation study (i.e., qualitative interviews) and sample have previously been published (McMillen et al., 2016).

References

  1. Barnett M, Brookman-Frazee L, Regan J, Saifan D, Stadnick N, & Lau A (2017). How intervention and implementation characteristics relate to community therapists’ attitudes toward evidence-based practices: A mixed methods study. Administration and Policy in Mental Health and Mental Health Services Research, 44(6), 824–837. 10.1007/s10488-017-0795-0
  2. Beidas RS, & Kendall PC (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17(1), 1–30. 10.1111/j.1468-2850.2009.01187.x
  3. Beidas RS, Edmunds JM, Marcus SC, & Kendall PC (2012). Training and consultation to promote implementation of an empirically supported treatment: A randomized trial. Psychiatric Services, 63(7), 660–665. 10.1176/appi.ps.201100401
  4. Beidas RS, Marcus S, Aarons GA, Hoagwood KE, Schoenwald S, Evans AC, Hurford MO, Hadley T, Barg FK, Walsh LM, Adams DR, & Mandell DS (2015). Predictors of community therapists’ use of therapy techniques in a large public mental health system. JAMA Pediatrics, 169(4), 374–382. 10.1001/jamapediatrics.2014.3736
  5. Beidas RS, Becker-Haimes EM, Adams DR, Skriner L, Stewart RE, Wolk CB, Buttenheim AM, Williams NJ, Inacker P, Richey E, & Marcus SC (2017). Feasibility and acceptability of incentive-based implementation strategies for mental health therapists implementing cognitive-behavioral therapy: A pilot study to inform a randomized controlled trial. Implementation Science, 12(1), 148. 10.1186/s13012-017-0684-7
  6. Borntrager C, Chorpita BF, Higa-McMillan CK, Daleiden EL, & Starace N (2013). Usual care for trauma-exposed youth: Are clinician-reported therapy techniques evidence-based? Children and Youth Services Review, 35(1), 133–141. 10.1016/j.childyouth.2012.09.018
  7. Bryan RL, Kreuter MW, & Brownson RC (2009). Integrating adult learning principles into training for public health practice. Health Promotion Practice, 10(4), 557–563. 10.1177/1524839907308117
  8. Cho E, Wood PK, Taylor EK, Hausman EM, Andrews JH, & Hawley KM (2019). Treatment strategies in youth mental health services: Results from a national survey of providers. Administration and Policy in Mental Health and Mental Health Services Research, 46(1), 71–81. 10.1007/s10488-018-0896-4
  9. Cohen JA, Mannarino AP, & Deblinger E (2006). Treating trauma and traumatic grief in children and adolescents. Guilford Press.
  10. Cohen JA, Mannarino AP, & Deblinger E (2016). Treating trauma and traumatic grief in children and adolescents (2nd ed.). Guilford Publications.
  11. Cucciare MA, Weingardt KR, & Villafranca S (2008). Using blended learning to implement evidence-based psychotherapies. Clinical Psychology: Science and Practice, 15(4), 299–307.
  12. Dillman D (2000). Procedures for conducting government-sponsored establishment surveys: Comparisons of the total design method (TDM), a traditional cost-compensation. Proceedings of American Statistical Association, Second International Conference on Establishment Surveys, 343–352.
  13. Frank HE, Becker-Haimes EM, & Kendall PC (2020). Therapist training in evidence-based interventions for mental health: A systematic review of training approaches and outcomes. Clinical Psychology: Science and Practice. 10.1111/cpsp.12330
  14. Friedberg RD (2015). When treatment as usual gives you lemons, count on EVBPs: Economic arguments for training child clinicians in evidence-based practices. Child & Family Behavior Therapy, 37(4), 335–348. 10.1080/07317107.2015.1104782
  15. Garland AF, Hurlburt MS, Brookman-Frazee L, Taylor RM, & Accurso EC (2010). Methodological challenges of characterizing usual care psychotherapeutic practice. Administration and Policy in Mental Health and Mental Health Services Research, 37(3), 2–18. 10.1007/s10488-009-0237-8
  16. Hawley KM, Cook JR, & Jensen-Doss A (2009). Do noncontingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health and Mental Health Services Research, 36(5), 343–348. 10.1007/s10488-009-0225-z
  17. Heck NC, Saunders BE, & Smith DW (2015). Web-based training for an evidence-supported treatment. Child Maltreatment, 20(3), 183–192. 10.1177/1077559515586569
  18. Herschell AD, Kolko DJ, Baumann BL, & Davis AC (2010). The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review, 30(4), 448–466. 10.1016/j.cpr.2010.02.005
  19. Herschell AD, Reed AJ, Mecca LP, & Kolko DJ (2014). Community-based clinicians’ preferences for training in evidence-based practices: A mixed-method study. Professional Psychology: Research and Practice, 45(3), 188–199. 10.1037/a0036488
  20. Hurlburt MS, Garland AF, Nguyen K, & Brookman-Frazee L (2010). Child and family therapy process: Concordance of therapist and observational perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 37(3), 230–244. 10.1007/s10488-009-0251-x
  21. Jackson CB, Herschell AD, Schaffner KF, Turiano NA, & McNeil CB (2017). Training community-based clinicians in parent-child interaction therapy: The interaction between expert consultation and caseload. Professional Psychology: Research and Practice, 48(6), 481–489. 10.1037/pro0000149
  22. Jackson CB, Quetsch LB, Brabson LA, & Herschell AD (2018). Web-based training methods for behavioral health providers: A systematic review. Administration and Policy in Mental Health and Mental Health Services Research, 45(4), 587–610. 10.1007/s10488-018-0847-0
  23. Liyanagunawardena TR, Adams AA, & Williams SA (2013). MOOCs: A systematic study of the published literature 2008–2012. International Review of Research in Open and Distributed Learning, 14(3), 202–227.
  24. Marriott BR, Cho E, Tugendrajch SK, Kliethermes MD, McMillen JC, Proctor EK, & Hawley KM (2022). Role-play assessment of therapist adherence and skill in implementation of Trauma-Focused Cognitive-Behavioral Therapy. Administration and Policy in Mental Health and Mental Health Services Research, 49, 374–384. 10.1007/s10488-021-01169-9
  25. McMillen JC, Hawley KM, & Proctor EK (2016). Mental health clinicians’ participation in web-based training for an evidence supported intervention: Signs of encouragement and trouble ahead. Administration and Policy in Mental Health and Mental Health Services Research, 43, 592–603.
  26. National Crime Victims Research & Treatment Center. (2007). TF-CBT Web: First year report. Medical University of South Carolina.
  27. Powell BJ, McMillen JC, Hawley KM, & Proctor EK (2013). Mental health clinicians’ motivation to invest in training: Results from a practice-based research network survey. Psychiatric Services, 64(8), 816–818. 10.1176/appi.ps.003602012
  28. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, & Kirchner JAE (2015). A refined compilation of implementation strategies: Results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science, 10(1), 1–14. 10.1186/s13012-015-0209-1
  29. Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, Ball SA, Nuro KF, & Carroll KM (2005). We don’t train in vain: A dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology, 73(1), 106–115. 10.1037/0022-006X.73.1.106
  30. Valenstein-Mah H, Greer N, McKenzie L, Hansen L, Strom TQ, Wiltsey Stirman S, Wilt TJ, & Kehle-Forbes SM (2020). Effectiveness of training methods for delivery of evidence-based psychotherapies: A systematic review. Implementation Science, 15, 40. 10.1186/s13012-020-00998-w