Implementation Research and Practice

2022 Apr 19;3:26334895221087477. doi: 10.1177/26334895221087477

Tailored isn't always better: Impact of standardized versus tailored training on intention to use measurement-based care

Hannah Kassab 1,*, Kelli Scott 2,3,*, Meredith R Boyd 4, Ajeng Puspitasari 5, David Endicott 6, Cara C Lewis 7
PMCID: PMC9924248  PMID: 37091104

Abstract

Background: Brief educational trainings are often used for disseminating and implementing evidence-based practices (EBPs). However, accessible trainings are almost always standardized. Tailored training focused on modifying individual or contextual factors that may hinder EBP implementation is recommended, but there is a dearth of research comparing standardized versus tailored training. This study sought to: (a) assess the impact of measurement-based care (MBC) training on clinician intention to use MBC; (b) compare the effect of standardized versus tailored training on clinician intention to use MBC; and (c) identify clinician-level predictors of intention. Methods: Clinicians (n = 152) treating adult clients with depression at 12 community mental health clinics were randomized to either tailored or standardized MBC training. Clinic-specific barriers and facilitators were used to inform the tailoring of training content and structure. Linear mixed modeling tested the association between training condition and post-training intention to use MBC, as well as hypothesized individual-level predictors of post-training intention (e.g., age, gender). Results: Clinician intention increased from pre- to post-training across training conditions (B = 0.38, t = −5.95, df = 36.99, p < .01, Cohen's d = 0.58). Results of linear mixed modeling procedures suggest no significant difference in clinician intention between conditions post-training (B = −0.03, SE = .19, p > .05, Cohen's d = .15). Only baseline intention emerged as a predictor of post-training intention (B = 0.39, SE = .05, p < .05). Conclusions: These findings suggest the additional effort to tailor training may not yield incremental benefit over standardized training, at least in the short term. As a result, implementation efforts may be able to reserve time and finances for other elements of implementation beyond the training component.

Plain Language Summary

Educational training is a common approach for enhancing knowledge about research-supported mental health treatments. However, these trainings are often not tailored to meet the needs of the trainees, and there is insufficient evidence about whether tailoring might improve the impact of training compared to a one-size-fits-all, standard version. This study compared the impact of a tailored versus standard training on mental health clinicians’ intentions to use measurement-based care (MBC) for monitoring treatment progress for clients with depression. Study results indicated that intention to use MBC improved after training completion for clinicians receiving either the tailored or the standard training. There were no differences in intention to use MBC when the two types of training were compared. These study findings suggest that tailoring, which may require substantial time and effort, may not be a necessary step to improve the short-term impact of educational trainings.

Keywords: Tailoring, workshop training, measurement-based care, implementation science


Use of evidence-based practices (EBPs) remains limited in community mental health settings, which may be due, in part, to a lack of effective brief training approaches (Herschell et al., 2010). Typical approaches to training, such as single-exposure, didactic workshop trainings—like those offering continuing education credits required to maintain licensure—can promote knowledge acquisition but are limited in their capacity to produce behavior change (Beidas & Kendall, 2010; Herschell et al., 2010).

Training can be enhanced when expert-led didactic workshops contain active learning techniques like discussion, demonstrations, and role plays with feedback (El-Tannir, 2002; Herschell et al., 2010; Sholomskas et al., 2005); this “gold standard” can influence both clinician knowledge and attitudes (Beidas & Kendall, 2010), but still does very little to promote practice change. Some researchers suggest that this limited capacity to produce clinician behavior change is due to trainings’ “one-size-fits-all” nature (Carpenter et al., 2012; Herschell et al., 2010). In the present study, we explored an alternative to the gold standard approach by comparing it to a tailored approach in the context of training community mental health providers to use measurement-based care (MBC; Lewis et al., 2015).

MBC is an EBP that involves monitoring client symptoms (usually via a brief standardized self-report measure) before or during each therapy session to inform treatment decisions (Lewis, Boyd, et al., 2018; Lewis et al., 2015). Studies show that tracking client symptoms and progress during therapy improves client outcomes because it provides the clinician with feedback to inform clinical decision-making (Lambert et al., 2003). MBC allows clinicians to focus sessions on clients’ unique symptoms and progress. Across numerous studies, MBC outperforms usual care (Lambert et al., 2005) and may be a minimal intervention needed for change (Scott & Lewis, 2015). However, MBC is rarely used in community mental health (Lambert et al., 2003), with fewer than 14% of clinicians reporting its use in line with research (Jensen-Doss et al., 2016; Lewis, Boyd, et al., 2018).

Tailored Training: A Promising Alternative?

A standardized approach to training does not account for clinician characteristics or contextual factors unique to the setting in which the clinicians are embedded; these factors may act as barriers or facilitators of EBP implementation (Beidas & Kendall, 2010). Recommendations have been made for the use of tailored trainings to promote EBP uptake in behavioral health settings (Beidas et al., 2011). Tailored training may be designed to target individual characteristics (e.g., clinician attitudes about the intervention) or contextual factors (e.g., leadership buy-in for the intervention) by conducting a pre-training assessment to identify barriers to be addressed, find facilitators that can be leveraged, and adjust the training content and structure accordingly (Lewis et al., 2015).

Existing research on tailored training conducted in the behavioral health and medical fields has yielded mixed results (Beidas et al., 2014), which could be due to the multitude of ways in which researchers study tailoring. Formal needs assessments are used variably to inform tailoring of training (Baer et al., 2009; Bryson & Ostmeyer, 2014; Siddiqi et al., 2008). Some studies focus on tailoring the structure to fit the contextual needs of the trainees (e.g., delivering motivational interviewing training in the trainees’ workplace; Baer et al., 2009; Rollnick et al., 2002), while others focus on tailoring content (e.g., providing information about a specific technique [e.g., parenting support strategies] that the trainees want/need to learn; Bryson & Ostmeyer, 2014; Sanders & Turner, 2005; Turner & Sanders, 2006). Other trainings incorporate both tailored structure and content in response to a formal needs assessment (i.e., trainings for motivational interviewing and delirium intervention; Baer et al., 2009; Siddiqi et al., 2008).

To our knowledge, only two behavioral health studies evaluating motivational interviewing and communication skills have directly compared standardized to tailored training (Baer et al., 2009; Rollnick et al., 2002). Other than increased acceptability of the training by participants in the tailored condition for one study (Rollnick et al., 2002), results indicated no difference between tailored and standardized conditions. Others have compared different tailored trainings to each other (Bryson & Ostmeyer, 2014; Sanders et al., 2003) or employed tailored approaches without comparisons (Friesen-Storms et al., 2015; Siddiqi et al., 2008), making it challenging to determine whether tailoring indeed impacted training outcomes. Additionally, none of these prior studies evaluated the impact of tailoring on precursors to behavior change (e.g., intention to use a new EBP) that may be important for influencing new practice implementation. Instead, these studies predominantly focused on satisfaction with or perceived helpfulness of training as well as perceived or observed skill gain following training. Because of the limited and variable research on this topic, additional research is needed to determine whether tailoring confers benefits beyond that of standardized training on a wide range of outcomes.

Training Outcomes and Predictors

As a precursor to change at the service and client levels, implementation outcomes have been identified as essential (intermediary) endpoints of study to better understand how strategies, like training, may contribute to successful EBP uptake (Proctor et al., 2011). Intention to use an EBP, which is often used interchangeably with the term “adoption,” is an implementation outcome that is implicated early on and can be expected to change over the course of a training (Proctor et al., 2011). Intention is a core domain of the Theory of Planned Behavior (TPB), which posits that social pressure to perform a behavior, perceived self-efficacy, and attitudes toward a behavior all influence intention to engage in a behavior, the latter of which is a very strong predictor of the behavior itself (Ajzen, 1991). Although evaluating intention to engage in or change a behavior appears to be a promising measure of interest for assessing future EBP use, it has been understudied as a key training outcome. Most training studies evaluate changes in knowledge, attitudes, and skill of the participants (Herschell et al., 2010).

Understanding clinician-level predictors of implementation outcomes, like intention, is critical to ascertain for whom training works (Beidas & Kendall, 2010), which can then also be used to tailor trainings. Based on the extant literature, there are at least four key variables that may influence the effect of training with regard to conferring improved knowledge and/or intention to use EBPs: clinician attitudes, theoretical orientation, age, and experience. Several studies have found that clinicians with favorable attitudes toward the EBP have a higher probability of using the EBP post-training, regardless of the type or quality of the training (Beidas et al., 2014). Researchers have also found that CBT-oriented clinicians had both fewer attitudinal barriers (Hatfield & Ogles, 2007) and more positive attitudes toward MBC (Jensen-Doss et al., 2016), relative to other orientations (e.g., eclectic, insight-oriented). In another survey of clinicians and case managers, age positively predicted favorable attitudes toward EBPs, such that older clinicians were more open to the adoption of EBPs (Aarons & Sawitzky, 2006). Studies have also indicated that the presence of related skills (i.e., experience) prior to training predicted greater skill post-training, and experience predicted a greater magnitude of change in skill post-training as well as greater intervention intention/adoption (Carpenter et al., 2012; Chor et al., 2015). Unfortunately, the majority of research that has attempted to identify predictors is limited by lack of randomized study design, limited training experience, and small sample sizes (Herschell et al., 2010).

Current Study

The purpose of this study was to compare the impact of tailored versus standardized training on intention to use MBC among community mental health center clinicians participating in a large cluster randomized controlled trial. The trial was guided by the Framework for Dissemination (Mendel et al., 2008), an implementation science framework that provided the conceptual model for the implementation phases (adoption, implementation, and sustainment), evaluation process (needs assessment, implementation/process evaluation, and outcome/impact evaluation), and contextual factors impacting MBC implementation (see Lewis et al., 2015 for full description of the conceptual framework employed in this study). This study included three specific aims. Aim 1 assessed whether MBC training facilitated favorable outcomes regarding clinician intention to use MBC. We hypothesized that training would increase clinicians’ intention to use MBC with clients immediately post-training, across both conditions. Aim 2 compared the effect of standardized versus tailored training on intention to use MBC. We hypothesized that tailored training addressing the unique needs of clinicians and their context, informed by a mixed-methods assessment guided by the Framework for Dissemination (Mendel et al., 2008), would result in greater intention to use MBC compared to standardized training. Aim 3 explored predictors of intention to use MBC including age, previous MBC experience, attitudes toward MBC, years of clinical experience, cognitive-behavioral orientation, and caseload. No a priori hypotheses were articulated as Aim 3 was exploratory but guided by previous literature.

Method

Context and Design

The present study was conducted in the context of a dynamic cluster randomized controlled trial led in collaboration with Centerstone, the largest community-based, not-for-profit mental health service provider in the United States (see Lewis et al., 2015 for a full trial protocol). Twelve Centerstone clinic sites in Tennessee and Indiana were selected for this trial based on (1) number of clinicians, (2) number of adults with depression diagnoses, and (3) urban versus rural status, to achieve balanced representation. Sites were first randomized by the principal investigator and research team into four training cohorts for feasibility purposes and then into condition, either standardized or tailored, in line with Consolidated Standards of Reporting Trials (CONSORT) and guidelines set forth by Chamberlain et al. (2008) and Brown et al. (2014; see Figure 1 and Lewis et al., 2015). Participating clinicians received either standardized or tailored training based on the assigned condition at the site where they were primarily employed (i.e., a clinician at a site randomized to the tailored condition received tailored training). Each site underwent a baseline mixed-methods needs assessment consisting of surveys and focus groups that queried relevant domains outlined in the Framework for Dissemination (e.g., resources, media and change agents, and structures and processes), followed by training approximately 1 month later. Primary outcomes for the trial included both MBC effectiveness (client depression symptom change) and MBC implementation (MBC fidelity) outcomes. For this study, we engaged in a secondary data analysis of the standardized and tailored training delivered in the trial, with a primary outcome of clinician intention to use MBC following training.

Figure 1. Clinician CONSORT flowchart.

Note: Ten clinicians practiced at more than one site during the study period. Only data collected at the first site at which a clinician provided consent were retained.

Participants

Clinicians were included if they met the following eligibility criteria: (1) provided individual psychotherapy, (2) treated adults with depression, and (3) conducted psychotherapy sessions in English (Lewis et al., 2015). In total, 187 participants from all sites were enrolled in the parent trial. This sample size was determined using a simulation-style power analysis in the larger trial (see Lewis et al., 2015). Given this study's focus on exploring the differential influence of standardized versus tailored training, six clinicians were excluded because they participated in an online version of the training (which was inherently standardized), eight were excluded because they did not see clients (i.e., due to being in an administrative role), and 21 were excluded because they did not complete the baseline assessment. In total, 152 participants were included in this study (81% of total enrolled clinicians across sites), with 86 in the tailored and 66 in the standardized condition (see Figure 1 for CONSORT). The sample comprised 133 therapists, 11 clinic directors, six interns, and two psychiatrists.

Clinician participants attended the 4-hour training in person and completed surveys before and immediately after training. Participants were individually consented and were compensated with productivity credits (i.e., equivalent face-to-face client time) to ensure that they were not penalized for training attendance and incentivized with cash for completing the needs assessment and post-training surveys. Protocols for this study were approved by the Institutional Review Board at Indiana University.

Data Collection Procedures

All sites completed a baseline needs assessment and clinic tour. All clinicians completed a battery of self-report questionnaires (e.g., demographics, MBC intention), and a subset of clinicians and clinic directors completed focus groups and/or interviews to assess MBC implementation barriers and facilitators. These mixed-methods needs assessment data were used to inform training (e.g., incorporating strategies for working with cognitively impaired clients at sites with concerns about using MBC for this population) in the tailored condition. Specifically, self-report questionnaires were scored using established scoring guidelines for each measure (e.g., MBC intention scores). Qualitative focus groups and interviews were analyzed using a directed content and reflexive team analysis approach (Hsieh & Shannon, 2005) that combined both deductive and inductive approaches to achieve thematic saturation. Focus group analysis was completed by a team of three coders who met weekly to establish inter-rater reliability and resolve coding disagreements. Results of the focus group analysis are published elsewhere (Albright et al., 2021).
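To illustrate how agreement among multiple coders can be quantified during such a consensus process, the minimal R sketch below computes Fleiss' kappa for three raters using the irr package. The choice of statistic and the ratings matrix are illustrative assumptions; the study does not report which reliability index was used.

```r
# Illustrative sketch only: the study does not specify its agreement statistic.
# Fleiss' kappa (irr package) is one common choice for three or more coders.
# The ratings below are hypothetical, not study data.
library(irr)

ratings <- data.frame(
  coder1 = c("barrier", "facilitator", "barrier", "barrier", "facilitator"),
  coder2 = c("barrier", "facilitator", "facilitator", "barrier", "facilitator"),
  coder3 = c("barrier", "facilitator", "barrier", "barrier", "barrier")
)

kappam.fleiss(ratings)  # overall agreement across the three coders
```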

Results from the quantitative surveys and qualitative coding were used to modify the tailored training to address contextual factors impacting MBC implementation at each site randomized to the tailored condition. No changes were made to the standardized training following survey and focus group completion. Four-hour trainings were then conducted with all sites across conditions 1 month after the baseline visit, immediately after which participants completed a 15-minute battery of self-report measures. For full details about the randomized clinical trial, see Lewis and colleagues (2015).

Measures

Clinician Demographics and MBC Exposure

A 16-item questionnaire was administered to clinicians at baseline to assess demographics and prior exposure to MBC. This questionnaire assessed basic demographic information, level of education, level of previous MBC exposure, knowledge of MBC, comfort with MBC use, caseload, and theoretical orientation.

Intention to Use MBC

The TPB survey was administered during the baseline needs assessment and post-training to evaluate change in clinician intention to use MBC. MBC intention served as the primary outcome measure for the present study. Consistent with TPB recommendations, this survey was developed using the TPB Questionnaire Construction guide (Fishbein & Ajzen, 2010) to assess clinician intention to use MBC. The MBC intention subscale had three items with strong internal consistency in the current sample (α = .84). Clinicians were asked to indicate the extent to which they agreed with statements like, “I expect to use measurement-based care (administering and reviewing the PHQ-9 with clients),” on a scale of 1 (“Strongly Disagree”) to 7 (“Strongly Agree”).
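As an illustration of how such a subscale might be scored, the R sketch below (using the psych package) averages three hypothetical intention items and estimates internal consistency. The item names and simulated responses are assumptions for illustration, not the study's data or scoring script.

```r
# Minimal sketch: score a three-item intention subscale and check its
# internal consistency. Item names (tpb_intent_1 ... _3) are hypothetical.
library(psych)

set.seed(1)
dat <- data.frame(
  tpb_intent_1 = sample(1:7, 152, replace = TRUE),  # simulated 1-7 responses
  tpb_intent_2 = sample(1:7, 152, replace = TRUE),
  tpb_intent_3 = sample(1:7, 152, replace = TRUE)
)

# Subscale score as the item mean (one common convention; the paper does not
# state whether items were averaged or summed)
items <- c("tpb_intent_1", "tpb_intent_2", "tpb_intent_3")
dat$mbc_intention <- rowMeans(dat[, items])

# Internal consistency check (the paper reports alpha = .84 for the real data)
psych::alpha(dat[, items])
```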

Monitoring and Feedback Attitudes Scale (MFA)

The MFA is a 14-item scale developed to assess clinician attitudes toward measuring progress and providing feedback to clients (Jensen-Doss et al., 2016; Jensen-Doss & Hawley, 2010), a practice synonymous with MBC. It was administered at baseline and post-training and was used to assess attitudes toward MBC as a predictor of clinician intention. This measure was developed from previous scales, with two items adapted from the Utility of Diagnosis Scale (Jensen-Doss & Hawley, 2010) and the remaining items developed by MBC experts. For the current study, two subscales were used to measure perceived benefit (10 items; n = 152, α = 0.89 in the current sample) and perceived harm, or disadvantages (4 items; n = 152, α = 0.84 in the current sample) of using MBC. Clinicians indicated the extent to which they agreed or disagreed with statements such as, “Monitoring treatment progress is an important part of treatment,” on a scale of 1 (“Strongly Disagree”) to 5 (“Strongly Agree”).

Standardized Training Overview

Clinicians in the standardized training condition received 4 hours of training that presented a basic introduction to MBC, research evidence supporting MBC, information to address anticipated clinician concerns about MBC (standardized across the sites in the standardized condition), and the clinical utility of MBC (Tables 1 and 2). The training focused on the use of the Patient Health Questionnaire-9 (PHQ-9) when engaging in MBC. The PHQ-9 is a reliable and valid 9-item measure that captures the severity of depressive symptoms over the past 2 weeks (Kroenke et al., 2010). Clinicians were trained to (a) administer the PHQ-9, (b) interpret scores over time, and (c) discuss the meaning of scores with their clients; these three components constitute fidelity to MBC if completed in a single clinical encounter (i.e., a therapy session; Lewis et al., 2015).
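For readers unfamiliar with PHQ-9 score interpretation, the short R helper below maps a total score to the severity bands published by Kroenke and colleagues. It is an illustrative sketch, not material reproduced from the training itself.

```r
# Illustrative helper: map a PHQ-9 total score (0-27) to the commonly used
# severity bands (Kroenke et al.). Not taken from the study's training slides.
phq9_severity <- function(total) {
  stopifnot(total >= 0, total <= 27)
  cut(total,
      breaks = c(-1, 4, 9, 14, 19, 27),
      labels = c("minimal", "mild", "moderate",
                 "moderately severe", "severe"))
}

phq9_severity(7)   # "mild"
phq9_severity(16)  # "moderately severe"
```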

Table 1.

Tailoring components: Content.

Standardized Site 1 Site 5 Site 6 Site 9 Site 10 Site 12
Reviewed site-specific barriers and facilitators to implementation X X X X X X
MBC/PHQ-9 BASICS  
Introduce MBC (conceptual framework, purposes) X X X X X X X
Description of PHQ-9 use in other languages, or for other populations X X
Defined each individual purpose of MBC, written out on slide for benefit of trainees X
Provided info about PHQ-9 sensitivity to change, objectivity about client progress, and utility for important conversations w/clients X
Provided more information about how PHQ-9 use protects against cognitive errors by clinician X
RESEARCH  
Present research evidence for and clinical utility of PHQ-9 X X X X X X X
Reviewed facts/research about the PHQ-9 in more detail (i.e. whom it works for, how use helps clients, etc.) X X X
Reviewed research regarding use of PHQ-9 w/medical co-morbidities X
Reviewed research related to utility of PHQ-9 w/regard to severe mental illness X
STRATEGIES  
Present strategies for using PHQ-9 X X X X X X X
Reviewed strategies for working with lack of progress X X X X X X X
Provided tips for administering PHQ-9 with cognitively impaired clients X X X X
Emphasized trainee autonomy for when/how often the measure should be used X

Note: MBC = measurement-based care; PHQ-9 = Patient Health Questionnaire-9.

Table 2.

Tailoring components: Structure.

Standardized Site 1 Site 5 Site 6 Site 9 Site 10 Site 12
MODELING/ROLE PLAY  
Active Role Play (practice administering/interpreting PHQ-9 scores during training) X X X X X X X
Reviewed three major components of MBC implementation (administer the measure, record the score, discuss scores with client) step by step with clinical vignette/sample client X X X X X X X
Trainer Modeling (administration and discussion of PHQ-9 use w/client) X X X X X X X
Video Modeling (discussing PHQ-9 w/upset or annoyed clients) X X X X X X X
Video about MBC overview, included example modeling use/interpretation of scores with a client X
Clinical Vignette Example to demonstrate PHQ-9 score use X
ACTIVE DISCUSSIONS  
Discussed IMPACT trials, superiority of tailored collaborative care to usual care (Hunkeler et al., 2006) X X
Discussed intention to use MBC at baseline measurement X
Discussed length of time it took for trainees to complete the PHQ-9 X X X
Discussed trainees’ current use of MBC X X X
Discussed what might incentivize trainees to use MBC X
Discussed whether it's triggering to start a session w/PHQ-9 questions about suicidality X
QUOTES/FEEDBACK FROM CLINICIANS  
Provided clinician perspectives on MBC as demonstration of clinical utility of MBC X X X X X X X
Audio quote from clinician about using PHQ-9 scores to refer client to specialist (endocrinologist) X
Client example about review of PHQ-9 score trajectory over time and how it helped X X
Words from identified champion of MBC implementation from another Centerstone site X
Provided client perspectives on MBC and its utility X X
Provided three quotes from other Centerstone clinicians about clinical utility and how they fit it into session X
Incorporated quote from clinician about how the highest quality of care includes MBC X

Note: MBC = measurement-based care; PHQ-9 = Patient Health Questionnaire-9.

Training incorporated active learning strategies recommended by adult learning theory and empirical research to increase both knowledge and skills (Beidas & Kendall, 2010; El-Tannir, 2002; Herschell et al., 2010; Sholomskas et al., 2005). Specific strategies included in vivo practice administering and discussing the PHQ-9 with a standardized client (i.e., role plays with feedback), modeling (i.e., watching an expert deliver a standardized MBC example with challenging clients), and active discussions (Tables 1 and 2). Throughout the training, these strategies encouraged clinicians to be engaged and ask questions, and trainers directed conversation to focus on enhancing “in therapy room” fidelity to MBC. Clinicians were taught strategies to use when clients do not make progress, and they received several supplementary training materials (e.g., training binders, copies of the PHQ-9, and a PHQ-9 scoreboard to facilitate administration to clients). Overall, the goal of the standardized training was to build clinician MBC knowledge and skill to enable implementation with fidelity. As a result, all examples, information, and role plays provided to trainees were identical and standardized across all six sites in the condition.

Tailored Training Overview

Clinicians in the tailored training condition also received 4 hours of training that contained all topics covered in the standardized training (i.e., basic introduction to MBC, MBC research evidence, and MBC clinical utility). Information from the needs assessment data sources (i.e., surveys, focus groups, interviews, and clinic tours) informed the tailoring of examples, information, and active learning training activities for the six sites in the tailored condition. Data from the surveys and focus groups were used to identify site-level implementation barriers and facilitators that aligned with six contextual factors of the Framework for Dissemination thought to impact successful implementation: (1) norms and attitudes, (2) structure and process, (3) resources, (4) policies and incentives, (5) networks and linkages, and (6) media and change agents (Mendel et al., 2008). Average site scores on quantitative surveys were computed and compared to national averages presented in the literature (e.g., national averages on the MFA measure) to identify site-specific barriers and facilitators.
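The R sketch below illustrates, under assumed placeholder values, what this benchmarking step could look like: site means on an MFA subscale are compared against a national average and flagged as candidate barriers or facilitators. Neither the benchmark nor the site means are values from the study or the MFA literature.

```r
# Hedged sketch of the benchmarking step. All numbers are placeholders.
national_benefit_mean <- 4.0  # placeholder benchmark, NOT a published value

site_benefit_means <- c(`1` = 4.3, `5` = 3.6, `6` = 4.1,
                        `9` = 3.8, `10` = 4.4, `12` = 3.9)  # hypothetical site means

flags <- ifelse(site_benefit_means < national_benefit_mean,
                "candidate barrier: expand content on MBC benefits",
                "candidate facilitator: leverage existing buy-in")

data.frame(site = names(site_benefit_means), flag = flags)
```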

The process of tailoring the training content was carried out by a team of implementation scientists with expertise in using implementation mechanisms to match strategies to barriers, as discussed briefly in Lewis, Klasnja, et al. (2018). Barriers and facilitators from this needs assessment were used to inform the tailoring, either by adjusting training content to be more relevant to the specific site (Table 1) and/or by adjusting structure through site-specific discussions, activities, or examples (Table 2). For instance, at sites where clinicians noted that they did not view EBPs to be as important as clinical judgment, trainers presented research demonstrating robust evidence of inaccurate clinician perception of client progress and audio-recorded clinician quotes discussing the PHQ-9's clinical utility. Data on site-specific barriers and facilitators were presented on slides at the end of training in the tailored, but not the standardized, condition. A comprehensive description of the tailoring of training was developed based on a systematic slide deck comparison between conditions and across sites (Tables 1 and 2). The standardized and tailored trainings were conducted by different clinical psychologists to avoid contamination of the conditions; both were trained in cognitive-behavioral therapy and were MBC experts. In summary, the goal of the tailored training was to enhance MBC implementation with fidelity, but also to address potential site-specific barriers to MBC that may limit scale-up.

Statistical Analyses

Analyses for this project were conducted using R statistical software (R Core Team, 2013). A multivariate imputation by chained equations (MICE) analysis was employed to account for missing pre- and post-training data across variables of interest (Van Buuren & Groothuis-Oudshoorn, 2011). Five datasets were imputed, and all results refer to pooled results across the datasets. Results with imputed data were compared to a model without missing data to check for robustness. Exploratory analyses assessed whether assumptions (i.e., independence and normality of the data) were met. To determine whether randomization led to equivalent groups of clinicians in the standardized versus tailored training conditions, independent samples t-tests were run for continuous variables (i.e., age, attitudes), and Pearson's chi-square tests of independence were conducted for nominal variables (i.e., sex, race). Bivariate correlation analyses were run to evaluate the strength of the relation between continuous variables of interest (putative predictors) and intention to use MBC post-training (primary outcome). Variables included were age, attitudes, knowledge of MBC, comfort with MBC use, and extent of current MBC use.
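A minimal R sketch of these steps is shown below, using simulated data and hypothetical variable names (e.g., intent_post, condition) rather than the study's actual dataset or scripts.

```r
# Sketch of the missing-data and baseline-equivalence steps described above.
# The data frame is simulated for illustration; all column names are hypothetical.
library(mice)

set.seed(2015)
n <- 152
clinician_data <- data.frame(
  site        = factor(rep(1:12, length.out = n)),
  condition   = factor(rep(c("standardized", "tailored"), times = c(66, 86))),
  sex         = factor(sample(c("female", "male"), n, replace = TRUE, prob = c(.8, .2))),
  age         = rnorm(n, mean = 42, sd = 12),
  intent_base = pmin(pmax(rnorm(n, 5.5, 1.2), 1), 7),
  intent_post = pmin(pmax(rnorm(n, 6.0, 0.8), 1), 7)
)
clinician_data$intent_post[sample(n, 15)] <- NA  # induce some missingness

# Multivariate imputation by chained equations: five imputed datasets
imp <- mice(clinician_data, m = 5, printFlag = FALSE)

# Baseline equivalence across conditions on the observed data
t.test(age ~ condition, data = clinician_data)                   # continuous variable
chisq.test(table(clinician_data$sex, clinician_data$condition))  # nominal variable

# Bivariate correlation between a putative predictor and the primary outcome
cor(clinician_data$age, clinician_data$intent_post, use = "pairwise.complete.obs")
```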

For Aim 1, a two-part analysis was conducted to assess the overall impact of training as an implementation strategy for increasing intention to use MBC. For Aim 2, a linear mixed effects model was used to assess change in MBC intention post-training across conditions. Because the data come from clinicians nested within 12 clinic sites, the standard OLS model was modified to include a random coefficient for site, which accounts for any unobserved group differences between sites. With this modification, the coefficients refer to the average effect across sites.

Similar to the analysis above, linear mixed effects models were used to address Aim 3: identifying predictors (either condition or clinician-level variables) of clinician intention to use MBC. This model incorporated all clinician-level variables of interest, including previous experience with MBC, attitudes toward MBC, years of clinical experience, primary theoretical orientation, and caseload. Variables for gender, age, ethnicity, race, licensure status, and baseline intention to use MBC were included in the regression model as controls.
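Continuing the simulated-data sketch above, the mixed models might be specified as follows. Pooling lmer fits across imputations with mice::pool() requires the broom.mixed package, the random intercept for site is one way to implement the "random coefficient for site" described above, and only a subset of the study's predictors is simulated; all names remain hypothetical.

```r
# Sketch of the pooled mixed models, reusing `imp` from the imputation sketch.
library(lme4)
library(mice)
library(broom.mixed)  # needed so pool() can tidy lmer fits

# Aims 1-2 style model: post-training intention regressed on training
# condition and baseline intention, with a random intercept for clinic site
fits_aim2 <- with(imp, lmer(intent_post ~ condition + intent_base + (1 | site)))
summary(pool(fits_aim2))  # pooled fixed effects across the five imputations

# Aim 3 style model: add clinician-level predictors (the study also included
# MBC experience, attitudes, orientation, caseload, etc., not simulated here)
fits_aim3 <- with(imp, lmer(intent_post ~ condition + age + sex + intent_base + (1 | site)))
summary(pool(fits_aim3))
```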

Results

Descriptive statistics characterized the participant sample across the standardized (n = 66) and tailored training conditions (n = 86; see Table 3 for full participant demographics). The majority of participating clinicians were female (74.2% in standardized condition, 83.7% in tailored), White (77.3% in standardized, 91.9% in tailored), had Master's degrees (90.9% in standardized, 95.3% in tailored), and had under 10 years of clinical experience (54.5% in standardized, 60.5% in tailored). We explored whether tailored and standardized condition randomization was successful. Results provide evidence to support the randomization, with the only statistical difference between the groups of clinicians being caseload, such that those in the standardized condition reported having a larger caseload on average. Correlations between MBC variables were low to moderate in size (−0.23 to 0.54), offering little evidence of multicollinearity among these variables.

Table 3.

Summary of baseline characteristics.

Characteristic: Standardized sites (n = 66), M and SD; Tailored sites (n = 86), M and SD
Age 43.43 12.51 41.91 12.10
Intention to Use MBC 5.60 1.27 5.36 1.12
Attitudes toward EBPs Perceived Benefit 4.15 0.45 4.20 0.50
Perceived Harm 2.27 0.64 2.26 0.71
MBC Use Knowledgeable in MBC Use 1.94 0.70 1.65 0.68
Comfortable in MBC Use 2.46 0.90 2.10 0.81
Extent of Current MBC Use 2.31 1.11 1.74 0.86
Standardized sites (n = 66), n and %; Tailored sites (n = 86), n and %
Sex Female 49 74.2 72 83.7
Male 17 25.8 14 16.3
Ethnicity Hispanic/Latino 2 3.0 1 1.2
Non-Hispanic 62 93.9 85 98.8
Race White 51 77.3 79 91.9
Non-White 14 21.2 7 8.1
Licensure Status Licensed 39 59.1 55 64.0
Not Licensed 27 40.9 31 36.0
Highest Degree Status Bachelor's Degree 2 3.0 2 2.3
Master's Degree 60 90.9 82 95.3
Doctoral Degree 3 4.5 1 1.2
Other 1 1.5 -- --
Primary Theoretical Orientation Cognitive Behavioral 40 60.6 45 52.3
Other 25 37.9 41 47.7
Years of Clinical Experience 0–10 years 36 54.5 52 60.5
10–20 years 18 27.3 19 22.1
>20 years 11 16.7 15 17.4
Current Caseload 1–20 clients 10 15.0 26 30.3
21–50 clients 20 30.3 29 33.7
>50 clients 35 53.0 29 33.7
Previous Experience in MBC Attended MBC Workshop 12 18.2 16 18.6
Read MBC Research Articles 27 40.9 17 19.8
Supervised in MBC 8 12.1 9 10.5
No Experience/Training 26 39.4 51 59.3

Note: Tests were conducted on baseline characteristics to test randomization between conditions. Continuous variables were analyzed using t-tests whereas categorical variables were analyzed using chi-square test. No differences were found between conditions. EBPs = evidence-based practices; MBC = measurement-based care.

Results of linear mixed modeling for Aim 1 indicate that there was a positive, statistically significant relationship between baseline clinician intention to use MBC and post-training intention to use MBC, when controlling for training condition (Table 4); clinician intention increased from baseline (M = 5.46, SD = 1.19) to post-training (M = 6.05, SD = 0.82; B = 0.38, SE = .06, p < .05, Cohen's d = 0.58). Results of the regression analysis for Aim 2 (Table 4) indicated that training condition did not significantly predict change in MBC intention (B = −0.03, SE = .19, p > .05, Cohen's d = 0.15). Given the nonsignificant findings, we conducted a sensitivity power analysis to determine whether the study was sufficiently powered to detect training condition effects. Assuming a power of .8 and adjusting for design effects due to site clustering (design effect = 1.66), our study was powered to detect medium effect sizes (0.6). Based on a prior meta-analysis, medium effect size changes in intention are sufficient to produce small-to-medium changes in behavior (Webb & Sheeran, 2006). As a result, our study was sufficiently powered to detect meaningful changes in intention that would have a direct impact on MBC use across conditions. For Aim 3, only baseline intention to use MBC (B = 0.39, p < .05), which was included in the model as a control, emerged as a significant predictor of post-training intention; all other predictors had p values > .05.
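The reported sensitivity calculation can be approximated with the back-of-the-envelope R sketch below, which divides the analyzed sample by the stated design effect and solves for the detectable standardized difference at 80% power. It assumes roughly equal groups and is not the authors' exact procedure.

```r
# Approximate sensitivity check: effective sample size after the clustering
# adjustment, then the detectable Cohen's d for a two-sample comparison.
n_effective <- 152 / 1.66            # ~92 clinicians after applying the design effect

power.t.test(n = n_effective / 2,    # ~46 per condition, assuming equal groups
             power = 0.80,
             sig.level = 0.05,
             type = "two.sample")$delta  # detectable d, roughly 0.6
```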

Table 4.

Aim 1: Results of training outcome analysis to determine whether tailored training and baseline intention to use MBC affected post-training intention.

Variable Coefficient Est. SE
Intercept 4.18** .40
Tailored Condition −0.14 .16
Baseline Intent 0.38** .06
Aim 2: Results for whether tailored training condition predicted post-training intention to use MBC
Variable Coefficient Est. SE
Intercept 6.07** .30
Tailored Condition −0.03 .19

Note: Model includes random coefficient for clinic site, n = 152. MBC = measurement-based care.

**p < .05.

Discussion

Two types of training were evaluated in this study to assess their differential influence on training outcomes: (1) standardized training, which was a “one-size-fits-all” didactic workshop containing active learning techniques (e.g., demonstrations and role plays) and (2) tailored training, which was an adapted version of the standardized training designed to address site-specific individual and contextual factors that might influence MBC use. Results for Aim 1 yielded significant, positive results that suggest training was useful for generating clinician intention to use MBC. For Aim 2, this study revealed no differences between conditions with respect to clinician intention to use MBC post-training. For Aim 3, only baseline intention to use MBC predicted intention post-training; no other clinician variables (e.g., previous MBC experience, attitudes towards MBC) were predictive of this outcome.

Although no differences were observed between conditions, results suggest that training was a useful strategy for enhancing clinician intention to use MBC. Clinicians experienced a large, statistically significant increase in intention to use MBC from baseline to post-training. Community mental health centers, in particular, have limited time for training; thus, it is encouraging that such strength of intention could be engendered in only 4 hours. This outcome is important for the field of implementation science, given that greater intention to engage in a behavior often influences actual behavior change/implementation, according to the TPB (Ajzen, 1991), yet concrete measures of adoption are needed to confirm the benefits of this training.

The Influence of Training Condition on Clinician Intention to Use MBC

That training condition did not influence clinician intention to use MBC was surprising, but consistent with at least one other research study comparing context-tailored training to a standardized workshop training (Baer et al., 2009). That is, there appeared to be no advantage to using tailoring methods to inform training structure and content for individual sites over the standardized, one-size-fits-all training that contained active learning strategies. This study contributes to the evolving evidence base that suggests the effects of tailoring strategies are variable, and when superior, the effect is small to moderate (Baker et al., 2015). Given that training is one of the most commonly used implementation strategies, replicating our finding that standardized and tailored training have equivalent impacts on the likelihood of adoption is critical for the practice of implementation. This finding implies that best-practice, standardized training, which is less costly in terms of time, resources, and expert involvement compared with tailored training, may be sufficient for influencing knowledge dissemination and implementation outcomes of interest (Baer et al., 2009). Given the complexity of conducting needs assessments, rapidly analyzing data, and altering the trainings themselves, it is encouraging that standardized training had an equivalent impact on clinician intention. Based on this result, multi-site organizations seeking to conduct EBP implementation efforts may opt to focus on the incorporation of active learning strategies into standardized training and invest time and finances into other elements of the implementation effort.

Active learning strategies incorporated across both conditions were designed to meaningfully engage clinicians, and as a result, the trainer addressed many questions or concerns during training. The reactive, in-the-moment customization afforded by active learning strategies could be construed as “tailoring,” or at least there was perhaps sufficient opportunity for tailoring to address factors that influence adoption (e.g., norms, attitudes, and perceived control). For instance, at Site 12, an active discussion about suicidality was incorporated in the tailored training to address concerns expressed during the baseline assessment. A clinician in one of the standardized trainings, however, also had opportunities to raise similar concerns and have them skillfully addressed.

It may also be the case that tailoring has the potential to differentially influence other outcomes of interest not measured in the present study. For example, Rollnick et al. (2002) found that tailoring training increased acceptability of the intervention for general practitioners. Rollnick and colleagues utilized structured tailoring and minimized barriers to attendance by conducting the training in the practitioners’ workplace during their lunch hour. This approach to tailoring allowed for highly personalized interfacing with each individual and may have driven their positive findings. The feasibility of applying this approach to training larger groups of clinicians in EBPs may be limited, as trainers and the organizations that employ them would have to devote significant time and resources to allow for pre- and post-training follow-up with each trainee. The needs assessment in our study solicited feedback to address barriers for most clinicians at each individual site, so perhaps in addressing more global feedback, the training did not successfully address individual clinician needs.

Consideration of the core elements of the training itself may help to explain the nature of our results. Across sites, tailoring focused on structure occurred more often than tailoring focused on content. Each tailored training had three or four unique structural components, whereas tailoring focused on content was much more variable, ranging from one to six content elements. Our null findings may reflect limits on the extent to which a training can be tailored, as only a certain number of contextual or individual factors can be addressed over the course of a 4-hour training (Baer et al., 2009). Thus, despite the comprehensiveness and rigor of the needs assessment conducted to inform training development, tailoring of the training was not suited to addressing certain factors that may have had greater influence on training outcomes (e.g., leadership buy-in). The literature suggests that training design should account for factors at multiple levels (i.e., provider, client, and organization) and discern which implementation strategies, like training, may be most influential in addressing targets (often referred to as barriers or determinants) via theorized mechanisms to produce implementation outcomes (Williams et al., 2017). A one-time training with voluntary clinician participation at a particular site, for instance, is unlikely to alter an organizational culture that is unsupportive of EBP uptake. Thus, further tailoring beyond training as part of post-training consultation may strengthen the effect of tailoring on implementation outcomes (e.g., Bearman et al., 2013), which is being tested in the parent randomized trial.

Predictors of Intention to Use MBC Post-Training

The predictors in this study were exclusively clinician-level factors. Results of the linear mixed model yielded no strong predictors of intention to use MBC post-training, with the exception of baseline intention. Inability to identify a predictor of intention is not entirely surprising given that some previous studies have failed to relate individual clinician factors to training outcomes (Baer et al., 2004; Baer et al., 2009). Overall in the literature, findings regarding individual-level predictors are mixed, except that clinician attitudes toward EBPs have consistently been established as a predictor of implementation outcomes (Aarons & Sawitzky, 2006). It is possible that we did not observe a relation between attitudes and intention in this study due to the fact that attitudes were evaluated specifically about MBC rather than about EBPs in general. MBC is less complex than other EBPs (e.g., manualized cognitive-behavioral therapy), which may have influenced the impact of attitudes on MBC intention. Additionally, attitudes may play a more important role in influencing actual MBC implementation rather than intention alone, highlighting a key area for future research that may guide implementation strategy timing and use for MBC scale up.

The lack of significant individual-level predictors of MBC intention in this study may also suggest that broader organizational factors may either interact with individual factors to influence outcomes, or even overpower the influence of individual factors on intention to use MBC (Aarons & Sawitzky, 2006). Understanding the exact nature of this influence is critically important for informing best practices for implementation in particular settings. One study identified organizational culture and clinicians’ intentions to adopt an EBP as important factors for increasing adoption (Williams et al., 2017). Assessing organizational factors as predictors that may influence intention to use MBC immediately post-training and how they relate to the actual use of MBC in later stages of implementation are key future directions that may contribute to our understanding of how to develop successful implementation strategies in the early stages of implementation.

Limitations

There were several noteworthy limitations. First, the review of tailored training studies, while comprehensive, was not systematic and therefore may not have captured all relevant studies. Second, in-the-moment, reactive tailoring within the context of active learning strategies may have occurred in the standardized training; however, tracking this was not feasible across twelve 4-hour training sessions. Future studies may consider exploring the tailoring afforded by active learning strategies to better understand how these strategies impact outcomes. Third, different trainers led the standardized and tailored trainings to prevent contamination across conditions. However, these two trainers had different levels of expertise (one had held a PhD for 5 years, the other for 1 year), and trainer characteristics/style may have affected results. Future studies should consider the impact of trainer characteristics on training outcomes (Boyd et al., 2017).

Fourth, our data may have been prone to selection bias, as clinicians who actively participated in training and surveys may have had more favorable views about MBC. We were unable to collect data on nonparticipating clinicians to assess the representativeness of our sample; however, our sample demographics aligned with the broader population of Master's-level clinicians (Aarons et al., 2012). Fifth, our study only evaluated the impact of tailoring MBC training for a behavioral health treatment setting. Tailoring may be more impactful and should be studied further in other contexts and for other health conditions. Sixth, our study explored a selection of predictors guided by the extant literature rather than by a conceptual model. The field would benefit from additional research to establish a conceptual model to organize possible predictors of intention. Finally, our study was only powered to detect medium effect sizes, thereby limiting our ability to identify small effect size impacts of condition on clinician MBC intention.

Conclusion

This study evaluated the impact of a brief training on increasing intention to use MBC, examined the potential differential influence of tailored versus standardized training, and investigated clinician-level variables as putative predictors of intention to use MBC. While the intention to use MBC increased across training conditions, standardized versus tailored training did not differentially influence outcomes. The predictor analysis yielded only baseline intention as an influential predictor. Findings suggest that training remains an effective strategy for conferring clinician adoption of an EBP and that tailoring may not be necessary. Trainers can consider devoting resources to enhancing standardized formats with active learning strategies and to other elements of an MBC implementation effort.

Footnotes

The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Dr. Cara Lewis is a Co-Founding Editor-in-Chief of Implementation Research and Practice. As such, Dr. Lewis was not involved in the peer-review process for this manuscript.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Institute on Alcohol Abuse and Alcoholism, National Institute on Drug Abuse, and National Institute of Mental Health (grant numbers T32AA007459, K23DA050729, F31MH111134, and R01MH103310).

References

  1. Aarons G., Sawitzky A. (2006). Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health and Mental Health Services Research, 33(3), 289. 10.1007/s10488-006-0039-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Aarons G. A., Glisson C., Green P. D., Hoagwood K., Kelleher K. J., Landsverk J. A. (2012). The organizational social context of mental health services and clinician attitudes toward evidence-based practice: A United States national study. Implementation Science, 7(1), 1–15. 10.1186/1748-5908-7-56 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Ajzen I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. 10.1016/0749-5978(91)90020-T [DOI] [Google Scholar]
  4. Albright K., Navarro E. I., Jarad I., Boyd M. R., Powell B. J., Lewis C. C. (2021). Communication strategies to facilitate the implementation of new clinical practices: A qualitative study of community mental health therapists. Translational Behavioral Medicine, 12(2), 324–334. 10.1093/tbm/ibab139 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Baer J., Wells E., Rosengren D., Hartzler B., Beadnell B., Dunn C. (2009). Agency context and tailored training in technology transfer: A pilot evaluation of motivational interviewing training for community counselors. Journal of Substance Abuse Treatment, 37(2), 191–202. 10.1016/j.jsat.2009.01.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Baer J. S., Rosengren D. B., Dunn C. W., Wells E. A., Ogle R. L., Hartzler B. (2004). An evaluation of workshop training in motivational interviewing for addiction and mental health clinicians. Drug and Alcohol Dependence, 73(1), 99–106. 10.1016/j.drugalcdep.2003.10.001 [DOI] [PubMed] [Google Scholar]
  7. Baker R., Camosso-Stefinovic J., Gillies C., Shaw E. J., Cheater F., Flottorp S., Jaeger C. (2015). Tailored interventions to address determinants of practice. Cochrane Database of Systematic Reviews, 4, 10.1002/14651858.CD005470.pub3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bearman S. K., Weisz J. R., Chorpita B. F., Hoagwood K., Ward A., Ugueto A. M., Bernstein A., & The Research Network on Youth Mental Health (2013). More practice, less preach? The role of supervision processes and therapist characteristics in EBP implementation. Administration and Policy in Mental Health and Mental Health Services Research, 40(6), 518–529. 10.1007/s10488-013-0485-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Beidas R., Kendall P. (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17(1), 1–30. 10.1111/j.1468-2850.2009.01187.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Beidas R., Koerner K., Weingardt K., Kendall P. (2011). Training research: practical recommendations for maximum impact. Administration and Policy in Mental Health and Mental Health Services Research, 38(4), 223–237. 10.1007/s10488-011-0338-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Beidas R. S., Edmunds J., Ditty M., Watkins J., Walsh L., Marcus S., Kendall P. (2014). Are inner context factors related to implementation outcomes in cognitive-behavioral therapy for youth anxiety? Administration and Policy in Mental Health and Mental Health Services Research, 41(6), 788–799. 10.1007/s10488-013-0529-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Boyd M., Lewis C., Scott K., Krendl A., Lyon A. (2017). The creation and validation of the measure of effective attributes of trainers (MEAT). Implementation Science, 12(1), 73. 10.1186/s13012-017-0603-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Brown C. H., Chamberlain P., Saldana L., Padgett C., Wang W., Cruden G. (2014). Evaluation of two implementation strategies in 51 child county public service systems in two states: Results of a cluster randomized head-to-head implementation trial. Implementation Science, 9, 134. 10.1186/s13012-014-0134-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Bryson S., Ostmeyer K. (2014). Increasing the effectiveness of community mental health center social skills groups for children with autism spectrum disorder: A training and consultation example. Administration and Policy in Mental Health and Mental Health Services Research, 41(6), 808–821. 10.1007/s10488-013-0533-1 [DOI] [PubMed] [Google Scholar]
  15. Carpenter K., Cheng W., Smith J., Brooks A., Amrhein P., Wain R., Nunes E. (2012). “Old dogs” and new skills: How clinician characteristics relate to motivational interviewing skills before, during, and after training. Journal of Consulting and Clinical Psychology, 80(4), 560. 10.1037/a0028362 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Chamberlain P., Brown C. H., Saldana L., Reid J., Wang W., Marsenich L., Bouwman G. (2008). Engaging and recruiting counties in an experiment on implementing evidence-based practice in California. Administration and Policy in Mental Health and Mental Health Services Research, 35(4), 250–260. 10.1007/s10488-008-0167-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Chor K. H. B., Wisdom J. P., Olin S. C. S., Hoagwood K. E., Horwitz S. M. (2015). Measures for predictors of innovation adoption. Administration and Policy in Mental Health and Mental Health Services Research, 42(5), 545–573. 10.1007/s10488-014-0551-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. El-Tannir A. (2002). The corporate university model for continuous learning, training and development. Education Training, 44(2), 76–81. 10.1108/00400910210419973 [DOI] [Google Scholar]
  19. Fishbein M., Ajzen I. (2010). Predicting and changing behavior: The reasoned action approach. Psychology Press. [Google Scholar]
  20. Friesen-Storms J., Moser A., Loo S., Beurskens A., Bours G. (2015). Systematic implementation of evidence-based practice in a clinical nursing setting: A participatory action research project. Journal of Clinical Nursing, 24(1-2), 57–68. 10.1111/jocn.12697 [DOI] [PubMed] [Google Scholar]
  21. Hatfield D., Ogles B. (2007). Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services research, 34(3), 283–291. 10.1007/s10488-006-0110-y [DOI] [PubMed] [Google Scholar]
  22. Herschell A., Kolko D., Baumann B., Davis A. (2010). The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review, 30(4), 448–466. 10.1016/j.cpr.2010.02.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Hsieh H. F., Shannon S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. 10.1177/1049732305276687 [DOI] [PubMed] [Google Scholar]
  24. Hunkeler, E. M., Katon, W., Tang, L., Williams, J. W., Kroenke, K., Lin, E. H., ... Unützer, J. (2006). Long term outcomes from the IMPACT randomised trial for depressed elderly patients in primary care. BMJ, 332(7536), 259–263. 10.1136/bmj.38683.710255.BE [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Jensen-Doss A., Haimes E., Smith A., Lyon A., Lewis C., Stanick C., Hawley K. (2016). Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Administration and Policy in Mental Health and Mental Health Services Research, 45(1), 48–61. 10.1007/s10488-016-0763-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Jensen-Doss A., Hawley K. (2010). Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child & Adolescent Psychology, 39(6), 885–896. 10.1080/15374416.2010.517169 [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Kroenke K., Spitzer R., Williams J., Löwe B. (2010). The patient health questionnaire somatic, anxiety, and depressive symptom scales: A systematic review. General Hospital Psychiatry, 32(4), 345–359. 10.1016/j.genhosppsych.2010.03.006 [DOI] [PubMed] [Google Scholar]
  28. Lambert M., Harmon C., Slade K., Whipple J., Hawkins E. (2005). Providing feedback to psychotherapists on their patients’ progress: Clinical results and practice suggestions. Journal of Clinical Psychology, 61(2), 165–174. 10.1002/jclp.20113 [DOI] [PubMed] [Google Scholar]
  29. Lambert M., Whipple J., Hawkins E., Vermeersch D., Nielsen S., Smart D. (2003). Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice, 10(3), 288–301. 10.1093/clipsy.bpg025 [DOI] [Google Scholar]
  30. Lewis C., Boyd M., Puspitasari A., Navarro E., Howard J., Kassab H., Hoffman M., Scott K., Lyon A., Douglas S., Simon G., Kroenke K. (2018). Implementing measurement-based care in behavioral health: A review. JAMA Psychiatry, 76(3), 324–335. 10.1001/jamapsychiatry.2018.3329 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Lewis C., Scott K., Marti C., Marriott B., Kroenke K., Putz J., Mendel P., Rutkowski D. (2015). Implementing measurement-based care (iMBC) for depression in community mental health: A dynamic cluster randomized trial study protocol. Implementation Science, 10(1), 1–14. 10.1186/s13012-015-0313-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Lewis C. C., Klasnja P., Powell B. J., Lyon A. R., Tuzzio L., Jones S., Weiner B. (2018). From classification to causality: Advancing understanding of mechanisms of change in implementation science. Frontiers in Public Health, 6, 136. 10.3389/fpubh.2018.00136 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Mendel P., Meredith L., Schoenbaum M., Sherbourne C., Wells K. (2008). Interventions in organizational and community context: A framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health and Mental Health Services Research, 35(1-2), 21–37. 10.1007/s10488-007-0144-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Proctor E., Silmere H., Raghavan R., Hovmand P., Aarons G., Bunger A., Griffey R., Hensley M. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/.
  36. Rollnick S., Kinnersley P., Butler C. (2002). Context-bound communication skills training: Development of a new method. Medical Education, 36(4), 377–383. 10.1046/j.1365-2923.2002.01174.x [DOI] [PubMed] [Google Scholar]
  37. Sanders M., Murphy-Brennan M., McAuliffe C. (2003). The development, evaluation and dissemination of a training programme for general practitioners in evidence-based parent consultation skills. International Journal of Mental Health Promotion, 5(4), 13–20. 10.1080/14623730.2003.9721914 [DOI] [Google Scholar]
  38. Sanders M., Turner K. (2005). Reflections on the challenges of effective dissemination of behavioural family intervention: Our experience with the triple P-positive parenting program. Child and Adolescent Mental Health, 10(4), 158–169. 10.1111/j.1475-3588.2005.00367.x [DOI] [PubMed] [Google Scholar]
  39. Scott K., Lewis C. (2015). Using measurement-based care to enhance any treatment. Cognitive and Behavioral Practice, 22(1), 49–59. 10.1016/j.cbpra.2014.01.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Sholomskas D., Syracuse-Siewert G., Rounsaville B., Ball S., Nuro K., Carroll K. (2005). We don’t train in vain: A dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology, 73(1), 106. 10.1037/0022-006X.73.1.106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Siddiqi N., Young J., Cheater F., Harding R. (2008). Educating staff working in long-term care about delirium: The Trojan horse for improving quality of care? Journal of Psychosomatic Research, 65(3), 261–266. 10.1016/j.jpsychores.2008.05.014 [DOI] [PubMed] [Google Scholar]
  42. Turner K., Sanders M. (2006). Dissemination of evidence-based parenting and family support strategies: learning from the triple P—positive parenting program system approach. Aggression and Violent Behavior, 11(2), 176–193. 10.1016/j.avb.2005.07.005 [DOI] [Google Scholar]
  43. Van Buuren S., Groothuis-Oudshoorn K. (2011). Mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3), 1–67. 10.18637/jss.v045.i03 [DOI] [Google Scholar]
  44. Webb T. L., Sheeran P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132(2), 249. 10.1037/0033-2909.132.2.249 [DOI] [PubMed] [Google Scholar]
  45. Williams N., Glisson C., Hemmelgarn A., Green P. (2017). Mechanisms of change in the ARC organizational strategy: Increasing mental health clinicians’ EBP adoption through improved organizational culture and capacity. Administration and Policy in Mental Health and Mental Health Services Research, 44(2), 269–283. 10.1007/s10488-016-0742-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
