Abstract
Evidence-based practice (EBP) attitudes were measured in a sample of Los Angeles County mental health service providers. Three types of data were collected: provider demographic characteristics, attitudes toward EBP in general, and attitudes toward specific EBPs being implemented in the county. Providers could reliably rate characteristics of specific EBPs, and these ratings differed across interventions. Preliminary implementation data indicate that appealing features of an EBP relate to the degree to which providers use it. These findings suggest that assessing EBP-specific attitudes is feasible and may offer implementation-relevant information beyond that gained solely from providers' general attitudes toward EBP.
Keywords: evidence-based practice, provider attitudes, implementation, community mental health
Over the past few decades, researchers have established considerable support for the effectiveness of evidence-based practices (EBPs) in real-world settings (McHugh & Barlow, 2010). In addition to yielding superior client outcomes relative to alternative treatments, the uptake of EBPs has been associated with better workforce outcomes, such as reduced burnout, among community providers (Aarons, Fettes, Flores Jr., & Sommerfeld, 2009). A Delphi poll of psychotherapy experts in 2000 predicted that evidence-based psychotherapies would become mandated and, by extension, widely implemented by 2010 (Norcross, Hedges, & Prochaska, 2002). Although progress has been slower than anticipated, there are signs that reform in public mental health systems is beginning to result in the increased adoption of EBPs in community settings (Cooper & Aratani, 2009; Kazdin, 2008).
By 2008, 12 states had mandated the use of EBPs in public mental health systems, with eight of these states promoting, supporting, or requiring specific EBPs to be implemented statewide (Cooper et al., 2008). Ninety percent of state mental health authorities report implementation strategies to install EBPs, with 12% having fiscal policies mandating EBP implementation through reimbursement practices (Cooper & Aratani, 2009). There are also national and state efforts to facilitate the implementation of EBPs in community settings (e.g., the Child and Family EBP Consortium; California Institute of Mental Health). Nevertheless, even in the context of policy reforms and widespread implementation efforts, research suggests that dissemination is not usually sufficient to guarantee actual implementation and sustained use of EBPs in community settings (Jensen-Doss, Hawley, Lopez, & Osterberg, 2009).
Among the most well-examined barriers to adoption are provider attitudes toward EBP. In particular, the work of Aarons and colleagues in validating and norming the Evidence-Based Practice Attitude Scale (EBPAS; Aarons, 2004; Aarons et al., 2010) and the expanded EBPAS-50 has provided researchers with a comprehensive set of attitude dimensions with a reliable factor structure (Aarons, McDonald, Sheehan, & Walrath-Greene, 2007; Aarons, Cafri, Lugo, & Sawitzky, 2012a). The EBPAS assesses four dimensions of attitudes: the intuitive Appeal of EBP, the likelihood of adopting EBP given Requirements to do so, general Openness to new practices, and perceived Divergence between research-based interventions and the needs of current practice. Additional research has demonstrated that these dimensions can be influenced by numerous factors, including characteristics of individual providers (Aarons et al., 2010), organizational culture and climate (Aarons & Sawitzky, 2006; Aarons et al., 2012b), supervisor leadership behaviors (Aarons & Sommerfeld, 2012), and EBP training experiences (Lim, Nakamura, Higa-McMillan, Shimabukuro, & Slavin, 2012). Nelson and Steele (2007) demonstrated that practitioner attitudes toward efficacy research predict self-reported EBP use, highlighting the role of provider attitudes in predicting likelihood of EBP implementation.
Previous research has approached provider attitudes toward EBP as a general construct. This approach has proven valuable in establishing attitudes as a significant individual difference variable that can be addressed in dissemination and implementation efforts. Additionally, there is some evidence to suggest that looking beyond general EBP attitudes may reveal another level of complexity to our understanding of provider receptivity to EBP. Borntrager and colleagues (2009) demonstrated that changes in providers' general attitudes toward EBP from pre- to post-training were dependent on the manner in which EBP was described. In this study, providers trained in a flexible, modular EBP reported significantly improved post-training EBP attitudes, while providers trained in a standard manualized EBP showed no attitude improvement from pre- to post-training. Notably, the improvement in pre- to post-training attitudes for the modular EBP providers was only detected on a modified measure that did not refer to EBP as “manualized.” Borntrager et al.'s study suggests that even minor alterations to the way we query providers about their EBP attitudes may reveal important nuances about their perception of EBP. That providers can distinguish their perceptions of manualized EBP from their more generalized attitudes toward EBP highlights the possibility that they may hold multiple, or even contrary, attitudes about EBP depending on how EBP is defined.
In the current marketplace of dissemination and implementation, providers, organizations, and systems have an array of EBPs upon which they can focus their attention and resources for implementation. As a prime example of multiple EBP implementation, the Los Angeles County Department of Mental Health's (LACDMH) Prevention and Early Intervention (PEI) transformation of children's services is representative of an early trend in fiscally driven approaches to EBP implementation in the public mental health sector. LACDMH is the nation's largest county mental health department, directly operating 33 clinics and contracting with 288 agencies. In August 2009, LACDMH launched the PEI transformation of children's services through a fiscal mandate that restricted reimbursement to an array of 52 interventions, amending the contracts of 120 agencies. LACDMH provided implementation support (i.e., training and consultation) for five selected EBPs to address a range of child mental health problems: Trauma-Focused Cognitive Behavior Therapy (TF-CBT), Seeking Safety (SS), Positive Parenting Program (Triple P), Child-Parent Psychotherapy (CPP), and Cognitive Behavioral Intervention for Trauma in Schools (CBITS). A sixth intervention was included based on the Managing and Adapting Practice (MAP) system (e.g., Chorpita & Daleiden, 2013), which is a knowledge management system that allows treatment teams to design and adapt evidence-informed plans personalized to each youth. These plans can formally organize and include EBPs, or can build approaches from practice elements (discrete clinical techniques used as part of a larger intervention plan) common to EBPs (e.g., Chorpita & Daleiden, 2009). The five EBPs are included in the National Registry of Evidence-based Programs and Practices (NREPP), and MAP has accumulated considerable evidence in support of its effectiveness in community settings (Daleiden, Chorpita, Donkervoet, Arensdorf, & Brogan, 2006; Southam-Gerow et al., 2013).
Organizations were funded to train practitioners in the interventions they selected based on their stated needs and preferences. The six interventions included in the current study are among the most heavily utilized treatments in Los Angeles County. The size and scope of the PEI transformation make it a leading example of the movement toward adoption of EBP in usual care settings. Thus, it is becoming increasingly necessary to better understand provider responses to such large-scale implementation efforts. Provider responses to and perceptions of EBPs may be important predictors of implementation outcomes such as uptake, fidelity, and sustainability (Aarons, Hurlburt, & Horwitz, 2011).
The Current Study
In a context in which providers have been trained in multiple EBPs, assessing general attitudes toward EBP may not capture the diversity of their perceptions of the various EBPs in which they have been trained. In fact, our work with community providers in developing modular treatments has led us to wonder how much of the variance in provider attitudes toward EBP is attributable to providers' individual attributes versus the design features of the treatments themselves (Chorpita et al., 2011; Borntrager et al., 2009). In order to address this issue, the current study utilized an adapted administration of the EBPAS-50 to explore the feasibility and utility of capturing both general attitudes about EBP and specific attitudes toward the EBPs to which providers have been exposed. The current study was an exploration into the utility of expanding approaches to measuring provider attitudes, given that previous research has proposed innovation-specific characteristics as a meaningful component of the implementation process. For example, in their conceptual model of EBP implementation in public service sectors, Aarons and colleagues (2011) suggested that the strength of a particular innovation's fit with organizational and provider values is likely to influence its chances of effective implementation. Isett and colleagues (2007) found unique implementation challenges for each of five EBPs for adults with serious mental illness. Finally, Jensen-Doss, Cusack, and de Arellano (2008) demonstrated positive attitude change from pre- to post-workshop training for a specific EBP (TF-CBT).
We addressed four research questions in the current study: (a) do attitudes toward specific EBPs vary significantly by treatment, (b) to what extent are perceptions of EBP-specific attitudes accounted for by general attitudes toward EBP, (c) what provider characteristics predict perceptions of specific EBPs, and (d) do attitudes toward a specific EBP predict providers' self-reported use of that EBP? Given our belief that characteristics of individual interventions meaningfully influence provider experiences, we hypothesized that significant variance in provider attitudes would be attributable to the intervention. However, we were agnostic as to which interventions providers would prefer since the current study was not designed to make direct comparisons between specific interventions. Beyond our central hypothesis for the study (research question a), our approach to questions b, c, and d was exploratory in nature.
Methods
Participants
Data were collected from a convenience sample recruited at a one-day booster training event for one of the PEI-supported interventions (MAP) in Los Angeles County. A total of 506 providers attended the event and received a survey packet; 348 completed it, a 69% response rate. All participants (N = 348) were community therapists practicing in Los Angeles County. See Table 1 for participant demographic data.
Table 1. Participant demographic information.

| Variable | Mean | SD | N | % |
|---|---|---|---|---|
| Age | 35.24 | 8.89 | | |
| Gender | | | | |
| Male | | | 53 | 15.2 |
| Female | | | 295 | 84.8 |
| Time since degree (yrs.) | 6.29 | 6.63 | | |
| Licensed in CA | | | 152 | 43.7 |
| Clinical supervisor status | | | 99 | 28.4 |
| Degree | | | | |
| Master's | | | 342 | 98.3 |
| Doctoral | | | 43 | 12.4 |
| Ethnicity | | | | |
| Spanish/Hispanic/Latino | | | 134 | 38.5 |
| White/Caucasian/European-American | | | 126 | 36.2 |
| Asian | | | 43 | 12.4 |
| Black/African-American | | | 24 | 6.9 |
| Mixed/Other | | | 15 | 4.3 |
| Primary theoretical orientation | | | | |
| Cognitive-Behavioral | | | 151 | 43.4 |
| Eclectic | | | 108 | 31.0 |
| Family Systems | | | 41 | 11.8 |
| Humanistic | | | 27 | 7.8 |
| Psychodynamic | | | 17 | 4.9 |
| Other | | | 3 | 0.9 |
| Avg. burnout (0 “Never” – 4 “All the Time”) | 1.80 | 0.83 | | |
| Avg. caseload size | 13.94 | 7.83 | | |
| Ideal caseload size | 13.20 | 7.64 | | |
| Hrs. billed per wk. for EBP (including MAP) | 13.02 | 8.73 | | |
| Hrs. billed per wk. for non-EBP | 9.26 | 8.03 | | |
| Hrs. of supervision per wk. for EBP | 1.59 | 2.41 | | |
| Hrs. of supervision per wk. for non-EBP | 1.31 | 1.20 | | |
The 347 therapists who provided EBP-specific attitude data had attended trainings as follows: 343 in MAP (99%), 243 in TF-CBT (70%), 149 in Seeking Safety (43%), 54 in Triple P (16%), 25 in CBITS (7%), and 13 in CPP (4%). The breakdown of total EBPs in which providers were trained was: 55 in one EBP (16%), 136 in two EBPs (39%), 125 in three EBPs (36%), and 30 in four or more EBPs (9%). The mean number of EBPs on which participants were trained was 2.38 (SD = 0.88). While funding initiatives in Los Angeles County (e.g., PEI) incentivized the use of certain EBPs, trainee selection was managed in an individualized manner across agencies. Compared with the Los Angeles County system-wide training data, the current sample demonstrates variability in the number of EBPs on which providers were trained. Whereas system-wide data indicate a sizeable proportion of providers within agencies were trained on a single EBP (33%) or four or more EBPs (26%), the current study sample had a heavier concentration of providers trained in two to three EBPs (75% in current sample vs. 41% system-wide). In line with system-wide gross penetration data from 2011-12, MAP, TF-CBT, and Seeking Safety were the most frequently trained EBPs in the sample. We assumed the current sample of providers primarily served youths because they were attending training for a child-focused intervention (MAP) and the vast majority (84%) endorsed training in at least one other child-focused EBP.
Measures
Provider characteristics
A background questionnaire was used to obtain information on various therapist characteristics, including age, gender, ethnicity, professional specialty, licensure status, primary theoretical orientation, years of clinical experience, highest degree obtained, hours of continuing education, current actual caseload, weekly hours billed to EBP reimbursement codes, weekly hours billed to non-EBP reimbursement codes, weekly hours of supervision for EBP and non-EBP, and whether or not the participant was a clinical supervisor in their agency. In addition, participants were asked to provide ratings of professional burnout and were asked to report their ideal caseload. These items were selected from a “Therapist Background Questionnaire” utilized in a previously published clinical trial (Weisz et al., 2012).
The EBPAS-50
The EBPAS-50 has been validated and normed in a national sample of over 1,000 mental health service providers across 26 states (Aarons et al., 2012a). Its factor structure, internal consistency reliability, construct validity, and convergent validity have been demonstrated (Aarons, 2004; Aarons, McDonald, Sheehan, & Walrath-Greene, 2007; Aarons et al., 2012a). The 50-item EBPAS-50 consists of the following 12 domains. The Requirements domain (3 items; α = .83 for the study sample) captures providers' willingness to adopt interventions given external requirements. An example item reads, “I am likely to continue using evidence-based practice because my agency requires it.” Appeal (4 items; α = .90) measures the perceived positive characteristics of EBPs according to providers: for example, “If I received training in a therapy or intervention that was new to me, I would adopt it if it ‘made sense’ to me.” Openness (4 items; α = .78) evaluates providers' openness to trying new interventions: “I like to use new types of therapy/interventions to help my clients.” The Divergence domain (4 items; α = .71) queries providers' inclinations to avoid using EBPs in clinical practice: “Clinical experience is more important than using manualized therapy/interventions.” Limitations (7 items; α = .87) evaluates perceived issues with EBP according to providers: “EBP is not useful for clients with multiple problems.” The Fit domain (7 items; α = .78) measures how well EBP matches the values and needs of the client and clinician: “I would adopt an EBP if it fit with my treatment philosophy.” The Monitoring domain (3 items; α = .86) captures providers' negative reactions to oversight of their clinical work: “I do not want anyone looking over my shoulder while I provide services.” Balance (4 items; α = .42) evaluates providers' beliefs about the role of science in therapy: “A positive outcome in therapy is an art more than a science.” The Burden domain (4 items; α = .83) inquires about the perceived administrative burden associated with learning EBPs: “EBP will cause too much paperwork.” Job Security (3 items; α = .89) measures providers' impressions of EBPs' potential to improve their job security: “Learning an EBP will help me keep my job.” Organizational Support (3 items; α = .74) queries about providers' desire for support during and after EBP training: “I would learn an EBP if ongoing support was provided.” Finally, the Feedback domain (3 items; α = .87) assesses providers' notions about the role of feedback in improving clinical practice: “Getting supervision helps me to be a better therapist/case manager.” Five of these domains – Divergence, Limitations, Monitoring, Balance, and Burden – were reverse-coded so that higher values indicated more positive attitudes toward EBP. Overall, all EBPAS-50 scales demonstrated acceptable to excellent (.70 < α ≤ .90) internal consistency reliability in the current sample except for Balance.
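As a concrete illustration of the reverse-coding described above, here is a minimal scoring sketch. It assumes item responses sit 0-4 in a pandas DataFrame under hypothetical column names such as divergence_1 through divergence_4; these labels are illustrative, not the instrument's official item identifiers.

```python
import pandas as pd

# Domains keyed so that higher raw scores reflect negative attitudes;
# per the text, these five are reverse-coded on the 0-4 response scale.
REVERSED_DOMAINS = {"divergence", "limitations", "monitoring", "balance", "burden"}

def score_subscale(df: pd.DataFrame, domain: str, n_items: int) -> pd.Series:
    """Mean of a domain's items, reverse-coding (4 - x) the negatively
    keyed domains so higher always means more positive attitudes."""
    items = df[[f"{domain}_{i}" for i in range(1, n_items + 1)]]
    if domain in REVERSED_DOMAINS:
        items = 4 - items  # reflect responses on the 0-4 scale
    return items.mean(axis=1)

# Example: divergence = score_subscale(responses, "divergence", 4)
```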
Specialized administration of the EBPAS-50
In the current study, the items in the Appeal and Limitations subscales were selected as suitable for assessing attitudes toward specific EBPs. These two domains were selected for adaptation because their items pertain to properties of EBP in general rather than provider traits. Consequently, items in these scales could be easily reworded to pertain specifically to the properties of an individual EBP. For example, the Appeal scale's statement, “If I received training in a therapy or intervention that was new to me, I would adopt it if it ‘made sense’ to me” was reworded to apply to a specific EBP via the following adaptation: “I am likely to continue using this intervention because it ‘makes sense’ to me.” Likewise, the Limitations scale statement, “EBP is not useful for clients with multiple problems” was reworded to apply to a specific EBP via the following adaptation: “This intervention is not useful for clients with multiple problems.”
In contrast to Appeal and Limitations, the remaining 10 domains pertain primarily to characteristics of the provider rather than characteristics of EBP. For this reason, we would not expect ratings to vary meaningfully if the items contained in these scales were altered to apply to specific EBPs. For example, the statement “I would learn an EBP if ongoing support was provided” (from the Organizational Support scale) would be unlikely to vary if adapted to specific EBPs because the item focuses on the provider's desire for support rather than any particular aspect of EBP. Additionally, statements such as, “I like to use new types of therapy/interventions to help my clients” (from the Openness scale) would not be suitable for adaptation because they refer primarily to a provider's individual characteristics as opposed to their views about EBP. For these reasons, the remaining 10 scales of the EBPAS-50 were administered as usual to assess more general attitudes toward EBP.
Adapting the Appeal and Limitations scales created two domains associated with EBP-specific attitudes (11 items per intervention), and left 10 domains (39 items total) referring to general EBP attitudes. All items retained the original EBPAS response scale, which asks participants to rate their agreement with each item from 0 (“Not at All”) to 4 (“To a Very Great Extent”). In the current study, participants rated the two EBP-specific attitude scales up to six times, depending on the number of EBPs in which the provider received training.
EBP-specific attitudes
The modified Appeal and Limitations subscales were used to measure EBP-specific attitudes. The Limitations scale was reverse-coded so that higher values indicated more positive attitudes toward EBP. The two scales were correlated r = .41 across the six EBPs. We created a mean composite score to yield an EBP-specific attitude score for each EBP. The mean internal consistency of the 11-item composite across the six EBPs was .88.
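A brief sketch of how the 11-item composite and its internal consistency could be computed under the same assumed DataFrame layout; Cronbach's alpha is written out from its standard formula, since the paper does not report its analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical Cronbach's alpha: (k / (k - 1)) * (1 - sum of item
    variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def ebp_specific_score(appeal: pd.DataFrame, limitations: pd.DataFrame) -> pd.Series:
    """11-item composite for one intervention: 4 Appeal items plus 7
    Limitations items reverse-coded (4 - x), averaged into one score."""
    items = pd.concat([appeal, 4 - limitations], axis=1)
    return items.mean(axis=1)
```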
General EBP attitudes
We created a composite general EBP attitudes score from the 10 EBPAS-50 subscales that remained after excluding Appeal and Limitations. The general EBP attitudes composite was calculated by averaging the 10 individual subscale scores, yielding a single value representing a provider's general attitudes toward EBP. Internal consistency for the 39-item composite was acceptable (α = .77).
Provider reported EBP implementation
A single item was included to assess self-reported EBP implementation for each of the six treatments in which the provider was trained. Providers were asked to rate their response to the item, “I have used this intervention in my regular clinical practice,” on a scale from 0 (“Not at All”) to 4 (“To a Very Great Extent”).
Procedure
The first author and a research assistant recruited participants during registration for the therapist training event and throughout the day during breaks. Providers were told that their participation was voluntary and would have no bearing on their standing within their agency, with the EBP developers or training staff, or with LACDMH. Periodic announcements were made to all training attendees in the main conference hall throughout the day; participants opted into the study by submitting their surveys to a collection table set up outside the conference hall.
Consenting participants were provided with a packet containing the consent form and questionnaire, which together took pilot participants at a previous training event (N = 24) an average of 13 minutes and 26 seconds (SD = 3:36) to complete. Four arrangements of the survey battery were distributed randomly to counterbalance for two considerations: (a) whether EBP-specific or general EBP attitudes were queried first, and (b) the order in which the individual EBPs were presented (standard vs. reverse-ordered). All questionnaires began with the provider characteristics section. Providers only completed EBP-specific attitude ratings for EBPs on which they self-reported being trained. In order to ensure privacy and increase the likelihood of response integrity, each participant was given two separate items paper-clipped together: (a) a sheet of paper on which to provide their written consent and identifying information, and (b) the questionnaire itself. Both forms were pre-labeled with a participant identification number. Upon completion of the measure and consent form, participants turned the items in to separate collection boxes. The de-identified participant responses were entered into the main database for analysis, and the identifying information was used to create a separate password-protected participant key linking participant identifying information to their questionnaire data. In return for their time, participants were provided a raffle ticket for one of 25 prizes ranging in value from $5 to $200, with a total value of $500. Prizes were raffled off during breaks throughout the day, incentivizing participants to complete their measures earlier in the day in order to increase their odds of winning a prize. Institutional review boards at UCLA and LACDMH approved all procedures for this study.
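For illustration only, the four counterbalanced arrangements could be generated as below; the "standard" EBP presentation order used here is an assumption, since the paper does not report it.

```python
from itertools import product

# The six interventions; this "standard" order is assumed for illustration.
EBPS = ["MAP", "TF-CBT", "Seeking Safety", "Triple P", "CPP", "CBITS"]

# Four forms: which attitude block appears first, crossed with
# standard vs. reversed ordering of the six EBPs.
FORMS = [
    {"first_block": first, "ebp_order": EBPS if order == "standard" else EBPS[::-1]}
    for first, order in product(("ebp_specific", "general"), ("standard", "reversed"))
]

def form_for_packet(packet_number: int) -> dict:
    """Rotate the four forms across packets (one plausible scheme)."""
    return FORMS[packet_number % len(FORMS)]
```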
Results
Provider attitude scores were compared across the six individual interventions included on the measure: MAP, TF-CBT, Seeking Safety, Triple P, CPP, and CBITS. Because therapists in the sample were trained on different combinations of EBPs, a multilevel model with random intercepts was utilized to account for the non-independence of their EBP-specific attitude ratings. Level one variables were the repeated measures EBP-specific ratings, their identifying “treatment type” variable, and self-reported EBP implementation; level two variables were providers' general EBP attitude ratings and individual provider characteristics (e.g., primary theoretical orientation).
Question 1: Do Attitudes Toward Specific EBPs Vary Significantly by Treatment?
The dependent variable, EBP-specific attitude ratings, was predicted by the categorical within-subjects variable “treatment type” in order to test the primary research question of whether EBP-specific attitudes would vary significantly by treatment. Covariates in the model included providers' general attitude ratings toward EBP, the total number of EBPs on which they were trained (“EBP training count”), and the duration between a provider's training in a specific EBP and the date of the measure administration (“time since EBP training”). Seven demographic variables were also included as covariates: ethnicity (three levels: Non-Hispanic White, Hispanic, and Other), highest degree level (two levels: master's degree and doctorate), discrepancy between actual and ideal caseload (“caseload discrepancy”), weekly hours of EBP supervision, self-reported burnout, primary theoretical orientation (coded CBT versus all others), and the duration between the date the provider's most advanced degree was earned and the date of the measure administration (“clinical experience”). This model will be referred to as “Model 1.”
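For concreteness, below is a minimal sketch of how Model 1 could be specified with a random intercept per provider. It assumes the ratings are stacked in long format (one row per provider × rated EBP) in a hypothetical DataFrame `long_df` with illustrative column names; the paper does not report its statistical software, so this is one plausible reconstruction rather than the authors' code.

```python
import statsmodels.formula.api as smf

# `long_df` is the assumed long-format DataFrame described above.
model1 = smf.mixedlm(
    "specific_attitude ~ C(treatment_type) + general_attitude"
    " + ebp_training_count + time_since_training + C(ethnicity)"
    " + degree_level + caseload_discrepancy + ebp_supervision_hrs"
    " + burnout + cbt_orientation + clinical_experience",
    data=long_df,
    groups=long_df["provider_id"],  # random intercept for each provider
).fit()
print(model1.summary())
```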
Controlling for the covariates in Model 1, an omnibus test of fixed effects revealed a significant effect of treatment type on EBP-specific attitudes, F(5, 561) = 34.93, p < .001. Mean attitude scores for each intervention in Model 1 can be found in Table 2. Overall attitude scores for the specific EBPs ranged from 2.07 (CBITS) to 3.27 (CPP). The level 1 (total observations) residual variance estimate for a partial Model 1 excluding the treatment type variable was .51; adding treatment type reduced this estimate to .37 for the full Model 1. Thus, the residual change score, (σ²res(partial) − σ²res(full)) / σ²res(partial), indicates that treatment type accounted for a .28 proportional reduction in the level 1 residual variance of the dependent variable, EBP-specific attitudes.
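Working through the residual change score with the rounded estimates reported above:

```python
# Proportional reduction in level-1 residual variance after adding
# treatment type to Model 1 (variance estimates as reported above).
partial_residual, full_residual = 0.51, 0.37
print((partial_residual - full_residual) / partial_residual)
# ~0.275 from these rounded inputs; the reported .28 presumably
# reflects the unrounded variance estimates.
```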
Table 2. Estimated marginal means for practice-specific attitude scores.

| Treatment Type | Appeal Scale (S.E.) | Limitations Scale (S.E.) | Overall (S.E.) |
|---|---|---|---|
| CPP (n = 13) | 3.24 (.27) | 3.32 (.25) | 3.27 (.21) |
| MAP (n = 343) | 2.92 (.06) | 3.20 (.05) | 3.07 (.04) |
| TF-CBT (n = 243) | 2.77 (.06) | 2.77 (.06) | 2.77 (.05) |
| Triple P (n = 54) | 2.60 (.13) | 2.57 (.12) | 2.59 (.10) |
| Seeking Safety (n = 149) | 1.85 (.08) | 2.54 (.07) | 2.19 (.06) |
| CBITS (n = 25) | 1.61 (.20) | 2.55 (.18) | 2.07 (.15) |
Analysis of Appeal and Limitations scales separately
In order to examine whether the aforementioned findings might be different when analyzing EBP-specific Appeal scores versus Limitations scores separately, separate analyses were conducted using each scale as the dependent variable. Including all of the previous covariates, treatment type was found to be predictive of both Appeal, F(5, 579) = 34.55, p < .001, and Limitations, F(5, 546) = 16.91, p < .001, individually.
Question 2: To What Extent Are Perceptions of EBP-Specific Attitudes Accounted for by General Attitudes Toward EBP?
The predictive value of each covariate from Model 1 can be found in the multilevel model statistics provided in Table 3. As expected, general EBP attitudes were found to significantly predict EBP-specific attitudes, F(1, 280) = 76.91, p < .001.
Table 3. Unstandardized estimates of effects of selected predictors on overall EBP-specific attitudes, Appeal, and Limitations scales in Model 1.

| Predictor | Overall: Unstd. Est. | Overall: S.E. | Appeal: Unstd. Est. | Appeal: S.E. | Limitations: Unstd. Est. | Limitations: S.E. |
|---|---|---|---|---|---|---|
| Intercept | 0.32 | 0.32 | -0.35 | 0.39 | 1.00** | 0.40 |
| Treatment Type | | | | | | |
| MAP^a | – | – | – | – | – | – |
| TF-CBT | -0.29*** | 0.06 | -0.15* | 0.08 | -0.43*** | 0.07 |
| Seeking Safety | -0.88*** | 0.07 | -1.08*** | 0.09 | -0.66*** | 0.08 |
| Triple P | -0.48*** | 0.11 | -0.33* | 0.14 | -0.64*** | 0.12 |
| CPP | 0.21 | 0.21 | 0.31 | 0.27 | 0.11 | 0.25 |
| CBITS | -1.00*** | 0.16 | -1.31*** | 0.21 | -0.66*** | 0.19 |
| General EBP Attitudes | 0.78*** | 0.09 | 0.96*** | 0.11 | 0.60*** | 0.11 |
| EBP Count | 0.06 | 0.04 | 0.03 | 0.05 | 0.09 | 0.05 |
| Years Since EBP Training | 0.03 | 0.03 | 0.07 | 0.05 | -0.01 | 0.04 |
| Ethnicity | | | | | | |
| White^a | – | – | – | – | – | – |
| Hispanic | 0.13 | 0.07 | 0.17* | 0.09 | 0.08 | 0.09 |
| Other | 0.13 | 0.08 | 0.17 | 0.10 | 0.07 | 0.10 |
| Degree Level | 0.27** | 0.09 | 0.26* | 0.11 | 0.27* | 0.12 |
| Caseload Discrepancy | -0.01 | 0.01 | 0.00 | 0.01 | -0.01* | 0.01 |
| EBP Supervision (Hrs) | 0.02 | 0.01 | 0.03 | 0.02 | 0.00 | 0.02 |
| Burnout | -0.11** | 0.04 | -0.11* | 0.05 | -0.10* | 0.05 |
| Primary Orientation | 0.15** | 0.06 | 0.17* | 0.08 | 0.14 | 0.08 |
| Clinical Experience (Yrs) | 0.02** | 0.01 | 0.02* | 0.01 | 0.01 | 0.01 |

Note. ^a Reference group. *p ≤ .05. **p ≤ .01. ***p ≤ .001.
Question 3: What Provider Characteristics Predict Perceptions of Specific EBPs?
The following demographic variables predicted EBP-specific attitudes in Model 1: degree level, F(1, 294) = 8.09, p = .005, self-reported burnout, F(1, 288) = 7.74, p = .006, primary theoretical orientation, F(1, 294) = 6.05, p = .014, and clinical experience, F(1, 300) = 6.58, p = .011. Specifically, model estimates indicate that increased general EBP attitudes, presence of a doctorate-level degree, decreased burnout, presence of a primary CBT orientation, and increased clinical experience all significantly predicted higher EBP-specific attitude scores. Ethnicity, discrepancy between actual and ideal caseload, amount of EBP supervision, EBP training count, and time since EBP training did not have significant associations with EBP-specific attitudes.
Due to the potential bias and experimenter demand introduced by collecting data at a MAP booster training event, the same multilevel model with random intercepts was estimated excluding providers' ratings of attitudes toward MAP (Model 2), with CPP as the new reference group. The omnibus test for Model 2 once again demonstrated a significant effect of treatment type on EBP-specific attitudes, F(4, 311) = 19.44, p < .001, for the five non-MAP treatments. All covariates found to be significant predictors of EBP-specific attitudes in Model 1 remained significant in the same direction, with the exception of clinical experience, which became non-significant, F(1, 269) = 2.68, p = .103. Because excluding the MAP ratings did not materially change the significance of any key predictor or covariate besides clinical experience, all other analyses reported include all six treatment types.
Question 4: Do Attitudes Toward a Specific EBP Predict Providers' Self-Reported Use of That EBP?
As an initial attempt to explore the association between provider EBP attitudes and EBP implementation, EBP-specific attitudes (our previous dependent variable) were included as a predictor in a multilevel model with self-reported implementation as the dependent variable. Controlling for treatment type, general EBP attitudes, EBP training count, time since EBP training, and the seven demographic variables from previous analyses, EBP-specific attitudes significantly predicted self-reported implementation, F(1, 686) = 85.49, p < .001. The multilevel model estimate of .56 (S.E. = .06) indicates that for every unit increase in EBP-specific attitude score, self-reported implementation for that EBP increases by .56 units on the 5-point (0-4) scale. Treatment type, F(5, 594) = 10.69, p < .001, and time since EBP training, F(1, 678) = 39.54, p < .001, were also significant predictors of self-reported treatment use. As time since training on a specific EBP increased, self-reported use of that EBP increased. Neither general EBP attitudes, F(1, 306) = 3.28, p = .071, nor EBP training count, F(1, 327) = 0.20, p = .658, significantly predicted self-reported treatment use. Of the seven demographic variables, only having a primary CBT orientation, F(1, 286) = 3.86, p = .050, was a significant predictor of increased self-reported EBP implementation in this analysis.
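Continuing the earlier hypothetical statsmodels sketch, the question 4 model simply swaps the outcome: self-reported use becomes the dependent variable and EBP-specific attitudes join the predictors (all names remain illustrative).

```python
# Same mixed-model machinery and assumed long-format `long_df` as above.
use_model = smf.mixedlm(
    "self_reported_use ~ specific_attitude + C(treatment_type)"
    " + general_attitude + ebp_training_count + time_since_training"
    " + C(ethnicity) + degree_level + caseload_discrepancy"
    " + ebp_supervision_hrs + burnout + cbt_orientation"
    " + clinical_experience",
    data=long_df,
    groups=long_df["provider_id"],
).fit()

# Under this specification, the coefficient on `specific_attitude`
# corresponds to the reported estimate of .56: a one-point attitude
# increase predicts a .56-point increase in use on the 0-4 scale.
print(use_model.params["specific_attitude"])
```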
Predicting treatment use from the Appeal and Limitations scale scores separately revealed a divergent pattern of results. Appeal was a significant predictor of self-reported implementation, F(1, 694) = 221.42, p < .001, while Limitations was not. These findings suggest that the Appeal scale may drive the relationship between overall EBP-specific attitudes and self-reported treatment use, rather than the Limitations scale.
Discussion
We sought to explore the potential utility of measuring providers' attitudes toward specific evidence-based practices in addition to their general attitudes about EBP. Our data suggest that L.A. County therapists could reliably differentiate among the EBPs in which they had been trained via two attitude domains, Appeal and Limitations. These findings support our primary hypothesis that attitudes toward specific EBPs would vary significantly by treatment.
Furthermore, we learned that the effect of treatment type on EBP-specific attitudes remained even after controlling for general attitudes toward EBP as well as a number of other provider demographic characteristics and contextual training factors. This finding indicates that, in response to our second research question, EBP-specific attitudes may provide unique information beyond that contributed by providers' general attitudes toward EBP.
Our third research question asked what provider characteristics predict EBP-specific attitudes, following from past studies exploring predictors of EBP attitudes and delivery (e.g., Brookman-Frazee, Haine, Baker-Ericzén, Zoffness, & Garland, 2010). We found that doctoral level training, lower burnout, a primary CBT orientation, and increased clinical experience all contributed to higher EBP-specific attitudes when controlling for treatment type and general EBP attitudes. On the other hand, ethnicity, caseload discrepancy, amount of EBP supervision, EBP training count, and time since EBP training were not associated with EBP-specific attitudes.
In regard to our fourth and final research question, providers' EBP-specific attitudes were linked to their self-reported use of those same treatments in the current study, even after controlling for general attitudes toward EBP. Nelson and Steele (2007) demonstrated that general practitioner attitudes toward treatment research were a significant predictor of self-reported EBP use; the current findings extend those results by revealing a link between attitudes toward a specific EBP and self-reported use of that EBP. Interestingly, although the self-reported implementation of a particular treatment was strongly related to its Appeal score, no connection was found between treatment use and its Limitations score. This was somewhat unexpected given that the perceived burden, complexity, and difficulty of EBPs have been cited as factors deterring adoption of EBPs (e.g., Aarons, Wells, Zagursky, Fettes, & Palinkas, 2009; Jensen-Doss et al., 2009). However, in the context of system reform requiring EBP implementation, these perceived limitations might not be the main factor driving EBP use or selection. Given this pattern of preliminary findings, it is worth investigating whether providers, in the context of implementation efforts, are more concerned with the presence or absence of a treatment's appealing features than with its limitations.
Although a particular EBP's Appeal and composite attitudes scores predicted self-reported implementation in the current study, general attitudes toward EBP did not have a significant effect on self-reported EBP use. While this study represents an initial entry into the measurement and exploration of EBP-specific attitudes, a robust replication of these findings would suggest that specific treatment attitudes might in fact be more proximal to use than general attitudes toward EBP when uptake is mandated through policy.
Several limitations should be considered regarding the current study. First, the study used a convenience sample of L.A. County therapists attending a MAP-related event. Therapists were encouraged by their agencies to attend the training, but attendance was not mandatory. While the selection process may have resulted in a sample with more open or positive views toward EBPs, there is no reason to believe participants were biased toward any particular EBP, except possibly MAP. Accordingly, the primary finding that treatment-specific attitudes differed by treatment type should be interpreted only in general terms, not as a basis for specific comparisons among the treatments measured in this study. Again, we want to emphasize that the central finding – that provider attitudes toward the specific interventions measured in this study varied significantly – held true even when excluding MAP from the analyses. MAP was not the most highly rated intervention in our sample; CPP in fact garnered the highest domain-specific and composite attitude scores. Moreover, the primary finding from Model 1 remains significant even after excluding both MAP and CPP cases. Nevertheless, replicating the current findings in a broader community sample (e.g., a LACDMH-wide administration of the measure) while tracking behavioral outcomes would provide the most conclusive data in answering our initial research questions.
A second limitation was the cross-sectional nature of the survey, which did not allow us to determine whether EBP-specific attitudes affected implementation experience, or vice versa. This directional ambiguity highlights the necessity of prospective longitudinal studies that would capture change in attitudes and implementation experience over time. As suggested by recent empirical findings (e.g., Aarons et al., 2012b; Torrey, Bond, McHugo, & Swain, 2012), it could well be the case that the best intervention for poor attitudes is well-supported implementation. Third, we were unable to control for providers' agencies, the nature (type, intensity, frequency) of their EBP training experiences, or other unmeasured factors that may have affected EBP attitude ratings. Fourth, internal consistency reliability was low for the Balance subscale (α = .42) in this study's sample. Excluding this subscale when calculating providers' general EBP attitudes scores had no meaningful impact on the outcomes reported.
Our reliance on provider self-report as a measure of EBP usage limits our ability to draw conclusions about the relationship between attitudes and utilization, as self-reported use is no guarantee of actual use or fidelity. Future studies should utilize multiple indicators of provider treatment usage and fidelity in order to fully explore the complex relationships between attitudes, utilization, and fidelity. Finally, due to the tremendous pressure placed on organizations to train their providers as part of the PEI initiative, we believe the current findings would best generalize to systems in which EBPs are fully mandated. In non-mandated environments where providers have more flexibility to select EBPs, we would expect self-selection to reduce the variance among EBP-specific attitude ratings.
Future research to replicate this study's findings could have implications for treatment design, as a better understanding of how EBP characteristics influence attitudes – and most importantly, behavior – could aid treatment developers in creating more desirable EBPs. Such research would also allow us to draw more fine-grained conclusions about which characteristics of EBP are more and less favorable, and perhaps to evaluate which of the currently available EBPs providers find most and least desirable. Understanding the relative desirability of various EBPs would have clear implications for EBP implementation, as decision makers would have an additional source of relevant information when making critical choices about which treatments to implement (e.g., LACDMH's PEI transformation). Studies involving the coding of EBPs to determine which features specifically relate to therapist attitudes could help to create a feedback loop between treatment developers and their consumers (providers). Furthermore, providing feedback to clinicians on client-level outcomes following implementation of EBPs may further improve attitudes, and promote more widespread and sustained use (Bickman, 2008; Garland, Bickman, & Chorpita, 2010).
In addition, qualitative research regarding the types of EBP refinements or adaptations that may improve fit with provider needs and practice setting contexts could help inform implementers, trainers, and developers. This type of research has been ongoing (e.g., Southam-Gerow, Hourigan, & Allin, 2009; Aarons & Palinkas, 2007), and might be enhanced by the inclusion of EBP-specific attitude assessment.
Eventually, the field may benefit from a more thorough investigation of the effect of treatment-specific attitudes on key implementation outcomes for available EBPs relative to other considerations like primary presenting problem, agency climate, and billing pressures. All of these areas of consideration represent potential avenues for improving EBP utilization, and a focus on treatment design and selection as a means of influencing EBP-specific attitudes could expand researchers' repertoire for provider attitude and behavior change. Ideally, simultaneous adoption and implementation support for a range of practices would allow providers to choose practices that best fit their service context and client needs.
In an increasingly complex environment of EBP delivery, a sharper focus on treatments themselves – in addition to the individuals who deliver them and the contexts in which they are delivered – might prove fruitful. Our data suggest that measuring EBP-specific attitudes and their effects on implementation outcomes represents a worthwhile pathway for future exploration. Given the body of evidence supporting general EBP attitudes as a point of intervention in affecting implementation outcomes, perhaps EBP-specific attitudes can add another avenue to aid mental health experts in closing the gap between research and practice. Given that providers are the terminal gatekeepers for the dissemination of research products to those they are designed to help, it is worth considering which products these providers find most desirable and why. We believe measuring EBP-specific attitudes is a promising step in this direction.
Contributor Information
Michael E. J. Reding, University of California, Los Angeles.
Bruce F. Chorpita, University of California, Los Angeles.
Anna S. Lau, University of California, Los Angeles.
Debbie Innes-Gomberg, Los Angeles County Department of Mental Health.
References
- Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6(2):61–74. doi: 10.1023/B:MHSR.0000024351.12294.65.
- Aarons GA, Cafri G, Lugo L, Sawitzky A. Expanding the domains of attitudes towards evidence-based practice: The Evidence Based Practice Attitude Scale-50. Administration and Policy in Mental Health. 2012a;39(5):331–340. doi: 10.1007/s10488-010-0302-3.
- Aarons GA, Fettes DL, Flores LE, Sommerfeld DH. Evidence-based practice implementation and staff emotional exhaustion in children's services. Behaviour Research and Therapy. 2009;47(11):954–960. doi: 10.1016/j.brat.2009.07.006.
- Aarons GA, Glisson C, Hoagwood K, Kelleher K, Landsverk J, Cafri G. Psychometric properties and U.S. national norms of the Evidence-Based Practice Attitude Scale (EBPAS). Psychological Assessment. 2010;22(2):356–365. doi: 10.1037/a0019188.
- Aarons GA, Glisson C, Green PD, Hoagwood K, Kelleher KJ, Landsverk J, the Research Network on Youth Mental Health. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: A United States national study. Implementation Science. 2012b;7:56. doi: 10.1186/1748-5908-7-56.
- Aarons GA, Hurlburt MS, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
- Aarons GA, McDonald EJ, Sheehan AK, Walrath-Greene CM. Confirmatory factor analysis of the Evidence-Based Practice Attitude Scale in a geographically diverse sample of community mental health providers. Administration and Policy in Mental Health. 2007;34(5):465–469. doi: 10.1007/s10488-007-0127-x.
- Aarons GA, Palinkas LA. Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34(4):411–419. doi: 10.1007/s10488-007-0121-3.
- Aarons GA, Sawitzky AC. Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services. 2006;3(1):61–72. doi: 10.1037/1541-1559.3.1.61.
- Aarons GA, Sommerfeld DH. Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child and Adolescent Psychiatry. 2012;51(4):423–431. doi: 10.1016/j.jaac.2012.01.018.
- Aarons GA, Wells RS, Zagursky K, Fettes DL, Palinkas LA. Implementing evidence-based practice in community mental health agencies: A multiple stakeholder analysis. American Journal of Public Health. 2009;99(11):2087–2095. doi: 10.2105/AJPH.2009.161711.
- Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry. 2008;47(10):1114–1119. doi: 10.1097/CHI.0b013e3181825af8.
- Borntrager CF, Chorpita BF, Higa-McMillan C, Weisz JR. Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services. 2009;60(5):677–681. doi: 10.1176/appi.ps.60.5.677.
- Brookman-Frazee L, Haine RA, Baker-Ericzén M, Zoffness R, Garland AF. Factors associated with use of evidence-based practice strategies in usual care youth psychotherapy. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37(3):254–269. doi: 10.1007/s10488-009-0244-9.
- Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology. 2009;77(3):566–579. doi: 10.1037/a0014565.
- Chorpita BF, Daleiden EL. Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child & Adolescent Psychology. In press. doi: 10.1080/15374416.2013.828297.
- Chorpita BF, Rotheram-Borus MJ, Daleiden EL, Bernstein A, Cromley T, Swendeman D, Regan J. The old solutions are the new problem: How do we better use what we already know about reducing the burden of mental illness? Perspectives on Psychological Science. 2011;6(5):493–497. doi: 10.1177/1745691611418240.
- Cooper JL, Aratani Y. The status of states' policies to support evidence-based practices in children's mental health. Psychiatric Services. 2009;60(12):1672–1675. doi: 10.1176/appi.ps.60.12.1672.
- Cooper JL, Aratani Y, Knitzer J, Douglas-Hall A, Masi R, Banghart P, Dababnah S. Unclaimed children revisited: The status of children's mental health policy in the United States. 2008. Retrieved October 22, 2012, from http://academiccommons.columbia.edu/catalog/ac:126417.
- Daleiden EL, Chorpita BF, Donkervoet CM, Arensdorf AA, Brogan M. Getting better at getting them better: Health outcomes and evidence-based practice within a system of care. Journal of the American Academy of Child and Adolescent Psychiatry. 2006;45:749–756. doi: 10.1097/01.chi.0000215154.07142.63.
- Garland AF, Bickman L, Chorpita BF. Change what? Identifying quality improvement targets by investigating usual mental health care. Administration and Policy in Mental Health. 2010;37:15–26. doi: 10.1007/s10488-010-0279-y.
- Isett KR, Burnam MA, Coleman-Beattie B, Hyde PS, Morrissey JP, Magnabosco J, Rapp CA, Ganju V, Goldman HH. The state policy context of implementation issues for evidence-based practices in mental health. Psychiatric Services. 2007;58(7):914–921. doi: 10.1176/appi.ps.58.7.914.
- Jensen-Doss A, Cusack KJ, de Arellano MA. Workshop-based training in trauma-focused CBT: An in-depth analysis of impact on provider practices. Community Mental Health Journal. 2008;44:227–244. doi: 10.1007/s10597-007-9121-8.
- Jensen-Doss A, Hawley KM, Lopez M, Osterberg LD. Using evidence-based treatments: The experiences of youth providers working under a mandate. Professional Psychology: Research and Practice. 2009;40(4):417–424. doi: 10.1037/a0014690.
- Kazdin AE. Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist. 2008;63(3):146–159. doi: 10.1037/0003-066X.63.3.146.
- Lim A, Nakamura BJ, Higa-McMillan CK, Shimabukuro S, Slavin L. Effects of workshop trainings on evidence-based practice attitudes among youth community mental health providers. Behaviour Research and Therapy. 2012;50(6):397–406. doi: 10.1016/j.brat.2012.03.008.
- McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist. 2010;65(2):73–84. doi: 10.1037/a0018121.
- Nelson TD, Steele RG. Predictors of practitioner self-reported use of evidence-based practices: Practitioner training, clinical setting, and attitudes toward research. Administration and Policy in Mental Health. 2007;34(4):319–330. doi: 10.1007/s10488-006-0111-x.
- Norcross JC, Hedges M, Prochaska JO. The face of 2010: A Delphi poll on the future of psychotherapy. Professional Psychology: Research and Practice. 2002;33(3):316–322. doi: 10.1037//0735-7028.33.3.316.
- Southam-Gerow MA, Daleiden EL, Chorpita BF, Bae C, Mitchell C, Faye M, Alba M. MAPping Los Angeles County: Taking an evidence-informed model of mental health care to scale. Journal of Clinical Child & Adolescent Psychology. In press. doi: 10.1080/15374416.2013.833098.
- Southam-Gerow MA, Hourigan SE, Allin RB. Adapting evidence-based mental health treatments in community settings: Preliminary results from a partnership approach. Behavior Modification. 2009;33:82–103. doi: 10.1177/0145445508322624.
- Torrey WC, Bond GR, McHugo GJ, Swain K. Evidence-based practice implementation in community mental health settings: The relative importance of key domains of implementation activity. Administration and Policy in Mental Health. 2012;39:353–364. doi: 10.1007/s10488-011-0357-9.
- Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, Daleiden EL, et al. Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: A randomized effectiveness trial. Archives of General Psychiatry. 2012;69(3):274–282. doi: 10.1001/archgenpsychiatry.2011.147.