Author manuscript; available in PMC: 2013 Nov 26.
Published in final edited form as: Psychol Assess. 2010 Jun;22(2). doi: 10.1037/a0019188

Psychometric Properties and United States National Norms of the Evidence-Based Practice Attitude Scale (EBPAS)

Gregory A Aarons 1, Charles Glisson 2, Kimberly Hoagwood 3, Kelley Kelleher 4, John Landsverk 5, Guy Cafri 6; The Research Network on Youth Mental Health
PMCID: PMC3841109  NIHMSID: NIHMS207064  PMID: 20528063

Abstract

The Evidence-Based Practice Attitude Scale (EBPAS) assesses mental health and social service provider attitudes toward adopting evidence-based practices. The EBPAS comprises four subscales (Appeal, Requirements, Openness, and Divergence) and a total scale score, and preliminary studies have linked EBPAS scores to clinic structure and policies, organizational culture and climate, and first-level leadership. EBPAS scores are also related to service provider characteristics including age, education level, and level of professional development. The present study examined the factor structure, reliability, and norms of EBPAS scores in a nationwide sample of 1089 mental health service providers drawn from 100 service institutions in 26 states in the United States. The study also examined associations of provider demographic characteristics with EBPAS subscale and total scores. Confirmatory factor analysis supported a second-order factor model, and reliability coefficients for the subscales ranged from .67 to .91 (total scale = .74). The study establishes national norms for the EBPAS so that comparisons can be drawn for local United States and international studies of attitudes toward evidence-based practices. The results suggest that the factor structure and reliability are likely generalizable to a variety of service provider contexts and service settings and that EBPAS scales are associated with provider characteristics. Directions for future research are discussed.

Keywords: evidence-based practice, mental health services, provider attitudes

Introduction

The dissemination and implementation of evidence-based practices (EBPs) is important for improving the quality of mental health care in real-world human service settings (Hoagwood, 2005). However, the pace at which evidence-based health and mental health practices and technologies are implemented is painstakingly slow (Balas & Boren, 2000). Considerable resources are being used to increase the implementation of EBPs in community-based service settings. For example, the California Mental Health Services Act supports EBP implementation (Cashin, Scheffler, Felton, Neal, & Miller, 2008); New York State has established an Evidence-Based Treatment Dissemination Center (EBTDC) to support training and year-long consultation for front-line clinicians (Bruns, et al., 2008); and the State of Ohio Department of Mental Health (ODMH) has developed “Coordinating Centers of Excellence” to promote the use of best practices and EBPs (ODMH, 2009). While these and other initiatives support the “push” of EBPs into community-based settings (Magnabosco, 2006), it is also important to consider mental health service provider attitudes toward adopting EBPs in order to better tailor implementation efforts to the needs and characteristics of providers in community mental health agencies. Understanding the market for EBPs, as represented by service provider perspectives, can contribute to increasing the “pull” for EBPs by tailoring strategies to increase demand for these practices.

Multiple factors at policy, system, and organizational levels influence implementation of EBP and other innovations in mental health care settings. These include the social, economic, and political context, characteristics of the innovation itself, characteristics of the organization attempting to implement the innovation, and characteristics of both the providers and clients (Aarons, 2004, 2005; Glisson & Schoenwald, 2005; Glisson, Schoenwald, et al., 2008; Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004; Grol & Wensing, 2004). Mental health service providers’ attitudes toward change and innovation may influence the implementation of EBPs at several stages. First, provider attitudes toward innovation in general can be a precursor to the decision of whether or not to try a new practice. Second, if providers do decide to try a new practice, the affective or emotional component of attitudes can impact decision processes regarding the actual implementation and use of the innovation (Aarons, 2005; Candel & Pennings, 1999; Frambach & Schillewaert, 2002; Rogers, 1995). Third, attitudes can and do change with experience, and it is important to be able to gauge these changes over time (Rydell & McConnell, 2006). However, measurement of provider attitudes toward EBPs has only recently been undertaken, and there are relatively few data attesting to the factor structure of these attitudes and their corresponding norms.

The Evidence-Based Practice Attitude Scale (EBPAS; Aarons, 2004; Aarons, McDonald, Sheehan, & Walrath-Greene, 2007) was developed to assess mental health provider attitudes toward adoption of innovation and EBPs in mental health service settings. The EBPAS assesses four dimensions of attitudes toward adoption of EBPs: a) intuitive Appeal of EBP, b) likelihood of adopting EBP given Requirements to do so, c) Openness to new practices, and d) perceived Divergence between research-based/academically developed interventions and current practice. The EBPAS fills a void in that it allows for quantitative assessment of provider attitudes that can then be used in models of innovation implementation and to assess provider readiness to adopt new practices.

Although the EBPAS is relatively new it is being used across the United States and internationally as evidenced by requests for the measure from the first author. Permission to use the EBPAS has been provided for over 50 research and evaluation studies in the United States and requests for the EBPAS have come from investigators in other countries including Iran, Israel, Japan, Korea, Norway, Romania, and Sweden. Considering the potential significance of attitudes toward adopting EBPs and the frequency with which the EBPAS is being used, it is increasingly important to examine evidence for validity and reliability of scores on this measure in multiple settings. This study examines the factor structure, reliability, and norms of EBPAS scores in a large and diverse national sample of mental health service professionals.

Past research examining the EBPAS’ factor structure consists of two studies (Aarons, 2004; Aarons, et al., 2007). Despite the use of exploratory and confirmatory factor analysis to evaluate factor structure, these studies had some limitations in regard to the samples used and the types of factor models investigated. For example, the original study sampled service institutions and participants in a single California county, with half the total sample used to conduct an exploratory factor analysis and the other half to conduct a confirmatory factor analysis (mental health programs = 49, N = 322; Aarons, 2004). Subsequently, another study confirmed the EBPAS factor structure in a more geographically diverse sample (17 states); however, the sample size was relatively small (mental health programs = 18, N = 221; Aarons, et al., 2007).

In addition, the two previous studies examined only first-order factor structures. A higher-order factor structure might be considered more appropriate because all first-order EBPAS factors are, in theory, indicators of a higher-order, more global factor representing attitudes toward adoption of EBPs. Moreover, it is this higher-order factor that will most likely be used in structural models as either an independent or dependent variable, which further necessitates evaluation of a measurement model that reflects this structure.

The present study builds on our preliminary work by examining a wider array of plausible factor models and evaluating norms for the EBPAS in a sample of 1089 mental health service providers from 100 outpatient mental health clinics in 26 states, sampled to be representative of social service agencies in the United States. A measure whose scores validly and reliably capture attitudes toward adoption of EBPs can aid in understanding service provider attitudes and the adoption of such practices, which may ultimately help to improve the quality of mental health services.

Methods

Survey Design and Context

Participants were part of the national survey of service providers in mental health clinics (Glisson, Landsverk, et al., 2008) included in the Clinical Systems Project (Schoenwald, Kelleher, & Weisz, 2008) of the MacArthur Research Network on Children’s Mental Health, which began with the counties sampled in the National Survey of Child and Adolescent Well-being (Burns, et al., 2004; Dowd, et al., 2004). The sample of 100 clinics used in the Mental Health Clinician Survey is a subset of 200 clinics whose directors participated in the Director’s Survey (Schoenwald, et al., 2008). The clinics participating in this onsite clinician survey met a minimum size criterion (five or more clinicians) and had directors who agreed to allow the survey to be administered directly to clinicians in scheduled onsite staff meetings. Comparisons of the characteristics of clinics that did and did not participate in the survey suggested similar average numbers of therapists employed per clinic and similar proportions of therapists who were psychiatrists, Ph.D. psychologists, and MSW-level social workers, but not of BSW-level social workers (Glisson, Landsverk, et al., 2008).

Procedure

Professional research staff administered the EBPAS in person at each of 100 mental health clinics to clinicians who treated either children or both children and adults. The number of clinicians who met this criterion ranged from 6 to 86 per site. The response rate per site for the clinicians who met this criterion ranged from 30% to 100% with an overall average response rate of 76%. Respondents in each clinic completed the surveys simultaneously during a staff meeting with no upper-level managers present, after receiving assurances of confidentiality from the research assistant. Respondents returned the surveys at the end of the meeting directly to the research assistant using sealed envelopes. In some sites, more than one meeting was necessary to obtain data from all clinicians.

Participants

One thousand eighty-nine clinicians from 100 clinics in 75 cities in 26 states completed the survey. Among the respondents, 24.2% worked for public mental health agencies, 2.1% for private for-profit agencies, and 73.6% for private not-for-profit agencies. Participant mean age was 38.22 years (SD = 11.49; range: 21-73). Most participants were female (76%). The racial/ethnic distribution was 70.5% Caucasian, 14.9% African American, 7.6% Hispanic, 1.8% Asian American, 0.3% Native American, and 4.8% other. Respondents had worked as mental health providers for a mean of 10.66 years (SD = 8.51; range: 0-50). Highest level of education consisted of 7.0% with a doctorate, 67.6% with a master’s degree, 6.9% with some graduate work, 16.0% with a bachelor’s degree, 2.1% with some college, and 0.4% who completed high school. The discipline in which participants earned their highest degree was 40.7% social work, 32.0% psychology, 4.8% education, 1.1% medicine, 0.7% nursing, and 20.6% other. The lack of national data describing the mental health services workforce makes it difficult to determine how representative the sample is of that workforce in terms of demographic characteristics. However, given the diverse sampling of clinics and the sampling frame, the responses appear to provide a reasonable basis for establishing norms for attitudes toward evidence-based treatments and practices.

Measure

The present study focuses on the EBPAS, which consists of 15 items measured on a 5-point Likert scale ranging from 0 (Not at all) to 4 (To a very great extent) (Aarons, 2004; Aarons, et al., 2007). The EBPAS is conceptualized as consisting of four lower-order factors/subscales and a higher-order factor/total scale (i.e., total scale score), the latter representing respondents’ global attitude toward adoption of EBPs. For the lower-order factors, Appeal assesses the extent to which the provider would adopt an EBP if it were intuitively appealing, could be used correctly, or was being used by colleagues who were happy with it. The Requirements factor assesses the extent to which the provider would adopt an EBP if it were required by an agency, supervisor, or state. The Openness factor assesses the extent to which the provider is generally open to trying new interventions and would be willing to try or use more structured or manualized interventions. The Divergence factor assesses the extent to which the provider perceives EBPs as not clinically useful and less important than clinical experience.
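Scoring follows directly from this design: as noted in Table 1, EBPAS scale scores are expressed as item averages on the 0-4 response metric. A minimal scoring sketch in Python (the example responses and item-to-subscale groupings are illustrative placeholders, not the published item keys):

```python
def score_subscale(responses):
    """Average a provider's 0-4 item responses for one EBPAS subscale."""
    if not all(0 <= r <= 4 for r in responses):
        raise ValueError("EBPAS items range from 0 (Not at all) to 4 (To a very great extent)")
    return sum(responses) / len(responses)

# Hypothetical respondent (item groupings shown are illustrative only):
appeal = score_subscale([3, 3, 2, 4])     # four Appeal items
openness = score_subscale([3, 2, 3, 3])   # four Openness items
print(appeal, openness)  # 3.0 2.75
```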

As described in Aarons (2004), content validity of the EBPAS was based on initial development of a pool of items generated from literature review, consultation with mental health service providers, and consultation with mental health services researchers with experience in evidence-based protocols. As additional evidence of content validity, we asked an expert panel of six mental health services researchers to rate each EBPAS item on a 5-point Likert scale (e.g., 1 = “not at all relevant”, 2 = “relevant to a slight extent”, 3 = “relevant to a moderate extent”, 4 = “relevant to a great extent”, 5 = “relevant to a very great extent”) in terms of a) relevance in assessing attitudes toward evidence-based practice, b) importance in assessing attitudes toward evidence-based practice, and c) how representative the item is of the particular factor it is intended to assess. For individual items, the mean rating across panel members ranged from 3.33 to 4.67 for relevance, 3.17 to 4.67 for importance, and 3.17 to 4.67 for representativeness. This result supports EBPAS content validity, as every item was on average rated as at least “moderately” relevant, important, and representative of the factor it was purported to assess.

Previous studies suggest moderate to good internal consistency reliability in two samples for the total score (Cronbach’s α = .77, .79) and for subscale scores excluding Divergence (α range = .78-.93), with somewhat lower reliability estimates for Divergence (α = .59, .66) (Aarons, 2004; Aarons, et al., 2007). Construct validity is supported by the two previous scale development studies, which found acceptable model-data fit in confirmatory factor analyses (Aarons, et al., 2007). In terms of construct and convergent validity, studies have found significant associations between EBPAS scores and mental health clinic structure and policies (Aarons, 2004), organizational culture and climate (Aarons & Sawitzky, 2006), and leadership (Aarons, 2006).
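The internal consistency estimates reported throughout this article are Cronbach's alpha coefficients, computed from the item variances and the variance of the summed score. A self-contained sketch with illustrative data (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of respondent scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three hypothetical 0-4 items answered by four respondents:
print(round(cronbach_alpha([[2, 3, 4, 3], [2, 2, 4, 3], [1, 3, 4, 2]]), 2))  # 0.91
```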

Planned Analyses

Confirmatory factor analysis (CFA) was used to evaluate the factor structure of the EBPAS. Model specification was based on preliminary findings from the two previous EBPAS scale development studies (Aarons, 2004; Aarons, et al., 2007). After identifying a measurement model with acceptable fit, relations between demographic variables and the EBPAS factors were examined. Because clinicians were nested within clinics, it was necessary to account for variation resulting from this nested data structure. Therefore, models were estimated in Mplus (Muthén & Muthén, 1998-2007) adjusting for the nested data structure using maximum likelihood estimation with robust standard errors (MLR), which appropriately adjusts standard errors and chi-square values.1 Missing data were handled through full information maximum likelihood (FIML) estimation. Fit for individual models was assessed using several indices: the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). CFI and TLI values greater than .90, RMSEA values less than .10, and SRMR values less than .08 indicate acceptable model fit (Dunn, Everitt, & Pickles, 1993; Hu & Bentler, 1998; Hu & Bentler, 1999; Kelloway, 1998). Type II error rates tend to be low when multiple fit indices are used in studies where sample sizes are large (i.e., n ≥ 1000) and non-normality is limited, as in the present study (Guo, Li, Chen, Wang, & Meng, 2008). Nested models were compared using a z-test of the parameters in the less constrained model (Muthén, 2008).2 Non-nested models were compared using the Akaike Information Criterion (AIC) and the sample size adjusted Bayesian Information Criterion (SBIC), where smaller values represent better fit.
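The cutoff criteria above can be collected into a simple screening helper (a convenience sketch using the cutoffs cited in the text; in practice, fit indices are weighed jointly and judgmentally rather than mechanically):

```python
def acceptable_fit(cfi, tli, rmsea, srmr):
    """Screen a CFA solution against the cutoffs used in this study:
    CFI > .90, TLI > .90, RMSEA < .10, SRMR < .08."""
    return cfi > .90 and tli > .90 and rmsea < .10 and srmr < .08

# Second-order model with correlated item residuals (values from Table 2):
print(acceptable_fit(cfi=.94, tli=.93, rmsea=.059, srmr=.058))  # True
```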

Model comparisons were conducted based on hypotheses concerning whether a first-order vs. higher-order factor model is more suitable and whether a correlation among a pair of item residuals is appropriate. In the previous EBPAS scale development studies first-order factor models were evaluated (Aarons, 2004; Aarons, et al., 2007). An alternative plausible model that can be tested is one with a higher-order factor structure, because in theory all lower-order EBPAS factors are indicators of a higher-order construct that might be regarded as global attitude toward adoption of EBPs. In the original study of the EBPAS (Aarons, 2004) no item residuals were correlated. In a subsequent study (Aarons, et al., 2007) the residuals of two items within the Appeal scale, “intuitively appealing” and “makes sense”, were correlated based on the results of a modification index. In the current study the correlation among these item residuals was structured as an a priori hypothesis to be tested.

Results

We first examined the amount of dependency among observations within the same clinic using intraclass correlations (ICC[1]; Shrout & Fleiss, 1979) and the average agreement within clinics on EBPAS items and scales (awg; Brown & Hauenstein, 2005; James, Demaree, & Wolf, 1984, 1993). Agreement indices are used to assess the appropriate level of aggregation for nested data. For example, they may be used to assess the extent to which members of an organization, program, or team rate a construct in a similar way; higher levels of agreement suggest that the higher level of aggregation is supported. This is relevant for the current study because clinicians were working within clinics. As shown in Table 1, the ICCs indicated a relatively small degree of dependency among service provider responses within the same clinic. Nevertheless, the true variance tends to be underestimated whenever ICCs take on non-zero values, an effect that is magnified with increasing average cluster size (Cochran, 1977). Next, we calculated the average amount of agreement within clinics for individual items and scales using awg(1) and awg(J), respectively. awg ranges from −1 to 1; awg(1) is calculated as one minus the quotient of two times the observed variance divided by the maximum possible variance, and awg(J) is the average of the items’ awg(1) values for a scale. These statistics have the advantage over rwg (James, et al., 1984, 1993) of not being scale and sample size dependent and of not assuming that a uniform distribution is appropriate. Based on the results presented in Table 1, and in light of the rule of thumb that awg values less than .60 indicate low agreement (James, et al., 1984, 1993), most EBPAS items and all scales should be considered clinician-level rather than clinic-level constructs. This finding is consistent with previous studies (Aarons, 2004).
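The verbal definition of awg(1) above can be made concrete. Under the Brown and Hauenstein (2005) formulation, the maximum possible sample variance for ratings bounded by L and H with observed mean M is (H − M)(M − L) · n/(n − 1); a sketch with illustrative data (not the study's):

```python
def awg1(ratings, low=0, high=4):
    """Within-group agreement for one item (Brown & Hauenstein, 2005):
    awg(1) = 1 - 2 * observed variance / maximum possible variance,
    where the maximum sample variance given observed mean m is
    (high - m) * (m - low) * n / (n - 1)."""
    n = len(ratings)
    m = sum(ratings) / n
    s2 = sum((r - m) ** 2 for r in ratings) / (n - 1)       # observed sample variance
    s2_max = (high - m) * (m - low) * n / (n - 1)           # mean-conditional maximum
    return 1 - 2 * s2 / s2_max

def awgJ(item_awg_values):
    """Scale-level agreement: the mean of the items' awg(1) values."""
    return sum(item_awg_values) / len(item_awg_values)

# Illustrative clinic of four clinicians rating one 0-4 item:
print(round(awg1([2, 2, 3, 3]), 2))  # 0.87
```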

Table 1.

EBPAS Item and Scale Means, Standard Deviations, Factor Loadings, and Reliability Estimates

| EBPAS Subscales and Items | Mean | SD | α/ρ′ | Loading | ICC | awg |
|---|---|---|---|---|---|---|
| 1. Requirements | 2.41 | .99 | .91/.91 | .44ᵃ | .04 | .48 |
| Agency required | 2.40 | 1.06 | | .99 | .05 | .51 |
| Supervisor requiredᵇ | 2.33 | 1.05 | | .88 | .03 | .51 |
| State required | 2.50 | 1.14 | | .77 | .03 | .41 |
| 2. Appeal | 2.91 | .68 | .80/.85 | .89ᵃ | .05 | .59 |
| Makes sense | 3.05 | .80 | | .63 | .02 | .62 |
| Intuitively appealingᵇ | 2.79 | .88 | | .49 | .03 | .59 |
| Colleagues happy with intervention | 2.70 | .93 | | .75 | .05 | .61 |
| Get enough training to use | 3.10 | .88 | | .80 | .04 | .55 |
| 3. Openness | 2.76 | .75 | .84/.84 | .61ᵃ | .05 | .58 |
| Will follow a treatment manual | 2.77 | .95 | | .78 | .07 | .55 |
| Therapy developed by researchers | 2.80 | .86 | | .85 | .04 | .62 |
| Like new therapy typesᵇ | 2.85 | .87 | | .70 | .03 | .59 |
| Therapy different than usual | 2.61 | .95 | | .68 | .01 | .56 |
| 4. Divergence | 1.25 | .70 | .66/.67 | .22ᵃ | .02 | .45 |
| Research based treatments not useful | .77 | .87 | | .68 | .02 | .47 |
| Will not use manualized therapy | .82 | .97 | | .57 | .02 | .36 |
| Clinical experience more important | 2.05 | 1.08 | | .55 | .01 | .49 |
| Know better than researchersᵇ | 1.35 | 1.05 | | .49 | .01 | .48 |
| Overall EBPAS mean | 2.73 | .49 | .76ᵈ | | .03 | .53 |

Note: EBPAS scale scores are expressed as item averages.

ᵃ Second-order factor loading of the subscale on the EBPAS total scale; item rows show the corresponding first-order factor loadings.

ᵇ Item used to scale the latent variable by fixing the factor loading to 1, with the Requirements factor used to scale the higher-order factor.

ᶜ Based on second-order factor loadings and error variances.

ᵈ We provide only the appropriate alpha reliability estimate for the higher-order factor.

All factor loadings are statistically significant, p < .05. SD = standard deviation; α = Cronbach’s alpha; ρ′ = a generalization of Raykov’s reliability that accounts for error covariances (Kano & Azuma, 2001); ICC = intraclass correlation coefficient; awg = average within-clinic agreement of responses.

The results for EBPAS scale norms and reliability are presented in Table 1. Subscale reliabilities ranged from .67 to .91 and the EBPAS Total scale had α = .76. We next examined the factor structure of the EBPAS. Initially, a first-order factor model with all possible factor correlations was evaluated, but without correlated item residuals between the “intuitively appealing” and “makes sense” items. This yielded a model with indices suggesting acceptable fit, with the exception of TLI, which suggested that the model was mis-specified (Table 2). Keeping the same model but allowing correlated residuals among the aforementioned items yielded improvement in all fit indices, with none suggesting misfit.

Table 2.

Confirmatory Factor Analysis Fit Statistics for First- and Second-Order Factor Models with and without Correlated Item Residuals.

| Description | χ² | df | p | CFI | TLI | RMSEA | RMSEA (90% CI) | SRMR | AIC | SBIC |
|---|---|---|---|---|---|---|---|---|---|---|
| First-order without correlated item residuals | 682.99 | 84 | <.001 | .90 | .87 | .081 | (.075, .087) | .063 | 38976.58 | 39069.24 |
| First-order with correlated item residuals | 403.22 | 83 | <.001 | .94 | .92 | .060 | (.054, .065) | .058 | 37343.65 | 37438.12 |
| Second-order without correlated item residuals | 676.93 | 86 | <.001 | .89 | .87 | .079 | (.074, .085) | .071 | 37650.00 | 37739.02 |
| Second-order with correlated item residuals | 409.72 | 85 | <.001 | .94 | .93 | .059 | (.054, .065) | .058 | 37346.26 | 37437.10 |

Note: χ2 = chi-square, df= degrees of freedom, CFI= Comparative Fit Index; TLI=Tucker-Lewis Index; RMSEA=root mean squared error of approximation; CI=confidence interval; SRMR=standardized root mean residual; AIC=Akaike’s Information Criterion; SBIC=sample size adjusted Bayesian Information Criterion.

In order to formally compare these two models, the significance test of the correlation among the item residuals was used, r = .60, z = 18.14, p < .05, suggesting that constraining the residuals to be uncorrelated was unreasonable. Next, a higher-order factor model was evaluated without correlated item residuals. As with the first-order factor model, there was an indication that this model was mis-specified (see results for CFI and TLI). Allowing item residuals to be correlated again yielded improvement in all fit indices, with none suggesting misfit. Comparing these nested models using the significance test of the correlation among the item residuals, r = .60, z = 18.13, p < .05, suggested that the model allowing correlated item residuals fit the data significantly better.2

Comparing the first-order factor model to the higher-order factor model suggested small differences in fit (ΔAIC = −2.61, ΔSBIC = 1.02), with AIC slightly favoring the first-order factor model and the sample size adjusted SBIC slightly favoring the higher-order factor model. This result suggests the models fit about equally well; however, given our preference for conceptualizing attitudes toward EBP as a single higher-order construct and the likelihood that future use of this scale in structural models will involve use of the higher-order factor as an independent or dependent variable, hereafter only the results for the higher-order factor model are reported. Figure 1 displays the standardized factor loadings for the higher-order factor model with correlated item residuals.3 First-order factor loadings ranged from .49 to .99, second-order factor loadings ranged from .22 to .89, and all factor loadings were statistically significant (p’s < .05).
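The non-nested comparison described above reduces to simple differences in the information criteria reported in Table 2 (a sketch; smaller AIC/SBIC values indicate better fit, and differences here are taken as first-order minus second-order):

```python
# Table 2 values for the two models with correlated item residuals.
first_order = {"AIC": 37343.65, "SBIC": 37438.12}
second_order = {"AIC": 37346.26, "SBIC": 37437.10}

delta_aic = first_order["AIC"] - second_order["AIC"]    # -2.61: AIC slightly favors the first-order model
delta_sbic = first_order["SBIC"] - second_order["SBIC"]  # 1.02: SBIC slightly favors the second-order model
print(round(delta_aic, 2), round(delta_sbic, 2))  # -2.61 1.02
```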

Figure 1.

Figure 1

Second-Order Confirmatory Factor Analysis of the Evidence-Based Practice Attitudes Scale (EBPAS) in a National Sample of Mental Health Service Providers.

Note: All factor loadings are statistically significant, p < .05; Estimation of correlated residuals between two Appeal subscale items is indicated by a double-headed arrow.

As shown in Table 3, regression models with MLR estimation were used to evaluate the associations of clinician demographic characteristics with each of the four EBPAS subscales and the EBPAS total scale (i.e., the four first-order factors and one higher-order factor). For the EBPAS total scale, more positive attitudes toward EBPs were associated with being female, being Caucasian (compared to African American, Latino, or “other” ethnicity), and having the highest degree earned in social work (compared to psychology). For the Requirements subscale, greater willingness to adopt EBP given requirements to do so was associated with increasing age and being female, and decreased with higher levels of educational attainment and more years of experience. For the Appeal subscale, greater perceived intuitive appeal of EBP was associated with higher education level, as well as being female and Caucasian (compared to African American, Latino, or “other” ethnicity). For the Openness subscale, greater openness to new practices was associated with fewer years of professional experience and having the highest degree earned in social work (relative to psychology). Finally, for the Divergence subscale, less perceived divergence between EBP and current practice was associated with fewer years of experience, being Caucasian relative to African American or “other” ethnicity, and having the highest degree earned in psychology vs. another field.

Table 3.

Association of Provider Demographic Characteristics with EBPAS Factors

Regression models were estimated for EBPAS Total (R² = .11), Requirements (R² = .04), Appeal (R² = .11), Openness (R² = .06), and Divergence (R² = .10); each outcome’s columns report β, b, and Z.

| Variable | Total β | Total b | Total Z | Req. β | Req. b | Req. Z | Appeal β | Appeal b | Appeal Z | Open. β | Open. b | Open. Z | Div. β | Div. b | Div. Z |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Age | .112 | .004 | 1.52 | .131 | .010 | 2.35* | .075 | .003 | 1.30 | .090 | .005 | 1.68 | .041 | .002 | .54 |
| Gender | .086 | .075 | 2.25* | .102 | .216 | 3.71* | .077 | .074 | 2.66* | .024 | .034 | .59 | .075 | .089 | 1.83 |
| Education Level | .090 | .036 | 1.76 | −.113 | −.109 | −3.04* | .126 | .056 | 2.76* | .012 | .008 | .24 | .007 | .004 | .16 |
| Years Experience | −.143 | −.006 | −1.59 | −.150 | −.016 | −2.62* | −.084 | −.04 | −1.54 | −.159 | −.012 | −3.19* | −.203 | −.012 | −2.83* |
| Job Tenure | −.024 | −.002 | −.73 | −.012 | −.002 | −.22 | −.021 | −.002 | −.68 | −.026 | −.003 | −.612 | −.029 | −.003 | −.77 |
| Race/Ethnicity | | | | | | | | | | | | | | | |
|  African American | −.148 | −.154 | −3.20* | −.003 | −.009 | −.09 | −.160 | −.185 | −3.61* | .044 | .077 | 1.29 | −.178 | −.255 | −3.21* |
|  Asian American | .012 | .034 | .12 | .000 | .000 | .00 | .019 | .060 | .50 | −.007 | −.033 | −.19 | −.032 | −.123 | −.49 |
|  Hispanic | −.117 | −.165 | −2.32* | −.012 | −.040 | −.298 | −.126 | −.196 | −2.58* | −.038 | −.091 | −.71 | .000 | −.001 | −.01 |
|  American Indian | .018 | .120 | .24 | .033 | .547 | 1.24 | −.001 | −.009 | −.018 | .061 | .694 | 1.48 | .019 | .174 | .66 |
|  Other Ethnicity | −.100 | −.174 | −2.62* | −.027 | −.114 | −.75 | −.117 | −.225 | −3.15* | .010 | .030 | .26 | −.105 | −.251 | −2.40* |
| Discipline | | | | | | | | | | | | | | | |
|  Education | .008 | .013 | .22 | .039 | .166 | 1.05 | −.004 | −.007 | −.13 | .014 | .040 | .34 | −.059 | −.141 | −1.71 |
|  Medicine | .005 | .017 | .22 | .027 | .231 | .92 | −.007 | −.026 | −.29 | .009 | .055 | .24 | .032 | .157 | .84 |
|  Nursing | −.058 | −.261 | −1.04 | −.062 | −.670 | −1.87 | −.051 | −.255 | −.94 | −.042 | −.316 | −.91 | .031 | .192 | 1.07 |
|  Social Work | .103 | .078 | 2.08* | .033 | .061 | .93 | .075 | .063 | 1.79 | .139 | .178 | 3.90* | −.011 | −.011 | −.30 |
|  Other Discipline | −.039 | −.036 | −.87 | .051 | .113 | 1.23 | −.037 | −.037 | −.84 | −.061 | −.095 | −1.42 | −.094 | −.118 | −2.30* |

Note: R² = proportion of variance explained; β = standardized regression coefficient; b = unstandardized regression coefficient. Reference groups for dummy-coded variables are as follows: gender reference is male, race/ethnicity reference is Caucasian, and discipline reference is Psychology (discipline is the content area in which the highest degree was earned).

* p < .05 (two-tailed).

In order to interpret the substantive significance of the associations between clinician demographic characteristics and the EBPAS factors, it is necessary to consider the scaling of the latent factors, which is based on the individual items with anchor points of 0 = “Not at All”, 1 = “To a Slight Extent”, 2 = “To a Moderate Extent”, 3 = “To a Great Extent”, and 4 = “To a Very Great Extent”. For example, the gender difference on the Requirements subscale, represented by an unstandardized regression coefficient of .216, indicates that females scored .216 units higher than males, holding constant the other variables in the model.

Discussion

The results of this study support the higher-order factor structure of the EBPAS in a large, nationwide and diverse sample of mental health service providers. The EBPAS subscales and total scale score demonstrated moderate to excellent reliability in the current sample. The sampling frame used for this study was intended to represent public sector social service agencies throughout the United States. The mental health clinics were selected based on this sampling frame and thus are likely representative of mental health providers in the United States. Indeed, the findings presented here are highly congruent with previous studies in two different settings (Aarons, 2004; Aarons, et al., 2007). Given the size, geographic dispersion, and diversity of the sample, the results suggest that the factor structure and reliability may be generalizable to a variety of contexts and the reported norms provide a reference point for future research. In addition, the results presented here serve as a reference point for exploring the factor structure and scale norms when the EBPAS is used with other cultures and in other languages.

Our results also demonstrated a number of associations with mental health service provider characteristics. Older providers were more open to adopting EBP given requirements to do so, but age was not associated with any of the other scales. This suggests that formalized policies and communications about the importance of adopting EBP may be particularly important for younger service providers (Aarons, 2004). Females were also more open than males to adopting EBPs, and this was reflected in the Requirements, Appeal, and EBPAS total scale scores. This is in contrast to a previous study in which this effect was not found (Aarons, 2004). While the mental health and social service workforce is predominantly female (76% in this sample), attention should be given to understanding why gender differences are present and how to support adoption of EBP across genders.

Higher educational attainment was associated with a lower likelihood of adopting an EBP if required, but a greater willingness to adopt given the appeal of an EBP, which is consistent with the original EBPAS scale development study (Aarons, 2004). This finding suggests that those with higher educational attainment may be more assertive in making independent decisions about utilizing an EBP, but may also be particularly cognizant of the fit of an EBP with their own view of what constitutes appropriate mental health treatment or intervention. As with education, providers' years of experience were associated with lower Requirements scores, but were also associated with lower Openness and Divergence scores. Contrasting educational attainment with years of experience thus reveals different patterns: a more restrained openness to adopting EBP among those with higher educational attainment, and lower enthusiasm for EBP among those with more on-the-job experience. This contrast suggests that workplace and organizational supports for EBP, particularly around acceptance of new innovations, might improve provider attitudes and willingness to adopt EBP (Aarons, Sommerfeld, & Walrath, in press).

Relative to Caucasian providers, African American, Latino, and “other” providers had lower scores on some of the EBPAS scales. This was found for the EBPAS total, Appeal, and Divergence scales. Although highly speculative, this finding suggests greater caution about the perceived fit and perceived utility of EBPs among these providers. It may be that non-Caucasian providers have a more cautious or skeptical stance toward the utility of EBPs. This could be a result of the absence of translated materials, culturally adapted materials, or evidence regarding the fit of such practices for a given cultural or community group. However, there is a need to balance the needs of practitioners and their clients with available best treatments (Tanaka-Matsumi, 2008; Whitley, 2007). While some researchers have developed recommendations or approaches for adaptation of EBPs for particular cultural groups (Lau, 2006; McCabe, Yeh, Garland, Lau, & Chavez, 2005), such approaches are not widespread. Consequently, the absence of available adapted EBPs may engender some caution or skepticism toward the appropriateness of particular EBPs within some communities. However, EBP and cultural adaptation should ultimately be seen as complementary and accomplishable (Whaley & Davis, 2007).

We also examined EBPAS scale scores across professional disciplines. With the exception of those trained in social work and one result for the “other” category, there were few significant differences between those trained in psychology and those trained in other disciplines. The “other” category results indicate more perceived divergence between EBP and this group’s usual practice. Those trained in social work, however, had higher scores on two of the EBPAS scales (i.e., Openness and EBPAS total). It is likely that the result for the EBPAS total scale score was largely driven by the significant Openness scale effect, indicating more openness to trying new practices even if the practices are manualized or more structured. We do not know whether education or training experiences might account for these differences; however, further work is needed to develop effective approaches to training practitioners and researchers in implementation of EBPs across mental health and social service disciplines (Proctor, et al., 2009).

The EBPAS has proved to be a useful measure for the study of service provider attitudes toward adopting EBPs. More positive attitudes toward adopting EBPs have been associated with greater organizational support (Aarons, et al., in press) and with formalized practice policies (Aarons, 2004). Consequently, as the dissemination and implementation of EBPs within communities continues to rise, it will be important to improve our understanding of when, how, why, and what type of organizational enhancements may be needed to improve uptake and adoption.

In addition, while attitudes toward EBPs have been shown to be related to organizational culture, climate, and leadership, it will be important to examine evidence of both concurrent and predictive validity. Future studies should examine the degree to which attitudes toward adopting EBP predict provider behaviors, including seeking further education and training in EBP, subsequent use of EBP, and sustained use of EBPs in practice. In addition, it will be important to examine factors that might moderate the relationship between attitudes and behavior (Aarons, 2005; Ajzen & Cote, 2008; Ajzen & Fishbein, 2005; Jaccard & Blanton, 2005). For example, providers may gain knowledge and skill in training but may lack a high level of confidence or self-efficacy, which can influence actual use of an EBP with clients or patients. It may also be that there is ambiguity in the degree to which a particular EBP is applicable to a particular client or patient. Take the example of a physician who has received some training in or guidelines for the use of a particular evidence-based psychotropic medication but has not had a subsequent opportunity to try it with patients. Conversely, it may be that a patient’s symptoms do not clearly meet guidelines for a particular medication.

The EBPAS is now being used in a number of studies to better understand how provider attitudes toward EBP might be influenced and how such attitudes might influence future behaviors. For example, organizational supports for using EBPs are associated with more positive attitudes toward EBP (Aarons, et al., in press). This suggests that assessment of attitudes prior to an implementation effort might be used to target specific organizational or leader activities and supports to foster buy-in during the implementation process. Leadership style is also associated with staff attitudes, and leaders might use EBPAS data to better target their leadership strategies and messages to promote positive attitudes among staff.

Another critical concern is establishing a link between attitudes and the fidelity with which EBPs are implemented. However, where local adaptation of EBP is needed to meet the constraints of the implementation context or the needs of particular clients or populations (Aarons & Palinkas, 2007), fidelity assessment may require reformulation. Attitudes toward implementation of EBPs may also be an outcome to be studied in their own right. For instance, research suggests that provider characteristics (e.g., education) and organizational context (e.g., level of organizational bureaucratic structure; organizational climate and culture; leadership) play a role in the implementation of EBPs in real-world settings (Aarons, 2005; Aarons & Palinkas, 2007; Aarons & Sawitzky, 2006; Glisson, 2002). Organizational and leadership interventions could be tailored to improve provider attitudes and subsequent uptake of EBPs.

Future research should also investigate a number of other dimensions of provider attitudes toward EBP and change in practice that have yet to be identified and explored. For example, the impact of learning new clinical skills on perceived professional accomplishment might impact attitudes toward adopting new practices. Similarly, determining that a new clinical skill has had its intended effect on child outcomes is likely to exert an influence on attitude. Input from clinicians and supervisors could provide insight into additional potentially important questions that might impact staff attitudes and implementation of EBPs. Future studies should address these concerns in an effort to tailor practice change efforts to the attitudes and preferences of mental health service providers. More study of attitudes toward implementation of EBPs should contribute to the knowledge base on how to more effectively and efficiently move EBPs into real-world human service settings and improve quality of care and consumer outcomes.

Acknowledgments

The Research Network on Youth Mental Health is a collaborative network funded by the John D. and Catherine T. MacArthur Foundation. Network Members at the time this work was performed included: John Weisz, Ph.D. (Network Director), Bruce Chorpita, Ph.D., Robert Gibbons, Ph.D., Charles Glisson, Ph.D., Evelyn Polk Green, M.A., Kimberly Hoagwood, Ph.D., Peter S. Jensen, M.D., Kelly Kelleher, M.D., John Landsverk, Ph.D., Stephen Mayberg, Ph.D., Jeanne Miranda, Ph.D., Lawrence Palinkas, Ph.D., Sonja Schoenwald, Ph.D. This work was also funded in part by National Institute of Mental Health grant R01MH072961 (PI: Aarons).

Footnotes

1

There was some evidence of multivariate non-normality in the data based on Mardia’s skew and kurtosis statistics. For any one item, the largest absolute skew and kurtosis values were relatively small, 1.06 and 1.12, respectively. Based on simulation studies, the impact of this degree of non-normality on the χ2 statistic, fit indices, and estimation of parameter standard errors using a traditional ML estimator is likely to be minimal (Curran, West, & Finch, 1996; West, Finch, & Curran, 1995). Nevertheless, we utilized the MLR estimator in our analyses, which should lead to results that are robust to any effects of non-normality (Muthén & Muthén, 1998-2007).

2

Initially, scaled chi-square difference test statistics were calculated to compare nested models; however, these yielded implausible negative values in the two comparisons of interest, which can occur in practice because the test is asymptotic (Satorra & Bentler, 2008). Because the nested models constrained only a single parameter relative to the less constrained models, we adopted the alternative approach of using a z-test to compare the parameter in the more vs. less constrained model (Muthén, 2008).
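The computation behind this footnote can be sketched as follows. This is a minimal illustration of the standard scaled chi-square difference calculation discussed by Satorra and Bentler (2008), not the authors' actual analysis code; all numeric values are invented for illustration. T0 and T1 are the uncorrected ML chi-square statistics of the more and less constrained models, c0 and c1 their scaling correction factors, and d0 and d1 their degrees of freedom.

```python
def scaled_chi_square_diff(T0, c0, d0, T1, c1, d1):
    """Return (TRd, cd): the scaled difference test statistic and the
    difference-test scaling correction factor.

    cd can come out negative (or near zero) in finite samples, which
    produces the implausible negative test values the footnote describes.
    """
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)  # difference scaling correction
    TRd = (T0 - T1) / cd                  # scaled difference statistic
    return TRd, cd

# Invented example: constrained model (df = 25) vs. less constrained (df = 24).
TRd, cd = scaled_chi_square_diff(T0=85.0, c0=1.20, d0=25,
                                 T1=80.0, c1=1.10, d1=24)
```

With these made-up inputs, cd = (25·1.20 − 24·1.10)/(25 − 24) = 3.6 and TRd = 5.0/3.6 ≈ 1.39; when the numerator of cd turns negative, TRd does too, which is what motivated the z-test alternative.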

3

Twenty-one multivariate outliers were identified using Mahalanobis distance (i.e., cases with p-values < .001). Deletion of these cases resulted in negligible changes to results related to fit statistics, parameter estimates, and significance tests and thus we elected to include these cases in our reported analyses.
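The outlier screen described in this footnote can be sketched in a few lines. This is a hypothetical illustration of Mahalanobis-distance screening at p < .001, not the authors' code; the function name and the simulated data are invented, and the chi-square reference distribution with df equal to the number of variables is the conventional choice for this screen.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.001):
    """Flag rows of X whose squared Mahalanobis distance from the sample
    centroid exceeds the chi-square critical value at the given alpha
    (df = number of variables)."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # Squared Mahalanobis distance for each row: d_i' * S^-1 * d_i
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    critical = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > critical

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[0] = 25.0  # plant one clearly extreme case
mask = mahalanobis_outliers(X)
```

Cases flagged by the mask would then be deleted in a sensitivity analysis and the model refit, as the footnote reports.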

Publisher's Disclaimer: The following manuscript is the final accepted manuscript. It has not been subjected to the final copyediting, fact-checking, and proofreading required for formal publication. It is not the definitive, publisher-authenticated version. The American Psychological Association and its Council of Editors disclaim any responsibility or liabilities for errors or omissions of this manuscript version, any version derived from this manuscript by NIH, or other third parties. The published version is available at www.apa.org/pubs/journals/pas.

Contributor Information

Gregory A. Aarons, Department of Psychiatry, University of California, San Diego

Charles Glisson, College of Social Work, University of Tennessee, Knoxville.

Kimberly Hoagwood, Department of Psychiatry, Columbia University, and New York State Office of Mental Health.

Kelley Kelleher, College of Medicine and Public Health, Ohio State University.

John Landsverk, Child and Adolescent Services Research Center.

Guy Cafri, Department of Psychology, University of California, San Diego.

References

  1. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6(2):61–74. doi: 10.1023/b:mhsr.0000024351.12294.65. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Aarons GA. Measuring provider attitudes toward evidence-based practice: Consideration of organizational context and individual differences. Child and Adolescent Psychiatric Clinics of North America. 2005;14:255–271. doi: 10.1016/j.chc.2004.04.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aarons GA. Transformational and transactional leadership: Association with attitudes toward evidence-based practice. Psychiatric Services. 2006;57(8):1162–1169. doi: 10.1176/appi.ps.57.8.1162. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Aarons GA, McDonald EJ, Sheehan AK, Walrath-Greene CM. Confirmatory factor analysis of the Evidence-Based Practice Attitude Scale (EBPAS) in a geographically diverse sample of community mental health providers. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34:465–469. doi: 10.1007/s10488-007-0127-x. [DOI] [PubMed] [Google Scholar]
  5. Aarons GA, Palinkas LA. Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health & Mental Health Services Research. 2007;34:411–419. doi: 10.1007/s10488-007-0121-3. [DOI] [PubMed] [Google Scholar]
  6. Aarons GA, Sawitzky A. Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services. 2006;3(1):61–72. doi: 10.1037/1541-1559.3.1.61. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Aarons GA, Sommerfeld DH, Walrath CM. Evidence-based practice implementation: The impact of public vs. private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implementation Science. doi: 10.1186/1748-5908-4-83. (in press) [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Ajzen I, Fishbein M. The influence of attitudes on behavior. In: Albarracén D, Johnson BT, Zanna MP, editors. The Handbook of Attitudes. Lawrence Erlbaum Associates, Inc.; Mahwah, N.J.: 2005. pp. 173–222. [Google Scholar]
  9. Ajzen I, Cote NG. Attitudes and the prediction of behavior. In: Crano WD, Prislin R, editors. Attitudes and attitude change. Psychology Press; New York, NY: 2008. pp. 289–311. [Google Scholar]
  10. Balas EA, Boren SA. Managing clinical knowledge for healthcare improvements. In: Bemmel J, McCray AT, editors. Yearbook of Medical Informatics 2000: Patient-Centered Systems. Schattauer Verlagsgesellschaft mbH; Stuttgart, Germany: 2000. pp. 65–70. [Google Scholar]
  11. Brown RD, Hauenstein NMA. Interrater agreement reconsidered: An alternative to the rwg indices. Organizational Research Methods. 2005;8:165–184. [Google Scholar]
  12. Bruns EJ, Hoagwood KE, Rivard JC, Wotring J, Marsenich L, Carter B. State implementation of evidence-based practice for youths, part II: recommendations for research and policy. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(5):499–504. doi: 10.1097/CHI.0b013e3181684557. [DOI] [PubMed] [Google Scholar]
  13. Burns BJ, Phillips SD, Wagner HR, Barth RP, Kolko DJ, Campbell Y, et al. Mental health needs and access to mental health services by youths involved with child welfare: A national survey. Journal of the American Academy of Child & Adolescent Psychiatry. 2004;43(8):960–970. doi: 10.1097/01.chi.0000127590.95585.65. [DOI] [PubMed] [Google Scholar]
  14. Candel MJJM, Pennings JME. Attitude-based models for binary choices: A test for choices involving an innovation. Journal of Economic Psychology. 1999;20(5):547–569. [Google Scholar]
  15. Cashin C, Scheffler R, Felton M, Neal A, Miller L. Transformation of the California mental health system: stakeholder-driven planning as a transformational activity. Psychiatric Services. 2008;59(10):1107–1114. doi: 10.1176/ps.2008.59.10.1107. [DOI] [PubMed] [Google Scholar]
  16. Cochran WG. Sampling techniques. 3rd ed Wiley; New York: 1977. [Google Scholar]
  17. Curran PJ, West SG, Finch JF. The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods. 1996;1:16–29. [Google Scholar]
  18. Dowd K, Kinsey S, Wheeless S, Thissen R, Richardson J, Suresh R, et al. National Survey of Child and Adolescent Well-Being (NSCAW), combined waves 1-4, data file user’s manual, restricted release version. Research Triangle Institute, University of North Carolina at Chapel Hill, Caliber Associates, University of California at Berkeley; 2004. [Google Scholar]
  19. Dunn G, Everitt B, Pickles A. Modeling covariances and latent variables using EQS. Chapman & Hall; London: 1993. [Google Scholar]
  20. Frambach RT, Schillewaert N. Organizational innovation adoption: A multi-level framework of determinants and opportunities for future research. Journal of Business Research. Special Issue: Marketing theory in the next millennium. 2002;55(2):163–176. [Google Scholar]
  21. Glisson C. The organizational context of children’s mental health services. Clinical Child and Family Psychology Review. 2002;5(4):233–253. doi: 10.1023/a:1020972906177. [DOI] [PubMed] [Google Scholar]
  22. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, et al. Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research. Special Issue: Improving mental health services. 2008;35(1-2):98–113. doi: 10.1007/s10488-007-0148-5. [DOI] [PubMed] [Google Scholar]
  23. Glisson C, Schoenwald SK. The ARC organizational and community intervention strategy for implementing evidence-based children’s mental health treatments. Mental Health Services Research. 2005;7(4):243–259. doi: 10.1007/s11020-005-7456-1. [DOI] [PubMed] [Google Scholar]
  24. Glisson C, Schoenwald SK, Kelleher K, Landsverk J, Hoagwood KE, Mayberg S, et al. Assessing the organizational and social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health. 2008;35:124–133. doi: 10.1007/s10488-007-0148-5. [DOI] [PubMed] [Google Scholar]
  25. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly. 2004;82(4):581–629. doi: 10.1111/j.0887-378X.2004.00325.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Medical Journal of Australia. 2004;180:S57–S60. doi: 10.5694/j.1326-5377.2004.tb05948.x. [DOI] [PubMed] [Google Scholar]
  27. Guo Q, Li F, Chen X, Wang W, Meng Q. Performance of fit indices in different conditions and selection of cut-off values. Acta Psychologica Sinica. 2008;40:109–118. [Google Scholar]
  28. Hoagwood K. Family-based services in children’s mental health: a research review and synthesis. Journal of Child Psychology and Psychiatry. 2005;46(7):690–713. doi: 10.1111/j.1469-7610.2005.01451.x. [DOI] [PubMed] [Google Scholar]
  29. Hu L, Bentler PM. Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods. 1998;3:424–453. [Google Scholar]
  30. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling. 1999;6(1):1–55. [Google Scholar]
  31. Jaccard J, Blanton H. The origins and structure of behavior: Conceptualizing behavior in attitude research. In: Albarracén D, Johnson BT, Zanna MP, editors. The handbook of attitudes. Lawrence Erlbaum Associates Publishers; Mahwah, NJ: 2005. pp. 125–171. [Google Scholar]
  32. James LR, Demaree RG, Wolf G. Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology. 1984;69(1):85–98. [Google Scholar]
  33. James LR, Demaree RG, Wolf G. Rwg: An assessment of within-group interrater agreement. Journal of Applied Psychology. 1993;75:306–309. [Google Scholar]
  34. Kano Y, Azuma Y. Use of SEM programs to precisely measure scale reliability. In: Yanai H, Okada A, Shigemasu K, Kano Y, Meulman JJ, editors. New Developments in Psychometrics: Proceedings of International Meeting of Psychometric Society; Osaka, Japan: Springer; 2001. pp. 141–148. [Google Scholar]
  35. Kelloway EK. Using Lisrel for Structural Equation Modeling: A researcher’s guide. Sage; Thousand Oaks, CA: 1998. [Google Scholar]
  36. Lau AS. Making the case for selective and directed cultural adaptations of evidence-based treatments: Examples from parent training. Clinical Psychology: Science and Practice. 2006;13(4):295–310. [Google Scholar]
  37. Magnabosco JL. Innovations in mental health services implementation: a report on state-level data from the U. S. evidence-based practices project. Implementation Science. 2006;1(13) doi: 10.1186/1748-5908-1-13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. McCabe KM, Yeh M, Garland AF, Lau AS, Chavez G. The GANA program: A tailoring approach to adapting parent child interaction therapy for Mexican Americans. Education and Treatment of Children. 2005;28(2):111–129. [Google Scholar]
  39. Muthén BO. Mplus Discussion Group. 2008 from http://www.statmodel.com/discussion/messages/9/156.html?1226115975#POST23630.
  40. Muthén LK, Muthén BO. Mplus user’s guide. 3rd ed Author; Los Angeles: 1998-2007. [Google Scholar]
  41. ODMH [Retrieved September 9, 2009];Ohio’s Coordinating Centers of Excellence. 2009 from http://mentalhealth.ohio.gov/what-we-do/promote/coordinating-centers-of-excellence.shtml.
  42. Proctor LK, Landsverk J, Aarons GA, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36:24–34. doi: 10.1007/s10488-008-0197-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Rogers EM. Diffusions of Innovations. 4th ed The Free Press; New York: 1995. [Google Scholar]
  44. Rydell R, McConnell A. Understanding implicit and explicit attitude change: A systems of reasoning analysis. Journal of Personality and Social Psychology. 2006;91(6):995–1008. doi: 10.1037/0022-3514.91.6.995. [DOI] [PubMed] [Google Scholar]
  45. Satorra A, Bentler PM. Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic. 2008 doi: 10.1007/s11336-009-9135-y. from http://repositories.cdlib.org/uclastat/papers/2008010905. [DOI] [PMC free article] [PubMed]
  46. Schoenwald SK, Kelleher K, Weisz J. Building bridges to evidence-based practice: The MacArthur foundation child system and treatment enhancement projects (Child STEPs) Administration and Policy in Mental Health. 2008;35:66–72. doi: 10.1007/s10488-007-0160-9. [DOI] [PubMed] [Google Scholar]
  47. Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin. 1979;86(2) doi: 10.1037//0033-2909.86.2.420. [DOI] [PubMed] [Google Scholar]
  48. Tanaka-Matsumi J. Functional approaches to evidence-based practice in multicultural counseling and therapy. In: Gielen UP, Draguns JG, Fish JM, editors. Principles of multicultural counseling and therapy. Routledge/Taylor & Francis Group; New York, NY: 2008. pp. 169–198. [Google Scholar]
  49. West SG, Finch JF, Curran PJ. Structural equation modeling with nonnormal variables: problems and remedies. In: Hoyle RH, editor. Structural Equation Modeling: Concepts, Issues, and Applications. Sage; Thousand Oaks, CA: 1995. [Google Scholar]
  50. Whaley AL, Davis KE. Cultural competence and evidence-based practice in mental health services: A complementary perspective. American Psychologist. 2007;62(6):563–574. doi: 10.1037/0003-066X.62.6.563. [DOI] [PubMed] [Google Scholar]
  51. Whitley R. Cultural competence, evidence-based medicine, and evidence-based practices. Psychiatric Services. 2007;58(12):1588–1590. doi: 10.1176/ps.2007.58.12.1588. [DOI] [PubMed] [Google Scholar]
