Published in final edited form as: Child Youth Serv Rev. 2016 Oct 14;70:375–382. doi: 10.1016/j.childyouth.2016.10.022

Use of evidence-based interventions in child welfare: Do attitudes matter?

Sonya J Leathers 1,*, Catherine Melka-Kaffer 1, Jill E Spielfogel 1, Marc S Atkins 1
PMCID: PMC6221194  NIHMSID: NIHMS993936  PMID: 30416239

Abstract

Implementation of evidence-based programs has progressed slowly, with the majority of services in child welfare settings lacking empirical evidence of effectiveness. In other settings, research has identified providers’ attitudes about evidence-based practices (EBPs) as a potential barrier to adoption of EBPs. Because little research has focused on the role of attitudes in influencing use after training in an EBP in child welfare, the potential for attitudes to impede implementation efforts in child welfare is unclear. This study addressed this question in a sample of 55 caseworkers and therapists randomly assigned to either enhanced support to use an EBP following training or a training-only condition. Providers’ use of the intervention after training and their attitudes about EBPs were measured at up to five time points. Results indicate that attitudes did not predict providers’ use of the EBP, and attitudes did not change overall or in the enhanced condition that provided greater exposure to the intervention. Providers perceived requirements to use a practice as more influential on their use than their openness to EBPs. However, those who were more open to EBPs were more likely to participate in implementation support after the training, suggesting that openness facilitates participation in activities that support use of a new intervention.

Keywords: Implementation, Child welfare training, Evidence-based practices

1. Introduction

Although evidence-based practices have the potential to improve outcomes in child protection, placement prevention, placement outcomes, and child mental health (Chaffin & Friedrich, 2004; Timmer & Urquiza, 2014), most services provided by child welfare agencies do not incorporate empirically supported interventions. Just 20% of child welfare administrators and practitioners report using empirically-based knowledge in their practice (Chagnon, Pouliot, Malo, Gervais, & Pigeon, 2010). Implementation research has advanced understanding of multiple factors at the provider, agency, and service system levels that potentially affect adoption of new practices in other settings (Aarons & Palinkas, 2007; Gray, Joy, Plath, & Webb, 2013a,b), but little is known about elements that support or inhibit implementation of innovations in practice in child welfare settings. These factors might be similar to those in other settings, or some might be context specific, with unique aspects inhibiting adoption of empirically-based practice models in complex, stressed public service systems. This lack of understanding is a concern as it limits efforts to develop effective strategies to support implementation in child welfare.

Efforts to develop frameworks to guide implementation of new practices (e.g., Fixsen, Naoom, Blase, Friedman, & Wallace, 2005) followed recognition that knowledge of a practice, while necessary, is insufficient to initiate use of a new practice. Some research suggests that training alone leads to little or no use of a new intervention, even when the training incorporates an active learning approach involving demonstration, role play, and application to actual cases (Beidas & Kendall, 2010). Providing ongoing support for use with consultation or coaching as well as administrative support increases use (Beidas & Kendall, 2010; Casillas, Fauchier, Derkash, & Garrido, 2016), but even with these supports, other factors may impede full implementation or undermine sustained use. A key factor may be the provider’s perceptions of the new practice both prior to and after the training. Perceptions of evidence-based practices, in particular, may play a pivotal role. Providers with unfavorable attitudes toward evidence-based practices, such as perceptions that these practices fail to recognize their practice expertise, do not apply to their clients’ complex needs, or require reliance on a cumbersome manual, are unlikely to use the new practices, regardless of training and other supports for use (Aarons & Palinkas, 2007; Addis & Krasnow, 2000; Borntrager, Chorpita, Higa-McMillan, & Weisz, 2009). Despite the potential importance of attitudes about evidence-based practices in implementation, very little research has examined child welfare service providers’ perceptions of evidence-based practices and whether their attitudes are associated with their use of evidence-based interventions following training. If negative attitudes are a significant factor potentially undermining implementation efforts, supporting the development of positive attitudes could be a critical aspect of an effective implementation plan. This article explores these questions by reporting on the relationships between attitudes about evidence-based practices and initiation of use of an evidence-based intervention over time by child welfare caseworkers and therapists.

1.1. Evidence-based practice in the context of child welfare

Several aspects of child welfare practice suggest that perceptions of evidence-based practices might have distinct effects for child welfare providers relative to mental health providers. Child welfare practices are affected by distinct contextual factors at the worker and client levels as well as by organizational and systemic factors. Child welfare workers often have unmanageable caseloads and are under enormous pressure to ensure child safety, resulting in a primary focus on documentation, meeting court requirements, and addressing only their clients’ most pressing needs (Collins, Kim, & Amodeo, 2010). Worker turnover is high, reflecting the high stress associated with child welfare work. Across settings, time to learn something new and cost are cited as major barriers to use of evidence-based interventions (Barwick et al., 2008; Cook, Biyanova, & Coyne, 2009; Stewart, Stirman, & Chambless, 2012), and for under-resourced, crisis-oriented service systems, implementation of evidence-based practice models may seem impossible given the difficulty of reallocating scarce resources. These factors could lead to more negative attitudes about practice approaches that require more standardized interventions for all families and greater oversight of practices to ensure fidelity to a particular model; given the high level of need across families and the impossible task of providing adequate services to all, flexibly responding to the most immediate, high-risk families might seem more important. These factors might also weaken the relationship between attitudes about evidence-based practices and use of proposed EBPs, which might be viewed as inapplicable to child welfare services.

In contrast, child welfare systems have experienced increased pressure to provide services that result in improved family and child outcomes (Whitaker, 2011). This pressure might positively affect attitudes and openness to evidence-based practice models by creating incentives to use empirically supported treatments and increasing exposure to these interventions. Evidence-based practices have also become more accessible than in the past through the resources and supports provided to child welfare systems and providers (e.g., see www.cebc4cw.org and www.childwelfare.gov/management/practice_improvement/evidence/ebp.cfm). The infusion of this information is likely to shape agency administrators’ and staff members’ attitudes as well as their exposure to information, training, and ultimately evidence-based practices. Despite these efforts, however, child welfare systems struggle to provide services with a strong evidence base. One nationally representative U.S. study found that only about a quarter of new programs initiated by child welfare agency directors were evidence-based (Horwitz et al., 2014), and in another study less than 20% of child welfare staff reported using empirically-based knowledge in their practice (Chagnon et al., 2010). Understanding the extent to which attitudes might inhibit adoption of EBPs is an important question in current efforts to develop evidence-based service systems.

1.2. Attitudes about evidence-based practices and use

Multiple studies focused on dissemination of evidence-based practices (EBPs) among mental health practitioners have examined the relationship between attitudes toward EBPs and their use (Aarons et al., 2010). The majority of these studies have measured attitudes toward EBPs using the Evidence-Based Practice Attitudes Scale (EBPAS; Aarons, 2004), a 15-item questionnaire that provides an overall score indicating how positive a practitioner is toward EBPs and four subscales: (1) appeal of EBPs, (2) likelihood of using an EBP when it is a requirement, (3) openness to new interventions, and (4) divergence or unfavorable attitudes toward EBPs. Use of the EBPAS in multiple studies provides a standardized measure of attitudes across different provider groups. Research has assessed the association between EBPAS scores and use of EBPs by mental health or social service providers, which includes individuals with varying degrees of educational attainment treating a variety of conditions, including many mental illnesses, trauma, substance use disorders, and autism.

Findings from these studies indicate stronger support for the relevance of specific dimensions measured by the EBPAS than for the more global assessment of attitudes provided by the total scale. In particular, a provider’s openness to EBPs might be a critical factor in initiation of a practice. In a sample of psychologists providing treatment to child and adolescent survivors of sexual abuse, Czincz and Romano (2013) found that the Openness and Appeal subscales were associated with self-reported use of EBPs. Similarly, in a study of paraprofessional and professional staff providing treatment for youth with autism, the Openness subscale was associated with self-reported use of EBPs (Paynter & Keen, 2015). In both of these studies, open attitudes were no longer significant after controlling for other variables potentially indicating greater exposure to or knowledge of EBPs (cognitive behavioral theoretical orientation or self-rated knowledge of specific EBPs). While it is plausible that openness to EBPs preceded the practitioners’ greater exposure to EBPs, openness might instead have increased as the result of prior exposure.

A third study, which examined attitudes of supervisors in substance abuse treatment programs, provides further support for an association between the Openness subscale and use of two different evidence-based programs (Guerrero, He, Kim, & Aarons, 2014). In this study, supervisors’ scores on the Openness and Requirements subscales were significantly related to implementation of one of the two evidence-based programs, and openness was also related to use of the second program. Overall, openness was the strongest factor predicting use of either program (Guerrero et al., 2014).

The potential for attitudes to play an indirect role in determining use is also suggested by several other studies. In a sample of 170 mental health providers, Aarons, Sommerfeld, Hecht, Silovsky, and Chaffin (2009) found a significant association between the EBPAS total score and EBP use. However, greater use of EBPs was also associated with organizational support for evidence-based practices. While the authors expected the effect of organizational support to be mediated by attitudes (i.e., attitudes at the individual level would account for the relationship between organizational support and use), this hypothesis was not supported. Instead, organizational support for use of EBPs accounted for the relationship between use and EBPAS scores, indicating that organizational support rather than individual attitudes is a key driver of use. While the authors suggest that attitudes may play a larger role in EBP use when organizational and leadership support is lacking, this hypothesis was not tested (e.g., in the subsample with relatively low organizational support).

In contrast, several other studies have failed to find a direct connection between EBPAS scores and use of evidence-based interventions. In a cross-sectional study of therapists attending workshops on evidence-based practices (Higa-McMillan, Nakamura, Morris, Jackson, & Slavin, 2015), participants generally had high overall EBPAS scores (subscales were not examined), but the overall EBPAS score was unrelated to reported use of various evidence-based interventions. In a second study that administered the EBPAS prior to receiving training in an evidence-based brief screening and treatment intervention, no significant relationship was found between any of the EBPAS subscales and percentage of completed brief screens and interventions in a community mental health center following training (Patterson Silver Wolf, 2015). In a third study, Beidas et al. (2014) also found that none of the EBPAS subscales were related to clinicians’ use after receiving training in a cognitive-behavioral therapy model for children with anxiety.

While few studies overall have been conducted specifically with providers serving child welfare-involved clients, the available studies also have mixed findings. In a study investigating use of Parent-Child Interaction Therapy by clinicians serving maltreated children, Nelson, Shanley, Funderburk, and Bard (2012) found no significant relationship between EBPAS scores and the number of cases enrolled for consultation, which was used as an indicator of use of the intervention. However, some dimensions of attitudes were related to participation in the training. Practitioners who scored higher on the Divergence subscale, meaning they gave more weight to clinical experience than to EBPs, attended fewer phone consultations. Additionally, higher scores on the Openness subscale were related to greater attendance at online consultations.

In a large sample of clinicians (N = 262) serving maltreated children, Allen, Gharagozloo, and Johnson (2012) also found no significant relationship between attitudes toward EBPs and reported use of these practices. In this study, the Openness subscale was related to aspects of training, with openness associated with clinicians’ ability to correctly identify empirically supported practices. Because the ability to correctly identify EBPs was linked to training and use of EBPs, the authors suggested that there might be an indirect relationship between openness to EBPs and use through increased knowledge of such practices. Openness to EBPs was also found to relate to use of evidence-based practices in a cross-sectional study that included 1273 frontline workers from 55 programs at a child and family serving agency providing services across several areas, including child welfare (Patterson Silver Wolf, Dulmus, & Maguin, 2013). At the program level, openness scores were associated with the number of empirically supported treatments used in different programs.

Overall, the findings from the limited number of available studies conducted with providers serving child welfare-involved clients are similar to those conducted with providers in other areas. Findings suggest that openness might be associated with use of evidence-based interventions (Nelson et al., 2012; Patterson Silver Wolf et al., 2013). Openness might also facilitate greater participation in implementation support following training (Nelson et al., 2012). However, the lack of direct associations between overall attitudes and use, together with inconsistencies in associations between openness and use, raises questions about the role of individual attitudes in influencing use of EBPs. In addition, the cross-sectional design of many studies limits interpretation of some findings indicating an association. In a small randomized study, exposure to EBP materials increased positive attitudes over time (Leathers & Strand, 2012), raising the question of the directionality of effects in studies using cross-sectional data. Cross-sectional designs or measures of attitudes at a single time point preclude assessing whether attitudes might change or have distinct effects at a particular point in the implementation process (e.g., prior to vs. after training) or on different aspects of implementation, such as fidelity of use (Beidas et al., 2014).

2. Research questions

This study explored the potential for attitudes about evidence-based practices to affect uptake of a novel intervention after attending training on an evidence-based intervention in a child welfare agency. We chose to examine the potential effects of overall attitudes and openness based on previous findings suggesting that openness to new practices might be a particularly influential aspect of attitudes to be considered in implementation efforts. In the present study, openness also was expected to be more relevant than factors such as responsiveness to requirements because providers were not required by their agency to use the intervention after the training. Given the choice to initiate use of a new intervention provided in this study, openness to innovations might be particularly relevant for use. By examining changes in attitudes over four time periods, we were also able to explore the potential for greater exposure to an evidence-based intervention to affect attitudes over time. Because of inconsistencies in the literature, we did not define specific hypotheses but instead sought to address these research questions:

  1. Are attitudes about evidence-based practices and, in particular, openness to EBPs associated with greater use of an evidence-based practice following training?

  2. Are overall attitudes and openness related to participation in implementation support activities following training?

  3. Do staff who have greater exposure to an evidence-based practice after training have more positive attitudes over time?

3. Methods

This study used data from a longitudinal study with an experimental design to address the research questions. A full description of the study’s methods is available in a previous publication (Leathers, Spielfogel, Blakey, Christian, & Atkins, 2016). Briefly, 57 caseworkers and therapists (“providers”) from a single large urban child welfare agency in the United States were randomly assigned to one of two training conditions: a “training as usual” control group (n = 26) that received only training and an experimental group (n = 31) that received training enhanced by post-training support. Fifty-five providers completed the attitude scales at the time points analyzed in this study, so the sample comprises these 55 providers. Randomization was completed using a randomization list generated by a statistical program for all eligible providers at the start of the study, with the few providers hired after the initial randomization assuming the assignment of the provider they replaced. Randomization occurred prior to consent, and all but one provider who had an eligible child on his or her caseload consented to participate (98%). Both groups received a 16-h, skills-based training in a behavioral parent training program for use with foster parents. After training, providers received a manual for the intervention and a DVD to use with foster parents to demonstrate the behavioral intervention.
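As a minimal sketch of the assignment procedure described above (not the authors’ actual code; the statistical program used is not named in the text), the following illustrates generating a randomization list for all eligible providers at the outset and assigning later hires the condition of the provider they replaced:

```python
# Hypothetical sketch of the randomization procedure; the statistical
# program actually used is not specified in the text.
import random

def make_randomization_list(provider_ids, seed=2011):
    """Randomly order providers and assign them to conditions."""
    ids = list(provider_ids)
    random.Random(seed).shuffle(ids)
    # Alternate assignment down the shuffled list (illustrative only;
    # any pre-generated list of conditions would work the same way).
    return {pid: ("enhanced" if i % 2 == 0 else "control")
            for i, pid in enumerate(ids)}

assignments = make_randomization_list(range(1, 58))  # 57 eligible providers

def assign_replacement(assignments, new_id, replaced_id):
    """A provider hired after the initial randomization assumes the
    assignment of the provider he or she replaced."""
    assignments[new_id] = assignments[replaced_id]
```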

The experimental group (n = 30) was also exposed to contact with a “change agent” who interacted with providers in primarily informal, brief in-person interactions in which she discussed the parenting intervention and endorsed its use. By design, the change agent, a social worker employed by the research project, had no authority to mandate or require use of the intervention, but instead attempted to remind providers about the intervention, describe her own success using it with foster parents, and endorse its use. The change agent interaction condition significantly increased use of the intervention [name deleted to maintain the integrity of the review process], providing the opportunity to examine whether attitudes played a role in determining the effectiveness of this implementation strategy (i.e., would positive attitudes facilitate use among those in the experimental change agent group), as well as whether those with more positive attitudes would be more likely to participate in implementation support by having greater contact with the change agent. Additionally, the longitudinal design of the study allowed for the assessment of change in attitudes over time for all providers and particularly among those in the experimental group.

Providers’ attitudes about EBPs and their self-reported use of the intervention components were measured at up to five time points at approximately three-month intervals over a 14-month period. All interviews conducted after the baseline interview (four time points) are included in the analyses addressing the first two research questions, and all time points are included in the third set of analyses addressing the last question. The baseline interview is excluded from the first two sets of analyses because providers had not yet been introduced to the intervention at baseline and none reported any use at baseline. A total of 188 provider interviews were completed across all time points. The study received Institutional Review Board (IRB) approval from the authors’ university as well as from the state child welfare agency.

3.1. Measures

3.1.1. Evidence-based practice attitude scale (EBPAS)

The EBPAS is a 15-item questionnaire designed to measure practitioner attitudes toward EBPs (Aarons, 2004; Aarons et al., 2010). It provides an overall score indicating how positive a provider’s attitudes are about EBPs and four subscales indicating openness to use of EBPs, the appeal of EBPs, responsiveness to requirements for use, and divergence or unfavorable attitudes toward EBPs (which are reverse scored). Each item is rated on a 5-point anchored scale ranging from agreeing “not at all” to “to a very great extent.” A higher score corresponds to more positive attitudes toward evidence-based practice. Its internal consistency is adequate to good, with Cronbach’s alpha coefficients ranging from 0.59 to 0.90 across subscales in previous research (Aarons & Sawitzky, 2006). This study used the total scale that includes all 15 items (Total EBPAS) and the openness subscale (Openness), which consists of four items asking about attitudes toward using new interventions, treatment manuals, research-based interventions, and interventions that are very different from what the practitioner is used to. The Cronbach’s alphas in the present study were 0.75 for the total scale and 0.77 for the openness subscale, indicating good internal reliability.
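A minimal scoring sketch follows. The item-to-subscale assignments and the reverse-scoring step shown are illustrative placeholders, not the published EBPAS scoring key; Cronbach’s alpha is computed with its standard formula.

```python
# Illustrative EBPAS scoring on simulated data; the item numbers assigned
# to each subscale below are placeholders, NOT the published scoring key.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# 55 providers x 15 items, each rated 0 ("not at all") to 4
# ("to a very great extent").
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(0, 5, size=(55, 15)),
                     columns=[f"item{i}" for i in range(1, 16)])

divergence = ["item8", "item9", "item10", "item11"]   # assumed items
items[divergence] = 4 - items[divergence]             # reverse score
openness = ["item1", "item2", "item3", "item4"]       # assumed items

total_ebpas = items.mean(axis=1)            # overall attitude score (0-4)
openness_score = items[openness].mean(axis=1)
print(cronbach_alpha(items), cronbach_alpha(items[openness]))
```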

3.1.2. Use of intervention components

To measure level of use of the intervention, a series of questions asked providers if they had presented specific parent management training components to enrolled foster parents in the past 30 days. To reduce the over-reporting of use of EBPs that has been documented in studies comparing self-report with videotaped therapy sessions (see Hurlburt, Garland, Nguyen, & Brookman-Frazee, 2010), only use that involved a manual, DVD, or handout to teach a particular skill (e.g., use of a behavior chart, timeout) was counted. For example, a question asked, “How many times in the past 30 days did you talk to the foster parent about using incentives or rewards?” This question was followed up with “How many times did you use a manual to describe this strategy?” If a provider had discussed rewards or another skill three times but used a manual only once, this was counted as one time. Presentations of multiple skills using the manual (e.g., behavior charts, incentives/rewards, and timeout) were summed. While this strategy results in a conservative measure of use, as it does not cover the entire time between interviews and does not count use that might have occurred without the materials, it created a more specific measure of use of an EBP than would be obtained from a general report of use. This measurement strategy resulted in just 35% of providers being classified as having used the EBP materials at any point in the follow-up period. In contrast, providers’ reports of any use (i.e., counting use reported with or without the materials) were much higher. For example, they reported discussing incentives for positive behavior with 47% of foster parents at baseline and 63% at Time 2; behavior charts with 9% at baseline and 24% at Time 2; and effective requests with 28% at baseline and 40% at Time 2.
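The counting rule can be summarized in a short sketch (field names are illustrative assumptions): a skill presentation counts only when the manual, DVD, or handout was actually used, and counts are summed across skills.

```python
# Conservative use count: only material-supported presentations count.
def count_material_supported_use(skill_reports):
    """skill_reports: one dict per skill for the past 30 days, e.g.
    {"skill": "rewards", "times_discussed": 3, "times_with_materials": 1}.
    """
    total = 0
    for report in skill_reports:
        # A skill cannot be counted more times than it was discussed.
        total += min(report["times_with_materials"],
                     report["times_discussed"])
    return total

# The example from the text: rewards discussed three times but the manual
# used only once counts as one; counts are then summed across skills.
print(count_material_supported_use([
    {"skill": "rewards", "times_discussed": 3, "times_with_materials": 1},
    {"skill": "behavior chart", "times_discussed": 2, "times_with_materials": 2},
    {"skill": "timeout", "times_discussed": 1, "times_with_materials": 0},
]))  # -> 3
```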

3.1.3. Change agent interaction

Total minutes interacting with the change agent was measured by summing the duration of interactions recorded by the change agent over a six-month period following the provider training. Interactions were recorded throughout each day so that a comprehensive record of time interacting with each provider was maintained. The change agent interaction variable was calculated for each three-month period prior to the interviews at time two (i.e., from immediately after the training until the second interview three months later), time three, and time four. Interactions occurred almost exclusively with providers in the enhanced condition, as expected given the study design. Only interactions that included content related to the intervention were included in this count. Because a few providers reported a high level of contact, the data were skewed; given this skew, and because previous results indicated that having at least 30 min of contact supported significantly more use of the intervention while less contact did not (Leathers et al., 2016), a dichotomous variable indicating 30 min or more of contact was created.
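A sketch of how this predictor could be derived (column names are assumptions, not the study’s actual variable names): intervention-related minutes are summed within each three-month window and flagged when they reach 30 minutes.

```python
# Summing intervention-related change agent minutes per provider within
# each 3-month window and dichotomizing at 30 minutes. Column names are
# illustrative only.
import pandas as pd

log = pd.DataFrame({
    "provider_id":        [1, 1, 1, 2, 2],
    "window":             [2, 2, 3, 2, 2],   # time point the window precedes
    "minutes":            [20, 15, 10, 5, 40],
    "about_intervention": [True, True, True, True, False],
})

minutes = (log[log["about_intervention"]]
           .groupby(["provider_id", "window"])["minutes"]
           .sum())
contact_30min = (minutes >= 30).astype(int)   # 1 = at least 30 min of contact
print(contact_30min)
```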

3.2. Data analysis

The association between attitudes (overall and openness) and intervention use during the intervention period (after the baseline interviews) was assessed using zero-inflated Poisson (ZIP) models with random effects, a type of mixed effects regression model (Hedeker & Gibbons, 2006). A ZIP model is appropriate given the high number of providers (65%) who reported no use of the intervention (i.e., “zero inflation”), as well as the correlations of individual providers’ responses across time, which must be modeled with random effects to accurately estimate standard errors. Additionally, mixed regression models are robust to missing data under more conditions than previous techniques and can be used with subjects who participated at varying time points (Gibbons, Hedeker, & DuToit, 2010). In this study, two ZIP analyses assessed the associations between use and the EBPAS total score and the openness subscore by regressing intervention use on attitudes (EBPAS total score or openness) while controlling for group assignment and demographic variables (race, sex, age, type of provider, and time in position). The change in −2 log likelihood in nested models was compared to the chi-square distribution to identify random effects that significantly improved the models’ fit. Both final ZIP models included two random effects (random intercepts for both the Poisson and zero inflation parts). Control variables that were nonsignificant and did not affect the results were deleted from the models. These analyses included only data from follow-up interviews, after the baseline interview (a total of 147 observations), as no use occurred prior to the completion of the training.
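For readers unfamiliar with ZIP models, the simplified sketch below fits one with statsmodels on simulated data. Note that statsmodels’ ZeroInflatedPoisson does not support the random intercepts reported here, so this version ignores the clustering of repeated observations within providers; the variable layout is otherwise analogous.

```python
# Simplified ZIP regression of intervention use on attitudes and group
# assignment (simulated data). Unlike the models reported in this paper,
# statsmodels' ZeroInflatedPoisson has no random effects, so repeated
# measures within providers are ignored here.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 147                                   # follow-up observations
enhanced = rng.integers(0, 2, n)          # change agent condition
ebpas = rng.normal(2.9, 0.4, n)           # EBPAS total score
X = sm.add_constant(np.column_stack([enhanced, ebpas]))

# Zero-inflated counts: a latent "never used" class plus Poisson counts.
never_use = rng.random(n) < 0.5
y = np.where(never_use, 0, rng.poisson(1.5, n))

model = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit")
result = model.fit(disp=False)
print(result.summary())
# Nested models can be compared by the change in -2 log likelihood
# against a chi-square distribution, as described above.
```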

The association between attitudes and participation in implementation support after training was assessed using logistic mixed effect regression models. Two models predicted change agent interaction with attitudes (EBPAS total score or openness) while controlling for group assignment and demographic variables (race, sex, age, type of provider, and time in position). These models included a random intercept to model clustering within individuals across time (a total of 146 observations). These analyses also excluded baseline data, since the change agent did not begin implementation support until after the baseline data had been collected and the training was completed.
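A sketch of a random-intercept logistic model follows. Variable names are assumptions, and statsmodels estimates this kind of model by variational Bayes (BinomialBayesMixedGLM) rather than the likelihood-based approach reported here, so this is an approximation of the analysis, not a reproduction.

```python
# Random-intercept logistic model for 30+ minutes of change agent contact,
# fit by variational Bayes (an approximation of the paper's mixed effect
# logistic regressions). All variable names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
providers = np.repeat(np.arange(55), 3)[:146]   # ~3 interviews per provider
enh = rng.integers(0, 2, 55)
male = rng.integers(0, 2, 55)
df = pd.DataFrame({
    "provider_id": providers,
    "enhanced": enh[providers],
    "openness": rng.normal(2.8, 0.6, 146),
    "male": male[providers],
    "contact_30min": rng.integers(0, 2, 146),
})

model = BinomialBayesMixedGLM.from_formula(
    "contact_30min ~ enhanced + openness + male",
    {"provider": "0 + C(provider_id)"},   # random intercept per provider
    df)
result = model.fit_vb()
print(result.summary())
# np.exp(coefficient) gives the odds ratio, i.e., Exp(B) in Tables 4 and 5.
```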

Changes in attitudes during the course of the study were assessed using mixed effects linear regression analyses. Overall attitudes and then the openness subscore were regressed on time, the experimental change agent condition, and an interaction term for time and the change agent condition. A change in attitudes across the duration of the study as a result of greater exposure to the intervention would be indicated by a significant time × experimental condition interaction coefficient. Random effects modeling correlations in individuals’ attitudes over time were included based on changes in −2 log likelihood scores. In all models, unstructured covariance matrices were used for the random effects. Variables potentially affecting associations, including provider gender, age, race, and type (therapist or caseworker), were included initially and then deleted if they were nonsignificant and did not affect the results.
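The attitude-change model can be sketched with statsmodels’ MixedLM on simulated long-format data (variable names assumed); the coefficient on the time × condition interaction is the test of interest.

```python
# Linear mixed model for change in attitudes over time: random intercept
# per provider with a time x condition interaction (simulated data;
# variable names are illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
providers = np.repeat(np.arange(55), 5)           # up to five time points
df = pd.DataFrame({
    "provider_id": providers,
    "time": np.tile(np.arange(5), 55),
    "enhanced": rng.integers(0, 2, 55)[providers],
    "ebpas_total": rng.normal(2.9, 0.4, 275),
})

model = smf.mixedlm("ebpas_total ~ time * enhanced", df,
                    groups=df["provider_id"])
result = model.fit()
print(result.summary())
# A significant time:enhanced coefficient would indicate that attitudes
# changed over time in the enhanced group relative to the control group.
```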

4. Results

4.1. Participants

Providers were predominantly female, with equal percentages African American and white. Over half reported graduate degrees, and providers had an average of 3.2 children enrolled in the study (Table 1). There were no significant differences in demographic characteristics across the control and enhanced groups. Mean scores for all subscales of the EBPAS are provided for the purpose of comparison with other studies. The EBPAS scores, which can range from 0 to 4, indicate positive attitudes about EBPs, with the average about one point below the highest rating of “to a very great extent.” The Openness subscale was somewhat lower than the other subscales (Table 1), and Requirements was rated the highest (M = 3.11, SD = 0.81). This difference was significant [t(55) = 3.59, p < 0.001], suggesting that providers perceived their use of evidence-based interventions to be influenced more by requirements to do so than by openness or receptivity to these types of interventions. The difference between Requirements and Divergence was also significant [t(55) = 2.15, p < 0.05].

Table 1.

Provider demographics and attitudes about evidence-based practices (N = 55).

Variable M SD %
Age 34.75 8.92
Female 80
Race
 African American 46
 White 46
 Asian 5
 Other 2
Degree
 Bachelors 36
 Masters 57
 Not reported 7
Position
 Case manager 73
 Therapist 27
Years worked in child welfare 8.39 8.31
Years in current position 2.60 2.62
EBPAS total score 2.92 0.42
EBPAS openness 2.76 0.63
EBPAS requirements 3.11 0.81
EBPAS divergence 2.86 0.68
EBPAS appeal 3.01 0.60

Note. Age was not reported for one provider.
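The subscale contrast reported above is a dependent-samples t test; a minimal sketch with scipy on simulated ratings is:

```python
# Paired t test contrasting the Requirements and Openness subscale means
# (simulated ratings; the reported result was t(55) = 3.59, p < 0.001).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
requirements = np.clip(rng.normal(3.11, 0.81, 55), 0, 4)
openness = np.clip(rng.normal(2.76, 0.63, 55), 0, 4)

t_stat, p_value = ttest_rel(requirements, openness)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```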

4.2. Question 1: Are attitudes about evidence-based practices and, in particular, openness to EBPs associated with greater use of an evidence-based practice following training?

Neither the total EBPAS score nor openness were significantly related to use of the intervention components after training. The ZIP analysis provides estimates of both amount of use, from the Poisson portion of the model, and zero inflation to model an excess of zeros in the data (i.e., no use of the intervention reported for the past 30 days). In these analyses, placement in the enhanced change agent interaction group significantly predicted greater use, as reported in a previous publication (Leathers et al., 2016). However, coefficients for overall attitudes were not significantly associated with intervention use in either part of the ZIP model (Table 2). Similarly, the coefficients for openness to EBPs were nonsignificant in both parts of the models, indicating no association with intervention use (Table 3).

Table 2.

Zero-inflated Poisson mixed effect regression model predicting intervention use with overall attitudes toward EBPs.

B SE p
Poisson estimates
 Intercept 1.23 0.88 0.17
 Enhanced group 1.29 0.44 <0.01
 Total EBPAS 0.58 1.99 0.77
Inflated zero estimates
 Intercept 0.58 0.67 0.77
 Enhanced group −0.92 0.61 0.14
 Total EBPAS 0.55 0.67 0.42

Note. N = 55 providers with a total of 147 observations collected over up to four time points. Demographic variables including race, sex, age, time employed, and position type were deleted from the model to simplify results as they were nonsignificant and did not affect results. Variance components in the model included random intercepts for both the Poisson and inflated zero portions of the model (Poisson, σ2a0 = 0.64, SE = 0.29, p < 0.05 and inflated zero, σ2a0 = 1.45, SE = 1.15, p = 0.21).

Table 3.

Zero-inflated Poisson mixed effect regression model predicting intervention use with openness to EBPs.

B SE p
Poisson estimates
 Intercept 0.53 0.51 0.30
 Enhanced group 1.19 0.44 <0.01
 Openness 0.18 0.16 0.24
Inflated zero estimates
 Intercept 0.71 1.3 0.59
 Enhanced group −1.03 0.64 0.12
 Openness 0.52 0.48 0.24

Note. N = 55 providers with a total of 147 observations collected over up to four time points. Demographic variables including race, sex, age, time employed, and position type were deleted from the model to simplify results as they were nonsignificant and did not affect results. Variance components in the model included random intercepts for both the Poisson and inflated zero portions of the model (Poisson, σ2a0 = 0.64, SE = 0.29, p < 0.05 and inflated zero, σ2a0 = 1.45, SE = 1.15, p = 0.21).

4.3. Question 2: Are overall attitudes and openness to EBPs related to participation in implementation support following training (i.e., change agent interactions)?

Mixed effect logistic regression analysis results indicate that overall attitudes toward EBPs were not related to whether a provider interacted with the change agent for 30 min or more following the training (Table 4). Placement in the experimental group did significantly increase odds of interaction with the change agent, which is expected given that the change agent sought to interact only with providers in the enhanced group about the intervention.

Table 4.

Logistic mixed effect regression model predicting change agent interactions with attitudes toward EBPs.

B SE Exp(B) p
Enhanced group 2.93 1.15 19.30 0.01
Total EBPAS 1.03 1.00 2.80 0.30
Male 1.32 1.58 3.74 0.41
Intercept −7.74

Note. N = 55 providers with a total of 146 observations collected over up to four time points. Demographic variables including race, age, time employed, and position type were deleted from the model to simplify results as they were nonsignificant and did not affect results. Model included a random intercept variance component (σ2a0 = 7.17, SE = 4.45, p = 0.11).

In contrast, openness to EBPs was significantly related to change agent interaction for 30 min or more, with greater openness associated with significantly increased odds that a provider would interact with the change agent for a cumulative thirty minutes or more in the previous three months (Table 5). After controlling for group assignment, a one-point increase in openness increased the odds of change agent interaction by a factor of 10.7 [Exp(2.37) = 10.7].

Table 5.

Logistic mixed effect regression model predicting change agent interactions with open attitudes toward EBPs.

B SE Exp(B) p
Enhanced group 3.01 1.16 20.29 0.01
Openness to EBPs 2.37 0.93 10.70 0.01
Male 2.17 1.67 8.76 0.20
Intercept −11.67

Note. N = 55 providers with a total of 146 observations collected over up to four time points. Demographic variables including race, age, time employed, and position type were deleted from the model to simplify results as they were nonsignificant and did not affect results. Model included a random intercept variance component (σ2a0 = 6.44, SE = 3.98, p = 0.11).

4.4. Question 3: Do staff who have greater exposure to an evidence-based practice after training have more positive attitudes over time?

Results from the mixed regression models indicate that attitudes did not change during the course of the study, either among all providers or among those assigned to the change agent condition that provided greater exposure to the intervention. In both models, the coefficients for time, which would indicate a linear increase or decrease in positive attitudes over time, were nonsignificant (B = −0.01, SE = 0.02, p = 0.77 and B = −0.01, SE = 0.04, p = 0.83 for overall attitudes and openness, respectively). Additionally, the enhanced condition × time interaction terms were nonsignificant, indicating that a linear change did not occur among those in the enhanced group relative to the control group (B = −0.01, SE = 0.03, p = 0.98 and B = 0.01, SE = 0.05, p = 0.94 for overall attitudes and openness, respectively).

Gender was the only control variable significantly related to attitudes. Men had significantly more negative attitudes overall and less openness to EBPs relative to women (B = −0.37, SE = 0.13, p < 0.01 and B = −0.51, SE = 0.19, p < 0.01 for overall attitudes and openness, respectively). Other control variables, including race, time in position, time worked in child welfare, age, and type of provider (caseworker or therapist), were not associated with attitudes and were removed from the models to simplify the results.

5. Discussion

This study’s findings suggest that the attitudes of child welfare staff about evidence-based practices are not directly related to use of an evidence-based practice following training. Child welfare providers with more positive, open attitudes about EBPs were no more likely to use the EBP after training than those who were less positive. This finding is consistent with several other studies conducted in either child welfare or mental health settings that did not detect a relationship between attitudes and use of EBPs (Allen et al., 2012; Beidas et al., 2014; Higa-McMillan et al., 2015; Patterson Silver Wolf, 2015). However, consistent with our second research question, openness to EBPs was related to participation in post-training implementation support. Staff with greater openness to EBPs were significantly more likely to receive implementation support by interacting for 30 min or more with a change agent who encouraged use of the intervention and offered informal consultation to providers who chose to interact with her. This finding is consistent with another study that also reported that openness is related to participation in support activities following training (Nelson et al., 2012). While no direct effect was found for openness on use of the EBP, change agent interaction at this level was related to greater use of the intervention, suggesting that openness might indirectly support use.

This study’s randomization of providers to a condition providing additional support and exposure to an intervention after training provided the opportunity to examine the effect of increased exposure to EBPs on attitudes. While attitudes have been hypothesized to affect use, greater exposure and use might also result in more positive attitudes as providers become more familiar with actual practices and materials (Leathers & Strand, 2012; Lim, Nakamura, Higa-McMillan, Shimabukuro, & Slavin, 2012). Consistent with this notion, a previous study reported that caseworkers who had previous exposure to a parent behavioral skills training had more positive attitudes about evidence-based practices than those who had not been exposed to such training (Lopez, Osterberg, Jensen-Doss, & Rae, 2011). However, findings from this study did not support this hypothesis: the relatively positive attitudes of providers did not significantly change during the course of the study, and the enhanced exposure provided to those randomized to the change agent condition had no effect on attitudes. The discrepancy between these findings and earlier studies could be related to this study’s randomization of providers to greater exposure, which provides a stronger test of the effect of increased exposure. The positive associations in previous studies could reflect providers’ decisions to seek out additional exposure by attending trainings or reviewing materials based on their greater openness or interest in EBPs.

Although this study focused on overall attitudes and openness to EBPs based on previous findings indicating that openness may be the most salient aspect of attitudes, descriptive findings from this study suggest that different dimensions of attitudes might be relevant for child welfare staff than for providers in other types of service settings. Specifically, providers rated requirements as more influential than openness in determining whether they would use an EBP. In their study focused on caseworkers, Lopez et al. (2011) had a similar finding, with caseworkers reporting they would be influenced more by a requirement to use EBPs than by the appeal of a specific EBP. In contrast, in our study, despite the lack of agency requirements for use, some providers who interacted with the change agent did initiate use of the intervention [name deleted to maintain the integrity of the review process], indicating that factors other than requirements can be predictive of use. We did not expect responsiveness to requirements to be salient in our study, as there was no requirement to use the intervention. However, we also found that the change agent condition did not result in widespread or sustained use. In particular, after the change agent left the agency, use dropped off, pointing to the importance of ongoing, more comprehensive implementation supports for successful implementation. These supports are likely to include administrative supports and expectations for use.

A greater emphasis on requirements in decision-making about adoption of an evidence-based intervention may be related to variation in agency and system-wide influences on practices. Child welfare services are provided in a context of low resources and high scrutiny of practices and outcomes by court-appointed lawyers and child and family courts that review cases regularly. In this context, the worker’s role and services may be less flexible (Parada, Barnoff, & Coleman, 2007; Smith & Donovan, 2003). An agency’s culture and climate are also related to these factors, potentially leading to higher levels of resistance to innovation, possibly due to low resources and perceived risks associated with change. Results from a study contrasting the culture and climate perceived by 118 providers working across programs in the same large child welfare agency that participated in this study are consistent with this hypothesis (Spielfogel, Leathers, & Christian, 2016). Relative to mental health providers, child welfare providers reported higher levels of resistance to change as well as higher levels of functionality, characterized by high role clarity and worker cooperation. These factors point to a service system that provides some protection from threats such as legal risks and high worker turnover by creating service structures with more rigid expectations for practice and defined roles. Within this type of system, requirements to shift practices might be more influential given the stronger controls imposed on practices than in other service environments.

The potential to capitalize on the strengths of child welfare in functionality may be an important area for future research. Functionality is associated with high role clarity, goal-focused practices, and structured practice guidelines. New practices that do not leverage these characteristics are likely to have low uptake in a stressed service system, while administrative support and training, along with selection of evidence-based practices that are integral to providers’ roles and program goals, might facilitate use of a new practice. These characteristics are likely to affect staff motivation to use the practice; as one child welfare worker reported in a qualitative interview, “Commitment from workers will only happen if workers understand the positive impacts on their jobs and their daily work with families” (McCrae, Scannapieco, Leake, Potter, & Menefee, 2014, p. 33). At times, successful implementation might require restructuring services with greater specialization in providers’ roles to closely align the intervention targets with the program’s goals and provide clear administrative structures for ongoing support. For example, this might involve creating a specific unit with a supervisor and providers trained to provide an in-home evidence-based intervention, rather than expecting existing staff to be trained to incorporate the new practice into home visits, which might be inconsistent with their current role and home visiting tasks. By supporting use of the new practice as part of a defined role, competing role demands would be reduced and administrative supports would be increased.

The results from this study add to findings across several studies suggesting that overall attitudes about evidence-based practices do not play a central role in determining the success of an implementation plan. While this study only examined attitudes about EBPs in general, rather than perceptions of a specific intervention, a large study focused on implementation across multiple sites also reported that “buy-in,” or awareness and support for a particular intervention, had no association with implementation (McCrae et al., 2014). Research focused on barriers to using EBPs points to multiple barriers that were not related to attitudes, leading to the conclusion that future EBP implementation should focus on reducing barriers as opposed to improving attitudes (Nelson et al., 2012).

In particular, focusing on contextual barriers is an important area for future study. Using a systems-contextual approach, Beidas and Kendall (2010) reviewed implementation research in therapeutic service settings and found that implementation strategies that incorporate therapist variables, organizational support, the quality of the training program, and client variables effect greater change in therapist behavior and client outcomes. Implementation research focused on identifying strategies to optimize organizational support is needed. Attitudes might interact with these factors as well as with elements of the EBP, as suggested by associations with previous exposure to evidence-based practices and organizational capacity to support EBPs, but findings to date suggest that contextual factors are likely to be more influential drivers of implementation (Aarons et al., 2009).

5.1. Limitations

While this study’s design is a clear strength that provided the opportunity to better understand the relationship between attitudes about EBPs and their use following training, its findings should be considered in the context of several limitations. A primary limitation is the study’s sample size. Despite the use of repeated measures over time, with a sample of 55 providers, small effects would not be detected, and more complex models including indirect effects could not be estimated. Complex relationships, such as interactions between different aspects of attitudes and provider characteristics, might reveal a stronger role for attitudes for subgroups of providers in a larger sample that allowed for these types of analyses.

The generalizability of this study’s findings is also uncertain given the study’s focus on a single large child welfare agency in an urban Midwestern city. This agency served over 500 children and youth in substitute care at the time it was conducted, a significant proportion of all children in the region. However, results in smaller agencies, in particular, or in states providing child welfare services under different administrations and policies could be quite different.

Finally, this study examined providers’ use of an intervention after training to address a very specific research question related to implementation support: the extent that “change agent” types of interaction affect use of the intervention. While it can be argued that this focus provided a unique opportunity to examine the extent that attitudes are related to use and exposure to an intervention, the study did not involve a full implementation strategy, which would increase incentives to use the intervention and might lead to a stronger role for attitudes to determine extent of use. However, if this were the case, it would seem likely that the effect would also be detected in this study, which did not impose any requirements for use; the lack of a requirement to use the intervention could allow for a greater role for attitudes to influence providers’ choices.

5.2. Conclusion

This study contributes to the limited research focused on the attitudes of child welfare staff about evidence-based practices. As in previous research, attitudes of providers working in child welfare agencies were positive relative to those in other settings, but attitudes had no direct relationship with use of an evidence-based intervention after training. As in one other study, having a more open attitude about EBPs was related to the extent that providers chose to receive support in their use of the intervention after training, suggesting that openness might facilitate learning an EBP, potentially resulting in greater expertise and use over time. However, findings from this study and others do not support development of specific implementation strategies to improve attitudes about evidence-based interventions with the goal of dramatically increasing use of EBPs. Future studies that include analysis of a broad range of individual and contextual factors are needed to better understand optimal methods for implementation in child welfare systems.

Acknowledgments

This research was supported by a grant from the National Institute of Mental Health (RC1 MH088732). The views expressed in this paper solely reflect the views of the authors and do not necessarily reflect the views of the National Institutes of Health.

References

  1. Aarons GA (2004). Mental health provider attitudes toward adoption of evidence-based practice: The evidence-based practice attitude scale (EBPAS). Mental Health Services Research, 6, 61–74.
  2. Aarons GA, & Sawitzky AC (2006). Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health, 33, 289–301.
  3. Aarons GA, & Palinkas LA (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34, 411–419.
  4. Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, & Chaffin MJ (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77, 270–280.
  5. Aarons G, Glisson C, Hoagwood K, Kelleher K, Landsverk J, & Cafri G (2010). Psychometric properties and U.S. national norms of the evidence-based practice attitude scale (EBPAS). Psychological Assessment, 22, 356–365.
  6. Addis ME, & Krasnow AD (2000). A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology, 68, 331–339.
  7. Allen B, Gharagozloo L, & Johnson JC (2012). Clinician knowledge and utilization of empirically-supported treatments for maltreated children. Child Maltreatment, 17, 11–21.
  8. Barwick MA, Boydell KM, Stasiulis E, Ferguson HB, Blase K, & Fixsen D (2008). Research utilization among children’s mental health providers. Implementation Science, 3, 3–19.
  9. Beidas RS, & Kendall PC (2010). Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice, 17, 1–30.
  10. Beidas RS, Edmunds J, Ditty M, Watkins J, Walsh L, Marcus S, & Kendall P (2014). Are inner context factors related to implementation outcomes in cognitive-behavioral therapy for youth anxiety? Administration and Policy in Mental Health and Mental Health Services Research, 41, 788–799.
  11. Borntrager C, Chorpita B, Higa-McMillan C, & Weisz J (2009). Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services, 60, 677–681.
  12. Casillas KL, Fauchier A, Derkash BT, & Garrido EF (2016). Implementation of evidence-based home visiting programs aimed at reducing child maltreatment: A meta-analytic review. Child Abuse & Neglect, 53, 64–80.
  13. Chagnon F, Pouliot L, Malo C, Gervais M, & Pigeon M (2010). Comparison of determinants of research knowledge utilization by practitioners and administrators in the field of child and family social services. Implementation Science, 5, 41–52.
  14. Chaffin M, & Friedrich B (2004). Evidence-based treatments in child abuse and neglect. Children and Youth Services Review, 26, 1097–1113.
  15. Collins M, Kim S, & Amodeo M (2010). Empirical studies of child welfare training effectiveness: Methods and outcomes. Child and Adolescent Social Work Journal, 27, 41–62.
  16. Cook J, Biyanova T, & Coyne J (2009). Barriers to adoption of new treatments: An internet study of practicing community psychotherapists. Administration and Policy in Mental Health, 36, 83–90.
  17. Czincz J, & Romano E (2013). Childhood sexual abuse: Community-based treatment practices and predictors of use of evidence-based practices. Child and Adolescent Mental Health, 18, 240–246.
  18. Fixsen DL, Naoom SF, Blase KA, Friedman RM, & Wallace F (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).
  19. Gibbons RD, Hedeker D, & DuToit S (2010). Advances in analysis of longitudinal data. Annual Review of Clinical Psychology, 6, 79–107.
  20. Gray M, Joy E, Plath D, & Webb SA (2013a). What supports and impedes evidence-based practice implementation? A survey of Australian social workers. British Journal of Social Work, 45, 667–684.
  21. Gray M, Joy E, Plath D, & Webb SA (2013b). Implementing evidence-based practice: A review of the empirical research literature. Research on Social Work Practice, 23, 157–166.
  22. Guerrero E, He A, Kim A, & Aarons G (2014). Organizational implementation of evidence-based substance abuse treatment in racial and ethnic minority communities. Administration and Policy in Mental Health and Mental Health Services Research, 41, 737–749.
  23. Hedeker D, & Gibbons RD (2006). Longitudinal data analysis. Wiley Series in Probability and Statistics. Hoboken, NJ: Wiley-Interscience.
  24. Higa-McMillan CK, Nakamura BJ, Morris A, Jackson DS, & Slavin LA (2015). Predictors of use of evidence-based practices for children and adolescents in usual care. Administration and Policy in Mental Health and Mental Health Services Research, 42, 373–383.
  25. Horwitz SM, Hurlburt MS, Goldhaber-Fiebert JD, Palinkas LA, Rolls-Reutz J, Zhang J, … Landsverk J (2014). Exploration and adoption of evidence-based practice by US child welfare agencies. Children and Youth Services Review, 39, 147–152.
  26. Hurlburt MS, Garland AF, Nguyen K, & Brookman-Frazee L (2010). Child and family therapy process: Concordance of therapist and observational perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 37, 230–244.
  27. Leathers SJ, & Strand TC (2012). Increasing access to evidence-based practices and knowledge and attitudes: A pilot study. Research on Social Work Practice, 23, 669–679.
  28. Leathers SJ, Spielfogel JE, Blakey J, Christian E, & Atkins MS (2016). The effect of a change agent on use of evidence-based mental health practices. Administration and Policy in Mental Health and Mental Health Services Research, 43, 768–782.
  29. Lim A, Nakamura BJ, Higa-McMillan CK, Shimabukuro S, & Slavin L (2012). Effects of workshop trainings on evidence-based practice knowledge and attitudes among youth community mental health providers. Behaviour Research and Therapy, 50, 397–406.
  30. Lopez MA, Osterberg LD, Jensen-Doss A, & Rae WA (2011). Effects of workshop training for providers under mandated use of an evidence-based practice. Administration and Policy in Mental Health and Mental Health Services Research, 38, 301–312.
  31. McCrae JS, Scannapieco M, Leake R, Potter CC, & Menefee D (2014). Who’s on board? Child welfare worker reports of buy-in and readiness for organizational change. Children and Youth Services Review, 37, 28–35. doi: 10.1016/j.childyouth.2013.12.001
  32. Nelson MM, Shanley JR, Funderburk BW, & Bard E (2012). Therapists’ attitudes toward evidence-based practices and implementation of parent-child interaction therapy. Child Maltreatment, 17, 47–55.
  33. Parada H, Barnoff L, & Coleman B (2007). Negotiating ‘professional agency’: Social work and decision-making within the Ontario child welfare system. Journal of Sociology and Social Welfare, 34, 35–56.
  34. Patterson Silver Wolf (Adelv Unegv Waya) DA (2015). Factors influencing the implementation of a brief alcohol screening and educational intervention in social settings not specializing in addiction services. Social Work in Health Care, 54, 345–364.
  35. Patterson Silver Wolf DA, Dulmus CN, & Maguin E (2013). Is openness to using empirically supported treatments related to organizational culture and climate? Journal of Social Service Research, 39, 562–571.
  36. Paynter JM, & Keen D (2015). Knowledge and use of intervention practices by community-based early intervention service providers. Journal of Autism and Developmental Disorders, 45, 1614–1623.
  37. Smith BD, & Donovan SEF (2003). Child welfare practice in organizational and institutional context. Social Service Review, 77, 541–563.
  38. Stewart RE, Stirman SW, & Chambless DL (2012). A qualitative investigation of practicing psychologists’ attitudes toward research-informed practice: Implications for dissemination strategies. Professional Psychology: Research and Practice, 43, 100–109.
  39. Spielfogel JS, Leathers SJ, & Christian E (2016). Agency culture and climate in child welfare: Do perceptions vary by exposure to the child welfare system? Human Service Organizations: Management, Leadership & Governance, 40, 382–396.
  40. Timmer S, & Urquiza A (Eds.). (2014). Evidence-based approaches for the treatment of maltreated children: Considering core components and treatment effectiveness. Dordrecht; New York: Springer.
  41. Whitaker T (2011). Administrative case reviews: Improving outcomes for children in out-of-home care. Children and Youth Services Review, 33, 1683–1708.
