Author manuscript; available in PMC: 2018 Sep 1.
Published in final edited form as: Child Youth Serv Rev. 2017 Nov 6;83:242–247. doi: 10.1016/j.childyouth.2017.11.005

Use of Research Evidence and Implementation of Evidence-Based Practices in Youth-Serving Systems

Lawrence A Palinkas 1, Lisa Saldana 2, Chih-Ping Chou 3, Patricia Chamberlain 2
PMCID: PMC5695711  NIHMSID: NIHMS919390  PMID: 29170572

Abstract

Although the effectiveness of interventions for the prevention and treatment of mental health and behavioral problems in abused and neglected youth has been demonstrated through the accumulation of evidence from rigorous and systematic research, it is uncertain whether use of research evidence (URE) by child-serving systems leaders increases the likelihood of evidence-based practice (EBP) implementation and sustainment. Information on URE was collected from 151 directors and senior administrators of child welfare, mental health and juvenile justice systems in 40 California and 11 Ohio counties participating in an RCT of the use of community development teams (CDTs) to scale up implementation of Treatment Foster Care Oregon over a 3-year period (2010–2012). Separate multivariate models were used to assess the independent effects of evidence acquisition (input), evaluation (process), application (output), and URE in general (SIEU Total) on two measures of EBP implementation: highest stage reached and proportion of activities completed at the pre-implementation, implementation and sustainment phases. Stage of implementation and proportion of activities completed in the implementation and sustainment phases were independently associated with acquisition of evidence and URE in general. Participation in CDTs was significantly associated with URE in general and acquisition of research evidence in particular. Implementation of EBPs for the treatment of abused and neglected youth thus does appear to be associated with use of research evidence, especially during the later phases.

Keywords: research evidence, evidence-based practice, child welfare, implementation, mental health services

1. Introduction

Despite substantial evidence of their effectiveness, interventions for the prevention and treatment of mental health and behavioral problems of abused and neglected children and adolescents are not widely used in publicly funded child-serving systems (Hoagwood & Olin, 2002; Horwitz, Chamberlain, Landsverk, & Mullican, 2014; Raghavan, Inoue, Ettner, Hamilton, & Landsverk, 2010). Identification of the factors that serve as barriers and facilitators to evidence-based practice (EBP) implementation in service sectors that cater to abused and neglected children and adolescents has relied upon several theories, models and frameworks (Aarons, Hurlburt & Horwitz, 2011; Damschroder et al., 2009; Hanson, Self-Brown, Rostad, & Jackson, 2015). Some models and frameworks include characteristics of the intervention itself (Aarons et al., 2011; Damschroder et al., 2009), while others focus on the interactions that occur between intervention developers and consumers (Rogers, 2003). Still other models focus on the transfer of research evidence from knowledge producers to knowledge consumers or from EBP developers to potential users (Lavis et al., 2003; Landry, Amara, & Lamari, 2001a, 2001b; Landry, Lamari, & Amara, 2003). In fact, the definition of the field of implementation research itself refers to “methods to promote the systematic uptake of research findings and other evidence-based practices (EBPs) into routine practice, and, hence, to improve the quality and effectiveness of health services” (Eccles & Mittman, 2006).

However, for the most part, these models and frameworks do not explain the mechanism by which use of research evidence (URE) is linked to the likelihood of implementation and sustainment of an EBP. Several models focus specifically on the use of research evidence. Nutley, Walter and Davies (2007), for instance, identify four factors associated with the outcome of using research evidence: the nature of the research to be applied, the personal characteristics of both researchers and potential research users, the links between research and its users, and the context for the use of research. Honig and Coburn (2008) emphasize the process (searching for evidence, incorporating or not incorporating it in decision making) and predictors (features of the evidence, working knowledge, social capital, organization, normative influence, political dynamics, and state and federal policies) of evidence use.

Among the best-known models of evidence use are the variations that fall under the rubric of knowledge transfer and exchange (KTE) (Lavis, Lomas, Hamid, & Sewankambo, 2006; Lavis et al., 2002; Lomas, 2000; Mitton, Adair, McKenzie, Patten, & Waye Perry, 2007). The foundation for these models is Caplan’s Two Communities Theory (1979), which posits that the utilization of research by policy analysts and decision makers is poor because the assumptions and cultural practices of the two groups differ greatly, so effort is required to bridge the research-policy interface. A common approach to addressing these challenges is regular and direct contact between those who produce knowledge and those who use it (Lavis, Moynihan, Oxman, & Paulsen, 2008; Lomas, 2000). Direct interactions have been shown to improve user perceptions of the value of research (Kothari, Birch, & Charles, 2005) and to correlate significantly with the consultation of research material by potential users (Ouimet et al., 2010). Another approach is the tailoring of presentations to meet users’ needs; customization of knowledge is important to potential users (Cherney, Head, Boreham, Povey, & Ferguson, 2012), as is the researcher’s understanding of users’ needs and ability to speak the language of practice or policy (Haynes et al., 2011). A third approach is knowledge brokerage: “Knowledge brokerage refers to efforts to make research and policymaking more accessible to each other with various mechanisms of knowledge sharing and transfer” (Hukkinen, 2016, p. 321). Knowledge brokers include individuals and organizations that serve as intermediaries between knowledge producers and consumers and engage in a variety of activities, including dissemination, matchmaking, consulting, engaging, collaborating, and capacity-building (Meyer, 2010; Michaels, 2009; Ward, House & Hammer, 2009). Although the evidence for the effectiveness of knowledge brokerage is somewhat equivocal (Knight & Lightowler, 2010; Phipps & Morton, 2013), there is some evidence that it can improve comprehension of the evidence and increase the intention to use it (Kothari, MacLean, Edwards, & Hobbs, 2011).

Among the many strategies employed to facilitate KTE are the following: face-to-face exchange (consultation, regular meetings) between decision makers and researchers; education sessions for decision makers; networks and communities of practice; facilitated meetings between decision makers and researchers; interactive, multidisciplinary workshops; capacity building within health services and health delivery organizations; web-based information and electronic communications; and steering committees to integrate the views of local experts into the design, conduct and implementation of research (Mitton et al., 2007). Most if not all of these strategies can also be found within a group of implementation strategies known as quality improvement collaboratives (QICs) or learning collaboratives (LCs) (Nadeem, Olin, Campbell, Hoagwood, & Horwitz, 2013). One of the best-known illustrations of the QIC approach to implementation is the Institute for Healthcare Improvement’s Breakthrough Series Collaborative (Institute for Healthcare Improvement, 2003). In a typical QIC/LC, individual sites organize staff into multi-disciplinary teams that participate in a series of in-person, phone, distance learning, and independent activities led by LC faculty who serve as content and QI experts (Nadeem, Weiss, Olin, Hoagwood, & Horwitz, 2016). The QIC/LC structure provides sites with access to experts in the field, often including treatment developers and QI experts.

Although these models and strategies have been widely used in implementation research, to our knowledge, there has been no research to date that has demonstrated that URE by policymakers or practitioners is associated with the extent to which implementation of an EBP has been successful or unsuccessful. A few studies have focused on the implementation of KTE strategies (Mitton et al., 2007) but not on specific interventions, programs or practices based on research evidence. Thus, it is unclear whether a decision to adopt, implement and sustain a particular EBP is based on the quality and quantity of evidence supporting its effectiveness, its relevance to the population served, from where and how the evidence was obtained, and how the evidence is used to make or support such a decision. Further, it is unknown whether knowledge brokerage implementation strategies like QICs result in a significant increase in URE.

The study described in this paper examined the use of research evidence among leaders of county-level child welfare, specialty mental health and juvenile justice systems in California and Ohio participating in a randomized controlled trial of a specific QIC strategy for scaling up the use of an EBP for youth in foster care. Previous studies of these leaders revealed the importance of social networks for exchanging information and resources to support EBP implementation (Authors, 2011) and showed that URE varies with demographic characteristics such as gender, level of education and type of agency (Authors, 2017). Systems leaders also exhibited significant differences by type of use: they were most engaged in evaluating the evidence, least engaged in accessing it, and more likely to ignore the evidence than to apply it when deciding whether or not to adopt an innovation. Leaders also consider other forms of evidence, including the resources necessary and available to support EBPs, the demand for research evidence, and personal experience (Authors, 2017). The current study had two specific aims: 1) to determine whether use of research evidence was independently associated with the stage of implementation of an EBP and the proportion of activities completed at the pre-implementation, implementation and sustainment phases; and 2) to determine whether URE was significantly associated with the QIC strategy used to scale up the EBP.

2. Methods

2.1. Setting

The setting for the present study was the CAL-OH Study, a randomized clinical trial of Community Development Teams (Saldana & Chamberlain, 2012) to scale up the use of Treatment Foster Care Oregon (TFCO; Chamberlain, Leve, & Degarmo, 2007), an EBP for the treatment of externalizing behaviors and mental health problems in youth. The CAL-OH Study targeted 40 California counties and 11 Ohio counties that had not already adopted TFCO. Counties were matched on characteristics such as size and number of foster care placements to form four nearly equivalent groups. The matched groups were then randomly assigned to four sequential cohorts in a waitlist design with staggered start-up timelines (at months 6, 18, or 30). Within each cohort, counties were randomly assigned to the CDT or the standard implementation condition, thereby generating eight replicate groups of counties, four of which were assigned to CDT.

2.2. Participants

Data for this study were collected from 151 of the 221 (67.9% response rate) available child-serving system leaders, supervisors, and administrators who were participating in the RCT at the time this study was conducted (2010–2012). Participants had an average age of 49 years and were predominantly non-Hispanic white (84.4%), female (69.4%), and living in California (61.6%), with a Master’s degree or higher (62.6%). A little over one-third of the participants (35%) were child welfare system directors; the remaining participants were leaders of mental health (24%), juvenile justice (18%), and other social services (23%) (Authors, 2017).

The study was approved by the Institutional Review Boards of the investigators’ institutions prior to participant recruitment, and informed consent was obtained prior to data collection. Participants were emailed an invitation to participate as well as a link to a web-based survey, which took approximately 15 to 20 minutes to complete.

2.3. Measures

Treatment condition

The Community Development Team (CDT) is a multifaceted intervention developed by the California Institute for Mental Health (CIMH), a California-based statewide training and technical assistance center dedicated to the dissemination and implementation of EBPs for the treatment of mental health problems (Saldana & Chamberlain, 2012). CIMH serves as an intermediary and knowledge broker, linking researchers with practitioners and policymakers. The CDT model consists of seven core processes (needs-benefits analysis, planning, monitoring and support, fidelity focus, technical investigation and problem solving, procedural skills development, and peer-to-peer exchange and support) that are designed to facilitate the successful adoption, implementation and sustainability of a new practice. These processes are accomplished through seven distinct activities: development team meetings, development team administrator conference calls, a prompted listserv, site-specific correspondence and conference calls, fidelity and outcomes protocols and monitoring, CDT practice developer conference calls, and titrated technical assistance (i.e., provided more heavily during the pre-implementation phase and then slowly reduced over time as teams become more competent in their program delivery). CDTs utilize many of the same components as quality improvement collaborative (QIC) or learning collaborative models of implementation (Nadeem et al., 2013), particularly structured opportunities for collaboration and problem solving across sites.

Counties randomized to the individualized implementation strategy (IND) received the usual technical assistance and implementation support typically provided to teams adopting a new TFCO program. This included three readiness calls with a TFCO purveyor and a face-to-face stakeholder meeting at which county stakeholders could ask questions, work through implementation procedures, and develop a concrete plan for start-up. This was followed by a 5-day all-staff training for administrators, supervisors, therapists, and skills trainers; a 2-day foster parent training; training in using the TFCO fidelity monitoring system; program start-up (placement of youth in TFCO foster homes); and ongoing consultation and support in implementing the model through weekly viewing of video recordings of foster parent meetings and consultation calls to maintain fidelity to the model.

Counties randomized to the CDT condition received all of these activities as well as titrated technical assistance from two CDT consultants who were trained and experienced in supporting implementation of the TFCO model. In addition to development team administrator conference calls, a prompted listserv for the exchange of information, site-specific correspondence, and fidelity and outcomes protocols and monitoring, this support was offered in six peer-to-peer meetings and in monthly conference calls with program administrators. Meetings were attended by representatives from 5–7 counties randomized to the CDT condition and were structured to help problem-solve and share information about implementation issues, including discussion of key barriers experienced by counties in California or Ohio that were unique to the state landscapes, and resource sharing. CDT facilitators either were the developers of the CDT model or were trained by the CDT developers (Authors, 2014).

The CDT model was developed by CIMH, in its role as a knowledge broker and intermediary organization, to support counties in the adoption of a range of EBPs. For the current study, the CDT was delivered with a specific focus on TFCO. To evaluate its effectiveness as an implementation strategy, the CDT approach was directly compared with the procedures normally used by the TFCO purveyor for EBP implementation. In this study, CIMH was not acting as an agent of the TFCO purveyor or the counties per se, but was working to successfully implement the TFCO model and then apply the same procedures to scale up other EBPs in California (Authors, 2012).

Structured Interview of Evidence Use (SIEU)

The Structured Interview of Evidence Use (SIEU) is a 45-item instrument designed to measure the extent of engagement in three forms of evidence use: 1) acquisition of research evidence (Input), 2) evaluation of that evidence for reliability, validity and relevance to one’s own clients (Process), and 3) application of that evidence in deciding whether or not to adopt an evidence-based or other innovative practice (Output) (Palinkas et al., 2016). A global measure of use of research evidence (referred to as the total SIEU, or use of research evidence in general) was also calculated by combining the scores of all three subscales. Respondents indicated their level of agreement with a series of statements using a Likert scale ranging from 1 (not at all) to 5 (all the time) for the 17 items of the Input subscale, and similar 5-point Likert scales ranging from 1 (not important) to 5 (very important) for the 16 items of the Process subscale and the 12 items of the Output subscale. Lower scores on all three subscales indicated lower levels of agreement, and higher scores indicated higher levels of agreement with the respective statements. Each subscale score and the total SIEU score were computed as the average of the scores on the items included in that subscale or in the total scale. Palinkas and colleagues (2016) report high internal reliability of the total SIEU (α = .88) and all three primary subscales (Input = .80; Process = .86; Output = .80).
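To make the scoring rule concrete, the following is a minimal sketch of how the SIEU subscale and total scores described above could be computed. The item counts (17 Input, 16 Process, 12 Output) follow the text; the column names, data layout, and function name are hypothetical, not the published scoring code.

```python
import pandas as pd

# Hypothetical respondent-level item data: columns input_1..input_17,
# process_1..process_16, output_1..output_12, each on the 1-5 Likert scales above.
SUBSCALES = {
    "input": [f"input_{i}" for i in range(1, 18)],      # acquisition (17 items)
    "process": [f"process_{i}" for i in range(1, 17)],  # evaluation (16 items)
    "output": [f"output_{i}" for i in range(1, 13)],    # application (12 items)
}

def score_sieu(responses: pd.DataFrame) -> pd.DataFrame:
    """Return subscale and total SIEU scores, each as the mean of its items."""
    scores = pd.DataFrame(index=responses.index)
    for name, items in SUBSCALES.items():
        scores[name] = responses[items].mean(axis=1)
    # Total SIEU: average across all 45 items of the combined scale.
    all_items = [c for cols in SUBSCALES.values() for c in cols]
    scores["total_sieu"] = responses[all_items].mean(axis=1)
    return scores
```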

Stages of Implementation Completion (SIC)

The SIC is an 8-stage assessment tool of implementation processes and milestones, with sub-activities within each stage (Chamberlain, Brown & Saldana, 2011). The stages range from Engagement with the developers to the development of practitioner and organizational Competency. The SIC spans three phases of implementation: pre-implementation (Stages 1–3), implementation (Stages 4–7), and sustainability (Stage 8). As an observational measure, the SIC is flexible in assessing implementation activities conducted by a number of different agents involved in the process, including the county system leaders involved in the decision of whether or not to adopt an EBP, agency leaders and practitioners, and clients receiving services. Implementation progress was assessed on the basis of the furthest stage completed (i.e., Stage Score) and the percentage of activities completed within a phase (i.e., Proportion Score). An earlier study (Authors, 2012) found that SIC scores predicted variations in implementation behavior for sites attempting to adopt the TFCO model, and that sites were accurately classified (i.e., face validity) through agglomerative hierarchical cluster analyses.
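As an illustration of the two SIC summary scores described above (Stage Score and Proportion Score), here is a minimal sketch under simplifying assumptions; the data structure, function name, and the exact operationalization of "furthest stage completed" are illustrative and not the published SIC scoring algorithm.

```python
from typing import Dict, List, Tuple

# Phase boundaries as described in the text: pre-implementation (Stages 1-3),
# implementation (Stages 4-7), sustainability (Stage 8).
PHASES: Dict[str, Tuple[int, ...]] = {
    "pre_implementation": (1, 2, 3),
    "implementation": (4, 5, 6, 7),
    "sustainability": (8,),
}

def sic_scores(completed: Dict[int, List[bool]]) -> Dict[str, float]:
    """completed maps each SIC stage (1-8) to flags indicating whether each of
    that stage's activities was completed (a simplified stand-in for SIC data)."""
    # Stage Score: here taken as the highest-numbered stage with any completed
    # activity -- one plausible simplification of "furthest stage completed".
    stage_score = max((s for s, acts in completed.items() if any(acts)), default=0)
    scores: Dict[str, float] = {"stage_score": float(stage_score)}
    # Proportion Score: share of activities completed within each phase.
    for phase, stages in PHASES.items():
        acts = [a for s in stages for a in completed.get(s, [])]
        scores[f"prop_{phase}"] = sum(acts) / len(acts) if acts else 0.0
    return scores
```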

2.4. Statistical analysis

Because implementation of TFCO occurred at the county level, data from individual study participants were aggregated into 45 clusters based on county and year of participation (2010, 2011, and 2012). A cluster included 3 or more individual participants from the same county in the same year. Clusters included participants from 16 of the 40 California counties and 9 of the 11 Ohio counties; 17 clusters participated in 2010, 19 clusters participated in 2011, and 9 clusters participated in 2012.
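A minimal sketch of this county-by-year clustering step follows, assuming a hypothetical respondent-level table; the column names and the pandas-based aggregation are illustrative, not the authors' code.

```python
import pandas as pd

def build_clusters(respondents: pd.DataFrame, min_size: int = 3) -> pd.DataFrame:
    """Aggregate individual SIEU scores into county-by-year clusters.

    respondents is assumed to have columns: state, county, year,
    input, process, output, total_sieu (one row per participant).
    """
    grouped = respondents.groupby(["state", "county", "year"])
    clusters = grouped[["input", "process", "output", "total_sieu"]].mean()
    clusters["n_respondents"] = grouped.size()
    # Keep only clusters with at least `min_size` respondents, as in the text.
    return clusters[clusters["n_respondents"] >= min_size].reset_index()
```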

Bivariate comparisons of mean cluster measures of use of research evidence (total engagement and the acquisition, evaluation, and application subscales) by treatment condition (CDT vs. IND) were conducted using analyses of variance. To examine the effect of URE on stage of implementation completion, bivariate comparisons were conducted using Spearman correlation coefficients, and multivariate analyses were conducted using linear and binary logistic regression. Separate linear regression models were constructed for the furthest SIC stage reached and for the proportion of activities completed in Phase 1 (Stages 1–3) and Phase 2 (Stages 4–7); because counties completed either neither or both of the two activities in Phase 3 (Stage 8), binary logistic regression models were constructed for this outcome instead. Each model included one of the four measures of evidence use (Input, Process, Output, and total SIEU score) and county, year, state, and experimental condition (CDT vs. IND) as covariates.
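The analytic steps described above (ANOVA comparisons by condition, Spearman correlations, and linear and logistic regressions with covariates) might be sketched as follows. The data frame and variable names are hypothetical, and county is omitted from the covariates in this small sketch even though the published models included it.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr, f_oneway

def run_models(clusters: pd.DataFrame, sieu_measure: str = "total_sieu"):
    """clusters is assumed to contain one row per county-year cluster with the
    SIEU measure plus: furthest_stage, prop_phase1, prop_phase2,
    sustain_completed (0/1), year, state, and condition ('CDT' or 'IND')."""
    # ANOVA-style comparison of mean evidence use by treatment condition.
    cdt = clusters.loc[clusters["condition"] == "CDT", sieu_measure]
    ind = clusters.loc[clusters["condition"] == "IND", sieu_measure]
    print("ANOVA by condition:", f_oneway(cdt, ind))

    # Bivariate (Spearman) correlation between evidence use and furthest stage.
    print("Spearman:", spearmanr(clusters[sieu_measure], clusters["furthest_stage"]))

    # The published models also adjusted for county; it is omitted here to keep
    # the sketch small and estimable with few clusters.
    covariates = "C(year) + C(state) + C(condition)"

    # Linear models for furthest stage and the phase 1-2 proportion scores.
    for outcome in ["furthest_stage", "prop_phase1", "prop_phase2"]:
        fit = smf.ols(f"{outcome} ~ {sieu_measure} + {covariates}", data=clusters).fit()
        print(outcome, fit.params[sieu_measure], fit.pvalues[sieu_measure])

    # Binary logistic model for the sustainment phase (both activities vs. neither).
    logit = smf.logit(f"sustain_completed ~ {sieu_measure} + {covariates}",
                      data=clusters).fit(disp=False)
    print("sustainment", logit.params[sieu_measure], logit.pvalues[sieu_measure])
```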

3. Results

A bivariate comparison of measures of use of research evidence with measures of implementation is provided in Table 1 below. Use of research evidence in general (SIEU total score) was significantly correlated with furthest stage reached (p < 0.05), proportion of activities completed in Stages 4–7 (Implementation) (p = 0.009) and Stage 8 (Sustainment) (p = 0.036). Acquisition of research evidence (Input) was significantly associated with proportion of activities completed in Stages 4–7 (p = 0.05) and Stage 8 (p = 0.013), and marginally associated with furthest stage reached (p = 0.09).

Table 1.

Correlations between measures of use of research evidence and measures of stage of implementation

Cluster Mean        Furthest          Proportion of activities completed
SIEU Scores         stage reached     Pre-implementation    Implementation    Sustainment
                                      (Stages 1–3)          (Stages 4–7)      (Stage 8)
Input               .25§              −.10                  .29*              .37*
Process             .15               .07                   .28               .14
Output              .22               .15                   .25               .16
Total SIEU          .29*              .02                   .37**             .31*

§ p < 0.10, * p < 0.05, ** p < 0.01

Table 2 below displays the results of the multivariate regression analyses of SIC measures on use of research evidence. Controlling for county, state, year of observation, and experimental condition, use of research evidence in general (SIEU total) and acquisition of research evidence (Input) were significantly associated with the furthest stage of implementation of TFCO (t = 2.29, p = 0.03 and t = 2.10, p = 0.04, respectively) and with the proportion of activities completed in the implementation (t = 2.36, p = 0.02 and t = 2.18, p = 0.035) and sustainment (p = 0.05 and p = 0.006) phases. Evaluation of research evidence (Process) was marginally associated with the proportion of activities completed in the pre-implementation (t = 1.86, p = 0.07) and implementation (t = 1.66, p = 0.10) phases.

Table 2.

Regression of SIC outcomes on SIEU measures representing type of engagement in evidence use

                                        Implementation outcomes
SIEU mean           Furthest          Proportion of activities completed
score               SIC Stage         Pre-implementation    Implementation    Sustainment
                    B (SE)            B (SE)                B (SE)            B (SE)
Input               3.16 (1.50)*      0.17 (0.16)           0.53 (0.24)*      5.14 (2.33)**
Process             2.60 (1.60)       0.30 (0.16)§          0.43 (0.26)§      2.22 (2.26)
Output              1.41 (1.43)       0.04 (0.15)           0.25 (0.23)       2.18 (2.10)
Total SIEU score    4.18 (1.82)*      0.29 (0.19)           0.70 (0.30)*      5.98 (3.09)*

§ p < 0.10, * p < 0.05, ** p < 0.01

Controlling for county, year, state, and treatment condition

To determine whether use of research evidence was associated with participation in the Community Development Teams, we compared URE measures by experimental condition. The results are presented in Table 3 below. Community Development Team participation was significantly associated with acquisition of research evidence (Input) (F = 5.38, p = 0.025) and marginally associated with URE in general (SIEU total) (F = 2.84, p = 0.099).

Table 3.

Association between community development team participation and use of research evidence

                                   Experimental Condition
SIEU Measure        Community Development Team        Training & Technical Assistance
                    (N = 29)                          (N = 16)
                    Mean      SD                      Mean      SD
Input               2.86      0.23                    2.69      0.26*
Process             3.73      0.20                    3.67      0.30
Output              3.39      0.21                    3.36      0.36
Total SIEU score    3.33      0.17                    3.22      0.25*

* p < 0.05

4. Discussion

In this study, URE in general (a combination of acquisition or input, evaluation or process, and application or output) and acquisition of research evidence in particular (i.e., input) were significantly associated with stage of implementation completion of TFCO and with the proportion of activities completed in the implementation and sustainment phases. There also is some evidence to suggest that evaluation of research evidence for its validity, reliability and relevance to the county was associated with proportion of activities completed in the pre-implementation and implementation phases, although the associations failed to reach statistical significance. This study also found that participation in Community Development Teams was significantly associated with URE in general and acquisition of research evidence in particular, but not with evaluation or application of research evidence.

Although URE models vary with respect to their distinct theoretical orientations (i.e., human information processing, distributed cognition, diffusion of innovations, decision-making theory) and organizational settings (e.g., health care, education), most if not all acknowledge two essential considerations for understanding when research will be used and in what ways (Nutley et al., 2007). The first consideration is the context of research use. As noted by Davies, Nutley and Walter (2008, p. 190), “research use is a highly contingent process. Whether and how new information gets assimilated is contingent on local priorities, cultures and systems of meaning. What makes sense in one setting can make a different sense in another.” An earlier study of child-serving systems leaders participating in the CAL-OH Study similarly found that URE must be placed in a context that also involves information or evidence not based in research: the availability of resources to enable the use of research evidence, the demand for such evidence (or the lack thereof), and local evidence obtained from personal experience and observation (Authors, 2017). In that study, “the primary resource that determines whether research evidence is relied upon for making a decision is the availability of funding to support the practice. As noted by a chief probation officer, ‘If I can’t afford it, then there is no point of going further’” (Authors, 2017, p. 70). Client needs represent another important piece of evidence in deciding whether or not to implement a specific EBP:

“We get a lot of these kids that are not successful in foster placements and they end up in group home placements which cost the county a lot of money and they end up in other systems. So, I think that the main thing we’re looking at is approaches that are going to allow us to be more successful with these children and to avoid them ending up in more restricted types of placements. (mental health director)” (Authors, 2017, p. 72).

A third type of evidence is personal experience. As explained by one county mental health services director:

“I think there’s a term kind of used by CIMH and in other places about practice-based evidence, as opposed to evidence-based practice. And that’s particularly useful, I think, when you’re dealing with cultural issues, and when you’re dealing with the practices that either the practitioner or the community being served, or both, feel that there’s real efficacy to what’s occurring that just hasn’t been able to be documented yet.”

The importance of context may also explain why CDT participation was not associated with evaluation and application of research evidence (process and output, respectively) as these forms of URE are more context-dependent than acquisition of research evidence (input) (Palinkas et al., 2016). While systems leaders focused primarily on the relevance to their county’s population when evaluating the research evidence and based their decision to apply the evidence on their agency’s perceived capacity and need to implement (Authors, 2017), exchange of information, including research evidence, occurred regularly and was not always linked to the need for a specific EBP (Authors, 2011).

The second consideration is that “interpersonal and social interactions often are seen as key to accessing and interpreting such research knowledge, whether among policy or practice colleagues, research intermediaries or more directly with researchers themselves” (Davies et al., 2008, p. 189). For instance, in an earlier study, one of the child welfare agency directors described the following:

“…we were able to go to Ohio and see the Team Decision Making meetings. And that experience was so valuable because you actually got to ask all those questions that you had about how to engage your staff, how to get finance, all that stuff that you are thinking about, “How am I going to do this?” You can ask these people and they can help walk you through that and you can actually see it in action. Reading about it is fine, but it doesn’t really help you see whether it would fit with your circumstance unless you can see it and ask those questions that you need to ask. And I think that’s why we rely on each other so much because it’s all we really have” (Authors, 2017, p. 74).

However, evidence in support of the importance of such networks and interactions was somewhat mixed. On the one hand, a previous study found no evidence that the CDT implementation strategy achieved higher overall implementation compared to that for IND using either a composite score or assessments of how many stages were completed, how fast they were achieved, whether a county achieved placement of any child, or whether a county achieved full competency (Authors, 2014). On the other hand, we found in this study that participation in Community Development Teams was significantly associated with URE in general and acquisition of research evidence in particular, despite the fact that CDTs were not designed or intended to facilitate URE.

In both the CDT and IND conditions, acquisition of evidence supporting the EBP (in this case, TFCO) occurred primarily through the treatment developer. However, the networks developed by participating in the CDTs provided access to evidence supporting other types of EBPs, as well as evidence supporting strategies for obtaining financial support for and sustaining TFCO. One of the primary aims of the CDT is to build a working group of agency stakeholders who share a common goal of implementing a particular EBP. Through facilitated discussions, group members are encouraged to share successes and barriers experienced in their implementation process, to share resources and materials that are helpful for implementing the EBP within the local contexts, and to develop sustainable agency-to-agency relationships outside of the facilitated working group discussions. Further, earlier research with the first cohort of CAL-OH study participants revealed that networks expose leaders to information about EBPs and opportunities to adopt EBPs; they also influence decisions to adopt EBPs (Authors, 2013). Individuals in counties at the same stage of implementation accounted for 83% of all network ties, while networks in counties that decided not to implement TFCO had no extra-county ties. However, the multivariate analyses suggested that acquisition of research evidence was independent of CDT participation, indicating that local networks of study participants extended beyond those who were assigned to the same CDT. In the earlier study, implementation of TFCO at the 2-year follow-up was associated with county size, urban vs. rural status, and in-degree centrality, a measure of the extent to which a network member is sought out by other network members for information and advice (Authors, 2011). Collaboration was viewed as critical to implementing EBPs, especially in small, rural counties where agencies have limited resources on their own (Authors, 2013).

Several factors limit the findings of the present study. Our assessment of URE was not restricted to evidence related only to TFCO; consequently, the circumstances of participation in the CAL-OH study may not have reflected usual patterns of URE, because the tasks of making evidence accessible, providing assurances of its validity and reliability, and applying the evidence were assumed by the study investigators and intervention developers. Alternatively, participation in the project may have motivated systems leaders to seek out information about TFCO beyond the information provided by the investigators, evaluate it, and apply it in their own local context of demand for its use and supply of resources to support that use. Restricting analyses of URE to clusters of three or more individuals in the same county for the same period of observation limited our statistical power to examine some outcomes, especially binary outcomes. Because the measures of implementation outcomes were collected annually at the county level, the unit of analysis was the county and year of participation, not the individual participant. For continuous outcomes, we predicted that the trial would have sufficient power to detect an effect size of 0.9 at power 0.80 and an effect size of 0.8 at power 0.70 for a two-sided 0.05 level test, given ICCs less than 0.10. For outcomes other than the proportion of implementation activities completed, there was relatively little variation across cohorts and very modest intraclass correlations among the CDT counties (Authors, 2014). Nevertheless, additional research with larger samples is required to confirm the independent association between URE and EBP implementation outcomes.

5. Conclusion

Despite these limitations, the results of this study suggest that URE by leaders of child-serving systems is an important predictor of the implementation and sustainment of EBPs. Further, KTE and implementation strategies like CDTs do play an important role in URE in general and acquiring research evidence in particular. Support for implementation and sustainment should include assistance in making the evidence for the effectiveness of an EBP or innovative program accessible to policymakers and practitioners and in evaluating the evidence to assure its validity, reliability and relevance to the populations served by these individuals and the organizations they represent. Support should also include providing resources necessary to facilitate the application of evidence in decision-making. Most importantly, the elimination of barriers to EBP implementation can reduce the likelihood that the evidence is ignored and increase the likelihood that it will be applied to sustain the EBP.

Highlights.

  • Leaders of public youth-serving systems of care rely on social networks and research evidence when deciding to adopt and implement an evidence-based practice.

  • Use of research evidence is independently associated with successful implementation of evidence-based practices.

  • Quality improvement collaboratives like community development teams facilitate the development of social networks that share information, including research evidence.

  • Knowledge translation and exchange and implementation strategies like community development teams play an important role in use of research evidence in general and acquiring research evidence in particular.

Acknowledgments

This study was supported by funding from the William T. Grant Foundation (#10648, Lawrence Palinkas, P.I.), the National Institute of Mental Health (1 R01 MH097748, Patricia Chamberlain, P.I., and 1R01MH076158-01A1, Lisa Saldana, P.I.), and the National Institute on Drug Abuse (5P50DA035763-04, Patricia Chamberlain, P.I.).

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public sector services. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:4–23. doi: 10.1007/s10488-010-0327-7.
  2. Authors. 2008.
  3. Authors. 2011.
  4. Authors. 2012.
  5. Authors. 2013.
  6. Authors. 2014.
  7. Authors. 2017.
  8. Caplan N. The two-communities theory and knowledge utilization. American Behavioral Scientist. 1979;22:459–471.
  9. Chamberlain P, Brown C, Saldana L. Observational measure of implementation progress in community-based settings: The Stages of Implementation Completion (SIC). Implementation Science. 2011;6:116. doi: 10.1186/1748-5908-6-116.
  10. Chamberlain P, Leve LD, Degarmo DS. Multidimensional treatment foster care for girls in the juvenile justice system: 2-year follow-up of a randomized clinical trial. Journal of Consulting and Clinical Psychology. 2007;75:187–193. doi: 10.1037/0022-006X.75.1.187.
  11. Cherney A, Head B, Boreham P, Povey J, Ferguson M. Perspectives of academic social scientists on knowledge transfer and research collaboration: A cross-sectional survey of Australian academics. Evidence and Policy. 2012;8:433–453.
  12. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4:50. doi: 10.1186/1748-5908-4-50.
  13. Davies H, Nutley S, Walter I. Why ‘knowledge transfer’ is misconceived for applied social research. Journal of Health Services Research and Policy. 2008;13(3):188–190. doi: 10.1258/jhsrp.2008.008055.
  14. Dwan KM, McInnes P. Improving knowledge exchange at the research-policy interface. Australian Health Review. 2013;37:194–198. doi: 10.1071/AH12158.
  15. Eccles MP, Mittman BS. Welcome to Implementation Science. Implementation Science. 2006;1:1. doi: 10.1186/1748-5908-1-1.
  16. Hanson RF, Self-Brown S, Rostad WL, Jackson MC. The what, when and why of implementation frameworks for evidence-based practices in child welfare and child mental health service systems. Child Abuse & Neglect. 2015. doi: 10.1016/j.chiabu.2015.09.014 (epub ahead of print, Nov. 4).
  17. Haynes AS, Gillispie JA, Derrick GE, Hall WD, Redman S, Chapman S, et al. Galvanizers, guides, champions, and shields: The many ways that policymakers use public health researchers. Milbank Quarterly. 2011;89:564–598. doi: 10.1111/j.1468-0009.2011.00643.x.
  18. Hoagwood K, Olin S. The Blueprint for Change report: Research on child and adolescent mental health. Journal of the American Academy of Child & Adolescent Psychiatry. 2002;41:760–767. doi: 10.1097/00004583-200207000-00006.
  19. Honig MI, Coburn C. Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy. 2008;22(4):578–608.
  20. Horwitz SM, Chamberlain P, Landsverk J, Mullican C. Improving the mental health of children in child welfare through the implementation of evidence-based parenting interventions. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37:27–39. doi: 10.1007/s10488-010-0274-3.
  21. Hukkinen JI. A model for the temporal dynamics of knowledge brokerage in sustainable development. Evidence and Policy. 2016;12:321–340.
  22. Institute for Healthcare Improvement. The Breakthrough Series: IHI’s collaborative model for achieving breakthrough improvement. Boston, MA: Institute for Healthcare Improvement; 2003.
  23. Knight C, Lightowler C. Reflection on ‘knowledge exchange professionals’ in the social sciences: Emerging opportunities and challenges for university-based knowledge brokers. Evidence and Policy. 2010;6:543–556.
  24. Kothari A, Birch S, Charles C. Interaction and research utilization in health policies and programs: Does it work? Health Policy. 2005;71:117–125. doi: 10.1016/j.healthpol.2004.03.010.
  25. Kothari A, MacLean L, Edwards N, Hobbs A. Indicators at the interface: Managing policymaker-researcher collaboration. Knowledge Management Research and Practice. 2011;9:203–214.
  26. Landry R, Amara N, Lamari M. Climbing the ladder of research utilization: Evidence from social science research. Science Communication. 2001a;22:396–422.
  27. Landry R, Amara N, Lamari M. Utilization of social science research knowledge in Canada. Research Policy. 2001b;30:333–349.
  28. Landry R, Lamari M, Amara N. The extent and determinants of the utilization of university research in government agencies. Public Administration Review. 2003;63:192–205.
  29. Lavis J, Lomas J, Hamid M, Sewankambo N. Assessing country-level efforts to link research to action. Bulletin of the World Health Organization. 2006;84:620–628. doi: 10.2471/blt.06.030312.
  30. Lavis J, Moynihan R, Oxman AD, Paulsen EJ. Evidence-informed health policy 4: Case descriptions of organizations that support the use of research evidence. Implementation Science. 2008;3:56. doi: 10.1186/1748-5908-3-56.
  31. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J, and the Knowledge Transfer Group. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Quarterly. 2003;81(2):221–249. doi: 10.1111/1468-0009.t01-1-00052.
  32. Lavis J, Ross S, Hurley J, Hohenadal J, Stoddart G, Woodward C, et al. Examining the role of health services research in public policymaking. Milbank Quarterly. 2002;80:125–154. doi: 10.1111/1468-0009.00005.
  33. Lomas J. Using ‘linkage and exchange’ to move research into policy at a Canadian foundation. Health Affairs. 2000;19(1):236–240. doi: 10.1377/hlthaff.19.3.236.
  34. Meyer M. The rise of the knowledge broker. Science Communication. 2010;32:118–127.
  35. Michaels S. Matching knowledge brokering strategies to environmental policy problems and settings. Environmental Science and Policy. 2009;12:994–1011.
  36. Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: Review and synthesis of the literature. Milbank Quarterly. 2007;85:729–768. doi: 10.1111/j.1468-0009.2007.00506.x.
  37. Nadeem E, Olin SS, Campbell L, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: A systematic literature review. Milbank Quarterly. 2013;91(2):354–394. doi: 10.1111/milq.12016.
  38. Nadeem E, Weiss D, Olin SS, Hoagwood KE, Horwitz SM. Using a theory-guided learning collaborative model to improve implementation of EBPs in a state children’s mental health system: A pilot study. Administration and Policy in Mental Health. 2016;43:978–990. doi: 10.1007/s10488-016-0735-4.
  39. Nutley SM, Walter I, Davies HTO. Using evidence: How research can inform public services. Bristol, UK: The Policy Press; 2007.
  40. Ouimet M, Bedard P-O, Turgeon J, Lavis JN, Gelineau F, Gagnon E, et al. Correlates of consulting research evidence among policy analysts in government ministries: A cross-sectional survey. Evidence and Policy. 2010;6:433–460.
  41. Palinkas LA, Garcia AR, Aarons GA, Finno-Velasquez M, Holloway IW, Mackie T, et al. Measuring use of research evidence in child-serving systems: The Structured Interview for Evidence Use (SIEU). Research on Social Work Practice. 2016;26(5):550–564. doi: 10.1177/1049731514560413.
  42. Phipps D, Morton S. Qualities of knowledge brokers: Reflections from practice. Evidence and Policy. 2013;9:255–265.
  43. Raghavan R, Inoue M, Ettner SL, Hamilton BH, Landsverk J. Preliminary analysis of the receipt of mental health services consistent with national standards among children in the child welfare system. American Journal of Public Health. 2010;100(4):742–749. doi: 10.2105/AJPH.2008.151472.
  44. Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
  45. Saldana L, Chamberlain P. Supporting implementation: The role of community development teams to build infrastructure. American Journal of Community Psychology. 2012;50:334–346. doi: 10.1007/s10464-012-9503-0.
  46. Ward V, House A, Hammer S. Knowledge brokering: The missing link in the evidence to action chain? Evidence and Policy. 2009;5:267–279. doi: 10.1332/174426409X463811.
