Published in final edited form as: Adm Policy Ment Health. 2012 Nov;39(6):419–425. doi: 10.1007/s10488-011-0363-y

Predicting Program Start-Up using the Stages of Implementation Measure

Lisa Saldana 1, Patricia Chamberlain 2, Wei Wang 3, C Hendricks Brown 4

Abstract

Recent efforts to better understand the process of implementation have been hampered by a lack of tools available to define and measure implementation progress. The Stages of Implementation Completion (SIC) was developed as part of an implementation trial of Multidimensional Treatment Foster Care (MTFC) in 53 sites; it identifies the duration of time spent on implementation activities and the proportion of activities completed. This paper examines the ability of the first three stages of the SIC (Engagement, Consideration of Feasibility, Readiness Planning) to predict successful program start-up. Results suggest that completing the early SIC stages thoroughly, yet relatively quickly, predicts the likelihood of successful implementation.

Keywords: implementation progress, stages, predictive validity, MTFC


Over the last decade, there has been an increased effort to implement evidence-based practices (EBPs) in real-world community settings (Horwitz & Landsverk, 2010). Doing so often entails extensive planning, training, and quality assurance in the EBP, involving a complex set of interactions among developers, system leaders, front-line staff, and consumers. In fact, it is generally understood from the literature that it takes an agency a minimum of two years to complete implementation (Fixsen & Blasé, 2009) and that the success of a program is largely dependent on the success of the implementation methods (Mihalic et al., 2004). However, little is known about which aspects of these methods and interactions are most important for successful implementation (Fixsen, Naoom, Blasé, Friedman, & Wallace, 2005). Recently, there has been an increased effort to understand which steps in the implementation process are essential to effectively transport EBPs to a diverse range of communities, and how best to measure whether these steps have occurred well (Aarons, Hurlburt, & Horwitz, 2010; Glasgow, Vogt, & Boles, 1999).

There is consensus that implementation is likely a recursive process with well-defined stages or steps (Blasé, Fixsen, Duda, Metz, Naoom, & Van Dyke, 2010). Fixsen and Blasé (2009) describe several clearly defined stages that are not necessarily linear and that impact each other in complex ways. A treatment developer or purveyor typically assists programs in navigating each of the implementation stages to ensure that the program elements are delivered in the manner intended by the developers. However, the key processes involved in the implementation stages need to be measured and modeled, and the fidelity of implementation methods assessed (Proctor et al., 2009; Schoenwald, Garland, Chapman, Frazier, Sheidow, & Southam-Gerow, 2010). Having a well-defined system of implementation, and knowledge of the typical progression through the stages of implementation, might increase the likelihood that a purveyor can provide programs with information in the early stages that will help support their success in later stages (Fixsen et al., 2005). It may be particularly important to give agencies feedback on their progress during the early implementation stages to help them either assess and potentially recalibrate their efforts to proceed, or reassess whether their current implementation plan remains viable. Such efforts have been limited by a lack of assessment tools for determining which steps, or stages, are necessary for implementation success and a lack of data on how programs progress through them.

The Current Paper

To address this gap, the Stages of Implementation Completion (SIC), an 8-stage assessment tool, was developed (Chamberlain & Brown, 2010) as part of a large-scale randomized implementation trial that contrasted two methods of implementing Multidimensional Treatment Foster Care (MTFC) in counties in California and Ohio. The SIC was developed as a method of measuring a community's progress toward successful implementation of the MTFC model. As shown in Table 1, the SIC has 8 main stages, with sub-activities within each stage. The stages range from Engagement with the developers to practitioner Competency. The SIC measures and monitors completion of the implementation activities within each stage, as well as the length of time taken to complete them. It was designed to be useful for measuring implementation across EBPs in general and, in this study, within MTFC specifically. Each of the 8 main stages describes a key implementation milestone that is necessary for successful implementation, and the sub-activities within each stage target the specific tasks that must be accomplished to complete that stage for a given practice. For example, Stage 3 (Readiness Planning) is required for the implementation of all EBPs, but one of its sub-activities, conducting a foster parent recruitment review, is specific to the MTFC model. As evident in Table 1, the SIC is date-driven in order to analyze rate; the rate at which a community completes each stage is hypothesized to predict successful implementation of the EBP. The appropriate rate of completion, however, is unknown. Communities that complete stages too slowly or get "hung up" in a stage might be likely to encounter further difficulties in the adoption and implementation process. On the other hand, moving through a stage too quickly could result in oversight of activities or tasks needed to thoroughly and adequately complete the stage and, ultimately, adopt the program. The SIC also yields a proportion score, which takes into account the number of activities within a stage that are completed. Thus, scores for both the speed and the proportion of activities completed are calculated to determine whether such factors influence the successful adoption of an EBP.

Table 1.

The Stages of Implementation Completion and Involved Agent per Stage

Stage 1: Engagement
    Activities: Date site is informed services/program available; Date of interest indicated
    Involved Agent: System Leader

Stage 2: Consideration of Feasibility
    Activities: Date of first contact for pre-implementation planning; Date first in-person meeting held; Date feasibility questionnaire completed; Date of initial feasibility assessment
    Involved Agents: System Leader; Agency

Stage 3: Readiness Planning
    Activities: Date of cost/funding plan review; Date of staff sequence, timeline, hire plan review; Date of Foster Parent recruitment review; Date of referral criteria review; Date of communication plan review; Date of second in-person meeting held; Date written implementation plan complete
    Involved Agents: System Leader; Agency

Stage 4: Staff Hired & Trained
    Activities: Date service provider selected; Date agency checklist completed; Date 1st staff hired; Date clinical training held; Date Foster Parent training scheduled; Date Foster Parent training held
    Involved Agents: Agency; Practitioner

Stage 5: Adherence Monitoring Processes in Place
    Activities: Date fidelity data tracking system training scheduled; Date data tracking system training held; Date of 1st program administrator call; Date site consultant assigned to site
    Involved Agents: Practitioner; Client

Stage 6: Services and Consultation Begin
    Activities: Date of first placement; Date of first consult call; Date of first clinical meeting video reviewed; Date of first foster parent meeting video reviewed; Date of second placement
    Involved Agents: Practitioner; Client

Stage 7: Ongoing Services, Consultation, Fidelity Monitoring and Feedback
    Activities: Dates of site visits; Date of first implementation review; Date of second implementation review; Date of final program assessment
    Involved Agents: Practitioner; Client

Stage 8: Competency
    Activities: Date of pre-certification review; Date of certification application; Date certified
    Involved Agents: System Leader; Agency; Client

Note. Not all stage activities are necessarily completed, and activities are not necessarily completed in a linear fashion.

Background

Multidimensional Treatment Foster Care (MTFC) is an evidence-based practice developed as an alternative to group or residential out-of-home placement for the treatment of youth with severe behavioral and mental health problems in foster care (Chamberlain, 2003). MTFC is rated as a model program by Blueprints for Violence Prevention, was recently named a Top-Tier practice by the Coalition for Evidence Based Policy (2009), and has been rigorously evaluated in a number of randomized trials (Chamberlain, Leve, & DeGarmo, 2007; Chamberlain & Mihalic, 1998; Chamberlain & Reid, 1998; Leve & Chamberlain, 2007). MTFC is implemented in locally recruited foster homes by a team that includes a supervisor, two therapists, and other part-time staff (e.g., a foster parent recruiter/trainer and skills coaches for youth). An MTFC team serves 10–12 youth for an average length of stay of 6–9 months.

As MTFC started to receive national recognition, more early-adopting communities approached and worked with the purveyors to bring MTFC to their communities. Currently, more than 70 sites internationally have adopted MTFC, with a greater number having approached the purveyors but not successfully followed through on implementation. By engaging in the recursive implementation process and by interacting with other developers and purveyors of evidence-based practices, the MTFC developers have gained invaluable knowledge that has helped define the MTFC implementation protocols. The SIC is intended to indicate the steps that are helpful in moving sites toward successful implementation of MTFC; however, which stages and activities are necessary, and how they might help predict successful implementation, remain unknown. Answering these questions has been limited by the lack of a validated method for measuring implementation activities.

The focus of this paper is twofold. First, analyses examine whether behavior in the early implementation stages predicts successful program start-up. Using the SIC, the time-to-completion rate in the first three stages (Engagement, Consideration of Feasibility, Readiness Planning) and the proportion of activities completed within each of those stages were used to predict counties' success in reaching service delivery (Stage 6). These predictions have potential implications for system leaders and policy makers for understanding not only whether their community is a good match for MTFC, but also where they should direct their efforts in the early stages of implementation to optimize their potential for a successful start-up. Second, this paper is the first to examine the predictive validity of the SIC.

Method

Participants

Procedures for this study were part of an ongoing large-scale randomized implementation trial of the MTFC model in California and Ohio (Chamberlain: PI). Prior to this study, the California Institute of Mental Health extended a general invitation for all California counties to receive training in MTFC. At that time, 9 of the 58 counties elected to participate; these early-adopting counties were excluded from the current study. In addition, 8 other counties were excluded that had a low "need" for MTFC, defined as having fewer than 6 entries into group care (i.e., the target population for the MTFC model), as measured on two snapshot days from the 2004 calendar year (the latest year for which data were available at the start of the study). The remaining California counties were targeted for recruitment into the study, as were three sites within Los Angeles County. Three years into the study, the project was extended to Ohio to include more counties in the sample. Similar to the California procedures, counties that had previously adopted MTFC were excluded, as were those that did not have a high enough need for MTFC. Of the 88 Ohio counties, 39 were identified as having a high enough need for the intervention; however, one had previously implemented MTFC. Of the remaining 38 counties, 23 were invited to participate using a rolling invitation method until a sample of 11 was obtained. This process resulted in a combined total of 53 participating sites from California and Ohio.

Recruitment

Introductory letters were sent to each of the county system leaders who were in a position to consider implementing a new evidence-based practice for youth in their community (i.e., in child welfare, juvenile justice, or mental health). The letter briefly described the evidence base for MTFC, explained the purpose of the study, and stated that their county had an opportunity to participate in a staged rollout of MTFC. The letter also informed system leaders that they would be provided with implementation funding, including all training and travel costs for MTFC program staff, if their county elected to participate. Further, the directors were informed that their county would be randomly assigned to one of two methods of implementing MTFC: (1) participate with up to 6 other counties in a Community Development Team (CDT), or (2) work individually (IND) with trainers to implement the model. Counties also were informed that, because of the large number of counties involved, implementation start dates would be staggered and counties would be randomly assigned to one of three timeframes (cohorts) for participation, spaced 12 months apart. Two weeks after the introductory letter was sent, system leaders were sent a second letter with an appended consent form and were informed of their assignment to condition and cohort. After receiving appropriate study information, system leaders were asked to provide consent to participate and were told that by consenting they were agreeing to consider implementing MTFC, not formally committing to implement the model. A study recruiter followed up via telephone to address questions and encourage system leaders to consent to participate. If a system leader had already signed the consent prior to being contacted by the recruiter, this time was used to answer any outstanding questions.

Measure Development

During the first several years of the study, the SIC evolved from 12 to 8 stages. The 8 stages move from Engagement in the decision to implement an EBP (Stage 1) to practitioner Competency in reaching performance-based certification criteria (Stage 8).

Completion of the multiple stages occurs across different systemic levels, as shown in Table 1. Initially, the county system leaders are involved in the decision of whether or not to bring an EBP into their community, followed by consideration of whether it is feasible to support an EBP in the community. Once system leaders determine that it is indeed feasible to support the EBP, the Readiness Planning stage activities commence, including creating a timeline and a cost/funding plan. After Stage 3 is completed, the focus shifts from system leaders to agency practitioners. In Stage 4, staff are hired and trained on the various components of the EBP. Next, in Stage 5, the fidelity monitoring process is established, including conducting any necessary trainings and assigning an expert consultant to the agency. In Stage 6, cases are opened and services and consultation begin, such that completion of this stage involves not only the practitioner but also referred clients (youth and families). Similarly, in Stage 7, model fidelity, staff competence, and adherence are tracked at both the practitioner and client levels. Finally, at Stage 8, competency is assessed, which involves all three levels (system, practitioner, client).

Two scores are calculated for each stage on the SIC. First, the amount of time that a county/agency spends in a stage is calculated from the date of entry into the stage through the date of the final completed activity (the Duration Score). Second, the proportion of activities completed within a stage is calculated (the Proportion Score). A county therefore might complete a stage quickly but not complete all of the activities within that stage. Including both the duration and proportion scores allows for an evaluation of which is most important for successful implementation.
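
To make these two scores concrete, here is a minimal sketch of how they could be computed for a single site's stage record. The data structure, the invented dates, and the convention of measuring duration from the first recorded activity (rather than from a separately logged stage-entry date) are illustrative assumptions, not the SIC's actual coding scheme.

```python
# Hedged sketch of the SIC Duration and Proportion scores. The dictionary is a
# hypothetical Stage 3 (Readiness Planning) record for one site, with None
# marking activities that were never completed.
from datetime import date

stage3 = {
    "cost/funding plan review": date(2007, 3, 1),
    "staff sequence, timeline, hire plan review": date(2007, 3, 20),
    "foster parent recruitment review": None,
    "referral criteria review": date(2007, 4, 2),
    "communication plan review": None,
    "second in-person meeting": date(2007, 4, 15),
    "written implementation plan": None,
}

def duration_score(stage):
    """Days spent in the stage. The SIC counts from the date of stage entry;
    using the earliest completed activity here is a simplifying assumption."""
    dates = [d for d in stage.values() if d is not None]
    if not dates:
        return None  # stage never entered
    return (max(dates) - min(dates)).days

def proportion_score(stage):
    """Fraction of the stage's activities that were completed."""
    completed = sum(1 for d in stage.values() if d is not None)
    return completed / len(stage)

print(duration_score(stage3))              # 45 (days)
print(round(proportion_score(stage3), 2))  # 0.57 (4 of 7 activities)
```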

Study Rationale

This study is a first step toward evaluating the validity of the SIC for measuring implementation processes and predicting successful implementation outcomes. The focus of this first step is on whether the initial stages, those that involve the system leaders in the implementation, predict whether a county successfully commences a program, as evidenced by the placement of a child into an MTFC home. This first placement represents a milestone event: the start of MTFC services. The 3 initial stages examined are shown in Table 1 and include Engagement, Consideration of Feasibility, and Readiness Planning. The outcome variable (the milestone event of MTFC services beginning) is the time to successful start-up of an MTFC program, as indicated by placement of a youth (measured in Stage 6).

Results

Analytic Strategy

To determine whether there were similarities among counties that predicted their success in starting an MTFC program, an agglomerative hierarchical clustering method was used (Kaufman & Rousseeuw, 1990). Each site initially was considered a small cluster by itself; clusters then were merged iteratively, combining the two most similar clusters at each step, until only one large cluster containing all the sites remained. Euclidean distance was used to measure the similarity of any two sites based on their standardized characteristics (i.e., proportion of activities completed, duration of time spent completing stages, or both), where the distance between two clusters is the average of the distances between the points in one cluster and the points in the other. Two sets of clusters were examined: one based on the proportion of activities completed in the first three stages, and one based on the length of time (duration) taken to complete them. A Cox proportional hazards survival model (Cox, 1972) was then employed with days to first placement as a time-to-event outcome. The "hazard" under a Cox proportional hazards model is interpreted as the instantaneous probability of a site completing the first placement in the next small interval of time, and the model describes how the underlying hazard varies in response to cluster membership. When the hazard ratio (HR) for a specific cluster is greater than one, the first placement occurs faster than in the reference cluster; when the HR is less than one, the first placement occurs more slowly. To avoid "double counting" the initial days as both a predictor and an outcome, days to first placement was counted from the date of completion of the last Stage 3 activity in each site.
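
For readers who want to see the shape of this pipeline, the sketch below pairs average-linkage agglomerative clustering (scipy) with a Cox proportional hazards fit (the lifelines package) on invented data. The feature names, sample values, three-cluster cut, and choice of software are illustrative assumptions, not the study's data or exact tooling.

```python
# Hedged sketch: cluster sites on standardized SIC characteristics, then use
# dummy-coded cluster membership to predict days to first placement with a Cox
# proportional hazards model. All values below are randomly generated.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 53  # number of sites in the trial

# Hypothetical per-site features: proportion of activities completed in each
# of the first three SIC stages.
features = pd.DataFrame({
    "prop_stage1": rng.uniform(0, 1, n),
    "prop_stage2": rng.uniform(0, 1, n),
    "prop_stage3": rng.uniform(0, 1, n),
})

# Agglomerative clustering with Euclidean distance and average linkage,
# cutting the tree to retain three clusters as in the paper.
Z = linkage(zscore(features.values), method="average", metric="euclidean")
cluster = fcluster(Z, t=3, criterion="maxclust")

# Hypothetical survival data: days from completion of the last Stage 3
# activity to first placement, with an event indicator for sites that
# achieved a placement during the study period.
df = pd.DataFrame({
    "days_to_placement": rng.integers(30, 700, n),
    "placed": rng.integers(0, 2, n),
    # Dummy-code clusters with Cluster 1 as the reference group.
    "cluster2": (cluster == 2).astype(int),
    "cluster3": (cluster == 3).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_placement", event_col="placed")
cph.print_summary()  # the exp(coef) column gives the HRs vs. Cluster 1
```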

Proportion

In the first set of analyses, clusters related to the proportion of activities completed in Stages 1, 2, and 3 were examined as the classification variables. After initially considering each site as a unique cluster, 10 new clusters were formed by merging the sites closest to each other. This clustering strategy continued iteratively until all sites were merged into a single, large cluster. Through this process, three distinct clusters were identified in the second-to-last step (the distances among these three clusters were greater than 2.6; the distances among clusters from previous steps were less than 1.8). Cluster 1, containing 25 sites, included those that had completed a large proportion of activities in all three stages (mean = 78.9%, SD = 11.2%). Cluster 2, containing 23 sites, included sites that had completed fewer activities overall (mean = 42.6%, SD = 8.2%). The final small cluster, containing only 5 sites, included those that completed only a minimal number of activities and mostly did not complete Stages 2 and 3 (mean = 18%, SD = 3%).

Using a Cox proportional hazards model, cluster membership was used to predict days to first placement, with Cluster 1 as the reference group. Results indicated that sites that completed fewer activities in the first three stages (i.e., Cluster 2) had a significantly lower "hazard" of the first placement occurring (hazard ratio (HR) = 0.205, p = 0.01) than those that completed more activities. As expected, Cluster 3 had the lowest hazard.

Neither implementation condition (p = 0.43) nor cohort (p = 0.55) contributed significantly to the model of days to first placement; therefore, to help address sample size limitations, these variables were not included in the final model.

Duration

Next, the duration of time spent completing Stages 1, 2, and 3 was used as the classification variable. To aid clustering, duration was categorized into 4 levels (Level 1: 0–31 days; Level 2: 32–365 days; Level 3: ≥366 days; Level 4: missing). As in the proportion analysis, three clusters were obtained using the same iterative clustering process. Cluster 1, containing 26 sites, completed the three stages at a relatively quick pace (mean days = 54.5, SD = 105.5). Among the 20 Cluster 2 sites, only one site completed Stage 3 (316.7 days), with the remaining 19 completing only the first two stages. The last cluster contained seven sites that did not complete the stages.
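
A minimal sketch of this four-level duration coding, assuming per-stage durations in days with missing values standing in for stages never completed (the values below are invented):

```python
# Hedged sketch: map per-stage durations (in days) onto the four levels
# described above; NaN represents a stage that was never completed.
import numpy as np
import pandas as pd

durations = pd.Series([12, 45, 400, np.nan, 210])  # hypothetical site data

def duration_level(days):
    """Level 1: 0-31 days; Level 2: 32-365; Level 3: >=366; Level 4: missing."""
    if pd.isna(days):
        return 4
    if days <= 31:
        return 1
    if days <= 365:
        return 2
    return 3

print(durations.map(duration_level).tolist())  # [1, 2, 3, 4, 2]
```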

Using a Cox proportional hazards model, cluster membership was used to predict the days-to-first-placement outcome, with Cluster 1 as the reference group. Sites that took longer to complete the first three stages (i.e., Cluster 2 sites) had a significantly lower "hazard" of the first placement occurring (HR = 0.069, p = 0.01) than the sites that completed the stages more quickly.

Similar to the proportion model, neither implementation condition (p = 0.23) nor cohort (p = 0.56) contributed significantly to the model of days to first placement, and these variables therefore were not included in the model.

Proportion and Duration Combined

A final analysis examined whether both the number of completed activities and the time spent completing them predicted the time to first placement. Cluster 1, containing 23 sites, included those that had completed a large proportion of activities in all three stages at a relatively fast pace. The average proportion of activities completed over all three stages for Cluster 1 sites was 80.5% (SD = 10.4%), and the average duration spent on each stage was 116.7 days (SD = 109.5 days). Cluster 2, containing 22 sites, included sites that had completed fewer activities overall (mean = 44.5%, SD = 8.9%); 18 of them did not complete Stage 3. The final small cluster, containing only 8 sites, included those that completed only a minimal number of activities and mostly did not complete Stages 2 and 3. Sites that both took longer to complete each stage and completed fewer activities had a significantly lower "hazard" of having their first placement within the study period (HR = 0.190, p = 0.01) than Cluster 1 sites.

Consistent with the previous models, neither implementation condition (p = 0.33) nor cohort (p = 0.65) contributed significantly to the model of days to first placement, and these variables therefore were not included in the model.

Discussion

This is the first set of analyses to examine the validity of the SIC as a predictor of implementation outcomes. Given the ongoing nature of the trial for which the SIC was developed, only data for the initial stages of the measure had been fully collected at the time of this analysis. Therefore, rather than examining the full range of outcomes measured by the SIC (i.e., through Stage 8, measuring Competency), this analysis focused on the success of sites in achieving the milestone event of commencing an MTFC program, as indicated by placement of the first youth in the program (Stage 6). Sites' implementation behavior in the first three stages, when the system leaders are most involved in the implementation process, was used as the predictor. As expected, clusters were divided into those sites that completed many implementation activities and those that did not, and those that moved quickly through the activities and those that lagged. Both the proportion of activities completed and the duration of time spent in the first three stages predicted successful start-up of services.

Outcomes from these analyses add to the slowly growing knowledge base of implementation science. Regardless of implementation condition, sites that completed a greater number of implementation activities, and completed them in a timely manner, were more likely to successfully start up a program. While seemingly obvious, this analysis is the first known observational study to formally examine whether system leaders' behavior, with regard to participating in engagement, consideration of feasibility, and readiness planning, is predictive of successful program start-up. The finding that randomization to cohort and timeframe for beginning the implementation process did not contribute significantly to the models indicates that this result does not depend on when in time implementation occurred. Sites that participated in these first three pre-implementation stages but did so slowly or incompletely were less likely to have spent their time and resources well. These findings suggest that sites interested in implementing MTFC will be more likely to succeed if the initial implementation stages are completed in a swift and thorough manner.

Although not collected systematically, qualitative data gathered from the contact logs with system leaders provide some information related to decision-making in the first three stages. When system leaders provided reasons for either delaying activities or discontinuing them, they cited factors such as budgetary problems in the county, turnover of system leaders, or a focus on different priorities. It should be noted that in the cases of turnover, attempts were always made to re-engage with the new system leader; these attempts were successful in some cases but not in others. Although likely not an exhaustive list of reasons for system leader behavior, these justifications point to factors that the evidence-based practices are not likely to be able to address directly. Rather, these decisions are engendered by external system-level variables, described by Aarons, Hurlburt, and Horwitz (2010) as the "Outer Context," and suggest that communities with fragile systems are likely to struggle to develop new programs.

Outcomes from this study also provide important information about the development of the SIC measure. The analyses conducted to cluster sites achieved results that could be expected based on the proportion of activities completed and the duration of time spent completing them. The use of data from Stages 1–3 to predict program success helps establish the predictive validity of the SIC for implementing the MTFC model. The usefulness of the SIC as a measure for observing and predicting implementation success with other EBP models should be examined in future studies. For the field of implementation science to move forward, it is imperative that standardized measures of implementation such as the SIC be developed. Such measures are necessary to determine which behaviors or activities are needed for successful program implementation. This information, once garnered, can ultimately be provided to consumers to inform the recursive implementation process and decision-making about whether, or when, to proceed with implementation activities toward adopting a practice. Moreover, understanding which implementation activities are necessary for program success has potential benefit for consumers and developers in navigating the tricky processes of allocating resources for, and adopting, an EBP.

Several limitations of this study should be noted. First, the evaluation of the SIC was conducted on a single EBP, and at this point it is premature to generalize these findings to practices other than MTFC. Second, although this is the largest known randomized implementation trial at the county level to date, the size of the sample limits the power to detect more complex interactions of proportion and duration. Third, although these analyses successfully predicted the amount of time taken to successfully start an MTFC program, it is unknown whether the SIC will be useful in predicting successful behavior at other stages of the implementation process (e.g., during implementation and sustainability). Finally, a full evaluation of the psychometric properties of the measure has yet to be completed, and its reliability therefore requires further examination. Nevertheless, this evaluation of the SIC suggests promise for the measure to fill a gap in the implementation science literature with regard to conducting systematic, observation-based assessment of implementation behavior, which is necessary to fulfill the need for ongoing implementation research.

Acknowledgements

This study and manuscript preparation were funded by NIMH grant R01MH076158 and NIDA grants K23DA021603 and P30DA023920. The authors thank all of the participating counties for their time and contribution to this study, and Michelle Baumann for her editorial assistance.

Footnotes

Disclosure. Chamberlain is a partner in Treatment Foster Care Consultants, Inc., a company that provides consultation to systems and agencies wishing to implement MTFC.

Contributor Information

Lisa Saldana, Center for Research to Practice, Eugene, Oregon.

Patricia Chamberlain, Center for Research to Practice, Eugene, Oregon.

Wei Wang, Dept. of Epidemiology and Biostatistics, College of Public Health, University of South Florida.

C. Hendricks Brown, Center for Family Studies, Department of Epidemiology and Public Health, University of Miami Miller School of Medicine.

References

1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2010. doi:10.1007/s10488-010-0327-7
2. Blasé KA, Fixsen DL, Duda MA, Metz AJ, Naoom SF, Van Dyke MK. Implementation challenges and successes: Some big ideas. Presented at the Blueprints for Violence Prevention Conference, San Antonio, TX; 2010.
3. Chamberlain P, Brown CH. Observational measure of implementation progress: The Stages of Implementation Completion (SIC). 2010. Manuscript submitted for publication.
4. Chamberlain P. The Oregon multidimensional treatment foster care model: Features, outcomes, and progress in dissemination. In S. Schoenwald & S. Henggeler (Series Eds.), Moving evidence-based treatments from the laboratory into clinical practice. Cognitive and Behavioral Practice. 2003;10:303–312.
5. Chamberlain P, Mihalic SF. Multidimensional Treatment Foster Care. In: Elliott DS, editor. Book eight: Blueprints for violence prevention. Boulder, CO: Institute of Behavioral Science, University of Colorado at Boulder; 1998.
6. Chamberlain P, Leve LD, DeGarmo DS. Multidimensional Treatment Foster Care for girls in the juvenile justice system: 2-year follow-up of a randomized clinical trial. Journal of Consulting and Clinical Psychology. 2007;75:187–193. doi:10.1037/0022-006X.75.1.187
7. Chamberlain P, Reid J. Comparison of two community alternatives to incarceration for chronic juvenile offenders. Journal of Consulting and Clinical Psychology. 1998;66:624–633. doi:10.1037/0022-006X.66.4.624
8. Coalition for Evidence Based Policy. 2009. http://evidencebasedprograms.org/wordpress/
9. Cox DR. Regression models and life tables (with discussion). Journal of the Royal Statistical Society, Series B. 1972;34:187–220.
10. Fixsen D, Blase K. Implementation: The missing link between research and practice. NIRN Implementation Brief, 1. Chapel Hill: The University of North Carolina; 2009.
11. Fixsen DL, Blase KA, Duda MA, Naoom SF, Van Dyke MK. Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future. In: Weisz JR, Kazdin AE, editors. Evidence-based psychotherapies for children and adolescents. 2nd ed. New York: Guilford Press; 2010.
12. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005.
13. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health. 1999;89:1322–1327. doi:10.2105/ajph.89.9.1322
14. Horwitz SM, Landsverk J. Methodological issues in child welfare and children's mental health implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2010. doi:10.1007/s10488-010-0316-x
15. Kaufman L, Rousseeuw PJ. Finding groups in data: An introduction to cluster analysis. New York: Wiley; 1990.
16. Leve LD, Chamberlain P. A randomized evaluation of Multidimensional Treatment Foster Care: Effects on school attendance and homework completion in juvenile justice girls. Research on Social Work Practice. 2007;17:657–663. doi:10.1177/1049731506293971
17. Mihalic S, Fagan A, Irwin K, Ballard D, Elliott D. Blueprints for violence prevention. Washington, DC: US Department of Justice, Office of Justice Programs, OJJDP; 2004.
18. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36:24–34. doi:10.1007/s10488-008-0197-4
19. Schoenwald SK, Garland AF, Chapman JE, Frazier SL, Sheidow AJ, Southam-Gerow MA. Toward the effective and efficient measurement of implementation fidelity. Administration and Policy in Mental Health and Mental Health Services Research. 2010. doi:10.1007/s10488-010-0321-0
