Abstract
Objective
Population-based demands for trauma services have accelerated interest in rapid deployment of efficacious interventions to address the diverse mental health consequences of traumatic experiences. However, optimal strategies for supporting either implementation or dissemination of trauma-focused interventions within healthcare or mental healthcare systems are underdeveloped.
Methods
This paper offers suggestions for adapting treatment research parameters in order to advance the science on the implementable and practical use of trauma-focused interventions within a public health framework. To this end, we briefly examine the current status of the research evidence in this area and discuss efficacy and effectiveness treatment research parameters with specific attention to the implications for developing the research base on implementation and dissemination of effective trauma practices for children.
Results
Examples from current studies are used to identify approaches for developing, testing, and enhancing strategies to roll-out effective treatment practices in real-world settings.
Conclusions
New approaches that reflect the contexts in which these practices are implemented may enhance the feasibility, acceptability, replicability, and sustainability of trauma treatments and services and thus improve outcomes for a broader population of youth and families.
A large proportion of America’s children and youth are exposed to traumatic experiences and suffer from significant clinical sequelae.1,2 For example, youth who experience multiple forms of abuse often present with a difficult clinical history that includes both personal and family challenges.3 Funded, in part, by the National Child Traumatic Stress Network (NCTSN; www.nctsnet.org) in response to the events of September 11, 2001,4 several efficacious treatment approaches and specific methods have been developed to address the clinical consequences documented following these and other traumatic experiences. These approaches, largely based on cognitive behavioral treatment (CBT) frameworks, have garnered enormous public interest from states,5 policy-makers,6,7 and federal agencies (NIH, SAMHSA, the VA) because of the growing demand to quickly address the consequences of disasters, wars, and terrorism on large populations internationally. Indeed, these developments have encouraged trauma treatment developers to scale up implementation and dissemination efforts that can ameliorate symptoms and improve the quality of life of the nation’s young people who are exposed to an array of traumatic events (see www.tfcbt.musc.edu).4,8–10
The interest in rapid deployment of efficacy-based interventions has led to a new generation of studies that focus on enhancing the feasibility and acceptability of these interventions, as well as to the development of practical dissemination tools (e.g., web-based training, learning collaboratives). However, other considerations needed to accelerate responses to population-based demands for trauma interventions have received less attention. These gaps include evaluation of cost-effectiveness, mechanisms of action within efficacious treatments, replicability, organizational readiness for uptake, project leadership, implementation quality, and sustainability. Studies of the dissemination and implementation of efficacious interventions in other areas (e.g., depression, anxiety, bipolar disorders, conduct disorders, and prevention), hereafter referred to as “evidence-based practices” (EBP), are only now beginning to be reported in the literature.11–13 In fact, the field of trauma research has by necessity moved quickly into studies of dissemination and implementation and arguably may be ahead of other disorder-based research in identifying key dissemination-implementation issues. However, no definitive studies exist on optimal strategies to improve the dissemination and implementation (D-I) of trauma treatments; as a result, most efforts to diffuse these treatments remain hit-or-miss.
The purpose of this paper is to suggest approaches to guide a more sophisticated research agenda on the implementable and practical use of trauma-focused interventions (e.g., treatments, services, practices) within a public health framework. To this end, it is important to examine the current status of the research evidence in this area and to identify directions that might help the field to develop, test, and enhance the effectiveness of existing treatment methods in real-world settings. This perspective can improve the public health reach of effective models by improving the brevity, simplicity, and overall impact of treatment.
Findings from Initial Efficacy Research
Evidence regarding the current status of trauma treatment research in children and youth can be found in a recent meta-analysis of early efficacy studies.4 This review is noteworthy for its inclusion of studies dealing with a broad range of traumatic experiences, its methodological rigor, and its examination of a broad range of studies on numerous methodological criteria. In general, the results showed that treatments for youth trauma produced modest benefits, with an average effect size (ES), based on Cohen’s d, of 0.50 for posttraumatic stress symptoms after CBT. For sexual abuse, the most frequent type of abuse treated in this review, the ES was also modest (d = .46). Most of the ESs were in the medium range (d’s = .43–.09). CBT was found to be more effective than non-CBT methods for various outcomes (e.g., PTS, depression, anxiety, externalizing behavior). These observations led Silverman et al.4 to conclude that the results of these studies provided the “suggestion that early intervention is efficacious” (p. 177). In addition, they provided an extensive set of methodological recommendations for enhancing the quality of treatment research in this area (e.g., improve study design, sample size, and analytic rigor; determine mechanisms of action and for whom an intervention works; document the breadth and stability of outcomes).
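For readers unfamiliar with the metric, the pooled-standard-deviation form of Cohen's d that underlies such meta-analytic summaries can be sketched as follows. The group statistics in this example are hypothetical, not values from the cited review:

```python
import math

def cohens_d(mean_ctrl, mean_tx, sd_ctrl, sd_tx, n_ctrl, n_tx):
    """Cohen's d: standardized mean difference using a pooled SD."""
    pooled_sd = math.sqrt(((n_ctrl - 1) * sd_ctrl**2 + (n_tx - 1) * sd_tx**2)
                          / (n_ctrl + n_tx - 2))
    return (mean_ctrl - mean_tx) / pooled_sd

# Hypothetical post-treatment PTSD symptom scores (lower = better):
# control mean 25, treatment mean 20, common SD 10, n = 50 per arm.
d = cohens_d(25, 20, 10, 10, 50, 50)
print(round(d, 2))  # 0.5, a "medium" effect by conventional benchmarks
```

A d of 0.50 thus means the average treated youth scored half a pooled standard deviation better than the average control youth.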
Clearly, basic efficacy and effectiveness research is still needed to enhance knowledge about basic mechanisms of action and to support the development of clinical practice guidelines. Yet there is also a need for studies on strategies to enhance the uptake of effective trauma treatments, especially given the public health pressures to respond quickly, efficiently and broadly to populations experiencing traumatic events.
Moving Beyond Efficacy to Effectiveness, Implementation and Dissemination
The terms efficacy, effectiveness, diffusion, dissemination, and implementation as used in this paper refer broadly to genres of studies that share common elements of scientifically defensible and rigorous designs and methods but that differ in terms of purpose, questions to be answered, and specific ingredients of the study (e.g., location, recruitment criteria, who delivers the intervention, etc). Efficacy studies generally refer to studies using controlled experimental designs with specific intent to assess comparative outcomes associated with delivery of a therapy, treatment or structured intervention, whereas effectiveness studies generally use similar rigorous designs but examine comparative outcomes in settings, with populations, and with elements of delivery more naturalistically akin to routine practice.
Previous typologies14,15 have provided useful definitions for adoption, diffusion, dissemination, and implementation studies. Adoption is the decision by a provider or system to learn and implement an intervention. Studies of adoption typically focus on provider acceptability, attitudes, and a system’s or agency’s prior experience taking up new practices. Diffusion studies examine ‘translation’ efforts that are relatively unplanned, where the goal is simply to promote awareness; examples include journal or newsletter publications, information on a website, or coverage in the mass media. Dissemination studies target interventions that use more intentional strategies, such as direct mailings of results to intended audiences, workshops, and conferences; the goal is generally attitude change. Implementation studies focus on interventions that are even more active, with the intent of adding behavior change to awareness and attitude change, such as evaluations of the use of opinion leaders, audit and feedback or reminder procedures, and administrative or economic interventions. These efforts to promote a broader and more practical research agenda on trauma-focused interventions require further examination of the effectiveness, diffusion, dissemination, and implementation of EBPs, as well as attention to a few key issues related to this agenda. The need to develop methods to facilitate the transportability of efficacious treatments, along with several methods to enhance practitioner training and competence in a given EBP, has been articulated in a recent review article by McHugh and Barlow (2010).
An important implication of the focus on adoption, diffusion, dissemination, and implementation studies is that it reflects a clearer emphasis on the public health relevance of research findings for broader populations. This is articulated in the NIMH’s strategic plan, Strategic Objective 4: Strengthen the Public Health Impact of NIMH-Supported Research, which focuses on increasing the public health impact of new research. By pursuing this objective, the NIMH intends “to help close the gap between the development of new, research-tested interventions and their widespread use by those most in need” (p. 1).9 The sub-objectives of Strategic Objective 4 clarify that the NIMH recognizes that research on matters of access, quality, cost, dissemination, and implementation of interventions is critical, and that this work must be pursued in partnership with mental health stakeholders outside of the research community in order to advance public health impact. Certainly, public health impact could be served by contributing population-based knowledge regarding clinical epidemiology of service use or contextual factors indigenous to everyday practice settings (e.g., type, cost, quality) that merit incorporation in an EBP.16 Such information would extend prior studies of families served by child welfare that have documented service needs, use, and barriers,17,18 the prevalence of trauma in a post-disaster context,19,20 and the prevalence of self-reported PTS symptoms in youth,21 and thus might enhance the validity of community-based treatment trials for victims and offenders of violence.
Absent until recently have been frameworks for modeling the translation of efficacy trials or clinical epidemiological studies to broader populations in the real world. However, several relevant models exist. One is derived from the work of Fixsen, Naoom, Blase, Friedman, and Wallace.22 Applicable to implementation, dissemination, and deployment research, this framework identifies six stages of implementation: exploration/adoption, program installation, initial implementation, full operation, innovation, and sustainability. Another very relevant framework is PRISM—Practical, Robust Implementation and Sustainability Model from Feldstein and Glasgow23—which describes a multi-level implementation model that incorporates components of the implementation process (i.e., program elements, factors in the external environment, organizational context, and recipient perspectives). PRISM uses concepts from quality improvement, chronic care models, and the diffusion of innovations to guide implementation studies. The PRISM model provides a tool for researchers and health care decision makers that integrates concepts from implementation-dissemination science designed to support the translation of research into practice. The Interactive Systems Framework (ISF) for Dissemination and Implementation is another model that describes three systems relating to the delivery and support of preventive interventions in community settings.24 An important feature of these heuristic models is that they articulate several processes or phases that can organize an overall approach to conducting effectiveness research in real-world settings. But they also raise basic science questions regarding each of their components or phases.
For example, the installation of efficacious or effective trauma interventions within schools, clinics, or general health settings requires attention to different contextual demands that may affect the extent to which adoption occurs, the feasibility of installation, and/or the sustainability of the intervention. Of course, successful completion of each process or phase also requires attention to several practical considerations (procedures) that can facilitate research in the effectiveness treatment context, in effect, by following some suggested “how-to’s” in each process or phase.
As an example of the need to articulate and address these practical considerations, consider some emerging evidence from the effectiveness context. Recent research suggests that when EBPs are transported from the controlled setting of a clinical trial to the community, effect sizes fall considerably.25 There may be several reasons for this attenuation in impact: the treatment may not work as well in real-world clinical settings with more typical clients who present with numerous comorbidities, or the complexities inherent in diverse community settings may require extensive adaptations of the EBP.26 Both explanations imply the plausibility of several practical solutions at the individual, treatment, or organizational levels, such as: 1) provide clinicians and supervisors with more extensive training and ongoing support in the EBP, 2) collect and share additional information on clinical processes and proximal outcomes with the clinician and family, and/or 3) examine the level of adherence to the EBP model and identify factors that impede it.27,28 Assuming for the moment that the solution involves combining EBPs with systematic feedback on treatment progress and process to provide clinicians with individualized information needed to tailor therapy to the client, a premium is then placed upon using feasible strategies to develop, implement, and embed these new treatment elements in an existing program. But regardless of the type of solution identified, the primary implication of this responsive flexibility is that community settings may require several key adaptations to facilitate the delivery of any EBP. Therefore, some practical suggestions and methods designed to facilitate attention to an integrated clinical and research agenda are needed to promote successful treatment effectiveness work.
Translating efficacy or effectiveness research into implementation and dissemination research
The development of a research agenda on trauma within a public health-inspired D-I science framework requires attention to new approaches, issues and methods. The methods, measures, designs, analytic plans, and basic conduct of the studies are shaped differently when conducted within this framework. This is by virtue of the broader public health goal of supporting installation of new practices within a broad network of providers and systems.
In Table 1 we outline ten study parameters, methodological strategies, and other approaches that characterize efficacy and effectiveness studies dealing with child/adolescent trauma. We suggest implications of each one for developing a research agenda to advance D-I science. These parameters include the “bread and butter” of research methods, such as having a well characterized sample, stable staff, and an intervention with documented fidelity. However, as Table 1 suggests, studies designed to improve the uptake, installation, and sustainability of effective practices also require attention to new contextual issues, many of which are typically ignored in most efficacy and even effectiveness studies.
Table 1.
From Efficacy to Effectiveness to Dissemination/Implementation: Implications for Improving the Reach of Research
| Efficacy Research Parameters | Effectiveness Research Realities | Dissemination-Implementation Implications |
|---|---|---|
| 1. Study initiation and impetus: flexible, grant-driven | Agency starts when it’s ready | Use collaborative planning with agency directors/staff |
| 2. Setting: controlled | less controlled, driven by service demands which change with funding/policies | Consider fit of project with funding/policy climate; create governance structure |
| 3. Design: use of rigorous randomization methods | Range of design alternatives | Consider range of designs to maximize generalizability |
| 4. Sample: clearly defined, homogeneous | Heterogeneous, few selection criteria | Attend to representativeness of sample |
| 5. Staff: stable, well trained | Drift, contamination, turnover | Attend to staff buy-in, attitudes towards EBPs |
| 6. Engagement: Use of many incentives, high level of control | No resources, limited options | Develop retention strategies for staff as well as clients |
| 7. Treatment & training model: fixed | Model seen as added burden, may conflict with existing training | Develop feasible training and consultation/coaching models to support uptake of new EBP |
| 8. Fidelity: ongoing monitoring/feedback | Little/no supervision, supervisor conflicts | Develop brief assessments for ongoing monitoring and feedback purposes |
| 9. Outcomes: use of diverse assessment batteries over time | Outcome assessments infrequent or absent; few resources (time) | Measure process as well as outcomes of implementation |
| 10. Analytic issues: little missing data | Missing or delayed data, different informants | Model multi-level, complex and nested data to capture process and outcomes |
In the remainder of this paper we describe the types of issues that researchers embarking on broad dissemination or implementation studies on trauma might consider when designing their studies. We offer these suggestions in the spirit of accelerating the development of a knowledge base on effective dissemination and implementation strategies to advance trauma research.
1. Use collaborative planning with agency leadership responsive to service initiatives
In contrast to research grants, whose start dates are set by the receipt of a notice of award, the initiation of a D-I study will be influenced more by agency and system factors than by prior findings from traditional research studies. A major impetus for several recent projects has been either a major public health problem (e.g., a public health-related disaster) or a rapid shift in regulatory or fiscal policies affecting a large system of care (e.g., new incentives for a novel EBP or modality of treatment). The 9/11 and Katrina disasters required a rapid mobilization of clinical services at the local, regional, and state levels, whereas routine requests to enhance an agency’s capacity to deliver a new EBP may carry far fewer clinical demands and time constraints. Clearly, the influence of policy initiatives on an agency’s amenability to collaborate on a specific treatment study requires research flexibility and patience.
Regardless of the reason for initiation of a D-I effort, collaborative planning is perhaps the most essential element underlying the nine key principles outlined by Israel and colleagues for Community-Based Participatory Research (CBPR)29 that support successful research partnerships in diverse practice settings (e.g., recognition of community as a unit of identity with strengths and resources; collaborative, equitable, and ongoing involvement of all partners in all research phases; addressing health from an ecological perspective; dissemination of knowledge). Especially given the heightened responsiveness of providers to policies and regulations, it seems prudent to begin collaborative planning as early as possible. In cases where immediate implementation is not a necessity, it may be ideal to begin well before the start of a study, at least 1 to 2 years prior.
Of course, even if there is a need to respond rapidly, the gains in long-term investment, engagement, and sustainability of the effort will likely depend on the collaborative structures and planning processes that shape the research study and that occur early in the process. The aforementioned projects developed in the wake of the 9/11 and Katrina disasters could not have occurred without prior collaborative planning. Thus, exigencies of the post-Katrina environment led to implementation of CBT-focused interventions in school systems,30 exploration of models of population-wide screening and intervention including systems-level partners,8 and multi-agency quality improvement initiatives aimed at building capacity for appropriate care among impacted safety net agencies (www.reachnola.org), each of which may provide useful examples for project initiation. These efforts could be launched quickly because of extensive collaborative planning and networking efforts that provided a seedbed for the later implementation studies.
2. Balance project fit with funding/policy climate and create governance structure
Related to collaborative planning is the need to develop a shared and collaborative governance for the project (e.g., using parent and community advisory boards), to ensure that the study can remain relevant to and will fit with the dynamic fiscal, political, or policy changes.29 In extending the implications of the CBPR model,31 several recent applications highlight the role of ongoing, jointly administered project operational structures across a number of different clinical and research topics. For example, the CATS Project (Child and Adolescent Trauma Services) initiated a system of governance that linked state-wide administrators, academic researchers, and community practices over several years to integrate state policy directives with ongoing evaluation of process and outcomes and with a flexible protocol to respond to the practical exigencies of the local agencies participating in its CATS Consortium.32–34
The REACH NOLA Mental Health Infrastructure and Training (MHIT) Project involves more than 50 primary care providers, specialty providers, academic institutions, and nonprofit agencies in regional efforts to improve access to quality mental health care. The project is led by an Executive Committee of 12 academic and community partners who attend to equitable governance practices and resource distribution.35 In the Partnerships for Families (PFF) Project, a community advisory board (CAB) consisting of representatives of 10 participating community agencies, members of the treatment research team, and a community leader met for nearly one year prior to the formal start of the project and has continued to meet monthly to review key study implementation issues throughout the 5-year project.36 The CAB has been involved in virtually every decision relating to the objectives and methods of the study, from the study name and public relations messages to the use of assessment and intervention methods.
3. Consider alternative but acceptable designs to maximize generalizability
The research design commonly used in efficacy or effectiveness studies is the group or cluster randomized design with randomized assignment at the highest level. Studies designed to maximize diffusion of findings may need to employ alternative designs or even mixed designs. The cluster RCT design has two major limitations when applied to the multi-level settings, systems, or contexts that characterize D-I studies. First, the entire evaluation is subject to variance inflation (i.e., the design effect) when multiple nested levels are used. Second, it is difficult if not impossible to untangle the impact of the various components of the intervention at each level. Designs that can overcome the limitations of the group randomized design include split plot designs, commonly used in agricultural studies to accommodate the multi-level nature of the intervention,37 and non-randomized design options,38 which may improve the naturalistic fit of the research design with practice conditions. An example of the latter is the need-based assignment design (a.k.a. “risk-based assignment” and “regression-discontinuity (R-D)” designs)38–44 that has been used in evaluation and policy research, including the CATS study.32 This design requires a pre-intervention assessment that is administered to all participants to obtain their baseline need scores. Participants with baseline need scores exceeding a pre-specified threshold are offered high-intensity services (the experimental condition), while those below the threshold are offered low-intensity services (the comparison condition). Follow-up outcomes are then adjusted to account for the difference at baseline.
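The need-based assignment rule described above can be sketched in a few lines. This is a minimal illustration; the threshold value and need scores are hypothetical, not taken from the CATS study:

```python
def assign_condition(baseline_need, threshold):
    """Need-based (regression-discontinuity) assignment:
    participants whose baseline need score exceeds the pre-specified
    threshold receive high-intensity services (experimental condition);
    all others receive low-intensity services (comparison condition)."""
    return "high_intensity" if baseline_need > threshold else "low_intensity"

# Hypothetical baseline need scores for six participants, threshold = 30
scores = [12, 45, 30, 28, 51, 9]
assignments = [assign_condition(s, 30) for s in scores]
print(assignments)
```

Because assignment is a deterministic function of the observed baseline score, the analysis can model the outcome-by-need relationship on each side of the cutoff rather than relying on randomization.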
A third design option involves combining randomized and non-experimental evidence as an additional way to overcome certain contextual challenges (such as engagement or no-show problems often evident in service studies). This hybrid design can be used to accommodate participants’ preferences regarding their own treatment. Candidate participants are included in a separate nonrandomized trial based on their preferences regarding their own treatment.45 Addition of nonrandomized trial data to randomized trial data can enhance generalizability of the randomized trial results.46 These design options can take advantage of the stronger internal validity from RCTs and the enhanced generalizability from quasi-experimental trials.47 Marcus has developed statistical methods based upon combining data from a randomized trial with quasi-experimental data to generalize efficacy estimates to a target population of interest.46
A final example comes from a study (Bridging the Research Gap Effectively, BRidGE; NIMH 5R34MH71266-3)48 that utilized a single-subject multiple baseline design to examine fidelity to Trauma-Focused Cognitive Behavioral Therapy (TF-CBT). Baseline data collection during an initial “Treatment as Usual” (TAU) phase was followed by a second phase that provided exposure to didactic workshops and then a third phase in which ongoing clinical supervision in TF-CBT was provided. Therapist attrition and recruitment obstacles required additional cohorts for training and evaluation, so overall results are not yet available. However, the study’s conceptual hypotheses, that training effects would be greatest for supervision followed by the workshop, still have considerable empirical merit and highlight some of the potential benefits of studying different training methods using individual practitioners or families. It may be possible to test these hypotheses using briefer phases, as this set of training experiences represents a common and logical sequence for teaching complex EBP skills to community practitioners.
4. Attend to the representativeness of provider and patient samples
Perhaps most fundamental to the conduct of treatment effectiveness research is the effort to directly train existing agency practitioners in an EBP; yet little is known about the characteristics of indigenous providers and the settings in which they work. Accordingly, studies of the adoption, use, or sustainability of a trauma treatment need to carefully consider sample definition. Documentation of practitioner demographics, educational background, treatment role, orientation and practices, client characteristics, exposure to supervision and training, and other relevant mental health or work history experiences may help to shed light on unique attributes of the sample and potential moderators of training impact.49 Likewise, the use of standardized measures that capture the child, parent/caregiver, and family characteristics of the sample would provide helpful information about referral problem severity, clinical comorbidities, and competencies.
Unfortunately, many community programs use limited descriptive measures, if they use any at all. The lack of ongoing standardized assessments in community practice means that defining a sample and focusing services for the appropriate population necessitate attention to the types of assessments typically used in the community, the ability of community providers to integrate more evidence-based assessment practices into routine care, and the efficient targeting of treatment resources for the defined population most likely to benefit from them. The establishment of a system for collecting community metrics from all agencies participating in one large-scale EBP training program has yielded informative patterns about client need, EBP use, and agency adoption.50
5. Maximize staff and stakeholder “buy-in” and attitudes towards EBPs
One major difference between efficacy and effectiveness studies is that in the latter there is often an attempt to directly train existing agency practitioners in an EBP. However, little is known about the impact of such training on staff buy-in, much less on treatment use and impact. It is important to consider ways to directly motivate staff to participate, promote ongoing involvement, and encourage retention. Approaches may include shortening the training, providing incentives for involvement, and using technological supports (web-based learning modules, live consultation, learning collaboratives) to enhance skills acquisition and adoption. Other incentives include CEUs, enhanced case rates, gift certificates, and food.51 Web-based approaches such as self-paced training via the internet (see www.tfcbt.musc.edu) are widely used. Local champions or opinion leaders can become lead trainers and thus generate enthusiasm for adoption of an EBP.52
A related research need is to examine the role of supervisor or agency routines, as well as the organization’s mission and general characteristics, in EBP implementation. One agenda of Project BEST50 is to disseminate EBPs for childhood trauma and abuse in the state of South Carolina. This mission has been advanced by the development of educational materials and training methods for two important groups that have a bearing upon the delivery of clinical services: community practitioners/supervisors who deliver care, and system “brokers” who fund, solicit, or refer families to care. In both training tracks, efforts have been taken to maximize the fit between the training and new services being taught and the organizational characteristics of each agency. Likewise, the inclusion of a broker track clearly recognizes the important role that system-savvy referral sources can take in demanding that local agencies use existing EBPs.
6. Carefully select retention strategies for staff and consumers
In one recent study,53 EBP implementation with fidelity monitoring was itself found to reduce staff turnover, which may underscore the potential benefits of clear performance feedback. However, turnover is clearly a complex issue that will require other considerations. In larger systems implementation projects, for example, it may be critical to create local embedded expertise (via train-the-trainer approaches) to support training, coaching, and competency development locally. This can be done either at the agency level (better for rural settings) or across agencies (better for urban settings). The key here lies in convincing service purchasers (funders) to support this function. A second approach to promote EBP implementation involves using a technology-based solution (e.g., live coaching of sessions via internet-based teleconferencing equipment). This may be better for smaller-scale implementations, but it has the disadvantage of tying the agency to a purveyor indefinitely rather than developing its own local training and quality control capacity.54
Based on recent developments designed to clarify and simplify the treatment content of EBPs,55 efforts are being made to condense existing EBPs for trauma into their “core components”. For example, a recent dismantling study by Deblinger et al.56 examined the relative benefits of key types of content (cognitive restructuring vs. the trauma narrative) and the use of full vs. brief protocols (16 vs. 8 sessions) in ameliorating childhood PTSD. Initial results indicated clear benefits for the brief protocol. A similar investigation examining the components of Cognitive Processing Therapy (CPT) for female victims of interpersonal violence found that cognitive therapy alone was more effective than written accounts of the trauma alone.57 These studies may ultimately simplify the treatment model skills that need to be taught to both clinicians and clients.
Other treatment studies have begun to evaluate the mechanisms underlying trauma-focused treatment changes58 and ways to integrate both trauma-focused and delinquency-based intervention methods.59 Studies of this nature may eventually identify a small set of key ingredients that may both facilitate skills training and enhance clinical outcome in an efficient manner.
7. Apply feasible training and consultation/coaching models to support EBP uptake
Studies designed to install new practices in real-world systems have to attend to the training demands on line staff. This means ensuring that the models used to teach new clinical content (EBP) are flexible, brief, and fit within the timeframes allotted. Using approaches such as modular instruction, small groups of trainees, and protocol flexibility can make an EBP more suitable for real-world clients. Training practitioners to use focused content in a way that does not impose time or conceptual burden is often difficult when treatment manuals are highly complex and technical. At the same time, it is also important to build in local model adaptations that can be reviewed in advance and then tested, but perhaps only after the original model has been learned well.60
Another approach to promote EBP implementation involves using a technology-based solution (e.g., live coaching of sessions via internet-based teleconferencing equipment). This may be better for smaller scale implementations, but has the disadvantage of tying the agency to a purveyor indefinitely rather than developing its own local training and quality control capacity.54 The availability of other software applications (e.g., a SharePoint site) has also made it easier to rapidly and efficiently share video- or audio-taped session material for consultation purposes.36 These methods may facilitate efforts to efficiently receive ongoing training and consultation from an off-site collaborator.
Other potentially effective implementation models being tested involve developing a local cross-agency implementation team that takes on live coaching, training, and quality control within a multi-agency network.60 An additional model that has been used in prior applications involves the “train the trainer” (TTT)59 approach, in which recently trained staff become the instructors for new trainees. Professional networks, some of which are informal, have been in operation for several years to disseminate existing EBPs (e.g., TF-CBT)61 with child welfare populations. Further empirical study of the utility of a TTT model is certainly warranted.
Training in an EBP has come a long way since the 3-hour workshop, although the evidence for other methods is only beginning to emerge.62 One novel study focusing on teaching PCIT to community practitioners found that 2 days of experiential and didactic training were equally effective in increasing knowledge, skill, and satisfaction, but few participants demonstrated mastery of the skills.18 Additional adaptations of existing training routines have been conducted, including the use of a more extended learning collaborative or learning community, which involves the provision of clinical training over an extended period of time to a cohort of practitioners from the same program or agency.61 This method employs didactic, experiential, and integrity-feedback components to enhance learning, with subsequent booster training, ongoing case consultation, and system feedback to encourage skills adoption. These methods have been incorporated in an ongoing clinical trial to teach an EBP (Alternatives for Families: A Cognitive-Behavioral Therapy, or AF-CBT) to community practitioners across different types of agencies and programs.36 Similar approaches have been used in a state-wide CBT dissemination project in New York.63 Such efforts are clearly designed to embed the EBP in the routines of the setting, which may enhance its feasibility and relevance. These methods have been extensively applied by the NCTSN (www.nctsnet.org). Related programs have been conducted on a state-wide or regional basis using an on-site training phase followed by a phone consultation phase.32
Other related training developments include the use of internet delivery and other technologies designed to increase exposure to basic and advanced EBP skills. For example, TFCBT-web offers 8 hours of training in TF-CBT and gives the practitioner feedback based on specific training evaluation questions.61 Webinars are now commonly employed to teach a variety of advanced clinical treatment skills. In addition to the use of multimedia resources, several of these applications have incorporated adult learning principles designed to enhance the quality of instruction and promote retention. These and other internet-based instructional programs are advantageous because they make basic material easily available to a wide audience, but they are not equivalent to a full training in the model that focuses upon enhancing clinical competency with patients.
8. Develop brief assessments that can be used to provide ongoing feedback
Implementation studies depend on the development and use of brief, ongoing monitoring and assessment of a broad range of clinical problems. The feasibility of a given instrument may have as much of an impact on its application as its psychometric properties. For example, the NCTSN has reported the development of a 9-item measure of PTSD64 with a clinical cutoff score of 10. Other brief tools are available for evaluating behavioral and emotional problems, including the PSC-17,65 the SDQ,66 and the Peabody Treatment Battery.67 To make implementation studies robust and practical, studies should consider inclusion of other measures that capture key aspects of the functioning and histories of trauma patients, including functional impairment, service use/penetration, and cost of care. Likewise, measures of quality of life are important to examine.
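The feasibility point above can be made concrete: a brief screen with a published cutoff takes only a few lines of code to score. The sketch below is purely illustrative (the response range of 0–4 per item and the dictionary output are our assumptions, not the actual NCTSN instrument's scoring rules); it simply sums nine item responses and flags totals at or above the reported cutoff of 10.

```python
# Illustrative sketch only -- not the actual NCTSN scoring algorithm.
# Assumes 9 items, each rated 0-4; cutoff of 10 taken from the text above.

CUTOFF = 10  # clinical cutoff score reported for the 9-item PTSD measure

def score_screen(item_responses):
    """Sum nine item responses and flag totals at or above the cutoff."""
    if len(item_responses) != 9:
        raise ValueError("expected 9 item responses")
    if any(not (0 <= r <= 4) for r in item_responses):
        raise ValueError("item responses assumed to range from 0 to 4")
    total = sum(item_responses)
    return {"total": total, "above_cutoff": total >= CUTOFF}

result = score_screen([2, 1, 0, 3, 2, 1, 0, 2, 1])  # total of 12, above cutoff
```

Automating even this trivial computation within an intake workflow is one way a brief instrument's feasibility advantage becomes an implementation advantage.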
Another issue to consider is the degree to which a given tool can be used directly in the clinical process itself and is clinically meaningful. This requires instruments or tools that are brief, simple, and directly relevant to improving care. For example, cell phones or other momentary data collection procedures might be explored to make data collection for intake and outcome monitoring more efficient.
Some studies are examining novel methods designed to monitor treatment response and the mechanisms underlying good outcome. Fitzgerald et al.68 recently reported on the use of community metrics in two statewide implementation projects using TF-CBT. Monthly metrics were collected on several progress indicators and used for ongoing feedback purposes with clinicians across several communities (e.g., staff training, fidelity, service use and components used, case progress, outcomes, challenges). This work provides a novel example of the systematic collection of agency data on implementation using automated data collection methods. In addition, these projects highlight the important role of standardized measures to monitor both the course of treatment and ongoing practitioner performance or competencies after learning a new EBP.
Finally, studies outside mental health show that measuring performance and providing feedback are essential for learning complex tasks.27 Research in the adult mental health field has found that systematic feedback to clinicians about their practice significantly improves outcomes.69–71 As Hatfield and Ogles72 point out, discussing progress (or lack thereof) with the client can provide cues that can be used in the assessment of actual client change, enhancement of the therapeutic alliance, more accurate case conceptualization, and discussion of potential changes in the treatment plan. Unfortunately, typical clinical mental health settings are not set up to facilitate continuous measurement and feedback because ongoing, systematic assessment is rarely gathered, much less used, in practice.73 Unless measurement feedback systems are routinely incorporated into healthcare delivery, it will be impossible to know whether client progress is actually being made or is merely assumed.
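A measurement feedback system of the kind described here ultimately reduces to simple decision rules over repeated scores. The sketch below is a hypothetical example (the case identifiers, scores, and the 5-point improvement threshold are invented for illustration, not drawn from any cited system): it flags cases whose symptom scores have not improved enough from baseline, cueing exactly the kind of progress discussion Hatfield and Ogles describe.

```python
# Hypothetical measurement-feedback rule; threshold and data are invented.

def flag_for_review(baseline, latest, min_improvement=5):
    """Flag a case when the symptom score has not dropped by min_improvement."""
    return (baseline - latest) < min_improvement

# (baseline score, latest score) pairs for a clinician's caseload
caseload = {"case_a": (28, 17), "case_b": (25, 23), "case_c": (30, 31)}
flagged = [cid for cid, (base, last) in caseload.items()
           if flag_for_review(base, last)]
```

Even a crude rule like this, run automatically after each assessment, gives the clinician a concrete prompt that routine practice otherwise lacks.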
9. Measure the process and outcome of implementation
Given evidence about the benefits of treatment quality or integrity, efforts are needed to develop brief and more generic approaches to fidelity monitoring. Typical studies rely on analysis of audio- or video-taped session samples, which provide rich material for consultation but are expensive and tedious to code. An alternative involves live observation by an outside consultant who conveys immediate feedback during treatment sessions. Such observations are routinely provided in the context of practitioner training in several well-disseminated treatments, such as PCIT (see www.pcit.org) and MST (see www.MSTservices.com).
The use of agency or community metrics to provide overall study feedback has also been incorporated in Project BEST in South Carolina50 and by the state-wide EBP dissemination office in Connecticut.68 These large-scale efforts have found creative ways to provide ongoing individual and agency performance feedback to both administrators and practitioners, such as the number of cases referred for an EBP, assessed, found eligible, initiated an EBP, completed treatment, and benefited from treatment.
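The performance indicators listed above form a case-flow funnel (referred → assessed → eligible → initiated → completed → benefited), and the feedback reports in these projects amount to stage-to-stage conversion rates. As a minimal sketch with made-up counts (the numbers below are not from Project BEST or the Connecticut office), such rates can be computed as:

```python
# Illustrative funnel computation; all counts are hypothetical.

stages = [
    ("referred", 120),
    ("assessed", 95),
    ("eligible", 80),
    ("initiated", 64),
    ("completed", 40),
    ("benefited", 32),
]

def funnel_rates(stages):
    """Return each stage's count as a proportion of the preceding stage."""
    rates = {}
    for (_, prev_n), (name, n) in zip(stages, stages[1:]):
        rates[name] = round(n / prev_n, 2)
    return rates

rates = funnel_rates(stages)  # e.g., the initiated->completed drop stands out
```

Reporting the funnel this way lets administrators see exactly where cases are lost (for instance, between treatment initiation and completion) rather than only an aggregate completion rate.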
At this phase in the development of the science on implementation and dissemination, it may be as important to study the processes of installing new practices as the outcomes associated with the installation. These processes may identify important variables or factors from which a basic science of implementation can be built. Relevant process variables are likely to include (a) staff attitudes, expectations, and motivations to adopt new practices; (b) consumer (i.e., family and youth) attitudes, expectations, and preferences; (c) qualities of agency leadership; (d) dimensions of the social-organizational context within the work units delivering the new practices; and (e) system factors, such as political climate, fiscal policies, and agency regulations. If implementation science is to keep pace with (much less inform) the diffusion of effective practices, then it will be important for studies to measure both processes and outcomes at the multiple levels associated with scaling up these practices.
10. Model multi-level, complex and nested data to capture process and outcomes
Multi-level studies are needed to investigate the complexities of implementation. For example, state systems that have taken up the challenge of implementing trauma treatments for the populations they serve (e.g., NY, LA) function at many levels (political, regulatory, fiscal, county, agency, work unit, consumer). Studies of implementation within complex systems (e.g., states) will require designs and analyses that take these levels into account. Appropriate multi-level analytic approaches include hierarchical linear models and random regression coefficient models, which accommodate the multi-level data structure of these designs.74, 75 National studies that have used these approaches to examine factors related to the implementation of EBPs include the studies of the MacArthur Foundation Youth Research Network.76, 77 Studies from the CATS project have applied mixed-effects regression78 modeling to longitudinal data from multiple timepoints, incorporating model interactions over time.33, 34 Other studies of practitioner treatment practices have begun to examine potential effects related to nesting within an agency or program.79
Conclusions and Directions
Significant investments by researchers, practitioners, and provider groups have helped to advance the evidence base on effective trauma treatments for youth and families. These contributions have happened in record time. However, given the increasing demand for rapid dissemination of effective practices within a public health framework, the knowledge base on optimal strategies to improve dissemination and implementation of these practices must be advanced equally quickly. One implication of this development is the need to assign priority funding to research studies that are likely to have a significant public health impact. Recently, the NIH has been moving in this direction.
Explicit attention to the parameters that constitute the foundation for basic efficacy and effectiveness studies in the trauma field is similarly important in developing D-I science, but these parameters imply different emphases. The parameters include (a) when and how to initiate a study; (b) characteristics of the setting; (c) characteristics of the design; (d) characteristics of the sample; (e) characteristics of staff; (f) use of engagement strategies; (g) treatment and training models; (h) monitoring, feedback, and fidelity assessments; (i) assessment of outcomes; and (j) analytic issues. Examples have been presented herein of the incorporation of these parameters in ongoing effectiveness trials in order to highlight efforts to advance the methodological adequacy and rigor of new research in this area.
Of course, translating these research paradigms into a public health framework necessitates attention to another set of contextual issues. As articulated herein, these include (a) collaborative planning models to improve the initiation and conduct of studies; (b) consideration of a range of design options to optimize generalizability; (c) attention to the representativeness of samples, especially important for population-based studies; (d) addressing the attitudes and expectations of staff participating in the study; (e) developing retention strategies to reduce turnover; (f) developing brief and feasible training and consultation models; (g) developing and using continuous monitoring with brief assessments combined with feedback to enhance new learning; (h) measuring the process as well as the outcomes of implementation; and (i) modeling multi-level and nested data to capture process and outcomes.
Efforts to address these practical considerations will require additional investments designed to extend models of D-I research in ways that develop, implement, and evaluate new procedures. For example, most of these models indicate what stages or processes of research to include, but do not articulate what specific methods or procedures should be used to accomplish the work.26, 27 The D-I research discussed herein may yield several practical guidelines for conducting research in accord with these broader conceptualizations. Further, the impetus to address public health relevance on a larger scale should generate additional information on the clinical needs of and service barriers encountered by several worthy child and adolescent populations exposed to trauma.16
The central importance of collaborative planning and governance to D-I research has highlighted both structural and functional characteristics that need to be addressed in order to maximize the potential of D-I efforts. An interesting study by Wells31 is experimentally testing the role of CBPR in the dissemination of an EBP within several agencies and should provide important information about the overall utility of this type of investment. Perhaps future studies will examine the impact of attending to the other principles outlined in CBPR within a D-I framework (e.g., the importance of sharing collaborative feedback). Likewise, investigators are encouraged to appreciate the importance of momentum in relation to the specific initiative that drives the demand for the incorporation of a new EBP, since the level of motivation and commitment at the organizational and system levels and at the individual practitioner level may vary considerably across applications. Accordingly, it would be informative to compare the overall yield and impact of applications driven primarily by a research opportunity vs. a compelling population or public health need.
The search for alternative experimental designs and descriptions of community samples will hopefully be rewarded in the context of novel D-I research that balances practical and empirical considerations. The incorporation of flexible designs may facilitate several developments, including analyses of the relative advantages and disadvantages of group comparison vs. single case (individual) designs, the use of subject, program, and agency randomization methods, and the impact of using several vs. few selection criteria for practitioner involvement (e.g., all practitioners or only those who are highly motivated to learn the EBP). In conjunction with these designs, analytic methods should be used that can incorporate the potential effects of nested data at various levels (families, practitioners, programs, agencies, etc.; see Kolko et al., 2008).
Given the relative absence of experimental evidence in the trauma field, there is a critical need to evaluate alternative methods or procedures for teaching practitioners to competently administer an EBP.80 As examined by McHugh and Barlow13 for the D-I field more generally, these methods may require attention to such considerations as practitioner needs or barriers, training structure, didactic training methods, competency development efforts, and sustainability routines. Clearly, there are several options within each of these domains that bear implications for conducting a successful D-I effort and that may be worth formal application and evaluation.
Some suggestions were offered to enhance staff retention and consumer engagement or participation. Unfortunately, high rates of human services staff turnover and premature family termination have been reported for the child welfare and mental health systems.81 Each of these targets is thus especially critical to address given the substantial investment in training time and support provided to community providers, who are needed to apply an EBP in an effectiveness trial with recruited families on their caseloads. Possible options for improving staff and family follow-through with training or treatment regimens include shorter and more effective training programs emphasizing core components, more effective and engaging treatments, more efficient delivery of training or treatment programs (e.g., on-line or videoconferencing, phone calls), and more flexible scheduling and participation parameters.
Attention to whether and how incentives are applied with both practitioners and consumers may also clarify the additional expense of various incentives and the frequency with which they should be delivered, as both groups require both “buy-in” and retention. Further, the use of structured but brief assessment measures and the provision of automated feedback may help both practitioners and families to remain focused on specific treatment goals. Some examples of these measures include the 9-item UCLA PTSD scale,64 the Pediatric Symptom Checklist-17 (PSC-17),65 the 18-item Brief Symptom Inventory,82 and the 35-item revised Brief Child Abuse Potential Inventory.83
Efforts to stimulate the demand for EBPs have also begun to include community stakeholders who can encourage direct referrals to programs that administer them. Some work has developed formal procedures for identifying, training, and supporting stakeholder or broker populations to ensure that the system continues to request the use of EBPs by local providers (Project BEST).50 Of course, stakeholder satisfaction with EBP delivery may also be related to the degree to which clinicians can administer both an EBP and other related skills that the population may require to promote family stability and clinical progress (e.g., rapport and engagement building, basic treatment or process skills, crisis management). Programs that provide clinicians with training in and feedback regarding these various treatment-related skills or competencies (e.g., adherence) may ultimately be in the best position to sustain a new EBP.
In summary, we recognize that the research examples described herein are hardly a definitive listing of the many exemplars that are applying creative solutions to the issues of transporting effective practices within community contexts. But they offer practical illustrations of the flexibility and contextual attentiveness that investigators increasingly will need to demonstrate in order to advance this as yet fledgling science. The large number of ongoing projects being conducted across a range of child and adolescent populations (e.g., preschoolers, school-aged children, resource parents in foster care), settings (schools, clinics, homes), trauma histories (e.g., sexual abuse, physical abuse, witnessing domestic violence, exposure to disasters), and treatment models (e.g., AF-CBT, CPT, MTFC, PCIT, PE, TF-CBT) should provide a rich source of new knowledge about methods, measures, and approaches that can accelerate the transportability of efficacious treatments to those who apply and need them.
Acknowledgments
Preparation of this paper was supported, in part, by NIMH Grant 074737 (Kolko) and P20MH078178 (Hoagwood). We acknowledge the contributions of Mark Chaffin, Ph.D., Ben Saunders, Ph.D., and Doug Zatzick, M.D., to this paper.
Footnotes
Portions of this paper were presented at the NIMH Mental Health Services Conference on Mental Health Services Research, March 20, 2009, Bethesda, MD.
Contributor Information
David J. Kolko, Western Psychiatric Institute and Clinic, University of Pittsburgh School of Medicine, Pittsburgh, PA.
Kimberly Eaton Hoagwood, Columbia University.
Benjamin Springgate, RAND Corporation and Tulane University School of Medicine, New Orleans, LA.
References
- 1.Finkelhour D, Turner H, Ormrod R, Hamby SL. Violence, Abuse, and Crime Exposure in a National Sample of Children and Youth. Pediatrics. 2009;124:124–37. doi: 10.1542/peds.2009-0467. [DOI] [PubMed] [Google Scholar]
- 2.Galea S, Nandi A, Vlahov D. The Epidemiology of Post-Traumatic Stress Disorder after Disasters. Epidemiol Rev. 2005;27:78–91. doi: 10.1093/epirev/mxi003. [DOI] [PubMed] [Google Scholar]
- 3.Walrath CM, Ybarra ML, Sheehan AK, Holden EW, Burns BJ. Impact of Maltreatment on Children Served in Community Mental Health Programs. Journal of Emotional and Behavioral Disorders. 2006;14:143–56. [Google Scholar]
- 4.Silverman WK, Ortiz CD, Viswesvaran C, Burns BJ, Kolko DJ, Putnam FW, et al. Evidence-based psychosocial treatments for children and adolescents exposed to traumatic events. Journal of Clinical Child and Adolescent Psychology. 2008;37:156–83. doi: 10.1080/15374410701818293. [DOI] [PubMed] [Google Scholar]
- 5.Bruns EJ, Hoagwood KE, Rivard JC, Wotring J, Marsenich L, Carter B. State Implementation of Evidence-Based Practice for Youths, Part II: Recommendations for Research and Policy. Journal of Amer Academy of Child & Adolescent Psychiatry. 2008;47:499–504. doi: 10.1097/CHI.0b013e3181684557. [DOI] [PubMed] [Google Scholar]
- 6.Lutterman T, Phelan B, Berhane A, Shaw R, Rana V. Center for Mental Health Services. Rockville, MD: Substance abuse and Mental Health Services Administration; 2008. Characteristics of state mental health agency data systems. [Google Scholar]
- 7.Patel V, Araya R, Chatterjee S, Chisholm D, DeDilve M, Hosman C, et al. Treatment and prevention of mental disorders in low-income and middle-income countries. The Lancet. 2007;370:991–1005. doi: 10.1016/S0140-6736(07)61240-9. [DOI] [PubMed] [Google Scholar]
- 8.Schoenbaum M, Unuetzer J, Sherbourne C, Duan N, Rubenstein LV, Miranda J, et al. Cost-effectiveness of practice-initiated quality improvement for depression: Results of a randomized controlled trial. JAMA: Journal of the American Medical Association. 2009;286:1325–30. doi: 10.1001/jama.286.11.1325. [DOI] [PubMed] [Google Scholar]
- 9.NIMH. National Institute of Mental Health. Strategic Plan. 2007 Retrieved from: http://www.nimh.nih.gov/about/strategic-planning-reports/nimh-strategic-plan-2008.pdf.
- 10.Center P. About the National Center for PSD. Retrieved from Center for PTSD - http://ncptsd.va.gov/ncmain/about/initiatives/education/dissemination.html.
- 11.Leve LD, Fisher PA, Chamberlain P. Multidimensional Treatment Foster Care as a Preventive Intervention to Promote Resiliency Among Youth in the Child Welfare System. Journal of Personality. 2009;77:1869–902. doi: 10.1111/j.1467-6494.2009.00603.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Melnyk BM, Hawkins-Walsh E, Beauchesne M, Brandt P, Crowley A, Choi M, et al. Strengthening PNP Curricula in Mental/Behavioral Health and Evidence-based Practice. Journal of Pediatric Health Care. 2009;24:81–94. doi: 10.1016/j.pedhc.2009.01.004. [DOI] [PubMed] [Google Scholar]
- 13.McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of the current efforts. American Psychologist. 2010;65:73–84. doi: 10.1037/a0018121. [DOI] [PubMed] [Google Scholar]
- 14.Lomas J. Retailing Research: Increasing the Role of Evidence in Clinical Services for Childbirth. The Milbank Quarterly. 1993;71:439–75. [PubMed] [Google Scholar]
- 15.Chambers DA, Ringeisen H, Hickman EE. Federal, State, and Foundation Initiatives Around Evidence-Based Practices for Child and Adolescent Mental Health. Child and Adolescent Psychiatric Clinics of North America. 2005;14:307–27. doi: 10.1016/j.chc.2004.04.006. [DOI] [PubMed] [Google Scholar]
- 16.Zatzick DF, Galea S. An Epidemiologic Approach to the Development of Early Trauma Focused Intervention. Journal of Traumatic Stress. 2007;20:401–12. doi: 10.1002/jts.20256. [DOI] [PubMed] [Google Scholar]
- 17.Kolko DJ, Selelyo J, Brown EJ. The treatment histories and service involvement of physically and sexually abusive families: Description, correspondence, and clinical correlates. Child Abuse & Neglect. 1999;23:459–76. doi: 10.1016/s0145-2134(99)00022-8. [DOI] [PubMed] [Google Scholar]
- 18.Herschell AD, Kogan JN, Celedonia KL, Gavin J, Stein BD. Understanding community mental health administrators’ perspectives on evidence-based treatment implementation. Psychiatric Services. 2009;60:985–88. doi: 10.1176/appi.ps.60.7.989. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Hoven CW, Duarte CS, Mandell DJ. Children's mental health after disasters: the impact of the World Trade Center attack. Current Psychiatry Report. 2003;5:101–07. doi: 10.1007/s11920-003-0026-0. [DOI] [PubMed] [Google Scholar]
- 20.Kessler RC, Galea S, Jones RT, Parker HA. Mental illness and suicidality after Hurricane Katrina. Bulletin of the World Health Organization. 2006;84:930–39. doi: 10.2471/blt.06.033019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Kolko DJ, Hurlburt M, Zhang J, Barth R, Leslie LK, Burns BB. Posttraumatic stress symptoms in children and adolescents receiving child welfare services: A national sample of in-home and out-of-home care. Child Maltreatment. 2010;15:48–63. doi: 10.1177/1077559509337892. [DOI] [PubMed] [Google Scholar]
- 22.Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. [Google Scholar]
- 23.Feldstein AC, Glasgow RE. A Practical, Robust Implementation and Sustainability Model (PRISM) for Integrating Research Findings into Practice. The Joint Commission Journal on Quality and Patient Safety. 2008;34:228–43. doi: 10.1016/s1553-7250(08)34030-6. [DOI] [PubMed] [Google Scholar]
- 24.Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the Gap Between Prevention Research and Practice: The Interactive Systems Framework for Dissemination and Implementation. American Journal of Community Psychology. 2008;41:171–81. doi: 10.1007/s10464-008-9174-z. [DOI] [PubMed] [Google Scholar]
- 25.Weisz JR, Jensen-Doss A, Hawley KM. Evidence-based youth psychotherapies versus usual clinical care: A meta-analysis of direct comparisons. American Psychologist. 2006;61:671–89. doi: 10.1037/0003-066X.61.7.671. [DOI] [PubMed] [Google Scholar]
- 26.Bickman L. Improving the effectiveness of mental health services. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35:229. doi: 10.1007/s10488-007-0159-2. [DOI] [PubMed] [Google Scholar]
- 27.Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47:1114–19. doi: 10.1097/CHI.0b013e3181825af8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Barnoski R. Outcome evaluation of Washington state's research-based programs for juvenile offenders. Olympia, WA: Washington State Institute for Public Policy; 2004. [Google Scholar]
- 29.Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health. 1998;19:173–202. doi: 10.1146/annurev.publhealth.19.1.173. [DOI] [PubMed] [Google Scholar]
- 30.Cohen, Jaycox L, Walker D, Mannarino A, Langley A, DuClos J. Treating Traumatized Children after Hurricane Katrina: Project Fleur-de Lis™. Clinical Child and Family Psychology Review. 2009;12:55–64. doi: 10.1007/s10567-009-0039-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Wells KB. CBPR: Community-Based Participatory Research. University of California, Los Angeles, School of Public Health: National Institute of Mental Health; 2010. [Google Scholar]
- 32.Hoagwood KE, Vogel JM, Levitt JM, D'amico PJ, Paisner WI, Kaplan SJ. Implementing an Evidence-Based Trauma Treatment in a State System After September 11: The CATS Project. Journal of American Academy of Child & Adolescent Psychiatry. 2007;46:773–79. doi: 10.1097/chi.0b013e3180413def. [DOI] [PubMed] [Google Scholar]
- 33.Consortium C. Implementing CBT for Traumatized Children and Adolescents After September 11: Lessons Learned from the Child and Adolescent Trauma Treatments and Services (CATS) Project. Journal of Clinical Child and Adolescent Psychology. 2007:36. doi: 10.1080/15374410701662725. [DOI] [PubMed] [Google Scholar]
- 34.Consortium C. Implementation of CBT for Children and Adolescents Affected by the World Trade Center Disaster: Outcomes in Reducing Trauma Symptoms. Journal of Traumatic Stress. doi: 10.1002/jts.20594. in press. [DOI] [PubMed] [Google Scholar]
- 35.Springgate BF, Allen C, Jones C, Lovera S, Meyers D, Campbell L, et al. Rapid Community Participatory Assessment of Health Care in Post-Storm New Orleans. American Journal of Preventive Medicine. 2009;37:S237–S43. doi: 10.1016/j.amepre.2009.08.007. [DOI] [PubMed] [Google Scholar]
- 36.Kolko DJ. Help for Families Involved in Physical Coercion or Abuse: An Alternative Approach for Teaching Families New Skills. American Professional Society on the Abuse of Children; Atlanta, GA: 2009.
- 37.Montgomery D. Design and Analysis of Experiments. 7th ed. New York: Wiley; 2008.
- 38.West SG, Duan N, Pequegnat W, Gaist P, Des Jarlais DC, Holtgrave D, et al. Alternatives to the Randomized Controlled Trial. American Journal of Public Health. 2008;98:1359–66. doi: 10.2105/AJPH.2007.124446.
- 39.Thistlethwaite DL, Campbell DT. Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology. 1960;51:309–17.
- 40.Cappelleri J, Trochim W. An illustrative statistical analysis of cutoff-based randomized clinical trials. Journal of Clinical Epidemiology. 1994;47:261–70. doi: 10.1016/0895-4356(94)90007-8.
- 41.Finkelstein MO, Levin B, Robbins H. Clinical and Prophylactic Trials with Assured New Treatment for Those at Greater Risk. Part I - Introduction. American Journal of Public Health. 1996;86:691–95. doi: 10.2105/ajph.86.5.691.
- 42.Finkelstein MO, Levin B, Robbins H. Clinical and Prophylactic Trials with Assured New Treatment for Those at Greater Risk. Part II - Examples. American Journal of Public Health. 1996;86:696–705. doi: 10.2105/ajph.86.5.696.
- 43.Shadish WR, Cook T, Campbell D. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002.
- 44.Cappelleri JC, Trochim WMK. Ethical and Scientific Features of Cutoff-based Designs of Clinical Trials: A Simulation Study. Medical Decision Making. 1995;15:387–94. doi: 10.1177/0272989X9501500409.
- 45.Paradise JL, Bluestone CD, Rogers KD, Taylor FH, Colborn DK, Bachman RZ, et al. Efficacy of adenoidectomy for recurrent otitis media in children previously treated with tympanostomy-tube placement. Results of parallel randomized and nonrandomized trials. Journal of the American Medical Association. 1990;263:2066–73.
- 46.Marcus S. Assessing Non-consent Bias with Parallel Randomized and Nonrandomized Clinical Trials. Journal of Clinical Epidemiology. 1997;50:823–28. doi: 10.1016/s0895-4356(97)00068-1.
- 47.Shadish WR, Cook T, Campbell D. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002.
- 48.Hanson RF. Transporting Evidence-based Treatment for Child Abuse: Focus on Training Community-based Clinicians. Annual Colloquium for the American Professional Society on the Abuse of Children (APSAC); Phoenix, AZ. 2008.
- 49.Kolko DJ, Dorn LD, Bukstein OG, Burke JD. Clinically referred ODD children with or without CD and healthy controls: Comparisons across contextual domains. Journal of Child and Family Studies. 2008;17:21.
- 50.Saunders BE. Project Best: Using Community Change Teams to Bring Evidence-Supported Treatments to Children & Families. Presentation at the annual conference of the Public Children Services Association of Ohio; Columbus, OH. 2009.
- 51.Kolko DJ, Herschell AD, Costello AH, Kolko RP. Child welfare recommendations to improve mental health services for children who have experienced abuse and neglect: A national perspective. Administration & Policy in Mental Health. 2009;36:12. doi: 10.1007/s10488-008-0202-y.
- 52.Atkins MS, Frazier SL, Leathers SJ, Graczyk PA, Talbott E, Jakobsons L, et al. Teacher key opinion leaders and mental health consultation in low-income urban schools. Journal of Consulting and Clinical Psychology. 2008;76:905–08. doi: 10.1037/a0013036.
- 53.Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, Chaffin MJ. The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology. 2009;77:270–80. doi: 10.1037/a0013223.
- 54.Funderburk B, Ware LM, Altshuler E, Chaffin MJ. Use and feasibility of telemedicine technology in the dissemination of parent-child interaction therapy. Child Maltreatment. 2008;13:377–82. doi: 10.1177/1077559508321483.
- 55.Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology. 2009;77:566–79. doi: 10.1037/a0014565.
- 56.Deblinger E, Cohen JA, Mannarino A. A dismantling study of TF-CBT. Annual meeting of the Association for Behavioral and Cognitive Therapies (ABCT); New York, NY. 2009.
- 57.Resick PA, Galovski TE, Uhlmansiek MO, Scher CD, Clum GA, Young-Xu Y. A randomized clinical trial to dismantle components of cognitive processing therapy for posttraumatic stress disorder in female victims of interpersonal violence. Journal of Consulting and Clinical Psychology. 2008;76:243–58. doi: 10.1037/0022-006X.76.2.243.
- 58.Hayes AM, Webb C, Grasso D, Cummings JA, Vahlsing JB. Processes that inhibit and facilitate change in Trauma-Focused Cognitive Behavioral Therapy for youth exposed to interpersonal violence. In: Deblinger E, chair. Interventions for youth exposed to trauma/abuse: Understanding change processes. Symposium at the annual meeting of the Association for Behavioral and Cognitive Therapies; New York, NY. 2009.
- 59.Smith D, Chamberlain P, Fisher P. Integrated treatment of PTSD and delinquency using TF-CBT and MTFC. Annual Meeting of the Association for Behavioral and Cognitive Therapies; New York, NY. 2009.
- 60.Chaffin MJ. SAFE Dissemination project. Atlanta, GA: Centers for Disease Control; 2010.
- 61.Cohen JA, Mannarino A. Disseminating and implementing trauma-focused CBT in community settings. Trauma, Violence, and Abuse. 2008;9:214–26. doi: 10.1177/1524838008324336.
- 62.Herschell AD, Kolko DJ, Baumann BL, Costello AH. The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review. In press. doi: 10.1016/j.cpr.2010.02.005.
- 63.Gleacher A. The Columbia University Clinic for Anxiety and Related Disorders (CUCARD); 2010.
- 64.Cohen JA, Kolko DJ. Transforming trauma: Recognizing and responding to posttraumatic stress disorder symptoms in children and adolescents. In: McInerny TK, Adam HM, Campbell DE, Kamat DM, Kelleher KJ, editors. American academy of pediatrics textbook of pediatric care. 1st ed. Elk Grove Village, IL: American Academy of Pediatrics; 2008. pp. 1285–91.
- 65.Jellinek MS, Murphy M, Little M, Pagano M, Comer D, Kelleher K. Use of the pediatric symptom checklist to screen for psychosocial problems in pediatric primary care. Archives of Pediatrics & Adolescent Medicine. 1999;153:254–60. doi: 10.1001/archpedi.153.3.254.
- 66.Bourdon K, Goodman R, Rae D, Simpson G, Koretz DS. The Strengths and Difficulties Questionnaire: U.S. normative data and psychometric properties. Journal of the American Academy of Child & Adolescent Psychiatry. 2005;44:557–64. doi: 10.1097/01.chi.0000159157.57075.c8.
- 67.Kelley SD, Bickman L. Beyond outcomes monitoring: measurement feedback systems in child and adolescent clinical practice. Current Opinion in Psychiatry. 2009;22:363–68. doi: 10.1097/YCO.0b013e32832c9162.
- 68.Fitzgerald M, Franks B, Lang J. Spreading EBPs to the Community: Initial Results from Two Statewide Implementations of TF-CBT Using the Learning Collaborative Methodology. 23rd Annual San Diego International Conference on Child and Family Maltreatment; San Diego, CA. 2010.
- 69.Hawkins E, Lambert M, Vermeersch D, Slade K, et al. The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research. 2004;14:308–27.
- 70.Harmon SC, Lambert MJ, Smart DM, Hawkins E, Nielsen SL, Slade K, et al. Enhancing outcome for potential treatment failures: Therapist-client feedback and clinical support tools. Psychotherapy Research. 2007;17:379–92.
- 71.Lambert M, Whipple JL, Hawkins E. Is it time for clinicians to routinely track patient outcome? A Meta-analysis. Clinical Psychology: Science and Practice. 2003;10:288–301.
- 72.Hatfield DR, Ogles BM. The influence of outcome measures in assessing client change and treatment decisions. Journal of Clinical Psychology. 2006;62:325–37. doi: 10.1002/jclp.20235.
- 73.Bickman L, Noser K, Summerfelt WT. Long term effects of a system of care on children and adolescents. Journal of Behavioral Health Services & Research. 1999;26:185–202. doi: 10.1007/BF02287490.
- 74.Duan N, Reise SP, editors. Multilevel Modeling: Methodological Advances, Issues and Applications. New York: Erlbaum; 2002.
- 75.Reise SP, Duan N. Multilevel Modeling and its Application in Counseling Psychology Research. The Counseling Psychologist. 1999;27:528–51.
- 76.Hoagwood K, Green E, Kelleher K, Schoenwald S, Rolls-Reutz J, Landsverk J, et al. Family Advocacy, Support and Education in Children’s Mental Health: Results of a National Survey. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35:73–83. doi: 10.1007/s10488-007-0149-4.
- 77.Schoenwald SK, Chapman JE, Kelleher K, Hoagwood KE, Landsverk J, Stevens J, et al. A survey of the infrastructure for children's mental health services: Implications for the implementation of empirically supported treatments. Administration & Policy in Mental Health. 2008;35:84–97. doi: 10.1007/s10488-007-0147-6.
- 78.Gibbons RD. Mixed-Effects Models for Mental Health Services Research. Health Services and Outcomes Research Methodology. 2000;1:91–129.
- 79.Kolko DJ, Cohen J, Mannarino A, Baumann B, Knudsen K. Community Treatment of Child Sexual Abuse: A Survey of Practitioners in the National Child Traumatic Stress Network. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36:37–49. doi: 10.1007/s10488-008-0180-0.
- 80.Beidas RS, Kendall PC. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice. 2010;17:1–30. doi: 10.1111/j.1468-2850.2009.01187.x.
- 81.Glisson C, Green P. The Role of Specialty Mental Health Care in Predicting Child Welfare and Juvenile Justice Out-of-Home Placements. Research on Social Work Practice. 2006;16:480–90.
- 82.Recklitis CJ, Parsons SK, Shih M-C, Mertens A, Robison LL, Zeltzer L. Factor Structure of the Brief Symptom Inventory 18 in Adult Survivors of Childhood Cancer: Results From the Childhood Cancer Survivor Study. Psychological Assessment. 2006;18(1):22–32. doi: 10.1037/1040-3590.18.1.22.
- 83.Ondersma SJ, Chaffin M, Mullins SM, LeBreton JM. A brief form of the child abuse potential inventory: Development and validation. Journal of Clinical Child & Adolescent Psychology. 2005;34:301–11. doi: 10.1207/s15374424jccp3402_9.
