Abstract
Mazzucchelli and Sanders (2010) provide a thoughtful, detailed, and complex description of how to encourage flexible fidelity to one well-established, evidence-based treatment, the Triple P-Positive Parenting Program. As the authors highlight, many of the “lessons learned” from this wealth of treatment, research, and implementation experience were developed over decades and can be applied to other evidence-based treatments. Underlying many of the recommendations provided by Mazzucchelli and Sanders (2010) are a well-refined infrastructure to support implementation and the need to refine the measurement of fidelity in the field. This commentary will discuss each of those topics. It seems that we have many lessons to learn and hurdles to clear in this emerging area of science, which will be hastened by pioneers like Mazzucchelli and Sanders.
Keywords: treatment fidelity, evidence-based treatment, flexibility, implementation, evidence
As a few have recently highlighted (Beidas & Kendall, 2010; Herschell, Kolko, Baumann, & Davis, 2010), evidence-based treatment (EBT) implementation within behavioral health is difficult, in part, because it often requires therapists to change their behavior. They have to learn a new skill set that often requires learning new information and inhibiting old, familiar ways. Teaching therapists to be flexible but not to change the treatment so much “that it moves beyond its evidence base” (Mazzucchelli & Sanders, 2010, p. 5) is critically important, but is often easier said than done. This requires therapists to know when to deviate and when the deviation is too much (cf. Kendall, Gosch, Furr, & Sood, 2008), which requires knowledge of the existing treatment as well as its theoretical base and research support. It also requires that therapists are linked into a larger network so that they can remain informed of changes in the treatment and its research. The notion of “flexibility within fidelity” (Kendall & Beidas, 2007) is important because we can be sure that adopters of an innovation, like EBTs, will change them (Rogers, 2003). Encouraging flexibility where it is appropriate (and often necessary), as well as identifying when flexible fidelity turns into drift, is critical to the success of EBT implementation. Inherent in Mazzucchelli and Sanders’ (2010) recommendations, flexible fidelity also seems to require developing sufficient infrastructure to support implementation and refining the measurement of fidelity in the field.
Developing Infrastructure to get EBTs into the field
While we once hoped that fidelity in the field could be achieved with quick, passive implementation strategies such as workshops or continuing education trainings, it is now clear that to be successful, the implementation process must be active, must include several core components that impact multiple levels of the mental health delivery system (e.g., staff selection, training, coaching, staff performance assessment, data systems, facilitative administration, systems intervention), and will likely take two to four years to complete (Fixsen, Blase, Naoom, & Wallace, 2009). Underlying much of what Mazzucchelli and Sanders (2010) recommend is a well-refined infrastructure for implementing one EBT. In addition to several helpful suggestions, the authors describe a trainer network, clinician certification, professionally produced resource materials (e.g., practitioner manuals, training manuals, parent workbooks, DVDs, tip sheets), newsletters, online question-and-answer forums, and an annual conference. Each of these is a product of a well-developed infrastructure that has been designed to attain and maintain high-quality implementation (defined by high fidelity) in the quest to achieve a public health impact (Sanders, 2010). Essentially, Mazzucchelli and Sanders (2010) describe strategies utilized in their roles as “purveyors” of Triple P, individuals “who actively work to implement [a] practice or program with fidelity and good effect” (Fixsen et al., 2009, p. 537).
Triple P was originally developed for and has been used extensively to treat disruptive behavior disorders (e.g., Oppositional Defiant Disorder, Conduct Disorder). Fortunately, Triple P is one of many EBTs for disruptive behavior disorders. Behavior therapy, including parent management training, is “well-established,” and there are several manualized treatments that are considered “probably efficacious,” including cognitive behavioral therapies (i.e., Anger Control Training, Rational-Emotive Therapy) and behavioral therapies (i.e., Helping the Noncompliant Child, Triple P, Incredible Years, Parent-Child Interaction Therapy, Problem Solving Skills Training, Group Assertiveness Training, Multidimensional Treatment Foster Care, and Multisystemic Therapy; Eyberg, Nelson, & Boggs, 2008) for the treatment of disruptive behavior disorders. There seems to be wide variation among these treatments in terms of how widely they have been implemented in community settings and how much infrastructure has already been developed for each of them. Some of these treatment approaches seem to have a high level of infrastructure comparable to Triple P (e.g., Incredible Years, Multidimensional Treatment Foster Care, Multisystemic Therapy); others (e.g., Parent-Child Interaction Therapy) are in the process of developing similar infrastructure; and others may not have a sophisticated implementation infrastructure.
If a high level of infrastructure is what it takes to implement EBTs with fidelity, it seems that much could be gained by combining efforts and learning from others. Even with the outstanding efforts of these authors and many others, EBTs are far from routine in community settings. Given the low penetration rate of EBTs in community practice and the similarities across treatments for disruptive behavior disorders (e.g., theoretical base; research support; active, directive, and structured style), some infrastructure shared across EBTs might help to contribute to an overall public health impact. Perhaps professional organizations and networks could enhance their existing efforts to help disseminate and implement EBTs by encouraging collaborative infrastructure development across treatment approaches. For example, the Blueprints for Violence Prevention Initiative sponsors one conference for multiple EBTs rather than having separate conferences for each EBT, and the focus of many sessions within the conference is implementation (http://www.blueprintsconference.com/about/). The National Child Traumatic Stress Network has sponsored several learning collaboratives and online trainings for EBTs rather than endorsing one EBT. Expanding these efforts might involve a common elements approach to training, similar to what has been described by Chorpita and colleagues (Chorpita & Daleiden, 2009) within the EBT literature. Perhaps larger organizations could help support training, certification, and fidelity monitoring of behavioral therapy and cognitive behavioral therapy processes and procedures in an effort to support the implementation of EBTs generally. Perhaps these could even be considered readiness or prerequisite training for specific EBT implementation.
Otherwise, we are left with each individual EBT having to develop its own infrastructure, the consequences of which might be siloed treatments and accompanying innovations as well as duplicative efforts and wasted resources. Also, given our lack of success in getting science into practice, efforts that bring science (including EBTs) together might be complicated, but are worth pursuing. This article by Mazzucchelli and Sanders (2010) is a great illustration of the benefits of sharing “lessons learned.”
Fine-tuning Measurement – Developing efficient measures and methods to assess fidelity in the field
Though a recent review might have raised some doubts about its importance (Webb, DeRubeis, & Barber, 2010), most agree that treatment fidelity is a difficult but important construct to assess because it has been associated with positive client outcomes (cf. Huey, Henggeler, Brondino, & Pickrel, 2000) and with positive organizational-level variables (Schoenwald, Sheidow, Letourneau, & Liao, 2003), including improved staff retention (Aarons, Sommerfeld, Hecht, Silovsky, & Chaffin, 2009). Traditionally, treatment fidelity has been measured within treatment outcome trials by well-trained, independent raters reviewing audio- or video-taped treatment sessions and coding them to ensure that therapists remain faithful (adherent) to the pre-established treatment session content, structure, and process. These behavior observation ratings typically offer a detailed account of what did (and did not) happen within the session. They have remained the gold standard of treatment fidelity monitoring, but they are time-, labor-, and cost-intensive. Sometimes we do not even do fidelity monitoring well in tightly controlled research trials. A close examination of treatment fidelity in randomized controlled trials (RCTs) found that less than 4% of the RCTs evaluated adequately implemented treatment fidelity procedures (Perepletchikova, Treat, & Kazdin, 2007). When corresponding authors of RCTs published in influential psychiatric and psychological journals were contacted to assess what they perceived as barriers to treatment fidelity implementation, they reported that they regarded fidelity as important, but cited two strong barriers to collecting these data: 1) a lack of theory and specific guidelines on fidelity procedures and 2) the time, cost, and labor demands of fidelity monitoring (Perepletchikova, Hilt, Chereji, & Kazdin, 2009).
If we cannot implement treatment fidelity procedures in well-controlled RCTs where the importance of fidelity is paramount, how can we expect clinical settings to adopt these same procedures where there may be even more barriers and complexities to collecting treatment fidelity data? For example, community-based practitioners typically do not like to tape their sessions and have expressed that they perceive taping to be cumbersome, detracting from the therapeutic focus, and invasive to patients. Fidelity monitoring may even run counter to community practice where resources are limited and treatment is influenced by insurance coverage, scheduling difficulties, and eclectic practice (Angold, Costello, Burns, Erkanli, & Farmer, 2000).
To ease the burden of behavior observation ratings that require an independent observer to rate live or pre-recorded sessions, some have suggested that therapists might be able to rate their own sessions. Others have (justifiably) argued that even though behavior observation ratings present challenges, poor concordance between therapist and observer ratings suggests that therapist reports may be a supplement to, but not a substitute for, observer ratings of fidelity in the field (Carroll, Nich, & Rounsaville, 1998, p. 307). An accurate measurement of what actually happens (rather than what is reported) may be difficult to obtain, but is currently necessary to clearly measure fidelity.
So, if traditional methods for measuring fidelity are too complicated, time consuming, and financially untenable, and therapist reports of fidelity are unreliable, the field is left in a quandary in terms of measuring fidelity to EBTs. We need to think carefully and maybe even creatively about our measures, procedures, and raters. In addition to careful attention to reliability and validity, researchers have to invest effort into streamlining measures and into refining procedures so that fidelity monitoring is simpler and perceived as part of treatment. Perhaps we should embrace this opportunity for practice to inform research questions. For example, if we had a better sense of the “active ingredients” within EBTs we could develop fidelity monitoring around fewer pieces of treatment, which would certainly simplify and streamline measurement.
We also need to think broadly about who might be able to rate fidelity. One possible solution is to train therapists to be more reliable raters, just as we train staff who rate treatment fidelity in RCTs. As part of a system of ongoing fidelity monitoring, therapists might be trained to reliably rate each other’s performance. Another possible solution is to have patients rate therapist fidelity. It seems that we have learned that therapists and supervisors will be overly optimistic in their reports of fidelity as compared to a behavior observation rating, but we have yet to draw that same conclusion with patient ratings of fidelity. Perhaps building patient ratings, in addition to behavior observation ratings of fidelity, into effectiveness trials might help us to better understand whether patient reports on key pieces of treatment content, structure, and process might be reliable, valid, and sufficient to determine the level of fidelity that is necessary for a positive treatment outcome. Alternatively, if patient report cannot replace behavior observation ratings, perhaps it might supplement behavior observation ratings and other more time- and cost-effective strategies (e.g., internal, peer ratings as opposed to external, expert or consultant ratings) so that different forms of fidelity monitoring are used in combination to systematically monitor fidelity.
All of these suggestions are aimed at reducing the burdens associated with fidelity monitoring as well as at continuing to understand the association between fidelity and outcomes, including outcomes that are important to stakeholders who fund services (e.g., administrators, managed care representatives, policy makers). Not only do we need to convince agency personnel that fidelity monitoring is important, we also need to convince those who fund services that fidelity monitoring is an important part of delivering EBTs, so that community-based mental health providers get reimbursed for the extra effort currently required to complete fidelity monitoring. Understanding the association between fidelity and outcomes that are important to stakeholders who represent multiple levels of the system (e.g., families, therapists, supervisors, agency administrators, funders) will need to remain a priority in implementation research if we expect fidelity monitoring to be sustainable in community settings.
In summary, the lessons that Mazzucchelli and Sanders (2010) shared from their extensive experience in implementing Triple P remind us that high-quality EBT implementation is a complex process. We first have to get EBTs into the field. To get them to even a fraction of the people who could benefit from them, we need to develop an infrastructure that will support EBT implementation. Once these treatments are in the field, we need more cost-, time-, and labor-effective methods to assess fidelity, which will require innovation in fidelity monitoring. As others have observed (e.g., Aarons et al., 2009; Chaffin, 2006), EBT implementation occurs within an established culture, of which we need to be mindful. Using EBTs represents a value – the value of science over different loyalties (Chaffin, 2006). The commitment to EBT implementation and fidelity, however flexible, is ongoing and effortful and may require therapists, supervisors, and administrators to shift towards a more evidence-based culture that recognizes the value in collecting data and using it to guide decision making.
Acknowledgments
Preparation of this article was supported and influenced by two grants from the National Institute of Mental Health: a Career Development Award (K23 MH074716) awarded to Amy D. Herschell and an effectiveness trial (R01 MH074737) awarded to David J. Kolko. Thank you to Barbara L. Baumann for reviewing a previous draft of this paper.
References
- Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, Chaffin MJ. The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology. 2009;77(2):270–280. doi: 10.1037/a0013223.
- Angold A, Costello JE, Burns BJ, Erkanli A, Farmer EMZ. Effectiveness of nonresidential specialty mental health services for children and adolescents in the "real world". Journal of the American Academy of Child & Adolescent Psychiatry. 2000;39(2):154–160. doi: 10.1097/00004583-200002000-00013.
- Beidas RS, Kendall PC. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice. 2010;17(1):1–30. doi: 10.1111/j.1468-2850.2009.01187.x.
- Carroll KM, Nich C, Rounsaville BJ. Utility of therapist session checklists to monitor delivery of coping skills treatment for cocaine abusers. Psychotherapy Research. 1998;8(3):307–320.
- Chaffin M. Organizational culture and practice epistemologies. Clinical Psychology: Science and Practice. 2006;13(1):90–93.
- Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology. 2009;77(3):566–579. doi: 10.1037/a0014565.
- Eyberg SM, Nelson MM, Boggs SR. Evidence-based psychosocial treatments for children and adolescents with disruptive behavior. Journal of Clinical Child and Adolescent Psychology. 2008;37:215–237. doi: 10.1080/15374410701820117.
- Fixsen DL, Blase KA, Naoom SF, Wallace F. Core implementation components. Research on Social Work Practice. 2009;19(5):531–540.
- Herschell AD, Kolko DJ, Baumann BL, Davis AC. The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review. 2010;30:448–466. doi: 10.1016/j.cpr.2010.02.005.
- Huey SJ, Henggeler SW, Brondino MJ, Pickrel SG. Mechanisms of change in multisystemic therapy: Reducing delinquent behavior through therapist adherence and improved family and peer functioning. Journal of Consulting & Clinical Psychology. 2000;68(3):451–467.
- Kendall PC, Beidas R. Smoothing the trail for dissemination of evidence-based practices for youth: Flexibility within fidelity. Professional Psychology: Research and Practice. 2007;38:13–20.
- Kendall PC, Gosch E, Furr J, Sood E. Flexibility within fidelity. Journal of the American Academy of Child and Adolescent Psychiatry. 2008;47:987–993. doi: 10.1097/CHI.0b013e31817eed2f.
- Mazzucchelli T, Sanders M. Facilitating practitioner flexibility within Evidence Based Practice: Lessons from a system of parenting support. Clinical Psychology: Science and Practice. 2010.
- Perepletchikova F, Hilt LM, Chereji E, Kazdin AE. Barriers to implementing treatment integrity procedures: Survey of treatment outcome researchers. Journal of Consulting and Clinical Psychology. 2009;77(2):212–218. doi: 10.1037/a0015232.
- Perepletchikova F, Treat TA, Kazdin AE. Treatment integrity in psychotherapy research: Analysis of the studies and examination of associated factors. Journal of Consulting and Clinical Psychology. 2007;75:829–841. doi: 10.1037/0022-006X.75.6.829.
- Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press; 2003.
- Sanders MR. Adopting a public health approach to the delivery of evidence-based parenting interventions. Canadian Psychological Review. 2010;51(1):17–23.
- Schoenwald SK, Sheidow AJ, Letourneau EJ, Liao JG. Transportability of Multisystemic Therapy: Evidence for multilevel influences. Mental Health Services Research. 2003;5(4):223–239. doi: 10.1023/a:1026229102151.
- Webb CA, DeRubeis RJ, Barber JP. Therapist adherence/competence and treatment outcome: A meta-analytic review. Journal of Consulting and Clinical Psychology. 2010;78(2):200–211. doi: 10.1037/a0018912.
