Abstract
Kazdin and Blase (2011) propose that traditional models of delivering therapy require more resources than are available to address the scope of mental illness. We argue that finding new platforms and avenues for our existing treatments is a good start but not enough. We contend that the field also needs to develop formal strategies to reorganize its increasing abundance of knowledge, in order to address the scarcity of resources for its application. If we can better utilize our existing knowledge, treatment delivery and service resource allocation can become more efficient and effective. If the field continues with its almost singular emphasis on knowledge proliferation (e.g., developing new treatments), as opposed to knowledge management (e.g., developing new ways to design, apply, and organize existing treatments), the problem outlined by Kazdin and Blase cannot be solved.
Kazdin and Blase assert that unless we make some major changes, our profession cannot meet the demand for mental health services in the US or globally. They offer the idea of a portfolio of models, and we agree entirely that increasing the range of how existing treatments can be applied will help reduce the overall burden of mental health suffering. However, within the current zeitgeist, this could well mean that we will see 10 different versions of each protocol, each requiring 10 efficacy trials and 10 more effectiveness trials—essentially taking us from thousands of treatments to hundreds of thousands. This is certainly not what Kazdin and Blase intend, but we believe that without deliberate intervention, it is likely to be how the field responds.
We Need More and Better Ways to Organize and Move Knowledge
We see this as a knowledge management problem. That is, continued proliferation of knowledge about treatment will not help unless we get much, much better at summarizing, synthesizing, integrating, and delivering what we already have (Graham et al., 2006). The existing knowledge base is now too large for any psychologist to comprehend and apply optimally. In our recent efforts to examine how to choose a set of evidence-based treatments (EBTs) that best fit an organization’s service population (Chorpita, Bernstein, & Daleiden, in press), we discovered that just selecting a set of no more than a dozen treatments from among all EBTs for children yields over 67 sextillion possibilities. To put this number in some perspective, if one were to write each unique set of 12 or fewer treatments on a single sheet of ordinary paper, the resulting pile would reach to the sun and back. Over 20 million times. Each of these sets has a unique composition and thus a potentially unique impact on the service population. Selecting an ideal array of treatments from among the promising possibilities is no longer a simple problem—and it is approaching unsolvability. Although we know much about what works, we can no longer apply that knowledge efficiently.
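The arithmetic behind these figures is straightforward to check. A minimal sketch, assuming a pool of a few hundred candidate EBTs (the exact pool size here is illustrative, not the count used in the original analysis) and ordinary paper about 0.1 mm thick:

```python
import math

# Number of possible service arrays: all ways to choose between 1 and 12
# treatments from a pool of n candidate EBTs (order does not matter).
def service_array_count(n: int, max_size: int = 12) -> int:
    return sum(math.comb(n, k) for k in range(1, max_size + 1))

# The paper-stack illustration, taking the figure cited in the text
# (~67 sextillion sets) and assuming sheets ~0.1 mm thick.
sets = 67 * 10**21
stack_height_m = sets * 1e-4          # one sheet per candidate set
sun_round_trip_m = 2 * 1.496e11      # mean Earth-sun distance, out and back
print(stack_height_m / sun_round_trip_m)  # roughly 2.2e7: over 20 million trips
```

Because the count grows combinatorially in the pool size, even modest growth in the number of EBTs makes exhaustive comparison of candidate arrays hopeless, which is the point of the sentence above.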
Any system seeking to provide quality care informed by research must select treatments to put into its service array. Given that providing evidence-based, quality care to those in need has become a public health priority (US Department of Health and Human Services, 2007) and that there are numerous ongoing efforts to implement EBTs at the state and national level (Chambers, Ringeisen, & Hickman, 2005), the number of systems facing this challenge is only increasing. It is time to consider whether our policies to implement EBTs and our rules for how we define them are really compatible.
We Need Options Other than Treatment Manuals to Transfer Knowledge
To identify what works, over the past 20 years the field has emphasized efficacy and internal validity over external validity, feasibility, and acceptability. Following the principles of good research design (Kazdin, 2003), the field required primarily that treatments be well-specified, typically in the form of a book or manual (e.g., Chambless & Hollon, 1998). This development was an extraordinary and innovative leap forward. But like nearly all innovation, it has had unforeseen consequences as well, which we must now face squarely.
Our knowledge has been packaged in units that cannot easily be combined. Is each manual, tested within its own research program, really a world unto itself? If so, then we have hundreds, perhaps thousands, of silos of expert knowledge, with little means to organize or combine them. Despite many years of brilliant innovation in treatment development and research, a child with two different problem clusters (e.g., separation anxiety and depressed mood) will at best receive a sequence of two separate EBTs built by two different experts. In this day and age, there is still no way for a child to receive care in the community that formally combines the collective scientific expertise on what to do for both conditions, even though we now have very good ideas for how to treat each. Although transdiagnostic treatment models are at last emerging (e.g., Allen, McHugh, & Barlow, 2008), for the most part, the products of our research are still getting in the way of utilizing the knowledge behind them.
In our own work, we have sought to aggregate knowledge in the form of practice elements (discrete clinical procedures), noting which ones are commonly associated with successful outcomes for which symptoms in clinical trials (e.g., Distillation and Matching Model; Chorpita & Daleiden, 2009). This work has involved coding all available treatment protocols for their common procedures and operations (e.g., use of a reward program, relaxation training, cognitive restructuring), and identifying how those operations are associated with client or context features (e.g., diagnosis, age, setting). What emerge are profiles or frequency distributions that show which procedures are most commonly associated with successful treatments for which clinical presentations.
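The core of this aggregation step can be sketched in a few lines. The protocols, elements, and outcomes below are toy illustrations, not data from the actual coding study:

```python
from collections import Counter

# Each record is one coded treatment protocol from a hypothetical trial:
# the practice elements it contains, the clinical presentation it targeted,
# and whether the trial counted as a success for that presentation.
trials = [
    {"elements": {"exposure", "relaxation", "cognitive_restructuring"},
     "problem": "anxiety", "success": True},
    {"elements": {"exposure", "psychoeducation"},
     "problem": "anxiety", "success": True},
    {"elements": {"play_therapy"},
     "problem": "anxiety", "success": False},
    {"elements": {"behavioral_activation", "cognitive_restructuring"},
     "problem": "depression", "success": True},
]

def element_profile(trials, problem):
    """Frequency of each practice element among successful trials
    for a given clinical presentation."""
    counts = Counter()
    for t in trials:
        if t["problem"] == problem and t["success"]:
            counts.update(t["elements"])
    return counts

print(element_profile(trials, "anxiety").most_common())
# exposure appears in both successful anxiety trials in this toy data
```

The resulting frequency profile is exactly the kind of distribution described above: it summarizes, across independent protocols, which procedures co-occur with success for a given presentation.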
Similar efforts to aggregate practice elements across independent treatments are underway in social work (Barth et al., in press) and in our multi-disciplinary collaboration with the National Child Traumatic Stress Network (e.g., Layne et al., in press). The point of this work is that the patterns in the whole may reveal more than just the sum of the parts—for example, the supportive evidence for a single treatment protocol may be bolstered by the findings from procedurally related treatments. We can imagine possibilities where the actions of therapists are guided not always by a single manual, but at times by the entire relevant treatment literature.
By aggregating across treatments to look for common elements, we seek to outline the robust or important features of an EBT, and to distinguish those features that are nonessential “nuance.” What features are important for efficacy is ultimately an experimental question, and until we have that knowledge in place and can then deliver it in real time to inform treatment prescription, data aggregation methods such as the distillation and matching model are the intermediate step. Kazdin (e.g., 2008a, 2008b) has repeatedly cautioned that an ontology of change mechanisms will take more than our lifetimes to establish. Waiting to discern what we do not know should not stop us from reconsidering what we already do.
Many strategies can be used to aggregate knowledge; identifying practice elements is only one of them. Identifying common processes is another (Collins, Phields, Duncan, & Science Application Team, 2007; Ingram, Flannery, Elkavich, & Rotheram-Borus, 2008; Rotheram-Borus, Ingram, Swendeman, & Flannery, 2009). Processes refer to such things as the degree of structure, activities directed at setting a tone for the group, or the role of the facilitator as active or not. Another approach in both clinical and health promotion trials is to identify standardized functions (Hawe et al., 2004), such as providing education, improving detection, building social networks and support, or facilitating accumulation of instrumental goods. In other words, treatments can be organized more around aims than around strategies to achieve those aims. Complex interventions may have limited impact because we advocate too literally for faithful replication of activities and scripts. There may be multiple acceptable strategies for increasing health knowledge, especially ones that allow cultural tailoring, so long as that function is served. Thus, in our existing compendia of EBTs, almost any dimension can be aggregated and mined: how treatments are arranged, the style with which they are delivered, the manner in which they are supervised, the functions they serve. Each analysis reveals patterns that summarize features of the best treatments.
So what will these patterns tell us? At one end of the spectrum, they can point to intact EBTs. For example, the treatment that shares the most features in common with all of the 45 EBTs relevant to a 12-year-old girl with anxiety could be a reasonable choice, because it is not only evidence-based within its own replication series, but it is backed by over 30 neighboring randomized trials of highly similar treatments. The same cannot be said for an anxiety treatment whose features suggest it is more of an outlier within that group—it may have to stand alone on its own clinical trials.
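One simple way to formalize "shares the most features in common" is pairwise feature overlap, such as the Jaccard index. The protocol names and feature sets below are hypothetical, and real analyses would use richer similarity measures:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two feature sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Hypothetical feature sets for a handful of anxiety EBTs.
ebts = {
    "protocol_A": {"exposure", "relaxation", "psychoeducation"},
    "protocol_B": {"exposure", "psychoeducation", "cognitive_restructuring"},
    "protocol_C": {"exposure", "relaxation", "cognitive_restructuring"},
    "protocol_D": {"hypnosis"},  # an outlier: must stand on its own trials
}

def centrality(name: str) -> float:
    """Mean feature overlap with every other treatment in the group."""
    others = [v for k, v in ebts.items() if k != name]
    return sum(jaccard(ebts[name], o) for o in others) / len(others)

ranked = sorted(ebts, key=centrality, reverse=True)
print(ranked)  # protocol_D, sharing no features with its neighbors, ranks last
```

A high-centrality treatment is "backed by its neighbors" in the sense described above: its evidence base is effectively pooled with that of procedurally similar trials, while a low-centrality outlier is supported only by its own replication series.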
Further along the continuum, aggregate patterns can tell us how to build new treatments, perhaps those that are more flexible and more broadly applicable, that borrow the most commonly used procedures of treatments for various disorders. We have developed and recently tested one such protocol that targets four disorders (anxiety, depression, conduct problems, and traumatic stress) using a combination of the components drawn from existing evidence-based approaches and coordinated by a set of guiding algorithms (MATCH-ADTC; Chorpita & Weisz, 2009). We do not really see this as a new treatment—it is better characterized as a new arrangement of the old treatments—an attempt to do more with what we already have.
At the furthest end of the spectrum of independent versus flexibly aggregated treatments, we see the possibility for real-time design of treatments based solely on libraries of component procedures and libraries of the algorithms for combining and ordering those procedures. We have recently designed and implemented such a direct service prototype in children’s mental health, focusing on the selection and delivery of practice elements using guiding algorithms and in the context of feedback on progress and practice history (Chorpita & Daleiden, 2010). Ultimately, we see possibilities for multi-developer treatment content libraries, which can be delivered flexibly across multiple media and service platforms.
We Need to Co-Design Evidence-Based Treatments
To meaningfully achieve the goal that Kazdin and Blase outline—a portfolio of models—we may also need to move away from a paradigm in which laboratory experts solely design treatments. Treatment may ultimately involve co-design: important initial parameters and procedures built in the lab and real-time adjustments and local adaptation made in the field by clinicians. This will yield treatments that involve shared expertise—leveraging two knowledge bases. A priori, the investigator contributes the essentials: which aspects of treatment should be included and how certain procedures should be performed at the time of service delivery. In real time, the clinician then adds the local expertise to adapt process, content, or logic based on the thousands of context variables that the laboratory developer cannot anticipate. Many EBTs now over-specify procedural details—sometimes right down to what games to play or which characters to use to illustrate a point (cf. Schoenwald et al., 2011). Identifying the non-essential details in EBTs will move those treatments closer to Kazdin and Blase’s (2011) concept of a portfolio of models.
So how do we know which details are non-essential? It very well may be those that do not show up in most treatments when we aggregate across all of them relevant to a particular set of client characteristics—yet another reason to pursue knowledge aggregation. By stripping some of our best treatments down to the essence, we can allow them to be fleshed out again at the point of service by practitioners with local expertise and who are embedded in the local context. Let therapists add their own jokes, games, or metaphors, and let researchers outline the core change strategies that should be preserved within those operations. If we do not know the core strategies, let knowledge aggregation point to promising candidates to be tested in focal or dismantling research designs.
Having researchers and clinicians co-design treatments in this way is consistent with the recently stated ideals of the APA in the statements regarding evidence-based practice in psychology (EBPP) (APA Presidential Task Force on Evidence-Based Practice, 2006). However, despite these ideals, the landscape of clinical practice still appears to be mostly characterized by a false dichotomy of evidence-based practice or clinical judgment. We need more formal models and exemplars for evidence-based practice and clinical judgment. Of course, we do not know how much treatment design should occur a priori in the lab versus in real time in the field. Thus, we also need a new research agenda to study which co-design proportions work best (although we already have some idea that a heavy proportion of investigator-specified design does not; e.g., Addis & Krasnow, 2000; Borntrager et al., 2009).
Treatments Will Have to Work Together
We have many treatments that work but only a limited understanding of how they work together. Kazdin and Blase (2011) describe the image of a pie, with slices representing people covered by different treatments. Whether treatment is ultimately delivered as therapist-selected practice elements, discrete manualized programs, or in some combination, we have not put enough thought into how to assemble arrays of treatments within service systems or how to gauge their collective impact on a community.
How many EBTs are required to serve a given population? The real answer depends upon the local epidemiology; however, our analyses suggest that even learning all of them would generally not be enough to ensure that everyone with mental health needs receives evidence-based care—in fact, surprisingly far from it (Chorpita et al., in press). Simply making new treatments is not likely to solve this problem and only exacerbates the problem of selecting the best set. To that end, we have developed an analytic method for simultaneously combining local population data and treatment outcome data to point to best fitting solutions (Chorpita et al., in press). This methodology applies mathematical modeling to enhance resource allocation and help a service system achieve the greatest reduction in the burden of mental illness, much as was suggested by Kazdin and Blase. This tool can be considered a knowledge management appliance, and we need more like it to address our new problems.
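The flavor of this selection problem can be conveyed with a greedy coverage sketch. The client profiles and coverage sets are illustrative, and the published relevance-mapping method is considerably more elaborate than this simple approximation:

```python
# Toy population: each client has one presenting problem.
clients = {"c1": "anxiety", "c2": "anxiety", "c3": "depression",
           "c4": "trauma", "c5": "conduct"}

# Which presentations each candidate EBT is a demonstrated fit for
# (hypothetical names and coverage).
covers = {
    "ebt_anx": {"anxiety"},
    "ebt_dep": {"depression"},
    "ebt_multi": {"anxiety", "depression"},
    "ebt_trauma": {"trauma"},
}

def greedy_array(clients, covers, max_treatments):
    """Pick up to max_treatments EBTs, each step choosing the one that
    covers the most still-unserved clients (a set-cover heuristic)."""
    chosen, uncovered = [], set(clients)
    for _ in range(max_treatments):
        best = max(covers, key=lambda t: sum(clients[c] in covers[t]
                                             for c in uncovered))
        gained = {c for c in uncovered if clients[c] in covers[best]}
        if not gained:
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

chosen, unserved = greedy_array(clients, covers, max_treatments=2)
print(chosen, unserved)  # c5 (conduct) is left unserved by any candidate EBT
```

Even this toy version shows the central point: a well-chosen small array can cover most of a population, yet some clients may remain unserved by any existing EBT, no matter how many treatments a system adopts.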
Knowledge Must Flow in Many Directions
Despite all our treatment outcome research, the best source of evidence is still arguably the evidence that a client is improving. That, too, is a source of knowledge that is largely untapped. Despite emerging research that measurement feedback systems can improve outcomes (Bickman, 2008; Lambert et al., 2005), there is no widely used appliance for providing clinicians feedback on their clients’ mental health outcomes—or clients with feedback on their own outcomes. This is a major research agenda requiring intensive innovation—there is currently only a handful of prototypes of this kind of technology, a stark contrast with the hundreds of manualized treatments available.
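The logic at the heart of such a feedback appliance is simple to state: compare a client's observed scores to an expected improvement trajectory and flag clients who are off track. The linear trajectory and threshold below are illustrative assumptions, not the algorithm of any published feedback system:

```python
def expected_score(baseline: float, session: int,
                   improvement_per_session: float = 1.5) -> float:
    """Expected symptom score after a given session, assuming a simple
    linear improvement trajectory from the intake baseline."""
    return baseline - improvement_per_session * session

def off_track(baseline: float, scores: list[float],
              tolerance: float = 4.0) -> bool:
    """Flag a case whose latest observed score is substantially worse
    (higher) than the trajectory predicts."""
    session = len(scores)
    return scores[-1] - expected_score(baseline, session) > tolerance

# A client showing little change where meaningful improvement was expected.
print(off_track(40.0, [39.0, 39.0, 40.0]))  # flagged as off track
```

Real systems fit expected trajectories empirically from large outcome datasets rather than assuming a fixed slope, but even this sketch shows how routinely collected scores can be turned into an actionable signal for the clinician.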
Such feedback should not be limited to outcomes, either. As the compendia of EBTs for mental health have grown, so has the literature on the failure to implement EBT with fidelity. For example, only half of service providers trained to use an EBT for HIV prevention ever attempt to implement that treatment, and only half of those providers implement the treatment with fidelity (Collins et al., 2007). The consistency of such findings outside of laboratory clinics, regardless of the specific treatment being evaluated, suggests the additional need for routine feedback on how the clinician is implementing the treatment, even if only at the level of adherence to basic treatment elements. This is yet another knowledge management issue. For a therapist’s future actions to be guided by useful information, we need better methods to deliver that information, whether it comes from the literature, the client’s response, or the therapist’s own past actions (Daleiden & Chorpita, 2005).
Conclusion
We believe Kazdin and Blase (2011) have identified a major failure in knowledge management. If, however, the field interprets this challenge as a failure in knowledge production, we will continue in our old habits of promulgating EBTs that few with mental health needs may ever encounter. The current national economic condition suggests that we should not spend all of our time or other resources solely on producing more treatments that are only incrementally better. We also need new paradigms.
It is time to develop models that allow for designing treatments across laboratories, across disciplines, and across researchers and practitioners. We encourage researchers and treatment developers to consider packaging and studying their new treatments in discrete treatment units or modules that can “plug and play” with those of other developers. We may find that common practice elements, features, processes or functions are robust across a wide variety of delivery platforms or workforces. Meanwhile, we encourage practitioners—broadly defined—to be open to using those treatment elements or modules, and to see them as supports for making their current work more effective. Practitioners will also need to help researchers understand how much of that support is enough, too little, or too much. As we all continue to learn more about how to alleviate mental illness, we must keep in mind that what we know is irrelevant when separated from the question of what to do with what we know.
Contributor Information
Mary Jane Rotheram-Borus, University of California, Los Angeles.
Eric L. Daleiden, PracticeWise, LLC
Adam Bernstein, University of California, Los Angeles.
Taya Cromley, University of California, Los Angeles.
Dallas Swendeman, University of California, Los Angeles.
Jennifer Regan, University of California, Los Angeles.
References
- Addis ME, Krasnow AD. A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology. 2000;68:331–339. doi: 10.1037//0022-006x.68.2.331.
- Allen LA, McHugh RK, Barlow DH. Emotional disorders: A unified protocol. In: Barlow DH, editor. Clinical handbook of psychological disorders. 4th ed. New York, NY: Guilford Press; 2008. pp. 216–249.
- APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61:271–285. doi: 10.1037/0003-066X.61.4.271.
- Barth RP, Lee BR, Lindsey MA, Collins KS, Strieder F, Chorpita BF, Becker KD, Sparks JR. Evidence-based practice at a crossroads: The timely emergence of common elements and common factors. Research on Social Work Practice (in press).
- Bickman L. A Measurement Feedback System (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47:1114–1119. doi: 10.1097/CHI.0b013e3181825af8.
- Borntrager C, Chorpita BF, Higa C, Weisz JR. Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services. 2009;60:1–5. doi: 10.1176/ps.2009.60.5.677.
- Chambers DA, Ringeisen H, Hickman EE. Federal, state, and foundation initiatives around evidence-based practices for child and adolescent mental health. Child and Adolescent Psychiatric Clinics of North America. 2005;14:307–327. doi: 10.1016/j.chc.2004.04.006.
- Chambless DL, Hollon SD. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998;66:7–18. doi: 10.1037//0022-006x.66.1.7.
- Chorpita BF, Bernstein A, Daleiden EL. Empirically guided coordination of multiple evidence-based treatments: An illustration of relevance mapping in children’s mental health services. Journal of Consulting and Clinical Psychology (in press). doi: 10.1037/a0023982.
- Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology. 2009;77:566–579. doi: 10.1037/a0014565.
- Chorpita BF, Daleiden EL. Building evidence-based systems in children’s mental health. In: Kazdin AE, Weisz JR, editors. Evidence-based psychotherapies for children and adolescents. New York: Oxford; 2010. pp. 482–499.
- Chorpita BF, Weisz JR. MATCH-ADTC: Modular approach to therapy for children with anxiety, depression, trauma, or conduct problems. Satellite Beach, FL: PracticeWise, LLC; 2009.
- Collins C, Phields ME, Duncan T, & Science Application Team. An agency capacity model to facilitate implementation of evidence-based behavioral interventions by community-based organizations. Journal of Public Health Management and Practice, Supplement. 2007:S16–S23. doi: 10.1097/00124784-200701001-00005.
- Daleiden E, Chorpita BF. From data to wisdom: Quality improvement strategies supporting large-scale implementation of evidence based services. Child and Adolescent Psychiatric Clinics of North America. 2005;14:329–349. doi: 10.1016/j.chc.2004.11.002.
- Department of Health and Human Services (DHHS). Final report on the dimensions of organizational readiness (DOOR) in child-serving clinics. National Institutes of Health; 2005. Grant R24 MH068708-01.
- Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N. Lost in knowledge translation: Time for a map? Journal of Continuing Education in the Health Professions. 2006;26:13–24. doi: 10.1002/chp.47.
- Hawe P, Shiell A, Riley T. Complex interventions: How “out of control” can a randomized controlled trial be? BMJ. 2004;328:1561–1563. doi: 10.1136/bmj.328.7455.1561.
- Ingram BL, Flannery D, Elkavich A, Rotheram-Borus MJ. Common processes in evidence-based adolescent HIV prevention programs. AIDS and Behavior. 2008;12:374–383. doi: 10.1007/s10461-008-9369-1.
- Kazdin AE. Research design in clinical psychology. 4th ed. Boston: Allyn & Bacon; 2003.
- Kazdin AE. Evidence-based treatments and delivery of psychological services: Shifting our emphases to increase impact. Psychological Services. 2008a;5:201–215.
- Kazdin AE. Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist. 2008b;63:146–159. doi: 10.1037/0003-066X.63.3.146.
- Kazdin AE, Blase SL. Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspectives on Psychological Science. 2011;6:21–37. doi: 10.1177/1745691610393527.
- Lambert MJ, Harmon C, Slade K, Whipple J, Hawkins E. Providing feedback to psychotherapists on their patients’ progress: Clinical results and practice suggestions. Journal of Clinical Psychology. 2005;61:165–174. doi: 10.1002/jclp.20113.
- Layne CM, Ghosh Ippen C, Strand V, Stuber M, Abramovitz R, Reyes G, Amaya Jackson L, Ross L, Curtis A, Lipscomb L, Pynoos R. The core curriculum on childhood trauma: A tool for training a trauma-informed workforce. Psychological Trauma: Theory, Research, Practice, and Policy (in press).
- Rotheram-Borus MJ, Ingram BL, Swendeman D, Flannery D. Common principles embedded in effective adolescent HIV prevention programs. AIDS and Behavior. 2009;13:387–398. doi: 10.1007/s10461-009-9531-4.
- Schoenwald SK, Garland AF, Southam-Gerow MA, Chorpita BF, Chapman JE. Adherence measurement in treatments for disruptive behavior disorders: Pursuing clear vision through varied lenses. 2011. Manuscript submitted for publication.
- U.S. Department of Health and Human Services. US Department of Health and Human Services Strategic Plan. Washington, DC: 2007.