Abstract
This editorial provides a brief history of mental health services research over the last 30 years and how findings from large-scale studies shocked the field and led to the lines of inquiry culminating in current implementation science research. I review the manuscripts published in this special issue of Administration and Policy in Mental Health in light of that history and use these studies as a way to assess the state of the field. Finally, I present five takeaways extracted from these articles that may be useful in considering future directions for implementation research.
I wasn’t an IRI fellow but I am a wannabe. I represent the tail end of two generations of researchers who fumbled sideways into implementation science by necessity. Twenty-five years ago implementation science was not at the forefront of thinking in mental health research, and certainly wasn’t known by that name. Now it comprises a – if not the – core set of activities and concepts in mental health services research. The impetus for this seismic change came as the result of a substantial and possibly misguided financial investment and two landmark experiments with unexpected results.
A Brief and Selective History of How Mental Health Services Research Arrived at Implementation Science
In the 1980s, there was growing recognition that most children with mental health problems were not getting appropriate care (Knitzer & Olson, 1982; Tuma, 1989). Inpatient stays for youth had reached a startling and unacceptable high (Behar, 1990). This problem was largely conceptualized as one of the mental health system’s capacity to provide less restrictive levels of care and of families’ lack of access to care. In 1984, Congress appropriated substantial funds to implement the Child and Adolescent Service System Program (CASSP), envisioned as a comprehensive mental health system of care for youth and their families (Schlenger, Etheridge, Hansen, Fairbank, & Onken, 1992). In the CASSP model, services were provided in the community, included all ancillary services needed for the child and family to be engaged in treatment, were coordinated across child-serving systems, and were provided in the least restrictive setting possible. The services were to “wrap around” the child and family, addressing every need. Service systems across the country received federal funds to create these systems of care (Stroul & Friedman, 1988). CASSP principles rapidly became the core principles of many advocacy groups and service systems. These principles seemed intuitive, were based on values commonly shared among clinicians, researchers, and advocates, and there was every expectation that their operationalization would result in better care – and therefore better outcomes – for children and their families.
The Stark County and Fort Bragg experiments were designed to test this Congressionally mandated, unprecedented expansion of children’s mental health services. Fort Bragg comprised a quasi-experiment with almost 1000 children and their families. In a rigorous longitudinal analysis, there were no differences between groups on any of 10 child outcome measures of interest (Bickman, 1996a, 1996b). Criticism of the study was swift: it was quasi-experimental; the CASSP principles weren’t fully implemented; and military families’ experiences were not comparable to those of the general public (Hoagwood, 1997; Mordock, 1997; Saxe & Cross, 1997). A second study took place in Stark County, Ohio. This one comprised a randomized trial of 350 children and their families in the civilian population. For all intents and purposes, it replicated the findings from the Fort Bragg study (Bickman, Summerfelt, Firth, & Douglas, 1997; Bickman, Summerfelt, & Noser, 1997). Again there was much criticism; the two most common complaints were that the outcome measures were wrong and that the CASSP principles already were widely implemented, meaning that there was little difference between the experimental and control conditions (Friedman & Burns, 1996). Sub-analyses suggested that neither of these criticisms was valid (Bickman et al., 1998). Treatment dose was not associated with outcome, and children in either group who dropped out of treatment had the same outcomes as those who did not.
The principal investigator of both studies, Len Bickman, was searingly honest in his examination of these data. I sat in quiet awe at the University of South Florida’s Children’s Mental Health Conference in 1998, listening to him present these results from five years of data (Bickman, Lambert, Andrade, & Penaloza, 2000) and fervently hoping that it wasn’t true, that an audience member would ask some question or make some perceptive observation pointing out Dr. Bickman’s mistake. If you weren’t around 25 years ago, you may not fully realize the gut punch these results represented. The brutal and not-so-subtle implication was that the best we could do wasn’t good enough to improve behavioral health outcomes for children.
Three Responses to a Crisis
At least three lines of thinking and research emerged in response to these findings. While this taxonomy is overlapping, not complete, and perhaps poorly named, we might categorize these areas as: 1) soul searching, 2) looking outward, and 3) looking inward. Soul searching comprised an in-depth, pre-theoretical and primarily qualitative/ethnographic exploration of service systems and the therapeutic encounter. The fundamental questions underlying much of this research could be construed as, “What did we miss when it comes to the needs of children and their families, the way they access and use services, the way those services are provided, and the construction of the systems in which those services are provided?” (Hohmann, 1999; Ware, Tugenberg, Dickey, & McHorney, 1999). The stakeholder engagement (Aarons, Wells, Zagursky, Fettes, & Palinkas, 2009; Fischer, Shumway, & Owen, 2002; Garland, Lewczyk-Boxmeyer, Gabayan, & Hawley, 2004; Israel, Schulz, Parker, & Becker, 1998), qualitative and mixed-methods inquiry (Palinkas, 2014), and broad compendia of implementation barriers and facilitators that comprise many implementation science frameworks could be considered direct descendants of this line of research, which sought to unpack the “black box” of mental health services and the context in which they are delivered (Bickman, 2000; Birken et al., 2017; Williams & Beidas, 2019).
A second line of inquiry, looking outward, was catalyzed by the work of researchers like Charles Glisson, who wisely pointed out that services are provided through organizations, and that characteristics of those organizations may affect both how care is delivered and the associated outcomes (Glisson, 1994; Glisson & Hemmelgarn, 1998). If we want to improve outcomes, Glisson and colleagues suggested, we must first change the often overburdened and variably managed organizations in which those services are delivered (Glisson, 2002). An important study conducted a decade ago provided evidence for this hypothesis and still constitutes one of the most rigorous tests of an implementation strategy to date (Glisson et al., 2010). Implementation scientists borrowed models and theories from industrial psychology and organizational dynamics, often with a focus on leadership and organizational culture and climate (Novins, Green, Legha, & Aarons, 2013; Powell et al., 2012). To a large extent, most tested implementation strategies rely on this line of research (Powell et al., 2012).
A third line of inquiry, looking inward, was brought to the fore in a commentary by Weisz, Han and Valeri, entitled “More of What?”, published shortly after the Fort Bragg experiment results. It comprised an insightful treatise on the need to measure use of evidence-based practice and treatment fidelity in these large-scale evaluations (Weisz, Han, & Valeri, 1997). As the authors state, “there is no evidence that any of the specific treatments used [in the Fort Bragg study] had any empirical support.” In fact, research suggested that assessments and treatments delivered in the community had little connection to the evidence base (Bickman, 2000; Garland, Hurlburt, Brookman-Frazee, Taylor, & Accurso, 2010; Lewczyk, Garland, Hurlburt, Gearity, & Hough, 2003). If we want to improve community outcomes, those researchers suggested, we need a suite of readily implementable, demonstrably effective interventions that we move with care into community settings. Researchers began to wrestle with words like “transportability” and “dissemination” (Schoenwald & Hoagwood, 2001). Rigorous tests of modular (Weisz et al., 2012), trans-diagnostic, and principle-based (Farchione et al., 2012; Kennedy, Bilek, & Ehrenreich-May, 2019) approaches demonstrated the validity of this line of thinking and have brought into question the population-level utility of complicated, multi-component treatments, which have been the ones most tested in university settings but which community clinicians often have difficulty using the way they were designed (Beidas et al., 2019). Research on measuring and improving treatment fidelity (especially through direct-to-clinician strategies) and on developing treatments designed for community implementation by a workforce with little time for training and supervision has developed from this line of thinking (Cheron et al., 2019; Ingersoll & Wainer, 2013).
As these three bodies of research evolved, so did funding mechanisms to support them. The National Institute of Mental Health published funding announcements requesting proposals on implementation, and then partnered with other institutes to form the Dissemination and Implementation Research in Health study section, which welcomed studies examining this problem. As of this writing, NIH and other entities have sponsored more than a decade of conferences to bring together people who develop and test models and strategies to address this set of thorny problems that do not fit neatly into any one discipline. In parallel, the field created training programs for the next generation of researchers who, unlike their predecessors, would be trained from early in their careers as implementation scientists (Baumann et al., 2019; Brownson et al., 2017; Proctor & Chambers, 2017; Tabak et al., 2017).
Evaluating the Implementation Research Institute
And that’s the purpose of the Implementation Research Institute (IRI). IRI itself represents an implementation strategy for the field: a carefully constructed program to create and support researchers devoted to building on the surprising findings of Fort Bragg, Stark County, and similar studies, and to developing methods for figuring out what went wrong and how to fix it. This issue of APMH represents a self-reflection on IRI’s value in advancing the field of implementation science. Baumann and colleagues’ article most explicitly quantifies IRI’s productivity. While determining causality is challenging in these cases, applicants selected for IRI were more likely than those who were not to receive implementation science grants and to publish related articles. By the most proximal and academic metrics, therefore, IRI is successful in moving the field forward. The remaining articles represent perhaps more illustrative examples of IRI’s legacy. Each article comprises both a rigorous study by gifted and well-trained researchers and a case study of what the field has accomplished. Each offers an important contribution to the field. In what follows I enumerate just a few observations about these contributions, the nice links they form with each other, and what they say about the state of implementation science.
To start, Lau and colleagues nicely lay out one of the fundamental challenges of implementation research: implementation happens all the time, usually doesn’t adhere nicely to a researcher’s schedule, and rarely is amenable to study using traditionally rigorous research designs. Community decision makers have more important problems than supporting research; this is their world, and we’re just trying to study it. Therefore we often must rely on observational and, at best, quasi-experimental studies rather than the randomized designs that are considered the gold standard. While randomization may increase internal validity, studying implementation in the wild may be much more ecologically valid and yield more important and actionable results than investigator-initiated trials. Lau et al. explicitly describe the tension between the ethnography and the RCT. They also point to the fact that the “participant” or unit of analysis in implementation studies is the practitioner, and that we must navigate the tension between our ideas as researchers regarding what therapists should do and the resources and constraints those therapists face.
Saldana and colleagues neatly thread the needle of rigorously studying the implementation that happens in the wild. The “Stages of Implementation Completion” (SIC) measure offers a way to quantify implementation progress. Saldana points out the lack of validated, universal measurement tools to assess even the most basic constructs that comprise implementation. There are two aspects of the SIC measure that I think are particularly noteworthy. First, by laying out explicitly the stages of implementation, Saldana, in this and previous work, offers a specific, testable pathway for how implementation happens. One can disagree with her about the sequence or importance of various steps, but careful delineations like this are critical to moving our field forward. In some ways, it is a contribution similar to Proctor’s implementation strategy components of actors, actions, dose, and temporality (Proctor, Powell, & McMillen, 2013). Both represent a call to the field to delineate more rigorously what both implementation and implementation strategies mean. I hope that this type of specificity soon will be as expected in implementation studies as a CONSORT diagram is for randomized trials.
A second important aspect of the SIC is that it is adaptable to different implementations. Rather than create a different measure for every study, we should consider measures in which the question stems or general framework are validated but the behavior of interest can be specified flexibly.
Gopalan and colleagues bring up a different but equally important point related to studying implementation: if the implementation we care about occurs in communities, then studying it requires close community partnerships. Yet scientists rarely are trained in the skills needed to form and maintain these partnerships. Their paper nicely lays out these skills and two case studies. It’s important that they focus on early-stage investigators, because developing these skills and relationships often comes at the expense of traditional academic productivity. Teaching scientists to balance these two domains and getting university promotion committees to value the former will be critical to advancing implementation science. Their article lays the groundwork for an extension of implementation science training.
Hamilton and colleagues use their partnership with the Veterans Administration to give an excellent example of “looking inward.” They describe the many psychiatric disorders that frequently co-occur with each other, the VA’s reliance on collaborative care for most of the mental health care it provides, and the challenges that raises for providing evidence-based mental health care, particularly for PTSD, to women veterans. In this case, the requirements of evidence-based treatment and the way it is packaged conflict with its acceptability to patients and clinicians, with the clinical profile of those patients, and with the structure designed to deliver it. While the authors propose the need for efforts to increase clinician confidence in delivering treatment, I think that their more insightful and potentially actionable suggestion is that we explore other forms of transdiagnostic and modular treatments for this population and this setting.
Brookman-Frazee and colleagues examine system-level effects on implementation. They find that in both schools and community clinics, the roles of partnership and leadership are critical. The roles of funding and various central policies differ based on the autonomy of entities within the system. Their article provides an excellent example of the need to “look outward” and of the importance of applying the same measures to different settings so that we can have rigorous data on how different organizational missions and structures affect implementation.
Elwy and colleagues examined domains well established in social psychology that have been applied with varying degrees of rigor in implementation science: influence and normative pressure. They ask, how do we leverage these to effect successful implementation? Their conclusion is that behavior change often is most successful when it’s social, a finding that has been replicated in many domains other than implementation. Their study also links nicely to that of Brookman-Frazee and colleagues, in that to the extent organizational variables are important in increasing the use of evidence-based practice, their effects likely are mediated through some change in practitioner behavior. Linking psychological theories that include constructs proximal to behaviors of interest with organizational theories that describe constructs acting upon the practitioner will be critical to developing causal models predicting implementation, and will offer a concrete set of levers by which to change implementation.
Five Takeaways from this Special Issue of APMH
In reading these studies through my admittedly biased lens, I found several themes that may offer some next steps in this excellent work and in implementation research in general:
1. Others have described this in depth before, but central to almost all of these studies is the critical importance of community partnership. For implementation to be successful, those in charge and those on the front line must have input on the problem of interest and be involved in strategy design. We should place a heavy emphasis on reducing burden for those we expect to implement. The implementer is the client, and we should address problems of immediate import to them, making their lives easier as well as their practice more effective.
2. Understand the specifics of the thing you want implemented. Many interventions, such as the ones Hamilton, Saldana, and Brookman-Frazee describe, are multi-component. What drives successful implementation of one component may not be the same as what drives implementation of another. We must be prepared to modify or sacrifice some components in the service of successful implementation. Designs like the multiphase optimization strategy (MOST) and sequential multiple assignment randomized trials (SMART) will be critical to identifying not just which components comprise the active ingredients of an intervention, but also which, when paired with implementation strategies, have a chance of success in community settings (Collins, Murphy, & Strecher, 2007).
3. Several studies in this issue mention or directly describe the role of leadership. A growing body of research suggests that when it comes to implementation, little starts or sustains for very long without committed leadership. While I agree completely, I have concerns about the “implementation leadership” construct. Sometimes leaders agree with us about what should be implemented, and even then may vary in the extent to which they can implement it effectively. On the other hand, many great organizational leaders may think very little of your intervention. They may think, quite legitimately, that it isn’t right for their organization. Again, the Hamilton and Brookman-Frazee studies provide great examples. We are in danger of equating “agreeing with us” with good leadership. This stands in sharp contrast to what many of the studies here report about valuing community voices and to the skill set Gopalan and colleagues advocate for as part of implementation science training.
4. Providers are human beings. This is articulated best in the Elwy study, which demonstrates that practitioners are subject to the same constraints and biases as all other human beings. The limited resources available in many community mental health settings may exacerbate these biases. It may be worth construing implementation science as a science of adult behavior change within organizations. As such, we should draw on disciplines such as social and industrial psychology and behavioral economics, which have been successful in leveraging psychological constructs such as norms and cognitive biases and in making the behavior of interest as easy as possible to complete.
5. Finally, nowhere more than in implementation science do we feel the tension between internal and external validity. This is Lau and colleagues’ central thesis. Often implementation scientists interpret this tension as necessitating a trade-off between rigor and relevance. I think this is a false dichotomy, one that relates to the nascence of our field, to the fact that constructs and measures are still in development, and to the fact that we have not yet agreed on causal theory. We must work as a field toward the point at which relevance and rigor go hand in hand.
The Elephant in the Room?
There is an unstated set of facts to which some of the studies in this issue at least obliquely refer: we are working with underfunded, deprofessionalized, often disrespected systems, such as community mental health, schools, and primary care, that serve those who are least able to advocate for themselves. As implementation scientists, we must grapple with these larger issues that directly or indirectly influence every implementation decision. It is exciting to see in recent years that implementation scientists are embracing the study of related policies, because unless we address the fundamental issues of resources and accountability, the changes we attempt to effect won’t sustain.
Reading these manuscripts excites me about the direction our field is taking and gives me confidence in new generations of researchers steeped in ways of thinking and conducting studies directly tied to implementation. The field is in good hands, which is reassuring, because there is much left to accomplish.
Acknowledgements
This work was supported by years of soul searching and wandering in the desert, getting to be a fly on the wall in rooms with many people wiser and more experienced than I, and the Penn ALACRITY center (P50 MH113840).
Footnotes
Compliance with Ethical Standards
The author has no potential conflicts of interest to disclose. This commentary did not comprise research involving human or animal participants; therefore, no informed consent was required.
References
- Aarons GA, Wells RS, Zagursky K, Fettes DL, & Palinkas LA (2009). Implementing evidence-based practice in community mental health agencies: a multiple stakeholder analysis. American Journal of Public Health, 99(11), 2087–2095. doi: 10.2105/AJPH.2009.161711
- Baumann AA, Carothers BJ, Landsverk J, Kryzer E, Aarons GA, Brownson RC, … Proctor EK (2019). Evaluation of the Implementation Research Institute: Trainees' Publications and Grant Productivity. Administration and Policy in Mental Health. doi: 10.1007/s10488-019-00977-4
- Behar L (1990). Financing mental health services for children and adolescents. Bulletin of the Menninger Clinic, 54(1), 127–139; discussion 140–128.
- Beidas RS, Williams NJ, Becker-Haimes EM, Aarons GA, Barg FK, Evans AC, … Mandell DS (2019). A repeated cross-sectional study of clinicians' use of psychotherapy techniques during 5 years of a system-wide effort to implement evidence-based practices in Philadelphia. Implementation Science, 14(1), 67. doi: 10.1186/s13012-019-0912-4
- Bickman L (1996a). The evaluation of a children's mental health managed care demonstration. Journal of Mental Health Administration, 23(1), 7–15. doi: 10.1007/bf02518639
- Bickman L (1996b). Implications of a children's mental health managed care demonstration evaluation. Journal of Mental Health Administration, 23(1), 107–117. doi: 10.1007/bf02518647
- Bickman L (2000). The Most Dangerous and Difficult Question in Mental Health Services Research. Mental Health Services Research, 2(2), 71–72. doi: 10.1023/A:1010100119789
- Bickman L, Lambert EW, Andrade AR, & Penaloza RV (2000). The Fort Bragg continuum of care for children and adolescents: mental health outcomes over 5 years. Journal of Consulting and Clinical Psychology, 68(4), 710–716.
- Bickman L, Salzer MS, Lambert EW, Saunders R, Summerfelt WT, Heflinger CA, & Hamner K (1998). Rejoinder to Mordock's critique of the Fort Bragg Evaluation Project: the sample is generalizable and the outcomes are clear. Child Psychiatry and Human Development, 29(1), 77–91. doi: 10.1023/a:1022687331435
- Bickman L, Summerfelt WMT, Firth JM, & Douglas SM (1997). The Stark County Evaluation Project: Baseline results of a randomized experiment. In Evaluating mental health services: How do programs for children "work" in the real world? (pp. 231–258). Thousand Oaks, CA: Sage Publications.
- Bickman L, Summerfelt WT, & Noser K (1997). Comparative outcomes of emotionally disturbed children and adolescents in a system of services and usual care. Psychiatric Services, 48(12), 1543–1548. doi: 10.1176/ps.48.12.1543
- Birken SA, Powell BJ, Presseau J, Kirk MA, Lorencatto F, Gould NJ, … Damschroder LJ (2017). Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review. Implementation Science, 12, 2. doi: 10.1186/s13012-016-0534-z
- Brownson RC, Proctor EK, Luke DA, Baumann AA, Staub M, Brown MT, & Johnson M (2017). Building capacity for dissemination and implementation research: one university's experience. Implementation Science, 12(1), 104. doi: 10.1186/s13012-017-0634-4
- Cheron DM, Chiu AAW, Stanick CF, Stern HG, Donaldson AR, Daleiden EL, & Chorpita BF (2019). Implementing Evidence Based Practices for Children's Mental Health: A Case Study in Implementing Modular Treatments in Community Mental Health. Administration and Policy in Mental Health, 46(3), 391–410. doi: 10.1007/s10488-019-00922-5
- Collins LM, Murphy SA, & Strecher V (2007). The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. American Journal of Preventive Medicine, 32(5 Suppl), S112–S118. doi: 10.1016/j.amepre.2007.01.022
- Farchione TJ, Fairholme CP, Ellard KK, Boisseau CL, Thompson-Hollands J, Carl JR, … Barlow DH (2012). Unified protocol for transdiagnostic treatment of emotional disorders: a randomized controlled trial. Behavior Therapy, 43(3), 666–678. doi: 10.1016/j.beth.2012.01.001
- Fischer EP, Shumway M, & Owen RR (2002). Priorities of consumers, providers, and family members in the treatment of schizophrenia. Psychiatric Services, 53(6), 724–729. doi: 10.1176/appi.ps.53.6.724
- Friedman RM, & Burns BJ (1996). The evaluation of the Fort Bragg Demonstration Project: an alternative interpretation of the findings. Journal of Mental Health Administration, 23(1), 128–136. doi: 10.1007/bf02518651
- Garland AF, Hurlburt MS, Brookman-Frazee L, Taylor RM, & Accurso EC (2010). Methodological challenges of characterizing usual care psychotherapeutic practice. Administration and Policy in Mental Health, 37(3), 208–220. doi: 10.1007/s10488-009-0237-8
- Garland AF, Lewczyk-Boxmeyer CM, Gabayan EN, & Hawley KM (2004). Multiple stakeholder agreement on desired outcomes for adolescents' mental health services. Psychiatric Services, 55(6), 671–676. doi: 10.1176/appi.ps.55.6.671
- Glisson C (1994). The effect of services coordination teams on outcomes for children in state custody. Administration in Social Work, 18(4), 1–23. doi: 10.1300/J147v18n04_01
- Glisson C (2002). The organizational context of children's mental health services. Clinical Child and Family Psychology Review, 5(4), 233–253. doi: 10.1023/a:1020972906177
- Glisson C, & Hemmelgarn A (1998). The effects of organizational climate and interorganizational coordination on the quality and outcomes of children's service systems. Child Abuse and Neglect, 22(5), 401–421. doi: 10.1016/s0145-2134(98)00005-2
- Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS, & Chapman JE (2010). Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology, 78(4), 537–550. doi: 10.1037/a0019160
- Hoagwood K (1997). Interpreting nullity. The Fort Bragg experiment--a comparative success or failure? American Psychologist, 52(5), 546–550. doi: 10.1037//0003-066x.52.5.546
- Hohmann AA (1999). A Contextual Model for Clinical Mental Health Effectiveness Research. Mental Health Services Research, 1(2), 83–91. doi: 10.1023/a:1022382219660
- Ingersoll B, & Wainer A (2013). Initial efficacy of project ImPACT: a parent-mediated social communication intervention for young children with ASD. Journal of Autism and Developmental Disorders, 43(12), 2943–2952. doi: 10.1007/s10803-013-1840-9
- Israel BA, Schulz AJ, Parker EA, & Becker AB (1998). Review of community-based research: assessing partnership approaches to improve public health. Annual Review of Public Health, 19, 173–202. doi: 10.1146/annurev.publhealth.19.1.173
- Kennedy SM, Bilek EL, & Ehrenreich-May J (2019). A Randomized Controlled Pilot Trial of the Unified Protocol for Transdiagnostic Treatment of Emotional Disorders in Children. Behavior Modification, 43(3), 330–360. doi: 10.1177/0145445517753940
- Knitzer J, & Olson L (1982). Unclaimed children: the failure of public responsibility to children and adolescents in need of mental health services. Washington, DC: Children's Defense Fund.
- Lewczyk CM, Garland AF, Hurlburt MS, Gearity J, & Hough RL (2003). Comparing DISC-IV and clinician diagnoses among youths receiving public mental health services. Journal of the American Academy of Child and Adolescent Psychiatry, 42(3), 349–356. doi: 10.1097/00004583-200303000-00016
- Mordock JB (1997). The Fort Bragg continuum of care Demonstration Project: the population served was unique and the outcomes are questionable. Child Psychiatry and Human Development, 27(4), 241–254. doi: 10.1007/bf02353353
- Novins DK, Green AE, Legha RK, & Aarons GA (2013). Dissemination and implementation of evidence-based practices for child and adolescent mental health: a systematic review. Journal of the American Academy of Child and Adolescent Psychiatry, 52(10), 1009–1025.e18. doi: 10.1016/j.jaac.2013.07.012
- Palinkas LA (2014). Qualitative and mixed methods in mental health services and implementation research. Journal of Clinical Child and Adolescent Psychology, 43(6), 851–861. doi: 10.1080/15374416.2014.910791
- Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, … York JL (2012). A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review, 69(2), 123–157. doi: 10.1177/1077558711430690
- Proctor EK, & Chambers DA (2017). Training in dissemination and implementation research: a field-wide perspective. Translational Behavioral Medicine, 7(3), 624–635. doi: 10.1007/s13142-016-0406-8
- Proctor EK, Powell BJ, & McMillen JC (2013). Implementation strategies: recommendations for specifying and reporting. Implementation Science, 8, 139. doi: 10.1186/1748-5908-8-139
- Saxe L, & Cross TP (1997). Interpreting the Fort Bragg Children's Mental Health Demonstration Project. The cup is half full. American Psychologist, 52(5), 553–556. doi: 10.1037//0003-066x.52.5.553
- Schlenger WE, Etheridge RM, Hansen DJ, Fairbank DW, & Onken J (1992). Evaluation of state efforts to improve systems of care for children and adolescents with severe emotional disturbances: the CASSP (Child and Adolescent Service System Program) initial cohort study. Journal of Mental Health Administration, 19(2), 131–142. doi: 10.1007/bf02521314
- Schoenwald SK, & Hoagwood K (2001). Effectiveness, transportability, and dissemination of interventions: what matters when? Psychiatric Services, 52(9), 1190–1197. doi: 10.1176/appi.ps.52.9.1190
- Stroul BA, & Friedman RM (1988). Caring for severely emotionally disturbed children and youth: Principles for a system of care. Children Today, 17(4), 11–15.
- Tabak RG, Padek MM, Kerner JF, Stange KC, Proctor EK, Dobbins MJ, … Brownson RC (2017). Dissemination and Implementation Science Training Needs: Insights From Practitioners and Researchers. American Journal of Preventive Medicine, 52(3 Suppl 3), S322–S329. doi: 10.1016/j.amepre.2016.10.005
- Tuma JM (1989). Mental health services for children. The state of the art. American Psychologist, 44(2), 188–199. doi: 10.1037//0003-066x.44.2.188
- Ware NC, Tugenberg T, Dickey B, & McHorney CA (1999). An ethnographic study of the meaning of continuity of care in mental health services. Psychiatric Services, 50(3), 395–400. doi: 10.1176/ps.50.3.395
- Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, … the Research Network on Youth Mental Health (2012). Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: a randomized effectiveness trial. Archives of General Psychiatry, 69(3), 274–282. doi: 10.1001/archgenpsychiatry.2011.147
- Weisz JR, Han SS, & Valeri SM (1997). More of what? Issues raised by the Fort Bragg study. American Psychologist, 52(5), 541–545. doi: 10.1037//0003-066x.52.5.541
- Williams NJ, & Beidas RS (2019). Annual Research Review: The state of implementation science in child psychology and psychiatry: a review and suggestions to advance the field. Journal of Child Psychology and Psychiatry, 60(4), 430–450. doi: 10.1111/jcpp.12960