Author manuscript; available in PMC: 2021 Apr 1.
Published in final edited form as: Prof Psychol Res Pr. 2019 Jul 18;51(2):134–144. doi: 10.1037/pro0000258

After the Study Ends: A Qualitative Study of Factors Influencing Intervention Sustainability

Sarah Kate Bearman 1, Abby Bailin 2, Rachel Terry 3, John R Weisz 4
PMCID: PMC7518310  NIHMSID: NIHMS1038527  PMID: 32982034

Abstract

Sustaining evidence-based practices after initial training and support has ended is necessary to ensure lasting improvements in youth mental health services. This study examined factors impacting community clinicians’ decisions to sustain a transdiagnostic youth intervention following participation in a study. The aim of the study was to identify potentially mutable factors impacting sustainability to inform future implementation efforts. Thirteen clinicians (85% women, 92% Caucasian, M age = 35.6) completed interviews after participating in an open trial of an evidence-based intervention for depression, anxiety, and conduct disorders. Interviews were analyzed using thematic analysis methods. All (100%) clinicians reported current use of the intervention. Four themes emerged related to sustainability. Clinicians (100%) reported that making modifications, alignment with prior training, and relative advantage influenced their current intervention use. Clinicians (100%) reported that knowledge transfer from treatment developers was vital to sustainability. They (92%) noted a number of logistical, inner-organizational, and client-level barriers to sustainability. Lastly, clinicians (92%) identified factors related to scaling up the intervention. A variety of personal, organizational, logistical, and client variables influence the sustainment of new interventions, and could be leveraged in future implementation efforts.

Keywords: mental health, evidence-based practice, psychotherapists, psychotherapy, qualitative research, community mental health services, children


What types of support are needed to sustain evidence-based practices (EBPs) in community mental health settings following initial intensive training and consultation? To date, dissemination (transmission of information about EBPs) and implementation (use of these practices among community-based clinicians) efforts have primarily focused on the introduction of new practices into clinical settings; much less is known about the factors that influence the long-term use of such treatments over time (Chambers, Glasgow, & Stange, 2013; Stirman et al., 2012). Ultimately, implementation efforts, which often require significant resources and upfront investment, have limited value if they are not sustained (Chambers et al., 2013). The identification of mutable factors that influence sustainability could help to maximize implementation efforts.

Historically, researchers conceptualized sustainability, or the maintenance and long-term use of a new practice over time, as an extension of the earlier phases of implementation (Bowman, Sobo, Asch, & Gifford, 2008). More recently, sustainability has been recognized as a distinct phase of implementation, influenced by unique factors and obstacles (Bowman et al., 2008; Stirman et al., 2012). Challenges to the systematic study of sustainability include a lack of consensus about how to define and operationalize sustainability (Johnson, Hays, Center, & Daley, 2004), the appropriate timeframe for determining whether an EBP has been sustained (Chambers et al., 2013), and which outcomes should serve as indicators of sustainability. For example, some researchers maintain that effective EBP sustainability entails minimal change over time and high fidelity to the initial model (Scheirer & Dearing, 2011), while others have suggested that effective sustainability might also require processes of adaptation (Chambers, 2011; Stirman, Miller, Toder, & Calloway, 2013). Scheirer, Hartling, and Hagerman (2008) contend that successful sustainability can mean different things depending on the needs of stakeholders, highlighting the fact that measuring EBP sustainability can be a complex and multifaceted process.

The Exploration, Preparation, Implementation and Sustainment (EPIS) framework developed by Aarons, Hurlburt, and Horwitz (2011) suggests that “inner context factors” (i.e., factors within the unit providing services) and “outer context factors” (i.e., factors related to the larger environment of the service unit; Aarons et al., 2011; Novins, Green, Legha, & Aarons, 2013) play a role in the sustainment of EBPs. They hypothesize that key inner context factors related to sustainment include organizational characteristics, local fidelity monitoring and support, and staffing or clinician factors. Key outer context factors include sociopolitical factors, funding, and public-academic collaborations.

Within the field of behavioral health care, clinicians are the most proximal medium through which innovative treatments are implemented and sustained. EBPs are often part of complex, multisession treatment packages that depend on the clinician to execute them with a certain level of skill and fidelity (Becker & Stirman, 2011; Herschell, Kolko, Baumann, & Davis, 2010). In the past, the bridge between clinical research and practice was generally one-way, with researchers directing clinicians on how best to implement specific evidence-based interventions (Kazdin, 2008). Today, there is general recognition in the field that researchers should promote two-way dialogue with clinicians in order to make lasting, sustainable changes to clinical practice (Weisz & Gray, 2008). Research has shown that implementation efforts that were not sustained often lacked a focus on core issues related to clinician barriers and the delivery system design of the intervention (Feldstein & Glasgow, 2008). Additionally, the current literature suggests that clinician attitudes toward EBPs and research play a role in whether such treatments are used long-term (Jensen-Doss, Hawley, Lopez, & Osterberg, 2009; Nelson & Steele, 2008).

A study by Chu and colleagues (2015) illustrates the utility of obtaining clinician feedback to improve EBP development and sustainability. Chu and colleagues interviewed community clinicians three to five years after they had completed training in an EBP for depressed or anxious youths (Southam-Gerow et al., 2010; Weisz et al., 2009). The clinicians reported continued use of the EBPs across a range of clinical settings; however, clinician feedback also indicated that they did not view all components of the EBP to be equally useful in their practice. They demonstrated a preference for self-selecting aspects of the EBP that they found most relevant to each client, and also indicated that modular, flexible treatments were preferred to more traditional, circumscribed EBPs. As clinicians are critical partners in the implementation process, researchers must understand clinicians’ perspectives on EBP usability. In doing so, researchers have the potential to identify actionable strategies to increase the sustainability of EBPs in real world clinical settings (Powell, Hausmann-Stabile, & McMillen, 2013).

The existing literature provides preliminary, theoretical information on the factors associated with the sustainability of EBPs for youths, but it has not yielded well-replicated findings on which to build an empirically grounded theory about the sustainability of EBPs for youths in community behavioral health clinics (Bond et al., 2014). Qualitative methodologies may have particular utility for shedding light on the nuanced perspectives clinicians hold with regard to using and sustaining EBPs (Chu et al., 2015; Palinkas et al., 2013; Ringle et al., 2015). With this in mind, the primary purposes of the current study are to (1) present a theory of factors influencing community clinicians’ decisions to sustain an evidence-based intervention for youths six months after their participation in an implementation trial of a flexible, transdiagnostic, evidence-based approach for treating youths with anxiety, depression, and conduct-related disorders (Masked for Review) and (2) identify leverage points and areas for future research that have the potential to increase EBP sustainability in community behavioral health settings for youths.

Method

A thematic analysis approach (Braun & Clarke, 2006) was used to generate an in-depth understanding of sustainability of an EBP for youths in community-based service settings.

Participants

Participants were 13 community-based clinicians who provided psychosocial treatment to youths in two northeastern clinics and who volunteered to participate in a pilot study implementing a transdiagnostic, cognitive-behavioral treatment protocol (Masked for Review) for youths with anxiety, depression, or conduct problems. Both sites were community mental health clinics that offered traditional outpatient services and employed therapists with a variety of mental health degrees, disciplines, and theoretical orientations. Invitations were extended to all 14 participants in the pilot implementation trial; one had moved overseas and was unable to participate. Table 1 describes the demographic characteristics of the sample. The majority of participants were women (n = 11, 85%), and most identified as Caucasian (n = 12, 92%). With regard to training background, 61.5% were master’s-level clinicians and 38.5% were doctoral-level clinicians. The average age of the participants was 35.6 years (SD = 6.7), and they averaged 8.0 years of clinical experience (SD = 9.2). The clinicians reported mainly eclectic orientations. Within the pilot study, the clinicians used the study protocol with an average of two youth clients.

Table 1.

Participant Characteristics (N = 13)

Characteristic N %
Gender (female) 11 84.62
Race
 White/Caucasian 12 92.31
 Latino 1 7.69
Degree
 Masters-level 8 61.54
 Doctoral-level 5 38.46
Age 35.57 (M) 6.69 (SD)
Years of Clinical Experience 7.96 (M) 9.16 (SD)

Procedure

The study received IRB approval, and all participating clinicians gave informed consent. Clinicians participated in semi-structured, qualitative interviews administered over the phone six months after completing participation in the implementation trial. This timeline was selected to balance the desire to wait until clinicians had begun new cases independent of weekly study supervision against the researchers’ awareness of high turnover at community clinics and the difficulty of reaching clinicians for follow-up. The pilot trial is described in detail elsewhere (Masked for Review). Briefly, clinicians received two days of training in the transdiagnostic protocol FIRST, which consists of five broad, transdiagnostic principles of therapeutic change derived from previously tested evidence-based treatments for youths with internalizing and externalizing disorders. Afterward, clinicians received one hour of weekly small-group tele-consultation from study staff for the duration of the trial, with the consultants reviewing recordings of participants’ sessions prior to consultation meetings. Clinicians and consultants also had access to web-based reports of progress monitoring data from youths and caregivers. During the consultation calls, study consultants provided guidance regarding clinical decision-making, modeled intervention strategies, and supported clinician implementation.

Participating youths were 24 children aged seven to 15 (M age = 11.03, SD = 2.69) referred for treatment through normal community pathways to one of the two urban clinics in the Northeast. All youths met criteria for at least one disorder in the Diagnostic and Statistical Manual of Mental Disorders (4th ed. [DSM-IV]; American Psychiatric Association, 1994), and the mean number of disorders was 2.2 (SD = 1.2; Masked for Review). The majority of youths in the study reported a primary problem area of conduct (45.8%), followed by anxiety (41.6%) and mood (12.5%). Pre- to posttreatment feasibility, acceptability, and clinical benefit were benchmarked against comparable studies. Results indicated that the protocol was feasible, acceptable, and showed clinical benefit, with effect sizes in the medium to large range. Benchmarking suggested that results were similar to outcomes of other transdiagnostic protocols and better than the usual care comparison in those studies. For a more detailed description, see [Masked for Review].

Telephone interviews occurred six months after each clinician’s last consultation call for the study. Interviews lasted approximately one hour, and participants were compensated at their fee-for-service rate for their participation. Interviews were conducted by a research assistant at a different institution from the study site (the second author), who had received no training in the intervention, had no prior contact with the clinicians during the study, and was not privy to any information about the clinicians other than their names. Clinicians were assigned code numbers and assured that their names would not be associated with their responses. All interviews were digitally audio-recorded to increase their descriptive validity (Maxwell, 1992), transcribed verbatim, de-identified, and checked for accuracy. The interviews were conducted using a list of broad, open-ended questions followed by queries or probes to elicit elaboration. Clinicians were asked to report on the nature of their use of FIRST since the study ended and why they were or were not using it. Additionally, they were asked what would foster or limit sustained use of the intervention at their clinic. The interview was designed to elicit factors related to sustainability of an EBP after the end of a research study and is available by request to the first author.

Qualitative data analysis

Interviews were analyzed using a theoretical thematic analysis approach (Braun & Clarke, 2006) to identify codes, themes, and a theoretical narrative (i.e., a summary of what bridges the text to the research concern) concerning factors that influenced the sustainability of the intervention. First, the entire body of text was read and discussed by the team (the first three authors) in order to develop familiarity with the broad concepts that related to the study’s aim (Palinkas et al., 2008). During this phase, initial codes were identified and generated from the data collaboratively.

Next, the second and third authors conducted an open analysis wherein the text was independently reviewed line by line. Relevant text related to the broad concepts was highlighted, with preliminary notations made to identify possible codes and themes. In a series of meetings, codes that emerged directly from the data were discussed by the two coders, with the first author helping to refine, combine, and disaggregate codes as needed. For example, the initial code “Providers use FIRST with less structure” was developed directly from a transcript that read “I’m using it in that [less structured] way versus using it in a very structured module, manualized approach.” Later, this code was merged with “Providers omit or add some elements” to become the code “Using the intervention with adaptations.” As a final step in this phase, the second and third authors developed a preliminary coding manual that included a brief definition of each code to “jog the analyst’s memory” (Guest, Bunce, & Johnson, 2006), a more complete definition that explained the code in greater detail, and exemplar relevant text drawn directly from transcripts. This procedure was repeated until no major codes or themes emerged from the data that were not already documented in the codebook, indicating theoretical saturation.

Using the codebook, two independent coders (the second and third authors) analyzed and assigned codes to each text unit. All interviews were double coded; coding discrepancies were discussed and resolved with the first author, and the codebook was revised as needed. Coding agreement for each code ranged from 86.7% to 100% (M = 97.7, SD = 3.3). Next, the codes were organized into themes that illustrated their relations with one another, based on the data as well as a priori theoretical concepts from the sustainability literature. Overlaps across themes were addressed in coding consensus meetings held by the first three authors. In these meetings, codes were repeatedly reviewed in conjunction with themes to ensure their fit and congruence. This resulted in related sub-themes that were organized into themes regarding the factors that support or limit sustainability of EBPs in community practice.
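The per-code agreement figures above are simple percent agreement statistics. The following Python sketch illustrates one way such statistics could be computed from two coders’ binary decisions; the coder data and code names are hypothetical, and this is not the study’s actual analysis software.

```python
# A minimal sketch (hypothetical data, not the study's analysis code) of how
# per-code percent agreement between two independent coders can be computed.
from statistics import mean, stdev

# For each code, one (coder1, coder2) pair of yes/no decisions per text unit.
double_coded = {
    "adaptation": [(True, True), (True, False), (False, False), (True, True)],
    "relative_advantage": [(True, True), (False, False), (True, True)],
    "knowledge_transfer": [(True, True), (True, True), (False, False)],
}

def percent_agreement(pairs):
    """Share of text units on which both coders made the same decision."""
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

per_code = {code: percent_agreement(pairs) for code, pairs in double_coded.items()}
for code, pct in per_code.items():
    print(f"{code}: {pct:.1f}% agreement")
print(f"M = {mean(per_code.values()):.1f}, SD = {stdev(per_code.values()):.1f}")
```

Note that percent agreement does not correct for chance agreement; chance-corrected coefficients such as Cohen’s kappa are often reported alongside it.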

Results

Analysis yielded four themes and 15 sub-themes, described below. Table 2 presents the themes and sub-themes, the percentage of participants who endorsed each, and exemplar quotes.

Table 2.

Qualitative Data Analysis: Themes and Codes

Category Participants who endorsed (%) Exemplar response
1. Facilitators to sustained practice outside of study 100%
1.1. Adaptation to the intervention 100% I’m using it in that [less structured] way versus using it in a very structured module, manualized approach.
1.2. Alignment with or complementary to existing practice 92% I’m definitely psychodynamic, but I’m also, I incorporate a lot of CBT into my work because I feel like it is really the most effective form of treatment.
1.3. Belief in relative advantage for clients 77% I think that for the clients it was definitely a benefit. One, my girl now who has OCD and had been really, really treatment resistant, had tried another treatment in the past that hadn’t gone anywhere. Her mom had recently said to her…something about “aren’t you glad we found this study?” And the girl said, “I know, it’s like a miracle.”
2. Knowledge transfer 100%
2.1. Value of study supervision and training 100% …Just the way, how knowledgeable [the study supervisor] was about this stuff…Whenever there was a problem, she had exactly the thing to do.
2.2. Increased confidence/competence 92% I think that for me [in] those sessions or [with] those clients [i.e., FIRST] I felt more effective and capable. And I think it’s helped my practice in general.
2.3. Enhanced understanding of principles and theory 62% It wasn’t the, you know, the Winnicott and holding environment … it was, you know, I’m going to make you do things that are really uncomfortable for you. So that was a shift, but when I saw that, hey this actually really works and it’s giving her relief, not just in the moment but giving her long-term relief, I started to grow a bit of a backbone and say you know, yes, I know it ‘s going to be hard but I was able to kind of see that, okay, this is helpful.
3. Barriers to sustained practice outside of study 92%
3.1. Lack of goodness of fit with client 77% For some cases my intuition and clinical experience tells me that I need to really sit with the patient and hear what they’re saying and something needs to happen with the relationship.
3.2. Practical and logistical challenges 69% Not getting paid for the time [to prepare] outside of session and not setting aside time outside of the session.
3.3. Organizational culture 38% Not at the [name of clinic] because they’re pretty resistant to it…I think they’re afraid of change, to be honest. I mean they’re set in their ways.
3.4. Incomplete knowledge transfer 30% They were teaching us a lot of information so I don’t want to say that it requires more training but I definitely didn’t feel like an expert on it.
3.5. Lack of goodness of fit with clinician 23% I’m a long-term provider. You know, my clients tend to stay in therapy with me for a while so maybe it’s because I’m not used to the quick fix sort of treatment.
4. Clinic-wide upscaling of the intervention 92%
4.1. Training a critical mass 85% It’s not that everybody has to use it but I feel that the whole clinic should be exposed to it and what’s involved in the different skills.
4.2. Ongoing expert support 77% I don’t think that anyone would say that they were expert enough to be the leader or the one that’s giving all these suggestions.
4.3. Ability to train and supervise others in the EBP 54% Maybe you could create…a CBT team and then those people on the team who are interested and able could be trained in it and then they could supervise [others].
4.4. Fidelity monitoring for quality assurance 38% I just felt like I needed to be my sharpest and at my best, which unfortunately with so many clients you can’t always do

Note. Text included in brackets has been included by the author for clarification or to replace identifying information.
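The endorsement percentages in Table 2 follow directly from the consensus coding: a participant counts toward a sub-theme if their interview received that code at least once, and the percentage is that count over the sample of 13. The sketch below is an illustrative reconstruction of that calculation with invented data, not output from the study’s transcripts.

```python
# Hypothetical sketch of deriving Table 2's endorsement percentages: a
# participant endorses a sub-theme if their consensus-coded interview was
# assigned that code at least once. All coded data below are invented.
N_PARTICIPANTS = 13

coded_interviews = {  # participant id -> sub-theme codes assigned
    1: {"1.1", "1.2", "2.1"},
    2: {"1.1", "2.1", "3.2"},
    3: {"1.1", "1.3", "2.1", "4.1"},
}

def endorsement_percentage(code):
    """Percentage of all N = 13 participants whose interview received the code."""
    endorsers = sum(code in codes for codes in coded_interviews.values())
    return 100.0 * endorsers / N_PARTICIPANTS

print(f"Sub-theme 2.1: {endorsement_percentage('2.1'):.0f}%")  # 3/13 = 23%
```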

Sustained practices outside of the study

All participants (100%) reported continued use of the intervention after the study. Within this theme, participants identified factors facilitating sustained practice, yielding three sub-themes: (1) using FIRST with adaptations, (2) using aspects of the intervention that they viewed as complementary to their existing practice, and (3) using the intervention when they believed it provided advantages relative to other treatment modalities.

Using FIRST with adaptations.

All participants (100%) described making individualized adaptations when using FIRST outside of the study. Specifically, clinicians in the study described sustaining the intervention in their everyday practice in a way that was less structured, more eclectic, and slower in pace relative to their use during the formal study. For example, one clinician reported, “I wouldn’t say I’m as manualized with my clients [outside of the study], but I use a whole lot of the strategies in my every day practice now.” In a similar vein, one clinician expressed, “I’m not assigning as much homework as I did during FIRST but there is still homework that’s assigned or practice that’s assigned.”

Clinicians also reported using strategies from the intervention in a more eclectic manner in their everyday practice, stating, “It doesn’t have to be this separate, really formal thing. It can be woven into what you’re doing just on a moment-to-moment basis.” Additionally, a number of clinicians reported slowing down the pace of the intervention in their everyday practice, relative to the study. For example, when describing continued use of the intervention, one clinician stated, “If anything I think I go more slowly than the study… I’m more inclined to let them [clients] move a little bit slower if they want that.”

Alignment with or complementary to existing practice.

A majority of the participants (92%) stated they would continue to use the intervention because they felt it was complementary or similar to their existing clinical practice. For example, when one clinician described her continued use she stated, “I’m definitely psychodynamic, but I’m also, I incorporate a lot of CBT into my work because I feel like it is really the most effective form of treatment.” Additionally, another clinician expressed:

You know, I think in the end…for [FIRST] to be implemented on a regular basis it takes clinicians who are willing to, who are first of all convinced that this is an effective method and second of all, [are] willing to put in the effort to learn it well so that it becomes a little easier to use.

Belief in relative advantage for clients.

A majority of the participants (77%) reported that they planned to continue using FIRST because they experienced relative advantages to using the intervention versus other approaches with their clients. The participants often spoke of relative advantages with regard to achieving clinical improvements, enhancing therapeutic alliance, and providing clients with tangible skills. For example, with regard to achieving clinical improvements, one clinician stated:

I think that for the clients it was definitely a benefit. One, my girl now who has OCD and had been really, really treatment resistant, had tried another treatment in the past that hadn’t gone anywhere. Her mom had recently said to her…something about “aren’t you glad we found this study?” And the girl said, “I know, it’s like a miracle.”

With regard to building stronger therapeutic alliances, one clinician provided the following example:

I just find that overall [FIRST] was very helpful in allowing the therapist and parent to have an alliance where they were both on the same team, and it’s really important for me to reiterate that in my past experience it would always be that the alliance was between me and the child, and the parents felt that they were left out and didn’t want to participate in sessions… I didn’t find that with the study.

Knowledge transfer

All participants (100%) discussed the importance of knowledge transfer (i.e., from training and consultation) as a key factor in their ability to sustain the intervention long-term. Within this theme, sub-themes included: (1) the value of supervision and training in helping clinicians use the intervention, (2) clinicians’ experience in developing clinical confidence and competence after participating in the intervention study, and (3) developing an enhanced understanding of principles and theory through participation in the study.

Value of study supervision and training.

In the context of sustaining FIRST at their clinics, all of the participants (100%) discussed the importance of knowledge transfer via the supervision and training provided to them during the implementation phase of the study. One clinician stated, “Just the way, how knowledgeable [the study supervisor] was about this stuff…Whenever there was a problem, she had exactly the thing to do.” Similarly, another clinician stated, “I remember…after the training really needing the supervision for support afterwards because I was still not really feeling like I knew what I was doing.” Many of the clinicians interviewed spoke specifically about the importance of supervision with regard to helping them problem-solve challenging cases. Said one clinician:

I think while I’ve done some different kinds of CBT techniques before I’ve never had really thorough supervision and help from somebody with problem solving when I’m really stuck with one of those techniques not working, you know what else you might try to make it work more effectively.

Additionally, a number of clinicians described how actively practicing the intervention techniques with their supervisors (e.g., through role-plays) helped them develop a significantly better understanding of how to use the intervention with their clients. One clinician reported, “I remember partnering up with [the study supervisor] and she would show me exactly how she would do an in vivo exposure with a child who has severe anxiety, and that was so helpful.”

Increased clinical confidence and competence.

Relatedly, the majority of participants (92%) reported that the support provided during the study gave them an increased sense of clinical confidence and competence that would continue to impact their practice over time. One clinician stated, “I think…I can’t say for the other therapists, but I think that for me those sessions or those clients I felt more effective and capable. And I think it has helped my practice in general.” Similarly, another clinician expressed:

I truly feel like it changed who I am as a clinician, and my confidence. And, you know, sometimes we’re here working with clients and we’re not sure exactly what we’re doing. It’s hard to say that, and in those cases where we can really rely on what the research tells us in a clear-cut way. [It] is just something that I am so thankful for and will continue to use.

Another clinician said, “Now I feel with myself I’m just much more confident and I think parents can pick that up and they’re sort of buying what I’m selling now.”

Enhanced understanding of principles and theory.

Many participants (62%) noted that study supervision helped them develop a better understanding of the principles and theory behind EBPs, and that this would foster their sustainment of FIRST. For example, one clinician reported gaining confidence in understanding the biopsychosocial theory of exposure for anxiety:

Especially around anxiety, describing, you know, in depth the biopsychosocial theory behind why kids are the way they are and then moving toward what treatment looks like and why we do what we do…those are techniques I was definitely not confident in using before.

Another clinician noted:

It wasn’t the, you know, the Winnicott and holding environment where we kind of talk and make it safe for you, it was, you know, I’m going to make you do things that are really uncomfortable for you. So that was a shift, but then when I saw that, hey this actually really works and it’s giving her relief, not just in the moment but giving her long-term relief, I started to grow a bit of a backbone and say, you know, yes I know it’s going to be hard but I was able to kind of see that, okay, this is helpful.

Barriers to sustainability

A majority of participants (92%) discussed barriers to sustaining the intervention after the study. Five sub-themes fit within this theme: (1) not wanting to use the intervention with certain clients due to perceived lack of goodness of fit, (2) practical and logistical challenges, (3) difficulties with sustaining the intervention within the organizational culture of their agencies, (4) incomplete knowledge transfer, and (5) lack of goodness of fit with clinician orientation or practice.

Lack of goodness of fit with client.

A majority of participants (77%) reported challenges to sustaining the intervention based on goodness of fit with certain clients. Examples of client factors that determined goodness of fit included client age or developmental stage, client readiness to make behavioral changes, and the relationship formed between the client and the therapist. For example, one clinician stated that it would be preferable to use other, less structured, clinical techniques with certain clients: “For some cases my intuition and clinical experience tells me that I need to really sit with the patient and hear what they’re saying and something needs to happen with the relationship.” Additionally, a number of clinicians stated that developmental stage was an important factor that would potentially deter them from using the intervention outside of the study. One clinician explained, “I don’t think that you lose the relational piece for the older kids but I just find that for the really little kids that structure is too hard to impose or maintain.”

Practical and logistical challenges.

Many of the participants (69%) brought up a variety of practical challenges related to limitations with time, money, and organizational resources. For example, one clinician stated that she was reluctant to continue using the intervention due to “Not getting paid for the time [to prepare] outside of session and not setting aside time outside of the session.” The same clinician went on to state, “So those techniques, which I think work, I still don’t really use them as a part of my everyday practice because I feel they’re less practical.” In the words of another clinician, “I think that’s the biggest issue is that people are just super, super busy…so to learn something new or to put the time into it, unless there was some motivation to do so, is hard.” Similarly, another clinician explained that it would be challenging for clinicians within the organization to find enough time to prepare for CBT-oriented sessions:

I think people go into sessions not really thinking about it ahead of time and this does require thinking. It requires some preparation and thought before going into a session and people don’t have that time really to do it. I mean they should because that’s good treatment, but, you know, I think we see clients back to back all day long sometimes without a break and so to do CBT does require a little bit, like even making copies of worksheets or kind of thinking about the agenda or the skills that we’re going to work on, it requires some extra time.

Clinicians stated that it would be particularly challenging to sustain technological and administrative aspects of the intervention, such as client progress monitoring via telephone or internet.

Organizational culture.

A number of clinicians (38%) reported that organizational culture would serve as a barrier to sustaining the intervention or prevent them from teaching it to new clinicians outside of the study. Specifically, one agency was perceived as resistant to change and several clinicians felt that adopting new practices would not be supported. Said one clinician, “Not at the [name of clinic] because they’re pretty resistant to it…I think they’re afraid of change to be honest. I mean they’re set in their ways.” Another clinician explained, “There’s a bit of a hierarchy with the older, with more senior clinicians who just have been there for a long time who were pretty skeptical about anything new.” A third clinician stated that other staff members in the organization would not be open to the manualized intervention due to the added steps involved in using it: “There would be a lot of skepticism… I feel like anything that feels like you’re adding a layer of complexity or a layer, an extra step, is not really looked upon well.”

Incomplete knowledge transfer.

A minority of clinicians (30%) spoke about incomplete knowledge transfer impacting their ability to sustain the intervention. One clinician said, “They were teaching us a lot of information so I don’t want to say that it requires more training but I definitely didn’t feel like an expert on it.” Some clinicians also pointed out that they felt limited in their knowledge of how to use the intervention because they had only practiced specific sections of the transdiagnostic manual during the study—for example, if they treated only a depressed client and did not have an opportunity to use other portions of the intervention with expert supervision. One clinician expressed:

I didn’t get a chance to work with depressed clients through the program, or anxious clients, although sometimes the clients that I had, besides conduct disorder, also had anxiety issues, or of course some depression. But pretty much I was using the main protocol with the guidance of the supervisors in terms of this child is permanently coming out of conduct disorder, and so that, if I could say one little downfall, was that ideally I would have loved to have a client on each condition so that I would have gotten direct supervision for each one.

Lack of goodness of fit with clinician.

Finally, some clinicians (23%) noted that in instances where aspects of the intervention directly conflicted with their preferences, training, and prior approaches to treatment, they would be less likely to use it. In particular, those with a strong allegiance to a therapeutic orientation other than cognitive-behavioral described aspects of the intervention as opposed to their prior practice and stated that they were less likely to use it in the future. For example, “I’m a long-term provider. You know, my clients tend to stay in therapy with me for a while so maybe it’s because I’m not used to the quick fix sort of treatment.” Another clinician stated: “I do pretty much expressive and experiential [therapy] so I spend a lot of time joining with the client before we actually get to the treatment.”

Clinic-wide upscaling of the intervention

A majority of participants (92%) discussed factors related to the clinic-wide upscaling of the intervention as an important aspect of sustainability. Four sub-themes were related to this theme: (1) the importance of training a critical mass within their organizations, (2) the need for ongoing expert support to sustain the intervention, (3) the ability to train and supervise others in the intervention, and (4) the need for fidelity monitoring for quality assurance.

Training a critical mass.

Most participants (85%) stated that it would be easier to sustain the intervention if there was a large group of clinicians within the organization who were also trained, and discussed the possibility of teaching the intervention during clinic-wide meetings:

We’ve actually talked about this, those of us who were in the [FIRST] study, actually presenting it to our clinic, to the other clinicians, and supervisors so that it can be implemented on a clinic-wide level as opposed to just individual clinicians being trained. I think it’s really helpful because then everybody, well there’s a more cohesive approach to applying it to the clinic setting.

Other clinicians also indicated that sustaining practice would be easier to do as a group. For example, one clinician stated, “Maybe some kind of feedback group… some kind of group to come together occasionally to talk about how they’re using FIRST and now that the study’s over, how to integrate it into your practice.” Another noted, “If there were able to be a group of trained clinicians who met once or twice a month during lunch and gave feedback to each other or something, that might keep me more on the FIRST track and motivated with FIRST.”

Ongoing expert support.

Many participants (77%) spoke about the need for ongoing expert support to sustain the intervention within their clinic settings. In particular, clinicians spoke about difficulties sustaining the intervention if they could only rely on supervisors from within their organizations. For example:

There’s no one at the center that would be qualified in the same way that [the study supervisors] were qualified to supervise the material. And it has been one of the biggest frustrations… There’s just no way. So even though we’ve had some people who have gone through the training, I don’t know if they’re qualified to provide the level of supervision that the folks [from the study] were able to do.

Ability to train and supervise others in the intervention.

About half of the participants (54%) felt they had the capacity to train other individuals in their clinics in order to help scale and sustain the intervention. As one example, a clinician noted, “I think probably investing some time and energy into training therapists who are already trained in the practice to become supervisors would be probably the best way to make it more sustainable.” Another agreed: “So like, people like me…you know, three or four clinicians at the center so that they can also supervise people on it to make it so it’s accessible to everyone.”

Fidelity monitoring for quality assurance.

Likewise, when discussing whether they would sustain the intervention, some clinicians (38%) stated that without oversight such as session review and client progress monitoring from study staff, they would most likely fail to use the intervention with as much fidelity as they had during the study. The clinicians noted that some amount of quality assurance, as provided during the intervention trial, would be necessary for taking the intervention to scale clinic-wide. In the words of one clinician, “I just felt like I needed to be my sharpest and at my best, which unfortunately with so many clients you can’t always do.” Another clinician agreed, “As bad as this may sound, I think it felt like I needed to work extra hard just to do a better job and be a better therapist with my [study] client.” Another clinician reported that she was continuing to use the intervention outside of the study, but was doing so in a less organized manner without the oversight of the research study: “I think, like I haven’t looked at the manual since the study ended, and so I think that some details have probably gotten lost over time. I tried harder and was more planful during the study.”

Discussion

The present study used a theoretical thematic analysis approach to identify factors related to community-based clinicians’ sustainment of a transdiagnostic evidence-based intervention for youths after ending participation in a formal pilot study. Results indicate that clinicians perceived this intervention to be sustainable, as all of the clinicians reported current continued use following the formal pilot implementation study. At the same time, and consistent with previous research (Chu et al., 2015; Palinkas et al., 2008), all of the clinicians also reported that they would adapt the intervention for long term use. The interviews in this study were conducted six months after the end of the implementation trial, suggesting that community-based clinicians might make adaptations to EBP protocols soon after they complete formal training experiences.

The current study and those noted above suggest that sustainment of EBPs will naturally include changes over time. Changes could include both content and contextual adaptations, and might preserve the core effective elements of an EBP or threaten fidelity (Stirman, Baumann, & Miller, 2019). Such adaptations may lead to better fit with the organization and the clients, thereby resulting in better client outcomes (Chambers et al., 2013; Scheirer & Dearing, 2011). Alternatively, adaptations could dilute the intervention and lead to drops in effectiveness, sometimes called the “implementation cliff” (Weisz, Ng, & Bearman, 2014). Ultimately, further research is needed to measure and characterize clinician-specific adaptations to EBPs in order to more carefully assess their impact on long-term effectiveness (Stirman et al., 2013; 2019).

Thematic analysis was used to better understand factors that contribute to the sustainability of an EBP for youths in community-based settings after the end of a controlled treatment research trial. Through this process, we identified themes that together provide a narrative about the factors impacting sustainment of the EBP, which map onto many of the factors suggested by the EPIS framework (Aarons et al., 2011). These primary factors reported across themes include: (1) clinician factors, (2) client factors, (3) logistical and organizational factors, and (4) the need for ongoing support, including fidelity monitoring for quality assurance.

Clinician factors related to sustainability

Reasons to sustain the intervention included that it complemented clinicians’ existing clinical practice or prior training. A smaller group expressed that they might not use aspects of the protocol because of their personal preferences and previous training. Although clinician orientation has not emerged as a predictor of EBP implementation fidelity during a research study (Bearman et al., 2013), these findings suggest that clinician commitment to sustaining EBPs once the trial ends may vary with prior training experiences and clinical practices. This raises the question of how best to select the staff members who will be trained to provide EBPs for youths in public service settings. For example, there may be benefits to initially screening for staff who have positive or neutral attitudes toward EBPs, given the high costs related to training and consultation. However, this may not always be possible, as organizations frequently introduce new practices on a clinic-wide level rather than targeting specific clinicians.

Clinicians also stated they would continue to use the intervention when they personally experienced it as advantageous to their clients relative to other treatment options. Although the relative advantages of EBPs are well documented (Southam-Gerow & Prinstein, 2014; Weisz, Bearman, Santucci, & Jensen-Doss, 2017), the lived experience of watching a client respond to treatment was impactful for clinicians—a finding that aligns with other qualitative inquiry in this area (Powell et al., 2013). Early experiences of success, or failure, with a new practice during the implementation phase seemed to be influential with regards to clinician plans for sustainability.

Client factors related to sustainability

While all of the clinicians in this study reported current continued use of the EBP, a majority noted that they would be reluctant to use this directive, skills-based approach with certain clients, a sentiment echoed by other community-based providers (Ringle et al., 2015). This provides some confirmation that theoretical models of sustainability should account for client-related variables, as well as the processes through which clinicians select interventions to use with individual clients. Relevance mapping studies have indicated that gaps in “coverage” by the evidence base for some clients—due to age, demographic features, or diagnoses—remain (Bernstein, Chorpita, Daleiden, Ebesutani, & Rosenblatt, 2015). Thus, it is possible that the intervention might not have “fit” the problem or the client characteristics, and clinicians wisely abstained from use. It is also possible that clinicians’ perceptions of client “fit” may not reflect the evidence base. More work is needed to determine the clinical decision-making involved with choosing to use—or choosing not to use—available EBPs for specific clients.

Logistical and organizational factors related to sustainability

The clinicians frequently discussed systems-wide, logistical barriers that would deter them from using the EBP. Specifically, they identified insurance policies and funding as key barriers to sustaining the EBP, similar to concerns reported by behavioral health agency and state leaders (Bond et al., 2014). There were no expectations for organizational sustainment after the study, and clinicians pointed out that, once the study ended, they would not be reimbursed for allocating more time and personal effort to the EBP. As a result, many clinicians revealed that they would choose interventions that required less preparation. These results underscore the importance of considering public policy and funding when planning for sustainability (Aarons et al., 2011; Glisson et al., 2008; Massatti, Sweeney, Panzano, & Roth, 2008; Nadeem et al., 2013), and mirror qualitative results indicating that direct financing and other types of support were key to sustaining high-fidelity practice of EBP (Swain, Whitley, McHugo, & Drake, 2010).

Similar to previous studies, our findings also highlight the importance of understanding inner-organizational variables when planning for sustainability (Aarons et al., 2011; Glisson et al., 2008; Massatti et al., 2008). A number of clinicians reported that they felt discouraged from continuing to use the intervention because their colleagues within the organization were resistant to EBPs. Relatedly, they stated that they would be more likely to continue using the intervention if there were a critical mass of clinicians in the organization trained in the EBP and a network of social support within the organization to promote its use (Aarons et al., 2011). Thus, in line with existing theoretical frameworks (Aarons et al., 2011; Chambers et al., 2013; Feldstein & Glasgow, 2008; Glisson et al., 2008) and prior research (Jensen-Doss et al., 2009), our findings suggest that organizational culture and leadership play a crucial role in EBP sustainability. Interventions that target attitudes toward EBPs at various levels within organizations might be necessary as standard implementation practice (Glisson et al., 2010).

The need for ongoing support to sustain EBPs

All of the clinicians spoke about the value of receiving consultation from experts in the intervention during the implementation phase of the study. They reported an increased sense of competence, as well as a better understanding of theoretical principles underlying the practices, all of which motivated them to continue using the intervention on their own. In contrast, clinicians expressed concerns about sustaining the intervention under the guidance of supervisors who were not themselves trained in EBPs. In a prior study, supervisors who were themselves trained in an EBP were seen as critical to its sustainment (Aarons et al., 2016). One approach is to “train the trainer,” cultivating expertise in the intervention, as well as in training and supervising other practitioners. While such models have shown promising effects on adherent EBP implementation (Chamberlain, 2003; Henggeler, Schoenwald, Borduin, Rowland, & Cunningham, 2009), the inclusion of such oversight can be costly to community settings (Smith-Boydston, 2005). Roughly half of the clinicians in this study felt that they had enough expertise in the intervention to supervise others. One potential question for future sustainability research is what dosage of procedural experience clinicians need to have with a newly learned practice in order to sustain it without ongoing support, and to transfer knowledge to others. The clinicians in this study also noted that ongoing public-academic collaboration would increase their ability to take the intervention to scale, and particularly to problem-solve implementation barriers. While public-academic collaborations have the potential to help community-based organizations sustain new interventions (Aarons et al., 2011; Quill & Aday, 2000), more research is needed to identify the best ways to organize and support such efforts.

Similar to previous studies (Aarons et al., 2011; Beidas, Edmunds, Marcus, & Kendall, 2012; Chu et al., 2015), clinicians in our study reported significant benefits from receiving outside fidelity monitoring (session review, client progress monitoring) during the implementation trial. Some revealed that they had devoted more effort toward treatment and conceded that they were less likely to deliver the intervention with the same level of effort without quality control checks from study staff. This raises questions about how to promote local fidelity monitoring for EBPs after formal implementation efforts have ended. One solution might be measurement feedback systems, which use a battery of frequent, typically brief assessments to track treatment progress and processes and have been shown to improve client outcomes for youths in community behavioral health settings (Bickman, Kelley, Breda, de Andrade, & Riemer, 2011). While measurement feedback systems have the potential to increase clinician accountability, they require organizational support with regard to technology, training, and providing clinicians with sufficient time to use the systems (Bickman, Kelley, & Athay, 2012).
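To make the mechanism concrete, the sketch below shows the core logic such a system might implement: brief weekly symptom ratings are compared against an expected improvement trajectory, and off-track cases are flagged for clinician review. The linear expected-change model, the threshold, and all names are illustrative assumptions, not features of any specific published system.

```python
# Illustrative sketch of a measurement feedback check. Assumptions: higher
# scores mean greater severity, and improvement is expected to follow a
# simple linear trajectory from baseline.
from dataclasses import dataclass, field

@dataclass
class ClientProgress:
    client_id: str
    baseline_score: float
    weekly_scores: list = field(default_factory=list)

    def is_off_track(self, expected_weekly_change=-0.5):
        """Flag the case if the latest score lags the expected trajectory."""
        if not self.weekly_scores:
            return False
        weeks = len(self.weekly_scores)
        expected = self.baseline_score + expected_weekly_change * weeks
        return self.weekly_scores[-1] > expected

# A case whose scores are not improving gets flagged for clinician review.
case = ClientProgress("A01", baseline_score=20.0, weekly_scores=[20.0, 21.0, 19.5])
print(case.is_off_track())  # True: 19.5 > 20.0 - 0.5 * 3 = 18.5
```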

Limitations

A number of limitations to this study must be noted. First, the clinicians included in this study were employed by clinics that had opted to participate in an open trial of an EBP, and thus may not be representative of community mental health clinics broadly. Second, the sample size is small and data were not triangulated with another data collection method; however, interviews were conducted with all but one of the clinicians who participated in the implementation trial, and the process of coding refinement was repeated until no codes emerged from the data that were not already documented in the codebook. Next, given the transdiagnostic nature of the intervention used in this study and the variability among clients treated in the study, the types of cases each clinician had during the pilot study may have played a large role in their impressions of the intervention. Finally, these qualitative data are subjective in nature and can be interpreted differently based on individual differences and biases. To address these limitations, multiple coders analyzed the data independently in order to increase objectivity within the study.

Conclusion

There is growing recognition that EBP developers should plan for sustainability from the onset of treatment development in order to increase the success of implementation efforts (Cooper, Bumbarger, & Moore, 2015; Novins et al., 2013). The themes raised by clinicians in the current study point to a number of factors that, if addressed, may better equip the field to make long-lasting changes for the youths who need effective services the most.

Public Significance Statement.

This study interviewed community mental health clinicians about the factors related to sustaining evidence-based practices following participation in a research trial. Clinicians indicated that a number of personal, organizational, logistical, and client variables influence the sustainment of new interventions, and could be leveraged to sustain evidence-based practices over time.

Acknowledgments

This research was supported by a grant from the National Institute of Mental Health (MH085963). The authors are grateful to the clinicians who participated in this study. FIRST was an earlier version of a treatment manual for which Drs. Weisz and Bearman may receive income.

Contributor Information

Sarah Kate Bearman, The University of Texas at Austin

Abby Bailin, The University of Texas at Austin

Rachel Terry, Yeshiva University

John R. Weisz, Harvard University

References

  1. Aarons GA, Hurlburt M, & Horwitz SM (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders: DSM-IV-TR. Washington, DC: American Psychiatric Association. [Google Scholar]
  3. Auerbach CF, & Silverstein LB (2003). Qualitative data: An introduction to coding and analysis (1st ed.). New York: New York University Press. [Google Scholar]
  4. Bearman S, Weisz J, Chorpita B, Hoagwood K, Ward A, Ugueto A, & Bernstein A (2013). More Practice, Less Preach? The Role of Supervision Processes and Therapist Characteristics in EBP Implementation. Administration and Policy in Mental Health and Mental Health Services Research, 40(6), 518–529. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Becker KD, & Stirman SW (2011). The science of training in evidence-based treatments in the context of implementation programs: Current status and prospects for the future. Administration and Policy in Mental Health and Mental Health Services Research, 38(4), 217–222. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Beidas RS, Edmunds JM, Marcus SC, & Kendall PC (2012). Training and consultation to promote implementation of an empirically supported treatment: A randomized trial. Psychiatric Services, 63(7), 660–665. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bernstein A, Chorpita BF, Rosenblatt A, Becker KD, Daleiden EL, & Ebesutani CK (2015). Fit of Evidence-Based Treatment Components to Youths Served by Wraparound Process: A Relevance Mapping Analysis. Journal of Clinical Child and Adolescent Psychology, 44(1), 44–57. [DOI] [PubMed] [Google Scholar]
  8. Bickman L, Kelley SD, & Athay M (2012). The technology of measurement feedback systems. Couple and Family Psychology: Research and Practice, 1(4), 274–284. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bickman L, Kelley SD, Breda C, de Andrade AR, & Riemer M (2011). Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services, 62(12), 1423–1429. [DOI] [PubMed] [Google Scholar]
  10. Bond G, Drake R, McHugo G, Peterson A, Jones A, & Williams J (2014). Long-Term Sustainability of Evidence-Based Practices in Community Mental Health Agencies. Administration and Policy in Mental Health and Mental Health Services Research, 41(2), 228–236. [DOI] [PubMed] [Google Scholar]
  11. Bowman C, Sobo E, Asch S, Gifford A, Hiv Hepatitis Quality Enhancement, & HIV/Hepatitis Quality Enhancement Research Initiative. (2008). Measuring persistence of implementation: QUERI series. Implementation Science, 3(1), 21–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Braun V, & Clarke V (2006). Using thematic analysis in psychology. Qualitative research in psychology, 3(2), 77–101. [Google Scholar]
  13. Chamberlain P (2003). The Oregon multidimensional treatment foster care model: Features, outcomes, and progress in dissemination. Cognitive and Behavioral Practice, 10(4), 303–312.
  14. Chambers DA (2011). Advancing sustainability research: Challenging existing paradigms. Journal of Public Health Dentistry, 71, S99–S100.
  15. Chambers D, Glasgow R, & Stange K (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8(1), 117.
  16. Chu BC, Talbott Crocco S, Arnold CC, Brown R, Southam-Gerow MA, & Weisz JR (2015). Sustained implementation of cognitive-behavioral therapy for youth anxiety and depression: Long-term effects of structured training and consultation on therapist practice in the field. Professional Psychology: Research and Practice, 46(1), 70–79.
  17. Cooper B, Bumbarger B, & Moore J (2015). Sustaining evidence-based prevention programs: Correlates in a large-scale dissemination initiative. Prevention Science, 16(1), 145–157.
  18. Feldstein AC, & Glasgow RE (2008). A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Joint Commission Journal on Quality and Patient Safety, 34(4), 228–243.
  19. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, … The Research Network on Youth Mental Health (2008). Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research, 35(1), 98–113.
  20. Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS, & Chapman JE (2010). Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology, 78(4), 537–550.
  21. Guest G, Bunce A, & Johnson L (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82.
  22. Henggeler SW, Schoenwald SK, Borduin CM, Rowland MD, & Cunningham PB (2009). Multisystemic therapy for antisocial behavior in children and adolescents (2nd ed.). New York: Guilford Press.
  23. Herschell AD, Kolko DJ, Baumann BL, & Davis AC (2010). The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review, 30(4), 448–466.
  24. Jensen-Doss A, Hawley KM, Lopez M, & Osterberg LD (2009). Using evidence-based treatments: The experiences of youth providers working under a mandate. Professional Psychology: Research and Practice, 40(4), 417–424.
  25. Johnson K, Hays C, Center H, & Daley C (2004). Building capacity and sustainable prevention innovations: A sustainability planning model. Evaluation and Program Planning, 27(2), 135–149.
  26. Kazdin AE (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63(3), 146–159.
  27. Massatti RR, Sweeney HA, Panzano PC, & Roth D (2008). The de-adoption of innovative mental health practices (IMHP): Why organizations choose not to sustain an IMHP. Administration and Policy in Mental Health and Mental Health Services Research, 35(1), 50–65.
  28. Nadeem E, Olin SS, Hill LC, Hoagwood KE, & Horwitz SM (2013). Understanding the components of quality improvement collaboratives: A systematic literature review. Milbank Quarterly, 91(2), 354–394.
  29. Nelson TD, & Steele RG (2008). Influences on practitioner treatment selection: Best research evidence and other considerations. The Journal of Behavioral Health Services and Research, 35(2), 170–178.
  30. Novins D, Green A, Legha R, & Aarons G (2013). Dissemination and implementation of evidence-based practices for child and adolescent mental health: A systematic review. Journal of the American Academy of Child and Adolescent Psychiatry, 52(10), 1009–1025.
  31. Palinkas LA, Schoenwald SK, Hoagwood K, Landsverk J, Chorpita BF, & Weisz JR (2008). An ethnographic study of implementation of evidence-based treatments in child mental health: First steps. Psychiatric Services, 59(7), 738–746. doi: 10.1176/appi.ps.59.7.738
  32. Palinkas LA, Weisz JR, Chorpita BF, Levine B, Garland AF, Hoagwood KE, & Landsverk J (2013). Continued use of evidence-based treatments after a randomized controlled effectiveness trial: A qualitative study. Psychiatric Services, 64(11), 1110–1118.
  33. Powell BJ, Hausmann-Stabile C, & McMillen JC (2013). Mental health clinicians’ experiences of implementing evidence-based treatments. Journal of Evidence-Based Social Work, 10(5), 396–409.
  34. QSR International (2012). NVivo qualitative data analysis software (Version 10) [Computer software]. QSR International Pty Ltd.
  35. Quill BE, & Aday LA (2000). Toward a new paradigm for public health practice and academic partnerships. Journal of Public Health Management and Practice, 6(1), 1–3.
  36. Ringle VA, Read KL, Edmunds JM, Brodman DM, Kendall PC, Barg F, & Beidas RS (2015). Barriers to and facilitators in the implementation of cognitive-behavioral therapy for youth anxiety in the community. Psychiatric Services, 66(9), 938–945.
  37. Scheirer MA, Hartling G, & Hagerman D (2008). Defining sustainability outcomes of health programs: Illustrations from an on-line survey. Evaluation and Program Planning, 31(4), 335–346.
  38. Smith-Boydston JM (2005). Providing a range of services to fit the needs of youth in community mental health centers. In Handbook of mental health services for children, adolescents, and families (pp. 103–116). Boston, MA: Springer US.
  39. Southam-Gerow MA, & Prinstein MJ (2014). Evidence base updates: The evolution of the evaluation of psychological treatments for children and adolescents. Journal of Clinical Child and Adolescent Psychology, 43(1), 1–6.
  40. Southam-Gerow MA, Weisz JR, Chu BC, McLeod BD, Gordis EB, & Connor-Smith JK (2010). Does cognitive behavioral therapy for youth anxiety outperform usual care in community clinics? An initial effectiveness test. Journal of the American Academy of Child and Adolescent Psychiatry, 49(10), 1043–1052.
  41. Stirman SW, Baumann AA, & Miller CJ (2019). The FRAME: An expanded framework for reporting adaptations and modifications to evidence-based interventions. Implementation Science, advance online publication. doi: 10.1186/s13012-019-0898-y
  42. Stirman SW, Kimberly J, Cook N, Calloway A, Castro F, & Charns M (2012). The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implementation Science, 7(1), 17.
  43. Stirman SW, Miller CJ, Toder K, & Calloway A (2013). Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science, 8(1). doi: 10.1186/1748-5908-8-65
  44. Swain K, Whitley R, McHugo GJ, & Drake RE (2010). The sustainability of evidence-based practices in routine mental health agencies. Community Mental Health Journal, 46(2), 119–129.
  45. Weisz JR, Bearman SK, Santucci LC, & Jensen-Doss A (2017). Initial test of a principle-guided approach to transdiagnostic psychotherapy with children and adolescents. Journal of Clinical Child and Adolescent Psychology, 46(1), 44–58.
  46. Weisz JR, Gordis EB, Chu BC, McLeod BD, Updegraff A, Southam-Gerow MA, … Weiss B (2009). Cognitive-behavioral therapy versus usual clinical care for youth depression: An initial test of transportability to community clinics and clinicians. Journal of Consulting and Clinical Psychology, 77(3), 383–396.
  47. Weisz JR, & Gray JS (2008). Evidence-based psychotherapy for children and adolescents: Data from the present and a model for the future. Child and Adolescent Mental Health, 13(2), 54–65. doi: 10.1111/j.1475-3588.2007.00475.x
  48. Weisz JR, Ng MY, & Bearman SK (2014). Odd couple? Reenvisioning the relation between science and practice in the dissemination-implementation era. Clinical Psychological Science, 2(1), 58–74. doi: 10.1177/2167702613501307
