Author manuscript; available in PMC: 2014 Nov 1.
Published in final edited form as: Adm Policy Ment Health. 2013 Nov;40(6):467–481. doi: 10.1007/s10488-013-0494-4

“If it’s worth my time, I will make the time”: School-based providers’ decision-making about participating in an evidence-based psychotherapy consultation program

Aaron R Lyon 1, Kristy Ludwig 1, Evalynn Romano 1, Skyler Leonard 1, Ann Vander Stoep 1, Elizabeth McCauley 1
PMCID: PMC3758430  NIHMSID: NIHMS471483  PMID: 23609107

Abstract

This study evaluated influences on school-based clinicians’ decision-making surrounding participation in a modular psychotherapy training and consultation program lasting one academic year. Clinicians were recruited from three participation groups: those who never engaged, those who engaged and then discontinued, and those who participated fully. Qualitative interviews explored influences on initial and continued participation, as well as differences in decision-making by participation group, knowledge about evidence-based practices, and attitudes toward evidence-based practices. Eight major themes were identified: time, practice utility, intervention/training content, training process, attitudes toward training, social influences, commitment to training, and expectations. Some themes were discussed universally across all comparison groups, while others varied in frequency or content. Recommendations for increasing participation are presented, based on the findings.

Keywords: implementation, consultation, training, school mental health, modular psychotherapy

Introduction

In recent years, youth mental health services researchers have focused increasingly on the implementation of evidence-based practices (EBP) in public sector service settings. This emphasis has stemmed largely from recognition that although EBP are capable of producing superior outcomes to “usual care” approaches (Weisz, Doss, & Hawley, 2006), development of effective psychosocial interventions has greatly outpaced the field’s ability to support their timely uptake and sustained use by real world providers (APA, 2008; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Herschell, McNeil, & McNeil, 2004; McHugh & Barlow, 2010). As a result, the services many youth receive for mental health problems are unlikely to be optimally effective. Concentrated efforts to facilitate the implementation of EBP are one strategy to enhance the quality of public sector youth mental health services. To advance this objective, the current paper examines factors that influence provider engagement in training and consultation with the goals of increasing and sustaining participation in future implementation efforts.

Training and Consultation

McHugh and Barlow (2010) suggested that training psychotherapists to competently deliver EBP is among the greatest challenges currently facing the field of implementation. This is due both to the complexity of the psychosocial interventions themselves and to the ineffectiveness of the most commonly used training models, which focus only on the short-term, didactic presentation of information (i.e., “train and hope,” Stokes & Baer, 1977). Indeed, the need for additional consultation and support to build practitioner competence following didactic training is among the most well-documented and widely-referenced findings in the implementation and training literatures (Beidas & Kendall, 2010; Fixsen et al., 2005; Joyce & Showers, 2002; Lyon, Stirman, Kerns, & Bruns, 2011). For this reason, both initial training and post-training consultation or coaching have been identified as core components or “drivers” of implementation efforts to enhance professional practice (Fixsen et al., 2005). Furthermore, multiple factors are relevant to provider participation in, and the ultimate success of, training and consultation efforts. As detailed below, these include the costs of training and training discontinuation, indicators of training readiness, predictors of training outcome, and variations by training phase or implementation setting.

Costs of training and training discontinuation

Despite the well-documented need for continued support post-training, the provision of long-term consultation is a resource- and time-intensive endeavor. In a study of training costs for Motivational Interviewing (MI) in community-based treatment programs, Olmstead, Carroll, Canning-Ball, and Martino (2011) documented that initial workshop training was the single most costly training expense. Consultation, which may be provided weekly or monthly and can easily last 6 months or more (McHugh & Barlow, 2010), requires further investment in the form of additional lost hours of clinician productivity, consultant preparation, methods for communication and feedback (e.g., conference calls, web-based meetings), and support materials. Olmstead et al. (2011) found that just three months of monthly post-workshop supervision and feedback by experts increased MI training costs by more than 50%. Longer-term or more frequent consultation, common to many training models, is likely to equal or exceed the cost of conducting the initial workshop. Premature discontinuation from training and consultation therefore carries a significant financial impact or “loss on investment” for agencies, practitioners, and EBP purveyors and detracts substantially from the overall cost-effectiveness of an implementation effort.

Pre-training “readiness”

Identification of appropriate practitioners or agencies – those who are prepared to participate fully in training and consultation efforts and then successfully implement new practices – has been described as an additional core component of the implementation process (Fixsen et al., 2005). To this end, multiple authors have discussed an association between clinician attitudes about EBP and their uptake of new practices (e.g., Aarons, 2005; Rogers, 2003), and some studies have evaluated these associations. Nelson and Steele (2008) found that attitudes toward treatment outcome research were a significant predictor of EBP use, and Borntrager and colleagues (2009) documented positive changes in clinician attitudes after completion of training. Although provider knowledge about EBP has received less attention as a predictor of use, low practitioner knowledge has been identified as an important barrier to uptake (Higa & Chorpita, 2008), and more competent clinicians have generally been found to respond better to training (e.g., Siqueland et al., 2000). Furthermore, recent work by Nakamura, Higa-McMillan, Okamura, and Shimabukuro (2011) found no association between clinician attitudes and knowledge about EBP, suggesting unique contributions of both constructs.

Despite a growing recognition of the importance of practitioner attitudes and knowledge, offering training only to practitioners who can demonstrate high levels of “readiness,” competence, or motivation may be insufficient to improve a system as a whole. In contrast to initiatives that focus primarily on high-performing practitioners or sites, engaging the maximum number of practitioners possible in training and consultation may be more important to achieving real, lasting cultural change, promoting and expanding the reach of EBP within service systems, and, ultimately, reducing the burden of mental illness on a larger scale, a key priority for the mental health field (Kazdin & Blase, 2011).

Predictors of training outcome

Equally important to the exploration of key characteristics and processes as clinicians enter training programs is the evaluation of additional factors that influence training outcomes. Even among mental health providers who have successfully completed EBP training and consultation, many still do not make use of the programs in which they were trained (e.g., Asgary-Eden & Lee, 2011; Sanders, Prinz, & Shapiro, 2009). In light of these outcomes, multiple studies have explored barriers and facilitators to the uptake of new practices following a course of training across a variety of programs and contexts. Mancini et al. (2009), for instance, found that the effective implementation of assertive community treatment (ACT) was facilitated by higher-level factors such as effective state administration (e.g., funding, billing systems that could incorporate fidelity monitoring), local program leadership (e.g., managers who understood the program model and empowered staff), sufficient staffing (including low provider turnover), and an organizational culture that was open to change. In one of the largest surveys of barriers to the adoption of new treatments in the community, Cook and colleagues (2009) documented barriers across a range of system levels, including practitioner attitudes, client attitudes and characteristics, contextual/institutional factors, and training issues (e.g., insufficient time, high cost, low accessibility).

Despite the growing literature on predictors of implementation outcomes, little research has addressed the issue of initial engagement in and completion of training and consultation. This is important because initial training participation sets an upper limit on the possible number of providers delivering a new intervention and, consequently, on the potential degree of EBP penetration into a service system. Nevertheless, few studies describe the size, characteristics, or priorities of the pool of potential training participants from which implementation research samples are drawn. Without such information, it is difficult to determine what factors are associated with practitioner decision-making surrounding their participation or how well training programs engage their intended audience.

Training phase and implementation setting

Lyon, Stirman et al. (2011) recently suggested that mental health services research should focus on the predictors of provider motivation and participation in training and consultation opportunities. Identification of these predictors has relevance to practitioners’ initial decisions to participate in training as well as their continuation in ongoing consultation and their ultimate likelihood of new practice adoption. No research has explored whether distinct factors influence initial decisions to participate or decisions to continue to participate in training. Given that studies of intervention adoption following training have found that perceptions of the intervention (e.g., its effectiveness) are an important factor in the use of new practices (Beidas et al., 2012), it is logical that formal exposure to the new content and consultation processes may also influence decision making surrounding continued consultation participation.

Research has also documented significant differences across service settings with regard to variables such as openness to EBP (Aarons, Sommerfeld, & Walrath-Greene, 2009), knowledge about EBP (Nakamura et al., 2011), and use of specific practices (Palmiter, 2004). Practitioner decision-making surrounding participation in training and consultation efforts is similarly likely to vary by context. For this reason, understanding the factors most relevant to participation may be enhanced by detailed analysis of practitioner decisions within specific service settings. There is currently increasing emphasis on improving healthcare quality for underserved populations and in low-resource, public sector contexts, where the gap between “typical” and “optimal” practice is often large and the burden of disease associated with untreated – or ineffectively-treated – mental health problems is high (NIMH, 2008; President’s New Freedom Commission, 2003). To boost participation among key service providers working in those settings and maximize the benefit of training and consultation in the public sector, high priority should be given to contexts that serve large numbers of at-risk youth but are unlikely to employ EBP.

School Mental Health

The education sector represents an important service context in which to study training and consultation participation and one with significant public health implications. As the location where 70–80% of youth mental health care is delivered (Burns et al., 1995; Farmer, Burns, Phillips, Angold, & Costello, 2003), schools are the largest public sector setting for service delivery. Multiple studies have also documented that school-based services effectively reduce access disparities for ethnic minority and low-income youth, who are unlikely to receive indicated care elsewhere, contributing to the high public health relevance of the school setting (Kataoka, Stein, Nadeem, & Wong, 2007; Lyon, Ludwig, Vander Stoep, Gudmundsen, & McCauley, in press; Walker, Kearns, Lyon, Bruns, & Cosgrove, 2010). In recognition of its importance, various federal agencies and initiatives have called for increased attention to, and funding for, school mental health (President’s New Freedom Commission, 2003; Mental Health in Schools Act, 2013; National Institute of Mental Health [NIMH], 2001; U.S. Public Health Service [USPHS], 2000; U.S. Dept. of Education, 2001). In addition, more extensive incorporation of EBPs into education sector services has been identified as a priority (Evans & Weist, 2004; Rones & Hoagwood, 2000; Sexton, Chamberlin, Landsverk, Ortiz, & Schoenwald, 2010). Unfortunately, school-based providers generally have few training resources available to them (Evans & Weist, 2004), making it particularly important that any existing training and consultation initiatives achieve high levels of participation.

When studying barriers and facilitators to the implementation of a school-based trauma intervention, Langley, Nadeem, Kataoka, Stein, and Jaycox (2010) identified a number of variables that are consistent with those described in the literature cited above (Cook et al., 2009; Mancini et al., 2009) and which may have relevance to training and consultation participation. At the organizational level, barriers included a lack of school administrator or teacher support as well as competing staff responsibilities. More immediate logistical issues (e.g., a hectic and crisis-driven environment) and low levels of parental engagement were also identified. Providers who were able to successfully implement the program benefited from better organizational structures for school-based service delivery, networks of other clinicians implementing the same program, and administrative support. In another study, Beidas and colleagues (2012) identified post-training barriers to the use of a cognitive behavioral intervention for youth anxiety among school-based providers that largely paralleled those described by Langley and colleagues, including time to deliver indicated services, youth engagement, intervention effectiveness, and family support. Although the work described above contributes to a better understanding of factors associated with implementation success in education sector mental health, no studies have examined the decision-making processes that school clinicians undergo when determining whether to participate in training. Because participation represents a “gateway” behavior, without which the choice to implement new practices becomes irrelevant, it is essential to understand the factors and motivations that influence this decision (Lyon, Stirman et al., 2011).

Study Aims

This study was conducted to evaluate the decision-making processes school-based mental health clinicians undergo when determining whether to participate in professional development training and consultation programs. The study was conducted in conjunction with a program in which providers were offered a training and consultation opportunity focused on the use of modularized psychotherapy for anxiety and depression (described below). In qualitative interviews following the conclusion of the consultation program, participating and non-participating providers were interviewed about their decisions to participate in training and consultation with the goal of addressing the following research questions: (1) What factors influenced providers’ decisions to participate initially in the consultation program? (2) What factors influenced continued participation in the program? (3) To what extent did decisions about initial and continued participation vary by participation status? (4) To what extent did participation decisions vary by provider knowledge and attitudes about EBP?

Method

The current investigation examined decision-making processes among school-based mental health providers surrounding initial and continued participation in ongoing training and consultation in modular psychotherapy during the 2009–2010 academic year. Training was based on tools available through the PracticeWise Managing and Adapting Practice (MAP) system (Chorpita, Becker, Phillips, & Daleiden, 2009; PracticeWise, 2011) which were adapted for implementation by mental health providers in school-based health centers. All study procedures were conducted with approval from the local institutional review board.

The PracticeWise MAP system was created to improve outcomes of children’s mental health services by enhancing the quality and efficiency of services. It allows for systematic matching of youth mental health problems and characteristics to associated treatment modules that have been identified in the scientific literature as common elements of empirically-supported interventions. MAP builds on a growing literature that has supported the acceptability and effectiveness of modular psychotherapy. Most recently, one such approach, the Modular Approach to Therapy for Children with Anxiety, Depression, or Conduct Problems (MATCH-ADC), was found to be more acceptable to practitioners than more traditional evidence-based treatment protocols (Borntrager et al., 2009) and more effective than those protocols or usual care interventions in promoting positive youth outcomes (Weisz et al., 2012).

MAP has three major tools to support clinical decision making: (1) A computerized evidence-based services database that contains information therapists use to select treatment modules that have the strongest evidence for being helpful for a particular presenting problem, (2) A set of easy-to-use practice guides for each treatment module that give step-by-step instructions for implementing the key elements of the evidence-based treatment approaches so that therapists can avoid searching through multiple treatment manuals, and (3) A computerized, Microsoft Excel-based “dashboard” tracking system to monitor use of treatment elements and track a student’s clinical course using standardized and individualized monitoring targets. In the current implementation, an adapted version of the MAP system included modules relevant to depression and anxiety and introduced elements gradually in the context of ongoing consultation throughout the academic year. Depression and anxiety modules were selected based on previous research about the most commonly-treated conditions in school-based health centers (SBHCs; Walker et al., 2010) as well as project-specific, pre-implementation data collection which indicated that these conditions accounted for 40% of all school-based mental health cases. Adaptations were made to maximize efficiency and the relevance of the training and consultation to the school context and to make use of existing consultation structures and resources (see Lyon, Charlesworth-Attie et al., 2011, for a full discussion of adaptations and adaptation rationale).
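To make the tracking function concrete, the sketch below illustrates the kind of session-by-session record an Excel-based clinical dashboard maintains: the modules used in each session alongside standardized and individualized progress targets. This is a minimal sketch under stated assumptions, not the PracticeWise implementation; all class names, fields, and example values are hypothetical.

# Hypothetical illustration of dashboard-style tracking; not PracticeWise code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionRecord:
    session_number: int
    modules_used: List[str]       # e.g., ["self-monitoring", "psychoeducation"]
    standardized_score: float     # e.g., a depression symptom checklist total
    individualized_target: float  # e.g., days of school attended that week

@dataclass
class ClientDashboard:
    client_id: str
    presenting_problem: str       # "depression", "anxiety", or "mixed"
    sessions: List[SessionRecord] = field(default_factory=list)

    def module_use_rate(self, module: str) -> float:
        # Proportion of recorded sessions in which a given module was used
        if not self.sessions:
            return 0.0
        return sum(module in s.modules_used for s in self.sessions) / len(self.sessions)

# A clinician logs two sessions for one tracked client, then checks module use.
dash = ClientDashboard("S-01", "depression")
dash.sessions.append(SessionRecord(1, ["psychoeducation"], 18.0, 3))
dash.sessions.append(SessionRecord(2, ["self-monitoring", "cognitive restructuring"], 15.0, 4))
print(dash.module_use_rate("self-monitoring"))  # prints 0.5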

Participants and Setting

Participants provided mental health services in SBHCs in public middle and high schools in a large urban area in the Pacific Northwest. All SBHC sites were managed by the local public health department, and all participants worked for one of four community health service organizations that held public health service contracts. Seventeen school-based mental health therapists participated in the study out of 18 possible therapists working in the district’s SBHCs. Although all clinicians had the opportunity to take part in the training and consultation program, the sample of clinicians in the current study represents a range of engagement levels. Of the participating clinicians, seven participated fully in the consultation program over one full academic year, four began participation and then discontinued, and six never engaged in the program. Participants were 94% female, 88% Caucasian, and were an average of 40.2 years of age (SD = 9.9). They had worked as therapists for between 0 and 30 years (mean = 11.0, SD = 9.1) and had held their current positions for 0 to 17 years (mean = 4.9, SD = 4.3). Sixteen of the participants had a Master’s degree in social work, education, or counseling, and the primary theoretical orientation of participating clinicians was “integrative/eclectic” (n = 11), suggesting that their clinical practice was derived from a number of different orientations; smaller numbers endorsed cognitive-behavioral (n = 3), behavioral (n = 1), interpersonal (n = 1), and systems orientations (n = 1). The majority of participating providers were the only dedicated mental health provider in their respective schools.

Training and Consultation Process

As stated above, modules for anxiety and depression were introduced gradually, in an effort to maximize fit with an existing consultation structure, rather than delivered in one intensive (e.g., 5-day/40-hour) training. Initial training occurred over three separate half-day sessions at sites accessible to SBHC providers. At the first meeting, three clinical child psychologist trainers provided an overview of the MAP system, including the EBS database, the dashboard, and a detailed introduction to one practice module. At the second session, additional modules were introduced, and therapists were provided an opportunity to practice interacting with the electronic dashboard. Following the second session, therapists were asked to select five clients with primary presenting problems of anxiety or depression (by their best estimation) to track at a given time.

Therapists tracked their session-by-session use of treatment modules and scores on standardized as well as individually tailored (e.g., school attendance) measures of clinical progress using password-protected dashboards. Group consultation meetings continued biweekly for the duration of the academic year. Meetings were held in person and included (a) case review and (b) training in implementation of additional selected practice modules using active training techniques (e.g., role plays). Therapists reviewed all dashboards for all active MAP cases prior to each consultation meeting. Cases were selected for discussion for a variety of reasons, including deterioration, elevated scores, impending crises, and difficulties with module implementation. Sixteen sessions, scheduled to accommodate clinician schedules, were held in total.

Clinicians who participated fully tracked 66 clients over the consultation period, 75% of whom had a primary presenting problem of depression, 14% anxiety, and 11% mixed depression and anxiety. In all, clinicians used modules for a total of 487 sessions. Therapist dashboard-based reports of module use indicated that self-monitoring (used in 46.5% of possible sessions), cognitive restructuring for depression (45.5%), psychoeducation for depression (43.4%), problem solving (32.6%), and skill building (27.9%) were the most commonly administered modules (see Lyon, Charlesworth-Attie et al., 2011, for complete results).

Data Collected

Qualitative data about participation decisions were collected via semi-structured interviews following program implementation. Quantitative measures assessing evidence-based practice knowledge and attitudes were administered prior to the initiation of the training and consultation program.

Semi-structured qualitative interviews

Semi-structured interviews lasting approximately one hour each (average length = 52 minutes) were conducted with all 17 school-based mental health providers. Two interviewers, one of whom had helped to provide the training and consultation, conducted all interviews. For the current project, responses to a subset of questions from the full interviews were explored. These questions focused on factors that influenced decisions about initial participation (e.g., What factors were relevant to determining whether or not you began participation in the MAP consultation program?) and factors that influenced continued participation (e.g., If you began participating in the consultation program, what factors were relevant to your decision to continue until the end?). Follow-up probes were used routinely to elicit more detailed responses and specific examples. All interview sections were coded using the same final codebook (see below) to allow for comparisons across questions.

Therapist attitudes

The Evidence-Based Practice Attitudes Scale (EBPAS; Aarons, 2004) is a 15-item tool that measures therapists’ attitudes toward EBPs. Items are rated on a five-point scale (0 = “Not at All” to 4 = “To a Very Great Extent”). The measure was administered at baseline, prior to the initiation of the adapted MAP training and consultation program. Internal reliability in the present sample was adequate for the total score (α = .81), which was the only scale used in the analyses. Results from the original study indicated that EBPAS scores were generally comparable to those in the national norming samples and that providers who completed the program did not differ markedly from those who did not participate. The average EBPAS total score was 3.08 (SD = 0.43, median = 2.97, range = 1.40).
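As a point of reference for the reliability coefficient reported above, the following minimal sketch shows how an internal-consistency statistic of this kind (Cronbach’s alpha) is computed from an item-response matrix. The ratings below are fabricated placeholders; only the item count and the 0–4 response scale are drawn from the text.

# Cronbach's alpha for a (respondents x items) rating matrix; data are fabricated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: (n_respondents, n_items) matrix of item ratings
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
ratings = rng.integers(0, 5, size=(16, 15))  # 16 respondents x 15 items, rated 0-4
print(round(cronbach_alpha(ratings), 2))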

Therapist knowledge

The Knowledge of Evidence-Based Services Questionnaire (KEBSQ; Stumpf, Higa-McMillan, & Chorpita, 2009) is a 40-item measure of clinician knowledge of the empirical foundation for specific psychotherapy practice elements in the treatment of major categories of youth mental health problems (i.e., anxiety, depression, disruptive behavior, and attention/hyperactivity). Respondents indicate whether each practice described (e.g., “Teaching the child to measure his/her thoughts, emotions, and/or behavior repeatedly”) is empirically supported for the treatment of one or more of those categories, or for none of them. Test-retest reliability of the KEBSQ is acceptable (r = .56). This measure was also administered prior to program implementation. As with the EBPAS, practitioner knowledge was similar to norming samples and did not differ between training participants and non-participants. The average KEBSQ total score was 100.47 (SD = 9.05, median = 102.00, range = 28).

Analysis

Semi-structured qualitative interviews were audio recorded, transcribed, and then coded using a conventional content analysis (Hsieh & Shannon, 2005) and qualitative coding software (Atlas.ti; Muhr, 2004). Content analysis is typically used to derive the contextual meaning of communications in detail, rather than to establish new theory about the ways qualitatively-derived constructs are related (Hsieh & Shannon, 2005). Conventional content analysis, which focuses on describing phenomena of interest, was used to understand clinicians’ decision-making processes for participating in the consultation program at different stages (initial and continued).

Coding began with a group of four coders reviewing clinician responses to each question from the same subset of transcripts, identifying potential codes and then meeting to discuss the codes and produce an initial codebook that represented a combination/consolidation of the codes identified. (The two researchers who conducted the interviews also served as coders for the qualitative analysis.) The codebook was then trialed independently through multiple iterations in which all team members coded additional transcripts and met for discussion. During this process, new codes relevant to the research questions were added, others removed, and some codes merged or split into sub-codes. This process continued over several iterations until a stable set of codes was reached.

Coding then occurred using a consensus process similar to that described by Hill and colleagues, in which each transcript was recoded independently by two different raters who then met to arrive at consensus judgments through open dialogue (DeSantis & Ugarriza, 2000; Hill, Knox, Thompson, Nutt Williams, & Hess, 2005; Hill, Thompson, & Nutt Williams, 1997). The two coding team members who had completed the original interviews were split into different groups for consensus coding. The consensus coding process is designed to circumvent some researcher biases while being more likely to capture data complexity, avoid errors, and reduce groupthink. Consensus coding makes explicit use of any differences in opinion or coding ambiguities to prompt discussion and increase confidence in findings; it was performed as an alternative to the calculation of inter-rater reliability and is considered by many qualitative researchers to be a more valid method of analyzing human communication (Hill et al., 1997).

The fourth research question (To what extent do participation decisions vary by provider knowledge and attitudes about EBP?) was evaluated using a mixed methods approach. Mixed methods approaches “focus on collecting, analyzing, and merging both quantitative and qualitative data into one or more studies” (Palinkas et al., 2011, p. 44). In the current study, 16 of the 17 participating clinicians contributed EBPAS data, and 15 contributed KEBSQ data. Total scores were calculated for the EBPAS and KEBSQ and then reviewed to ensure an adequate distribution. A median split was then identified for each measure to create two groups for qualitative theme comparison. Clinicians’ qualitative responses were compared across the median splits of those variables to examine the impact of knowledge and attitudes on reasons for participation and continuation. For each variable, respondents were placed into categories based on high and low values and participation status. EBPAS and KEBSQ groupings were spread fairly evenly across participation groups (non-participants, dropouts, full participants).
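The sketch below illustrates the median-split grouping described above, using made-up scores and labels rather than study data; the column names and values are assumptions for illustration only, not the study’s analysis code.

# Median-split grouping with hypothetical clinician-level data.
import pandas as pd

df = pd.DataFrame({
    "clinician": ["A", "B", "C", "D", "E", "F"],
    "ebpas_total": [2.6, 3.4, 2.9, 3.1, 3.3, 2.8],
    "participation": ["none", "full", "dropout", "full", "full", "none"],
})
median = df["ebpas_total"].median()  # split point separating high and low scorers
df["ebpas_group"] = (df["ebpas_total"] > median).map({True: "high", False: "low"})
# Cross-tabulate attitude group by participation status to check that
# high/low groupings are spread across the participation groups.
print(pd.crosstab(df["ebpas_group"], df["participation"]))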

Results

To address the research questions detailed previously, clinician responses to inquiries about their decisions to participate in training and consultation were evaluated. Discussion focused primarily on the MAP training opportunity that had been offered during the preceding year. Table 1 displays the top themes identified from the interviews, as well as their definitions. For both topics (i.e., initial and continued participation), clinicians mentioned the role of scheduling and time constraints, the utility or relevance of the training for their practice, the appeal of specific training content or training and consultation processes, general attitudes toward training, and social influences on participation. Two additional themes – the degree to which the training/consultation met their expectations and their level of participation commitment – were identified only when clinicians discussed continued participation. Each of these themes, the extent to which they varied based on providers’ participation status, and how they were discussed in relation to initial and continued participation, are described below. The section concludes with findings that address the question of how themes varied by practitioner knowledge and attitudes about EBP. Throughout the section, comments in response to queries about initial participation reflect the entire sample (n = 17). In contrast, only clinicians who took part in at least a portion of the MAP training were asked about their continued participation, so those comments reflect only this subset (n = 11).

Table 1.

Most Frequent Codes and Descriptions

Time: Time available to participate in training and/or other time demands within the school setting
Utility/Relevance: Training has specific “benefit” or utility, or is relevant or applicable to a caseload/service context
Content: Description of the content being trained
Process: Characteristics of the training or consultation procedures/process
Attitudes toward Training: Individual’s positive or negative attitude about trainings in general
Social: Role of any social influences on participation (e.g., pressures, social network, trainer)
Commitment: Statements about wanting to see the training/consultation process through to the end
Expectations: Confusion, disconnect, discrepancy, or difference between expectations and the actual training and consultation

Time

Initial participation

The influence of competing time demands and scheduling issues on training participation was referenced by every participant, regardless of their participation status. Many discussed the time they had available in the context of setting-specific service delivery demands, such as the fact that they were the sole practitioner in their school (e.g., “And the time commitment…being out of the building [for training and consultation] with nobody else to cover”) and the difficulties of interfacing with student schedules (e.g., “a lot of times we did [consultations] in the afternoons, which is actually the best time for me to have sessions with the kids because there are more electives offered in the afternoons”). Considering these issues, multiple respondents described making a cost-benefit determination about the utility of participation, based on the time it would require (e.g., “The decision’s always whether the time you’re going to invest will equal to the reward you get,” “If it’s going to be worth my time, then I will make the time.”). Few differences in the specific content of time-related statements were found between nonparticipants, dropouts, and full participants. Interestingly, all providers in the nonparticipant group stated explicitly that time was either the most important or the only factor in their participation decision (e.g., “It comes down to time”).

Continued participation

When discussing their continued participation, time was also frequently referenced by dropouts and full participants. One participant who dropped out indicated that time was the central factor in her discontinuation (“I participated for a while and then I found it impossible to get out of here”), and a full participant indicated that although the amount of time required was not so great that she discontinued, it had approached that level (“if it had required much more time than it did, I probably would’ve at least reconsidered”). Although time issues were raised by most clinicians, a notable difference by group was that a number of full participants made comments about the time commitment ultimately being less of a burden than they had originally feared (e.g., “it took more time stressing than actual time”).

Utility/Relevance

Initial participation

The expected utility of participation was a major theme described in conjunction with time limitations. Comments referenced the anticipated ability of the training to provide “an opportunity to learn new skills” as well as the likelihood that those skills would increase clinician effectiveness (e.g., “I would love to be better at my craft and more effective”). Utility was frequently discussed as it related to the relevance of skills and therapy techniques to their caseloads. Initial comments in this area were both positive (“I thought I’d gain a lot of insight and different techniques to utilize with my families and students”) and negative (“Initially, I wasn’t sure that it would be beneficial to the types of kids that I worked with”). Furthermore, a number of clinicians referenced the compatibility of the intervention with the constraints of the larger school setting (e.g., “…how to apply CBT in a school-based setting, what limitations there might be with that, and how it could be tweaked to actually work in a school environment”). Interestingly, all providers who made clear reference to the fit of the practices with the broader school context – and not just the students on their caseloads – were among those who participated fully in the consultation program.

Continued participation

In contrast to initial participation, comments about the utility or relevance of ongoing consultation were made almost exclusively by full participants, many of whom expressed that they derived considerable value from most aspects of the program (e.g., “Over time, using the tools and charting them out, you can definitely see how the methods were working”). One participant simply stated, “I suppose it just came down to an ongoing sense that I was getting enough out of it, that it was worth continuing.”

Content

Initial participation

The specific intervention content trained and supported during the program was frequently mentioned by providers discussing their decisions to participate initially. This included positive comments about the novelty of the practices trained (“I thought it would freshen my practice, give me some new things to think about”) and negative comments about their redundancy (“There wasn’t anything new”). Other providers noted the difficulty of balancing the two in a diverse group of professionals (“You have people with different amounts of experience and it’s kind of hard to meet the needs of all the people in the group”). Almost all responses about intervention novelty came from those clinicians who participated fully. Even more commonly, clinicians referenced the fit between the intervention approach covered in training and their individual treatment approaches (“CBT…provides more of a framework [for me], so I never really thought about it twice”). Additional comments were made about the extent to which the intervention was evidence-based (“doing something that’s evidence-based and getting real clear information and techniques was definitely a draw”) as well as providers’ desires to stay current with empirical research despite the difficulty involved (“I read articles, go to trainings, that kind of thing, but I’m not sure exactly which…[have] that much evidence backing”). Within the EBP subtheme, the only negative comment identified was made by a practitioner who did not participate and stated, “I believe in evidence-based practice, I do. And I think it has limitations. And I think it’s from a very specific perspective. And I don’t buy into that perspective.”

Continued participation

Unlike initial participation, few clinicians from either the dropout or full participant groups commented on the influence of any aspects of the program content on their continued participation, suggesting that this factor may have been more important in initial decision-making. Despite this, one dropout participant did comment that she found the MAP modules “very simplistic” once she had been exposed to them.

Process

Initial participation

Nearly all respondents mentioned characteristics of the training and consultation process in their initial decision-making. Most commonly, providers discussed the documentation requirements of the consultation effort; most of this discussion was directed at the MAP clinical dashboards. Similar to comments about time constraints, many initially described fears that the dashboards would be too difficult or burdensome (e.g., “the whole technical thing with the dashboards was a little bit…overwhelming”). Documentation was a theme discussed across all participation groups.

An additional process component frequently referenced by providers was the presence of high-quality, relevant, and ongoing case consultation (e.g., “Having consultation…helping not revert back to all your ways of doing therapy”), which was often discussed as providing structure for the experience (“I kind of liked that it was…concrete and nicely carved out.”). Comments about structure were universally positive and nearly all came from full consultation participants. A smaller subset of the providers also referenced the appeal of “hands-on experience” or other opportunities for active practice (e.g., “role plays,” “watching [the trainer]…do a case”).

Continued participation

When discussing continued participation decisions, training and consultation process appeared relevant to both full participants and dropouts. Some wanted consultation that was either more active than what was provided (“I was hoping it was going to be more consulting and…role playing”) or focused less on the expertise of the consultants and more on the expertise of the providers (“using the expertise in the room.”). Although one full participant “didn’t find the consultation piece as helpful” as other training components, and one dropout commented that “there was a lot of filler,” others found consultation “extremely helpful,” especially the opportunities it afforded for feedback. Additional process influences on continued participation for the full participants included “the fact that there were expectations…some accountability.” References to dashboard documentation were also present here, although comments made by full participants indicated that, for some, it was less of a problem than initially anticipated (e.g., “the dashboards really weren’t that hard…some people obviously hadn’t opened Excel spreadsheets before, but it wasn’t that complicated”).

Attitudes toward Training

Initial participation

When discussing their initial decision-making, many providers expressed prevailing attitudes about training in general, which appeared to impact the likelihood that they would take part in the consultation program offered. For instance, equal numbers of providers expressed early certainty that they were going to participate (e.g., “I was never not going to participate”) or not participate (e.g., “I’m trying to remember how much I even listened, since I knew I wasn’t going to be able to do it”). Notably, the majority of the non-participants indicated that they had decided not to participate before the trainers had even delivered their first overview presentation of the SBHC MAP training program (e.g., “so I kind of went in saying ‘no’ …and so I wasn’t listening to anything”), suggesting that they may have been unlikely to engage under any conditions, even at that early stage.

Some clinicians also described emotional responses to the prospect of the MAP training, usually in the form of feeling “overwhelmed” or “stressed” by the program and its anticipated additional responsibilities (e.g., “I can’t. I’m too old, I’m too tired, too stressed, and I just can’t add another expectation”). These clinicians were invariably among those who never participated or began participating and then dropped out.

Continued participation

Interestingly, training attitudes – including certainty about participation and emotional responses – appeared to be as relevant to continued participation as they were to initial participation. A number of full participants indicated that they had never been likely to discontinue (e.g., “I didn’t really even have it in my mind that it was an option to stop it”), but one dropout “just felt overwhelmed.”

Social Influences

Initial participation

Many providers referenced the influence of other people on their initial participation decisions. This included discussion of previous trainings being mandated (“Not that I feel forced, but normally there’s not always a choice”), level of supervisor and agency support (“if it hadn’t been approved by my supervisor, I don’t think I would have been able to do it”), the influence of providers’ social networks (“If there’s people who I really feel like have good practice…and if they’re going to do it, I guess that I would be more likely to choose to do it.”), and characteristics of the expert consultants conducting the training (“it was an easy decision to make because I knew the trainers,” “everybody talks about [one trainer]…she knows what she’s talking about and she’s very personable”).

Discussion of social influences also diverged based on participation status. Specifically, although nonparticipants referenced the general appeal of participating as a group (e.g., “I was kind of like, ‘ooh, well, if everyone else is doing that, that’d be kind of cool’”), dropouts – and, even more commonly, full participants – referenced their relationships with other providers (“to see my colleagues,” “it was one advantage being with other people doing my work”).

Continued participation

A variety of social influences were also mentioned when clinicians discussed continued participation. This theme was most commonly mentioned by full participants and, similar to initial participation, contained supervisor influences (“my supervisor…just left it all up to me whether I wanted to or didn’t want to,” “there was pressure from the supervisor to keep continuing”) and participants’ social relationships with their colleagues (e.g., “It felt like I had to show up for everyone”).

Commitment

Some practitioners referenced having made a commitment to the training and consultation program or their desire to see the training through to its end. Such comments were unique to discussions about continued participation. Although no dropouts mentioned this theme, nearly every full participant did so (e.g., “I would never start something and then drop out halfway through…” “I had made that commitment when I started”).

Expectations

Finally, multiple clinicians discussed discrepancies between how they initially believed the training would progress and their actual experiences during participation. Not surprisingly, this code was only present in discussions about continued participation and was widely referenced by both groups. Consistent with other codes, dropouts discussed a lack of clarity about the level of time required (“the time commitment…I did not read anywhere that it was gonna be as grand as it actually was”), documentation (“the documentation part was very unclear to me at the beginning and it remained unclear until I decided that I just couldn’t do it any longer”), and other consultation procedures or general uncertainty. Full participants noted similar questions about documentation and procedures, but some ultimately expressed surprise that the training and consultation had gone better than expected (“I think we all thought it wasn’t as challenging as we thought it would be”).

Knowledge and Attitudes about EBP

Few clear differences were identified in therapist-reported influences on participation by either their knowledge about or their attitudes toward EBPs. Most notably, and consistent with their scores on the KEBSQ, low-knowledge participants were more likely than high-knowledge participants to make positive comments about the novelty of the training content (“I feel like I’ve got plenty of stuff to learn”), but also negative comments about the perceived documentation burden described previously in reference to the dashboards (e.g., “sounded like a lot of paperwork, I’m not huge on that”). Furthermore, low-knowledge participants were the only clinicians to state a desire or appreciation for active practice (e.g., “[the trainer] brought in computers and had them there so we could actually have hands on [experience] and get it”), which was a component of the training Process code described above. Interestingly, practitioner attitudes toward EBP did not appear to clearly differentiate any of their responses.

Discussion

The analyses described above were conducted to contribute to the nascent literature on provider decision-making surrounding participation in professional development activities with a focus on schools, the most common service setting for youth mental health care. Clear identification of the influences surrounding clinician decisions about whether to initiate and continue participation in training and consultation has the potential to inform the development of more relevant and appealing programs, increase practitioner engagement, reduce unnecessary costs, and enhance the reach of implementation efforts. Eight themes were identified: time, practice utility or relevance, intervention/training content, training process, attitudes toward training, social influences, commitment to training, and expectations. Some of these themes were discussed universally across all training stages and participation groups, while others varied in frequency or content. These themes and their implications are discussed below.

Time

Practitioner time and availability, especially given the constraints of the school setting (e.g., sole practitioner, scheduling issues), was the most commonly referenced theme, regardless of participation group or training stage. In addition, other codes (e.g., documentation, emotional responses) frequently referenced time issues either explicitly or implicitly. This finding suggests that time concerns were universally important to, but not always decisive in, training and consultation participation. Although somewhat different from participation decisions, the primacy of this theme is consistent with prior studies of barriers to clinician adoption of new practices. Pagoto et al. (2007) also identified logistical considerations, such as time, in their qualitative study of facilitators and barriers to EBP use. In a large, nationwide survey of practicing therapists, Cook and colleagues (2009) found that concerns about time were the most frequently-cited barrier to the adoption of new interventions. The authors also suggested further research to clarify whether time was actually an “umbrella justification for unwillingness to or disinterest in adopting a new treatment” (p. 89). Our research revealed little to indicate that statements about time were made in place of other attitudinal influences on decision making, but findings did suggest that full participants may have experienced greater motivation to “make the time” to participate in the MAP training. Alternatively, although all respondents experienced time pressures, it is possible that the pressures on nonparticipants and dropouts were greater than for full participants, and that these differences were not captured in their responses.

Reduction of competing responsibilities is undoubtedly an important facilitator of participation, as previous research has documented that productivity requirements can hinder EBP implementation (Schoenwald et al., 2008). To address this, providers of training and consultation programs may be able to increase participation and engagement if there are opportunities to adapt the organizational context by negotiating changes in the productivity expectations maintained by participants’ agencies (Aarons & Palinkas, 2007). However, given the high demands of schools and other community-based service settings, this could be a difficult or insurmountable task. It may therefore be equally important to increase provider motivation to overcome or work within existing time constraints by creating compelling training opportunities. This could be accomplished by establishing policies that incentivize the implementation of EBP at multiple levels (Goldman et al., 2001; Rapp et al., 2005) or by creating trainings that are perceived to have high utility, treatment effectiveness, and relevance in the destination context.

Utility/Relevance

Perceptions about the utility or relevance of the training program were a key component of the cost-benefit decisions described by many participants when selecting whether or not to take part. The higher the perceived time demands, the more important positive judgments about the utility of the program become in justifying the investment. Previous qualitative research has also identified the perceived applicability of evidence-based treatments as a core component of adolescent mental health practitioners’ attitudes toward use (Nelson, Steele, & Mize, 2006). Unfortunately, methods of improving perceptions about utility prior to training participation may prove elusive, especially given that many clinicians do not find research evidence for the efficacy of a given innovation a particularly compelling or sufficient reason to adopt (Gallo & Barlow, 2012; Nelson et al., 2006). It may not be until professionals have personal experiences of success implementing new practices that they begin to fully endorse their utility (Han & Weiss, 2005). In light of this, training and consultation programs may best enhance participation by focusing less on research-based efficacy and more on similarities between the context in which a program was developed and the destination context, and by encouraging direct hands-on use of new practices as early as possible.

Content

Although many providers appreciated that the training was evidence-based, and some found the intervention content to be novel, compelling, and a driver of their participation, others questioned whether the program offered much beyond their existing areas of expertise. Analyses comparing statements across high and low knowledge about EBP, based on the KEBSQ, indicated that providers with higher knowledge made more statements about the redundancy of the practices offered. Differences in provider knowledge and experience, driven partially by clinician turnover and the hiring of less experienced replacements, make it difficult to design a training that has widespread appeal. Although some degree of turnover can be beneficial for implementation (Woltmann et al., 2008), high turnover among mental health practitioners working in schools has been identified as one of the most significant barriers to the sustained implementation of new practices (Forman, Olin, Hoagwood, Crowe, & Saka, 2009). Aside from targeted trainings for new and experienced clinicians, designing trainings that balance novel and familiar practices may be the best approach to optimize participation. This is consistent with recommendations that engagement may be highest when training programs are perceived to blend new content with strategies already in use (Lyon, Stirman et al., 2011). Based on the results of the current study, pre-training evaluation of provider knowledge may allow for more targeted training and consultation programs.

Process

Multiple characteristics of the training and consultation process emerged as providers discussed their participation decisions, including documentation, consultation, and opportunities for active practice. Many concerns about documentation were related to provider comfort with, or experience using, Microsoft Excel spreadsheets, which served as the foundation for the clinical dashboards. As the use of technology in healthcare becomes increasingly ubiquitous, it creates demands on providers to have sufficient knowledge and experience interacting with computers and software. Although the Excel dashboards used in the MAP project represent a relatively simple computer technology, the results of the current study suggest that it was unfamiliar to some providers and that this may have impacted participation, especially initially. Given that providers who continued to participate stated that the dashboards were less of a burden than originally anticipated, it may be that the original concerns were driven by some degree of misperception about the difficulty, given limited experience. For this reason, clear initial presentation of the more technical components of a training effort may be beneficial in helping providers make informed decisions about their participation.

The other two process components identified in provider comments, consultation and active practice, represent essential elements of any training and implementation effort (Fixsen et al., 2005; Lyon, Stirman et al., 2011; Rakovshik & McManus, 2010). The results of the current study suggest that school-based clinicians also value these research-supported components (e.g., role plays) when deciding whether to participate in training and consultation. Although consultation occurred regularly and efforts were made to ensure that all didactic presentations incorporated demonstrations and practice opportunities, clinician comments suggest that even greater attention to these components may have been justified to increase rates of participation.

Attitudes and Commitment

Provider attitudes toward training appeared to be particularly influential in their reported decision-making surrounding whether or not to participate in the program. Importantly, attitudes toward training, as coded in the current study, differ from attitudes toward EBP, the commonly-cited barrier to uptake (Aarons, 2005; Cook et al., 2009). Commitment, a code that was identified only in discussions of continued participation, was similar to training attitudes, but reflected a more general conscientiousness and desire to follow through on obligations. Regarding attitudes, high numbers of clinicians indicated that they had decided whether or not to participate prior to receiving explicit details about the program itself. Other clinicians reported negative emotional responses to the perceived additional time burden. Although it was not explicitly evaluated in the current study, this type of reaction and feeling of being overwhelmed may place providers at risk for future emotional exhaustion, a key component of professional burnout (Cropanzano, Rupp, & Byrne, 2003; Wykes, Stevens, & Everitt, 1997).

Among those who participated, comments reflected a high degree of conscientiousness and internal motivation that likely predisposed these providers to participate in any training. The attitudes theme also indicates that many providers make early decisions about participation, often based on incomplete information and sometimes influenced by emotional responses. It may therefore be important to communicate key information “early and often” to potential participants, well in advance of the point when they will be asked to make a participation commitment. For some providers, making a clear commitment may itself be an important, ongoing motivator to continue participating. In light of research demonstrating that the use of EBP is associated with lower emotional exhaustion (Aarons, Fettes, Flores, & Sommerfeld, 2009), the potential beneficial effects of participation on provider burnout could also be emphasized.

Social Influences

Social influences on participation were mentioned by clinicians at multiple levels and from multiple sources, including supervisors, other providers, and the trainers. At each level, these factors were most often described as increasing the likelihood of participation, although there was more variability at the supervisor/organizational level. Overall, these results begin to reveal the complex social interactions that can influence provider decision-making and the potential utility of identifying, understanding, and engaging those who may be most influential at each level. In the current study, some of the professionals conducting the training were also identified as influential, based on their existing relationships with some participants as well as perceptions of their expertise. These dimensions are consistent with the literature on persuasive communication, which identifies expertise and aspects of trustworthiness as two of the most important contributors to source credibility (Pornpitakpan, 2004).

Further evaluation of the role of providers’ social networks in training and consultation participation may benefit from recent advances in social network analysis, which allows for the systematic evaluation of the connections among individuals (Degenne & Forsé, 2004). Professionals who are positioned centrally in a social network may be particularly influential on their peers and could represent leverage points for implementation (Atkins et al., 2008). Based on the current results, direct targeting of providers’ social networks and existing “key opinion leaders” may be one method of increasing training and consultation participation.
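
As a concrete illustration of this idea, the sketch below uses Python’s networkx library to compute degree centrality in a small, entirely hypothetical advice-seeking network among providers; the most central individual would be a candidate “key opinion leader” for implementation outreach.

    # A minimal sketch of flagging potential key opinion leaders via degree
    # centrality; the providers and ties are hypothetical.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Provider A", "Provider B"),
        ("Provider A", "Provider C"),
        ("Provider A", "Provider D"),  # Provider A is consulted by many peers
        ("Provider B", "Provider C"),
        ("Provider D", "Provider E"),
    ])

    # Degree centrality: the share of other providers each person is tied to.
    centrality = nx.degree_centrality(G)
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    print(ranked[0])  # prints "Provider A", the most central provider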

Expectations

Related to consultation continuation, many providers indicated that their actual experience participating in the training differed from their initial expectations. This code often overlapped with the codes already described. Although dropouts and full participants commented on similar issues (e.g., documentation), full participants were more likely to ultimately frame their comments in a positive light or to discuss how they addressed the difficulty. Regardless, this theme underscores the importance of clear, accurate communication prior to training initiation about the procedures involved and the anticipated burden of the commitment.

Between-Group Comparisons

In addition to describing themes, this study compared their frequency and quality across the dimensions of EBP knowledge, EBP attitudes, and participation status. Generally, few differences were observed by knowledge and attitudes, a result consistent with the finding from the original pilot project that knowledge and attitudes did not differentiate participation groups. One exception was that lower-knowledge individuals expressed greater interest in active practice experiences during the training and appeared to find the content of the training more novel. These results make intuitive sense and suggest that low-knowledge providers were aware of their relative inexperience using many of the techniques trained. This group also seemed to share an understanding that active training and consultation techniques, which engage providers in the learning process, would be more likely to build their skills than traditional didactic approaches. This finding is noteworthy because such an understanding may help to motivate providers to participate in more intensive, and ultimately more effective, training initiatives.
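
To make the nature of these frequency comparisons concrete, the sketch below tabulates coded statements by theme and participation group using Python’s pandas library. The data are invented for illustration only; the study itself characterized theme frequencies through consensual qualitative analysis rather than through code of this kind.

    # A minimal, hypothetical sketch of cross-tabulating coded-theme
    # frequencies by participation group.
    import pandas as pd

    statements = pd.DataFrame({
        "group": ["full", "full", "dropout", "nonparticipant", "full", "dropout"],
        "theme": ["time", "utility", "time", "attitudes", "utility", "time"],
    })

    # Rows: themes; columns: participation groups; cells: statement counts.
    counts = pd.crosstab(statements["theme"], statements["group"])
    print(counts)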

Furthermore, examination of differences in themes described by training nonparticipants, dropouts, and full participants provided an opportunity to discern which themes may be uniquely associated with clinician engagement. Perhaps not surprisingly, full participants were generally more positive about the MAP consultation program than the other groups. Even when describing their concerns at different participation stages (e.g., about the time involved), they clarified that the experience was more positive or less burdensome than expected. Relative to dropouts and nonparticipants, full participants were more nuanced in their discussion of the utility and relevance of the MAP program for their service setting, referring not just to the fit with their caseloads, but also to the fit with other aspects and constraints of the school mental health context. This may reflect either that full participants, as a function of their time in the program, had more opportunities to think about its match with their service setting or that they simply entered the training considering more facets of their context than just their caseload characteristics. Organizational-level processes have frequently been identified as significant contributors to the success of implementation efforts and the uptake of new skills (Beidas & Kendall, 2010; Cook et al., 2009), and providers who are more aware of those processes may be better equipped to implement new practices.

Finally, although all participants found the anticipated utility of the training to be an important influence on initial participation, its influence on continued involvement was only referenced by full participants, suggesting that they may have found it useful while dropouts did not. As noted earlier, perceptions of new program effectiveness are a common component of many models of implementation and sustainability. In their process model of enhanced sustainability, for instance, Han and Weiss (2005) conceptualized observed changes in outcome, attributed to the new program, as constituting a feedback loop to reinforce practice change. In the current project, such changes appear to have influenced some providers’ continued participation, lending support to this perspective.

Limitations

Limitations of the current study include the retrospective nature of the interviews. Even though clinicians were asked to recall their decision-making at the initial and continued participation stages, the interviews with providers occurred after the fact, and it was not possible to determine with certainty whether some of the differences among groups were actually due to different levels of exposure to the MAP consultation program or to a hindsight bias. Furthermore, both interviewers had been involved in the consultation program, which may have caused some participants to be less forthcoming with their actual opinions. Nevertheless, there was no clear evidence of this, and the interviewers’ familiarity with the program and the participants was also likely a strength for engendering comfort and eliciting candid responses. Finally, the findings were derived from a sample of providers working in one particular type of school-based service delivery model (SBHCs) in one school district. As a result, caution is warranted in generalizing the findings to other school-based service delivery models.

Conclusion

This study was designed to identify important determinants of school-based mental health provider motivation for, and engagement in, professional training and consultation opportunities. Eight themes were drawn from respondents’ accounts of their decisions to participate in the MAP consultation program (time, utility/relevance, content, process, attitudes toward training, social influences, commitment, and expectations), which were largely consistent with the barriers to new skill uptake identified previously (e.g., Cook et al., 2009). Next steps include evaluating which factors are most malleable and designing pre-training interventions to address them directly. Considering the multiple demands placed on practitioners within the school context, increasing the time available for participation may be a difficult target for direct intervention, although increasing efficiency through greater administrative support or the introduction of new, time-saving technologies may remain a possibility. Instead, training and consultation programs could focus on increasing perceptions of other constructs, most notably the utility or relevance of the new practices for the school context, so that the benefits of participation outweigh the inevitable time burden and increase the likelihood that a given practitioner will “make the time” to participate. Sansone and Thoman (2006) have conceptualized activity engagement as motivated by an individual’s goals and the degree of interest and investment the individual has in the goal pursuit process. Although most clinicians endorse a desire to improve their practice, pre-training interventions may be needed to increase investment in the pursuit of that goal across a wide variety of implementation and quality improvement efforts.

Acknowledgments

This publication was made possible in part by funding from grant numbers F32 MH086978 and K08 MH095939, awarded to the first author by the National Institute of Mental Health (NIMH). The authors would also like to thank the school-based mental health provider participants, Seattle Children’s Hospital, and the King County Public Health Department for their support of this project.

Dr. Lyon is an investigator with the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through an award from the National Institute of Mental Health (R25 MH080916) and the Department of Veterans Affairs, Health Services Research & Development Service, Quality Enhancement Research Initiative (QUERI).

References

1. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6:61–74. doi: 10.1023/b:mhsr.0000024351.12294.65.
2. Aarons GA. Measuring provider attitudes toward evidence-based practice: Consideration of organizational context and individual differences. Child and Adolescent Psychiatric Clinics of North America. 2005;14(2):255–271. doi: 10.1016/j.chc.2004.04.008.
3. Aarons GA, Fettes DL, Flores LE, Sommerfeld DH. Evidence-based practice implementation and staff emotional exhaustion in children’s services. Behaviour Research and Therapy. 2009;47(11):954–960. doi: 10.1016/j.brat.2009.07.006.
4. Aarons GA, Palinkas LA. Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health. 2007;34:411–419. doi: 10.1007/s10488-007-0121-3.
5. Aarons GA, Sommerfeld D, Walrath-Greene C. Evidence-based practice implementation: The impact of public versus private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implementation Science. 2009;4(1):83. doi: 10.1186/1748-5908-4-83.
6. American Psychological Association Task Force on Evidence-Based Practice for Children and Adolescents. Disseminating evidence-based practice for children and adolescents: A systems approach to enhancing care. Washington, DC: American Psychological Association; 2008.
7. Asgary-Eden V, Lee CM. So now we’ve picked an evidence-based program, what’s next? Perspectives of service providers and administrators. Professional Psychology: Research and Practice. 2011;42:169–175.
8. Atkins MS, Frazier SL, Leathers SJ, Graczyk PA, Talbott E, Jakobsons L, Bell CC. Teacher key opinion leaders and mental health consultation in low-income urban schools. Journal of Consulting and Clinical Psychology. 2008;76(5):905–908. doi: 10.1037/a0013036.
9. Beidas RS, Kendall PC. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science & Practice. 2010;17:1–30. doi: 10.1111/j.1468-2850.2009.01187.x.
10. Beidas RS, Mychailyszyn MP, Edmunds JM, Khanna MS, Downey MM, Kendall PC. Training school mental health providers to deliver cognitive-behavioral therapy. School Mental Health. 2012:1–10. doi: 10.1007/s12310-012-9074-0.
11. Borntrager CF, Chorpita BF, Higa-McMillan C, Weisz JR. Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services. 2009;60:677–681. doi: 10.1176/ps.2009.60.5.677.
12. Burns BJ, Costello EJ, Angold A, Tweed D, Stangl D, Farmer EMZ, Erkanli A. Children’s mental health service use across service sectors. Health Affairs. 1995;14:147–159. doi: 10.1377/hlthaff.14.3.147.
13. Chorpita B, Becker K, Phillips L, Daleiden E. Practitioner Guides. Satellite Beach, FL: PracticeWise; 2009.
14. Cook JM, Schnurr PP, Biyanova T, Coyne JC. Apples don’t fall far from the tree: Influences on psychotherapists’ adoption and sustained use of new therapies. Psychiatric Services. 2009;60:671–676. doi: 10.1176/appi.ps.60.5.671.
15. Cropanzano R, Rupp DE, Byrne ZS. The relationship of emotional exhaustion to work attitudes, job performance, and organizational citizenship behaviors. Journal of Applied Psychology. 2003;88:160–169. doi: 10.1037/0021-9010.88.1.160.
16. Degenne A, Forsé M. Introducing social networks. London: Sage; 2004.
17. DeSantis L, Ugarriza DN. The concept of theme as used in qualitative nursing research. Western Journal of Nursing Research. 2000;22(3):351–372. doi: 10.1177/019394590002200308.
18. Evans SW, Weist MD. Implementing empirically supported treatments in the schools: What are we asking? Clinical Child and Family Psychology Review. 2004;7:263–267. doi: 10.1007/s10567-004-6090-0.
19. Farmer EM, Burns BJ, Phillips SD, Angold A, Costello EJ. Pathways into and through mental health services for children and adolescents. Psychiatric Services. 2003;54:60–66. doi: 10.1176/appi.ps.54.1.60.
20. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005.
21. Forman S, Olin SS, Hoagwood K, Crowe M, Saka N. Evidence-based interventions in schools: Developers’ views of implementation barriers and facilitators. School Mental Health. 2009;1(1):26–36.
22. Gallo KP, Barlow DH. Factors involved in clinician adoption and nonadoption of evidence-based interventions in mental health. Clinical Psychology: Science & Practice. 2012;19:93–106.
23. Goldman HH, Ganju V, Drake RE, Gorman P, Hogan M, Hyde PS, et al. Policy implications for implementing evidence-based practices. Psychiatric Services. 2001;52:1591–1597. doi: 10.1176/appi.ps.52.12.1591.
24. Han SS, Weiss B. Sustainability of teacher implementation of school-based mental health programs. Journal of Abnormal Child Psychology. 2005;33(6):665–679. doi: 10.1007/s10802-005-7646-2.
25. Herschell AD, McNeil CB, McNeil DW. Clinical child psychology’s progress in disseminating empirically supported treatment. Clinical Psychology: Science and Practice. 2004;11(3):267–288.
26. Higa CK, Chorpita BF. Evidence-based therapies: Translating research into practice. In: Steele RG, Elkin TD, Roberts MC, editors. Handbook of evidence-based therapies for children and adolescents. Springer; 2008. pp. 45–61.
27. Hill CE, Knox S, Thompson BJ, Nutt Williams E, Hess SA. Consensual qualitative research: An update. Journal of Counseling Psychology. 2005;52(2):196–205.
28. Hill CE, Thompson BJ, Nutt Williams E. A guide to conducting consensual qualitative research. The Counseling Psychologist. 1997;25(4):517–572.
29. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research. 2005;15:1277–1288. doi: 10.1177/1049732305276687.
30. Joyce BR, Showers B. Student achievement through staff development. 3rd ed. Alexandria, VA: Association for Supervision & Curriculum Development; 2002.
31. Kataoka SH, Stein BD, Nadeem E, Wong M. Who gets care? Mental health service use following a school-based suicide prevention program. Journal of the American Academy of Child and Adolescent Psychiatry. 2007;46:1341–1348. doi: 10.1097/chi.0b013e31813761fd.
32. Kazdin AE, Blase SL. Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspectives on Psychological Science. 2011;6:21–37. doi: 10.1177/1745691610393527.
33. Langley AK, Nadeem E, Kataoka SH, Stein BD, Jaycox LH. Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health. 2010;2:105–113. doi: 10.1007/s12310-010-9038-1.
34. Lyon AR, Ludwig K, Vander Stoep A, Gudmundsen G, McCauley E. Mental healthcare utilization across service sectors for adolescents at risk for depression. School Mental Health. (in press). doi: 10.1007/s12310-012-9097-6.
35. Lyon AR, Stirman SW, Kerns SEU, Bruns EJ. Developing the mental health workforce: Review and application of training strategies from multiple disciplines. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:238–253. doi: 10.1007/s10488-010-0331-y.
36. Mancini AD, Moser L, Whitley R, McHugo GJ, Bond GR, Finnerty MT, Burns BJ. Assertive community treatment: Facilitators and barriers to implementation in routine mental health settings. Psychiatric Services. 2009;60(2):189–195. doi: 10.1176/ps.2009.60.2.189.
37. McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist. 2010;65:73–84. doi: 10.1037/a0018121.
38. Mental Health in Schools Act, S. 195, 113th Cong., 1st Sess. (2013).
39. Muhr T. ATLAS.ti 5.0 (Version 5) [Software]. Berlin, Germany: ATLAS.ti Scientific Software Development GmbH; 2004. Available from http://atlasti.com.
40. Nakamura BJ, Higa-McMillan CK, Okamura KH, Shimabukuro S. Knowledge of and attitudes towards evidence-based practices in community child mental health practitioners. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(4):287–300. doi: 10.1007/s10488-011-0351-2.
41. National Institute of Mental Health. Blueprint for change: Research on child and adolescent mental health; a report of the National Advisory Mental Health Council’s Workgroup on child and adolescent mental health intervention and deployment. Rockville, MD: National Institute of Mental Health; 2001.
42. National Institute of Mental Health. The National Institute of Mental Health strategic plan. Bethesda, MD: National Institute of Mental Health; 2008. http://www.nimh.nih.gov/about/strategic-planning-reports/index.shtml.
43. Nelson TD, Steele RG, Mize JA. Practitioner attitudes toward evidence-based practice: Themes and challenges. Administration and Policy in Mental Health and Mental Health Services Research. 2006;33:398–409. doi: 10.1007/s10488-006-0044-4.
44. Nelson TD, Steele RG. Influences on practitioner treatment selection: Best research evidence and other considerations. Journal of Behavioral Health Services & Research. 2008;35(2):170–178. doi: 10.1007/s11414-007-9089-8.
45. Olmstead T, Carroll KM, Canning-Ball M, Martino S. Cost and cost-effectiveness of three strategies for training clinicians in motivational interviewing. Drug and Alcohol Dependence. 2011;116:195–202. doi: 10.1016/j.drugalcdep.2010.12.015.
46. Pagoto SL, Spring B, Coups EJ, Mulvaney S, Coutu MF, Ozakinci G. Barriers and facilitators of evidence-based practice perceived by behavioral science health professionals. Journal of Clinical Psychology. 2007;63:695–705. doi: 10.1002/jclp.20376.
47. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed methods design in implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:44–53. doi: 10.1007/s10488-010-0314-z.
48. Palmiter DJ. A survey of the assessment practices of child and adolescent clinicians. American Journal of Orthopsychiatry. 2004;74:122–128. doi: 10.1037/0002-9432.74.2.122.
49. Pornpitakpan C. The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology. 2004;34(2):243–281.
50. PracticeWise. PracticeWise Evidence-Based Services (PWEBS) database. Satellite Beach, FL: Author; 2011.
51. President’s New Freedom Commission on Mental Health. Achieving the promise: Transforming mental health care in America. Final report. Rockville, MD: President’s New Freedom Commission on Mental Health; 2003.
52. Rakovshik SG, McManus F. Establishing evidence-based training in cognitive behavioral therapy: A review of current empirical findings and theoretical guidance. Clinical Psychology Review. 2010;30(5):496–516. doi: 10.1016/j.cpr.2010.03.004.
53. Rapp CA, Bond GR, Becker DR, Carpinello SE, Nikkel RE, Gintoli G. The role of state mental health authorities in promoting improved client outcomes through evidence-based practice. Community Mental Health Journal. 2005;41:347–363. doi: 10.1007/s10597-005-5008-8.
54. Rogers EM. Diffusion of innovations. 5th ed. New York, NY: Free Press; 2003.
55. Rones M, Hoagwood K. School-based mental health services: A research review. Clinical Child and Family Psychology Review. 2000;3:223–241. doi: 10.1023/a:1026425104386.
56. Sanders MR, Prinz RJ, Shapiro CJ. Predicting utilization of evidence-based parenting interventions with organizational, service provider and client variables. Administration and Policy in Mental Health. 2009;36:133–143. doi: 10.1007/s10488-009-0205-3.
57. Sansone C, Thoman DB. Maintaining activity engagement: Individual differences in the process of self-regulating motivation. Journal of Personality. 2006;74:1697–1720. doi: 10.1111/j.1467-6494.2006.00425.x.
58. Schoenwald SK, Chapman JE, Kelleher K, Hoagwood KE, Landsverk J, Stevens J, et al. A survey of the infrastructure for children’s mental health services: Implications for the implementation of empirically supported treatments (ESTs). Administration and Policy in Mental Health. 2009;35:84–97. doi: 10.1007/s10488-007-0147-6.
59. Sexton TL, Chamberlain P, Landsverk J, Ortiz A, Schoenwald SK. Action brief: Future directions in the implementation of evidence-based treatment and practices in child and adolescent mental health. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37:132–134. doi: 10.1007/s10488-009-0262-7.
60. Siqueland L, Crits-Christoph P, Barber JP, Butler SF, Thase M, Najavits L, Onken LS. The role of therapist characteristics in training effects in cognitive, supportive-expressive, and drug counseling therapies for cocaine dependence. Journal of Psychotherapy Practice and Research. 2000;9:123–130.
61. Stokes TF, Baer DM. An implicit technology of generalization. Journal of Applied Behavior Analysis. 1977;10:349–367. doi: 10.1901/jaba.1977.10-349.
62. Stumpf RE, Higa-McMillan CK, Chorpita BF. Implementation of evidence-based services for youth: Assessing provider knowledge. Behavior Modification. 2009;33:48–65. doi: 10.1177/0145445508322625.
63. U.S. Department of Education. Twenty-third annual report to Congress on the implementation of the Individuals with Disabilities Education Act. Washington, DC: Author; 2001.
64. U.S. Public Health Service. Report of the Surgeon General’s Conference on Children’s Mental Health: A national action agenda. Washington, DC: Department of Health and Human Services; 2000.
65. Walker SC, Kerns SEU, Lyon AR, Bruns EJ, Cosgrove T. Impact of school-based health center use on academic outcomes. Journal of Adolescent Health. 2010;46:251–257. doi: 10.1016/j.jadohealth.2009.07.002.
66. Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, et al. Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth. Archives of General Psychiatry. 2012;69:274–282. doi: 10.1001/archgenpsychiatry.2011.147.
67. Weisz JR, Doss AJ, Hawley KM. Evidence-based youth psychotherapies versus usual care: A meta-analysis of direct comparisons. American Psychologist. 2006;61:671–689. doi: 10.1037/0003-066X.61.7.671.
68. Woltmann EM, Whitley R, McHugo GJ, Brunette M, Torrey WC, Coots L, Drake RE. The role of staff turnover in the implementation of evidence-based practices in mental health care. Psychiatric Services. 2008;59:732–737. doi: 10.1176/ps.2008.59.7.732.
69. Wykes T, Stevens W, Everitt B. Stress in community care teams: Will it affect the sustainability of community care? Social Psychiatry and Psychiatric Epidemiology. 1997;32:398–407. doi: 10.1007/BF00788180.