Abstract
The current study evaluated why and how school mental health clinicians use standardized assessment tools in their work with youth and families. Quantitative and qualitative (focus group) data were collected prior to and following a training and consultation sequence as part of a trial program to assess school clinicians' (n = 15) experiences administering standardized tools to youth on their caseloads (n = 191). Findings indicated that, although assessment use was initially somewhat low, clinicians used measures to conduct initial assessments with the bulk of their caseloads (average = 62.2%) during the implementation period. Clinicians also reported on factors influencing their use of assessments at the client, provider, and system levels; perceived functions of assessment; student responses to assessment use; and use of additional sources of clinically-relevant information (primarily educational data) for the purposes of assessment and progress monitoring. Implications for the contextual appropriateness of standardized assessment and training in assessment tools are discussed.
Keywords: assessment, training, implementation, school mental health
Introduction
Relative to the quickly growing literature on the dissemination and implementation of evidence-based treatments (McHugh & Barlow, 2010), little research has examined the incorporation of evidence-based assessment (EBA) methods and processes into routine clinical practice (Jensen-Doss, 2015; Mash & Hunsley, 2005). The use of standardized assessment (SA) tools has been identified as the cornerstone of EBA (Jensen-Doss & Hawley, 2010). SA tools may be used either at initial assessment or for monitoring client outcomes over time, often with feedback to youth or caregivers (Borntrager & Lyon, 2015). Outcome monitoring with SA is becoming increasingly central to service delivery, and a growing body of research has supported the role of SA and outcome monitoring in improving the effectiveness of services for both adults (Lambert et al., 2003) and youth (Bickman, Kelley, Breda, de Andrade, & Riemer, 2011). Reviews have indicated that SA is especially useful for identifying client deterioration or therapy non-response (e.g., Carlier et al., 2012; Lambert et al., 2003) and that many consumers value routine outcome measurement in the services they receive (Guthrie, McIntosh, Callaly, Trauer, & Coombs, 2008).
Although clinicians report valuing SA information (Bickman et al., 2000), routine collection of this information is uncommon in community-based service delivery (Garland, Kruse, & Aarons, 2003). Hatfield and Ogles (2004) identified myriad reasons why providers do not use SA measures, including logistical problems, lack of perceived usefulness, concerns about how a measure might be used (e.g., misinterpretation), and inadequate training in how to integrate the measures into their practice. Additional research suggests that treatment-related factors (e.g., ability to determine client strengths and weaknesses), practical reasons (e.g., takes too much time), and attitudes about the practicality of SA are also important variables influencing use (Hatfield & Ogles, 2007; Jensen-Doss & Hawley, 2010). The relative importance of reasons for use or nonuse varies based on work setting, theoretical orientation, and source of payment, suggesting potential differences across service sectors (Hatfield & Ogles, 2007). Given the established clinical utility of SA tools, increased use has been identified as a key quality improvement target (Lyon, Dorsey, Pullmann, Silbaugh-Cowdin, & Berliner, in press; Scott & Lewis, 2015), underscoring the need to better understand and overcome barriers to use. Although a number of studies have identified the influence of clinician attitudes and knowledge on SA tool use in general (e.g., Jensen-Doss & Hawley, 2010; 2011), contexts can vary in the extent to which new practices are valued and supported (Aarons, Sommerfeld, & Walrath-Greene, 2009). Context-specific research is therefore important if the goal of increasing clinician use of SA tools is to be realized.
Routine assessment and monitoring of client outcomes using SA tools is particularly applicable to the domain of school mental health (SMH) due to the current emphasis on Response-to-Intervention (RtI) models of educational intervention (Lyon, Borntrager, Nakamura, & Higa-McMillan, 2013). RtI is characterized by an explicit focus on collecting and using data to drive intervention decisions (i.e., the need to adapt or maintain interventions) (Bradley, Danielson, & Doolittle, 2007). Despite this high degree of contextual appropriateness, little is known about the use of SA in the SMH context. In one of the only existing studies, a national survey of SMH providers, Connors and colleagues (2015) identified difficulties reaching parents as the most commonly endorsed barrier to assessment use. Other barriers potentially unique to schools included clinician difficulty obtaining/scoring measures, lack of training in interpretation of assessment findings, and a lack of available assessment-related supervision. Additionally, even though they practiced in the same setting as teachers, clinicians reported difficulties obtaining completed assessments from them. Facilitators of use included recognition that assessments supported tracking clinical progress and communication with service recipients and other providers. Outside of the work by Connors et al., very little is known about how SMH providers incorporate SA into their work or what factors make it easier or more difficult to do so. Because provider attitudes and knowledge/skill have emerged as influential factors in the uptake and use of innovative practices (Aarons et al., 2009; Borntrager, Chorpita, Higa-McMillan, & Weisz, 2009; Higa & Chorpita, 2008; Stumpf, Higa-McMillan, & Chorpita, 2009), research on the use of SA in SMH should attend closely to these variables.
The current paper reports on findings from a study designed to evaluate why and how SMH clinicians use SA tools in their work with youth and families. The project represents the most recent step in an initiative focused on quality improvement in schools within a large urban district in the Pacific Northwest. The initiative is the product of a longstanding collaboration between the local Department of Public Health, the School District, and local researchers who also serve as clinical consultants. The primary focus of this initiative is on enhancing the services provided by mental health clinicians working in school-based health centers (SBHCs) in middle schools and high schools throughout the district. SBHCs are a widespread model for education sector healthcare delivery with a proven track record of reducing disparities in health service accessibility (Gance-Cleveland & Yousey, 2005; Walker, Kearns, Lyon, Bruns, & Cosgrove, 2010). Previous research has found that depression and anxiety are two of the most common mental health problem areas for which students seek, or are referred to, middle school and high school SBHC services (Lyon, Charlesworth-Attie, Vander Stoep, & McCauley, 2011; Walker et al., 2010), which informed the selection of the SA tools that were the focus of the current study.
This study was carried out in the context of a brief training and consultation program focused on SA and executed over a one-month period at the beginning of the academic year. SBHC clinicians were introduced to two SA tools (one focused on depression and the other on anxiety; see Measures) and asked to administer them to all the youth on their caseloads during the implementation period. Supported by training and consultation, this piloting of the assessment tools allowed providers to report on the feasibility and appropriateness of SA based on first-hand experience, rather than on abstract opinions of assessment tools they had never seen or used. The study was designed to address the following research questions using a combination of qualitative and quantitative methods: (1) How frequently and in what ways do providers use SA tools in their work? (2) What factors (e.g., attitudes and skill) positively or negatively influence providers' use of SA? (3) To what extent are standardized measures of depression and anxiety feasible and appropriate to use with the population of students receiving SMH services? and (4) What additional types of information or information sources (other than SA) do providers find most useful for student assessment and clinical decision making, and how are these sources used?
Method
The current investigation used mixed qualitative and quantitative data collection and analysis to investigate school-based clinicians’ attitudes toward and experiences using SA tools. Quantitative and qualitative data were collected independently and then integrated during the analysis phase. Due to the exploratory nature of the study, qualitative data were prioritized within the “quan + QUAL” design articulated by Palinkas and colleagues (2011). Mixed methods analyses were conducted for the purposes of data complementarity (i.e., using multiple means to answer different components of a larger question) as well as elaboration (i.e., using qualitative data to provide a greater depth of understanding to quantitative results) (Palinkas et al., 2011). All study procedures were conducted with approval from the local institutional review board and all participants completed standard consent forms.
Participants and Setting
Fifteen SBHC providers (out of 17 invited) participated in the current study. Participants were 87% female and 87% Caucasian (two clinicians were Asian American) and were employed by community mental health service organizations to work in school-based clinics. Most participants had a master's degree in social work, education, or counseling (one had a PsyD). Most providers were the sole dedicated mental health provider at their school, and three also worked as supervisors for the school-based providers within their agencies. All providers were embedded in the schools in which they worked for the entirety of the time they spent working for their respective employers. The providers had permanent offices within SBHCs and worked closely and collaboratively with all school-based staff (i.e., school nurses, teachers, counselors, and administrators). Five providers worked in middle schools and the rest (n = 10) worked in high schools. Through a partnership between the local university, public health department, public school district, and community health service organizations, the university investigators had been providing some form of training and consultation in evidence-based practice to a subset of the participating providers for over five years at the time the current study was initiated. During the year in which the study was conducted, the schools in which these 12 clinicians worked had an average enrollment of 1118 students (range: 365–1721) and served populations that were, on average, 72% nonwhite (range: 38–98%), with 54% eligible for free and reduced-price lunch (range: 19–83%).
Procedures
Data for the current study were collected along the following schedule: (1) At the end of the academic year (late spring/early summer) that preceded the fall training in SA (see below), clinicians completed a quantitative measure of SA tool use. (2) In the subsequent fall, prior to SA training, quantitative clinician SA attitude and skill data were collected. (3) Throughout the month following initial training (September), quantitative data about youth symptoms and presenting problems (using SA measures and administrative data) were collected in the context of SBHC service delivery. (4) Finally, qualitative focus groups were conducted following the conclusion of the September implementation period. Of the 15 clinicians recruited, 13 provided data for #1 above, 15 provided data for #2, 12 provided data for #3, and 15 provided data for #4.
Training & consultation
Following measure completion, clinicians participated in a half-day training focused on general assessment principles and, specifically, assessment and feedback procedures using two assessment tools: the Patient Health Questionnaire (PHQ-9; Spitzer, Kroenke, & Williams, 1999) and the Multidimensional Anxiety Scale for Children – 10-item version (MASC-10; March, Sullivan, & Parker, 1999). Strategies employed to promote SA tool use included (1) active initial training, (2) post-training consultation, and (3) chart review with a small incentive for achieving an adequate level of administration. Initial training included active discussion of participants' prior experiences using SA tools, didactic presentation of information about assessment processes (e.g., tool-specific administration, scoring, interpretation, feedback), and role plays using measure administration and feedback scripts provided by the trainers. Immediately following training, providers were asked to voluntarily administer the PHQ-9 and MASC-10 to all of the students they saw clinically (new or returning cases) for a period of one month and to provide assessment-based feedback (henceforth referred to as the implementation period). In the month following the training, two one-hour consultation calls were held to discuss the integration of SA into practice and troubleshoot measure administration and feedback; clinicians were encouraged to attend these calls on an "as needed" basis. They were also instructed to track their administration of the tools on a Microsoft Excel spreadsheet using ID numbers instead of student names. To further promote SA use, clinicians were informed that the data they collected would be compared to their de-identified service records to determine the percentage of their clients to whom they were able to administer a SA measure, and that those who administered assessments to 60% or more of their clients in one of their first three sessions would be given a $10 gift card for a local coffee chain.
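As a concrete illustration of this administration check, the following is a minimal sketch (in Python) of how an administration rate might be computed by matching a clinician's spreadsheet IDs against service-record IDs. The function name, data structures, and example IDs are hypothetical illustrations, not drawn from the study's actual records; only the 60% criterion comes from the text.

```python
# Minimal sketch, assuming both the clinician's Excel log and the de-identified
# service records key students by ID. All names and IDs here are hypothetical.

def administration_rate(logged_ids: set[str], caseload_ids: set[str]) -> float:
    """Proportion of the service-record caseload with a logged SA administration."""
    return len(logged_ids & caseload_ids) / len(caseload_ids)

logged = {"S01", "S02", "S05", "S07"}           # IDs from the clinician's spreadsheet
caseload = {"S01", "S02", "S03", "S05", "S07"}  # IDs from de-identified service records

rate = administration_rate(logged, caseload)
print(rate, rate >= 0.60)  # 0.8 True -> meets the incentive criterion described above
```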
Focus group procedures
At the conclusion of the one-month measure implementation period, clinicians participated in focus groups designed to gather information about their workflow, access to and use of technology in their jobs, as well as perceptions about and use of assessments prior to and during the September implementation period. Due to the number of participants, two focus groups of approximately equal size were held simultaneously. Three researchers facilitated the focus groups, one per group with the third floating between the two. During the focus groups, facilitators encouraged all participants to speak and worked to ensure that individual members did not dominate the discussion. Following the two separate focus groups, all participating clinicians were brought back together to discuss their experiences during the implementation period more explicitly. The entire discussion process lasted 2 ½ hours. For the current project, only responses relevant to the use of SA and other relevant information sources in practice were explored (see Measures). Three practitioners were unavailable to participate in the in-person focus groups and, instead, responded to the same set of questions over the phone.
Measures
Clinician and student demographics
Demographic information was collected from all SBHC clinicians including gender and ethnicity. In addition, clinicians provided the age, gender, and ethnicity for all students who were administered SA tools during the month of September.
Current Assessment Practice Evaluation (CAPE)
The CAPE (Lyon et al., in press) is a brief, behaviorally-focused, four-item measure of clinician ratings of SA use across different phases of intervention (e.g., at intake, ongoing during treatment, at termination). Items capture the use of SA tools and associated EBA processes (e.g., incorporation of assessment results into treatment planning, provision of SA-based feedback to children/families) and are scored on a 4-point scale (None, Some [1–39%], Half [40–60%], Most [61–100%]). Referent time frames vary across CAPE items with three items rated over the past week (e.g., administered a SA tool) and one item rated over the past month (administered SA tools at intake) due to fewer theoretical opportunities to complete intakes. Providers are also able to indicate that they did not conduct any intakes over the previous month, in which case the intake item is not counted toward their total score. The CAPE has previously been found to demonstrate acceptable inter-item reliability (α = .72) and sensitivity to change in a larger study of community clinicians following a training that emphasized evidence-based assessment in the context of a common elements psychotherapy (Lyon et al., in press). As stated above, because clinicians had no opportunity to use SA tools over the summer, the CAPE was administered in the late spring of the preceding academic year.
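To make the scoring rule concrete, here is a minimal sketch assuming the CAPE total is the mean of the applicable items on the 1–4 scale (None = 1, Some = 2, Half = 3, Most = 4), with the intake item dropped when no intakes occurred, consistent with the 1–4 possible range reported in Table 1. The function and variable names are our illustration, not part of the published measure.

```python
# Illustrative CAPE total scoring under the assumptions stated above.
from statistics import mean
from typing import Optional

def cape_total(weekly_items: list[int], intake_item: Optional[int]) -> float:
    """weekly_items: three past-week ratings (1-4);
    intake_item: past-month intake rating (1-4), or None if no intakes occurred."""
    items = list(weekly_items)
    if intake_item is not None:  # intake item excluded when no intakes were conducted
        items.append(intake_item)
    return mean(items)

# A clinician who administered SA at most intakes (4) but rarely in the past week:
print(cape_total([1, 2, 1], 4))  # (1 + 2 + 1 + 4) / 4 = 2
```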
Attitudes toward Standardized Assessment Scales (ASA)
The ASA (Jensen-Doss & Hawley, 2010) is a 22-item measure of clinician attitudes about using SA in practice. Items are rated on a 1 to 5 scale ("Strongly Disagree" to "Strongly Agree") and load onto three subscales: Benefit over Clinical Judgment, Psychometric Quality, and Practicality. Psychometrics were originally established for the ASA using a national sample of 1442 mental health professionals; that sample was 61.8% female and 90.5% Caucasian and included clinicians at the masters and doctoral levels (Jensen-Doss & Hawley, 2010). All subscales were found to demonstrate good psychometrics, and higher ratings on all subscales have been associated with a greater likelihood of SA use; however, only the Practicality subscale was found to be independently predictive of SA use. In the current project, the ASA was administered immediately preceding the SA training/consultation phase.
Clinician-rated assessment skill
Study clinicians reported on their understanding of and skill using SA measures on a five-point Likert-style scale ranging from 1 ("Minimal") to 5 ("Advanced"). Items address selection of clients for administration as well as tool selection, administration, scoring, interpretation, integration into treatment, feedback, and progress monitoring. In the current sample, the scale had good internal consistency (Cronbach's α = .85). Items were averaged to create a total assessment skill score.
Semi-structured qualitative focus group protocol
Qualitative data were collected via semi-structured focus groups, which followed a standard outline. Questions evaluated in the current study focused on assessment use in SBHCs (e.g., “What are the challenging aspects of standardized assessment use?” “Other than standardized measures, how do you assess student progress related to treatment?”) and their experiences during the September implementation period (e.g., “How easy or difficult was it to administer measures to all of your cases?”).
Patient-Health Questionnaire (PHQ-9)
The PHQ-9 (Spitzer et al., 1999) is a widely used, 9-item measure of depression symptoms. Although the measure has been used most commonly with adults, research has supported the validity of the PHQ-9 with adolescents (Richardson et al., 2010) and identified an optimal cut point of 11 for detecting the presence of Major Depressive Disorder in that population.
Multidimensional Anxiety Scale for Children – 10 item (MASC-10)
The MASC-10 (March et al., 1999) is a brief version of the full MASC intended for feasible screening and progress monitoring. It uses a subset of the items comprising the larger measure to produce a single, norm-referenced score that indicates the severity of anxiety problems. Raw scores are converted to T scores, based on client age.
Administrative database review
Clinical service records were reviewed to gather two pieces of information. First, as one indicator of SA feasibility, de-identified service records were used to determine the percentage of each provider’s total caseload that was administered a SA measure. Second, to help determine the appropriateness of the selected SA tools for the students to whom they were administered, those students’ DSM-IV diagnoses were reviewed across all of their visits for the academic year in which the study was conducted. Diagnoses (including adjustment disorders) were combined within diagnostic category for analysis (e.g., mood disorder, anxiety disorder, disruptive behavior disorder).
Data Analysis
Total scores were calculated for the CAPE, SA Skill, and the ASA subscales. Clinicians calculated total scores for the PHQ-9 and MASC-10 (and converted MASC-10 raw scores to T scores) prior to submitting that information to research staff. Quantitative data were summarized descriptively for all participants for integration with qualitative codes within a mixed methods approach (Palinkas et al., 2011).
Focus groups were audio recorded, transcribed, and then coded using conventional content analysis (Hsieh & Shannon, 2005) and qualitative coding software (ATLAS.ti; Muhr, 2004). Coding was conducted by four trained coders, who reviewed the transcripts initially and then met to identify potential codes. An initial codebook was developed, trialed, and revised through discussion over subsequent transcript reviews. Three major iterations of the codebook were trialed prior to arriving at a stable set of qualitative codes. Next, all four reviewers independently coded (or re-coded) all of the transcripts and met to compare their coding using a consensus process in which raters arrived at consensus judgments through open dialogue (DeSantis & Ugarriza, 2000; Hill, Thompson, & Nutt Williams, 1997; Hill, Knox, Thompson, Nutt Williams, & Hess, 2005). Consensus coding is designed to circumvent some researcher biases and reduce groupthink while being more likely to capture data complexity and avoid errors. This process yielded six codes related to the function of SA; sixteen related to influences on assessment use (organized by the level at which they were most influential); as well as four other categories of responses, including comments about specific assessment tools, implementation period experiences, student responses to SA, and additional clinically-relevant information sources. In the results below, specific code names are indicated in italics.
Results
Table 1 displays descriptive information for all quantitative variables and Table 2 presents the resulting qualitative code hierarchy and descriptions. Below, we address each research question using a combination of these data. Quantitative data are described first and qualitative data obtained from focus groups are used to lend insight into patterns of SA use previously reported by providers. For Research Question 1, the quantitative (CAPE) data presented reflect the 13 providers who were able to participate in the spring preceding the implementation period. For Research Question 3, PHQ-9 and MASC-10 data were available for students on the caseloads of the 12 providers who supplied that information.
Table 1.
Descriptive Statistics for Quantitative Study Variables
| Variable | Possible range | Mean (SD) |
|---|---|---|
| Standardized assessment total use (CAPE) 1 | 1–4 | 2.19 (0.42) |
| CAPE item 1: Percentage of intake clients administered SA in last month | 1–4 | 3.17 (0.93) |
| CAPE item 2: Percentage of caseload administered SA in last week | 1–4 | 1.85 (0.38) |
| CAPE item 3: Percentage of clients given feedback from SA during last week | 1–4 | 2.31 (1.03) |
| CAPE item 4: Percentage of clients changed treatment plan based on SA scores in last week | 1–4 | 1.54 (0.54) |
| Benefit over clinical judgment2 | 1–5 | 3.24 (0.69) |
| Psychometric quality2 | 1–5 | 3.90 (0.39) |
| Practicality2 | 1–5 | 3.07 (0.53) |
| Standardized assessment (SA) skill2 | 1–5 | 3.34 (0.72) |
| Skill item 1: Selecting individuals for administration | 1–5 | 3.37 (0.90) |
| Skill item 2: Selecting SAs | 1–5 | 3.10 (0.89) |
| Skill item 3: Administering SAs | 1–5 | 3.23 (0.78) |
| Skill item 4: Scoring SAs | 1–5 | 3.23 (0.94) |
| Skill item 5: Interpreting SA results | 1–5 | 3.33 (0.82) |
| Skill item 6: Setting treatment goals based on SA | 1–5 | 3.37 (0.98) |
| Skill item 7: Providing SA-based feedback | 1–5 | 3.53 (0.92) |
| Skill item 8: Building engagement using SA | 1–5 | 3.60 (1.12) |
| Skill item 9: Using SA to monitor progress | 1–5 | 3.20 (0.94) |
| Skill item 10: Adjusting treatment plan | 1–5 | 3.40 (0.91) |
1 n = 13; 2 n = 15.
Table 2.
Qualitative Codes and Descriptions
| Code | Brief Description |
|---|---|
| Assessment Function | Ways that providers use standardized assessments or find them useful |
| Feedback | Facilitates feedback/dialogue with students. |
| Elicitation | Opportunities to get information that students might not otherwise disclose. |
| Validation | Normalizing or validating student experiences. |
| Triage | Determinations about severity and symptoms, whether additional services are indicated, etc. |
| Structure | Provides a structure for the session and treatment planning. |
| Monitoring | Use of assessments to track client progress over time. |
| Assessment Use | Reported influences on whether or not providers use standardized assessment tools. |
| Client | Client-level influences. |
| Diagnosis | Assessment relevant to accurate diagnosis / differential diagnosis process. |
| Crisis | Client crisis / major clinical issue interfered with administration. |
| Nonclinical | Client is presenting with a nonclinical issue. |
| Engagement | Concern that using assessment will decrease engagement. |
| Reading Level | Client reading level is below assessment tool requirements. |
| Language/Culture | Mismatch between measure and client English proficiency or culture of origin. |
| History | Negative attitudes about assessments, based on past treatment experience. |
| Repetitive | Progress monitoring does not yield new information / is repetitive. |
| Time | Assessments take too much time away from the services clients require. |
| Provider | Provider-level influences. |
| Knowledge | Insufficient knowledge, exposure, skill, training in the use of standardized tools. |
| Attitudes | General orientation / attitudes toward assessment use or nonuse. |
| System/Policy | System, organizational, or policy influences. |
| Expectation | Organizational expectations or mandates surrounding assessment use. |
| Culture of Use | Informal norms surrounding assessment use within organization or clinical team. |
| Technology | Assessments integrated into technologies (e.g., electronic health records). |
| General | General / uncategorized influences on use. |
| Cost | Cost of assessments. |
| Acquisition | Ease or difficulty of acquiring assessment tools (other than cost). |
| Assessment Tools | Comments about specific tools, aspects of tools (e.g., item wording), or desired characteristics of tools that may or may not exist. |
| Implementation Period | Specific comments about clinician experiences during the September implementation period or its influences. |
| Student Response | Reported student response to the use of assessments. |
| Information Sources | Valuable information sources or assessment domains other than standardized tools. |
RQ1: How frequently and in what ways do providers use SA tools in their work?
Results from the CAPE assessment yielded an average score of 2.19 (SD = 0.42), suggesting that, on average, clinicians engaged in the SA behaviors measured with less than half of their caseloads. Clinicians reported that they were most likely to administer SA tools at intake (Item 1) and least likely to make adjustments to their treatment plans based on the results of SA data (Item 4) (see Table 1).
Participant descriptions from the focus groups conducted following the September implementation period suggested low levels of assessment use prior to the SA training and consultation (e.g., "I rarely gave them, only if I…needed it to pass on to make a referral"). Nevertheless, clinicians spent considerable time describing the ways in which they used assessment tools in their clinical roles and how they found them to be most useful (assessment function). Consistent with the findings from the specific CAPE items, most comments about assessment tool use focused on their utility in identifying problems initially (feedback, elicitation, validation, triage, structure), with a smaller subset referring to tracking change over time (monitoring). Key aspects of a subset of these codes are highlighted below.
Although CAPE scores indicated that clinicians, on average, reported engaging in feedback somewhat less often than they administered SA measures, during the focus groups most clinicians indicated strong beliefs that providing feedback to youth gave SA tools added value, serving primarily as “a good conversation starter,” especially “with a [new] student that you haven’t quite met.” One of the most important functions identified and discussed at length by clinicians was the opportunity afforded by SA tools to engage in triage processes and treatment planning, making determinations about case severity and identifying potential referrals. This included, “confirmation [that] we are focusing on the right areas” and “giv[ing] myself some idea of…how worried I should be.” Some suggested that SA might identify the need for an outside provider. On the opposite end, one provider openly wondered if using SA tools was “prejudicing me too much…I could kind of like skate over things a little,” perhaps missing other important indicators.
Multiple comments were also made about monitoring client progress over time. Sometimes, this simply involved repeated assessment before and after major transitions, such as in June and September, and using SA “as a benchmark” to identify change at a later point. Overall, monitoring over time seemed to add another layer of depth and importance to feedback/discussion processes (“go back and look at them…and look at…how they fluctuated over time…hear their observations about what they remember from that time”). Comments spanning many of the codes listed above also revealed a common practice in which providers tended to focus on individual items for interpretation or monitoring (“just the [items] that are …significantly high”). This was particularly true in cases where students did not demonstrate clinically-significant total scores on a particular measure (“I use it as scaling…because most of my kids are preventative”).
RQ2: What factors influence provider use of standardized assessments?
RQ2 involved an evaluation of provider attitudes and skill associated with SA use and an exploration of additional variables potentially relevant to SA use that may be less well represented in the literature. ASA (i.e., SA attitudes) data collected prior to the training yielded subscale means that allowed for cautious comparison to national norming samples (Jensen-Doss & Hawley, 2010; see Measures) and a prior community-based study (Lyon et al., in press). Although the study sample size did not allow for tests of statistical significance, the ratings in the current project for Benefit over Clinical Judgment, Psychometric Quality, and Practicality indicated moderately positive attitudes toward SA overall (i.e., average ratings of "neutral" or better across subscales). Furthermore, SA skill scores indicated that providers generally rated themselves as moderately skilled in all areas (total mean score = 3.34, SD = .72). The highest ratings were given to providing feedback to clients (mean = 3.53; SD = .92) and building client engagement (mean = 3.60; SD = 1.12), and the lowest to selecting SA tools (mean = 3.10; SD = .89) and using SA tools to monitor treatment progress (mean = 3.20; SD = .94).
In focus groups, clinicians explicitly mentioned several factors that positively or negatively influenced their use of assessment tools. Coded data were categorized as representing influences at the client, provider, or broader system and policy levels, although two general codes were left uncategorized (see Table 2 for a complete list of these codes). Across all system levels, codes were particularly reflective of constructs found in the Practicality subscale of the ASA, and nearly all of these were framed as barriers to use. At the client level, practicality-relevant codes included comments about student reading level, the fit of the tools with students’ language/culture, as well as the time required to administer assessments for each student (e.g., “I feel like there's just not enough time and that I end up having to choose whether I want to do this.”). At the level of the provider, the majority of comments focused on the practical issue of whether providers had received sufficient training in or exposure to SA tools to allow them to build a sufficient knowledge base in their effective use (e.g., “I wish somebody had done a lot more training on standardized assessments with me a long time ago because…I've never really felt confident using them.”). Clinicians also reflected on the high cost of some assessment tools, as well as more general barriers to access (acquisition).
Outside of practical issues, most discussion focused on the role of client presentations in dictating whether or not clinicians used SA tools. This included selecting tools to address particular client diagnoses (“if somebody comes in and they’re talking about depression…whip out the Children’s Depression Inventory”) or using tools to inform differential diagnosis processes in situations where clinical presentations were ambiguous. In addition, a number of comments suggested clinicians viewed use of the assessment measures as inappropriate in times of client crises (“one was an overdose, one was a runaway, one was a sexual assault”) or that provider caseloads contained a number of nonclinical or low severity cases for whom SA measures were inappropriate (“a lot of times on my caseload….I’m kind of just doing more case management”). Although less prominent in the discussion, some providers also indicated that they sometimes made administration decisions based on perceptions of how SA administration might impact engagement or client reports of previous negative experiences with assessments (i.e., history).
There were fewer comments at the broader system level, but those made indicated barriers such as the absence of official expectations or mandates surrounding use or the lack of an informal culture of use in organizations ("there is no culture in my little clinical group about using it"). The only facilitating factor identified at this level came from one organization that had embedded an assessment tool in its electronic health record (technology): "The PHQ-9…we've even had installed in NextGen for a little bit. So that's something that's a little easier and also easier to provide feedback on."
RQ3: To what extent are standardized measures of depression and anxiety feasible and appropriate to use with the population of students receiving SMH services?
The third research question evaluated the applicability of the two measures selected for administration to the population of students receiving SMH care from the group of clinicians (n = 12) participating in the one-month implementation trial. A total of 293 individual students were seen in the month of September (mean of 24.4 per clinician). Clinician spreadsheets indicated that PHQ-9 and/or MASC-10 forms were administered and scored for 191 unique students in one of their first three sessions of the year, an average of 15.92 students per clinician (range: 2–33). If students were administered multiple forms over multiple sessions, only the first set of forms was included in the total. Collectively, participating clinicians administered measures to 65.2% of all the students treated in their respective SBHCs during the month of September. Individual clinician SA administration percentages ranged from 43.3% to 100% (the average level of administration was 62.2%).
Students who were administered one or both measures were an average of 15.7 years old (range 12–19) and 77.1% female. Student race/ethnicity was 35% Black/African American, 21.2% Caucasian, 22.4% Asian or Pacific Islander, 11.5% Hispanic/Latino, and 9% multiethnic or other. Diagnosis data from service records for participating students indicated that, over the academic year, 24.4%, 35.4%, and 7.3% of those students experienced a problem with anxiety, mood, or disruptive behavior, respectively. Clinicians were far more likely to administer measures in the first session than in a subsequent session, with 89.8% of the PHQ-9 and 90.4% of the MASC-10 forms given during the first contact with a student. Severity scores based on PHQ-9 raw scores and MASC-10 T scores indicated a relatively mild level of depression and anxiety symptoms. Although the mean PHQ-9 score (9.35, SD = 5.92) was below the cut point of 11 – identified by Richardson and colleagues (2010) as optimal for detecting major depression among adolescents – 36.9% of the sample had scores of 11 or above. On the MASC-10, the average T score was 51.53 (SD = 13.1), which is within the average range identified by March (1997). In addition, 40.6% of students met the cutoff for mild or sub-threshold anxiety or above (55+), 18.9% for moderate anxiety or above (65+), and 14.3% for severe anxiety (70+). Overall, 42.4% of the students met the cutoff of at least an 11 on the PHQ-9 or a 65 on the MASC-10, suggesting the need for some degree of follow-up.
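The follow-up criterion described above amounts to a simple decision rule. The sketch below encodes the cited thresholds (PHQ-9 ≥ 11; MASC-10 T-score bands at 55, 65, and 70); the function names and the example data are our own illustration, not study code or real student scores.

```python
# Illustrative encoding of the cutoffs reported in the text; data are hypothetical.

def needs_followup(phq9_total: int, masc10_t: float) -> bool:
    """Flag a student when either measure meets the cited cutoff."""
    return phq9_total >= 11 or masc10_t >= 65

def anxiety_band(masc10_t: float) -> str:
    """MASC-10 T-score severity bands referenced in the text."""
    if masc10_t >= 70:
        return "severe"
    if masc10_t >= 65:
        return "moderate or above"
    if masc10_t >= 55:
        return "mild/sub-threshold or above"
    return "within average range"

# Hypothetical (PHQ-9 raw, MASC-10 T) pairs for three students:
students = [(9, 52.0), (12, 48.0), (8, 67.5)]
flagged = [s for s in students if needs_followup(*s)]
print(len(flagged) / len(students))  # proportion flagged, analogous to the 42.4% above
```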
Some comments from focus groups allowed for additional information gathering about the feasibility and appropriateness of the specific assessment tools used and mirrored the client-level assessment use codes reported above. Providers made comments about the specific wording (e.g., “double negatives”) of particular items, which the providers felt they had to help students interpret. They also expressed a concern that some items could be misinterpreted or endorsed for normative reasons unrelated to the major construct the measure was designed to address (“I had a kid give me a very practical reason for why he makes sure things are safe when he walks to school…and I don’t think that’s anxiety”). Clinicians also made comments about the developmental appropriateness of the PHQ-9 for middle school students (“there were a couple of questions on the PHQ-9 which were confusing to them”) and the MASC-10 for high school students (“it’s very elementary”).
Specific comments about the SA implementation period were coded separately and included statements about the impact of the experience on providers’ assessment-related attitudes or behavior. Comments were generally quite positive and indicated that, as a result of the trial, clinicians saw the applicability of assessments in more situations than they had previously. Furthermore, providers noted that, despite early concerns about the perceived time or paperwork burden, using the assessment “wasn’t really time consuming.” Others explained that the experience had allowed them to build new and valuable assessment skills and expressed “regrets that…I didn’t do them sooner.”
Additional comments made by providers about student responses to SA administration and feedback were often – but not always – in reference to their pilot experience and almost universally positive. For example, one provider who worked primarily with middle school students indicated the importance of graphs and visual feedback (“certain kids that are really concrete thinkers and they get excited by seeing they’re getting better – like a graph is exciting”). Others referenced the impact of validation based on assessment results (discussed earlier) (“it just totally validated her experience, like ‘I’m not crazy, it’s just showing what I’m feeling.’”).
RQ4: What additional types of information or information sources (other than SA) do providers find most useful for student assessment and clinical decision making and how are these sources used?
The final research question was answered using qualitative data only. Both in response to facilitators' explicit questions about valuable information sources other than SA tools and spontaneously throughout the discussion, clinicians frequently referenced other forms of information that they used to assess, and sometimes monitor, client progress over time. Although the information sources mentioned included non-standardized or idiographic assessments, general impressions of social functioning, and reports from other healthcare providers, by far the most frequently mentioned sources related to educational (i.e., school or academic) data. These included attendance (e.g., "I get an attendance record if someone's having attendance issues"), homework completion ("some teachers will list all of the assignments and due dates [in an online system]…when the information's in there, it's really helpful"), and student grades. Some providers expressed a desire for increased access to educational data and indicated that, because they were not school employees, they experienced policy barriers (e.g., "FERPA won't let us"; "[it is] very hard to get that information, and it's unfortunate that it's so hard"). Although some providers resorted to obtaining educationally-relevant information directly from students, all acknowledged the difficulty of relying solely on students as respondents.
Discussion
This project was designed to contribute to the sparse literature on the use of SA among mental health clinicians working with children in schools and to identify factors that might facilitate changes in assessment practices. Prior to training and consultation in the use of two brief measures of depression and anxiety, school-based clinicians reported levels of SA attitudes and use that appear consistent with pre-training levels documented among community mental health providers (Lyon et al., in press). Following the SA training and implementation period, clinicians described a wide variety of ways in which they used SA (e.g., feedback, eliciting information from students, providing validation, triage, informing session/treatment structure, monitoring over time). Clinicians participating in the implementation project administered SA tools to over 60% of the students on their caseloads during a one-month period, indicating the potential feasibility of these tools. Below, we consider these findings in the context of two overarching themes that organize their implications: contextual appropriateness and implications for training.
Contextual Appropriateness of SA in Schools
The contextual appropriateness (a.k.a. compatibility) of new practices has become an increasingly important area of emphasis among researchers interested in closing the "research to practice gap" (Aarons et al., 2012; Chambers, Glasgow, & Stange, 2013). A study conducted by Lyon and colleagues (2014) found that school-based providers were most likely to describe practical components of appropriateness, and that nearly all responses focused on the provider and client levels, with little reference to factors at the school or larger organization levels. The current results paint a similar picture: clinicians identified fewer system- or organizational-level variables (e.g., organizational expectations, culture of use) relative to the frequently mentioned client or provider influences. It is unclear, however, whether this reflects a low level of organizational influence on SA use or whether such influences were simply less apparent to participants. Although focus group questions and prompts did not specifically ask about any system level, explicitly asking about organizational-level influences might have yielded a wider range of comments.
Although school-based service delivery affords certain opportunities for flexible and responsive care (e.g., increased service accessibility, direct clinician access to information about school functioning), it is also subject to unique constraints that appeared in clinician responses. Perhaps for this reason, many of the comments about contextual appropriateness suggested that the practicality of using SAs was particularly salient to participants. Among the practicality issues identified, clinicians indicated difficulties prioritizing SA given limited time to devote to student services within the school day. Indeed, SMH practitioners often experience considerable time constraints that interfere with their ability to deliver optimal services or participate in professional development initiatives (Lyon et al., 2013b). Additional practical issues included the variable fit of SA with client presentations (which either facilitated or inhibited use, depending on the circumstances), and the availability of SA training (discussed below). In the current study, youth who were administered the tools were ethnically diverse and most likely to present with depression or anxiety. Regarding the fit of the specific measures with client presentations, 42% of these students demonstrated notable symptoms of depression or anxiety on the tools, suggesting at least moderate appropriateness. Nevertheless, clinicians also expressed some concerns about specific items and item wording when using them with their clients. Concerns about respondent understanding of individual assessment questions have been previously identified as a barrier to SA use in SMH (Connors et al., 2015). Despite this, provider experiences with the current SA training and implementation initiative were overwhelmingly positive and clinicians reported that students were generally willing to engage in SA processes.
The current findings also provide some insight into how the appropriateness of assessments in general may be enhanced in SMH. The SA tools in this study were designed to assess specific clinical problems present in a subset of students and appeared to be appropriate for use with those students. In addition, most clinicians reported using educational data for idiographic assessment and monitoring (e.g., attendance, homework completion). Formal integration of school and academic data in SMH service delivery has been identified as a potentially fruitful pathway to enhancing the contextual fit of the services provided (Lyon et al., 2013a). In support of this, Connors and colleagues (2015) recently found that academic targets were more frequently used and rated as more useful by school-based clinicians than other types of assessment information (including standardized tools).
Implications for Training
Clinicians’ descriptions of their experiences with standardized tools indicate that many of them had minimal prior training in the use of SA. Evaluation of the impact of the training was not a primary goal of the current project, but qualitative comments suggest that SA training initiatives – and their associated opportunities to use SA tools in routine service delivery – may have the ability to shift clinician attitudes and reinforce the potential value of SA tools (“I wish somebody had done a lot more training on standardized assessments with me a long time ago”). Although the current results were based partially on a small-scale training initiative, conducted in a single district, there are a number of implications that may allow future SA trainings to be more acceptable and effective for practitioners working in schools or other community contexts.
First, the assessment functions identified by participating clinicians may provide an acceptable and contextually-appropriate structure for future trainings in SMH. The facilitation of information disclosure (i.e., elicitation), informing triage decisions, using assessments to drive individual session and overall treatment structure, feedback processes (including feedback-based validation), and monitoring over time provide a greater degree of specificity than many existing lists of assessment principles or purposes (e.g., Hunsley & Mash, 2008; Kazdin, 2005). The fact that these functions were derived from the focus group discussions suggests that they might reflect school-based clinicians' existing mental models of assessment and treatment processes. Given that the congruence of training content with clinicians' existing knowledge and experience may be an important factor when engaging clinicians in new training initiatives (Lyon, Stirman, Kerns, & Bruns, 2011), a training built around these processes may prove particularly successful.
Second, when discussing standardized measures and their use, clinicians frequently made comments focused on individual measure items rather than summary scores, which has practical implications for SA training. Given that only 42% of the students demonstrated clinically-significant or borderline levels of depression or anxiety symptoms, additional progress monitoring measures or approaches may be appropriate and may yield clinically meaningful information. Individual items that capture client presenting problems, even when the measure as a whole does not, essentially function as structured idiographic assessment targets that can be monitored over time. In light of research indicating that client-identified problem areas are likely to map onto individual items of SA tools (Weisz et al., 2012), an additional component of training could be a framework for carefully identifying individual SA items that youth endorse at a high level and find meaningful, and incorporating those items into treatment progress monitoring.
Relatedly, it is also essential that future training consider the full range of evidence-based assessment activities, including both initial assessment and progress monitoring. Although the current project focused primarily on initial assessment and feedback, tracking client outcomes over time is an equally important feature of EBA in schools (Borntrager & Lyon, 2015). Progress monitoring may include the repeated administration of SA tools or the tracking of idiographic (i.e., individualized) outcome targets as referenced above. Our focus groups also documented widespread support for the use of school and academic data indicators in clinical practice, which are compelling options for idiographic monitoring. Although relatively little attention has been paid to the most effective way to integrate these sources of information into service delivery, Lyon and colleagues (2013a) recently applied a model of data-driven clinical decision making (Daleiden & Chorpita, 2005) to the use of educational data (e.g., attendance, homework completion, disciplinary referrals) in school mental health. The authors noted that the successful application of such a model requires significant training and that uptake of such practices is unlikely to occur without structured, ongoing support.
Limitations
Limitations of the current study include the fact that the project was conducted in the context of only one type of SMH delivery model, SBHCs, in a single urban area. Given the small clinician sample size, quantitative results – and, in particular, the similarities or differences between clinician ratings and those in other studies – should be considered preliminary and interpreted with caution. In addition, the use of a modest incentive to promote practitioner use following training may make the findings less generalizable to settings in which no such incentive is provided. Furthermore, because of the nature of the focus groups, we were unable to distinguish statements made by middle school versus high school providers. Given that some comments were made about the developmental appropriateness of the measures selected, these two subgroups may have had different experiences implementing the assessment tools. Also, although focus groups provide opportunities for participants to build on one another's comments, generating information that is sometimes richer than individual interviews, the format may also make some participants uncomfortable voicing their opinions (Krueger & Casey, 2008), allowing more vocal participants to dominate the conversation or leading to group biases. To address this issue, the group facilitators worked to ensure balanced participation and to limit these dynamics.
Next, the design of the study, in which focus groups were conducted following the SA implementation period, made it difficult to determine the extent to which clinicians’ comments reflected their practice prior to the introduction of the PHQ-9 and MASC-10 or during implementation. Nevertheless, specific references to the project experience itself were coded separately and the benefits of creating a shared experience on which providers could base their responses were determined to be of greatest importance. Finally, as indicated earlier, some members of the research team had existing relationships with a subset of the study participants, as they had provided them with previous training. It is possible that this relationship may have affected participation and responses, either biasing comments to be more consistent with the researchers’ perspective or simply facilitating more open and honest feedback.
Conclusion
To contribute to the nascent literature and inform future quality improvement efforts, the current project was designed to identify the determinants and functions of SA use in SMH using mixed-methods data collected prior to and following a brief SA training and consultation program. Although many clinicians had limited SA experience at the beginning of the project, the majority of the participating providers used the two selected measures to conduct initial assessments with the bulk of their caseloads over a one month period. In semi-structured focus groups, clinicians reported on factors influencing their use of SA at the client, provider, and system levels; functions of assessment, including feedback, elicitation of information from youth, validation, triage, structure, and monitoring; student responses to SA administration and feedback; and additional sources of information (largely educational) that they found to be clinically relevant. The results of this project will be used to inform future training initiatives in assessment and progress monitoring for school-based providers. Improving the use of routine, structured assessments in SMH continues to hold considerable promise as a feasible target for quality improvement initiatives. Results suggest that such trainings may be most feasible and appropriate if they explicitly address practical concerns and include versatile applications of SA tools as well as idiographic assessment to address the wide range of client presentations.
Acknowledgments
This publication was made possible in part by funding from grant number K08 MH095939, awarded to the first author from the National Institute of Mental Health (NIMH). The authors would also like to thank the school-based mental health provider participants, Seattle Children’s Hospital, and Public Health of Seattle and King County for their support of this project.
Footnotes
Dr. Lyon is an investigator with the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through an award from the National Institute of Mental Health (R25 MH080916) and the Department of Veterans Affairs, Health Services Research & Development Service, Quality Enhancement Research Initiative (QUERI).
References
- Aarons GA, Green AE, Palinkas LA, Self-Brown S, Whitaker DJ, Lutzker JR, Chaffin MJ. Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implementation Science. 2012;7:1–9. doi: 10.1186/1748-5908-7-32.
- Aarons GA, Sommerfeld D, Walrath-Greene C. Evidence-based practice implementation: The impact of public versus private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implementation Science. 2009;4:83. doi: 10.1186/1748-5908-4-83.
- Bickman L, Rosof-Williams J, Salzer MS, Summerfelt WT, Noser K, Wilson SJ, Karver MS. What information do clinicians value for monitoring adolescent client progress and outcomes? Professional Psychology: Research and Practice. 2000;31(1):70–74.
- Bickman L, Kelley SD, Breda C, de Andrade AR, Riemer M. Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services. 2011;62:1423–1429. doi: 10.1176/appi.ps.002052011.
- Borntrager C, Lyon AR. Client progress monitoring and feedback in school-based mental health. Cognitive and Behavioral Practice. 2015;22:74–86. doi: 10.1016/j.cbpra.2014.03.007.
- Borntrager CF, Chorpita BF, Higa-McMillan C, Weisz JR. Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services. 2009;60:677–681. doi: 10.1176/ps.2009.60.5.677.
- Bradley R, Danielson L, Doolittle J. Responsiveness to intervention: 1997 to 2007. Teaching Exceptional Children. 2007;39:8–12.
- Carlier IV, Meuldijk D, Van Vliet IM, Van Fenema E, Van der Wee NJ, Zitman FG. Routine outcome monitoring and feedback on physical or mental health status: Evidence and theory. Journal of Evaluation in Clinical Practice. 2012;18(1):104–110. doi: 10.1111/j.1365-2753.2010.01543.x.
- Chambers D, Glasgow R, Stange K. The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science. 2013;8(1):117. doi: 10.1186/1748-5908-8-117.
- Connors EH, Arora P, Curtis L, Stephan SH. Evidence-based assessment in school mental health. Cognitive and Behavioral Practice. 2015;22:60–73.
- Daleiden E, Chorpita BF. From data to wisdom: Quality improvement strategies supporting large-scale implementation of evidence-based services. Child and Adolescent Psychiatric Clinics of North America. 2005;14:329–349. doi: 10.1016/j.chc.2004.11.002.
- DeSantis L, Ugarriza DN. The concept of theme as used in qualitative nursing research. Western Journal of Nursing Research. 2000;22(3):351–372. doi: 10.1177/019394590002200308.
- Gance-Cleveland B, Yousey Y. Benefits of a school-based health center in preschool. Clinical Nursing Research. 2005;14:327–342. doi: 10.1177/1054773805278188.
- Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: What’s the use? Journal of Behavioral Health Services & Research. 2003;30:393–405. doi: 10.1007/BF02287427.
- Garland AF, Hawley KM, Brookman-Frazee L, Hurlburt MS. Identifying common elements of evidence-based psychosocial treatments for children's disruptive behavior problems. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(5):505–514. doi: 10.1097/CHI.0b013e31816765c2.
- Guthrie D, McIntosh M, Callaly T, Trauer T, Coombs T. Consumer attitudes towards the use of routine outcome measures in a public mental health service: A consumer-driven study. International Journal of Mental Health Nursing. 2008;17(2):92–97. doi: 10.1111/j.1447-0349.2008.00516.x.
- Hatfield DR, Ogles BM. Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34:283–291. doi: 10.1007/s10488-006-0110-y.
- Hatfield DR, Ogles BM. The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice. 2004;35:485–491.
- Higa CK, Chorpita BF. Evidence-based therapies: Translating research into practice. In: Handbook of evidence-based therapies for children and adolescents. Springer US; 2008. pp. 45–61.
- Hill CE, Knox S, Thompson BJ, Nutt Williams E, Hess SA. Consensual qualitative research: An update. Journal of Counseling Psychology. 2005;52(2):196–205.
- Hill CE, Thompson BJ, Nutt Williams E. A guide to conducting consensual qualitative research. The Counseling Psychologist. 1997;25(4):517–572.
- Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research. 2005;15:1277–1288. doi: 10.1177/1049732305276687.
- Hunsley J, Mash EJ, editors. A guide to assessments that work. Oxford University Press; 2008.
- Jensen-Doss A. Practical, evidence-based clinical decision making: Introduction to the special series. Cognitive and Behavioral Practice. 2015;22:1–4.
- Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child and Adolescent Psychology. 2010;39:885–896. doi: 10.1080/15374416.2010.517169.
- Jensen-Doss A, Hawley KM. Understanding clinicians’ diagnostic practices: Attitudes toward the utility of diagnosis and standardized diagnostic tools. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(6):476–485. doi: 10.1007/s10488-011-0334-3.
- Kazdin AE. Treatment outcomes, common factors, and continued neglect of mechanisms of change. Clinical Psychology: Science and Practice. 2005;12(2):184–188.
- Krueger RA, Casey MA. Focus groups: A practical guide for applied research. 4th ed. Thousand Oaks, CA: SAGE; 2009.
- Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice. 2003;10:288–301.
- Lyon AR, Borntrager C, Nakamura B, Higa-McMillan C. From distal to proximal: Routine educational data monitoring in school-based mental health. Advances in School Mental Health Promotion. 2013a;6(4):263–279. doi: 10.1080/1754730X.2013.832008.
- Lyon AR, Charlesworth-Attie S, Vander Stoep A, McCauley E. Modular psychotherapy for youth with internalizing problems: Implementation with therapists in school-based health centers. School Psychology Review. 2011;40:569–581.
- Lyon AR, Dorsey S, Pullmann M, Silbaugh-Cowdin J, Berliner L. Clinician use of standardized assessments following a common elements psychotherapy training and consultation program. Administration and Policy in Mental Health and Mental Health Services Research. In press. doi: 10.1007/s10488-014-0543-7.
- Lyon AR, Ludwig K, Romano E, Leonard S, Vander Stoep A, McCauley E. "If it's worth my time, I will make the time": School-based providers' decision-making about participating in an evidence-based psychotherapy consultation program. Administration and Policy in Mental Health and Mental Health Services Research. 2013b;40:467–481. doi: 10.1007/s10488-013-0494-4.
- Lyon AR, Ludwig K, Romano E, Koltracht J, Vander Stoep A, McCauley E. Using modular psychotherapy in school mental health: Provider perspectives on intervention-setting fit. Journal of Clinical Child & Adolescent Psychology. 2014;43:890–901. doi: 10.1080/15374416.2013.843460.
- Lyon AR, McCauley E, Vander Stoep A. Toward successful implementation of evidence-based practices: Characterizing the intervention context of counselors in school-based health centers. Emotional and Behavioral Disorders in Youth. 2011;11:19–25.
- Lyon AR, Stirman SW, Kerns SE, Bruns EJ. Developing the mental health workforce: Review and application of training approaches from multiple disciplines. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(4):238–253. doi: 10.1007/s10488-010-0331-y.
- March J, Parker J, Sullivan K, Stallings P, Conners C. The Multidimensional Anxiety Scale for Children (MASC): Factor structure, reliability and validity. Journal of the American Academy of Child and Adolescent Psychiatry. 1997;36:554–565. doi: 10.1097/00004583-199704000-00019.
- March JS, Sullivan K, Parker J. Test-retest reliability of the Multidimensional Anxiety Scale for Children. Journal of Anxiety Disorders. 1999;13(4):349–358. doi: 10.1016/s0887-6185(99)00009-2.
- Mash EJ, Hunsley J. Evidence-based assessment of child and adolescent disorders: Issues and challenges. Journal of Clinical Child and Adolescent Psychology. 2005;34:362–379. doi: 10.1207/s15374424jccp3403_1.
- McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist. 2010;65(2):73–84. doi: 10.1037/a0018121.
- Muhr T. ATLAS.ti (Version 5.0) [Computer software]. Berlin, Germany: ATLAS.ti Scientific Software Development GmbH; 2004. Available from http://www.atlasti.com/
- Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed methods design in implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:44–53. doi: 10.1007/s10488-010-0314-z.
- Richardson LP, McCauley E, Grossman DC, McCarty CA, Richards J, Russo JE, Katon W. Evaluation of the Patient Health Questionnaire-9 Item for detecting major depression among adolescents. Pediatrics. 2010;126(6):1117–1123. doi: 10.1542/peds.2010-0852.
- Scott K, Lewis CC. Using measurement-based care to enhance any treatment. Cognitive and Behavioral Practice. 2015;22:49–59. doi: 10.1016/j.cbpra.2014.01.010.
- Spitzer RL, Kroenke K, Williams JB. Validation and utility of a self-report version of PRIME-MD. JAMA. 1999;282(18):1737–1744. doi: 10.1001/jama.282.18.1737.
- Stumpf RE, Higa-McMillan CK, Chorpita BF. Implementation of evidence-based services for youth: Assessing provider knowledge. Behavior Modification. 2009;33:48–65. doi: 10.1177/0145445508322625.
- Walker SC, Kerns SE, Lyon AR, Bruns EJ, Cosgrove T. Impact of school-based health center use on academic outcomes. Journal of Adolescent Health. 2010;46:251–257. doi: 10.1016/j.jadohealth.2009.07.002.
- Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, Gibbons RD. Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: A randomized effectiveness trial. Archives of General Psychiatry. 2012;69:274–282. doi: 10.1001/archgenpsychiatry.2011.147.
