Abstract
Schools are well positioned to facilitate early identification and intervention for youth with emerging mental health needs through universal mental health screening. Early identification of mental health concerns via screening can improve long-term student development and success, but schools face logistical challenges and a lack of pragmatic guidance for developing local screening policies and practices. This study summarizes mental health screening practices tested by six school districts participating in a 15-month learning collaborative. Qualitative analysis of 42 Plan-Do-Study-Act cycles revealed that districts tested quality improvement changes across seven screening practice areas, with all teams conducting at least one test to: 1) build a foundation; and 2) identify resources, logistics and administration processes. Quantitative data indicated that the average percentage of total students screened increased from 0% to 22% (range = 270–4,850 students screened at follow-up). Together, these results demonstrate how school districts not currently engaged in mental health screening can apply small, specific tests of change to develop a locally tailored, practical, and scalable process to screen for student mental health concerns. Lessons learned are provided to inform future directions for school-based teams.
Keywords: school mental health screening, learning collaborative, plan-do-study-act cycle
Underserved youth mental health need continues to be a public health concern in the United States. It is estimated that the prevalence of mental health disorders among youth is 13% (Centers for Disease Control [CDC], 2013), with rates between 20% and 25% among adolescents (Merikangas et al., 2010). Despite these rates, the average time between diagnosis of mental health disorder(s) and treatment is 8–10 years, with only 20–45% of youth with mental health difficulties receiving timely services (Costello et al., 2014). This delay in services has been attributed to numerous barriers, including limited availability of culturally and linguistically appropriate mental health care, cost of services, transportation, and stigma surrounding mental health and illness (McKay & Bannon, 2004).
In response to these barriers, comprehensive school mental health systems (CSMHSs) aim to provide all youth with social, emotional, and behavioral health supports through equitable approaches that are integrated into the school day (Hoover & Bostic, 2020). In fact, research indicates that schools are the most common provider of mental health services for youth (Costello et al., 2014; Duong et al., 2020). In addition to providing services for students with existing mental health concerns, CSMHSs also include screening and other early identification systems to identify students at risk of developing mental health concerns, as well as those who are experiencing difficulties that have gone undetected. National school mental health (SMH) quality measures and training curricula include mental health screening as a core feature of a high quality, multi-tiered student support system (SMH-QA; Hoover et al., 2015; TFI; Algozzine et al., 2019; ISF-II; Splett et al., 2019).
Mental health screening is increasingly being considered by schools as a mechanism to identify and support the mental health needs of students. A clear and consistent definition of universal mental health screening is an important foundation for school teams engaging in this work. We define universal mental health screening as, “the assessment of students to determine whether they may be at risk for a mental health concern” (SMH-QA; Hoover et al., 2015). Moreover, screening is “assessment in the absence of known risk factors to identify students who may benefit from Tier 2 or Tier 3 services and supports” (SMH-QA; Hoover et al., 2015). We distinguish universal screening from follow-up assessment of students referred to mental health services (also an evidence-based practice to pinpoint student needs and inform data-driven decisions; Center on Positive Behavioral Interventions and Supports, 2021; Substance Abuse and Mental Health Services Administration [SAMHSA], 2019).
Mental health screening in schools can be accomplished with a systematic tool or process, including standardized student-report, parent-report, or teacher-report measures, or a structured teacher nomination process. Determining whether screening is completed by students, teachers, and/or parents is typically based on the grade level of students and the objectives of screening (Splett et al., 2018). Previous literature on best practices in school-based mental health screening generally recommends that teachers and parents of students in pre-Kindergarten through grade 3 complete rating scales on students’ observed internalizing and externalizing symptoms of distress, while students in grades 4 through 12 complete self-report measures of their wellbeing and other indicators of enhanced social-emotional development, as well as internalizing symptoms of distress (Kilgus & von der Embse, 2019).
In this paper, we summarize mental health screening practices used by six district teams who participated in a 15-month learning collaborative (LC). These teams were not engaged in mental health screening practices at the beginning of the LC. Small, rapid-cycle tests of change called Plan-Do-Study-Act (PDSA) cycles were used in a “trial and error” manner to develop screening practices tailored to their local school communities. PDSAs are a four-stage learning approach that involves planning a small test of change believed to produce the desired improvement (“Plan”), conducting the test (“Do”), observing the result in real time (“Study”), and then deciding as a team on next steps (“Act”), which could include adopting, abandoning, or adjusting the change for the next test (Taylor et al., 2014). Our goal is to present practical guidance for school or district teams looking to improve their screening practices one step at a time, as well as to feature the methods used for steady, incremental, durable scale-up of a local approach to screening to improve early identification and support of student mental health needs. First, the current status of school-based mental health screening is reviewed.
Current Status of School-Based Mental Health Screening
Based on a review of approximately 35 publications on school-based mental health screening conducted internally for this LC, the literature suggests that mental health screening within a multi-tiered system of support (MTSS) framework is occurring nationally in all grade levels (NCSMH, 2015). Yet, data trending slowly upward since 2005 across multiple studies indicate that, at most, only 20% of schools currently engage in mental health screening (Herman et al., 2020). There are several tools for universal and Tier 2/3 screening that have been developed and tested across different populations of interest, demonstrating reliability and validity (NCSMH, 2015; Andrews et al., 2020; Jeffrey et al., 2020; Lane et al., 2009). Research has also developed best practice considerations for selecting appropriate tools to be used in mental health screening based on predictive validity, sensitivity and specificity, objectives of screening, informants or raters, and frequency of screening implementation (Bertone et al., 2019; Moore et al., 2015).
The following review further describes the status of school-based mental health screening research and practice. Benefits identified in the literature are reviewed, followed by challenges facing more widespread implementation of universal mental health screening in schools. Finally, the need for further practical guidance for schools on this topic is detailed, both to set the stage for the current study aims and to serve as a call to action for additional pragmatic research on this topic.
Benefits
When carried out thoughtfully and with clear intentions, mental health screening in schools offers significant benefits. Namely, early identification of student risk can inform a more systematic, proactive approach to the adoption and implementation of services and supports. Within an MTSS framework, screening can improve the quality of comprehensive school-based health systems at the universal (“Tier 1”), targeted (“Tier 2”), and intensive (“Tier 3”) levels of service delivery (Dowdy et al., 2015). Tier 1 universal services and supports are provided to the entire student body/school community to promote positive social, emotional and behavioral skills and overall wellness (e.g., schoolwide, grade-level or classroom curricular presentations to support positive school climate, social and emotional skills). Tier 2 early intervention services and supports are provided on an as-needed basis to students at risk for mental health concerns to prevent early-onset problems from progressing (e.g., small group interventions, brief individual interventions, mentoring, low-intensity supports such as a daily teacher check-in). Tier 3 targeted intervention or treatment services and supports are also provided to students on an as-needed basis, but address more serious, often diagnosed concerns and prevent the worsening of symptoms that can impact daily functioning (e.g., individual, group, or family therapy; Hoover et al., 2019). Mental health screening lends itself to this public health model, in which student mental health services and supports are provided to more than just those at the highest level of risk or those with the most observable difficulties (e.g., externalizing behaviors; Dowdy et al., 2010). Rather, school-based screening efforts can help schools proactively identify students at risk of or currently experiencing a range of mental health concerns, including internalizing symptoms of distress (Allen et al., 2018). With this systemic approach, schools are more likely to effect lasting positive change for students, their families, and communities, as they focus on prevention, early identification, and intervention efforts.
Data from mental health screening are also critical for monitoring the strengths and needs of students over time. At the Tier 1 level of screening, results from initial implementation can be used as baseline data (Dvorsky et al., 2014), with each subsequent screening utilized to monitor schoolwide and individual trends (Dowdy et al., 2010). For students who receive follow-up services and interventions as a result of universal screening, the impact of those interventions can also be tracked so that school teams can employ data-driven decision making to implement and adapt evidence-based practices at Tiers 2 and 3. Overall, data from screening allow schools to aggregate and disaggregate data according to context and student demographics, among other variables, so that students’ mental health can be monitored and their wellness promoted (Dowdy et al., 2010; Humphrey & Wigelsworth, 2016).
Finally, previous research has demonstrated that school-based mental health screening, specifically for social and emotional health within a multi-tiered system of support, is cost-effective (Dvorsky et al., 2014; Humphrey & Wigelsworth, 2016; Kuo et al., 2009). In a review of the Developmental Pathways Screening Program, a multi-stage emotional health screening program implemented over four years in 6th grade classrooms in Seattle, WA, researchers found that universal screening allowed for successful connections to services that were also cost-effective for schools (Kuo et al., 2009). With free and inexpensive screening measures with good psychometric properties now available (see Becker-Haimes et al., 2020, and a searchable library of measures on The SHAPE System [https://shape.3cimpact.com/]), screening can ultimately save schools money, as early identification of social, emotional, or behavioral difficulties and early intervention are less costly than long-term or more intensive care (Humphrey & Wigelsworth, 2016). In sum, universal mental health screening in schools presents several benefits to schools, students, and families.
Challenges
Despite the evidence that mental health screening can have a positive impact on students, there are still barriers to implementation that must be addressed. In a national survey of over 450 school administrators, the most commonly reported reasons for not conducting emotional and behavioral health screening were lack of awareness that such screening practices exist, budget and resource limitations, and limited access to free and reliable measures with valid interpretations (Bruhn et al., 2014). Other reasons cited were the presence of few support systems in place for those who were identified, stigma around labeling students with mental health difficulties, concerns about responsible use and interpretation of data, the large number of required screenings already in place, and anticipation of negative reactions from students’ parents and caregivers (Bruhn et al., 2014). The lack of guidance and continuity around how to carry out school-based mental health screening is also a barrier to implementation, with no universal expectations regarding which measures to use (Moffa et al., 2018), which consent procedures to follow (Moore et al., 2015), or how to follow up with students and families following screening administration, especially at the Tier 1 level (Dowdy et al., 2010). Schools must also consider families’ and communities’ perspectives on mental health and schools’ roles in addressing social, emotional, and behavioral health concerns, as these should influence administration of screening tools and dissemination of findings (Chafouleas et al., 2010).
Although there is a need to consider these many challenges to school-based mental health screening, several factors have contributed to the dearth of work that details these difficulties associated with real-world implementation of screening within an MTSS framework. First, publication bias, or the “File Drawer Problem,” encourages dissemination of study outcomes deemed “good” or positive, with efforts that resulted in less favorable outcomes never published or made public. Additionally, published manuscripts on school-based universal screening typically provide a brief section on screening procedures and then focus on the outcomes of interest. Readers and consumers of research may benefit from accounts of the trials and tribulations of how screening was implemented through specific case examples and experiences. Finally, the research-to-practice gap remains a barrier, as investigator-initiated research often does not generalize to local system-level practice. There is limited research detailing school-wide efforts that also includes measures of social validity, making it difficult to know what works within a school setting and where efforts might have fallen short (Dowdy et al., 2010). Considering the many benefits and opportunities that universal screening can provide to schools, there is a pressing need for literature that elaborates on real-world applications of school-based universal mental health screening, including successes, challenges, and strategies to improve the quality of implementation.
The Need for Further Guidance
Although research underscores the benefits of implementing universal mental health screening in schools, a recent review of state-level guidance across the United States indicated that most state-level educational policy did not provide specific information on policies, practices, and implementation of social, emotional, and behavioral health screening (Briesch et al., 2018). Most recently, a national survey of school administrators, teachers, support staff, and caregivers indicated that there was agreement across stakeholders about the importance of prevention and early identification for social, emotional, and behavioral concerns (Briesch et al., 2020). Therefore, despite readiness by schools to start engaging in mental health screening due to increased recognition of its importance, practical guidance for schools and districts to implement mental health screening is very limited. This need is particularly pressing in the context of the COVID-19 pandemic, which has raised collective awareness about the importance of promoting mental health and wellness for all students and identifying those most impacted by the health, economic and social burdens of the disease, coupled with widespread school closures. Thus, support and guidance are needed to help schools effectively and sustainably implement universal school-based mental health screening.
The Current Study
This study occurred within the context of a systematic, 15-month learning collaborative (LC) designed to help school district teams improve the quality of their multi-tiered SMH systems (see Connors et al., 2020 for a detailed description). Teams applied to participate in the LC to improve SMH system quality, with screening as one of several domains they could opt to focus on. Within the LC, training, ongoing support, and resources were provided for teams who opted to test and implement screening practices in schools. LCs are useful when evidence supporting a certain practice is strong but it goes unused in everyday work, such that “there is a gap between what we know and what we do” (Institute for Healthcare Improvement [IHI], 2003, p. 1). This is particularly relevant for mental health screening, which has numerous benefits (e.g., Dowdy et al., 2015; Pearrow et al., 2016; Schimke & Schimel, 2014; Zenere & Lazarus, 1997) but is not yet a widespread practice in schools and districts nationwide.
Study Aims
As school-based mental health screening is limited in practice due to numerous implementation barriers, schools and districts can benefit from clear guidance and a menu of practices that have been successfully used in other schools. Therefore, our aims were to 1) report observed trends in SMH screening throughout a 15-month LC; and 2) describe specific tests of change to advance screening practices in school districts.
Methods
Participants
Participants included six school district teams who were enrolled in a 15-month national LC to improve comprehensive SMH quality during the 2015–2016 or 2016–2017 school year. LC district teams were selected through a competitive application process. There were 24 district teams in 14 states who participated in the LC, but teams could focus their quality improvement efforts on a variety of domains within school mental health quality (Connors et al., 2020). Thus, only a subset of teams focused on mental health screening during the LC. Inclusion criteria for the six district teams represented in this study were 1) submitting at least one Plan-Do-Study-Act (PDSA) on screening; and 2) reporting reliable screening data. The parameters for the latter criterion were established based on a data review and included: 1) reporting students screened in the absence of risk factors (as opposed to initial assessment of students referred); and 2) having a higher number of students screened than identified as at risk based on screening activities. Districts that reported zero (0) students screened at baseline were included if it was a valid entry indicating they did not conduct universal screening at that time (i.e., data were not missing at baseline). In total, the six districts served 73 schools (M = 12; range = 5–42 schools) and 43,519 students (M = 7,253; range = 520–24,000) in six states spanning the Northeast, Midwest and Western United States. Urbanicity ranged from metropolitan counties of 1 million population or more (Districts A, B, C, D) to nonmetropolitan, rural counties (Districts E, F). See Table 1 for district team characteristics.
Table 1.
District Team Characteristics
| District | State | Schools | Student Population | Urbanicity† | % Families Below the Poverty Line |
|---|---|---|---|---|---|
| A | CA | 6 | 3,086 | 1 | 22.6% |
| B | NY | 8 | 5,960 | 1 | 4.8% |
| C | MA | 5 | 6,953 | 1 | 6.4% |
| D | RI | 42 | 24,000 | 1 | 30.4% |
| E | NH | 4 | 520 | 7 | 13.9% |
| F | MN | 8 | 3,000 | 4 | 10.6% |
†Urbanicity is based on 2013 Rural-Urban Continuum Codes whereby 1–4 are metropolitan counties and 5–9 are nonmetropolitan (rural) counties (U.S. Department of Agriculture Economic Research Service, 2020).
Procedure
District teams participated in one of two LC cohorts. The 15-month LC structure included a two-day, in-person learning session; four 90-minute virtual learning sessions; and monthly 60-minute “Action Period” calls to review data and tests of change submitted. Each month, participating teams submitted screening performance data as well as one-page PDSA worksheets (IHI, 2007), which detailed a specific improvement or innovation being tested in their SMH system. Teams selected their PDSA topic based on training and resources presented in the LC, shared learning from other districts, and their own expertise about what changes might lead to desired improvement in their system. This small, rapid test of change was documented along with the results of the test and what the team learned to inform their subsequent test (see Table 3 for example tests). At the conclusion of the LC, screening data and PDSAs pertaining to screening were collected as secondary data for analysis in the present study. Each selected team was contacted to notify them of the use of these secondary data for the purposes of publication and to verify the screening data they submitted during the LC period. Collection and use of quality improvement data submitted during the LC was approved by the University of Maryland Human Research Protections Office as non-human subjects research exempt from IRB oversight. Additional details about the LC model and data on the feasibility and acceptability of its application to SMH quality are available elsewhere (Connors et al., 2020).
Table 3.
Screening Activities, Illustrative Tests and Lessons Learned from 42 PDSAs Among 6 District Teams
| Screening Activity | Operational Definition | PDSAs (N, %) | Teams (N, %) | Illustrative Tests | Lessons Learned |
|---|---|---|---|---|---|
| Build a Foundation | Assemble a team with school-community-family representation. Generate buy-in and support. Decide what you are going to screen for, who, and why (i.e., clarify goals). | 16 (38%) | 6 (100%) All Teams | • Present screening rationale and possible tools at existing meetings to assess interest, readiness, questions, and concerns • Set initial goals based on stakeholder preferences (e.g., give schools and staff choice) | “There was more interest than we believed even with this being a busy time of year. One of the principals reported that the key factor was giving the staff time without their students to complete the screener.” “Upper grades seem more interested in and ready to screen than lower grades and teams provided feedback on preferred screening tools. Lower grades had more questions and concerns about resources to screen and what tools to use.” |
| Identify Resources, Logistics and Administration Processes | Identify staffing needed for administration, scoring and follow up. Identify data infrastructure. | 22 (52%) | 6 (100%) All Teams | • Test feasibility of transitioning screening administration to a different team member • Test data system for timeliness and ease of data entry and scoring | “Less children were identified than we predicted. That the idea of doing it small is so relevant. If we had done this with the whole school, we would not have been able to make the necessary adjustments and respond to the needs of the teachers as fully as we’re able to with only a few teachers in the process.” “Thinking outside the box to explore electronic means to send SDQ or mailing them out may reduce length of time for return. Also clearly indicating a return by date and send a reminder.” |
| Select an Appropriate Screening Tool | Decide on a tool based on what it screens for, reliability/validity, cost, length to administer and score, training/TA available. | 14 (35%) | 4 (67%) Teams B, C, D, F | • Test local utility of 2–3 potential screening tools to accurately identify students • Pilot test one screening tool with 5 students, scaling up later to classroom then grade level | “Screening tool [PHQ-9] was easy to use and would be useful to establish baseline depression needs if administered electronically with appropriate consent procedures” “The acceptance of the process by Psychologists and Social Workers – very willing to use the screening. The team felt it [SDQ] was a useful tool to be easily implemented in schools, and all schools agreed.” |
| Determine Consent and Assent | Decide between passive or active consent, communication about screening, cultural considerations and communication plans | 2 (5%) | 2 (33%) Teams B, F | • Obtain active consent for a small group of students first to assess parent receptivity • Review parent communication samples with parent feedback to adapt home-school communications about screening | “Zero parents opted out of screening. Far more students participated than planned.” “Will need to create a plan for what to do with positive screens before administering.” “Consent was not secured but staff involved in screening administration were since re-trained on the consent protocol to ensure they are used for future screenings with more students.” |
| Test Data Collection Process | Consider data management and privacy including HIPAA, FERPA, data storage, online platform vs paper and pencil | 15 (37%) | 4 (67%) Teams A, C, E, F | • Develop and test a screening data collection process in 1 classroom and ask team members collecting and scoring data to track their time and report out on feasibility • Test a process to collect and store data in a secure location outside of the student record | “The amount of rapport building that stemmed from the implementation exceeded our expectations. Students returned shortly after the screening to indicate areas of need other than addressing substance use at a rate much higher than expected. Additionally, mental health staff used this opportunity to discuss the limits of confidentiality with students and to engage in discussions about substance use from a prevention standpoint.” |
| Determine Follow-Up Processes | Clarify post-screening procedures. Select interventions to match student needs identified during screening. | 10 (24%) | 4 (67%) Teams A, B, C, F | • Develop and test a follow-up guidance document or triage tool for mental health providers to use after screening | “We learned the value of taking a moment with site-level staff to review data and observe their usage of data tools in order to drive the supports we can provide to the broader program.” “Follow up revealed teacher and parent concerns as well as student self-reported concerns that were not previously brought up to guidance until the RCADS administration.” |
| Test Entire Screening Protocol | Pilot the entire workflow including consent, administration, scoring and follow-up to error proof before and during scale up. | 7 (17%) | 3 (50%) Teams B, C, D | • Form a committee to create a message, select a screener, administer, score and follow up on a small scale. Learn about needed adjustments and/or that the protocol can be scaled up (e.g., another class, grade, more students). | “Piloting a different tool on a small scale allowed the team to compare ease of scoring and interpretation for both. [We] identified 17% with behavioral health concerns, and several who were not previously identified as eligible for services; [we are] concerned about how to scale up to all grade levels so plan to start one grade level at a time.” |
Measures
Trends in School Mental Health Screening
Observed trends in SMH screening outcomes over time were measured by district team self-report on the screening domain of the SMH Quality Assessment (SMH-QA-Version 1; Hoover et al., 2015). The SMH-QA-Version 1 was developed by the National Center for School Mental Health (NCSMH) to establish national performance standards for SMH quality improvement (Connors et al., 2016). The first version of the SMH-QA was developed through an iterative process involving feedback from experts in the field (Connors et al., 2016). SMH-QA-Version 1 includes seven domains of SMH quality: 1) teaming; 2) screening; 3) needs assessment/resource mapping; 4) evidence-based services and supports; 5) evidence-based implementation; 6) school outcomes and data systems; and 7) data-driven decision making. The screening domain includes four primary team self-reported items: the number of students enrolled, screened, at risk, and referred for services in grades K-12.
Teams submitted SMH-QA-Version 1 responses electronically via The SHAPE System (www.theshapesystem.com) each month for 15 months. SHAPE is a public domain website developed and hosted by NCSMH to advance comprehensive SMH quality improvement and sustainability via team self-assessment tools, resources, and strategic planning guides. Participants submitted their data in their district account within The SHAPE System where they could refer to it each month, and the study team could directly access their responses in real time.
Tests of Change to Advance Screening Practices
Plan-Do-Study-Act cycles (PDSAs) are a central component of many quality improvement methods, including the Model for Improvement and the Breakthrough Series Collaborative from which this LC was derived (IHI, 2003; Taylor et al., 2014). The PDSA is designed to help teams rapidly test quality improvement change ideas on a small scale to assess their feasibility and utility in the field. Teams used PDSA cycles to actively test an idea for change in their current workflow, procedures, or system and used an iterative, data-driven approach to incrementally build on lessons learned in their local setting. This approach facilitates rapid, low-risk learning. Importantly for school teams, PDSAs protect against investing time and resources into a large-scale implementation effort that risks failure and implementer burnout in schools. Instead, PDSAs promote iterative pilot testing, tailoring, and momentum building or buy-in from implementers.
Participating teams received training on how to use PDSAs for their local quality improvement goals and submitted one PDSA worksheet per month through Dropbox to the study team. PDSAs were viewable by other participating teams and actively accessed during the LC by the study team to feature improvements and innovations occurring in real time, to provide feedback and generate shared learning among teams within the LC.
Data Analysis
Aim 1 was accomplished by examining the number and percentage of students screened at the beginning and end of the LC. Aim 2 was accomplished by reviewing and coding all Plan-Do-Study-Act (PDSA) worksheets submitted by LC teams documenting how they developed and tested their screening practices over time.
Aim 1: Trends in School Mental Health Screening
Quantitative screening data reported by LC teams on the SMH-QA during 2015–2017 were extracted, examined for normality, and cleaned. Data submitted monthly were closely examined to identify and code the unique baseline month for each team, defined as the first date screening data were submitted (“Baseline”), as well as the unique final month for each team (“Follow-Up”). On average, the coded Baseline was within the first two months of the LC and the Follow-Up was 11 months after the coded Baseline. Monthly data were summed to yield cumulative numbers of students screened by Follow-Up. We used these cumulative data to describe the total number of students screened at Baseline and Follow-Up, as well as the total number of students at risk and referred for services at Follow-Up.
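To make this aggregation concrete, the sketch below illustrates how monthly SMH-QA submissions can be reduced to a baseline month, a follow-up month, and a cumulative count of students screened. The district labels, dates, counts, and column names are hypothetical placeholders, not the study's data or analysis code.

```python
# Minimal sketch of the Aim 1 aggregation described above; all values are
# hypothetical placeholders, not study data.
import pandas as pd

# One row per district per monthly SMH-QA submission.
monthly = pd.DataFrame({
    "district": ["X", "X", "X", "Y", "Y"],
    "month": pd.to_datetime(
        ["2015-10-01", "2016-01-01", "2016-08-01", "2015-11-01", "2016-09-01"]
    ),
    "students_screened": [0, 40, 900, 0, 250],
})

summary = (
    monthly.sort_values("month")
    .groupby("district")
    .agg(
        baseline_month=("month", "first"),                  # first month data were submitted
        followup_month=("month", "last"),                   # final month submitted
        screened_at_baseline=("students_screened", "first"),
        cumulative_screened=("students_screened", "sum"),   # cumulative count by Follow-Up
    )
)
print(summary)
```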
Aim 2: Tests of Change to Advance Screening Practices
PDSA worksheets were coded using a systematic, iterative process developed by the study team and implemented with two coders. Our qualitative methods were an adapted version of the rapid qualitative analysis approach, which is designed to be more deductive and explanatory than inductive and exploratory (Hamilton, 2020), and which has been found to yield findings and recommendations comparable to those of more traditional, time-intensive thematic analysis (Taylor et al., 2018). First, the 42 screening PDSAs submitted by the 6 participating districts were coded based on eight steps outlined in the SMH Screening Quality Guide (NCSMH, 2020). One coder applied codes to all 42 PDSAs, cross-coding where appropriate. Next, a second coder checked codes for 22 (52%) of the PDSAs. Interrater reliability was 92%. Discrepant codes for the 22 double-coded PDSAs were resolved through consensus. Also, “Develop Administration Process” and “Identify Resources and Logistics” were collapsed into “Identify Resources, Logistics and Administration Processes”. “Clarify Goals” was also collapsed into “Build a Foundation” as there were only two PDSAs coded under “Clarify Goals” and both were cross-coded under “Build a Foundation”. Finally, a new code, “Test Entire Screening Process/Protocol”, was added. This process resulted in seven codable practice areas. The remaining 20 PDSAs were re-coded by the first coder to reflect consensus conversations, updated codes, and definitions.
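For readers replicating the coding procedure, and assuming (for illustration only) that the reliability figure above reflects simple percent agreement on one primary code per PDSA, the calculation is straightforward; the code labels below are made up and do not reproduce the study's coding data.

```python
# Illustrative percent-agreement calculation for double-coded PDSAs.
# Code labels are hypothetical; this is not the study's actual coding data.
coder_1 = ["Build a Foundation", "Identify Resources", "Select Tool", "Determine Follow-Up"]
coder_2 = ["Build a Foundation", "Identify Resources", "Select Tool", "Determine Consent"]

agreements = sum(a == b for a, b in zip(coder_1, coder_2))
percent_agreement = 100 * agreements / len(coder_1)
print(f"Percent agreement: {percent_agreement:.0f}%")  # 75% in this toy example
```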
Mixed Methods
Quantitative (SMH-QA) and qualitative (PDSAs) data were equally weighted and collected simultaneously (QUAN + QUAL; Palinkas et al., 2011). The function of our mixed methods was expansion, as the PDSA results were connected to the SMH-QA results to provide additional detail about the screening practices and lessons learned to explain how the Baseline to Follow-Up increases in number of students screened were achieved.
Results
Aim 1: Trends in School Mental Health Screening
At baseline, most teams (five of six) reported not conducting universal mental health screening, and none reported the number of students at risk and/or referred to mental health services within seven days of screening (see Table 2). Only District F reported screening two students at Baseline, which is not uncommon as the LC encourages very small tests of change using the PDSA format to examine results and refine procedures before scaling up. At baseline, district teams self-reported screening less than 1% of their entire student body in the year prior. At follow-up, district teams self-reported screening an average of 1,607 students (22% of their student body). The follow-up range again was wide, from 600 of 24,000 students screened by District D to 69% (4,850 of 7,000 students) screened by District C. In total, 9,640 students were screened by these teams at follow-up. At follow-up, an average of 17% of students were identified as at risk (range = 0.3% to 33%) and an average of 59% of those who screened positive were referred to mental health services within seven days across five district teams (range = 0% to 100%). Of note, district teams were developing their tracking systems for students screened, identified at risk, and referred during this time as well. Therefore, the bimodal distribution of students referred, particularly that half of the districts reported no students referred, likely reflects differences in district capacity to track and report students referred. It is our understanding that all districts who identified students at risk were indeed able to follow up with those students and connect them to services as needed in a timely manner. We assessed this by 1) asking them to report the number of students connected to services within 7 days of identification on the SMH-QA and 2) following up on monthly calls during PDSA review when those data were missing. In general, using the PDSA method helped protect participating teams from identifying large groups of students they were unable to serve.
Table 2.
District Team Screening Data per SMH-QA
| District | Screening PDSAs (N) | Students Enrolled (N) | Students Screened at Baseline, N (%) | Students Screened at Follow-Up, N (%) | Students At-Risk at Follow-Up, N (%) | Students Referred at Follow-Up, N (%) |
|---|---|---|---|---|---|---|
| A | 7 | 3,086 | 0 (0%) | 1,874 (61%) | 459 (24%) | 0 (0%) |
| B | 8 | 5,690 | 0 (0%) | 1,499 (26%) | 38 (.3%) | 0 (0%) |
| C | 11 | 7,000† | 0 (0%) | 4,850 (69%) | 750 (13%) | 750 (100%) |
| D | 3 | 24,000 | 0 (0%) | 600 (.03%) | 200 (33%) | 160 (80%) |
| E | 5 | 540† | 0 (0%) | 270 (50%) | 85 (31%) | 60 (71%) |
| F | 8 | 3,000 | 2 (<.05%) | 547 (18%) | 106 (19%) | 0 (0%) |
| Total | 42 | 43,316 | 2 (0%) | 9,640 (22%) | 1,638 (17%) | 970 (59%) |
†Teams reported slightly variable numbers of students enrolled each month; the largest mean denominator is reported here.
Note: On average, baseline was within the first two months of the LC and follow-up was 11 months after baseline.
Aim 2: Tests of Change to Advance Screening Practices
The 42 screening PDSAs submitted by six district teams were coded into seven distinct screening practice areas. Teams submitted an average of seven screening PDSAs each (range = 3 to 11). Note that teams also submitted PDSAs on other school mental health quality domains throughout the 15-month LC, so they did not necessarily complete a screening PDSA each month. All teams submitted at least one PDSA on two screening practice areas, which were “Build a Foundation” and “Identify Resources, Logistics, and Administration Processes”. These two practice areas also had the greatest number of PDSAs submitted to test specific change ideas or strategies. See Table 3 for screening practice area definitions, number of PDSAs, representation across teams, illustrative tests and selected lessons learned. Below, we summarize the efforts of district teams in each screening practice area.
Build a Foundation
Building a foundation includes assembling a team with school-community-family representation that can support the mental health screening approach, procedures, communication, and follow-up. This practice area also includes generating buy-in and support for mental health screening. All districts in the current sample submitted at least 1 PDSA in this practice area, which also represented 14 (33%) of all PDSAs. PDSAs fell in two main areas, developing and testing communication tools with involved stakeholders and using committees and meetings to gather feedback and generate buy-in. In terms of testing communication tools, one PDSA focused on piloting a draft format for sharing screening results with an identified principal and teacher to gather their feedback. Another tested a newly-developed communication plan to disseminate information to school principals to assess interest in screening (note that this team learned their principals were much more supportive of mental health screening than they anticipated prior to this outreach). Other PDSAs focused on using committees and meetings (e.g., mental health, RtI, Parent Teacher Organization) to discuss logistics, identify trainings to send mental health staff to, and present screening rationale and possible screening tools to assess interest, readiness, questions and concerns about processes to pilot screening. PDSAs also reflected evidence of teams working toward clarifying goals of screening, which involved deciding what the team was going to screen for (e.g., type of mental health strengths or needs), who (e.g., grade level, school buildings, reporter), and why (e.g., early identification of anxiety for students transitioning to high school; targeted social emotional skill development for 1st grade students). At least two teams started these conversations during the process of building buy-in and obtaining feedback in the “build a foundation” practice area. For example, one team worked directly with two schools to select one grade they wanted to target to pilot the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1997).
Identify Resources, Logistics and Administration Processes
This practice area includes the identification of staffing, a data system, and other resources needed to implement mental health screening, as well as logistical considerations and procedures for collecting the screening data. All teams submitted a PDSA in this practice area and, together, PDSAs coded for this area represented over half of all PDSAs submitted (N = 22, 52%). Some teams focused on identifying staffing needed for administration, scoring and follow up. At times this required testing out new roles and responsibilities for various staff members; one team tested the feasibility of care coordinators from a community-partnered organization leading screening instead of clinicians. There was also a focus on identifying a usable data infrastructure, and exploring whether existing student information systems (SIS, such as PowerSchool) could be customized to accommodate screening workflows or if another electronic system was needed. For example, one PDSA tested the digital administration and scoring of the Revised Children’s Anxiety and Depression Scale (RCADS; Chorpita et al., 2000) by developing a Google Form with RCADS prompts, modifying the spreadsheet to score subscales and highlight scores beyond the “at-risk” threshold, and entering RCADS screening data from 5th grade paper administration into the Google spreadsheet. Of note, following a series of PDSAs to perfect this process, Google Sheets ended up being a secure and scalable data infrastructure for this district. Many districts used PDSAs in this practice area to start small, developing and testing processes for who to screen, timepoints, staff to support, and actual administration. For example, one district started by identifying one teacher to conduct the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS; Kilgus et al., 2013) screening with five students and then reviewed the data to discuss feasibility of screening implementation.
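As a rough illustration of the spreadsheet-based scoring workflow described above, the sketch below sums screener items into subscales and flags totals above a cutoff. The item groupings, subscale names, and cutoff are hypothetical placeholders; they do not reproduce the published RCADS items or scoring rules, which rely on normed T-scores.

```python
# Hypothetical subscale scoring and at-risk flagging, analogous to the
# spreadsheet logic described above. Item numbers and the cutoff are
# placeholders, not the RCADS's actual items or clinical thresholds.
SUBSCALE_ITEMS = {
    "worry_placeholder": [1, 4, 7, 10],
    "low_mood_placeholder": [2, 5, 8, 11],
}
AT_RISK_CUTOFF = 8  # illustrative raw-score threshold only


def score_student(responses: dict) -> dict:
    """Sum each subscale (items rated 0-3) and flag totals at or above the cutoff."""
    scores = {
        name: sum(responses.get(item, 0) for item in items)
        for name, items in SUBSCALE_ITEMS.items()
    }
    flagged = any(total >= AT_RISK_CUTOFF for total in scores.values())
    return {**scores, "at_risk": flagged}


# Example: one student's responses (item number -> rating), as exported from a form.
print(score_student({1: 3, 4: 2, 7: 3, 10: 1, 2: 0, 5: 1, 8: 0, 11: 1}))
# {'worry_placeholder': 9, 'low_mood_placeholder': 2, 'at_risk': True}
```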
Select an Appropriate Screening Tool
Selecting a screening tool appropriate for the local district’s mental health screening goals and resources requires deciding on a measure or process based on what it screens for, psychometric qualities such as reliability and/or validity with youth populations that reflect the student body, cost to access the measure and any associated data platform costs, time to administer and score, and training or technical assistance available. Most districts (4, 67%) in this sample submitted PDSAs in this practice area and just over one third (14, 35%) of all PDSAs submitted were coded in this practice area as well. PDSAs reflected tests of change for a wide variety of standardized tools (e.g., SDQ, Goodman, 1997; RCADS, Chorpita et al., 2000; SAEBRS, Kilgus et al., 2013; Generalized Anxiety Disorder Assessment-7 [GAD-7], Spitzer et al., 2006; Patient Health Questionnaire-9 [PHQ-9], Kroenke et al., 2001) as well as approaches involving teacher nomination or multi-gated procedures such as “select a screening tool with a teacher nomination component appropriate for the designated grade level.” When using PDSAs to test various tools of interest rapidly on a small scale, teams could make more informed decisions based on what they learned in “real time” from pilot administration. For instance, one team found, “[The] SDQ yielded more useful subscales but was much harder to score manually than expected. [The] PSC-17 yielded less useful information but was easier to score.” Teams designed PDSAs in this area to answer questions about how much clarification is required to ensure the tool can be implemented effectively, the number of students identified, the accuracy of screening (i.e., sensitivity and specificity), and whether the same tool can be used for follow-up assessment or a more sensitive tool is needed to complement the initial screening tool.
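Because teams weighed tools partly on screening accuracy (sensitivity and specificity), the standard definitions are worth making concrete. The sketch below uses made-up counts for illustration; it does not draw on data from any participating district.

```python
# Standard screening-accuracy definitions with made-up counts (illustrative only).
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of students with a confirmed concern whom the screener flagged."""
    return true_positives / (true_positives + false_negatives)


def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of students without a concern whom the screener did not flag."""
    return true_negatives / (true_negatives + false_positives)


# Example: of 40 students with a confirmed concern, 34 were flagged;
# of 160 students without a concern, 144 were not flagged.
print(f"Sensitivity = {sensitivity(34, 6):.2f}")    # 0.85
print(f"Specificity = {specificity(144, 16):.2f}")  # 0.90
```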
Determine Consent and Assent
Two district teams conducted PDSAs to decide between passive and active consent, a decision that involves communication about screening, cultural considerations, and communication plans. Of note, many of these conversations started during “build a foundation” and some teams did not submit PDSAs specific to this decision between active and passive consent if they had decided early on which option to pursue. For example, one district opted for passive consent (complemented by numerous parent communications) in their planning phases, then tested it when scaling up anxiety screening using the GAD-7. However, one PDSA by a team weighing various consent options included obtaining active consent to administer the SDQ with eight students to assess parent responsiveness and receptivity to agree to screening. Then, they sent a follow-up letter home with a parent-reported SDQ questionnaire to be completed. Another PDSA focused on obtaining parent feedback on various parent communication samples to tailor home-school communication about screening.
Test Data Collection Process
Testing the screening data collection process includes exploring data sharing, management, and privacy considerations (including HIPAA, FERPA, data storage) as well as actual data administration processes (e.g., online platform versus paper and pencil). Four of the six district teams conducted a PDSA to test data collection processes, with 2–5 PDSAs per team just for this practice area, representing 37% of all PDSAs. As these teams did not conduct universal mental health screening at baseline, PDSAs were used to test the introduction of new data collection processes on a small scale to observe results and adjust processes for subsequent test of change to scale up over time. For example, one team who had already identified the SAEBRS as a potentially appropriate screening tool (through a prior PDSA) then developed a screening protocol and used another PDSA to test the protocol in one classroom “for school behavioral health staff to evaluate the data obtained from that teacher’s classroom.” From this, they learned their protocol was feasible as measured by teacher report of being able to complete screening for 15 students in 15 minutes, and 20–25 minutes for the school behavioral health staff to score. Both the teacher and school behavioral health staff reported the process was easy. In a subsequent PDSA the following month, the district team further tested this protocol on a larger scale (i.e., multiple grade levels) to determine the feasibility of more teachers screening students “in a designated timeframe for the school behavioral team to score and enter data.” They found that this larger scale test required “a bit more time and planning to give teachers the time they needed...but outcomes of teacher interviews were similar.” Other teams had variable lessons learned from their PDSAs, including some who found the screening process took longer than expected or teachers reported pertinent areas were missing from the screener (e.g., bullying, listening skills, defiance). PDSAs were also used to test different administration formats and ways to provide instruction about data collection processes. For instance, in one PDSA “a few parents asked for the SDQ to be e-mailed so the SDQ was adapted to meet the parent request.”
Determine Follow-Up Processes
Teams need to have clear post-screening procedures, including how to select interventions to match student needs identified during the screening tool or process. Four teams submitted a total of 10 (24%) PDSAs on this practice area. Some PDSAs in this area reflected brief tasks before an actual change could be tested by a PDSA. For example, some teams did initial research to “identify district-level resources available for behaviorally-focused RTI,” “develop a plan to follow-up with students who tested positive,” “identify what other sites do to follow-up with screeners (MTSS programming overall and individual student follow-up),” and/or “plan ahead for ongoing progress monitoring.” Most PDSAs in this area involved testing a flow chart or guidance document they developed to outline next steps after screening students. For example, one team tested their guidance document with “mental health coordinators to use during team meetings to discuss Tier 2 and 3 interventions for children identified as at-risk.” In this case they learned that the “Follow up document was more challenging to create than anticipated. Was important to develop this before actually screening.” In fact, more than one team opted to “create [and test] a plan for what to do with positive screens before administering.” Another team prepared to have mental health staff on-call after their first screening test as part of their follow-up procedure and decided after running the PDSA that “on-call staff were not needed after this screening but this practice will stay in place to account for contingencies that may occur during larger-scale screening.” Often, team-based processes were tested to ensure coordinated follow-up.
Test Entire Screening Process/Protocol
Districts B, C, and D submitted 1–4 PDSAs testing their entire screening process or protocol, meaning that they piloted the entire workflow including consent, administration, scoring and follow-up. The goal here was often to error-proof the process before continuing to scale up to screening with larger numbers of students. At this point teams reported larger tests such as screening 65 or 75 students, an entire grade level, or three grade levels to help them achieve “concrete steps towards a larger pilot of screening.” One team tested a new screening tool with a process they had developed and found the following:
Piloting a different tool on a small scale allowed the team to compare ease of scoring and interpretation for both. [We] identified 17% with behavioral health concerns, and several who were not previously identified as eligible for services; [we are] concerned about how to scale up to all grade levels so plan to start one grade level at a time.
Another found, “Zero parents opted out of screening. Far more students participated than planned. No significant problems arose during the screening process.” From this they concluded, “Screening can be effectively conducted through the high school’s advisory program. The data gained through large-scale screening supports the department’s ability to deliver services and identify at-risk students.” In this way, testing the entire protocol or process using PDSAs provided confirmation of prior findings and indication of readiness to scale up for teams working toward larger screening administrations.
Mixed Methods
Quantitative screening data submitted by teams were necessary to inform which teams to include in our PDSA review and coding. Using this method, coded PDSAs are based on teams with documented improvements in screening practices that resulted in actual screening of students. However, because there were so few districts in our sample, because districts varied in size, and because the proportion of students screened at follow-up was so variable, we are unable to draw conclusions from the qualitative data about which types of PDSAs predicted more students screened. For instance, District A submitted the fewest PDSAs yet was able to screen 61% of its student body by Follow-Up. District D was only able to screen 0.03% of its student body by Follow-Up, but it was by far the largest district (24,000 students enrolled in 42 schools); in absolute numbers, it was able to screen 600 students with 9 PDSAs submitted.
The coded PDSA content does provide meaningful expansion of quantitative findings by revealing the specific screening practices, tests of change and lessons learned to explain how the Baseline to Follow-Up increases in number of students screened were achieved. The tests of change offer a meaningful blueprint for future school and district teams looking to increase the number of students they can screen over time, by developing, executing, and learning from incremental practice improvements. To further aid mixed methods interpretation of screening practices over time, an in-depth description of District C’s use of PDSAs to increase their number of students screened over time is provided below.
Leveraging PDSAs to Increase Number of Students Screened: District C Example
District C achieved the greatest amount of growth in screening practices relative to their student population throughout the LC. Using the PDSA method, small tests of change led to larger tests of change in a manner that allowed for members of the district-wide Mental Health Initiative Committee to monitor the effectiveness of practices and make decisions about scaling them up in a controlled manner that accounted for barriers to implementation and ensured a greater likelihood of successful change. Within one school year, the district moved toward full implementation of two large-scale online screenings at the high school level that integrated a passive consent and opt-out process and subsequent expansion to elementary and middle schools. Follow-up data analysis revealed that 100% of students who required follow-up received it within 7 days of the screening, with urgent concerns being addressed immediately upon identification.
District C’s process for achieving this growth with screening began as micro-level testing of the practice one student at a time, and then grew to class or grade level, schoolwide, and then district-wide screening. Before the LC, District C collected and used psychosocial data very rarely. The universal mental health screening program in District C is now one of the largest and most comprehensive programs in the state, extending from grades three to twelve and focusing primarily on internalizing concerns (e.g., anxiety and depression). Thousands of students are screened annually, and these data are used to proactively identify students who may require mental health services in a manner that directly addresses the extraordinarily lengthy delay in treatment that currently exists. District C reported a 66% increase in the identification of students eligible for therapeutic services for internalizing concerns following the implementation of screening practices.
One early PDSA that the team drafted involved selecting a screening measure and administering it to one student who was already enrolled in services. In this manner, the test of change was small, manageable, and conducted safely (i.e., would not increase the need for services or produce information that was not already known by the team). Rapid-cycle micro tests of change were used to test other screening activities, such as consent procedures, use of specific measures, methods for administering the tool, scoring and interpreting data, and use of the data to inform practice. Small tests of change led to larger tests of change that provided opportunities for the team to reflect on each of these components of screening regularly. As screening was brought to scale, adjustments to practice were identified that ensured effective, efficient screening could continue to occur despite the number of screened students increasing over time. For example, active consent was used at the outset of screening and worked effectively owing to the very small number of students screened. However, the team recognized that, as the number of students screened approached the size of a classroom or grade level, shifting to passive consent with opt-out procedures would be a necessary adjustment to practice to maintain an effective and efficient screening administration. Additional adjustments to practice included moving from paper-and-pencil to web-based screening, from having SMH staff administer the screening to leveraging the entire staff to administer the screening tool, and from using one measure at a time to combining multiple tools into a single assessment that yielded data across three distinct problem areas simultaneously.
Screening data afforded District C an opportunity to reflect on aggregate prevalence rates of multiple presenting problems to inform resource allocation, program design, and system evaluation. For example, 13.36% of students in grades 5–8 scored in the moderate to severe range for internalizing concerns (depression, anxiety, etc.) in the 2017–18 school year, data that have helped to inform discussions regarding resource allocation and the design of services that can support a greater number of students (e.g., group therapy). Baseline rates of anxiety and depression secured during the first large-scale screening administrations at District C High School indicated that approximately 20% of students scored in the moderate to severe range for depression, and approximately 22% of students scored in the moderate to severe range for anxiety. After five years of implementation, District C now reports a 6.4% decrease in students scoring in the moderate to severe range for depression and an 8.5% decrease in students scoring in the moderate to severe range for anxiety. As a result of lessons learned in the LC, District C continues to live in the cycle of innovation to adapt and grow their efforts over time, embracing the reality that schools and districts are dynamic, almost living things that do not lend themselves to fixed action plans or end points.
Discussion
School and district teams lack clear procedures and face many challenges with respect to mental health screening. This study provided a valuable opportunity to observe the process of developing and implementing screening procedures within several school districts from start to finish. Quantitative results indicated that the six participating districts, none of which initially had universal mental health screening practices in place, were all able to implement screening to some degree by the end of the 15-month LC. The proportion of the student body that districts were able to screen varied (range = 0.03% to 69% of students screened at follow-up), which could be related to district size, degree of support for screening during the “building a foundation” phase, pace of scale-up, success of PDSAs, and/or number of SMH team members available to participate in screening administration, scoring, and follow-up, among other potential factors. Our study was neither designed nor powered to detect factors associated with the proportion of students screened, but future research with more sites (either schools or districts) could examine this more closely. Findings do indicate that developing and implementing screening for some proportion of the student body is feasible in a 15-month period for districts of varying size, urbanicity, and family socioeconomic status.
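For clarity, the screening coverage figures reported here can be expressed as a simple percentage of enrollment; the worked numbers below are hypothetical and for illustration only:

\[
\text{Percent screened} = \frac{\text{Number of students screened}}{\text{Total number of students enrolled}} \times 100
\]

For example, a hypothetical district that screened 500 of 2,000 enrolled students would report 25% coverage; as noted in the Limitations, the enrollment denominator can vary from month to month.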
Qualitative results illustrated that participating districts conducted PDSAs to develop local procedures for seven screening activities: 1) build a foundation; 2) identify resources, logistics and administration processes; 3) select an appropriate screening tool; 4) determine consent and assent; 5) test data collection processes; 6) determine follow-up processes; and 7) test entire screening process/protocol. All six teams submitted a PDSA demonstrating work on activities 1 (build a foundation) and 2 (identify resources, logistics and administration processes). These were also the activities for which the highest numbers of PDSAs were submitted overall, indicating their relative importance for schools. Importantly, district teams did not become stalled in these initial activities; all teams were able to implement screening to some degree, indicating that the PDSA process helped them move through all relevant screening activities with stability and efficiency.
Using the PDSA cycle method was highly compatible with screening procedure development and implementation for several reasons. First, PDSAs allowed teams to test and observe changes in an existing SMH system, producing results that are locally valid and specific to context. Evidence-based practices developed and tested in locales outside the school or district will always require some level of adjustment or adaptation to local school community needs, strengths, and context; with PDSAs, practice-based evidence is generated within the local context from the outset. Second, PDSAs allowed teams to start small and scale up only when the PDSA evidence signaled readiness. This minimized the risk of implementing a screening procedure for an entire school without prior testing, which could result in unforeseen scenarios such as identifying more students than the SMH team can respond to in a timely manner, or parent and student concerns about consent or the purpose of screening. In our experience, schools and districts not engaged in screening are sometimes hesitant to start because they assume that they need to screen all students in a single school year (which can feel overwhelming), that screening will “open Pandora’s box” of student need that exceeds capacity, and/or that litigation will result from consent practices. Starting small using the PDSA process, especially when feedback from students, families, and school staff is incorporated, mitigates these potential pitfalls for schools.
Limitations
Our findings should be interpreted in the context of several limitations. First, these findings reflect only six school districts and are mainly descriptive of their observed accomplishments with mental health screening practices during participation in a learning collaborative. However, the screening activities these teams were able to accomplish are not readily available in the extant literature. Relatedly, the PDSAs presented in this study are only a small sample of potential tests of change within a given practice area. Other district teams who participated in the LC and submitted PDSAs on screening, but did not meet the inclusion criterion of reporting baseline and follow-up screening data, were excluded because our goal was to code PDSAs of teams who were successful in their screening efforts. As a result, these PDSAs may underrepresent the broad array of potential tests for a given screening activity. For example, the operational definition of “Determine Consent and Assent” includes cultural considerations and, in fact, two district teams in the LC obtained feedback on candidate screeners and communications from parents of students of color. Therefore, school practitioners working in teams are advised to treat the PDSAs in this paper as a set of examples from which to expand and develop their own tests of change to improve screening practices and procedures. There are also limitations with respect to the quantitative data reported by teams. These are uncontrolled district data used for quality improvement; as such, there were month-to-month variations in the total number of students enrolled (the denominator of the percentage of students screened), some months had missing data, and follow-up intervals varied across teams due to variations in reporting.
Impact of COVID-19 on Mental Health Screening in Schools
The COVID-19 pandemic has raised several important and timely questions related to school-based mental health screening. This study was conducted prior to the pandemic, so although we believe the success of district teams and the practices they used are applicable at any time, we acknowledge they were not specific to the COVID-19 context. However, whether schools are transitioning among learning models (e.g., hybrid, remote, in-person) or providing traditional in-person, full-time education, several lessons and opportunities make this an ideal time for district teams to embark on mental health screening practices. First, the pandemic has raised awareness of and decreased stigma related to mental health because of increased stress across the general population, presenting a critical “window” of time to start universal screening to identify concerns early.
Second, while outpatient mental health visits decreased during the pandemic, schools were recognized as a hub for mental health supports even during school closures and hybrid instruction (Centers for Medicare & Medicaid, 2020). This presents an opportunity for schools to become a more widely accepted venue for mental health supports moving forward, particularly when school-employed teams are strategically augmented by the capacity of community partnerships. COVID-19 has also precipitated more “well-being check-ins” in schools, including by front-line educators assessing the impact of COVID-19 on students and their families. This has further increased the education workforce’s capacity to conduct screening and has led to innovations in data collection and use. For example, Closegap (https://www.closegap.org/), a free, daily online well-being check-in for students, was adopted by tens of thousands of educators during COVID-19, and an adolescent version was developed in response to increased demand. Emerging data also reveal increased mental health concerns for children and adolescents related to the pandemic, including increased risk of anxiety and depression, making it essential that school systems use a data-driven approach to ensure students at risk receive support and to allocate often-limited mental health resources to the students with the highest needs (Jiao et al., 2020). In summary, universal mental health screening practices are needed now more than ever, and the results of this study provide actionable data for school systems to use to advance their screening practices.
Practice Implications and Future Directions
School and district teams may avoid universal mental health screening because they are often under the impression that this practice requires screening a large share of the student body at once, resulting in identified mental health need that would overwhelm their system. PDSAs offer school and district teams a method to gradually develop and implement screening in a way that supports systematic scale-up, with careful attention to stakeholder input and risk management by starting small. Our findings also illustrate that this process can result in relatively few disruptions to workflow and that the benefits to students and staff exceed the challenges. A gradual approach also allows time to engage community health and behavioral health partners to augment the mental health supports offered by schools, promoting shared responsibility for student mental health. We recommend further application of the PDSA method, as well as LCs, to support more widespread development of school-based mental health screening practices in schools and districts interested in this work. PDSAs also support a mindset and workflow of ongoing growth as districts continue to innovate, implement, and adapt over time.
Of course, this LC was conducted in the context of a partnership with the National Center for School Mental Health. However, the PDSA methods and data collection for screening can be applied by district teams working to advance screening within their own district; in fact, a learning collaborative of school buildings within a district would be a promising approach to ensure shared learning and innovation. There are numerous high-quality, free resources for schools that want to start and/or advance their screening efforts. First, The SHAPE System (www.theShapeSystem.com) is a public-access, web-based platform that offers schools, districts, and states a workspace, self-assessment tools, automatic reports, and targeted resources to support SMH quality improvement in screening and other areas. The SMH Quality Guide on Screening (NCSMH, 2020) synthesizes background information, best practices, action steps, examples, and resources from the field. The Screening and Assessment Library is a searchable library of free or low-cost screening and assessment measures related to SMH (NCSMH, 2021). Finally, the National School Mental Health Curriculum: Guidance and Best Practices for States, Districts, and Schools (MHTTC, 2019) includes a learning module on screening that has also been recorded and is available in the public domain (MHTTC, 2021).
For practical purposes, screening data serve not only to proactively identify individual students for early intervention (Tier 2) and treatment (Tier 3) services but, when aggregated, also yield an ongoing needs assessment that informs resource allocation, program design, and selection of evidence-based practices. Screening data collection, scoring, and triaging to respond to students who screen positive at different risk levels are likely most practical at the school level, whereas aggregation of screening results for SMH system needs assessment purposes could occur at the school or district level. Ultimately, school and district teams come in varied sizes and structures, so they should work together across levels to identify locally appropriate staffing and workflows for using screening data. District and school teams can use screening data to monitor trends in prevalence rates for specific mental health concerns, reflect on current programming, and make decisions about how best to staff schools and implement tiered interventions. If screening indicates a high prevalence of a specific need, schools should consider bolstering universal mental health promotion/prevention (Tier 1) and early intervention (Tier 2) supports. This may be particularly true during or after school closures, community-wide stressors, or adverse events such as COVID-19. For example, if 50% of students report anxiety, stress, depression, or adverse experiences, the most appropriate approach is likely universal supports for all students, with additional Tier 2 or 3 supports for individual students at the highest levels of risk on an as-needed basis. District and school teams should develop cut points for “high,” “moderate,” and “low” risk levels to inform how they match their follow-up procedures and level of support to student needs; to develop these cut points, teams can refer to severity categories based on norm references that are often available with standardized, validated screening tools. Even if the prevalence of student need or risk is high, early identification through screening is still of great value to inform how the SMH system can respond to promote resilience and wellbeing for all. In this way, screening opens the door to shifting service delivery models in schools away from an individual, problem-focused approach toward a more proactive, population-based approach grounded in a strong multitiered system (Dowdy et al., 2015).
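To make the triage and aggregation steps described above concrete, the sketch below shows one hypothetical way a team might map total scores from a standardized screener to locally defined risk tiers and then summarize the results as an aggregate needs assessment. It assumes a PHQ-9-style 0–27 total score with published severity categories (Kroenke et al., 2001); the low/moderate/high grouping, function names, and example scores are illustrative assumptions, not the participating districts' actual procedures.

```python
# Illustrative sketch only: maps self-report screening scores to hypothetical local
# risk tiers and aggregates results for a school-level needs assessment. Thresholds
# below follow published PHQ-9 severity categories (Kroenke et al., 2001); the
# low/moderate/high grouping is a hypothetical local decision, not a standard.

from collections import Counter
from typing import Dict, List

# Hypothetical local cut points derived from PHQ-9 severity categories.
RISK_TIERS = [
    (0, 9, "low"),         # minimal to mild symptoms -> Tier 1 universal supports
    (10, 14, "moderate"),  # moderate symptoms -> consider Tier 2 early intervention
    (15, 27, "high"),      # moderately severe to severe -> Tier 3 follow-up
]

def assign_risk_tier(score: int) -> str:
    """Return the local risk tier for one student's total screening score."""
    for low, high, tier in RISK_TIERS:
        if low <= score <= high:
            return tier
    raise ValueError(f"Score {score} outside expected 0-27 range")

def aggregate_needs_assessment(scores: List[int]) -> Dict[str, float]:
    """Summarize tier counts as percentages of students screened (aggregate use)."""
    if not scores:
        return {}
    tiers = Counter(assign_risk_tier(s) for s in scores)
    total = len(scores)
    return {tier: round(100 * count / total, 1) for tier, count in tiers.items()}

if __name__ == "__main__":
    example_scores = [2, 7, 11, 16, 4, 12, 21, 8]  # hypothetical student scores
    print(aggregate_needs_assessment(example_scores))
    # prints {'low': 50.0, 'moderate': 25.0, 'high': 25.0}
```

In practice, a summary like this would supplement, not replace, a team's established follow-up workflows (e.g., who reviews flagged students, how families are contacted) and any locally validated cut points.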
Acknowledgements:
This study was funded in part by a cooperative agreement between the School-Based Health Alliance and the Health Resources and Services Administration, Maternal and Child Health Bureau. We are also indebted to the participating school districts who dedicated their time and talents to this learning collaborative. Participating districts include Fairport Central School District (Fairport, NY), Methuen Public Schools (Methuen, MA), School Administrative Unit #7 (Colebrook, NH), Seneca Family of Agencies in partnership with Education for Change Public Schools (Oakland, CA) and Winona Area Public Schools (Winona, MN).
Footnotes
We have no known conflicts of interest to disclose.
References
- Algozzine B, Barrett S, Eber L, George H, Horner R, Lewis T, Putnam B, Swain-Bradway J, McIntosh K, & Sugai G (2019). School-wide PBIS tiered fidelity inventory. OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports. www.pbis.org
- Allen AN, Kilgus SP, Burns MK, & Hodgson C (2019). Surveillance of internalizing behaviors: A reliability and validity generalization study of universal screening evidence. School Mental Health, 11(2), 194–209. 10.1007/s12310-018-9290-3
- Andrews JH, Cho E, Tugendrajch SK, Marriott BR, & Hawley KM (2020). Evidence-based assessment tools for common mental health problems: A practical guide for school settings. Children & Schools, 42(1), 41–52. 10.1093/cs/cdz024
- Auerbach ER, Chafouleas SM, & Briesch AM (2019). State-level guidance on school-based screening for social, emotional, and behavioral risk: A follow-up study. School Mental Health, 11(1), 141–147. 10.1007/s12310-018-9278-z
- Becker-Haimes EM, Tabachnick AR, Last BS, Stewart RE, Hasan-Granier A, & Beidas RS (2020). Evidence base update for brief, free, and accessible youth mental health measures. Journal of Clinical Child and Adolescent Psychology, 49(1), 1–17. 10.1080/15374416.2019.1689824
- Bertone A, Moffa K, Wagle R, Fleury I, & Dowdy E (2019). Considerations for mental health screening with Latinx dual language learners. Contemporary School Psychology, 23(1), 20–30. 10.1007/s40688-018-0205-y
- Bohnenkamp J, Patel C, Connors E, Orenstein S, Ereshefsky S, Lever N, & Hoover S (2021). Testing strategies to promote effective, multidisciplinary team collaboration in school mental health [Manuscript submitted for publication]. National Center for School Mental Health, Division of Child and Adolescent Psychiatry, University of Maryland School of Medicine.
- Briesch AM, Chafouleas SM, & Chaffee RK (2018). Analysis of state-level guidance regarding school-based, universal screening for social, emotional, and behavioral risk. School Mental Health, 10(2), 147–162.
- Briesch AM, Chafouleas SM, Nissen K, & Long S (2020). A review of state-level procedural guidance for implementing multitiered systems of support for behavior (MTSS-B). Journal of Positive Behavior Interventions, 22(3), 131–144.
- Bruhn AL, Woods-Groves S, & Huddle S (2014). A preliminary investigation of emotional and behavioral screening practices in K–12 schools. Education and Treatment of Children, 37(4), 611–634.
- Centers for Disease Control and Prevention [CDC]. (2013, May 16). Mental health surveillance among children – United States, 2005–2011. Centers for Disease Control and Prevention. https://www.cdc.gov/mmwr/preview/mmwrhtml/su6202a1.htm?s_cid=su6202a1_w
- Centers for Medicare & Medicaid. (2020, September 23). CMS issues urgent call to action following drastic decline in care for children in Medicaid and Children’s Health Insurance Program Due to COVID-19 pandemic [Press release]. https://www.cms.gov/newsroom/press-releases/cms-issues-urgent-call-action-following-drastic-decline-care-children-medicaid-and-childrens-health
- Chafouleas SM, Kilgus SP, & Wallach N (2010). Ethical dilemmas in school-based behavioral screening. Assessment for Effective Intervention, 35(4), 245–252. 10.1177/1534508410379002
- Chorpita BF, Yim LM, Moffitt CE, Umemoto LA, & Francis SE (2000). Assessment of symptoms of DSM-IV anxiety and depression in children: A revised Child Anxiety and Depression Scale. Behaviour Research and Therapy, 38(8), 835–855.
- Connors EH, Stephan SH, Lever N, Ereshefsky S, Mosby A, & Bohnenkamp J (2016). A national initiative to advance school mental health performance measurement in the U.S. Advances in School Mental Health Promotion, 9(1), 50–69. 10.1080/1754730X.2015.1123639
- Connors EH, Smith-Millman M, Bohnenkamp JH, Carter T, Lever N, & Hoover SA (2020). Can we move the needle on school mental health quality through systematic quality improvement collaboratives? School Mental Health, 12(3), 478–492.
- Costello EJ, He JP, Sampson NA, Kessler RC, & Merikangas KR (2014). Services for adolescents with psychiatric disorders: 12-month data from the National Comorbidity Survey–Adolescent. Psychiatric Services, 65(3), 359–366.
- Dowdy E, Ritchey K, & Kamphaus RW (2010). School-based screening: A population-based approach to inform and monitor children’s mental health needs. School Mental Health, 2(4), 166–176.
- Dowdy E, Furlong M, Raines TC, Bovery B, Kauffman B, Kamphaus RW, Dever BV, Price M, & Murdock J (2015). Enhancing school-based mental health services with a preventive and promotive approach to universal screening for complete mental health. Journal of Educational and Psychological Consultation, 25(2–3), 178–197.
- Dvorsky MR, Girio-Herrera E, & Owens J (2014). School-based screening for mental health in early childhood. In Weist M, Lever N, Bradshaw C, & Owens J (Eds.), Handbook of School Mental Health (2nd ed., pp. 297–310). Springer, Boston. 10.1007/978-1-4614-7624-5_22
- Goodman R (1997). The strengths and difficulties questionnaire: A research note. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 38(5), 581–586. 10.1111/j.1469-7610.1997.tb01545.x
- Hamilton A (2020, September 20). Rapid qualitative analysis: Updates & developments [Seminar]. Veteran Affairs Health Services Research & Development Cyberseminars. https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/3846-notes.pdf
- Herman KC, Reinke WM, Thompson AM, Hawley K, Wallis K, Stormont M, & Peters C (2020). A public health approach to reducing the societal prevalence and burden of youth mental health problems: Introduction to the special issue. School Psychology Review, 50(1), 8–16.
- Hoover S, Lever N, Connors E, & Bohnenkamp J (2015). School Mental Health Quality Assessment – Version 1. University of Maryland, National Center for School Mental Health. https://www.legacy.theshapesystem.com/
- Hoover S, Lever N, Sachdev N, Bravo N, Schlitt J, Acosta Price O, Sheriff L, & Cashman J (2019). Advancing comprehensive school mental health: Guidance from the field. Baltimore, MD: National Center for School Mental Health, University of Maryland School of Medicine.
- Hoover S, & Bostic J (2020). Schools as a vital component of the child and adolescent mental health system. Psychiatric Services, 72(1), 37–48. 10.1176/appi.ps.201900575
- Humphrey N, & Wigelsworth M (2016). Making the case for universal school-based mental health screening. Emotional and Behavioural Difficulties, 21(1), 22–42.
- Institute for Healthcare Improvement [IHI]. (2003). The Breakthrough Series: IHI’s collaborative model for achieving breakthrough improvement. Cambridge, MA: Institute for Healthcare Improvement.
- Institute for Healthcare Improvement [IHI]. (2007). Plan-do-study-act worksheet. Institute for Healthcare Improvement [Website]. http://www.ihi.org/resources/Pages/Tools/PlanDoStudyActWorksheet.aspx
- Jeffrey J, Klomhaus A, Enenbach M, Lester P, & Krishna R (2020). Self-report rating scales to guide measurement-based care in child and adolescent psychiatry. Child and Adolescent Psychiatric Clinics, 29(4), 601–629.
- Jiao WY, Wang LN, Liu J, Fang SF, Jiao FY, Pettoello-Mantovani M, & Somekh E (2020). Behavioral and emotional disorders in children during the COVID-19 epidemic. The Journal of Pediatrics, 221, 264–266.
- Kilgus SP, Chafouleas SM, & Riley-Tillman TC (2013). Development and initial validation of the Social and Academic Behavior Risk Screener for elementary grades. School Psychology Quarterly, 28(3), 210.
- Kilgus SP, & von der Embse NP (2019). General model of service delivery for school-based interventions. In Radley KC & Dart EH (Eds.), Handbook of Behavioral Interventions in Schools: Multi-Tiered Systems of Support (pp. 106–133). Oxford University Press. 10.1093/med-psych/9780190843229.003.0007
- Kroenke K, Spitzer RL, & Williams JB (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606–613. 10.1046/j.1525-1497.2001.016009606.x
- Kuo E, Stoep AV, McCauley E, & Kernic MA (2009). Cost-effectiveness of a school-based emotional health screening program. Journal of School Health, 79(6), 277–285.
- Lane KL, Little MA, Casey AM, Lambert W, Wehby J, Weisenbach JL, & Phillips A (2009). A comparison of systematic screening tools for emotional and behavioral disorders. Journal of Emotional and Behavioral Disorders, 17(2), 93–105.
- Marcy HM, Chafouleas SM, Briesch AM, McCoach B, & Dineen JN (2018). School-based universal behavior screening: An analysis of state and district-level guidance regarding school-based, universal screening for social, emotional, and behavioral risk (Issue Brief No. 2018-1).
- McKay MM, & Bannon WMJ (2004). Engaging families in child mental health services. Child and Adolescent Psychiatric Clinics of North America, 13(4), 905–921.
- Mental Health Technology Transfer Center [MHTTC]. (2019). National school mental health guidance. Mental Health Technology Transfer Center Network. https://mhttcnetwork.org/centers/mhttc-network-coordinating-office/national-school-mental-health-implementation-guidance
- Mental Health Technology Transfer Center [MHTTC]. (2021). National school mental health best practices: Module 4 screening. Mental Health Technology Transfer Center Network. https://mhttcnetwork.org/centers/southeast-mhttc/product/national-school-mental-health-best-practices-module-4-screening
- Merikangas KR, He JP, Burstein M, Swanson SA, Avenevoli S, Cui L, Benjet C, Georgiades K, & Swendsen J (2010). Lifetime prevalence of mental disorders in US adolescents: Results from the national comorbidity survey replication–adolescent supplement (NCS-A). Journal of the American Academy of Child & Adolescent Psychiatry, 49(10), 980–989.
- Moffa K, Dowdy E, & Furlong MJ (2018). Does including school belonging measures enhance complete mental health screening in schools? In Pathways to Belonging (pp. 65–81). Brill Sense. 10.1163/9789004386969_005
- Moore SA, Widales-Benitez O, Carnazzo KW, Kim EK, Moffa K, & Dowdy E (2015). Conducting universal complete mental health screening via student self-report. Contemporary School Psychology, 19(4), 253–267.
- National Center for School Mental Health [NCSMH]. (2015, December 1). Mental health screening and assessment in schools: Part II [PowerPoint]. Quality CoIIN Learning Session. https://www.dropbox.com/s/3165yvbj7vm782n/SMH%20Sreening%20and%20Tiers%202_3%203.1.16.pptx?dl=0
- National Center for School Mental Health [NCSMH]. (2020, January 27). School mental health quality guide: Screening. NCSMH, University of Maryland School of Medicine. http://www.schoolmentalhealth.org/media/SOM/Microsites/NCSMH/Documents/Quality-Guides/Screening-1.27.20.pdf
- National Center for School Mental Health [NCSMH]. (2021). The School Health Assessment and Performance Evaluation (SHAPE) System. NCSMH, University of Maryland School of Medicine. http://theshapesystem.com/
- OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports (2021). Data-based decision making. Positive Behavioral Interventions & Supports. https://www.pbis.org/topics/data-based-decision-making
- Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, & Landsverk J (2011). Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 44–53.
- Pearrow MM, Amador A, & Dennery S (2016). Boston Public Schools’ Comprehensive Behavioral Health Model. Communiqué, 45(3), 1–20.
- Spitzer RL, Kroenke K, Williams JB, & Löwe B (2006). A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092–1097. 10.1001/archinte.166.10.1092
- Splett JW, Trainor KM, Raborn A, Halliday-Boykins CA, Garzona ME, Dongo MD, & Weist MD (2018). Comparison of universal mental health screening to students already receiving intervention in a multitiered system of support. Behavioral Disorders, 43(3), 344–356.
- Splett JW, Perales K, & Weist MD (2019). Interconnected Systems Framework – Implementation Inventory (ISF-II) [Unpublished instrument]. University of Florida.
- Substance Abuse and Mental Health Services Administration [SAMHSA]. (2019). Ready, set, go, review: Screening for behavioral health risk in schools. Rockville, MD: Office of the Chief Medical Officer, Substance Abuse and Mental Health Services Administration.
- Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, & Reed JE (2014). Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Quality & Safety, 23(4), 290–298.
- Taylor B, Henshall C, Kenyon S, Litchfield I, & Greenfield S (2018). Can rapid approaches to qualitative analysis deliver timely, valid findings to clinical leaders? A mixed methods study comparing rapid and thematic analysis. BMJ Open, 8(10), e019993.
- U.S. Department of Agriculture Economic Research Service (2020, December 10). 2013 Rural-urban continuum codes. https://www.ers.usda.gov/data-products/rural-urban-continuum-codes/
- Zenere FJ III, & Lazarus PJ (1997). The decline of youth suicidal behavior in an urban, multicultural public-school system following the introduction of a suicide prevention and intervention program. Suicide and Life-Threatening Behavior, 27(4), 387–403.
