British Journal of Educational Psychology. 2021 Dec 18;92(3):898–924. doi: 10.1111/bjep.12480

Social and emotional learning in primary schools: A review of the current state of evidence

Michael Wigelsworth 1, Lily Verity 1, Carla Mason 1, Pamela Qualter 1, Neil Humphrey 1
PMCID: PMC9540263  PMID: 34921555

Abstract

Background

There is a plethora of reviews that summarize much of the evidence base in Social and Emotional Learning (SEL). However, there are criticisms around variability of quality and focus of those reviews, meaning there is little strategic overview of the current state of the field. Further, there are rising concerns as to systemic gaps in the evidence base itself. An overview of reviews provides an opportunity for a comprehensive classification and corresponding critique of evidence.

Aims

The study sought to examine a‐priori concerns regarding (1) variation in the rigour and quality of the meta‐analytic and systematic evidence base, (2) comparatively less conclusive evidence for whole school approaches when compared to class‐based curricula, and (3) an assumed universality of effect (i.e., lack of examination of any differential gains for sub‐groups).

Method and results

A systematic search of the systematic and meta‐analytic literature identified a total of 33 reviews examining SEL interventions. Papers were subject to a quality assessment in order to examine methodological rigour and were collated in line with the study’s objectives.

Conclusions

We maintain the prevailing consensus that SEL programmes have an important role in education. However, variation in evidence quality remains high, and there appear to be ambiguities regarding what constitutes a whole school approach. The review also highlights a novel and concerning lack of data for differentiating any subgroup effects. The review concludes with recommended directions for future research, including the adoption of more complex trial architecture in evaluation alongside a move towards a wider plurality of methodological approaches.

Keywords: prevention, social and emotional learning, schools, systematic review

Background

Social and Emotional Learning (SEL) is the process by which children and young people develop and learn a broad range of social, emotional, and behavioural skills (Durlak, Domitrovich, Weissberg, & Gullotta, 2015), typically through the promotion of strength‐based skills such as developing emotional self‐awareness or learning social problem‐solving strategies. Early social and emotional competency is seen as a foundation for healthy development as it is associated with later life outcomes extending into adulthood, such as success in the labour market (Heckman & Kautz, 2012), decreased criminal violence and drug use (Durlak, Weissberg, & Pachan, 2010), and protection against the potential emergence of later mental health difficulties (Greenberg, Domitrovich, & Bumbarger, 2001). Schools are seen as a central nexus through which promotion of strength‐based skills for maintaining ‘good health’ under adverse circumstances are taught and learnt (Catalano, Berglund, Ryan, Lonczak, & Hawkins, 2004).1

There is now a wealth of meta‐analytic (e.g., Corcoran, Cheung, Kim, & Xie, 2018; Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011; Sklad, Diekstra, Ritter, & Ben, 2012; Taylor, Oberle, Durlak, & Weissberg, 2017; Wigelsworth et al., 2016) and other aggregative‐type reviews (e.g., Barry, Clarke, & Dowling, 2017; Cefai, Bartolo, Cavioni, & Downes, 2018; Clarke, Morreale, Field, Hussein, & Barry, 2015; Weare & Nind, 2011) suggesting that well‐designed and well‐implemented programmes are able to positively impact a range of personal, social, and health‐related outcomes both in the short‐ (e.g., reductions in emotional distress and conduct problems) and long‐ (e.g., reductions in adult antisocial and criminal behaviour) term (see Clarke et al., 2015; Gutman & Schoon, 2013). However, the current review landscape suffers from significant issues: (1) variations in the scope and quality of examined studies (partly a result of an ambiguous nebula of terminology), (2) concerns that there is an over‐generalization of the perceived effectiveness of SEL, particularly in respect of different approaches to provision (namely whole school vs. programme‐focused approaches), and (3) an assumed universality of gains, with little attention to differential responses to intervention. Together, these issues have led to systemic gaps in the knowledge base. Further, when selecting evidence to inform decision‐making, non‐specialists may lack the resources to critically appraise all available syntheses and may, instead, treat all evidence equally. Given that evidence used in SEL reviews varies considerably in reliability and scope, there are major challenges for those wishing to undertake evidence‐based decision‐making. This review will, thus, highlight knowledge gaps in the field of SEL, but also provide non‐specialists with an informed expert overview of those previous reviews. This paper now examines these issues in respect of the study objectives.

Conceptual coherence

The term ‘SEL’ has often been used as an umbrella term to represent a diverse collection of approaches concerned with developing a wide range of interpersonal and intrapersonal cognitive, social, and emotional skills. Given this breadth, aspects of SEL are present within many educational subfields, each of which uses its own lexicon, including, for instance, bullying prevention, civic and character education, conflict resolution, social skills training, life skills, and ‘soft’ or ‘non‐cognitive’ skills (Jones, Bailey, Brush, Nelson, & Barnes, 2016). Most recently, SEL has also been associated with mental health and well‐being (see, for instance, Clarke, 2020). Further complexity arises because many different terms used within and across these fields are actually inter‐related or even synonymous with one another (known as the ‘jangle fallacy’) or, conversely, some terms bear the same name but describe different constructs (known as the ‘jingle fallacy’; Marsh, 1994). For instance, SEL has been co‐opted and described within a ‘well‐being’ frame, despite the skills and definition being the same (Clarke, 2020). Conversely, the term ‘emotional intelligence’ is used by different authors to describe different, often divergent, theoretical constructs, several of which overlap with SEL (e.g., the ability to perceive and express emotion; Basu & Mermillod, 2011). Arguably the most ubiquitous model of SEL is CASEL’s five core competencies and related specific skills, as shown in Table 1. It is important to note that the CASEL model is not a developmental theory but, instead, a broad framework to be interpreted in respect of practice in education. Adopting this model as the basis of this review ensures that skills and competencies that form part of SEL are captured in the review and discussion, even where they do not share a consistent terminology.

Table 1.

Core competencies and associated specific skills

Broad construct Core competency Specific skills
Intra‐personal skills Self‐awareness Identifying emotions; Accurate self‐perception; Recognizing strengths; Self‐confidence; Self‐efficacy
Intra‐personal skills Self‐management Impulse control; Stress management; Self‐discipline; Self‐motivation; Goal setting; Organizational skills
Inter‐personal skills Social awareness Perspective taking; Empathy/sympathy; Appreciating diversity; Respect for others
Inter‐personal skills Relationship skills Communication; Social engagement; Relationship building; Teamwork
Responsible decision‐making Responsible decision‐making Identifying problems; Analysing solutions; Solving problems; Evaluating; Reflecting; Ethical responsibility

Quality of evidence

There has been significant variation in both the quality of evaluations and the level of scrutiny of intervention programmes, with some displaying significant histories of positive effects (e.g., Promoting Alternative Thinking Strategies) and others having next to nothing in terms of summative and/or independent evaluation (e.g., Character First). Even where programmes have a comparatively rigorous evidence base, some evaluation studies still fail to address the real‐world complexities associated with programme implementation, for instance, failing to assess the nature of existing provision alongside the needs and capacities of individual schools (Merrell & Gueldner, 2010). Concerns regarding quality of evidence extend into summative reviews themselves. For instance, Clarke et al. (2015) note significant limitations in how studies included in the reviews were conducted and how they were reported (for studies conducted within the United Kingdom). Criticisms include reviews’ focus on evaluation studies that lack robust and powerful research designs (e.g., randomized controlled trial methodology) and other markers of trial quality (such as valid and reliable measures), and a lack of assessment of the equity of a programme’s impact for diverse groups. As a result, the current field is populated with a number of different reviews that vary both in the extent to which SEL is covered and in the quality and rigour of included studies. This impacts the quality and focus of the reviews themselves. A logical and appropriate next step is to conduct an overview of reviews, in which the findings of separate reviews are examined and appraised in terms of their recommendations for the development of the field.
Our review includes, but also moves beyond, summative evaluation of evidence quality because an overview of reviews allows a strategic commentary on the systemic absence of evidence in the SEL field – key points that are emerging but are, as yet, omitted from the summative literature. Pertinent examples are discussed below.

Integration within school context

SEL programmes are typically seen to be embedded in the context and wider environment of the school setting through the use of multiple and co‐ordinated strategies, including activities across curriculum teaching, emphasis on wider school ethos and environment, and family and community engagement (Dobia et al., 2020; Goldberg, Sklad, & Elfrink, 2018; Oberle, Domitrovich, Meyers, & Weissberg, 2016). The World Health Organization defines a whole school approach as one that takes the entire school community as the unit of change (WHO, 1998), highlighting its holistic and comprehensive nature. However, such a definition does not sufficiently describe the components within the unit of change in terms of their specificity, frequency, or relative importance (e.g., in what respect should parents be included, how often, and how does this compare with other components, such as the frequency of whole‐school SEL‐based assemblies?). Whole school approaches have been described in juxtaposition to programme‐based approaches, noting the increased complexity of taking a multiple‐component approach (Dobia et al., 2020); it is firmly theorized that ‘whole‐school’ SEL is a vital component of effective provision, with authors recommending that approach over an otherwise reductionist focus on taught curricula (Goldberg et al., 2018), in which programme‐based features are part, but not the totality, of the whole school approach. However, summative evaluations typically do not focus on these factors, instead being heavily dominated by summaries of programme‐based curricula rather than wider approaches. There is, therefore, a key opportunity to summarize the current literature in this regard, specifically to examine outcome data for whole school approaches and compare them with programme‐based approaches.

Differential gains

Meta‐analytic data have typically reported universal or ‘main effects’ of interventions, with comparatively little examination of sub‐groups or differential response to intervention in any extensive or rigorous manner (Domitrovich, Durlak, Staley, & Weissberg, 2017). In instances where subgroup analyses are present, they are typically conducted post‐hoc with demographic factors (e.g., socio‐economic status) and lack clearly defined a‐priori justification in relation to programme logic (e.g., a focus on self‐regulation may be particularly beneficial to those with attentional difficulties [Farrell, Henry, & Bettencourt, 2013]). The absence of compelling, theoretically derived subgroup analyses is particularly problematic because there are competing hypotheses regarding differential response to intervention, making it difficult to assess whether an intervention has been successful within identified subgroups. For instance, the accumulated advantages (or ‘Matthew effect’) hypothesis (Walberg & Tsai, 1983) postulates that children starting ‘strong’ are likely to benefit more from intervention because they are more capable of building on initial skills. Conversely, the compensatory hypothesis (McClelland, Tominey, Schmitt, & Duncan, 2017) suggests that children without optimal access to resources and/or who are subject to risk factors (such as low socio‐economic status) known to relate to social, emotional, behavioural, and academic problems are likely to benefit more from early intervention (Evans & English, 2002), as they have more room for improvement. The two hypotheses present very different pictures of what successful outcomes would look like. There is value, therefore, in examining what the current literature indicates in this regard.

Objectives

Consumers of SEL research are faced with a number of variable reviews, making it difficult to assess their relevance, quality, or scope. Added to this, there are a‐priori concerns about potential overgeneralization of the effectiveness of SEL provision and about gaps in the current body of knowledge in key areas of research. Accordingly, an overview of reviews is a logical next step for the field, allowing a summary and appraisal of systematic and meta‐analytic evidence pertaining to SEL provision, utilizing a ‘gold standard’ approach (systematic review of reviews; Evans, 2003). Beyond the usefulness of bringing together a summary of evidence in one place, this study sought to offer a critical appraisal of the current position of the evidence base in relation to these a‐priori concerns, namely (1) the overall rigour and quality of the meta‐analytic and systematic evidence base, (2) the balance of literature supporting whole school processes compared to an emphasis on class‐based curricula, and (3) evidence for differential gains in support of either the accumulated advantages or compensatory hypotheses.

Method

This study followed the example set by Weare and Nind’s (2011) review of reviews, given the similar topic area and approach (Weare and Nind examined the quality of evidence in relation to school‐based mental health promotion). This included the search strategy, study selection, quality and rigour assessment, and description of study characteristics (see below).

Search strategy

The search strategy aimed to identify both published and unpublished reviews. All review types (e.g., aggregative, configurative, and meta‐analytic) were accepted, provided they reported on some form of synthesis drawn from primary studies. Following discussions between the project funders and review team, a list of key words was agreed. These attempted to balance sensitivity (finding all relevant studies) and specificity (finding only relevant studies). This is particularly important given the issues with conceptual coherence noted in the literature review. Where possible, thesaurus tools were used in order to increase the sensitivity of the searches; otherwise, the search strategy used a systematic combination of free‐text words and phrases referring to the agreed key word list. Key search terms were:

  • (social and emotional OR social OR emotion* OR well‐being OR mental health) AND (program* OR promotion OR initiative OR pupil OR student* OR elementary* OR Primary* school OR curriculum).

Of particular note was the decision to demarcate the terms ‘social’ and ‘emotional’, in addition to searching for these as a combined term. This meant reviews included SEL either as an explicit focus (e.g., Durlak et al., 2011), as part of wider programme content (e.g., school‐based mental health interventions [e.g., Paulus, Ohmann, & Popow, 2016]), or with aspects of SEL as a particular focus (e.g., interventions to promote self‐regulation [Pandey et al., 2018]). Scoping was conducted for reviews published or made available between 31 December 19952 and 1 June 2018.
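As a rough illustration of how the key terms combine, the Boolean string can be composed programmatically, for example when adapting it to a database query interface. This is a minimal sketch and not part of the paper's reported procedure; the term lists are transcribed from the key search terms quoted above (with plain ASCII hyphens).

```python
# Illustrative only: compose the review's Boolean search string from its
# two OR-groups, joined by AND (one group of SEL concepts, one of contexts).
concept_terms = [
    "social and emotional", "social", "emotion*",
    "well-being", "mental health",
]
context_terms = [
    "program*", "promotion", "initiative", "pupil", "student*",
    "elementary*", "Primary* school", "curriculum",
]

def or_group(terms):
    """Join terms into a parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join([or_group(concept_terms), or_group(context_terms)])
print(query)
```

In practice each database (EMBASE, ERIC, MEDLINE, PsycINFO) has its own syntax for wildcards and field tags, so a string like this would be adapted per platform rather than pasted verbatim.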

Keywords were adapted and deployed within the following bibliographic databases: EMBASE, ERIC, MEDLINE, PsycINFO. In addition, Google Scholar was utilized in an attempt to identify any grey literature (e.g., Grant et al., 2017). Subsequent reference harvesting from identified sources was also used.

We expected the strategies described above to identify the relevant literature; however, time and resources allowed for additional searching to make the approach more exhaustive. Therefore, the following additional searches were carried out:

  1. The authors’ personal networks were utilized by contacting expert organizations and individuals (i.e., email contact with CASEL, the European Network for Social and Emotional Competence, and individual researchers), explaining the purpose of the study and requesting relevant sources.

  2. Targeted searches of the following specific peer reviewed journals were conducted – Child Development; Prevention Science.

  3. Searches of websites of relevant organizations (i.e., Early Intervention Foundation, Education Endowment Foundation, Child Trends, The Wallace Foundation).

Study selection

As noted above, reviews selected for inclusion included primary school‐based interventions that included SEL, either as an explicit focus or as part of wider programme content (e.g., school‐based mental health interventions). Reviews that focused on particular aspects of SEL were also included. This ensured that reviews adopting allied nomenclature (as discussed above) were represented.

This search and selection procedure is shown in Figure 1.

Figure 1.

Figure 1

Search and selection procedure.

Rigour and quality

A summary of identified sources is shown in Table 2. Identified texts were appraised and subject to summary scoring in order to assess the overall strength of the evidence base (Table 3). Scoring criteria were adapted from Weare and Nind (2011) and refined in conversation with the study funders. Scores (indicated by a star rating from 0 to 5) were awarded on the basis of the source meeting the following criteria: (1) whether the review provided a focused research question; (2) whether reviews had explicitly stated inclusion and/or exclusion criteria; (3) whether reviews presented a transparent and appropriate search strategy and data analysis plan; (4) whether reviews had also assessed the quality of included literature; and (5) whether results were presented to allow a quantitative and inferential assessment of impact. Scores were awarded by members of the research team. Following a calibration procedure regarding criteria for scoring, a random selection of 50% of the studies was independently examined by two authors in order to ensure accuracy in scoring. Total agreement was achieved.
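The star scoring just described can be sketched as a simple tally. This is an assumed mapping (one star per criterion met); the paper does not state the arithmetic verbatim, and the criterion names below are shorthand labels, not the paper's wording.

```python
# Hedged sketch: five yes/no quality criteria -> 0-5 star rating,
# assuming one star per criterion met.
CRITERIA = (
    "focused_question",
    "explicit_inclusion_exclusion",   # operationalized in Table 3 as
                                      # restriction to controlled trials
    "transparent_search_and_synthesis",
    "quality_of_studies_assessed",
    "quantitative_inferential_results",
)

def star_rating(assessment: dict) -> str:
    """Return a 0-5 star string from a dict of criterion -> bool."""
    stars = sum(bool(assessment.get(c, False)) for c in CRITERIA)
    return "*" * stars

# e.g., a review meeting all five criteria (as Durlak et al., 2011,
# does in Table 3) would receive five stars:
print(star_rating({c: True for c in CRITERIA}))
```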

Table 2.

Summary of identified sources

Author (year) Review type Number of studies included Aim SEL components covered Age range Level Evidence of differentiation Clinical samples Countries included
Adi et al. (2007) Systematic Review 31 To support the development of NICE guidance on promoting the mental well‐being of children in primary education Mental well‐being (resilience, confidence, good social relationships) 4–11 years Whole school/universal approaches and targeted No No The United States, Canada, and Germany
Barry et al. (2017) Literature Review/Case Study N/A To provide a critical perspective on the international evidence promoting young people’s social and emotional well‐being in schools Social and emotional well‐being 4–18 years Universal No No Not specified
Barry & Dowling (2015) Systematic Review 26 To synthesize findings of evidence reviews of the effectiveness of psychosocial skills development programmes for children and young people All components 4–25 years Parenting, pre‐school, school and community‐based Age, gender, ethnicity, socio‐economic background, and level of vulnerability Yes The United States, Asia, Europe
CASEL (2012) Systematic Review (SELect guide) 23 (programmes) To provide a systematic framework for evaluating the quality of classroom‐based SEL programmes All components Pre‐school and elementary school aged children Universal SEL No No Not specified
Catalano et al. (2004) Systematic Review 77 To summarize the evaluations of youth development programmes Social competence, self‐efficacy, prosocial behaviour 6–20 years Community, family, and school No No The United States
Cefai et al. (2018) Systematic Review 13 To make recommendations on the basis of international research, EU policy, and current practices in Member States for the integration of SEL education as a core component of curricula across the EU All components 4–18 years Universal school‐based social and emotional education No No Europe, the United States, and other. Particular focus on European countries
Clarke et al. (2015) Systematic Review 94 To determine the evidence on the effectiveness of SEL programmes available in the United Kingdom All components 4–20 years In school and out of school, universal, indicated Yes No Europe and the United States
Corcoran et al. (2018) Systematic review & meta‐analysis 40 To examine the effects of school‐based SEL interventions on reading, mathematics and science achievement All components 4–18 years School‐based Socioeconomic status No Not specified
Das et al. (2016) Meta‐Systematic review 38 To examine interventions for adolescent mental health Self‐regulation 15–24 years School‐based No Yes Not specified
Dray et al. (2017) Systematic Review & Meta‐Analysis 57 To examine the effect of universal, school‐based resilience‐focused interventions on mental health problems in children and adolescents Social Skills 4–18 years Universal school‐based Gender No 16 countries, largest number conducted in Australia (n = 18) and the United States (n = 14)
Durlak et al. (2011) Meta‐analysis 213 To examine the impact of school‐based universal interventions for enhancing SEL All components 4–18 years Universal school‐based No No Not specified
Farahmand et al. (2010) Meta‐analysis 23 To examine the effectiveness of school‐based mental health and behavioural programmes for low‐income, urban youth Emotional or social functioning 6–18 years School‐based universal and selected Low income, urban, ethnicity No The United States
Franklin et al. (2017) Systematic Review & Meta‐Analysis 24 Effectiveness of psychosocial interventions, delivered by teachers, on internalizing and externalizing outcomes Social Skills 4–18 years School‐based delivered by teachers Age, gender and race No Not specified
Garrard and Lipsey (2007) Meta‐analysis 36 To examine the impact of conflict resolution education programmes on anti‐social behaviour Relationship skills 4–18 years Universal Age No The United States
Goldberg et al. (2018) Meta‐analysis 45 To determine the effectiveness of SEL interventions adopting a whole‐school approach All components Whole school No No Not specified
Grant et al. (2017) Review of evidence – summaries 60 (programmes) To summarize the existing evidence for SEL interventions All components 4–18 years Not specified No No Not specified
Gutman and Schoon (2013) Literature Review Not specified To summarize the existing evidence on how non‐cognitive skills can be defined and measured and the role of interventions that aim to improve non‐cognitive skills Self‐perception, motivation, perseverance, self‐control, social competencies, resilience, and coping 4–16 years Universal, selected school‐based, community‐based, and outdoors No Yes Not specified
Horowitz and Garber (2006) Meta‐analysis 30 To assess the efficacy of studies aimed at preventing depressive symptoms in children and adolescents Problem‐solving, social skills, stress‐management, and emotion‐focused coping 4–16 years School‐based delivered by teachers Sex and Age Yes Not specified
January et al. (2011) Meta‐analysis 28 To assess the effectiveness of classroom‐wide interventions for the improvement of social skills Social Skills 4–18 years School‐based Socioeconomic status No Not specified
Korpershoek et al. (2016) Meta‐analysis 54 To assess which classroom management strategies and programmes enhanced students’ academic, behavioural, social–emotional, and motivational outcomes in primary education All components 4–12 years Teacher focused Sex, Age, Socioeconomic status, and student behaviour (e.g., regular or behaviour problems) No The United States and Other.
Maggin and Johnson (2014) Meta‐analysis 17 To evaluate the overall effectiveness of the research underpinning the FRIENDS programme Self‐regulation 4–18 years Class‐based Risk status Pre‐clinical (‘at risk’) International
O’Conner et al. (2017) Systematic Review of reviews 83 To examine SEL programmes in terms of implementation strategies and state and district policies, teacher and classroom strategies, and the outcomes among different student populations and settings All components 3–8 years School‐based. Socioeconomic status, sex, race/ethnic minorities, English learner students, students in urban schools, students in rural schools No The United States
Oliver et al. (2011) Systematic Review 24 To examine the effects of teachers’ universal classroom management practices in reducing disruptive, aggressive, and inappropriate behaviours No specific component covered 4–18 years Universal No No The United States and the Netherlands
Pandey et al. (2018) Systematic Review and Meta‐analysis 49 To examine the effectiveness of universal self‐regulation based interventions to improve self‐regulation and affect health and social outcomes in children and adolescents Self‐regulation 0–19 years Curriculum, physical activity‐based, mindfulness/yoga, family‐based Age and socioeconomic status No The United States, Canada, Australia, Switzerland, the United Kingdom, Italy, Belgium, Spain, China, Chile, and Ireland
Paulus et al. (2016) Systematic Review 39 To improve knowledge about school‐based interventions, to specify effective programmes, and discuss prerequisites of the implementation process Emotional and behavioural problems 2–17 years Universal, selective and indicated No Yes The United States, Australia, Europe, the United Kingdom, and Puerto Rico
Payton et al. (2008) Systematic Review (of 3 reviews) 317 total (180 universal; 80 indicated; 57 after‐school) To summarize the primary findings and implications of three large‐scale reviews of research evaluating the impact of SEL programmes for school children All components 4–14 years Universal, indicated and after‐school No No The United States and Other
Rones and Hoagwood (2000) Systematic Review 47 To provide a review of the evidence base for mental health services delivered in schools Emotional, behavioural and social functioning Children and adolescents Universal, selected, and indicated No No Not specified
Sancassiani et al. (2015) Systematic Review 22 To describe the main features and to establish the effectiveness of universal school‐based RCTs for children and youth Social and emotional skills 0–17 years Universal school‐based (with focus on whole‐school approach) No No The United States, Europe, Australia, Canada, Mexico, South Africa, Hong Kong, Taiwan, and Thailand
Sklad et al. (2012) Meta‐analysis 75 To examine whether teaching SEL to foster social‐emotional development can help schools extend their role beyond the transfer of knowledge All components 4–16 years Universal school‐based No No North America, Europe, Canada, and Other
Taylor et al. (2017) Meta‐analysis 82 To examine follow‐up effects of SEL programmes All components 4–18 years Universal school‐based Race, SES, school location No The United States versus international
The Center for Health and Health Care in Schools (2014) Annotated bibliography, Systematic Review 12 To identify recent empirical studies and reviews linking behavioural health promotion and prevention interventions with student academic outcomes All components Universal school‐based (behavioural or health) No No The United States
Weare and Nind (2011) Systematic Review of reviews 52 To clarify the evidence‐base for mental health promotion and problem prevention within schools All components 4–19 years Universal, targeted, indicated, school‐based and class‐room based No No The United States, the United Kingdom, the Netherlands, Germany, Canada, Australia, New Zealand, Norway, and Belgium
White (2017) Systematic Review 50 To examine the effectiveness of health and well‐being interventions in a school setting to potentially reduce inequalities in educational outcomes All components School‐aged children and/or young people School‐based No No The United Kingdom and Ireland
Wigelsworth et al. (2016) Meta‐analysis 89 To examine the potential effects of trial stage, developer involvement, and international transferability on universal social and emotional learning programme outcomes All components 4–18 years Universal school‐based No No Not specified

Table 3.

Assessment of the quality of evidence

Authors (year) Clearly focused question? Only control trials (RCTs/CCTs) included? Transparency‐appropriate search strategy and substantial meta‐analysis/data synthesis? Quality of studies assessed and used to guide results? Results presented to allow quantitative and inferential assessment of impact? Summary of quality markers
Adi et al. (2007) Yes Yes Yes Yes Yes *****
Barry et al. (2017) Yes No‐ review of current practice and pupil and professional feedback and opinion on practice No‐ states where the sources were found for example existing programmes used in schools but not the strategy used to identify them No No *
Barry & Dowling (2015) Yes No Yes No No **
CASEL (2012) Yes No No‐ states that current successful SEL programs are used but not how they are identified No No *
Catalano et al. (2004) Yes Yes Yes No Yes ****
Cefai et al. (2018) Yes No‐ policy documents (inc EU and international) and international literature included Yes Yes No ***
Clarke et al. (2015) Yes No Yes No No **
Corcoran et al. (2018) Yes No Yes No Yes ***
Das et al. (2016) Yes No Yes No Yes ***
Dray et al. (2017) Yes Yes Yes No Yes ****
Durlak et al. (2011) Yes Yes Yes Yes Yes *****
Farahmand et al. (2010) Yes No Yes No Yes ***
Franklin et al. (2017) Yes Yes Yes No Yes ****
Goldberg et al. (2018) Yes No Yes Yes‐Quality Assessment Tool for Quantitative Studies used Yes ****
Grant et al. (2017) No No No No No
Gutman and Schoon (2013) Yes No Yes No Yes‐ effect sizes from meta‐analysis and experimental studies‐ not from original research ***
Horowitz and Garber (2006) Yes Yes Yes Yes No ****
January et al. (2011) Yes No Yes Yes Yes ****
Korpershoek et al. (2016) Yes No Yes Yes Yes ****
Maggin and Johnson (2014) Yes Yes Yes No Yes ****
O’Conner et al. (2017) Yes No No‐ Researchers note that it did not meet the aims of the research to do an exhaustive search of literature No No‐ Narrative results only *
Oliver (2011) Yes No Yes No‐ state quality and reliability was screened but does not state screening criteria Yes ***
Pandey et al. (2018) Yes Yes Yes Yes‐ Quality assessment was conducted using the Effective Public Health Practice Project Quality Assessment Tool for Quantitative Studies No ****
Paulus et al. (2016) Yes No Yes No No **
Payton et al. (2008) Yes Yes Yes No Yes ****
Rones and Hoagwood (2000) Yes No Yes No No **
Sancassiani et al. (2015) Yes Yes Yes No No‐ descriptive only ***
Sklad et al. (2012) Yes No Yes Yes Yes ****
Taylor et al. (2017) Yes Yes Yes No Yes ****
Center for Health and Health Care in Schools (2014) Yes No No No No *
Weare and Nind (2011) Yes No Yes Yes No‐ Results presented quantitatively‐ focused on transferability ***
White (2017) Yes No Yes No No **
Wigelsworth et al. (2016) Yes No Yes No Yes ***

Scores were used to inform the conclusions arising from the discussion of evidence, whereby sources judged to be of high quality were drawn upon first in forming key conclusions (NB: no formal cut‐off was used, as different rating criteria were appropriate for different circumstances; e.g., quantitative assessment of impact is not always possible and can belie a qualitative understanding of the literature). Where evidence is drawn directly from the evidence review, scores (as per Table 3) are presented in order to make clear the relative weight of evidence underpinning subsequent conclusions.

Results

After filtering of false positives and application of inclusion/exclusion criteria as noted above (e.g., not a primary study), a final 33 reviews were identified. Details are shown in Table 2.

Findings and discussion

This study is the first ‘review of reviews’ to examine evidence supporting SEL. Results maintain the prevailing consensus that SEL programmes have an important role in education. However, the purpose of the current review was to consider concerns related to (1) variation in the rigour and quality of the evidence base, (2) comparatively less conclusive evidence for whole‐school approaches when compared to class‐based curricula, and (3) an assumed universality of effect (i.e., lack of examination of any differential gains for sub‐groups). These themes are now examined in turn.

Summary of evidence quality

Table 3 shows that just under half of the reviews (14/33) were of high quality (scoring 4 or 5 stars); the same number (14/33) were of medium quality (scoring 2 or 3 stars), and the remaining studies scored 1 star (4/33); one study (Grant et al., 2017) provided a descriptive narrative of programmes only and was not awarded any stars. The most common factors limiting a full star rating were (1) the failure to include only randomized controlled trials (20/33 studies did not limit their findings to controlled trials), and (2) a lack of assessment of trial quality (23/33 studies did not assess the quality of source studies). Table 2 shows that a relatively small number of reviews reported subgroup effects, with less than half (14/33) considering evidence of differentiation.

Whole school approaches

Five reviews provided empirical evidence for ‘whole‐school’ processes. Evidence was considered if there was discussion of coordinated activities in which SEL practice was continually and consistently embedded into the school across school years and contexts (e.g., in class, during break, and home‐school relations; Goldberg et al., 2018****). Co‐ordinated approaches have been variously described as ‘whole school’, ‘school‐level’, and ‘multi‐component’. Overall, the evidence for whole school approaches is mixed. Adi et al. (2007)***** present comparatively favourable evidence for whole‐school, multi‐component programmes, which include significant teacher training and development and support for parenting, in comparison to ‘curriculum only’ approaches. This is particularly true for mental‐health‐based outcomes: Das et al. (2016)*** noted that community‐based approaches (e.g., activities occurring outside of the ‘school day’, such as extra‐curricular clubs) were positively associated with behavioural changes including self‐confidence and self‐esteem (alongside school‐based approaches). However, this finding is not unanimous, with Durlak et al. (2011*****) noting that multi‐component programmes were no more effective than single‐component programmes, and Catalano et al. (2004****) showing no impact of whole school approaches on outcomes more closely aligned with core SEL skills.
Very recent and comparatively robust data from a meta‐analysis of whole school approaches (Goldberg et al., 2018****) show that although, on average, such programmes do produce positive effects, the average effect size for improvements in social and emotional skills (d = 0.22) is at most half of that reported in prior meta‐analyses (Corcoran, Cheung, Kim, & Xie, 2018***; Durlak et al., 2011; Sklad et al., 2012; Wigelsworth et al., 2016***) that do not differentiate by whole‐school components (i.e., analyses including programmes that are heavily orientated towards or exclusively made up of class‐based taught curricula).

Several of the authors cited above provide a framework for ‘school‐wide’ practices, noting the inclusion of both family and community partnerships and school climate policies and practices as distinct from classroom curriculum. However, this review highlights a number of current limitations in the field, namely the difficulty in capturing differences in which components are and are not implemented, and the complexity of how various components might interact. Consequently, we make the case that the current term ‘whole school’ is not sufficiently disambiguated. For instance, although the above literature treats whole school as a single grouping, authors list a myriad of characteristics associated with whole school approaches (e.g., extra‐curricular activities, daily practice of skills, staff training, co‐ordination with outside agencies, and school policy [Goldberg et al., 2018]), and there is significant variation in the extent to which each whole school element may be practised. However, as yet, studies do not provide sufficient clarity as to what extent each of these components was actually implemented. As comprehensive evaluations of whole school approaches to SEL are comparatively rare (Barry et al., 2017*), there remains a particular paucity of evidence regarding the usefulness and importance of specific multi‐component elements in the field, especially in relation to how they may support or interact with other components. The significance of this finding should not be understated given that a whole school approach is seen as essential to SEL provision (Cefai et al., 2018***; Oberle et al., 2016). One solution is greater specification in capturing these components when conducting studies, though calls for greater emphasis on examining implementation are not new to the field (e.g., Lendrum & Humphrey, 2012).

Classroom and curricula approaches

Nine reviews help inform evidence for this section. Evaluation of specific curriculum packages arguably dominates a significant part of the evaluation landscape. For example, conclusions as to the overall effectiveness of SEL are (mostly) drawn from aggregated pupil data stemming from several large‐scale, robust, and recent meta‐analyses (e.g., Durlak et al., 2011; Sklad et al., 2012; Wigelsworth et al., 2016***). Those analyses report effect sizes between 0.21 and 0.70, demonstrating that, on average, programmes can be effective in promoting SEL skills.

It is generally agreed that an SEL intervention (as a minimum) is identified by an explicit curriculum or set of practices, almost exclusively presented in the form of a manual, typically with a teacher’s handbook and accompanying and/or stimulus materials for use with a whole class. There is typically a lesson structure (indicating a sequenced and regular progression through the programme material), with guidance to teachers (varying from broad guidance to heavily scripted) on delivering SEL content within explicit curriculum time. The materials themselves are likely to include any number of pedagogical elements such as role play, cognitive modelling, self‐talk, storytelling, written worksheets, teacher instruction, and multi‐media stimulus (e.g., videos), dependent on the specific programme. SEL activities may or may not be accompanied by additional activities pertaining to generalization outside of the ‘SEL lesson’ and ‘cross‐linking’ to ‘whole‐school’ or multi‐component elements (discussed above) (Pandey et al., 2018****).

The above description begins to reflect a principal and significant critique of the evidence arising as a result of this review: that of heterogeneity. It is evident from the above description of what typically constitutes an SEL programme that there is potential for a great deal of variation in the specifics of how SEL can be designed and delivered. For instance, there is variation in the length and intensity of a programme, the focus or relative importance of a particular sub‐domain of SEL, and whether or not skills are rehearsed across multiple school years. Conclusions drawn from meta‐analytical techniques are compromised by heterogeneity because the singular premise of meta‐analysis is to compare ‘like with like’. There have been some attempts to identify a small number of critical differences that clearly delineate the SEL literature. For instance, Durlak et al. (2011*****) note that programmes fulfilling ‘SAFE’ (Sequenced, Active, Focused, and Explicit) criteria are comparatively more effective than those not able to fulfil those criteria, and Wigelsworth et al. (2016***) consider (amongst other things) the cultural transferability of programmes (e.g., being implemented outside their country of origin). However, it is extremely difficult to account for multiple variations between units of analysis, as this compromises results, an issue which is acknowledged by the meta‐analyses themselves.

One key element as yet unconsidered in the existing literature and highlighted by the current review is the extent of coverage of CASEL’s core domains within specific programmes. Coverage ranges from ‘holistic’ approaches, with programmes addressing all five core competences denoted in CASEL’s model (see Table 1), to very specific training and instruction in a much smaller range of target areas (McClelland et al., 2017). For instance, positive gains in self‐regulation have been shown in as little as 10 weeks when programmes are focused on addressing self‐regulation (e.g., FRIENDS, a universal 10‐week intervention based on Cognitive Behavioural Therapy (CBT) designed to address and prevent worry, anxiety, and depression through self‐contained activities focusing on self‐regulation, such as kinaesthetic exercises for controlling physical symptoms of anxiety). In this instance, wider gains such as better relationships are hypothesized as a product of self‐regulation strategies, rather than dedicating additional intervention time to coverage of this domain. However, this raises the question of who is best placed to deliver interventions. Increasingly, mental health is an explicit topic considered within prevention programmes (Rones & Hoagwood, 2000**), some of which arguably require specialist knowledge in their delivery (e.g., CBT). As a consequence, there is concern that teachers may not be optimally equipped to deliver intervention content where specialist knowledge is required, as meta‐analytic evidence suggests that for teacher delivery of mental health interventions, effects are typically small. For instance, Franklin et al. (2017****) found small significant reductions in students’ internalizing outcomes (e.g., worry; d = 0.13) and no statistically significant effect for externalizing outcomes (e.g., behaviour).
In relation to concerns arising from whole school approaches, examination of the elements underpinning both the pedagogical approaches and the content covered by programmes would help clarify who is best suited to implement programmes and strategies, and why.

Differential gains

Although SEL programming has been seen to be successfully delivered across a diverse range of contexts (e.g., Payton et al., 2008****), there is little consensus as to whether differential gains are made for identified subgroups (e.g., low income, ethnic minority status, SEN, and/or ‘at risk’ status for mental health difficulties). Indeed, the review highlights a general lack of attention to subgroups, both in meta‐analytical approaches specifically and more generally, meaning there is limited evidence available (Barry et al., 2017; Barry & Dowling, 2015).

There is conclusive evidence that exposure to multiple poverty‐related risks increases the odds that students who are socioeconomically disadvantaged will demonstrate less social and emotional competence, lower executive functioning skills, and more behaviour problems (Webster‐Stratton & Reid, 2008). Accordingly, where subgroups have been examined, these have typically been ‘at risk’ populations likely to experience those conditions, namely low‐income populations, those with ethnic minority status, and those identified as having special educational and/or additional needs. As noted, whether this translates to differential uptake of intervention is less certain. Evidence from the reviews is examined below.

Socioeconomic status

For studies that consider the possible moderating effects of economic status, findings suggest, overall, that SEL programmes are as effective for students in low‐income families as for those in middle‐ to high‐income brackets (Adi et al., 2007*****; Clarke et al., 2015**; Corcoran et al., 2017***; Sklad et al., 2012****): studies typically do not find any differential impact of SEL programmes for students with low SES. This conclusion would tentatively support the compensatory hypothesis (pending more robust examination): the evidence suggests that individuals experiencing low SES have effectively compensated, demonstrating equivalent results at post‐test. However, several studies note the comparatively small number of studies explicitly examining SES, limiting the conclusions that can be drawn (Corcoran et al., 2017***). Indeed, SES information was missing from a third of the studies examined by Durlak et al. (2011)*****, leading to calls for additional work in this area. This is particularly true for the United Kingdom and Ireland, where few studies have examined the effects of SES in relation to SEL programming in detail (White, 2017**).

Ethnic minority status

Evidence on SEL interventions with racial/ethnic minority students is mixed, though trends appear to indicate little distinct difference in programme effects. Farahmand, Grant, Polo, and Duffy (2010***) did not find a significant effect of race/ethnicity in their review of school‐based mental health programmes. However, Franklin et al. (2017****) report that improvements in externalizing behaviour were significantly positively associated with the proportion of Caucasian students in the sample. That said, the associated effect size was extremely small (b = 0.002).

Special educational and/or additional needs

There is some limited evidence that particular elements of SEL programming are suited to students with special educational needs. For instance, conditions such as Attention Deficit and Hyperactivity Disorder (ADHD) are related to difficulties with inhibitory control, which self‐regulation strategies address specifically. Even children who do not meet full diagnostic criteria for conditions such as ADHD can still be impaired by high symptom levels (Faraone et al., 2015) and, therefore, may be responsive to intervention components based on self‐regulation strategies (Moore et al., 2018; Pandey et al., 2018****). While other links might be made in relation to broader categories of Special Educational Needs or Difficulties (e.g., social skills training for students with social skill difficulties, such as those on the Autistic Spectrum), this is most likely in conjunction with integrated levels of support, and is not consistent with a prevention and promotion framework as the only mode of intervention. Little to no evidence was apparent within this evidence review in relation to other forms of Special Educational and Additional Needs that are recognized within the English policy context. A related consideration is individuals experiencing nascent mental health difficulties, as there are strong theoretical reasons why early prevention and promotion approaches may be helpful in addressing and/or preventing further difficulties (Goldman, Stamler, Kleinman, Kerner, & Lewis, 2016). In a meta‐analysis examining the effect of the FRIENDS programme, no immediate effect was found for ‘high risk’ children (those experiencing anxiety at pre‐test), in comparison to small effects for those with low levels of anxiety (these effects were seen to wash out after 12 months; Maggin & Johnson, 2014****).
There is a debate within the literature (as exemplified by a response to Maggin & Johnson’s conclusions [Barrett, Cooper, Stallard, Zeggio, & Gallegos‐Guajardo, 2017]) as to (1) what constitutes ‘at risk’ (as different studies presented different criteria) and (2) the distinction between ‘treatment’ and ‘prevention’ (e.g., at what point do emergent difficulties become an issue for treatment?). This issue is especially complex across the broad range of outcomes for universal prevention programmes because there are few established normative values for comparison. Because there is evidence for individual programmes and/or studies demonstrating differential effects for identified sub‐groups (e.g., ‘at risk’ children), differences in results may be due to a complex set of interrelationships between programme design, community factors, and/or methodological limitations (e.g., cultural validity of either an intervention and/or the measures used; Adi et al., 2007; Barry et al., 2017*). That is consistent with the work of Simmons, Brackett, and Adler (2018), who note that the barriers to equity are complex. It is not simply a case of identifying categorical ‘membership’ of a given group (e.g., eligibility for free school meals). For instance, poverty indicators themselves can potentially lead to implicit biases in school staff, which in turn create inequities in delivery (Simmons et al., 2018). This continuing omission has particularly significant implications for our current understanding of the field: the vast majority of conclusions are based on results from universal samples (e.g., whole class), so intervention effects that particularly benefit as‐yet‐unidentified subgroups (compensatory hypothesis) are potentially being missed; any accumulated advantages are also effectively ‘hiding’ those who are not responsive to intervention.

A further and significant critique of the extant literature discussed so far is that current understanding of differential gains is still almost solely based on broad socio‐demographic data, which is not necessarily related to the context or aims of a given intervention. Although notable examples of theoretically derived groups do exist (e.g., Maggin & Johnson, 2014****, as above), given that it has long been established that variability in effects is a likely outcome for universal programmes (as heterogeneous subgroups are receiving the same intervention) (Farrell, Henry, & Bettencourt, 2013), this lack of data is surprising. As above, the implications arising from this finding are potentially quite serious, indicating the possibility of inaccurate estimation of effects for the very groups that SEL programming is designed to address. Farrell, Henry, and Bettencourt (2013) provide some solution to this in the form of subgroups derived from intervention logic models (i.e., group‐specific hypotheses derived from the risk and protective factors that a given intervention is designed to address); person‐centred approaches offer further potential utility as a practical method for analysis (e.g., Spilt, Koot, & van Lier, 2013).

A final difficulty in assessing differential gains is the disconnection between the theory of prevention and the typical approach taken in evaluation and outcome trials (on which meta‐analytic approaches are based). Prevention approaches seek to reduce the prevalence of later difficulties, but the majority of evaluation studies examine immediate post‐test outcomes only. Indeed, only 8% of studies examined by Wigelsworth et al. (2016***) followed up on outcomes beyond 18 months. Therefore, any differential effects for groups more likely to experience later life difficulties are not typically captured. There is a clear call in this field for robust, longitudinal work which follows recipients of prevention programming into the later life course. Further methodological difficulties revolve around power and sample size. As preventative programming is delivered universally, individual trials are typically only powered to detect main effects, meaning there is not always sufficient power to allow rigorous testing of subgroups. Lack of reporting in individual studies means meta‐analytic approaches can be similarly impaired (Corcoran et al., 2017***).

Strengths and limitations

To the authors’ best knowledge, this is the first overview of reviews undertaken specifically in relation to SEL in primary schools. As such, the work offers new insights into the current state of the field, indicating some emergent areas for concern. This includes an ambiguity underlying the current evidence base for whole school approaches, a need for further specificity in examining components of class‐based curricula, and a substantial underinvestment in our understanding and examination of differential gains. A principal strength underpinning the findings is the use of a systematic and robust method. Reviews of reviews have been considered the ‘gold standard’ (Evans, 2003) in the evaluation of evidence through the use of exhaustive search criteria. This approach has been further enhanced by the use of summary scoring in order to assess the overall strength of the evidence base.

The current review of reviews is not without its limitations. Secondary synthesis methods can only capture the ‘critical mass’ of factors already included in meta‐analytic/systematic reviews, as, by definition, primary research studies are not included. We have attempted to address this by including an expanded literature base when discussing implications, but new studies with promising approaches have not been identified through systematic protocols. Of note is the overall quality of the studies informing the review, with less than half being scored as high quality. Although the review itself arguably biases towards impact‐based studies (as one star was awarded to reviews whose results were presented in a way that allowed quantitative and inferential assessment of impact, prejudicing qualitative and/or configurative‐type approaches), a number of identified reviews did not assess the quality of studies, including through the use of a control group as an inclusion criterion and/or the use of weight‐of‐evidence scoring (such as that deployed in the current study). Those elements are important in creating confidence in deploying evidence‐based practices.

Summary and future directions

This review supports the prevailing consensus that preventative SEL programmes have an important role in education, demonstrating beneficial effects across a range of favourable outcomes. However, variation in demonstrated success remains high, and inconsistencies remain with regard to the recommendations for practice and the quality and rigour underpinning those decisions. For instance, although there is little disagreement that ‘whole school’ practices should form part of effective delivery, there is a great deal of ambiguity as to what this specifically entails. For the small body of empirical evidence supporting ‘whole school’ approaches (e.g., Goldberg et al., 2018****), it is not clear whether the small gains augment or replace gains made through curricula practice, or which precise components contribute to this positive effect. More complex trial architecture may help address some of those elements, for example through the use of factorial designs to test the presence or absence of specific components. We acknowledge the need for greater specification as to what might be included as ‘whole school’ ahead of empirical testing.

Evidence for classroom activities and curricula remains positive but arguably lacks specificity in relation to dissemination of good practice. Little evidence is currently available in relation to the precise nature of the components underpinning positive outcomes, which greatly limits how best to ensure fit between practice and context. Without a step‐wise change in the way evaluations are conducted, we argue that inconsistent findings are likely to remain. We tentatively suggest that future research might approach this difficulty through approaches such as distillation and matching (Chorpita, Daleiden, & Weisz, 2005), which highlights effective elements common across programmes (see Jones et al., 2017 for an example).

With regard to differential gains, there is much work to be done. Indeed, the conceptualization of subgroups through overt socio‐demographic membership is overly simplistic and, therefore, potentially counterproductive (Simmons et al., 2018). As a whole, reviews did not acknowledge the broader educational frameworks of tiered support, for example by identifying those more (or less) likely to benefit from intervention. Both as part of evaluative trials and through the adoption of wider educational frameworks (e.g., baseline screening and categorization of level of need; Fuchs & Fuchs, 2017), a more nuanced identification of vulnerable groups may be possible. A significant implication arising from this review is that, without further attention to this issue, SEL may not be nearly as effective for the very individuals it is specifically designed to help.

Conclusion

Research continues to maintain the prevailing consensus that SEL programmes have an important role in education. However, a renewed emphasis on study quality with an accompanying diversification of efforts into defining and examining whole school approaches, greater specificity regarding components of class‐based curricula, and a more nuanced understanding of potential differential response to intervention could further improve the uptake and impact of SEL in schools.

Conflict of interest

The authors have no known conflict of interest to disclose.

Author contribution

Michael Wigelsworth: Conceptualization (equal); Funding acquisition (equal); Methodology (equal); Project administration (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal). Lily Verity: Data curation (equal); Formal analysis (equal). Carla Mason: Data curation (equal); Formal analysis (equal). Pamela Qualter: Formal analysis (equal); Writing – review & editing (equal). Neil Humphrey: Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal).

Acknowledgments

This research was supported in part by funding from the Education Endowment Foundation. The authors thank EEF and EIF colleagues; Matthew van Poortvliet and Aleisha Clarke and the EEF stakeholder panel who helped inform the wider project.

Footnotes

1

Although universal promotion programmes are available across all school years, the focus of this research is on the primary years of education (Years 1–6) only. Primary school represents the earliest time at which all school children receive mandatory education in a systematic and universal manner (i.e., within schools and classrooms; McClelland et al., 2017).

2

The start date for the literature search coincides with the publication of Daniel Goleman’s ‘Emotional Intelligence’, broadly attributed as a catalyst for the rise in popularity of SEL based approaches in school (Elias & Weissberg, 2000).

References

*= Source identified as part of the evidence review.

  1. * Adi, Y. , McMillan, A. S. , Kiloran, A. , & Stewart‐Brown, S. (2007). Systematic review of the effectiveness of interventions to promote mental wellbeing in primary schools. Report 3: Universal approaches with focus on prevention of violence and bullying (pp. 1–106). London, UK: National Institute for Health and Clinical Excellence. [Google Scholar]
  2. Basu, A. , & Mermillod, M. (2011). Emotional intelligence and social‐emotional learning: An overview. Psychology Research, 1, 182–185. 10.17265/2159-5542/2011.03.004 [DOI] [Google Scholar]
  3. Barrett, P. M. , Cooper, M. , Stallard, P. , Zeggio, L. , & Gallegos‐Guajardo, J. (2017). Effective evaluation of the FRIENDS anxiety prevention program in school settings: A response to Maggin and Johnson. Education and Treatment of Children, 40(1), 97–110. 10.1353/etc.2017.0006 [DOI] [Google Scholar]
  4. Barry, M. , & Dowling, K. (2015). A review of the evidence on enhancing psychosocial skills development in children and young people. Galway, Ireland: HPRC. [Google Scholar]
  5. *Barry, M. , Clarke, A. , & Dowling, K. (2017). Promoting social and emotional well‐being in schools. Health Education, 117(5), 434–451. 10.1108/MRR-09-2015-0216 [DOI] [Google Scholar]
  6. *CASEL . (2012). Effective social and emotional learning programs. Preschool and Elementary School Edition. Chicago, IL: CASEL. http://casel.org/wp‐content/uploads/2016/01/2013‐casel‐guide‐1.pdf [Google Scholar]
  7. *Catalano, R. F. , Berglund, M. L. , Ryan, J. A. M. , Lonczak, H. S. , & Hawkins, J. D. (2004). Positive youth development in the United States: Research findings on evaluations of positive youth development programs. The ANNALS of the American Academy of Political and Social Science, 591(1), 98–124. 10.1177/0002716203260102 [DOI] [Google Scholar]
  8. *Cefai, C. , Bartolo, P. , Cavioni, V. , & Downes, P. (2018). Strengthening social and emotional education as a core curricular area across the EU. A review of the International Evidence. Luxembourg: NESET. 10.2766/664439 [DOI] [Google Scholar]
  9. *Centre for Health and Care in Schools . (2014). The impact of school‐connected behavioral and emotional health interventions on student academic performance. Washington, DC: The Center of Health and Health Care in Schools. [Google Scholar]
  10. Chorpita, B. F. , Daleiden, E. L. , & Weisz, J. R. (2005). Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research, 7(1), 5–20. 10.1007/s11020-005-1962-6 [DOI] [PubMed] [Google Scholar]
  11. Clarke, A. (2020). Strategies to support children’s social & emotional wellbeing on returning to school. London, UK: Early Intervention Foundation. [Google Scholar]
  12. * Clarke, A. M. , Morreale, S. , Field, C. A. , Hussein, Y. , & Barry, M. M. (2015). What works in enhancing social and emotional skills development during childhood and adolescence? A review of the evidence on the effectiveness of school‐based and out‐of‐school programmes in the UK. Galway, Ireland: WHO Collaborating Centre for Health Promotion Research. [Google Scholar]
  13. *Corcoran, R. P. , Cheung, A. C. K. , Kim, E. , & Xie, C. (2018). Effective universal school‐based social and emotional learning programs for improving academic achievement: A systematic review and meta‐analysis of 50 years of research. Educational Research Review, 25, 56–72. 10.1016/j.edurev.2017.12.001 [DOI] [Google Scholar]
  14. *Das, J. K. , Salam, R. A. , Lassi, Z. S. , Khan, M. N. , Mahmood, W. , Patel, V. , & Bhutta, Z. A. (2016). Interventions for adolescent mental health: An overview of systematic reviews. Journal of Adolescent Health, 59, S49–S60. 10.1016/j.jadohealth.2016.06.020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Dobia, B. , Arthur, L. , Jennings, P. , Khlentzos, D. , Parada, R. , Roffey, S. , & Sheinman, N. (2020). Implementation of social and emotional learning. In Chatterjee Singh N., & Duraiappah A. (Eds.), Rethinking learning: A review of social and emotional learning for education systems (pp. 155–186). New Delhi, India: UNESCO MGIEP. https://mgiep.unesco.org/rethinking‐learning [Google Scholar]
  16. Domitrovich, C., Durlak, J. A., Staley, K. C., & Weissberg, R. P. (2017). Social‐emotional competence: An essential factor for promoting positive adjustment and reducing risk in school children. Child Development, 88(2), 408–416. 10.1111/cdev.12739
  17. *Dray, J., Bowman, J., Campbell, E., Freund, M., Wolfenden, L., Hodder, R. K., … Wiggers, J. (2017). Systematic review of universal resilience‐focused interventions targeting child and adolescent mental health in the school setting. Journal of the American Academy of Child and Adolescent Psychiatry, 56(10), 813–824. 10.1016/j.jaac.2017.07.780
  18. Durlak, J., Domitrovich, C., Weissberg, R. P., & Gullotta, T. P. (2015). Handbook of social and emotional learning. New York, NY: Guilford Press.
  19. *Durlak, J., Weissberg, R., Dymnicki, A., Taylor, R., & Schellinger, K. (2011). The impact of enhancing students’ social and emotional learning: A meta‐analysis of school‐based universal interventions. Child Development, 82(1), 405–432. 10.1111/j.1467-8624.2010.01564.x
  20. Durlak, J., Weissberg, R., & Pachan, M. (2010). A meta‐analysis of after‐school programs that seek to promote personal and social skills in children and adolescents. American Journal of Community Psychology, 45(3–4), 294–309. 10.1007/s10464-010-9300-6
  21. Elias, M. J., & Weissberg, R. P. (2000). Primary prevention: Educational approaches to enhance social and emotional learning. Journal of School Health, 70(5), 186–190.
  22. Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing, 12(1), 77–84. 10.1046/j.1365-2702.2003.00662.x
  23. Evans, G., & English, K. (2002). The environment of poverty: Multiple stressor exposure, psychophysiological stress, and socioemotional adjustment. Child Development, 73(4), 1238–1248. 10.1111/1467-8624.00469
  24. *Farahmand, F. K., Grant, K. E., Polo, A. J., & Duffy, S. N. (2011). School‐based mental health and behavioral programs for low‐income, urban youth: A systematic and meta‐analytic review. Clinical Psychology: Science and Practice, 18(4), 372–390. 10.1111/j.1468-2850.2011.01265.x
  25. Faraone, S. V., Asherson, P., Banaschewski, T., Biederman, J., Buitelaar, J. K., Ramos‐Quiroga, J. A., … Franke, B. (2015). Attention‐deficit/hyperactivity disorder. Nature Reviews Disease Primers, 1, 15020. 10.1038/nrdp.2015.20
  26. Farrell, A. D., Henry, D. B., & Bettencourt, A. (2013). Methodological challenges examining subgroup differences: Examples from universal school‐based youth violence prevention trials. Prevention Science, 14(2), 121–133. 10.1007/s11121-011-0200-2
  27. *Franklin, C., Kim, J. S., Beretvas, T. S., Zhang, A., Guz, S., Park, S., … Maynard, B. R. (2017). The effectiveness of psychosocial interventions delivered by teachers in schools: A systematic review and meta‐analysis. Clinical Child and Family Psychology Review, 20(3), 333–350. 10.1007/s10567-017-0235-4
  28. Fuchs, D., & Fuchs, L. S. (2017). Critique of the national evaluation of response to intervention: A case for simpler frameworks. Exceptional Children, 83(3), 255–268. 10.1177/0014402917693580
  29. *Garrard, W. M., & Lipsey, M. W. (2007). Conflict resolution education and antisocial behavior in U.S. schools: A meta‐analysis. Conflict Resolution Quarterly, 25(1), 9–38. 10.1002/crq.188
  30. *Goldberg, J. M., Sklad, M., Elfrink, T. R., Schreurs, K. M. G., Bohlmeijer, E. T., & Clarke, A. M. (2018). Effectiveness of interventions adopting a whole school approach to enhancing social and emotional development: A meta‐analysis. European Journal of Psychology of Education, 34, 755–782. 10.1007/s10212-018-0406-9
  31. Goldman, E., Stamler, J., Kleinman, K., Kerner, S., & Lewis, O. (2016). Child mental health: Recent developments with respect to risk, resilience, and interventions. In Korin, M. R. (Ed.), Health promotion for children and adolescents (pp. 99–123). Boston, MA: Springer US. 10.1007/978-1-4899-7711-3_6
  32. *Grant, S., Hamilton, L. S., Wrabel, S. L., Gomez, C. J., Whitaker, A., Tamargo, J., … Ramos, A. (2017). Social and emotional learning interventions: Intervention summaries (pp. 1–255). Santa Monica, CA: RAND Corporation.
  33. Greenberg, M. T., Domitrovich, C., & Bumbarger, B. (2001). The prevention of mental disorders in school‐aged children: Current state of the field. Prevention & Treatment, 4(1), 1–62. 10.1037/1522-3736.4.1.47c
  34. *Gutman, L. M., & Schoon, I. (2013). The impact of non‐cognitive skills on outcomes for young people: Literature review (p. 59). London, UK: Education Endowment Foundation.
  35. Heckman, J. J., & Kautz, T. (2012). Hard evidence on soft skills. Labour Economics, 19(4), 451–464. 10.1016/j.labeco.2012.05.014
  36. *Horowitz, J., & Garber, J. (2006). The prevention of depressive symptoms in children and adolescents: A meta‐analytic review. Journal of Consulting and Clinical Psychology, 74(3), 401–415. 10.1037/0022-006X.74.3.401
  37. *January, A. M., Casey, R. J., & Paulson, D. (2011). A meta‐analysis of classroom‐wide interventions to build social skills: Do they work? School Psychology Review, 40(2), 242–256. 10.1080/02796015.2011.12087715
  38. Jones, S., Bailey, R., Brush, K., Nelson, B., & Barnes, S. (2016). What is the same and what is different? Cambridge, MA: The EASEL Lab.
  39. Jones, S., Brush, K., Bailey, R., Brion‐Meisels, G., McIntyre, J., Kahn, J., … Stickle, L. (2017). Navigating SEL from the inside out. Cambridge, MA: The EASEL Lab.
  40. *Korpershoek, H., Harms, T., de Boer, H., van Kuijk, M., & Doolaard, S. (2016). A meta‐analysis of the effects of classroom management strategies and classroom management programs on students’ academic, behavioral, emotional, and motivational outcomes. Review of Educational Research, 86(3), 643–680. 10.3102/0034654315626799
  41. Lendrum, A., & Humphrey, N. (2012). The importance of studying implementation of interventions in school settings. Oxford Review of Education, 38(5), 635–652. 10.2307/41702781
  42. *Maggin, D. M., & Johnson, A. H. (2014). A meta‐analytic evaluation of the FRIENDS program for preventing anxiety in student populations. Education and Treatment of Children, 37(2), 277–306. 10.1353/etc.2014.0018
  43. Marsh, H. W. (1994). Sport motivation orientations: Beware of jingle‐jangle fallacies. Journal of Sport & Exercise Psychology, 16(4), 365–380. 10.1123/jsep.16.4.365
  44. McClelland, M. M., Tominey, S. L., Schmitt, S. A., & Duncan, R. (2017). SEL interventions in early childhood. Future of Children, 27(1), 33–47. 10.2307/4596916
  45. Merrell, K. W., & Gueldner, B. A. (2010). Social and emotional learning in the classroom: Promoting mental health and academic success. London, UK: Guilford Press.
  46. Moore, D. A., Russell, A. E., Matthews, J., Ford, T. J., Rogers, M., Ukoumunne, O. C., … Gwernan‐Jones, R. (2018). School‐based interventions for attention‐deficit/hyperactivity disorder: A systematic review with multiple synthesis methods. Review of Education, 6(3), 209–263. 10.1002/rev3.3149
  47. *O’Conner, R., De Feyter, J., Carr, A., Luo, J. L., & Romm, H. (2017). A review of the literature on social and emotional learning for students ages 3–8: Implementation strategies and state and district support policies (part 2 of 4). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Mid‐Atlantic. Retrieved from http://ies.ed.gov/ncee/edlabs
  48. Oberle, E., Domitrovich, C., Meyers, D., & Weissberg, R. (2016). Establishing systemic social and emotional learning approaches in schools: A framework for schoolwide implementation. Cambridge Journal of Education, 46(3), 277–297. 10.1080/0305764X.2015.1125450
  49. *Oliver, R., Wehby, J. H., & Reschly, D. J. (2011). The effects of teachers’ classroom management practices on disruptive or aggressive student behavior. Oslo, Norway: The Campbell Collaboration. 10.4073/csr.2011.4
  50. *Pandey, A., Hale, D., Das, S., Goddings, A.‐L., Blakemore, S.‐J., & Viner, R. M. (2018). Effectiveness of universal self‐regulation–based interventions in children and adolescents. JAMA Pediatrics, 172(6), 566. 10.1001/jamapediatrics.2018.0232
  51. *Paulus, F. W., Ohmann, S., & Popow, C. (2016). Practitioner review: School‐based interventions in child mental health. Journal of Child Psychology and Psychiatry and Allied Disciplines, 57(12), 1337–1359. 10.1111/jcpp.12584
  52. *Payton, J., Weissberg, R., Durlak, J., Dymnicki, A., Taylor, R., Schellinger, K., & Pachan, M. (2008). The positive impact of social and emotional learning for kindergarten to eighth‐grade students. Chicago, IL: CASEL. 10.1142/9789814289078_0016
  53. *Rones, M., & Hoagwood, K. (2000). School‐based mental health services: A research review. Clinical Child and Family Psychology Review, 3(4), 223–241. 10.1023/A:1026425104386
  54. *Sancassiani, F., Pintus, E., Holte, A., Paulus, P., Moro, M. F., Cossu, G., … Lindert, J. (2015). Enhancing the emotional and social skills of the youth to promote their wellbeing and positive development: A systematic review of universal school‐based randomized controlled trials. Clinical Practice & Epidemiology in Mental Health, 11(1), 21–40. 10.2174/1745017901511010021
  55. Simmons, D. N., Brackett, M. A., & Adler, N. (2018). Applying an equity lens to social, emotional, and academic development. Retrieved from https://www.rwjf.org/content/dam/farm/reports/issue_briefs/2018/rwjf446338
  56. *Sklad, M., Diekstra, R., Ritter, M., & Ben, J. (2012). Effectiveness of school‐based universal social, emotional, and behavioral programs: Do they enhance students’ development in the area of skill, behavior, and adjustment? Psychology in the Schools, 49(9), 892–909. 10.1002/pits.21641
  57. Spilt, J. L., Koot, J. M., & van Lier, P. (2013). For whom does it work? Subgroup differences in the effects of a school‐based universal prevention program. Prevention Science, 14(5), 479–488. 10.1007/s11121-012-0329-7
  58. *Taylor, R. D., Oberle, E., Durlak, J. A., & Weissberg, R. P. (2017). Promoting positive youth development through school‐based social and emotional learning interventions: A meta‐analysis of follow‐up effects. Child Development, 88(4), 1156–1171. 10.1111/cdev.12864
  59. Walberg, H. J., & Tsai, S. (1983). Matthew effects in education. American Educational Research Journal, 20(3), 359–373. 10.2307/1162605
  60. *Weare, K., & Nind, M. (2011). Mental health promotion and problem prevention in schools: What does the evidence say? Health Promotion International, 26(Suppl. 1), i29–i69. 10.1093/heapro/dar075
  61. Webster‐Stratton, C., Jamila Reid, M., & Stoolmiller, M. (2008). Preventing conduct problems and improving school readiness: Evaluation of the incredible years teacher and child training programs in high‐risk schools. Journal of Child Psychology and Psychiatry, 49(5), 471–488. 10.1111/j.1469-7610.2007.01861.x
  62. *White, J. (2017). Rapid evidence review: Reducing the attainment gap – The role of health and wellbeing interventions in schools. Retrieved from http://www.healthscotland.scot/media/1694/reducing‐the‐attainment‐gap‐the‐role‐of‐health‐and‐wellbeing‐interventions‐in‐schools.pdf
  63. *Wigelsworth, M., Lendrum, A., Oldfield, J., Scott, A., Ten‐Bokkel, I., Tate, K., & Emery, C. (2016). The impact of trial stage, developer involvement and international transferability on universal social and emotional learning programme outcomes: A meta‐analysis. Cambridge Journal of Education, 46(3), 347–376. 10.1080/0305764X.2016.1195791
  64. World Health Organization (WHO). (1998). Health promotion evaluation: Recommendations for policy‐makers. Report of the WHO European Working Group on Health Promotion Evaluation. Copenhagen, Denmark: WHO.

Articles from The British Journal of Educational Psychology are provided here courtesy of Wiley