PLOS ONE. 2024 Jun 11;19(6):e0303033. doi: 10.1371/journal.pone.0303033

Move Well, Feel Good: Feasibility and acceptability of a school-based motor competence intervention to promote positive mental health

Stuart J Fairclough 1,*, Lauren Clifford 1, Lawrence Foweather 2, Zoe R Knowles 2, Lynne M Boddy 2, Emma Ashworth 3, Richard Tyler 1
Editor: Henri Tilga
PMCID: PMC11166299  PMID: 38861557

Abstract

Background

In response to the adverse impacts of the COVID-19 lockdown measures, Move Well, Feel Good (MWFG) was developed as a school-based intervention using improvement of motor competence as a mechanism for promoting positive mental health. Study objectives were to evaluate the feasibility and acceptability of MWFG and to describe changes in child-level outcomes.

Methods

Five northwest England primary schools were recruited. MWFG was delivered over 10 weeks through physical education (PE) lessons, which were supplemented by optional class-time, break-time, and home activities. The intervention focused on developing 9–10-year-old children’s motor competence in locomotor, object control, and stability skills, alongside psychosocial skills. Feasibility was evaluated against nine pre-defined criteria using surveys, interviews (teachers), and focus groups (children). Pre- and post-intervention assessments of motor competence, mental health, prosocial behaviour, wellbeing, and 24-hour movement behaviours were also completed.

Results

The five recruited schools represented 83% of the target number, 108 children consented (54% of target) with teachers recruited in all schools (100% of target). Intervention dose was reflected by 76% of the 45 scheduled PE lessons being delivered, and adherence was strong (>85% of children attending ≥75% of lessons). Positive indicators of acceptability were provided by 86% of children, 83% of PE teachers, and 90% of class teachers. Data collection methods were deemed acceptable by 91% of children and 80% of class teachers, and children spoke positively about participating in the data collection. Child-level outcome data collection was completed by 65%-97% of children, with a 3%-35% attrition rate at post-intervention, depending on measure. Favourable changes in motor competence (+13.7%), mental health difficulties (-8.8%), and prosocial behaviour (+7.6%) were observed.

Conclusions

MWFG is an acceptable and feasible motor competence intervention to promote positive mental health. Content and delivery modifications could inform progression to a pilot trial with a more robust design.

Introduction

The COVID-19 pandemic and associated social distancing measures resulted in unprecedented, enforced changes to UK citizens’ routines and lifestyles. Children were particularly affected through school closures, online home learning, the ceasing of organised sports, and restrictions on face-to-face social interactions. The lockdown restrictions were negatively associated with children’s mental health and wellbeing [1–3], with those from lower socioeconomic status families disproportionately affected [3, 4]. Furthermore, the restrictions contributed to increases in children’s digital screen use [5] and decreases in physical activity [6], particularly structured activities (e.g., physical education lessons, organised leisure time sport participation [7, 8]). These activities are critical for development of motor competence [9] (i.e., the degree to which a child performs goal-directed movements in a coordinated, accurate, and relatively error-free manner [10]), which is an essential foundation for future movement and physical activity behaviours [11]. Given the established relationships between motor competence and physical activity [11], it is unsurprising that reduced activity during the pandemic was associated with attenuated motor competence once lockdown restrictions were lifted [12]. Moreover, poor motor competence is associated with reduced mental health, including internalising difficulties such as anxiety and depression [13], and these associations may be mediated by limited social support mechanisms [13, 14] and low self-perceptions [15, 16]. Improving children’s motor competence may therefore be a mechanism for promoting children’s mental health, through enhancing aspects of psychosocial development.

The inter-relationships between children’s motor competence, mental health, and their mediators are described in the Elaborated Environmental Stress Hypothesis model (EESH) [17]. Empirical support exists for the EESH [18], which posits that poor motor competence predisposes children to internalising mental health difficulties (e.g., anxiety, depression) via interactions with environmental stressors such as low self-esteem, low social support, physical inactivity, and being overweight. These stressors can, in turn, be ‘buffered’ by social and personal resources such as peer and parental support and enhanced perceived competence [18]. Previous studies in community samples have reported favourable relationships between motor competence, internalising difficulties, and other psychosocial outcomes described in the EESH model [14, 19].

There is a need for intervention strategies to address primary school children’s well-established low levels of motor competence and poor mental health and wellbeing, which declined further as immediate and persisting consequences of the COVID-19 lockdown measures [1, 12], even after the restrictions ended [6, 20, 21]. Schools are suitable settings for the promotion of child health and wellbeing [22]. Furthermore, primary school motor competence interventions can be efficacious for improving motor skills [23, 24] and there is some albeit limited evidence that they can also enhance mental health and wellbeing [25, 26]. However, no such intervention studies involving children without movement difficulties (e.g., developmental coordination disorder) within mainstream schools have been undertaken in the UK. In response to the adverse health and wellbeing consequences of the lockdown restrictions observed in schools, we applied the EESH model to underpin a school-based motor competence intervention to enhance children’s mental health entitled ‘Move Well, Feel Good’ (MWFG).

This study focused on the feasibility of the MWFG intervention. Feasibility studies are an integral part of the Medical Research Council’s complex intervention development and evaluation framework, which assess pre-defined progression criteria related to interventions and/or their evaluation [27]. Feasibility studies generate sufficient evidence to make informed decisions about whether an intervention has promise and its findings can be replicated in a scaled-up trial [28]. As the development of MWFG was consistent with the Medical Research Council framework, the primary aim of this study was to evaluate the feasibility and acceptability of MWFG to assess its potential for implementation as a pilot trial. A secondary aim was to describe changes in child-level outcomes that reflected elements of the EESH model.

Materials and methods

Study design

This feasibility study was conducted as a single-arm pre-post intervention [29] whereby all children received the same intervention. Pre-intervention (T0) and post-intervention (T1) measures were obtained over two consecutive days one week before (week of 12th September 2022) and after the intervention (week of 12th December 2022), respectively. Additional feasibility data were collected from teachers during each week of the intervention between T0 and T1.

Sample size estimation for feasibility outcomes

Using reference tables of Red/Stop upper limits and Green/Go lower limits for pilot study progression criteria, specifically developed for feasibility outcomes [30], we estimated a sample size for child recruitment. Based on an alpha level of 0.05 and 90% power to reject being in the Red zone if the Green zone holds true, the minimum required sample size was n = 46. Depending on the number of Year 5 classes in each school, we proposed to recruit between 100 and 200 children to the study. This is substantially more than the estimate from the power calculation, and consistent with the upper end of the sample size scale (n = 100) typically observed in health behavioural pilot and feasibility studies [28].
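The exact-binomial reasoning behind this kind of estimate can be sketched as below. The red/green limits used here (0.50 and 0.70) are illustrative assumptions only; the actual limits behind n = 46 come from the published reference tables [30].

```python
from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def feasibility_n(p_red, p_green, alpha=0.05, power=0.90, n_max=500):
    """Smallest n with a critical count c such that:
    P(X >= c | p_red)   <= alpha  (type I error if the Red zone holds), and
    P(X >= c | p_green) >= power  (power to reject Red if Green holds)."""
    for n in range(5, n_max + 1):
        for c in range(n + 1):
            if binom_tail(n, c, p_red) <= alpha:
                if binom_tail(n, c, p_green) >= power:
                    return n, c
                break  # a larger c only lowers power; try the next n
    return None

# Hypothetical Red upper limit 0.50 and Green lower limit 0.70
result = feasibility_n(0.5, 0.7)
```

The same search, run with the limits in the reference tables, reproduces table-based estimates such as the n = 46 reported above.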

Participants and recruitment

Phase 1 of MWFG involved school stakeholders (e.g., children, PE teachers, class teachers, school leaders) and the researchers co-creating the intervention programme [31], with schools and children recruited to the study during this phase between January 31st and March 1st 2022. Briefly, participants were Year 5 children (age 9–10 years) recruited from primary schools in West Lancashire, northwest England. Eligible schools had to be located in low socioeconomic status (SES) areas based on school postcode-linked English Indices of Multiple Deprivation (EIMD; deciles 1–3) [32] and have >18% (England average) of children eligible for free-school meals. Schools meeting these criteria and with at least 25 children per class were contacted to ascertain their interest in the study. All Year 5 children who were physically able to participate in physical education (PE) lessons were eligible to participate. Written informed consent was obtained for all child (parental/carer consent and child assent) and teacher participants. Ethical approval was received from the Science Research Ethics Committee at Edge Hill University (#ETH2122-0062). The study was registered with the International Standard Randomised Controlled Trial Number (ISRCTN) Registry (#ISRCTN23960783). The study was registered retrospectively (i.e., after study commencement) but before participant enrolment started, because the design and content of the intervention were unknown when the study commenced (i.e., during Phase 1 intervention co-creation). The authors confirm that all ongoing and related trials for this intervention are registered.

MWFG intervention

The core element of MWFG was PE lessons focused on improving motor competence. The lessons were taught in three blocks of three weeks (minimum dose of nine lessons in total), focusing on combinations of locomotor, object-control, and stability skills, respectively. The lessons were developed by the research team and included adaptations of existing physical literacy and PE resources [33, 34]. Specific psychosocial concepts (e.g., perceived competence, self-worth, resilience, co-operative working) were integrated into the PE lesson plans to reinforce these important EESH ‘buffers’ against risks to positive mental health (see S1 File). The design and content of the lessons were based on established pedagogical principles. Specifically, the SAAFE Framework emphasised supportive, active, autonomous, fair, and enjoyable lessons [35], while assessment for learning [36] was integrated through use of task-oriented resource cards for self- and peer-assessment. PE teachers received a weekly summary lesson plan which cross-referenced activity resource cards that could be used in each lesson. Each lesson also included an associated activity card for children to try at home with family members or during recess with friends (see S2 File). These self-directed individual or group ‘skill snacks’ were of short duration and related directly to the content of that week’s PE lesson. These activity cards and accompanying demonstration videos could be accessed via a QR code on the PE teacher’s lesson plans and an electronic link which was sent to each child’s parent/carer via the schools’ communication systems.

Two weeks before the intervention started, the PE teachers delivering the intervention received an online familiarisation session. The online format was used for convenience to accommodate PE teachers’ availability, and a recording of the session with a transcript was shared for attendees and non-attendees to refer to in their own time. PE teachers were encouraged to subsequently ask questions or make suggestions to the researchers about the programme via email, text, or phone. To reinforce PE lesson content, class teachers received suggestions for integrating motor competence activities focused on the same movement skills into their teaching for class-time physically active learning; these suggestions also emphasised the importance of incorporating the psychosocial concepts into lessons. All PE lesson, skill snack, and class teacher resources were provided to participating teachers in electronic and hard copy formats.

Following an introductory classroom launch lesson, the core element of MWFG was delivered by PE teachers during one weekly ~45-minute PE lesson between September and December 2022. Schools were closed, as was typical, for one week’s break mid-way through this period, during which children were provided with activity cards as ‘homework’ and encouraged to try them at home. Throughout the intervention, additional digital and video resources for motor competence and psychosocial development were provided electronically to class teachers and parents/carers for use in lessons taught in classrooms or elsewhere in the school grounds, at break-time, and at home.

Feasibility outcomes

The uncertainties relating to the feasibility of MWFG and the subsequent progression to a pilot intervention trial represented the primary study outcomes. These related to eligibility and recruitment, intervention implementation (e.g., resources, delivery), intervention adherence and engagement, acceptability of data collection procedures, and data attrition [29]. These feasibility outcomes were measured using quantitative and qualitative methods, which are summarised in Table 1 and explained in detail in S3 File. Nine a priori feasibility criteria were agreed with the study funder as progression criteria to a full trial. These were incorporated into a traffic light system (i.e., green: continue to trial, amber: further discussion and changes needed, red: do not proceed to trial) [29, 37] and are detailed in Table 2.

Table 1. Intervention feasibility outcome measures summary.

Feasibility outcome measures | When administered/completed | Participants | Feasibility outcomes | Response modes
Delivery log | Weekly | PE teachers; classroom teachers | Dose; adherence | Closed-question responses
Intervention and research methods acceptability surveys | T1 | PE teachers; classroom teachers | Acceptability of intervention; acceptability of data collection methods | Likert scale (Strongly disagree to Strongly agree)
Intervention and research methods acceptability survey | T1 | Children | Acceptability of intervention; acceptability of data collection methods | Likert scale with smiley-face emoticons (Strongly disagree to Strongly agree)
Semi-structured interviews | T1 | PE teachers; classroom teachers | Acceptability of intervention; acceptability of data collection methods | Oral responses to open-ended interviewer questions and prompts
Participatory focus groups | T1 | 6 children per school | Acceptability of intervention; acceptability of data collection methods | Drawing task and methods photographs to stimulate discussion; oral responses to open-ended interviewer questions and prompts
Consent forms | Before T0 | Children; PE teachers; classroom teachers | Child and teacher recruitment | Signed informed consent
Child-level outcomes | T0 and T1 | Children | Data collection and data attrition rates | As per administration protocols for each measure (see Child-level outcomes section)

Table 2. Traffic light feasibility progression criteria.

Progression criteria | Red (stop) | Amber (discuss and amend) | Green (go)
School recruitment (targeting n = 6) | ≤50% of target number | 50–90% of target number | ≥90% of target number
Child participant recruitment (targeting n = 200) | <20% of eligible children | 20–74% of eligible children | ≥75% of eligible children
Deliverer recruitment | ≥1 class teacher and ≥1 PE deliverer in <50% of schools | ≥1 class teacher and ≥1 PE deliverer in 50–75% of schools | ≥1 class teacher and ≥1 PE deliverer per school
Intervention dose | <40% of scheduled sessions delivered/week | 40–79% of scheduled sessions delivered/week | ≥80% of scheduled sessions delivered/week
Intervention adherence | <40% of recruited children attend ≥75% of MWFG PE lessons | 40–69% of recruited children attend ≥75% of MWFG PE lessons | ≥70% of recruited children attend ≥75% of MWFG PE lessons
Acceptability of intervention | <50% of teachers and children found MWFG acceptable | 50–79% of teachers and children found MWFG acceptable | ≥80% of teachers and children found MWFG acceptable
Acceptability of data collection methods | <50% of teachers and children found data collection methods acceptable | 50–79% of teachers and children found data collection methods acceptable | ≥80% of teachers and children found data collection methods acceptable
Secondary outcome data collected at baseline | Data collected from <50% of children | Data collected from 50–74% of children | Data collected from ≥75% of children
Follow-up secondary outcome data attrition | >40% data attrition at T1 | 26–40% data attrition at T1 | ≤25% data attrition at T1

Child-level outcomes (collected in schools)

Motor competence

Motor competence was assessed using the Canadian Agility and Movement Skill Assessment (CAMSA) [38] and three stability skills from the Dragon Challenge [39]. Both the CAMSA and Dragon Challenge were developed and validated to assess motor competence in primary school-aged children. The CAMSA involved children completing a sequence of seven movement skills in a timed agility course. These skills include jumping on two feet through three hoops, sliding sideways between cones, catching, overhand throw, skipping between cones, hopping on one foot through six hoops, and kicking a ball. The quality of the performance of each skill was scored based on 14 movement criteria, each receiving one point if the criterion was present and zero points if absent. The time taken to complete the agility course was also scored out of 14 points, with faster times awarded more points. Children had two practice attempts before completing two timed and scored trials. Scores from both trials were summed to provide a CAMSA overall score (range 2 to 56) [38].

The three Dragon Challenge stability skills were the balance bench, core agility, and wobble spot [39]. Children observed a demonstration of each skill and had one practice attempt before completing a scored trial. The quality and outcome of each skill were assessed using two technical/process criteria and one outcome/product criterion. One point was given for successful demonstration of each technical/process criterion and two points were awarded for the outcome/product criterion. A maximum of four points was awarded for each skill and these were summed to provide a Dragon Challenge stability skills score (range 0 to 12) [39]. The CAMSA and Dragon Challenge stability skills were assessed and coded by trained assessors who had experience in administering and analysing both assessments and had no prior knowledge of the participants’ movement skill abilities.
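For illustration, the scoring arithmetic described above can be sketched as follows; the per-trial CAMSA time points are assumed to have already been looked up from the completion-time table in the CAMSA protocol (not reproduced here).

```python
def camsa_trial_score(criteria_met, time_points):
    """One CAMSA trial: 14 movement criteria (1 point each if present)
    plus 1-14 points awarded for completion time (faster = more points)."""
    assert len(criteria_met) == 14 and 1 <= time_points <= 14
    return sum(criteria_met) + time_points

def camsa_overall(trials):
    """Sum of two scored trials, each given as (criteria_met, time_points).
    Each trial is worth up to 28 points, so the overall range is 2 to 56."""
    return sum(camsa_trial_score(c, t) for c, t in trials)

def dragon_stability_score(skills):
    """Three stability skills, each scored on two technical/process criteria
    (1 point each) and one outcome/product criterion (2 points): a maximum
    of 4 per skill and 12 overall."""
    return sum(p1 + p2 + 2 * product for p1, p2, product in skills)
```

For example, a child meeting every criterion with the fastest time band on both trials would score the maximum 56 on the CAMSA and 12 on the Dragon Challenge stability skills.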

Mental health

Internalising and externalising mental health difficulties were measured in classrooms using the Strengths and Difficulties Questionnaire (SDQ) [40] and the Me and My Feelings questionnaire (MMF) [41]. The 25-item SDQ includes five subscales related to perceived emotional difficulties, behavioural difficulties, hyperactivity, peer relationship difficulties, and prosocial behaviour in the last six months. Each subscale consists of five items which are scored on a 3-point scale ranging from 0 (‘not true’) to 2 (‘certainly true’). Items 7, 11, 14, 21, and 25 are reverse-scored. Scores for each subscale are computed by summing their respective items and range from 0 to 10. A total difficulties score reflective of overall mental health can also be computed by summing the four mental health difficulties subscales, ranging from 0 to 40, with higher scores reflecting increased mental health difficulties. Prosocial behaviour is not included within the total difficulties score as it is the positive mental health subscale of the SDQ. When the SDQ is used with community samples it is recommended that the broad constructs of internalising difficulties (emotional difficulties and peer relationships), externalising difficulties (behavioural difficulties and hyperactivity), and overall mental health and prosocial behaviour are reported [42]. Computed scores for each of the five subscales and the total difficulties scale can be classified using a four-band categorisation: close to average, slightly raised, high, and very high [43]. Internal consistency of the SDQ for overall mental health, internalising difficulties, externalising difficulties, and prosocial behaviour was Cronbach’s α = 0.73, 0.76, 0.76, and 0.72 at T0, respectively. At T1 the corresponding Cronbach’s α values were 0.78, 0.78, 0.77, and 0.69.
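As a minimal scoring sketch: the item-to-subscale allocation below follows the standard published SDQ scoring key and is shown for illustration only; it should be verified against the official key before use.

```python
# Item numbers (1-25) per subscale; standard SDQ allocation (illustrative --
# check against the official SDQ scoring key).
SUBSCALES = {
    "emotional":     [3, 8, 13, 16, 24],
    "behavioural":   [5, 7, 12, 18, 22],
    "hyperactivity": [2, 10, 15, 21, 25],
    "peer":          [6, 11, 14, 19, 23],
    "prosocial":     [1, 4, 9, 17, 20],
}
REVERSED = {7, 11, 14, 21, 25}  # reverse-scored: 2 -> 0, 1 -> 1, 0 -> 2

def score_sdq(responses):
    """responses: dict of item number -> raw score 0..2
    ('not true'..'certainly true')."""
    item = {i: (2 - r if i in REVERSED else r) for i, r in responses.items()}
    scores = {name: sum(item[i] for i in items)
              for name, items in SUBSCALES.items()}
    # Broad constructs recommended for community samples [42]
    scores["internalising"] = scores["emotional"] + scores["peer"]
    scores["externalising"] = scores["behavioural"] + scores["hyperactivity"]
    # Prosocial behaviour is excluded from the total difficulties score
    scores["total_difficulties"] = (scores["emotional"] + scores["behavioural"]
                                    + scores["hyperactivity"] + scores["peer"])
    return scores
```

Note that a child answering ‘not true’ (0) to every item still scores 10 total difficulties, because the five reverse-scored items contribute 2 points each.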

The MMF questionnaire is a 16-item school-based measure of child mental health. It covers the domains of emotional difficulties (10 items) and behavioural difficulties (6 items). The children respond to each statement asking how they feel by selecting ‘never’, ‘sometimes’, or ‘always’. These responses are scored on a 3-point scale ranging from 0 (never) to 2 (always), with item 15 reverse-scored. Scores for overall mental health as well as emotional difficulties and behavioural difficulties are computed by summing their respective items. These range from 0 to 32, 20, and 12, respectively. The MMF has demonstrated acceptable validity and reliability [41, 44] and scores for the emotional and behavioural difficulties subscales can be categorised as indicating borderline or clinically significant difficulties [41]. Cronbach’s α internal consistency values for overall mental health, emotional difficulties, and behavioural difficulties at T0 were 0.84, 0.85, and 0.81, respectively. The corresponding values at T1 were 0.89, 0.85, and 0.82.

Wellbeing

The KIDSCREEN-10 questionnaire was used as a measure of wellbeing. KIDSCREEN-10 is a 10-item questionnaire, which asks participants how they felt in the last week [45]. The questions reflect the factors of physical well-being, psychological well-being, autonomy, parent relations, peers and social support, and school environment, which are derived from the 27-item version of KIDSCREEN. Responses to each question are recorded using a 5-point Likert scale ranging from 1 (Not at all/Never) to 5 (Extremely/Always). Questions 3 and 4 are reverse-scored, then scores for each question are summed before being converted to T-scores using the methodology described in the KIDSCREEN administration manual [46]. Cronbach’s α at T0 and T1 was 0.75 and 0.80, respectively.
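A minimal sketch of the raw scoring step described above; the subsequent Rasch-based T-score conversion relies on the lookup tables in the KIDSCREEN manual and is not reproduced here.

```python
def kidscreen10_raw(responses):
    """responses: the 10 item scores in questionnaire order, each 1
    (Not at all/Never) to 5 (Extremely/Always). Items 3 and 4 are
    reverse-scored (6 - score) before summing; the raw sum (range 10-50)
    is then converted to a T-score using the tables in the KIDSCREEN
    administration manual (not reproduced here)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    reverse = {3, 4}
    return sum(6 - r if i in reverse else r
               for i, r in enumerate(responses, start=1))
```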

Global self-worth

The Global Self-Worth (GSW) sub-scale of the Self-Perception Profile for Children [47] was used to measure GSW. The GSW sub-scale consisted of six items presented in a structured alternative format. This format pairs statements such as ‘Some children are very happy being the way they are’ BUT ‘Other children wish they were different’. Children first decided which kind of child they were most like, after which they decided whether the description was ‘really true for me’ or ‘sort of true for me’. Responses were scored on a four-point scale, where 1 indicated the lowest and 4 the highest level of GSW. On the GSW sub-scale, items 1, 2, and 6 were reverse-scored. Internal consistency was Cronbach’s α = 0.81 (T0) and 0.84 (T1).

Peer support

The peer support sub-scale of the Student Resilience Survey [48] consisted of 12 questions asking about support from other children at school. The children’s responses were scored on a 5-point scale ranging from 1 (never) to 5 (always). The mean of the 12 scores provided a measure of peer support [48]. Internal consistency for T0 and T1 was Cronbach’s α = 0.90, 0.91, respectively.

Social influences on physical activity

Social influences on the children’s physical activity were measured using the Social Influences Scale [49]. This consisted of eight statements relating to friends’ and family members’ influences on the child’s activity in the previous two weeks. Each statement was accompanied by a Yes/No response option, scored as 1 or 2, respectively. Cronbach’s α at T0 and T1 was 0.76 and 0.79, respectively.

Encouraging friends to be active

A single question asked how often, during a typical week, the children encouraged friends to be active [50]. The response options of ‘never’, ‘sometimes’, and ‘every day’ were scored as 0, 1, and 2, respectively. All questionnaires were completed at T0 and T1 in classrooms with guidance and instructions from the research team and in the presence of the class teachers.

School-day physical activity and sedentary time

Physical activity and sedentary time during school were assessed using ActiGraph GT9X triaxial accelerometers. Participants were asked to wear the devices on their non-dominant wrist for 24 hours per day over 9 consecutive days at T0 and T1. The devices were initialised to record at 100 Hz and the subsequent data were downloaded using ActiLife (version 6.13.4). The raw data files (gt3x format) were processed in R using the GGIR package version 2.8-2 [51]. Signal processing included autocalibration using local gravity as a reference [52], detection of implausible values, and identification of non-wear. Non-wear was imputed by default in GGIR, whereby invalid data were imputed by the average at similar time points on other days of the week [53]. School start and end times were used to calculate school day durations and subsequent data processing was relative to these time windows. The raw triaxial accelerometer signals were converted to one summary measure of acceleration (Euclidean Norm Minus One; ENMO) expressed in milligravitational units (mg) [53]. ENMO values were reduced to 5-s epochs and averaged per school day to represent average acceleration as a measure of activity volume [54]. The intensity gradient, which represents the inverse relationship between time and activity intensity, was also calculated as a measure of the children’s school day intensity profile [54]. Child-specific non-dominant wrist ENMO cut-points of 48 mg [55], 201 mg, and 707 mg [56] defined the thresholds for estimated sedentary time and light physical activity (LPA), moderate physical activity (MPA), and vigorous physical activity (VPA), respectively. Participants’ accelerometer data were included in the analytical sample if valid wear was recorded on at least three valid school days, and if post-calibration error was <10 mg. A valid day was defined as accelerometer wear for at least 95% of the school day.
The 5% buffer accounted for short, accumulated periods of recorded non-wear due to sustained periods of stillness during class time. During the accelerometer wear protocol, daily rainfall and ambient temperature were recorded for the locations corresponding to each school’s postcode (https://www.metoffice.gov.uk).
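A simplified sketch of the epoch-level reduction described above (the full GGIR pipeline also handles calibration, non-wear imputation, and the intensity gradient; the cut-points are those cited in the text):

```python
EPOCH_S = 5  # each ENMO value is a 5-s epoch average

# Non-dominant wrist ENMO cut-points (mg) cited in the text; ENMO is
# non-negative by definition (negative values are truncated to zero).
CUTS = {
    "sedentary": (0, 48),
    "lpa": (48, 201),
    "mpa": (201, 707),
    "vpa": (707, float("inf")),
}

def school_day_summary(enmo_mg):
    """Summarise one school day of per-epoch ENMO values (mg) into average
    acceleration (activity volume) and minutes per intensity band."""
    out = {"average_acceleration_mg": sum(enmo_mg) / len(enmo_mg)}
    for band, (lo, hi) in CUTS.items():
        n_epochs = sum(1 for e in enmo_mg if lo <= e < hi)
        out[f"{band}_min"] = n_epochs * EPOCH_S / 60.0
    return out
```

For example, a day containing one minute of epochs in each band (say, at 10, 100, 300, and 800 mg) yields one minute per category and an average acceleration of 302.5 mg.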

Additional measures

Anthropometrics

All consenting children had their height and weight measured at T0 and T1 (to the nearest cm/kg) using a portable stadiometer and digital scale (both Seca, Hamburg, Germany). Body mass index (BMI) and BMI z-scores (BMIz) were calculated for each participant [57] and international age- and sex-specific BMI cut-points applied to determine weight status [58]. For all measurements, participants wore light clothing with shoes removed in an area away from the other data collection activities.

Demographic information

Participants’ dates of birth, home postcodes, and ethnicity were obtained from the schools’ information management systems. Decimal age and 2019 EIMD scores [32] were calculated using data collection dates and home postcodes, respectively. EIMD scores provide an area-level relative measure of deprivation based on income, employment, education, health, crime, housing, and living environment. Area-level SES was represented by the EIMD decile score for each participant. The Family Affluence Scale (FAS) [59] was included in the questionnaire pack completed by the children in school. The FAS consists of six questions asking about home and family indicators of affluence (e.g., car ownership, own bedroom, foreign holidays). The response options differed depending on the question asked but offered two, three, or four responses, scored from 1 to 2, 3, or 4, respectively. Question scores were summed to give the overall FAS score.

Recognition of school and child participation

Each school received a bag of multi-skills and physical activity equipment to the value of £300 which was delivered to them prior to the intervention starting. All schools received the same equipment and were encouraged to make it available to children during non-curricular times (e.g., recess) and during PE lessons. At the end of the intervention each child received a £10 gift voucher as a gesture of recognition for their engagement in the study.

Data analysis

Percentage values were calculated for the traffic light criteria to determine which progression zone each one corresponded to. Descriptive statistics were calculated using IBM SPSS (Armonk, NY) for the feasibility and child-level outcomes at T0 and T1. Data provision rates for questionnaire data and accelerometer data were also recorded at T0 and T1. The children’s qualitative visual data and recorded transcription data were pooled for complementary purposes, adding context and depth to the drawings. Credibility was enhanced and researcher biases were reduced by triangulating the two data sources. Verbatim quotes and drawings from the children’s participatory focus groups and teachers’ semi-structured interviews were extracted to exemplify the participants’ experiences and perceptions.

Results

Feasibility outcomes traffic light progression criteria

School and participant recruitment

Fig 1 presents a CONSORT flow diagram of participants through the study. All six schools initially approached agreed to take part in the study, but one withdrew before the study commenced due to staff absences. This meant 83% of the target number of schools were recruited (progression criterion 1 = amber; Table 3). The recruited schools each had only one Year 5 class, which limited the maximum number of children invited to participate to n = 144. Overall, 54% (n = 108) of the maximum target number of 200 children provided written parental informed consent to take part (progression criterion 2 = amber). This represented a response rate of 75% of the invited children, ranging from 68% to 93% between schools. The children were aged 9.6 ± 0.4 years, were predominantly of White British ethnicity (89.8%) with healthy weight status (68.5%), and just over half (51.9%) were girls. In each school one PE teacher and one Year 5 class teacher were recruited (progression criterion 3 = green).

Fig 1. Flow of participants through the Move Well, Feel Good feasibility intervention (based on CONSORT flow diagram).


Table 3. Summary of traffic light feasibility progression criteria results.
Progression criteria Results Progression decision
1. School recruitment (targeting n = 6) 6 schools recruited (1 withdrew) leaving 5 for the duration of the study. No large (i.e., 2-class entry) schools were recruited, which limited the maximum number of invited children. 83% of target number of schools recruited.
Amber: Discuss and amend.
2. Child participant recruitment (targeting n = 200) 108 of the 144 invited children (75%) consented and were recruited. 54% of target number of children recruited.
Amber: Discuss and amend.
3. Deliverer recruitment 1 class teacher and 1 PE teacher in each of the recruited schools. 100% of target number of deliverers recruited.
Green: Go.
4. Intervention dose Across the 5 schools, 34 lessons out of a possible 45 were delivered over the 9-week programme. No lessons were delivered in 1 school following a decision by that school’s PE Coordinator. 76% of lessons delivered.
Amber: Discuss and amend.
5. Intervention adherence Across the 5 schools, the 108 recruited children attended the 34 delivered MWFG PE lessons (from a possible 45). This includes the school where no lessons were delivered. 76% of recruited children attended ≥75% of lessons.
Green: Go.
6. Acceptability of intervention 90 children reported that the programme was enjoyable and they learned new skills.
3 PE teachers (2 did not respond) and all 5 class teachers reported that the programme and resources were straightforward to follow and use, and that the programme was interesting and engaging.
86% of recruited children found the programme acceptable. 83% of PE teachers and 90% of class teachers found the programme acceptable (87% combined).
Green: Go.
7. Acceptability of data collection methods 95 children reported that the data collection sessions were enjoyable. All 5 class teachers reported that the data collection methods worked well and children found them interesting and engaging. 91% of recruited children found the data collection methods acceptable.
80% of class teachers found the data collection methods acceptable.
Green: Go.
8. Secondary outcome data collected at baseline Mental health (n = 108)
Motor competence (n = 108)
Wellbeing (n = 105)
Self-worth (n = 105)
Peer-support (n = 97)
Peer influences (n = 102)
Encouraging friends (n = 94)
Accelerometer (n = 86)
Anthropometrics (n = 108)
80%-100% of children completed secondary outcome measures at baseline.
Green: Go.
9. Follow-up secondary outcome data attrition Mental health; attrition = 3%
Motor competence; attrition = 12%
Wellbeing; attrition = 5%
Self-worth; attrition = 4%
Peer-support; attrition = 10%
Peer influences; attrition = 6%
Encouraging friends; attrition = 7%
Accelerometer; attrition = 35%
Anthropometrics; attrition = 12%
65%-97% of children completed secondary outcome measures at follow-up (T1).
Attrition rate ranged from 3%-35% across the measures.
Amber: Discuss and amend.

Intervention delivery, adherence, and acceptability

The PE teachers delivered 76% of the core MWFG lessons (progression criterion 4 = amber), reflecting 34 lessons taught from a possible 45 across the five schools. This figure includes one school where no lessons were taught, following a decision by that school’s PE Coordinator that was not communicated to the research team. Seventy-six percent of the recruited children attended over 75% of the delivered lessons (progression criterion 5 = green). The MWFG programme was perceived as enjoyable by 86% of the children, with no systematic differences in demographic characteristics among the few children who were unsure about or did not enjoy the programme. Similarly, 83% of the PE teachers and 90% of the class teachers found the programme straightforward to deliver and engaging for the children (progression criterion 6 = green).

Data collection acceptability and feasibility

Ninety-one percent of the recruited children reported that the data collection sessions were enjoyable. This was corroborated by 80% of the class teachers who observed that data collection worked well and was interesting and engaging for the children (progression criterion 7 = green). At T0, secondary outcome measures were completed by between 80% and 100% of recruited children (progression criterion 8 = green). Data attrition at T1 ranged from 3% (mental health questionnaires) to 35% (accelerometers) (progression criterion 9 = amber). The next highest level of attrition was 12% for the motor competence and anthropometric measures.

Qualitative acceptability and feasibility results

Thirty children participated in the write, draw, show, tell participatory focus groups. When asked to write or draw the most memorable activities from MWFG, the children most frequently drew the motor competence assessment activities from the data collection sessions. This was consistent across the five focus groups. The points that the children referred to most consistently were the fun and enjoyment of the data collection activities themselves, and feeling able both to tell the truth and to express their feelings when completing the questionnaires. Examples of the visual (Fig 2) and spoken data are presented to illustrate this.

Fig 2. Drawing from a boy aged 10, illustrating the activities that were memorable to him throughout the MWFG programme.


“So, I actually drew the whole thing [motor competence assessment/practical skills circuit]…I think that’s probably the most fun thing we did [CAMSA and Dragon Challenge] because all the things you had to do are one. I did not like the scale [anthropometric measures] because it was very, very boring”

[Focus group participant, School 2].

“I liked the survey because when it’s a piece of paper asking you the questions you’re more likely to tell the truth…so it’s nice to let other people know without having to say anything.”

[Girl, School 3].

All children spoke positively about the motor competence assessments (e.g., “I really liked the ones with the jumping in the hoops one-footed, and the balancing spot” [Boy, School 1]), with mixed comments about the anthropometric measures. There were also some varied comments about the accelerometers (“The watch kind of irritated me, but it was good” [Girl, School 1]), although most children spoke positively about these and enjoyed the novelty of wearing them.

The teachers’ interview responses typically mentioned the MWFG teaching resources, the engagement of the children, the timing of the intervention, delivery constraints, and data collection methods. Teachers commented positively on the volume, detail, and clarity of the teaching resources and noted that the content was appropriate for the children’s abilities (e.g., “…some of them have got it straight away and some it took a little bit longer, but I don’t think there was anyone that really, really struggled with the activities that were there.” [PE specialist, male, School 4]). Importantly, the MWFG lesson activities were deemed to be inclusive and engaging for the children. One teacher stated, “I’ve got some children who are very reluctant to take part in PE…. I thought it [the programme overall] was great because a lot of those children who do struggle really enjoyed the programme and really wanted to take part, and it was fun.”

[Class teacher, female, School 4].

Teachers felt that the 10-week duration of MWFG was appropriate, but that the timing in the first term back after the summer break was sometimes problematic. This was partly because of how busy that period was (e.g., “I would never, ever do it in the autumn term ever again because it is literally the worst term to do it in.” [Class teacher, female, School 4]) and partly because of child absences and competing school priorities in December (e.g., festive celebrations). Moreover, timing issues were sometimes compounded by limited availability of indoor practical spaces for MWFG lesson delivery or data collection (e.g., “…every time you’ve emailed saying ‘when can we come?’, the reality’s been, there isn’t a space to do that. So yes, that has been quite tricky, really.”

[Class teacher, female, School 5]).

The teachers were generally positive about the data collection methods. In particular, references were made to the accelerometers (e.g., “… [children] absolutely loved the watches…” [Class teacher, female, School 4]) and questionnaires (e.g., “…there’s definitely calming effects with them critically thinking about themselves, noticing themselves, noticing how they’re feeling…the questionnaire, to evaluate yourself, can have that effect…” [Class teacher, female, School 2]). In contrast, one teacher mentioned how some children were reluctant to be weighed and whether additional privacy measures could have been taken in these situations (“I have got a few [pupils] who are very self-conscious, because obviously they are overweight for their age at this point…but maybe it would have been better to take them…just a bit further away from the others…”

[Class teacher, female, School 4]).

Child-level outcomes

Fifty-six girls and 52 boys (mean age = 9.6 ± 0.4 years) took part in the study. Mean BMI z-score was 0.69 ± 1.4; 68.5% of the children were categorised as healthy weight and the remaining 31.5% were in the overweight/obese category. On average, the children resided in areas of relatively high deprivation (mean EIMD decile = 3.6 ± 3.0). Descriptive results for the secondary outcomes at T0 and T1 are detailed in Table 4. Compared to T0, motor competence scores at T1 for the CAMSA and Dragon Challenge stability skills were 13.7% and 38.3% higher, respectively. The 95% confidence intervals for these T1-T0 differences did not cross zero, suggesting that the changes were meaningful. Favourable changes in mental health outcomes were also observed between T0 and T1, with meaningful differences indicated for SDQ-assessed total difficulties, externalising difficulties, and prosocial behaviour. Wellbeing and psychosocial outcome scores were generally higher at T1, but the differences were negligible. Children wore the accelerometers for 385 min·school-day-1 (99.7% of total school-day duration) on 3.9 (T0) and 4.0 (T1) school-days per week. Children were less active and more sedentary at T1 than at T0, when the expected seasonal changes in weather conditions were also observed. Specifically, between T0 and T1 the average ambient day-time temperature decreased from 13.3°C to 3.8°C, and rainfall increased from 1.7 mm·day-1 to 3.1 mm·day-1.

Table 4. Descriptive comparison of mean (SD) secondary outcomes at T0 and T1.

Outcome n T0 T1 T1-T0 mean difference (95% CI)
Motor competence
CAMSA overall score 95 29.91 (8.05) 34.01 (7.27) 4.11 (2.95, 5.26)
DC stability skills score 95 4.59 (2.51) 6.35 (2.42) 1.76 (1.25, 2.26)
Mental health
MMF: emotional difficulties 105 6.46 (4.10) 6.31 (4.13) -0.15 (-0.82, 0.54)
MMF: behavioural difficulties 105 2.42 (2.40) 2.10 (2.27) -0.32 (-0.77, 0.13)
SDQ: total difficulties 105 12.42 (6.98) 11.33 (6.90) -1.09 (-2.07, -0.10)
SDQ: externalising difficulties 105 5.96 (3.91) 5.30 (3.74) -0.66 (-1.21, -0.11)
SDQ: internalising difficulties 105 6.43 (4.19) 6.02 (4.14) -0.40 (-1.06, 0.21)
SDQ: prosocial behaviour 105 7.92 (2.17) 8.52 (1.69) 0.60 (0.26, 0.94)
Wellbeing
KIDSCREEN-10: T-scores 100 50.12 (9.61) 50.48 (11.10) 0.37 (-1.50, 2.23)
Psychosocial outcomes
Global self-worth 102 3.28 (0.70) 3.25 (0.75) -0.03 (-0.16, 0.11)
Support from other children 87 3.92 (0.91) 3.80 (0.89) -0.12 (-0.28, 0.03)
Social influences on physical activity 96 13.43 (2.28) 13.74 (2.28) 0.31 (-0.22, 0.85)
Encourage friends’ physical activity 88 1.22 (0.65) 1.24 (0.63) 0.02 (-0.14, 0.18)
School-day physical activity outcomes
School-day duration (min·day-1) 385.9 (6.6) 385.9 (6.6)
Number of valid days 57 3.91 (0.29) 3.96 (0.53) 0.05 (-0.12, 0.22)
Wear time (min·day-1) 57 385.20 (7.20) 384.60 (8.40) -0.48 (-1.50, 0.60)
SED (min·day-1) 57 227.01 (27.87) 243.27 (30.03) 16.26 (5.95, 26.58)
LPA (min·day-1) 57 158.06 (31.95) 138.84 (32.22) -19.22 (-30.86, -7.58)
MPA (min·day-1) 57 29.36 (8.95) 25.43 (9.49) -3.93 (-6.86, -1.00)
VPA (min·day-1) 57 4.88 (2.36) 4.40 (3.10) -0.48 (-1.26, 0.30)
MVPA (min·day-1) 57 34.24 (10.75) 29.83 (12.25) -4.41 (-8.02, -0.81)
Average acceleration (mg) 57 80.04 (16.07) 70.81 (20.12) -9.23 (-15.20, -3.30)
Intensity gradient 57 -1.81 (0.15) -1.89 (0.16) -0.08 (-0.12, -0.04)

Note. Bold font highlights 95% confidence intervals not crossing zero which suggests meaningful T1-T0 differences; CAMSA: Canadian Agility and Movement Skill Assessment; DC: Dragon Challenge; MMF: Me and My Feelings questionnaire; SDQ: Strengths and Difficulties Questionnaire; SED: sedentary time; LPA: light physical activity; MPA: moderate physical activity; VPA: vigorous physical activity; MVPA: moderate-to-vigorous physical activity.
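As a quick arithmetic check, the relative T1-T0 changes quoted in the text can be reproduced from the Table 4 means. The helper function below is purely illustrative and is not part of the study's analysis code:

```python
def pct_change(t0_mean, diff):
    """T1-T0 difference expressed as a percentage of the T0 mean, to 1 dp."""
    return round(diff / t0_mean * 100, 1)

# Values taken from Table 4
camsa = pct_change(29.91, 4.11)    # 13.7  (CAMSA overall score)
dragon = pct_change(4.59, 1.76)    # 38.3  (Dragon Challenge stability skills)
sdq = pct_change(12.42, -1.09)     # -8.8  (SDQ total difficulties)
```

The same calculation applied to SDQ prosocial behaviour (0.60 / 7.92) gives the +7.6% reported in the abstract.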

Discussion

Feasibility outcomes

This study evaluated the feasibility and acceptability of the MWFG intervention and reported changes in child-level outcomes. Of the nine traffic light feasibility criteria for progressing to a pilot trial, five were fully met (i.e., green = go). These related to deliverer recruitment (100% of target number), intervention adherence (76% of children attended ≥75% of lessons), intervention acceptability (86%, 83%, and 90% of children, PE teachers, and class teachers, respectively), acceptability of data collection methods (91% of children and 80% of class teachers), and secondary outcome data collected at baseline (measures completed by 80%-100% of children). The remaining progression criteria were partially met (i.e., amber = discuss and amend) and related to school recruitment (83% of target number), child recruitment (54% of maximum target number), intervention dose (76% of lessons delivered), and follow-up secondary outcome data attrition (T1 measures completed by 65%-97% of children). Of the child-level outcomes, favourable differences in motor competence, mental health, and prosocial behaviour were observed.

School recruitment was initially 100% until one school withdrew shortly after agreeing to take part, due to multiple staff absences, a common reason for attrition in school-based research studies. Jago and colleagues suggested the need for increased engagement at the school recruitment stage to better understand school staffing capacities and help avoid such situations [60]. We recruited 54% of the maximum target number of 200 children. This maximum was based on our previous work in the same locality [61], from which we assumed that we would recruit at least two ‘large’ schools (i.e., those with at least two Year 5 classes). This assumption was not realised, as all schools that agreed to take part had one Year 5 class, which limited the maximum target sample of children to n = 144. A replacement school was not sought because, at the time, it was felt that extending the duration of school recruitment would place pressure on the timeline for the co-production phase. As a result, this child recruitment criterion did not meet the ‘green/go’ threshold of ≥75% of children. To an extent, this lower recruitment rate was offset by the high proportion of targeted children providing parental/carer consent to take part (75%), which was greater than that reported in other UK primary school motor competence/physical activity feasibility interventions [60, 62]. Notwithstanding this favourable participation rate, the ‘opt-in’ method of parental/carer informed consent that we used likely inhibited the potential to recruit more children. Our previous work with schools in the same location, which was ethically approved to use ‘opt-out’ consent, returned a 95% participation rate [61]. An opt-out consent approach reduces burden on researchers, teachers, and parents, yields higher response rates [63], and reduces selection bias [64]. Moreover, when an opt-out approach is adopted, parental/carer consent is supplemented by school management acting in loco parentis, which provides a further ethical safety net [65]. To progress MWFG to a pilot trial that achieves target recruitment levels within planned timescales, a convincing case for opt-out consent would need to be made to the institutional ethics committee.

Thirty-four MWFG PE lessons were taught from a possible 45 (76% of scheduled lessons), which did not meet the ≥80% green/go progression criterion. In one school, two lessons were not delivered because the PE teacher was absent due to injury. Of greater significance was the decision by another school not to deliver any of the MWFG lessons. Instead, the PE teacher was instructed by the more senior PE Coordinator to teach the school’s regular PE curriculum, and this decision was only communicated to the researchers by the PE teacher during the end-of-programme interviews. This PE Coordinator had taken part in the phase 1 co-creation workshops, so the decision was not anticipated, particularly without any consultation with the research team while the study was ongoing. Notwithstanding this specific situation, it is important to recognise that schools are fluid and dynamic environments where unpredicted issues can arise, as happened here [66]. Further, teachers have a relatively high degree of professional autonomy as well as diverse responsibilities (e.g., subject coordination, leadership roles) [67]. As a result, teachers have many competing priorities and, in most cases, will view involvement in research projects as additional workload [66]. We can thus only speculate that the decision not to allow the MWFG lessons to be taught was based on a combination of factors, and that time pressures to meet other demands may have taken priority over the need to discuss matters with the research team.

The factors causing these three amber progression criteria results could have been temporary and were, arguably, remediable [37]. For example, the school withdrawal due to staff illness and the below-target child recruitment could have been remedied by extending the school recruitment phase and widening the geographical area to ensure that schools with more than one Year 5 class were recruited. This may have delayed the start of the co-creation phase, but on reflection there was some flexibility to adapt the timings of the co-creation workshops. The third amber criterion, which related to one school not delivering the MWFG lessons, could have been remedied by a discussion between the school and the research team to agree a resolution. Avery et al. recommend that where progression criteria fall into the amber ‘discuss and amend’ zone as a result of remediable and temporary factors, this should not compromise progression to a subsequent trial [37].

The fourth amber criterion related to the 35% accelerometer data attrition at T1. As has been observed in similar studies, loss of data at follow-up was partly due to child absences at the time of T1 data collection (n = 11) [62]. A significant cause, however, which has also been reported previously, was that 27 children did not achieve the accelerometer wear criteria [62]. As was the case in MWFG, Johnstone and colleagues’ Active Play feasibility intervention collected accelerometer data in September and December. They experienced significant accelerometer data attrition and suggested that because December is typically a congested time in UK primary schools (e.g., end-of-term assessments, festive celebrations and performances, wet weather for outdoor PE), it may not be suitable to collect accelerometer data in this month [62]. In addition to these factors, there was a spate of illness among the children around the time of T1 data collection. One teacher summed this up by stating, “It’s Christmas week and you haven’t managed to get all the results…because of parties and bugs [illnesses] and all of that” [Class teacher, female, School 5]. The accelerometer outcomes were included to gauge whether the MWFG programme influenced school-day movement behaviours, as is inferred in the EESH model. If accelerometers were retained in a pilot trial to provide secondary outcome data, we would need to consider modifications to the wear protocol (e.g., more frequent parental text message reminders to wear the device, less stringent wear time inclusion criteria based on sensitivity analysis, more frequent or higher value incentives). Aside from the accelerometer data, the rate of attrition ranged from 3% to 12%, which met the green/go progression criterion of ≤25% and is consistent with other school-based physical activity feasibility studies [68, 69].

The teachers’ comments relating to the timing of the intervention and data collection in December were supported by others who have also highlighted the general busyness of schools during the September to December term. Moreover, the time commitment needed to accommodate the programme was raised in the MWFG teacher interviews (e.g., “It’s [data collection activities] just been a massive time commitment, the second we give you [the research team] two afternoons, we’ve then got to find those two afternoons to catch up our curriculum elsewhere.” [Class teacher, female, School 5]). Time available in schools for interventions, and for teachers to engage with them, is arguably the most significant barrier to successful project implementation in this setting [66].

The other five progression criteria achieved green/go status, which reflected high levels of intervention acceptability in respect of children’s attendance and enjoyment of lessons and engaged participation in the research data collection. One child noted that the CAMSA and Dragon Challenge motor competence assessments were “…probably the most fun thing we [pupils] did because it had so many different fun activities in it” [Boy, School 2]. Some children also valued completing the mental health questionnaires as a way of anonymously expressing their feelings.

Of note were the findings that PE and class teachers were recruited in all schools, and 87% of all teachers agreed that the MWFG lesson resources were easy to use and interesting for the children. One PE specialist stated, “I think the resources were good, it had all the details in it that you needed. I think the kids received it quite well, they enjoyed doing different activities and the games” [PE specialist, male, School 4]. This view was consistent with those of the class teachers (e.g., “It [MWFG lesson resource pack] was good being cross-curricular, I was pleasantly surprised how much it referenced and touched on other subjects, like working together as a team for PSHE [Personal, Social, and Health Education]. I think anyone could pick up that folder, read it and use it as it’s intended to be used” [Class teacher, male, School 2]). The PE teachers particularly appreciated how the lesson plans contained embedded electronic links to the activity cards, which facilitated more efficient lesson delivery indoors and outside. One PE teacher commented that,

“The resources were perfect. You could put the cards up on the screen so the pupils could see, so there’s a demonstration, a picture, and a step-by-step rather than just listening to me and me showing it, it was good to have the points where pupils could read to each other as well”

[PE specialist, male, School 1 and 3],

and that,

“It was helpful when you put the links together for me, so I didn’t have to go backwards and forwards for the file. It was a lot easier just to click the link and I’d know where I was working then using the iPad outside”

[PE specialist, male, School 1 and 3].

However, improvements to some of the resources were also suggested, such as the inclusion of laminated activity task cards for outdoor use to complement the electronic and paper versions.

Child engagement and quality of programme resources are important indicators of intervention feasibility, as high lesson attendance is vital to children’s intervention participation and learning [60], and lesson resources need to be simple and straightforward to use [68], particularly when additional teacher lesson preparation time may not be possible [70]. Although MWFG was primarily delivered through PE lessons, the lesson content was deliberately task-focused, emphasising motor skill development through guided practice and progressive activities, which often involved peer learning. In some schools this was a welcome departure from a more traditional sport-focused PE curriculum and may have benefited a wider range of children. This point was summarised by a teacher who said,

“I feel like, for me, kids who do struggle with the sport side of things, it’s been great, because they’ve actually been able to have some success, whereas normally, if we’re doing anything like footie [football/soccer], they can’t cope because they know they can’t do it”

[Class teacher, female, school 4].

The intention of the write, draw, show, tell task was to act as a stimulus to engage the children in the focus groups. We did not intend to take a consultative approach with the children regarding the data collection process, but the data that emerged were enlightening and valuable from a feasibility and acceptability standpoint. Combining the visual and verbatim data enhanced data credibility and revealed complementary findings on children’s views, experiences, and perceptions of the data collection process that were not captured in the quantitative feasibility survey. For example, an activity might have been memorable to a child because of a negative experience (e.g., anthropometric measures considered ‘boring’), but without the supporting verbatim data to provide context this would not have been acknowledged. This further emphasises the utility of using multiple qualitative techniques with children rather than relying on one method alone [71]. Pooling multiple data sources reduced the risk of misinterpretation by researchers, provided greater depth and context to the visual data, and enhanced confidence in the findings [71]. Further, the anonymised qualitative findings regarding the data collection methods could be relayed back to schools during the recruitment phase of future school-based research, to demonstrate the feasibility and acceptability of such methods from the perspectives of both children and teachers in their own words.

Child-level outcomes

Descriptive analysis of motor competence and mental health outcomes, as the respective key EESH model exposure and outcome variables, revealed favourable changes in the hypothesised directions. The T1-T0 change scores for overall CAMSA (assessing locomotor and object control skills) (+13.7%) and Dragon Challenge stability skills (+38.3%), and the narrowness of the mean change confidence intervals, suggested that the changes were meaningful. This was reflected in the proportion of children advancing between the CAMSA motor competence categories of ‘Beginning’, ‘Progressing’, ‘Achieving’, and ‘Excelling’ at T0 and T1 (S6 File). Further, for a 10-year-old child the mean CAMSA change score of 4.11 would reflect progression from the ‘Beginning’ motor competence category to the ‘Progressing’ category, while for a child with a score around the middle of the ‘Progressing’ category, a 4.11 improvement would progress them into the ‘Achieving’ or ‘Excelling’ categories [72]. Similarly, in relation to the SDQ total difficulties score, for a child at the lower end of the UK normative categories of ‘Very high’, ‘High’, and ‘Slightly raised’ risk for overall mental health difficulties, the observed -1.09 change (-8.8%) would be sufficient to move them down to the next risk category [43]. These favourable changes in motor competence and mental health are consistent with the EESH model, but determining whether the findings support the model would require a structural equation model applied to a larger dataset.

We used the SDQ and the shorter MMF tools to assess mental health, but the magnitudes of the relative T1-T0 changes were much smaller for the MMF. It is unclear why the changes differed between the tools to the extent that they did. The more established SDQ includes more items than the MMF and was used to assess the MMF’s construct validity [44]. The MMF has previously demonstrated that it can capture clinical mental health need to a similar level as the SDQ [44], although in a community sample, detection of scores above a clinical threshold was lower than that reported for the SDQ [73]. Our motivation for including both mental health measures was to assess their acceptability among children and teachers, as well as to describe changes in children’s responses between T0 and T1. One child felt that some questions on the MMF “…don’t make sense” [Boy, School 2], but this was countered by other children who were positive about the questionnaires (e.g., “…I understood all the questions and I liked it” [Boy, School 3]). Overall, consistent negative feedback about either tool was not received, suggesting that both were broadly acceptable. Moreover, the Cronbach’s alpha inter-item reliability scores for both mental health questionnaires exceeded 0.80, suggesting that on balance the children understood the questions and responded to them in a consistent manner. The MMF has fewer items than the SDQ and its scores are distilled into separate emotional and behavioural difficulties constructs, rather than a score for overall mental health, which may have affected its sensitivity to change. A combination of these factors and the low mean SDQ total difficulties scores, which indicated that the children were generally at low risk of poor mental health [43], may have limited the potential magnitude of the MMF’s T1-T0 change scores.
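For readers unfamiliar with the inter-item reliability statistic referred to above, Cronbach’s alpha can be computed from the item-level responses as follows. This is a generic sketch with hypothetical response data, not the authors’ analysis code:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of rows, one row per respondent,
    each row holding that respondent's score on every questionnaire item."""
    k = len(items[0])                              # number of items
    cols = list(zip(*items))                       # per-item score columns
    item_var_sum = sum(variance(c) for c in cols)  # sum of item variances
    total_var = variance([sum(row) for row in items])  # variance of totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 4 respondents x 2 items with broadly consistent responses
alpha = cronbach_alpha([[3, 2], [4, 3], [5, 4], [3, 3]])  # ≈ 0.91
```

Values above 0.80, as reported here for both questionnaires, are conventionally taken to indicate good internal consistency.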

With the exception of the prosocial behaviour construct of the SDQ, we observed negligible changes in psychosocial outcomes. The individual SDQ prosocial items focused on children being considerate of others’ feelings, sharing with others, and being helpful and kind. These qualities reflected the underlying psychosocial qualities of MWFG that were embedded into the PE lesson plans and classroom activity ideas. It is possible that the SDQ prosocial behaviour items represented a more suitable measure of the intended psychosocial development in MWFG than the stand-alone questionnaires focused on self-worth, social influences and support from others, and encouragement of friends’ physical activity. If so, solely using the SDQ to assess mental health and prosocial behaviour would be significantly less burdensome for children, schools, and researchers than employing the multiple measures used in this feasibility study.

Differences in school-day sedentary time and physical activity outcomes were observed between T0 and T1. These outcomes were included in the study because physical inactivity is positioned in the EESH model as a factor influenced by poor motor competence that has a bi-directional relationship with obesity risk, and which influences the psychosocial mediators of mental health [18]. It is possible that physical inactivity and sedentary time were positively affected during discrete periods of the school-day, such as the MWFG PE lessons or classroom lessons. However, we did not analyse accelerometer data for specific lesson timings, and even if favourable changes did occur during these lessons, it is also possible that compensation effects could occur elsewhere during the school-day. The most likely explanation for the observed changes in movement behaviours was the seasonal differences in weather conditions between September and December, which were reflected by a 71% drop in temperature and 82% increase in rainfall between T0 and T1. Indeed, a recent meta-analysis has demonstrated how children’s physical activity and sedentary behaviours are adversely affected by unfavourable weather conditions [74]. In a MWFG pilot trial, analysis of physical activity data would be adjusted to account for weather conditions.
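The seasonal weather figures cited above follow directly from the reported T0 and T1 values; the short check below is illustrative only and uses the study’s own temperature and rainfall means:

```python
# Reported T0 (September) and T1 (December) values from the text
temp_t0, temp_t1 = 13.3, 3.8   # mean ambient day-time temperature, deg C
rain_t0, rain_t1 = 1.7, 3.1    # mean rainfall, mm per day

temp_drop = round((temp_t0 - temp_t1) / temp_t0 * 100)  # 71 (% drop)
rain_rise = round((rain_t1 - rain_t0) / rain_t0 * 100)  # 82 (% increase)
```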

An important strength of this study was the nine feasibility progression criteria, which guided our assessment of feasibility and acceptability. We developed the intervention content through co-creation with children, teachers, and PE teachers [31], which was positively reflected in participants’ levels of engagement. Use of mixed methods that employed validated quantitative measures and developmentally appropriate qualitative approaches afforded children and teachers opportunities to express their views and perceptions in different ways. Moreover, the intervention was conceptually underpinned by the EESH model, which informed the choice of child-level outcome measures. Overall, the design and implementation of the study (e.g., deliverer selection, prescribed intervention delivery) limited the ‘risk of generalisability biases’ [75]. Limitations of the study were the lack of a comparison group, the relatively short duration, and the mixed quality of teacher-researcher communication, which resulted in non-delivery of the intervention in one school. Further, a small number of schools and participants took part which, although typical of feasibility studies, limits the generalisability of the findings.

Conclusions

Five of the nine progression criteria were rated green, while the remaining amber criteria, which related to school and child recruitment and intervention delivery, were all remediable. These areas could be addressed during the recruitment period by targeting a larger pool of schools, using opt-out consent, improving researchers' understanding of staffing capacities and overloaded periods of the school year, and reinforcing the importance of early communication between schools and researchers. Moreover, favourable changes were observed in children's motor competence, mental health, and prosocial behaviour. MWFG is therefore an acceptable and feasible school-based motor competence intervention to promote positive mental health. With specific content and delivery modifications, it could be progressed to a pilot trial using a more robust design.

Supporting information

S1 File. Example lesson plan.

(PDF)

S2 File. Example non-curricular skill snack activity card.

(PDF)

S3 File. Detailed description of feasibility outcome measure methods and administration.

(PDF)

S4 File. TREND checklist.

(PDF)

S5 File. Ethics-approved protocol.

(PDF)

S6 File. Proportion of children in each CAMSA category.

(PDF)


Acknowledgments

The authors gratefully acknowledge the engagement and support of all the participating children and teachers, and staff at West Lancashire Sport Partnership.

Data Availability

The data are available from https://osf.io/qfmc7/.

Funding Statement

This work was supported by a grant from The Waterloo Foundation (#1669/4195) that was awarded to SJF, RT, LF, LMB, ZK, and EA. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Wunsch K, Nigg C, Niessner C, Schmidt SCE, Oriwol D, Hanssen-Doose A, et al. The impact of COVID-19 on the interrelation of physical activity, screen time and health-related quality of life in children and adolescents in Germany: Results of the Motorik-Modul Study. Children. 2021;8:98. doi: 10.3390/children8020098
2. Xiang M, Liu Y, Yamamoto S, Mizoue T, Kuwahara K. Association of changes of lifestyle behaviors before and during the COVID-19 pandemic with mental health: a longitudinal study in children and adolescents. Int J Behav Nutr Phys Activity. 2022;19(1):92.
3. Ravens-Sieberer U, Kaman A, Erhart M, Devine J, Schlack R, Otto C. Impact of the COVID-19 pandemic on quality of life and mental health in children and adolescents in Germany. Eur Child Adol Psychiatry. 2022;31:879–889. doi: 10.1007/s00787-021-01726-5
4. Raw J, Waite P, Pearcey S, Creswell C, Shum A, Patalay P. Examining changes in parent-reported child and adolescent mental health throughout the UK's first COVID-19 national lockdown. J Child Psychol Psychiatry. 2021;62:1391–401. doi: 10.1111/jcpp.13490
5. Salway R, Walker R, Sansum K, House D, Emm-Collison L, Reid T, et al. Screen-viewing behaviours of children before and after the 2020–21 COVID-19 lockdowns in the UK: a mixed methods study. BMC Public Health. 2023;23(1):116. doi: 10.1186/s12889-023-14976-6
6. Salway R, Foster C, de Vocht F, Tibbitts B, Emm-Collison L, House D, et al. Accelerometer-measured physical activity and sedentary time among children and their parents in the UK before and after COVID-19 lockdowns: a natural experiment. Int J Behav Nutr Phys Activity. 2022;19(1):51.
7. Sport England. Active Lives Children and Young People Survey Coronavirus (Covid-19) Report. London: Sport England; 2021.
8. Moore SA, Faulkner G, Rhodes RE, Brussoni M, Chulak-Bozzer T, Ferguson LJ, et al. Impact of the COVID-19 virus outbreak on movement and play behaviours of Canadian children and youth: a national survey. Int J Behav Nutr Phys Activity. 2020;17(1):85. doi: 10.1186/s12966-020-00987-8
9. Tyler R, Mackintosh KA, Foweather L, Edwards LC, Stratton G. Youth motor competence promotion model: a quantitative investigation into modifiable factors. J Sci Med Sport. 2020;23:955–61. doi: 10.1016/j.jsams.2020.04.008
10. Foweather L, Rudd JR. Fundamental movement skill interventions. In: Brusseau TA, Fairclough SJ, Lubans DR, editors. The Routledge Handbook of Youth Physical Activity. London: Routledge; 2020. p. 715–37.
11. Stodden DF, Goodway JD, Langendorfer SJ, Robertson MA, Rudisill ME, Garcia C, et al. A developmental perspective on the role of motor skill competence in physical activity: An emergent perspective. Quest. 2008;60:290–306.
12. Pombo A, Luz C, de Sá C, Rodrigues LP, Cordovil R. Effects of the COVID-19 lockdown on Portuguese children's motor competence. Children. 2021;8(3):199. doi: 10.3390/children8030199
13. Mancini VO, Rigoli D, Heritage B, Roberts LD, Piek JP. The relationship between motor skills, perceived social support, and internalizing problems in a community adolescent sample. Frontiers Psychol. 2016;7(543).
14. Wilson A, Piek JP, Kane R. The mediating role of social skills in the relationship between motor ability and internalizing symptoms in pre-primary children. Infant Child Dev. 2013;22:151–64.
15. Rigoli D, Piek JP, Kane R. Motor coordination and psychosocial correlates in a normative adolescent sample. Pediatrics. 2012;129:e892–e900. doi: 10.1542/peds.2011-1237
16. Viholainen H, Aro T, Purtsi J, Tolvanen A, Cantell M. Adolescents' school-related self-concept mediates motor skills and psychosocial well-being. Br J Educ Psychol. 2014;84:268–80. doi: 10.1111/bjep.12023
17. Cairney J, Rigoli D, Piek J. Developmental coordination disorder and internalizing problems in children: The environmental stress hypothesis elaborated. Developmental Rev. 2013;33:224–38.
18. Mancini VO, Rigoli D, Cairney J, Roberts LD, Piek JP. The Elaborated Environmental Stress Hypothesis as a framework for understanding the association between motor skills and internalizing problems: a mini-review. Frontiers Psychol. 2016;7(239). doi: 10.3389/fpsyg.2016.00239
19. Poole KL, Schmidt LA, Missiuna C, Saigal S, Boyle MH, Van Lieshout RJ. Motor coordination and mental health in extremely low birth weight survivors during the first four decades of life. Res Developmental Dis. 2015;43–44:87–96. doi: 10.1016/j.ridd.2015.06.004
20. Zijlmans J, Tieskens JM, van Oers HA, Alrouh H, Luijten MAJ. The effects of COVID-19 on child mental health: Biannual assessments up to April 2022 in a clinical and two general population samples. JCPP Advances. 2023;3(2):e12150. doi: 10.1002/jcv2.12150
21. Carcamo-Oyarzun J, Salvo-Garrido S, Estevan I. Actual and perceived motor competence in Chilean schoolchildren before and after COVID-19 lockdowns: A cohort comparison. Behav Sci. 2023;13(4):306. doi: 10.3390/bs13040306
22. Langford R, Bonell CP, Jones HE, Pouliou T, Murphy SM, Waters E, et al. The WHO Health Promoting School framework for improving the health and well-being of students and their academic achievement. Cochrane Database Syst Rev. 2014(4):CD008958. doi: 10.1002/14651858.CD008958.pub2
23. Eddy LH, Wood ML, Shire KA, Bingham DD, Bonnick E, Creaser A, et al. A systematic review of randomised and case-controlled trials investigating the effectiveness of school-based motor-skill interventions in 3-12-year-old children. Child: Care, Health Dev. 2019;45:773–90.
24. Engel AC, Broderick CR, van Doorn N, Hardy LL, Parmenter BJ. Exploring the relationship between fundamental motor skill interventions and physical activity levels in children: A systematic review and meta-analysis. Sports Medicine. 2018;48:1845–1857. doi: 10.1007/s40279-018-0923-3
25. Piek JP, Kane R, Rigoli D, McLaren S, Roberts CM, Rooney R, et al. Does the Animal Fun program improve social-emotional and behavioural outcomes in children aged 4–6 years? Hum Move Sci. 2015;43:155–63. doi: 10.1016/j.humov.2015.08.004
26. Yu JJ, Burnett AF, Sit CH. Motor skill interventions in children with developmental coordination disorder: a systematic review and meta-analysis. Arch Physical Med Rehab. 2018;99:2076–99. doi: 10.1016/j.apmr.2017.12.009
27. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. Br Med J. 2021;374:n2061.
28. Beets MW, von Klinggraeff L, Weaver RG, Armstrong B, Burkart S. Small studies, big decisions: the role of pilot/feasibility studies in incremental science and premature scale-up of behavioral interventions. Pilot Feasibility Studies. 2021;7(1):173. doi: 10.1186/s40814-021-00909-w
29. Pearson N, Naylor P-J, Ashe MC, Fernandez M, Yoong SL, Wolfenden L. Guidance for conducting feasibility and pilot studies for implementation trials. Pilot Feasibility Studies. 2020;6(1):167. doi: 10.1186/s40814-020-00634-w
30. Lewis M, Bromley K, Sutton CJ, McCray G, Myers HL, Lancaster GA. Determining sample size for progression criteria for pragmatic pilot RCTs: the hypothesis test strikes back! Pilot Feasibility Studies. 2021;7(1):40. doi: 10.1186/s40814-021-00770-x
31. Clifford L, Tyler R, Knowles Z, Ashworth E, Boddy L, Foweather L, et al. Co-creation of a school-based motor competence and mental health intervention: Move Well, Feel Good. Children. 2023;10(8):1403. doi: 10.3390/children10081403
32. English Indices of Deprivation 2019 [Internet]. 2019 [cited 9 June 2020]. http://imd-by-postcode.opendatacommunities.org/imd/2019.
33. Australian Sports Commission. Playing for life [Internet]. 2022 [cited 9 June 2022]. https://www.sportaus.gov.au/p4l.
34. Sport Wales. Skills for an active life [Internet]. 2022 [cited 9 June 2022]. https://citbag.sport.wales/en/.
35. Lubans DR, Lonsdale C, Cohen K, Eather N, Beauchamp MR, Morgan PJ, et al. Framework for the design and delivery of organized physical activity sessions for children and adolescents: rationale and description of the 'SAAFE' teaching principles. Int J Behav Nutr Phys Activity. 2017;14(1):24. doi: 10.1186/s12966-017-0479-x
36. Chng LS, Lund J. Assessment for learning in physical education: the what, why and how. J Phys Educ Rec Dance. 2018;89:29–34.
37. Avery KNL, Williamson PR, Gamble C, O'Connell Francischetto E, Metcalfe C, Davidson P, et al. Informing efficient randomised controlled trials: exploration of challenges in developing progression criteria for internal pilot studies. BMJ Open. 2017;7(2):e013537. doi: 10.1136/bmjopen-2016-013537
38. Longmuir PE, Boyer C, Lloyd M, Borghes M, Knight E, et al. Canadian Agility and Movement Skill Assessment (CAMSA): Validity, objectivity, and reliability evidence for children 8–12 years of age. J Sport Health Sci. 2017;6:231–240. doi: 10.1016/j.jshs.2015.11.004
39. Tyler R, Foweather L, Mackintosh KA, Stratton G. A dynamic assessment of children's physical competence: The Dragon Challenge. Med Sci Sports Exerc. 2018;50:2474–87. doi: 10.1249/MSS.0000000000001739
40. Goodman R. Psychometric properties of the Strengths and Difficulties Questionnaire. J Am Acad Child Adolesc Psychiatry. 2001;40:1337–45. doi: 10.1097/00004583-200111000-00015
41. Deighton J, Tymms P, Vostanis P, Belsky J, Fonagy P, Brown A, et al. The development of a school-based measure of child mental health. J Psychoeducational Assess. 2013;31:247–57. doi: 10.1177/0734282912465570
42. Goodman A, Lamping DL, Ploubidis GB. When to use broader internalising and externalising subscales instead of the hypothesised five subscales on the Strengths and Difficulties Questionnaire (SDQ): Data from British parents, teachers and children. J Abnorm Child Psychol. 2010;38:1179–91. doi: 10.1007/s10802-010-9434-x
43. Youth in Mind. Scoring the Strengths and Difficulties Questionnaire for age 4–17 or 18+ [Internet]. 2016 [cited 9 June 2022]. https://www.sdqinfo.org/py/sdqinfo/c0.py.
44. Patalay P, Deighton J, Fonagy P, Vostanis P, Wolpert M. Clinical validity of the Me and My School questionnaire: a self-report mental health measure for children and adolescents. Child Adol Psychiatry Mental Health. 2014;8(1):17. doi: 10.1186/1753-2000-8-17
45. Ravens-Sieberer U, Erhart M, Rajmil L, Herdman M, Auquier P, Bruil J, et al. Reliability, construct and criterion validity of the KIDSCREEN-10 score: a short measure for children and adolescents' well-being and health-related quality of life. Qual Life Res. 2010;19:1487–500. doi: 10.1007/s11136-010-9706-5
46. KIDSCREEN Group Europe. The KIDSCREEN Questionnaires: Quality of life questionnaires for children and adolescents. Handbook. Lengerich: Pabst Science Publishers; 2006.
47. Harter S. Self-Perception Profile for Children: Manual and questionnaires (Grades 3–8). Denver, Colorado: University of Denver; 2012.
48. Lereya ST, Humphrey N, Patalay P, Wolpert M, Böhnke JR, Macdougall A, et al. The Student Resilience Survey: psychometric validation and associations with mental health. Child Adol Psychiatry Mental Health. 2016;10(1):44. doi: 10.1186/s13034-016-0132-5
49. Saunders RP, Pate RR, Felton G, Dowda M, Weinrich MC, Ward DS, et al. Development of questionnaires to measure psychosocial influences on children's physical activity. Prev Med. 1997;26:241–7. doi: 10.1006/pmed.1996.0134
50. Ward DS, Saunders RP, Pate RR. Physical Activity Interventions in Children and Adolescents. Champaign, IL: Human Kinetics; 2007.
51. Migueles JH, Rowlands AV, Huber F, Sabia S, Van Hees VT. GGIR: A research community-driven open-source R package for generating physical activity and sleep outcomes from multi-day raw accelerometer data. J Measurement Phys Behav. 2019;2:188–196.
52. van Hees VT, Fang Z, Langford J, Assah F, Mohammad A, da Silva IC, et al. Autocalibration of accelerometer data for free-living physical activity assessment using local gravity and temperature: an evaluation on four continents. J Appl Physiol. 2014;117:738–44. doi: 10.1152/japplphysiol.00421.2014
53. van Hees VT, Gorzelniak L, León EC, Eder M, Pias M, Taherian S, et al. Separating movement and gravity components in an acceleration signal and implications for the assessment of human daily physical activity. PLoS ONE. 2013;8(4):e61691. doi: 10.1371/journal.pone.0061691
54. Rowlands AV, Edwardson CL, Davies MJ, Khunti K, Harrington DM, Yates T. Beyond cut-points: accelerometer metrics that capture the physical activity profile. Med Sci Sports Exerc. 2018;50:1323–32. doi: 10.1249/MSS.0000000000001561
55. Hurter L, Fairclough SJ, Knowles ZR, Porcellato LA, Cooper-Ryan AM, et al. Establishing raw acceleration thresholds to classify sedentary and stationary behaviour in children. Children. 2018;5(12). doi: 10.3390/children5120172
56. Hildebrand M, Van Hees VT, Hansen BH, Ekelund U. Age-group comparability of raw accelerometer output from wrist- and hip-worn monitors. Med Sci Sports Exerc. 2014;46:1816–24.
57. Cole T, Freeman J, Preece M. Body mass index reference curves for the UK, 1990. Arch Dis Child. 1995;73:25–9. doi: 10.1136/adc.73.1.25
58. Cole T, Bellizzi M, Flegal K, Dietz W. Establishing a standard definition for child overweight and obesity worldwide: international survey. Br Med J. 2000;320:1240–1243. doi: 10.1136/bmj.320.7244.1240
59. Currie C, Molcho M, Boyce W, Holstein B, Torsheim T, Richter M. Researching health inequalities in adolescents: the development of the Health Behaviour in School-Aged Children (HBSC) family affluence scale. Soc Sci Med. 2008;66:1429–36. doi: 10.1016/j.socscimed.2007.11.024
60. Jago R, Tibbitts B, Sanderson E, Bird EL, Porter A, Metcalfe C, et al. Action 3:30R: Results of a cluster randomised feasibility study of a revised teaching assistant-led extracurricular physical activity intervention for 8 to 10 year olds. Int J Environ Res Public Health. 2019;16(1):131. doi: 10.3390/ijerph16010131
61. Taylor SL, Noonan RJ, Knowles ZR, Owen MB, McGrane B, Curry WB, et al. Evaluation of a pilot school-based physical activity clustered randomised controlled trial-Active Schools: Skelmersdale. Int J Environ Res Public Health. 2018;15(5). doi: 10.3390/ijerph15051011
62. Johnstone A, Hughes AR, Bonnar L, Booth JN, Reilly JJ. An active play intervention to improve physical activity and fundamental movement skills in children of low socio-economic status: feasibility cluster randomised controlled trial. Pilot Feasibility Studies. 2019;5(1):45. doi: 10.1186/s40814-019-0427-4
63. O'Donnell LN, Duran RH, San Doval A, Breslin MJ, Juhn GM, Stueve A. Obtaining written parent permission for school-based health surveys of urban young adolescents. J Adol Health. 1997;21:376–83. doi: 10.1016/S1054-139X(97)00108-0
64. Santelli JS, Smith-Rogers A. Parental permission, passive consent, and children in research. J Adol Health. 2002;31:303–4. doi: 10.1016/s1054-139x(02)00443-3
65. Felzmann H. Ethical issues in school-based research. Res Ethics. 2009;5:104–9.
66. Moore A, Ashworth E, Mason C, Santos J, Mansfield R, Stapley E, et al. 'Shall We Send a Panda?' A practical guide to engaging schools in research: learning from large-scale mental health intervention trials. Int J Environ Res Public Health. 2022;19(6).
67. Koh GA, Askell-Williams H. Sustainable school-improvement in complex adaptive systems: A scoping review. Rev Educ. 2021;9:281–314.
68. McLellan G, Arthur R, Donnelly S, Bakshi A, Fairclough SJ, Taylor SL, et al. Feasibility and acceptability of a classroom-based active breaks intervention for 8–12-year-old children. Res Quarterly Exerc Sport. 2022;93:813–824. doi: 10.1080/02701367.2021.1923627
69. Watson AJL, Timperio A, Brown H, Hesketh KD. A pilot primary school active break program (ACTI-BREAK): Effects on academic and physical activity outcomes for students in Years 3 and 4. J Sci Med Sport. 2019;22:438–43. doi: 10.1016/j.jsams.2018.09.232
70. Dyrstad SM, Kvalø SE, Alstveit M, Skage I. Physically active academic lessons: acceptance, barriers and facilitators for implementation. BMC Public Health. 2018;18(1):322. doi: 10.1186/s12889-018-5205-3
71. Noonan RJ, Boddy LM, Fairclough SJ, Knowles ZR. Write, draw, show, and tell: a child-centred dual methodology to explore perceptions of out-of-school physical activity. BMC Public Health. 2016;16(1):1–19.
72. Healthy Active Living and Obesity Research Group. Canadian Assessment of Physical Literacy: Manual for Test Administration. 2nd ed. Ottawa, Ontario: HALO; 2017.
73. Goodman R, Meltzer H, Bailey V. The Strengths and Difficulties Questionnaire: A pilot study on the validity of the self-report version. Eur Child Adol Psychiatry. 1998;7:125–30. doi: 10.1007/s007870050057
74. Zheng C, Feng J, Huang W, Wong SH. Associations between weather conditions and physical activity and sedentary time in children and adolescents: A systematic review and meta-analysis. Health Place. 2021;69:102546. doi: 10.1016/j.healthplace.2021.102546
75. Beets MW, Weaver RG, Ioannidis JPA, Geraci M, Brazendale K, Decker L, et al. Identification and evaluation of risk of generalizability biases in pilot versus efficacy/effectiveness trials: a systematic review and meta-analysis. Int J Behav Nutr Phys Activity. 2020;17(1):19. doi: 10.1186/s12966-020-0918-y

Decision Letter 0

Henri Tilga

21 Feb 2024

PONE-D-23-31624. Move Well, Feel Good: Feasibility and acceptability of a school-based motor competence intervention to promote positive mental health. PLOS ONE.

Dear Dr. Fairclough,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 06 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Henri Tilga, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure: "This work was supported by a grant from The Waterloo Foundation (#1669/4195) that was awarded to SJF, RT, LF, LMB, ZK, and EA."

Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed. 

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

Additional Editor Comments :

The Reviewers have provided several useful comments to increase the quality of this manuscript. Please carefully follow all the comments made by the Reviewers and revise the manuscript accordingly.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The present paper describes a single-arm feasibility trial. A point must be noted: a feasibility study is not a powered trial. It should be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, outcome selection, new methods, and implementation of the novel intervention, to name a few. As a result, I recommend removing the power analysis that is reported in lines 116-124. Since hypothesis testing is never the aim of this type of trial, no power analysis or sample size calculation based on such analysis is needed. See the guidance of Leon et al. (Journal of Psychiatric Research 45 (2011) 626-629).

Another point I want to note, the ultimate goal of such a feasibility/pilot study is to collect information in preparation for a large-scale powered trial. As such a brief section outlining “lessons learned” and changes to be made in future studies will be helpful for the curious reader.

Reviewer #2: Thank you for the opportunity to review the manuscript entitled "Move Well, Feel Good: Feasibility and acceptability of a school-based motor competence intervention to promote positive mental health". This manuscript addresses an important topic: interventions for children that foster motor competence and psychosocial skills in order to promote health. It was my pleasure to read this paper because it is closely related to my research field and I am familiar with most of the instruments used. The article is well written and clear, and most of my doubts were resolved as I read the document. However, there are some aspects that should be improved prior to consideration for publication. Below you can see my suggestions. I hope these comments help the authors to improve the quality of their manuscript.

Comment 1. In my humble opinion there is a lack of flow in the introduction. The first time I read the paper I did not understand the mention of the lockdown at the beginning of this section, as no information about the lockdown was included in the abstract. Even once I noticed the relationship with the data collection and study variables, I wondered whether the authors could include information about this link in the abstract. Additionally, I think that the authors could reorganize the introduction to first present the information about the Elaborated Environmental Stress Hypothesis (EESH) model (lines 72-81) and, after that, the information regarding the lockdown (lines 52-70). Perhaps this strategy could help them link it better with the second-to-last paragraph of the introduction. This is only a suggestion.

Comment 2. As some time has passed since lockdown, I think it would be interesting to include references that support the idea that the effects derived from lockdown still persist.

Comment 3. The recruitment was explained in depth; however, I would appreciate it if the authors could add concrete information about the number of participants, schools, mean age, gender, etc.

Comment 4. Was there any monitoring of the voluntary/optional activities? (The authors stated that the PE lessons "were supplemented by optional class-time, break-time, and home activities".) If yes, please explain it in detail.

Comment 5. Were only three stability skills from the Dragon Challenge (DC) used? How do you compute a motor quotient for these three tasks? Please explain this in detail. Why did the authors not consider other alternatives such as the KTK? In any case, I suggest the authors examine the correlations between the CAMSA and the DC (3 tasks) to see if both measures are related (as both are supposed to measure participants' motor competence).

Comment 6. Could you provide further information about the validity and reliability of both tests? I strongly recommend explaining whether there was a training period for raters and a comparison between them and experts.

Comment 7: As a reader, I would really like to know which questions the authors/research team asked to obtain the data presented in lines 387-395.

Comment 8: Could you further explain how the focus groups were conducted and recorded (in which room, whether silence was required, etc.)?

Comment 9: I would really appreciate it if the authors could include examples of this result: “All children spoke positively about the motor competence assessments” (line 423).

Comment 10: Could you provide further statistical results, such as mean comparisons (e.g., paired t-tests) between T0 and T1? This information could complement the results included in Table 4.

Comment 11: Have the authors checked for a gender effect? As psychosocial variables are included, I wonder whether changes might occur only in boys or only in girls. I suggest the authors consider a gender*time model to see the effects of the MWFG.

Comment 12: In lines 656-658, the authors stated: “The T1-T0 change scores for overall CAMSA (assessing locomotor and object control skills) (+13.7%) and Dragon Challenge stability skills (+8.1%) and the narrowness of the mean change confidence intervals suggested that the changes were meaningful”. I think that the authors should consider my proposals in comments 10 and 11, as they could then make assertions based on statistical results.

Comment 13: In lines 659 and 660, the authors mentioned the categories of the CAMSA. Could you provide supplementary material with descriptive information about how many participants are in each category?

Reviewer #3: I am grateful for the opportunity to review this interesting and well written manuscript. The primary purpose of the study is to evaluate the feasibility and acceptability of Move Well, Feel Good (MWFG). Feasibility and acceptability are fundamental to the field of implementation research, and are examined exemplarily well here, which leads to results that provide good information about the qualities of the intervention in these areas. If possible, I would like to see some further problematization of the results, for example regarding what characterizes the 14% of the children who did not perceive the program as fun. That kind of information would further enhance this already well-conducted and well-reported study; I therefore recommend a minor revision including these aspects in the manuscript.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Jun 11;19(6):e0303033. doi: 10.1371/journal.pone.0303033.r002

Author response to Decision Letter 0


20 Mar 2024

Please see uploaded 'Responses to Reviewers' document for these responses in table format.

R1

Comment number Comment Response

Reviewer 1

1 The present paper describes a single-arm feasibility trial. A point must be noted: a feasibility study is not a powered trial. It should be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, outcome selection, new methods, and implementation of the novel intervention, to name a few. As a result, I recommend removing the power analysis that is reported in lines 116-124. Since hypothesis testing is never the aim of this type of trial, no power analysis or sample size based on such analysis is needed. See the guidance of Leon et al. (Journal of Psychiatric Research 45 (2011) 626–629). Thank you for this comment. We agree that our feasibility trial was not powered to evaluate effectiveness of the child-level secondary outcomes, and this was never the intention. This was reflected in the choice of sample size estimation that was used, which related only to the feasibility outcomes, which were the primary outcomes in our study. The method of Lewis et al. [1] is based on a traffic light system of feasibility outcomes which are expressed as progression criteria. This method does not seek to estimate a sample size for the purposes of testing a hypothesis of intervention effectiveness. To make this clearer we have extended the sub-heading and added more detail to the opening sentence (lines 147-151). This now reads: “Sample size estimation for feasibility outcomes. Using pilot study progression criteria Red/Stop upper limit and Green/Go lower limit reference tables specifically developed for feasibility outcomes (28) we estimated a sample size for child recruitment.”

2 Another point I want to note, the ultimate goal of such a feasibility/pilot study is to collect information in preparation for a large-scale powered trial. As such a brief section outlining “lessons learned” and changes to be made in future studies will be helpful for the curious reader. The manuscript is already quite long and although we did initially discuss adding a section on ‘lessons learned’ we ultimately decided against this. This was in the interests of brevity, and also because we felt that the conclusion already highlighted these points in the form of the remediable progression criteria and how they could be addressed (“These areas could be addressed during the recruitment period by targeting a larger pool of schools, using opt-out consent, improving researchers’ understanding of staffing capacities and overloaded periods of the school year, and by reinforcing the importance of early communication between schools and researchers… With specific content and delivery modifications it could be progressed to a pilot trial with a more robust design) (lines 778-786). For these reasons we respectfully thank the reviewer for this suggestion but feel that it is already addressed in the manuscript.

Reviewer 2

1 Thank you for the opportunity to review the manuscript entitled “Move Well, Feel Good: Feasibility and acceptability of a school-based motor competence intervention to promote positive mental health”. This manuscript addresses an important topic, namely an intervention for children to foster their motor competence and psychosocial skills in order to promote their health. It was my pleasure to read this paper because it is closely related to my research field and I am familiar with most of the instruments used. This article is well written and clear, and most of my doubts were resolved as I read the document. However, there are some aspects that should be improved prior to consideration for publication. Below you can see my suggestions. I hope these comments help the authors improve the quality of their manuscript. Thank you for your positive evaluation of the manuscript.

2 In my humble opinion there is a lack of flow in the introduction. The first time I read the paper I did not understand the mention of the lockdown at the beginning of this section, as no information about the lockdown was included in the abstract. Even once I noticed the relationship with the data collection and study variables, I wonder whether the authors could include information about this link in the abstract.

Additionally, I think that the authors could reorganize the introduction to first include the information about the Elaborated Environmental Stress Hypothesis model (EESH) (lines 72-81) and, after that, the information regarding the lockdown (lines 52-70). Perhaps this strategy could help them link it better with the second-to-last paragraph of the introduction. This is only a suggestion. We have amended the abstract which now includes contextual reference to the COVID-19 lockdown measures in the opening sentence (“In response to the adverse impacts of the COVID-19 lockdown measures Move Well, Feel Good (MWFG) was developed as a school intervention using improvement of motor competence as a mechanism for promoting positive mental health”).

We are grateful for the observations about the structure of the Introduction and the suggestions to amend it. Having reflected on this and discussed the suggestion as an authorship team we feel that the original structure should be retained. In our opinion, it is important to position the contextual references to the COVID-19 lockdown at the beginning of the Introduction, and from there, explain the detrimental impacts on motor competence and mental health which lead into the description of the EESH. We hope that the reviewer can see our perspective on this.

3 As some time has passed since the lockdown, I think it would be interesting to include references that support the idea that the effects derived from the lockdown still persist. We have included additional references to highlight how the adverse impacts of the lockdowns continued post-pandemic. In lines 110-113 we state “There is a need for intervention strategies in primary school children to address the well-established low levels of motor competence and poor mental health and wellbeing which declined further as immediate and persisting consequences of the COVID-19 lockdown measures (1, 12), even after they ended (6, 20, 21).”

4 The recruitment was explained in depth; however, I would appreciate it if the authors could add concrete information about the number of participants, schools, mean age, gender, etc. We have now included this information in the opening section of the Results (lines 406-408): “The children were aged 9.6 ± 0.4 years, were predominantly of White British ethnicity (89.8%) with healthy weight status (68.5%), and just over half (51.9%) were girls”.

5 Was there any control of the voluntary/optional activities? (The authors stated “which were supplemented by optional class-time, break-time, and home activities”.) If yes, please explain it in detail.

The optional activities were just that, and so the research team and teachers had very limited control over how frequently the children engaged in them. The PE and class teachers highlighted the optional activities to the children, and online links to the activity cards were available to the children for home use through QR codes. However, engagement in these activities was completely voluntary.

6 Were only three stability skills from the Dragon Challenge (DC) used? How do you compute a motor quotient for these three tasks? Please explain it in detail.

Why did the authors not consider other alternatives such as the KTK?

In any case, I suggest the authors could examine the correlations between the CAMSA and the DC (3 tasks) to see if both measures are related (as both are supposed to measure participants’ motor competence). The DC includes three stability skills, all of which were included. We were therefore not selective in choosing some skills over others (we presume this is what your reference to “only three stability skills” refers to). As per the DC administration manual, the stability scores from these three skills are computed from the sum of the technique and outcome criteria for each skill. The scores can range from 0 to 12. These criteria are detailed in Tyler et al. [2].

The KTK is used to measure gross motor coordination abilities related to dynamic postural balance, which is slightly different from the stability skills that were selected from the DC. The rationale was to supplement the CAMSA with stability skills that are not present within the CAMSA to provide an overall and more holistic measure of motor competence for our study. We acknowledge that the CAMSA includes some dynamic postural balance tasks (i.e., hopping) but does not include the stability skills that are covered within the three DC tasks. In terms of the KTK, two of the DC tasks are similar to two of the subtests from the KTK; however, the KTK does not include core stability/body management, which is encompassed in one of the included DC tasks (i.e., core stability). Thus, in order to include a holistic indication of motor competence, and without replicating aspects of motor competence, the CAMSA and DC tasks were chosen.

Further, the KTK takes 20–30 min per child to complete all four subtests, which was too long in the context of the available data collection time window we had in each school. Thus, from a feasibility standpoint the KTK was considered but not utilised. The same was true of other motor competence assessments that we considered but deemed unfeasible due to the anticipated duration taken to complete them.

Theoretically, we could look at the correlation between the CAMSA and the stability skills from the DC, but the stability skills are there to supplement the CAMSA as part of a theoretically holistic assessment of motor competence, instead of replacing it. The DC as a whole measures motor competence, but the individual tasks have been shown to individually contribute to assessing aspects of motor competence (see [2]), so theoretically, tasks can be used individually to measure specific aspects of motor competence, such as stability.

To reiterate, we combined the CAMSA and stability skills aspect of the DC to provide a holistic and time-efficient assessment of motor competence that was suitable for this intervention feasibility study.

7 Could you provide further information about the validity and reliability of both tests? I strongly recommend explaining whether there was a training period for raters and a comparison between them and experts. For the CAMSA, evidence of face validity, convergent validity, and inter-, intra-, and test-retest reliability has been demonstrated and reported by Longmuir et al. [3]. We have now updated the reference citations to include this study (the original reference #36, by the same lead author as the CAMSA study referred to above, was mistakenly included). For the DC, evidence for face validity, content validity, and concurrent validity, and inter-, intra-, and test-retest reliability, has been demonstrated and reported by Tyler et al. [2].

As noted in lines 256-258, the DC assessments and coding were done by trained researchers. Prior to data collection, one of the authors, who had substantial experience of administering the CAMSA and DC (RT), trained another author (LC) in the administration and scoring of both assessments. This involved discussing a presentation slide deck outlining the administration and scoring of both assessments, studying the test manuals, and undertaking practice video and live assessments. The training concluded with LC completing gold standard video assessment trials for both assessments. Agreement of > 0.85 with the gold standard scores was achieved for the CAMSA and DC, indicating good-to-excellent reliability.

8 As a reader, I would really like to know which questions the authors/research team asked to obtain the data presented in lines 387-395. Child attendance rate was gathered from class registers taken by the PE teachers. Children’s perceptions of programme enjoyment and teachers’ ratings of programme delivery and child engagement were gathered from the intervention and research methods acceptability surveys detailed in Table 1. These were completed at the end of the programme.

9 Could you further explain how the focus groups were conducted and recorded (in which room, whether silence was required, etc.)? The focus groups took place in a quiet and empty classroom in each school. They were recorded using a digital audio recorder. The children first completed the drawing task, which focused on their most memorable activities from the project. These drawings were then used as stimuli for the focus group discussions, during which the children were encouraged to share their thoughts in an open and safe environment, following question prompts from the researcher.

10 I would really appreciate it if the authors could include examples of this result: “All children spoke positively about the motor competence assessments” (line 423). We have now included a focus group quote from one of the children, which reads “I really liked the ones with the jumping in the hoops one-footed, and the balancing spot” (lines 460-461).

11 Could you provide further statistical results, such as mean comparisons (e.g., paired t-tests) between T0 and T1? This information could complement the results included in Table 4. Further to Reviewer 1’s comments and our responses, we reiterate that the study was not an effectiveness trial and therefore was not powered to assess change in the child-level outcomes. It was for this reason that the pre-post child-level outcomes were presented as descriptive statistics. We did, however, present the 95% confidence intervals, and where these did not cross zero the results were presented in bold text to indicate ‘meaningful differences’ between the two time points (see Table 4 note). Our approach was guided by the research methodology literature, which for feasibility and pilot studies recommends the use of descriptive statistics with confidence intervals rather than formal hypothesis testing with inferential tests and reporting of p values [4, 5].

12 Have the authors checked for a gender effect? As psychosocial variables are included, I wonder whether changes might occur only in boys or only in girls. I suggest the authors consider a gender*time model to see the effects of the MWFG. For the reasons outlined in the response above to comment #11, descriptive analyses were most appropriate for this study. Thus, we did not include a gender*time model. However, if we were to implement a fully powered trial in the future then gender would definitely be included in the statistical analyses.

13 In lines 656-658, the authors stated: “The T1-T0 change scores for overall CAMSA (assessing locomotor and object control skills) (+13.7%) and Dragon Challenge stability skills (+8.1%) and the narrowness of the mean change confidence intervals suggested that the changes were meaningful”. I think that the authors should consider my proposals in comments 10 and 11, as they could then make assertions based on statistical results. Thank you for this observation. We respectfully refer back to our responses to comments #11 and #12, which also apply here.

14 In lines 659 and 660, the authors mentioned the categories of the CAMSA. Could you provide supplementary material with descriptive information about how many participants are in each category? We have included this information for boys and girls at both time points in file S6. We have made reference to these additional results and signposted readers to the new Supporting Information file in lines 696-699: “This was reflected in the proportion of children advancing between the CAMSA motor competence categories of ‘Beginning’, ‘Progressing’, ‘Achieving’, and ‘Excelling’ at T0 and T1 (Supporting Information File S6)”.

Reviewer 3

1 I am grateful for the opportunity to review this interesting and well written manuscript. The primary purpose of the study is to evaluate the feasibility and acceptability of Move Well, Feel Good (MWFG). Feasibility and acceptability are fundamental to the field of implementation research, and are examined exemplarily well here, which leads to results that provide good information about the qualities of the intervention in these areas.

If possible, I would like to see some further problematization of the results, for example regarding what characterizes the 14% of the children who did not perceive the program as fun. That kind of information would further enhance this already well-conducted and well-reported study; I therefore recommend a minor revision including these aspects in the manuscript.

Attachment

Submitted filename: MWFG RESPONSE TO REVIEWERS.docx

pone.0303033.s007.docx (35.1KB, docx)

Decision Letter 1

Henri Tilga

18 Apr 2024

Move Well, Feel Good: Feasibility and acceptability of a school-based motor competence intervention to promote positive mental health

PONE-D-23-31624R1

Dear Dr. Fairclough,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Henri Tilga, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have dealt well with each of my earlier concerns and comments. Overall, the manuscript is clearer now and is relevant for the readership of PLOS ONE. I have no further comment and consider this Manuscript as ready for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

**********

Acceptance letter

Henri Tilga

29 Apr 2024

PONE-D-23-31624R1

PLOS ONE

Dear Dr. Fairclough,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Henri Tilga

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Example lesson plan.

    (PDF)

    S2 File. Example non-curricular skill snack activity card.

    (PDF)

    pone.0303033.s002.pdf (2.7MB, pdf)
    S3 File. Detailed description of feasibility outcome measure methods and administration.

    (PDF)

    pone.0303033.s003.pdf (53.7KB, pdf)
    S4 File. TREND checklist.

    (PDF)

    pone.0303033.s004.pdf (358.1KB, pdf)
    S5 File. Ethics-approved protocol.

    (PDF)

    pone.0303033.s005.pdf (203.3KB, pdf)
    S6 File. Proportion of children in each CAMSA category.

    (PDF)

    pone.0303033.s006.pdf (16.8KB, pdf)
    Attachment

    Submitted filename: MWFG RESPONSE TO REVIEWERS.docx

    pone.0303033.s007.docx (35.1KB, docx)

    Data Availability Statement

    The data are available from https://osf.io/qfmc7/.


    Articles from PLOS ONE are provided here courtesy of PLOS
