Abstract
Background
Many nations require child‐serving professionals to report known or suspected cases of significant child abuse and neglect to statutory child protection or safeguarding authorities. Considered globally, there are millions of professionals who fulfil these roles, and many more who will do so in future. Ensuring they are trained in reporting child abuse and neglect is a key priority for nations and organisations if efforts to address violence against children are to succeed.
Objectives
To assess the effectiveness of training aimed at improving reporting of child abuse and neglect by professionals and to investigate possible components of effective training interventions.
Search methods
We searched CENTRAL, MEDLINE, Embase, 18 other databases, and one trials register up to 4 June 2021. We also handsearched reference lists, selected journals, and websites, and circulated a request for studies to researchers via an email discussion list.
Selection criteria
All randomised controlled trials (RCTs), quasi‐RCTs, and controlled before‐and‐after studies examining the effects of training interventions for qualified professionals (e.g. teachers, childcare professionals, doctors, nurses, and mental health professionals) to improve reporting of child abuse and neglect, compared with no training, waitlist control, or alternative training (not related to child abuse and neglect).
Data collection and analysis
We used methodological procedures described in the Cochrane Handbook for Systematic Reviews of Interventions. We synthesised training effects in meta‐analysis where possible and summarised findings for primary outcomes (number of reported cases of child abuse and neglect, quality of reported cases, adverse events) and secondary outcomes (knowledge, skills, and attitudes towards the reporting duty). We used the GRADE approach to rate the certainty of the evidence.
Main results
We included 11 trials (1484 participants), using data from 9 of the 11 trials in quantitative synthesis. Trials took place in high‐income countries, including the USA, Canada, and the Netherlands, with qualified professionals. In 8 of the 11 trials, interventions were delivered in face‐to‐face workshops or seminars, and in 3 trials interventions were delivered as self‐paced e‐learning modules. Interventions were developed by experts and delivered by specialist facilitators, content area experts, or interdisciplinary teams. Only 3 of the 11 included studies were conducted in the past 10 years.
Primary outcomes
Three studies measured the number of cases of child abuse and neglect via participants’ self‐report of actual cases reported, three months after training. The results of one study (42 participants) favoured the intervention over waitlist, but the evidence is very uncertain (standardised mean difference (SMD) 0.81, 95% confidence interval (CI) 0.18 to 1.43; very low‐certainty evidence).
Three studies measured the number of cases of child abuse and neglect via participants’ responses to hypothetical case vignettes immediately after training. A meta‐analysis of two studies (87 participants) favoured training over no training or waitlist for training, but the evidence is very uncertain (SMD 1.81, 95% CI 1.30 to 2.32; very low‐certainty evidence).
We identified no studies that measured the number of cases of child abuse and neglect via official records of reports made to child protection authorities, or adverse effects of training.
Secondary outcomes
Four studies measured professionals’ knowledge of reporting duty, processes, and procedures postintervention. The results of one study (744 participants) may favour the intervention over waitlist for training (SMD 1.06, 95% CI 0.90 to 1.21; low‐certainty evidence).
Four studies measured professionals' knowledge of core concepts in all forms of child abuse and neglect postintervention. A meta‐analysis of two studies (154 participants) favoured training over no training, but the evidence is very uncertain (SMD 0.68, 95% CI 0.35 to 1.01; very low‐certainty evidence).
Three studies measured professionals' knowledge of core concepts in child sexual abuse postintervention. A meta‐analysis of these three studies (238 participants) favoured training over no training or waitlist for training, but the evidence is very uncertain (SMD 1.44, 95% CI 0.43 to 2.45; very low‐certainty evidence).
One study (25 participants) measured professionals' skill in distinguishing reportable and non‐reportable cases postintervention. The results favoured the intervention over no training, but the evidence is very uncertain (SMD 0.94, 95% CI 0.11 to 1.77; very low‐certainty evidence).
Two studies measured professionals' attitudes towards the duty to report child abuse and neglect postintervention. The results of one study (741 participants) favoured the intervention over waitlist, but the evidence is very uncertain (SMD 0.61, 95% CI 0.47 to 0.76; very low‐certainty evidence).
Authors' conclusions
The studies included in this review suggest that professionals exposed to training may show improved outcomes compared with those who are not exposed; however, the evidence is very uncertain. We rated the certainty of evidence as low to very low, downgrading due to study design and reporting limitations. Our findings rest on a small number of largely older studies, confined to single professional groups. Whether similar effects would be seen for a wider range of professionals remains unknown. Considering the many professional groups with reporting duties, we strongly recommend further research to assess the effectiveness of training interventions with a wider range of child‐serving professionals. There is a need for larger trials that use appropriate methods for group allocation, and statistical methods that account for the delivery of training to professionals in workplace groups.
Keywords: Child, Humans, Child Abuse, Child Abuse/diagnosis, Child Abuse/prevention & control, Family, Health Personnel, Mandatory Reporting, Systematic Reviews as Topic
Plain language summary
Child protection training for professionals to improve reporting of child abuse and neglect
Key messages
‐ Due to a lack of strong evidence, it is unclear whether child protection training is better than no training or alternative training (e.g. cultural sensitivity training) at improving professionals’ reporting of child abuse and neglect.
‐ Larger, well‐designed studies are needed to assess the effects of training with a wider range of professional groups.
‐ Future research should compare face‐to‐face with e‐learning interventions.
Why do we need to improve the reporting of child abuse and neglect?
Child abuse and neglect results in significant harm to children, families, and communities. The most serious consequence is child fatality, but other consequences include physical injuries, mental health problems, alcohol and drug misuse, and problems at school and in employment. Many professional groups, such as teachers, nurses, doctors, and the police, are required by law or organisational policy to report known or suspected cases of child abuse and neglect to statutory child protection authorities. To prepare them for reporting, various training interventions have been developed and used. These can vary in duration, format, and delivery methods. For example, they may aim to increase knowledge and awareness of the indicators of child abuse and neglect and of the nature of the reporting duty and procedures, and to improve attitudes towards the reporting duty. Such training is usually undertaken postqualification as a form of continuing professional development; however, little is known about whether training works in improving reporting of child abuse and neglect generally, for different types of professionals, or for different types of abuse.
What did we want to find out?
We wanted to find out:
‐ if child protection training improves professionals' reporting of child abuse and neglect;
‐ what components of effective training help professionals to report child abuse and neglect; and
‐ if the training causes any unwanted effects.
What did we do?
We searched for studies that compared:
‐ child protection training with no training or with a waitlist control (those placed on a waiting list to receive the training at a later date); and
‐ child protection training with alternative training (not related to child abuse and neglect, e.g. cultural sensitivity training).
We compared and summarised study results and rated our confidence in the evidence based on factors such as study methods and size.
What did we find?
We found 11 studies that involved 1484 people. The studies ranged in size from 30 to 765 participants. Nine studies were conducted in the USA, one in Canada, and one in the Netherlands. A number of different types of training interventions were tested in the studies. Some were face‐to‐face workshops, ranging in duration from a single two‐hour workshop to six 90‐minute seminars conducted over one month; and some were self‐paced e‐learning interventions. The training was developed by experts and delivered by specialist facilitators, content area experts, or interdisciplinary teams. Nine studies received external funding: five from federal government agencies, two from a university and philanthropic organisation, one from the philanthropic arm of an international technology company, and one from a non‐government organisation (a training intervention developer).
Main results
It is unclear if child protection training has an effect on:
‐ the number of reported cases of child abuse and neglect (one study, 42 participants); or
‐ the number of reported cases based on hypothetical cases of child abuse and neglect (two studies, 87 participants).
Based on the available information, we were unable to answer our question about whether training has an effect on the number of official cases recorded by child protection authorities, or the quality of those reports; or whether training has any unwanted effects.
Child protection training may increase professionals' knowledge of reporting duty, processes, and procedures (one study, 744 participants). However, it is unclear if this training has an effect on:
‐ professionals’ knowledge of core concepts in child abuse and neglect generally (two studies, 154 participants);
‐ professionals’ knowledge of core concepts in child sexual abuse specifically (three studies, 238 participants);
‐ professionals’ skill in distinguishing between reportable and non‐reportable cases (one study, 25 participants); or
‐ professionals’ attitudes towards the duty to report (one study, 741 participants).
What are the limitations of the evidence?
We have low to very low confidence in the evidence. This is because the results were based on a small number of studies, some of which were old and which had methodological problems. For example, the people involved in the studies were aware of which treatment they were getting, and not all of the studies provided data for all our outcomes of interest. In addition, our analyses sometimes only included one professional group, limiting the applicability of our findings to other professional groups.
How up‐to‐date is this evidence?
The evidence is current to 4 June 2021.
Summary of findings
Summary of findings 1. Child protection training for professionals to improve reporting of child abuse and neglect compared with no training, waitlist control, or alternative training not related to child abuse and neglect (primary outcomes).
Setting: professionals' workplaces or online e‐learning, mainly in the USA
Patient or population: postqualified professionals, including elementary and high school teachers, childcare professionals, medical practitioners, nurses, and mental health professionals
Intervention: face‐to‐face or online training, with a range of teaching strategies (e.g. didactic presentations, role‐plays, video, experiential exercises), ranging from 2 hours to 6 x 90‐minute sessions over a 1‐month period
Comparator: no training, waitlist for training, alternative training (not related to child abuse and neglect)

| Outcomes | Anticipated absolute effects* (95% CI): risk with control conditions | Anticipated absolute effects* (95% CI): risk with training interventions | Relative effect (95% CI) | No. of participants (studies) | Certainty of the evidence | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| Number of reported cases of child abuse and neglect (professionals' self‐report, actual cases). Time of outcome assessment: short term (3 months postintervention) | ‐ | The mean number of cases reported in the training group was, on average, 0.81 standard deviations higher (0.18 higher to 1.43 higher). | ‐ | 42 (1 RCT) | ⨁◯◯◯ Very lowa,b,c | SMD of 0.81 represents a large effect size (Cohen 1988). Outcome measured by professionals' self‐report of cases they had reported to child protection authorities. |
| Number of reported cases of child abuse and neglect (professionals' self‐report, hypothetical vignette cases). Time of outcome assessment: short term (postintervention) | ‐ | The mean number of cases reported in the training group was, on average, 1.81 standard deviations higher (1.30 higher to 2.32 higher). | ‐ | 87 (2 RCTs) | ⨁◯◯◯ Very lowa,b,c | SMD of 1.81 represents a large effect size (Cohen 1988). Outcome measured by professionals' responses to hypothetical case vignettes. |
| Number of reported cases of child abuse and neglect (official records of reports made to child protection authorities) | ‐ | Unknown | ‐ | 0 (0 studies) | ‐ | No studies were identified that measured numbers of official reports made to child protection authorities. |
| Quality of reported cases of child abuse and neglect (official records of reports made to child protection authorities) | ‐ | Unknown | ‐ | 0 (0 studies) | ‐ | No studies were identified that measured the quality of official reports made to child protection authorities. |
| Adverse events | ‐ | Unknown | ‐ | 0 (0 studies) | ‐ | No studies were identified that measured adverse effects. |

*The risk in the intervention group (and its 95% CI) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). CI: confidence interval; RCT: randomised controlled trial; SMD: standardised mean difference

GRADE Working Group grades of evidence
High certainty: we are very confident that the true effect lies close to that of the estimate of the effect.
Moderate certainty: we are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different.
Low certainty: our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect.
Very low certainty: we have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect.
aDowngraded by one level due to high risk of bias for multiple risk of bias domains. bDowngraded by one level due to imprecision (CI includes small‐sized effect or small sample size, or both). cDowngraded by one level due to indirectness (single or limited number of studies, thereby restricting the evidence in terms of intervention, population, and comparators).
Summary of findings 2. Child protection training for professionals to improve reporting of child abuse and neglect compared with no training, waitlist control, or alternative training not related to child abuse and neglect (secondary outcomes).
Setting: professionals' workplaces or online e‐learning, mainly in the USA
Patient or population: postqualified professionals, including elementary and high school teachers, childcare professionals, medical practitioners, nurses, and mental health professionals
Intervention: face‐to‐face or online training, with a range of teaching strategies (e.g. didactic presentations, role‐plays, video, experiential exercises), ranging from 2 hours to 6 x 90‐minute sessions over a 1‐month period
Comparator: no training, waitlist for training, alternative training (not related to child abuse and neglect)

| Outcomes | Anticipated absolute effects* (95% CI): risk with control conditions | Anticipated absolute effects* (95% CI): risk with training interventions | Relative effect (95% CI) | No. of participants (studies) | Certainty of the evidence | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| Knowledge of reporting duty, processes, and procedures. Measured by: professionals' self‐reported knowledge of jurisdictional or institutional reporting duties, or both. Time of outcome assessment: short term (postintervention) | ‐ | The mean knowledge score in the training group was, on average, 1.06 standard deviations higher (0.90 higher to 1.21 higher). | ‐ | 744 (1 RCT) | ⨁⨁◯◯ Lowa,b | SMD of 1.06 represents a large effect size (Cohen 1988). |
| Knowledge of core concepts in child abuse and neglect (all forms). Measured by: professionals' self‐reported knowledge of all forms of child abuse and neglect (general measure). Time of outcome assessment: short term (postintervention) | ‐ | The mean knowledge score in the training group was, on average, 0.68 standard deviations higher (0.35 higher to 1.01 higher). | ‐ | 154 (2 RCTs) | ⨁◯◯◯ Very lowa,b,c | SMD of 0.68 represents a medium effect size (Cohen 1988). |
| Knowledge of core concepts in child abuse and neglect (child sexual abuse only). Measured by: professionals' self‐reported knowledge of child sexual abuse (specific measure). Time of outcome assessment: short term (postintervention) | ‐ | The mean knowledge score in the training group was, on average, 1.44 standard deviations higher (0.43 higher to 2.45 higher). | ‐ | 238 (3 RCTs) | ⨁◯◯◯ Very lowa,b,c,d | SMD of 1.44 represents a large effect size (Cohen 1988). |
| Skill in distinguishing between reportable and non‐reportable child abuse and neglect cases. Measured by: professionals' performance on simulated cases scored by a trained and blinded expert panel. Time of outcome assessment: short term (postintervention) | ‐ | The mean skill score in the training group was, on average, 0.94 standard deviations higher (0.11 higher to 1.77 higher). | ‐ | 25 (1 RCT) | ⨁◯◯◯ Very lowa,b,c | SMD of 0.94 represents a large effect size (Cohen 1988). |
| Attitudes toward the duty to report child abuse and neglect. Measured by: professionals' self‐reported attitudes towards the duty to report child abuse and neglect. Time of outcome assessment: short term (postintervention) | ‐ | The mean attitude score in the training group was, on average, 0.61 standard deviations higher (0.47 higher to 0.76 higher). | ‐ | 741 (1 RCT) | ⨁◯◯◯ Very lowa,b,c | SMD of 0.61 represents a medium effect size (Cohen 1988). |

*The risk in the intervention group (and its 95% CI) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). CI: confidence interval; RCT: randomised controlled trial; SMD: standardised mean difference

GRADE Working Group grades of evidence
High certainty: we are very confident that the true effect lies close to that of the estimate of the effect.
Moderate certainty: we are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different.
Low certainty: our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect.
Very low certainty: we have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect.
aDowngraded by one level due to high risk of bias for multiple risk of bias domains. bDowngraded by one level due to indirectness (one or both of the following reasons: (1) single or limited number of studies, thereby restricting the evidence in terms of intervention, population, and comparators; (2) outcome not a direct measure of reporting behaviour by professionals). cDowngraded by one level due to imprecision (one or both of the following reasons: (1) CI includes small‐sized effect; (2) small sample size). dAlthough studies can only be downgraded by three levels, it is important to note that there was significant heterogeneity of the effect for this outcome (i.e. inconsistency), which also impacts the certainty of the evidence.
Background
Description of the condition
Child abuse and neglect
Child abuse and neglect is a broad construct including physical abuse, sexual abuse, psychological or emotional abuse, and neglect. Exposure to domestic violence is increasingly considered to be a fifth domain (Kimber 2018). Most child abuse and neglect occurs in private, is inflicted or caused by parents and caregivers, and does not become known to government authorities or helping agencies. With the exception of sexual abuse, the youngest children (aged one year and under) are the most vulnerable to abuse and neglect (US DHHS 2021). Whilst its true extent is unknown, child abuse and neglect is a well‐established problem worldwide (Hillis 2016; Pinheiro 2006). Numerous prevalence studies have established that the various forms of child abuse and neglect are very widespread, although some forms of abuse and neglect are more common than others (Almuneef 2018; Chiang 2016; Cuartas 2019; Finkelhor 2010; Lev‐Weisel 2018; Nguyen 2019; Nikolaidis 2018; Radford 2012; Stoltenborgh 2011; Stoltenborgh 2012; Stoltenborgh 2015; Ward 2018).
The adverse effects of child abuse and neglect are significant and can endure throughout a person's life. The most serious consequence is child fatality, with an estimated 155,000 deaths globally per annum (WHO 2006). Other effects include: physical injuries; failure to thrive; impaired social, emotional, and behavioural development; reduced reading ability and perceptual reasoning; depression; anxiety; post‐traumatic stress disorder; low self‐image; alcohol and drug use; aggression; delinquency; long‐term deficits in educational achievement; and adverse effects on employment and economic status (Bellis 2019; Egeland 2009; Gilbert 2009; Hildyard 2002; Hughes 2017; Landsford 2002; Maguire‐Jack 2015; Norman 2012; Paolucci 2001; Taillieu 2016). Coping mechanisms used to deal with the trauma, such as alcohol and drug use, can compound adverse health outcomes, and chronic stress can cause coronary artery disease and inflammation (Danese 2009; Danese 2012). There is some evidence suggesting that child abuse and neglect affects brain development and produces epigenetic neurobiological changes (Moffitt 2013; Nelson 2020; Shalev 2013; Tiecher 2016). For society, effects include lost productivity and cost to child welfare systems (Currie 2010; Fang 2012; Fang 2015), and intergenerational victimisation (Draper 2008). The annual economic cost in the USA has been estimated at USD 124 billion, based on a cost per non‐fatal case of USD 210,012 (Fang 2012).
Although there is some variance across cultures in perceptions of what may and may not constitute child abuse and neglect (Finkelhor 1988; Korbin 1979), a consensus about its parameters has emerged in recent decades, especially for child sexual abuse (Mathews 2019), physical abuse (WHO 2006), emotional abuse (Glaser 2011), and neglect (Dubowitz 2007). This is reflected in criminal prohibitions on this conduct across low‐, middle‐, and high‐income countries, and in scholarly research addressing the contribution of structural inequalities in societies to child maltreatment (Bywaters 2019; Finkelhor 1988). Global legal and policy norms recognise the main domains of child abuse and neglect and require substantial efforts to identify and respond to them. The Convention on the Rights of the Child has been almost universally ratified, and article 19 embeds children's right to be free from abuse and neglect (United Nations 1989). It requires States Parties to take all appropriate legislative, administrative, social, and educational measures to protect the child from all forms of maltreatment, and to include effective procedures for the identification and reporting of maltreatment. Similarly, the universal Sustainable Development Goals urge all nations to eradicate child maltreatment, with Target 16.2 aiming to end child abuse and requiring governments to report their efforts (United Nations 2015).
Professionals' reporting of child abuse and neglect
To identify child abuse and neglect, and to enable early intervention to assist children and their families, many nations' governments require members of specified professional groups to report known or suspected cases of significant child abuse and neglect (Mathews 2008a). The duty to report is usually conferred on professionals who encounter children frequently in their daily work, such as teachers, nurses, doctors, and law enforcement (Mathews 2008b). In some jurisdictions and for some categories of professionals, reporting duties have been enacted in child protection legislation (called 'mandatory reporting laws'), but in others, reporting duties are ascribed solely in organisational policies. Although differences exist across jurisdictions and professions with respect to some features of reporting duties (e.g. in stating which types of abuse and neglect must be reported), there is also consistency in the essential nature of reporting duties (e.g. in always requiring reports of child sexual abuse; and in activating the reporting duty when the reporter has a reasonable suspicion the abuse has occurred, rather than requiring knowledge or evidence) (Mathews 2008a). These differences and similarities also determine key dimensions of child protection training for professionals in different contexts.
Studies have found that professionals who are required to report child abuse and neglect consider that they have not had sufficient training to fulfil their role (Abrahams 1992; Christian 2008; Hawkins 2001; Kenny 2001; Kenny 2004; Mathews 2011; Reiniger 1995; Starling 2009; Walsh 2008). Research has also found low levels of knowledge about both the nature of the reporting duty, Beck 1994; Mathews 2009, and indicators of abuse and neglect (Hinson 2000), and that professionals may hold attitudes which are not conducive to reporting (Feng 2005; Jones 2008; Kalichman 1993; Mathews 2009; Zellman 1990). Effective reporting is thought to be influenced by several factors, including higher levels of knowledge of the reporting duty (Crenshaw 1995; Kenny 2004), ability to recognise abuse (Crenshaw 1995; Goebbels 2008; Hawkins 2001), and positive attitudes towards the duty (Fraser 2010; Goebbels 2008; Hawkins 2001).
Improved reporting offers the prospect of enhanced detection of child abuse and neglect (Mathews 2016), provision of interventions and redress for victims (Kohl 2009), and engagement with parents and caregivers to establish supportive measures (Drake 1996; Drake 2007). In this way, improved reporting is an essential part of a public health response to child abuse and neglect, which requires both tertiary and secondary prevention as well as primary prevention, and the full participation of communities and organisations (McMahon 1999). Improved reporting by professionals should also diminish clearly unnecessary reports and avoid the wasting of scarce government resources and unwarranted distress to families (Ainsworth 2006; Calheiros 2016). In addition, effective child protection training for professionals should also assist in developing greater understanding of legal protections conferred on professional reporters themselves, and avoidance of potential legal liability and professional discipline for non‐compliance. At its best, child protection training could also enhance professional ethical identities and contribute to broader workforce professionalisation.
Description of the intervention
In this review, child protection training for professionals is defined as education or training undertaken postqualification, after initial professional qualifications have been awarded, as a form of continuing or ongoing professional education or development. Child protection training interventions that are the subject of this review aim to improve reporting of child abuse and neglect to statutory child protection authorities by professionals who are required by law or policy to do so. Improving reporting is conceptualised as increasing the reporting of cases where abuse or neglect exists or can reasonably be thought to exist; and decreasing the reporting of cases where there are insufficient grounds upon which to make a report and where reporting is unnecessary or unwarranted.
Different approaches may be taken in training professionals to improve reporting of child abuse and neglect. Child protection training may focus on increasing knowledge and awareness of the indicators of each type of abuse and neglect, the nature of the reporting duty, and reporting procedures. Training may also focus on enhancing reporters' attitudes towards the reporting duty or to child protection generally. Training may vary in duration (Donohue 2002; Hazzard 1983), be implemented in a range of different formats (e.g. single sessions through to extended multisession courses), and target different skill levels (e.g. basic through to advanced) (Walsh 2019). Different delivery methods may be adopted, for example online, face‐to‐face, or blended learning modes (Kenny 2001; McGrath 1987).
How the intervention might work
Viewed as an application of adult learning (Knowles 2011), child protection training for professionals is an educational intervention through which professionals develop knowledge, skills, attitudes, and behaviours. By raising awareness, providing information and resources, developing skills and strategies, and fostering dispositions, training may change professionals' ability and willingness to engage in decision‐making processes that lead to improved reporting. There is some evidence to suggest that, for some categories of professionals and for some types of abuse, exposure to training is associated with effective reporting (Fraser 2010; Walsh 2012a), self‐reported preparedness to report (Fraser 2010), confidence identifying abuse (Hawkins 2001), and awareness of reporting responsibilities (Hawkins 2001). Some studies have indicated that lack of adequate training is associated with low awareness of the reporting duty (Hawkins 2001), low preparedness to report (Kenny 2001), low self‐reported confidence identifying child abuse (Hawkins 2001; Mathews 2008b; Mathews 2011), and low knowledge of indicators of abuse (Mathews 2011). However, the literature has not been synthesised, and the specific components of training that are responsible for improving reporting are not yet known.
Why it is important to do this review
Child abuse and neglect results in significant costs for children, families, and communities. As a core public health strategy, many jurisdictions require members of numerous professional groups, by law or policy, to report suspected cases. Numerous different training initiatives appear to have been developed and implemented for professionals, but there is little evidence regarding their effectiveness in improving reporting of child abuse and neglect generally, for specific professions, or for distinct types of child abuse and neglect. To enhance reporting practice, designers of training programmes require detailed information about what programme features will offer the greatest benefit. A systematic review that identifies the effectiveness of different training approaches will advance the evidence base and develop a clearer understanding of optimal training content and methods. In addition, it will provide policymakers with a means by which to assess whether current training interventions are congruent with what is likely to be effective.
Objectives
To assess the effectiveness of training aimed at improving reporting of child abuse and neglect by professionals and to investigate possible components of effective training interventions.
Methods
Criteria for considering studies for this review
Types of studies
Randomised controlled trials (RCTs), quasi‐RCTs (i.e. studies in which participants are assigned to intervention or comparison or control groups using a quasi‐randomised method such as allocation by date of birth, or similar methods), and controlled before‐and‐after (CBA) studies (i.e. studies where participants are allocated to intervention and control groups by means other than randomisation). We included CBA studies because studies of educational interventions are often conducted in settings where truly randomised trials may not be feasible, for example in the course of a training series where enrolment decisions are based on group availability or logistics.
When deciding on included studies we used explicit study design features rather than study design labels. We followed the guidance on how to assess and report on non‐randomised studies in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2022a; Reeves 2022; Sterne 2022).
Types of participants
Studies that involved qualified professionals who are typically required by law or organisational policy to report child abuse and neglect (e.g. teachers, nurses, doctors, and police/law enforcement).
Types of interventions
Included
Child protection training interventions aimed explicitly at improving reporting of child abuse and neglect by qualified professionals, irrespective of programme type, mode, content, duration, intensity, and delivery context. These interventions were compared with no training, waitlist control, or alternative training not related to child abuse and neglect (e.g. cultural sensitivity training).
Excluded
We excluded training interventions in which improving professionals' reporting of child abuse and neglect was a minor training focus, such as brief professional induction or orientation programmes targeting a broad range of employment responsibilities in which it would not be possible to isolate the specific intervention effects for a child protection training component (e.g. training for interagency working). We excluded child protection training conducted before professional qualifications were awarded (e.g. as part of undergraduate college or university professional preparation programmes in initial teacher education, pre‐service education for nurses, or entry‐level medical education).
Types of outcome measures
We included studies assessing the primary and secondary outcomes listed below. We excluded studies that did not set out to measure any of these outcomes.
Primary outcomes
Number of reported cases of child abuse and neglect:
‐ as measured subjectively by participant self‐reports of actual cases reported;
‐ as measured subjectively by participant responses to vignettes; and
‐ as measured objectively in official records of reports made to child protection authorities.
Quality of reported cases of child abuse and neglect, as measured via coding of the actual contents of reports made to child protection authorities (i.e. in government records or archives).
Adverse events, such as:
‐ increase in failure to report cases of child abuse and neglect that warrant a report as measured subjectively by participant self‐reports (i.e. in questionnaires); and
‐ increase in reporting of cases that do not warrant a report as measured subjectively by participant self‐reports (i.e. in questionnaires).
We note that studies using official records (i.e. primary outcome 1c), such as the number of reports made and the number of reports substantiated after investigation as indicative of training outcomes, must be interpreted with caution. Although objective, official records cannot measure all types of reporting behaviours, for example non‐reporting behaviour in which a professional fails to report a case that should have been reported. Official records must also be interpreted within the context and purpose of training, for example if training was introduced in the context of responses to recommendations from a public inquiry, or if training was used for the purpose of encouraging or discouraging specific types of reports, or both.
Secondary outcomes
Knowledge of the reporting duty, processes, and procedures.
Knowledge of core concepts in child abuse and neglect such as the nature, extent, and indicators of the different types of abuse and neglect.
Skill in distinguishing cases that should be reported from those that should not.
Attitudes towards the duty to report child abuse and neglect.
Timing of outcome assessment
We classified primary and secondary outcomes using three time periods: short‐term outcomes (assessed immediately after the training intervention and up to 12 months after); medium‐term outcomes (assessed between one and three years after the training intervention); and long‐term outcomes (assessed more than three years after the training intervention).
Search methods for identification of studies
We used the MEDLINE strategy from our protocol and adapted it for other databases (Mathews 2015). The first round of searches for the review was conducted in December 2016, with search updates in January 2017 and December 2018. When we came to update the searches in 2020, we noticed that errors had been made in earlier searches. We corrected the errors and re‐ran all searches in all databases up to June 2021. We de‐duplicated these records by comparing them with records from previous searches and removed records which had already been screened. We did not apply any date or language restrictions, and sought translation for papers published in languages other than English.
We recorded data for each search in a Microsoft Excel spreadsheet (Microsoft Corporation 2018), including: date of the search, database and platform, exact search syntax, number of search results, and any modifications to search strategies to accommodate variations in search functionalities for specific databases. The results for each search were exported as RIS files and stored in EndNote X8.0.1 (EndNote 2018), with a folder for each searched database. Search strategies and specific search dates are shown in Appendix 1. Changes to the planned search methods in our review protocol, Mathews 2015, are detailed in the Differences between protocol and review section.
Electronic searches
We searched the following databases.
Cochrane Central Register of Controlled Trials (CENTRAL; 2021, Issue 6), in the Cochrane Library (searched 11 June 2021).
Cochrane Database of Systematic Reviews (CDSR; 2021, Issue 6), in the Cochrane Library (searched 11 June 2021).
Ovid MEDLINE (1946 to 4 June 2021).
Embase.com Elsevier (1966 to 11 June 2021).
CINAHL (Cumulative Index to Nursing and Allied Health Literature) EBSCOhost (1981 to 4 June 2021).
ERIC EBSCOhost (1966 to 4 June 2021).
PsycINFO EBSCOhost (1966 to 4 June 2021).
Social Services Abstracts via ProQuest Research Library (1966 to 18 June 2021).
Science Direct Elsevier (1966 to 4 June 2021).
Sociological Abstracts via ProQuest Research Library (1952 to 18 June 2021).
ProQuest Psychology Journals via ProQuest Research Library (1966 to 11 June 2021).
ProQuest Social Science via ProQuest Research Library (1966 to 23 July 2021).
ProQuest Dissertations and Theses via ProQuest Research Library (1997 to 23 July 2021).
LexisNexis Lexis.com (1980 to 19 December 2018).
LegalTrac GALE (1980 to 19 December 2018).
Westlaw International Thomson Reuters (1980 to 19 December 2018).
Conference Proceedings Citation Index – Social Science & Humanities (Web of Science; Clarivate) (1990 to 11 June 2021).
Violence and Abuse Abstracts (EBSCOhost) (1971 to 4 June 2021).
EducationSource (EBSCOhost) (1880 to 4 June 2021).
LILACS (Latin American and Caribbean Health Science Information database) (lilacs.bvsalud.org/en/) (2003 to 11 June 2021).
World Health Organization International Clinical Trials Registry Platform (WHO ICTRP; trialsearch.who.int; searched 2000 to 11 June 2021).
OpenGrey (opengrey.eu/; searched 27 May 2019).
Searching other resources
We carried out additional searches to identify studies not captured by searching the databases listed above. We handsearched the following journals.
Child Maltreatment (2 July 2021).
Child Abuse and Neglect (2 July 2021).
Children and Youth Services Review (2 July 2021).
Trauma, Violence and Abuse (2 July 2021).
Child Abuse Review (2 July 2021).
We also searched the following key websites for additional studies.
International Society for Prevention of Child Abuse and Neglect via ispcan.org/ (2 July 2021).
US Department of Health and Human Services Children’s Bureau, Child Welfare Information Gateway via childwelfare.gov/ (2 July 2021).
Promising Practices Network operated by the RAND Corporation via promisingpractices.net/ (21 March 2019).
National Resource Center for Community‐Based Child Abuse Prevention (CBCAPP) via friendsnrc.org/ (2 July 2021).
California Evidence‐Based Clearinghouse for Child Welfare (CEBC) via cebc4cw.org/ (2 July 2021).
Coalition for Evidence‐Based Policy via coalition4evidence.org/ (21 March 2019).
Institute of Education Sciences What Works Clearinghouse via ies.ed.gov/ncee/wwc/ (2 July 2021).
National Institute for Health and Care Excellence (NICE) UK via nice.org.uk/ (9 July 2021).
Finally, we harvested the reference lists of included studies to identify further potential studies. We did not contact key researchers in the field for unpublished studies as prescribed in our review protocol. Instead, we circulated requests for relevant studies via email to the Child‐Maltreatment‐Research‐Listserv, a moderated electronic mailing list with over 1500 subscribers, as this offered the possibility of reaching a far larger number of researchers (Walsh 2018 [pers comm]).
Data collection and analysis
We conducted data collection and analysis following our published protocol (Mathews 2015), and in accordance with the guidance in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011; Higgins 2022a). In the following sections, we have reported only those methods that were used in this review. Preplanned but unused methods are reported in Appendix 2.
Selection of studies
We used SysReview review management software for title and abstract and full‐text screening (Higginson 2014). Search results were imported from EndNote into SysReview, and duplicates were removed prior to title and abstract screening. Each title and abstract was screened by at least two review authors working independently to determine eligibility according to the inclusion and exclusion criteria. During title and abstract screening, screeners (KW, EE, LH, BM, NA, ED, EP) assessed if each record was: (i) an eligible document type (e.g. not a book review); (ii) a unique document (i.e. not an undetected duplicate); (iii) about child protection training; and (iv) a study conducted with professionals. A third screener resolved any conflicts in screening decisions (either KW or EE). Titles and abstracts published in languages other than English were translated into English using Google Translate.
Three review authors (KW, EE, LH) working independently screened the full texts of potentially eligible studies against the inclusion criteria as described in Criteria for considering studies for this review. Any discrepancies were resolved by discussion with a third review author who had not previously screened the record (BM, MK, EE, KW) until consensus was reached. As authors of potentially included studies, BM and MK were excluded from decisions on studies for which they were authors.
We documented the primary reasons for exclusion of each excluded record. To determine eligibility for studies published in languages other than English, we translated studies into English using Google Translate. We contacted study authors to request missing information if there was insufficient information to determine eligibility.
We identified and linked together multiple reports on the same study so that each study, rather than each report, was the principal unit of interest (e.g. Hazzard 1984; Palusci 1995). We listed studies that were close to meeting the eligibility criteria but were excluded at the full‐text screening or data extraction stages, along with the primary reasons for their exclusion, in the Characteristics of excluded studies table. We recorded our study selection decisions in a PRISMA flow diagram (Moher 2009).
Data extraction and management
We used SysReview review management software for data extraction and management (Higginson 2014). We developed and pilot‐tested a data extraction template based on the checklist of items in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011, Table 7.3a; Li 2022, Table 5.3a), the PRISMA minimum standards (Liberati 2009), and the Template for Intervention Description and Replication (TIDieR) checklist and guide (Hoffmann 2014). We extracted data from study reports concerning details of:
study general information: title identifier, full citation, study name, document type, how located, country, ethical approval, funding;
study design and methods: research design, comparison condition, unit of allocation, randomisation (and details on how this was implemented), baseline assessment (including whether intervention and comparison conditions were equivalent at baseline), unit of analysis, adjustment for clustering;
participant characteristics: participants, recruitment, eligibility criteria, number randomised, number consented, number began (intervention and control groups), number completed (intervention and control groups), number T1, T2, T3 (etc.), age (mean, standard deviation (SD), range) (baseline, intervention and control groups), gender (% female) (baseline, intervention and control groups), ethnicity (intervention and control groups), socio‐economic status (intervention and control groups), years of experience, previous child protection training, previous experience with child maltreatment reporting, any other information;
intervention characteristics: name of intervention, setting, delivery mode, contents and topics, methods and processes, duration, intensity, trainers and qualifications, integrity monitoring, fidelity issues; and
outcome measures: primary outcomes, secondary outcomes, other outcomes.
As authors of potentially included studies, BM and MK were not involved in data extraction. Data were extracted from each study and entered into SysReview by at least two review authors (EE, LH, KW) working independently. A third review author (KW) also extracted data on intervention and outcome characteristics and prepared the Characteristics of included studies tables. Any discrepancies between review authors were resolved through discussion.
Assessment of risk of bias in included studies
Our study protocol, Mathews 2015, was designed prior to introduction of the Risk Of Bias In Non‐randomized Studies of Interventions (ROBINS‐I) tool (Sterne 2016), and predated new guidance for assessing risk of bias in non‐randomised studies provided in Chapter 25 of the Cochrane Handbook for Systematic Reviews of Interventions (Sterne 2022). As planned in our protocol (Mathews 2015), we used the original Cochrane risk of bias tool (Higgins 2011, Table 8.5a), which has seven domains: (i) sequence generation; (ii) allocation concealment; (iii) blinding of participants and personnel; (iv) blinding of outcome assessment; (v) incomplete outcome data; (vi) selective reporting; and (vii) other sources of bias. In our protocol, we added three additional domains: (viii) reliability of outcome measures, as we anticipated that some studies may have used custom‐made instruments and scales; (ix) group comparability; and (x) contamination. Adoption of this approach corresponds with the 'Suggested risk of bias criteria for EPOC reviews' from Cochrane Effective Practice and Organisation of Care (EPOC 2017).
One review author (EE) incorporated the above 10 domains into a module within SysReview. Three review authors (KW, EE, LH), working independently, assessed risk of bias of the included studies. Assessors were not blinded to the names of the authors, institutions, journals, or study results. Where possible, we extracted verbatim text from the study reports as support for risk of bias judgements, resolving any disagreements by discussion. For studies where essential information to assess risk of bias was not available, we planned to contact study authors with a request for missing information, but this was not needed. We entered the information first into SysReview and then into Review Manager 5 (Review Manager 2020), and summarised findings in the risk of bias tables for each included study. We generated two summary figures: a risk of bias graph and a risk of bias summary showing scores for all studies, and showing the proportion of studies for each risk of bias domain. We planned to conduct sensitivity analyses for each outcome to determine how results might be affected by our inclusion/exclusion of studies at high risk of bias; however, this was not possible owing to the small number of studies with data available for meta‐analyses.
For each included study, we scored the relevant risk of bias domains as 'low', 'high', or 'unclear' risk of bias. We made judgements by answering 'yes' (scored as low risk of bias), 'no' (scored as high risk of bias), or 'unclear' (scored as unclear risk of bias) to a prespecified question for each domain as detailed in Appendix 3, with reference to the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011, Table 8.5b) and the 'Suggested risk of bias criteria for EPOC reviews' (EPOC 2017).
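For readers who wish to script this step, the sketch below is a minimal Python illustration (not part of the review workflow, which used SysReview and Review Manager 5) of how the 10 domains and the yes/no/unclear answer‐to‐judgement mapping described above could be represented. The domain names are taken from the text; the `judge_study` helper and its inputs are purely hypothetical.

```python
# Illustrative sketch only: the review recorded these judgements in SysReview
# and Review Manager 5, not in Python.
RISK_OF_BIAS_DOMAINS = [
    "sequence generation", "allocation concealment",
    "blinding of participants and personnel", "blinding of outcome assessment",
    "incomplete outcome data", "selective reporting", "other sources of bias",
    "reliability of outcome measures", "group comparability", "contamination",
]

# 'yes' -> low risk, 'no' -> high risk, 'unclear' -> unclear risk
ANSWER_TO_JUDGEMENT = {"yes": "low", "no": "high", "unclear": "unclear"}

def judge_study(answers):
    """Map per-domain answers ('yes'/'no'/'unclear') to risk of bias judgements.

    `answers` is a hypothetical dict keyed by domain name; domains without an
    answer default to 'unclear'.
    """
    return {
        domain: ANSWER_TO_JUDGEMENT[answers.get(domain, "unclear")]
        for domain in RISK_OF_BIAS_DOMAINS
    }
```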
Measures of treatment effect
We calculated intervention effects using Cochrane software RevMan Web (RevMan Web 2021).
Continuous data
All eligible outcomes in all of the included studies were measured on continuous scales, most of which were slightly different from each other. For continuous outcomes, we extracted postintervention means and SDs and summarised study effects using standardised mean differences (SMDs) and 95% confidence intervals (CI), to account for scale differences in the meta‐analyses.
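To illustrate the calculation underlying these SMDs, the following minimal Python sketch computes a standardised mean difference (Hedges' adjusted g, the form of SMD used in RevMan) and an approximate standard error from postintervention group means and SDs. All numbers in the example are hypothetical and are not data from an included study.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Hedges' adjusted g) with an approximate SE.

    Uses the pooled postintervention SD and the small-sample correction
    factor J = 1 - 3 / (4 * df - 1).
    """
    df = n_t + n_c - 2
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / df)
    d = (mean_t - mean_c) / sd_pooled
    g = (1 - 3 / (4 * df - 1)) * d                 # small-sample correction
    # approximate variance of g (Borenstein-style formula)
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + g ** 2 / (2 * (n_t + n_c)))
    return g, se

# Hypothetical example: 30 trained vs 28 control participants
g, se = hedges_g(mean_t=12.0, sd_t=3.0, n_t=30, mean_c=10.5, sd_c=3.2, n_c=28)
print(f"SMD = {g:.2f} (95% CI {g - 1.96 * se:.2f} to {g + 1.96 * se:.2f})")
```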
Unit of analysis issues
Cluster‐randomised trials
Cluster‐randomised trials are widespread in the evaluation of healthcare and educational interventions (Donner 2002), but are often poorly reported (Campbell 2004). Adjusting for this clustering in analyses is important in order to reduce the risk of overestimating the treatment effect or underestimating the variance (or both), and thereby the weight of the study in meta‐analyses (Hedges 2015; Higgins 2022b).
Congruent with our protocol (Mathews 2015), we planned that for included studies with incorrectly analysed data that did not account for clustering, we would use procedures for adjusting study sample sizes outlined in Section 16.3.4 of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011). None of the included studies reported an intracluster correlation coefficient (ICC), nor were these available from the study authors. No published ICC for child protection training interventions for professionals could be found, so we imputed a conservative ICC of 0.20 based on reviews of ICCs for professional development interventions with teachers (ICC range 0.15 to 0.21) (Kelcey 2013), and primary care providers (ICC range 0.01 to 0.16) (Eccles 2003).
We planned to test the robustness of these assumptions in sensitivity analysis, in which we would use two extreme ICC values reported in the literature for each professional subgroup to assess the extent to which different ICC values affected the weights assigned to the included trials. We also planned to investigate whether results were similar or different for cluster and non‐cluster trials. However, due to the small number of studies included for each outcome (one to three studies), we deemed these analyses to be inappropriate. Rather, where a study with clustering was included for a given outcome, we have presented two results: one without an adjustment for clustering, and one with an adjustment for clustering (using an ICC of 0.2).
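As a rough illustration of the Handbook's sample size adjustment for clustering, the sketch below divides a cluster trial's sample size by the design effect, 1 + (m - 1) * ICC, using the imputed ICC of 0.20. The sample size and cluster size shown are hypothetical, not values from an included study.

```python
def effective_sample_size(n_participants, mean_cluster_size, icc=0.20):
    """Adjust a cluster trial's sample size by the design effect
    1 + (m - 1) * ICC (Cochrane Handbook, Section 16.3.4)."""
    design_effect = 1 + (mean_cluster_size - 1) * icc
    return n_participants / design_effect

# Hypothetical example: 300 professionals trained in workplace groups of 15
print(effective_sample_size(300, 15))  # 300 / 3.8, roughly 78.9
```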
Dealing with missing data
Missing data can be in the form of missing studies, missing outcomes, missing outcome data, missing summary data, or missing participants. We did not anticipate missing studies, as our search strategy was comprehensive, and we took all reasonable steps to locate the full texts of eligible studies. Where possible, we identified missing outcomes by cross referencing study reports with trial registrations. For studies with missing or incomplete outcome data, or missing summary data required for effect size calculation, we contacted first‐named study authors via email to supply the missing information (e.g. intervention and control group participant totals, means, SDs, ICCs).
If the data to calculate effect sizes with Review Manager Web or the RevMan Web calculator (or both) were not available in study reports or from study authors (RevMan Web 2021), we used David B Wilson's suite of effect size calculators to calculate an effect size (Wilson 2001). This was then entered directly into RevMan Web, and meta‐analyses were conducted using the generic inverse‐variance method in RevMan Web (Deeks 2022).
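The kind of conversion performed by such calculators can be illustrated with a short sketch: an SMD and an approximate standard error derived from a reported independent-samples t statistic and the two group sizes. The numbers are hypothetical, and the formula is a common textbook conversion rather than the exact routine of any particular calculator.

```python
import math

def smd_from_t(t, n1, n2):
    """Approximate an SMD (Cohen's d) and its SE from an independent-samples
    t statistic and the two group sizes."""
    d = t * math.sqrt(1 / n1 + 1 / n2)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, se

# Hypothetical example: a study reports t = 2.5 for 20 vs 22 participants;
# the resulting SMD and SE could then be entered into RevMan Web's
# generic inverse-variance method.
d, se = smd_from_t(2.5, 20, 22)
print(f"SMD = {d:.2f}, SE = {se:.2f}")
```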
Assessment of heterogeneity
We used RevMan Web to conduct our analyses according to the guidance in Section 10.3 of the Cochrane Handbook (Deeks 2022). To estimate heterogeneity, this software uses the inverse‐variance method for fixed‐effect meta‐analysis, and the DerSimonian and Laird method for random‐effects meta‐analysis (Deeks 2022). We used standard default options in RevMan Web to calculate the 95% CI for the overall effect sizes.
To assess the extent of variation between studies, we initially examined the distributions of relevant participant (e.g. professional discipline), delivery (e.g. classroom), and trial (e.g. type and duration of intervention) variables. Using forest plots produced in RevMan Web (RevMan Web 2021), we visually examined CI for the outcome results of individual studies, paying particular attention to poor overlap, which can be used as an informal indicator of statistical heterogeneity (Deeks 2022; Higgins 2011). Using output provided by RevMan Web (RevMan Web 2021), we examined three estimates that assess different aspects of heterogeneity as recommended by Borenstein 2009. Firstly, as a test of statistical significance of heterogeneity, we examined the Q statistic (Chi²) and its P value. For any observed Chi², a low P value was deemed to provide evidence of heterogeneity of intervention effects (i.e. that studies do not share a common effect size) (Deeks 2022; Higgins 2011). Secondly, we examined Tau² to provide an estimate of the magnitude of variation between studies. Thirdly, we examined the I² statistic, which describes the proportion of variability in effect estimates due to heterogeneity rather than to chance (Deeks 2022; Higgins 2011). These three quantities (Chi², Tau², and the I² statistic) together provide a comprehensive summary of the presence and the degree of heterogeneity amongst studies and are viewed as complementary rather than mutually exclusive quantities.
Rather than defaulting to interpretations of heterogeneity based on rules of thumb (i.e. that an I² statistic value of 30% to 60% represents moderate heterogeneity, 50% to 90% substantial heterogeneity, and 75% to 100% considerable heterogeneity), we used all three measures of heterogeneity (Chi², Tau², and the I² statistic) to fully assess and describe the aspects of variability in the data as detailed in Borenstein 2009. For example, we used Tau² or the I² statistic (or both) to assess the magnitude of true variation, and the P value for Chi² as an indicator of uncertainty regarding the genuineness of the heterogeneity (P < 0.05).
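For readers unfamiliar with these three quantities, the following sketch computes Q (Chi²), Tau² (by the DerSimonian and Laird method), and the I² statistic for a small set of hypothetical study effects; RevMan Web reports the same quantities automatically, so this is purely illustrative.

```python
def heterogeneity(effects, ses):
    """Cochran's Q (Chi-squared), DerSimonian-Laird tau-squared, and I-squared
    for a set of study effect estimates and their standard errors."""
    w = [1 / se ** 2 for se in ses]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, tau2, i2

# Three hypothetical study SMDs with their standard errors
print(heterogeneity([0.6, 1.4, 2.1], [0.30, 0.35, 0.40]))
```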
Assessment of reporting biases
We assessed reporting bias in the form of selective outcome reporting as one of the domains within the risk of bias assessment.
Data synthesis
We calculated effect sizes for single studies and quantitatively synthesised multiple studies using RevMan Web (RevMan Web 2021). We first assessed the appropriateness of combining data from studies based on sufficient similarity with respect to training interventions delivered, study population characteristics, measurement tools or scales used, and summary points (i.e. outcomes measured within comparable time frames pre‐ and postintervention). We combined data for comparable professional groups (e.g. elementary and high school teachers), similar outcome measures (e.g. knowledge measures, attitude measures), and training types (i.e. online and face‐to‐face).
If studies reported means, SD, and the number of participants by group, we directly inputted that data into RevMan Web (RevMan Web 2021). If these data were not reported, and could not be obtained from the study authors, we consulted David B Wilson's suite of effect size calculators to ascertain if an effect size could be calculated (e.g. Randolph 1994 for primary outcome 1a). In cases where we needed to compute an effect size outside of RevMan Web (RevMan Web 2021), which then needed to be combined with other studies via meta‐analysis, we used the generic inverse method in RevMan Web to conduct the meta‐analysis (e.g. Dubowitz 1991 and analysis for secondary outcome 2a) (RevMan Web 2021).
If only one study with available data was identified for a given outcome, we reported a single SMD with 95% CIs. We acknowledge that this is not standard practice, and that the mean difference would normally be reported; however, we adopted this strategy to maintain consistency and comparability in the presentation of results for readers.
If there were at least two comparable studies with available data to calculate effect sizes, we performed meta‐analysis to compute pooled estimates of intervention effects for a given outcome. We reported the results of the meta‐analyses using SMDs and 95% CIs. Where we judged that studies were estimating the same underlying treatment effect, we used fixed‐effect models to combine studies. Fixed‐effect models ignore heterogeneity, but are generally interpreted as being the best estimate of the intervention effect (Deeks 2022). However, where the intervention effects are unlikely to be identical (e.g. due to slightly varying intervention models), random‐effects models can provide a more conservative estimate of effect because they do not assume that included studies estimate precisely the same intervention effect (Deeks 2022). We thus used a random‐effects meta‐analysis to combine studies where we judged that studies may not be estimating an identical treatment effect (e.g. different training curriculum).
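To make the distinction between the two models concrete, the following sketch pools two hypothetical SMDs by inverse‐variance weighting: with Tau² set to zero it reproduces a fixed‐effect analysis, and with a positive Tau² it behaves as a random‐effects analysis. This is an illustration of the general method only, not the RevMan Web implementation, and the input values are invented.

```python
# Illustrative sketch only: inverse-variance pooling of effect sizes.
def pool(effects, variances, tau2=0.0):
    """Pooled estimate and 95% CI. tau2 = 0 gives a fixed-effect model;
    a positive tau2 (e.g. a DerSimonian-Laird estimate) gives a
    random-effects model."""
    weights = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

effects, variances = [1.55, 2.10], [0.12, 0.15]    # hypothetical SMDs and variances
print(pool(effects, variances))                     # fixed-effect
print(pool(effects, variances, tau2=0.01))          # random-effects
```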
We had planned to develop a training intervention programme typology by independently coding and categorising intervention components (e.g. contents and methods) and then attempting to link specific intervention components to intervention effectiveness (Mathews 2015). However, we were unable to statistically test these proposals in subgroup analyses because there were too few studies. Instead, we provided a detailed narrative summary in the Characteristics of included studies tables.
Subgroup analysis and investigation of heterogeneity
An insufficient number of studies precluded our planned subgroup analyses. The planned methods are provided in Appendix 2 and may be used in future review updates.
Sensitivity analysis
We planned several sensitivity analyses; however, these were precluded by an insufficient number of included studies. Planned methods are provided in Appendix 2.
Summary of findings and assessment of the certainty of the evidence
To provide a balanced summary of the review findings, we have presented all review findings in two summary of findings tables, one that summarises primary outcomes and adverse effects (Table 1), and one that summarises secondary outcomes (Table 2). We chose this approach as both sets of outcomes have utility for practice and research. Each table summarises the evidence for RCT and quasi‐RCT studies that compare child protection training to no training, waitlist control, or alternative training (not related to child protection). None of the studies included long‐term follow‐up, and therefore the tables present findings only for outcomes that were measured in the short term, that is immediately postintervention or within three months after the intervention. Although the review includes CBA studies, we created the summary of findings tables only for RCTs and quasi‐RCTs, and rated the certainty of the evidence only for these studies.
At least two review authors (KW, EE, LH) rated the certainty of the evidence for all primary and secondary outcomes, with no disagreements to resolve. We rated the certainty of the evidence using the GRADE approach (Guyatt 2008; Guyatt 2011; Schünemann 2013; Schünemann 2022). The GRADE system classifies the certainty of evidence into one of four categories, as follows.
High certainty: we are very confident that the true effect lies close to that of the estimate of the effect.
Moderate certainty: we are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different.
Low certainty: our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect.
Very low certainty: we have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect.
We considered the following factors when grading the certainty of evidence: study design, risk of bias, precision of effect estimates, consistency of results, directness of evidence, and magnitude of effect (Schünemann 2022). We based our decisions on whether to downgrade the certainty of the evidence following the guidance in the Cochrane Handbook (Schünemann 2022), and entered the data for each factor in the GRADEpro GDT tool to obtain the overall rating of certainty (GRADEpro GDT). We recorded the process and rationale for downgrading the certainty of the evidence in footnotes to Table 1 and Table 2.
All studies used to estimate treatment effects were RCTs or quasi‐RCTs. Each outcome began with an overall rating of high certainty; however, all outcomes were downgraded by two or three levels. We downgraded the certainty of the evidence for all outcomes by one level due to high risk of bias and by a further level due to indirectness of the evidence. We considered all findings to have concerns related to indirectness, either because the effect was estimated by a single study, thereby restricting the evidence in terms of intervention, population, and comparators; or because the outcome was not a direct measure of reporting behaviour (i.e. the primary outcome of clinical relevance). Some outcomes were downgraded a further level due to inconsistency in the results (i.e. significant heterogeneity), imprecision (i.e. CIs that included the possibility of a small effect, or a small sample size), or both.
Results
Description of studies
Results of the search
In total, we identified 45,743 records through database searching, and a further 1839 records from other sources. After duplicates were removed, we screened the titles and abstracts of 33,702 records, excluding 32,221 as irrelevant. We assessed 1481 full‐text reports against our inclusion criteria, as detailed in Criteria for considering studies for this review. We excluded 1454 of these reports with reasons, as shown in Figure 1, with 'near misses' detailed in the Characteristics of excluded studies tables. We identified two ongoing studies (Ongoing studies) and three studies awaiting classification (Studies awaiting classification).
Included studies
We included 11 unique studies reported in 17 papers, as shown in the study flow diagram (Figure 1). Details for each of the 11 included studies are summarised in the Characteristics of included studies tables.
Study design
Of the 11 included studies, five were RCTs (Kleemeier 1988; Mathews 2017; McGrath 1987; Randolph 1994; Smeekens 2011). Of these, two RCTs were conducted with individual participants (Mathews 2017; Smeekens 2011), and three were conducted with participants in groups (Kleemeier 1988; McGrath 1987; Randolph 1994). Four studies were quasi‐RCTs (Alvarez 2010; Dubowitz 1991; Hazzard 1984; Kim 2019). Of these, two were conducted at the individual level (Alvarez 2010; Dubowitz 1991), and two were conducted at the group level (Hazzard 1984; Kim 2019). The remaining two studies used a CBA design (Jacobsen 1993; Palusci 1995), with one apiece conducted with individuals, Palusci 1995, and groups, Jacobsen 1993.
Location
One study was conducted in Canada (McGrath 1987), one in the Netherlands (Smeekens 2011), and the remaining nine studies were conducted in the USA.
Sample sizes
The number of participants per study ranged from 30, in Palusci 1995, to 765, in Mathews 2017. Only one study reported having used a sample size calculation (Mathews 2017).
Settings
Settings for the training interventions were aligned to workplaces. Reported settings included an urban public hospital (Palusci 1995), a university clinic (Dubowitz 1991), a rural school district (Jacobsen 1993; Randolph 1994), and a suburban school district (Kleemeier 1988). In three studies, interventions were conducted online as e‐learning modules (Kim 2019; Mathews 2017; Smeekens 2011). Specific settings for interventions were not reported in three studies (Alvarez 2010; Hazzard 1984; McGrath 1987).
Participants
Profession
The 11 studies included a total of 1484 participants. Participants were drawn from a small number of key professional groups having contact with children in their everyday work. In six studies, participants were elementary and high school teachers (Hazzard 1984; Jacobsen 1993; Kim 2019; Kleemeier 1988; McGrath 1987; Randolph 1994). In two studies, participants were doctors: paediatric residents in Dubowitz 1991, and physicians in Palusci 1995. One study apiece was conducted with mental health professionals (Alvarez 2010), childcare professionals (Mathews 2017), and nurses (Smeekens 2011).
One study included both professional and student participants, but did not separate outcome data by group (Alvarez 2010).
Demographic data contextually relevant to child protection training were reported in a minority of studies: years of professional work experience in seven studies (Jacobsen 1993; Kim 2019; Kleemeier 1988; Mathews 2017; McGrath 1987; Randolph 1994; Smeekens 2011); previous experience with child maltreatment reporting in three studies (Alvarez 2010; Dubowitz 1991; Hazzard 1984); and previous child protection training in three studies (Dubowitz 1991; Hazzard 1984; Mathews 2017). Participants in the 11 studies were relatively experienced in their professions, with mean years of experience ranging from 9 years, in Smeekens 2011, to 15.4 years, in McGrath 1987.
Age, gender, and ethnicity
Study participants' demographic details at baseline were inconsistently reported. Four studies reported the mean age of participants at baseline separately for intervention and control groups (Alvarez 2010; Jacobsen 1993; Randolph 1994; Smeekens 2011). One study reported age bracket data for intervention, control, and total participants (Mathews 2017). Four studies reported statistical assessment of baseline differences in age (Alvarez 2010; Mathews 2017; Randolph 1994; Smeekens 2011). One study reported an age range of 18 to 55+ years for total participants (Kim 2019), and another reported a median age bracket of 31 to 35 years (Hazzard 1984). Other studies reported means but not SDs for doctors (Dubowitz 1991: 27 years) and teachers (Kleemeier 1988: 41 years). One study did not report any data on participant age (McGrath 1987).
The proportion of female participants was lowest at 44% amongst doctors (Dubowitz 1991) and highest at 97.7% amongst childcare professionals (Mathews 2017). Seven studies did not report gender‐specific proportions (Alvarez 2010; Dubowitz 1991; Hazzard 1984; McGrath 1987; Palusci 1995; Randolph 1994; Smeekens 2011).
Ethnicity data were reported in only four studies, with the majority of participants in these studies being identified by use of the term 'White' or 'Caucasian': 70% (Jacobsen 1993), 75% (Kleemeier 1988), 84.2% (Mathews 2017), and 97.5% (Kim 2019). A minority of participants were Hispanic, African‐American, or Asian.
Interventions
Intervention conditions
The 11 trials examined the effectiveness of 11 distinct but comparable interventions. Interventions named were: child maltreatment reporting workshop (Alvarez 2010); child maltreatment course (Dubowitz 1991); one‐day training workshop on child abuse (Hazzard 1983); teacher training workshop (Kleemeier 1988); three‐hour inservice training on child sexual abuse adapted from Kleemeier 1988 (Jacobsen 1993); teacher awareness programme (McGrath 1987); child sexual abuse prevention teacher training workshop (Randolph 1994); interdisciplinary team‐based training (Palusci 1995); the Next Page (Smeekens 2011); iLookOut for Child Abuse (Mathews 2017); and Committee for Children Second Step Child Protection Unit (Kim 2019). All were education and training interventions aimed at building the capacity of postqualifying child‐serving professionals to protect children from harm by exposing these professionals to a series of intentional learning experiences.
In eight of the 11 trials, interventions were delivered in face‐to‐face workshops or seminars, whilst in the remaining three trials, interventions were delivered as self‐paced e‐learning modules (Kim 2019; Mathews 2017; Smeekens 2011).
Contents or topics covered
All trials reported contents or topics covered in the training interventions. The most common topics included: indicators of child abuse and neglect; definitions and types of child abuse and neglect; reporting laws, policies, and ethics; how to make a report; incidence or prevalence, or both; and concerns, fears, myths, and misconceptions. Fewer interventions addressed aetiology (Dubowitz 1991; Kleemeier 1988), effects (Jacobsen 1993; Kleemeier 1988), responding to disclosures (Jacobsen 1993; Kim 2019; Randolph 1994), or community resources and referrals (Kleemeier 1988).
Training interventions for teachers were more likely to cover primary prevention, that is strategies for preventing child abuse and neglect before it occurs or preventing its reoccurrence (Jacobsen 1993; Kim 2019; Kleemeier 1988). Training for doctors, nurses, and mental health professionals tended to emphasise evaluation and diagnosis, communicating with children, and interviewing caregivers (Alvarez 2010; Dubowitz 1991; Palusci 1995; Smeekens 2011).
In three studies, all of which evaluated e‐learning interventions, the study authors explained elements of the underlying programme theory. In iLookOut, training was conceptualised as having two key dimensions designed to enhance participants' cognitive and affective attributes for reporting child maltreatment (Mathews 2017). In the Next Page, content was built around three dimensions: recognition, responding (acting), and communicating (Smeekens 2011). In the Second Step Child Protection Unit, the staff training component was part of a broader, comprehensive 'whole school' approach to the prevention of child sexual abuse. This training addressed multiple features of the school ecology: school policies and procedures, staff training, student lessons, and family education (Kim 2019).
Teaching methods, strategies, or processes
Teaching methods, strategies, or processes used in intervention delivery were reported in nine of the 11 trials, with two trials providing no information (Kim 2019; McGrath 1987). The most common methods included the use of films/videos, modelling via observation of clinicians, experiential exercises, and role‐plays. In the more recent e‐learning interventions, case simulations were used (Mathews 2017; Smeekens 2011). These methods were directed towards providing insights into real‐life situations in which child abuse and neglect would be encountered, and towards providing opportunities to observe experienced practitioners at work, engage in practice, and receive feedback. Some interventions also included question‐and‐answer sessions with experts (Hazzard 1984; Kleemeier 1988; Randolph 1994). Didactic presentations with group discussion were common; reading tasks, Dubowitz 1991, and written activities, Randolph 1994, were less so. E‐learning modules offered opportunities for the use of interactive elements, including animations, Smeekens 2011, and filmmaking techniques designed to activate empathy for victims, Mathews 2017. For example, in iLookOut, e‐learning modules have an “interactive, video‐based storyline with films shot in point‐of‐view (i.e. the camera functioning as the learner’s eyes) ... as key events unfold through interactions involving children, parents, and co‐workers (all played by actors), the learner had to decide how to best respond” (Mathews 2017, p 19).
The duration and intensity of interventions in the included trials ranged from a single two‐hour workshop, Alvarez 2010; McGrath 1987, to six 90‐minute seminars conducted over a one‐month period (Dubowitz 1991). A six‐hour workshop for teachers, first reported in Hazzard 1984, was also used in Kleemeier 1988, and was then adapted for a three‐hour workshop by Jacobsen 1993. Similar content was spread over three two‐hour sessions by Randolph 1994. E‐learning interventions used in three studies offered the advantage of self‐paced learning within a specified window of availability, but also presented a challenge in specifying training length (Kim 2019; Mathews 2017; Smeekens 2011).
The interventions were developed and delivered by specialist facilitators (Alvarez 2010; Jacobsen 1993; McGrath 1987), content area experts (Hazzard 1984; Kim 2019; Kleemeier 1988; Mathews 2017; Randolph 1994), and interdisciplinary teams (Dubowitz 1991; Palusci 1995; Smeekens 2011).
Control conditions
In one trial, the comparison condition was an alternative training, that is a cultural sensitivity workshop, which study authors explained was used for its appeal in recruiting participants who were looking for continuing education credits (Alvarez 2010, p 213). In four studies, the training intervention group was compared to a waitlist control group (Kim 2019; Mathews 2017; McGrath 1987; Randolph 1994), and in five studies the comparison condition was no training (Dubowitz 1991; Hazzard 1984; Kleemeier 1988; Palusci 1995; Smeekens 2011). One study did not report the comparison condition (Jacobsen 1993).
Unit of analysis issues
Allocation of individuals to intervention or control conditions in many of the included studies occurred by workplace groups (e.g. all teachers in entire schools, all paediatricians on clinic rotations), thus forming clusters. None of the studies conducted at group level were labelled as clustered studies by study authors, nor were data analysed using statistical methods to account for similarities amongst participants in the same cluster. In some studies, all participants in a cluster (e.g. a school) were allocated to a condition (e.g. Hazzard 1984; Kim 2019). In other studies, clustered data were created by allocating several participants from the same workplace to one condition (e.g. Alvarez 2010; Kleemeier 1988). We identified unit of analysis issues, which we addressed in our reporting in the Effects of interventions section.
Missing data
We identified two types of missing data in the included studies: missing outcome data required for effect size calculation, and missing participant data due to attrition (Alvarez 2010; Dubowitz 1991; Hazzard 1984; Kim 2019; Kleemeier 1988; Mathews 2017; McGrath 1987; Smeekens 2011). For details, see Characteristics of included studies tables. The approaches we used for dealing with missing data and data synthesis for each of these studies are detailed in Appendix 4.
Funding sources
All but two of the 11 included studies reported receiving external funding. Studies were funded by federal government agencies in the USA (Alvarez 2010; Dubowitz 1991; Kleemeier 1988; Randolph 1994) and Canada (McGrath 1987), and by a combination of university and philanthropic funding in the USA (Hazzard 1984; Mathews 2017). One study was funded by the philanthropic arm of an international technology company, which also hosted the online training platform used in the study (Smeekens 2011). One study was funded by a training intervention developer, a non‐government organisation in the USA (Kim 2019).
Outcomes
In this section, we have summarised the primary and secondary outcomes of interest that were investigated in the 11 included studies. For details by individual study, see Characteristics of included studies.
Primary outcomes
1. Number of reported cases of child abuse and neglect
As shown in Table 1 below, three of the 11 included studies measured changes in the number of reported cases of child abuse and neglect via participants' self‐reports of actual cases reported (i.e. primary outcome 1a) (Hazzard 1984; Kleemeier 1988; Randolph 1994). Although differently named, the instruments used were almost identical, comprising a battery of seven items (Hazzard 1984) or five items (Kleemeier 1988; Randolph 1994) assessing self‐reported actions taken in relation to child abuse and neglect (i.e. a behavioural measure). One common item in the batteries, 'reporting a case of suspected abuse' to a protective services agency, was classified as a primary outcome 1a measure. Data were collected in the three studies at six‐week (Kleemeier 1988), three‐month (Randolph 1994), and six‐month (Hazzard 1984) follow‐up periods.
Five of the 11 included studies measured changes in the number of reported cases of child abuse and neglect via participant responses to vignettes (i.e. primary outcome 1b) (Alvarez 2010; Jacobsen 1993; Kleemeier 1988; Palusci 1995; Randolph 1994). Alvarez 2010 used an inventory of eight child maltreatment vignettes, two for each of the four child maltreatment subtypes (physical abuse, emotional abuse, sexual abuse, and neglect), one of which required a report and one of which did not. Jacobsen 1993, Kleemeier 1988, and Randolph 1994 used eight child sexual abuse vignettes in a measure known as the Teacher Vignettes Measure to elicit participants' knowledge of behavioural indicators, ability to respond to disclosures, and ability to enact appropriate courses of action, including reporting. From the text descriptions in the study reports, we can assume that the same vignettes were used in all three studies; the vignettes were published in Jacobsen 1993 (p 43‐9). Kleemeier 1988, the original authors of the measure, reported psychometric properties including internal consistency (alpha (α) = 0.78) and scorer interrater reliability (0.99; type of coefficient not reported). Unlike Kleemeier 1988 and Randolph 1994, however, Jacobsen 1993 did not use the Teacher Vignettes Measure at baseline. Palusci 1995 presented four illustrated case vignettes as Part Three of a longer survey. Participants were asked to assess anatomical findings and decide on case reportability based on a short patient history and photographs. Internal consistency of the entire survey was reported (α = 0.69).
Smeekens 2011 used eight simulated scenarios based on real clinical cases with in vivo video‐recorded assessment, which was later coded independently. However, we judged this intervention, the skills/capabilities it targeted, and its measurement to be qualitatively different from the other included studies and the text‐based vignette assessments they used. We have reported on Smeekens 2011 under secondary outcome 3 below.
None of the 11 included studies measured changes in the number of reported cases of child abuse and neglect via objective, official records of reports made to child protection authorities (i.e. primary outcome 1c). In principle, such changes in reporting practice could be assessed by accessing administrative records of reports made to child protection agencies and examining these at the jurisdictional level relevant to the professional groups participating in interventions. However, such records are unlikely to be linkable to any trained individual or training cohort. This primary outcome measure would therefore be difficult, if not impossible, to assess in trials of training interventions. Research questions about the influence of training interventions on actual reporting practice may be better answered by other study designs, for example time series analyses using child protection reporting data (e.g. Gilbert 2012; Mathews 2016). These themes are explored further in Implications for research.
2. Changes in the quality of reported cases of child abuse and neglect
As shown in Table 1, none of the 11 included studies measured changes in the quality of reported cases of child abuse and neglect by objective official records of reports made to child protection authorities (i.e. primary outcome 2), for example via coding of de‐identified reports made to child protection authorities held in government or agency records.
3. Adverse events
As shown in Table 1, none of the 11 included studies assessed adverse events, such as increases in failure to report (i.e. primary outcome 3a ‐ known colloquially as ‘under‐reporting’), or increases in reporting of cases that do not warrant a report (i.e. primary outcome 3b ‐ known colloquially as ‘over‐reporting’).
Table 1. Primary outcomes (* indicates inclusion in meta‐analysis or single effect size calculation)
| Primary outcomes from included studies | Measure named in included studies | Studies |
| --- | --- | --- |
| 1. Number of reported cases of child abuse and neglect | | |
| 1a. measured subjectively by participant self‐reports of actual cases reported | Reported Involvement in Child Abuse ‐ single item 'reporting a case of suspected abuse' from a multi‐item measure | Hazzard 1984 |
| | Teacher Prevention Behavior Measure ‐ a single item 'reporting a case of suspected abuse' from a multi‐item measure | Kleemeier 1988; Randolph 1994* |
| 1b. measured subjectively by participant responses to vignettes | Recognition and Intention to Report Suspected Child Maltreatment | Alvarez 2010 |
| | Teacher Vignettes Measure | Jacobsen 1993; Kleemeier 1988*; Randolph 1994* |
| | Survey (Part Three) | Palusci 1995 |
| 1c. measured objectively in official records of reports made to child protection authorities | Nil | Nil |
| 2. Changes in the quality of reported cases of child abuse and neglect, measured via coding of the actual contents of reports made to child protection authorities (i.e. in government records or archives) | Nil | Nil |
| 3. Adverse events | | |
| 3a. increase in failure to report cases of child abuse and neglect that warrant a report, measured subjectively by participant self‐reports (i.e. in questionnaires) | Nil | Nil |
| 3b. increase in reporting of cases that do not warrant a report, measured subjectively by participant self‐reports (i.e. in questionnaires) | Nil | Nil |
Secondary outcomes
1. Knowledge of the reporting duty, processes, and procedures
Four of the 11 included studies measured knowledge of the reporting duty, processes, and procedures (i.e. secondary outcome 1 as shown in Table 2) (Alvarez 2010; Kim 2019; Mathews 2017; McGrath 1987). Measurement instruments were customised to align with jurisdictional or institutional (or both) reporting requirements. Alvarez 2010 used a 15‐item inventory with multiple‐choice response options to assess knowledge of child maltreatment reporting laws and reported both internal consistency (α = 0.18) and test‐retest reliability (r = 0.88, P < 0.01) (Alvarez 2008, p 56). Kim 2019 used the Educators and Child Abuse Questionnaire (Knowledge of Reporting Procedures subscale) (Kenny 2004). McGrath 1987 presented participants with five items assessing knowledge of legislative reporting requirements and five items assessing school board policy reporting requirements, but correct answers were not summed for an overall score. Data were reported by item. No reliability data were reported. Mathews 2017 used a 21‐item scale to assess knowledge of the legal duty to report child abuse and neglect, which was subjected to psychometric testing; however, these data were not reported.
2. Knowledge of core concepts in child abuse and neglect such as the nature, extent, and indicators of the different types of child abuse and neglect
Eight of the 11 included studies measured knowledge of core concepts in child abuse and neglect (i.e. secondary outcome 2 as shown in Table 2) (Dubowitz 1991; Hazzard 1984; Jacobsen 1993; Kim 2019; Kleemeier 1988; McGrath 1987; Palusci 1995; Randolph 1994). Four of these studies assessed knowledge of core concepts in child abuse and neglect (inclusive of all forms of child abuse and neglect) (Dubowitz 1991; Hazzard 1984; Kim 2019; McGrath 1987), and four studies assessed knowledge of core concepts in child sexual abuse specifically (Jacobsen 1993; Kleemeier 1988; Palusci 1995; Randolph 1994).
Knowledge scales for core concepts in child abuse and neglect varied in length from four items, in McGrath 1987, to 34 items, in Hazzard 1984. Response options were presented as multiple choice or variations on true/false/don’t know, so that correct answers could be summed for an overall score. Dubowitz 1991 developed a custom‐made test based on course content. Hazzard 1984 developed the Knowledge About Child Abuse scale, which assessed knowledge about definitions, characteristics, causes and effects. Internal consistency was reported (α = 0.80, p 290). Kim 2019 used the Educators and Child Abuse Questionnaire (Awareness of Signs and Symptoms of Child Abuse subscale) (Kenny 2004), for which validity and reliability data were established in a similar population (Kenny 2004). McGrath 1987 used four items from a longer questionnaire ("First Measure subscale 1"), to separately assess knowledge of indicators of physical abuse, neglect, sexual abuse, and emotional abuse.
Knowledge scales for core concepts in child sexual abuse specifically were 30 items in length (Jacobsen 1993; Kleemeier 1988; Palusci 1995; Randolph 1994). The Teacher Knowledge Scale, Kleemeier 1988, was used in three studies (Jacobsen 1993; Kleemeier 1988; Randolph 1994). Jacobsen 1993 provided a full list of scale items for the Teacher Knowledge Scale (p 36‐8), and Kleemeier 1988 reported on internal consistency (α = 0.84) and test‐retest reliability (r = 0.90). Palusci 1995 used a 30‐item survey divided into three parts (i.e. subscales). Only part one, assessing knowledge of female genital anatomy (12 items), was relevant to this outcome.
3. Skill in distinguishing cases that should be reported from those that should not
In our study protocol (Mathews 2015), we did not describe this secondary outcome in sufficient detail; during data extraction, it became clear that there was potential for overlap between this secondary outcome and primary outcome 1b: changes in the 'number of reported cases of child abuse and neglect as measured subjectively by participant responses to vignettes’. To clarify, skills are individual attributes that can be assessed as an individual’s ability to perform a task to a given level (Dalziel 2017). There may be at least two skills involved in distinguishing cases that should be reported from those that should not: (i) the ability to accurately identify child abuse and neglect; and (ii) the ability to determine whether the type and extent of abuse or neglect presented to the reporter falls within a category required by law or policy to be reported. Professionals can develop these skills via exposure to ‘real situations’, for example in supervised clinic rotations, practicum or fieldwork placements, or internships. These skills can be assessed ethically using in vivo assessments and participation in simulation games, and may also be assessable via responses to vignettes (Stanley 2017).
One of the 11 included studies measured skill in distinguishing cases that should be reported from those that should not (i.e. secondary outcome 3 as shown in Table 2) via in vivo assessment (Smeekens 2011). In this study, the intervention was an e‐learning programme for Dutch emergency department nurses comprising interactive clinical case simulations and video animations. Nurses' performances in two different simulated cases, randomly generated from a pool of eight possible simulated cases, were video recorded at pre‐ and post‐test and coded by a trained and blinded expert panel. In the in vivo assessments, nurses were guided through a simulated paediatric patient examination and were scored on the quality and quantity of the questions they asked and their completion of a standardised checklist (Smeekens 2011). Interrater reliability for the expert panellists was reported as 0.70 (p 333).
4. Attitudes towards the duty to report child abuse and neglect
According to attitude theories, attitudes are phenomenologically distinct from opinions, beliefs, and feelings. Attitudes can be defined as “a psychological tendency that is expressed by evaluating a particular entity with some degree of favour or disfavour” (Eagly 1993, p 1). An attitude thus must be directed towards a specific attitude object such as a behaviour (i.e. reporting child abuse and neglect), a condition (i.e. child sexual abuse), a person or group (e.g. perpetrators or victims of child abuse), or an event (e.g. a campaign about violence against children). Attitudes towards one particular 'thing' cannot be conflated with attitudes towards another different 'thing'.
Two of the 11 included studies measured attitudes towards the duty to report child abuse and neglect (Kim 2019; Mathews 2017). Kim 2019 used the previously validated 14‐item Teacher Reporting Attitude Scale ‐ Child Sexual Abuse (Walsh 2010; Walsh 2012b). Mathews 2017 used a 13‐item scale adapted from previous research. Kleemeier 1988 and Randolph 1994 measured attitudes towards child sexual abuse rather than attitudes towards the duty to report child abuse and neglect ‐ these are listed in Table 3 as ineligible outcomes. Dubowitz 1991 included "Attitudinal Items"; however, on closer inspection of the scale items and response scale, we classified this as a child abuse reporting self‐efficacy measure, which assessed levels of competence in managing cases of child abuse, comprising five items rated on a five‐point Likert‐type scale ‐ this is listed in Table 3 as an ineligible outcome.
Table 2. Secondary outcomes (* indicates inclusion in meta‐analysis or single effect size calculation)
| Secondary outcomes from included studies | Measure as named in included studies | Studies |
| --- | --- | --- |
| 1. Knowledge of the reporting duty, processes, and procedures | Knowledge of Child Maltreatment Reporting Laws | Alvarez 2010 |
| | Educators and Child Abuse Questionnaire (Knowledge of Reporting Procedures subscale) (Kenny 2004) | Kim 2019 |
| | First measure (subscales 2 and 3) | McGrath 1987 |
| | iLookOut Knowledge | Mathews 2017* |
| 2. Knowledge of core concepts in child abuse and neglect such as the nature, extent, and indicators of the different types of abuse and neglect | | |
| 2a. knowledge of core concepts in child abuse and neglect (i.e. all forms of child abuse and neglect) | Test based on course content | Dubowitz 1991* |
| | Knowledge About Child Abuse | Hazzard 1984* |
| | Educators and Child Abuse Questionnaire (Awareness of Signs and Symptoms of Child Abuse subscale) (Kenny 2004) | Kim 2019 |
| | Second measure | McGrath 1987 |
| 2b. knowledge of core concepts in child sexual abuse (i.e. only child sexual abuse) | Teacher Knowledge Scale (Kleemeier 1988) | Jacobsen 1993; Kleemeier 1988*; Randolph 1994 |
| | First measure (subscale 1, indicators of child sexual abuse) | McGrath 1987* |
| | Survey (Part One and Part Two) | Palusci 1995 |
| 3. Skill in distinguishing cases that should be reported from those that should not | Performance in Simulated Cases | Smeekens 2011* |
| 4. Attitudes towards the duty to report child abuse and neglect | Teacher Reporting Attitude Scale ‐ Child Sexual Abuse (Walsh 2012b) | Kim 2019 |
| | iLookOut Attitudes | Mathews 2017* |
Ineligible outcomes
Several of the included studies also measured ineligible outcomes, as shown in Table 3 below.
Table 3. Ineligible outcomes
| Ineligible outcomes from included studies | Measure as named in included studies | Studies |
| --- | --- | --- |
| 1. Skills in safeguarding therapeutic relationships | Clinical Expertise in Reporting Suspected Child Maltreatment Scale | Alvarez 2010 |
| 2. Child abuse reporting self‐efficacy | Attitudinal items | Dubowitz 1991 |
| 3. Child abuse detection self‐efficacy | Self‐efficacy | Smeekens 2011 |
| 4. Feelings | Feelings About Child Abuse | Hazzard 1984 |
| 5. Attitudes towards child sexual abuse | Teacher Opinion Scale | Kleemeier 1988; Randolph 1994 |
| 6. Knowledge of female genital anatomy | Survey (Part One) | Palusci 1995 |
| 7. Knowledge of ‘reportability’ of sexually transmitted infections in children | Survey (Part Two) | Palusci 1995 |
| 8. Teacher‐student relations | Delaware School Climate Survey | Kim 2019 |
| 9. Acceptability | Abbreviated Acceptability Rating Profile | Kim 2019 |
Excluded studies
We excluded 1454 records after full‐text screening. Most of these records were excluded because they did not report on an eligible intervention. The Characteristics of excluded studies tables list only those 20 studies that appeared to meet the eligibility criteria but were excluded on closer inspection at the full‐text screening or data extraction stages, along with the primary reasons for exclusion. These studies were 'near misses' that readers may view as relevant. Of note were two studies, Rheingold 2012 and Rheingold 2015, both of which were multisite RCTs comparing (i) face‐to‐face with (ii) web‐based training using the Stewards of Children training programme for professionals. These studies addressed outcomes relevant to this review, but our study protocol did not allow for the inclusion of head‐to‐head training comparison studies. This is a limitation of the review (addressed below in the Potential biases in the review process section) that could be remedied in future review updates. Other notable excluded studies include: Hawkins 2001a and Hawkins 2001b, a frequently cited evaluation of a training intervention for mandatory reporters in Australia; Lee 2017, reporting on training interventions for nurses; and a registered Phase 2 trial, NCT03185728, of an e‐learning intervention first reported in Mathews 2017 (included in this review).
Ongoing studies and studies awaiting classification
We identified two ongoing studies. We contacted the authors of a potentially completed registered trial identified in our searches of clinical trials registries (IRCT2015042713748N3), but received no response regarding its status. We identified another trial in progress both through our search and through enquiries to the Child‐Maltreatment‐Research‐Listserv (NCT03185728); this trial is testing the effectiveness of the iLookOut for Child Abuse e‐learning training programme. We also identified three studies awaiting classification. One study written in a language other than English appeared to meet our inclusion criteria (De Faria Brino 2003), but our attempts to contact the author to verify eligibility were unsuccessful. We could not source the full text for another potentially eligible study, and could not categorically exclude it based on the title and abstract (Herrera 1993). We have contacted study authors where possible, and will endeavour to classify these studies in subsequent review updates.
Risk of bias in included studies
Risk of bias judgements for the 11 included studies are summarised in Figure 2 and Figure 3. We have reported risk of bias judgements for the 9 RCTs and quasi‐RCTs separately from risk of bias judgements for the 2 CBA studies.
Across the nine RCTs and quasi‐RCTs, three domains were most at risk of bias: performance bias (all nine studies rated at high risk of bias); detection bias (all nine studies rated at high risk of bias); and reporting bias (six of nine studies rated at high risk of bias, although the more recent studies, Alvarez 2010, Mathews 2017, and Smeekens 2011, were rated at low risk of bias). Selection bias was also problematic, with allocation concealment and group comparability assessed as being at unclear or high risk of bias for all nine studies. We assessed attrition bias, indicated by incomplete reporting of outcome data, as unclear or high risk of bias for all nine studies. The vast majority of studies reported insufficient information to judge risk of bias unequivocally. No domains were rated predominantly at low risk of bias (i.e. with > 50% of studies rated at low risk of bias).
Of the two CBA studies, Palusci 1995 was rated at high or unclear risk of bias on 9 of 10 risk of bias domains, mirroring the ratings for Dubowitz 1991, upon which it was based, on all domains except measurement bias. Jacobsen 1993 was rated at high or unclear risk of bias on 8 of 10 risk of bias domains, mirroring the ratings for Kleemeier 1988, upon which it was based, on several domains and improving on the original study for reporting bias (perhaps owing to the additional space afforded by a thesis format). Both CBA studies were judged to be at high risk of bias overall and were omitted from effect size calculations and meta‐analyses (Jacobsen 1993; Palusci 1995).
We did not find published protocols for any of the included studies; however, two studies had been registered (Mathews 2017; Smeekens 2011). Nonetheless, the trial register entries did not report any detail on proposed outcome measures.
Allocation
Random sequence generation
We rated three studies at high risk of bias (Dubowitz 1991; Jacobsen 1993; Palusci 1995), two of which were CBA studies (Jacobsen 1993; Palusci 1995). One quasi‐RCT used naturally occurring clinician rotations to divide participants into intervention or control groups (Dubowitz 1991). We rated three studies at low risk of bias, as they provided adequate descriptions of appropriate methods used to generate the allocation sequence, such as a computer‐generated random number list (Kim 2019; Mathews 2017; Smeekens 2011). We rated the remaining five studies at unclear risk of bias because they provided inadequate descriptions of sequence generation.
Allocation concealment
We rated no studies at low risk of bias for this domain. We rated three studies at high risk of bias due to inadequate concealment of allocations prior to assignment: two of these were CBA studies (Jacobsen 1993; Palusci 1995), whilst in the third study, a quasi‐RCT, participants were allocated based on clinical rotations, and therefore participants and investigators could reasonably have foreseen allocation to the intervention or control group prior to or during the allocation process (Dubowitz 1991). The remaining eight studies did not adequately report an appropriate method of concealing allocation of participants to treatment groups and were judged at unclear risk of bias.
Blinding
Blinding of participants and personnel
In most instances with training interventions, it is not possible to blind study participants and personnel to group membership. Participants know that they are taking part in training, and this may influence subjective outcomes such as self‐report measures. For this reason, and in the absence of adequate reporting on blinding, we rated all 11 included studies at high risk of bias. The authors of one RCT reported on blinding, acknowledging that blinding was not possible owing to the nature of the trial, but did not explain how the risk was (or could be) mitigated (Smeekens 2011).
Blinding of outcome assessment
We rated one RCT at low risk of bias because outcome assessors were blinded to whether participants belonged to the intervention or control group (Smeekens 2011). In Smeekens 2011, an objective assessment of individual nurse participants’ performance was undertaken via in situ responses to standardised video case simulations, evaluated by experienced paediatricians blinded to group membership. We rated the remaining studies at high risk of bias owing to inadequate or no reporting on blinding of outcome assessors.
Incomplete outcome data
We rated no studies at low risk of bias for this domain. We assessed eight studies at unclear risk of attrition bias. Six of these studies, including the two CBA studies, did not provide complete data on participant attrition, exclusions, and withdrawals, or did not report reasons for missing data or imputation methods used (Alvarez 2010; Dubowitz 1991; Hazzard 1984; Kleemeier 1988; Palusci 1995; Randolph 1994). The other two studies provided CONSORT diagrams, but did not explicitly report reasons for attrition, so it was not possible to verify the potential impact on effect estimates (Kim 2019; Mathews 2017). We assessed three studies at high risk of bias. In one of these studies, the authors reported a “high level of attrition” from the experimental group, but did not provide further details (McGrath 1987, p 126). In another study, attrition was around one‐third of participants in both the intervention and control groups of an already small trial (19 participants in each group); the authors reported using imputation and comparing imputed with non‐imputed results, noting no difference, yet did not report the results of both analyses (Smeekens 2011). In the third study, all control group participants were lost at the four‐month follow‐up, preventing between‐group comparisons at that time point (Mathews 2017).
Selective reporting
We rated four studies at low risk of selective reporting bias because prespecified outcomes were reported in sufficient detail to assess their completeness. Two of these studies had published trial registrations, though not protocols (Mathews 2017; Smeekens 2011). In the third study, there was congruence between published and unpublished reports (Alvarez 2010); and the fourth study, a CBA, was a thesis, which arguably enabled more space for detailed reporting (Jacobsen 1993). We rated the remaining seven studies, including one CBA study (Palusci 1995), at high risk of bias because study protocols were not available, and incomplete outcome data were reported, thus increasing the possibility of selective reporting (Dubowitz 1991; Hazzard 1984; Kim 2019; Kleemeier 1988; McGrath 1987; Palusci 1995; Randolph 1994).
Other potential sources of bias
We rated none of the 11 included studies at low risk of bias in all three additional domains: reliability of outcome measures (measurement bias), group comparability (selection bias), and contamination (contamination bias). We rated six studies, including one CBA study, at no worse than unclear risk of bias across the three additional domains (Jacobsen 1993; Kim 2019; Mathews 2017; McGrath 1987; Randolph 1994; Smeekens 2011). We rated five studies, including one CBA study, at high risk of bias in at least one of the three additional domains (Alvarez 2010; Dubowitz 1991; Hazzard 1984; Kleemeier 1988; Palusci 1995).
An insufficient number of included studies precluded analysis for publication bias.
Reliability of outcome measures
We rated five studies, including both CBA studies, at low risk of measurement bias due to their use of reliable measures for outcome assessment; these studies reported internal consistency coefficient alphas of > 0.60 for the scales used (Jacobsen 1993; Kim 2019; Kleemeier 1988; Palusci 1995; Randolph 1994). One study reported interrater reliability for objective outcome assessment, but did not report data on the internal consistency of the measures (Smeekens 2011); we rated this study at unclear risk of bias on this domain, along with three further studies (Hazzard 1984; Mathews 2017; McGrath 1987). Of these, one study reported internal consistency data for a scale comprising separate items for which the use of coefficient alpha would not have been appropriate (Hazzard 1984); one study reported methods used to improve internal consistency but did not report relevant data, so it was not possible to determine reliability (Mathews 2017); and one study used a pre‐existing scale that could not be located in order to determine reliability (McGrath 1987). We rated two studies at high risk of bias because they reported a low coefficient α (Alvarez 2010), or did not report any reliability data (Dubowitz 1991).
Group comparability
We rated no studies at low risk of bias for group comparability. No studies provided sufficient detail for each outcome measure to enable a true assessment of intervention and control group comparability at baseline. We rated seven studies, including one CBA study (Jacobsen 1993), at unclear risk of bias for this domain because incomplete reporting of information prevented an assessment of whether analysed participants were comparable at baseline (Alvarez 2010; Jacobsen 1993; Kim 2019; Mathews 2017; McGrath 1987; Randolph 1994; Smeekens 2011). We rated four studies, including one CBA study (Palusci 1995), at high risk of bias for this domain because, despite reporting group equivalence, no data were provided to support the claim (Hazzard 1984; Palusci 1995), or a claim was made about group equivalence, yet there appeared to be important differences between groups, such as in professional qualifications and previous experience, that were unaccounted for and would likely affect group equivalence and study outcomes (Dubowitz 1991; Kleemeier 1988).
Contamination
We rated three studies at low risk of contamination bias. In one study, Hazzard 1984, participants in the intervention group were drawn from one state in the USA, and participants in the control group from another state. In two studies, whole schools were randomised to intervention or control groups (Kim 2019; McGrath 1987). We rated the remaining eight studies, including the two CBA studies (Jacobsen 1993; Palusci 1995), at unclear risk of contamination bias because it was unclear whether participants in the intervention and control groups worked in the same or different settings, thereby making it possible that control group participants working in the same setting as intervention group participants would be exposed to some parts of the intervention via proximity or informal communication channels, or both.
Effects of interventions
The results of analyses and our GRADE ratings are presented in Table 1 (primary outcomes) and Table 2 (secondary outcomes) for child protection training compared with no training, waitlist control, or alternative training not related to child abuse and neglect.
In this section, we have presented the main findings on the effects of interventions for the primary and secondary outcomes, drawing only on data from the five RCTs and four quasi‐RCTs (see Data collection and analysis). We have qualitatively synthesised the findings of the two CBA studies.
Primary outcomes
No studies evaluated changes in the number of reported cases of child abuse and neglect, as measured objectively in official records of reports made to child protection authorities; changes in the quality of reported cases of child abuse and neglect, as measured via coding of actual contents of reports made to child protection authorities; or adverse events.
Number of reported cases of child abuse and neglect
Participant self‐reports of actual cases reported
Three studies measured changes in the number of cases of child abuse and neglect via participants' self‐report of actual cases reported: two RCTs (Kleemeier 1988; Randolph 1994), and one quasi‐RCT (Hazzard 1984). In two studies data were missing for calculation of effect sizes, and due to the age of the studies, contact with authors to obtain missing data was not possible (Hazzard 1984; Kleemeier 1988). One study, Randolph 1994, included a total of 42 participants, and the effect estimate suggested a large effect of training on self‐reported cases at three‐month follow‐up (standardised mean difference (SMD) 0.81, 95% confidence interval (CI) 0.18 to 1.43; very low‐certainty evidence). This effect size was calculated using David B Wilson's suite of effect size calculators, as RevMan Web would not calculate an effect size when the mean for a group was zero, as was the case for the control group in this study (RevMan Web 2021).
We identified clustering in the Randolph 1994 study. After adjusting for clustering, the SMD reduced slightly, and the CIs widened slightly (SMD 0.80, 95% CI 0.16 to 1.45; very low‐certainty evidence). These analyses suggested that adjusting for clustering had only slight effects on the results.
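Adjustments for clustering of this kind are typically made using the design‐effect approach described in the Cochrane Handbook, in which sample sizes are divided by the design effect, 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. The sketch below illustrates that general calculation only; the function name, cluster size, and ICC shown are hypothetical and are not the values used in our analyses.

```python
# Illustrative sketch only: effective sample size after a design-effect
# adjustment for clustering (hypothetical inputs).
def effective_sample_size(n, avg_cluster_size, icc):
    """Divide the observed sample size by the design effect, 1 + (m - 1) * ICC."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n / design_effect

print(round(effective_sample_size(n=21, avg_cluster_size=7, icc=0.02)))
```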
Participant responses to vignettes
Three studies measured changes in the number of cases of child abuse and neglect via participants' responses to vignettes: two RCTs (Kleemeier 1988; Randolph 1994), and one quasi‐RCT (Alvarez 2010). One study did not separate outcome data for students and professionals (Alvarez 2010), and despite contact with, and co‐operation from, the study authors, our attempts to obtain the required data breakdown were unsuccessful. The two remaining studies, both RCTs, included 87 participants (training n = 47; comparison n = 40) (Kleemeier 1988; Randolph 1994), and the overall effect estimate suggested a large effect of training on the number of reported cases of child abuse and neglect at post‐test (fixed‐effect model: SMD 1.81, 95% CI 1.30 to 2.32; P < 0.001, I² = 8%, 2 studies, 87 participants; very low‐certainty evidence; Analysis 1.1). A forest plot of the distribution of effect sizes is provided in Figure 4. All indicators suggested minimal heterogeneity across study effects (Tau² = 0.01; Chi² = 1.08, df = 1, P = 0.30; I² = 8%).
We identified clustering in the Randolph 1994 and Kleemeier 1988 studies. After adjusting for clustering, the SMD increased slightly, the CI widened slightly (fixed‐effect model: SMD 1.82, 95% CI 1.28 to 2.35; P < 0.001, I² = 1%, 2 studies, 80 participants (adjusted sample size); very low‐certainty evidence; Analysis 1.2), and heterogeneity was reduced (Tau² = 0.00; Chi² = 1.01, df = 1, P = 0.31; I² = 1%). These analyses suggested that adjusting for clustering had only slight effects on results.
Two CBA studies also reported responses to vignettes as an outcome measure (Jacobsen 1993, n = 40; Palusci 1995, n = 30). Jacobsen 1993 did not assess this outcome at baseline. Palusci 1995 included medical students and qualified medical professionals, and although the authors summarised baseline and postintervention data by professional status subgroups, they did so only for the experimental group, and reported only the total number of correct survey answers in a figure, without any other data to permit calculation of an effect size. Part Three of the survey measured professionals' responses to four case vignettes, yet the authors reported survey scores as overall totals rather than by the three distinct survey parts. We were therefore unable to discern the effect of the intervention on professionals' responses to vignettes.
Secondary outcomes
Knowledge of the reporting duty, processes, and procedures
Four studies measured professionals' knowledge of reporting duty, processes, and procedures after participation in training: two RCTs (Mathews 2017; McGrath 1987), and two quasi‐RCTs (Alvarez 2010; Kim 2019). In two studies data were missing for calculation of effect sizes (Kim 2019; McGrath 1987), and another study did not separate data for professionals and students (Alvarez 2010). Our attempts to obtain missing data from the study authors were unsuccessful. Using the supplementary data for the remaining study (Mathews 2017), with a total of 744 participants (training n = 373; comparison n = 371), the effect estimate suggested a large effect of training on postintervention knowledge of reporting duty, processes, and procedures (SMD 1.06, 95% CI 0.90 to 1.21; low‐certainty evidence; Analysis 2.1). Due to attrition, calculation of between‐group effects was not possible at the four‐month follow‐up for this study.
Knowledge of core concepts in child abuse and neglect
Six studies measured professionals' knowledge of core concepts in child abuse and neglect, such as the nature, extent, and indicators of different types of child abuse and neglect (Dubowitz 1991; Hazzard 1984; Kim 2019; Kleemeier 1988; McGrath 1987; Randolph 1994).
Child abuse/maltreatment (general)
Four studies used a generalised measure of professionals' knowledge of core concepts in child abuse or maltreatment, or both: one RCT (McGrath 1987), and three quasi‐RCTs (Dubowitz 1991; Hazzard 1984; Kim 2019). In two studies data were missing for calculation of effect sizes, and our attempts to obtain the data from the study authors were unsuccessful (Kim 2019; McGrath 1987). The two remaining studies, Dubowitz 1991 and Hazzard 1984, included 154 participants (training n = 82; comparison n = 72), and the overall effect estimate suggested a moderate effect of training on generalised knowledge of child abuse and neglect post‐test (fixed‐effect model: SMD 0.68, 95% CI 0.35 to 1.01; P < 0.001, I² = 0%, 2 studies, 154 participants; very low‐certainty evidence; Analysis 3.1). A forest plot of the distribution of effect sizes is provided in Figure 5. All indicators suggested minimal heterogeneity across study effects (Tau² = 0.00; Chi² = 0.01, df = 1, P = 0.91; I² = 0%). Whilst follow‐up data were collected in the Dubowitz 1991 study, no data were reported to permit an assessment of the effect of training three to four months after the intervention.
We identified clustering in the Hazzard 1984 study. After adjusting for clustering, the SMD for the meta‐analysis reduced slightly, the estimates for heterogeneity did not change, but the CIs for Hazzard 1984 widened to include zero (fixed‐effect model: SMD 0.66, 95% CI 0.17 to 1.15; P = 0.009, I² = 0%, 2 studies, 70 participants (adjusted sample size); very low‐certainty evidence; Analysis 3.2).
Child sexual abuse (specific)
Three studies, all RCTs, used a specific measure of professionals' knowledge of core concepts in child sexual abuse (Kleemeier 1988; McGrath 1987; Randolph 1994), and included 238 participants (training n = 104; comparison n = 134). The overall effect for training on specific knowledge of child sexual abuse post‐test was large and positive (random‐effects model: SMD 1.44, 95% CI 0.43 to 2.45; P = 0.005, I² = 89%, 3 studies, 238 participants; very low‐certainty evidence; Analysis 3.3), but had substantial heterogeneity across effect sizes (Tau² = 0.69; Chi² = 17.44, df = 2, P < 0.001; I² = 89%; Figure 6).
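For context, the I² statistic quantifies the percentage of variability in effect estimates that is due to heterogeneity rather than sampling error; applying the standard formula to the Cochran's Q (Chi²) value reported above reproduces the 89% figure.

$$
I^{2} = \max\!\left(0,\ \frac{Q - df}{Q}\right) \times 100\%
= \frac{17.44 - 2}{17.44} \times 100\% \approx 89\%
$$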
There were too few studies to conduct subgroup analyses, so a qualitative assessment of the three studies was used to identify the potential source of heterogeneity. All studies measured the outcome at the same time point, used a face‐to‐face delivery method, and had similar content and teaching methods (although McGrath 1987 did not report the latter). However, there were three discernible differences between the studies: (i) the comprehensiveness of the outcome, whereby McGrath 1987 used a single‐item scale, and Kleemeier 1988 and Randolph 1994 used the same 30‐item scale; (ii) McGrath 1987 utilised a train‐the‐trainer model; and (iii) the length of the training: a six‐hour workshop in Kleemeier 1988, three two‐hour sessions in Randolph 1994, and a two‐hour workshop in McGrath 1987.
We identified clustering in all three studies (Kleemeier 1988; McGrath 1987; Randolph 1994). After adjusting for clustering, the SMD decreased slightly; the CI widened slightly (random‐effects model: SMD 1.42, 95% CI 0.44 to 2.39; P = 0.004, I² = 85%, 3 studies, 178 participants (adjusted sample size); very low‐certainty evidence; Analysis 3.4); and heterogeneity was reduced (Tau² = 0.62; Chi² = 13.35, df = 2, P = 0.001; I² = 85%).
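The random‐effects weights underlying these analyses are commonly derived using the DerSimonian and Laird estimator of between‐study variance (Tau²), sketched below in generic form; the study‐level weights themselves are not restated here.

$$
\hat{\tau}^{2} = \max\!\left(0,\ \frac{Q - df}{\sum_{i} w_{i} - \sum_{i} w_{i}^{2} \big/ \sum_{i} w_{i}}\right), \qquad
w_{i}^{\mathrm{RE}} = \frac{1}{SE_{i}^{2} + \hat{\tau}^{2}}
$$

where $w_{i} = 1/SE_{i}^{2}$ are the fixed‐effect weights and $Q$ is Cochran's heterogeneity statistic.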
Two CBA studies also utilised outcomes measuring professionals' knowledge of core concepts related to child sexual abuse (Jacobsen 1993; Palusci 1995). We were unable to discern the effect of the intervention on professionals' knowledge for Palusci 1995 (as explained above in this section under 'Participant responses to vignettes'). The results for Jacobsen 1993 were consistent with the results of the meta‐analysis of RCTs (SMD 1.81, 95% CI 1.07 to 2.56, 40 participants; very low‐certainty evidence; Analysis 3.5).
Skill in distinguishing reportable and non‐reportable cases
One RCT measured professionals' skill in distinguishing reportable and non‐reportable cases after participation in training (Smeekens 2011). Based on a total of 25 participants (training n = 13; comparison n = 12), the effect estimate suggested a large effect of training on professionals' skill in distinguishing reportable and non‐reportable cases at post‐test (SMD 0.94, 95% CI 0.11 to 1.77; very low‐certainty evidence; Analysis 4.1).
Attitudes towards the duty to report child abuse and neglect
Two studies measured attitudes towards the duty to report child abuse and neglect: one RCT (Mathews 2017), and one quasi‐RCT (Kim 2019). In one study data were missing for calculation of effect sizes, and our attempts to obtain missing data from the study authors were unsuccessful (Kim 2019). Using the supplementary data for the remaining study (Mathews 2017), with a total of 741 participants (training n = 372; comparison n = 369), the effect estimate suggested a moderate effect of training on attitudes towards the duty to report child abuse and neglect (SMD 0.61, 95% CI 0.47 to 0.76; very low‐certainty evidence; Analysis 5.1). Due to attrition, calculation of between‐group effects was not possible at the four‐month follow‐up for this study.
Discussion
Summary of main results
We conducted this systematic review to assess the effectiveness of child protection training to improve reporting of child abuse and neglect by professionals, and to investigate possible components of effective training interventions. We assessed the eligibility of 1481 full‐text reports, of which 11 trials (in 17 reports) met our inclusion criteria: five RCTs, four quasi‐RCTs, and two CBAs. We included data from nine of the 11 trials in the quantitative synthesis.
We found that child protection training of the type reported in this review may be more helpful than no training at all; however, overall the evidence is very uncertain. Professionals who received training scored higher on measures of knowledge, skills, and attitudes. However, the results were based on a small number of studies, some of which were dated and had methodological problems. In some cases, our analyses included only one professional group, limiting the applicability of the findings to other professional groups.
All trials were conducted with intradisciplinary groups of qualified professionals (elementary and high school teachers, childcare professionals, medical practitioners, and nurses), except for one study involving an interdisciplinary group of mental health professionals from psychology, educational psychology, counselling, and social work (Alvarez 2010). These are key professional groups that have regular contact with children and are most often required by law or occupational policy to report child abuse and neglect to statutory child protection authorities.
Trials were mainly conducted in the USA. Interventions were developed by experts and delivered by specialist facilitators or content area experts, and three interventions were facilitated by an interdisciplinary team (Dubowitz 1991; Palusci 1995; Smeekens 2011). Training intensity ranged from two hours to six 90‐minute sessions over a one‐month period. Eight trials tested face‐to‐face training interventions (Alvarez 2010; Dubowitz 1991; Hazzard 1984; Jacobsen 1993; Kleemeier 1988; McGrath 1987; Palusci 1995; Randolph 1994). Three trials tested the effectiveness of self‐paced e‐learning interventions (Kim 2019; Mathews 2017; Smeekens 2011). Comparison conditions were no training, waitlist control, or alternative training (unrelated to child protection).
Effectiveness of training: primary outcomes
We were able to assess training effectiveness for only one of the three primary outcomes specified in our study protocol (Mathews 2015): number of reported cases of child abuse and neglect.
Number of reported cases of child abuse and neglect
Compared with those with no training or who were waitlisted to receive training, trained professionals reported higher numbers of actual cases to child protection authorities up to three months after receiving training, and higher numbers of hypothetical cases presented to them as case vignettes immediately after receiving training. On both counts, this represents a large training effect. However, our findings were based on very few studies including only one professional group (teachers) (Kleemeier 1988; Randolph 1994). Like many of the older studies included in this review, these studies predated standards for reporting on trials (e.g. Hoffmann 2014; Schulz 2010), and were assessed as having methodological problems that could contribute to over‐ or underestimation of training effects. The certainty of evidence for this outcome was therefore very low.
Effectiveness of training: secondary outcomes
We were able to assess training effectiveness for all four secondary outcomes specified in our study protocol (Mathews 2015): (i) knowledge of the reporting duty, processes, and procedures; (ii) knowledge of core concepts in child abuse and neglect, such as the nature, extent, and indicators of the different types of abuse and neglect; (iii) skill in distinguishing cases that should be reported from those that should not; and (iv) attitudes towards the duty to report child abuse and neglect.
Knowledge of the reporting duty, processes, and procedures
Compared with those waitlisted to receive training, trained professionals demonstrated higher levels of knowledge of the reporting duty, processes, and procedures when tested immediately after receiving training. This represented a large training effect based on data from only one study including childcare professionals (Mathews 2017). In this study, childcare centre staff were trained with an e‐learning intervention, iLookOut, and assessed using self‐report measures that were completed online. Although positive, the finding was questionable due to the low certainty of the evidence. We are aware that further studies of this training programme are currently underway (NCT03185728).
Knowledge of core concepts in child abuse and neglect
The ‘core concepts’ knowledge domain was assessed using two approaches, depending on the training intervention focus. The first approach was assessment of core concepts such as the nature, extent, and indicators of all forms of child abuse and neglect; we refer to this as a generalised measure. The second approach was assessment of core concepts relating only to child sexual abuse; we refer to this as a specific measure.
Compared with those who received no training, trained professionals showed higher levels of knowledge of core concepts in child abuse and neglect (generalised measure) when tested immediately after receiving training. This represented a medium training effect. However, this finding was based on a single study conducted with one professional group (teachers), limiting the applicability of the evidence to that professional group (Hazzard 1984). The study had methodological problems that may have contributed to over‐ or underestimation of training effects, making us very uncertain about the result.
Compared with those who received no training or were waitlisted to receive training, trained professionals showed higher levels of knowledge of core concepts in child sexual abuse (specific measure) when tested immediately after receiving training. This represents a large training effect. Our finding was based on three studies (Kleemeier 1988; McGrath 1987; Randolph 1994), all of which were conducted with teachers, limiting the applicability of the evidence to one professional group. We rated these studies at high risk of bias for multiple issues relating to how the trials were conducted. Overall, the evidence is very uncertain.
Skill in distinguishing cases that should be reported from those that should not
Compared with those who received no training, trained professionals demonstrated higher levels of skill in distinguishing cases of child abuse and neglect that should be reported from those that should not, when tested immediately postintervention. This represents a large training effect and was based on data from one small study of nurses (Smeekens 2011). In this study, nurses were exposed to an e‐learning intervention and evaluated using an in vivo assessment in which they were scored, against standardised criteria, on their actual performance in simulated cases. Our analysis showed that the measurement was somewhat imprecise, meaning we are very uncertain about the training’s true effect. Nevertheless, this study is important because it was the only trial to assess participants’ demonstrated cognitive and practical skill in attending to the nature and salience of simulated case features, and their significance when deciding to report or not to report. This study therefore provided important qualitative insights into this secondary outcome, supplementing the quantitative results on numbers of reports detailed above for our primary outcomes.
Attitudes towards the duty to report child abuse and neglect
Compared with those who were waitlisted to receive training, trained professionals demonstrated more positive attitudes towards the duty to report child abuse and neglect when tested immediately after receiving training. This represents a medium training effect. Our finding was based on a single study of childcare professionals, limiting the applicability of the evidence to that professional group (Mathews 2017). Our analysis showed that the measurement of this variable was imprecise, leading us to be very uncertain about the results.
Overall completeness and applicability of evidence
We conducted extensive searches for relevant studies, in several iterations and without date or language restrictions. We are reasonably certain that our approach yielded all relevant trials.
The included studies were conducted in high‐income countries, mainly in the USA. Given the widespread adoption of reporting duties in law and policy for professionals whose work is focused on children in numerous countries throughout the world, including in low‐ and middle‐income countries (Mathews 2008a), the available evidence on the effectiveness of training interventions is limited. The results of our quantitative synthesis were in many instances confined to single professional groups. Whether similar effects would be seen in different countries or for a wider range of professionals therefore remains unknown. In addition, considering the wide range of different professional groups who possess reporting duties, the range of professional groups with whom child protection training interventions have been evaluated is also limited. For example, we found no trials including police, who comprise a particularly important reporter group. Police consistently make a large proportion of all reports of all types of child abuse and neglect, and are an essential front‐line response to child protection; yet, police also face unique challenges especially in appropriate reporting of exposure to domestic and family violence (Cross 2012). Similarly, few studies have involved early childhood care and education professionals, who play an important role given the high vulnerability of young children to serious harm. It is important that all reporter groups, especially those with either higher exposure to children in general or exposure to particularly vulnerable children, receive effective training and that such training is evaluated for efficacy and, where necessary, further customised to the professional group’s context.
Our searches revealed that a significant number of evaluations of training interventions have been conducted, as evidenced by the high number of full‐text reports screened (n = 1481) and the list of ‘near misses’ (n = 20) (see Excluded studies). This shows substantial investment in training programmes and their evaluation, yet it also underscores potential wastage of scarce research resources, because too few studies used empirical methods designed to identify whether specific training interventions with particular characteristics are effective or not. Both training interventions and their evaluation come at substantial cost.
We identified 11 trials for inclusion in this review; however, we were able to use data from only nine trials in the quantitative synthesis, mainly because information was missing from study reports, which placed the studies at risk of bias. The age of many of the included studies prevented contact with some study authors. None of the trials had published a study protocol, and only two of the more recent trials had been registered (Mathews 2017; Smeekens 2011). Several factors limited the overall completeness and applicability of evidence. Few studies appropriately assessed and transparently reported baseline equivalence of intervention and comparison groups on relevant demographic variables (e.g. age, gender, ethnicity, qualifications, years of experience) and experiential variables likely to influence the effects of training (e.g. prior training, prior experiences reporting to child protection authorities). Few studies conducted direct comparisons and reported complete data across study time points. Very few studies assessed long‐term training effects beyond the time immediately postintervention. There was generally poor reporting of participant attrition. There were many instances of missing data from analyses. None of the included studies accounted for clustering of professionals in groups in the analysis of study data. Research conduct and reporting would be improved in future by study authors committing to established guidelines such as CONSORT (Schulz 2010) and the Template for Intervention Description and Replication (TIDieR) (Hoffmann 2014), which specify the minimum information that should be reported for trials and interventions.
Even for studies in which data were sufficiently reported or available, we were unable to use some of these data owing to heterogeneity in outcomes and outcome measurement. For example, diverse measures were used to assess different types of knowledge, and there was heavy reliance on tailor‐made measures. A group of studies relied upon knowledge measures from Kleemeier 1988, which, in turn, was modelled on Hazzard 1984 (with full scales reported in Jacobsen 1993), thus perpetuating limitations in the original measure. Future research would be improved by the use of standardised measures of knowledge outcomes for core constructs ‐ even where the specific detail of a construct changes over time and requires customisation ‐ rather than novel measures. In the Implications for research section, we discuss further problems arising from the adoption of knowledge measures that are dated or jurisdictionally specific.
We identified several ineligible secondary outcomes. In some studies, constructs such as attitudes were inaccurately or loosely conceptualised and labelled, meaning that their measurement lacked validity (e.g. Dubowitz 1991), although we acknowledge that many of the research measures in our review predate advances in research on attitudes (e.g. Ajzen 2005; Albarracin 2005).
Self‐efficacy is an outcome we had not identified in our study protocol (Mathews 2015). This will be an important secondary outcome to consider in future review updates in the light of advances in the use of self‐efficacy theory, Bandura 1993, in observational studies of professionals’ child maltreatment reporting behaviour (e.g. Ayling 2020; Colgrave 2021; Lee 2012). Among the included studies, Smeekens 2011 (p 332) assessed self‐efficacy for detection (but not reporting) of child abuse, and Dubowitz 1991 (p 306) used an "attitudinal measure", which we reclassified as an ineligible self‐efficacy measure because its sample items assessed confidence in managing child abuse cases on a Likert‐type competence scale.
No studies considered prespecified potential adverse events. In our study protocol we defined adverse events as: (i) an increase in failure to report cases of child abuse and neglect that warrant a report, as measured subjectively by participant self‐reports (i.e. in questionnaires); and (ii) an increase in reporting of cases that do not warrant a report, as measured subjectively by participant self‐reports (i.e. in questionnaires). In retrospect, our description of adverse events may have been too narrow. Given advances in trial safety, codes of ethics for research conduct, deeper awareness of the need for trauma‐informed approaches to professional development, and requirements for researchers to report unexpected adverse events to their institutions, it would be advantageous for future review updates to include a category of adverse events that captures traumatic responses by trial participants themselves. Similarly, future reviews may also consider emotional distress for study participants as an adverse event, especially since a key feature of the interventions is the presentation of hypothetical cases.
No study reported on the financial costs associated with training intervention delivery or evaluation. This would be helpful information for studies to report: future programme design and evaluation may benefit from information about such costs, as well as comparisons of cost between online and face‐to‐face delivery. This is discussed further in Implications for practice. No study reported on training interventions for improving mandatory reporting specifically in culturally diverse contexts.
The completeness and applicability of the evidence are limited by the concentration of studies in high‐income Western countries, and by our inability to conduct subgroup or sensitivity analyses because there were too few studies. For the same reason, we were unable to develop a training intervention typology. We address the implications in Implications for practice and Implications for research.
Quality of the evidence
We assessed the certainty of evidence in the review as low to very low. We included only RCTs and quasi‐RCTs in the estimation of intervention effects. We downgraded the certainty of evidence due to: limitations in the design and implementation of available studies, suggesting a high likelihood of bias (all outcomes); indirectness of outcome measurement, owing to the limited number of available studies, which restricts the generalisability of results (all outcomes); imprecision of results due to small sample sizes (all outcomes); and/or unexplained heterogeneity or inconsistency of results (one outcome). We were unable to assess publication bias because fewer than 10 trials were included in our meta‐analyses (Boutron 2022), and therefore could not use publication bias as a criterion for rating the certainty of evidence.
Overall, the included studies were at risk of bias. The most common problems were lack of blinding of participants and personnel (performance bias; 11 studies) and lack of blinding of outcome assessment (detection bias; 10 studies). Blinding is seldom possible in studies of training interventions, as group membership is obvious to participants, trainers, and likely also to colleagues in participants’ workplaces, even if these individuals are not the training targets. Allocation concealment (selection bias) was unclear for the majority of studies (seven studies). More importantly for the underpinning science, reporting bias was evident in selective reporting and in incomplete reporting of outcome data, including for group comparability as noted above (Figure 2; Figure 3; Risk of bias in included studies).
In summary, because the GRADE certainty ratings were low or very low for all outcomes, we are uncertain about the effectiveness of training interventions compared with no training, waitlist control, or alternative training (not related to child protection). This means that the true effects for these outcomes may be substantially different from the estimated effects.
Potential biases in the review process
We followed the procedures in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2022a), first developing a study protocol (Mathews 2015) and then following this protocol in our conduct of the review. We used two key strategies to reduce the potential for bias in the review process. Firstly, our searches were comprehensive and included CENTRAL, MEDLINE, Embase, 18 other databases, a trials register, and handsearching of key journals and websites. We have confidence in our detailed search strategy, having corrected errors in the search strategy reported in the protocol and then ensured that all searches closely replicated the MEDLINE search across all search locations. We systematically screened and assessed all records captured by both the original search and the corrected search, meaning that our search was more comprehensive than intended. We also made two separate appeals for relevant studies via email to the Child‐Maltreatment‐Research‐Listserv, a moderated electronic mailing list that distributes email messages to over 1500 subscribers (Walsh 2018 [pers comm]). Despite these efforts, it was not possible for us to capture reports on trials of training interventions that were not made public, or that were covered by commercial‐in‐confidence agreements. In addition, due to the small number of included studies, we were unable to formally assess publication bias, which is the tendency for positive/statistically significant trial findings to be published more often than negative/non‐significant findings.
Secondly, multiple authors were involved in the selection of studies, and all were trained in using a decision guide closely based on the review inclusion and exclusion criteria. Data extraction was conducted by multiple authors, in some instances twice, during the lengthy process of conducting our review. Some issues arose during the review that we had not anticipated in our study protocol. These are detailed below in Differences between protocol and review. Furthermore, as noted below in the Declarations of interest, review authors who were also study authors, or colleagues or associates of study authors, were not involved in extracting data from or assessing risk of bias for any of the studies for which conflicts were present. This was designed to reduce the possibility of conflicts of interest. Instead, these tasks were undertaken by two independent review authors.
One shortcoming of our review is that we did not specifically allow for head‐to‐head comparisons of training interventions in our study protocol (Mathews 2015), which meant that we excluded two well‐designed trials of a widely used training intervention known as Stewards of Children (Rheingold 2012; Rheingold 2015), which otherwise met our inclusion criteria. Head‐to‐head trials assess a different research question, that is, whether one type of training is more effective than another. Rheingold and colleagues compared in‐person training, web‐based training, and no training (waitlisted to receive training). The training intervention studied in these trials has been used in 76 countries to train over 1.7 million adults (Darkness to Light 2021, p 7). In future review updates, consideration should be given to addressing this limitation.
Agreements and disagreements with other studies or reviews
To our knowledge, this is the first systematic review of child protection training for professionals to improve mandatory reporting of child abuse and neglect. Our review findings are generally consistent with results reported in the individual studies included in the review, which tended to favour training over no training, waitlist for training, or alternative training (not related to child protection). However, our review highlights the uncertainty of the evidence, which stems from methodological problems with the evaluation of training interventions that have characterised the field for several decades. Notwithstanding, three related reviews warrant mention; in combination with this review, they can assist in charting a way forward.
Carter 2006 narratively synthesised 10 years of published evidence on the effectiveness of procedural and training interventions for improving health professionals’ identification and management of child abuse and neglect. Procedural interventions included structured forms, checklists, and flowcharts. Training interventions included those focused on raising awareness of child safeguarding. The 23 studies included in the review covered a broad range of study designs. Congruent with our findings, critical appraisal in the review found a lack of rigorous evaluation, including confounding interventions, under‐utilisation of control groups, selection bias, and lack of follow‐up assessment of training outcomes beyond the immediate postintervention period. The confounding of concurrently administered procedural and training interventions is an important finding of this previous review, and a problem we sought to avoid in our review by defining our selection criteria for types of training interventions (see Types of interventions).
Louwers 2010 synthesised the published literature to February 2008 to identify effective interventions to increase detection (rather than reporting) rates of child abuse in hospital emergency departments. Four studies were identified, all of which investigated the effects of screening tools such as structured forms, checklists, and flowcharts. The review found increases in detection rates of suspected or confirmed cases and improvements in the quality of supporting documentation. There was no assessment of the methodological quality of the included studies. In our review, we identified several trials of interventions focused only on improving detection rather than reporting of child abuse and neglect. Although focusing solely on detection avoids the potential confounding of detection and reporting interventions offered concurrently (Carter 2006), it is also well established that the detection of child abuse or neglect (or both) is a necessary but insufficient basis for reporting, because many professionals who detect also fail to report.
Baker 2021 conducted a content analysis of US, state‐sponsored, online, mandated reporter training. Although not a systematic review, this study applied systematic, transparent, and replicable searches to identify 44 training curriculums and coded these against 10 evidence‐based thematic domains: “legal requirements and protections; the role of the mandated reporter; reasons why reporters should make a report; identifying maltreatment; dealing with disclosures by children; barriers to reporting; the mechanics of reporting; the impact on the reporter; how to help families; and format of training" (p 5). These coding domains may be useful for developing a programme typology in future updates of our review.
Authors' conclusions
Implications for practice
Training for professionals to improve reporting of child abuse and neglect is an essential part of a comprehensive public health response. All professionals having direct contact with children and families require this training to equip them with the knowledge, attitudes, and skills needed to report cases of child abuse and neglect, and to avoid making unwarranted reports.
However, the development of training programmes, and research into their efficacy, is still in its infancy. Consequently, at least when measured against rigorous GRADE criteria, it is not possible to provide firm conclusions about the extent to which professional training of the types described in this review increases knowledge, skills, attitudes, and reporting practices, due to the low and very low certainty of evidence.
We know little about the effectiveness of training interventions delivered in different modes (online versus face‐to‐face), and by trainers with different expertise (e.g. specialists versus non‐specialists). Evidence of such comparative effects requires studies of a sufficiently high standard, reported in sufficient detail to enable quantitative synthesis for overall trends. In addition, the generalisability and applicability of the available evidence is limited by the scarcity of training intervention trials conducted with key professional groups, such as police, doctors, paediatric nurses, and allied health professionals. The evidence is also limited by the lack of long‐term follow‐up of outcomes relevant to training effectiveness.
Despite these evidence gaps, child protection training designers and providers should consider the evidence in this review when planning training interventions for specific professionals in relation to the reporting of different types of child abuse and neglect. Although the paucity of studies precluded development of a training typology, the Characteristics of included studies table and in‐text summary provide training developers with important information about the possible components of training interventions.
Implications for research
Further rigorous studies are required in a wider range of countries, with diverse groups of child‐serving professionals, to assess the effectiveness of training interventions for improving reporting of child abuse and neglect. Trials of interventions with police and doctors are particularly needed. Such studies are methodologically complex, costly, and time‐consuming. Nevertheless, the need to support professionals in reporting appropriate cases of different types of abuse and neglect, and in avoiding unwarranted reports, demands evidence‐based approaches to training interventions. Rigorous training of professionals about appropriate reporting of child abuse and neglect directly promotes children’s rights to protection from abuse and neglect. In doing so, it implements the United Nations Convention on the Rights of the Child (United Nations 1989, article 19) and the United Nations Sustainable Development Goals 2015 (United Nations 2015, Target 16.2).
Greater rigour in interventions and their reporting is required. In particular, all interventions must be informed by an accurate and updated understanding of the nature of the duty to report different types of child abuse and neglect in the specific location, as applied to the relevant profession. Duties in both law and occupational policy to report different types of child abuse and neglect vary substantially across locations and professions, and over time. Accordingly, existing scales cannot simply be reused uncritically, even in the same location or professional setting. Rather, every training intervention and its evaluation must be underpinned by an updated, accurate review of the current reporting duties. Training on reporting duties must be customised, as should outcome measures. Outcome measures must be designed to capture data on participants' knowledge, attitudes, and practices in relation to specific duties and different types of child abuse and neglect. Training interventions and assessment therefore need to be designed by multidisciplinary teams with the capacity to identify the contemporary applicable law and ensure its accurate integration into direct and indirect measures of training outcomes.
In future studies, baseline comparisons of intervention participants (those receiving training) and control group participants (those not receiving training or waitlisted to receive training) should be undertaken to determine group equivalence on variables likely to influence training outcomes, such as years of experience, prior training, and encounters with the child protection system. Trials should be adequately powered and use appropriate methods for group allocation. Researchers should register trials (see, for example, www.isrctn.com/), and publish study protocols (Chan 2013; see www.spirit-statement.org/). Interventions should be comprehensively reported using international guidelines such as Template for Intervention Description and Replication (TIDieR) (Hoffmann 2014; and see www.consort-statement.org/resources/tidier-2) and CONSORT (Schulz 2010; and see www.consort-statement.org/).
Decisions concerning outcome measures pose challenges for future research due to constraints of cost and time. At a minimum, studies should always conduct pre‐ and post‐test assessment of key secondary outcomes as conceptualised in this review, including knowledge and attitudes. This is required to assess mastery of training content in accordance with educational theories that recommend both direct and indirect assessment of learning outcomes (e.g. Allan 1996; Allen 2006; Calderon 2013; Suskie 2018). Ideally, research should examine long‐term outcomes of training, and test the effect of supplementary or booster training. Measurement of primary outcomes as conceptualised in this review is challenging, since actual reporting of cases of child abuse and neglect by trained individuals occurs infrequently (Mathews 2020). In particular, measurement via official records of actual reports (primary outcome 1c) is contraindicated by statutory agency recording conventions that de‐identify reporters. Nevertheless, measurement via subjective self‐reports of reporting behaviour (primary outcome 1a) remains possible, albeit requiring long‐term follow‐up. We strongly recommend that future research employ case vignettes for direct assessment of training outcomes (primary outcome 1b). Vignettes enable researchers to collect data at scale on participants’ responses to hypothetical scenarios requiring the application of knowledge and demonstration of skills, and can be combined with other direct and indirect assessments to indicate intended future reporting behaviour. Future research may also consider the use of animations, films, and virtual reality in case vignettes.
Our final conclusions on future research were prompted by peer reviewers, who drew attention to the need for research on training interventions to improve reporting of child abuse and neglect by professionals serving culturally diverse children and families (Flemington 2021). As the field matures, and studies improve in quality and scope, there is also potential for future research to assess the broader social and economic impacts of child protection training for individuals, organisations, and systems.
What's new
Date | Event | Description |
---|---|---|
20 July 2022 | Amended | PMID for reference corrected. |
History
Protocol first published: Issue 6, 2015
Review first published: Issue 7, 2022
Notes
This review is co‐registered within the Campbell Collaboration, and a version of it appears on the Campbell Library.
Acknowledgements
The authors gratefully acknowledge the support of the Cochrane Developmental, Psychosocial and Learning Problems Editorial Team based at the Centre for Public Health at Queen's University Belfast and the University of Bristol. For advice and assistance at all stages of the review, we sincerely thank Dr Joanne Duffield (Managing Editor). We thank Professor Geraldine Macdonald (Co‐ordinating Editor), Dr Sarah Davies (Deputy Managing Editor), and Gemma O'Laughlin (former Assistant Managing Editor) for their support through protocol development and review completion. We acknowledge all members of the Cochrane Editorial Unit for their contributions.
We sincerely thank peer reviewers for their time, expertise, and feedback on earlier versions of this review. In particular, we and the Cochrane Review Group Editorial Team are grateful to the Editor, Hege Kornør, Norwegian Institute of Public Health, Oslo, and the following reviewers for their time and comments: Dr Debra Allnock, Safer Young Lives Research Centre, University of Bedfordshire, Bedfordshire, UK; Dr Tara Flemington, Mid North Coast Local Health District and the University of Sydney, Australia; Adjunct Professor Kathleen Kufeldt, University of Calgary, Calgary (AB), Canada; Associate Professor Karthik Balajee Laksham, Jawaharlal Institute of Postgraduate Medical Education Research (JIPMER), Puducherry, India; and Areti Angeliki Veroniki, Cochrane Statistical Methods Group. We are also grateful to Lisa Winer for copyediting this review.
We acknowledge the research assistance of Dr Sandra Coe on the review protocol, and Andrea Boskovic and Adele Sommerfield in the early stages of the review.
Appendices
Appendix 1. Search strategies
Electronic databases
Cochrane Central Register of Controlled Trials (CENTRAL)
Searched 11 June 2021
Cochrane Central Register of Controlled Trials (CENTRAL; 2021, 06) in the Cochrane Library (searched 11 June 2021)
Cochrane Database of Systematic Reviews (CDSR; 2021, 06) in the Cochrane Library (searched 11 June 2021)
MeSH descriptor: [Child Welfare] this term only
((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (abuse* OR maltreat* OR neglect*)):ti,ab
((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (protect* OR safeguard*)):ti,ab
(("at risk" OR "high risk") NEAR/1 child*):ti,ab
MeSH descriptor: [Child Abuse] explode all trees
#1 OR #2 OR #3 OR #4 OR #5
MeSH descriptor: [Child] explode all trees
MeSH descriptor: [Adolescent] this term only
(baby OR babies OR infant* OR child* OR teen* OR adolescen*):ti,ab
#7 OR #8 OR #9
((non‐accidental OR deliberate) NEAR/3 injur*):ti,ab
((emotion* OR psycholo*) NEAR/3 (abus* OR maltreat* OR mal‐treat* OR neglect*)):ti,ab
MeSH descriptor: [Sex Offenses] this term only
MeSH descriptor: [Rape] this term only
((sex* NEAR/3 abus*) OR rape OR incest*):ti,ab
#11 OR #12 OR #13 OR #14 OR #15
#10 AND #16
#6 OR #17
MeSH descriptor: [Education, Professional] explode all trees
MeSH descriptor: [Inservice Training] this term only
MeSH descriptor: [Teaching] this term only
MeSH descriptor: [Education] explode all trees
MeSH descriptor: [Health Knowledge, Attitudes, Practice] this term only
MeSH descriptor: [Clinical Competence] this term only
(educat* OR instruct* OR teach* OR train*) NEAR/3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing):ti,ab
MeSH descriptor: [Mandatory Reporting] explode all trees
(mandatory NEAR/1 (notif* OR report*)):ti,ab
(educat* OR instruct* OR teach* OR train*) NEAR/3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*"):ti,ab
#19 OR #20 OR #21 OR #22 OR #23 OR #24 OR #25 OR #26 OR #27 OR #28
#18 AND #29
(("child abuse" OR "sex* abuse") NEAR/1 (detect* OR diagnos* OR educat* OR train*)):ti,ab
#30 OR #31
("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*):ti,ab
#32 AND #33
MEDLINE Ovid
Searched 04 June 2021
exp Child Abuse/
Child Welfare/
((baby or babies or infant$ or child$ or preschool$ or pre‐school$ or teen$ or adolescen$) adj3 (abuse$ or maltreat$ or mal‐treat$ or neglect$)).tw.
((baby or babies or infant$ or child$ or preschool$ or pre‐school$ or teen$ or adolescen$) adj3 (protect$ or safeguard$ or safe‐guard)).tw.
((at risk or high risk) adj1 child$).tw.
or/1‐5
exp Child/
adolescent/
(baby or babies or infant$ or child$ or teen$ or adolescen$).tw.
or/7‐9
((non‐accidental or deliberate) adj3 injur$).tw.
((emotional$ or psycholog$) adj3 (abuse$ or maltreat$ or mal‐treat$ or neglect$)).tw.
sex offenses/ or rape/
((sex$ adj3 abuse$) or rape or incest$).tw.
or/11‐14
10 and 15
6 or 16
exp education, professional/
exp inservice training/
exp Teaching/
education.fs.
Health Knowledge, Attitudes, Practice/
Clinical Competence/
((education$ or instruction$ or teach$ or train$) adj3 (program$ or intervention$ or course$ or model$ or post‐qualif$ or continuing )).tw.
((education$ or instruction$ or teach$ or train$) adj3 (dentist$ or doctor$ or medic$ or midwi#e$ or nurs$ or social worker$ or social service$ or police$ or teacher$ or health professional$)).tw.
Mandatory Reporting/
(mandatory adj1 (notif$ or report$)).tw.
or/18‐27
17 and 28
((child abuse or sex$ abuse) adj1 (detect$ or diagnos$ or education or training)).tw.
29 or 30
randomized controlled trial.pt.
controlled clinical trial.pt.
randomi#ed.ab.
placebo$.ab.
drug therapy.fs.
randomly.ab.
trial.ab.
groups.ab.
or/32‐39
exp animals/ not humans.sh.
40 not 41
31 and 42
Embase
Searched 11 June 2021
'child welfare'/mj
((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (abuse* OR maltreat* OR neglect*)):ti,ab
((baby or babies or infant* or child* or preschool* or pre‐school* or teen* or adolescen*) NEAR/3 (protect* or safeguard*)):ti,ab
(("at risk" or "high risk") NEAR/1 child*):ti,ab
'child abuse'/exp
#1 OR #2 OR #3 OR #4 OR #5
'child'/exp
'adolescent'/mj
(baby or babies or infant* or child* or teen* or adolescen*):ti,ab
#7 OR #8 OR #9
((non‐accidental OR deliberate) NEAR/3 injur*):ti,ab
((emotion* OR psycholo*) NEAR/3 (abus* OR maltreat* OR mal‐treat* OR neglect*)):ti,ab
'sexual crime'/mj OR 'rape'/mj
((sex* NEAR/3 abus*) OR rape OR incest*):ti,ab
#11 OR #12 OR #13 OR #14
#10 AND #15
#6 OR #16
'continuing education'/exp
'in service training'/exp
'teaching'/exp
'education'/mj
'attitude to health'/mj OR 'health service'/mj
'clinical competence'/mj
((education* OR instruct* OR teach* OR train*) NEAR/3 (program* OR intervention* OR course* OR model* OR 'post qualif*' OR continuing)):ti,ab
'mandatory reporting'/mj
(mandatory NEAR/1 (notif* OR report*)):ti,ab
((educat* OR instruct* OR teach* OR train*) NEAR/3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR 'social worker*' OR 'social service*' OR police* OR teacher* OR 'health professional*')):ti,ab
#18 OR #19 OR #20 OR #21 OR #22 OR #23 OR #24 OR #25 OR #26 OR #27
#17 AND #28
(('child abuse' OR 'sex* abuse') NEAR/1 (detect* OR diagnos* OR educat* OR train*)):ti,ab
#29 OR #30
'comparison condition*':ti,ab OR 'comparison group*':ti,ab OR 'control condition*':ti,ab OR 'control group*':ti,ab OR 'matched group*':ti,ab OR 'propensity score*':ti,ab OR eval*:ti,ab OR experiment*:ti,ab OR random*:ti,ab OR rct:ti,ab OR trial*:ti,ab OR intervent*:ti,ab OR program*:ti,ab OR therap*:ti,ab OR treatment*:ti,ab
#31 AND #32
#33 AND 'human'/de
CINAHL EBSCOhost
Searched 04 June 2021
(MH "Child Welfare")
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) )
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) )
TI ( ("at risk" OR "high risk") N1 child* ) OR AB ( ("at risk" OR "high risk") N1 child* )
(MH "Child Abuse+")
S1 OR S2 OR S3 OR S4 OR S5
(MH "Child+")
(MH "Adolescence")
TI ( baby OR babies OR infant* OR child* OR teen* OR adolescen* ) OR AB ( baby OR babies OR infant* OR child* OR teen* OR adolescen* )
S7 OR S8 OR S9
TI ( (non‐accidental OR deliberate) N3 injur* ) OR AB ( (non‐accidental OR deliberate) N3 injur* )
TI ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) ) OR AB ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) )
(MH "Sexual Abuse+")
TI ( (sex* N3 abus*) OR rape OR incest* ) OR AB ( (sex* N3 abus*) OR rape OR incest* )
S11 OR S12 OR S13 OR S14
S10 AND S15
S6 OR S16
(MH "Education, Continuing+")
(MH "Staff Development+")
(MH "Teaching+")
MJ Education
(MH "Health Knowledge") OR (MH "Attitude to Health")
(MH "Clinical Competence")
TI ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) )
(MH "Mandatory Reporting")
TI ( mandatory N1 (notif* OR report*) ) OR AB ( mandatory N1 (notif* OR report*) )
TI ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") )
S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27
S17 AND S28
TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) )
S29 OR S30
TI ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* ) OR AB ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* )
S31 AND S32 Limiters ‐ Publication Type: Abstract, Book, Book Chapter, Brief Item, CEU, Clinical Trial, Commentary, Consumer/Patient Teaching Materials, Doctoral Dissertation, Journal Article, Legal Case, Masters Thesis, Meta Analysis, Meta Synthesis, Nurse Practice Acts, Nursing Interventions, Practice Acts, Practice Guidelines, Proceedings, Protocol, Questionnaire/Scale, Randomized Controlled Trial, Research, Research Instrument, Review, Standards, Systematic Review, Teaching Materials
ERIC EBSCOhost
Searched 04 June 2021
DE "Child Welfare"
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) )
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) )
TI ( ("at risk" OR "high risk") N1 child* ) OR AB ( ("at risk" OR "high risk") N1 child* )
DE "Child Abuse" OR DE "Child Neglect" OR DE "Child Welfare" OR DE "Sexual Abuse" OR DE "Rape"
S1 OR S2 OR S3 OR S4 OR S5
SU Child
SU Adolescent
TI ( baby OR babies OR infant* OR child* OR teen* OR adolescen* ) OR AB ( baby OR babies OR infant* OR child* OR teen* OR adolescen* )
S7 OR S8 OR S9
TI ( (non‐accidental OR deliberate) N3 injur* ) OR AB ( (non‐accidental OR deliberate) N3 injur* )
TI ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) ) OR AB ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) )
DE "Sexual Abuse" OR DE "Rape"
TI ( (sex* N3 abus*) OR rape OR incest* ) OR AB ( (sex* N3 abus*) OR rape OR incest* )
S11 OR S12 OR S13 OR S14
S10 AND S15
S6 OR S16
DE "Professional Education" OR DE "Administrator Education" OR DE "Architectural Education" OR DE "Business Administration Education" OR DE "Engineering Education" OR DE "Home Economics Education" OR DE "Information Science Education" OR DE "Legal Education (Professions)" OR DE "Medical Education" OR DE "Professional Continuing Education" OR DE "Public Administration Education" OR DE "Teacher Education" OR DE "Theological Education"
DE "Inservice Education" OR DE "Inservice Teacher Education"
DE "Teaching (Occupation)" OR DE "Urban Teaching"
DE "Education"
(DE "Health Education") OR (DE "Medical Education")
DE "Competence"
TI ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) )
KW "mandatory reporting"
TI ( mandatory N1 (notif* OR report*) ) OR AB ( mandatory N1 (notif* OR report*) )
TI ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") )
S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27
S17 AND S28
TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) )
S29 OR S30
TI ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* ) OR AB ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* )
S31 AND S32 Limiters ‐ Publication Type: Books, Collected Works (All), Collected Works ‐ General, Collected Works ‐ Proceedings, Collected Works ‐ Serials, Dissertations/Theses (All), Dissertations/Theses ‐ Doctoral Dissertations, Dissertations/Theses ‐ Masters Theses, Dissertations/Theses ‐ Practicum Papers, ERIC Digests in Full Text, ERIC Publications, Information Analyses, Journal Articles, Legal/Legislative/Regulatory Materials, Multilingual/Bilingual Materials, Numerical/Quantitative Data, Opinion Papers, Reports (All), Reports ‐ Descriptive, Reports ‐ Evaluative, Reports ‐ General, Reports ‐ Research, Reports ‐ Research‐practitioner Partnerships, Speeches/Meeting Papers, Tests/Questionnaires
PsycINFO EBSCOhost
Searched 04 June 2021
(ZE "child welfare")
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) )
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) )
TI ( ("at risk" OR "high risk") N1 child* ) OR AB ( ("at risk" OR "high risk") N1 child* )
DE "Child Abuse" OR DE "Battered Child Syndrome"
S1 OR S2 OR S3 OR S4 OR S5
SU child
SU adolescent
TI ( baby OR babies OR infant* OR child* OR teen* OR adolescen* ) OR AB ( baby OR babies OR infant* OR child* OR teen* OR adolescen* )
S7 OR S8 OR S9
TI ( (non‐accidental OR deliberate) N3 injur* ) OR AB ( (non‐accidental OR deliberate) N3 injur* )
TI ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) ) OR AB ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) )
ZE "Sex Offenses"
ZE "Rape"
TI ( (sex* N3 abus*) OR rape OR incest* ) OR AB ( (sex* N3 abus*) OR rape OR incest* )
S11 OR S12 OR S13 OR S14 OR S15
S10 AND S16
S6 OR S17
DE "Professional Development" OR DE "Professional Certification" OR DE "Professional Competence" OR DE "Professional Ethics" OR DE "Professional Licensing" OR DE "Professional Recognition" OR DE "Professional Socialization" OR DE "Professional Specialization"
DE "Inservice Training" OR DE "Inservice Teacher Education" OR DE "Mental Health Inservice Training"
DE "Teaching" OR DE "Instructional Media" OR DE "Teaching Methods"
MJ Education
(DE "Health Knowledge") OR (DE "Health Attitudes")
DE "Professional Competence"
TI ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) )
DE "Child Abuse Reporting"
TI ( mandatory N1 (notif* OR report*) ) OR AB ( mandatory N1 (notif* OR report*) )
TI ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") )
S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27 OR S28
S18 AND S29
TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) )
S30 OR S31
TI ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* ) OR AB ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* )
S32 AND S33 Limiters ‐ Document Type: Chapter, Dissertation, Journal Article, Review‐Any
Social Services Abstracts & Sociological Abstracts ProQuest
Searched 18 June 2021
MAINSUBJECT.EXACT("Child welfare")
ab((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (abuse* OR maltreat* OR neglect*))
ab((baby or babies or infant* or child* or preschool* or pre‐school* or teen* or adolescen*) NEAR/3 (protect* or safeguard*))
ab(("at risk" or "high risk") NEAR/1 child*)
MAINSUBJECT.EXACT.EXPLODE("Child Abuse")
S1 OR S2 OR S3 OR S4 OR S5
MAINSUBJECT.EXACT.EXPLODE("Children")
MAINSUBJECT.EXACT("Adolescents")
ab((baby or babies or infant* or child* or teen* or adolescen*))
S7 OR S8 OR S9
ab((non‐accidental or deliberate) NEAR/3 injur*)
ab((emotion* or psycholo*) NEAR/3 (abus* or maltreat* or mal‐treat* or neglect*))
MAINSUBJECT.EXACT("Rape") OR MAINSUBJECT.EXACT("Child Sexual Abuse") OR MAINSUBJECT.EXACT("Incest") OR MAINSUBJECT.EXACT("Sexual Abuse") OR MAINSUBJECT.EXACT("Sexual Assault")
ab((sex* NEAR/3 abus*) OR rape OR incest*)
S11 OR S12 OR S13 OR S14
S10 AND S15
S6 OR S16
MAINSUBJECT.EXACT("Education")
MAINSUBJECT.EXACT.EXPLODE("Adult Education")
MAINSUBJECT.EXACT.EXPLODE("Teaching")
su(education)
MAINSUBJECT.EXACT("Health Behavior")
MAINSUBJECT.EXACT("Competence")
ab((education* or instruct* or teach* or train*) NEAR/3 (program* or intervention* or course* or model* or post‐qualif* or continuing))
su("mandatory report*")
ab(mandatory NEAR/1 (notif* or report*))
ab((educat* or instruct* or teach* or train*) NEAR/3 (dentist* or doctor* or medic* or midwi* or nurs* or "social worker*" or "social service*" or police* or teacher* or "health professional*"))
S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27
S17 AND S28
ab(("child abuse" OR "sex* abuse") NEAR/1 (detect* OR diagnos* OR educat* OR train*))
S29 OR S30
ab(("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*))
S31 AND S32
Excluded General Information and Editorials
Note: These two databases are now combined as one in ProQuest.
ScienceDirect Elsevier
Searched 04 June 2021
("child abuse" OR "child welfare" OR "child neglect" OR "child maltreatment") AND (education OR instruct OR teach OR training OR course)
Note: no more than 8 Boolean operators permitted, cannot combine search lines.
Psychology ProQuest
Searched 11 June 2021
MAINSUBJECT.EXACT("Child welfare")
ab((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (abuse* OR maltreat* OR neglect*))
ab((baby or babies or infant* or child* or preschool* or pre‐school* or teen* or adolescen*) NEAR/3 (protect* or safeguard*))
ab(("at risk" or "high risk") NEAR/1 child*)
MAINSUBJECT.EXACT("Child abuse & neglect")
S1 OR S2 OR S3 OR S4 OR S5
MAINSUBJECT.EXACT("Children & youth")
ab((baby or babies or infant* or child* or teen* or adolescen*))
S7 OR S8
ab((non‐accidental or deliberate) NEAR/3 injur*)
ab((emotion* or psycholo*) NEAR/3 (abus* or maltreat* or mal‐treat* or neglect*))
MAINSUBJECT.EXACT("Sex crimes")
ab((sex* NEAR/3 abus*) OR rape OR incest*)
S10 OR S11 OR S12 OR S13
S9 AND S14
S6 OR S15
MAINSUBJECT.EXACT("Inservice training")
MAINSUBJECT.EXACT("Teaching")
MAINSUBJECT.EXACT("Education")
MAINSUBJECT.EXACT("Clinical competence")
ab((education* or instruct* or teach* or train*) NEAR/3 (program* or intervention* or course* or model* or post‐qualif* or continuing))
MAINSUBJECT.EXACT("Reporting requirements")
ab((educat* or instruct* or teach* or train*) NEAR/3 (dentist* or doctor* or medic* or midwi* or nurs* or ("social worker" OR "social workers") or ("social service" OR "social services") or police* or teacher* or ("health professional" OR "health professionals")))
ab(mandatory NEAR/1 (notif* or report*))
S17 OR S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24
S16 AND S25
ab(("child abuse" OR "sex* abuse") NEAR/1 (detect* OR diagnos* OR educat* OR train*))
S26 OR S27
ab(("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*))
S28 AND S29
(S28 AND S29) NOT stype.exact("Dissertations & Theses" OR "Wire Feeds")
(S28 AND S29) NOT (at.exact("General Information" OR "Editorial" OR "News" OR "Correspondence" OR "Interview" OR "Letter to the Editor" OR "Obituary" OR "Directory" OR "Bibliography" OR "Biography") NOT stype.exact("Dissertations & Theses" OR "Wire Feeds"))
Social Sciences ProQuest
Searched 23 July 2021
MAINSUBJECT.EXACT("Child welfare")
ab((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (abuse* OR maltreat* OR neglect*))
ab((baby or babies or infant* or child* or preschool* or pre‐school* or teen* or adolescen*) NEAR/3 (protect* or safeguard*))
ab(("at risk" or "high risk") NEAR/1 child*)
MAINSUBJECT.EXACT("Child abuse & neglect")
S1 OR S2 OR S3 OR S4 OR S5
MAINSUBJECT.EXACT("Children & youth")
ab((baby or babies or infant* or child* or teen* or adolescen*))
S7 OR S8
ab((non‐accidental or deliberate) NEAR/3 injur*)
ab((emotion* or psycholo*) NEAR/3 (abus* or maltreat* or mal‐treat* or neglect*))
MAINSUBJECT.EXACT("Sex crimes")
ab((sex* NEAR/3 abus*) OR rape OR incest*)
S10 OR S11 OR S12 OR S13
S9 AND S14
S6 OR S15
MAINSUBJECT.EXACT("Inservice training")
MAINSUBJECT.EXACT("Teaching")
MAINSUBJECT.EXACT("Education")
MAINSUBJECT.EXACT("Clinical competence")
ab((education* or instruct* or teach* or train*) NEAR/3 (program* or intervention* or course* or model* or post‐qualif* or continuing))
MAINSUBJECT.EXACT("Reporting requirements")
ab(mandatory NEAR/1 (notif* or report*))
ab((educat* or instruct* or teach* or train*) NEAR/3 (dentist* or doctor* or medic* or midwi* or nurs* or "social worker*" or "social service*" or police* or teacher* or "health professional*"))
S17 OR S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24
S16 AND S25
ab(("child abuse" OR "sex* abuse") NEAR/1 (detect* OR diagnos* OR educat* OR train*))
S26 OR S27
ab(("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*))
S28 AND S29
Dissertation & Theses Global ProQuest
Searched 23 July 2021
MAINSUBJECT.EXACT("Child welfare")
ab((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) NEAR/3 (abuse* OR maltreat* OR neglect*))
ab((baby or babies or infant* or child* or preschool* or pre‐school* or teen* or adolescen*) NEAR/3 (protect* or safeguard*))
ab(("at risk" or "high risk") NEAR/1 child*)
MAINSUBJECT.EXACT("Child abuse & neglect")
S1 OR S2 OR S3 OR S4 OR S5
MAINSUBJECT.EXACT("Children & youth")
ab((baby or babies or infant* or child* or teen* or adolescen*))
S7 OR S8
ab((non‐accidental or deliberate) NEAR/3 injur*)
ab((emotion* or psycholo*) NEAR/3 (abus* or maltreat* or mal‐treat* or neglect*))
MAINSUBJECT.EXACT("Sex crimes")
ab((sex* NEAR/3 abus*) OR rape OR incest*)
S10 OR S11 OR S12 OR S13
S9 AND S14
S6 OR S15
MAINSUBJECT.EXACT("Inservice training")
MAINSUBJECT.EXACT("Teaching")
MAINSUBJECT.EXACT("Education")
MAINSUBJECT.EXACT("Clinical competence")
ab((education* or instruct* or teach* or train*) NEAR/3 (program* or intervention* or course* or model* or post‐qualif* or continuing))
MAINSUBJECT.EXACT("Reporting requirements")
ab(mandatory NEAR/1 (notif* or report*))
ab((educat* or instruct* or teach* or train*) NEAR/3 (dentist* or doctor* or medic* or midwi* or nurs* or "social worker*" or "social service*" or police* or teacher* or "health professional*"))
S17 OR S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24
S16 AND S25
ab(("child abuse" OR "sex* abuse") NEAR/1 (detect* OR diagnos* OR educat* OR train*))
S26 OR S27
ab(("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*))
S28 AND S29
LexisNexis
Searched 19 December 2018
(((child abuse OR sexual abuse OR child neglect) AND (mandatory reporting) AND (doctor! OR nurse! OR teacher! OR police!)))
Note: due to issues with export functionality, only those deemed aligned with the topic and that were empirical research were manually entered into EndNote for systematic screening.
LegalTrac
Searched 19 December 2018
(child* Or infant* Or teen* Or adolescen*) And ("child sexual abuse" Or "child abuse" Or "child neglect") And ("mandatory reporting" Or referral* Or indentif*) And (train* Or educat* Or program*) And (Nurse* Or police* Or doctor* Or teacher* OR social worker* Or dentist*) LIMITS:Peer‐Reviewed
Note: due to issues with export functionality, only those deemed aligned with the topic and that were empirical research were manually entered into EndNote for systematic screening.
Westlaw Thomson Reuters
Searched 19 December 2018
adv: (child abuse OR sexual abuse OR child neglect OR child welfare) AND (mandatory reporting OR indentif! OR referral!) AND (train! OR course! OR program!) AND professional!
Conference Proceedings Citation Index ‐ Social Science and Humanities Web of Science
Searched 11 June 2021
TS=("child welfare")
TS=((baby NEAR/3 abuse*) OR (babies NEAR/3 abuse*) OR (infant NEAR/3 abuse*) OR (child NEAR/3 abuse*) OR (preschool NEAR/3 abuse*) OR (preschool NEAR/3 abuse*) OR (teen* NEAR/3 abuse*) OR (adolesc* NEAR/3 abuse*) OR (baby NEAR/3 maltreat*) OR (babies NEAR/3 maltreat*) OR (infant NEAR/3 maltreat*) OR (child NEAR/3 maltreat*) OR (preschool NEAR/3 maltreat*) OR (preschool NEAR/3 maltreat*) OR (teen* NEAR/3 maltreat*) OR (adolesc* NEAR/3 maltreat*) OR (baby NEAR/3 neglect*) OR (babies NEAR/3 neglect*) OR (infant NEAR/3 neglect*) OR (child NEAR/3 neglect*) OR (preschool NEAR/3 neglect*) OR (preschool NEAR/3 neglect*) OR (teen* NEAR/3 neglect*) OR (adolesc* NEAR/3 neglect*))
TS=((baby NEAR/3 protect*) OR (babies NEAR/3 protect*) OR (infant NEAR/3 protect*) OR (child NEAR/3 protect*) OR (preschool NEAR/3 protect*) OR (preschool NEAR/3 protect*) OR (teen* NEAR/3 protect*) OR (adolesc* NEAR/3 protect*) OR (baby NEAR/3 safeguard*) OR (babies NEAR/3 safeguard*) OR (infant NEAR/3 safeguard*) OR (child NEAR/3 safeguard*) OR (preschool NEAR/3 safeguard*) OR (preschool NEAR/3 safeguard*) OR (teen* NEAR/3 safeguard*) OR (adolesc* NEAR/3 safeguard*) )
TS=(*risk NEAR/1 child*)
TS=("child abuse")
1 or 2 or 3 or 4 or 5
TS=(child)
TS=(adolescent)
TS=(baby OR babies OR infant* OR child* OR teen* OR adolescen*)
7 or 8 or 9
TS=((non‐accidental NEAR/3 deliberate) OR (non‐accidental NEAR/3 injur*))
TS=((*accidental NEAR/3 deliberate) OR (*accidental NEAR/3 injur*))
TS=((emotion* NEAR/3 abus*) OR (emotion* NEAR/3 maltreat*) OR (emotion* NEAR/3 malt‐treat*) OR (emotion* NEAR/3 neglect*) OR (psycholo* NEAR/3 abus*) OR (psycholo* NEAR/3 maltreat*) OR (psycholo* NEAR/3 malt‐treat*) OR (psycholo* NEAR/3 neglect*))
(TS=("sex offen*" OR rape)) OR (TS=((sex* NEAR/3 abus*) OR rape* OR incest*))
11 or 12 or 13 or 14
10 and 15
6 or 16
SU=(Educat* OR Profession* OR Train* OR Teach* OR Knowledge OR Attitude* OR Practice* OR Competen*)
TS=((educat* NEAR/3 program*) OR (educat* NEAR/3 intervention*) OR (educat* NEAR/3 course*) OR (educat* NEAR/3 model*) OR (educat* NEAR/3 post‐qualif*) OR (educat* NEAR/3 continuing) OR (instruct* NEAR/3 program*) OR (instruct* NEAR/3 intervention*) OR (instruct* NEAR/3 course*) OR (instruct* NEAR/3 model*) OR (instruct* NEAR/3 post‐qualif*) OR (instruct* NEAR/3 continuing) OR (teach* NEAR/3 program*) OR (teach* NEAR/3 intervention*) OR (teach* NEAR/3 course*) OR (teach* NEAR/3 model*) OR (teach* NEAR/3 post‐qualif*) OR (teach* NEAR/3 continuing) OR (train* NEAR/3 program*) OR (train* NEAR/3 intervention*) OR (train* NEAR/3 course*) OR (train* NEAR/3 model*) OR (train* NEAR/3 post‐qualif*) OR (train* NEAR/3 continuing))
TS=((mandatory NEAR/1 notif*) OR (mandatory NEAR/1 report*))
TS=((educat* NEAR/3 dentist*) OR (educat* NEAR/3 police*) OR (educat* NEAR/3 doctor*) OR (educat* NEAR/3 medic*) OR (educat* NEAR/3 midwi*) OR (educat* NEAR/3 nurs*) OR (educat* NEAR/3 "social worker*") OR (educat* NEAR/3 "social service*") OR (educate NEAR/3 teacher*) OR (educat* NEAR/3 "health professional*") OR (instruct* NEAR/3 dentist*) OR (instruct* NEAR/3 police*) OR (instruct* NEAR/3 doctor*) OR (instruct* NEAR/3 medic*) OR (instruct* NEAR/3 midwi*) OR (instruct* NEAR/3 nurs*) OR (instruct* NEAR/3 "social worker*") OR (instruct* NEAR/3 "social service*") OR (instruct* NEAR/3 teacher*) OR (instruct* NEAR/3 "health professional*") OR (teach* NEAR/3 dentist*) OR (teach* NEAR/3 police*) OR (teach* NEAR/3 doctor*) OR (teach* NEAR/3 medic*) OR (teach* NEAR/3 midwi*) OR (teach* NEAR/3 nurs*) OR (teach* NEAR/3 "social worker*") OR (teach* NEAR/3 "social service*") OR (teach* NEAR/3 teacher*) OR (teach* NEAR/3 "health professional*") OR (train* NEAR/3 dentist*) OR (train* NEAR/3 police*) OR (train* NEAR/3 doctor*) OR (train* NEAR/3 medic*) OR (train* NEAR/3 midwi*) OR (train* NEAR/3 nurs*) OR (train* NEAR/3 "social worker*") OR (train* NEAR/3 "social service*") OR (train* NEAR/3 teacher*) OR (train* NEAR/3 "health professional*"))
18 or 19 or 20 or 21
17 and 22
TS=(("child abuse" NEAR/1 detect*) OR ("child abuse" NEAR/1 diagnos*) OR ("child abuse" NEAR/1 educat*) OR ("child abuse" NEAR/1 train*) OR ("sex* abuse" NEAR/1 detect*) OR ("sex* abuse" NEAR/1 diagnos*) OR ("sex* abuse" NEAR/1 educat*) OR ("sex* abuse" NEAR/1 train*))
23 or 24
TS=("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR effective* OR efficacy OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*)TS=("comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR effective* OR efficacy OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment*)
25 and 26
Violence & Abuse Abstracts EBSCOhost
Searched 04 June 2021
(ZU "child welfare")
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) )
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) )
TI ( ("at risk" OR "high risk") N1 child* ) OR AB ( ("at risk" OR "high risk") N1 child* )
(ZU "child abuse")
S1 OR S2 OR S3 OR S4 OR S5
SU "Child"
SU "Adolescent"
TI ( baby OR babies OR infant* OR child* OR teen* OR adolescen* ) OR AB ( baby OR babies OR infant* OR child* OR teen* OR adolescen* )
S7 OR S8 OR S9
TI ( (non‐accidental OR deliberate) N3 injur* ) OR AB ( (non‐accidental OR deliberate) N3 injur* )
TI ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) ) OR AB ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) )
(ZU "rape") OR (SU "sex offen*")
TI ( (sex* N3 abus*) OR rape OR incest* ) OR AB ( (sex* N3 abus*) OR rape OR incest* )
S11 OR S12 OR S13 OR S14
S10 AND S15
S6 OR S16
(ZU "professional education") OR (ZU "professional employee training") OR (SU "professional education")
(ZU "in‐service training of nurses") OR (ZU "in‐service training of social workers") OR (ZU "in‐service training of teachers") OR (SU "in‐service training" OR "inservice training")
(ZU "teaching")
SU "Education"
(ZU "health behavior") or (ZU "health education") or (ZU "health literacy") (ZU "health behavior") or (ZU "health education") or (ZU "health literacy")
(ZU "clinical competence")
TI ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) )
(ZU "mandatory reporting (law)") OR (SU "mandatory report*")
TI ( mandatory N1 (notif* OR report*) ) OR AB ( mandatory NEAR/1 (notif* OR report*) )
TI ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") )
S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27
S17 AND S28
TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) )
S29 OR S30
TI ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* ) OR AB ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* )
S31 AND S32
EducationSource EBSCOhost
Searched 04 June 2021
DE "Child welfare"
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (abuse* OR maltreat* OR neglect*) )
TI ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) ) OR AB ( (baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) N3 (protect* OR safeguard*) )
TI ( ("at risk" OR "high risk") N1 child* ) OR AB ( ("at risk" OR "high risk") N1 child* )
DE "Child abuse" OR DE "Child sexual abuse"
S1 OR S2 OR S3 OR S4 OR S5
SU "Child"
SU "Adolescent"
TI ( baby OR babies OR infant* OR child* OR teen* OR adolescen* ) OR AB ( baby OR babies OR infant* OR child* OR teen* OR adolescen* )
S7 OR S8 OR S9
TI ( (non‐accidental OR deliberate) N3 injur* ) OR AB ( (non‐accidental OR deliberate) N3 injur* )
TI ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) ) OR AB ( (emotion* OR psycholo*) N3 (abus* OR maltreat* OR mal‐treat* OR neglect*) )
DE "Child sexual abuse"
TI ( (sex* N3 abus*) OR rape OR incest* ) OR AB ( (sex* N3 abus*) OR rape OR incest* )
S11 OR S12 OR S13 OR S14
S10 AND S15
S6 OR S16
DE "Professional education" OR DE "Clinical education" OR DE "Education of executives" OR DE "Education of school administrators" OR DE "Interns" OR DE "Interprofessional education" OR DE "Library education" OR DE "Medical education" OR DE "Professional education of women" OR DE "Teacher education"
DE "Employee training" OR DE "Apprenticeship programs" OR DE "Business interns" OR DE "Employee orientation" OR DE "In‐service training of teachers" OR DE "Internship programs" OR DE "Self‐managed learning (Personnel management)" OR DE "Training of library employees"
DE "Teaching" OR DE "Audiovisual education" OR DE "Catholic school teaching" OR DE "Class size" OR DE "Classroom management" OR DE "College teaching" OR DE "Comprehensive instruction (Reading)" OR DE "Creative teaching" OR DE "Cumulative instruction" OR DE "Dalton laboratory plan" OR DE "Departmental teaching" OR DE "Dictation (Educational method)" OR DE "Differentiated teaching staffs" OR DE "Direct instruction" OR DE "Effective teaching" OR DE "Elementary school teaching" OR DE "Explicit instruction" OR DE "Fieldwork (Educational method)" OR DE "Formal discipline" OR DE "Global method of teaching" OR DE "High school teaching" OR DE "Junior high school teaching" OR DE "Kindergarten teaching" OR DE "Lesson planning" OR DE "Logic in teaching" OR DE "Mass instruction" OR DE "Mastery learning" OR DE "Microteaching" OR DE "Middle school teaching" OR DE "Monitorial system of education" OR DE "Montessori method of education" OR DE "Object‐teaching" OR DE "Orthography & spelling ‐‐ Study & teaching" OR DE "Preschool teaching" OR DE "Primary school teaching" OR DE "Private school teaching" OR DE "Programmed instruction" OR DE "Questioning" OR DE "Recitation (Education)" OR DE "Reflective teaching" OR DE "Reggio Emilia approach (Early childhood education)" OR DE "Remedial teaching" OR DE "Student assignments" OR DE "Student teaching" OR DE "Substitute teaching" OR DE "Supervised study" OR DE "Systematic instruction" OR DE "Targeted instruction" OR DE "Teacher‐student relationships" OR DE "Teachers' institutes" OR DE "Teaching aids" OR DE "Teaching teams" OR DE "Test preparation (Classroom instruction)" OR DE "Tutors & tutoring"
DE "Education"
DE "Health education"
DE "Clinical competence"
TI ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (program* OR intervention* OR course* OR model* OR post‐qualif* OR continuing) )
(ZU "mandatory reporting (law)") OR (SU "mandatory reporting")
TI ( mandatory N1 (notif* OR report*) ) OR AB ( mandatory NEAR/1 (notif* OR report*) )
TI ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") ) OR AB ( (educat* OR instruct* OR teach* OR train*) N3 (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") )
S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27
S17 AND S28
TI ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) ) OR AB ( ("child abuse" OR "sex* abuse") N1 (detect* OR diagnos* OR educat* OR train*) )
S29 OR S30
TI ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* ) OR AB ( "comparison condition*" OR "comparison group*" OR "control condition*" OR "control group*" OR "matched group*" OR "propensity score*" OR eval* OR *experiment* OR random* OR RCT OR trial* OR intervent* OR program* OR therap* OR treatment* )
S31 AND S32
LILACS (lilacs.bvsalud.org/en/)
Searched 11 June 2021
tw:((baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*) AND (abuse* OR maltreat* OR neglect* OR safeguard* OR protect*) AND (educat* OR instruct* OR teach* OR train*) AND (dentist* OR doctor* OR medic* OR midwi* OR nurs* OR "social worker*" OR "social service*" OR police* OR teacher* OR "health professional*") AND (eval* OR experiment* OR random* OR rct OR trial* OR intervent* OR program* OR therap* OR treatment*)) AND (db:("LILACS"))
Note: tw = title, abstract, subject fields
WHO International Clinical Trials Registry Platform (trialsearch.who.int)
Searched 11 June 2021
Title: baby OR babies OR infant* OR child* OR preschool* OR pre‐school* OR teen* OR adolescen*
Condition: abuse* OR maltreat* OR neglect* OR safeguard* OR protect*
Intervention: educat* OR instruct* OR teach* OR train*
OpenGrey (www.opengrey.eu)
Searched 27 May 2019
child NEAR/6 (abuse OR neglect) AND train* [LIMIT: Until December 2018]
Websites and grey literature sources
International Society for Prevention of Child Abuse and Neglect via ispcan.org
Searched 02 July 2021
Search approach
In E‐Library section of website, examined all titles listed for relevancy (N = 1 exported)
Examined Publications section of website (mostly duplicates from the E‐Library and a link to the Child Abuse & Neglect journal, which had already been searched; one publication in the World Perspectives on Child Abuse section was deemed potentially eligible; N = 1 exported)
Examined two published compilations of conference abstracts for relevant studies (saved PDFs and highlighted abstracts that were read in full for relevancy); 3 and 14 abstracts, respectively, were deemed potentially eligible for the review.
US Department of Health and Human Services Children’s Bureau, Child Welfare Information Gateway via childwelfare.gov
Searched 02 July 2021
Search approach
In Publications section of website, examined all titles in the following series: Bulletins for Professionals and Issues Briefs
4 Bulletins were exported for screening based on their abstracts.
0 Issues Briefs were exported based on their abstracts.
Promising Practices Network operated by the RAND Corporation via promisingpractices.net
Searched 21 March 2019
Search approach
Examined all titles in the ‘Programs that Work’ section (Child Abuse and Neglect topic) and ‘Issue Briefs’ section (Child Abuse and Neglect and Evidence‐Based Practices topics)
3 references from within summaries of programs were exported for screening based on the description of the intervention (Child Sexual Abuse Prevention: Teacher Training Workshop)
0 issue briefs were exported for screening based on their full‐text (were more general information about the problem with focus on parenting programs).
Note from website: "The Promising Practices Network began in 1997 as a partnership between four state‐level organizations that help public and private organizations improve the well‐being of children and families. Due to funding constraints, the PPN project has concluded. The PPN website was archived in June 2014 and has not been updated since then"
National Resource Center for Community‐Based Child Abuse Prevention (CBCAPP) via friendsnrc.org
Searched 02 July 2021
Search approach
Examined the website and identified that the majority of resources were the actual training materials, education materials and toolkits for practitioners and others. Examined the Matrix of Evidence‐Based Practice and all programs were focused on implementation with families and children rather than practitioners.
0 records were exported from this website.
California Evidence‐Based Clearinghouse for Child Welfare (CEBC) via cebc4cw.org
Searched 02 July 2021
Search approach
Using the Advanced Search function, selected topics that were relevant to practitioner training: Casework Practice; Child Welfare Workforce Development and Support Programs. This search identified 26 results.
Each result was examined for relevancy. While there were many practitioner focused training programs, most were either pre‐service training, aimed at retention of workers, aimed at increasing uptake of evidence‐based practices, general organisational reform, human resources in child welfare, or casework in a generic or highly specific area (e.g., juvenile justice).
0 records were exported from this website.
Coalition for Evidence‐Based Policy via coalition4evidence.org
Searched 21 March 2019
Search approach
Examined ‘Complete List’ of publications and examined all records in the following subsections: Coalition Policy Papers (Early Childhood, Education), Social Programs that Work (Prenatal / Early Childhood), The Rigorous Evidence Newsletter.
0 records had any relevance to the review in the Coalition Policy Papers section.
0 records had any relevance to the review in the Social Programs that Work section.
0 records had any relevance to the review in the Rigorous Evidence Newsletter section.
Note from website: "The Coalition for Evidence‐Based Policy wound down its operations in the spring of 2015, and the Coalition’s leadership and core elements of the group’s work have been integrated into the Laura and John Arnold Foundation (as described here). This website is no longer updated, but will remain available. Its key content will soon be migrated to http://www.arnoldfoundation.org/initiative/evidence-based-policy-innovation/, and will be regularly updated on that site."
Institute of Education Sciences What Works Clearinghouse via ies.ed.gov/ncee/wwc
Searched 02 July 2021
Search approach
Most studies and intervention reviews indexed on this website focus on educational interventions for school‐aged children.
Conducted a search of the website using the following terms: maltreatment, abuse, protection, mandatory, notification, and welfare. All identified records were examined.
An advanced search was also conducted on the IES evaluation section, using the following topics: Teacher and Leaders, Principal Professional Development, Teacher Evaluation Systems and Teacher Professional Development. This returned 1 result, which was not relevant to the review.
0 records were deemed relevant to the review.
National Institute for Health and Care Excellence (NICE) UK via nice.org.uk
Searched 09 July 2021
Search approach
The ‘Evidence Search’ section of the website does not allow for long or complex search strings. The first search (dated 21 March 2019) was restricted to the following: (abuse* OR maltreat* OR neglect* OR protect* OR safeguard* OR "safe‐guard" OR mandatory OR notific* OR welfare) AND (child* OR baby OR babies OR infant* OR preschool* OR "pre‐school*" OR teen* OR adolescen*). This search identified 99 results, which were all downloaded, imported into SysReview, and screened.
The functionality of the 'Evidence Search' section changed for the second search (dated 09 July 2021). For this search, the following was used: ("child abuse" OR "child welfare" OR "child protect*" OR "mandatory notification*" OR "mandatory report*"). The sources that were known to have already been captured by other searches were filtered out using the website's filter options. These were:
World Health Organization (n = 12)
PubMed (n = 45)
ISRCTN Registry (n = 7)
Implementation Science journal (n = 10)
Cochrane Library (n = 9)
The above search identified 774 results, which were downloaded, imported into SysReview, and screened.
Conducted a search within the 'NICE Guidance' section of the website, using the following terms: child abuse, child welfare, neglect, maltreatment, mandatory. All records were examined for relevancy; however, all records returned were guidance documents for practitioners. There were no specific references attached to the guidance to harvest. 0 records were exported from this section of the website.
Journal handsearches
Searched 02 July 2021
The search was conducted in the Web of Science, which allows searching within specific journals. The search strategy reported in the protocol was adapted as follows*.
TS=(child* OR baby OR babies OR infant* OR preschool* OR "pre‐school*" OR teen* OR adolescen*)
TS=(abuse* OR maltreat* OR neglect* OR protect* OR safeguard* OR "safe‐guard" OR "at‐risk" OR "at risk" OR "high‐risk" OR "high risk" OR "non‐accident*" OR deliberate* OR injur* OR rape* OR rapist* OR incest* OR "sex‐offence*" OR "sex‐offense*" OR "sex* offence*" OR "sex* offense*")
TS=(educat* OR train* OR teach* OR practic* OR detect* OR instruct* OR competen* OR mandatory OR notif*)
TS=(profess* OR practition* OR "post‐qual*" OR "post qual*" OR staff* OR dentist* OR doctor* OR midwi* OR nurs* OR worker* OR "social service*" OR "social‐service*" OR teacher* OR police*)
#4 AND #3 AND #2 AND #1
SO=(CHILD ABUSE NEGLECT OR CHILD ABUSE REVIEW OR CHILD MALTREATMENT OR CHILDREN YOUTH SERVICES REVIEW OR TRAUMA VIOLENCE ABUSE)
#6 AND #5
*Indexes=SCI‐EXPANDED, SSCI, A&HCI, CPCI‐S, CPCI‐SSH, BKCI‐S, BKCI‐SSH, ESCI, CCR‐EXPANDED, IC Timespan=All years
Appendix 2. Methods for use in future review updates
Section of protocol (reference) | Method |
Timing of outcome assessment (Mathews 2015, p 4) | We planned to classify primary and secondary outcomes using three time periods: short‐term outcomes (assessed immediately after the intervention and up to 12 months after the intervention); medium‐term outcomes (assessed between one and three years after the intervention); and long‐term outcomes (assessed more than three years after the intervention). However, this method was not used because there were no included studies that assessed outcomes beyond three to six months. |
Measures of treatment effect (Mathews 2015, p 7) |
Dichotomous data: Where necessary, we planned to report dichotomous data with raw counts and rates for intervention and control groups. We would have summarised study effects using risk ratios and corresponding 95% confidence intervals. However, this method was not used because none of the studies included outcomes with dichotomous data. |
Mean difference: For continuous data where the same scale was used to measure similar outcomes, we planned to summarise study effects as mean differences and 95% confidence intervals. However, this method was not used because we found studies used different measures with different scales. | |
Unit of analysis issues (Mathews 2015, p 7) |
Cluster‐randomised trials: We planned to use an estimate of the intracluster correlation coefficient (ICC) from an included study that adequately accounted for a clustered design and reported an ICC. However, this method was not used because clustering was not addressed in the original trials. We planned to conduct sensitivity analyses to assess the adjustments by ICC. However, this method was not used because there were too few included studies. |
Studies with multiple treatment groups: In trials with multiple intervention groups and control groups, or both (i.e. multi‐arm studies), we planned to determine which intervention groups were most relevant to the review according to the intervention type and outcomes. Where appropriate, we would have combined all relevant intervention groups into a single intervention group and all control groups into a single control group, to enable a single pairwise comparison (Higgins 2022d, Section 23.3.4). For continuous data, we planned to combine sample sizes, means, and standard deviations using the formula detailed in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2022c, Section 6.5.2.10, Table 6.5a). For dichotomous data, we planned to collate numbers of participants in each of the intervention groups who did and did not experience the outcome. However, these methods were not used because we found no multi‐arm studies (an illustrative sketch of this calculation and of the cluster adjustment above follows this table). | |
Assessment of reporting biases (Mathews 2015, p 8) |
Assessing non‐reporting bias and small‐study effects (i.e. publication bias): We planned to assess publication bias in the case of sufficient studies. The Cochrane Handbook for Systematic Reviews of Interventions recommends that tests for funnel plot asymmetry should be used (i) only when there are at least 10 studies included in a meta‐analysis (as fewer studies mean that the test will be underpowered), and (ii) when studies vary in size (as similar‐sized studies will likely have similar standard errors of effect estimates) (Page 2022, Section 13.3.5.4). However, these methods were not used because fewer than 10 studies could be included in our meta‐analyses. |
Data synthesis (Mathews 2015, p 9) |
Programme typology: We planned to statistically investigate possible components of effective training interventions in subgroup analyses in an attempt to link specific intervention components to effectiveness. However, we were unable to test these proposals in subgroup analyses because there were too few studies. Instead, we have provided a narrative summary of the characteristics of included studies and present details in the Characteristics of included studies tables. |
Subgroup analyses and investigation of heterogeneity (Mathews 2015, p 9) | Subgroup analyses involve dividing data into subsets for comparison. In this review, we planned to answer questions about intervention types. If there were at least 10 studies (Deeks 2022, Section 10.11.5.1), we would have undertaken the following subgroup analyses to identify if effects were different by subgroup:
We planned to assess differences between subgroups by informally comparing the magnitude of effects via initial inspection of the confidence intervals: if these did not overlap, this could indicate a statistically significant difference in training effects between the subgroups, which would then have been followed by a formal statistical approach; for example, examining variability in effect estimates via comparison of I² statistics or examining interaction effects using analysis of variance (ANOVA), as described by Deeks 2022 (Section 10.11.3.1), or both. However, we did not perform these subgroup analyses because there were too few studies reporting these data. |
Sensitivity analysis (Mathews 2015, p 9) | We planned to perform sensitivity analyses to test the robustness of decisions made in the review (Deeks 2022), providing there were sufficient data (i.e. 10 or more studies). We planned to:
However, we did not perform these analyses due to insufficient numbers of included studies. |
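The two calculations planned in the table above (reducing cluster‐randomised arms by the design effect, and combining two intervention arms before a pairwise comparison) can be illustrated with a short sketch based on the standard Cochrane Handbook formulae. This is illustrative only, not code used in the review, and the ICC, cluster size, and group summaries below are hypothetical placeholders.

```python
# Illustrative sketch of the two planned calculations described above.
# Not part of the review's analyses; all input values are hypothetical.
import math

def effective_sample_size(n, mean_cluster_size, icc):
    """Divide an arm's sample size by the design effect 1 + (m - 1) * ICC."""
    return n / (1 + (mean_cluster_size - 1) * icc)

def combine_two_arms(n1, m1, sd1, n2, m2, sd2):
    """Combine two arms into one (Cochrane Handbook, Table 6.5a):
    returns pooled N, mean, and standard deviation."""
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2
           + (n1 * n2 / n) * (m1 - m2)**2) / (n - 1)
    return n, mean, math.sqrt(var)

# Hypothetical example: 45 participants trained in groups of about 15,
# assuming an externally sourced ICC of 0.05.
print(round(effective_sample_size(45, 15, 0.05), 1))  # ~26.5
print(combine_two_arms(20, 3.1, 0.9, 25, 3.4, 1.1))
```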
Appendix 3. Criteria used to assess risk of bias
As planned in our review protocol (Mathews 2015), we used the original Cochrane risk of bias tool, which consists of seven domains: (1) sequence generation; (2) allocation concealment; (3) blinding of participants and personnel; (4) blinding of outcome assessment; (5) incomplete outcome data; (6) selective reporting; and (7) other sources of bias (Higgins 2011, Table 8.5a). To further address the risk of bias in controlled before‐and‐after studies, under the seventh domain ('Other sources of bias'), we added three additional subdomains corresponding to domains in the 'Suggested risk of bias criteria for EPOC reviews' by Cochrane Effective Practice and Organisation of Care (EPOC 2017), as follows: (7a) reliability of outcome measures, as we anticipated that some studies may use custom‐made instruments and scales; (7b) group comparability, as we anticipated there may be some variation in reporting of baseline equivalence; and (7c) contamination, given that training frequently occurs in workplace groupings. Below we provide a description of each domain and the key question we asked in assessing risk of bias.
1. Sequence generation
Description: the method used to generate the allocation sequence was described in sufficient detail to enable assessment of whether it could produce comparable baseline groups.
Question: did the study authors describe a random component in the sequence generation process?
2. Allocation concealment
Description: the method used to conceal the allocation sequence was described in sufficient detail to determine whether allocations could have been predicted before or during the assignment‐to‐groups process.
Question: did the study authors report an adequate method of concealing allocation to intervention or control groups?
3. Blinding of participants and personnel
Description: the methods used, if any, to blind study participants and personnel from knowledge of participants’ group membership were described in sufficient detail to enable assessment of their effectiveness.
Question: did the study authors report an adequate method of blinding participants and personnel from knowledge of participants’ belonging to either intervention or control groups?
4. Blinding of outcome assessment
Description: the methods used to blind outcome assessors from knowledge of participants' group membership were described in sufficient detail to enable assessment of their effectiveness.
Question: did the study authors note blinding of outcome assessors from knowledge of participants' belonging to either intervention or control groups?
5. Incomplete outcome data
Description: data on attrition, exclusions, and withdrawals were reported (numbers compared with the total number randomised or as a proportion of the total number randomised, or both), and reasons for incomplete outcome data were provided.
Question: did the study authors report missing data, reasons for missing data, and imputation methods?
6. Selective reporting
Description: the study's prespecified primary and secondary outcomes were reported in sufficient detail to assess their completeness.
Question: did the study authors report on all prespecified outcomes of interest (e.g. as proposed in a published protocol or trial register)?
7. Other sources of bias
Description: the study was free from other sources of bias such as fraudulence.
Question: was the study free of other problems that could put it at risk of bias?
7(a). Reliability of outcome measures
Description: the study outcomes were measured using reliable instruments or scales (Cronbach’s alpha of 0.6 or above), and reliability scores were reported or could be found in other publications.
Question: did the study authors report reliability data in sufficient detail to enable its assessment?
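As an illustration of the reliability threshold used in this subdomain, the sketch below computes Cronbach's alpha for a hypothetical respondents‐by‐items score matrix. It is not part of the review's methods, and the data are invented.

```python
# Illustrative only: Cronbach's alpha, used above with a 0.6 threshold.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    where `scores` is a respondents-by-items array."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical responses from 4 professionals to a 4-item knowledge scale.
scores = np.array([[3, 4, 3, 5],
                   [2, 2, 3, 3],
                   [4, 5, 4, 5],
                   [1, 2, 2, 2]])
print(round(cronbach_alpha(scores), 2))  # ~0.96, above the 0.6 threshold
```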
7(b). Group comparability
Description: information on the comparability of groups at baseline was provided in sufficient detail for each outcome measure to enable its assessment.
Question: did the study authors report group comparability at baseline for each of the outcome measures of interest?
7(c). Contamination
Description: the measures taken to prevent or minimise the possibility that participants in a control group might receive part or all of the intervention were described in sufficient detail to enable assessment of contamination between groups.
Question: did the study authors report contamination minimisation measures or ways in which contamination may have been possible (e.g. media reports during a training intervention period)?
Appendix 4. Missing data issues and synthesis approaches
We identified two types of missing data in the included studies: missing outcome data required for effect size calculation and missing participant data due to attrition (Alvarez 2010; Dubowitz 1991; Hazzard 1984; Kim 2019; Kleemeier 1988; Mathews 2017; McGrath 1987; Smeekens 2011). Because many of the included studies with missing data were published 30‐40 years ago in the 1980s and 1990s (Dubowitz 1991; Hazzard 1984; Kleemeier 1988; McGrath 1987), it was difficult to locate these authors to obtain missing data.
Missing outcome data
Table 1 below provides a summary of the missing outcome data issues for the following older studies and the synthesis approach taken.
Table 1: Missing data issues and synthesis approaches
Outcome | Missing data and approach |
Primary outcome: number of reported cases of child abuse and neglect (self‐report) |
Hazzard 1984: percentage of participants in each group who self‐reported making reports provided in‐text, but no report of participant attrition, so unable to use existing formulae/calculators to calculate an effect size from proportions and group sizes. Approach: exclude from analyses, include in study summaries.
Kleemeier 1988: no data reported for this outcome, aside from a statement of non‐significance. Approach: exclude from analyses, include in study summaries. |
Secondary outcome: knowledge of the reporting duty, processes, and procedures |
McGrath 1987: tables in the text report the percentage of each group who answered each questionnaire item correctly, without any other summary statistics (e.g. means, standard deviations). There are existing formulae and calculators to permit effect size calculations using proportions and group size (www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-OR2.php); however, we deemed it inappropriate to calculate and report effect sizes for this study. Firstly, whilst calculating a composite effect size would be possible, formulae to adjust the standard error require the correlation between effect sizes, and we could not locate any data that provided an estimate of the correlation between the items. Assuming that the correlation is the same between multiple effect sizes may bias the calculated composite effect size and its standard error (Borenstein 2009). Secondly, whilst selecting an individual item most aligned with the outcome domain would be appropriate, there was little detail reported for the exact items to guide our decision‐making. Approach: exclude from analyses, include in study summaries |
Secondary outcome: knowledge of core concepts in child abuse and neglect |
McGrath 1987: as above for secondary outcome: knowledge of the reporting duty, processes, and procedures.
Dubowitz 1991: no means or standard deviations reported, only t‐test and P value between experimental and control groups at post‐test. Approach: utilised David B Wilson's suite of effect size calculators to calculate a Cohen's d, 95% confidence intervals (CI), and variance (www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-SMD7.php). The standard error was calculated from the 95% CI and was used, along with the standardised mean difference, in a generic inverse variance meta‐analysis for this outcome. No data were reported for the follow‐up time point for this outcome, aside from a statement of non‐significance. |
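The conversions described for Dubowitz 1991, and the generic inverse‐variance step they feed into, can be sketched as follows. This is not the calculator the review authors used (they used David B Wilson's online tools), and the t value, group sizes, and effect estimates below are hypothetical.

```python
# Illustrative sketch only; all numbers are hypothetical.
import math

def smd_from_t(t, n1, n2):
    """Cohen's d from an independent-samples t statistic and group sizes."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def se_from_ci(lower, upper):
    """Standard error recovered from a 95% confidence interval."""
    return (upper - lower) / (2 * 1.96)

def fixed_effect_pool(estimates, standard_errors):
    """Generic inverse-variance (fixed-effect) pooled estimate and its SE."""
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * d for w, d in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

d = smd_from_t(2.1, 30, 20)   # ~0.61
se = se_from_ci(0.02, 1.20)   # ~0.30
print(fixed_effect_pool([d, 0.80], [se, 0.25]))
```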
For two of the four studies with missing data that were published in the last 10 to 15 years, we contacted the corresponding authors with a request to provide missing outcome data (intervention and control group participant totals, means, standard deviations, intraclass correlation coefficients). One study included both professional and student participants but did not report outcome data by participant type to isolate the intervention effects for professionals (Alvarez 2010). Despite contact with, and co‐operation from, the study authors, attempts to locate the required data were unsuccessful. The other study, Kim 2019, examined the moderators of treatment effect using structural equation modelling but provided no means or standard deviations by group, or other data to calculate the effect from other coefficients. Unfortunately, we were unable to obtain the required data from the corresponding author. We therefore excluded these two studies from the analyses, but included them in the description of the studies (see Included studies and Characteristics of included studies).
Missing participant data
The remaining two recently published studies with missing data reported participant attrition after randomisation. Smeekens 2011 lost > 30% of their participants in both groups between allocation and their postintervention measure. The authors conducted "both an intention‐to‐treat analysis with the pre‐test score carried forward and a multiple imputation analysis" (p 332); because the "results were not essentially altered" (p 332), they reported results only for those who completed the postintervention measure. Whilst we utilised the data reported by study authors in effect size calculation, we downgraded the certainty of the evidence for the outcome to which this single study contributed (secondary outcome: skills in distinguishing cases). Mathews 2017 lost < 5% of their participants between allocation and postintervention, therefore we used the analysed n and summary data for effect size calculation. Because of the loss of all control group participants at the four‐month follow‐up for this study, we note the inability to conduct between‐group comparisons at follow‐up in the Results section, and downgraded the certainty of the evidence for the outcome to which this single study contributed (secondary outcome: knowledge of the reporting duty, processes, and procedures). We did not report the pattern of results for the experimental group participants, as this would be a biased estimate of the intervention effects.
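As an illustration of the intention‐to‐treat approach quoted above (pre‐test score carried forward for participants lost before the postintervention measure), the following sketch uses an invented data frame; it is not Smeekens 2011's analysis code.

```python
# Illustrative only: carrying the pre-test score forward where the
# post-test is missing, so every randomised participant is analysed.
import pandas as pd

df = pd.DataFrame({
    "group": ["intervention", "intervention", "control", "control"],
    "pre":   [10, 12, 11, 9],
    "post":  [15, None, 12, None],  # None = lost before the post-test
})

df["post_itt"] = df["post"].fillna(df["pre"])
print(df.groupby("group")["post_itt"].mean())
```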
Data and analyses
Comparison 1. Number of reported cases of child abuse and neglect.
Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
---|---|---|---|---|
1.1 Number of reported cases of child abuse and neglect (vignettes) | 2 | 87 | Std. Mean Difference (IV, Fixed, 95% CI) | 1.81 [1.30, 2.32] |
1.2 Number of reported cases of child abuse and neglect (vignettes), adjusted for clustering | 2 | 80 | Std. Mean Difference (IV, Fixed, 95% CI) | 1.82 [1.28, 2.35] |
Comparison 2. Knowledge of the reporting duty, processes, and procedures.
Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
---|---|---|---|---|
2.1 Knowledge of reporting duty, processes, and procedures | 1 | | Std. Mean Difference (IV, Fixed, 95% CI) | Totals not selected |
Comparison 3. Knowledge of core concepts in child abuse and neglect.
Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
---|---|---|---|---|
3.1 Child abuse/maltreatment (general) | 2 | 154 | Std. Mean Difference (IV, Fixed, 95% CI) | 0.68 [0.35, 1.01] |
3.2 Child abuse/maltreatment (general), adjusted for clustering | 2 | 70 | Std. Mean Difference (IV, Fixed, 95% CI) | 0.66 [0.17, 1.15] |
3.3 Child sexual abuse (specific) | 3 | 238 | Std. Mean Difference (IV, Random, 95% CI) | 1.44 [0.43, 2.45] |
3.4 Child sexual abuse (specific), adjusted for clustering | 3 | 178 | Std. Mean Difference (IV, Random, 95% CI) | 1.42 [0.44, 2.39] |
3.5 Child sexual abuse (specific, non‐randomised CBA) | 1 | | Std. Mean Difference (IV, Random, 95% CI) | Totals not selected |
Comparison 4. Skills in distinguishing cases.
Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
---|---|---|---|---|
4.1 Skills in distinguishing cases | 1 | | Std. Mean Difference (IV, Fixed, 95% CI) | Totals not selected |
Comparison 5. Attitudes towards the duty to report child abuse and neglect.
Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
---|---|---|---|---|
5.1 Attitudes towards the duty to report child abuse and neglect | 1 | | Std. Mean Difference (IV, Fixed, 95% CI) | Totals not selected |
Characteristics of studies
Characteristics of included studies [ordered by study ID]
Alvarez 2010.
Study characteristics | ||
Methods | Study design: quasi‐randomised controlled trial
Unit of allocation: participant
Unit of analysis: participant
Adjustment for clustering: no (participants received the intervention in groups. The composition of groupings was not reported.) | |
Participants | Location: Nevada, USA
Setting: not reported
Sample size calculation: not reported
Sample size: 55 mental health professionals with a Bachelor’s level degree or higher and graduate students in mental health programs (i.e. psychology, educational psychology, counselling, social work) (Alvarez 2010, p 213); intervention group n and control group n not reported
Mean age (SD): (i) intervention group = 40 (10.9) years, (ii) control group = 36.6 (12.4) years (Alvarez 2010, p 213)
Gender: (i) intervention group = 88.9% women, (ii) control group = 77.8% women (Alvarez 2010, p 213)
Race/ethnicity: (i) intervention group = 80.8% Caucasian (understood to be white), (ii) control group = 74.1% Caucasian (understood to be white) (Alvarez 2010, p 213)
Years of experience: not reported
Previous child protection training: not reported
Previous experience with child maltreatment reporting: (i) intervention group = 51.9% yes, (ii) control group = 59.3% yes (Alvarez 2010, p 213)
Baseline equivalence: authors report no statistical differences between participants in the intervention and control conditions prior to receiving training (Alvarez 2010, p 213) | |
Interventions | Name: child maltreatment reporting workshop (Alvarez 2010, p 215)
Contents: (i) how to involve caregivers in the reporting process, (ii) systematic dissemination of state and federal laws relevant to reporting suspected child maltreatment, (iii) common indicators of child maltreatment, (iv) common misconceptions resulting in failure to report suspected child maltreatment (Alvarez 2010, p 213), (v) empirically supported procedures to assist in making a child maltreatment report (Alvarez 2010, p 215), and (vi) prevalence rates of child maltreatment (Alvarez 2008, pp 51‐2)
Processes and teaching methods: (i) presentation of training agenda, (ii) presentation of information, (iii) modelling in videotaped role‐play scenario, (iv) practice of techniques in role plays, (v) group discussion, and (vi) questions and answers (Alvarez 2008, pp 51‐2)
Delivery mode: face‐to‐face workshop
Trainers and qualifications: non‐licensed graduate students at Master's level enrolled in a clinical psychology doctoral programme (Alvarez 2010, p 215)
Duration: 2 hours (Alvarez 2008, p 56)
Intensity: not reported
Intervention integrity: workshop facilitators and blinded independent raters completed protocol checklists (Alvarez 2010, p 215)
Comparison condition: alternative training (cultural sensitivity workshop) (Alvarez 2010, p 215) | |
Outcomes |
Eligible measures (outcome domain)
Ineligible measures (reason): clinical expertise in reporting suspected child maltreatment (measures skills in safeguarding therapeutic relationships; not listed in protocol for this review), comprising 15 items with multiple‐choice response options for each item (Alvarez 2010, p 214)
Timing of outcome assessment: pre‐test (immediately before training), post‐test (immediately after training) (Alvarez 2010, p 215) |
|
Notes | Funding: National Institute on Drug Abuse (1R01DA020548‐01A1)
Author contact: yes | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Unclear risk |
Comment: inadequate description of the generation of the randomised sequence Quote: "on completion of baseline measures, participants were randomly assigned to one of two workshops" (Alvarez 2010, p 215) |
Allocation concealment (selection bias) | Unclear risk | Comment: method of concealment not reported by study authors |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have influenced subjective study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures tied closely to intervention purpose) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk | Comment: 1 participant was excluded from analyses due to inability to complete the postintervention measure; however, authors do not report the participant's allocation condition (Alvarez 2010, p 212) |
Selective reporting (reporting bias) | Low risk | Comment: study protocol not available; however, the published report, Alvarez 2010, and unpublished report, Alvarez 2008, are consistent |
Other bias | High risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | High risk | Comment: low internal consistency for eligible outcomes measured by the Knowledge of Child Maltreatment Reporting Laws Inventory (α = 0.18) and Recognition and Intent to Report Child Maltreatment Inventory (α = 0.10); however, test‐retest reliability was acceptable for both measures (r = 0.88 for both) (Alvarez 2008, p 288) |
Group comparability (selection bias) | Unclear risk | Comment: no statistically significant differences between conditions at baseline on demographic variables, which is supported by report of data between conditions and formal statistical testing. No formal assessment of comparability on primary and secondary outcomes at baseline. Across conditions at baseline, mean scores < 1‐point difference. The Knowledge of Child Maltreatment Reporting Laws Inventory had a range of 0 to 15, and the Recognition and Intent to Report Child Maltreatment Inventory had a range of 0 to 48 (Alvarez 2010, p 214). |
Contamination (contamination bias) | Unclear risk | Comment: unclear whether control and experimental participants worked in the same setting, thereby potentially leading to contamination |
Dubowitz 1991.
Study characteristics | ||
Methods | Study design: quasi‐randomised controlled trial
Unit of allocation: hospital residency rotation (alternate rotations received the course)
Unit of analysis: participants
Adjustment for clustering: no (participants received the intervention in groups) | |
Participants | Location: Maryland, USA
Setting: university teaching hospital
Sample size calculation: not reported
Sample size: 50 paediatric residents (Dubowitz 1991, p 305); intervention group n = 31, control group n = 19 (Dubowitz 1991, p 306)
Mean age (SD): 27 years (SD not reported) (Dubowitz 1991, p 306)
Gender: not reported by group; entire sample = 56% men (Dubowitz 1991, p 306)
Race/ethnicity: not reported
Years of experience: not reported
Previous child protection training: not reported by group; 42% of entire sample had received 0 to 1 hour of teaching in child maltreatment, 50% 2 to 5 hours, and 8% 6 to 10 hours (Dubowitz 1991, p 306)
Previous experience with child maltreatment reporting: not reported by group; 18% of entire sample had managed 0 to 1 cases of child maltreatment, 47% 2 to 5 cases, 20% 6 to 10 cases, and 14% > 10 cases (Dubowitz 1991, p 306)
Baseline equivalence: authors report "no significant differences between the experimental group and controls" on several variables (age, marital status, number with children, and amount of previous teaching on child maltreatment). However, "the experimental group included more second‐year residents than did the controls (66% vs 11%), and a greater number had managed more than 5 cases of child maltreatment (47% vs 16%)" (Dubowitz 1991, p 306). Analyses to support the assessment of group equivalence were not reported. | |
Interventions | Name: child maltreatment course Contents: (i) incidence, prevalence, and diagnosis of child maltreatment, (ii) aetiological factors and theories, (iii) sexual abuse, psychological abuse, neglect, failure to thrive, (iv) interviewing techniques, (v) legal and ethical issues, and (vi) roles of the paediatrician (Dubowitz 1991, p 305) Processes and teaching methods: (i) didactic presentation, (ii) group discussions, (iii) slides, (iv) videotapes, (v) role playing, (vi) participants given 2 to 4 articles per topic and asked to read these in preparation for the seminars, and (vii) direct observations in a clinical setting of interdisciplinary evaluations of children who were suspected to have experienced abuse (Dubowitz 1991, p 305) Delivery mode: face‐to‐face seminars Trainers and qualifications: interdisciplinary team comprising paediatricians, social worker, child psychologist, and nurse (Dubowitz 1991, p 305) Duration: 1 month (Dubowitz 1991, p 305) Intensity: 6 x 90‐minute seminars (Dubowitz 1991, p 305) Intervention integrity: not reported Comparison condition: no training | |
Outcomes | Eligible measures (outcome domain): test based on course content (secondary outcome: knowledge of core concepts in child abuse and neglect), comprising 31 multiple choice items (Dubowitz 1991, p 306) Ineligible measures (reason): attitudinal measure comprising 5 items rated on a 5‐point Likert scale (authors refer to this as an “attitude” measure; however, sample items reflect capabilities measured on a “competence” scale, which equates more readily to the construct of self‐efficacy; not listed in the protocol for this review) (Dubowitz 1991, p 306) Timing of outcome assessment: pre‐test (immediately before the course), post‐test (immediately following the course), follow‐up (3 to 4 months after the course had ended) (Dubowitz 1991, p 306) | |
Notes | Funding: National Center on Child Abuse and Neglect, Office of Human Development Services, US Department of Health and Human Services; grant 90CA1205/01 Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | High risk |
Comment: selection bias due to inadequate generation of a randomised sequence Quote: "the course was taught on alternate rotations so that a group of residents did not receive the course and served as a control group" (Dubowitz 1991, p 305) |
Allocation concealment (selection bias) | High risk | Comment: selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment. Participants were assigned based on rotation (participants and investigators likely to foresee assignment). |
Blinding of participants and personnel (performance bias) All outcomes | High risk |
Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have impacted subjective study outcomes (i.e. self‐report measures) Quote: "all the residents were informed that the project involved an assessment of residency training and child maltreatment" (Dubowitz 1991, p 305) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures tied closely to intervention purpose) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk | Comment: reported sample is 50 participants; however, the journal article does not report on attrition over time (i.e. at recruitment, intervention, outcome assessment) |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available, but authors describe the development of outcome measures specifically for the study (Dubowitz 1991, p 306). All outcomes are reported in the study, but are reported in an incomplete manner and so cannot be used in the meta‐analyses. |
Other bias | High risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | High risk | Comment: outcome measures developed specifically for the study, and no reliability data were reported |
Group comparability (selection bias) | High risk |
Comment: authors report no significant differences between groups on several variables (age, marital status, number with children, and amount of previous teaching on child maltreatment). However, there were differences between groups in professional ranking and previous experience managing child maltreatment cases. No formal assessment of comparability on primary and secondary outcomes at baseline was reported. Across conditions at baseline, mean knowledge scores differed by 1 percentage point. Quote: "the experimental group included more second‐year residents than did the controls (66% vs 11%), and a greater number had managed more than 5 cases of child maltreatment (47% vs 16%)" (Dubowitz 1991, p 306) |
Contamination (contamination bias) | Unclear risk | Comment: study authors do not report contamination minimisation measures or ways in which contamination may have been possible. It is unclear from the journal article whether experimental and control participants had contact with each other in their work placements. |
Hazzard 1984.
Study characteristics | ||
Methods | Study design: quasi‐randomised controlled trial Unit of allocation: geographical region (city) Unit of analysis: participants Adjustment for clustering: no (participants received the intervention in groups. Teachers from 4 schools in 1 city were allocated to receive the intervention. Teachers from 4 schools in another city were allocated as controls. Intervention participants attended 1 of 2 workshops. No breakdown of schools by workshop groups was reported) | |
Participants | Location: Atlanta, Georgia, USA Setting: 2 small cities in a county in the metro‐Atlanta area (Hazzard 1984, p 289) Sample size calculation: not reported Sample size: 104 4th, 5th, and 6th grade elementary teachers and junior high health education teachers (p 289); intervention group n = 51, control group n = 53 (Hazzard 1984, p 289) Mean age (SD): not reported by group; median age category = 31 to 35 years (Hazzard 1984, p 290) Gender: not reported by group; 83% = women, 17% = men (Hazzard 1984, p 290) Race/ethnicity: not reported Previous child protection training: not reported by group; 76% = yes, 24% = no (Hazzard 1984, p 290) Years of experience: not reported Previous experience with child maltreatment reporting: not reported by group; 38% = yes, 62% = no (Hazzard 1984, p 290) Baseline equivalence: not reported. Authors state that "analysis of demographic information revealed no significant differences between treatment and control teachers", but no data were reported to support this statement (Hazzard 1984, p 290) | |
Interventions | Name: one‐day training workshop on child abuse Contents: (i) rationale for training teachers about child abuse, (ii) definitions, myths, and realities, (iii) identifying abused children, (iv) family dynamics, (v) personal concerns about dealing with abuse cases, (vi) communicating with an abused child, (vii) legal issues and social service referrals; and (viii) “all types of abuse" (Hazzard 1984, p 290) Processes and teaching methods: (i) didactic presentations, (ii) questions and answers (Q&A) session with county protective services personnel, (iii) video presentations, (iv) modelling and role play, and (v) large and small group discussions (Hazzard 1984, p 290) Delivery mode: face‐to‐face workshops Trainers and qualifications: mental health professionals (1 man and 1 woman) with extensive experience with child abuse (Hazzard 1984, p 290) Duration: 1 day (Hazzard 1984, p 288) Intensity: 1 x 6‐hour workshop (Hazzard 1984, p 289) Intervention integrity: not reported Comparison condition: no training | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason): feelings about child abuse (measures emotional reactions to child abuse; not prespecified in the protocol for this review), comprising 3 typical‐case vignettes with Likert‐type responses to 6 emotions evoked, including: anger, disgust, sadness, discomfort, sympathy, and caring Timing of outcome assessment: pre‐test (1 week before workshop), post‐test (1 week after workshop), follow‐up (6 months later) (Hazzard 1984, p 289) | |
Notes | Funding: research funded by Emory University Research Fund; intervention workshops funded by McDonald Foundation, Atlanta Foundation, Metropolitan Atlanta Foundation, Ray & Elizabeth Lee Foundation, Gay & Erskine Love Foundation, James Starr Memorial Foundation, Shearson‐American Express, and American Tara Corporation Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Unclear risk |
Comment: inadequate description of the generation of the randomised sequence Quote: "... school teachers (N = 104) were surveyed concerning their abuse‐related experience, knowledge and attitudes ... Half of the teachers (n = 51) were then randomly assigned to participate in a one‐day training workshop on child abuse" (Hazzard 1984, p 288) |
Allocation concealment (selection bias) | Unclear risk | Comment: method of concealment was not reported by study authors |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have impacted subjective study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to likely knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk | Comment: the reported sample is 104 participants; however, the journal article does not report on attrition over time (i.e. at recruitment, intervention, outcome assessment) |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available, but authors describe the development of outcome measures specifically for the study. All outcomes are reported in the study, but not all outcomes were reported in a sufficiently complete manner to permit inclusion in meta‐analyses. |
Other bias | High risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Unclear risk | Comment: outcome measures were developed specifically for the study. Authors reported coefficient alpha for the Knowledge of Child Abuse Scale (α = 0.80). The Reported Involvement in Child Abuse measure comprised separate items, for which coefficient alpha was not appropriate (Hazzard 1984, p 290). |
Group comparability (selection bias) | High risk |
Comment: information on the comparability of groups at baseline was not provided in sufficient detail for each outcome measure to enable assessment of equivalence. Authors report group equivalence based on demographic variables, but no data are reported to support this statement. Quote: "analysis of demographic information revealed no significant differences between treatment and control teachers" (Hazzard 1984, p 290) |
Contamination (contamination bias) | Low risk | Comment: measures taken to prevent or minimise the possibility that participants in a control group might receive part or all of the intervention were not described in sufficient detail to enable a precise assessment of contamination between groups. However, the experimental and control group participants were in different cities, thus reducing the likelihood of contamination. |
Jacobsen 1993.
Study characteristics | ||
Methods | Study design: controlled before‐and‐after study Unit of allocation: district Unit of analysis: participant Adjustment for clustering: no (participants received the intervention in a group. Participants were from schools in 1 school district) | |
Participants | Location: rural western school district, USA Setting: school district (Jacobsen 1993, p 10) Sample size calculation: not reported Sample size: 40 kindergarten through 6th‐grade regular and special education teachers (Jacobsen 1993, p 23); intervention group n = 20, control group n = 20 (Jacobsen 1993, p 23) Mean age (SD): (i) intervention group = 40 years (SD not reported), (ii) control group = 37.9 years (SD not reported) (p 23) Gender: (i) intervention group = 75% women, (ii) control group = 85% women (Jacobsen 1993, p 23) Race/ethnicity: (i) intervention group = 70% white, (ii) control group = 75% white (Jacobsen 1993, p 22) Years of experience: (i) intervention group = 12.7 years, (ii) control group = 9.7 years (Jacobsen 1993, p 23) Previous child protection training: (i) intervention group = 75% received at least 1 hour of prior education about child sexual abuse, (ii) control group = 70% received at least 1 hour of prior education about child sexual abuse (Jacobsen 1993, p 23) Previous experience with child maltreatment reporting: (i) intervention group = 60% no, (ii) control group = 60% no (Jacobsen 1993, p 23) Baseline equivalence: not reported | |
Interventions | Name: 3‐hour inservice training on child sexual abuse (adapted from Kleemeier 1988) Contents: (i) prevalence, laws, and reporting, (ii) definitions, myths and facts about child sexual abuse, (iii) indicators of child sexual abuse, (iv) long‐term effects of child sexual abuse, (v) identifying, reporting, and handling disclosure, and (vi) child sexual abuse prevention programmes (Jacobsen 1993, pp 18‐20) Processes and teaching methods: (i) specification of workshop goals, (ii) didactic presentation, (iii) practical application of concepts, (iv) video presentation, and (v) group discussion Delivery mode: face‐to‐face workshop Trainers and qualifications: 2 x facilitators (school psychology graduate interns) Duration: 3 hours Intensity: not reported Intervention integrity: not reported Comparison condition: not reported | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason): teacher opinion scale (items assess attitudes towards child sexual abuse rather than attitudes towards the reporting duty; not prespecified in the protocol for this review), comprising a 25‐item scale with response options on a 4‐point Likert‐type scale Timing of outcome assessment: pre‐test (details not reported), post‐test (details not reported) | |
Notes | Funding: not reported Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | High risk |
Comment: selection bias due to non‐randomised allocation of participants Quote: "the treatment group was not randomly assigned but took part in the study based on interest and the degree to which site administrator deemed the information important to the start ... . the control group consisted of 20 randomly selected elementary school teachers ..." (Jacobsen 1993, p 10) |
Allocation concealment (selection bias) | High risk | Comment: selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment attributable to research design (non‐randomised study). Participants were assigned based on interest and the site administrator's judgement (participants and investigators could foresee assignment). |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding and almost certain knowledge of the allocated intervention by participants and personnel during the study, which may have influenced subjectively measured study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to likely knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk |
Comment: reported sample is 40; however, the author did not report on attrition over time (i.e. at recruitment, intervention, outcome assessment) Quote: "Control Group (n = 20)"; "Treatment Group (n = 20)" (Jacobsen 1993, p 23) |
Selective reporting (reporting bias) | Low risk | Comment: although the study protocol was not available, all outcomes described in the methods were fully reported in the study. Appropriate data and comparisons were offered (Jacobsen 1993, p 23; p 26). |
Other bias | Unclear risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Low risk | Comment: authors reported coefficient alphas for the Teacher Knowledge Scale (α = 0.84), Teacher Opinion Scale (α = 0.78), and Teacher Vignettes Measure (α = 0.78) (Jacobsen 1993, pp 10‐3) |
Group comparability (selection bias) | Unclear risk | Comment: authors reported data on demographic characteristics likely to influence results, including mean years of teaching experience, prior experience with child sexual abuse, and previous child protection education. Groups appear comparable on all characteristics apart from years of teaching experience (control group 9.7 years; treatment group 12.7 years). Authors reported group comparability data for each of the study outcomes, but did not assess whether baseline equivalence was achieved (e.g. via statistical testing) (Jacobsen 1993, pp 23, 25). |
Contamination (contamination bias) | Unclear risk | Comment: study author does not report the ways in which contamination may have been possible, or what may have been done to prevent or minimise this. It is unclear whether experimental and control participants had contact with each other in their workplaces. |
Kim 2019.
Study characteristics | ||
Methods | Study design: quasi‐randomised controlled trial Unit of allocation: school Unit of analysis: participant Adjustment for clustering: unclear | |
Participants | Location: USA, specific location not reported Setting: not reported Sample size calculation: not reported Sample size: 161 elementary school teachers (Kim 2019, p 730); intervention group n and control group n not reported Mean age (SD): not reported by group; age range = 18 to 55+ years; largest age category = 35 to 44 years (35.8%) (Kim 2019, p 730) Gender: not reported by group; total sample = 91.9% women Race/ethnicity: not reported by group; total sample = 97.5% Caucasian (understood to be white) Years of experience: M = 15.4 years (SD = 7.4 years); range = 1 to 30+ years Previous child protection training: not reported Previous experience with child maltreatment reporting: (i) intervention group = 56.6% yes, (ii) control group = 65.8% yes (Kim 2019, p 730) Baseline equivalence: not reported | |
Interventions | Name: Second Step Child Protection Unit (Committee for Children 2021) Contents: (i) recognise indicators of child sexual abuse, (ii) respond in a supportive way, (iii) report abuse, (iv) "Recognize, Respond, and Report Abuse", (v) addressing discomfort with the topic, and (vi) how to teach student lessons and engage families. Part of a "comprehensive approach" involving "(a) school policies and procedures, (b) staff training, (c) student lessons, and (d) family education" (Kim 2019, p 728) Processes and teaching methods: methods used in online modules were not reported Delivery mode: online Trainers and qualifications: not reported Duration: 75 to 90 minutes Intensity: self‐paced Intervention integrity: not reported; online delivery offers the possibility of uniformity Comparison condition: waitlist control | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason) Timing of outcome assessment: pre‐test (before online training), post‐test (after online training) (Kim 2019, p 730) | |
Notes | Funding: Committee for Children, Seattle (WA) Author contact: yes | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Low risk |
Comment: adequate description of the generation of the randomised sequence Quote: "randomisation was conducted at the school (Imai, King, & Nall, 2009). For the randomisation, schools were first matched based on school characteristics such as grade levels (K‐5, pre‐K‐2, 3–5) and school size, and then randomly assigned to the intervention or wait‐list control using a computer‐generated random number list (Kim & Shin, 2014)" (Kim 2019, p 730) |
Allocation concealment (selection bias) | Unclear risk | Comment: method of concealment not reported by study authors |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have influenced subjective study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures tied closely to intervention purpose) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk | Comment: loss of 5 participants between allocation and baseline assessments in the waitlist control group, but no losses in the experimental group. A total of 3 participants were "lost to follow‐up" (1/83 in experimental; 2/76 control condition) (p 731), and a further 3 participants in the experimental group "discontinued intervention" (p 731). Reasons for losses were not explicitly reported. Overall rate of attrition is low, and randomised sample size was used for the experimental group (n = 83), but not for the control group (n = 78) (Kim 2019, p 731). |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available. All outcomes in the methods section are reported in the study, but not all outcomes were reported in a sufficiently complete manner to permit effect size calculation or inclusion in meta‐analyses, or both. |
Other bias | Unclear risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Low risk | Comment: pre‐existing scales were used to measure outcomes. Authors generated reliability data at pre‐ and post‐test for the Educators and Child Abuse Questionnaire (α = 0.62 and α = 0.70, respectively), the Teacher Reporting Attitude Scale‐Child Sexual Abuse (α = 0.82 and α = 0.84, respectively), and the Delaware School Climate Survey (α = 0.77 and α = 0.78, respectively) (p 732). Reliability data for the Abbreviated Acceptability Rating Profile were generated at post‐test (α = 0.91) (Kim 2019, p 733). |
Group comparability (selection bias) | Unclear risk | Comment: group comparability at baseline was not reported |
Contamination (contamination bias) | Low risk | Comment: measures taken to prevent or minimise the possibility that participants in the control group might receive part or all of the intervention were not described sufficiently to enable a precise assessment of contamination between groups. However, the experimental and control group participants were in different schools, thus reducing the likelihood of contamination. |
Kleemeier 1988.
Study characteristics | ||
Methods | Study design: randomised controlled trial Unit of allocation: participant (teacher) Unit of analysis: participant Adjustment for clustering: no (participants received the intervention in groups. Intervention participants were from 4 schools, and controls from 4 different schools. No breakdown of schools by intervention groups was reported) | |
Participants | Location: southeastern USA Setting: suburban school district (Kleemeier 1988, p 556) Sample size calculation: not reported Sample size: 45 3rd and 4th grade teachers; intervention group n = 26, control group n = 19 (Kleemeier 1988, p 556) Mean age (SD): not reported by group; mean age of entire sample = 41 years (SD not reported) (Kleemeier 1988, p 556) Gender: not reported by group; entire sample = 100% women (Kleemeier 1988, p 556) Race/ethnicity: not reported by group; entire sample = 75% white (Kleemeier 1988, p 556) Previous child protection training: not reported by group; entire sample = 72% had received at least 1 hour of previous training (Kleemeier 1988, p 556) Years of experience: not reported by group; entire sample M = 12.5 years Previous experience with child maltreatment reporting: not reported by group; entire sample = 44% no Baseline equivalence: authors reported significant differences on demographic and experiential variables (Kleemeier 1988, p 556) | |
Interventions | Name: teacher training workshop Contents: (i) incidence, (ii) dynamics, (iii) indicators, (iv) short‐ and long‐term effects, (v) basic interview techniques, (vi) reporting, (vii) treatment resources, and (viii) primary prevention (Kleemeier 1988, p 558) Processes and teaching methods: (i) didactic presentations, (ii) videotapes, (iii) experiential exercises, (iv) role playing, (v) group discussion, and (vi) Q&A session with a child protective services worker (Kleemeier 1988, p 558) Delivery mode: face‐to‐face (Kleemeier 1988, p 558) Trainers and qualifications: 2 psychologists with expertise in child sexual abuse (Kleemeier 1988, p 558) Duration: 6 hours (Kleemeier 1988, p 558) Intensity: 1 x 6‐hour session (Kleemeier 1988, p 558) Intervention integrity: not reported Comparison condition: no training | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason): teacher opinion scale (items assess attitudes towards child sexual abuse rather than attitudes towards the reporting duty; not prespecified in the protocol for this review), comprising a 25‐item scale with response options on a 4‐point Likert‐type scale Timing of outcome assessment: pre‐test (before training, teacher knowledge scale only), post‐test (teacher knowledge scale, teachers vignettes measure, after training), follow‐up (teacher prevention behaviour measure, teacher opinion scale, 6 weeks following training) | |
Notes | Funding: National Institute of Mental Health (MH41161‐01) (Kleemeier 1988, p 555) Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Unclear risk |
Comment: inadequate description of the generation of the randomised sequence Quote: "twenty‐six teachers from four schools were randomly assigned to the treatment group and attended a six‐hour training workshop on child sexual abuse. Nineteen teachers from four other schools were randomly assigned to the control group" (Kleemeier 1988, p 556) |
Allocation concealment (selection bias) | Unclear risk | Comment: method of concealment not reported by study authors |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have influenced subjective study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to likely knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk | Comment: reported sample is 45 participants; however, the journal article does not report on attrition over time (i.e. at recruitment, intervention, postintervention outcome assessment). Loss of 20% of participants (not broken down by group) for the single outcome measured at the 6‐week follow‐up (Kleemeier 1988, p 559) |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available. All outcomes described in the methods were reported in the study; however, not all outcomes were reported in a sufficiently complete manner to permit inclusion in meta‐analyses. |
Other bias | High risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Low risk | Comment: authors reported coefficient alphas for the Teacher Knowledge Scale (α = 0.84) and Teacher Opinion Scale (α = 0.78). The Teacher Vignettes Measure comprised separate items with open‐ended responses, for which coefficient alpha was not appropriate. No reliability data were reported for the Teacher Prevention Behaviour Measure (Kleemeier 1988, pp 557‐8). |
Group comparability (selection bias) | High risk | Comment: authors reported 2 significant differences between groups. The treatment group included a larger variation in teaching staff positions and were "more likely than control teachers to have previously suspected that specific children had been abused" (Kleemeier 1988, p 556). Information on the comparability of groups at baseline was not provided in sufficient detail for each outcome measure to enable assessment of equivalence. |
Contamination (contamination bias) | Unclear risk | Comment: measures taken to prevent or minimise the possibility that participants in a control group might receive part or all of the intervention were not described in sufficient detail to enable a reasonable assessment of contamination between groups (e.g. number of intervention and control group participants per school). However, authors report that teachers assigned to the treatment group were from "four schools", and teachers assigned to the control group were from "four other schools" (Kleemeier 1988, p 556). |
Mathews 2017.
Study characteristics | ||
Methods | Study design: randomised controlled trial Unit of allocation: participant Unit of analysis: participant Adjustment for clustering: not applicable | |
Participants | Location: Pennsylvania, USA Setting: licensed childcare facilities (Mathews 2017, p 4) Sample size calculation: yes (Mathews 2017, p 7) Sample size: 765 early education and care providers over 18 years of age, working with children under 5 years of age; intervention group n = 388, control group n = 374 (Mathews 2017, p 4) Mean age (SD): not reported; largest age category: (i) intervention group = 18 to 29 years (40.6%), (ii) control group = 18 to 29 years (40.1%) (Mathews 2017, p 9) Gender: (i) intervention group = 98.9% women, (ii) control group = 96.5% women (Mathews 2017, p 9) Race/ethnicity: (i) intervention group = 84.7% non‐Hispanic white, (ii) control group = 83.7% non‐Hispanic white Years of experience: largest group > 15 years: (i) intervention group = 25.2%, (ii) control group = 25.2% (Mathews 2017, p 9) Previous child protection training: (i) intervention group = 79% yes, (ii) control group = 78.1% yes (Mathews 2017, p 9) Previous experience with child maltreatment reporting: not reported Baseline equivalence: analysis by Chi² tests showed that each group had comparable baseline demographics (Mathews 2017, p 9) | |
Interventions | Name: iLookOut for Child Abuse ‐ Online Learning Module for Early Childcare Providers Contents: (1) cognitive aspects of mandated reporter training: (i) definitions of abuse, (ii) signs of abuse, (iii) legal requirements for reporting; (2) affective aspects of mandated reporter training: (i) empowering participants to contact CPS when there was a reasonable suspicion, and (ii) developing attitudinal dispositions to help protect children (Mathews 2017, p 5) Processes and teaching methods: immersion in real‐life simulations via "interactive, video‐based storyline with films shot in point of‐view (i.e. the camera functioning as the learner's eyes), with the learner taking the role of a teacher of 4 year olds at a childcare centre" (Mathews 2017, p 5) Delivery mode: online, designed "to allow for mobile access" and "to accommodate individuals with sensory disabilities" (Mathews 2017, p 5) Trainers and qualifications: not applicable, as training was online. Module developed by team of experts in child abuse, instructional design, paediatrics, early childhood education, online learning, mandated reporter training, law, ethics, and child advocacy. Duration: 4 weeks (Mathews 2017, p 5) Intensity: self‐paced Intervention integrity: not reported; online delivery offers the possibility of uniformity Comparison condition: waitlist control | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason): nil Timing of outcome assessment: pre‐test (before training), post‐test (after training), follow‐up (after 4 months) (Mathews 2017, p 6) | |
Notes | Funding: intervention was funded by Penn State University Center for the Protection of Children; research was funded by Penn State REDCap, Penn State Clinical and Translational Research Institute, NIH/NCATS Grant Number UL1 TR000127 Author contact: yes | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Low risk |
Comment: selection bias unlikely due to adequate description of the generation of the randomised sequence Quote: "the host website randomised participants using an automated scheme" (Mathews 2017, p 6) |
Allocation concealment (selection bias) | Unclear risk | Comment: study protocol states the trial was open and unmasked, yet published report of Phase I of the trial states that "participants were blinded to treatment allocation" (Mathews 2017, p 6). Unclear whether blinding refers to allocation concealment up until point of assignment |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: study protocol states the trial was open and unmasked, yet the published report of Phase I of the trial states that "participants were blinded to treatment allocation" (Mathews 2017, p 6). It is likely that participants would have been able to identify if they were taking part in the intervention, as this was self‐administered online. Due to the subjective nature of the measures (pre‐post self‐report) and their close alignment with intervention content, performance bias is likely. |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: no blinding of outcome assessment; detection bias is likely due to the subjective nature of the measures and their close alignment with intervention content (pre‐post self‐report) |
Incomplete outcome data (attrition bias) All outcomes | High risk |
Comment: loss of 21 participants after allocation was not evenly distributed across conditions (5/374 (1.34%) control condition; 16/388 (4.12%) experimental condition). Reasons for losses were not explicitly reported, but were described as "failure to complete" (Figure 1, p 8). The overall rate of attrition is low (2.76%), and analyses excluded lost participants (Mathews 2017, p 8). A total of 406 participants agreed to be contacted to gather follow‐up data (n participants by group not reported), yet only 201 participants who had received the intervention completed the 4‐month follow‐up measure. Consequently, no between‐group comparisons could be performed by study authors. |
Selective reporting (reporting bias) | Low risk | Comment: study protocol is available, and all outcomes that were prespecified were reported in the prespecified way |
Other bias | Unclear risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Unclear risk | Comment: outcome measures were developed specifically for the study. Coefficient alphas were not reported, but authors report actions to enhance the reliability and validity of the measures (e.g. use of experts in development, and piloting to "improve content validity and reliability, including test‐retest reliability") (Mathews 2017, p 6). |
Group comparability (selection bias) | Unclear risk |
Comment: group comparability at baseline was supported by statistical testing on demographic variables (Table 1, p 9). Statistically significant difference in gender between conditions (13/369 control; 3/372 experimental). No other statistically significant differences were found. Authors do not report statistical assessment of group comparability of primary and secondary outcomes at baseline. For knowledge, mean = 13.54 for both conditions at baseline. For attitude, mean = 5.78 for control condition and 5.80 for experimental condition at baseline (Mathews 2017, p 9). Quote: "baseline data showed each group had almost identical knowledge and attitudes" (Mathews 2017, p 8) |
Contamination (contamination bias) | Unclear risk | Comment: unclear whether control and experimental participants worked in the same settings, thereby potentially leading to contamination. Although the potential participant pool was large ‐ educators at 1900 childcare facilities ‐ it is possible that a small proportion of participants in intervention and control groups may have been from the same workplaces. |
McGrath 1987.
Study characteristics | ||
Methods | Study design: randomised controlled trial Unit of allocation: schools Unit of analysis: participant Adjustment for clustering: no (participants received the intervention in school groups. Intervention participants and control participants were from different schools. Cluster sizes were not reported) | |
Participants | Location: Ottawa, Ontario, Canada Setting: Ottawa Board of Education Sample size calculation: not reported Sample size: 184 kindergarten to grade 8 teachers from 10 schools; intervention group n = 37, control group n = 94 (McGrath 1987, p 127) Mean age (SD): not reported Gender: not reported Race/ethnicity: not reported Previous child protection training: not reported Years of experience: M = 15.6 years (SD not reported) Previous experience with child maltreatment reporting: not reported Baseline equivalence: authors report assessing differences between the groups at baseline, but do not report method or results | |
Interventions | Name: teacher awareness programme on child abuse/comprehensive professional development workshop Contents: (i) definition of all forms of abuse, (ii) indicators of abuse, (iii) legal issues, including mandatory reporting requirements, (iv) reluctance to report, (v) role of the teacher, (vi) board policy, and (vii) community resources Processes and teaching methods: "train‐the‐trainer" model. Training manual included: (i) model lectures, (ii) overhead transparencies, (iii) reading materials, and (iv) audio‐visual resources (McGrath 1987, p 126). Delivery mode: face‐to‐face Trainers and qualifications: social workers Duration: 2 hours Intensity: 1 x 2‐hour workshop Intervention integrity: not reported Comparison condition: waitlist control | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason): second measure (ineligible because items assessed a mix of knowledge, myths, and attitudes), comprising 19 statements about child abuse and neglect with true/false/don’t know response options Timing of outcome assessment: pre‐test (before training), post‐test (after training), follow‐up (2 months after training) (McGrath 1987, pp 127‐30) | |
Notes | Funding: Ontario Ministry of Community and Social Services, and a Career Scientist Award of the Ontario Ministry of Health Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Unclear risk |
Comment: inadequate description of the generation of the randomised sequence Quote: "all teachers in a specific school were randomly assigned to either immediate teaching (experimental group) or delayed teaching (control group)" (McGrath 1987, p 127) |
Allocation concealment (selection bias) | Unclear risk | Comment: method of concealment not reported by study authors |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have influenced subjective study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to likely knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures) |
Incomplete outcome data (attrition bias) All outcomes | High risk |
Comment: unspecified loss of participants between intervention and follow‐up. Authors did not report on specific attrition over time (i.e. at recruitment, intervention, outcome assessment). Quote: "there was a high level of attrition from the experimental group at the follow‐up assessment" (McGrath 1987, p 126) |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available. All outcomes described in the methods were reported in the study; however, reporting of outcomes was inconsistent across 3 measurement time points, and outcomes were not reported in a sufficiently complete manner to permit inclusion in meta‐analyses. |
Other bias | Unclear risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Unclear risk | Comment: 1 outcome measure was based on a pre‐existing scale, and the other was developed specifically for the study. No reliability data were reported, and reliability data for the pre‐existing scale could not be sourced. |
Group comparability (selection bias) | Unclear risk |
Comment: authors reported no significant differences between participants who completed all 3 measurement time points and those who did not, but no supporting data were given. Authors do not report group comparability on primary and secondary outcomes at baseline, and insufficient information was reported to determine this comparability. Quote: "there were no significant differences in scores between teachers completing all three measures and those not completing all measures" (McGrath 1987, p 126) |
Contamination (contamination bias) | Low risk |
Comment: participants received the intervention in school groups. Intervention and control participants were from different schools, thus reducing the likelihood of contamination. Quote: "all teachers in a specific school were randomly assigned to either immediate teaching (experimental group) or delayed teaching (control group)" (McGrath 1987, p 127) |
Palusci 1995.
Study characteristics | ||
Methods | Study design: controlled before‐and‐after study Unit of allocation: participants Unit of analysis: participants Adjustment for clustering: not applicable | |
Participants | Location: New York, USA Setting: outpatient child abuse clinic located in a university‐affiliated municipal hospital Sample size calculation: not reported Sample size: 157 physicians and medical students (resident physicians, fellows, attending physicians, medical students) (Palusci 1995, p 1033); intervention group n = 127; control group n = 15 Mean age (SD): not reported Gender: not reported Race/ethnicity: not reported Previous child protection training: not reported Years of experience: not reported Previous experience with child maltreatment reporting: not reported Baseline equivalence: not reported | |
Interventions | Name: interdisciplinary team‐based training Contents: variously described. 1 description includes: (i) didactic training in interviewing, (ii) sexual development, and (iii) the psychologic basis of sexual abuse evaluation (Palusci 1995, p 1032). Another description includes: (i) medical knowledge and skills needed for an assessment of the child's interview, (ii) anogenital examination, and (iii) indications for case reporting to child protection authorities (Palusci 1995, p 1031). Processes and teaching methods: (i) didactic lectures, (ii) case discussions, (iii) videotapes, and (iv) direct participation in patient evaluation ("two mornings in a child abuse clinic where they interview, and examine children for possible sexual abuse") (Palusci 1995, p 1032) Delivery mode: face‐to‐face Trainers and qualifications: interdisciplinary team comprising experts in child abuse and neglect, general paediatricians, psychologists, child life specialists, social workers, and nurses (Palusci 1995, p 1032) Duration: unclear Intensity: "3 hours of didactic training and 6‐12 hours of patient care exposure" (Palusci 1995, p 1036) Intervention integrity: not reported Comparison condition: no training | |
Outcomes | Eligible measures (outcome domain): part 3 (primary outcome: number of reported cases of child abuse and neglect, measured subjectively by participant responses to vignettes), comprising 8 items (response options not reported, but "one point is awarded for correctly answering each item" (Palusci 1995, p 1032)) Ineligible measures (reason) Timing of outcome assessment: pre‐test (before training), post‐test (after training) (Palusci 1995, p 1033) | |
Notes | Funding: none reported Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | High risk |
Comment: selection bias due to non‐randomised allocation of participants to groups Quote: "we evaluated the results of this training ... in a non‐randomised controlled trial" (p 1031) and "training occurs during required and elective child development rotations" (Palusci 1995, p 1032) |
Allocation concealment (selection bias) | High risk |
Comment: selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment attributable to research design (non‐randomised study). Participants were assigned based on rotation (participants and investigators could foresee assignment). Quote: "... an intervention group was assembled from all ... who participated in the training program during the study period" (Palusci 1995, p 1033) |
Blinding of participants and personnel (performance bias) All outcomes | High risk |
Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have impacted subjective study outcomes (i.e. self‐report measures) Quote: "the purpose of the study and its instructions were explained to the participants" (Palusci 1995, p 1033) |
Blinding of outcome assessment (detection bias) All outcomes | High risk |
Comment: detection bias due to knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures tied closely to intervention purpose) Quote: "the survey instrument was administered by a single investigator" (Palusci 1995, p 1033) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk |
Comment: the reported sample is 157; however, authors did not report on attrition over time (i.e. at recruitment, intervention, outcome assessment) Quote: "no one refused to participate in the study, although two incomplete forms were excluded from the analysis" (Palusci 1995, p 1033) |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available. Authors describe 3 component parts of the survey instrument (Palusci 1995, p 1032); however, scores appear to be combined for an overall result. Hence, outcomes were reported in an incomplete manner. |
Other bias | High risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Low risk | Comment: outcome measures were developed specifically for the study. Authors report internal consistency with coefficient α of 0.69 (Palusci 1995, p 1034). |
Group comparability (selection bias) | High risk | Comment: authors do not report differences between groups on demographic characteristics, and there were likely to be differences between groups on professional ranking and previous experience in managing child maltreatment cases. No formal assessment of comparability on primary or secondary outcomes at baseline was reported. |
Contamination (contamination bias) | Unclear risk | Comment: study authors do not report ways in which contamination may have been possible or contamination minimisation measures. It is unclear whether experimental and control participants had contact with each other in their work placements. |
Randolph 1994.
Study characteristics | ||
Methods | Study design: randomised controlled trial Unit of allocation: participants Unit of analysis: participants Adjustment for clustering: no (participants received the intervention in groups. Participants were from 4 schools in 1 school district. No breakdown of schools by groups was reported) | |
Participants | Location: North Carolina, USA Setting: schools in a rural school district, after school hours Sample size calculation: not reported Sample size: 42 kindergarten to 12th grade teachers; intervention group n = 21, control group n = 21 (Randolph 1994, p 488) Mean age (SD): (i) intervention group = 42.7 years (SD not reported), (ii) control group = 41.7 years (SD not reported) (Randolph 1994, p 488) Gender: (i) intervention group = 80.95% women, (ii) control group = 80.95% women (Randolph 1994, p 488) Race/ethnicity: not reported Previous child protection training: not reported Years of experience: (i) intervention group = 12.8 years, (ii) control group = 12.05 years (Randolph 1994, p 488) Previous experience with child maltreatment reporting: not reported Baseline equivalence: authors report no significant differences between groups for sex, age, years teaching experience, marital status, race, education, experience in child sexual abuse (Randolph 1994, p 491); however, no data were reported | |
Interventions | Name: child sexual abuse prevention, teacher training workshop curriculum Contents: (i) how to recognise behavioural/physical signs of child sexual abuse, (ii) how to respond appropriately to disclosures, (iii) how to report sexual abuse cases, (iv) dynamics and emotions involved in child sexual abuse, (v) overcoming discomfort and addressing emotions associated with making a report, and (vi) developing empathetic understanding for the child victim (Randolph 1994, p 487‐8) Processes and teaching methods: (i) didactic presentations, (ii) videotapes, (iii) role‐plays, (iv) paper & pencil activities, (v) question and answer (Q&A) sessions, and (vi) group activities (Randolph 1994, p 488) Delivery mode: face‐to‐face Trainers and qualifications: 5 speakers considered experts in the area of child sexual abuse (e.g. psychologist, police officer, lawyer, and social worker) (Randolph 1994, p 490) Duration: 6 hours Intensity: 3 x 2‐hour sessions on 3 consecutive days (Randolph 1994, p 490) Intervention integrity: not reported Comparison condition: waitlist control | |
Outcomes | Eligible measures (outcome domain) Ineligible measures (reason): teacher opinion scale (items assess attitudes towards child sexual abuse rather than attitudes towards the reporting duty; not prespecified in the protocol for this review), comprising a 23‐item scale with response options on a 4‐point Likert‐type scale Timing of outcome assessment: pre‐test (immediately before training), post‐test (the day after training), follow‐up (teacher prevention behaviour measure only, 3 months after training) (Randolph 1994, p 490) | |
Notes | Funding: not reported Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Unclear risk |
Comment: inadequate description of the generation of the randomised sequence Quote: "one‐half (21) were randomly assigned by sex and grade level of instruction (in order to have an equal number of males versus females and elementary versus middle grades teachers per group), to an experimental group. They attended the child abuse prevention workshop. The remaining 21 formed the control group" (Randolph 1994, p 488) |
Allocation concealment (selection bias) | Unclear risk | Comment: method of concealment not reported by study authors |
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: performance bias due to lack of blinding, and therefore likely knowledge of the allocated intervention by participants and personnel during the study, which may have influenced subjective study outcomes (i.e. self‐report measures) |
Blinding of outcome assessment (detection bias) All outcomes | High risk | Comment: detection bias due to likely knowledge of the allocated intervention by outcome assessors, and outcome measurement is likely to be influenced by lack of blinding (pre‐post self‐report measures) |
Incomplete outcome data (attrition bias) All outcomes | Unclear risk | Comment: reported sample is 42 participants; however, the journal article does not report on attrition over time (i.e. at recruitment, intervention, outcome assessment) |
Selective reporting (reporting bias) | High risk | Comment: study protocol not available. All outcomes described in the methods were reported in the study; however, not all outcomes were reported in a sufficiently complete manner to enable inclusion in meta‐analyses. |
Other bias | Unclear risk | Comment: additional potential sources of bias related to the specific study design have been identified |
Reliability of outcome measures (measurement bias) | Low risk | Comment: pre‐existing scales were used to measure outcomes. Authors relied on previous studies for reliability data for the Teacher Knowledge Scale (α = 0.84) and Teacher Opinion Scale (α = 0.78) (the study authors attribute these data to Hazzard 1988, but tracing showed this to be an error; the reliability data originate from Kleemeier 1988). No reliability data were reported for the Teacher Prevention Behaviour Measure (Randolph 1994, pp 488‐9). |
Group comparability (selection bias) | Unclear risk |
Comment: information on the comparability of groups at baseline was not provided in sufficient detail for each outcome measure to enable assessment of equivalence. Authors report group equivalence based on demographic variables, but no data were reported to support this statement. Quote: "the MANCOVA revealed no significant differences between the treatment and the control subjects in terms of sex, age, years teaching, marital status, race, degree held or previous experience in the area of child sexual abuse" (Randolph 1994, p 491) |
Contamination (contamination bias) | Unclear risk |
Comment: authors report simple measures to prevent contamination; however, insufficient detail was reported to enable assessment of contamination between groups (e.g. number of intervention and control group participants per school) Quote: "as part of their participation, all volunteers were asked not to discuss the information presented in the training session with teachers in the other group as it could affect the outcome of the study" (Randolph 1994, p 490) |
Smeekens 2011.
Study characteristics | ||
Methods | Study design: randomised controlled trial Unit of allocation: participant Unit of analysis: participant Adjustment for clustering: not applicable | |
Participants | Location: Utrecht, the Netherlands Setting: online module that participants could complete at the hospital (a university medical centre) or at home Sample size calculation: not reported Sample size: 38 emergency department nurses with permanent contracts; intervention group n = 19, control group n = 19 (Smeekens 2011, p 332) Mean age (SD): (i) intervention group = 41 years (SD = 9), (ii) control group = 41 years (SD = 11) (Smeekens 2011, p 332) Gender: (i) intervention group = 78.9% women, (ii) control group = 78.9% women (Smeekens 2011, p 332) Race/ethnicity: not reported Previous child protection training: not reported Years of experience: (i) intervention group = 9 years, (ii) control group = 9 years (Smeekens 2011, p 332) Previous experience with child maltreatment reporting: not reported Baseline equivalence: no significant differences in participant characteristics (Smeekens 2011, p 332) | |
Interventions | Name: Next Page (e‐learning module) Contents: 3 modules on child abuse: (i) recognition, (ii) acting, and (iii) communication Processes and teaching methods: online modules including: (i) simulations of clinical cases, (ii) video animations, and (iii) interactive elements Delivery mode: online Trainers and qualifications: module was hosted by a not‐for‐profit organisation called Augeo, whose website says interventions were designed in consultation with government, professional associations, and the International Society for Prevention of Child Abuse and Neglect (ISPCAN) Duration: 2 hours Intensity: participants complete the module in a minimum of 2 hours over a specified 2‐week window Intervention integrity: not applicable Comparison condition: no training | |
Outcomes | Eligible measures (outcome domain): performance in simulated cases (primary outcome: number of reported cases of child abuse and neglect, measured subjectively by participant responses to vignettes), comprising 8 cases based on real clinical cases with in vivo video‐recorded assessment and evaluated by an "expert panel of three paediatricians experienced in the recognition of child abuse" (Smeekens 2011, p 331) Ineligible measures (reason): self‐efficacy (self‐reported child abuse detection self‐efficacy; not prespecified in the protocol for this review), comprising 6 items corresponding to each of the 6 steps in the SPUTOVAMO‐R checklist, assessed on a 0‐to‐800‐millimetre visual analogue response scale Timing of outcome assessment: pre‐test (6 to 8 months before training), post‐test (2 weeks after training) (Smeekens 2011, p 331) | |
Notes | Funding: Augeo Foundation Author contact: no | |
Risk of bias | ||
Bias | Authors' judgement | Support for judgement |
Random sequence generation (selection bias) | Low risk | Comment: adequate description of the generation of the randomised sequence. Quote: "participants were allocated to an intervention or a control group using a computer‐generated randomisation list created by an independent statistician" (Smeekens 2011, p 331)
Allocation concealment (selection bias) | Unclear risk | Comment: authors state that the trial was blinded, but it is unclear whether blinding refers to allocation concealment up until the point of assignment. The study protocol cannot be used to verify this, as it has a different design to the trial reported in the journal article (see ClinicalTrials.gov NCT00844571). Quote: "design: blinded, randomised controlled trial" (Smeekens 2011, p 330)
Blinding of participants and personnel (performance bias) All outcomes | High risk | Comment: authors state that participants and personnel were not blinded, and participants would have been able to identify whether or not they were taking part in the intervention (self‐administered). Due to the subjective nature of the measures (pre‐post self‐report), performance bias is possible without blinding. Quote: "owing to the nature of the trial it was not possible to blind the participants and the head researcher to randomisation" (Smeekens 2011, p 331)
Blinding of outcome assessment (detection bias) All outcomes | Low risk | Comment: outcome assessors were blinded to participant group allocation, and assessed participant performance in simulations using a standardised assessment form. Quote: "an expert panel of three paediatricians experienced in the recognition of child abuse, who were blinded to the allocation, evaluated the recorded performance ... . The case‐simulations were recorded on video and, after the completion of pre‐ and post‐test, the blinded expert panel scored the performance using a standardised assessment form which was designed to score quantity and quality of the questions posed by the nurse" (Smeekens 2011, pp 331–2)
Incomplete outcome data (attrition bias) All outcomes | High risk | Comment: 6/19 experimental and 7/19 control group participants were lost to follow‐up and not included in the analyses. Reasons for attrition were reported (p 332): (a) "not scheduled to work in measurement period" (5 experimental; 6 control); (b) "unfinished e‐learning" (1 experimental); and (c) "participant left the role" (1 control). Authors performed additional analyses to examine the impact of attrition, but did not report data to support their statement that the results were unchanged. Quote: "to account for loss to follow‐up, both an intention‐to‐treat analysis with the pretest score carried forward and a multiple imputation analysis were performed. As the results were not essentially altered by these analyses we decided to present the analysis of the participants who performed the post‐test" (Smeekens 2011, p 332)
Selective reporting (reporting bias) | Low risk | Comment: study protocol is available, and outcomes align with those reported in the journal article. Self‐efficacy is assigned as a primary outcome in the study protocol, but is reported as a secondary outcome in the journal article. |
Other bias | Unclear risk | Comment: timing of the post‐measure for the control and experimental groups may have differed, with the control group completing their post‐measure a short time (~2 weeks) prior to the experimental group. This poses a potential threat to internal validity. Also, additional potential sources of bias related to the specific study design have been identified. |
Reliability of outcome measures (measurement bias) | Unclear risk | Comment: internal consistency may not be an appropriate reliability statistic for the simulation outcome; however, the authors report interrater reliability for the outcome assessors. No reliability data were reported for the self‐efficacy measure (not eligible for this review). Quote: "the inter‐rater reliability for the three experts during post‐test was found to be 0.70 (95% CI 0.51 to 0.84, p value 0.000), which can be considered good" (Smeekens 2011, p 333)
Group comparability (selection bias) | Unclear risk | Comment: authors provide table of data comparing experimental and control participants on demographic and outcomes at baseline (no significant differences). However, the data are for the randomised participants (n = 38) and not the analysed participants (n = 25), which does not permit examination of whether analysed participants were comparable at baseline. |
Contamination (contamination bias) | Unclear risk | Comment: unclear whether control and experimental participants worked in the same settings, thereby potentially leading to contamination |
α: alpha; CPS: Child Protection Services; M: mean; NCATS: National Clinical Assessment and Treatment Service; NIH: US National Institutes of Health; Q&A: questions and answers; REDCap: research electronic data capture; SD: standard deviation; SPUTOVAMO‐R: a 9‐item Dutch checklist in which each letter in the acronym refers to 1 question in the checklist.
Characteristics of excluded studies [ordered by study ID]
Study | Reason for exclusion |
---|---
Al‐Dabaan 2016 | Ineligible research design (uncontrolled, repeated cross‐sectional design) |
Cerezo 2004 | Ineligible research design (interrupted time series) |
Flemington 2017 | Ineligible research design (uncontrolled, repeated cross‐sectional design) |
Hawkins 2001a | Ineligible research design (3‐sample independent groups design) |
Hawkins 2001b | Ineligible research design (3‐sample independent groups design) |
Humphreys 2021 | Ineligible research design (study compared 2 training programmes head‐to‐head) |
König 2020 | Ineligible intervention (training programme focused on institutional safeguarding rather than mandatory reporting) |
Lee 2017 | Ineligible outcomes (outcomes assessed were efficacy expectations and outcomes expectations for child abuse and neglect reporting) |
Letourneau 2016 | Ineligible research design (interrupted time series) |
Levi 2021 | No impact evaluation reported (paper described key features of the iLookOut training programme and its instructional innovations) |
Menick 2005 | Ineligible research design (uncontrolled, repeated cross‐sectional design) |
Morris 1982 | Ineligible research design (uncontrolled post‐test only or single administration, cross‐sectional design) |
NCT03758794 | Ineligible comparison (study compared 2 training approaches head‐to‐head: online and face‐to‐face) |
Paek 2019 | Ineligible research design (uncontrolled, repeated cross‐sectional design) |
Rheingold 2012 | Ineligible comparison (study compared 2 training approaches head‐to‐head: online and face‐to‐face) |
Rheingold 2015 | Ineligible comparison (study compared 2 training approaches head‐to‐head: online and face‐to‐face) |
Socolar 1998 | Ineligible intervention (training programme focused on documentation of child sexual abuse examinations rather than child protection training to improve reporting of child abuse and neglect) |
Sullivan 1990 | Ineligible research design (uncontrolled, repeated cross‐sectional design) |
Volpe 1981 | Ineligible research design (uncontrolled, repeated cross‐sectional design) |
Yang 2020 | Ineligible research design (single group, pre‐ and post‐test design) |
Characteristics of studies awaiting classification [ordered by study ID]
De Faria Brino 2003.
Methods | Study design: unclear, but likely controlled before‐and‐after study Unit of allocation: unclear Unit of analysis: unclear Adjustment for clustering: unclear |
Participants | Participants: teachers Location: Brazil Setting: Municipal Department of Education Sample size calculation: not reported Sample size: 11 teachers; intervention group n = 5, control group n = 6 (p 1) Mean age (SD): intervention group = 31 to 50 years, control group = 32 to 60 years Gender: not reported Race/ethnicity: not reported Years of experience: not reported Previous child protection training: not reported Baseline equivalence: not reported |
Interventions | Name: not reported Contents: (i) definitions of child sexual abuse; (ii) beliefs (myths and realities); (iii) causes and consequences (effects) of sexual abuse; (iv) legal aspects of sexual abuse; (v) duties of the professional in these cases; (vi) referral and treatment of the sexually abused child Processes and teaching methods: (i) oral presentations on the topic; (ii) group discussions; (iii) role‐play; (iv) case studies; (v) film presentations and videos; (vi) general guidelines on sexual abuse and on legislation; (vii) suggestions for readings relevant to the topic; and (viii) space for questions and comments Delivery mode: training session plus workshops conducted as fortnightly meetings Trainers and qualifications: unclear Duration: approximately 2 months Intensity: 4 × 3‐hour sessions conducted once per fortnight Intervention integrity: not reported Comparison condition: not reported |
Outcomes | |
Notes | Funding: not reported Author contact: authors contacted via email 24 April 2019, but no response received (Walsh 2019a [pers comm]) |
Herrera 1993.
Methods | Study design: unclear Unit of allocation: unclear Unit of analysis: unclear Adjustment for clustering: unclear |
Participants | Participants: elementary school teachers Location: USA Setting: unclear Sample size calculation: unclear Sample size: unclear Mean age (SD): unclear Gender: unclear Race/ethnicity: unclear Years of experience: unclear Previous child protection training: unclear Baseline equivalence: unclear |
Interventions | Name: 3‐hour training workshop for education on child sexual abuse Contents: unclear Processes and teaching methods: unclear Delivery mode: unclear Trainers and qualifications: unclear Duration: unclear Intensity: unclear Intervention integrity: unclear Comparison condition: unclear |
Outcomes | |
Notes | Funding: not reported Author contact: this is a student thesis. The abstract is very brief, and we were unable to obtain the full text. We tracked down the following related paper, but have not been able to obtain the full text, and attempts to locate the study authors have been unsuccessful: Herrera M, Carey KT. Child sexual abuse: issues and strategies for school psychologists. School Psychology International 1993;14(1): 69‐81. DOI: 10.1177/0143034393141005. |
Peker 2020.
Methods | Study design: unclear, but possibly quasi‐experimental as part of a larger mixed‐methods study Unit of allocation: unclear Unit of analysis: unclear Adjustment for clustering: unclear |
Participants | Participants: teachers Location: Erzurum, Turkey Setting: Erzurum Provincial Directorate of National Education (p 76) Sample size calculation: not reported Sample size: unclear, but possibly 16 school counsellors (p 76); intervention group n = 8, control group n = 8 (p 76) Mean age (SD): not reported Gender: not reported Race/ethnicity: not reported Years of experience: not reported Previous child protection training: not reported Baseline equivalence: not reported |
Interventions | Name: psycho‐education programme for sexual abuse Contents: (i) risk factors, types of abuse, sexual abuse definition and indicators; (ii) types of sexual abuse, aetiology, effects; (iii) risk factors and developmental consequences; (iv) school administrators' duties and responsibilities; (v) parent responses; (vi) school counsellor responses; (vii) actions; (viii) interventions (cognitive behavioural therapy); (ix) interventions (other); (x) evaluation (p 77, Table 1) Processes and teaching methods: not reported Delivery mode: group format delivered online via video conference using Zoom Trainers and qualifications: not reported Duration: 5 weeks Intensity: 70‐minute sessions conducted twice per week for 5 weeks Intervention integrity: not reported Comparison condition: not reported |
Outcomes | |
Notes | Funding: not reported Author contact: authors contacted via email 16 August 2021 with request to provide missing information or citations for dependent studies, or both (Walsh 2021 [pers comm]) |
SD: standard deviation
Characteristics of ongoing studies [ordered by study ID]
IRCT2015042713748N3.
Study name | Public title: Intention to report child abuse Scientific title: Nurses' intention to report child abuse |
Methods | Study design: controlled before‐and‐after study |
Participants | Participants: nurses Setting: emergency and paediatric departments Country: Iran Inclusion criteria: voluntary participation; minimum of 6 months professional experience Exclusion criteria: unwillingness to continue participation in the study Recruitment status: recruitment complete (date of last update was not reported) |
Interventions | Intervention group: education programme (child abuse types; risk factors for child abuse; signs and symptoms of child abuse; reporting method for child abuse; resources for child support) Control group: no education programme |
Outcomes | Timing of outcome assessment: before the intervention and 4 weeks after the intervention |
Starting date | Expected starting date: 21 May 2015 Expected end date: 21 June 2015 (no further update available) |
Contact information | s‐khanjari@tums.ac.ir |
Notes | Trial registration number: IRCT2015042713748N3 Funding: Tehran University of Medical Sciences and Center for Nursing Care Research of Iran University of Medical Sciences Comments: author contacted by email from KW on 9 May 2019, but no response received (Walsh 2019b [pers comm]) |
NCT03185728.
Study name | Public title: iLookOut for child abuse: an innovative learning module for childcare providers (iLookOut) Scientific title: iLookOut for child abuse: an innovative learning module for childcare providers |
Methods | Study design: 3‐arm randomised controlled trial, with a stepped wedge design |
Participants | Participants: childcare providers Setting: childcare sites Country: Maine, USA Inclusion criteria: (i) works or volunteers at a childcare facility in Maine (i.e. home‐based childcare, childcare centre, Head Start facility, nursery school, preschool); (ii) 18 years of age or older Exclusion criteria: (i) does not work or volunteer at a childcare facility in Maine; (ii) under 18 years of age Recruitment status: ongoing (last updated 11 June 2021) |
Interventions | Intervention group 1: iLookOut for child abuse online interactive e‐learning module (video‐based story‐line with follow‐up activities) Control group 1: standard online mandated reporter training Control group 2: waitlist for programmes |
Outcomes | Timing of outcome assessment: not reported |
Starting date | Expected starting date: 3 October 2017 Expected end date: 31 July 2022 |
Contact information | blevi@pennstatehealth.psu.edu |
Notes | Trial registration number: NCT03185728 Funding: National Institute of Child Health and Human Development (ID: 1R01HD088448‐01) Comments: author contacted by email from KW; response received 21 December 2019 (Levi 2019 [pers comm])
Differences between protocol and review
Methods
Types of outcome measures
We included studies assessing the primary and secondary outcomes listed in our review protocol; however, in practice, when conducting the review, we also excluded studies that did not set out to measure any of these outcomes. This exclusion criterion was not made explicit in the review protocol.
We found the potential for overlap between primary outcome 1b 'number of reported cases of child abuse and neglect as measured subjectively by participant responses to vignettes’ and secondary outcome 3 ‘skill in distinguishing cases that should be reported from those that should not’. We resolved this by conceptualising ‘skill’ outcomes as measurable via in vivo assessment.
We found that secondary outcome 2 ‘knowledge of core concepts in child abuse and neglect’ comprised two outcomes: 2a 'knowledge of core concepts in child abuse and neglect (general)' and 2b 'knowledge of core concepts in child sexual abuse (specific)', and we therefore treated it as two separate outcomes.
After conducting the review, we concluded that measuring changes in primary outcome 1c 'number of reported cases of child abuse and neglect as measured objectively in official records of reports made to child protection authorities’ may be unfeasible in trials of training interventions and should be removed from the list of outcome measures in future review updates.
Search methods
Sociological Abstracts includes Social Services Abstracts as a companion file in ProQuest; we therefore searched these databases simultaneously.
We did not carry out the following planned searches.
Social Policy and Practice (Ovid), as this platform was no longer available at our institutions.
Database of Abstracts of Reviews of Effects (DARE), which included non‐Cochrane systematic reviews, as it has not existed as a standalone database since 2015. Instead, we searched CENTRAL via the Cochrane Library, which we anticipated would capture DARE records.
ClinicalTrials.gov or the Australian New Zealand Clinical Trials Registry (ANZCTR), as these records were included in the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). However, in future review updates it will be necessary to search all trial registries separately, as search functionalities may change over time and retrieve different results.
We searched legal databases, including Lexis, LegalTrac, and Westlaw International only to 19 December 2018. These databases were not available during the top‐up searches in July 2021.
We searched OpenGrey only to 27 May 2019, as it was shut down and archived in March 2021.
We added Education Source EBSCOhost (1880 to July 2021) to the database search list because it indexes more education journals than any other database, and child protection training is an educational intervention.
We searched Promising Practices Network and Coalition for Evidence‐Based Policy to 21 March 2019. These websites were archived in 2014 and 2015, respectively.
We planned to contact key researchers in the field for unpublished studies. Instead, we circulated requests for relevant studies via email to the Child‐Maltreatment‐Research‐Listserv, which is managed by the US National Data Archive on Child Abuse and Neglect (see Walsh 2018 [pers comm] and www.ndacan.acf.hhs.gov/cmrl/cmrl-description.cfm). The listserv has over 1500 members.
Assessment of risk of bias in included studies
Since the publication of our protocol, the Risk Of Bias In Non‐randomized Studies of Interventions (ROBINS‐I) tool has been developed (Sterne 2016), and additional guidance has been provided in the Cochrane Handbook for Systematic Reviews of Interventions (Sterne 2022). Congruent with our review protocol (Mathews 2015), we used all seven assessment domains specified in the original Cochrane risk of bias tool (Higgins 2011, Table 8.5a), with three additional domains corresponding with the 'Suggested risk of bias criteria for EPOC reviews' (Cochrane Effective Practice and Organisation of Care) (EPOC 2017).
Dealing with missing data
For continuous data, if data to calculate effect sizes were not available in study reports or from study authors, we planned to calculate missing standard deviations (SDs) from other test statistics (e.g. t values, F values). In cases where SDs were unavailable and could not be calculated from other test statistics, we planned to impute an average SD from other included studies, as this method has been found to produce approximately correct results (Deeks 2022, Section 10.12.2, Box 10.12a), and then to assess in a sensitivity analysis the extent to which this imputation altered the results. However, because many of the studies were dated, and study authors could not be contacted or could not locate relevant data, we instead used David B Wilson's suite of effect size calculators to calculate effect sizes. These were entered directly into RevMan Web (RevMan Web 2021), and meta‐analyses were conducted using the generic inverse variance method.
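To illustrate the kind of computation these calculators perform, the sketch below converts a reported independent‐samples t value into an SMD (Hedges' g) and its standard error using the standard conversion formulas (as given in Borenstein 2009); the resulting effect size and standard error are the two values the generic inverse variance method in RevMan Web expects. This is a minimal, hypothetical Python sketch for illustration only, not the calculator we used, and the t value and group sizes in the example are invented.

```python
import math


def smd_from_t(t: float, n1: int, n2: int) -> tuple[float, float]:
    """Convert an independent-samples t statistic into a standardised mean
    difference (Hedges' g) and its standard error, suitable for entry as
    generic inverse-variance data (standard formulas; see Borenstein 2009)."""
    # Cohen's d recovered from the t statistic and the two group sizes
    d = t * math.sqrt(1 / n1 + 1 / n2)
    # Variance of d for two independent groups
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    # Hedges' small-sample correction factor J, with df = n1 + n2 - 2
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    g = j * d
    se_g = math.sqrt(j ** 2 * var_d)
    return g, se_g


# Hypothetical example: t = 2.9 reported for groups of 19 and 19 participants
g, se = smd_from_t(2.9, 19, 19)
print(f"SMD (Hedges' g) = {g:.2f}, SE = {se:.2f}")
```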
Data synthesis
If only one study had available data to calculate an effect size for a given outcome, we calculated and reported a single standardised mean difference (SMD) with 95% confidence intervals. This is not standard practice, as a mean difference (MD) would normally be presented for a single study. However, we considered how useful and practical an MD expressed on one of many different Likert scales would be to readers: to make an MD meaningful, we would need to explain what different levels of each measure represented before the effect could be interpreted. For this review topic, and because each such result is a single computation presented alongside several SMD estimates, we reasoned that it would be more useful and practical to present the results as SMDs so that readers could assess whether there were meaningful differences between groups. For ease of reference, we linked readers to the analyses so that they can view the raw means and the between‐group differences.
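For readers unfamiliar with the SMD, the standard definition below (a reminder of the conventional formula rather than anything specific to this review) shows why the estimate does not depend on the original response scale:

$$\mathrm{SMD} = \frac{\bar{X}_1 - \bar{X}_2}{SD_{\mathrm{pooled}}}, \qquad SD_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,SD_1^2 + (n_2 - 1)\,SD_2^2}{n_1 + n_2 - 2}},$$

where subscripts 1 and 2 denote the intervention and comparison groups. Because the between‐group difference is divided by the pooled standard deviation, the result is expressed in standard deviation units rather than in the units of any particular Likert scale, which allows a single‐study estimate to be read alongside the meta‐analysed SMDs.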
Summary of findings table and assessment of the certainty of the evidence
In line with current guidance, we amended the GRADE criteria to refer to ‘certainty’ rather than ‘quality’ of the evidence.
In our review protocol, we did not nominate specific criteria for downgrading the certainty of evidence. We specify these criteria in the Methods and in the footnotes to the summary of findings tables (Table 1; Table 2).
In our review protocol, we did not provide a rationale for prioritising the most clinically important outcomes for presentation in the summary of findings table. Our rationale for including two summary of findings tables (one for primary outcomes, including adverse events, and one for secondary outcomes) is further explained in the Implications for research section. As we discuss there, we are leaning towards the view that some of the primary outcomes nominated in our review protocol may be impossible (and possibly undesirable) to measure, for example primary outcome 1c 'number of reported cases of child abuse and neglect as measured objectively in official records of reports made to child protection authorities'. Assessment of this outcome is likely to be impossible in the context of a training intervention evaluation, as it would require longitudinal assessment with a different research design. This does not mean that the outcome is not clinically relevant to the field; rather, it may not be feasible to measure it in an evaluation study.
Other preplanned methods
We were not able to use all of our preplanned methods (Mathews 2015). These have been archived in Appendix 2 for use in future updates of this review.
Contributions of authors
Kerryann Walsh (KW): conceived and designed the project (with BM); protocol writing; screening and study selection; data extraction; risk of bias assessment; review writing (Background, Methods, 'Characteristics of included studies' tables, Results, Discussion, Authors' conclusions); data entry into RevMan Web; GRADE assessment; and overall responsibility for the review and guarantor for the review. KW did not assess the eligibility, extract data, assess risk of bias, or grade certainty of the evidence for the Mathews 2017 study included in this review (see Declarations of interest below).
Elizabeth Eggins (EE): review project management; SysReview technical expertise, training, and support; conducted database searches; troubleshot database searches; screening and study selection; data extraction; risk of bias assessment; statistical data entry; statistical analysis; review writing (Methods, PRISMA flowchart, text for Results); and GRADE assessment.
Lorelei Hine (LH): SysReview data entry; maintenance of review audit file; handsearching and grey literature searching; ordering of records; screening and study selection; data extraction; risk of bias assessment; and entering data into RevMan Web.
Ben Mathews (BM): conceived and designed the project (with KW); lead author for the study protocol; screening and study selection; and review writing (text for Background, Discussion, Authors' conclusions). BM did not assess the eligibility, extract data, assess risk of bias, or grade the certainty of evidence for the Mathews 2017 study included in this review (see Declarations of interest below).
Maureen C Kenny (MK): protocol writing (Background rationale, study selection); mediation on study eligibility (excluding Alvarez 2010; Mathews 2017, see Declarations of interest below); and review writing (text for Background, 'Characteristics of included studies' tables, Discussion, Authors' conclusions). MK did not assess the eligibility, extract data, assess risk of bias, or grade certainty of the evidence for the Mathews 2017 study included in this review (see Declarations of interest below).
Sarah Howard (SH): initial database searches and drafting of search methods.
Natasha Ayling (NA): screening and study selection.
Elizabeth Dallaston (ED): screening and study selection.
Elizabeth Pink (formerly Wallace) (EP): screening and study selection.
Dimitrios Vagenas (DV): protocol writing (measures of treatment effect, unit of analysis issues, dealing with missing data, assessment of heterogeneity, data synthesis) and statistical analysis (expert statistical support, troubleshooting, interpretation).
All authors read and approved the final version of the review before submission.
Sources of support
Internal sources
-
Queensland University of Technology, Australia
In‐kind support for this review was provided in the form of salaries and resources for KW, BM, SH, and DV.
External sources
There were no external sources of support for this review.
Declarations of interest
Kerryann Walsh (KW): has declared that she was not a co‐author on any study included in this review. However, she has previously co‐authored studies with an author who is a co‐author on a study included in this review (Mathews 2017), which was funded by Penn State University (the researchers had complete control over the study designs, methods, data analysis, and interpretation of data).
Elizabeth Eggins (EE): has declared that she is an Editor with the Campbell Collaboration Crime and Justice Group, a position supported with funding from the Campbell Collaboration Secretariat. EE has been a co‐author on several Campbell and industry‐funded reviews related to child welfare, none of which are directly related to this review.
Lorelei Hine (LH): has declared that she has no conflicts of interest.
Ben Mathews (BM): has declared that he was a co‐author and investigator (but not lead investigator) on one of the studies included in this review (Mathews 2017), funded by Penn State University. The researchers of this study had complete control over the study design, methods, data analysis, and interpretation.
Maureen C Kenny (MK): has declared that she was not a co‐author on any study included in this review, but that she has previously co‐authored studies with the authors of two studies included in this review: Alvarez 2010, partially funded by the National Institute of Drug Abuse, US National Institutes of Health, and Mathews 2017, funded by Penn State University; to MK's knowledge, the researchers of both studies had complete control over the study design, methods, data analysis, and interpretation of data.
Sarah Howard: has declared that she has no conflicts of interest.
Natasha Ayling: has declared that she has no conflicts of interest.
Elizabeth Dallaston: has declared that she has no conflicts of interest.
Elizabeth Pink: has declared that she has no conflicts of interest.
Dimitrios Vagenas: has declared that he has no conflicts of interest.
References
References to studies included in this review
Alvarez 2010 {published data only (unpublished sought but not used)}
- Alvarez KM, Donohue B, Carpenter A, Romero V, Allen DN, Cross C. Development and preliminary evaluation of a training method to assist professionals in reporting suspected child maltreatment. Child Maltreatment 2010;15(3):211-8. [DOI: 10.1177/1077559510365535] [PMCID: PMC3489268] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alvarez KM. The Development and Evaluation of a Child Maltreatment Reporting Training Program for Mandated Mental Health Professionals [PhD thesis]. Las Vegas (NV): University of Nevada, 2008. [Google Scholar]
Dubowitz 1991 {published data only}
- Dubowitz H, Black M. Teaching pediatric residents about child maltreatment. Journal of Developmental and Behavioral Pediatrics 1991;12(5):305-7. [PMID: ] [PubMed] [Google Scholar]
Hazzard 1984 {published data only}
- Hazzard A, Rupp G. Training teachers to identify and intervene with abused children. American Psychologist 1983;39(6):604-38. [ERIC NUMBER: ED240476] [Google Scholar]
- Hazzard A. Training teachers to identify and intervene with abused children. Journal of Clinical Child Psychology 1984;13(3):288-93. [DOI: 10.1080/15374418409533204] [DOI] [Google Scholar]
Jacobsen 1993 {published data only}
- Jacobsen DR. Evaluation of a Teacher Training Program for the Identification, Intervention, and Prevention of Child Sexual Abuse [MS thesis]. Fresno (CA): California State University, 1993. [Google Scholar]
Kim 2019 {published data only (unpublished sought but not used)}
- Kim S, Nickerson A, Livingston JA, Dudley M, Manges M, Tulledge J, et al. Teacher outcomes from the Second Step Child Protection Unit: moderating roles of prior preparedness, and treatment acceptability. Journal of Child Sexual Abuse 2019;28(6):726-44. [DOI: 10.1080/10538712.2019.1620397] [PMID: ] [DOI] [PubMed] [Google Scholar]
Kleemeier 1988 {published data only}
- Kleemeier C, Webb C, Hazzard A, Pohl J. Child sexual abuse prevention: evaluation of a teacher training model. Child Abuse & Neglect 1988;12(4):555-61. [DOI: 10.1016/0145-2134(88)90072-5] [PMID: ] [DOI] [PubMed] [Google Scholar]
Mathews 2017 {published data only (unpublished sought but not used)}
- Mathews B, Yang C, Lehman EB, Mincemoyer C, Verdiglione N, Levi BH. Educating early childhood care and education providers to improve knowledge and attitudes about reporting child maltreatment: a randomized controlled trial. PLOS ONE 2017;12(5):e0177777. [DOI: 10.1371/journal.pone.0177777] [PMCID: PMC5438118] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
- NCT02225301. iLookOut for child abuse: online learning module for early childcare providers (iLookOut) [Early childhood practitioner's knowledge and attitudes regarding reporting child abuse/neglect - iLookOut]. clinicaltrials.gov/ct2/show/NCT02225301 (first received 22 August 2014).
McGrath 1987 {published data only}
- McGrath P, Capelli M, Wiseman D, Khalil N, Allan B. Teacher awareness program on child abuse: a randomized controlled trial. Child Abuse & Neglect 1987;11(1):125-32. [DOI: 10.1016/0145-2134(87)90041-x] [PMID: ] [DOI] [PubMed] [Google Scholar]
Palusci 1995 {published data only}
- McHugh MT, Palusci VJ, Moraitis C. An interdisciplinary approach to physician training in the evaluation of child sexual abuse. In: 10th National Conference of the National Center on Child Abuse and Neglect; 1993 Dec 3; Pittsburgh (PA). 1993.
- Palusci VJ, McHugh M. Interdisciplinary training in the evaluation of child sexual abuse. In: Fifth International Congress on Child Abuse and Neglect; 1984 Sep 16–19; Montreal, Canada. 1984. [DOI] [PubMed]
- Palusci VJ, McHugh MT. Interdisciplinary training in the evaluation of child sexual abuse. Child Abuse & Neglect 1995;19(9):1031-8. [DOI: 10.1016/0145-2134(95)00065-g] [PMID: ] [DOI] [PubMed] [Google Scholar]
Randolph 1994 {published data only}
- Randolph MK, Gold CA. Child sexual abuse prevention: evaluation of a teacher training program. School Psychology Review 1994;23(3):485-95. [DOI: 10.1080/02796015.1994.12085727] [DOI] [Google Scholar]
Smeekens 2011 {published data only}
- NCT00844571. Effect e-learning program about child maltreatment [The effectiveness of an e-learning program about child maltreatment for nurses on an A&E department: a randomised controlled trial]. clinicaltrials.gov/ct2/show/NCT00844571 (first received 12 February 2009). [SYSREVIEW: 27300]
- Smeekens AE, Broekhuijsen-van Henten DM, Sittig JS, Russel IM, Ten Cate OT, Turner NM, et al. Successful e-learning programme on the detection of child abuse in emergency departments: a randomized controlled trial. Archives of Disease in Childhood 2011;96(4):330-4. [DOI: 10.1136/adc.2010.190801] [PMID: ] [DOI] [PubMed] [Google Scholar]
References to studies excluded from this review
Al‐Dabaan 2016 {published data only}
- Al-Dabaan R, Asimakopoulou K, Newton JT. Effectiveness of a web-based child protection training programme designed for dental practitioners in Saudi Arabia: a pre- and post-test study. European Journal of Dental Education 2016;20(1):45-54. [DOI: 10.1111/eje.12141] [DOI] [PubMed] [Google Scholar]
Cerezo 2004 {published data only}
- Cerezo MA, Pons-Salvador G. Improving child maltreatment detection systems: a large-scale case study involving health, social services, and school professionals. Child Abuse & Neglect 2004;28(11):1153-69. [DOI: 10.1016/j.chiabu.2004.06.007] [PMID: ] [DOI] [PubMed] [Google Scholar]
Flemington 2017 {published data only}
- Flemington T, Fraser J. Building workforce capacity to detect and respond to child abuse and neglect cases: a training intervention for staff working in emergency settings in Vietnam. International Emergency Nursing 2017;34:29-35. [DOI: 10.1016/j.ienj.2017.03.004] [DOI] [PubMed] [Google Scholar]
Hawkins 2001a {published data only}
- Hawkins R, McCallum C. Mandatory notification training for suspected child abuse and neglect in South Australian schools. Child Abuse & Neglect 2001;25(12):1603-25. [DOI: 10.1016/S0145-2134(01)00296-4] [PMID: ] [DOI] [PubMed] [Google Scholar]
Hawkins 2001b {published data only}
- Hawkins R, McCallum C. Effects of mandatory notification training on the tendency to report hypothetical cases of child abuse and neglect. Child Abuse Review 2001;10(5):301-22. [DOI: 10.1002/car.699] [DOI] [Google Scholar]
Humphreys 2021 {published data only}
- Humphreys KL, Piersiak HA, Panlilio CC, Lehman EB, Verdiglione N, Dore S, et al. A randomized control trial of a child abuse mandated reporter training: knowledge and attitudes. Child Abuse & Neglect 2021;117:105033. [DOI: 10.1016/j.chiabu.2021.105033] [PMCID: PMC8360385 (available on 2022-07-01)] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
König 2020 {published data only}
- König E, Maier A, Fegert JM, Hoffman U. Development and randomized controlled trial evaluation of E-learning trainings for professionals. Archives of Public Health 2020;78(1):122. [DOI: 10.1186/s13690-020-00465-4] [PMCID: PMC7680992] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lee 2017 {published data only}
- Lee P-Y, Fan-Hao C. A training programme for Taiwan nurses to improve child abuse reporting. Journal of Clinical Nursing 2017;26(15-16):2297-306. [DOI: 10.1111/jocn.13447] [PMID: ] [DOI] [PubMed] [Google Scholar]
Letourneau 2016 {published data only}
- Letourneau EJ, Nietert PJ, Rheingold AA. Initial assessment of Stewards of Children program effects on child sexual abuse reporting rates in selected South Carolina counties. Child Maltreatment 2016;21(1):74-9. [DOI: 10.1177/1077559515615232] [PMCID: PMC4870719] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Levi 2021 {published data only}
- Levi BH, Belser A, Kapp K, Verdiglione N, Mincemoyer C, Dore S, et al. ILookOut for child abuse: conceptual and practical considerations in creating an online learning programme to engage learners and promote behaviour change. Early Child Development and Care 2021;191(4):535-44. [DOI: 10.1080/03004430.2019.1626374] [PMCID: PMC8258631 (available on 2022-01-01)] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Menick 2005 {published data only}
- Menick DM, Ngoh F. Child abuse in Cameroon: evaluation of a training course on awareness, detection, and reporting of child abuse [Violences à caractère éducatif au Cameroun évaluation d’un séminaire de formation à la reconnaissance, à la détection et au signalement des sévices physiques infligés aux enfants]. Médecine Tropicale: Revue du Corps de Santé Colonial [Tropical Medicine: Journal of the Colonial Health Service] 2005;65(1):33-8. [PMID: ] [PubMed] [Google Scholar]
Morris 1982 {published data only}
- Morris JH Jr. Child Abuse Teacher Inservice Training. Washington, DC: US Department of Education, 1982. [ERIC: ED258772] [Google Scholar]
NCT03758794 {published data only}
- NCT03758794. Knowledge and attitude of dental interns on child abuse [Knowledge and attitude of dental interns on child abuse after implementation of two educational modules: a randomized control trial]. clinicaltrials.gov/ct2/show/NCT03758794 (first received 25 November 2018).
Paek 2019 {published data only}
- Paek SH, Kwak YH, Noh H, Jung JH. A survey on the perception and attitude change of first-line healthcare providers after child abuse education in South Korea: a pilot study. Medicine 2019;98(2):e14085. [DOI: 10.1097/MD.0000000000014085] [PMCID: PMC6336610] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Rheingold 2012 {published data only}
- Rheingold AA, Zajac K, Patton M. Feasibility and acceptability of a child sexual abuse prevention program for childcare professionals: comparison of a web-based and in-person training. Journal of Child Sexual Abuse 2012;21(4):422-36. [DOI: 10.1080/10538712.2012.675422] [PMID: ] [DOI] [PubMed] [Google Scholar]
Rheingold 2015 {published data only}
- Rheingold AA, Zajac K, Chapman JE, Patton M, De Arellano M, Saunders B, et al. Child sexual abuse prevention training for childcare professionals: an independent multi-site randomized controlled trial of Stewards of Children. Prevention Science 2015;16(3):374-85. [DOI: 10.1007/s11121-014-0499-6] [PMID: ] [DOI] [PubMed] [Google Scholar]
Socolar 1998 {published data only}
- Socolar RR, Raines B, Chen-Mok M, Runyan DK, Green C, Paterno S. Intervention to improve physician documentation and knowledge of child sexual abuse: a randomized, controlled trial. Pediatrics 1998;101(5):817-24. [DOI: 10.1542/peds.101.5.817] [PMID: ] [DOI] [PubMed] [Google Scholar]
Sullivan 1990 {published data only}
- Sullivan R, Clancy T. An experimental evaluation of interdisciplinary training in intervention with sexually abused adolescents. Health & Social Work 1990;15(3):207-14. [DOI: 10.1093/hsw/15.3.207] [PMID: ] [DOI] [PubMed] [Google Scholar]
Volpe 1981 {published data only}
- Volpe R. The development and evaluation of a training program for school-based professionals dealing with child abuse: the University of Toronto interfaculty child abuse prevention project 1978-1979. Child Abuse & Neglect 1981;5(2):103-10. [DOI: 10.1016/0145-2134(81)90027-2] [DOI] [Google Scholar]
Yang 2020 {published data only}
- Yang C, Panlilio C, Verdiglione N, Lehman EB, Hamm RM, Fiene R, et al. Generalizing findings from a randomized controlled trial to a real-world study of the iLookOut, an online education program to improve early childhood care and education providers' knowledge and attitudes about reporting child maltreatment. PLOS ONE 2020;15(1):e0227398. [DOI: 10.1371/journal.pone.0227398] [PMCID: PMC6948728] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
References to studies awaiting assessment
De Faria Brino 2003 {published data only}
- Brino RF. Capacitação do Educador Acerca do Abuso Sexual Infantil [Educator Training on Child Sexual Abuse] [PhD thesis]. São Carlos: Universidade Federal de São Carlos, 2002. [Google Scholar]
- De Faria Brino R, Williams LC. Training teachers on child sexual abuse [Capacitação do educador acerca do abuso sexual infantil]. Interação em Psicologia 2003;7(2):1-10. [DOI: 10.5380/psi.v7i2.3218] [revistas.ufpr.br/psicologia/article/download/3218/2580] [DOI] [Google Scholar]
Herrera 1993 {published data only}
- Herrera M. Elementary Teacher Training Workshop for the Prevention of Child Sexual Abuse [Master's thesis]. Los Angeles (CA): California State University, 1993. [Google Scholar]
Peker 2020 {published data only}
- Peker A, Cengiz S, Celik AK. The effect of psycho-education program developed for sexual abuse on counseling teachers' reporting sexual abuse and information and risk recognition attitudes. International Journal of Education & Literacy Studies 2020;8(4):74-86. [DOI: 10.7575/aiac.ijels.v.8n.4p.74] [DOI] [Google Scholar]
References to ongoing studies
IRCT2015042713748N3 {published data only}
- IRCT2015042713748N3. Intention to report child abuse [Nurses intention to report child abuse]. www.irct.ir/trial/13530 (first received 22 October 2015).
NCT03185728 {published data only}
- Humphreys KL, Piersiak HA, Panlilio CC, Lehman EB, Verdiglione N, Dore S, et al. A randomized control trial of a child abuse mandated reporter training: knowledge and attitudes. Child Abuse & Neglect 2021 Apr 23 [Epub ahead of print]. [DOI: 10.1016/j.chiabu.2021.105033] [PMCID: PMC8360385 (available on 2022-07-01)] [PMID: ] [SYSREVIEW: 34999] [DOI] [PMC free article] [PubMed]
- Kapp KM, Dore S, Fiene R, Grable B, Panlilio C, Hamm RM, et al. Cognitive mapping for iLookOut for child abuse: an online training program for early childhood professionals. Online Journal of Distance Education and E-Learning 2020;8(2):80-9. [PMCID: PMC7511090] [PMID: ] [WEBSITE: tojdel.net/journals/tojdel/articles/v08i02/v08i02-02.pdf] [PMC free article] [PubMed] [Google Scholar]
- Levi BH, Belser A, Kapp K, Verdiglione N, Mincemoyer C, Dore S, et al. iLookOut for child abuse: conceptual and practical considerations in creating an online learning program to engage learners and promote behavior change. Early Child Development and Care 2021;191(4):535-44. [DOI: 10.1080/03004430.2019.1626374] [PMID: ] [SYSREVIEW: 35138] [DOI] [PMC free article] [PubMed] [Google Scholar]
- NCT03185728. iLook Out for child abuse: an innovative learning module for childcare providers (iLookOut) [iLook Out for child abuse: an innovative learning module for childcare providers]. clinicaltrials.gov/ct2/show/NCT03185728 (first received 20 April 2017).
- Yang C, Panlilio C, Verdiglione N, Lehman EB, Hamm RM, Fiene R, et al. Generalizing findings from a randomized controlled trial to a real-world study of the iLookOut, an online education program to improve early childhood care and education providers’ knowledge and attitudes about reporting child maltreatment. PLOS ONE 2020;15(1):e0227398. [DOI: 10.1371/journal.pone.0227398] [PMCID: PMC6948728] [PMID: ] [SYSREVIEW: 34219] [DOI] [PMC free article] [PubMed] [Google Scholar]
Additional references
Abrahams 1992
- Abrahams N, Casey K, Daro D. Teachers' knowledge, attitudes, and beliefs about child abuse and its prevention. Child Abuse & Neglect 1992;16(2):229-38. [DOI: 10.1016/0145-2134(92)90030-u] [PMID: ] [DOI] [PubMed] [Google Scholar]
Ainsworth 2006
- Ainsworth F, Hansen P. Five tumultuous years in Australian child protection: little progress. Child & Family Social Work 2006;11(1):33-41. [DOI: 10.1111/j.1365-2206.2006.00388.x] [DOI] [Google Scholar]
Ajzen 2005
- Ajzen I. Attitudes, Personality and Behaviour. 2nd edition. Maidenhead (UK): Open University Press, 2005. [Google Scholar]
Albarracin 2005
- Albarracin D, Zanna MP, Johnson BT, Kumkale GT. The influence of attitudes on behaviour. In: Albarracin D, Johnson BT, Zanna MP, editors(s). The Handbook of Attitudes. 1st edition. Mahwah (NJ): Lawrence Erlbaum Associates, 2005:3-19. [Google Scholar]
Allan 1996
- Allan J. Learning outcomes in higher education. Studies in Higher Education 1996;21(1):93-108. [DOI: 10.1080/03075079612331381487] [DOI] [Google Scholar]
Allen 2006
- Allen MJ. Assessing General Education Programs. Indianapolis (IN): Jossey-Bass, 2006. [ISBN: ISBN-978-1-8829-8295-0] [Google Scholar]
Almuneef 2018
- Almuneef M, ElChoueiry N, Saleheen H, Al-Eissa M. The impact of adverse childhood experiences on social determinants among Saudi adults. Journal of Public Health 2018;40(3):e219–27. [DOI: 10.1093/pubmed/fdx177] [PMID: ] [DOI] [PubMed] [Google Scholar]
Alvarez 2008
- Alvarez KM. The Development and Evaluation of a Child Maltreatment Reporting Training Program for Mandated Mental Health Professionals [PhD thesis]. Las Vegas (NV): University of Nevada, 2008. [SYSREVIEW STUDY IDENTIFIER: 630] [Google Scholar]
Ayling 2020
- Ayling NJ, Walsh K, Williams KE. Factors influencing early childhood education and care educators’ reporting of child abuse and neglect. Australasian Journal of Early Childhood 2020;45(1):95-108. [DOI: 10.1177/1836939119885307] [DOI] [Google Scholar]
Baker 2021
- Baker AJ, LeBlanc S, Adebayo T, Mathews B. Training for mandated reporters of child abuse and neglect: content analysis of state-sponsored curricula. Child Abuse & Neglect 2021;113:104932. [DOI: 10.1016/j.chiabu.2021.104932] [DOI] [PubMed] [Google Scholar]
Bandura 1993
- Bandura A. Perceived self-efficacy in cognitive development and functioning. Educational Psychologist 1993;28(2):117-48. [DOI: 10.1207/s15326985ep2802_3] [DOI] [Google Scholar]
Bear 2014
- Bear GG, Yang C, Pell M, Gaskins C. Validation of a brief measure of teachers' perceptions of school climate: relations to student achievement and suspensions. Learning Environments Research 2014;17(3):339-54. [DOI: 10.1007/s10984-014-9162-1] [DOI] [Google Scholar]
Beck 1994
- Beck KA, Ogloff JR, Corbishley A. Knowledge, compliance, and attitudes of teachers toward mandatory child abuse reporting in British Columbia. Canadian Journal of Education 1994;19(1):15-29. [DOI: 10.2307/1495304] [WEB PAGE: www.jstor.org/stable/1495304] [DOI] [Google Scholar]
Bellis 2019
- Bellis MA, Hughes K, Ford K, Rodriguez GR, Sethi D, Passmore J. Life course health consequences and associated annual costs of adverse childhood experiences across Europe and North America: a systematic review and meta-analysis. Lancet Public Health 2019;4(10):e517-28. [DOI: 10.1016/S2468-2667(19)30145-8] [PMCID: PMC7098477] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Borenstein 2009
- Borenstein M, Hedges LV, Higgins JP, Rothstein HR. Introduction to Meta-Analysis. West Sussex (UK): John Wiley & Sons Ltd, 2009. [Google Scholar]
Boutron 2022
- Boutron I, Page MJ, Higgins JP, Altman DG, Lundh A, Hróbjartsson A. Chapter 7: Considering bias and conflicts of interest among the included studies. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.
Bywaters 2019
- Bywaters P, Featherstone B, Morris K. Child protection and social inequality. Social Sciences 2019;8(2):42. [DOI: 10.3390/socsci8020042] [DOI] [Google Scholar]
Calderon 2013
- Calderon O. Direct and indirect measures of learning outcomes in an MSW program: What do we actually measure? Journal of Social Work Education 2013;49(3):408-19. [DOI: 10.1080/10437797.2013.796767] [DOI] [Google Scholar]
Calheiros 2016
- Calheiros MM, Monteiro MB, Patrício JN, Carmona M. Defining child maltreatment among lay people and community professionals: exploring consensus in ratings of severity. Journal of Child & Family Studies 2016;25(7):2292–305. [DOI: 10.1007/s10826-016-0385-x] [DOI] [Google Scholar]
Campbell 2004
- Campbell MK, Grimshaw JM, Elbourne DR. Intracluster correlation coefficients in cluster randomized trials: empirical insights into how they should be reported. BMC Medical Research Methodology 2004;4:9. [DOI: 10.1186/1471-2288-4-9] [PMCID: PMC415547] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Carter 2006
- Carter YH, Bannon MJ, Limbert C, Docherty A, Barlow J. Improving child protection: a systematic review of training and procedural interventions. Archives of Disease in Childhood 2006;91(9):740-3. [DOI: 10.1136/adc.2005.092007] [DOI] [PMC free article] [PubMed] [Google Scholar]
Chan 2013
- Chan AW, Tetzlaff JM, Altman DG, Laupacis A, Gøtzsche PC, Krleža-Jerić K, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Annals of Internal Medicine 2013;158(3):200-7. [DOI: 10.7326/0003-4819-158-3-201302050-00583] [PMCID: PMC5114123] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Chiang 2016
- Chiang LF, Kress H, Sumner SA, Gleckel J, Kawemama P, Gordon RN. Violence Against Children Surveys (VACS): towards a global surveillance system. Injury Prevention 2016;22(Suppl 1):i17–22. [DOI: 10.1136/injuryprev-2015-041820] [PMCID: PMC6158784] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Christian 2008
- Christian CW. Professional education in child abuse and neglect. Pediatrics 2008;122(Suppl 1):S13-7. [DOI: 10.1542/peds.2008-0715f] [PMID: ] [DOI] [PubMed] [Google Scholar]
Cohen 1988
- Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd edition. Hillsdale (NJ): Erlbaum Associates, 1988. [Google Scholar]
Colgrave 2021
- Colgrave JP, Stasa H, Fraser J. Validity and reliability of the psychometric properties of a child abuse questionnaire. Nurse Researcher 2020;28(1):42-9. [DOI: 10.7748/nr.2020.e1677] [PMID: ] [DOI] [PubMed] [Google Scholar]
Committee for Children 2021
- Committee for Children. Second Step Child Protection Unit. www.secondstep.org/child-protection (accessed 12 August 2021).
Crenshaw 1995
- Crenshaw WB, Crenshaw LM, Lichtenberg JW. When educators confront child abuse: an analysis of the decision to report. Child Abuse & Neglect 1995;19(9):1095-113. [DOI: 10.1016/0145-2134(95)00071-f] [PMID: ] [DOI] [PubMed] [Google Scholar]
Cross 2012
- Cross TP, Mathews B, Tonmyr L, Scott D, Ouimet C. Child welfare policy and practice on children's exposure to domestic violence. Child Abuse & Neglect 2012;36(3):210-6. [DOI: 10.1016/j.chiabu.2011.11.004] [PMID: ] [DOI] [PubMed] [Google Scholar]
Cuartas 2019
- Cuartas J, McCoy DC, Rey-Guerra C, Britto PR, Beatriz E, Salhi C. Early childhood exposure to non-violent discipline and physical and psychological aggression in low- and middle-income countries: national, regional, and global prevalence estimates. Child Abuse & Neglect 2019;92:93-105. [DOI: 10.1016/j.chiabu.2019.03.021] [PMID: ] [DOI] [PubMed] [Google Scholar]
Currie 2010
- Currie J, Widom CS. Long-term consequences of child abuse and neglect on adult economic well-being. Child Maltreatment 2010;15(2):111-20. [DOI: 10.1177/1077559509355316] [PMCID: PMC3571659] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Dalziel 2017
- Dalziel P. Education and qualifications as skills. In: Buchanan J, Finegold D, Mayhew K, Warhurst C, editors(s). The Oxford Handbook of Skills and Training. Oxford (UK): Oxford University Press, 2017:444-65. [DOI: 10.1093/oxfordhb/9780199655366.013.7] [DOI] [Google Scholar]
Danese 2009
- Danese A, Moffitt TE, Harrington H, Milne BJ, Polanczyk G, Pariante CM, et al. Adverse childhood experiences and adult risk factors for age-related disease: depression, inflammation, and clustering of metabolic risk markers. Archives of Pediatrics & Adolescent Medicine 2009;163(12):1135-43. [DOI: 10.1001/archpediatrics.2009.214] [PMCID: PMC3560401] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Danese 2012
- Danese A, McEwen BS. Adverse childhood experiences, allostasis, allostatic load, and age-related disease. Physiology & Behavior 2012;106(1):29-39. [DOI: 10.1016/j.physbeh.2011.08.019] [PMID: ] [DOI] [PubMed] [Google Scholar]
Darkness to Light 2021
- Darkness to Light. The Year of Courage: Fiscal Year 2019 Impact Report. www.d2l.org/about/ourimpact/ (accessed 18 June 2021).
Deeks 2022
- Deeks JJ, Higgins JP, Altman DG, editor(s). Chapter 10: Analysing data and undertaking meta-analyses. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from training.cochrane.org/handbook.
Dilsiz 2015
- Dilsiz H, Mağden D. Determining teachers' knowledge and risk recognition levels on child abuse and neglect. Hacettepe University Faculty of Health Sciences Journal 2015;1(2):678-93. [DOI: 10.7575/aiac.ijels.v.8n.4p.74] [SYSREVIEW IDENTIFIER: 35346] [DOI] [Google Scholar]
Donner 2002
- Donner A, Klar N. Issues in the meta-analysis of cluster randomized trials. Statistics in Medicine 2002;21(19):2971-80. [DOI: 10.1002/sim.1301] [PMID: ] [DOI] [PubMed] [Google Scholar]
Donohue 2002
- Donohue B, Carpin K, Alvarez KM, Ellwood A, Jones RW. A standardized method of diplomatically and effectively reporting child abuse to state authorities: a controlled evaluation. Behavior Modification 2002;26(5):684-99. [DOI: 10.1177/014544502236657] [PMID: ] [DOI] [PubMed] [Google Scholar]
Drake 1996
- Drake B. Unraveling "unsubstantiated". Child Maltreatment 1996;1(3):261-71. [DOI: 10.1177/1077559596001003008] [DOI] [Google Scholar]
Drake 2007
- Drake B, Jonson-Reid M. A response to Melton based on the best available data. Child Abuse & Neglect 2007;31(4):343-60. [DOI: 10.1016/j.chiabu.2006.08.009] [PMID: ] [DOI] [PubMed] [Google Scholar]
Draper 2008
- Draper B, Pfaff JJ, Pirkis J, Snowdon J, Lautenschlager NT, Wilson I, et al. Long-term effects of childhood abuse on the quality of life and health of older people: results from the Depression and Early Prevention of Suicide in General Practice Project. Journal of the American Geriatrics Society 2008;56(2):262-71. [DOI: 10.1111/j.1532-5415.2007.01537.x] [PMID: ] [DOI] [PubMed] [Google Scholar]
Dubowitz 2007
- Dubowitz H. Understanding and addressing the "neglect of neglect": digging into the molehill. Child Abuse & Neglect 2007;31(6):603-6. [DOI: 10.1016/j.chiabu.2007.04.002] [PMID: ] [DOI] [PubMed] [Google Scholar]
Eagly 1993
- Eagly AH, Chaiken S. The Psychology of Attitudes. Orlando (FL): Harcourt Brace Jovanovich, 1993. [Google Scholar]
Eccles 2003
- Eccles M, Grimshaw J, Campbell M, Ramsay C. Research designs for studies evaluating the effectiveness of change and improvement strategies. Quality and Safety in Healthcare 2003;12(1):47-52. [DOI: 10.1136/qhc.12.1.47] [PMCID: PMC1743658] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Egeland 2009
- Egeland B. Taking stock: childhood emotional maltreatment and developmental psychopathology. Child Abuse & Neglect 2009;33(1):22-6. [DOI: 10.1016/j.chiabu.2008.12.004] [PMID: ] [DOI] [PubMed] [Google Scholar]
EndNote 2018 [Computer program]
- EndNote X8.0.1. Version X8.0.1. Amsterdam: Clarivate, 2018.
EPOC 2017
- Cochrane Effective Practice and Organisation of Care (EPOC). EPOC Resources for review authors: Suggested risk of bias criteria for EPOC reviews. epoc.cochrane.org/resources/epoc-resources-review-authors (accessed 11 August 2021).
Fang 2012
- Fang X, Brown DS, Florence CS, Mercy JA. The economic burden of child maltreatment in the United States and implications for prevention. Child Abuse & Neglect 2012;36(2):156-65. [DOI: 10.1016/j.chiabu.2011.10.006] [PMCID: PMC3776454] [PMID: ] [DOI] [PMC free article] [PubMed] [Google Scholar]
Fang 2015
- Fang X, Fry DA, Brown DS, Mercy JA, Dunne MP, Butchart AR, et al. The burden of child maltreatment in the East Asia and Pacific region. Child Abuse & Neglect 2015;42:146-62. [DOI: 10.1016/j.chiabu.2015.02.012] [PMCID: PMC4682665]
Feng 2005
- Feng JY, Levine M. Factors associated with nurses' intention to report child abuse: a national study of Taiwanese nurses. Child Abuse & Neglect 2005;29(7):783-95. [DOI: 10.1016/j.chiabu.2004.11.006]
Finkelhor 1988
- Finkelhor D, Korbin J. Child abuse as an international issue. Child Abuse & Neglect 1988;12(1):3-23. [DOI: 10.1016/0145-2134(88)90003-8]
Finkelhor 2010
- Finkelhor D, Turner H, Ormrod R, Hamby SL. Trends in childhood violence and abuse exposure: evidence from 2 national surveys. Archives of Pediatrics & Adolescent Medicine 2010;164(3):238-42. [DOI: 10.1001/archpediatrics.2009.283]
Flemington 2021
- Flemington T, Lock M, Shipp J, Hartz D, Lonne B, Fraser JA. Cultural safety and child protection responses in hospitals: a scoping review. International Journal on Child Maltreatment: Research, Policy and Practice 2021;4:5-33. [DOI: 10.1007/s42448-020-00065-3]
Fraser 2010
- Fraser JA, Mathews B, Walsh K, Chen L, Dunne M. Factors influencing child abuse and neglect recognition and reporting by nurses: a multivariate analysis. International Journal of Nursing Studies 2010;47(2):146-53. [DOI: 10.1016/j.ijnurstu.2009.05.015]
Gilbert 2009
- Gilbert R, Widom CS, Browne K, Fergusson D, Webb E, Janson S. Burden and consequences of child maltreatment in high-income countries. Lancet 2009;373(9657):68-81. [DOI: 10.1016/S0140-6736(08)61706-7]
Gilbert 2012
- Gilbert R, Fluke J, O'Donnell M, Gonzalez-Izquierdo A, Brownell M, Gulliver P, et al. Child maltreatment: variation in trends and policies in six developed countries. Lancet 2012;379(9817):758-72. [DOI: 10.1016/S0140-6736(11)61087-8]
Glaser 2011
- Glaser D. How to deal with emotional abuse and neglect - further development of a conceptual framework (FRAMEA). Child Abuse & Neglect 2011;35(10):866-75. [DOI: 10.1016/j.chiabu.2011.08.002]
Goebbels 2008
- Goebbels AF, Nicholson JM, Walsh K, De Vries H. Teachers' reporting of suspected child abuse and neglect: behaviour and determinants. Health Education Research 2008;23(6):941-51. [DOI: 10.1093/her/cyn030]
GRADEpro GDT [Computer program]
- GRADEpro GDT. Version accessed 13 June 2021. Hamilton (ON): McMaster University (developed by Evidence Prime). Available at gradepro.org.
Guyatt 2008
- Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336(7650):924-6. [DOI: 10.1136/bmj.39489.470347.AD] [PMCID: PMC2335261]
Guyatt 2011
- Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, et al. GRADE guidelines: 4. Rating the quality of evidence - study limitations (risk of bias). Journal of Clinical Epidemiology 2011;64(4):407-15. [DOI: 10.1016/j.jclinepi.2010.07.017]
Hawkins 2001
- Hawkins R, McCallum C. Mandatory notification training for suspected child abuse and neglect in South Australian schools. Child Abuse & Neglect 2001;25(12):1603-25. [DOI: 10.1016/s0145-2134(01)00296-4]
Hazzard 1983
- Hazzard A. Training teachers to identify and intervene with abused children. 91st Annual Convention of the American Psychological Association; 1983 Aug 26-30; Anaheim (CA). [ERIC: ED240476] [SYSREVIEW STUDY IDENTIFIER: 5300]
Hedges 2015
- Hedges LV, Citkowicz M. Estimating effect size when there is clustering in one treatment group. Behavior Research Methods 2015;47(4):1295-308. [DOI: 10.3758/s13428-014-0538-z]
Higgins 2011
- Higgins JP, Green S, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011. Available from training.cochrane.org/handbook/archive/v5.1/.
Higgins 2022a
- Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from training.cochrane.org/handbook.
Higgins 2022b
- Higgins JP, Eldridge S, Li T, editor(s). Chapter 23: Including variants on randomized trials. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from training.cochrane.org/handbook.
Higgins 2022c
- Higgins JP, Li T, Deeks JJ, editor(s). Chapter 6: Choosing effect measures and computing estimates of effect. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.
Higgins 2022d
- Higgins JP, Eldridge S, Li T, editor(s). Chapter 23: Including variants on randomized trials. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.
Higginson 2014 [Computer program]
- SysReview [systematic review management software]. Higginson A, Neville R. Brisbane (Australia): The University of Queensland, 2014.
Hildyard 2002
- Hildyard KL, Wolfe DA. Child neglect: developmental issues and outcomes. Child Abuse & Neglect 2002;26(6-7):679-95. [DOI: 10.1016/s0145-2134(02)00341-1]
Hillis 2016
- Hillis S, Mercy J, Amobi A, Kress H. Global prevalence of past-year violence against children: a systematic review and minimum estimates. Pediatrics 2016;137(3):e20154079. [DOI: 10.1542/peds.2015-4079] [PMCID: PMC6496958]
Hinson 2000
- Hinson J, Fossey R. Child abuse: what teachers in the '90s know, think, and do. Journal of Education for Students Placed at Risk 2000;5(3):251-66. [DOI: 10.1207/S15327671ESPR0503_4]
Hoffmann 2014
- Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:g1687. [DOI: 10.1136/bmj.g1687]
Hughes 2017
- Hughes K, Bellis MA, Hardcastle KA, Sethi D, Butchart A, Mikton C, et al. The effect of multiple adverse childhood experiences on health: a systematic review and meta-analysis. Lancet Public Health 2017;2(8):e356-66. [DOI: 10.1016/S2468-2667(17)30118-4]
Jones 2008
- Jones R, Flaherty EG, Binns HJ, Price LL, Slora E, Abney D, et al. Clinicians' description of factors influencing their reporting of suspected child abuse: report of the Child Abuse Reporting Experience Study Research Group. Pediatrics 2008;122(2):259-66. [DOI: 10.1542/peds.2007-2312]
Kalichman 1993
- Kalichman SC, Brosig CL. Practicing psychologists' interpretations of and compliance with child abuse reporting laws. Law and Human Behavior 1993;17(1):83-93. [DOI: 10.1007/BF01044538]
Kelcey 2013
- Kelcey B, Phelps G. Considerations for designing group randomized trials of professional development with teacher knowledge outcomes. Educational Evaluation and Policy Analysis 2013;35(3):370-90. [DOI: 10.3102/0162373713482766] [WEB PAGE: www.jstor.org/stable/43773438]
Kenny 2001
- Kenny MC. Child abuse reporting: teachers' perceived deterrents. Child Abuse & Neglect 2001;25(1):81-92. [DOI: 10.1016/s0145-2134(00)00218-0]
Kenny 2004
- Kenny MC. Teachers' attitudes toward and knowledge of child maltreatment. Child Abuse & Neglect 2004;28(12):1311-9. [DOI: 10.1016/j.chiabu.2004.06.010]
Kimber 2018
- Kimber M, Adham S, Gill S, McTavish J, MacMillan HL. The association between child exposure to intimate partner violence (IPV) and perpetration of IPV in adulthood - a systematic review. Child Abuse & Neglect 2018;76:273-86. [DOI: 10.1016/j.chiabu.2017.11.007]
Knowles 2011
- Knowles MS, Holton EF, Swanson RA. The Adult Learner: The Definitive Classic in Adult Education and Human Resource Development. 7th edition. Oxford (UK): Butterworth-Heinemann Ltd, 2011.
Koç 2020
- Koç S, Ekşi H, Türk T. Psychometric properties of teachers' attitudes towards reporting child sexual abuse scale: Turkish form. Elementary Education Online 2020;19(1):173-82. [DOI: 10.17051/ilkonline.2020.649372]
Kohl 2009
- Kohl PL, Jonson-Reid M, Drake B. Time to leave substantiation behind: findings from a national probability study. Child Maltreatment 2009;14(1):17-26. [DOI: 10.1177/1077559508326030]
Korbin 1979
- Korbin J. A cross-cultural perspective on the role of community in child abuse and neglect. Child Abuse & Neglect 1979;3(1):9-18. [DOI: 10.1016/0145-2134(79)90006-1]
Lansford 2002
- Lansford JE, Dodge KA, Pettit GS, Bates JE, Crozier J, Kaplow J. A 12-year prospective study of the long-term effects of early child physical maltreatment on psychological, behavioral, and academic problems in adolescence. Archives of Pediatrics & Adolescent Medicine 2002;156(8):824-30. [DOI: 10.1001/archpedi.156.8.824] [PMCID: PMC2756659]
Lee 2012
- Lee PY, Dunne MP, Chou FH, Fraser JA. Development of the child abuse and neglect reporting self-efficacy questionnaire for nurses. Kaohsiung Journal of Medical Sciences 2012;28(1):44-53. [DOI: 10.1016/j.kjms.2011.10.032]
Levi 2019 [pers comm]
- Levi B. Training interventions for mandatory reporters [personal communication]. Email to: K Walsh 19 December 2018.
Lev‐Wiesel 2018
- Lev-Wiesel R, Eisikovits Z, First M, Gottfried R, Mehlhausen D. Prevalence of child maltreatment in Israel: a national epidemiological study. Journal of Child & Adolescent Trauma 2018;11(2):141-50. [DOI: 10.1007/s40653-016-0118-8] [PMCID: PMC7163892]
Li 2022
- Li T, Higgins JP, Deeks JJ, editor(s). Chapter 5: Collecting data. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.
Liberati 2009
- Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLOS Medicine 2009;6(7):e1000100. [DOI: 10.1371/journal.pmed.1000100] [PMCID: PMC2707010]
Louwers 2010
- Louwers EC, Affourtit MJ, Moll HA, De Koning HJ, Korfage IJ. Screening for child abuse at emergency departments: a systematic review. Archives of Disease in Childhood 2010;95(3):214-8. [DOI: 10.1136/adc.2008.15165]
Maguire‐Jack 2015
- Maguire-Jack K, Lanier P, Johnson-Motoyama M, Welch H, Dineen M. Geographic variation in racial disparities in child maltreatment: the influence of county poverty and population density. Child Abuse & Neglect 2015;47:1-13. [DOI: 10.1016/j.chiabu.2015.05.020]
Mathews 2008a
- Mathews B, Kenny MC. Mandatory reporting legislation in the United States, Canada, and Australia: a cross-jurisdictional review of key features, differences, and issues. Child Maltreatment 2008;13(1):50-63. [DOI: 10.1177/1077559507310613]
Mathews 2008b
- Mathews B, Fraser J, Walsh K, Dunne M, Kilby S, Chen L. Queensland nurses' attitudes and knowledge of the legislative duty to report child abuse and neglect: results of a state-wide survey. Journal of Law and Medicine 2008;16(2):288-304.
Mathews 2009
- Mathews B, Walsh K, Rassafiani M, Butler D, Farrell A. Teachers reporting suspected child sexual abuse: results of a three-state study. University of New South Wales Law Journal 2009;32(3):772-813. [WEBSITE: www.unswlawjournal.unsw.edu.au/article/teachers-reporting-suspected-child-sexual-abuse-results-of-a-three-state-study/]
Mathews 2011
- Mathews B. Teacher education to meet the challenges posed by child sexual abuse. Australian Journal of Teacher Education 2011;36(11):13-32. [WEB PAGE: files.eric.ed.gov/fulltext/EJ943405.pdf]
Mathews 2016
- Mathews B, Lee XJ, Norman RE. Impact of a new mandatory reporting law on reporting and identification of child sexual abuse: a seven year time trend analysis. Child Abuse & Neglect 2016;56:62-79. [DOI: 10.1016/j.chiabu.2016.04.009]
Mathews 2019
- Mathews B, Collin-Vézina D. Child sexual abuse: toward a conceptual model and definition. Trauma, Violence & Abuse 2019;20(2):131-48. [DOI: 10.1177/1524838017738726] [PMCID: PMC6429628]
Mathews 2020
- Mathews B, Bromfield L, Walsh K. Comparing reports of child sexual and physical abuse using child welfare agency data in two jurisdictions with different mandatory reporting laws. Social Sciences 2020;9(5):75. [DOI: 10.3390/socsci9050075]
McGrath 1987
- McGrath P, Cappelli M, Wiseman D, Khalil N, Allan B. Teacher awareness program on child abuse: a randomized controlled trial. Child Abuse & Neglect 1987;11(1):125-32. [DOI: 10.1016/0145-2134(87)90041-x]
McMahon 1999
- McMahon PM, Puett RC. Child sexual abuse as a public health issue: recommendations of an expert panel. Sexual Abuse 1999;11(4):257-66. [DOI: 10.1023/A:1021358813220]
Microsoft Corporation 2018 [Computer program]
- Microsoft Excel. Version 2204. Redmond (WA): Microsoft Corporation, 2018. Available at office.microsoft.com/excel.
Moffitt 2013
- Moffitt TE, Klaus-Grawe 2012 Think Tank. Childhood exposure to violence and lifelong health: clinical intervention science and stress biology research join forces. Development and Psychopathology 2013;25(4 Pt 2):1619-34. [DOI: 10.1017/S0954579413000801] [PMCID: PMC3869039]
Moher 2009
- Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLOS Medicine 2009;6(7):e1000097. [DOI: 10.1371/journal.pmed.1000097] [PMCID: PMC2707599]
Nelson 2020
- Nelson CA, Scott RD, Bhutta ZA, Harris NB, Danese A, Samara M. Adversity in childhood is linked to mental and physical health throughout life. BMJ 2020;371:m3048. [DOI: 10.1136/bmj.m3048] [PMCID: PMC7592151]
Nguyen 2019
- Nguyen KH, Padilla M, Villaveces A, Patel P, Atuchukwu V, Onotu D, et al. Coerced and forced sexual initiation and its association with negative health outcomes among youth: results from the Nigeria, Uganda, and Zambia Violence Against Children Surveys. Child Abuse & Neglect 2019;96:104074. [DOI: 10.1016/j.chiabu.2019.104074] [PMCID: PMC6760991]
Nikolaidis 2018
- Nikolaidis G, Petroulaki K, Zarokosta F, Tsirigoti A, Hazizaj A, Cenko E, et al. Lifetime and past-year prevalence of children’s exposure to violence in 9 Balkan countries: the BECAN study. Child and Adolescent Psychiatry and Mental Health 2018;12(1):1-15. [DOI: 10.1186/s13034-017-0208-x] [PMCID: PMC5749026]
Norman 2012
- Norman RE, Byambaa M, De R, Butchart A, Scott J, Vos T. The long-term health consequences of child physical abuse, emotional abuse, and neglect: a systematic review and meta-analysis. PLOS Medicine 2012;9(11):e1001349. [DOI: 10.1371/journal.pmed.1001349] [PMCID: PMC3507962]
Page 2022
- Page MJ, Higgins JP, Sterne JA. Chapter 13: Assessing risk of bias due to missing results in a synthesis. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.
Paolucci 2001
- Paolucci E, Genuis ML, Violato C. A meta-analysis of the published research on the effects of child sexual abuse. Journal of Psychology 2001;135(1):17-36. [DOI: 10.1080/00223980109603677]
Pinheiro 2006
- Pinheiro PS. World report on violence against children. digitallibrary.un.org/record/587334?ln=en (accessed prior to 6 June 2022).
Radford 2012
- Radford L, Corral S, Bradley C, Fisher H. Trends in child maltreatment. Lancet 2012;379(9831):2048. [DOI: 10.1016/S0140-6736(12)60887-3]
Reeves 2022
- Reeves BC, Deeks JJ, Higgins JP, Shea B, Tugwell P, Wells G. Chapter 24: Including non-randomized studies on intervention effects. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from training.cochrane.org/handbook.
Reiniger 1995
- Reiniger A, Robison E, McHugh M. Mandated training of professionals: a means for improving reporting of suspected child abuse. Child Abuse & Neglect 1995;19(1):63-9. [DOI: 10.1016/0145-2134(94)00105-4]
Review Manager 2020 [Computer program]
- Review Manager 5 (RevMan 5). Version 5.4. Copenhagen: Nordic Cochrane Centre, The Cochrane Collaboration, 2020.
RevMan Web 2021 [Computer program]
- Review Manager Web (RevMan Web). Version 2.7.0. The Cochrane Collaboration, 2021. Available at revman.cochrane.org.
Schulz 2010
- Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332. [DOI: 10.1136/bmj.c332]
Schünemann 2013
- Schünemann H, Brożek J, Guyatt G, Oxman A, editor(s). Handbook for grading the quality of evidence and the strength of recommendations using the GRADE approach (updated October 2013). GRADE Working Group, 2013. Available from gdt.guidelinedevelopment.org/app/handbook/handbook.html.
Schünemann 2022
- Schünemann HJ, Higgins JP, Vist GE, Glasziou P, Akl EA, Skoetz N, et al. Chapter 14: Completing ‘Summary of findings’ tables and grading the certainty of the evidence. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from training.cochrane.org/handbook.
Shalev 2013
- Shalev I, Moffitt TE, Sugden K, Williams B, Houts RM, Danese A, et al. Exposure to violence during childhood is associated with telomere erosion from 5 to 10 years of age: a longitudinal study. Molecular Psychiatry 2013;18(5):576-81. [DOI: 10.1038/mp.2012.32] [PMCID: PMC3616159]
Stanley 2017
- Stanley G. Accreditation and assessment in vocational education and training. In: Buchanan J, Finegold D, Mayhew K, Warhurst C, editor(s). The Oxford Handbook of Skills and Training. Oxford (UK): Oxford University Press, 2017:124-42. [DOI: 10.1093/oxfordhb/9780199655366.013.6]
Starling 2009
- Starling SP, Heisler KW, Paulson JF, Youmans E. Child abuse training and knowledge: a national survey of emergency medicine, family medicine, and pediatric residents and program directors. Pediatrics 2009;123(4):e595-602. [DOI: 10.1542/peds.2008-2938]
Sterne 2016
- Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomized studies of interventions. BMJ 2016;355:i4919. [DOI: 10.1136/bmj.i4919] [PMCID: PMC5062054]
Sterne 2022
- Sterne JA, Hernán MA, McAleenan A, Reeves BC, Higgins JP. Chapter 25: Assessing risk of bias in a non-randomized study. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.
Stoltenborgh 2011
- Stoltenborgh M, Van Ijzendoorn MH, Euser EM, Bakermans-Kranenburg MJ. A global perspective on child sexual abuse: meta-analysis of prevalence around the world. Child Maltreatment 2011;16(2):79-101. [DOI: 10.1177/1077559511403920]
Stoltenborgh 2012
- Stoltenborgh M, Bakermans-Kranenburg MJ, Alink LR, Van Ijzendoorn MH. The universality of childhood emotional abuse: a meta-analysis of worldwide prevalence. Journal of Aggression, Maltreatment & Trauma 2012;21(8):870-90. [DOI: 10.1080/10926771.2012.708014]
Stoltenborgh 2015
- Stoltenborgh M, Bakermans-Kranenburg MJ, Alink LR, Van Ijzendoorn MH. The prevalence of child maltreatment across the globe: review of a series of meta-analyses. Child Abuse Review 2015;24(1):37-50. [DOI: 10.1002/car.2353]
Suskie 2018
- Suskie L. Assessing Student Learning: A Common Sense Guide. San Francisco (CA): John Wiley & Sons, 2018. [ISBN: 978-1-119-42693-6]
Taillieu 2016
- Taillieu TL, Brownridge DA, Sareen J, Afifi TO. Childhood emotional maltreatment and mental disorders: results from a nationally representative adult sample from the United States. Child Abuse & Neglect 2016;59:1-12. [DOI: 10.1016/j.chiabu.2016.07.005]
Tarnowski 1992
- Tarnowski KJ, Simonian SJ. Assessing treatment acceptance: the Abbreviated Acceptability Rating Profile. Journal of Behavior Therapy and Experimental Psychiatry 1992;23(2):101-6. [DOI: 10.1016/0005-7916(92)90007-6]
Teicher 2016
- Teicher MH, Samson JA. Annual research review: enduring neurobiological effects of childhood abuse and neglect. Journal of Child Psychology and Psychiatry 2016;57(3):241-66. [DOI: 10.1111/jcpp.12507] [PMCID: PMC4760853]
United Nations 1989
- United Nations Office of the High Commissioner for Human Rights. Convention on the Rights of the Child. www.ohchr.org/en/professionalinterest/pages/crc.aspx (accessed prior to 6 June 2022).
United Nations 2015
- United Nations Department of Economic and Social Affairs. Transforming our world: the 2030 Agenda for Sustainable Development. www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E (accessed prior to 6 June 2022).
US DHHS 2021
- US Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children's Bureau. Child Maltreatment 2019. www.acf.hhs.gov/cb/report/child-maltreatment-2019 (accessed prior to 6 June 2022).
Walsh 2008
- Walsh K, Bridgstock R, Farrell A, Rassafiani M, Schweitzer R. Case, teacher and school characteristics influencing teachers' detection and reporting of child physical abuse and neglect: results from an Australian survey. Child Abuse & Neglect 2008;32(10):983-93. [DOI: 10.1016/j.chiabu.2008.03.002]
Walsh 2010
- Walsh K, Rassafiani M, Mathews B, Farrell A, Butler D. Teachers’ attitudes toward reporting child sexual abuse: problems with existing research leading to new scale development. Journal of Child Sexual Abuse 2010;19(3):310-36. [DOI: 10.1080/10538711003781392]
Walsh 2012a
- Walsh KM, Mathews B, Rassafiani M, Farrell A, Butler DA. Understanding teachers' reporting of child sexual abuse: measurement methods matter. Children and Youth Services Review 2012;34(9):1937-46. [DOI: 10.1016/j.childyouth.2012.06.004]
Walsh 2012b
- Walsh K, Rassafiani M, Mathews B, Farrell A, Butler D. Exploratory factor analysis and psychometric properties of the teacher reporting attitude scale for child sexual abuse. Journal of Child Sexual Abuse 2012;21(5):489-506. [DOI: 10.1080/10538712.2012.689423]
Walsh 2018 [pers comm]
- Walsh K. Training interventions for mandatory reporters [personal communication]. Email to: NDACAN Child-maltreatment-research-Listserv 17 December 2018. [WEB PAGE: www.ndacan.acf.hhs.gov/cmrl/cmrl-past-postings.cfm]
Walsh 2019
- Walsh K. Re-visioning education and training for child protection using a public health approach. In: Lonne B, Scott D, Higgins D, Herrenkohl TI, editor(s). Re-Visioning Public Health Approaches for Protecting Children. Dordrecht: Springer, 2019:379-96.
Walsh 2019a [pers comm]
- Walsh K. Request for information - Your article entitled Capacitação do educador acerca do abuso sexual infantil [personal communication]. Email to: R De Faria Brino and LC De Albuquerque Williams 24 April 2019. [EMAIL: brino@ufscar.br; williams@ufscar.br]
Walsh 2019b [pers comm]
- Walsh K. Request for information - Nurses intention to report child abuse [personal communication]. Email to: S Khanjari 9 May 2019.
Walsh 2021 [pers comm]
- Walsh K. Request for information - your article entitled The Effect of Psycho-Education Program Developed for Sexual Abuse on Counseling Teachers’ Reporting Sexual Abuse and Information and Risk Recognition Attitudes [personal communication]. Email to: S Cengiz 16 August 2021. [EMAIL: srkn_cngz_25@hotmail.com]
Ward 2018
- Ward CL, Artz L, Leoschut L, Kassanjee R, Burton P. Sexual violence against children in South Africa: a nationally representative cross-sectional study of prevalence and correlates. Lancet Global Health 2018;6(4):e460-8. [DOI: 10.1016/S2214-109X(18)30060-3]
WHO 2006
- World Health Organization, International Society for the Prevention of Child Abuse and Neglect. Preventing child maltreatment: a guide to taking action and generating evidence. whqlibdoc.who.int/publications/2006/9241594365_eng.pdf (accessed prior to 6 June 2022).
Wilson 2001
- Wilson DB. Practical meta-analysis effect size calculator. www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-Home.php (accessed 26 January 2022).
Zellman 1990
- Zellman GL. Child abuse reporting and failure to report among mandated reporters: prevalence, incidence, and reasons. Journal of Interpersonal Violence 1990;5(1):3-22. [DOI: 10.1177/088626090005001001]
References to other published versions of this review
Mathews 2015
- Mathews B, Walsh K, Coe S, Kenny MC, Vagenas D. Child protection training for professionals to improve reporting of child abuse and neglect. Cochrane Database of Systematic Reviews 2015, Issue 6. Art. No: CD011775. [DOI: 10.1002/14651858.CD011775]