PLOS ONE. 2023 May 25;18(5):e0285985. doi: 10.1371/journal.pone.0285985

Early childhood education and care quality and associations with child outcomes: A meta-analysis

Antje von Suchodoletz 1,2,*, D Susie Lee 3,4, Junita Henry 5, Supriya Tamang 6, Bharathy Premachandra 7, Hirokazu Yoshikawa 2,3
Editor: Sze Yan Liu
PMCID: PMC10212181  PMID: 37228090

Abstract

Objectives

The effectiveness of early childhood education and care (ECEC) programs for children’s development in various domains is well documented. Adding to existing meta-analyses on associations between the quality of ECEC services and children’s developmental outcomes, the present meta-analysis synthesizes the global literature on structural characteristics and indicators of process quality to test direct and moderated effects of ECEC quality on children’s outcomes across a range of domains.

Design

A systematic review of the literature published over a 10-year period, between January 2010 and June 2020, was conducted using the databases PsycINFO, ERIC, EBSCOhost, and PubMed. In addition, a call for unpublished research or research published in the grey literature was sent out through the authors’ professional network. The search yielded 8,932 articles. After removing duplicates, 4,880 unique articles were identified. To select articles for inclusion, it was determined whether studies met eligibility criteria: (1) study assessed indicators of quality in center-based ECEC programs catering to children ages 0–6 years; and (2) study assessed child outcomes. Inclusion criteria were: (1) a copy of the full article was available in English; (2) article reported an effect size measure of at least one quality indicator-child outcome association; and (3) measures of ECEC quality and child outcomes were collected within the same school year. A total of 1,044 effect sizes reported from 185 articles were included.

Results

The averaged effects, pooled within each of the child outcomes, suggest that higher levels of ECEC quality were significantly related to higher levels of academic outcomes (literacy, n = 99: 0.08, 95% C.I. 0.02, 0.13; math, n = 56: 0.07, 95% C.I. 0.03, 0.10), behavioral skills (n = 64: 0.12, 95% C.I. 0.07, 0.17), social competence (n = 58: 0.13, 95% C.I. 0.07, 0.19), and motor skills (n = 2: 0.09, 95% C.I. 0.04, 0.13), and lower levels of behavioral (n = 60: -0.12, 95% C.I. -0.19, -0.05) and social-emotional problems (n = 26: -0.09, 95% C.I. -0.15, -0.03). When a global assessment of child outcomes was reported, the association with ECEC quality was not significant (n = 13: 0.02, 95% C.I. -0.07, 0.11). Overall, effect sizes were small. When structural and process quality indicators were tested separately, structural characteristics alone did not significantly relate to child outcomes whereas associations between process quality indicators and most child outcomes were significant, albeit small. A comparison of the indicators, however, did not yield significant differences in effect sizes for most child outcomes. Results did not provide evidence for moderated associations. We also did not find evidence that ECEC quality-child outcome associations differed by ethnic minority or socioeconomic family background.

Conclusions

Despite the attempt to provide a synthesis of the global literature on ECEC quality-child outcome associations, the majority of studies included samples from the U.S. In addition, studies with large samples were also predominantly from the U.S. Together, the results might have been biased towards patterns prevalent in the U.S. that might not apply to other, non-U.S. ECEC contexts. The findings align with previous meta-analyses, suggesting that ECEC quality plays an important role for children’s development during the early childhood years. Implications for research and ECEC policy are discussed.

Introduction

Over the past decades, the number of children participating in early childhood education and care (ECEC) programs has increased worldwide [1]. Children’s experiences in ECEC programs have thus become an important factor in their development during the early childhood years [2, 3]. The expansion of ECEC provision worldwide is in line with global policy efforts, such as Sustainable Development Goal (SDG) 4 and specifically target 4.2, which calls for universal access to one year of preprimary education [4, 5]. ECEC programs are commonly implemented with the goal to “enhance early learning and development” and/or “increase opportunities for all children to succeed in school” [6, p. 3]. Many calls have been made for high-quality programs, including in the wording of SDG 4 itself (“ensure that all girls and boys have access to quality early childhood development, care and pre-primary education” [5]). Moreover, the G20 Initiative for Early Childhood Development [7] emphasizes the importance of political buy-in and state and non-state investments in the early years in order to narrow achievement and opportunity gaps that exist between children from higher and lower socioeconomic backgrounds. The effectiveness of ECEC programs for children’s development in various domains has been well documented, both short-term and long-term, although short-term effects have been larger [8, 9]. Yet, the evidence base on the associations of ECEC quality with children’s developmental outcomes remains limited in at least three ways. First, the literature is dominated by research from countries with a long history of ECEC provision, such as the United States (U.S.) and European countries like Finland, Germany, the Netherlands, or the United Kingdom, resulting in a scarcity of systematic research syntheses that integrate the global literature on ECEC to build knowledge about children’s educational opportunities across the world. Second, a large body of original research and research syntheses investigates the role of structural aspects of the ECEC program. However, it remains an open question whether such structural characteristics themselves systematically change the effects of process quality on child outcomes, or whether process quality changes the impact of structural characteristics on child outcomes. And third, there is a need to broaden the scope of developmental outcomes. Participation in ECEC programs may also have impacts on other, non-academic outcomes, such as social-emotional skills that have become a major focus of ECEC programs and interventions [10]. To address these gaps, our primary goal in the current meta-analysis was to provide a synthesis of available international evidence to better understand the mechanisms underlying the pathways from ECEC quality to child outcomes.

Defining ECEC quality: What matters for children’s outcomes?

Although there is a growing consensus that the level of quality of ECEC services influences children’s developmental outcomes [3, 11], the definition of ECEC quality has been a topic of debate since the 1970s, a debate that has undergone continuous change paralleling changes in socio-political structures and influences [2]. More recent definitions refer to ECEC quality as a multidimensional construct that includes structures of, and processes and practices in ECEC settings [12]. This is reflected in ECEC policy with quality standards being used as regulations with “the assumption that improving structural or process quality improves children’s outcomes indirectly or directly” [3, 6 p. 4]. Yet, the empirical evidence relating ECEC quality indicators with child outcomes is mixed and inconsistent in size [3], raising conceptual and methodological concerns regarding ECEC quality models [6].

Structures and processes

Structural aspects are major factors of ECEC programs [13, 14]. The most commonly studied structural aspects are teacher-child ratio and class size [3]. Evidence shows moderate positive effect sizes of fewer children per teacher and smaller class sizes on children’s outcomes [13, 15]. Yet, not all studies have found significant associations. For example, a meta-analysis from the U.S. found teacher-child ratio and class size to be unrelated to children’s language, reading, math and social skills [16]. It may be that, as regulation advances in particular countries, variation in these structural dimensions decreases and therefore the predictive power of ratios and class sizes is limited. A second major area of structural quality includes teacher factors, such as training, education, and experience. Associations between these factors and children’s outcomes are also inconsistent. The above-cited meta-analysis found significant positive, yet small effects of teacher factors on children’s language, reading and math skills [16]. Other studies, however, did not find significant associations [17–19]. For example, data from a U.S. state’s Quality Rating and Improvement System showed that neither general teacher education (i.e., type of degree) nor additional ECEC training was associated with children’s school readiness skills [18]. Although one might conclude from the inconsistent empirical evidence for many structural aspects that these indicators of ECEC quality do not matter for children’s outcomes to the anticipated extent, it remains important “to continue examining pathways from structural quality to children’s outcomes to inform policy and practice” [6 p. 7, 16]. Because structural features of ECEC settings are more amenable to regulation, many countries focus on structural standards as a key strategy for improving the quality of ECEC programs [3, 17]. The G20 Development Working Group [7] urges a focus on the quality of infrastructure as well as capacity building of, decent working conditions for, and adequate training of the ECEC workforce. Government regulations can set standards for these features, for example, by raising the minimum requirements for the teacher-child ratio or requiring a certain percentage of teaching staff to be qualified in early childhood education. Such structural regulations determine the setting in which children learn and thus may be important preconditions for process quality [3].

Over the past decade, aspects of process quality—teacher-child interactions and the level of stimulation of early learning in particular—have become an important focus of the effort to raise ECEC quality [3]. Socioecological, attachment, and learning theories recognize that teacher-child interactions provide an important context for children’s development and learning [6, 20]. Research, predominantly from the U.S., indicates small but statistically significant positive associations between the quality of teacher-child interactions and children’s academic and social-emotional outcomes [21–23]. Associations for academic outcomes are often stronger than for socio-emotional outcomes [24]. However, depending on the domain of teacher-child interactions assessed in a study, estimated effects vary in size, with some research also finding no significant associations. Theoretically, it is thought that frequent warm and emotionally supportive interactions between teachers and children foster gains in children’s skills. Yet, effect sizes for teacher provision of emotional support are often small (0.01–0.08) when controlling for earlier child skills [13, 25, 26]. Teacher-child interactions characterized by conflict, tension and anger, in contrast, have been linked with a higher chance that children develop achievement or behavioral problems [27–29]. A recent Starting Strong report [3] found a significant negative summary effect size of negative teacher-child interactions across multiple studies (-0.33). Several studies documented the benefits of well-organized classrooms for children’s development, both academically [30] and behaviorally [31]. Leyva et al. [26] conclude that effect sizes for this aspect of teacher-child interactions were higher, ranging from 0.14 to 0.49 (p. 784). Finally, clear instruction intended to enhance knowledge of concepts and language, tying new facts to children’s prior knowledge, and providing immediate, specific feedback [20], predicts growth in children’s academic skills [13, 25]. Effect sizes have been reported to be small to moderate (0.002–0.32; cited from [26, p. 784]). However, as with other aspects of teacher-child interactions, not all studies reported significant associations between the level of instruction provided by the teacher and children’s academic skills [24, 32].

Different approaches to the measurement of process quality might have contributed to the inconsistent empirical evidence for the link between process quality indicators and child outcomes. The two most common approaches to the measurement of process quality are observations and teacher self-reports. The Classroom Assessment Scoring System (CLASS) [33] and the Early Childhood Environment Rating Scale (ECERS) [34], for example, are often used as observational tools to assess indicators of process quality in ECEC programs in the U.S. and internationally. The most widely used self-report questionnaire is the Student-Teacher Relationship Scale (STRS) [35], which assesses “the teacher’s perception of, and feeling about, the child’s behavior toward her”; these perceptions are thought to be a key component influencing the teacher’s ability to engage in positive interactions with the child [36, p. 126]. Compared to observational measures, the use of self-reports is often more economical. However, self-report data can be influenced by self-representation bias or social desirability bias, “a tendency of individuals to present themselves and their practices in a favorable way” [37, p. 628]. Thus, it is important to test whether observational and self-report measures yield the same results. Comparability across studies is also often challenged by differences in the data collection schedule (beginning, mid, or end of school year; cross-sectional or longitudinal).

ECEC quality indicators as moderators

In the past, most research has tested direct associations between ECEC quality indicators and children’s skills and knowledge. As reviewed above (and in other sources, e.g., [6]), the strength of direct associations is small to moderate, yet a considerable number of studies find no significant links. Indirect associations have been examined to a lesser extent despite the fact that predominant conceptual models assume process quality as a mechanism underlying the association between structural characteristics and child outcomes [6, p. 6]. The NICHD Early Child Care Research Network [38] study provided initial evidence for indirect relations from structural aspects of ECEC programs to child outcomes (cognitive and social competence) through the quality of processes in the ECEC program. However, the indirect associations were very small and findings have not been replicated in some other large-scale data sets [6].

A better understanding of the underlying processes linking ECEC quality with child outcomes may be gained by testing interaction effects of ECEC quality indicators. It is possible that it is a specific combination of structural and process aspects that matters for children’s outcomes. For example, it has been found that associations between process quality and children’s social-emotional skills were moderated by dosage. Children who spent more time in high-quality ECEC settings were reported to have higher levels of social-emotional skills compared to children who spent less time in high-quality ECEC settings [39, 40]. Such results suggest that structural characteristics can reinforce positive effects of high levels of process quality as well as negative effects of low levels of process quality. Likewise, positive effects of the level of instructional processes on children’s gains in literacy and numeracy might only be present in small classes where teachers can engage in differentiated instruction, whereas in large classes such an effect might be absent. However, results are mixed and other studies did not find significant results when testing structural characteristics of the ECEC setting as moderators of the association between process quality and child outcomes [41].

Alternatively, it might also be possible that associations between structural aspects and child outcomes will be stronger under high levels of process quality, compared to low levels of process quality. For example, a study found that teacher emotional support moderated the association between classroom composition (i.e., high levels of problem behaviors in the classroom) and children’s relational functioning. The negative effect of a highly challenging class on individual children’s relational functioning was buffered by teachers who were highly emotionally supportive [42]. Although fewer studies have tested the moderating role of process quality, such analyses can provide important information about the mechanisms underlying the associations between ECEC quality indicators and child outcomes.

Different family and economic backgrounds

The policy focus on ECEC quality is driven by the assumption that participation in ECEC can compensate for educational disadvantages associated with low family socio-economic status and ethnic minority status. Indeed, evaluations of ECEC programs (for example, Head Start in the U.S.) provide evidence for this assumption, suggesting that program effects may be largest for children from disadvantaged backgrounds [43–46]. ECEC programs have the potential to compensate for educational disadvantages by providing rich and engaging learning environments and by supporting these children in catching up with their peers [47]. As such, ECEC programs can disrupt trends leading to achievement gaps which have been found to start prior to age three [44]. Yet, to date, systems, including ECEC, continue to perpetuate racism and inequities, thus “reduc[ing] opportunities for certain groups to thrive and meet their potential” [44, p. 65]. In order to strengthen the impact of early learning, more effective, evidence-based policies are thus needed.

Children from ethnic minority backgrounds

Children from ethnic minority backgrounds, many of whom grow up in bilingual environments with different languages spoken in the home and at school [48], often experience difficulties at school which can lead to school readiness and achievement gaps [49, 50]. Several explanations are discussed in the literature, such as racism and lack of attention to culture [51], differences in parental involvement in children’s education [52], in child-rearing and education-related beliefs and practices [53], in parents’ education [48], and in opportunities for informal learning at home [49]. In addition, differences in the quality of teacher-child interactions may contribute to the different school experiences of children from ethnic minority backgrounds. Teacher-child interactions operate similarly across ethnic groups [26, 54, 55]. Yet, the quality of instruction has been shown to decrease as the proportion of ethnic minority children in a classroom increases [56–58]. Such findings are of concern as they highlight the risk of ECEC programs not meeting the needs of ethnic minority children.

Children from low-income families

High-quality ECEC programs are thought to promote equality in educational opportunities for all children, but especially for children from low-income families [2, 46]. There is consensus that these children, compared to their peers from more advantaged backgrounds, have a greater risk of school failure and adjustment problems that may start as early as preprimary age [3, 8, 47, 59]. Due to limited material resources or educational opportunities at home, these children may lack the skills necessary to succeed in school [59]. Large-scale interventions have been implemented to reduce socioeconomic achievement gaps. While beneficial effects have been demonstrated during the preschool years (such as cognitive gains and reduction of behavior problems), they often do not persist into kindergarten and school [8, 60]. Such results give reason to question whether ECEC services are reaching their potential and whether these programs have the anticipated beneficial effects for children from low-income families [6].

The present study

The present meta-analysis aims to contribute synthesized evidence from across the world on the developmental impact of ECEC to inform the formulation of an extended ECEC quality model for use in research, policy-making and practice. It also aims to address various shortcomings of prior meta-analyses, for example, the limitation to one geographical region [9, 19, 24] or to a narrow range of child outcomes (e.g., cognitive and academic outcomes) [19]. Specifically, we examined the magnitude of associations between ECEC quality indicators and a variety of child outcomes using studies from around the world. In doing so, we explored characteristics and mechanisms that may underlie the associations between ECEC quality and child outcomes. The additional questions are: (a) Do ECEC quality-child outcome associations differ by quality indicator (structural versus process) or by different aspects of process quality (emotional support, instructional support, classroom management)? (b) To what degree does one indicator of ECEC quality (structural or process) moderate the associations between the other indicator of ECEC quality (structural or process) and child outcomes? (c) Do process quality-child outcome associations differ by the method of process quality assessment (observed versus teacher-report)? (d) Do ECEC quality-outcome associations differ by timing (beginning versus end of school year; concurrently versus longitudinally)? And (e) Do ECEC quality-child outcome associations differ by family ethnic minority status and socio-economic background?

Method

Literature search, inclusion criteria, and coding

Search procedures

We conducted a systematic review (the review was not registered) of the global literature published over a 10-year period, between January 2010 and June 2020, using the databases PsycINFO, ERIC, EBSCOhost, and PubMed. We based these dates on previous meta-analyses investigating associations between various aspects of ECEC quality and child outcomes for which literature searches included articles published prior to 2010 [e.g., 24, 61–67]. We allowed overlap in years with the previous meta-analyses and stopped in 2020, at a time when ECEC provision was challenged because of the COVID-19 pandemic. The time period (2010–2020) also covers a period of ECEC-focused policy initiatives across the world, for example, the national plan for medium and long-term education reform and development (2010–2020) in China [68], recommendations on high-quality ECEC systems [69], or the G20 Initiative for Early Childhood Development [7] and G20 Education Ministers’ Declaration [70].

Searches focused on early childhood education; additional search strings included keywords for ECEC quality indicators (process quality and structural characteristics) and child outcomes (for a complete list of keywords see S1 File). In addition, a call for unpublished research or research published in the grey literature was sent out through the authors’ professional network to reduce publication bias. In total, 170 scholars in research institutions, NGOs, and public institutions worldwide were contacted via email and asked to share their work on this topic. This call yielded 40 studies. Prior to the coding process, we checked whether any of these studies had been published in the meantime and, if so, used the published version of the article. This was the case for two studies (published in 2021). Together, this search yielded 8,932 articles. After removing duplicates, 4,880 unique articles were identified.

Inclusion criteria

To select articles for inclusion in the meta-analysis, it was first determined whether studies met eligibility criteria: (1) the study assessed indicators of quality in center-based ECEC programs catering to children ages 0–6 years; and (2) the study assessed child outcomes. This was done by reading titles and abstracts. To ensure intercoder agreement, 20% of studies were screened by two independent coders. The agreement between coders was, on average, 86%. During this phase, 808 articles met the eligibility criteria. Second, the articles were screened for three additional inclusion criteria: (1) a copy of the full article was available in English; (2) the article reported an effect size measure of at least one quality indicator-child outcome association; and (3) measures of ECEC quality and child outcomes were collected within the same school year. During this step, meta-analyses, literature reviews and systematic syntheses of multiple datasets were excluded. Longitudinal studies were only excluded when child outcome measures were from a different school year than ECEC quality measures. Intervention studies were excluded unless relevant effect size measures were reported prior to the intervention. Intercoder agreement was, on average, 83% for 20% of the studies. After excluding articles that did not meet the inclusion criteria, 265 articles were eligible for coding. During coding, 77 articles were excluded. Reasons for exclusion during coding included the following: the study did not differentiate between different school years in the analysis (n = 43); did not report relevant effect sizes (n = 11); used a child outcome as a predictor of ECEC quality (n = 3); or collected the ECEC quality measure after the child outcome was assessed (n = 9). The selection process is shown in a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart in Fig 1. An overview of all included studies is shown in S1 Table of S2 File.

Fig 1. PRISMA flowchart of article selection.


Coding of study characteristics

Copies of eligible articles were obtained. Articles were coded by two coders. Coding accuracy was ensured by double-coding 20% of articles by both coders; intercoder agreement was high (91%). Disagreements between coders were discussed; the consensus became the code. We recorded several key study features: year of publication, country where data was collected, name of the study from which data was reported, number of participants (teacher, child), where participants were recruited (ECEC setting or through other sources), age and gender of participants, percentage of children in the sample from low-income background and from ethnic minority background, and the time in the school year when the ECEC quality and child outcome measures were collected (beginning/mid/end of school year).

We also recorded structural and process indicators of ECEC quality, and the associations between these indicators and the child outcomes reported in each study. Five structural indicators were coded, differentiating between classroom-level (two indicators: teacher-child ratio, group size) and teacher-level factors (three indicators: teacher age, teacher education [coded as percentage of teachers in the sample with a multi-year college or university degree], years of experience). Four process quality indicators focused on the quality of interactions and instruction: emotional quality (emotional support/closeness/positive climate/responsiveness), instructional quality (stimulation of cognitive and language development), managerial quality (classroom organization/classroom management/chaos [reverse coded–absence of chaos]), and conflict/negative climate [reverse coded–absence of conflict]. To categorize items or scales of measures into these four indicators, we relied on the description of the measure and labeling by the author(s) of the original study. If the study used a global process-quality score (i.e., the score did not differentiate between specific indicators but instead reflected an average level of process quality across multiple indicators), this was recorded as a separate process quality indicator. When multiple measures within one domain were reported, the average was calculated and used in further analyses.

We recorded any association measures reported between ECEC quality and eight types of child outcomes. Types of child outcome included academic outcomes (separately for math and literacy/language), behavioral skills (such as self-regulation, executive function, positive learning behaviors), social competence, behavioral problems (such as aggressive behavior, conduct problems), social-emotional problems (such as withdrawal, anxious/depressed), and motor skills. If the study used a global score of child outcome, this was recorded. When multiple outcomes within one category were reported, the average was calculated and used in further analyses.
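To make the averaging rule concrete, the following is a minimal sketch in R; the study labels, column names, and values are hypothetical, not taken from any coded study. When a study reports several coefficients for the same quality indicator and child outcome category, they are collapsed to their mean before entering the meta-analysis.

```r
# Hypothetical coded effect sizes; in practice these come from the coding spreadsheet.
coded <- data.frame(
  study_id  = c("A", "A", "A", "B"),
  indicator = c("emotional", "emotional", "instructional", "emotional"),
  outcome   = c("literacy", "literacy", "literacy", "math"),
  r         = c(0.10, 0.14, 0.08, 0.05)
)

# Average multiple coefficients reported for the same indicator-outcome pair within a study.
averaged <- aggregate(r ~ study_id + indicator + outcome, data = coded, FUN = mean)
averaged  # study A's two emotional-quality/literacy coefficients collapse to 0.12
```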

In addition, we coded whether an observational or self-report (i.e., teacher report) measure was used for the particular aspect of process quality. Finally, we recorded whether zero-order correlation (coefficient without covariates) or regression coefficient (coefficient with covariates), or both, were reported for an association between process quality and child outcome. If both types of coefficients were provided for the same process quality-child outcome association, both coefficients were coded.

The coding was done using an Excel spreadsheet in which the coding sections were detailed. An amendment was made to include process quality descriptive information (Mean, SD) in an additional Excel document. Both documents are available in the Data and Results of S13 File.

Data analytic plan

Our measure of effect size to examine the associations between ECEC quality indicators and child outcome domains was the strength and direction of association, or correlation, between an ECEC quality measure and a child outcome measure (the document is available in the Data and Results of S13 File). Most studies reported correlation indices in the form of either a zero-order correlation coefficient or a regression coefficient; both were standardized (by multiplying the unstandardized coefficient by the standard deviation of the ECEC quality measure divided by the standard deviation of the child outcome measure), such that a coefficient ranges between -1 and 1. Three studies were excluded from the meta-analysis during the data preparation phase. The effect sizes reported in two studies could not be standardized because the required standard deviation information was not reported; the other study reported non-significant findings but did not include the exact estimates, which were therefore recorded as missing during coding (Fig 1). Out of the remaining 1,112 unique effect sizes that were recorded, 68 were excluded because the standardized effect size values did not range between -1 and 1. As a result, a total of 1,044 effect sizes reported from 185 articles were eligible for meta-analysis. These standardized effect sizes were then transformed into Fisher’s z-scale, to normalize the sampling distribution of effect sizes, using the ‘escalc’ function from the R package ‘metafor’ [71].
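As an illustration of this preparation step, the sketch below uses made-up numbers (not values from any included study) to standardize an unstandardized regression coefficient and convert it to Fisher’s z with the ‘escalc’ function from ‘metafor’ mentioned above.

```r
library(metafor)  # provides escalc() for the Fisher's z transformation

b_unstd <- 0.25   # unstandardized regression coefficient (hypothetical)
sd_x    <- 0.60   # SD of the ECEC quality measure (hypothetical)
sd_y    <- 2.00   # SD of the child outcome measure (hypothetical)
n       <- 120    # sample size (hypothetical)

# Standardize so the coefficient is on a correlation-like scale between -1 and 1.
r_std <- b_unstd * sd_x / sd_y
stopifnot(abs(r_std) <= 1)  # effect sizes falling outside [-1, 1] were excluded

# Fisher's z transformation to normalize the sampling distribution of effect sizes.
es <- escalc(measure = "ZCOR", ri = r_std, ni = n)
es$yi  # transformed effect size
es$vi  # sampling variance, 1 / (n - 3)
```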

For all meta-analyses, two sets of models were estimated–one with and one without control variables regarding child sample characteristics: sex composition (proportion of girls in the sample) and average age of children in the sample. These two variables were standardized before entering analyses. However, out of the 185 studies, 23 studies did not have either of these variables available. To maximize the number of studies available to estimate pooled effect sizes (overall association between ECEC quality indicators and child outcomes), we first estimated a model based on all available 185 studies, and then estimated another model based on a subset of studies that reported the control variables, so that a model with and without these variables could be compared.

For the meta-analytic models estimated, we report tau-squared (τ2) which captures the absolute measure of between-study variance in random-effects meta-analysis. We also report I-squared (I2) which describes the percentage of variation across studies that is due to heterogeneity rather than chance, and is calculated as the ratio of true heterogeneity to total variance across the observed effect sizes [72]. Rho, a within-study effect size correlation, was set at 0.8 for the analyses, and a sensitivity analysis was conducted to check if τ2 and average effect size estimates were robust to different values of rho ranging from 0 to 1.
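For reference, the verbal definition of I-squared given above corresponds to one common formulation in random-effects meta-analysis; the notation below is ours and is not spelled out in the original article, with the second term in the denominator denoting a typical within-study sampling variance.

```latex
I^2 = 100\% \times \frac{\hat{\tau}^2}{\hat{\tau}^2 + \tilde{\sigma}^2}
```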

To examine the overall magnitude of associations between ECEC quality indicators and child outcomes, we averaged effects pooled within each of the child outcomes via eight random-effects models. We used robust variance estimation to calculate the pooled effect sizes, in which the weights of each estimate of association were based on the working model that effect sizes are correlated within studies. Details on the formulas specifying the correlated effects covariance structure and weights calculation can be found in [73, p. 4]. Structural and process indicators of ECEC quality were combined to maximize the number of included effect sizes. Because effect sizes for each child outcome could include the association between several quality indicators and the child outcome category, some studies contributed multiple effect sizes for each analysis. To handle within-study dependence among effect sizes, robust variance estimation was implemented for all meta-analyses conducted [74, 75], using the ‘robumeta’ package in R [76]. This approach followed previous studies that dealt with a similar issue of correlated effect sizes within studies [77, 78].
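A minimal sketch of this pooling step, using the ‘robumeta’ package named above, is shown below; the data frame ‘es_literacy’ and its columns are assumptions made for illustration (one row per effect size, with Fisher’s z value ‘yi’, its sampling variance ‘vi’, and a study identifier ‘study’), not the authors’ actual analysis files.

```r
library(robumeta)  # robust variance estimation for dependent effect sizes

pooled <- robu(
  formula      = yi ~ 1,        # intercept-only model: pooled ECEC quality-literacy association
  data         = es_literacy,   # assumed data frame of coded effect sizes for one outcome
  studynum     = study,         # clusters effect sizes within the article they came from
  var.eff.size = vi,
  modelweights = "CORR",        # correlated-effects working model
  rho          = 0.8,           # assumed within-study correlation (varied from 0 to 1 in sensitivity checks)
  small        = TRUE           # small-sample corrections for the robust standard errors
)
print(pooled)  # pooled estimate, robust 95% CI, and heterogeneity statistics
```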

In addition, we explored characteristics and mechanisms that may underlie the associations between ECEC quality and child outcomes. We describe the analytical approach below for each additional question:

  1. Do ECEC quality-outcome associations differ by quality indicators (structural versus process) or by different domains of process quality? We first tested the associations of structural characteristics and of process quality indicators with child outcomes separately, using the same analytical approach as described above. Next, we conducted moderator analyses using mixed-effects meta-regression models (a minimal sketch of this setup is given after this list). Specifically, a binary variable indicating whether an effect size was for a process quality indicator-child outcome association or a structural indicator-child outcome association was added to the models described above. As such, effect sizes that involved a structural indicator of ECEC quality were compared with those that involved a process indicator of ECEC quality, regardless of whether an effect size came from the same study or not. The mixed-effects models allowed for adjusting for within-study clustering of effect sizes. We further examined if the associations differed by three domains of process quality: instructional quality, emotional quality (reflects the presence of emotional support and the absence of conflict), and managerial quality (reflects the presence of classroom management and the absence of chaos). To test this difference, we estimated the same models as above, but restricted the analysis to process quality-child outcome associations and added a categorical variable indicating whether an effect size was for one of the three domains of process quality. Due to too few studies available (fewer than 3), we could not reliably estimate models involving motor skills and thus do not present results on motor skills.

  2. To what degree does one indicator of ECEC quality (structural or process) moderate the ECEC quality-child outcome associations of the other indicator of ECEC quality (structural or process)? We first tested whether structural indicators moderated process quality-child outcome associations. For that, we restricted data to effect sizes for process quality-child outcome associations and to studies that reported measures for at least one of the five structural indicators (teacher-child ratio, group size, teacher age, teacher education, teacher’s years of teaching in ECEC settings). Because not all studies reported the five structural indicators simultaneously, separate mixed-effects models were estimated for each structural indicator within each child outcome, to test if process quality-child outcome associations differed by the degree of a structural indicator. To do so, we added a structural indicator variable to a model, where the outcome variable was an effect size for process quality-child outcome associations. We followed the same approach for testing whether process quality moderated structural indicator-child outcome associations. For these analyses, the data were restricted to effect sizes for structural indicator-child outcome associations and to studies that reported mean scores of measures for at least one of the five process quality indicators (emotional quality, instructional quality, managerial, conflict, and global score). The available effect sizes were grouped by child outcome to test whether structural indicator-child outcome associations differed by the degree of a process quality indicator. A process quality indicator variable was added to a model where the outcome variable was an effect size for structural indicator-child outcome associations. For both sets of analyses, we could not reliably estimate models involving motor skills because models did not converge due to the small number of studies available.

  3. Do process quality-child outcome associations differ by the method of process quality assessment (observed versus teacher-report)? Here the goal was to compare effect sizes reported from the same studies rather than estimating pooled effect sizes across studies. We prepared a dataset with those studies for which the same process quality-child outcome association was reported twice, once using an observed process quality measure and once using a self-reported process quality measure. To compare effect sizes, we used Kruskal-Wallis non-parametric test because the normality assumption was not met for the effect size values.

  4. Do process quality-child outcome associations differ by timing (beginning versus end of school year; concurrently versus longitudinally)? The dataset for this analysis included only studies that reported the same type of quality-child outcome association at different time points during the school year (beginning versus end of school year; at the same versus at different time points during the school year). To compare effect sizes, we used Kruskal-Wallis non-parametric test because the normality assumption was not met for the effect size values.

  5. Do ECEC quality-child outcome associations differ by family ethnic minority and socio-economic background? We built on the models for estimating the overall pooled effect sizes but instead of random-effects models, mixed-effects models were used by simultaneously estimating fixed-effects of the percentage of children in a study sample from family ethnic minority background or low-income background. In these models, the percentage of children in a study sample from a minority or a low-income background was used. The coefficients for the background moderation effect were multiplied by 10 to aid interpretation (i.e., 1 unit equals 10% change in the proportion of children from minority/low-income background). The models were restricted to studies from high-income countries as this information was not reported in studies from low-to-middle income countries. Effect sizes involving motor skills did not enter the analysis, because only one study reported both pieces of information.
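The sketch below illustrates the moderator setup referenced in (1); it is not the authors’ exact code. It assumes a data frame ‘es_math’ of math effect sizes with a hypothetical 0/1 column ‘is_structural’ flagging whether an effect size involves a structural (1) rather than a process (0) quality indicator.

```r
library(robumeta)

mod_fit <- robu(
  formula      = yi ~ is_structural,  # coefficient = average difference relative to process quality
  data         = es_math,             # assumed data frame of math effect sizes
  studynum     = study,               # adjusts for within-study clustering of effect sizes
  var.eff.size = vi,
  modelweights = "CORR",
  rho          = 0.8,
  small        = TRUE
)
print(mod_fit)  # the is_structural coefficient parallels the contrasts reported in Table 2
```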

Sensitivity analyses

We undertook different sets of sensitivity analyses, mainly for the pooled effect sizes within each of the eight child outcome categories. First, we examined possible publication bias using significance funnel plots [79] (S3 File). Significance funnel plots display the same data (i.e., point estimates on the x-axis and estimated standard errors on the y-axis) as classic funnel plots; however, while the latter assume that publication bias is less severe among large-sample studies, significance funnel plots help to assess publication bias based on the assumption that publication bias operates on smaller p-values rather than sample size per se. Because publication bias based on selection (e.g., the file drawer problem) is well known in psychology, we chose to examine significance funnel plots assuming that selection acts at the alpha level of 0.05. Second, we examined whether pooled effect sizes changed after adding publication year as a covariate to the models estimated. A third sensitivity analysis examined whether pooled effect sizes changed if positive behavioral and social-emotional outcomes were grouped together. This approach was repeated with negative behavioral and social-emotional outcomes being grouped together. A final sensitivity analysis examined whether effect sizes differed depending on the type of effect size (effect size measure calculated with vs without covariates). This analysis used only studies that reported the same type of quality-child outcome associations assessed at the same time point using the same process quality observation method, so that any difference between effect sizes was attributable to the type of effect sizes. We again used the Kruskal-Wallis non-parametric test to compare effect sizes reported from different conditions within studies (regression coefficient vs. zero-order coefficient) because the normality assumption was not met for the effect size values (S4 File).
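For the within-study comparisons described here (and in analyses 3 and 4 above), the non-parametric test can be run as in the following sketch; the study labels and effect size values are invented for illustration only.

```r
# Hypothetical matched effect sizes: each study reports the same association twice,
# once as a zero-order correlation and once as a regression coefficient with covariates.
paired_es <- data.frame(
  study   = rep(c("A", "B", "C"), each = 2),
  es_type = rep(c("zero_order", "regression"), times = 3),
  yi      = c(0.12, 0.08, 0.20, 0.15, 0.05, 0.02)
)

# Kruskal-Wallis test, used because the effect size values were not normally distributed.
kruskal.test(yi ~ es_type, data = paired_es)
```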

Results

Descriptive overview

The meta-analysis included effect sizes from 185 studies with a total of 38,168 teachers and 229,697 children, from over 8,237 ECEC sites (information about the number of sites was missing for 87 studies). Studies varied in sample size (teachers: 4–7,600, M = 240, SD = 743; children: 47–16,356, M = 1,248, SD = 2,317). Children were, on average, 54.41 months old (ranging from 14 to 75 months, SD = 10.72) and gender was similarly distributed (on average, 49% girls). On average, 66% of children in study samples were from low-income backgrounds and 58% from ethnic minority backgrounds. When reported, the majority of teachers were female (97%) and had, on average, 12 years of teaching experience (SD = 4.33). Information regarding class size was reported in 58 studies, with an average of 19 children per classroom (SD = 5.62, ranging from 7 to 35 children). Participants (teachers and children) were predominantly recruited in the ECEC setting.

The majority of studies (123 out of 185) reported research from the U.S. A breakdown of studies by United Nations Regional Groups (https://en.wikipedia.org/wiki/United_Nations_Regional_Groups) revealed that 39 studies were from countries grouped in the Western European and Others Group. Studies were also grouped by the World Bank classification of countries by income level (https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups). According to this breakdown, 165 studies reported research from countries classified as high income (see Table 1 for a summary). S1 Table in the S2 File shows an overview of the studies entered in the meta-analysis, including study and sample information, indicators of ECEC quality (structural and process), and child outcomes.

Table 1. Number of studies providing usable effect size for the meta-analysis (total of 185).

Category                          # of studies
U.S. and others
    U.S.                          123
    Others                         62
By region 1
    Africa                          3
    Asia and the Pacific           12
    Eastern Europe                  0
    Latin America and Caribbean     8
    Western Europe                 39
By income 2
    Low to middle income           20
    High income                   165

1 United Nations Regional Groups.

2 World Bank Country and Lending Groups.

What was the association between ECEC quality and child outcomes?

We first sought to establish the magnitude of the association between indicators of ECEC quality and child outcomes. We combined effect sizes for structural and process quality indicators; effect sizes were pooled within each of the eight child outcome categories. Fig 2 summarizes the pooled effect sizes for the eight child outcome categories, based on the intercept-only models before adjusting for potential differences by child sex and age composition of the samples. We interpreted the pooled effect sizes as significant when the 95% Confidence Interval did not include zero.

Fig 2. Pooled effect size estimates for ECEC quality-child outcome associations.


The pooled estimate for each child outcome category is shown as a circle; circles differ in size by the number of unique studies available (larger circles reflect a higher number of unique studies; the number of unique studies is given in parentheses). The estimates’ 95% confidence intervals are shown as black lines.

Most child outcome categories showed a significant overall association with ECEC quality, with small effect sizes (in the description of results below, n refers to the number of unique studies). Higher levels of ECEC quality were significantly related to higher levels of academic outcomes (literacy, n = 99: 0.08, 95% C.I. 0.02, 0.13; math, n = 57: 0.07, 95% C.I. 0.03, 0.10), behavioral skills (n = 65: 0.12, 95% C.I. 0.07, 0.17), and social competence (n = 61: 0.13, 95% C.I. 0.07, 0.19), and lower levels of behavioral (n = 61: -0.12, 95% C.I. -0.19, -0.05) and social-emotional problems (n = 27: -0.09, 95% C.I. -0.15, -0.03). For motor skills, however, the small number of studies and available effect sizes limited our ability to assess how robust the positive association between ECEC quality and motor skills would be (n = 3: 0.12, 95% C.I. -0.04, 0.29). Of note, when a global assessment of child outcomes was reported, the association with ECEC quality was not significant (n = 14: 0.02, 95% C.I. -0.07, 0.11). In addition, the 95% confidence intervals suggested some uncertainty around the estimates. We repeated the analysis using only studies from high-income countries. The results replicated and are reported in S5 File. However, for low-to-middle income countries, the same separate analysis could not be completed because of the small number of unique studies.

When significance funnel plots were examined (S3 File), there was evidence that the summary estimates were robust to publication bias for academic skills (math and language/literacy), behavioral skills, and social competence, suggested by the close distance between the pooled estimates across effect sizes (the black diamond) and within only the non-affirmative (i.e., p-value larger than 0.05) effect sizes (the grey diamond). In contrast, some evidence of publication bias was suggested for behavioral problems, social-emotional problems, and motor skills. Except for behavioral problems and social-emotional problems, publication bias was assumed to operate in a positive direction at an alpha level of 0.05.

Control variables

The sex composition (proportion of girls in the sample) was not associated with effect size magnitude for any of the models. The average age of children in the sample (in months) was associated with effect size magnitude, however, only for the models that included behavioral problems and the global measure. Specifically, for the association between ECEC quality and behavioral problems (available studies: n = 51), a 1 standardized unit (11.3 months) increase in the average age of children was associated with a 0.07 (95% C.I. -0.13, -0.02) decrease in the standardized effect size. For the association between ECEC quality and the global measure of child outcomes, a 1 standardized unit (11.3 months) increase in the average age of children was associated with, on average, a 0.10 (95% C.I. 0, 0.20) increase in effect size.

(a) Structural indicators of ECEC quality versus process quality indicators. We first tested for associations between structural characteristics and child outcomes, and between process quality indicators and child outcomes. In separate analyses, we combined the effect sizes for structural characteristics and the effect sizes for process quality indicators; effect sizes were pooled within each of the eight child outcome categories. For structural characteristics, none of the associations were significant (see S6 File for the results). For process quality indicators, most child outcome categories showed a significant association. Higher levels of process quality were significantly related to higher levels of academic outcomes (literacy, n = 96: 0.09, 95% C.I. 0.03, 0.16; math, n = 56: 0.09, 95% C.I. 0.05, 0.12), behavioral skills (n = 64: 0.13, 95% C.I. 0.08, 0.18), and social competence (n = 59: 0.14, 95% C.I. 0.08, 0.20), and lower levels of behavioral (n = 59: -0.14, 95% C.I. -0.20, -0.07) and social-emotional problems (n = 27: -0.09, 95% C.I. -0.15, -0.02). For motor skills (n = 2: 0.09, 95% C.I. -0.02, 0.20) and when a global assessment of child outcomes was reported (n = 12: 0.04, 95% C.I. -0.08, 0.16), however, the association was not significant.

Table 2 presents the results of the comparison between effect sizes that involved a structural indicator of ECEC quality with those that involved a process quality indicator. The comparison did not consider whether an effect size came from the same study or not. For most child outcomes, the difference in effect size between structural indicators of ECEC quality and process quality indicators was not significant. Two exceptions were found. For math and behavioral skills, the comparison between effect sizes was significant. The results indicated that effect sizes for structural indicators were significantly smaller than for process quality indicators. The findings were robust to the consideration of sex composition and the average age of children in the sample.

Table 2. Differences in effect size by ECEC quality indicator.
Child Outcome Included studies (n) Coefficient (SE) t (df) 95% CI lower, upper
Math 41 -0.09 (0.03) -2.95* (12.12) -0.15, -0.02
41 -0.08 (0.03) -2.56* (12.32) -0.15, -0.01
Language/Literacy 84 -0.07 (0.04) -1.62 (25.94) -0.16, 0.02
84 -0.07 (0.04) -1.70 (26.06) -0.16, 0.02
Behavioral skills 55 -0.08 (0.03) -2.77* (6.31) -0.15, -0.01
55 -0.08 (0.03) -2.57* (6.41) -0.15, -0.00
Social competence 48 -0.08 (0.05) -1.62 (9.75) -0.18, 0.03
48 -0.06 (0.05) -1.18 (9.89) -0.18, 0.05
Behavioral problems 51 0.21 (0.10) 2.13 (10.26) -0.01, 0.43
51 0.18 (0.10) 1.69 (10.66) -0.05, 0.41
Social-emotional problems 24 0.04 (0.08) 0.56 (1.05) -0.82, 0.90
24 0.03 (0.10) 0.31 (1.05) -1.13, 1.19
Global Score 13 -0.14 (0.10) -1.42 (3.03) -0.45, 0.17
13 -0.03 (0.09) -0.33 (3.20) -0.30, 0.24

Note. The reference is process quality indicator. The second row reports the analyses that controlled for sex composition (proportion of girls in the sample) and average child age in the sample (in months).

* Difference between coefficients is significant (i.e., 95% CI does not include zero).

Next, we examined whether the effect size for the associations between process quality and child outcomes differed by the three domains of process quality: instructional, emotional (includes the presence of emotional support and absence of conflict), and managerial quality (includes the presence of routines and structures, and absence of chaos). The first set of analyses did not include control variables. In general, associations between the individual domains of process quality and child outcomes were small and mostly non-significant (S7 File). Some significant albeit small associations were found for the instructional quality domain. Higher levels of instructional support by the teacher in the classroom were positively associated with academic outcomes (literacy, n = 81: 0.11, 95% C.I. 0.06, 0.15; math, n = 40: 0.11, 95% C.I. 0.07, 0.15), behavioral skills (n = 54: 0.12, 95% C.I. 0.06, 0.19), and social competence (n = 46: 0.10, 95% C.I. 0.03, 0.17), and negatively associated with behavioral (n = 49: -0.12, 95% C.I. -0.20, -0.04) and social-emotional problems (n = 24: -0.10, 95% C.I. -0.19, -0.01). When control variables were included, the results were robust and no significant differences between the three domains of process quality were found (S8 File).

(b) Evidence for moderation. We tested whether structural indicators of ECEC quality moderated process quality-child outcome associations (S9 File). The results of the moderator analyses were largely not significant; only two out of 33 tested moderations were significant. Teacher education moderated the association between process quality and language/literacy, both without (coefficient (SE) = -0.12 (0.04), t(df) = -3.45 (30.11), 95% CI -0.20, -0.00) and with control variables (coefficient (SE) = -0.12 (0.04), t(df) = -2.85 (24.27), 95% CI -0.21, -0.03). Teacher-child ratio moderated the association between process quality and behavioral problems (coefficient (SE) = 0.02 (0.00), t(df) = 7.42 (2.50), 95% CI 0.01, 0.03; the model could only be run without control variables because these were not reported in the available studies). We further examined whether process quality moderated the associations between structural indicators of ECEC quality and child outcomes (S10 File). The results of these moderator analyses were not significant.

(c) Method of process quality assessment (observed versus teacher-report). The next question explored whether different approaches to measuring process quality (observational measures versus self-report measures) gave rise to different or similar associations between process quality indicators and child outcomes. The analysis included a within-study comparison of observational versus self-report measures of process quality. There were eight pairs of effect sizes reported from five studies, which allowed for within-study comparison of process quality-child outcome associations according to the method of process quality measurement. The pairs are represented by each line in Fig 3, comparing the effect sizes based on self-reported process quality with those based on observed process quality. Results indicated that effect sizes based on self-reported process quality measures were not only higher (mean difference = 0.33, Kruskal-Wallis [KW] chi-squared = 11.33, df = 1, p-value = 0.00) but also more variable than the ones based on observed process quality measures.

Fig 3. Comparison of effect sizes based on the method of process quality assessment (self-report versus observation).


(d) Differences by the timing of the data collection (beginning versus end of school year; concurrently versus longitudinally). This analysis explored whether the timing of data collection during the ECEC year was related to the strength of the associations between process quality indicators and child outcomes. There was little evidence to suggest that effect sizes differed depending on whether they were reported at the beginning versus the end of the school year (KW chi-squared = 0.71, df = 1, p-value = 0.40) or at the same versus different time points during the same school year (KW chi-squared = 0.23, df = 1, p-value = 0.63) (S11 File).

(e) Did ECEC quality-child outcome associations differ by ethnic minority or socioeconomic family background? The final question examined the unique associations between ECEC quality indicators and child outcomes when family background differences were taken into account. We used the percentage of children from an ethnic minority and a low-income family background in each study as a moderator of the association between ECEC quality indicators and child outcomes. The results yielded coefficients that were close to zero. The models without control variables detected two significant coefficients. The percentage of children from a low-income family background in a study moderated the association between ECEC quality indicators and social competence (coefficient (SE) = -0.00 (0.00), t(df) = 2.54 (10.52), 95% CI -0.00, -0.00) and behavioral problems (coefficient (SE) = 0.00 (0.00), t(df) = 2.33 (12.55), 95% CI 0.00, 0.00). However, the coefficients did not remain significant when control variables were included in the models, suggesting that neither the percentage of children from an ethnic minority in a study nor the percentage of children from a low-income family background in a study were associated with the magnitude of effect sizes between ECEC quality indicators and child outcomes (S12 File).

Discussion

This meta-analysis examined the associations of structural and process indicators of ECEC quality with young children’s academic, behavioral and social outcomes in recent studies conducted between 2010 and 2020, a period when widespread acknowledgement of the importance of both types of quality advanced [11] and when quality was incorporated into the wording of global goals for early childhood development [5]. Although the current meta-analysis aimed to synthesize evidence from around the world, the majority of studies reported data from the U.S. (123 out of 185 studies), thus highlighting the need to expand research beyond the U.S. to inform global, regional, and local ECEC quality research, policy-making and practice. Overall, our findings suggest that higher levels of ECEC quality were significantly related to higher levels of academic outcomes, behavioral skills, motor skills, and social competence, and lower levels of behavioral and social-emotional problems, although the effect sizes were small. The findings align with previous meta-analyses, suggesting that ECEC quality plays an important role for children’s development during the early childhood years [3, 6, 8, 9, 19, 24, 49]. Structural characteristics alone did not significantly relate to child outcomes whereas associations between process quality indicators and most child outcomes were significant, albeit small. A comparison of structural characteristics and process quality indicators, however, did not yield significant differences in effect sizes for most child outcomes. With regard to combined effects of structural and process quality indicators of ECEC on child outcomes, we did not find evidence for moderated associations. We also did not find evidence that ECEC quality-child outcome associations differed by ethnic minority or socioeconomic family background, suggesting that children from various backgrounds benefit from high quality ECEC services.

Overall, the combined effect sizes for structural and process quality indicators of ECEC quality on children’s outcomes were small. The present meta-analysis thus complements findings from a recent meta-analysis on the links between ECEC quality and children’s outcomes [19], but with a larger sample drawn from all regions of the world. In addition, considerable variation in effect sizes was found, both within and across child outcomes. In our meta-analysis, effect sizes were somewhat larger for behavioral and social-emotional outcomes than for academic outcomes. It is possible that these findings hint at ECEC quality effects whose emphasis varies depending on the child outcome in question. Supporting children in the development of academic skills may differ from facilitating children’s behavioral and social-emotional development [80]. However, differences in effect sizes between child outcomes were small. For this reason, our interpretation remains speculative, and further exploration of how ECEC structures and processes differentially influence children’s academic, behavioral, and social-emotional skills is warranted.
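
As a hedged illustration of how such small pooled effects are obtained from correlation coefficients, the sketch below uses the metafor package cited in the Methods [71]; the simulated correlations and sample sizes are placeholders, not data from the included studies.

```r
library(metafor)

# Simulated quality-outcome correlations from k hypothetical studies
set.seed(3)
k  <- 50
ri <- rnorm(k, mean = 0.08, sd = 0.10)       # observed correlations
ni <- sample(100:1500, k, replace = TRUE)    # study sample sizes

# Convert r to Fisher z with sampling variances, pool with a random-effects model,
# and back-transform the pooled estimate to the correlation metric
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni)
res <- rma(yi, vi, data = dat, method = "REML")
predict(res, transf = transf.ztor)           # pooled r with 95% CI
```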

When tested separately, the results of the present meta-analysis indicate significant, albeit small, overall effect sizes for process quality indicators for most child outcomes but not for structural characteristics, suggesting that the combined effect sizes might have been driven by process quality indicators. Numerous studies have demonstrated beneficial effects of high levels of process quality on child outcomes [13, 17, 20, 22, 26, 31, 55, 65]. It is the immediate experiences of children, arising through interactions and activities that are rich in content and stimulation, that are central to children’s learning [81]. ECEC policy and practice thus need to focus on fostering a physical and social environment that enhances positive child developmental outcomes, by providing children with educational materials adapted to their needs, accompanied by warm, sensitive interactions [81, 82]. To ensure the consistent implementation of such practices, ECEC standards, which are often still focused on structural aspects, need to include process quality indicators. Similarly, assessment tools and rating systems used by governments to monitor and evaluate ECEC programs need to place a greater emphasis on process quality [3, 83].

The lack of significant findings for structural characteristics–child outcome associations in the present meta-analysis might be due to the much smaller number of studies for which effect sizes were available. Alternatively, the selection of structural characteristics included in the meta-analysis might have driven the non-significant findings. The selection was informed by prior research and reflected frequently studied characteristics, such as teacher-child ratio and class size as well as teacher age, education and years of experience. Because efforts to improve ECEC quality in many countries focus on these characteristics, they have received considerable policy attention, which is reflected in regulations and quality monitoring [3, 17]. As a consequence, variation in these structural dimensions might have been limited, resulting in reduced predictive power. In the future, other structural characteristics that are, to date, less frequently studied, such as capacity building of the ECEC workforce and working conditions, need to move into the focus of policy and research to further advance the provision of quality services [7].

When effect sizes of structural characteristics and process quality indicators were compared, differences between the two sets of ECEC quality indicators were largely not significant. Two exceptions were identified: children seemed to benefit more from process quality than from structural indicators with regard to early math development and behavioral skills. Promoting early math skills requires very specific stimulation and, as such, high-quality processes may be more critical to math development than structural indicators of ECEC quality [19]. Similarly, process quality has been suggested to be the key educational driver supporting the development of behavioral skills critical for learning, such as self-regulation and positive learning behaviors [84]. It is important to note, however, that the comparison did not consider whether effect sizes came from the same study. As such, the results could be driven by differences between studies rather than by differences between the two sets of ECEC quality indicators. The next step will be to identify differential effects of ECEC quality indicators on children’s outcomes in order to best support optimal learning.

In addition to investigating direct associations between ECEC quality indicators and child outcomes, we also tested moderation. Overall, we did not find consistent evidence that structural indicators of ECEC quality moderated process quality-child outcome associations, nor did we find consistent evidence that process quality moderated the associations between structural indicators of ECEC quality and child outcomes. These findings are in contrast with theoretical assumptions that it is the combination of structural and process indicators of ECEC quality that affects children’s development [6, 81]. However, the lack of findings in the present meta-analysis aligns with the scarce empirical support for this theoretical assumption. Where moderated effects were found in prior research, they emerged only for a few structural-process indicator combinations and were smaller than direct effects (for a review, see [85]). The lack of stronger results may be due to the fact that structural indicators of ECEC quality explain little variance in process quality indicators [85]. This may be particularly true in countries with strong national regulation and monitoring of structural quality, such as the U.S. and many Western European countries, where there is limited variation in structural indicators. It is also important to note that only a few studies included both structural and process indicators of ECEC quality, and even fewer studies focused on the same child outcome. Thus, we might not have had enough power to detect moderation effects.

Overall, associations between ECEC quality and child outcomes were not influenced by the percentage of ethnic-minority children in a study or by the percentage of children from a low-income family background in a study. The findings complement recent meta-analytic findings, suggesting that the beneficial effects of ECEC quality on children’s outcomes can be found for children from various family backgrounds [19]. Together, the meta-analytic evidence supports the notion that high-quality learning can be one strategy to ensure that children are prepared for school [44]. However, it will be important to “sustain the boost that quality preschool education can provide” beyond ECEC [46, p. 31]. The effects of ECEC on child outcomes are more likely sustained when children transition to higher quality schools [46]. However, children from disadvantaged family backgrounds are more likely to attend high-poverty, segregated schools and to have less educated teachers [44]. What is needed are specific, research-based practices and policies that address the root causes of educational disparities. For that, research must consider “children’s development in the context of the child-care system as well as the family system, and recognize the links between these systems and the larger society” [86, p. 165].

In addition, we found that self-report measures of process quality indicators yielded higher and more variable effect sizes compared to observational measures. These findings might have been influenced by biases common to self-report measures, such as self-presentation bias or social desirability bias [37]. For example, it is possible that teachers overestimated the quality of interactions with children in their classroom. Similar results, namely teachers overestimating the quality of their instructional practices, have been found in previous research and were explained by an individual’s desire to avoid responses that reflect negatively on them or contradict common values and expectations of their group [37]. Observational measures have the potential to counter these issues; however, their implementation in research and practice is time- and cost-intensive. Although observational measures are “preferable for seizing the learning potential of ECE centers […] the development of more economic but equally reliable and valid alternatives is necessary” [19, p. 1484].

The final set of results of the present meta-analysis concerned the time in the school year at which ECEC quality and child outcome measures were collected. We did not find evidence to suggest that effect sizes differed depending on whether they were reported at the beginning versus the end of the school year or at the same versus different time points during the same school year.

Limitations and future directions

Despite the attempt to provide a synthesis of the global literature on ECEC quality and child outcome associations, the vast majority of studies eligible for the present meta-analysis included samples from the U.S. In addition, studies with large samples (>500 participants) were also predominantly from the U.S. Together, the results might have been biased towards patterns prevalent in the U.S. that might not apply to other, non-U.S. ECEC contexts. It is possible that the use of English-language databases and the English-language requirement for studies to be included in the coding resulted in studies from non-English majority speaking countries being underrepresented in the data. Increased efforts and resources are needed to overcome the challenges of locating, assessing and including non-English studies in systematic reviews, for example, by using professional translators [87].

Another limitation connected to the dominance of studies from the U.S. concerns the measures used to assess ECEC quality. For example, 81 studies, of which 50 were from the U.S., used a version of the CLASS, an observational tool developed in the U.S. to assess indicators of process quality. As a result, other measures, such as the ECERS-R and ECERS-E, were not as commonly reported, which might have biased the results towards a certain conceptualization of ECEC quality. Relatedly, conceptual and theoretical frameworks that have been derived from research conducted in the U.S. might not apply to ECEC classrooms in other countries [88]. ECEC services in many countries have undergone significant changes over the past years [19]. To build knowledge about children’s educational opportunities across the world, establishing a global research agenda on ECEC will be critical, with a particular emphasis on the role of ECEC in children’s outcomes [19].

The present meta-analysis only included short-term outcomes, restricted to the same school year. An additional limitation was that many studies did not include both structural and process quality indicators in the analyses, and/or multiple child outcomes. Thus, only a limited number of studies were available for testing the mechanisms underlying the associations between ECEC quality indicators and child outcomes. For this reason, we grouped the available effect sizes by child outcome and thus could not test all possible interactions of ECEC quality indicators. Similarly, not all studies reported detailed demographic information (age, sex, ethnic minority status, family SES) about the children participating in the study. Finally, the present meta-analysis relied on correlational effects. Zero-order correlations do not reflect the complexity of ECEC classrooms, pointing to the need for innovative meta-analytic approaches that allow for the aggregation of published multivariate findings [19].

Conclusion

The results of the present meta-analysis suggest that ECEC quality indicators are associated with a broad range of developmental outcomes, albeit with small effect sizes. The implementation of high-quality ECEC services needs to be guided by an extended list of ECEC quality standards that go beyond traditional classroom- and teacher-level structural characteristics and include additional, less frequently studied structural aspects, such as capacity building of the ECEC workforce and working conditions, as well as process quality indicators. The enhancement of ECEC quality standards also needs to be reflected in the assessment tools and rating systems used by governments to monitor and evaluate ECEC programs. If informed by scientific evidence, such a shift in ECEC quality standards can help maximize the benefits of ECEC participation. In addition, more rigorous and nuanced research on the unique and combined effects of multiple ECEC quality dimensions, and of child and family characteristics, is needed to identify effective ways of improving quality and meeting children’s unique needs.

Supporting information

S1 Checklist. PRISMA 2020 for abstracts checklist.

(DOCX)

S2 Checklist. PRISMA 2020 checklist.

(DOCX)

S1 File. List of keywords for literature search.

(DOCX)

S2 File. Descriptive information of studies included in the meta-analysis.

(XLSX)

S3 File. Funnel plots.

(DOCX)

S4 File. Sensitivity analysis.

(DOCX)

S5 File. Association between ECEC quality and child outcomes (only studies from high-income countries).

(DOCX)

S6 File. Associations between structural characteristics and child outcomes.

(DOCX)

S7 File. Differences in ECEC quality–Child outcome associations by the type of process quality domain.

(DOCX)

S8 File. Differences in effect size by process quality domain.

(DOCX)

S9 File. Tests of structural indicators of ECEC quality as moderators of process quality-child outcome associations.

(DOCX)

S10 File. Tests of process quality indicators of ECEC quality as moderators of structural quality-child outcome associations.

(DOCX)

S11 File. Differences by the timing of the data collection.

(DOCX)

S12 File. Differences by ethnic minority or socioeconomic family background.

(DOCX)

S13 File

(ZIP)

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

Specific grant number: GST03. Initials of authors who received each award: AvS. Full names of commercial companies that funded the study or authors: no commercial company funded the study or authors; the funding was provided by the Global TIES for Children Research Center at New York University Abu Dhabi. URL to sponsor’s website: https://nyuad.nyu.edu/en/research/faculty-labs-and-projects/global-ties-for-children.html The funders (other than the named authors) had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.UNESCO Institute for Statistics. Update on SDG 4. 2019. [Google Scholar]
  • 2.Klinkhammer N, Schäfer, B. Quality development and assurance in early childhood education and care: International perspectives. In: Klinkhammer N, Harring D, Gwinner A, editors. Monitoring quality in early childhood education and care: Approaches and experiences from selected countries. Munich, Germany: DJI Verlag; 2017. [Google Scholar]
  • 3.OECD. Engaging young children: Lessons learned from research about quality in early childhood education and care. Paris, France: OECD Publishing; 2018. [Google Scholar]
  • 4.United Nations. Transforming our world: The 2030 agenda for sustainable development. New York, NY: United Nations; 2015. [Google Scholar]
  • 5.United Nations. Revised list of global Sustainable Development Goal indicators. New York, NY: United Nations; 2017. [Google Scholar]
  • 6.Burchinal M. Measuring early care and education quality. Child Development Perspectives. 2018;12:3–9. [Google Scholar]
  • 7.G20 Development Working Group. G20 Initiative for Early Childhood Development. Buenos Aires, Argentina; 2018. Available from: http://www.g20.utoronto.ca/2018/g20_initiative_for_early_childhood_development.pdf [Google Scholar]
  • 8.Barnett W. Effectiveness of early educational intervention. Science. 2011;333:975–8. doi: 10.1126/science.1204534 [DOI] [PubMed] [Google Scholar]
  • 9.Camilli G, Vargas S, Ryan S, Barnett W. Meta-analysis of the effects of early education interventions on cognitive and social development. Teachers College Record. 2010;112:579–620. [Google Scholar]
  • 10.Bierman K, Motamedi M. Social-emotional learning programs for preschool children. In: Durlak J, Weissberg R, Gullotta T, editors. The handbook of social and emotional learning: Research and practice. New York, NY: Guilford; 2017. [Google Scholar]
  • 11.Britto P, Yoshikawa H, Boller K. Quality of early childhood development programs in global contexts rationale for investment, conceptual framework and implications for equity. Social Policy Report. 2011;25:1–31. [Google Scholar]
  • 12.European Union Quality Framework. Proposal for key principles of a Quality Framework for Early Childhood Education and Care; 2014, p. 1–71. [Google Scholar]
  • 13.Mashburn A, Pianta RC, Hamre BK, Downer J, Barbarin O, Bryant D, et al. Measures of classroom quality in prekindergarten and children’s development of academic, language, and social skills. Child Development. 2008;79:732–49. [DOI] [PubMed] [Google Scholar]
  • 14.Slot P, Leseman PPM, Verhagen J, Mulder H. Associations between structural quality aspects and process quality in Dutch early childhood education and care settings. Early Childhood Research Quarterly. 2015;33:64–76. [Google Scholar]
  • 15.Bowne J, Magnuson K, Schindler H, Duncan G, Yoshikawa H. A meta-analysis of class sizes and ratios in early childhood education programs: Are thresholds of quality associated with greater impacts on cognitive, achievement, and socioemotional outcomes? Educational Evaluation and Policy Analysis. 2017;39:407–28. [Google Scholar]
  • 16.Burchinal M, Hong S, Sabol T, Forestieri N, Peisner-Feinberg E, Tarullo L, et al. Quality rating and improvement systems: Secondary data analyses of psychometric properties of scale development. Washington, DC: U.S. Administration for Children and Families; 2016. [Google Scholar]
  • 17.Early D, Maxwell K, Burchinal M, Alva S, Bender R, Bryant D, et al. Teachers’ education, classroom quality, and young children’s academic skills: Results from seven studies of preschool programs. Child Development. 2007;78:558–80. doi: 10.1111/j.1467-8624.2007.01014.x [DOI] [PubMed] [Google Scholar]
  • 18.Lin Y, Magnuson KA. Classroom quality and children’s academic skills in child care centers: Understanding the role of teacher qualifications. Early Childhood Research Quarterly. 2018;42:215–27. [Google Scholar]
  • 19.Ulferts H, Wolf K, Anders Y. Impact of process quality in early childhood education and care on academic outcomes: Longitudinal meta-analysis. Child Development. 2019;90:1474–89. doi: 10.1111/cdev.13296 [DOI] [PubMed] [Google Scholar]
  • 20.Hamre B, Pianta RC, Downer J, DeCoster J, Mashburn A, Jones S, et al. Teaching through interactions: Testing a developmental framework of teacher effectiveness in over 4,000 classrooms. The Elementary School Journal. 2013;113:461–87. doi: 10.1086/669616 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Cappella E, Aber JL, Kim HY. Teaching Beyond Achievement Tests: Perspectives From Developmental and Education Science. In: American Educational Research Association, editor. Handbook of Research on Teaching; 2016. [Google Scholar]
  • 22.Keys T, Farkas G, Burchinal M, Duncan G, Vandell D, Li W, et al. Preschool center quality and school readiness: Quality effects and variation by demographic and child characteristics. Child Development. 2013;84:1171–90. doi: 10.1111/cdev.12048 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Hamre B, Pianta RC. Can Instructional and Emotional Support in the First-Grade Classroom Make a Difference for Children at Risk of School Failure? Child Development. 2005;76(5):949–67. doi: 10.1111/j.1467-8624.2005.00889.x [DOI] [PubMed] [Google Scholar]
  • 24.Burchinal M, Kainz K, Cai Y. How well do measures of quality predict child outcomes? A meta-analysis and coordinated analysis of data from large-scale studies of early childhood settings. In: Zaslow M, editor. Quality measurement in early childhood settings. Baltimore, MD: Brooks; 2011. [Google Scholar]
  • 25.Howes C, Burchinal M, Pianta RC, Bryant D, Early D, Clifford R, et al. Ready to learn? Children’s pre-academic achievement in pre-kindergarten programs. Early Childhood Research Quarterly. 2008;23:27–50. [Google Scholar]
  • 26.Leyva D, Weiland C, Barata M, Yoshikawa H, Snow C, Treviño E, et al. Teacher-child interactions in Chile and their associations with prekindergarten outcomes. Child Development. 2015;86:781–99. doi: 10.1111/cdev.12342 [DOI] [PubMed] [Google Scholar]
  • 27.Doumen S, Verschueren K, Buyse E, Germeijs V, Luyck K, Soenens B. Reciprocal relations between teacher-child conflict and externalizing behavior in kindergarten: A three-wave longitudinal study. Journal of Clinical Child and Adolescent Psychology. 2008;37:588–99. [DOI] [PubMed] [Google Scholar]
  • 28.Hughes J. Longitudinal effects of teacher and student perceptions of teacher-student relationship qualities on academic adjustment. The Elementary School Journal. 2011;112:38–60. doi: 10.1086/660686 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Mason B, Hajovsky D, McCune L, Turek J. Conflict, closeness, and academic skills: A longitudinal examination of the teacher-student relationship. School Psychology Review. 2017;46:177–89. [DOI] [PubMed] [Google Scholar]
  • 30.Ponitz C, Rimm-Kaufman S, Brock L, Nathason L. Early adjustment, gender differences, and classroom organizational climate in first grade. The Elementary School Journal. 2009;110:142–62. [Google Scholar]
  • 31.Rimm-Kaufman S, Curby T, Grimm K, Nathanson L, Brock L. The contribution of children’s self-regulation and classroom quality to children’s adaptive behaviors in the kindergarten classroom. Developmental Psychology. 2009;45:958–72. doi: 10.1037/a0015861 [DOI] [PubMed] [Google Scholar]
  • 32.Weiland C, Ulvestad K, Sachs J, Yoshikawa H. Associations between classroom quality and children’s vocabulary and executive function skills in an urban public prekindergarten program. Early Childhood Research Quarterly. 2013;28:199–209. [Google Scholar]
  • 33.Pianta R, La Paro KM, Hamre BK. Classroom Assessment Scoring System (CLASS): Paul H. Brookes Publishing Company; 2008. [Google Scholar]
  • 34.Harms T, Clifford RM, Cryer D. Early Childhood Environmental Rating Scale—Revised. New York, NY: Teachers College Press; 1998.
  • 35.Pianta R. Student-Teacher Relationship Scale. Charlottesville, VA: University of Virginia; 2001. [Google Scholar]
  • 36.Saft E, Pianta RC. Teachers’ perceptions of their relationships with students: Effects of child age, gender, and ethnicity of teachers and children. School Psychology Quarterly. 2001;16:125–41. [Google Scholar]
  • 37.Kopcha T, Sullivan H. Self-presentation bias in surveys of teachers’ educational technology practices. Educational Technology Research and Development. 2007;55:627–46. [Google Scholar]
  • 38.NICHD Early Child Care Research Network. Child-Care Structure → Process → Outcome: Direct and indirect effects of child-care quality on young children’s development. Psychological Science. 2002;13:199–206. [DOI] [PubMed] [Google Scholar]
  • 39.Aguiar A. Teacher-child interactions and children’s social skills and problem behaviors: ECEC dosage and disability status as moderators [Dissertation]. Lisboa: Instituto Universitario de Lisboa; 2016. [Google Scholar]
  • 40.Cunha F, Heckman J, Lochner L, Masterov D. Interpreting the evidence on Life Cycle Skill Formation. In: Hanushek E, Welch F, editors. Handbook of the economics of education. Amsterdam, The Netherlands: North Holland; 2006. p. 697–812. [Google Scholar]
  • 41.Xue J, Burchinal M, Auger A, Tien H-C, Mashburn A, Peisner-Feinberg E, et al. Testing for dosage outcome associations in early care and education. Monographs of the Society for Research in Child Development. 2016;81:64–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Buyse E, Verschueren K, Doumen S, Van Damme J, Maes F. Classroom problem behavior and teacher-child relationships in kindergarten: The moderating role of classroom climate. Journal of School Psychology. 2008;46(4):367–91. [DOI] [PubMed] [Google Scholar]
  • 43.Bitler M, Hoynes H, Domina T. Experimental evidence on distributional effects of Head Start. 2014. [Google Scholar]
  • 44.Iruka I. Using a social determinants of early learning framework to eliminate educational disparities and opportunity gaps. In: Foundation for Child Development, editor. Getting it Right: Using Implementation Research to Improve Outcomes in Early Care and Education. New York, NY: Author; 2020. p. 63–86. [Google Scholar]
  • 45.Weiland C, Yoshikawa H. Impacts of a prekindergarten program on children’s mathematics, language, literacy, executive function, and emotional skills. Child Development. 2013;84(6):2112–30. doi: 10.1111/cdev.12099 [DOI] [PubMed] [Google Scholar]
  • 46.Yoshikawa H, Weiland C, Brooks-Gunn J. When does preschool matter? The Future of Children. 2016;26(2):21–35. [Google Scholar]
  • 47.Ulferts H, Anders Y. Effects of ECEC on academic outcomes in literacy and mathematics: Meta-analysis of European longitudinal studies. 2015. [Google Scholar]
  • 48.Castro D, Espinosa L, Páez M. Defining and measuring quality early childhood practices that promote dual language learners’ development and learning. Quality measurement in early childhood settings. 2011:257–80. [Google Scholar]
  • 49.Burger K. How does early childhood care and education affect cognitive development? An international review of the effects of early interventions for children from different social backgrounds. Early Childhood Research Quarterly. 2010;25:140–65. [Google Scholar]
  • 50.Hoff E. Interpreting the early language trajectories of children from low-SES and language minority homes: Implications for closing achievement gaps. Developmental Psychology. 2013;49:4–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Bruno E, Iruka I. Reexamining the Carolina Abecedarian Project using an antiracist perspective: Implications for early care and education research. Early Childhood Research Quarterly. 2022;58:165–76. [Google Scholar]
  • 52.Durand T. Latino parental involvement in kindergarten: Findings from the Early Childhood Longitudinal Study. Hispanic Journal of Behavioral Sciences. 2011;33:469–89. [Google Scholar]
  • 53.Harwood R, Leyendecker B, Carlson V, Asenico M, Miller A. Parenting among Latino families in the U.S. In: Bornstein M, editor. Handbook of parenting: Vol 4 Social and applied parenting. Mahwah, NJ: Lawrence Erlbaum; 2002, p. 21–46. [Google Scholar]
  • 54.Cadima J, Doumen S, Verschueren K, Buyse E. Child engagement in the transition to school: Contributions of self-regulation, teacher–child relationships and classroom climate. Early Childhood Research Quarterly. 2015;32:1–12. [Google Scholar]
  • 55.Pakarinen E, Kiuru N, Lerkkanen M-K, Poikkeus A-M, Ahonen T, Nurmi J-E. Instructional support predicts children’s task avoidance in kindergarten. Early Childhood Research Quarterly. 2011;26:376–86. [Google Scholar]
  • 56.Early D, Iruka I, Ritchie S, Barbarin O, Winn DM, Crawford G, et al. How do pre-kindergarteners spend their time? Gender, ethnicity, and income as predictors of experiences in pre-kindergarten classrooms. Early Childhood Research Quarterly. 2010;25:177–93. [Google Scholar]
  • 57.Latham S, Corcoran S, Sattin-Bajaj C, Jennings J. Racial disparities in pre-K quality: Evidence from New York City’s universal pre-K program. Educational Researcher. 2021;50(9):607–17. [Google Scholar]
  • 58.Tonyan H, Howes C. Exploring patterns in time children spend in a variety of child care activities: Associations with environmental quality, ethnicity, and gender. Early Childhood Research Quarterly. 2003;18:121–42. [Google Scholar]
  • 59.Eurydice. Early childhood education and care in Europe: Tackling social and cultural inequalities. Brussels, Belgium; 2009. [Google Scholar]
  • 60.Bailey D, Duncan G , Cunha F, Foorman B, Yeager D. Persistence and fade-out of educational intervention effects: Mechanisms and potential solutions. Psychological Science in the Public Interest. 2020;21(2):55–97. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Brunsek A, Perlman M, Falenchuk O, McMullen E, Fletcher B, Shah P. The relationship between the Early Childhood Environment Rating Scale and its revised form and child outcomes: A systematic review and meta-analysis. Plos One. 2017;12(6):e0178512. doi: 10.1371/journal.pone.0178512 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Eggert F, Fukkink R, Eckhardt A. Impact of in-service professional development programs for early childhood teachers on quality ratings and child outcomes: A meta-analysis. Review of Educational Research. 2018;88(3):401–33. [Google Scholar]
  • 63.Falenchuk O, Perlman M, McMullen E, Fletcher B, Shah P. Education of staff in preschool aged classrooms in child care centers and child outcomes: A meta-analysis and systematic review. Plos One. 2017;12(8):e0183673. doi: 10.1371/journal.pone.0183673 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Jensen P, Rasmussen A Professional development and its impact on children in early childhood education and care: A meta-analysis based on European studies. Scandinavian Journal of Educational Research. 2019;63(6):935–50. [Google Scholar]
  • 65.Hong S, Sabol T, Burchinal M, Tarullo L, Zaslow M, Peisner-Feinberg E. ECE quality indicators and child outcomes: Analyses of six large child care studies. Early Childhood Research Quarterly. 2019;49:202–17. [Google Scholar]
  • 66.Perlman M, Falenchuk O, Fletcher B, McMullen E, Beyene J, Shah P. A systematic review and meta-analysis of a measure of staff/child interaction quality (the classroom assessment scoring system) in early childhood education and care settings and child outcomes. Plos One. 2016;11(12):e0167660. doi: 10.1371/journal.pone.0167660 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Perlman M, Fletcher B, Falenchuk O, Brunsek A, McMullen E, Shah P. Child-staff ratios in early childhood education and care settings and child outcomes: A systematic review and meta-analysis. Plos One. 2017;12(1):e0170256. doi: 10.1371/journal.pone.0170256 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Zhou X. Early childhood education policy development in China. International Journal of Child Care and Education Policy. 2011;5:29–39. [Google Scholar]
  • 69.Council of the European Union. Council Recommendation of 22 May 2019 on High-Quality Early Childhood Education and Care Systems (2019/C 189/02); 2019. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32019H0605(01)&rid=4.
  • 70.G20 Education Ministers’ Declaration. G20 Education Ministers’ Declaration 2018 Building consensus for fair and sustainable development. Unleashing people’s potential 2018. Available from: http://www.g20.utoronto.ca/2018/2018-09-05-g20_education_ministers_declaration_english.pdf.
  • 71.Viechtbauer W. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software. 2010;36:1–48. [Google Scholar]
  • 72.Higgins J, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Fisher Z, Tipton E. robumeta: An R package for robust variance estimation in meta-analysis. arXiv preprint arXiv:1503.02220; 2015. [Google Scholar]
  • 74.Hedges L, Tipton E, Johnson M. Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods. 2010;1:39–65. doi: 10.1002/jrsm.5 [DOI] [PubMed] [Google Scholar]
  • 75.Tanner-Smith E, Tipton E. Robust variance estimation with dependent effect sizes: Practical considerations including a software tutorial in Stata and SPSS. Research Synthesis Methods. 2014;5:13–30. doi: 10.1002/jrsm.1091 [DOI] [PubMed] [Google Scholar]
  • 76.Fisher Z, Tipton E, Zhipeng H, Fisher MZ. robumeta: Robust variance meta-regression. 2017. [Google Scholar]
  • 77.Sheridan S, Smith T, Moorman K, Beretvas S, Park S. A meta-analysis of family-school interventions and children’s social-emotional functioning: Moderators and components of efficacy. Review of Educational Research. 2019;89:296–332. [Google Scholar]
  • 78.Smith T, Sheridan SM, Kim EM, Park S, Beretvas SN. The effects of 26 family-school partnership interventions on academic and social-emotional functioning: A meta-analysis exploring what works for whom. Education Psychology Review. 2020;32:511–44. [Google Scholar]
  • 79.Mathur M, VanderWeele TJ. Sensitivity analysis for publication bias in meta‐analyses. Journal of the Royal Statistical Society Series C, Applied Statistics. 2020;69(5):1091–119. doi: 10.1111/rssc.12440 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Hestenes L, Kintner-Duffy V, Wang YC, La Paro K, Mims SU, Crosby D, et al. Comparisons among quality measures in child care settings: Understanding the use of multiple measures in North Carolina’s QRIS and their links to social-emotional development in preschool children. Early Childhood Research Quarterly. 2015;30:199–214. [Google Scholar]
  • 81.Melhuish E, Ereky-Stevens K, Petrogiannis K, Ariescu A, Penderi E, Rentzou K, et al. A review of research on the effects of early childhood education and care (ECEC) upon child development. 2015. Contract No.: WP4.1 Curriculum and Quality Analysis Impact Review (CARE). [Google Scholar]
  • 82.Bigras N, Bouchard C, Cantin G, Brunson L, Coutu S, Lemay L, et al. A comparative study of structural and process quality in center-based and family-based child care services. Child & Youth Care Forum. 2010;39:129–50. [Google Scholar]
  • 83.Ishimine K, Tayler C, Bennett J. Quality and early childhood education and care: A policy initiative for the 21st century. International Journal of Child Care and Education Policy. 2010;4:67–80. [Google Scholar]
  • 84.Sylva K, Sammons P, Melhuish E, Siraj I, Taggart B. Developing 21st century skills in early childhood: the contribution of process quality to self-regulation and pro-social behaviour. Zeitschrift für Erziehungswissenschaft. 2020;23:465–84. [Google Scholar]
  • 85.Slot P. Structural characteristics and process quality in early childhood education and care: A literature review. Paris, France; 2018. [Google Scholar]
  • 86.Marshall N. The quality of early child care and children’s development. Current Directions in Psychological Science. 2004;13(4):165–8. [Google Scholar]
  • 87.Neimann Rasmussen L, Montgomery P. The prevalence of and factors associated with inclusion of non-English language studies in Campbell systematic reviews: a survey and meta-epidemiological study. Systematic Reviews. 2018;7:Article 129. doi: 10.1186/s13643-018-0786-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Chen S, Wolf S. Measuring the Quality of Early Childhood Education in Low- and Middle-Income Countries. Frontiers in Psychology. 2021. doi: 10.3389/fpsyg.2021.774740 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Sze Yan Liu

16 Jan 2023

PONE-D-22-31943
Early Childhood Education and Care Quality and Associations with Child Outcomes: A Meta-Analysis
PLOS ONE

Dear Dr. von Suchodoletz,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 02 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Sze Yan Liu, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. 

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. Please ensure that you include a title page within your main document. You should list all authors and all affiliations as per our author instructions and clearly indicate the corresponding author.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 

Additional Editor Comments:

This is a well-written study. I agree with the reviewers that while the analysis is generally clear the text could benefit from more details.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: PONE-D-22-31943 Review

This is a thoughtful and well-written meta-analysis examining the influence of ECEC program structural and process indicators of quality on child outcomes. It is clear that the authors put a tremendous amount of work into this review, which is an important contribution to the literature. I have listed some major and several minor comments below.

Abstract:

Can the structural indicators of ECEC quality and process quality indicators that were associated with child outcomes be described in the abstract? It remains vague to just state that indicators of quality (in general) are associated with various child outcomes.

Introduction

Line 105-109: It would be useful for authors to expand on why it is important to continue to examine whether structural indicators of ECEC quality are associated with child outcomes, when they have not been shown to matter in previous work. Why is further investigation needed to inform policy and practice.

Methods

Line 241: Why did authors exclude studies prior to 2010? The rationale in the article was that a previous meta-analysis (Rao published 2017) had gone up to 2012. But that analysis did not attempt to answer the questions about quality, but rather different types of programs such as child-focused or parent-directed or nutrition. A number of pre-2010 papers could be included, for example:

Aboud, F. E. (2006). Evaluation of an early childhood preschool program in rural Bangladesh. Early Childhood Research Quarterly, 21, 46–60. doi:10.1016/j.ecresq.2006.01.008

Moore, A. C., Akhter, S., & Aboud, F. E. (2008). Evaluating an improved quality preschool program in rural Bangladesh. International Journal of Educational Development, 28, 118–131. doi:10.1016/j.ijedudev.2007.05.003.

Mwaura, P., Sylva, K., & Malmberg, L.-E. (2008). Evaluating the Madrasa pre-school programme in East Africa: A quasi experimental study. International Journal of Early Years Education, 16, 237–255.

The supporting excel sheet listing studies and their measures appears to exclude research using the ECERS-E as the measure of quality. Correlations with the ECERS-E tend to be higher than the ECERS-R and studies using the measure have been frequently conducted in LMICs and in Britain.

It is not clear why longitudinal studies, when a child outcome came from a time after the quality measure, and intervention studies were excluded. Why would their associations be irrelevant to the questions asked here? These two features are most likely to exclude LMIC studies where interventions are often the only ethical reason for conducting such a study.

It appears that 6 interventions were excluded. The number of longitudinal studies excluded is not reported.

Several publications after 2010 were omitted. It would be important to include these especially as they are from LMIC, which the authors claim to be lacking:

Malmberg L-E, Mwaura P, Sylva K. Effects of a preschool intervention on cognitive development among East-African preschool children: A flexibly time-coded growth model. Early Child Res Q 2011;26(1):124-33.

Aboud, Frances E., Kerrie Proulx, and Zaitu Asrilla. An impact evaluation of Plan Indonesia’s early childhood program. Canadian Journal of Public Health 107.4 (2016): e366-e372.

Su, Yufen, et al. Preschool quality and child development in China. Early childhood research quarterly 56 (2021): 15-26.

Aboud, F.E. & Hossain, K (2011). The impact of preprimary school on primary school achievement in Bangladesh. Early Childhood Research Quarterly, 26, 237-246.

PLOS recently published a meta-analysis of parenting programs, separating out high-income country findings from LMICs. Could the same be done here? Out of 185 studies listed in the excel sheet, 165 were from HICs. This is not representative of the quality-outcome research conducted in LMICs. Perhaps you can conduct one analysis for HIC and a separate one for LMIC studies, adding more LMIC studies than currently included (see comments above).

Line 289: please specify for the readers what is meant by a “global process quality score”?

Please specify how each estimate of association is weighted when calculating the pooled effect size.

Line 295. The five structural qualities were clear. However, the four process qualities were not. How did you categorize CLASS and ECERS-R items into these four process qualities?

Results:

Two questions were posed: "whether such structural characteristics itself systematically change the effects of process quality on child outcomes, or whether process quality changes the impact of structural characteristics on child outcomes." Could you also ask and present the results for the two simpler questions before the moderated ones, namely: Do structural characteristics impact child outcomes and Do process characteristics impact child outcomes?

Where are the individual measures of association in each study presented? Meta-analyses typically present the data extracted from each study that contributes to the analyses.

Figure 1: I would expect that effect sizes would differ depending on the indicator of quality (i.e., type of structural and type of process indicators of quality). Why were these not separated, and effect sizes for child outcomes calculated for each indicator?

Figure 1: why does the size of the circle not represent the number of effect sizes (rather than unique studies) used to estimate the pooled effect size? It seems it should be number of estimates of association since some studies had multiple estimates of association. Also, in the results section (e.g., paragraph starting on line 484), does the n represent number of studies or number of effect sizes used to estimate the pooled effect size?

Table 2: please make clear which type of quality indicator is used as the reference (I believe it is process). In the text, you state that effect sizes for associations that include process indicators are more positive than those that include structural indicators, yet the regression coefficient estimates in the table are negative. This is confusing. I suggest authors stay consistent in the way they discuss and present the direction of associations.

Figure SI 5: While this figure is nice, it would benefit from also listing the estimates and 95% CIs for the pooled effect sizes.

Table S2: I see that instructional quality was used as the reference group. But there are two other groups, so why do we not see how each group differs from the reference?

(b) Evidence for moderation: Where are the non-significant results presented?

Line 484. Can you comment on whether the effect sizes were small, moderate or large? They all appear to be small and Literacy and math appear to be very small.

Line 589: The authors state that there was significant moderation from family income on the association between quality indicators and social competence and behavioral problems. However, the coefficients are 0 (95% CI: 0-0). Please explain.

In moderation analyses, it is typical to see effect sizes for each stratum (e.g., high vs low proportion of children from low-income families). What do the coefficients in Figure S5 represent? Is this the coefficient for the interaction term? If so, please make this explicit in the Figure. If not, please explain and clarify what the coefficient represents.

Discussion

Line 699. It is difficult to draw conclusions about frameworks and evidence from LMICs unless you add more research from LMICs and conduct analyses comparing HICs and LMICs.

Line 712. You stated that the reliance on correlation coefficients is a limitation. What kind of analysis would be more appropriate?

Reviewer #2: I appreciate the opportunity to review the meta-analysis on early childhood education and care quality and child outcomes. The study is well done with clear rationales and descriptions of the methods and results. I believe the findings will add to the literature on ECEC quality and children’s development. I provide specific comments below but want to emphasize that I think the authors need to be clear that the effect sizes found are small and more information is needed on the practical significance of the findings. Additionally, the implication section is underdeveloped, and more effort should be put into discussing how these findings fit with the broader literature and what this means for practice and policy.

Literature review is well written and thorough, with the exception of the discussion of the interaction of quality indicators. The justification for interaction effects is underdeveloped – why would one think that an association between structural aspects and child outcomes will be stronger with higher levels of process quality? The theoretical model of structure – process – outcome would not predict this. Unclear what is motivating this question. Also, do the authors have any hypotheses for which quality elements together are most predictive of child outcomes or how the combination may differ depending on outcome examined?

Section on Children from Ethnic Minority Backgrounds appropriately and importantly highlights the challenges students may encounter and how their background can contribute to differences in achievement. However, evidence from Head Start and other pre-K evaluations suggests that multilingual learners may benefit the most from ECE (see work by Marianne Bitler and others on Head Start, and the NC pre-K RCT evaluation results).

Given the focus on structural and process quality, I was surprised the authors did not discuss policy more in the introduction and literature review as how quality in programs is regulated. This is particularly important in the global context where policies differ widely and may contribute to differences observed/reported in studies.

Methods – overall this section is well done. I have just a few specific comments below.

• Does the requirement of English result in studies from non-English majority speaking countries not being represented in the data?

• What is the rationale for dropping intervention studies?

• Did the authors attempt to contact study authors to obtain any missing information?

• After looking at the excel database, I was surprised to find some key ECE studies not there (Soliday Hong et al., 2019; Keys et al., 2013; Burchinal et al., 2016). I am guessing this is because of the studies including meta-analyses of the datasets. I think a further description of the methods used to achieve the final sample, even in the supplementary files, would help readers better understand decisions.

Results are clearly written and nicely organized.

Discussion section

• Overall, the section is thorough. However, the authors should emphasize throughout, particularly in the first paragraph that the overall effect is small for all child outcomes. This is done nicely in the second paragraph.

• I’m not sure what to make of the significant differences between structural and process quality indicators for the two outcomes – the authors should expand on these findings. Such as is this a data issue or is there reason to believe the findings are meaningful and aligned with prior research.

• I’m unsure what is meant by two sentences on page 32-33 “While, at the sample level, these results seem to be robust…when studying the nature of the effects of ECE quality.”

• The authors should situate their effect sizes in the literature and provide information on the practical significance. Also, understanding the cost of improving quality is important and is not equal for all indicators. More should be said on this point.

• The conclusion section is very broad and generally mentions quality improvement. As discussed above, this can look quite different depending on which areas of quality are the focus (e.g., structural – requiring teachers to have certain degrees vs. improving process quality). Right now, I think the implications are too general and not particularly useful to the field.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 May 25;18(5):e0285985. doi: 10.1371/journal.pone.0285985.r002

Author response to Decision Letter 0


27 Mar 2023

15-03-2023

We wish to thank the reviewers for their valuable comments. In the revised manuscript, we addressed all comments. The detailed responses are listed below and in the response letter.

Reviewer #1: PONE-D-22-31943 Review

This is a thoughtful and well-written meta-analysis examining the influence of ECEC program structural and process indicators of quality on child outcomes. It is clear that the authors put a tremendous amount of work into this review, which is an important contribution to the literature. I have listed some major and several minor comments below.

Abstract:

Can the structural indicators of ECEC quality and process quality indicators that were associated with child outcomes be described in the abstract? It remains vague to just state that indicators of quality (in general) are associated with various child outcomes.

RESPONSE: Many thanks for your comments. In the analyses, indicators of ECEC quality were combined to maximize the number of included effect sizes. As such, the results do not differentiate between indicators. We are now more explicit about this in the abstract by adding: The averaged effects, pooled within each of the child outcomes, suggest that higher levels of ECEC quality are significantly related to child outcomes (line 24).

In response to another comment raised by the reviewer, we now added separate analyses for structural and process quality indicators. The results from these analyses are now also reported in the abstract (lines 31-35): When structural and process quality indicators were tested separately, structural characteristics alone did not significantly relate to child outcomes whereas associations between process quality indicators and most child outcomes were significant, albeit small. A comparison of structural characteristics and process quality indicators, however, did not yield significant differences in effect sizes for most child outcomes.

Introduction

Line 105-109: It would be useful for authors to expand on why it is important to continue to examine whether structural indicators of ECEC quality are associated with child outcomes, when they have not been shown to matter in previous work. Why is further investigation needed to inform policy and practice?

RESPONSE: In response to the comment, we expanded on why it is important to examine structural indicators of ECEC quality. In lines 114-122, we added:

Because structural features of ECEC settings are more readily regulated, many countries focus on structural standards as a key strategy for improving the quality of ECEC programs (Early, et al., 2007; OECD, 2018). The G20 Development Working Group (2018) urges a focus on the quality of the infrastructure and on capacity building, decent work conditions, and adequate training for the ECEC workforce. Government regulations can set standards for these features, for example, by lifting the minimum requirements for the teacher-child ratio or requiring a certain percentage of teaching staff to be qualified in early childhood education. Such structural regulations determine the setting in which children learn and thus may be important preconditions for process quality (OECD, 2018).

Methods

Line 241: Why did authors exclude studies prior to 2010? The rationale in the article was that a previous meta-analysis (Rao published 2017) had gone up to 2012. But that analysis did not attempt to answer the questions about quality, but rather different types of programs such as child-focused or parent-directed or nutrition. A number of pre-2010 papers could be included, for example:

Aboud, F. E. (2006). Evaluation of an early childhood preschool program in rural Bangladesh. Early Childhood Research Quarterly, 21, 46–60. doi:10.1016/j.ecresq.2006.01.008

Moore, A. C., Akhter, S., & Aboud, F. E. (2008). Evaluating an improved quality preschool program in rural Bangladesh. International Journal of Educational Development, 28, 118–131. doi:10.1016/j.ijedudev.2007.05.003.

Mwaura, P., Sylva, K., & Malmberg, L.-E. (2008). Evaluating the Madrasa pre-school programme in East Africa: A quasi experimental study. International Journal of Early Years Education, 16, 237–255.

RESPONSE: Many thanks for pointing this out. We have now focused on previous meta-analyses that attempt to answer questions related to ECEC quality and child outcomes. A number of these meta-analyses included articles published prior to 2010, which is why we decided to keep 2010 as the starting year for our literature search. We added this information in the revised manuscript in lines 274-278:

We based these dates on previous meta-analyses investigating associations between various aspects of ECEC quality and child outcomes for which literature searches included articles published prior to 2010 (e.g., Brunsek, et al., 2017; Burchinal et al., 2011; Eggert et al., 2018; Falenchuk et al., 2017; Hong, et al., 2019; Jensen & Rasmussen, 2019; Perlman, et al., 2016; Perlman, et al., 2017).

We also added the following (lines 280-285):

The time period (2010-2020) also covers a period of ECEC-focused policy initiatives across the world, for example, the national plan for medium and long-term education reform and development (2010-2020) in China (Zhou, 2015), recommendations on high-quality ECEC systems by the Council of the European Union (2019), or the G20 Initiative for Early Childhood Development (2018) and G20 Education Ministers’ Declaration (2018).

The supporting excel sheet listing studies and their measures appears to exclude research using the ECERS-E as the measure of quality. Correlations with the ECERS-E tend to be higher than the ECERS-R and studies using the measure have been frequently conducted in LMICs and in Britain.

RESPONSE: This is an important point. The inclusion of studies in the meta-analysis was not based on the measures used to assess indicators of ECEC quality. Rather, the inclusion was based on whether the study reported an effect size relevant to the research questions (i.e., an association between ECEC quality and child outcomes). The lack of research using the ECERS-E as a measure of ECEC quality was not a result of the inclusion/exclusion criteria used in the present meta-analysis.

It is also possible that many studies using the ECERS as a measure of ECEC quality were not included because the studies were published prior to 2010. For example, in a recent meta-analysis testing the relationship between the Early Childhood Environment Rating Scale and its revised form and child outcomes, 68% of included studies (51 out of 75 as per supplemental information S3) were published prior to 2010.

Brunsek, A., Perlman, M., Falenchuk, O., McMullen, E., Fletcher, B., & Shah, P. S. (2017). The relationship between the Early Childhood Environment Rating Scale and its revised form and child outcomes: A systematic review and meta-analysis. PloS one, 12(6), e0178512.

In lines 809-815, we addressed your comment in the limitation:

Another limitation related to the dominance of studies from the U.S. might be related to the measures used to assess ECEC quality. For example, 81 studies, of which 50 were from the U.S., used a version of the CLASS, an observational tool developed in the U.S. to assess indicators of process quality. As a result, other measures, such as the ECERS-R and ECERS-E were not as commonly reported which might have biased the results towards a certain conceptualization of ECEC quality.

It is not clear why longitudinal studies, when a child outcome came from a time after the quality measure, and intervention studies were excluded. Why would their associations be irrelevant to the questions asked here? These two features are most likely to exclude LMIC studies where interventions are often the only ethical reason for conducting such a study.

RESPONSE: Thank you for this comment. We apologize for not being clear in the description of inclusion criteria. The comment relates to one eligibility criterion (line 298): “the study assessed indicators of quality in center-based ECEC programs catering to children ages 0-6 years” and two inclusion criteria (reported in lines 303-304): “the article reported effect size measure of at least one quality indicator-child outcome association” and “measures of ECEC quality and child outcomes were collected within the same school year”. A longitudinal study was included if an effect size measure of at least one ECEC quality indicator-child outcome association was available when both ECEC quality and child outcomes were assessed during the same school year of the ECEC program.

There were two reasons why longitudinal studies were excluded:

(1) Child outcome measures were collected in a different school year than the ECEC quality measures. To avoid introducing bias into the results, longitudinal studies with child outcome measures from a different school year than the ECEC quality measure were excluded. Across longitudinal studies, the interval between the measurement of ECEC quality and the measurement of child outcomes was inconsistent, ranging from 1 to 10 years. Such studies would need to be analyzed separately by length of follow-up, and there were not enough effect sizes to permit such an analysis.

(2) Child outcome measures were assessed before the ECEC quality measures. In this case, ECEC quality was no longer the predictor of child outcomes but rather was predicted by them, which was not our research question.

It appears that 6 interventions were excluded. The number of longitudinal studies excluded is not reported.

Several publications after 2010 were omitted. It would be important to include these especially as they are from LMIC, which the authors claim to be lacking:

Malmberg L-E, Mwaura P, Sylva K. Effects of a preschool intervention on cognitive development among East-African preschool children: A flexibly time-coded growth model. Early Child Res Q 2011;26(1):124-33.

Aboud, Frances E., Kerrie Proulx, and Zaitu Asrilla. An impact evaluation of Plan Indonesia’s early childhood program. Canadian Journal of Public Health 107.4 (2016): e366-e372.

Su, Yufen, et al. Preschool quality and child development in China. Early childhood research quarterly 56 (2021): 15-26.

Aboud, F.E. & Hossain, K (2011). The impact of preprimary school on primary school achievement in Bangladesh. Early Childhood Research Quarterly, 26, 237-246.

RESPONSE: The comment relates to an exclusion criterion: “Intervention studies were excluded unless relevant effect size measures were reported prior to the intervention.” (reported in line 308). The decision was made to avoid bias in the results that might have been introduced by the intervention. In an attempt to standardize, as much as possible across studies, the conditions under which effect sizes were reported, only pre-intervention (baseline) effect sizes were included from intervention studies. This was because the interventions targeted ECEC quality and aimed to change (increase) it, thus reflecting very different conditions from studies that observed ECEC quality without attempting to increase it. The 6 excluded intervention studies did not report such baseline effect sizes and were therefore excluded.

PLOS recently published a meta-analysis of parenting programs, separating out high-income country findings from LMICs. Could the same be done here? Out of 185 studies listed in the excel sheet, 165 were from HICs. This is not representative of the quality-outcome research conducted in LMICs. Perhaps you can conduct one analysis for HIC studies and a separate one for LMIC studies, adding more LMIC studies than currently included (see comments above).

RESPONSE: Thank you for the suggestion. As the reviewer mentioned, the number of studies from LMICs that could be identified by our literature search during the a-priori defined search period was small and did not allow us to run a separate analysis. In response to the comment, however, we did run a separate analysis for studies from high-income countries only for the research question regarding the association between ECEC quality and child outcomes. The results are reported in the supplemental materials (SI5).

We added the following to the revised manuscript (lines 549-552):

We repeated the analysis using only studies from high-income countries. The results replicated and are reported in Supplemental Information SI5. For low-to-middle income countries, however, the same separate analysis could not be completed because of the small number of unique studies.

Line 289: please specify for the readers what is meant by a “global process quality score”?

RESPONSE: More details are now provided in the revised manuscript (lines 343-345).

If the study used a global process-quality score (i.e., the score did not differentiate between specific indicators but instead reflected an average level of process quality across multiple indicators), this was recorded as a separate process quality indicator.

Please specify how each estimate of association is weighted when calculating the pooled effect size.

RESPONSE: The information has been added to the revised manuscript (lines 401-404).

We used robust variance estimation to calculate the pooled effect sizes, in which the weight of each estimate of association was based on the assumption that effect sizes are correlated within studies. Details on the formulas specifying the correlated effects covariance structure and weights calculation can be found in Fisher and Tipton (2015, p. 4).
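For readers less familiar with this weighting scheme, the sketch below is a minimal illustration in Python (not the authors' analysis code) of the simplified correlated-effects weights and the sandwich standard error for an intercept-only model, in the spirit of the approach described in Fisher and Tipton (2015). The between-study variance tau2 is assumed given, and the small-sample corrections used in practice are omitted; all function and variable names are illustrative.

```python
# Minimal sketch of correlated-effects weighting for an intercept-only
# robust variance estimation (RVE) meta-analysis.
# Assumed simplified weights: w_ij = 1 / (k_j * (v_bar_j + tau2)),
# i.e., every effect size in study j receives the same weight.
import numpy as np

def rve_pooled_effect(effect_sizes, variances, study_ids, tau2=0.01):
    """Pooled effect and robust SE, assuming effect sizes are correlated within studies."""
    es = np.asarray(effect_sizes, dtype=float)
    v = np.asarray(variances, dtype=float)
    ids = np.asarray(study_ids)

    w = np.empty_like(es)
    for s in np.unique(ids):
        mask = ids == s
        k_j = mask.sum()                         # number of effect sizes in study j
        v_bar = v[mask].mean()                   # mean sampling variance in study j
        w[mask] = 1.0 / (k_j * (v_bar + tau2))   # same weight for all estimates in study j

    b = np.sum(w * es) / np.sum(w)               # weighted mean (pooled) effect size

    # Sandwich (robust) variance for the intercept-only model:
    # V_R = sum_j (sum_i w_ij * residual_ij)^2 / (sum_ij w_ij)^2
    num = 0.0
    for s in np.unique(ids):
        mask = ids == s
        num += np.sum(w[mask] * (es[mask] - b)) ** 2
    se = np.sqrt(num) / np.sum(w)
    return b, se

# Toy usage: three studies, two of which contribute multiple correlated effect sizes.
b, se = rve_pooled_effect(
    effect_sizes=[0.10, 0.08, 0.12, 0.05, 0.15],
    variances=[0.01, 0.01, 0.02, 0.015, 0.02],
    study_ids=["s1", "s1", "s2", "s2", "s3"],
)
print(f"pooled effect = {b:.3f}, robust SE = {se:.3f}")
```

Because every estimate within a study receives the same weight, a study that contributes many correlated effect sizes is not counted as if it had contributed that many independent estimates, which is why the authors report the number of unique studies rather than the number of effect sizes.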

Line 295. The five structural qualities were clear. However, the four process qualities were not. How did you categorize CLASS and ECERS-R items into these four process qualities?

RESPONSE: In the revised manuscript we now provide additional information for how items or scales were categorized (lines 341-343).

To categorize items or scales of measures into these four indicators, we relied on the description of the measure and labeling by the author(s) of the original study.

Results:

Two questions were posed: "whether such structural characteristics itself systematically change the effects of process quality on child outcomes, or whether process quality changes the impact of structural characteristics on child outcomes." Could you also ask and present the results for the two simpler questions before the moderated ones, namely: Do structural characteristics impact child outcomes, and do process characteristics impact child outcomes?

RESPONSE: Thank you for the suggestion. We now added the recommended separate analyses and tested for associations between structural characteristics and child outcomes, and between process quality indicators and child outcomes.

The results are reported in the main manuscript (lines 579-591):

We first tested for associations between structural characteristics and child outcomes, and between process quality indicators and child outcomes. In separate analyses, we combined the effect sizes for structural characteristics and the effect sizes for process quality indicators; effect sizes were pooled within each of the 8 child outcome categories. For structural characteristics, none of the associations were significant (see Supplement Information SI6 for the results). For process quality indicators, most child outcome categories showed a significant association. Higher levels of process quality were significantly related to higher levels of academic outcomes (literacy, n=96: 0.09, 95% C.I. 0.03 – 0.16; math, n=56: 0.09, 95% C.I. 0.05 – 0.12), behavioral skills (n=64: 0.13, 95% C.I. 0.08 – 0.18), and social competence (n=59: 0.14, 95% C.I. 0.08 – 0.20), and lower levels of behavioral (n=59: -0.14, 95% C.I. -0.20 - -0.07) and social-emotional problems (n=27: -0.09, 95% C.I. -0.15 - -0.02). For motor skills (n=2: 0.09, 95% C.I. -0.02 – 0.20) and when a global assessment of child outcomes was reported (n=12: 0.04, 95% C.I. -0.08 – 0.16), however, the association was not significant.

And in Supplement Information (SI 6):

SI6: Associations between structural characteristics and child outcomes

The associations between structural characteristics and child outcomes were not significant (literacy, n=28: 0.03, 95% C.I. -0.02 – 0.08; math, n=16: 0.01, 95% C.I. -0.04 – 0.05; behavioral skills, n=9: 0.01, 95% C.I. -0.04 – 0.05; social competence, n=13: 0.03, 95% C.I. -0.03 – 0.08; behavioral problems, n=13: -0.03, 95% C.I. -0.07 - 0.13; social-emotional problems, n=2: -0.02, 95% C.I. -0.88 - 0.84; motor skills, n=2: 0.14, 95% C.I. -0.61 – 0.89; and global assessment of child outcomes, n=5: -0.05, 95% C.I. -0.26 – 0.15).

Where are the individual measures of association in each study presented? Meta-analyses typically present the data extracted from each study that contributes to the analyses.

RESPONSE: Thank you for your comment. The documents are available in the Data and Results supplemental materials:

Excel document (Coding Sheet): (FINAL) Coding Sheet_11_25_22.xlsx

Excel document (Effect Sizes): ECEC Quality_A Meta-Analysis_all_effect_sizes.xlsx

Figure 1: I would expect that effect sizes would differ depending on the indicator of quality (i.e., type of structural and type of process indicators of quality). Why were these not separated, and effect sizes for child outcomes calculated for each indicator?

RESPONSE: Thank you for the comment. To examine the overall magnitude of associations between ECEC quality indicators and child outcomes, we averaged effects pooled within each of the child outcomes. Structural and process indicators of ECEC quality were combined to maximize the number of included effect sizes. Figure 1 presents the pooled effect size estimates for ECEC quality-child outcome associations.

In response to a previous comment, we now added the recommended separate analyses and tested for associations between structural characteristics and child outcomes, and between process quality indicators and child outcomes. For the analysis testing associations between structural characteristics and child outcomes, we combined all structural indicators of ECEC quality. Similarly, for the analysis testing associations between process quality indicators and child outcomes, we combined all process quality indicators. This was done to maximize the number of included studies. Because of the large confidence intervals observed for the separate analyses, we decided not to present the findings in Figure 1. However, the results are presented in the main manuscript (lines 579-591) and the Supplement Information (SI 6).

Figure 1: why does the size of the circle not represent the number of effect sizes (rather than unique studies) used to estimate the pooled effect size? It seems it should be number of estimates of association since some studies had multiple estimates of association. Also, in the results section (e.g., paragraph starting on line 484), does the n represent number of studies or number of effect sizes used to estimate the pooled effect size?

RESPONSE: Thank you for the comment. We used the robust variance estimation method, which takes into account that some studies contributed multiple effect sizes. Thus, the number of effect sizes might be misleading, as they could come from a small number of studies. For this reason, we decided to report the number of unique studies.

In the revised manuscript, we added to the results section, that n represents the number of unique studies (line 539).

Table 2: please make clear which type of quality indicator is used as the reference (I believe it is process). In the text, you state that effect sizes for associations that include process indicators are more positive than those that include structural indicators, yet the regression coefficient estimates in the table are negative. This is confusing. I suggest authors stay consistent in the way they discuss and present the direction of associations.

RESPONSE: Thank you for the comment. As per the reviewer’s suggestion, we added the information to the table note. The reference is process quality. We also revised the text to align with the table.

Figure SI 5: While this figure is nice, it would benefit from also listing the estimates and 95% CIs for the pooled effect sizes.

RESPONSE: The requested information has been added to the figure.

Table S2: I see that instructional quality was used as the reference group. But there are two other groups, so why do we not see how each group differs from the reference?

RESPONSE: In Table S2, we now report three coefficients for each child outcome. The statistics reported in the first row for each outcome reflect managerial quality (in reference to instructional quality); the statistics reported in the second row for each outcome reflect emotional quality (in reference to instructional quality); the statistics reported in the third row for each outcome reflect emotional quality (in reference to managerial quality). We included this information in the table note.

(b) Evidence for moderation: Where are the non-significant results presented?

RESPONSE: The results are available in the Data and Results supplemental materials; Excel document: 220513_results_meta-analyses_Revision1_10032023.xlsx (Sheets (b) moderation struct and (b) moderation process). We now also added the results to Supplement Information Table S3 (Tests of structural indicators of ECEC quality as moderators of process quality-child outcome associations) and Table S4 (Tests of process quality indicators of ECEC quality as moderators of structural quality-child outcome associations).

Line 484. Can you comment on whether the effect sizes were small, moderate or large? They all appear to be small and Literacy and math appear to be very small.

RESPONSE: This has been added (line 539).

Line 589: The authors state that there was significant moderation from family income on the association between quality indicators and social competence and behavioral problems. However, the coefficients are 0 (95% CI: 0-0). Please explain.

RESPONSE: Thank you for the comment. In the manuscript, we only reported two decimals. Because the numbers were very small, it rounded to 0.00. The actual numbers are as follows:

Social competence: coefficient = -0.0028, SE = 0.0011, 95% Confidence Interval = -0.0053, -0.0003

Behavioral problems: coefficient = 0.0029, SE = 0.0012, 95% Confidence Interval = -0.0002, 0.0056

The detailed results are available in the Data and Results supplemental materials; Excel document: 220513_results_meta-analyses_Revision1_10032023.xlsx (Sheet (e) minority SES).

In moderation analyses, it is typical to see effect sizes for each stratum (e.g., high vs low proportion of children from low-income families). What do the coefficients in Figure S5 represent? Is this the coefficient for the interaction term? If so, please make this explicit in the Figure. If not, please explain and clarify what the coefficient represents.

RESPONSE: Thank you for the comment. The coefficients reported in the figure can be interpreted as the extent to which the association between ECEC quality and child outcomes changes if the percentage of children from an ethnic minority or a low-income family background increases by 10%. We added this information to the figure caption/note.
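One way to formalize the moderation model implied by this interpretation is the meta-regression sketched below; the division of the moderator by 10 is an assumption introduced here only to match the stated per-10-percentage-point reading, not a specification taken from the manuscript.

```latex
% Illustrative meta-regression form (assumed, not the authors' exact specification)
\[
T_{ij} = \beta_0 + \beta_1 \left(\tfrac{P_j}{10}\right) + u_j + \varepsilon_{ij}
\]
% T_{ij}: the i-th ECEC quality-child outcome effect size from study j
% P_j: percentage of children from an ethnic minority or low-income family background in study j
% u_j: between-study random effect; \varepsilon_{ij}: sampling error
% Under this scaling, \beta_1 is the change in the pooled association per
% 10-percentage-point increase in P_j, matching the interpretation in the figure note.
```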

Discussion

Line 699. It is difficult to draw conclusions about frameworks and evidence from LMICs unless you add more research from LMICs and conduct analyses comparing HICs and LMICs.

RESPONSE: Many thanks for the comment. The discussion about conceptual and theoretical frameworks is not specifically focused on a comparison between HICs and LMICs. We agree with the reviewer that we do not have the data to make such a claim. Rather, we wanted to point out that the majority of studies included in the meta-analysis drew on samples from the U.S. (123 studies compared to 62 studies from countries other than the U.S.). In addition, studies with large samples (>500 participants) were also predominantly from the U.S. We concluded that the results might have been biased towards patterns prevalent in the U.S. that might not apply to other, non-U.S. ECEC contexts.

Line 712. You stated that the reliance on correlation coefficients is a limitation. What kind of analysis would be more appropriate?

RESPONSE: In response to the comment, we added the following to the revised manuscript (lines: 830-831):

Zero-order correlations do not reflect the complexity of ECEC classrooms, pointing to the need for innovative meta-analytic approaches that allow for the aggregation of published multivariate findings (Ulferts et al., 2019).

Reviewer #2: I appreciate the opportunity to review the meta-analysis on early childhood education and care quality and child outcomes. The study is well done with clear rationales and descriptions of the methods and results. I believe the findings will add to the literature on ECEC quality and children’s development. I provide specific comments below but want to emphasize that I think the authors need to be clear that the effect sizes found are small and more information is needed on the practical significance of the findings. Additionally, the implication section is underdeveloped, and more effort should be put into discussing how these findings fit with the broader literature and what this means for practice and policy.

Literature review is well written and thorough, with the exception of the discussion of the interaction of quality indicators. The justification for interaction effects is underdeveloped – why would one think that an association between structural aspects and child outcomes will be stronger with higher levels of process quality? The theoretical model of structure – process – outcome would not predict this. Unclear what is motivating this question. Also, do the authors have any hypotheses for which quality elements together are most predictive of child outcomes or how the combination may differ depending on outcome examined?

RESPONSE: Many thanks for the comment. In response to the feedback, we added more information to the discussion of the interaction of ECEC quality indicators. This specific aspect of the meta-analysis was largely exploratory which is why we did not have specific hypotheses.

The following was added to the introduction (lines 181-204):

A better understanding of the underlying processes linking ECEC quality with child outcomes may be gained by testing interaction effects of ECEC quality indicators. It is possible that it is a specific combination of structural and process aspects that matters for children’s outcomes. For example, it has been found that associations between process quality and children’s social-emotional skills were moderated by dosage. Children who spent more time in high-quality ECEC settings were reported to have higher levels of social-emotional skills compared to children who spent less time in high-quality ECEC settings (Aguiar, 2016; Cunha et al., 2006). Such results suggest that structural characteristics can reinforce positive effects of high levels of process quality as well as negative effects of low levels of process quality. Likewise, positive effects of the level of instructional processes on children’s gains in literacy and numeracy might only be present in small classes where teachers can engage in differentiated instruction, whereas in large classes such an effect might be absent. However, results are mixed and other studies did not find significant results when testing structural characteristics of the ECEC setting as moderators of the association between process quality and child outcomes (Xue et al., 2016).

Alternatively, it might also be possible that associations between structural aspects and child outcomes will be stronger under high levels of process quality, compared to low levels of process quality. For example, a study found that teacher emotional support moderated the association between classroom composition (i.e., high levels of problem behaviors in the classroom) and children’s relational functioning. The negative effect of a highly challenging class on individual children’s relational functioning was buffered by teachers who were highly emotionally supportive (Buyse et al., 2008). Although fewer studies tested the moderating role of process quality, it can provide important information about the mechanisms underlying the associations between ECEC quality indicators and child outcomes.

Section on Children from Ethnic Minority Backgrounds appropriately and importantly highlights the challenges students may encounter and how their background can contribute to differences in achievement. However, evidence from Head Start and other pre-K evaluations suggests that multilingual learners may benefit the most from ECE (see work by Marianne Bitler and others on Head Start, and the NC pre-K RCT evaluation results).

RESPONSE: In response to the reviewer’s comment, we elaborated the discussion and included a note on ECEC program evaluations (lines 208-218).

Indeed, evaluations of ECEC programs (for example, Head Start in the U.S.) provide evidence for this assumption, suggesting that program effects may be largest for children from disadvantaged backgrounds (Bitler et al., 2014; Iruka, 2020; Weiland & Yoshikawa, 2013; Yoshikawa et al., 2016). ECEC programs have the potential to compensate for educational disadvantages by providing rich and engaging learning environments and to support these children to catch up with their peers (Ulferts & Anders, 2015). As such, ECEC programs can disrupt trends leading to achievement gaps, which have been found to start prior to age three (Iruka, 2020). Yet, to date, systems, including ECEC, continue to perpetuate racism and inequities, thus “reduc[ing] opportunities for certain groups to thrive and meet their potential” (Iruka, 2020, p.65). In order to strengthen the impact of early learning, more effective, evidence-based policies are needed.

Given the focus on structural and process quality, I was surprised the authors did not discuss policy more in the introduction and literature review, as policy is how quality in programs is regulated. This is particularly important in the global context, where policies differ widely and may contribute to differences observed/reported in studies.

RESPONSE: Thank you for the important comment. In the revised introduction, we now refer to policy where we saw fit, with a focus on global policy initiatives.

We added the following to the manuscript:

Moreover, the G20 Initiative for Early Childhood Development (2018) emphasizes the importance of political buy-in and state and non-state investments in the early years in order to narrow achievement and opportunity gaps that exist between children from higher and lower socioeconomic backgrounds. (lines 58-61)

Because structural features of ECEC settings are more readily regulated, many countries focus on structural standards as a key strategy for improving the quality of ECEC programs (Early, et al., 2007; OECD, 2018). The G20 Development Working Group (2018) urges a focus on the quality of the infrastructure and on capacity building, decent work conditions, and adequate training for the ECEC workforce. Government regulations can set standards for these features, for example, by lifting the minimum requirements for the teacher-child ratio or requiring a certain percentage of teaching staff to be qualified in early childhood education. Such structural regulations determine the setting in which children learn and are thus important preconditions for process quality (OECD, 2018). (lines 114-122)

Methods – overall this section is well done. I have just a few specific comments below.

• Does the requirement of English result in studies from non-English majority speaking countries not being represented in the data?

RESPONSE: At the pre-screening stage, 168 studies were excluded because they were either in another language (non-English) or not available online. This reflects 4% of the studies excluded at this stage. Six additional studies that were not in English were identified at the screening stage and thus excluded, which reflects 1% of the studies excluded during the screening stage. See the PRISMA flowchart of article selection for this information. As such, non-English publication was a minor reason for exclusion. In addition, the exclusion of non-English publications is a common exclusion criterion for meta-analyses (see, for example, Cosso, Suchodoletz, & Yoshikawa, 2022; Eggert, Fukkink, & Eckhardt, 2018; Perlman, et al., 2016; Perlman et al., 2017).

However, it is possible that our search did not pick up relevant studies from non-English majority speaking countries as they might have been published in journals that are not listed with the databases used. We added this as a possible limitation (lines 804-809):

It is possible that the use of English-language databases and the English requirement for studies to be included in the coding have resulted in studies from non-English majority speaking countries being underrepresented in the data. Increased efforts and resources are needed to overcome the challenges of locating, assessing and including non-English studies in systematic reviews, for example, by using professional translators (Neimann Rasmussen & Montgomery, 2018).

• What is the rationale for dropping intervention studies?

RESPONSE: The comment relates to an exclusion criterion: “Intervention studies were excluded unless relevant effect size measures were reported prior to the intervention.” (reported in line 308). The decision was made to avoid bias in the results that might have been introduced by the intervention. In an attempt to standardize, as much as possible across studies, the conditions under which effect sizes were reported, only pre-intervention (baseline) effect sizes were included from intervention studies. This was because the interventions targeted ECEC quality and aimed to change (increase) it, thus reflecting very different conditions from studies that observed ECEC quality without attempting to increase it. The 6 excluded intervention studies did not report such baseline effect sizes and were therefore excluded.

• Did the authors attempt to contact study authors to obtain any missing information?

RESPONSE: Yes, we contacted authors to obtain missing information; however, the response rate was only 32%.

• After looking at the excel database, I was surprised to find some key ECE studies not there (Soliday Hong et al., 2019; Keys et al., 2013; Burchinal et al., 2016). I am guessing this is because of the studies including meta-analyses of the datasets. I think a further description of the methods used to achieve the final sample, even in the supplementary files, would help readers better understand decisions.

RESPONSE: Meta-analyses, literature reviews or systematic syntheses were excluded. The information is reported in the PRISMA flowchart. We now added a sentence to the method section that meta-analyses, literature reviews and systematic syntheses of multiple datasets were excluded during the screening phase (line 306).

Results are clearly written and nicely organized.

RESPONSE: Thank you for the positive feedback.

Discussion section

• Overall, the section is thorough. However, the authors should emphasize throughout, particularly in the first paragraph that the overall effect is small for all child outcomes. This is done nicely in the second paragraph.

RESPONSE: In the first paragraph, we now added that effect sizes were small (line 682).

• I’m not sure what to make of the significant differences between structural and process quality indicators for the two outcomes – the authors should expand on these findings, for example, whether this is a data issue or whether there is reason to believe the findings are meaningful and aligned with prior research.

• The authors should situate their effect sizes in the literature and provide information on the practical significance. Also, understanding the cost of improving quality is important and is not equal for all indicators. More should be said on this point.

• The conclusion section is very broad and generally mentions quality improvement. As discussed above, this can look quite different depending on which areas of quality are the focus (e.g., structural – requiring teachers to have certain degrees vs. improving process quality). Right now, I think the implications are too general and not particularly useful to the field.

RESPONSE: In response to the reviewer’s comments, we have thoroughly revised the discussion and conclusion and incorporated the reviewer’s suggestions (lines 671-846).

• I’m unsure what is meant by two sentences on page 32-33 “While, at the sample level, these results seem to be robust…when studying the nature of the effects of ECE quality.”

RESPONSE: The sentences were deleted during the revision.

Attachment

Submitted filename: Response Letter - R1 15032023.docx

Decision Letter 1

Sze Yan Liu

7 May 2023

Early Childhood Education and Care Quality and Associations with Child Outcomes: A Meta-Analysis

PONE-D-22-31943R1

Dear Dr. von Suchodoletz,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sze Yan Liu, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for the revisions and the clarifications.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have thoroughly and thoughtfully addressing my previous comments. I have no further suggestions.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Acceptance letter

Sze Yan Liu

12 May 2023

PONE-D-22-31943R1

Early Childhood Education and Care Quality and Associations with Child Outcomes: A Meta-Analysis

Dear Dr. von Suchodoletz:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sze Yan Liu

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Checklist. PRISMA 2020 for abstracts checklist.

    (DOCX)

    S2 Checklist. PRISMA 2020 checklist.

    (DOCX)

    S1 File. List of keywords for literature search.

    (DOCX)

    S2 File. Descriptive information of studies included in the meta-analysis.

    (XLSX)

    S3 File. Funnel plots.

    (DOCX)

    S4 File. Sensitivity analysis.

    (DOCX)

    S5 File. Association between ECEC quality and child outcomes (only studies from high-income countries).

    (DOCX)

    S6 File. Associations between structural characteristics and child outcomes.

    (DOCX)

    S7 File. Differences in ECEC quality–Child outcome associations by the type of process quality domain.

    (DOCX)

    S8 File. Differences in effect size by process quality domain.

    (DOCX)

    S9 File. Tests of structural indicators of ECEC quality as moderators of process quality-child outcome associations.

    (DOCX)

    S10 File. Tests of process quality indicators of ECEC quality as moderators of structural quality-child outcome associations.

    (DOCX)

    S11 File. Differences by the timing of the data collection.

    (DOCX)

    S12 File. Differences by ethnic minority or socioeconomic family background.

    (DOCX)

    S13 File

    (ZIP)

    Attachment

    Submitted filename: Response Letter - R1 15032023.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

