Abstract
Introduction
Early childhood, specifically the period from 0 to 6 years of age, is a critical time in children's lives, marked by rapid cognitive, social and emotional development. This period has also been shown to be the most effective time for early interventions. The use of artificial intelligence (AI) to support early child development is increasing alongside the rapid advancement of technology. AI can be used directly by children (eg, through adaptive learning technologies), by individuals who interact with children (eg, educators, parents, nurses), and by individuals indirectly supporting early child development (eg, early childhood researchers or policy analysts). This scoping review will provide a roadmap for relevant stakeholders on how AI has been applied within and across different contexts to support infants and young children's development, as well as the most predominant AI technologies used across those contexts.
Methods and analysis
The current study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). The search syntax will be applied in PsycINFO, ERIC, Education Source, CINAHL, MEDLINE, Embase and IEEE Xplore. The purpose of this study is to curate and synthesise academic papers to examine the application of AI for supporting the development of children between birth and 6 years of age. Studies with children or individuals who work directly or indirectly with children will be included. Part of the abstract and full-text screening will be conducted by two researchers, with discrepancies resolved by the lead authors. In addition, AI will be used to help with study screening and data extraction once confirmed to be reliable (Cohen's kappa >0.80). Thematic and content analyses will be conducted to identify the types of AI products used and their applications in different contexts, the most predominant AI products used within and across each context, and how children's developmental outcomes are affected by the use of these AI products. Where applicable, visualisations such as tables, graphs and figures will be used to synthesise the data across contexts and AI products used to support the early development of young children.
Keywords: Artificial Intelligence, Child Development, Early Childhood Education, Evidence Synthesis Methods, Human-in-the-Loop
STRENGTHS AND LIMITATIONS OF THIS STUDY.
A comprehensive search strategy was developed in consultation with an academic librarian.
Multiple databases and sources of grey literature were systematically searched.
Inter-rater reliability was assessed to ensure consistency in screening and data extraction.
As with all reviews, studies published after the search date will not be included.
The scope of the review is broad, resulting in heterogeneity of the studies.
Introduction
Following the rapid advancement of artificial intelligence (AI), it has been increasingly applied to support young children's development in various contexts including healthcare,1 education2 3 and within their homes.4 AI technologies are used directly by children3 5 and by individuals interacting directly with children, such as educators, parents and healthcare professionals. Moreover, AI supports those indirectly involved in early child development, including researchers and policy analysts. Despite the growing integration of AI in children's lives, there is a lack of comprehensive understanding of how AI is applied within early childhood contexts and its impact on developmental outcomes. This scoping review aims to fill this gap by providing relevant stakeholders with a roadmap of the AI used to support young children. Specifically, the different and predominant types of AI products and techniques, and their use by different stakeholders, will be examined across various contexts, along with the developmental outcomes influenced by these technologies. An evidence gap map will be generated to highlight areas requiring further research in AI applications.
Capabilities of AI
AI is a branch of computer science focused on performing human intelligence tasks through learning, problem-solving and inferential decision-making. AI is an umbrella term that encompasses a wide range of computational approaches designed to mimic aspects of human intelligence. Within AI, there are multiple subfields, including machine learning, natural language processing and computer vision. Generative AI represents a more recent branch of AI that focuses on creating new content, such as text, images or audio. ChatGPT, for example, is a large language model (LLM) within the branch of Generative AI. The term ‘artificial intelligence’ was first introduced at a 1956 research conference at Dartmouth.6 Since then, the field has evolved alongside rapid technological advancements. In 2017—referred to by many as the ‘Year of AI’—Google released the original Transformer model, now a cornerstone of deep learning architectures.7
The release of ChatGPT by OpenAI in late 2022 further accelerated interest in generative AI due to its ability to generate and process natural language.8 AI has demonstrated benefits across various sectors involving computer systems, enhancing task efficiency, enabling real-time pattern recognition and supporting large-scale data analysis.9–14
Using AI to support early child development
AI tools have been increasingly used in early child development, profoundly influencing how young children learn and grow.3 15 Early childhood, particularly the period from birth to 6 years of age, is a critical period that lays the foundation for future development.16 In particular, it is a window of rapid cognitive, social and emotional growth.17–20 Experiences with AI technologies during this sensitive window, whether through educational tools, digital play or interactions mediated by caregivers, may have long-lasting implications for learning and development. For this reason, it is timely and important to provide a roadmap for relevant stakeholders, including educators, policy-makers, designers and parents, to guide the responsible integration of AI into early childhood contexts. Such guidance can help ensure that these technologies support, rather than undermine, foundational developmental processes.
Building on Bronfenbrenner's Ecological Systems Theory,21 which emphasises that children's development is influenced by multiple levels of their surrounding environment, we recognise the multifaceted role AI can play. Children are affected not only by their immediate environments (the microsystem, including family and educational settings) but also by interactions between these environments (mesosystem), external factors such as parental workplace conditions (exosystem), the broader societal culture (macrosystem) and changes over time (chronosystem). Supporting young children's development at both the proximal and distal levels within these systems is essential. A variety of AI tools exist: some are used by children directly, while others support stakeholders who interact with children directly (eg, parents) or indirectly (eg, policy-makers).
Examples of AI in the microsystem include AI toys and robots that provide personalised learning experiences directly to children.3 5 When engaging with AI robotic toys, children participate in a wide range of play types, similar to play with peers or non-technological toys.22 23 AI also supports children with diverse needs, such as assisting those with different language backgrounds to overcome learning barriers24 or helping children with autism spectrum disorder to practise social skills.25 In the mesosystem, AI can support individuals working directly with children. It aids educators in implementing classroom activities in early childhood educational settings.3 5 26 Tools like ChatGPT offer educators additional training and professional development opportunities3 and can serve as parenting interventions, answering parenting-related questions and alleviating stress.4 In medical settings, AI has been employed to diagnose diseases or disorders in children more efficiently than traditional methods.14 27 In the exosystem, AI assists stakeholders such as researchers and policy-makers who work on early childhood-related matters but do not interact directly with young children. For example, studies have used machine learning to identify main topics within lengthy documents related to universal child care policies (removed for review; removed for review).
Since 2017, AI has been increasingly used by researchers and data scientists for data analysis, given its learning capabilities, statistical power and flexibility in handling data.28 Collectively, these examples highlight the opportunities and early growth of AI as a support for the development and learning of young children. They also demonstrate how AI assists individuals working directly or indirectly with children in providing more effective support to meet a range of needs.
Study objectives
A scoping review is a systematic integration of evidence that follows similar procedures to a systematic review but addresses research questions that are more exploratory and open-ended in nature.29 Its primary goal is often to provide a comprehensive overview of the literature on a specific area of research. Previous evidence syntheses have examined the application of AI in various contexts to support children, teenagers and adolescents.3 28 30–32 However, a more comprehensive review and synthesis of the literature is needed to gain a systems-level understanding of how AI can be used within and across contexts to support early child development directly and indirectly. Given the growing importance of AI as a tool used across types of interventions and research, the scoping review outlined in this protocol aims to address the following research questions:
What are the different types of AI-based products and techniques used across contexts to support early child development?
Who are the target users of different types of AI-based products and techniques across contexts?
What are the different developmental outcomes supported by AI?
Methods
Patient and public involvement
Patients or the public were not involved in the design, conduct, reporting or dissemination plans of our research.
Reporting of information in the current protocol
Information in this protocol is reported following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) checklist.33 The checklist can be found in online supplemental Appendix B.
Search strategy
This study will be conducted and reported following the steps outlined in the Joanna Briggs Institute's scoping review methodology and the PRISMA extension for Scoping Reviews (PRISMA-ScR).34 Specifically, the PRISMA-ScR checklist35 will be used as a guideline for reporting the methods and results in the paper. Literature will be gathered from the following seven electronic databases: PsycINFO, ERIC, Education Source, CINAHL, MEDLINE, Embase and IEEE Xplore. The search syntax was developed in consultation with an academic librarian. Based on an environmental scan of the literature, a wide range of search terms and subject headings related to AI as well as infant and early child development were included. The search syntax was applied to titles, abstracts and subject headings. Subject headings were adapted to the specific terminology of each database. When searching for AI-related literature, examples of terms included are “machine learning”, “natural language processing”, “deep learning”, “speech-to-text” and “intelligent tutor”. Examples of infancy and early child development terms applied in the syntax are “early childhood”, “young children”, “early years” and “early development”. For the full search syntax applied to each database, see online supplemental Appendix A. The initial search was conducted in January 2024, with the study expected to conclude by the end of 2025.
Study selection
All research papers published in English will be included, except for non-empirical studies (table 1). Given the rapid advancement of technology and AI, only studies published since 2017 will be included to ensure the results are synthesised from the most cutting-edge AI research. All studies that focus on supporting children aged 0 to 6 years old will be included. Studies that support children within that age range but do not explicitly report the related findings will be excluded. There are no exclusion criteria based on the target user (ie, whether target users of the AI are children, parents, educators, policy-makers, etc). In addition, studies from all research settings are included (eg, early childhood educational settings, home, healthcare, community), as this study aims to examine how AI impacts both children’s immediate environments and stakeholders who work to support children indirectly. Therefore, studies examining the application of AI in microsystems, mesosystems and exosystems will be included.
Table 1. Inclusion and exclusion criteria.
| Study characteristics | Inclusion | Exclusion |
|---|---|---|
| Type of publication | Empirical research papers | Non-empirical studies |
| Publication year | Published in 2017 or later | Published before 2017 |
| Study population | Children aged 0–6 years, or individuals who work directly or indirectly with children in this age range | Studies including children aged 0–6 years that do not explicitly report findings for this age range |
| Contexts | All research settings (eg, early childhood educational settings, home, healthcare, community) | – |
| Usage of AI | AI used by any target user (eg, children, parents, educators, policy-makers) to support early child development | – |
| Language | Published in English | Not published in English |
| Country | All countries | – |
AI, artificial intelligence.
Screening procedures
Research assistants (RAs) involved in screening and data extraction are undergraduate and graduate students with diverse backgrounds (ie, psychology, education, computer science, biology). All RAs went through extensive training on the inclusion criteria and achieved a Cohen's kappa of >0.80 before conducting official screening. Once trained, RAs independently screened titles and abstracts to identify eligible studies. After RAs achieve a Cohen's kappa of >0.80 for full-text screening, all remaining studies will be screened at full text.
A hand search will be conducted on the reference lists of all included studies to ensure comprehensive coverage of relevant literature. Two RAs will independently screen the titles and, if relevant, review the abstracts to assess eligibility. Eligible articles will then undergo a full-text review to confirm inclusion. Throughout the screening processes, discrepancies will be addressed by the leading author of this study. A flow chart following the PRISMA-ScR34 will be provided in the manuscript to present the number of studies included and excluded during study screening and the reason(s) for exclusion (figure 1).
Figure 1. PRISMA flow chart diagram. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
Data collection and management
All studies will be uploaded to Covidence, an online platform for review management. A preliminary data extraction codebook will be tested independently by two RAs on 10% of the studies and revised iteratively until variable saturation is reached. Key data to be extracted includes study characteristics, sample characteristics, use of AI, study context, child outcomes and other significant findings. Table 2 provides an initial extraction guideline, which may be adjusted based on the study findings.
Table 2. Preliminary template for data extraction.
| Domain | Description | Example |
|---|---|---|
| Study characteristics | ||
| Year of publication | Year that the study is published | 2020 |
| Author | Lead author(s) of the study | – |
| Country | The country in which the study was published | Canada, USA, China, etc. |
| Study design | Is the study experimental or observational? | Experimental, observational |
| Research question(s) | What are all of the research question(s) of this study? | |
| Context | The context that AI is intended to be used in | Educational settings, home, community, healthcare, etc. |
| Sample characteristics | ||
| Children’s age | Children’s mean age and age range in years | 5.3 (from 2.4 to 6.8) |
| Children’s gender | Gender of children | Female (50%), Male (50%) |
| Children’s ethnicity | Ethnicity of children | Black (40%), Asian (30%), white (20%), other (10%) |
| Target users | The population of users who directly interacted with the AI tool in the study | Children, parents, nurses, educators, etc. |
| N participants | The number of participants included in the study | 50 |
| Participants’ age | Participants’ mean age and age range in years | 30.7 (from 28.5 to 35.5) |
| Participants’ gender % | Percentage of participant gender | Female (50%), Male (50%) |
| Participants’ ethnicity % | Proportion of participant ethnicity | Black (40%), Asian (30%), White (20%), Other (10%) |
| Participants’ education | The highest education attained by the participants | Secondary (high) school graduation or certificate equivalent, college certificate or diploma, Bachelor’s degree, Master’s degree or above, etc. |
| Participants’ income | The income of participants | 20% less than US$10 000, US$10 000 to US$19 999, 40% from US$20 000 to US$29 999, 10% from US$30 000 to US$39 999, 10% from US$40 000 to US$49 999, 10% from US$50 000 to US$59 999 |
| Use of AI | ||
| AI product(s) | The type of AI tool used by the target users. The tangible AI-based tool or solution designed for practical use and/or ready for dissemination | Chatbot, intelligent tutor, learning toys, diagnostic tool, etc. |
| AI technique(s) | The type of AI technique used by the target users. The method, algorithm or framework used to develop AI system | Machine learning, natural language processing, computer vision, deep learning, reinforcement learning, etc. |
| AI hardware integration | The physical components used to implement AI systems | Computer, tablet, iPad, robot, etc. |
| AI software system | The software platforms or systems used to run AI algorithms. The set of instructions, programmes or code that run on hardware | AI-powered applications, software systems, platforms, web-based tools, etc. |
| Children’s developmental outcomes | Developmental outcome(s) of children that were impacted by the use of AI | Cognitive, social, emotional, language and literacy, gross motor, fine motor, physical health, mental health. |
Note: For each type of participant (eg, children, parents, educators), information regarding their sample size, age, gender, ethnicity, education and income will be collected.
AI, artificial intelligence.
Given the broad scope and the potentially large volume of studies, this scoping review will use a novel, AI-assisted workflow to conduct and validate full-text screening and data extraction from selected academic papers. The workflow employs two custom Python tools, ai-pdf2docx and ai-data-extractor, both leveraging Google’s Gemini LLMs via the Vertex AI platform. This AI-driven process serves as a proof-of-concept, where its outputs will be systematically benchmarked against manual reviews performed by human researchers using an identical codebook.
Document preprocessing
The included studies in PDF format will undergo an initial preparation phase using the ai-pdf2docx tool. This tool will first employ an LLM to parse each PDF, identifying structural elements such as headings, paragraphs and tables, and converting this structure into a machine-readable JSON representation. This JSON output will then be used programmatically to reconstruct each document in a standardised Microsoft Word (.docx) format, preserving textual content, table structures and primary headings. This ensures uniformly structured input for the subsequent data extraction tool. A brief human review will verify the accuracy of these conversions.
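As an illustration of this reconstruction step, the sketch below rebuilds a .docx file from a parsed JSON structure using the python-docx library. This is a minimal sketch rather than the actual ai-pdf2docx implementation; the JSON keys ("type", "level", "text", "rows") and file paths are assumptions for illustration.

```python
# Minimal sketch of the JSON-to-DOCX reconstruction step. The JSON schema
# (keys "type", "level", "text", "rows") is assumed for illustration only.
import json
from docx import Document  # python-docx


def rebuild_docx(json_path: str, docx_path: str) -> None:
    """Rebuild a standardised .docx file from the LLM-parsed document structure."""
    with open(json_path, encoding="utf-8") as f:
        elements = json.load(f)  # list of structural elements returned by the LLM

    doc = Document()
    for el in elements:
        if el["type"] == "heading":
            doc.add_heading(el["text"], level=el.get("level", 2))
        elif el["type"] == "paragraph":
            doc.add_paragraph(el["text"])
        elif el["type"] == "table":
            rows = el["rows"]  # list of rows, each a list of cell strings
            table = doc.add_table(rows=len(rows), cols=len(rows[0]))
            for i, row in enumerate(rows):
                for j, cell_text in enumerate(row):
                    table.cell(i, j).text = cell_text
    doc.save(docx_path)


# Example usage (paths are placeholders):
# rebuild_docx("study_001.json", "study_001.docx")
```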
AI-driven full-text screening and data extraction
The standardised DOCX files will be processed by the ai-data-extractor tool, which uses a two-pass LLM-based methodology guided by the study codebook (see table 2). Extraction variables are clustered into domains (eg, Study Characteristics, Educational Setting, etc) for batched LLM processing. These domains may be iteratively subdivided to improve extraction quality or meet LLM maximum output limits.
The first pass will perform content classification to identify relevant document sections for each variable batch. After segmenting documents by heading section (eg, Heading 2) and converting tables to Markdown format for LLM readability, an LLM will assign domain name tags to identify relevant paragraphs and tables for each variable batch.
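A minimal sketch of this first-pass tagging call is shown below, assuming the Vertex AI Python SDK. The project ID, model name, prompt wording and response schema are illustrative assumptions rather than the exact configuration of ai-data-extractor.

```python
# Sketch of the first-pass content classification. Project ID, model name,
# prompt wording and response schema are illustrative assumptions.
import json

import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project
model = GenerativeModel("gemini-1.5-pro")  # assumed model choice


def tag_segments(segments: list[str], domains: list[str]) -> list[dict]:
    """Ask the LLM which extraction domains each heading-level segment is relevant to."""
    numbered = "\n\n".join(f"[{i}] {seg}" for i, seg in enumerate(segments))
    prompt = (
        "For each numbered document segment below, list the extraction domains it is "
        f"relevant to. Allowed domains: {', '.join(domains)}.\n"
        'Respond as a JSON list of objects with keys "segment" (integer) and '
        '"domains" (list of domain names).\n\n' + numbered
    )
    response = model.generate_content(
        prompt,
        generation_config=GenerationConfig(response_mime_type="application/json"),
    )
    return json.loads(response.text)


# Example usage:
# tags = tag_segments(docx_segments, ["Study characteristics", "Sample characteristics", "Use of AI"])
```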
The second pass conducts targeted data extraction. For each tag, the content pieces identified in the first pass will be aggregated and provided to the LLM. The model prompt will detail variable definitions, examples and guidance for each target variable. The LLM will be instructed to extract values for each variable using only information pertaining to the primary research study reported in the article.
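The sketch below illustrates how such a second-pass prompt could be assembled from the codebook. The codebook fragment, prompt wording and output keys are assumptions for illustration, and call_gemini_json stands in for a thin wrapper around the Vertex AI call sketched above.

```python
# Sketch of the second-pass targeted extraction. Codebook entries, prompt
# wording and output keys are illustrative assumptions only.

# Illustrative codebook fragment: variable -> (definition, example)
CODEBOOK = {
    "AI product(s)": ("The type of AI tool used by the target users.", "chatbot, intelligent tutor"),
    "Target users": ("The population of users who directly interacted with the AI tool.", "children, parents, educators"),
}


def extract_variables(tagged_content: list[str], variables: list[str], call_gemini_json) -> list[dict]:
    """Extract codebook variables from the content pieces tagged for one domain."""
    definitions = "\n".join(
        f"- {name}: {CODEBOOK[name][0]} (eg, {CODEBOOK[name][1]})" for name in variables
    )
    prompt = (
        "Using only information about the primary research study reported in the "
        "article excerpts below, extract a value for each variable.\n"
        f"Variables:\n{definitions}\n\n"
        'Respond as a JSON list of objects with keys "variable", "value", '
        '"confidence" (0-1), "source_paragraphs" and "justification".\n\n'
        + "\n\n".join(tagged_content)
    )
    return call_gemini_json(prompt)
```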
Output, validation and iterative refinement
The AI-assisted workflow will generate distinct outputs for the full-text screening and data extraction stages, each with tailored validation approaches. For all AI-driven tasks, the LLM is designed to provide structured outputs, including the identified information, an associated confidence score, references to the specific source text segments within the document, and a textual justification for its finding. These outputs are systematically compiled into a spreadsheet format to facilitate human review.
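For example, the per-variable records could be compiled into a reviewable spreadsheet roughly as follows; the field names mirror the assumed output keys above and the values are placeholders.

```python
# Sketch of compiling the LLM's structured outputs into a reviewable spreadsheet.
# Field names and values are placeholders mirroring the assumed output schema.
import pandas as pd

records = [
    {
        "study_id": "study_001",
        "variable": "Target users",
        "value": "children",
        "confidence": 0.92,
        "source_paragraphs": "9, 22",
        "justification": "Paragraph 9 reports a user study with 60 children.",
    },
    # ... one record per variable per study
]

pd.DataFrame(records).to_csv("llm_extractions.csv", index=False)  # opened by RAs for verification
```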
A key distinction exists in the LLM’s role and the basis for validation between the two stages. During full-text screening, the LLM’s primary function is to identify and extract information directly related to the study’s predefined inclusion and exclusion criteria. The critical output from this process is data supporting a decision to either include or exclude the paper for full review. Consequently, although the model is extracting for variables of inclusion and exclusion criteria, inter-rater reliability assessments for this screening phase are based on the level of agreement on this final decision (include/exclude) for each assessed paper.
In contrast, during the subsequent data extraction phase for all included studies, the LLM is tasked with locating and extracting values for the complete set of variables defined in the study codebook. For this task, validation and inter-rater reliability evaluations are determined by the accuracy of each individually extracted variable value when compared against human-generated data within the validation set.
Initial validation of the model against human researchers is applied to both screening and extraction outputs, ensuring transparency and allowing for meticulous examination of the LLM’s reasoning and the accuracy of its outputs against the source material. The insights derived from this review process are crucial for the iterative refinement of the AI system. This includes adjustments to the model prompts and clarification or enhancement of the definitions within the codebook (including encapsulated inclusion/exclusion criteria)—both for the application of the inclusion/exclusion criteria in screening and for the precision of variable extraction in the data extraction phase. This iterative cycle is fundamental to progressively enhancing the overall reliability and accuracy of the AI-assisted review process and to mitigating potential biases introduced by the LLM.
Reliability and quality assurance
The full-text screening process with LLMs will involve three key steps to ensure reliability: (1) inter-rater reliability will first be established between RAs by achieving a Cohen's kappa of 0.80 at the full-text screening level; (2) LLM extractions will then be evaluated against RA extractions on 20% of studies that reach full-text screening and (3) two RAs will independently verify AI screening beyond the validation set. Using the initial 20% validation set, the LLM prompts and codebook will be refined until the model meets a Cohen's kappa of 0.80. On reaching this benchmark, our intended methodology for the remaining 80% of studies is to employ a human-in-the-loop verification approach. This allows for either strategic, random or complete review of LLM screening decisions by human researchers, with verification expedited by LLM-provided confidence scores, source content references and justifications.
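As an illustration of how agreement on the include/exclude decision could be quantified, the sketch below uses scikit-learn's cohen_kappa_score on placeholder decision lists.

```python
# Sketch of the inter-rater reliability check on screening decisions (placeholder data).
from sklearn.metrics import cohen_kappa_score

# One include/exclude decision per paper in the 20% validation set.
ra_decisions = ["include", "exclude", "include", "include", "exclude"]
llm_decisions = ["include", "exclude", "include", "exclude", "exclude"]

kappa = cohen_kappa_score(ra_decisions, llm_decisions)
print(f"Cohen's kappa = {kappa:.2f}")  # prompts/codebook are refined until kappa >= 0.80
```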
For articles progressing to data extraction, our intention is to follow a similar reliability assessment and human-in-the-loop extraction. This involves model refinement on the extractions of each variable in the validation set of 20% of the studies. Once Cohen’s kappa of 0.80 is met for variable extractions, the model will extract the remaining 80% of studies whereby two independent RAs will review all information extracted by the LLM (table 3). Weekly meetings between RAs will facilitate collaborative resolution of their discrepancies, and in cases where consensus is not achieved between the RAs’ extractions, the primary author will make the final decision. Consistent with PRISMA-ScR guidelines for scoping reviews,34 a risk of bias assessment will not be conducted on the included studies.
Table 3. Template for LLM extraction output.
| Example variable | Relevant paragraph(s) identified by AI | AI extracted value | AI justification | Human verified response | Is AI correct? |
|---|---|---|---|---|---|
| Context: home | 23: “The study took place at the child’s home or school, or at our research lab…” | Yes | Paragraph 23 states that the study took place at the child's home or school, indicating that the study was conducted in a home setting. | Yes | Yes |
| Number of children | 9: “To this end, we performed a user study with 60 children aged 3 to 8 years in which they were asked to draw several simple shapes on a tablet device…” | 60 | The text states that “we performed a user study with 60 children” in paragraph 9 and “we collected digital drawings from 60 child participants” in paragraph 22. | 60 | Yes |
AI, artificial intelligence; LLM, large language model.
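For the data extraction phase, per-variable agreement between LLM and human extractions on the validation set could be summarised roughly as in the sketch below; the column names and placeholder values are illustrative only.

```python
# Sketch of per-variable agreement between LLM and human extractions on the
# validation set. Column names and placeholder values are illustrative only.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

validation = pd.DataFrame(
    {
        "variable": ["Context", "Context", "Study design", "Study design"],
        "llm_value": ["home", "healthcare", "experimental", "observational"],
        "human_value": ["home", "educational settings", "experimental", "observational"],
    }
)

# Percentage of exact matches per codebook variable.
agreement = (
    validation.assign(match=validation["llm_value"] == validation["human_value"])
    .groupby("variable")["match"]
    .mean()
)
print(agreement)

# Cohen's kappa for a single categorical variable, eg, study design.
design = validation[validation["variable"] == "Study design"]
print(cohen_kappa_score(design["llm_value"], design["human_value"]))
```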
Data synthesis and analysis
Our data analysis combines qualitative and quantitative approaches, including thematic analysis and content/frequency analysis. This synthesis aims to identify gaps in the literature and point to areas for future systematic reviews or meta-analyses. To explore the types of AI and their implementation in early child development, thematic analysis will address these questions directly, following Braun and Clarke's six-phase qualitative coding process. The six phases are (1) familiarising with the data through repeated reading; (2) generating initial codes to capture key features; (3) identifying broader themes by clustering codes; (4) reviewing and refining themes for coherence; (5) defining and naming themes to enhance clarity and (6) producing a report that synthesises findings related to the research questions. To gain more quantitative insights, content analysis will be conducted to examine the frequencies of the identified AI, target users and different developmental outcomes supported by AI across the included studies.
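As a simple illustration of the planned frequency counts, the sketch below tabulates AI product types and cross-tabulates them by context using pandas; the example rows are placeholders, not findings.

```python
# Sketch of the content/frequency analysis across included studies.
# Rows are placeholders standing in for extracted study-level data.
import pandas as pd

studies = pd.DataFrame(
    {
        "ai_product": ["chatbot", "intelligent tutor", "learning toy", "chatbot"],
        "context": ["home", "educational settings", "educational settings", "healthcare"],
        "target_user": ["parents", "children", "children", "nurses"],
    }
)

print(studies["ai_product"].value_counts())                    # frequency of each AI product type
print(pd.crosstab(studies["ai_product"], studies["context"]))  # AI products by context
```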
Ethics and dissemination
The current study does not collect primary data but instead uses available publications; therefore, ethics approval is not required for this study. The findings of this study will be published in a peer-reviewed research journal and disseminated in research conferences.
Discussion
AI tools have been increasingly used across various settings, especially in the past few years, to support infants and young children's development. These tools are used both directly by children and by others (parents, educators, nurses, etc) whose use of AI has an indirect impact on children. Given that early childhood is a critical period in a child's development, awareness of the types of AI tools available and how they have been applied to support children's development is crucial for laying the foundation for early childhood intervention programmes that support infants and children. An overview of the current literature is needed to understand the full landscape and, therefore, inform future practices. We anticipate that the results will be of interest to multiple stakeholders, including children's families and caregivers, educators, healthcare professionals, researchers and other individuals involved in directly or indirectly supporting young children. The proposed study also aims to enhance the transparency and quality of LLM-based data extraction from PDF documents.
A few limitations of our planned scoping review should be noted. First, only studies related to children aged 0–6 will be included, which limits the generalisability of our findings to this preselected age range. In addition, only studies published in English will be analysed. The scoping review method also does not include any critical appraisal of the quality of each research study, nor does it examine the causal relationships between interventions and outcomes, but rather provides a high-level synthesis of the landscape of available literature and the dominant themes and characteristics of these studies. Despite these limitations, this scoping review will provide a comprehensive investigation of all relevant literature that examines the application of AI tools in early childhood, with the hope of becoming a roadmap that can be used by all stakeholders including educators, healthcare professionals, policy-makers, researchers and many more. This review will identify the different types of AI tools, their applications across settings and the unique ways in which these technologies support early child development.
Supplementary material
Footnotes
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Prepublication history and additional supplemental material for this paper are available online. To view these files, please visit the journal online (https://doi.org/10.1136/bmjopen-2025-106044).
Provenance and peer review: Not commissioned; externally peer reviewed.
Patient consent for publication: Not applicable.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
References
- 1.Zhang P, Kamel Boulos MN. Generative AI in medicine and healthcare: promises, opportunities and challenges. Future Internet . 2023;15:286. doi: 10.3390/fi15090286. [DOI] [Google Scholar]
- 2.Bhutoria A. Personalized education and artificial intelligence in the United States, China, and India: a systematic review using a Human-In-The-Loop model. Computers and Education: Artificial Intelligence. 2022;3:100068. doi: 10.1016/j.caeai.2022.100068. [DOI] [Google Scholar]
- 3.Su J, Yang W. Unlocking the power of ChatGPT: a framework for applying generative AI in education. ECNU Review of Education. 2023;6:355–66. doi: 10.1177/20965311231168423. [DOI] [Google Scholar]
- 4.Yunike Y, Rehana R, Misinem M, et al. The implications of utilizing artificial intelligence-based parenting technology on children’s mental health: a literature review. JIK. 2023;17:1083–99. doi: 10.33860/jik.v17i3.2958. [DOI] [Google Scholar]
- 5.Akdeniz M, Özdinç F. Maya: an artificial intelligence based smart toy for pre-school children. Int J Child Comput Interact. 2021;29:100347. doi: 10.1016/j.ijcci.2021.100347. [DOI] [Google Scholar]
- 6.Haenlein M, Kaplan A. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. Calif Manage Rev. 2019;61:5–14. doi: 10.1177/0008125619864925. [DOI] [Google Scholar]
- 7.Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Adv Neural Inf Process Syst. 2017;30 [Google Scholar]
- 8.Wu T, He S, Liu J, et al. A brief overview of ChatGPT: the history, status quo and potential future development. IEEE/CAA J Autom Sinica. 2023;10:1122–36. doi: 10.1109/JAS.2023.123618. [DOI] [Google Scholar]
- 9.Bharadiya J. Machine learning and AI in business intelligence: trends and opportunities. IJC. 2023;IJC:123–34. [Google Scholar]
- 10.Briganti G, Le Moine O. Artificial intelligence in medicine: today and tomorrow. Front Med (Lausanne) 2020;7 doi: 10.3389/fmed.2020.00027. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Celik I, Dindar M, Muukkonen H, et al. The promises and challenges of artificial intelligence for teachers: a systematic review of research. TechTrends . 2022;66:616–30. doi: 10.1007/s11528-022-00715-y. [DOI] [Google Scholar]
- 12.Davenport TH, Ronanki R. Artificial intelligence for the real world. Harv Bus Rev. 2018;96:108–16. [Google Scholar]
- 13.Kaplan A, Haenlein M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz. 2019;62:15–25. doi: 10.1016/j.bushor.2018.08.004. [DOI] [Google Scholar]
- 14.Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, et al. Artificial intelligence in retina. Prog Retin Eye Res. 2018;67:1–29. doi: 10.1016/j.preteyeres.2018.07.004. [DOI] [PubMed] [Google Scholar]
- 15.Su J, Zhong Y. Artificial Intelligence (AI) in early childhood education: curriculum design and future directions. Computers and Education: Artificial Intelligence. 2022;3:100072. doi: 10.1016/j.caeai.2022.100072. [DOI] [Google Scholar]
- 16.Ayoub C, Vallotton CD, Mastergeorge AM. Developmental pathways to integrated social skills: the roles of parenting and early intervention. Child Dev. 2011;82:583–600. doi: 10.1111/j.1467-8624.2010.01549.x. [DOI] [PubMed] [Google Scholar]
- 17.Auger A, Farkas G, Burchinal MR, et al. Preschool center care quality effects on academic achievement: an instrumental variables analysis. Dev Psychol. 2014;50:2559–71. doi: 10.1037/a0037995. [DOI] [PubMed] [Google Scholar]
- 18.Broekhuizen ML, van Aken MAG, Dubas JS, et al. Child care quality and Dutch 2‐ and 3‐year‐olds’ socio‐emotional outcomes: does the amount of care matter? Infant Child Dev. 2018;27 doi: 10.1002/icd.2043. [DOI] [Google Scholar]
- 19.Vandell DL, Burchinal M, Pierce KM. Early child care and adolescent functioning at the end of high school: results from the NICHD study of early child care and youth development. Dev Psychol. 2016;52:1634–45. doi: 10.1037/dev0000169. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Wu D, Dong X, Liu D, et al. How early digital experience shapes young brains during 0-12 years: a scoping review. Early Educ Dev. 2024;35:1395–431. doi: 10.1080/10409289.2023.2278117. [DOI] [Google Scholar]
- 21.Bronfenbrenner U. Ecology of the family as a context for human development: Research perspectives. Dev Psychol. 1986;22:723–42. doi: 10.1037/0012-1649.22.6.723. [DOI] [Google Scholar]
- 22.Samuelsson R. A shape of play to come: exploring children’s play and imaginaries with robots and AI. Computers and Education: Artificial Intelligence . 2023;5:100173. doi: 10.1016/j.caeai.2023.100173. [DOI] [Google Scholar]
- 23.Palaiologou I, Kewalramani S, Dardanou M. Make‐believe play with the Internet of Toys: A case for multimodal playscapes. Brit J Educational Tech. 2021;52:2100–17. doi: 10.1111/bjet.13110. [DOI] [Google Scholar]
- 24.Kewalramani S, Palaiologou I, Dardanou M, et al. Using robotic toys in early childhood education to support children’s social and emotional competencies. Australas J Early Child. 2021;46:355–69. doi: 10.1177/18369391211056668. [DOI] [Google Scholar]
- 25.Lemaignan S, Newbutt N, Rice L, et al. UNICEF guidance on AI for children: application to the design of a social robot for and with autistic children. arXiv. 2021 doi: 10.48550/arXiv.2108.12166. [DOI] [Google Scholar]
- 26.Jin L. Investigation on potential application of artificial intelligence in preschool children’s education. J Phys: Conf Ser. 2019;1288:012072. doi: 10.1088/1742-6596/1288/1/012072. [DOI] [Google Scholar]
- 27.Ahmed N, Abbasi MS, Zuberi F, et al. Artificial intelligence techniques: analysis, application, and outcome in dentistry-a systematic review. Biomed Res Int. 2021;2021 doi: 10.1155/2021/9751564. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Yi H, Liu T, Lan G. The key artificial intelligence technologies in early childhood education: a review. Artif Intell Rev. 2024;57 doi: 10.1007/s10462-023-10637-7. [DOI] [Google Scholar]
- 29.Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–73. doi: 10.7326/M18-0850. [DOI] [PubMed] [Google Scholar]
- 30.Kaelin VC, Valizadeh M, Salgado Z, et al. Artificial intelligence in rehabilitation targeting the participation of children and youth with disabilities: scoping review. J Med Internet Res. 2021;23 doi: 10.2196/25745. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Mangina E, Psyrra G, Screpanti L, et al. Robotics in the context of primary and preschool education: a scoping review. IEEE Trans Learning Technol. 2024;17:342–63. doi: 10.1109/TLT.2023.3266631. [DOI] [Google Scholar]
- 32.Yim IHY, Su J. Artificial intelligence (AI) learning tools in K-12 education: a scoping review. J Comput Educ. 2024:1–39. doi: 10.1007/s40692-023-00304-9. [DOI] [Google Scholar]
- 33.Shamseer L. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2016;354 doi: 10.1136/bmj.i4086. [DOI] [PubMed] [Google Scholar]
- 34.Peters MDJ, Marnie C, Tricco AC, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth . 2020;18:2119–26. doi: 10.11124/JBIES-20-00167. [DOI] [PubMed] [Google Scholar]
- 35.Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. J Clin Epidemiol. 2021;134:178–89. doi: 10.1016/j.jclinepi.2021.03.001. [DOI] [PubMed] [Google Scholar]

