Abstract
Objective
To gather evidence from literature on Artificial Intelligence applications in educational settings and assess the educational elements involved in Oral and Maxillofacial Radiology.
Methods
A systematic scoping review was performed using predefined keywords and truncation search strategies across PubMed, Scopus, and Web of Science, focusing on artificial intelligence, oral and maxillofacial radiology, and dental education.
Results
The initial search retrieved 3,124 articles published from 2014 to November 2024, which were screened. Of these, only 8 articles met the criteria, specifically focusing on educational aspects within Oral and Maxillofacial Radiology; the others were excluded mainly because they focused on general dentistry. Three studies examined the Knowledge, Attitude, and Perception of dental students and educators, while the others involved clinical applications among students, Generative Adversarial Networks for radiographic image generation, and large language models (LLMs) compared against students on theoretical OMFR questions. The educational elements identified include utilizing AI as a learning tool, incorporating AI knowledge and fundamentals into lectures, and evaluating AI’s performance in theoretical and clinical OMFR.
Conclusions
Prospective research on the educational aspects of Artificial Intelligence (AI) integration in Oral and Maxillofacial Radiology appears promising, given the current limited evidence base, which tends to emphasize clinical implications more than direct educational value. Additionally, strategically incorporating AI into pedagogical frameworks can help develop dental professionals with improved technological literacy, a strong understanding, and greater confidence in using AI technologies.
Keywords: Oral radiology, Artificial intelligence, Dental curriculum, Dental education
Introduction
Artificial intelligence (AI) is rapidly transitioning from a purely theoretical concept to a practical influence in healthcare education, fundamentally changing how knowledge is learned, used, and evaluated [1]. This paradigm shift is particularly noticeable in dentistry, especially in Oral and Maxillofacial Radiology (OMFR), a field that has undergone significant growth in AI integration [2]. The need for continuous curricular updates is well-supported by research, which demonstrates that dental education must regularly adapt its content to keep pace with evolving healthcare needs and technological advancements [3]. Clinically, the main discussion about AI in OMFR centers on its ability to enhance diagnostic accuracy, particularly through improvements in lesion detection, segmentation precision, and image analysis [4]. Beyond these clinical applications, AI presents significant opportunities for dental education itself, particularly in OMFR [5]. Educational researchers are actively investigating the utility of AI-assisted tools in facilitating student training in radiographic interpretation, as well as their capacity to deliver adaptive feedback and foster more interactive learning environments [6, 7]. Such pedagogical innovations resonate with principles derived from constructivist and experiential learning theories [8], which suggest that knowledge is fundamentally constructed through active interaction, critical reflection, and contextualized application.
Enthusiasm for technological integration, however, must be tempered by a critical awareness of its inherent limitations [9]. AI is demonstrably dependent on the quality and diversity of its training datasets; phenomena such as ‘AI hallucination’, in which an algorithm produces plausible but incorrect output when presented with data beyond its learned distribution, underscore the necessity of human oversight [10–12]. Consequently, these limitations confirm that clinical reasoning and interpretive judgment remain the foundational standard in radiologic education. Hence, AI integration should not aim to replace the educator, but rather to enrich the educational ecosystem through guided, reflective engagement [13].
Although AI is becoming increasingly integrated into OMFR practice, a significant lack of formal educational frameworks remains, focusing on AI literacy, pedagogy, and its incorporation into curricula [14]. While Schwendicke et al. [15] proposed structured learning objectives for integrating artificial intelligence into the curriculum, empirical reports on the outcomes of such implementations remain scarce, especially in OMFR education [16].
To address this recognized gap, this scoping review systematically maps the available evidence concerning the educational elements integrated and the application of AI tools in OMFR educational settings.
Methods
This scoping review followed JBI guidance and Arksey and O’Malley’s framework [17, 18], with its protocol registered in the Open Science Framework [19].
The choice of scoping review methodology was driven by the limited and diverse body of literature on the integration of artificial intelligence (AI) in oral and maxillofacial radiology (OMFR) education. A scoping review is particularly appropriate for systematically charting emerging evidence, identifying existing knowledge gaps, and elucidating key conceptual components within a broad and constantly evolving discipline.
An initial, preliminary search conducted prior to formal data collection helped identify the research gap, notably confirming the significant lack of AI-related educational studies within the OMFR field. In line with JBI guidance, the study used the Population–Concept–Context (PCC) framework to guide the development of the main review question and the specific inclusion criteria.
Population: Undergraduate and postgraduate dental students, and dental educators involved in OMFR teaching or learning.
Concept: Knowledge, perception, integration and application of AI software or tool for teaching, learning, assessment, or curriculum design in OMFR.
Context: Educational or clinical-training settings in OMFR.
After establishing the PCC framework, the following research questions defined the review’s scope:
i. What AI-related studies have been reported within oral and maxillofacial radiology that involve an educational population?
ii. What are the main findings, AI applications, and educational focus implemented or investigated in OMFR studies involving educational populations?
iii. What specific educational aspects of AI are currently being implemented in studies within educational settings?
The research questions were designed to enable a thorough exploration, covering both the existing research landscape and the educational implications of AI integration in oral and maxillofacial radiology (OMFR) dental education. The primary aim is to gather evidence from educational literature on Artificial Intelligence applications and assess the incorporated educational elements within Oral and Maxillofacial Radiology.
Search strategy
The literature search was conducted from December 2024 to February 2025, with a revision in October 2025. It utilized three main electronic databases: PubMed, Scopus, and Web of Science (WoS), selected for their extensive coverage of medical, dental, and educational research. Following JBI guidelines, a systematic three-step search strategy was developed. First, a preliminary search identified relevant keywords and index terms. A comprehensive search was then conducted by combining controlled vocabulary, specifically Medical Subject Headings (MeSH), with free-text terms across three key conceptual areas:
Artificial Intelligence: “artificial intelligence”, “machine learning”, “deep learning”, “neural network”, “large language model”, “chatbot”.
Oral radiology/ Oral and maxillofacial radiology: “oral radiology”, “maxillofacial imaging”, “dental radiology”, “radiography”.
Education: “education”, “curriculum”, “teaching”, “training”, “dental education”.
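To illustrate how a multi-concept strategy of this kind is assembled, the sketch below composes the three keyword groups above into a single Boolean string. This is an illustrative sketch only: the helper names (`or_block`, `build_query`) are our own, and the output approximates the general free-text style rather than reproducing the database-specific syntax shown in Table 1.

```python
# Illustrative sketch: the keyword groups mirror those listed in the text,
# but or_block/build_query are hypothetical helpers, not a database API.

AI_TERMS = ["artificial intelligence", "machine learning", "deep learning",
            "neural network", "large language model", "chatbot"]
OMFR_TERMS = ["oral radiology", "maxillofacial imaging",
              "dental radiology", "radiography"]
EDU_TERMS = ["education", "curriculum", "teaching", "training",
             "dental education"]

def or_block(terms):
    """Quote each term and join the synonyms with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(*concept_blocks):
    """AND the concept blocks together, one block per PCC concept."""
    return " AND ".join(or_block(block) for block in concept_blocks)

query = build_query(AI_TERMS, OMFR_TERMS, EDU_TERMS)
print(query)
```

In practice, each database then requires its own field tags and limits (e.g. `[MeSH Terms]` in PubMed, `TITLE-ABS-KEY` in Scopus, `TS=` in Web of Science), which is why the final strings in Table 1 differ.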
The search was conducted across the three selected databases. An initial broad Boolean strategy in Web of Science returned no records, which prompted a switch to a more targeted search string: TS= (“Oral radiology” AND “Artificial Intelligence” AND “Dental curriculum”). This refined search produced 80 records, from which two eligible studies were identified. Customized Boolean strings for each database, tailored to their specific indexing systems, are presented in Table 1, which outlines the search strategies, Boolean strings, and initial results. Eligibility was determined based on the Joanna Briggs Institute (JBI) guidelines and the Population–Concept–Context (PCC) framework, ensuring alignment with the predefined aims of the review.
Table 1.
Search strategy, Boolean strings, and search results
| Electronic Database | Keywords and Search Strings | Filter / Limit used | Records obtained prior to removing duplicates | Eligible records |
|---|---|---|---|---|
| PubMed | ((“artificial intelligence”[MeSH Terms] OR “machine learning”[All Fields] OR “deep learning”[All Fields] OR “neural network”[All Fields] OR “large language model”[All Fields] OR chatbot[All Fields]) AND (“oral radiology”[All Fields] OR “dental radiology”[All Fields] OR “maxillofacial imaging”[All Fields] OR “radiography”[All Fields]) AND (“education”[MeSH Terms] OR “curriculum”[All Fields] OR “teaching”[All Fields] OR “training”[All Fields])) | Date: January 2014 to November 2024; Language: English | 628 | 3 |
| Scopus | Copied query: TITLE-ABS-KEY (“artificial intelligence” OR “machine learning” OR “deep learning” OR “large language model” OR “chatbot”) AND TITLE-ABS-KEY (“oral radiology” OR “dental imaging” OR “maxillofacial radiology” OR “radiography”) AND TITLE-ABS-KEY (“education” OR “curriculum” OR “training” OR “teaching”) AND PUBYEAR > 2013 AND PUBYEAR < 2025 AND (LIMIT-TO (LANGUAGE, “English”)). Reproducible string: “artificial intelligence” OR “machine learning” OR “deep learning” OR “large language model” OR “chatbot” AND “oral radiology” OR “dental imaging” OR “maxillofacial radiology” OR “radiography” AND “education” OR “curriculum” OR “training” OR “teaching” | Search within: Article Title, Abstract, Keywords; Year: 2014–2024; Language: English | 2494 | 3 |
| Web of Science | TS= (“Oral radiology” AND “Artificial Intelligence” AND “Dental curriculum”). *An initial broader search using the same keyword set yielded zero records | Publication Year: 2014–2024; Language: English | Initial: 0; Final search: 80 | 2 |
| Total | | | 3202 | 8 |
This table displays the exact record counts retrieved in the initial search conducted from December 2024 to February 2025. A revision of the selected articles was conducted in October 2025, but the publication window remained the same (January 2014 – November 2024). The search strings differ across databases, reflecting database-specific truncation and syntax formats
i). Inclusion criteria:
- Population: Studies involving undergraduate or postgraduate dental students, or educators engaged in oral and maxillofacial radiology (OMFR) teaching or learning.
- Concept: Studies examining the use, knowledge, perception, integration, and application of artificial intelligence (AI), including machine learning, deep learning, neural networks, large language models, and chatbots, for education, assessment, or curriculum in OMFR.
- Context: Conducted within educational or training settings such as dental schools, academic radiology programs, or simulation-based learning environments.
- Study type: Original research, mixed-methods studies, educational trials, or reviews published in English between January 2014 and November 2024.
ii). Exclusion criteria:
- Articles focused solely on clinical patients, diagnostic datasets, or AI tools for diagnosis, treatment planning, or image enhancement without educational relevance.
- Studies lacking an explicit teaching, training, or learning component related to OMFR.
- Articles addressing general AI implementation in dental education without an OMFR-specific context.
- Editorials, preprints, commentaries, letters, or non-peer-reviewed sources without empirical data.
Source of evidence selection
All researchers involved in this review collaborated to develop the search strategy and the evidence selection template used for full-text screening. This template, located in Appendix 1, was utilized by two reviewers and clearly outlined the inclusion and exclusion criteria, final decisions on study eligibility, and reviewer notes, thereby ensuring transparency in the study selection process.
Before starting the formal screening, two main reviewers participated in a calibration exercise, jointly evaluating an initial set of records. This step was essential to guarantee consistent interpretation and strict adherence to the predefined inclusion and exclusion criteria. Afterwards, these reviewers independently selected evidence from articles retrieved from the three specified databases. Additionally, other team members independently verified the effectiveness of the search queries and their results.
After verifying the consistency of the database output, all retrieved citations, including article titles, abstracts, and keywords, were exported in CSV or RIS format. These records were then imported into Rayyan QCRI (Qatar Computing Research Institute, Doha, Qatar) [20] for systematic screening. Duplicates were eliminated to prevent duplicate data extraction: Rayyan automatically identified duplicate records, which were then manually inspected and removed to guarantee that each record was counted only once, even if it appeared in multiple databases. The initial screening of titles and abstracts was conducted in Rayyan, which enabled the systematic categorization of records as either ‘include’ or ‘exclude’ based on the predefined eligibility criteria. Reviewers consistently adhered to these criteria throughout this stage.
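As a minimal sketch of that deduplication step (the field names are hypothetical; real CSV/RIS exports carry many more fields, and Rayyan’s matching is more sophisticated), merged records from the three databases can be collapsed on a normalized title:

```python
# Hypothetical sketch of duplicate removal across database exports.
# Records are collapsed on a punctuation- and case-insensitive title key,
# so near-identical entries from different databases compare equal.

def normalize(title):
    """Lowercase and keep only alphanumerics for comparison."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:          # first occurrence is kept
            seen.add(key)
            unique.append(rec)
    return unique

merged = [
    {"title": "AI in Oral Radiology Education", "source": "PubMed"},
    {"title": "AI in oral radiology education.", "source": "Scopus"},  # duplicate
    {"title": "GAN-generated panoramic radiographs", "source": "Scopus"},
]
print(len(deduplicate(merged)))  # prints 2
```

Automated matching of this kind flags candidates only; as described above, flagged duplicates were still manually inspected before removal.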
After the initial review, two independent reviewers conducted a systematic full-text screening over consecutive days. Prior to this, a comprehensive compilation of eligible full-text articles was assembled, with additional team members aiding in the retrieval of the corresponding PDFs. The decisions taken during full-text screening were carefully recorded using a structured validation form. Articles with uncertain eligibility or conflicting reviewer judgments were discussed collectively until consensus was achieved. This entire screening process is thoroughly depicted in the PRISMA flow diagram (Fig. 1).
Fig. 1.
PRISMA flow chart of the review process
Data extraction
To maintain methodological rigor and ensure data consistency, the same reviewers used a structured form specifically created for this review to extract data. This form was carefully designed to gather specific details relevant to the research questions, helping to standardize data collection across all included studies.
Before beginning full data extraction, the research team carried out a pilot test of the data extraction form using two selected articles. This initial step was essential to verify the clarity and appropriateness of the form’s fields. The feedback from this pilot led to minor adjustments, improving data accuracy and better aligning the form with the review’s goals. Subsequently, the following specific information was systematically extracted from each included study:
Study identification details (author, year, and country).
Study design.
Educational population.
Type and description of AI application/tool.
Educational purpose or focus.
Main findings and outcomes.
Educational element on AI implemented or investigated.
The data extracted were organized into a standardized table (Table 2) that details the characteristics and findings of the included studies. Discrepancies found between the two reviewers during data extraction were resolved through discussion to reach consensus. No intra-reviewer calibration was done. When needed, other research team members were consulted to address any remaining uncertainties.
Table 2.
Tabulated data extraction from selected studies (n = 8). All studies were eligible according to the criteria outlined
| Author/Year | Study Design | Country | Educational Population | AI Application | Educational Focus/Outcome | Main Findings | Educational element on AI |
|---|---|---|---|---|---|---|---|
| Keser and Namdar Pekine (2021) [21] | Cross-sectional | Türkiye | Undergraduates | Knowledge, Attitude and Perception (KAP) on AI survey | KAP of AI integration into OMFR | Positive knowledge with low results in attitude and perception | Basic concepts and principles of AI in OMFR; reliability of AI judgment in diagnosis and interpretation; perception of integrating AI into the curriculum |
| Pauwels & Del Rey (2021) [22] | Cross-sectional | Brazil | Undergraduate and postgraduate dental students and faculty members | Introductory AI lecture on oral radiology and survey | Evaluate attitude change toward AI in education | Post-lecture positive shift on AI | Lecture on basic concepts and principles of AI in OMFR; AI-related applications and research in OMFR; prospective AI applications in OMFR; technophobia toward AI |
| Murali et al. (2023) [23] | Cross-sectional | India | Undergraduate and postgraduate dental students | KAP on AI survey | KAP of AI integration into curriculum and practice | High awareness, confidence and positive feedback on AI | Basic concepts and principles of AI; reliability of AI judgment in diagnosis and interpretation; perception of integrating AI into the curriculum |
| Chang et al. (2024) [24] | Experimental study | United States of America | Undergraduate dental students | AI-assisted full-mouth radiograph mounting software | Accuracy and efficiency of AI; perceptions (confidence level in AI) | Accuracy: students > AI; efficiency: AI > students; no significant difference in students’ confidence level | Clinical experience of AI-assisted software for students in OMFR; clinical accuracy of AI in OMFR; clinical efficiency of AI in OMFR; reliability of AI judgment |
| Jeong et al. (2024) [25] | Experimental comparison | Korea | Undergraduate dental students | Large language model chatbots (ChatGPT, ChatGPT Plus, Bard, Bing Chat) | Evaluate chatbots vs. student performance in an OMFR exam | Students > LLM chatbots | OMFR theoretical knowledge accuracy of large language models; AI hallucination in OMFR interpretation |
| Schoenhof et al. (2024) [26] | Experimental observational study | Finland | Dental students | Generative Adversarial Network (GAN) producing synthetic images | Evaluate the realism and educational usefulness of GAN-generated panoramic radiographs in OMFR teaching | Students can distinguish synthetic GAN-generated images from real images | Image quality of AI-generated images; prospective teaching and learning using GAN-generated images in OMFR; data privacy of radiographic images |
| Schropp et al. (2024) [27] | Randomized controlled trial | Denmark | Undergraduate dental students | AI software (AssistDent®) assisting detection of enamel-only proximal caries in bitewing radiographs | Evaluate the impact of AI assistance software on students’ ability to detect proximal enamel caries | No difference in students’ ability | Clinical experience of AI-assisted software for students in OMFR; independent operator interpretation in OMFR despite AI assistance |
| Rampf et al. (2024) [28] | Randomized controlled trial | Germany | Undergraduate dental students | dentalXrai Pro 3.0, AI-assisted radiographic analysis | Influence of AI integration on students’ radiographic diagnostic competences | AI software is feasible for providing feedback on students’ answers | Feasibility of AI application implementation in OMFR |
Results
Search yield
The initial systematic search across three electronic databases resulted in 3,124 records. Following a thorough screening of all titles and abstracts against the Population, Concept, and Context (PCC) criteria, 14 records were identified as potentially eligible for full-text review. Full-text screening of these records yielded eight distinct studies that met the established inclusion criteria and were included in the final analysis. The PRISMA-ScR flow diagram (Fig. 1) displays the comprehensive search results, showing how literature was identified, screened, and selected across the three databases. The final search strings are listed in Table 1.
Study characteristics (Country, study design and educational population)
This review encompasses eight studies published between 2021 and 2024 (see Table 2). They span various geographical regions, including the United States, Türkiye, Korea, India, Brazil, Finland, Denmark, and Germany, as portrayed in Fig. 2. Methodologically, the studies utilized different approaches.
Fig. 2.
Geographical distribution of selected studies
Specifically, five studies employed experimental or quasi-experimental methodologies, including Randomized Controlled Trials (RCTs), to evaluate diagnostic competence. Conversely, three studies utilized cross-sectional survey designs to assess participants’ knowledge, attitudes, and perceptions. The populations of participants exhibited variability; six studies primarily concentrated on undergraduate dental students, whereas two expanded their inclusion criteria to include postgraduate students, educators, or practicing professionals. Educational elements encompassed lecture-based instruction, clinical interpretation labs, and AI-enabled simulation environments.
AI applications and educational focus (Type and description of AI application/tool)
The AI applications under study span a wide range of technologies (Fig. 3a and b), outlined below:
Fig. 3.
a Artificial Intelligence Application outlined in selected studies. b Type of Artificial Intelligence application in selected studies with educational population
Generative AI and LLMs: A recent study directly compared the performance of large language model (LLM) chatbots with students in OMFR examinations [25]. Furthermore, Schoenhof et al. [26] assessed the educational realism and practical utility of Generative Adversarial Network (GAN) images, which are utilized to produce synthetic radiographs for training purposes.
Commercial Diagnostic Software: Three key studies examined commercial diagnostic AI platforms. Chang et al. [24] evaluated an AI-assisted radiograph mounting system. Additionally, two randomized controlled trials by Schropp et al. and Rampf et al. [27, 28] assessed the effects of commercial diagnostic software, specifically AssistDent® and dentalXrai Pro 3.0, on proximal caries detection and overall diagnostic performance.
Perception and Knowledge: Three separate studies used surveys to evaluate the knowledge and attitudes of dental students and practitioners regarding the incorporation of artificial intelligence in the Oral and Maxillofacial Radiology (OMFR) curriculum [21–23].
Study main findings
In the studies reviewed, attitudes were predominantly positive, reflecting a significant awareness of AI’s prospective role within dentistry. However, the measurable effect on learning outcomes remained inconsistent:
Performance vs. Efficiency: Two comparative investigations demonstrated that the students frequently surpassed AI tools in both diagnostic accuracy and theoretical comprehension [24, 25]. Thus, the primary utility of AI in these scenarios resided in its operational efficiency and capacity for reducing task completion times, as opposed to exhibiting superior decision-making capabilities.
Feasibility and Feedback: Randomized controlled trials (RCTs) indicated that integrating commercial diagnostic systems is feasible and effective for delivering immediate and standardized feedback to students [28]. However, despite this efficacy in feedback delivery, AI-assisted feedback did not reliably enhance students’ final diagnostic accuracy [28].
Simulation: An evaluation of GAN-generated images showed that students could distinguish synthetic images from real ones, yet these synthetic images were still considered a useful and patient-independent simulation tool for teaching purposes [26].
Educational elements investigated
This review defines educational elements as the core components of the learning process, encompassing both what is taught directly and what is conveyed indirectly. These findings (Fig. 4) collectively indicate that the investigated educational elements primarily center on foundational concepts of AI. Most knowledge, attitude, and perception studies address basic AI knowledge, perceptions of integrating AI into OMFR, attitudes, and confidence levels. Commonly explored elements included AI-assisted diagnostic experience in radiographic interpretation [26, 27] and radiographic mounting [24]. Students also came to understand AI reliability by recognizing its limitations and gauging their confidence in AI tools, through training on AI software and AI-generated images. Some studies highlight prospective AI research and applications in OMFR, especially data privacy concerns regarding radiographic images used by AI tools or software [26].
Fig. 4.
Educational elements of artificial intelligence application
Discussion
This scoping review comprehensively examined the existing evidence concerning artificial intelligence (AI) in Oral and Maxillofacial Radiology (OMFR) education. A key finding reveals a significant lack of dedicated educational research, particularly in comparison to the growing number of clinical AI studies. The eight selected articles collectively highlight the emerging stage of the field, primarily focusing on AI knowledge dissemination, perceptions, and the integration of AI tools in OMFR education. Radiology education revolves around didactic lectures, constructive feedback, direct clinical experience, and objective knowledge assessments [29]. Our analysis indicates an emergent shift toward AI integration within this framework. Furthermore, these findings facilitate the identification of potential educational elements and methodologies that are currently under exploration.
Pedagogical element: lecture, seminars and feedback method
Establishing foundational knowledge is essential in AI-OMFR education. Cross-sectional studies conducted in Türkiye, Brazil, and India [21–23] have demonstrated a positive shift in student and faculty perceptions of AI following the implementation of structured instructional methods, underscoring the crucial role of formal information dissemination. A well-structured lecture, potentially covering the areas detailed by Schwendicke et al. [15], could establish a strong educational foundation, highlighting the fundamental knowledge, limitations, and applications of AI that should be integrated into student education. The phenomenon of ‘AI hallucination’, as highlighted in the LLM study by Jeong et al. [25], warrants particular emphasis because it illustrates the risk of relying uncritically on AI output. This understanding reinforces the perspective among students that AI functions as a sophisticated tool rather than an infallible shortcut, particularly within the nuanced domain of image interpretation. This view aligns with Harte et al. [30], who stated that the integration of AI in undergraduate dental education is promising but should serve as a complementary tool, not a replacement.
Hui et al. [31] summarized the radiology education process for postgraduate students, including specialized AI datasets and knowledge training, as well as fundamental AI concepts for diagnostics, simulation, and assessment of AI tools. Therefore, introductory activities on AI should be initiated for either undergraduates or postgraduates. This can be as simple as assigning students the task of prompting Large Language Models with OMFR theoretical questions to highlight inconsistencies in AI-generated outputs. This encourages a critical evaluation of AI content against established clinical references and actively involves students in providing feedback on the limitations of AI tools. Such practices can help foster greater confidence in human diagnostic judgment and enhance the integration of AI in OMFR. Synthetic radiograph images, as in the study by Schoenhof et al. [26], can also serve as effective training materials for radiographic interpretation, especially for undergraduates. They can try providing their interpretations and seek knowledge from AI tools.
Technological element: AI tools, simulation software and virtual models
This review highlights a range of AI tools being explored for OMFR education, including generative tools for creating synthetic images, commercial diagnostic software, and large language models. While these studies documented the use of such tools, they offered only a limited exploration of the specific teaching methods built around them. Nevertheless, the broader medical imaging literature offers instructive models for consideration. For example, Van de Venter et al. (2023) evaluated a postgraduate AI module designed for radiographers, recommending a “blended learning delivery format” that combines both synchronous and asynchronous elements, along with customizable and contextualized content [32]. These findings suggest that introducing new AI tools alone may not be sufficient; instead, successful integration likely requires a structured educational program grounded in solid pedagogical principles.
In the context of educational applications that utilize AI-generated synthetic images, educators could consider tasking students with creating educational posters or infographics on relevant OMFR topics. This approach would, of course, necessitate instructor validation for accuracy. This pedagogical approach effectively combines AI as a productivity tool with opportunities for critical appraisal, thereby consolidating theoretical knowledge through active creation. Furthermore, AI-generated synthetic radiographs offer a valuable avenue for students to develop and test their OMFR image interpretation skills, particularly as they are not constrained by limitations in actual patient data.
Cognitive element: problem solving, critical thinking and diagnostic reasoning
A central pedagogical challenge involves guiding students from passively receiving information to active, critical engagement. Indeed, the experimental studies analyzed in this review demonstrate that students consistently outperformed AI tools in both diagnostic accuracy and theoretical Oral and Maxillofacial Radiology (OMFR) examinations. This outcome resonates with broader trends observed across the medical imaging discipline. For instance, Lewis et al. [33] reported that students in Medical Imaging and Radiation Science (MIRS) already acknowledge the limitations of AI. Specifically, they noted students’ awareness that “information produced by AI can be inaccurate,” which prompts them to routinely “cross-reference and check” AI-generated content.
The available evidence suggests that a primary educational objective should not focus on instructing students in the utilization of AI, but rather on developing their ability to critically evaluate AI-generated output. Research on employing AI for feedback has notably demonstrated its utility, thereby positioning AI as a tool for verification rather than an absolute authority. Lewis et al. [33] further reported that students considered the verification process itself to be a valuable “learning opportunity,” which promoted a profound engagement with both reading and research. Accordingly, a pragmatic pedagogical strategy might involve educators implementing assessment tasks that require students to investigate AI-generated content and formally corroborate or refute its accuracy. This method directly quantifies and reinforces critical human oversight, a competency universally recognized as essential within clinical practice.
Affective element: motivation, confidence and attitude
The affective domain, encompassing student attitudes and motivation, appears to constitute a significant area of inquiry within contemporary educational literature. Our scoping review revealed that investigations assessing student perceptions consistently reported favorable dispositions. This positive inclination is further supported by a broader body of scholarly work. Specifically, a survey conducted by Yılmaz et al. [34] similarly indicated that a substantial proportion of students advocate for the integration of artificial intelligence into the curriculum, perceiving it as essential for modern dental practice.
However, this positive attitude is balanced by a healthy skepticism regarding confidence and trust. Notably, our review indicates that mere exposure to AI tools did not inherently enhance student confidence. Chang et al. [24] reported no significant difference in confidence levels after students used an AI-assisted system. This observation aligns with the findings of Yılmaz et al. [34], who revealed that students continued to prioritize human clinical judgment over AI recommendations and did not perceive AI as superior to the judgment of an experienced clinician. These findings point to an important educational balance: educators should foster motivation and enthusiasm for AI while also guarding against overreliance on it. The primary goal is to develop clinicians who are not only motivated to utilize AI tools in their practice but also confident enough in their clinical judgment to critically assess and, if necessary, override AI recommendations.
Limitations of this review and recommendations
Despite the findings, several notable limitations characterize this scoping review. A primary constraint was the exclusive focus on English-language publications, which may have inadvertently omitted pertinent research conducted in non-English-speaking regions. Moreover, the small number of included studies (n = 8) underscores the nascent stage of AI-educational research within Oral and Maxillofacial Radiology (OMFR) and attenuates the robustness of the derived findings. Additionally, many investigations focused primarily on perceptions rather than on long-term learning outcomes or measurable changes in OMFR education, which limits the evidence available to guide future OMFR curricula.
Despite these limitations, this review establishes a crucial foundation for future research. Controlled trials are needed that quantify specific learning outcomes, such as diagnostic accuracy and critical reasoning, over extended observational periods. Future investigations should also examine the cost-effectiveness of AI integration, barriers to implementation in resource-limited settings, and the comparative effectiveness of different pedagogical approaches.
Conclusion
This scoping review reveals an emerging but still limited body of evidence on the educational integration of Artificial Intelligence (AI) in Oral and Maxillofacial Radiology (OMFR), with only eight peer-reviewed studies published between 2014 and 2024. The included studies primarily implemented AI applications, such as large language models and Generative Adversarial Networks, and assessed students’ and educators’ knowledge, attitudes, and perceptions. They generally report a positive perspective on AI in OMFR that acknowledges both its advantages and limitations, and students demonstrated superior knowledge and performance compared with AI tools. None of the selected studies proposed a definitive framework for implementing AI in the OMFR curriculum. Prospective research investigating AI curriculum development and implementation strategies specifically in OMFR education is therefore crucial.
Acknowledgments
Clinical trial number
Not applicable.
Authors’ contributions
A.S.A.S. contributed to conceptualization and wrote the main manuscript text. N.R. contributed to data curation, reviewing and editing. M.F.K. contributed to data curation and writing. N.M. and A.M. contributed to the literature review and visualization. J.Y.A. and M.I.M.A.H. contributed to reviewing and editing. All authors approved the final version of the manuscript.
Funding
No funding has been received for this study.
Data availability
The datasets generated during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021. 10.7861/fhj.2021-0095.
- 2.Rezallah NNF, El Banna F, Abdelkarim AZ, Alhameedi AZN. Artificial intelligence in oral & maxillofacial radiology: narrative review. J Int Dent Med Res. 2024;17(3):1323–30.
- 3.Rasmussen EL, Musaeus P. Subject matter changes in the dental curriculum: a scoping review of the last two decades. J Dent Educ. 2024;88(8):1101–14. 10.1002/jdd.13530.
- 4.Heo MS, Kim JE, Hwang JJ, Han SS, Kim JS, Yi WJ, Park IW. Artificial intelligence in oral and maxillofacial radiology: what is currently possible? Dentomaxillofac Radiol. 2021. 10.1259/dmfr.20200375.
- 5.Negrete D, Lopes SLPC, Barretto MDA, Moura NBd, Nahás ACR, Costa ALF. Artificial intelligence and dentomaxillofacial radiology education: innovations and perspectives. Dent J. 2025;13(6):245. 10.3390/dj13060245.
- 6.Thurzo A, Strunga M, Urban R, Surovková J, Afrashtehfar KI. Impact of artificial intelligence on dental education: a review and guide for curriculum update. Educ Sci. 2023. 10.3390/educsci13020150.
- 7.Hu C, Li F, Wang S, Gao Z, Pan S, Qing M. The role of artificial intelligence in enhancing personalized learning pathways and clinical training in dental education. Cogent Educ. 2025;12(1):2490425. 10.1080/2331186X.2025.2490425.
- 8.Pang X, Zou J, Zhang X, Li Y, Zhang H, Wang F, et al. The impact of artificial intelligence-assisted teaching on medical students’ learning outcomes: an integrated model based on the ARCS model and constructivist theory. BMC Med Educ. 2025;25:1309. 10.1186/s12909-025-07826-z.
- 9.Giannakopoulos K, Kavadella A, Aaqel Salim A, Stamatopoulos V, Kaklamanos EG. Evaluation of the performance of generative AI large language models ChatGPT, Google Bard, and Microsoft Bing Chat in supporting evidence-based dentistry: comparative mixed methods study. J Med Internet Res. 2023;25:e51580. 10.2196/51580.
- 10.Huang YK, Hsu LP, Chang YC. Artificial intelligence in clinical dentistry: the potentially negative impacts and future actions. J Dent Sci. 2022;17:1817–8. 10.1016/j.jds.2022.07.009.
- 11.Feher B, Tussie C, Giannobile WV. Applied artificial intelligence in dentistry: emerging data modalities and modeling approaches. Front Artif Intell. 2024. 10.3389/frai.2024.1427517.
- 12.Stephan D, Bertsch A, Burwinkel M, Vinayahalingam S, Al-Nawas B, Kämmerer PW, Thiem DG. AI in dental radiology—improving the efficiency of reporting with ChatGPT: comparative study. J Med Internet Res. 2024;26:e60684. 10.2196/60684.
- 13.Claman D, Sezgin E. Artificial intelligence in dental education: opportunities and challenges of large language models and multimodal foundation models. JMIR Med Educ. 2024;10:e52346. 10.2196/52346.
- 14.Dashti M, Ghasemi S, Khurshid Z. Integrating artificial intelligence in dental education: an urgent call for dedicated postgraduate programs. Int Dent J. 2024. 10.1016/j.identj.2024.08.008.
- 15.Schwendicke F, Chaurasia A, Wiegand T, Uribe SE, Fontana M, Akota I, et al. Artificial intelligence for oral and dental healthcare: core education curriculum. J Dent. 2023;128:104363. 10.1016/j.jdent.2022.104363.
- 16.El-Hakim M, Anthonappa R, Fawzy A. Artificial intelligence in dental education: a scoping review of applications, challenges, and gaps. Dent J. 2025;13(9):384. 10.3390/dj13090384.
- 17.Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Scoping reviews. In: Aromataris E, Lockwood C, Porritt K, Pilla B, Jordan Z, editors. JBI Manual for Evidence Synthesis. JBI; 2024. Available from: https://synthesismanual.jbi.global. 10.46658/JBIMES-24-09. Accessed 27 October 2025.
- 18.Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32. 10.1080/1364557032000119616.
- 19.Satmi ASA, Bin Khamis MF, Madawana AM, Bin Reza NAH. Scoping review on educational elements of artificial intelligence in oral and maxillofacial radiology. OSF Registries. 2025. Available from: https://osf.io/45ygw/overview. 10.17605/OSF.IO/45YGW.
- 20.Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan - a web and mobile app for systematic reviews. Syst Rev. 2016;5:210. 10.1186/s13643-016-0384-4.
- 21.Keser G, Namdar Pekiner FM. Attitudes, perceptions and knowledge regarding the future of artificial intelligence in oral radiology among a group of dental students in Turkey: a survey. Clin Exp Health Sci. 2021;11(4):637–41. 10.33808/clinexphealthsci.928246.
- 22.Pauwels R, Del Rey YC. Attitude of Brazilian dentists and dental students regarding the future role of artificial intelligence in oral radiology: a multicenter survey. Dentomaxillofac Radiol. 2021;50(5):20200461. 10.1259/dmfr.20200461.
- 23.Murali S, Bagewadi A, Kumar L, Fernandes A, Jayapriya T, Panwar A, et al. Knowledge, attitude, and perception of dentists regarding the role of artificial intelligence and its applications in oral medicine and radiology: a cross sectional study. J Oral Med Oral Surg. 2023;29(2):22. 10.1051/omf/2023020.
- 24.Chang J, Bliss L, Angelov N, Glick A. Artificial intelligence-assisted full-mouth radiograph mounting in dental education. J Dent Educ. 2024;88(7):933–9. 10.1002/jdd.13524.
- 25.Jeong H, Han SS, Yu Y, Kim S, Jeon KJ. How well do large language model-based chatbots perform in oral and maxillofacial radiology? Dentomaxillofac Radiol. 2024;53(6):390–5. 10.1093/dmfr/twae021.
- 26.Schoenhof R, Blumenstock G, Lethaus B, Hoefert S. Synthetic, non-person related panoramic radiographs created by generative adversarial networks in research, clinical, and teaching applications. J Dent. 2024;146:105042. 10.1016/j.jdent.2024.105042.
- 27.Schropp L, Sørensen APS, Devlin H, Matzen LH. Use of artificial intelligence software in dental education: a study on assisted proximal caries assessment in bitewing radiographs. Eur J Dent Educ. 2024;28(2):490–6. 10.1111/eje.12939.
- 28.Rampf S, Gehrig H, Möltner A, Fischer MR, Schwendicke F, Huth KC. Radiographical diagnostic competences of dental students using various feedback methods and integrating an artificial intelligence application—a randomized clinical trial. Eur J Dent Educ. 2024;28(4):925–37.
- 29.Zafar S, Safdar S, Zafar AN. Evaluation of use of e-learning in undergraduate radiology education: a review. Eur J Radiol. 2014;83(12):2277–87. 10.1016/j.ejrad.2014.08.017.
- 30.Harte M, Carey B, Feng Q, et al. Transforming undergraduate dental education: the impact of artificial intelligence. Br Dent J. 2025;238(1):57–60. 10.1038/s41415-024-7788-7.
- 31.Hui M, Sacoransky E, Chung A, Kwan BY. Exploring the integration of artificial intelligence in radiology education: a scoping review. Curr Probl Diagn Radiol. 2025;54(3):332–8. 10.1067/j.cpradiol.2024.10.012.
- 32.van de Venter R, Skelton E, Matthew J, Woznitza N, Tarroni G, Hirani SP, et al. Artificial intelligence education for radiographers, an evaluation of a UK postgraduate educational intervention using participatory action research: a pilot study. Insights Imaging. 2023;14(1):25. 10.1186/s13244-023-01372-2.
- 33.Lewis S, Bhyat F, Casmod Y, Gani A, Gumede L, Hajat A, et al. Medical imaging and radiation science students’ use of artificial intelligence for learning and assessment. Radiography. 2024;30(Suppl 2):60–6. 10.1016/j.radi.2024.10.006.
- 34.Yılmaz C, Erdem RZ, Uygun LA. Artificial intelligence knowledge, attitudes and application perspectives of undergraduate and specialty students of faculty of dentistry in Turkey: an online survey research. BMC Med Educ. 2024;24:1149. 10.1186/s12909-024-06106-6.