Abstract
Background/Objectives: Generative artificial intelligence (AI), such as ChatGPT, has developed rapidly in recent years, and its usefulness for diagnostic assistance in the medical field has been reported. However, there are few reports on the use of AI in dentistry. Methods: We created 20 questions that we had encountered in clinical pediatric dentistry and collected responses to them from three generative AI tools. The responses were evaluated on a 5-point scale by six pediatric dental specialists using the Global Quality Scale. Results: The average score was >3 for each of the three generative AI tools tested; the overall average was 3.34. Although the responses to questions related to “consultations from guardians” or “systemic diseases” received high scores (>3.5), the score for questions related to “dental abnormalities” was 2.99, the lowest among the four categories. Conclusions: Our results show the usefulness of generative AI tools in clinical pediatric dentistry, indicating that these tools can serve as useful assistants in the dental field.
Keywords: pediatric dentistry, natural language processing, artificial intelligence
1. Introduction
Artificial intelligence (AI) is broadly defined as the ability of machines to apply human-like reasoning to problem-solving, and AI has grown rapidly in many disciplines in recent years [1]. Natural language processing (NLP), an area of AI research, includes various methods for identifying, reading, extracting, and ultimately transforming large collections of language [2,3]. NLP techniques have gained importance in the medical field and have helped in the early diagnosis of systemic diseases [4,5,6,7]. In addition, some research has focused on the usefulness of state-of-the-art chatbots, which use NLP to hold natural conversations, for medical consultations [8,9,10].
Dental caries and periodontitis are common oral diseases that affect a large number of people worldwide [11,12,13]. In pediatric dentistry, dental problems include malocclusion, tooth fusion, supernumerary teeth, congenital tooth absence, and hypomineralization [14,15,16,17,18]. There are also many systemic diseases that dental professionals need to consider, such as heart disease, hemophilia, pediatric cancer, and hypophosphatasia [19,20,21,22,23]. It is therefore important for patients with dental problems or systemic diseases to be able to access proper information regarding dental management. Dental professionals may also encounter pediatric patients whose chief complaints are dental trauma or oral habits such as finger sucking or pacifier use [24,25,26]. In addition, self-inflicted soft tissue injuries can occur in pediatric patients following the administration of local anesthesia [27]. These examples show that the field of pediatric dentistry is broad and that proper knowledge is important in each situation.
Research using AI is also being conducted in the dental field, where its applications are classified into clinical practice and dental education [28,29,30,31]. In clinical situations, AI is used to detect caries and dental abnormalities and for patient consultations [29,32,33,34]. Infectious disease pandemics can cause patients to postpone dental visits, and in some areas, a shortage of dentists can make dental visits difficult [35,36,37]. In addition, in an emergency such as tooth avulsion, the avulsed tooth must be preserved promptly in milk, saline solution, or even saliva [38]. Therefore, patient consultation using AI, especially virtual consultation, is an important topic.
Acar (2024) investigated AI performance on questions regarding patients’ concerns in oral surgery and suggested the high potential of chatbots [10]. As mentioned above, pediatric patients have various concerns; in addition, the normal eruption of deciduous and permanent teeth into the oral cavity occurs over a broad chronologic age range [39]. Although regular dental checkups may reveal obvious dental abnormalities, patients and their parents must otherwise judge whether their current condition is normal [40,41]. Pediatric patients and their parents would therefore benefit if they could obtain information about pediatric dentistry and recommendations about dental visits from AI. However, only a few reports have focused on the usefulness of AI in pediatric dental consultation for patients. In this study, we evaluated three generative AI tools to investigate whether AI can provide patients with proper information about pediatric dentistry.
2. Materials and Methods
2.1. Ethical Consideration
Because this study investigated the usefulness of generative AI, ethical approval was not required and was waived. No patients or medical records were involved in this study.
2.2. Questions Presented to the Generative AI Tools
In previous papers focusing on the reliability of responses provided by AI, the number of questions ranged from 10 to 20 [10,42]. We therefore created 20 questions focusing on important issues in pediatric dentistry and common queries from guardians (Table 1). Questions 1–8 ask about the ages at which various events related to oral management start or finish, so the three generative AI tools could answer with specific ages. Questions 9–13 focused on dental abnormalities, and Questions 14–17 focused on systemic diseases. Questions 18–20 were designed to represent questions a pediatric dentist may encounter from patients’ guardians at pediatric dental clinics.
Table 1.
Questions and categories used in this study (translated from Japanese). The 20 questions fall into four categories: age (Questions 1–8), dental abnormalities (Questions 9–13), systemic diseases (Questions 14–17), and consultations from guardians (Questions 18–20).
These questions were presented in Japanese to the free versions of the three generative AI tools in late June 2024. All questions were presented on the same computer on the same day to prevent the impact of updates to the AI tools. Additionally, we avoided using the AI tools while logged into the same account, to prevent the three tools from influencing each other. The generative AI tools used for this study were ChatGPT 3.5 (OpenAI Global, San Francisco, CA, USA), Microsoft Copilot (Microsoft, Redmond, WA, USA), and Gemini (Google, Mountain View, CA, USA). The questions were presented in random order, and the responses were evaluated for this study.
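Although we collected the responses manually from the free web interfaces, response collection of this kind can be scripted for reproducibility. The following is a minimal sketch, assuming access to OpenAI’s Python client; the model name, question list, and output file are illustrative assumptions and do not reflect the study’s actual procedure.

```python
# Hypothetical sketch: collecting AI responses to a fixed question list via an API.
# The study used free web interfaces; this illustrates a reproducible alternative.
import csv

from openai import OpenAI  # pip install openai

QUESTIONS = [
    "At what age should I start brushing my child's teeth?",  # example question
    # ...the remaining questions from Table 1 would be listed here...
]

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("responses.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "response"])
    for question in QUESTIONS:
        # One independent chat per question, so earlier questions cannot
        # influence later answers (mirroring the study's isolation of tools).
        chat = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        writer.writerow([question, chat.choices[0].message.content])
```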
2.3. Evaluation of Answers of the Generative AI Tools
The answers in Japanese from the generative AI tools were evaluated by six evaluators, all of whom were pediatric dentists with >5 years of clinical experience in pediatric dentistry at a university hospital, in accordance with a previous study [21]. The evaluation criteria were those of the Global Quality Scale (GQS), as used in previous studies (Table 2) [10,43]. Scores were rated from 1 to 5, with 1 being the lowest and 5 the highest. The AI response score for each question was determined by calculating the mean and standard deviation of the six evaluators’ scores.
Table 2.
Global Quality Scale score description used in this study.
Score | Description
---|---
1 | Poor quality, poor flow of the site, most information missing, and not at all useful for patients
2 | Generally poor quality and poor flow, some information listed but many important topics missing, and of very limited use to patients
3 | Moderate quality, suboptimal flow, some important information adequately discussed but others poorly discussed, and somewhat useful for patients
4 | Good quality and generally good flow, most of the relevant information listed, but some topics not covered, and useful for patients
5 | Excellent quality and excellent flow, and very useful for patients
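As a small illustration of the scoring procedure, the following sketch computes a question’s mean ± standard deviation from six evaluators’ GQS ratings; the rating values are hypothetical, although they happen to reproduce the 2.83 ± 0.75 reported for ChatGPT 3.5 on Question 1 in Table 3.

```python
# Minimal sketch: aggregating six evaluators' GQS ratings (1-5) for one question.
from statistics import mean, stdev

ratings = [2, 3, 3, 2, 3, 4]  # hypothetical GQS scores from the six evaluators

# stdev() uses the sample (n-1) formula; this example yields 2.83 +/- 0.75.
print(f"{mean(ratings):.2f} ± {stdev(ratings):.2f}")
```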
2.4. Statistical Analysis
Statistical analyses were conducted using GraphPad Prism 9 (GraphPad Software Inc., La Jolla, CA, USA). The Kruskal–Wallis test for nonparametric data, followed by Dunn’s test for multiple comparisons, was used to compare groups. Data are shown as the mean ± standard deviation. Differences were considered statistically significant at p < 0.05.
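A minimal sketch of the same analysis in Python, with scipy and the scikit-posthocs package standing in for GraphPad Prism; the score vectors are hypothetical.

```python
# Sketch: Kruskal-Wallis test across the three AI tools, followed by Dunn's test.
# scipy and scikit-posthocs stand in for GraphPad Prism; scores are hypothetical.
from scipy.stats import kruskal
import scikit_posthocs as sp  # pip install scikit-posthocs

# Six evaluators' GQS scores for one question, per AI tool (invented values).
chatgpt = [2, 3, 3, 2, 3, 4]
copilot = [4, 5, 4, 3, 5, 5]
gemini = [3, 4, 3, 4, 3, 4]

h_stat, p_value = kruskal(chatgpt, copilot, gemini)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Pairwise post hoc comparisons; rows/columns 1-3 are the three tools.
    print(sp.posthoc_dunn([chatgpt, copilot, gemini]))
```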
3. Results
The scores of the AI responses to the 20 questions are shown in Table 3. The scores of the individual AI tools ranged from 1.00 ± 0.00 to 4.50 ± 0.55. The average AI score for each question ranged from 2.61 ± 1.04 to 3.83 ± 0.71, and the total average score was 3.34 ± 0.94. The lowest score of 1.00 occurred when a generative AI tool misread the systemic disease in the question.
Table 3.
Evaluation of AI responses to 20 questions.
Question | ChatGPT 3.5 | Microsoft Copilot | Gemini | Average |
---|---|---|---|---|
1 | 2.83 ± 0.75 | 4.33 ± 0.82 * | 3.50 ± 1.05 | 3.56 ± 1.04 |
2 | 3.17 ± 0.75 | 3.50 ± 1.05 | 3.00 ± 0.63 | 3.22 ± 0.80 |
3 | 3.33 ± 0.52 | 3.83 ± 0.75 | 3.17 ± 0.98 | 3.44 ± 0.78 |
4 | 2.83 ± 0.75 | 3.17 ± 0.41 | 3.17 ± 1.17 | 3.06 ± 0.80 |
5 | 3.17 ± 0.41 | 3.83 ± 0.75 | 2.83 ± 0.98 | 3.28 ± 0.83 |
6 | 3.67 ± 0.52 | 2.83 ± 0.75 | 2.33 ± 0.82 * | 2.94 ± 0.87 |
7 | 3.50 ± 0.55 | 3.83 ± 0.75 | 3.33 ± 1.03 | 3.56 ± 0.78 |
8 | 3.33 ± 0.52 | 3.83 ± 0.75 | 3.83 ± 0.75 | 3.67 ± 0.69 |
9 | 1.67 ± 0.52 | 3.67 ± 0.82 ** | 2.50 ± 0.55 | 2.61 ± 1.04 |
10 | 2.33 ± 0.52 | 4.00 ± 0.63 ** | 3.67 ± 0.52 * | 3.33 ± 0.91 |
11 | 2.83 ± 0.75 | 2.17 ± 0.75 | 3.17 ± 0.75 | 2.72 ± 0.83 |
12 | 3.00 ± 0.89 | 3.83 ± 0.41 | 3.00 ± 0.89 | 3.28 ± 0.83 |
13 | 3.33 ± 0.82 | 3.00 ± 0.63 | 2.67 ± 0.82 | 3.00 ± 0.77 |
14 | 3.67 ± 0.82 | 4.17 ± 0.75 | 3.67 ± 0.52 | 3.83 ± 0.71 |
15 | 4.50 ± 0.55 | 1.00 ± 0.00 ** | 3.83 ± 0.75 † | 3.11 ± 1.64 |
16 | 2.00 ± 0.89 | 4.17 ± 0.41 * | 4.33 ± 0.82 ** | 3.50 ± 1.29 |
17 | 4.00 ± 0.63 | 3.67 ± 0.52 | 3.83 ± 0.41 | 3.83 ± 0.51 |
18 | 4.00 ± 0.00 | 3.83 ± 0.41 | 3.67 ± 0.52 | 3.83 ± 0.38 |
19 | 3.17 ± 0.75 | 4.17 ± 0.75 | 3.67 ± 0.82 | 3.67 ± 0.84 |
20 | 3.17 ± 0.75 | 3.33 ± 0.82 | 3.33 ± 0.82 | 3.28 ± 0.75 |
Total | 3.18 ± 0.90 | 3.51 ± 1.00 | 3.33 ± 0.89 | 3.34 ± 0.94 |
Data are shown as the mean ± SD. The total mean ± SD were calculated using the average of the 20 questions. Statistical comparisons were performed on each question number using the Kruskal–Wallis test. * p < 0.05 and ** p < 0.01 versus ChatGPT 3.5; † p < 0.05 versus Microsoft Copilot.
We divided the 20 questions into four categories (Table 1) and compared the AI scores within each category. For the questions regarding age, the Microsoft Copilot score was 3.65 ± 0.84, significantly higher than that of Gemini (p < 0.01) (Figure 1, Supplementary Figure S1). Microsoft Copilot also had the highest score for the questions regarding dental abnormalities, significantly different from the ChatGPT 3.5 score (p < 0.01) (Figure 2, Supplementary Figure S2). However, there were no significant differences among the tools for the questions regarding systemic diseases or consultations from guardians (Figure 3 and Figure 4, Supplementary Figures S3 and S4).
Figure 1.
The score of the AI responses regarding age. The Kruskal–Wallis test was used for comparisons, * p < 0.05. Bars indicate the mean and SD.
Figure 2.
The score of the AI responses regarding dental abnormalities. The Kruskal–Wallis test was used for comparisons, ** p < 0.01. Bars indicate the mean and SD.
Figure 3.
The score of the AI responses regarding systemic diseases. The Kruskal–Wallis test was used for comparisons. Bars indicate the mean and SD.
Figure 4.
The score of the AI responses regarding consultations from guardians. The Kruskal–Wallis test was used for comparisons. Bars indicate the mean and SD.
For the responses regarding consultations from guardians, the total average score was 3.59 ± 0.71, which was the highest among the four categories, followed by the questions regarding systemic diseases and age. For the responses to the questions regarding dental abnormalities, the total average score was 2.99 ± 0.91, which was the lowest among the four categories.
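The category-level means quoted above can be recomputed from the per-question averages in Table 3 using the question-to-category mapping of Table 1 (the category standard deviations, by contrast, come from the raw evaluator scores and are not recoverable this way). A brief sketch:

```python
# Recomputing the category averages from the "Average" column of Table 3.
from statistics import mean

question_avg = {
    1: 3.56, 2: 3.22, 3: 3.44, 4: 3.06, 5: 3.28, 6: 2.94, 7: 3.56, 8: 3.67,
    9: 2.61, 10: 3.33, 11: 2.72, 12: 3.28, 13: 3.00,
    14: 3.83, 15: 3.11, 16: 3.50, 17: 3.83,
    18: 3.83, 19: 3.67, 20: 3.28,
}
categories = {  # question-to-category mapping from Table 1
    "age": range(1, 9),
    "dental abnormalities": range(9, 14),
    "systemic diseases": range(14, 18),
    "consultations from guardians": range(18, 21),
}

for name, questions in categories.items():
    print(f"{name}: {mean(question_avg[q] for q in questions):.2f}")
# Prints 3.59 for consultations from guardians and 2.99 for dental
# abnormalities, matching the values reported in the text.
```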
4. Discussion
In this study, we evaluated how accurately ChatGPT, Microsoft Copilot, and Gemini answered questions regarding pediatric dentistry. Screenshots of representative responses from each AI tool are shown in Supplementary Figure S5. Microsoft Copilot’s score was significantly higher than that of Gemini for responses regarding age, and significantly higher than that of ChatGPT 3.5 for responses regarding dental abnormalities. In response to the question “At what age should I start brushing my child’s teeth?”, Microsoft Copilot recommended starting as soon as the teeth erupt and using toothpaste once the child is able to rinse their mouth. Gemini likewise answered that brushing should start after the primary teeth have erupted; however, it recommended starting toothpaste use before tooth eruption and, after eruption, using a toothpaste that does not contain fluoride. The Japanese Society of Pediatric Dentistry recommends fluoride toothpaste, used carefully with regard to the amount, after primary tooth eruption, which may have influenced the evaluation [44]. In addition, in response to the question regarding neonatal teeth, Microsoft Copilot correctly explained the definition, whereas ChatGPT 3.5 discussed neonatal teeth and congenital dental abnormalities without giving the definition, resulting in a difference in the evaluation scores. On the other hand, in response to a question regarding systemic diseases, Microsoft Copilot’s score was 1.00, the lowest recorded, because it answered about diabetes when asked about hemophilia. In this study, each response was evaluated as a whole, so answers that embedded incorrect information within a large amount of otherwise correct information received low scores. In the future, it will be necessary to compare responses generated under constraints such as character limits.
We found that although the AI responses had a certain degree of accuracy, several important issues were identified. First, the AI responses were based on general medical knowledge and showed relatively high accuracy for basic medical information [45]. For example, the AI tools provided reliable information for questions related to general knowledge, such as the symptoms of common diseases, preventive measures, and standard treatments. When the responses included specific numbers, the results were generally consistent with the expert knowledge of pediatric dental specialists. This is likely because AI learns from large datasets [46]. Ozgor et al. (2024) also reported that analyzing data from numerous resources yielded a higher accuracy rate for ChatGPT’s answers [47]. However, not all information on the Internet is accurate. Cetin et al. (2023) investigated English-language YouTube™ videos about coronary artery bypass grafting and reported that these videos had low quality and reliability [48]. In addition, Borges do Nascimento et al. (2022) reported that the prevalence of health-related misinformation on social media ranged from 0.2% to 28.8% [49]. It is therefore important for AI to obtain information from reliable resources.
AI’s limitations became apparent for questions related to complex medical issues that require specialized judgment and specific advice tailored to individual patient situations. For example, it became clear that individualized treatment strategies for specific medical conditions and answers reflecting the latest research findings need to be validated by experts. AI learns from existing data, which limits its ability to respond to the latest information and to individual cases. Massey et al. (2023) investigated the performance of ChatGPT on orthopedic assessment examinations and reported that both ChatGPT-3.5 and GPT-4 performed better on text-only questions than on questions with images [50]. It has also been reported that the diagnostic performance of GPT-4- and GPT-4V-based ChatGPT did not reach the level of either radiology residents or board-certified radiologists in challenging neuroradiology cases [51]. These results indicate that it may currently be difficult for generative AI to diagnose correctly based on patient information and examination findings. On the other hand, GPT-4 passed the Japanese Medical Licensing Examination, whereas GPT-3.5 failed, suggesting GPT-4’s rapid evolution in Japanese language processing [52]. As the accuracy of AI improves, this weakness may be overcome.
AI answers may be ambiguous, and users may have trouble interpreting them. In particular, there is a risk of misunderstanding if medical terminology or complex concepts are not fully explained. Moreover, because answers from AI usually do not cite their sources, we found situations where it was difficult to judge how reliable the information was without expertise in the field. This point needs to be improved to increase the transparency and explainability of AI responses.
We also found that there is a risk of problems not only in the information output by AI tools but also in how the tools read the information in the input questions. In the present study, one AI tool misread a systemic disease and gave a wrong answer, resulting in the lowest score of 1.00. In clinical situations, patients may suffer harm when misinterpretation by AI leads to an answer containing incorrect information. Because the answers given by AI are written in text, it is important for users to verify that the AI has read and answered the questions correctly. To make appropriate use of generative AI tools, in addition to efforts to make AI output appropriate information, it is necessary to explore ways to ensure that AI tools read the input correctly [53].
We also considered the ethical aspects of AI. When AI provides medical information, it is important to ensure that the information is not misleading [54]. In particular, answers that promote self-diagnosis or self-treatment should be avoided, and users should always be encouraged to seek the opinion of a medical professional. When AI is used in the medical field, its responsibilities and limitations must be clearly defined [54].
AI is considered a promising technology to support non-invasive tasks in pediatric dental clinical practice, such as helping parents with their children’s oral hygiene habits and assisting general dentists in diagnosis [55,56,57,58,59]. You et al. reported an AI model for detecting plaque on primary teeth that achieved clinically acceptable performance compared with experienced pediatric dentists [56]. Kılıc et al. concluded that an AI model that detects and numbers primary teeth in panoramic radiographs is a promising tool for the automatic charting of panoramic radiographs [57]. General dentists are not necessarily familiar with pediatric dental knowledge and diagnosis. Although our results indicate that further refinement is required before clinical use, AI applications may support general dentists in their understanding and diagnosis of pediatric dentistry.
AI research in clinical pediatric dentistry has many promising applications that might change pediatric practice in the coming years [60]. At the same time, ongoing collaboration among dental professionals, researchers, and technologists will be essential to harness the full potential of AI to transform pediatric dental care [61]. By accumulating, integrating, and analyzing large datasets, AI may learn clinically valuable information related to pediatric dentistry and improve the accuracy of its answers.
Overall, while generative AI tools can be useful in providing medical information, it is important to recognize their limitations and risks and to use them appropriately [62]. Future research and development must take a comprehensive approach that includes ethical considerations and methods to improve the accuracy and reliability of AI.
This study has some limitations. First, in accordance with previous reports, we used the free versions of the generative AI tools [63,64,65]. The results might therefore differ with the paid versions. In addition, the results may vary depending on the timing of the survey or the type of AI, because AI tools are improving every day. We will investigate these changes by conducting similar surveys again in the future. Second, following a previous report [43], the 20 questions were based on the authors’ personal experience and lacked standardization. We also did not investigate scoring variability among the six evaluators. Although we selected evaluators with a certain amount of clinical experience, we did not calculate kappa values or provide evaluator training. Additional surveys are needed to simplify the evaluation criteria or to investigate inter-rater reliability. Lastly, the 20 questions were presented in Japanese, and the answers were evaluated in Japanese. The competence of generative AI may vary depending on the language. The advantage of evaluating in one’s native language is that it allows a detailed understanding by the evaluators; on the other hand, conducting the research in English would allow comparisons between countries or regions. We will conduct additional research to assess differences between languages.
5. Conclusions
To our knowledge, this is the first study to examine the usefulness of generative AI in pediatric dentistry. We found that the three generative AI tools tested provided reliable responses to many questions regarding pediatric dentistry, suggesting the usefulness of AI in clinical dental situations. However, extremely low AI scores were found for a few questions, and there were categories with significant score differences between the AI tools. Thus, generative AI has weaknesses, and it is important to understand these characteristics when using generative AI tools. Further research is needed to develop the clinical application of generative AI in dentistry.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics14242818/s1, Figure S1: The score of AI responses regarding age. (A) Question 1, (B) Question 2, (C) Question 3, (D) Question 4, (E) Question 5, (F) Question 6, (G) Question 7, (H) Question 8. The Kruskal–Wallis test was used for comparisons, * p < 0.05. Bars indicate mean and SD. Figure S2: The score of AI responses regarding dental abnormalities. (A) Question 9, (B) Question 10, (C) Question 11, (D) Question 12, (E) Question 13. The Kruskal–Wallis test was used for comparisons, * p < 0.05, ** p < 0.01. Bars indicate mean and SD. Figure S3: The score of AI responses regarding systemic diseases. (A) Question 14, (B) Question 15, (C) Question 16, (D) Question 17. The Kruskal–Wallis test was used for comparisons, * p < 0.05, ** p < 0.01. Bars indicate mean and SD. Figure S4: The score of AI responses regarding consultations from guardians. (A) Question 18, (B) Question 19, (C) Question 20. The Kruskal–Wallis test was used for comparisons. Bars indicate mean and SD. Figure S5: Screenshots of representative responses from each AI tool (accessed on 26 November 2024). (A) ChatGPT 3.5, (B) Microsoft Copilot, (C) Gemini.
Author Contributions
Conceptualization, S.K., T.A. and M.H.; methodology, T.A.; validation, S.K., T.A. and R.N.; formal analysis, S.K., T.A. and M.H.; investigation, T.A., Y.A., Y.I., M.T., C.M. and R.N.; writing—original draft preparation, S.K., T.A. and R.N.; writing—review and editing, T.A. and R.N.; supervision, M.H. and C.M. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
Funding Statement
This research received no external funding.
References
- 1.Tessler F.N., Thomas J. Artificial Intelligence for Evaluation of Thyroid Nodules: A Primer. Thyroid. 2023;33:150–158. doi: 10.1089/thy.2022.0560. [DOI] [PubMed] [Google Scholar]
- 2.Goodfellow I., Bengio Y., Courville A. Deep Learning. MIT Press; Cambridge, MA, USA: 2016. [Google Scholar]
- 3.Schoene A.M., Basinas I., van Tongeren M., Ananiadou S. A Narrative Literature Review of Natural Language Processing Applied to the Occupational Exposome. Int. J. Environ. Res. Public Health. 2022;19:8544. doi: 10.3390/ijerph19148544. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Aramaki E., Wakamiya S., Yada S., Nakamura Y. Natural Language Processing: From Bedside to Everywhere. Yearb. Med. Inform. 2022;31:243–253. doi: 10.1055/s-0042-1742510. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Liu N., Luo K., Yuan Z., Chen Y. A Transfer Learning Method for Detecting Alzheimer’s Disease Based on Speech and Natural Language Processing. Front. Public Health. 2022;10:772592. doi: 10.3389/fpubh.2022.772592. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Reading Turchioe M., Volodarskiy A., Pathak J., Wright D.N., Tcheng J.E., Slotwiner D. Systematic review of current natural language processing methods and applications in cardiology. Heart. 2022;108:909–916. doi: 10.1136/heartjnl-2021-319769. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Michalski A.A., Lis K., Stankiewicz J., Kloska S.M., Sycz A., Dudziński M., Muras-Szwedziak K., Nowicki M., Bazan-Socha S., Dabrowski M.J., et al. Supporting the Diagnosis of Fabry Disease Using a Natural Language Processing-Based Approach. J. Clin. Med. 2023;12:3599. doi: 10.3390/jcm12103599. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Hsu H.Y., Hsu K.C., Hou S.Y., Wu C.L., Hsieh Y.W., Cheng Y.D. Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation. JMIR Med. Educ. 2023;9:e48433. doi: 10.2196/48433. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Kuroiwa T., Sarcon A., Ibara T., Yamada E., Yamamoto A., Tsukamoto K., Fujita K. The Potential of ChatGPT as a Self-Diagnostic Tool in Common Orthopedic Diseases: Exploratory Study. J. Med. Internet Res. 2023;25:e47621. doi: 10.2196/47621. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Acar A.H. Can natural language processing serve as a consultant in oral surgery? J. Stomatol. Oral Maxillofac. Surg. 2024;125:101724. doi: 10.1016/j.jormas.2023.101724. [DOI] [PubMed] [Google Scholar]
- 11.Worthington H.V., MacDonald L., Poklepovic Pericic T., Sambunjak D., Johnson T.M., Imai P., Clarkson J.E. Home use of interdental cleaning devices, in addition to toothbrushing, for preventing and controlling periodontal diseases and dental caries. Cochrane Database Syst. Rev. 2019;4:CD012018. doi: 10.1002/14651858.CD012018.pub2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Usuda M., Kametani M., Hamada M., Suehiro Y., Matayoshi S., Okawa R., Naka S., Matsumoto-Nakano M., Akitomo T., Mitsuhata C., et al. Inhibitory Effect of Adsorption of Streptococcus mutans onto Scallop-Derived Hydroxyapatite. Int. J. Mol. Sci. 2023;24:11371. doi: 10.3390/ijms241411371. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Yasuda J., Yasuda H., Nomura R., Matayoshi S., Inaba H., Gongora E., Iwashita N., Shirahata S., Kaji N., Akitomo T., et al. Investigation of periodontal disease development and Porphyromonas gulae FimA genotype distribution in small dogs. Sci. Rep. 2024;14:5360. doi: 10.1038/s41598-024-55842-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Zou J., Meng M., Law C.S., Rao Y., Zhou X. Common dental diseases in children and malocclusion. Int. J. Oral Sci. 2018;10:7. doi: 10.1038/s41368-018-0012-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Akitomo T., Asao Y., Iwamoto Y., Kusaka S., Usuda M., Kametani M., Ando T., Sakamoto S., Mitsuhata C., Kajiya M., et al. A Third Supernumerary Tooth Occurring in the Same Region: A Case Report. Dent. J. 2023;11:49. doi: 10.3390/dj11020049. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Akitomo T., Kusaka S., Iwamoto Y., Usuda M., Kametani M., Asao Y., Nakano M., Tachikake M., Mitsuhata C., Nomura R. Five-Year Follow-Up of a Child with Non-Syndromic Oligodontia from before the Primary Dentition Stage: A Case Report. Children. 2023;10:717. doi: 10.3390/children10040717. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Akitomo T., Kusaka S., Usuda M., Kametani M., Kaneki A., Nishimura T., Ogawa M., Mitsuhata C., Nomura R. Fusion of a Tooth with a Supernumerary Tooth: A Case Report and Literature Review of 35 Cases. Children. 2023;11:6. doi: 10.3390/children11010006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Vieira L.D.S., Mandetta A.R.H., Bortoletto C.C., Sobral A.P.T., Motta L.J., Mesquita Ferrari R.A., Duran C.C.G., Horliana A.C.R.T., Fernandes K.P.S., Bussadori S.K. A minimal interventive protocol using antimicrobial photodynamic therapy on teeth with molar incisor hypomineralization: A case report. J. Biophotonics. 2024;17:e202300414. doi: 10.1002/jbio.202300414. [DOI] [PubMed] [Google Scholar]
- 19.Kiselnikova L., Vislobokova E., Voinova V. Dental manifestations of hypophosphatasia in children and the effects of enzyme replacement therapy on dental status: A series of clinical cases. Clin. Case Rep. 2020;8:911–918. doi: 10.1002/ccr3.2769. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Karhumaa H., Lämsä E., Vähänikkilä H., Blomqvist M., Pätilä T., Anttonen V. Dental caries and attendance to dental care in Finnish children with operated congenital heart disease. A practice based follow-up study. Eur. Arch. Paediatr. Dent. 2021;22:659–665. doi: 10.1007/s40368-021-00603-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Kametani M., Akitomo T., Usuda M., Kusaka S., Asao Y., Nakano M., Iwamoto Y., Tachikake M., Ogawa M., Kaneki A., et al. Evaluation of Periodontal Status and Oral Health Habits with Continual Dental Support for Young Patients with Hemophilia. Appl. Sci. 2024;14:1349. doi: 10.3390/app14041349. [DOI] [Google Scholar]
- 22.Akitomo T., Ogawa M., Kaneki A., Nishimura T., Usuda M., Kametani M., Kusaka S., Asao Y., Iwamoto Y., Tachikake M., et al. Dental Abnormalities in Pediatric Patients Receiving Chemotherapy. J. Clin. Med. 2024;13:2877. doi: 10.3390/jcm13102877. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Akitomo T., Tsuge Y., Mitsuhata C., Nomura R. A Narrative Review of the Association between Dental Abnormalities and Chemotherapy. J. Clin. Med. 2024;13:4942. doi: 10.3390/jcm13164942. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Muzulan C.F., Gonçalves M.I. Recreational strategies for the elimination of pacifier and finger sucking habits. J. Soc. Bras. Fonoaudiol. 2011;23:66–70. doi: 10.1590/S2179-64912011000100014. [DOI] [PubMed] [Google Scholar]
- 25.Khan L. Dental Care and Trauma Management in Children and Adolescents. Pediatr. Ann. 2019;48:e3–e8. doi: 10.3928/19382359-20181213-01. [DOI] [PubMed] [Google Scholar]
- 26.Némat S.M., Kenny K.P., Day P.F. Special considerations in paediatric dental trauma. Prim. Dent. J. 2023;12:64–71. doi: 10.1177/20501684231211413. [DOI] [PubMed] [Google Scholar]
- 27.Alghamidi W.A., Alghamdi S.B., Assiri J.A., Almathami A.A., Alkahtani Z.M., Togoo R.A. Efficacy of self-designed intraoral appliances in prevention of cheek, lip and tongue bite after local anesthesia administration in pediatric patients. J. Clin. Exp. Dent. 2019;11:e315–e321. doi: 10.4317/jced.55477. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Fang Q., Reynaldi R., Araminta A.S., Kamal I., Saini P., Afshari F.S., Tan S.C., Yuan J.C., Qomariyah N.N., Sukotjo C. Artificial Intelligence (AI)-driven dental education: Exploring the role of chatbots in a clinical learning environment. J. Prosthet. Dent. 2024, in press. [DOI] [PubMed]
- 29.Sharma S., Kumari P., Sabira K., Parihar A.S., Divya Rani P., Roy A., Surana P. Revolutionizing Dentistry: The Applications of Artificial Intelligence in Dental Health Care. J. Pharm. Bioallied Sci. 2024;16((Suppl. S3)):S1910–S1912. doi: 10.4103/jpbs.jpbs_1290_23. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Sabri H., Saleh M.H.A., Hazrati P., Merchant K., Misch J., Kumar P.S., Wang H.L., Barootchi S. Performance of three artificial intelligence (AI)-based large language models in standardized testing; implications for AI-assisted dental education. J. Periodontal Res. 2024 doi: 10.1111/jre.13323. [DOI] [PubMed] [Google Scholar]
- 31.Lu W., Yu X., Li Y., Cao Y., Chen Y., Hua F. Artificial Intelligence-Related Dental Research: Bibliometric and Altmetric Analysis. Int. Dent. J. 2024 doi: 10.1016/j.identj.2024.08.004. [DOI] [PubMed] [Google Scholar]
- 32.Mertens S., Krois J., Cantu A.G., Arsiwala L.T., Schwendicke F. Artificial intelligence for caries detection: Randomized trial. J. Dent. 2021, in press. [DOI] [PubMed]
- 33.Dave M., Patel N. Artificial intelligence in healthcare and education. Br. Dent. J. 2023;234:761–764. doi: 10.1038/s41415-023-5845-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Hartman H., Nurdin D., Akbar S., Cahyanto A., Setiawan A.S. Exploring the potential of artificial intelligence in paediatric dentistry: A systematic review on deep learning algorithms for dental anomaly detection. Int. J. Paediatr. Dent. 2024;34:639–652. doi: 10.1111/ipd.13164. [DOI] [PubMed] [Google Scholar]
- 35.Guile E.E., Hagens E., de Miranda J.C. Dental nursing in Suriname: Training and deployment. J. Dent. Educ. 1981;45:156–160. doi: 10.1002/j.0022-0337.1981.45.3.tb01443.x. [DOI] [PubMed] [Google Scholar]
- 36.Seminario A.L., DeRouen T., Cholera M., Liu J., Phantumvanit P., Kemoli A., Castillo J., Pitiphat W. Mitigating Global Oral Health Inequalities: Research Training Programs in Low- and Middle-Income Countries. Ann. Glob. Health. 2020;86:141. doi: 10.5334/aogh.3134. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Suzuki S., Ohyama A., Yoshino K., Eguchi T., Kamijo H., Sugihara N. COVID-19-Related Factors Delaying Dental Visits of Workers in Japan. Int. Dent. J. 2022;72:716–724. doi: 10.1016/j.identj.2022.05.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Emerich K., Wyszkowski J. Clinical practice: Dental trauma. Eur. J. Pediatr. 2010;169:1045–1050. doi: 10.1007/s00431-009-1130-x. [DOI] [PubMed] [Google Scholar]
- 39.Suri L., Gagari E., Vastardis H. Delayed tooth eruption: Pathogenesis, diagnosis, and treatment. A literature review. Am. J. Orthod. Dentofac. Orthop. 2004;126:432–445. doi: 10.1016/j.ajodo.2003.10.031. [DOI] [PubMed] [Google Scholar]
- 40.Akitomo T., Asao Y., Mitsuhata C., Kozai K. A new supernumerary tooth occurring in the same region during follow-up after supernumerary tooth extraction: A case report. Pediatr. Dent. J. 2021;31:100–107. doi: 10.1016/j.pdj.2020.10.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Usuda M., Akitomo T., Kametani M., Kusaka S., Mitsuhata C., Nomura R. Dens invaginatus of fourteen teeth in a pediatric patient. Pediatr. Dent. J. 2023;33:240–245. doi: 10.1016/j.pdj.2023.10.001. [DOI] [Google Scholar]
- 42.Aguiar de Sousa R., Costa S.M., Almeida Figueiredo P.H., Camargos C.R., Ribeiro B.C., Alves e Silva M.R.M. Is ChatGPT a reliable source of scientific information regarding third-molar surgery? J. Am. Dent. Assoc. 2024;155:227–232. doi: 10.1016/j.adaj.2023.11.004. [DOI] [PubMed] [Google Scholar]
- 43.Balel Y. Can ChatGPT be used in oral and maxillofacial surgery? J. Stomatol. Oral Maxillofac. Surg. 2023;124:101471. doi: 10.1016/j.jormas.2023.101471. [DOI] [PubMed] [Google Scholar]
- 44.Recommended Use of Fluoride Toothpaste. [(accessed on 26 November 2024)]. Available online: https://www.jspd.or.jp/recommendation/article19/
- 45.Johnson D., Goodman R., Patrinely J., Stone C., Zimmerman E., Donald R., Chang S., Berkowitz S., Finn A., Jahangir E., et al. Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model. Res. Sq. 2023, preprint.
- 46.King R.C., Samaan J.S., Yeo Y.H., Mody B., Lombardo D.M., Ghashghaei R. Appropriateness of ChatGPT in Answering Heart Failure Related Questions. Heart Lung Circ. 2024;33:1314–1318. doi: 10.1016/j.hlc.2024.03.005. [DOI] [PubMed] [Google Scholar]
- 47.Ozgor B.Y., Simavi M.A. Accuracy and reproducibility of ChatGPT’s free version answers about endometriosis. Int. J. Gynecol. Obstet. 2024;165:691–695. doi: 10.1002/ijgo.15309. [DOI] [PubMed] [Google Scholar]
- 48.Cetin H.K., Koramaz I., Zengin M., Demir T. The Evaluation of YouTube™ English Videos’ Quality About Coronary Artery Bypass Grafting. Sisli Etfal Hastan. Tip Bul. 2023;57:130–135. doi: 10.14744/SEMB.2022.59908. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Borges do Nascimento I.J., Pizarro A.B., Almeida J.M., Azzopardi-Muscat N., Gonçalves M.A., Björklund M., Novillo-Ortiz D. Infodemics and health misinformation: A systematic review of reviews. Bull. World Health Organ. 2022;100:544–561. doi: 10.2471/BLT.21.287654. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Massey P.A., Montgomery C., Zhang A.S. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations. J. Am. Acad. Orthop. Surg. 2023;31:1173–1179. doi: 10.5435/JAAOS-D-23-00396. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Horiuchi D., Tatekawa H., Oura T., Oue S., Walston S.L., Takita H., Matsushita S., Mitsuyama Y., Shimono T., Miki Y., et al. Comparing the Diagnostic Performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and Radiologists in Challenging Neuroradiology Cases. Clin. Neuroradiol. 2024;34:779–787. doi: 10.1007/s00062-024-01426-y. [DOI] [PubMed] [Google Scholar]
- 52.Takagi S., Watari T., Erabi A., Sakaguchi K. Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study. JMIR Med. Educ. 2023;9:e48002. doi: 10.2196/48002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Stadler M., Horrer A., Fischer M.R. Crafting medical MCQs with generative AI: A how-to guide on leveraging ChatGPT. GMS J. Med. Educ. 2024;41:Doc20. doi: 10.3205/zma001675. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Goodman R.S., Patrinely J.R., Jr., Osterman T., Wheless L., Johnson D.B. On the cusp: Considering the impact of artificial intelligence language models in healthcare. Med. 2023;4:139–140. doi: 10.1016/j.medj.2023.02.008. [DOI] [PubMed] [Google Scholar]
- 55.Schwendicke F., Samek W., Krois J. Artificial intelligence in dentistry: Chances and challenges. J. Dent. Res. 2020;99:769–774. doi: 10.1177/0022034520915714. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.You W., Hao A., Li S., Wang Y., Xia B. Deep learning-based dental plaque detection on primary teeth: A comparison with clinical assessments. BMC Oral Health. 2020;20:141. doi: 10.1186/s12903-020-01114-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Kılıc M.C., Bayrakdar I.S., Çelik Ö., Bilgir E., Orhan K., Aydın O.B., Kaplan F.A., Sağlam H., Odabaş A., Aslan A.F., et al. Artificial intelligence system for automatic deciduous tooth detection and numbering in panoramic radiographs. Dentomaxillofac Radiol. 2021;50:20200172. doi: 10.1259/dmfr.20200172. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Naeimi S.M., Darvish S., Salman B.N., Luchian I. Artificial Intelligence in Adult and Pediatric Dentistry: A Narrative Review. Bioengineering. 2024;11:431. doi: 10.3390/bioengineering11050431. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Alessa N. Application of Artificial Intelligence in Pediatric Dentistry: A Literature Review. J. Pharm. Bioallied Sci. 2024;16((Suppl. S3)):S1938–S1940. doi: 10.4103/jpbs.jpbs_74_24. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Vishwanathaiah S., Fageeh H.N., Khanagar S.B., Maganur P.C. Artificial Intelligence Its Uses and Application in Pediatric Dentistry: A Review. Biomedicines. 2023;11:788. doi: 10.3390/biomedicines11030788. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.Alharbi N., Alharbi A.S. AI-Driven Innovations in Pediatric Dentistry: Enhancing Care and Improving Outcome. Cureus. 2024;16:e69250. doi: 10.7759/cureus.69250. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62.Dave T., Athaluri S.A., Singh S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front. Artif. Intell. 2023;6:1169595. doi: 10.3389/frai.2023.1169595. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.Mondal H., Panigrahi M., Mishra B., Behera J.K., Mondal S. A pilot study on the capability of artificial intelligence in preparation of patients’ educational materials for Indian public health issues. J. Fam. Med. Prim. Care. 2023;12:1659–1662. doi: 10.4103/jfmpc.jfmpc_262_23. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Yau J.Y., Saadat S., Hsu E., Murphy L.S., Roh J.S., Suchard J., Tapia A., Wiechmann W., Langdorf M.I. Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study. J. Med. Internet Res. 2024;26:e60291. doi: 10.2196/60291. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Munir F., Gehres A., Wai D., Song L. Evaluation of ChatGPT as a Tool for Answering Clinical Questions in Pharmacy Practice. J. Pharm. Pract. 2024;37:1303–1310. doi: 10.1177/08971900241256731. [DOI] [PubMed] [Google Scholar]