Abstract
Personalized medicine refers to the tailoring of diagnostics and therapeutics to individuals based on their biological, social, and behavioral characteristics. While personalized dental medicine is still far from being a reality, advanced artificial intelligence (AI) technologies with improved data analytic approaches are expected to integrate diverse data from the individual, setting, and system levels, which may facilitate a deeper understanding of the interaction of these multilevel data and therefore bring us closer to more personalized, predictive, preventive, and participatory dentistry, also known as P4 dentistry. In the field of dentomaxillofacial imaging, a wide range of AI applications, including several commercially available software options, have been proposed to assist dentists in the diagnosis and treatment planning of various dentomaxillofacial diseases, with performance similar to or even superior to that of specialists. Notably, the impact of these dental AI applications on treatment decisions, clinical and patient-reported outcomes, and cost-effectiveness has so far been assessed only sparsely. Such information should be further investigated in future studies to provide patients, providers, and healthcare organizers with a clearer picture of the true usefulness of AI in daily dental practice.
Keywords: personalized medicine, artificial intelligence, deep learning, diagnostic imaging, dentistry
Personalized and precision dentistry and data-driven technologies
Current concepts of managing dental diseases have by and large been developed over the course of the last 50 years. While knowledge generated by continuous research efforts into the biological foundation of the main dental diseases (caries and periodontitis) has been gradually integrated into contemporary therapy approaches, the backbone of treatments employed in dental practices was established decades ago. For example, restorative treatments remain the cornerstone for carious lesions, while scaling and root planing remain central for periodontal disease, both increasingly accompanied by preventive efforts. 1,2 Given the evolving understanding of dental diseases, their etiology and pathogenesis, and the resulting chance and need to adequately describe different disease stages and grades in order to deduce appropriate therapies, this simplification may not suffice any longer. Notably, it is grounded in a similarly simplified diagnostic approach; what is missing is a systematic and holistic evaluation of individual health and disease at the patient, tooth, and site level, and the synthesis of the gathered data into adequately granular diagnoses. Such an approach would need to be built on detailed multimodal data collection and would allow clinicians to assign individualized treatment pathways based on a personalized diagnosis.
At present, however, such individualized diagnostics and treatment pathways are not at all available in dentistry. Instead, we are stuck in the era of stratification of individuals and lesions into risk groups, characterized mainly by simple shared phenotypic characteristics (e.g., caries experience for caries risk assessment, smoking or poor oral hygiene for periodontal risk assessment, etc.). Currently, the accuracy and generalizability of most of these risk assessment systems are insufficiently validated. Even if these risk assessment systems were valid, they would only describe groups of individuals and lesions sharing a similar “risk” and subsequently assign identical management strategies to all individuals in a certain risk group (i.e., the one-size-fits-all approach). 3
While being the next step beyond stratification, true personalized management is not possible at the moment. Personalized management is closely linked to “precision medicine”, defined as “the tailoring of a therapy to individuals with one’s biological (genomic, microbiomic, proteomic, etc.), social (economic, educational, etc.) and behavioral (lifestyle) characteristics”. 3 Personalized care should, ideally, provide the safest, most efficacious, and most efficient diagnostics and therapies, a goal it shares with precision medicine and with the closely related concept of “P4 medicine”. 4 The four Ps stand for a more predictive, personalized, preventive, and participatory healthcare approach (Figure 1). What is needed, however, to make personalized, precision, and P4 dentistry come true is a deep understanding of individuals and the ability to predict what will happen to an individual, a specific organ, or a lesion.
Figure 1.
The confluence of different data sources and technologies (e.g., AI, specifically deep learning, systems medicine involving genomic, metabolomic, or microbiomic data, as well as clinical data sources or those provided publicly or by the patient) will enable P4 medicine and dentistry. 4
To allow such understanding and prediction, the discussed concept of stratification, relying on only a few risk indicators or factors (Table 1), is obviously insufficient. What is needed is a shift towards a healthcare model centered around broad and deep data. As discussed elsewhere, 7 many recent academic breakthroughs in astronomy, 8 biology 9 and other disciplines are mainly driven by making use of large amounts of data. Dentistry should also make use of the wealth of available dental data and transform into what has previously been referred to as “data dentistry”. 7 The data needed could be generated from advanced sensor technologies, including wearables, ingestibles, and implantables, as well as social media and electronic health records (eHR), to name a few. 10 Many of these data will not be collected solely in clinical settings but also routinely, even by patients themselves, who may actively donate data from social media, food consumption, healthcare apps, behavioral diaries, or toothbrushes. In addition, prospectively collected omics data may become more available if the costs of generating them decrease further and the technologies become available in routine settings. 10
Table 1.
Risk factors and risk indicators
| | Risk factor | Risk indicator |
| --- | --- | --- |
| Definition | “A characteristic that may make an individual more susceptible to a certain disease” 3 ; can be “environmental, behavioral, or biologic” and “if present directly increases the probability of a disease occurring, and if absent or removed reduces the probability”. 5 | “A marker that is not necessarily causally linked, but can be used to predict risk, like past disease experience or social, educational or economic factors” 3 ; “may be a probable, or putative, risk factor, but […] a temporal association usually cannot be specified”. 6 |
| Caries | Diet rich in carbohydrates | Caries experience |
| | Oral hygiene, fluoridated toothpaste | Low social, educational or economic status |
| | Medication causing hyposalivation/xerostomia | |
| Periodontal disease | Oral hygiene, smoking | Periodontitis experience |
| | Bacterial composition, genetic factors (SNPs) | Low social, educational or economic status |
| | Medication inducing immunosuppression | |
| Oral cancer | Smoking | Low social, educational or economic status |
| | Betel quid | Male sex |
| | Alcohol consumption | High age |
Artificial intelligence and its use in dental medicine
The analysis of such diverse, multimodal, large, and complex data, including speech and imagery, requires advanced data analytic approaches. 11 One major strategy adopted over recent years for this purpose is “artificial intelligence” (AI). The term was coined in the 1950s and refers to the idea of building machines that are capable of performing tasks normally performed by humans. Machine learning (ML) is a subfield of AI in which algorithms are applied to learn the inherent statistical patterns and structures in data, which allows predictions on unseen data. More complex machine learning algorithms frequently used for data like images are neural networks (NNs), which are composed of artificial neurons (i.e., non-linear mathematical models that can be stacked and concatenated in layers to form a network). The term “deep learning” refers to deep (multilayered) NNs, which are able to represent hierarchical features in complex data and are frequently used for detecting edges, corners, shapes, and macroscopic patterns in images. 12
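To make these definitions concrete, a minimal sketch in Python with PyTorch is shown below; the layer sizes and data are arbitrary illustrative choices, not taken from any of the cited works.

```python
import torch
import torch.nn as nn

# A minimal deep (multilayered) neural network: linear layers of
# artificial neurons stacked with non-linear activations in between.
model = nn.Sequential(
    nn.Linear(64, 32),  # input layer: 64 features -> 32 neurons
    nn.ReLU(),          # non-linearity
    nn.Linear(32, 16),  # hidden layer
    nn.ReLU(),
    nn.Linear(16, 1),   # output layer: one prediction per sample
)

x = torch.randn(8, 64)  # a batch of 8 dummy data points
print(model(x).shape)   # torch.Size([8, 1])
```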
ML and NNs, as subtypes of AI, are “trained” to automatically perform specific tasks. The most common type of training is supervised learning, where data points and the corresponding data information (e.g., labels) are repetitively passed through the network to detect the intrinsic statistical patterns in the data. During the training process, the connections between the neurons, also referred to as model weights, are optimized to minimize the so-called prediction error (the difference between the true and the predicted data information). A trained NN can then predict the outcome of unseen data by passing the new data point through the network.
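A correspondingly minimal supervised training loop, again a sketch under the same assumptions (PyTorch, randomly generated data points and binary labels), illustrates how the weights are repeatedly adjusted to minimize the prediction error.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()                    # prediction error for a binary label
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(100, 64)                            # 100 dummy data points
y = torch.randint(0, 2, (100, 1)).float()           # dummy binary labels

for epoch in range(10):                             # repeatedly pass the data through the network
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)                     # true vs. predicted information
    loss.backward()                                 # compute gradients of the error
    optimizer.step()                                # update the connection weights
```

AI, ML, and NNs are increasingly used in dentistry to work with the growing amount of data described above. A number of such forms of use are currently discussed or already clinically available: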
1. Data analytics and precision dentistry
As discussed, there is an increasing drive towards more precise, data-centered dentistry, making use not only of clinical and historical data, claims and treatment data, and image and further test data, but also of data provided by patients as outlined above. A big advantage in dentistry is that these multimodal datasets are usually available repeatedly, as many patients visit their dentists regularly. Using such longitudinal data will help to foster a deeper understanding of individual health and disease and to develop AI models that predict disease onset or progression individually. Currently, however, many of these data remain siloed or unavailable. Meanwhile, existing AI prediction models remain limited in their predictive power and generalizability, as useful predictions need to be better than plainly guessing the so-called majority class (i.e., the more frequent event). 3 Predicting this majority class is easy, but models which focus exclusively on predicting it may not be clinically useful. 13
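The majority-class problem is easy to demonstrate: in the sketch below (plain Python with NumPy; the 5% disease prevalence is an invented example), a “model” that always predicts the majority class looks highly accurate while detecting nothing.

```python
import numpy as np

y_true = np.array([1] * 5 + [0] * 95)     # 5% prevalence: 5 diseased, 95 healthy
y_pred = np.zeros_like(y_true)            # always predict the majority class ("healthy")

accuracy = (y_pred == y_true).mean()      # 0.95 -- looks impressive
sensitivity = y_pred[y_true == 1].mean()  # 0.0 -- not a single case detected
print(accuracy, sensitivity)
```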
2. Evidence-based care
Gathering a more comprehensive picture of an individual’s health and objectifying diagnoses through imagery and AI-assisted analysis will support evidence-based care. Data-centric approaches will further allow external evidence, for example from guidelines and standards of care, to be embedded into decision making, thereby fostering reliable, high-quality, and cost-effective care. 14 An additional benefit of more data-driven care is the option to objectively assess treatment needs, the treatments actually provided, and the outcomes yielded. Ultimately, this should foster value-based care (i.e., quantifying the “value” of a certain treatment to individuals and society).
3. Beyond the dental chair
AI and data-driven approaches will facilitate better information and decision making at the dental public health level, including workforce planning. Automated data generation in routine settings, in addition to prospective epidemiologic surveys, will allow a more up-to-date and detailed picture of a population’s health, its oral health demands, and the effectiveness and efficiency of services. 15 This will facilitate the informed setup or buy-in of services as well as the benchmarking of healthcare interventions and policy, and it will support needs-based and adaptive workforce planning. By enabling providers at different levels, AI and data-driven approaches will further support modularized models of care, fostering affordable, accessible, and specialized services. AI will also change dental education by enabling asynchronous learning models. Learning using simulation, including augmented or virtual reality-based teaching and training, will become more common in the future. 16
Current use of AI in dentomaxillofacial imaging
Radiographic examination is an integral component of most diagnostic and treatment planning processes in daily dental practice. With the growing use of digital dental radiography, images generated by dental radiographic examinations are commonly stored automatically as digital data in archiving systems and associated databases. These data can be analyzed using AI, specifically deep learning based on convolutional NNs. 17 Currently, a range of deep learning models have been trained and tested on dentomaxillofacial radiographic images to fulfill tasks of image classification (e.g., “is there a certain pathology detectable on this image?”), object detection (e.g., “in which image area is this certain pathology located?”), and pixelwise segmentation (e.g., “which pixels of this image show a certain pathology?”), as well as image quality improvement (Table 2). 95,96
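The three task types differ mainly in the shape of the model output. The sketch below (Python with PyTorch; the toy layers and the 128 × 128 image size are illustrative assumptions, not any of the cited architectures) makes this concrete.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 128, 128)           # one grayscale radiograph (dummy)

# Image classification: one score for the whole image
# ("is there a certain pathology detectable on this image?")
clf = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 1))
print(clf(x).shape)                       # torch.Size([1, 1])

# Pixelwise segmentation: one score per pixel
# ("which pixels of this image show a certain pathology?")
seg = nn.Conv2d(1, 1, kernel_size=3, padding=1)
print(seg(x).shape)                       # torch.Size([1, 1, 128, 128])

# Object detection additionally predicts bounding boxes, e.g. one
# (x1, y1, x2, y2, score) tuple per detected region
# ("in which image area is this certain pathology located?")
```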
Table 2.
Artificial intelligence applications using dentomaxillofacial imaging data
| Category | Artificial intelligence application |
| --- | --- |
| Dental caries | Detection of dental caries 18–21 |
| Periodontal evaluation | Detection of periodontal bone loss 22,23 |
| | Measurement and staging of periodontal bone loss 24,25 |
| | Classification of periodontitis stages 26,27 |
| | Identification of periodontally compromised teeth 28,29 |
| Endodontic evaluation | Detection, classification, and measurement of apical pathologies 30,31 |
| | Detection of vertical root fractures 32,33 |
| | Detection 34 and classification 35,36 of C-shaped canals |
| Dental implants | Detection of peri-implant bone loss 37 |
| | Measurement of the peri-implant bone loss ratio and classification of the bone loss severity 38 |
| | Detection of edentulous sites, nasal fossa, maxillary sinus, and mandibular canal, and measurement of the heights and widths of residual alveolar bone at the edentulous sites 39 |
| | Classification of dental implant systems 40–42 |
| | Detection and classification of dental implant fractures 43 |
| Third molars | Classification of positional relationships between lower third molars and the mandibular canal 44–46 |
| | Prediction of extraction difficulty for lower third molars 47 |
| | Prediction of paresthesia after third molar extraction 48 |
| Radiolucent lesions in the jaws | Detection and segmentation of infections, granulomas, cysts, and tumors in the jaws 49 |
| | Detection of ameloblastomas and odontogenic keratocysts 50 |
| | Detection/classification of ameloblastomas, odontogenic keratocysts, dentigerous cysts, radicular cysts, and/or bone cysts in the maxilla/mandible 51,52 |
| | Differentiation of Stafne’s bone cavity from mandibular radiolucent lesions 53 |
| Maxillary sinus | Detection of maxillary sinus lesions 54,55 |
| | Detection and segmentation of maxillary sinus lesions 56,57 |
| | Prediction of oroantral communication after tooth extraction 58 |
| Orthodontic and orthognathic evaluation | Localization of cephalometric landmarks 59–64 |
| | Classification of skeletal malocclusion 65–67 |
| | Assessment of facial symmetry before and after orthognathic surgery 68 |
| Temporomandibular joint | Diagnosis of temporomandibular joint osteoarthritis 69 |
| | Diagnosis of mandibular condyle fractures 70 |
| | Measurement of the cortical thickness of the mandibular condyle head 71 |
| Maxillofacial fractures | Detection and classification of mandibular fractures 72 |
| Sialoliths | Detection of submandibular gland sialoliths 73 |
| Osteoporosis | Diagnosis and prediction of osteoporosis 74,75 |
| Sjögren’s syndrome | Diagnosis of Sjögren’s syndrome 76 |
| Lymph node metastasis | Segmentation and identification of metastatic cervical lymph nodes 77 |
| Reporting of the dental status | Segmentation of teeth and jaws, numbering of teeth, and detection of caries, periapical lesions, and periodontitis 78 |
| | Identification of missing teeth, caries, fillings, prosthetic restorations, endodontically treated teeth, residual roots, periapical lesions, and periodontal bone loss 79 |
| | Tooth numbering and detection of dental implants, prosthetic crowns, fillings, root remnants, and root canal treatment 80 |
| | Detection, segmentation, and labeling of teeth, crowns, fillings, root canal fillings, implants, and root remnants 81,82 |
| | Tooth detection and numbering 83,84 |
| | Tooth segmentation and classification 85,86 |
| Image quality improvement | Correction of blurred panoramic radiographic images 87 |
| | Reduction of metal artifacts on CBCT images 88 |
| | Improvement of the resolution of CT/CBCT images 89–91 |
| Multimodal image registration | Registration of CBCT with intraoral scans, 92 optical dental model scans, 93 or MRI 94 |
Diagnosis
1. Dental caries
Intraoral radiographic examination is essential for the detection of carious lesions, particularly early non-cavitated ones. The sensitivity and specificity of intraoral radiography for detecting dental caries have been reported to range from 27–66% and 76–97%, respectively. 97,98 The relatively low sensitivity implies substantial underdetection of dental caries, which may be related to clinicians’ experience and caries lesion depth (i.e., enamel or dentin caries).
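Sensitivity and specificity follow directly from the four confusion-matrix counts; a minimal sketch (Python; the counts are invented for illustration and chosen to fall within the reported ranges) is given below.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (recall) and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # share of carious surfaces correctly flagged
    specificity = tn / (tn + fp)  # share of sound surfaces correctly cleared
    return sensitivity, specificity

# Hypothetical reading of 100 surfaces: 20 carious, 80 sound
print(sensitivity_specificity(tp=9, fn=11, tn=74, fp=6))  # (0.45, 0.925)
```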
Several deep learning models have been developed to assist clinicians in detecting and classifying dental caries. 96 Lee et al developed three CNN models to automatically detect dental caries in posterior teeth on periapical images. 18 The models showed higher detection accuracy for premolars than for molars, which could be related to differences in their anatomical characteristics. Srivastava et al developed a CNN model to detect dental caries on bitewings; the model achieved significantly higher sensitivity (81%) than three general dentists (34–48%). 19 More recently, AI-based caries detection has additionally focused on early enamel caries. The CNN model developed by Cantu et al outperformed seven experienced dentists in detecting initial enamel and advanced dentin caries. 20 The seven dentists showed greatly differing sensitivities for detecting initial (<25%) and advanced (40–75%) caries, while the model achieved robust sensitivities (>70%) for both. Currently, commercial AI software programs, including AssistDent (Manchester, UK), Denti.AI (Toronto, Canada), Diagnocat (Tel Aviv, Israel), CranioCatch (Eskişehir, Turkey), and dentalXr.ai (Berlin, Germany) (Table 3), are available to assist clinicians in the diagnosis of dental caries on two-dimensional (2D) radiographic images. The use of AssistDent and dentalXr.ai significantly increased dentists’ sensitivity, especially for detecting enamel caries. 21,99 Notably, automatic detection of buccal/lingual caries or secondary caries (i.e., caries next to restorations) remains challenging for AI models. This is, however, also the case for human observers and is mainly grounded in the 2D nature of most intraoral images. While cone-beam computed tomography (CBCT) allows caries detection in three dimensions (3D), it is not recommended for caries diagnostics.
Table 3.
Examples of commercially available AI software for dental applications
| AI software | Origin (City, Country) | Type of image data | Application | Website |
| --- | --- | --- | --- | --- |
| AssistDent | Manchester, UK | Bitewing images | Detecting proximal enamel and dentin caries | https://www.assistdent.net |
| WebCeph | Seongnam, Korea | Cephalometric images | Cephalometric tracing and analysis | https://webceph.com/en/about/ |
| Ceppro | Seoul, Korea | Cephalometric images | Cephalometric tracing and analysis | https://www.ddhinc.net/en/ |
| dentalXr.ai | Berlin, Germany | Bitewing, periapical, and panoramic images | | https://www.dentalxr.ai/en/home |
| Relu | Leuven, Belgium | CBCT images | | https://relu.eu/ |
| Denti.AI | Toronto, Canada | Periapical, bitewing, panoramic, and CBCT images | | https://www.denti.ai/ |
| Promaton | Amsterdam, Netherlands | Panoramic and CBCT images | | https://www.promaton.com |
| Diagnocat | Tel Aviv, Israel | Periapical, bitewing, panoramic, cephalometric, and CBCT images | | https://diagnocat.com |
| CranioCatch | Eskişehir, Turkey | Periapical, bitewing, panoramic, cephalometric, and CBCT images | | https://www.craniocatch.com/en/ |
AI, artificial intelligence; CBCT, cone-beam computed tomography
2. Periodontal bone loss
Deep learning models have also been developed for the detection and segmentation of periodontal bone loss and the associated classification of periodontitis stages on periapical and panoramic images. In 2018, Lee et al developed a CNN model on periapical images to automatically identify periodontally compromised posterior teeth and predict future tooth loss. 28 The accuracy of the model was higher for premolars (>80%) than for molars. Thanathornwong et al developed a CNN model to identify periodontally compromised teeth on panoramic images. 29 Kim et al 22 and Krois et al 23 trained their CNN models to automatically detect periodontal bone loss on panoramic radiographs. The diagnostic accuracies of their models (areas under the curve [AUCs] of 0.89–0.95) were higher than those of several general dentists (AUCs of 0.77–0.85).
In addition, periodontitis stages can be classified automatically using deep learning on periapical and panoramic images. 26,27 Danks et al 24 and Lee et al 25 developed CNN models to measure the extent of periodontal bone loss on periapical images and subsequently classify the identified sites into three or four severity stages, respectively, according to the measured extent of bone loss. The model by Lee et al achieved high classification accuracy, with an AUC value of 0.98. Future applications are expected to detect changes in the bone density and texture of the alveolar ridge for early detection of the onset of periodontitis.
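Since the AUC is the headline metric in most of these studies, a brief sketch of how it is computed from model scores may be useful (Python with scikit-learn; the labels and scores are dummy values).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])                 # ground-truth labels (dummy)
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])   # model probabilities (dummy)

# AUC: the probability that a randomly chosen diseased case receives a
# higher score than a randomly chosen healthy case.
print(roc_auc_score(y_true, y_score))                 # ~0.89
```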
3. Endodontic evaluation
AI applications in endodontics so far mainly focus on apical pathologies, root fractures, and C-shaped canals. For apical pathologies, models on 2D radiographic images were able to automatically detect and classify lesions, while those on CBCT images were able to additionally provide volumetric information on the detected lesions. Krois et al 100 and Ekert et al 30 developed CNN models on panoramic images or image crops to detect apical pathologies and classify teeth into (i) teeth without apical lesions, (ii) teeth with widened periodontal ligament or uncertain apical lesions, and (iii) teeth with clearly visible apical lesions. Krois et al trained their CNN models with images acquired from one or two centers, respectively, and reported low cross-center generalizability of the model trained with images from only one center. 100 The low generalizability mainly resulted from differences in the dental status shown on images from the different centers; specifically, the association between root canal fillings and the presence of apical lesions differed. The CNNs learnt this association structure on data from one center (where root canal fillings were frequent) but were then unable to reproduce it on data from the other center (where root canal fillings were less frequent). Based on these findings, cross-center training seems able to improve a model’s generalizability. Hamdan et al reported that the diagnostic ability of eight dental practitioners to detect apical radiolucencies on periapical images increased with the aid of a commercially available AI software named Denti.AI (Toronto, Canada; Table 3), as demonstrated by an increase in sensitivity from 59.6 to 73.3%. 101 Orhan et al used 109 CBCT scans to test an AI software named Diagnocat (Tel Aviv, Israel; Table 3) and reported high detection accuracy and no significant differences in the lesion volumes measured by the software and a radiologist. 31 Notably, the presence of endo-perio lesions, buccal-lingual cortical perforations, incomplete apices, endodontically treated teeth, and large lesions associated with multiple teeth detrimentally affected the model’s performance.
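As a hedged illustration (not the cited authors’ actual pipelines), cross-center training can be as simple as pooling datasets from several centers before training; in the PyTorch sketch below, the two datasets are random placeholders.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder datasets standing in for images and labels from two centers.
center_a = TensorDataset(torch.randn(100, 1, 64, 64), torch.randint(0, 2, (100,)))
center_b = TensorDataset(torch.randn(100, 1, 64, 64), torch.randint(0, 2, (100,)))

# Cross-center training: pool both centers so the model cannot rely on
# center-specific shortcuts (e.g., the local frequency of root canal fillings).
pooled = ConcatDataset([center_a, center_b])
loader = DataLoader(pooled, batch_size=16, shuffle=True)

for images, labels in loader:
    ...  # standard training step on the mixed-center batch
```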
Diagnosis of root fractures, especially vertical root fractures, is a challenging and experience-dependent task, commonly accomplished by combined clinical and radiographic examination. Root fractures are categorized as horizontal or vertical fractures. Horizontal root fractures frequently occur in anterior teeth due to dentoalveolar trauma, while vertical root fractures are common in endodontically treated teeth as a result of excessive root canal preparation or occlusal forces. CNN models have been developed to automatically detect vertical root fractures on 2D and 3D radiographic images. 32,33 Despite promising diagnostic accuracy, the models still have to overcome relatively low accuracy on non-endodontically treated teeth and the potential impact of caries, fillings, dental restorations, and metal artifacts on their performance.
Automatic detection and classification of C-shaped canals in mandibular second molars is another field of AI application. Several CNN models on 2D and 3D radiographic images have been developed to automatically detect, segment, and classify C-shaped canals. 34–36 Their performance has been shown to be similar or superior to that of both general dentists and specialists. 34,36
4. Dental implants
CNN models have also been developed to detect peri-implant bone loss and implant fractures on 2D radiographic images. Liu et al developed a CNN model to automatically detect peri-implant bone loss on periapical images. 37 The model performed similarly to two general dentists but worse than one specialist. Another CNN model on periapical images measured the peri-implant bone loss ratio and classified the bone loss severity as normal, early, moderate, or severe. 38 Lee et al developed CNN models on periapical, panoramic, or both image types to detect implant fractures and to classify the fractured implants into horizontal or vertical fractures. 43 The models achieved AUCs of 0.90–0.98 for the detection task and 0.75–0.87 for the classification task. The highest detection and classification accuracies were achieved on periapical images, likely due to their higher spatial resolution compared with panoramic images.
5. Maxillofacial pathologies
Treatment options and prognosis for patients with pathologies in the maxillofacial region are directly associated with the timing and accuracy of diagnosis. The differential diagnosis of maxillofacial pathologies is a challenge for general practitioners, particularly for incidental findings on diagnostic images. A delayed diagnosis will lead to a longer disease course, a more invasive surgical approach, and poorer treatment outcomes, especially for malignant lesions. Several researchers have tried to develop AI tools to raise the diagnostic accuracy of general practitioners for various maxillofacial pathologies to the level of specialists. Poedjiastoeti et al developed a CNN model on panoramic images for automatic detection of ameloblastomas and odontogenic keratocysts, with high diagnostic performance (sensitivity and specificity over 80%) on par with five oral-maxillofacial surgeons. 50 Another CNN model on panoramic images detected and classified ameloblastomas, odontogenic keratocysts, dentigerous cysts, and radicular cysts, obtaining high classification performance with an AUC of 0.94, a sensitivity of 88.9%, and a specificity of 97.2%. 51 The model by Endres et al outperformed 14 oral-maxillofacial surgeons in detecting infections, granulomas, cysts, and tumors in the jaws on panoramic images. 49 Lee et al developed CNN models on panoramic and CBCT images, respectively, to detect, segment, and classify odontogenic keratocysts, dentigerous cysts, and radicular cysts. 52 The model on CBCT images (AUC = 0.91) outperformed the one on panoramic images (AUC = 0.85). Ariji et al developed a CNN model on contrast-enhanced CT images to identify and segment metastatic cervical lymph nodes in patients with oral cancer. 77 The model outperformed two radiologists in identifying cervical lymph nodes, although its segmentation accuracy still needs improvement.
It has been reported that inexperienced oral-maxillofacial radiologists are prone to missing pathological changes of the parotid gland when interpreting CT images of the maxillofacial region, leading to underdetection of Sjögren’s syndrome. 76 Kise et al developed a CNN model to assess the texture features of the parotid gland on CT images for automatic diagnosis of Sjögren’s syndrome. 76 The model performed similarly to three experienced radiologists and outperformed three inexperienced radiologists.
The maxillary sinus is the largest paranasal sinus and is frequently involved in dental surgical procedures due to its close proximity to the teeth in the posterior maxilla. Accurate diagnosis of maxillary sinus pathologies is key to the success of dental surgical procedures such as sinus augmentation for dental implant placement and apical surgery of maxillary posterior teeth. 102–104 However, it has been reported that inexperienced dental practitioners were less likely to accurately diagnose sinus pathologies on radiographic images. 105 To assist clinicians in the diagnosis of sinus pathologies, CNN models have been developed to automatically detect and segment sinus lesions on panoramic and CBCT images. 54–57 The models obtained favorable performance on both detection and segmentation tasks. Murata et al reported that their CNN model performed similarly to two radiologists and outperformed two dental residents in the diagnosis of maxillary sinusitis. 55 The CNN model by Hung et al obtained high accuracy for detecting and segmenting mucous retention cysts and mucosal thickening of the sinus on both ultra-low-dose and standard-dose CBCT images, with AUCs ranging from 0.84 to 0.93. 56
6. Temporomandibular joint
Diagnosis of temporomandibular joint (TMJ) disorders requires sufficient clinical experience. Undetected TMJ problems can result in patients suffering for a long time and undergoing unnecessary examinations and even invasive treatment. Jung et al developed two CNN models on panoramic images using different pre-trained frameworks for automatic diagnosis of TMJ osteoarthritis. The models achieved excellent diagnostic accuracy, superior to that of three general dentists and even three TMJ specialists. 69 The CNN model by Kim et al obtained high accuracy for measuring the cortical thickness of the mandibular condyle head on CBCT images. 71 Nishiyama et al developed CNN models to diagnose mandibular condyle fractures on panoramic images and reported high diagnostic accuracy, with AUCs of nearly 0.9. 70
7. Other diagnostic purposes
Apart from the abovementioned diagnostic purposes, deep learning models have also been developed for automatic detection and classification of mandibular fractures, 72 diagnosis and prediction of osteoporosis, 74,75 detection of submandibular gland sialoliths, 73 and differentiation of Stafne’s bone cavity from mandibular radiolucent lesions 53 on 2D or 3D radiographic images. All these models obtained high accuracies, mostly with AUC values over 0.9.
Reporting of the dental status
Charting of teeth, restorations, and present dental diseases is the first step in the routine assessment of dental patients. Any mistakes or oversights in the resulting dental records may lead to misdiagnosis and erroneous treatment decisions, such as extraction or endodontic treatment of the wrong tooth. As electronic dental health records are by now widely used in dental practice, automated charting using AI seems highly useful. Some studies reported excellent performance of CNN models for automated detection and numbering of deciduous and permanent teeth on panoramic images. 83,84 Shaheen et al developed a CNN model on CBCT images for automated tooth segmentation and classification. 85 The model achieved high accuracies for both segmentation and classification tasks and has found its way into a commercially available software named Relu (Leuven, Belgium; Table 3). Fontenele et al reported that the presence of dental fillings in CBCT images negatively affected Relu’s performance on tooth segmentation. 86 Some CNN models were developed for automated detection, segmentation, and labeling of teeth, crowns, restorative fillings, root canal fillings, and dental implants. 79–82 Commercially available systems including dentalXr.ai (Berlin, Germany), Denti.AI (Toronto, Canada), and Diagnocat (Tel Aviv, Israel) (Table 3) allow such charting with accuracy similar to that of practitioners. 78 Moreover, CNN models were able to automatically classify various implant systems and their prosthetic status on periapical and panoramic images. 40–42 These models achieved excellent classification accuracy, and some even outperformed periodontists. Automatic implant classification models could be used to recognize and record the system of placed implants in dental recording systems, facilitating regular maintenance and future repairs.
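As a hedged illustration of how such charting models are commonly framed (an assumption for demonstration, not the cited authors’ implementations), the sketch below configures an off-the-shelf detector for tooth detection and numbering, treating each of the 32 permanent tooth positions as its own object class.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# 32 permanent tooth positions + 1 background class, as one plausible setup
# (requires torchvision >= 0.13 for the `weights` argument).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=33)
model.eval()

image = [torch.rand(3, 512, 1024)]   # dummy panoramic radiograph
with torch.no_grad():
    output = model(image)[0]         # dict of boxes, per-tooth labels, scores
print(output["boxes"].shape, output["labels"][:5])
```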
Treatment planning
AI has great potential to help dental practitioners with treatment planning and time-consuming tasks in the digital dental workflow. Segmentation, localization, and measurement of anatomical structures or pathologies on radiographic images as well as multimodal image registration are common manual steps required in the planning of oral and maxillofacial surgical procedures. 106 So far, several AI applications have been proposed for automated landmark localization, 59–64 skeletal classification, 65–67 facial symmetry assessment, 68,107 and decision-making on tooth retention or extraction for orthodontic treatment 108,109 on 2D or 3D images.
Kunz et al developed a CNN model to automatically localize anatomical landmarks and measure linear/angular parameters on cephalometric radiographs. 60 The mean absolute differences from the gold standard were 0.44–0.64 mm/0.46–2.18° for the model and 0.35–0.88 mm/0.55–1.80° for 12 orthodontists, demonstrating similar performance. Bulatova et al 63 and Mahto et al 64 tested AI-driven automated cephalometric analysis software applications named Ceppro (Seoul, Korea; Table 3) and WebCeph (Seongnam, Korea; Table 3), respectively. Ceppro achieved mean absolute localization differences ranging from 1.3 to 8.7 mm, with no significant differences between automated and manual localization for eleven of sixteen selected landmarks. WebCeph obtained high agreement, with intraclass correlation coefficients over 0.9 between automated and manual measurements for seven of twelve cephalometric parameters. Some deep learning models on cephalometric or CBCT images classified skeletal malocclusion for orthodontic and orthognathic treatment planning and obtained excellent accuracies over 93%. 65–67 Lin et al developed a CNN model to assess facial symmetry before and after orthognathic surgery on CBCT images and reported a high accuracy of 90%. 68
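Landmark localization is often framed as heatmap regression; the minimal sketch below (PyTorch; the toy network, image size, and 19-landmark set are illustrative assumptions, not the internals of the cited products) shows the idea of reading each landmark off as the peak of its predicted heatmap.

```python
import torch
import torch.nn as nn

NUM_LANDMARKS = 19  # e.g., the classic 19-landmark cephalometric set

# A toy fully convolutional network that predicts one heatmap per landmark;
# each landmark coordinate is taken as the argmax of its heatmap.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_LANDMARKS, 1),
)

ceph = torch.randn(1, 1, 256, 256)         # dummy cephalometric radiograph
heatmaps = net(ceph)                       # (1, 19, 256, 256)
flat = heatmaps.flatten(2).argmax(dim=2)   # peak index per landmark
ys, xs = flat // 256, flat % 256           # recover (row, col) coordinates
print(xs.shape, ys.shape)                  # torch.Size([1, 19]) each
```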
Another group developed a CNN model for automatic detection of edentulous sites, nasal fossa, maxillary sinus, and mandibular canal, and measurement of the heights and widths of residual alveolar bone at the edentulous sites on CBCT images for dental implant treatment planning. 39 The model’s detection accuracy was high for edentulous sites (95.3%) and moderate for the mandibular canal (72.2%) and nasal fossa/maxillary sinus (66.4%). On the sites of maxillary premolars/molars and mandibular premolars, the automated bone height measurements were similar to the manual measurements (i.e., ground truth). The automated bone height measurements on the sites of maxillary/mandibular anterior teeth and mandibular molars as well as the automated bone width measurements on all tooth sites were significantly different from the manual measurements, with median measurement deviations of 1.7–11.3 mm. The significant differences between automated and manual measurements might be due to the incorrect localization of the measuring points.
Assessment of the difficulty of planned third molar surgery is another field of increasing interest in AI research. Yoo et al developed a CNN model on panoramic images to classify the difficulty of third molar removal according to several parameters, such as the depth and angulation of the molar. 47 CNN models were also developed to classify the positional relationship between lower third molars and the mandibular canal on panoramic and CBCT images. 44–46 Choi et al developed a CNN model to determine whether lower third molars are truly in contact with, or positioned buccally/lingually to, the mandibular canal when they appear overlapped on panoramic images (CBCT readings served as ground truth), and to classify the non-contact molars as buccally or lingually positioned. 46 The model obtained accuracies of 72% for determining the true contact position and 81% for classifying the bucco-lingual position, outperforming six oral-maxillofacial specialists. Kim et al developed a CNN model on panoramic images to predict paresthesia due to damage to the inferior alveolar nerve during lower third molar removal and reported high prediction accuracy, with an AUC of 0.92. 48 Apart from third molars, Vollmer et al attempted to develop CNN models on panoramic images to predict oroantral communication after tooth extraction. 58 The prediction accuracy of the best model was similar to that of four oral-maxillofacial experts.
Multimodal image registration is a critical step in digital dental workflows where 3D images acquired from different imaging modalities, including CT, CBCT, MRI, intraoral, facial, and model scanning, are superimposed into the same coordinate frame to create a virtual augmented patient model. This is useful for treatment planning for dental implant placement, joint, salivary gland, orthognathic, and reconstructive surgeries. 110 Multimodal image registration can be performed manually by aligning anatomical landmarks or semi-automatically by using the surface-based or fiducial marker registration approach. Although the semi-automatic approach is less time-consuming than the manual approach, its registration accuracy is affected by the quality of the acquired images, the presence of image artifacts, the deformation of the optical surface, and the distribution of the employed fiducial markers. Therefore, manual corrections are frequently required after semi-automatic image registration. In order to improve the efficiency and accuracy of multimodal image registration, a range of studies developed AI models to automatically register CBCTs with intraoral scans, 92 optical dental model scans, 93 or MRIs. 94 Compared with conventional approaches, these models allow more accurate automated image registration in a significantly shorter time.
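A hedged sketch of automating such a registration step with a standard open-source toolkit is shown below (Python with SimpleITK; the file names are placeholders, and this classical mutual-information approach stands in for, rather than reproduces, the cited AI methods).

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("cbct.nii.gz", sitk.sitkFloat32)   # placeholder paths
moving = sitk.ReadImage("mri.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY,
    )
)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)                    # rigid alignment
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear)
```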
Image quality improvement
Due to the growing use of CBCT in daily dental practice for diagnosis and treatment planning, concerns have been raised regarding the increased risk of radiation-induced stochastic effects, particularly for radiosensitive organs such as the salivary glands, thyroid gland, and eye lenses. While several low-dose CBCT protocols have been suggested and applied in practice, the perceived inferior image quality of low-dose scans may hamper their use, with clinicians nevertheless employing standard- or even high-dose scanning modes for certain imaging tasks. 111 In addition, the presence of severe artifacts in CBCT images has been reported as one of the common reasons for re-exposure. 112 Patient motion during scanning and metallic dental restorations are the main sources of motion and metal artifacts in CBCT images.
Several researchers have therefore developed AI applications to correct blurred panoramic images and to reduce noise and metal artifacts in CT and CBCT images. 87–90 CNN-based auto-positioning can reduce blurring caused by positioning errors in panoramic imaging by reconstructing the image with corrected curvature. 87 Hu et al 91 and Park et al 89 developed deep learning-based denoising tools to remove noise from low-dose CT or CBCT scans, bringing the image quality close to that of high-dose scans. Hatvani et al developed a CNN-based method to enhance the resolution of teeth on cross-sectional CBCT images, allowing better visualization of the anatomical structure of the teeth. 90 CNN-based methods have also been proposed to reduce metal artifacts in CT or CBCT images. 88,113,114 These methods identify and segment the areas of metal artifacts in the original images and then merge the original and corrected images to suppress the artifacts.
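Many denoising networks follow a residual scheme in which the CNN predicts the noise component and subtracts it from the input; a minimal sketch (PyTorch; the tiny architecture and sizes are illustrative assumptions, not any of the cited methods) is shown below.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Toy DnCNN-style denoiser: the network predicts the noise component."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noisy):
        return noisy - self.body(noisy)    # subtract the predicted noise

low_dose = torch.randn(1, 1, 128, 128)     # dummy low-dose slice
print(ResidualDenoiser()(low_dose).shape)  # torch.Size([1, 1, 128, 128])
```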
The future of AI in dentomaxillofacial imaging
Current applications of AI in dentomaxillofacial imaging mainly focus on improving diagnostic accuracy and easing the diagnostic and/or planning workflow. More and more AI applications have been reported to perform similarly to or even outperform dentists (Table 4), oftentimes lifting general dental practitioners to levels equivalent to specialists. Notably, existing AI models are limited in their scope, mainly aiming to detect, segment, or classify anatomical structures or common pathologies. Rare variations or diseases have so far seldom been the focus. Given the difficulty practitioners have in diagnosing rare variations or diseases, AI models developed for such specific tasks could be truly clinically significant. With technical advances, the availability of larger data pools (including the pooling of different datasets and the uptake of federated learning 115 ), and the increased usage of alternative labeling and training pathways in dentistry (such as weakly or self-supervised learning 116 ), these difficult diagnostic tasks are expected to be tackled.
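Federated learning keeps images at each center and shares only model weights; the sketch below (PyTorch; a deliberately simplified single round with placeholder models) shows the core federated-averaging step.

```python
import copy
import torch
import torch.nn as nn

def federated_average(models):
    """Average the weights of locally trained models (one FedAvg round)."""
    global_model = copy.deepcopy(models[0])
    state = global_model.state_dict()
    for key in state:
        state[key] = torch.stack([m.state_dict()[key] for m in models]).mean(dim=0)
    global_model.load_state_dict(state)
    return global_model

# Each center trains its own copy on local images (local training omitted here);
# only the resulting weights are shared and averaged.
local_models = [nn.Linear(10, 2) for _ in range(3)]
global_model = federated_average(local_models)
```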
Table 4.
Performance of the developed AI models in comparison to specialists/general practitioners
| Author (Year) | Application | Imaging modality | AI software / deep learning model | Test dataset | AI performance | Human performance, mean (range) | Main findings |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dental caries | | | | | | | |
| Srivastava et al. (2017) 19 | Detection of dental caries | Bitewing radiography | CNN | 500 images from nearly 100 clinics across the USA | SEN = 0.81, PPV = 0.62, F1 = 0.70 | Three dentists: SEN = 0.42 (0.34–0.48), PPV = 0.78 (0.63–0.89), F1 = 0.53 (0.50–0.56) | The model achieved significantly higher F1-score and sensitivity for detecting caries than three dentists. |
| Cantu et al. (2020) 20 | Detection of initial (enamel) and advanced (dentin) proximal caries | Bitewing radiography | CNN (U-Net) | 141 bitewings from the dental clinic at Charité – Universitätsmedizin Berlin | All caries: ACC = 0.80, SEN = 0.75, SPE = 0.83, PPV = 0.70, NPV = 0.86, F1 = 0.73, MCC = 0.57; initial caries: SEN > 0.7; advanced caries: SEN > 0.7 | Seven experienced dentists, all caries: ACC = 0.71, SEN = 0.36 (0.19–0.65), SPE = 0.91 (0.69–0.98), PPV = 0.75 (0.41–0.88), NPV = 0.72 (0.68–0.82), F1 = 0.41 (0.26–0.63), MCC = 0.35 (0.14–0.51); initial caries: SEN < 0.25; advanced caries: SEN = 0.40–0.75 | The model achieved higher overall accuracy than seven dentists, who were far less sensitive but slightly more specific. For initial caries, the risk of underdetection by dentists was very high, while the model showed robust sensitivity regardless of lesion depth. |
| Mertens et al. (2021) 99 | Detection of proximal enamel, early dentin, and advanced dentin caries | Bitewing radiography | dentalXr.ai Pro software | 20 bitewings from the dental clinic at Charité – Universitätsmedizin Berlin | Ten images evaluated by 22 dentists with the aid of dentalXr.ai Pro: AUC = 0.89, ACC = 0.94, SEN = 0.81, SPE = 0.97, PPV = 0.82, NPV = 0.97, F1 = 0.81 | Ten images evaluated by 22 dentists without the aid of dentalXr.ai Pro: AUC = 0.85, ACC = 0.93, SEN = 0.72, SPE = 0.97, PPV = 0.80, NPV = 0.95, F1 = 0.76 | dentalXr.ai Pro can significantly increase dentists’ sensitivity for detecting enamel caries. |
| Devlin et al. (2021) 21 | Detection of proximal enamel caries | Bitewing radiography | AssistDent software | 24 images from one dental hospital and nine general dental practice sites in the UK | 24 images evaluated by 12 dentists with the aid of AssistDent: SEN = 0.76, SPE = 0.85 | 24 images evaluated by 11 dentists without the aid of AssistDent: SEN = 0.44, SPE = 0.96 | AssistDent can significantly increase dentists’ sensitivity for detecting proximal enamel caries. |
| Endodontic evaluation | | | | | | | |
| Hamdan et al. (2022) 101 | Detection of apical radiolucencies | Periapical radiography | Denti.AI software | 68 images from one dental center | Six operative dentistry residents, one general dentist, and one endodontist with the aid of Denti.AI: AUC = 0.89, SEN = 0.93, SPE = 0.73 | The same readers without the aid of Denti.AI: AUC = 0.82, SEN = 0.94, SPE = 0.60 | Denti.AI can enhance dental practitioners’ ability to detect apical radiolucencies on periapical images. |
| Jeon et al. (2021) 34 | Detection of C-shaped canals in mandibular second molars | Panoramic radiography | CNN (Xception) | 408 cropped images of mandibular second molars | AUC = 0.98, ACC = 0.95, SEN = 0.93, SPE = 0.97, PPV = 0.96 | OMF radiologist/endodontist: AUC = 0.87/0.89, ACC = 0.87/0.89, SEN = 0.93/0.92, SPE = 0.82/0.86, PPV = 0.84/0.86 | The model outperformed the OMF radiologist and the endodontist. |
| Sherwood et al. (2021) 35 | Segmentation and classification of C-shaped canals in mandibular second molars | CBCT | CNN (U-Net, Residual U-Net, or Xception U-Net) | 35 scans | SEN = 0.72–0.79 | One endodontist and one OMF radiologist: SEN = 0.97 | The model performed less well than the endodontist and OMF radiologist but may aid clinicians in the detection and classification of C-shaped canal anatomy. |
| Yang et al. (2022) 36 | Classification of C-shaped canals in mandibular second molars | Periapical and panoramic radiography | CNN | 100 cropped images consisting of 56 mandibular second molars without and 44 with C-shaped canals | Periapical images (PA): AUC = 0.95, ACC = 0.90, SEN = 0.93, SPE = 0.87, PPV = 0.90, NPV = 0.91, F1 = 0.91; panoramic images (Pano): AUC = 0.93, ACC = 0.85, SEN = 0.72, SPE = 0.93, PPV = 0.87, NPV = 0.84, F1 = 0.79 | One specialist: AUC = 0.95 (PA)/0.96 (Pano), ACC = 0.95/0.96, SEN = 0.95/0.97, SPE = 0.94/0.95, PPV = 0.94/0.95, NPV = 0.95/0.97, F1 = 0.94/0.96; one general dentist: AUC = 0.89 (PA)/0.91 (Pano), ACC = 0.89/0.91, SEN = 0.91/0.93, SPE = 0.87/0.89, PPV = 0.86/0.89, NPV = 0.92/0.93, F1 = 0.89/0.91 | Using only the root portion of the tooth, the model’s diagnostic performance was similar to that of the specialist and superior to that of the general dentist. Both the specialist and the general dentist performed better on panoramic than on periapical images. |
| Periodontal evaluation | | | | | | | |
| Kim et al. (2019) 22 | Detection of periodontal bone loss | Panoramic radiography | Deep neural transfer network | 800 images from Korea University Anam Hospital | AUC = 0.95, SEN = 0.77, SPE = 0.95, PPV = 0.73, NPV = 0.96, F1 = 0.75 | Five dentists: AUC = 0.85 (0.84–0.87), SEN = 0.78 (0.74–0.80), SPE = 0.92 (0.91–0.93), PPV = 0.62 (0.59–0.65), NPV = 0.96 (0.95–0.97), F1 = 0.69 (0.68–0.70) | The model outperformed five dentists in detecting periodontal bone loss. |
| Krois et al. (2019) 23 | Detection of periodontal bone loss | Panoramic radiography | CNN | 353 cropped images of individual teeth | AUC = 0.89, ACC = 0.81, SEN = 0.81, SPE = 0.81, PPV = 0.76, NPV = 0.85, F1 = 0.78 | Six dentists: AUC = 0.77, ACC = 0.76, SEN = 0.92, SPE = 0.63, PPV = 0.68, NPV = 0.90, F1 = 0.78 | The model outperformed six dentists in detecting periodontal bone loss. |
| Dental implants | | | | | | | |
| Liu et al. (2022) 37 | Detection of peri-implant bone loss | Periapical radiography | Faster R-CNN | 150 images of bone-level dental implants placed in patients | SEN = 0.67, SPE = 0.87, PPV = 0.81 | Two dentists: SEN = 0.62–0.93, SPE = 0.64–0.77, PPV = 0.69–0.70 | The model performed similarly to two dentists but inferior to one experienced dentist (ground truth). |
| Lee et al. (2020) 41 | Classification of six dental implant systems | Periapical and panoramic radiography | 18-layer deep CNN | 2,396 cropped images of individual dental implants placed in patients from three centers (Daejeon Dental Hospital, Wonkwang University; Ilsan Hospital, National Health Insurance Service; Mokdong Hospital, Ewha Womans University) | AUC = 0.90–0.98, SEN = 0.83–0.97, SPE = 0.83–0.98 | Six board-certified periodontists: AUC = 0.50–0.97, SEN = 0.78–0.97, SPE = 0.39–0.99; eight periodontology residents: AUC = 0.50–0.92, SEN = 0.10–0.95, SPE = 0.38–0.99; eleven residents not specialized in periodontology: AUC = 0.54–0.92, SEN = 0.49–0.89, SPE = 0.39–0.96 | The model outperformed most of the participating periodontists, periodontology residents, and residents not specialized in periodontology. |
| Cystic, nodal, and tumor lesions | | | | | | | |
| Poedjiastoeti et al. (2018) 50 | Detection of ameloblastomas and keratocysts | Panoramic radiography | CNN (VGG-16) | 100 images from 50 patients with ameloblastomas and 50 patients with keratocysts | ACC = 0.83, SEN = 0.82, SPE = 0.83; diagnostic time: 38 s | Five OMF surgeons: ACC = 0.83, SEN = 0.81, SPE = 0.83; diagnostic time: 23 min | The model’s performance was on par with that of five OMF surgeons. |
| Endres et al. (2020) 49 | Detection and segmentation of infections, granulomas, cysts, and tumors in the jaws | Panoramic radiography | CNN (U-Net) | 102 images from the Department of Oral and Maxillofacial Surgery, Charité, Berlin | SEN = 0.51, PPV = 0.67 | 24 OMF surgeons: SEN = 0.51 (0.26–0.76), PPV = 0.69 (0.42–0.93) | The model outperformed 14 of the 24 OMF surgeons. |
| Ariji et al. (2022) 77 | Identification of metastatic cervical lymph nodes | Contrast-enhanced CT | CNN (U-Net) | 72 image slices of 24 metastatic and 68 non-metastatic lymph nodes from 59 OSCC patients | AUC = 0.95, ACC = 0.96, SEN = 0.98, SPE = 0.95 | Two radiologists: AUC = 0.90, ACC = 0.89, SEN = 0.94, SPE = 0.86 | The model outperformed two radiologists in identifying metastases, within a short time of 7 s. |
| Others | | | | | | | |
| Kunz et al. (2020) 60 | Localization of cephalometric landmarks | Cephalometric radiography | CNN | 50 images from a private orthodontic practice | Mean absolute differences between AI and gold standard: 0.46–2.18° (angular) and 0.44–0.64 mm (linear) | Mean absolute differences between 12 orthodontists and gold standard: 0.55–1.80° (angular) and 0.35–0.88 mm (linear) | The model’s performance reached a level equivalent to that of experienced orthodontists. |
| Ezhov et al. (2021) 78 | Segmentation of teeth and jaws, numbering of teeth, detection of caries, periapical lesions, and periodontitis | CBCT | Diagnocat software | 30 scans selected from 1,135 scans acquired from 17 scanners | Cross-condition: SEN = 0.92, SPE = 0.99 | Four OMF radiologists, cross-condition: SEN = 0.93–0.94, SPE = 0.99–1.00 | Diagnocat’s performance was on par with that of four radiologists. |
| Choi et al. (2022) 46 | Determination and classification of positional relationships between lower third molars and the mandibular canal | Panoramic radiography | CNN (ResNet-50) | Cropped images of lower third molars with roots overlapping the mandibular canal, taken from 25% of 571 panoramic images | Determination of the true contact position: ACC = 0.72, SEN = 0.86, SPE = 0.55; classification of the bucco-lingual position: ACC = 0.81, SEN = 0.87, SPE = 0.75 | Six OMF surgeons, determination of the true contact position: ACC = 0.53–0.70, SEN = 0.25–0.88, SPE = 0.17–0.92; classification of the bucco-lingual position: ACC = 0.32–0.52, SEN = 0.40–1.0, SPE = 0–0.56 | The model outperformed six OMF specialists, with much higher accuracy for determining the true contact position and classifying the bucco-lingual position between lower third molars and the mandibular canal. |
| Vollmer et al. (2022) 58 | Prediction of oroantral communication after tooth extraction | Panoramic radiography | CNN (VGG16, InceptionV3, MobileNetV2, EfficientNet, or ResNet50) | 60 images from patients with or without postoperative OAC | Best-performing model (MobileNetV2): AUC = 0.67, ACC = 0.74, SEN = 0.43, PPV = 0.75, F1 = 0.55 | Four OMF experts: AUC = 0.55–0.71, SEN = 0.14–0.60 | Although the MobileNetV2 model and one expert reached AUCs of nearly 0.7, the overall accuracy for predicting oroantral communication after tooth extraction from panoramic images was not sufficiently reliable. |
| Murata et al. (2019) 55 | Diagnosis of maxillary sinusitis | Panoramic radiography | CNN (AlexNet) | 120 images consisting of 60 healthy and 60 inflamed sinuses | ACC = 0.88, SEN = 0.87, SPE = 0.88, PPV = 0.88, NPV = 0.87 | Radiologists/dental residents: ACC = 0.90/0.77, SEN = 0.90/0.78, SPE = 0.89/0.75, PPV = 0.89/0.76, NPV = 0.90/0.78 | The model performed similarly to two OMF radiologists and outperformed two dental residents. |
| Jung et al. (2021) 69 | Diagnosis of temporomandibular joint osteoarthritis | Panoramic radiography | CNNs (ResNet-152 or EfficientNet-B7) | 20% of 858 images from 395 patients with normal TMJs and 463 patients with TMJ osteoarthritis | ResNet/EfficientNet: AUC = 0.94/0.95, ACC = 0.88/0.88, SEN = 0.95/0.86, SPE = 0.80/0.91 | Specialists/general dentists: ACC = 0.88/0.67, SEN = 0.86/0.69, SPE = 0.91/0.65 | The model outperformed three general dentists and three specialists in the diagnosis of TMJ osteoarthritis. |
| Kise et al. (2019) 76 | Diagnosis of Sjögren’s syndrome | CT | CNN (AlexNet) | 100 CT slices from five patients diagnosed with Sjögren’s syndrome and five individuals without any parotid gland abnormalities | ACC = 0.96, SEN = 1.0, SPE = 0.92 | Experienced/inexperienced radiologists: ACC = 0.98/0.84, SEN = 0.99/0.78, SPE = 0.97/0.89 | The model performed similarly to three experienced OMF radiologists and outperformed three inexperienced OMF radiologists. |
ACC, accuracy; AI, artificial intelligence; AUC, area under the ROC curve; CBCT, cone-beam computed tomography; CT, computed tomography; CNN, convolutional neural network; DSC, Dice similarity coefficient; F1, F1-score; MCC, Matthews correlation coefficient; NPV, negative predictive value; OAC, oroantral communication; OMF, oral and maxillofacial; PA, periapical images; Pano, panoramic images; PPV, positive predictive value (precision); SEN, sensitivity (recall); SPE, specificity; TMJ, temporomandibular joint.
In addition, it has been expected that AI could at some point “see” more on a certain image type than the human eye can. One option to achieve this would be to train the AI against ground truth derived from more sensitive sensor data than the data later used during inference. Currently, this is not the case, as the ground truth typically relies on the same sensor data and oftentimes involves human judgment. At present, the only way to achieve somewhat “superhuman” performance is to involve a larger number of practitioners in establishing the ground truth, thereby at least overcoming the limitations of single dentists. 14
Gradually, more and more of the proposed AI models have been tested on completely external image data acquired from different dental centers (Table 5), as suggested by the Artificial Intelligence in Dental Research guideline. 117 Few models were able to achieve similar performance on external images, while most performed worse. Some studies have reported low cross-center generalizability of their models. 70,100 It was noted that adding external images acquired from one dental center to the training dataset could increase the model’s performance on images from that external center but decrease its performance on images from the original center. These findings indicate that although cross-center training can improve the generalizability of AI models, the proportion of images from different centers in the training dataset also influences the trained model’s performance. Therefore, future studies should focus not only on internal testing but also on external testing. If external testing shows unfavorable outcomes, cross-center training should be considered to increase the model’s generalizability.
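A hedged sketch of such internal versus external testing (PyTorch; the data loaders are placeholders, and plain accuracy stands in for the studies’ actual metrics) is given below.

```python
import torch

def evaluate(model, loader):
    """Accuracy of a binary classifier on one test set (placeholder metric)."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = (model(images).squeeze(1) > 0).long()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# internal_loader: held-out images from the training center(s)
# external_loader: images from a center never seen during training
# A large gap between the two scores signals poor cross-center generalizability.
```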
Table 5.
Performance of AI models on internal and external test datasets
Author (Year) | Application | Imaging modality | AI software/ deep learning model | Internal testing | External testing | Main findings | ||
---|---|---|---|---|---|---|---|---|
Test dataset | Performance | Test dataset | Performance | |||||
Orhan et al. (2020) 31 |
Detection and measurement of apical lesions | CBCT | Diagnocat software | N/A | N/A | 109 scans from Eskisehir Osmangazi University, Faculty of Dentistry | SPE = 0.89 PPV = 0.95 F1 = 0.93 |
Diagnocat achieved high performance on external validation with no significant differences in the volumes measured by Diagnocat and by an OMF radiologist |
Krois et al. (2021) 100 |
Detection of apical lesions | Panoramic radiography | CNNs (U‐Net and EfficientNet-B5) |
Dataset A = 150 images from Charité, Berlin, Germany Dataset B = 150 images from King George Medical University, Lucknow, India |
Model trained with images from dataset A and tested on images from dataset A SEN = 0.48 SPE = 1.0 PPV = 0.64 F1 = 0.54 Model trained with images from datasets A&B and tested on images from dataset A SEN = 0.48 SPE = 1.0 PPV = 0.57 F1 = 0.51 Model trained with images from datasets A&B and tested on images from dataset B SEN = 0.40 SPE = 1.0 PPV = 0.54 F1 = 0.46 |
Dataset B = 150 images from King George Medical University, Lucknow, India | Model trained with images from dataset A and tested on images from dataset B SEN = 0.22 SPE = 1.0 PPV = 0.63 F1 = 0.33 |
The model trained with images acquired from one hospital achieved lower performance (especially lower sensitivity) when tested on images acquired from another hospital. |
Zadrożny et al. (2022) 79 |
Multiple tasks including identification of missing teeth, caries, fillings, prosthetic restorations, endodontically treated teeth, residual roots, apical lesions, and periodontal bone loss | Panoramic radiography | Diagnocat software | N/A | N/A | 30 images from the Dental and Maxillofacial Radiology Department, Medical University of Warsaw, Poland |
Missing tooth
SEN = 0.96, SPE = 0.98 Caries SEN = 0.45, SPE = 0.98 Filling SEN = 0.83, SPE = 0.99 Prosthesis SEN = 0.96, SPE = 0.99 Endo-treated tooth SEN = 0.87, SPE = 0.99 Residual root SEN = 0.82, SPE = 1.00 Apical lesion SEN = 0.39, SPE = 0.98 Periodontal bone loss SEN = 0.80, SPE = 0.85 |
Diagnocat achieved high performance on external validation in identifying missing teeth, fillings, prostheses, endodontically treated teeth, residual roots, and periodontal bone loss, but showed low sensitivity in identifying caries and apical lesions.
Ezhov et al. (2021) 78 |
Segmentation of teeth and jaws, numbering of teeth, detection of caries, periapical lesions, and periodontitis | CBCT | Diagnocat software | Cropped images from 562 scans taken using 19 scanners | Overall SEN = 0.92 SPE = 0.99 |
30 scans taken using three different scanners from three clinics |
12 dentists with/without the aid of Diagnocat
Overall SEN = 0.85/0.77 SPE = 0.97/0.96 |
The overall sensitivity of 12 dentists with the aid of Diagnocat on external images was lower than that of Diagnocat on internal images. |
De Angelis et al. (2022) 80 |
Tooth numbering and detection of dental implants, prosthetic crowns, fillings, root remnants, and root canal treatment | Panoramic radiography | Promaton software | N/A | N/A | 120 images from the Department of Oral and Maxillofacial Sciences of Sapienza University of Rome, Italy | Overall AUC = 0.94 SEN = 0.89 SPE = 0.98 PPV = 0.94 NPV = 0.97 |
Promaton achieved high overall performance on external validation.
Nishiyama et al. (2021) 70 |
Diagnosis of mandibular condyle fracture | Panoramic radiography | CNN (AlexNet) |
5-fold CV
Dataset A = 200 images from a university dental hospital Dataset B = 200 images from a general hospital |
Model trained with and tested on images from dataset A/B AUC = 0.85/0.86 ACC = 0.80/0.81 SEN = 0.80/0.80 SPE = 0.79/0.82 Model trained with images from datasets A&B and tested on images from dataset A/B AUC = 0.89/0.91 ACC = 0.82/0.85 SEN = 0.83/0.85 SPE = 0.80/0.84 |
Dataset A = 200 images from a university dental hospital Dataset B = 200 images from a general hospital |
Model trained with images from dataset A and tested on images from dataset B AUC = 0.58 ACC = 0.59 SEN = 0.60 SPE = 0.58 Model trained with images from dataset B and tested on images from dataset A AUC = 0.58 ACC = 0.60 SEN = 0.61 SPE = 0.59 |
The model trained with images acquired from one hospital achieved much lower diagnostic performance when tested on images acquired from another hospital. |
Jung et al. (2021) 57 |
Segmentation of maxillary sinus lesions | CBCT | CNN (3D nnU-Net) | 20 scans from Korea University Anam Hospital | DSC (air) = 0.93 DSC (lesions) = 0.76 |
20 scans from Korea University Ansan Hospital | DSC (air) = 0.97 DSC (lesions) = 0.54 |
The model achieved similar performance on external images in segmenting the air space of the sinus but much lower performance in segmenting the sinus lesions. |
3D, three-dimensional; ACC, accuracy; AI, artificial intelligence; AUC, area under the ROC curve; CBCT, cone-beam computed tomography; CNN, convolutional neural network; CV, cross-validation; DSC, Dice similarity coefficient; F1, F1-score; N/A, not available; NPV, negative predictive value; OMF, oral and maxillofacial; PPV, positive predictive value (Precision); SEN, sensitivity (Recall); SPE, specificity.
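As referenced above, the cross-center findings summarized in Table 5 correspond to a simple evaluation protocol: train on one center, test both internally and externally, then retrain on the pooled data and retest on each center. The sketch below emulates this with synthetic features and a logistic-regression stand-in for the imaging model; all names, shapes, and the feature shift are illustrative assumptions, not the pipelines of the cited studies:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_center(n: int, shift: float):
    """Stand-in for one center's extracted image features and labels.
    `shift` mimics scanner/protocol differences between centers."""
    X = rng.normal(shift, 1.0, size=(n, 16))
    y = (X[:, 0] + rng.normal(0.0, 1.0, n) > shift).astype(int)
    return X, y

X_a, y_a = make_center(400, shift=0.0)    # center A (original hospital)
X_b, y_b = make_center(400, shift=-1.5)   # center B (external, different scanner)

# 1) Train on center A only, then test internally (A) and externally (B).
model_a = LogisticRegression().fit(X_a[:300], y_a[:300])
print("A-trained, internal SEN:", recall_score(y_a[300:], model_a.predict(X_a[300:])))
print("A-trained, external SEN:", recall_score(y_b, model_a.predict(X_b)))

# 2) Cross-center training: pool images from both centers and retest on each.
X_ab = np.vstack([X_a[:300], X_b[:300]])
y_ab = np.hstack([y_a[:300], y_b[:300]])
model_ab = LogisticRegression().fit(X_ab, y_ab)
print("A&B-trained, center A SEN:", recall_score(y_a[300:], model_ab.predict(X_a[300:])))
print("A&B-trained, center B SEN:", recall_score(y_b[300:], model_ab.predict(X_b[300:])))
```

In this toy setup the model trained on center A alone loses sensitivity on the shifted center B, while pooled training recovers performance on both centers at some cost to the original center, mirroring the trade-offs reported in Table 5.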
The usefulness and efficacy of most proposed AI models in daily dental practice remain unclear based on current evidence. Although most studies reported that AI models can increase the diagnostic ability of dental practitioners and reduce the time spent on laborious steps of treatment planning, their true impact on real-world clinical practice is rarely examined. In addition to a model’s accuracy, future studies should focus more on its impact on treatment decisions, clinical and patient-reported outcomes, and cost-effectiveness, which may matter more to patients, providers, and healthcare organizers. 118 Schwendicke et al performed the first cost-effectiveness analysis of an AI application for caries detection on bitewings. 14 They reported that AI showed significantly higher sensitivity than dentists, allowing more early carious lesions to be detected, facilitating non- or micro-invasive management of the detected lesions, and thus avoiding costly late retreatments. Such cost-effectiveness implies that integrating AI into clinical practice has the potential to reduce the healthcare cost burden, underscoring its economic impact on healthcare systems. Only clinically relevant AI tools that fulfil technical requirements and show financial promise will attract healthcare stakeholders to continuously support their development, optimization, and application in dental medicine. 119,120 Therefore, future research should assess the clinical, technical, and financial aspects of the cost-effectiveness of AI applications in dental medicine to demonstrate their true usefulness in daily practice.
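The economic reasoning in such analyses is commonly condensed into the incremental cost-effectiveness ratio (ICER); the generic formulation below is the standard summary measure of the field, not necessarily the exact model used in reference 14:

```latex
\mathrm{ICER}
  = \frac{\Delta C}{\Delta E}
  = \frac{C_{\mathrm{AI}} - C_{\mathrm{SoC}}}{E_{\mathrm{AI}} - E_{\mathrm{SoC}}}
```

Here C denotes the expected cost and E the chosen effectiveness measure (e.g., years of tooth retention), with SoC the standard of care without AI; an intervention is usually considered cost-effective when its ICER falls below the payer’s willingness-to-pay threshold.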
Moreover, the impact of AI on the patient-provider interaction should not be ignored. Younger and more educated individuals hold more positive attitudes towards dental AI applications than older and less educated individuals, 121 with the elderly being more sceptical towards such advanced healthcare technologies. Providers therefore need to frame the use of AI individually to retain trust in the care process. Ideally, the output of AI should help patients objectify a diagnosis and visually recognize a lesion, which can improve patient-clinician communication and increase patients’ trust in any derived management.
Conclusions and outlook
Personalized dental medicine should allow clinicians to provide the safest, most efficacious, and most efficient diagnostics and therapeutics tailored to individuals based on their biological, social, and behavioral characteristics. Based on current evidence, truly personalized dental medicine is still far from reality: most evidence-based, up-to-date clinical practice guidelines for the management of dental diseases still stratify individuals and lesions into risk groups and thus assign identical management strategies to all individuals in a given risk group. A wide range of AI applications, including several commercially available software options, have been developed based on diagnostic images to assist clinicians in the diagnosis and treatment planning of various dentomaxillofacial diseases, with performance similar or even superior to that of specialists. Although these dental AI applications are seen to have the potential to enable a more precise, personalized, preventive, and participatory approach to the management of dentomaxillofacial diseases, almost all of them work only on image data obtained at a single time point in the diagnostic or treatment process, without considering other data such as individual characteristics and clinical assessment. Advanced technologies with improved data analytic approaches are expected to enrich these AI applications with diverse, multimodal, large, and complex data from the individual level (e.g., demographic, behavioral, and social characteristics; clinical data generated by records mining, clinical assessment, diagnostic imaging, and omics technologies; and real-time consumer data from wearables and tracking devices), the setting level (e.g., geospatial, environmental, and provider-related data), and the system level (e.g., health insurance, regulatory, and legislative data). This may facilitate a deeper understanding of the interaction of these multilevel data and hopefully bring us closer to a truer form of personalized dental care for patients in the near future. 3,7
Footnotes
Acknowledgment: The authors received no financial support. F.S. is co-founder of a startup in the field of dental image analysis, the dentalXrai GmbH. The startup did not have any relation to this study, neither with regards to setting it up nor conducting, analyzing or reporting it.
Funding: Open Access funding enabled and organized by Projekt DEAL.
Contributor Information
Kuo Feng Hung, Email: hungkfg@hku.hk.
Falk Schwendicke, Email: falk.schwendicke@charite.de.
REFERENCES
- 1. Innes NPT, Schwendicke F. Restorative thresholds for carious lesions: systematic review and meta-analysis. J Dent Res 2017; 96: 501–8. doi: 10.1177/0022034517693605
- 2. Innes NPT, Chu CH, Fontana M, Lo ECM, Thomson WM, Uribe S, et al. A century of change towards prevention and minimal intervention in cariology. J Dent Res 2019; 98: 611–17. doi: 10.1177/0022034519837252
- 3. Schwendicke F, Krois J. Precision dentistry-what it is, where it fails (yet), and how to get there. Clin Oral Investig 2022; 26: 3395–3403. doi: 10.1007/s00784-022-04420-1
- 4. Hood L, Flores M. A personal view on systems medicine and the emergence of proactive P4 medicine: predictive, preventive, personalized and participatory. N Biotechnol 2012; 29: 613–24. doi: 10.1016/j.nbt.2012.03.004
- 5. Beck JD. Risk revisited. Community Dent Oral Epidemiol 1998; 26: 220–25. doi: 10.1111/j.1600-0528.1998.tb01954.x
- 6. Burt BA. Definitions of risk. J Dent Educ 2001; 65: 1007–8.
- 7. Schwendicke F, Krois J. Data dentistry: how data are changing clinical care and research. J Dent Res 2022; 101: 21–29. doi: 10.1177/00220345211020265
- 8. Huerta EA, Khan A, Huang X, Tian M, Levental M, Chard R, et al. Accelerated, scalable and reproducible AI-driven gravitational wave detection. Nat Astron 2021; 5: 1062–68. doi: 10.1038/s41550-021-01405-0
- 9. Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021; 596: 583–89. doi: 10.1038/s41586-021-03819-2
- 10. National Research Council (US) Committee on a Framework for Developing a New Taxonomy of Disease. Why now? In: Toward precision medicine: building a knowledge network for biomedical research and a new taxonomy of disease. Washington (DC): National Academies Press (US); 2011, pp. 22–39.
- 11. Schwendicke F, Marazita ML, Jakubovics NS, Krois J. Big data and complex data analytics: breaking peer review? J Dent Res 2022; 101: 369–70. doi: 10.1177/00220345211070983
- 12. Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Networks 1991; 4: 251–57. doi: 10.1016/0893-6080(91)90009-T
- 13. Krois J, Graetz C, Holtfreter B, Brinkmann P, Kocher T, Schwendicke F. Evaluating modeling and validation strategies for tooth loss. J Dent Res 2019; 98: 1088–95. doi: 10.1177/0022034519864889
- 14. Schwendicke F, Rossi JG, Göstemeyer G, Elhennawy K, Cantu AG, Gaudin R, et al. Cost-effectiveness of artificial intelligence for proximal caries detection. J Dent Res 2021; 100: 369–76. doi: 10.1177/0022034520972335
- 15. Riordain RN, Glick M, Mashhadani S, Aravamudhan K, Barrow J, Cole D, et al. Developing a standard set of patient-centred outcomes for adult oral health-an international, cross-disciplinary consensus. Int Dent J 2021; 71: 40–52. doi: 10.1111/idj.12604
- 16. Joda T, Gallucci GO, Wismeijer D, Zitzmann NU. Augmented and virtual reality in dental medicine: a systematic review. Comput Biol Med 2019; 108: 93–100. doi: 10.1016/j.compbiomed.2019.03.012
- 17. Hung K, Montalvao C, Tanaka R, Kawai T, Bornstein MM. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: a systematic review. Dentomaxillofac Radiol 2020; 49(1): 20190107. doi: 10.1259/dmfr.20190107
- 18. Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent 2018; 77: 106–11. doi: 10.1016/j.jdent.2018.07.015
- 19. Srivastava MM, Kumar P, Pradhan L, Varadarajan S. Detection of tooth caries in bitewing radiographs using deep learning. arXiv; 2017.
- 20. Cantu AG, Gehrung S, Krois J, Chaurasia A, Rossi JG, Gaudin R, et al. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent 2020; 100: 103425. doi: 10.1016/j.jdent.2020.103425
- 21. Devlin H, Williams T, Graham J, Ashley M. The ADEPT study: a comparative study of dentists’ ability to detect enamel-only proximal caries in bitewing radiographs with and without the use of AssistDent artificial intelligence software. Br Dent J 2021; 231: 481–85. doi: 10.1038/s41415-021-3526-6
- 22. Kim J, Lee HS, Song IS, Jung KH. DeNTNet: deep neural transfer network for the detection of periodontal bone loss using panoramic dental radiographs. Sci Rep 2019; 9: 17615. doi: 10.1038/s41598-019-53758-2
- 23. Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep 2019; 9: 8495.
- 24. Danks RP, Bano S, Orishko A, Tan HJ, Moreno Sancho F, D’Aiuto F, et al. Automating periodontal bone loss measurement via dental landmark localisation. Int J Comput Assist Radiol Surg 2021; 16: 1189–99. doi: 10.1007/s11548-021-02431-z
- 25. Lee C-T, Kabir T, Nelson J, Sheng S, Meng H-W, Van Dyke TE, et al. Use of the deep learning approach to measure alveolar bone level. J Clin Periodontol 2022; 49: 260–69. doi: 10.1111/jcpe.13574
- 26. Chang H-J, Lee S-J, Yong T-H, Shin N-Y, Jang B-G, Kim J-E, et al. Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep 2020; 10(1): 7531. doi: 10.1038/s41598-020-64509-z
- 27. Chang J, Chang M-F, Angelov N, Hsu C-Y, Meng H-W, Sheng S, et al. Application of deep machine learning for the radiographic diagnosis of periodontitis. Clin Oral Invest 2022; 26: 6629–37. doi: 10.1007/s00784-022-04617-4
- 28. Lee J-H, Kim D, Jeong S-N, Choi S-H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci 2018; 48: 114. doi: 10.5051/jpis.2018.48.2.114
- 29. Thanathornwong B, Suebnukarn S. Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks. Imaging Sci Dent 2020; 50: 169–74. doi: 10.5624/isd.2020.50.2.169
- 30. Ekert T, Krois J, Meinhold L, Elhennawy K, Emara R, Golla T, et al. Deep learning for the radiographic detection of apical lesions. J Endod 2019; 45: 917–22. doi: 10.1016/j.joen.2019.03.016
- 31. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J 2020; 53: 680–89. doi: 10.1111/iej.13265
- 32. Fukuda M, Inamoto K, Shibata N, Ariji Y, Yanashita Y, Kutsuna S, et al. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol 2020; 36: 337–43. doi: 10.1007/s11282-019-00409-x
- 33. Johari M, Esmaeili F, Andalib A, Garjani S, Saberkari H. Detection of vertical root fractures in intact and endodontically treated premolar teeth by designing a probabilistic neural network: an ex vivo study. Dentomaxillofac Radiol 2017; 46(2): 20160107. doi: 10.1259/dmfr.20160107
- 34. Jeon S-J, Yun J-P, Yeom H-G, Shin W-S, Lee J-H, Jeong S-H, et al. Deep-learning for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Dentomaxillofac Radiol 2021; 50(5): 20200513. doi: 10.1259/dmfr.20200513
- 35. Sherwood AA, Sherwood AI, Setzer FC, K SD, Shamili JV, John C, et al. A deep learning approach to segment and classify C-shaped canal morphologies in mandibular second molars using cone-beam computed tomography. J Endod 2021; 47: 1907–16. doi: 10.1016/j.joen.2021.09.009
- 36. Yang S, Lee H, Jang B, Kim K-D, Kim J, Kim H, et al. Development and validation of a visually explainable deep learning model for classification of C-shaped canals of the mandibular second molars in periapical and panoramic dental radiographs. J Endod 2022; 48: 914–21. doi: 10.1016/j.joen.2022.04.007
- 37. Liu M, Wang S, Chen H, Liu Y. A pilot study of a deep learning approach to detect marginal bone loss around implants. BMC Oral Health 2022; 22(1): 11. doi: 10.1186/s12903-021-02035-8
- 38. Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Peri-implant bone loss measurement using a region-based convolutional neural network on dental periapical radiographs. J Clin Med 2021; 10(5): 1009. doi: 10.3390/jcm10051009
- 39. Kurt Bayrakdar S, Orhan K, Bayrakdar IS, Bilgir E, Ezhov M, Gusarev M, et al. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med Imaging 2021; 21(1): 86. doi: 10.1186/s12880-021-00618-z
- 40. Kim JE, Nam NE, Shim JS, Jung YH, Cho BH, Hwang JJ. Transfer learning via deep neural networks for implant fixture system classification using periapical radiographs. J Clin Med 2020; 9: 1117. doi: 10.3390/jcm9041117
- 41. Lee JH, Kim YT, Lee JB, Jeong SN. A performance comparison between automated deep learning and dental professionals in classification of dental implant systems from dental imaging: a multi-center study. Diagnostics 2020; 10: 910. doi: 10.3390/diagnostics10110910
- 42. Sukegawa S, Yoshii K, Hara T, Matsuyama T, Yamashita K, Nakano K, et al. Multi-task deep learning model for classification of dental implant brand and treatment stage using dental panoramic radiograph images. Biomolecules 2021; 11: 815.
- 43. Lee DW, Kim SY, Jeong SN, Lee JH. Artificial intelligence in fractured dental implant detection and classification: evaluation using dataset from two dental hospitals. Diagnostics (Basel) 2021; 11(2): 233. doi: 10.3390/diagnostics11020233
- 44. Fukuda M, Ariji Y, Kise Y, Nozawa M, Kuwada C, Funakoshi T, et al. Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 130: 336–43. doi: 10.1016/j.oooo.2020.04.005
- 45. Liu M-Q, Xu Z-N, Mao W-Y, Li Y, Zhang X-H, Bai H-L, et al. Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin Oral Investig 2022; 26: 981–91. doi: 10.1007/s00784-021-04082-5
- 46. Choi E, Lee S, Jeong E, Shin S, Park H, Youm S, et al. Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography. Sci Rep 2022; 12(1): 2456. doi: 10.1038/s41598-022-06483-2
- 47. Yoo J-H, Yeom H-G, Shin W, Yun JP, Lee JH, Jeong SH, et al. Deep learning based prediction of extraction difficulty for mandibular third molars. Sci Rep 2021; 11(1): 1954. doi: 10.1038/s41598-021-81449-4
- 48. Kim BS, Yeom HG, Lee JH, Shin WS, Yun JP, Jeong SH, et al. Deep learning-based prediction of paresthesia after third molar extraction: a preliminary study. Diagnostics 2021; 11: 1572. doi: 10.3390/diagnostics11091572
- 49. Endres MG, Hillen F, Salloumis M, Sedaghat AR, Niehues SM, Quatela O, et al. Development of a deep learning algorithm for periapical disease detection in dental radiographs. Diagnostics 2020; 10: 430. doi: 10.3390/diagnostics10060430
- 50. Poedjiastoeti W, Suebnukarn S. Application of convolutional neural network in the diagnosis of jaw tumors. Healthc Inform Res 2018; 24: 236–41. doi: 10.4258/hir.2018.24.3.236
- 51. Kwon O, Yong T-H, Kang S-R, Kim J-E, Huh K-H, Heo M-S, et al. Automatic diagnosis for cysts and tumors of both jaws on panoramic radiographs using a deep convolution neural network. Dentomaxillofac Radiol 2020; 49(8): 20200185. doi: 10.1259/dmfr.20200185
- 52. Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis 2020; 26: 152–58. doi: 10.1111/odi.13223
- 53. Lee A, Kim MS, Han SS, Park P, Lee C, Yun JP. Deep learning neural networks to differentiate Stafne’s bone cavity from pathological radiolucent lesions of the mandible in heterogeneous panoramic radiography. PLoS One 2021; 16(7): e0254997. doi: 10.1371/journal.pone.0254997
- 54. Kuwana R, Ariji Y, Fukuda M, Kise Y, Nozawa M, Kuwada C, et al. Performance of deep learning object detection technology in the detection and diagnosis of maxillary sinus lesions on panoramic radiographs. Dentomaxillofac Radiol 2021; 50(1): 20200171. doi: 10.1259/dmfr.20200171
- 55. Murata M, Ariji Y, Ohashi Y, Kawai T, Fukuda M, Funakoshi T, et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol 2019; 35: 301–7. doi: 10.1007/s11282-018-0363-7
- 56. Hung KF, Ai QYH, King AD, Bornstein MM, Wong LM, Leung YY. Automatic detection and segmentation of morphological changes of the maxillary sinus mucosa on cone-beam computed tomography images using a three-dimensional convolutional neural network. Clin Oral Investig 2022; 26: 3987–98. doi: 10.1007/s00784-021-04365-x
- 57. Jung SK, Lim HK, Lee S, Cho Y, Song IS. Deep active learning for automatic segmentation of maxillary sinus lesions using a convolutional neural network. Diagnostics (Basel) 2021; 11(4): 688. doi: 10.3390/diagnostics11040688
- 58. Vollmer A, Saravi B, Vollmer M, Lang GM, Straub A, Brands RC, et al. Artificial intelligence-based prediction of oroantral communication after tooth extraction utilizing preoperative panoramic radiography. Diagnostics (Basel) 2022; 12(6): 1406. doi: 10.3390/diagnostics12061406
- 59. Nishimoto S, Sotsuka Y, Kawai K, Ishise H, Kakibuchi M. Personal computer-based cephalometric landmark detection with deep learning, using cephalograms on the Internet. J Craniofac Surg 2019; 30: 91–95. doi: 10.1097/SCS.0000000000004901
- 60. Kunz F, Stellzig-Eisenhauer A, Zeman F, Boldt J. Artificial intelligence in orthodontics: evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J Orofac Orthop 2020; 81: 52–68. doi: 10.1007/s00056-019-00203-8
- 61. Park J-H, Hwang H-W, Moon J-H, Yu Y, Kim H, Her S-B, et al. Automated identification of cephalometric landmarks: part 1-comparisons between the latest deep-learning methods YOLOv3 and SSD. Angle Orthod 2019; 89: 903–9. doi: 10.2319/022019-127.1
- 62. Hwang HW, Park JH, Moon JH, Yu Y, Kim H, Her SB, et al. Automated identification of cephalometric landmarks: part 2-might it be better than human? Angle Orthod 2020; 90: 69–76.
- 63. Bulatova G, Kusnoto B, Grace V, Tsay TP, Avenetti DM, Sanchez FJC. Assessment of automatic cephalometric landmark identification using artificial intelligence. Orthod Craniofacial Res 2021; 24: 37–42. doi: 10.1111/ocr.12542
- 64. Mahto RK, Kafle D, Giri A, Luintel S, Karki A. Evaluation of fully automated cephalometric measurements obtained from web-based artificial intelligence driven platform. BMC Oral Health 2022; 22(1): 132. doi: 10.1186/s12903-022-02170-w
- 65. Shin W, Yeom HG, Lee GH, Yun JP, Jeong SH, Lee JH, et al. Deep learning based prediction of necessity for orthognathic surgery of skeletal malocclusion using cephalogram in Korean individuals. BMC Oral Health 2021; 21: 130.
- 66. Yu HJ, Cho SR, Kim MJ, Kim WH, Kim JW, Choi J. Automated skeletal classification with lateral cephalometry based on artificial intelligence. J Dent Res 2020; 99: 249–56. doi: 10.1177/0022034520901715
- 67. Kim I, Misra D, Rodriguez L, Gill M, Liberton DK, Almpani K, et al. Malocclusion classification on 3D cone-beam CT craniofacial images using multi-channel deep learning models. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) in conjunction with the 43rd Annual Conference of the Canadian Medical and Biological Engineering Society; Montreal, QC, Canada; 2020, pp. 1294–98. doi: 10.1109/EMBC44109.2020.9176672
- 68. Lin HH, Chiang WC, Yang CT, Cheng CT, Zhang T, Lo LJ. On construction of transfer learning for facial symmetry assessment before and after orthognathic surgery. Comput Methods Programs Biomed 2021; 200: 105928. doi: 10.1016/j.cmpb.2021.105928
- 69. Jung W, Lee KE, Suh BJ, Seok H, Lee DW. Deep learning for osteoarthritis classification in temporomandibular joint. Oral Dis 2021. doi: 10.1111/odi.14056
- 70. Nishiyama M, Ishibashi K, Ariji Y, Fukuda M, Nishiyama W, Umemura M, et al. Performance of deep learning models constructed using panoramic radiographs from two hospitals to diagnose fractures of the mandibular condyle. Dentomaxillofac Radiol 2021; 50(7): 20200611. doi: 10.1259/dmfr.20200611
- 71. Kim YH, Shin JY, Lee A, Park S, Han SS, Hwang HJ. Automated cortical thickness measurement of the mandibular condyle head on CBCT images using a deep learning method. Sci Rep 2021; 11(1): 14852. doi: 10.1038/s41598-021-94362-7
- 72. Wang X, Xu Z, Tong Y, Xia L, Jie B, Ding P, et al. Detection and classification of mandibular fracture on CT scan using deep convolutional neural network. Clin Oral Investig 2022; 26: 4593–4601. doi: 10.1007/s00784-022-04427-8
- 73. Ishibashi K, Ariji Y, Kuwada C, Kimura M, Hashimoto K, Umemura M, et al. Efficacy of a deep learning model created with the transfer learning method in detecting sialoliths of the submandibular gland on panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2022; 133: 238–44. doi: 10.1016/j.oooo.2021.08.010
- 74. Sukegawa S, Fujimura A, Taguchi A, Yamamoto N, Kitamura A, Goto R, et al. Identification of osteoporosis using ensemble deep learning model with panoramic radiographs and clinical covariates. Sci Rep 2022; 12(1): 6088. doi: 10.1038/s41598-022-10150-x
- 75. Tassoker M, Öziç MÜ, Yuce F. Comparison of five convolutional neural networks for predicting osteoporosis based on mandibular cortical index on panoramic radiographs. Dentomaxillofac Radiol 2022; 51(6): 20220108. doi: 10.1259/dmfr.20220108
- 76. Kise Y, Ikeda H, Fujii T, Fukuda M, Ariji Y, Fujita H, et al. Preliminary study on the application of deep learning system to diagnosis of Sjögren’s syndrome on CT images. Dentomaxillofac Radiol 2019; 48(6): 20190019. doi: 10.1259/dmfr.20190019
- 77. Ariji Y, Kise Y, Fukuda M, Kuwada C, Ariji E. Segmentation of metastatic cervical lymph nodes from CT images of oral cancers using deep-learning technology. Dentomaxillofac Radiol 2022; 51(4): 20210515. doi: 10.1259/dmfr.20210515
- 78. Ezhov M, Gusarev M, Golitsyna M, Yates JM, Kushnerev E, Tamimi D, et al. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci Rep 2021; 11: 15006.
- 79. Zadrożny Ł, Regulski P, Brus-Sawczuk K, Czajkowska M, Parkanyi L, Ganz S, et al. Artificial intelligence application in assessment of panoramic radiographs. Diagnostics 2022; 12: 224. doi: 10.3390/diagnostics12010224
- 80. De Angelis F, Pranno N, Franchina A, Di Carlo S, Brauner E, Ferri A, et al. Artificial intelligence: a new diagnostic software in dentistry: a preliminary performance diagnostic study. Int J Environ Res Public Health 2022; 19(3): 1728. doi: 10.3390/ijerph19031728
- 81. Vinayahalingam S, Goey R-S, Kempers S, Schoep J, Cherici T, Moin DA, et al. Automated chart filing on panoramic radiographs using deep learning. J Dent 2021; 115: 103864. doi: 10.1016/j.jdent.2021.103864
- 82. Başaran M, Çelik Ö, Bayrakdar IS, Bilgir E, Orhan K, Odabaş A, et al. Diagnostic charting of panoramic radiography using deep-learning artificial intelligence system. Oral Radiol 2022; 38: 363–69. doi: 10.1007/s11282-021-00572-0
- 83. Kılıc MC, Bayrakdar IS, Çelik Ö, Bilgir E, Orhan K, Aydın OB, et al. Artificial intelligence system for automatic deciduous tooth detection and numbering in panoramic radiographs. Dentomaxillofac Radiol 2021; 50(6): 20200172. doi: 10.1259/dmfr.20200172
- 84. Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019; 48(4): 20180051. doi: 10.1259/dmfr.20180051
- 85. Shaheen E, Leite A, Alqahtani KA, Smolders A, Van Gerven A, Willems H, et al. A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study. J Dent 2021; 115: 103865. doi: 10.1016/j.jdent.2021.103865
- 86. Fontenele RC, Gerhardt M do N, Pinto JC, Van Gerven A, Willems H, Jacobs R, et al. Influence of dental fillings and tooth type on the performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on CBCT images-a validation study. J Dent 2022; 119: 104069. doi: 10.1016/j.jdent.2022.104069
- 87. Du X, Chen Y, Zhao J, Xi Y. A convolutional neural network based auto-positioning method for dental arch in rotational panoramic radiography. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018: 2615–18. doi: 10.1109/EMBC.2018.8512732
- 88. Liang K, Zhang L, Yang H, Yang Y, Chen Z, Xing Y. Metal artifact reduction for practical dental computed tomography by improving interpolation-based reconstruction with deep learning. Med Phys 2019; 46: e823–34. doi: 10.1002/mp.13644
- 89. Park J, Hwang D, Kim KY, Kang SK, Kim YK, Lee JS. Computed tomography super-resolution using deep convolutional neural network. Phys Med Biol 2018; 63(14): 145011. doi: 10.1088/1361-6560/aacdd4
- 90. Hatvani J, Horvath A, Michetti J, Basarab A, Kouame D, Gyongy M. Deep learning-based super-resolution applied to dental computed tomography. IEEE Trans Radiat Plasma Med Sci 2018; 3: 120–28. doi: 10.1109/TRPMS.2018.2827239
- 91. Hu Z, Jiang C, Sun F, Zhang Q, Ge Y, Yang Y, et al. Artifact correction in low-dose dental CT imaging using Wasserstein generative adversarial networks. Med Phys 2019; 46: 1686–96. doi: 10.1002/mp.13415
- 92. Jang TJ, Yun HS, Kim JE, Lee SH, Seo JK. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification. 2021. Available from: https://arxiv.org/abs/2112.01784
- 93. Chung M, Lee J, Song W, Song Y, Yang I-H, Lee J, et al. Automatic registration between dental cone-beam CT and scanned surface via deep pose regression neural networks and clustered similarities. IEEE Trans Med Imaging 2020; 39: 3900–3909. doi: 10.1109/TMI.2020.3007520
- 94. Li J, Wang Y, Wang S, Zhang K, Li G. Landmark-guided rigid registration for temporomandibular joint MRI-CBCT images with large field-of-view difference. Lect Notes Comput Sci 2021; 12966: 527–36.
- 95. Hung K, Yeung AWK, Tanaka R, Bornstein MM. Current applications, opportunities, and limitations of AI for 3D imaging in dental research and practice. Int J Environ Res Public Health 2020; 17: 4424. doi: 10.3390/ijerph17124424
- 96. Hung KF, Ai QYH, Leung YY, Yeung AWK. Potential and impact of artificial intelligence algorithms in dento-maxillofacial radiology. Clin Oral Investig 2022; 26: 5535–55. doi: 10.1007/s00784-022-04477-y
- 97. Bader JD, Shugars DA, Bonito AJ. Systematic reviews of selected dental caries diagnostic and management methods. J Dent Educ 2001; 65: 960–68. doi: 10.1002/j.0022-0337.2001.65.10.tb03470.x
- 98. Keenan JR, Keenan AV. Accuracy of dental radiographs for caries detection. Evid Based Dent 2016; 17(2): 43. doi: 10.1038/sj.ebd.6401166
- 99. Mertens S, Krois J, Cantu AG, Arsiwala LT, Schwendicke F. Artificial intelligence for caries detection: randomized trial. J Dent 2021; 115: 103849. doi: 10.1016/j.jdent.2021.103849
- 100. Krois J, Garcia Cantu A, Chaurasia A, Patil R, Chaudhari PK, Gaudin R, et al. Generalizability of deep learning models for dental image analysis. Sci Rep 2021; 11(1): 6102. doi: 10.1038/s41598-021-85454-5
- 101. Hamdan MH, Tuzova L, Mol A, Tawil PZ, Tuzoff D, Tyndall DA. The effect of a deep-learning tool on dentists’ performances in detecting apical radiolucencies on periapical radiographs. Dentomaxillofac Radiol 2022; 51(7): 20220122. doi: 10.1259/dmfr.20220122
- 102. Bornstein MM, Horner K, Jacobs R. Use of cone beam computed tomography in implant dentistry: current concepts, indications and limitations for clinical practice and research. Periodontol 2000 2017; 73: 51–72. doi: 10.1111/prd.12161
- 103. von Arx T, Käch S, Suter VGA, Bornstein MM. Perforation of the maxillary sinus floor during apical surgery of maxillary molars: a retrospective analysis using cone beam computed tomography. Aust Endod J 2020; 46: 176–83. doi: 10.1111/aej.12413
- 104. Hung K, Hui L, Yeung AWK, Wu Y, Hsung R-C, Bornstein MM. Volumetric analysis of mucous retention cysts in the maxillary sinus: a retrospective study using cone-beam computed tomography. Imaging Sci Dent 2021; 51: 117–27. doi: 10.5624/isd.20200267
- 105. Ohashi Y, Ariji Y, Katsumata A, Fujita H, Nakayama M, Fukuda M, et al. Utilization of computer-aided detection system in diagnosing unilateral maxillary sinusitis on panoramic radiographs. Dentomaxillofac Radiol 2016; 45(3): 20150419. doi: 10.1259/dmfr.20150419
- 106. Hung KF, Hui LL, Leung YY. Patient-specific estimation of the bone graft volume needed for maxillary sinus floor elevation: a radiographic study using cone-beam computed tomography. Clin Oral Investig 2022; 26: 3875–84. doi: 10.1007/s00784-021-04354-0
- 107. Lo LJ, Yang CT, Ho CT, Liao CH, Lin HH. Automatic assessment of 3-dimensional facial soft tissue symmetry before and after orthognathic surgery using a machine learning model: a preliminary experience. Ann Plast Surg 2021; 86: S224–28. doi: 10.1097/SAP.0000000000002687
- 108. Jung SK, Kim TW. New approach for the diagnosis of extractions with neural network machine learning. Am J Orthod Dentofacial Orthop 2016; 149: 127–33. doi: 10.1016/j.ajodo.2015.07.030
- 109. Suhail Y, Upadhyay M, Chhibber A. Machine learning for the diagnosis of orthodontic extractions: a computational analysis using ensemble learning. Bioengineering (Basel) 2020; 7(2): E55. doi: 10.3390/bioengineering7020055
- 110. Shujaat S, Bornstein MM, Price JB, Jacobs R. Integration of imaging modalities in digital dental workflows - possibilities, limitations, and potential future developments. Dentomaxillofac Radiol 2021; 50(7): 20210268. doi: 10.1259/dmfr.20210268
- 111. Hung KF, Hui L, Yeung AWK, Jacobs R, Leung YY, Bornstein MM. An analysis of patient dose received during cone-beam computed tomography in relation to scan settings and imaging indications as seen in a dental institution in order to establish institutional diagnostic reference levels. Dentomaxillofac Radiol 2022; 51(5): 20200529. doi: 10.1259/dmfr.20200529
- 112. Hung K, Hui L, Yeung AWK, Scarfe WC, Bornstein MM. Image retake rates of cone beam computed tomography in a dental institution. Clin Oral Investig 2020; 24: 4501–10. doi: 10.1007/s00784-020-03315-3
- 113. Zhang Y, Yu H. Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Trans Med Imaging 2018; 37: 1370–81. doi: 10.1109/TMI.2018.2823083
- 114. Hegazy MAA, Cho MH, Cho MH, Lee SY. U-Net based metal segmentation on projection domain for metal artifact reduction in dental CT. Biomed Eng Lett 2019; 9: 375–85. doi: 10.1007/s13534-019-00110-2
- 115. Rischke R, Schneider L, Müller K, Samek W, Schwendicke F, Krois J. Federated learning in dentistry: chances and challenges. J Dent Res 2022; 101: 1269–73. doi: 10.1177/00220345221108953
- 116. Taleb A, Rohrer C, Bergner B, De Leon G, Rodrigues JA, Schwendicke F, et al. Self-supervised learning methods for label-efficient dental caries classification. Diagnostics (Basel) 2022; 12(5): 1237. doi: 10.3390/diagnostics12051237
- 117. Schwendicke F, Singh T, Lee J-H, Gaudin R, Chaurasia A, Wiegand T, et al. Artificial intelligence in dental research: checklist for authors, reviewers, readers. J Dent 2021; 107: 103610. doi: 10.1016/j.jdent.2021.103610
- 118. Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res 2020; 99: 769–74. doi: 10.1177/0022034520915714
- 119. Gomez Rossi J, Feldberg B, Krois J, Schwendicke F. Evaluation of the clinical, technical, and financial aspects of cost-effectiveness analysis of artificial intelligence in medicine: scoping review and framework of analysis. JMIR Med Inform 2022; 10: e33703. doi: 10.2196/33703
- 120. Müller A, Mertens SM, Göstemeyer G, Krois J, Schwendicke F. Barriers and enablers for artificial intelligence in dental diagnostics: a qualitative study. J Clin Med 2021; 10: 1612. doi: 10.3390/jcm10081612
- 121. Kosan E, Krois J, Wingenfeld K, Deuter CE, Gaudin R, Schwendicke F. Patients’ perspectives on artificial intelligence in dentistry: a controlled study. J Clin Med 2022; 11(8): 2143. doi: 10.3390/jcm11082143