Table 1. Applications of ML methods in dental, oral and craniofacial imaging.
| Fields | Subfields | Types of ML | Studies |
|---|---|---|---|
| Orthodontics | Landmark identification | Active shape model (ASM) | The algorithm captures variations in region shape and grey-level profile, based on segmentation of lateral cephalograms. High image quality and tedious manual work are required (Yue et al., 2006) |
| | | Customized open-source CNN deep learning algorithm (Keras & Google TensorFlow) | The study uses high-quality training data for supervised learning. With a large set of 1,792 lateral cephalograms, the algorithm achieves precision comparable to experienced examiners (Kunz et al., 2020) |
| | | You-Only-Look-Once version 3 (YOLOv3) | The study uses 1,028 cephalograms, covering both hard- and soft-tissue landmarks, as training data. Mean detection errors between AI and manual examination are not clinically significant, and reproducibility appears better than manual identification (Hwang et al., 2020; Park et al., 2019) |
| | | Hybrid: 2D active shape model (ASM) & 3D knowledge-based models | The study uses a holistic ASM search to obtain initial 2D cephalogram projections, then applies 3D approaches for landmark identification. With the 2D preprocessing, both the accuracy and the speed of landmark annotation improve (Montúfar, Romero & Scougall-Vilchis, 2018) |
| | | Entire-image-based CNN, patch-based CNN & variational autoencoder | Using only a small amount of CT data, the four-step hierarchical method reaches higher accuracy than previous deep learning research on 3D landmark annotation, with a mean point-to-point error of 3.63 mm (Yun et al., 2020) |
| | | VGG-Net | The study trained VGG-Net on a large set of diverse shadowed 2D images, each with different lighting and shooting angles. The trained VGG-Net can reconstruct the stereoscopic craniofacial morphological structure (Lee et al., 2019) |
| | Determination of cervical vertebrae stages (CVS) | k-nearest neighbors (k-NN), naive Bayes (NB), decision tree (Tree), artificial neural network (ANN), support vector machine (SVM), random forest (RF) & logistic regression (Log.Regr.) | The study suggests that the seven AI algorithms differ in determination precision: ANN reaches the highest stability, while Log.Regr. and k-NN show the lowest accuracy. Overall, ANN is recommended for CVS determination (Kök, Acilar & Izgi, 2019) |
| | Teeth-extraction decision | Two-layer neural network | The process consists of three steps: initial determination of teeth extraction, the choice of differential extraction, and determination of the specific teeth to be extracted. The network outputs a detailed teeth-extraction plan for orthodontic treatment (Jung & Kim, 2016) |
| Oral cancer | Detection of oral cancers | Texture-map-based branch-collaborative network | A deep CNN is used for cancer detection as well as localization; detection sensitivity and specificity reach 93.14% and 94.75%, respectively (Chan et al., 2019) |
| | | AlexNet, VGG-16, VGG-19, ResNet-50 & a proposed CNN | The study evaluates five CNNs for automated OSCC grading. The proposed CNN performs best, with an accuracy of 97.5% (Das, Hussain & Mahanta, 2020) |
| | | Regression-based deep CNN with 2 partitioned layers, GoogLeNet Inception v3 CNN architecture | The deep learning method is applied to hyperspectral images; as the training data grow from 100 to 500 samples, tissue classification accuracy (benign vs. cancerous) increases by 4.5% (Jeyaraj & Samuel Nadar, 2019) |
| | Cancer margin assessment | SVM, random forest, 6-layer 1D CNN | Fiber probes collect FLIm data that are classified with ML methods. Random forest performs best at dividing tissue regions (healthy, benign, and cancerous), showing potential for surgical tumor visualization (Marsden et al., 2020) |
| | Prognosis of oral cancer | 3D residual CNN (rCNN) | The study uses three types of input data: CT images, radiotherapy dose distributions, and oral cancer contours. The rCNN model extracts features from CT images to predict post-therapeutic xerostomia, with a best accuracy of 76% (Men et al., 2019) |
| | | Deep learning method, AlexNet architecture | The system is applied to contrast-enhanced CT to assess cervical lymph node metastasis in patients with oral cancer. Diagnostic results show little difference between manual and automated evaluation (Ariji et al., 2019) |
| | | Back propagation (BP), genetic algorithm–back propagation (GA-BP) & probabilistic genetic algorithm–back propagation (PGA-BP) neural networks | Three ML approaches are used to predict cancer patients' survival time. PGA-BP performs best, with an error in predicted average survival time of less than 2 years (Pan et al., 2020) |
| Dental endodontics | Detection of dental caries | CNN, the basic DeepLab network, DeepLabV3+ model | The dental plaque detection model was trained on natural photos using a CNN framework and transfer learning, with photos of deciduous teeth taken before and after applying a plaque-disclosing agent. Results show that the AI model achieves higher accuracy (You et al., 2020) |
| | Root morphology | CNN, the standard DIGITS algorithm | The study analyzed a total of 760 CBCT and panoramic radiographs of mandibular first molars. Root image blocks were segmented and fed to the deep learning system, which showed high accuracy in the differential diagnosis of distal root forms of the mandibular first molar (single vs. multiple) (Hiraiwa et al., 2019) |
| | Periapical lesions | Deep CNN | A deep CNN evaluated CBCT images of 153 periapical lesions and detected 142 of them; it can determine the location and volume of lesions and detect periapical pathosis from CBCT images (Orhan et al., 2020) |
| | | Deep learning approach based on a U-Net architecture | The study detected periapical lesions by segmenting CBCT images; the accuracy of DLS lesion detection reaches 0.93 (Setzer et al., 2020) |
| Periodontology | | CNN, the GoogLeNet Inception-v3 architecture | The study used panoramic and CBCT images to detect three types of odontogenic cystic lesions (OCLs) with a CNN and transfer learning. Results suggest that CBCT-based training outperforms panoramic-image-based training (Lee, Kim & Jeong, 2020) |
| | | Deep CNN architecture and a self-trained network | The study applied a deep CNN algorithm for the diagnosis and prediction of periodontally compromised teeth (PCT). Diagnostic accuracy for PCT is higher on premolars than on molars (Lee et al., 2018) |
| Orthognathic surgery | Facial attractiveness | CNN, VGG-16 architecture | The study reviewed photos of 146 orthognathic patients before and after treatment, assessed their facial attractiveness and apparent age with a CNN, and found that the appearance of most patients improved after treatment (Patcas et al., 2019a) |
| | | CNN, VGG-16 architecture | Full-face and lateral pictures of patients with left-sided cleft lip and of controls were assessed for facial attractiveness. Results show that the CNN can rate facial attractiveness with scores similar to manual evaluation (Patcas et al., 2019b) |
| | Others | CNN | Combined with AI, CBCT images can also be used to measure the bone mineral density of the implant area, evaluate the bone mass of the surgical area, and assist in constructing a static surgical guide plate system (Dahiya et al., 2018; Lin et al., 2020; Suttapreyasri, Suapear & Leepong, 2018) |
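
As an illustration only (not code from any of the cited studies), the seven-algorithm comparison described for CVS determination (Kök, Acilar & Izgi, 2019) can be sketched in scikit-learn. The synthetic features below are hypothetical stand-ins for cephalometric measurements; the class labels stand in for the six cervical vertebrae stages.

```python
# Minimal sketch, assuming synthetic data in place of real cephalometric
# measurements: train the seven classifier families from the table and
# compare their held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 300 "patients", 10 numeric features, 6 CVS classes.
X, y = make_classification(n_samples=300, n_features=10, n_informative=8,
                           n_classes=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
    "Tree": DecisionTreeClassifier(random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "Log.Regr.": LogisticRegression(max_iter=2000),
}

# Fit each model and score it on the held-out split.
scores = {name: clf.fit(X_train, y_train).score(X_test, y_test)
          for name, clf in classifiers.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>10}: {acc:.3f}")
```

On real data the ranking would of course depend on the measurements used; the point of the sketch is the uniform fit/score interface that makes this kind of multi-algorithm comparison straightforward.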