Abstract
Artificial intelligence has become an increasingly important part of our daily lives and is widely applied in medical science. One major application of artificial intelligence in medical science is medical imaging. With the advancement of technology and medical imaging facilities, many machine learning models, a major component of artificial intelligence, are applied in medical diagnosis and treatment. The popularity of convolutional neural networks in dental, oral and craniofacial imaging is growing as they are applied to an ever broader spectrum of scientific studies. Our manuscript reviews the fundamental principles and rationales behind machine learning, and summarizes its research progress and recent applications specifically in dental, oral and craniofacial imaging. It also reviews the problems that remain to be resolved and evaluates the prospects for the future development of this field of scientific study.
Keywords: Orthodontics; Oral cancer; Machine learning; Dental, oral and craniofacial imaging
Introduction
Artificial Intelligence (AI) has been one of the most popular realms of scientific research in the past few decades and plays a role in the daily lives of many people (Abhimanyu et al., 2020; Dolci, 2017; Wang et al., 2020; Yanhua, 2020). AI is the ability of computers to learn from input data; it aims to find an optimal and adaptive approach to solve problems flexibly without the help of human beings (Legg & Hutter, 2007; Visvikis et al., 2019). Traditional computer programs, such as expert systems, utilize complex mathematical models and formulas to achieve automation and to output a series of schemes based on given programming models. At present, AI research has entered a distinct field of study known as machine learning (ML).
ML utilizes computational methods and data (experience) for training: it analyzes the information that serves as input and processes the knowledge gained from accumulated experience. The foundation of ML rests on “experience gathering” or “active learning.” In practice, this means that computers learn from input data and improve their performance by learning from the mistakes they have made, without specific programming or the establishment of an explicit mathematical model (Erickson et al., 2017). In recent years, the rapid development of medical science has demanded more precise and effective clinical treatment. Manual work is increasingly disadvantaged by the need for high-density data processing, and mistakes are inevitable when medical or dental professionals lack experience (Gao et al., 2019). The combination of ML and medical science, based on automated workflows and the powerful computing capacity of computers, makes it possible to circumvent the constraints of manual work (Visvikis et al., 2019).
Computer-aided detection and diagnosis are two major domains of medical applications of ML (Gao et al., 2019). Image-processing algorithms are ubiquitous in medical applications, particularly for the detailed analysis of medical images. These methods first extract features from specific images and subsequently detect targets or categorize images into established classes to achieve image detection or classification. Among the various ML models available, convolutional neural networks (CNNs) have been applied the most frequently in medical imaging because of their outstanding performance in processing image features (Lee et al., 2017). They excel in areas including radiographic recognition, analysis, segmentation and interpretation (Kulkarni et al., 2020) (Fig. 1). CNNs are thus used for pathological detection, diagnosis and prognosis.
Figure 1. The popular branches of artificial intelligence used in medical imaging.
Many studies have established a considerable number of ML models for use in medical and dental fields. Instances include the detection and evaluation of pulmonary nodules (Hung et al., 2020a) and diffuse lung diseases (Kido, Hirano & Mabu, 2020), diagnosis in dermatology, for example of melanocytic lesions (Selim & Giovanni, 2019), and the segmentation of the prostate (Milletari, Navab & Ahmadi, 2016). Other examples include cancers such as lung and breast cancer, for which CNNs show potential in diagnosis, detection and even prognostication (Gao et al., 2018; Hosny et al., 2018; Liu et al., 2018; Xie et al., 2019).
The applications of ML algorithms in dentistry and oral surgery are at an early stage of development despite their potential as promising assistants for radiography (Schwendicke et al., 2019). In recent years, research featuring dental, oral and craniofacial imaging with deep-learning methods has increased drastically, along with work in maxillofacial radiology and head and neck oncology. In this review, we first briefly describe the working principle and rationale of ML in medical imaging. Second, we introduce the recent progress and applications of ML in dental radiography. Finally, we conclude with a summary of problems demanding prompt investigation and resolution, and we describe our expectations for the future research and development of AI in medical science.
Why this review is needed and who it is intended for
ML is widely used in medical fields, including medical imaging, and assists clinicians in the diagnosis and treatment of disease. The application of ML in dental, oral and craniofacial imaging has been widely studied, and ML has begun to be utilized in clinical treatment. These technologies have attracted the attention of dentists and are expected to become an important tool to assist treatment. The combination of ML methods and medical imaging is a current trend and is becoming increasingly necessary. In the years 2018–2020, around 30 articles were published describing the application of deep learning, a subset of ML, to various fields of dentistry. The applications of ML in medical imaging have been reviewed by several researchers. However, no review is available that summarizes in detail the applications of ML in dental, oral and craniofacial imaging, an area of much interest in dentistry as well as oral and maxillofacial surgery. Therefore, this review covers recent applications of ML methods in dental, oral and craniofacial imaging, points out problems that remain to be resolved and evaluates the prospects for the future development of this field. The review is intended for dentists, oral and maxillofacial surgeons, other specialists and medical workers who are interested in AI.
Survey methodology
We performed a systematic search of the literature in PubMed, Web of Science, IEEE, ACM and Springer from 1980 to December 2020 to identify relevant articles for this review. The main free-text and MeSH terms used for our search can be divided into categories, and a combination of any words from different categories was applied to the search. To achieve wider search results, we initially avoided restricting the search to specific types of data; instead, we utilized the Boolean operator “NOT” to exclude unwanted results such as studies using genomic data. The categories we used are as follows:
About ML: machine learning (ML); artificial intelligence (AI); neural network; convolutional neural network (CNN); support vector machine (SVM); regression; decision tree; random forest; deep learning; unsupervised learning; semi-supervised learning; fully convolutional network (FCN); U-net; ResNet; AlexNet; LeNet; DenseNet.
About imaging methods: radiography; cone beam computed tomography (CBCT); cephalometrics; X-ray; panoramic radiograph; lateral cephalogram; two-dimensional (2D); three-dimensional (3D); hyperspectral imaging; fluorescence imaging.
About oral cancer: oral cancer; head and neck cancer; head and neck squamous cell carcinoma (HNSCC); oral squamous cell carcinoma (OSCC); tongue cancer; oral tumor; detection; diagnosis; prognosis; survival rate.
About task and processing methods of ML: detection; prognosis; segmentation; object detection; classification.
About craniofacial imaging: orthodontics; landmark location; landmark annotation; superimposition; orthognathic surgery; dentofacial deformity.
About other dental diseases: dental caries; endodontic disease; periapical disease; periapical lesion; teeth extraction; periodontitis; root canal; dental pulp.
We only included original research articles in the scope of references, and single case reports were ruled out. We focused on the application of ML in dental, oral and craniofacial imaging, and studies outside this scope were excluded. Exclusions comprised the use of ML in other medical fields such as respiratory diseases, AI algorithms other than ML methods (such as expert systems), and other forms of data such as clinical and genomic indicators. Titles and abstracts were screened by four authors to determine whether articles met the research criteria. We found about 540 relevant articles written in English through the preliminary search that might be useful for this review. After reading the titles and abstracts of these articles, we finally included about 170 studies that contributed to this review.
The working principle and rationale of ML
ML is a branch of AI. It can be seen from its name that it refers to the ability of machines to learn. ML is a general term for a class of algorithms that allow the machine to automatically dig out hidden laws from data, build models and then use the models to make decisions and complete other tasks. The core of these algorithms usually lies in data. The explosive growth in the amount of information that we witness today therefore gives these algorithms the vast soil upon which productive seeds can land and grow (Dey, 2016).
In the field of ML, four main learning methods exist: supervised learning, unsupervised learning, semi-supervised learning (weakly supervised learning) and reinforcement learning. Consider a task to be learned by a machine. Suppose there is a goal function G: X→Y that can accurately predict the output Y corresponding to each input X. This function is the ultimate target of the learning algorithm. Although it is impossible to find G exactly, we can approximate it to a certain extent. The method is to use a series of samples (x1, y1), (x2, y2), …, (xi, yi), …, (xn, yn) generated by the goal function to estimate it, so that the estimate fits as many out-of-sample target pairs (xj, yj) as possible. Here xi represents the features we select to achieve the goal, or the features extracted by the algorithm, and yi represents the target of the task.
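As a minimal, illustrative sketch of the supervised setting described above (the linear goal function and the least-squares learner are toy choices invented for this example, not from the review), a model can be fitted to sample pairs generated by a hidden goal function G and then checked on an out-of-sample pair:

```python
import numpy as np

rng = np.random.default_rng(0)

def G(x):
    # The hidden "goal function" that generates the samples; the learner
    # never sees this definition, only the (x_i, y_i) pairs it produces.
    return 3.0 * x + 1.0

x_train = rng.uniform(0.0, 1.0, 50)
y_train = G(x_train)                      # samples (x_i, y_i) generated by G

# Learn an approximation of G by fitting y = w*x + b to the samples.
w, b = np.polyfit(x_train, y_train, deg=1)

# An out-of-sample pair (x_j, y_j): a good approximation fits it too.
x_j = 0.5
y_pred = w * x_j + b
```

Because the toy samples are noiseless, the fitted line recovers G almost exactly; with real, noisy data the estimate only approximates G, which is exactly the point of the formulation above.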
There are two essential parts to applying ML in actual scenarios: the first is data and features, and the second is models and algorithms. In a commonly used ML procedure, data go through preprocessing and manual labeling for training and testing; selected ML algorithms are utilized to learn from the data through model optimization and model evaluation, and the mature model is finalized (Fig. 2). Specifically, the process of medical image processing typically consists of four steps: image acquisition, image pre-processing, image analysis and pattern recognition. The first step is image acquisition. The images processed in medical image processing are mostly acquired from medical imaging devices, including X-ray, CBCT and magnetic resonance imaging (MRI) (Taghanaki et al., 2021). The second step is to pre-process the images. Artifacts and noise, caused by damage and contamination of the image during storage and transmission, degrade image quality. Therefore, a series of image enhancement operations is needed to recover or generate an image of the desired quality, including histogram equalization, image sharpening, thresholding transformations (Maini & Aggarwal, 2010) and various filtering operations. The third step consists of image analysis and feature engineering. This step extracts the required features using a priori information and sends the image analysis results to the next stage, where the ML model is trained for the corresponding tasks. Shape features can be extracted by boundary extraction operators, including the first-order differential Roberts, Prewitt and Sobel operators, the second-order differential Laplacian edge detection operator and the optimization-based Canny operator (Canny, 1986). Spatial relationship features can be extracted by modeling pixel points, using methods such as Markov Random Fields (Li, 1994).
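To make the boundary-extraction step concrete, here is a small sketch of the Sobel operator named above, applied to a toy image with a single vertical edge (the image values and the minimal filtering helper are invented for illustration):

```python
import numpy as np

def filter2d(img, kernel):
    """Valid-mode correlation: slide the kernel over the image and sum
    the elementwise products at each position (the usual way image
    filters such as Sobel are applied in practice)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Horizontal-gradient Sobel kernel: responds to vertical edges.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

img = np.zeros((5, 6))
img[:, 3:] = 1.0             # dark left half, bright right half: a vertical edge

gx = filter2d(img, sobel_x)  # strong responses appear along the edge only
```

The response map is zero in the flat regions and large where the intensity jumps, which is precisely the shape feature the operator is designed to extract.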
In image processing utilizing deep neural networks such as CNNs, feature engineering can be performed adaptively by multi-layer convolution. This approach reduces the amount of manual engineering, but the specific feature information learned is difficult to extract and interpret. The fourth step is to feed the feature information extracted in step 3 into the selected ML model for modeling. Fallback steps such as hyperparameter tuning and model adjustment are carried out based on feedback from the results.
Figure 2. The fundamental machine learning procedure to achieve a final model.
In medical image processing, most traditional ML algorithms do not feed the original image directly into the model but first go through a feature extraction process. This may involve, for example, the well-known SIFT feature extraction (Lowe, 2004), whose output features are sent to a model such as a Support Vector Machine (SVM) (Smola & Schölkopf, 2004). Models such as k-Nearest Neighbors (KNN) (Cover & Hart, 1967) are used for tasks such as image classification or segmentation. With the advent of CNNs, traditional feature engineering can be replaced by convolutional layers and performed more efficiently. End-to-end task solutions can thus be realized; for example, convolutional layers followed by fully connected (FC) layers can be applied to image classification. The CNN has become a popular model in the field of image processing and has been broadly used (Fig. 3).
Figure 3. The main machine learning algorithms used in medical image processing.
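The traditional features-then-classifier pipeline described above can be sketched minimally as follows. The "features" (mean and standard deviation of intensity) and the 1-nearest-neighbour classifier standing in for SVM/KNN are toy choices invented for this example, and the images are random arrays, not medical data:

```python
import numpy as np

def extract_features(img):
    # Toy handcrafted feature vector: mean intensity and its spread.
    return np.array([img.mean(), img.std()])

def knn_predict(train_feats, train_labels, feat, k=1):
    # Classify by majority vote among the k nearest training features.
    dists = np.linalg.norm(train_feats - feat, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(1)
bright = [rng.uniform(0.7, 1.0, (8, 8)) for _ in range(5)]   # class 1
dark   = [rng.uniform(0.0, 0.3, (8, 8)) for _ in range(5)]   # class 0

train_feats  = np.array([extract_features(im) for im in bright + dark])
train_labels = np.array([1] * 5 + [0] * 5)

query = rng.uniform(0.7, 1.0, (8, 8))                 # an unseen "bright" image
pred = knn_predict(train_feats, train_labels, extract_features(query))
```

In a CNN, the `extract_features` step is absorbed into the convolutional layers and learned from data rather than designed by hand.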
The CNN is an important type of neural network and is widely applied to image processing. The traditional architecture of FC layers, which existed before the advent of CNNs, has various disadvantages when used in image processing. Because every neuron is connected to every neuron in the adjacent layer, this architecture contains many weights (the parameters of a neural network), which dramatically increases the computational overhead. Compared to the FC architecture, CNNs are designed with some special characteristics.
The basis of the CNN is a series of convolution operations, which can be understood as a filter sliding over the image. A filter is a three-dimensional (channel, height, width) weight tensor that can extract features from different pixel units. For a two-dimensional digital image, an affine transformation of a region is realized when the filter is multiplied with the matrix of that region of the image (Dumoulin & Visin, 2016). A filter is a collection of multiple different convolution kernels, so multiplying a filter with an image area yields responses of different magnitudes from the different kernels. A strong response indicates that the feature represented by the kernel is present in that area. The size of the filter is related to the receptive field, which represents the perception area of neurons (Hubel & Wiesel, 1962). The effective receptive field follows a Gaussian distribution and occupies only a small part of the full theoretical receptive field (Luo et al., 2016).
One kind of kernel usually corresponds to one specific pattern, such as an edge or a direction. Numerous different features can be extracted if filters with numerous different kernels are applied to an image: wherever a region of the image matches a kernel's pattern, the output of the convolution operation responds strongly at those pixel units. In addition, convolution largely reduces the number of weights because the same filter is applied across all pixel units of the image; different image units thus share parameters, a property called parameter sharing (Ravanbakhsh, Schneider & Poczos, 2017). The use of kernels to process images successfully extracts spatial information, which improves the interpretability of the parameters.
Feature maps are obtained after the convolution operation. However, the feature map size is still large even if the image has been processed by filters. A pooling layer has been proposed to further downsize the feature maps (LeCun et al., 1998). The pooling process is similar to the convolution operation, but its purpose is different: the filters used for pooling are usually designed to generate the maximum value or average value. The two methods involved in the pooling layer are called max-pooling and mean-pooling. They are usually used to extract the texture information and to collect the background information of feature points in the region (Boureau et al., 2010). The size of the feature maps can be reduced by pooling subsampling. It is therefore helpful to avoid overfitting and keep features robust against changes like rotation. Pooling subsampling can also reduce the calculation workload. However, these two pooling methods will cause excessive information loss and destroy the spatial information in processed images. Therefore, in order to compensate for the flaws of both pooling methods, researchers have made many improvements to them and presented methods like fractional max-pooling and others (Graham, 2014; Zeiler & Fergus, 2013; He et al., 2015). One trend is pronounced despite these improved pooling methods: many advanced networks are using fewer pooling layers and replacing them with convolution layers (Springenberg et al., 2014).
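The two pooling methods described above can be sketched on a toy 4×4 feature map (the values are made up for illustration; real feature maps come from convolutional layers):

```python
import numpy as np

def pool2d(fmap, size=2, mode="max"):
    """Downsample a feature map by taking the max or mean of each
    non-overlapping size x size window."""
    h, w = fmap.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = fmap[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i, j] = patch.max() if mode == "max" else patch.mean()
    return out

fmap = np.array([[1., 2., 0., 1.],
                 [3., 4., 1., 0.],
                 [0., 0., 5., 6.],
                 [0., 0., 7., 8.]])

max_pooled  = pool2d(fmap, mode="max")    # keeps the strongest response per window
mean_pooled = pool2d(fmap, mode="mean")   # averages, preserving background level
```

Both outputs are 2×2, a quarter of the original size, which is exactly the subsampling effect that reduces computation and helps keep features robust, at the cost of the information loss discussed above.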
Problems remain after applying convolution and pooling operations to images. These two operations are essentially linear, and the superposition of linear transformations is itself another linear transformation. By applying these two operations alone, we can therefore only produce linear solutions and cannot handle linearly inseparable problems. A nonlinear transformation is needed to overcome this limitation.
The solution to this problem is activation functions, which are essentially nonlinear functions (Karlik & Olgac, 2011). According to the universal approximation theorem (Cybenko, 1989), a network can approximate any function when given sufficient linear output layers and nonlinear hidden layers. In the biological nervous system, a downstream neuron is activated only when the weighted sum of the signal intensities from the preceding dendrites reaches a specific threshold. Analogously, a neural network should discard some weak features, because it is unnecessary to store them. Well-known activation functions (Ramachandran, Zoph & Le, 2017) include the sigmoid function (one of the earliest), the softmax function and the currently popular ReLU function (Nair & Hinton, 2010). Convolution with pooling and nonlinear activation are the basic components of a CNN. This is why a deep learning network can be considered a multi-stage distillation of information, where the information passes through successive filters and is continuously purified.
In addition to these fundamental parts, FC layers are often included in the network, especially when it is intended for classification (Krizhevsky, Sutskever & Hinton, 2012; Simonyan & Zisserman, 2014). Some variants have been carefully designed to adapt to other tasks. The Fully Convolutional Network (Long, Shelhamer & Darrell, 2015) and U-net (Ronneberger, Fischer & Brox, 2015) have proven effective in semantic segmentation tasks, while YOLOv3 (Redmon & Farhadi, 2018) performs well in real-time object detection. Alongside these architectural changes, optimization methods such as Dropout (Hinton et al., 2012) and Adam (Kingma & Ba, 2014) have also been proposed.
Applications of ML in the dental, oral and craniofacial imaging field
Dental, oral and craniofacial imaging comprises several techniques, from two-dimensional to three-dimensional. The most common imaging methods are CBCT and panoramic radiographs. Recent years have witnessed a burgeoning increase in the application of ML in this field. Our systematic search revealed that craniofacial imaging has become the largest area of application of ML, within which automatic cephalometrics has become relatively mature. In oral imaging, oral cancers, which are life threatening, have caught the attention of many researchers. Considerable research therefore focuses on ML-based detection, diagnosis, prognosis and treatment design for these tumors, especially OSCC, the oral cancer with the highest morbidity.
ML in craniofacial imaging has entered a multidirectional, mature stage of research, with many studies reported. Meanwhile, oral cancers are life-threatening diseases that cannot be easily diagnosed, so auxiliary diagnosis is particularly meaningful. In addition, therapeutic schedules vary from person to person with different prognoses and disease courses. The prediction of outcomes based on ML may therefore provide valuable references that improve patients' quality of life. Other fields in oral medicine, such as endodontic and periodontal disease, are likewise studied using ML approaches, but mostly at the diagnostic level. In this section, we categorize the applications into three classes. First, we focus on the craniofacial imaging field, which includes orthodontics and orthognathic surgery. Second, we introduce the applications in oral tumors, covering diagnosis, prognosis and the design of therapeutic regimens. Other applications are grouped together.
Application of ML in craniofacial imaging
Landmark location in cephalometrics
Automated cephalometric analysis is helpful in reducing the workload of orthodontists while achieving higher accuracy and efficiency (Dot et al., 2020). In 1984, computer-aided automated skeletal landmarking was created (Cohen, Ip & Linney, 1984). Today, various approaches have been used for cephalogram measurement. In the field of landmark location, the methods that have been most widely adopted into use can be roughly categorized into four branches: knowledge-based approaches (Gupta et al., 2015), model-based approaches (Romaniuk et al., 2004; Shahidi et al., 2014; Vucinic, Trpovski & Scepan, 2010), learning-based approaches (Kunz et al., 2020) and hybrid approaches (the combination of the first three approaches mentioned here) (Montúfar, Romero & Scougall-Vilchis, 2018). The first two approaches are considered deductive methods or analogical learning and are used to analyze radiographic structure via a defined set of patterns and models. Therefore, variability plays an important role in the final output data (Gupta et al., 2015), and both approaches are sensitive to image quality (Leonardi et al., 2008). By contrast, the recent widely applied approach of ML refers to learning by induction. Once training data are given, the computer produces the source concept itself based on a large dataset, which means it acts like a perception procedure.
Yue et al. presented a modified active shape model (ASM) to assist landmark location in lateral radiographs. This model is based on principal component analysis and grey-pattern matching; the algorithm is built to capture variations in region shape and grey profile (Yue et al., 2006) and was trained with two hundred cephalograms on which 262 labeled feature points were set. The input pre-labeled images are marked by presetting 12 landmarks with good reliability. However, this method requires a large number of feature points to identify specific landmarks; the solution is to divide the whole lateral radiograph into smaller regions. The accuracy is highly dependent on the resolution and initial position of the test images, which requires laborious work and imposes limitations on image quality.
Kunz et al. (2020) implemented cephalometric X-ray analysis with a CNN, mainly for landmark location. After training with a total of 1,792 manually positioned lateral cephalometric radiographs, the customized CNN performed as well as expert analysis, the gold standard for this type of study. The numeric grey-scale value of each pixel serves as input data, and the output layer produces coordinate pairs of cephalometric landmarks after passing through hidden layers with subsampling functions. Algorithms such as the You-Only-Look-Once version 3 (YOLOv3) network and the Single Shot Multibox Detector (SSD) have been compared and analyzed in recent studies. YOLOv3 clearly outperformed SSD in time consumption and accuracy, and no difference in detection error between YOLOv3 and manual landmark identification was found (Hwang et al., 2020; Park et al., 2019).
Two-dimensional radiographs fail to capture overall craniofacial morphology and information in the horizontal plane (Lenza et al., 2010). Unlike traditional lateral cephalograms, CBCT imaging obtains details from the coronal, sagittal and horizontal planes, offers lower radiation doses and richer structural information, and is consequently popular for dental imaging (Kiljunen et al., 2015). Much AI-aided research on cephalometric analysis works at the three-dimensional level (Gupta et al., 2016; Lee et al., 2019; Montúfar, Romero & Scougall-Vilchis, 2018; O’Neil et al., 2019).
Three-dimensional automated analysis is the extension of planar cephalometrics. The main annotation methods can be classified into three categories: knowledge-based, atlas-based and learning-based methods (Dot et al., 2020). Gupta et al. (2015) created a knowledge-based algorithm in MATLAB that consists of preset mathematical entities. This approach works by finding a seed point, creating a volume of interest and extracting the contour of the valid skeletal structure; the corresponding landmarks on CBCT images are obtained by matching the extracted contours with the relevant mathematical entities. Furthermore, Montúfar, Romero & Scougall-Vilchis (2018) developed a hybrid method based on earlier work (Gupta et al., 2015) and a two-dimensional holistic ASM. The result suggests that an initial two-dimensional search algorithm can improve accuracy and save time in three-dimensional landmark annotation. Deep learning methods such as CNNs have also been applied (Kang et al., 2020; Lee et al., 2019; Yun et al., 2020). Some structures, such as the gonion and porion, are still located with imperfect accuracy. In addition to algorithmic insufficiency and manual errors, inexact anatomical positions and complex definitions are possible causes of this loss of accuracy (Ma et al., 2020; Montúfar, Romero & Scougall-Vilchis, 2018). However, only a few studies have been reported in three-dimensional imaging, which suggests that it is still at an initial stage. Some research has produced tenable results, but further improvements are required before concrete conclusions can be drawn.
One point worth noting is that spatial landmark annotation can result directly from two-dimensional image learning. Lee et al. (2019) introduced a novel approach using shadowed two-dimensional image-based ML. After training with two-dimensional marked image data with different lighting angles and various views, VGG-Net is able to reconstruct stereoscopic craniofacial morphological structures. A significant benefit of this approach is the reduced input size; however, large errors persist for some landmarks. This approach offers new ideas, but many subsequent trials are needed.
Other branches in orthodontics
In addition to cephalometrics, the personalized design of orthodontic treatment is vital.
Long-lasting therapeutic processes and the optimal initiation time and duration of treatment are the main considerations across malocclusion types. Therapeutic interventions can help patients overcome conditions of varying severity and counter problems caused by deficiencies in individual growth and development (Martonffy, 2015; Pinto et al., 2018; Pinto et al., 2017).
Orthodontists can better determine the initial time of intervention by identifying cervical vertebrae stages (CVS) from cephalometric radiographs (Chen et al., 2010; Uysal et al., 2006). Kök, Acilar & Izgi (2019) compared CVS classification using seven different AI algorithms, including artificial neural networks (ANN). These algorithms analyze the second to fourth cervical vertebrae and classify radiographs into six stages, which are subsequently used to inform decisions about treatment timing. In a comparison of actual with predicted CVS, the ANN achieved the highest stability among the AI algorithms. The ANN and SVM yielded the highest determination values for distinct stages in the evaluation of the area under the receiver operating characteristic curve (AUC). More specifically, the SVM achieved the highest accuracy in identifying CVS3 and CVS5, while the ANN performed best in determining the other stages. The SVM functions as a maximum-margin classifier, maximizing the separation between disparate classes (Ben-Hur & Weston, 2010). Other evaluation methods also suggest that the ANN displays both high relative accuracy and stability; the ANN is therefore preferable for CVS determination. Recently, a study also compared the effectiveness of an ANN with manual observation, and the ANN was found to be slightly inferior to human observers (Amasya et al., 2020). Other research reported considerably high accuracy (86.9%, with 13 linear marks per radiograph) for CVS evaluation with an ANN (Kök, Izgi & Acilar, 2020). The differences may be due to measurement methods.
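For readers unfamiliar with the AUC metric used in such comparisons, here is a short sketch of how it can be computed from predicted scores via the rank-sum (Mann-Whitney) formulation. The labels and scores below are made-up toy values, not data from the cited studies:

```python
import numpy as np

def auc(labels, scores):
    """AUC = probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

y_true  = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]   # one positive is outranked by one negative

a = auc(y_true, y_score)                    # 8 of 9 positive/negative pairs are correctly ranked
```

An AUC of 1.0 means the classifier ranks every positive above every negative, while 0.5 corresponds to chance; this is what allows classifiers such as the ANN and SVM above to be compared on a threshold-independent footing.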
AI-assisted methods have been used in diverse ways in orthodontics. Some studies discuss the possibility of using ML to determine the necessity of tooth extraction and the need for orthognathic-orthodontic surgery (Choi et al., 2019; Jung & Kim, 2016; Takada, Yagi & Horiguchi, 2009; Xie, Wang & Wang, 2010). Jung & Kim (2016) created a two-layer neural network to make the extraction or non-extraction decision. The procedure sets four classifiers and consists of three stages: determining whether to extract teeth, whether differential extraction between the maxilla and mandible is needed, and, eventually, whether more retraction is needed. The success rates of the tested stages were 93%, 89%, 84% (for more retraction with identical extraction) and 96% (for more retraction with differential extraction), suggesting relatively high diagnostic precision. Beyond the extraction decision, Jung's system provides a detailed plan of orthodontic treatment. In another study, an ANN output detailed extraction patterns as well as anchorage patterns based on the clinical and radiological data of orthodontic patients, providing useful treatment advice for orthodontists (Li et al., 2019) (Fig. 4).
Figure 4. An example of a machine learning method (artificial neural network) utilized in orthodontic treatment design.
(A) The data-processing workflow of the artificial neural network, which provides detailed guidance for extraction and anchorage patterns. (B) The main input data and the structure of the three-layer neural network for tooth extraction prediction. Reprinted from Li et al. (2019).
Poor image quality leads to unavoidable systematic errors, including noise and artifacts (particularly metal artifacts); these constraints can be reduced robustly and efficiently by deep learning methods (Huang et al., 2018; Jiang et al., 2018; Minnema et al., 2019; Zhang & Yu, 2018). Computer-assisted denoising and metal artifact reduction (MAR) improve structural visualization and the diagnostic accuracy of orthodontists, oncologists and doctors in other fields.
Orthodontists currently account for a relatively large portion of ML users in oral medicine. Two-dimensional landmark location based on ML using traditional lateral cephalograms is gradually showing promise, and multiple methods, especially various types of CNNs, produce good results.
The common use of CBCT means that cephalometrics is advancing to the three-dimensional stage, and three-dimensional cephalometrics has become a frontline research direction. The need to retain more information means that landmark identification requires more suitable operations and specialized knowledge (Cevidanes, Styner & Proffit, 2006), which may be one of the principal obstacles to automatic three-dimensional landmark annotation using AI methods. Because ML learns features directly from data, the lack of large training datasets may confine development in three-dimensional fields (Hwang et al., 2019). Another obstacle for AI-supported applications in medical science is the excessive time required (computer learning time and manual cropping time), as many studies use manually preprocessed images as training data. Other applications such as CVS determination and orthodontic-orthognathic operation design also demonstrate the superiority of neural networks. The practicability of image processing is therefore paving the way for automatic orthodontic treatment. With the help of ML methods, more mature clinical uses such as image superimposition, detailed surgical procedure design and process simulation of orthodontic treatments may eventually be achieved fully automatically.
Orthognathic surgery and other dentofacial deformities
In the field of orthognathic surgery, ML can enhance diagnostic accuracy from maxillofacial images (Sun et al., 2018; Zamora et al., 2012), assist in customizing the computer-aided design and manufacture (CAD/CAM) of orthodontic and surgical appliances and equipment (Cevidanes, Styner & Proffit, 2006) and support the comparison of results at finer intervals through image superposition (Bouletreau et al., 2019).
Choi et al. (2019) developed an ML model from 316 samples: twelve lateral cephalogram measurements and six additional indexes were used as input to calculate the success rate of surgical decision-making. Patcas et al. (2019a) have shown that ML can be used to evaluate the facial attractiveness and apparent age of orthognathic patients. Patcas et al. (2019b) evaluated the facial attractiveness of frontal and lateral images of ten patients with a left cleft lip and ten controls using a convolutional neural network, and concluded that the ML method can be a powerful tool for describing facial attractiveness. Facial symmetry is an important indicator of facial attractiveness. Lin et al. used an Xception model to score facial symmetry before and after orthognathic surgery, with two-dimensional contour maps converted from CBCT scans, which retain much three-dimensional information, as input (Lin et al., 2021). Jeong et al. studied frontal and lateral photographs of more than 800 subjects with dentofacial dysmorphosis/malocclusion using CNNs and found that CNNs can estimate, relatively accurately, the soft-tissue contours related to orthognathic surgery from these photographs alone (Jeong et al., 2020). However, as far as the current results are concerned, important adjustments still need to be made to the ML models. CBCT images combined with ML models can also be used to measure the bone mineral density of the implant area (Çolak, 2019; Dahiya et al., 2018), evaluate the bone mass of the surgical area (Suttapreyasri, Suapear & Leepong, 2018) and assist in the construction of a static guide plate system (Lin et al., 2020).
AI's passage from two-dimensional to three-dimensional imagery, along with increased diagnostic precision, makes treatment effects visible and eases communication between doctors and patients. However, the value and ability of ML in simulating the outcomes of orthognathic surgery have not been fully proved. Bone displacement makes soft-tissue changes difficult to predict: the displacement of soft tissue in response to the underlying bone can vary greatly with tissue mass, and many other factors are involved. An algorithm is still unlikely to predict the final aesthetic result of surgery accurately.
Application in oral cancers
Oral cavity cancer is a high-risk category of life-threatening tumors and accounts for the major proportion of head and neck cancers (Rivera, 2015). In addition to functional symptoms such as tooth loss, head and neck pain and potentially fatal consequences, this craniofacial disease can also disfigure patients in the absence of early diagnosis or a favorable prognosis. Classical oral cancer detection and diagnosis are based on radiological analysis, clinical monitoring indicators and histopathological assessments (Mahmood et al., 2020). Prevention and early-stage diagnosis are of great significance for the survival rate and treatment management of cancer patients; however, a definitive tumor diagnosis is usually made late (Chakraborty, Natarajan & Mukherjee, 2019; Rivera, 2015). In recent years, conventional and modern ML methods, especially neural networks and SVM, have demonstrated the capability to process oral cavity tumor-related image data. This includes oral cancer detection and tissue cell classification at the diagnostic stage (Al-Ma'aitah & AlZubi, 2018; Aubreville et al., 2017; Das, Hussain & Mahanta, 2020; Jeyaraj & Samuel Nadar, 2019; Shamim et al., 2019), tumor margin assessment and tumor subtype classification during clinical cancer treatment (Fei et al., 2017; Marsden et al., 2020; van Rooij et al., 2019) and assessment of complications after treatment (Ariji et al., 2019; Dong et al., 2018; Men et al., 2019). Major tumors such as oral squamous cell carcinoma (OSCC) can be detected and evaluated with high accuracy using timesaving algorithms (Aubreville et al., 2017; Das, Hussain & Mahanta, 2020).
Detection of oral cancers
Semantic image segmentation and feature extraction are two fundamental processes of image classification by ML methods, and they form the basis of oral cancer detection with this type of approach (Haider et al., 2020; Mahmood et al., 2020). Hyperspectral imaging (HSI) is a currently applicable technique for tumor detection: it yields three-dimensional data and provides a potentially noninvasive approach to assessing pathological tissue by revealing the spectral features of different tissues (Akbari et al., 2011; Lu et al., 2014). Jeyaraj & Samuel Nadar (2019) established a deep CNN for the classification and evaluation of hyperspectral cancerous images. The researchers first extract image features using a weight-based technique, and a two-layer partitioned, regression-based deep CNN classifier is then employed for feature classification. With an expert classification scheme, the discrimination accuracy between malignant and benign tumors reaches 91.4%, and that between malignant tumors and precancerous lesions reaches 91.56%.
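The premise of HSI-based classification, that tissue classes differ in their spectral signatures, can be illustrated in miniature. The sketch below substitutes a nearest-centroid rule for the study's deep CNN and uses entirely synthetic spectra (the signatures, band count and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented spectral signatures: 12 bands, classes peak at different wavelengths.
bands = np.linspace(0.0, 1.0, 12)
benign_sig = np.exp(-(bands - 0.3) ** 2 / 0.02)
malignant_sig = np.exp(-(bands - 0.7) ** 2 / 0.02)

def sample(sig, n):
    """Simulate n noisy pixel spectra around a class signature."""
    return sig + rng.normal(scale=0.15, size=(n, sig.size))

train = np.vstack([sample(benign_sig, 100), sample(malignant_sig, 100)])
labels = np.array([0] * 100 + [1] * 100)

# Nearest-centroid rule: assign each pixel to the closest class-mean spectrum.
centroids = np.vstack([train[labels == c].mean(axis=0) for c in (0, 1)])

def classify(pixels):
    dist = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return dist.argmin(axis=1)

test_pixels = np.vstack([sample(benign_sig, 50), sample(malignant_sig, 50)])
test_labels = np.array([0] * 50 + [1] * 50)
acc = float(np.mean(classify(test_pixels) == test_labels))
print(acc)
```

A deep CNN replaces the hand-built centroids with learned spectral-spatial features, but the pixel-wise classification framing is the same.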
Chan et al. (2019) designed a two-branch deep CNN method for oral cancer detection and localization. Original autofluorescence images are processed to produce texture maps, which are then used by the ML model for automatic cancer localization. The Gabor filter, used for image feature extraction, helps the method achieve a detection sensitivity and specificity of 93.14% and 94.75%, respectively.
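A Gabor filter of the kind used for texture extraction here is a Gaussian-windowed sinusoid tuned to one orientation and wavelength. A minimal numpy sketch with illustrative parameters (not those of the study):

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, wavelength=4.0, sigma=2.0):
    """Real part of a Gabor filter: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotate to filter orientation
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# A vertical-stripe texture whose period matches the filter's wavelength.
stripes = np.cos(2.0 * np.pi * np.arange(9) / 4.0)[None, :].repeat(9, axis=0)

# Filter response = correlation of the patch with the kernel.
resp_0 = float(np.sum(stripes * gabor_kernel(theta=0.0)))
resp_90 = float(np.sum(stripes * gabor_kernel(theta=np.pi / 2)))
print(resp_0, resp_90)
```

The aligned orientation yields a strong response while the orthogonal one yields almost none; a bank of such filters at several orientations and scales produces the texture maps fed to the classifier.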
Oral leukoplakia is the most common type of precancerous lesion of oral cancer. Jurczyszyn and colleagues used intraoral photographs to predict oral leukoplakia, which may support the early prevention of oral cancers (Jurczyszyn, Gedrange & Kozakiewicz, 2020). However, oral cancer comprises a large variety of distinct malignancies; distinguishing tumor-related tissue in patients' imaging data is therefore fundamental and essential, but it still lacks precision for specific oral cancers.
Some other studies have focused on the diagnosis of single oral cancers (Aubreville et al., 2017; Das, Hussain & Mahanta, 2020; Rahman et al., 2020; Shamim et al., 2019). Squamous cell carcinoma is responsible for approximately 90% of total oral cancers and has become the sixth most common cancer worldwide (D’Souza & Addepalli, 2018; Kar et al., 2020).
Biopsy is the current gold standard for OSCC diagnosis (Swinson et al., 2006), but the histopathological method is time-consuming and costly. Das, Hussain & Mahanta (2020) therefore used four types of deep CNN models via a transfer learning approach, together with one proposed CNN, to achieve automated histological grading of whole-slide images at lesion locations.
Transfer learning reduces the amount of training data required, as the model is fine-tuned from one previously trained on a large dataset. Because biopsy is invasive and painful for patients, some researchers are looking for noninvasive imaging alternatives. Confocal laser endomicroscopy (CLE) imaging has proved capable and reliable in the detection of head and neck squamous cell carcinoma (HNSCC) (Nathan et al., 2014), and Aubreville and coworkers (2017) used it for OSCC microstructure assessment.
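The mechanics of transfer learning, reusing a frozen pretrained feature extractor and fitting only a small new head on the target data, can be sketched in miniature. Here a fixed random projection stands in for the pretrained backbone, and the two-blob dataset is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen feature extractor that is never
# updated.  In real transfer learning this would be a CNN trained on a large
# dataset; here it is just a fixed random projection followed by tanh.
W_frozen = rng.normal(size=(16, 32))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen layer: no gradient updates

# Small "target" dataset: two Gaussian blobs in a 16-D input space (toy data).
x0 = rng.normal(loc=-1.0, size=(50, 16))
x1 = rng.normal(loc=+1.0, size=(50, 16))
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# Fine-tune only the new head: logistic regression by gradient descent.
F = features(X)
w, b = np.zeros(32), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * float(np.mean(p - y))

acc = float(np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y))
print(acc)
```

Here `acc` is training accuracy on the toy data; the point is that only `w` and `b` are updated while `W_frozen` stays fixed, which is why far less target data is needed than for training from scratch.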
The researchers collected images of normal tissue from the alveolar ridge, inner labium and hard palate, as well as cancer-related tissue images, as samples. A binary classification (normal or cancerous) with an accuracy of 88.3% was obtained using a CNN. Such a real-time identification instrument is useful for the automated detection of cancerous lesions.
Carcinogenic factors also need to be taken into consideration. Infection with human papillomavirus (HPV) is a high-risk factor for OSCC (Marur et al., 2010), and ML-based cancer detection has also succeeded in evaluating molecular markers such as HPV. Some studies use contrast computed tomography (CT) to capture features of HPV-related head and neck squamous cell carcinomas (Huang et al., 2019; Zhu et al., 2019). MRI data are also utilized for OSCC assessment: specific MRI texture features, chosen by dimension reduction, enable automatic OSCC histological grading without biopsy (Ren et al., 2020). The grading accuracy averages nearly 85% across three types of classifiers. Assessing histological grade via MRI is valuable because it is a noninvasive clinical examination.
Trials on automated screening of other oral cavity cancers have been implemented clinically (Shamim et al., 2019) or on experimental animals (Lu et al., 2018). Six deep CNNs have been applied to distinguish tongue lesions before tongue cancer fully takes hold (Shamim et al., 2019). The VGG19 model demonstrates the best capability in classifying benign and pre-cancerous lesions using resized original photographic images as input, while the ResNet50 model shows its potential in discriminating five lesion subtypes. Researchers have further combined the computational outcomes with physician decisions, increasing the binary classification accuracy to 100%. Meanwhile, some benign tumors of the oral cavity, including ameloblastoma, keratocystic odontogenic tumor, pleomorphic adenoma and Warthin tumor, are also detected automatically by ML methods (Poedjiastoeti & Suebnukarn, 2018; Al Ajmi et al., 2018); these tumors also make up a large proportion of oral tumors.
The majority of these studies concern cancerous image classification, and most have achieved desirable detection accuracies compared with the gold standard; nevertheless, ML has yet to reach the precision needed for definitive tumor diagnosis. Automated detection methods based on a series of diverse imaging approaches improve the clinical cancer workflow and assist the decisions of oncologists. However, the lack of training datasets and limited data quality restrict the scale of research, and more image data in a standard format are required for future studies.
Clinical treatment of oral cancers
Morphological analysis, including tumor margin assessment and tumor site evaluation, is of much concern during the treatment of oral cancers. Tumor size and site are connected with prognosis (Chakrabarti et al., 2010; Namin et al., 2020), for example via patient survival rate and surgical decisions for tumor resection (Upile et al., 2007). Much research has focused on automated structure segmentation of oral cancer-related images (Brouwer de Koning et al., 2018; Fei et al., 2017; Grillone et al., 2017; Marsden et al., 2020; van Rooij et al., 2019). In a study by Fei et al. (2017), hyperspectral images of surgically resected cancerous tissue samples were acquired to train and test the ML model. The principle of this method is again tissue classification, through which the margins of oral tumors are profiled clearly with an average accuracy of 90%. The research also compares the impact of image type on the precision of margin assessment, with HSI outperforming fluorescence images.
The use of real-time oral cavity screening probes with ML methods has also been reported for surgical procedures (Marsden et al., 2020). Fluorescence lifetime imaging (FLIm) is a noninvasive technique capable of assessing molecular composition (Cheng et al., 2013). Marsden and coworkers (2020) utilized and compared three ML models for both in vivo and ex vivo tumor margin assessment. They used fiber probes to acquire data from oral tissue specimens, and processed the tissue regions with different classifiers: SVM, Random Forests and CNN. "Cancer," "Healthy" and "Dysplasia" labels were annotated on the scanned images after a visualization process implemented in Python. The outcomes demonstrate the potential of FLIm to predict pre-cancerous tissue and suggest that the Random Forest technique is superior to the other two popular image-processing methods.
Prognosis of oral cancers
Post-chemoradiotherapy complications of cancers are severe and individualized. In addition to common side effects such as myelosuppression, osteoradionecrosis and hair loss, specific complications after chemoradiotherapy include xerostomia, hearing loss, inflammation of skin and mucosa and cancer recurrence (Haider et al., 2020). Men and coworkers (2019) collected CT scan data from patients undergoing radiation therapy and developed a prognostic system using a three-dimensional residual CNN (3DrCNN) to predict the occurrence of post-therapy xerostomia. The implementation of the 3DrCNN method follows structural segmentation, which outlines the margins of the parotid and submandibular glands on CT scans. Radiation dose distributions, profiles of the salivary glands and CT scans are prepared as optional data, and at least two of them are selected as input. The model without data rejection reaches the best performance, with accuracy, sensitivity and specificity all around 0.76; the worst performance occurs when the radiation dose label is lacking. Further studies could improve the accuracy of structure identification and add input data types (e.g., treatment cycle) to improve the precision of xerostomia prediction.
The five-year survival rate and survival time of cancer patients are significant indicators for cancer prognosis, as well as references for therapeutic outcomes. In a recent study, a total of 59 patients with oral tongue cancer were examined (Pan et al., 2020). All were treated with radiotherapy, and their CT images were analyzed computationally for individual survival prediction.
The researchers used t-Distributed Stochastic Neighbor Embedding (t-SNE) to screen out effective features from numerous irrelevant ones. A Probabilistic Genetic Algorithm-Back Propagation (PGA-BP) ML method was then used, and the predictions were close to actual survival: 30.5 ± 21.3 months for actual survival time versus 31.6 ± 15.8 months for predicted survival time. To improve accuracy further, other indicators including tumor grade and stage should be taken into account: the year of diagnosis, the age at diagnosis and cancer size and site all bear on patients' lifetimes (Hung et al., 2020b).
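The screen-then-predict pipeline can be illustrated with simpler stand-ins: correlation-based screening in place of t-SNE, and an ordinary least-squares fit in place of the PGA-BP network. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 200 patients, 20 candidate imaging features, of which
# only the first three actually carry signal about survival time (months).
X = rng.normal(size=(200, 20))
survival = 30 + 10 * X[:, 0] + 8 * X[:, 1] - 6 * X[:, 2] \
    + rng.normal(scale=1.0, size=200)

# Screening step (the study used t-SNE; plain correlation screening here):
# keep the features most correlated with the target.
corr = np.array([abs(np.corrcoef(X[:, j], survival)[0, 1]) for j in range(20)])
keep = np.argsort(corr)[-3:]

# Prediction step (the study used a PGA-BP network; least squares here).
A = np.column_stack([X[:, keep], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, survival, rcond=None)
err = float(np.mean(np.abs(A @ coef - survival)))
print(sorted(keep.tolist()), round(err, 2))
```

Screening recovers the three informative features, and the fit on the reduced feature set tracks the simulated survival times closely; the same separation of concerns (reduce dimensionality, then predict) underlies the study's design.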
Oral malignancies are closely related to cervical lymphatic metastasis, which indicates poor cancer prognosis (Okada et al., 2003; Spiro, 1985) and is especially indicative of a sharp decrease in the 5-year survival rate (Exarchos, Goletsis & Fotiadis, 2012; Taghavi & Yazdi, 2015; Walk & Weed, 2011). The detection of cervical lymph node metastasis has therefore become a focus of attention after clinical treatment, and automated detection with ML methods has recently been conducted on distinct image types (Ariji et al., 2019; Dong et al., 2018; Keek et al., 2020). The nodal status of oral cavity SCC and oropharyngeal SCC has been assessed using contrast-enhanced CT scans; bagging of Naïve Bayes achieved the best accuracy of 92.9%, with an area under the receiver operating characteristic curve of 0.857 (Romeo et al., 2020). Additionally, more than 10,000 contrast-enhanced CT images of cervical lymph nodes have been used to train a CNN, and the analytical results suggest that manual and automated assessment reach similar precision.
In another study (Dong et al., 2018), the assessment sensitivity based on a non-radiating thermal system was higher than that based on contrast-enhanced CT scans, although the two studies used distinct ML models.
Which image type is preferable for assessment therefore needs further comparison. MRI has also been considered as a potential predictive modality for HNSCC prognosis (Yuan et al., 2019), and computer-assisted methods can be explored and applied here in the future. One study utilizes both MRI image features and clinical information such as smoking history and age to automatically predict the presence of HPV in patients suffering from oropharyngeal squamous cell carcinoma (Bos et al., 2021). Relations between clinical characteristics and HPV status are also analyzed, as the presence of HPV is closely related to cancer prognosis. Others have estimated HPV status and p53 mutation in HNSCC patients via MRI (Dang et al., 2015; Ravanelli et al., 2018).
Automated oral cancer detection and assessment are available for diverse image data, most of which are CT scans and hyperspectral images (HSI).
CNN models achieve high-quality processing of cancer-related images and are mainly used for image classification, especially as binary classifiers. Other ML methods such as SVM and Random Forests also display high sensitivity, accuracy and specificity for specific types of image data, but their relative superiority needs further exploration and more evidence given the limited research literature. Meanwhile, because much recent research trains AI on limited imaging data, the importance of data sharing and dataset construction is highlighted (Haider et al., 2020).
The combination of AI and molecular imaging has attracted attention with the rapid progress in ML-based imaging. Rather than conventional automated imaging, which works by image classification, applications at the level of molecular imaging emphasize biomarker exploration (Choi, 2018). Biomarkers of oral cancers may assist in unambiguous tumor detection and individualized treatment. At the genetic level, complex genomic data can be extracted effectively by ML methods, which points to a distinct way of detecting and evaluating oral cancer (Chang et al., 2013; Li et al., 2017). SVM shows promising capabilities in genomic studies. Future research directions will include AI-related, combined imaging-genomic studies to enhance the analytical effectiveness and accuracy for oral cancers.
Other fields in dental, oral and craniofacial imaging
ML is widely used in the field of stomatology. It has important clinical value, including but not restricted to the detection of dental caries, periapical disease and periodontal disease, facial recognition, the evaluation of facial attractiveness and other uses.
Hung and coworkers (2020a) established a non-clinical caries detection model and realized caries detection by obtaining and analyzing two-dimensional images of extracted teeth. You et al. (2020) studied plaque detection on deciduous teeth based on deep learning. Schwendicke et al. (2020a) applied deep CNNs to detect caries in near-infrared light transillumination (NILT) images, and they also emphasized that applying AI for caries detection is less costly and more effective (Schwendicke et al., 2020b). Zhang et al. developed ConvNet, a CNN-based model, to identify dental caries in oral images captured with consumer cameras, and reported good image-wise sensitivity (Zhang et al., 2020). To date, most studies have addressed the feasibility and accuracy of AI-assisted caries detection, whereas few have addressed AI-assisted prediction of caries occurrence. The automatic detection of superficial dental caries also remains an unsolved problem.
ML is increasingly being used by dentists and researchers as a novel method for diagnosing dental diseases, especially endodontic diseases. Tooth condition is a significant factor for the health of the stomatognathic system. The classification and segmentation of teeth and root canals by ML methods have achieved promising results at both the two-dimensional and three-dimensional levels (Dumont et al., 2020; Zhang et al., 2018). Dumont et al. (2020) combined the crown form acquired by an intraoral scanner with the root form obtained from CBCT to realize clinical labeling of spatial crown and root morphology; the image segmentation was conducted using both U-Net and ResNet. Once tooth morphology has been ascertained, the follow-up clinical diagnosis or therapeutic schedule can be more complete and precise.
Different studies (Fukuda et al., 2019; Hiraiwa et al., 2019; Orhan et al., 2020) have utilized CNN systems to detect the root morphology, longitudinal root fractures and periapical lesions of molars. Lee, Kim & Jeong (2020) used CNNs on dental panoramic radiographs and CBCT images to detect and diagnose three types of odontogenic cystic lesions: odontogenic keratocyst, odontogenic cyst and periapical cyst. They also developed a computer-aided detection system using a deep CNN algorithm (Lee et al., 2018), which proved useful in diagnosing periodontally compromised teeth (PCT) and provided a predictable evaluation of periodontal injury. Early temporomandibular joint osteoarthritis can also be detected automatically from radiologic, clinical and molecular-level information (Bianchi et al., 2020). Most of these studies are at an initial stage, with insufficient clinical application.
We conclude that research on image detection based on CNNs has been intense in the past few years. One future development trend of AI in oral imaging research will be using CNNs to combine image detection with clinical treatment and further advance smarter decision-making in medical care.
ML models have also proved useful in other clinical application domains, including the diagnosis of maxillary sinusitis (Ohashi et al., 2016), the classification of third molar developmental phase and tooth type (Tuzoff et al., 2019), the classification of periapical slices, the identification of dental plaque and gingival inflammation at the root canal opening, automated dental age estimation and the diagnosis of multiple dental diseases (Benyo, 2012; Hwang et al., 2019). The major applications of ML in dental, oral and craniofacial imaging are summarized in Table 1.
Table 1. Applications of ML methods in dental, oral and craniofacial imaging.
| Fields | Subfields | Types of ML | Studies |
| --- | --- | --- | --- |
| Orthodontics | Landmark identification | Active shape model (ASM) | The algorithm captures variations of region shape and grey profile based on segmentation of lateral cephalograms. High image quality and tedious work are needed (Yue et al., 2006) |
| | | Customized open-source CNN deep learning algorithm (Keras & Google TensorFlow) | The study uses high-quality training data for supervised learning. With a large set of 1,792 lateral cephalograms, the algorithm demonstrates precision comparable with experienced examiners (Kunz et al., 2020) |
| | | You-Only-Look-Once version 3 (YOLOv3) | The study uses 1,028 cephalograms as training data, covering both hard- and soft-tissue landmarks. The mean detection errors between AI and manual examination are not clinically significant, and reproducibility appears better than manual identification (Hwang et al., 2020; Park et al., 2019) |
| | | Hybrid: 2D active shape model (ASM) & 3D knowledge-based models | The study uses a holistic ASM search to obtain initial 2D cephalogram projections, then applies 3D approaches for landmark identification. With the preprocessing of 2D algorithms, the accuracy and speed of landmark annotation are heightened (Montúfar, Romero & Scougall-Vilchis, 2018) |
| | | Entire image-based CNN, patch-based CNN & variational autoencoder | With only a small amount of CT data, the hierarchical method (four steps) reaches higher accuracy than previous studies on 3D landmark annotation with deep learning. The mean point-to-point error is 3.63 mm (Yun et al., 2020) |
| | | VGG-Net | VGG-Net was trained on a large number of diversely shadowed 2D images, each with different lighting and shooting angles, and is able to reconstruct stereoscopic craniofacial morphological structure (Lee et al., 2019) |
| | Determination of cervical vertebrae stages (CVS) | k-nearest neighbors (k-NN), Naive Bayes (NB), decision tree, artificial neural network (ANN), support vector machine (SVM), random forest (RF) and logistic regression (Log.Regr.) | The seven AI algorithms differ in determination precision. ANN reaches the highest stability; the lowest accuracy occurs with Log.Regr. and k-NN. Overall, ANN is recommended for CVS determination (Kök, Acilar & Izgi, 2019) |
| | Teeth-extraction decision | Two-layer neural network | The process consists of three steps: initial determination of teeth extraction, the choice of differential extraction, and determination of specific teeth to be extracted. The neural network gives a detailed plan of teeth extraction in orthodontic treatment (Jung & Kim, 2016) |
| Oral cancer | Detection of oral cancers | Texture-map based branch-collaborative network | Deep CNN is used for cancer detection as well as localization; detection sensitivity and specificity reach 93.14% and 94.75%, respectively (Chan et al., 2019) |
| | | AlexNet, VGG-16, VGG-19, ResNet-50 & a proposed CNN | The study utilizes five CNNs for automated OSCC grading. The proposed CNN performs best, with an accuracy of 97.5% (Das, Hussain & Mahanta, 2020) |
| | | Regression-based deep CNN with two partitioned layers; GoogLeNet Inception V3 CNN architecture | The deep learning method is applied to hyperspectral images; as the amount of training data grows from 100 to 500, the tissue classification accuracy (benign or cancerous) increases by 4.5% (Jeyaraj & Samuel Nadar, 2019) |
| | Cancer margin assessment | SVM, Random Forests, 6-layer 1-D CNN | Fiber probes are used to collect FLIm data for the ML methods. Random Forest demonstrates the best performance in tissue region division (healthy, benign and cancerous tissue), showing potential for tumor surgical visualization (Marsden et al., 2020) |
| | Prognosis of oral cancer | 3-D residual CNN (rCNN) | The study uses three types of input data: CT images, radiotherapy dose distribution and contours of oral cancers. The rCNN model extracts features from CT images to predict post-therapeutic xerostomia, with a best accuracy of 76% (Men et al., 2019) |
| | | Deep learning method, AlexNet architecture | The system is applied to contrast-enhanced CT to assess cervical lymph node metastasis in patients with oral cancers. Diagnostic results show little difference between manual and automated evaluation (Ariji et al., 2019) |
| | | Back propagation (BP), Genetic Algorithm-Back Propagation (GA-BP) and Probabilistic Genetic Algorithm-Back Propagation (PGA-BP) neural networks | Three ML approaches are used to predict cancer patients' survival time. PGA-BP performs best, with an error in average survival time of less than 2 years (Pan et al., 2020) |
| Dental endodontics | Detection of dental caries | CNN, the basic DeepLab network, DeepLabV3+ model | The dental plaque detection model was trained on natural photos with a CNN framework and transfer learning, using photos of deciduous teeth before and after use of a dental plaque disclosing agent. Results show that the AI model is more accurate (You et al., 2020) |
| | Root morphology | CNN, the standard DIGITS algorithm | The study analyzed a total of 760 CBCT and panoramic radiographs of mandibular first molars. Root image blocks were segmented and fed to the deep learning system, which showed high accuracy in the differential diagnosis of distal root forms (single or multiple) of the mandibular first molar (Hiraiwa et al., 2019) |
| | Periapical lesions | Deep CNN | CBCT images of 153 periapical lesions were evaluated by deep CNN, which detected 142 of them; the system can determine the location and volume of lesions and detect periapical pathosis on CBCT images (Orhan et al., 2020) |
| | | Deep learning approach based on a U-Net architecture | The study detected periapical lesions by segmenting CBCT images. The accuracy of lesion detection reaches 0.93 (Setzer et al., 2020) |
| | Periodontology | CNN, the GoogLeNet Inception-v3 architecture | The study used panoramic and CBCT images to detect three types of odontogenic cystic lesions (OCLs) based on CNN and transfer learning. CBCT-based training performs better than panoramic image-based training (Lee, Kim & Jeong, 2020) |
| | | Deep CNN architecture and a self-trained network | The study used a deep CNN algorithm for the diagnosis and prediction of periodontally compromised teeth (PCT). Diagnostic accuracy was higher for premolars than for molars (Lee et al., 2018) |
| Orthognathic surgery | Facial attractiveness | CNN, VGG-16 architecture | The study assessed photos of 146 orthognathic patients before and after treatment, evaluating facial attractiveness and apparent age with CNN, and found that the appearance of most patients improved after treatment (Patcas et al., 2019a) |
| | | CNN, VGG-16 architecture | Full-face and lateral pictures of patients with a left cleft lip and controls were assessed for facial attractiveness. CNN scores were similar to manual evaluation (Patcas et al., 2019b) |
| | Others | CNN | CBCT images combined with AI can also be used to measure the bone mineral density of the implant area, evaluate the bone mass of the surgical area and assist in the construction of a static guide plate system (Dahiya et al., 2018; Lin et al., 2020; Suttapreyasri, Suapear & Leepong, 2018) |
Concluding remarks
In recent years, ML has gradually penetrated all fields of dentistry, most of which relate to the teeth and, in some cases, to the gums and dental tissues, the dental arch, osteoporosis and others. In our review, we have mostly focused on applications in craniofacial imaging and oral cancer. ML, especially CNNs, not only helps doctors screen for diseases quickly but also assists in diagnosis and treatment. However, several issues must be addressed to sustain the development of deep-learning research in oral and maxillofacial radiology. First, the data needed for this type of research are internal to individual institutions, which makes objective comparison difficult and creates a heavy demand for professional labeling. Second, the uneven quality of medical records, some good and some bad, makes it difficult to conduct studies with big data. The small size of available datasets is a further obstacle: recent ML studies typically use fewer than 1,000 radiographs per group, and for three-dimensional data such as CBCT, sometimes fewer than 100 scans are available for training. Third, the large amount of computation and long training times place high demands on computer hardware (Schwendicke et al., 2019). Fourth and finally, deep learning is not fully autonomous: it must rely on a considerable number of existing data samples in order to analyze and predict new data, yet for disease analysis we often cannot control the variability of these data (Ruellas et al., 2016).
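One widely used mitigation for the small datasets described above is data augmentation: generating label-preserving variants of each training image. The sketch below is illustrative only (function names are ours, images are plain nested lists rather than any framework's tensors), and flips must be applied with care to anatomically asymmetric structures:

```python
# Minimal data-augmentation sketch for small radiograph datasets.
# Images are nested lists of pixel intensities; each transform yields
# a new, label-preserving training sample.

def flip_horizontal(img):
    """Mirror each row left-to-right."""
    return [list(reversed(row)) for row in img]

def rotate_180(img):
    """Rotate the image by 180 degrees (reverse rows, then each row)."""
    return [list(reversed(row)) for row in reversed(img)]

def augment(dataset):
    """Expand a list of (image, label) pairs with flipped/rotated copies."""
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((flip_horizontal(img), label))
        out.append((rotate_180(img), label))
    return out

sample = ([[1, 2], [3, 4]], "lesion")
augmented = augment([sample])
print(len(augmented))  # 3 samples obtained from 1 original
```

In practice, a deep-learning framework's augmentation pipeline would add randomized crops, rotations and intensity shifts, but the principle is the same: each original scan contributes several plausible training examples.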
The development of ML has faced major challenges. Nevertheless, significant progress has been made in current trials such as two-dimensional landmark annotation and the detection of oral diseases, with some applications matching the accuracy of the current gold standard or achieving even better results. The image-processing capability of ML is of particular significance in oral medicine. However, many applications are still at an incipient stage and far from clinical use. In view of data deficiencies, efforts can be made to adapt learning models: for example, decomposing a complex, monolithic model into several modular components, modifying existing ML models to handle heterogeneous but accessible data, or combining manual operation with computer assistance. The production of more high-quality medical imaging datasets will also greatly advance this line of research, and novel semi-supervised and weakly supervised algorithms will help alleviate the data problem. In addition to model design, operating cost also needs to be taken into consideration. Overall, ML has a bright future as a way to increase clinical efficiency and diagnostic accuracy.
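The semi-supervised direction mentioned above can be illustrated with self-training (pseudo-labeling): a model trained on the small labeled set assigns labels to unlabeled images it is confident about, and is then refit on the enlarged set. The toy sketch below uses a nearest-centroid classifier as a stand-in for a real imaging model; all names, features and the margin threshold are illustrative, not from any cited study:

```python
# Self-training (pseudo-labeling) sketch. Feature vectors stand in for
# extracted image features; nearest-centroid stands in for a real model.

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit(labeled):
    """Compute one centroid per class from (features, label) pairs."""
    by_class = {}
    for feats, label in labeled:
        by_class.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_class.items()}

def predict(model, feats):
    return min(model, key=lambda label: distance(model[label], feats))

def self_train(labeled, unlabeled, margin=1.0):
    """Pseudo-label unlabeled points whose two nearest centroids differ
    by more than `margin`, then refit on the enlarged set."""
    model = fit(labeled)
    confident = []
    for feats in unlabeled:
        dists = sorted(distance(c, feats) for c in model.values())
        if len(dists) < 2 or dists[1] - dists[0] > margin:
            confident.append((feats, predict(model, feats)))
    return fit(labeled + confident)

labeled = [([0.0, 0.0], "healthy"), ([4.0, 4.0], "lesion")]
model = self_train(labeled, [[0.2, 0.1], [3.9, 4.1]])
print(predict(model, [0.3, 0.2]))  # "healthy"
```

The design choice worth noting is the confidence margin: pseudo-labels are only trusted when the model's decision is unambiguous, which limits the risk of reinforcing its own mistakes.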
Medical imaging involves three main tasks: image classification, image positioning and detection, and image segmentation. Good progress has been made in all three, especially in medical image registration and segmentation. However, practical application remains limited, and more innovative work is needed: both the theory and the application of machine learning in medical imaging require improvement, and new theoretical breakthroughs and creative applications are needed to push the field forward. For example, applying three-dimensional convolution to medical imaging can capture the volumetric features of organs and tissues that traditional two-dimensional convolution cannot extract (Zhu et al., 2018). Three-dimensional reconstruction technology can help visualize the internal structure of the human body, enabling surgical navigation and early-stage auxiliary diagnosis (Dong et al., 2018). Multi-modal information extraction can fuse information obtained from different devices to generate more accurate results, and time-series models could be used to predict metastasis trajectories for oral tumors and other diseases.
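The point about three-dimensional convolution can be made concrete: a 2D kernel sees one slice at a time, while a 3D kernel aggregates intensities across neighboring slices, capturing volumetric structure in data such as CBCT. Below is a minimal valid-mode 3D convolution in plain Python, for illustration only (real systems use optimized framework operators, not nested loops):

```python
# Minimal valid-mode 3D convolution over a volume stored as nested
# lists indexed [depth][height][width]. Each output voxel sums the
# elementwise product of the kernel with a kd x kh x kw neighborhood.

def conv3d(volume, kernel):
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    d, h, w = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for z in range(d - kd + 1):
        plane = []
        for y in range(h - kh + 1):
            row = []
            for x in range(w - kw + 1):
                acc = 0.0
                for dz in range(kd):
                    for dy in range(kh):
                        for dx in range(kw):
                            acc += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# A 3x3x3 volume of ones convolved with a 3x3x3 kernel of ones collapses
# to a single voxel summing all 27 neighbors -- information from every
# slice, which no single 2D convolution over one slice could combine.
ones = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
print(conv3d(ones, ones))  # [[[27.0]]]
```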
ML is likely to be used ever more widely in medical imaging as current trends in medical development gather pace and we witness the inevitable convergence of the medical and computer sciences. In the future, many of the existing barriers to medical image-assisted diagnosis will be overcome through the application of ML as its accuracy improves. Further effort should ensure smarter and more effective imaging analysis in the fields of dental, oral and craniofacial research.
Funding Statement
This study was supported by the Science and Technology Department of Sichuan Province (20ZDYF2839) and the 2018 Sichuan University-Luzhou City Co-operation Program (CDLZ2018-14). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Additional Information and Declarations
Competing Interests
The authors declare that they have no competing interests.
Author Contributions
Ruiyang Ren conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
Haozhe Luo performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft.
Chongying Su performed the experiments, prepared figures and/or tables, and approved the final draft.
Yang Yao conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
Wen Liao conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
Data Availability
The following information was supplied regarding data availability:
This is a literature review.
References
- Abhimanyu et al. (2020).Abhimanyu T, Ambika PM, Bishnupriya P, Diana CSR, Isha G, Babita M. Application of artificial intelligence in pharmaceutical and biomedical studies. Current Pharmaceutical Design. 2020;26(29):3569–3578. doi: 10.2174/1381612826666200515131245. [DOI] [PubMed] [Google Scholar]
- Akbari et al. (2011).Akbari H, Uto K, Kosugi Y, Kojima K, Tanaka N. Cancer detection using infrared hyperspectral imaging. Cancer Science. 2011;102(4):852–857. doi: 10.1111/j.1349-7006.2011.01849.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Al-Ma’aitah & AlZubi (2018).Al-Ma’aitah M, AlZubi AA. Enhanced computational model for gravitational search optimized echo state neural networks based oral cancer detection. Journal of Medical Systems. 2018;42(11):205. doi: 10.1007/s10916-018-1052-0. [DOI] [PubMed] [Google Scholar]
- Al Ajmi et al. (2018).Al Ajmi E, Forghani B, Reinhold C, Bayat M, Forghani R. Spectral multi-energy CT texture analysis with machine learning for tissue classification: an investigation using classification of benign parotid tumours as a testing paradigm. European Radiology. 2018;28(6):2604–2611. doi: 10.1007/s00330-017-5214-0. [DOI] [PubMed] [Google Scholar]
- Amasya et al. (2020).Amasya H, Cesur E, Yıldırım D, Orhan K. Validation of cervical vertebral maturation stages: Artificial intelligence vs human observer visual analysis. American Journal of Orthodontics and Dentofacial Orthopedics. 2020;158(6):e173–e179. doi: 10.1016/j.ajodo.2020.08.014. [DOI] [PubMed] [Google Scholar]
- Ariji et al. (2019).Ariji Y, Fukuda M, Kise Y, Nozawa M, Yanashita Y, Fujita H, Katsumata A, Ariji E. Contrast-enhanced computed tomography image assessment of cervical lymph node metastasis in patients with oral cancer by using a deep learning system of artificial intelligence. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology. 2019;127(5):458–463. doi: 10.1016/j.oooo.2018.10.002. [DOI] [PubMed] [Google Scholar]
- Aubreville et al. (2017).Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, Bohr C, Neumann H, Stelzle F, Maier A. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Scientific Reports. 2017;7(1):11979. doi: 10.1038/s41598-017-12320-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ben-Hur & Weston (2010).Ben-Hur A, Weston J. A user’s guide to support vector machines. Data Mining Techniques for the Life Sciences; 2010. pp. 223–239. [DOI] [PubMed] [Google Scholar]
- Benyo (2012).Benyo B. Identification of dental root canals and their medial line from micro-CT and cone-beam CT records. Biomedical Engineering Online. 2012;11(1):81. doi: 10.1186/1475-925X-11-81. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bianchi et al. (2020).Bianchi J, de Oliveira Ruellas AC, Gonçalves JR, Paniagua B, Prieto JC, Styner M, Li T, Zhu H, Sugai J, Giannobile W, Benavides E, Soki F, Yatabe M, Ashman L, Walker D, Soroushmehr R, Najarian K, Cevidanes LHS. Osteoarthritis of the Temporomandibular Joint can be diagnosed earlier using biomarkers and machine learning. Scientific Reports. 2020;10(1):8012. doi: 10.1038/s41598-020-64942-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bos et al. (2021).Bos P, van den Brekel MWM, Gouw ZAR, Al-Mamgani A, Waktola S, Aerts HJWL, Beets-Tan RGH, Castelijns JA, Jasperse B. Clinical variables and magnetic resonance imaging-based radiomics predict human papillomavirus status of oropharyngeal cancer. Head Neck. 2021;43(2):485–495. doi: 10.1002/hed.26505. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bouletreau et al. (2019).Bouletreau P, Makaremi M, Ibrahim B, Louvrier A, Sigaux N. Artificial intelligence: applications in orthognathic surgery. Journal of Stomatology, Oral and Maxillofacial Surgery. 2019;120(4):347–354. doi: 10.1016/j.jormas.2019.06.001. [DOI] [PubMed] [Google Scholar]
- Boureau et al. (2010).Boureau Y-L, Bach F, LeCun Y, Ponce J. Learning mid-level features for recognition. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Piscataway: IEEE; 2010. pp. 2559–2566. [Google Scholar]
- Brouwer de Koning et al. (2018).Brouwer de Koning SG, Baltussen EJM, Karakullukcu MB, Dashtbozorg B, Smit LA, Dirven R, Hendriks BHW, Sterenborg HJCM, Ruers TJM. Toward complete oral cavity cancer resection using a handheld diffuse reflectance spectroscopy probe. Journal of Biomedical Optics. 2018;23(12):1–8. doi: 10.1117/1.JBO.23.12.121611. [DOI] [PubMed] [Google Scholar]
- Canny (1986).Canny J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986;PAMI-8(6):679–698. doi: 10.1109/TPAMI.1986.4767851. [DOI] [PubMed] [Google Scholar]
- Cevidanes, Styner & Proffit (2006).Cevidanes LH, Styner MA, Proffit WR. Image analysis and superimposition of 3-dimensional cone-beam computed tomography models. American Journal of Orthodontics and Dentofacial Orthopedics. 2006;129(5):611–618. doi: 10.1016/j.ajodo.2005.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chakrabarti et al. (2010).Chakrabarti B, Ghorai S, Basu B, Ghosh S, Gupta P, Ghosh K, Ghosh P. Late nodal metastasis in early-stage node-negative oral cavity cancers after successful sole interstitial brachytherapy: an institutional experience of 42 cases in India. Brachytherapy. 2010;9(3):254–259. doi: 10.1016/j.brachy.2009.11.001. [DOI] [PubMed] [Google Scholar]
- Chakraborty, Natarajan & Mukherjee (2019).Chakraborty D, Natarajan C, Mukherjee A. Advances in oral cancer detection. Advances in Clinical Chemistry. 2019;91(5):181–200. doi: 10.1016/bs.acc.2019.03.006. [DOI] [PubMed] [Google Scholar]
- Chan et al. (2019).Chan CH, Huang TT, Chen CY, Lee CC, Chan MY, Chung PC. Texture-map-based branch-collaborative network for oral cancer detection. IEEE Transactions on Biomedical Circuits and Systems. 2019;13(4):766–780. doi: 10.1109/TBCAS.2019.2918244. [DOI] [PubMed] [Google Scholar]
- Chang et al. (2013).Chang S-W, Abdul-Kareem S, Merican AF, Zain RB. Oral cancer prognosis based on clinicopathologic and genomic markers using a hybrid of feature selection and machine learning methods. BMC Bioinformatics. 2013;14(1):170. doi: 10.1186/1471-2105-14-170. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen et al. (2010).Chen L, Liu J, Xu T, Lin J. Longitudinal study of relative growth rates of the maxilla and the mandible according to quantitative cervical vertebral maturation. American Journal of Orthodontics and Dentofacial Orthopedics. 2010;137(6):736.e731–736.e738. doi: 10.1016/j.ajodo.2009.12.022. [DOI] [PubMed] [Google Scholar]
- Cheng et al. (2013).Cheng SN, Rico-Jimenez JJ, Jabbour J, Malik B, Maitland KC, Wright J, Cheng YSL, Jo JA. Flexible endoscope for continuous in vivo multispectral fluorescence lifetime imaging. Optics Letters. 2013;38(9):1515–1517. doi: 10.1364/OL.38.001515. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Choi (2018).Choi H. Deep learning in nuclear medicine and molecular imaging: current perspectives and future directions. Nuclear Medicine and Molecular Imaging. 2018;52(2):109–118. doi: 10.1007/s13139-017-0504-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Choi et al. (2019).Choi HI, Jung SK, Baek SH, Lim WH, Ahn SJ, Yang IH, Kim TW. Artificial intelligent model with neural network machine learning for the diagnosis of orthognathic surgery. Journal of Craniofacial Surgery. 2019;30(7):1986–1989. doi: 10.1097/SCS.0000000000005650. [DOI] [PubMed] [Google Scholar]
- Cohen, Ip & Linney (1984).Cohen AM, Ip HH-S, Linney AD. A preliminary study of computer recognition and identification of skeletal landmarks as a new method of cephalometric analysis. Journal of Orthodontics. 1984;11(3):143–154. doi: 10.1179/bjo.11.3.143. [DOI] [PubMed] [Google Scholar]
- Çolak (2019).Çolak M. An evaluation of bone mineral density using cone beam computed tomography in patients with ectodermal dysplasia: a retrospective study at a single center in Turkey. Medical Science Monitor: International Medical Journal of Experimental and Clinical Research. 2019;25:3503–3509. doi: 10.12659/MSM.914405. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cover & Hart (1967).Cover T, Hart P. Nearest neighbor pattern classification. IEEE Transactions on Information Theory. 1967;13(1):21–27. doi: 10.1109/TIT.1967.1053964. [DOI] [Google Scholar]
- Cybenko (1989).Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems. 1989;2(4):303–314. doi: 10.1007/BF02551274. [DOI] [Google Scholar]
- D’Souza & Addepalli (2018).D’Souza S, Addepalli V. Preventive measures in oral cancer: an overview. Biomedicine & Pharmacotherapy. 2018;107:72–80. doi: 10.1016/j.biopha.2018.07.114. [DOI] [PubMed] [Google Scholar]
- Dahiya et al. (2018).Dahiya K, Kumar N, Bajaj P, Sharma A, Sikka R, Dahiya S. Qualitative assessment of reliability of cone-beam computed tomography in evaluating bone density at posterior mandibular implant site. The Journal of Contemporary Dental Practice. 2018;19(4):426–430. doi: 10.5005/jp-journals-10024-2278. [DOI] [PubMed] [Google Scholar]
- Dang et al. (2015).Dang M, Lysack JT, Wu T, Matthews TW, Chandarana SP, Brockton NT, Bose P, Bansal G, Cheng H, Mitchell JR, Dort JC. MRI texture analysis predicts p53 status in head and neck squamous cell carcinoma. American Journal of Neuroradiology. 2015;36(1):166–170. doi: 10.3174/ajnr.A4110. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Das, Hussain & Mahanta (2020).Das N, Hussain E, Mahanta LB. Automated classification of cells into multiple classes in epithelial tissue of oral squamous cell carcinoma using transfer learning and convolutional neural network. Neural Networks. 2020;128(2):47–60. doi: 10.1016/j.neunet.2020.05.003. [DOI] [PubMed] [Google Scholar]
- Dey (2016).Dey A. Machine learning algorithms: a review. International Journal of Computer Science and Information Technologies. 2016;7:1174–1179. [Google Scholar]
- Dolci (2017).Dolci R. IoT solutions for precision farming and food manufacturing: artificial intelligence applications in digital food. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC); Piscataway: IEEE; 2017. pp. 384–385. [Google Scholar]
- Dong et al. (2018).Dong F, Tao C, Wu J, Su Y, Wang Y, Wang Y, Guo C, Lyu P. Detection of cervical lymph node metastasis from oral cavity cancer using a non-radiating, noninvasive digital infrared thermal imaging system. Scientific Reports. 2018;8(1):7219. doi: 10.1038/s41598-018-24195-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dot et al. (2020).Dot G, Rafflenbeul F, Arbotto M, Gajny L, Rouch P, Schouman T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. International Journal of Oral and Maxillofacial Surgery. 2020;49(10):1367–1378. doi: 10.1016/j.ijom.2020.02.015. [DOI] [PubMed] [Google Scholar]
- Dumont et al. (2020).Dumont M, Prieto JC, Brosset S, Cevidanes L, Bianchi J, Ruellas A, Gurgel M, Massaro C, Castillo AAD, Ioshida M, Yatabe M, Benavides E, Rios H, Soki F, Neiva G, Aristizabal JF, Rey D, Alvarez MA, Najarian K, Gryak J, Styner M, Fillion-Robin J-C, Paniagua B, Soroushmehr R. Patient specific classification of dental root canal and crown shape. Shape in Medical Imaging: International Workshop, ShapeMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings; 2020. pp. 145–153. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dumoulin & Visin (2016).Dumoulin V, Visin F. A guide to convolution arithmetic for deep learning. 2016. https://arxiv.org/abs/1603.07285
- Erickson et al. (2017).Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. RadioGraphics. 2017;37(2):505–515. doi: 10.1148/rg.2017160130. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Exarchos, Goletsis & Fotiadis (2012).Exarchos KP, Goletsis Y, Fotiadis DI. Multiparametric decision support system for the prediction of oral cancer reoccurrence. IEEE Transactions on Information Technology in Biomedicine. 2012;16(6):1127–1134. doi: 10.1109/TITB.2011.2165076. [DOI] [PubMed] [Google Scholar]
- Fei et al. (2017).Fei B, Lu G, Wang X, Zhang H, Little JV, Patel MR, Griffith CC, El-Diery MW, Chen AY. Label-free reflectance hyperspectral imaging for tumor margin assessment: a pilot study on surgical specimens of cancer patients. Journal of Biomedical Optics. 2017;22(08):1–7. doi: 10.1117/1.JBO.22.8.086009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fukuda et al. (2019).Fukuda M, Inamoto K, Shibata N, Ariji Y, Yanashita Y, Kutsuna S, Nakata K, Katsumata A, Fujita H, Ariji E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiology. 2019;36(4):337–343. doi: 10.1007/s11282-019-00409-x. [DOI] [PubMed] [Google Scholar]
- Gao et al. (2018).Gao F, Wu T, Li J, Zheng B, Ruan L, Shang D, Patel B. SD-CNN: a shallow-deep CNN for improved breast cancer diagnosis. Computerized Medical Imaging and Graphics. 2018;70(3):53–62. doi: 10.1016/j.compmedimag.2018.09.004. [DOI] [PubMed] [Google Scholar]
- Gao et al. (2019).Gao J, Jiang Q, Zhou B, Chen D. Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: an overview. Mathematical Biosciences and Engineering. 2019;16(6):6536–6561. doi: 10.3934/mbe.2019326. [DOI] [PubMed] [Google Scholar]
- Graham (2014).Graham B. Fractional max-pooling. arXiv. 2014. https://arxiv.org/abs/1412.6071
- Grillone et al. (2017).Grillone GA, Wang Z, Krisciunas GP, Tsai AC, Kannabiran VR, Pistey RW, Zhao Q, Rodriguez-Diaz E, A’Amar OM, Bigio IJ. The color of cancer: margin guidance for oral cancer resection using elastic scattering spectroscopy. The Laryngoscope. 2017;127(7 suppl 1):S1–S9. doi: 10.1002/lary.26763. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gupta et al. (2015).Gupta A, Kharbanda OP, Sardana V, Balachandran R, Sardana HK. A knowledge-based algorithm for automatic detection of cephalometric landmarks on CBCT images. International Journal of Computer Assisted Radiology Surgery. 2015;10(11):1737–1752. doi: 10.1007/s11548-015-1173-6. [DOI] [PubMed] [Google Scholar]
- Gupta et al. (2016).Gupta A, Kharbanda OP, Sardana V, Balachandran R, Sardana HK. Accuracy of 3D cephalometric measurements based on an automatic knowledge-based landmark detection algorithm. International Journal of Computer Assisted Radiology Surgery. 2016;11(7):1297–1309. doi: 10.1007/s11548-015-1334-7. [DOI] [PubMed] [Google Scholar]
- Haider et al. (2020).Haider SP, Burtness B, Yarbrough WG, Payabvash S. Applications of radiomics in precision diagnosis, prognostication and treatment planning of head and neck squamous cell carcinomas. Cancers Head Neck. 2020;5(1):6. doi: 10.1186/s41199-020-00053-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- He et al. (2015).He K, Zhang X, Ren S, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2015;37(9):1904–1916. doi: 10.1109/TPAMI.2015.2389824. [DOI] [PubMed] [Google Scholar]
- Hinton et al. (2012).Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. 2012. https://arxiv.org/pdf/1207.0580.pdf [Google Scholar]
- Hiraiwa et al. (2019).Hiraiwa T, Ariji Y, Fukuda M, Kise Y, Nakata K, Katsumata A, Fujita H, Ariji E. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofacial Radiology. 2019;48(3):20180218. doi: 10.1259/dmfr.20180218. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hosny et al. (2018).Hosny A, Parmar C, Coroller TP, Grossmann P, Zeleznik R, Kumar A, Bussink J, Gillies RJ, Mak RH, Aerts HJWL. Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study. PLOS Medicine. 2018;15(11):e1002711. doi: 10.1371/journal.pmed.1002711. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huang et al. (2019).Huang C, Cintra M, Brennan K, Zhou M, Colevas AD, Fischbein N, Zhu S, Gevaert O. Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes. EBioMedicine. 2019;45:70–80. doi: 10.1016/j.ebiom.2019.06.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huang et al. (2018).Huang X, Wang J, Tang F, Zhong T, Zhang Y. Metal artifact reduction on cervical CT images by deep residual learning. Biomedical Engineering Online. 2018;17(1):175. doi: 10.1186/s12938-018-0609-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hubel & Wiesel (1962).Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology. 1962;160(1):106–154. doi: 10.1113/jphysiol.1962.sp006837. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hung et al. (2020a).Hung K, Montalvao C, Tanaka R, Kawai T, Bornstein MM. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: a systematic review. Dentomaxillofacial Radiology. 2020a;49(1):20190107. doi: 10.1259/dmfr.20190107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hung et al. (2020b).Hung M, Park J, Hon ES, Bounsanga J, Moazzami S, Ruiz-Negrón B, Wang D. Artificial intelligence in dentistry: harnessing big data to predict oral cancer survival. World Journal of Clinical Oncology. 2020b;11(11):918–934. doi: 10.5306/wjco.v11.i11.918. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hwang et al. (2020).Hwang HW, Park JH, Moon JH, Yu Y, Kim H, Her SB, Srinivasan G, Aljanabi MNA, Donatelli RE, Lee SJ. Automated identification of cephalometric landmarks: part 2-might it be better than human? Angle Orthodontist. 2020;90(1):69–76. doi: 10.2319/022019-129.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hwang et al. (2019).Hwang JJ, Jung YH, Cho BH, Heo MS. An overview of deep learning in the field of dentistry. Imaging Science in Dentistry. 2019;49(1):1–7. doi: 10.5624/isd.2019.49.1.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jeong et al. (2020).Jeong S, Yun J, Yeom H, Lim H, Lee J, BJSr Kim. Deep learning based discrimination of soft tissue profiles requiring orthognathic surgery by facial photographs. Scientific Reports. 2020;10(1):16235. doi: 10.1038/s41598-020-73287-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jeyaraj & Samuel Nadar (2019).Jeyaraj PR, Samuel Nadar ER. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. Journal of Cancer Research and Clinical Oncology. 2019;145(4):829–837. doi: 10.1007/s00432-018-02834-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jiang et al. (2018).Jiang D, Dou W, Vosters L, Xu X, Sun Y, Tan T. Denoising of 3D magnetic resonance images with multi-channel residual learning of convolutional neural network. Japanese Journal of Radiology. 2018;36(9):566–574. doi: 10.1007/s11604-018-0758-8. [DOI] [PubMed] [Google Scholar]
- Jung & Kim (2016).Jung SK, Kim TW. New approach for the diagnosis of extractions with neural network machine learning. American Journal of Orthodontics Dentofacial Orthopedics. 2016;149(1):127–133. doi: 10.1016/j.ajodo.2015.07.030. [DOI] [PubMed] [Google Scholar]
- Jurczyszyn, Gedrange & Kozakiewicz (2020).Jurczyszyn K, Gedrange T, Kozakiewicz M. Theoretical background to automated diagnosing of oral leukoplakia: a preliminary report. Journal of Healthcare Engineering. 2020;2020(4):8831161. doi: 10.1155/2020/8831161. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kang et al. (2020).Kang SH, Jeon K, Kim H-J, Seo JK, Lee S-H. Automatic three-dimensional cephalometric annotation system using three-dimensional convolutional neural networks: a developmental trial. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 2020;8(2):210–218. doi: 10.1080/21681163.2019.1674696. [DOI] [Google Scholar]
- Kar et al. (2020).Kar A, Wreesmann VB, Shwetha V, Thakur S, Rao VUS, Arakeri G, Brennan PA. Improvement of oral cancer screening quality and reach: the promise of artificial intelligence. Journal of Oral Pathology & Medicine. 2020;49(8):727–730. doi: 10.1111/jop.13013. [DOI] [PubMed] [Google Scholar]
- Karlik & Olgac (2011).Karlik B, Olgac AV. Performance analysis of various activation functions in generalized MLP architectures of neural networks. International Journal of Artificial Intelligence and Expert Systems. 2011;1:111–122. [Google Scholar]
- Keek et al. (2020).Keek S, Sanduleanu S, Wesseling F, de Roest R, van den Brekel M, van der Heijden M, Vens C, Giuseppina C, Licitra L, Scheckenbach K, Vergeer M, Leemans CR, Brakenhoff RH, Nauta I, Cavalieri S, Woodruff HC, Poli T, Leijenaar R, Hoebers F, Lambin P. Computed tomography-derived radiomic signature of head and neck squamous cell carcinoma (peri)tumoral tissue for the prediction of locoregional recurrence and distant metastasis after concurrent chemo-radiotherapy. PLOS ONE. 2020;15(5):e0232639. doi: 10.1371/journal.pone.0232639. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kido, Hirano & Mabu (2020).Kido S, Hirano Y, Mabu S. Deep learning for pulmonary image analysis: classification, detection, and segmentation. Advances in Experimental Medicine and Biology. 2020;1213:47–58. doi: 10.1007/978-3-030-33128-3_3. [DOI] [PubMed] [Google Scholar]
- Kiljunen et al. (2015).Kiljunen T, Kaasalainen T, Suomalainen A, Kortesniemi M. Dental cone beam CT: a review. Physica Medica. 2015;31(8):844–860. doi: 10.1016/j.ejmp.2015.09.004. [DOI] [PubMed] [Google Scholar]
- Kingma & Ba (2014).Kingma DP, Ba J. Adam: a method for stochastic optimization. 2014. https://arxiv.org/abs/1412.6980
- Kök, Acilar & Izgi (2019).Kök H, Acilar AM, Izgi MS. Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Progress in Orthodontics. 2019;20(1):41. doi: 10.1186/s40510-019-0295-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kök, Izgi & Acilar (2020).Kök H, Izgi MS, Acilar AM. Determination of growth and development periods in orthodontics with artificial neural network. Orthodontics & Craniofacial Research. 2020;00(6):1–8. doi: 10.1111/ocr.12443. [DOI] [PubMed] [Google Scholar]
- Krizhevsky, Sutskever & Hinton (2012).Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems; 2012. pp. 1097–1105. [Google Scholar]
- Kulkarni et al. (2020).Kulkarni S, Seneviratne N, Baig MS, Khan AHA. Artificial intelligence in medicine: where are we now? Academic Radiology. 2020;27(1):62–70. doi: 10.1016/j.acra.2019.10.001. [DOI] [PubMed] [Google Scholar]
- Kunz et al. (2020).Kunz F, Stellzig-Eisenhauer A, Zeman F, Boldt J. Artificial intelligence in orthodontics: evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. Journal of Orofacial Orthopedics-fortschritte Der Kieferorthopadie. 2020;81(1):52–68. doi: 10.1007/s00056-019-00203-8. [DOI] [PubMed] [Google Scholar]
- LeCun et al. (1998).LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. doi: 10.1109/5.726791. [DOI] [Google Scholar]
- Lee, Kim & Jeong (2020).Lee J-H, Kim D-H, Jeong S-N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Diseases. 2020;26(1):152–158. doi: 10.1111/odi.13223. [DOI] [PubMed] [Google Scholar]
- Lee et al. (2018). Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. Journal of Periodontal & Implant Science. 2018;48(2):114–123. doi: 10.5051/jpis.2018.48.2.114.
- Lee et al. (2017). Lee JG, Jun S, Cho YW, Lee H, Kim GB, Seo JB, Kim N. Deep learning in medical imaging: general overview. Korean Journal of Radiology. 2017;18(4):570–584. doi: 10.3348/kjr.2017.18.4.570.
- Lee et al. (2019). Lee SM, Kim HP, Jeon K, Lee SH, Seo JK. Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning. Physics in Medicine and Biology. 2019;64(5):055002. doi: 10.1088/1361-6560/ab00c9.
- Legg & Hutter (2007). Legg S, Hutter M. Universal intelligence: a definition of machine intelligence. Minds and Machines. 2007;17(4):391–444. doi: 10.1007/s11023-007-9079-x.
- Lenza et al. (2010). Lenza M, Lenza MMDO, Dalstra M, Melsen B, Cattaneo P. An analysis of different approaches to the assessment of upper airway morphology: a CBCT study. Orthodontics & Craniofacial Research. 2010;13(2):96–105. doi: 10.1111/j.1601-6343.2010.01482.x.
- Leonardi et al. (2008). Leonardi R, Giordano D, Maiorana F, Spampinato C. Automatic cephalometric analysis. Angle Orthodontist. 2008;78(1):145–151. doi: 10.2319/120506-491.1.
- Li et al. (2019). Li P, Kong D, Tang T, Su D, Yang P, Wang H, Zhao Z, Liu Y. Orthodontic treatment planning based on artificial neural networks. Scientific Reports. 2019;9:2037. doi: 10.1038/s41598-018-38439-w.
- Li et al. (2017). Li S, Chen X, Liu X, Yu Y, Pan H, Haak R, Schmidt J, Ziebolz D, Schmalz G. Complex integrated analysis of lncRNAs-miRNAs-mRNAs in oral squamous cell carcinoma. Oral Oncology. 2017;73(2):1–9. doi: 10.1016/j.oraloncology.2017.07.026.
- Li (1994). Li SZ. Markov random field models in computer vision. In: Eklundh JO, editor. Computer Vision—ECCV ’94. Lecture Notes in Computer Science. Vol. 801. Berlin, Heidelberg: Springer; 1994. pp. 361–370.
- Lin et al. (2020). Lin C-C, Wu C-Z, Huang M-S, Huang C-F, Cheng H-C, Wang DP. Fully digital workflow for planning static guided implant surgery: a prospective accuracy study. Journal of Clinical Medicine. 2020;9(4):980. doi: 10.3390/jcm9040980.
- Lin et al. (2021). Lin H-H, Chiang W-C, Yang C-T, Cheng C-T, Zhang T, Lo L-J. On construction of transfer learning for facial symmetry assessment before and after orthognathic surgery. Computer Methods and Programs in Biomedicine. 2021;200(2):105928. doi: 10.1016/j.cmpb.2021.105928.
- Liu et al. (2018). Liu K, Kang G, Zhang N, Hou B. Breast cancer classification based on fully-connected layer first convolutional neural networks. IEEE Access. 2018;6:23722–23732. doi: 10.1109/ACCESS.2018.2817593.
- Long, Shelhamer & Darrell (2015). Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Piscataway: IEEE; 2015. pp. 3431–3440.
- Lowe (2004). Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision. 2004;60(2):91–110. doi: 10.1023/B:VISI.0000029664.99615.94.
- Lu et al. (2014). Lu G, Halig L, Wang D, Qin X, Chen ZG, Fei B. Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging. Journal of Biomedical Optics. 2014;19(10):106004. doi: 10.1117/1.JBO.19.10.106004.
- Lu et al. (2018). Lu G, Wang D, Qin X, Muller S, Wang X, Chen AY, Chen ZG, Fei B. Detection and delineation of squamous neoplasia with hyperspectral imaging in a mouse model of tongue carcinogenesis. Journal of Biophotonics. 2018;11(3):e201700078. doi: 10.1002/jbio.201700078.
- Luo et al. (2016). Luo W, Li Y, Urtasun R, Zemel R. Understanding the effective receptive field in deep convolutional neural networks. Advances in Neural Information Processing Systems; 2016. pp. 4898–4906.
- Ma et al. (2020). Ma Q, Kobayashi E, Fan B, Nakagawa K, Sakuma I, Masamune K, Suenaga H. Automatic 3D landmarking model using patch-based deep neural networks for CT image of oral and maxillofacial surgery. International Journal of Medical Robotics and Computer Assisted Surgery. 2020;16(3):e2093. doi: 10.1002/rcs.2093.
- Mahmood et al. (2020). Mahmood H, Shaban M, Indave BI, Santos-Silva AR, Rajpoot N, Khurram SA. Use of artificial intelligence in diagnosis of head and neck precancerous and cancerous lesions: a systematic review. Oral Oncology. 2020;110(3):104885. doi: 10.1016/j.oraloncology.2020.104885.
- Maini & Aggarwal (2010). Maini R, Aggarwal H. A comprehensive review of image enhancement techniques. arXiv. 2010. https://arxiv.org/ftp/arxiv/papers/1003/1003.4053.pdf
- Marsden et al. (2020). Marsden M, Weyers BW, Bec J, Sun T, Gandour-Edwards RF, Birkeland AC, Abouyared M, Bewley AF, Farwell DG, Marcu L. Intraoperative margin assessment in oral and oropharyngeal cancer using label-free fluorescence lifetime imaging and machine learning. IEEE Transactions on Biomedical Engineering. 2020;68(3):857–868. doi: 10.1109/TBME.2020.3010480.
- Martonffy (2015). Martonffy AI. Oral health: orthodontic treatment. FP Essentials. 2015;428:22–26.
- Marur et al. (2010). Marur S, D’Souza G, Westra WH, Forastiere AA. HPV-associated head and neck cancer: a virus-related cancer epidemic. The Lancet Oncology. 2010;11(8):781–789. doi: 10.1016/S1470-2045(10)70017-6.
- Men et al. (2019). Men K, Geng H, Zhong H, Fan Y, Lin A, Xiao Y. A deep learning model for predicting xerostomia due to radiation therapy for head and neck squamous cell carcinoma in the RTOG 0522 clinical trial. International Journal of Radiation Oncology Biology Physics. 2019;105(2):440–447. doi: 10.1016/j.ijrobp.2019.06.009.
- Milletari, Navab & Ahmadi (2016). Milletari F, Navab N, Ahmadi S. V-Net: fully convolutional neural networks for volumetric medical image segmentation. 2016 Fourth International Conference on 3D Vision (3DV); 2016. pp. 565–571.
- Minnema et al. (2019). Minnema J, van Eijnatten M, Hendriksen AA, Liberton N, Pelt DM, Batenburg KJ, Forouzanfar T, Wolff J. Segmentation of dental cone-beam CT scans affected by metal artifacts using a mixed-scale dense convolutional neural network. Medical Physics. 2019;46(11):5027–5035. doi: 10.1002/mp.13793.
- Montúfar, Romero & Scougall-Vilchis (2018). Montúfar J, Romero M, Scougall-Vilchis RJ. Hybrid approach for automatic cephalometric landmark annotation on cone-beam computed tomography volumes. American Journal of Orthodontics and Dentofacial Orthopedics. 2018;154(1):140–150. doi: 10.1016/j.ajodo.2017.08.028.
- Nair & Hinton (2010). Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. Proceedings of the 27th International Conference on Machine Learning (ICML-10); 2010. pp. 807–814.
- Namin et al. (2020). Namin AW, Bollig CA, Harding BC, Dooley LM. Implications of tumor size, subsite, and adjuvant therapy on outcomes in pT4aN0 oral cavity carcinoma. Otolaryngology—Head and Neck Surgery. 2020;162(5):683–692. doi: 10.1177/0194599820904679.
- Nathan et al. (2014). Nathan CA, Kaskas NM, Ma X, Chaudhery S, Lian T, Moore-Medlin T, Shi R, Mehta V. Confocal laser endomicroscopy in the detection of head and neck precancerous lesions. Otolaryngology—Head and Neck Surgery. 2014;151(1):73–80. doi: 10.1177/0194599814528660.
- O’Neil et al. (2019). O’Neil AQ, Kascenas A, Henry J, Wyeth D, Shepherd M, Beveridge E, Clunie L, Sansom C, Šeduikytė E, Muir K, Poole I. Attaining human-level performance with atlas location autocontext for anatomical landmark detection in 3D CT data. Proceedings of the Computer Vision—ECCV 2018 Workshops; 2019. pp. 470–484.
- Ohashi et al. (2016). Ohashi Y, Ariji Y, Katsumata A, Fujita H, Nakayama M, Fukuda M, Nozawa M, Ariji E. Utilization of computer-aided detection system in diagnosing unilateral maxillary sinusitis on panoramic radiographs. Dentomaxillofacial Radiology. 2016;45(3):20150419. doi: 10.1259/dmfr.20150419.
- Okada et al. (2003). Okada Y, Mataga I, Katagiri M, Ishii K. An analysis of cervical lymph nodes metastasis in oral squamous cell carcinoma: relationship between grade of histopathological malignancy and lymph nodes metastasis. International Journal of Oral and Maxillofacial Surgery. 2003;32(3):284–288. doi: 10.1054/ijom.2002.0303.
- Orhan et al. (2020). Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. International Endodontic Journal. 2020;53(5):680–689. doi: 10.1111/iej.13265.
- Pan et al. (2020). Pan X, Zhang T, Yang Q, Yang D, Rwigema J-C, Qi XS. Survival prediction for oral tongue cancer patients via probabilistic genetic algorithm optimized neural network models. The British Journal of Radiology. 2020;93(1112):20190825. doi: 10.1259/bjr.20190825.
- Park et al. (2019). Park JH, Hwang HW, Moon JH, Yu Y, Kim H, Her SB, Srinivasan G, Aljanabi MNA, Donatelli RE, Lee SJ. Automated identification of cephalometric landmarks: part 1—comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthodontist. 2019;89(6):903–909. doi: 10.2319/022019-127.1.
- Patcas et al. (2019a). Patcas R, Bernini DAJ, Volokitin A, Agustsson E, Rothe R, Timofte R. Applying artificial intelligence to assess the impact of orthognathic treatment on facial attractiveness and estimated age. International Journal of Oral and Maxillofacial Surgery. 2019a;48(1):77–83. doi: 10.1016/j.ijom.2018.07.010.
- Patcas et al. (2019b). Patcas R, Timofte R, Volokitin A, Agustsson E, Eliades T, Eichenberger M, Bornstein MM. Facial attractiveness of cleft patients: a direct comparison between artificial-intelligence-based scoring and conventional rater groups. European Journal of Orthodontics. 2019b;41(4):428–433. doi: 10.1093/ejo/cjz007.
- Pinto et al. (2018). Pinto AS, Alves LS, Maltz M, Susin C, Zenkner JEA. Does the duration of fixed orthodontic treatment affect caries activity among adolescents and young adults? Caries Research. 2018;52(6):463–467. doi: 10.1159/000488209.
- Pinto et al. (2017). Pinto AS, Alves LS, Zenkner JEA, Zanatta FB, Maltz M. Gingival enlargement in orthodontic patients: effect of treatment duration. American Journal of Orthodontics and Dentofacial Orthopedics. 2017;152(4):477–482. doi: 10.1016/j.ajodo.2016.10.042.
- Poedjiastoeti & Suebnukarn (2018). Poedjiastoeti W, Suebnukarn S. Application of convolutional neural network in the diagnosis of jaw tumors. Healthcare Informatics Research. 2018;24(3):236–241. doi: 10.4258/hir.2018.24.3.236.
- Rahman et al. (2020). Rahman TY, Mahanta LB, Das AK, Sarma JD. Automated oral squamous cell carcinoma identification using shape, texture and color features of whole image strips. Tissue and Cell. 2020;63(7):101322. doi: 10.1016/j.tice.2019.101322.
- Ramachandran, Zoph & Le (2017). Ramachandran P, Zoph B, Le QV. Searching for activation functions. arXiv. 2017. https://arxiv.org/abs/1710.05941
- Ravanbakhsh, Schneider & Poczos (2017). Ravanbakhsh S, Schneider J, Poczos B. Equivariance through parameter-sharing. Proceedings of the 34th International Conference on Machine Learning. 2017;70:2892–2901.
- Ravanelli et al. (2018). Ravanelli M, Grammatica A, Tononcelli E, Morello R, Leali M, Battocchio S, Agazzi GM, Buglione di Monale e Bastia M, Maroldi R, Nicolai P, Farina D. Correlation between human papillomavirus status and quantitative MR imaging parameters including diffusion-weighted imaging and texture features in oropharyngeal carcinoma. American Journal of Neuroradiology. 2018;39(10):1878–1883. doi: 10.3174/ajnr.A5792.
- Redmon & Farhadi (2018). Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv. 2018. https://arxiv.org/abs/1804.02767
- Ren et al. (2020). Ren J, Qi M, Yuan Y, Duan S, Tao X. Machine learning-based MRI texture analysis to predict the histologic grade of oral squamous cell carcinoma. American Journal of Roentgenology. 2020;215(5):1184–1190. doi: 10.2214/AJR.19.22593.
- Rivera (2015). Rivera C. Essentials of oral cancer. International Journal of Clinical and Experimental Pathology. 2015;8:11884–11894.
- Romaniuk et al. (2004). Romaniuk B, Desvignes M, Revenu M, Deshayes MJ. Shape variability and spatial relationships modeling in statistical pattern recognition. Pattern Recognition Letters. 2004;25(2):239–247. doi: 10.1016/j.patrec.2003.10.011.
- Romeo et al. (2020). Romeo V, Cuocolo R, Ricciardi C, Ugga L, Cocozza S, Verde F, Stanzione A, Napolitano V, Russo D, Improta G, Elefante A, Staibano S, Brunetti A. Prediction of tumor grade and nodal status in oropharyngeal and oral cavity squamous-cell carcinoma using a radiomic approach. Anticancer Research. 2020;40(1):271–280. doi: 10.21873/anticanres.13949.
- Ronneberger, Fischer & Brox (2015). Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-assisted Intervention; Berlin: Springer; 2015. pp. 234–241.
- Ruellas et al. (2016). Ruellas ACDO, Yatabe MS, Souki BQ, Benavides E, Nguyen T, Luiz RR, Franchi L, Cevidanes LHS. 3D mandibular superimposition: comparison of regions of reference for voxel-based registration. PLOS ONE. 2016;11(6):e0157625. doi: 10.1371/journal.pone.0157625.
- Schwendicke et al. (2020a). Schwendicke F, Elhennawy K, Paris S, Friebertshäuser P, Krois J. Deep learning for caries lesion detection in near-infrared light transillumination images: a pilot study. Journal of Dentistry. 2020a;92(2):103260. doi: 10.1016/j.jdent.2019.103260.
- Schwendicke et al. (2019). Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. Journal of Dentistry. 2019;91(7639):103226. doi: 10.1016/j.jdent.2019.103226.
- Schwendicke et al. (2020b). Schwendicke F, Rossi J, Göstemeyer G, Elhennawy K, Cantu A, Gaudin R, Chaurasia A, Gehrung S, Krois J. Cost-effectiveness of artificial intelligence for proximal caries detection. Journal of Dental Research. 2020b;100(4). doi: 10.1177/0022034520972335.
- Selim & Giovanni (2019). Selim A, Giovanni P. Computational neural network in melanocytic lesions diagnosis: artificial intelligence to improve diagnosis in dermatology? European Journal of Dermatology. 2019;29:4–7. doi: 10.1684/ejd.2019.3538.
- Setzer et al. (2020). Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. Journal of Endodontics. 2020;46(7):987–993. doi: 10.1016/j.joen.2020.03.025.
- Shahidi et al. (2014). Shahidi S, Bahrampour E, Soltanimehr E, Zamani A, Oshagh M, Moattari M, Mehdizadeh A. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images. BMC Medical Imaging. 2014;14(1):32. doi: 10.1186/1471-2342-14-32.
- Shamim et al. (2019). Shamim MZM, Syed S, Shiblee M, Usman M, Ali S. Automated detection of oral pre-cancerous tongue lesions using deep learning for early diagnosis of oral cavity cancer. arXiv. 2019. https://arxiv.org/abs/1909.08987
- Simonyan & Zisserman (2014). Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv. 2014. https://arxiv.org/abs/1409.1556
- Smola & Schölkopf (2004). Smola AJ, Schölkopf B. A tutorial on support vector regression. Statistics and Computing. 2004;14(3):199–222. doi: 10.1023/B:STCO.0000035301.49549.88.
- Spiro (1985). Spiro RH. The management of neck nodes in head and neck cancer: a surgeon’s view. Bulletin of the New York Academy of Medicine. 1985;61:629–637.
- Springenberg et al. (2014). Springenberg JT, Dosovitskiy A, Brox T, Riedmiller M. Striving for simplicity: the all convolutional net. arXiv. 2014. https://arxiv.org/abs/1412.6806
- Sun et al. (2018). Sun Y, Liu X, Cong P, Li L, Zhao Z. Digital radiography image denoising using a generative adversarial network. Journal of X-ray Science and Technology. 2018;26(4):523–534. doi: 10.3233/XST-17356.
- Suttapreyasri, Suapear & Leepong (2018). Suttapreyasri S, Suapear P, Leepong N. The accuracy of cone-beam computed tomography for evaluating bone density and cortical bone thickness at the implant site: micro-computed tomography and histologic analysis. Journal of Craniofacial Surgery. 2018;29(8):2026–2031. doi: 10.1097/SCS.0000000000004672.
- Swinson et al. (2006). Swinson B, Jerjes W, El-Maaytah M, Norris P, Hopper C. Optical techniques in diagnosis of head and neck malignancy. Oral Oncology. 2006;42(3):221–228. doi: 10.1016/j.oraloncology.2005.05.001.
- Uysal et al. (2006). Uysal T, Ramoglu SI, Basciftci FA, Sari Z. Chronologic age and skeletal maturation of the cervical vertebrae and hand-wrist: is there a relationship? American Journal of Orthodontics and Dentofacial Orthopedics. 2006;130(5):622–628. doi: 10.1016/j.ajodo.2005.01.031.
- Taghanaki et al. (2021). Taghanaki SA, Abhishek K, Cohen JP, Cohen-Adad J, Hamarneh G. Deep semantic segmentation of natural and medical images: a review. Artificial Intelligence Review. 2021;54(1):137–178. doi: 10.1007/s10462-020-09854-1.
- Taghavi & Yazdi (2015). Taghavi N, Yazdi I. Prognostic factors of survival rate in oral squamous cell carcinoma: clinical, histologic, genetic and molecular concepts. Archives of Iranian Medicine. 2015;18:314–319.
- Takada, Yagi & Horiguchi (2009). Takada K, Yagi M, Horiguchi E. Computational formulation of orthodontic tooth-extraction decisions. Part I: to extract or not to extract. Angle Orthodontist. 2009;79(5):885–891. doi: 10.2319/081908-436.1.
- Tuzoff et al. (2019). Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, Sveshnikov MM, Bednenko GB. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofacial Radiology. 2019;48(4):20180051. doi: 10.1259/dmfr.20180051.
- Upile et al. (2007). Upile T, Fisher C, Jerjes W, El Maaytah M, Searle A, Archer D, Michaels L, Rhys-Evans P, Hopper C, Howard D, Wright A. The uncertainty of the surgical margin in the treatment of head and neck cancer. Oral Oncology. 2007;43(4):321–326. doi: 10.1016/j.oraloncology.2006.08.002.
- van Rooij et al. (2019). van Rooij W, Dahele M, Ribeiro Brandao H, Delaney AR, Slotman BJ, Verbakel WF. Deep learning-based delineation of head and neck organs at risk: geometric and dosimetric evaluation. International Journal of Radiation Oncology Biology Physics. 2019;104(3):677–684. doi: 10.1016/j.ijrobp.2019.02.040.
- Visvikis et al. (2019). Visvikis D, Cheze Le Rest C, Jaouen V, Hatt M. Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. European Journal of Nuclear Medicine and Molecular Imaging. 2019;46(13):2630–2637. doi: 10.1007/s00259-019-04373-w.
- Vucinic, Trpovski & Scepan (2010). Vucinic P, Trpovski Z, Scepan I. Automatic landmarking of cephalograms using active appearance models. The European Journal of Orthodontics. 2010;32(3):233–241. doi: 10.1093/ejo/cjp099.
- Walk & Weed (2011). Walk EL, Weed SA. Recently identified biomarkers that promote lymph node metastasis in head and neck squamous cell carcinoma. Cancers. 2011;3(1):747–772. doi: 10.3390/cancers3010747.
- Wang et al. (2020). Wang N, Liu Y, Liu Z, Huang X. Application of artificial intelligence and big data in modern financial management. 2020 International Conference on Artificial Intelligence and Education (ICAIE); 2020. pp. 85–87.
- Xie et al. (2019). Xie H, Yang D, Sun N, Chen Z, Zhang Y. Automated pulmonary nodule detection in CT images using deep convolutional neural networks. Pattern Recognition. 2019;85(2):109–119. doi: 10.1016/j.patcog.2018.07.031.
- Xie, Wang & Wang (2010). Xie X, Wang L, Wang A. Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthodontist. 2010;80(2):262–266. doi: 10.2319/111608-588.1.
- Yanhua (2020). Yanhua Z. The application of artificial intelligence in foreign language teaching. 2020 International Conference on Artificial Intelligence and Education (ICAIE); 2020. pp. 40–42.
- You et al. (2020). You W, Hao A, Li S, Wang Y, Xia B. Deep learning-based dental plaque detection on primary teeth: a comparison with clinical assessments. BMC Oral Health. 2020;20(1):141. doi: 10.1186/s12903-020-01114-6.
- Yuan et al. (2019). Yuan Y, Ren J, Shi Y, Tao X. MRI-based radiomic signature as predictive marker for patients with head and neck squamous cell carcinoma. European Journal of Radiology. 2019;117(2):193–198. doi: 10.1016/j.ejrad.2019.06.019.
- Yue et al. (2006). Yue W, Yin D, Li C, Wang G, Xu T. Automated 2-D cephalometric analysis on X-ray images by a model-based approach. IEEE Transactions on Biomedical Engineering. 2006;53(8):1615–1623. doi: 10.1109/TBME.2006.876638.
- Yun et al. (2020). Yun HS, Jang TJ, Lee SM, Lee S-H, Seo JK. Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Physics in Medicine & Biology. 2020;65(8):085018. doi: 10.1088/1361-6560/ab7a71.
- Zamora et al. (2012). Zamora N, Llamas JM, Cibrián R, Gandia JL, Paredes V. A study on the reproducibility of cephalometric landmarks when undertaking a three-dimensional (3D) cephalometric analysis. Medicina Oral Patología Oral y Cirugia Bucal. 2012;17:e678. doi: 10.4317/medoral.17721.
- Zeiler & Fergus (2013). Zeiler MD, Fergus R. Stochastic pooling for regularization of deep convolutional neural networks. arXiv. 2013. https://arxiv.org/abs/1301.3557
- Zhang et al. (2018). Zhang K, Wu J, Chen H, Lyu P. An effective teeth recognition method using label tree with cascade network structure. Computerized Medical Imaging and Graphics. 2018;68(5):61–70. doi: 10.1016/j.compmedimag.2018.07.001.
- Zhang et al. (2020). Zhang X, Liang Y, Li W, Liu C, Gu D, Sun W, Miao L. Development and evaluation of deep learning for screening dental caries from oral photographs. Oral Diseases. 2020;76(4):270. doi: 10.1111/odi.13735.
- Zhang & Yu (2018). Zhang Y, Yu H. Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Transactions on Medical Imaging. 2018;37(6):1370–1381. doi: 10.1109/TMI.2018.2823083.
- Zhu et al. (2018). Zhu W, Liu C, Fan W, Xie X. DeepLung: deep 3D dual path nets for automated pulmonary nodule detection and classification. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV); Piscataway: IEEE; 2018. pp. 673–681.
- Zhu et al. (2019). Zhu Y, Mohamed ASR, Lai SY, Yang S, Kanwar A, Wei L, Kamal M, Sengupta S, Elhalawani H, Skinner H, Mackin DS, Shiao J, Messer J, Wong A, Ding Y, Zhang L, Court L, Ji Y, Fuller CD. Imaging-genomic study of head and neck squamous cell carcinoma: associations between radiomic phenotypes and genomic mechanisms via integration of the cancer genome atlas and the cancer imaging archive. JCO Clinical Cancer Informatics. 2019;3(3):1–9. doi: 10.1200/CCI.18.00073.
Associated Data
Data Availability Statement
The following information was supplied regarding data availability:
This is a literature review.