Author manuscript; available in PMC: 2025 Jul 1.
Published in final edited form as: J Dent. 2024 May 8;146:105057. doi: 10.1016/j.jdent.2024.105057

A reliable deep-learning-based method for alveolar bone quantification using a murine model of periodontitis and micro-computed tomography imaging

Ranhui Xi a, Mamoon Ali a, Yilu Zhou b, Marco Tizzano a,*
PMCID: PMC11288397  NIHMSID: NIHMS2001322  PMID: 38729290

Abstract

Objectives

This study focuses on artificial intelligence (AI)-assisted analysis of alveolar bone for periodontitis in a mouse model, with the aim of creating an automatic deep-learning segmentation model that enables researchers to easily examine alveolar bone from micro-computed tomography (μCT) data without prior machine learning knowledge.

Methods

Ligature-induced experimental periodontitis was produced by placing a small-diameter silk sling ligature around the left maxillary second molar. At 4, 7, 9, or 14 days, the maxillary bone was harvested and processed with a μCT scanner (μCT-45, Scanco). Using Dragonfly (v2021.3), we developed a 3D deep learning model based on the U-Net AI deep learning engine for segmenting materials in complex images to measure alveolar bone volume (BV) and bone mineral density (BMD) while excluding the teeth from the measurements.

Results

This model generates 3D segmentation output for a selected region of interest with over 98% accuracy on different formats of μCT data. BV on the ligature side gradually decreased from 0.87 mm3 to 0.50 mm3 on day 9 and then increased to 0.63 mm3 on day 14. The ligature side lost 4.6% of BMD on day 4, 9.6% on day 7, 17.7% on day 9, and 21.1% on day 14.

Conclusions

This study developed an AI model that can be downloaded and easily applied, allowing researchers to assess metrics including BV, BMD, and trabecular bone thickness, while excluding teeth from the measurements of mouse alveolar bone.

Clinical significance

This work offers an innovative, user-friendly automatic segmentation model that is fast, accurate, and reliable, demonstrating new uses of artificial intelligence (AI) in dentistry with great potential for the diagnosis, treatment, and prognosis of oral diseases.

Keywords: Micro-CT, deep learning, periodontitis

1. Introduction

Periodontal disease is a common oral disease characterized by unbalanced oral microbiota and host immunity. It affects nearly 50% of the population, making it one of the primary causes of tooth loss in adults and a major oral disease that poses a significant threat to both dental and overall health [1, 2].

Current periodontal disease research investigates the etiology, progression, treatment, and related mechanisms of periodontitis using various periodontitis-induction techniques, such as tooth ligature, chemical treatment, bacterial inoculation, or immunodeficient animal models [3]. Clinical attachment loss and radiographic bone loss are indicators widely used to identify periodontal disease and to evaluate periodontal bone loss in clinical practice [4]. To assess the progression of periodontitis in laboratory animals (such as rats and mice), researchers use alveolar bone loss (ABL) as an important disease indicator [5–7]. However, quantifying ABL is particularly challenging due to the complexity of alveolar bone destruction, individual variations, and interference from tooth roots.

The methods used to measure ABL in current research on murine models of periodontitis can be broadly categorized into four approaches. In all of them, the raw alveolar bone data are usually acquired with a μCT scanner or a stereomicroscope loupe [8, 9]. First, the most widely used and straightforward approach is to measure the distance from the most coronal point of the bone crest to the cementoenamel junction (CEJ). This alveolar bone level method represents the ABL value for each sample as an average of measurements from multiple positions, either on a selected tooth or on several teeth [4–9]. Second, the alveolar bone area method calculates the area of ABL (the area between the CEJ and the bone crest) using the ImageJ software to evaluate the degree of bone damage [8, 10]. These first two methods typically rely on two-dimensional (2D) rather than three-dimensional (3D) data, and both lack additional quantifiable parameters (e.g., bone density). Alternatively, some researchers use a selected region of interest (ROI) on 3D data to obtain a more comprehensive bone analysis; the ROI is either large or small. The third method, the large ROI method, uses a selected ROI on 3D μCT data and requires the manual removal of teeth from each slice. Within the defined ROI, researchers can then quantify various 3D parameters such as bone volume (BV) or bone mineral density (BMD). Unfortunately, studies employing this approach differ in how the selected ROI is defined. For example, one study measured alveolar bone parameters, including ABL, from the root bifurcation to the apical region of the second molar [11]. Another two studies defined the ROI horizontally, from the mesial root of the third molar to the distal root of the first molar, or by a horizontal boundary extending 3 mm apically from the CEJ [12]. Zheng et al. used a cube-shaped ROI encompassing the roots of the first and second molars [13]. Lima et al. analyzed 65 slices and Goes et al. used 20 slices from one selected molar, without precisely defining an ROI [10, 14]. Wong et al. focused on the area of the second molar, extending from the distal root of the first molar to the mesial root of the third molar [15]. Wu et al. chose a 1.0 × 3.0 × 2.5 mm3 region below the CEJ of the first molar [16]. The fourth method, the small ROI method, uses BMD and trabecular thickness (Tb/Th) within a small, fixed-volume ROI to represent bone loss, rather than calculating the volume of a larger ROI after manually removing the roots. For instance, Tominari et al. selected a small rectangular region within the inter-tooth area of the first molar [17], and Zheng et al. analyzed a rectangular region located between the mesial and distal roots of the maxillary second molar [13].

Across these different ROI methods and other published research, ROI selection could be more specific and reproducible. Additionally, a μCT data set from a single sample contains hundreds of slices, so manually removing the roots in the selected ROIs takes hours. The large amount of data involved makes this a daunting and time-consuming task and a significant obstacle to extracting quantitative information from images. Hence the need for an automated system that performs an accurate, systematic, comprehensive, and reliable assessment of ABL. U-Net, a convolutional neural network (CNN) architecture commonly used in medical image segmentation, specializes in analyzing images with a grid-like topology (e.g., 3D μCT data). Its deep learning engine can define or detect abstract mathematical features in imaging modalities that are not visible to the human eye [18]. Recently, CNNs have been used to automatically segment CBCT data, to detect periapical lesions in clinical data, and to assist in dental implant planning by accurately measuring critical anatomical parameters [19–21]. To our knowledge, no study using CNNs has focused on periodontal bone loss in μCT data.

Here, we describe an efficient method using an automatic segmentation model for image analysis that uses deep learning, a unique field of research in artificial intelligence (AI), to enable direct analysis of alveolar bone changes from μCT data in experimental periodontitis models. Our model was developed as a plug-in for Dragonfly, non-commercial software, and does not require prior knowledge of machine learning. The automatic segmentation model can distinguish tooth from bone and discriminate differences in shape and bone density. Finally, the automatic segmentation model can compare data from different μCT devices or from the same μCT scanner with different parameter settings. (The model can be downloaded; see the data availability statement.)

2. Materials and Methods

2.1. Periodontitis model and μCT scanning

In this study, 7 male and 9 female mice (8-week-old C57BL/6) were divided into two groups (training data, n=4; periodontitis data, n=12). We induced periodontitis by placing a 6/0 silk ligature (AD Surgical #SS618R13) around the left second molar of the maxilla [13]. After 4, 7, 9, or 14 days, we euthanized the mice (n=3 per group) by carbon dioxide asphyxiation as described in our IACUC-approved protocol (807189). The maxillary bone was harvested (Figures 1 and 4A), fixed in 4% paraformaldehyde overnight, and stored in phosphate-buffered saline (PBS) at −20°C until μCT scanning. All samples were scanned on a μCT-45 (Scanco Medical AG, Brüttisellen, Switzerland; 55 kVp, 145 μA, and 400 ms integration time). The scan resolution was 7.4 μm, which corresponds to 700–800 slices per sample. All experimental procedures were performed under NIH guidelines for the care and use of animals in research and approved by the Institutional Animal Care and Use Committee (IACUC) at the [Author’s Institution].

Figure 1.


Alignment and density histogram of data for deep learning model. (A-B) An example illustrating the effect of 3D rotation on ABL measurement. (A) The five colored lines correspond to the ABL of the distal part of the first molar, the mesial and distal part of the second molar, and the mesial part of the third molar. (B) After a slight rotation, the start and end points of the five colored lines do not accurately measure the CEJ and the most coronal end of the crestal bone. (C) Standard positions: blue: coronal plane is symmetrical; pink: sagittal plane parallels maxillary second molar axis; green: transverse plane parallels maxillary bone. (D) Density histogram of three file formats. x-axis: density value; y-axis: frequency.

2.2. Automatic segmentation model training based on U-Net deep learning architecture

Both 2D and 3D representations of μCT data were processed and visualized using ORS Dragonfly software version 2021.3 (https://theobjects.com/dragonfly, Figure 1). Post-processing included realignment of 3D models in Dragonfly to ensure uniform orientation and left-right symmetry (Figure 1C and Supplementary Video S1). Three different formats of μCT data were exported following the manufacturer’s guidelines (Scanco Medical AG, Brüttisellen, Switzerland): DICOM (Digital Imaging and Communications in Medicine), AIM (the original Scanco μCT format), and TIFF. The voxel intensity values in DICOM files were normalized to a range of [0, 1], whereas AIM and TIFF files required no normalization (Figure 1D, Supplementary Figure S1).
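As a concrete illustration of the normalization step above, the [0, 1] rescaling applied to DICOM volumes can be sketched as a min-max transform in a few lines of Python; the function and array names are illustrative and are not Dragonfly's internal API.

```python
import numpy as np

def normalize_to_unit_range(volume: np.ndarray) -> np.ndarray:
    """Min-max rescale a uCT volume so the lowest-intensity (background)
    voxel maps to 0 and the highest-density voxel maps to 1, mirroring
    the [0, 1] normalization applied to DICOM data."""
    volume = volume.astype(np.float64)
    vmin, vmax = volume.min(), volume.max()
    if vmax == vmin:                      # uniform volume: avoid divide-by-zero
        return np.zeros_like(volume)
    return (volume - vmin) / (vmax - vmin)

# Example: a tiny synthetic 3D volume in arbitrary scanner units
vol = np.array([[[100.0, 500.0], [900.0, 300.0]]])
norm = normalize_to_unit_range(vol)
```

AIM and TIFF files skip this step because their histograms are consistent across scans, as noted above.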

The initial training data were manually acquired using the 3D ROI Painter tool within Dragonfly. ROI placement was verified on every coronal slice of the μCT images from maxillary bone samples (700–800 slices per sample, DICOM format, histogram normalized to 0–1). Regions with an intensity value below 0.2 were defined as background, while those with an intensity of 0.2 or higher were identified as teeth and alveolar bone. These images were then segmented into three distinct classes: background (green), teeth (blue), and bone (orange). We modified the U-Net network architecture by adjusting the network parameters to reduce overfitting and to improve the efficiency of deep learning performance (Figure 2A). The parameter details can be found in Supplementary Figure S2. Subsequently, 80% of the training dataset was employed for training the deep learning model, while the remaining 20% was reserved for validation to ensure the successful training of the model.
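The intensity rule above (background below 0.2, teeth/alveolar bone at or above 0.2) and the 80/20 train/validation split can be sketched as follows. The helper names and the random split strategy are illustrative assumptions; Dragonfly handles both steps internally.

```python
import numpy as np

BACKGROUND, FOREGROUND = 0, 1  # foreground = teeth + bone, separated manually later

def prelabel_slice(slice_01: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Coarse pre-labeling of one normalized ([0, 1]) coronal slice:
    intensity < 0.2 -> background; >= 0.2 -> teeth/alveolar bone."""
    return np.where(slice_01 < threshold, BACKGROUND, FOREGROUND)

def train_val_split(n_slices: int, train_frac: float = 0.8, seed: int = 0):
    """Random 80/20 split of slice indices into training and validation sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_slices)
    cut = int(train_frac * n_slices)
    return idx[:cut], idx[cut:]
```

For a 750-slice sample this yields 600 training and 150 validation slices, with no overlap between the two sets.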

Figure 2.


U-Net based automatic segmentation model training on DICOM format. (A) Architecture and training of the deep-learning U-Net model. After trial runs with different parameter combinations, the network parameter values were selected for quality of inference on unseen data (>98% accuracy) and reasonable training time. (B) U-Net model training log. (C-D) 3D views and cross-sectional images of the axial, palatal, and coronal planes of the input data (C) and the corresponding output segmentation result (D). Green: background; orange: bone; blue: teeth.

After each epoch (a pass over all batches of the training set), the model’s accuracy was automatically validated (Figure 2B). To avoid model inaccuracy caused by slight inconsistencies in orientation and brightness among different 3D samples during actual application, data augmentation was set to generate additional training data during each round of the training process. The specific settings included flipping (horizontal and vertical), rotation (0–180 degrees), cropping, scaling (80–120%), and brightness adjustment (75–125%), which allow the model to accurately handle data that differ from the training data in scale, brightness, and orientation (Supplementary Figure S2).
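A minimal sketch of the augmentation settings described above, assuming 2D slices normalized to [0, 1]. Arbitrary-angle rotation, cropping, and rescaling would need an imaging library (e.g. scipy.ndimage), so this toy version restricts rotation to 90-degree steps while keeping the flip and brightness ranges from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(slice_img: np.ndarray) -> np.ndarray:
    """Randomly flip, rotate, and brightness-jitter one 2D slice,
    approximating the augmentation settings described in the text.
    Rotation is limited to 0/90/180 degrees here as a grid-preserving
    stand-in for the 0-180 degree rotations used in Dragonfly."""
    out = slice_img.copy()
    if rng.random() < 0.5:
        out = np.flip(out, axis=1)             # horizontal flip
    if rng.random() < 0.5:
        out = np.flip(out, axis=0)             # vertical flip
    out = np.rot90(out, k=rng.integers(0, 3))  # 0, 90, or 180 degrees
    out = out * rng.uniform(0.75, 1.25)        # brightness: 75%-125%
    return np.clip(out, 0.0, 1.0)              # stay in normalized range
```

Applied at training time, each epoch therefore sees slightly different copies of every slice, which is what makes the trained model tolerant of orientation and brightness variation.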

In addition, due to the complexity of these μCT data, it is sometimes difficult to distinguish teeth from bone using only a single coronal slice, even for an experienced scientist. Therefore, five slices before and after every slice were used as reference slices during the learning and application process. The inaccuracy ratio decreased from 30.08% to 1.16% in the first round of training (20 hours, 67 patches; Figure 2B). Subsequently, the model was applied to another maxillary sample, yielding preliminary automated segmentation results. At this stage, the results were not entirely accurate; however, after manual calibration, the manually modified segmentation results were used as training data for a second round of model training. This iterative process further improved the model’s accuracy. The automatic segmentation error rate reached and remained under 2% after four training rounds totaling 73 hours (Figure 2C-D).

2.3. Application of automatic segmentation model based on U-Net deep learning architecture in selected ROI

We applied the trained model to a 2 mm spherical ROI drawn on μCT data from 12 ligature-induced periodontitis mouse samples (DICOM format). The selection of the 2 mm spherical ROI was based on the distance from the peak of the occlusal surface to the root apex of the second molar, measuring approximately 1.4–1.6 mm (Figure 3A and Supplementary Video S1). The center of this 2 mm spherical ROI was positioned at the root bifurcation of the second molar, encompassing the second molar itself, the surrounding bone, and a portion of the neighboring molars.
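The 2 mm spherical ROI can be expressed as a boolean voxel mask given the 7.4 μm scan resolution. This is a hedged sketch of the geometry only; in practice the ROI is drawn in Dragonfly, and the root-bifurcation center is located manually.

```python
import numpy as np

VOXEL_MM = 0.0074  # 7.4 um isotropic scan resolution

def spherical_roi(shape, center_vox, diameter_mm=2.0):
    """Boolean mask of a spherical ROI (default 2 mm diameter) centered
    at voxel coordinates `center_vox`, e.g. the root bifurcation of the
    second molar located on the uCT volume."""
    radius_vox = (diameter_mm / 2.0) / VOXEL_MM
    zz, yy, xx = np.indices(shape)
    dist2 = ((zz - center_vox[0]) ** 2 +
             (yy - center_vox[1]) ** 2 +
             (xx - center_vox[2]) ** 2)
    return dist2 <= radius_vox ** 2

# Toy example: a 0.2 mm sphere in a small 64^3 volume (a real 2 mm ROI
# at 7.4 um spans ~270 voxels across and needs the full-size volume)
mask = spherical_roi((64, 64, 64), (32, 32, 32), diameter_mm=0.2)
```

All voxels inside the mask (second molar, surrounding bone, and parts of the neighboring molars) are then passed to the segmentation model.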

Figure 3.


Selection of ROIs and application of automatic segmentation model. (A) ROIs. Two spherical areas with a 2-mm diameter are centered on the root bifurcation of the second molar to include the area of the alveolar bone affected by ligature-induced periodontitis: the entire maxillary second molar, the distal root of the maxillary first molar, the mesial root of the maxillary third molar, and the corresponding alveolar bone. (B-C) The transverse and coronal plane of segmentation results, respectively. Left: control side; right: ligature side. Green: background; orange: bone; blue: teeth. (D-E) 3D auto-segmentation results. (F) 3D auto-segmentation results of bone only. Left: control side; right: ligature side.

The machine learning model then automatically segments the ROI into three parts (background, teeth, and bone) in less than 1 minute per sphere (Figure 3B-F and Supplementary Video S1). For further morphometric analysis, the bone analysis plug-in in Dragonfly was employed to calculate parameters such as bone volume (BV, mm3), bone mineral density (BMD), bone surface area (mm2), and average trabecular thickness (Tb/Th, mm). The investigator obtaining the μCT data was blinded to which mouse group was tested. One-way ANOVA with post hoc Tukey tests was conducted in GraphPad Prism (version 10.0.1, USA).
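Of these morphometric outputs, BV is the most direct: it is the bone-class voxel count multiplied by the voxel volume. A minimal sketch under the 7.4 μm isotropic-voxel assumption (the class codes are illustrative; the actual computation is performed by the Dragonfly bone analysis plug-in):

```python
import numpy as np

VOXEL_MM = 0.0074                  # 7.4 um isotropic voxels
VOXEL_VOL_MM3 = VOXEL_MM ** 3      # volume of one voxel in mm3

BACKGROUND, TEETH, BONE = 0, 1, 2  # illustrative class codes

def bone_volume_mm3(labels: np.ndarray) -> float:
    """Bone volume = number of bone-class voxels x voxel volume (mm3)."""
    return int(np.count_nonzero(labels == BONE)) * VOXEL_VOL_MM3
```

Because the teeth and background classes are excluded by the segmentation, no manual root removal is needed before this calculation.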

3. Results

3.1. Alignment of data for deep learning model

Fixed orientation of the sample is a prerequisite for successful application of any method to measure bone loss. Yet in both 2D and 3D data, it is nearly impossible to reach a fixed position in which all measurement lines are parallel to the long axis of the tooth when using the standard measure of bone loss, the alveolar bone level method. Figure 1 illustrates the effect of 3D rotation on ABL measurements, showing the inaccuracy of bone level as a measure of alveolar bone. The five lines in Figure 1A correspond to the alveolar bone level of the distal part of the first molar, the mesial and distal parts of the second molar, and the mesial part of the third molar. After slightly rotating the 3D image (Figure 1B), the starting and ending points of the five lines no longer precisely represent the CEJ and the coronal end of the crestal bone. In contrast, 3D orientation has minimal impact on our deep learning model because data augmentation was set to generate additional training data during the training process. Operations such as flipping (180 degrees), rotating, cropping, or changing brightness (±25%) do not affect the results of the automatic segmentation process when the coronal plane is set as the main plane to ensure consistency of data importing and training (Supplementary Video S2).

3.2. Segmentation performance

The deep learning model shows excellent accuracy in all samples analyzed in this study. After four rounds of training with μCT data from whole maxillary bone (2763 slices in the coronal plane), the model’s error rate is stable below 2% (Figure 2B). The images were segmented into three classes: background (green), teeth (blue), and bone (orange), in the whole maxillary bone (Figure 2C-D and Supplementary Video S3) or in a custom ROI (Figure 3 and Supplementary Video S4). Choosing a different ROI does not affect segmentation accuracy, since the deep learning model was trained on segmentation results for the entire maxillary bone. The duration of automatic segmentation depends on the size of the region of interest (ROI) and the processing capability of the graphics processing unit (GPU). Specifically, using an NVIDIA Quadro P2000 GPU, segmentation of the entire maxillary bone is completed within approximately 15 minutes (Figure 2D), while a 2 mm spherical ROI encompassing the second molar takes about 30 seconds (Figure 3).

3.3. Alveolar bone analysis of ligature-induced experimental periodontitis model

The trained model was then applied to a specific ROI drawn on the μCT data from 12 ligature-induced periodontitis mouse samples (DICOM format, Figure 4A, B). The model is also compatible with AIM and TIFF files, as illustrated in Supplementary Figure S3. The segmentation creates a multi-ROI containing three ROI layers that distinguish bone, teeth, and background. We then extract the three embedded ROIs, which allows us to separate teeth and background from bone. The extracted bone ROI is then used in the Dragonfly bone analysis plug-in to obtain parameters such as BMD, BV, bone surface area, and Tb/Th. Figure 4C presents the BV, which provides two critical pieces of information. 1) There was no significant difference in BV on the control (un-ligatured) side among the four groups (4, 7, 9, or 14 days of ligature). We conducted a pairwise comparison (one-way ANOVA) of the BV of the control side across all four groups. The p-values for day 4 vs 9, 4 vs 14, 7 vs 14, and 9 vs 14 exceeded 0.9, and day 4 vs 7 had the smallest p-value (p=0.5202). This finding further supports the reliability of both the auto-segmentation model and the ROI definition for experimental periodontitis. 2) The BV on the ligature side gradually decreased from 0.87 mm3 (control-side average, SD=0.040) to 0.76 mm3 (SD=0.032) on day 4, 0.61 mm3 (SD=0.035) on day 7, and 0.50 mm3 (SD=0.035) on day 9, and then increased to 0.63 mm3 (SD=0.061) on day 14 (ligature-side averages). In other words, ligature of the second molar resulted in continuous reduction of alveolar BV up to day 9, with a trend toward recovery of alveolar BV by day 14 (Figure 4C). This result was consistent with the original 3D images of the four groups (Figure 4B), which showed that ABL around the second molar was greatest on days 7 and 9 (change from green- to blue-colored bone), with slightly increased bone volume on day 14 (gaps filled with blue-colored bone; Supplementary Figures S4-S7). Although bone loss in female mice is reported to be higher [22], the results of the auto-segmentation were not influenced by sex. Further investigation, such as increasing the sample size for each sex, is needed to confidently evaluate any variability of bone loss due to sex.

Figure 4.


Alveolar bone loss in the ligature-induced experimental periodontitis model. (A) Schematic experimental plan for ligature-induced periodontitis. (B) Representative images of periodontal bone loss at days 4, 7, 9, and 14. Bone density values are represented using a color scale (blue/violet: lowest density; red: highest density). (C) Alveolar bone volume (mm3) in the selected spherical ROI, excluding the teeth and background. We did not observe statistical differences in bone volume among the four groups on the control side (left). The alveolar bone volume of the ligature side was lowest on days 7 and 9 and increased by day 14. (D) BV ratio (ligature side/control side). The BV ratio continued to decrease on days 7 and 9 and increased on day 14 but remained below day-4 levels. A pairwise comparison using one-way ANOVA was performed for statistical significance. *P < 0.05; **P < 0.01; NS, not significant. Data (mean ± SD) are from biological replicates (n=3).

In this study, the BMD (mg/mm3) for each ROI was calibrated and calculated using hydroxyapatite calibration phantoms with five cylindrical inserts supplying 0 (water), 200, 400, 600, and 800 mg/mm3 standard mineral densities (Supplementary Figure S8). Our data showed that alveolar BMD on the un-ligatured control side ranged from 962 to 1078 mg/mm3, with a mean of 1036 mg/mm3 (SD=39.86 mg/mm3), and on the ligatured side ranged from 824 to 1021 mg/mm3, with a mean of 931 mg/mm3 (SD=65.59 mg/mm3). To verify the accuracy of the BMD calculations, we calculated the tooth mineral density, which ranged from 1277 to 1381 mg/mm3 in the 24 samples from the 12 mice across the four groups, with a mean of 1333 mg/mm3 (SD=29.56 mg/mm3; Figure 5A and Supplementary Figure S8). These numbers align with alveolar BMD and tooth mineral density values published in the literature [23, 24].
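The phantom calibration amounts to a linear map from scanner attenuation to mineral density, fitted through the five inserts. The sketch below uses made-up attenuation readings purely for illustration; the real values are measured from the phantom scan.

```python
import numpy as np

# Known phantom insert densities (mg/mm3): water plus four HA inserts
phantom_density = np.array([0.0, 200.0, 400.0, 600.0, 800.0])

# Hypothetical mean attenuation measured in each insert (scanner units);
# placeholder values chosen only to demonstrate the fit
phantom_attenuation = np.array([50.0, 210.0, 370.0, 530.0, 690.0])

# Linear least-squares fit: density = slope * attenuation + intercept
slope, intercept = np.polyfit(phantom_attenuation, phantom_density, 1)

def attenuation_to_bmd(att):
    """Convert a measured attenuation value to BMD (mg/mm3)."""
    return slope * att + intercept
```

Applying this calibration to the mean attenuation of each bone ROI yields the BMD values reported above; the tooth mineral densities serve as an internal sanity check, since they should be unaffected by the ligature.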

Figure 5.


Bone analysis of ligature-induced periodontitis after automatic segmentation. All measurements collected from the 2 mm spherical ROIs exclude the teeth and background. (A) BMD of periodontal bone at days 4, 7, 9, and 14. (B) BMD ratio (ligature side/control side). The BMD ratio decreased starting on day 4 and continued to decrease on days 9 and 14, although no significant difference was observed between days 7 and 9. (C) Alveolar bone surface area (mm2) showed no statistical difference among the four groups on the control side (left). On the ligature side (right), the alveolar bone surface was smallest on days 7 and 9 and increased on day 14 but remained below day-4 levels. (D) Alveolar bone surface area ratio (ligature side/control side). (E) Average Tb/Th (mm) shows no statistical difference among the four groups on the control side (left). On the ligature side (right), average trabecular thickness decreased from day 4 to 7, with no significant changes among days 7, 9, and 14. (F) Average Tb/Th ratio (ligature side/control side). A pairwise comparison using one-way ANOVA was performed for statistical significance. *P < 0.05; **P < 0.01; ***P < 0.001; NS, not significant. Data (mean ± SD) are from biological replicates (n=3).

In addition, ABL can be evaluated by the BV ratio or density ratio of the ligatured vs un-ligatured control side from the same mouse (Figure 4D and 5B). Compared to the un-ligatured control side, the ligatured side lost an average of 15.9% (SD=3.0%) of BV and 4.6% (SD=0.32%) of BMD on day 4; 27% (SD=2.4%) and 9.6% (SD=0.79%) on day 7; 43% (SD=5.1%) and 17.7% (SD=3.2%) on day 9; and 26.8% (SD=3.5%) and 21.1% (SD=1.5%) on day 14 (Figure 4D and 5B). This indicates that the BV ratio is lowest on day 9 and is rescued by day 14, whereas the BMD continues to decline.
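The loss percentages above are computed per mouse as one minus the ligature/control ratio and then averaged across the group, so a ratio of group means differs slightly from the reported group averages. A minimal sketch using the day-9 group means from the text:

```python
def percent_loss(ligature: float, control: float) -> float:
    """Percent loss on the ligatured side relative to the contralateral
    un-ligatured control side of the same mouse."""
    return 100.0 * (1.0 - ligature / control)

# Day-9 group means reported above: BV 0.50 mm3 (ligature) vs 0.87 mm3 (control)
bv_loss_day9 = percent_loss(0.50, 0.87)   # ~42.5%, close to the ~43% group average
```

Using within-mouse ratios controls for inter-animal variation in baseline bone volume, which is why the control-side comparison across groups (Figure 4C) matters for validating the approach.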

The results of the bone surface area analysis were highly consistent with BV. The surface area (mm2) on day 7 was smaller than on day 4 (Figure 5C). There was no statistical difference between days 7 and 9, but the area increased from day 9 to day 14 (Figure 5C). Tb/Th results showed a similar trend, without statistical differences among days 7, 9, and 14 (Figure 5E). The bone surface area ratio and Tb/Th ratio rose by day 14 after reaching their lowest points on day 9 (Figure 5D-F).

4. Discussion

Deep learning is a unique field of research in AI that has shown great applicability in the medical field, including medical image segmentation, disease diagnosis, treatment selection, and prognosis prediction [25–27]. Based on automated operation processes and powerful computing, deep learning allows us to bypass the limitations of manual work.

In dentistry, AI has been applied to identify proximal dental caries [28, 29], control dental plaque [30], classify canal shape [31], detect osteoporosis in panoramic radiographs [32], and diagnose periodontally compromised teeth from periapical radiographic data sets [33]. Cone beam computed tomography (CBCT) images of dental data can be used to evaluate parameters of alveolar bone, enamel, and even pulp at the 3D level. In 2022, a study collected 4,938 CBCT samples from numerous institutions and created an automatic segmentation 3D model to segment the teeth and alveolar bone; the results were comparable or even superior to those of radiologists [34]. Also in 2022, another study divided CBCT images into five layers, including teeth, pulp cavity, cancellous bone, and cortical bone, using the U-Net machine learning model [35]. Another study used Dragonfly software and CBCT images to measure ABL and vertical height after immediate implant placement by segmenting the implant from alveolar bone [36].

Several studies have also been done to ascertain AI technology applications to diagnose and predict periodontal diseases [37]. In Lee’s study, machine learning methods using CNNs were employed to analyze periapical X-ray images (2D images) from patients and categorize teeth into three classes: healthy teeth, moderate periodontal clinical attachment loss (CAL), and severe CAL, with an average prediction accuracy of 78.9% (compared to experienced dentists) [33]. In Yauney’s study, a CNN was applied to intraoral fluorescence photos to automatically grade modified gingival index levels in patients [38]. Krois’s study used a CNN to detect the percentage of periodontal bone loss (PBL) in 2D digital panoramic dental X-ray images [39].

The previously published studies mainly aimed at classification and disease diagnosis using clinical data, while our study focused on AI-assisted automatic analysis of alveolar bone for periodontitis in a mouse model. Radiographic bone loss is one of the most used indicators in experimental periodontitis models. However, the analysis of 3D data in the clinic and laboratory remains problematic due to the complexity of alveolar bone destruction, individual differences, and interference from tooth roots. We developed a method for rapid, accurate, and reproducible analysis of μCT data for mouse alveolar bone, which can be applied to different data formats (AIM, DICOM, or TIFF). Compared to the four traditional analytic approaches, the automatic segmentation model, applied to a representative ROI that excludes teeth, is a dependable, fast, comprehensive, and accurate method to measure bone changes over time during periodontitis. The automatic segmentation of the 2 mm spherical ROI described here takes about 1 minute per ROI (Supplementary Video S1); for the entire mouse maxillary alveolar bone, the process requires less than 15 minutes per sample.

For operations beyond human abilities, such as the tooth volume and density measured in this study, machine learning has great advantages. For instance, in the 12 mice used in this study, ligature-induced periodontitis decreased alveolar BMD, while the mineral density of the teeth on the ligature and control sides remained the same. Our model’s results revealed that the mineral density of teeth ranged from 1277 to 1390 mg/mm3 on the control side and from 1311 to 1381 mg/mm3 on the ligature side, with ratios of 0.98–1.04 averaging 1.0012, which is ideal. In addition to the BV, bone surface area, BMD, and Tb/Th measurements examined in this study, the built-in bone analysis plug-in in Dragonfly also allows further morphometric measurements for more detailed bone analysis, such as trabecular separation, cortical thickness, and cortical area fraction. New users can download the model and import it into Dragonfly to apply it to new μCT samples (AIM/DICOM/TIFF files) [39].

When using the model, the following must be considered for successful application. First, the accuracy of the analysis is highly dependent on sample orientation (Figure 1C). Our automatic segmentation model is based on 2D slices in the coronal plane and includes a total of nine slices as references before and after the current slice. Therefore, to maintain the correctness of the analysis, our model requires the user to set the coronal plane as the primary plane when adjusting the μCT orientation.

Second, we also trained an automatic segmentation model using input 3D data instead of 2D slices in the coronal plane; in this way, input data orientation does not affect the accuracy of the results. However, the 3D unit-based model requires significantly more time for training and automatic segmentation than the 2D slice method: for example, the former takes 2 hours while the latter takes only 5 minutes for automatic segmentation of the whole maxilla. Manually adjusting the orientation may therefore be a more time-effective strategy than using a 3D unit-based model when the user wants to assess a larger number of samples.

Of the three file formats discussed, TIFF files are 8-bit grayscale images with a histogram ranging from 0 to 255, while AIM files have a histogram extending up to 35,000. DICOM files, by contrast, are subject to bias field distortion, leading to variations in tissue imaging intensity. AIM and TIFF samples can be applied directly to the model after orientation adjustment, without normalization. However, DICOM files, which use Hounsfield units (HU), often show inconsistent density histograms across different μCT scans and therefore require normalization from 0 to 1 (background set to 0 and the highest density set to 1) in Dragonfly (Supplementary Figure S1), as corroborated by other studies [40–42]. Despite the additional normalization step required for DICOM files, our study predominantly employs this format for model training and application for three primary reasons: (1) DICOM’s extensive acceptance in clinical settings makes it the leading medical data format; (2) although AIM files contain raw data and offer more detail, their size significantly surpasses that of DICOM, resulting in slower processing; and (3) TIFF files, despite their smaller size, sacrifice detail. Moreover, our U-Net-based model effectively handles AIM and TIFF formats, as it primarily assesses target shape and contrast, independent of the data format.

As shown in Supplementary Figure S2, our model employs data augmentation techniques including horizontal and vertical flipping, rotation within a range of 0–180 degrees, cropping, scaling between 80% and 120%, and brightness adjustment within a range of 75% to 125%. Notably, while maintaining sample orientation and normalization is crucial, minor deviations do not significantly impact the accuracy of the segmentation. Another benefit of this approach is the ease of customizing adjustments to improve sample segmentation: small inaccuracies produced by automatic segmentation, for example around suspended bone fracture fragments that would otherwise affect the bone analysis results, can be corrected manually by the user to increase accuracy (Supplementary Figure S9). Moreover, the adjusted data can be used for further training. Our model has been employed effectively in the μCT analysis of over one hundred mouse maxillary bone samples by our laboratory and collaborative partners. These analyses have been pivotal for advancing the understanding of bone loss in models of periodontitis. The efficacy of this approach is supported by data that, while not yet published, underscore its potential utility in further research.
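The listed augmentations can be sketched as a simple random pipeline (illustrative only; the actual augmentation is configured inside Dragonfly's training dialog, and cropping is omitted here for brevity):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment(img):
    """Apply one random augmentation pass matching the ranges described:
    horizontal/vertical flips, rotation in 0-180 degrees, scaling 80-120%,
    and brightness 75-125%."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)                   # horizontal flip
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)                   # vertical flip
    angle = rng.uniform(0, 180)
    img = ndimage.rotate(img, angle,
                         reshape=False,              # keep original size
                         mode="nearest")
    scale = rng.uniform(0.8, 1.2)
    img = ndimage.zoom(img, scale, mode="nearest")   # rescale 80-120%
    img = img * rng.uniform(0.75, 1.25)              # brightness jitter
    return img

out = augment(np.ones((64, 64), dtype=np.float32))
```

Each training slice passes through this pipeline with freshly sampled parameters, which is what makes the trained model tolerant of the minor orientation and intensity deviations noted above.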

Regarding clinical application, our model was created specifically for murine alveolar bone: all training was conducted on murine bone models, and the training dataset does not include human data. Because of the differences in tooth and root morphology between mice and humans, directly extending the model to clinical human alveolar bone cone-beam computed tomography (CBCT) data is not advisable. A more prudent approach would be to retrain a custom automatic segmentation model tailored for clinical use with a similar methodology. Given sufficient clinical CBCT data, constructing a precise automatic segmentation model with the same software and similar methods would be relatively straightforward.

Conclusion

Here we describe an automatic segmentation model for the rapid, accurate, and reproducible analysis of μCT data to measure changes in mouse alveolar bone in a mouse model of ligature-induced periodontitis. This approach demonstrates new potential uses of AI in dentistry that do not require advanced machine learning knowledge or programming skills.

Supplementary Material

1
2
3
Download video file (28.3MB, mp4)
4
Download video file (9.8MB, mp4)
5
Download video file (4.3MB, mp4)
6
Download video file (3.3MB, mp4)

Acknowledgments

We thank Zixiong Cao from ORS (Object Research Systems) for helping with U-Net model training.

Funding

This study was supported by the National Institute of Dental and Craniofacial Research (NIDCR) grant R01DC028979 and by the National Institute on Deafness and other Communication Disorders (NIDCD) Grant R01DC016598 to Marco Tizzano. μCT was performed in Penn Center for Musculoskeletal Disorders (PCMD) MicroCT Imaging Core (supported by NIH/NIAMS grant P30-AR069619).

Footnotes

Competing interests

The authors have no competing interests to declare.

Credit Author Statement

Ranhui Xi: Contributed to designing, performing, and writing the manuscript. Mamoon Ali: Data curation, formal analysis, Writing - Original draft, Writing - Reviewing and editing. Yilu Zhou: Contributed to μCT scanning and writing the manuscript. Marco Tizzano: Contributed to the concept, design, drafted, and critically revised the manuscript.

All authors gave their final approval, and they agree to be accountable for all aspects of the work.

Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


Data availability

The trained model, test data, and instructional videos are available to download from Mendeley Data (doi: 10.17632/xrwvczfnm9.1) and can be applied to the Dragonfly software [43].

References

  • [1].Kassebaum NJ, Smith AGC, Bernabé E, Fleming TD, Reynolds AE, Vos T, Murray CJL, Marcenes W, Global, Regional, and National Prevalence, Incidence, and Disability-Adjusted Life Years for Oral Conditions for 195 Countries, 1990–2015: A Systematic Analysis for the Global Burden of Diseases, Injuries, and Risk Factors, Journal of dental research 96(4) (2017) 380–387. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Kinane DF, Stathopoulou PG, Papapanou PN, Periodontal diseases, Nature reviews. Disease primers 3 (2017) 17038. [DOI] [PubMed] [Google Scholar]
  • [3].Struillou X, Boutigny H, Soueidan A, Layrolle P, Experimental animal models in periodontology: a review, The Open Dentistry Journal 4 (2010) 37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].Caton JG, Armitage G, Berglundh T, Chapple IL, Jepsen S, Kornman KS, Mealey BL, Papapanou PN, Sanz M, Tonetti MS, A new classification scheme for periodontal and peri‐implant diseases and conditions–Introduction and key changes from the 1999 classification, Journal of periodontology 89 (2018) S1–S8. [DOI] [PubMed] [Google Scholar]
  • [5].Baker PJ, Evans RT, Roopenian DC, Oral infection with Porphyromonas gingivalis and induced alveolar bone loss in immunocompetent and severe combined immunodeficient mice, Archives of Oral Biology 39(12) (1994) 1035–1040. [DOI] [PubMed] [Google Scholar]
  • [6].Al-Rasheed A, Scheerens H, Rennick D, Fletcher H, Tatakis D, Accelerated alveolar bone loss in mice lacking interleukin-10, Journal of Dental Research 82(8) (2003) 632–635. [DOI] [PubMed] [Google Scholar]
  • [7].Zang Y, Song JH, Oh S-H, Kim J-W, Lee MN, Piao X, Yang J-W, Kim O-S, Kim TS, Kim S-H, Targeting NLRP3 inflammasome reduces age-related experimental alveolar bone loss, Journal of Dental Research 99(11) (2020) 1287–1295. [DOI] [PubMed] [Google Scholar]
  • [8].Moro MG, Oliveira MDS, Santana MM, de Jesus FN, Feitosa K, Teixeira SA, Franco GCN, Spolidorio LC, Muscara MN, Holzhausen M, Leukotriene receptor antagonist reduces inflammation and alveolar bone loss in a rat model of experimental periodontitis, J Periodontol 92(8) (2021) e84–e93. [DOI] [PubMed] [Google Scholar]
  • [9].Bezerra MM, de Lima V, Alencar VB, Vieira IB, Brito GA, Ribeiro RA, Rocha FA, Selective cyclooxygenase-2 inhibition prevents alveolar bone loss in experimental periodontitis in rats, J Periodontol 71(6) (2000) 1009–14. [DOI] [PubMed] [Google Scholar]
  • [10].Goes P, Dutra C, Losser L, Hofbauer LC, Rauner M, Thiele S, Loss of Dkk-1 in Osteocytes Mitigates Alveolar Bone Loss in Mice With Periodontitis, Front Immunol 10 (2019) 2924. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Liu YF, Wu LA, Wang J, Wen LY, Wang XJ, Micro-computerized tomography analysis of alveolar bone loss in ligature- and nicotine-induced experimental periodontitis in rats, Journal of Periodontal Research 45(6) (2010) 714–719. [DOI] [PubMed] [Google Scholar]
  • [12].Cavagni J, de Macedo IC, Gaio EJ, Souza A, de Molon RS, Cirelli JA, Hoefel AL, Kucharski LC, Torres IL, Rosing CK, Obesity and Hyperlipidemia Modulate Alveolar Bone Loss in Wistar Rats, J Periodontol 87(2) (2016) e9–17. [DOI] [PubMed] [Google Scholar]
  • [13].Zheng X, Tizzano M, Redding K, He J, Peng X, Jiang P, Xu X, Zhou X, Margolskee RF, Gingival solitary chemosensory cells are immune sentinels for periodontitis, Nat Commun 10(1) (2019) 4496. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Lima MLD, Martins AA, Medeiros CACXD, Guerra GCB, Santos R, Bader M, Pirih FQ, Junior RFDA, Brito GAD, Leitao RFD, Silva RA, Barbosa SJD, Melo RCD, Araujo AAD, The Receptor AT1 Appears to Be Important for the Maintenance of Bone Mass and AT2 Receptor Function in Periodontal Bone Loss Appears to Be Regulated by AT1 Receptor, International Journal of Molecular Sciences 22(23) (2021) 12849. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [15].Wong RL, Hiyari S, Yaghsezian A, Davar M, Casarin M, Lin YL, Tetradis S, Camargo PM, Pirih FQ, Early intervention of peri‐implantitis and periodontitis using a mouse model, Journal of periodontology 89(6) (2018) 669–679. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [16].Wu YH, Taya Y, Kuraji R, Ito H, Soeno Y, Numabe Y, Dynamic microstructural changes in alveolar bone in ligature-induced experimental periodontitis, Odontology 108(3) (2020) 339–349. [DOI] [PubMed] [Google Scholar]
  • [17].Tominari T, Sanada A, Ichimaru R, Matsumoto C, Hirata M, Itoh Y, Numabe Y, Miyaura C, Inada M, Gram-positive bacteria cell wall-derived lipoteichoic acid induces inflammatory alveolar bone loss through prostaglandin E production in osteoblasts, Scientific Reports 11(1) (2021) 1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [18].Ronneberger O, Fischer P, Brox T, U-net: Convolutional networks for biomedical image segmentation, International Conference on Medical image computing and computer-assisted intervention (2015) 234–241. [Google Scholar]
  • [19].Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J, Artificial Intelligence for the Computer-aided Detection of Periapical Lesions in Cone-beam Computed Tomographic Images, Journal of endodontics 46(7) (2020) 987–993. [DOI] [PubMed] [Google Scholar]
  • [20].Zheng Z, Yan H, Setzer FC, Shi KJ, Mupparapu M, Li J, Anatomically constrained deep learning for automating dental cbct segmentation and lesion detection, IEEE Transactions on Automation Science and Engineering 18(2) (2020) 603–614. [Google Scholar]
  • [21].Kurt Bayrakdar S, Orhan K, Bayrakdar IS, Bilgir E, Ezhov M, Gusarev M, Shumilov E, A deep learning approach for dental implant planning in cone-beam computed tomography images, BMC Medical Imaging 21(1) (2021) 86. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Duan X, Gleason RC, Li F, Hosur KB, Duan X, Huang D, Wang H, Hajishengallis G, Liang S, Sex dimorphism in periodontitis in animal models, Journal of Periodontal Research 51(2) (2016) 196–202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [23].Djomehri SI, Candell S, Case T, Browning A, Marshall GW, Yun W, Lau SH, Webb S, Ho SP, Mineral density volume gradients in normal and diseased human tissues, PLoS One 10(4) (2015) e0121611. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Chavez M, Tan M, Kolli T, Zachariadou C, Farah F, Mohamed F, Chu E, Foster B, Bone Sialoprotein Is Critical for Alveolar Bone Healing in Mice, Journal of Dental Research (2022) 00220345221126716. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Shen D, Wu G, Suk HI, Deep Learning in Medical Image Analysis, Annu Rev Biomed Eng 19 (2017) 221–248. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [26].Ker J, Wang L, Rao J, Lim T, Deep learning applications in medical image analysis, IEEE Access 6 (2017) 9375–9389. [Google Scholar]
  • [27].Razzak MI, Naz S, Zaib A, Deep learning for medical image processing: Overview, challenges and the future, Classification in BioApps (2018) 323–350. [Google Scholar]
  • [28].Lee JH, Kim DH, Jeong SN, Choi SH, Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm, Journal of Dentistry 77 (2018) 106–111. [DOI] [PubMed] [Google Scholar]
  • [29].Choi J, Eun H, Kim C, Boosting Proximal Dental Caries Detection via Combination of Variational Methods and Convolutional Neural Network, Journal of Signal Processing Systems for Signal Image and Video Technology 90(1) (2018) 87–97. [Google Scholar]
  • [30].Imangaliyev S, van der Veen MH, Volgenant C, Loos BG, Keijser BJ, Crielaard W, Levin E, Classification of quantitative light-induced fluorescence images using convolutional neural network, arXiv preprint arXiv:.09193 (2017). [Google Scholar]
  • [31].Sherwood AA, Sherwood AI, Setzer FC, Shamili JV, John C, Schwendicke F, A deep learning approach to segment and classify C-shaped canal morphologies in mandibular second molars using cone-beam computed tomography, Journal of endodontics 47(12) (2021) 1907–1916. [DOI] [PubMed] [Google Scholar]
  • [32].Lee JS, Adhikari S, Liu L, Jeong HG, Kim H, Yoon SJ, Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study, Dentomaxillofacial Radiology 48(1) (2019) 20170344. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [33].Lee JH, Kim DH, Jeong SN, Choi SH, Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm, Journal of Periodontal and Implant Science 48(2) (2018) 114–123. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [34].Cui Z, Fang Y, Mei L, Zhang B, Yu B, Liu J, Jiang C, Sun Y, Ma L, Huang J, Liu Y, Zhao Y, Lian C, Ding Z, Zhu M, Shen D, A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images, Nature communications 13(1) (2022) 2096. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Yang H, Wang X, Li G, Tooth and Pulp Chamber Automatic Segmentation with Artificial Intelligence Network and Morphometry Method in Cone-beam CT, 40(2) (2022). [Google Scholar]
  • [36].Chang C-C, Kim SK, Lee C-T, A novel approach to assess volumetric bone loss at immediate implant sites and comparison to linear measurements: A pilot study, Journal of Dentistry 120 (2022) 104083. [DOI] [PubMed] [Google Scholar]
  • [37].Mohammad-Rahimi H, Motamedian SR, Pirayesh Z, Haiat A, Zahedrozegar S, Mahmoudinia E, Rohban MH, Krois J, Lee JH, Schwendicke F, Deep learning in periodontology and oral implantology: A scoping review, Journal of Periodontal Research 57(5) (2022) 942–951. [DOI] [PubMed] [Google Scholar]
  • [38].Yauney G, Rana A, Wong LC, Javia P, Muftu A, Shah P, Automated process incorporating machine learning segmentation and correlation of oral diseases with systemic health, 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC), IEEE, 2019, pp. 3387–3393. [DOI] [PubMed] [Google Scholar]
  • [39].Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, Dörfer C, Schwendicke F, Deep learning for the radiographic detection of periodontal bone loss, Scientific Reports 9(1) (2019) 8495. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Song Y, Yang H, Ge Z, Du H, Li G, Age estimation based on 3D pulp segmentation of first molars from CBCT images using U-Net, Dentomaxillofacial Radiology 52(7) (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [41].Chang Y-W, Sheu TW, GPU acceleration of a patient-specific airway image segmentation and its assessment, arXiv preprint arXiv:2012.10684 (2020). [Google Scholar]
  • [42].Fajar A, Sarno R, Fatichah C, Fahmi A, Reconstructing and resizing 3D images from DICOM files, Journal of King Saud University-Computer and Information Sciences 34(6) (2022) 3517–3526. [Google Scholar]
  • [43].Xi R, Tizzano M (2023), “A reliable deep learning-based method for alveolar bone quantification using a murine model of periodontitis and micro-CT imaging”, Mendeley Data, V1, doi: 10.17632/xrwvczfnm9.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
