PLoS One. 2022 Jan 7;17(1):e0261870. doi: 10.1371/journal.pone.0261870

Development of an artificial intelligence-based algorithm to classify images acquired with an intraoral scanner of individual molar teeth into three categories

Nozomi Eto 1,2,*, Junichi Yamazoe 3, Akiko Tsuji 2, Naohisa Wada 1,4, Noriaki Ikeda 2
Editor: Francesco Bianconi
PMCID: PMC8741029  PMID: 34995298

Abstract

Background

Forensic dentistry identifies deceased individuals by comparing postmortem dental charts, oral-cavity pictures and dental X-ray images with antemortem records. However, conventional forensic dentistry methods are time-consuming and thus unable to rapidly identify large numbers of victims following a large-scale disaster.

Objective

Our goal is to automate the dental filing process by using intraoral scanner images. In this study, we generated and evaluated an artificial intelligence-based algorithm that classified images of individual molar teeth into three categories: (1) full metallic crown (FMC); (2) partial metallic restoration (In); or (3) sound tooth, carious tooth or non-metallic restoration (CNMR).

Methods

A pre-trained model was created using oral-cavity pictures from patients. Then, the algorithm was generated through transfer learning and training with images acquired from cadavers by intraoral scanning. Cross-validation was performed to reduce bias. The ability of the model to classify molar teeth into the three categories (FMC, In or CNMR) was evaluated using four criteria: precision, recall, F-measure and overall accuracy.

Results

The average value (variance) was 0.952 (0.000140) for recall, 0.957 (0.0000614) for precision, 0.952 (0.000145) for F-measure, and 0.952 (0.000142) for overall accuracy when the algorithm was used to classify images of molar teeth acquired from cadavers by intraoral scanning.

Conclusion

We have created an artificial intelligence-based algorithm that analyzes images acquired with an intraoral scanner and classifies molar teeth into one of three types (FMC, In or CNMR) based on the presence/absence of metallic restorations. Furthermore, the accuracy of the algorithm reached about 95%. This algorithm was constructed as a first step toward the development of an automated system that generates dental charts from images acquired by an intraoral scanner. The availability of such a system would greatly increase the efficiency of personal identification in the event of a major disaster.

1 Introduction

Dental evidence is widely used for personal identification because the teeth exhibit age-related changes, have features that are unique to an individual, and resist decomposition after death [1, 2]. Forensic dentistry identifies deceased individuals by collecting postmortem data, such as dental charts, oral-cavity pictures and dental X-ray images, and comparing them with antemortem records [3]. A notable disadvantage of this approach is that it is a slow process due to the time needed to perform the postmortem examinations, manually interpret the findings and compare them with antemortem records. Hence, the use of conventional techniques can lead to delays in victim identification following major disasters such as earthquakes that claim many lives. The development of an automated system that produces dental charts from digitized images of the teeth would greatly speed up the process of victim identification after a large-scale disaster.

Previous studies aimed at improving personal identification methods have evaluated superposition of reconstructed images of the palatal rugae [4], superposition of computed tomography (CT) images of the skull [5, 6], and superposition of dental X-ray images [7]. However, obtaining X-ray or CT images from a huge number of victims is not only time-consuming but also difficult to achieve at the site of a disaster because it requires radiation-generating equipment that is not easily portable. Although oral-cavity pictures potentially could be used as part of an automated system for victim identification, it is often challenging to obtain postmortem pictures with a dental camera due to the presence of rigor mortis that restricts mouth opening. By contrast, real-time imaging of the teeth can be performed easily and rapidly with an intraoral scanner even when there is some restriction of mouth opening. Furthermore, an intraoral scanner is a handheld device that is highly portable, making it well suited for use after a large-scale disaster.

The development of a system that automatically creates dental charts from images acquired with an intraoral scanner would greatly facilitate the rapid creation of postmortem dental charts for personal identification in the event of a major disaster. In this study, as a component of an automated dental chart filling system, we generated and evaluated an artificial intelligence (AI)-based algorithm that analyzes images of molar teeth acquired with an intraoral scanner and classifies each tooth as one of three types: (1) full metallic crown (FMC); (2) partial metallic restoration (In); or (3) sound tooth, carious tooth or non-metallic restoration (CNMR).

2 Materials and methods

2.1 Ethics

In this study, existing information was anonymized before use. The correspondence table linking the anonymized data to identities was stored in a separate file in a separate location. Individual written or oral informed consent was not obtained because only existing information was used; instead, information about this research was disclosed on our website together with contact details so that patients or their families could opt out if they did not wish to be included. This study was approved by the Kyushu University Certified Institutional Review Board for Clinical Trials (reference no. 2020–499). All experiments were conducted in accordance with approved guidelines.

2.2 Intraoral scanning

Images were obtained using a TRIOS third-generation intraoral scanner (3Shape, Copenhagen, Denmark). The TRIOS scanner does not require opacification (powder-free) and relies on the principle of confocal microscopy to acquire a sequence of video images using structured illumination from a light-emitting diode. The imaging system produces a colored visualization of the scanned structures. The scanner’s acquisition software generates proprietary files (DICOM, Digital Imaging and Communications in Medicine format) that can be exported to an open file format (STL, Standard Tessellation Language).

Since the overarching objective of our research is to develop a system for use in disaster victim identification, this study was conducted on cadavers. The oral cavities of 34 cadavers scheduled for dissection were imaged with an intraoral scanner at the Department of Forensic Pathology and Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan. First, the oral cavity of each cadaver was cleaned to remove any contaminants (including body fluids) that might hinder the collection of accurate findings. Then, the head of the scanner was inserted into the oral cavity between the upper and lower teeth and slowly moved along the teeth to acquire the images. The minimal opening required to insert the scanner head into the oral cavity was around 2 cm (the thickness of the scanner head). The three-dimensional images of the upper and lower teeth were saved as STL files, and occlusal views of the teeth were generated with a snipping tool and saved as Portable Network Graphics (PNG) files. ImageJ (National Institutes of Health, Bethesda, MD, USA) was used to generate individual images of the upper and lower molars (256×256 pixels) from the occlusal views of the teeth (examples are shown in Fig 1).

Fig 1. Representative images of individual upper and lower molar teeth (occlusal views) extracted from images obtained with the intraoral scanner.

Fig 1

A. Occlusal view generated by intraoral scanning. Images of individual molar teeth were extracted from the occlusal view. B. Full metallic crown (FMC). C. Partial metallic restoration (In). D. Sound tooth, carious tooth or non-metallic restoration (CNMR).
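In the study, the individual molar crops were produced manually in ImageJ. For readers who wish to script the equivalent crop-and-resize step, the following Python sketch illustrates it, assuming the occlusal-view PNG and the bounding-box coordinates of each tooth are supplied by the operator; the file names and coordinates shown are hypothetical.

```python
from PIL import Image

def extract_molar(occlusal_png, box, out_png, size=(256, 256)):
    """Crop one molar from an occlusal view and save it as a 256x256 PNG.

    box = (left, upper, right, lower) pixel coordinates of the tooth,
    chosen by the operator (hypothetical values below).
    """
    view = Image.open(occlusal_png).convert("RGB")
    view.crop(box).resize(size).save(out_png)

# hypothetical example: one upper molar outlined at these coordinates
extract_molar("upper_occlusal_view.png", (120, 340, 420, 640), "tooth_16.png")
```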

2.3 Classification of the extracted images

The individual molar teeth extracted from the scanned images were classified as FMC (full metallic crown), In (partial metallic restoration) or CNMR (sound tooth, carious tooth or non-metallic restoration). Images that were unclear on visual inspection were excluded from the analysis.

2.4 Deep learning machine

The deep learning machine used in this study comprised a Core i9-7920X central processing unit (2.90 GHz, 12 cores, 24 threads; Intel, Santa Clara, CA, USA), a TITAN V graphics processing unit (32 GB, 5120 cores, 4096-bit bus width; Nvidia, Santa Clara, CA, USA) and Ubuntu 16.04.5 LTS (Canonical, London, UK).

2.5 Network architecture

The convolutional neural network (CNN) was based on the LeNet architecture proposed by LeCun et al. [8]. The LeNet architecture consisted of two convolution layers, two max-pooling layers and two affine layers (Fig 2). When color molar images were provided as the input to the CNN, the output of the network classified the tooth condition as one of three types (FMC, In or CNMR). The following hyperparameters were used in this study: training epochs, 30; base learning rate, 0.001; training/validation split, 80%/20%.

Fig 2. Architecture of the LeNet used in our study.

Fig 2

The LeNet architecture consisted of two convolution layers, two max-pooling layers and two affine layers.
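The paper specifies the layer types (two convolution, two max-pooling and two affine layers), the three-class output and the training hyperparameters, but not the kernel sizes, channel counts or framework. The PyTorch sketch below is therefore only one plausible instantiation of the described LeNet-style network, with the unstated values assumed.

```python
import torch
import torch.nn as nn

class LeNetStyleCNN(nn.Module):
    """Two convolution, two max-pooling and two affine (fully connected) layers,
    ending in a three-way output (FMC / In / CNMR)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # RGB input; channel counts and kernel size are assumptions
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 61 * 61, 120),     # 256x256 input -> 61x61 feature maps after two conv/pool stages
            nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# hyperparameters reported in the paper: 30 epochs, base learning rate 0.001, 80%/20% train/validation split
model = LeNetStyleCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # the optimizer itself is not stated and is assumed here
criterion = nn.CrossEntropyLoss()
```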

2.6 Generation of a pre-trained model

Deep learning in general is considered to require a large number of training samples [9]. Since the number of cadavers available for this research was limited, the present study utilized the transfer learning technique, a method of transferring knowledge across domains to overcome problems associated with small training datasets [10]. Therefore, we carried out a pre-training process using oral-cavity pictures obtained from patients who had attended the dentistry section of Kyushu University Hospital. Training data (i.e., occlusal views of individual molar teeth) were extracted from the oral-cavity pictures using the method described in section 2.2 (Intraoral scanning) (Fig 3), and each tooth was classified as FMC, In or CNMR as described above. Images that were unclear on visual inspection were excluded. For each category, 300 images were randomly divided into 10 folders of 30 images each, and training and test runs of the CNN were performed a total of 10 times for cross-validation.

Fig 3. Representative images of individual upper and lower molar teeth (occlusal views) extracted from oral-cavity pictures.

Fig 3

A. Full metallic crown (FMC). B. Partial metallic restoration (In). C. Sound tooth, carious tooth or non-metallic restoration (CNMR).
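The 10-fold procedure described above (300 images per category split into 10 folders of 30, each folder serving once as the test set) can be outlined as follows. This is an illustrative sketch rather than the authors' code, and the actual training call is left as a user-supplied callable.

```python
import random

def make_folds(image_paths, n_folds=10, seed=0):
    """Randomly split one category's image paths into n_folds folders of equal size
    (300 images -> 10 folders of 30)."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    fold_size = len(paths) // n_folds
    return [paths[i * fold_size:(i + 1) * fold_size] for i in range(n_folds)]

def cross_validation_runs(folds_per_class, train_and_test):
    """One run per fold index k: folder k of every class is the test set,
    the remaining folders are the training set."""
    n_folds = len(next(iter(folds_per_class.values())))
    for k in range(n_folds):
        train = {c: [p for i, f in enumerate(folds) if i != k for p in f]
                 for c, folds in folds_per_class.items()}
        test = {c: folds[k] for c, folds in folds_per_class.items()}
        train_and_test(train, test)   # user-supplied callable: trains the CNN and records the metrics
```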

2.7 Generation of a classification model

The classification model was generated by applying transfer learning to the pre-trained model, using images obtained from the cadavers with the intraoral scanner. For each tooth category (FMC, In or CNMR), 60 images were randomly divided into 4 folders of 15 images each, and a total of 4 training and test runs of the CNN were performed for cross-validation.
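A minimal sketch of this transfer-learning step is given below, reusing the LeNet-style network sketched in section 2.5 and assuming the pre-trained parameters have been saved to disk; the file name is hypothetical, and since the paper does not say whether any layers were frozen, all weights are simply used to initialize the classification model and then fine-tuned on the intraoral-scanner images.

```python
import torch

# initialize the classification model with the pre-trained parameters
model = LeNetStyleCNN(num_classes=3)
model.load_state_dict(torch.load("pretrained_oral_photo_model.pt"))  # hypothetical file name

# fine-tune on the cadaver intraoral-scanner images with the same hyperparameters as before
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
```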

2.8 Two-step cascade method for classification of tooth condition

An additional trained model was generated using a two-step cascade method to classify tooth condition, again using images obtained from cadavers with the intraoral scanner. In the first step, each tooth was classified according to the presence or absence of a metallic restoration. In the second step, teeth with a metallic restoration were sub-classified as full or partial (Fig 4).

Fig 4. Schematic diagram of the two-step cascade classification.

Fig 4

FMC: full metallic crown; In: partial metallic restoration; CNMR: sound tooth, carious tooth or non-metallic restoration.
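At inference time the cascade can be sketched as below, assuming two already-trained classifiers are supplied as callables (both names are hypothetical): one deciding whether a metallic restoration is present, the other sub-classifying metallic restorations as full or partial.

```python
def classify_tooth(image, metal_detector, full_or_partial):
    """Two-step cascade: step 1 decides presence/absence of a metallic restoration,
    step 2 sub-classifies a detected metallic restoration as full or partial."""
    if not metal_detector(image):          # step 1: no metallic restoration detected
        return "CNMR"
    return "FMC" if full_or_partial(image) == "full" else "In"   # step 2
```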

2.9 Evaluation of the models

The ability of each model to classify tooth condition was evaluated using four criteria: precision, recall, F-measure and overall accuracy. Precision was calculated as the fraction of predictions for a given class that were correct. Recall was calculated as the fraction of instances of a given class that were correctly predicted. F-measure was defined as the harmonic mean of precision and recall: F-measure = (2 × precision × recall) / (recall + precision). Overall accuracy was calculated as the fraction of all instances that were correctly classified.
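These four criteria can be computed from a confusion matrix as sketched below (NumPy). The paper reports a single value per test for each criterion, which is assumed here to be the macro-average over the three classes.

```python
import numpy as np

def evaluate(conf):
    """conf[i, j] = number of class-i instances predicted as class j (rows: true, columns: predicted)."""
    tp = np.diag(conf).astype(float)
    recall = tp / conf.sum(axis=1)                       # per-class recall
    precision = tp / conf.sum(axis=0)                    # per-class precision
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / conf.sum()                     # overall accuracy
    return recall.mean(), precision.mean(), f_measure.mean(), accuracy  # macro-averages (assumed)
```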

3 Results

3.1 Evaluation of the pre-trained model

Table 1 shows the recall, precision, F-measure and overall accuracy values for each of 10 cross-validated tests using oral-cavity pictures obtained from patients. The average value (variance) was 0.937 (0.00128) for recall, 0.941 (0.00108) for precision, 0.937 (0.00129) for F-measure, and 0.939 (0.00107) for overall accuracy (values given to 3 significant figures).

Table 1. Recall, precision, F-measure and overall accuracy for each test of the pre-trained model.

Test Recall Precision F-measure Accuracy
1 0.8973 0.9091 0.8970 0.8988
2 0.8466 0.8579 0.8462 0.8588
3 0.9659 0.9662 0.9658 0.9662
4 0.9544 0.9579 0.9548 0.9550
5 0.9436 0.9442 0.9432 0.9438
6 0.9655 0.9696 0.9659 0.9662
7 0.9321 0.9317 0.9317 0.9325
8 0.9555 0.9607 0.9553 0.9555
9 0.9555 0.9607 0.9553 0.9555
10 0.9555 0.9561 0.9551 0.9555
Average 0.9372 0.9414 0.9370 0.9388
Variance 1.28 × 10⁻³ 1.08 × 10⁻³ 1.29 × 10⁻³ 1.07 × 10⁻³

3.2 Evaluation of the classification model

The classification algorithm was generated through transfer learning using images obtained from cadavers by intraoral scanning. Table 2 shows the recall, precision, F-measure and overall accuracy values for each of 4 cross-validated tests. The average value (variance) was 0.952 (0.000140) for recall, 0.957 (0.0000614) for precision, 0.952 (0.000145) for F-measure, and 0.952 (0.000142) for overall accuracy.

Table 2. Recall, precision, F-measure and overall accuracy for each test of the classification model.

Test Recall Precision F-measure Accuracy
1 0.9333 0.9444 0.9326 0.9333
2 0.9555 0.9583 0.9539 0.9545
3 0.9659 0.9662 0.9658 0.9662
4 0.9544 0.9579 0.9548 0.9550
Average 0.9523 0.9567 0.9518 0.9523
Variance 1.40 × 10⁻⁴ 6.10 × 10⁻⁵ 1.44 × 10⁻⁴ 1.41 × 10⁻⁴

3.3 Evaluation of the two-step cascade classification model

Table 3 (step 1 of the two-step cascade classification model) and Table 4 (step 2 of the two-step cascade classification model) present the recall, precision, F-measure and overall accuracy values for each of 4 cross-validated tests using images acquired from cadavers with the intraoral scanner. Step 1 of the two-step cascade classification model (i.e., presence or absence of a metallic restoration) achieved particularly high values (greater than 0.987) for recall, precision, F-measure and overall accuracy.

Table 3. Recall, precision, F-measure, and overall accuracy for each test using the first step of the two-step model.

Test Recall Precision F-measure Accuracy
1 1.000 1.000 1.000 1.000
2 0.9827 0.9687 0.9750 0.9772
3 1.000 1.000 1.000 1.000
4 0.9666 0.9838 0.9745 0.9777
Average 0.9873 0.9881 0.9874 0.9887
Variance 1.93 × 10⁻⁴ 1.70 × 10⁻⁴ 1.60 × 10⁻⁴ 1.27 × 10⁻⁴

Table 4. Recall, precision, F-measure, and overall accuracy for each test using the second step of the two-step model.

Test Recall Precision F-measure Accuracy
1 0.9000 0.9166 0.8989 0.9000
2 0.9666 0.9666 0.9654 0.9655
3 0.9333 0.9411 0.9330 0.9333
4 0.9666 0.9687 0.9666 0.9666
Average 0.9416 0.9483 0.9410 0.9414
Variance 7.60 × 10⁻⁴ 4.50 × 10⁻⁴ 7.70 × 10⁻⁴ 7.40 × 10⁻⁴

4 Discussion

Dental evidence has been used for personal identification during recent natural disasters. However, many dentists have little or no experience of collecting dental information from cadavers and experience psychological distress when faced with this task [9]. These factors can lead to errors in judgment as well as limitations in manpower availability for personal identification of victims after a large-scale disaster. Although previous reports have described the automatic interpretation of dental findings and the collation of ante/postmortem images [11, 12], these studies were based on X-rays or CT scans of the cadavers, which are time-consuming to perform and require specialized equipment that generates radiation and lacks portability. To avoid the need for such equipment, in the present study images of individual molar teeth were acquired with an intraoral scanner, and an algorithm was created to categorize the tooth in each image as FMC, In or CNMR. Notably, the algorithm was able to classify molar teeth with an accuracy of around 95%.

The intraoral scanner utilized in this study has several advantageous features that make it well suited to the acquisition of dental evidence from cadavers. First, the captured images are displayed in real time on a dedicated application. Second, the images can be saved as digital data for later analysis. Third, the scanner used in this study comes with RealColor Technology that allows it to accurately reproduce color tones [13]. Fourth, the TRIOS scanner does not require opacification (powder-free) because it is equipped with technology that controls for the reflection of light by metals and other substances; hence, this scanner is easier to use than early-generation scanners that require the application of powder for opacification. Finally, the intraoral scanner is a highly portable, handheld device with a small scanning head that can be inserted through an opening of only 2 cm. Thus, unlike taking oral-cavity pictures with a dental camera, intraoral scanning can be performed even when mouth opening is somewhat restricted by rigor mortis or other factors. Based on the above features, we believe that the use of an intraoral scanner would improve the accuracy and speed of dental evaluation in the event of a large-scale disaster.

In this study, the number of images we were able to obtain by intraoral scanning of cadavers was limited. Possible methods to overcome this limitation include data augmentation to increase the size of the training sample [9] and transfer learning [14]. We performed transfer learning in this study because a large number of oral-cavity pictures similar to the images acquired with the intraoral scanner were available. First, the CNN was pre-trained using oral-cavity pictures, which were relatively easy to collect. Then, using the parameters of the pre-trained model to initialize the classification model, we trained the classification model on images obtained by intraoral scanning of cadavers. This approach allowed us to construct a highly accurate CNN using only a small number of intraoral scans.

The use of dental evidence for the personal identification of disaster victims requires that the dental findings are collected accurately [15]. Since teeth with composite resin restorations can sometimes be mistaken for non-restored teeth even when observed with the naked eye, the present study limited the classification of molar tooth condition to only three categories (i.e., FMC, In and CNMR) that exhibit obvious differences in color. The accuracy of the algorithm was around 95% when these three categories were used and reached about 98% when the classification was based only on the presence or absence of a metal-colored restoration (Table 3), which suggests that the accuracy of the algorithm can be improved by simplifying the classification task. However, in order to be utilized for personal identification purposes, the algorithm will need to be improved so that it can recognize non-metallic restorations and thereby provide a more complex classification. We envisage that additional research will allow the algorithm to be further developed so that it not only recognizes a wider range of tooth features (including both metallic and non-metallic restorations) but also identifies individual teeth in an occlusal view. The creation of such an algorithm would allow dental charts to be automatically generated from occlusal views obtained by intraoral scanning.

In conclusion, we have developed an AI-based algorithm that can analyze images acquired with an intraoral scanner and classify molar teeth into one of three types (FMC, In or CNMR) based on the presence/absence of metallic restorations. Furthermore, the accuracy of the algorithm reached about 95%. This algorithm was created as a first step toward the construction of a system that can automatically generate a dental chart from images obtained with an intraoral scanner. The development of such an automated system would greatly improve the efficiency of personal identification in the event of a major disaster.

Acknowledgments

The authors would like to thank Dr Kenichi Morooka, Professor, Graduate School of Natural Science and Technology, Okayama University for helpful discussion and comments on the manuscript. We are also grateful to Dr Yongsu Yoon, Assistant Professor, Department of Radiological Science, Dongseo University for help with the image editing and deep learning methods. We thank OXMEDCOMMS (www.oxmedcomms.com) for writing assistance.

Data Availability

All relevant data are within the paper.

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Utsuno H. Victim identification in large-scale disasters using dental findings. IATSS Res. 2019;43:90–96.
  • 2. Sable G, Rindhe D. A review of dental biometrics from teeth feature extraction and matching techniques. Int J Sci Res. 2014;3:2720–2722.
  • 3. Katsumura S, Sato K, Ikawa T, Yamaura K, Ando E, Shigeta Y, et al. “High-precision, reconstructed 3D model” of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data. Leg Med. 2016;18:37–43. doi: 10.1016/j.legalmed.2015.11.007
  • 4. Gibelli D, Angelis DD, Pucciarelli V, Riboli F, Ferrario VF, Dolci C, et al. Application of 3D models of palatal rugae to personal identification: hints at identification from 3D-3D superimposition techniques. Int J Leg Med. 2018;132:1241–1245. doi: 10.1007/s00414-017-1744-x
  • 5. Ishii M, Yayama K, Motani H, Sakuma A, Yasjima D, Hayakawa M, et al. Application of superimposition-based personal identification using skull computed tomography images. J Forensic Sci. 2011;56:960–966. doi: 10.1111/j.1556-4029.2011.01797.x
  • 6. dos Santos Rocha S, de Paula Ramos DL, de Gusmao Paraiso Cavalcanti M. Applicability of 3D-CT facial reconstruction for forensic individual identification. Pesqui Odontol Bras. 2003;17:24–28. doi: 10.1590/s1517-74912003000100005
  • 7. Jain AK, Chen H. Matching of dental X-ray images for human identification. Pattern Recognit. 2004;37:1519–1532. doi: 10.1016/j.patcog.2003.12.016
  • 8. LeCun Y, Haffner P, Bottou L, Bengio Y. Object recognition with gradient-based learning. In: Forsyth DA, Mundy JL, Gesu V, Cipolla R, editors. Shape, Contour and Grouping in Computer Vision, Lecture Notes in Computer Science vol 1681. Berlin, Heidelberg: Springer; 1999. pp. 319–345.
  • 9. Miki Y, Muramatsu C, Hayashi T, Zhou X, Hara T, Katsumata A, et al. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput Biol Med. 2017;80:24–29. doi: 10.1016/j.compbiomed.2016.11.003
  • 10. Zhuang F, Qi Z, Duan K, Xi D, Zhu Y, Zhu H, et al. A comprehensive survey on transfer learning. Proc IEEE. 2021;109:43–76. doi: 10.1109/JPROC.2020.3004555
  • 11. Zhou J, Abdel-Mottaleb M. A content-based system for human identification based on bitewing dental X-ray images. Pattern Recognit. 2005;38:2132–2142. doi: 10.1016/j.patcog.2005.01.011
  • 12. Trochesset DA, Serchuk RB, Colosi DC. Generation of intra-oral-like images from cone beam computed tomography volumes for dental forensic image comparison. J Forensic Sci. 2015;59:510–513. doi: 10.1111/1556-4029.12336
  • 13. Ahmed KE, Wang T, Li KY, Luk WK, Burrow MF. Performance and perception of dental students using three intraoral CAD/CAM scanners for full-arch scanning. J Prosthodont Res. 2019;63:167–172. doi: 10.1016/j.jpor.2018.11.003
  • 14. Hu W, Qian Y, Soong FK, Wang Y. Improved mispronunciation detection with deep neural network trained acoustic models and transfer learning based logistic regression classifiers. Speech Commun. 2015;67:154–166. doi: 10.1016/j.specom.2014.12.008
  • 15. Aoki T, Ito K, Aoyama S. Disaster victim identification and ICT. IEICE Fundam Rev. 2015;9:119–130. doi: 10.1587/essfr.9.2_119

Decision Letter 0

Francesco Bianconi

8 Nov 2021

PONE-D-21-30844
Development of an artificial intelligence-based algorithm to evaluate tooth condition from images acquired with an intraoral three-dimensional scanner
PLOS ONE

Dear Dr. ETO,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that your work has merit but does not fully meet the journal’s publication criteria in the form as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Your manuscript has been evaluated by two experts in the field and their comments are attached here below for your reference. During the revision some significant concerns have emerged, and in particular:

  • The main outcomes of the work are not clearly exposed;

  • Further details are needed in different sections of the manuscript: both reviewers agree on this point, please check their comments carefully;

  • Abstract and Introduction need thorough restructuring;

  • The limitations of this research should be described more in depth.

 Please submit your revised manuscript by Dec 23 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Francesco Bianconi, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please amend your current ethics statement to address the following concerns:

a) Did participants provide their written or verbal informed consent to participate in this study?

b) If consent was verbal, please explain i) why written consent was not obtained, ii) how you documented participant consent, and iii) whether the ethics committees/IRB approved this consent procedure.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

5. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this work an AI-based algorithm has been proposed for the automatic identification of the condition of molar teeth from cadavers. The main aim of this activity was to provide a first step for the construction of a digitized dental chart from imaging data, in support of post-mortem identification analyses in forensic dentistry.

The paper is well written and has a good general organisation among sections. The main issue that the reviewer can see, is that the research seems to be not concluded as it is presented, and the main outcomes of this activity are not clearly exposed. The authors stated that this is the first part of a wider project and research, but the novelty of the proposed methodology and how it is placed in relation to the state of the art is not defined. Moreover, it should be better presented throughout the work how this part of the research activity is or will be linked to the whole project.

The reviewer suggests major revisions in order to improve the article impact.

INTRODUCTION

The Introduction section should be improved better clarifying the potentiality and the novelty of the proposed approach, in order to classify it in relation to the most used techniques.

MATERIALS AND METHODS

The ‘Intraoral 3D scanning’ section describes mainly the characteristics of the scanner, but there is no information on the experimental setup and procedure followed for the data acquisition from cadavers. In the Discussion section it is stated that the intraoral 3D scanning can be used also in challenging situations, typical when the mouth opening width is limited because of rigor mortis. For this reason, in the ‘Materials and Methods’ section, a step-by-step description of the procedure used for the 3D scanning acquisitions on cadavers should be reported, also highlighting, if there have been, difficulties faced.

Line 89: dcm is the file extension. The file format is called DICOM.

Line 93: how many cadavers were available for the analysis? The only information seems to be that sixty images were available for the classification model (line 142).

Figure 1: this figure shows an example of the three molar teeth types, but for the model implementation it is stated that you have used scans. So, the reviewer is wondering if these scans were exported in some CAD format (usually stl) and then these used for the analysis. This point is not clear. For this reason, the reviewer suggests to show some images representing DICOM or stl models (or better both of them), in order to clarify the type of available data. Moreover, if some scans post-processing was performed this should be reported in the Methods section as well as the accuracy of the model (number of elements, texture, and so on).

Lines 126-127: here the authors stated that the number of available scanning from cadavers was not sufficient for the training of the network. Is it possible to sustain this with some bibliography and provide information on the proper number of training data?

Figure 4: this image is not very useful; indeed, the two-step cascade process is clear as it is explained in this section. It would be more beneficial for the reader if a comprehensive workflow of the overall process and AI-based algorithm is provided with all the main steps clearly described.

RESULTS

Lines 167-168: this sentence in not comprehensible; please rephrase.

DISCUSSION

Lines 201-202: It is not clear which features have been used for the identification of these conditions: texture, shape, volume? In the Methods section a more detailed description of the available features should be provided along with the features used within the AI architecture.

Lines 205-206: How the model can be used for victim recognition based only on the recognition of the molar teeth condition? This point is not clear to the reviewer. The authors have described the full process for the creation and training of the network but its application for the specific case of victim recognition remains vague.

Lines 233-234: the possible use of morphological characteristics for victim recognition is undoubtedly an improvement for the model, but how do you link the work here presented with the possibility to use new features for victim recognition? In general, the connection between the part of the work here presented and the overall project has to be better clarified and carried on throughout the paper.

Reviewer #2: The paper is interesting and worth publishing.

The abstract should be reworded and more clearly structured.

For the readers unfamiliar with intraoral scanners more information should be provided on the applied scanner and explanation why it is powder free should be added.

The main limitation of the study is a relatively low number of cases, especially from the intraoral scanner. However, preliminary results are promising.

Another limitation of the study is that the authors teach the algorithm to distinguish between 3 groups: 1. full metallic crown 2. partial metallic restoration and 3. sound tooth, caries tooth and tooth with non-metallic restoration merged in one group. Probably metallic restorations are prevalent in the studied Japanese population, but this is a major limitation in populations in which more esthetic restorations are applied. The authors should discuss this limitation more indepth and also provide an attempt on solution of the problem of differentiation between healthy teeth and restored teeth using composite materials mimicking sound tissues.

What did the authors mean by "treatment scars"?

The conclusions in the present form are rather an extended summary of the results, therefore should be rephrased so that they are related to the aim of the study.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Jan 7;17(1):e0261870. doi: 10.1371/journal.pone.0261870.r002

Author response to Decision Letter 0


24 Nov 2021

We thank the reviewers for their careful reading of our manuscript and useful comments. Our responses to the reviewers’ comments are presented below. All revisions to the manuscript are highlighted in red-colored font and referred to by line number in the responses below. Please note that the changes made do not majorly affect the content, conclusions or framework of the paper.

REVIEWER #1

INTRODUCTION

Comment

The Introduction section should be improved better clarifying the potentiality and the novelty of the proposed approach, in order to classify it in relation to the most used techniques.

Response

Thank you for this excellent recommendation. We have completely rewritten the Introduction section to streamline the text and focus on the potentiality and novelty of our study in relation to currently used approaches. We refer you to the new Introduction section in the revised manuscript (lines 48–76).

MATERIALS AND METHODS

Comment

The ‘Intraoral 3D scanning’ section describes mainly the characteristics of the scanner, but there is no information on the experimental setup and procedure followed for the data acquisition from cadavers. In the Discussion section it is stated that the intraoral 3D scanning can be used also in challenging situations, typical when the mouth opening width is limited because of rigor mortis. For this reason, in the ‘Materials and Methods’ section, a step-by-step description of the procedure used for the 3D scanning acquisitions on cadavers should be reported, also highlighting, if there have been, difficulties faced.

Response

Thank you for this helpful suggestion. We have modified the Intraoral Scanning subsection of the Materials and Methods section to include more information about the technique used for intraoral scanning of cadavers (lines 96–105). We did not encounter any specific difficulties while performing intraoral scanning, hence none are described in the manuscript.

Comment

Line 89: dcm is the file extension. The file format is called DICOM.

Response

Thank you for pointing out this inadvertent error. We have revised “DCM” to “DICOM” (line 90).

Comment

Line 93: how many cadavers were available for the analysis?

Response

A total of 34 cadavers were available for our analysis. We have added this information to the Intraoral Scanning subsection of the Materials and Methods section (line 94).

Comment

Figure 1: this figure shows an example of the three molar teeth types, but for the model implementation it is stated that you have used scans. So, the reviewer is wondering if these scans were exported in some CAD format (usually stl) and then these used for the analysis. This point is not clear. For this reason, the reviewer suggests to show some images representing DICOM or stl models (or better both of them), in order to clarify the type of available data. Moreover, if some scans post-processing was performed this should be reported in the Methods section as well as the accuracy of the model (number of elements, texture, and so on).

Response

Thank you for these important queries. The images shown in Figure 1 were obtained as follows. First, the 3D images obtained from cadavers by intraoral scanning were saved as STL files, and occlusal views of the teeth were generated with a snipping tool and saved as PNG files. Then, ImageJ was used to generate individual images of the upper and lower molars (256×256 pixels) from the occlusal views of the teeth; examples of these images are presented in Figure 1. We have revised the Intraoral Scanning subsection of the Materials and Methods section to include the above information (lines 101–105). In addition, Figure 1 has been modified to include a panel showing an occlusal view generated by intraoral scanning, from which images of individual molar teeth were extracted. The title and legend for Figure 1 have been updated accordingly (lines 107–111).

Comment

Lines 126-127: here the authors stated that the number of available scanning from cadavers was not sufficient for the training of the network. Is it possible to sustain this with some bibliography and provide information on the proper number of training data?

Response

The number of images required to train a network varies depending on the structure of the network and the complexity of the problem. Currently, no methods are available to predict the number of images needed, and there are no relevant publications that we can cite. Since only a limited number of cadavers were available to us, we made the decision to use transfer learning to overcome any potential problems that might be associated with the use of a small training dataset. We have added some information about the transfer learning technique and a supporting reference citation to the Generation Of A Pre-trained Model subsection of the Materials and Methods section (lines 136–139).

Comment

Figure 4: this image is not very useful; indeed, the two-step cascade process is clear as it is explained in this section. It would be more beneficial for the reader if a comprehensive workflow of the overall process and AI-based algorithm is provided with all the main steps clearly described.

Response

Thank you for this helpful suggestion. We have modified Figure 4 to make it easier to understand.

Comment

Lines 167-168: this sentence in not comprehensible; please rephrase.

Response

We are sorry that the meaning of the original text was unclear. We have modified this sentence (lines 178–179).

DISCUSSION

Comment

Lines 201-202: It is not clear which features have been used for the identification of these conditions: texture, shape, volume? In the Methods section a more detailed description of the available features should be provided along with the features used within the AI architecture.

Response

We are sorry that the point was unclear. We assume that the classification was based on the color of the tooth in the image, because a metallic restoration appears darker in the image. However, we did not use a color threshold to distinguish dark metallic regions from light non-metallic regions, because good accuracy was achieved without such a setting. Therefore, there is no additional information to provide at this stage.

Comment

Lines 205-206: How the model can be used for victim recognition based only on the recognition of the molar teeth condition? This point is not clear to the reviewer. The authors have described the full process for the creation and training of the network but its application for the specific case of victim recognition remains vague.

Response

Thank you for this important question. The long-term aim of this research is to develop an AI-based algorithm that can automatically generate a dental chart from images acquired with an intraoral scanner. These postmortem dental charts could be compared with antemortem records to facilitate the identification of disaster victims. As a first step toward this aim, the present study has developed an algorithm to classify the condition of the molar teeth into three types. We have completely rewritten the Introduction section (see lines 49–76) and Discussion section (lines 211–264) to better explain the objective, significance and future potential of our research.

Comment

Lines 233-234: the possible use of morphological characteristics for victim recognition is undoubtedly an improvement for the model, but how do you link the work here presented with the possibility to use new features for victim recognition? In general, the connection between the part of the work here presented and the overall project has to be better clarified and carried on throughout the paper.

Response

Since the focus of the present study was the detection of dental restorations (specifically, metallic restorations) rather than morphological characteristics, we have deleted the text referring to morphological characteristics in order to avoid confusion. Furthermore, we have rewritten the Discussion section to explain more clearly that the present research is a first step toward developing an automated system for the construction of postmortem dental charts from imaging data obtained by intraoral scanning.

REVIEWER #2

Comment

The abstract should be reworded and more clearly structured.

Response

Thank you for this helpful recommendation. We have rewritten the Abstract section of the manuscript to improve its structure (lines 20–46).

Comment

For the readers unfamiliar with intraoral scanners more information should be provided on the applied scanner and explanation why it is powder free should be added.

Response

Thank you for this useful suggestion. We have added more information about the intraoral scanner to the Intraoral Scanning subsection of the Materials and Methods section (lines 96–105) and the Discussion section (lines 222–234).

Comment

The main limitation of the study is a relatively low number of cases, especially from the intraoral scanner. However, preliminary results are promising.

Response

Since there was a limitation to the number of cadavers available for this research, we utilized the technique of transfer learning, which focuses on transferring knowledge across domains to overcome the problems of small training datasets. Using this approach, we were able to develop our algorithm successfully despite the limited number of cadavers. We have added more details about the use of transfer learning to the Generation Of A Pre-trained Model subsection of the Materials and Methods section (lines 137–146) and to the Discussion section of the manuscript (lines 235–243).

Comment

Another limitation of the study is that the authors teach the algorithm to distinguish between 3 groups: 1. full metallic crown 2. partial metallic restoration and 3. sound tooth, caries tooth and tooth with non-metallic restoration merged in one group. Probably metallic restorations are prevalent in the studied Japanese population, but this is a major limitation in populations in which more esthetic restorations are applied. The authors should discuss this limitation more in depth and also provide an attempt on solution of the problem of differentiation between healthy teeth and restored teeth using composite materials mimicking sound tissues.

Response

We agree entirely that the algorithm will need to be further developed to detect non-metallic (composite resin) restorations before it can be widely used to facilitate personal identification in the event of a large-scale disaster. The use of dental evidence for the personal identification of disaster victims requires that the collected dental findings are highly accurate. Since teeth with composite resin restorations can sometimes be mistaken for non-restored teeth even when observed with the naked eye, the present study limited the classification of molar tooth condition to only three categories (i.e., FMC, In and CNMR) that exhibit obvious differences in color. The accuracy of the algorithm was around 95% when these three categories were used and reached about 98% when the classification was based only on the presence or absence of a metal-colored restoration. We view our research as a first step toward the creation of a more complex algorithm that can accurately detect a wide range of tooth features, including both metallic and non-metallic restorations. Furthermore, we envisage that such an algorithm would allow postmortem dental charts to be automatically and rapidly generated from occlusal views obtained by intraoral scanning. We have completely rewritten the entire Discussion section of the manuscript, which now includes the above information (lines 243–257).

Comment

What did the authors mean by "treatment scars"?

Response

Thank you for this query. This term was intended to describe evidence of prior treatment. However, this term has been deleted from the manuscript following major revisions to the Discussion section.

Comment

The conclusions in the present form are rather an extended summary of the results, therefore should be rephrased so that they are related to the aim of the study.

Response

We have rewritten the conclusion so that it is better related to the aim of the study (lines 258–264).

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Francesco Bianconi

7 Dec 2021

PONE-D-21-30844R1
Development of an artificial intelligence-based algorithm to classify images acquired with an intraoral scanner of individual molar teeth into three categories
PLOS ONE

Dear Dr. ETO,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that your paper can be made suitable for publication after minor revisions. Specifically, we only ask you to address two very minor points raised by Reviewer #1 (please find them below) before proceeding to the final publication steps.

Please submit your revised manuscript by Jan 21 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Francesco Bianconi, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed all the comments and now the paper is more detailed and sounds clearer for the reader.

Only two minor suggestions:

Figure 4: ‘over’ should be ‘cover’?

Line 264: it should be ‘the construction of a system’

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Jan 7;17(1):e0261870. doi: 10.1371/journal.pone.0261870.r004

Author response to Decision Letter 1


8 Dec 2021

We thank the reviewer for his/her careful reading of our manuscript and useful comments. Our responses to the reviewers’ comments are presented below. All revisions to the manuscript are highlighted in red-colored font and referred to by line number in the responses below. Please note that the changes made do not majorly affect the content, conclusions or framework of the paper.

REVIEWER #1

Comment

The authors have addressed all the comments and now the paper is more detailed and sounds clearer for the reader.

Response

We thank Reviewer#1 for the positive comments regarding the revised manuscript.

Comment

Figure 4: ‘over’ should be ‘cover’?

Response

Thank you for pointing out this inadvertent error. We have revised ‘over’ to ‘cover’ (Figure 4).

Comment

Line 264: it should be ‘the construction of a system’

Response

Thank you for this helpful suggestion. We have modified the sentence ‘the construction a system’ to ‘the construction of a system’ (Line 264).

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Francesco Bianconi

13 Dec 2021

Development of an artificial intelligence-based algorithm to classify images acquired with an intraoral scanner of individual molar teeth into three categories

PONE-D-21-30844R2

Dear Dr. ETO,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Francesco Bianconi, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Francesco Bianconi

30 Dec 2021

PONE-D-21-30844R2

Development of an artificial intelligence-based algorithm to classify images acquired with an intraoral scanner of individual molar teeth into three categories

Dear Dr. Eto:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Francesco Bianconi

Academic Editor

PLOS ONE
