Abstract
Purpose: To evaluate the identification of nasal bone fractures on three-dimensional (3D) reconstructions of maxillofacial computed tomography (CT) images, and its clinical diagnostic significance, by applying artificial intelligence (AI) with deep learning (DL). Methods: CT maxillofacial 3D reconstruction images of 39 patients with normal nasal bones and 43 patients with nasal bone fractures were retrospectively analysed, and a total of 247 images were obtained in three views: the frontal, left lateral and right lateral positions. The CT images of all patients were reviewed by two senior specialists to confirm the presence or absence of nasal fractures. Binary classification was performed with a DL pipeline consisting of a YOLOX detection model and a GhostNetv2 classification model. Accuracy, sensitivity, and specificity were used to evaluate the efficacy of the AI model. Nasal fractures were identified by manual independent review and by AI model-assisted review. Results: Compared with manual independent detection, the accuracy, sensitivity, and specificity of AI-assisted film reading improved for both the junior and the senior physician, and the differences were statistically significant (P<0.05). Conclusions: An AI model based on deep learning can be used to assist in the diagnosis of nasal bone fractures, which helps to promote the practical clinical application of deep learning methods.
Keywords: Artificial intelligence, deep learning, nasal bone fracture, diagnosis
Introduction
The nose is located in the anterior midline of the body and protrudes from the face. Because the upper end of the nasal bone is narrow and thick while the lower end is thin, external forces can easily cause nasal bone fracture [1]. According to the classification of Stranc and Robertson [2], nasal bone fractures can be roughly divided into two categories: those caused by lateral impacts and those caused by frontal impacts. Lateral impacts range from a slight depression of a single nasal bone to complete lateral displacement of all structures, with a good prognosis. Frontal impacts are divided into three categories according to the fracture plane: plane 1 damage does not extend beyond the line connecting the lower end of the nasal bone with the anterior nasal spine; plane 2 damage is limited to the external nose and does not extend beyond the orbital margin; and plane 3 damage involves the orbit and even intracranial structures. The nasal bone is therefore very susceptible to fracture from external impact. The skin of the nose becomes rapidly bruised and swollen after traumatic nasal deformity, and the swelling usually takes 7-10 days to subside. The bony support of the external nose consists of the nasal bones, the frontal processes of the maxilla, and the nasal part of the frontal bone. Nasal fracture repositioning usually needs to be completed within two weeks to prevent the fracture site from healing abnormally or forming callus, which requires the clinician to diagnose quickly and determine the surgical plan.

DL is a type of AI method. In image processing, DL imitates human thought and behaviour, training a machine to reason similarly to humans so that it can extract and identify the key parts of an image [3]. This approach has been widely used in diagnosing lung cancer, lung nodules, stomach cancer and other diseases [4]. The use of AI to diagnose nasal bone fractures on CT scans has been reported [5]. However, CT maxillofacial 3D reconstruction is more advantageous than plain nasal CT scanning for identifying the type of fracture, determining whether multiple fractures are present, and suggesting complications, and no such study has yet been performed on these reconstructions. In recent years, the connection between artificial intelligence and medicine has become increasingly close. An AI model that improves the accuracy of imaging physicians' diagnosis of nasal bone fractures on CT maxillofacial 3D reconstruction images could shorten reading time, assist clinicians in quickly determining surgical plans, and improve patient cure rates. In this study, we established and evaluated the clinical application of an AI-assisted diagnostic model for nasal fractures based on CT maxillofacial 3D reconstruction image data.
Methods
General information
The clinical data of 43 patients who sustained nasal bone fractures due to trauma and who were admitted to the Department of Otorhinolaryngology, Head and Neck Surgery for CT maxillofacial 3D reconstruction examination and nasal bone fracture repositioning surgery from January 2018 to September 2023 were retrospectively analysed. One hundred thirty images were collected in the frontal, left lateral and right lateral views, and 117 images of 39 patients with normal nasal bones were used as the control group. Among the patients with nasal bone fracture, there were 30 males and 13 females, with ages ranging from 6 to 60 years and a mean age of 28.67 years. In the control group, there were 26 males and 13 females, with ages ranging from 22 to 72 years and a mean age of 45.58 years. This retrospective study was approved by the Institutional Review Ethics Committee of the Affiliated Hospital of Southwest Medical University (Ethics Approval Number: KY2023251).
Inclusion and exclusion criteria
Inclusion criteria
1) All patients were confirmed to have a nasal bone fracture diagnosed by two specialised senior physicians who reviewed the patients’ CT images. 2) CT maxillofacial 3D reconstruction images suggestive of nasal bone fracture. 3) Complete clinical information supported by imaging data.
Exclusion criteria
1) Patients with a history of nasal trauma or nasal surgery. 2) Patients with trauma greater than two weeks in duration. 3) Patients whose clinical information was missing or incomplete.
CT maxillofacial 3D reconstruction and scanning methods
A GE multislice spiral CT scanner was used, and each patient underwent a CT maxillofacial 3D reconstruction examination. The scanning parameters were as follows: voltage, 120 kV; current, 250 mA; spiral scanning; slice thickness, 0.5 mm; slice spacing, 0.5 mm; and matrix, 512×512. After scanning, the original data were reconstructed and input into the workstation, and 3D images were obtained via three-dimensional reconstruction technology [6] (Figure 1).
Figure 1.
The patient was placed in the supine position and scanned from the top of the skull to the submandibular area.
Picture labelling method
CT maxillofacial 3D reconstruction images of the patients in the control group and the nasal bone fracture group were obtained in three views: frontal, left lateral, and right lateral. The images were labelled with the Django-based LabelMe image annotation tool and outlined with green boxes (the boxes can be hidden); the labelled region included the bilateral nasal bones and the bilateral frontal processes of the maxilla, extending up to the lower edge of the nasal part of the frontal bone and down to the upper edge of the median plate of the ethmoid bone.
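For readers who wish to reproduce this preprocessing step, the sketch below shows one way to read LabelMe-style JSON annotations and crop the labelled nasal region. It assumes the common LabelMe export format (an "imagePath" field plus a list of "shapes" with "label" and "points") and uses illustrative paths and a hypothetical "nose" label; it is not the study's actual annotation-processing code.

```python
import json
from pathlib import Path

from PIL import Image  # pillow


def load_labelme_boxes(json_path):
    """Read a LabelMe-style JSON file and return the image name plus
    (label, axis-aligned bounding box) pairs derived from each shape's points."""
    ann = json.loads(Path(json_path).read_text(encoding="utf-8"))
    boxes = []
    for shape in ann.get("shapes", []):
        xs = [p[0] for p in shape["points"]]
        ys = [p[1] for p in shape["points"]]
        boxes.append((shape["label"], (min(xs), min(ys), max(xs), max(ys))))
    return ann["imagePath"], boxes


def crop_labelled_regions(json_path, image_dir, out_dir):
    """Crop every labelled region (e.g. the nasal area) out of the source image."""
    image_name, boxes = load_labelme_boxes(json_path)
    img = Image.open(Path(image_dir) / image_name)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for i, (label, box) in enumerate(boxes):
        crop = img.crop(tuple(int(v) for v in box))
        crop.save(Path(out_dir) / f"{Path(image_name).stem}_{label}_{i}.png")


# Illustrative usage with hypothetical paths:
# crop_labelled_regions("annotations/case001.json", "images", "crops")
```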
DL-based AI methods
YOLOX model
The YOLOX model was released in July 2021 by Megvii (Kuangshi) Technology, a well-known Chinese AI company, and is a recent-generation object detector in the YOLO series. The model adopts a Darknet53 backbone network with an SPP layer and comprises a training phase and a testing phase [7]. The training phase is divided into forward propagation and backpropagation. In forward propagation, the network infers a predicted value, which is compared with the true value to calculate an error; this error then drives backpropagation. Backpropagation updates the model weights, which determine how strongly each feature contributes to the prediction. This process is repeated over many iterations. In the testing phase, the weights obtained during training are used for forward inference to produce the final detection results [8]. In this study, the dataset was fed into the detection network, and the YOLOX model detected the location of the nose in the face; the detection data were annotated with a single class label, "nose".
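To make the two phases concrete, the following minimal PyTorch-style sketch illustrates the forward-propagation/backpropagation cycle of the training phase and the inference-only testing phase. It is an illustration under stated assumptions (a detector that returns a scalar loss in training mode and detections in evaluation mode, and a data loader yielding image/target tensors), not the authors' actual YOLOX code.

```python
import torch


def train_detector(model, loader, epochs=50, lr=1e-3, device="cuda"):
    """Training phase: forward propagation -> error value -> backpropagation -> weight update."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, targets in loader:          # assumed loader format
            images, targets = images.to(device), targets.to(device)
            loss = model(images, targets)       # forward propagation yields the error value
            opt.zero_grad()
            loss.backward()                     # backpropagation of the error
            opt.step()                          # update the model weights
    return model


@torch.no_grad()
def run_inference(model, images):
    """Testing phase: forward inference only, reusing the weights learned above."""
    model.eval()
    return model(images)  # detections (boxes, scores, classes) in the model's own format
```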
GhostNetv2 classification model
The Ghost module is a lightweight convolutional neural network building block. GhostNetv2 was introduced into the YOLOX network as the backbone, and a bidirectional feature pyramid network was used for feature fusion, increasing the mixing of shallow and deep features to improve detection accuracy for small objects [9] (Figures 2, 3).
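As an illustration of the underlying building block, the sketch below implements a simplified GhostNet-style Ghost module in PyTorch: a small number of ordinary convolutions produce "intrinsic" feature maps, and cheap depthwise convolutions generate the remaining "ghost" maps. GhostNetv2 additionally adds a decoupled fully connected (DFC) attention branch, which is omitted here for brevity; the parameter names are illustrative.

```python
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Simplified Ghost module: intrinsic maps from an ordinary convolution,
    ghost maps from a cheap depthwise convolution, concatenated on the channel axis.
    `out_ch` is assumed divisible by `ratio`."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio               # intrinsic feature maps
        new_ch = out_ch - init_ch               # ghost feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(             # depthwise "cheap operation"
            nn.Conv2d(init_ch, new_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(new_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


# Illustrative usage: 64-channel input, 128-channel output.
# feats = GhostModule(64, 128)(torch.randn(1, 64, 56, 56))
```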
Figure 2.
The CT maxillofacial 3D reconstruction image is first input into the YOLOX detection model; the nasal region selected by the YOLOX model is framed and cropped and then input into the GhostNetv2 classification network, which determines whether there is a fracture of the nasal bone. The classification dataset is divided into two categories, named "guzhe" (fracture) and "normal".
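The two-stage pipeline described in Figure 2 can be sketched as follows. The `detector` and `classifier` objects, the 224×224 input size, and the returned box format are assumptions for illustration rather than the study's exact configuration.

```python
import torch
from PIL import Image
from torchvision import transforms

CLASSES = ["guzhe", "normal"]  # fracture / normal, as labelled in the study

prep = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed classifier input size
    transforms.ToTensor(),
])


@torch.no_grad()
def diagnose(image_path, detector, classifier):
    """Two-stage sketch: a YOLOX-style detector locates the nasal region, and the
    crop is passed to a GhostNetv2-style classifier for the binary decision.
    `detector(img)` is assumed to return (xmin, ymin, xmax, ymax) for the best "nose" box."""
    img = Image.open(image_path).convert("RGB")
    box = detector(img)                              # stage 1: locate the nose
    crop = img.crop(box)                             # frame and crop the nasal region
    logits = classifier(prep(crop).unsqueeze(0))     # stage 2: binary classification
    return CLASSES[int(logits.argmax(dim=1))]
```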
Figure 3.
The first four images show nasal bone fractures: the AI model's detections are marked with blue boxes and the physician group's annotations with green boxes. The last four images are normal: the AI model's detections are marked with red boxes and the physician group's annotations with green boxes.
Statistical methods
Statistical analysis was performed with SPSS 20.0. The precision, sensitivity, specificity, accuracy and average accuracy of manual detection and AI-assisted manual annotation of fracture sites were compared using Pearson's chi-square test for fourfold (2×2) or R×C contingency tables, and the results are expressed as percentages. ROC curves were used to analyse the efficacy of the various labelling modes for fracture detection, and the areas under the ROC curves (AUCs) were compared with DeLong's test. Differences were considered statistically significant at P<0.05.
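For illustration, the chi-square and ROC analyses described above can be reproduced with standard Python libraries, as sketched below with placeholder data (not the study's counts). DeLong's comparison of correlated AUCs is not available in SciPy or scikit-learn and would require a separate implementation (for example, the pROC package in R).

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import roc_auc_score, roc_curve

# Fourfold (2x2) table: rows = reading mode, columns = correct / incorrect calls.
# The counts below are placeholders for illustration only.
table = np.array([[118, 12],
                  [125,  5]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, P = {p:.3f}")

# ROC analysis: y_true = ground truth (1 = fracture), y_score = reader call or model score.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.3, 0.6, 0.4, 0.1, 0.7, 0.2])
fpr, tpr, _ = roc_curve(y_true, y_score)
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```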
Results
YOLOX model + GhostNetv2 model detection results
The YOLOX model was first trained through a training-validation phase, and after validation, it was tested with an independent test set. After detection, the GhostNetv2 model was used to identify and classify the nasal bone region. The accuracy, sensitivity, specificity, and average accuracy of the AI model in detecting nasal bone fracture were 97.44%, 95.00%, 95.24%, and 97.22%, respectively.
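For clarity, the reported indices follow the standard confusion-matrix definitions, as in the short sketch below (the counts shown are made up for illustration and are not the study's data).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard definitions of the evaluation indices used in this study."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # proportion of fracture cases correctly identified
    specificity = tn / (tn + fp)   # proportion of normal cases correctly identified
    precision   = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision


# Example with made-up counts: 38 TP, 2 FN, 40 TN, 2 FP.
print(diagnostic_metrics(tp=38, fp=2, tn=40, fn=2))
```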
Comparison of manual independent detection and AI-assisted clinician detection results
The accuracy of two clinicians (one senior and one junior) in reading films was compared before and after AI assistance. With AI assistance, the accuracy, sensitivity, and specificity of the junior physician were 96.15%, 85.62%, and 95.05%, respectively, and those of the senior physician were 99.23%, 96.27%, and 99.12%, respectively; all of these values were greater than those of manual independent detection. For the junior physician, accuracy increased from 90.77% without AI assistance to 96.15% with AI assistance, sensitivity increased from 74.68% to 85.62%, and specificity increased from 86.52% to 95.05%; for the senior physician, accuracy increased from 95.38% to 99.23%, sensitivity increased from 91.85% to 96.27%, and specificity increased from 94.64% to 99.12%. These differences were statistically significant (P<0.05) (Tables 1, 2).
Table 1.
Comparison of the diagnostic efficacy of the junior physician with and without AI assistance in diagnosing nasal bone fractures
Diagnostic index | Junior physician | AI model + junior physician | χ² | P |
---|---|---|---|---|
Accuracy | 90.77% | 96.15% | 3.084 | <0.01 |
Sensitivity | 74.68% | 85.62% | 5.655 | <0.01 |
Specificity | 86.52% | 95.05% | 4.228 | <0.01 |
Table 2.
Comparison of the diagnostic efficacy of the senior physician with and without AI assistance in diagnosing nasal bone fractures
Diagnostic index | Senior physician | AI model + senior physician | χ² | P |
---|---|---|---|---|
Accuracy | 95.38% | 99.23% | 3.670 | 0.033 |
Sensitivity | 91.85% | 96.27% | 2.298 | <0.01 |
Specificity | 94.64% | 99.12% | 3.732 | 0.030 |
Receiver Operating Characteristic (ROC) curve
The diagnostic efficacy of manual detection and AI-assisted manual analysis was evaluated using ROC curves: AUC=0.787 (95% CI 0.727-0.848, P<0.05) for the junior physician; AUC=0.889 (95% CI 0.843-0.936, P<0.05) for the AI-assisted junior physician; AUC=0.928 (95% CI 0.890-0.966, P<0.05) for the senior physician; and AUC=0.979 (95% CI 0.957-1.000, P<0.05) for the AI-assisted senior physician (Figure 4).
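As a supplementary note, a confidence interval for an AUC can also be obtained by percentile bootstrap when a DeLong implementation is not at hand; the sketch below illustrates this alternative (the intervals reported in this paper are DeLong-based, and the function here is illustrative rather than the study's code).

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    n, aucs = len(y_true), []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)               # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:       # skip resamples missing a class
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)
```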
Figure 4.
The ROC plot shows the AUC (area under the ROC curve) of the four diagnostic modalities (AI-assisted senior physician > senior physician > AI-assisted junior physician > junior physician), demonstrating their relative diagnostic efficacy, with the AI-assisted senior physician achieving the highest.
Discussion
The incidence of nasal bone fractures exceeds that of other maxillofacial fractures [10]. Nasal bone fractures are mostly caused by falls and collisions, and patients tend to be young [11]. In addition to the nasal bone, a compound fracture can involve the ethmoid bone, orbit, maxillofacial region and skull base, giving the lesion a wider scope [12]. Nasal septum fractures are often associated with nasal congestion and deformity; if an ethmoid fracture is caused by a transverse fracture of the nasal bone or the frontal bone, cerebrospinal fluid leakage may occur and may even be life-threatening. Therefore, if a nasal fracture is not diagnosed and treated in time, it will affect facial appearance, nasal function, and normal life activities [13]. In the past, X-ray or CT scanning was commonly used to diagnose nasal bone fractures [14]. X-ray imaging has the advantages of low cost and simple operation; however, if the tissue is swollen or the fracture is not displaced, overlapping shadows are easily produced and the fracture line cannot be clearly visualised [15]. CT scanning is more accurate than X-ray for identifying the fracture site, the degree of fracture, and the degree of damage to the surrounding tissues [16]. The advantage of CT maxillofacial 3D reconstruction over X-ray and CT scanning is that it can clearly depict simple or compound nasal fractures and support preliminary diagnosis and early recognition of combined skull base fractures or possible complications; it is currently the main diagnostic modality [17]. Applying AI to CT maxillofacial 3D reconstruction not only improves the diagnosis rate of nasal fractures but also reduces the consumption of manpower and medical resources.
Nasal bone fracture diagnosis has high misdiagnosis and underdiagnosis rates among less experienced imaging physicians, mainly because of their short time in practice and inexperience. Additionally, the particular structure of the nasal bone, with its thin and small bone mass, means the fracture line is easily confused with a bone suture or the image of the vascular sulcus of the nasal bone [18]. Diagnostic accuracy is not low for senior physicians; however, the main difficulty is determining the type of fracture, and sometimes a normal image without a fracture is misdiagnosed as a glabellar or linear fracture, while possible complications are easily overlooked [19]. In children, the external nose is smaller than in adults, and swelling is more likely after fracture, which makes the diagnosis more difficult [20]. The modern combination of artificial intelligence with medical diagnosis and treatment is an inevitable trend of the new era [21]. AI-assisted diagnosis of various diseases has become quite popular in clinical practice but is not yet common in otorhinolaryngology or head and neck surgery. YOLOX is a new generation of object detector released during the global COVID-19 pandemic, when recognising masked faces was required to enter and exit certain places for disease prevention and control; its accuracy and performance are superior to those of its counterparts [22]. GhostNetv2, a lightweight network model, has a shorter inference time for a single image and higher accuracy for small target images [23]. In this study, AI-assisted reading improved accuracy, sensitivity, and specificity by 5.38%, 10.94%, and 8.53%, respectively, for the junior physician and by 3.85%, 4.42%, and 4.48%, respectively, for the senior physician; these differences were statistically significant. The results showed that all the AI diagnostic indicators were greater than those of the clinicians. Additionally, all the indicators of AI model-assisted fracture diagnosis by the senior and junior physicians improved compared with their independent detection, which can effectively reduce the rates of missed diagnosis and misdiagnosis of nasal fractures for both clinicians and imaging physicians. The ROC curves further confirmed the value of AI. With improved diagnostic accuracy, clinicians can also prevent and treat possible complications based on the type and location of the fracture. The external nose occupies only a small area of the face, and the nasal bone and its surrounding bones are anatomically delicate and much smaller than long bones such as the humerus and femur [24]. Therefore, this study used the novel YOLOX model combined with the GhostNetv2 model and demonstrated through training and testing that AI can help clinicians improve the accuracy, sensitivity and specificity of film reading. Since the incidence of nasal bone fractures remains high and the window for surgery is limited, it is important for both senior and junior physicians to be able to determine the fracture site and type from CT images accurately and quickly, which indirectly shortens reading time.
The results of this study show that using artificial intelligence to assist in the diagnosis of nasal fractures on CT maxillofacial 3D reconstruction images can improve diagnostic efficiency and shorten diagnostic time for clinicians. In our opinion, the benefits of artificial intelligence in medicine are not limited to assisting with reading medical images; it can also be used in treatment or in determining the prognosis of a disease. In terms of treatment, artificial intelligence can assist physicians in clinical operations, reducing human error and making operations more precise and safer; for example, robotic arms can perform operations that were previously impossible. In terms of prognosis, CT images of patients before and after surgery can be compared, or long-term follow-up of patients' clinical symptoms or the recovery of their anatomical structures after surgery can be conducted. Taking nasal bone fracture as an example, an AI model could be constructed by collecting clinical information such as nasal congestion, nasal discharge and cerebrospinal fluid leakage and by comparing the degree of healing of the nasal bone and nasal septum before and after surgery. After extensive training, artificial intelligence could make a preliminary prediction of the postoperative prognosis based on the patient's preoperative information and help the physician formulate a reasonable individualised treatment plan. This needs to be studied further in the future.
This study also has several limitations. ① The sample was from a single centre, and the sample size was small, which affects accuracy; moreover, it cannot represent all the situations that may occur in the clinic and may introduce bias. ② The images must be annotated before AI model training. Different physicians annotate the fracture area differently, and their judgement remains subjective, which may affect model training performance. ③ In this study, AI identified only the presence or absence of nasal bone fracture on CT maxillofacial 3D reconstruction; diagnosis of the fracture site and type was not addressed and can be studied further in the future.
In summary, AI assistance in reading CT maxillofacial 3D reconstruction images effectively improves the efficiency and accuracy of physicians in diagnosing nasal bone fractures and shows good prospects for clinical application, allowing it to better serve clinicians and patients in an era of global technological development.
Disclosure of conflict of interest
None.
References
1. Davari R, Pirzadeh A, Sattari F. Etiology and epidemiology of nasal bone fractures in patients referred to the otorhinolaryngology section, 2019. Int Arch Otorhinolaryngol. 2023;27:e234-e239. doi: 10.1055/s-0043-1768208.
2. Yu H, Jeon M, Kim Y, Choi Y. Epidemiology of violence in pediatric and adolescent nasal fracture compared with adult nasal fracture: an 8-year study. Arch Craniofac Surg. 2019;20:228-32. doi: 10.7181/acfs.2019.00346.
3. Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal. 2022;79:102444. doi: 10.1016/j.media.2022.102444.
4. Chamberlin J, Kocher MR, Waltz J, Snoddy M, Stringer NFC, Stephenson J, Sahbaee P, Sharma P, Rapaka S, Schoepf UJ, Abadia AF, Sperl J, Hoelzer P, Mercer M, Somayaji N, Aquino G, Burt JR. Automated detection of lung nodules and coronary artery calcium using artificial intelligence on low-dose CT scans for lung cancer screening: accuracy and prognostic value. BMC Med. 2021;19:55. doi: 10.1186/s12916-021-01928-3.
5. Jeong Y, Jeong C, Sung KY, Moon G, Lim J. Development of AI-based diagnostic algorithm for nasal bone fracture using deep learning. J Craniofac Surg. 2024;35:29-32. doi: 10.1097/SCS.0000000000009856.
6. Shah S, Uppal SK, Mittal RK, Garg R, Saggar K, Dhawan R. Diagnostic tools in maxillofacial fractures: is there really a need of three-dimensional computed tomography? Indian J Plast Surg. 2016;49:225-33. doi: 10.4103/0970-0358.191320.
7. Jeon YD, Kang MJ, Kuh SU, Cha HY, Kim MS, You JY, Kim HJ, Shin SH, Chung YG, Yoon DK. Deep learning model based on you only look once algorithm for detection and visualization of fracture areas in three-dimensional skeletal images. Diagnostics (Basel). 2023;14:11. doi: 10.3390/diagnostics14010011.
8. Raimundo A, Pavia JP, Sebastião P, Postolache O. YOLOX-Ray: an efficient attention-based single-staged object detector tailored for industrial inspections. Sensors (Basel). 2023;23:4681. doi: 10.3390/s23104681.
9. Li ML, Sun GB, Yu JX. A pedestrian detection network model based on improved YOLOv5. Entropy (Basel). 2023;25:381. doi: 10.3390/e25020381.
10. Ghassemi A, Riediger D, Hölzle F, Gerressen M. The intraoral approach to lateral osteotomy: the role of a diamond burr. Aesthetic Plast Surg. 2013;37:135-8. doi: 10.1007/s00266-012-0011-2.
11. Kim SH, Han DG, Shim JS, Lee YJ, Kim SE. Clinical characteristics of adolescent nasal bone fractures. Arch Craniofac Surg. 2022;23:29-33. doi: 10.7181/acfs.2022.00038.
12. Seol YJ, Kim YJ, Kim YS, Cheon YW, Kim KG. A study on 3D deep learning-based automatic diagnosis of nasal fractures. Sensors (Basel). 2022;22:506. doi: 10.3390/s22020506.
13. Peeters N, Lemkens P, Leach R, Gemels B, Schepers S, Lemmens W. Facial trauma. B-ENT. 2016;(Suppl 26):1-18.
14. Nam Y, Choi Y, Kang J, Seo M, Heo SJ, Lee MK. Diagnosis of nasal bone fractures on plain radiographs via convolutional neural networks. Sci Rep. 2022;12:21510. doi: 10.1038/s41598-022-26161-7.
15. Leapo L, Uemura M, Stahl MC, Patil N, Shah J, Otteson T. Efficacy of X-ray in the diagnosis of pediatric nasal fracture. Int J Pediatr Otorhinolaryngol. 2022;162:111305. doi: 10.1016/j.ijporl.2022.111305.
16. Ardeshirpour F, Ladner KM, Shores CG, Shockley WW. A preliminary study of the use of ultrasound in defining nasal fractures: criteria for a confident diagnosis. Ear Nose Throat J. 2013;92:508-12.
17. Wu Y, Wang P, Han X, Liu Z, Qiu S. Three-dimension CT assisted treatment of nasal fracture. Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi. 2020;34:452-5. doi: 10.13201/j.issn.2096-7993.2020.05.016.
18. Cao R, Wu H, Li Y, Wang Z, Huang Q. Clinical diagnostic value for nasal bone fracture by three-dimensional reconstruction of spiral CT. Lin Chuang Er Bi Yan Hou Ke Za Zhi. 2004;18:270-1.
19. Kim D, Oh JT, Ahn SH, Kim HJ, Bae MR. Facial trauma affects the radiological diagnosis of nasal bone fractures. J Craniofac Surg. 2024;35:e544-6. doi: 10.1097/SCS.0000000000010248.
20. Jian F, Wu S. Comparison of the diagnosis and treatment of nasal bone fracture by physicians in China with different levels of experience. J Craniofac Surg. 2024;35:e497-e501. doi: 10.1097/SCS.0000000000010231.
21. Anderson PG, Baum GL, Keathley N, Sicular S, Venkatesh S, Sharma A, Daluiski A, Potter H, Hotchkiss R, Lindsey RV, Jones RM. Deep learning assistance closes the accuracy gap in fracture detection across clinician types. Clin Orthop Relat Res. 2023;481:580-8. doi: 10.1097/CORR.0000000000002385.
22. Manssor SAF, Sun S, Elhassan MAM. Real-time human recognition at night via integrated face and gait recognition technologies. Sensors (Basel). 2021;21:4323. doi: 10.3390/s21134323.
23. Yang L, Cai H, Luo X, Wu J, Tang R, Chen Y, Li W, Li W. A lightweight neural network for lung nodule detection based on improved ghost module. Quant Imaging Med Surg. 2023;13:4205-21. doi: 10.21037/qims-21-1182.
24. Yang C, Yang L, Gao GD, Zong HQ, Gao D. Assessment of artificial intelligence-aided reading in the detection of nasal bone fractures. Technol Health Care. 2023;31:1017-25. doi: 10.3233/THC-220501.