Heliyon. 2024 May 10;10(10):e31010. doi: 10.1016/j.heliyon.2024.e31010

Feasibility of rib fracture detection in low-dose computed tomography images with a large, multicenter datasets-based model

Liang Jin a,b,⁎,1, E Youjun c,1, Zheng Ye d,1, Pan Gao a, Guoliang Wei c, Jia qi Zhang c, Ming Li a,b,e,⁎⁎
PMCID: PMC11103521  PMID: 38770294

Abstract

Purpose

To evaluate the feasibility of rib fracture detection in low-dose computed tomography (CT) images with a RetinaNet-based approach and to assess the potential of low-dose CT for rib fracture detection compared with regular-dose CT images.

Materials and methods

The RetinaNet-based deep learning model was trained using 7300 scans with 50,410 rib fractures from four centers as internal training datasets. The external test datasets consisted of both regular-dose and low-dose chest-abdomen CT images of rib fractures; the MICCAI 2020 RibFrac Challenge Dataset was used as the public dataset. Radiologists' interpretations were used as reference standards. The performance of the model in rib fracture detection was compared with the radiologists' interpretation.

Results

In total, 728 traumatic rib fractures of 100 patients [60 men (60 %); mean age, 53.45 ± 11.19 years (standard deviation, SD); range, 18–77 years] were assessed in the external datasets. In these patients, the regular-dose group had a mean CT dose index volume (CTDIvol) of 7.18 mGy (SD: 2.22) and a mean dose length product (DLP) of 305.38 mGy cm (SD: 95.31); the low-dose group had a mean CTDIvol of 2.79 mGy (SD: 1.11) and a mean DLP of 131.52 mGy cm (SD: 55.58). The sensitivity of the RetinaNet-based model and that of the radiologists was 0.859 and 0.721 in the low-dose CT images and 0.886 and 0.794 in the regular-dose CT images, respectively.

Conclusions

These findings indicate that the RetinaNet-based model can detect rib fractures in low-dose CT images with a robust performance, indicating its feasibility in assisting radiologists with rib fracture diagnosis.

Keywords: Low-dose, Rib fracture, Deep learning, Multicenter, CT

Highlights

  • Proposed RetinaNet-based model trained using 7300 scans with 50,410 rib fractures.

  • RetinaNet-based model detects rib fractures in low-dose CT with robust performance.

  • RetinaNet-based model can assist radiologists in rib fracture detection.

Abbreviations

CT

computed tomography

CTDIvol

CT dose index volume

DLP

dose length product

FP

false positive

Fps

FP per Scan

FROC

free-response receiver operating characteristic curve

LDCA-CT

low-dose chest–abdomen CT

PA

posteroanterior

RCA-CT

regular-dose chest–abdomen CT

SD

standard deviation

1. Introduction

Accurate diagnosis of rib fractures is vital for the treatment and prognosis of patients with chest trauma [1,2]. Compared to specific, but insensitive, standard posteroanterior (PA) chest radiography, multidetector computed tomography (CT) provides more accurate evaluation of rib fractures [1,[3], [4], [5]]. In clinical settings, as well as in forensic examinations assessing the degree of disability, confirming the location and number of rib fractures is essential [2,6,7]. Identification of rib fractures in CT images is difficult and labor-intensive [8], and reducing the burden of image reading while increasing accuracy is necessary. Worldwide, keeping ionizing radiation as low as reasonably achievable while maintaining diagnostic image quality is emphasized [4,9,10]. There is a growing demand for safer, more efficient diagnostic techniques that minimize radiation exposure without compromising diagnostic accuracy, especially in vulnerable populations such as children, pregnant women, and patients requiring frequent imaging.

Deep learning is an attractive subfield of machine learning, itself an effective branch of artificial intelligence [11]. Niiya et al. [12] developed a deep learning-based automatic detection algorithm for rib fractures in 46 high-energy trauma cases. Zhang et al. [13] proposed a combined nnU-Net and DenseNet model, which achieved a sensitivity of 95 % for rib fractures and reduced the false-positive and false-negative rates for rib fracture detection to 5 %. Hong et al. [14] used deep learning-based diagnostic tools to improve the sensitivity and efficiency of rib fracture identification. Wu et al. [15] developed a deep learning algorithm to detect rib fractures using multicenter CT datasets.

Despite advancements in low-dose CT technology, a significant gap remains in the automated, reliable detection of rib fractures in images of reduced quality. Existing studies have primarily focused on regular-dose chest-abdomen CT (RCA-CT) images; limited exploration has been conducted regarding the potential of deep learning techniques to address the challenges associated with low-dose chest-abdomen CT (LDCA-CT) imaging for rib fracture detection [4]. This leads to the research question, “How can deep learning models be optimized to accurately detect rib fractures in low-dose CT images while overcoming the limitations of reduced image quality and contrast?” To the best of our knowledge, no existing report shows that deep learning-based algorithms achieve robust performance for rib fracture detection in low-dose CT images. Hence, we developed a RetinaNet-based approach and evaluated its detection performance on both RCA-CT and LDCA-CT images.

2. Materials and methods

2.1. Data collection and preprocessing

The training data consisted of 7300 scans with 50,410 rib fractures from four centers. These scans were obtained from a single RCA-CT examination performed for rib fracture detection. They were collected from hospitals in four regions of China: east, west, north, and south. The primary scanners were from UIH, Philips, SIEMENS, GE MEDICAL SYSTEMS, and TOSHIBA; the number of images obtained from each scanner brand was similar. The inclusion criteria were as follows: (1) traumatic patients with thin-slice chest-abdomen CT images (<3 mm) containing all ribs, and (2) CT images containing a bone-window reconstruction series. The included images were split into a training set of 6272 scans (86 %) and a test set of 1028 scans (14 %).

We manually delineated the traumatic rib fractures using rectangular 3D bounding boxes. To annotate tight three-dimensional bounding boxes, we used the easyAnno platform developed by YiZhun, as shown in Fig. 2. Two radiologists with 3 and 5 years of experience annotated the CT scans in a double-blinded fashion. The annotations were then confirmed by a clinical specialist with 10 years of experience as the final reference standard. When the clinical specialist could not confirm whether an annotation was a real fracture due to the lack of follow-up CT images, the fracture was considered a suspected rib fracture.

Fig. 2.

Fig. 2

Bounding boxes displayed in the easyAnno platform developed by YiZhun.

The external test datasets comprised 100 patients with 728 traumatic rib fractures who underwent both regular- and low-dose chest-abdomen CT between January 2008 and September 2021 at Huadong Hospital. We searched the hospital's electronic medical records and radiology information systems for patients with traumatic rib fractures identified on chest-abdomen CT scans (1–1.25 mm). The inclusion criteria were as follows: (1) traumatic patients with thin-slice chest-abdomen CT images (1–1.25 mm) containing all ribs, and (2) traumatic patients who underwent a follow-up high-pitch or low-dose chest-abdomen CT after their first regular-dose chest-abdomen CT, in which breathing or motion artifacts degraded diagnostic accuracy.

All 100 patients underwent chest-abdomen CT using two CT scanners. One was a new-generation dual-source CT scanner (Somatom Drive, Siemens Healthcare, Forchheim, Germany) with the following parameters for RCA-CT: tube voltage, 120 kVp; automated anatomical tube current modulation with 188 mAref.qual (CARE Dose 4D, Siemens Healthineers); pitch, 0.8; and collimation, 1 mm. The LDCA-CT was performed in a high-pitch mode using the following parameters: automatic tube voltage selection with 100 kVref.qual (ATVS, CARE kV™, Siemens Healthineers); automated anatomical tube current modulation with 188 mAref.qual (CARE Dose 4D, Siemens Healthineers); pitch, 3.2; and collimation, 1 mm. All imaging data were reconstructed using a bone reconstruction algorithm with a thickness of 1.0 mm (Fig. 1). The other scanner, a gemstone CT scanner (Discovery HD750; GE Healthcare, Waukesha, WI, USA), used the following parameters for RCA-CT: tube voltage, 120 kVp with a noise index of 11 HU; pitch, 0.984; anode rotation time, 0.5 s; and layer thickness, 0.625 mm with 0.625-mm intervals. The LDCA-CT used a noise index of 26 HU, with the other parameters identical to those of RCA-CT.

Fig. 1.

Fig. 1

Flowchart of the data collection for training and validation of this study.

During preprocessing, the multi-scale CT volumes were randomly flipped and randomly cropped to a constant size (192 × 192 × 192). We then normalized the HU values of the cropped volumes using a bone window with a width of 2000 and a level of 700; the normalized volumes served as the model input.
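The windowing step above can be sketched in a few lines of NumPy (an illustrative sketch; the function name and the rescaling of windowed values to [0, 1] are our assumptions, as the paper does not state the exact mapping):

```python
import numpy as np

def window_normalize(volume_hu, level=700, width=2000):
    """Clip a CT volume (in HU) to a bone window and rescale to [0, 1].

    With level 700 and width 2000 the window spans [-300, 1700] HU.
    """
    lo, hi = level - width / 2.0, level + width / 2.0
    vol = np.clip(volume_hu.astype(np.float32), lo, hi)
    return (vol - lo) / (hi - lo)
```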

2.2. RetinaNet architecture of model development and training

Herein, we propose an advanced Dual Attention RetinaNet rib fracture detector. The detector is based on RetinaNet [16] and consists of two components: a novel feature pyramid network and subnets for classification and regression. Because rib fracture lesions vary widely in size (4 mm–67 mm), a feature pyramid network was applied. The network blends feature maps from different layers, fully exploiting both low-level texture information and high-level semantic information and thereby improving the detection of multi-scale lesions. In addition, we sought to enable the model to accurately transfer knowledge learned from regular-dose data to low-dose data. To achieve this, we propose a Dual Attention mechanism together with a set of data augmentation strategies.
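The top-down fusion performed by a feature pyramid network can be illustrated with a minimal NumPy sketch (illustrative only: the learned 1 × 1 lateral convolutions and trainable upsampling of a real FPN are replaced by identity and nearest-neighbour operations, and all names are ours):

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, D, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2).repeat(2, axis=3)

def fpn_fuse(bottom_up):
    """Top-down pyramid fusion: each coarser map is upsampled and added to
    the next finer one, blending high-level semantics with low-level
    texture. `bottom_up` lists (C, D, H, W) maps, finest first, each level
    at half the resolution of the previous (channels assumed matched)."""
    fused = [bottom_up[-1]]                 # coarsest level passes through
    for feat in reversed(bottom_up[:-1]):
        fused.append(feat + upsample2x(fused[-1]))
    return fused[::-1]                      # finest first again
```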

The Dual Attention mechanism captures the interrelationships between different local parts of a CT image. Through this mechanism, the model tends to exploit inherent structural information rather than relying on image quality. SENet [17] introduced a channel attention mechanism that adaptively recalibrates channel-wise feature responses by explicitly modeling interdependencies between channels; however, it does not account for the spatial domain, which is essential for capturing the structural information of images. The Convolutional Block Attention Module (CBAM) offers another attention mechanism, using average pooling and maximum pooling to aggregate spatial and channel information. We found, however, that using SE modules to aggregate channel information was faster than CBAM without loss of accuracy. Therefore, we retained the channel-domain attention of SENet and introduced the spatial-domain attention of CBAM to capture more inherent structural information. Our attention mechanism, illustrated in the left panel of Fig. 3, has two steps: the first focuses on channel information, and the second on spatial information.
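A minimal NumPy sketch of the two-step dual attention (channel first, then space) may clarify the mechanism; the learned weights and convolutions of the real module are replaced by simple stand-ins, so this is illustrative only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention: squeeze by global average pooling,
    excite through a two-layer bottleneck, then rescale each channel.
    feat has shape (C, D, H, W)."""
    squeeze = feat.mean(axis=(1, 2, 3))                   # (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,), gates in (0, 1)
    return feat * excite[:, None, None, None]

def spatial_attention(feat):
    """CBAM-style spatial attention: aggregate channels with average and
    max pooling, then gate every voxel. The sum here stands in for the
    learned convolution over the two pooled maps."""
    avg = feat.mean(axis=0)
    mx = feat.max(axis=0)
    gate = sigmoid(avg + mx)                              # (D, H, W)
    return feat * gate[None]

def dual_attention(feat, w1, w2):
    """Channel attention first, spatial attention second."""
    return spatial_attention(channel_attention(feat, w1, w2))
```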

Fig. 3.

Fig. 3

Architecture of RetinaNet network. RetinaNet network consists of a novel feature pyramid network and subnets that are divided into the classification component and the box regression component. The feature pyramid network consists of a feedforward network (black layers) and a feature fusion network (red layers). The left third of the figure is our dual attention module, which is used in each layer of the feedforward network.

To further improve the generalizability of the model, several data augmentation strategies were applied to inputs, including randomly cropping and resizing, Gaussian blur, Gaussian noise, and Contrast Limited Adaptive Histogram Equalization. These strategies make the model less sensitive to images of different quality and focus on semantic information that is more important, thus improving the generalizability of our model.
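The augmentation pipeline might be sketched as follows (NumPy only; Gaussian blur and CLAHE, which the paper also uses, would typically come from scipy.ndimage or skimage and are omitted here, and the parameter values are our assumptions):

```python
import numpy as np

def augment(volume, rng, crop=64, noise_sigma=0.02):
    """Randomly flip, randomly crop, and add Gaussian noise to a
    normalized CT volume with values in [0, 1]."""
    # Random flips along each spatial axis.
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    # Random crop to a constant cube.
    starts = [rng.integers(0, s - crop + 1) for s in volume.shape]
    volume = volume[starts[0]:starts[0] + crop,
                    starts[1]:starts[1] + crop,
                    starts[2]:starts[2] + crop]
    # Additive Gaussian noise, clipped back into the valid range.
    volume = volume + rng.normal(0.0, noise_sigma, volume.shape)
    return np.clip(volume, 0.0, 1.0)
```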

Our model is similar to RetinaNet and consists of five stages, each comprising six residual blocks. The output channels of these stages are 32, 64, 128, 256, and 256. In each residual block, we applied the dual attention module and a ResNeXt block, as it is more efficient than ResNet. We also added deformable convolution layers to adapt to different receptive field sizes, which slightly increased model complexity but significantly improved recognition accuracy.

For the subnets, we used fully convolutional networks. The classification subnet shares its parameters with the box regression subnet, except for the last layer. The shared part of the subnets contains four convolution layers with a kernel size of three. In the last layer, 1 × 1 × 1 convolutions map the channels to K × A outputs for classification and 6 × A outputs for box regression, where K is the number of classes and A is the number of anchors per location.

2.3. MICCAI 2020 RibFrac Challenge Datasets as public datasets

To prove the accuracy and robustness of our method, we used the data in MICCAI 2020 RibFrac Challenge Datasets for model verification. The datasets contain approximately 5000 rib fractures from 660 RCA-CT scans, which consisted of 420 training CT images (all with fractures), 80 validation CT images (20 without fractures), and 160 evaluation CT images.

2.4. Radiologists’ interpretation of rib fractures evaluation

All images from the enrolled external datasets, including the regular- and low-dose datasets, were anonymized and sent to the YIZHUN platform. Radiologist A with 10 years and Radiologist B with 3 years of experience in chest interpretation independently interpreted the rib fractures in a double-blinded fashion. In the first round, they interpreted rib fractures in the low-dose CT images; one week later, they interpreted the regular-dose CT images (second round). Agreement between the two radiologists was reached by consensus. When the two radiologists could not agree on whether an annotation was a real fracture due to the lack of follow-up CT images, the fracture was considered a suspected rib fracture.

2.5. RetinaNet evaluation and statistical analysis

We used the free-response receiver operating characteristic (FROC) curve to evaluate the trained models. A predicted result whose center fell inside a gold-standard box was considered a true positive; all others were considered false positives (FP). The FROC curve is an alternative to the ROC curve: its x-axis is the number of FPs per scan (Fps), i.e., the average number of FPs per patient, and its y-axis is sensitivity, defined as the number of detected positive lesions divided by the number of ground-truth bounding boxes. We report sensitivities at Fps = 0.5, 1.0, 2.0, 4.0, and 8.0 to evaluate detection performance.
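Under the stated matching rule (a prediction counts as a true positive when its center falls inside a ground-truth box), the FROC sensitivities at fixed FP-per-scan levels can be computed as in this sketch (the data structures and function name are our assumptions):

```python
import numpy as np

def froc_sensitivities(preds_per_scan, gts_per_scan, fp_levels=(0.5, 1, 2, 4, 8)):
    """FROC sensitivities at fixed false-positive-per-scan levels.

    preds_per_scan: per scan, a list of (score, center_xyz) predictions.
    gts_per_scan:   per scan, a list of ground-truth boxes (lo_xyz, hi_xyz).
    """
    # Flatten predictions, remembering the scan each came from.
    flat = []
    for scan_id, preds in enumerate(preds_per_scan):
        for score, center in preds:
            flat.append((score, scan_id, np.asarray(center)))
    flat.sort(key=lambda t: -t[0])  # descending confidence

    n_scans = len(preds_per_scan)
    n_gt = sum(len(g) for g in gts_per_scan)
    matched = [set() for _ in range(n_scans)]  # GT indices already hit
    tps, fps, tp, fp = [], [], 0, 0
    for score, scan_id, center in flat:
        hit = None
        for gi, (lo, hi) in enumerate(gts_per_scan[scan_id]):
            if gi not in matched[scan_id] and np.all(center >= lo) and np.all(center <= hi):
                hit = gi
                break
        if hit is None:
            fp += 1
        else:
            matched[scan_id].add(hit)
            tp += 1
        tps.append(tp)
        fps.append(fp)

    # For each FP budget, take the highest TP count reachable within it.
    sens = []
    for level in fp_levels:
        budget = level * n_scans
        best = 0
        for t, f in zip(tps, fps):
            if f <= budget:
                best = t
        sens.append(best / n_gt if n_gt else 0.0)
    return sens
```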

3. Results

3.1. Population characteristics

We included 100 patients (60 men [60 %]; mean age, 53.45 ± 11.19 years [standard deviation, SD]; range, 18–77 years) with 728 traumatic rib fractures who were scanned twice (a high-pitch or LDCA-CT following their first regular chest-abdomen CT). The first scans contained breathing artifacts that degraded diagnostic accuracy, and 125 suspected rib fractures were excluded. In the included patients, the RCA-CT group had a mean CTDIvol of 7.18 mGy (SD: 2.22) and a mean DLP of 305.38 mGy cm (SD: 95.31), while the low-dose group had a mean CTDIvol of 2.79 mGy (SD: 1.11) and a mean DLP of 131.52 mGy cm (SD: 55.58).

3.2. RetinaNet performance of rib fracture on public datasets

RetinaNet achieved an average sensitivity of 0.857 on the MICCAI 2020 RibFrac Challenge Datasets, compared with the average of 0.850 reported in the MICCAI 2020 RibFrac Challenge competition; details are shown in Table 1.

Table 1.

Performance of RetinaNet for rib fracture detection in both public and external validation datasets.

Datasets                                    Detection sensitivity at FP level
                                            0.5     1       2       4       8       Avg
Public datasets
  MICCAI 2020 RibFrac Challenge Dataset     0.764   0.835   0.869   0.902   0.913   0.857
  MICCAI 2020 RibFrac Challenge competition 0.746   0.825   0.875   0.896   0.909   0.850
External datasets
  Rib fractures in RCA-CT                   0.793   0.850   0.899   0.933   0.953   0.886
  Rib fractures in LDCA-CT                  0.773   0.822   0.865   0.904   0.931   0.859

FP: False positive; RCA-CT: Regular-dose chest-abdomen CT; LDCA-CT: Low-dose chest-abdomen CT.

3.3. RetinaNet performance of rib fracture on external datasets

The performance of our model in RCA-CT and LDCA-CT images is shown in Table 1 and Fig. 4.

Fig. 4.

Fig. 4

Evaluation results for our regular- and low-dose chest-abdomen CT test datasets. The blue line is the FROC curve of the regular-dose test set, and the orange line is that of the low-dose test set. The results show that our model achieves satisfactory performance on both regular-dose and low-dose data.

3.4. Radiologists’ interpretation of rib fractures on external datasets

The performance of rib fracture detection for the radiologists’ interpretation is shown in Table 2.

Table 2.

Performance of rib fracture detection in radiologists’ interpretation.

                                      Private datasets (RCA-CT)      Private datasets (LDCA-CT)
                                      Radiologist A  Radiologist B   Radiologist A  Radiologist B
False positive                        1.11 (728)     1.28 (728)      0.55 (728)     1.58 (728)
Recall                                0.793          0.794           0.721          0.691
Excluded (suspected rib fractures)    0.46 (125)     0.53 (125)      0.19 (125)     0.44 (125)

RCA-CT: Regular-dose chest-abdomen CT; LDCA-CT: Low-dose chest-abdomen CT.

4. Discussion

In our study, we demonstrated the usability of a novel deep learning model for automatic detection of rib fractures using CT images with different image quality, including LDCA-CT images. Our model had an average sensitivity of 0.859 in LDCA-CT, while the average sensitivity in RCA-CT was 0.886. Although the average sensitivity in LDCA-CT was not as good as that in RCA-CT, our model showed acceptable results in LDCA-CT compared with the interpretation of Radiologists A (0.721) and B (0.691), a result that so far has never been reported in deep learning studies for rib fracture detection.

Compared to conventional CT scans, low-dose CT scans are widely used in clinical practice because they deliver a lower radiation dose while maintaining diagnostic accuracy [4,[18], [19], [20], [21], [22]]. However, the quantum noise introduced by low-dose CT might obscure potential low-contrast lesions, making accurate diagnosis challenging [23]. This noise is also the main factor limiting the sensitivity of deep learning models in detecting lesions, as the image features of lesions cannot easily be differentiated under heavy quantum noise. Previous studies have demonstrated the automatic segmentation and detection of rib fractures with deep learning approaches [2,24]. To the best of our knowledge, the present study is the first to detect rib fractures in LDCA-CT images, indicating that rib fracture scans could be performed using LDCA-CT. This is consistent with a former study that showed equivalent diagnostic value between RCA-CT and LDCA-CT images [4]. Notably, we did not use LDCA-CT data from our center to train the deep learning model. It is reasonable to expect that a model trained on features specific to LDCA-CT images, even with multicenter datasets, could show better target-detection performance after tuning. However, in the real-world setting, using RCA-CT images for the diagnosis of rib fractures is the established consensus, and it is difficult to anticipate whether an LDCA-CT-based deep learning model would perform better on RCA-CT images. In the present study, we trained the model with a large dataset consisting of RCA-CT images from four centers and then evaluated it on datasets from our center that included both RCA-CT and LDCA-CT images. As shown in Fig. 5, some rib fractures in LDCA-CT images were not detected because we did not train our model on LDCA-CT datasets, whose image quality is degraded by heavy quantum noise.
Despite these drawbacks, our model showed competitive performance on the LDCA-CT test set, achieving a sensitivity of 0.865 at Fps = 2, while the radiation dose of LDCA-CT (mean DLP: 131.52 mGy cm; SD: 55.58) was approximately 56.9 % lower than that of RCA-CT (mean DLP: 305.38 mGy cm; SD: 95.31). This result confirms that our model reliably predicts rib fractures not only in RCA-CT but also in LDCA-CT images after training on RCA-CT datasets alone, demonstrating its remarkable generalizability.
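The reported dose reduction follows directly from the mean DLP values quoted above; a quick check:

```python
# Mean DLP values reported in the study (mGy*cm).
rca_dlp, ldca_dlp = 305.38, 131.52
reduction = (rca_dlp - ldca_dlp) / rca_dlp
print(f"DLP reduction: {reduction:.1%}")  # prints "DLP reduction: 56.9%"
```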

Fig. 5.

Fig. 5

Rib fracture detection results from our model. The first row shows prediction results for the regular-dose images, and the second row shows those for the low-dose images. As expected, the image quality in the second row is worse than in the first. In the first two columns, all rib fracture lesions are detected. In the remaining columns, however, the lesions in the low-dose images are not detected because they are not clearly visible.

The radiologists showed similar diagnostic performance with regular-dose CT images, while the diagnostic accuracy of Radiologist A was higher than that of Radiologist B with low-dose CT images, indicating that low-dose CT images significantly decrease human diagnostic performance, especially for less-experienced radiologists. Compared with the radiologists, the RetinaNet-based model showed better sensitivity in detecting rib fractures in both regular- and low-dose CT images. Compared with previous studies [5,12,15], our study did not achieve the highest sensitivity; however, we demonstrated the feasibility of detecting rib fractures in low-dose CT images with a robust model, contributing to clinical diagnosis at a lower radiation dose. In practice, radiologists would have access to the model's detection results before making a diagnosis, potentially enhancing diagnostic confidence and accuracy, particularly among junior radiologists.

This study had several limitations. First, our LDCA-CT dataset was small because LDCA-CT is not accepted for routine rib fracture diagnosis, as heavy noise reduces the sensitivity of the human eye to tiny fracture lines. Our LDCA-CT dataset consisted of RCA-CT and LDCA-CT images from patients in a single examination. Rib fracture detection usually fails in the initial RCA-CT images of such patients due to breathing or motion artifacts; the subsequent LDCA-CT images therefore served as a compensation sequence. A larger LDCA-CT dataset is required for future rib fracture detection studies. Second, because of the lack of a large LDCA-CT dataset, we did not train a deep learning model solely on LDCA-CT data, which should perform better at detecting rib fractures in LDCA-CT images. A large dataset consisting of both RCA-CT and LDCA-CT images would be better accepted in daily practice for the diagnosis of rib fractures; we plan to conduct such studies in the future to build on the present results. Finally, the diagnosis of some types of rib fractures, such as buckle fractures [25] and occult fractures, is heavily affected by image noise. We did not classify the types of rib fractures and therefore could not determine which types yield lower sensitivity in LDCA-CT images with deep learning approaches. We also plan to address this in future studies.

5. Conclusions

In conclusion, a RetinaNet-based deep learning model trained on multicenter datasets detected rib fractures in low-dose CT images with a sensitivity of 0.859, indicating its potential for rib fracture detection in low-dose CT. The model outperformed the radiologists, suggesting that the RetinaNet-based deep learning model may feasibly assist radiologists in rib fracture diagnosis using low-dose CT images.

Declarations

Ethics statement

This retrospective study was approved by the Ethics Committee of our hospital (No. 20220051), and the requirement for informed consent was waived by the committee.

The data was not publicly available but has been completely anonymized to remove any identifying information in this manuscript.

Data availability statement

The data will be available with reasonable request after contacting the corresponding authors.

Funding

This work was supported by Science and Technology Planning Project of Shanghai Science and Technology Commission [grant number, 22Y11910700], the Health Commission of Shanghai [grant number, 2018ZHYL0103], National Natural Science Foundation of China [grant number 61976238], and Shanghai “Rising Stars of Medical Talent” Youth Development Program “Outstanding Youth Medical Talents” (SHWJRS [2021]-99), Emerging Talent Program (XXRC2213) and Leading Talent Program (LJRC2202) of Huadong hospital, and Excellent Academic Leaders of Shanghai (2022XD042).

CRediT authorship contribution statement

Liang Jin: Writing – review & editing, Writing – original draft, Validation, Supervision, Resources, Methodology, Investigation, Data curation, Conceptualization. E. Youjun: Software, Methodology, Investigation. Zheng Ye: Validation, Supervision. Pan Gao: Methodology, Investigation, Formal analysis, Data curation. Guoliang Wei: Supervision, Resources, Project administration. Jia qi Zhang: Methodology, Investigation. Ming Li: Writing – review & editing, Validation, Supervision, Software, Resources, Project administration.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

None.

Contributor Information

Liang Jin, Email: jin_liang@fudan.edu.cn.

E. Youjun, Email: youjun.e@yizhun-ai.com.

Zheng Ye, Email: yaya_yezheng@163.com.

Pan Gao, Email: 15620935261@163.com.

Guoliang Wei, Email: guoliang.wei@yizhun-ai.com.

Jia qi Zhang, Email: jiaqi.zhang@yizhun-ai.com.

Ming Li, Email: minli77@163.com, ming_li@fudan.edu.cn.

References

  • 1. Talbot B.S., Gange C.P. Jr., Chaturvedi A., Klionsky N., Hobbs S.K., Chaturvedi A. Traumatic rib injury: patterns, imaging pitfalls, complications, and treatment. Radiographics. 2017;37(2):628–651. doi: 10.1148/rg.2017160100.
  • 2. Jin L., Yang J., Kuang K., et al. Deep-learning-assisted detection and segmentation of rib fractures from CT scans: development and validation of FracNet. EBioMedicine. 2020;62:103106. doi: 10.1016/j.ebiom.2020.103106.
  • 3. Urbaneja A., De Verbizier J., Formery A.S., et al. Automatic rib cage unfolding with CT cylindrical projection reformat in polytraumatized patients for rib fracture detection and characterization: feasibility and clinical application. Eur. J. Radiol. 2019;110:121–127. doi: 10.1016/j.ejrad.2018.11.011.
  • 4. Jin L., Ge X., Lu F., et al. Low-dose CT examination for rib fracture evaluation: a pilot study. Medicine. 2018;97(30). doi: 10.1097/MD.0000000000011624.
  • 5. Zhang B., Jia C., Wu R., et al. Improving rib fracture detection accuracy and reading efficiency with deep learning-based detection software: a clinical evaluation. Br. J. Radiol. 2020;94(1118). doi: 10.1259/bjr.20200870.
  • 6. Kolopp M., Douis N., Urbaneja A., et al. Automatic rib unfolding in postmortem computed tomography: diagnostic evaluation of the OpenRib software compared with the autopsy in the detection of rib fractures. Int. J. Leg. Med. 2020;134:339–346. doi: 10.1007/s00414-019-02195-x.
  • 7. Glemser P.A., Pfleiderer M., Heger A., et al. New bone post-processing tools in forensic imaging: a multi-reader feasibility study to evaluate detection time and diagnostic accuracy in rib fracture assessment. Int. J. Leg. Med. 2016;131:489–496. doi: 10.1007/s00414-016-1412-6.
  • 8. Ringl H., Lazar M., Töpker M., et al. The ribs unfolded – a CT visualization algorithm for fast detection of rib fractures: effect on sensitivity and specificity in trauma patients. Eur. Radiol. 2015;25:1865–1874. doi: 10.1007/s00330-015-3598-2.
  • 9. Sanchez T.R., Grasparil A.D., Chaudhari R., Coulter K.P., Wootton-Gorges S.L. Characteristics of rib fractures in child abuse: the role of low-dose chest computed tomography. Pediatr. Emerg. Care. 2018;34(2):81–83. doi: 10.1097/PEC.0000000000000608.
  • 10. Pomeranz C.B., Barrera C.A., Servaes S.E. Value of chest CT over skeletal surveys in detection of rib fractures in pediatric patients. Clin. Imag. 2022;82:103–109. doi: 10.1016/j.clinimag.2021.11.008.
  • 11. LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539.
  • 12. Niiya A., Murakami K., Kobayashi R., et al. Development of an artificial intelligence-assisted computed tomography diagnosis technology for rib fracture and evaluation of its clinical usefulness. Sci. Rep. 2022;12:8363. doi: 10.1038/s41598-022-12453-5.
  • 13. Zhang J., Li Z., Yan S., Cao H., Liu J., Wei D. An algorithm for automatic rib fracture recognition combined with nnU-Net and DenseNet. Evid. Based Complement. Alternat. Med. 2022;2022. doi: 10.1155/2022/5841451.
  • 14. Hongbiao S., Shaochun X., Xiang W., et al. Comparison and verification of two deep learning models for the detection of chest CT rib fractures. Acta Radiol. 2022. doi: 10.1177/02841851221083519.
  • 15. Wu M., Chai Z., Qian G., et al. Development and evaluation of a deep learning algorithm for rib segmentation and fracture detection from multicenter chest CT images. Radiol. Artif. Intell. 2021;3(5). doi: 10.1148/ryai.2021200248.
  • 16. Lin T.Y., Goyal P., Girshick R., He K., Dollár P. Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020;42:318–327. doi: 10.1109/TPAMI.2018.2858826.
  • 17. Hu J., Shen L., Albanie S., Sun G., Wu E. Squeeze-and-Excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020;42(8):2011–2023. doi: 10.1109/TPAMI.2019.2913372.
  • 18. Diekhoff T., Ulas S.T., Poddubnyy D., et al. Ultra-low-dose CT detects synovitis in patients with suspected rheumatoid arthritis. Ann. Rheum. Dis. 2019;78:31–35. doi: 10.1136/annrheumdis-2018-213904.
  • 19. Szucs-Farkas Z., Kaelin I., Flach P.M., et al. Detection of chest trauma with whole-body low-dose linear slit digital radiography: a multireader study. Am. J. Roentgenol. 2010;194(5):W388–W395. doi: 10.2214/AJR.09.3378.
  • 20. Keller G., Afat S., Ahrend M.D., Springer F. Diagnostic accuracy of ultra-low-dose CT for torsion measurement of the lower limb. Eur. Radiol. 2021;31:3574–3581. doi: 10.1007/s00330-020-07528-8.
  • 21. Gu J., Yang T.S., Ye J.C., Yang D.H. CycleGAN denoising of extreme low-dose cardiac CT using wavelet-assisted noise disentanglement. Med. Image Anal. 2021;74. doi: 10.1016/j.media.2021.102209.
  • 22. Solberg L.I., Wang Y., Whitebird R., Lopez-Solano N., Smith-Bindman R. Organizational factors and quality improvement strategies associated with lower radiation dose from CT examinations. J. Am. Coll. Radiol. 2020;17:951–959. doi: 10.1016/j.jacr.2020.01.044.
  • 23. Bai T., Wang B., Nguyen D., et al. Deep interactive denoiser (DID) for X-ray computed tomography. IEEE Trans. Med. Imag. 2021;40(11):2965–2975. doi: 10.1109/TMI.2021.3101241.
  • 24. Zhou Q.Q., Tang W., Wang J., et al. Automatic detection and classification of rib fractures based on patients' CT images and clinical information via convolutional neural network. Eur. Radiol. 2021;31:3815–3825. doi: 10.1007/s00330-020-07418-z.
  • 25. Cho S.H., Sung Y.M., Kim M.S. Missed rib fractures on evaluation of initial chest CT for trauma patients: pattern analysis and diagnostic value of coronal multiplanar reconstruction images with multidetector row CT. Br. J. Radiol. 2012;85(1018):e845–e850. doi: 10.1259/bjr/28575455.


