2023 Sep 10;7(4):387–432. doi: 10.1007/s41666-023-00144-3

Table 6. CT studies summarized

| Pre-processing technique | Pre-trained model used | Novel technique | Performance | Ref., year |
| --- | --- | --- | --- | --- |
| – | – | Deep learning algorithm based on RetinaNet | Internal test set: per-image SN 91.9%, per-lesion SN 96.5%, precision 18.2%, FPs/case 13.5. External test set: per-image SN 90.3%, per-lesion SN 96.1%, precision 34.2%, FPs/case 15.6 | [122], 2022 |
| Processed and augmented with code written in Python 3.7.0 and the Pillow 3.3.1 imaging library | Xception CNN pre-trained on ImageNet | Fine-tuning with a stochastic gradient descent optimizer | 4-degree model: SN 96%, SP 80%, AUC 0.936 (95% CI 0.890–0.982). 0-degree model: SN 82%, SP 88%, AUC 0.918 (95% CI 0.859–0.968, p = 0.078) | [123], 2022 |
| Dual-phase contrast-enhanced DECT scan of the thorax (reconstructed with a blending factor of 0.5) | – | Univariate analysis; logistic regression, XGBoost, SGD, LDA, AdaBoost, RF, decision tree, and SVM-based models | Training dataset: AUROC 0.88–0.99, SN 0.85–0.98, SP 0.92–1.0, F1 score 0.87–0.98. Testing dataset: AUROC 0.83–0.96, SN 0.72–0.92, SP 0.76–1.0, F1 score 0.75–0.91 | [134], 2022 |
| Data augmentation: 90° rotation, grayscale-value reversal, 90° rotation of the generated images | Primal-dual hybrid gradient (PDHG)-based algorithm | FI-Net to replace the computation | Structural similarity (SSIM): 0.94; root mean squared error (RMSE): 0.1 | [130], 2022 |
| Data augmentation: horizontal and vertical shifting, flipping | Compared with pre-trained ResNet models | Neural architecture search (NAS)-generated CNN | AUC 0.727, SN 80% (95% CI), SP 60% (95% CI) | [135], 2021 |
| Data augmentation: horizontal and vertical flipping of the training-set images; labeled-sample preparation | VGG16 pre-trained on ImageNet | Deformable attention VGG19 (DA-VGG19) proposed | AUC 0.9696, acc 0.9088, PPV 0.8786, NPV 0.9469, SN 0.9500, SP 0.8675 | [121], 2021 |
| – | – | 3D residual CNN equipped with an attention mechanism | SN: 68.6% & 64.2% | [124], 2021 |
| Data augmentation: elastic deformations, random scaling, random rotation, gamma augmentation | – | U-NetBL & U-NetFU network architectures | For the ΔSULpeak biomarker: AUC 0.89, SN 87%, SP 87%, optimal cutoff value −32%, p = 0.001 | [125], 2021 |
| Contrast-enhanced; image reconstruction with an ordered-subset expectation maximization algorithm | Pre-trained on lymphoma and lung cancer 18F-FDG PET/CT data | PET-Assisted Reporting System (PARS) prototype that uses a neural network | SN 92%, SP 98%, acc 98%, region: 88% | [119], 2021 |
| Image optimization with a fuzzy C-means clustering (FCM) algorithm; gray-gradient two-dimensional histogram generated | – | Convolution and deconvolution neural network (CDNN) based on the CNN | SN 80% (FP rate: 0.1); detection acc 78.4% (CI 0.95) | [126], 2021 |
| Cropping, resizing, manual segmentation of the ROI | CNN-F pre-trained on the ILSVRC-2012 dataset | CNN-F consisting of five convolutional layers and three fully connected layers | Combined-model Brier score: 0.159 (primary cohort) & 0.211 (validation cohort) | [128], 2020 |
| Manual segmentation; rotation (1–20°, 10–30°, 20–40°); mirroring; shearing; generative adversarial network (GAN) for data augmentation | – | U-Net-based CNN architecture | Average Dice: 0.93 ± 0.03, SN 0.92 ± 0.03, precision 0.93 ± 0.05, conformity 0.85 ± 0.06 | [129], 2020 |
| PET/CT fusion images attenuation-corrected by radiologists; spherical ROI defined with a radius of 2.4 cm; data augmentation | – | Novel 3D CNN | Predicted SUVmax associated with actual SUVmax (β estimate = 0.83, p < 0.0001) and with FDG avidity (p < 0.0001); ROC AUC 0.85 | [133], 2019 |
| No adjustment | AlexNet trained from scratch on 3D CT cases | – | Breast density classified correctly for 72% (training samples) & 76% (testing samples) | [131], 2017 |
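
Several recurring techniques in Table 6 can be made concrete with short, illustrative sketches; none of them reproduces the cited studies' actual code. The most common pre-processing step is simple geometric and intensity augmentation, i.e. 90° rotation, horizontal/vertical flipping, shifting, and grayscale reversal ([121, 123, 129, 130, 135]). The sketch below assumes Pillow (the library named in [123]); the file paths and output directory are hypothetical.

```python
from pathlib import Path
from PIL import Image, ImageOps

def augment_slice(path: Path, out_dir: Path) -> None:
    """Write simple geometric/intensity augmentations of one CT slice.

    Mirrors the kinds of operations reported in Table 6:
    90-degree rotation, horizontal/vertical flips, grayscale-value reversal.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(path).convert("L")  # single-channel grayscale

    variants = {
        "rot90": img.rotate(90, expand=True),  # 90-degree rotation
        "hflip": ImageOps.mirror(img),         # horizontal flip
        "vflip": ImageOps.flip(img),           # vertical flip
        "invert": ImageOps.invert(img),        # grayscale-value reversal
    }
    for name, aug in variants.items():
        aug.save(out_dir / f"{path.stem}_{name}.png")

if __name__ == "__main__":
    # Hypothetical input/output folders, for illustration only.
    for slice_path in Path("ct_slices").glob("*.png"):
        augment_slice(slice_path, Path("ct_slices_augmented"))
```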
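
Transfer learning from ImageNet followed by fine-tuning with stochastic gradient descent is reported for the Xception model of [123] and, with other backbones, in [121] and [128]. The sketch below assumes TensorFlow/Keras, a binary sigmoid head, and illustrative hyperparameters; the studies' actual input sizes, learning rates, and heads are not given here.

```python
from tensorflow import keras

NUM_CLASSES = 1               # assumption: binary task with a sigmoid head
INPUT_SHAPE = (299, 299, 3)   # Xception's default ImageNet input size

def build_finetuned_xception() -> keras.Model:
    """ImageNet-pretrained Xception backbone with a new classification head,
    fine-tuned with stochastic gradient descent (as reported for [123])."""
    backbone = keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=INPUT_SHAPE
    )
    backbone.trainable = True  # fine-tune the whole backbone

    inputs = keras.Input(shape=INPUT_SHAPE)
    x = keras.applications.xception.preprocess_input(inputs)
    x = backbone(x)
    x = keras.layers.GlobalAveragePooling2D()(x)
    outputs = keras.layers.Dense(NUM_CLASSES, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)

    # Illustrative SGD settings; the cited study's hyperparameters are not specified.
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
        loss="binary_crossentropy",
        metrics=[keras.metrics.AUC(name="auc")],
    )
    return model

if __name__ == "__main__":
    build_finetuned_xception().summary()
```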
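
Encoder-decoder segmentation networks also recur: U-NetBL/U-NetFU in [125], a U-Net-based CNN in [129], and related convolution-deconvolution designs in [126] and [130]. The block below is a generic, scaled-down U-Net sketch, not any of the cited architectures; depth, filter counts, and input size are arbitrary choices for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_small_unet(input_shape=(128, 128, 1)) -> keras.Model:
    """A small U-Net-style encoder-decoder for per-pixel lesion segmentation."""
    inputs = keras.Input(shape=input_shape)

    # Encoder: downsample while increasing feature channels.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate matching encoder features (skip connections).
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # lesion probability map
    return keras.Model(inputs, outputs)

if __name__ == "__main__":
    build_small_unet().summary()
```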
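
The performance column reports sensitivity (SN), specificity (SP), and AUC throughout. As a reminder of how these figures are typically derived from validation predictions, the following sketch uses scikit-learn; the 0.5 decision threshold and the toy labels/scores are assumptions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def classification_summary(y_true, y_score, threshold=0.5):
    """Sensitivity (SN), specificity (SP), and AUC as reported in Table 6."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)   # SN: true-positive rate
    specificity = tn / (tn + fp)   # SP: true-negative rate
    auc = roc_auc_score(y_true, y_score)
    return {"SN": sensitivity, "SP": specificity, "AUC": auc}

if __name__ == "__main__":
    # Toy ground truth and model scores, for illustration only.
    labels = [0, 0, 1, 1, 1, 0, 1, 0]
    scores = [0.1, 0.4, 0.8, 0.6, 0.9, 0.3, 0.2, 0.7]
    print(classification_summary(labels, scores))
```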
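
For the segmentation and reconstruction studies, the reported metrics are the Dice coefficient ([129]) and SSIM/RMSE ([130]). A minimal sketch with NumPy and scikit-image follows; the data range and the synthetic images are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (as in [129])."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def rmse(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """Root mean squared error between reference and reconstruction (as in [130])."""
    return float(np.sqrt(np.mean((reference - reconstruction) ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))                            # stand-in reference image
    recon = ref + 0.05 * rng.standard_normal((128, 128))    # noisy reconstruction

    ssim = structural_similarity(ref, recon, data_range=1.0)
    print(f"SSIM: {ssim:.3f}, RMSE: {rmse(ref, recon):.3f}")

    mask_true = ref > 0.5
    mask_pred = recon > 0.5
    print(f"Dice: {dice_coefficient(mask_true, mask_pred):.3f}")
```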