Table 5. US studies summarized.
Pre-processing technique | Pre-trained model used | Novel technique | Performance | Ref., year |
---|---|---|---|---|
Data augmentation | DarkNet-53 | Optimized CNN (RDE & RGW optimizers) | Acc: 99.1% | [101], 2022 |
Oversampling, data augmentation | ResNet-50; ResNet-101 | Dynamic U-Net with a ResNet-50 encoder backbone | Acc: classifier A (normal vs abnormal) 96% & classifier B (benign vs abnormal) 85% | [88], 2022 |
Image processing in OpenCV & scikit-image (Python) | EfficientNet-B2, Inception-V3, ResNet-50 | Multistage transfer learning (MSTL); optimizers: Adam, Adagrad, and stochastic gradient descent (SGD) | Acc: Mendeley dataset 99% & MT-Small-Dataset 98.7% | [98], 2022 |
Basic geometric augmentation | – | Gaussian-dropout-based stacked ensemble CNN model with meta-learners | Acc: 92.15%, F1: 92.21%, precision: 92.26%, recall: 92.17% | [90], 2021 |
ROI extraction, image resolution adjustment, data normalization, data augmentation | Inception-V3 | Fine-tuned Inception-V3 architecture | Acc: 98.19% | [91], 2021 |
– | – | Shape-adaptive convolutional (SAC) operator with K-NN & self-attention coefficient U-Net with VGG-16 & ResNet-101 | ResNet-101 mean IoU: 82.15% (multi-object segmentation) and IoU: 77.9% & 72.12% (public BUSI) | [104], 2021 |
Image resized & normalized | ResNet-101 pre-trained on RGB images. | Novel transfer learning technique based on deep representation scaling (DRS) layers. | AUC: 0.955, acc: 0.915 | [118], 2021 |
Focal loss strategy, data augmentation, rotation, horizontal or vertical flipping, random cropping, random channel shifts | − | ResNet-50 | Test cohort A: acc: 80.07% to 97.02%, AUC: 0.87, PPV: 93.29%, MCC: 0.59. Test cohort B: acc: 87.94% to 98.83%, AUC: 0.83, PPV: 88.21%, MCC: 0.79. | [92], 2021 |
− | ImageNet-based pre-trained weights | DNN; deepest layers were fine-tuned by minimizing the focal loss. | AUC: 0.70 (95% CI: 0.63-0.77; on 354 TMA samples) & 0.67 (95% CI: 0.62-0.71; on 712 whole-slide images). | [114], 2021 |
Enhanced by fuzzy preprocessing | FCN-AlexNet, U-Net, SegNet-VGG16, SegNet-VGG19, DeepLabV3 (ResNet-18, ResNet-50, MobileNet-V2, Xception) | A scheme based on combining fuzzy logic (FL) and deep learning (DL). | Global acc: 95.45%, mean IoU: 78.70%, mean boundary F1: 68.08% | [106], 2021 |
DL-based data augmentation & online augmentation | Pre-trained AlexNet & ResNet | Fine-tuned ensemble CNN | Acc: 90% | [109], 2021 |
Coordinate marking, image cutting, mark removal | Pre-trained Xception CNN | Optimized deep learning model (DLM) | For DLM, acc: 89.7%, SN: 91.3%, SP: 86.9%, AUC: 0.96. For DLM with BI-RADS, acc: 92.86%, false-negative rate: 10.4%. | [94], 2021 |
Normalizing image stain-color | Pre-trained VGG-19 | Block-wise fine-tuned VGG-19 model with softmax classifier on top. | Acc: 94.05% to 98.13% | [110], 2021 |
Augmentation: flipping, rotation, Gaussian blur, scalar multiplication | Pre-trained on the MS COCO dataset | Deep learning-based computer-aided prediction (CAP) system: Mask R-CNN, DenseNet-121 | Acc: 81.05%, SN: 81.36%, SP: 80.85%, AUC: 0.8054 | [93], 2021 |
Data augmentation, rotation | Inception-V3 | Modified Inception-V3 | AUC: 0.9468, SN: 0.886, SP: 0.876 | [95], 2020 |
Data augmentation: random rotation, random shear, random zoom | − | DenseNet | Training/testing cohort AUCs: 0.957/0.912 (combined region), 0.944/0.775 (peritumoral region), 0.937/0.748 (intratumoral region) | [100], 2020 |
Data augmentation: random geometric image transformations, flipping, rotation, scaling, shifting, resizing | – | Inception-V3, Inception-ResNet-V2, ResNet-101 | Inception-V3 AUC: 0.89 (95% CI: 0.83, 0.95), SN: 85% (35 of 41 images; 95% CI: 70%, 94%), SP: 73% (29 of 40 images; 95% CI: 56%, 85%) | [96], 2020 |
Data augmentation: flipping, translation, scaling, and rotation | VGG16, VGG19, ResNet-50 | Fine-tuned CNN | VGG16 with linear SVM, patch-based accuracies: 93.97% at 40x, 92.92% at 100x, 91.23% at 200x, 91.79% at 400x; patient-based accuracies: 93.25% at 40x, 91.87% at 100x, 91.5% at 200x, 92.31% at 400x | [108], 2020 |
JPEG conversion, trimming, resizing | − | GoogLeNet CNN | SN: 0.958, SP: 0.875, acc: 0.925, AUC: 0.913 | [117], 2019 |
Data augmentation: used ROI-CNN & G-CNN model | − | Two-stage grading: ROI-CNN, G-CNN | Acc: 0.998 | [97], 2019 |
Data augmentation | VGG16 CNN | Fine-tuned deep learning parameters | Acc: 0.973, AUC: 0.98 | [115], 2019 |
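The table mixes several evaluation metrics (Acc, SN, SP, PPV, F1, MCC for classification; IoU for segmentation). As a reference for how these relate, the following is a minimal sketch with hypothetical confusion-matrix counts; it is illustrative only and not taken from any of the cited studies.

```python
import math

def confusion_metrics(tp, fp, fn, tn):
    """Binary-classification metrics reported in Table 5,
    derived from true/false positive/negative counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)        # accuracy
    sn = tp / (tp + fn)                          # sensitivity (recall)
    sp = tn / (tn + fp)                          # specificity
    ppv = tp / (tp + fp)                         # positive predictive value (precision)
    f1 = 2 * ppv * sn / (ppv + sn)               # harmonic mean of PPV and SN
    mcc = (tp * tn - fp * fn) / math.sqrt(       # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Acc": acc, "SN": sn, "SP": sp, "PPV": ppv, "F1": f1, "MCC": mcc}

def iou(pred_pixels, target_pixels):
    """Intersection over Union for binary segmentation masks,
    given as sets of foreground pixel indices."""
    inter = len(pred_pixels & target_pixels)
    union = len(pred_pixels | target_pixels)
    return inter / union

# Hypothetical example: 50 TP, 10 FP, 5 FN, 35 TN
m = confusion_metrics(50, 10, 5, 35)
print(m["Acc"])   # 0.85
print(iou({1, 2, 3}, {2, 3, 4}))  # 0.5
```

Note that AUC cannot be computed from a single confusion matrix; it requires the model's continuous scores across all thresholds.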