. 2022 Oct 10;2022:8904768. doi: 10.1155/2022/8904768

Table 2.

A succinct survey of deep-learning-based histopathological image classification methods. NA indicates either “not available” or “no answer” from the associated authors.

Year Ref Aim Technique Dataset Sample Training (%) Testing (%) Result Performance (AUC, ACC)
2016 Chan and Tuszynski [80] To predict tumor malignancy in breast cancer Employed binarization, fractal dimension, SVM BreaKHis [33] 7909 50 50 ACC of 97.90%, 16.50%, 16.50%, and 25.30% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 39.05%
Spanhol et al. [33] To classify histopathological images Employed CNN based on AlexNet [81] BreaKHis [33] 7909 70 30 ACC of 90.0%, 88.4%, 84.6%, and 86.1% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 87.28%
Bayramoglu et al. [38] To classify breast cancer histopathology images Employed single-task CNN and multitask CNN BreaKHis [33] 7909 70 30 For single-task CNN, ACC of 83.08%, 83.17%, 84.63%, and 82.10% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively; correspondingly, for multitask CNN, ACC of 81.87%, 83.39%, 82.56%, and 80.69% NA 82.69%
Abbas [77] To diagnose breast masses Applied SURF [82], LBPV [83] DDSM [84], MIAS [85] 600 40 60 Overall 92%, 84.20%, 91.50%, and 0.91 obtained for sensitivity, specificity, ACC, and AUC, respectively 0.91 91.50%

2017 Song et al. [21] To classify histopathology images Employed a model of CNN, Fisher vector [86], SVM BreaKHis [33], IICBU2008 [87] 8283 70 30 ACC of 94.42%, 89.49%, 87.25%, and 85.62% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 89.19%
Wei et al. [22] To analyze tissue images Employed a modification of GoogLeNet [88] BreaKHis [33] 7909 75 25 ACC of 97.46%, 97.43%, 97.73%, and 97.74% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 97.59%
Das et al. [23] To classify histopathology images Employed GoogLeNet [88] BreaKHis [33] 7909 80 20 ACC of 94.82%, 94.38%, 94.67%, and 93.49% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 94.34%
Kahya et al. [89] To identify features of breast cancer Employed dimensionality reduction, adaptive sparse SVM BreaKHis [33] 7909 70 30 ACC of 94.97%, 93.62%, 94.54%, and 94.42% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 94.38%
Song et al. [24] To classify histopathology images easily Employed CNN-based Fisher vector [86], SVM BreaKHis [33] 7909 70 30 ACC of 90.02%, 88.90%, 86.90%, and 86.30% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 88.03%
Gupta and Bhavsar [90] To classify histopathology images Employed an integrated model BreaKHis [33] 7909 70 30 Average ACC of 88.09% and 88.40% obtained for image and patient levels, respectively NA 88.25%
Dhungel et al. [91] To analyze masses in mammograms Applied multiscale deep belief nets INbreast [92] 410 60 20 Best testing-set ACC of 95% for the manual setup and 91% for the minimal-user-intervention setup 0.76 91.03%
Spanhol et al. [34] To classify breast cancer images Employed a deep CNN BreaKHis [33] 7900 70 30 ACC of 84.30%, 84.35%, 85.25%, and 82.10% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 83.96%
Han et al. [35] To study breast cancer multiclassification Employed class structure based CNN BreaKHis [33] 7909 50 50 ACC of 95.80%, 96.90%, 96.70%, and 94.9% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 96.08%
Sun and Binder [39] To assess performance on H&E-stained data A comparative study among ResNet-50 [75], CaffeNet [93], and GoogLeNet [88] BreaKHis [33] 7909 70 30 ACC of 85.75%, 87.03%, and 84.18% obtained for GoogLeNet [88], ResNet-50 [75], and CaffeNet [93], respectively NA 85.65%
Kaymak et al. [94] To classify breast cancer images Employed Back-Propagation [95] and Radial Basis Neural Networks [96] 176 images from a hospital 176 65 35 Overall ACC of 59.0% and 70.4% obtained from Back-Propagation [95] and Radial Basis [96], respectively NA 64.70%
Liu et al. [47] To detect cancer metastases in images Employed a CNN architecture Camelyon16 [97] 110 68 32 An AUC of 97.60 (93.60, 100) obtained on par with Camelyon16 [97] test set performance 0.97 95.00%
Zhi et al. [57] To diagnose breast cancer images Employed a variation of VGGNet [98] BreaKHis [33] 7909 80 20 ACC of 91.28%, 91.45%, 88.57%, and 84.58% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 88.97%
Chang et al. [58] To solve the limited amount of training data Employed CNN model from Inception [88] family (e.g., Inception V3) BreaKHis [33] 4017 70 30 ACC of 83.00% for benign class and 89.00% for malignant class. AUC of malignant was 93.00% and AUC of benign was also 93.00% 0.93 86.00%

2018 Jannesari et al. [6] To classify breast cancer images Employed variations of Inception [88], ResNet [75] BreaKHis [33], 6402 images from TMA [99] 14311 85 15 With ResNets, ACC of 99.80%, 98.70%, 94.80%, and 96.40% obtained for four cancer types; Inception V2 with all layers fine-tuned achieved an ACC of 94.10% 0.99 96.34%
Bardou et al. [7] To classify breast cancer based on histology images Employed CNN topology, data augmentation BreaKHis [33] 7909 70 30 ACC of 98.33%, 97.12%, 97.85%, and 96.15% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 97.36%
Kumar and Rao [9] To train CNN for using image classification Employed CNN topology BreaKHis [33] 7909 70 30 ACC of 85.52%, 83.60%, 84.84%, and 82.67% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 84.16%
Das et al. [11] To classify breast histopathology images Employed variation of CNN model BreaKHis [33] 7909 80 20 ACC of 89.52%, 89.06%, 88.84%, and 87.67% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 88.77%
Nahid et al. [100] To classify biomedical breast cancer images Employed Boltzmann machine [101], Tamura et al. features [102] BreaKHis [33] 7909 70 30 ACC of 88.70%, 85.30%, 88.60%, and 88.40% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 87.75%
Badejo et al. [103] To classify medical images Employed local phase quantization, SVM BreaKHis [33] 7909 70 30 ACC of 91.10%, 90.70%, 86.20%, and 84.30% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 88.08%
Alirezazadeh et al. [104] To classify breast cancer images Employed threshold adjacency [105], quadratic analysis [106] BreaKHis [33] 7909 70 30 ACC of 89.16%, 87.38%, 88.46%, and 86.68% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 87.92%
Du et al. [13] To classify breast cancer images Employed AlexNet [81] BreaKHis [33] 7909 70 30 ACC of 90.69%, 90.46%, 90.64%, and 90.96% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 90.69%
Gandomkar et al. [14] To model CNN for breast cancer image diagnosis Employed a variation of ResNet [75] (e.g., ResNet152) BreaKHis [33] 7786 70 30 ACC of 98.60%, 97.90%, 98.30%, and 97.60% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 98.10%
Gupta and Bhavsar [15] To model CNN for breast cancer image diagnosis Employed DenseNet [67], XGBoost classifier [107] BreaKHis [33] 7909 70 30 ACC of 94.71%, 95.92%, 96.76%, and 89.11% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 94.12%
Benhammou et al. [17] To study CNN for breast cancer images Employed Inception V3 [88] module BreaKHis [33] 7909 70 30 ACC of 87.05%, 82.80%, 85.75%, and 82.70% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 84.58%
Morillo et al. [108] To label breast cancer images Employed KAZE features [109] BreaKHis [33] 7909 70 30 ACC of 86.15%, 80.70%, 77.95%, and 72.00% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 97.20%
Chattoraj and Vishwakarma [110] To study breast carcinoma images Zernike moments [111], entropies of Renyi [112], Yager [113] BreaKHis [33] 7909 70 30 ACC of 87.7%, 85.8%, 88.0%, and 84.6% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 96.53%
Sharma and Mehra [19] To analyze the behavior of magnification-independent breast cancer classification Employed models of VGGNet [98] and ResNet [75] (e.g., VGG16, VGG19, and ResNet50) BreaKHis [33] 7909 90 10 Pretrained VGG16 with a logistic regression classifier showed the best performance, with 92.60% ACC, 95.65% AUC, and a 95.95% precision score for the 90%–10% training-testing split 0.95 94.28%
Zheng et al. [114] To study content-based image retrieval Employed binarization encoding, Hamming distance [115] BreaKHis [33] and others 16309 70 30 ACC of 47.00%, 40.00%, 40.00%, and 37.00% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 41.00%
Cascianelli et al. [20] To study features extraction from images Employed dimensionality reduction using CNN BreaKHis [33] 7909 75 25 ACC of 84.00%, 88.20%, 87.00%, and 80.30% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 84.88%
Mukkamala et al. [116] To study deep model for feature extraction Employed PCANet [117] BreaKHis [33] 7909 80 20 ACC of 96.12%, 97.41%, 90.99%, and 85.85% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 92.59%
Mahraban Nejad et al. [51] To retrieve breast cancer images Employed a variation of VGGNet [98], SVM BreaKHis [33] 7909 98 2 An average ACC of 80.00% was demonstrated on BreaKHis [33] NA 80.00%
Rakhlin et al. [118] To analyze breast cancer images Several deep neural networks and gradient boosted trees classifier BACH [78] 400 75 25 For 4-class classification task ACC was 87.2% but for 2-class classification ACC was reported to be 93.8% 0.97 90.50%
Almasni et al. [119] To detect breast masses Applied regional deep learning technique DDSM [84] 600 80 20 Distinguished between benign and malignant lesions with an overall ACC of 97% 0.96 97.00%

2019 Kassani et al. [8] To use deep learning for binary classification of breast histology images Employed VGG19 [98], MobileNet [120], and DenseNet [67] BreaKHis [33], ICIAR2018 [78], PCam [121], Bioimaging2015 [122] 8594 87 13 The multimodel method outperformed single classifiers and other algorithms, with ACC of 98.13%, 95.00%, 94.64%, and 83.10% obtained for BreaKHis [33], ICIAR2018 [78], PCam [121], and Bioimaging2015 [122], respectively NA 92.72%
Alom et al. [10] To classify breast cancer from histopathological images Inception recurrent residual CNN BreaKHis [33], Bioimaging2015 [122] 8158 70 30 From BreaKHis [33], ACC of 97.90%, 97.50%, 97.30%, and 97.40%, obtained for 40x, 100x, 200x, and 400x magnification factors, respectively 0.98 97.53%
Nahid and Kong [12] To classify histopathological breast images Employed RGB histogram [123] with CNN BreaKHis [33] 7909 85 15 ACC of 95.00%, 96.60%, 93.50%, and 94.20% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 94.68%
Jiang et al. [16] To use CNN for breast cancer histopathological images Employed CNN, Squeeze-and-Excitation [124] based ResNet [75] BreaKHis [33] 7909 70 30 The achieved accuracy between 98.87% and 99.34% for the binary classification as well as between 90.66% and 93.81% for the multiclass classification 0.99 95.67%
Sudharshan et al. [18] To use instance learning for image sorting Employed CNN-based multiple instance learning algorithm BreaKHis [33] 7909 70 30 ACC of 86.59%, 84.98%, 83.47%, and 82.79% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 84.46%
Gupta and Bhavsar [25] To segment breast cancer images Employed ResNet [75] for multilayer feature extraction BreaKHis [33] 7909 70 30 ACC of 88.37%, 90.29%, 90.54%, and 86.11% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 88.82%
Vo et al. [125] To extract visual features from training images Combined weak classifiers into a stronger classifier BreaKHis [33], Bioimaging2015 [122] 8194 87 13 ACC of 95.10%, 96.30%, 96.90%, and 93.80% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 95.56%
Qi et al. [32] To classify breast cancer images Employed a CNN to complete the classification task BreaKHis [33] 7909 70 30 ACC of 91.48%, 92.20%, 93.01%, and 92.58% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 92.32%
Talo [41] To detect and classify diseases in images DenseNet [67], ResNet [75] (e.g., DenseNet161, ResNet50) KimiaPath24 [126] 25241 80 20 DenseNet161 pretrained and ResNet50 achieved ACC of 97.89% and 98.87% on grayscale and color images, respectively NA 98.38%
Li et al. [127] To detect the invading component in cancer images Employed a convolutional autoencoder-based contrast pattern mining framework 361 breast cancer samples 361 90 10 Overall ACC of 76.00% achieved, with an F1S of 77.70% NA 76.00%
Ragab et al. [44] To detect breast cancer from images AlexNet [81] and SVM DDSM [84], CBIS-DDSM [128] 2781 70 30 The deep CNN presented an ACC of 73.6%, whereas the SVM demonstrated an ACC of 87.2% 0.88 73.60%
Romero et al. [45] To study cancer images A modification of Inception module [88] HASHI [129] 151465 63 37 From deep learning networks, an overall ACC of 89.00% was demonstrated along with F1S of 90.00% 0.96 89.00%
Minh et al. [46] To diagnose breast cancer images Employed a modification of ResNet [75] and Inception V3 [88] BACH [78] 400 70 20 ACC of 95% for the 4 cancer classes and 97.5% for the two combined groups of cancer 0.97 96.25%

2020 Stanitsas et al. [130] To visualize a health system for clinicians Employed region covariance [131], SVM, multiple instance learning [132] FABCD [133], BreaKHis [33] 7949 70 15 ACC of 91.27% and 92.00% at the patient and image level, respectively 0.98 91.64%
Togacar et al. [26] To analyze breast cancer images rapidly Employed a ResNet [75] architecture with attention modules BreaKHis [33] 7909 80 20 ACC of 97.99%, 97.84%, 98.51%, and 95.88% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 97.56%
Asare et al. [134] To study breast cancer images Employed self-training and self-paced learning BreaKHis [33] 7909 70 30 ACC of 93.58%, 91.04%, 93.38%, and 91.00% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 92.25%
Gour et al. [28] To diagnose breast cancer tumors images Employed a modification of ResNet [75] BreaKHis [33] 7909 70 30 ACC of 90.69%, 91.12%, 95.36%, and 90.24% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively 0.91 92.52%
Li et al. [29] To grade pathological images Employed a modification of Xception network [135] BreaKHis [33], VLAD [136], LSC [137] 8583 60 40 ACC of 95.13%, 95.21%, 94.09%, and 91.42% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 93.96%
Feng et al. [138] To classify breast cancer images Employed a deep neural-network-based manifold preserving autoencoder [139] BreaKHis [33] 7909 70 30 ACC of 90.12%, 88.89%, 91.57%, and 90.25% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 90.53%
Parvin and Mehedi Hasan [31] To study CNN models for cancer images LeNet [140], AlexNet [81], VGGNet [98], ResNet [75], Inception V3 [88] BreaKHis [33] 7909 80 20 ACC of 89.00%, 92.00%, 94.00% and 90.00% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively 0.85 91.25%
Carvalho et al. [141] To classify histological breast images Entropies of Shannon [142], Renyi [112], Tsallis [143] BreaKHis [33] 4960 70 30 ACC of 95.40%, 94.70%, 97.60%, and 95.50% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively 0.99 95.80%
Li et al. [144] To analyze breast cancer images Employed global covariance pooling module [145] BreaKHis [33] 7909 70 30 ACC of 96.00%, 96.16%, 98.01%, and 95.97% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 94.93%
Man et al. [36] To classify cancer images Employed generative adversarial networks, DenseNet [67] BreaKHis [33] 7909 80 20 ACC of 97.72%, 96.19%, 86.66%, and 85.18% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 91.44%
Kumar et al. [37] To classify human breast cancer and canine mammary tumors Employed a framework based on a variant of VGGNet [98] (e.g., VGGNet16) and SVM BreaKHis [33] and CMTHis [37] 8261 70 30 For BreaKHis [33], ACC of 95.94%, 96.22%, 98.15%, and 94.41% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively; the same for CMTHis [37], ACC of 94.54%, 97.22%, 92.07%, and 82.84% obtained 0.95 96.93%
Kaushal and Singla [40] To detect cancerous cells in images Employed a CNN model with self-training and self-paced learning 50 images of various patients 50 90 10 Overall ACC of 93.10%, with the standard error of the mean estimated at approximately 0.81 NA 93.10%
Hameed et al. [43] To use deep learning for classification of breast cancer images Variants of VGGNet [98] (e.g., fully trained VGG16, fine-tuned VGG16, fully trained VGG19, and fine-tuned VGG19 models) Breast cancer images: 675 for training and 170 for testing 845 80 20 The ensemble of fine-tuned VGG16 and VGG19 models offered sensitivity of 97.73% for carcinoma class and overall accuracy of 95.29%. It also offered an F1 score of 95.29% NA 95.29%
Alantari et al. [48] To detect breast lesions in digital X-ray mammograms Adopted three deep CNN models INbreast [92], DDSM [84] 1010 70 20 On INbreast [92], mean ACC of 89%, 93%, and 95% for CNN, ResNet50, and Inception-ResNet V2, respectively; on DDSM [84], 95%, 96%, and 98% 0.96 94.08%
Zhang et al. [49] To classify breast mass ResNet [75], DenseNet [67], VGGNet [98] CBIS-DDSM [128], INbreast [92] 3168 70 30 Overall ACC of 90.91% and 87.93% obtained from CBIS-DDSM [128] and INbreast [92], respectively 0.96 89.42%
Hassan et al. [59] To classify breast cancer masses Employed modifications of AlexNet [81] and GoogLeNet [88] CBIS-DDSM [128], MIAS [85], INbreast [92], etc. 600 75 17 With the CBIS-DDSM [128] and INbreast [92] databases, the modified GoogLeNet achieved ACC of 98.46% and 92.5%, respectively 0.97 96.98%

2021 Li et al. [147] To use high-resolution info of images Multiview attention-guided multiple instance detection network BreaKHis [33], BACH [78], PUIH [148] 12329 70 30 Overall ACC of 94.87%, 91.32%, and 90.45% obtained from BreaKHis [33], BACH [78], and PUIH [148], respectively 0.99 92.21%
Wang et al. [27] To classify breast cancer images Employed a model of CNN and CapsNet [149] BreaKHis [33] 7909 70 30 ACC of 92.71%, 94.52%, 94.03%, and 93.54% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 93.70%
Albashish et al. [30] To analyze VGG16 [98] Employed a variation of VGGNet [98] BreaKHis [33] 7909 90 10 ACC of 96%, 95.10%, and 87% obtained for polynomial SVM, Radial Basis SVM, and k-nearest neighbors, respectively NA 92.70%
Kundale et al. [150] To classify breast cancer from histology images Employed SURF [82], DSIFT [151], linear coding [152] BreaKHis [33] 7909 70 30 ACC of 93.35%, 93.86%, 93.73%, and 94.00% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 93.74%
Attallah et al. [153] To classify breast cancer from histopathological images Employed several deep learning techniques including autoencoder [139] BreaKHis [33], ICIAR2018 [78] 7909 70 30 For BreaKHis [33], ACC of 99.03%, 99.53%, 98.08%, and 97.56% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively; for ICIAR2018 [78], ACC was 97.93% NA 98.43%
Burçak et al. [154] To classify breast cancer histopathological images Compared optimizers: Stochastic [155], Nesterov [156], Adaptive [157], RMSprop [158], AdaDelta [159], Adam [160] BreaKHis [33] 7909 70 30 Overall ACC of 97.00%, 97.00%, 96.00%, and 96.00% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 96.50%
Hirra et al. [161] To label breast cancer images Patch-based deep belief network [162] HASHI [129] 584 52 30 Images from four different data samples achieved an accuracy of 86% NA 86.00%
Elmannai et al. [42] To extract eminent breast cancer image features A combination of two deep CNNs BACH [78] 400 60 20 The overall ACC for the subimage classification was 97.29% and for the carcinoma cases the sensitivity achieved was 99.58% NA 97.29%
Baker and Abu Qutaish [163] To segment breast cancer images Employed clustering and global thresholding methods BACH [78] 400 70 30 Maximum ACC obtained using classifiers and a neural network on BACH [78] for breast cancer detection NA 63.66%
Soumik et al. [60] To classify breast cancer images Employed Inception V3 [88] BreaKHis [33] 7909 80 20 ACC of 99.50%, 98.90%, 98.96% and 98.51% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 98.97%
Brancati et al. [50] To analyze gigapixel histopathological images Employed CNN with a compressing path and a learning path Camelyon16 [164], TUPAC16 [165] 892 68 32 AUC values of 0.698, 0.639, and 0.654 obtained for max-pooling, average pooling, and combined attention maps, respectively 0.66 NA
Mahmoud et al. [61] To classify breast cancer images Employed transfer learning Mammography images [166] 7500 80 20 Maximum ACC of 97.80% was claimed by using the given dataset [166]. Sensitivity and specificity were estimated NA 94.45%
Munien et al. [62] To classify breast cancer images Employed EfficientNet [167] ICIAR2018 [78] 400 85 15 Overall ACC of 98.33% obtained from ICIAR2018 [78]. Sensitivity was also taken into account NA 98.33%
Boumaraf et al. [63] To analyze breast cancer images Employed ResNet [75] on ImageNet [168] images BreaKHis [33] 7909 80 20 ACC of 94.49%, 93.27%, 91.29%, 89.56% obtained for 40x, 100x, 200x, and 400x magnification factors, respectively NA 92.15%
Saber et al. [64] To detect breast cancer Employed transfer learning technique MIAS [85] 322 80 20 Overall ACC, PRS, F1S, and AUC of 98.96%, 97.35%, 97.66%, and 0.995, respectively, obtained from MIAS [85] 0.995 98.96%

2022 Ameh Joseph et al. [169] To classify breast cancer images Employed handcrafted features and dense layer BreaKHis [33] 7909 90 10 ACC of 97.87% for 40x, 97.60% for 100x, 96.10% for 200x, and 96.84% for 400x demonstrated from BreaKHis [33] NA 97.08%
Reshma et al. [52] To detect breast cancer Employed probabilistic transition rules with CNN BreaKHis [33] 7909 90 10 ACC, PRS, RES, F1S, and GMN of 89.13%, 86.23%, 81.47%, 85.38%, and 85.17% demonstrated from BreaKHis [33] NA 89.13%
Huang et al. [53] To detect nuclei in breast cancer images Employed mask-region-based CNN H&E images of patients 537 80 20 PRS, RES, and F1S of 91.28%, 87.68%, and 89.44% demonstrated on the dataset used NA 95.00%
Chhipa et al. [170] To learn efficient representations Employed magnification prior contrastive similarity BreaKHis [33] 7909 70 30 Maximum mean ACC of 97.04% and 97.81% obtained at the patient and image levels, respectively, using BreaKHis [33] NA 97.42%
Zou et al. [171] To classify breast cancer images Employed a channel attention module with nondimensionality reduction BreaKHis [33], BACH [78] 8309 90 10 Average ACC, PRS, RES, and F1S of 97.75%, 95.19%, 97.30%, and 96.30%, respectively, obtained from BreaKHis [33]; ACC of 85% obtained from BACH [78] NA 91.37%
Liu et al. [172] To classify breast cancer images Employed autoencoder and Siamese framework BreaKHis [33] 7909 80 20 Average ACC, PRS, RES, F1S, and RTM of 96.97%, 96.47%, 99.15%, 97.82%, and 335 seconds obtained from BreaKHis [33], respectively NA 96.97%
Jayandhi et al. [54] To diagnose breast cancer Employed VGG [98] and SVM MIAS [85] 322 80 20 Maximum ACC of 98.67% obtained from MIAS [85]. Sensitivity and specificity were also calculated NA 98.67%
Sharma and Kumar [55] To classify breast cancer images Employed Xception [135] and SVM BreaKHis [33] 2000 75 25 Average ACC, PRS, RES, F1S, and AUC of 95.58%, 95%, 95%, 95%, and 0.98 obtained from BreaKHis [33], respectively 0.98 95.58%
Zerouaoui and Idri [56] To classify breast cancer images Employed multilayer perceptron, DenseNet201 [67] BreaKHis [33] and others NA 80 20 ACC of 92.61%, 92%, 93.93%, and 91.73% on four magnification factors of BreaKHis [33] NA 93.85%
Soltane et al. [65] To classify breast cancer images Employed ResNet [75] 323 colored lymphoma images 323 50 50 A total of 27 misclassifications for 323 samples were claimed. PRS, RES, F1S, and Kappa score were estimated NA 91.6%
Naik et al. [173] To analyze breast cancer images Employed random forest, k-nearest neighbors, SVM 699 whole-slide images 699 80 20 Random forest algorithm achieved better result for classifying benign and malignant images from 190 testing samples NA 98.2%
Chattopadhyay et al. [174] To classify breast cancer images Employed dense residual dual-shuffle attention network BreaKHis [33] 7909 80 20 Average ACC, PRS, RES, and F1S of 96.10%, 96.03%, 96.08%, and 96.02%, respectively, obtained from four different magnification levels of BreaKHis [33] NA 96.10%
Alruwaili and Gouda [66] To detect breast cancer Employed the principle of transfer learning, ResNet [75] MIAS [85] 322 80 20 Best ACC, PRS, RES, and F1S of 89.5%, 89.5%, 90%, and 89.5%, respectively, obtained from MIAS [85] NA 89.5%
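Many rows above report four per-magnification accuracies (40x, 100x, 200x, and 400x) alongside a single figure in the ACC column; that single figure appears to be the arithmetic mean of the four. A minimal sketch of this averaging convention, assuming a simple unweighted mean (the function name is illustrative; the values come from the Spanhol et al. [33] row):

```python
def mean_acc(per_magnification_acc):
    """Arithmetic mean of per-magnification accuracies (in %)."""
    return sum(per_magnification_acc) / len(per_magnification_acc)

# Spanhol et al. [33] row: 90.0, 88.4, 84.6, 86.1 at 40x/100x/200x/400x
avg = mean_acc([90.0, 88.4, 84.6, 86.1])
print(f"{avg:.2f}%")  # close to the 87.28% listed in the table's ACC column
```

Rows that instead average over datasets or over patient/image levels (e.g., Gupta and Bhavsar [90]) follow the same convention applied to those figures.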