UKPMC Funders Author Manuscripts
Author manuscript; available in PMC: 2023 May 4.
Published in final edited form as: Comput Model Eng Sci. 2023 Mar 9;136(3):2127–2172. doi: 10.32604/cmes.2023.025484

A Survey of Convolutional Neural Network in Breast Cancer

Ziquan Zhu 1, Shui-Hua Wang 1, Yu-Dong Zhang 1,*
PMCID: PMC7614504  EMSID: EMS174792  PMID: 37152661

Abstract

Problems

For people all over the world, cancer is one of the most feared diseases. It is one of the major obstacles to improving life expectancy worldwide and one of the biggest causes of death before the age of 70 in 112 countries. Among all kinds of cancer, breast cancer is the most common cancer in women. Survey data show that female breast cancer has become one of the most common cancers.

Aims

A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options, better treatment outcomes, and improved survival. Based on this situation, many diagnostic methods for breast cancer have been developed, such as computer-aided diagnosis (CAD).

Methods

We complete a comprehensive review of the diagnosis of breast cancer based on the convolutional neural network (CNN) after reviewing a large number of recent papers. First, we introduce several different imaging modalities. The structure of CNN is given in the second part. After that, we introduce some public breast cancer data sets. Then, we divide the diagnosis of breast cancer into three different tasks: (1) classification; (2) detection; (3) segmentation.

Conclusion

Although diagnosis with CNN has achieved great success, there are still some limitations. (i) There are too few good data sets. A good public breast cancer data set needs to address many aspects, such as professional medical knowledge, privacy issues, financial issues, data set size, and so on. (ii) When the data set is very large, a CNN-based model requires substantial computation and time to complete the diagnosis. (iii) Overfitting easily occurs when using small data sets.

Keywords: Breast cancer, convolutional neural network, deep learning, review, image modalities

1. Introduction

For people all over the world, cancer is one of the most feared diseases and one of the major obstacles to improving life expectancy in countries around the world [1–3]. According to the survey, cancer is one of the biggest causes of death before the age of 70 in 112 countries. At the same time, cancer is the third or fourth leading cause of death in 23 countries [4–7].

Among all kinds of cancers, breast cancer is the most common cancer for women [8–12]. According to the data from the United States in 2017, there were more than 250,000 new cases of breast cancer [13]. 12% of American women may get breast cancer in their lifetime [14]. The data surveyed in 2020 showed that female breast cancer had become one of the most common cancers [4].

A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options, better treatment outcomes, and improved survival [8,15–17]. Therefore, there are many diagnostic methods for breast cancer, such as biopsy [18].

The image of breast cancer is shown in Fig. 1. Invasive carcinoma and carcinoma in situ are two types of breast cancer [19]. Carcinoma in situ remains confined to its site of origin and does not spread within the body. About one-third of new breast cancers are carcinoma in situ [20]. Most newly diagnosed breast cancer is invasive. Invasive cancer begins in the mammary duct and can spread to other breast sites [21].

Figure 1. The breast cancer image [22].

Sometimes, breast cancer images can be divided into two categories, benign and malignant. Images of benign and malignant tumors are given in Figs. 2 and 3. Several imaging modalities are used for the diagnosis and analysis of breast cancer [23–25]. The abbreviations of these imaging modalities are given in Table 1. (i) Screen-film mammography (SFM) is one of the most important imaging modalities for early breast cancer detection [26]. However, SFM also has some disadvantages. First, the sensitivity of SFM is low when imaging breasts with dense glandular tissue [27]. This disadvantage stems from the film itself: once the film is exposed, the image cannot be improved, so images with low contrast sometimes result [28]. Furthermore, SFM is not digital. (ii) Digital mammography (DM) is one of the most effective imaging modalities for early breast cancer detection [29,30]. At the same time, DM has long been the standard imaging modality for female breast cancer diagnosis and detection [31]. However, DM has some limitations. The specificity of DM is low, which can lead to unnecessary biopsies [32]. Another limitation of DM is that patients may face high radiation exposure [27], which may pose health hazards. (iii) Magnetic resonance imaging (MRI) is suitable for clinical diagnosis and high-risk patients [33]. MRI is very sensitive to breast cancer [20]. However, MRI still has some problems. Compared with DM, MRI is more expensive [34], and although MRI has high sensitivity, its specificity is low [35]. (iv) Ultrasound (US) is one of the most common methods for the detection of breast cancer. US involves no ionizing radiation [36]. Therefore, compared with SFM and DM, US is safer and cheaper [37]. However, US is an operator-dependent imaging modality [38], so its success in detecting and differentiating breast cancer lesions is largely determined by the operator. (v) Digital breast tomosynthesis (DBT) is a different imaging modality. Compared with traditional mammography, DBT takes less time for imaging [39] and provides more detail in dense breasts [40]. One problem with DBT is that it may miss malignant calcifications at the slice plane [41]. DBT images also take more time to read than DM images [42]. (vi) Histopathological (HP) images can capture information about cell shape and structure [43]. However, histopathology is invasive and requires additional costs [44]. The details of these different imaging modalities are presented in Table 2.

Figure 2. The images of the benign tumors.

Figure 3. The images of the malignant tumors.

Table 1. Full explanation and abbreviated imaging modality.

Abbreviated imaging modality Full explanation
SFM Screen-film mammography
DM Digital mammography
MRI Magnetic resonance imaging
US Ultrasound
DBT Digital breast tomosynthesis
HP Histopathological images

Table 2. Advantages and disadvantages of abbreviated imaging modality.

Abbreviated imaging modality | Advantages | Disadvantages
SFM | Detects early-stage cancer; standard imaging modality; high sensitivity | Not a digital imaging modality; low sensitivity for dense breast tissue; image impossible to improve once exposed
DM | Effective imaging modality; detects early-stage breast cancer | Expensive compared with SFM; high radiation exposure; low specificity may cause unnecessary biopsies; high false-positive and false-negative results
MRI | Used for clinical diagnosis; suitable for high-risk patients; high sensitivity | Low specificity; high cost compared with US and DM
US | No radiation; suitable for pregnant patients; safe and low cost compared with SFM and DM | High requirement for operator
DBT | High accuracy, sensitivity, and specificity compared with DM; less time for imaging; more detail in dense breasts; multiple 3D images | High cost compared with the other four imaging modalities; may not detect malignant micro-calcifications
HP | Depth information; better resolution; captures cell shape information; captures structural information | Invasive; requires additional costs

Medical imaging is usually analyzed manually by experts (pathologists, radiologists, etc.) [45]. As the above overview shows, there are some problems with this practice [46]. Firstly, experts are required to manually analyze medical images, but there are few experts in this field in many developing countries [47]. Secondly, the process of manually analyzing medical images is very long and cumbersome [48]. Thirdly, when experts manually analyze medical images, they can be influenced by external factors, such as lack of rest, decreased attention, etc. [27].

With the continuous progress of computer science, computer-aided diagnosis (CAD) models for breast cancer have become a hot prospect [49]. Scientists have been studying CAD models for breast cancer for more than 30 years [50,51]. CAD models for breast cancer have the following advantages [52]: (i) CAD models can improve specificity and sensitivity [53]. (ii) Unnecessary examinations can be omitted by CAD models [54]. This can shorten the diagnosis time and reduce the cost. (iii) CAD models can reduce the mortality rate by 30% to 70% [13]. With the development of computing power, the convolutional neural network (CNN) has become one of the most popular methods for the diagnosis of breast cancer [55–57]. Recently, a large number of research papers on CNN-based breast cancer diagnosis have been published [58–61]. However, each of these research papers proposes only one or a few methods, which cannot give readers a full understanding of CNN-based breast cancer diagnosis technology. Therefore, we complete a comprehensive review of the diagnosis of breast cancer based on CNN after reviewing a large number of recent papers. In this paper, readers can not only see the CNN-based diagnostic methods for breast cancer of recent decades but also learn the advantages and disadvantages of these methods and future research directions. The main contributions of this survey are given as follows:

  • A large number of major papers about the diagnosis of breast cancer based on CNN are reviewed in this paper to provide a comprehensive survey.

  • This survey presents the advantages and disadvantages of these state-of-the-art methods.

  • A presentation of significant findings shows readers the opportunities available for future research.

  • We give future research directions and critical challenges for CNN-based diagnostic methods for breast cancer.

The rest of this paper is organized as follows: Section 2 introduces CNN. Section 3 introduces the breast cancer data sets. Section 4 presents the application of CNN in breast cancer. The conclusion is given in Section 5.

2. Convolutional Neural Network

In the past few decades, the importance of medical imaging has been fully verified [62–66]. With medical imaging, early diseases can be detected, diagnosed, and treated [33,67–69]. However, as analyzed above, medical imaging still has some shortcomings [70–73]. With the progress of CNN technology, many researchers use CNN to diagnose breast cancer [74–77]. A large number of studies have proved that CNN shows superior performance in breast cancer diagnosis [78–81]. CNN benefits from the continuous improvement of image analysis technology and transfer learning [82–84]. Recently, a large number of researchers have used CNN models as the backbone for transfer learning, such as ResNet, AlexNet, DenseNet, and so on [85–87]. Some layers of the CNN model are frozen, and the unfrozen layers are retrained with the data set [88–90]. Sometimes researchers use CNN models as feature extractors and select other networks as the classifiers [91–93], such as support vector machines (SVM) [94], randomized neural networks (RNNs) [95], etc. At present, many CNN models are used in breast cancer diagnosis [96], such as AlexNet, VGG, ResNet, U-Net, etc. [93,97,98]. CNN is a computing model composed of many layers [99–102]. Fig. 4 shows the structure of a classic CNN model, VGG16 [103]. The residual learning and DenseNet blocks are given in Figs. 5 and 6.

Figure 4. The architecture of VGG16.

Figure 5. The residual learning.

Figure 6. The DenseNet block.

The convolution layer is one of the most important components of CNN and usually follows the input layer [104–108]. The input is scanned by the convolution layer, which uses convolution kernels to extract features. Different convolution kernels extract different features from the same input [109]. There may be multiple convolution layers in a CNN model [110]. Basic features are usually extracted by the front convolution layers, while the convolution layers at the back are more likely to extract advanced features [88].

We first define the parameters of the convolution layer: the input image size is I × I, the convolution kernel is K × K, S represents the stride, the padding is P, and the output size is O × O. Padding refers to additional zero-valued pixels used to pad around the input image [104,111–113]. Stride refers to the step size of each slide of the convolution kernel [114–116]. The formula is shown below:

O = (I − K + 2P)/S + 1 (1)

Fig. 7 gives a sample of convolution. In Fig. 7, the stride and padding are set to 1 and 0, respectively. With I = 7, K = 3, P = 0, and S = 1, we get O = 5.

Figure 7. A sample of convolution.

More and more researchers use zero padding [117] in the convolution layer. In Fig. 8, the output size is the same as the input size thanks to one-pixel zero-padding.

Figure 8. Convolution with the one-pixel thick zero-padding.

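Eq. (1) and the zero-padding behavior of Figs. 7 and 8 can be checked with a minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def conv_output_size(I, K, P, S):
    """Output side length of a square convolution, per Eq. (1)."""
    return (I - K + 2 * P) // S + 1

def conv2d(image, kernel, stride=1, padding=0):
    """Naive 2-D convolution (cross-correlation) on square arrays."""
    if padding:
        image = np.pad(image, padding)  # zero-padding around the input
    I, K = image.shape[0], kernel.shape[0]
    O = (I - K) // stride + 1
    out = np.zeros((O, O))
    for i in range(O):
        for j in range(O):
            patch = image[i*stride:i*stride+K, j*stride:j*stride+K]
            out[i, j] = (patch * kernel).sum()
    return out

# Fig. 7: a 7x7 input, 3x3 kernel, S = 1, P = 0 gives a 5x5 output.
assert conv_output_size(7, 3, 0, 1) == 5
# Fig. 8: with one-pixel zero-padding the output size equals the input size.
assert conv2d(np.ones((7, 7)), np.ones((3, 3)), padding=1).shape == (7, 7)
```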
The features of the input are extracted by the convolution layer [118–121]. After multiple convolutions, the feature dimension becomes higher and higher, resulting in too much data [122]. Much of this data may be redundant [122–124]. This redundant information not only increases the amount of training but can also lead to overfitting problems [123,125–127]. In this case, researchers often use the pooling layer to downsample the extracted features. The main functions of the pooling layer are (i) translation invariance and (ii) feature dimensionality reduction [124].

At present, the three main pooling methods are max pooling [128], average pooling [129], and stochastic pooling [130], as given in Fig. 9.

Figure 9. An example of max, average, and stochastic pooling.

A_R denotes pooling region R in the feature map, and k indexes each element within it. The pooling function pool(·) is defined as:

p = pool(a_k), k ∈ A_R (2)

Max pooling obtains the maximum pixel value in a specific area of the feature map at a certain stride [129]. The formula of max pooling (p_m) is as follows:

p_m = max(a_k), k ∈ A_R (3)

Average pooling averages the pixels in a specific area of the feature map at a certain stride [131]. The formula of average pooling (p_a) is as follows:

p_a = (Σ_{k∈A_R} a_k) / |A_R| (4)

where |A_R| means the number of elements in A_R.

Stochastic pooling selects the map response based on the probability map B = (b_1, b_2, …, b_k, …) [132]. The formula of b_k is as follows:

b_k = a_k / Σ_{k∈A_R} a_k (5)

Stochastic pooling outputs are picked from the multinomial distribution. The formula of stochastic pooling (ps) is as follows:

p_s = a_m, where m ~ P(b_1, b_2, …, b_k, …) (6)
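The three pooling rules above can be illustrated on a single region with NumPy (a minimal sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
region = np.array([1.0, 2.0, 3.0, 4.0])  # the activations a_k in one region A_R

p_max = region.max()                # Eq. (3): max pooling
p_avg = region.sum() / region.size  # Eq. (4): average pooling

# Eqs. (5)-(6): stochastic pooling draws one activation with
# probability b_k proportional to its value.
probs = region / region.sum()
p_sto = rng.choice(region, p=probs)

assert p_max == 4.0 and p_avg == 2.5 and p_sto in region
```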

The nonlinearity is introduced into CNN through activation. Two traditional activation functions are Sigmoid [133] and Tanh [134]. The equation of Sigmoid is given as:

Sigmoid(x) = 1 / (1 + e^(−x)) (7)

The Tanh is written as:

Tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)) (8)

These two traditional activation functions do not perform well in convergence. The rectified linear unit (ReLU) [135] accelerates the convergence. The equation of ReLU is as follows:

ReLU(x) = { x, x > 0; 0, x ≤ 0 } (9)

There are some problems with the ReLU. When x is less than or equal to 0, the activation value is 0. In this case, leaky ReLU (LReLU) [136] is proposed. Compared with ReLU, when x is less than or equal to 0, the activation value is a small negative. The equation of LReLU is given as:

LReLU(x) = { x, x > 0; 0.01x, x ≤ 0 } (10)

Based on LReLU, researchers proposed PReLU [137]. When x is less than or equal to 0, the slope is learned adaptively from the data. The PReLU is shown as:

PReLU(x) = { x, x > 0; zx, x ≤ 0 } (11)

where z is a small slope learned from the data during training.

Each activation function has its own characteristics, as shown in Table 3.

Table 3. The characteristics of activation functions.

Activation function Symmetric about the origin Speed of convergence Output
Sigmoid No Low (0, 1)
Tanh Yes Low (−1, 1)
ReLU No Fast [0, +∞)
LReLU No Fast (−∞, +∞)
PReLU No Fast (−∞, +∞)
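Eqs. (7)-(11) can be written out directly in NumPy (a short sketch; the fixed slope 0.01 for LReLU and the argument `z` for PReLU follow the equations above):

```python
import numpy as np

def sigmoid(x):   # Eq. (7)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):      # Eq. (8)
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def relu(x):      # Eq. (9)
    return np.where(x > 0, x, 0.0)

def lrelu(x):     # Eq. (10), fixed negative slope 0.01
    return np.where(x > 0, x, 0.01 * x)

def prelu(x, z):  # Eq. (11), slope z is learned in practice
    return np.where(x > 0, x, z * x)

x = np.array([-2.0, 0.0, 2.0])
assert np.allclose(relu(x), [0.0, 0.0, 2.0])
assert np.allclose(lrelu(x), [-0.02, 0.0, 2.0])
assert np.allclose(tanh(x), np.tanh(x))
assert sigmoid(0.0) == 0.5  # output stays in (0, 1), as in Table 3
```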

The CNN model maps the input data to the feature space with the convolution layers, pooling layers, and activation functions. The function of the fully connected layer is to map these features to the sample space. The fully connected layer flattens the feature maps into a one-dimensional vector, weights the features, and reduces the spatial dimension.

A CNN may contain multiple fully connected layers. Global average pooling has been proposed as a substitute for the fully connected layer, which greatly reduces the number of parameters. However, global average pooling does not always perform better than the fully connected layer, for example in transfer learning.
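The parameter saving comes from collapsing each channel's spatial map to a single average before classification, as this small NumPy sketch shows (shapes and values are illustrative):

```python
import numpy as np

# Feature maps shaped (channels, height, width); global average pooling
# collapses each channel's spatial map to a single mean value, so the
# classifier sees one number per channel instead of a flattened H*W vector.
feature_maps = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
gap = feature_maps.mean(axis=(1, 2))

assert gap.shape == (2,)              # one value per channel
assert np.allclose(gap, [7.5, 23.5])  # per-channel means
```

A fully connected layer fed the flattened maps would need weights proportional to 2 × 4 × 4 per output unit; after global average pooling it needs only 2.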

The increasing depth of CNN models increases the difficulty of training. The input distribution of each layer changes during training, which can cause the gradients of the lower layers to vanish. This gradient vanishing is why deep neural networks converge more and more slowly [138].

Batch normalization adjusts the input value of each layer to the standard normal distribution. The data is set as:

X = [x_1, x_2, …, x_n] (12)

Firstly, calculate the mean value of batch B:

φ_B = (1/n) Σ_{i=1}^{n} x_i (13)

Secondly, calculate the variance:

ϑ_B = (1/n) Σ_{i=1}^{n} (x_i − φ_B)² (14)

Thirdly, perform the normalization:

x̂_i = (x_i − φ_B) / √(ϑ_B + ε) (15)

where ε is a small constant greater than 0, which ensures that the denominator is nonzero.

Finally, two parameters are proposed to increase network nonlinearity:

y_i = α x̂_i + β (16)

where α is the scale parameter and β is the shift parameter.
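The four batch-normalization steps above can be sketched in NumPy for a one-dimensional batch (a minimal sketch; the function name is ours, and α, β are fixed here rather than learned):

```python
import numpy as np

def batch_norm(x, alpha=1.0, beta=0.0, eps=1e-5):
    """Batch normalization following Eqs. (13)-(16)."""
    mean = x.mean()                          # Eq. (13): batch mean
    var = ((x - mean) ** 2).mean()           # Eq. (14): batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # Eq. (15): normalize
    return alpha * x_hat + beta              # Eq. (16): scale and shift

batch = np.array([1.0, 2.0, 3.0, 4.0])
y = batch_norm(batch)
assert abs(y.mean()) < 1e-6       # normalized batch has (near) zero mean
assert abs(y.std() - 1.0) < 1e-3  # and (near) unit standard deviation
```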

In a CNN model, too few training samples can lead to the overfitting problem: the loss function of the CNN model is small and high accuracy is obtained during training, but the loss function is large and the accuracy is low during testing. In this case, researchers usually use dropout to prevent overfitting. During CNN model training, some nodes in the hidden layer are set to 0, as shown in Fig. 10. This reduces the interaction between hidden layers [139].

Figure 10. An example of the dropout.

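The node-zeroing idea can be sketched as inverted dropout in NumPy (an illustrative sketch, not from the paper; the rescaling by 1/(1 − rate) keeps the expected activation unchanged):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate=0.5):
    """Inverted dropout: zero each unit with probability `rate` during
    training, and rescale the survivors so the expected output is unchanged."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

hidden = np.ones(10)
out = dropout(hidden, rate=0.5)
# Every unit is either dropped (0.0) or kept and rescaled (1.0 / 0.5 = 2.0).
assert set(np.unique(out)) <= {0.0, 2.0}
```

At test time dropout is disabled and the full layer is used; the rescaling during training is what makes that switch consistent.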
One of the important indexes used to evaluate the performance of a CNN model is the confusion matrix. The confusion matrix is given in Table 4.

Table 4. Confusion matrix.

                Predicted positive   Predicted negative
Actual positive   TP                   FN
Actual negative   FP                   TN

TP, FN, FP, and TN are true positive, false negative, false positive, and true negative, respectively.

However, the confusion matrix only counts numbers. With large amounts of data, it is difficult to measure the quality of a model simply by counting. Therefore, several other indicators are derived from these basic statistical results.

  1. Accuracy: It means the proportion of all samples that are predicted correctly.

    Accuracy = (TP + TN) / (TP + FP + TN + FN) (17)
  2. Sensitivity (TPR): It indicates the proportion of positive cases recognized as positive among all positive cases.
    Sensitivity = TP / (TP + FN) (18)
  3. Specificity: It represents the proportion of negative cases recognized as negative cases in the negative cases.

    Specificity = TN / (FP + TN) (19)
  4. Precision: It indicates how many of the samples predicted as positive are actually positive.

    Precision = TP / (TP + FP) (20)
  5. F1-measure: It is the harmonic average of precision and recall.

    F1 = 2TP / (2TP + FP + FN) (21)
  6. FPR: It indicates the proportion of negative cases that are incorrectly predicted as positive.

    FPR = FP / (TN + FP) (22)
  7. Receiver Operating Characteristic (ROC) curve: TPR and FPR are the y-axis and x-axis, respectively. From the definitions of FPR and TPR, it can be understood that the higher the TPR and the smaller the FPR, the more efficient the CNN model will be.

  8. Area under Curve (AUC): It is between 0 and 1 and means the area under the ROC curve. The larger the AUC, the better the model.

  9. The Dice Similarity Coefficient (DSC) is usually used as the measurement to evaluate the quality of a segmentation. The DSC measures the overlap between manual segmentation (M) and automated segmentation (A).

    DSC(A, M) = 2|A ∩ M| / (|A| + |M|) (23)
    where |A ∩ M| represents the area of the intersection of A and M.
  10. The Mean Absolute Error (MAE) is the average distance between the predicted (t) and the truth (y) of the sample.

    MAE = (1/m) Σ_{i=1}^{m} |t_i − y_i| (24)
    where m is the number of samples.
  11. The Intersection over Union (IoU) evaluates the overlap between the predicted value (V) and the ground truth (G).

    IoU = |V ∩ G| / |V ∪ G| (25)
    where |V ∪ G| means the area of the union of V and G.
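The indicators above are straightforward to compute from the confusion-matrix counts and binary masks; this NumPy sketch (with illustrative counts and masks of our own) shows Eqs. (17)-(23) and (25):

```python
import numpy as np

def scores(TP, FN, FP, TN):
    """The basic confusion-matrix indicators from Eqs. (17)-(22)."""
    return {
        "accuracy":    (TP + TN) / (TP + FP + TN + FN),
        "sensitivity": TP / (TP + FN),
        "specificity": TN / (FP + TN),
        "precision":   TP / (TP + FP),
        "f1":          2 * TP / (2 * TP + FP + FN),
        "fpr":         FP / (TN + FP),
    }

def dice(A, M):  # Eq. (23): overlap between two binary masks
    return 2 * np.logical_and(A, M).sum() / (A.sum() + M.sum())

def iou(V, G):   # Eq. (25): intersection over union of two binary masks
    return np.logical_and(V, G).sum() / np.logical_or(V, G).sum()

s = scores(TP=40, FN=10, FP=5, TN=45)
assert s["accuracy"] == 0.85 and s["sensitivity"] == 0.8 and s["specificity"] == 0.9

A = np.array([1, 1, 0, 0], dtype=bool)
M = np.array([1, 0, 1, 0], dtype=bool)
assert dice(A, M) == 0.5
assert np.isclose(iou(A, M), 1 / 3)
```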

3. Common Datasets

In recent years, many data sets have been produced and published, and researchers can use some of them for research. Table 5 shows the details of some public data sets.

Table 5. The details of some public data sets.

Date set Number of images Size (GB) Modality
DDSM 55,890 - DM
MIAS 322 2.3 DM
CBIS-DDSM 4067 70.5 DM
INbreast 410 - DM
BreakHis 7909 - Histology

Note: - means unavailable.

For DDSM, all images are 299 × 299. The DDSM project is a collaborative effort involving the Massachusetts General Hospital (D. Kopans, R. Moore), the University of South Florida (K. Bowyer), and Sandia National Laboratories (P. Kegelmeyer). Additional cases from Washington University School of Medicine were provided by Peter E. Shile, MD, Assistant Professor of Radiology and Internal Medicine. There are a total of 55,890 samples in the DDSM data set; 86% of these samples are negative, and the rest are positive. All data is stored as TFRecords files.

The images in the CBIS-DDSM (Curated Breast Imaging Subset of DDSM) are divided into three categories: normal, benign, and malignant cases. This data set contains a total of 4067 images. The CBIS-DDSM collection includes a subset of the DDSM data selected and curated by a trained mammographer. The images have been decompressed and converted to DICOM format.

The Mammographic Image Analysis Society (MIAS) Database contains 322 images. Each image in this dataset is 1024 × 1024. MIAS is an organization of UK research groups interested in the understanding of mammograms and has generated a database of digital mammograms. Films taken from the UK National Breast Screening Programme have been digitized to a 50-micron pixel edge with a Joyce-Loebl scanning microdensitometer, a device linear in the optical density range 0–3.2, and representing each pixel with an 8-bit word.

The INbreast database contains 410 breast cancer images. It is a mammographic database with images acquired at the Breast Centre of Hospital de São João, Porto, Portugal. These images were obtained from 115 patients: 90 women had both breasts imaged (four images each), and the other 25 had undergone mastectomy (two images each).

The Breast Cancer Histopathological Image Classification (BreakHis) data set consists of 5429 malignant samples and 2480 benign samples, so there are 7909 samples in total. This database was built in collaboration with the P&D Laboratory of Pathological Anatomy and Cytopathology, Parana, Brazil. These microscopic images of breast tumor tissue were collected from 82 patients using different magnifying factors (40×, 100×, 200×, and 400×).

4. Application of CNN in Breast Cancer

The diagnosis of breast cancer through CNN is generally divided into three different tasks: (1) classification; (2) detection; (3) segmentation. Therefore, this section is presented in three parts based on these three tasks.

4.1. Breast Cancer Classification

In recent years, the CNN model has been proven to be successful and has been applied to the diagnosis of breast cancer [140]. Researchers classify breast cancer into several categories based on CNN models. We review the CNN-based classification of breast cancer in this section.

Alkhaleefah et al. [141] introduced a model combining CNN and support vector machine (SVM) classifier with radial basis function (RBF) for breast cancer image classification, as shown in Fig. 11. This method was roughly separated into three steps: Firstly, the CNN model was trained through breast cancer images. Secondly, the CNN model was fine-tuned based on the data set. Finally, the features extracted by the CNN model would be used as the input to RBF-Based SVM. They evaluated the proposed method based on the confusion matrix.

Figure 11. The structure of CNN+SVM.

Liu et al. [142] introduced the fully connected layer first CNN (FCLF-CNN) method. This method added the fully connected layer before the convolution layer. They improved structured data transformation in two ways. The encoder in the first method was the fully connected layer. The second method was to use MSE losses. They tested different FCLF-CNN models and four FCLF-CNN models were ensembled. The FCLF-CNN model got 99.28% accuracy, 98.65% sensitivity, and 99.57% specificity for the WDBC data set, and 98.71% accuracy, 97.60% sensitivity, and 99.43% specificity for the WBCD data set.

Gour et al. [143] designed a network to classify breast cancer (ResHist). To obtain better classification results, they proposed a data enhancement technique that combined affine transformation, stain normalization, and image patch generation. Experiments showed that ResHist had better classification results than traditional CNN models, such as GoogleNet, ResNet50, and so on. This method finally achieved 84.34% accuracy and 90.49% F1.

Wang et al. [144] introduced a hybrid CNN and SVM model to classify breast cancer. This method used the VGG16 network as the backbone model. Because the data set was small, transfer learning was used in the VGG16 network. On the data set, they used multi-model voting, and the images were also deformed for augmentation. The accuracy of this method was 80.6%.

Yao et al. [145] introduced a new model to classify breast cancer. Features were extracted from breast cancer images by a CNN (DenseNet) and an RNN (LSTM). Then a perceptron attention mechanism based on natural language processing (NLP) was selected to weight the extracted features. They used targeted dropout in the model instead of general dropout. They achieved 98.3% accuracy, 100% precision, 100% recall, and 100% F1 on the Bioimaging2015 data set.

Ibraheem et al. [24] proposed a three-parallel CNN branch network (3PCNNB-Net) to classify breast cancer. The 3PCNNB-Net was separated into three steps. The first step was feature extraction: three identical parallel CNNs extracted features. The second step used an average layer to merge the extracted features. The flatten layer, BN, and softmax layer were used as the classification layer. The 3PCNNB-Net achieved 97.04% accuracy, 97.14% sensitivity, and 95.23% specificity.

Agnes et al. [146] proposed a multiscale convolutional neural network (MA-CNN) to classify breast cancer, as presented in Fig. 12. They used dilated convolution, applying three dilated convolutions of different sizes to extract features at different levels, which were then combined.

Figure 12. The structure of MA-CNN.

Zhang et al. [115] designed an 8-layer CNN network for breast cancer classification (BDR-CNN-GCN). This network featured three main innovations. First, they integrated BN and dropout. Second, they used rank-based stochastic pooling (RSP) instead of general max or average pooling. Finally, the network was combined with a two-layer graph convolutional network (GCN).

Wang et al. [147] introduced a breast cancer classification model based on CNN. In this paper, they selected Inception-v3 as the backbone model for feature extraction of breast cancer images and applied transfer learning to it. This model achieved 0.886 sensitivity, 0.876 specificity, and 0.9468 AUC.

Saikia et al. [148] compared different classical CNN models in breast cancer classification. These classic CNN models used in this article were VGG16, VGG19, ResNet-50, and GoogLeNet-V3. The data set contained a total of 2120 breast cancer images.

Mewada et al. [149] introduced a new CNN-based model to classify breast cancer. In this new model, they added the multi-resolution wavelet transform. Spectral features were as important as spatial features in classification. Therefore, they integrated the features extracted from Haar wavelet with spatial features. They tested the new model on the BreakHist dataset and BCC2015 and obtained 97.58% and 97.45% accuracy, respectively.

Zhou et al. [150] proposed a new model for automatically classifying benign and malignant breast cancer, as shown in Fig. 13. This model can directly extract features from images, thus eliminating manual operation and image segmentation. The method combined shear wave elastography (SWE) and the CNN model to classify breast cancer. This SWE-CNN model produced 95.7% specificity, 96.2% sensitivity, and 95.8% accuracy.

Figure 13. The structure SWE+CNN.

Lotter et al. [151] introduced a multi-scale CNN for the classification of breast cancer. Firstly, the classifier was trained by segmenting the lesions in the image. Moreover, they trained the model by using the extracted features. They tested the multi-scale CNN on the DDSM dataset and obtained 0.92 AUROC.

Vidyarthi et al. [152] introduced a classification model combining CLAHE and CNN models for microscopic imaging of breast cancer, and compared the results with and without the preprocessing. In this paper, they selected the BreakHis data set for testing. The hybrid model obtained better classification results, producing an accuracy of about 90%.

Hijab et al. [153] used a classical CNN model (VGG16) for breast cancer classification. They made some modifications to the VGG16. First, they selected the pre-trained VGG16 as the backbone model. Then they fine-tuned the backbone model, freezing all convolution layers except the last layer. The weights were updated using stochastic gradient descent (SGD). Finally, the fine-tuned VGG16 yielded 0.97 accuracy and 0.98 AUC.

Kumar et al. [154] proposed a self-made CNN model for breast cancer classification. Six convolutional layers, six max-pooling layers, and two fully connected layers are used to form the self-made CNN model. The ReLU activation function was selected in this paper. The self-made CNN model was tested on the 7909 breast cancer images and achieved 84% efficiency.

Kousalya et al. [155] compared a self-made CNN model with DenseNet201 for the classification of breast cancer. The self-made CNN model had two convolutional layers, two pooling layers, one flatten layer, and two fully connected layers. They tested these two CNN models with different learning rates and batch sizes. In conclusion, the self-made CNN model with Particle Swarm Optimization (PSO) yielded better specificity and precision.

Mikhailov et al. [156] proposed a novel CNN model to classify breast cancer. In this model, max pooling and depth-wise separable convolution were selected to improve the classification performance. Moreover, different activation functions were tested in this paper: ReLU, ELU, and Sigmoid. The novel CNN model with ReLU achieved the best accuracy, which was 85%.

Karthik et al. [157] offered a novel stacking ensemble CNN framework for the classification of breast cancer. Three stacked CNN models, designed by the authors, were used for extracting features. The features from these three stacked CNN models were ensembled to yield better classification performance. The ensemble CNN model achieved 92.15% accuracy, 92.21% F1-score, and 92.17% recall.

Nawaz et al. [158] proposed a novel CNN model for the multi-classification of breast cancer, with DenseNet as the backbone model. Tested on the public BreakHis data set, the model achieved 95.4% accuracy.

Deniz et al. [159] proposed a new model for breast cancer classification that combined transfer learning and CNN models. The pre-trained VGG16 and AlexNet were used to extract features, which were concatenated and then fed to an SVM for classification. The model achieved 91.30% accuracy.

Yeh et al. [160] compared CNN-based CAD and feature-based CAD for classifying breast cancer based on DBT images. In the CNN-based CAD, LeNet was the feature extractor. In their experiments, the LeNet-based CAD yielded accuracies of 87.12% (0.035) and 74.85% (0.122). In conclusion, the CNN-based CAD outperformed the feature-based CAD.

Gonçalves et al. [161] tested three different CNN models for classifying breast cancer: ResNet50, DenseNet201, and VGG16, all using transfer learning to improve classification performance. DenseNet201 performed best, with 91.67% accuracy, 83.3% specificity, 100% sensitivity, and a 0.92 F1-score.

Bayramoglu et al. [162] proposed two different CNN models for breast cancer classification. The single-task CNN model classified malignancy only, while the multi-task CNN (mt_CNN) model classified both malignancy and image magnification level. The two models yielded average recognition rates of 83.25% and 82.13%, respectively.

Alqahtani et al. [163] offered a novel CNN model (msSE-ResNet) for breast cancer classification. In msSE-ResNet, residual learning and features at different scales were used to improve the results. The msSE-ResNet achieved 88.87% accuracy and 0.9541 AUC.

The classification of breast cancer based on CNN still has some limitations. When a method is trained on a large public dataset, it requires a great deal of training time. Several of the proposed methods were evaluated with five-fold cross-validation. Although some results were very good, many others remained unsatisfactory. The details of these methods are given in Table 6.
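The five-fold cross-validation used by several of these papers partitions the data so that every sample appears in exactly one validation fold. A minimal sketch of the index split:

```python
# Minimal sketch of k-fold cross-validation index splitting (illustrative):
# each sample is held out exactly once across the k validation folds.

def k_fold_indices(n_samples, k=5):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

splits = k_fold_indices(10, k=5)
print(len(splits))      # 5 (one train/validation pair per fold)
print(splits[0][1])     # validation indices of the first fold
```

Reported metrics are then averaged over the k validation folds, which is why some papers quote results with a standard deviation.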

Table 6. Details of breast cancer classification based on CNN.

Authors Methods Results
Alkhaleefah et al. [141] A model combining CNN and SVM classifier with RBF to classify breast cancer. The sensitivity, specificity, and accuracy of this model were 1, 0.86, and 0.92, respectively.
Liu et al. [142] The fully connected layer first CNN (FCLF-CNN) method was proposed. This method added the fully connected layer before the convolution layer. They improved structured data transformation in two ways. The encoder in the first method was the fully connected layer. The second method was to use MSE losses. The FCLF-CNN model got 99.28% accuracy, 98.65% sensitivity, and 99.57% specificity for the WDBC data set, and 98.71% accuracy, 97.60% sensitivity, and 99.43% specificity for the WBCD data set.
Gour et al. [143] A network (ResHist) was designed to classify breast cancer. The data enhancement technique combined affine transformation, stain normalization, and image patch generation. This method finally achieved 84.34% accuracy and 90.49% F1.
Wang et al. [144] A hybrid CNN and SVM model was presented to classify breast cancer. This method used the VGG16 network as the backbone model. The accuracy of this method was 80.6%.
Yao et al. [145] This model used CNN (DenseNet) and RNN (LSTM) to extract features. Then the perceptron attention mechanism based on NLP was used to weigh the features. This model achieved 98.3% accuracy, 100% precision, 100% recall, 100% F1 for Bioimaging2015 Dataset.
Ibraheem et al. [24] A three-parallel CNN branch network (3PCNNB-Net) was designed to classify breast cancer. Three parallel CNNs extracted the features, which were merged by an average layer. The flatten layer, BN, and softmax layer formed the classification layer. The 3PCNNB-Net achieved 97.04% accuracy, 97.14% sensitivity, and 95.23% specificity.
Agnes et al. [146] A multiscale all convolutional neural network (MA-CNN) was proposed for breast cancer classification. In the MA-CNN, they used extended convolution and used three dilated convolutions of different sizes to extract different levels’ features. The accuracy, sensitivity, specificity, F1, and AUC of MA-CNN were 96.47%, 96%, 96%, 96%, and 0.99, respectively.
Zhang et al. [115] An 8-layer CNN network (BDR-CNN-GCN) was designed for breast cancer classification. They integrated BN and dropout, replaced the normal pooling layer with RSP, and combined GCN. The accuracy, sensitivity, and specificity of BDR-CNN-GCN were 96.10%±1.60%, 96.20%±2.90%, and 96.00%±2.31%, respectively.
Wang et al. [147] A breast cancer classification model based on CNN (inception-v3) was proposed. This model got 0.886 sensitivity, 0.876 specificity, and 0.9468 AUC, respectively.
Saikia et al. [148] They compared different classical CNN models in breast cancer classification, which were VGG16, VGG19, ResNet-50, and GoogLeNet-V3. Finally, GoogLeNet-V3 achieved the highest accuracy of 96.25%.
Mewada et al. [149] A new CNN-based model was proposed to classify breast cancer. In this new model, they added the multi-resolution wavelet transform. They tested the new model on the BreakHist dataset and BCC2015 and obtained 97.58% and 97.45% accuracy, respectively.
Zhou et al. [150] A new model was proposed for automatically classifying benign and malignant breast cancer, which combined SWE and the CNN model. This SWE-CNN model produced 95.7% specificity, 96.2% sensitivity, and 95.8% accuracy, respectively.
Lotter et al. [151] A multi-scale CNN was designed for the classification of breast cancer. They tested the multi-scale CNN on the DDSM dataset and obtained 0.92 AUROC.
Vidyarthi et al. [152] A classification method combining CLAHE and a CNN model was proposed for microscopic imaging of breast cancer. The results showed that the hybrid CNN model achieved better classification results, with an accuracy of about 90%.
Hijab et al. [153] A classical CNN model (VGG16) was used for breast cancer classification. They did some modifications to the VGG16. Finally, the fine-tuned VGG16 yielded 0.97 accuracy and 0.98 AUC.
Kumar et al. [154] Six convolutional layers, six max-pooling layers, and two fully connected layers formed the self-made CNN model. The model was tested on 7,909 breast cancer images and achieved 84% efficiency.
Kousalya et al. [155] The self-made CNN model was compared with DenseNet201 for the classification of breast cancer. The two models were tested at different learning rates and batch sizes. The self-made CNN model with Particle Swarm Optimization (PSO) yielded better specificity and precision.
Mikhailov et al. [156] The max-pooling and depth-wise separable convolution were used in this novel CNN model to classify breast cancer. ReLU, ELU, and Sigmoid were tested in this paper. The novel CNN model with ReLU can achieve 85% accuracy.
Karthik et al. [157] A novel stacking ensemble CNN framework was proposed for the classification of breast cancer. Three stacked CNN models were designed to extract features, which were ensembled for classification. The ensemble model achieved 92.15% accuracy, 92.21% F1-score, and 92.17% recall.
Nawaz et al. [158] A novel CNN model was proposed for the multi-classification of breast cancer. In this model, DenseNet was used as the backbone model. The novel model could achieve 95.4% accuracy for the multi-classification of breast cancer.
Deniz et al. [159] A new model combining transfer learning and CNN models was proposed for breast cancer classification. The pre-trained VGG16 and AlexNet were used to extract features, which were concatenated and then fed to an SVM for classification. The model achieved 91.30% accuracy.
Yeh et al. [160] The CNN-based CAD and feature-based CAD for classifying breast cancer were compared. In the CNN-based CAD, the feature extractor was the LeNet. In conclusion, the CNN-based CAD could outperform the feature-based CAD.
Gonçalves et al. [161] Three different CNN models were tested to classify breast cancer, which were ResNet50, DenseNet201, and VGG16. DenseNet could achieve the best results and get 91.67% accuracy, 83.3% specificity, 100% sensitivity, and 0.92 F1-score.
Bayramoglu et al. [162] Two different CNN models were proposed for breast cancer classification. The single CNN model was used to classify a malignancy. The multi-task CNN (mt_CNN) model was used to classify malignancy and image magnification levels. The single CNN model and mt_CNN model could yield 83.25% and 82.13% average recognition rates, respectively.
Alqahtani et al. [163] A novel CNN model (msSE-ResNet) was offered for breast cancer classification. Residual learning and features at different scales were used to improve the results. The msSE-ResNet achieved 88.87% accuracy and 0.9541 AUC.

4.2. Breast Cancer Detection

We review the detection of breast cancer based on CNN in this section [164]. Researchers use CNN models to detect candidate lesion locations in breast images.

Sohail et al. [165] introduced a CNN-based framework (MP-MitDet) for mitotic nuclei recognition in pathological images of breast cancer. The framework comprises four steps: (1) label refinement, (2) split-region selection, (3) blob analysis, and (4) cell refinement. The whole framework used an automatic tagger and the CNN model for training, and further areas were selected according to the spot area. The MP-MitDet obtained 0.71 precision, 0.76 recall, 0.75 F1, and 0.78 area.

Mahmood et al. [166] proposed a low-cost CNN framework for automatic breast cancer mitotic cell detection, as shown in Fig. 14. This framework was composed of the faster regional convolutional neural network (Faster R-CNN) and deep CNN. They experimented with this model on two public datasets, which were ICPR 2012 and ICPR 2014. This model yielded 0.841 recall, 0.858 F1, and 0.876 precision for ICPR 2012 and 0.583 recall, 0.691 F1, and 0.848 precision for ICPR 2014.
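Detection scores such as the precision, recall, and F1 above are typically computed by matching predicted boxes to ground-truth boxes at an IoU threshold. A minimal sketch with toy boxes (not the ICPR data):

```python
# Illustrative detection scoring: greedy one-to-one matching of predicted
# boxes to ground-truth boxes at an IoU threshold (toy boxes, made up here).

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def prf1(preds, truths, thresh=0.5):
    """Precision, recall, F1 from greedy matching at an IoU threshold."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        hit = next((t for t in unmatched if iou(p, t) >= thresh), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

truths = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (40, 40, 50, 50)]   # one good hit, one false alarm
print(prf1(preds, truths))                   # (0.5, 0.5, 0.5)
```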

Figure 14. The framework of R-CNN+CNN.

Figure 14

Wang et al. [167] introduced a new model combining CNN and US-ELM (CNN-GTD-ELM) to detect breast cancer in X-ray images. They designed an 8-layer CNN model to extract features from the input images, combined the extracted features with some additional tumor features, and fed the combined features to the ELM.

Chiao et al. [168] established a mask region detection framework based on CNN, as given in Fig. 15. This method detected breast cancer lesions and classified them as benign or malignant. The framework achieved 0.75 average precision in detection and 85% accuracy in classification.

Figure 15. The structure of mask CNN.

Figure 15

Das et al. [169] introduced Deep Multiple Instance Learning (MIL) based on CNN for breast cancer detection. This model did not rely on region learning marked by experts on WSI images. The MIL-CNN model achieved 96.63%, 93.06%, and 95.83% accuracy on the IUPHL, BreakHis, and UCSB data sets, respectively.

Melekoodappattu et al. [11] introduced a framework for breast cancer detection. The framework was mainly composed of CNN and image texture attribute extraction. They designed a 9-layer CNN model. In the extraction phase, they defined texture features and used Uniform Manifold Approximation and Projection (UMAP) to reduce the dimension of features. Then the multi-stage features were integrated for detection. They tested this model on two data sets which were MIAS and DDSM. This model obtained 98% accuracy and 97.8% specificity for the MIAS data set, and 97.9% accuracy and 98.3% specificity for the DDSM data set.

Zainudin et al. [170] designed three CNN models to detect mitosis and amitosis in breast cells. The three CNNs had 6, 13, and 17 layers, respectively. Experiments showed that the 17-layer CNN model achieved the best performance: 15.50% loss, 80.55% TPR, 84.49% accuracy, and 11.66% FNR.

Wu et al. [171] introduced a deep fused fully convolutional neural network (FF-CNN) for breast cancer detection. They selected AlexNet as the backbone model, combined different levels of features to improve detection results, and used a multi-step fine-tuning method to reduce overfitting. The FF-CNN was tested on the ICPR 2014 data set and obtained better detection accuracy and faster detection speed.

Gonçalves et al. [172] introduced a new framework for breast cancer detection. The framework used two bionic optimization techniques, particle swarm optimization and a genetic algorithm, to optimize three CNN models: DenseNet-201, VGG-16, and ResNet-50. Experiments showed that the detection results of the optimized networks improved significantly: the F1-score of VGG-16 increased from 0.66 to 0.92, that of ResNet-50 from 0.83 to 0.90, and all three optimized networks scored above 0.90.
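Particle swarm optimization, used here and in several other surveyed papers, maintains a swarm of candidate solutions that are pulled toward each particle's personal best and the swarm's global best. A minimal 1-D sketch on a toy objective (the papers apply it to CNN hyperparameters, not this function):

```python
import random

# Minimal particle swarm optimization on a toy 1-D objective (illustrative;
# the surveyed papers apply PSO to CNN hyperparameters, not this function).
def pso(f, lo, hi, n=20, iters=100, w=0.5, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    best = pos[:]                      # per-particle best positions
    gbest = min(best, key=f)           # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            # Velocity: inertia + pull toward personal and global bests.
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (best[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=f)
    return gbest

x = pso(lambda v: (v - 3.0) ** 2, lo=-10, hi=10)
print(x)  # converges close to the minimum at 3.0
```

For hyperparameter tuning, `f` would instead train (or partially train) a network with the candidate settings and return a validation loss.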

Guan et al. [173] proposed two models to detect breast cancer. The first trained a Generative Adversarial Network (GAN) to generate images and then fed them into a CNN; its accuracy was 98.85%. The second selected VGG-16 as the backbone model and applied transfer learning; its accuracy was 91.48%. The authors also combined the two methods, but the results of the combined model were not ideal.

Hadush et al. [174] proposed a CNN-based breast mass abnormality detection model to reduce manual cost. A CNN extracted the features, which were then input into the Region Proposal Network (RPN) and Region of Interest (ROI) components of Faster R-CNN for detection. The method achieved 92.2% AUC-ROC, 91.86% accuracy, and 94.67% sensitivity.

Huang et al. [175] presented a lightweight CNN model (BM-Net) to detect breast cancer. BM-Net consisted of MobileNet-V3, the backbone used to extract features, and a bilinear structure. To save resources, they simply replaced the fully connected layer with the bilinear structure. BM-Net achieved 0.88 accuracy and a 0.71 score.

Mahbub et al. [176] proposed a novel model to detect breast cancer. They designed a CNN model consisting of six convolutional layers, five max-pooling layers, and two dense layers, and combined it with a fuzzy analytical hierarchy process model. The proposed model achieved 98.75% accuracy.

Prajoth SenthilKumar et al. [177] used a pre-trained CNN model for the detection and analysis of breast cancer, selecting VGG16 as the backbone. They detected breast cancer from histology images based on variability, cell density, and tissue structure. The model achieved 88% accuracy.

Charan et al. [178] designed a 16-layer CNN model for the detection of breast cancer, consisting of six convolution layers, four average-pooling layers, and one fully connected layer. The public Mammograms-MIAS data set was used for training and testing. The designed CNN model achieved 65% accuracy.

Alanazi et al. [179] offered a novel CNN model for the detection of breast cancer. They designed a new CNN model and used three different classifiers: K-nearest neighbor, logistic regression, and support vector machines. The new model achieved 87% accuracy, about 9% higher than other machine-learning methods.

Gonçalves et al. [180] presented a novel model to detect breast cancer. They proposed a new random forest surrogate, built from particle swarm optimization and genetic algorithms, to find better parameters for pre-trained CNN models. Three pre-trained CNN models were used: ResNet50, DenseNet201, and VGG16. With the proposed surrogate, the F1-scores of DenseNet201 and ResNet50 improved from 0.92 to 1 and from 0.85 to 0.92, respectively.

Guan et al. [181] applied a generative adversarial network (GAN) to generate more breast cancer images. Regions of interest (ROIs) from the images were used to train the GAN. Several augmentation methods, such as scaling, shifting, and rotation, were compared with the GAN, and a newly designed CNN model served as the classifier. In their experiments, GAN-based augmentation yielded around 3.6% better results than the other transformations.
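The baseline transformations that the GAN was compared against are simple geometric operations. A sketch on a tiny 2-D "image" (a list of rows; illustrative only):

```python
# Baseline geometric augmentations of the kind compared against GAN
# synthesis in [181], sketched on a tiny 2-D image (list of rows).

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise: reversed rows become columns."""
    return [list(col) for col in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
print(hflip(img))   # [[2, 1], [4, 3]]
print(rot90(img))   # [[3, 1], [4, 2]]
```

Unlike these label-preserving transforms, a GAN synthesizes entirely new samples, which is why it can add more diversity to a small data set.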

Sun et al. [182] were inspired by human visual detection to propose a novel model for breast cancer detection in mammographic images. The mathematical morphology method was used to preprocess the images, and an image template matching method located the suspected regions of a breast mass. PSO was used to improve the accuracy. The proposed model achieved 85.82% accuracy, 66.31% F1-score, 95.38% recall, and 50.81% precision.
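Mathematical morphology operates on binary masks with a structuring element; dilation, for example, grows foreground regions. A minimal sketch with a 3×3 cross structuring element (illustrative, not the exact pipeline in [182]):

```python
# Minimal binary dilation with a 3x3 cross structuring element, of the kind
# used in morphological preprocessing of mammograms (illustrative only).

def dilate(img):
    """A pixel becomes 1 if it or any 4-neighbour is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            out[y][x] = int(any(
                0 <= ny < h and 0 <= nx < w and img[ny][nx]
                for ny, nx in nbrs))
    return out

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(dilate(img))  # the single pixel grows into a cross
```

Erosion is the dual operation (a pixel survives only if all neighbours under the structuring element are 1); combining the two gives opening and closing, which remove small noise and fill small holes respectively.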

Chauhan et al. [183] compared three different algorithms, CNN, KNN, and SVM, for detecting breast cancer on the same breast cancer data set. SVM achieved 98% accuracy, KNN 73%, and CNN 95%.

Gupta et al. [184] proposed a modified CNN model, with ResNet as the backbone, for the detection of breast cancer. They modified ResNet in three steps: first, they used a dropout of 0.5; second, adaptive average pooling and adaptive max pooling were followed by two BN layers, dropout, and the fully connected layer; third, down-sampling was performed with a stride at the 3 × 3 convolution. The modified model achieved 99.75% accuracy, 99.18% precision, and 99.37% recall.
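Combining average and max pooling gives the classifier head both the mean activation and the strongest activation of each channel. A sketch of concatenating globally average-pooled and max-pooled features (the exact wiring in [184] may differ):

```python
# Sketch of concatenating global-average and global-max pooled features per
# channel, as in heads that combine adaptive average and max pooling.
# The wiring here is illustrative, not the exact architecture of [184].

def global_pool_concat(feature_maps):
    """feature_maps: list of 2-D channels -> [avg_1..avg_N, max_1..max_N]."""
    avgs = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_maps]
    maxs = [max(max(row) for row in ch) for ch in feature_maps]
    return avgs + maxs

fmap = [
    [[1.0, 2.0], [3.0, 4.0]],   # channel 1
    [[0.0, 0.0], [0.0, 8.0]],   # channel 2
]
print(global_pool_concat(fmap))  # [2.5, 2.0, 4.0, 8.0]
```

The concatenated vector (twice the channel count) then feeds the BN/dropout/fully-connected stack described above.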

Chouhan et al. [185] designed a novel framework (DFeBCD) for detecting breast cancer. In DFeBCD, a CNN-based highway network selected the features, which were used to train two classifiers: an SVM and an Emotional Learning inspired Ensemble Classifier (ELiEC). The framework was evaluated by five-fold cross-validation and achieved 80.5% accuracy.

The detection of breast cancer based on CNN also has some limitations. If the dataset used is very large, a great deal of computation and time is needed to complete the training; on the other hand, if the dataset is very small, overfitting can occur. Most CNN-based breast cancer diagnosis models use a pre-trained CNN model to extract features, but which layer provides the best features, and which layer's features should be extracted? The summary of CNN for breast cancer detection is shown in Table 7.

Table 7. Summary of CNN for breast cancer detection.

Authors Methods Results
Sohail et al. [165] A CNN-based framework (MP-MitDet) was proposed for mitotic nuclei recognition in pathological images of breast cancer. The whole framework used an automatic tagger and the CNN model for training. The MP-MitDet obtained 0.71 precision, 0.76 recall, 0.75 F1, and 0.78 area.
Mahmood et al. [166] A low-cost CNN-based model was proposed for automatic breast cancer mitotic cell detection. This framework was composed of the faster regional convolutional neural network (Faster R-CNN) and deep CNN. This model yielded 0.841 recall, 0.858 F1, and 0.876 precision for ICPR 2012 and 0.583 recall, 0.691 F1, and 0.848 precision for ICPR 2014.
Wang et al. [167] A new model combining CNN and US-ELM (CNN-GTD-ELM) was proposed to detect breast cancer X-rays. They designed an 8-layer CNN model for feature extraction of input images and used ELM for detection. They combined the extracted features with some additional features of the tumor. The CNN-GTD-ELM got 86.50% accuracy, 85.10% sensitivity, 88.02% specificity, and 0.923 AUC.
Chiao et al. [168] A mask region detection method was established based on CNN. This method detected the lesion of breast cancer based on ultrasound images. Finally, this method achieved 0.75 average precision in detection and 85% accuracy in classification.
Das et al. [169] A Deep Multiple Instance Learning (MIL) was designed based on the CNN model for breast cancer detection. The MIL-CNN model achieved 96.63%, 93.06%, and 95.83% accuracy on the IUPHL, BreakHis, and UCSB data sets, respectively.
Melekoodappattu et al. [11] They proposed the 9-layer CNN method to detect breast cancer. Then, they defined texture features and used Uniform Manifold Approximation and Projection (UMAP) to reduce the dimension of features. The multi-stage features were integrated for detection. This model obtained 98% accuracy and 97.8% specificity for the MIAS data set, and 97.9% accuracy and 98.3% specificity for the DDSM data set.
Zainudin et al. [170] They designed three CNN models for mitosis and amitosis in breast cell detection. The layers of these three CNN were 6, 13, and 17, respectively. Experiments showed that the 17-layer CNN model achieved the best results. Finally, the model achieved a 15.50% loss, 80.55% TPR, 84.49% accuracy, and 11.66% FNR.
Wu et al. [171] A deep fused fully convolutional neural network (FF-CNN) was designed for breast cancer detection. They selected the AlexNet model as the backbone model and combined different levels of features. The FF-CNN was tested on ICPR 2014 data set and obtained better detection accuracy and faster detection speed.
Gonçalves et al. [172] This new framework used particle swarm optimization and genetic algorithm to optimize the CNN model. DenseNet-201, VGG-16, and ResNet-50 were used as the backbone model. The F1 score of VGG-16 was increased from 0.66 to 0.92 and the F1 score of ResNet-50 was increased from 0.83 to 0.90. The F1 values of the three optimized networks were higher than 0.90.
Guan et al. [173] Two methods were proposed to detect breast cancer. The first trained images with a Generative Adversarial Network (GAN) and then fed them into a CNN. The second selected VGG-16 as the backbone model and applied transfer learning. The accuracies of the first and second methods were 98.85% and 91.48%, respectively.
Hadush et al. [174] A CNN extracted the features, which were then input into the Region Proposal Network (RPN) and Region of Interest (ROI) components of Faster R-CNN for detection. The method achieved 92.2% AUC-ROC, 91.86% accuracy, and 94.67% sensitivity.
Huang et al. [175] A lightweight CNN model (BM-Net) was presented to detect breast cancer. The lightweight CNN model consisted of MobileNet-V3 and bilinear structure. The MobileNet-V3 was the backbone model to extract the features. To save resources, they just replaced the fully connected layer with a bilinear structure. The BM-Net could achieve 0.88 accuracy and 0.71 score.
Mahbub et al. [176] The proposed model was composed of the designed CNN model and the fuzzy analytical hierarchy process model. The designed CNN model consisted of six convolutional layers, five max-pooling layers, and two dense layers. The proposed model can get 98.75% accuracy to detect breast cancer.
Prajoth SenthilKumar et al. [177] The VGG16 model was selected for the detection and analysis of breast cancer. They detected breast cancer from the histology images based on the variability, cell density, and tissue structure. The model could get 88% accuracy on the testing data set.
Charan et al. [178] They designed a 16-layer CNN model for the detection of breast cancer, consisting of six convolution layers, four average-pooling layers, and one fully connected layer. The public Mammograms-MIAS data set was used for training and testing. The designed CNN model achieved 65% accuracy.
Alanazi et al. [179] They designed a new CNN model and used three different classifiers to detect breast cancer: K-nearest neighbor, logistic regression, and support vector machines. The new model achieved 87% accuracy, about 9% higher than other ML methods.
Gonçalves et al. [180] They proposed a new random forest surrogate to get better parameters in the pre-trained CNN models, which were made of particle swarm optimization and genetic algorithms. Three pre-trained CNN models were used in this paper, which were ResNet50, DenseNet201, and VGG16. With the help of the proposed random forest surrogate, the F1-scores of DenseNet201 and ResNet50 could be improved from 0.92 to 1, and 0.85 to 0.92, respectively.
Guan et al. [181] A Generative Adversarial Network (GAN) was applied to generate more breast cancer images. Regions of interest (ROIs) from the images were used to train the GAN. Augmentation methods such as scaling, shifting, and rotation were compared with the GAN, and a newly designed CNN model served as the classifier. In the experiments, GAN-based augmentation yielded around 3.6% better results than the other transformations.
Sun et al. [182] A novel model was proposed for breast cancer detection based on the mammographic image. The mathematical morphology method was used to preprocess the images. The image template matching method was selected to locate the suspected regions of a breast mass. The PSO was used to improve the accuracy. The proposed model can achieve 85.82% accuracy, 66.31% F1-score, 95.38% recall, and 50.81% precision.
Chauhan et al. [183] Three different algorithms were used to detect breast cancer, which were CNN, KNN, and SVM, respectively. SVM could achieve 98% accuracy, KNN can yield 73% accuracy, and CNN could get 95% accuracy.
Gupta et al. [184] A modified CNN model, with ResNet as the backbone, was proposed for the detection of breast cancer. They modified ResNet in three steps: a dropout of 0.5; adaptive average pooling and adaptive max pooling followed by two BN layers, dropout, and the fully connected layer; and down-sampling with a stride at the 3 × 3 convolution. The modified model achieved 99.75% accuracy, 99.18% precision, and 99.37% recall.
Chouhan et al. [185] A novel framework (DFeBCD) was designed for detecting breast cancer. In the DFeBCD, they designed the highway network based on CNN to select features. There were two classifiers, which were SVM and Emotional Learning inspired Ensemble Classifier (ELiEC). These two classifiers were trained by the selected features. This framework was evaluated by five-fold cross-validation and achieved 80.5% accuracy.

4.3. Breast Cancer Segmentation

In this section, we review the segmentation of breast cancer based on CNN, in which abnormal areas in breast images are segmented by the CNN model. Breast cancer image segmentation compares the similarity of feature factors between images and divides an image into several regions. Breast segmentation involves the removal of the background region, pectoral muscles, labels, artifacts, and other defects added during image acquisition. The segmented area can be compared with a manually segmented area to verify the accuracy of the segmentation method.
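The comparison against a manual segmentation is usually quantified by the Dice similarity coefficient (DSC) and the Jaccard index (JI/JSI), both of which appear throughout the results below. A minimal sketch on flattened binary masks:

```python
# Dice (DSC) and Jaccard (JI/JSI) overlap scores between a predicted and a
# manually segmented binary mask (flattened to 1-D lists for simplicity).

def dice_jaccard(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc

pred  = [1, 1, 1, 0, 0, 0]   # predicted mask
truth = [1, 1, 0, 0, 1, 0]   # manual (reference) mask
print(dice_jaccard(pred, truth))  # (~0.667, 0.5)
```

Dice is always at least as large as Jaccard (Dice = 2J/(1+J)), which is worth remembering when comparing papers that report different overlap metrics.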

Chen et al. [186] introduced a new model for the segmentation of breast cancer, consisting of two parts. The first part was the segmentation CNN model; the second was a quality assessment (QA) network based on ResNet-101, with one branch predicting the quality of each slice and another producing the DSC value.

Tsochatzidis et al. [6] introduced a new CNN model to segment breast masses. In this model, each convolution layer was modified, and an extra term was added to the loss function. They evaluated the method on the DDSM-400 and CBIS-DDSM datasets.

Lei et al. [56] developed a mask scoring region-based CNN (R-CNN) to segment breast tumors. The network consisted of five parts: the region proposal network, the mask terminal, the backbone network, the mask scoring header, and the R-CNN header. In this model, the region of interest (ROI) was segmented using network blocks that build a direct correlation between mask quality and region categories.

El Adoui et al. [187] proposed two CNN models to segment breast tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The first model was based on SegNet, as presented in Fig. 16; the second used U-Net as the backbone model. 85% of the data was used for training and the other 15% for validation. The first method obtained 68.88% IoU, and the second 76.14% IoU.

Figure 16. The structure of SegNet.

Figure 16

Kakileti et al. [188] introduced a cascaded CNN architecture for breast cancer segmentation. This model used a 5-stage V-Net as the main encoder-decoder structure. To improve accuracy, they used strided convolutions, deconvolutions, and PReLU activation. The method obtained 91.6% overall Dice, 93.3% frontal Dice, 89.5% lateral Dice, and 91.9% oblique Dice.

Kumar et al. [189] introduced a dual-layered CNN model (DL-CNN) for breast cancer region recognition and segmentation. The first-layer CNN identified possible regions; the second-layer CNN segmented them and reduced false positives. Tested on breast image data sets, the model obtained a true-positive rate of 0.9726 at 0.39706 false positives per image.
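An operating point such as "TPR 0.9726 at 0.39706 false positives per image" is computed from per-image detection counts. A sketch with made-up counts (not the data from [189]):

```python
# Computing a TPR-at-FPs-per-image operating point from per-image counts of
# true positives, false positives, and ground-truth lesions (counts below
# are hypothetical, not the data from [189]).

def tpr_at_fppi(per_image_counts):
    """per_image_counts: list of (tp, fp, n_truth) tuples, one per image."""
    tp = sum(c[0] for c in per_image_counts)
    fp = sum(c[1] for c in per_image_counts)
    truths = sum(c[2] for c in per_image_counts)
    return tp / truths, fp / len(per_image_counts)

counts = [(2, 1, 2), (1, 0, 2), (2, 1, 2)]   # three hypothetical images
tpr, fppi = tpr_at_fppi(counts)
print(tpr, fppi)  # overall TPR and average false positives per image
```

Sweeping the detector's confidence threshold and recording (FPPI, TPR) pairs produces the FROC curve commonly used for lesion detection.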

Ranjbarzadeh et al. [90] proposed a new CNN with multiple feature extraction paths for the segmentation of breast cancer (MRFE-CNN), as shown in Fig. 17. They augmented the data set to keep the deep structure from overfitting. This method can improve the detection of breast cancer tumor boundaries. They evaluated MRFE-CNN on the Mini-MIAS and DDSM data sets, obtaining 0.936, 0.890, and 0.871 accuracy for normal, benign, and malignant tumors on Mini-MIAS, and 0.944, 0.915, and 0.892 on DDSM.

Figure 17. The framework of MRFE-CNN.

Figure 17

Atrey et al. [190] proposed a new CNN-based automatic segmentation system for breast lesions, mainly built on their self-made CNN model. For evaluation, the authors used a bimodal database of mammography (MG) and ultrasound (US) images. The model got 0.64 DSC and 0.53 JI for MG, and 0.77 DSC and 0.64 JI for US.

Irfan et al. [191] introduced a segmentation model with a Dilated Semantic Segmentation Network (Di-CNN) for ultrasonic breast lesion images. The model was composed of two CNNs: DenseNet201 with transfer learning and a self-made 24-layer CNN. The features extracted from the two CNNs were fused, and an SVM was used as the classifier. The model yielded 98.9% accuracy.

Su et al. [192] designed a fast-scanning deep convolutional neural network (FCNN) for breast cancer segmentation. The model reduced the amount of computation and the computation time, taking only 2.3 s to segment a 1000 × 1000 image. The FCNN got 0.91 precision, 0.82 recall, and 0.85 F1.

He et al. [193] proposed a novel network with CNN models and transfer learning to classify and segment breast cancer. Two CNN models (AlexNet and GoogLeNet) were selected as the backbones and used as feature extractors, with an SVM as the classifier. The segmentation results of this model on breast cancer were similar to those of professional pathologists.

Soltani et al. [194] introduced a new model for automatic breast cancer segmentation based on Mask RCNN and implemented with the Detectron2 framework. The model was tested on the INbreast data set and achieved 81.05% F1 and 95.87% precision.

Min et al. [195] introduced a new system (fully integrated CAD) for the automatic segmentation of breast cancer. The system combined a detection-segmentation method, mainly based on Mask RCNN, with pseudo-color image generation. Tested on the public INbreast data set, the system yielded a 0.88 Dice similarity index.

Arora et al. [196] proposed a model (RGU-Net) for breast cancer segmentation, which incorporated residual connections and group convolutions into U-Net. The network contained encoder and decoder blocks of several different sizes, and a conditional random field was used to refine the boundaries. The model was evaluated on the INbreast data set and produced 92.6% Dice.

Spuhler et al. [197] introduced a new CNN method to segment breast cancer in DCE-MRI. Manual regions of interest were annotated by an expert radiologist (R1); two further annotation sets (R2 and R3) were produced by a resident and another expert radiologist. The new model achieved 0.71 Dice against R1.

Atrey et al. [198] proposed a customized CNN for the segmentation of breast cancer based on MG and US. There were nine layers in this customized CNN model. Two convolutional layers, one max-pooling layer, one ReLU layer, one fully connected layer, one softmax layer, and a classification layer formed the whole customized CNN model. This model achieved 0.64 DSC and 0.53 JI for MG and 0.77 DSC and 0.64 JI for the US.
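
The nine-layer stack described above maps naturally onto a small sequential model. The following PyTorch sketch is an illustration only: the input size (64 × 64 grayscale), channel counts, and kernel sizes are assumptions not given in the text, and the counting convention (where the ReLU, softmax, and classification/loss layers are listed as separate layers) follows MATLAB-style layer lists.

```python
import torch
import torch.nn as nn

# Sketch of the described stack: two convolutional layers, one ReLU,
# one max-pooling layer, one fully connected layer, one softmax layer.
# The "classification layer" of the paper corresponds to the training
# loss (e.g., cross-entropy) and is not part of the inference graph.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer 1
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # convolutional layer 2
    nn.ReLU(),                                   # ReLU layer
    nn.MaxPool2d(2),                             # max-pooling layer
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 2),                  # fully connected layer
    nn.Softmax(dim=1),                           # softmax layer
)

x = torch.rand(4, 1, 64, 64)  # batch of four 64x64 grayscale patches
probs = model(x)
print(probs.shape)            # torch.Size([4, 2])
```

Each output row sums to one, giving per-class probabilities for the two-class (lesion vs. background) decision.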

Sumathi et al. [199] proposed a new system to segment breast cancer. They used artificial bee colony optimization with fuzzy clustering to select features. Then, CNN was used as the classifier. This hybrid system could achieve 98% segmentation accuracy.

Xu et al. [200] designed an 8-layer CNN for the segmentation of breast cancer, consisting of three convolution layers, three pooling layers, a fully connected layer, and a softmax layer. This customized CNN model yielded 85.1% JSI.

Guo et al. [201] proposed a novel network to segment breast cancer. They designed a six-layer CNN model consisting of two convolutional layers, two pooling layers, and two fully connected layers. Features were extracted by the customized CNN and then fed to an SVM. The combined CNN-SVM achieved 0.92 sensitivity, 0.93 DSC, and 0.95 PPV.

Cui et al. [202] proposed a novel patch-based CNN model for the segmentation of breast cancer in MRI. They designed a 7-layer CNN model consisting of four convolutional layers, two max-pooling layers, and one fully connected layer. The 7-layer CNN model achieved a 95.19% Dice ratio.
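
In a patch-based scheme, the network does not see the whole image at once: one patch is extracted around every pixel and classified, and the per-pixel decisions are reassembled into a mask. The NumPy sketch below shows only this extract-classify-reassemble loop; the threshold-on-patch-mean "classifier" is a stand-in assumption for the actual 7-layer CNN.

```python
import numpy as np

def extract_patches(image, patch_size=5):
    """Return one patch per pixel (zero-padded at the borders),
    the unit a patch-based CNN scores."""
    half = patch_size // 2
    padded = np.pad(image, half, mode="constant")
    h, w = image.shape
    patches = np.empty((h * w, patch_size, patch_size))
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + patch_size, j:j + patch_size]
    return patches

# Stand-in for the CNN: call a pixel "tumor" when its patch mean
# exceeds a threshold (an assumption purely for illustration).
def toy_classifier(patches, threshold=0.5):
    return (patches.mean(axis=(1, 2)) > threshold).astype(np.uint8)

image = np.random.default_rng(1).random((16, 16))
patches = extract_patches(image)
mask = toy_classifier(patches).reshape(image.shape)
print(patches.shape)  # (256, 5, 5): one patch per pixel
print(mask.shape)     # (16, 16): reassembled binary mask
```

The trade-off of this design is cost: classifying one patch per pixel multiplies the number of forward passes, which is exactly the overhead that fast-scanning architectures such as the FCNN of [192] are built to avoid.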

The segmentation of breast cancer based on CNN still has some limitations. Most of these methods rely on public datasets, which require many expert doctors to label the images. Moreover, unsupervised learning has so far seen limited success in breast cancer segmentation. The summary of CNN for breast cancer segmentation is shown in Table 8.

Table 8. Summary of CNN for breast cancer segmentation.

Authors Methods Results
Chen et al. [186] A new framework was introduced for the segmentation of breast cancer, mainly consisting of two parts: the segmentation CNN model and a QA network based on the ResNet-101 model. For good-, medium-, and poor-quality slices, the accuracy was 0.97, 0.94, and 0.89; the F1 was 0.98, 0.91, and 0.81; and the AUC was 0.96, 0.93, and 0.88, respectively, with 0.06 ± 0.19 MAE.
Tsochatzidis et al. [6] A new CNN model was introduced to segment breast masses. In this new CNN model, the convolution layer of each layer and the loss function were modified. The AUC of this method was 0.898 and 0.862 for DDSM-400 and CBIS-DDSM, respectively.
Lei et al. [56] A mask score region based on the R-CNN was proposed to segment breast tumors. The network consisted of five parts, namely, the regional suggestion network, the mask terminal, the backbone network, the mask scoring header, and the regional convolution neural network header. The R-CNN produced HD95, MSD, RMSD and CMD of 1.646±1.191 mm and 1.665±1.129 mm, 0.489±0.406 mm and 0.475±0.371 mm, 0.755±0.755 mm and 0.751±0.508 mm, 0.672±0.612 mm and 0.665±0.729 mm in two tests, respectively.
El Adoui et al. [187] Two CNN models were proposed to segment breast tumors in DCE-MRI: the first was based on SegNet, and the second used U-Net as the backbone. The first method obtained 68.88% IoU, and the second obtained 76.14% IoU.
Kakileti et al. [188] The new model used a 5-stage V-net as the main encoding and decoding structure proposed to segment breast cancer. This new method obtained 91.6% overall Dice, 93.3% frontal Dice, 89.5% lateral Dice, and 91.9% oblique Dice.
Kumar et al. [189] A dual-layered CNN model (DL-CNN) was proposed for breast cancer region recognition and segmentation. The first-layer CNN identified candidate regions; the second-layer CNN segmented them and reduced false positives. On breast image data sets, the model obtained a true positive rate of 0.9726 at 0.39706 false positives per image.
Ranjbarzadeh et al. [90] A shallow convolutional neural network with multiple feature extraction paths (MRFE-CNN) was proposed for the automatic segmentation of breast cancer. They obtained 0.936, 0.890, and 0.871 accuracy for normal, benign, and malignant tumors on Mini-MIAS, and 0.944, 0.915, and 0.892 on DDSM.
Atrey et al. [190] A new computer-aided automatic segmentation system was designed for breast lesions, which was mainly based on their self-made CNN model. This model got 0.64 DSC, 0.53 JI for the MG, and 0.77 DSC, 0.64 JI for the US.
Irfan et al. [191] Two CNN models were proposed to segment breast lesion images, which were DenseNet201 and a self-made 24-layer CNN model. This model yielded 98.9% accuracy.
Su et al. [192] A fast-scanning depth convolution neural network (FCNN) was designed for breast cancer segmentation. The FCNN model got 0.91 precision, 0.82 recall, and 0.85 F1.
He et al. [193] Two CNN models (AlexNet and GoogleNet) were selected as the backbone models to classify and segment breast cancer. The segmentation quality of this model on breast cancer was comparable to that of professional pathologists.
Soltani et al. [194] A new method was designed for breast cancer segmentation with the Mask RCNN. The method was tested on the INbreast data set and achieved 81.05% F1 and 95.87% precision.
Min et al. [195] A new system (fully integrated CAD) was designed for the automatic segmentation of breast cancer, which was composed of the detection-segmentation method and pseudo-color image generation. This system yielded a 0.88 Dice similarity index.
Arora et al. [196] A model (RGU-Net) was designed for breast cancer segmentation, which was composed of residual connection and group convolution in U-Net. The model was evaluated on the INbreast data set and produced 92.6% Dice.
Spuhler et al. [197] A new CNN model was designed to segment breast cancer in DCE-MRI. The new model achieved 0.71 Dice against the expert radiologist annotations (R1).
Atrey et al. [198] A customized CNN was proposed for the segmentation of breast cancer based on MG and US. There were nine layers in this customized CNN model. Two convolutional layers, one max-pooling layer, one ReLU layer, one fully connected layer, one softmax layer, and a classification layer formed the whole customized CNN model. This model achieved 0.64 DSC and 0.53 JI for MG and 0.77 DSC and 0.64 JI for the US.
Sumathi et al. [199] A new system was proposed to segment breast cancer. They used artificial bee colony optimization with fuzzy clustering to select features. Then, CNN was used as the classifier. This hybrid system could achieve 98% segmentation accuracy.
Xu et al. [200] An 8-layer CNN was designed for the segmentation of breast cancer, consisting of three convolution layers, three pooling layers, a fully connected layer, and a softmax layer. This customized CNN model yielded 85.1% JSI.
Guo et al. [201] A novel network was proposed to segment breast cancer: a six-layer CNN model (two convolutional layers, two pooling layers, and two fully connected layers) extracted features that were then fed to an SVM. The combined CNN-SVM achieved 0.92 sensitivity, 0.93 DSC, and 0.95 PPV.
Cui et al. [202] A novel patch-based CNN model was proposed for the segmentation of breast cancer in MRI. The 7-layer model (four convolutional layers, two max-pooling layers, and one fully connected layer) achieved a 95.19% Dice ratio.

5. Conclusion

Recently, the diagnosis of breast cancer based on CNN has made rapid progress, which has led more and more researchers to devote their energy to CNN-based breast cancer diagnosis. We complete a comprehensive review of the diagnosis of breast cancer based on CNN after reviewing a large number of recent papers. In this paper, readers can not only see the CNN-based diagnostic methods for breast cancer of recent decades but also learn the advantages and disadvantages of these methods and future research directions. The main contributions of this survey are: (i) a large number of major papers about CNN-based breast cancer diagnosis are reviewed to provide a comprehensive survey; (ii) the advantages and disadvantages of these state-of-the-art methods are presented; (iii) a presentation of significant findings highlights the opportunities available in this research area; (iv) future research directions and critical challenges for CNN-based diagnostic methods are given.

Based on the papers we have reviewed, many techniques have been used to boost the proposed CNN models for the diagnosis of breast cancer. Many researchers used pre-trained CNN models or their customized CNN models to extract features from the input. To reduce training time and computational cost, some researchers replaced the last layers of CNN models with other techniques, such as SVM, ELM, and so on. In some papers, researchers selected more than one CNN model to extract different features; these features were then fused and fed to classifiers to improve performance.

Although breast cancer diagnosis with CNN has achieved great success, there are still some limitations. (i) There are too few good data sets. A good public breast cancer dataset needs to involve many aspects, such as professional medical knowledge, privacy issues, financial issues, dataset size, and so on. (ii) When the data set is very large, the CNN-based model requires a large amount of computation and time to complete the diagnosis. (iii) It is easy to cause overfitting when using small data sets. (iv) Most CNN-based breast cancer diagnosis models use a pre-trained CNN model to extract features, but which layer yields the best features, and which layer's features should be extracted? These questions have not been well answered in recent studies.

Even though this paper reviews a large number of recent research papers, there are still some limitations. First, this survey focuses only on CNN for breast cancer diagnosis; there are other CAD methods for breast cancer diagnosis. Second, this survey focuses only on two-dimensional images.

In the future, researchers can try more unlabeled data sets for breast cancer detection; compared with labeled datasets, unlabeled datasets are less expensive and more plentiful. Moreover, researchers can try more new methods for image feature extraction, such as EL, TL, xDNNs, U-Net, transformers, and so on.

This paper reviews CNN-based breast cancer diagnosis technology of recent years. With the progress of CNN technology, the diagnostic accuracy for breast cancer keeps improving. We summarize the limitations and future research directions of CNN-based breast cancer diagnosis. Although this technology has achieved great success and can serve as an auxiliary means to help doctors diagnose breast cancer, there is still much to be improved.

Funding Statement

This paper is partially supported by Medical Research Council Confidence in Concept Award, UK (MC_PC_17171); Royal Society International Exchanges Cost Share Award, UK (RP202G0230); British Heart Foundation Accelerator Award, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); Global Challenges Research Fund (GCRF), UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS Pioneering Partnerships Award, UK (P202ED10); Data Science Enhancement Fund, UK (P202RE237).

Footnotes

Conflicts of Interest:

The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. Bray F, Laversanne M, Weiderpass E, Soerjomataram I. The ever-increasing importance of cancer as a leading cause of premature death worldwide. Cancer. 2021;127(16):3029–3030. doi: 10.1002/cncr.33587.
2. Desai M, Shah M. An anatomization on breast cancer detection and diagnosis employing multilayer perceptron neural network (MLP) and convolutional neural network (CNN). Clinical eHealth. 2021;4:1–11. doi: 10.1016/j.ceh.2020.11.002.
3. Beeravolu AR, Azam S, Jonkman M, Shanmugam B, Kannoorpatti K, et al. Preprocessing of breast cancer images to create datasets for deep-CNN. IEEE Access. 2021;9:33438–33463. doi: 10.1109/ACCESS.2021.3058773.
4. Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians. 2021;71(3):209–249. doi: 10.3322/caac.21660.
5. Heenaye-Mamode Khan M, Boodoo-Jahangeer N, Dullull W, Nathire S, Gao X, et al. Multiclass classification of breast cancer abnormalities using deep convolutional neural network (CNN). PLoS One. 2021;16(8):e0256500. doi: 10.1371/journal.pone.0256500.
6. Tsochatzidis L, Koutla P, Costaridou L, Pratikakis I. Integrating segmentation information into CNN for breast cancer diagnosis of mammographic masses. Computer Methods and Programs in Biomedicine. 2021;200:105913. doi: 10.1016/j.cmpb.2020.105913.
7. Xie XZ, Niu JW, Liu XF, Li QF, Wang Y, et al. DG-CNN: Introducing margin information into convolutional neural networks for breast cancer diagnosis in ultrasound images. Journal of Computer Science and Technology. 2022;37(2):277–294. doi: 10.1007/s11390-020-0192-0.
8. Waks AG, Winer EP. Breast cancer treatment: A review. JAMA. 2019;321(3):288–300. doi: 10.1001/jama.2018.19323.
9. Zuluaga-Gomez J, Al Masry Z, Benaggoune K, Meraghni S, Zerhouni N. A CNN-based methodology for breast cancer diagnosis using thermal images. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 2021;9(2):131–145. doi: 10.1080/21681163.2020.1824685.
10. Sannasi Chakravarthy S, Bharanidharan N, Rajaguru H. Multi-deep CNN based experimentations for early diagnosis of breast cancer. IETE Journal of Research. 2022:1–16. doi: 10.1080/03772063.2022.2028584.
11. Melekoodappattu JG, Dhas AS, Kandathil BK, Adarsh K. Breast cancer detection in mammogram: Combining modified CNN and texture feature based approach. Journal of Ambient Intelligence and Humanized Computing. 2022:1–10. doi: 10.1007/s12652-022-03713-3.
12. Lu J, Wu Y, Xiong Y, Zhou Y, Zhao Z, et al. Breast tumor computer-aided detection system based on magnetic resonance imaging using convolutional neural network. Computer Modeling in Engineering & Sciences. 2022;130(1):365–377. doi: 10.32604/cmes.2022.017897.
13. Ribli D, Horváth A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with deep learning. Scientific Reports. 2018;8(1):1–7. doi: 10.1038/s41598-018-22437-z.
14. Hance KW, Anderson WF, Devesa SS, Young HA, Levine PH. Trends in inflammatory breast carcinoma incidence and survival: The surveillance, epidemiology, and end results program at the National Cancer Institute. Journal of the National Cancer Institute. 2005;97(13):966–975. doi: 10.1093/jnci/dji172.
15. Tabar L, Gad A, Holmberg L, Ljungquist U, Group KCP, et al. Reduction in mortality from breast cancer after mass screening with mammography: Randomised trial from the breast cancer screening working group of the Swedish National Board of Health and Welfare. The Lancet. 1985;325(8433):829–832. doi: 10.1016/S0140-6736(85)92204-4.
16. Sharma GN, Dave R, Sanadya J, Sharma P, Sharma K. Various types and management of breast cancer: An overview. Journal of Advanced Pharmaceutical Technology & Research. 2010;1(2):109–126.
17. McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89–94. doi: 10.1038/s41586-019-1799-6.
18. Zebari DA, Ibrahim DA, Zeebaree DQ, Mohammed MA, Haron H, et al. Breast cancer detection using mammogram images with improved multi-fractal dimension approach and feature fusion. Applied Sciences. 2021;11(24):12122. doi: 10.3390/app112412122.
19. Mihaylov I, Nisheva M, Vassilev D. Machine learning techniques for survival time prediction in breast cancer. International Conference on Artificial Intelligence: Methodology, Systems, and Applications; Varna, Bulgaria. Springer; 2018. pp. 186–194.
20. Zhu Z, Harowicz M, Zhang J, Saha A, Grimm LJ, et al. Deep learning-based features of breast MRI for prediction of occult invasive disease following a diagnosis of ductal carcinoma in situ: Preliminary data. Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575, 105752W. Houston, Texas, USA: International Society for Optics and Photonics; 2018.
21. Grimm LJ, Ryser MD, Partridge AH, Thompson AM, Thomas JS, et al. Surgical upstaging rates for vacuum assisted biopsy proven DCIS: Implications for active surveillance trials. Annals of Surgical Oncology. 2017;24(12):3534–3540. doi: 10.1245/s10434-017-6018-9.
22. Veta M, Pluim JP, van Diest PJ, Viergever MA. Breast cancer histopathology image analysis: A review. IEEE Transactions on Biomedical Engineering. 2014:1400–1411.
23. Zebari DA, Ibrahim DA, Zeebaree DQ, Haron H, Salih MS, et al. Systematic review of computing approaches for breast cancer detection based computer aided diagnosis using mammogram images. Applied Artificial Intelligence. 2021;35(15):2157–2203. doi: 10.1080/08839514.2021.2001177.
24. Ibraheem AM, Rahouma KH, Hamed HF. 3PCNNB-Net: Three parallel CNN branches for breast cancer classification through histopathological images. Journal of Medical and Biological Engineering. 2021;41(4):494–503. doi: 10.1007/s40846-021-00620-4.
25. Mokhatri-Hesari P, Montazeri A. Health-related quality of life in breast cancer patients: Review of reviews from 2008 to 2018. Health and Quality of Life Outcomes. 2020;18(1):1–25. doi: 10.1186/s12955-020-01591-x.
26. Kösüs N, Kösüs A, Duran M, Simavli S, Turhan N. Comparison of standard mammography with digital mammography and digital infrared thermal imaging for breast cancer screening. Journal of the Turkish German Gynecological Association. 2010;11(3):152–157. doi: 10.5152/jtgga.2010.24.
27. Murtaza G, Shuib L, Abdul Wahab AW, Mujtaba G, Nweke HF, et al. Deep learning-based breast cancer classification through medical imaging modalities: State of the art and research challenges. Artificial Intelligence Review. 2020;53(3):1655–1720. doi: 10.1007/s10462-019-09716-5.
28. Debelee TG, Schwenker F, Ibenthal A, Yohannes D. Survey of deep learning in breast cancer image analysis. Evolving Systems. 2020;11(1):143–163. doi: 10.1007/s12530-019-09297-2.
29. Liu J, Zarshenas A, Qadir A, Wei Z, Yang L, et al. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing. Medical Imaging 2018: Image Processing, vol. 10574, 105740F. Houston, Texas, USA: International Society for Optics and Photonics; 2018.
30. Zhao Z, Wu F. Minimally-invasive thermal ablation of early-stage breast cancer: A systemic review. European Journal of Surgical Oncology. 2010;36(12):1149–1155. doi: 10.1016/j.ejso.2010.09.012.
31. Jalalian A, Mashohor SB, Mahmud HR, Saripan MIB, Ramli ARB, et al. Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: A review. Clinical Imaging. 2013;37(3):420–426. doi: 10.1016/j.clinimag.2012.09.024.
32. Gilbert FJ, Tucker L, Gillan MG, Willsher P, Cooke J, et al. Accuracy of digital breast tomosynthesis for depicting breast cancer subgroups in a UK retrospective reading study (TOMMY trial). Radiology. 2015;277(3):697–706. doi: 10.1148/radiol.2015142566.
33. Antropova NO, Abe H, Giger ML. Use of clinical MRI maximum intensity projections for improved breast lesion classification with deep convolutional neural networks. Journal of Medical Imaging. 2018;5(1):014503. doi: 10.1117/1.JMI.5.1.014503.
34. Griebsch I, Brown J, Boggis C, Dixon A, Dixon M, et al. Cost-effectiveness of screening with contrast enhanced magnetic resonance imaging vs X-ray mammography of women at a high familial risk of breast cancer. British Journal of Cancer. 2006;95(7):801–810. doi: 10.1038/sj.bjc.6603356.
35. Kuhl CK, Schrading S, Bieling HB, Wardelmann E, Leutner CC, et al. MRI for diagnosis of pure ductal carcinoma in situ: A prospective observational study. The Lancet. 2007;370(9586):485–492. doi: 10.1016/S0140-6736(07)61232-X.
36. Kelly KM, Dean J, Comulada WS, Lee SJ. Breast cancer detection using automated whole breast ultrasound and mammography in radiographically dense breasts. European Radiology. 2010;20(3):734–742. doi: 10.1007/s00330-009-1588-y.
37. Shin SY, Lee S, Yun ID, Kim SM, Lee KM. Joint weakly and semi-supervised deep learning for localization and classification of masses in breast ultrasound images. IEEE Transactions on Medical Imaging. 2018;38(3):762–774. doi: 10.1109/TMI.2018.2872031.
38. Byra M, Sznajder T, Korzinek D, Piotrzkowska-Wróblewska H, Dobruch-Sobczak K, et al. Impact of ultrasound image reconstruction method on breast lesion classification with deep learning. Iberian Conference on Pattern Recognition and Image Analysis; Madrid, Spain. Springer; 2019. pp. 41–52.
39. Fotin SV, Yin Y, Haldankar H, Hoffmeister JW, Periaswamy S. Detection of soft tissue densities from digital breast tomosynthesis: Comparison of conventional and deep learning approaches. Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785. Bellingham, Washington, USA: SPIE; 2016. pp. 228–233.
40. Zhang J, Ghate SV, Grimm LJ, Saha A, Cain EH, et al. Convolutional encoder-decoder for breast mass segmentation in digital breast tomosynthesis. Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575, 105752V. Houston, Texas, USA: International Society for Optics and Photonics; 2018.
41. Hooley RJ, Durand MA, Philpotts LE. Advances in digital breast tomosynthesis. American Journal of Roentgenology. 2017;208(2):256–266. doi: 10.2214/AJR.16.17127.
42. Samala RK, Chan HP, Hadjiiski L, Helvie MA, Wei J, et al. Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography. Medical Physics. 2016;43(12):6654–6666. doi: 10.1118/1.4967345.
43. Pang T, Wong JHD, Ng WL, Chan CS. Deep learning radiomics in breast cancer with different modalities: Overview and future. Expert Systems with Applications. 2020;158:113501. doi: 10.1016/j.eswa.2020.113501.
44. Mahmood T, Li J, Pei Y, Akhtar F, Imran A, et al. A brief survey on breast cancer diagnostic with deep learning schemes using multi-image modalities. IEEE Access. 2020;8:165779–165809. doi: 10.1109/Access.6287639.
45. Kumar G, Alqahtani H. Deep learning-based cancer detection: Recent developments, trend and challenges. Computer Modeling in Engineering & Sciences. 2022;130(3):1271–1307. doi: 10.32604/cmes.2022.018418.
46. Zou L, Yu S, Meng T, Zhang Z, Liang X, et al. A technical review of convolutional neural network-based mammographic breast cancer diagnosis. Computational and Mathematical Methods in Medicine. 2019;2019. doi: 10.1155/2019/6509357.
47. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, et al. Histopathological image analysis: A review. IEEE Reviews in Biomedical Engineering. 2009;2:147–171. doi: 10.1109/RBME.2009.2034865.
48. Ertosun MG, Rubin DL. Probabilistic visual search for masses within mammography images using deep learning. 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); Washington, USA. 2015. pp. 1310–1315.
49. Hussein IJ, Burhanuddin MA, Mohammed MA, Benameur N, Maashi MS, et al. Fully-automatic identification of gynaecological abnormality using a new adaptive frequency filter and histogram of oriented gradients (HOG). Expert Systems. 2022;39(3):e12789. doi: 10.1111/exsy.12789.
50. Tang J, Rangayyan RM, Xu J, El Naqa I, Yang Y. Computer-aided detection and diagnosis of breast cancer with mammography: Recent advances. IEEE Transactions on Information Technology in Biomedicine. 2009;13(2):236–251. doi: 10.1109/TITB.2008.2009441.
51. Yassin NI, Omran S, El Houby EM, Allam H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Computer Methods and Programs in Biomedicine. 2018;156:25–45. doi: 10.1016/j.cmpb.2017.12.012.
52. Xie S, Yu Z, Lv Z. Multi-disease prediction based on deep learning: A survey. Computer Modeling in Engineering & Sciences. 2021;128(2):489–522. doi: 10.32604/cmes.2021.016728.
53. Doi K. Computer-aided diagnosis in medical imaging: Historical review, current status and future potential. Computerized Medical Imaging and Graphics. 2007;31(4–5):198–211. doi: 10.1016/j.compmedimag.2007.02.002.
54. Sadaf A, Crystal P, Scaranelo A, Helbich T. Performance of computer-aided detection applied to full-field digital mammography in detection of breast cancers. European Journal of Radiology. 2011;77(3):457–461. doi: 10.1016/j.ejrad.2009.08.024.
55. Karimi Jafarbigloo S, Danyali H. Nuclear atypia grading in breast cancer histopathological images based on CNN feature extraction and LSTM classification. CAAI Transactions on Intelligence Technology. 2021;6(4):426–439. doi: 10.1049/cit2.12061.
56. Lei Y, He X, Yao J, Wang T, Wang L, et al. Breast tumor segmentation in 3D automatic breast ultrasound using mask scoring R-CNN. Medical Physics. 2021;48(1):204–214. doi: 10.1002/mp.14569.
57. Salama WM, Aly MH. Deep learning in mammography images segmentation and classification: Automated CNN approach. Alexandria Engineering Journal. 2021;60(5):4701–4709. doi: 10.1016/j.aej.2021.03.048.
58. Agarwal P, Yadav A, Mathur P. Breast cancer prediction on BreakHis dataset using deep CNN and transfer learning model. Data Engineering for Smart Systems. Jaipur, India: Manipal University, Springer; 2022. pp. 77–88.
59. Nazir MS, Khan UG, Mohiyuddin A, Reshan A, Saleh M, et al. A novel CNN-inception-V4-based hybrid approach for classification of breast cancer in mammogram images. Wireless Communications and Mobile Computing. 2022;2022. doi: 10.1155/2022/5089078.
60. Qin C, Wu Y, Zeng J, Tian L, Zhai Y, et al. Joint transformer and multi-scale CNN for DCE-MRI breast cancer segmentation. Soft Computing. 2022;26:8317–8334. doi: 10.1007/s00500-022-07235-0.
61. Zainudin Z, Shamsuddin SM, Hasan S. Deep layer convolutional neural network (CNN) architecture for breast cancer classification using histopathological images. Machine Learning and Big Data Analytics Paradigms: Analysis, Applications and Challenges. Springer; 2021. pp. 347–364.
62. Agarwal R, Sharma H. A new enhanced recurrent extreme learning machine based on feature fusion with CNN deep features for breast cancer detection. Advances in Computer, Communication and Computational Sciences. Singapore: Springer; 2021. pp. 461–471.
63. Saber A, Sakr M, Abou-Seida O, Keshk A. A novel transfer-learning model for automatic detection and classification of breast cancer based deep CNN. Kafrelsheikh Journal of Information Sciences. 2021;2(1):1–9. doi: 10.21608/kjis.2021.192207.
64. Shaila S, Gurudas V, Hithyshi K, Mahima M, PoojaShree H. CNN-LSTM-based deep learning model for early detection of breast cancer. Data Engineering and Intelligent Computing. Bengaluru, India: Springer; 2022. pp. 83–91.
65. Karuppasamy A, Abdesselam A, Hedjam R, Zidoum H, Al-Bahri M. Recent CNN-based techniques for breast cancer histology image classification. The Journal of Engineering Research [TJER]. 2022;19(1):41–53. doi: 10.53540/tjer.vol19iss1pp41-53.
66. Susilo AB, Sugiharti E. Accuracy enhancement in early detection of breast cancer on mammogram images with convolutional neural network (CNN) methods using data augmentation and transfer learning. Journal of Advances in Information Systems and Technology. 2021;3(1):9–16. doi: 10.15294/jaist.v3i1.49012.
67. Hariharan R, Dhilsath Fathima M, Pitchai A, Roy VJ, Padhi A. Detection and classification of breast cancer using CNN. Advance Concepts of Image Processing and Pattern Recognition, vol. 19. Singapore: Springer; 2022. pp. 109–1.
68. Bal A, Das M, Satapathy SM, Jena M, Das SK. BFCNet: A CNN for diagnosis of ductal carcinoma in breast from cytology images. Pattern Analysis and Applications. 2021;24(3):967–980. doi: 10.1007/s10044-021-00962-4.
69. Kumar A, Sharma A, Bharti V, Singh AK, Singh SK, et al. MobiHisNet: A lightweight CNN in mobile edge computing for histopathological image classification. IEEE Internet of Things Journal. 2021;8(24):17778–17789. doi: 10.1109/JIOT.2021.3119520.
70. Liu Y, Pu H, Sun DW. Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices. Trends in Food Science & Technology. 2021;113:193–204. doi: 10.1016/j.tifs.2021.04.042.
71. Bhatt D, Patel C, Talsania H, Patel J, Vaghela R, et al. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics. 2021;10(20):2470. doi: 10.3390/electronics10202470.
72. He K, Ji L, Wu CWD, Tso KFG. Using SARIMA-CNN-LSTM approach to forecast daily tourism demand. Journal of Hospitality and Tourism Management. 2021;49:25–33. doi: 10.1016/j.jhtm.2021.08.022.
73. Sharma R, Sungheetha A. An efficient dimension reduction based fusion of CNN and SVM model for detection of abnormal incident in video surveillance. Journal of Soft Computing Paradigm. 2021;3(2):55–69. doi: 10.36548/jscp.
74. Rodriguez-Ruiz A, Teuwen J, Chung K, Karssemeijer N, Chevalier M, et al. Pectoral muscle segmentation in breast tomosynthesis with deep learning. Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575, 105752J. Houston, Texas, USA: International Society for Optics and Photonics; 2018.
75. Wang J, Ding H, Bidgoli FA, Zhou B, Iribarren C, et al. Detecting cardiovascular disease from mammograms with deep learning. IEEE Transactions on Medical Imaging. 2017;36(5):1172–1181. doi: 10.1109/TMI.2017.2655486.
76. Debelee TG, Amirian M, Ibenthal A, Palm G, Schwenker F. Classification of mammograms using convolutional neural network based feature extraction. International Conference on Information and Communication Technology for Development for Africa; Bahir Dar, Ethiopia. Springer; 2017. pp. 89–98.
77. Kooi T, van Ginneken B, Karssemeijer N, den Heeten A. Discriminating solitary cysts from soft tissue lesions in mammography using a pretrained deep convolutional neural network. Medical Physics. 2017;44(3):1017–1027. doi: 10.1002/mp.12110.
  • 55.Karimi Jafarbigloo S, Danyali H. Nuclear atypia grading in breast cancer histopathological images based on CNN feature extraction and LSTM classification. CAAI Transactions on Intelligence Technology. 2021;6(4):426–439. doi: 10.1049/cit2.12061. [DOI] [Google Scholar]
  • 56.Lei Y, He X, Yao J, Wang T, Wang L, et al. Breast tumor segmentation in 3D automatic breast ultrasound using mask scoring R-CNN. Medical Physics. 2021;48(1):204–214. doi: 10.1002/mp.14569. [DOI] [PubMed] [Google Scholar]
  • 57.Salama WM, Aly MH. Deep learning in mammography images segmentation and classification: Automated CNN approach. Alexandria Engineering Journal. 2021;60(5):4701–4709. doi: 10.1016/j.aej.2021.03.048. [DOI] [Google Scholar]
  • 58.Agarwal P, Yadav A, Mathur P. Data engineering for smart systems. Manipal University, Springer; Jaipur, India: 2022. Breast cancer prediction on BreakHis dataset using deep CNN and transfer learning model; pp. 77–88. [Google Scholar]
  • 59.Nazir MS, Khan UG, Mohiyuddin A, Reshan A, Saleh M, et al. A novel CNN-inception-V4-based hybrid approach for classification of breast cancer in mammogram images. Wireless Communications and Mobile Computing. 2022;2022 doi: 10.1155/2022/5089078. [DOI] [Google Scholar]
  • 60.Qin C, Wu Y, Zeng J, Tian L, Zhai Y, et al. Joint transformer and multi-scale CNN for DCE-MRI breast cancer segmentation. Soft Computing. 2022;26:8317–8334. doi: 10.1007/s00500-022-07235-0. [DOI] [Google Scholar]
  • 61.Zainudin Z, Shamsuddin SM, Hasan S. Machine learning and big data analytics paradigms: Analysis, applications and challenges. Springer; 2021. Deep layer convolutional neural network (CNN) architecture for breast cancer classification using histopathological images; pp. 347–364. [Google Scholar]
  • 62.Agarwal R, Sharma H. Advances in computer, communication and computational sciences. Springer; Singapore: 2021. A new enhanced recurrent extreme learning machine based on feature fusion with CNN deep features for breast cancer detection; pp. 461–471. [Google Scholar]
  • 63.Saber A, Sakr M, Abou-Seida O, Keshk A. A novel transfer-learning model for automatic detection and classification ofbreast cancer based deep CNN. Kafrelsheikh Journal of Information Sciences. 2021;2(1):1–9. doi: 10.21608/kjis.2021.192207. [DOI] [Google Scholar]
  • 64.Shaila S, Gurudas V, Hithyshi K, Mahima M, PoojaShree H. Data engineering and intelligent computing. Springer; Bengaluru, India: 2022. CNN-LSTM-based deep learning model for early detection of breast cancer; pp. 83–91. [Google Scholar]
  • 65.Karuppasamy A, Abdesselam A, Hedjam R, Zidoum H, Al-Bahri M. Recent CNN-based techniques for breast cancer histology image classification. The Journal of Engineering Research [TJER] 2022;19(1):41–53. doi: 10.53540/tjer.vol19iss1pp41-53. [DOI] [Google Scholar]
  • 66.Susilo AB, Sugiharti E. Accuracy enhancement in early detection of breast cancer on mammogram images with convolutional neural network (CNN) methods using data augmentation and transfer learning. Journal of Advances in Information Systems and Technology. 2021;3(1):9–16. doi: 10.15294/jaist.v3i1.49012. [DOI] [Google Scholar]
  • 67.Hariharan R, Dhilsath Fathima M, Pitchai A, Roy VJ, Padhi A. Advance concepts of image processing and pattern recognition. Vol. 19. Springer; Singapore: 2022. Detection and classification of breast cancer using CNN; pp. 109–1. [Google Scholar]
  • 68.Bal A, Das M, Satapathy SM, Jena M, Das SK. BFCNet: A CNN for diagnosis of ductal carcinoma in breast from cytology images. Pattern Analysis and Applications. 2021;24(3):967–980. doi: 10.1007/s10044-021-00962-4. [DOI] [Google Scholar]
  • 69.Kumar A, Sharma A, Bharti V, Singh AK, Singh SK, et al. MobiHisNet: A lightweight CNN in mobile edge computing for histopathological image classification. IEEE Internet of Things Journal. 2021;8(24):17778–17789. doi: 10.1109/JIOT.2021.3119520. [DOI] [Google Scholar]
  • 70.Liu Y, Pu H, Sun DW. Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices. Trends in Food Science & Technology. 2021;113:193–204. doi: 10.1016/j.tifs.2021.04.042. [DOI] [Google Scholar]
  • 71.Bhatt D, Patel C, Talsania H, Patel J, Vaghela R, et al. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics. 2021;10(20):2470. doi: 10.3390/electronics10202470. [DOI] [Google Scholar]
  • 72.He K, Ji L, Wu CWD, Tso KFG. Using SARIMA-CNN-LSTM approach to forecast daily tourism demand. Journal of Hospitality and Tourism Management. 2021;49:25–33. doi: 10.1016/j.jhtm.2021.08.022. [DOI] [Google Scholar]
  • 73.Sharma R, Sungheetha A. An efficient dimension reduction based fusion of CNN and SVM model for detection of abnormal incident in video surveillance. Journal of Soft Computing Paradigm. 2021;3(2):55–69. doi: 10.36548/jscp. [DOI] [Google Scholar]
  • 74.Rodriguez-Ruiz A, Teuwen J, Chung K, Karssemeijer N, Chevalier M, et al. Medical imaging 2018: Computer-aided diagnosis, vol. 10575, 105752J. Houston, Texas, USA: International Society for Optics and Photonics; 2018. Pectoral muscle segmentation in breast tomosynthesis with deep learning. [Google Scholar]
  • 75.Wang J, Ding H, Bidgoli FA, Zhou B, Iribarren C, et al. Detecting cardiovascular disease from mammograms with deep learning. IEEE Transactions on Medical Imaging. 2017;36(5):1172–1181. doi: 10.1109/TMI.2017.2655486. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Debelee TG, Amirian M, Ibenthal A, Palm G, Schwenker F. Classification of mammograms using convolutional neural network based feature extraction; International Conference on Information and Communication Technology for Development for Africa; Bahir Dar, Ethiopia. Springer; 2017. pp. 89–98. [Google Scholar]
  • 77.Kooi T, van Ginneken B, Karssemeijer N, den Heeten A. Discriminating solitary cysts from soft tissue lesions in mammography using a pretrained deep convolutional neural network. Medical Physics. 2017;44(3):1017–1027. doi: 10.1002/mp.12110. [DOI] [PubMed] [Google Scholar]
  • 78.Hu Z, Tang J, Wang Z, Zhang K, Zhang L, et al. Deep learning for image-based cancer detection and diagnosis—A survey. Pattern Recognition. 2018;83:134–149. doi: 10.1016/j.patcog.2018.05.014. [DOI] [Google Scholar]
  • 79.Chittineni S, Edara SS. Machine learning and autonomous systems. Springer; Tamil Nadu, India: 2022. A novel CNN approach for detecting breast cancer from mammographic image; pp. 361–370. [Google Scholar]
  • 80.Tripathi RP, Khatri SK, Baxodirovna DVG. A transfer learning approach to implementation of pretrained CNN models for breast cancer diagnosis. Journal of Positive School Psychology. 2022;6:5816–5830. [Google Scholar]
  • 81.Kolchev A, Pasynkov D, Egoshin I, Kliouchkin I, Pasynkova O, et al. YOLOv4-based CNN model versus nested contours algorithm in the suspicious lesion detection on the mammography image: A direct comparison in the real clinical settings. Journal of Imaging. 2022;8(4):88. doi: 10.3390/jimaging8040088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Liu S, Liu G, Zhou H. A robust parallel object tracking method for illumination variations. Mobile Networks and Applications. 2019;24(1):5–17. doi: 10.1007/s11036-018-1134-8. [DOI] [Google Scholar]
  • 83.Devika R, Rajasekaran S, Gayathri RL, Priyal J, Kanneganti SR. Automatic breast cancer lesion detection and classification in mammograms using faster R-CNN deep learning network. Issues and Developments in Medicine and Medical Research. 2022;6:10–20. doi: 10.9734/bpi/idmmr/v6. [DOI] [Google Scholar]
  • 84.Mahmoud H, Alharbi A, Khafga D. Breast cancer classification using deep convolution neural network with transfer learning. Intelligent Automation & Soft Computing. 2021;29(3):803–814. doi: 10.32604/iasc.2021.018607. [DOI] [Google Scholar]
  • 85.Liu S, Liu X, Wang S, Muhammad K. Fuzzy-aided solution for out-of-view challenge in visual tracking under IoT-assisted complex environment. Neural Computing and Applications. 2021;33(4):1055–1065. doi: 10.1007/s00521-020-05021-3. [DOI] [Google Scholar]
  • 86.Yin W, Kann K, Yu M, Schütze H. Comparative study of CNN and RNN for natural language processing. arXiv preprint. 2017:arXiv:1702.01923 [Google Scholar]
  • 87.Hershey S, Chaudhuri S, Ellis DP, Gemmeke JF, Jansen A, et al. CNN architectures for large-scale audio classification; 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); LA, USA. 2017. pp. 131–135. [Google Scholar]
  • 88.Wang J, Zhu H, Wang SH, Zhang YD. A review of deep learning on medical image analysis. Mobile Networks and Applications. 2021;26(1):351–380. doi: 10.1007/s11036-020-01672-7. [DOI] [Google Scholar]
  • 89.Ge R, Chen G, Saruta K, Terata Y. MDDCNN: Diagnosis of lymph node metastases in breast cancer based on dual-CNN fusion and segmental convolution. Information. 2021;24(2):129–138. [Google Scholar]
  • 90.Ranjbarzadeh R, Tataei Sarshar N, Jafarzadeh Ghoushchi S, Saleh Esfahani M, Parhizkar M, et al. MRFE-CNN: Multi-route feature extraction model for breast tumor segmentation in mammograms using a convolutional neural network. Annals of Operations Research. 2022;11:1–22. [Google Scholar]
  • 91.Sun Y, Xue B, Zhang M, Yen GG, Lv J. Automatically designing CNN architectures using the genetic algorithm for image classification. IEEE Transactions on Cybernetics. 2020;50(9):3840–3854. doi: 10.1109/TCYB.6221036. [DOI] [PubMed] [Google Scholar]
  • 92.Wei Y, Xia W, Huang J, Ni B, Dong J, et al. CNN: Single-label to multi-label. arXiv preprint. 2014:arXiv:1406.5726 [Google Scholar]
  • 93.Ji Y, Zhang H, Zhang Z, Liu M. CNN-based encoder-decoder networks for salient object detection: A comprehensive review and recent advances. Information Sciences. 2021;546:835–857. doi: 10.1016/j.ins.2020.09.003. [DOI] [Google Scholar]
  • 94.Chang CC, Lin CJ. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2011;2(3):1–27. doi: 10.1145/1961189.1961199. [DOI] [Google Scholar]
  • 95.Gallicchio C, Scardapane S. Deep randomized neural networks. Recent Trends in Learning from Data. 2020:43–68. doi: 10.1007/978-3-030-43883-8. [DOI] [Google Scholar]
  • 96.Liu RW, Yuan W, Chen X, Lu Y. An enhanced CNN-enabled learning method for promoting ship detection in maritime surveillance system. Ocean Engineering. 2021;235:109435. doi: 10.1016/j.oceaneng.2021.109435. [DOI] [Google Scholar]
  • 97.Pradeep S, Nirmaladevi P. A review on speckle noise reduction techniques in ultrasound medical images based on spatial domain, transform domain and CNN methods. IOP Conference Series: Materials Science and Engineering. 2021;1055(1):012116. doi: 10.1088/1757-899X/1055/1/012116. [DOI] [Google Scholar]
  • 98.Jin N, Wu J, Ma X, Yan K, Mo Y. Multi-task learning model based on multi-scale CNN and LSTM for sentiment classification. IEEE Access. 2020;8:77060–77072. doi: 10.1109/ACCESS.2020.2989428. [DOI] [Google Scholar]
  • 99.Yan R, Liao J, Yang J, Sun W, Nong M, et al. Multi-hour and multi-site air quality index forecasting in Beijing using CNN, LSTM, CNN-LSTM, and spatiotemporal clustering. Expert Systems with Applications. 2021;169:114513. doi: 10.1016/j.eswa.2020.114513. [DOI] [Google Scholar]
  • 100.Xiang L, Wang P, Yang X, Hu A, Su H. Fault detection of wind turbine based on SCADA data analysis using CNN and LSTM with attention mechanism. Measurement. 2021;175:109094. doi: 10.1016/j.measurement.2021.109094. [DOI] [Google Scholar]
  • 101.Chen Y, Wang Y, Dong Z, Su J, Han Z, et al. 2-D regional short-term wind speed forecast based on CNN-LSTM deep learning model. Energy Conversion and Management. 2021;244:114451. doi: 10.1016/j.enconman.2021.114451. [DOI] [Google Scholar]
  • 102.Zhang M, Li W, Tao R, Li H, Du Q. Information fusion for classification of hyperspectral and LiDAR data using IP-CNN. IEEE Transactions on Geoscience and Remote Sensing. 2021;60:1–12. doi: 10.1109/TGRS.2022.3217577. [DOI] [Google Scholar]
  • 103.Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint. 2014:arXiv:1409.1556 [Google Scholar]
  • 104.Albawi S, Mohammed TA, Al-Zawi S. Understanding of a convolutional neural network; 2017 International Conference on Engineering and Technology (ICET); Akdeniz University, Antalya, Turkey. 2017. pp. 1–6. [Google Scholar]
  • 105.Gkioxari G, Malik J, Johnson J. Mesh R-CNN; Proceedings of the IEEE/CVF International Conference on Computer Vision; Seoul, Korea (South). 2019. pp. 9785–9795. [Google Scholar]
  • 106.Zhang K, Zuo W, Zhang L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing. 2018;27(9):4608–4622. doi: 10.1109/TIP.2018.2839891. [DOI] [PubMed] [Google Scholar]
  • 107.Chen K, Wang J, Chen LC, Gao H, Xu W, et al. ABC-CNN: An attention based convolutional neural network for visual question answering. arXiv preprint. 2015:arXiv:1511.05960 [Google Scholar]
  • 108.Basiri ME, Nemati S, Abdar M, Cambria E, Acharya UR. ABCDM: An attention-based bidirectional CNN-RNN deep model for sentiment analysis. Future Generation Computer Systems. 2021;115:279–294. doi: 10.1016/j.future.2020.08.005. [DOI] [Google Scholar]
  • 109.Dua N, Singh SN, Semwal VB. Multi-input CNN-GRU based human activity recognition using wearable sensors. Computing. 2021;103(7):1461–1478. doi: 10.1007/s00607-021-00928-8. [DOI] [Google Scholar]
  • 110.Zhu Z, Lu S, Wang SH, Górriz JM, Zhang YD. BCNet: A novel network for blood cell classification. Frontiers in Cell and Developmental Biology. 2021;9:813996. doi: 10.3389/fcell.2021.813996. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • 111.Wu J. Introduction to convolutional neural networks. National Key Lab for Novel Software Technology, Nanjing University, China. 2017;5(23):1–31. [Google Scholar]
  • 112.Chauhan R, Ghanshala KK, Joshi R. Convolutional neural network (CNN) for image detection and recognition; 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC); Punjab, India. 2018. pp. 278–282. [Google Scholar]
  • 113.Hossain MA, Sajib MSA. Classification of image using convolutional neural network (CNN). Global Journal of Computer Science and Technology. 2019;19(D2):13–18. [Google Scholar]
  • 114.Kido S, Hirano Y, Hashimoto N. Detection and classification of lung abnormalities by use of convolutional neural network (CNN) and regions with CNN features (R-CNN); 2018 International Workshop on Advanced Image Technology (IWAIT); Imperial Mae Ping Hotel, Thailand. 2018. pp. 1–4. [Google Scholar]
  • 115.Zhang YD, Satapathy SC, Guttery DS, Górriz JM, Wang SH. Improved breast cancer classification through combining graph convolutional network and convolutional neural network. Information Processing & Management. 2021;58(2):102439. doi: 10.1016/j.ipm.2020.102439. [DOI] [Google Scholar]
  • 116.Zhang Q, Wu YN, Zhu SC. Interpretable convolutional neural networks; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA. 2018. pp. 8827–8836. [Google Scholar]
  • 117.Hashemi M. Enlarging smaller images before inputting into convolutional neural network: Zeropadding vs. interpolation. Journal of Big Data. 2019;6(1):1–13. doi: 10.1186/s40537-019-0263-7. [DOI] [Google Scholar]
  • 118.Fang W, Zhang F, Sheng VS, Ding Y. A method for improving CNN-based image recognition using DCGAN. Computers, Materials & Continua. 2018;57(1):167–178. doi: 10.32604/cmc.2018.02356. [DOI] [Google Scholar]
  • 119.Zhang M, Li W, Du Q. Diverse region-based CNN for hyperspectral image classification. IEEE Transactions on Image Processing. 2018;27(6):2623–2634. doi: 10.1109/TIP.2018.2809606. [DOI] [PubMed] [Google Scholar]
  • 120.Chen D, Bolton J, Manning CD. A thorough examination of the CNN/daily mail reading comprehension task. arXiv preprint. 2016:arXiv:1606.02858 [Google Scholar]
  • 121.Hussain M, Bird JJ, Faria DR. A study on CNN transfer learning for image classification; UK workshop on computational intelligence; Nottingham, UK. Springer; 2018. pp. 191–202. [Google Scholar]
  • 122.Akhtar N, Ragavendran U. Interpretation of intelligence in CNN-pooling processes: A methodological survey. Neural Computing and Applications. 2020;32(3):879–898. doi: 10.1007/s00521-019-04296-5. [DOI] [Google Scholar]
  • 123.Tolias G, Sicre R, Jégou H. Particular object retrieval with integral max-pooling of CNN activations. arXiv preprint. 2015:arXiv:1511.05879 [Google Scholar]
  • 124.Gong Y, Wang L, Guo R, Lazebnik S. Multi-scale orderless pooling of deep convolutional activation features; European Conference on Computer Vision; Zurich, Switzerland. Springer; 2014. pp. 392–407. [Google Scholar]
  • 125.Vaccaro F, Bertini M, Uricchio T, Del Bimbo A. Image retrieval using multi-scale CNN features pooling; Proceedings of the 2020 International Conference on Multimedia Retrieval; Dublin, Ireland. 2020. pp. 311–315. [Google Scholar]
  • 126.Zhang R, Zhu F, Liu J, Liu G. Depth-wise separable convolutions and multi-level pooling for an efficient spatial CNN-based steganalysis. IEEE Transactions on Information Forensics and Security. 2019;15:1138–1150. doi: 10.1109/TIFS.10206. [DOI] [Google Scholar]
  • 127.Xiao Y, Wang X, Zhang P, Meng F, Shao F. Object detection based on faster R-CNN algorithm with skip pooling and fusion of contextual information. Sensors. 2020;20(19):5490. doi: 10.3390/s20195490. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Giusti A, Cireşan DC, Masci J, Gambardella LM, Schmidhuber J. Fast image scanning with deep max-pooling convolutional neural networks; 2013 IEEE International Conference on Image Processing; Melbourne, Australia. 2013. pp. 4034–4038. [Google Scholar]
  • 129.Wang S, Jiang Y, Hou X, Cheng H, Du S. Cerebral micro-bleed detection based on the convolution neural network with rank based average pooling. IEEE Access. 2017;5:16576–16583. doi: 10.1109/Access.6287639. [DOI] [Google Scholar]
  • 130.Wang S, Sun J, Mehmood I, Pan C, Chen Y, et al. Cerebral micro-bleeding identification based on a nine-layer convolutional neural network with stochastic pooling. Concurrency and Computation: Practice and Experience. 2020;32(1):e5130. doi: 10.1002/cpe.5130. [DOI] [Google Scholar]
  • 131.Hang ST, Aono M. Bi-linearly weighted fractional max pooling. Multimedia Tools and Applications. 2017;76(21):22095–22117. doi: 10.1007/s11042-017-4840-5. [DOI] [Google Scholar]
  • 132.Wang SH, Lv YD, Sui Y, Liu S, Wang SJ, et al. Alcoholism detection by data augmentation and convolutional neural network with stochastic pooling. Journal of Medical Systems. 2018;42(1):1–11. doi: 10.1007/s10916-017-0845-x. [DOI] [PubMed] [Google Scholar]
  • 133.Han J, Moraga C. The influence of the sigmoid function parameters on the speed of backpropagation learning; International Workshop on Artificial Neural Networks; Torremolinos, Spain. Springer; 1995. pp. 195–201. [Google Scholar]
  • 134.Fan E. Extended tanh-function method and its applications to nonlinear equations. Physics Letters A. 2000;277(4–5):212–218. doi: 10.1016/S0375-9601(00)00725-8. [DOI] [Google Scholar]
  • 135.Agarap AF. Deep learning using rectified linear units (ReLU) arXiv preprint. 2018:arXiv:1803.08375 [Google Scholar]
  • 136.Dubey AK, Jain V. Applications of computing, automation and wireless systems in electrical engineering. Springer; Singapore: 2019. Comparative study of convolution neural network’s ReLU and leaky-ReLU activation functions; pp. 873–880. [Google Scholar]
  • 137.Crnjanski J, Krstić M, Totović A, Pleros N, Gvozdić D. Adaptive sigmoid-like and PReLU activation functions for all-optical perceptron. Optics Letters. 2021;46(9):2003–2006. doi: 10.1364/OL.422930. [DOI] [PubMed] [Google Scholar]
  • 138.Santurkar S, Tsipras D, Ilyas A, Madry A. How does batch normalization help optimization? Advances in Neural Information Processing Systems. 2018;31:2483–2493. [Google Scholar]
  • 139.Wu H, Gu X. Towards dropout training for convolutional neural networks. Neural Networks. 2015;71:1–10. doi: 10.1016/j.neunet.2015.07.007. [DOI] [PubMed] [Google Scholar]
  • 140.Behar N, Shrivastava M. ResNet50-based effective model for breast cancer classification using histopathology images. Computer Modeling in Engineering Sciences. 2022;130(2):823–839. doi: 10.32604/cmes.2022.017030. [DOI] [Google Scholar]
  • 141.Alkhaleefah M, Wu CC. A hybrid CNN and RBF-based SVM approach for breast cancer classification in mammograms; 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC); Miyazaki, Japan. 2018. pp. 894–899. [Google Scholar]
  • 142.Liu K, Kang G, Zhang N, Hou B. Breast cancer classification based on fully-connected layer first convolutional neural networks. IEEE Access. 2018;6:23722–23732. doi: 10.1109/ACCESS.2018.2817593. [DOI] [Google Scholar]
  • 143.Gour M, Jain S, Sunil Kumar T. Residual learning based CNN for breast cancer histopathological image classification. International Journal of Imaging Systems and Technology. 2020;30(3):621–635. doi: 10.1002/ima.22403. [DOI] [Google Scholar]
  • 144.Wang Y, Sun L, Ma K, Fang J. Breast cancer microscope image classification based on CNN with image deformation; International Conference Image Analysis and Recognition; Póvoa de Varzim, Portugal. Springer; 2018. pp. 845–852. [Google Scholar]
  • 145.Yao H, Zhang X, Zhou X, Liu S. Parallel structure deep neural network using CNN and RNN with an attention mechanism for breast cancer histology image classification. Cancers. 2019;11(12):1901. doi: 10.3390/cancers11121901. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Agnes SA, Anitha J, Pandian S, Peter JD. Classification of mammogram images using multiscale all convolutional neural network (MA-CNN). Journal of Medical Systems. 2020;44(1):1–9. doi: 10.1007/s10916-019-1494-z. [DOI] [PubMed] [Google Scholar]
  • 147.Wang Y, Choi EJ, Choi Y, Zhang H, Jin GY, et al. Breast cancer classification in automated breast ultrasound using multiview convolutional neural network with transfer learning. Ultrasound in Medicine & Biology. 2020;46(5):1119–1132. doi: 10.1016/j.ultrasmedbio.2020.01.001. [DOI] [PubMed] [Google Scholar]
  • 148.Saikia AR, Bora K, Mahanta LB, Das AK. Comparative assessment of CNN architectures for classification of breast FNAC images. Tissue and Cell. 2019;57:8–14. doi: 10.1016/j.tice.2019.02.001. [DOI] [PubMed] [Google Scholar]
  • 149.Mewada HK, Patel AV, Hassaballah M, Alkinani MH, Mahant K. Spectral-spatial features integrated convolution neural network for breast cancer classification. Sensors. 2020;20(17):4747. doi: 10.3390/s20174747. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 150.Zhou Y, Xu J, Liu Q, Li C, Liu Z, et al. A radiomics approach with CNN for shear-wave elastography breast tumor classification. IEEE Transactions on Biomedical Engineering. 2018;65(9):1935–1942. doi: 10.1109/TBME.10. [DOI] [PubMed] [Google Scholar]
  • 151.Lotter W, Sorensen G, Cox D. Deep learning in medical image analysis and multimodal learning for clinical decision support. Québec City, QC, Canada: Springer; 2017. A multi-scale CNN and curriculum learning strategy for mammogram classification; pp. 169–177. [Google Scholar]
  • 152.Vidyarthi A, Shad J, Sharma S, Agarwal P. Classification of breast microscopic imaging using hybrid CLAHE-CNN deep architecture; 2019 Twelfth International Conference on Contemporary Computing (IC3); Noida, India. 2019. pp. 1–5. [Google Scholar]
  • 153.Hijab A, Rushdi MA, Gomaa MM, Eldeib A. Breast cancer classification in ultrasound images using transfer learning; 2019 Fifth International Conference on Advances in Biomedical Engineering (ICABME); Tripoli, Lebanon. 2019. pp. 1–4. [Google Scholar]
  • 154.Kumar K, Rao ACS. Breast cancer classification of image using convolutional neural network; 2018 4th International Conference on Recent Advances in Information Technology (RAIT); Dhanbad, India. 2018. pp. 1–6. [Google Scholar]
  • 155.Kousalya K, Saranya T. Improved the detection and classification of breast cancer using hyper parameter tuning. Materials Today: Proceedings. 2021:1–6. [Google Scholar]
  • 156.Mikhailov N, Shakeel M, Urmanov A, Lee MH, Demirci MF. Optimization of CNN model for breast cancer classification; 2021 16th International Conference on Electronics Computer and Computation (ICECCO); Kaskelen, Kazakhstan. 2021. pp. 1–3. [Google Scholar]
  • 157.Karthik R, Menaka R, Kathiresan G, Anirudh M, Nagharjun M. Gaussian dropout based stacked ensemble CNN for classification of breast tumor in ultrasound images. IRBM. 2021 doi: 10.1016/j.irbm.2021.10.002. [DOI] [Google Scholar]
  • 158.Nawaz M, Sewissy AA, Soliman THA. Multi-class breast cancer classification using deep learning convolutional neural network. International Journal of Advanced Computer Science and Applications. 2018;9(6):316–332. doi: 10.14569/issn.2156-5570. [DOI] [Google Scholar]
  • 159.Deniz E, Sengür A, Kadiroglu Z, Guo Y, Bajaj V, et al. Transfer learning based histopathologic image classification for breast cancer detection. Health Information Science and Systems. 2018;6(1):1–7. doi: 10.1007/s13755-018-0057-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Yeh JY, Chan S. CNN-based CAD for breast cancer classification in digital breast tomosynthesis; Proceedings of the 2nd International Conference on Graphics and Signal Processing; Sydney NSW Australia. 2018. pp. 26–30. [Google Scholar]
  • 161.Gonçalves CB, Souza JR, Fernandes H. Classification of static infrared images using pretrained CNN for breast cancer detection; 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS); Aveiro, Portugal. 2021. pp. 101–106. [Google Scholar]
  • 162.Bayramoglu N, Kannala J, Heikkilä J. Deep learning for magnification independent breast cancer histopathology image classification; 2016 23rd International Conference on Pattern Recognition (ICPR); Cancun. 2016. pp. 2440–2445. [Google Scholar]
  • 163.Alqahtani Y, Mandawkar U, Sharma A, Hasan MNS, Kulkarni MH, et al. Breast cancer pathological image classification based on the multiscale CNN squeeze model. Computational Intelligence and Neuroscience. 2022;2022 doi: 10.1155/2022/7075408. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 164.Sharma R, Sharma JB, Maheshwari R, Agarwal P. Thermogram adaptive efficient model for breast cancer detection using fractional derivative mask and hybrid feature set in the IoT environment. Computer Modeling in Engineering Sciences. 2022;130(2):923–947. doi: 10.32604/cmes.2022.016065. [DOI] [Google Scholar]
  • 165.Sohail A, Khan A, Wahab N, Zameer A, Khan S. A multi-phase deep CNN based mitosis detection framework for breast cancer histopathological images. Scientific Reports. 2021;11(1):1–18. doi: 10.1038/s41598-021-85652-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 166.Mahmood T, Arsalan M, Owais M, Lee MB, Park KR. Artificial intelligence-based mitosis detection in breast cancer histopathology images using faster R-CNN and deep CNNs. Journal of Clinical Medicine. 2020;9(3):749. doi: 10.3390/jcm9030749. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 167.Wang Z, Li M, Wang H, Jiang H, Yao Y, et al. Breast cancer detection using extreme learning machine based on feature fusion with CNN deep features. IEEE Access. 2019;7:105146–105158. doi: 10.1109/Access.6287639. [DOI] [Google Scholar]
  • 168.Chiao JY, Chen KY, Liao KYK, Hsieh PH, Zhang G, et al. Detection and classification the breast tumors using mask R-CNN on sonograms. Medicine. 2019;98(19) doi: 10.1097/MD.0000000000015200. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 169.Das K, Conjeti S, Chatterjee J, Sheet D. Detection of breast cancer from whole slide histopathological images using deep multiple instance CNN. IEEE Access. 2020;8:213502–213511. doi: 10.1109/ACCESS.2020.3040106. [DOI] [Google Scholar]
  • 170.Zainudin Z, Shamsuddin SM, Hasan S. Deep layer CNN architecture for breast cancer histopathology image detection; International Conference on Advanced Machine Learning Technologies and Applications; Cairo, Egypt. Springer; 2019. pp. 43–51. [Google Scholar]
  • 171.Wu B, Kausar T, Xiao Q, Wang M, Wang W, et al. FF-CNN: An efficient deep neural network for mitosis detection in breast cancer histological images; Annual Conference on Medical Image Understanding and Analysis; John McIntyre Centre, Pollock Halls, Edinburgh. Springer; 2017. pp. 249–260. [Google Scholar]
  • 172.Gonçalves CB, de Souza JR, Fernandes H. CNN architecture optimization using bio-inspired algorithms for breast cancer detection in infrared images. Computers in Biology and Medicine. 2022:105205. doi: 10.1016/j.compbiomed.2021.105205. [DOI] [PubMed] [Google Scholar]
  • 173.Guan S, Loew M. Medical imaging 2019, imaging informatics for healthcare, research, and applications. Vol. 10954. SPIE; San Diego, California, USA: 2019. Using generative adversarial networks and transfer learning for breast cancer detection by convolutional neural networks; pp. 306–318. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 174.Hadush S, Girmay Y, Sinamo A, Hagos G. Breast cancer detection using convolutional neural networks. arXiv preprint. 2020:arXiv:2003.07911 [Google Scholar]
  • 175.Huang J, Mei L, Long M, Liu Y, Sun W, et al. BM-Net: CNN-based MobileNet-V3 and bilinear structure for breast cancer detection in whole slide images. Bioengineering. 2022;9(6):261. doi: 10.3390/bioengineering9060261. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 176.Mahbub TN, Yousuf MA, Uddin MN. A modified CNN and fuzzy AHP based breast cancer stage detection system; 2022 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE); Gazipur, Bangladesh. 2022. pp. 1–6. [Google Scholar]
  • 177.Prajoth SenthilKumar A, Narendra M, Jani Anbarasi L, Raj BE. Breast cancer analysis and detection in histopathological images using CNN approach; Proceedings of International Conference on Intelligent Computing, Information and Control Systems; Secunderabad, India. Springer; 2021. pp. 335–343. [Google Scholar]
  • 178.Charan S, Khan MJ, Khurshid K. Breast cancer detection in mammograms using convolutional neural network; 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET); Sukkur, Pakistan. 2018. pp. 1–5. [Google Scholar]
  • 179.Alanazi SA, Kamruzzaman M, Islam Sarker MN, Alruwaili M, Alhwaiti Y, et al. Boosting breast cancer detection using convolutional neural network. Journal of Healthcare Engineering. 2021;2021 doi: 10.1155/2021/5528622. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 180.Gonçalves CB, Souza JR, Fernandes H. CNN optimization using surrogate evolutionary algorithm for breast cancer detection using infrared images; 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS); Shenzhen, China. 2022. pp. 84–89. [Google Scholar]
  • 181.Guan S, Loew M. Breast cancer detection using synthetic mammograms from generative adversarial networks in convolutional neural networks. Journal of Medical Imaging. 2019;6(3):031411. doi: 10.1117/1.JMI.6.3.031411. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 182.Sun L, Sun H, Wang J, Wu S, Zhao Y, et al. Breast mass detection in mammography based on image template matching and CNN. Sensors. 2021;21(8):2855. doi: 10.3390/s21082855. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 183.Chauhan A, Kharpate H, Narekar Y, Gulhane S, Virulkar T, et al. Breast cancer detection and prediction using machine learning; 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA); Coimbatore, India. 2021. pp. 1135–1143. [Google Scholar]
  • 184.Gupta V, Vasudev M, Doegar A, Sambyal N. Breast cancer detection from histopathology images using modified residual neural networks. Biocybernetics and Biomedical Engineering. 2021;41(4):1272–1287. doi: 10.1016/j.bbe.2021.08.011. [DOI] [Google Scholar]
  • 185.Chouhan N, Khan A, Shah JZ, Hussnain M, Khan MW. Deep convolutional neural network and emotional learning based breast cancer detection using digital mammography. Computers in Biology and Medicine. 2021;132:104318. doi: 10.1016/j.compbiomed.2021.104318. [DOI] [PubMed] [Google Scholar]
  • 186.Chen X, Men K, Chen B, Tang Y, Zhang T, et al. CNN-Based quality assurance for automatic segmentation of breast cancer in radiotherapy. Frontiers in Oncology. 2020;10:524. doi: 10.3389/fonc.2020.00524. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 187.El Adoui M, Mahmoudi SA, Larhmam MA, Benjelloun M. MRI breast tumor segmentation using different encoder and decoder CNN architectures. Computers. 2019;8(3):52. doi: 10.3390/computers8030052. [DOI] [Google Scholar]
  • 188.Kakileti ST, Manjunath G, Madhu HJ. Cascaded CNN for view independent breast segmentation in thermal images; 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Berlin, Germany. 2019. pp. 6294–6297. [DOI] [PubMed] [Google Scholar]
  • 189.Kumar MN, Jatti A, Narayanappa C. Probable region identification and segmentation in breast cancer using the DL-CNN; 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT); Tirunelveli, India. 2019. pp. 1144–1149. [Google Scholar]
  • 190.Atrey K, Singh BK, Roy A, Bodhey NK. Real-time automated segmentation of breast lesions using CNN-based deep learning paradigm: Investigation on mammogram and ultrasound. International Journal of Imaging Systems and Technology. 2021;32:1084–1100. [Google Scholar]
  • 191.Irfan R, Almazroi AA, Rauf HT, Damasevicius R, Nasr EA, et al. Dilated semantic segmentation for breast ultrasonic lesion detection using parallel feature fusion. Diagnostics. 2021;11(7):1212. doi: 10.3390/diagnostics11071212. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 192.Su H, Liu F, Xie Y, Xing F, Meyyappan S, et al. Region segmentation in histopathological breast cancer images using deep convolutional neural network; 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI); Brooklyn, NY, USA. 2015. pp. 55–58. [Google Scholar]
  • 193.He S, Ruan J, Long Y, Wang J, Wu C, et al. Combining deep learning with traditional features for classification and segmentation of pathological images of breast cancer; 2018 11th International Symposium on Computational Intelligence and Design (ISCID); Zhejiang University, China. 2018. pp. 3–6. [Google Scholar]
  • 194.Soltani H, Amroune M, Bendib I, Haouam MY. Breast cancer lesion detection and segmentation based on mask R-CNN; 2021 International Conference on Recent Advances in Mathematics and Informatics (ICRAMI); Tebessa, Algeria. 2021. pp. 1–6. [Google Scholar]
  • 195.Min H, Wilson D, Huang Y, Liu S, Crozier S, et al. Fully automatic computer-aided mass detection and segmentation via pseudo-color mammograms and mask R-CNN; 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, USA. 2020. pp. 1111–1115. [Google Scholar]
  • 196.Arora R, Raman B. A deep neural CNN model with CRF for breast mass segmentation in mammograms; 2021 29th European Signal Processing Conference (EUSIPCO); Dublin, Ireland. 2021. pp. 1311–1315. [Google Scholar]
  • 197.Spuhler KD, Ding J, Liu C, Sun J, Serrano-Sosa M, et al. Task-based assessment of a convolutional neural network for segmenting breast lesions for radiomic analysis. Magnetic Resonance in Medicine. 2019;82(2):786–795. doi: 10.1002/mrm.27758. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 198.Atrey K, Singh BK, Roy A, Bodhey NK. Real-time automated segmentation of breast lesions using CNN-based deep learning paradigm: Investigation on mammogram and ultrasound. International Journal of Imaging Systems and Technology. 2022;32(4):1084–1100. doi: 10.1002/ima.22690. [DOI] [Google Scholar]
  • 199.Sumathi R, Vasudevan V. Intelligent systems and sustainable computing. Springer; Hyderabad, India: 2022. MRI breast image segmentation using artificial bee colony optimization with fuzzy clustering and CNN classifier; pp. 303–311. [Google Scholar]
  • 200.Xu Y, Wang Y, Yuan J, Cheng Q, Wang X, et al. Medical breast ultrasound image segmentation by machine learning. Ultrasonics. 2019;91:1–9. doi: 10.1016/j.ultras.2018.07.006. [DOI] [PubMed] [Google Scholar]
  • 201.Guo YY, Huang YH, Wang Y, Huang J, Lai QQ, et al. Breast MRI tumor automatic segmentation and triple-negative breast cancer discrimination algorithm based on deep learning. Computational and Mathematical Methods in Medicine. 2022;2022 doi: 10.1155/2022/2541358. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 202.Cui Z, Yang J, Qiao Y. Brain MRI segmentation with patch-based CNN approach; 2016 35th Chinese Control Conference (CCC); Chengdu, Sichuan Province, China. 2016. pp. 7026–7031. [Google Scholar]
