Br J Radiol. 2019 Dec 13;93(1108):20190580. doi: 10.1259/bjr.20190580
Table 3(B). Studies using deep learning approach for mass detection and classification in mammography and other breast X-ray modalities.
Journal article Year Training set Validation set Independent test set CNN structure Performance (validation or independent test)
Mass detection
Samala et al.37 2016 2282 SFM & DM (2461 masses, 3173 FPs), 230 DBT vols (228 masses, 28,330 FPs), 4-fold CV 94 DBT vols (89 masses) Cuda-convNet: Stage 1 training with mammograms, Stage 2 fine-tuning with DBT AUC(stage 1, mammography)=0.81; AUC(stage 2, DBT)=0.90; FROC: breast-based 91% sens at 1 FP/vol
Kim et al.38 2017 154 cases (616 DBT vols, 185M, 431N); 5-fold CV ImageNet-pretrained VGG16 and LSTM depth-directional long-term recurrent learning AUC(DCNN)=0.871, AUC(DCNN + LSTM)=0.919
Jung et al.39 2018 Private set: 350 pts (222 DMs) for second pretraining. INbreast: 115 pts (410 DMs), 5-fold CV ImageNet-pretrained ResNet50 with a feature pyramid network (class subnet, box subnet) FROC: Sens 0.94 at 1.3 FPI, Sens 0.97 at 3 FPI
Mass classification
Arevalo et al.40 2016 344 cases (736 images, 426B, 310M): 50% training, 10% validation, 40% test CNN with one or two conv layers. Also ImageNet-pretrained DeCAF AUC(CNN)=0.822; AUC(combined with hand-crafted features)=0.826; AUC(DeCAF)=0.79
Jiao et al.41 2016 300 images (150B, 150M) 300 images (150B, 150M) Fine-tuning of ImageNet-pretrained AlexNet as feature extractor. Two SVM classifiers for mid-level and high-level features. Accuracy = 96.7%
Dhungel et al.42 2017 INbreast 115 cases, Detection: 410 images, Segmentation & classification: 40 cases (41B, 75M masses); 60% training, 20% validation, 20% test Detection: multiscale deep belief network, a cascade of R-CNNs and random forest classifiers FROC: 90% at 1 FPI; AUC(DCNN features)=0.76; AUC(manually marked mass)=0.91.
Sun et al.43 2017 2400 ROIs (100 labeled, 2300 unlabeled) 758 ROIs DCNN with three convolution layers AUC = 0.8818, Accuracy = 0.8234
Antropova et al.44 2017 DM: 245 masses (113B, 132M); 5-fold CV ImageNet-pretrained VGG19 as feature extractor, SVM classifier AUC(maxpool features)=0.81; AUC(fused with radiomic features)=0.86
Samala et al.45 2017 SFM & DM 1335 views (ROI: 604M, 941B); 4-fold CV SFM 907 views (ROI:453M, 456B) ImageNet-pretrained AlexNet AUC = 0.82
Kooi et al.46 2017 Set 1: (1487M, 73102N); Set 2: (1108M, 696 cysts) Set 1: (342M, 21913N), Set 2: nested CV VGG-like DCNN pretrained with Set 1, used as feature extractor on Set 2. Gradient boosting trees classifier. Malignant-vs-cyst classification (CC + MLO): AUC(DCNN features)=0.78, AUC(with contrast features)=0.80
Jiao et al.47 2018 DDSM 300 images DDSM 150 images DDSM 150 images; MIAS set Joint model of an ImageNet-pretrained AlexNet fine-tuned as a feature extractor and a parasitic metric-learning net. Accuracy(DDSM)=97.4%; Accuracy(MIAS)=96.7%
Samala et al.48 2018 SFM & DM 2242 views (ROI: 1057M, 1397B), DBT 230 vols (ROI: 590M, 550B); 4-fold CV DBT 94 vols (ROI: 150M, 295B) ImageNet-pretrained AlexNet, 2-stage transfer learning, pruning AUC(with pruning)=0.90; AUC(without pruning)=0.88
Chougrad et al.49 2018 1529 cases (6116 images) from DDSM, INbreast, BCDR; 5-fold CV MIAS (113 images) Compare ImageNet-pretrained VGG16, ResNet, InceptionV3 InceptionV3: AUC = 0.99, Accuracy = 98.23%
Al-masni et al.50 2018 DDSM 600 images (300M, 300B); 5-fold CV ImageNet-pretrained DCNN with 24 convolutional layers (You-Only-Look-Once detection & classification) AUC = 0.9645; Accuracy = 97%
Wang et al.51 2018 BCDR 736 images; 50% training, 10% validation, 40% test Multiview-DCNN: ImageNet-pretrained InceptionV3 as feature extractor with attention map, Recurrent NN for classification MV-DNN: AUC = 0.882, Accuracy = 0.828; MV-DNN + Attention map: AUC = 0.886, Accuracy = 0.846.
Al-antari et al.52 2018 INbreast: 115 cases (410 DMs, 112 masses); 4-fold CV: 75% training, 6.25% validation, 18.75% test Detection DCNN (Al-masni et al); segmentation by second DCNN, classification by simplified AlexNet. Detection accuracy = 98.96%; AUC(M-vs-B classification)=0.9478
Gao et al.53 2018 SCNN: 49 CEDM cases; DCNN ResNet50: INbreast 89 cases; 10-fold CV Shallow-deep CNN (SD-CNN): SCNN generated virtual CEDM of mass. Pretrained ResNet50 as feature extractors for 2-view virtual CEDM and DM, Gradient boosting trees classifier AUC(DM)=0.87; AUC(DM + virtual CEDM)=0.92
Kim et al.54 2018 DDSM (178M, 306B) DDSM (44M, 77B) DDSM (170M, 170B) BI-RADS guided diagnosis network: ImageNet-pretrained VGG16, plus BI-RADS critic network and relevance score AUC(with BI-RADS critic network)=0.841; AUC(without BI-RADS critic network)=0.814
Perek et al.55 2019 54 CESM cases with 129 lesions (56M, 73B); 5-fold CV Fine-tuning (FT) ImageNet-pretrained AlexNet, RawNet without pretraining Using deep features and BI-RADS features: AUC(FT-AlexNet)=0.907; AUC(RawNet)=0.901
Samala et al.56 2019 SFM & DM 2242 views (ROI: 1057M, 1397B), DBT 230 vols (ROI: 590M, 550B); 4-fold CV DBT 94 vols (ROI: 150M, 295B) ImageNet-pretrained AlexNet, 2-stage transfer learning AUC(one-stage fine-tuning with DBT)=0.85; AUC (two-stage fine-tuning with mammo then DBT)=0.91
Mendel et al.57 2019 76 cases (2-view DM, DBT, synthetic SM) with 78 lesions (30M, 48B) including 34 masses, 15 ADs, 30 MC clusters; Leave-one-out CV ImageNet-pretrained VGG19 as feature extractor, SVM classifier Two-view AUC: all lesions DBT = 0.89, SM = 0.86, DM = 0.81; mass & AD DBT = 0.98; MC DBT = 0.97
Cancer detection (any lesion types)
Becker et al.58 2017 Study 1: (95M, 95N); Study 2: (83M, 513N) Study 1: (48M, 48N); Study 2: (42M, 257N) Study 1: BCDR (35M, 35N); Study 2: (18M, 233N) dANN from commercial “ViDi” image analysis software AUC(Study 1)=0.79; AUC(Study 2)=0.82
Carneiro et al.59 2017 (1) classification: DDSM 86 cases; (2) detection & classif: INbreast 115 cases (1) DDSM 86 cases; (2) INbreast 5-fold CV ImageNet-pretrained ConvNet Two-view AUC: (1) M-vs-B>0.9 or M-vs-(B + N)>0.9. (2) M-vs-B 0.78; M-vs-(B + N) 0.86
Kim et al.60 2018 3101M, 23,530 normal cases (four views/case) 1238 cases (619M) 1238 cases (619M) DIB-MG: (ResNet with 19 convolutional layers + 2-stage global-average-pooling layer) AUC(M-vs-(B + N))=0.906
Ribli et al.61 2018 DDSM 2620 cases and private DM set 174 cases INbreast 115 cases Faster R-CNN: ImageNet-pretrained VGG16 with region proposal network for localizing target Detection FROC: 90% sensitivity at 0.3 FPI; Classification AUC = 0.95
Aboutalib et al.62 2018 DDSM 3294 images, private DM set 1734 images; 6-fold CV private DM 100 images ImageNet-pretrained AlexNet, pretrained with DDSM then fine-tuned with DM (best among other variations) AUC(M-vs-recalled B)=0.80; AUC(M-vs-negative & recalled B)=0.74.
Akselrod-Ballin et al.63 2019 9611 cases (1049M, 1903 biopsy negative, 247 BI-RADS3, 6412 normals) 1055 cases + 31 FNs 2548 cases + 71 FNs InceptionResnetV2 without pretraining AUC(predict M per breast with clinical data)=0.91; AUC(identify normal case per breast with clinical data)=0.85; Identify M in 48% of FNs of radiologists
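Many of the classification rows above share one recipe: an ImageNet-pretrained CNN used as a fixed feature extractor (e.g. VGG19 in Antropova et al.44 and Mendel et al.57), followed by an SVM scored with cross-validated AUC. The sketch below illustrates that recipe only in outline, under stated assumptions: the feature matrix is a synthetic stand-in for real CNN activations, and every number in it is illustrative rather than taken from any listed study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for "deep features": in the tabulated studies these would be
# penultimate-layer activations of a pretrained CNN computed on lesion ROIs.
# Here two weakly separated Gaussian classes play the roles of benign (0)
# and malignant (1) lesions; sizes and shift are arbitrary.
n_benign, n_malignant, n_features = 60, 60, 64
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_benign, n_features)),
    rng.normal(0.3, 1.0, size=(n_malignant, n_features)),
])
y = np.concatenate([np.zeros(n_benign, dtype=int),
                    np.ones(n_malignant, dtype=int)])

# Linear SVM on standardized features, with 5-fold stratified CV as in
# several of the studies above; AUC is computed on out-of-fold scores.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(model, X, y, cv=cv, method="decision_function")
auc = roc_auc_score(y, scores)
print(f"5-fold CV AUC = {auc:.3f}")
```

Replacing the synthetic matrix with activations from an actual pretrained network (and the toy labels with biopsy-proven ones) would turn this outline into the transfer-learning pipeline the table describes; the fine-tuning variants (e.g. Samala et al.48) instead update some CNN weights on mammography data before classification.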