Sci Rep. 2024 Jan 6;14:692. doi: 10.1038/s41598-024-51329-8

Table 9.

Performance-based comparative analysis of the proposed method with single/multiple neural network approaches for multimodal breast cancer detection.

| Studies | Approach | Modalities | Performance |
| --- | --- | --- | --- |
| 79 | Two 3D ResNet-50 networks combined for multimodal feature extraction and fusion | High-dimensional MRI features and clinical information | AUC = 0.827 |
| 80 | Residual blocks integrated with inception blocks to form a single CNN architecture | B-mode ultrasound, elastic ultrasound, pure elastic ultrasound, and H-channel images | Classification accuracy of 94.76% for breast lump detection |
| 81 | A single CNN architecture applied to B-mode and SE-mode ultrasound images | B-mode and elastography ultrasound images | Sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00% |
| 33 | A single neural architecture extracting stacked features with sigmoid-gated attention and a dense layer for bi-modality | Text-based, gene expression, and copy number alteration (CNA) data | Reported improvements in AUC, accuracy, precision, and sensitivity of 0.5%, 8.6%, 9.2%, and 34.8%, respectively |
| 82 | A single CNN architecture applied independently to extract multimodal features | Grey-scale image samples | 96.55%, 90.68%, and 91.28% on the MIAS, DDSM, and INbreast datasets, respectively |
| Proposed study | A TwinCNN and binary optimization algorithm framework for multimodal classification of histology and mammography digital images | RGB and grey-scale image samples | Classification accuracy of 0.977 for the histology modality, 0.913 for the mammography modality, and 0.684 for the fused multimodalities |
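To make the twin-branch idea behind the proposed framework concrete, the following is a minimal sketch, not the authors' released code: one convolutional branch for RGB histology patches, a second branch for grey-scale mammography patches, and concatenation of the two feature vectors before a shared classifier. The layer widths, input resolution (224×224), and class count are illustrative assumptions, and the binary-optimization feature-selection stage described in the paper is omitted here.

```python
# Hypothetical sketch of a TwinCNN-style two-branch multimodal classifier.
import torch
import torch.nn as nn


def conv_branch(in_channels: int) -> nn.Sequential:
    """Small convolutional feature extractor used by each twin branch."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),  # -> (N, 64, 1, 1)
        nn.Flatten(),             # -> (N, 64)
    )


class TwinCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.histology_branch = conv_branch(in_channels=3)    # RGB histology images
        self.mammography_branch = conv_branch(in_channels=1)  # grey-scale mammography images
        self.classifier = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, histology: torch.Tensor, mammography: torch.Tensor) -> torch.Tensor:
        # Extract per-modality features, fuse by concatenation, then classify.
        fused = torch.cat(
            [self.histology_branch(histology), self.mammography_branch(mammography)],
            dim=1,
        )
        return self.classifier(fused)


# Example forward pass with dummy batches of 4 images per modality.
model = TwinCNN()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```

In the actual framework, a feature-level fusion of this kind is what produces the "fused multimodalities" result reported in the last table row, while the single-modality accuracies come from classifying each branch's features on their own.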