Journal of Imaging. 2024 Sep 25;10(10):239. doi: 10.3390/jimaging10100239

A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging

Deepshikha Bhati 1,*, Fnu Neha 1, Md Amiruzzaman 2
Editor: William E Higgins
PMCID: PMC11508748  PMID: 39452402

Abstract

The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.

Keywords: medical imaging, deep learning, machine learning, explainable AI, model interpretability

1. Introduction

Medical imaging (MI) is a cornerstone of modern healthcare, providing critical insights for diagnosing, treating, and monitoring various diseases. Traditionally, MI encompassed mesoscopic imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET). However, with recent technological advancements, the scope of MI has significantly broadened to include high-resolution histopathology, a vital subspecialty of pathology that focuses on examining tissue samples at microscopic and molecular levels. This area now leverages advanced techniques such as digital pathology and computational analysis.

The integration of artificial intelligence (AI) has revolutionized high-resolution histopathology, enhancing diagnostic accuracy and resolution. This expansion from traditional mesoscopic imaging to advanced microscopic techniques represents a significant evolution in MI. The field now includes histological, cellular, and molecular pathology, driven by AI advancements that support ongoing developments in diagnostic precision and therapeutic strategies. For instance, recent studies highlight the role of AI in analyzing electron microscopy images for disease monitoring [1,2] and improving deep learning applications in histopathology [3,4].

Traditional image analysis methods, reliant on handcrafted features and expert knowledge, are often time-consuming and prone to errors. Machine learning (ML) approaches, including Support Vector Machines (SVMs), decision trees, random forests, and logistic regression, have improved efficiency and accuracy in tasks such as image segmentation and disease classification. However, these methods still require manual feature selection and extraction. The advent of deep learning (DL) has transformed medical image analysis by automatically learning and extracting hierarchical features from large volumes of data [5,6,7,8,9,10,11,12,13]. This progress has provided healthcare professionals with valuable insights, enabling more accurate diagnoses and enhanced patient care.

Despite the impressive performance of DL models, challenges related to interpretability and transparency persist [14,15,16]. The opaque nature of these models raises concerns about their reliability in healthcare, where understanding diagnostic decisions is crucial. Interpretability in AI-driven healthcare models fosters trust and reliability by allowing practitioners to comprehend and verify model outputs. It ensures ethical and legal accountability, supports clinical decision-making, and helps identify biases and errors, enhancing fairness and accuracy.

Efforts to improve the interpretability of DL models in MI are ongoing, with researchers developing techniques to clarify model decision-making processes [17,18,19]. This paper contributes to this field through several key aspects:

  1. Comprehensive Survey: We offer a thorough survey of innovative approaches for interpreting and visualizing DL models in MI, including a broad range of techniques aimed at enhancing model transparency and trust.

  2. Methodological Review: We provide an in-depth review of current methodologies, focusing on post-hoc visualization techniques such as perturbation-based, gradient-based, decomposition-based, trainable attention (TA)-based methods, and vision transformers (ViT). We evaluate each method’s effectiveness and applicability in MI.

  3. Clinical Relevance: We emphasize the importance of interpretability techniques in clinical settings, demonstrating how they lead to more reliable and actionable insights from DL models, thus supporting better decision-making in healthcare.

  4. Future Directions: We outline future research directions in model interpretability and visualization, highlighting the need for more robust and scalable techniques that can handle the complexity of DL models while ensuring practical utility in medical applications.

Our survey covers innovative approaches for interpreting and visualizing DL models in MI. As illustrated in Figure 1, we explore various DL models and techniques, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), transformer-based architectures, autoencoders, Local Interpretable Model-agnostic Explanations (LIME), Gradient-Class Activation Mapping (Grad-CAM), Layer-Wise Relevance Propagation (LRP), attention-based methods, and vision transformers (ViTs).

Figure 1. Overview of Deep Learning Models and Techniques in Medical Imaging: This diagram illustrates the main categories of deep learning models used in medical imaging, including model types, understanding model structure and functionality, and interpretation and visualization techniques. It highlights specific methods such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), transformer-based architectures, autoencoders, Local Interpretable Model-agnostic Explanations (LIME), Integrated Gradients (IG), Gradient-Class Activation Mapping (Grad-CAM), Layer-Wise Relevance Propagation (LRP), attention mechanisms, and Vision Transformers.

This review distinguishes itself by expanding the definition of medical imaging to include high-resolution histopathology and digital pathology, thus broadening the scope of traditional mesoscopic imaging techniques. This approach not only highlights recent advancements but also sets the stage for future research in interpretability and visualization within the expanding field of medical imaging.

The rest of this paper is organized as follows. Section 2 describes the research methodology. Section 3 focuses on interpreting model design and workflow. Section 4 reviews DL models in MI. Section 5 presents an overview of post-hoc interpretation and visualization techniques. Section 6 compares different interpretation methods. Section 7 discusses current challenges and future directions, and Section 8 concludes the work.

2. Research Methodology

Several comprehensive reviews of explainable AI in medical image analysis have been published [20,21,22,23,24]. While these reviews cover a broad range of topics, some critical areas, such as research on trainable attention (TA)-based methods, vision transformers, and their applications, have been overlooked. Our review aims to fill this gap by providing an extensive overview of various domains within medical imaging, addressing key aspects such as Domain, Task, Modality, Performance, and Technique.

This research employs the Systematic Literature Review (SLR) method, which involves several stages. The research questions guiding this study are as follows:

  • What innovative methods exist for interpreting and visualizing deep learning models in medical imaging?

  • How effective are post-hoc visualization techniques (perturbation-based, gradient-based, decomposition-based, TA-based, and ViT) in improving model transparency?

  • What is the clinical relevance of interpretability techniques for actionable insights from deep learning models in healthcare?

  • What are the future research directions for model interpretability and visualization in medical applications?

The survey examines over 400 recent papers on explainable AI (XAI) in medical image analysis. Relevant contributions were identified using keywords such as “deep learning”, “convolutional neural networks”, “medical imaging”, “surveys”, “interpretation”, “visualization”, and “review”. Sources included arXiv, bioRxiv, medRxiv, Google Scholar, Scopus, and ScienceDirect, with searches focused on titles. Studies without results on medical image data or using only standard neural networks with manually designed features were excluded. In cases of similar work, the most significant publications were selected.

The findings are presented comprehensively, including a detailed description of the research methodology to enable replication. The literature search results, relevant articles, and their quality evaluations are summarized in overview tables. Drawing on experience in applying XAI techniques to medical image analysis, we also discuss ongoing challenges and future research directions.

3. Interpreting Model Design and Workflow

Interpreting model design and workflow involves examining the hidden layers of convolutional neural networks (CNNs). This can be achieved through methods such as:

  1. Autoencoders for Learning Latent Representations
  2. Visualizing High-Dimensional Latent Data in a Two-Dimensional Space
  3. Visualizing Filters and Activations in Feature Maps

3.1. Autoencoders for Learning Latent Representations

Autoencoders (AE) are DL models for unsupervised feature learning [25], with applications in anomaly detection [26], image compression [27], and representation learning [28]. They consist of an encoder creating latent representations and a decoder reconstructing images. Variants include variational autoencoders (VAE) and adversarial autoencoders (AAE). In medical imaging, AEs detect abnormalities by comparing input images with reconstructions and highlighting high reconstruction loss areas. For instance, VAE has reconstructed OCT retinal images to detect pathologies [29], and AAE has localized brain lesions in MRI images [30]. Convolutional AEs have detected nuclei in histopathology images by combining learned representations with thresholding [31].
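To make the reconstruction-based workflow concrete, the following minimal PyTorch sketch (illustrative only; the architecture, input size, and threshold are our own assumptions rather than details taken from the cited studies) shows a small convolutional AE and how a per-pixel anomaly heatmap can be derived from the reconstruction error.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder; layer sizes are illustrative, not from the cited studies."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32 (latent representation)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 32 -> 64
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 64 -> 128
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_heatmap(model, image, threshold=0.1):
    """Per-pixel reconstruction error; regions above the (assumed) threshold are flagged."""
    model.eval()
    with torch.no_grad():
        reconstruction = model(image)
    error = (image - reconstruction).abs()
    return error, (error > threshold).float()

# Usage with a dummy 128x128 grayscale image standing in for, e.g., an OCT slice.
# In practice the model would first be trained to reconstruct normal images only.
model = ConvAutoencoder()
image = torch.rand(1, 1, 128, 128)
error_map, anomaly_mask = anomaly_heatmap(model, image)
```

High-error regions in such a map correspond to structures that the model could not reconstruct from its learned notion of normal anatomy.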

3.2. Visualizing High-Dimensional Latent Data in a Two-Dimensional Space

CNNs produce high-dimensional features, making visualization challenging. Dimensionality reduction techniques such as Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (tSNE) simplify these data. PCA performs linear transformations, while tSNE [32] uses nonlinear methods to map high-dimensional data to lower dimensions. tSNE is effective for visualizing patterns and clusters, for example in abdominal ultrasound and histopathology image classification. The constraint-based embedding technique [33], which uses a divide-and-conquer algorithm to preserve k-nearest neighbors in 2D projections, has been used to assess deep belief networks trained to separate brain MRI images of schizophrenic and healthy patients, though both tSNE and constraint-based embedding struggled to separate the raw data.
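As a rough illustration of this workflow, the sketch below (scikit-learn; the random feature matrix and three-class labels are placeholders for real CNN embeddings) reduces penultimate-layer features with PCA before projecting them to two dimensions with t-SNE.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Placeholder for penultimate-layer CNN features: 500 images x 512 dimensions.
features = np.random.rand(500, 512)
labels = np.random.randint(0, 3, size=500)   # e.g., three tissue classes (assumed)

# PCA to ~50 dimensions first (a common denoising step), then nonlinear t-SNE to 2D.
reduced = PCA(n_components=50).fit_transform(features)
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(reduced)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="viridis", s=8)
plt.title("t-SNE projection of CNN latent features (illustrative)")
plt.show()
```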

3.3. Visualizing Filters and Activations in Feature Maps

A convolutional block extracts local features from input images through convolution filters, ReLU or GELU activations, and pooling layers. Filter visualization reveals a CNN’s feature extraction capabilities, with initial layers capturing basic elements and later layers capturing intricate patterns. In medical imaging, filter visualization has been used to compare filters in Computer-Aided Detection (CAD) systems for 3D Computed Tomography (CT) images [9]. Larger filters offer more insights but require more memory. Feature map visualization, representing layer outputs after activation, highlights active features and can indicate training issues. It is used in tasks like skin lesion classification [34], fetal facial plane recognition in ultrasound [35], brain lesion segmentation in MRI [6], and Alzheimer’s diagnosis with PET/MRI [36].
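A minimal sketch of both visualizations is given below, assuming PyTorch and torchvision are available; the ResNet-18 backbone and the random input are stand-ins for a trained medical imaging model and a real scan.

```python
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet18(weights=None)   # stand-in; in practice, a trained diagnostic CNN
model.eval()

# 1) Filter visualization: weights of the first convolutional layer (64 x 3 x 7 x 7).
filters = model.conv1.weight.detach()
fig, axes = plt.subplots(4, 8, figsize=(8, 4))
for ax, f in zip(axes.ravel(), filters[:32]):
    ax.imshow(f.mean(dim=0), cmap="gray")   # average over input channels for display
    ax.axis("off")

# 2) Feature-map visualization: activations of the same layer for one input image.
activations = {}
model.conv1.register_forward_hook(lambda m, i, o: activations.update(conv1=o.detach()))
image = torch.rand(1, 3, 224, 224)          # dummy input standing in for a scan
model(image)
plt.figure()
plt.imshow(activations["conv1"][0, 0], cmap="viridis")
plt.title("First feature map after conv1")
plt.show()
```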

4. Deep Learning Models in Medical Imaging

Convolutional neural networks (CNNs) are essential in DL for MI. CNNs are adept at processing X-rays, Computed Tomography (CT) scans, and Magnetic Resonance Imaging (MRI) through their hierarchical feature representations. Studies [5,6,7] have demonstrated their effectiveness in various medical image analysis tasks. For instance, in segmentation tasks, CNNs excel at delineating organ boundaries or identifying anomalies within medical images, providing valuable insights for accurate diagnosis and treatment planning. Recurrent Neural Networks (RNNs) excel in the temporal modeling of dynamic imaging sequences, such as functional MRI or video-based imaging, by capturing temporal patterns [37]. Generative Adversarial Networks (GANs) are valuable for image synthesis and data augmentation, and for anomaly detection by learning the distribution of normal patterns [38,39,40].

Transformer-Based Architectures

Transformer-based architectures, including bidirectional encoder representations from transformers (BERT) [41,42], and generative pre-trained transformer (GPT) [43], are emerging for tasks like disease prediction, image reconstruction, and capturing complex dependencies in medical images.

5. Interpretation and Visualization Techniques

In recent years, numerous explainable artificial intelligence (XAI) techniques have been developed to enhance the interpretability of DL models, particularly in MI. These techniques can be broadly categorized into Perturbation-Based, Gradient-Based, Decomposition, and Attention methods.

The timeline presented in Figure 2 illustrates the development of these XAI techniques over the years. The points on the timeline are color-coordinated by their respective categories, including Dimensionality Reduction, Feature Visualization, Class Activation Mapping, Saliency Mapping, Prediction Difference Analysis, Grad-CAM, Integrated Gradient, Guided Backpropagation, Occlusion Sensitivity, Trainable Attention, Guided Grad-CAM, Layerwise Relevance Propagation, Deconvolution, LIME, Backpropagation, Autoencoder, Meaningful Perturbation, SHAP, and Attention. Notably, the Gradient-Based category is the most densely populated, with CAM and Grad-CAM being among the most popular entries. The timeline also reveals a higher density of developments between 2017 and 2020. Table 1 compares the gradient-based, perturbation-based, decomposition-based (LRP), and trainable attention approaches in terms of model dependency, access to model parameters, and computational efficiency.

Figure 2. Timeline of XAI technique development in medical imaging applications.

Table 1.

Comparison of visualization technique categories in terms of model dependency, access to model parameters, and computational efficiency.

| Attributes | Perturbation | Gradient | Decomposition | Trainable Attention Models |
| Model Dependency | Model-agnostic | Differentiable | Model-specific | Model-specific |
| Access to Model Parameters | No | Yes | Yes | Yes |
| Computational Efficiency | Slower | Faster | Varies | Varies |

5.1. Perturbation-Based Methods

Perturbation-based methods evaluate how input changes affect model outputs to determine feature importance. By altering specific image regions, these methods identify areas that significantly influence predictions, typically visualized with heatmaps. As highlighted in recent surveys, perturbation-based XAI methods are crucial for exploring CNN models by systematically altering inputs and observing output changes, which is vital for understanding models used in safety-critical areas where transparency is essential [44]. Techniques like Integrated Gradients (IG), Local Interpretable Model-agnostic Explanations (LIME), and Occlusion Sensitivity (OS) are used in various domains, including breast cancer detection, eye disease classification, and brain MRI analysis (see Table 2). The table categorizes studies by domain, task, modality, performance, and technique. These methods also test model sensitivity to input variations, ensuring robust interpretations.

Table 2.

Overview of Various Studies Using Perturbation-Based Methods in Medical Imaging.

| Domain | Task | Modality | Performance | Technique | Citation |
| Breast | Classification | MRI | N/A | IG | [45] |
| Eye | Classification | DR | Accuracy: 95.5% | IG | [46] |
| Multiple | Classification | DR | N/A | IG | [47] |
| Chest | Detection | X-ray | Accuracy: 94.9%, AUC: 97.4% | LIME | [48] |
| Gastrointestinal | Classification | Endoscopy | Accuracy: 97.9% | LIME | [49] |
| Brain | Segmentation, Detection | MRI | ICC: 93.0% | OS | [50] |
| Brain | Classification | MRI | Accuracy: 85.0% | OS | [51] |
| Breast | Detection, Classification | Histology | Accuracy: 55.0% | OS | [52] |
| Eye, Chest | Classification, Detection | OCT, X-ray | Eye Accuracy: 94.7%, Chest Accuracy: 92.8% | OS | [53] |
| Chest | Classification | X-ray | AUC: 82.0% | OS, IG, LIME | [54] |

5.1.1. Occlusion

Zeiler and Fergus [55] introduced an occlusion method to assess the impact on model output when parts of an image are obstructed. Kermany et al. [53] utilized this method for interpreting optical coherence tomography images to diagnose retinal pathologies. A major limitation of occlusion is its high computational demand, as it requires one inference pass per occluded image region, a cost that grows with image resolution and the desired heatmap resolution.
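A minimal occlusion-sensitivity sketch is shown below (PyTorch; the patch size, stride, fill value, and stand-in classifier are assumptions). It records how much the target-class probability drops when each image region is masked.

```python
import torch

def occlusion_sensitivity(model, image, target_class, patch=16, stride=16, fill=0.0):
    """Slide an occluding patch over the image and record the drop in the class score."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
    _, _, H, W = image.shape
    heatmap = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, :, y:y + patch, x:x + patch] = fill
            with torch.no_grad():
                score = torch.softmax(model(occluded), dim=1)[0, target_class].item()
            heatmap[i, j] = base - score   # a large drop marks a region important for the class
    return heatmap

# Usage with a stand-in classifier; replace with a trained diagnostic model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
heatmap = occlusion_sensitivity(model, torch.rand(1, 3, 64, 64), target_class=1)
```

The nested loop makes the cost explicit: one forward pass per occluded position, which is exactly the computational burden noted above.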

5.1.2. Local Interpretable Model-Agnostic Explanations (LIME)

Recent studies emphasize the growing importance of local interpretation methods, such as LIME, which offer clear interpretability with lower computational complexity, making them suitable for real-time applications [56]. Ribeiro et al. [57] introduced local interpretable model-agnostic explanations (LIME), which perturbs superpixels (groups of connected pixels with similar intensities) to identify those most responsible for a prediction. Seah et al. [54] applied LIME to identify congestive heart failure in chest radiographs. LIME offers an advantage over occlusion by preserving the context of the altered image regions, which are perturbed rather than completely blocked out, as shown in Figure 3.

Figure 3. Example uses of perturbation-based attribution methods for model interpretability, comparing several interpretation approaches for identifying congestive heart failure on chest X-rays (Seah et al., 2018) [54].
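The sketch below illustrates how LIME is typically called through the `lime` Python package (assumed to be installed); the random image and the stand-in `classifier_fn` are placeholders for a real radiograph and a wrapped deep model.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    """Stand-in prediction function: (N, H, W, 3) array -> (N, n_classes) probabilities.
    Replace with preprocessing + softmax outputs of the trained model."""
    rng = np.random.default_rng(0)
    p = rng.random((len(images), 2))
    return p / p.sum(axis=1, keepdims=True)

image = np.random.rand(128, 128, 3)          # placeholder for a chest radiograph
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn, top_labels=1,
                                         hide_color=0, num_samples=500)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                           num_features=5, hide_rest=False)
overlay = mark_boundaries(img, mask)         # superpixels supporting the prediction
```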

5.1.3. Integrated Gradients

Sundararajan et al. [47] introduced integrated gradients (IG) to measure pixel importance by computing gradients across images interpolated between the original and a baseline image with all zero values. Sayres et al. [46] found that model-predicted grades and heatmaps improved the accuracy of diabetic retinopathy grading by readers.
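A minimal sketch of the integration is given below (PyTorch; the number of steps and the all-zero baseline are typical choices rather than values taken from [47]).

```python
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate IG by averaging gradients along a straight path from baseline to input."""
    model.eval()
    if baseline is None:
        baseline = torch.zeros_like(image)      # all-zero (black) baseline
    grads = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        interpolated = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(interpolated)[0, target_class]
        grads.append(torch.autograd.grad(score, interpolated)[0])
    avg_grad = torch.stack(grads).mean(dim=0)
    return (image - baseline) * avg_grad        # attribution with the same shape as the input
```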

5.2. Gradient-Based Methods

Backpropagation, used for weight adjustment in neural network training, is also employed in model interpretation methods to compute gradients. Unlike training, these methods do not alter weights but use gradients to highlight important image areas. The development of specialized deep learning networks like FISH-Net, which optimizes detection through innovative techniques such as rotated Gaussian heatmaps and noise refinement, highlights the potential of deep learning in achieving high precision and sensitivity in medical imaging tasks [58]. Figure 4 shows examples of gradient-based attribution methods for model interpretability.

Figure 4. Examples of gradient-based attribution methods for model interpretability. (A) Class maximization visualization of malignant and benign breast masses on mammograms [13]. (B) Integrated gradients visualizing evidence of diabetic retinopathy on retinal fundus images [46]. (C) Visualization of malignant and benign breast masses [13]. (D) Guided backpropagation applied to ultrasound images for fetal heartbeat localization [8]. (E) Differentiation between benign and malignant breast masses in mammograms [10]. (F) Grad-CAM visualizations identifying discriminative regions in magnetoencephalography images for detecting eye-blink artifacts [59].

5.2.1. Saliency Maps

Saliency maps, introduced by Simonyan et al. [60], use gradient information to explain how deep convolutional networks classify images. They are used in two forms: class maximization and image-specific class saliency maps. Class maximization generates an image I that maximizes the activation for a class c by solving:

$$\arg\max_{I}\; S_c(I) - \lambda \lVert I \rVert_2^2$$

Yi et al. [13] applied this to visualize malignant and benign breast masses. Image-specific class saliency maps create heatmaps showing each pixel’s significance in classification, computed as the gradient of the class score with respect to the input:

$$\mathrm{Sal}_c(x) = \frac{\partial F_c(x)}{\partial x}$$

Dubost et al. [61] used these maps in a weakly supervised method for segmenting brain MRI structures. Saliency maps have also been utilized in diagnosing heart diseases in chest X-rays [12], classifying breast masses in mammography [62] with accuracy ranging from 85% to 92.9%, and identifying pediatric elbow fractures in X-rays [63] with an accuracy of 88.0% and area under the curve (AUC) of 95.0%. Moreover, iterative saliency maps [64] enhance less obvious image regions by generating a saliency map, inpainting prominent areas, and iterating the process until the image classification changes or a limit is reached. This approach, applied to retinal fundus images for diabetic retinopathy grading, demonstrated higher sensitivity compared to traditional saliency maps. However, saliency maps have limitations: they do not distinguish whether a pixel supports or contradicts a class, and their effectiveness diminishes in binary classification.
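A minimal image-specific saliency map, following the gradient formula above, can be computed as in the PyTorch sketch below (the channel-wise maximum used for display is an assumption).

```python
import torch

def saliency_map(model, image, target_class):
    """|dF_c(x)/dx|, reduced over colour channels to give one heatmap per image."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=1)   # (N, H, W) heatmap
```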

5.2.2. Guided Backpropagation

Guided backpropagation, introduced by Springenberg et al. [65], builds on the saliency map approach by Simonyan et al. [60] and the deconvnet concept by Zeiler and Fergus [55]. It improves gradient backpropagation through ReLU layers, where negative activations are set to zero during the forward pass. Guided backpropagation discards gradients where either the forward activation or the backward gradient is negative, producing heatmaps that highlight pixels positively contributing to the classification.
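In frameworks with automatic differentiation this amounts to modifying the backward pass of every ReLU, as in the sketch below (it assumes a PyTorch model built from non-inplace `nn.ReLU` modules; this is an illustrative implementation, not the original authors’ code).

```python
import torch
import torch.nn as nn

def add_guided_relu_hooks(model):
    """Zero out backward gradients wherever they are negative; combined with ReLU's own
    forward masking, only positively contributing pixels survive in the final heatmap."""
    hooks = []
    for module in model.modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_full_backward_hook(
                lambda m, grad_in, grad_out: (torch.clamp(grad_in[0], min=0.0),)))
    return hooks   # keep the handles so the hooks can be removed after visualization
```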

In 2017, Gao and Noble [8] applied guided backpropagation to ultrasound images for fetal heartbeat localization. They found that the heatmaps remained consistent despite variations in the heart’s appearance, size, position, and contrast. Conversely, Böhle et al. [66] discovered that guided backpropagation was less effective for visualizing Alzheimer’s disease in brain MRIs compared to other methods. Similarly, Dubost et al. [67] achieved an intraclass correlation coefficient (ICC) of 93.0% in brain MRI detection using guided backpropagation. Wang et al. [68] obtained an average accuracy of 93.7% in brain MRI classification with this technique. Gessert et al. [69] reported an accuracy of 99.0% in cardiovascular classification using OCT images. Wickstrom et al. [70] achieved a 94.9% accuracy in gastrointestinal segmentation using endoscopy. Lastly, Jamaludin et al. [71] reported an accuracy of 82.5% in musculoskeletal spine classification using MRI images with guided backpropagation.

5.2.3. Class Activation Maps (CAM)

Class Activation Mapping (CAM), introduced by Zhou et al. [72], visualizes regions of an image most influential in a neural network’s classification decision. CAM is computed as a weighted sum of feature maps from the final convolutional layer, using weights from the fully connected layer following global average pooling [73]. For a specific class c and image x:

$$\mathrm{CAM}_c(x) = \sum_k w_k^c\, f_k(x)$$

This heatmap highlights regions most relevant for classification. CAM has been applied in various medical imaging applications, such as segmenting lung nodules in thoracic CT scans [74] and differentiating between benign and malignant breast masses in mammograms [10]. However, CAM’s effectiveness depends on the network architecture, requiring a global average pooling (GAP) layer followed by a fully connected layer. While Zhou et al. [75] originally used GAP, Oquab et al. [76] demonstrated that global max pooling and log-sum-exponential pooling can also be used, with the latter yielding finer localization. Table 3 summarizes CAM’s effectiveness across different medical imaging tasks, covering domains, tasks, modalities, and performance metrics.
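Given the feature maps of the final convolutional layer and the weights of the fully connected layer after GAP, the weighted sum above can be computed directly, as in this illustrative sketch (the tensor shapes are assumptions).

```python
import torch

def class_activation_map(feature_maps, fc_weights, target_class):
    """CAM_c = sum_k w_k^c * f_k(x).
    feature_maps: (K, H, W) from the last conv layer; fc_weights: (n_classes, K)."""
    weights = fc_weights[target_class]                        # w_k^c
    cam = torch.einsum("k,khw->hw", weights, feature_maps)    # weighted sum over channels
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise for display
    return cam                                                # upsample to input size for overlay
```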

Table 3.

Performance metrics of various Medical Imaging tasks across different modalities using CAM.

Domain-Task Modality Performance Citation
Bladder Classification Histology Mean Accuracy: 69.9% [77]
Brain Classification MRI Accuracy: 86.7% [78,79]
Brain Detection MRI, PET, CT Accuracy: 90.2–95.3%, F1: 91.6–94.3% [80,81]
Breast Classification X-ray, Ultrasound, MRI Accuracy: 83.0–89.0% [82,83,84,85]
Breast Detection X-ray, Ultrasound Mean AUC: 81.0%, AUC: Mt-Net 98.0%, Sn-Net 92.8%, Accuracy: 92.5% [86,87,88,89]
Chest Classification X-ray, CT Accuracy: 97.8%, Average AUC: 75.5–96.0% [90,91,92,93,94,95,96,97]
Chest Segmentation X-ray Accuracy: 95.8% [98]
Eye Classification Fundus Photography, OCT, CT F1: 95.0%, Precision: 93.0%, AUC: 88.0–99.0% [99,100,101,102,103]
Eye Detection Fundus Photography Accuracy: 73.2–99.1%, AUC: 99.0% [104,105,106,107,108]
Gastrointestinal (GI) Classification Endoscopy Mean Accuracy: 93.2% [109,110,111,112]
Liver Classification, Segmentation Histology Mean Accuracy: 87.5% [113,114]
Musculoskeletal Classification MRI, X-ray Accuracy: 86.0%, AUC: 85.3% [115,116]
Skin Classification, Segmentation Dermatoscopy Accuracy: 83.6%, F1: 82.7% [117,118]
Skull Classification X-ray AUC: 88.0–93.0% [119]
Thyroid Classification Ultrasound Accuracy: 87.3%, AUC: 90.1% [120]
Lymph Node Classification, Detection Histology Accuracy: 91.9%, AUC: 97.0% [121]
Various Classification CT, MRI, Ultrasound, X-ray, Fundoscopy F1: 98.0%, Accuracy: 98.0% [122,123]

5.2.4. Grad-CAM

Grad-CAM, an extension of CAM by Selvaraju et al. [124], broadens its application to any network architecture and output, including image segmentation and captioning. It bypasses the global pooling layer and weights feature maps directly with gradients calculated via backpropagation from a target class. The gradients of the output for class c with respect to the feature maps A^k are averaged globally, the resulting weights are multiplied by A^k, and the sum is passed through a ReLU activation to discard negative values:

$$\text{Grad-CAM}_c(x) = \mathrm{ReLU}\left(\sum_k \left(\frac{1}{Z}\sum_{i}\sum_{j} \frac{\partial y^c}{\partial A^k_{ij}}\right) A^k\right)$$

Garg et al. employed Grad-CAM visualizations to identify discriminative regions of magnetoencephalography images in the task of detecting eye-blink artifacts [59]. The authors found that the regions of the eye highlighted by Grad-CAM are the same regions that human experts rely on. Table 4 summarizes the effectiveness of Grad-CAM across various medical imaging tasks, highlighting domains, tasks, modalities, and performance metrics.
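The sketch below shows one common way to implement Grad-CAM with forward and backward hooks in PyTorch; the hook-based bookkeeping and final normalization are our own illustrative choices.

```python
import torch

def grad_cam(model, target_layer, image, target_class):
    """Weight the target layer's feature maps by the spatially averaged gradients
    of the class score, then apply ReLU (see the formula above)."""
    stash = {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: stash.update(act=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: stash.update(grad=go[0]))
    model.eval()
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = stash["grad"].mean(dim=(2, 3), keepdim=True)   # global-average gradients (1/Z sum)
    cam = torch.relu((weights * stash["act"]).sum(dim=1))    # coarse (N, H', W') heatmap
    return cam / (cam.amax() + 1e-8)                         # normalise; upsample for overlay
```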

Table 4.

Performance metrics of various Medical Imaging tasks across different modalities Using Grad-CAM.

Domain-Task Modality Performance Citation
Brain Classification MRI 81.6–94.2% accuracy [125,126,127,128,129,130]
Brain Detection Ultrasound 94.2% accuracy [131]
Breast Classification MRI 91.0% AUC [132]
Breast Segmentation Histology 95.6% accuracy [133]
Cardiovascular CT, X-ray, Ultrasound 81.2–92.7% accuracy, AUC (81.0–96.3%) [134,135,136,137]
Chest Classification X-ray, CT, Histology 72.0–99.9% accuracy, AUC (70.0–97.9%) [138,139,140,141,142,143,144,145,146,147,148,149]
Dental Classification X-ray 85.4% accuracy, 92.5% AUC [150]
Eye Classification Fundus, OCT 81–97.5% accuracy, AUC (48.1–99.2%) [151,152,153,154,155]
Gastrointestinal (GI) Classification CT, Endoscopy, Histology, MRI 86.9–93.7% accuracy [156,157,158,159,160]
Musculoskeletal X-ray 74.8–96.3% accuracy [161,162,163,164]
Thyroid Classification CT 82.8% accuracy, 88.4% AUC [165]
Whole-Body Scans MRI R2 value of 83.0% [166]
Liver segmentation CT scans 96% accuracy LiTS [167]
Brain Tumor Detection MRI images 98.52% accuracy [168]
Breast Cancer DISH and FISH images 97% accuracy [169]

5.3. Decomposition-Based Methods

Decomposition-based techniques for model interpretation focus on breaking down a model’s prediction into a heatmap showing each pixel’s contribution to the final decision. These techniques, such as LRP, have been widely applied across different domains.

Layer-Wise Relevance Propagation (LRP)

Layer-Wise Relevance Propagation (LRP), introduced by Bach et al. in 2015 [170], offers an alternative to gradient-based techniques like saliency mapping, guided backpropagation, and Grad-CAM. Instead of relying on gradients, LRP distributes the output of the final layer back through the network to calculate relevance scores for each neuron. This process is repeated recursively from the final layer to the input layer, generating a relevancy heatmap that can be overlaid on the input image. Further properties of LRP and details of its theoretical basis are given in (Montavon et al., 2017 [171]), and a comparison of LRP to other interpretation methods can be found in ([172,173,174]).

The relevance score $R_{i\leftarrow k}^{(l,l+1)}$ that a neuron i in layer l receives from a neuron k in layer l+1 is defined as:

$$R_{i\leftarrow k}^{(l,l+1)} = R_k^{(l+1)}\,\frac{a_i w_{ik}}{\sum_h a_h w_{hk}}$$

The overall relevance score for neuron i in layer l is as follows:

$$R_i^{(l)} = \sum_k R_{i\leftarrow k}^{(l,l+1)}$$

LRP has been applied in MI, such as diagnosing multiple sclerosis (MS) and Alzheimer’s disease (AD) using MRI scans. For MS, LRP heatmaps highlighted hyperintense lesions and affected brain areas [175] as shown in Figure 5, while for AD, they emphasized the hippocampal volume, a critical region for diagnosis [66]. LRP has been found to provide clearer distinctions compared to gradient-based methods and has been used in frameworks like DeepLight for linking brain regions with cognitive states [176].
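For a single fully connected layer, the relevance redistribution rule above can be written as in the sketch below (an epsilon stabilizer is added, a common LRP variant; the toy tensors are placeholders rather than values from any cited study).

```python
import torch

def lrp_dense_layer(activations, weights, relevance_next, eps=1e-6):
    """R_i = sum_k (a_i * w_ik) / (sum_h a_h * w_hk) * R_k, with an epsilon stabiliser.
    activations: (n_in,), weights: (n_in, n_out), relevance_next: (n_out,)."""
    z = activations @ weights                       # z_k = sum_h a_h * w_hk
    s = relevance_next / (z + eps * torch.sign(z) + 1e-12)
    return activations * (weights @ s)              # R_i^{(l)} for every neuron in layer l

# Toy usage: propagate relevance backwards through one 4 -> 2 layer.
a = torch.tensor([0.5, 1.0, 0.0, 2.0])
W = torch.randn(4, 2)
R_out = torch.tensor([1.0, 0.0])                    # relevance assigned to the output neurons
R_in = lrp_dense_layer(a, W, R_out)
```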

Figure 5. Example uses of decomposition-based and attention-based methods for model interpretability: layer-wise relevance propagation for diagnosing multiple sclerosis on brain MRI (Eitel et al., 2019) [175]; the Attention U-Net for organ segmentation in abdominal CT scans [177]; and the areas upon which an explainable Vision Transformer model correctly focuses its predictions on the test images [178].

5.4. Trainable Attention Models

Trainable Attention (TA) Mechanisms provide a dynamic approach to model interpretation by integrating attention modules into neural networks. Introduced for CNNs by Jetley et al. [179], these soft attention modules generate attention maps that highlight important image parts. They compute a compatibility score $c_s$ between each local feature vector $l_s$ and the global feature $g$, using a dot product or a learned vector, and normalize the scores with a softmax:

$$a_s = \frac{\exp(c_s)}{\sum_{i=1}^{n}\exp(c_i)}$$

The output $g_a$ combines the local features according to the attention map weights:

$$g_a = \sum_{s=1}^{n} a_s\, l_s$$

This method enhances signals from compatible features while reducing those from less compatible ones. Applications of attention mechanisms in medical imaging include the Attention U-Net for organ segmentation in abdominal CT scans [177], fetal ultrasound classification, and breast mass segmentation in mammograms [180]. Additionally, they have improved melanoma lesion classification [181] and osteoarthritis grading in knee X-rays [182]. In cancer diagnostics, the CACNET method integrates attention mechanisms with Mask R-CNN to enhance nuclear segmentation and reduce noise interference, significantly improving the accuracy of CAC identification [183]. Attention-weighted RL (AWRL) models combine self-attention mechanisms with value function approximation to effectively filter out irrelevant features and enhance decision-making processes in complex tasks [184]. The Trainable Feature Matching Attention Network (TFMAN), incorporating non-local and channel attention, exemplifies how trainable attention mechanisms can augment representation capabilities in CNNs for image super-resolution [185]. Though optimal configurations are application-specific, attention mechanisms are valued for their interpretability and performance enhancement.
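A compact soft-attention module in the spirit of this formulation is sketched below (PyTorch; the additive compatibility via a learned vector and the tensor shapes are assumptions made for illustration).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Compatibility c_s between local features l_s and global feature g, softmax-normalised
    into weights a_s, producing the attention-weighted descriptor g_a = sum_s a_s * l_s."""
    def __init__(self, channels):
        super().__init__()
        self.u = nn.Linear(channels, 1, bias=False)     # learned compatibility vector

    def forward(self, local_feats, global_feat):
        # local_feats: (N, C, H, W); global_feat: (N, C)
        N, C, H, W = local_feats.shape
        l = local_feats.flatten(2).transpose(1, 2)                 # (N, H*W, C) local vectors l_s
        c = self.u(l + global_feat.unsqueeze(1)).squeeze(-1)       # (N, H*W) compatibility scores
        a = F.softmax(c, dim=1)                                    # attention weights a_s
        g_a = torch.bmm(a.unsqueeze(1), l).squeeze(1)              # (N, C) weighted local features
        return g_a, a.view(N, H, W)                                # descriptor + visualisable map

# Usage with dummy tensors
attention = SoftAttention(channels=64)
g_a, attention_map = attention(torch.rand(2, 64, 16, 16), torch.rand(2, 64))
```

The returned attention map can be upsampled and overlaid on the input image, which is what makes these modules interpretable as well as performance-enhancing.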

Table 5 provides an overview of various studies using TA models, highlighting the domains, tasks, modalities including MRI and histology, and performance metrics such as accuracy and F1-score, demonstrating the broad applicability and effectiveness of TA models in MI.

Table 5.

Overview of Studies Using Trainable Attention Models in Medical Imaging.

Domain Task Modality Performance Citation
Brain Detection MRI Accuracy: 76.5% [186]
Brain Detection, Classification MRI CC: 61.3–64.8%, RMSE: 1.503–5.701 [187]
Breast Classification X-ray Accuracy: 85.0%, AUC: 89.0% [188]
Breast Segmentation Mammo Accuracy: 78.4%, F1: 82.2% [189]
Breast Classification Histology Accuracy: 90.3, AUC: 98.4% [190]
Chest Detection X-ray Accuracy: 73.0–84.0% [191]
Chest Classification CT Accuracy: 87.6% [192]
Chest Segmentation MRI Accuracy: 91.3% [193]
Eye Detection Fundus Photography Accuracy: 96.2%, AUC: 98.3% [180]
Gastrointestinal (GI) Classification Histology Accuracy: 88.4% [194]
Skin Dermatoscopy [195]
Skin Classification Dermatoscopy Average Precision: 67.2%, AUC: 88.3% [181]
Female Reproductive System, Stomach Classification, Segmentation CT, Fetal Ultrasounds Ultrasound Classification: Accuracy: 97.7–98.0%, F1: 92.2–93.3%, CT Segmentation: Recall: 75.1–83.5% [177]
Skeletal (Joint) Classification X-ray Accuracy: 64.3% [182]

5.5. Vision Transformers

Vision Transformers (ViTs) have emerged as a prominent alternative to convolutional neural networks (CNNs) in medical imaging. Unlike CNNs, which use local receptive fields to capture spatial hierarchies, ViTs employ self-attention to model long-range dependencies and global context [42,196,197]. By partitioning images into fixed-size patches treated as a sequence, ViTs utilize transformer encoder layers, effectively capturing complex anatomical structures and pathological patterns.

ViTs have been employed to automate the Tanner-Whitehouse 3 (TW3) algorithm for bone age assessment, achieving clinically interpretable results with predictive accuracy comparable to that of experienced orthopedic surgeons [198]. ViTs have also demonstrated improved performance in diagnosing conditions such as tuberculosis, pneumothorax, and COVID-19 by leveraging self-supervision and self-training through knowledge distillation [199]. In the domain of medical image registration, ViTs have been shown to enhance the accuracy of volumetric image alignment significantly, outperforming traditional methods by capturing long-range spatial dependencies [200]. ViTs have also been applied to 3D cryogenic electron tomography (cryoET) data, with CryoViT outperforming CNNs in segmenting complex organelles like mitochondria, particularly when training data are limited [201].

ViTs have shown superior performance in segmentation, classification, and detection tasks, achieving high accuracy in segmenting tumors and organs in MRI and CT scans, as reflected in Dice scores. Their interpretability is enhanced through attention maps, gradient-based methods, and occlusion sensitivity, which aid in visualizing model predictions. These advancements highlight ViTs’ potential to improve diagnostic accuracy and provide deeper insights into medical image analysis, as summarized in Table 6. The areas upon which the model correctly focuses its predictions on the test images are presented in Figure 5; the regions of focus identified by the ViT model exhibit significant overlap with the areas of white blood cells [178].
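One widely used way to turn ViT self-attention into a patch-level heatmap is attention rollout; the sketch below assumes access to the per-layer attention matrices of a standard ViT (one class token plus 14 x 14 patches) and uses random matrices as placeholders.

```python
import torch

def attention_rollout(attentions):
    """Multiply per-layer attention maps (head-averaged, with residual connections added)
    to estimate how strongly each image patch influences the [CLS] token."""
    tokens = attentions[0].shape[-1]
    result = torch.eye(tokens)
    for attn in attentions:
        fused = attn.mean(dim=0) + torch.eye(tokens)     # average heads, add residual
        fused = fused / fused.sum(dim=-1, keepdim=True)  # re-normalise rows
        result = fused @ result
    cls_to_patches = result[0, 1:]                       # CLS attention to image patches
    side = int(cls_to_patches.numel() ** 0.5)
    return cls_to_patches.reshape(side, side)            # e.g., 14 x 14 map for 224/16 patches

# Placeholder: 12 layers, 12 heads, 197 tokens (1 CLS + 196 patches).
layers = [torch.softmax(torch.rand(12, 197, 197), dim=-1) for _ in range(12)]
rollout_map = attention_rollout(layers)
```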

Table 6.

Overview of Studies Using Vision Transformers in Medical Imaging.

Domain Task Modality Performance Citation
Stomach segmentation CT, MRI Dice Score: 77.5%, Hausdorff distance: 31.7% [202]
Brain, Pancreas, Hippocampus segmentation MRI, CT Dice Scores: Brain: 87.9%, Pancreas: 83.6%, Hippocampus: 88.1% [203]
Bile-duct segmentation Hyperspectral Average Dice Score: 75.2% [204]
Brain segmentation MRI Dice Scores: Enhancing Tumor Region: 78.7%, Whole Tumor Region: 90.1%, Regions of Tumor Core: 81.7% [205]
Brain, Spleen segmentation MRI, CT Dice Score: 89.1% [206]
Eye, Rectal, Brain segmentation Fundus, Colonoscopy, MRI Average Dice Score: 91.7% [207]
Eye segmentation Pathology Dice Score: 78.6%, F1: 82.1% [208]
Multi-organ segmentation Colonoscopy, Histology Average Dice Score: 86.8% [209]
Aorta, Gallbladder, Kidney, Liver, Pancreas, Spleen, Stomach segmentation MRI, CT Average Dice Score: 78.1–80.4% [210,211,212,213,214,215]
Heart segmentation MRI Average Dice Score: 88.3% [216]
Skin, Chest segmentation X-ray, CT Average Dice Score: Skin: 90.7%, Chest: 86.6% [217]
Rectal segmentation Colonoscopy, Histology Average Dice Score: 91.7% [218]
Kidney segmentation CT Dice Score: 92.3% [219]
Heart segmentation Echocardiography Dice Score: 91.4% [220]
Brain segmentation MRI Dice Score: 91.3–93.5% [221,222]
Teeth segmentation X-ray Dice Score: 92.5% [223]
Breast classification Ultrasound Accuracy: 86.7%, AUC 95.0% [224]
Lung classification Microscopy Accuracy: 97.5% [225]
Eye classification Fundus Accuracy: 95.9%, AUC: 96.3% [226,227]
Chest classification Ultrasound Accuracy: 93.9% [228]
Chest classification X-ray Average AUC: 93.1%, Accuracy: COVID: 98.0%, Pneumonia: 92.0% [229,230,231,232]
Lung classification CT F1: 76.0% [233]
Chest classification X-ray, CT Overall Accuracy: 87.2–98.1%, F1: 93.5% [234,235,236,237]

6. Comparison of Different Interpretation Methods

6.1. Categorization by Visualization Technique

Visualization techniques in DL can be categorized based on their application and effectiveness. Table 7 summarizes various visualization techniques used in DL for interpretability. It categorizes methods based on their tasks, body parts, modalities, accuracy, and evaluation metrics. This table highlights that techniques like CAM and Grad-CAM are effective for image classification and localization across different modalities such as X-ray and MRI, achieving high accuracy. LRP is noted for its accuracy in segmentation tasks, while IG is utilized for classification with notable AUC-Receiver Operating Characteristic (ROC) scores. Attention-based methods improve performance and interpretability by focusing on relevant regions, whereas perturbation-based methods assess model robustness. LIME provides model-agnostic explanations, and trainable attention models dynamically enhance feature focus.

Table 7.

Comparison of Visualization Techniques.

| Visualization Technique | Task | Body Parts | Modality | Accuracy | Evaluation Metric |
| CAM | Image classification and localization | Brain, chest, abdomen | X-ray, MRI, CT scans | 85.0–95.0% | Accuracy for classification; IoU for localization tasks |
| Grad-CAM | Image classification and localization | Brain, chest, abdomen | X-ray, MRI, CT scans | 85.0–95.0% | Accuracy for classification; IoU for localization tasks |
| LRP | Segmentation, classification | Brain, liver, lungs | MRI, CT scans | 90.0% | Dice coefficient for segmentation accuracy |
| IG | Image classification | Breast, lung, spine | X-ray, MRI | 80.0–92.0% | AUC-ROC for classification |
| Attention-based | Image classification, object detection | Brain, chest | X-ray, MRI | 5.0% to 10.0% | Accuracy for classification; mAP for object detection |
| LIME | Local explanations for model predictions | N/A | N/A | N/A | Task-specific metrics |
| Gradient-based | Visualize feature importance | N/A | N/A | N/A | Feature importance metrics, SHAP values, Grad-CAM++ |
| Vision Transformer | Dynamically attend to relevant features | Various body parts | Various modalities | N/A | Task-specific metrics |

6.2. Categorization by Body Parts, Modality, and Accuracy

Table 7 provides a concise overview of imaging techniques categorized by anatomical context (Body Parts). It lists various modalities such as MRI, X-ray, and ultrasound, and highlights specific techniques used for different body parts. For instance, CAM and Grad-CAM are prominent in brain imaging with high accuracy, while LRP and attention-based methods excel in breast imaging. The table also emphasizes the adaptation of methods to address challenges such as speckle noise in ultrasound imaging.

The advanced segmentation techniques discussed include the improved V-net algorithm, which enhances liver and tumor segmentation using distance metric-based loss functions, and the LViT model, which integrates medical text annotations to improve segmentation performance in multimodal datasets [238,239]. Additionally, Transformers have been applied broadly in medical image analysis, significantly improving performance in tasks such as segmentation and classification [240].

The imaging techniques covered in the studies include CT, dermatoscopy, diabetic retinopathy (DR), endoscopy, fundus photography, histology, histopathology, mammography, magnetoencephalography (MEG), MRI, OCT, PET, photography, ultrasound, and X-ray. These studies spanned from 2014 to 2020, with the majority published in 2019 and 2020, as shown in Figure 6.

Figure 6. Bubble chart providing a concise overview of imaging techniques categorized by anatomical contexts.

6.3. Categorization by Task

This section organizes the techniques and their applications across different tasks, highlighting performance metrics and examples for clarity. Table 8 organizes interpretability techniques according to their tasks, including classification, segmentation, and detection. It details the applications, performance metrics, and specific examples for each task. Techniques like CAM, Grad-CAM, and TA models are effective for classification tasks, providing high accuracy and AUC-ROC scores. LRP and Integrated Gradient are highlighted for segmentation tasks, with metrics like dice similarity coefficient (DSC) and intersection over union (IoU). Detection tasks benefit from methods such as saliency maps and CAM, with metrics including mean average precision (mAP) and sensitivity.

Table 8.

Techniques Organized by Task.

| Task | Techniques | Application | Performance Metrics | Examples |
| Classification | CAM, Grad-CAM, Attention, ViTs | Disease diagnosis, organ identification | Accuracy, AUC-ROC, Precision, Recall | Disease Diagnosis: High AUC for cancer detection (e.g., mammograms); Organ Identification: CAM for liver segmentation or brain MRI; ViTs: High accuracy in lung and breast cancer classification |
| Segmentation | LRP, IG, ViTs | Tumor segmentation, anatomical structure delineation | Dice Similarity Coefficient (DSC), Intersection over Union (IoU) | Tumor Segmentation: Accurate tumor boundary delineation; Anatomical Structure: IG for cardiac structure in CT scans; ViTs: High DSC scores in brain and stomach segmentation |
| Detection | Saliency maps, CAM, Attention, ViTs | Lesion detection, nodule localization | Mean Average Precision (mAP), Sensitivity, Specificity | Lesion Detection: Saliency maps for skin cancer detection; Nodule Localization: CAM for lung nodule detection in CT scans; ViTs: Improved lesion detection in various modalities |

7. Current Challenges and Future Directions

7.1. Current Challenges

Despite significant advancements, several challenges remain in the interpretability and visualization of DL models in MI:

Scalability and Efficiency: Many interpretability methods, such as occlusion and perturbation-based techniques, are computationally intensive. This limits their scalability, especially with high-resolution medical images that require real-time analysis.

Clinical Integration: Translating interpretability techniques into clinical practice requires seamless integration with existing workflows and systems. This includes ensuring that the visualizations are intuitive for non-technical healthcare practitioners and that they provide actionable insights.

Robustness and Generalization: Interpretability methods must be robust across diverse patient populations and medical imaging modalities. Models trained on specific datasets might not generalize well to other contexts, leading to potential biases and inaccuracies in interpretations.

Standardization and Validation: There is a lack of standardized metrics and benchmarks for evaluating the effectiveness of interpretability methods. Rigorous validation in clinical settings is essential to establish the reliability and trustworthiness of these techniques.

Ethical and Legal Considerations: The opacity of deep learning models raises ethical and legal concerns, especially in healthcare where decisions can have critical consequences. Ensuring transparency, accountability, and fairness in AI-driven diagnostics is paramount.

7.2. Future Directions

To address the challenges and enhance the field of medical image interpretability, future research should focus on the following directions:

Expansion to High-Resolution Histopathology: As the field of medical imaging evolves to include high-resolution histopathology and digital pathology, future research should explicitly consider these advanced techniques. This includes developing interpretability methods tailored to microscopic and molecular-level images, which require different approaches compared to traditional mesoscopic imaging methods.

Development of Lightweight Methods: Creating computationally efficient interpretability techniques that can handle high-resolution images and deliver results in real-time is crucial. This involves optimizing existing methods and exploring new algorithmic approaches that balance accuracy with computational efficiency.

Enhanced Clinical Collaboration: Collaborative efforts between AI researchers, clinicians, and medical practitioners are necessary to design interpretability methods that are both clinically relevant and user-friendly. This could include the development of interactive visualization tools that allow clinicians to intuitively explore and understand model outputs.

Robustness to Variability: Developing interpretability techniques that are robust to variations in imaging modalities, patient demographics, and clinical conditions is essential. This requires extensive training on diverse datasets and continuous validation across different settings to ensure that methods remain effective and reliable in varied contexts.

Establishment of Standards: Creating standardized benchmarks and validation protocols for interpretability methods will aid in objectively assessing their effectiveness and reliability. This includes the development of common datasets and metrics for comparative evaluations to facilitate consistency and transparency in the field.

Ethical Frameworks: Integrating ethical considerations into the design and deployment of interpretability methods is critical. This involves ensuring that models are transparent, explainable, and free from biases, as well as addressing privacy and data security concerns. Ethical frameworks will support the responsible use of AI in medical imaging.

Hybrid Approaches: Combining different interpretability techniques, such as perturbation-based and gradient-based methods, can provide more comprehensive insights into model behavior. Hybrid approaches can leverage the strengths of various methods, enhancing overall interpretability and providing a more nuanced understanding of model decisions.

By addressing these directions, the field can advance towards more effective, reliable, and clinically relevant interpretability methods in medical imaging, paving the way for better integration of AI technologies in healthcare.

8. Conclusions

In conclusion, the integration of interpretability and visualization techniques into DL models for MI holds immense potential for advancing healthcare diagnostics and treatment planning. While significant progress has been made, challenges related to scalability, clinical integration, robustness, standardization, and ethical considerations persist. Addressing these challenges requires ongoing collaboration between AI researchers, clinicians, and healthcare practitioners. Future research should focus on developing efficient and clinically relevant interpretability methods, establishing standardized evaluation protocols, and ensuring ethical and transparent AI applications in healthcare. By overcoming these hurdles, we can enhance the trustworthiness, reliability, and clinical impact of DL models in MI, ultimately leading to better patient outcomes and more informed clinical decision-making.

Author Contributions

Conceptualization, D.B. and F.N.; Methodology, D.B., F.N. and M.A.; Validation, D.B., F.N. and M.A.; Formal analysis, D.B. and F.N.; Writing—review and editing, D.B., F.N. and M.A. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data were presented in main text.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This work was partly supported by Kent State University’s Open Access APC Support Fund.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.Neikirk K., Lopez E.G., Marshall A.G., Alghanem A., Krystofiak E., Kula B., Smith N., Shao J., Katti P., Hinton A.O., Jr. Call to action to properly utilize electron microscopy to measure organelles to monitor disease. Eur. J. Cell Biol. 2023;102:151365. doi: 10.1016/j.ejcb.2023.151365. [DOI] [PubMed] [Google Scholar]
  • 2.Galaz-Montoya J.G. The advent of preventive high-resolution structural histopathology by artificial-intelligence-powered cryogenic electron tomography. Front. Mol. Biosci. 2024;11:1390858. doi: 10.3389/fmolb.2024.1390858. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Banerji S., Mitra S. Deep learning in histopathology: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2022;12:e1439. doi: 10.1002/widm.1439. [DOI] [Google Scholar]
  • 4.Niazi M.K.K., Parwani A.V., Gurcan M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019;20:e253–e261. doi: 10.1016/S1470-2045(19)30154-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Hu P., Wu F., Peng J., Bao Y., Chen F., Kong D. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int. J. Comput. Assist. Radiol. Surg. 2017;12:399–411. doi: 10.1007/s11548-016-1501-5. [DOI] [PubMed] [Google Scholar]
  • 6.Kamnitsas K., Ledig C., Newcombe V.F., Simpson J.P., Kane A.D., Menon D.K., Rueckert D., Glocker B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017;36:61–78. doi: 10.1016/j.media.2016.10.004. [DOI] [PubMed] [Google Scholar]
  • 7.Roth H.R., Lu L., Farag A., Shin H.C., Liu J., Turkbey E.B., Summers R.M. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation; Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference; Munich, Germany. 5–9 October 2015; Berlin/Heidelberg, Germany: Springer; 2015. pp. 556–564. Proceedings, Part I 18. [Google Scholar]
  • 8.Gao Y., Alison Noble J. Detection and characterization of the fetal heartbeat in free-hand ultrasound sweeps with weakly-supervised two-streams convolutional networks; Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017: 20th International Conference; Quebec City, QC, Canada. 11–13 September 2017; Berlin/Heidelberg, Germany: Springer; 2017. pp. 305–313. Proceedings, Part II 20. [Google Scholar]
  • 9.Roth H.R., Lu L., Liu J., Yao J., Seff A., Cherry K., Kim L., Summers R.M. Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Trans. Med. Imaging. 2015;35:1170–1181. doi: 10.1109/TMI.2015.2482920. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Kim S.T., Lee J.H., Lee H., Ro Y.M. Visually interpretable deep network for diagnosis of breast masses on mammograms. Phys. Med. Biol. 2018;63:235025. doi: 10.1088/1361-6560/aaef0a. [DOI] [PubMed] [Google Scholar]
  • 11.Yang X., Do Yang J., Hwang H.P., Yu H.C., Ahn S., Kim B.W., You H. Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation. Comput. Methods Programs Biomed. 2018;158:41–52. doi: 10.1016/j.cmpb.2017.12.008. [DOI] [PubMed] [Google Scholar]
  • 12.Chen X., Shi B. Deep mask for X-ray based heart disease classification. arXiv. 2018. arXiv:1808.08277 [Google Scholar]
  • 13.Yi D., Sawyer R.L., Cohn III D., Dunnmon J., Lam C., Xiao X., Rubin D. Optimizing and visualizing deep learning for benign/malignant classification in breast tumors. arXiv. 2017. arXiv:1705.06362 [Google Scholar]
  • 14.Hengstler M., Enkel E., Duelli S. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 2016;105:105–120. doi: 10.1016/j.techfore.2015.12.014. [DOI] [Google Scholar]
  • 15.Nundy S., Montgomery T., Wachter R.M. Promoting trust between patients and physicians in the era of artificial intelligence. JAMA. 2019;322:497–498. doi: 10.1001/jama.2018.20563. [DOI] [PubMed] [Google Scholar]
  • 16.Hosny A., Parmar C., Quackenbush J., Schwartz L.H., Aerts H.J. Artificial intelligence in radiology. Nat. Rev. Cancer. 2018;18:500–510. doi: 10.1038/s41568-018-0016-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Jia X., Ren L., Cai J. Clinical implementation of AI technologies will require interpretable AI models. Med. Phys. 2020;47:1–4. doi: 10.1002/mp.13891. [DOI] [PubMed] [Google Scholar]
  • 18.Reyes M., Meier R., Pereira S., Silva C.A., Dahlweid F.M., Tengg-Kobligk H.v., Summers R.M., Wiest R. On the interpretability of artificial intelligence in radiology: Challenges and opportunities. Radiol. Artif. Intell. 2020;2:e190043. doi: 10.1148/ryai.2020190043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Gastounioti A., Kontos D. Is it time to get rid of black boxes and cultivate trust in AI? Radiol. Artif. Intell. 2020;2:e200088. doi: 10.1148/ryai.2020200088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Guo R., Wei J., Sun L., Yu B., Chang G., Liu D., Zhang S., Yao Z., Xu M., Bu L. A survey on advancements in image-text multimodal models: From general techniques to biomedical implementations. Comput. Biol. Med. 2024;178:108709. doi: 10.1016/j.compbiomed.2024.108709. [DOI] [PubMed] [Google Scholar]
  • 21.Rasool N., Bhat J.I. Brain tumour detection using machine and deep learning: A systematic review. Multimed. Tools Appl. 2024:1–54. doi: 10.1007/s11042-024-19333-2. [DOI] [Google Scholar]
  • 22.Huff D.T., Weisman A.J., Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys. Med. Biol. 2021;66:04TR01. doi: 10.1088/1361-6560/abcd17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Hohman F., Kahng M., Pienta R., Chau D.H. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Trans. Vis. Comput. Graph. 2018;25:2674–2693. doi: 10.1109/TVCG.2018.2843369. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Litjens G., Kooi T., Bejnordi B.E., Setio A.A.A., Ciompi F., Ghafoorian M., Van Der Laak J.A., Van Ginneken B., Sánchez C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017;42:60–88. doi: 10.1016/j.media.2017.07.005. [DOI] [PubMed] [Google Scholar]
  • 25.Vincent P., Larochelle H., Bengio Y., Manzagol P.A. Extracting and composing robust features with denoising autoencoders; Proceedings of the 25th International Conference on Machine Learning; Helsinki, Finland. 5–9 July 2008; pp. 1096–1103. [Google Scholar]
  • 26.Kiran B.R., Thomas D.M., Parakkal R. An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos. J. Imaging. 2018;4:36. doi: 10.3390/jimaging4020036. [DOI] [Google Scholar]
  • 27.Theis L., Shi W., Cunningham A., Huszár F. Lossy image compression with compressive autoencoders; Proceedings of the International Conference on Learning Representations; Virtually. 25–29 April 2022. [Google Scholar]
  • 28.Tschannen M., Bachem O., Lucic M. Recent advances in autoencoder-based representation learning. arXiv. 2018. arXiv:1812.05069 [Google Scholar]
  • 29.Uzunova H., Ehrhardt J., Kepp T., Handels H. Interpretable explanations of black box classifiers applied on medical images by meaningful perturbations using variational autoencoders; Proceedings of the Medical Imaging 2019: Image Processing; San Diego, CA, USA. 19–21 February 2019; Bellingham, DC, USA: SPIE; 2019. pp. 264–271. [Google Scholar]
  • 30.Chen X., You S., Tezcan K.C., Konukoglu E. Unsupervised lesion detection via image restoration with a normative prior. Med. Image Anal. 2020;64:101713. doi: 10.1016/j.media.2020.101713. [DOI] [PubMed] [Google Scholar]
  • 31.Hou L., Nguyen V., Kanevsky A.B., Samaras D., Kurc T.M., Zhao T., Gupta R.R., Gao Y., Chen W., Foran D., et al. Sparse autoencoder for unsupervised nucleus detection and representation in histopathology images. Pattern Recognit. 2019;86:188–200. doi: 10.1016/j.patcog.2018.09.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Van der Maaten L., Hinton G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008;9:2579–2605. [Google Scholar]
  • 33.Plis S.M., Hjelm D.R., Salakhutdinov R., Allen E.A., Bockholt H.J., Long J.D., Johnson H.J., Paulsen J.S., Turner J.A., Calhoun V.D. Deep learning for neuroimaging: A validation study. Front. Neurosci. 2014;8:229. doi: 10.3389/fnins.2014.00229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Stoyanov D., Taylor Z., Kia S.M., Oguz I., Reyes M., Martel A., Maier-Hein L., Marquand A.F., Duchesnay E., Löfstedt T., et al. Understanding and Interpreting Machine Learning in Medical Image Computing Applications. Springer; Berlin/Heidelberg, Germany: 2018. [Google Scholar]
  • 35.Yu Z., Tan E.L., Ni D., Qin J., Chen S., Li S., Lei B., Wang T. A deep convolutional neural network-based framework for automatic fetal facial standard plane recognition. IEEE J. Biomed. Health Inform. 2017;22:874–885. doi: 10.1109/JBHI.2017.2705031. [DOI] [PubMed] [Google Scholar]
  • 36.Zhang F., Li Z., Zhang B., Du H., Wang B., Zhang X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing. 2019;361:185–195. doi: 10.1016/j.neucom.2019.04.093. [DOI] [Google Scholar]
  • 37.Al’Aref S.J., Anchouche K., Singh G., Slomka P.J., Kolli K.K., Kumar A., Pandey M., Maliakal G., Van Rosendael A.R., Beecy A.N., et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur. Heart J. 2019;40:1975–1986. doi: 10.1093/eurheartj/ehy404. [DOI] [PubMed] [Google Scholar]
  • 38.Nie D., Trullo R., Lian J., Petitjean C., Ruan S., Wang Q., Shen D. Medical image synthesis with context-aware generative adversarial networks; Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference; Quebec City, QC, Canada. 11–13 September 2017; Berlin/Heidelberg, Germany: Springer; 2017. pp. 417–425. Proceedings, Part III 20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Frid-Adar M., Diamant I., Klang E., Amitai M., Goldberger J., Greenspan H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. 2018;321:321–331. doi: 10.1016/j.neucom.2018.09.013. [DOI] [Google Scholar]
  • 40.Yi X., Walia E., Babyn P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019;58:101552. doi: 10.1016/j.media.2019.101552. [DOI] [PubMed] [Google Scholar]
  • 41.Devlin J., Chang M.W., Lee K., Toutanova K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv. 2018. arXiv:1810.04805 [Google Scholar]
  • 42.Al-Hammuri K., Gebali F., Kanan A., Chelvan I.T. Vision transformer architecture and applications in digital health: A tutorial and survey. Vis. Comput. Ind. Biomed. Art. 2023;6:14. doi: 10.1186/s42492-023-00140-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Lecler A., Duron L., Soyer P. Revolutionizing radiology with GPT-based models: Current applications, future possibilities and limitations of ChatGPT. Diagn. Interv. Imaging. 2023;104:269–274. doi: 10.1016/j.diii.2023.02.003. [DOI] [PubMed] [Google Scholar]
  • 44.Ivanovs M., Kadikis R., Ozols K. Perturbation-based methods for explaining deep neural networks: A survey. Pattern Recognit. Lett. 2021;150:228–234. doi: 10.1016/j.patrec.2021.06.030. [DOI] [Google Scholar]
  • 45.Papanastasopoulos Z., Samala R.K., Chan H.P., Hadjiiski L., Paramagul C., Helvie M.A., Neal C.H. Explainable AI for medical imaging: Deep-learning CNN ensemble for classification of estrogen receptor status from breast MRI; Proceedings of the Medical Imaging 2020: Computer-Aided Diagnosis; Houston, TX, USA. 16–19 February 2020; Bellingham, WA, USA: SPIE; 2020. pp. 228–235. [Google Scholar]
  • 46.Sayres R., Taly A., Rahimy E., Blumer K., Coz D., Hammel N., Webster D.R. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology. 2019;126:552–564. doi: 10.1016/j.ophtha.2018.11.016. [DOI] [PubMed] [Google Scholar]
  • 47.Sundararajan M., Taly A., Yan Q. Axiomatic attribution for deep networks; Proceedings of the International Conference on Machine Learning, PMLR; Sydney, Australia. 6–11 August 2017; pp. 3319–3328. [Google Scholar]
  • 48.Rajaraman S., Candemir S., Thoma G., Antani S. Visualizing and explaining deep learning predictions for pneumonia detection in pediatric chest radiographs; Proceedings of the Medical Imaging 2019: Computer-Aided Diagnosis; San Diego, CA, USA. 17–20 February 2019; Bellingham, WA, USA: SPIE; 2019. pp. 200–211. [Google Scholar]
  • 49.Malhi A., Kampik T., Pannu H., Madhikermi M., Främling K. Explaining machine learning-based classifications of in-vivo gastral images; Proceedings of the 2019 Digital Image Computing: Techniques and Applications (DICTA); Perth, Australia. 2–4 December 2019; pp. 1–7. [Google Scholar]
  • 50.Dubost F., Adams H., Bortsova G., Ikram M.A., Niessen W., Vernooij M., De Bruijne M. 3D regression neural network for the quantification of enlarged perivascular spaces in brain MRI. Med. Image Anal. 2019;51:89–100. doi: 10.1016/j.media.2018.10.008. [DOI] [PubMed] [Google Scholar]
  • 51.Shahamat H., Abadeh M.S. Brain MRI analysis using a deep learning based evolutionary approach. Neural Netw. 2020;126:218–234. doi: 10.1016/j.neunet.2020.03.017. [DOI] [PubMed] [Google Scholar]
  • 52.Gecer B., Aksoy S., Mercan E., Shapiro L.G., Weaver D.L., Elmore J.G. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognit. 2018;84:345–356. doi: 10.1016/j.patcog.2018.07.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Kermany D.S., Goldbaum M., Cai W., Valentim C.C., Liang H., Baxter S.L., McKeown A., Yang G., Wu X., Yan F., et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172:1122–1131. doi: 10.1016/j.cell.2018.02.010. [DOI] [PubMed] [Google Scholar]
  • 54.Seah J.C., Tang J.S., Kitchen A., Gaillard F., Dixon A.F. Chest radiographs in congestive heart failure: Visualizing neural network learning. Radiology. 2019;290:514–522. doi: 10.1148/radiol.2018180887. [DOI] [PubMed] [Google Scholar]
  • 55.Zeiler M.D., Fergus R. Visualizing and understanding convolutional networks; Proceedings of the Computer Vision–ECCV 2014: 13th European Conference; Zurich, Switzerland. 6–12 September 2014; Berlin/Heidelberg, Germany: Springer; 2014. pp. 818–833. Proceedings, Part I 13. [Google Scholar]
  • 56.Liang Y., Li S., Yan C., Li M., Jiang C. Explaining the black-box model: A survey of local interpretation methods for deep neural networks. Neurocomputing. 2021;419:168–182. doi: 10.1016/j.neucom.2020.08.011. [DOI] [Google Scholar]
  • 57.Ribeiro M.T., Singh S., Guestrin C. “Why should i trust you?” Explaining the predictions of any classifier; Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; San Francisco, CA, USA. 13–16 August 2016; pp. 1135–1144. [Google Scholar]
  • 58.Xu X., Li C., Lan X., Fan X., Lv X., Ye X., Wu T. A lightweight and robust framework for circulating genetically abnormal cells (CACs) identification using 4-color fluorescence in situ hybridization (FISH) image and deep refined learning. J. Digit. Imaging. 2023;36:1687–1700. doi: 10.1007/s10278-023-00843-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Garg P., Davenport E., Murugesan G., Wagner B., Whitlow C., Maldjian J., Montillo A. Using convolutional neural networks to automatically detect eye-blink artifacts in magnetoencephalography without resorting to electrooculography; Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference; Quebec City, QC, Canada. 11–13 September 2017; Berlin/Heidelberg, Germany: Springer; 2017. pp. 374–381. Proceedings, Part III 20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Simonyan K., Vedaldi A., Zisserman A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv. 2013 [Google Scholar]
  • 61.Dubost F., Bortsova G., Adams H., Ikram A., Niessen W.J., Vernooij M., De Bruijne M. Gp-unet: Lesion detection from weak labels with a 3d regression network; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI); Quebec City, QC, Canada. 11–13 September 2017; pp. 214–221. [Google Scholar]
  • 62.Lévy D., Jain A. Breast mass classification from mammograms using deep convolutional neural networks. arXiv. 2016 [Google Scholar]
  • 63.Rayan J.C., Reddy N., Kan J.H., Zhang W., Annapragada A. Binomial classification of pediatric elbow fractures using a deep learning multiview approach emulating radiologist decision making. Radiol. Artif. Intell. 2019;1:e180015. doi: 10.1148/ryai.2019180015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Liefers B., González-Gonzalo C., Klaver C., van Ginneken B., Sánchez C.I. Dense segmentation in selected dimensions: Application to retinal optical coherence tomography; Proceedings of the International Conference on Medical Imaging with Deep Learning (MIDL); London, UK. 8–10 July 2019; pp. 337–346. PMLR. [Google Scholar]
  • 65.Springenberg J.T., Dosovitskiy A., Brox T., Riedmiller M. Striving for simplicity: The all convolutional net. arXiv. 2014 [Google Scholar]
  • 66.Böhle M., Eitel F., Weygandt M., Ritter K. Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer’s disease classification. Front. Aging Neurosci. 2019;11:194. doi: 10.3389/fnagi.2019.00194. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Dubost F., Yilmaz P., Adams H., Bortsova G., Ikram M.A., Niessen W., Vernooij M., de Bruijne M. Enlarged perivascular spaces in brain MRI: Automated quantification in four regions. Neuroimage. 2019;185:534–544. doi: 10.1016/j.neuroimage.2018.10.026. [DOI] [PubMed] [Google Scholar]
  • 68.Wang X., Liang X., Jiang Z., Nguchu B.A., Zhou Y., Wang Y., Wang H., Li Y., Zhu Y., Wu F., et al. Decoding and mapping task states of the human brain via deep learning. Hum. Brain Mapp. 2020;41:1505–1519. doi: 10.1002/hbm.24891. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Gessert N., Latus S., Abdelwahed Y.S., Leistner D.M., Lutz M., Schlaefer A. Bioresorbable scaffold visualization in IVOCT images using CNNs and weakly supervised localization; Proceedings of the Medical Imaging 2019: Image Processing; San Diego, CA, USA. 19–21 February 2019; Bellingham, WA, USA: SPIE; 2019. pp. 606–612. [Google Scholar]
  • 70.Wickstrøm K., Kampffmeyer M., Jenssen R. Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps. Med. Image Anal. 2020;60:101619. doi: 10.1016/j.media.2019.101619. [DOI] [PubMed] [Google Scholar]
  • 71.Jamaludin A., Kadir T., Zisserman A. SpineNet: Automated classification and evidence visualization in spinal MRIs. Med. Image Anal. 2017;41:63–73. doi: 10.1016/j.media.2017.07.002. [DOI] [PubMed] [Google Scholar]
  • 72.Zhou B., Khosla A., Lapedriza A., Oliva A., Torralba A. Learning deep features for discriminative localization; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA. 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  • 73.Lin M., Chen Q., Yan S. Network in network. arXiv. 2013 [Google Scholar]
  • 74.Feng X., Lipton Z.C., Yang J., Small S.A., Provenzano F.A., Initiative A.D.N., Initiative F.L.D.N. Estimating brain age based on a uniform healthy population with deep learning and structural magnetic resonance imaging. Neurobiol. Aging. 2020;91:15–25. doi: 10.1016/j.neurobiolaging.2020.02.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Zhao G., Zhou B., Wang K., Jiang R., Xu M. Respond-CAM: Analyzing deep models for 3D imaging data by visualizations; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference; Granada, Spain. 16–20 September 2018; Berlin/Heidelberg, Germany: Springer International Publishing; 2018. pp. 485–492. Proceedings, Part I. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Oquab M., Bottou L., Laptev I., Sivic J. Learning and transferring mid-level image representations using convolutional neural networks; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Columbus, OH, USA. 23–28 June 2014; pp. 1717–1724. [Google Scholar]
  • 77.Woerl A.C., Eckstein M., Geiger J., Wagner D.C., Daher T., Stenzel P., Fernandez A., Hartmann A., Wand M., Roth W., et al. Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. Eur. Urol. 2020;78:256–264. doi: 10.1016/j.eururo.2020.04.023. [DOI] [PubMed] [Google Scholar]
  • 78.Ahmad A., Sarkar S., Shah A., Gore S., Santosh V., Saini J., Ingalhalikar M. Predictive and discriminative localization of IDH genotype in high grade gliomas using deep convolutional neural nets; Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); Venice, Italy. 8–11 April 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 372–375. [Google Scholar]
  • 79.Shinde S., Prasad S., Saboo Y., Kaushick R., Saini J., Pal P.K., Ingalhalikar M. Predictive markers for Parkinson’s disease using deep neural nets on neuromelanin sensitive MRI. Neuroimage Clin. 2019;22:101748. doi: 10.1016/j.nicl.2019.101748. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Chakraborty S., Aich S., Kim H.C. Detection of Parkinson’s disease from 3T T1 weighted MRI scans using 3D convolutional neural network. Diagnostics. 2020;10:402. doi: 10.3390/diagnostics10060402. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Choi H., Kim Y.K., Yoon E.J., Lee J.Y., Lee D.S., Initiative A.D.N. Cognitive signature of brain FDG PET based on deep learning: Domain transfer from Alzheimer’s disease to Parkinson’s disease. Eur. J. Nucl. Med. Mol. Imaging. 2020;47:403–412. doi: 10.1007/s00259-019-04538-7. [DOI] [PubMed] [Google Scholar]
  • 82.Huang Z., Zhu X., Ding M., Zhang X. Medical image classification using a light-weighted hybrid neural network based on PCANet and DenseNet. IEEE Access. 2020;8:24697–24712. doi: 10.1109/ACCESS.2020.2971225. [DOI] [Google Scholar]
  • 83.Kim C., Kim W.H., Kim H.J., Kim J. Weakly-supervised US breast tumor characterization and localization with a box convolution network; Proceedings of the Medical Imaging 2020: Computer-Aided Diagnosis; Houston, TX, USA. 16–19 February 2020; Bellingham, WA, USA: SPIE; 2020. pp. 298–304. [Google Scholar]
  • 84.Luo L., Chen H., Wang X., Dou Q., Lin H., Zhou J., Li G., Heng P.A. Deep angular embedding and feature correlation attention for breast MRI cancer analysis; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference; Shenzhen, China. 13–17 October 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 504–512. Proceedings, Part IV 22. [Google Scholar]
  • 85.Yi P.H., Lin A., Wei J., Yu A.C., Sair H.I., Hui F.K., Hager G.D., Harvey S.C. Deep-learning-based semantic labeling for 2D mammography and comparison of complexity for machine learning tasks. J. Digit. Imaging. 2019;32:565–570. doi: 10.1007/s10278-019-00244-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Lee J., Nishikawa R.M. Detecting mammographically occult cancer in women with dense breasts using deep convolutional neural network and Radon Cumulative Distribution Transform. J. Med. Imaging. 2019;6:044502. doi: 10.1117/1.JMI.6.4.044502. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 87.Qi X., Zhang L., Chen Y., Pi Y., Chen Y., Lv Q., Yi Z. Automated diagnosis of breast ultrasonography images using deep neural networks. Med. Image Anal. 2019;52:185–198. doi: 10.1016/j.media.2018.12.006. [DOI] [PubMed] [Google Scholar]
  • 88.Xi P., Guan H., Shu C., Borgeat L., Goubran R. An integrated approach for medical abnormality detection using deep patch convolutional neural networks. Vis. Comput. 2020;36:1869–1882. doi: 10.1007/s00371-019-01775-7. [DOI] [Google Scholar]
  • 89.Zhou L.Q., Wu X.L., Huang S.Y., Wu G.G., Ye H.R., Wei Q., Bao L.Y., Deng Y.B., Li X.R., Cui X.W., et al. Lymph node metastasis prediction from primary breast cancer US images using deep learning. Radiology. 2020;294:19–28. doi: 10.1148/radiol.2019190372. [DOI] [PubMed] [Google Scholar]
  • 90.Dunnmon J.A., Yi D., Langlotz C.P., Ré C., Rubin D.L., Lungren M.P. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology. 2019;290:537–544. doi: 10.1148/radiol.2018181422. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Huang Z., Fu D. Diagnose chest pathology in X-ray images by learning multi-attention convolutional neural network; Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC); Chongqing, China. 24–26 May 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 294–299. [Google Scholar]
  • 92.Khakzar A., Albarqouni S., Navab N. Learning interpretable features via adversarially robust optimization; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference; Shenzhen, China. 13–17 October 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 793–800. Proceedings, Part VI 22. [Google Scholar]
  • 93.Kumar D., Sankar V., Clausi D., Taylor G.W., Wong A. Sisc: End-to-end interpretable discovery radiomics-driven lung cancer prediction via stacked interpretable sequencing cells. IEEE Access. 2019;7:145444–145454. doi: 10.1109/ACCESS.2019.2945524. [DOI] [Google Scholar]
  • 94.Lei Y., Tian Y., Shan H., Zhang J., Wang G., Kalra M.K. Shape and margin-aware lung nodule classification in low-dose CT images via soft activation mapping. Med. Image Anal. 2020;60:101628. doi: 10.1016/j.media.2019.101628. [DOI] [PubMed] [Google Scholar]
  • 95.Tang Y.X., Tang Y.B., Peng Y., Yan K., Bagheri M., Redd B.A., Brandon C.J., Lu Z., Han M., Xiao J., et al. Automated abnormality classification of chest radiographs using deep convolutional neural networks. NPJ Digit. Med. 2020;3:70. doi: 10.1038/s41746-020-0273-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Wang K., Zhang X., Huang S. KGZNet: Knowledge-guided deep zoom neural networks for thoracic disease classification; Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); San Diego, CA, USA. 18–21 November 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 1396–1401. [Google Scholar]
  • 97.Yi P.H., Kim T.K., Yu A.C., Bennett B., Eng J., Lin C.T. Can AI outperform a junior resident? Comparison of deep neural network to first-year radiology residents for identification of pneumothorax. Emerg. Radiol. 2020;27:367–375. doi: 10.1007/s10140-020-01767-4. [DOI] [PubMed] [Google Scholar]
  • 98.Liu H., Wang L., Nan Y., Jin F., Wang Q., Pu J. SDFN: Segmentation-based deep fusion network for thoracic disease classification in chest X-ray images. Comput. Med. Imaging Graph. 2019;75:66–73. doi: 10.1016/j.compmedimag.2019.05.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99.Ahmad M., Kasukurthi N., Pande H. Deep learning for weak supervision of diabetic retinopathy abnormalities; Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); Venice, Italy. 8–11 April 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 573–577. [Google Scholar]
  • 100.Liao W., Zou B., Zhao R., Chen Y., He Z., Zhou M. Clinical interpretable deep learning model for glaucoma diagnosis. IEEE J. Biomed. Health Inform. 2019;24:1405–1412. doi: 10.1109/JBHI.2019.2949075. [DOI] [PubMed] [Google Scholar]
  • 101.Perdomo O., Rios H., Rodríguez F.J., Otálora S., Meriaudeau F., Müller H., González F.A. Classification of diabetes-related retinal diseases using a deep learning approach in optical coherence tomography. Comput. Methods Programs Biomed. 2019;178:181–189. doi: 10.1016/j.cmpb.2019.06.016. [DOI] [PubMed] [Google Scholar]
  • 102.Shen Y., Sheng B., Fang R., Li H., Dai L., Stolte S., Qin J., Jia W., Shen D. Domain-invariant interpretable fundus image quality assessment. Med. Image Anal. 2020;61:101654. doi: 10.1016/j.media.2020.101654. [DOI] [PubMed] [Google Scholar]
  • 103.Wang X., Chen H., Ran A.R., Luo L., Chan P.P., Tham C.C., Chang R.T., Mannil S.S., Cheung C.Y., Heng P.A. Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning. Med. Image Anal. 2020;63:101695. doi: 10.1016/j.media.2020.101695. [DOI] [PubMed] [Google Scholar]
  • 104.Jiang H., Yang K., Gao M., Zhang D., Ma H., Qian W. An interpretable ensemble deep learning model for diabetic retinopathy disease classification; Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Berlin, Germany. 23–27 July 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 2045–2048. [DOI] [PubMed] [Google Scholar]
  • 105.Tu Z., Gao S., Zhou K., Chen X., Fu H., Gu Z., Cheng J., Yu Z., Liu J. SUNet: A lesion regularized model for simultaneous diabetic retinopathy and diabetic macular edema grading; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1378–1382. [Google Scholar]
  • 106.Kumar D., Taylor G.W., Wong A. Discovery radiomics with CLEAR-DR: Interpretable computer aided diagnosis of diabetic retinopathy. IEEE Access. 2019;7:25891–25896. doi: 10.1109/ACCESS.2019.2893635. [DOI] [Google Scholar]
  • 107.Liu C., Han X., Li Z., Ha J., Peng G., Meng W., He M. A self-adaptive deep learning method for automated eye laterality detection based on color fundus photography. PLoS ONE. 2019;14:e0222025. doi: 10.1371/journal.pone.0222025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 108.Narayanan B.N., Hardie R.C., De Silva M.S., Kueterman N.K. Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy. J. Med. Imaging. 2020;7:034501. doi: 10.1117/1.JMI.7.3.034501. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Everson M., Herrera L.G.P., Li W., Luengo I.M., Ahmad O., Banks M., Magee C., Alzoubaidi D., Hsu H., Graham D., et al. Artificial intelligence for the real-time classification of intrapapillary capillary loop patterns in the endoscopic diagnosis of early oesophageal squamous cell carcinoma: A proof-of-concept study. United Eur. Gastroenterol. J. 2019;7:297–306. doi: 10.1177/2050640618821800. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.García-Peraza-Herrera L.C., Everson M., Lovat L., Wang H.P., Wang W.L., Haidry R., Stoyanov D., Ourselin S., Vercauteren T. Intrapapillary capillary loop classification in magnification endoscopy: Open dataset and baseline methodology. Int. J. Comput. Assist. Radiol. Surg. 2020;15:651–659. doi: 10.1007/s11548-020-02127-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Wang S., Xing Y., Zhang L., Gao H., Zhang H. Deep convolutional neural network for ulcer recognition in wireless capsule endoscopy: Experimental feasibility and optimization. Comput. Math. Methods Med. 2019;2019:7546215. doi: 10.1155/2019/7546215. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 112.Yan C., Xu J., Xie J., Cai C., Lu H. Prior-aware CNN with multi-task learning for colon images analysis; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 254–257. [Google Scholar]
  • 113.Heinemann F., Birk G., Stierstorfer B. Deep learning enables pathologist-like scoring of NASH models. Sci. Rep. 2019;9:18454. doi: 10.1038/s41598-019-54904-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Kiani A., Uyumazturk B., Rajpurkar P., Wang A., Gao R., Jones E., Yu Y., Langlotz C.P., Ball R.L., Montine T.J., et al. Impact of a deep learning assistant on the histopathologic classification of liver cancer. NPJ Digit. Med. 2020;3:23. doi: 10.1038/s41746-020-0232-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 115.Chang G.H., Felson D.T., Qiu S., Guermazi A., Capellini T.D., Kolachalama V.B. Assessment of knee pain from MR imaging using a convolutional Siamese network. Eur. Radiol. 2020;30:3538–3548. doi: 10.1007/s00330-020-06658-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.Yi P.H., Kim T.K., Wei J., Shin J., Hui F.K., Sair H.I., Hager G.D., Fritz J. Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning. Pediatr. Radiol. 2019;49:1066–1070. doi: 10.1007/s00247-019-04408-2. [DOI] [PubMed] [Google Scholar]
  • 117.Li W., Zhuang J., Wang R., Zhang J., Zheng W.S. Fusing metadata and dermoscopy images for skin disease diagnosis; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1996–2000. [Google Scholar]
  • 118.Xie Y., Zhang J., Xia Y., Shen C. A mutual bootstrapping model for automated skin lesion segmentation and classification. IEEE Trans. Med Imaging. 2020;39:2482–2493. doi: 10.1109/TMI.2020.2972964. [DOI] [PubMed] [Google Scholar]
  • 119.Kim Y., Lee K.J., Sunwoo L., Choi D., Nam C.M., Cho J., Kim J., Bae Y.J., Yoo R.E., Choi B.S., et al. Deep learning in diagnosis of maxillary sinusitis using conventional radiography. Investig. Radiol. 2019;54:7–15. doi: 10.1097/RLI.0000000000000503. [DOI] [PubMed] [Google Scholar]
  • 120.Wang L., Zhang L., Zhu M., Qi X., Yi Z. Automatic diagnosis for thyroid nodules in ultrasound images by deep neural networks. Med. Image Anal. 2020;61:101665. doi: 10.1016/j.media.2020.101665. [DOI] [PubMed] [Google Scholar]
  • 121.Huang Y., Chung A.C. Evidence localization for pathology images using weakly supervised learning; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference; Shenzhen, China. 13–17 October 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 613–621. Proceedings, Part I 22. [Google Scholar]
  • 122.Kim I., Rajaraman S., Antani S. Visual interpretation of convolutional neural network predictions in classifying medical image modalities. Diagnostics. 2019;9:38. doi: 10.3390/diagnostics9020038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123.Tang C. Discovering Unknown Diseases with Explainable Automated Medical Imaging; Proceedings of the Medical Image Understanding and Analysis: 24th Annual Conference, MIUA 2020; Oxford, UK. 15–17 July 2020; Berlin/Heidelberg, Germany: Springer; 2020. pp. 346–358. Proceedings 24. [Google Scholar]
  • 124.Selvaraju R.R., Cogswell M., Das A., Vedantam R., Parikh D., Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization; Proceedings of the IEEE International Conference on Computer Vision (ICCV); Venice, Italy. 22–29 October 2017; pp. 618–626. [Google Scholar]
  • 125.Hilbert A., Ramos L.A., van Os H.J., Olabarriaga S.D., Tolhuisen M.L., Wermer M.J., Marquering H.A. Data-efficient deep learning of radiological image data for outcome prediction after endovascular treatment of patients with acute ischemic stroke. Comput. Biol. Med. 2019;115:103516. doi: 10.1016/j.compbiomed.2019.103516. [DOI] [PubMed] [Google Scholar]
  • 126.Kim B.H., Ye J.C. Understanding graph isomorphism network for rs-fMRI functional connectivity analysis. Front. Neurosci. 2020;14:630. doi: 10.3389/fnins.2020.00630. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 127.Liao L., Zhang X., Zhao F., Lou J., Wang L., Xu X., Li G. Multi-branch deformable convolutional neural network with label distribution learning for fetal brain age prediction; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 424–427. [Google Scholar]
  • 128.Natekar P., Kori A., Krishnamurthi G. Demystifying brain tumor segmentation networks: Interpretability and uncertainty analysis. Front. Comput. Neurosci. 2020;14:6. doi: 10.3389/fncom.2020.00006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 129.Pereira S., Meier R., Alves V., Reyes M., Silva C.A. Automatic brain tumor grading from MRI data using convolutional neural networks and quality assessment; Proceedings of the Understanding and Interpreting Machine Learning in Medical Image Computing Applications: First International Workshops, MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018; Granada, Spain. 16–20 September 2018; Berlin/Heidelberg, Germany: Springer International Publishing; 2018. pp. 106–114. Proceedings 1. [Google Scholar]
  • 130.Pominova M., Artemov A., Sharaev M., Kondrateva E., Bernstein A., Burnaev E. Voxelwise 3D convolutional and recurrent neural networks for epilepsy and depression diagnostics from structural and functional MRI data; Proceedings of the 2018 IEEE International Conference on Data Mining Workshops (ICDMW); Singapore. 17–20 November 2018; Piscataway, NJ, USA: IEEE; 2018. pp. 299–307. [Google Scholar]
  • 131.Xie B., Lei T., Wang N., Cai H., Xian J., He M., Xie H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2020;15:1303–1312. doi: 10.1007/s11548-020-02182-3. [DOI] [PubMed] [Google Scholar]
  • 132.El Adoui M., Drisis S., Benjelloun M. Multi-input deep learning architecture for predicting breast tumor response to chemotherapy using quantitative MR images. Int. J. Comput. Assist. Radiol. Surg. 2020;15:1491–1500. doi: 10.1007/s11548-020-02209-9. [DOI] [PubMed] [Google Scholar]
  • 133.Obikane S., Aoki Y. Weakly supervised domain adaptation with point supervision in histopathological image segmentation; Proceedings of the Pattern Recognition: ACPR 2019 Workshops; Auckland, New Zealand. 26 November 2019; Singapore: Springer; 2020. pp. 127–140. Proceedings 5. [Google Scholar]
  • 134.Candemir S., White R.D., Demirer M., Gupta V., Bigelow M.T., Prevedello L.M., Erdal B.S. Automated coronary artery atherosclerosis detection and weakly supervised localization on coronary CT angiography with a deep 3-dimensional convolutional neural network. Comput. Med Imaging Graph. 2020;83:101721. doi: 10.1016/j.compmedimag.2020.101721. [DOI] [PubMed] [Google Scholar]
  • 135.Cong C., Kato Y., Vasconcellos H.D., Lima J., Venkatesh B. Automated stenosis detection and classification in X-ray angiography using deep neural network; Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); San Diego, CA, USA. 18–21 November 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 1301–1308. [Google Scholar]
  • 136.Huo Y., Terry J.G., Wang J., Nath V., Bermudez C., Bao S., Landman B.A. Coronary calcium detection using 3D attention identical dual deep network based on weakly supervised learning; Proceedings of the Medical Imaging 2019: Image Processing; San Diego, CA, USA. 19–21 February 2019; Bellingham, WA, USA: SPIE; 2019. pp. 308–315. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.Patra A., Noble J.A. Incremental learning of fetal heart anatomies using interpretable saliency maps; Proceedings of the Medical Image Understanding and Analysis: 23rd Conference, MIUA 2019; Liverpool, UK. 24–26 July 2019; Berlin/Heidelberg, Germany: Springer International Publishing; 2020. pp. 129–141. Proceedings 23. [Google Scholar]
  • 138.Brunese L., Mercaldo F., Reginelli A., Santone A. Explainable deep learning for pulmonary disease and coronavirus COVID-19 detection from X-rays. Comput. Methods Programs Biomed. 2020;196:105608. doi: 10.1016/j.cmpb.2020.105608. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Chen B., Li J., Lu G., Zhang D. Lesion location attention guided network for multi-label thoracic disease classification in chest X-rays. IEEE J. Biomed. Health Inform. 2019;24:2016–2027. doi: 10.1109/JBHI.2019.2952597. [DOI] [PubMed] [Google Scholar]
  • 140.He J., Shang L., Ji H., Zhang X. Deep learning features for lung adenocarcinoma classification with tissue pathology images; Proceedings of the Neural Information Processing: 24th International Conference, ICONIP 2017; Guangzhou, China. 14–18 November 2017; Berlin/Heidelberg, Germany: Springer International Publishing; 2017. pp. 742–751. Proceedings, Part IV. [Google Scholar]
  • 141.Hosny A., Parmar C., Coroller T.P., Grossmann P., Zeleznik R., Kumar A., Aerts H.J. Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study. PLoS Med. 2018;15:e1002711. doi: 10.1371/journal.pmed.1002711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Humphries S.M., Notary A.M., Centeno J.P., Strand M.J., Crapo J.D., Silverman E.K., For the Genetic Epidemiology of COPD (COPDGene) Investigators. Deep learning enables automatic classification of emphysema pattern at CT. Radiology. 2020;294:434–444. doi: 10.1148/radiol.2019191022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 143.Ko H., Chung H., Kang W.S., Kim K.W., Shin Y., Kang S.J., Lee J. COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: Model development and validation. J. Med Internet Res. 2020;22:e19569. doi: 10.2196/19569. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 144.Mahmud T., Rahman M.A., Fattah S.A. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020;122:103869. doi: 10.1016/j.compbiomed.2020.103869. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Paul R., Schabath M., Gillies R., Hall L., Goldgof D. Convolutional Neural Network ensembles for accurate lung nodule malignancy prediction 2 years in the future. Comput. Biol. Med. 2020;122:103882. doi: 10.1016/j.compbiomed.2020.103882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Philbrick K.A., Yoshida K., Inoue D., Akkus Z., Kline T.L., Weston A.D., Erickson B.J. What does deep learning see? Insights from a classifier trained to predict contrast enhancement phase from CT images. Am. J. Roentgenol. 2018;211:1184–1193. doi: 10.2214/AJR.18.20331. [DOI] [PubMed] [Google Scholar]
  • 147.Qin R., Wang Z., Jiang L., Qiao K., Hai J., Chen J., Yan B. Fine-Grained Lung Cancer Classification from PET and CT Images Based on Multidimensional Attention Mechanism. Complexity. 2020;2020:6153657. doi: 10.1155/2020/6153657. [DOI] [Google Scholar]
  • 148.Teramoto A., Yamada A., Kiriyama Y., Tsukamoto T., Yan K., Zhang L., Fujita H. Automated classification of benign and malignant cells from lung cytological images using deep convolutional neural network. Inform. Med. Unlocked. 2019;16:100205. doi: 10.1016/j.imu.2019.100205. [DOI] [Google Scholar]
  • 149.Xu R., Cong Z., Ye X., Hirano Y., Kido S., Gyobu T., Tomiyama N. Pulmonary textures classification via a multi-scale attention network. IEEE J. Biomed. Health Inform. 2019;24:2041–2052. doi: 10.1109/JBHI.2019.2950006. [DOI] [PubMed] [Google Scholar]
  • 150.Vila-Blanco N., Carreira M.J., Varas-Quintana P., Balsa-Castro C., Tomas I. Deep neural networks for chronological age estimation from OPG images. IEEE Trans. Med Imaging. 2020;39:2374–2384. doi: 10.1109/TMI.2020.2968765. [DOI] [PubMed] [Google Scholar]
  • 151.Kim M., Han J.C., Hyun S.H., Janssens O., Van Hoecke S., Kee C., De Neve W. Medinoid: Computer-aided diagnosis and localization of glaucoma using deep learning. Appl. Sci. 2019;9:3064. doi: 10.3390/app9153064. [DOI] [Google Scholar]
  • 152.Martins J., Cardoso J.S., Soares F. Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices. Comput. Methods Programs Biomed. 2020;192:105341. doi: 10.1016/j.cmpb.2020.105341. [DOI] [PubMed] [Google Scholar]
  • 153.Meng Q., Hashimoto Y., Satoh S. How to extract more information with less burden: Fundus image classification and retinal disease localization with ophthalmologist intervention. IEEE J. Biomed. Health Inform. 2020;24:3351–3361. doi: 10.1109/JBHI.2020.3011805. [DOI] [PubMed] [Google Scholar]
  • 154.Wang R., Fan D., Lv B., Wang M., Zhou Q., Lv C., Xie G., Wang L. OCT image quality evaluation based on deep and shallow features fusion network; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1561–1564. [Google Scholar]
  • 155.Zhang R., Tan S., Wang R., Manivannan S., Chen J., Lin H., Zheng W.S. Biomarker localization by combining CNN classifier and generative adversarial network; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference; Shenzhen, China. 13–17 October 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 209–217. Proceedings, Part I 22. [Google Scholar]
  • 156.Chen X., Lin L., Liang D., Hu H., Zhang Q., Iwamoto Y., Han X.H., Chen Y.W., Tong R., Wu J. A dual-attention dilated residual network for liver lesion classification and localization on CT images; Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP); Taipei, Taiwan. 22–25 September 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 235–239. [Google Scholar]
  • 157.Itoh H., Lu Z., Mori Y., Misawa M., Oda M., Kudo S.E., Mori K. Visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endoscytoscopic images based on CNN weights analysis; Proceedings of the Medical Imaging 2020: Computer-Aided Diagnosis; Houston, TX, USA. 16–19 February 2020; Bellingham, WA, USA: SPIE; 2020. pp. 761–768. [Google Scholar]
  • 158.Korbar B., Olofson A.M., Miraflor A.P., Nicka C.M., Suriawinata M.A., Torresani L., Suriawinata A.A., Hassanpour S. Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops; Honolulu, HI, USA. 21–26 July 2017; pp. 69–75. [Google Scholar]
  • 159.Kowsari K., Sali R., Ehsan L., Adorno W., Ali A., Moore S., Amadi B., Kelly P., Syed S., Brown D. Hmic: Hierarchical medical image classification, a deep learning approach. Information. 2020;11:318. doi: 10.3390/info11060318. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Wang J., Cui Y., Shi G., Zhao J., Yang X., Qiang Y., Du Q., Ma Y., Kazihise N.G.F. Multi-branch cross attention model for prediction of KRAS mutation in rectal cancer with t2-weighted MRI. Appl. Intell. 2020;50:2352–2369. doi: 10.1007/s10489-020-01658-8. [DOI] [Google Scholar]
  • 161.Cheng C.T., Ho T.Y., Lee T.Y., Chang C.C., Chou C.C., Chen C.C., Chung I., Liao C.H. Application of a deep learning algorithm for detection and visualization of hip fractures on plain pelvic radiographs. Eur. Radiol. 2019;29:5469–5477. doi: 10.1007/s00330-019-06167-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 162.Gupta V., Demirer M., Bigelow M., Sarah M.Y., Joseph S.Y., Prevedello L.M., White R.D., Erdal B.S. Using transfer learning and class activation maps supporting detection and localization of femoral fractures on anteroposterior radiographs; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1526–1529. [Google Scholar]
  • 163.Zhang B., Tan J., Cho K., Chang G., Deniz C.M. Attention-based cnn for kl grade classification: Data from the osteoarthritis initiative; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 731–735. [Google Scholar]
  • 164.von Schacky C.E., Sohn J.H., Liu F., Ozhinsky E., Jungmann P.M., Nardo L., Posadzy M., Foreman S.C., Nevitt M.C., Link T.M., et al. Development and validation of a multitask deep learning model for severity grading of hip osteoarthritis features on radiographs. Radiology. 2020;295:136–145. doi: 10.1148/radiol.2020190925. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 165.Lee J.H., Ha E.J., Kim D., Jung Y.J., Heo S., Jang Y.H., An S.H., Lee K. Application of deep learning to the diagnosis of cervical lymph node metastasis from thyroid cancer with CT: External validation and clinical utility for resident training. Eur. Radiol. 2020;30:3066–3072. doi: 10.1007/s00330-019-06652-4. [DOI] [PubMed] [Google Scholar]
  • 166.Langner T., Wikström J., Bjerner T., Ahlström H., Kullberg J. Identifying morphological indicators of aging with neural networks on large-scale whole-body MRI. IEEE Trans. Med. Imaging. 2019;39:1430–1437. doi: 10.1109/TMI.2019.2950092. [DOI] [PubMed] [Google Scholar]
  • 167.Li C., Yao G., Xu X., Yang L., Zhang Y., Wu T., Sun J. DCSegNet: Deep learning framework based on divide-and-conquer method for liver segmentation. IEEE Access. 2020;8:146838–146846. doi: 10.1109/ACCESS.2020.3012990. [DOI] [Google Scholar]
  • 168.Mohamed Musthafa M., Mahesh T.R., Vinoth Kumar V., Guluwadi S. Enhancing brain tumor detection in MRI images through explainable AI using Grad-CAM with Resnet 50. BMC Med. Imaging. 2024;24:107. doi: 10.1186/s12880-024-01292-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 169.Wang C.W., Khalil M.A., Lin Y.J., Lee Y.C., Chao T.K. Detection of erbb2 and cen17 signals in fluorescent in situ hybridization and dual in situ hybridization for guiding breast cancer her2 target therapy. Artif. Intell. Med. 2023;141:102568. doi: 10.1016/j.artmed.2023.102568. [DOI] [PubMed] [Google Scholar]
  • 170.Bach S., Binder A., Montavon G., Klauschen F., Müller K.R., Samek W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE. 2015;10:e0130140. doi: 10.1371/journal.pone.0130140. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 171.Montavon G., Lapuschkin S., Binder A., Samek W., Müller K.R. Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognit. 2017;65:211–222. doi: 10.1016/j.patcog.2016.11.008. [DOI] [Google Scholar]
  • 172.Samek W., Binder A., Montavon G., Lapuschkin S., Müller K.R. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 2016;28:2660–2673. doi: 10.1109/TNNLS.2016.2599820. [DOI] [PubMed] [Google Scholar]
  • 173.Kohlbrenner M., Bauer A., Nakajima S., Binder A., Samek W., Lapuschkin S. Towards best practice in explaining neural network decisions with LRP; Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN); Glasgow, UK. 19–24 July 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1–7. [Google Scholar]
  • 174.Arquilla K., Gajera I.D., Darling M., Bhati D., Singh A., Guercio A. Exploring Fine-Grained Feature Analysis for Bird Species Classification using Layer-wise Relevance Propagation; Proceedings of the 2024 IEEE World AI IoT Congress (AIIoT); Melbourne, Australia. 29–31 May 2024; Piscataway, NJ, USA: IEEE; 2024. pp. 625–631. [Google Scholar]
  • 175.Eitel F., Soehler E., Bellmann-Strobl J., Brandt A.U., Ruprecht K., Giess R.M., Kuchling J., Asseyer S., Weygandt M., Haynes J.D., et al. Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation. Neuroimage Clin. 2019;24:102003. doi: 10.1016/j.nicl.2019.102003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 176.Thomas A.W., Heekeren H.R., Müller K.R., Samek W. Analyzing neuroimaging data through recurrent deep learning models. Front. Neurosci. 2019;13:1321. doi: 10.3389/fnins.2019.01321. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Schlemper J., Oktay O., Schaap M., Heinrich M., Kainz B., Glocker B., Rueckert D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019;53:197–207. doi: 10.1016/j.media.2019.01.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 178.Katar O., Yildirim O. An explainable vision transformer model based white blood cells classification and localization. Diagnostics. 2023;13:2459. doi: 10.3390/diagnostics13142459. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 179.Jetley S., Lord N.A., Lee N., Torr P.H. Learn to pay attention. arXiv. 2018. arXiv:1804.02391 [Google Scholar]
  • 180.Li S., Dong M., Du G., Mu X. Attention dense-u-net for automatic breast mass segmentation in digital mammogram. IEEE Access. 2019;7:59037–59047. doi: 10.1109/ACCESS.2019.2914873. [DOI] [Google Scholar]
  • 181.Yan Y., Kawahara J., Hamarneh G. Melanoma recognition via visual attention; Proceedings of the Information Processing in Medical Imaging: 26th International Conference, IPMI 2019; Hong Kong, China. 2–7 June 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 793–804. Proceedings 26. [Google Scholar]
  • 182.Górriz M., Antony J., McGuinness K., Giró-i Nieto X., O’Connor N.E. Assessing knee OA severity with CNN attention-based end-to-end architectures; Proceedings of the International Conference on Medical Imaging with Deep Learning, PMLR; London, UK. 8–10 July 2019; pp. 197–214. [Google Scholar]
  • 183.Xu X., Li C., Fan X., Lan X., Lu X., Ye X., Wu T. Attention Mask R-CNN with edge refinement algorithm for identifying circulating genetically abnormal cells. Cytom. Part A. 2023;103:227–239. doi: 10.1002/cyto.a.24682. [DOI] [PubMed] [Google Scholar]
  • 184.Bramlage L., Cortese A. Generalized attention-weighted reinforcement learning. Neural Netw. 2022;145:10–21. doi: 10.1016/j.neunet.2021.09.023. [DOI] [PubMed] [Google Scholar]
  • 185.Chen Q., Shao Q. Single image super-resolution based on trainable feature matching attention network. Pattern Recognit. 2024;149:110289. doi: 10.1016/j.patcog.2024.110289. [DOI] [Google Scholar]
  • 186.Dubost F., Adams H., Yilmaz P., Bortsova G., van Tulder G., Ikram M.A., Niessen W., Vernooij M.W., de Bruijne M. Weakly supervised object detection with 2D and 3D regression neural networks. Med. Image Anal. 2020;65:101767. doi: 10.1016/j.media.2020.101767. [DOI] [PubMed] [Google Scholar]
  • 187.Lian C., Liu M., Wang L., Shen D. End-to-end dementia status prediction from brain mri using multi-task weakly-supervised attention network; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference; Shenzhen, China. 13–17 October 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 158–167. Proceedings, Part IV 22. [PMC free article] [PubMed] [Google Scholar]
  • 188.Wang H., Feng J., Zhang Z., Su H., Cui L., He H., Liu L. Breast mass classification via deeply integrating the contextual information from multi-view data. Pattern Recognit. 2018;80:42–52. doi: 10.1016/j.patcog.2018.02.026. [DOI] [Google Scholar]
  • 189.Li L., Xu M., Liu H., Li Y., Wang X., Jiang L., Wang Z., Fan X., Wang N. A large-scale database and a CNN model for attention-based glaucoma detection. IEEE Trans. Med Imaging. 2019;39:413–424. doi: 10.1109/TMI.2019.2927226. [DOI] [PubMed] [Google Scholar]
  • 190.Yang H., Kim J.Y., Kim H., Adhikari S.P. Guided soft attention network for classification of breast cancer histopathology images. IEEE Trans. Med Imaging. 2019;39:1306–1315. doi: 10.1109/TMI.2019.2948026. [DOI] [PubMed] [Google Scholar]
  • 191.Pesce E., Withey S.J., Ypsilantis P.P., Bakewell R., Goh V., Montana G. Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Med. Image Anal. 2019;53:26–38. doi: 10.1016/j.media.2018.12.007. [DOI] [PubMed] [Google Scholar]
  • 192.Singla S., Gong M., Ravanbakhsh S., Sciurba F., Poczos B., Batmanghelich K.N. Subject2Vec: Generative-discriminative approach from a set of image patches to a vector; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference; Granada, Spain. 16–20 September 2018; Berlin/Heidelberg, Germany: Springer; 2018. pp. 502–510. Proceedings, Part I. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 193.Sun J., Darbehani F., Zaidi M., Wang B. Saunet: Shape attentive u-net for interpretable medical image segmentation; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference; Lima, Peru. 4–8 October 2020; Berlin/Heidelberg, Germany: Springer; 2020. pp. 797–806. Proceedings, Part IV 23. [Google Scholar]
  • 194.Zhu Z., Ding X., Zhang D., Wang L. Weakly-supervised balanced attention network for gastric pathology image localization and classification; Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); Iowa City, IA, USA. 3–7 April 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1–4. [Google Scholar]
  • 195.Barata C., Celebi M.E., Marques J.S. Explainable skin lesion diagnosis using taxonomies. Pattern Recognit. 2021;110:107413. doi: 10.1016/j.patcog.2020.107413. [DOI] [Google Scholar]
  • 196.Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., Dehghani M., Minderer M., Heigold G., Gelly S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv. 2020. arXiv:2010.11929 [Google Scholar]
  • 197.Srivastava A., Chandra M., Saha A., Saluja S., Bhati D. Current Advances in Locality-Based and Feature-Based Transformers: A Review; Proceedings of the International Conference on Data & Information Sciences; Edinburgh, UK. 11–13 August 2023; Berlin/Heidelberg, Germany: Springer; 2023. pp. 321–335. [Google Scholar]
  • 198.Wu J., Mi Q., Zhang Y., Wu T. SVTNet: Automatic bone age assessment network based on TW3 method and vision transformer. Int. J. Imaging Syst. Technol. 2024;34:e22990. doi: 10.1002/ima.22990. [DOI] [Google Scholar]
  • 199.Park S., Kim G., Oh Y., Seo J.B., Lee S.M., Kim J.H., Moon S., Lim J.K., Park C.M., Ye J.C. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation. Nat. Commun. 2022;13:3848. doi: 10.1038/s41467-022-31514-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 200.Chen J., Frey E.C., He Y., Segars W.P., Li Y., Du Y. Transmorph: Transformer for unsupervised medical image registration. Med. Image Anal. 2022;82:102615. doi: 10.1016/j.media.2022.102615. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 201.Gupte S.R., Hou C., Wu G.H., Galaz-Montoya J.G., Chiu W., Yeung-Levy S. CryoViT: Efficient Segmentation of Cryogenic Electron Tomograms with Vision Foundation Models. bioRxiv. 2024 doi: 10.1101/2024.06.26.600701. [DOI] [Google Scholar]
  • 202.Chen J., Lu Y., Yu Q., Luo X., Adeli E., Wang Y., Lu L., Yuille A.L., Zhou Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv. 2021. arXiv:2102.04306 [Google Scholar]
  • 203.Karimi D., Vasylechko S.D., Gholipour A. Convolution-free medical image segmentation using transformers; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference; Strasbourg, France. 27 September–1 October 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 78–88. Proceedings, Part I 24. [Google Scholar]
  • 204.Yun B., Wang Y., Chen J., Wang H., Shen W., Li Q. Spectr: Spectral transformer for hyperspectral pathology image segmentation. arXiv. 2021. arXiv:2103.03604. doi: 10.1109/TCSVT.2023.3326196. [DOI] [Google Scholar]
  • 205.Wang W., Chen C., Ding M., Yu H., Zha S., Li J. Transbts: Multimodal brain tumor segmentation using transformer; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Strasbourg, France. 27 September–1 October 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 109–119. [Google Scholar]
  • 206.Hatamizadeh A., Tang Y., Nath V., Yang D., Myronenko A., Landman B., Roth H.R., Xu D. Unetr: Transformers for 3d medical image segmentation; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision; Waikoloa, HI, USA. 3–8 January 2022; pp. 574–584. [Google Scholar]
  • 207.Li S., Sui X., Luo X., Xu X., Liu Y., Goh R. Medical image segmentation using squeeze-and-expansion transformers. arXiv. 2021. arXiv:2105.09511 [Google Scholar]
  • 208.Zhang Y., Higashita R., Fu H., Xu Y., Zhang Y., Liu H., Zhang J., Liu J. A multi-branch hybrid transformer network for corneal endothelial cell segmentation; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference; Strasbourg, France. 27 September–1 October 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 99–108. Proceedings, Part I 24. [Google Scholar]
  • 209.Lin A., Chen B., Xu J., Zhang Z., Lu G., Zhang D. Ds-transunet: Dual swin transformer u-net for medical image segmentation. IEEE Trans. Instrum. Meas. 2022;71:1–15. doi: 10.1109/TIM.2022.3178991. [DOI] [Google Scholar]
  • 210.Li Y., Cai W., Gao Y., Li C., Hu X. More than encoder: Introducing transformer decoder to upsample; Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); Las Vegas, NV, USA. 6–8 December 2022; Piscataway, NJ, USA: IEEE; 2022. pp. 1597–1602. [Google Scholar]
  • 211.Xu G., Zhang X., He X., Wu X. Levit-unet: Make faster encoders with transformer for medical image segmentation; Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV); Xiamen, China. 13–15 October 2023; Berlin/Heidelberg, Germany: Springer; 2023. pp. 42–53. [Google Scholar]
  • 212.Chang Y., Menghan H., Guangtao Z., Xiao-Ping Z. Transclaw u-net: Claw u-net with transformers for medical image segmentation. arXiv. 2021. arXiv:2107.05188 [Google Scholar]
  • 213.Cao H., Wang Y., Chen J., Jiang D., Zhang X., Tian Q., Wang M. Swin-unet: Unet-like pure transformer for medical image segmentation; Proceedings of the European Conference on Computer Vision; Tel Aviv, Israel. 23–27 October 2022; Berlin/Heidelberg, Germany: Springer; 2022. pp. 205–218. [Google Scholar]
  • 214.Petit O., Thome N., Rambour C., Themyr L., Collins T., Soler L. U-net transformer: Self and cross attention for medical image segmentation; Proceedings of the Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021; Strasbourg, France. 27 September 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 267–276. Proceedings 12. [Google Scholar]
  • 215.Xie Y., Zhang J., Shen C., Xia Y. Cotr: Efficiently bridging cnn and transformer for 3d medical image segmentation; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference; Strasbourg, France. 27 September–1 October 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 171–180. Proceedings, Part III 24. [Google Scholar]
  • 216.Gao Y., Zhou M., Metaxas D.N. UTNet: A hybrid transformer architecture for medical image segmentation; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference; Strasbourg, France. 27 September–1 October 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 61–71. Proceedings, Part III 24. [Google Scholar]
  • 217.Chen B., Liu Y., Zhang Z., Lu G., Kong A.W.K. Transattunet: Multi-level attention-guided u-net with transformer for medical image segmentation. IEEE Trans. Emerg. Top. Comput. Intell. 2023 doi: 10.1109/TETCI.2023.3309626. [DOI] [Google Scholar]
  • 218.Dong B., Wang W., Fan D.P., Li J., Fu H., Shao L. Polyp-pvt: Polyp segmentation with pyramid vision transformers. arXiv. 2021. arXiv:2108.06932. doi: 10.26599/AIR.2023.9150015. [DOI] [Google Scholar]
  • 219.Shen Z., Yang H., Zhang Z., Zheng S. International Challenge on Kidney and Kidney Tumor Segmentation. Springer; Berlin/Heidelberg, Germany: 2021. Automated kidney tumor segmentation with convolution and transformer network; pp. 1–12. [Google Scholar]
  • 220.Deng K., Meng Y., Gao D., Bridge J., Shen Y., Lip G., Zhao Y., Zheng Y. Transbridge: A lightweight transformer for left ventricle segmentation in echocardiography; Proceedings of the Simplifying Medical Ultrasound: Second International Workshop, ASMUS 2021, Held in Conjunction with MICCAI 2021; Strasbourg, France. 27 September 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 63–72. Proceedings 2. [Google Scholar]
  • 221.Jia Q., Shu H. Bitr-unet: A cnn-transformer combined network for mri brain tumor segmentation; Proceedings of the International MICCAI Brainlesion Workshop; Singapore. 27 September 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 3–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 222.Hatamizadeh A., Nath V., Tang Y., Yang D., Roth H.R., Xu D. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images; Proceedings of the International MICCAI Brainlesion Workshop; Singapore. 27 September 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 272–284. [Google Scholar]
  • 223.Li Y., Wang S., Wang J., Zeng G., Liu W., Zhang Q., Jin Q., Wang Y. Gt u-net: A u-net like group transformer network for tooth root segmentation; Proceedings of the Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021; Strasbourg, France. 27 September 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 386–395. Proceedings 12. [Google Scholar]
  • 224.Gheflati B., Rivaz H. Vision transformers for classification of breast ultrasound images; Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Glasgow, UK. 11–15 July 2022; Piscataway, NJ, USA: IEEE; 2022. pp. 480–483. [DOI] [PubMed] [Google Scholar]
  • 225.Zheng Y., Gindra R., Betke M., Beane J.E., Kolachalama V.B. A deep learning based graph-transformer for whole slide image classification. medRxiv. 2021 doi: 10.1101/2021.10.15.21265060. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 226.Yu S., Ma K., Bi Q., Bian C., Ning M., He N., Li Y., Liu H., Zheng Y. Mil-vt: Multiple instance learning enhanced vision transformer for fundus image classification; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference; Strasbourg, France. 27 September–1 October 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 45–54. Proceedings, Part VIII 24. [Google Scholar]
  • 227.Sun R., Li Y., Zhang T., Mao Z., Wu F., Zhang Y. Lesion-aware transformers for diabetic retinopathy grading; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Nashville, TN, USA. 20–25 June 2021; pp. 10938–10947. [Google Scholar]
  • 228.Perera S., Adhikari S., Yilmaz A. Pocformer: A lightweight transformer architecture for detection of covid-19 using point of care ultrasound; Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP); Anchorage, AK, USA. 19–22 September 2021; Piscataway, NJ, USA: IEEE; 2021. pp. 195–199. [Google Scholar]
  • 229.Park S., Kim G., Kim J., Kim B., Ye J.C. Federated split vision transformer for COVID-19 CXR diagnosis using task-agnostic training. arXiv. 2021. arXiv:2111.01338. [Google Scholar]
  • 230.Shome D., Kar T., Mohanty S.N., Tiwari P., Muhammad K., AlTameem A., Zhang Y., Saudagar A.K.J. Covid-transformer: Interpretable COVID-19 detection using vision transformer for healthcare. Int. J. Environ. Res. Public Health. 2021;18:11086. doi: 10.3390/ijerph182111086. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 231.Liu C., Yin Q. Automatic diagnosis of COVID-19 using a tailored transformer-like network. J. Phys. Conf. Ser. 2021;2010:012175. IOP Publishing; Bristol, UK. [Google Scholar]
  • 232.Park S., Kim G., Oh Y., Seo J.B., Lee S.M., Kim J.H., Moon S., Lim J.K., Ye J.C. Vision transformer for COVID-19 cxr diagnosis using chest X-ray feature corpus. arXiv. 2021. arXiv:2103.07055. [Google Scholar]
  • 233.Gao X., Qian Y., Gao A. COVID-VIT: Classification of COVID-19 from CT chest images based on vision transformer models. arXiv. 2021. arXiv:2107.01682. [Google Scholar]
  • 234.Mondal A.K., Bhattacharjee A., Singla P., Prathosh A. xViTCOS: Explainable vision transformer based COVID-19 screening using radiography. IEEE J. Transl. Eng. Health Med. 2021;10:1–10. doi: 10.1109/JTEHM.2021.3134096. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 235.Hsu C.C., Chen G.L., Wu M.H. Visual transformer with statistical test for covid-19 classification. arXiv. 2021. arXiv:2107.05334. [Google Scholar]
  • 236.Zhang L., Wen Y. A transformer-based framework for automatic COVID-19 diagnosis in chest CTs; Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, QC, Canada. 11–17 October 2021; pp. 513–518. [Google Scholar]
  • 237.Ambita A.A.E., Boquio E.N.V., Naval P.C., Jr. CoViT-GAN: Vision transformer for COVID-19 detection in CT scan images with self-attention GAN for data augmentation; Proceedings of the International Conference on Artificial Neural Networks; Bratislava, Slovakia. 14 September 2021; Berlin/Heidelberg, Germany: Springer; 2021. pp. 587–598. [Google Scholar]
  • 238.Zhang Y., Pan X., Li C., Wu T. 3D liver and tumor segmentation with CNNs based on region and distance metrics. Appl. Sci. 2020;10:3794. doi: 10.3390/app10113794. [DOI] [Google Scholar]
  • 239.Azad R., Kazerouni A., Heidari M., Aghdam E.K., Molaei A., Jia Y., Jose A., Roy R., Merhof D. Advances in medical image analysis with vision transformers: A comprehensive review. Med. Image Anal. 2023:103000. doi: 10.1016/j.media.2023.103000. [DOI] [PubMed] [Google Scholar]
  • 240.Li Z., Li Y., Li Q., Wang P., Guo D., Lu L., Jin D., Zhang Y., Hong Q. Lvit: Language meets vision transformer in medical image segmentation. IEEE Trans. Med. Imaging. 2023 doi: 10.1109/TMI.2023.3291719. [DOI] [PubMed] [Google Scholar]

Data Availability Statement

All data are presented in the main text.
