Radiology: Artificial Intelligence. 2023 Dec 13;6(1):e230488. doi: 10.1148/ryai.230488

Seeing Is Not Always Believing: Discrepancies in Saliency Maps

Masahiro Yanagawa, Junya Sato
PMCID: PMC10831517  PMID: 38166327

See also the article by Zhang et al in this issue.

Masahiro Yanagawa, MD, PhD, is an associate professor of radiology at Osaka University Graduate School of Medicine. His research focuses on using advanced quantitative CT techniques for thoracic oncology. He has led national grants, published numerous articles and chapters, and won 20 awards. He serves on the editorial board of the chest section of European Radiology and is an associate editor for Japanese Journal of Radiology and Radiology: Artificial Intelligence.

Junya Sato, MD, is a diagnostic radiology resident and a graduate student at Osaka University Graduate School of Medicine. As a Kaggle competition master, he has been a successful contender in many machine learning challenges. His research interests include AI for clinical assistance, vision and language models, human-in-the-loop AI, and radiogenomics.

Explainable artificial intelligence (AI) plays a critical role in providing clinicians with the underlying rationale for diagnoses. In routine clinical practice, a suspected diagnosis must be justified with a clear reason; when radiologists interpret CT images, they focus on characteristic parts of the images as discriminative clues, commonly evaluating lesion morphology, such as margins and internal structure, to determine whether a lesion is benign or malignant. Clinicians make diagnoses based on a comprehensive set of patient data that includes physical observations, clinical tests, and imaging findings; the lack of such evidence can lead to serious misdiagnosis and patient harm. Although deep learning models have made significant advances as diagnostic support systems for medical imaging, their complicated structures and large numbers of parameters have turned them into “black boxes” whose reasoning remains opaque to clinicians. Making an AI model's diagnostic process transparent is an issue that must be addressed.

Visualization techniques based on saliency maps have become a popular means of clarifying the decision-making process (1,2). A saliency map indicates which parts of the input image contributed to the model's predicted class, typically by computing the gradient of the class score with respect to the input pixels. However, saliency maps do not necessarily indicate the true causes of a prediction. Several studies have reported that saliency maps can be noisy (3) and can localize pathology significantly worse than humans do (4). Although many AI studies using radiologic images show a few examples from their datasets for visualization, quantitative evaluation has received little emphasis. A study published in this journal reported issues with the reproducibility and robustness of saliency maps based on quantitative analysis (5), but the correlation between saliency maps and model predictions remained unknown.
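For readers unfamiliar with how such maps are produced, the following is a minimal sketch of vanilla gradient saliency in the spirit of Simonyan et al (1), assuming a PyTorch image classifier; the model, input tensor, and target class are placeholders rather than components of any specific study.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: gradient of the class score with respect
    to the input pixels, reduced to one value per pixel."""
    model.eval()
    image = image.clone().requires_grad_(True)   # image: (1, C, H, W) tensor
    score = model(image)[0, target_class]        # scalar logit for the class
    score.backward()                             # populates image.grad
    # Take the maximum absolute gradient across color channels.
    saliency, _ = image.grad.abs().max(dim=1)    # shape (1, H, W)
    return saliency.squeeze(0)
```

The map is nothing more than this gradient, which is precisely why it cannot be assumed to constitute a diagnostic rationale.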

In this issue of Radiology: Artificial Intelligence, Zhang and colleagues introduce a novel evaluation metric, the prediction-saliency correlation (PSC), and quantitatively investigate how saliency maps vary with changes in a model's predictions (6). They validated seven saliency methods, including a commercial one, and showed that the resulting maps were sensitive to subtle input perturbations. Specifically, the authors introduced minimal noise into the input images and evaluated the sensitivity of the saliency maps to changes in the predicted class, as well as the models' robustness when the predictions remained consistent. This experiment revealed low sensitivity (maximum PSC = 0.25; 95% CI: 0.12, 0.38) and weak robustness (maximum PSC = 0.12; 95% CI: 0.0, 0.25) on the CheXpert dataset. Meanwhile, even experienced radiologists were able to detect the altered images (ie, the subtle perturbation added to the images) in only 45% of cases.
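The exact PSC formulation belongs to Zhang et al (6); the sketch below, which reuses the gradient_saliency helper above, is only an illustrative approximation of the underlying idea: perturb an input, record how much the prediction changes and how much the saliency map changes, and correlate the two quantities across many perturbations. The noise level, number of perturbations, and dissimilarity measure are arbitrary choices made for illustration, not the authors' protocol.

```python
import numpy as np
import torch

def saliency_change(map_a, map_b):
    """Dissimilarity between two saliency maps (1 minus Pearson correlation)."""
    a = map_a.flatten().cpu().numpy()
    b = map_b.flatten().cpu().numpy()
    return 1.0 - np.corrcoef(a, b)[0, 1]

def psc_like_score(model, image, target_class, n_perturbations=50, noise_std=0.01):
    """Correlate prediction changes with saliency-map changes under added noise.
    Mimics the spirit of the PSC metric; not the published definition."""
    base_prob = torch.softmax(model(image), dim=1)[0, target_class].item()
    base_map = gradient_saliency(model, image, target_class).detach()
    d_pred, d_map = [], []
    for _ in range(n_perturbations):
        noisy = image + noise_std * torch.randn_like(image)
        prob = torch.softmax(model(noisy), dim=1)[0, target_class].item()
        new_map = gradient_saliency(model, noisy, target_class).detach()
        d_pred.append(abs(prob - base_prob))     # how much the prediction moved
        d_map.append(saliency_change(base_map, new_map))  # how much the map moved
    return np.corrcoef(d_pred, d_map)[0, 1]
```

A score near 1 would suggest that the explanation tracks the prediction; the low values reported by Zhang et al indicate that, for current methods, it often does not.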

A notable contribution of this study is the definition of the PSC metric, which quantifies the relationship between the saliency map and the model's predictions regardless of the model structure. Even when a model's prediction seems to match the saliency map, the map might not capture the underlying rationale for that prediction. Ideally, saliency maps should adapt to changes in classification predictions caused by variations in the input images. By investigating this underexplored correlation, the authors demonstrated a low correlation between the saliency maps and the prediction values. A previous study using ophthalmic images supports this conclusion: providing both model predictions and saliency maps did not improve clinicians’ diagnostic performance compared with model predictions alone (7).

Given the results of previous reports, the question arises: How can we achieve genuine clinical explainability? One option is attention-based architectures: the vision transformer does not use convolutional operations, and saliency maps derived from such attention-based models have been reported to outperform those from traditional convolutional neural networks (8). In addition, there are other visualization methods, such as Shapley additive explanations (SHAP) (9) and local interpretable model-agnostic explanations (LIME) (10), that do not depend on saliency maps. Another important way to improve explainability is to collect additional diagnostic information and incorporate it into the model training process. Just as residents learn the rationale behind diagnoses from experts, deep learning models could benefit not only from classification labels but also from the human reasoning behind decisions. Multimodal training that integrates such diagnostic information may provide more reliable output. Although implementing such training requires substantial data and significant effort from radiologists, the burden can be reduced by promoting the development of platforms and protocols for international data sharing and secondary use of clinical information.
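Neither SHAP nor LIME is reproduced here; as a purely conceptual illustration of the model-agnostic, perturbation-based idea behind LIME (10), the sketch below divides an image into a coarse grid of patches (rather than the superpixels used by the original method), randomly hides subsets of patches, and fits a linear surrogate that assigns each patch an importance weight. The function predict_fn and the patch size are hypothetical placeholders.

```python
import numpy as np

def lime_style_patch_importance(image, predict_fn, patch=32, n_samples=500, rng=None):
    """Rough LIME-style attribution: randomly hide grid patches and fit a
    linear model from the on/off patch masks to the classifier's output.
    predict_fn maps an (H, W, C) array to a scalar probability (placeholder)."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    gy, gx = h // patch, w // patch                  # grid dimensions
    n_patches = gy * gx
    masks = rng.integers(0, 2, size=(n_samples, n_patches))  # 1 = keep patch
    probs = np.empty(n_samples)
    for i, m in enumerate(masks):
        masked = image.copy()
        for p in np.where(m == 0)[0]:                # gray out hidden patches
            y, x = divmod(p, gx)
            masked[y*patch:(y+1)*patch, x*patch:(x+1)*patch] = image.mean()
        probs[i] = predict_fn(masked)
    # Least-squares linear surrogate: one weight per patch = its importance.
    X = np.hstack([masks, np.ones((n_samples, 1))])
    weights, *_ = np.linalg.lstsq(X, probs, rcond=None)
    return weights[:n_patches].reshape(gy, gx)
```

The appeal of such local surrogates is that the learned weights are directly readable as per-region importance, without access to the model's gradients or internal structure.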

In conclusion, Zhang et al proposed a quantitative and model-agnostic evaluation method to assess the validity of saliency maps as AI explanations. After adding subtle perturbations imperceptible to clinicians, they found the correlation between model predictions and saliency maps to be low across various publicly available datasets and saliency methods. Although a saliency map may provide clues to an AI model's diagnostic process, clinicians who use AI-assisted diagnostic systems must understand that saliency maps are merely the result of computing gradients and do not necessarily serve as a rationale for disease prediction. There is a pressing need for further research to improve the trustworthiness of explainable AI. Truly explainable AI should instill greater confidence in its predictions and enable the translation of those predictions into optimal patient management.

Footnotes

The authors declared no funding for this work.

Disclosures of conflicts of interest: M.Y. Support from Grants-in-Aid for Scientific Research–KAKENHI (JP21K07672, JP21H03840, JP22K07769) and Japan Agency for Medical Research and Development (JP20gk0110051); associate editor of Radiology: Artificial Intelligence. J.S. No relevant relationships.

References

  • 1. Simonyan K, Vedaldi A, Zisserman A. Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv 1312.6034 [preprint]. https://arxiv.org/abs/1312.6034. Published December 20, 2013. Updated April 19, 2014. Accessed October 30, 2023.
  • 2. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE; 2017:618–626.
  • 3. Kim B, Seo J, Jeon S, Koo J, Choe J, Jeon T. Why are saliency maps noisy? Cause of and solution to noisy saliency maps. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE; 2019:4149–4157.
  • 4. Saporta A, Gui X, Agrawal A, et al. Benchmarking saliency methods for chest X-ray interpretation. Nat Mach Intell 2022;4(10):867–878.
  • 5. Arun N, Gaw N, Singh P, et al. Assessing the trustworthiness of saliency maps for localizing abnormalities in medical imaging. Radiol Artif Intell 2021;3(6):e200267.
  • 6. Zhang J, Chao H, Dasegowda G, et al. Revisiting the trustworthiness of saliency methods in radiology AI. Radiol Artif Intell 2024;6(1):e220221.
  • 7. Sayres R, Taly A, Rahimy E, et al. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology 2019;126(4):552–564.
  • 8. Wollek A, Graf R, Čečatka S, et al. Attention-based saliency maps improve interpretability of pneumothorax classification. Radiol Artif Intell 2022;5(2):e220187.
  • 9. Lundberg SM, Lee SI. A unified approach to interpreting model predictions. arXiv 1705.07874 [preprint]. https://arxiv.org/abs/1705.07874. Published May 22, 2017. Updated November 25, 2017. Accessed October 30, 2023.
  • 10. Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY: Association for Computing Machinery; 2016:1135–1144.
