Sensors. 2023 Jan 5;23(2):634. doi: 10.3390/s23020634

Table 1. Summary of explainable AI techniques classified according to Section 3.

Explanation Type Paper Technique Intrinsic Post Hoc Global Local Model-Specific Model-Agnostic
Feature [28] BP * * *
[29] Guided-BP * * *
[30] Deconv Network * * *
[31] LRP * * *
[32] CAM * * *
[33] Grad-CAM * * *
[34] LIME * * *
[35] GraphLIME * * *
[36] SHAP * * *
[37] Attention * * *
Example-based [38] ProtoPNet * * *
[39] Triplet Network * * * *
[5] xDNN * * *
Textual [40] TCAV * * * *
[41] Image Captioning * * *

“*” indicates that the technique belongs to the corresponding category, as defined in Section 3. BP: backpropagation; CAM: class activation map; LRP: layer-wise relevance propagation; LIME: local interpretable model-agnostic explanations; MuSE: model usability evaluation; SHAP: Shapley additive explanations; xDNN: explainable deep neural network; TCAV: testing with concept activation vectors.
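
To make the taxonomy in Table 1 concrete, the short Python sketch below illustrates one cell of it: a post hoc, local, model-agnostic feature explanation computed with SHAP [36]. This is a minimal sketch assuming the shap and scikit-learn packages are available; the dataset, model, and settings are illustrative and are not taken from the reviewed papers.

    # Minimal sketch (assumed setup: scikit-learn and the shap package installed;
    # the dataset and classifier are illustrative, not from the reviewed papers).
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Fit an arbitrary black-box classifier.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # SHAP as used here is post hoc (applied after training), local (one
    # explanation per sample), and model-agnostic (only the prediction function
    # is queried): Shapley values attribute each prediction to the input features.
    explainer = shap.Explainer(model.predict_proba, X)
    shap_values = explainer(X.iloc[:5])   # explain the first five samples
    print(shap_values.values.shape)       # (samples, features, classes)

Swapping the explainer for a gradient-based method such as Grad-CAM [33] would instead require access to the network internals, which is why those rows of Table 1 fall under the model-specific column.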