Patterns. 2025 Aug 11;6(9):101340. doi: 10.1016/j.patter.2025.101340

Table 2. Deep-learning-specific and CNN-specific methods

| Method | Brief description | Characteristics |
|---|---|---|
| Gradients/sensitivity (Simonyan et al.,56 Baehrens et al.59) | early back-propagation methods based on raw gradients | salience maps can be noisy and dispersed |
| DeConvNet (Zeiler and Fergus60) | | |
| Guided BackProp (Springenberg et al.61) | | |
| PatternNet (Kindermans et al.62) | | |
| PatternAttribution (Kindermans et al.62) | | |
| SmoothGrad (Smilkov et al.63) | | |
| SmoothGradSquared (Hooker et al.64) | | |
| VarGrad (Adebayo et al.65) | | |
| DeepLIFT (Grad ∗ Input) (Shrikumar et al.66,67) | later back-propagation methods, typically designed to overcome gradient discontinuities | pixel-wise attribution |
| Integrated gradients (IG) (Sundararajan et al.68) | | |
| Expectation gradients (Erion et al.69) | | |
| Layerwise relevance propagation (LRP) (Bach et al.,70 Montavon et al.71) | | |
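The gradient-based methods above all start from the same quantity: the gradient of a class score with respect to the input pixels. A minimal NumPy sketch of vanilla gradient salience and SmoothGrad might look as follows; the toy piecewise-linear "class score" stands in for a CNN output (so the gradient is exact analytically), whereas a real run would obtain the gradient by back-propagation through the network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN class score on an 8x8 "image":
# f(x) = sum(relu(W * x)); its gradient is known in closed form.
W = rng.normal(size=(8, 8))

def class_score_grad(x):
    """Gradient of the toy class score w.r.t. the input pixels."""
    return W * (W * x > 0)

def vanilla_salience(x):
    # Sensitivity map: absolute gradient per pixel (Simonyan-style).
    return np.abs(class_score_grad(x))

def smoothgrad_salience(x, n_samples=50, noise_std=0.1):
    # SmoothGrad: average the salience maps of noisy copies of the input,
    # which suppresses the noise that raw gradient maps tend to show.
    maps = [vanilla_salience(x + rng.normal(scale=noise_std, size=x.shape))
            for _ in range(n_samples)]
    return np.mean(maps, axis=0)

x = rng.normal(size=(8, 8))
s_vanilla = vanilla_salience(x)
s_smooth = smoothgrad_salience(x)
print(s_vanilla.shape, s_smooth.shape)  # (8, 8) (8, 8)
```

Both maps have the same resolution as the input, which is what "pixel-wise attribution" refers to in the table; the variants (SmoothGradSquared, VarGrad, IG, LRP) differ mainly in how the per-pixel gradient signal is aggregated or propagated.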
| Method | Brief description | Characteristics |
|---|---|---|
| Class activation mapping (CAM) (Zhou et al.72) | applies global average pooling to the final convolutional layer and combines the pooled activations with the weights associated with an output decision to create a salience map | • very easy to calculate<br>• requires a very specific CNN architecture<br>• does not give a pixel-wise salience map |
| Grad-CAM (Selvaraju et al.73) | combines CAM with gradient-based methods | inherits characteristics of the combined methods |
| Guided Grad-CAM (Selvaraju et al.73) | | |
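The core of CAM is just a weighted sum of the final convolutional feature maps. A minimal NumPy sketch, with random arrays standing in for the feature maps and class weights that a trained GAP-based network would provide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained network: K channels of spatial activations from the
# last convolutional layer, and the weights connecting the global-average-pooled
# channels to one output class. (Random values here, for illustration only.)
K, H, W = 16, 7, 7
feature_maps = rng.normal(size=(K, H, W))
class_weights = rng.normal(size=K)

def cam(feature_maps, class_weights):
    """Class activation map: per-channel weighted sum, clipped at zero."""
    m = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    return np.maximum(m, 0.0)

salience = cam(feature_maps, class_weights)
print(salience.shape)  # (7, 7)
```

Note that the map has the spatial resolution of the last convolutional layer (7×7 here), not of the input image, which is why the table says CAM "does not give a pixel-wise salience map". Grad-CAM computes the same weighted sum but derives the channel weights from globally averaged gradients of the class score, which removes the requirement for a specific GAP architecture.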
| Method | Brief description | Characteristics |
|---|---|---|
| Network dissection (Bau et al.74) | finds individual CNN units that are associated with pre-defined semantic concepts | to analyze image features, the images need to be semantically segmented and labeled |
| Testing with concept activation vectors (t-CAV) (Kim et al.75) | quantifies how strongly a given class or input is associated with a user-defined concept | needs examples and counterexamples of the concept in order to train a CAV |
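The t-CAV procedure can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions: the activation and gradient arrays are random stand-ins for quantities extracted from a trained network at one layer, and the CAV is taken as the normalized difference of class means rather than the normal of a trained linear classifier as in the original method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer activations for concept examples vs. counterexamples
# (stand-ins for activations extracted from a trained network).
concept_acts = rng.normal(loc=1.0, size=(40, 32))
random_acts = rng.normal(loc=0.0, size=(40, 32))

# Simplified CAV: normalized difference of the two activation means.
# (The original method instead trains a linear classifier to separate the
# two sets and takes the normal to its decision boundary.)
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Gradients of the class score w.r.t. the activations, one per input image
# (stand-ins; a real run back-propagates the score to this layer).
grads = rng.normal(size=(100, 32))

# TCAV score: fraction of inputs whose class score increases when the
# activations move in the direction of the concept.
tcav_score = np.mean(grads @ cav > 0)
print(0.0 <= tcav_score <= 1.0)  # True
```

The score is a fraction in [0, 1]; the need for both concept examples and counterexamples in the table corresponds to the two activation sets used to fit the CAV.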