J Imaging. 2025 Dec 14;11(12):448. doi: 10.3390/jimaging11120448
Algorithm 1 Explainability Pipeline for DPCSE-Net
Require: Input image x, trained DPCSE-Net model F
1: Compute the feature maps A^k and the class score y^c = F(x)
2: Grad-CAM: compute the importance weights α_k^c and generate the spatial heatmap L^c_GradCAM to visualize class-discriminative regions
3: SE-Attention: extract and normalize the channel-wise attention weights s from the SE module to form H_SE, indicating the most informative feature channels
4: Integrated Gradients: calculate the pixel-level attribution map IG(x) by integrating gradients along the path from a baseline x′ to the input image x
5: Return the visualization set {L^c_GradCAM, H_SE, IG(x)}
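The three explanation maps above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the feature maps A^k, their gradients, and the SE weights s are stand-in random arrays (in practice they come from a forward/backward pass through DPCSE-Net), and the Integrated Gradients step uses a toy linear score function so the result can be checked against the completeness axiom.

```python
import numpy as np

# --- Step 2: Grad-CAM over hypothetical feature maps A^k of shape (K, H, W) ---
def grad_cam(feature_maps, grads):
    """alpha_k^c = spatially averaged gradients; heatmap = ReLU(sum_k alpha_k A^k)."""
    alpha = grads.mean(axis=(1, 2))                               # importance weights α_k^c
    cam = np.maximum((alpha[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                     # normalize to [0, 1]
    return cam

# --- Step 3: normalize SE channel weights s to form H_SE ---
def se_channel_importance(s):
    s = np.asarray(s, dtype=float)
    return s / s.sum()                                            # relative channel importance

# --- Step 4: Integrated Gradients along the straight path from baseline x' to x ---
def integrated_gradients(grad_f, x, baseline, steps=64):
    alphas = (np.arange(steps) + 0.5) / steps                     # midpoint Riemann sum
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps                         # IG(x)

# Toy linear "class score" f(x) = w·x (constant gradient w), so IG reduces
# exactly to w * (x - x') and the completeness axiom is easy to verify.
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w.copy()

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)                                            # baseline x'
ig = integrated_gradients(grad_f, x, baseline)
assert np.isclose(ig.sum(), f(x) - f(baseline))                   # completeness: Σ IG = f(x) - f(x')

rng = np.random.default_rng(0)
A = rng.random((4, 8, 8))                # stand-in feature maps A^k
G = rng.standard_normal((4, 8, 8))       # stand-in gradients ∂y^c/∂A^k
cam = grad_cam(A, G)                     # spatial heatmap L^c_GradCAM
h_se = se_channel_importance([0.2, 0.5, 0.3])
print(cam.shape, h_se, ig)
```

The returned set {cam, h_se, ig} mirrors {L^c_GradCAM, H_SE, IG(x)}: one spatial map, one per-channel importance vector, and one pixel-level attribution map.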