2026 Jan 22;26(2):743. doi: 10.3390/s26020743
Algorithm 1 Coordinate Attention (CA)
  •  Input: feature map $X \in \mathbb{R}^{C \times H \times W}$

  •  Output: feature map $Y \in \mathbb{R}^{C \times H \times W}$

  •   1:

    # Global average pooling along the width and height directions, respectively

    $z^h = \frac{1}{W} \sum_{j=1}^{W} X(:,:,j)$    # Height descriptor $z^h \in \mathbb{R}^{C \times H}$

    $z^w = \frac{1}{H} \sum_{i=1}^{H} X(:,i,:)$    # Width descriptor $z^w \in \mathbb{R}^{C \times W}$

  •   2:

    # Concatenate and transform via a shared 1D convolution

    $z = \mathrm{Concat}(z^h, z^w)$    # $z \in \mathbb{R}^{C \times (H+W)}$

    $f = \delta(\mathrm{Conv1D}(z))$    # $f \in \mathbb{R}^{(C/r) \times (H+W)}$, with reduction ratio $r$

  •   3:

    # Split and apply sigmoid activation

    $f^h, f^w = \mathrm{Split}(f, [H, W])$

    $g^h = \sigma(\mathrm{Conv1D}_h(f^h))$    # Height attention $g^h \in \mathbb{R}^{C \times H}$

    $g^w = \sigma(\mathrm{Conv1D}_w(f^w))$    # Width attention $g^w \in \mathbb{R}^{C \times W}$

  •   4:

    # Apply attention weights to the input features

    $Y(:,i,j) = X(:,i,j) \odot g^h(:,i) \odot g^w(:,j)$
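The four steps of Algorithm 1 can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the 1D convolutions are stood in by random $1{\times}1$ (matrix) weights `W1`, `Wh`, `Ww`, the nonlinearity $\delta$ is taken to be ReLU, and the function name and `r`/`seed` parameters are our own choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(X, r=8, seed=0):
    """Sketch of the Coordinate Attention forward pass on X of shape (C, H, W).

    Weights are random stand-ins for the learned 1x1 convolutions;
    r is the channel reduction ratio from Step 2.
    """
    C, H, W = X.shape
    Cr = max(C // r, 1)
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((Cr, C)) * 0.1   # shared conv: C -> C/r
    Wh = rng.standard_normal((C, Cr)) * 0.1   # height-branch conv: C/r -> C
    Ww = rng.standard_normal((C, Cr)) * 0.1   # width-branch conv: C/r -> C

    # Step 1: directional average pooling
    zh = X.mean(axis=2)                        # (C, H): average over width
    zw = X.mean(axis=1)                        # (C, W): average over height

    # Step 2: concatenate along the spatial axis and transform
    z = np.concatenate([zh, zw], axis=1)       # (C, H+W)
    f = np.maximum(W1 @ z, 0.0)                # (C/r, H+W), delta = ReLU here

    # Step 3: split back into the two directions, then sigmoid-gate
    fh, fw = f[:, :H], f[:, H:]                # (C/r, H) and (C/r, W)
    gh = sigmoid(Wh @ fh)                      # (C, H) height attention
    gw = sigmoid(Ww @ fw)                      # (C, W) width attention

    # Step 4: reweight each position (i, j) by its height and width gates
    return X * gh[:, :, None] * gw[:, None, :]  # (C, H, W)
```

Because both attention maps lie in $(0,1)$, the output preserves the input's shape while attenuating each location by its row and column scores.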