
Fig. 4.

Structure of our proposed scale attention module with residual connection. Its input is the concatenation of interpolated feature maps at different scales obtained from the decoder. γ denotes the scale-wise attention coefficient. We additionally use a spatial attention block LA* to obtain the pixel-wise scale attention coefficient γ*.
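
The sketch below is a minimal PyTorch illustration of the module described in this caption: decoder feature maps interpolated to a common resolution are concatenated, re-weighted by a scale-wise coefficient γ and a pixel-wise coefficient γ* from a spatial attention block, and added back to the input through a residual connection. The internal layer choices (an SE-style pooling/FC path for γ and 1×1 convolutions for the LA* block), the class name `ScaleAttention`, and all hyperparameters are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of a scale attention module with a residual connection.
# Layer choices inside the gamma and gamma* branches are assumptions.
import torch
import torch.nn as nn


class ScaleAttention(nn.Module):
    def __init__(self, channels_per_scale: int, num_scales: int):
        super().__init__()
        total = channels_per_scale * num_scales
        self.num_scales = num_scales
        self.channels_per_scale = channels_per_scale
        # Scale-wise attention: squeeze spatially, then predict one
        # coefficient gamma per scale (assumed SE-style bottleneck).
        self.scale_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(total, total // 4),
            nn.ReLU(inplace=True),
            nn.Linear(total // 4, num_scales),
            nn.Sigmoid(),
        )
        # Spatial attention block (LA*): predicts a pixel-wise
        # coefficient gamma* for each scale (assumed 1x1 convolutions).
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(total, total // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(total // 4, num_scales, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: concatenation of decoder feature maps interpolated to a
        # common resolution, shape (B, num_scales * C, H, W).
        b, _, h, w = x.shape
        gamma = self.scale_fc(x).view(b, self.num_scales, 1, 1, 1)           # scale-wise
        gamma_star = self.spatial_conv(x).view(b, self.num_scales, 1, h, w)  # pixel-wise
        feats = x.view(b, self.num_scales, self.channels_per_scale, h, w)
        attended = feats * gamma * gamma_star
        # Residual connection: add the re-weighted features back onto
        # the original concatenated input.
        return attended.view(b, -1, h, w) + x


if __name__ == "__main__":
    # Example: four decoder scales, 32 channels each, at 64x64 resolution.
    module = ScaleAttention(channels_per_scale=32, num_scales=4)
    dummy = torch.randn(2, 4 * 32, 64, 64)
    print(module(dummy).shape)  # torch.Size([2, 128, 64, 64])
```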