J Clin Med. 2022 Aug 8;11(15):4625. doi: 10.3390/jcm11154625

Figure 4. The entire classification process and the Swin Transformer architecture. LN: layer normalization; MLP: multilayer perceptron; W-MSA: window multi-head self-attention; SW-MSA: shifted-window multi-head self-attention.
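For orientation, the sketch below shows how the components named in the caption fit together in one Swin Transformer block: LN, then W-MSA (or SW-MSA in the shifted variant) with a residual connection, then LN, then the MLP with a second residual connection. This is a minimal illustrative PyTorch implementation assuming inputs whose height and width divide the window size; it omits the shifted-window attention mask and relative position bias of the full Swin Transformer, and it is not the authors' code.

```python
# Minimal sketch of one Swin Transformer block: LN -> (S)W-MSA -> residual,
# then LN -> MLP -> residual. Illustrative only; omits the attention mask
# that the full model uses to separate wrapped-around windows after shifting.
import torch
import torch.nn as nn


def window_partition(x, ws):
    # (B, H, W, C) -> (num_windows * B, ws * ws, C); assumes H % ws == W % ws == 0
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)


def window_reverse(windows, ws, H, W):
    # Inverse of window_partition: (num_windows * B, ws * ws, C) -> (B, H, W, C)
    B = windows.shape[0] // ((H // ws) * (W // ws))
    x = windows.view(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)


class SwinBlock(nn.Module):
    def __init__(self, dim, num_heads, window_size=7, shift=False, mlp_ratio=4.0):
        super().__init__()
        self.ws = window_size
        self.shift = window_size // 2 if shift else 0
        self.norm1 = nn.LayerNorm(dim)                     # LN before attention
        self.attn = nn.MultiheadAttention(dim, num_heads,  # W-MSA / SW-MSA
                                          batch_first=True)
        self.norm2 = nn.LayerNorm(dim)                     # LN before MLP
        self.mlp = nn.Sequential(                          # MLP (two linear layers)
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):  # x: (B, H, W, C)
        B, H, W, C = x.shape
        shortcut = x
        x = self.norm1(x)
        if self.shift:  # SW-MSA: cyclically shift features before windowing
            x = torch.roll(x, shifts=(-self.shift, -self.shift), dims=(1, 2))
        win = window_partition(x, self.ws)     # self-attention within each window
        win, _ = self.attn(win, win, win)
        x = window_reverse(win, self.ws, H, W)
        if self.shift:  # undo the cyclic shift
            x = torch.roll(x, shifts=(self.shift, self.shift), dims=(1, 2))
        x = shortcut + x                       # residual around (S)W-MSA
        return x + self.mlp(self.norm2(x))     # residual around LN -> MLP
```

In a Swin stage, W-MSA and SW-MSA blocks alternate, e.g. `SwinBlock(96, 3, shift=False)` followed by `SwinBlock(96, 3, shift=True)` applied to a tensor such as `torch.randn(1, 14, 14, 96)`; the shift lets tokens exchange information across window boundaries that plain windowed attention cannot cross.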