
Figure 3.

The self-attention mechanism in the Transformer model. The input of self-attention is the embedded vectors, and the output is the weighted features containing protein–protein relationship information.
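The sketch below illustrates the scaled dot-product self-attention described in the caption: embedded input vectors are projected to queries, keys and values, and each position's output is a weighted sum of values, with weights reflecting pairwise relationships. It is a minimal, illustrative implementation; the function name, projection matrices and dimensions are assumptions, not taken from the paper's model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention (illustrative sketch).

    X          : (seq_len, d_model) embedded input vectors
    Wq, Wk, Wv : (d_model, d_k) query/key/value projection matrices
    Returns    : (seq_len, d_k) weighted features
    """
    Q = X @ Wq                                        # queries
    K = X @ Wk                                        # keys
    V = X @ Wv                                        # values
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key positions
    return weights @ V                                # weighted sum of values

# Hypothetical example: 5 sequence positions, 8-dim embeddings, 4-dim head
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

In practice the Transformer applies this in multiple parallel heads with learned projections; the single-head, fixed-matrix version above is only meant to show how the weighted features in Figure 3 arise from the embedded vectors.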