
Table 12.

Effect of attention weighted blocks with different pooling operations at the first and the last transition layers

| First transition layer | Attention | Global pooling layer | DR (QK) | p value |
|---|---|---|---|---|
| Max | – | Average | 0.781 ± 0.003 | 0.0006 |
| Max | CBAM | Average | 0.788 ± 0.008 | 0.4051 |
| Max | A2 | Average | 0.791 ± 0.005 | – |
| GM | – | GM | 0.785 ± 0.006 | 0.0219 |
| GM | CBAM | GM | 0.787 ± 0.006 | 0.0961 |
| GM | A2 | GM | 0.793 ± 0.006 | – |

The top scores are highlighted in bold
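For orientation, the following is a minimal sketch of the kind of configuration compared in Table 12: a DenseNet-style transition layer whose downsampling uses either max/average pooling or generalized-mean (GM) pooling, with an optional attention block (e.g. CBAM or an A2-style block) applied before pooling. This assumes PyTorch and a windowed GeM formulation; the module names, the 2×2 pooling window, and the attention hook are illustrative assumptions, not the authors' exact implementation, and the CBAM/A2 modules themselves are omitted.

```python
# Sketch only: assumed PyTorch implementation of GM pooling and an
# attention-aware transition layer; not the paper's exact architecture.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class GeMPool2d(nn.Module):
    """Generalized-mean (GM) pooling: (mean(x^p))^(1/p) over each spatial window."""
    def __init__(self, p: float = 3.0, kernel_size: int = 2, stride: int = 2, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))  # learnable exponent
        self.kernel_size = kernel_size
        self.stride = stride
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, self.kernel_size, self.stride)
        return x.pow(1.0 / self.p)


class TransitionLayer(nn.Module):
    """DenseNet-style transition: 1x1 conv for channel reduction, then 2x2 pooling.

    `attention` is an optional module (e.g. a CBAM or A2 block) applied before pooling,
    mirroring the "attention weighted block" placements compared in the table.
    """
    def __init__(self, in_ch: int, out_ch: int, pooling: str = "max",
                 attention: Optional[nn.Module] = None):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        )
        self.attention = attention
        if pooling == "max":
            self.pool = nn.MaxPool2d(2, 2)
        elif pooling == "avg":
            self.pool = nn.AvgPool2d(2, 2)
        elif pooling == "gm":
            self.pool = GeMPool2d(kernel_size=2, stride=2)
        else:
            raise ValueError(f"unknown pooling: {pooling}")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.reduce(x)
        if self.attention is not None:
            x = self.attention(x)
        return self.pool(x)


# Example: GM pooling at the transition layer, no attention block.
layer = TransitionLayer(in_ch=256, out_ch=128, pooling="gm")
out = layer(torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 128, 16, 16])
```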