Algorithm 1: The implementation of asymmetric multi-weights attention.
Input X: the feature matrix of size H × W × C.
Output X: the resultant matrix of size H × W × C.
(1) Apply a 3 × 1 convolution layer to compress the channels to C/2.
(2) Apply a 1 × 3 convolution layer to expand the channels back to C.
(3) Calculate the spatial size N = H × W − 1.
(4) Calculate the squared deviation D = (X − X.mean()).pow(2).
(5) Calculate the channel variance v = D.sum()/N and derive the function F that scores the importance of each pixel, F = D/(4 × (v + lambda)) + 0.5, where lambda is a coefficient.
(6) Apply a sigmoid function to restrict F to the range (0, 1).
(7) Weight X by the restricted F and save the resulting output matrix (a code sketch of the full procedure follows the listing).
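As a concrete illustration of steps (1)–(7), here is a minimal PyTorch sketch. The module name AsymmetricMultiWeightsAttention, the paddings that preserve the H × W resolution, the default value of lambda, the per-channel reduction over the spatial dimensions, and the final multiplication of the features by sigmoid(F) are illustrative assumptions consistent with the listing rather than details fixed by it.

```python
import torch
import torch.nn as nn


class AsymmetricMultiWeightsAttention(nn.Module):
    # Sketch of Algorithm 1. The class name, padding choices, and the
    # default lambda are illustrative assumptions, not given by the listing.

    def __init__(self, channels: int, lam: float = 1e-4):
        super().__init__()
        # Step (1): 3 x 1 convolution compressing the channels to C/2.
        self.conv3x1 = nn.Conv2d(channels, channels // 2,
                                 kernel_size=(3, 1), padding=(1, 0))
        # Step (2): 1 x 3 convolution expanding the channels back to C.
        self.conv1x3 = nn.Conv2d(channels // 2, channels,
                                 kernel_size=(1, 3), padding=(0, 1))
        self.lam = lam  # the coefficient lambda in step (5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        x = self.conv1x3(self.conv3x1(x))
        _, _, h, w = x.shape
        # Step (3): spatial size N = H * W - 1.
        n = h * w - 1
        # Step (4): squared deviation from the per-channel spatial mean.
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # Step (5): channel variance v = D.sum()/N, then the pixel-importance
        # function F = D / (4 * (v + lambda)) + 0.5.
        v = d.sum(dim=(2, 3), keepdim=True) / n
        f = d / (4 * (v + self.lam)) + 0.5
        # Steps (6)-(7): restrict F with a sigmoid and weight the features.
        return x * torch.sigmoid(f)


# Usage: the module preserves the H x W x C shape of its input.
attn = AsymmetricMultiWeightsAttention(channels=64)
out = attn(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Note that apart from the two asymmetric convolutions, the weighting in steps (3)–(7) introduces only the single coefficient lambda, since the importance scores are derived directly from per-channel statistics of the features.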