Front Aging Neurosci. 2022 Jul 13;14:908143. doi: 10.3389/fnagi.2022.908143

TABLE 3.

DenseNet121 architecture with Soft Attention Block for classification of PD and HC cases.

Layers                   Output shape   Kernel size and details
Convolution 2D           112×112        7×7 conv, stride 2
Max Pooling 2D           56×56          3×3 max pool, stride 2
Dense Block (1)          56×56          [1×1 conv, 3×3 conv] × 6
Transition Layer (1)     56×56          1×1 conv
                         28×28          2×2 average pool, stride 2
Dense Block (2)          28×28          [1×1 conv, 3×3 conv] × 12
Transition Layer (2)     28×28          1×1 conv
                         14×14          2×2 average pool, stride 2
Dense Block (3)          14×14          [1×1 conv, 3×3 conv] × 24
Transition Layer (3)     14×14          1×1 conv
                         7×7            2×2 average pool, stride 2
Dense Block (4)          7×7            [1×1 conv, 3×3 conv] × 16
Soft Attention Block     7×7            Soft Attention × 1
Classification Layer     1×1            7×7 global average pool
                                        2-unit fully connected dense layer, softmax
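
To make the table concrete, below is a minimal Keras/TensorFlow sketch of the same pipeline: an ImageNet-pretrained DenseNet121 backbone, a soft-attention block applied to the final 7×7 feature maps, global average pooling, and a 2-unit softmax head for PD vs. HC. The SoftAttention layer shown here is a simplified single-map re-weighting written for illustration; it is an assumption, not the authors' implementation, as are the 224×224 RGB input shape and the ImageNet weights.

```python
# Sketch of a DenseNet121 backbone with a soft-attention block and a
# 2-way softmax head. Assumes a TensorFlow/Keras setup, 224x224 RGB inputs,
# and ImageNet weights; the SoftAttention layer is a simplified stand-in
# for the paper's block, not the authors' exact code.
import tensorflow as tf
from tensorflow.keras import layers, models

class SoftAttention(layers.Layer):
    """Minimal soft attention: a 1x1 conv produces a spatial attention map
    that re-weights the 7x7 feature maps before global pooling."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.attn_conv = layers.Conv2D(1, kernel_size=1, activation="sigmoid")

    def call(self, inputs):
        attn = self.attn_conv(inputs)   # (batch, 7, 7, 1) attention map
        return inputs * attn            # re-weighted feature maps

def build_model(input_shape=(224, 224, 3), num_classes=2):
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = SoftAttention()(backbone.output)      # 7x7 soft-attention block
    x = layers.GlobalAveragePooling2D()(x)    # 7x7 global average pool
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # PD vs HC
    return models.Model(backbone.input, outputs)

model = build_model()
model.summary()
```

Placing the attention block before the global average pool lets the network down-weight uninformative spatial locations in the final feature maps before they are collapsed into a single vector for classification.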