Sensors. 2020 Apr 20;20(8):2338. doi: 10.3390/s20082338

Table 2.

Detailed structure of the proposed model.

Network    Layer                 Shape     Out   Padding   Stride   Kernel
CNN        Conv                  625×3     64    Same      1        3
           BN + ReLU
           Conv                  625×64    64    Same      1        3
           BN + ReLU
           Maxpool (size = 3)    625×64    -     Same      3        -
           Conv                  209×64    128   Same      1        3
           BN + ReLU
           Conv                  209×128   128   Same      1        3
           BN + ReLU
           Maxpool (size = 3)    209×128   -     Same      3        -
           Conv                  70×128    256   Same      1        3
           BN + ReLU
           Conv                  70×256    256   Same      1        3
           BN + ReLU
           Conv                  70×256    256   Same      1        3
           BN + ReLU
           Maxpool (size = 3)    70×256    -     Same      3        -
           Conv                  24×256    512   Same      1        3
           BN + ReLU
           Conv                  24×512    512   Same      1        3
           BN + ReLU
           Conv                  24×512    512   Same      1        3
           BN + ReLU
           Maxpool (size = 3)    24×512    -     Same      3        -
Bi-GRU     Forward               8×512     64    -         -        -
           Backward              8×512     64    -         -        -
           Concatenation
Attention  1-layer perceptron    8×128     1     -         -        -
           Activation (tanh)
           Softmax
           Weighted sum
           1-layer perceptron    128       2     -         -        -
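
For readers who want to trace the layer stack, the following is a minimal sketch of the architecture in Table 2, assuming a Keras/TensorFlow implementation; build_model() and the specific layer objects are illustrative assumptions, not the authors' released code. The shapes noted in the comments follow the table: a 625×3 input, 8×512 after the CNN blocks, 8×128 after the Bi-GRU, a 128-dimensional attention output, and a 2-way prediction.

```python
# Sketch of Table 2 (CNN + Bi-GRU + attention), assuming a Keras implementation.
from tensorflow.keras import layers, models

def build_model(input_length=625, channels=3, num_classes=2):
    inputs = layers.Input(shape=(input_length, channels))            # 625×3

    # CNN: four blocks with (2, 2, 3, 3) convolutions of 64/128/256/512 filters.
    # Every conv uses kernel 3, stride 1, same padding, followed by BN + ReLU;
    # each block ends with max-pooling of size 3 (same padding).
    x = inputs
    for filters, n_convs in [(64, 2), (128, 2), (256, 3), (512, 3)]:
        for _ in range(n_convs):
            x = layers.Conv1D(filters, kernel_size=3, strides=1, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.ReLU()(x)
        x = layers.MaxPooling1D(pool_size=3, strides=3, padding="same")(x)
    # x is now 8×512

    # Bi-GRU: 64 units per direction; forward and backward outputs concatenated -> 8×128
    h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)

    # Attention: 1-layer perceptron (8×128 -> 8×1) with tanh, softmax over the
    # 8 time steps, then a weighted sum of the GRU outputs -> 128-dim vector.
    scores = layers.Dense(1, activation="tanh")(h)                    # 8×1
    weights = layers.Softmax(axis=1)(scores)                          # attention weights
    context = layers.Dot(axes=1)([weights, h])                        # weighted sum, 1×128
    context = layers.Flatten()(context)                               # 128

    # Final 1-layer perceptron: 128 -> 2 (softmax output assumed for a 2-class task)
    outputs = layers.Dense(num_classes, activation="softmax")(context)
    return models.Model(inputs, outputs)

model = build_model()
model.summary()
```

With same padding and stride 3, each pooling step reduces the temporal length by ceil(L/3), which reproduces the 625 → 209 → 70 → 24 → 8 progression listed in the Shape column.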