Brain Sci. 2021 May 11;11(5):615. doi: 10.3390/brainsci11050615

Table 3. Training parameters.

Parameters            Bi-LSTM-Attention    Parameters                                            1D-CNN
LSTM hidden size      64                   Conv num layers                                       4
LSTM num layers       2                    (in, out, kernel size, stride, padding) of layer 1    (1, 16, 3, 1, 1)
LSTM dropout          0.1                  (in, out, kernel size, stride, padding) of layer 2    (16, 32, 3, 1, 1)
hidden linear size    256                  (in, out, kernel size, stride, padding) of layer 3    (32, 32, 3, 1, 1)
linear dropout        0.3                  (in, out, kernel size, stride, padding) of layer 4    (32, 32, 2, 1, 1)
batch size            20                   batch size                                            32
training epochs       100                  training epochs                                       100
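
The following is a minimal PyTorch sketch of how the two models could be instantiated from the hyperparameters in Table 3. It is an illustrative assumption, not the authors' implementation: the form of the attention layer, the input size, the number of output classes, and the pooling/classifier head of the 1D-CNN are not specified in the table and are chosen here only to make the sketch runnable.

import torch
import torch.nn as nn


class BiLSTMAttention(nn.Module):
    def __init__(self, input_size=1, num_classes=2):  # input_size and num_classes are assumptions
        super().__init__()
        # LSTM hidden size 64, 2 layers, dropout 0.1 (from Table 3), bidirectional
        self.lstm = nn.LSTM(input_size, hidden_size=64, num_layers=2,
                            dropout=0.1, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * 64, 1)   # simple soft attention over time steps (assumed form)
        self.fc = nn.Sequential(
            nn.Linear(2 * 64, 256),        # hidden linear size 256 (from Table 3)
            nn.ReLU(),
            nn.Dropout(0.3),               # linear dropout 0.3 (from Table 3)
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                  # x: (batch, seq_len, input_size)
        out, _ = self.lstm(x)              # (batch, seq_len, 2*64)
        weights = torch.softmax(self.attn(out), dim=1)
        context = (weights * out).sum(dim=1)  # attention-weighted sum over time
        return self.fc(context)


class CNN1D(nn.Module):
    def __init__(self, num_classes=2):     # num_classes is an assumption
        super().__init__()
        # (in, out, kernel size, stride, padding) for each of the 4 conv layers (from Table 3)
        cfg = [(1, 16, 3, 1, 1), (16, 32, 3, 1, 1), (32, 32, 3, 1, 1), (32, 32, 2, 1, 1)]
        layers = []
        for in_ch, out_ch, k, s, p in cfg:
            layers += [nn.Conv1d(in_ch, out_ch, kernel_size=k, stride=s, padding=p), nn.ReLU()]
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)  # pooling and classifier head are assumptions
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):                  # x: (batch, 1, signal_length)
        z = self.pool(self.conv(x)).squeeze(-1)
        return self.fc(z)

Per Table 3, the Bi-LSTM-Attention model is trained with batch size 20 and the 1D-CNN with batch size 32, both for 100 epochs; the optimizer and learning rate are not given in the table.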