Table 3. Hyperparameter settings of the Bi-LSTM-Attention and 1D-CNN models.
| Parameters | Bi-LSTM-Attention | Parameters | 1D-CNN |
| --- | --- | --- | --- |
| LSTM hidden size | 64 | Conv num layers | 4 |
| LSTM num layers | 2 | (in, out, kernel size, stride, padding) of layer 1 | (1, 16, 3, 1, 1) |
| LSTM dropout | 0.1 | (in, out, kernel size, stride, padding) of layer 2 | (16, 32, 3, 1, 1) |
| hidden linear size | 256 | (in, out, kernel size, stride, padding) of layer 3 | (32, 32, 3, 1, 1) |
| linear dropout | 0.3 | (in, out, kernel size, stride, padding) of layer 4 | (32, 32, 2, 1, 1) |
| batch size | 20 | batch size | 32 |
| training epochs | 100 | training epochs | 100 |
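For concreteness, the sketch below instantiates both networks in PyTorch with the values from Table 3. Only the tabulated hyperparameters are pinned down by the table; the input dimensionality (`input_size=1`), the number of output classes, the additive attention layer, and the 1D-CNN's pooling and classifier head are assumptions made to keep the example self-contained and runnable.

```python
import torch
import torch.nn as nn


class BiLSTMAttention(nn.Module):
    """Bi-LSTM with attention, sized per Table 3 (input/output dims assumed)."""

    def __init__(self, input_size=1, hidden_size=64, num_layers=2, num_classes=2):
        super().__init__()
        # hidden size 64, 2 layers, dropout 0.1 between layers (Table 3)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, bidirectional=True, dropout=0.1)
        # additive attention over the 2*hidden_size Bi-LSTM outputs (assumed variant)
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.fc = nn.Sequential(
            nn.Linear(2 * hidden_size, 256),  # hidden linear size 256 (Table 3)
            nn.ReLU(),
            nn.Dropout(0.3),                  # linear dropout 0.3 (Table 3)
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                     # x: (batch, seq_len, input_size)
        out, _ = self.lstm(x)                 # (batch, seq_len, 2 * hidden_size)
        weights = torch.softmax(self.attn(out), dim=1)  # (batch, seq_len, 1)
        context = (weights * out).sum(dim=1)  # attention-weighted sum over time
        return self.fc(context)


class CNN1D(nn.Module):
    """Four conv layers with the (in, out, kernel, stride, padding) tuples from Table 3."""

    def __init__(self, num_classes=2):
        super().__init__()
        cfg = [(1, 16, 3, 1, 1), (16, 32, 3, 1, 1),
               (32, 32, 3, 1, 1), (32, 32, 2, 1, 1)]
        layers = []
        for c_in, c_out, k, s, p in cfg:
            layers += [nn.Conv1d(c_in, c_out, kernel_size=k, stride=s, padding=p),
                       nn.ReLU()]
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)   # pooling/head not given in Table 3; assumed
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):                     # x: (batch, 1, seq_len)
        x = self.pool(self.conv(x)).squeeze(-1)  # (batch, 32)
        return self.fc(x)
```

Per Table 3, each model would then be trained for 100 epochs, with batch size 20 for the Bi-LSTM-Attention and batch size 32 for the 1D-CNN.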