Table 1. Hyperparameter tuning.
| BiLSTM layer | | TextCNN layer | |
|---|---|---|---|
| Parameters | Values | Parameters | Values |
| Number of hidden nodes | 300 | Number of convolution kernels | 300 |
| Learning rate | 0.001 | Convolution kernel sizes | 3, 4, 5 |
| Epochs | 15 | Activation function | ReLU |
| Batch size | 300 | Pooling strategy | 1-max pooling |
| Optimization function | Adam | Dropout | 0.5 |
| Loss function | Cross entropy | L2 regularization | 3 |
| Input word vectors | BERT | | |
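The tuned values above can be collected into configuration objects for the two sub-networks; the sketch below is a minimal illustration, and the dictionary keys are assumed names, not identifiers from the original implementation.

```python
# Hyperparameters from Table 1, grouped per sub-network.
# Key names are illustrative; only the values come from the table.
BILSTM_CONFIG = {
    "hidden_nodes": 300,          # number of hidden nodes per direction
    "learning_rate": 0.001,
    "epochs": 15,
    "batch_size": 300,
    "optimizer": "Adam",
    "loss": "cross_entropy",
    "input_embeddings": "BERT",   # BERT word vectors feed the BiLSTM
}

TEXTCNN_CONFIG = {
    "num_kernels": 300,           # convolution kernels per size
    "kernel_sizes": [3, 4, 5],    # parallel convolution widths
    "activation": "relu",
    "pooling": "1-max",           # keep the strongest feature per kernel
    "dropout": 0.5,
    "l2_regularization": 3,
}
```

Grouping the settings this way keeps the two branches of the model independently configurable while making the shared training loop (optimizer, loss, batch size) explicit in one place.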