Table 4. Layer-wise properties and dimensions of the model.
| Layer | Properties and dimensions |
| --- | --- |
| Embedding layer (word embedding) | Output dimension: 64; input sequence length: 500 |
| BiLSTM layer | Forward hidden nodes: 128; backward hidden nodes: 128 |
| Dropout layer | Probability = 0.20 |
| BiLSTM layer | Forward hidden nodes: 256; backward hidden nodes: 256 |
| Convolution + activation layer | Number of filters = 64; filter size = 5; activation function: ReLU |
| Dropout layer | Probability = 0.20 |
| Convolution + activation layer | Number of filters = 128; filter size = 5; activation function: ReLU |
| Convolution + activation layer | Number of filters = 256; filter size = 3; activation function: ReLU |
| Max-pooling layer | Pool size: 3; stride: 1 |
| Flatten | |
| Hidden layer 1 | Number of hidden neurons: 128; activation function: ReLU |
| Dropout layer | Probability = 0.15 |
| Hidden layer 2 | Number of hidden neurons: 64; activation function: ReLU |
| Output layer | Number of neurons: 1; activation function: Sigmoid |
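For concreteness, the following is a minimal Keras sketch of the stack in Table 4. The vocabulary size, the `'same'` convolution padding, `return_sequences=True` on both BiLSTM layers (so the convolutional stack receives a full sequence rather than a single vector), and the optimizer/loss choice are assumptions not specified in the table.

```python
# Sketch of the Table 4 architecture in Keras.
# Assumptions (not given in the table): vocabulary size, 'same' padding
# on the convolutions, and sequence-returning BiLSTM layers.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
    Input, Embedding, Bidirectional, LSTM, Dropout,
    Conv1D, MaxPooling1D, Flatten, Dense,
)

VOCAB_SIZE = 20_000  # assumed; not specified in Table 4
SEQ_LEN = 500        # input sequence length from Table 4

model = Sequential([
    Input(shape=(SEQ_LEN,)),
    Embedding(VOCAB_SIZE, 64),                        # 64-d word embeddings
    Bidirectional(LSTM(128, return_sequences=True)),  # 128 hidden nodes per direction
    Dropout(0.20),
    Bidirectional(LSTM(256, return_sequences=True)),  # 256 hidden nodes per direction
    Conv1D(64, 5, activation="relu", padding="same"),
    Dropout(0.20),
    Conv1D(128, 5, activation="relu", padding="same"),
    Conv1D(256, 3, activation="relu", padding="same"),
    MaxPooling1D(pool_size=3, strides=1),
    Flatten(),
    Dense(128, activation="relu"),                    # hidden layer 1
    Dropout(0.15),
    Dense(64, activation="relu"),                     # hidden layer 2
    Dense(1, activation="sigmoid"),                   # binary output
])

# Optimizer and loss are illustrative; the single sigmoid output implies
# a binary classification objective.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```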