Sci Rep. 2022 Feb 3;12:1849. doi: 10.1038/s41598-022-05974-6

Table 4.

The model architecture.

Layer                              | Properties and dimensions
-----------------------------------|------------------------------------------------------
Embedding layer (word embedding)   | Output dimension: 64; input sequence length: 500
BiLSTM layer                       | Forward hidden nodes: 128; backward hidden nodes: 128
Dropout layer                      | Probability = 0.20
BiLSTM layer                       | Forward hidden nodes: 256; backward hidden nodes: 256
Convolution + activation layer     | Number of filters = 64; filter size = 5; activation: ReLU
Dropout layer                      | Probability = 0.20
Convolution + activation layer     | Number of filters = 128; filter size = 5; activation: ReLU
Convolution + activation layer     | Number of filters = 256; filter size = 3; activation: ReLU
Max-pooling layer                  | Pool size: 3; stride: 1
Flatten layer                      | No parameters
Hidden layer 1                     | Number of hidden neurons: 128; activation: ReLU
Dropout layer                      | Probability = 0.15
Hidden layer 2                     | Number of hidden neurons: 64; activation: ReLU
Output layer                       | Number of neurons: 1; activation: sigmoid
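The tensor shapes implied by this stack can be traced with a short, dependency-free sketch. Note the assumptions, which the table does not state: the convolutions use no padding ("valid"), both BiLSTM layers return full sequences so the convolutions can follow them, and the bidirectional outputs are concatenated (forward + backward). The vocabulary size of the embedding is also not given and is not needed for the shape trace.

```python
def conv_len(n, k, stride=1):
    """Output length of a 1-D convolution or pooling with no padding ('valid')."""
    return (n - k) // stride + 1

# Trace (sequence length, channels) through the architecture in Table 4.
# Assumption: 'valid' convolutions and sequence-returning BiLSTM layers.
length, channels = 500, 64                     # embedding: 500 timesteps x 64 dims
channels = 2 * 128                             # BiLSTM 1: forward + backward concatenated
channels = 2 * 256                             # BiLSTM 2 (dropout leaves shapes unchanged)
length, channels = conv_len(length, 5), 64     # conv, 64 filters, size 5
length, channels = conv_len(length, 5), 128    # conv, 128 filters, size 5
length, channels = conv_len(length, 3), 256    # conv, 256 filters, size 3
length = conv_len(length, 3, stride=1)         # max-pooling, pool size 3, stride 1
flattened = length * channels                  # input width of hidden layer 1
print(length, channels, flattened)             # 488 256 124928
```

Under these assumptions the flatten layer hands 488 x 256 = 124,928 features to hidden layer 1; a different padding choice would change these lengths but not the channel counts.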