PLoS One. 2024 Jan 3;19(1):e0295501. doi: 10.1371/journal.pone.0295501

Table 6. Layered architecture of deep neural networks.

LSTM                    CNN                                  CNN-LSTM
Embedding(5000, 200)    Embedding(5000, 200)                 Embedding(5000, 200)
Dropout(0.5)            Conv1D(128, 5, activation='relu')    Conv1D(128, 5, activation='relu')
LSTM(128)               MaxPooling1D(pool_size=5)            MaxPooling1D(pool_size=2)
Dropout(0.5)            Activation('relu')                   LSTM(64, return_sequences=True)
LSTM(64)                Dropout(rate=0.5)                    Conv1D(128, 5, activation='relu')
Dropout(0.5)            Flatten()                            MaxPooling1D(pool_size=2)
Dense(32)               Dense(32)                            Flatten()
Dense(5)                Dense(5)                             Dense(5)
Shared settings for all three models: loss='categorical_crossentropy', optimizer='adam', epochs=100, batch_size=128, output activation='softmax'.
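To make the layer listing concrete, the sketch below assembles the CNN-LSTM column of Table 6 as a Keras Sequential model. The padded sequence length SEQ_LEN and the metrics argument are assumptions not specified in the table, and the training call is left commented out because the data pipeline is not shown here.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     LSTM, Flatten, Dense)

SEQ_LEN = 200  # assumed padded input length; not given in Table 6

model = Sequential([
    Input(shape=(SEQ_LEN,)),                    # integer-encoded token ids
    Embedding(input_dim=5000, output_dim=200),  # vocabulary 5000, 200-dim vectors
    Conv1D(128, 5, activation='relu'),
    MaxPooling1D(pool_size=2),
    LSTM(64, return_sequences=True),            # keep the time dimension for the next Conv1D
    Conv1D(128, 5, activation='relu'),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(5, activation='softmax'),             # 5 output classes
])

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])             # metrics added for convenience
# model.fit(x_train, y_train, epochs=100, batch_size=128)  # training data not shown

The LSTM and CNN columns follow the same pattern, differing only in the layers stacked between the embedding and the final softmax output.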