PeerJ Comput Sci. 2024 Mar 27;10:e1961. doi: 10.7717/peerj-cs.1961

Table 5. The hyperparameter values used in our experiments.

Hyperparameter | Value     | Description
num_words      | 10,000    | Maximum number of words to keep, based on word frequency.
oov_token      | <OOV>     | Token used to represent out-of-vocabulary words.
maxlen         | 100       | Maximum sequence length (sequences are padded/truncated).
embedding_dim  | 100       | Dimensionality of the word embeddings.
input_dim      | num_words | Size of the vocabulary.
output_dim     | 100       | Dimensionality of the embedding output space.
trainable      | False     | Whether the embedding layer is trainable.
filters        | 128       | Number of filters in the convolutional layer.
kernel_size    | 5         | Size of the convolutional kernel.
pool_size      | 4         | Size of the max-pooling window.
units          | 64        | Number of units in the LSTM and dense layers.
dropout_rate   | 0.5       | Fraction of input units to drop during dropout.
lr             | 0.001     | Learning rate for the Adam optimizer.
batch_size     | 32        | Number of samples per gradient update during training.
epochs         | 10        | Number of training epochs.

Notes.

The best-performing results are shown in bold.
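The hyperparameter names in Table 5 correspond to Keras API parameters (a `Tokenizer` with `num_words` and `oov_token`, a frozen `Embedding` layer, a `Conv1D`/`MaxPooling1D`/`LSTM` stack, dropout, and an Adam optimizer). The article does not publish code, so the following is a minimal sketch of a model consistent with these values; the layer ordering and the single-unit sigmoid output head are assumptions, not the authors' published implementation.

```python
# Hypothetical reconstruction of the model implied by Table 5.
# The sigmoid binary-classification head is an assumption.
from tensorflow import keras
from tensorflow.keras import layers

num_words = 10_000      # vocabulary size (Tokenizer num_words)
maxlen = 100            # padded/truncated sequence length
embedding_dim = 100     # word-embedding dimensionality

model = keras.Sequential([
    keras.Input(shape=(maxlen,)),
    # Frozen embedding layer (trainable=False), e.g. pre-trained vectors.
    layers.Embedding(input_dim=num_words, output_dim=embedding_dim,
                     trainable=False),
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # assumed output head
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# Training would then call model.fit(..., batch_size=32, epochs=10).
```

Input text would first be converted to padded integer sequences with `keras.preprocessing.text.Tokenizer(num_words=10_000, oov_token="<OOV>")` followed by `pad_sequences(..., maxlen=100)`, matching the first three rows of the table.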