
Table 3. The parameter settings used in this experiment.

Parameter                   Value
LSTM layers                 2 (64 units each)
Dense layers                1 (1 unit)
Optimizer                   Adam (learning rate 0.001)
Activation function         ReLU
Loss function               Mean squared error
Batch size                  64
Epochs                      100
Validation split            20%
Learning rate adjustment    ReduceLROnPlateau
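A minimal sketch of how the configuration in Table 3 could be assembled with the Keras API. The input shape (`window_size`, `n_features`), the placement of the ReLU activation on the LSTM layers, and the ReduceLROnPlateau `factor` and `patience` values are assumptions for illustration; they are not specified in the table, and the synthetic arrays stand in for the study's dataset.

```python
# Sketch of the Table 3 setup, assuming a Keras/TensorFlow implementation.
import numpy as np
from tensorflow.keras import layers, models, callbacks, optimizers

window_size, n_features = 30, 1  # assumed input shape; not given in the table

# Two stacked LSTM layers with 64 units each, then a single-unit Dense output.
# ReLU is applied to the LSTM layers here; where the paper applies it is an assumption.
model = models.Sequential([
    layers.Input(shape=(window_size, n_features)),
    layers.LSTM(64, activation="relu", return_sequences=True),
    layers.LSTM(64, activation="relu"),
    layers.Dense(1),
])

# Adam optimizer with a learning rate of 0.001 and mean squared error loss.
model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss="mse")

# ReduceLROnPlateau lowers the learning rate when validation loss stops improving;
# factor and patience are illustrative choices.
reduce_lr = callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5)

# Synthetic data as a placeholder for the study's dataset.
X = np.random.rand(1000, window_size, n_features).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model.fit(X, y, batch_size=64, epochs=100, validation_split=0.2, callbacks=[reduce_lr])
```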