Sci Rep. 2025 Jul 29;15:27696. doi: 10.1038/s41598-025-13387-4

Table 2.

Hyperparameter settings.

| Module    | Hyperparameter                             | Value   |
|-----------|--------------------------------------------|---------|
| BiLSTM    | Number of layers                           | 1       |
| BiLSTM    | Hidden size                                | 32      |
| BiLSTM    | Dropout                                    | 0.05    |
| BiLSTM    | Activation function                        | Tanh    |
| TCN       | TCN layers                                 | 2       |
| TCN       | Kernel size                                | 3       |
| TCN       | Dropout                                    | 0.1     |
| TCN       | Activation function                        | ReLU    |
| DCEFormer | Sensor encoder blocks                      | 2       |
| DCEFormer | Sensor encoder heads                       | 4       |
| DCEFormer | Timestep encoder blocks                    | 2       |
| DCEFormer | Timestep encoder heads                     | 4       |
| DCEFormer | Decoder blocks                             | 1       |
| DCEFormer | Decoder heads                              | 4       |
| DCEFormer | Activation function                        | Softmax |
| Other     | Adam learning rate                         | 0.0005  |
| Other     | Batch size                                 | 32      |
| Other     | Fully connected layer activation function  | ReLU    |
| Other     | Dropout                                    | 0.05    |
| Other     | Early stopping patience                    | 10      |
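For reproduction, the settings in Table 2 can be collected into a single configuration object. This is a minimal sketch: the dictionary layout and key names are illustrative assumptions, not taken from the paper's code release.

```python
# Hyperparameters from Table 2, grouped by model component.
# Key names are illustrative; the paper does not specify a code layout.
CONFIG = {
    "bilstm": {
        "num_layers": 1,
        "hidden_size": 32,
        "dropout": 0.05,
        "activation": "tanh",
    },
    "tcn": {
        "num_layers": 2,
        "kernel_size": 3,
        "dropout": 0.1,
        "activation": "relu",
    },
    "dceformer": {
        "sensor_encoder_blocks": 2,
        "sensor_encoder_heads": 4,
        "timestep_encoder_blocks": 2,
        "timestep_encoder_heads": 4,
        "decoder_blocks": 1,
        "decoder_heads": 4,
        "output_activation": "softmax",
    },
    "training": {
        "optimizer": "adam",
        "learning_rate": 5e-4,
        "batch_size": 32,
        "fc_activation": "relu",
        "fc_dropout": 0.05,
        "early_stopping_patience": 10,
    },
}

if __name__ == "__main__":
    # Print each component's settings for a quick sanity check.
    for component, params in CONFIG.items():
        print(component, params)
```

Keeping the values in one place like this makes it straightforward to log the exact configuration alongside each training run.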