. 2022 Sep 27;22(19):7328. doi: 10.3390/s22197328

Table 2.

Summary of the hyper-parameters used to train the multi-modal network. Hyper-parameters were tuned heuristically.

| Hyperparameter | Value |
| --- | --- |
| Learning rate | 1·10⁻⁴ |
| Optimizer | Adam (β₁ = 0.9, β₂ = 0.999) |
| Batch size | 16 |
| Dropout | 0.50 |
| Epochs (frozen backbone) | 50 |
| Epochs (fine-tuning backbone) | 50 |
| Loss function | Mean Squared Error |
| Image dimensions | 1024 × 1024 pixels |
| Time-series length | 4223 timestamps |
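The table implies a two-stage schedule: 50 epochs with the backbone frozen, followed by 50 epochs of fine-tuning. A minimal sketch of how these hyper-parameters could be organized in code is given below; the dictionary keys and the `training_schedule` helper are illustrative assumptions, not names from the paper.

```python
# Hyper-parameters from Table 2, collected in one place.
# Key names are hypothetical; only the values come from the table.
HYPERPARAMS = {
    "learning_rate": 1e-4,
    "optimizer": "Adam",
    "adam_betas": (0.9, 0.999),
    "batch_size": 16,
    "dropout": 0.50,
    "epochs_frozen_backbone": 50,
    "epochs_finetune_backbone": 50,
    "loss": "mean_squared_error",
    "image_dims": (1024, 1024),
    "timeseries_length": 4223,
}


def training_schedule(hp):
    """Enumerate (phase, epoch) pairs for the two-stage training run:
    first with the backbone frozen, then fine-tuning the backbone."""
    schedule = []
    for epoch in range(hp["epochs_frozen_backbone"]):
        schedule.append(("frozen", epoch))
    for epoch in range(hp["epochs_finetune_backbone"]):
        schedule.append(("finetune", epoch))
    return schedule


schedule = training_schedule(HYPERPARAMS)
```

With the values above, the full run covers 100 epochs: the first 50 entries are tagged `"frozen"` and the remaining 50 are tagged `"finetune"`.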