. 2024 May 28;24(11):3493. doi: 10.3390/s24113493
Parameter      Value    Explanation
lr0            0.01     Initial learning rate.
lrf            0.01     Final learning-rate fraction: the final learning rate is the initial learning rate multiplied by this value, i.e. 0.01 × 0.01 = 0.0001.
momentum       0.937    Momentum of the SGD optimizer, used to accelerate convergence and reduce oscillations.
weight_decay   0.0005   Weight-decay coefficient (5 × 10⁻⁴); the optimizer attenuates the weights to prevent overfitting.
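The interaction of lr0 and lrf can be sketched as follows. This is a minimal illustration (the helper name `lr_at` and the linear-decay schedule are assumptions for the sketch, not taken from the paper), showing how the learning rate moves from lr0 at the start of training down to lr0 × lrf at the end:

```python
lr0, lrf = 0.01, 0.01            # initial learning rate and final-LR fraction
momentum, weight_decay = 0.937, 0.0005  # SGD momentum and weight-decay coefficient

def lr_at(epoch, epochs=100):
    """Hypothetical linear schedule: interpolate the LR from lr0 to lr0 * lrf."""
    frac = epoch / max(epochs - 1, 1)
    return lr0 * ((1 - frac) + frac * lrf)

first = lr_at(0)        # 0.01: the initial learning rate
last = lr_at(99)        # ≈ 0.0001: the final learning rate, 0.01 * 0.01
```

With this configuration, the optimizer itself would receive the momentum and weight_decay values directly (for example as arguments to an SGD optimizer), while the schedule above only governs the learning rate over epochs.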