2022 Dec 10;22(24):9679. doi: 10.3390/s22249679
Algorithm 1: DLSA Optimization Method for Neural Network Training
1 Initialize the network with a set of random parameters: weights, biases, and the Levenberg parameter μ
2 Compute the Jacobian J, the approximated Hessian JᵀJ, and the total sum of squared errors
3 Update the weights and biases using the equation: x(k+1) = x(k) − [JᵀJ + μI]⁻¹ Jᵀe
4 Recompute the total sum of squared errors
5 Is the error performance satisfactory?
6 if yes
7 Save the trained weights and biases
8 else
9 Increase μ, recompute the update step Δθ, and repeat the process from step 2 until the error is satisfactory
10 end
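The steps above can be sketched as a Levenberg–Marquardt iteration. This is a minimal illustration, not the paper's implementation: it fits a small nonlinear least-squares problem (an exponential model with assumed parameters a = 2, b = −1) rather than a full neural network, and the Jacobian is approximated by finite differences; the function and parameter names are my own.

```python
import numpy as np

def levenberg_marquardt(residual, theta0, mu=1e-2, mu_up=10.0, mu_down=0.1,
                        tol=1e-10, max_iter=100):
    """Minimize the sum of squared residuals with the damped update
    x(k+1) = x(k) - [J^T J + mu I]^{-1} J^T e  (steps 2-9 of Algorithm 1)."""
    theta = np.asarray(theta0, dtype=float)
    e = residual(theta)
    sse = e @ e                              # total sum of squared errors
    for _ in range(max_iter):
        # Step 2: finite-difference Jacobian of the residuals w.r.t. theta
        eps = 1e-7
        J = np.empty((e.size, theta.size))
        for j in range(theta.size):
            t = theta.copy()
            t[j] += eps
            J[:, j] = (residual(t) - e) / eps
        H = J.T @ J                          # approximated Hessian J^T J
        g = J.T @ e
        # Steps 3-4 and 9: take the damped step; if the error grows,
        # increase mu and recompute the step
        while True:
            delta = np.linalg.solve(H + mu * np.eye(theta.size), g)
            e_new = residual(theta - delta)
            sse_new = e_new @ e_new
            if sse_new < sse:                # step accepted: relax damping
                theta, e, sse = theta - delta, e_new, sse_new
                mu *= mu_down
                break
            mu *= mu_up                      # step rejected: raise mu, retry
            if mu > 1e12:
                return theta, sse
        if sse < tol:                        # Step 5: error satisfactory?
            break
    return theta, sse

# Fit y = a * exp(b * x) to noiseless synthetic data (true a = 2, b = -1).
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * x)
res = lambda th: th[0] * np.exp(th[1] * x) - y
theta, sse = levenberg_marquardt(res, [1.0, 0.0])
print(theta, sse)
```

As in the algorithm, small μ makes the step approach Gauss–Newton, while large μ shrinks it toward a short gradient-descent step, which is what makes the retry-with-larger-μ loop in step 9 stabilize training.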