Entropy. 2021 Feb 11;23(2):219. doi: 10.3390/e23020219

Table 2.

The optimization process for the Bayesian LSTM networks.

Step Optimization Process
0 Set the scale parameter α, with α ∈ (0,1).
1 Sample the random variable ε as ε ∼ N(0,1).
2 Set the initial values of the optimized parameters (μ, ρ).
3 Sample all the parameters as θ = μ + log(1 + exp(ρ)) ∘ ε.
4 Set the cost function as Loss = log q(θ|μ,ρ) − [log P(θ) + log P(D|θ)].
5 Calculate the gradient with respect to the mean μ using the training data D as ∇μ = ∂Loss/∂θ + ∂Loss/∂μ.
6 Calculate the gradient with respect to the standard-deviation parameter ρ using the training data D as ∇ρ = (∂Loss/∂θ) · ε/(1 + exp(−ρ)) + ∂Loss/∂ρ.
7 Update the parameters (μ, ρ) as follows: μ ← μ − α∇μ, ρ ← ρ − α∇ρ.
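To make the steps above concrete, the following is a minimal NumPy sketch of the same update rule applied to a toy Bayesian linear-regression weight vector rather than the paper's LSTM. The synthetic data, the prior and noise scales (prior_std, noise_std), the learning rate, and the iteration count are illustrative assumptions, not values from the paper; closed-form gradients are possible here only because the toy model is linear-Gaussian.

```python
# Minimal sketch of the tabulated update (steps 0-7) on a toy Bayesian
# linear-regression model; all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X @ w_true + Gaussian noise (stands in for the training data D).
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
noise_std = 1.0                      # assumed likelihood noise scale
y = X @ w_true + noise_std * rng.normal(size=n)

prior_std = 1.0                      # assumed prior P(theta) = N(0, prior_std^2 I)
alpha = 1e-3                         # Step 0: scale parameter, alpha in (0, 1)

# Step 2: initial values of the optimized variational parameters (mu, rho).
mu = np.zeros(d)
rho = -3.0 * np.ones(d)              # sigma = log(1 + exp(rho)) starts small

for _ in range(5000):
    # Step 1: sample epsilon ~ N(0, 1).
    eps = rng.normal(size=d)

    # Step 3: sample the parameters theta = mu + log(1 + exp(rho)) * eps.
    sigma = np.log1p(np.exp(rho))
    theta = mu + sigma * eps

    # Step 4: gradients of Loss = log q(theta|mu,rho) - [log P(theta) + log P(D|theta)],
    # written in closed form because the toy model is linear-Gaussian.
    residual = y - X @ theta
    dlogq_dtheta = -(theta - mu) / sigma**2            # d log q / d theta
    dlogprior_dtheta = -theta / prior_std**2           # d log P(theta) / d theta
    dloglik_dtheta = X.T @ residual / noise_std**2     # d log P(D|theta) / d theta
    dloss_dtheta = dlogq_dtheta - (dlogprior_dtheta + dloglik_dtheta)

    dloss_dmu = (theta - mu) / sigma**2                # direct d Loss / d mu
    dsigma_drho = 1.0 / (1.0 + np.exp(-rho))           # d sigma / d rho
    dloss_drho = ((theta - mu)**2 / sigma**3 - 1.0 / sigma) * dsigma_drho

    # Step 5: gradient with respect to the mean.
    grad_mu = dloss_dtheta + dloss_dmu
    # Step 6: gradient with respect to the standard-deviation parameter.
    grad_rho = dloss_dtheta * eps / (1.0 + np.exp(-rho)) + dloss_drho

    # Step 7: update the variational parameters.
    mu = mu - alpha * grad_mu
    rho = rho - alpha * grad_rho

print("posterior mean of theta:", np.round(mu, 3))     # should approach w_true
print("posterior std of theta :", np.round(np.log1p(np.exp(rho)), 3))
```

In the Bayesian LSTM itself, the ∂Loss/∂θ term in steps 5 and 6 would come from backpropagation through the unrolled network rather than the closed-form expressions used in this sketch.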