2024 Feb 22;10:e1875. doi: 10.7717/peerj-cs.1875

Algorithm 2. Deep learning model optimization based on Kullback-Leibler risk function and GMM.

Data: observation data set D = {(y_1, x_1), …, (y_N, x_N)}; posterior model parameters Θ̂_post from Algorithm 1
Result: optimized model parameters Θ
1  Initialize the deep learning model parameters (including the LSTM and GMM parameters);
2  Initialize the regularization parameter λ;
3  Phase 1: pre-train the deep learning model;
4  while the pre-training stopping criterion is not reached do
5      Minimize the objective function L (Formula (29)) by stochastic gradient descent (SGD);
6  Phase 2: model tuning and integration;
7  Set the Kullback-Leibler risk function and the Gaussian mixture model weights;
8  while the model-tuning stopping criterion is not reached do
9      Compute the Kullback-Leibler gradient using Formula (25);
10     Update the GMM parameters by MCMC with the Metropolis-Hastings algorithm;
11     Integrate Θ̂_post from Algorithm 1 as prior information;
12     Update λ to balance the Kullback-Leibler risk and the GMM terms (see Eqs. (28) and (29));
13 Compute the optimal posterior parameters Θ;
14 return Θ
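The two phases of Algorithm 2 can be sketched on a toy problem. This is a minimal NumPy sketch, not the paper's implementation: a one-parameter linear model stands in for the LSTM, a quadratic penalty toward the Algorithm-1 posterior stands in for the Kullback-Leibler risk term, and the GMM is reduced to two unit-variance components with fixed weights whose means are tuned by a random-walk Metropolis-Hastings step. All names (`neg_log_post`, `theta_post`, `lam`) are illustrative assumptions, not symbols from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 2.0 * X + rng.normal(scale=0.5, size=200)  # toy regression data

# --- Phase 1: pre-train model parameter theta by SGD on squared error ---
theta = 0.0
lr = 0.01
for _ in range(500):
    i = rng.integers(len(X))
    grad = 2 * (theta * X[i] - y[i]) * X[i]  # d/dtheta of (theta*x - y)^2
    theta -= lr * grad

# --- Phase 2: tune GMM component means by Metropolis-Hastings, with a
# penalty (a quadratic stand-in for the KL risk) pulling theta toward the
# Algorithm-1 posterior parameter theta_post ---
theta_post = 2.0            # stand-in for the Algorithm-1 posterior Θ̂_post
lam = 0.1                   # regularization weight λ
mu = np.array([-1.0, 1.0])  # GMM component means (weights fixed at 1/2)

def neg_log_post(mu):
    """Negative log-posterior of the residuals under the 2-component GMM."""
    resid = y - theta * X
    comp = np.exp(-0.5 * (resid[:, None] - mu[None, :]) ** 2) / np.sqrt(2 * np.pi)
    loglik = np.log(comp.mean(axis=1)).sum()      # equal-weight mixture
    penalty = lam * (theta - theta_post) ** 2     # stand-in for KL risk
    return -loglik + penalty

cur = neg_log_post(mu)
for _ in range(1000):
    prop = mu + rng.normal(scale=0.1, size=2)     # random-walk proposal
    new = neg_log_post(prop)
    if np.log(rng.uniform()) < cur - new:         # MH acceptance (log scale)
        mu, cur = prop, new

print(round(theta, 2))  # slope learned in Phase 1, close to the true 2.0
```

With the fixed seed, Phase 1 recovers a slope near the true value 2.0, and Phase 2 drifts the component means toward the residual distribution; in the paper these steps operate on the full LSTM/GMM parameter set and use the exact KL gradient of Formula (25).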