Sensors. 2022 Nov 19;22(22):8974. doi: 10.3390/s22228974
Algorithm 2 DL model training
Input:  W0, η, B, ζ, ρ, τ
Output:  Wnr (trained weights)
  1: Divide Dn into B equal-size batches, each with feature vector x;
  2: Initialize Wnr;
  3: For each batch do:
    c1 ← Forward x to Conv1;
    c2 ← Forward c1 to Conv2;
    λ ← Flatten(c2);
    H ← Forward λ to LSTM1;
    μ ← Forward H to LSTM2;
    M ← Forward μ to Dense;
    γ ← Dropout(M);
    ν ← Forward γ to Output (Sigmoid);
  4: Compute the binary cross-entropy loss:
    ζ = −(1/B) · Σ_{i=0}^{B−1} [ x_i·log(x̂_i) + (1 − x_i)·log(1 − x̂_i) ];
  5: Update Wnr via gradient descent with learning rate η;
  6: Repeat steps 3-5 until ζ converges;
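Steps 1 and 4 of the algorithm can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: it assumes B is the number of batches, that x and x̂ are the true and predicted labels in [0, 1], and the function names are ours.

```python
import numpy as np

def make_batches(D, B):
    """Step 1: divide dataset D into B equal-size batches
    (any remainder samples are dropped for simplicity)."""
    n = len(D) // B
    return [D[i * n:(i + 1) * n] for i in range(B)]

def bce_loss(x, x_hat, eps=1e-12):
    """Step 4: binary cross-entropy averaged over the batch,
    zeta = -(1/B) * sum[ x*log(x_hat) + (1-x)*log(1-x_hat) ].
    eps clips predictions away from 0 and 1 to avoid log(0)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.clip(np.asarray(x_hat, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(x * np.log(x_hat) + (1.0 - x) * np.log(1.0 - x_hat)))
```

For example, `bce_loss([1], [0.5])` gives log 2 ≈ 0.693, and the loss shrinks toward 0 as predictions approach the true labels, which is what drives the weight updates in step 5.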