Sensors. 2024 Sep 18;24(18):6035. doi: 10.3390/s24186035
Algorithm 1 Training the VAE-WACGAN model
1: θ_E, θ_G, θ_D ← initialize network parameters
2: repeat
3:   for i = 1, …, n_critic do
4:     (X_r, y_r) ← random mini-batch from dataset
5:     z_r ← E(X_r, y_r)
6:     X_f ← G(z_r, y_r)
7:     z_p ← samples from prior N(0, I)
8:     X_p ← G(z_p, y_r)
9:     // Update the parameters of the discriminator
10:     L_S ← −E[S(X_r)] + E[S(X_f)] + E[S(X_p)] + λ E_x̂[(‖∇_x̂ S(x̂)‖_2 − 1)²]
11:     L_C ← −E[log P(C(X_r) = y_r)] − E[log P(C(X_f) = y_r)] − E[log P(C(X_p) = y_r)]
12:     L_D ← L_S + L_C
13:     θ_D ← Adam(θ_D, ∇_{θ_D} L_D)
14:   end for
15:   // Update the parameters of the encoder and decoder
16:   L_prior ← D_KL(q(z_r | X_r, y_r) ‖ p(z_r | y_r))
17:   L_recon ← −E_{q(z_r | X_r, y_r)}[log p(X_r | z_r, y_r)]
18:   L_E ← L_prior + Υ·L_recon
19:   L_G ← −E[S(X_p)] − E[log P(C(X_p) = y_r)] + Υ·L_recon
20:   θ_E ← Adam(θ_E, ∇_{θ_E} L_E)
21:   θ_G ← Adam(θ_G, ∇_{θ_G} L_G)
22: until the discriminator output has converged to 0.5
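The loss terms in steps 10–12 and 16 can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation: it substitutes a hypothetical linear critic S and linear classifier C (so the gradient-penalty norm ‖∇_x̂ S(x̂)‖_2 is known in closed form) and random arrays standing in for the real batch X_r, the reconstruction X_f, and the prior sample X_p. All names and dimensions here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 8, 4, 3          # batch size, feature dim, number of classes (toy values)
lam = 10.0                 # gradient-penalty weight λ

# Hypothetical stand-ins: a linear critic S(x) = w·x and a linear classifier.
w = rng.normal(size=d)              # for a linear critic, grad_x S(x) = w exactly
W_c = rng.normal(size=(d, k))       # classifier logits: x @ W_c

def S(X):
    return X @ w

def class_log_prob(X, y):
    # log P(C(x) = y) under a softmax over the linear logits
    logits = X @ W_c
    logits -= logits.max(axis=1, keepdims=True)
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return log_softmax[np.arange(len(y)), y]

X_r = rng.normal(size=(n, d))       # real mini-batch (X_r, y_r)
X_f = rng.normal(size=(n, d))       # stand-in for the reconstruction G(E(X_r, y_r), y_r)
X_p = rng.normal(size=(n, d))       # stand-in for the prior sample decoded, G(z_p, y_r)
y_r = rng.integers(0, k, size=n)

# Step 10: Wasserstein critic loss with gradient penalty on interpolates x_hat.
eps = rng.uniform(size=(n, 1))
x_hat = eps * X_r + (1.0 - eps) * X_f
grad_norm = np.full(n, np.linalg.norm(w))   # ||grad S(x_hat)||_2, constant for a linear critic
L_S = (-S(X_r).mean() + S(X_f).mean() + S(X_p).mean()
       + lam * ((grad_norm - 1.0) ** 2).mean())

# Step 11: auxiliary-classifier loss (negative log-likelihood on all three batches).
L_C = -(class_log_prob(X_r, y_r).mean()
        + class_log_prob(X_f, y_r).mean()
        + class_log_prob(X_p, y_r).mean())

# Step 12: total discriminator loss.
L_D = L_S + L_C

# Step 16: KL divergence of q = N(mu, sigma^2) from the standard-normal prior,
# in closed form: 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1).
mu = rng.normal(size=(n, d))
log_var = 0.1 * rng.normal(size=(n, d))
L_prior = 0.5 * (mu ** 2 + np.exp(log_var) - log_var - 1.0).sum(axis=1).mean()

print(L_D, L_prior)
```

In the actual model the critic gradient in the penalty term would be obtained by automatic differentiation through the network at x̂, and L_recon would come from the decoder's likelihood; this sketch only makes the sign structure of the losses concrete.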