Entropy. 2024 Mar 27;26(4):290. doi: 10.3390/e26040290
Algorithm A3 Overview of Vanilla-SLkGAN training
Require: $k$, number of epochs $n_e$, batch size $B$, learning rate $\eta$
Initialize generator $G$ with parameters $\theta_G$ and discriminator $D$ with parameters $\theta_D$.
for $t = 1$ to $n_e$ do
    Sample a batch of real data $\mathbf{x} = \{x_1, \ldots, x_B\}$ from the dataset.
    Sample a batch of noise vectors $\mathbf{z} = \{z_1, \ldots, z_B\}$, with each $z_i \sim \mathcal{N}(0, I)$.
    Update the discriminator's parameters using an Adam optimizer with learning rate $\eta$ by descending the gradient
    \[
    \nabla_{\theta_D}\, \frac{1}{B} \sum_{i=1}^{B} -\Big[ \log D(x_i) + \log\big(1 - D(G(z_i))\big) \Big],
    \]
    or update the discriminator's parameters with a simplified gradient penalty (GP) by descending the gradient
    \[
    \nabla_{\theta_D}\, \frac{1}{B} \sum_{i=1}^{B} -\Big[ \log D(x_i) + \log\big(1 - D(G(z_i))\big) \Big] + 5 \sum_{i=1}^{B} \left\lVert \nabla_{x_i} \log \frac{D(x_i)}{1 - D(x_i)} \right\rVert_2^2.
    \]
    Update the generator's parameters using an Adam optimizer with learning rate $\eta$ by descending the gradient
    \[
    \nabla_{\theta_G}\, \frac{1}{B} \sum_{i=1}^{B} \frac{1}{2} \Big( \big|1 - D(G(z_i))\big|^{k} - 1 \Big).
    \]
end for
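To make the two objectives concrete, the discriminator and generator losses descended above can be sketched as plain NumPy functions operating on batches of discriminator outputs. This is an illustrative sketch, not code from the paper: the function names `vanilla_d_loss` and `slk_g_loss`, and the toy batch values, are assumptions; in practice $D(x_i)$ and $D(G(z_i))$ would come from trained networks and the gradients would be taken by an autodiff framework.

```python
import numpy as np

def vanilla_d_loss(d_real, d_fake):
    """Discriminator objective to descend (vanilla GAN term):
    -(1/B) * sum_i [log D(x_i) + log(1 - D(G(z_i)))]."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def slk_g_loss(d_fake, k):
    """SLkGAN generator objective to descend:
    (1/B) * sum_i (1/2) * (|1 - D(G(z_i))|^k - 1)."""
    return np.mean(0.5 * (np.abs(1.0 - d_fake) ** k - 1.0))

# Toy batch (hypothetical values): a confident discriminator assigns
# D(x_i) near 1 to real samples and D(G(z_i)) near 0 to fakes.
d_real = np.array([0.90, 0.95, 0.99])
d_fake = np.array([0.05, 0.10, 0.02])

print(vanilla_d_loss(d_real, d_fake))  # small: D is doing well
print(slk_g_loss(d_fake, k=2.0))       # near 0, the worst case for G
```

Note the sign conventions: minimizing `slk_g_loss` drives $D(G(z_i))$ toward 1 (its minimum is $-1/2$ when the discriminator is fully fooled), mirroring the generator gradient step in Algorithm A3.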