Entropy. 2024 Mar 27;26(4):290. doi: 10.3390/e26040290
Algorithm A1 Overview of $(\alpha_D, \alpha_G)$-GAN training
   Require $\alpha_D$, $\alpha_G$, number of epochs $n_e$, batch size $B$, learning rate $\eta$
   Initialize generator $G$ with parameters $\theta_G$, discriminator $D$ with parameters $\theta_D$.
   for $i = 1$ to $n_e$ do
      Sample a batch of real data $x = \{x_1, \ldots, x_B\}$ from the dataset
      Sample a batch of Gaussian noise vectors $z = \{z_1, \ldots, z_B\} \sim \mathcal{N}(0, I)$
      Update the discriminator’s parameters using an Adam optimizer with learning rate η by descending the gradient:
$$\nabla_{\theta_D} \frac{1}{B} \sum_{i=1}^{B} \left( \ell_{\alpha_D}(1, D(x_i)) + \ell_{\alpha_D}(0, D(G(z_i))) \right)$$

      or update the discriminator’s parameters with a simplified gradient penalty (GP) added:
$$\nabla_{\theta_D} \left[ \frac{1}{B} \sum_{i=1}^{B} \left( \ell_{\alpha_D}(1, D(x_i)) + \ell_{\alpha_D}(0, D(G(z_i))) \right) + 5 \sum_{i=1}^{B} \left\| \nabla_{x_i} \log \frac{D(x_i)}{1 - D(x_i)} \right\|_2^2 \right]$$

      Update the generator’s parameters using an Adam optimizer with learning rate η by descending the gradient:
$$\nabla_{\theta_G} \frac{1}{B} \sum_{i=1}^{B} \left( -\ell_{\alpha_G}(0, D(G(z_i))) \right)$$

   end for
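
The per-batch objectives above can be sketched in plain Python, assuming the standard $\alpha$-loss $\ell_\alpha(1, \hat{y}) = \frac{\alpha}{\alpha - 1}\left(1 - \hat{y}^{(\alpha-1)/\alpha}\right)$ (with $\hat{y}$ replaced by $1 - \hat{y}$ for the label $0$), which reduces to log-loss as $\alpha \to 1$. The function names (`alpha_loss`, `discriminator_batch_loss`, `generator_batch_loss`) are illustrative, not from the paper; the gradient-penalty variant and the Adam parameter updates themselves are omitted:

```python
import math

def alpha_loss(alpha, y, y_hat):
    """alpha-loss l_alpha(y, y_hat) for a binary label y in {0, 1}.

    For alpha != 1: (alpha / (alpha - 1)) * (1 - p ** ((alpha - 1) / alpha)),
    where p = y_hat if y == 1 else 1 - y_hat.
    At alpha == 1 it reduces to the usual log-loss -log(p).
    """
    p = y_hat if y == 1 else 1.0 - y_hat
    if alpha == 1.0:
        return -math.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

def discriminator_batch_loss(alpha_d, d_real, d_fake):
    """(1/B) * sum_i [ l_aD(1, D(x_i)) + l_aD(0, D(G(z_i))) ], descended by D.

    d_real / d_fake are the discriminator outputs on real and generated batches.
    """
    B = len(d_real)
    return sum(alpha_loss(alpha_d, 1, dr) + alpha_loss(alpha_d, 0, df)
               for dr, df in zip(d_real, d_fake)) / B

def generator_batch_loss(alpha_g, d_fake):
    """(1/B) * sum_i [ -l_aG(0, D(G(z_i))) ], descended by G (saturating form)."""
    B = len(d_fake)
    return sum(-alpha_loss(alpha_g, 0, df) for df in d_fake) / B
```

For $\alpha_D = \alpha_G = 1$ these recover the vanilla GAN objectives: for example, `discriminator_batch_loss(1.0, [0.9], [0.1])` equals $-\log 0.9 - \log 0.9$, the cross-entropy of a discriminator scoring a real sample at 0.9 and a fake at 0.1.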