Algorithm 1. The training algorithm for DCGAN-GP.
  1. Input: number of training iterations N, number of discriminator D training iterations per iteration k, batch size m, gradient penalty weight λ, discriminator update count limit n_critic

  2. Randomly initialize the generator G network parameters θ_G and the discriminator D network parameters θ_D

  3. For i=1:N do

# Train the discriminator
  4. For j=1:k do
# Collect mini-batch samples
  5. Randomly sample m noise samples {z_1, z_2, ..., z_m} from the noise distribution p(z)
  6. Randomly sample m real samples {x_1, x_2, ..., x_m} from the real data distribution p_data(x)
# Compute the random interpolation points for the gradient penalty term
  7. Sample m random numbers {λ_1, λ_2, ..., λ_m} from the uniform distribution on [0, 1]
  8. Construct the interpolation samples $\hat{x}_i = \lambda_i x_i + (1 - \lambda_i)\, G(z_i)$
# Calculate the loss function of the discriminator and update the parameters
  9. Calculate the loss function of the discriminator
  10. Calculate the gradient penalty term
  11. Update the discriminator parameters θ_D with the gradient $\nabla_{\theta_D} = \nabla_{\theta_D}\left[\, \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{z \sim P_z}[D(G(z))] + \lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big] \right]$
  12. End

# Train the generator
  13. Randomly sample m noise samples {z_1, z_2, ..., z_m} from the noise distribution p(z)
# Calculate the generator's loss function and update the parameters
  14. Calculate the loss function of the generator
  15. Update the generator parameters θ_G with the gradient $\nabla_{\theta_G} = -\nabla_{\theta_G}\left[\tfrac{1}{m} \sum_{i=1}^{m} D(G(z_i))\right]$

  16. End
  17. Output: Generator G
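To make the steps above concrete, the following is a minimal PyTorch sketch of the Algorithm 1 training loop. It is a sketch under assumptions, not the paper's implementation: the helper name `gradient_penalty`, the optimizer settings (Adam with lr = 1e-4 and betas = (0, 0.9), the usual WGAN-GP choice), the default hyperparameters, the (image, label) data-loader format, and the DCGAN-style 4-D noise shape are all illustrative; only the update rules themselves follow the algorithm.

```python
# Minimal sketch of Algorithm 1 in PyTorch. Network architectures, optimizer
# settings, and hyperparameter defaults are assumptions for illustration.
import torch

def gradient_penalty(D, real, fake, lam):
    """Steps 7-10: interpolate real/generated samples and penalize
    deviations of the discriminator's gradient norm from 1."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)  # λ_i ~ U[0, 1]
    x_hat = eps * real + (1 - eps) * fake  # x̂_i = λ_i x_i + (1 − λ_i) G(z_i)
    x_hat.requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()

def train_dcgan_gp(G, D, loader, z_dim=100, N=100, k=5, lam=10.0, device="cpu"):
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.0, 0.9))
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
    batches = iter(loader)
    for i in range(N):                       # step 3: outer loop
        for j in range(k):                   # step 4: k discriminator updates
            try:
                real, _ = next(batches)      # assumes (image, label) batches
            except StopIteration:
                batches = iter(loader)
                real, _ = next(batches)
            real = real.to(device)
            z = torch.randn(real.size(0), z_dim, 1, 1, device=device)  # step 5
            fake = G(z).detach()
            # Step 11, written as a loss to minimize: descending
            # E[D(G(z))] - E[D(x)] + λ·GP ascends the step-11 objective.
            loss_D = D(fake).mean() - D(real).mean() \
                     + gradient_penalty(D, real, fake, lam)
            opt_D.zero_grad(); loss_D.backward(); opt_D.step()
        # Steps 13-15: generator update, minimizing -(1/m) Σ D(G(z_i))
        z = torch.randn(real.size(0), z_dim, 1, 1, device=device)
        loss_G = -D(G(z)).mean()
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return G                                 # step 17: output generator G
```

Two details worth noting: detaching `fake` keeps generator gradients out of the discriminator updates in steps 4-12, and `create_graph=True` in the penalty lets the gradient-norm term itself be differentiated with respect to θ_D, which the gradient penalty requires.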