Algorithm 2 (rPassD2SGAN) computes a gradient penalty for each discriminator. We use the default values $\lambda = 10$, $n_{critic} = 10$, $n_{gen} = 40$, $\alpha = 0.0001$, $\beta_1 = 0.5$, and $\beta_2 = 0.9$.
Require: the gradient penalty coefficient $\lambda$, the number of critic iterations per generator iteration $n_{critic}$, the number of generator iterations per discriminator iteration $n_{gen}$, the batch size $m$, and the Adam hyperparameters $\alpha$, $\beta_1$, $\beta_2$.
Require: initial critic parameters $w_0$ (for $D_1$) and $u_0$ (for $D_2$), and initial generator parameters $\theta_0$.
while $\theta$ has not converged do
  for $t = 1, \dots, n_{critic}$ do
    for $i = 1, \dots, m$ do
      Sample real data $x \sim \mathbb{P}_r$, latent variable $z \sim p(z)$, and a random number $\epsilon \sim U[0, 1]$.
      $\tilde{x} \leftarrow G_\theta(z)$
      $\hat{x} \leftarrow \epsilon x + (1 - \epsilon)\tilde{x}$
      $L_{D_1}^{(i)} \leftarrow D_w(\tilde{x}) - D_w(x) + \lambda \left( \lVert \nabla_{\hat{x}} D_w(\hat{x}) \rVert_2 - 1 \right)^2$
    end for
    $w \leftarrow \mathrm{Adam}\left( \nabla_w \frac{1}{m} \sum_{i=1}^{m} L_{D_1}^{(i)},\ w,\ \alpha,\ \beta_1,\ \beta_2 \right)$
    for $i = 1, \dots, m$ do
      Sample real data $x \sim \mathbb{P}_r$, latent variable $z \sim p(z)$, and a random number $\epsilon \sim U[0, 1]$.
      $\tilde{x} \leftarrow G_\theta(z)$
      $\bar{x} \leftarrow \epsilon \tilde{x} + (1 - \epsilon)x$
      $L_{D_2}^{(i)} \leftarrow D_u(x) - D_u(\tilde{x}) + \lambda \left( \lVert \nabla_{\bar{x}} D_u(\bar{x}) \rVert_2 - 1 \right)^2$
    end for
    $u \leftarrow \mathrm{Adam}\left( \nabla_u \frac{1}{m} \sum_{i=1}^{m} L_{D_2}^{(i)},\ u,\ \alpha,\ \beta_1,\ \beta_2 \right)$
  end for
  for $t = 1, \dots, n_{gen}$ do
    Sample a batch of latent variables $\{ z^{(i)} \}_{i=1}^{m} \sim p(z)$
    $\theta \leftarrow \mathrm{Adam}\left( \nabla_\theta \frac{1}{m} \sum_{i=1}^{m} \left( -D_w(G_\theta(z^{(i)})) + D_u(G_\theta(z^{(i)})) \right),\ \theta,\ \alpha,\ \beta_1,\ \beta_2 \right)$
  end for
end while
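The update rules above correspond to a WGAN-GP style training loop with two critics trained under opposite objectives: $D_1$ scores real samples high and generated samples low, while $D_2$ does the reverse. The following PyTorch sketch illustrates one outer iteration under that reading; the names G, D1, D2, gradient_penalty, train_step, data_iter, and latent_dim are assumptions introduced here for illustration, not the authors' released code, and the generator's loss signs follow the reconstruction of the last update step.

import torch

LAMBDA, N_CRITIC, N_GEN = 10.0, 10, 40   # defaults listed in Algorithm 2
ALPHA, BETA1, BETA2 = 1e-4, 0.5, 0.9      # Adam hyperparameters

def gradient_penalty(D, a, b):
    # Interpolate between the two batches and penalise gradients of D
    # whose norm deviates from 1 (the WGAN-GP term in L_D1 and L_D2).
    eps = torch.rand(a.size(0), *([1] * (a.dim() - 1)), device=a.device)
    x_hat = (eps * a + (1.0 - eps) * b).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1.0) ** 2).mean()

def train_step(G, D1, D2, opt_g, opt_d1, opt_d2, data_iter, latent_dim, m):
    device = next(G.parameters()).device
    # n_critic updates of both critics, then n_gen generator updates.
    for _ in range(N_CRITIC):
        # --- D1: real samples scored high, generated samples low ---
        x = next(data_iter).to(device)
        z = torch.randn(m, latent_dim, device=device)
        x_tilde = G(z).detach()
        loss_d1 = (D1(x_tilde).mean() - D1(x).mean()
                   + LAMBDA * gradient_penalty(D1, x, x_tilde))
        opt_d1.zero_grad(); loss_d1.backward(); opt_d1.step()

        # --- D2: trained with the opposite sign (generated samples scored high) ---
        x = next(data_iter).to(device)
        z = torch.randn(m, latent_dim, device=device)
        x_tilde = G(z).detach()
        loss_d2 = (D2(x).mean() - D2(x_tilde).mean()
                   + LAMBDA * gradient_penalty(D2, x_tilde, x))
        opt_d2.zero_grad(); loss_d2.backward(); opt_d2.step()

    for _ in range(N_GEN):
        z = torch.randn(m, latent_dim, device=device)
        fake = G(z)
        # Generator tries to fool both critics.
        loss_g = -D1(fake).mean() + D2(fake).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Each of opt_g, opt_d1, and opt_d2 would be a torch.optim.Adam instance created with lr=ALPHA and betas=(BETA1, BETA2), matching the hyperparameters listed in the Require line.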