Algorithm 1 HRGAN training algorithm.
Model: D: discriminator. G: generator. G_d: generator down-sampling output. E: pre-trained MobileNetV3-small classifier.
Parameters: θ_disc: discriminator parameters. θ_gen: generator parameters.
Input: x: real data set. y: one-hot encoded label vectors. z: random noise sampled from a normal distribution. w: one-hot encoded label vectors converted from random integers sampled from a uniform distribution.
Require: α: the learning rate of the generator. m: the discriminator batch size. n: the ratio of discriminator to generator backpropagation steps. cls: the number of classes. MS_real: the pre-calculated MobileNet score of the real data set. HR: the target ratio of the MobileNet score of the high-resolution output images to that of the real images.

1: Initialization: θ_disc, θ_gen ← Xavier uniform
2: while θ_gen has not converged do
3:    for i = 0, …, n do
4:        Sample {x^(i)}_{i=1}^{m} ~ P_r, a batch of images from the real data set.
5:        Sample {y^(i)}_{i=1}^{m} ~ P_r, a batch of one-hot label vectors from the real data set.
6:        Sample {z^(i)}_{i=1}^{m} ~ p(z), a batch from a normal distribution.
7:        grad_{θ_disc} ← ∇_{θ_disc} [ (1/m) Σ_{i=1}^{m} min(0, −k + D(x^(i), y^(i))) + (1/m) Σ_{i=1}^{m} min(0, −k − D(G_d(z^(i), y^(i)), y^(i))) ]
8:        θ_disc ← θ_disc + 2α · Adam(θ_disc, grad_{θ_disc})
9:    end for
10:    Sample {z^(i)}_{i=1}^{2m} ~ p(z), a batch from a normal distribution.
11:    Sample {w^(i)}_{i=1}^{2m} ~ U(0, cls−1), a batch of one-hot label vectors from a uniform distribution.
12:    P(c | z^(i), w^(i)) ← softmax(E(G(z^(i), w^(i))))
13:    P(c) ← (1/2m) Σ_{i=1}^{2m} P(c | z^(i), w^(i))
14:    MS_fake ← exp( (1/2m) Σ_{i=1}^{2m} Σ_{c=1}^{cls} P(c | z^(i), w^(i)) · log [ P(c | z^(i), w^(i)) / P(c) ] )
15:    grad_{θ_gen} ← ∇_{θ_gen} [ −(1/2m) Σ_{i=1}^{2m} D(G_d(z^(i), w^(i)), w^(i)) + max(0, log(MS_real · HR) − log MS_fake) ]
16:    θ_gen ← θ_gen − α · Adam(θ_gen, grad_{θ_gen})
17: end while
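
The following is a minimal PyTorch sketch of the discriminator phase (steps 4-8): a hinge-loss ascent step on real pairs (x, y) and on down-sampled fakes G_d(z, y). The names disc, gen_down, opt_disc, discriminator_step, and the default margin k = 1 are hypothetical stand-ins for D, G_d, the discriminator's Adam optimizer with learning rate 2α, and the margin in step 7; the paper's own implementation may differ.

import torch

def discriminator_step(disc, gen_down, opt_disc, x_real, y_real, z, k=1.0):
    """One hinge-loss update of the discriminator (Algorithm 1, steps 4-8)."""
    opt_disc.zero_grad()
    with torch.no_grad():
        # Down-sampled fakes conditioned on the real labels y (step 7).
        x_fake = gen_down(z, y_real)
    d_real = disc(x_real, y_real)
    d_fake = disc(x_fake, y_real)
    # Ascent on min(0, -k + D(real)) + min(0, -k - D(fake)),
    # written here as descent on the negated objective.
    loss = -(torch.clamp(d_real - k, max=0.0).mean()
             + torch.clamp(-d_fake - k, max=0.0).mean())
    loss.backward()
    opt_disc.step()  # Adam with learning rate 2*alpha (step 8)
    return loss.item()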
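A companion sketch of the generator phase (steps 10-16), under the same assumptions: the MobileNet score MS_fake is computed Inception-Score-style from the softmax outputs of the frozen MobileNetV3-small E on the high-resolution images G(z, w), and its log enters the penalty max(0, log(MS_real · HR) − log MS_fake) alongside the adversarial term on the down-sampled fakes. generator_step, gen, gen_down, mobilenet, opt_gen, ms_real, and hr_ratio are hypothetical names, not the authors' code.

import math
import torch
import torch.nn.functional as F

def generator_step(gen, gen_down, disc, mobilenet, opt_gen,
                   z, w_onehot, ms_real, hr_ratio):
    """One generator update (Algorithm 1, steps 12-16)."""
    opt_gen.zero_grad()
    # Steps 12-14: MobileNet score of the high-resolution fakes, i.e. the
    # exponentiated mean KL between conditional and marginal class predictions.
    p_cond = F.softmax(mobilenet(gen(z, w_onehot)), dim=1)   # P(c | z, w)
    p_marg = p_cond.mean(dim=0, keepdim=True)                # P(c)
    kl = (p_cond * (p_cond.clamp_min(1e-12).log()
                    - p_marg.clamp_min(1e-12).log())).sum(dim=1)
    log_ms_fake = kl.mean()                                  # log(MS_fake)
    # Step 15: adversarial term on the down-sampled fakes plus the
    # score penalty max(0, log(MS_real * HR) - log(MS_fake)).
    adv = -disc(gen_down(z, w_onehot), w_onehot).mean()
    penalty = torch.clamp(math.log(ms_real * hr_ratio) - log_ms_fake, min=0.0)
    loss = adv + penalty
    loss.backward()
    opt_gen.step()  # Adam with learning rate alpha (step 16)
    return loss.item()

In the outer loop (steps 2-3 and 9-11), n calls of the discriminator sketch would be interleaved with one generator call per iteration, with the generator drawing a batch of 2m noise vectors and uniformly sampled one-hot labels w.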