Table 1.
An overview of the GAN variants discussed in the “Architecture variants” section
| Categories | GAN type | Main architectural contribution |
| --- | --- | --- |
| Basic GAN | GAN [67] | Use a multilayer perceptron in both the generator and the discriminator |
| Convolution-based GANs | DCGAN [112] | Employ convolutional and transpose-convolutional layers in the discriminator and generator, respectively |
| | PROGAN [131] | Progressively grow the layers of the GAN as training progresses |
| Condition-based GANs | cGAN [118] | Control the kind of image being generated using prior information |
| | ACGAN [119] | Add a classifier loss, in addition to the adversarial loss, to reconstruct class labels |
| | VACGAN [120] | Separate out the classifier loss of ACGAN by introducing a classifier network parallel to the discriminator |
| | InfoGAN [121] | Learn a disentangled latent representation by maximizing the mutual information between the latent vector and the generated images |
| | SCGAN [122] | Learn a disentangled latent representation by adding a similarity constraint on the generator |
| Latent-representation-based GANs | DEGAN [116] | Utilize the pretrained decoder and encoder structure from a VAE to transform random Gaussian noise into a distribution that contains intrinsic information of the real images |
| | VAEGAN [115] | Combine a VAE and a GAN |
| | AAE [113] | Impose a discriminator on the latent space of an autoencoder architecture |
| | VEEGAN [117] | Add a reconstruction network that reverses the action of the generator network to address mode collapse |
| | BiGAN [114] | Attach an encoder component to learn the inverse mapping from data space to latent space |
| Stacks of GANs | LAPGAN [132] | Introduce a Laplacian pyramid framework for image detail enhancement |
| | MADGAN [135] | Use multiple generators to discover diverse modes of the data distribution |
| | D2GAN [134] | Employ two discriminators to address mode collapse |
| | CycleGAN [137] | Use two generators and two discriminators to accomplish unpaired image-to-image translation |
| | CoGAN [136] | Use two GANs to learn a joint distribution from two-domain images |
| Other variants | SAGAN [141] | Incorporate a self-attention mechanism to model long-range dependencies |
| | GRAN [133] | A recurrent generative model trained using an adversarial process |
| | SRGAN [139] | Use very deep convolutional layers with residual blocks for image super-resolution |
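All of the variants in the table build on the same adversarial objective introduced with the original GAN [67]: the discriminator maximizes log D(x) + log(1 − D(G(z))), while the (non-saturating) generator maximizes log D(G(z)). A minimal numerical sketch of these two losses, assuming the discriminator outputs a probability in (0, 1) (the function names here are illustrative, not from any of the cited papers):

```python
import math

def discriminator_loss(d_real: float, d_fake: float) -> float:
    """Standard GAN discriminator loss: -[log D(x) + log(1 - D(G(z)))].

    d_real = D(x) on a real sample, d_fake = D(G(z)) on a generated sample,
    both probabilities in (0, 1).
    """
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake: float) -> float:
    """Non-saturating generator loss: -log D(G(z))."""
    return -math.log(d_fake)

# At the theoretical optimum the discriminator cannot tell real from fake,
# so D(x) = D(G(z)) = 0.5 and the discriminator loss equals 2*log(2).
print(discriminator_loss(0.5, 0.5))  # ≈ 1.386
print(generator_loss(0.5))           # ≈ 0.693
```

The architecture variants above change what G and D are made of (MLPs, convolutions, encoders, multiple networks) or add auxiliary terms (classifier, mutual-information, reconstruction, or similarity losses) on top of this base objective.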