Table 5: GANs used in included studies.
| Study | Main categories | Specific categories | Generator (G) and discriminator (D) | Functions of GANs | Characteristics of GANs |
| --- | --- | --- | --- | --- | --- |
| Wang et al. (2018) | Conditional GAN (cGAN) | 3D-cGAN | G: 3D U-net based CNN; D: 3D U-net based CNN | Image denoising (PET to PET) | Adjusting the U-net to fit 3D PET data; using a progressive refinement scheme to improve generation quality; using the L1-norm estimation error to reduce blurring; using batch normalization to improve learning efficiency |
| Oh et al. (2020) | cGAN | 2D-cGAN | G: CNN; D: CNN | Image segmentation (PET to PET) | Using ReLU as the activation function in the convolution layers to reduce the vanishing-gradient problem |
| Shi et al. (2019) | cGAN | 2D-cGAN | G: U-net based CNN; D: CNN | Image segmentation (MRI to MRI) | Using skip connections in the U-net to increase the generator's ability to segment small local regions |
| Yan et al. (2018) | cGAN | 2D-cGAN | G: U-net based CNN; D: convolutional Markovian discriminator | Modality transfer (MRI to PET) | Using a convolutional Markovian discriminator to improve discrimination performance |
| Ouyang et al. (2019) | cGAN | Pix2pix cGAN | G: U-net based CNN; D: CNN | Image denoising (PET to PET) | Using feature matching to improve training stability; using an extra amyloid-status classifier so that the generated image fits the patient's real amyloid status |
| Choi et al. (2018) | cGAN | Pix2pix cGAN | G: U-net based CNN; D: CNN | Modality transfer (PET to MRI) | - |
| Wang et al. (2019) | cGAN | "Locality adaptive" multimodality GAN (LA-GAN) | G: 3D U-net based CNN; D: 3D U-net based CNN | Image denoising (MRI + PET to PET) | Adjusting the U-net to fit 3D PET data; using a progressive refinement scheme (auto-context training method) to improve generation quality |
| Baumgartner et al. (2018) | WGAN | WGAN | G: 3D U-net based CNN; D: CNN | Feature extraction (MRI to MRI) | Using a new map function in the generator to generate MRIs of AD patients from those of healthy controls |
| Wegmayr et al. (2019) | WGAN | WGAN | Same as Baumgartner et al. (2018) | Feature extraction (MRI to MRI) | Same as Baumgartner et al. (2018) |
| Bowles et al. (2018) | WGAN | WGAN | - | Feature extraction (MRI to MRI) | Using a training-data reweighting scheme to improve the generator's ability to produce severely atrophic images |
| Islam and Zhang (2020) | Deep convolutional GAN (DCGAN) | DCGAN | G: CNN; D: CNN | Data augmentation (noise to PET) | Using batch normalization to regulate the scale of the extracted features; using LeakyReLU to prevent the vanishing-gradient problem |
| Kang et al. (2020) | DCGAN | DCGAN | G: CNN; D: CNN | Data augmentation (noise to PET) | Using a regularization term in the Wasserstein loss to improve training stability; two different GAN networks are used to generate Aβ-negative and Aβ-positive images, respectively, to improve generalization |
| Kang et al. (2018) | DCGAN | DCGAN | G: CAE; D: CNN | Modality transfer (PET to PETSN) | Using the fidelity loss between the MRI-based spatial-normalization result and the generated image to produce the template-like image |
| Pan et al. (2018) | Cycle GAN | 3D cycle-consistent GAN | Two G & D sets; G1 & G2: CNN; D1 & D2: CNN | Modality transfer (MRI to PET) | Using two sets of generative adversarial networks to ensure that the generated image is not only similar to the real image but also corresponds to the input MR image |
| Kim et al. (2020) | Boundary Equilibrium GAN (BEGAN) | BEGAN | G: CAE; D: CAE | Feature extraction (PET to PET) | The discriminator and generator are trained to maximize and minimize, respectively, the distance between the real- and fake-image reconstruction losses rather than the data distributions |
Note: PETSN, PET with spatial normalization; U-net, a modified CNN; ReLU, rectified linear unit; CAE, convolutional autoencoder; Aβ, amyloid-β.
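
Most entries in Table 5 share the same pix2pix-style cGAN recipe: a U-net-like generator translates a source image into a target image, a CNN discriminator judges (source, target) pairs, and an L1 term (as in Wang et al., 2018) is added to the adversarial loss to reduce blurring. The following minimal PyTorch sketch illustrates one training step of this recipe; the toy networks, image sizes, and `lambda_l1` weight are illustrative assumptions, not the architectures or settings of the cited studies.

```python
# Minimal, illustrative pix2pix-style cGAN training step (PyTorch).
# Toy stand-ins for the U-net generators and CNN discriminators in Table 5;
# the network sizes and lambda_l1 are assumptions, not values from the studies.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Tiny encoder-decoder stand-in for a U-net based CNN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),            # downsample
            nn.BatchNorm2d(16),                                   # batch norm (cf. Wang et al., 2018)
            nn.ReLU(inplace=True),                                # ReLU activation (cf. Oh et al., 2020)
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),    # upsample back to input size
            nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class ToyDiscriminator(nn.Module):
    """Tiny CNN discriminator conditioned on the source image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1),             # source + target stacked on channels
            nn.LeakyReLU(0.2, inplace=True),                      # LeakyReLU (cf. Islam and Zhang, 2020)
            nn.Conv2d(16, 1, 4, stride=2, padding=1),             # patch-wise scores (Markovian style)
        )
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = ToyGenerator(), ToyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # assumed weighting of the L1 term

src = torch.randn(4, 1, 64, 64)  # e.g., low-dose PET slices (random stand-in)
tgt = torch.randn(4, 1, 64, 64)  # e.g., full-dose PET slices (random stand-in)

# Discriminator step: real pairs vs. generated pairs.
fake = G(src).detach()
d_real, d_fake = D(src, tgt), D(src, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D, plus an L1 term to reduce blurring.
fake = G(src)
d_fake = D(src, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, tgt)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Weighting the L1 term heavily (pix2pix's default is 100) pushes the generator toward voxel-wise fidelity, while the adversarial term supplies the high-frequency detail that a pure L1 regression would blur away.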
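The WGAN entries instead train the discriminator (critic) on a Wasserstein objective, and Kang et al. (2020) add a regularization term to that loss for stability. A common such regularizer is a gradient penalty on interpolates between real and fake batches; the sketch below shows that critic loss under this assumption. The `critic`, tensors, and `lambda_gp` value are illustrative placeholders, not taken from the studies.

```python
# Illustrative WGAN critic loss with a gradient penalty as the
# regularization term in the Wasserstein loss. All names and values
# here are assumed for demonstration.
import torch

def critic_loss_with_gp(critic, real, fake, lambda_gp=10.0):
    # Wasserstein estimate: the critic should score real high and fake low.
    loss_w = critic(fake).mean() - critic(real).mean()

    # Gradient penalty on random interpolates between real and fake images.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )
    # Penalize deviation of the per-sample gradient norm from 1.
    gp = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return loss_w + lambda_gp * gp

# Toy usage with a linear critic and random image batches.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
loss = critic_loss_with_gp(critic, torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```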