
Table 2. Architecture of the proposed GAN

Generator network G
Input layer: noise vector, latent dimension = 100
[Layer 1]: fully connected, reshaped to 4 × 4 × 512; ReLU
[Layer 2]: transposed convolution (4 × 4, 256 filters); stride = 2; batch norm; ReLU
[Layer 3]: transposed convolution (4 × 4, 128 filters); stride = 2; batch norm; ReLU
[Layer 4]: transposed convolution (4 × 4, 64 filters); stride = 2; batch norm; ReLU
[Layer 5]: transposed convolution (4 × 4, 3 filters); stride = 2; tanh
Output: generated image (64 × 64 × 3)
Discriminator network D
Input layer: CT image (64 × 64 × 3)
[Layer 1]: convolution (5 × 5, 64 filters); stride = 2; batch norm; leaky ReLU
[Layer 2]: convolution (5 × 5, 128 filters); stride = 2; batch norm; leaky ReLU
[Layer 3]: convolution (5 × 5, 256 filters); stride = 2; batch norm; leaky ReLU
[Layer 4]: convolution (5 × 5, 512 filters); stride = 2; batch norm; leaky ReLU
[Layer 5]: convolution (4 × 4, 1 filter); stride = 2; batch norm; leaky ReLU
Output: probability that the input image is real or fake
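The architecture in Table 2 follows the standard DCGAN pattern: the generator upsamples a 100-dimensional noise vector through stride-2 transposed convolutions from 4 × 4 × 512 up to a 64 × 64 × 3 image, and the discriminator mirrors this with stride-2 convolutions down to a single real/fake score. The sketch below is a minimal PyTorch rendering of these layer shapes; the framework, padding values, and leaky-ReLU slope are not stated in this excerpt and are assumed here (padding is chosen so each stride-2 layer exactly doubles or halves the spatial size). Table 2 also lists batch norm and leaky ReLU on the discriminator's final layer; the sketch omits them on that output layer so the last 4 × 4 convolution feeds a sigmoid directly, per the usual DCGAN convention.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 100-dim noise vector to a 64 x 64 x 3 image (Table 2, network G)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        # Layer 1: fully connected projection, reshaped to 4 x 4 x 512 in forward()
        self.fc = nn.Linear(latent_dim, 4 * 4 * 512)
        self.net = nn.Sequential(
            nn.ReLU(inplace=True),
            # Layer 2: 4 x 4 x 512 -> 8 x 8 x 256
            nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            # Layer 3: 8 x 8 x 256 -> 16 x 16 x 128
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            # Layer 4: 16 x 16 x 128 -> 32 x 32 x 64
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            # Layer 5: 32 x 32 x 64 -> 64 x 64 x 3
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 512, 4, 4)
        return self.net(x)

class Discriminator(nn.Module):
    """Maps a 64 x 64 x 3 image to a real/fake probability (Table 2, network D)."""
    def __init__(self, slope=0.2):  # leaky-ReLU slope is an assumption, not given in the table
        super().__init__()
        self.net = nn.Sequential(
            # Layer 1: 64 x 64 x 3 -> 32 x 32 x 64
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(64), nn.LeakyReLU(slope, inplace=True),
            # Layer 2: 32 x 32 x 64 -> 16 x 16 x 128
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(128), nn.LeakyReLU(slope, inplace=True),
            # Layer 3: 16 x 16 x 128 -> 8 x 8 x 256
            nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(256), nn.LeakyReLU(slope, inplace=True),
            # Layer 4: 8 x 8 x 256 -> 4 x 4 x 512
            nn.Conv2d(256, 512, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(512), nn.LeakyReLU(slope, inplace=True),
            # Layer 5: 4 x 4 x 512 -> 1 x 1 x 1 (batch norm / leaky ReLU omitted on output)
            nn.Conv2d(512, 1, kernel_size=4, stride=2, padding=0),
        )

    def forward(self, x):
        # Sigmoid converts the final score to the probability that the CT image is real.
        return torch.sigmoid(self.net(x)).view(-1)

# Quick shape check: a batch of 8 noise vectors yields 8 images and 8 probabilities.
if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    fake = G(torch.randn(8, 100))
    print(fake.shape, D(fake).shape)  # torch.Size([8, 3, 64, 64]) torch.Size([8])
```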