
Table 2.

CGAN model with activation units

Generative network

Layer | Activations
Input (1×1×100) | 1×1×100
Project and reshape | 4×4×512
Transposed convolution, 5×5 filter | 8×8×256
Batch normalization | 8×8×256
ReLU | 8×8×256
Transposed convolution, 5×5 filter | 16×16×128
Batch normalization | 16×16×128
ReLU | 16×16×128
Transposed convolution, 5×5 filter | 32×32×64
Batch normalization | 32×32×64
ReLU | 32×32×64
Transposed convolution, 5×5 filter | 64×64×3
Hyperbolic tangent (tanh) | 64×64×3

Discriminative network

Layer | Activations
Input (64×64×3) | 64×64×3
Dropout (50%) | 64×64×3
Convolution 1, 5×5 filter | 32×32×64
LeakyReLU | 32×32×64
Convolution 2, 5×5 filter | 16×16×128
Batch normalization | 16×16×128
LeakyReLU | 16×16×128
Convolution 3, 5×5 filter | 8×8×256
Batch normalization | 8×8×256
LeakyReLU | 8×8×256
Convolution 4, 5×5 filter | 4×4×512
Batch normalization | 4×4×512
LeakyReLU | 4×4×512
Convolution 5, 4×4 filter | 1×1×1
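
For reference, the following is a minimal PyTorch sketch of the generator and discriminator summarized in Table 2. The stride (2), the padding/output-padding values, the LeakyReLU slope (0.2), and the use of a linear projection for the "project and reshape" step are assumptions chosen so the activation sizes match the table; the class-conditioning input of the CGAN is not shown in the table and is likewise omitted here. All class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        # "Project and reshape": 1x1x100 latent vector -> 4x4x512 feature map
        self.project = nn.Linear(latent_dim, 4 * 4 * 512)
        self.net = nn.Sequential(
            # 4x4x512 -> 8x8x256
            nn.ConvTranspose2d(512, 256, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # 8x8x256 -> 16x16x128
            nn.ConvTranspose2d(256, 128, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # 16x16x128 -> 32x32x64
            nn.ConvTranspose2d(128, 64, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # 32x32x64 -> 64x64x3, squashed to [-1, 1]
            nn.ConvTranspose2d(64, 3, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.Tanh(),
        )

    def forward(self, z):                       # z: (N, 100)
        x = self.project(z).view(-1, 512, 4, 4)
        return self.net(x)                      # (N, 3, 64, 64)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(0.5),                    # 50% dropout on the input image
            # 64x64x3 -> 32x32x64
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
            # 32x32x64 -> 16x16x128
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            # 16x16x128 -> 8x8x256
            nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            # 8x8x256 -> 4x4x512
            nn.Conv2d(256, 512, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            # 4x4x512 -> 1x1x1 real/fake score
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=0),
        )

    def forward(self, img):                     # img: (N, 3, 64, 64)
        return self.net(img).view(-1)           # (N,)

# Quick shape check
if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    fake = G(torch.randn(8, 100))
    print(fake.shape, D(fake).shape)            # (8, 3, 64, 64) and (8,)
```

With 5×5 kernels and stride 2, a padding of 2 (plus output padding of 1 in the transposed convolutions) halves or doubles the spatial dimensions exactly, which is what makes the intermediate activation sizes line up with the table.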