Table 1. Hyper-parameters of the GAN models used in the case study.
| Hyper-parameter | Value |
|---|---|
| Activation function | ReLU (rectified linear unit) |
| Learning rate | 0.0001 |
| Minibatch size | 128 |
| Epochs | 150 |
| Optimizer | Adam |
| Filter size | 3 × 3 |
| Channels in G (generator) | 32, 32, 64, 128, 256, 256, 256, 128, 64, 32, 1 |
| Channels in D (discriminator) | 10, 10, 20, 40, 80, 80 |
| Normalization | Instance normalization |
| Stride | 2 |
| Momentum | 0.5 |
| Attention module in D | CBAM (convolutional block attention module) |
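
For reference, the sketch below shows one way the values in Table 1 might map onto PyTorch calls. The full network layouts (down/upsampling order, skip connections, CBAM placement), the input channel counts, and the reading of the momentum value as Adam's β₁ are assumptions; only the hyper-parameter values themselves are taken from the table.

```python
# Minimal PyTorch sketch of the Table 1 settings; not the authors' exact code.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One convolutional block per Table 1: 3x3 filters, stride 2,
    instance normalization, ReLU activation."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


# Per-layer channel widths from Table 1 (how the first value relates to the
# network input is not specified in the table).
G_CHANNELS = [32, 32, 64, 128, 256, 256, 256, 128, 64, 32, 1]
D_CHANNELS = [10, 10, 20, 40, 80, 80]

# Example: a single generator block widening from 32 to 64 channels.
block = conv_block(32, 64)

# Optimizer per Table 1: Adam, learning rate 1e-4, momentum 0.5 (interpreted
# here as Adam's beta_1; beta_2 = 0.999 is the PyTorch default, an assumption).
optimizer = torch.optim.Adam(block.parameters(), lr=1e-4, betas=(0.5, 0.999))

BATCH_SIZE = 128   # minibatch size (Table 1)
NUM_EPOCHS = 150   # training epochs (Table 1)
```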