Figure 4.
The loss functions for the 16x16 D-UNet (red), 24x24 D-UNet (blue), and 32x32 D-UNet (black) are shown. All three loss functions drop sharply in the first 2 epochs and then decrease gradually as training continues. Overfitting is not an issue with the current training method because each epoch uses a new set of 1,000 samples, so the network never sees the same sample more than once. While more epochs could be used, the loss functions flatten after 70 epochs, implying that further training would yield minimal improvement.
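The training scheme described above (a fresh set of 1,000 samples per epoch) can be illustrated with a minimal sketch. This is not the authors' code: the stand-in model, the `generate_samples` data generator, and all hyperparameters below are hypothetical placeholders used only to show the structure of such a loop.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the D-UNet; the real architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def generate_samples(n, size=16):
    """Placeholder for the assumed data generator that produces fresh samples."""
    x = torch.randn(n, 1, size, size)
    y = torch.randn(n, 1, size, size)
    return torch.utils.data.TensorDataset(x, y)

n_epochs = 70
for epoch in range(n_epochs):
    # Each epoch draws a brand-new set of 1,000 samples, so no sample is seen twice
    # and overfitting to a fixed training set cannot occur.
    loader = torch.utils.data.DataLoader(
        generate_samples(1000), batch_size=32, shuffle=True
    )
    epoch_loss = 0.0
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item() * x.size(0)
    print(f"epoch {epoch + 1}: mean loss = {epoch_loss / 1000:.4f}")
```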