Table 2.
MLP baseline training model. Bold indicates the latent layer.
Id | Layer | Type | Output Shape | Activation | Param # |
---|---|---|---|---|---|
IN | input | Input Layer | (None, 196,608) | – | 0 |
EN256 | encoded_256 | Dense | (None, 256) | ReLU | 50,331,904 |
EN128 | encoded_128 | Dense | (None, 128) | ReLU | 32,896 |
**L** | **latent** | **Dense** | **(None, 64)** | **ReLU** | **8,256** |
DE128 | decoded_128 | Dense | (None, 128) | ReLU | 8320 |
DE256 | decoded_256 | Dense | (None, 256) | ReLU | 33,024 |
OUT | output | Dense | (None, 196,608) | Sigmoid | 50,528,256 |
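The Param # column follows the standard fully connected layer formula, params = in × out + out (a weight per input–output pair plus one bias per output unit). A quick sanity check of the table, sketched in plain Python over the layer widths above:

```python
def dense_params(in_dim: int, out_dim: int) -> int:
    # Dense layer: weight matrix (in_dim x out_dim) plus one bias per output.
    return in_dim * out_dim + out_dim

# Layer widths from Table 2: IN -> EN256 -> EN128 -> latent -> DE128 -> DE256 -> OUT
widths = [196_608, 256, 128, 64, 128, 256, 196_608]
for in_dim, out_dim in zip(widths, widths[1:]):
    print(f"{in_dim:>7} -> {out_dim:>7}: {dense_params(in_dim, out_dim):,}")
```

Running this reproduces the table's counts (50,331,904 down to the latent layer's 8,256, then back up to 50,528,256 at the output).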