Sensors. 2021 Jul 21;21(15):4966. doi: 10.3390/s21154966

Table 2. MLP baseline training model. Bold indicates the latent layer.

| Id | Layer | Type | Output Shape | Activation | Param # |
|-------|-------------|-------------|------------------|------------|------------|
| IN | input | Input Layer | (None, 196,608) | ReLU | 0 |
| EN256 | encoded_256 | Dense | (None, 256) | ReLU | 50,331,904 |
| EN128 | encoded_128 | Dense | (None, 128) | ReLU | 32,896 |
| **L** | **latent** | **Dense** | **(None, 64)** | **ReLU** | **8,256** |
| DE128 | decoded_128 | Dense | (None, 128) | ReLU | 8,320 |
| DE256 | decoded_256 | Dense | (None, 256) | ReLU | 33,024 |
| OUT | output | Dense | (None, 196,608) | Sigmoid | 50,528,256 |
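
The layer names, shapes, and activations in Table 2 follow the Keras `model.summary()` convention, so the architecture can be rebuilt directly from the table. Below is a minimal sketch, assuming TensorFlow/Keras as the framework; the layer names, widths, and activations come from the table, while everything else (variable names, the absence of any training configuration) is an assumption for illustration, not the authors' exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Flattened input dimension taken from the Output Shape of the input layer in Table 2.
INPUT_DIM = 196_608

# Encoder: 196,608 -> 256 -> 128 -> 64 (latent), all ReLU.
inputs = layers.Input(shape=(INPUT_DIM,), name="input")
x = layers.Dense(256, activation="relu", name="encoded_256")(inputs)
x = layers.Dense(128, activation="relu", name="encoded_128")(x)
latent = layers.Dense(64, activation="relu", name="latent")(x)

# Decoder: 64 -> 128 -> 256 -> 196,608, ReLU except for the sigmoid output.
x = layers.Dense(128, activation="relu", name="decoded_128")(latent)
x = layers.Dense(256, activation="relu", name="decoded_256")(x)
outputs = layers.Dense(INPUT_DIM, activation="sigmoid", name="output")(x)

autoencoder = Model(inputs, outputs, name="mlp_baseline_autoencoder")
autoencoder.summary()  # per-layer parameter counts should match the "Param #" column above
```

For example, the first encoder layer has 196,608 × 256 weights + 256 biases = 50,331,904 parameters, matching the EN256 row, and the output layer has 256 × 196,608 weights + 196,608 biases = 50,528,256 parameters, matching the OUT row.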