2022 Oct 11;22(20):7711. doi: 10.3390/s22207711

Table 3.

MLP architecture with the learning rate set to 10⁻⁴.

Layer Name   Neurons / Dropout Rate   Activation
Dense        64                       ReLU
Batch Norm   -                        -
Dense        16                       ReLU
Dropout      0.5                      -
Flatten      -                        -
Dense        8                        ReLU
Dropout      0.5                      -
Dense        2                        Softmax