Table 3. MLP architecture with learning rate set to .
| Layer Name | Neurons/Dropout Rate | Activation |
|---|---|---|
| Dense | 64 | ReLU |
| Batch Norm | - | - |
| Dense | 16 | ReLU |
| Dropout | 0.5 | - |
| Flatten | - | - |
| Dense | 8 | ReLU |
| Dropout | 0.5 | - |
| Dense | 2 | Softmax |
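The layer stack in Table 3 can be sketched as a plain forward pass. This is an illustrative NumPy version, not the authors' implementation: the input width (20 features), weight initialization, and inference-style batch normalization are all assumptions, and the Flatten layer is a no-op here because the input is already 2-D (batch, features). Dropout is applied only in training mode, with inverted scaling so inference needs no rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    # Subtract the row max for numerical stability.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def batch_norm(x, eps=1e-5):
    # Simplified normalization over the batch dimension
    # (no learned scale/shift, no running statistics).
    return (x - x.mean(axis=0, keepdims=True)) / np.sqrt(x.var(axis=0, keepdims=True) + eps)

def mlp_forward(x, params, train=False, p_drop=0.5):
    # Dense(64, ReLU) -> BatchNorm -> Dense(16, ReLU) -> Dropout(0.5)
    # -> Flatten (no-op for 2-D input) -> Dense(8, ReLU) -> Dropout(0.5)
    # -> Dense(2, Softmax), matching Table 3.
    h = relu(dense(x, *params["d1"]))
    h = batch_norm(h)
    h = relu(dense(h, *params["d2"]))
    if train:  # inverted dropout: scale kept units by 1/(1-p)
        h *= rng.binomial(1, 1 - p_drop, h.shape) / (1 - p_drop)
    h = relu(dense(h, *params["d3"]))
    if train:
        h *= rng.binomial(1, 1 - p_drop, h.shape) / (1 - p_drop)
    return softmax(dense(h, *params["d4"]))

def init_params(n_in):
    # Small random weights; layer widths taken from Table 3.
    sizes = [(n_in, 64), (64, 16), (16, 8), (8, 2)]
    return {f"d{i + 1}": (rng.standard_normal(s) * 0.1, np.zeros(s[1]))
            for i, s in enumerate(sizes)}

params = init_params(20)                        # hypothetical 20-feature input
probs = mlp_forward(rng.standard_normal((4, 20)), params)
print(probs.shape)                              # (4, 2); each row sums to 1
```

The final softmax over 2 units produces class probabilities for the binary task, so each output row is a valid probability distribution.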