Multilayer Perceptron

| Hyperparameter | Value |
| --- | --- |
| Number of neurons in hidden layers 1–3, layer 4, and hidden layers 5–7 | 512, 1024, 512 |
| Activation function used in hidden layers | ReLU |
| Optimizer and learning rate | Adam and 0.0001 |
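As a reference, the MLP entries above can be read as the following minimal sketch, assuming a Keras/TensorFlow implementation; the input dimension, number of classes, softmax output layer, and loss function are illustrative assumptions, not values from the table.

```python
# Sketch of the MLP described above, assuming a Keras/TensorFlow implementation.
from tensorflow.keras import Sequential, layers, optimizers

input_dim = 128      # assumed number of input features (not given in the table)
num_classes = 10     # assumed number of output classes (not given in the table)

mlp = Sequential([
    layers.Input(shape=(input_dim,)),
    # Hidden layers 1-3: 512 neurons each, ReLU
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    # Hidden layer 4: 1024 neurons, ReLU
    layers.Dense(1024, activation="relu"),
    # Hidden layers 5-7: 512 neurons each, ReLU
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    # Assumed softmax classification head
    layers.Dense(num_classes, activation="softmax"),
])
mlp.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
            loss="categorical_crossentropy",
            metrics=["accuracy"])
```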
Convolutional Neural Network

| Hyperparameter | Value |
| --- | --- |
| Number of filters in Conv2D layers | 32 |
| Stride in Conv2D layers | (1, 1) |
| Pool size in MaxPool2D layers | (2, 2) |
| Stride in MaxPool2D layers | (2, 2) |
| Kernel size in Conv2D layers 1 and 2 | (3, 3) and (2, 2) |
| Number of neurons in Dense layers 1–4 | 64, 128, 128, 64 |
| Activation function used in Dense layers | ReLU |
| Optimizer and learning rate | Adam and 0.0001 |
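The CNN entries correspond to two convolution/pooling blocks followed by four Dense layers. Below is a hedged sketch, again assuming Keras/TensorFlow; the input shape, the ReLU activation on the Conv2D layers, the softmax head, and the loss are assumptions not stated in the table.

```python
# Sketch of the CNN described above, assuming a Keras/TensorFlow implementation.
from tensorflow.keras import Sequential, layers, optimizers

input_shape = (64, 64, 1)   # assumed input image shape (not given in the table)
num_classes = 10            # assumed number of output classes (not given in the table)

cnn = Sequential([
    layers.Input(shape=input_shape),
    # Conv2D layer 1: 32 filters, 3x3 kernel, stride (1, 1); ReLU is assumed
    layers.Conv2D(32, kernel_size=(3, 3), strides=(1, 1), activation="relu"),
    layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
    # Conv2D layer 2: 32 filters, 2x2 kernel, stride (1, 1); ReLU is assumed
    layers.Conv2D(32, kernel_size=(2, 2), strides=(1, 1), activation="relu"),
    layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
    layers.Flatten(),
    # Dense layers 1-4: 64, 128, 128, 64 neurons, ReLU
    layers.Dense(64, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    # Assumed softmax classification head
    layers.Dense(num_classes, activation="softmax"),
])
cnn.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
            loss="categorical_crossentropy",
            metrics=["accuracy"])
```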
LSTM

| Hyperparameter | Value |
| --- | --- |
| Number of memory cells in LSTM layers 1 and 2 | 64 and 128 |
| Number of neurons in Dense layers 1–3 | 64, 256, 128 |
| Activation function used in LSTM layers 1 and 2 | tanh |
| Activation function used in Dense layers 1–3 | ReLU |
| Optimizer and learning rate | Adam and 0.0001 |
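The LSTM entries describe two stacked recurrent layers feeding three Dense layers. A minimal sketch under the same Keras/TensorFlow assumption follows; the sequence shape and classification head are illustrative only.

```python
# Sketch of the LSTM model described above, assuming a Keras/TensorFlow implementation.
from tensorflow.keras import Sequential, layers, optimizers

timesteps, n_features = 50, 8   # assumed input sequence shape (not given in the table)
num_classes = 10                # assumed number of output classes (not given in the table)

lstm = Sequential([
    layers.Input(shape=(timesteps, n_features)),
    # LSTM layer 1: 64 memory cells, tanh; returns the full sequence so that
    # LSTM layer 2 can be stacked on top of it
    layers.LSTM(64, activation="tanh", return_sequences=True),
    # LSTM layer 2: 128 memory cells, tanh
    layers.LSTM(128, activation="tanh"),
    # Dense layers 1-3: 64, 256, 128 neurons, ReLU
    layers.Dense(64, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    # Assumed softmax classification head
    layers.Dense(num_classes, activation="softmax"),
])
lstm.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
             loss="categorical_crossentropy",
             metrics=["accuracy"])
```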
ResNet-50

| Hyperparameter | Value |
| --- | --- |
| Number of neurons in Dense layers 1–6 | 256, 128, 64, 512, 512, 512 |
| Activation function used in Dense layers 1–6 | ReLU |
| Optimizer and learning rate | Adam and 0.0001 |
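The ResNet-50 entries specify only the Dense head, so the sketch below assumes a standard transfer-learning setup with a frozen, ImageNet-pretrained backbone; that setup, the 224x224x3 input, global average pooling, softmax head, and loss are assumptions rather than values from the table.

```python
# Sketch of a ResNet-50 backbone with the six-layer Dense head listed above.
from tensorflow.keras import Model, layers, optimizers
from tensorflow.keras.applications import ResNet50

num_classes = 10   # assumed number of output classes (not given in the table)

backbone = ResNet50(include_top=False, weights="imagenet",
                    input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False   # assumed: backbone used as a fixed feature extractor

inputs = layers.Input(shape=(224, 224, 3))
x = backbone(inputs, training=False)          # (batch, 2048) feature vector
# Dense layers 1-6: 256, 128, 64, 512, 512, 512 neurons, ReLU
for units in (256, 128, 64, 512, 512, 512):
    x = layers.Dense(units, activation="relu")(x)
# Assumed softmax classification head
outputs = layers.Dense(num_classes, activation="softmax")(x)

resnet_model = Model(inputs, outputs)
resnet_model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                     loss="categorical_crossentropy",
                     metrics=["accuracy"])
```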
EfficientNet-B0

| Hyperparameter | Value |
| --- | --- |
| Number of neurons in Dense layers 1–3 | 256, 128, 64 |
| Activation function used in Dense layers 1–3 | ReLU |
| Optimizer and learning rate | Adam and 0.0001 |
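The EfficientNet-B0 entries likewise describe only the Dense head; the sketch below reuses the assumed transfer-learning pattern from the ResNet-50 example, with the same caveats about the frozen backbone, input size, and output head being assumptions.

```python
# Sketch of an EfficientNet-B0 backbone with the three-layer Dense head listed above.
from tensorflow.keras import Model, layers, optimizers
from tensorflow.keras.applications import EfficientNetB0

num_classes = 10   # assumed number of output classes (not given in the table)

backbone = EfficientNetB0(include_top=False, weights="imagenet",
                          input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False   # assumed: backbone used as a fixed feature extractor

inputs = layers.Input(shape=(224, 224, 3))
x = backbone(inputs, training=False)          # (batch, 1280) feature vector
# Dense layers 1-3: 256, 128, 64 neurons, ReLU
for units in (256, 128, 64):
    x = layers.Dense(units, activation="relu")(x)
# Assumed softmax classification head
outputs = layers.Dense(num_classes, activation="softmax")(x)

effnet_model = Model(inputs, outputs)
effnet_model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                     loss="categorical_crossentropy",
                     metrics=["accuracy"])
```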