Table 3.

| DNN Models | Activation Function | Activation Function in the Output Layer | Loss Function | Optimization Algorithm | Epochs and Batch Size |
|---|---|---|---|---|---|
| DNN_Model_1: 3 hidden layers, 150 neurons per hidden layer | Rectified Linear Unit (ReLU) | 2-choice scale: sigmoid; 4-choice scale: softmax | 2-choice scale: binary crossentropy; 4-choice scale: categorical crossentropy with one-hot encoding | Adam gradient descent | 1000 training epochs; batch size of 20 |
| DNN_Model_2: 3 hidden layers, 300 neurons per hidden layer | ReLU | 2-choice scale: sigmoid; 4-choice scale: softmax | 2-choice scale: binary crossentropy; 4-choice scale: categorical crossentropy with one-hot encoding | Adam gradient descent | 1000 training epochs; batch size of 20 |
| DNN_Model_3: 6 hidden layers, 150 neurons per hidden layer | ReLU | 2-choice scale: sigmoid; 4-choice scale: softmax | 2-choice scale: binary crossentropy; 4-choice scale: categorical crossentropy with one-hot encoding | Adam gradient descent | 1000 training epochs; batch size of 20 |
| DNN_Model_4: 6 hidden layers, 300 neurons per hidden layer | ReLU | 2-choice scale: sigmoid; 4-choice scale: softmax | 2-choice scale: binary crossentropy; 4-choice scale: categorical crossentropy with one-hot encoding | Adam gradient descent | 1000 training epochs; batch size of 20 |
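As a concrete illustration of Table 3, the following is a minimal Keras sketch of DNN_Model_1 (3 hidden layers, 150 neurons each) configured for the 2-choice scale (sigmoid output, binary crossentropy, Adam). The input dimension (`input_dim=40`) and the single-unit output arrangement are assumptions for illustration; the table does not specify them.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_dnn_model_1(input_dim=40):
    """Sketch of DNN_Model_1 from Table 3 for the 2-choice scale.

    input_dim is an assumed placeholder; the actual feature
    dimensionality is not given in the table.
    """
    model = keras.Sequential()
    model.add(keras.Input(shape=(input_dim,)))
    # Three hidden layers with 150 ReLU neurons each (Table 3).
    for _ in range(3):
        model.add(layers.Dense(150, activation="relu"))
    # 2-choice scale: sigmoid output trained with binary crossentropy.
    model.add(layers.Dense(1, activation="sigmoid"))
    # Adam gradient descent, as listed in the table.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dnn_model_1()
# Training per Table 3 would then be:
# model.fit(X_train, y_train, epochs=1000, batch_size=20)
```

For the 4-choice scale, the output layer would instead be `layers.Dense(4, activation="softmax")` compiled with `loss="categorical_crossentropy"` on one-hot-encoded targets; the other models differ only in hidden-layer count and width.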