Table 2. Hyperparameters by algorithm

| Algorithm | Constant Values | Values Tested |
|---|---|---|
| SGD | Loss: Logistic regression<br>Penalty: ElasticNet | Learning rate: 10⁻⁵–10⁻²<br>Epochs: 5–1,000 by 5 |
| SVC | Kernel: Radial basis function | Penalty C term: 1, 10, 100<br>Gamma: 10⁻⁵–10⁻¹ |
| Random forest | — | Estimators: 10–190 by 10 |
| DNN | Early stopping: No improvement in loss function for 30 epochs<br>Learning rate optimizer: Adam<br>Loss: Binary cross entropy<br>Neuron activation function: ReLU<br>Output activation function: Sigmoid | Neurons: 2³–2⁶<br>Hidden layers: 0, 1, 2, 4, 6 |
Note: SGD = stochastic gradient descent; SVC = support vector classifier; DNN = dense neural network.
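The tested hyperparameter ranges in Table 2 can be enumerated exhaustively. The sketch below is illustrative only: the grids mirror the table (assuming "Neurons: 2³–2⁶" means powers of two and the exponent ranges are sampled at integer powers of ten), but the paper's actual search implementation is not shown, and the key names are hypothetical.

```python
# Illustrative enumeration of the tested grids from Table 2; key names
# (e.g. "learning_rate", "n_estimators") are assumptions, not the paper's.
from itertools import product

# SGD: learning rates 10^-5 to 10^-2, epochs 5 to 1,000 in steps of 5.
sgd_grid = {
    "learning_rate": [10**-e for e in range(5, 1, -1)],  # 1e-5 ... 1e-2
    "epochs": list(range(5, 1001, 5)),
}

# SVC (RBF kernel held constant): penalty C and gamma values tested.
svc_grid = {
    "C": [1, 10, 100],
    "gamma": [10**-e for e in range(5, 0, -1)],  # 1e-5 ... 1e-1
}

# Random forest: number of estimators 10 to 190 in steps of 10.
rf_grid = {"n_estimators": list(range(10, 191, 10))}

# DNN: neurons per layer 2^3 to 2^6, hidden-layer counts as tested.
dnn_grid = {
    "neurons": [2**k for k in range(3, 7)],  # 8, 16, 32, 64
    "hidden_layers": [0, 1, 2, 4, 6],
}

def configurations(grid):
    """Yield every combination of the grid's values as a dict."""
    keys = list(grid)
    for combo in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, combo))

print(sum(1 for _ in configurations(svc_grid)))  # 3 C values x 5 gammas = 15
print(len(rf_grid["n_estimators"]))              # 19 estimator settings
```

A full sweep of such grids grows multiplicatively (the SGD grid alone yields 4 × 200 = 800 configurations), which is why the continuous ranges are discretized in steps.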