2020 Jan 21;43(1):21–38. doi: 10.1007/s40614-020-00244-0

Table 2.

Constant and Variable Hyperparameters for Each Machine-Learning Algorithm

| Algorithm | Constant Hyperparameter Values | Hyperparameter Values Tested |
| --- | --- | --- |
| SGD | Loss: logistic regression; Penalty: ElasticNet | Learning rate: 10⁻⁵–10⁻²; Epochs: 5–1,000 by 5 |
| SVC | Kernel: radial basis function | Penalty C term: 1, 10, 100; Gamma: 10⁻⁵–10⁻¹ |
| Random forest | — | Estimators: 10–190 by 10 |
| DNN | Early stopping: no improvement in loss function for 30 epochs; Learning-rate optimizer: Adam; Loss: binary cross-entropy; Neuron activation function: ReLU; Output activation function: sigmoid | Neurons per layer: 2³–2⁶; Hidden layers: 0, 1, 2, 4, 6 |

Note: SGD = stochastic gradient descent; SVC = support vector classifier; DNN = dense neural network
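The search spaces in Table 2 can be written out as plain data structures, which makes the size of each grid explicit. The sketch below uses only the values from the table; the key names (`learning_rate`, `n_estimators`, etc.) are descriptive labels chosen here and are not guaranteed to match any particular library's parameter names.

```python
# Hyperparameter grids mirroring Table 2.
# Values come from the table; key names are illustrative labels only.

sgd_grid = {
    "learning_rate": [10.0 ** -e for e in (5, 4, 3, 2)],  # 10^-5 .. 10^-2
    "epochs": list(range(5, 1001, 5)),                    # 5 .. 1,000 by 5
}
svc_grid = {
    "C": [1, 10, 100],                                    # penalty C term
    "gamma": [10.0 ** -e for e in (5, 4, 3, 2, 1)],       # 10^-5 .. 10^-1
}
rf_grid = {
    "n_estimators": list(range(10, 191, 10)),             # 10 .. 190 by 10
}
dnn_grid = {
    "neurons_per_layer": [2 ** k for k in range(3, 7)],   # 2^3 .. 2^6 = 8 .. 64
    "hidden_layers": [0, 1, 2, 4, 6],
}

# Number of candidate configurations per algorithm
# (product of the sizes of each tested-value list):
def grid_size(grid):
    n = 1
    for values in grid.values():
        n *= len(values)
    return n

for name, grid in [("SGD", sgd_grid), ("SVC", svc_grid),
                   ("Random forest", rf_grid), ("DNN", dnn_grid)]:
    print(name, grid_size(grid))
```

Written this way, the table implies 800 SGD configurations (4 learning rates × 200 epoch values), 15 for SVC, 19 for the random forest, and 20 for the DNN; dictionaries in this shape can be passed directly to a grid-search utility such as scikit-learn's `GridSearchCV` once the keys are renamed to match the estimator's actual parameters.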