Model – hyperparameters
Decision tree – criterion: Gini; splitter: best; max depth: 187; minimum samples at a leaf node: 1; maximum number of features: none; class weights: balanced; random state: 1
Random forest – number of trees: 5000; criterion: Gini; maximum depth: 20; minimum samples at a leaf node: 2; minimum samples to split: 2; maximum number of features: auto; class weights: balanced; random state: 1
AdaBoost – maximum number of estimators: 5000; base estimator: decision tree (class weights: balanced); learning rate: 1; random state: 1
Support vector machine – regularization parameter: 2; kernel: linear; class weights: {0: 1, 1: 26}; random state: 0
Logistic regression – loss: binary cross-entropy; kernel initializer: Glorot uniform; optimiser: Adam; kernel regularizer: l2 (λ = 1e−2); learning rate: 1e−4 for epochs 1–150, 1e−5 for epochs 151–200, 1e−9 for epochs > 200; class weights: {0: 1, 1: 26}; batch size: 64; number of epochs: 210
Neural network (1 hidden layer) – number of neurons: 5000; activation: ReLU; loss: binary cross-entropy; kernel initializer: Glorot uniform; optimiser: Adam; kernel regularizer: l2 (λ = 1e−4); learning rate: 1e−4 for epochs 1–180, 1e−5 for epochs > 180; class weights: {0: 1, 1: 26}; batch size: 64; number of epochs: 300
Neural network (5 hidden layers) – number of neurons: 1000–500–200–100–50; activation: ReLU; loss: binary cross-entropy; kernel initializer: Glorot uniform; optimiser: Adam; regularization: dropout (rate = 1e−1); learning rate: 1e−4 for epochs 1–100, 1e−5 for epochs 101–150, 1e−6 for epochs > 150; class weights: {0: 1, 1: 26}; batch size: 64; number of epochs: 200

Source: Author’s own work
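
For reference, the following is a minimal sketch of how the four scikit-learn classifiers above could be instantiated with the listed hyperparameters. It is an illustrative reconstruction, not the authors' code, and assumes a scikit-learn version contemporary with the paper (around 1.0).

```python
# Illustrative reconstruction of the scikit-learn models from the table
# above; not the authors' original code.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC

decision_tree = DecisionTreeClassifier(
    criterion="gini", splitter="best", max_depth=187,
    min_samples_leaf=1, max_features=None,
    class_weight="balanced", random_state=1,
)

random_forest = RandomForestClassifier(
    n_estimators=5000, criterion="gini", max_depth=20,
    min_samples_leaf=2, min_samples_split=2,
    # "auto" (equivalent to "sqrt" for classifiers) was removed in scikit-learn 1.3
    max_features="auto",
    class_weight="balanced", random_state=1,
)

adaboost = AdaBoostClassifier(
    # The table specifies only the base tree's class weights, not its depth.
    # base_estimator was renamed to estimator in scikit-learn 1.2.
    base_estimator=DecisionTreeClassifier(class_weight="balanced"),
    n_estimators=5000, learning_rate=1.0, random_state=1,
)

svm = SVC(
    C=2, kernel="linear", class_weight={0: 1, 1: 26}, random_state=0,
)
```

Each model exposes the usual estimator interface, e.g. decision_tree.fit(X_train, y_train) on the training data.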
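
The table's Keras-style terminology (kernel initializer, kernel regularizer, epochs) suggests the logistic regression was implemented as a single sigmoid unit in TensorFlow/Keras. A minimal sketch under that assumption follows; n_features and the placeholder data are illustrative, and Keras's LearningRateScheduler counts epochs from zero, so the boundaries below correspond to the 1-indexed epoch ranges in the table.

```python
# Illustrative Keras reconstruction of the logistic-regression row;
# the dataset below is a random placeholder, not the paper's data.
import numpy as np
from tensorflow import keras

n_features = 20  # assumed; the paper's feature count is not shown here
X_train = np.random.rand(256, n_features).astype("float32")
y_train = np.random.randint(0, 2, size=256)

def lr_schedule(epoch, lr):
    # 1e-4 for epochs 1-150, 1e-5 for 151-200, 1e-9 beyond (0-indexed here).
    if epoch < 150:
        return 1e-4
    if epoch < 200:
        return 1e-5
    return 1e-9

model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    keras.layers.Dense(
        1, activation="sigmoid",
        kernel_initializer="glorot_uniform",
        kernel_regularizer=keras.regularizers.l2(1e-2),
    ),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy")
model.fit(X_train, y_train, batch_size=64, epochs=210,
          class_weight={0: 1, 1: 26},
          callbacks=[keras.callbacks.LearningRateScheduler(lr_schedule)])
```

The one-hidden-layer network follows the same pattern: a Dense(5000, activation="relu") layer with l2(1e-4) regularization before the sigmoid output, a schedule of 1e-4 up to epoch 180 and 1e-5 after, and 300 epochs.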
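
The five-hidden-layer network can be sketched the same way, with dropout between layers standing in for the table's "dropout" regularizer (dropout is a layer in Keras rather than a kernel regularizer). Layer widths follow the table; the placeholder data and n_features from the previous sketch are reused.

```python
# Illustrative reconstruction of the five-hidden-layer network
# (1000-500-200-100-50 ReLU units with dropout 0.1).
from tensorflow import keras

def lr_schedule(epoch, lr):
    # 1e-4 for epochs 1-100, 1e-5 for 101-150, 1e-6 beyond (0-indexed here).
    if epoch < 100:
        return 1e-4
    if epoch < 150:
        return 1e-5
    return 1e-6

model = keras.Sequential()
model.add(keras.Input(shape=(n_features,)))  # n_features as in the sketch above
for units in (1000, 500, 200, 100, 50):
    model.add(keras.layers.Dense(units, activation="relu",
                                 kernel_initializer="glorot_uniform"))
    model.add(keras.layers.Dropout(0.1))
model.add(keras.layers.Dense(1, activation="sigmoid"))

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy")
model.fit(X_train, y_train, batch_size=64, epochs=200,
          class_weight={0: 1, 1: 26},
          callbacks=[keras.callbacks.LearningRateScheduler(lr_schedule)])
```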