PLoS One. 2020 Apr 6;15(4):e0231166. doi: 10.1371/journal.pone.0231166

Table 2. Summary of hyperparameter tuning.

| Model | Hyperparameter | Range |
| --- | --- | --- |
| LASSO | C (inverse of regularizer multiplier) | 0.10, 0.12, 0.15, 0.18, 0.21, 0.26, 0.31, 0.37, 0.45, 0.54, 0.66, 0.79, 0.95, 1.15, 1.39, 1.68, 2.02, 2.44, 2.95, 3.56, 4.29, 5.18, 6.25, 7.54, 9.10, 10.9, 13.3, 16.0, 19.3, 23.3, 28.1, 33.9, 40.9, 49.4, 59.6, 72.0, 86.9, 105, 126, 153, 184, 222, 268, 324, 391, 471, 569, 687, 829, 1000 |
| Elastic net | L1 ratio | 0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95 |
| Elastic net | Alpha | 0.00001, 0.00004, 0.00016, 0.0006, 0.0025, 0.01, 0.04, 0.16, 0.63, 2.5, 10 |
| CatBoost | Tree depth | 2, 4 |
| CatBoost | Learning rate | 0.03, 0.1, 0.3 |
| CatBoost | Bagging temperature | 0.6, 0.8, 1.0 |
| CatBoost | L2 leaf regularization | 3, 10, 100, 500 |
| CatBoost | Leaf estimation iterations | 1, 2 |
| MLP | Number of hidden neurons | 5, 10, 15, 20 |
| MLP | Learning rate | 0.001, 0.01 |
| MLP | Batch size | 16, 32 |
| MLP | Dropout rate | 0.1, 0.2 |
| MLP | L1 regularization ratio | 0.0001, 0.001 |

The table details the hyperparameters, and their corresponding ranges, that were tuned for each model during cross-validation.
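For readers who want to set up a comparable search, the sketch below expresses the table's grids as parameter dictionaries for scikit-learn's GridSearchCV. This is a minimal illustration under stated assumptions, not the paper's code: the estimator choices (LogisticRegression for LASSO, SGDClassifier for elastic net, CatBoostClassifier for CatBoost), the five-fold `cv` setting, and the `np.logspace` approximation of the C grid are all assumptions; the paper does not specify its tooling here, and the MLP grid would be wired up analogously in whatever framework implements that model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import GridSearchCV
from catboost import CatBoostClassifier

# LASSO: 50 values of C, log-spaced from 0.1 to 1000 (approximates the
# table's list; the paper enumerates the rounded values explicitly).
lasso_grid = {"C": np.logspace(-1, 3, 50)}
lasso_search = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
    lasso_grid,
    cv=5,  # assumed fold count; the paper's CV scheme may differ
)

# Elastic net: every combination of L1 ratio and alpha from the table.
enet_grid = {
    "l1_ratio": np.arange(0.0, 1.0, 0.05),  # 0, 0.05, ..., 0.95
    "alpha": [1e-5, 4e-5, 1.6e-4, 6e-4, 2.5e-3, 0.01, 0.04, 0.16, 0.63, 2.5, 10],
}
enet_search = GridSearchCV(
    SGDClassifier(loss="log_loss", penalty="elasticnet"),
    enet_grid,
    cv=5,
)

# CatBoost: the table's rows map directly onto CatBoost parameter names.
catboost_grid = {
    "depth": [2, 4],
    "learning_rate": [0.03, 0.1, 0.3],
    "bagging_temperature": [0.6, 0.8, 1.0],
    "l2_leaf_reg": [3, 10, 100, 500],
    "leaf_estimation_iterations": [1, 2],
}
catboost_search = GridSearchCV(CatBoostClassifier(verbose=0), catboost_grid, cv=5)

# Fitting any of these searches exhaustively evaluates its grid, e.g.:
# lasso_search.fit(X_train, y_train); print(lasso_search.best_params_)
```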