Healthc Inform Res. 2020 Oct 31;26(4):274–283. doi: 10.4258/hir.2020.26.4.274

Table 5.

Optimal hyperparameters of models using random search

| Machine learning technique | Best feature selection method | Optimal hyperparameter values | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-score (%) |
|---|---|---|---|---|---|---|
| Linear SVM | Feature selection using LASSO with C = 0.03 | penalty = l2; C = 92.82; verbose = false; dual = false | 76.03 | 78.81 | 73.27 | 76.59 |
| KNN | Feature selection using LASSO with C = 0.03 | n_neighbors = 1; weights = uniform; algorithm = kd_tree; leaf_size = 180 | 94.88 | 95.08 | 94.68 | 94.87 |
| RF | Feature selection using LASSO with C = 0.01 | n_estimators = 1000; bootstrap = true; criterion = entropy; max_features = none; verbose = false | 93.92 | 93.80 | 94.03 | 93.88 |
| XGBoost | Feature selection using LASSO with C = 0.03 | n_estimators = 1000; max_depth = 15; learning_rate = 0.2; objective = binary:logistic; booster = gbtree; gamma = 0.5; min_child_weight = 3.0; subsample = 0.8; colsample_bytree = 0.9; colsample_bylevel = 0.9; reg_alpha = 0.1; silent = false | 95.31 | 95.19 | 95.43 | 95.28 |

SVM: support vector machine, KNN: k-nearest neighbor, RF: random forest, XGBoost: extreme gradient boosting, LASSO: least absolute shrinkage and selection operator.
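
Table 5 reports the tuned values but not the search code. Below is a minimal sketch, assuming scikit-learn and the xgboost Python package, of how LASSO-based feature selection followed by a random search over the best-performing model (XGBoost) could be wired together. The synthetic dataset, the candidate search space, and the use of L1-penalized logistic regression as the "LASSO with C" selector are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the Table 5 pipeline: LASSO feature selection, then random search.
# Assumptions (not from the paper): synthetic data, this search space, and an
# L1-penalized logistic regression as the LASSO selector (its C matches the
# "LASSO with C = 0.03" convention, where C is inverse regularization strength).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

# Stand-in data; the study used its own clinical dataset.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

# L1-penalized selector with C = 0.03, mirroring Table 5; features whose
# coefficients are shrunk to (near) zero are dropped.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", C=0.03, solver="liblinear", random_state=0)
)
X_sel = selector.fit_transform(X, y)

# Random search over an assumed XGBoost space; the winning values in Table 5
# were max_depth = 15, learning_rate = 0.2, gamma = 0.5, min_child_weight = 3.0,
# subsample = 0.8, colsample_bytree = 0.9, reg_alpha = 0.1.
param_distributions = {
    "max_depth": [5, 10, 15, 20],
    "learning_rate": [0.05, 0.1, 0.2, 0.3],
    "gamma": [0, 0.25, 0.5, 1.0],
    "min_child_weight": [1.0, 3.0, 5.0],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.7, 0.9, 1.0],
    "reg_alpha": [0, 0.1, 1.0],
}
search = RandomizedSearchCV(
    XGBClassifier(n_estimators=1000, objective="binary:logistic", booster="gbtree"),
    param_distributions,
    n_iter=25,       # number of random configurations sampled
    scoring="f1",    # Table 5 also reports F1 alongside accuracy
    cv=5,
    random_state=0,
)
search.fit(X_sel, y)
print(search.best_params_, search.best_score_)
```

Note that `silent` in the reported XGBoost configuration reflects the xgboost API current at the time of the study; later releases replaced it with `verbosity`.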