Table 3.
Best metric scores and selected hyperparameters for each ML model.
Classifier | Accuracy (%) | Recall (%) | F1-Score (%) | Precision (%) | Confusion Matrix [TN, FP; FN, TP] | Hyperparameters
---|---|---|---|---|---|---
RF | 88.82 | 89.94 | 89.01 | 88.10 | [2432, 342; 283, 2531] | criterion = entropy, min_samples_leaf = 1, min_samples_split = 5, n_estimators = 30
KNN | 75.93 | 79.82 | 76.96 | 74.30 | [1997, 777; 568, 2246] | algorithm = auto, leaf_size = 1, n_neighbors = 3, weights = distance
NN (MLP) | 85.68 | 85.54 | 85.75 | 85.96 | [2381, 393; 407, 2407] | activation = tanh, alpha = 0.0001, hidden_layer_sizes = (10, 20, 50), learning_rate = constant, solver = adam
LR | 70.22 | 74.09 | 71.48 | 69.04 | [1839, 935; 729, 2085] | penalty = l2, C = 10.0
SVM | 72.39 | 85.86 | 75.80 | 67.85 | [1629, 1145; 398, 2416] | C = 15, kernel = rbf
XGBoost | 88.53 | 90.26 | 88.80 | 87.38 | [2407, 367; 274, 2540] | gamma = 0.7, max_depth = 9, min_child_weight = 1
LightGBM | 87.78 | 89.02 | 88.00 | 87.01 | [2400, 374; 309, 2505] | n_estimators = 520, num_leaves = 50
BB | 88.85 | 90.69 | 89.12 | 87.61 | [2413, 361; 262, 2552] | n_estimators = 1100

Each confusion matrix is written as [TN, FP; FN, TP]: rows are the actual class (0, then 1) and columns the predicted class (0, then 1), with class 1 as the positive class.
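The reported scores follow directly from these confusion matrices. As a sanity check, here is a minimal Python sketch that recomputes the four metrics from the RF matrix, assuming class 1 is the positive class (an assumption, but one consistent with the reported precision and recall):

```python
# Minimal sketch: recomputing the reported metrics from a confusion matrix.
# Class 1 is treated as the positive class (assumption). The example values
# are the RF row of Table 3: [TN, FP; FN, TP] = [2432, 342; 283, 2531].
tn, fp, fn, tp = 2432, 342, 283, 2531

accuracy = (tp + tn) / (tp + tn + fp + fn)                # 0.8882 -> 88.82%
recall = tp / (tp + fn)                                   # 0.8994 -> 89.94%
precision = tp / (tp + fp)                                # 0.8810 -> 88.10%
f1 = 2 * precision * recall / (precision + recall)        # 0.8901 -> 89.01%

print(f"Accuracy:  {accuracy:.2%}")
print(f"Recall:    {recall:.2%}")
print(f"Precision: {precision:.2%}")
print(f"F1-score:  {f1:.2%}")
```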
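For reference, the tuned configurations in the table map naturally onto common library constructors. The following is a sketch under two assumptions not stated in the table: that the scikit-learn, XGBoost, and LightGBM implementations are used, and that BB denotes a bagging ensemble (the abbreviation is not expanded in the table):

```python
# Sketch: instantiating the tuned classifiers from Table 3.
# Assumes scikit-learn, xgboost, and lightgbm implementations; "BB" is
# interpreted as a bagging ensemble, which is an assumption.
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

models = {
    "RF": RandomForestClassifier(criterion="entropy", min_samples_leaf=1,
                                 min_samples_split=5, n_estimators=30),
    "KNN": KNeighborsClassifier(algorithm="auto", leaf_size=1,
                                n_neighbors=3, weights="distance"),
    "NN (MLP)": MLPClassifier(activation="tanh", alpha=0.0001,
                              hidden_layer_sizes=(10, 20, 50),
                              learning_rate="constant", solver="adam"),
    "LR": LogisticRegression(penalty="l2", C=10.0),
    "SVM": SVC(C=15, kernel="rbf"),
    "XGBoost": XGBClassifier(gamma=0.7, max_depth=9, min_child_weight=1),
    "LightGBM": LGBMClassifier(n_estimators=520, num_leaves=50),
    "BB": BaggingClassifier(n_estimators=1100),  # assumption: BB = bagging
}

# Usage (given an existing train/test split):
# for name, model in models.items():
#     model.fit(X_train, y_train)
#     print(name, model.score(X_test, y_test))
```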