Table 2.
| Model | Accuracy | Precision | Recall | F1-score | MCC | KC |
|---|---|---|---|---|---|---|
| LR | 0.71 | 0.71 | 0.71 | 0.71 | 0.56 | 0.56 |
| RF | 0.73 | 0.74 | 0.73 | 0.74 | 0.60 | 0.60 |
| XGBoost | 0.72 | 0.74 | 0.72 | 0.73 | 0.58 | 0.58 |
| SVM | 0.73 | 0.74 | 0.74 | 0.74 | 0.60 | 0.60 |
| CIM | 0.73 | 0.73 | 0.73 | 0.73 | 0.59 | 0.59 |
| Stacking classifier | 0.72 | 0.75 | 0.72 | 0.73 | 0.58 | 0.58 |
| Soft voting | 0.73 | 0.74 | 0.73 | 0.73 | 0.59 | 0.59 |
LR = logistic regression, RF = random forest, XGBoost = extreme gradient boosting, SVM = support vector machine, CIM = confusion matrix-based classifier integration approach, MCC = Matthews correlation coefficient, and KC = Cohen's kappa score.
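For reference, the six metrics reported in Table 2 can all be computed with scikit-learn. The sketch below is illustrative only: the label arrays are placeholders rather than the study's data, and the weighted-averaging setting for precision, recall, and F1-score is an assumption about the evaluation protocol, not something stated in the table.

```python
# Minimal sketch of computing the Table 2 metrics with scikit-learn.
# y_true and y_pred are placeholder labels, not the study's data.
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    matthews_corrcoef,
    cohen_kappa_score,
)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]   # placeholder predictions from one classifier

metrics = {
    "Accuracy": accuracy_score(y_true, y_pred),
    # Weighted averaging is one common choice; the paper's exact scheme may differ.
    "Precision": precision_score(y_true, y_pred, average="weighted"),
    "Recall": recall_score(y_true, y_pred, average="weighted"),
    "F1-score": f1_score(y_true, y_pred, average="weighted"),
    "MCC": matthews_corrcoef(y_true, y_pred),
    "KC": cohen_kappa_score(y_true, y_pred),
}

for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

Unlike accuracy, MCC and KC correct for chance agreement, which is why they sit noticeably lower (0.56 to 0.60) than the other four metrics across all models in the table.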