Sensors. 2022 Feb 24;22(5):1793. doi: 10.3390/s22051793

Table 3. Performance metrics for the best-performing combination of feature selection technique and number of features for each of the 10 ML classifiers.

| Classifier | Feature Selection | # of Features | Accuracy | Precision | Sensitivity | F1-Score | Specificity | Inference Time (ms) |
|---|---|---|---|---|---|---|---|---|
| MLP | XGBoost | 2 | 0.91 ± 01.19 | 0.91 ± 01.19 | 0.91 ± 01.19 | 0.91 ± 01.19 | 0.91 ± 01.19 | 0.592 |
| Extra Tree | Random Forest | 5 | 0.88 ± 01.17 | 0.88 ± 01.17 | 0.88 ± 01.17 | 0.88 ± 01.17 | 0.88 ± 01.17 | 0.406 |
| Random Forest | XGBoost | 2 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.412 |
| KNN | XGBoost | 2 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.87 ± 01.17 | 0.464 |
| SVM | XGBoost | 2 | 0.86 ± 01.16 | 0.86 ± 01.16 | 0.86 ± 01.16 | 0.86 ± 01.16 | 0.86 ± 01.16 | 0.456 |
| Gradient Boost | XGBoost | 2 | 0.84 ± 01.15 | 0.84 ± 01.15 | 0.84 ± 01.15 | 0.85 ± 01.15 | 0.84 ± 01.15 | 0.492 |
| XGBoost | Random Forest | 5 | 0.84 ± 01.15 | 0.84 ± 01.15 | 0.84 ± 01.15 | 0.84 ± 01.15 | 0.84 ± 01.15 | 0.426 |
| Logistic Regression | Random Forest | 2 | 0.81 ± 01.13 | 0.81 ± 01.13 | 0.81 ± 01.13 | 0.81 ± 01.13 | 0.81 ± 01.13 | 0.532 |
| LDA | Random Forest | 9 | 0.78 ± 01.11 | 0.78 ± 01.11 | 0.78 ± 01.11 | 0.78 ± 01.11 | 0.78 ± 01.11 | 0.406 |
| AdaBoost | Random Forest | 3 | 0.68 ± 01.03 | 0.68 ± 01.03 | 0.68 ± 01.03 | 0.70 ± 01.05 | 0.68 ± 01.03 | 0.492 |

All classification metrics (accuracy, precision, sensitivity, F1-score, specificity) are reported as 95% confidence interval results.
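The table reports each classifier together with its best feature-selection method, feature count, metrics with 95% confidence intervals, and per-sample inference time. As a rough illustration only (not the authors' code), the sketch below shows how one such combination could be evaluated: an XGBoost model ranks features by importance, the top k are kept, and an MLP classifier is scored with a normal-approximation 95% confidence interval on cross-validated accuracy plus average inference time. The synthetic dataset, k = 2, and all model settings are assumptions for illustration.

```python
# Minimal sketch of evaluating one (classifier, feature selection, k) combination.
# Dataset, k, and hyperparameters are illustrative assumptions, not the study's setup.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

# Synthetic stand-in for the study's feature matrix (assumption).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)

# Rank features with XGBoost importances and keep the top k (k = 2 for the best MLP row).
k = 2
ranker = XGBClassifier(n_estimators=100, random_state=0).fit(X, y)
top_idx = np.argsort(ranker.feature_importances_)[::-1][:k]
X_sel = X[:, top_idx]

# Cross-validated accuracy with a normal-approximation 95% confidence interval.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
scores = cross_val_score(clf, X_sel, y, cv=10, scoring="accuracy")
mean, half_width = scores.mean(), 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
print(f"Accuracy: {mean:.2f} ± {half_width:.2f} (95% CI)")

# Average per-sample inference time on a held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)
clf.fit(X_tr, y_tr)
start = time.perf_counter()
clf.predict(X_te)
print(f"Inference time: {(time.perf_counter() - start) / len(X_te) * 1e3:.3f} ms/sample")
```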