TABLE 7.
Performance comparison of different stacking ensemble model combinations.
| Classifier | Accuracy | F1-score | Precision | Recall | Time (s) | AUC-ROC | TPR |
|---|---|---|---|---|---|---|---|
| NB | 0.74 | 0.73 | 0.74 | 0.73 | 1 | 0.81 | 0.73 |
| CatBoost | 0.88 | 0.88 | 0.88 | 0.88 | 122 | 0.96 | 0.88 |
| DT | 0.88 | 0.88 | 0.87 | 0.89 | 7 | 0.88 | 0.89 |
| GB | 0.84 | 0.84 | 0.86 | 0.80 | 83 | 0.92 | 0.86 |
| NDCG | 0.87 | 0.87 | 0.87 | 0.87 | 455 | 0.97 | 0.89 |
| GNCD | 0.83 | 0.83 | 0.84 | 0.82 | 1,005 | 0.83 | 0.82 |
| Hard voting | 0.87 | 0.86 | 0.88 | 0.85 | 182 | 0.87 | 0.85 |
| Soft voting | 0.88 | 0.88 | 0.86 | 0.91 | 187 | 0.88 | 0.91 |
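The metric columns in Table 7 follow their standard definitions from the binary confusion matrix; note in particular that recall and TPR are the same quantity by definition, which is why those two columns agree in most rows. As a minimal stdlib-only sketch (the confusion counts below are hypothetical and not taken from the paper's experiments):

```python
def classification_metrics(tp, fp, fn, tn):
    """Return accuracy, precision, recall/TPR, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # identical to TPR by definition
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1


# Illustrative counts only (not from Table 7):
acc, prec, rec, f1 = classification_metrics(tp=88, fp=12, fn=12, tn=88)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall/TPR={rec:.2f} f1={f1:.2f}")
```

With these illustrative counts all four metrics come out equal (0.88), mirroring rows such as CatBoost where a balanced error profile yields identical accuracy, F1, precision, and recall.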