Table 3. Performance metrics of different ensemble models.
| Model | Instance | Accuracy | Precision | Recall | F1-score | Training (s) | Testing (s) |
|---|---|---|---|---|---|---|---|
| Voting | Fraud | 0.99 | 0.98 | 0.97 | 0.98 | 134.00 | 0.13 |
| | Normal | 0.99 | 0.99 | 0.99 | 0.99 | | |
| Stacking | Fraud | 0.99 | 0.98 | 0.97 | 0.97 | 987.09 | 0.37 |
| | Normal | 0.99 | 0.99 | 0.99 | 0.99 | | |
| Boosting | Fraud | 0.98 | 0.98 | 0.95 | 0.96 | 3.99 | 0.02 |
| | Normal | 0.98 | 0.99 | 0.99 | 0.99 | | |
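For reference, the sketch below shows one way per-class metrics and train/test timings of this kind can be obtained for voting, stacking, and boosting ensembles with scikit-learn. The base learners, dataset, split, and hyperparameters are illustrative assumptions, not the configuration used to produce Table 3.

```python
# Minimal sketch (assumed setup): per-class metrics and train/test timings
# for voting, stacking, and boosting ensembles. The base learners, data,
# and hyperparameters are assumptions, not those behind Table 3.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
    VotingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder imbalanced data standing in for a fraud-detection dataset.
X, y = make_classification(n_samples=20_000, n_features=30,
                           weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier()),
        ("rf", RandomForestClassifier(n_estimators=100))]

models = {
    "Voting": VotingClassifier(estimators=base, voting="soft"),
    "Stacking": StackingClassifier(
        estimators=base,
        final_estimator=LogisticRegression(max_iter=1000)),
    "Boosting": GradientBoostingClassifier(),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)          # training time
    t1 = time.perf_counter()
    y_pred = model.predict(X_test)       # testing (inference) time
    t2 = time.perf_counter()

    print(f"{name}: train {t1 - t0:.2f} s, test {t2 - t1:.2f} s")
    # Per-class precision, recall, and F1-score, as reported in Table 3.
    print(classification_report(y_test, y_pred,
                                target_names=["Normal", "Fraud"], digits=2))
```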