Table 1. Performance metrics of the different original models.

| Model | Class | Accuracy | Precision | Recall | F1-score | Training (s) | Testing (s) |
|---|---|---|---|---|---|---|---|
| RNN | Fraud | 0.86 | 0.64 | 0.82 | 0.72 | 8.88 | 0.50 |
| RNN | Normal | 0.86 | 0.96 | 0.85 | 0.90 | | |
| LR | Fraud | 0.94 | 0.81 | 0.92 | 0.86 | 0.09 | 0.00 |
| LR | Normal | 0.94 | 0.98 | 0.94 | 0.96 | | |
| LOF | Fraud | 0.73 | 0.22 | 0.10 | 0.14 | 0.18 | 0.10 |
| LOF | Normal | 0.77 | 0.37 | 0.90 | 0.14 | | |
| IF | Fraud | 0.66 | 0.37 | 0.82 | 0.51 | 0.07 | 0.05 |
| IF | Normal | 0.66 | 0.92 | 0.63 | 0.75 | | |
| SVM | Fraud | 0.57 | 0.26 | 0.55 | 0.35 | 2.21 | 0.25 |
| SVM | Normal | 0.57 | 0.82 | 0.58 | 0.68 | | |
| RF | Fraud | 0.99 | 0.98 | 0.95 | 0.97 | 1.52 | 0.01 |
| RF | Normal | 0.99 | 0.99 | 1.00 | 0.99 | | |
| XGBoost | Fraud | 0.99 | 0.97 | 0.97 | 0.97 | 0.32 | 0.00 |
| XGBoost | Normal | 0.99 | 0.99 | 0.99 | 0.99 | | |
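As a reference for how the per-class metrics in Table 1 can be obtained, the sketch below computes accuracy, precision, recall, F1-score, and training/testing times for one of the listed models (random forest) with scikit-learn. The synthetic dataset, train/test split, and classifier settings are illustrative assumptions, not the exact experimental setup used here.

```python
# Minimal sketch: per-class precision/recall/F1 and train/test timing.
# The synthetic data, split, and RandomForestClassifier settings below
# are assumptions for illustration only.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced binary data: class 1 = "Fraud", class 0 = "Normal"
X, y = make_classification(
    n_samples=20_000, n_features=20, weights=[0.8, 0.2], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)

start = time.perf_counter()
clf.fit(X_train, y_train)                 # measured as Training (s)
train_time = time.perf_counter() - start

start = time.perf_counter()
y_pred = clf.predict(X_test)              # measured as Testing (s)
test_time = time.perf_counter() - start

# Per-class precision, recall, and F1-score, as reported in Table 1
print(classification_report(y_test, y_pred, target_names=["Normal", "Fraud"]))
print(f"Training: {train_time:.2f} s  Testing: {test_time:.2f} s")
```

The same loop can be repeated with any of the other classifiers in Table 1 (e.g. LogisticRegression, IsolationForest, or an XGBoost model) to produce comparable per-class metrics and timings.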