PeerJ Comput Sci. 2023 Nov 27;9:e1580. doi: 10.7717/peerj-cs.1580

Table 6. Comparison of the three boosting methods with other methods (%).

Model                     KDDCup99            UNSW-NB15           CICIDS 2017
                          Accuracy  F1 Score  Accuracy  F1 Score  Accuracy  F1 Score
Naïve Bayes               73.55     72.31     61.80     65.27     93.90     93.53
Decision Tree             77.89     75.25     73.25     76.36     99.62     99.57
Random Forest             77.20     73.23     74.35     77.28     99.79     99.78
SVM                       72.85     68.84     68.49     70.13     96.97     96.99
MLP                       78.97     75.40     78.32     76.98     99.48     99.39
RUS + SVM                 73.57     70.11     67.16     70.45     96.45     96.55
RUS + MLP                 76.66     72.38     77.27     76.21     99.46     99.42
ROS + SVM                 73.34     69.90     68.32     70.00     96.98     97.04
ROS + MLP                 78.10     74.18     76.13     76.97     99.55     99.55
SMOTE + SVM               79.23     78.36     71.50     73.77     97.00     97.04
SMOTE + MLP               77.47     75.18     79.59     80.10     99.33     99.34
CNN                       78.33     74.75     80.52     76.61     99.48     99.44
Fuzziness-based NN        75.33     70.58     81.21     78.58     99.61     99.57
LSSVM + MIFS (β = 0.3)    78.20     72.76     76.83     77.43     98.76     98.67
LSSVM + FMIFS             75.67     73.67     77.18     77.65     99.51     99.48
IGAN-IDS                  84.45     84.17     82.53     82.86     99.79     99.98
SMOTE + LightGBM          97.68     97.61     89.32     89.54     99.63     99.62
SMOTE + XGBoost           99.92     99.91     89.32     89.64     99.46     99.44
SMOTE + CatBoost          99.90     99.89     88.66     88.73     99.57     99.56
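For context, the sketch below illustrates the kind of SMOTE + boosting pipeline evaluated in the last three rows (oversample the training split with SMOTE, fit a gradient-boosted classifier, and report accuracy and F1). It is only a minimal illustration, not the authors' configuration: the synthetic dataset stands in for KDDCup99, UNSW-NB15, and CICIDS 2017, and the class layout, hyperparameters, and F1 averaging are assumptions.

```python
# Hedged sketch of a SMOTE + XGBoost evaluation (accuracy / weighted F1),
# analogous to the "SMOTE + XGBoost" row above. The synthetic dataset is a
# stand-in for the intrusion-detection datasets; hyperparameters and the
# weighted F1 averaging are assumptions, not values taken from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Imbalanced multi-class data standing in for an intrusion-detection dataset.
X, y = make_classification(
    n_samples=20_000, n_features=30, n_informative=15,
    n_classes=4, weights=[0.85, 0.10, 0.04, 0.01], random_state=42,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42,
)

# Oversample the minority classes on the training split only.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Gradient-boosted trees trained on the balanced data.
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    random_state=42)
clf.fit(X_res, y_res)

y_pred = clf.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.4f}")
print(f"F1 Score: {f1_score(y_test, y_pred, average='weighted'):.4f}")
```

The SMOTE + LightGBM and SMOTE + CatBoost rows follow the same pattern; LightGBM's LGBMClassifier and CatBoost's CatBoostClassifier expose the same fit/predict interface and can be swapped in for XGBClassifier.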