Table 4. Comparison of ensemble learning approaches for liver disease prediction reported in the reviewed studies (an illustrative sketch of the compared ensemble strategies follows the table).
Ref. | Base Learner Models | Ensemble Model | Data Type | Preprocessing Technique | Positive/Negative Cases | Dataset | Attributes/Instances | Accuracy | Best Model |
---|---|---|---|---|---|---|---|---|---|
[3] | BeggRep, BeggJ48, AdaBoost, LogitBoost, RF | Bagging, Boosting | Clinical | — | 416/167 | UCI Indian Liver Patient | 10/583 | Boosting (AdaBoost) = 70.2%, Boosting (LogitBoost) = 70.53%, Bagging (RF) = 69.2% | Boosting |
[49] | NB, SVM, KNN, LR, DT, MLP | Stacking, DT | Clinical | Feature selection, PCA | 416/167 | UCI Indian Liver Patient | 10/583 | Bagging (DT) = 69.40%, Stacking = 71.18% | Stacking |
[50] | KNN | RF, Gradient Boosting, AdaBoost, Stacking | Clinical | — | 416/167 | UCI Indian Liver Patient | 10/583 | Bagging (RF) = 96.5%, Boosting (Gradient) = 91%, Boosting (AdaBoost) = 94%, Stacking = 97% | Stacking |
[33] | DT, NB, KNN, LR, SVM, AdaBoost, CatBoost | XGBoost, LightGBM, RF | Clinical | Handled missing values | 416/167 | UCI Indian Liver Patient | 10/583 | Bagging (RF) = 88.5%, Boosting (XGBoost) = 86.7%, Boosting (LightGBM) = 84.3% | Bagging |
[32] | SVM, KNN, NN, LR, CART, ANN, PCA, LDA | Bagging, Stacking | Clinical | Handled missing values, feature selection, PCA | 453/426 | Iris and Physiological | 22/879 | Bagging (RF) = 85%, Stacking = 98% | Stacking |
[9] | KNN, SVM, RF, LR, CNN | RF, XGBoost, Gradient Boost | Image | Handled missing values, scaling, and feature selection | — | — | 11/10,000 | Bagging (RF) = 83%, Boosting (XGBoost) = 82%, Boosting (Gradient) = 85% | Boosting |
[51] | LR, DT, RF, KNN, MLP | AdaBoost, XGBoost, Stacking | Clinical | Data imputation, label encoding, resampling, eliminating duplicate values and outliers | 416/167 | UCI Indian Liver Patient | 10/583 | Boosting (AdaBoost) = 83%, Boosting (XGBoost) = 86%, Stacking = 85% | Boosting |
[10] | DT, KNN, SVM, NB | Bagging, Boosting, RF | Clinical | Discretisation, resampling, PCA | 416/167 | UCI Indian Liver Patient | 10/583 | Bagging (RF) = 88.6%, Bagging = 89%, Boosting = 89% | Bagging, Boosting |
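To make the bagging/boosting/stacking comparison in Table 4 concrete, the following minimal sketch (not taken from any of the cited studies) trains the three ensemble types with scikit-learn. Synthetic data generated with `make_classification` stands in for the 10-attribute, 583-instance UCI Indian Liver Patient dataset, and all model choices and hyperparameters are illustrative assumptions rather than the settings used in the reviewed papers.

```python
# Illustrative sketch: bagging vs. boosting vs. stacking, as compared in Table 4.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              StackingClassifier)
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the ILPD data: 10 attributes, 583 instances,
# roughly the 416/167 positive/negative class split reported in Table 4.
X, y = make_classification(n_samples=583, n_features=10,
                           weights=[0.71, 0.29], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    # Bagging: a random forest of decision trees.
    "Bagging (RF)": RandomForestClassifier(n_estimators=200, random_state=42),
    # Boosting: AdaBoost over shallow decision trees.
    "Boosting (AdaBoost)": AdaBoostClassifier(n_estimators=200, random_state=42),
    # Stacking: heterogeneous base learners combined by a logistic-regression meta-learner.
    "Stacking": StackingClassifier(
        estimators=[
            ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
            ("dt", DecisionTreeClassifier(random_state=42)),
            ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
            ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,
    ),
}

# Fit each ensemble and report held-out accuracy, mirroring the Accuracy column of Table 4.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

The accuracies produced by this sketch are for the synthetic stand-in only and should not be read as reproductions of the figures reported in the cited studies.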