Table 3. Comparison of ensemble-based studies on the UCI Dermatology dataset.
Ref. | Base Learner Algorithm | Ensemble Approach | Data Type | Preprocessing Technique | Positive/Negative Cases | Dataset | Attributes/Instances | Accuracy | Best Approach |
---|---|---|---|---|---|---|---|---|---|
[18] | NB, RF, KNN, SVM, and MLP | Bagging, Boosting, and Stacking | Clinical | Handled missing values | [20–112]/[254–346] | UCI Dermatology | 34/366 | Bagging = 96%, Boosting = 97%, Stacking = 100% | Stacking |
[8] | DT, LR | Bagging, AdaBoost, and Stacking | Clinical | Feature selection | [20–112]/[254–346] | UCI Dermatology | 34/366 | Bagging = 92.8%, Boosting (AdaBoost) = 92.8%, Stacking = 92.8% | Bagging, Boosting, and Stacking (tie) |
[48] | LR, CHAID DT | Bagging, Boosting | Clinical | Handled missing values, data distribution, and balancing | [20–112]/[254–346] | UCI Dermatology | 34/366 | Bagging = 100%, Boosting = 100% | Bagging and Boosting (tie) |
[31] | NB, KNN, DT, SVM, RF, MLP | Bagging, Boosting, and Stacking | Clinical | Hybrid feature selection, information gain, and PCA | [20–112]/[254–346] | UCI Dermatology | 12/366 | Bagging = 95.94%, Boosting = 97.70%, Stacking = 99.67% | Stacking |
[16] | PAC, LDA, RNC, BNB, NB, ETC | Bagging, AdaBoost, Gradient Boosting | Clinical | Feature selection | [20–112]/[254–346] | UCI Dermatology | 34/366 | Bagging = 97.35%, AdaBoost = 98.21%, Gradient Boosting = 99.46% | Gradient Boosting |
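
To make the comparison concrete, the sketch below illustrates how the three ensemble strategies compared in Table 3 (bagging, boosting, and stacking) can be assembled with scikit-learn. It is not taken from any of the surveyed studies: the synthetic data, base learners, and hyperparameters are illustrative placeholders standing in for the UCI Dermatology set (34 attributes, 366 instances, six classes) and the learners listed in the table.

```python
# Minimal sketch of the three ensemble strategies in Table 3.
# Assumption: synthetic data stands in for the UCI Dermatology set;
# base learners and hyperparameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data with the same shape as UCI Dermatology (366 x 34, 6 classes).
X, y = make_classification(n_samples=366, n_features=34, n_informative=12,
                           n_classes=6, random_state=0)

ensembles = {
    # Bagging: one base learner trained on bootstrap resamples.
    "Bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 random_state=0),
    # Boosting: weak learners fitted sequentially on reweighted samples.
    "Boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Stacking: heterogeneous base learners combined by a meta-learner.
    "Stacking": StackingClassifier(
        estimators=[("nb", GaussianNB()),
                    ("knn", KNeighborsClassifier()),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression(max_iter=1000)),
}

for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

The accuracies printed by this sketch are not comparable to those reported in Table 3; the surveyed studies differ in preprocessing (missing-value handling, feature selection, PCA), base learners, and evaluation protocol.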