
TABLE 7. Comparing the Performance Metrics Achieved With the Pruned and Unpruned Model Ensembles From Table 4.

| Method | Ensemble | Acc. | AUC | Sens. | Prec. | F | MCC |
|---|---|---|---|---|---|---|---|
| Majority Voting | Unpruned | 0.9742 | 0.9807 [0.9686, 0.9928] | 0.9742 | 0.9748 | 0.9742 | 0.9537 |
| Majority Voting | Pruned | 0.9821 | 0.9866 [0.9765, 0.9967] | 0.9821 | 0.9822 | 0.9821 | 0.9676 |
| Averaging | Unpruned | 0.9782 | 0.9969 [0.9920, 1.0] | 0.9782 | 0.9786 | 0.9782 | 0.9607 |
| Averaging | Pruned | 0.9821 | 0.9969 [0.9920, 1.0] | 0.9821 | 0.9823 | 0.9821 | 0.9677 |
| Weighted Averaging | Unpruned | 0.9762 | 0.9968 [0.9918, 1.0] | 0.9762 | 0.9767 | 0.9762 | 0.9572 |
| Weighted Averaging | Pruned | 0.9901 | 0.9972 [0.9925, 1.0] | 0.9901 | 0.9901 | 0.9901 | 0.9820 |
| Stacking | Unpruned | 0.9663 | 0.9865 [0.9764, 0.9966] | 0.9663 | 0.9680 | 0.9662 | 0.9402 |
| Stacking | Pruned | 0.9712 | 0.9876 [0.9779, 0.9973] | 0.9712 | 0.9711 | 0.9712 | 0.9473 |

* Bold values indicate the model with statistically significantly better performance than the other models.
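The sketch below is not the authors' implementation; it is a minimal illustration of the four fusion rules compared in Table 7 and of how the reported metrics can be computed with scikit-learn. The data layout (a list of per-model class-probability arrays `probas`, integer labels `y_true`) and the logistic-regression meta-learner used for stacking are assumptions made only for the example.

```python
# Illustrative sketch (not the paper's code): ensemble fusion rules from Table 7
# and the corresponding evaluation metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, roc_auc_score, recall_score,
                             precision_score, f1_score, matthews_corrcoef)

def majority_voting(probas):
    # Each base model casts one vote for its argmax class; ties go to the lowest class index.
    votes = np.stack([p.argmax(axis=1) for p in probas], axis=1)
    n_classes = probas[0].shape[1]
    counts = np.apply_along_axis(lambda v: np.bincount(v, minlength=n_classes), 1, votes)
    return counts / counts.sum(axis=1, keepdims=True)  # vote shares as pseudo-probabilities

def averaging(probas):
    # Unweighted mean of the models' probability outputs.
    return np.mean(probas, axis=0)

def weighted_averaging(probas, weights):
    # Convex combination of probability outputs; weights could come from validation accuracy.
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return np.tensordot(weights, np.stack(probas), axes=1)

def stacking(probas_train, y_train, probas_test):
    # Meta-learner trained on the concatenated base-model probabilities
    # (logistic regression is only one possible choice of meta-learner).
    meta = LogisticRegression(max_iter=1000)
    meta.fit(np.hstack(probas_train), y_train)
    return meta.predict_proba(np.hstack(probas_test))

def score(y_true, proba):
    # Same metrics as in Table 7, macro-averaged where applicable.
    # Assumes a multi-class problem; for a binary task pass proba[:, 1] to roc_auc_score.
    y_pred = proba.argmax(axis=1)
    return {
        "Acc.": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, proba, multi_class="ovr"),
        "Sens.": recall_score(y_true, y_pred, average="macro"),
        "Prec.": precision_score(y_true, y_pred, average="macro"),
        "F": f1_score(y_true, y_pred, average="macro"),
        "MCC": matthews_corrcoef(y_true, y_pred),
    }
```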