Sensors. 2020 Nov 24;20(23):6718. doi: 10.3390/s20236718

Table 8.

Accuracy of three classifiers (Bagging, AdaBoost, and k-NN with k = 1) trained on the ENDM-based filtered training set with the noise-correction modality, for four different metrics (SuMax, UnMax, SuSum, UnSum). The values in brackets are the detected noise ratios. The best results are marked in bold.

Dataset    Classifier   SuMax         UnMax         SuSum         UnSum
Letter     Bagging      50.94 (1%)    50.85 (1%)    46.02 (0%)    50.88 (1%)
           AdaBoost     50.44 (1%)    50.44 (1%)    48.40 (0%)    50.32 (1%)
           k-NN         43.56 (9%)    46.11 (7%)    41.81 (15%)   44.00 (8%)
Optdigits  Bagging      89.80 (0%)    90.10 (0%)    89.87 (0%)    89.80 (0%)
           AdaBoost     93.23 (16%)   92.30 (15%)   92.21 (18%)   91.33 (17%)
           k-NN         92.36 (17%)   92.73 (16%)   92.17 (19%)   90.51 (22%)
Pendigit   Bagging      90.89 (6%)    90.76 (3%)    90.04 (0%)    91.05 (6%)
           AdaBoost     93.57 (19%)   91.84 (16%)   93.64 (18%)   91.57 (12%)
           k-NN         92.11 (15%)   91.00 (14%)   93.08 (18%)   91.11 (14%)
Statlog    Bagging      86.48 (14%)   86.02 (16%)   84.83 (0%)    86.50 (12%)
           AdaBoost     87.80 (20%)   86.39 (9%)    88.15 (19%)   86.38 (13%)
           k-NN         86.58 (18%)   86.16 (16%)   86.92 (23%)   86.24 (16%)
Vehicle    Bagging      73.90 (1%)    73.75 (1%)    73.50 (0%)    72.95 (0%)
           AdaBoost     73.85 (17%)   72.00 (20%)   73.55 (17%)   72.80 (23%)
           k-NN         73.85 (11%)   72.90 (2%)    73.05 (8%)    72.95 (2%)
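As a rough sketch of the evaluation protocol behind this table, the snippet below trains the three classifiers named in the caption (Bagging, AdaBoost, and k-NN with k = 1) and reports test accuracy as a percentage. This is an assumption-laden illustration using scikit-learn defaults and the built-in `digits` dataset as a stand-in for Optdigits; the paper's ENDM-based noise detection and filtering of the training set is not reproduced here.

```python
# Hypothetical sketch: accuracy of the three classifiers from Table 8 on a
# stand-in dataset (sklearn's digits, in place of Optdigits). The ENDM-based
# noise filtering of the training set is NOT implemented here.
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "Bagging": BaggingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "k-NN (k=1)": KNeighborsClassifier(n_neighbors=1),
}

for name, clf in classifiers.items():
    # Fit on the (here unfiltered) training set, score on the held-out split.
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te) * 100
    print(f"{name}: {acc:.2f}%")
```

In the paper's setup each cell of the table would instead come from a training set filtered by one of the four metrics (SuMax, UnMax, SuSum, UnSum), with the bracketed value being the fraction of training samples flagged as noisy.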