Sensors. 2020 Nov 24;20(23):6718. doi: 10.3390/s20236718

Table 9.

Accuracy of the k-NN classifier trained on the ENDM-based filtered training set with the noise correction modality, using four different metrics (SuMax, UnMax, SuSum, UnSum) and three different classifiers (Bagging, AdaBoost, and k-NN with k = 1). The values in brackets are the detected noise ratios. The best results are marked in bold.

Dataset    Classifier   SuMax        UnMax        SuSum        UnSum
Letter     Bagging      74.10 (1%)   73.96 (1%)   73.02 (0%)   74.94 (2%)
           AdaBoost     73.74 (1%)   73.74 (1%)   72.86 (0%)   73.76 (1%)
           k-NN         76.44 (11%)  75.72 (4%)   80.44 (15%)  75.98 (5%)
Optdigits  Bagging      76.60 (0%)   76.60 (0%)   78.80 (2%)   76.60 (0%)
           AdaBoost     88.90 (15%)  89.30 (17%)  90.10 (19%)  88.00 (17%)
           k-NN         89.10 (16%)  89.20 (17%)  90.40 (20%)  88.00 (20%)
Pendigit   Bagging      83.55 (3%)   84.25 (4%)   79.90 (0%)   83.55 (3%)
           AdaBoost     93.15 (21%)  88.80 (9%)   94.15 (19%)  91.85 (13%)
           k-NN         92.60 (15%)  91.75 (15%)  93.65 (18%)  92.05 (16%)
Statlog    Bagging      80.30 (8%)   84.55 (16%)  73.70 (1%)   82.20 (10%)
           AdaBoost     83.40 (11%)  82.40 (10%)  85.75 (18%)  83.80 (12%)
           k-NN         85.05 (18%)  84.45 (16%)  85.80 (19%)  84.70 (16%)
Vehicle    Bagging      63.00 (2%)   63.00 (2%)   60.00 (1%)   62.00 (1%)
           AdaBoost     65.50 (15%)  65.50 (10%)  67.50 (20%)  66.00 (15%)
           k-NN         64.00 (12%)  63.50 (8%)   64.50 (6%)   61.50 (2%)
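The table reports how 1-NN accuracy changes when suspected label noise is removed from the training set before fitting. As a minimal, self-contained sketch of that evaluation protocol (it does not implement the paper's ENDM filter; the noisy indices here are known by construction, and all data is synthetic), the following compares a 1-NN classifier trained on a noisy training set against one trained on the same set with the flipped labels filtered out:

```python
import numpy as np

def knn1_predict(X_train, y_train, X_test):
    """k-NN with k = 1: each test point takes the label of its
    nearest training point (Euclidean distance)."""
    # Pairwise squared distances, shape (n_test, n_train).
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d2.argmin(axis=1)]

def accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

# Toy data: two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-2.0, size=(50, 2)),
                     rng.normal(+2.0, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# Simulate 10% class noise by flipping randomly chosen labels.
noisy_idx = rng.choice(100, size=10, replace=False)
y_noisy = y_train.copy()
y_noisy[noisy_idx] ^= 1

X_test = np.vstack([rng.normal(-2.0, size=(20, 2)),
                    rng.normal(+2.0, size=(20, 2))])
y_test = np.array([0] * 20 + [1] * 20)

# Accuracy with the noisy training set, then with the noisy
# examples filtered out (here using the known indices; the paper's
# ENDM-based filter detects them automatically).
acc_noisy = accuracy(y_test, knn1_predict(X_train, y_noisy, X_test))
keep = np.setdiff1d(np.arange(100), noisy_idx)
acc_clean = accuracy(y_test, knn1_predict(X_train[keep], y_noisy[keep], X_test))
print(f"1-NN accuracy, noisy labels:   {acc_noisy:.2f}")
print(f"1-NN accuracy, after filtering: {acc_clean:.2f}")
```

The detected noise ratios in brackets in the table correspond to the fraction of training examples removed or corrected by the filter, i.e. `len(noisy_idx) / len(y_train)` in this sketch.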