J Imaging. 2021 Nov 27;7(12):254. doi: 10.3390/jimaging7120254

Table 2.

Performance (accuracy, %) of the different data augmentation configurations.

Data Aug.          VIR      BARK     GRAV     POR
NoDA               85.53    87.48    97.66    86.29
App1               87.00    89.60    97.83    87.05
App2               86.87    90.17    98.08    85.97
App3               87.80    89.45    97.99    87.05
App4               86.33    87.91    97.74    84.90
App5               86.00    87.61    97.83    86.41
App6               --       88.63    98.08    87.37
App7               --       89.28    97.99    88.13
App8               --       87.29    97.74    86.06
App9               85.67    88.86    98.24    86.19
App10              84.20    86.39    98.41    85.10
App11              85.47    89.20    97.91    86.71
[29]               82.93    --       --       --
[33]               83.07    --       --       --
EnsDA_all          90.00    91.27    98.33    89.21
EnsDA_5            89.60    91.01    98.08    88.56
EnsBase            89.73    90.67    98.16    87.58
EnsBase_5          89.60    90.66    97.99    87.48
State of the art   89.60    90.40    98.21    80.09/90.08 *

* As noted above, for fair comparison, 80.09 is the best performance obtained with their deep learning approach, while 90.08 was obtained by combining handcrafted and deep learning features.
Note: the virus data set (VIR) contains gray-level images; for this reason, the three color-based data augmentation methods (App6–App8) perform poorly on VIR and are not reported for this data set. Likewise, because of their low performance on VIR, [29,33] are not tested on BARK, GRAV, and POR. Bold values highlight the best results.
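The EnsDA and EnsBase rows report ensembles of networks trained under different configurations, but this excerpt does not state how their outputs are combined. Below is a minimal sketch assuming a simple sum-rule (score-averaging) fusion of softmax outputs; the function names, array shapes, and toy data are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of sum-rule score fusion for an ensemble such as EnsDA_all.
# The fusion rule and all names here are assumptions for illustration only.
import numpy as np

def sum_rule_fusion(score_list):
    """Average the softmax score matrices of several classifiers.

    score_list: list of arrays, each of shape (n_samples, n_classes),
                one per trained network / augmentation configuration.
    Returns the fused score matrix of shape (n_samples, n_classes).
    """
    return np.mean(np.stack(score_list, axis=0), axis=0)

def accuracy(scores, labels):
    """Top-1 accuracy (%) from a score matrix and integer class labels."""
    return 100.0 * np.mean(np.argmax(scores, axis=1) == labels)

# Toy usage: 3 hypothetical classifiers, 10 samples, 4 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=10)
scores = [rng.random((10, 4)) for _ in range(3)]
fused = sum_rule_fusion(scores)
print(f"Toy ensemble accuracy: {accuracy(fused, labels):.2f}%")
```

Averaging scores (rather than hard majority voting) keeps the per-class confidence information from each network, which is one common way such ensembles are built; the paper's actual combination rule may differ.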