2021 Sep 17;14(4):3609–3620. doi: 10.1007/s12652-021-03488-z

Table 5.

Quantitative comparison among deep learning, handcrafted, and ensemble feature extraction methods (classifier-wise root mean square error, in %)

| Features | Gaussian Naïve Bayes | Decision Tree | Random Forest | XGB Classifier |
| --- | --- | --- | --- | --- |
| F1 | 31.21 | 31.24 | 29.38 | 30.49 |
| F1 + F2 | 27.11 | 26.76 | 25.63 | 25.96 |
| F1 + F3 | 29.96 | 29.84 | 28.38 | 28.64 |
| F1 + F4 | 25.78 | 24.75 | 23.35 | 22.71 |
| F1 + F5 | 25.87 | 26.62 | 26.04 | 24.87 |
| F1 + F2 + F3 | 25.99 | 24.95 | 24.61 | 24.72 |
| F1 + F2 + F4 | 24.26 | 23.67 | 22.88 | 21.88 |
| F1 + F2 + F5 | 24.14 | 23.88 | 23.66 | 23.66 |
| F1 + F3 + F4 | 25.77 | 24.47 | 23.88 | 22.92 |
| F1 + F3 + F5 | 25.93 | 25.66 | 25.12 | 24.41 |
| F1 + F4 + F5 | 22.78 | 22.38 | 21.42 | 21.20 |
| F1 + F2 + F3 + F4 | 22.84 | 21.63 | 21.76 | 21.32 |
| F1 + F2 + F3 + F5 | 23.43 | 22.65 | 22.87 | 22.45 |
| F1 + F2 + F4 + F5 | 21.32 | 21.41 | 20.51 | 19.60 |
| F1 + F3 + F4 + F5 | 22.14 | 21.56 | 20.72 | 21.15 |
| F1 + F2 + F3 + F4 + F5 | 21.01 | 20.92 | 20.05 | 20.01 |

Boldface indicates the maximum accuracy (lowest error) achieved in each table
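
For context, the sketch below shows one plausible way such a classifier-wise RMSE comparison could be assembled, assuming the feature sets F1–F5 are numeric arrays that are concatenated column-wise and the error is computed between true and predicted class labels. The placeholder data, train/test split, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (assumptions, not the authors' pipeline): evaluate the four
# classifiers from Table 5 on one concatenated feature combination and report
# an RMSE expressed in percent between true and predicted class labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_squared_error
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_samples = 500
# Placeholder feature blocks standing in for F1..F5 (e.g. deep and handcrafted
# descriptors); replace with the real extracted features.
features = {f"F{i}": rng.normal(size=(n_samples, 16)) for i in range(1, 6)}
labels = rng.integers(0, 2, size=n_samples)  # placeholder binary labels

classifiers = {
    "Gaussian Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "XGB Classifier": XGBClassifier(eval_metric="logloss", random_state=0),
}

def rmse_percent(y_true, y_pred):
    """Root mean square error between labels, scaled to percent."""
    return 100.0 * np.sqrt(mean_squared_error(y_true, y_pred))

combination = ["F1", "F2", "F4", "F5"]  # e.g. the F1 + F2 + F4 + F5 row
X = np.hstack([features[name] for name in combination])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0
)

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: {rmse_percent(y_test, clf.predict(X_test)):.2f}% RMSE")
```

Looping this over every combination of F2–F5 appended to F1 would populate a table of the same shape as the one above.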