Sensors. 2023 Jun 23;23(13):5850. doi: 10.3390/s23135850

Table 7.

Top results (with threshold accuracy set at 80%) of numerical experiments for the STFT approach with hyperparameter optimization.

| Model | Transformation | n_train | n_test | % for Testing | Acc. [%] |
|---|---|---|---|---|---|
| Gradient Boosting | NONE | 30 | 45 | 60 | 88.89 |
| Random Forest | STD | 30 | 45 | 60 | 88.89 |
| Random Forest | MINMAX | 52 | 23 | 30 | 86.96 |
| Random Forest | NONE | 52 | 23 | 30 | 86.96 |
| Gradient Boosting | MINMAX | 37 | 38 | 50 | 86.84 |
| Gradient Boosting | MINMAX | 30 | 45 | 60 | 86.67 |
| SVC | STD | 22 | 53 | 70 | 84.91 |
| SVC | MINMAX | 22 | 53 | 70 | 84.91 |
| SVC | NONE | 22 | 53 | 70 | 84.91 |
| SVC | MINMAX | 30 | 45 | 60 | 84.44 |
| SVC | STD | 30 | 45 | 60 | 84.44 |
| SVC | NONE | 30 | 45 | 60 | 84.44 |
| Gradient Boosting | STD | 22 | 53 | 70 | 83.02 |
| XGB | MINMAX | 30 | 45 | 60 | 82.22 |
| XGB | STD | 30 | 45 | 60 | 82.22 |
| XGB | NONE | 30 | 45 | 60 | 82.22 |
| Gradient Boosting | NONE | 37 | 38 | 50 | 81.58 |
| Random Forest | MINMAX | 37 | 38 | 50 | 81.58 |
| SGD | STD | 37 | 38 | 50 | 81.58 |
| GaussianNB | STD | 22 | 53 | 70 | 81.13 |
| GaussianNB | MINMAX | 22 | 53 | 70 | 81.13 |
| GaussianNB | NONE | 22 | 53 | 70 | 81.13 |
| XGB | STD | 22 | 53 | 70 | 81.13 |
| XGB | MINMAX | 22 | 53 | 70 | 81.13 |
| XGB | NONE | 22 | 53 | 70 | 81.13 |
| Decision Tree | MINMAX | 45 | 30 | 40 | 80.00 |
| LGBM | MINMAX | 30 | 45 | 60 | 80.00 |
| LGBM | STD | 30 | 45 | 60 | 80.00 |
| LGBM | NONE | 30 | 45 | 60 | 80.00 |
| Random Forest | NONE | 30 | 45 | 60 | 80.00 |
| SGD | STD | 30 | 45 | 60 | 80.00 |
| SVC | MINMAX | 45 | 30 | 40 | 80.00 |
| SVC | STD | 45 | 30 | 40 | 80.00 |
| Decision Tree | NONE | 45 | 30 | 40 | 80.00 |
| SVC | NONE | 45 | 30 | 40 | 80.00 |
| Random Forest | MINMAX | 60 | 15 | 20 | 80.00 |

Note: the split percentage denotes the share of the dataset held out for testing; in every row it is consistent with n_test (and n_train + n_test = 75).
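Every n_train/n_test pair in the table is consistent with a dataset of 75 samples split at the listed percentage, with the test size rounded up, as scikit-learn's `train_test_split` does. A minimal sketch of this arithmetic (the 75-sample total and the ceiling rounding are inferred from the table, not stated in it):

```python
from math import ceil

N_SAMPLES = 75  # inferred: every row satisfies n_train + n_test = 75

def split_sizes(test_fraction: float, n_samples: int = N_SAMPLES):
    """Return (n_train, n_test) for a holdout split, rounding the
    test size up -- the convention scikit-learn's train_test_split uses."""
    n_test = ceil(n_samples * test_fraction)
    return n_samples - n_test, n_test

# Reproduce every split that appears in the table.
for frac in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7):
    n_train, n_test = split_sizes(frac)
    print(f"test fraction {frac:.1f}: n_train={n_train}, n_test={n_test}")
# → test fraction 0.3 gives n_train=52, n_test=23, matching the
#   Random Forest MINMAX/NONE rows, and so on for the other fractions.
```

The reported accuracies also match integer counts of correct test predictions, e.g. 40/45 = 88.89% and 20/23 = 86.96%.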