Sensors. 2020 Aug 27;20(17):4833. doi: 10.3390/s20174833

Table 3.

Classification performance on the test dataset. Models include a constant classifier (outcome bias), baseline models (k-nearest neighbors (KNN), logistic regression, naïve Bayes, decision tree), and black-box models (support vector machine (SVM), neural network, AdaBoost, random forest).

Activity (Resting, Web, Sudoku)

                              Liberal (95% Training)             Conservative (10% Training)
Model                         AUC    F1     Precision  Recall    AUC    F1     Precision  Recall
Constant                      0.49   0.22   0.16       0.40      0.50   0.22   0.16       0.39
K-Nearest Neighbors (k = 3)   0.97   0.93   0.93       0.93      0.89   0.84   0.84       0.84
Logistic Regression           0.52   0.23   0.44       0.41      0.36   0.28   0.41       0.41
Naïve Bayes                   0.65   0.45   0.46       0.47      0.67   0.44   0.45       0.46
Decision Tree (depth = 4)     0.68   0.43   0.57       0.49      0.67   0.45   0.53       0.48
Support Vector Machine        0.50   0.31   0.33       0.32      0.44   0.28   0.31       0.29
Neural Network                0.79   0.61   0.61       0.61      0.33   0.45   0.47       0.47
AdaBoost                      0.93   0.92   0.92       0.92      0.78   0.80   0.80       0.80
Random Forest                 0.99   0.94   0.94       0.94      0.93   0.85   0.85       0.85
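
The sketch below illustrates the kind of evaluation summarized in Table 3: the same set of classifiers is trained under a liberal split (95% of the data used for training) and a conservative split (10% used for training), and AUC, F1, precision, and recall are computed on the held-out test data. This is not the authors' code; the feature matrix, the three-class activity labels, the stratified splitting, the one-vs-rest AUC, and the macro averaging of F1/precision/recall are all assumptions made for illustration.

```python
# Minimal sketch of a liberal vs. conservative train/test evaluation (assumed setup,
# placeholder data -- not the study's features or labels).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))        # placeholder feature matrix
y = rng.integers(0, 3, size=600)      # placeholder labels: 0=Resting, 1=Web, 2=Sudoku

models = {
    "Constant": DummyClassifier(strategy="prior"),
    "K-Nearest Neighbors (k = 3)": KNeighborsClassifier(n_neighbors=3),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Decision Tree (depth = 4)": DecisionTreeClassifier(max_depth=4),
    "Support Vector Machine": SVC(probability=True),
    "Neural Network": MLPClassifier(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "Random Forest": RandomForestClassifier(),
}

for split_name, train_frac in [("Liberal (95% Training)", 0.95),
                               ("Conservative (10% Training)", 0.10)]:
    # Split once per condition; the remainder of the data is the test set.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0)
    print(split_name)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        y_pred = model.predict(X_te)
        y_prob = model.predict_proba(X_te)
        # One-vs-rest AUC for the three activity classes; macro-averaged F1/precision/recall.
        auc = roc_auc_score(y_te, y_prob, multi_class="ovr")
        f1 = f1_score(y_te, y_pred, average="macro")
        prec = precision_score(y_te, y_pred, average="macro", zero_division=0)
        rec = recall_score(y_te, y_pred, average="macro", zero_division=0)
        print(f"  {name:<30s} AUC={auc:.2f} F1={f1:.2f} P={prec:.2f} R={rec:.2f}")
```

With random placeholder data every model hovers near chance; the point of the sketch is only the evaluation structure, in particular that the conservative condition leaves far less training data, which is consistent with the generally lower scores in the right half of the table.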