J Optom. 2022 Oct 7;15(Suppl 1):S12–S21. doi: 10.1016/j.optom.2022.09.002

Table 3.

Performance of the trained supervised algorithms on the test set (20% of the whole dataset, i.e., 655 data points in case 1 (140 frames) and 96 data points in case 2 (first 20 frames)). Performance was assessed by the number of true negatives (TN), true positives (TP), false negatives (FN), and false positives (FP), together with accuracy, area under the curve (AUC), and its 95% confidence interval (CI). The last column (#) ranks the algorithms from best (1) to worst (6) performance.

| Case | Algorithm | TN | TP | FN | FP | TN (%) | TP (%) | FN (%) | FP (%) | Accuracy (%) | AUC | 95% CI | # |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 140 frames | Logistic Regression | 316 | 164 | 76 | 99 | 48 | 25 | 12 | 15 | 73.28 | 0.71 | [0.70, 0.77] | 5 |
| 140 frames | K-nearest Neighbors (K=8) | 348 | 198 | 44 | 65 | 53 | 30 | 7 | 10 | 83.34 | 0.82 | [0.81, 0.86] | 1 |
| 140 frames | Kernel Support Vector Machine | 340 | 166 | 52 | 97 | 52 | 25 | 8 | 15 | 77.25 | 0.75 | [0.74, 0.81] | 4 |
| 140 frames | Naïve Bayes | 326 | 126 | 66 | 137 | 50 | 19 | 10 | 21 | 69.00 | 0.67 | [0.67, 0.73] | 6 |
| 140 frames | Decision Tree Classification | 327 | 188 | 65 | 75 | 50 | 29 | 10 | 11 | 78.63 | 0.77 | [0.76, 0.82] | 3 |
| 140 frames | Random Forest Classification | 340 | 179 | 52 | 84 | 52 | 27 | 8 | 13 | 79.24 | 0.77 | [0.76, 0.82] | 2 |
| First 20 frames | Logistic Regression | 49 | 28 | 10 | 9 | 51 | 29 | 10 | 9 | 80.2 | 0.79 | [0.72, 0.88] | 5 |
| First 20 frames | K-nearest Neighbors (K=5) | 53 | 36 | 6 | 1 | 55 | 38 | 6 | 1 | 92.7 | 0.94 | [0.88, 0.98] | 2 |
| First 20 frames | Kernel Support Vector Machine | 49 | 27 | 10 | 10 | 51 | 28 | 10 | 10 | 79.2 | 0.78 | [0.71, 0.87] | 6 |
| First 20 frames | Naïve Bayes | 54 | 28 | 5 | 9 | 56 | 29 | 5 | 9 | 85.4 | 0.84 | [0.78, 0.93] | 4 |
| First 20 frames | Decision Tree Classification | 54 | 36 | 5 | 1 | 56 | 38 | 5 | 1 | 93.8 | 0.94 | [0.89, 0.99] | 1 |
| First 20 frames | Random Forest Classification | 53 | 36 | 6 | 1 | 55 | 38 | 6 | 1 | 92.7 | 0.94 | [0.88, 0.98] | 2 |
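For context, the sketch below shows one way a row of Table 3 could be computed with scikit-learn from a classifier's held-out test-set predictions. It is not the authors' code: the function name `summarize`, the use of predicted probabilities for the AUC, and the percentile-bootstrap 95% confidence interval are all assumptions, since the table does not state how the confidence intervals were obtained.

```python
# Illustrative sketch (not the authors' code): metrics as reported in Table 3,
# assuming binary per-frame labels, hard predictions, and probability scores.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score

def summarize(y_true, y_pred, y_score, n_boot=2000, seed=0):
    """Return TN, TP, FN, FP, accuracy (%), AUC, and a bootstrap 95% CI for the AUC.

    The percentile bootstrap over test-set resamples is only an example;
    the paper does not specify its CI method.
    """
    y_true, y_pred, y_score = map(np.asarray, (y_true, y_pred, y_score))
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    acc = accuracy_score(y_true, y_pred)
    auc = roc_auc_score(y_true, y_score)

    rng = np.random.default_rng(seed)
    n = len(y_true)
    boot_aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resample test points with replacement
        if len(np.unique(y_true[idx])) < 2:   # AUC needs both classes present
            continue
        boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    ci_lo, ci_hi = np.percentile(boot_aucs, [2.5, 97.5])

    return {"TN": tn, "TP": tp, "FN": fn, "FP": fp,
            "accuracy_pct": 100 * acc, "AUC": auc, "AUC_95CI": (ci_lo, ci_hi)}
```

Applying such a function to each of the six trained classifiers on the 655-point (case 1) or 96-point (case 2) test set would yield the counts, accuracy, AUC, and confidence interval reported in the corresponding row of the table.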