2017 Feb 16;11:9. doi: 10.3389/fninf.2017.00009

Table 4.

Classification accuracies from Experiment 1.

Participant   RF_RQA   SVM_RQA   SVM_GW   DT_RQA   DT_GW
STUDY 1
1             0.83     0.82      0.87     0.77     0.79
2             0.89     0.87      0.85     0.81     0.80
3             0.93     0.93      0.94     0.92     0.89
4             0.91     0.91      0.66     0.87     0.48
5             0.80     0.81      0.75     0.79     0.71
6             0.88     0.88      0.84     0.82     0.81
STUDY 2
1             0.80     0.79      0.71     0.77     0.62
2             0.69     0.68      0.80     0.65     0.72
3             0.99     0.99      0.99     0.99     0.99
4             0.95     0.93      0.90     0.91     0.90
5             0.85     0.85      0.73     0.84     0.69
6             —        —         —        —        —

Columns labeled RF, SVM, and DT report results from random forest, support vector machine, and decision tree classifiers, respectively. Columns with the RQA subscript report results from our RQA features, while columns with the GW subscript report the best results from Goodwin et al. (2014); the latter are the highest classification accuracies selected from three different feature sets for each classifier. Because participant 6 completed only one session in Study 2, we cannot report leave-one-session-out cross-validation results for that participant. Bold values indicate the best classification accuracy in each row (i.e., for each participant within each study).
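To illustrate why participant 6's single session rules out this validation scheme, the following is a minimal sketch of leave-one-session-out splitting. The helper function and example session labels are hypothetical and not taken from the paper's actual pipeline; they only show that a participant with one session yields no valid train/test folds.

```python
# Minimal sketch of leave-one-session-out cross-validation splitting.
# (Hypothetical helper; not the authors' actual implementation.)

def leave_one_session_out(session_labels):
    """Yield (train_idx, test_idx) pairs, holding out one session at a time.

    session_labels: sequence where session_labels[i] is the session ID
    of sample i. A participant with only one recorded session (like
    participant 6 in Study 2) yields no folds, because holding out that
    session would leave an empty training set.
    """
    sessions = sorted(set(session_labels))
    if len(sessions) < 2:
        return  # cannot cross-validate with a single session
    for held_out in sessions:
        test_idx = [i for i, s in enumerate(session_labels) if s == held_out]
        train_idx = [i for i, s in enumerate(session_labels) if s != held_out]
        yield train_idx, test_idx

# Example: three sessions with two samples each -> three folds
labels = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = list(leave_one_session_out(labels))
# → 3 folds; the first fold trains on sessions s2/s3 and tests on s1

# One session only -> no folds, mirroring the missing row in the table
assert list(leave_one_session_out(["s1"] * 4)) == []
```

Each fold would train a classifier (random forest, SVM, or decision tree) on the retained sessions and report accuracy on the held-out one; the table's per-participant accuracies are aggregates over such folds.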