Author manuscript; available in PMC: 2023 Jan 1.
Published in final edited form as: Comput Speech Lang. 2021 Jul 19;71:101263. doi: 10.1016/j.csl.2021.101263

Table 3:

Truth-telling task performance using session-level aggregated features and a feature significance threshold of p < 0.30.

Model   F1       Acc.     Prec.    FNR      Pos. Acc.  Neg. Acc.
DT      0.540*   0.662**  0.587**  0.268*   0.525      0.737
RF      0.324    0.633    0.517**  0.328    0.250      0.840
GNB     0.378    0.536    0.370    0.354    0.393      0.613
L-SVM   0.640**  0.719**  0.584**  0.165**  0.725      0.718
* indicates performance better than the randomized bootstrap σt.

** indicates performance better than the human simulation h2.

Bold values indicate the best performance in their respective columns. Pos. Acc. and Neg. Acc. denote the accuracy on the positive and negative classes, respectively. The L-SVM results shown above were obtained with the best-performing hyperparameters, C = 1 and balanced class weights. The decision tree classifier used entropy as the splitting criterion, while the random forest classifier used Gini impurity.
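The classifier configurations described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the model names mirror the table rows, while the feature matrix and training loop are omitted since the paper's session-level features are not reproduced here.

```python
# Hedged sketch of the four classifiers from Table 3, configured with
# the hyperparameters the footnotes report. Training data is omitted.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

models = {
    # Decision tree with entropy as the splitting criterion
    "DT": DecisionTreeClassifier(criterion="entropy"),
    # Random forest using Gini impurity (scikit-learn's default)
    "RF": RandomForestClassifier(criterion="gini"),
    # Gaussian naive Bayes, no tuned hyperparameters reported
    "GNB": GaussianNB(),
    # Linear SVM with C = 1 and balanced class weights
    "L-SVM": SVC(kernel="linear", C=1, class_weight="balanced"),
}
```

Each model could then be fit on the session-level aggregated features with `models[name].fit(X, y)`; balanced class weights in the L-SVM reweight the loss inversely to class frequency, which matters here given the imbalance suggested by the positive- and negative-class accuracies.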