Author manuscript; available in PMC: 2023 Jan 1.
Published in final edited form as: Comput Speech Lang. 2021 Jul 19;71:101263. doi: 10.1016/j.csl.2021.101263

Table 4:

Disclosure task performance using session-level aggregated features. Features were selected at a significance threshold of p < 0.20.

| Model | F1 | Acc. | Prec. | FNR | Pos. Acc. | Neg. Acc. |
|-------|------|------|-------|------|-----------|-----------|
| DT | 0.452 | 0.641 | 0.451 | 0.259 | 0.471 | 0.725 |
| RF | 0.229 | 0.633 | 0.327 | 0.316 | 0.186 | **0.849** |
| GNB | 0.480* | 0.603 | 0.434 | 0.256 | 0.557 | 0.622 |
| L-SVM | **0.609***| **0.703***| **0.531***| **0.161***| **0.719** | 0.697 |
* indicates performance better than the randomized bootstrap σd. Bold values indicate the best performance in their respective columns. Pos. Acc. and Neg. Acc. denote the accuracy on the positive and negative classes, respectively. The best-performing L-SVM hyperparameters, shown in the table above, were C = 0.1 with balanced class weights. The decision tree classifier used entropy as its splitting criterion, while the random forest classifier used Gini impurity.
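For concreteness, the four classifiers with the hyperparameters reported above can be instantiated as follows. This is a minimal scikit-learn sketch under the assumption that standard scikit-learn implementations were used; any hyperparameter not stated in the footnote (e.g. tree depth, number of forest estimators) is left at the library default, which is an assumption rather than a detail from the paper.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC

# Classifier settings reported in the table footnote; all other
# hyperparameters are scikit-learn defaults (an assumption).
models = {
    # Decision tree with entropy as the splitting criterion
    "DT": DecisionTreeClassifier(criterion="entropy"),
    # Random forest with Gini impurity as the splitting criterion
    "RF": RandomForestClassifier(criterion="gini"),
    # Gaussian naive Bayes has no reported hyperparameters here
    "GNB": GaussianNB(),
    # Linear SVM: C = 0.1 with balanced class weights
    "L-SVM": LinearSVC(C=0.1, class_weight="balanced"),
}
```

Each model in the dictionary could then be fit to the session-level aggregated feature matrix with the usual `fit`/`predict` interface.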