Front Hum Neurosci. 2021 Sep 20;15:716670. doi: 10.3389/fnhum.2021.716670

Table 6.

Fusion model results compared to individual task model results, reported as AUC ± standard deviation.

| Feature Set | Modality | N | GNB | LR | RF |
| --- | --- | --- | --- | --- | --- |
| Pupil Calibration (novel task) | Eye | 126 | 0.71 ± 0.02 | 0.68 ± 0.02 | 0.63 ± 0.05 |
| Picture Description | Eye | 126 | 0.71 ± 0.02 | 0.73 ± 0.03 | 0.64 ± 0.04 |
| Picture Description | Lang | 162 | 0.78 ± 0.01 | 0.77 ± 0.02 | 0.74 ± 0.02 |
| Picture Description | Eye + Lang | 162 | 0.80 ± 0.02 | 0.79 ± 0.01 | 0.77 ± 0.02 |
| Reading | Eye | 126 | 0.70 ± 0.02 | 0.73 ± 0.02 | 0.72 ± 0.03 |
| Reading | Lang | 162 | 0.79 ± 0.01 | 0.78 ± 0.01 | 0.78 ± 0.03 |
| Reading | Eye + Lang | 162 | 0.78 ± 0.01 | 0.80 ± 0.01 | 0.82 ± 0.02 |
| Memory (novel task) | Lang | 162 | 0.78 ± 0.01 | 0.72 ± 0.02 | 0.72 ± 0.04 |
| Task Fusion | Eye + Lang | 162 | 0.82 ± 0.01 | 0.83 ± 0.01 | 0.83 ± 0.02 |

The highest classification performance for each task is shown in bold. Mod, modality; Eye, eye-movement features alone; Lang, language features alone; Eye + Lang, eye-movement and language aggregate model. Additional evaluation metrics, such as specificity and sensitivity, are reported in Supplementary Table 3. Values shown on a gray background represent unimodal model results for tasks where multimodal data were available; they were therefore excluded from the statistical analysis comparing task models.
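The exact modeling pipeline behind these numbers is not described in this excerpt, so the following is only a minimal sketch of how an Eye + Lang feature-level fusion model could be evaluated with the three classifiers listed (GNB, LR, RF) and summarized as AUC ± standard deviation across cross-validation folds. The feature matrices, label vector, sample size, and 10-fold stratified cross-validation scheme below are placeholders and assumptions, not the study's actual data or protocol.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic placeholder data: one row per participant (assumed n = 162,
# matching the largest N in Table 6; the study's real features differ).
rng = np.random.default_rng(0)
n = 162
X_eye = rng.normal(size=(n, 12))    # placeholder eye-movement features
X_lang = rng.normal(size=(n, 20))   # placeholder language features
y = rng.integers(0, 2, size=n)      # placeholder binary class labels

# Feature-level fusion: concatenate eye-movement and language features.
X_fused = np.hstack([X_eye, X_lang])

# The three classifier families named in the table header.
classifiers = {
    "GNB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Assumed evaluation scheme: 10-fold stratified CV, AUC per fold,
# summarized as mean ± standard deviation across folds.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_fused, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.2f} ± {scores.std():.2f}")
```

Feature concatenation is only one way to combine modalities; the Task Fusion row could instead reflect aggregation of task-level model outputs, which cannot be determined from this table alone.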