2021 Jul 16;9(7):e28754. doi: 10.2196/28754

Table 2.

Results compared with the existing models.

Model                    Accuracy (%)  Precision (%)  Recall (%)  F1 (%)
LIWC^a,b                 70            74             71          72
LDA^b,c                  75            75             72          74
Unigram^b                70            71             95          81
Bigram^b                 79            80             76          78
LIWC + LDA + unigram^b   78            84             79          81
LIWC + LDA + bigram^b    91            90             92          91
LSTM^d                   87.03         90.30          91.67       90.98
Bi-LSTM^e                86.46         88.08          95.00       91.41
Bi-LSTM + Att^f          88.59         90.41          94.96       92.63
EAN^g (our model)        91.30         91.91          96.15       93.98

^a LIWC: Linguistic Inquiry and Word Count.

^b Results as reported in the literature [4].

^c LDA: latent Dirichlet allocation.

^d LSTM: long short-term memory.

^e Bi-LSTM: bidirectional long short-term memory.

^f Att: attention mechanism.

^g EAN: emotion-based attention network.
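The F1 scores in Table 2 are the harmonic mean of precision and recall. As a sanity check (a minimal sketch, not part of the original paper's code), the reported F1 values for the neural models can be recomputed from their precision and recall columns:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# (precision %, recall %, reported F1 %) taken from Table 2
rows = {
    "LSTM":          (90.30, 91.67, 90.98),
    "Bi-LSTM":       (88.08, 95.00, 91.41),
    "Bi-LSTM + Att": (90.41, 94.96, 92.63),
    "EAN":           (91.91, 96.15, 93.98),
}

for name, (p, r, f1_reported) in rows.items():
    f1 = f1_score(p, r)
    print(f"{name}: computed F1 = {f1:.2f}, reported = {f1_reported:.2f}")
```

Each computed value matches the reported F1 to two decimal places, confirming the table is internally consistent.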