
Table 2. Model evaluation.

| Model | Category | Precision | Recall | F1 | Micro-F1 | Macro-F1 |
|---|---|---|---|---|---|---|
| BERT-BiLSTM-TextCNN | 0 | 0.9235 | 0.9085 | 0.9159 | 0.9052 | 0.9143 |
|  | 1 | 0.9146 | 0.9047 | 0.9096 |  |  |
|  | 2 | 0.9184 | 0.9130 | 0.9157 |  |  |
|  | 3 | 0.9296 | 0.9022 | 0.9157 |  |  |
| BERT-BiGRU-TextCNN | 0 | 0.9105 | 0.9029 | 0.9067 | 0.8793 | 0.8885 |
|  | 1 | 0.9062 | 0.8953 | 0.9007 |  |  |
|  | 2 | 0.8883 | 0.8784 | 0.8833 |  |  |
|  | 3 | 0.8542 | 0.8721 | 0.8631 |  |  |
| BERT-LSTM-TextCNN | 0 | 0.8724 | 0.8951 | 0.8836 | 0.8629 | 0.8785 |
|  | 1 | 0.8843 | 0.8765 | 0.8804 |  |  |
|  | 2 | 0.9023 | 0.8701 | 0.8859 |  |  |
|  | 3 | 0.8528 | 0.8742 | 0.8634 |  |  |
| BERT-TextCNN | 0 | 0.8749 | 0.8412 | 0.8577 | 0.8598 | 0.8757 |
|  | 1 | 0.8852 | 0.8685 | 0.8768 |  |  |
|  | 2 | 0.8537 | 0.8821 | 0.8677 |  |  |
|  | 3 | 0.9043 | 0.8957 | 0.9000 |  |  |
| Word2Vec-BiLSTM-TextCNN | 0 | 0.7103 | 0.7348 | 0.7223 | 0.6300 | 0.6723 |
|  | 1 | 0.6892 | 0.6438 | 0.6657 |  |  |
|  | 2 | 0.6719 | 0.7087 | 0.6898 |  |  |
|  | 3 | 0.6155 | 0.6043 | 0.6098 |  |  |
| Word2Vec-BiGRU-TextCNN | 0 | 0.6361 | 0.6207 | 0.6283 | 0.6075 | 0.6265 |
|  | 1 | 0.6345 | 0.6531 | 0.6437 |  |  |
|  | 2 | 0.6394 | 0.6112 | 0.6250 |  |  |
|  | 3 | 0.6145 | 0.6026 | 0.6085 |  |  |
| Word2Vec-LSTM-TextCNN | 0 | 0.6581 | 0.6323 | 0.6449 | 0.6189 | 0.6231 |
|  | 1 | 0.6361 | 0.6129 | 0.6243 |  |  |
|  | 2 | 0.6138 | 0.6199 | 0.6168 |  |  |
|  | 3 | 0.6037 | 0.6084 | 0.6060 |  |  |
| Word2Vec-TextCNN | 0 | 0.6326 | 0.6109 | 0.6216 | 0.6010 | 0.6145 |
|  | 1 | 0.6370 | 0.6211 | 0.6289 |  |  |
|  | 2 | 0.6024 | 0.6029 | 0.6026 |  |  |
|  | 3 | 0.6018 | 0.6075 | 0.6046 |  |  |

Precision, Recall, and F1 are reported per category (class labels 0–3); Micro-F1 and Macro-F1 are aggregates over all four categories and are reported once per model.
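To make the relationship between the per-class columns and the two aggregate columns concrete: macro-F1 is the unweighted mean of the four per-class F1 scores (e.g., for BERT-BiLSTM-TextCNN, (0.9159 + 0.9096 + 0.9157 + 0.9157) / 4 ≈ 0.9143), while micro-F1 pools true/false positives and negatives across all classes before computing F1. The sketch below, which is not from the paper and uses hypothetical `y_true`/`y_pred` labels, shows how both aggregates can be computed with scikit-learn.

```python
# Minimal sketch (not from the paper): how the per-class and aggregate
# scores in Table 2 relate. Labels below are hypothetical placeholders.
from sklearn.metrics import f1_score, precision_recall_fscore_support

y_true = [0, 0, 1, 1, 2, 2, 3, 3, 0, 1]   # hypothetical gold labels (4 classes)
y_pred = [0, 1, 1, 1, 2, 3, 3, 3, 0, 1]   # hypothetical model predictions

# Per-class precision / recall / F1 (the Precision, Recall, F1 columns)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2, 3], zero_division=0
)
for c in range(4):
    print(f"category {c}: P={prec[c]:.4f} R={rec[c]:.4f} F1={f1[c]:.4f}")

# Micro-F1 pools counts over all classes before computing F1;
# macro-F1 is the unweighted mean of the per-class F1 scores.
print("micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))
```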