Table 7.
Embedding | Classifier | Accuracy | Precision | Recall | AUC | F1-score |
---|---|---|---|---|---|---|
Word2Vec | CNN | 0.729 | 0.744 | 0.729 | 0.767 | 0.733 |
 | MLP | 0.644 | 0.643 | 0.644 | 0.711 | 0.644 |
 | Bi-LSTM | 0.737 | 0.740 | 0.737 | 0.677 | 0.738 |
 | Bi-LSTM-CNN | 0.728 | 0.729 | 0.728 | 0.692 | 0.728 |
BERT | CNN | 0.770 | 0.788 | 0.777 | 0.908 | 0.781 |
 | MLP | 0.719 | 0.714 | 0.719 | 0.874 | 0.712 |
 | Bi-LSTM | 0.777 | 0.792 | 0.780 | 0.888 | 0.774 |
 | Bi-LSTM-CNN | 0.698 | 0.696 | 0.698 | 0.861 | 0.690 |
 | Fine-tune | 0.760 | 0.761 | 0.759 | 0.868 | 0.760 |
 | IDPT | **0.842** | **0.843** | **0.842** | **0.948** | **0.841** |
BERT-wwm-ext | Fine-tune | 0.756 | 0.756 | 0.756 | 0.883 | 0.754 |
Mengzi | Fine-tune | 0.751 | 0.751 | 0.751 | 0.846 | 0.750 |
RoBERTa | Fine-tune | 0.767 | 0.767 | 0.767 | 0.878 | 0.764 |
The highest value for each metric is highlighted in bold.
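For reference, the sketch below shows how the five reported metrics could be computed with scikit-learn. The placeholder labels, the number of classes, and the macro averaging scheme are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Hypothetical model outputs for a 3-class task (placeholder data only).
y_true = np.array([0, 1, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])
# Per-class probability scores, one row per sample, rows summing to 1.
y_prob = np.array([[0.80, 0.10, 0.10],
                   [0.10, 0.70, 0.20],
                   [0.20, 0.50, 0.30],
                   [0.10, 0.80, 0.10],
                   [0.90, 0.05, 0.05],
                   [0.10, 0.20, 0.70]])

print("Accuracy :", accuracy_score(y_true, y_pred))
# Macro averaging is an assumption; the paper may use a different scheme.
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
# Multi-class AUC via one-vs-rest averaging (also an assumption).
print("AUC      :", roc_auc_score(y_true, y_prob, multi_class="ovr",
                                  average="macro"))
```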