Table 2. Performance of our model and baseline models using the full training data set.
| Model | Accuracy | Precision | Recall | Macro F1 |
| --- | --- | --- | --- | --- |
| BERTᵃ | 0.836 | 0.779 | 0.802 | 0.788 |
| XLNet | 0.844 | 0.790 | 0.811 | 0.795 |
| ERNIEᵇ | 0.836 | 0.786 | 0.795 | 0.783 |
| RoBERTaᶜ | 0.840 | 0.791 | 0.800 | 0.792 |
| Ensemble (Voting) | 0.846 | 0.800 | 0.812 | 0.802 |
| Our model | 0.846 | 0.803 | 0.817 | 0.808 |
ᵃBERT: Bidirectional Encoder Representations from Transformers.
ᵇERNIE: Enhanced Representation through Knowledge Integration.
ᶜRoBERTa: A Robustly Optimized BERT Pretraining Approach.