2020 Jul 1;8(7):e17832. doi: 10.2196/17832

Table 3.

The performance of the 6 models using the reduced training data set.

Model               Accuracy   Precision   Recall   Macro F1
BERT^a              0.831      0.781       0.776    0.771
XLNet               0.839      0.797       0.759    0.773
ERNIE^b             0.822      0.754       0.765    0.751
RoBERTa^c           0.832      0.795       0.770    0.776
Ensemble (Voting)   0.832      0.795       0.770    0.776
Our model           0.834      0.790       0.785    0.780

^a BERT: Bidirectional Encoder Representations from Transformers.

^b ERNIE: Enhanced Representation through Knowledge Integration.

^c RoBERTa: A Robustly Optimized BERT Pretraining Approach.
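The sketch below is not the authors' code; it is a minimal illustration of how the reported metrics (accuracy and macro-averaged precision, recall, and F1) are typically computed with scikit-learn, and how a hard-voting ensemble combines per-model predictions. The gold labels and the per-model predictions are hypothetical placeholders.

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold labels and predictions from three fine-tuned models.
y_true = np.array([0, 1, 2, 1, 0, 2, 1, 0])
preds = {
    "bert":    np.array([0, 1, 2, 1, 0, 1, 1, 0]),
    "xlnet":   np.array([0, 1, 2, 2, 0, 2, 1, 0]),
    "roberta": np.array([0, 1, 1, 1, 0, 2, 1, 2]),
}

def report(y_true, y_pred, name):
    # Accuracy plus macro-averaged precision/recall/F1, matching the table's columns.
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    print(f"{name:10s} acc={acc:.3f} precision={p:.3f} recall={r:.3f} macroF1={f1:.3f}")

for name, y_pred in preds.items():
    report(y_true, y_pred, name)

# Hard (majority) voting: each example gets the label most models predicted.
stacked = np.vstack(list(preds.values()))                      # (n_models, n_examples)
voted = np.array([np.bincount(col).argmax() for col in stacked.T])
report(y_true, voted, "ensemble")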