2020 Jul 29;8(7):e17958. doi: 10.2196/17958

Table 6.

Performance of deep-learning methods with different language representation models on levels 1, 2, and 3.

Model              Macro-F1   Macro-P^a   Macro-R^b
BERT^c [16]        0.370      0.455       0.381
BERT_IDP^d [16]    0.406      0.543^e     0.354
RoBERTa^f          0.396      0.503       0.360
RoBERTa_IDP        0.424      0.528       0.386
XLNET^g            0.387      0.457       0.336
XLNET_IDP          0.398      0.521       0.364

^a P: precision.

^b R: recall.

^c BERT: bidirectional encoder representations from transformers.

^d _IDP: The model is further trained on the in-domain unlabeled corpus.

^e Highest F1 values are indicated in italics.

^f RoBERTa: robustly optimized bidirectional encoder representations from transformers pretraining approach.

^g XLNET: generalized autoregressive pretraining for language understanding.
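
Footnotes a and b define the reported metrics as macro-averaged precision and recall, with Macro-F1 computed per class and averaged in the same way. Below is a minimal sketch of how such macro scores can be computed; the use of scikit-learn and the toy label arrays are illustrative assumptions, not the authors' evaluation code.

    # Macro-averaged precision, recall, and F1: compute each metric per class,
    # then take the unweighted mean over classes (illustrative sketch only).
    from sklearn.metrics import precision_recall_fscore_support

    # Hypothetical gold labels and model predictions for a small multi-class task.
    y_true = [0, 1, 2, 2, 1, 0, 2]
    y_pred = [0, 2, 2, 1, 1, 0, 2]

    macro_p, macro_r, macro_f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    print(f"Macro-P={macro_p:.3f}  Macro-R={macro_r:.3f}  Macro-F1={macro_f1:.3f}")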
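
Footnote d describes the _IDP variants: the pretrained language model is further trained on an unlabeled in-domain corpus before being fine-tuned on the labeled classification task. The sketch below illustrates this kind of continued masked-language-model pretraining with the Hugging Face Transformers API; the base checkpoint name, corpus path, and hyperparameters are placeholder assumptions, not the authors' configuration.

    # Continued (in-domain) pretraining sketch: further train a BERT masked
    # language model on an unlabeled domain corpus, then save it for fine-tuning.
    # Checkpoint name, corpus path, and hyperparameters are assumptions.
    from transformers import (
        BertForMaskedLM,
        BertTokenizerFast,
        DataCollatorForLanguageModeling,
        LineByLineTextDataset,
        Trainer,
        TrainingArguments,
    )

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")

    # One unlabeled in-domain sentence per line; no task labels are needed here.
    dataset = LineByLineTextDataset(
        tokenizer=tokenizer, file_path="in_domain_corpus.txt", block_size=128
    )
    # Randomly mask 15% of tokens so the model adapts to domain-specific language.
    collator = DataCollatorForLanguageModeling(
        tokenizer=tokenizer, mlm=True, mlm_probability=0.15
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="bert_idp",
            num_train_epochs=1,
            per_device_train_batch_size=16,
        ),
        data_collator=collator,
        train_dataset=dataset,
    )
    trainer.train()
    model.save_pretrained("bert_idp")  # later fine-tuned on the labeled task

The RoBERTa variant is analogous (swap in RobertaForMaskedLM and its tokenizer), while XLNet is pretrained with a permutation language modeling objective rather than masked language modeling.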