2021 Jul 22;34(23):20449–20461. doi: 10.1007/s00521-021-06276-0

Table 8.

Metrics obtained when testing the models on collection 2 ('text') for the label fake (comparison between the architectures 'bert-base-multilingual-cased_DRE', 'bert-base-cased_DRE', 'bert-base-uncased_DRE', and 'bert-base-cased-finetuned-mrpc_DRE')

| Metric              | bert-base-multilingual-cased_DRE | bert-base-cased_DRE | bert-base-uncased_DRE | bert-base-cased-finetuned-mrpc_DRE |
|---------------------|----------------------------------|---------------------|-----------------------|------------------------------------|
| True positive (TP)  | 2246                             | 2271                | 2260                  | 2213                               |
| True negative (TN)  | 2112                             | 2087                | 2110                  | 2107                               |
| False positive (FP) | 31                               | 56                  | 33                    | 36                                 |
| False negative (FN) | 31                               | 6                   | 17                    | 64                                 |
| Precision           | 0.9864                           | 0.9759              | 0.9856                | 0.9840                             |
| Recall              | 0.9864                           | 0.9974              | 0.9925                | 0.9719                             |
| f1-score            | 0.9864                           | 0.9865              | 0.9890                | 0.9779                             |
| Accuracy            | 97.31%                           | 97.34%              | 97.84%                | 95.68%                             |
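The precision, recall, and f1-score rows follow directly from the confusion-matrix counts above. Below is a minimal Python sketch (not from the paper) that recomputes them using the standard per-label definitions; the table's accuracy row appears to be computed over the full test set rather than from these per-label counts alone, so the accuracy printed by the sketch can differ from the reported percentages.

```python
# Sanity check: recompute the Table 8 metrics from the reported
# confusion-matrix counts. Dictionary keys mirror the compared
# checkpoints; the "_DRE" suffix is the paper's variant naming.

counts = {
    "bert-base-multilingual-cased_DRE":   dict(tp=2246, tn=2112, fp=31, fn=31),
    "bert-base-cased_DRE":                dict(tp=2271, tn=2087, fp=56, fn=6),
    "bert-base-uncased_DRE":              dict(tp=2260, tn=2110, fp=33, fn=17),
    "bert-base-cased-finetuned-mrpc_DRE": dict(tp=2213, tn=2107, fp=36, fn=64),
}

for name, c in counts.items():
    precision = c["tp"] / (c["tp"] + c["fp"])
    recall = c["tp"] / (c["tp"] + c["fn"])
    f1 = 2 * precision * recall / (precision + recall)
    # Standard accuracy over these per-label counts; the table's accuracy
    # row was presumably computed over both labels of the test set, so it
    # need not match this figure exactly.
    accuracy = (c["tp"] + c["tn"]) / sum(c.values())
    print(f"{name}: P={precision:.4f} R={recall:.4f} "
          f"F1={f1:.4f} Acc={accuracy:.2%}")
```

Running the sketch reproduces the precision, recall, and f1-score values to four decimal places for all four architectures.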