Information Processing & Management. 2023 Mar;60(2):103206. doi: 10.1016/j.ipm.2022.103206

Table 5.

Comparison of baseline (base) against the best-performing user setup (+d, +t, +d/t) for each model and dataset. We abbreviate DistilBERT as DBERT in this table. For simplicity, we take the same user setup for both labels (fake/true). Improvements over the base models are marked with an asterisk (*).
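Here P, R, and F1 denote the standard per-class precision, recall, and F1 score:

$$
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2PR}{P + R}.
$$

As a quick consistency check (ours, not the paper's), the CNN base row for the fake class on PolitiFact satisfies the F1 identity: $F_1 = \frac{2 \times .7857 \times .3333}{.7857 + .3333} \approx .4681$, matching the table.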

(a) PolitiFact

                      fake                       true
              P       R       F1        P       R       F1
CNN    base   .7857   .3333   .4681     .5000   .8800   .6377
       +d     .8235*  .4242*  .5600*    .5366*  .8800   .6667*
HAN    base   .8214   .6970   .7541     .6667   .8000   .7273
       +d/t   .7857   .6667   .7213     .6333   .7600   .6909
DBERT  base   .8148   .6667   .7333     .6452   .8000   .7143
       +d     .8750*  .6364   .7368*    .6471*  .8800*  .7457*

(b) GossipCop

                      fake                       true
              P       R       F1        P       R       F1
CNN    base   .7951   .5542   .6531     .8722   .9552   .9118
       +d     .7882   .5591*  .6542*    .8732*  .9529   .9113
HAN    base   .6684   .6502   .6592     .8921   .8988   .8950
       +d/t   .6569   .6650*  .6610*    .8945*  .8910   .8928
DBERT  base   .7845   .5468   .6444     .8701   .9529   .9096
       +d/t   .7724   .5936*  .6731*    .8811*  .9451   .9120*

(c) ReCOVery

                      fake                       true
              P       R       F1        P       R       F1
CNN    base   .7872   .5606   .6549     .8092   .9248   .8632
       +t     .8750*  .6364*  .7368*    .8411*  .9549*  .8944*
HAN    base   .7500   .8182   .7826     .9055   .8647   .8846
       +d/t   .7714*  .8182   .7941*    .9070*  .8797*  .8931*
DBERT  base   .7463   .7576   .7519     .8788   .8722   .8755
       +d/t   .8421*  .7273   .7805*    .8732   .9323*  .9018*
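As an illustrative sketch only (not the authors' code), per-class precision, recall, and F1 such as those reported in Table 5 can be computed with scikit-learn; the label encoding (1 = fake, 0 = true) and the toy predictions below are assumptions for demonstration.

    # Illustrative only: per-class P/R/F1 as in Table 5.
    # The label encoding (1 = fake, 0 = true) and the toy
    # vectors below are assumptions, not the authors' data.
    from sklearn.metrics import precision_recall_fscore_support

    y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # gold labels: 1 = fake, 0 = true
    y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # hypothetical model predictions

    # average=None yields one (P, R, F1) triple per class,
    # matching the separate fake/true columns of the table.
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=[1, 0], average=None
    )
    for name, p, r, f in zip(["fake", "true"], prec, rec, f1):
        print(f"{name}: P={p:.4f}  R={r:.4f}  F1={f:.4f}")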