
Table 5. Comparison with other BERT variants

| Model | Parameters | NCBI test | NCBI refined test | ADR test | ADR refined test | ShARe/CLEF test | ShARe/CLEF refined test |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BlueBERT (fine-tuned) [24] | 110 M | 88.13 | 69.73 | 92.87 | 79.36 | 90.66 | 74.92 |
| PubMedBERT (fine-tuned) [12] | 110 M | 90.28 | 72.79 | 93.01 | *81.96* | 92.45 | 78.26 |
| BioDistilBERT (fine-tuned) [25] | 80 M | 91.15 | 72.13 | 92.79 | 80.45 | 92.67 | **83.29** |
| BioTinyBERT (fine-tuned) [25] | 18 M | 87.48 | 67.34 | 89.68 | 75.31 | 89.48 | 78.64 |
| BioMobileBERT (fine-tuned) [25] | 30 M | 89.86 | 68.21 | 90.14 | 75.93 | 90.28 | 76.83 |
| SAPBERT (w/o fine-tuning) [26] | 110 M | 90.02 | 70.41 | 92.37 | 79.52 | 90.89 | 77.47 |
| SAPBERT (fine-tuned) [26] | 110 M | 92.34 | 73.25 | 93.42 | 81.64 | 91.37 | 78.59 |
| BioSyn [27] | 110 M | 90.58 | 72.48 | **95.02** | 81.19 | 92.16 | 77.34 |
| BioSyn (init. w/ SAPBERT) [27] | 110 M | *92.51* | *74.04* | 94.65 | **82.45** | *93.45* | 79.85 |
| Bio-LinkBERT (ours) | 108 M | **93.57** | **74.38** | *94.72* | 79.89 | **94.23** | *80.68* |

Bold indicates the best performance in each column; italics indicate the second-best performance.