2021 Sep 17;23(9):e30161. doi: 10.2196/30161

Table 5.

Performance of the individual masked language models and their combination using reciprocal rank fusion (RRF).

| Model     | NDCG@20^a | P@20^b | Bpref^c | MAP^d  | # rel^e |
|-----------|-----------|--------|---------|--------|---------|
| BERT^f    | 0.6209    | 0.6430 | 0.5588  | 0.2897 | 6879    |
| RoBERTa^g | 0.6261    | 0.6440 | 0.5530  | 0.2946 | 6945    |
| XLNet     | 0.6436    | 0.6570 | 0.5644  | 0.3064 | 6926    |
| MLM + RRF | 0.7716    | 0.7880 | 0.5680  | 0.3468 | 6963    |

^a NDCG@20: normalized discounted cumulative gain at 20 documents.

^b P@20: precision at 20 documents.

^c Bpref: binary preference.

^d MAP: mean average precision.

^e # rel: total number of relevant documents retrieved by the model for the 50 queries.

^f BERT: Bidirectional Encoder Representations from Transformers.

^g RoBERTa: robustly optimized BERT approach.
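The fused run in the last table row combines the three models' rankings with reciprocal rank fusion: each document's fused score is the sum, over the individual rankers, of 1/(k + rank). A minimal sketch of this scheme follows; the smoothing constant k=60 and the toy document IDs are illustrative assumptions, not values reported in the article.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists: each document scores sum over lists of 1 / (k + rank).

    `rankings` is a list of ranked document-ID lists (best first);
    k=60 is the commonly used smoothing constant, assumed here.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    # Return documents ordered by descending fused score.
    return sorted(scores, key=scores.get, reverse=True)

# Toy rankings standing in for three rankers (e.g. BERT, RoBERTa, XLNet).
fused = reciprocal_rank_fusion([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d1", "d4", "d2"],
])
# d1 leads: it is ranked first by two of the three lists.
```

Because RRF uses only ranks, not raw scores, it needs no score normalization across heterogeneous models, which is why it is a common choice for fusing runs from different language models.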