Table 2. Benchmark results for a variety of retrieval models on ANTIQUE.
| Method | MAP | MRR | P@1 | P@3 | P@10 | nDCG@1 | nDCG@3 | nDCG@10 |
|---|---|---|---|---|---|---|---|---|
| BM25 | 0.1977 | 0.4885 | 0.3333 | 0.2929 | 0.2485 | 0.4411 | 0.4237 | 0.4334 |
| DRMM-TKS [6] | 0.2315 | 0.5774 | 0.4337 | 0.3827 | 0.3005 | 0.4949 | 0.4626 | 0.4531 |
| aNMM [15] | 0.2563 | 0.6250 | 0.4847 | 0.4388 | 0.3306 | 0.5289 | 0.5127 | 0.4904 |
| BERT [4] | 0.3771 | 0.7968 | 0.7092 | 0.6071 | 0.4791 | 0.7126 | 0.6570 | 0.6423 |
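To make the metric columns concrete, below is a minimal sketch of how per-query MRR, P@k, and nDCG@k are typically computed from graded relevance judgments. The gain function (2^rel − 1), the log2 discount, and the relevance threshold used to binarize graded labels for P@k are common conventions, not details taken from the table above; in practice these scores are usually produced with a standard tool such as trec_eval.

```python
import math
from typing import Dict, List

def precision_at_k(ranked_ids: List[str], qrels: Dict[str, int], k: int,
                   threshold: int = 3) -> float:
    """P@k: fraction of the top-k documents judged relevant.

    `threshold` binarizes graded labels (an assumed convention here).
    """
    top_k = ranked_ids[:k]
    hits = sum(1 for doc_id in top_k if qrels.get(doc_id, 0) >= threshold)
    return hits / k

def reciprocal_rank(ranked_ids: List[str], qrels: Dict[str, int],
                    threshold: int = 3) -> float:
    """Per-query RR: 1 / rank of the first relevant document (0 if none)."""
    for i, doc_id in enumerate(ranked_ids):
        if qrels.get(doc_id, 0) >= threshold:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(ranked_ids: List[str], qrels: Dict[str, int], k: int) -> float:
    """nDCG@k with (2^rel - 1) gains and log2(rank + 1) discounts."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_ids[:k]]
    dcg = sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical single-query example: doc ids and graded labels are made up.
ranking = ["d3", "d1", "d7", "d2"]
judgments = {"d1": 4, "d2": 3, "d7": 1}
print(precision_at_k(ranking, judgments, k=3))   # 0.333...
print(reciprocal_rank(ranking, judgments))        # 0.5
print(ndcg_at_k(ranking, judgments, k=3))         # nDCG@3 for this query
```

Corpus-level numbers such as those in Table 2 are obtained by averaging these per-query values over all evaluation queries (MRR is the mean of the per-query reciprocal ranks; MAP averages per-query average precision, omitted here for brevity).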