BMC Bioinformatics. 2025 Feb 17;26:54. doi: 10.1186/s12859-025-06053-z

Table 3. Results of the proposed WPLMF framework and the baselines on our dataset (the best performance is highlighted in bold)

| Method | AUPR | F1 | MMR | Precision@15 | Recall@15 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| MCS-MKL [16] | 0.6428 (1.3e−3) | 0.6082 (1.2e−3) | 10.1247 (1.4e−2) | 0.6567 (7.6e−3) | 0.7813 (1.9e−2) | 0.6145 (1.2e−2) | 0.6082 (1.1e−2) |
| FGRMF [6] | 0.5117 (4.5e−3) | 0.4993 (2.6e−3) | 9.7799 (7.6e−2) | 0.4932 (5.9e−3) | 0.7842 (1.3e−2) | 0.4953 (1.4e−2) | 0.5043 (1.5e−2) |
| idse-HE [10] | 0.5888 (1.1e−2) | 0.5564 (1.1e−2) | 8.1281 (7.4e−1) | 0.5676 (8.5e−3) | 0.9136 (1.3e−2) | 0.4738 (3.5e−2) | 0.6804 (3.9e−2) |
| Galeano’s [4] | 0.6294 (2.8e−3) | 0.5924 (1.9e−3) | 10.0803 (3.4e−2) | 0.6081 (5.2e−3) | 0.7748 (1.1e−2) | 0.6049 (1.9e−2) | 0.5804 (1.2e−2) |
| Logit MF [7] | 0.6124 (3.3e−3) | 0.5766 (3.5e−3) | 10.0619 (3.0e−2) | 0.6370 (6.6e−3) | 0.7263 (5.0e−3) | 0.5819 (1.3e−2) | 0.5714 (1.6e−3) |
| WPLMF (ours) | 0.6553 (3.0e−3) | 0.6095 (1.8e−3) | 10.0918 (1.9e−1) | 0.6440 (1.18e−1) | 0.7798 (5.5e−3) | 0.6172 (1.4e−2) | 0.6024 (1.1e−2) |
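As a reading aid for the Precision@15 and Recall@15 columns, the sketch below shows how precision-at-k and recall-at-k are conventionally computed from a ranked prediction list and a set of ground-truth positives. This is a minimal illustration, not the paper's implementation; the function name and toy data are invented for the example, and the paper's exact evaluation protocol (how ties and per-drug averaging are handled) may differ.

```python
def precision_recall_at_k(ranked_predictions, relevant, k=15):
    """Precision@k and Recall@k for one ranked prediction list.

    ranked_predictions: items ordered from most to least confident.
    relevant: set of ground-truth positive items.
    """
    top_k = ranked_predictions[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy usage: 3 of the top-5 predictions are true positives.
p, r = precision_recall_at_k(["a", "b", "c", "d", "e"],
                             {"a", "c", "e", "f"}, k=5)
# p = 3/5 = 0.6, r = 3/4 = 0.75
```

In a table like the one above, these per-drug scores would typically be averaged over all drugs in the test set, with the parenthesized values giving the standard deviation across cross-validation folds or repeated runs.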