Table 3.

Performance of EINMF compared with other algorithms (embedding size = 64).

Dataset          Model          HR@5      HR@10     HR@20     NDCG@5    NDCG@10   NDCG@20
MovieLens-100k   Pop [2]        0.2031    0.3712    0.4761    0.0718    0.0786    0.0863
                 Item-KNN [5]   0.3160    0.4730    0.5758    0.0976    0.1067    0.1185
                 BPR-MF [10]    0.2874    0.4380    0.5822    0.0971    0.1088    0.1277
                 NCF [13]       0.6002    0.7540    0.8367    0.2662    0.2641    0.2890*
                 DMF [20]       0.6458*   0.7667*   0.8738*   0.2672*   0.2692*   0.2835
                 EINMF          0.6978    0.8038    0.8887    0.3163    0.3092    0.3179
                 MI (%)         8.05      4.84      1.71      18.38     14.86     10.00
MovieLens-1m     Pop [2]        0.2015    0.2983    0.4228    0.0718    0.0786    0.0863
                 Item-KNN [5]   0.2237    0.3371    0.4874    0.0677    0.0714    0.0817
                 BPR-MF [10]    0.3340    0.4804    0.6267    0.1135    0.1206    0.1409
                 NCF [13]       0.6030*   0.7278*   0.8268*   0.2608*   0.2497*   0.2525*
                 DMF [20]       0.5892    0.7197    0.8257    0.2401    0.2354    0.2413
                 EINMF          0.6540    0.7781    0.8618    0.2880    0.2728    0.2825
                 MI (%)         8.46      6.91      4.23      7.46      9.25      11.88

“MI (%)” indicates the smallest relative improvement of our EINMF over the corresponding best baseline for each metric. The best value of each metric achieved by a baseline in the top-N task is marked with an asterisk (*).
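
For clarity, the sketch below shows one common way these quantities are computed: HR@N and NDCG@N under a leave-one-out protocol with a single held-out item per user, and the relative improvement reported in the MI row. This is an illustrative assumption, not the paper's implementation; the function names are hypothetical, and the example figures are taken from the HR@5 column for MovieLens-100k in Table 3.

```python
# Illustrative sketch (not the authors' code): HR@N, NDCG@N with one
# held-out relevant item per user, and the relative-improvement figure
# that the MI (%) row appears to report.
import math


def hit_ratio_at_n(ranked_items, held_out_item, n):
    """HR@N: 1 if the held-out item appears in the top-N ranked list, else 0."""
    return 1.0 if held_out_item in ranked_items[:n] else 0.0


def ndcg_at_n(ranked_items, held_out_item, n):
    """NDCG@N with a single relevant item: 1/log2(rank + 2) if ranked in top N, else 0."""
    try:
        rank = ranked_items[:n].index(held_out_item)  # 0-based position
    except ValueError:
        return 0.0
    return 1.0 / math.log2(rank + 2)


def minimum_improvement_percent(einmf_score, baseline_scores):
    """Relative improvement of EINMF over the strongest baseline, in percent."""
    best_baseline = max(baseline_scores)
    return 100.0 * (einmf_score - best_baseline) / best_baseline


# Example: HR@5 on MovieLens-100k (Pop, Item-KNN, BPR-MF, NCF, DMF).
baselines_hr5 = [0.2031, 0.3160, 0.2874, 0.6002, 0.6458]
print(minimum_improvement_percent(0.6978, baselines_hr5))  # ~8.05, matching the MI row
```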