Table 2.
Hyperparameter ranges used for the optimization of all the reported algorithms, both baselines and CQFS.
| Algorithm | Hyperparameter | Range | Distribution |
|---|---|---|---|
| ItemKNN cosine | topK | 5–1000 | uniform |
| | shrink | 0–1000 | uniform |
| | normalize | True, False | categorical |
| | weighting | none, TF-IDF, BM25 | categorical |
| PureSVD | num factors | 1–350 | uniform |
| RP³β | topK | 5–1000 | uniform |
| | alpha | 0–2 | uniform |
| | beta | 0–2 | uniform |
| | normalize | True, False | categorical |
| CFeCBF | epochs | 1–300 | early-stopping |
| | learning rate | – | log-uniform |
| | sgd mode | Adam | categorical |
| | reg | – | log-uniform |
| | reg | – | log-uniform |
| | dropout | 30–80% | uniform |
| | initial weight | 1.0, random | categorical |
| | positive only | True, False | categorical |
| | add zero quota | 50–100% | uniform |
| CQFS | – | 1 | categorical |
| | – | – | log-uniform |
| | s | – | log-uniform |
| | p | – | categorical |
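To make the table concrete, the ranges for one algorithm can be encoded as a sampling space for random search. The sketch below is purely illustrative: the dictionary layout, helper names, and use of Python's `random` module are assumptions, not the authors' actual tuning code; only the ranges themselves come from Table 2.

```python
import random

# Hypothetical encoding of the ItemKNN cosine search space from Table 2.
# Each entry maps a hyperparameter name to a sampling function.
ITEMKNN_SPACE = {
    "topK": lambda rng: rng.randint(5, 1000),            # uniform integer in 5-1000
    "shrink": lambda rng: rng.randint(0, 1000),          # uniform integer in 0-1000
    "normalize": lambda rng: rng.choice([True, False]),  # categorical
    "weighting": lambda rng: rng.choice(["none", "TF-IDF", "BM25"]),  # categorical
}

def sample_config(space, seed=None):
    """Draw one hyperparameter configuration from a search space."""
    rng = random.Random(seed)
    return {name: draw(rng) for name, draw in space.items()}

config = sample_config(ITEMKNN_SPACE, seed=42)
```

The log-uniform distributions used for the CFeCBF and CQFS hyperparameters would instead sample an exponent uniformly and exponentiate it, which is the usual way to cover ranges spanning several orders of magnitude.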
The normalize hyperparameter in KNNs controls whether the denominator of the cosine similarity is used; if False, the similarity reduces to the dot product alone. The normalize hyperparameter in RP³β instead refers to normalizing the rows of the similarity matrix after neighbor selection, so that each row still represents a probability distribution. The add zero quota hyperparameter of CFeCBF is the percentage of zero-valued item similarities added as negative samples to improve its ranking performance.
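The effect of the KNN normalize hyperparameter can be shown in a few lines. This is a minimal NumPy sketch, assuming a dense user-by-item rating matrix and a `shrink` term added to the cosine denominator as is common in KNN recommenders; function and variable names are illustrative.

```python
import numpy as np

def item_similarity(R, shrink=0.0, normalize=True):
    """Item-item cosine similarity on a user x item rating matrix R.

    With normalize=True the dot product is divided by the product of the
    item norms (plus the shrink term); with normalize=False the similarity
    is the dot product alone, as described for the normalize hyperparameter.
    """
    dot = R.T @ R                      # pairwise item dot products
    if not normalize:
        return dot
    norms = np.linalg.norm(R, axis=0)  # per-item Euclidean norms
    denom = np.outer(norms, norms) + shrink + 1e-9  # eps avoids division by zero
    return dot / denom

# Toy example: 2 users, 3 items.
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
S_cos = item_similarity(R, normalize=True)   # cosine similarity
S_dot = item_similarity(R, normalize=False)  # plain dot product
```

With `normalize=True` and `shrink=0` the diagonal entries are (numerically) 1, while with `normalize=False` they equal each item's squared norm.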