Entropy. 2021 Jul 28;23(8):970. doi: 10.3390/e23080970

Table 2.

Hyperparameter ranges used for the optimization of all the reported algorithms, both baselines and CQFS.

Algorithm        Hyperparameter        Range                  Distribution
ItemKNN cosine   topK                  5–1000                 uniform
                 shrink                0–1000                 uniform
                 normalize (a)         True, False            categorical
                 weighting             none, TF-IDF, BM25     categorical
PureSVD          num factors           1–350                  uniform
RP3β             topK                  5–1000                 uniform
                 alpha                 0–2                    uniform
                 beta                  0–2                    uniform
                 normalize (b)         True, False            categorical
CFeCBF           epochs                1–300                  early-stopping
                 learning rate         10⁻⁵–10⁻²              log-uniform
                 sgd mode              Adam                   categorical
                 l1 reg                10⁻²–10⁺³              log-uniform
                 l2 reg                10⁻¹–10⁺³              log-uniform
                 dropout               30–80%                 uniform
                 initial weight        1.0, random            categorical
                 positive only         True, False            categorical
                 add zero quota (c)    50–100%                uniform
CQFS             α                     1                      categorical
                 β                     10⁰–10⁴                log-uniform
                 s                     10⁰–10⁺⁴               log-uniform
                 p                     40%, 60%, 80%, 95%     categorical

(a) The normalize hyperparameter in KNNs controls whether to use the denominator of the cosine similarity; if False, the similarity reduces to the dot product alone.
(b) The normalize hyperparameter in RP3β refers to applying l1 normalization to the rows of the similarity matrix after the selection of the neighbors, to ensure each row still represents a probability distribution.
(c) Percentage of item similarities of value zero added as negative samples to improve the model's ranking performance.
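For reference, the ranges in Table 2 map directly onto the search-space primitives of a Bayesian optimization library. The sketch below is a minimal illustration using scikit-optimize; the dictionary layout and parameter names are assumptions made for this sketch, not the tuning code used in the paper, and the PureSVD and CFeCBF spaces would follow the same pattern.

```python
# Illustrative encoding of a few Table 2 rows as scikit-optimize search spaces.
# The parameter names and dictionary layout are assumptions, not the paper's code.
from skopt.space import Integer, Real, Categorical

itemknn_space = {
    "topK": Integer(5, 1000),                                # uniform
    "shrink": Integer(0, 1000),                              # uniform
    "normalize": Categorical([True, False]),                 # cosine denominator on/off, footnote (a)
    "weighting": Categorical(["none", "TF-IDF", "BM25"]),
}

rp3beta_space = {
    "topK": Integer(5, 1000),
    "alpha": Real(0.0, 2.0),                                 # uniform
    "beta": Real(0.0, 2.0),                                  # uniform
    "normalize": Categorical([True, False]),                 # row l1 normalization, footnote (b)
}

cqfs_space = {
    "alpha": Categorical([1]),                               # kept fixed at 1
    "beta": Real(1e0, 1e4, prior="log-uniform"),
    "s": Real(1e0, 1e4, prior="log-uniform"),
    "p": Categorical([0.40, 0.60, 0.80, 0.95]),              # fraction of features to select
}
```

A list of these dimensions (e.g., list(itemknn_space.values())) can then be passed to an optimizer such as skopt.gp_minimize.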
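To make footnotes (a) and (b) more concrete, the following sketch shows the two behaviors on a dense NumPy rating matrix; the function names and the dense-matrix simplification are assumptions made here for clarity, not code from the paper.

```python
# Illustrative NumPy sketch of footnotes (a) and (b); names and the dense-matrix
# simplification are assumptions made for this example.
import numpy as np

def item_similarity(URM, shrink=0.0, normalize=True):
    """Item-item similarity from a (users x items) rating matrix URM.

    With normalize=True this is shrunk cosine similarity; with normalize=False
    the denominator is dropped and the similarity is the plain dot product
    (footnote a).
    """
    dot = URM.T @ URM                               # item-item dot products
    if not normalize:
        return dot
    norms = np.sqrt(np.diag(dot))
    denom = np.outer(norms, norms) + shrink + 1e-6  # shrink term plus epsilon to avoid division by zero
    return dot / denom

def keep_topk_and_l1_normalize(S, topK=100):
    """Keep the topK neighbors per row, then l1-normalize each row so it sums
    to 1, i.e. still represents a probability distribution (footnote b)."""
    S = S.copy()
    for i in range(S.shape[0]):
        row = S[i]
        if topK < row.size:
            cutoff = np.partition(row, -topK)[-topK]
            row[row < cutoff] = 0.0                 # drop everything outside the topK neighbors
        row_sum = row.sum()
        if row_sum > 0:
            S[i] = row / row_sum                    # l1 normalization of the row
    return S
```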