2022 May 12;77:103745. doi: 10.1016/j.bspc.2022.103745

Table 2.

Parameter grids used by grid-search algorithm for hyperparameter optimization.

Model: MLPR
Parameter grid: hidden layer size × activation function × learning rate, where
  hidden layer size = {(50), (100), (150), (200), (250), (300), (50, 10), (100, 20), (150, 30), (200, 40)}
  activation function = {ReLU, logistic}
  learning rate = {0.01, 0.001, 0.0001, 0.00001}
No. of combinations: 80

Model: SVR (kernel = RBF)
Parameter grid: C × gamma × epsilon, where
  C = {5, 10, 15, 20}
  gamma = {0.1, 0.01, 0.001, 0.0001}
  epsilon = {0.001, 0.01, 0.1, 0.5, 0.8}
No. of combinations: 80

Model: SVR (kernel = polynomial)
Parameter grid: C × gamma × epsilon × degree × coefficient, where
  C = {1, 5, 10, 15}
  gamma = {0.1, 0.01, 0.001}
  epsilon = {0.01, 0.1, 0.5, 0.8}
  degree = {2, 3}
  coefficient = {1, 2, 3, 4}
No. of combinations: 384

Model: SVR (kernel = linear)
Parameter grid: C × epsilon, where
  C = {1, 5, 10, 15}
  epsilon = {0.01, 0.1, 0.5, 0.7}
No. of combinations: 16

Abbreviations: MLPR = Multi-Layer Perceptron Regression; RBF = Radial Basis Function; ReLU = Rectified Linear Unit; SVR = Support Vector Regression.
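The grids above can be written as scikit-learn-style parameter dictionaries, and the "No. of combinations" column is simply the size of each grid's Cartesian product. The stdlib-only sketch below reproduces the table's grids and checks the reported counts; the dictionary keys (e.g. `hidden_layer_sizes`, `learning_rate_init`, `coef0`) are an assumption following scikit-learn's `MLPRegressor`/`SVR` naming, since the table does not specify exact parameter keys.

```python
# Sketch, not the authors' code: Table 2 grids as parameter dictionaries.
# Key names assume scikit-learn conventions (an assumption, not from the paper).
from itertools import product

mlpr_grid = {
    "hidden_layer_sizes": [(50,), (100,), (150,), (200,), (250,), (300,),
                           (50, 10), (100, 20), (150, 30), (200, 40)],
    "activation": ["relu", "logistic"],
    "learning_rate_init": [0.01, 0.001, 0.0001, 0.00001],
}
svr_rbf_grid = {
    "C": [5, 10, 15, 20],
    "gamma": [0.1, 0.01, 0.001, 0.0001],
    "epsilon": [0.001, 0.01, 0.1, 0.5, 0.8],
}
svr_poly_grid = {
    "C": [1, 5, 10, 15],
    "gamma": [0.1, 0.01, 0.001],
    "epsilon": [0.01, 0.1, 0.5, 0.8],
    "degree": [2, 3],
    "coef0": [1, 2, 3, 4],  # "coefficient" column of the table
}
svr_linear_grid = {
    "C": [1, 5, 10, 15],
    "epsilon": [0.01, 0.1, 0.5, 0.7],
}

def n_combinations(grid):
    """Count the candidates a grid search enumerates: the Cartesian product
    of all per-parameter value lists."""
    return sum(1 for _ in product(*grid.values()))

for name, grid in [("MLPR", mlpr_grid), ("SVR-RBF", svr_rbf_grid),
                   ("SVR-poly", svr_poly_grid), ("SVR-linear", svr_linear_grid)]:
    print(name, n_combinations(grid))
```

Each dictionary can be passed directly as the `param_grid` argument of scikit-learn's `GridSearchCV`, which enumerates exactly these Cartesian products (80, 80, 384, and 16 candidates, matching the table).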