
Table 5. Hyper-parameter values explored in the machine learning tools applied for the development of the non-linear mt-QSAR models.

| Method | Parameters Tuned | Parameters Selected | 10-Fold CV Accuracy (%) a |
|--------|------------------|---------------------|---------------------------|
| RF | Bootstrap: True, False | False | 91.02 |
| | Criterion: Gini, Entropy | Gini | |
| | Maximum depth: 10, 30, 50, 70, 90, 100, None | 90 | |
| | Maximum features: Auto, Sqrt | Sqrt | |
| | Minimum samples leaf: 1, 2, 4 | 1 | |
| | Minimum samples split: 2, 5, 10 | 5 | |
| | Number of estimators: 50, 100, 200, 500 | 200 | |
| kNN | Number of neighbors: 1–31 | 20 | 79.20 |
| | Weight options: Uniform, Distance | Distance | |
| | Algorithm: Auto, Ball tree, KD tree, Brute | Auto | |
| XGBoost | Minimum child weight: 1, 5, 10 | 1 | 91.54 |
| | Gamma: 0, 0.5, 1, 1.5, 2, 5 | 0 | |
| | Subsample: 0.6, 0.8, 1.0 | 0.8 | |
| | Number of estimators: 50, 100, 200, 300 | 100 | |
| | Maximum depth: 3, 4, 5 | 5 | |
| RBF-SVC | C: 0.1, 1, 10, 100, 1000 | 1 | 62.30 |
| | Gamma: 1, 0.1, 0.01, 0.001 | 1 | |
| MLP | Hidden layer sizes: (50, 50, 50), (50, 100, 50), (100,) | (100,) | 82.97 |
| | Activation: Identity, Logistic, Tanh, ReLU | ReLU | |
| | Solver: SGD, Adam | Adam | |
| | Alpha: 0.0001, 0.001, 0.01, 1 | 0.0001 | |
| | Learning rate: Constant, Adaptive, Inverse scaling | Adaptive | |
| DT | Criterion: Gini, Entropy | Entropy | 84.33 |
| | Maximum depth: 10, 30, 50, 70, 90, 100, None | 100 | |
| | Maximum features: Auto, Sqrt | Sqrt | |
| | Minimum samples leaf: 1, 2, 4 | 1 | |
| | Minimum samples split: 2–50 | 13 | |
| NB | Alpha: 1, 0.5, 0.1 | 0.1 | 69.40 |
| | Fit prior: True, False | True | |

a The cross-validation accuracy was estimated only on the sub-training set.
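For reference, the grids in Table 5 map directly onto an exhaustive grid search with 10-fold cross-validation. The sketch below illustrates this for the RF row only, assuming scikit-learn's GridSearchCV; the make_classification data and variable names are placeholders standing in for the actual sub-training descriptors and class labels, not the authors' dataset.

```python
# Minimal sketch of the RF hyper-parameter search summarized in Table 5,
# assuming scikit-learn. Placeholder data is used instead of the real
# mt-QSAR sub-training set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder sub-training set (hypothetical; replace with real descriptors/labels).
X_subtrain, y_subtrain = make_classification(n_samples=500, n_features=20, random_state=0)

# Grid taken from the RF row of Table 5.
# Note: 'auto' for max_features was valid in scikit-learn < 1.3; drop it on newer versions.
param_grid = {
    "bootstrap": [True, False],
    "criterion": ["gini", "entropy"],
    "max_depth": [10, 30, 50, 70, 90, 100, None],
    "max_features": ["auto", "sqrt"],
    "min_samples_leaf": [1, 2, 4],
    "min_samples_split": [2, 5, 10],
    "n_estimators": [50, 100, 200, 500],
}

# Exhaustive search over all combinations, scored by 10-fold CV accuracy
# on the sub-training set (matching footnote a).
search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid=param_grid,
    scoring="accuracy",
    cv=10,
    n_jobs=-1,
)
search.fit(X_subtrain, y_subtrain)

print(search.best_params_)        # selected values, analogous to Table 5
print(search.best_score_ * 100)   # 10-fold CV accuracy (%)
```

The same pattern applies to the other rows (kNN, XGBoost, RBF-SVC, MLP, DT, NB); only the estimator and the parameter grid change.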