Sci Rep. 2021 Nov 3;11:21615. doi: 10.1038/s41598-021-00812-7

Table 2.

Optimized classifier hyperparameters and their variation ranges.

| Classifier | Hyperparameter | Variation range |
| --- | --- | --- |
| k-nearest neighbour (k-NN) | Distance (DD) | {cityblock, chebychev, correlation, cosine, euclidean, hamming, jaccard, mahalanobis, minkowski, spearman} |
| | DistanceWeight (DW) | {equal, inverse, squaredinverse} |
| | Exponent (E) | [0.5, 3] |
| | NumNeighbors (NN) | [1, 5] |
| Support vector machine (SVM) | BoxConstraint (BC) | log-scaled in the range [1e-3, 1e3] |
| | KernelFunction (KF) | {gaussian, linear, polynomial} |
| | KernelScale (KS) | log-scaled in the range [1e-3, 1e3] |
| | PolynomialOrder (PO) | {2, 3, 4} |
| Artificial neural network (ANN) | Activation function (AF) | {relu, sigmoid, tanh} |
| | Hidden-layer number of neurons (HLN) | [25, 200] |
| Linear discriminant analysis (LDA) | Gamma (G) | [0, 1] |
| | Delta (D) | log-scaled in the range [1e-6, 1e3] |
| | DiscrimType (DT) | {linear, quadratic, diagLinear, diagQuadratic, pseudoLinear, pseudoQuadratic} |
| Random forest (RF) | Depth (D) | [5, 20] |
| | Number of trees (NT) | [15, 100] |
| | Maximum depth of the tree | [5, 30] |
| Logistic regression (LR) | Penalty (P) | {L2, elastic net} |
| | Inverse of regularization strength (C) | [0.25, 1.0] |
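As a rough illustration of how a search space like the SVM row above might be encoded, here is a minimal Python sketch using scikit-learn's RandomizedSearchCV. The table's MATLAB-style names (BoxConstraint, KernelScale, PolynomialOrder) are mapped onto scikit-learn's C, gamma, and degree; the choice of library, the 5-fold cross-validation, and the iteration budget are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch: encoding the Table 2 SVM search space with
# scikit-learn's RandomizedSearchCV. BoxConstraint/KernelScale map
# roughly to sklearn's C/gamma; the CV scheme and search budget
# below are assumptions, not the paper's reported setup.
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

param_distributions = {
    "C": loguniform(1e-3, 1e3),           # BoxConstraint, log-scaled in [1e-3, 1e3]
    "gamma": loguniform(1e-3, 1e3),       # KernelScale, log-scaled in [1e-3, 1e3]
    "kernel": ["rbf", "linear", "poly"],  # 'gaussian' in the table ~ 'rbf' in sklearn
    "degree": [2, 3, 4],                  # PolynomialOrder (only used by 'poly')
}

search = RandomizedSearchCV(
    SVC(),
    param_distributions=param_distributions,
    n_iter=50,        # assumed search budget
    cv=5,             # assumed 5-fold cross-validation
    random_state=0,   # for reproducibility of the sampled configurations
)
# search.fit(X_train, y_train)  # X_train, y_train assumed available
```

The other rows follow the same pattern: bracketed ranges become continuous distributions (log-uniform where the table says log-scaled, uniform otherwise), and braced sets become categorical lists.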