Sci Rep. 2021 Mar 5;11:5297. doi: 10.1038/s41598-021-84447-8

Table 2.

Optimized hyperparameters for each classifier and their variation ranges.

Classifier                         | Hyperparameter                  | Variation range
k-nearest neighbour (k-NN)         | Distance (DD)                   | {cityblock, chebychev, correlation, cosine, euclidean, hamming, jaccard, mahalanobis, minkowski, seuclidean, spearman}
                                   | DistanceWeight (DW)             | {equal, inverse, squaredinverse}
                                   | Exponent (E)                    | [0.5, 3]
                                   | NumNeighbors (NN)               | [1, 5]
Support Vector Machine (SVM)       | BoxConstraint (BC)              | Log-scaled in the range [1e−3, 1e3]
                                   | KernelFunction (KF)             | {gaussian, linear, polynomial}
                                   | KernelScale (KS)                | Log-scaled in the range [1e−3, 1e3]
                                   | PolynomialOrder (PO)            | {1, 2, 3, 4}
Artificial Neural Network (ANN)    | Activation Function (AF)        | {relu, sigmoid, tanh}
                                   | Hidden layer no. of neurons (HLN) | [25, 200]
Linear Discriminant Analysis (LDA) | Gamma (G)                       | [0, 1]
                                   | Delta (D)                       | Log-scaled in the range [1e−6, 1e3]
                                   | DiscrimType (DT)                | {linear, quadratic, diagLinear, diagQuadratic, pseudoLinear, pseudoQuadratic}
Naive Bayes (NB)                   | DistributionName (DN)           | {normal, kernel}
                                   | Width (W)                       | Log-scaled in the range [1e−4, 1e14]
                                   | Kernel (K)                      | {normal, box, epanechnikov, triangle}
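The hyperparameter names in the table follow MATLAB's Statistics and Machine Learning Toolbox conventions. As a minimal sketch of how such a search space can be expressed and explored, the SVM row can be mapped to a roughly equivalent scikit-learn randomized search; the mapping (BoxConstraint → `C`, KernelScale → `gamma`, PolynomialOrder → `degree`) and the synthetic data are assumptions for illustration, not the authors' code:

```python
# Hedged sketch: scikit-learn analogue of the SVM row in Table 2.
# Assumed mapping: BoxConstraint -> C, KernelScale -> gamma (note sklearn uses
# an inverse-scale convention), KernelFunction -> kernel ('rbf' ~ gaussian),
# PolynomialOrder -> degree (only used by the 'poly' kernel).
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Search space mirroring the table: log-scaled in [1e-3, 1e3] for the
# continuous parameters, small discrete sets for the categorical ones.
param_distributions = {
    "C": loguniform(1e-3, 1e3),           # BoxConstraint (BC)
    "gamma": loguniform(1e-3, 1e3),       # KernelScale (KS)
    "kernel": ["rbf", "linear", "poly"],  # KernelFunction (KF)
    "degree": [1, 2, 3, 4],               # PolynomialOrder (PO)
}

# Synthetic stand-in data, purely for the sketch.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

search = RandomizedSearchCV(SVC(), param_distributions,
                            n_iter=20, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```

MATLAB's `fitcsvm(..., 'OptimizeHyperparameters', ...)` performs Bayesian optimization over such ranges by default; the randomized search above is a simpler stand-in for the same idea of sampling log-scaled and categorical ranges.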