Sensors. 2021 Jul 15;21(14):4833. doi: 10.3390/s21144833

Table 3.

Parameters used to tune classifiers.

Classifier | Values of Parameters Used during Training in This System
SVM | kernel = 'linear', C = 1.0, gamma = 'scale', degree = 3
DT | criterion = 'gini', splitter = 'best', maximum tree depth = none, minimum samples to split a node = 2, minimum samples per leaf node = 1, random state = none, maximum leaf nodes = none, minimum impurity decrease = 0.0
ETC | number of estimators (trees) = 100, criterion = 'entropy', minimum samples to split a node = 2, maximum number of features considered per split = 'auto'
GBM | loss = 'deviance', number of estimators = 100, criterion = 'friedman_mse', minimum samples to split a node = 2, minimum samples per leaf node = 1, maximum depth = 5
LR | penalty = L2 regularization (ridge), solver = 'liblinear', maximum iterations = 100
MLP | hidden layers = 2, neurons per layer = 100, epochs = 700, activation = 'relu', loss function = stochastic gradient, solver = 'adam'
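The option names in Table 3 ('gini', 'friedman_mse', 'liblinear', 'adam') match scikit-learn's estimator arguments, so the settings can be expressed as constructor calls. The sketch below is an illustrative mapping under that assumption, not the authors' published code; parameter names such as n_estimators and max_depth are scikit-learn's, and two values listed in the table ('auto' for maximum features and 'deviance' for the GBM loss) have been renamed in recent scikit-learn releases, as noted in the comments.

# Illustrative sketch: Table 3 settings expressed as scikit-learn estimators
# (assumed mapping; the authors' exact implementation is not shown).
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

classifiers = {
    "SVM": SVC(kernel="linear", C=1.0, gamma="scale", degree=3),
    "DT": DecisionTreeClassifier(
        criterion="gini", splitter="best", max_depth=None,
        min_samples_split=2, min_samples_leaf=1, random_state=None,
        max_leaf_nodes=None, min_impurity_decrease=0.0,
    ),
    # Table lists max_features = 'auto'; for classifiers this equals 'sqrt',
    # and the 'auto' alias was removed in newer scikit-learn versions.
    "ETC": ExtraTreesClassifier(
        n_estimators=100, criterion="entropy",
        min_samples_split=2, max_features="sqrt",
    ),
    # Table lists loss = 'deviance'; renamed 'log_loss' in scikit-learn >= 1.1.
    "GBM": GradientBoostingClassifier(
        loss="log_loss", n_estimators=100, criterion="friedman_mse",
        min_samples_split=2, min_samples_leaf=1, max_depth=5,
    ),
    "LR": LogisticRegression(penalty="l2", solver="liblinear", max_iter=100),
    # Two hidden layers of 100 neurons; 700 epochs map to max_iter.
    # MLPClassifier has no loss-function argument: it minimises log-loss
    # with the stochastic, gradient-based 'adam' solver listed in the table.
    "MLP": MLPClassifier(
        hidden_layer_sizes=(100, 100), max_iter=700,
        activation="relu", solver="adam",
    ),
}

# Each estimator would then be trained and evaluated in the usual way, e.g.
# classifiers["SVM"].fit(X_train, y_train) followed by .predict(X_test).

Grouping the estimators in a single dictionary makes it straightforward to train and compare all six classifiers in one loop over the same feature matrix.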