. 2022 Mar 13;22(6):2224. doi: 10.3390/s22062224

Table 3.

Default parameter values for the machine learning models.

Model | Parameter values
KNN | n_neighbors = 3, weights = 'uniform', algorithm = 'auto', leaf_size = 30, p = 2, metric = 'minkowski'
SVM (linear) | C = 0.025, kernel = 'linear'
SVM (RBF) | C = 1, gamma = 2, kernel = 'rbf'
Decision Tree | criterion = 'gini', max_depth = 5, max_features = None, max_leaf_nodes = None, min_samples_leaf = 1, min_samples_split = 2, min_weight_fraction_leaf = 0.0, random_state = None, splitter = 'best'
Naïve Bayes (Gaussian) | priors = None, var_smoothing = 1e-9
Neural Network (MLP Classifier) | activation = 'relu', alpha = 1, batch_size = 1024, hidden_layer_sizes = 100, learning_rate_init = 0.001, max_iter = 1000, power_t = 0.5, random_state = None, shuffle = True, solver = 'adam', tol = 0.0001
Linear Discriminant Analysis | n_components = None, priors = None, shrinkage = None, solver = 'svd'
Quadratic Discriminant Analysis | tol = 0.0001, store_covariance = False, reg_param = 0.0, priors = None
Passive Aggressive | C = 1.0, n_iter_no_change = 5, max_iter = 1000, random_state = None
Ridge | fit_intercept = True, alpha = 1.0, normalize = False, max_iter = None, random_state = None, solver = 'auto'
SGD Classifier | loss = 'hinge', penalty = 'l2', alpha = 0.0001, fit_intercept = True, max_iter = 1000
Logistic Regression | C = 1.0, cv = None, dual = False, fit_intercept = True, max_iter = 100, penalty = 'l2', random_state = None, solver = 'lbfgs', tol = 0.0001
Ensemble Learner
Random Forest | max_features = 1, n_estimators = 10, max_depth = 5, criterion = 'gini', random_state = None, verbose = 0
AdaBoost | algorithm = 'SAMME.R', learning_rate = 1, n_estimators = 50, random_state = None
Extra Trees | criterion = 'gini', max_depth = None, max_features = 12, min_samples_leaf = 1, min_samples_split = 2, min_weight_fraction_leaf = 0.0, n_estimators = 100
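The parameter names and values in Table 3 match the scikit-learn estimator APIs, so the table can be translated directly into model construction. The sketch below is an assumption on my part (the paper does not show code): it instantiates each classifier with the tabulated values, omitting parameters that are already library defaults or that do not belong to the plain estimator (e.g. cv for Logistic Regression); the dictionary keys are illustrative labels, not names from the paper.

```python
# Sketch (assumed scikit-learn implementation): build the classifiers of
# Table 3 with the listed parameter values. Anything not in the table is
# left at the library default.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.linear_model import (
    PassiveAggressiveClassifier, RidgeClassifier, SGDClassifier,
    LogisticRegression)
from sklearn.ensemble import (
    RandomForestClassifier, AdaBoostClassifier, ExtraTreesClassifier)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=3, weights="uniform",
                                algorithm="auto", leaf_size=30, p=2,
                                metric="minkowski"),
    "SVM (linear)": SVC(kernel="linear", C=0.025),
    "SVM (RBF)": SVC(kernel="rbf", C=1, gamma=2),
    "Decision Tree": DecisionTreeClassifier(criterion="gini", max_depth=5,
                                            min_samples_leaf=1,
                                            min_samples_split=2,
                                            splitter="best"),
    "Naive Bayes": GaussianNB(var_smoothing=1e-9),
    "MLP": MLPClassifier(activation="relu", alpha=1, batch_size=1024,
                         hidden_layer_sizes=(100,),
                         learning_rate_init=0.001, max_iter=1000,
                         solver="adam", tol=1e-4),
    "LDA": LinearDiscriminantAnalysis(solver="svd"),
    "QDA": QuadraticDiscriminantAnalysis(reg_param=0.0, tol=1e-4),
    "Passive Aggressive": PassiveAggressiveClassifier(
        C=1.0, max_iter=1000, n_iter_no_change=5),
    "Ridge": RidgeClassifier(alpha=1.0, fit_intercept=True, solver="auto"),
    "SGD": SGDClassifier(loss="hinge", penalty="l2", alpha=0.0001,
                         fit_intercept=True, max_iter=1000),
    "Logistic Regression": LogisticRegression(C=1.0, penalty="l2",
                                              solver="lbfgs", max_iter=100),
    # Ensemble learners
    "Random Forest": RandomForestClassifier(n_estimators=10, max_depth=5,
                                            max_features=1,
                                            criterion="gini"),
    "AdaBoost": AdaBoostClassifier(n_estimators=50, learning_rate=1.0),
    "Extra Trees": ExtraTreesClassifier(n_estimators=100, criterion="gini",
                                        max_features=12, min_samples_leaf=1,
                                        min_samples_split=2),
}
```

Each estimator exposes its settings via get_params(), so the dictionary can be iterated for fitting and cross-validating all models uniformly.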