| Model | Hyperparameters |
|---|---|
| Logistic Regression | C: 0.1, penalty: l2, solver: liblinear |
| Decision Tree | criterion: entropy, max_depth: 5, min_samples_split: 10 |
| Random Forest | criterion: entropy, max_depth: 10, min_samples_split: 2, min_samples_leaf: 1, n_estimators: 100, bootstrap: True, class_weight: None, ccp_alpha: 0.0 |
| Gradient Boosting | learning_rate: 0.01, max_depth: 5, n_estimators: 300 |
| SVM | C: 0.1, coef0: 0, degree: 2, gamma: scale, kernel: rbf, tol: 0.0001 |
| k-NN | n_neighbors: 7, metric: minkowski, weights: uniform, leaf_size: 30 |
| MLP | activation: relu, alpha: 0.001, hidden_layer_sizes: (100,), solver: adam |
| AdaBoost | n_estimators: 10, learning_rate: 1.0, algorithm: SAMME.R, base_estimator: None, random_state: None |
| Stochastic Gradient Descent | alpha: 0.001, penalty: elasticnet, l1_ratio: 0.15, loss: hinge, learning_rate: optimal, epsilon: 0.1, max_iter: 1000, n_iter_no_change: 5, power_t: 0.5, tol: 0.001 |
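
For reproducibility, the sketch below shows one way these configurations map onto scikit-learn estimators, from which the parameter names are drawn. The synthetic dataset from `make_classification` is an assumed stand-in for the actual training data, which the table does not specify.

```python
# Minimal sketch: instantiating the tabulated configurations in scikit-learn.
# The synthetic data is an assumed placeholder; real data loading and
# preprocessing are outside the scope of this table.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Assumed stand-in for the study's training split.
X_train, y_train = make_classification(n_samples=200, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(
        C=0.1, penalty="l2", solver="liblinear"
    ),
    "Decision Tree": DecisionTreeClassifier(
        criterion="entropy", max_depth=5, min_samples_split=10
    ),
    "Random Forest": RandomForestClassifier(
        criterion="entropy", max_depth=10, min_samples_split=2,
        min_samples_leaf=1, n_estimators=100, bootstrap=True,
        class_weight=None, ccp_alpha=0.0,
    ),
    "Gradient Boosting": GradientBoostingClassifier(
        learning_rate=0.01, max_depth=5, n_estimators=300
    ),
    "SVM": SVC(
        C=0.1, coef0=0, degree=2, gamma="scale", kernel="rbf", tol=0.0001
    ),
    "k-NN": KNeighborsClassifier(
        n_neighbors=7, metric="minkowski", weights="uniform", leaf_size=30
    ),
    "MLP": MLPClassifier(
        activation="relu", alpha=0.001, hidden_layer_sizes=(100,),
        solver="adam",
    ),
    # algorithm="SAMME.R" matches the table but is deprecated in
    # scikit-learn >= 1.4; drop it there to fall back to "SAMME".
    "AdaBoost": AdaBoostClassifier(
        n_estimators=10, learning_rate=1.0, algorithm="SAMME.R",
        random_state=None,
    ),
    "Stochastic Gradient Descent": SGDClassifier(
        alpha=0.001, penalty="elasticnet", l1_ratio=0.15, loss="hinge",
        learning_rate="optimal", epsilon=0.1, max_iter=1000,
        n_iter_no_change=5, power_t=0.5, tol=0.001,
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
```

Note that `coef0` and `degree` are inert for the rbf kernel, and `l1_ratio` takes effect only because the SGD penalty is elasticnet; both are kept above to mirror the table.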