Table 4:
Machine learning algorithms, hyper-parameters, and performance using 13 handcrafted features. We perform 3-fold cross-validation to evaluate the performance of each model; F1 scores are reported.
| Algorithm | Hyper-parameter | Values | F1 |
|---|---|---|---|
| KNN | n_neighbors | 5, 6, 7, 8, 9, 10 | 0.65 |
| Linear SVC | gamma | 0.001, 0.01, 0.1, 1 | 0.71 |
| | C | 0.001, 0.01, 0.1, 1, 10, 100 | |
| SVC (RBF kernel) | gamma | 0.001, 0.01, 0.1, 1 | 0.68 |
| | C | 0.001, 0.01, 0.1, 1, 10, 100 | |
| Logistic Regression (l2) | C | np.logspace(-4, 4, 20) | 0.73 |
| Random Forest | max_depth | 3, 5, 7 | 0.56 |
| | n_estimators | range(10, 101, 10) | |
| | min_samples_split | 3, 5 | |
| Gradient Boosting Tree | max_depth | 3, 4, 5 | 0.68 |
| | subsample | 0.5, 0.6, 0.8, 0.85, 0.9, 0.95, 1.0 | |
| | learning_rate | 0.01, 0.03, 0.05, 0.075, 0.1, 0.2 | |
| | n_estimators | range(10, 101, 10) | |
| Neural Network | hidden layers | 1, 2, 3 | 0.71 |
| | hidden units | 100, 200, 300 | |
| | dropout rate | 0, 0.25, 0.5 | |
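
The grids in Table 4 map directly onto an exhaustive grid search. The sketch below is a minimal illustration, assuming scikit-learn's `GridSearchCV` with `cv=3` and `scoring="f1"` (binary F1; an averaged variant would be needed for multi-class labels); `X` and `y` are placeholders for the 13-feature matrix and labels, the estimator classes are assumptions since the paper does not name its library, and the neural network row is omitted because tuning hidden layers and dropout is usually done in a separate deep-learning framework.

```python
# A minimal sketch, assuming scikit-learn is used for model selection.
# X (n_samples x 13 handcrafted features) and y are placeholder arrays,
# not the paper's data.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Hyper-parameter grids copied from Table 4. Note: the table lists gamma for
# Linear SVC, but sklearn's LinearSVC has no gamma parameter, so only C is
# tuned here; the neural network is left out of this sketch.
search_space = {
    "KNN": (KNeighborsClassifier(),
            {"n_neighbors": [5, 6, 7, 8, 9, 10]}),
    "Linear SVC": (LinearSVC(),
                   {"C": [0.001, 0.01, 0.1, 1, 10, 100]}),
    "SVC (RBF kernel)": (SVC(kernel="rbf"),
                         {"gamma": [0.001, 0.01, 0.1, 1],
                          "C": [0.001, 0.01, 0.1, 1, 10, 100]}),
    "Logistic Regression (l2)": (LogisticRegression(penalty="l2", max_iter=1000),
                                 {"C": np.logspace(-4, 4, 20)}),
    "Random Forest": (RandomForestClassifier(),
                      {"max_depth": [3, 5, 7],
                       "n_estimators": list(range(10, 101, 10)),
                       "min_samples_split": [3, 5]}),
    "Gradient Boosting Tree": (GradientBoostingClassifier(),
                               {"max_depth": [3, 4, 5],
                                "subsample": [0.5, 0.6, 0.8, 0.85, 0.9, 0.95, 1.0],
                                "learning_rate": [0.01, 0.03, 0.05, 0.075, 0.1, 0.2],
                                "n_estimators": list(range(10, 101, 10))}),
}

def tune_all(X, y):
    """Run a 3-fold grid search per model and report the best F1 score."""
    for name, (estimator, grid) in search_space.items():
        search = GridSearchCV(estimator, grid, cv=3, scoring="f1", n_jobs=-1)
        search.fit(X, y)
        print(f"{name}: best F1 = {search.best_score_:.2f}, "
              f"params = {search.best_params_}")
```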