Table 2.
Optimized values of hyperparameters for ML models.
| ML model | Hyperparameters | Final optimized value | Total execution time (ms)ᵃ |
|---|---|---|---|
| ANN | kernel_initializer | "normal" | 34.76 |
| | model optimizer | "adam" | |
| | hidden layer 1 number of neurons | 62 | |
| | hidden layer 1 activation function | "exponential" | |
| | hidden layer 2 number of neurons | 32 | |
| | hidden layer 2 activation function | "exponential" | |
| | hidden layer 3 number of neurons | 16 | |
| | hidden layer 3 activation function | "exponential" | |
| | output layer number of neurons | 1 | |
| | output layer activation function | "linear" | |
| SVM | C | 700 | 6.38 |
| | ε | 0.001 | |
| | γ | 6 × 10⁻³ | |
| | kernel | "RBF" | |
| | random_state | 100 | |
| ELM | hidden_units | 50 | 14.74 |
| | activation_function | "relu" | |
| | C | 1 | |
| | random_type | "normal" | |
| KRR | α | 0.001 | 368.08 |
| | kernel | "RBF" | |
| | γ | 1.78 × 10⁻² | |
| | random_state | 100 | |
| XGB | learning_rate | 0.3 | 510.15 |
| | n_estimators | 200 | |
| | γ | 0.001 | |
| | max_depth | 50 | |
| | subsample | 0.5 | |
| | colsample_bylevel | 0.7 | |
| | random_state | 100 | |
| RF | n_estimators | 100 | 50.00 |
| | max_features | 17 | |
| | max_depth | None | |
| | min_samples_split | 2 | |
| | min_samples_leaf | 1 | |
| | bootstrap | False | |
ᵃ Total execution time = model training time + inference time.
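As a minimal sketch of how the tabulated SVM, KRR, and RF settings map onto scikit-learn estimators, the snippet below instantiates each model with the optimized values from Table 2. The synthetic dataset is an assumption standing in for the study's data; note that scikit-learn's `SVR` and `KernelRidge` do not accept a `random_state` argument, so those entries are omitted here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data (assumption; stands in for the study's dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 17))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# SVM with the tabulated values (C, epsilon, gamma, RBF kernel).
svm = SVR(kernel="rbf", C=700, epsilon=0.001, gamma=6e-3)

# KRR with the tabulated values (alpha, gamma, RBF kernel).
krr = KernelRidge(alpha=0.001, kernel="rbf", gamma=1.78e-2)

# RF with the tabulated values; bootstrap=False trains each tree
# on the full dataset, and max_features=17 uses every feature.
rf = RandomForestRegressor(n_estimators=100, max_features=17,
                           max_depth=None, min_samples_split=2,
                           min_samples_leaf=1, bootstrap=False)

for model in (svm, krr, rf):
    model.fit(X, y)
    print(type(model).__name__, round(model.score(X, y), 3))
```

The ANN, ELM, and XGB rows would map analogously onto their respective libraries (e.g. Keras, an ELM implementation, and xgboost), which are not shown here to keep the sketch dependency-light.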