Table 4. Hyperparameter optimisation overview.
Search ranges (lower and upper limits) and selected values for the hyperparameters of each sub-model included in the FAPAR EML meta-estimator. Dashes indicate hyperparameters set without a search range.
| Model | Hyperparameter | Lower limit | Upper limit | Selected |
|---|---|---|---|---|
| Extremely randomized trees | Number of estimators | 10 | 100 | 44 |
| | Maximum tree depth | 5 | 100 | 92 |
| | Maximum number of features | 0 | 1 | 0.84 |
| | Minimum samples for splitting | 2 | 100 | 16 |
| | Minimum samples per leaf | 1 | 10 | 2 |
| Gradient boosted trees | Number of estimators | 10 | 100 | 81 |
| | Maximum tree depth | 3 | 100 | 50 |
| | alpha | 0 | 2 | 1.19 |
| | reg_alpha | 0 | 0.2 | 0.007 |
| | eta | 0 | 2 | 1.999 |
| | reg_lambda | 0 | 0.2 | 0.12 |
| | gamma | 0 | 2 | 0.05 |
| | Learning rate | 0 | 0.2 | 0.06 |
| | colsample_bytree | 0 | 1 | 0.88 |
| | colsample_bylevel | 0 | 1 | 0.66 |
| | colsample_bynode | 0 | 1 | 0.47 |
| Artificial neural network | Epochs | – | – | 10 |
| | Batch size | – | – | 256 |
| | Learning rate | – | – | 0.0005 |
| | Number of layers | – | – | 4 |
| | Number of neurons | – | – | 128 |
| | Activation | – | – | ReLU |
| | Dropout rate | – | – | 0.15 |
| | Output activation | – | – | Sigmoid |
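
For reference, the following is a minimal sketch of how the selected values could be instantiated, assuming the sub-models correspond to scikit-learn's `ExtraTreesRegressor`, xgboost's `XGBRegressor`, and a Keras feed-forward network; the table does not name the libraries, so the class names, the Adam optimiser, and the MSE loss are assumptions. Note that in xgboost `alpha`/`reg_alpha` and `eta`/`learning_rate` are aliases, so only one value of each duplicated pair is set below.

```python
# Sketch only: the library choices (scikit-learn, xgboost, Keras), the Adam
# optimiser, and the MSE loss are assumptions not stated in Table 4.
from sklearn.ensemble import ExtraTreesRegressor
from xgboost import XGBRegressor
from tensorflow import keras

# Extremely randomized trees with the selected values from Table 4.
extra_trees = ExtraTreesRegressor(
    n_estimators=44,
    max_depth=92,
    max_features=0.84,       # fraction of features considered at each split
    min_samples_split=16,
    min_samples_leaf=2,
)

# Gradient boosted trees. alpha/reg_alpha and eta/learning_rate are aliases
# in xgboost, so only reg_alpha and learning_rate are passed here.
gbt = XGBRegressor(
    n_estimators=81,
    max_depth=50,
    reg_alpha=0.007,         # L1 regularisation
    reg_lambda=0.12,         # L2 regularisation
    gamma=0.05,              # minimum loss reduction required to split
    learning_rate=0.06,
    colsample_bytree=0.88,   # feature subsampling per tree ...
    colsample_bylevel=0.66,  # ... per depth level ...
    colsample_bynode=0.47,   # ... and per split node
)

# Feed-forward network: four Dense layers of 128 neurons (assumed to be the
# hidden layers), ReLU activations, 15% dropout, and a sigmoid output that
# keeps predictions in [0, 1], matching FAPAR's physical range.
ann = keras.Sequential()
for _ in range(4):
    ann.add(keras.layers.Dense(128, activation="relu"))
    ann.add(keras.layers.Dropout(0.15))
ann.add(keras.layers.Dense(1, activation="sigmoid"))
ann.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0005), loss="mse")
# Training would then use the selected epochs and batch size:
# ann.fit(X_train, y_train, epochs=10, batch_size=256)
```

How the three sub-models are combined into the EML meta-estimator (e.g. stacking or weighted averaging) is not specified by the table, so no combiner is shown here.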