PLoS One. 2023 Nov 22;18(11):e0292047. doi: 10.1371/journal.pone.0292047

Table 4. Random Forest (RF), Gradient Boosting (GB), and AdaBoost (AB) methods: Hyperparameters’ domain and the corresponding tuned values at the data sets under consideration.

Here Ne, Mss, Msl, and lr denote the number of estimators, the minimum number of samples per split, the minimum number of samples per leaf, and the learning rate, respectively.

| Method / data set   | Ne                  | Mss            | Msl            | lr           |
|---------------------|---------------------|----------------|----------------|--------------|
| AB (domain)         | {10, 11, …, 10000}  | –              | –              | [1e-3, 5e-1] |
| RF (domain)         | {10, 11, …, 10000}  | {2, 3, …, 10}  | {1, 2, …, 10}  | –            |
| GB (domain)         | {10, 11, …, 10000}  | {2, 3, …, 10}  | {1, 2, …, 10}  | [1e-3, 5e-1] |
| AB at Demo          | 545                 | –              | –              | 0.017        |
| RF at Demo          | 6197                | 5              | 9              | –            |
| GB at Demo          | 257                 | 4              | 1              | 0.005        |
| AB at Fixation      | 415                 | –              | –              | 0.169        |
| RF at Fixation      | 2726                | 2              | 10             | –            |
| GB at Fixation      | 3380                | 3              | 6              | 0.007        |
| AB at IA            | 4736                | –              | –              | 0.087        |
| RF at IA            | 1980                | 4              | 3              | –            |
| GB at IA            | 165                 | 2              | 10             | 0.160        |
| AB at Demo-Fixation | 309                 | –              | –              | 0.215        |
| RF at Demo-Fixation | 9923                | 9              | 1              | –            |
| GB at Demo-Fixation | 2674                | 9              | 3              | 0.282        |
| AB at Demo-IA       | 7133                | –              | –              | 0.019        |
| RF at Demo-IA       | 163                 | 2              | 1              | –            |
| GB at Demo-IA       | 971                 | 7              | 3              | 0.299        |
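As a rough illustration of how tuned values could be drawn from domains like those in Table 4, the sketch below implements a plain random search over the GB hyperparameter space (integer sets for Ne, Mss, Msl; a continuous interval for lr). This is not the paper's tuning procedure; the `objective` callback is a hypothetical stand-in for whatever cross-validated score was actually optimized.

```python
import random

# Hyperparameter domains taken from the GB row of Table 4.
DOMAINS = {
    "n_estimators": range(10, 10001),       # Ne: {10, 11, ..., 10000}
    "min_samples_split": range(2, 11),      # Mss: {2, 3, ..., 10}
    "min_samples_leaf": range(1, 11),       # Msl: {1, 2, ..., 10}
    "learning_rate": (1e-3, 5e-1),          # lr: continuous interval [1e-3, 5e-1]
}

def sample_config(rng):
    """Draw one configuration uniformly at random from DOMAINS."""
    lo, hi = DOMAINS["learning_rate"]
    return {
        "n_estimators": rng.choice(DOMAINS["n_estimators"]),
        "min_samples_split": rng.choice(DOMAINS["min_samples_split"]),
        "min_samples_leaf": rng.choice(DOMAINS["min_samples_leaf"]),
        "learning_rate": rng.uniform(lo, hi),
    }

def random_search(objective, n_trials=50, seed=0):
    """Return the best-scoring configuration over n_trials random draws."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = objective(cfg)  # higher is better, e.g. CV accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In practice one would pass a real objective, e.g. a function fitting `sklearn.ensemble.GradientBoostingClassifier` with the sampled configuration and returning its cross-validated score; the loop itself is independent of that choice.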