2022 Apr 15;2022:3820360. doi: 10.1155/2022/3820360

Table 1.

Hyperparameter optimization.

K-nearest neighbour: Number of neighbours = 45; Batch size = 100; Algorithm = linear search; Distance function = Manhattan distance
Random forest: Size of each bag = 53; Max depth = 0; No. of trees = 100
Decision trees: Confidence factor = 0.11; Min. num. of objects = 1; Unpruned = false
Multilayer perceptron: Learning rate = 0.003; Momentum = 0.9; Hidden layers = 10
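
The parameter names in Table 1 (confidence factor, min. num. of objects, bag size, hidden layers as a single value) closely match the options exposed by the Weka toolkit. The sketch below is a minimal illustration, assuming the Weka 3.8 Java API (which the table itself does not name), of how the four classifiers could be instantiated with these settings; reading "Hidden layers = 10" as a single hidden layer of 10 units is also an assumption, since the table does not say so explicitly.

```java
import weka.classifiers.functions.MultilayerPerceptron;
import weka.classifiers.lazy.IBk;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.RandomForest;
import weka.core.ManhattanDistance;
import weka.core.neighboursearch.LinearNNSearch;

public class Table1Hyperparameters {
    public static void main(String[] args) throws Exception {
        // K-nearest neighbour: 45 neighbours, batch size 100,
        // linear search with the Manhattan distance function.
        IBk knn = new IBk();
        knn.setKNN(45);
        knn.setBatchSize("100");
        LinearNNSearch linearSearch = new LinearNNSearch();
        linearSearch.setDistanceFunction(new ManhattanDistance());
        knn.setNearestNeighbourSearchAlgorithm(linearSearch);

        // Random forest: 100 trees, bag size 53, max depth 0
        // (in Weka, a depth of 0 means no depth limit).
        RandomForest rf = new RandomForest();
        rf.setNumIterations(100);
        rf.setBagSizePercent(53);
        rf.setMaxDepth(0);

        // Decision tree (J48): confidence factor 0.11,
        // minimum of 1 object per leaf, pruning enabled (unpruned = false).
        J48 dt = new J48();
        dt.setConfidenceFactor(0.11f);
        dt.setMinNumObj(1);
        dt.setUnpruned(false);

        // Multilayer perceptron: learning rate 0.003, momentum 0.9,
        // "10" interpreted here as one hidden layer with 10 units (assumption).
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setLearningRate(0.003);
        mlp.setMomentum(0.9);
        mlp.setHiddenLayers("10");
    }
}
```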