PLoS ONE. 2023 Nov 22;18(11):e0292047. doi: 10.1371/journal.pone.0292047

Table 3. Multi-Layer Perceptron (MLP) and convolutional neural network (CNN) methods: hyperparameter domains and the corresponding tuned values for the data sets under consideration.

Nn, Ne, and lr denote, respectively, the number of neurons in the hidden layer, the number of epochs, and the learning rate. Drop denotes the maximum dropout in the max pooling layer.

| Method / data set    | Nn              | Ne          | lr           | Activation                        | Optimiser            | Drop       |
|----------------------|-----------------|-------------|--------------|-----------------------------------|----------------------|------------|
| MLP (domain)         | {2, 3, …, 200}  | [10, 50000] | [1e-6, 1e-2] | {Identity, Logistic, Tanh, ReLU}  | {LBFGS, SGD, ADAM}   | [0.1, 0.8] |
| MLP at Demo          | 173             | 31270       | -            | Identity                          | LBFGS                | -          |
| MLP at Fixation      | 190             | 58325       | -            | ReLU                              | LBFGS                | -          |
| CNN at Fixation      | 192             | 100         | 0.0001       | ReLU                              | ADAM                 | 0.1        |
| MLP at Demo-Fixation | 158             | 49150       | -            | Tanh                              | LBFGS                | -          |
| CNN at Demo-Fixation | 192             | 200         | 0.0001       | ReLU                              | ADAM                 | 0.1        |
| MLP at IA            | 34              | 66253       | -            | Logistic                          | LBFGS                | -          |
| MLP at Demo-IA       | 50              | 13953       | -            | Logistic                          | LBFGS                | -          |
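As a minimal sketch of how the tuned values in Table 3 map onto an actual model, the snippet below configures an MLP with the hyperparameters reported for the Demo data set (Nn = 173, Ne = 31270, Identity activation, LBFGS optimiser; no learning rate or dropout apply to the LBFGS-trained MLP). This is not the authors' code: the study's eye-tracking data are not reproduced here, so the features and labels are synthetic placeholders, and scikit-learn's `MLPClassifier` is an assumed implementation choice.

```python
# Illustrative only: tuned "MLP at Demo" hyperparameters from Table 3,
# applied via scikit-learn's MLPClassifier on placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))        # placeholder features (not the Demo data)
y = rng.integers(0, 2, size=40)     # placeholder binary labels

clf = MLPClassifier(
    hidden_layer_sizes=(173,),      # Nn = 173 neurons in the single hidden layer
    max_iter=31270,                 # Ne = 31270 (iteration budget for LBFGS)
    activation="identity",          # tuned activation for Demo
    solver="lbfgs",                 # tuned optimiser for Demo
    random_state=0,
)
clf.fit(X, y)
print(clf.n_layers_)                # input + one hidden + output layer
```

Note that for the LBFGS solver scikit-learn ignores learning-rate settings, which is consistent with the "-" entries for lr in the MLP rows of the table.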