2009 May 5;2009:381457. doi: 10.1155/2009/381457

Table 2.

Details of the training procedure used for each of the algorithms tested. In all cases the parameter values listed are those found to produce the best results, and were kept constant across variations in the task. All algorithms except nmfdiv use an online learning procedure, in which each weight update occurs after an individual training image has been processed; this is described as a training cycle. In contrast, nmfdiv uses a batch learning method, in which each weight update is influenced by all training images; this is described as a training epoch. With a set of 1000 training images (as used in these experiments), one epoch is therefore equivalent to 1000 training cycles of the online learning algorithms. The third column specifies the number of iterations used to determine the steady-state activation values. Weights were initialised with random values drawn from a Gaussian distribution with the mean and standard deviation indicated. In each case, initial weights with values less than zero were set to zero.

Algorithm   Training time    Iterations   Weight initialisation     Parameter values
fyfe        200 000 cycles   n/a          mean = 1/8, std = 1/32    β = 0.0001
harpur      20 000 cycles    100          mean = 1/8, std = 1/32    β = 0.1, μ = 0.025
nmfdiv      2 000 epochs     n/a          mean = 1/2, std = 1/8     n/a
nmfseq      20 000 cycles    50           mean = 1/4, std = 1/16    β = 0.05
dim         20 000 cycles    50           mean = 1/16, std = 1/64   β = 0.05
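The weight-initialisation scheme described in the caption (Gaussian draws with negatives clipped to zero) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and use of Python's standard library generator are assumptions.

```python
import random

def init_weights(n, mean, std, seed=0):
    """Draw n initial weights from a Gaussian distribution with the
    given mean and standard deviation; any value below zero is set
    to zero, as described in the table caption."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(mean, std)) for _ in range(n)]

# Example using the values listed for the 'dim' algorithm:
# mean = 1/16, std = 1/64.
w = init_weights(1000, 1.0 / 16, 1.0 / 64)
```

With the std set to a quarter of the mean (as in every row of the table), the mean sits four standard deviations above zero, so the clipping step affects almost no draws and the weights remain approximately Gaussian.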