| Method | Description | Key reference |
|---|---|---|
| Artificial neural networks (ANNs) | ANNs in their basic form are fully connected, layered structures of artificial neurons (ANs). An AN is a pointwise nonlinear function (e.g., a sigmoid or Gaussian) applied to the output of a linear regression (single-neuron sketch below). ANs in adjacent layers are interconnected by weighted links. The most common structure is the feedforward ANN, in which information flows in one direction only: from the input nodes, data pass through the hidden nodes (if any) to the output nodes. | Haykin (1999) |
| Back-propagation ANN (BPANN) | The most basic type of neural network is the multi-layer perceptron, a feedforward ANN trained by back-propagation. Training consists of two steps: (1) feed the input values forward through the network and (2) compute the output error and propagate it back to the earlier layers to update the weights. Strictly speaking, forward propagation is therefore part of the back-propagation algorithm and precedes the backward pass (BPANN sketch below). This is the algorithm most commonly meant when an ANN is referenced, and many papers using ANNs do not mention this standard design explicitly. | Haykin (1999) |
| Radial basis function ANN (RBFANN) | An RBFANN uses nonlinear radial basis functions (RBFs) as activation functions in the hidden layer. The output of the network is a linear combination of the RBFs of the inputs and neuron parameters (RBFANN sketch below). | Broomhead and Lowe (1988) |
| Recurrent ANN (RANN) | A RANN makes use of sequential information by introducing feedback loops into the network, so that the hidden state at each step depends on the earlier inputs (RANN sketch below). | Hochreiter and Schmidhuber (1997) |
| Bayesian regularized ANN (BRANN) | BRANNs are more robust than standard BPANNs and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of ridge regression (BRANN sketch below). | Burden and Winkler (1999) |
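
The sketches below illustrate the methods in the table. First, a minimal Python sketch of a single artificial neuron as described in the first row: a pointwise nonlinearity (a sigmoid here) applied to the output of a linear regression, i.e., a weighted sum of the inputs plus a bias. The input and weight values are illustrative assumptions only.

```python
import numpy as np

def neuron(x, w, b):
    z = np.dot(w, x) + b                  # the linear-regression part
    return 1.0 / (1.0 + np.exp(-z))       # pointwise sigmoid nonlinearity

x = np.array([0.5, -1.2, 3.0])            # outputs of the previous layer
w = np.array([0.4, 0.1, -0.7])            # weights on the incoming links
print(neuron(x, w, b=0.2))                # this neuron's activation
```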
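
Next, a minimal sketch of a one-hidden-layer BPANN showing the two steps named in the table: the forward pass, then error back-propagation with gradient-descent weight updates. The sigmoid activations, squared-error loss, layer sizes, learning rate, and toy XOR data are all illustrative choices, not prescribed by the sources cited.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a classic problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))             # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))             # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Step 1: feed the values forward.
    h = sigmoid(X @ W1 + b1)              # hidden activations
    out = sigmoid(h @ W2 + b2)            # network output

    # Step 2: compute the error and propagate it back.
    err = out - y                         # derivative of the squared error
    d_out = err * out * (1 - out)         # sigmoid derivative at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))                       # should approach [[0], [1], [1], [0]]
```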
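
A minimal RBFANN sketch in the spirit of Broomhead and Lowe (1988): Gaussian basis functions in the hidden layer, and a linear output layer whose weights are fitted by least squares. Choosing the centres as random training points and fixing the width by hand are illustrative assumptions; many centre-selection schemes exist.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centres, width):
    # Squared distances between each input and each centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))          # Gaussian RBF activations

# Toy regression data: a noisy sine curve.
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.normal(size=50)

centres = X[rng.choice(len(X), size=10, replace=False)]  # 10 random centres
Phi = rbf_design(X, centres, width=0.5)

# The output is a linear combination of the RBFs: solve for the weights.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print(f"training RMSE: {np.sqrt(np.mean((y_hat - y) ** 2)):.3f}")
```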
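
A minimal sketch of the loop that defines a RANN: the hidden state is fed back into the network at every time step, so the final state summarises the whole input sequence. This shows the plain (vanilla) recurrence only; the LSTM of the cited Hochreiter and Schmidhuber (1997) adds gating on top of it to cope with long sequences. All sizes and weight values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_xh = rng.normal(0, 0.5, (n_in, n_hidden))        # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hidden, n_hidden))    # hidden -> hidden (the loop)
b_h = np.zeros(n_hidden)

def rnn_forward(sequence):
    h = np.zeros(n_hidden)                # initial hidden state
    for x_t in sequence:                  # one update per time step
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
    return h                              # summary of the whole sequence

seq = rng.normal(size=(7, n_in))          # a toy sequence of 7 time steps
print(rnn_forward(seq))
```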
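
Finally, a sketch of the ridge-regression analogy behind Bayesian regularization: the sum-of-squares error is augmented with a penalty on the weight magnitudes, which turns an ill-posed fit into a well-posed one. In a full BRANN the penalty strength is inferred from the data within a Bayesian framework rather than fixed by hand; holding `alpha` constant, and the linear toy problem itself, are illustrative simplifications.

```python
import numpy as np

def ridge_fit(Phi, y, alpha):
    # Minimise ||Phi w - y||^2 + alpha * ||w||^2 (closed form).
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + alpha * np.eye(n), Phi.T @ y)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 40))              # more weights than data points:
y = Phi[:, 0] + 0.1 * rng.normal(size=20)    # the unpenalised fit is ill-posed

w_plain = np.linalg.lstsq(Phi, y, rcond=None)[0]   # minimum-norm least squares
w_reg = ridge_fit(Phi, y, alpha=1.0)               # penalised, well-posed fit
print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))  # the penalty shrinks w
```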