Neth Heart J. 2019 May 20;27(9):392–402. doi: 10.1007/s12471-019-1286-6

Fig. 4.


a Feedforward network. Input flows from left to right to predict output values. b Artificial neuron: each input xi is multiplied by its own weight wi. To introduce an intercept, an extra input b (with value 1) is added, with its own weight wb. The sum of all weighted inputs and the intercept b is used as input for the activation function φ, which yields the neuron's output once this input exceeds a certain threshold. c Examples of activation functions. Tanh is the hyperbolic tangent, with outputs quickly changing from −1 to 1 around input x = 0. The sigmoid resembles tanh, with outputs shifting from 0 to 1 around x = 0. Relu is the rectified linear unit, with output = 0 for inputs smaller than zero and output equal to the input for inputs greater than or equal to zero. Softplus is a continuous function that differs only slightly from relu for inputs around zero and is described by ln(1 + e^x). d Recurrent neural network, which adds a time component to the feedforward network. In addition to the input xt at a given moment t, the previous output yt−1 is passed through the model to predict yt. e Long short-term memory, which adds a persisted model state s, in addition to the previous model output yt−1, to better assess the effect of input changes over a longer period on model output.
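To make the computations in panels b and c concrete, the sketch below is a minimal, illustrative Python/NumPy example (not taken from the article): a single artificial neuron that forms the weighted sum of its inputs plus the intercept and passes it through one of the four activation functions shown. The input values, weights, and intercept are arbitrary examples.

```python
import numpy as np

# Activation functions from panel c
def tanh(x):
    return np.tanh(x)                # outputs move from -1 to 1 around x = 0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # outputs move from 0 to 1 around x = 0

def relu(x):
    return np.maximum(0.0, x)        # 0 for x < 0, equal to x for x >= 0

def softplus(x):
    return np.log1p(np.exp(x))       # ln(1 + e^x), a smooth variant of relu

def neuron(x, w, b, phi=sigmoid):
    """Artificial neuron (panel b): weighted sum of inputs xi * wi plus
    intercept b, passed through the activation function phi."""
    return phi(np.dot(w, x) + b)

# Illustrative values only (not from the article)
x = np.array([0.5, -1.2, 2.0])   # inputs xi
w = np.array([0.8, 0.3, -0.5])   # weights wi
b = 0.1                          # intercept (input with value 1 times weight wb)

for phi in (tanh, sigmoid, relu, softplus):
    print(phi.__name__, neuron(x, w, b, phi=phi))
```

A feedforward network (panel a) stacks many such neurons in layers, each layer using the outputs of the previous layer as its inputs.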