PLoS ONE. 2019 Apr 16;14(4):e0214712. doi: 10.1371/journal.pone.0214712

Table 1. The notations utilized in the cost function.

Notation	Meaning
N	Number of data samples
λ	Weight decay parameter
β	Weight of the sparsity penalty term
ρ	Sparsity parameter defining the target sparsity level
ρ̂_j	Average activation of hidden neuron j
s_l	Number of neurons in layer l
x^(i)	Input feature vector
h_{W,b}(x^(i))	Output feature vector
KL(ρ‖ρ̂_j)	Kullback-Leibler divergence between ρ and ρ̂_j
W_{ji}^(l)	Weight on the connection between neuron j in layer l + 1 and neuron i in layer l
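The cost function itself is not reproduced in this excerpt. As a sketch only, the standard sparse-autoencoder objective that matches these notations (mean squared reconstruction error over the N samples, a λ-weighted decay term over the weights W_{ji}^(l), and a β-weighted KL sparsity penalty comparing each ρ̂_j to ρ) can be written as follows; the parameter defaults are illustrative, not values taken from the paper:

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    # KL(rho || rho_hat_j): grows as the average activation rho_hat_j
    # of a hidden neuron drifts away from the target sparsity rho
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sparse_autoencoder_cost(x, h, weights, rho_hat,
                            lam=1e-4, beta=3.0, rho=0.05):
    # x:       (N, d) input feature vectors x^(i)
    # h:       (N, d) reconstructions h_{W,b}(x^(i))
    # weights: list of weight matrices W^(l), one per layer
    # rho_hat: average activation of each hidden neuron, rho_hat_j
    N = x.shape[0]
    reconstruction = np.sum((h - x) ** 2) / (2 * N)
    weight_decay = (lam / 2) * sum(np.sum(W ** 2) for W in weights)
    sparsity = beta * np.sum(kl_divergence(rho, rho_hat))
    return reconstruction + weight_decay + sparsity
```

With perfect reconstruction (h equal to x), zero weights, and every ρ̂_j equal to ρ, all three terms vanish and the cost is zero, which is a quick sanity check on the formulation.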