Front Pharmacol. 2020 Jan 28;10:1631. doi: 10.3389/fphar.2019.01631

Table 4.

Algorithm and training procedure of restricted Boltzmann machine-capsule networks (RBM-CapsNet).

Algorithm: RBM-CapsNet training algorithm, using mini-batch stochastic gradient descent (SGD) for simplicity.
Input: mini-batch feature vector (x);
   number of RBM training epochs (S1);
   number of capsule training epochs (S2);
   number of dynamic routing iterations (iter).
Output: length of each capsule (Len).
1:  For n = 1 to S1 do
2:      hf_layer ← φ1(x, θ1)    {RBM1 training}
3:  End for
4:  For n = 1 to S1 do
5:      pc_layer ← φ2(hf_layer, θ2)    {RBM2 training}
6:  End for
7:  For n = 1 to S2 do
8:      hf_layer ← φ1(x, θ1)
9:      pc_layer ← φ2(hf_layer, θ2)
10:     u ← Encapsule(pc_layer)
11:     For all capsule i in PrimaryCaps layer: û_{j|i} ← W_ij · u_i    {contribution, computes Eq. 1}
12:     For all capsule i in PrimaryCaps layer and capsule j in DigitCaps layer: b_ij ← 0
13:     For m = 1 to iter do
14:         For all capsule i in PrimaryCaps layer: c_ij ← softmax(b_ij)    {softmax, computes Eq. 2-1}
15:         For all capsule j in DigitCaps layer: s_j ← Σ_i c_ij û_{j|i}    {dynamic routing, computes Eq. 2-2}
16:         For all capsule j in DigitCaps layer: v_j ← squash(s_j)    {squash, computes Eq. 2-3}
17:         For all capsule i in PrimaryCaps layer and capsule j in DigitCaps layer: b_ij ← b_ij + û_{j|i} · v_j
18:     End for
19:     Len ← length of v
20:     L ← loss of v    {loss, computes Eq. 3}
21:     W ← W − ∂L/∂W
22:     θ1 ← θ1 − ∂L/∂θ1
23:     θ2 ← θ2 − ∂L/∂θ2
24: End for
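The RBM pretraining passes (lines 1-6) can be sketched in NumPy as one-step contrastive divergence (CD-1) on a Bernoulli RBM; this is a minimal illustration under that assumption, and the sizes, epoch count, learning rate, and the `cd1_step` helper are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.05):
    """One CD-1 update for a Bernoulli RBM with weights W, visible bias a, hidden bias b."""
    ph0 = sigmoid(v0 @ W + b)                        # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)                      # reconstruct visible units
    ph1 = sigmoid(pv1 @ W + b)                       # hidden probabilities given reconstruction
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)   # positive minus negative phase
    a += lr * (v0 - pv1).mean(0)
    b += lr * (ph0 - ph1).mean(0)
    return W, a, b

# Toy pretraining loop for RBM1 (lines 1-3): hf_layer = phi1(x, theta1)
x = (rng.random((64, 100)) < 0.5).astype(float)      # mini-batch of binary features
W = rng.normal(0, 0.01, (100, 50))
a, b = np.zeros(100), np.zeros(50)
for _ in range(5):                                   # S1 epochs (toy value)
    W, a, b = cd1_step(x, W, a, b)
hf_layer = sigmoid(x @ W + b)                        # features passed on to RBM2 (lines 4-6)
```

RBM2 would repeat the same loop with `hf_layer` as its visible input, yielding `pc_layer`.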