Table 2.

Algorithm and training procedure of convolution-capsule networks (Conv-CapsNet).

Algorithm: Conv-CapsNet training algorithm, using mini-batch stochastic gradient descent (SGD) for simplicity.
Input: mini-batch feature vectors (x);
   number of Conv-CapsNet training epochs (S);
   number of dynamic routing iterations (iter).
Output: length of each capsule (Len).
1:  For n = 1 to S do
2:    conv_layer ← conv(x, CW)
3:    hf_layer ← fc1(conv_layer, W1)
4:    pc_layer ← fc2(hf_layer, W2)
5:    u ← Encapsule(pc_layer)
6:    For all capsule i in PrimaryCaps layer: û_j|i ← W_ij u_i   {contribution computed by Eq. 1}
7:    For all capsule i in PrimaryCaps layer and capsule j in DigitCaps layer: b_ij ← 0
8:    For m = 1 to iter do
9:      For all capsule i in PrimaryCaps layer: c_i ← softmax(b_i)   {softmax computed by Eq. 2-1}
10:     For all capsule j in DigitCaps layer: s_j ← Σ_i c_ij û_j|i   {dynamic routing computed by Eq. 2-2}
11:     For all capsule j in DigitCaps layer: v_j ← squash(s_j)   {squash computed by Eq. 2-3}
12:     For all capsule i in PrimaryCaps layer and capsule j in DigitCaps layer: b_ij ← b_ij + û_j|i · v_j
13:   End for
14:   Len ← length of v
15:   L ← loss of v   {loss computed by Eq. 3}
16:   W ← W − ∂L/∂W
17:   CW ← CW − ∂L/∂CW
18:   W1 ← W1 − ∂L/∂W1
19:   W2 ← W2 − ∂L/∂W2
20: End for
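
The dynamic routing inner loop (algorithm lines 8-13) can be made concrete with a short sketch. The code below is a minimal NumPy illustration of Eqs. 2-1 to 2-3 and the agreement update, not the authors' released implementation; the capsule shapes (32 primary capsules routed to 10 DigitCaps of dimension 16), the helper names squash and dynamic_routing, and the epsilon guard are illustrative assumptions.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash non-linearity (Eq. 2-3): shrinks vector length into [0, 1)
    while preserving direction, so length can act as a probability."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Route prediction vectors u_hat (shape [n_primary, n_digit, dim])
    to DigitCaps outputs v (shape [n_digit, dim])."""
    n_primary, n_digit, _ = u_hat.shape
    b = np.zeros((n_primary, n_digit))            # routing logits, line 7
    for _ in range(n_iter):                       # lines 8-13
        # Coupling coefficients c_ij: softmax over DigitCaps j (Eq. 2-1).
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)     # weighted sum s_j (Eq. 2-2)
        v = squash(s)                             # v_j (Eq. 2-3)
        # Agreement update b_ij ← b_ij + û_j|i · v_j (line 12).
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v

# Example: 32 primary capsules, 10 DigitCaps of dimension 16 (assumed shapes).
u_hat = np.random.randn(32, 10, 16)
v = dynamic_routing(u_hat, n_iter=3)
Len = np.linalg.norm(v, axis=-1)                  # capsule lengths, line 14

Note that the softmax is taken over the DigitCaps index j, so each primary capsule distributes a unit budget of coupling across the output capsules; the agreement term then sharpens that distribution toward the output capsules whose vectors v_j align with its predictions.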