Table 2. Algorithm and training procedure of convolution-capsule networks (Conv-CapsNet).
Algorithm: Conv-CapsNet training algorithm, using mini-batch stochastic gradient descent (SGD) for simplicity.
Input: mini-batch feature vector (x); number of Conv-CapsNet training epochs (S); number of dynamic routing iterations (iter).
Output: length of each capsule (Len).
1:  For n = 1 to S do
2:      conv_layer ← conv(x, CW)
3:      hf_layer ← fc1(conv_layer, W1)
4:      pc_layer ← fc2(hf_layer, W2)
5:      u ← Encapsule(pc_layer)
6:      For all capsule i in PrimaryCaps layer: û_(j|i) ← W_ij · u_i    {contribution computes Eq. 1}
7:      For all capsule i in PrimaryCaps layer and capsule j in DigitCaps layer: b_ij ← 0
8:      For m = 1 to iter do
9:          For all capsule i in PrimaryCaps layer: c_i ← softmax(b_i)    {softmax computes Eq. 2-1}
10:         For all capsule j in DigitCaps layer: s_j ← Σ_i c_ij · û_(j|i)    {dynamic computes Eq. 2-2}
11:         For all capsule j in DigitCaps layer: v_j ← squash(s_j)    {squash computes Eq. 2-3}
12:         For all capsule i in PrimaryCaps layer and capsule j in DigitCaps layer: b_ij ← b_ij + û_(j|i) · v_j
13:     End for
14:     ⋯
15:     ⋯    {loss computes Eq. 3}
16:     ⋯
17:     ⋯
18:     ⋯
19:     ⋯
20: End for
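As a concrete illustration of steps 7–13, the NumPy sketch below implements the dynamic-routing loop, assuming it follows the standard routing-by-agreement procedure that the Eq. 1 and Eq. 2 references point to. The array shapes, the function names `squash` and `dynamic_routing`, and the default of three iterations are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Eq. 2-3: v = (||s||^2 / (1 + ||s||^2)) * s / ||s||."""
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Steps 7-13: route contributions from PrimaryCaps to DigitCaps.

    u_hat: (num_primary, num_digit, dim_digit) array of prediction
           vectors û_(j|i) = W_ij · u_i from Eq. 1 (step 6).
    Returns v: (num_digit, dim_digit) DigitCaps outputs.
    """
    num_primary, num_digit, _ = u_hat.shape
    b = np.zeros((num_primary, num_digit))        # step 7: b_ij <- 0
    for _ in range(n_iter):                       # step 8
        # Step 9 (Eq. 2-1): coupling coefficients c_i = softmax(b_i),
        # normalized over the DigitCaps index j for each PrimaryCaps i.
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)
        # Step 10 (Eq. 2-2): s_j = sum_i c_ij * û_(j|i).
        s = np.einsum('ij,ijd->jd', c, u_hat)
        # Step 11 (Eq. 2-3): squash keeps direction, bounds length in [0, 1).
        v = squash(s)
        # Step 12: raise b_ij where prediction and output agree.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v

# The class prediction comes from the capsule lengths Len_j = ||v_j||,
# here with 32 primary capsules, 10 classes, 16-dimensional outputs.
Len = np.linalg.norm(dynamic_routing(np.random.randn(32, 10, 16)), axis=-1)
```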
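Step 15 references Eq. 3 without reproducing it. If the loss is the usual capsule margin loss computed on the output lengths Len, it can be sketched as follows; the margins m+ = 0.9 and m− = 0.1 and the down-weighting factor λ = 0.5 are the common defaults, assumed here rather than confirmed by the table.

```python
import numpy as np

def margin_loss(lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss on capsule lengths (standard form, assumed for Eq. 3).

    lengths: (batch, num_classes) capsule lengths Len_j = ||v_j||.
    targets: (batch, num_classes) one-hot labels T_k.
    """
    # Capsules for present classes are pushed above m_pos,
    # capsules for absent classes below m_neg.
    present = targets * np.maximum(0.0, m_pos - lengths) ** 2
    absent = lam * (1.0 - targets) * np.maximum(0.0, lengths - m_neg) ** 2
    return np.sum(present + absent, axis=1).mean()
```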