Algorithm 1 Attention Routing Using the Scalar Product
Input: capsules $u_i$ from layer $l$
Output: the output capsules (DigitCaps) $v_j$ of layer $l+1$
1: Affine transformation for all capsules $i$ in layer $l$ and all capsules $j$ in layer $l+1$:
       $\hat{u}_{j|i} = W_{ij} u_i$
2: Calculate the scalar-product self-attention weights, where $d$ is the capsule dimension:
       $b_{ij} = \frac{1}{\sqrt{d}} \sum_{k} \hat{u}_{j|i} \cdot \hat{u}_{j|k}$
3: Use a softmax to calculate the coupling coefficients $c_{ij}$:
       $c_{ij} = \frac{\exp(b_{ij})}{\sum_{k} \exp(b_{ik})}$
4: For all capsules $j$ in layer $l+1$, form the weighted sum of the predictions:
       $s_j = \sum_{i} c_{ij} \hat{u}_{j|i}$
5: Squash to compress the capsule length to between 0 and 1:
       $v_j = \frac{\lVert s_j \rVert^2}{1 + \lVert s_j \rVert^2} \cdot \frac{s_j}{\lVert s_j \rVert}$
6: return $v_j$
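The routing pass above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the capsule counts, dimensions, and the function and variable names (`attention_routing`, `squash`, `u_hat`) are assumptions, and the attention score in step 2 is taken to be the scaled dot product among prediction vectors of the same output capsule.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Step 5: shrink vector length into (0, 1) while preserving direction.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def attention_routing(u, W):
    """One pass of scalar-product attention routing (illustrative sketch).

    u: (n_in, d_in)                 input capsules from layer l
    W: (n_in, n_out, d_out, d_in)   transformation matrices
    returns v: (n_out, d_out)       output capsules of layer l+1
    """
    # Step 1: affine transformation, u_hat[i, j] = W[i, j] @ u[i]
    u_hat = np.einsum('ijab,ib->ija', W, u)                 # (n_in, n_out, d_out)
    d = u_hat.shape[-1]
    # Step 2: scaled scalar products among predictions for the same output j
    b = np.einsum('ija,kja->ij', u_hat, u_hat) / np.sqrt(d) # (n_in, n_out)
    # Step 3: softmax over output capsules j -> coupling coefficients c
    c = np.exp(b - b.max(axis=1, keepdims=True))
    c /= c.sum(axis=1, keepdims=True)
    # Step 4: weighted sum of predictions for each output capsule j
    s = np.einsum('ij,ija->ja', c, u_hat)                   # (n_out, d_out)
    # Step 5: squash, then step 6: return
    return squash(s)

rng = np.random.default_rng(0)
u = rng.standard_normal((8, 4))        # 8 input capsules of dimension 4
W = rng.standard_normal((8, 3, 6, 4))  # 3 output capsules of dimension 6
v = attention_routing(u, W)
print(v.shape)  # (3, 6)
```

The squash nonlinearity guarantees every output capsule has length strictly below 1, so the length can be read as a probability of entity presence.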