TABLE III. Deep Networks in This Paper.
| Network | # of Inputs | Base CNN | Weight Initialization | RNN | Attention |
|---|---|---|---|---|---|
| CNN-3D | 1 | 3D VGG16 | Random (R) | - | - |
| CRN1-R | 1 | VGG16 | Random (R) | P | - |
| CRN1-2D | 1 | VGG16 | ImageNet | P | - |
| CRN2-2D | 2 | VGG16 | ImageNet | P | - |
| CRN3-2D | 3 | VGG16 | ImageNet | P | - |
| CAN1-R | 1 | VGG16 | Random (R) | - | P |
| CAN1-2D | 1 | VGG16 | ImageNet | - | P |
| CAN2-2D | 2 | VGG16 | ImageNet | - | P |
| CAN3-2D | 3 | VGG16 | ImageNet | - | P |
Networks used in this paper and their attributes. CNN-3D is a VGG16-style 3D CNN. CRNj-2D are convolutional recurrent networks [12] that use recurrent modules to encode slice-wise features; the CRN variants differ in weight initialization or in the number of inputs. CANj-2D are the convolutional attention networks proposed in this paper, each constructed to process a different number of inputs from a time series. The index j denotes the number of inputs a network accepts; for example, CAN1-R accepts one input and its weights are randomly initialized. In the RNN and Attention columns, P indicates that the corresponding module is present.
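The aggregation step that distinguishes the CANj networks from the CRNj networks can be illustrated with a minimal sketch: instead of a recurrent module, attention weights are computed over per-slice CNN features and used to form a weighted sum. This is a generic softmax-attention pooling sketch, not the paper's exact module; the scoring vector `w`, the feature dimension, and the slice count below are all illustrative assumptions.

```python
import numpy as np

def attention_pool(slice_features, w):
    """Pool per-slice CNN features into one vector via softmax attention.

    slice_features: (num_slices, feat_dim) embeddings, e.g. from a 2D VGG16 backbone.
    w: (feat_dim,) hypothetical learned scoring vector (illustrative only).
    Returns the attention-weighted feature vector and the attention weights.
    """
    scores = slice_features @ w                    # one scalar score per slice
    scores = scores - scores.max()                 # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    return alpha @ slice_features, alpha           # weighted sum over slices

# Illustrative shapes: 16 slices, 512-d features (dimensions are assumptions).
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 512))
w = rng.standard_normal(512)
pooled, alpha = attention_pool(feats, w)
```

A recurrent (CRN-style) aggregator would instead feed `feats` slice by slice into an RNN and keep its final state; the attention pooling above is order-independent and exposes which slices dominate the prediction via `alpha`.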