Table 1. Model architectures used for the experiments.
STPNet is the model with short-term synaptic adaptation, RNN is the recurrent neural network, and STPRNN combines both. Convolutional layers are denoted "conv<receptive field size>-<number of channels>". "maxpool" denotes max pooling with a 2×2 window and a stride of 2. "FC" denotes fully connected layers with the given number of units. "RC" denotes recurrent layers with the given number of units. The ReLU activation function is omitted for brevity. The shared pretrained feature-extractor layers are shown in italics (63658 parameters in total).
Model | Network architecture | Params
---|---|---
STPNet | *conv5-8, maxpool, conv5-16, maxpool, FC-128, FC-64*, FC-16, FC-1, sigmoid | 64715
RNN | *conv5-8, maxpool, conv5-16, maxpool, FC-128, FC-64*, RC-16, FC-1, sigmoid | 64971
STPRNN | *conv5-8, maxpool, conv5-16, maxpool, FC-128, FC-64*, FC/RC-16, FC-1, sigmoid | 65995
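To make the layer listing concrete, below is a minimal PyTorch sketch of the shared feature extractor and the STPNet and RNN heads. The framework choice (PyTorch), the input image size (64×64 grayscale), the convolution padding, and the `adaptation` gating term are assumptions for illustration only; the actual short-term plasticity update rule and the combined STPRNN head are defined in the main text, and exact parameter counts depend on the true input dimensions.

```python
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    """Shared pretrained layers (italicized in Table 1): conv5-8, maxpool,
    conv5-16, maxpool, FC-128, FC-64. Input size is an assumption."""

    def __init__(self, in_channels=1, flat_features=16 * 16 * 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, 2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )

    def forward(self, x):
        return self.fc(self.conv(x))


class STPNetHead(nn.Module):
    """FC-16 with short-term adaptation, then FC-1 + sigmoid (sketch only)."""

    def __init__(self):
        super().__init__()
        self.fc16 = nn.Linear(64, 16)
        self.out = nn.Linear(16, 1)

    def forward(self, x, adaptation):
        # 'adaptation' is a placeholder gating of the presynaptic inputs;
        # the actual short-term synaptic adaptation dynamics are specified
        # in the main text, not here.
        h = torch.relu(self.fc16(x * adaptation))
        return torch.sigmoid(self.out(h))


class RNNHead(nn.Module):
    """RC-16 (vanilla recurrent layer), then FC-1 + sigmoid."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.RNNCell(64, 16)
        self.out = nn.Linear(16, 1)

    def forward(self, x, h):
        h = self.rnn(x, h)
        return torch.sigmoid(self.out(h)), h


# Example forward pass (shapes only; batch size and image size are assumptions).
features = FeatureExtractor()(torch.randn(4, 1, 64, 64))
prob = STPNetHead()(features, torch.ones(4, 64))
```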