Author manuscript; available in PMC: 2019 Nov 1.
Published in final edited form as: J Neurosci Methods. 2018 Aug 18;309:25–34. doi: 10.1016/j.jneumeth.2018.08.019

Fig. 3.

Architecture of up-sampling using FSC or ESPC with r = 2, r = 4, and r = 8. The input and output of the model are shown in black; intermediate layers are shown in different colors, and layers of the same color have the same height and width. Layers that are identical between GoogLeNet and our architecture (blocks of conv/BN/ReLU/pooling), up to the scoring layer, are not depicted. After each intermediate up-sampling step, we sum the output of the up-sampled layer with the conv/BN output of the layer immediately preceding the pooling operation; only the layers involved in this summation are shown. Each up-sampling step is denoted by an arrow: the difference between the architectures with r = 8, r = 4, and r = 2 lies in the last up-sampling step, denoted by arrows of different styles. For r = 8, the up-sampling goes directly from the 128 × 128 × 2 layer (orange) to the output layer of size 1024 × 1024 × 2; for r = 4, 4-fold up-sampling goes from the 256 × 256 × 2 layer (purple) to the output layer; for r = 2, 2-fold up-sampling goes from the 512 × 512 × 2 layer (blue) to the output layer. The input image has one channel (size 1024 × 1024 × 1). The output layer is a tensor of shape 1024 × 1024 × 2, which is passed through a softmax layer that generates probability maps for the two classes being predicted (spine or non-spine).
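To make the ESPC up-sampling concrete, the following is a minimal NumPy sketch of the sub-pixel (pixel-shuffle) rearrangement that maps a feature map of shape (C·r², H, W) to (C, H·r, W·r). The function name `pixel_shuffle` and the random toy input are illustrative assumptions, not the authors' code; the real model would apply this rearrangement to learned convolutional feature maps, and FSC would instead use a transposed convolution.

```python
import numpy as np

def pixel_shuffle(x, r):
    """ESPC-style sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r).

    Each group of r*r channels at spatial position (h, w) is scattered
    into an r x r block of the up-sampled output.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # split channels into (c, i, j) where (i, j) indexes the sub-pixel offset
    x = x.reshape(c, r, r, h, w)
    # interleave sub-pixel offsets with the spatial dimensions: (c, h, i, w, j)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

# Toy example matching the r = 8 variant in the figure: 2 output classes,
# a 128 x 128 feature map with 2 * 8^2 = 128 channels -> 1024 x 1024 x 2.
feat = np.random.rand(2 * 8 * 8, 128, 128)
out = pixel_shuffle(feat, 8)
print(out.shape)  # (2, 1024, 1024)
```

The skip connections in the figure would simply add the up-sampled tensor to the (shape-matched) conv/BN output taken just before the corresponding pooling layer.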