Author manuscript; available in PMC 2020 Feb 1. Published in final edited form as: IEEE Trans Med Imaging. 2018 Aug 13;38(2):394–405. doi: 10.1109/TMI.2018.2865356

TABLE I.

Description of the trainable parameters of the deep network for the architectures with sharing (WS) and no sharing (NS) of weights between iterations. The weight-sharing strategy provides a 10-fold reduction in the number of trainable parameters, improving robustness when training data are scarce.

Layer   BN (β + γ + μ + σ²)   Conv. filters       Total
conv1   64 + 64 + 64 + 64     3 × 3 × 2 × 64       1,408
conv2   64 + 64 + 64 + 64     3 × 3 × 64 × 64     37,120
conv3   64 + 64 + 64 + 64     3 × 3 × 64 × 64     37,120
conv4   64 + 64 + 64 + 64     3 × 3 × 64 × 64     37,120
conv5   2 + 2 + 2 + 2         3 × 3 × 64 × 2       1,160
λ       -                     -                        1

Number of trainable parameters with the WS strategy: 113,929
Number of trainable parameters with the NS strategy (WS count × 10 iterations): 1,139,290
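The counts in the table can be reproduced with a short script. The following is a minimal PyTorch sketch, not the authors' released implementation: the layer ordering, padding, ReLU placement, and the initial value of λ are assumptions, while the kernel sizes, channel counts, bias-free convolutions, and the batch-norm bookkeeping (β, γ, μ, σ²) follow the table. Summing the learnable parameters and the batch-norm running statistics reproduces the WS total of 113,929; multiplying by the 10 unrolled iterations gives the NS total of 1,139,290.

```python
import torch
import torch.nn as nn


class SharedCNNBlock(nn.Module):
    """Five conv + batch-norm layers with 2 input/output channels, 64 features,
    3x3 kernels and no conv bias, plus one trainable scalar lambda (Table I)."""

    def __init__(self, features: int = 64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(2, features, 3, padding=1, bias=False),         # conv1
            nn.BatchNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1, bias=False),  # conv2
            nn.BatchNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1, bias=False),  # conv3
            nn.BatchNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1, bias=False),  # conv4
            nn.BatchNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, 2, 3, padding=1, bias=False),         # conv5
            nn.BatchNorm2d(2),
        )
        # Trainable scalar lambda from the table; the initial value is an assumption.
        self.lam = nn.Parameter(torch.tensor(0.05))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)


block = SharedCNNBlock()
# Table I counts beta, gamma, mu, and sigma^2 per batch-norm layer, so the running
# statistics (stored as buffers in PyTorch) are added to the learnable parameters.
n_learnable = sum(p.numel() for p in block.parameters())
n_running = sum(b.numel() for b in block.buffers() if b.ndim > 0)  # skip num_batches_tracked
print(n_learnable + n_running)         # 113929 -> WS total
print((n_learnable + n_running) * 10)  # 1139290 -> NS total for 10 unrolled iterations
```

With weight sharing, this single block is reused at every unrolled iteration, so the parameter count stays at 113,929; without sharing, each of the 10 iterations carries its own copy, giving the 10-fold larger NS count.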