
TABLE II.

Categorization of iterative deep-learning frameworks

Optimization Strategy     Training Strategy            Network Architecture
Steepest Descent (SD)     Pre-trained Denoiser (PD)    No Sharing (NS)
Conjugate Gradient (CG)   End-to-End Training (ET)     With Sharing (WS)

Deep-learning frameworks derived from the above three parameters
Framework   Description
CG-ET-WS    The proposed MoDL framework, which uses the conjugate gradient (CG) algorithm to solve the DC subproblem, with end-to-end training (ET) and with sharing (WS) of weights across iterations.
CG-ET-NS    Differs from MoDL in that the weights are not shared (NS) across iterations.
SD-ET-WS    Differs from MoDL in that the optimization algorithm in the DC layer is steepest descent (SD) instead of CG.
CG-PD-NS    Differs from MoDL in the use of pre-trained denoisers (PD) within the iterations. We trained 10 different Dw blocks with descending noise variance and used them over the 10 iterations during testing. Since the denoisers differ across iterations, the weights are not shared (NS).
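To make the distinctions in the table concrete, below is a minimal NumPy sketch of the alternating structure these variants share: each unrolled iteration applies a denoiser Dw and then enforces data consistency (DC) by solving (A^H A + λI)x = A^H b + λz. The names here (apply_AHA, AH_b, denoisers, lam, the fixed SD step size) are illustrative assumptions, not the authors' implementation, and the denoiser is a generic callable standing in for the trained CNN.

```python
import numpy as np

def conjugate_gradient(apply_AHA, rhs, lam, x0, num_steps=10, tol=1e-6):
    """Solve (A^H A + lam*I) x = rhs with CG (the DC subproblem)."""
    op = lambda v: apply_AHA(v) + lam * v
    x = x0.copy()
    r = rhs - op(x)
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(num_steps):
        Ap = op(p)
        alpha = rs_old / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def unrolled_recon(AH_b, apply_AHA, denoisers, lam=0.05, dc="CG"):
    """Alternate denoising and data consistency over unrolled iterations.

    denoisers : list of callables, one per iteration.
        WS variants pass the same callable 10 times (shared weights);
        NS/PD variants pass 10 distinct denoisers.
    dc : "CG" solves the DC subproblem with conjugate gradient;
         "SD" takes a single steepest-descent step on the DC cost instead.
    """
    x = AH_b.copy()                        # initialize with A^H b
    for Dw in denoisers:
        z = Dw(x)                          # CNN denoiser output z_k
        rhs = AH_b + lam * z               # RHS of (A^H A + lam*I) x = A^H b + lam*z
        if dc == "CG":
            x = conjugate_gradient(apply_AHA, rhs, lam, x0=x)
        else:                              # "SD": one gradient step, fixed step size (illustrative)
            grad = (apply_AHA(x) + lam * x) - rhs
            x = x - 0.1 * grad
    return x
```

In this sketch, CG-ET-WS would pass the same network ten times in denoisers with dc="CG"; CG-ET-NS and CG-PD-NS would pass ten distinct networks (the latter pre-trained at descending noise variances); and SD-ET-WS would keep the shared network but set dc="SD". Under end-to-end training, the entire unrolled loop is differentiated with respect to the denoiser weights.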