TABLE II.
Optimization Strategy | Training Strategy | Network Architecture
---|---|---
Steepest Descent (SD) | Pre-trained Denoiser (PD) | No Sharing (NS) |
Conjugate Gradient (CG) | End-to-End Training (ET) | With Sharing (WS) |

Deep learning frameworks derived from the above three parameters:
Framework | Description
---|---
CG-ET-WS | The proposed MoDL framework, which uses the conjugate gradient (CG) algorithm to solve the data-consistency (DC) subproblem, with end-to-end training (ET) and with sharing (WS) of weights across iterations.
CG-ET-NS | The difference from MoDL is that the weights are not shared (NS) across iterations. |
SD-ET-WS | The difference from MoDL is that the optimization algorithm in the DC layer is changed to steepest descent (SD) instead of CG.
CG-PD-NS | The difference from MoDL is the use of pre-trained denoisers (PD) within the iterations. We trained 10 different Dw blocks with descending noise variance and used them in the 10 iterations during testing. Since these are different denoisers, the weights are not shared (NS).
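
To make the naming in Table II concrete, the following is a minimal NumPy sketch of an unrolled reconstruction in which the DC subproblem is solved either by CG or by SD, and the list of denoisers passed to the loop either reuses one object (weight sharing, WS) or would hold distinct, e.g. pre-trained, Dw blocks (no sharing, NS). The names `dc_cg`, `dc_sd`, and `unrolled_recon`, the toy operator, and the identity placeholder denoiser are illustrative assumptions, not the authors' implementation; training (ET vs. PD) is not modeled here, the denoiser list simply stands for whatever weights training produced.

```python
import numpy as np

def dc_cg(A, b, z, lam, n_iter=10):
    """DC subproblem via conjugate gradient (CG):
    solve (A^H A + lam I) x = A^H b + lam z."""
    normal_op = lambda v: A.conj().T @ (A @ v) + lam * v
    rhs = A.conj().T @ b + lam * z
    x = np.zeros_like(rhs)
    r = rhs.copy()                      # residual for the zero initial guess
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(n_iter):
        Ap = normal_op(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def dc_sd(A, b, z, lam, n_iter=10, step=1e-2):
    """Same subproblem via steepest descent (SD) with a fixed step size."""
    x = z.copy()
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - b) + lam * (x - z)
        x = x - step * grad
    return x

def unrolled_recon(A, b, denoisers, lam=0.05, dc_solver=dc_cg):
    """Alternate the denoiser Dw and the DC step, one pass per list entry.
    Reusing the same denoiser object corresponds to weight sharing (WS);
    distinct objects per iteration correspond to no sharing (NS)."""
    x = A.conj().T @ b                  # simple initial guess
    for Dw in denoisers:
        z = Dw(x)                       # z_k = Dw(x_k)
        x = dc_solver(A, b, z, lam)     # data-consistency update
    return x

# Toy example: random forward operator and an identity "denoiser" stand-in.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(30)
Dw = lambda x: x                        # placeholder for a trained CNN block

x_cg_ws = unrolled_recon(A, b, [Dw] * 10)                    # CG-*-WS style
x_sd_ws = unrolled_recon(A, b, [Dw] * 10, dc_solver=dc_sd)   # SD-*-WS style
pretrained = [Dw] * 10   # in the real NS/PD case, 10 distinct trained Dw blocks
x_cg_ns = unrolled_recon(A, b, pretrained)                   # CG-PD-NS style
print(np.linalg.norm(x_cg_ws - x_true), np.linalg.norm(x_sd_ws - x_true))
```

In this sketch, swapping `dc_solver` switches between the CG and SD rows of Table II, while the contents of the denoiser list encode the WS/NS and ET/PD distinctions.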