
Figure 3.

Comparisons with the residual encoder–decoder convolutional neural network (RED-CNN). The proposed MS-RDN was compared with RED-CNN in three configurations: single-slice training without TOI-oriented patch extraction (Z = 1, nonTOI), single-slice training (Z = 1), and multi-slice training (Z = 5). Breast images of the test subject were reconstructed by these RED-CNNs and MS-RDNs from the retrospectively undersampled 100-view data. The reference images were obtained by applying FDK to the 300-view data. The display window is [0.15, 0.35] cm⁻¹.
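
As a minimal sketch of how a reconstructed attenuation slice would be displayed with the fixed window stated above, the snippet below uses matplotlib with vmin/vmax set to 0.15 and 0.35 cm⁻¹. The array and file name are hypothetical placeholders, not part of the paper's code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reconstructed attenuation slice in cm^-1; in practice this
# would be a network output for the retrospectively undersampled 100-view data.
recon = np.load("recon_100view_slice.npy")  # placeholder file name

# Apply the fixed display window from the figure caption: [0.15, 0.35] cm^-1.
plt.imshow(recon, cmap="gray", vmin=0.15, vmax=0.35)
plt.axis("off")
plt.title("Reconstruction, 100-view (window [0.15, 0.35] cm$^{-1}$)")
plt.show()
```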