Br J Radiol. 2023 Sep 4;96(1150):20230292. doi: 10.1259/bjr.20230292

Table 1.

Example direct AI reconstruction methods

| Name | Architecture [total parameters] | Input | Target | Loss function, optimiser (epochs), validation | Number of samples (training, validation, test) |
|---|---|---|---|---|---|
| AUTOMAP, Zhu et al. 2018 [10] | Two fully connected layers and CNN [∼800M] | 2D noisy sinograms | T1w brain images (128 × 128) | MSE with L1-norm penalty on network weights in final hidden layer; RMSProp (100); validation not used | 50,000; n/a; 1 |
| DeepPET, Häggström et al. 2019 [11] | CED [>60M] | 2D noisy sinograms (269 × 288) | Ground-truth PET images (128 × 128) | MSE; Adam (150); validation used | 203,305 (70%); 43,499 (15%); 44,256 (15%) |
| DPIR-Net, Hu et al. 2020 [39] | CED [>60M] + discriminator [>3.5M] | 2D noisy sinograms (269 × 288) | Ground-truth PET images (128 × 128) | Wasserstein GAN + VGG + MSE; Adam (100); validation not used | 37,872 (80%); n/a; 9468 (20%) |
| CED extended to SSRB sinograms from large-FOV PET, Ma et al. 2022 [40] | CED [∼64M] | 2D noisy sinograms (269 × 288) | PET images reconstructed from list-mode data using OSEM + PSF + TOF (128 × 128) | MSE + SSIM + VGG; Adam (300); validation used | 35,940 (76%); 5590 (12%); 5590 (12%) |
| FastPET, Whiteley et al. 2021 [48] | U-Net [∼20M] | Noisy histo-image slices + attenuation map slices (2 × 440 × 440 × 96) | Image slices reconstructed using OSEM + PSF (440 × 440 × 96) | MAE + MS-SSIM; Adam (500); validation used | 20,297 slices (74%); 1767 slices (6%); 5208 slices (20%) |
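The last column of the table reports sample counts for training, validation, and test subsets. As a hedged illustration of how such a partition is typically produced (the random-shuffle scheme here is an assumption, not the papers' documented procedure; the ratios are taken from the DeepPET row):

```python
import numpy as np

def split_indices(n, ratios=(0.70, 0.15, 0.15), seed=0):
    """Randomly partition n sample indices into train/val/test subsets."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)           # shuffle so the subsets are disjoint and random
    n_train = int(round(n * ratios[0]))
    n_val = int(round(n * ratios[1]))
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# DeepPET's total sample count (203,305 + 43,499 + 44,256 = 291,060)
train, val, test = split_indices(291060)
print(len(train), len(val), len(test))
```

Note that the published counts do not match an exact 70/15/15 division, so the papers likely split along patient or acquisition boundaries rather than individual sinograms.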

CNN refers to a convolutional neural network; CED refers to a convolutional encoder-decoder. VGG in this table refers to a perceptual loss computed with a VGG network.

3D, three-dimensional; FBP, filtered backprojection; FOV, field of view; MAE, mean absolute error; MLEM, maximum likelihood-expectation maximisation; MSE, mean squared error; MS-SSIM, multiscale structural similarity index; OSEM, ordered subsets expectation maximisation; PSF, point spread function; SSIM, structural similarity index; SSRB, single-slice rebinning; T1w, T1-weighted; TOF, time of flight.
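Several entries combine a pixel-wise loss with a structural one (e.g. MSE + SSIM in Ma et al.). A minimal sketch of such a composite loss, using a simplified global-statistics SSIM (published SSIM uses local sliding windows, and the `alpha` weighting here is a hypothetical choice, not taken from any of the cited papers):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def global_ssim(x, y, data_range=1.0):
    """SSIM computed from whole-image statistics.
    (A simplification: the standard SSIM averages over local windows.)"""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def combined_loss(pred, target, alpha=0.5):
    """Hypothetical weighting: alpha * MSE + (1 - alpha) * (1 - SSIM)."""
    return alpha * mse(pred, target) + (1 - alpha) * (1 - global_ssim(pred, target))

rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(combined_loss(img, img))  # identical images give a loss near 0
```

In practice the `1 - SSIM` term penalises structural dissimilarity while MSE penalises per-pixel intensity error, which is why the two are often summed.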