Table 2.
| Name | Architecture [total parameters] | Input | Target | Loss function; optimiser (epochs); validation; training method | Number of samples (training, validation, test) |
| --- | --- | --- | --- | --- | --- |
| EM-Net, Gong et al. 2019 [54] | 10 modules, U-Net, shared [∼2M] | Previous output/iteration | 3D high-count reconstruction (128 × 128 × 46) | MSE; Adam; validation not used; gradient truncation | 16, n/a, 1 |
| MAPEM-Net, Gong et al. 2019 [61] | 8 modules, U-Net, not shared [∼16M] | Current output from a block | 3D high-count reconstruction (128 × 128 × 105) | MSE; [details not specified]; validation not used; end-to-end | 18, n/a, 1 |
| FBSEM-Net, Mehranian and Reader 2020 [52] | 10 modules, CNN, shared [∼77k] | Previous output/iteration | 3D high-count reconstruction, cropped (114 × 114 × 128) | MSE; Adam (200); validation used; gradient truncation | 45, 5, 5 |
| Iterative Neural Network, Lim et al. 2020 [56] | 10 modules, CNN, not shared [∼40k] | Current output from a block | 3D true activity image (200 × 200 × 112) | MSE; Adam (500); validation not used; sequential training | 4, n/a, 1 |
| TransEM, Hu and Liu 2022 [62] | 10 modules, Swin Transformer [details not specified] | Previous output/iteration | 2D high-count reconstruction | MSE; Adam; validation used; gradient truncation | 510, 30, 60 |
AI, artificial intelligence; CNN, convolutional neural network; MSE, mean squared error; 2D, two-dimensional; 3D, three-dimensional; PET, positron emission tomography.
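The entries above share a common unrolled design: a fixed number of modules, each containing a small learned network whose weights are either shared across modules or distinct per module, trained against high-count (or true activity) images with an MSE loss and, in most cases, the Adam optimiser. The following is a minimal, hypothetical PyTorch sketch of that generic pattern only; it is not the published implementation of any method in Table 2, and the layer sizes, learning rate, image shapes, and epoch count are placeholder assumptions.

```python
# Minimal, hypothetical sketch of a generic unrolled network of the kind
# summarised in Table 2; NOT the implementation of any listed method.
import torch
import torch.nn as nn

class RegularizerCNN(nn.Module):
    """Small 3D CNN acting as the learned component of one unrolled module."""
    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class UnrolledNet(nn.Module):
    """n_modules unrolled blocks; a single CNN is reused when weights are shared."""
    def __init__(self, n_modules: int = 10, shared: bool = True):
        super().__init__()
        self.n_modules = n_modules
        n_blocks = 1 if shared else n_modules
        self.blocks = nn.ModuleList(RegularizerCNN() for _ in range(n_blocks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each module refines the previous module's output; a real unrolled
        # method would interleave a data-fidelity (e.g. EM) update here.
        for i in range(self.n_modules):
            block = self.blocks[0] if len(self.blocks) == 1 else self.blocks[i]
            x = x + block(x)
        return x

# End-to-end training with MSE loss and Adam, as listed for most entries.
model = UnrolledNet(n_modules=10, shared=True)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

low_count = torch.rand(1, 1, 32, 32, 32)   # placeholder initial estimate
high_count = torch.rand(1, 1, 32, 32, 32)  # placeholder high-count target

for epoch in range(2):  # the papers report on the order of 200-500 epochs
    optimiser.zero_grad()
    prediction = model(low_count)
    loss = loss_fn(prediction, high_count)
    loss.backward()
    optimiser.step()
```

In this sketch, weight sharing changes only how many distinct blocks are instantiated; training the whole chain end-to-end versus truncating gradients between modules (as several Table 2 entries do) would differ only in how the backward pass is handled.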