Table 2. Deep learning approaches for AD/CN classification and prediction of MCI-to-AD conversion.
Study | Task | Method | Architecture | Accuracy |
---|---|---|---|---|
Suk and Shen, 2013 | AD/CN classification; MCI/CN classification; MCI-to-AD conversion prediction | Feature representation learned from MRI and PET data with a stacked autoencoder | Feature learning: SAE; classifier: SVM | AD/CN: 95.9%; MCI/CN: 85.0%; MCI-to-AD: 75.8% |
Ngiam et al., 2011; Liu et al., 2015 | AD/CN classification | Extraction of complementary information from multimodal neuroimaging data | SAE with a zero-masking strategy and a softmax logistic regressor | 91.40% |
Liu et al., 2014 | AD/CN classification | Extraction of complementary information from multimodal neuroimaging data | Stacked SAE with a softmax regression layer | 87% |
Li et al., 2014 | AD/CN classification; MCI-to-AD conversion prediction | Network trained on subjects with both MRI and PET scans encodes the non-linear relationship between the modalities; the trained network is then used to estimate PET patterns for subjects with MRI data only | 3D CNN | AD/CN: 92.87%; MCI-to-AD: 72.44% |
Vu et al., 2017 | AD/CN classification | MRI and FDG PET scans | SAE and 3D CNN | 91.10% |
Choi and Jin, 2018 | AD/CN classification | | 3D CNN | 96.00% |
Liu et al., 2018b | AD/CN classification | Three independent data sets (training: ADNI-1; testing: ADNI-2 and MIRIAD) | 3D CNN | ADNI-2: 91.09%; MIRIAD: 92.75% |
Cheng and Liu, 2017 | AD/CN classification | Neuroimages from MRI and PET scans; the per-modality results are combined | Two 3D CNNs | 89.6% |
Cheng et al., 2017 | AD/CN classification | Feature extraction from MRI images | Single 3D CNN | 87.2% |
Korolev et al., 2017 | AD/CN classification | Manual feature extraction | Plain 3D CNN (VoxCNN) and residual network (ResNet) | 80% |
Aderghal et al., 2017 | AD/CN classification | 2D slices from the hippocampal region in axial, sagittal, and coronal directions | 2D CNN | 85.90% |
Lu et al., 2018 | AD/CN classification; MCI-to-AD conversion prediction | SAE for pre-training; DNN for the final prediction | SAE and DNN | AD/CN: 84.6%; MCI-to-AD: 82.93% |
Liu et al., 2018a | Intra-slice and inter-slice features for AD/CN classification | 3D PET images decomposed into 2D slices | Combination of 2D CNN and RNNs | 91.2% |
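Several of the tabulated studies follow a hybrid pattern in which an autoencoder learns a compact representation of precomputed imaging features and a conventional classifier such as an SVM makes the final decision (e.g., Suk and Shen, 2013). The following is a minimal sketch of that pattern only, assuming synthetic feature vectors and a single autoencoder layer where the cited work stacks several; none of the sizes below reflect the published configurations.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class Autoencoder(nn.Module):
    """One autoencoder layer; the cited studies stack several of these (SAE)."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic stand-in for precomputed MRI/PET features (sizes are illustrative).
X = torch.rand(200, 90)                # 200 subjects, 90 features
y = np.random.randint(0, 2, size=200)  # 0 = CN, 1 = AD (random labels, demo only)

ae = Autoencoder(n_in=90, n_hidden=30)
optimizer = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised pre-training: learn to reconstruct the input features.
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(ae(X), X)
    loss.backward()
    optimizer.step()

# Hybrid step: the learned representation feeds a conventional SVM classifier.
with torch.no_grad():
    Z = ae.encoder(X).numpy()
svm = SVC(kernel="rbf").fit(Z, y)
print("Training accuracy:", svm.score(Z, y))
```

The zero-masking variant (Liu et al., 2015) would additionally corrupt a random subset of each input to zero before reconstruction, a denoising strategy intended to make the shared representation more robust when one modality is missing.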
From the available literature, the 3D CNN model developed by Choi and Jin (2018) appears to perform best among the reviewed approaches, with an AD/CN classification accuracy of 96.0%.
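Most of the strongest entries in the table, including Choi and Jin (2018), apply a 3D CNN directly to volumetric scans. The sketch below illustrates that family of models under assumed dimensions: the single-channel 64-voxel-cube input, the channel counts, and the layer depth are illustrative choices and do not reproduce any cited architecture.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Toy 3D CNN for binary AD/CN classification of a volumetric scan."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # 1 input channel (one modality)
            nn.ReLU(),
            nn.MaxPool3d(2),                             # 64^3 -> 32^3
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                             # 32^3 -> 16^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16 * 16, 64),            # 16 channels over a 16^3 grid
            nn.ReLU(),
            nn.Linear(64, 2),                            # logits for CN vs. AD
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = Small3DCNN()
scan = torch.rand(4, 1, 64, 64, 64)  # batch of 4 synthetic 64^3 volumes
logits = model(scan)
print(logits.shape)                  # torch.Size([4, 2])
```

Multimodal variants such as Cheng and Liu (2017) train one such network per modality (MRI and PET) and combine the two networks' outputs for the final decision.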