Table 4.
Objectives | Type of diagnosis | Source of data | Number of subjects (n) | Machine learning method(s), splitting strategy and cross validation | Outcomes | Year | References |
---|---|---|---|---|---|---|---|
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Fuzzy neural system with 10-fold cross validation | Testing accuracy = 100% | 2016 | Abiyev and Abizade, 2016 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | RPART, C4.5, PART, Bagging CART, random forest, Boosted C5.0, SVM | SVM: Accuracy = 97.57%; Sensitivity = 0.9756; Specificity = 0.9987; NPV = 0.9995 | 2019 | Aich et al., 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | DBN of 2 RBMs | Testing accuracy = 94% | 2016 | Al-Fatlawi et al., 2016 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | EFMM-OneR with 10-fold cross validation or 5-fold cross validation | Accuracy = 94.21% | 2019 | Sayaydeha and Mohammad, 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Logistic regression, LDA, Gaussian naïve Bayes, decision tree, KNN, SVM-linear, SVM-RBF with leave-one-subject-out cross validation | Logistic regression or SVM-linear accuracy = 70% | 2019 | Ali et al., 2019a |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | LDA-NN-GA with leave-one-subject-out cross validation | Training: Accuracy = 95%; Sensitivity = 95%. Test: Accuracy = 100%; Sensitivity = 100% | 2019 | Ali et al., 2019c |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | NNge with AdaBoost with 10-fold cross validation | Accuracy = 96.30% | 2018 | Alqahtani et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Logistic regression, KNN, naïve Bayes, SVM, decision tree, random forest, DNN with 10-fold cross validation | KNN accuracy = 95.513% | 2018 | Anand et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | MLP with a train-validation-test ratio of 50:20:30 | Training accuracy = 97.86%; Test accuracy = 92.96%; MSE = 0.03552 | 2012 | Bakar et al., 2012 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31 (8 HC + 23 PD) for dataset 1 and 68 (20 HC + 48 PD) for dataset 2 | FKNN, SVM, KELM with 10-fold cross validation | FKNN accuracy = 97.89% | 2018 | Cai et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | SVM, logistic regression, ET, gradient boosting, random forest with train-test split ratio = 80:20 | Logistic regression accuracy = 76.03% | 2019 | Celik and Omurca, 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | MLP, GRNN with a training-test ratio of 50:50 | GRNN: Error rate = 0.0995 (spread parameter = 195.1189); Error rate = 0.0958 (spread parameter = 1.2); Error rate = 0.0928 (spread parameter = 364.8) | 2016 | Çimen and Bolat, 2016 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | ECFA-SVM with 10-fold cross validation | Accuracy = 97.95%; Sensitivity = 97.90%; Precision = 97.90%; F-measure = 97.90%; Specificity = 96.50%; AUC = 97.20% | 2017 | Dash et al., 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Fuzzy classifier with 10-fold cross validation, leave-one-out cross validation or a train-test ratio of 70:30 | Accuracy = 100% | 2019 | Dastjerd et al., 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Averaged perceptron, BPM, boosted decision tree, decision forests, decision jungle, locally deep SVM, logistic regression, NN, SVM with 10-fold cross-validation | Boosted decision trees: Accuracy = 0.912105; Precision = 0.935714; F-score = 0.942368; AUC = 0.966293 | 2017 | Dinesh and He, 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 50; 8 HC + 42 PD | KNN, SVM, ELM with a train-validation ratio of 70:30 | SVM: Accuracy = 96.43%; MCC = 0.77 | 2017 | Erdogdu Sakar et al., 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 252; 64 HC + 188 PD | CNN with leave-one-person-out cross validation | Accuracy = 0.869; F-measure = 0.917; MCC = 0.632 | 2019 | Gunduz, 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM, logistic regression, KNN, DNN with a train-test ratio of 70:30 | DNN: Accuracy = 98%; Specificity = 95%; Sensitivity = 99% | 2018 | Haq et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-RBF, SVM-linear with 10-fold cross validation | Accuracy = 99%; Specificity = 99%; Sensitivity = 100% | 2019 | Haq et al., 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | LS-SVM, PNN, GRNN with conventional (train-test ratio of 50:50) and 10-fold cross validation | LS-SVM or PNN or GRNN: Accuracy = 100%; Precision = 100%; Sensitivity = 100%; Specificity = 100%; AUC = 100% | 2014 | Hariharan et al., 2014 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Random tree, SVM-linear, FBANN with 10-fold cross validation | FBANN: Accuracy = 97.37%; Sensitivity = 98.60%; Specificity = 93.62%; FPR = 6.38%; Precision = 0.979; MSE = 0.027 | 2014 | Islam et al., 2014 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-linear with 5-fold cross validation | Error rate ~0.13 | 2012 | Ji and Li, 2012 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Decision tree, random forest, SVM, GBM, XGBoost | SVM-linear: FNR = 10%; Accuracy = 0.725 | 2018 | Junior et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | CART, SVM, ANN | SVM accuracy = 93.84% | 2020 | Karapinar Senturk, 2020 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | Dataset 1: 31 (8 HC + 23 PD); Dataset 2: 40 (20 HC + 20 PD) | EWNN with a train-test ratio of 90:10 and cross validation | Dataset 1: Accuracy = 92.9%; Ensemble classification accuracy = 100.0%; Sensitivity = 100.0%; MCC = 100.0%. Dataset 2: Accuracy = 66.3%; Ensemble classification accuracy = 90.0%; Sensitivity = 93.0%; Specificity = 97.0%; MCC = 87.0% | 2018 | Khan et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Stacked generalization with CMTNN with 10-fold cross validation | Accuracy = ~70% | 2015 | Kraipeerapun and Amornsamankul, 2015 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | HMM, SVM | HMM: Accuracy = 95.16%; Sensitivity = 93.55%; Specificity = 91.67% | 2019 | Kuresan et al., 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | IGWO-KELM with 10-fold cross validation | Iteration number = 100; Accuracy = 97.45%; Sensitivity = 99.38%; Specificity = 93.48%; Precision = 97.33%; G-mean = 96.38%; F-measure = 98.34% | 2017 | Li et al., 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SCFW-KELM with 10-fold cross validation | Accuracy = 99.49%; Sensitivity = 100%; Specificity = 99.39%; AUC = 99.69%; F-measure = 0.9966; Kappa = 0.9863 | 2014 | Ma et al., 2014 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-RBF with 10-fold cross validation | Accuracy = 96.29%; Sensitivity = 95.00%; Specificity = 97.50% | 2016 | Ma et al., 2016 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Logistic regression, NN, SVM, SMO, Pegasos, AdaBoost, ensemble selection, FURIA, rotation forest, Bayesian network with 10-fold cross-validation | Average accuracy across all models = 97.06%; SMO, Pegasos, or AdaBoost accuracy = 98.24% | 2013 | Mandal and Sairam, 2013 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Logistic regression, KNN, SVM, naïve Bayes, decision tree, random forest, ANN | ANN: Accuracy = 94.87%; Specificity = 96.55%; Sensitivity = 90% | 2018 | Marar et al., 2018 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | Dataset 1: 31 (8 HC + 23 PD); Dataset 2: 40 (20 HC + 20 PD) | KNN | Dataset 1 accuracy = 90%; Dataset 2 accuracy = 65% | 2017 | Moharkan et al., 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Rotation forest ensemble with 10-fold cross validation | Accuracy = 87.1%; Kappa error = 0.63; AUC = 0.860 | 2011 | Ozcift and Gulten, 2011 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Rotation forest ensemble | Accuracy = 96.93%; Kappa = 0.92; AUC = 0.97 | 2012 | Ozcift, 2012 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-RBF with 10-fold cross validation or a train-test ratio of 50:50 | 10-fold cross validation: Accuracy = 98.95%; Sensitivity = 96.12%; Specificity = 100%; F-measure = 0.9795; Kappa = 0.9735; AUC = 0.9808 | 2016 | Peker, 2016 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | ELM with 10-fold cross validation | Accuracy = 88.72%; Recall = 94.33%; Precision = 90.48%; F-score = 92.36% | 2016 | Shahsavari et al., 2016 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Ensemble learning with 10-fold cross validation | Accuracy = 90.6%; Sensitivity = 95.8%; Specificity = 75% | 2019 | Sheibani et al., 2019 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | GLRA, SVM, bagging ensemble with 5-fold cross validation | Bagging: Sensitivity = 0.9796; Specificity = 0.6875; MCC = 0.6977; AUC = 0.9558. SVM: Sensitivity = 0.9252; Specificity = 0.8542; MCC = 0.7592; AUC = 0.9349 | 2017 | Wu et al., 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Decision tree classifier, logistic regression, SVM with 10-fold cross validation | SVM: Accuracy = 0.76; Sensitivity = 0.9745; Specificity = 0.13 | 2011 | Yadav et al., 2011 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 80; 40 HC + 40 PD | KNN, SVM with 10-fold cross validation | SVM: Accuracy = 91.25%; Precision = 0.9125; Recall = 0.9125; F-measure = 0.9125 | 2020 | Yaman et al., 2020 |
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | MAP, SVM-RBF, FLDA with 5-fold cross validation | MAP: Accuracy = 91.8%; Sensitivity = 0.986; Specificity = 0.708; AUC = 0.94 | 2014 | Yang et al., 2014 |
Classification of PD from other disorders | Differential diagnosis | Collected from participants | 50; 30 PD + 9 MSA + 5 FND + 1 somatization + 1 dystonia + 2 CD + 1 ET + 1 GPD | SVM, KNN, DA, naïve Bayes, classification tree with LOSO | SVM-linear: Accuracy = 90%; Sensitivity = 90%; Specificity = 90%; MCC = 0.794067; PE = 0.788177 | 2016 | Benba et al., 2016a |
Classification of PD from other disorders | Differential diagnosis | Collected from participants | 40; 20 PD + 9 MSA + 5 FND + 1 somatization + 1 dystonia + 2 CD + 1 ET + 1 GPD | SVM (RBF, linear, polynomial, and MLP kernels) with LOSO | SVM-linear accuracy = 85% | 2016 | Benba et al., 2016b |
Classification of PD from HC and assessment of PD severity | Diagnosis | Collected from participants | 52; 9 HC + 43 PD | SVM-RBF with cross validation | Accuracy = 81.8% | 2014 | Frid et al., 2014 |
Classification of PD from HC | Diagnosis | Collected from participants | 54; 27 HC + 27 PD | SVM with stratified 10-fold cross validation or leave-one-out cross validation | Accuracy = 94.4%; Specificity = 100%; Sensitivity = 88.9% | 2018 | Montaña et al., 2018 |
Classification of PD from HC | Diagnosis | Collected from participants | 40; 20 HC + 20 PD | KNN, SVM-linear, SVM-RBF with leave-one-subject-out or summarized leave-one-out | SVM-linear: Accuracy = 77.50%; MCC = 0.5507; Sensitivity = 80.00%; Specificity = 75.00% | 2013 | Sakar et al., 2013 |
Classification of PD from HC | Diagnosis | Collected from participants | 78; 27 HC + 51 PD | KNN, SVM-linear, SVM-RBF, ANN, DNN with leave-one-out cross validation | SVM-RBF: Accuracy = 84.62%; Precision = 88.04%; Recall = 78.65% | 2017 | Sztahó et al., 2017 |
Classification of PD from HC and assessment of PD severity | Diagnosis | Collected from participants | 88; 33 HC + 55 PD | KNN, SVM-linear, SVM-RBF, ANN, DNN with leave-one-subject-out cross validation | SVM-RBF: Accuracy = 89.3%; Sensitivity = 90.2%; Specificity = 87.9% | 2019 | Sztahó et al., 2019 |
Classification of PD from HC | Diagnosis | Collected from participants | 43; 10 HC + 33 PD | Random forests, SVM with 10-fold cross validation and a train-test ratio of 90:10 | SVM accuracy = 98.6% | 2012 | Tsanas et al., 2012 |
Classification of PD from HC | Diagnosis | Collected from participants | 99; 35 HC + 64 PD | Random forest with internal out-of-bag (OOB) validation | EER = 19.27% | 2017 | Vaiciukynas et al., 2017 |
Classification of PD from HC | Diagnosis | UCI machine learning repository and participants | 40 (20 HC + 20 PD) and 28 PD, respectively | ELM | Training data: Accuracy = 90.76%; MCC = 0.815. Test data: Accuracy = 81.55% | 2016 | Agarwal et al., 2016 |
Classification of PD from HC | Diagnosis | The Neurovoz corpus | 108; 56 HC + 52 PD | Siamese LSTM-based NN with 10-fold cross-validation | EER = 1.9% | 2019 | Bhati et al., 2019 |
Classification of PD from HC | Diagnosis | mPower database | 2,289; 2,023 HC + 246 PD | L2-regularized logistic regression, random forest, gradient boosted decision trees with 5-fold cross validation | Gradient boosted decision trees: Recall = 0.797; Precision = 0.901; F1-score = 0.836 | 2019 | Tracy et al., 2019 |
Classification of PD from HC | Diagnosis | PC-GITA database | 100; 50 HC + 50 PD | ResNet with train-validation ratio of 90:10 | Precision = 0.92; Recall = 0.92; F1-score = 0.92; Accuracy = 91.7% | 2019 | Wodzinski et al., 2019 |
ANN, artificial neural network; AUC, area under the receiver operating characteristic (ROC) curve; BPM, Bayes point machine; CART, classification and regression trees; CD, cervical dystonia; CMTNN, complementary neural network; CNN, convolutional neural network; DA, discriminant analysis; DBN, deep belief network; DNN, deep neural network; ECFA, enhanced chaos-based firefly algorithm; EER, equal error rate; EFMM-OneR, enhanced fuzzy min-max neural network with the OneR attribute evaluator; ELM, extreme learning machine; ET, extra trees or essential tremor; EWNN, evolutionary wavelet neural network; FBANN, feedforward back-propagation based artificial neural network; FKNN, fuzzy k-nearest neighbor; FLDA, Fisher's linear discriminant analysis; FND, functional neurological disorder; FNR, false negative rate; FPR, false positive rate; FURIA, fuzzy unordered rule induction algorithm; GA, genetic algorithm; GBM, gradient boosting machine; GLRA, generalized logistic regression analysis; GPD, generalized paroxysmal dystonia; GRNN, general(ized) regression neural network; HC, healthy control; HMM, hidden Markov model; IGWO-KELM, improved gray wolf optimization and kernel(-based) extreme learning machine; KELM, kernel-based extreme learning machine; KNN, k-nearest neighbors; LDA, linear discriminant analysis; LOSO, leave-one-subject-out; LS-SVM, least-squares support vector machine; LSTM, long short-term memory; MAP, maximum a posteriori decision rule; MCC, Matthews correlation coefficient; MLP, multilayer perceptron; MSA, multiple system atrophy; MSE, mean squared error; NN, neural network; NNge, non-nested generalized exemplars; NPV, negative predictive value; PD, Parkinson's disease; PNN, probabilistic neural network; RBM, restricted Boltzmann machine; ResNet, residual neural network; RPART, recursive partitioning and regression trees; SCFW-KELM, subtractive clustering features weighting and kernel-based extreme learning machine; SMO, sequential minimal optimization; SVM, support vector machine; SVM-linear, support vector machine with linear kernel; SVM-RBF, support vector machine with radial basis function kernel; XGBoost, extreme gradient boosting.
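For orientation, the sketch below shows what a typical entry in the "Machine learning method(s), splitting strategy and cross validation" column amounts to in practice: an SVM with an RBF kernel evaluated by stratified 10-fold cross-validation on the UCI Parkinson's voice dataset used by most of the studies above. It is a minimal illustration, not a reproduction of any cited study's pipeline; the dataset URL, the column names ("name", "status"), and the hyperparameter choices are assumptions based on the publicly documented version of that dataset.

```python
# Minimal sketch (assumed layout of the UCI "Parkinsons" voice dataset):
# one row per voice recording, a "name" recording-ID column, a binary
# "status" label (1 = PD, 0 = HC), and acoustic features in the remaining columns.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/parkinsons.data"
df = pd.read_csv(URL)

X = df.drop(columns=["name", "status"])  # acoustic features only
y = df["status"]                         # 1 = PD, 0 = healthy control

# Scaling inside the pipeline keeps each cross-validation fold free of
# test-set leakage from the feature standardization step.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Because this dataset contains several recordings per speaker (the recording IDs in the "name" column encode the subject), some of the studies in the table instead use leave-one-subject-out or otherwise group the splits by subject (e.g., scikit-learn's GroupKFold keyed on the subject part of the ID), so that recordings from the same speaker never appear in both the training and the test folds; record-wise 10-fold splits generally yield more optimistic estimates.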