Table 4. Classification performance (precision/PPV, recall/sensitivity, specificity, and accuracy) reported by the reviewed studies.
| Author | Classifier | Precision/PPV | Recall/Sensitivity | Specificity | Accuracy |
|---|---|---|---|---|---|
| Afkari [30] | TB | - | - | - | dry swallow: 94.3%; swallow: 92.75% |
| Amft and Troster [31] | LR | 10% | 65% | - | - |
| | AGREE | 20% | 68% | - | - |
| Bi et al. [32] | HMM (Event) | - | - | - | 86.6% |
| | DT | 86.2% | 87.5% | - | 87.1% |
| Fontana et al. [33] | TB | 50.1% | 86.1% | - | 68.2% |
| Fukuike et al. [34] | TB | - | 97.2% | 95.2% | - |
| Kurihara et al. [35] | Template matching | - | - | - | 88.8% * |
| Lee et al. [36] | ANN | - | 91% | 88.2% | 88.5% |
| Makeyev et al. [37] | SVM (Epoch) | - | 44% | 99% | 95.7% |
| | SVM (Event) | - | 71.3% | 87% | 80.4% |
| Sazonov et al. [38] | SVM (Epoch) | - | - | - | 96.4% |
| | SVM (Event) | - | - | - | 96.8% |
| Sejdic et al. [39] | 2-class fuzzy c-means | - | - | - | 94.6% |
| Skowronski et al. [40] | GMM | - | 89.5% | 98% | 96.3% |
AGREE: Agreement Fusion of detectors; ANN: Artificial Neural Network; DT: Decision Tree; GMM: Gaussian Mixture Model; HMM: Hidden Markov Model; LR: Logistic Regression; SVM: Support Vector Machine; TB: Threshold-based. * Accuracy was calculated as the weighted average of the per-class accuracies.
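To make the reported metrics concrete, the minimal sketch below computes precision/PPV, recall/sensitivity, specificity, and accuracy from the four counts of a binary (swallow vs. non-swallow) confusion matrix. The function name `classification_metrics` and the event counts in the example are hypothetical and are not taken from any of the cited studies.

```python
# Minimal sketch: computing the metrics reported in Table 4 from a binary
# confusion matrix (swallow vs. non-swallow). Counts below are hypothetical.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return precision/PPV, recall/sensitivity, specificity, and accuracy."""
    return {
        "precision_ppv": tp / (tp + fp),          # detected swallows that are true swallows
        "recall_sensitivity": tp / (tp + fn),     # true swallows that are detected
        "specificity": tn / (tn + fp),            # non-swallow events correctly rejected
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

if __name__ == "__main__":
    # Hypothetical event counts, for illustration only.
    metrics = classification_metrics(tp=86, fp=14, tn=180, fn=20)
    for name, value in metrics.items():
        print(f"{name}: {value:.1%}")
```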