Healthcare. 2022 Jun 10;10(6):1084. doi: 10.3390/healthcare10061084

Table 7.

Applied model results for the WISDM dataset. MLP: multi-layer perceptron. LR: logistic regression. Stat. Feat.: statistical features. Att. M.: attention mechanism. R. B.: residual block. LSTM: long short-term memory. CNN: convolutional neural network. FC: fully connected. RF: random forest.

| Evaluation | Reference | Segment Length (s) | Feature Extraction | Classifier | Accuracy (%) |
| --- | --- | --- | --- | --- | --- |
| 10-fold cross-validation | Kwapisz et al. [43] | 10 | Handcrafted | MLP | 91.7 |
| 10-fold cross-validation | Garcia-Ceja et al. [55] | 5 | CNN | FC layer | 94.2 |
| 10-fold cross-validation | Catal et al. [57] | 10 | Handcrafted | Ensemble (LR, MLP, J48) | 91.62 |
| 10-fold cross-validation | Ignatov [58] | 10 | CNN + Stat. Feat. | FC layer | 93.32 |
| 10-fold cross-validation | Current model | 10 | Handcrafted | RF | 94 |
| 70%/30% split | Gao et al. [56] | 10 | CNN + Att. M. | FC layer | 98.85 |
| 70%/30% split | Suwannarat et al. [59] | 8 | CNN | FC layer | 95 |
| 70%/30% split | Abdel-Basset et al. [60] | 10 | CNN + R. B. + LSTM + Att. M. | MLP | 98.90 |
| 70%/30% split | Zhang et al. [61] | 11.2 | CNN | FC layers | 96.4 |
| 70%/30% split | Zhang et al. [62] | 10 | CNN + Att. M. | FC layer | 96.4 |
| 70%/30% split | Current model | 10 | Handcrafted | RF | 98.56 |
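For reference, below is a minimal sketch of the two evaluation protocols reported for the "Current model" rows: a random forest trained on handcrafted features, scored once with 10-fold cross-validation and once with a 70%/30% split. It uses scikit-learn with placeholder data; the feature matrix `X`, labels `y`, and the random-forest hyperparameters are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch of the table's two evaluation protocols (assumed setup,
# not the authors' code): random forest on handcrafted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))    # placeholder: handcrafted features per 10 s segment
y = rng.integers(0, 6, size=1000)  # placeholder: six WISDM activity classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Protocol 1: 10-fold cross-validation accuracy
cv_acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()

# Protocol 2: 70%/30% train/test split accuracy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
split_acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))

print(f"10-fold CV accuracy: {cv_acc:.3f}, 70/30 split accuracy: {split_acc:.3f}")
```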