Table 2. Task-specific performance comparison.

| Task type and method/model | Key features | Performance metrics (number of studies) | References |
| --- | --- | --- | --- |
| Classification (n=28) |  |  |  |
| RF^a (n=13) | Handcrafted features: time/frequency features (eg, mean, SD, percentiles, lag-1 autocorrelation); ensemble of decision trees |  | [28,36-38,50-53,64,67,72,74,80] |
| ANN^b (n=7) | Handcrafted features: time/frequency features (eg, spectral entropy, signal power); multilayer perceptron |  | [19,29,58,60,62,65,73] |
| SVM^c (n=4) | Kernel-based classification with RBF^d kernels; advanced cross-correlation metrics (xy, xz, yz) |  | [28,50,55] |
| DT^e (n=4) | Tree-based splits; integrated with ANN outcomes |  | [58,62,64,74] |
| Gradient boosting (n=3) | Gradient boosting framework; handles missing data |  | [37,54,64] |
| HMM^f (n=3) | Temporal sequence modeling; Viterbi smoothing |  | [71,75,76] |
| QDA^g (n=1) | Quadratic decision boundaries; probabilistic classification |  | [71] |
| LASSO^h (n=1) | L1 regularization; sparse solutions |  | [64] |
| CNN^i (n=1) | Automated feature extraction via convolutional filters on raw signals |  | [68] |
| Estimation (n=10) |  |  |  |
| RF (n=6) | Regression trees; bootstrapped subsets of ActiGraph data |  | [56,57,59,70,77] |
| ANN (n=2) | Nonlinear activation functions; raw signal processing |  | [61,66] |
| SVM (n=1) | Kernel-based regression |  | [70] |
| k-NN^j (n=2) | Instance-based learning; Euclidean distance metrics |  | [37,70] |
| XGBoost^k (n=1) | Gradient boosting framework; handles missing data |  | [37] |
| Gradient boosting (n=1) | Iterative error correction; additive regression trees |  | [70] |
| Deep learning (n=5) |  |  |  |
| BiLSTM^l (n=3) | Bidirectional temporal modeling; raw signal processing |  | [31,33,79] |
| CNN (n=2) | Automated feature extraction via convolutional filters on raw signals |  | [33,68] |
| ViT^m (n=1) | Self-attention mechanisms for long-range dependencies |  | [33] |
| CNN-LSTM^n or CNN-BiLSTM^o (n=2) | Hybrid architecture integrating spatial and temporal learning |  | [33,34] |
| ViT-BiLSTM^p (n=1) | Vision transformer + BiLSTM; gravity-based acceleration analysis |  | [33] |
^a RF: random forest.
^b ANN: artificial neural network.
^c SVM: support vector machine.
^d RBF: radial basis function.
^e DT: decision tree.
^f HMM: hidden Markov model.
^g QDA: quadratic discriminant analysis.
^h LASSO: least absolute shrinkage and selection operator.
^i CNN: convolutional neural network.
^j k-NN: k-nearest neighbor.
^k XGBoost: extreme gradient boosting.
^l BiLSTM: bidirectional long short-term memory.
^m ViT: vision transformer.
^n CNN-LSTM: convolutional neural network and long short-term memory.
^o CNN-BiLSTM: convolutional neural network and bidirectional long short-term memory.
^p ViT-BiLSTM: vision transformer and bidirectional long short-term memory.
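Many of the classification studies in the table share the same first step: a windowed accelerometer signal is reduced to handcrafted time-domain features (eg, mean, SD, percentiles, lag-1 autocorrelation) before being passed to a tabular learner such as a random forest. A minimal sketch of that feature-extraction step, assuming a NumPy array holds one signal window; the function names, window values, and percentile choices are illustrative, not taken from any reviewed study:

```python
import numpy as np

def lag1_autocorr(x):
    # Lag-1 autocorrelation: Pearson correlation between the
    # window and a copy of itself shifted by one sample.
    x = np.asarray(x, dtype=float)
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

def extract_features(window):
    # Handcrafted time-domain features for one accelerometer window,
    # mirroring the kinds of features listed for RF/ANN classifiers.
    w = np.asarray(window, dtype=float)
    return {
        "mean": float(w.mean()),
        "sd": float(w.std(ddof=1)),          # sample SD
        "p10": float(np.percentile(w, 10)),  # lower-tail percentile
        "p90": float(np.percentile(w, 90)),  # upper-tail percentile
        "lag1_ac": lag1_autocorr(w),
    }
```

Feature dictionaries computed per window can then be stacked into a matrix and fed to any of the tabular learners in the table (eg, a random forest or gradient-boosting classifier).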