Table 3.
| | # Features | Feature extraction | ML or DL model | Architecture | Metrics | Validation | Hyper-parameters/optimizer/loss function | CIT* |
|---|---|---|---|---|---|---|---|---|
| R1 | 6/Time domain | Hand-crafted | SVM | SVM classifier with different kernels (polynomial, radial basis function and linear) | F1-score, accuracy | Tenfold CV | C, γ & degree via grid search | Garcia-Gonzalez et al. (2020) |
| R2 | Spatial features | Automatic | CNN | C (32) − C (64) − C (128) − P − C (128) − P − C (128) − P − FC (128) − SM | Accuracy | 10% of data for validation | LR: 0.001, BS: 50/Adam | Wang et al. (2019a) |
| R3 | Frequency domain | Automatic | CNN | 3 C with MP and dropout, 2 FC with dropout and SM | F1-score, precision, recall | CV | LR: 0.01, DO/Adam | Lawal and Bano (2020) |
| R4 | Time domain | Automatic | CNN-RNN with attention mechanism | TRASEND: C1 − C2 − C3, flatten and concat, merge layer, temporal information extractor using an 8-headed self-attention RNN, output layer | F1-score | Leave-one-user-out and CV | LR: {0.001, 0.0001, 0.00001}/Adam/cross-entropy | Buffelli and Vandin (2020) |
| R5 | Spatial, temporal | Automatic | LSTM-CNN | 2 LSTM layers (32 neurons each), CNN (64), max pooling, CNN (128), GAP, BN, output layer (SM) | F1-score, accuracy | – | LR: 0.001/Adam/cross-entropy | Xia et al. (2020) |
| R6 | 18/Time & frequency domain | Hand-crafted | AdaBoost, AdaBoost-CNN, CNN-SVM | For AdaBoost-CNN: 4 C, AP, FC, SM | Accuracy | Subject-out validation | Experiments with and without personalization similarity | Ferrari et al. (2020) |
| R7 | 225 sensory features | Automatic | DNN | Layer 1 (256), Layer 2 (512), Layer 3 (128), output (SM) | Accuracy, F1-score, specificity, sensitivity | 5% of training data used for validation | No. of layers, no. of nodes per layer, appropriate regularization function | Fazli et al. (2021) |
| R8 | Time domain | Automatic | CNN-CapsNet architecture | SenseCapsNet: input, 1D C (K = 5, S = 1); primary caps: C2 (K = 5, S = 2) and squash; activity caps (K: kernel size, S: stride) | Precision, recall | Tenfold CV | Mini-batch: 64, LR: 0.01, DO/SGD | Pham et al. (2020) |
CV cross validation, LOSO leave one subject out, C convolution, P pooling, AP average pooling, MP max pooling, FC fully connected, SM softmax, BN batch normalization layer, LR learning rate, DO dropout, BS batch size, SGD stochastic gradient descent, concat concatenation, Spec. specificity, Sens sensitivity, TL transfer learning, CIT citations
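As a concrete illustration of the hyper-parameter search summarized in row R1, the sketch below tunes an SVM over C, γ, and degree for polynomial, RBF, and linear kernels with tenfold cross-validation, reporting F1-score, in the spirit of Garcia-Gonzalez et al. (2020). The feature matrix and labels are synthetic stand-ins (the paper uses six hand-crafted time-domain features per window); the specific grid values and scikit-learn setup are assumptions, not taken from the paper.

```python
# Hedged sketch of row R1's setup: SVM kernel/hyper-parameter grid search
# with tenfold CV. Data below is synthetic, NOT the paper's dataset.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))             # 200 windows x 6 time-domain features (toy)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary activity labels

# Grid over C, gamma, and degree, one sub-grid per kernel (values assumed)
param_grid = [
    {"svc__kernel": ["linear"], "svc__C": [0.1, 1, 10]},
    {"svc__kernel": ["rbf"], "svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1]},
    {"svc__kernel": ["poly"], "svc__C": [0.1, 1, 10], "svc__degree": [2, 3]},
]

search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid,
    scoring="f1_macro",                    # table reports F1-score and accuracy
    cv=StratifiedKFold(n_splits=10),       # tenfold cross-validation
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Scaling inside the pipeline keeps the standardization statistics inside each CV fold, which avoids leaking validation data into the fit.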