Table 6.
Ref | ML Model/NN Type | Details | Epochs | No. of Participants | Test for Analysis | Results |
---|---|---|---|---|---|---|
[166] | (CNN) and (LSTM-RNN) | TensorFlow is used to implement the NN. | 40 | 22 | Accuracy (84%) | CNNs may perform better than LSTM-RNN for real-time datasets. |
[167] | CNN with the Deep Q Neural Network (DQN) model compared with LSTM models and DQN | CCR, EER, AUC, MAP and the CMC. | 50 | — | Classification accuracy (98.33%) | CNN model performs better than the LSTM model. |
[176] | 1-D Convolutional neural network (1-D CNN)—an RNN model with LSTM | 3+3 C-RNN designed for data processing. | 1000 | 80 | Accuracy (90.29%) | Model works well at lower sampling rates; however, accuracy drops for larger data sets. |
[135] | Hierarchical Dirichlet process (HDP) model to detect human activity levels | SVM | — | 27 | Precision of 0.81 and recall of 0.77. | HDP model can infer the number of activity levels automatically from a sliding-window time duration. |
[168] | Apriori Algorithm and Pattern Recognition (PR) Algorithm | New algorithm for PR is designed and implemented in MATLAB. | — | 9 | Standard deviation of predicted vs. actual graph (around 2.6 for the PR algorithm and 3.32 for the Apriori algorithm). | The PR algorithm gave better predictions than the Apriori algorithm. |
[177] | Hierarchical Dirichlet Process Model (HDPM) | Feedforward neural network. | 50 | 201 | Simple accuracy (sitting—78.60%, standing—9.45%, walking—26.87%) | The physical activity levels are automatically learned from the input data using the HDPM. |
[169] | HAR method based on U-Net | CNN | 100 | 266,555 samples and 5026 windows | Accuracy and Fw-score (Max. Accuracy of 96.4% and Fw-Score of 0.965). | U-Net method overcomes the multiclass window problem inherent in the sliding window method and realises the prediction of each sampling point’s label in time series data. |
[170] | InnoHAR—DL model | Combination of inception neural network and RNN structure built with Keras. | 9 | — | Opportunity, PAMAP2, and Smartphone datasets with F-scores of 0.946, 0.935 and 0.945, respectively. | Consistently superior performance and good generalisation. |
[171] | Deep Neural Network | Combination of convolutional and recurrent NN. | 417 | — | F1-score between 0.8 and 0.9 for different activities. | Simulated sensor data demonstrate the feasibility of classifying athletic tasks using wearable sensors. |
[172] | Deep Neural Network | Fully connected CNN. | 50 | 5 (20 actions per person) | Cross-validated accuracy for action classification (camera only—85.3%, IMU only—67.1%, combined—86.9%). | Action recognition algorithm utilising both images and inertial sensor data; feature vectors are extracted efficiently with a CNN and classification is performed with an RNN. |
[173] | Hybrid DL model | Combines simple recurrent units (SRUs) with gated recurrent units (GRUs) of neural networks. | 50 | 1007 | Accuracy (99.8%) | Deep SRU-GRU networks process sequences of multisensor input data using their internal memory states and exploit their speed advantage. |
[174] | CNN | Akamatsu Transform | 120 | — | Accuracy (85%) | Proposed a human action recognition method using data acquired from wearable sensors and learned with a neural network. |
[178] | SVM, ANN and HMM, and one compressed sensing algorithm, SRC-RP | DL using MATLAB. | — | 4 people with 5 different tests | Recognition accuracy for different datasets (Debora—93.4%, Katia—99.6%, Wallace—95.6%). | Three ML algorithms (SVM, HMM and ANN) and one compressed sensing-based algorithm (SRC-RP) are implemented to recognise human body activities. |
[179] | ML | Ensemble Empirical Mode Decomposition (EEMD), Sparse Multinomial Logistic Regression with Bayesian regularisation (SBMLR) and the Fuzzy Least Squares Support Vector Machine (FLS-SVM). | — | 23 | Classification accuracy (93.43%). | A novel approach based on the EEMD and FLS-SVM techniques is presented to recognise human activities; the EEMD features contribute significantly to improving classification accuracy. |
[180] | ML | WEKA | — | 30 | Accuracy (98.5333%) | Sensors on a smartphone, including an accelerometer and a gyroscope, were used to gather and log wearable sensing data for human activities. |
[151] | Real-time gesture pattern classification | Neural network-based classifier model. | 1040 | — | Accuracy (77%) | Human hand gesture recognition using manually collected data processed by an LSTM layer structure; accuracy is visualised using Unity. |
[181] | Pattern recognition methods for a head gesture-based interface of a Virtual Reality Helmet (VRH) equipped with a single IMU sensor | Classifier uses a two-stage PCA-based method, a feedforward artificial neural network, and random forest. | — | 975 gestures from 12 patients | Classification rate (0.975) | A VRH with sensors is used to collect data; the Dynamic Time Warping (DTW) algorithm is used for pattern recognition. |
[182] | Hand Gesture Recognition (HGR) System | Restricted Coulomb Energy (RCE) neural networks with a DTW distance measurement scheme. | — | 252 | Accuracy (98.6%) | HGR system for Human-Computer Interaction (HCI) based on time-dependent data from IMU sensors. |
[183] | Motion capturing gloves designed using 3D sensory data | Classification model with ANN. | — | 6700 | Accuracy (98%) | Data gloves with IMU sensors are used to capture finger and palm movements. |
[184] | Quaternion-Based Gesture Recognition Using Wireless Wearable Motion Capture Sensors | SVM and ANN | — | 11 | Accuracy (90%) | Multisensor motion capturing system capable of identifying six hand and upper body movements. |
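
Several of the studies above (e.g., [169]) contrast their methods with the conventional sliding-window segmentation applied to wearable sensor streams before classification. The following is a minimal illustrative sketch of that preprocessing step, not the implementation used in any cited work; the function name and parameters are chosen here for illustration only.

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Segment a 1-D sensor stream into fixed-length, possibly
    overlapping windows. Each window receives a single activity
    label, which is the source of the multiclass-window problem
    that per-sample approaches such as U-Net aim to avoid."""
    windows = []
    for start in range(0, len(signal) - window_size + 1, step):
        windows.append(signal[start:start + window_size])
    return np.array(windows)

# Example: a 10-sample stream, windows of 4 samples, 50% overlap.
x = np.arange(10)
w = sliding_windows(x, window_size=4, step=2)
print(w.shape)  # (4, 4)
```

In practice the window length and overlap are tuned to the sensor sampling rate and the duration of the target activities.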
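
The gesture-recognition studies [181] and [182] both rely on Dynamic Time Warping (DTW) to compare time-dependent IMU signals of unequal length. A minimal sketch of the standard DTW dynamic-programming recurrence is shown below; it is a generic textbook formulation, not the specific distance scheme of either cited system.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.
    D[i, j] holds the minimal cumulative cost of aligning the
    first i samples of `a` with the first j samples of `b`."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# A repeated sample costs nothing: DTW stretches the shorter sequence.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

Because the alignment absorbs local timing variation, DTW-based classifiers tolerate gestures performed at different speeds, which is why it pairs naturally with template-matching schemes such as the RCE network in [182].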