TABLE 11.
Reference | AI Algorithm | Best achieved accuracy | Dataset/Input features | Task |
---|---|---|---|---|
Kwon and Lee (2021) | KNN, SVM, NB, DT | 100% | UPCVgaitK1, UPCVgaitK2 | GR |
Zhang et al. (2019b) | Multi-task CNN | AE: MAE = 5.47; GR: 98.1% | OULP-Age: GEI from video | GR and AE |
El-Alfy and Binsaadoon (2019) | LK-SVM with FLBP | Normal: 96.40%; Carrying: 87.97%; Wearing coat: 86.54% | CASIA-B: GEI from video | GR |
Jain and Kanhangad (2018) | Bootstrap DT | 94.44% | 1D HG extracted from a smartphone in the front pocket | GR |
Castro et al. (2017) | CNN | F: 77%; M: 96% | TUM-GAID: OF extracted from low-resolution video streams recorded with MS Kinect | Automatic PI and GR |
Lu et al. (2014) | AP clustering + SRML | PI: 87.6%; GR: 93.1% | Own dataset (ADSCAWD), USF, and CASIA-B; C-AGI instead of GEI, captured with an MS Kinect depth sensor | PI and GR |
Legend: Sparse Reconstruction-based Metric Learning (SRML), Cluster-based Averaged Gait Image (C-AGI), Gait Energy Image (GEI), Affinity Propagation (AP), Optical Flow (OF), Person Identification (PI), Gender Recognition (GR), Age Estimation (AE), Fuzzy Local Binary Pattern (FLBP), Linear Kernel SVM (LK-SVM).
Datasets: UPCVgaitK1 (Kastaniotis et al., 2013), UPCVgaitK2 (Kastaniotis et al., 2016), OULP-Age (Iwama et al., 2012), CASIA-B (Yu et al., 2006), TUM-GAID (Hofmann et al., 2014), USF (Sarkar et al., 2005).
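Several of the GR pipelines in Table 11 share the same basic recipe: average one gait cycle of aligned binary silhouettes into a GEI and feed the flattened image to a classical classifier such as a linear-kernel SVM (El-Alfy and Binsaadoon, 2019, additionally encode the GEI with FLBP before classification). The following is a minimal, illustrative sketch of that recipe using NumPy and scikit-learn; it is not any of the cited implementations, it omits silhouette extraction, alignment, and the FLBP descriptor, and the data and variable names are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def gait_energy_image(silhouettes: np.ndarray) -> np.ndarray:
    """Average a gait cycle of aligned binary silhouettes (T, H, W) into a GEI (H, W)."""
    return silhouettes.astype(np.float32).mean(axis=0)


# Hypothetical data: one gait cycle of 30 binary 64x44 silhouettes per subject.
rng = np.random.default_rng(0)
n_subjects = 200
cycles = rng.integers(0, 2, size=(n_subjects, 30, 64, 44))  # stand-in for real silhouettes
labels = rng.integers(0, 2, size=n_subjects)                # 0 = female, 1 = male (illustrative)

# Compute one GEI per subject and flatten it into a feature vector.
features = np.stack([gait_energy_image(c).ravel() for c in cycles])

# Linear-kernel SVM for gender recognition, in the spirit of the LK-SVM entries above.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels
)
clf = SVC(kernel="linear").fit(X_train, y_train)
print("GR accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On random placeholder data the accuracy is of course near chance; the sketch only shows the shared GEI-plus-classifier structure that the FLBP, CNN, and clustering-based methods in the table refine in different ways.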