TABLE 5.
Reference | Algorithm | Data Collection/Input | AI Task/Output
---|---|---|---
Taborri et al. (2021) | Linear SVM (96% accuracy) | N = 39; inertial sensors and optoelectronic bars | ACL injury risk prediction in female basketball players via the LESS score
Johnson et al. (2021) | CNN (insufficient accuracy) | Wearable accelerometer | Near real-time prediction of GRF/M from kinematic data
Nguyen et al. (2020) | CNN | Seven IMUs | Gait classification: athletes vs. foot abnormalities
Guo and Wang (2021) | TS-DBN | Public video datasets (KTH and UCF) | HAR/sports behavior recognition
Gholami et al. (2020) | CNN | Shoe-mounted accelerometer | Abnormal running kinematics; activity recognition
Cronin et al. (2019) | DeepLabCut | Single GoPro camera | Markerless 2D kinematic analysis of underwater running
Kang et al. (2018) | FFT | Smartphone (unconstrained placement) | Walking detection and step counting, irrespective of phone placement
Onodera et al. (2017) | ANN with IG | Infrared cameras and force plates | Influence of shoe midsole resilience and upper structure on running kinematics and kinetics
Sundholm et al. (2014) | KNN with DTW | Pressure sensor mat | Exercise detection and counting
Legend: Support Vector Machine (SVM), Convolutional Neural Network (CNN), Artificial Neural Network (ANN), K-Nearest Neighbors (KNN), Dynamic Time Warping (DTW), Fast Fourier Transform (FFT), Time-Space Deep Belief Network (TS-DBN), Inertial Measurement Unit (IMU), Human Activity Recognition (HAR), Landing Error Score System (LESS), Ground Reaction Forces and Moments (GRF/M), DeepLabCut (Mathis et al., 2018).
Datasets: Royal Institute of Technology (KTH) (Jaouedi et al., 2020) and University of Central Florida (UCF) (Perera et al., 2019).
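To make the kind of pipeline summarized in the table more concrete, the sketch below illustrates a minimal FFT-based walking detector and step counter operating on accelerometer magnitude, in the spirit of the Kang et al. (2018) entry. It is an illustrative assumption-laden example, not the published method: the function name, sampling rate, window length, frequency band, and power threshold are all our own choices.

```python
# Illustrative sketch only: FFT-based walking detection and step counting
# from a 3-axis accelerometer. Parameters (fs, win_s, band, power_thresh)
# are assumptions for demonstration, not values from Kang et al. (2018).
import numpy as np

def count_steps_fft(acc_xyz, fs=50.0, win_s=5.0, band=(1.0, 3.0), power_thresh=0.5):
    """Estimate step count from an (N x 3) accelerometer signal in g.

    Using the signal magnitude makes the estimate largely independent of
    phone placement and orientation.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)
    mag = mag - mag.mean()                     # remove gravity / DC offset
    win = int(win_s * fs)
    steps = 0
    for start in range(0, len(mag) - win + 1, win):
        seg = mag[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        # Treat the window as walking only if typical gait frequencies
        # dominate the non-DC spectral power.
        if spec[in_band].sum() < power_thresh * spec[1:].sum():
            continue
        f_step = freqs[in_band][np.argmax(spec[in_band])]  # dominant step frequency
        steps += int(round(f_step * win_s))    # steps ~ step frequency x window duration
    return steps
```

A usage example would pass a recorded accelerometer trace, e.g. `count_steps_fft(np.loadtxt("acc.csv", delimiter=","))`; the non-overlapping windows and simple band-power test keep the sketch short, whereas a practical implementation would typically add overlap, smoothing, and per-user calibration.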