Abidine et al. [56] (2015) |
Recognition of activities of daily living |
The work extracted principal component analysis, independent component analysis, and linear discriminant analysis features and applied them with weighted support vector machines. The same features were also tested with other machine learning algorithms such as conditional random fields. |
94% accuracy. |
Aertssen et al. [57] (2011) |
Recognition of activities of daily living |
Motion information was extracted using motion history images and analyzed to detect three different actions for elderly people: walking, bending, and getting up. Shape deformations of the motion history images were investigated for different activities and used later for comparison during in-room monitoring. |
94% accuracy. |
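Motion history images of the kind used above can be sketched minimally as a decaying motion template; the update rule below is a common textbook formulation and an illustrative assumption, not necessarily the exact one from [57]:

```python
# Minimal motion-history-image (MHI) sketch: pixels with current motion
# are set to tau, all others decay by one per frame. The binary motion
# masks and the tau value are illustrative assumptions.

def update_mhi(mhi, motion_mask, tau=5):
    """One MHI update step over nested-list images (1 = motion)."""
    return [
        [tau if m else max(h - 1, 0) for h, m in zip(row_h, row_m)]
        for row_h, row_m in zip(mhi, motion_mask)
    ]

# Toy 1x4 frame sequence: motion moves one pixel right per frame, so the
# resulting MHI encodes both where and how recently motion occurred.
mhi = [[0, 0, 0, 0]]
for mask in ([[1, 0, 0, 0]], [[0, 1, 0, 0]], [[0, 0, 1, 0]]):
    mhi = update_mhi(mhi, mask)
print(mhi)  # [[3, 4, 5, 0]]
```

Shape analysis, such as the deformation comparison described in [57], would then operate on these templates rather than on raw frames.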
Auvinet et al. [58] (2008) |
Fall detection |
One of the authors of the work performed the falls on a mattress in a laboratory. The work mainly focused on the post-fall phase. Twenty-two fall events were recorded for the experiments. |
Analytical study of the proposed design was done rather than reporting accuracy. |
Auvinet et al. [59] (2011) |
Fall detection |
The authors first recorded a dataset of videos from eight different cameras installed around the room where falls were simulated with the help of a neuropsychologist. For testing, some fake falls were also recorded. |
100% accuracy. |
Belshaw et al. [60] (2011) |
Fall detection |
Two in-home fall trials were conducted in real living rooms, each lasting seven days. In the first trial, the users performed simulated falls along with real daily living behaviors; in the second trial, the users were instructed to simulate falls only, and 11 simulated falls were performed. |
100% sensitivity; 95% specificity. |
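Many of the fall-detection entries in this table report sensitivity and specificity rather than accuracy; for reference, both follow directly from confusion-matrix counts (the counts below are made up, not from [60]):

```python
# Sensitivity and specificity from fall-detection counts; the numbers in
# the example call are invented for illustration.

def sensitivity(tp, fn):
    return tp / (tp + fn)   # detected falls / actual falls

def specificity(tn, fp):
    return tn / (tn + fp)   # rejected non-falls / actual non-falls

print(sensitivity(19, 1), specificity(95, 5))  # 0.95 0.95
```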
Belshaw et al. [61] (2011) |
Fall detection |
An annotated training set was labeled with fall or no-fall events. For the experiments, three office rooms were set up for recording training and testing videos of simulated falls over the course of three weeks. |
92% sensitivity; 95% specificity. |
Berlin & John [62] (2016) |
Recognition of activities of daily living |
Harris corner-based interest points and histogram-based features were applied with deep neural networks to recognize different human activities. The dataset consisted of six different activities: shake hands, hug, kick, point, punch, and push. |
95% accuracy. |
Brulin et al. [63] (2012) |
Activity posture recognition |
Fuzzy rules were applied to recognize different kinds of postures: sitting, lying, squatting, and standing. |
74.29% accuracy. |
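A fuzzy-rule posture classifier of this general kind can be sketched over a simple silhouette height/width ratio; the membership functions and the restriction to three of the four postures are invented here, not the rules from [63]:

```python
# Hedged fuzzy-rule sketch: triangular membership functions over the
# silhouette height/width ratio. All ranges are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def posture(ratio):
    memberships = {
        "lying": tri(ratio, 0.0, 0.3, 0.7),
        "sitting": tri(ratio, 0.5, 1.0, 1.5),
        "standing": tri(ratio, 1.2, 2.5, 4.0),
    }
    # Defuzzify by taking the posture with the strongest membership.
    return max(memberships, key=memberships.get)

print(posture(0.25), posture(1.0), posture(2.6))  # lying sitting standing
```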
Chen et al. [64] (2016) |
Recognition of activities of daily living |
An action graph of skeleton-based features was extracted and applied with maximum likelihood estimation. Twenty different actions with 557 sequences were tested. The experiments included a cross-subject test where half of the subjects were used for training and the rest for testing. The experiments were repeated 252 times with different folds. |
96.1% accuracy. |
Chia-Wen & Zhi-Hong [65] (2007) |
Fall detection |
The authors recorded a total of 78 videos for fall detection where 48 were used for training and 30 for testing. They focused on three feature parameters (i.e., the centroid of a silhouette, the highest vertical projection histogram, and the fall-down duration) to represent three different motion types (i.e., walk, fall, and squat). |
86.7% sensitivity; 100% specificity. |
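Two of the three feature parameters above, the silhouette centroid and the peak of the vertical projection histogram, can be sketched from a binary mask; the toy mask below is illustrative, not data from [65]:

```python
# Hedged sketch of silhouette features from a binary mask given as a list
# of rows (1 = foreground pixel). Names are illustrative, not the paper's.

def silhouette_features(mask):
    ys = [y for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
    # Vertical projection histogram: foreground count per column.
    vproj = [sum(col) for col in zip(*mask)]
    return centroid, max(vproj)

mask = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
centroid, peak = silhouette_features(mask)
print(centroid, peak)  # (1.0, 1.0) 3
```

The third parameter in [65], the fall-down duration, is temporal and would be measured over a sequence of such masks.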
Du et al. [66] (2015) |
Recognition of activities of daily living |
Skeleton data were extracted by subnetworks and then applied with a hierarchical bidirectional recurrent neural network. More than 7000 images were used to determine the postures of different activities such as undetermined, lying, squatting, sitting, and standing. |
100% accuracy. |
Foroughi et al. [67] (2008) |
Fall and activities of daily living recognition |
The authors applied a best-fit approximated ellipse of the silhouette, histograms, and temporal variations of head position as features to represent daily activities and falls. Fifty subjects each recorded 10 activities five times for the experiments. |
97% accuracy. |
Huang et al. [68] (2016) |
Recognition of activities of daily living |
Lie group features were extracted and applied with a Lie group network for different human activity recognition. The experiments included the largest 3D activity recognition dataset, consisting of more than 56,000 sequences from 60 different activities performed by 40 different subjects. |
89.10% accuracy. |
Krekovic et al. [69] (2012) |
Fall detection |
The fall detection system consisted of background estimation, moving object extraction, motion feature extraction, and finally, fall detection. It focused on the dynamics of human motion and body orientation. A small dataset was built for evaluation. |
90% accuracy. |
Lan et al. [70] (2015) |
Recognition of activities of daily living |
Dense activity trajectories were developed using histogram of oriented gradients and histogram of optical flow features to apply with support vector machines. The proposed method was validated on four different challenging datasets: Hollywood2, UCF101, UCF50, and HMDB51. |
94.4% accuracy. |
Li et al. [71] (2016) |
Recognition of activities of daily living |
Vector of locally aggregated descriptors (VLAD) features were applied to analyze the deep dynamics of the activities and later combined with deep convolutional neural networks. The proposed approach was tested on a public dataset of 16 different activities. |
90.81% accuracy. |
Li et al. [72] (2012) |
Fall detection |
The experimental dataset used in the work consisted of two kinds of activities: falls and non-falls. The subjects were trained by nursing collaborators to fall like an elderly person. The first dataset was recorded in a laboratory where a mattress was used to fall on; it consisted of 240 fall and non-fall videos (i.e., 120 of each). The second dataset was recorded in a realistic environment in four different apartments, where each subject performed six falls on a mattress. |
100% sensitivity; 97% specificity. |
Lee & Mihailidis [73] (2005) |
Fall detection |
Trials for experimental analysis were done in a mock bedroom setting. The room contained a bed, a chair, and random bedroom furniture. The subjects were asked to complete five scenarios, which generated a total of 315 tasks consisting of 126 falls and 189 non-falls. |
77% accuracy. |
Lee & Chung [74] (2012) |
Fall detection |
A Kinect depth camera with a laptop was installed to record a total of 175 videos of different fall scenarios in indoor environments. |
97% accuracy. |
Leone et al. [75] (2011) |
Fall detection |
A geriatrician provided instructions for the simulation of falls, which were performed using crash mats and knee or elbow protectors. A total of 460 videos were recorded, of which 260 were falls. Several activities of daily living other than falls were also simulated to evaluate the ability to discriminate falls from activities of daily living. |
97.3% sensitivity; 80% specificity. |
Mirmahboub et al. [76] (2013) |
Fall detection |
The experimental dataset consists of 24 scenarios. In each scenario, a subject performed activities such as falling, sitting on a sofa, walking, and pushing objects. All activities were performed by one subject in different clothing. |
95.2% accuracy. |
Mo et al. [77] (2016) |
Recognition of activities of daily living |
Robust features were automatically extracted from body skeletons. The features were then applied with deep convolutional neural networks for modeling and recognition of 12 different daily activities. |
81.8% accuracy. |
Nyan et al. [78] (2008) |
Fall detection |
A total of 20 sets of data were recorded for different activities such as forward fall, backward fall, sideways fall, fall to half-left, and fall to half-right. Subjects were also asked to simulate activities of daily living. |
100% accuracy. |
Peng et al. [79] (2014) |
Recognition of activities of daily living |
Space-time interest points, histogram of oriented gradients, and histogram of optical flow features were applied with support vector machines. The proposed approach was tried on three different realistic datasets: UCF50, UCF101, and HMDB51. |
92.3% accuracy. |
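The histogram-of-oriented-gradients descriptor used in this and several neighboring entries can be sketched minimally as a single orientation histogram over gradient directions; a real HOG adds cell grids and block normalization, so this is a toy version, not the pipeline from [79]:

```python
import math

# Single-cell orientation histogram: central-difference gradients binned
# by direction and weighted by magnitude. Bin count is an assumption.

def orientation_histogram(img, bins=8):
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            h[int(ang / (2 * math.pi) * bins) % bins] += mag
    return h

# Toy 3x3 ramp increasing to the right: all gradient energy lands in bin 0.
img = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(orientation_histogram(img))  # [2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Histogram-of-optical-flow features are built the same way, but over per-pixel motion vectors between frames instead of spatial gradients.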
Peng et al. [80] (2014) |
Recognition of activities of daily living |
Robust dense trajectories were encoded with stacked Fisher kernels and applied with support vector machines for activity recognition. The approach was tried on three large datasets collected from different sources such as YouTube. |
93.38% accuracy. |
Rougier et al. [81] (2011) |
Fall detection |
A shape matching technique was used to track a silhouette through a video sequence. Then, a Gaussian mixture model was used for fall detection. |
100% accuracy. |
Shahroudy et al. [82] (2015) |
Recognition of activities of daily living |
Robust features were extracted using histograms of oriented gradients and optical flow. The features were then applied with support vector machines. The method was evaluated on three datasets: MSR-DailyActivity, MSR-Action3D, and 3D-ActionPairs. |
81.9% accuracy.
Shi et al. [83] (2016) |
Recognition of activities of daily living |
Three sequential deep trajectory descriptors were tried with deep recurrent neural networks and convolutional neural networks for efficient activity recognition. The approach was tried on three datasets: KTH, HMDB51, and UCF101. |
96.8% accuracy. |
Shieh & Huang [84] (2012) |
Fall detection |
Subjects were requested to perform different events of falls and non-falls. The non-fall events include walking, running, sitting, and standing. The fall events include slipping, tripping, bending, and fainting in any direction. In the experimental analysis, a total of 60 and 40 videos were used for non-falls and falls, respectively. |
90% accuracy. |
Simonyan & Zisserman [85] (2014) |
Recognition of activities of daily living |
Optical flow based temporal streams were applied with deep convolutional neural networks to model different human activities. The method was tested on two different benchmark datasets, where it showed competitive performance with state-of-the-art methods. |
88.0% accuracy.
Uddin [86] (2017) |
Recognition of activities of daily living |
Body parts in the depth images were first segmented based on random forests. Then, body skeletons were obtained from the segmented body parts. Furthermore, the robust spatiotemporal features were extracted and applied with hidden Markov models. The approach was tried on a public dataset of 12 human activities to check its robustness. |
98.27% accuracy. |
Uddin et al. [87] (2017) |
Recognition of gaits |
Spatiotemporal features were extracted using local directional edge patterns and optical flows. Then, deep convolutional neural networks were applied on them for normal and abnormal gait recognition. |
98.5% accuracy. |
Uddin et al. [88] (2017) |
Recognition of activities of daily living |
Body parts were segmented in the depth images based on random forests to obtain skeletons. Furthermore, spatiotemporal features were extracted based on the skeleton joint positions and motion in consecutive frames. The body limbs were represented in a spherical coordinate system to obtain person-independent body features. Finally, the features were applied with deep convolutional neural networks on a public dataset of 12 different activities. |
98.27% accuracy. |
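The spherical-coordinate limb representation described above can be sketched as reducing a limb vector between two joints to two angles, which discards limb length and is therefore less person-dependent; the axis conventions and joint coordinates below are assumptions, not the setup from [88]:

```python
import math

# Hedged sketch: a limb vector (joint_a -> joint_b) expressed as
# spherical angles. Axis order and angle names are assumptions.

def limb_angles(joint_a, joint_b):
    dx, dy, dz = (b - a for a, b in zip(joint_a, joint_b))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.acos(dz / r)   # inclination from the z axis
    phi = math.atan2(dy, dx)    # azimuth in the x-y plane
    return theta, phi

# A limb pointing straight up the z axis has zero inclination; its length
# (0.3 here) does not appear in the resulting feature.
print(limb_angles((0, 0, 0), (0, 0, 0.3)))
```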
Veeriah et al. [89] (2015) |
Recognition of activities of daily living |
Normalized pair-wise angles, offset of joint positions, histogram of the velocity, and pairwise joint distances were applied with differential recurrent neural network. The approach was applied to recognize activities in two public datasets: MSR-Action3D and KTH. |
93.96% accuracy. |
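One of the feature families named above, pairwise joint distances, is simple enough to sketch directly; the joint layout below is a made-up toy skeleton, not data from [89]:

```python
import itertools
import math

# All pairwise Euclidean distances between skeleton joints in one frame.

def pairwise_joint_distances(joints):
    """joints: list of (x, y, z) positions; returns all pairwise distances."""
    return [math.dist(a, b) for a, b in itertools.combinations(joints, 2)]

joints = [(0, 0, 0), (0, 1, 0), (1, 1, 0)]
print(pairwise_joint_distances(joints))  # [1.0, 1.414..., 1.0]
```

Per-frame vectors like this, stacked over time, form the sequences that the differential recurrent neural network in [89] models.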
Wang et al. [90] (2014) |
Recognition of activities of daily living |
Local occupancy pattern features were extracted from depth maps. A Fourier temporal pyramid was used for the temporal representation of activities. Finally, the features were applied with support vector machines to characterize 12 different activities in a public dataset. |
97.06% accuracy. |
Wang et al. [91] (2016) |
Recognition of activities of daily living |
Weighted hierarchical depth motion maps were applied on three-channel deep convolutional neural networks. The method was applied on four different public datasets: MSRAction3D, MSRAction3DExt, UTKinect-Action, and MSRDailyActivity3D. |
100% accuracy. |
Wang et al. [92] (2015) |
Recognition of activities of daily living |
Pseudo-color images were utilized with three-channel deep convolutional neural networks to recognize activities on four public datasets (i.e., MSRAction3D, MSRAction3DExt, UTKinect-Action, and MSRDailyActivity3D), where the method achieved state-of-the-art results. |
100% accuracy. |
Wang et al. [93] (2015) |
Recognition of activities of daily living |
Skeleton-based robust features were applied with support vector machines. The approach was evaluated on two challenging datasets (i.e., HMDB51 and UCF101) where it outperformed the conventional approaches. |
91.5% accuracy. |
Willems et al. [94] (2009) |
Fall detection |
A grayscale video processing algorithm was applied to detect falls in video. Background subtraction, shadow removal, ellipse fitting, and fall detection were done based on the fall angle and aspect ratio. Finally, fall confirmation was done by considering vertical projection histograms. |
85% accuracy. |
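The ellipse-orientation and aspect-ratio cues above can be sketched as follows; the moment-based ellipse fit is a standard formulation and the thresholds are assumptions, not values from [94]:

```python
import math

# Illustrative fall cues: body orientation from an ellipse fit via
# second-order silhouette moments, plus the bounding-box aspect ratio.

def fall_cues(points):
    """points: (x, y) foreground pixel coordinates of a silhouette."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)  # major-axis angle
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    aspect = (max(ys) - min(ys) + 1) / (max(xs) - min(xs) + 1)
    return math.degrees(angle), aspect

def looks_like_fall(points, angle_thr=45.0, aspect_thr=1.0):
    angle, aspect = fall_cues(points)
    # A lying body is wider than tall and its major axis is near horizontal.
    return aspect < aspect_thr and abs(angle) < angle_thr

# Upright column of pixels vs. a lying row of pixels.
standing = [(2, y) for y in range(5)]
lying = [(x, 2) for x in range(5)]
print(looks_like_fall(standing), looks_like_fall(lying))  # False True
```

A confirmation stage like the vertical projection histograms in [94] would then suppress false alarms from this per-frame rule.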
Yang et al. [95] (2017) |
Recognition of activities of daily living |
Low-level polynormals were assembled from local neighboring hypersurface normals and then aggregated into super normal vectors, which were fed to a linear classifier. The proposed method outperformed other traditional approaches on four public datasets: MSRActionPairs3D, MSRAction3D, MSRDailyActivity3D, and MSRGesture3D. |
100% accuracy. |
Yu et al. [96] (2012) |
Fall detection |
Postures, activities, and falls were simulated in a laboratory setting. |
97.08% accuracy. |
Zhen et al. [97] (2016) |
Recognition of activities of daily living |
Space-time interest points with histogram of oriented gradient features were encoded with various encoding methods and then applied with support vector machines. The methods were tried on three public datasets: KTH, UCF-YouTube, and HMDB51. |
94.1% accuracy. |
Zhu et al. [98] (2016) |
Recognition of activities of daily living |
Co-occurrence features of skeleton joints were extracted and applied with deep recurrent neural networks with long short-term memory. The proposed method was validated on three different benchmark activity datasets: SBU Kinect Interaction, HDM05, and CMU. |
100% accuracy. |