Sensors. 2022 Nov 14;22(22):8802. doi: 10.3390/s22228802

Table 6.

Combination of sensors used in different research works.

Columns: Source | Algorithms or Software Involved | Outputs | Evaluation Metrics (%) | Labelled Data
Barsocchi et al. [56]
  • (1) Data provided by the sensors are filtered. In particular, data from the magnetic contacts and power usage sensors are processed to obtain information about when they change their status. Moreover, a median filter is applied to suppress the spikes produced by the power usage sensor of the personal computer (see the sketch below).

  • (2) The room-level localization algorithm “where is” (WHIZ) exploits the data provided by the sensors to provide information about the location of the elderly person.

  • (3) A set of possible activities is associated with the room where each activity is usually performed (cooking/kitchen, feeding/living room, etc.).

Outputs: Detection of ADLs such as lunch/dinner, resting/PC/TV, sleeping, and hygiene.
Evaluation metrics: 81% sensitivity.
Labelled data: Yes.
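A minimal sketch of the spike-suppression step described above for Barsocchi et al. [56], assuming a one-dimensional power-usage trace; the kernel size and the on/off threshold below are illustrative, not values reported in [56].

```python
import numpy as np
from scipy.signal import medfilt

# Hypothetical 1 Hz power-usage readings (watts) for the personal computer,
# containing isolated spikes.
power = np.array([12, 12, 13, 250, 12, 11, 12, 300, 13, 12], dtype=float)

# A short odd-length median filter suppresses isolated spikes while
# preserving genuine level changes.
smoothed = medfilt(power, kernel_size=3)

# Derive on/off status changes from the filtered signal; the threshold is an
# assumed value, not taken from the paper.
ON_THRESHOLD = 50.0
status = smoothed > ON_THRESHOLD
change_samples = np.flatnonzero(np.diff(status.astype(int))) + 1
print("filtered trace:", smoothed)
print("status changes at samples:", change_samples)
```
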
Lussier et al. [57]
  • (1) Algorithms were developed to monitor sleep, going out for activities, low-activity periods, cooking-related activities, and hygiene-related activities. The algorithms are built around assumptions about these different activities.

  • (2) Codification and matrix building were used for data analysis. First, descriptive codes were created; these codes labeled units of text (words, sentences, paragraphs) that encompassed a distinct meaning with regard to how and why monitoring data were used by social and health care professionals, and the coding grid emerged from the data. Second, matrices were used to further analyze the decision-making process of the social and health care professionals.

Outputs: Detection of ADLs. Results showed that AAL monitoring technologies provide health professionals with information about seniors related to self-neglect, such as malnutrition, deficient hygiene, lack of household chores, oversleeping, and social isolation.
Evaluation metrics: No data available.
Labelled data: No data available.
Gochoo et al. [58]
  • (1) The annotated binary sensor data are converted into binary activity images for ADLs.

  • (2) The activity images are used for training and testing the Deep Convolutional Neural Network (DCNN) classifier (see the sketch below).

  • (3) The classifiers are evaluated with the 10-fold cross-validation method.

Outputs: Detection of four ADLs: bed-to-toilet movement, eating, meal preparation, and relaxing. The DCNN classifier gives an average accuracy of 99.36%.
Evaluation metrics: 99.36% accuracy.
Labelled data: Yes.
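The binary-image/DCNN pipeline of Gochoo et al. [58] could be sketched roughly as follows; the image size, network layers, and training settings are assumptions, not the architecture reported in the paper.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.model_selection import StratifiedKFold

# Hypothetical data: each sample is a 32x32 binary "activity image" built from
# motion-sensor activations, with one of four ADL labels.
X = np.random.randint(0, 2, size=(200, 32, 32, 1)).astype("float32")
y = np.random.randint(0, 4, size=200)

def build_dcnn():
    # Small CNN; layer sizes are illustrative only.
    return models.Sequential([
        layers.Input(shape=(32, 32, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(4, activation="softmax"),
    ])

# 10-fold cross-validation, as described in the row above.
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True).split(X, y):
    model = build_dcnn()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[train_idx], y[train_idx], epochs=3, verbose=0)
    scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
print("mean cross-validated accuracy:", np.mean(scores))
```
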
Dawadi et al. [59]
  • (1) Activity recognition is based on an SVM.

  • (2) Support Vector Regression (SVR), Linear Regression (LR), and Random Forest (RF) are used to predict clinical scores of smart-home residents using activity performance features computed from activity-labeled sensor data (see the sketch below).

Outputs: Detection of seven ADLs: sleep, bed-to-toilet movement, cooking, eating, relaxation, personal hygiene, and the mobility of the resident inside the home. There is a correlation between the clinical assessment predicted from activity behavior and the mobility scores provided by the clinician.
Evaluation metrics: 95% accuracy.
Labelled data: Yes.
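A rough sketch of the regression step in Dawadi et al. [59], in which SVR, LR, and RF map activity-performance features to clinical scores; the feature set, sample size, and scoring below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical activity-performance features per resident (e.g. durations,
# counts, and variability of the seven ADLs) and a clinical score to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 14))               # 60 residents, 14 engineered features
y = rng.normal(loc=25, scale=5, size=60)    # e.g. a cognitive/mobility score

for name, model in [("SVR", SVR()),
                    ("LR", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=100))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(name, "mean R^2:", r2.mean())
```
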
Pirzada et al. [60]
  • (1) The K-Nearest Neighbors (KNN) algorithm is used to detect any irregular activity. In addition, the training and test sets are generated with the k-fold technique, so that a different split is used in each iteration (see the sketch below).

Outputs: Detection of anomalies in patterns.
Evaluation metrics: No data available.
Labelled data: No data available.
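A minimal sketch, under assumed features and labels, of the KNN classifier with k-fold splits described for Pirzada et al. [60].

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold

# Hypothetical feature vectors summarising daily activity (e.g. time spent per
# room) with a binary regular/irregular label; all values are placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = rng.integers(0, 2, size=100)

accuracies = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True).split(X):
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X[train_idx], y[train_idx])
    accuracies.append(knn.score(X[test_idx], y[test_idx]))
print("mean accuracy over folds:", np.mean(accuracies))
```
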
Ghosh et al. [63]
  • (1) Support Vector Machine (SVM) with a linear kernel, K-Nearest Neighbors (KNN), and decision tree techniques are used on the ultrasonic sensor data (see the sketch below).

Outputs: Detection of standing, sitting, and falling.
Evaluation metrics: 90% accuracy.
Labelled data: Yes.
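The comparison in Ghosh et al. [63] of a linear-kernel SVM, KNN, and a decision tree on ultrasonic readings could look roughly like this; sensor count, ranges, and labels are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical ultrasonic range readings (one distance per sensor, four
# sensors), labelled 0=standing, 1=sitting, 2=falling.
rng = np.random.default_rng(2)
X = rng.uniform(0.2, 3.0, size=(150, 4))
y = rng.integers(0, 3, size=150)

for name, clf in [("SVM (linear)", SVC(kernel="linear")),
                  ("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("Decision tree", DecisionTreeClassifier())]:
    print(name, "accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```
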
Rebeen et al. [69]
  • (1) Sequences of binary sensor features are extracted with incremental fuzzy time windows (FTWs), with equal-size (1 min) temporal windows (ESTWs), and as the Raw Last sensor Activation (RLA) in one-minute windows.

  • (2) ADLs are identified using different machine learning algorithms: Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) with ESTWs; C4.5 and SVM with RLA; and LSTM, CNN, and a hybrid CNN-LSTM model with FTWs (see the sketch below).

Outputs: Better results in recognizing activities (eating, grooming, going out, showering, sleeping, spare time, and going to the bathroom) when recognition of the activity is delayed, i.e., when the preceding 1-min sensor activations are combined with 5-min, 20-min, or 1-h delays, compared to considering only the 1-min sensor data.
Evaluation metrics: CNN-LSTM: 96.97% and 96.72% F1-score for the first and second databases, respectively.
Labelled data: Yes.
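A rough sketch of the hybrid CNN-LSTM model mentioned for Rebeen et al. [69]; the sequence length, sensor count, and layer sizes are assumptions rather than the configuration used with the FTW features.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical input: sequences of 30 one-minute windows, each holding the
# (fuzzy) activation values of 12 binary sensors; 7 ADL classes.
X = np.random.rand(100, 30, 12).astype("float32")
y = np.random.randint(0, 7, size=100)

# Hybrid CNN-LSTM: 1-D convolutions extract local temporal patterns,
# an LSTM layer models the longer-range structure.
model = models.Sequential([
    layers.Input(shape=(30, 12)),
    layers.Conv1D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.LSTM(32),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.2)
```
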
Seint et al. [64]
  • (1) Colored bottles are labeled in the RGB color space and skin parts in the YCbCr color space; the desired objects are then tracked (see the sketch below).

  • (2) Features were extracted for the drug intake model and the dietary activity model.

  • (3) A hybrid PRNN-SVM (Pattern Recognition Neural Network) model was used for classification and interpretation of the drug intake activity.

  • (4) Rule-based learning with an occurrence-count method was used for classification and interpretation of the meal intake activity.

Outputs: Detection of medication and meal intake.
Evaluation metrics: 90% accuracy for taking medication and 95% accuracy for taking meals.
Labelled data: Yes.
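A minimal OpenCV sketch of the color-space labeling step from Seint et al. [64]; the synthetic frame and the YCbCr/BGR threshold bounds are common heuristics, not values taken from the paper.

```python
import cv2
import numpy as np

# Stand-in for a video frame (BGR, as OpenCV delivers it); in practice this
# would come from the camera stream used in the study.
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

# Skin segmentation in YCbCr (OpenCV orders the channels as YCrCb); the bounds
# below are common heuristic values, not thresholds reported in the paper.
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
skin_mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))

# Coloured-bottle segmentation directly in the RGB/BGR space (illustrative
# bounds for a predominantly red bottle).
bottle_mask = cv2.inRange(frame, (0, 0, 120), (80, 80, 255))

# The largest contour of each mask can then be tracked from frame to frame.
contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("candidate skin regions:", len(contours))
```
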
Cippitelli et al. [66]
  • (1) A body-orientation algorithm is applied to the depth frame to identify the orientation of the person while sitting at the table. Then, point-cloud filtering and a Self-Organizing Map (SOM) algorithm are applied to the upper part of the human body (see the sketch below).

  • (2) With a subsequent mapping, depth and RGB information are combined in the same frame.

Outputs: Detection of eating and drinking actions.
Evaluation metrics: 98.3% accuracy.
Labelled data: Yes.
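A compact, self-contained Self-Organizing Map of the kind applied by Cippitelli et al. [66] to the filtered upper-body point cloud; the point cloud, grid size, and learning schedule below are all placeholders.

```python
import numpy as np

# Stand-in for a filtered upper-body point cloud (N x 3 array of x, y, z).
rng = np.random.default_rng(3)
points = rng.normal(size=(500, 3))

grid_h, grid_w = 4, 5                               # SOM grid of 20 neurons
weights = rng.normal(size=(grid_h * grid_w, 3))     # neuron positions in 3D
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)

for t in range(200):                                # online SOM training
    lr = 0.5 * (1 - t / 200)                        # decaying learning rate
    sigma = 2.0 * (1 - t / 200) + 0.5               # decaying neighbourhood radius
    p = points[rng.integers(len(points))]
    bmu = np.argmin(np.linalg.norm(weights - p, axis=1))     # best matching unit
    grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))         # neighbourhood weights
    weights += lr * h[:, None] * (p - weights)

print("fitted SOM nodes (candidate body landmarks):", weights.shape)
```
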
Vuegen et al. [68]
  • (1) Feature extraction from the acoustic sensor data is performed using the Mel-Frequency Cepstral Coefficients (MFCCs) approach.

  • (2) A Support Vector Machine (SVM) is used for ADL classification (see the sketch below).

Outputs: Detection of brushing teeth, washing dishes, dressing, eating, preparing food, setting the table, showering, sleeping, toileting, and washing hands.
Evaluation metrics: 78.6 ± 1.4% accuracy.
Labelled data: Yes.
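A sketch of the MFCC + SVM pipeline used by Vuegen et al. [68], assuming a 13-coefficient MFCC front end and mean/std pooling over frames (neither is specified in the row above); the synthetic clips stand in for real acoustic recordings.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

SR = 16000

def mfcc_features(audio, sr=SR, n_mfcc=13):
    # Frame-wise MFCCs summarised by their per-coefficient mean and std.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Stand-ins for labelled acoustic clips (in practice: recordings of the ADLs);
# here two synthetic 1-second signals per class, with labels 0 and 1.
rng = np.random.default_rng(6)
clips, labels = [], []
for label in (0, 1):
    for _ in range(2):
        clips.append(rng.normal(scale=0.1 * (label + 1), size=SR).astype(np.float32))
        labels.append(label)

X = np.array([mfcc_features(c) for c in clips])
y = np.array(labels)

clf = SVC(kernel="rbf").fit(X, y)
print("predicted class:", clf.predict(X[:1])[0])
```
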
Yunfei et al. [72]
  • (1) The mobile device’s orientation is detected by the GPS sensor.

  • (2) A Wi-Fi fingerprinting database is created from Received Signal Strength Indicator (RSSI) data collected at multiple locations inside the house. Then, an SVM is used as the classifier to conduct location estimation (see the sketch below).

  • (3) The sounds are categorized using their timbres.

Outputs: Detection of six ADLs: working on a desktop PC in the bedroom, wandering walk, hygiene activities, cooking, washing dishes, and eating.
Evaluation metrics: Between 92.35% and 99.17% accuracy for each of the four databases.
Labelled data: Yes.
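A minimal sketch of Wi-Fi RSSI fingerprinting with an SVM location classifier, as described for Yunfei et al. [72]; the number of access points, rooms, and the synthetic RSSI values are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical fingerprint database: RSSI (dBm) from five access points,
# collected in four rooms; room labels 0..3. Values are illustrative.
rng = np.random.default_rng(4)
room_centres = rng.uniform(-80, -40, size=(4, 5))
X = np.vstack([c + rng.normal(0, 3, size=(50, 5)) for c in room_centres])
y = np.repeat(np.arange(4), 50)

# SVM over standardised RSSI vectors gives a room-level location estimate.
loc_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

new_scan = room_centres[2] + rng.normal(0, 3, size=5)
print("estimated room:", loc_clf.predict(new_scan.reshape(1, -1))[0])
```
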
Tsang et al. [74]
  • (1) Using an SVM, the accelerometer and gyroscope data are classified into transitions (walking motion) or activities (non-transition periods).

  • (2) The activity’s basic posture is classified by an SVM. Then, the direction and features of the transition motion are examined to determine the current activity (see the sketch below).

Outputs: Recognition of five indoor activities: sleeping, watching TV, toileting, cooking, and eating. All other activities, including outdoor activities, are assigned to “others”.
Evaluation metrics: 99.8% accuracy.
Labelled data: Yes.
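A rough two-stage SVM sketch following the description for Tsang et al. [74]: one classifier separates transition (walking) windows from non-transition windows, and a second classifies the basic posture; all features and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical windowed IMU features (e.g. mean/variance of accelerometer and
# gyroscope axes per window); sizes and labels are illustrative only.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12))
is_transition = rng.integers(0, 2, size=300)   # stage 1: transition vs. not
posture = rng.integers(0, 3, size=300)         # stage 2: e.g. lying/sitting/standing

# Stage 1: transition (walking) vs. non-transition windows.
stage1 = SVC(kernel="rbf").fit(X, is_transition)

# Stage 2: posture classification on the non-transition windows only.
mask = is_transition == 0
stage2 = SVC(kernel="rbf").fit(X[mask], posture[mask])

window = X[:1]
if stage1.predict(window)[0] == 1:
    print("transition (walking)")
else:
    print("posture class:", stage2.predict(window)[0])
```
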
Park et al. [65]
  • (1) Homography mapping of the two wide-FOV camera views is used for 3D localization of people, and foreground segmentation providing unoccluded views of people is used for (fine) body-level analysis from the two narrow-FOV cameras (see the sketch below). K-means clustering is adopted for the background model, and a probabilistic appearance model is used to identify the person performing an activity.

Outputs: Recognition of six activities: walking around, sitting and watching TV, preparing a utensil, storing a utensil, preparing cereal, and drinking water.
Evaluation metrics: 83% mean accuracy for all activities.
Labelled data: Yes.
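A minimal sketch of homography mapping from a wide-FOV camera image to floor-plane coordinates, the kind of localization step described for Park et al. [65]; the calibration points and pixel coordinates are made up for illustration.

```python
import cv2
import numpy as np

# Hypothetical calibration: four pixel locations of floor markers seen by a
# wide-FOV camera and their known floor-plane coordinates in metres.
image_pts = np.array([[102, 415], [530, 402], [498, 118], [140, 130]], np.float32)
floor_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]], np.float32)

# Homography from the image plane to the floor plane.
H, _ = cv2.findHomography(image_pts, floor_pts)

# Map a detected person's foot location (pixel) to floor coordinates.
foot_pixel = np.array([[[320.0, 300.0]]], np.float32)
foot_floor = cv2.perspectiveTransform(foot_pixel, H)
print("person located at (m):", foot_floor[0, 0])
```
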
Ueda et al. [70]
  • (1) The feature values of the sensor data are extracted from labeled 5-min time intervals (a recorded video is used as ground truth to label the sensor data according to the type of activity).

  • (2) An SVM is used to identify the activities from the feature values of the sensor data (see the sketch below).

Outputs: Recognition of six different activities: watching TV, taking a meal, cooking, reading a book, washing dishes, and others.
Evaluation metrics: 85% accuracy.
Labelled data: Yes.
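A sketch of 5-min windowed feature extraction followed by SVM classification, as described for Ueda et al. [70]; the sensor channels, aggregation functions, and labels are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVC

# Hypothetical sensor log: one row per second with a few numeric channels and
# an activity label derived from the ground-truth video.
idx = pd.date_range("2022-01-01 08:00", periods=3600, freq="s")
df = pd.DataFrame({"power": np.random.rand(3600),
                   "motion": np.random.randint(0, 2, 3600),
                   "activity": np.random.randint(0, 6, 3600)}, index=idx)

# Aggregate each 5-minute window into feature values; the window label is the
# most frequent activity within it.
features = df.resample("5min").agg({"power": ["mean", "std"], "motion": "sum"})
labels = df["activity"].resample("5min").agg(lambda s: s.mode().iloc[0])

clf = SVC(kernel="rbf").fit(features.to_numpy(), labels.to_numpy())
print("training accuracy:", clf.score(features.to_numpy(), labels.to_numpy()))
```
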