
Table 4.

Publicly available state-of-the-art vision-based datasets used to extract the visual features for training and testing the network.

| Cited | Data Source | Features | Link URL |
|-------|-------------|----------|----------|
| [11] | NTHU-DDD dataset | 36 subjects; 9.5 h of video; 5 different classes | http://cv.cs.nthu.edu.tw/php/callforpaper/datasets/DDD/ |
| [180] | UTA-RLDD dataset | 60 participants; 30 h of video at 30 fps; 3 classes: alertness, low vigilance, and drowsiness | http://vlm1.uta.edu/~athitsos/projects/drowsiness/ |
| [181] | MultiPIE | Various subjects, poses, illuminations, and occlusions; 68 landmark points | https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/ |
| - | Kaggle distracted drivers | 22,424 images of size 480 × 680; 10 classes | https://www.kaggle.com/c/state-farm-distracted-driver-detection |
| [182] | 3MDAD | 60 subjects; 16 different actions | https://sites.google.com/site/benkhalifaanouar1/6-datasets#h.nzos3chrzmb2 |
| [183] | MiraclHB | 12 subjects; AVI format, 640 × 480 resolution at 30 fps | http://www.belhassen-akrout.com/ |
| [184] | BU-3DFE | 100 subjects; 2500 facial expression models | http://www.cs.binghamton.edu/~lijun/Research/3DFE/3DFE_Analysis.html |

University of Texas at Arlington Real-Life Drowsiness Dataset (UTA-RLDD), National Tsing Hua University Drowsy Driver Detection (NTHU-DDD), multi-pose, illumination, and expression (MultiPIE), multimodal multiview and multispectral driver action dataset (3MDAD), Multimedia Information Systems and Advanced Computing Laboratory Hypo-vigilance database (MiraclHB), and Binghamton University 3D facial expression (BU-3DFE).
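
As a minimal, hypothetical sketch of how one of these datasets could be wired into a training pipeline, the snippet below loads the Kaggle distracted-driver images with torchvision's ImageFolder. The directory path, input resolution, and normalization constants are illustrative assumptions, not settings taken from the cited works.

```python
# Minimal sketch: loading the Kaggle distracted-driver images for training.
# Assumes the competition's train/c0..c9 class-subfolder layout; the path,
# input size, and normalization values are illustrative, not from the paper.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed network input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder maps each class subdirectory (c0..c9, i.e., the 10 classes)
# to an integer label, yielding (image_tensor, label) pairs.
train_set = datasets.ImageFolder("state-farm/imgs/train", transform=transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=32, shuffle=True, num_workers=4
)
```

An analogous loader for the video-based datasets (e.g., UTA-RLDD or NTHU-DDD) would first decode frames, for example with OpenCV, before applying the same per-frame transforms.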