Table 5. Summary of previous studies on driver-drowsiness detection methods and their reported accuracy.
| Methods | Measurement Information | Previous Studies | Method | Accuracy |
| --- | --- | --- | --- | --- |
| Contact method | Biometric information | Satti et al. [38] | Electromyogram measurement from electrodes attached to the steering wheel. | NA |
| | | Kundinger et al. [42] | Electrocardiographic measurement from a wearable sensor on the wrist. | ≥92% |
| | | Kundinger et al. [43] | Electrocardiographic measurement from a wearable sensor on the wrist. | ≥90% |
| Non-contact method | Vehicle behavior | Subaru [44], Hino [45], Mazda [46], Honda [47], Volvo [48], Jaguar [49] | Detects changes in vehicle behavior and warns the driver through the HMI. | NA |
| | | Arefnezhad et al. [10] | ANFIS with steering angle as input. | 98.12% |
| | | Jeon et al. [50] | Ensemble network model using steering and pedal pressure as input. | 94.2% |
| | Graphic information (of driver) | Toyota [51], Subaru [53], Nissan [54], Hino [45], Thanko [55], Yupiteru [56] | Warnings for closed eyes and side glances. | NA |
| | | Toyota [52] | Stops the car when the driver is not in a good posture or does not respond to warnings. | |
| | | Cardone et al. [61] | Applied PERCLOS to visible images obtained with a thermal imaging camera and classified "wakefulness", "fatigue", and "dozing" by deep learning. | Approximately 65% |
| | | Tashakori et al. [63] | Support vector machine, k-nearest neighbor, and decision tree classifiers applied to temperature patterns of the forehead and cheeks. | 84% |
| | | Celecia et al. [62] | Fuzzy inference system estimating sleepiness from eye and mouth information. | 95.5% |
| | | Chakkravarthy [64] | Eye aspect ratio (EAR). | 75% for blinking, 35% when wearing glasses, 25% when hair covers the face |
| | | Manu [67] | Correlation-coefficient template matching. | 94.58% |
| | | Li et al. [69] | Fatigue detected from the driver's eye-closure time, number of blinks, and number of yawns. | 95.10% |
| | | Képešiová et al. [70] | CNN trained on grayscale face images. | 98.02% |
| | | Dua et al. [71] | Drowsiness detected from four feature types (hand gestures, facial expressions, behavioral features, and head movements) using four deep learning models: AlexNet, VGG-FaceNet, FlowImageNet, and ResNet. | 85% |
| | | Yang et al. [72] | Nodding detection using an LSTM autoencoder on RFID tag data. | ≥90% |
| | | Jabber et al. [74] | Facial landmarks detected from images and classified by a multilayer perceptron. | 81% |
| | | Ma et al. [75] | Drowsiness classified by PSO-H-ELM based on the power spectral density of EEG data. | 83.12% |
| Multiple methods | | de Naurois et al. [6] | Model using information on eyelid closure, eye and head movements, and driving time. | MSE of drowsiness level: 0.22 |
| | | Baccour et al. [27] | Logistic regression with eye closure, head movement, KSS, HFC, etc., as explanatory variables. | 72.7% |
| | | Ariizumi et al. [76] | Pulse, respiration, and center-of-gravity information used as input to an ESN for estimation. | 83.3% |
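Several of the image-based entries in Table 5 (EAR, PERCLOS, blink and eye-closure counting) rest on the same primitive: an eye-openness measure computed from facial landmarks and aggregated over time. The sketch below is purely illustrative and is not the implementation of any cited study; the function names, the six-point landmark ordering, the 0.2 closure threshold, and the toy coordinates are assumptions chosen only to show how such a measure could be computed.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Assumes the common 68-point facial-landmark ordering: eye[0] and eye[3]
    are the horizontal corners; eye[1]/eye[5] and eye[2]/eye[4] are the
    upper/lower lid points.
    """
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def perclos(ear_series, closed_threshold=0.2):
    """Fraction of frames in which the eye is considered closed (PERCLOS-style).

    The 0.2 threshold is an assumption; studies tune it per driver and camera setup.
    """
    ear_series = np.asarray(ear_series, dtype=float)
    return float(np.mean(ear_series < closed_threshold))

# Toy usage: an open eye yields a higher EAR than a nearly closed one,
# and PERCLOS reports the proportion of "closed" frames in a window.
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
closed_eye = [(0, 3), (2, 3.4), (4, 3.4), (6, 3), (4, 2.6), (2, 2.6)]
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))   # ~0.67 vs ~0.13
print(perclos([eye_aspect_ratio(open_eye)] * 80 +
              [eye_aspect_ratio(closed_eye)] * 20))               # 0.2
```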