Sensors. 2022 Mar 7;22(5):2069. doi: 10.3390/s22052069

Table 8. Hybrid-based drowsiness detection systems.

Each entry lists the reference, sensors, hybrid parameters, extracted features, classification method, description, quality metric, and dataset.
[24]
Sensors: Automatic gearbox, image-generating computers, and control-loaded steering system
Hybrid parameters: Image- and vehicle-based features
Extracted features: Lateral position, yaw angle, speed, steering angle, driver's input torque, eyelid opening degree, etc.
Classification method: A series of mathematical operations following schemes specified by the study hypothesis
Description: A system that assists the driver when drowsiness is detected in order to prevent lane departure. The driver is given a specific duration of time to regain control of the car; otherwise, the system takes control of the vehicle and parks it.
Quality metric: Accuracy up to 100% in taking control of the car when the specified driving conditions were met
Dataset: Prepared their own dataset
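A minimal sketch of the takeover logic described in [24] is given below; the grace period, the sensor and actuator callbacks, and the parking routine are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of the assisted-takeover flow for [24]: warn a drowsy driver,
# wait a grace period, then take control and park if the driver does not respond.
import time

GRACE_PERIOD_S = 5.0  # assumed time window given to the driver (not from the paper)

def takeover_loop(is_drowsy, driver_has_control, warn_driver, park_vehicle):
    """Return a short status string describing how the episode ended."""
    if not is_drowsy():
        return "driver alert"
    warn_driver()
    deadline = time.monotonic() + GRACE_PERIOD_S
    while time.monotonic() < deadline:
        if driver_has_control():       # e.g., steering input torque above a threshold
            return "driver resumed control"
        time.sleep(0.1)
    park_vehicle()                     # system steers the car to a safe stop
    return "system parked the vehicle"
```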
[28]
Sensors: PPG sensor, accelerometer, and gyroscope
Hybrid parameters: Biological- and vehicle-based features
Extracted features: Heart rate, stress level, respiratory rate, adjustment counter, pulse-rate variability, steering wheel linear acceleration, and radian speed
Classification method: SVM
Description: Data are collected from the sensors, and the extracted features are fed to the SVM algorithm. If the driver is determined to be drowsy, they are alerted via the watch's alarm.
Quality metric: Accuracy: 98.3%
Dataset: Prepared their own dataset
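A minimal sketch of the SVM stage described in [28] follows; the feature columns and the synthetic data are illustrative stand-ins for the watch-derived signals, not the authors' data or code.

```python
# Sketch: biological + vehicle features classified as alert/drowsy with an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder columns: heart rate, respiratory rate, pulse-rate variability,
# steering linear acceleration, steering radian speed.
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)          # 0 = alert, 1 = drowsy (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("Predicted state:", clf.predict(X[:1]))   # a prediction of 1 would trigger the alarm
```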
[121]
Sensors: Smartphone camera
Hybrid parameters: Biological- and image-based features
Extracted features: Blood volume pulse, blinking duration and frequency, HRV, and yawning frequency
Classification method: Threshold-based: drowsiness is flagged if any of the detected parameters shows a specific change/value
Description: Used multichannel second-order blind identification based on extended PPG in a smartphone to extract the blood volume pulse, yawning, and blinking signals.
Quality metric: Sensitivity: up to 94%
Dataset: Prepared their own dataset
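Below is a minimal sketch of the parameter-threshold rule described for [121]; the threshold values and field names are assumptions for illustration, not the paper's settings.

```python
# Sketch: flag drowsiness when any monitored parameter crosses a set value.
THRESHOLDS = {
    "blink_duration_s": 0.4,      # unusually long blinks (assumed value)
    "blink_rate_per_min": 25,     # elevated blink frequency (assumed value)
    "yawn_rate_per_min": 3,       # frequent yawning (assumed value)
    "hrv_drop_ratio": 0.3,        # relative drop in heart-rate variability (assumed value)
}

def is_drowsy(measurements: dict) -> bool:
    """Return True if any extracted parameter exceeds its threshold."""
    return any(measurements.get(k, 0) > v for k, v in THRESHOLDS.items())

print(is_drowsy({"blink_duration_s": 0.5, "yawn_rate_per_min": 1}))  # True
```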
[29]
Sensors: Headband equipped with EEG electrodes, accelerometer, and gyroscope
Hybrid parameters: Biological- and behavioral-based features
Extracted features: Eye-blink pattern analysis, head movement angle and magnitude, and spectral power analysis
Classification method: Backward feature selection followed by various classifiers
Description: Used a non-invasive, wearable headband containing three sensors. The system combines the features extracted from head movement analysis, eye blinking, and spectral signals, which are fed to a feature selection block followed by various classification methods. The linear SVM performed best.
Quality metric: Accuracy, sensitivity, and precision: linear SVM: 86.5%, 88%, and 84.6%; linear SVM after feature selection: 92%, 88%, and 95.6%
Dataset: Prepared their own dataset
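A minimal sketch of the pipeline described for [29] follows, using scikit-learn's SequentialFeatureSelector in backward mode as a stand-in for the paper's backward feature selection; the feature matrix, feature count, and labels are synthetic placeholders.

```python
# Sketch: backward feature selection followed by a linear SVM.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))            # 20 candidate blink/head/spectral features (placeholder)
y = rng.integers(0, 2, size=300)          # drowsy / alert labels (placeholder)

selector = SequentialFeatureSelector(LinearSVC(), n_features_to_select=10,
                                     direction="backward", cv=5)
model = make_pipeline(StandardScaler(), selector, LinearSVC())
model.fit(X, y)
print("Selected feature mask:",
      model.named_steps["sequentialfeatureselector"].get_support())
```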
[122]
Sensors: SCANeR Studio, faceLAB, electrocardiogram, PPG sensor, electro-dermal activity, Biopac MP150 system, and AcqKnowledge software
Hybrid parameters: Biological-, image-, and vehicle-based features
Extracted features: Heart rate and variability, respiration rate, blink duration and frequency, PERCLOS, head and eyelid movements, time to lane crossing, position on the lane, speed, and SWA
Classification method: ANN
Description: Included two ANN models: one detects the drowsiness degree, and the other predicts the time needed to reach a specific drowsiness level. Different combinations of the features were tested.
Quality metric: Overall mean square error of 0.22 for predicting various drowsiness levels; overall mean square error of 4.18 min for predicting when a specific drowsiness level will be reached
Dataset: Prepared their own dataset
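A minimal sketch of the two-network scheme described for [122] is shown below; the network sizes, the drowsiness scale, and the synthetic data are assumptions for illustration only.

```python
# Sketch: one ANN estimates the current drowsiness degree, a second ANN
# predicts the minutes until a given drowsiness level is reached.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))                    # physiological + image + vehicle features (placeholder)
drowsiness_level = rng.uniform(0, 10, size=500)   # assumed drowsiness rating scale
minutes_to_level = rng.uniform(0, 30, size=500)   # assumed time-to-level target

degree_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
time_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
degree_net.fit(X, drowsiness_level)
time_net.fit(X, minutes_to_level)

print("Estimated drowsiness degree:", degree_net.predict(X[:1]))
print("Minutes to target level:", time_net.predict(X[:1]))
```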
[123]
Sensors: EEG, EOG, and ECG electrodes and channels
Hybrid parameters: Biological-based features and NIRS
Extracted features: Heart rate, alpha- and beta-band power, blinking rate, and eye closure duration
Classification method: Fisher's linear discriminant analysis
Description: A new approach that combined EEG and NIRS to detect driver drowsiness. The most informative parameters were the frontal beta band and the oxygenation. Fisher's linear discriminant analysis was used for classification, and time series analysis was employed to predict drowsiness.
Quality metric: Accuracy: 79.2%
Dataset: MIT/BIH polysomnographic database [82]
[124]
Sensors: Multi-channel amplifier with active electrodes, projection screen, and touch screen
Hybrid parameters: Biological-based features and contextual information
Extracted features: EEG signal: power spectra, five frequency characteristics, and four power ratios; EOG signal: blinking duration and PERCLOS; contextual information: driving conditions (lighting condition and driving environment) and the sleep/wake predictor value
Classification method: KNN, SVM, case-based reasoning, and RF
Description: Used EOG, EEG, and contextual information. The scheme contained five sub-modules. Overall, the SVM classifier showed the best performance.
Quality metric: Accuracy: SVM multiclass classification: 79%; SVM binary classification: 93%. Sensitivity: SVM multiclass classification: 74%; SVM binary classification: 94%
Dataset: Prepared their own dataset
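Below is a minimal sketch comparing three of the classifiers evaluated in [124] (KNN, SVM, and RF); the case-based reasoning module is omitted, and the features and labels are synthetic placeholders for the EEG/EOG/contextual inputs.

```python
# Sketch: cross-validated comparison of KNN, SVM, and random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 15))            # placeholder EEG/EOG/context features
y = rng.integers(0, 2, size=400)          # binary alert/drowsy case (placeholder)

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```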
[125]
Sensors: Smartphone
Hybrid parameters: Image-based features, as well as voice and touch information
Extracted features: PERCLOS, vocal data, and touch response data
Classification method: Linear SVM
Description: Utilized a smartphone for DDD. The system uses three verification stages in the detection process; if drowsiness is verified, an alarm is initiated.
Quality metric: Accuracy: 93.33%
Dataset: Prepared their own dataset, called 'Invedrifac' [126]
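A minimal sketch of a three-stage verification flow in the spirit of [125] is given below; the sequential ordering (image check, then voice, then touch) and all function names are assumptions, not the authors' exact logic.

```python
# Sketch: escalate through three checks before raising the drowsiness alarm.
def verify_drowsiness(perclos_pred, voice_response_ok, touch_response_ok):
    """Return True only if every verification stage still indicates drowsiness."""
    if perclos_pred != "drowsy":          # stage 1: linear-SVM decision on PERCLOS
        return False
    if voice_response_ok():               # stage 2: driver answered a vocal prompt
        return False
    if touch_response_ok():               # stage 3: driver responded to a touch prompt
        return False
    return True                           # all stages failed, so trigger the alarm

print(verify_drowsiness("drowsy", lambda: False, lambda: False))  # True -> alarm
```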
[127]
Sensors: Driving simulator and monitoring system
Hybrid parameters: Biological-, image-, and vehicle-based features
Extracted features: 80 features were extracted: PERCLOS, SWA, LF/HF, etc.
Classification method: RF and majority-voting (logistic regression, SVM, KNN) classifiers
Description: Vehicle-based, physiological, and behavioral signs were used in this system. Two labeling schemes for the driver's drowsiness state were used: slightly drowsy and moderately drowsy.
Quality metric: Accuracy, sensitivity, and precision: RF classifier with slightly-drowsy labeling: 82.4%, 84.1%, and 81.6%; majority voting with moderately-drowsy labeling: 95.4%, 92.9%, and 97.1%
Dataset: Prepared their own dataset
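A minimal sketch of the majority-voting ensemble described for [127] follows, with a random forest trained alongside it; the 80-feature matrix is replaced by synthetic data, and the estimator settings are assumptions.

```python
# Sketch: hard-voting ensemble (logistic regression, SVM, KNN) plus an RF baseline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 80))            # stand-in for PERCLOS, SWA, LF/HF, etc.
y = rng.integers(0, 2, size=500)          # alert vs. drowsy label (placeholder)

voter = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier())],
    voting="hard",
)
rf = RandomForestClassifier(random_state=0)
voter.fit(X, y)
rf.fit(X, y)
print("Voting prediction:", voter.predict(X[:1]), "RF prediction:", rf.predict(X[:1]))
```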