2020 Dec 24;21(1):56. doi: 10.3390/s21010056

Table 9.

Comparisons with state-of-the-art driver fatigue detection (DFD) systems in terms of multimodal (visual and non-visual) features. The comparisons are based on two-stage classification DFD systems (normal vs. fatigue), evaluated on 10 subjects under normal conditions.

Cited | Methodology | Detection | Accuracy (ACC) | Time | Platform

(a) Classification of driver fatigue without pre-training
[36] | Simon_EEG (2012) | EEG with statistical analysis | FT: 83.5%, NM: 84.5% | 6.7 s | No
[157] | BJ Chang-smartphone (2012) | Multiple sensors: video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer | FT: 85.5%, NM: 86.5% | 7.88 s | Yes

(b) Classification of driver fatigue on a cloud platform
[36] | Simon_EEG (2012) | EEG with statistical analysis | FT: 83.5%, NM: 84.5% | 4.33 s | No
[157] | BJ Chang-smartphone (2012) | Multiple sensors: video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer | FT: 85.5%, NM: 86.5% | 6.35 s | Yes

(c) M-DFD: combined visual and non-visual features without smartphone
- | Visual and non-visual features | CNN + RNN without pre-training | FT: 89.65%, NM: 89.5% | 3.45 s | NA
- | Visual and non-visual features | CNN + RNN with pre-training from scratch | FT: 90.40%, NM: 90.5% | 3.75 s | NA

(d) M-DFD: combined visual and non-visual features with smartphone
- | Visual and non-visual features | CNN + RNN without pre-training | FT: 89.65%, NM: 88.5% | 3.77 s | Yes
- | Visual and non-visual features | CNN + RNN with pre-training from scratch | FT: 94.50%, NM: 92.5% | 3.85 s | Yes

(e) M-DFD: combined visual and non-visual features with smartphone and cloud
- | Visual and non-visual features | CNN + RNN without pre-training | FT: 89.65%, NM: 88.5% | 1.2 s | Yes
- | Visual and non-visual features | CNN + RNN with pre-training from scratch | FT: 94.50%, NM: 93.5% | 1.3 s | Yes

CNN: convolutional neural network; RNN: recurrent neural network; EEG: electroencephalography; FT: fatigue state; NM: normal state.
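The key trade-off the table reports is that cloud offloading leaves the pre-trained CNN + RNN accuracy essentially unchanged while cutting inference time sharply. A minimal sketch of that comparison, using only the numbers from rows (d) and (e) above (the dictionary layout and labels are illustrative, not from the paper):

```python
# Illustrative summary of the pre-trained CNN + RNN rows of Table 9.
# Values are taken directly from configurations (d) and (e); the data
# structure itself is an assumption for this sketch.
results = {
    "smartphone (d)":         {"ft_acc": 94.50, "nm_acc": 92.5, "time_s": 3.85},
    "smartphone + cloud (e)": {"ft_acc": 94.50, "nm_acc": 93.5, "time_s": 1.3},
}

# Speedup gained by offloading classification to the cloud platform.
speedup = results["smartphone (d)"]["time_s"] / results["smartphone + cloud (e)"]["time_s"]

# Change in normal-state accuracy between the two configurations.
nm_delta = results["smartphone + cloud (e)"]["nm_acc"] - results["smartphone (d)"]["nm_acc"]

print(f"cloud speedup: {speedup:.2f}x")   # ~2.96x faster per decision
print(f"NM accuracy change: {nm_delta:+.1f} pp")
```

In other words, adding the cloud back end roughly triples throughput (3.85 s down to 1.3 s) at no cost in fatigue-detection accuracy.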