Abstract
Tracking drivers’ eyes and gazes is a topic of great interest in the research of advanced driving assistance systems (ADAS). It is a matter of serious discussion in the road safety research community in particular, as visual distraction is considered among the major causes of road accidents. In this paper, techniques for eye and gaze tracking are first comprehensively reviewed and their major categories are discussed. The advantages and limitations of each category are explained with respect to their requirements and practical uses. The applications of eye and gaze tracking systems in ADAS are then discussed. The process of acquiring a driver’s eye and gaze data and the algorithms used to process these data are explained. It is also explained how data related to a driver’s eyes and gaze can be used in ADAS to reduce the losses associated with road accidents caused by the driver’s visual distraction. A discussion of the required features of current and future eye and gaze trackers is also presented.
Keywords: advanced driving assistance systems (ADAS), eye tracking, gaze tracking, line of sight (LoS), point of regard (PoR), road safety
1. Introduction
1.1. Background and Motivation
The human eyes are beautiful and interactive organs of the human body with unique physical, photometric, and motion characteristics. These characteristics provide the vital information required for eye detection and tracking. In our daily lives, a person’s emotional state, mental occupancy, and needs can be judged from the person’s eye movements. Through our eyes, we identify the properties of the visual world and collect the information essential to our lives. Moreover, in the field of image and video processing, the eyes play a vital role in the process of face detection and recognition [1,2,3,4]. The history of eye tracking dates back to the second half of the 19th century, when researchers observed eye movements to analyze reading patterns. The early trackers used a sort of contact lens with a hole for the pupil [5]. In this arrangement, the movements of the eye were tracked using an aluminum pointer connected to the lens. The authors of [6,7] developed the first non-intrusive eye trackers using light beams that were reflected on the eye and then recorded on film. The authors also provided a systematic analysis of reading and picture viewing. A significant contribution to eye tracking research was made by the author of [8] in the 1950s and 1960s. The author showed that gaze trajectories depend on the task that the observer has to execute. If observers are asked particular questions about an image, their eyes concentrate on question-relevant areas of the image. The author also devised a suction cup that could stay on the human eye by suction to analyze visual perception in the absence of eye movements. In the 1970s and afterwards, eye tracking research expanded rapidly [9]. In the 1980s, a hypothesis known as the eye-mind hypothesis was formulated and critically analyzed by other researchers [10,11,12]. The hypothesis proposed that there is no considerable lag between what is fixated and what is processed. Further, several aspects of eye tracking in the field of human-computer interaction and eye tracking applications to assist disabled people were also developed in the same decade [13]. During the last two to three decades, revolutionary developments have been observed in eye tracking due to the introduction of artificial intelligence techniques, portable electronics, and head-mounted eye trackers.
Eye tracking and gaze estimation are essentially two areas of research. The process of eye tracking involves three main steps: detecting the presence of eyes, precisely interpreting eye positions, and tracking the detected eyes from frame to frame. The position of the eye is generally measured with the help of the pupil or iris center [14]. Gaze estimation is the process of estimating and tracking the 3D line of sight of a person, or simply, where a person is looking. The device or apparatus used to track gaze by analyzing eye movements is called a gaze tracker. A gaze tracker performs two main tasks simultaneously: localizing the eye position in the video or images, and tracking its motion to determine the gaze direction [15,16]. A generic representation of such techniques is shown in Figure 1. In addition to its application in advanced driving assistance systems (ADAS), gaze tracking is also critical in several other applications, such as gaze-dependent graphical displays, gaze-based user interfaces, investigations of human cognitive states, and human attention studies [17,18,19].
Tracking a driver’s eyes and gaze is an important feature of advanced driving assistance systems (ADAS) that can help reduce the losses involved in road accidents. According to the World Health Organization’s reports [20,21,22], every year approximately 1–1.25 million people die and 20–50 million people are injured in road accidents across the world. Moreover, if the recent trend persists, road accidents could become the fifth leading cause of death by 2030. In terms of cost, the damages involved in road accidents amount to more than five hundred billion USD. This amount is approximately equal to 2% of the gross national product (GNP) of advanced countries, 1.5% of the GNP of medium-income economies, and 1% of the GNP of low-income countries. According to recent studies (e.g., [23]), it is expected that the number of road accidents related to visual distraction can be reduced by 10–20% by the facial monitoring features of ADAS.
1.2. Contribution and Organization
The intention of this paper is to benefit researchers by offering a comprehensive framework for a basic understanding of eye and gaze tracking and their applications in ADAS. To the best of the authors’ knowledge, this is the first study that reviews visual data (i.e., eye and gaze data) techniques in the context of ADAS applications, though studies do exist on the individual topics covered in this paper.
This paper is organized as follows: Section 2 and Section 3 explain the models and techniques developed for eye and gaze tracking, respectively. The major categories of these models and techniques, with emphasis on the literature in which these techniques were initially proposed, and their respective benefits and limitations, are also discussed. Section 4 explains the driving process and the challenges associated with a driver’s visual distraction. In this section, it is explained how a driver’s visual data are collected, processed, and used in ADAS applications. Further, the features of modern vehicles based on the utilization of drivers’ visual data and other vehicle parameters are summarized in this section. At the end of each section of the paper, the necessary information is presented in a comprehensive tabular form. Section 5 concludes the paper with pointers to future directions in this research field. The authors admit that the topic presented is too broad and deep to be covered fully in a single paper. We encourage interested readers to refer to the references provided at the end of this paper for further study of specific areas or topics not covered in this work. For example, operational definitions of driving performance measures and statistics are well-documented in [24].
2. Eye Tracking
2.1. Introduction
The first step in eye tracking is to detect the eyes. The detection of eyes in image or video data is based on eye models. An exemplar eye model should be sufficiently expressive to accommodate the variability in eye dynamics and appearance, while being adequately constrained to be computationally efficient. Eye detection and tracking is an arduous task due to several challenging issues, such as the degree of eye openness; variability in size, head pose, and reflectivity; and occlusion of the eye by the eyelids [3,25,26]. For instance, a small variation in viewing angle or head position causes significant changes in the eye appearance or gaze direction, as shown in Figure 2. The eye’s appearance is also influenced by the ethnicity of the subject, light conditions, texture, iris position within the eye socket, and the eye status (open or closed). Eye detection methods are broadly categorized based on the eyes’ shape, features, and appearance, as explained below.
2.2. Shape-Based Techniques
An open eye can be efficiently expressed by its exterior (e.g., eyelids) and interior (e.g., iris and pupil) parts. The shape-based techniques are based on a geometric eye model (i.e., an elliptical or a complex eye structure) augmented with a similarity index. The model defines the allowable template deformations and contains parameters for nonrigid template deformations and rigid transformations. The main feature of these techniques is their capability of handling the changes in shape and scale.
2.2.1. Elliptical Eye Models
For simpler applications of eye detection and tracking, the elliptical appearance of the eye is sufficient. Though simple elliptical eye shape models proficiently model features such as the pupil and iris under various viewing angles, these models fall short in capturing the variations and inter-variations of certain eye features. A major category of techniques that consider the simple elliptical eye model is known as model-fitting techniques, which fit the designated features to the elliptical model [27,28]. Typically, in techniques that utilize the elliptical eye model, pupil boundaries are extracted with the help of edge detection techniques, while transformation algorithms such as the Hough transform are utilized to extract the features of the iris and pupil [29]. The authors of [30] and [31] estimated the center of the pupil ellipse using thresholds of the image intensities. In their techniques, a constraint of shape circularity is employed to improve the efficiency; however, the model works only for near-frontal faces due to this constraint. Another category of techniques that exploits the simple elliptical eye model is known as voting-based techniques [31]. The parameters selected in voting-based techniques support a given hypothesis through an accumulation process. The authors of [32] proposed a voting scheme that utilized temporal and spatial data to detect the eyes. They used a large temporal support and a set of heuristic rules to reject false pupil candidates. A similar voting scheme, which used edge orientation directly in the voting process, was also suggested in [33]. This technique was based on the intensity features of the images, and it relied on anthropomorphic averages and a prior face model to filter out the false positives. A limitation of such techniques is that they basically rely on maxima in the feature space. When the number of eye region features decreases, the techniques may mistake other regions, such as eyebrows, for the eyes. So, these techniques are typically applicable when the search region is confined. A low-cost eye tracking system is proposed in [34], where the Starburst algorithm is used for iris detection. This algorithm finds the highest gray-level differences along rays while recursively casting new rays at the already found maxima. The Starburst algorithm is basically an active shape model which uses several features along each normal.
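As an illustration of this family of techniques, the following minimal Python sketch (using OpenCV, with assumed, untuned parameter values and a hypothetical input file) fits a circular pupil/iris model to a cropped eye image with the Hough transform; a practical tracker would add the temporal filtering and false-candidate rejection discussed above.

```python
import cv2
import numpy as np

def detect_iris_circle(eye_image_path):
    """Fit a circular iris/pupil model to a cropped eye image.

    A minimal sketch of the shape-based (circular) approach: edges are
    extracted and a Hough transform accumulates votes for circle
    parameters (center x, center y, radius).
    """
    gray = cv2.imread(eye_image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(eye_image_path)

    # Smooth to suppress noise and specular highlights before edge detection.
    blurred = cv2.medianBlur(gray, 5)

    # Hough circle transform; radius limits are illustrative and would be
    # tuned to the camera's eye-region resolution in a real system.
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1,          # accumulator resolution (same as the image)
        minDist=20,    # minimum distance between detected centers
        param1=80,     # Canny high threshold
        param2=25,     # accumulator threshold (lower -> more candidates)
        minRadius=8,
        maxRadius=40,
    )
    if circles is None:
        return None

    # Keep the strongest candidate; a tracker would instead filter candidates
    # using temporal support and anthropometric constraints.
    x, y, r = np.round(circles[0, 0]).astype(int)
    return (x, y), r

if __name__ == "__main__":
    result = detect_iris_circle("eye_region.png")  # hypothetical input crop
    print("iris center, radius:", result)
```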
2.2.2. Complex Shape Models
Complex shape-based models are based on in-depth modeling of the eye shape [35,36,37,38]. A well-known example of complex shape models is the deformable template model [35], which consists of a circle for the iris representation and two parabolas for the eyelids. To fit the model to an image, energy functions for internal forces, edges, valleys, and image peaks are incorporated into an update rule. However, the right selection of the template’s initial position is crucial for accurate results in this approach, as the system cannot detect the eyes if the template is initialized above the eyebrow. Other limitations of this model are the complex template description and difficulties with eye occlusions due to non-frontal head pose or eyelid closure. The authors of [36] extended this model to extract the eye features by considering the eye corners as the initialization points. They used a nonparametric technique (known as the snake model) to determine the head’s outline, and found the approximated eye positions using anthropomorphic averages. The information of the detected eye corners is utilized to lower the number of iterations in the optimization of the deformable template. Similarly, the authors of [39,40] proposed ways to speed up the technique proposed in [35]. Some researchers combined the features of complex eye models with elliptical models to improve the accuracy and speed of the localization process (e.g., [41]).
Certain deformable models (e.g., the snake model) can accommodate significant shape variations, while others cannot handle the large variability of eye shapes. The techniques based on deformable eye templates are typically considered more logical, generic, and accurate. However, they have certain limitations, such as the requirement for high-contrast images, being computationally demanding, and requiring initialization close to the eye. Moreover, for larger head movements, they have to rely on other techniques to provide good results.
2.3. Feature-Based Techniques
Feature-based techniques are based on the identification and utilization of a set of unique features of the human eyes. These techniques identify local features of the eye and face that have reduced sensitivity to variations in viewing angle and illumination. The commonly used features for eye localization are corneal reflections, the limbus, and dark and bright pupil images. Typically, these techniques first identify and detect the local features; then, they apply a filter to highlight the desired features while suppressing the others, or utilize a prior eye shape model to construct a local contour; and, finally, they apply classification algorithms to produce the output. Generally, feature-based techniques are reported to provide good results in indoor applications; however, their outdoor performance is comparatively limited. These techniques are further subcategorized as follows.
2.3.1. Local Features
The eyes’ local features are detected and utilized in combination with a prior shape model to detect and track the eyes [42,43,44,45,46]. For instance, the approach proposed in [42] first located a specific edge and then employed steerable Gabor filters to trace the edges of the eye corners or the iris. Next, based on the selected features and the eye model, a search policy was adopted to detect the shape, position, and corners of the eye.
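As a rough illustration of such filter-driven feature extraction, the sketch below applies a small bank of oriented Gabor filters to an eye-region crop; the kernel size, wavelength, and orientations are assumed values, not those used in the cited works, and the result would normally feed a model-based search as described above.

```python
import cv2
import numpy as np

def gabor_feature_maps(eye_gray, orientations=(0, 45, 90, 135)):
    """Apply a small bank of oriented Gabor filters to an eye-region image.

    Illustrative only: kernel size, wavelength, and sigma are assumed values;
    feature-based trackers tune such filters to emphasize eye corners,
    eyelid edges, or the limbus at particular orientations.
    """
    responses = {}
    for theta_deg in orientations:
        kernel = cv2.getGaborKernel(
            ksize=(21, 21),
            sigma=4.0,
            theta=np.deg2rad(theta_deg),
            lambd=10.0,   # wavelength of the sinusoidal carrier
            gamma=0.5,    # spatial aspect ratio
            psi=0,
        )
        responses[theta_deg] = cv2.filter2D(eye_gray, cv2.CV_32F, kernel)

    # Candidate feature points: locations with a strong response across
    # orientations (a real system would follow this with model-based search).
    saliency = np.max(np.stack(list(responses.values())), axis=0)
    return responses, saliency

if __name__ == "__main__":
    eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
    if eye is not None:
        _, saliency = gabor_feature_maps(eye.astype(np.float32) / 255.0)
        print("max filter response:", float(saliency.max()))
```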
The authors of [44] suggested a part-based model, in which an eye part (e.g., the eyelid) is considered as a microstructure. They extracted face features using a multilayer perceptron method by locating the eyes in face images. The authors of [45] extended the work of [42] and made improvements by utilizing multiple specialized neural networks (NNs) trained to detect scaled or rotated eye images, which worked effectively under various illumination conditions. The authors of [46,47] detected and utilized the information of the area between the two eyes instead of the eyes themselves. The area between the eyes is comparably bright on its lower and upper sides (nose bridge and forehead, respectively) and has dark regions on its right and left sides. This area is considered more stable and detectable than the eyes themselves. Moreover, this area can be viewed from a wide range of angles and has a common pattern for most people. The authors of [46,47] located the candidate points by employing a circle-frequency filter. Subsequently, by analyzing the pattern of intensity distribution around each point, they eliminated the spurious points. To enhance the robustness of this method, a fixed “between-the-eyes” template was developed to identify the actual candidates and to avoid confusion between the eye regions and other parts [48,49].
2.3.2. Filter Response
The use of specific filters has also been proposed in several techniques to enhance a desired set of features while diminishing the impact of irrelevant features. For instance, the authors of [50,51] used linear and nonlinear filters for eye detection and face modeling. They used Gabor wavelets for the detection of the edges of the eye’s sclera. The eye corners, detected through a nonlinear filter, are utilized to determine the eye regions after elimination of the spurious eye corner candidates. The edges of the iris are located through a voting method. Experimental results demonstrate that the nonlinear filtering techniques are superior to the traditional, edge-based, linear filtering techniques in terms of detection rates. However, the nonlinear techniques require high-quality images.
2.3.3. Detection of Iris and Pupil
The pupil and iris, being darker than their surroundings, are commonly considered reliable features for eye detection. The authors of [52] used a skin-color model and introduced an algorithm to locate the pupils by searching for two dark areas that fulfill specific anthropometric requirements. Their technique, however, cannot perform well in different light conditions due to the limitations of the skin-color model. Generally, the use of IR light instead of visible light seems more appropriate for dark region detection. The techniques based on iris and pupil detection require images taken close to the eyes or high-resolution images.
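The following minimal sketch illustrates the general idea of dark-region pupil detection: the darkest pixels are thresholded and candidate blobs are filtered by size and circularity. The threshold fraction and shape limits are assumptions chosen for illustration and would need tuning, especially under the lighting variations mentioned above.

```python
import cv2
import numpy as np

def dark_pupil_candidates(eye_gray, max_relative_area=0.2):
    """Locate dark-region candidates for the pupil in a cropped eye image.

    Threshold the darkest pixels, extract connected components, and keep
    roughly circular blobs of plausible size. All limits are illustrative.
    """
    # Adaptive cut-off: keep roughly the darkest 5% of pixels (assumed fraction).
    cutoff = float(np.percentile(eye_gray, 5))
    _, mask = cv2.threshold(eye_gray, cutoff, 255, cv2.THRESH_BINARY_INV)

    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    h, w = eye_gray.shape[:2]
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 10 or area > max_relative_area * h * w:
            continue  # too small (noise) or too large (shadow / eyebrow)
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity > 0.6:  # keep roughly circular blobs only (assumed limit)
            (x, y), radius = cv2.minEnclosingCircle(c)
            candidates.append(((int(x), int(y)), int(radius), circularity))

    # Return candidates sorted by circularity; a tracker would verify them
    # against temporal and anthropometric constraints as discussed above.
    return sorted(candidates, key=lambda item: -item[2])
```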
The majority of the feature-based techniques cannot be used to model closed eyes. In an effort to overcome this limitation, a method [53] was proposed to track the eyes and to retrieve the eye parameters with the help of a dual-state (i.e., open or closed) eye model. The eyelids and eyes’ inner corners are detected through the algorithm proposed in [54]. This technique, however, requires a manual initialization of the eye model and high contrast images.
2.4. Appearance-Based Techniques
The appearance-based techniques detect and track the eyes by using photometric appearance of the eyes, which is characterized by the filter response or color distribution of the eyes with respect to their surroundings. These techniques can be applied either in a spatial or a transformed domain which diminishes the effect of light variations.
Appearance-based techniques are either image template-based or holistic in approach. In the former approach, both the intensity and the spatial information of each pixel are maintained, while in the latter approach, the intensity distribution is considered and the spatial information is disregarded. Image template-based techniques have limitations associated with scale and rotational modifications, and are negatively influenced by eye movements and head pose variations for the same subject. Holistic approaches (e.g., [55,56]) make use of statistical techniques to derive an efficient representation while analyzing the intensity distribution of the entire object’s appearance. The representation of the object, defined in a latent space, is utilized to deal with the disparities in the object’s appearance. During the test stage, the similarity analysis between the stored patterns and the test image is performed in the latent space. These techniques usually need a large amount of training data (e.g., the eyes of different subjects under different illumination conditions and facial orientations). However, the underlying models, constructed through regression, are principally independent of the object classes.
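A compact way to picture the holistic approach is a principal component analysis (PCA) “eigen-eyes” subspace: training patches are projected into a latent space and a test patch is scored by its reconstruction error. The sketch below is only a schematic of this idea (with synthetic stand-in data), not the specific statistical models used in the cited studies.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_eye_subspace(eye_patches, n_components=20):
    """Learn a low-dimensional 'eye appearance' subspace from training patches.

    eye_patches: array of shape (num_samples, height * width), grayscale,
    ideally covering many subjects, poses, and illumination conditions.
    """
    pca = PCA(n_components=n_components)
    pca.fit(eye_patches)
    return pca

def eye_similarity(pca, test_patch):
    """Score how 'eye-like' a test patch is via its reconstruction error
    in the learned latent space (smaller error = more similar to training eyes)."""
    coded = pca.transform(test_patch.reshape(1, -1))
    reconstructed = pca.inverse_transform(coded)
    return float(np.linalg.norm(test_patch.reshape(1, -1) - reconstructed))

if __name__ == "__main__":
    # Synthetic stand-in data; a real system would use labeled eye crops.
    rng = np.random.default_rng(0)
    train = rng.random((200, 24 * 32))
    model = train_eye_subspace(train)
    print("reconstruction error:", eye_similarity(model, rng.random(24 * 32)))
```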
2.5. Hybrid Models and Other Techniques
Some techniques are based on symmetry operators [57,58,59] while some approaches exploit the data of eye blinks and motions [48,53,60,61,62]. Hybrid models combine the benefits of various eye models in a single arrangement while overcoming their deficiencies. These models, for instance, combine shape and intensity features [63,64,65], and shape and color features [52,62,63,64,65,66,67,68,69,70,71].
2.6. Discussion
The eye detection and tracking techniques, based on their photometric and geometric properties, are discussed in the preceding sections. Each technique has its own pros and cons, and the best performance of any scheme requires fulfillment of specific conditions in image and video data. These conditions are related to ethnicity, head pose, illumination, and degree of eye openness. The existing approaches are usually well applicable to fully open eyes, near-frontal viewing angles, and under good illumination conditions. Table 1 summarizes the various eye detection techniques and compares them under various image conditions.
Table 1. Summary and comparison of eye detection and tracking techniques under various image conditions. The columns are grouped into the information used (pupil, iris, corner, eye, between-the-eyes), illumination (indoor, outdoor, infrared), robustness (scale, head pose, occlusion), and requirements (high resolution, high contrast, temporal dependence, good initialization).

Technique | Pupil | Iris | Corner | Eye | Between-the-Eyes | Indoor | Outdoor | Infrared | Scale | Head Pose | Occlusion | High Resolution | High Contrast | Temporal Dependent | Good Initialization | References
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Shape-based (circular) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | [30,31,34] | |||||||||
Shape-based (elliptical) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | [27,28,72] | ||||||||
Shape-based (elliptical) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | [73,74,75] | |||||||||
Shape-based (complex) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | [35,39,41,76] | ||||||
Feature-based | ✓ | ✓ | ✓ | [42,77] | ||||||||||||
Feature-based | ✓ | ✓ | ✓ | [62,78] | ||||||||||||
Feature-based | ✓ | ✓ | ✓ | ✓ | [50,51] | |||||||||||
Feature-based | ✓ | ✓ | ✓ | [53,68,70] | ||||||||||||
Feature-based | ✓ | ✓ | ✓ | ✓ | ✓ | [46,47,48] | ||||||||||
Feature-based | ✓ | ✓ | ✓ | [52,60,79,80,81] | ||||||||||||
Appearance-based | ✓ | ✓ | ✓ | ✓ | ✓ | [82,83,84,85] | ||||||||||
Symmetry | ✓ | ✓ | ✓ | [57,58,86] | ||||||||||||
Eye motion | ✓ | ✓ | ✓ | ✓ | [60,61,62] | |||||||||||
Hybrid | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | [65,69,85] |
3. Gaze Tracking
3.1. Introduction
The typical eye structure used in gaze tracking applications is illustrated in Figure 3. The modeling of gaze direction is based either on the visual axis or on the optical axis. The visual axis, which forms the line of sight (LoS) and is considered the actual direction of gaze, is the line connecting the center of the cornea and the fovea. The optical axis, or the line of gaze (LoG), is the line passing through the centers of the pupil, the cornea, and the eyeball. The center of the cornea is known as the nodal point of the eye. The visual and optical axes intersect at the nodal point of the eye with a certain angular offset. The position of the head in 3D space can be directly estimated by knowing the 3D location of the corneal or eyeball center. In this way, there remains no need for separate head location models. Thus, the knowledge of these points is the keystone for the majority of head pose invariant models [87,88].
The objective of the gaze tracking process is to identify and track the observer’s point of regard (PoR) or gaze direction. For this purpose, the important features of eye movements, such as fixations, saccades, and smooth pursuit, are utilized. A fixation represents the state when the observer’s gaze rests for a minimum time (typically more than 80–100 ms) on a specific area within 2–5° of central vision. Saccades are quick movements of the eyes that take place when visual attention transfers between two fixated areas, with the aim of bringing an area of interest within the narrow visual field. When a driver visually follows a traveling object, this state is represented by smooth pursuit [62]. The data associated with fixations and saccades provide valuable information that is used for the identification and classification of vision, neurological, and sleep conditions. In the field of medical psychology, fixation data are utilized to analyze a person’s attentiveness and level of concentration. Saccadic eye movements are widely studied in a variety of applications, such as human vision research and drowsiness detection for vehicle drivers. Moreover, saccades are also used as a helpful index for the determination of mental workload. Studies show that the saccade distance decreases when the task’s complexity increases [89].
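To make the fixation/saccade distinction concrete, the following simplified dispersion-threshold sketch (in the spirit of the common I-DT algorithm) groups gaze samples into fixations; the 1° dispersion and 100 ms duration thresholds are assumed, illustrative values that depend on tracker noise and the application.

```python
import numpy as np

def detect_fixations(gaze_deg, timestamps_s,
                     max_dispersion_deg=1.0, min_duration_s=0.1):
    """Simplified dispersion-threshold (I-DT style) fixation detection.

    gaze_deg: (N, 2) array of horizontal/vertical gaze angles in degrees.
    timestamps_s: (N,) sample timestamps in seconds.
    Returns a list of (start_time, duration, centroid) tuples; samples that
    do not belong to any fixation are treated as saccadic movement.
    """
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    timestamps_s = np.asarray(timestamps_s, dtype=float)
    fixations = []
    start, n = 0, len(gaze_deg)
    while start < n:
        end = start
        # Grow the window while its spatial dispersion stays below the threshold.
        while end + 1 < n:
            window = gaze_deg[start:end + 2]
            dispersion = np.ptp(window[:, 0]) + np.ptp(window[:, 1])
            if dispersion > max_dispersion_deg:
                break
            end += 1
        duration = timestamps_s[end] - timestamps_s[start]
        if duration >= min_duration_s:
            centroid = tuple(gaze_deg[start:end + 1].mean(axis=0))
            fixations.append((timestamps_s[start], duration, centroid))
            start = end + 1
        else:
            start += 1
    return fixations
```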
Gaze tracking systems take two parameters as input: the eyeball orientation and the head pose (defined by the orientation and position of the head) [90]. To change the gaze, a person can move his or her head while keeping the position of the eyes fixed with respect to the head. Alternatively, the gaze direction can also be changed by moving the eyeballs and pupils while the head is at rest. These two practices are respectively named “owl” and “lizard” vision in [91] because of their resemblance to these animals’ viewing behavior. Normally, we first move our heads to a comfortable position and then orient our eyes to see something. In this process, the head pose defines the gaze direction on a coarse scale, whereas the fine-scale gaze direction is determined by the eyeball orientation. More specifically, to further understand the correlation between head pose and eye pose, the study in [91] investigates two questions: (i) how much better can gaze classification methods classify driver gaze using both head and eye pose versus using head pose only, and (ii) how much does gaze classification improve with the addition of eye pose information? Generally, information on both the head pose and the pupil position is required in gaze estimation applications. As will be explained in later sections, the information of the head pose is usually incorporated implicitly in gaze estimation applications rather than directly. An important aspect of the gaze tracking process is head pose invariance, which is achieved with the help of specific configurations of multiple cameras and other sensors, whose a priori knowledge is available to the algorithms.
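The coarse/fine split can be pictured as composing two rotations: the eye-in-head direction, expressed in the head frame, is rotated into the world frame by the head pose. The sketch below shows this composition under assumed angle conventions (yaw/pitch only, roll ignored); it is a conceptual illustration rather than any particular system’s model.

```python
import numpy as np

def rotation_matrix(yaw_deg, pitch_deg):
    """Rotation defined by yaw (about the vertical axis) and pitch (about the
    lateral axis); roll is omitted for brevity."""
    y, p = np.deg2rad([yaw_deg, pitch_deg])
    r_yaw = np.array([[np.cos(y), 0.0, np.sin(y)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(y), 0.0, np.cos(y)]])
    r_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(p), -np.sin(p)],
                        [0.0, np.sin(p), np.cos(p)]])
    return r_yaw @ r_pitch

def world_gaze_direction(head_yaw_deg, head_pitch_deg, eye_yaw_deg, eye_pitch_deg):
    """Compose the coarse head pose with the fine eye-in-head orientation.

    'Owl' movements change the head arguments, 'lizard' movements change the
    eye arguments; the returned unit vector is the gaze direction in the
    world frame (z is the straight-ahead direction in this convention).
    """
    eye_dir_in_head = rotation_matrix(eye_yaw_deg, eye_pitch_deg) @ np.array([0.0, 0.0, 1.0])
    return rotation_matrix(head_yaw_deg, head_pitch_deg) @ eye_dir_in_head

if __name__ == "__main__":
    # Head turned 20 degrees to one side, eyes a further 10 degrees the same way.
    print(world_gaze_direction(20.0, 0.0, 10.0, 0.0))
```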
There are various configurations of lights and cameras, such as single camera, single light [88,92,93,94]; single camera, multiple lights [85,95,96,97,98]; and multiple cameras, multiple lights [30,99,100,101,102,103]. A complementary practice performed in all gaze tracking schemes is known as calibration. During the calibration process, elements of gaze tracking system are calibrated to determine a set of useful parameters, as explained below.
Calibration of the geometric configuration of the setup is performed to determine the relative orientations and locations of the various devices (e.g., light sources and cameras).
Calibration associated with individuals is carried out to estimate the corneal curvature and the angular offset between the optical and visual axes.
Calibration of the eye-gaze mapping functions is performed according to the applied method.
Calibration of the camera is performed to incorporate the intrinsic parameters of the camera.
Certain parameters, such as human-specific measurements, are calculated only once, whereas the other parameters are determined for every session by making the subject gaze at a set of specific points on a display. The parameters associated with devices, such as the physical and geometric parameters of the angles and locations between the various devices, are calibrated prior to use. A system is considered fully calibrated if the geometric configuration and camera parameters are accurately known.
After introducing the basic concepts related to gaze tracking, the major techniques of gaze tracking are explained as follows.
3.2. Feature-Based Techniques
Feature-based gaze tracking techniques use eyes’ local features for gaze estimation. These techniques are broadly categorized as the model-based and the interpolation-based techniques, as explained below.
3.2.1. Model-Based Techniques
The model-based techniques use the geometric features of an eye model to directly calculate the gaze direction. The point of gaze is determined by the intersection of the gaze path with the gazed-at object [90,97,99,101,104]. These techniques model the general physical structure of the eye in geometric form to estimate a 3D gaze direction vector. The PoR is calculated as the intersection of the gaze direction vector with the closest object in the scene.
Typically, there are three categories (i.e., intrinsic, extrinsic, and variable) of parameters utilized for the development of the geometric model of the eye [99]. The intrinsic parameters, calculated for a fixed eye, remain unchanged during a tracking session; however, they change gradually over the years. These parameters include the iris radius, the cornea radius, the distance between the centers of the cornea and the pupil, the angle between the optical and visual axes, and refraction parameters. The extrinsic parameters, such as the pupil radius, are used to develop a model of the optical axis and the 3D eye position. These models adjust the shape of the eye according to the parameters.
Most 3D model-based techniques (e.g., [88,90,96,97,102,104,105,106,107,108]) depend on metric information and, consequently, call for a global geometric model of the orientation and position of devices and light sources. Further, camera calibration is also critical in these techniques. Some exceptional approaches use simplified assumptions [27] or use projective invariants [95,98]. We will not discuss the mathematical details of these techniques; however, most of them work on the same fundamental principles. The calibrated output of the cameras is utilized to measure lengths and angles by applying Euclidean relations. The general strategy is to make an assessment of the center of the cornea and then to develop a model of the optical axis. The points on the visual axis cannot be measured directly from the images. However, the offset to the visual axis is estimated by showing one or more points on the screen. The intersection of the visual axis and the screen in a fully calibrated setup provides the PoR.
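The final step of that strategy, intersecting the estimated visual axis with a calibrated screen plane, reduces to a ray-plane intersection. The sketch below shows this computation with illustrative coordinates; the corneal center, axis direction, and screen geometry are assumed inputs that a real system would obtain from the model and calibration stages described above.

```python
import numpy as np

def point_of_regard(cornea_center, visual_axis_dir, screen_point, screen_normal):
    """Intersect the visual axis (a ray) with the calibrated screen plane.

    cornea_center: 3D origin of the gaze ray (estimated corneal center).
    visual_axis_dir: 3D direction of the visual axis.
    screen_point / screen_normal: any point on, and the normal of, the screen
    plane in the same world coordinates (known from geometric calibration).
    Returns the PoR in world coordinates, or None if no valid intersection.
    """
    o = np.asarray(cornea_center, dtype=float)
    d = np.asarray(visual_axis_dir, dtype=float)
    d = d / np.linalg.norm(d)
    p0 = np.asarray(screen_point, dtype=float)
    n = np.asarray(screen_normal, dtype=float)

    denom = d @ n
    if abs(denom) < 1e-9:
        return None          # gaze ray is (nearly) parallel to the screen
    t = ((p0 - o) @ n) / denom
    if t < 0:
        return None          # the screen plane lies behind the eye
    return o + t * d

if __name__ == "__main__":
    # Illustrative numbers: screen plane at z = 600 mm, facing the user.
    por = point_of_regard(cornea_center=[0.0, 0.0, 0.0],
                          visual_axis_dir=[0.05, -0.02, 1.0],
                          screen_point=[0.0, 0.0, 600.0],
                          screen_normal=[0.0, 0.0, -1.0])
    print("PoR (mm):", por)
```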
In model-based techniques, the corneal center, which is the point of intersection of the visual and optical axes, is considered an important parameter for gaze estimation. If the corneal curvature is already known, it is possible to determine the corneal center with the help of two light sources and a camera. For the estimation of corneal curvature, anthropomorphic averages are usually adopted due to their simplicity and ease of use [107,109]. However, if the eye-related parameters are unidentified, at least two cameras and two light sources are required to estimate the corneal center [96]. Several studies, such as [88,102,110], used model-based techniques in a fully calibrated arrangement. At a minimum, a single calibration point is mandatory to estimate the angle between the visual and optical axes. This angle is used to estimate the direction of gaze [102].
3.2.2. Interpolation-Based Techniques
The regression-based methods (e.g., [27,69,100,111,112,113,114,115]), on the other hand, map the image features to gaze coordinates. They have either a nonparametric form, such as neural networks [113,116], or a specific parametric form, such as polynomials [112,117]. In early gaze tracking applications, a single source of IR light was employed to enhance the contrast and consequently produce stable gaze estimates. Many single-glint techniques were implicitly based on the erroneous assumption that “the corneal surface is a perfect mirror.” This assumption implied that the glint should remain stationary as long as the head position is fixed, even when the corneal surface rotates. Therefore, the glint is taken as the origin in glint-centered coordinate systems. In this view, the difference between the pupil center and the glint is utilized to estimate the gaze direction. So, the pupil-glint difference vector is typically mapped to the screen. The authors of [118] developed a video-based eye tracker for real-time application. They used a single camera and employed IR light for dark and bright pupil images. To compensate for head movements, they considered a set of mirrors and galvanometers. The PoR was estimated by using a linear mapping and the pupil-glint vector. The higher values of pupil-glint angles were considered to correspond to nonlinearities. They used polynomial regression to compensate for these nonlinearities. Similarly, in a later study, the authors of [73] presented a mapping of the glint-pupil difference vector to the PoR. They utilized a single camera and considered a second-order polynomial to calculate the x- and y-coordinates. However, as explained in [112], as the head moves farther from its initial position, decay in the calibration mapping is observed. In a way similar to [73] and [118], the authors of [119] proposed a polynomial regression technique for estimation of the PoR while assuming a flat corneal surface. Additionally, to compensate for the gaze imprecision due to lateral head movements, they proposed a first-order linear interpolation model. The results of these studies suggest that higher-order polynomials do not deliver superior calibration in practical applications. The findings of [119] are also supported by the results of [88,96].
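The following sketch shows the basic form of such a calibration: a second-order polynomial in the pupil-glint vector is fitted by least squares to a handful of known on-screen targets and then used to map new vectors to the PoR. It is a generic illustration of the pupil-glint mapping idea, not the exact feature set or polynomial used in any of the cited systems.

```python
import numpy as np

def poly_features(pg_vectors):
    """Second-order polynomial terms of pupil-glint vectors of shape (N, 2)."""
    pg = np.asarray(pg_vectors, dtype=float)
    dx, dy = pg[:, 0], pg[:, 1]
    return np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx ** 2, dy ** 2])

def calibrate(pupil_glint_vectors, screen_points):
    """Fit the mapping from pupil-glint vectors to screen coordinates.

    pupil_glint_vectors: (N, 2) vectors collected while the user fixates N
    known calibration targets; screen_points: (N, 2) target coordinates.
    Returns per-axis coefficient vectors obtained by least squares.
    """
    A = poly_features(pupil_glint_vectors)
    targets = np.asarray(screen_points, dtype=float)
    coeffs_x, *_ = np.linalg.lstsq(A, targets[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, targets[:, 1], rcond=None)
    return coeffs_x, coeffs_y

def estimate_por(pupil_glint_vector, coeffs_x, coeffs_y):
    """Map a new pupil-glint vector to an on-screen point of regard."""
    a = poly_features(np.reshape(pupil_glint_vector, (1, 2)))
    return (a @ coeffs_x).item(), (a @ coeffs_y).item()
```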
For interpolation tasks, NNs and their variants are frequently adopted. The authors of [120] suggested a generalized NN-based regression technique in which the glint coordinates, the pupil-glint displacement, the pupil parameters, and the ratio and orientation of the pupil ellipse’s major and minor axes are utilized to map to the screen coordinates. The main objective of this technique is to eliminate the need for calibration after the initial training has been performed. The results of the technique are accurate to within 5°, even in the presence of head movements. In [121], the authors used support vector regression to construct a highly nonlinear generalized gaze mapping function that accounts for head movement. The results of this technique show that eye gaze can be accurately estimated for multiple users under natural head movement. Most gaze tracking techniques are unable to detect when the current input (or test data) is no longer compatible with the training or calibration data. Therefore, the authors of [69,116] used the covariance of the test and training data to indicate when the gaze estimates significantly diverge from the training data.
It is observed that head pose changes are not properly addressed by 2D interpolation techniques even with eye trackers mounted on the head, as these trackers might slip and change their position. To adjust for minor slippage of head mounts, the authors of [122] proposed a set of heuristic rules. The single-camera-based 2D interpolation techniques indirectly model the eye physiology, geometry, and optical properties, and are typically considered approximate models. It is notable that head pose invariance is not strictly guaranteed in these models. However, their implementation is simple, without requiring geometric or camera calibration, and they produce reasonably acceptable results for minor head movements. Some interpolation-based techniques try to improve the accuracy under increased head movements by using additional cameras or through compensation [123]. The authors of [123] introduced a 2D interpolation-based technique to estimate the 3D head position with the help of two cameras. They modified the regression function using the 3D eye position to compensate for head motion. However, in contrast to other interpolation-based techniques, the technique in [123] requires a prior calibration of the cameras.
3.3. Other Techniques
Most gaze estimation techniques are based on feature extraction and use IR light. However, in the following subsections, some alternative approaches are discussed which follow different lines of action. These include techniques that utilize the reflections from the eye layers (Purkinje images) instead of extracting iris and pupil features [124,125], appearance-based techniques [89,114,126], and techniques that use visible light [114,116,127].
3.3.1. Appearance-Based Techniques
The appearance-based gaze estimation techniques take the contents of an image as input with the objective of mapping them directly to the PoR on the screen. Accordingly, the underlying mapping function extracts the relevant features implicitly, accommodating personal variations without requiring camera or geometric calibration. These techniques employ cropped images of the eye for training of the regression functions, as observed in Gaussian processes [114], multilayer networks [68,89], and manifold learning [128]. The authors of [114] obtained gaze predictions and related error measurements by using a sparse Gaussian process interpolation technique on filtered images in the visible spectrum. The technique in [128] learned the eye image manifold by employing locally linear embedding. This technique significantly reduces the number of calibration points without sacrificing accuracy. The accuracy of the results of [128] is comparable to that of [89].
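A bare-bones way to picture this mapping is a regularized linear regression from raw eye-image pixels to screen coordinates, as in the sketch below; ridge regression here merely stands in for the Gaussian-process, neural-network, or manifold-learning regressors used in the cited studies, and the normalization is an assumed placeholder.

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_appearance_gaze_model(eye_images, gaze_targets, alpha=1.0):
    """Regress screen coordinates directly from eye-image pixels.

    eye_images: (N, H, W) array of grayscale crops recorded while the subject
    looks at N known calibration targets; gaze_targets: (N, 2) screen
    coordinates. Ridge regression is a stand-in for the regressors used in
    the cited studies.
    """
    X = np.asarray(eye_images, dtype=float).reshape(len(eye_images), -1) / 255.0
    model = Ridge(alpha=alpha)
    model.fit(X, np.asarray(gaze_targets, dtype=float))
    return model

def predict_gaze(model, eye_image):
    """Predict the on-screen point of regard for one new eye crop."""
    x = np.asarray(eye_image, dtype=float).reshape(1, -1) / 255.0
    return model.predict(x)[0]
```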
Appearance-based techniques normally do not necessitate camera and geometric calibration, as the mapping is performed directly on the contents of the images. While appearance-based techniques aim to model the geometry in an implicit manner, head pose invariance has not been reported in the literature. Moreover, since a change in illumination may alter the eye appearance, the accuracy of these techniques is also affected by different light conditions for the same pose.
3.3.2. Visible Light-Based Techniques
The techniques based on visible or natural light are considered a substitute for the techniques based on IR, especially for outdoor daylight applications [27,34,69,76,90,106,114]. However, they have limitations due to the light variations in the visible spectrum and poor contrast images.
The authors of [76] modeled the visible part of the subject’s eyeball as a planar surface. They regarded gaze shifts due to eyeball rotations as translations of the pupil. Considering the one-to-one mapping between the projective plane and the hemisphere, the authors of [27] modeled the PoR as a homographic mapping from the center of the iris to the monitor. The resultant model represents only an approximation, as it does not consider the nonlinear one-to-one mapping. Moreover, this technique does not provide head pose invariance. The techniques developed in [90,106,127] estimated gaze direction by employing stereo and face models. The authors of [106] modeled the eyes as spheres and estimated the PoR from the intersection of the two estimates of the LoG for each eye. In their work, a head pose model is used to estimate the eyeball center, and personal calibration is also considered. The authors of [90,127] combined a narrow-view-field camera with a face pose estimation system to compute the LoG through one iris [90] and two irises [127], respectively. They assumed the iris contours to be circles and proposed novel eye models to approximate their normal directions in three dimensions. Gaze estimation techniques that use rigid facial features are also reported in other studies, such as [63,129,130]. The locations of the eye corners and the iris are tracked by means of a single camera, and the visual axis is estimated by employing various algorithms. The authors of [131] proposed the use of stereo cameras in natural light to estimate the gaze point. While these techniques do not require an IR light source, their accuracy is low, as they are in the initial stages of development.
Finally, it is notable that a lack of light at night time reduces the functionality of human vision and cameras, which results in increased pedestrian fatalities occurring at night. The authors of [132] proposed an approach which utilized cost-effective arrayed ultrasonic sensors to detect traffic participants in low-speed situations. The results show an overall detection accuracy of 86%, with correct detection rates of cyclists, pedestrians, and vehicles at around 76.7%, 85.7%, and 93.1%, respectively.
3.4. Discussion
Gaze tracking systems which present negligible intrusiveness and minimal usage difficulty are usually sought after, as they allow free head movements. In modern gaze tracking applications, video-based gaze trackers are gaining increased popularity. They maintain good accuracy (0.5° or better) while providing the user with enhanced freedom of head movement. Recent studies indicate that high-accuracy trackers can be realized if some specific reflections from the cornea are utilized. Furthermore, the resultant gaze estimation is more stable and head pose invariant. Unfortunately, however, commercially available high-accuracy gaze trackers are very expensive. Moreover, there is a trade-off among accuracy, setup flexibility, and cost for gaze trackers. The readers can find a thorough discussion on the performance and preferences of eye tracking systems in [133]. A comprehensive comparison of gaze estimation methods is provided in Table 2.
Table 2. A comprehensive comparison of gaze estimation methods (PT: pan-tilt camera).
No. of Cameras | No. of Lights | Gaze Information | Head Pose Invariant? | Calibration | Accuracy (Degrees) | Comments | References |
---|---|---|---|---|---|---|---|
1 | 0 | PoR | No. Needs extra unit. | | 2–4 | Webcam | [27,69,114]
1 | 0 | LoS/LoG | No. Needs extra unit. | Fully | 1–2 | | [90,97,108]
1 | 0 | LoG | Approximate solution | | <1 | Additional markers, iris radius, parallel with screen | [30]
1 | 1 | PoR | No. Needs extra unit. | | 1–2 | Polynomial approximation | [73,119,120]
1 | 2 | PoR | Yes | Fully | 1–3 | | [88,104,105]
1 + 1 PT camera | 1 | PoR | Yes | Fully | 3 | Mirrors | [107]
1 + 1 PT camera | 4 | PoR | Yes | | <2.5 | PT camera used during implementation | [95,98]
2 | 0 | PoR | Yes | | 1 | 3D face model | [106]
2 + 1 PT camera | 1 | LoG | Yes | | 0.7–1 | | [134]
2 + 2 PT cameras | 2 | PoR | Yes | Fully | 0.6 | | [99]
2 | 2(3) | PoR | Yes | Fully | <2 | Extra lights used during implementation; experimentation conducted with three glints | [96,102]
3 | 2 | PoR | Yes | Fully | Not reported | | [100,135]
1 | 1 | PoR | No. Needs extra unit. | | 0.5–1.5 | Appearance-based | [68,89,128]
4. Applications in ADAS
4.1. Introduction
A driver’s gaze data can be used to characterize the changes in visual and cognitive demands in order to assess the driver’s alertness [136,137]. For instance, it is reported that increased cognitive demand impacts drivers’ allocation of attention to the roadway [138,139,140,141]. With an increase in cognitive demand, drivers tend to concentrate their gaze in front of the vehicle. This gaze concentration results in a reduced frequency of viewing the speedometer and mirrors, and a reduced ability to detect targets in the peripheries [138,139,140,142,143,144,145]. These practices are consistent with inattentional blindness, loss of situational awareness, and situations such as “looked but failed to see” [139,143,146].
A prominent and intuitive measure to detect the changes in drivers’ gaze due to increased cognitive demand is the percent road center (PRC). PRC is defined as “the percentage of fixations that fall within a predefined road center area during a specific period.” It has been shown that PRC increases with increased cognitive demand [136,141,142,143,147]. While the concept of PRC is simple to understand, the definition of the road center differs significantly in the literature. It has been defined as a rectangular region centered in front of the vehicle with a width of 15° [141] or 20° [142], or as a circular region of 16° diameter centered either on the road center point [142] or on the driver’s most frequent gaze angle [148]. Some implementations of PRC utilized raw gaze points and gaze trajectories recorded by eye trackers that were not clustered into saccades and fixations. The authors of [148] compared these approaches and observed a strong correlation between raw gaze-based PRC and fixation-based PRC. To characterize the variations in gaze behavior with cognitive demand, the standard deviation of gaze points has also been used by several researchers [137,138,139,142]. The standard deviation is computed either from the projection of the driver’s gaze trail on a plane or from the driver’s gaze angle. A comparison of the various techniques used to characterize the changes in drivers’ gaze under cognitive load is presented in [149].
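As an illustration, the following sketch computes a raw-gaze PRC over one analysis window using a circular road-center region of 16° diameter; the region center, radius, and the simulated gaze signal are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def percent_road_center(gaze_yaw_deg, gaze_pitch_deg,
                        center_yaw_deg=0.0, center_pitch_deg=0.0,
                        radius_deg=8.0):
    """Raw-gaze PRC: share of samples inside a circular road-center region.

    The region is a circle of 16 degrees diameter (radius 8 degrees) around
    the assumed road-center direction, computed over whatever time window
    the caller passes in (e.g., a one-minute epoch).
    """
    d_yaw = np.asarray(gaze_yaw_deg, dtype=float) - center_yaw_deg
    d_pitch = np.asarray(gaze_pitch_deg, dtype=float) - center_pitch_deg
    inside = np.hypot(d_yaw, d_pitch) <= radius_deg
    return 100.0 * inside.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    yaw = rng.normal(0, 6, size=3600)    # simulated 60 s of gaze at 60 Hz
    pitch = rng.normal(0, 3, size=3600)
    print(f"PRC = {percent_road_center(yaw, pitch):.1f}%")
```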
The data associated with a driver’s eyes and gaze are utilized by ADAS algorithms to detect the driver’s attentiveness. A typical scheme adopted in ADAS algorithms to detect and improve the driver’s alertness using the driver’s visual data is shown in Figure 4. These algorithms continuously capture the driver’s visual data through numerous sensors associated with the driver’s body and installed inside the vehicle. The obtained visual data are processed at the next stages to extract and classify the vital features. At the subsequent stage, a decision is made on the basis of the data classification. The decision is conveyed to the driver in the form of audible or visible signals, as shown in Figure 4.
The subsequent sections present a detailed review of the systems and techniques that are used to detect the visual activities and distraction of a driver. A brief overview of the driving process and its associated challenges is presented first for a better understanding of the subsequent sections.
4.2. Driving Process and Associated Challenges
The key elements of the driving process are the driver, the vehicle, and the driving environment, as shown in Figure 5. The driver, who plays the pivotal role in this process, has to understand the driving environment (e.g., nearby traffic and road signals), make decisions, and execute the appropriate actions [150]. Thus, the driver’s role has three stages: situational awareness, decision, and action. Situational awareness is considered to be the most important and complicated stage, and it can be modeled as a three-step process. The first step is to perceive the elements in the environment within specific limits of time and space. The second step is to comprehend the relative significance of the perceived elements, and the final step is to project their impact in the near future. A driver’s ability to accurately perceive multiple events and entities in parallel depends on his or her attention during the first step (i.e., perception); consequently, the situational awareness stage principally depends on it. The driver’s attention is also necessary to take in and process the available information during the decision and action stages. Moreover, in a complex and dynamic driving environment, the need for the driver’s active attention increases in order to protect life and property. Thus, the ADAS continuously monitors the driver’s attention and generates an alarm or a countermeasure if any negligence is observed. The level of the alarm or countermeasure depends on the nature and intensity of the negligence.
Recent studies [151,152] explain that there are three major causes that contribute to more than 90% of road accidents: distraction, fatigue, and aggressive driver behavior. The term “fatigue” denotes compromised mental or physical performance and a subjective feeling of drowsiness. For drivers, the most dangerous types of fatigue are mental and central nervous fatigue, which ultimately lead to drowsiness. Other types of fatigue include local physical fatigue (e.g., skeletal muscle fatigue) and general physical fatigue, which is typically felt after an exhaustive physical activity. Aggressive driving activities, such as shortcut maneuvers and ignoring speed limits, also constitute major reasons for road accidents. Since they are primarily related to a driver’s intended actions, local traffic rules seem more effective than mere warnings from ADAS. Nevertheless, ADAS systems are capable of warning about, and in near-autonomous vehicles preventing, the severe consequences. Distraction is defined as the engagement of a driver in a competing parallel task other than driving [153].
The driver’s performance is severely affected by distraction, and it is considered the main reason for nearly half of all accidents [154,155]. There are several distracting activities, such as eating, drinking, texting, calling, using in-vehicle technology, and viewing the off-road environment [156,157,158,159]. According to the NHTSA, these activities are categorized as [155,159,160]:
Visual distraction (taking the eyes off the road);
Physical distraction (e.g., hands off the steering wheel);
Cognitive distraction (e.g., mind off the duty of driving);
Auditory distraction (e.g., taking the ears off auditory signals and horns).
4.3. Visual Distraction and Driving Performance
Human beings have a limited capability to perform multiple tasks simultaneously without compromising the performance of all the tasks. Therefore, engaging in a competing task while driving degrades the driver’s performance and, consequently, endangers traffic safety. Driving behavior can be evaluated with certain driving performance indicators [161,162]. These indicators include lateral control, reaction time, and speed, as discussed below.
4.3.1. Lateral Control
Typically, lateral control is affected by visual distraction. Distracted drivers ultimately make larger deviations in lane positioning, as they need to compensate for slip-ups made while their eyes were off the road. This increased lane-position variability has been reported by several researchers (e.g., [140,163]). Moreover, as reported in [140], the steering control of distracted drivers is less smooth in comparison to their attentive driving states. On the other hand, the author of [164] found no significant difference in the standard deviation of lateral control between distracted and normal drivers. The difference in the researchers’ findings could be due to different test conditions and varying driving behaviors.
4.3.2. Reaction Time
Reaction time is quantified by numerous measures, such as brake reaction time (BRT), detection response time (DRT), and peripheral detection time (PDT). These reaction times provide a measure of the driver’s mental load. Usually, the reaction time increases for visually distracted drivers [165,166,167].
4.3.3. Speed
A driver’s distraction due to visual stimuli typically results in a speed reduction [147,163,168]. The reduced speed is perhaps the result of a compensatory mechanism, as a potential risk can be minimized through reduced speed. However, contradictory findings are reported in [164]. The authors of [164] observed an increased average speed and several speed violations for distracted drivers. The authors reasoned that the very low noise level inside the vehicle was the cause of the inconsistency, as the drivers, thinking that the vehicle was at normal speed, did not monitor the speedometer frequently. We believe that, since different researchers use different simulation or test environments (e.g., nearby vehicles and road conditions), differences or even contradictions between their findings are natural. Moreover, the behavior of different distracted drivers with respect to speed control is not always the same.
4.4. Measurement Approaches
Researchers have exploited the features of eye movement data for driver’s distraction and drowsiness detection [169,170]. The following features related to eyeball and eyelid movements are frequently used in this field [171,172,173,174].
PERCLOS: A measure of the percentage of eye closure. It corresponds to the percentage of time during a one-minute period for which the eyes remain at least 70% or 80% closed (a sketch computing this and related measures is given after this list).
Percentage eyes >70% closed (PERCLOS70).
Percentage eyes >80% closed (PERCLOS80).
PERCLOS70 baselined.
PERCLOS80 baselined.
Blink Amplitude: Blink amplitude is the measure of electric voltage during a blink. Its typical value ranges from 100 to 400 μV.
Amplitude/velocity ratio (APVC).
APCV with regression.
Energy of blinking (EC).
EC baselined.
Blink Duration: It is the total time from the start to the end of a blink. It is typically measured in the units of milliseconds. A challenge associated with blink behavior-based drowsiness detection techniques is the individually-dependent nature of the measure. For instance, some people blink more frequently in wakeful conditions or some persons’ eyes remain slightly open even in sleepy conditions. So, personal calibration is a prerequisite to apply these techniques.
Blink Frequency: Blink frequency is the number of blinks per minute. An increased blink frequency is typically associated with the onset of sleep.
Lid Reopening Delay: It is a measure of the time from fully closed eyelids to the start of their reopening. Its value is in the range of a few milliseconds for an awake person; it increases for a drowsy person; and it is prolonged to several hundred milliseconds for a person undergoing a microsleep.
Microsleep: An eye blink is detected when the upper lid of the eye remains in contact with the lower lid for around 200–400 ms; if this duration exceeds 500 ms (but is less than 10 s), the situation corresponds to a microsleep [173,175]. A driver’s microsleep can lead to fatal accidents.
Microsleep event 0.5 sec rate.
Microsleep event 1.0 sec rate.
Mean square eye closure.
Mean eye closure.
Average eye closure speed.
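The sketch below illustrates how several of the eyelid-based measures listed above (PERCLOS70/80, mean eye closure, blink counts, and microsleep counts) can be derived from a per-frame eyelid-closure signal; the signal source, sampling rate, and the 80% closure criterion for counting closure episodes are assumptions made only for the example.

```python
import numpy as np

def drowsiness_measures(closure_fraction, fs_hz=60.0):
    """Derive PERCLOS-style measures from an eyelid-closure signal.

    closure_fraction: per-frame eyelid closure in [0, 1] (1 = fully closed),
    e.g., estimated from eye-image analysis; fs_hz: sampling rate.
    Threshold values follow the definitions above; windowing (e.g., one
    minute) is left to the caller.
    """
    c = np.asarray(closure_fraction, dtype=float)
    measures = {
        "PERCLOS70": 100.0 * np.mean(c >= 0.70),
        "PERCLOS80": 100.0 * np.mean(c >= 0.80),
        "mean_eye_closure": float(c.mean()),
        "mean_square_eye_closure": float(np.mean(c ** 2)),
    }

    # Count closure episodes and flag microsleeps (0.5 s to 10 s fully closed).
    closed = c >= 0.80
    episodes, start = [], None
    for i, flag in enumerate(closed):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            episodes.append((i - start) / fs_hz)
            start = None
    if start is not None:
        episodes.append((len(closed) - start) / fs_hz)

    measures["closure_episode_count"] = len(episodes)
    measures["microsleep_count"] = sum(0.5 <= d <= 10.0 for d in episodes)
    return measures
```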
A driver’s physical activities, such as head movements, are captured and processed in ADAS applications [176,177,178,179]. Video cameras are installed inside the vehicle at suitable locations to record the driver’s physical movements and gaze data. The main advantage of video-based gaze detection approaches lies in their nonintrusive nature [180,181,182,183]. For instance, the authors of [176] modeled and detected a driver’s visual distraction using the information associated with the pose and position of the driver’s head. However, as is intuitively clear and as explained by the authors, this technique is prone to reporting false positives. The primary reason for this is the possibility of the driver looking at the road while his or her head is tilted to one side. This study also explains the need for high-performance eye and gaze tracking systems for ADAS. The authors of [177] proposed an improved technique by incorporating the PRC of the gaze direction, analyzed over a 1 min epoch. For their setup, they found that PRC < 58% was a result of visual distraction, whereas PRC > 92% was due to cognitive distraction.
The authors of [184] reported a correlation between driving performance and visual distraction by utilizing gaze duration as a detection feature. The existence of such a correlation was also confirmed by the authors of [185]. It has been reported that the detection accuracy observed using eye-movement data alone is nearly equal to that observed using both eye-movement and driving performance data [185]. As reported in earlier studies and verified by recent research [186,187,188,189,190], eye-movement features can be effectively used for the detection of visual as well as cognitive distraction. Distracted drivers are found to exhibit longer fixation durations or frequent fixations towards competing tasks. It is also observed that a cognitively distracted driver usually exhibits a longer fixation duration on the same area. The area of fixation can be associated either with a competing task (e.g., multimedia inside the vehicle) or with the peripheries of the field of view.
The combined effect of visual and cognitive distraction is also reported in [140]. It is notable that, by definition, visual distraction is different from cognitive distraction (which includes the “looked but did not see” state), and their effects are also not the same. Cognitive distraction disturbs the longitudinal control of the vehicle, whereas visual distraction affects the vehicle’s lateral control and the steering ability of the driver [191]. Moreover, as discussed in [140], overcompensation and steering neglect are related to visual distraction, whereas under-compensation is associated with cognitive distraction. Similarly, hard braking is mostly related to cognitive distraction [136,141]. Typically, accidents due to visual distraction are more disastrous than accidents due to cognitive distraction. The findings of [50] suggest that the frequency and duration of eye fixations during visual distraction alone are higher than during combined (visual as well as cognitive) distraction. However, the frequency and duration of eye fixations during combined distraction are higher than those during cognitive distraction alone. It is notable that, for adequate situational awareness, there must be a specific range of suitable duration and frequency of eye fixations that depends on the driver and the driving environment. Therefore, eye-movement features can help to accurately discriminate between visual and cognitive distraction only if the specific range of eye-movement features is pre-identified for each driver.
In addition to the already-explained physical measures, biological measures such as electrooculography (EOG) also provide data for sleepiness detection. EOG signals are frequently used to measure eye-related activities for medical purposes; however, their use in ADAS applications comes with certain challenges. For example, the location of the EOG electrodes is of special significance, as the accuracy of the collected data depends on the distance of the electrodes from the eyes [192,193]. At the same time, it has been observed that drivers do not feel comfortable with electrodes attached around their eyes in normal driving situations. Such experimentation is therefore possible in simulator-based studies but not feasible for real-world applications.
Realizing the relative advantages and limitations of the above-discussed techniques, researchers now tend to fuse various techniques to produce an optimal solution for the distraction detection systems of ADAS. By merging the information obtained from the vehicle’s parameters (e.g., turning speed and acceleration) with the driver’s physical and biological parameters, more accurate and reliable results have been reported. For example, the authors of [194] reported a distraction detection accuracy of 81.1% by fusing the data of saccades, eye fixations, lateral control, and the steering wheel through a support vector machine algorithm. The authors of [195] detected a driver’s distraction by processing information obtained from physical parameters (blink frequency, location, and eye-fixation duration) and driving performance parameters (steering wheel and lateral control). Using the same physical parameters, the authors of [196] considered different driving performance measures (i.e., speed, lateral acceleration, and longitudinal deceleration) to detect the driver’s distraction. The authors of [197] merged biological and physical parameters (head orientation, gaze data, and pupil diameter) to produce more accurate results (91.7% and 93%) using support vector machine and adaptive boosting (AdaBoost) algorithms, respectively. A summary of the measurement techniques, their advantages, and their limitations is presented in Table 3, and a minimal sketch of such feature-level fusion is given after the table.
Table 3.
Measurement | Detects Visual Distraction | Detects Cognitive Distraction | Detects Visual and Cognitive Distraction | Pros | Cons
---|---|---|---|---|---
Driving Performance | Y | N | N | |
Physical Measurements | Y | Y | N | |
Biological Measurements | Y | Y | Y | |
Subjective Reports | N | Y | N | |
Hybrid Measurements | Y | Y | Y | |
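As a sketch of the feature-level fusion described above, the example below trains a support vector machine on windowed eye-movement and driving-performance features, in the spirit of [194]. The feature set, the synthetic labels, and the data are assumptions for illustration only and do not reproduce the cited study.

```python
# Hedged sketch: SVM-based fusion of eye-movement and driving-performance features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
# One row per analysis window: [saccade rate, mean fixation duration,
#  lane-position standard deviation (lateral control), steering-wheel reversal rate].
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)   # 0 = attentive, 1 = distracted (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```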
4.5. Data Processing Algorithms
The data of a driver’s eyes and gaze contains information associated with the driver’s level of alertness. The following features of the driver’s visual data are frequently used in ADAS applications (a minimal sketch that computes them is given after the list):
Difference between the maximum and minimum value of the data;
Standard deviation of the data;
Root mean square value of the data;
Duration of the signal data;
Maximum difference between any two consecutive values;
Median of the data;
Mean of the data;
Maximum value of the data;
Minimum value of the data;
Amplitude of the difference between the first value and the last value;
Difference between the max and min value of the differential of data.
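The sketch below computes the listed features for one windowed, uniformly sampled eye or gaze signal (e.g., pupil diameter or gaze eccentricity). The sampling rate and the dictionary keys are illustrative assumptions.

```python
# Hedged sketch: window-level statistical features listed above for one signal x.
import numpy as np

def visual_signal_features(x, fs_hz=60.0):
    """Return the features enumerated in the text for a 1-D signal x sampled at fs_hz."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                          # first-order differential of the data
    return {
        "range": x.max() - x.min(),          # max minus min of the data
        "std": x.std(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "duration_s": len(x) / fs_hz,        # duration of the signal data
        "max_consecutive_diff": np.abs(dx).max(),
        "median": np.median(x),
        "mean": x.mean(),
        "max": x.max(),
        "min": x.min(),
        "first_last_amplitude": abs(x[-1] - x[0]),
        "diff_range": dx.max() - dx.min(),   # range of the differential of the data
    }

print(visual_signal_features(np.sin(np.linspace(0, 2 * np.pi, 120))))
```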
Researchers have developed and implemented various algorithms to model and utilize eye and gaze data for the detection of a driver’s alertness and intentions. These algorithms use fuzzy logic [198,199,200,201]; neural networks [202,203]; Bayesian networks [113,204,205]; unsupervised, semi-supervised, and supervised machine learning techniques [186,189,206]; and combinations of multiple techniques. Depending on the usage and the available resources, the processing algorithms select and process either the full data or a part of it. For example, the authors of [207] argued that partitioning gaze into regions is sufficient for the purpose of keeping the driver safe. Their proposed approach, which estimates the driver’s gaze region without using eye movements, extracts facial features and classifies their spatial configuration into six regions in real time. They evaluated the developed system on a dataset of 50 drivers from an on-road study, achieving an average accuracy of 91.4% at an average decision rate of 11 Hz. Furthermore, algorithms for special circumstances, such as hazy weather, are also discussed in the literature and belong to the categories already discussed; for instance, the work in [208] is based on deep learning approaches. In general, all of these algorithms execute a recursive process similar to the flowchart shown in Figure 6. The flowchart shows, as an example, how eye tracking is achieved in ADAS applications; the main steps can be realized by any suitable conventional or modern algorithm.
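The idea of classifying the spatial configuration of facial features directly into a small set of gaze regions, without explicit eye-movement estimation, can be sketched as follows. The six region labels, the landmark-based feature vector, and the random-forest classifier are assumptions made for illustration; [207] describes its own feature set and classifier.

```python
# Hedged sketch: mapping per-frame facial-feature configurations to gaze regions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

REGIONS = ["road ahead", "left mirror", "right mirror",
           "rear-view mirror", "instrument cluster", "centre console"]

rng = np.random.default_rng(2)
# Feature vector per frame: e.g., normalized positions of a few facial landmarks
# plus head yaw/pitch derived from them (here: synthetic training data).
X_train = rng.normal(size=(600, 8))
y_train = rng.integers(0, len(REGIONS), size=600)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

frame_features = rng.normal(size=(1, 8))       # features for one new video frame
print("predicted gaze region:", REGIONS[int(clf.predict(frame_features)[0])])
```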
Additionally, eye and gaze data are also used for early detection of a driver’s intentions, which is an interesting feature of ADAS. Most schemes developed for the prediction of a driver’s maneuvering behavior are principally based on the hidden Markov model (HMM) and its variants [209,210,211,212]. These schemes are applied to data obtained from the driver’s gaze sequence [9] and head position [213]. To process the data, feature-based pattern recognition and machine learning techniques are frequently utilized [214,215,216]. These schemes are designed to detect either a single maneuver behavior, such as lane change only or turn only [211,214,217,218,219], or multiple maneuver behaviors [220]. For instance, early detection of the intention to change lanes was achieved in [221] using HMM-based steering behavior models; this work is also capable of differentiating between normal and emergency lane changes. Similarly, researchers utilized the relevance vector machine to predict driver intentions to change lanes [222], apply brakes [223], and take turns [224]. Moreover, by applying artificial neural network models to gaze behavior data, the authors of [202] inferred the driver’s maneuvering intentions. In [206], deep learning approaches were utilized for early detection of the driver’s intentions. In this work, recurrent neural network (RNN) and long short-term memory (LSTM) units were combined to fuse various features associated with the driver and the driving environment and thereby predict maneuvers. These features included face- and eye-related features captured by a face camera, the driving parameters, and the street map and scene. The system developed in [206] can predict a maneuver 3.5 s in advance, with recall of 77.1% and 87.4% and precision of 84.5% and 90.5% for an out-of-the-box and a customized optimal face tracker, respectively. In addition to feature-based pattern recognition algorithms, linguistic-based syntactic pattern recognition algorithms have also been proposed in the literature for early detection of the driver’s intent [220]. The authors of [225] adopted the random forest algorithm and utilized data on transition patterns between individual maneuver states to predict driving style. They showed that the use of transition probabilities between maneuvers improved the prediction of driving style compared with the traditional use of maneuver frequencies in behavioral analysis. Table 4 presents a summary of the data processing algorithms used in ADAS that utilize a driver’s eye and gaze data for the detection of distraction and fatigue.
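The following sketch shows one way an LSTM can fuse driver-facing features with driving-context features to predict an upcoming maneuver a few seconds in advance, in the spirit of [206]. The maneuver set, layer sizes, sampling rate, and concatenation-based fusion are illustrative assumptions and do not reproduce the cited architecture.

```python
# Hedged sketch: LSTM-based fusion of driver and context features for maneuver prediction.
import torch
import torch.nn as nn

MANEUVERS = ["keep lane", "left lane change", "right lane change",
             "left turn", "right turn"]

class ManeuverPredictor(nn.Module):
    def __init__(self, driver_dim=16, context_dim=8, hidden=64):
        super().__init__()
        # Per-time-step fusion by concatenating driver and context features.
        self.lstm = nn.LSTM(driver_dim + context_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(MANEUVERS))

    def forward(self, driver_seq, context_seq):
        x = torch.cat([driver_seq, context_seq], dim=-1)   # (batch, time, features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])                          # one logit per maneuver class

# Example: 3.5 s of features sampled at an assumed 10 Hz, for a batch of 2 sequences.
model = ManeuverPredictor()
driver_seq = torch.randn(2, 35, 16)    # face/eye features from a driver-facing camera
context_seq = torch.randn(2, 35, 8)    # speed, lane position, map/scene context, ...
probs = torch.softmax(model(driver_seq, context_seq), dim=-1)
print(probs.shape)                     # torch.Size([2, 5])
```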
Table 4.
Eye Detection | Tracking Method | Used Features | Algorithm for Distraction/Fatigue Detection | Performance | References |
---|---|---|---|---|---|
Imaging in the IR spectrum and verification by SVM | Combination of Kalman filter and mean shift | PERCLOS, Head nodding, Head orientation, Eye blink speed, Gaze direction, Eye saccadic movement, Yawning | Probability theory (Bayesian network) | Very good | [204] |
Imaging in the IR Spectrum | Adaptive filters (Kalman filter) | PERCLOS, Eye blink speed, Gaze direction, Head rotation | Probability theory (Bayesian network) | Very Good | [113] |
Imaging in the IR spectrum | Adaptive filters (Kalman filter) | PERCLOS, Eye blink rate, Eye saccadic movement, Head nodding, Head orientation | Knowledge-based (Fuzzy expert system) | | [226] |
Feature-based (binarization) | Combination of 4 hierarchical tracking methods | PERCLOS, Eye blink rate, Gaze direction, Yawning, Head orientation | Knowledge-based (Finite state machine) | Average | [227] |
Explicitly by feature-based (projection) | Search window (based on face template matching) | PERCLOS, Distance between eyelids, Eye blink rate, Head orientation | Knowledge-based (Fuzzy expert system) | Good | [228] |
Other methods (elliptical model in daylight and IR imaging in nightlight) | Combination of NN and condensation algorithm | PERCLOS, Eye blink rate, Head orientation | Thresholding | Good | [229] |
Feature-based (projection) | Search window (based on face template matching) | PERCLOS, Distance between eyelids | Thresholding | Good | [230] |
Feature-based (projection) | Adaptive filters (UKF) | Continuous eye closure | Thresholding | Average | [231] |
Feature-based (projection and connected component analysis) | Search window (eye template matching) | Eyelid distance | Thresholding | Very good | [232] |
Feature-based (projection) | Adaptive filters (Kalman filter) | Eye blink rate | | Poor | [233] |
Feature-based (variance projection and face model) | Adaptive filters (Kalman filter) | PERCLOS, Eye blink speed, Head rotation | | Poor | [234] |
4.6. Application in Modern Vehicles
Vehicle manufacturing companies use the features of drivers’ visual data to offer services and facilities in the high-end models of their vehicles. These vehicles are equipped with cameras, radars, and other sensors to assist drivers in safe and comfortable driving. For example, the Cadillac Super Cruise system utilizes the FOVIO vision technology developed by Seeing Machines. In this system, a gumdrop-sized IR camera is installed on the steering wheel column to precisely determine the driver’s alertness level. This is achieved through exact measurement of eyelid movements and head orientation under the full range of day and night-time driving conditions. The system is capable of working well even when the driver is wearing sunglasses. Table 5 summarizes the features offered by vehicle manufacturing companies.
Table 5.
Make | Technology Brand | Description | Alarm Type | Reference |
---|---|---|---|---|
Audi | Rest recommendation system + Audi pre sense | Uses features extracted with the help of a far-infrared system, camera, radar, thermal camera, lane position, and proximity detection to offer features such as collision avoidance assist, sunroof and windows closing, high beam assist, turn assist, rear cross-path assist, exit assist (to warn of door opening when a nearby car passes), traffic jam assist, and night vision | Audio, display, vibration | [235] |
BMW | Active Driving Assistant with Attention Assistant | Uses features extracted with the help of radar, camera, thermal camera, lane position, proximity detection to offer features such as lane change warning, night vision, steering and lane control system for semi-automated driving, crossroad warning, assistive parking | Audio, display, vibration | [236] |
Cadillac | Cadillac Super Cruise | System based on the FOVIO vision technology developed by Seeing Machines; an IR camera on the steering wheel column accurately determines the driver’s attention state | Audio and visual | [237] |
Ford | Ford Safe and Smart (Driver alert control) | Uses features extracted with the help of radar, camera, steering sensors, lane position, and proximity detection to offer features such as a lane-keeping system, adaptive cruise control, forward collision warning with brake support, front rain-sensing windshield wipers, auto high-beam headlamps, blind spot information system, and reverse steering | Audio, display, vibration | [238] |
Mercedes-Benz | MB Pre-Safe Technology | Uses features extracted with the help of radar, camera, sensors on the steering column, and steering wheel movement and speed to offer features such as driver profile and behaviour analysis, accident investigation, Pre-Safe brake and Distronic Plus technology, night view assist plus, active lane keeping assist and active blind spot monitoring, adaptive high beam assist, and attention assist | Audio, display | [239] |
Toyota | Toyota Safety Sense | Uses features extracted with the help of radar, a charge-coupled camera, eye tracking, and head motion to offer features such as an advanced obstacle detection system, pre-collision system, lane departure alert, automatic high beams, dynamic radar cruise control, and pedestrian detection | Audio, display | [240] |
5. Summary and Conclusions
This paper reviewed eye and gaze tracking systems: their models and techniques, the classification of those techniques, and their advantages and shortcomings. Specifically, their application in ADAS for safe and comfortable driving has been discussed in detail. While these tracking systems and techniques show improvement in ADAS applications, there is significant potential for further developments, especially due to the emergence of autonomous vehicle technology. The National Highway Traffic Safety Administration (NHTSA) of the USA defines six levels of vehicle automation to provide a common interface for research and discussions among different agencies, companies, and stakeholders [241]. These levels range from no automation (level 0) to fully automated vehicles (level 5). At the levels between no automation and full automation, the automated system has authority to control the vehicle. In this way, drivers reduce their attention to the road and, consequently, become distracted as they feel free to disengage themselves from driving [242,243]. Although vehicle manufacturing companies and traffic control agencies clearly state that human drivers should monitor the driving environment at these levels, several challenges related to use and application still persist. Specifically, can a driver remain disengaged from driving while relying on ADAS and still maintain a safe driving environment? Similarly, what if the automated system has only the option to save either the vehicle or other property? Satisfactory answers to these questions are still unclear and belong to an area of active research.
The authors believe that the mass adoption of eye and gaze trackers depends on their cost as much as on their accurate functioning in natural environments (i.e., changing light conditions and natural head movements). In this regard, the requirements and features of future eye and gaze trackers are discussed below.
Cost: The prices of existing eye trackers are too high for use by the general public. The high cost of eye trackers is mainly due to the cost of parts (e.g., high-quality lenses and cameras), the development cost, and a comparatively limited market. To overcome this problem, future eye and gaze trackers should opt for commonly available, standard off-the-shelf components, such as digital or web cameras. Additionally, new theoretical and experimental developments are needed so that accurate eye and gaze tracking may be achieved with low-quality images.
Flexibility: Existing gaze trackers typically need calibration of both the geometric arrangement and the camera(s), which is a tedious job. In certain situations, it could be appropriate to calibrate, for example, only the monitor and light sources without requiring geometric and camera calibration. Such a flexible setup is advantageous for eye trackers intended for on-the-move usage.
Calibration: Present gaze tracking techniques use either a simple prior model with several calibration points or a strong prior model (hardware calibrated) with a brief calibration session. A future direction in gaze tracking is to develop techniques that require no (or extremely minimal) calibration. We believe that novel eye and gaze models should be developed to realize gaze tracking that is both calibration-free and reliable.
Tolerance: Currently, only partial solutions exist to provide the tolerance required by applications involving eyeglasses and contact lenses. The problems in such situations may be partially solved by using multiple light sources coordinated with the user’s head movement relative to the light source and camera. The trend of producing low-cost eye tracking systems for mainstream applications may continue to grow. This practice, however, can lead to low-accuracy gaze tracking, which could be acceptable for certain applications but not for ADAS. We believe that additional modeling approaches, such as modeling the eyeglasses themselves under various light conditions, may be required if eye trackers are to be utilized in outdoor applications.
Interpretation of gaze: While addressing the technical issues associated with eye and gaze tracking, the interpretation of the relationship between visual and cognitive states is also very important. The analysis of eye-movement behavior helps determine cognitive and emotional states as well as human visual perception. Future eye and gaze trackers may exploit a combination of eye and gaze data with other gestures. This is certainly a topic of long-term, multi-disciplinary research.
Usage of IR and Outdoor Application: IR light is used in eye tracking systems because it is invisible to the user and the lighting conditions can be controlled to obtain stable gaze estimation and high-contrast images. A practical drawback of such systems is their limited reliability in outdoor applications. Increased reliability in outdoor usage is therefore a requirement for future eye tracking systems. Current efforts to overcome this limitation are in the development stage, and further research is required.
Head mounts: A part of the research community emphasizes remote gaze tracking, eliminating the need for head mounts. However, gaze trackers with head mounts may see a revival due to the problems associated with remote trackers and the growing attention on portable, tiny head-mounted displays [244]. Head-mounted eye tracking systems are usually more precise, as they remain minimally affected by external variations and their geometry allows more constraints to be applied.
Author Contributions
M.Q.K. and S.L. conceived and designed the content. M.Q.K. drafted the paper. S.L. supervised M.Q.K. and critically assessed the draft for quality revision.
Funding
This research was supported, in part, by “3D Recognition Project” of Korea Evaluation Institute of Industrial Technology (KEIT) (10060160), in part, by “Robocarechair: A Smart Transformable Robot for Multi-Functional Assistive Personal Care” Project of KEIT (P0006886), and, in part, by “e-Drive Train Platform Development for Commercial Electric Vehicles based on IoT Technology” Project of Korea Institute of Energy Technology Evaluation and Planning (KETEP) (20172010000420) sponsored by the Korean Ministry of Trade, Industry and Energy (MOTIE), as well as, in part, by the Institute of Information and Communication Technology Planning & Evaluation (IITP) Grant sponsored by the Korean Ministry of Science and Information Technology (MSIT): No. 2019-0-00421, AI Graduate School Program.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- 1.Omer Y., Sapir R., Hatuka Y., Yovel G. What Is a Face? Critical Features for Face Detection. Perception. 2019;48:437–446. doi: 10.1177/0301006619838734. [DOI] [PubMed] [Google Scholar]
- 2.Cho S.W., Baek N.R., Kim M.C., Koo J.H., Kim J.H., Park K.R. Face Detection in Nighttime Images Using Visible-Light Camera Sensors with Two-Step Faster Region-Based Convolutional Neural Network. Sensors. 2018;18:2995. doi: 10.3390/s18092995. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Bozomitu R.G., Păsărică A., Tărniceriu D., Rotariu C. Development of an Eye Tracking-Based Human-Computer Interface for Real-Time Applications. Sensors. 2019;19:3630. doi: 10.3390/s19163630. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Cornia M., Baraldi L., Serra G., Cucchiara R. Predicting Human Eye Fixations via an LSTM-Based Saliency Attentive Model. IEEE Trans. Image Process. 2018;27:5142–5154. doi: 10.1109/TIP.2018.2851672. [DOI] [PubMed] [Google Scholar]
- 5.Huey E.B. The Psychology and Pedagogy of Reading. The Macmillan Company; New York, NY, USA: 1908. [Google Scholar]
- 6.Buswell G.T. Fundamental Reading Habits: A Study of Their Development. American Psychological Association; Worcester, MA, USA: 1922. [Google Scholar]
- 7.Buswell G.T. How People Look at Pictures: A Study of the Psychology and Perception in Art. American Psychological Association; Worcester, MA, USA: 1935. [Google Scholar]
- 8.Yarbus A.L. Eye Movements and Vision. Springer; Berlin/Heidelberg, Germany: 2013. [Google Scholar]
- 9.Rayner K. Eye movements in reading and information processing. Psychol. Bull. 1978;85:618. doi: 10.1037/0033-2909.85.3.618. [DOI] [PubMed] [Google Scholar]
- 10.Wright R.D., Ward L.M. Orienting of Attention. Oxford University Press; Oxford, UK: 2008. [Google Scholar]
- 11.Posner M.I. Orienting of attention. Q. J. Exp. Psychol. 1980;32:3–25. doi: 10.1080/00335558008248231. [DOI] [PubMed] [Google Scholar]
- 12.Carpenter P.A., Just M.A. Eye Movements in Reading. Elsevier; Amsterdam, The Netherlands: 1983. What your eyes do while your mind is reading; pp. 275–307. [Google Scholar]
- 13.Jacob R.J., Karn K.S. The Mind’s Eye. Elsevier; Amsterdam, The Netherlands: 2003. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises; pp. 573–605. [Google Scholar]
- 14.Aleem I.S., Vidal M., Chapeskie J. Systems, Devices, and Methods for Laser Eye Tracking. Application No. 9,904,051. U.S. Patent. 2018 Feb 27;
- 15.Naqvi R.A., Arsalan M., Batchuluun G., Yoon H.S., Park K.R. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor. Sensors. 2018;18:456. doi: 10.3390/s18020456. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Swaminathan A., Ramachandran M. Enabling Augmented Reality Using Eye Gaze Tracking. Application No. 9,996,15. U.S. Patent. 2018 Jun 12;
- 17.Vicente F., Huang Z., Xiong X., De la Torre F., Zhang W., Levi D. Driver gaze tracking and eyes off the road detection system. IEEE Trans. Intell. Transp. Syst. 2015;16:2014–2027. doi: 10.1109/TITS.2015.2396031. [DOI] [Google Scholar]
- 18.Massé B., Ba S., Horaud R. Tracking gaze and visual focus of attention of people involved in social interaction. IEEE Trans. Pattern Anal. Mach. Intell. 2017;40:2711–2724. doi: 10.1109/TPAMI.2017.2782819. [DOI] [PubMed] [Google Scholar]
- 19.Ramirez Gomez A., Lankes M. Towards Designing Diegetic Gaze in Games: The Use of Gaze Roles and Metaphors. Multimodal Technol. Interact. 2019;3:65. doi: 10.3390/mti3040065. [DOI] [Google Scholar]
- 20.World Health Organization . Global Status Report on Road Safety 2013. WHO; Geneva, Switzerland: 2015. [Google Scholar]
- 21.World Health Organization . World Report on Road Traffic Injury Prevention. WHO; Geneva, Switzerland: 2014. [Google Scholar]
- 22.World Health Organization . Global Status Report on Road Safety: Time for Action. WHO; Geneva, Switzerland: 2009. [Google Scholar]
- 23.Bayly M., Fildes B., Regan M., Young K. Review of crash effectiveness of intelligent transport systems. Emergency. 2007;3:14. [Google Scholar]
- 24.Society of Automotive Engineers . Operational Definitions of Driving Performance Measures and Statistics. Society of Automotive Engineers; Warrendale, PA, USA: 2015. [Google Scholar]
- 25.Kiefer P., Giannopoulos I., Raubal M., Duchowski A. Eye tracking for spatial research: Cognition, computation, challenges. Spat. Cogn. Comput. 2017;17:1–19. doi: 10.1080/13875868.2016.1254634. [DOI] [Google Scholar]
- 26.Topolšek D., Areh I., Cvahte T. Examination of driver detection of roadside traffic signs and advertisements using eye tracking. Transp. Res. F Traffic Psychol. Behav. 2016;43:212–224. doi: 10.1016/j.trf.2016.10.002. [DOI] [Google Scholar]
- 27.Hansen D.W., Pece A.E.C. Eye tracking in the wild. Comput. Vis. Image Underst. 2005;98:155–181. doi: 10.1016/j.cviu.2004.07.013. [DOI] [Google Scholar]
- 28.Daugman J. The importance of being random: Statistical principles of iris recognition. Pattern Recognit. 2003;36:279–291. doi: 10.1016/S0031-3203(02)00030-4. [DOI] [Google Scholar]
- 29.Young D., Tunley H., Samuels R. Specialised Hough Transform and Active Contour Methods for Real-Time Eye Tracking. University of Sussex, Cognitive & Computing Science; Brighton, UK: 1995. [Google Scholar]
- 30.Kyung-Nam K., Ramakrishna R.S. Vision-based eye-gaze tracking for human computer interface; Proceedings of the IEEE SMC’99 Conference Proceedings, IEEE International Conference on Systems, Man, and Cybernetics; Tokyo, Japan. 12–15 October 1999; pp. 324–329. [Google Scholar]
- 31.Peréz A., Córdoba M.L., Garcia A., Méndez R., Munoz M., Pedraza J.L., Sanchez F. A Precise Eye-Gaze Detection and Tracking System. Union Agency; Leeds, UK: 2003. [Google Scholar]
- 32.Comaniciu D., Ramesh V., Meer P. Kernel-based object tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2003;25:564–577. doi: 10.1109/TPAMI.2003.1195991. [DOI] [Google Scholar]
- 33.Locher P.J., Nodine C.F. Symmetry Catches the Eye. In: O’Regan J.K., Levy-Schoen A., editors. Eye Movements from Physiology to Cognition. Elsevier; Amsterdam, The Netherlands: 1987. pp. 353–361. [Google Scholar]
- 34.Dongheng L., Winfield D., Parkhurst D.J. Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)-Workshops; San Diego, CA, USA. 21–23 September 2005; p. 79. [Google Scholar]
- 35.Yuille A.L., Hallinan P.W., Cohen D.S. Feature extraction from faces using deformable templates. Int. J. Comput. Vision. 1992;8:99–111. doi: 10.1007/BF00127169. [DOI] [Google Scholar]
- 36.Kin-Man L., Hong Y. Locating and extracting the eye in human face images. Pattern Recognit. 1996;29:771–779. doi: 10.1016/0031-3203(95)00119-0. [DOI] [Google Scholar]
- 37.Edwards G.J., Cootes T.F., Taylor C.J. European Conference on Computer Vision—ECCV’98. Springer; Berlin/Heidelberg, Germany: 1998. Face recognition using active appearance models; pp. 581–595. [Google Scholar]
- 38.Heo H., Lee W.O., Shin K.Y., Park K.R. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information. Sensors. 2014;14:8577–8604. doi: 10.3390/s140508577. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Zhang L. Estimation of eye and mouth corner point positions in a knowledge-based coding system; Proceedings of the Digital Compression Technologies and Systems for Video Communications; Berlin, Germany. 16 September 1996; pp. 21–28. [Google Scholar]
- 40.Kampmann M., Zhang L. Estimation of eye, eyebrow and nose features in videophone sequences; Proceedings of the International Workshop on Very Low Bitrate Video Coding (VLBV 98); Urbana, IL, USA. 8–9 October 1998; pp. 101–104. [Google Scholar]
- 41.Chow G., Li X. Towards a system for automatic facial feature detection. Pattern Recognit. 1993;26:1739–1755. doi: 10.1016/0031-3203(93)90173-T. [DOI] [Google Scholar]
- 42.Herpers R., Michaelis M., Lichtenauer K., Sommer G. Edge and keypoint detection in facial regions; Proceedings of the Second International Conference on Automatic Face and Gesture Recognition; Killington, VT, USA. 14–16 October 1996; pp. 212–217. [Google Scholar]
- 43.Li B., Fu H., Wen D., Lo W. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm. Sensors. 2018;18:1626. doi: 10.3390/s18051626. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Vincent J.M., Waite J.B., Myers D.J. Automatic location of visual features by a system of multilayered perceptrons. IEE Proc. F Radar Signal Process. 1992;139:405–412. doi: 10.1049/ip-f-2.1992.0058. [DOI] [Google Scholar]
- 45.Reinders M.J.T., Koch R.W.C., Gerbrands J.J. Locating facial features in image sequences using neural networks; Proceedings of the Second International Conference on Automatic Face and Gesture Recognition; Killington, VT, USA. 14–16 October 1996; pp. 230–235. [Google Scholar]
- 46.Kawato S., Ohya J. Two-step approach for real-time eye tracking with a new filtering technique; Proceedings of the Smc Conference Proceedings, IEEE International Conference on Systems, Man and Cybernetics. ‘Cybernetics Evolving to Systems, Humans, Organizations, and Their Complex Interactions’ (Cat. No. 0); Nashville, TN, USA. 8–11 October 2000; pp. 1366–1371. [Google Scholar]
- 47.Kawato S., Ohya J. Real-time detection of nodding and head-shaking by directly detecting and tracking the “between-eyes”; Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition; Grenoble, France. 28–30 March 2000; pp. 40–45. [Google Scholar]
- 48.Kawato S., Tetsutani N. Detection and tracking of eyes for gaze-camera control. Image Vis. Comput. 2004;22:1031–1038. doi: 10.1016/j.imavis.2004.03.013. [DOI] [Google Scholar]
- 49.Kawato S., Tetsutani N. Real-time detection of between-the-eyes with a circle frequency filter; Proceedings of the 5th Asian Conference on Computer Vision (ACCV2002); Melbourne, Australia. 23–25 January 2002; pp. 442–447. [Google Scholar]
- 50.Sirohey S., Rosenfeld A., Duric Z. A method of detecting and tracking irises and eyelids in video. Pattern Recognit. 2002;35:1389–1401. doi: 10.1016/S0031-3203(01)00116-9. [DOI] [Google Scholar]
- 51.Sirohey S.A., Rosenfeld A. Eye detection in a face image using linear and nonlinear filters. Pattern Recognit. 2001;34:1367–1391. doi: 10.1016/S0031-3203(00)00082-0. [DOI] [Google Scholar]
- 52.Yang J., Stiefelhagen R., Meier U., Waibel A. Real-time face and facial feature tracking and applications; Proceedings of the AVSP’98 International Conference on Auditory-Visual Speech Processing; Sydney Australia. 4–6 December 1998. [Google Scholar]
- 53.Ying-li T., Kanade T., Cohn J.F. Dual-state parametric eye tracking; Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition; Grenoble, France. 28–30 March 2000; pp. 110–115. [Google Scholar]
- 54.Lucas B.D., Kanade T. An iterative image registration technique with an application to stereo vision; Proceedings of the 7th International Joint Conference on Artificial Intelligence-Volume 2; Vancouver, BC, Canada. 24–28 August 1981; pp. 674–679. [Google Scholar]
- 55.Weimin H., Mariani R. Face detection and precise eyes location; Proceedings of the 15th International Conference on Pattern Recognition, ICPR-2000; Barcelona, Spain. 3–7 September 2000; pp. 722–727. [Google Scholar]
- 56.Samaria F., Young S. HMM-based architecture for face identification. Image Vis. Comput. 1994;12:537–543. doi: 10.1016/0262-8856(94)90007-8. [DOI] [Google Scholar]
- 57.Kovesi P. Symmetry and asymmetry from local phase; Proceedings of the Tenth Australian Joint Conference on Artificial Intelligence; Perth, Australia. 30 November–4 December 1997; pp. 2–4. [Google Scholar]
- 58.Lin C.-C., Lin W.-C. Extracting facial features by an inhibitory mechanism based on gradient distributions. Pattern Recognit. 1996;29:2079–2101. doi: 10.1016/S0031-3203(96)00034-9. [DOI] [Google Scholar]
- 59.Sela G., Levine M.D. Real-Time Attention for Robotic Vision. Real Time Imaging. 1997;3:173–194. doi: 10.1006/rtim.1996.0057. [DOI] [Google Scholar]
- 60.Grauman K., Betke M., Gips J., Bradski G.R. Communication via eye blinks-detection and duration analysis in real time; Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001; Kauai, HI, USA. 8–14 December 2001. [Google Scholar]
- 61.Crowley J.L., Berard F. Multi-modal tracking of faces for video communications; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; San Juan, Puerto Rico, USA. 17–19 June 1997; pp. 640–645. [Google Scholar]
- 62.Bala L.-P. Automatic detection and tracking of faces and facial features in video sequences; Proceedings of the Picture Coding Symposium; Berlin, Germany. 10–12 September 1997. [Google Scholar]
- 63.Ishikawa T. [(accessed on 7 September 2019)];Passive Driver Gaze Tracking with Active Appearance Models. 2004 Available online: https://kilthub.cmu.edu/articles/Passive_driver_gaze_tracking_with_active_appearance_models/6557315/1.
- 64.Matsumoto Y., Zelinsky A. An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement; Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580); Grenoble, France. 28–30 March 2000; pp. 499–504. [Google Scholar]
- 65.Xie X., Sudhakar R., Zhuang H. On improving eye feature extraction using deformable templates. Pattern Recognit. 1994;27:791–799. doi: 10.1016/0031-3203(94)90164-3. [DOI] [Google Scholar]
- 66.Feris R.S., de Campos T.E., Marcondes R.C. Detection and Tracking of Facial Features in Video Sequences. Springer; Berlin/Heidelberg, Germany: 2000. pp. 127–135. [Google Scholar]
- 67.Horng W.-B., Chen C.-Y., Chang Y., Fan C.-H. Driver fatigue detection based on eye tracking and dynamic template matching; Proceedings of the IEEE International Conference on Networking, Sensing and Control; Taipei, Taiwan. 21–23 March 2004; pp. 7–12. [Google Scholar]
- 68.Stiefelhagen R., Yang J., Waibel A. Tracking eyes and monitoring eye gaze; Proceedings of the Workshop on Perceptual User Interfaces; Banff, AB, Canada. 20–21 October 1997; pp. 98–100. [Google Scholar]
- 69.Hansen D.W., Hansen J.P., Nielsen M., Johansen A.S., Stegmann M.B. Eye typing using Markov and active appearance models; Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision (WACV 2002); Orlando, FL, USA. 4 December 2002; pp. 132–136. [Google Scholar]
- 70.Stiefelhagen R., Jie Y., Waibel A. A model-based gaze tracking system; Proceedings of the IEEE International Joint Symposia on Intelligence and Systems; Washington, DC, USA. 4–5 November 1996; pp. 304–310. [Google Scholar]
- 71.Xie X., Sudhakar R., Zhuang H. A cascaded scheme for eye tracking and head movement compensation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 1998;28:487–490. doi: 10.1109/3468.686709. [DOI] [Google Scholar]
- 72.Valenti R., Gevers T. Accurate eye center location and tracking using isophote curvature; Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition; Anchorage, AK, USA. 23–28 June 2008; pp. 1–8. [Google Scholar]
- 73.Morimoto C.H., Koons D., Amir A., Flickner M. Pupil detection and tracking using multiple light sources. Image Vis. Comput. 2000;18:331–335. doi: 10.1016/S0262-8856(99)00053-0. [DOI] [Google Scholar]
- 74.Morimoto C.H., Flickner M. Real-time multiple face detection using active illumination; Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580); Grenoble, France. 28–30 March 2000; pp. 8–13. [Google Scholar]
- 75.Ebisawa Y. Realtime 3D position detection of human pupil; Proceedings of the 2004 IEEE Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems (VCIMS); Boston, MA, USA. 12–14 July 2004; pp. 8–12. [Google Scholar]
- 76.Colombo C., Del Bimbo A. Real-time head tracking from the deformation of eye contours using a piecewise affine camera. Pattern Recognit. Lett. 1999;20:721–730. doi: 10.1016/S0167-8655(99)00036-7. [DOI] [Google Scholar]
- 77.Feng G.C., Yuen P.C. Variance projection function and its application to eye detection for human face recognition. Pattern Recognit. Lett. 1998;19:899–906. doi: 10.1016/S0167-8655(98)00065-8. [DOI] [Google Scholar]
- 78.Orazio T.D., Leo M., Cicirelli G., Distante A. An algorithm for real time eye detection in face images; Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004; Cambridge, UK. 26–26 August 2004; pp. 278–281. [Google Scholar]
- 79.Hallinan P.W. Recognizing Human Eyes. Volume 1570 SPIE; Bellingham, WA, USA: 1991. [Google Scholar]
- 80.Hillman P.M., Hannah J.M., Grant P.M. Global fitting of a facial model to facial features for model-based video coding; Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, ISPA 2003; Rome, Italy. 18–20 September 2003; pp. 359–364. [Google Scholar]
- 81.Zhu Z., Fujimura K., Ji Q. Real-time eye detection and tracking under various light conditions; Proceedings of the 2002 Symposium on Eye Tracking Research & Applications; New York, NY, USA. 25–27 March 2002; pp. 139–144. [Google Scholar]
- 82.Fasel I., Fortenberry B., Movellan J. A generative framework for real time object detection and classification. Comput. Vis. Image Underst. 2005;98:182–210. doi: 10.1016/j.cviu.2004.07.014. [DOI] [Google Scholar]
- 83.Huang J., Wechsler H. Eye location using genetic algorithm; Proceedings of the 2nd International Conference on Audio and Video-Based Biometric Person Authentication; Washington, DC, USA. 22–23 March 1999. [Google Scholar]
- 84.Hansen D.W., Hammoud R.I. An improved likelihood model for eye tracking. Comput. Vis. Image Underst. 2007;106:220–230. doi: 10.1016/j.cviu.2006.06.012. [DOI] [Google Scholar]
- 85.Cristinacce D., Cootes T.F. Feature detection and tracking with constrained local models; Proceedings of the British Machine Vision Conference; Edinburgh, UK. 4–7 September 2006; p. 3. [Google Scholar]
- 86.Kimme C., Ballard D., Sklansky J. Finding circles by an array of accumulators. Commun. ACM. 1975;18:120–122. doi: 10.1145/360666.360677. [DOI] [Google Scholar]
- 87.Ruddock K.H. Movements of the Eyes. J. Mod. Opt. 1989;36:1273. doi: 10.1080/09500348914551271. [DOI] [Google Scholar]
- 88.Guestrin E.D., Eizenman M. General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Trans. Biomed. Eng. 2006;53:1124–1133. doi: 10.1109/TBME.2005.863952. [DOI] [PubMed] [Google Scholar]
- 89.Baluja S., Pomerleau D. Non-Intrusive Gaze Tracking Using Artificial Neural Networks. Carnegie Mellon University; Pittsburgh, PA, USA: 1994. [Google Scholar]
- 90.Wang J.-G., Sung E., Venkateswarlu R. Estimating the eye gaze from one eye. Comput. Vis. Image Underst. 2005;98:83–103. doi: 10.1016/j.cviu.2004.07.008. [DOI] [Google Scholar]
- 91.Fridman L., Lee J., Reimer B., Victor T. IET Computer Vision. Volume 10. Institution of Engineering and Technology; Stevenage, UK: 2016. ‘Owl’ and ‘Lizard’: Patterns of head pose and eye pose in driver gaze classification; pp. 308–314. [Google Scholar]
- 92.Ohno T. One-point calibration gaze tracking method; Proceedings of the 2006 Symposium on Eye Tracking Research & Applications; San Diego, CA, USA. 27 March 2006; p. 34. [Google Scholar]
- 93.Ohno T., Mukawa N., Yoshikawa A. FreeGaze: A gaze tracking system for everyday gaze interaction; Proceedings of the 2002 Symposium on Eye Tracking Research & Applications; New Orleans, LA, USA. 25 March 2002; pp. 125–132. [Google Scholar]
- 94.Villanueva A., Cabeza R. Models for Gaze Tracking Systems. EURASIP J. Image Video Process. 2007;2007:023570. doi: 10.1186/1687-5281-2007-023570. [DOI] [Google Scholar]
- 95.Coutinho F.L., Morimoto C.H. Free head motion eye gaze tracking using a single camera and multiple light sources; Proceedings of the 2006 19th Brazilian Symposium on Computer Graphics and Image Processing; Manaus, Brazil. 8–11 October 2006; pp. 171–178. [Google Scholar]
- 96.Shih S.W., Wu Y.T., Liu J. A calibration-free gaze tracking technique; Proceedings of the 15th International Conference on Pattern Recognition, ICPR-2000; Barcelona, Spain. 3–7 September 2000; pp. 201–204. [Google Scholar]
- 97.Villanueva A., Cabeza R., Porta S. Eye tracking: Pupil orientation geometrical modeling. Image Vis. Comput. 2006;24:663–679. doi: 10.1016/j.imavis.2005.06.001. [DOI] [Google Scholar]
- 98.Yoo D.H., Chung M.J. A novel non-intrusive eye gaze estimation using cross-ratio under large head motion. Comput. Vis. Image Underst. 2005;98:25–51. doi: 10.1016/j.cviu.2004.07.011. [DOI] [Google Scholar]
- 99.Beymer D., Flickner M. Eye gaze tracking using an active stereo head; Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Madison, WI, USA. 18–20 June 2003; p. II-451. [Google Scholar]
- 100.Brolly X.L.C., Mulligan J.B. Implicit Calibration of a Remote Gaze Tracker; Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop; Washington, DC, USA. 27 June–2 July 2004; p. 134. [Google Scholar]
- 101.Ohno T., Mukawa N. A free-head, simple calibration, gaze tracking system that enables gaze-based interaction; Proceedings of the 2004 symposium on Eye Tracking Research & Applications; San Antonio, TX, USA. 22 March 2004; pp. 115–122. [Google Scholar]
- 102.Shih S.-W., Liu J. A novel approach to 3-D gaze tracking using stereo cameras. Trans. Sys. Man Cyber. Part B. 2004;34:234–245. doi: 10.1109/TSMCB.2003.811128. [DOI] [PubMed] [Google Scholar]
- 103.Kim S.M., Sked M., Ji Q. Non-intrusive eye gaze tracking under natural head movements; Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; San Francisco, CA, USA. 1–5 September 2004; pp. 2271–2274. [DOI] [PubMed] [Google Scholar]
- 104.Meyer A., Böhme M., Martinetz T., Barth E. International Tutorial and Research Workshop on Perception and Interactive Technologies for Speech-Based Systems. Springer; Berlin/Heidelberg, Germany: 2006. A Single-Camera Remote Eye Tracker; pp. 208–211. [Google Scholar]
- 105.Morimoto C.H., Amir A., Flickner M. Detecting eye position and gaze from a single camera and 2 light sources; Proceedings of the Object Recognition Supported by User Interaction for Service Robots; Quebec, QC, Canada. 11–15 August 2002; pp. 314–317. [Google Scholar]
- 106.Newman R., Matsumoto Y., Rougeaux S., Zelinsky A. Real-time stereo tracking for head pose and gaze estimation; Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580); Grenoble, France. 28–30 March 2000; pp. 122–128. [Google Scholar]
- 107.Noureddin B., Lawrence P.D., Man C.F. A non-contact device for tracking gaze in a human computer interface. Comput. Vis. Image Underst. 2005;98:52–82. doi: 10.1016/j.cviu.2004.07.005. [DOI] [Google Scholar]
- 108.Villanueva A., Cabeza R., Porta S. Gaze Tracking system model based on physical parameters. Int. J. Pattern Recognit. Artif. Intell. 2007;21:855–877. doi: 10.1142/S0218001407005697. [DOI] [Google Scholar]
- 109.Hansen D.W., Skovsgaard H.H.T., Hansen J.P., Møllenbach E. Noise tolerant selection by gaze-controlled pan and zoom in 3D; Proceedings of the 2008 Symposium on Eye Tracking Research & Applications; Savannah, GA, USA. 26 March 2008; pp. 205–212. [Google Scholar]
- 110.Vertegaal R., Weevers I., Sohn C. GAZE-2: An attentive video conferencing system; Proceedings of the CHI’02 Extended Abstracts on Human Factors in Computing Systems; Kingston, ON, Canada. 20 April 2002; pp. 736–737. [Google Scholar]
- 111.Ebisawa Y., Satoh S. Effectiveness of pupil area detection technique using two light sources and image difference method; Proceedings of the 15th Annual International Conference of the IEEE Engineering in Medicine and Biology Societ; San Diego, CA, USA. 31 October 1993; pp. 1268–1269. [Google Scholar]
- 112.Morimoto C.H., Mimica M.R.M. Eye gaze tracking techniques for interactive applications. Comput. Vis. Image Underst. 2005;98:4–24. doi: 10.1016/j.cviu.2004.07.010. [DOI] [Google Scholar]
- 113.Ji Q., Yang X. Real-Time Eye, Gaze, and Face Pose Tracking for Monitoring Driver Vigilance. Real Time Imaging. 2002;8:357–377. doi: 10.1006/rtim.2002.0279. [DOI] [Google Scholar]
- 114.Williams O., Blake A., Cipolla R. Sparse and Semi-supervised Visual Mapping with the S^3GP; Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06); New York, NY, USA. 17–22 June 2006; pp. 230–237. [Google Scholar]
- 115.Bin Suhaimi M.S.A., Matsushita K., Sasaki M., Njeri W. 24-Gaze-Point Calibration Method for Improving the Precision of AC-EOG Gaze Estimation. Sensors. 2019;19:3650. doi: 10.3390/s19173650. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 116.Hansen D.W. Committing Eye Tracking. IT University of Copenhagen, Department of Innovation; København, Denmark: 2003. [Google Scholar]
- 117.Stampe D.M. Heuristic filtering and reliable calibration methods for video-based pupil-tracking systems. Behav. Res. Methods Instrum. Comput. 1993;25:137–142. doi: 10.3758/BF03204486. [DOI] [Google Scholar]
- 118.Merchant J., Morrissette R., Porterfield J.L. Remote Measurement of Eye Direction Allowing Subject Motion Over One Cubic Foot of Space. IEEE Trans. Biomed. Eng. 1974;BME-21:309–317. doi: 10.1109/TBME.1974.324318. [DOI] [PubMed] [Google Scholar]
- 119.White K.P., Hutchinson T.E., Carley J.M. Spatially dynamic calibration of an eye-tracking system. IEEE Trans. Syst. Man Cybern. 1993;23:1162–1168. doi: 10.1109/21.247897. [DOI] [Google Scholar]
- 120.Zhu Z., Ji Q. Eye and gaze tracking for interactive graphic display. Mach. Vis. Appl. 2004;15:139–148. doi: 10.1007/s00138-004-0139-4. [DOI] [Google Scholar]
- 121.Zhiwei Z., Qiang J., Bennett K.P. Nonlinear Eye Gaze Mapping Function Estimation via Support Vector Regression; Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06); Hong Kong, China. 20–24 August 2006; pp. 1132–1135. [Google Scholar]
- 122.Kolakowski S.M., Pelz J.B. Compensating for eye tracker camera movement; Proceedings of the 2006 Symposium on Eye Tracking Research & Applications; San Diego, CA, USA. 27 March 2006; pp. 79–85. [Google Scholar]
- 123.Zhu Z., Ji Q. Novel Eye Gaze Tracking Techniques Under Natural Head Movement. IEEE Trans. Biomed. Eng. 2007;54:2246–2260. doi: 10.1109/tbme.2007.895750. [DOI] [PubMed] [Google Scholar]
- 124.Müller P.U., Cavegn D., d’Ydewalle G., Groner R. Perception and Cognition: Advances in Eye Movement Research. North-Holland/Elsevier Science Publishers; Amsterdam, The Netherlands: 1993. A comparison of a new limbus tracker, corneal reflection technique, Purkinje eye tracking and electro-oculography; pp. 393–401. [Google Scholar]
- 125.Crane H.D., Steele C.M. Accurate three-dimensional eyetracker. Appl. Opt. 1978;17:691–705. doi: 10.1364/AO.17.000691. [DOI] [PubMed] [Google Scholar]
- 126.Xu L.-Q., Machin D., Sheppard P. A Novel Approach to Real-time Non-intrusive Gaze Finding; Proceedings of the British Machine Vision Conference; Southampton, UK. 14–17 September 1998; pp. 1–10. [Google Scholar]
- 127.Wang J.-G., Sung E. Gaze determination via images of irises. Image Vis. Comput. 2001;19:891–911. doi: 10.1016/S0262-8856(01)00051-8. [DOI] [Google Scholar]
- 128.Tan K.-H., Kriegman D.J., Ahuja N. Appearance-based eye gaze estimation; Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision (WACV 2002); Orlando, FL, USA. 4 December 2002; pp. 191–195. [Google Scholar]
- 129.Heinzmann K., Zelinsky A. 3-D Facial Pose and Gaze Point Estimation Using a Robust Real-Time Tracking Paradigm; Proceedings of the 3rd, International Conference on Face & Gesture Recognition; Nara, Japan. 14–16 April 1998; p. 142. [Google Scholar]
- 130.Yamazoe H., Utsumi A., Yonezawa T., Abe S. Remote gaze estimation with a single camera based on facial-feature tracking without special calibration actions; Proceedings of the 2008 Symposium on Eye Tracking Research & Applications; Savannah, GA, USA. 26 March 2008; pp. 245–250. [Google Scholar]
- 131.Matsumoto Y., Ogasawara T., Zelinsky A. Behavior recognition based on head pose and gaze direction measurement; Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000) (Cat. No.00CH37113); Takamatsu, Japan. 31 October–5 November 2000; pp. 2127–2132. [Google Scholar]
- 132.Li G., Li S.E., Zou R., Liao Y., Cheng B. Detection of road traffic participants using cost-effective arrayed ultrasonic sensors in low-speed traffic situations. Mech. Syst. Signal Process. 2019;132:535–545. doi: 10.1016/j.ymssp.2019.07.009. [DOI] [Google Scholar]
- 133.Scott D., Findlay J.M., Hursley Human Factors Laboratory W., Laboratory I.U.H.H.F. Visual Search, Eye Movements and Display Units. IBM UK Hursley Human Factors Laboratory; Winchester, UK: 1991. [Google Scholar]
- 134.Talmi K., Liu J. Eye and gaze tracking for visually controlled interactive stereoscopic displays. Signal Process. Image Commun. 1999;14:799–810. doi: 10.1016/S0923-5965(98)00044-7. [DOI] [Google Scholar]
- 135.Tomono A., Iida M., Kobayashi Y. A TV Camera System Which Extracts Feature Points for Non-Contact Eye Movement Detection. Volume 1194 SPIE; Bellingham, WA, USA: 1990. [Google Scholar]
- 136.Harbluk J.L., Noy Y.I., Eizenman M. [(accessed on 14 December 2019)];The Impact of Cognitive Distraction on Driver Visual Behaviour and Vehicle Control. 2002 Available online: https://trid.trb.org/view/643031.
- 137.Sodhi M., Reimer B., Cohen J.L., Vastenburg E., Kaars R., Kirschenbaum S. On-road driver eye movement tracking using head-mounted devices; Proceedings of the 2002 Symposium on Eye Tracking Research & Applications; New Orleans, LA, USA. 25–27 March 2002; pp. 61–68. [Google Scholar]
- 138.Reimer B., Mehler B., Wang Y., Coughlin J.F. A Field Study on the Impact of Variations in Short-Term Memory Demands on Drivers’ Visual Attention and Driving Performance Across Three Age Groups. Hum. Factors. 2012;54:454–468. doi: 10.1177/0018720812437274. [DOI] [PubMed] [Google Scholar]
- 139.Reimer B., Mehler B., Wang Y., Coughlin J.F. The Impact of Systematic Variation of Cognitive Demand on Drivers’ Visual Attention across Multiple Age Groups. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2010;54:2052–2055. doi: 10.1177/154193121005402407. [DOI] [Google Scholar]
- 140.Liang Y., Lee J.D. Combining cognitive and visual distraction: Less than the sum of its parts. Accid. Anal. Prev. 2010;42:881–890. doi: 10.1016/j.aap.2009.05.001. [DOI] [PubMed] [Google Scholar]
- 141.Harbluk J.L., Noy Y.I., Trbovich P.L., Eizenman M. An on-road assessment of cognitive distraction: Impacts on drivers’ visual behavior and braking performance. Accid. Anal. Prev. 2007;39:372–379. doi: 10.1016/j.aap.2006.08.013. [DOI] [PubMed] [Google Scholar]
- 142.Victor T.W., Harbluk J.L., Engström J.A. Sensitivity of eye-movement measures to in-vehicle task difficulty. Transp. Res. Part F Traffic Psychol. Behav. 2005;8:167–190. doi: 10.1016/j.trf.2005.04.014. [DOI] [Google Scholar]
- 143.Recarte M.A., Nunes L.M. Effects of verbal and spatial-imagery tasks on eye fixations while driving. J. Exp. Psychol. Appl. 2000;6:31–43. doi: 10.1037/1076-898X.6.1.31. [DOI] [PubMed] [Google Scholar]
- 144.Recarte M.A., Nunes L.M. Mental workload while driving: Effects on visual search, discrimination, and decision making. J. Exp. Psychol. Appl. 2003;9:119–137. doi: 10.1037/1076-898X.9.2.119. [DOI] [PubMed] [Google Scholar]
- 145.Nunes L., Recarte M.A. Cognitive demands of hands-free-phone conversation while driving. Transp. Res. Part F Traffic Psychol. Behav. 2002;5:133–144. doi: 10.1016/S1369-8478(02)00012-8. [DOI] [Google Scholar]
- 146.Kass S.J., Cole K.S., Stanny C.J. Effects of distraction and experience on situation awareness and simulated driving. Transp. Res. Part F Traffic Psychol. Behav. 2007;10:321–329. doi: 10.1016/j.trf.2006.12.002. [DOI] [Google Scholar]
- 147.Engström J., Johansson E., Östlund J. Effects of visual and cognitive load in real and simulated motorway driving. Transp. Res. Part F Traffic Psychol. Behav. 2005;8:97–120. doi: 10.1016/j.trf.2005.04.012. [DOI] [Google Scholar]
- 148.Ahlström C., Kircher K., Kircher A. Considerations when calculating percent road centre from eye movement data in driver distraction monitoring; Proceedings of the Fifth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design; Big Sky, MT, USA. 22–25 June 2009; pp. 132–139. [Google Scholar]
- 149.Wang Y., Reimer B., Dobres J., Mehler B. The sensitivity of different methodologies for characterizing drivers’ gaze concentration under increased cognitive demand. Transp. Res. F Traffic Psychol. Behav. 2014;26:227–237. doi: 10.1016/j.trf.2014.08.003. [DOI] [Google Scholar]
- 150.Endsley M.R. Toward a Theory of Situation Awareness in Dynamic Systems. Routledge; Abingdon, UK: 2016. [Google Scholar]
- 151.Khan M.Q., Lee S. A Comprehensive Survey of Driving Monitoring and Assistance Systems. Sensors. 2019;19:2574. doi: 10.3390/s19112574. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 152.Martinez C.M., Heucke M., Wang F., Gao B., Cao D. Driving Style Recognition for Intelligent Vehicle Control and Advanced Driver Assistance: A Survey. IEEE Trans. Intell. Transp. Syst. 2018;19:666–676. doi: 10.1109/TITS.2017.2706978. [DOI] [Google Scholar]
- 153.Regan M.A., Lee J.D., Young K. Driver Distraction: Theory, Effects, and Mitigation. CRC Press; Boca Raton, FL, USA: 2008. [Google Scholar]
- 154.Ranney T., Mazzae E., Garrott R., Goodman M., Administration N.H.T.S. Driver distraction research: Past, present and future; Proceedings of the 17th International Technical Conference of Enhanced Safety of Vehicles; Amsterdam, The Netherlands. 4–7 June 2001. [Google Scholar]
- 155.Young K., Regan M., Hammer M. Driver distraction: A review of the literature. Distracted Driv. 2007;2007:379–405. [Google Scholar]
- 156.Stutts J.C., Reinfurt D.W., Staplin L., Rodgman E. The Role of Driver Distraction in Traffic Crashes. AAA Foundation for Traffic Safety; Washington, DC, USA: 2001. [Google Scholar]
- 157.Zhao Y., Görne L., Yuen I.-M., Cao D., Sullman M., Auger D., Lv C., Wang H., Matthias R., Skrypchuk L., et al. An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring. Sensors. 2017;17:2692. doi: 10.3390/s17112692. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 158.Khandakar A., Chowdhury M.E.H., Ahmed R., Dhib A., Mohammed M., Al-Emadi N.A.M.A., Michelson D. Portable System for Monitoring and Controlling Driver Behavior and the Use of a Mobile Phone While Driving. Sensors. 2019;19:1563. doi: 10.3390/s19071563. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 159.Ranney T.A., Garrott W.R., Goodman M.J. NHTSA Driver Distraction Research: Past, Present, and Future. SAE Technical Paper; Warrendale, PA, USA: 2001. [Google Scholar]
- 160.Fitch G.M., Soccolich S.A., Guo F., McClafferty J., Fang Y., Olson R.L., Perez M.A., Hanowski R.J., Hankey J.M., Dingus T.A. The Impact of Hand-Held and Hands-Free Cell Phone Use on Driving Performance and Safety-Critical Event Risk. U.S. Department of Transportation, National Highway Traffic Safety Administration; Washington, DC, USA: 2013. [Google Scholar]
- 161.Miller J., Ulrich R. Bimanual Response Grouping in Dual-Task Paradigms. Q. J. Exp. Psychol. 2008;61:999–1019. doi: 10.1080/17470210701434540. [DOI] [PubMed] [Google Scholar]
- 162.Gazes Y., Rakitin B.C., Steffener J., Habeck C., Butterfield B., Ghez C., Stern Y. Performance degradation and altered cerebral activation during dual performance: Evidence for a bottom-up attentional system. Behav. Brain Res. 2010;210:229–239. doi: 10.1016/j.bbr.2010.02.036. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 163.Törnros J.E.B., Bolling A.K. Mobile phone use—Effects of handheld and handsfree phones on driving performance. Accid. Anal. Prev. 2005;37:902–909. doi: 10.1016/j.aap.2005.04.007. [DOI] [PubMed] [Google Scholar]
- 164.Young K.L., Salmon P.M., Cornelissen M. Distraction-induced driving error: An on-road examination of the errors made by distracted and undistracted drivers. Accid. Anal. Prev. 2013;58:218–225. doi: 10.1016/j.aap.2012.06.001. [DOI] [PubMed] [Google Scholar]
- 165.Chan M., Singhal A. The emotional side of cognitive distraction: Implications for road safety. Accid. Anal. Prev. 2013;50:147–154. doi: 10.1016/j.aap.2012.04.004. [DOI] [PubMed] [Google Scholar]
- 166.Strayer D.L., Cooper J.M., Turrill J., Coleman J., Medeiros-Ward N., Biondi F. Measuring Cognitive Distraction in the Automobile. AAA Foundation for Traffic Safety; Washington, DC, USA: 2013. [Google Scholar]
- 167.Rakauskas M.E., Gugerty L.J., Ward N.J. Effects of naturalistic cell phone conversations on driving performance. J. Saf. Res. 2004;35:453–464. doi: 10.1016/j.jsr.2004.06.003. [DOI] [PubMed] [Google Scholar]
- 168.Horberry T., Anderson J., Regan M.A., Triggs T.J., Brown J. Driver distraction: The effects of concurrent in-vehicle tasks, road environment complexity and age on driving performance. Accid. Anal. Prev. 2006;38:185–191. doi: 10.1016/j.aap.2005.09.007. [DOI] [PubMed] [Google Scholar]
- 169.Awais M., Badruddin N., Drieberg M. A Hybrid Approach to Detect Driver Drowsiness Utilizing Physiological Signals to Improve System Performance and Wearability. Sensors. 2017;17:1991. doi: 10.3390/s17091991. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 170.Chien J.-C., Chen Y.-S., Lee J.-D. Improving Night Time Driving Safety Using Vision-Based Classification Techniques. Sensors. 2017;17:2199. doi: 10.3390/s17102199. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 171.Thum Chia C., Mustafa M.M., Hussain A., Hendi S.F., Majlis B.Y. Development of vehicle driver drowsiness detection system using electrooculogram (EOG); Proceedings of the 2005 1st International Conference on Computers, Communications, & Signal Processing with Special Track on Biomedical Engineering; Kuala Lumpur, Malaysia. 14–16 November 2005; pp. 165–168. [Google Scholar]
- 172.Sirevaag E.J., Stern J.A. Ocular Measures of Fatigue and Cognitive Factors. Engineering Psychophysiology: Issues and Applications. CRC Press; Boca Raton, FL, USA: 2000. pp. 269–287. [Google Scholar]
- 173.Schleicher R., Galley N., Briest S., Galley L. Blinks and saccades as indicators of fatigue in sleepiness warnings: Looking tired? Ergonomics. 2008;51:982–1010. doi: 10.1080/00140130701817062. [DOI] [PubMed] [Google Scholar]
- 174.Yue C. EOG Signals in Drowsiness Research. 2011. [(accessed on 14 December 2019)]; Available online: https://pdfs.semanticscholar.org/8b77/9934f6ceae3073b3312c947f39467a74828f.pdf.
- 175.Thorslund B. Electrooculogram Analysis and Development of a System for Defining Stages of Drowsiness. Statens väg-och transportforskningsinstitut; Linköping, Sweden: 2004. [Google Scholar]
- 176.Pohl J., Birk W., Westervall L. A driver-distraction-based lane-keeping assistance system. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2007;221:541–552. doi: 10.1243/09596518JSCE218. [DOI] [Google Scholar]
- 177.Kircher K., Ahlstrom C., Kircher A. Comparison of two eye-gaze based real-time driver distraction detection algorithms in a small-scale field operational test; Proceedings of the Fifth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design; Big Sky, MT, USA. 22–25 June 2009. [Google Scholar]
- 178.Kim W., Jung W.-S., Choi H.K. Lightweight Driver Monitoring System Based on Multi-Task Mobilenets. Sensors. 2019;19:3200. doi: 10.3390/s19143200. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 179.Mavely A.G., Judith J.E., Sahal P.A., Kuruvilla S.A. Eye gaze tracking based driver monitoring system; Proceedings of the 2017 IEEE International Conference on Circuits and Systems (ICCS); Thiruvananthapuram, India. 20–21 December 2017; pp. 364–367. [Google Scholar]
- 180.Wollmer M., Blaschke C., Schindl T., Schuller B., Farber B., Mayer S., Trefflich B. Online Driver Distraction Detection Using Long Short-Term Memory. IEEE Trans. Intell. Transp. Syst. 2011;12:574–582. doi: 10.1109/TITS.2011.2119483. [DOI] [Google Scholar]
- 181.Castro M.J.C.D., Medina J.R.E., Lopez J.P.G., Goma J.C.d., Devaraj M. A Non-Intrusive Method for Detecting Visual Distraction Indicators of Transport Network Vehicle Service Drivers Using Computer Vision; Proceedings of the IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM); Baguio, Philippines. 29 November–2 December 2018; pp. 1–5. [Google Scholar]
- 182.Banaeeyan R., Halin A.A., Bahari M. Nonintrusive eye gaze tracking using a single eye image; Proceedings of the 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA); Kuala Lumpur, Malaysia. 19–21 October 2015; pp. 139–144. [Google Scholar]
- 183.Anjali K.U., Thampi A.K., Vijayaraman A., Francis M.F., James N.J., Rajan B.K. Real-time nonintrusive monitoring and detection of eye blinking in view of accident prevention due to drowsiness; Proceedings of the 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT); Nagercoil, India. 18–19 March 2016; pp. 1–6. [Google Scholar]
- 184.Hirayama T., Mase K., Takeda K. Analysis of Temporal Relationships between Eye Gaze and Peripheral Vehicle Behavior for Detecting Driver Distraction. Int. J. Veh. Technol. 2013;2013:285927. doi: 10.1155/2013/285927. [DOI] [Google Scholar]
- 185.Yang Y., Sun H., Liu T., Huang G.-B., Sourina O. Proceedings of ELM-2014 Volume 2. Springer; Berlin/Heidelberg, Germany: 2015. Driver Workload Detection in On-Road Driving Environment Using Machine Learning; pp. 389–398. [Google Scholar]
- 186.Tango F., Botta M. Real-Time Detection System of Driver Distraction Using Machine Learning. IEEE Trans. Intell. Transp. Syst. 2013;14:894–905. doi: 10.1109/TITS.2013.2247760. [DOI] [Google Scholar]
- 187.Mbouna R.O., Kong S.G., Chun M. Visual Analysis of Eye State and Head Pose for Driver Alertness Monitoring. IEEE Trans. Intell. Transp. Syst. 2013;14:1462–1469. doi: 10.1109/TITS.2013.2262098. [DOI] [Google Scholar]
- 188.Ahlstrom C., Kircher K., Kircher A. A Gaze-Based Driver Distraction Warning System and Its Effect on Visual Behavior. IEEE Trans. Intell. Transp. Syst. 2013;14:965–973. doi: 10.1109/TITS.2013.2247759. [DOI] [Google Scholar]
- 189.Liu T., Yang Y., Huang G., Yeo Y.K., Lin Z. Driver Distraction Detection Using Semi-Supervised Machine Learning. IEEE Trans. Intell. Transp. Syst. 2016;17:1108–1120. doi: 10.1109/TITS.2015.2496157. [DOI] [Google Scholar]
- 190.Yekhshatyan L., Lee J.D. Changes in the Correlation between Eye and Steering Movements Indicate Driver Distraction. IEEE Trans. Intell. Transp. Syst. 2013;14:136–145. doi: 10.1109/TITS.2012.2208223. [DOI] [Google Scholar]
- 191.Carsten O., Brookhuis K. Issues arising from the HASTE experiments. Transp. Res. Part F Traffic Psychol. Behav. 2005;8:191–196. doi: 10.1016/j.trf.2005.04.004. [DOI] [Google Scholar]
- 192.Ebrahim P. Ph.D. Thesis. University of Stuttgart; Stuttgart, Germany: 2016. Driver Drowsiness Monitoring Using Eye Movement Features Derived from Electrooculography. [Google Scholar]
- 193.Shin D.U.K., Sakai H., Uchiyama Y. Slow eye movement detection can prevent sleep-related accidents effectively in a simulated driving task. J. Sleep Res. 2011;20:416–424. doi: 10.1111/j.1365-2869.2010.00891.x. [DOI] [PubMed] [Google Scholar]
- 194.Liang Y., Reyes M.L., Lee J.D. Real-Time Detection of Driver Cognitive Distraction Using Support Vector Machines. IEEE Trans. Intell. Transp. Syst. 2007;8:340–350. doi: 10.1109/TITS.2007.895298. [DOI] [Google Scholar]
- 195.Liang Y., Lee J.D. A hybrid Bayesian Network approach to detect driver cognitive distraction. Transp. Res. Part C Emerg. Technol. 2014;38:146–155. doi: 10.1016/j.trc.2013.10.004. [DOI] [Google Scholar]
- 196.Weller G., Schlag B. A robust method to detect driver distraction. [(accessed on 14 December 2019)]; Available online: http://www.humanist-vce.eu/fileadmin/contributeurs/humanist/Berlin2010/4a_Weller.pdf.
- 197.Miyaji M., Kawanaka H., Oguri K. Driver’s cognitive distraction detection using physiological features by the adaboost; Proceedings of the 2009 12th International IEEE Conference on Intelligent Transportation Systems; St. Louis, MO, USA. 4–7 October 2009; pp. 1–6. [Google Scholar]
- 198.Xu J., Min J., Hu J. Real-time eye tracking for the assessment of driver fatigue. Healthc. Technol. Lett. 2018;5:54–58. doi: 10.1049/htl.2017.0020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 199.Tang J., Fang Z., Hu S., Ying S. Driver fatigue detection algorithm based on eye features; Proceedings of the 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery; Yantai, China. 10–12 August 2010; pp. 2308–2311. [Google Scholar]
- 200.Li J., Yang Z., Song Y. A hierarchical fuzzy decision model for driver’s unsafe states monitoring; Proceedings of the 2011 Eighth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD); Shanghai, China. 26–28 July 2011; pp. 569–573. [Google Scholar]
- 201.Rigane O., Abbes K., Abdelmoula C., Masmoudi M. A Fuzzy Based Method for Driver Drowsiness Detection; Proceedings of the IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA); Hammamet, Tunisia. 30 October–3 November 2017; pp. 143–147. [Google Scholar]
- 202.Lethaus F., Baumann M.R., Köster F., Lemmer K. International Conference on Adaptive and Natural Computing Algorithms. Springer; Berlin/Heidelberg, Germany: 2011. Using pattern recognition to predict driver intent; pp. 140–149. [Google Scholar]
- 203.Xiao Z., Hu Z., Geng L., Zhang F., Wu J., Li Y. Fatigue driving recognition network: Fatigue driving recognition via convolutional neural network and long short-term memory units. IET Intell. Transp. Syst. 2019;13:1410–1416. doi: 10.1049/iet-its.2018.5392. [DOI] [Google Scholar]
- 204.Ji Q., Zhu Z., Lan P. Real-time nonintrusive monitoring and prediction of driver fatigue. IEEE Trans. Veh. Technol. 2004;53:1052–1068. [Google Scholar]
- 205.Wang H., Song W., Liu W., Song N., Wang Y., Pan H. A Bayesian Scene-Prior-Based Deep Network Model for Face Verification. Sensors. 2018;18:1906. doi: 10.3390/s18061906. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 206.Jain A., Koppula H.S., Soh S., Raghavan B., Singh A., Saxena A. Brain4cars: Car that knows before you do via sensory-fusion deep learning architecture. arXiv. 2016;arXiv:1601.00740. [Google Scholar]
- 207.Fridman L., Langhans P., Lee J., Reimer B. Driver Gaze Region Estimation without Use of Eye Movement. IEEE Intell. Syst. 2016;31:49–56. doi: 10.1109/MIS.2016.47. [DOI] [Google Scholar]
- 208.Li G., Yang Y., Qu X. Deep Learning Approaches on Pedestrian Detection in Hazy Weather. IEEE Trans. Ind. Electron. 2019:2945295. doi: 10.1109/TIE.2019.2945295. [DOI] [Google Scholar]
- 209.Song C., Yan X., Stephen N., Khan A.A. Hidden Markov model and driver path preference for floating car trajectory map matching. IET Intell. Transp. Syst. 2018;12:1433–1441. doi: 10.1049/iet-its.2018.5132. [DOI] [Google Scholar]
- 210.Muñoz M., Reimer B., Lee J., Mehler B., Fridman L. Distinguishing patterns in drivers’ visual attention allocation using Hidden Markov Models. Transp. Res. Part F Traffic Psychol. Behav. 2016;43:90–103. doi: 10.1016/j.trf.2016.09.015. [DOI] [Google Scholar]
- 211.Hou H., Jin L., Niu Q., Sun Y., Lu M. Driver Intention Recognition Method Using Continuous Hidden Markov Model. Int. J. Comput. Intell. Syst. 2011;4:386–393. doi: 10.1080/18756891.2011.9727797. [DOI] [Google Scholar]
- 212.Fu R., Wang H., Zhao W. Dynamic driver fatigue detection using hidden Markov model in real driving condition. Expert Syst. Appl. 2016;63:397–411. doi: 10.1016/j.eswa.2016.06.042. [DOI] [Google Scholar]
- 213.Morris B., Doshi A., Trivedi M. Lane change intent prediction for driver assistance: On-road design and evaluation; Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV); Baden-Baden, Germany. 5–9 June 2011; pp. 895–901. [Google Scholar]
- 214.Tang J., Liu F., Zhang W., Ke R., Zou Y. Lane-changes prediction based on adaptive fuzzy neural network. Expert Syst. Appl. 2018;91:452–463. doi: 10.1016/j.eswa.2017.09.025. [DOI] [Google Scholar]
- 215.Zhu W., Miao J., Hu J., Qing L. Vehicle detection in driving simulation using extreme learning machine. Neurocomputing. 2014;128:160–165. doi: 10.1016/j.neucom.2013.05.052. [DOI] [Google Scholar]
- 216.Kumar P., Perrollaz M., Lefevre S., Laugier C. Learning-based approach for online lane change intention prediction; Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV); Gold Coast, Australia. 23–26 June 2013; pp. 797–802. [Google Scholar]
- 217.Beggiato M., Pech T., Leonhardt V., Lindner P., Wanielik G., Bullinger-Hoffmann A., Krems J. UR:BAN Human Factors in Traffic. Springer; Berlin/Heidelberg, Germany: 2018. Lane Change Prediction: From Driver Characteristics, Manoeuvre Types and Glance Behaviour to a Real-Time Prediction Algorithm; pp. 205–221. [Google Scholar]
- 218.Krumm J. A Markov Model for Driver Turn Prediction. [(accessed on 2 October 2019)]; Available online: https://www.microsoft.com/en-us/research/publication/markov-model-driver-turn-prediction/
- 219.Li X., Wang W., Roetting M. Estimating Driver’s Lane-Change Intent Considering Driving Style and Contextual Traffic. IEEE Trans. Intell. Transp. Syst. 2019;20:3258–3271. doi: 10.1109/TITS.2018.2873595. [DOI] [Google Scholar]
- 220.Husen M.N., Lee S., Khan M.Q. Syntactic pattern recognition of car driving behavior detection; Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication; Beppu, Japan. 5–7 January 2017; pp. 1–6. [Google Scholar]
- 221.Kuge N., Yamamura T., Shimoyama O., Liu A. A Driver Behavior Recognition Method Based on a Driver Model Framework. SAE Technical Paper; Warrendale, PA, USA: 2000. ISSN 0148-7191. [Google Scholar]
- 222.Doshi A., Trivedi M.M. On the roles of eye gaze and head dynamics in predicting driver’s intent to change lanes. IEEE Trans. Intell. Transp. Syst. 2009;10:453–462. doi: 10.1109/TITS.2009.2026675. [DOI] [Google Scholar]
- 223.McCall J.C., Trivedi M.M. Driver behavior and situation aware brake assistance for intelligent vehicles. Proc. IEEE. 2007;95:374–387. doi: 10.1109/JPROC.2006.888388. [DOI] [Google Scholar]
- 224.Cheng S.Y., Trivedi M.M. Turn-intent analysis using body pose for intelligent driver assistance. IEEE Pervasive Comput. 2006;5:28–37. doi: 10.1109/MPRV.2006.88. [DOI] [Google Scholar]
- 225.Li G., Li S.E., Cheng B., Green P. Estimation of driving style in naturalistic highway traffic using maneuver transition probabilities. Transp. Res. Part C Emerg. Technol. 2017;74:113–125. doi: 10.1016/j.trc.2016.11.011. [DOI] [Google Scholar]
- 226.Bergasa L.M., Nuevo J., Sotelo M.A., Barea R., Lopez M.E. Real-time system for monitoring driver vigilance. IEEE Trans. Intell. Transp. Syst. 2006;7:63–77. doi: 10.1109/TITS.2006.869598. [DOI] [Google Scholar]
- 227.Smith P., Shah M., Lobo N.D.V. Determining driver visual attention with one camera. IEEE Trans. Intell. Transp. Syst. 2003;4:205–218. doi: 10.1109/TITS.2003.821342. [DOI] [Google Scholar]
- 228.Sigari M.-H., Fathy M., Soryani M. A Driver Face Monitoring System for Fatigue and Distraction Detection. Int. J. Veh. Technol. 2013;2013:263983. doi: 10.1155/2013/263983. [DOI] [Google Scholar]
- 229.Flores M., Armingol J., de la Escalera A. Driver Drowsiness Warning System Using Visual Information for Both Diurnal and Nocturnal Illumination Conditions. EURASIP J. Adv. Signal Process. 2010;2010:438205. doi: 10.1155/2010/438205. [DOI] [Google Scholar]
- 230.Wang R.-B., Guo K.-Y., Shi S.-M., Chu J.-W. A monitoring method of driver fatigue behavior based on machine vision; Proceedings of the IEEE IV2003 Intelligent Vehicles Symposium (Cat. No.03TH8683); Columbus, OH, USA. 9–11 June 2003; pp. 110–113. [Google Scholar]
- 231.Zhang Z., Zhang J.S. Driver Fatigue Detection Based Intelligent Vehicle Control; Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06); Hong Kong, China. 20–24 August 2006; pp. 1262–1265. [Google Scholar]
- 232.Wenhui D., Xiaojuan W. Fatigue detection based on the distance of eyelid; Proceedings of the 2005 IEEE International Workshop on VLSI Design and Video Technology; Suzhou, China. 28–30 May 2005; pp. 365–368. [Google Scholar]
- 233.Lalonde M., Byrns D., Gagnon L., Teasdale N., Laurendeau D. Real-time eye blink detection with GPU-based SIFT tracking; Proceedings of the Fourth Canadian Conference on Computer and Robot Vision (CRV '07); Montreal, QC, Canada. 28–30 May 2007; pp. 481–487. [Google Scholar]
- 234.Batista J. A Drowsiness and Point of Attention Monitoring System for Driver Vigilance; Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference; Seattle, WA, USA. 30 September–3 October 2007; pp. 702–708. [Google Scholar]
- 235.Audi|Luxury Sedans, SUVs, Convertibles, Electric Vehicles & More. [(accessed on 2 October 2019)]; Available online: https://www.audiusa.com.
- 236.Bayerische Motoren Werke AG. The International BMW Website|BMW.com. [(accessed on 2 October 2019)]; Available online: https://www.bmw.com/en/index.html.
- 237.National Highway Traffic Safety Administration. Crash Factors in Intersection-Related Crashes: An On-Scene Perspective. National Center for Statistics and Analysis, National Highway Traffic Safety Administration; Washington, DC, USA: 2010. Report No. 811366. [Google Scholar]
- 238.Ford. Ford–New Cars, Trucks, SUVs, Crossovers & Hybrids|Vehicles Built Just for You|Ford.com. [(accessed on 2 October 2019)]; Available online: https://www.ford.com/
- 239.Mercedes-Benz International News, Pictures, Videos & Livestreams. [(accessed on 2 October 2019)]; Available online: https://www.mercedes-benz.com/content/com/en.
- 240.New Cars, Trucks, SUVs & Hybrids|Toyota Official Site. [(accessed on 2 October 2019)]; Available online: https://www.toyota.com.
- 241.National Highway Traffic Safety Administration. Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety. Department of Transportation; Washington, DC, USA: 2016. [Google Scholar]
- 242.Lee J.D. Dynamics of Driver Distraction: The process of engaging and disengaging. Ann. Adv. Automot. Med. 2014;58:24–32. [PMC free article] [PubMed] [Google Scholar]
- 243.Fridman L., Brown D.E., Glazer M., Angell W., Dodd S., Jenik B., Terwilliger J., Patsekin A., Kindelsberger J., Ding L. MIT advanced vehicle technology study: Large-scale naturalistic driving study of driver behavior and interaction with automation. IEEE Access. 2019;7:102021–102038. doi: 10.1109/ACCESS.2019.2926040. [DOI] [Google Scholar]
- 244.Su D., Li Y., Chen H. Toward Precise Gaze Estimation for Mobile Head-Mounted Gaze Tracking Systems. IEEE Trans. Ind. Inform. 2019;15:2660–2672. doi: 10.1109/TII.2018.2867952. [DOI] [Google Scholar]