| Study | Year | Application | Objective | Eye-tracking features | Participants | Eye-tracker | Classifier(s) | Accuracy |
|---|---|---|---|---|---|---|---|---|
| Cao et al. (2016) | 2016 | Intention recognition | To examine whether pupil variation has a relevant impact on the activation judgment of an endoscopic manipulator | Pupil size, velocity of eye rotation | 12 (10 males, 2 females) | Tobii 1750 | SVM, PNN | 88.6% |
| Ahmed and Noble (2016) | 2016 | Image classification | To classify head, abdominal, and femoral image frames acquired from 2-D B-mode ultrasound scans | Fixations | 10 | EyeTribe (30 Hz) | Bag-of-words model | 85–89% |
| Zhang and Juhola (2016) | 2017 | Biometric identification | To study biometric recognition as a multi-class classification problem and biometric authentication as a binary classification problem | Saccades | 109 | EyeLink (SR Research) | SVM, LDA, RBF, MLP | 80–90% |
| Zhou et al. (2017) | 2017 | Image classification | To propose a two-stage feature-selection approach for image classification that considers human factors and leverages the importance of eye-tracking data | Fixations, ROI | - | Tobii X120 | SVM | 94.21% |
| Borys et al. (2017) | 2017 | User performance classification in the RFFT | To verify whether eye-tracking data combined with machine learning can identify user performance in the RFFT | Fixations, saccades, blinks, pupil size | 61 | Tobii Pro TX300 | Quadratic discriminant analysis | 78.7% |
| Karessli et al. (2017) | 2017 | Image classification | To propose an approach that uses gaze data for zero-shot image classification | Gaze point | 5 | Tobii TX300 (300 Hz) | SVM | 78.2% |
| Labibah et al. (2018) | 2018 | Lie detection | To build a lie detector based on the analysis of pupil changes and eye movements, using image processing and a decision-tree algorithm | Pupil diameter, eye movements | 40 | Computer camera | Decision tree | 95% |
| Qi et al. (2018) | 2018 | Material classification | To investigate how humans interpret material images and whether eye-fixation information improves the efficiency of material recognition | Fixation points, gaze paths | 8 | Eye-tracker | CNN | 85.9% |
| Singh et al. (2018) | 2018 | Reading pattern classification | To analyze the reading patterns of inspectors via eye-tracking and assess their ability to detect specific types of faults | Fixations, saccades | 39 | EyeLink 1000 | NB, MNB, RF, SGD, ensemble, decision trees, lazy network | 79.3–94% |
| Lagodzinski et al. (2018) | 2018 | Cognitive activity recognition | To show that eye-movement analysis is effective for behavior detection because of its close connection with cognitive activities | EOG, accelerometer data | 100 | JINS MEME EOG-based eye-tracker | SVM | 99.3% |
| Bozkir et al. (2019) | 2019 | Cognitive load classification | To propose a scheme for detecting drivers' cognitive load in safety-critical situations using eye data in VR | Pupil diameter | 16 | Pupil Labs | SVM, KNN, RF, decision trees | 80% |
| Orlosky et al. (2019) | 2019 | User understanding recognition | To recognize a user's vocabulary understanding in AR/VR learning interfaces using eye-tracking | Pupil size | 16 | Pupil Labs Dev IR camera | SVM | 62–75% |
| Sargezeh et al. (2019) | 2019 | Gender classification | To examine eye-movement parameters to explore gender differences in gaze patterns while viewing indoor images, and to classify viewers into two subgroups | Saccade amplitude, number of saccades, fixation duration, spatial density, scan path, RFDSD | 45 (25 males, 20 females) | EyeLink 1000 Plus | SVM | 84.4% |
| Tamuly et al. (2019) | 2019 | Image classification | To develop a system that classifies images into three categories from extracted eye features | Fixation count, average fixation duration, fixation frequency, saccade count, saccade frequency, total saccade duration, total saccade velocity | 25 | SMI eye-tracker | KNN, NB, decision trees | 57.6% |
| Luo et al. (2019) | 2019 | Object detection | To develop a framework that extracts high-level eye features from the output of a low-cost remote eye-tracker for object detection | Fixation length, radius of fixation, number of time-adjacent clusters | 15 (6 males, 9 females) | Tobii Eye Tracker 4C | SVM | 97.85% |
| Startsev and Dorr (2019) | 2019 | ASD classification | To propose a fully automated framework that labels an individual's viewing behavior as likely associated with either ASD or typical development, based on scan paths and analytically expected salience | Fixations, scan path | 14 | Tobii T120 | RF | 76.9% AUC |
| Zhu et al. (2019) | 2019 | Depression recognition | To propose a depression detection method using a CBEM and compare its accuracy with traditional classifiers | Fixation, saccade, pupil size, dwell time | 36 | EyeLink 1000 | CBEM | 82.5% |
| Vidyapu et al. (2019) | 2019 | Attention prediction | To present an approach for predicting user attention on webpage images | Fixations | 42 (21 males, 21 females) | Computer webcam | SVM | 67.49% |
| Kacur et al. (2019) | 2019 | Schizophrenia detection | To present a method for detecting schizophrenia using the Rorschach inkblot test and eye-tracking | Gaze position | 44 | Tobii X2-60 | KNN | 62–75% |
| Yoo et al. (2019) | 2019 | Gaze-writing classification | To propose a hands-free gaze-writing entry method that recognizes numeric gaze-writing | Gaze position | 10 | Tobii Pro X2-30 | CNN | 99.21% |
| Roy et al. (2017) | 2020 | Image identification | To develop a cognitive model for ambiguous image identification | Eye fixations, fixation duration, pupil diameter, polar moments, moments of inertia | 24 (all males) | Tobii Pro X2-30 | LDA, QDA, SVM, KNN, decision trees, bagged trees | ~90% |
| Guo et al. (2021) | 2021 | Workload estimation | To investigate the use of eye-tracking for workload estimation and performance evaluation in space teleoperation | Fixation, saccade, blink, gaze, pupillary response | 10 (8 males, 2 females) | Pupil Labs Core | SVM (RBF), evaluated with a LOSO protocol | 49.32% |
| Saab et al. (2021) | 2021 | Image classification | To propose an observational supervision approach for medical image classification using gaze features and deep learning | Gaze data | - | Tobii Pro Nano | CNN | 84.5% |
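The most common recipe in the studies above is a kernel SVM trained on hand-crafted gaze features (fixation counts, durations, pupil diameter). None of the individual pipelines are reproduced here, but a minimal sketch of that general pattern, using entirely synthetic data and illustrative feature values (scikit-learn assumed), looks like this:

```python
# Illustrative sketch only: binary classification of synthetic "eye-tracking"
# feature vectors (fixation count, mean fixation duration in ms, mean pupil
# diameter in mm) with an RBF-kernel SVM. Feature distributions are invented.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# 50 trials per class; class 1 has slightly longer fixations and larger pupils.
X0 = rng.normal([20, 250, 3.0], [5, 40, 0.3], size=(50, 3))
X1 = rng.normal([25, 300, 3.5], [5, 40, 0.3], size=(50, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Standardize features (their scales differ by orders of magnitude),
# then fit an RBF SVM and report 5-fold cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

The standardization step matters in practice: fixation durations in milliseconds dwarf pupil diameters in millimeters, and an unscaled RBF kernel would be dominated by the largest feature.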