Abstract
Eye gaze tracking has been used to study the influence of visual stimuli on consumer behavior and attentional processes. Eye gaze tracking techniques have made substantial contributions to advertisement design, human computer interaction, virtual reality and disease diagnosis. Eye gaze estimation is considered critical for predicting human attention, and hence indispensable for better understanding human activities. In this paper, Latent Semantic Analysis is used to develop an information model for identifying emerging research trends within eye gaze estimation techniques. An exhaustive collection of 423 titles and abstracts of research papers published during 2005–2018 was used. Five major research areas and ten research trends were identified based upon this study.
Keywords: Computer science, Latent semantic analysis, Eye gaze tracking, Eye gaze tracking applications, Research trends
1. Introduction
Eye gaze trackers (EGTs) are smart devices used to estimate the direction of eye gaze. Eye gaze is defined as the line of sight of an individual and represents the focus of attention. Initially, EGTs were used in neurology, ophthalmology and psychology to study oculomotor patterns with respect to different cognitive states of an individual [1, 2]. Recently, EGTs have been used to decide the appropriate marketing mix, in advertisement design, human computer interaction, virtual reality and disease diagnosis, and to study human behavior [3, 4, 5, 6, 7]. Tracking the gaze of an eye is used to study the influence of visual stimuli on consumer behavior and attentional processes [3, 8, 9]. EGTs have also been used extensively in web page design to predict the salient regions of web pages [10]. EGTs can be broadly classified into two categories, namely intrusive and non-intrusive techniques. Intrusive techniques make use of electrodes, contact lenses and head mounted EGTs to record eye gaze, whereas non-intrusive techniques rely upon high precision cameras to capture eye images and gaze direction [11]. Commercially available EGTs are quite expensive, making them economically unavailable to most users and researchers [12, 13]. Manuscripts pertaining to eye gaze tracking and computational models for measuring the gaze of an eye have been published by many researchers. Studies providing an insight into the contemporary status of eye gaze research and its outcomes are available in renowned research databases. Literature considering algorithms, system configurations, user conditions and performance issues of existing gaze tracking systems has also been reviewed by many researchers. Most of the available reviews were done manually and may suffer from opinion bias resulting from the experience, expertise and analytical skills of the reviewer [11, 14].
Semi-automated topic modelling algorithms, imbued with established methods for conducting a systematic review, could be an alternative that restricts this opinion bias to a great extent. Further, they may also help in identifying research trends within eye gaze tracking research [15, 16, 17]. Semi-automated review methods for finding core research trends have been adopted by researchers in many domains [18, 19, 20, 21]. To the best of our knowledge, no empirical study of EGT research trends is available so far [11, 14]. In this paper a quantitative method called Latent Semantic Analysis (LSA) is used for open ended text analysis of literature associated with EGTs and their applications [22]. LSA is a fully automatic mathematical technique for extracting and inferring meaningful relations from the contextual usage of words [22]. This method provides textual meaning to identified topic solutions using an automated approach, thereby eliminating human bias [15, 20]. The primary aim of this work is to gain a realistic understanding of prominent research trends and EGT applications as promulgated by EGT researchers [22]. Subsequently, the relationships amongst them were investigated to achieve the following objectives:
• To find the leading researchers and prominent publications in eye gaze tracking.
• To find the most prominent research topics associated with eye gaze tracking.
• To anticipate future research directions in applications of eye gaze tracking.
The rest of the paper is structured as follows. Section two elaborates the methodology used for finding research trends and prominent researchers. Section three details the results obtained after implementation. Section four provides a discussion of the research objectives and probable future directions. Section five presents the limitations of the study, and the last section draws conclusions from the study.
2. Methodology
Bibliographic databases, namely IEEE Xplore, ScienceDirect, DBLP computer science bibliography, ArXiv, Google Scholar, Mendeley, Directory of Open Access Journals (DOAJ), Association for Computing Machinery Digital Library (ACM DL), SPIE, Journal of Eye Movement Research, Hindawi and CiteSeerX, were consulted to collect the literature dataset. Further, the Taylor and Francis, Wiley and MDPI bibliographic databases were also searched to locate any suitable literature. EGT manuscripts published from 2005 to 2018 were considered, based upon the keywords and stipulations discussed in section 2.1.
2.1. Data acquisition
The aforesaid databases were manually searched to find suitable literature. Articles were selected using “Eye Gaze” OR “Gaze Points” OR “Eye gaze trackers (EGTs)” OR “Eye Gaze Technology” OR “Eye and gaze” OR “Limbus tracking” OR “Video-oculography” OR “Pupil tracking” OR “Purkinje image” OR “Eye gaze applications” as search keywords. Mendeley was used for collection, screening, selection and corpus preparation. The accumulated literature was then manually reviewed in Mendeley to filter out articles based upon the inclusion and exclusion criteria mentioned in Table 1. Articles that were out of scope or duplicated were eliminated, as detailed in Table 2. The dataset was later converted into comma-separated values (CSV) using an export filter. The exported file included titles, abstracts and year of publication. The year wise distribution of publications is presented in Figure 1. The previously reported research in EGTs, being subjective and qualitative, might not offer an insight into the top researchers and major contributing journals. Based on the number of occurrences within the dataset, the top ten researchers with the most publications on EGTs during the period 2005–2018 and the top fifteen journals publishing articles related to EGTs are presented in Tables 3 and 4, respectively.
Table 1.
Inclusion and Exclusion criteria.
| S.No | Inclusion criteria | Exclusion criteria |
|---|---|---|
| 1 | The articles must either be published in the proceedings of reputed conferences or journals during the period 2005–2018. | Articles not directly relevant to eye gaze tracking techniques. |
| 2 | The articles must focus on eye gaze tracking and its applications. | Articles published before 2005 or not reporting on the study and development of EGTs. |
Table 2.
Paper count during data pre-processing.
| S. No | Steps | Paper Count |
|---|---|---|
| 1 | Online Database Search | 1041 |
| 2 | After filtering for keywords | 763 |
| 3 | After elimination of duplicate articles | 649 |
| 4 | After elimination of non-relevant articles | 423 |
Figure 1.
Year wise distribution of publications during 2005–2018.
Table 3.
Top 10 researchers in Eye gaze research.
| S.No | Researcher | Paper Count |
|---|---|---|
| 1 | Yusuke Sugano | 31 |
| 2 | Roberto Valenti | 24 |
| 3 | Qiang Ji | 19 |
| 4 | Andrew Duchowski | 16 |
| 5 | Soussan Djamasbi | 14 |
| 6 | Dan Witzner Hansen | 14 |
| 7 | Carlos Hitoshi Morimoto | 12 |
| 8 | Takashi Nagamatsu | 9 |
| 9 | Zhiwei Zhu | 7 |
| 10 | Xucong Zhang | 7 |
Table 4.
Top Journals publishing research on Eye gaze.
| S.No | Journal Name | Number of Publications |
|---|---|---|
| 1 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 29 |
| 2 | IEEE Transactions on Human Machine Systems | 18 |
| 3 | Pattern Recognition | 15 |
| 4 | IEEE Transactions on Image Processing | 14 |
| 5 | ACM Transactions on Graphics | 12 |
| 6 | Multimedia tools and applications | 11 |
| 7 | IEEE Transactions on Biomedical Engineering | 11 |
| 8 | Computer Vision and Image Understanding | 11 |
| 9 | International Journal of Computer Vision | 10 |
| 10 | Developmental Cognitive Neuroscience | 9 |
| 11 | Cognitive, Affective and Behavioural Neuroscience | 9 |
| 12 | Journal of Vision | 7 |
| 13 | Journal of Eye Movement Research | 6 |
| 14 | Expert systems with applications | 3 |
| 15 | Frontiers in Human Neuroscience | 3 |
2.2. Application of Latent Semantic Analysis
An information model can be defined as a representation of the concepts, operations, rules and relationships between data and semantics for a chosen domain. An information model offers an organized structure of domain information requirements, which is not only stable but can also be shared. Researchers have proposed and experimented with numerous information modelling techniques such as Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), Probabilistic Latent Semantic Analysis (PLSA) and Correlated Topic Modelling (CTM). Choosing an appropriate information model was a real challenge in executing this study. A thorough comparison of modelling techniques revealed that LSA seemed most appropriate for carrying this study forward. The comparison of information modelling techniques is summarized in Table 5. Latent semantic analysis (LSA) is a natural language processing approach which can identify research trends within a large literature dataset [22, 23]. LSA can not only summarize any text dataset but can also search, organize and understand it in an automated fashion. It is capable of examining prevalent relationships between documents and terms within a dataset to reveal associated concepts and trends. LSA is an unsupervised learning approach based upon Singular Value Decomposition (SVD). SVD creates a low dimensional space to reveal topics and relationships by comparing documents [18, 22]. Moreover, it is an established approach for identifying research trends prevalent within large literature datasets [17, 23]. Recommendations for the application of the methodology were taken from Evangelopoulos et al. [17]. Since the aim of the study was to explore the latent structure of the dataset, a factor analysis extension was applied to LSA in association with a fast truncated incremental stochastic single pass SVD algorithm. Document loading and term loading matrices are the two primary outcomes of LSA.
The term loading matrix includes trending topics with the highly loaded terms associated with them. The document loading matrix includes trending topics with the highly loaded documents associated with them. High loading values are an indication of higher familiarity with a topic [18]. The corpus prepared as described in section 2.1 was fed into the LSA model to uncover the latent semantic structure of the dataset. The five topic solution, exemplifying five latent classes, its associated keywords and their labels are exhibited in Table 6. It also includes the highly loaded terms obtained from the LSA driven empirical analysis of the literature dataset. The detailed procedure followed is discussed in the following sections.
Table 5.
Comparison of information modelling techniques.
| Technique and Reference | LSA [22, 24] | LDA [25, 26] | PLSA [27, 28] | CTM [29, 30] |
|---|---|---|---|---|
| Characteristics | Quick and efficient | Suitable for short length documents; peculiar and distinct words within topic | Generative model, different words generated from different topics; captures unique words | Allows word occurrences in more than one topic |
| Limitations | Difficult to decide on the number of topics (in this study, dimensionality reduction has been performed using SVD, which factorizes any matrix into the product of three matrices, U*S*V, providing topic coherence useful in determining the number of topics); results are difficult to interpret and analyze | Difficult to identify relations among topics | Not suitable for lengthy documents; no probabilistic model at the level of documents; overfitting | Requires complex computations; results are difficult to interpret and analyze |
| Polysemy (same word with different meanings) | Handles polysemy partially | Does not handle | Handles partially | Does not handle |
| Synonymy (different words with same meaning) | Handles synonymy | Does not handle | Handles partially | Does not handle |
| Applications | Automatic essay grading; spam filtering | Anti-phishing; word sense disambiguation | Image retrieval; classification | Image retrieval; query classification |
Table 6.
Five topic solution along with high loaded terms.
| Topic Id | Topic Label | High loading terms |
|---|---|---|
| T5.1 | Real time head pose estimation | based pose estimation real time head robust intrusive illumination localization |
| T5.2 | Corneal fixation for pupil monitoring | accurate pupil recognition active feature analyse predict corneal fixation monitoring |
| T5.3 | Movement tracking and detection | computer movement video analyses system detection attention interface reliable measure |
| T5.4 | Commercial use of eye gaze tracking | technique images web advertise intelligence commercial computing motion interface heatmap |
| T5.5 | Interactive human computer applications | system active human calibration pupil computer interactive applications disable cognitive |
2.2.1. Pre-processing and term filtering
The first step towards LSA corpus preparation was pre-processing and term filtering. Characters, words and sentences discovered during pre-processing act as tokens for further processing by LSA. This step not only reduces the dictionary size but also improves the efficacy of LSA and the efficiency of the text mining approach [31]. As per the recommendations of Evangelopoulos et al. [17], names, numbers, acronyms, abbreviations and punctuation were removed from the corpus. The following steps were followed for corpus preparation in Python using NLTK (Natural Language Toolkit):
1) Titles and abstracts of each document within the corpus were tokenized.
2) Tokens were converted into lowercase letters.
3) Punctuation such as periods, commas, question marks and apostrophes was eliminated.
4) Numbers were filtered out, retaining textual terms only.
5) The corpus was refined by removing English stopwords and the common search keywords (“eye gaze”, “gaze points”, “eye gaze trackers (EGTs)”, “eye gaze technology” or “eye and gaze”) from all the publications.
6) N-character filtering was done to filter out words with fewer than three characters.
7) The corpus was further refined by removing words that appear only once in the whole corpus.
8) Transformations were created using the TF-IDF model.
9) The corpora were then prepared for LSI modelling.
10) The highest-appearing tokens were selected based on the topic solution.
Initially the dataset had 2640 tokens. After pre-processing, the count was reduced to 267 tokens. 423 sparse vectors were created with 267 tokens. The dataset of 423 documents was thus converted into a vector space where rows represent the 267 terms or dimensions and the 423 columns each correspond to an article. Each document was subsequently converted into a bag of words. This mapping allocates an integer identity (ID) to the terms within the bag and also counts their occurrences within each document, producing a dictionary. This dictionary is further used to create a weighted matrix as shown in Figure 2.
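As an illustration, the pre-processing and dictionary-building steps above can be sketched in plain Python. This is a minimal sketch: the study uses NLTK's tokenizer and full English stopword list, whereas the stopword set and sample titles below are illustrative only.

```python
import re
from collections import Counter

# Illustrative stopword set; the paper removes NLTK's full English stopword
# list plus the search keywords ("eye gaze", "gaze points", ...).
STOPWORDS = {"the", "a", "an", "of", "for", "and", "in", "is", "eye",
             "gaze", "points", "trackers", "egts", "technology"}

def preprocess(text):
    """Tokenize, lowercase, strip punctuation and numbers, drop stopwords
    and words shorter than three characters (steps 1-6 above)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS and len(t) >= 3]

def build_dictionary(docs):
    """Map each surviving term to an integer ID, keeping only terms that
    appear in the corpus more than once (step 7)."""
    counts = Counter(t for doc in docs for t in doc)
    vocab = sorted(t for t, c in counts.items() if c > 1)
    return {term: i for i, term in enumerate(vocab)}

docs = [preprocess("Real-time head pose estimation for eye gaze tracking"),
        preprocess("Robust head pose tracking under illumination change")]
dictionary = build_dictionary(docs)
# Bag of words: (term ID -> occurrence count) per document
bows = [Counter(dictionary[t] for t in doc if t in dictionary) for doc in docs]
```

In the study, the resulting bags of words feed the TF-IDF transformation of step 8 before LSI modelling.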
Figure 2.
Two dimensional representation of terms (rows) and documents (columns) for a particular topic solution.
2.2.2. Term frequency and inverse document frequency
A TF-IDF weighting scheme was deployed to identify the significance of a given entity in comparison to other entities (terms or documents) within the corpus. It is used as a weighting factor that increases the weight of a term proportionally to the number of its occurrences within a document, offset by the frequency of the term across the corpus. TF-IDF is helpful in adjusting the weights of words [18]. The approach followed in this paper is presented in Eq. (1), wherein the terminologies used are defined as:

W(i, j) = TF-IDF weight of term i in document j.
tf(i, j) = term frequency of term i in document j.
df(i) = number of documents in the corpus containing term i.
nd = number of documents in the corpus.

W(i, j) = tf(i, j) × log(nd / df(i)) (1)

The term frequency tf(i, j) in Eq. (2) measures the number of occurrences of a term in a document:

tf(i, j) = number of occurrences of term i in document j (2)

Using the weighting scheme of Eq. (1), a 267*423 term-document weighted matrix is created for the ith term in the jth document of the nd documents in the corpus and is used in all the identified topic solutions.
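A minimal sketch of this weighting, assuming raw term counts for tf and the natural logarithm (the paper does not specify the log base), could look as follows; the toy documents are illustrative only:

```python
import math

def tfidf_matrix(docs):
    """Build a term-document TF-IDF matrix with
    w(i, j) = tf(i, j) * log(nd / df(i)), where tf is the raw count of
    term i in document j, df(i) the number of documents containing term i,
    and nd the corpus size."""
    nd = len(docs)
    terms = sorted({t for doc in docs for t in doc})
    df = {t: sum(t in doc for doc in docs) for t in terms}
    return terms, [[doc.count(t) * math.log(nd / df[t]) for doc in docs]
                   for t in terms]

docs = [["head", "pose", "estimation"],
        ["pupil", "tracking", "pupil"],
        ["head", "tracking"]]
terms, W = tfidf_matrix(docs)
# Terms absent from a document get weight 0; terms confined to few
# documents receive the largest weights.
```

In the study this computation produces the 267*423 weighted matrix that is passed to SVD.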
2.2.3. Rank lowering using Singular Value Decomposition
The TF-IDF weighted matrix was provided to SVD to perform rank lowering. The SVD model factorizes the matrix X into three components [22, 32]. The following terminology is used in Eqs. (3) and (4):

U: Initial rotation.
∑: Scaling.
V: Final rotation.

X = U∑Vt (3)

Xk = Uk∑kVkt (4)

where Xk is the rank-k approximation of X obtained by retaining only the k largest singular values.
The mathematical expressions XXt and XtX provide the term loading and document loading respectively. ∑ represents the weights of the topics in descending order. The maximum number of topics generated equals the number of documents in the corpus. For extracting a few topics (k), the topmost k singular values were taken from the matrix [17, 33]. The procedure adopted for rank lowering and SVD is explained with the following example:
1) Text is represented as a matrix of the form X = U∑Vt such that each row stands for a unique word and each column represents a unique document. Each cell represents the frequency with which the word appears in a document, as shown in Table 7.
2) A preliminary transformation is applied wherein weights are assigned describing the importance of a word in a particular document with respect to all other documents. The dimension reduction step structures the matrices in such a way that words that did not appear originally in some contexts now do appear, at least fractionally, as shown in Table 8.
3) SVD decomposes the original matrix into the product of three other matrices. One matrix describes the original row entities as shown in Table 8, another describes the original column entities as shown in Table 10, and the third matrix contains the scaling values on its diagonal as shown in Table 9. The three components are matrix multiplied to reconstruct the original matrix as shown in Table 11.
4) The complete SVD transformation of the matrix is shown in Table 11: the two dimensional reconstruction of the original matrix after applying the LSA transformations, which induces the similarity relations between the documents.
5) The term loading matrix XXt represents the terms loaded for a particular topic solution. Each cell contains the term weight for a particular topic, giving more weightage to that topic solution as per the specified threshold value, as described in Table 12.
6) The document loading matrix XtX represents the documents loaded for a particular topic solution. Each cell contains the document weight for a particular topic, giving more weightage to that topic in terms of the number of documents loaded for it as per the specified threshold value, as shown in Table 13.
Table 7.
Terms frequency per document matrix (X).
| Terms/Docs | Doc1 | Doc2 | Doc3 | : | Doc423 |
|---|---|---|---|---|---|
| movement | 0 | 1 | 1 | : | 0 |
| images | 1 | 0 | 1 | : | 0 |
| head | 1 | 0 | 1 | : | 1 |
Table 8.
Initial rotation (U).
| Terms/topics | Topic1 | Topic2 | Topic3 | : | Topic10 |
|---|---|---|---|---|---|
| movement | 0.031 | 0.432 | -0.154 | : | 0.021 |
| images | -0.346 | -0.119 | 0.02 | : | -0.102 |
| head | 0.181 | 0.135 | 0.399 | : | -0.034 |
Table 10.
Final rotation (V).
| Documents/topics | Topic1 | Topic2 | Topic3 | : | Topic10 |
|---|---|---|---|---|---|
| Doc1 | 0.2 | 0.21 | 0.06 | : | 0.14 |
| Doc2 | 0.06 | 0.17 | -0.13 | : | -0.23 |
| : | : | : | : | : | : |
| Doc423 | -0.11 | -0.3 | 0.21 | : | 0.07 |
Table 9.
Scaling matrix (∑).
| Topics/topics | Topic1 | Topic2 | Topic3 | : | Topic10 |
|---|---|---|---|---|---|
| Topic1 | 2.432 | : | |||
| Topic2 | 2.167 | : | |||
| Topic3 | 2.043 | : | |||
| : | : | : | : | : | : |
| Topic10 | : | 0.721 |
Table 11.
Terms-documents matrix after SVD transformation (Xt).
| Terms/documents | Doc1 | Doc2 | Doc3 | : | Doc423 |
|---|---|---|---|---|---|
| movement | 0.16 | -0.4 | 0.38 | : | 0.47 |
| images | -0.14 | 0.37 | 0.33 | : | 0.4 |
| head | 0.15 | 0.51 | -0.36 | : | 0.41 |
Table 12.
Term loading (XXt).
| Topics/terms | Movement | Images | Head | Pose |
|---|---|---|---|---|
| Topic1 | 0.124 | 0.091 | 0.327 | 0.385 |
| Topic2 | 0.025 | 0.112 | 0.125 | 0.014 |
| : | : | : | : | : |
| Topic10 | 0.002 | 0.182 | 0.109 | 0.051 |
Table 13.
Document Loading (X tX).
| Documents/topics | Topic1 | Topic2 | Topic3 | : | Topic10 |
|---|---|---|---|---|---|
| Doc1 | 0.182 | 0.092 | 0.182 | : | 0.011 |
| Doc2 | 0.019 | 0.002 | 0.129 | : | 0.118 |
| : | : | : | : | : | : |
| Doc423 | 0.117 | 0.009 | 0.114 | : | 0.091 |
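The decomposition and loading computations illustrated in Tables 7, 8, 9, 10, 11, 12, and 13 can be sketched with numpy. This is a toy 3-term, 4-document matrix for illustration; the actual study applies the same steps to the 267*423 TF-IDF matrix.

```python
import numpy as np

# Toy term-document matrix X (rows = terms, columns = documents),
# analogous to Table 7.
X = np.array([[0., 1., 1., 0.],   # movement
              [1., 0., 1., 0.],   # images
              [1., 0., 1., 1.]])  # head

# Full SVD, X = U @ diag(s) @ Vt: U is the initial rotation (Table 8),
# s the scaling values in descending order (Table 9), and Vt the final
# rotation (Table 10).
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rank lowering: keep only the top k singular values (topics) and
# reconstruct, as in Table 11.
k = 2
Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Term loading (X Xt) and document loading (Xt X) on the reduced matrix,
# analogous to Tables 12 and 13.
term_loading = Xk @ Xk.T
doc_loading = Xk.T @ Xk
```

The truncation to k components is what removes noise and lets originally absent word-context pairs appear "at least fractionally" in the reconstruction.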
2.2.4. Selecting optimal topic solutions
Optimal topic solutions are attained through dimensionality reduction. Dimensionality reduction is the process of selecting the k largest singular values from the singular matrix generated by SVD. Selection of an optimal dimension has been a key challenge associated with this process, as it requires extensive understanding and numerous iterations to reach the optimal value [17]. As recommended by Deerwester et al. [22], the optimal number of topics for a corpus of 423 documents is approximately 10, which may suffice to predict trends within eye gaze research. In addition, three and five topic solutions were considered optimal to express core research areas.
2.2.5. Selecting threshold values
The term loading and document loading matrices indicate the corresponding weights for the uncovered topics. Every cell within the term and document loading matrices has a loading value corresponding to a term/document (row) and a topic (column). Values in the loading matrices can be both negative and positive. Varimax rotation was performed to interpret the results obtained from the loading matrices; this increases the loading for one topic in comparison to the others [34]. The number of documents loaded for a particular topic defines the importance of that topic. A heuristic empirical tail distribution approach was applied to differentiate between significant and insignificant loadings [23]. For example, for the ten topic solution, the loading values of the 423 documents in each of the ten topics were transformed into a one dimensional matrix (vector) of 4230 elements. To obtain the threshold value, the vector was sorted in descending order, retaining the top 1/423th of the high loading values. After performing tail distribution calculations, the threshold values obtained for the three, five and ten topic solutions were 0.196, 0.213 and 0.227 respectively. Therefore, documents having loading values greater than the specified threshold were considered significant for the topic.
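The tail-distribution thresholding might be sketched as follows. The loading matrix here is hypothetical, and since the paper does not fully specify the tail calculation, the cutoff rule (keep the top 1/n_docs fraction of absolute loadings) is an assumption.

```python
def tail_threshold(loading_matrix, n_docs):
    """Flatten the document-loading matrix, sort the absolute loading
    values in descending order, and take the value at the 1/n_docs tail
    as the significance threshold (a sketch of the heuristic in [23])."""
    flat = sorted((abs(v) for row in loading_matrix for v in row),
                  reverse=True)
    cutoff = max(1, len(flat) // n_docs)  # retain the top 1/n_docs fraction
    return flat[cutoff - 1]

# Hypothetical 4-document, 2-topic loading matrix for illustration.
loadings = [[0.35, 0.05], [0.10, 0.28], [0.02, 0.01], [0.22, 0.04]]
threshold = tail_threshold(loadings, n_docs=4)
significant = [(d, t) for d, row in enumerate(loadings)
               for t, v in enumerate(row) if abs(v) >= threshold]
```

Document-topic pairs whose loadings clear the threshold are the ones counted as significant for a topic.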
2.2.6. Labelling of topic
After sorting the loading values in descending order in both the term loading and document loading matrices, an iterative approach was followed for labelling topics based on the highly loaded terms in the term loading matrix for each topic. The highly loaded values were grouped together based on their occurrences and weightage in the term loading matrix to create a label for each topic, as shown in Tables 14, 15, and 16. The topic labelling was done manually and is subject to human bias, as topical coherence varied significantly. Owing to limited computing resources, topic solutions were obtained from only the titles and abstracts of the articles primarily focusing on eye gaze, instead of the complete articles.
Table 14.
Core Eye gaze research areas for three and five topic solution.
| Topic no | Topic label | 2005–2018 | 2005–2011 | 2012–2018 |
|---|---|---|---|---|
| T3.1 | Real time head pose estimation | 127 | 31 | 96 |
| T3.2 | Movement tracking and detection | 93 | 28 | 65 |
| T3.3 | Appearance based estimation | 12 | 3 | 9 |
| T5.1 | Real time head pose estimation | 103 | 41 | 62 |
| T5.2 | Corneal fixation for pupil monitoring | 17 | 5 | 12 |
| T5.3 | Movement tracking and detection | 69 | 23 | 46 |
| T5.4 | Commercial use of pattern analysis | 19 | 5 | 14 |
| T5.5 | Interactive human computer applications | 17 | 5 | 12 |
Table 15.
Five topic solution with high-loading research papers.
| Topic No. | Topic Labels | High-loading Papers | Loading Values |
|---|---|---|---|
| T5.1 | Real time head pose estimation | [40] | 0.586 |
| [41] | 0.545 | ||
| [42] | 0.524 | ||
| [43] | 0.514 | ||
| T5.2 | Corneal fixation for pupil monitoring | [44] | 0.631 |
| [45] | 0.524 | ||
| [46] | 0.501 | ||
| [47] | 0.463 | ||
| T5.3 | Movement tracking and detection | [48] | 0.431 |
| [49] | 0.327 | ||
| [50] | 0.325 | ||
| [51] | 0.287 | ||
| T5.4 | Commercial use of eye gaze tracking | [52] | 0.497 |
| [53] | 0.369 | ||
| [54] | 0.360 | ||
| [55] | 0.325 | ||
| T5.5 | Interactive human computer applications | [56] | 0.533 |
| [57] | 0.408 | ||
| [58] | 0.369 | ||
| [59] | 0.339 |
Table 16.
Research trends in Eye Gaze Tracking.
| Topic No | Topic Label | 2005–2018 | 2005–2011 | 2012–2018 |
|---|---|---|---|---|
| T10.1 | Real time head pose estimation | 127 | 39 | 88 |
| T10.2 | Appearance based gaze estimation | 24 | 9 | 15 |
| T10.3 | Calibration methods | 8 | 2 | 6 |
| T10.4 | Neural networks for gaze recognition | 51 | 19 | 32 |
| T10.5 | Human computer interaction for disabled | 29 | 6 | 23 |
| T10.6 | Interdisciplinary use of eye gaze tracking | 30 | 6 | 24 |
| T10.7 | Cognitive applications | 19 | 5 | 14 |
| T10.8 | Gaze points using oculography | 13 | 4 | 9 |
| T10.9 | Pupil tracking | 18 | 10 | 8 |
| T10.10 | Iris calibration | 5 | 2 | 3 |
3. Results
3.1. Summary of topic solutions
LSA resulted in three, five and ten topic solutions representing core research areas in eye gaze tracking. Topic labels and the number of publications associated with core research areas for three different time periods between 2005 and 2018 are shown in Table 14. Topic solutions are represented as Ti.j, denoting the jth factor of the i-topic solution; for instance, T3.2 represents the second factor of the three topic solution. The number of articles associated with each topic indicates the weight of the respective research area within that particular topic solution. The mapping displayed in Table 17 presents the connections between core research areas and the research trends identified using cross-loading analysis.
Table 17.
Mapping of core Eye gaze research areas and research trends.
| Topic No | Five Topic Labels | Ten Topic no | Ten Topic Labels |
|---|---|---|---|
| T5.1 | Real time head pose estimation | T10.1 | Real time head pose estimation |
| T10.2 | Appearance based gaze estimation | ||
| T5.2 | Corneal fixation for pupil monitoring | T10.3 | Calibration methods |
| T10.9 | Pupil tracking | ||
| T10.10 | Iris calibration | ||
| T5.3 | Movement tracking and detection | T10.8 | Gaze points using oculography |
| T10.4 | Neural networks for gaze recognition | ||
| T5.4 | Commercial use of pattern analysis | T10.6 | Interdisciplinary use of eye gaze tracking |
| T5.5 | Interactive human computer applications | T10.5 | Human computer interaction for disabled |
| T10.7 | Cognitive applications |
3.2. Core research areas associated with eye gaze
The core research areas presented in Table 14 for the three topic solution were “real time head pose estimation”, “movement tracking and detection” and “appearance based estimation”.
The core research areas presented in the five topic solution were “real time head pose estimation”, “corneal fixation for pupil monitoring”, “movement tracking and detection”, “commercial use of pattern analysis” and “interactive human computer applications”. The documents with high loading values for each topic are presented in Table 15. Most of the papers were loaded to one research area, “real time head pose estimation”, in both the three and five topic solutions. Loading counts in the three and five topic solutions may vary, as more research areas emerged in the five topic solution. Another research area that emerges from the results is “movement tracking and detection”, with a loading of 69 papers [14, 35, 36, 37, 38, 39]. Masse et al. estimated the eye gaze between a group of people communicating with each other or with a robot through the correlation between gaze points and head pose movement [36]. Duchowski et al. reviewed eye gaze tracking techniques and their applications [39]. Other literature reviews on the detection and tracking of eye movements have been provided by many researchers [11, 14]. Moreira et al. discussed eye and eyebrow detection using a simple webcam for animation type applications [37]. Corcoran et al. discussed real time detection of gaze in combination with human emotions, useful in gaming applications [38].
3.3. Eye gaze research trends
The ten topic solution revealing research trends is displayed in Table 16, along with the count of highly loaded papers for each topic. Only papers having loading values of 0.227 or greater were considered relevant for the ten topic solution. The ten topic solution emphasized the emerging research trends “real time head pose estimation” (T10.1), “neural networks for gaze recognition” (T10.4) and “interdisciplinary use of eye gaze tracking” (T10.6).
3.3.1. Real time head pose estimation
The major research trend that emerged from the ten topic solution is “real time head pose estimation”, with the maximum loading of 127 papers. Real time head pose estimation plays a major role in EGT applications. Variation in head pose and slight illumination changes may affect the results when estimating the gaze of an eye using EGTs. Researchers have worked upon numerous head pose based gaze estimation algorithms [40, 41, 60, 61]. Since the head pose estimate is initially unknown, some initialization is required for accurate pose tracking and gaze estimation [42, 60, 62, 63].
3.3.2. Neural networks for gaze recognition
Eyes play a major role in understanding human social interactions. In the field of cognitive and behavioral neuroscience, eye gaze processing involving the use of neural networks is used to understand abnormal activities in pathological conditions [64, 65]. Researchers have proposed camera based eye trackers using artificial neural networks to estimate eye gaze; such networks work directly on eye images, based upon which gaze points are estimated [63, 66]. Other researchers applied convolutional neural networks to regression and prediction problems, i.e. human eye fixations and pose estimation [67, 68].
3.3.3. Appearance based gaze estimation
Appearance based gaze estimation analyses a person's eye appearance. It utilizes the natural gaze of the eyes as seen from a commodity camera. One of the biggest advantages of appearance based estimation is that it does not require any special equipment. Appearance based gaze estimation can be done with the help of ordinary cameras; for instance, Sugano et al. proposed a novel method using visual saliency computed from video clips for eye gaze estimation [69]. Adaptive linear regression, neural networks and other learning measures are some of the prominent appearance based eye gaze estimation methods [70, 71]. As learning based methods require labelled training data for appearance based gaze estimation, an alternative method proposed by Wood et al. solves this problem using synthesized eye images [72, 73]. The method proposed by Zhang et al. makes use of multimodal convolutional neural networks, tested on their own dataset, and worked well under varied head movements and different lighting conditions [74].
3.3.4. Pupil tracking
Robust, accurate and real time tracking of the pupil is a key component of online eye gaze estimation [75]. Model based gaze estimation techniques predict eye gaze using a 3D geometric model and camera calibration on the basis of a stereo-vision system [76, 77]. Nagamatsu et al. proposed a stable and fast calibration-free method for gaze point estimation using a 3D eye model [77]. Other model based methods make use of an RGB camera for real time gaze estimation with improved accuracy [78, 79]. K-nearest neighbors, random forest regression and support vector regression are a few of the model based methods used by researchers for the detection of pupil images [80, 81, 82, 83]. Another model based approach, proposed by Xiong et al., considered optical and visual axis deviation using a spherical eye model for eye gaze estimation [84].
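As a toy illustration of the k-nearest-neighbor flavor of these methods (synthetic patches invented for the example, not any cited pipeline), the sketch below predicts a pupil center as the average of the centers of the k most similar training patches.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(train_X, train_y, query, k=5):
    """Predict a pupil center as the mean center of the k nearest patches."""
    dists = np.linalg.norm(train_X - query, axis=1)
    idx = np.argsort(dists)[:k]
    return train_y[idx].mean(axis=0)

# Synthetic 9x9 "eye patches": a bright pupil blob at a known (row, col).
def make_patch(center):
    r, c = np.mgrid[0:9, 0:9]
    return np.exp(-((r - center[0]) ** 2 + (c - center[1]) ** 2) / 4.0).ravel()

train_y = rng.uniform(1, 7, size=(300, 2))            # pupil centers
train_X = np.array([make_patch(y) for y in train_y])  # corresponding patches

true_center = np.array([4.2, 3.1])
est = knn_predict(train_X, train_y, make_patch(true_center))
print("estimated pupil center:", np.round(est, 2))
```

Random forest or support vector regressors plug into the same interface: features in, pupil coordinates out.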
3.3.5. Human computer interaction for disabled
Eye gaze has applications in human computer interaction, wherein input is taken from the gaze of an eye and used to control the computer system on the basis of certain features; the computer then executes commands based on the gaze location on the screen [64, 85, 86, 87]. Another application of EGTs is in simulation, where eye trackers help in analyzing the attention of pilots in realistic situations [88]. The use of eye tracking in e-learning has made it possible to estimate the focus of a learner in real time [7, 89]. Human-robot interaction is another area researchers have focused upon; a portable navigational robot has been shown to work well with the user's eye gaze [90, 91]. Smart homes and smart televisions are other prominently researched applications in which eye gaze enables disabled people to effectively control smart devices [92].
3.3.6. Gaze points using oculography
For eye movement recording, an algorithm has been proposed that reduces the effect of eye blinks while measuring gaze points. Various techniques using oculography have been proposed by researchers to find gaze points [93, 94, 95].
3.3.7. Calibration methods
Eye gaze tracking is the process of finding the location on the screen where exactly the user is looking. In order to find the angle between horizontal and vertical eye movements, system calibration needs to be done for every user to obtain accurate gaze points [96]. The method proposed by Zhu et al. estimates eye gaze without any head movement restriction; the proposed 3D gaze estimation method minimizes the calibration procedure, thereby providing a more accurate gaze tracking solution [97].
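Per-user calibration is commonly implemented by fitting a low-order polynomial from the tracker's raw output to known screen targets. The sketch below uses an invented distortion model as a stand-in for a real tracker and fits such a mapping by least squares over a 3x3 target grid.

```python
import numpy as np

def design(v):
    """Second-order polynomial terms of a raw 2D gaze feature (x, y)."""
    x, y = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

# Calibration: the user fixates a 3x3 grid of known screen targets (pixels).
targets = np.array([(tx, ty) for tx in (100, 500, 900)
                             for ty in (100, 400, 700)], dtype=float)

# Hypothetical raw tracker output: a mild nonlinear, noisy distortion.
rng = np.random.default_rng(3)
raw = targets / 1000.0
raw = raw + 0.05 * raw ** 2 + 0.002 * rng.normal(size=raw.shape)

# Least-squares fit of the polynomial mapping raw -> screen coordinates.
coef, *_ = np.linalg.lstsq(design(raw), targets, rcond=None)

# Apply the fitted calibration to a new raw sample.
sample = np.array([[0.52, 0.41]])
print("screen estimate:", design(sample) @ coef)
```

Nine targets comfortably over-determine the six polynomial coefficients per axis, which is why 9-point grids are a common calibration choice.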
3.3.8. Cognitive applications
Eye gaze tracking and detection helps in gaining deep insights into decision making and problem solving. A multimodal eye gaze interface has been proposed for people with physical disabilities and locked-in conditions [7, 98, 99, 100].
3.3.9. Iris calibration
Most EGTs estimate eye gaze under a few restrictions, such as restricted head movement and the need for per-user calibration. A calibration procedure is required to compute eye orientations. One such procedure requires the user to look at a particular set of targets, from which the corresponding calibration function and gaze points are estimated [101, 102].
3.3.10. Interdisciplinary use of eye gaze tracking
EGTs have been widely used in interactive and diagnostic applications such as marketing, e-commerce, psychology, and augmented and virtual reality [103, 104, 105, 106]. Eye gaze tracking has been used to study the influence of human visual behavior on consumer behavior and other attentional processes [9, 53, 104, 107].
3.4. Mapping between core research areas and research trends
Mapping between the core research areas and research trends is shown in Table 17. The connection was made manually on the basis of the loading of research papers onto topic solutions, as done in several prior studies [18, 23]. LSA assigns weight factors, based upon which one document can be loaded onto more than one topic solution [18, 23]. The mapping connects topics with low aggregated loadings to those with high aggregated loadings. In this study, most of the documents are clustered around one topic solution, namely “real time head pose estimation”.
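The loading-based assignment described above can be sketched with a toy term-document matrix (counts invented for illustration; a real corpus would be tf-idf weighted): a truncated SVD yields per-document topic loadings, and each paper is assigned to the topic on which its loading is largest.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = paper abstracts.
terms = ["gaze", "head", "pose", "neural", "network", "calibration"]
X = np.array([
    [3, 2, 0, 1, 2, 2],   # gaze
    [2, 3, 0, 0, 1, 0],   # head
    [2, 3, 0, 0, 1, 0],   # pose
    [0, 0, 3, 2, 0, 0],   # neural
    [0, 0, 2, 3, 0, 0],   # network
    [0, 0, 0, 0, 2, 3],   # calibration
], dtype=float)

# Truncated SVD: keep k latent topics; scaled right singular vectors
# give per-document topic loadings.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
doc_loadings = (np.diag(s[:k]) @ Vt[:k]).T   # shape: documents x topics

# Assign each paper to the topic with the largest absolute loading.
assignment = np.abs(doc_loadings).argmax(axis=1)
print("topic assignment per document:", assignment)
```

With real data the loadings are rotated (e.g. varimax) before assignment, but the principle of clustering documents by their dominant loading is the same.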
4. Discussions about research objectives and potential future applications
This section presents a discussion of how LSA has contributed to answering the research objectives mentioned in the introduction.
Objective 1: To find the leading researchers and prominent publications in Eye Gaze Tracking
The top journals and authors in eye gaze tracking are shown in Table 4 and Table 3, respectively. The journals include IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Human-Machine Systems and Pattern Recognition. Research on eye gaze has been carried out by researchers all over the world, especially from Eastern Asia and Europe. Yusuke Sugano [69], Roberto Valenti [42] and Andrew Duchowski [39] are some of the prominent researchers who have published articles on eye gaze. Sugano et al. worked on appearance based gaze estimation that takes input images from video clips and estimates gaze points using visual saliency [69]. Hansen et al. give a comparative analysis of gaze estimation methods based on their geometric features and discuss various eye gaze estimation techniques along with their accuracies [14].
Objective 2: To find out most prominent research topics associated with Eye Gaze Tracking
The outcome of the study indicates that “real time head pose estimation” and “movement tracking and detection” are the most widely researched topics in eye gaze. Appearance based estimation, though simple and easy to set up, utilizes the natural gaze of the eyes as seen from a commodity camera; its advantage is that it does not require any special equipment. Appearance based gaze estimation can be done with the help of ordinary cameras, as stated by Sugano et al. [69]. Eye tracking and detection can be done with many tools and types of equipment [108]. From this research area, many methods and techniques have been identified focusing on eye gaze along with its applications in various fields [109].
Objective 3: To anticipate the future research directions in applications of Eye Gaze Tracking
Contemporary EGTs are expensive and beyond the reach of the masses, as they require dedicated laboratories and expert handling. Although such EGTs are highly calibrated and produce accurate results, they are not the only option for tracking eye gaze. Advancements in technology have resulted in the integration of high resolution cameras within smartphones and portable devices such as laptops. It will not be long before solutions assisted by these integrated cameras offer a substitute for expensive EGTs, providing a cost effective yet reliable method of eye gaze tracking. Eye gaze is a nascent and emerging area in computer vision and neuroscience, hence there is ample scope for expansion of its applications [110]. A few of the prominent prospective applications of EGTs are listed below:
- a.) Smart Phones: Convenience was the main factor that brought revolution to the mobile industry. First, bulky mobiles transformed into sleeker ones; keypads gave way to touch screens; then came gesture control. Now eye gaze can further revolutionize the mobile industry. Through eye gaze one can unlock the phone, scroll through applications, open or close any app, take pictures by simply blinking, make a phone call by gazing at a name on the screen, or write a message or search for a particular app by gazing at the required letters in the correct order. There are endless possibilities wherein eye gaze can be associated with smartphones, making them even smarter [56, 111, 112, 113].
- b.) Driver assistance system: Eye gaze finds numerous applications not only in guiding drivers through all types of terrain but also in preventing them from speeding or breaking traffic rules, thereby decreasing road accidents. Combined with GPS, eye gaze tracking can also warn the driver of unforeseen obstructions in the path, such as traffic jams and road blocks, which comes in handy especially when navigating in dense fog, thereby reducing accidents considerably. Coupled with proximity sensors, eye gaze tracking can play a major role in averting accidents in the form of a preinstalled accident prevention system in vehicles, thereby saving the thousands of lives lost every year to such mishaps [109, 114, 115, 116, 117, 118].
- c.) Security and authentication: Conventional password based security systems are still widely used, but they pose a serious threat to the entire system as they are susceptible to misuse by unauthorized persons. Eye gaze tracking provides an impeccable solution in the form of the retina scan; the retina being unique to every individual makes it virtually impossible for unauthorized persons to gain access to critical areas. It can be one of the best possible solutions for home security, wherein only individuals whose retina scans are stored in the system would be allowed to unlock the door. Any unauthenticated person attempting entry would be thwarted by loud alarms alerting nearby residents and a distress call sent to the house owner, thereby avoiding any loss to the owner [119, 120].
- d.) Robotics: With the advancements in robotics, it is just a matter of time before robots become an indispensable part of humankind. Through eye gaze, robots can now anticipate what a person might require, be it just a glass of water or even a walk in the park [91, 121]. They can act as security guards of our homes in our absence, recording each and every movement in and around the house. Through eye gaze, and with the help of sophisticated sensors, they can even assist and guide doctors in performing complex surgeries with utmost precision. Robots being an indispensable part of industry, various technological upgrades are being carried out and researched to make robots more efficient; the less time taken to complete a task, the greater the output and hence the profit. Robots can be assigned various tasks, the sequence of which can be decided by the eye gaze of the user [122, 123].
- e.) Virtual and augmented reality: Virtual reality is a relatively new technology that gives the user an immersive experience of a particular place without actually being present there. This technology is made possible through eye gaze tracking, as virtual reality is based upon the gaze of the user: the user wears a device around the eyes that creates such an environment that the user starts relating to the virtual world. In whichever direction the user looks, the gadget renders the environment according to the gaze, thereby giving an immersive experience [124, 125, 126].
- f.) Entertainment: Eye gaze tracking can be very useful in various fields. In entertainment, it can serve as an excellent promotional tool by giving the user an experience of a movie set without actually being there, creating excitement among the audience. In medicine, it can assist young doctors by showing them various medical aspects, such as human anatomy and surgeries, in a virtual environment, thereby giving them in-depth knowledge of aspects that might have been difficult to experience in the real environment [127, 128].
- g.) Gaming: Gaming is an ever evolving field. Every now and then, new innovative ideas are introduced to make the gaming experience all the more interesting. Gaming consoles can now be controlled not only by hand but also through EGTs. Eye movements are tracked through these sophisticated trackers, which perform corresponding actions on the gaming screen. For example, in a racing game the car can be controlled through eye gaze, or in a puzzle game the pieces can be moved onscreen through eye movements and brought together to complete the puzzle. In a strategy game like PUBG, the player can look around the scene by tilting the head and make the character move in any direction based on the player's eye gaze, making the gameplay experience more thrilling and engaging [38, 128].
- h.) Smart homes and TV control: Smart homes and TVs have been built to provide convenience to disabled persons in their daily routine. Various methods have been proposed for designing these devices by analyzing gaze on the basis of face recognition and gaze estimation. Examples include changing channels by looking at the extreme left or right of the television screen, and increasing or decreasing the volume by looking at the extreme top or bottom of the screen, respectively [92, 129, 130].
- i.) Medicare: Eye gaze tracking has applications in medicine, the treatment of eye cancer and surgical operations [131, 132, 133]. Non-intrusive eye gaze tracking is utilized here in the treatment and detection of eye diseases.
- 1) Cervical treatment: Eye gaze tracking can help patients suffering from cervical spondylitis and other backache problems by correcting their postures and warning them when their posture becomes wrong. The main reason for such problems is sitting or standing in an incorrect posture for long periods of time; it can also help others avoid such ailments by helping them take adequate precautions through correcting and maintaining their posture [134].
- 2) A means of communication for the disabled: Communication becomes very difficult for a disabled person, specifically those whose limbs are paralyzed. Being unable to speak, write or move makes it difficult even to ask for water or food. Eye gaze tracking can come as a blessing for such patients: by entering keywords into the computer through eye gaze, one can convey thoughts and requirements to the concerned person, thereby making life easier [57, 135, 136, 137, 138].
- j.) Banking System: In the digital era of the banking industry, eye gaze can be a game changer. For instance, in fully automated branches a customer's eye gaze can be detected to know exactly what they are looking for: whether they want to update a passbook, deposit or withdraw cash, or even open an account [139].
- k.) Simulation: Aircraft are among the most expensive vehicles on the planet, and so is their training. A small mistake can lead to huge losses of life and money. An ideal pilot must be able to deal with all kinds of situations, be it bad weather or heavy air traffic, and reaching that level of expertise initially requires extensive training. Training directly on a trainer jet might prove very costly, hence an immersive virtual experience is a cost effective option. For instance, flying through rough weather is very tedious and requires a lot of patience and complete control over the plane; this can be practiced in a simulated environment where the user can experience such situations and learn how to tackle them. Here, eye gaze can help in studying and analyzing the behavior of the person in emergency situations [140, 141, 142].
- l.) User interface design and evaluation: Eye gaze tracking has applications in user interface design for finding which areas of a webpage receive attention from users [143, 144]. Eye tracking has been used to evaluate graphical user interfaces within web environments in order to improve usability in web page design [145, 146]. Using scan path information, analysts can obtain an estimate of which areas of the interface receive attention and which are ignored [147].
- m.) Appraising product packaging: In today's market, product packaging plays an important role in influencing a customer's decision to purchase a product. Analyzing eye movements can help identify users' focus of attention, which in turn helps grow a product's market by noting users' gaze points [148, 149, 150].
- n.) Social analysis: Eye gaze helps in understanding an individual's ability to pay attention to useful information [151]. Gaze direction, along with magnetic resonance imaging, helps in understanding potential interpersonal interaction [152, 153].
- o.) Sports: Eye movements help athletes anticipate where exactly a ball will bounce [154]. Eye gaze tracking can be used to enhance skills and performance by analyzing the eye movements of players in sports such as cricket or squash [155, 156, 157].
A comparison of the eye gaze tracking applications discussed above is listed in Table 18 on the basis of the use of intrusive or non-intrusive techniques, along with support for varying head pose.
Table 18.
Applications of eye gaze tracking.
| Sr. No. | Applications | Reference Number | Intrusive | Non-intrusive | Head pose estimation |
|---|---|---|---|---|---|
| 1 | Virtual reality | [125, 126, 127, 128] | ✓ | ||
| 2 | IPTV Controlling | [92] | ✓ | ||
| 3 | Medicine | [131, 132, 133, 158, 159] | ✓ | ||
| 4 | Sports | [154] | ✓ | ||
| 5 | Simulator | [140, 141] | ✓ | ||
| 6 | Augmented Reality | [160, 161] | ✓ | ✓ | |
| 7 | Marketing | [110, 149, 162] | ✓ | ||
| 8 | Driver assistance system | [109, 115, 116, 163] | ✓ | ✓ | |
| 9 | E-learning | [89] | ✓ | ✓ | |
| 10 | Gaming | [128] | ✓ | ||
| 11 | Robotics | [91, 121, 164] | ✓ | ✓ | |
| 12 | Smartphone based object detection | [56, 112, 113] | ✓ | ✓ | |
| 13 | Security and authentication | [119, 120] | ✓ | ||
| 14 | Smart homes and TV control | [129, 130] | ✓ | ✓ |
5. Limitations
Some issues might have arisen while compiling the literature dataset on eye gaze tracking techniques, depending upon factors such as the literature source, the query used, and the identification and selection of the final literature used to prepare the corpus. The keywords “Eye Gaze”, “Gaze Points”, “Eye gaze trackers (EGTs)”, “Eye Gaze Technology”, “Eye and gaze”, “Limbus tracking”, “Video-oculography”, “Pupil tracking”, “Purkinje image” and “Eye gaze applications” were used to find suitable research papers. To generate a reliable dataset, other research databases not covered by the automated search were checked manually. The research papers collected for the present study were extensively checked, and the data refined, by applying the inclusion and exclusion criteria listed in Table 1. However, a few relevant papers may still have been omitted. A possibility of bias remains even when using LSA, depending upon the choice of keywords and the inclusion and exclusion criteria. To reduce this bias as far as possible, a heuristic approach was followed to identify a suitable threshold for the algorithm. Although LSA can significantly improve on the vector space model by accounting for synonyms, it cannot automatically decide the appropriate number of topic solutions; to mitigate this limitation, thorough discussions with domain experts were held before choosing the optimal topic solutions. Further, topic labelling was carried out on the basis of human judgement, which might have induced some subjective bias. There might also be limitations with respect to the generalization of results: owing to limited computing resources, topic solutions were obtained only from the titles and abstracts of articles primarily focusing on eye gaze, instead of the complete articles.
The identification of research trends and core research areas was based on an experimental design involving selection and pre-processing of literature, term-document matrix creation, utilization of SVD, and topic labelling. Each of these choices may influence the results. However, data verification was conducted through manual review, which should make the results reliable enough for generalization.
6. Conclusion
This study primarily investigates prominent research trends by deploying information modelling techniques on 423 research articles published from 2005 to 2018. Using LSA, K-topic solutions were identified based on document loadings and corresponding terms. LSA revealed five core research areas and ten research trends followed by the EGT research community. Yusuke Sugano and Roberto Valenti were found to be major contributors of EGT articles, and IEEE Transactions on Pattern Analysis and Machine Intelligence and IEEE Transactions on Human-Machine Systems are pioneers in promulgating EGT research literature. Real time head pose estimation, interdisciplinary use of eye gaze tracking and neural networks for eye gaze recognition emerged as prominent contemporary research trends within the EGT research community. The study also concludes that combining EGT with state-of-the-art augmented and virtual reality, medicare, gaming, artificial intelligence, social analytics, sports, entertainment and the Internet of Things has opened numerous new horizons for EGT research.
Declarations
Author contribution statement
Nandini Modi: Conceived and designed the experiments.
Jaiteg Singh Khaira: Analyzed and interpreted the data; Wrote the paper.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Competing interest statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
References
- 1. Roy A.K., Mahadevappa M., Guha R., Mukherjee J., Akhtar M.N. A novel technique to develop cognitive models for ambiguous image identification using eye tracker. IEEE Trans. Affect. Comput. 2017;3045(c):1–16.
- 2. Hutton S.B. Cognitive control of saccadic eye movements. Brain Cogn. 2008. doi: 10.1016/j.bandc.2008.08.021.
- 3. Wedel M. Improving ad interfaces with eye tracking. The Wiley Handbook of Human Computer Interaction Set. 2017.
- 4. Liu S.S. An eye-gaze tracking and human computer interface system for people with ALS and other locked-in diseases. J. Med. Biol. Eng. 2012.
- 5. Magee J.J., Betke M., Gips J., Scott M.R., Waber B.N. A human-computer interface using symmetry between eyes to detect gaze direction. IEEE Trans. Syst. Man Cybern. Part A Systems Humans. 2008;38(6):1248–1261.
- 6. Muñoz-Leiva F., Hernández-Méndez J., Gómez-Carmona D. Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology. Physiol. Behav. 2019. doi: 10.1016/j.physbeh.2018.03.002.
- 7. Wang C.C., Hung J.C., Chen S.N., Chang H.P. Tracking students' visual attention on manga-based interactive e-book while reading: an eye-movement approach. Multimed. Tools Appl. 2018.
- 8. Dimpfel W. Neuromarketing: neurocode-tracking in combination with eye-tracking for quantitative objective assessment of TV commercials. J. Behav. Brain Sci. 2015.
- 9. Ho H.F. The effects of controlling visual attention to handbags for women in online shops: evidence from eye movements. Comput. Hum. Behav. 2014.
- 10. Buscher G., Cutrell E., Morris M.R. What do you see when you're surfing? Using eye tracking to predict salient regions of web pages. CHI '09 Proc. SIGCHI Conf. Hum. Factors Comput. Syst. 2009.
- 11. Morimoto C.H., Mimica M.R.M. Eye gaze tracking techniques for interactive applications. Comput. Vis. Image Understand. 2005;98(1):4–24.
- 12. Li D., Babcock J., Parkhurst D.J. openEyes. Proc. 2006 Symp. Eye Track. Res. Appl. ETRA '06. 2006;1(March):95.
- 13. Franchak J.M., Adolph K.E. Visually guided navigation: head-mounted eye-tracking of natural locomotion in children and adults. Vision Res. 2010;50(24):2766–2774. doi: 10.1016/j.visres.2010.09.024.
- 14. Hansen D.W., Ji Q. In the eye of the beholder: a survey of models for eyes and gaze. IEEE Trans. Pattern Anal. Mach. Intell. 2010. doi: 10.1109/TPAMI.2009.30.
- 15. White A., Schmidt K. Systematic literature reviews. Complement. Ther. Med. 2005. doi: 10.1016/j.ctim.2004.12.003.
- 16. Delen D., Crossland M.D. Seeding the survey and analysis of research literature with text mining. Expert Syst. Appl. 2008.
- 17. Evangelopoulos N. Latent semantic analysis: five methodological recommendations. Eur. J. Inf. Syst. 2012;21(January):70–86.
- 18. Yalcinkaya M., Singh V. Patterns and trends in building information modeling (BIM) research: a latent semantic analysis. Autom. Constr. 2015.
- 19. Sidorova A., Evangelopoulos N., Torres R., Johnson V. A survey of core research in information systems. SpringerBriefs in Computer Science. 2013.
- 20. Sehra S., Singh J., Rai H. Using latent semantic analysis to identify research trends in OpenStreetMap. ISPRS Int. J. Geo-Information. 2017.
- 21. Sehra S.K., Brar Y.S., Kaur N., Sehra S.S. Research patterns and trends in software effort estimation. Inf. Softw. Technol. 2017.
- 22. Deerwester S., Dumais S.T., Furnas G.W., Landauer T.K., Harshman R. Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 1990.
- 23. Sidorova, Evangelopoulos, Valacich, Ramakrishnan. Uncovering the intellectual core of the information systems discipline. MIS Q. 2008.
- 24. Landauer T.K., Foltz P.W., Laham D. An introduction to latent semantic analysis. Discourse Process. 1998.
- 25. Blei D.M., Ng A.Y., Jordan M.I. Latent dirichlet allocation. J. Mach. Learn. Res. 2003.
- 26. Papadimitriou C.H., Raghavan P., Tamaki H., Vempala S. Latent semantic indexing: a probabilistic analysis. J. Comput. Syst. Sci. 2000.
- 27. Fuhr N. Probabilistic models in information retrieval. Comput. J. 1992.
- 28. Ding C.H.Q. A probabilistic model for latent semantic indexing. J. Am. Soc. Inf. Sci. Technol. 2005.
- 29. Ahrendt P., Larsen J., Goutte C. Co-occurrence models in music genre classification. 2005 IEEE Workshop on Machine Learning for Signal Processing. 2005.
- 30. Zhai H., Guo J., Wu Q., Cheng X., Sheng H., Zhang J. Query classification based on regularized correlated topic model. Proc. 2009 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2009. 2009.
- 31. Feldman R., Sanger J. The text mining handbook: advanced approaches in analyzing unstructured data. Imagine. 2007.
- 32. Abidin T.F., Yusuf B., Umran M. Singular value decomposition for dimensionality reduction in unsupervised text learning problems. ICETC 2010 - 2010 2nd Int. Conf. Educ. Technol. Comput. 2010.
- 33. Kuandykov A.A., Rakhmetulayeva S.B., Baiburin Y.M., Nugumanova A.B. Usage of singular value decomposition matrix for search latent semantic structures in natural language texts. 2015 54th Annu. Conf. Soc. Instrum. Control Eng. Japan, SICE 2015. 2015:286–291.
- 34. Kaiser H.F. The varimax criterion for analytic rotation in factor analysis. Psychometrika. 1958.
- 35. Holmqvist K., Nyström M., Andersson R., Dewhurst R., Jarodzka H., Van De Weijer J. Eye Tracking: A Comprehensive Guide to Methods and Measures. 2011:560.
- 36. Masse B., Ba S., Horaud R. Tracking gaze and visual focus of attention of people involved in social interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017.
- 37. Moreira J.L., Braun A., Musse S.R. Eyes and eyebrows detection for performance driven animation. Proc. 23rd SIBGRAPI Conf. Graph. Patterns Images, SIBGRAPI 2010. 2010:17–24.
- 38. Corcoran P.M., Nanu F., Petrescu S., Bigioi P. Real-time eye gaze tracking for gaming design and consumer electronics systems. IEEE Trans. Consum. Electron. 2012;58(2):347–355.
- 39. Duchowski A.T. Eye Tracking Methodology: Theory and Practice. Springer; 2007.
- 40.Murphy-Chutorian E., Trivedi M.M. Head pose estimation in computer vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 2009 doi: 10.1109/TPAMI.2008.106. [DOI] [PubMed] [Google Scholar]
- 41.Jafari R., Ziou D. Eye-gaze estimation under various head positions and iris states. Expert Syst. Appl. 2015;42(1):510–518. [Google Scholar]
- 42.Valenti R., Sebe N., Gevers T. Combining head pose and eye location information for gaze estimation. IEEE Trans. Image Process. 2012;21(2):802–815. doi: 10.1109/TIP.2011.2162740. [DOI] [PubMed] [Google Scholar]
- 43.Cheung Y.M., Peng Q. Eye gaze tracking with a web camera in a desktop environment. IEEE Trans. Human Mach. Syst. 2015;45(4):419–430. [Google Scholar]
- 44.Sigut J., Sidha S.A. Iris center corneal reflection method for gaze tracking using visible light. IEEE Trans. Biomed. Eng. 2011 doi: 10.1109/TBME.2010.2087330. [DOI] [PubMed] [Google Scholar]
- 45.Zhou Z., Yao P., Zhuang Z., Li J. Proceedings of the 2011 6th IEEE Conference on Industrial Electronics and Applications, ICIEA 2011. 2011. A robust algorithm for iris localization based on radial symmetry and circular integro differential operator. [Google Scholar]
- 46.Guestrin E.D., Eizenman M. General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Trans. Biomed. Eng. 2006 doi: 10.1109/TBME.2005.863952. [DOI] [PubMed] [Google Scholar]
- 47.Choe K.W., Blake R., Lee S.H. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation. Vision Res. 2016 doi: 10.1016/j.visres.2014.12.018. [DOI] [PubMed] [Google Scholar]
- 48.Lin Y.T., Lin R.Y., Lin Y.C., Lee G.C. Real-time eye-gaze estimation using a low-resolution webcam. Multimed. Tools Appl. 2013;65(3):543–568. [Google Scholar]
- 49.Mele M.L., Federici S. Gaze and eye-tracking solutions for psychological research. Cogn. Process. 2012 doi: 10.1007/s10339-012-0499-z. [DOI] [PubMed] [Google Scholar]
- 50.Diaz G., Cooper J., Kit D., Hayhoe M. Real-time recording and classification of eye movements in an immersive virtual environment. J. Vis. 2013 doi: 10.1167/13.12.5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Skodras E., Kanas V.G., Fakotakis N. On visual gaze tracking based on a single low cost camera. Signal Process. Image Commun. 2015 [Google Scholar]
- 52.Wong W., Bartels M., Chrobot N. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 2014. Practical eye tracking of the ecommerce website user experience. [Google Scholar]
- 53.Hwang Y.M., Lee K.C. Using an eye-tracking approach to explore gender differences in visual attention and shopping attitudes in an online shopping environment. Int. J. Hum. Comput. Interact. 2018 [Google Scholar]
- 54.Djamasbi S., Siegel M., Tullis T. Generation Y, web design, and eye tracking. Int. J. Hum. Comput. Stud. 2010 [Google Scholar]
- 55.Bogomolova S., Oppewal H., Cohen J., Yao J. 2015. How the Layout of a Price Label Influences Unit Price Visual Attention and Choice during Grocery Shopping. [Google Scholar]
- 56. Rozado D., Moreno T., San Agustin J., Rodriguez F.B., Varona P. Controlling a smartphone using gaze gestures as the input mechanism. Hum. Comput. Interact. 2015.
- 57. Kocejko T., Bujnowski A., Wtorek J. Eye mouse for disabled. 2008 Conference on Human System Interaction (HSI 2008). 2008.
- 58. Zander T.O., Gaertner M., Kothe C., Vilimek R. Combining eye gaze input with a brain-computer interface for touchless human-computer interaction. Int. J. Hum. Comput. Interact. 2011.
- 59. Meena Y.K., Cecotti H., Wong-Lin K., Dutta A., Prasad G. Toward optimization of gaze-controlled human-computer interaction: application to Hindi virtual keyboard for stroke patients. IEEE Trans. Neural Syst. Rehabil. Eng. 2018. doi: 10.1109/TNSRE.2018.2814826.
- 60. Asteriadis S., Tzouveli P., Karpouzis K., Kollias S. Estimation of behavioral user state based on eye gaze and head pose: application in an e-learning environment. Multimed. Tools Appl. 2009;41(3):469–493.
- 61. Martins P., Batista J. Single view head pose estimation. 2008 15th IEEE Int. Conf. Image Process. 2008:1652–1655.
- 62. Zhu X., Ramanan D. Face detection, pose estimation, and landmark localization in the wild. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2012.
- 63. Mukherjee S.S., Robertson N.M. Deep head pose: gaze-direction estimation in multimodal video. IEEE Trans. Multimed. 2015.
- 64. Demjén E., Aboši V., Tomori Z. Eye tracking using artificial neural networks for human computer interaction. Physiol. Res. 2011. doi: 10.33549/physiolres.932117.
- 65. Liu N., Han J., Zhang D., Wen S., Liu T. Predicting eye fixations using convolutional neural networks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2015.
- 66. Sewell W., Komogortsev O. Real-time eye gaze tracking with an unmodified commodity webcam employing a neural network. Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '10). 2010.
- 67. Kruthiventi S.S.S., Ayush K., Babu R.V. DeepFix: a fully convolutional neural network for predicting human eye fixations. IEEE Trans. Image Process. 2017. doi: 10.1109/TIP.2017.2710620.
- 68. Chinsatit W., Saitoh T. CNN-based pupil center detection for wearable gaze estimation system. Appl. Comput. Intell. Soft Comput. 2017.
- 69. Sugano Y., Matsushita Y., Sato Y. Appearance-based gaze estimation using visual saliency. IEEE Trans. Pattern Anal. Mach. Intell. 2013;35(2):329–341. doi: 10.1109/TPAMI.2012.101.
- 70. Lu F., Sugano Y., Okabe T., Sato Y. Adaptive linear regression for appearance-based gaze estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2014;36(10):2033–2046. doi: 10.1109/TPAMI.2014.2313123.
- 71. Zhang X., Sugano Y., Fritz M., Bulling A. It's written all over your face: full-face appearance-based gaze estimation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2017.
- 72. Wood E., Baltrušaitis T., Morency L.-P., Robinson P., Bulling A. Learning an appearance-based gaze estimator from one million synthesised images. Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications (ETRA '16). 2016.
- 73. Lu F., Sugano Y., Okabe T., Sato Y. Head pose-free appearance-based gaze sensing via eye image synthesis. Proc. ICPR. 2012:1008–1011.
- 74. Zhang X., Sugano Y., Fritz M., Bulling A. Appearance-based gaze estimation in the wild. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2015.
- 75. Świrski L., Bulling A., Dodgson N. Robust real-time pupil tracking in highly off-axis images. Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '12). 2012.
- 76. Nagamatsu T., Kamahara J., Tanaka N. Calibration-free gaze tracking using a binocular 3D eye model. Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '09). 2009.
- 77. Nagamatsu T., Sugano R., Iwamoto Y., Kamahara J., Tanaka N. User-calibration-free gaze estimation method using a binocular 3D eye model. IEICE Trans. Inf. Syst. 2011;E94-D(9):1817–1829.
- 78. Wood E., Bulling A. EyeTab: model-based gaze estimation on unmodified tablet computers. Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '14). 2014.
- 79. Jianfeng L., Shigang L. Eye-model-based gaze estimation by RGB-D camera. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops. 2014:606–610.
- 80. Kozakaya T., Shibata T., Yuasa M., Yamaguchi O. Facial feature localization using weighted vector concentration approach. Image Vis. Comput. 2010.
- 81. Sandbach G., Zafeiriou S., Pantic M. Markov random field structures for facial action unit intensity estimation. Proc. IEEE Int. Conf. Comput. Vis. 2013:738–745.
- 82. Li D., Winfield D., Parkhurst D.J. Starburst: a hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches. 2005 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops. 2005;3:79.
- 83. Zhang Y. A Time Delay Neural Network model for simulating eye gaze data. J. Exp. Theor. Artif. Intell. 2011;23(1):111–126.
- 84. Xiong X., Liu Z., Cai Q., Zhang Z. Eye gaze tracking using an RGBD camera: a comparison with an RGB solution. Int. Jt. Conf. Pervasive Ubiquitous Comput. Adjunct Publ. 2014.
- 85. Chandra S., Sharma G., Malhotra S., Jha D., Mittal A.P. Eye tracking based human computer interaction: applications and their uses. 2015 Int. Conf. Man Mach. Interfacing. 2015:1–5.
- 86. Lupu R.G., Ungureanu F., Siriteanu V. Eye tracking mouse for human computer interaction. 2013 E-Health Bioeng. Conf. 2013:1–4.
- 87. Arai K., Mardiyanto R. Eye-based HCI with full specification of mouse and keyboard using pupil knowledge in the gaze estimation. Proceedings of the 8th International Conference on Information Technology: New Generations (ITNG 2011). 2011.
- 88. Anonymous. SensoMotoric Instruments launches SMI eye tracking glasses. M2 Presswire. 2011.
- 89. George R.P., Al-Khalifa H.S. Eye tracking and e-learning: seeing through your students' eyes. eLearn Magazine (ACM). 2010.
- 90. Zhang L., Mistry K., Jiang M., Neoh S.C., Hossain M.A. Adaptive facial point detection and emotion recognition for a humanoid robot. Comput. Vis. Image Understand. 2015;140:93–114.
- 91. Zaraki A., Mazzei D., Giuliani M., De Rossi D. Designing and evaluating a social gaze-control system for a humanoid robot. IEEE Trans. Human Mach. Syst. 2014.
- 92. Lee H.C., Luong D.T., Cho C.W., Lee E.C., Park K.R. Gaze tracking system at a distance for controlling IPTV. IEEE Trans. Consum. Electron. 2010.
- 93. Lin M., Li B. A wireless EOG-based human computer interface. Proceedings of the 3rd International Conference on Biomedical Engineering and Informatics (BMEI 2010). 2010.
- 94. Bulling A., Ward J.A., Gellersen H., Tröster G. Eye movement analysis for activity recognition using electrooculography. IEEE Trans. Pattern Anal. Mach. Intell. 2011. doi: 10.1109/TPAMI.2010.86.
- 95. Schneider E. EyeSeeCam: an eye movement-driven head camera for the examination of natural visual exploration. Ann. N. Y. Acad. Sci. 2009.
- 96. Pfeuffer K., Vidal M., Turner J., Bulling A., Gellersen H. Pursuit calibration: making gaze calibration less tedious and more flexible. Proc. UIST. 2013.
- 97. Wang K., Ji Q. 3D gaze estimation without explicit personal calibration. Pattern Recognit. 2018.
- 98. Eckstein M.K., Guerra-Carrillo B., Miller Singley A.T., Bunge S.A. Beyond eye gaze: what else can eyetracking reveal about cognition and cognitive development? Develop. Cognit. Neurosci. 2017. doi: 10.1016/j.dcn.2016.11.001.
- 99. Gidlöf K., Dewhurst R., Holmqvist K., Wallin A. Using eye tracking to trace a cognitive process: gaze behaviour during decision making in a natural environment. J. Eye Mov. Res. 2013.
- 100. Kuhn G., Martinez L.M. Misdirection – past, present, and the future. Front. Hum. Neurosci. 2012. doi: 10.3389/fnhum.2011.00172.
- 101. Cerrolaza J.J., Villanueva A., Cabeza R. Study of polynomial mapping functions in video-oculography eye trackers. ACM Trans. Comput. Interact. 2012.
- 102. Johnson J.S., Li L., Thomas G., Spencer J.P. Calibration algorithm for eyetracking with unrestricted head movement. Behav. Res. Methods. 2007. doi: 10.3758/bf03192850.
- 103. van Giesen R.I., Fischer A.R.H., van Dijk H., van Trijp H.C.M. Tracing attitude expressions: an eye-tracking study. J. Behav. Decis. Mak. 2016.
- 104. Ohme R., Matukin M., Pacula-Lesniak B. Biometric measures for interactive advertising research. J. Interact. Advert. 2011.
- 105. Djamasbi S., Siegel M., Skorinko J., Tullis T. Online viewing and aesthetic preferences of Generation Y and the baby boom generation: testing user web site experience through eye tracking. Int. J. Electron. Commer. 2011.
- 106. Sari J.N., Nugroho L.E., Santosa P.I., Ferdiana R. The measurement of consumer interest and prediction of product selection in e-commerce using eye tracking method. Int. J. Intell. Eng. Syst. 2018.
- 107. Mele M.L., Federici S. Gaze and eye-tracking solutions for psychological research. Cogn. Process. 2012;13(1 Suppl). doi: 10.1007/s10339-012-0499-z.
- 108. Tobii. Tobii eye tracking: an introduction to eye tracking and Tobii eye trackers. 2010.
- 109. Dongare H., Shah S. Eye gaze tracking and eyes off the road detection for traffic safety on Raspberry Pi. Int. J. Innov. Res. Electr. Electron. Instrum. Control Eng. 2016;4(6):154–157.
- 110. dos Santos R.D.O.J., de Oliveira J.H.C., Rocha J.B., Giraldi J.D.M.E. Eye tracking in neuromarketing: a research agenda for marketing studies. Int. J. Psychol. Stud. 2015;7(1):32–42.
- 111. Dybdal M., Agustin J., Hansen J. Gaze input for mobile devices by dwell and gestures. Symp. Eye Track. Res. Appl. 2012:225–228.
- 112. Jiang Z. VADS: visual attention detection with a smartphone. Proceedings of IEEE INFOCOM. 2016.
- 113. Xiao D., Feng C. Detection of drivers visual attention using smartphone. 2016 12th Int. Conf. Nat. Comput. Fuzzy Syst. Knowl. Discov. 2016.
- 114. Dasgupta A., George A., Happy S.L., Routray A. A vision-based system for monitoring the loss of attention in automotive drivers. IEEE Trans. Intell. Transp. Syst. 2013;14(4):1825–1838.
- 115. Anderson C., Chang A.M., Sullivan J.P., Ronda J.M., Czeisler C.A. Assessment of drowsiness based on ocular parameters detected by infrared reflectance oculography. J. Clin. Sleep Med. 2013. doi: 10.5664/jcsm.2992.
- 116. Vicente F., Huang Z., Xiong X., De La Torre F., Zhang W., Levi D. Driver gaze tracking and eye off the road detection system. IEEE Trans. Intell. Transp. Syst. 2015;16(4):2014–2027.
- 117. Jin L., Niu Q., Hou H., Xian H., Wang Y., Shi D. Driver cognitive distraction detection using driving performance measures. Discrete Dyn. Nat. Soc. 2012.
- 118. Naqvi R.A., Arsalan M., Batchuluun G., Yoon H.S., Park K.R. Deep learning-based gaze detection system for automobile drivers using a NIR camera sensor. Sensors (Switzerland). 2018. doi: 10.3390/s18020456.
- 119. Tistarelli M., Li S., Chellappa R. Handbook of Remote Biometrics: for Surveillance and Security. 2009.
- 120. Cantoni V., Musci M., Nugrahaningsih N., Porta M. Gaze-based biometrics: an introduction to forensic applications. Pattern Recognit. Lett. 2016.
- 121. Admoni H., Scassellati B. Social eye gaze in human-robot interaction: a review. J. Human Robot Interact. 2017.
- 122. Huang C.M., Mutlu B. Anticipatory robot control for efficient human-robot collaboration. ACM/IEEE International Conference on Human-Robot Interaction. 2016.
- 123. Saran A., Majumdar S., Thomaz A., Niekum S. Real-time human gaze following for human-robot interaction. HRI '18. 2018.
- 124. Pettersson J., Albo A., Eriksson J., Larsson P., Falkman K.W., Falkman P. Cognitive ability evaluation using virtual reality and eye tracking. 2018 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA 2018). 2018.
- 125. Sun Q. Towards virtual reality infinite walking: dynamic saccadic redirection. ACM Trans. Graph. 2018.
- 126. Wibirama S., Nugroho H.A., Hamamoto K. Evaluating 3D gaze tracking in virtual space: a computer graphics approach. Entertain. Comput. 2017.
- 127. Tripathi S., Guenter B. A statistical approach to continuous self-calibrating eye gaze tracking for head-mounted virtual reality systems. 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). 2017.
- 128. Kumar D., Sharma A. Electrooculogram-based virtual reality game control using blink detection and gaze calibration. 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI 2016). 2016.
- 129. Khowaja S.A., Dahri K., Kumbhar M.A., Soomro A.M. Facial expression recognition using two-tier classification and its application to smart home automation system. Proc. 2015 Int. Conf. Emerg. Technol. (ICET). 2015.
- 130. Nguyen D.T. Gaze detection based on head pose estimation in smart TV. International Conference on ICT Convergence. 2013.
- 131. Harezlak K., Kasprowski P. Application of eye tracking in medicine: a survey, research issues and challenges. Comput. Med. Imaging Graph. 2017. doi: 10.1016/j.compmedimag.2017.04.006.
- 132. Wyder S., Hennings F., Pezold S., Hrbacek J., Cattin P.C. With gaze tracking toward noninvasive eye cancer treatment. IEEE Trans. Biomed. Eng. 2016. doi: 10.1109/TBME.2015.2505740.
- 133. Yip H.M., Navarro-Alarcon D., Liu Y.H. Development of an eye-gaze controlled interface for surgical manipulators using eye-tracking glasses. 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO 2016). 2016.
- 134. Cheever K., Kawata K., Tierney R., Galgon A. Cervical injury assessments for concussion evaluation: a review. J. Athl. Train. 2016. doi: 10.4085/1062-6050-51.12.15.
- 135. Miyamoto M., Shimada Y., Maki Y., Shibasato K. Development of eye gaze software for children with physical disabilities. 4th IGNITE Conference and 2016 International Conference on Advanced Informatics: Concepts, Theory and Application (ICAICTA 2016). 2016.
- 136. Bates R., Vickers S., Istance H.O. Gaze interaction with virtual on-line communities: levelling the playing field for disabled users. Univers. Access Inf. Soc. 2010.
- 137. Cecotti H. Spelling with non-invasive brain-computer interfaces: current and future trends. J. Physiol. Paris. 2011. doi: 10.1016/j.jphysparis.2011.08.003.
- 138. Karlsson P., Allsop A., Dee-Price B.J., Wallen M. Eye-gaze control technology for children, adolescents and adults with cerebral palsy with significant physical disability: findings from a systematic review. Dev. Neurorehabil. 2018. doi: 10.1080/17518423.2017.1362057.
- 139. Bossaerts P., Suzuki S., O'Doherty J.P. Perception of intentionality in investor attitudes towards financial risks. J. Behav. Exp. Finance. 2018.
- 140. Kaiser M.K., Gans N.R., Dixon W.E. Vision-based estimation for guidance, navigation, and control of an aerial vehicle. IEEE Trans. Aerosp. Electron. Syst. 2010;46(3):1064–1077.
- 141. Stan O., Miclea L., Centea A. Eye-gaze tracking method driven by Raspberry Pi applicable in automotive traffic safety. Proceedings of the 2nd International Conference on Artificial Intelligence, Modelling, and Simulation (AIMS 2014). 2014.
- 142. Biswas P., DV J. Eye gaze controlled MFD for military aviation. Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (IUI '18). 2018.
- 143. Chynał P., Falkowska J., Sobecki J. Web page graphic design usability testing enhanced with eye-tracking. Intelligent Human Systems Integration: Proceedings of the 1st International Conference on Intelligent Human Systems Integration (IHSI 2018), Dubai, United Arab Emirates. 2018.
- 144. Menges R., Tamimi H., Kumar C., Walber T., Schaefer C., Staab S. Enhanced representation of web pages for usability analysis with eye tracking. Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (ETRA '18). 2018.
- 145. Eraslan S., Yesilada Y., Harper S. Eye tracking scanpath analysis techniques on web pages: a survey, evaluation and comparison. J. Eye Mov. Res. 2016.
- 146. Djamasbi S. Eye tracking and web experience. AIS Trans. Hum. Comput. Interact. 2014.
- 147. Blascheck T., Kurzhals K., Raschke M., Burch M., Weiskopf D., Ertl T. State-of-the-art of visualization for eye tracking data. Eurographics Conf. Vis. 2014:1–20.
- 148. Bogomolova S., Oppewal H., Cohen J., Yao J. How the layout of a unit price label affects eye-movements and product choice: an eye-tracking investigation. J. Bus. Res. 2018.
- 149. Mundel J., Huddleston P., Behe B., Sage L., Latona C. An eye tracking study of minimally branded products: hedonism and branding as predictors of purchase intentions. J. Prod. Brand Manag. 2018.
- 150. Fenko A., Nicolaas I., Galetzka M. Does attention to health labels predict a healthy food choice? An eye-tracking study. Food Qual. Prefer. 2018.
- 151. Wang Q., Wedel M., Huang L., Liu X. Effects of model eye gaze direction on consumer visual processing: evidence from China and America. Inf. Manag. 2018.
- 152. Sun D., Shao R., Wang Z., Lee T.M.C. Perceived gaze direction modulates neural processing of prosocial decision making. Front. Hum. Neurosci. 2018. doi: 10.3389/fnhum.2018.00052.
- 153. Jiang T., Potters J., Funaki Y. Eye-tracking social preferences. J. Behav. Decis. Mak. 2016.
- 154. Pires B.R., Hwangbo M., Devyver M., Kanade T. Visible-spectrum gaze tracking for sports. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2013.
- 155. Halson S.L., Peake J.M., Sullivan J.P. Wearable technology for athletes: information overload and pseudoscience? Int. J. Sports Physiol. Perform. 2016. doi: 10.1123/IJSPP.2016-0486.
- 156. Kredel R., Vater C., Klostermann A., Hossner E.J. Eye-tracking technology and the dynamics of natural gaze behavior in sports: a systematic review of 40 years of research. Front. Psychol. 2017. doi: 10.3389/fpsyg.2017.01845.
- 157. Discombe R., Cotterill S. Eye tracking in sport: a guide for new and aspiring researchers. Sport Exerc. Psychol. Rev. 2015.
- 158. Atkins M.S., Tien G., Khan R.S.A., Meneghetti A., Zheng B. What do surgeons see: capturing and synchronizing eye gaze for surgery applications. Surg. Innov. 2013;20(3):241–248. doi: 10.1177/1553350612449075.
- 159. Açık A., Barkana D.E., Akgün G., Yantaç A.E., Aydın Ç. Evaluation of a surgical interface for robotic cryoablation task using an eye-tracking system. Int. J. Hum. Comput. Stud. 2016.
- 160. Pfeiffer T., Renner P. EyeSee3D: a low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology. Proc. Symp. Eye Track. Res. Appl. 2014.
- 161. Renner P., Pfeiffer T. Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems. 2017 IEEE Symposium on 3D User Interfaces (3DUI 2017). 2017.
- 162. Khushaba R.N., Wise C., Kodagoda S., Louviere J., Kahn B.E., Townsend C. Consumer neuroscience: assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking. Expert Syst. Appl. 2013.
- 163. Jo J. Vision-based method for detecting driver drowsiness and distraction in driver monitoring system. Opt. Eng. 2011.
- 164. Palinko O., Rea F., Sandini G., Sciutti A. A robot reading human gaze: why eye tracking is better than head tracking for human-robot collaboration. IEEE International Conference on Intelligent Robots and Systems. 2016.


