Behavior Research Methods. 2025 Jan 6;57(1):46. doi: 10.3758/s13428-024-02529-7

The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study

Diederick C Niehorster 1,, Marcus Nyström 2, Roy S Hessels 3, Richard Andersson 4, Jeroen S Benjamins 5, Dan Witzner Hansen 6, Ignace T C Hooge 3
PMCID: PMC11703944  PMID: 39762687

Abstract

Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one’s study.

Keywords: Eye tracking, Tools, Software, Programming, Hardware

Introduction

This article is the fourth in a series on the fundamentals of eye tracking (see also Hessels et al., 2025; Hooge et al., in press; Nyström et al., in press). The articles are aimed at individuals who are (one of) the first in their group, company, or research field to use eye tracking, with a focus on all the decisions one may make in the context of an eye-tracking study. Such individuals may come from academia (e.g., psychology, biology, medicine, educational science, computer science), commercial institutions (e.g., marketing research, usability, decision making) and non-commercial institutions (e.g., hospitals, air traffic control, military organizations). Note that this is not an exhaustive description of the target audience. More experienced eye-tracking researchers may find useful insights in the article series, or may find the article series a useful reference or hub to relevant research. One may choose to read this article as the fourth part of the series, or skip the other articles in the series if this article is of more immediate interest.

This article discusses what researchers use to collect, visualize, process, and analyze eye tracking data. We will refer to the software (and hardware) researchers use for these tasks as eye tracking tools. In this article, we will provide an overview of different tools that can be used for eye tracking research, and provide advice on choosing tools that fit one’s operation. A notable exception in this discussion is the eye tracking hardware itself, which is instead discussed in Nyström et al. (in press). For discussions of how eye movements and eye tracking can be applied in research, the reader is referred to Duchowski (2007); Holmqvist et al. (2011); Leigh and Zee (2015) as well as parts 1 and 2 in this article series (Hessels et al., 2025; Hooge et al., in press).

How are tools used by researchers when conducting eye tracking research? Imagine two extreme approaches that may be taken by researchers. At one extreme we find a person relying exclusively on commercial hardware and software, who we refer to as the proprietary software researcher. Often, eye tracker manufacturers also sell a software suite to go together with their eye trackers (e.g., SR Research Experiment Builder and Data Viewer, SMI Experiment Center,1 Gazepoint Analysis and Tobii Pro Lab) and there are also software suites available from companies that do not make their own eye trackers (e.g., Blickshift, iMotions). Such software suites are designed to take care of many of the difficult problems in doing research with eye trackers, from building experiments and presenting stimuli to visualizing the recorded data and performing all manner of analyses on eye tracking data. An advantage of software suites is that they can be used to carry out most mainstream research (such as reading research, various paradigms from experimental psychology, or usability research). Another advantage is that such software is essentially plug and play: many difficult problems (e.g., data synchronization, trial segmentation and fixation classification) that researchers may not even be aware of, or be interested in, are solved for them and almost everything necessary to conduct a study can be performed in the software. Despite being excellent tools for many researchers, proprietary software also has disadvantages, such as that its methods are not open-source and thus cannot be inspected for correctness. Furthermore, being heavily reliant on a single software suite may have several downsides. For instance, it is possible that an old experiment cannot be opened, run, or analyzed anymore when the developer performs a major update or drops support for a needed feature. Also, when the software does not fit one’s needs, one may have to look for solutions outside the software or invest time in learning possibly complicated extension procedures. If you prefer that your tools handle all the technical details and other difficult problems for you, then such all-in-one software suites are probably the simplest and most convenient way to conduct good research.

On the other end of the spectrum there is a person with technical skills and enough time, motivation and reasons to develop their own tools, who we refer to as the do-it-yourself (DIY) researcher. The authors of this article belong to this category and make a lot of their tools themselves (e.g., Agustin et al., 2010; Nyström & Holmqvist, 2010; Hooge & Camps, 2013; Hessels et al., 2017; Benjamins et al., 2018; Hansen et al., 2019; Niehorster et al., 2020a; Niehorster & Nyström, 2020; Niehorster et al., 2023). This type of researcher often conducts research that is not fully served by existing tools, and has to do some significant development themselves. They would, for instance, have to build 1) a stimulus-creation program; 2) a program for presenting the stimuli that can record synchronized gaze data at the same time; and 3) a fixation classifier for determining the on- and offsets of fixations. They may even need to develop custom hardware, and many other things. An advantage of the DIY approach to research is that there are few limits to the experimental setups and designs one can build and the analyses that can be performed: when tools are needed that do not exist, one creates them. Other advantages of inhabiting this side of the spectrum include having full control over and intimate knowledge of the experimental setup and analysis procedures, and that the setup may become simpler because all tools are exactly tailored to the research project without being burdened with extra unneeded functionality or with workarounds that are needed because existing tools do not exactly fit the needs of the research. There are, however, also disadvantages to the DIY approach. Firstly, it is skill-intensive because it requires good programming competence, an understanding of hardware and possibly electronics, good knowledge and skill in physics and mathematics, etc. The approach is also time-consuming, especially because tool development may become an open-ended problem with no guarantee of success. A final important potential disadvantage is that developing tools and spending significant time programming should match one’s interests. For a researcher who already spends all their time and intellectual capacity on a difficult scientific problem, maybe spending additional time on building the tools is not feasible or worth it. Perhaps a leading motivation for becoming a DIY researcher is to be able to perform science beyond existing limits, and such researchers happily accept the extra effort such an approach demands.

Most researchers probably find themselves somewhere on the spectrum between the two extremes of the proprietary software researcher who does most of their research using a single software suite and the DIY researcher who builds everything themselves. For instance, there probably are a lot of researchers who use the simplest tools for the job and mix both approaches when designing their workflow for a study. They happily use their favorite software suite where possible, but may choose to use another tool or even develop their own tool when they encounter a problem that cannot be solved using their preferred software suite. One can also choose to handle generic or difficult parts of the research workflow, such as stimulus presentation and fixation classification, using one’s standard tool, but develop the parts of the analysis that are heavily tied to the research question oneself. Such researchers reap the benefits of being able to quickly execute a lot of their work using tools they know well, while their skills and knowledge of other available tools enable them to not be limited by their standard tools when the research problem demands more.

There are many good open-source and paid tools around that can do the difficult jobs for the researcher. However, how does one find and select the tools needed to conduct a specific study using an eye tracker? One important source is the scientific literature. In the past 50 years, many eye movement researchers have not only published their results, theory, and conclusions, but also published detailed recipes for collecting or analyzing eye tracking data or even made their tools freely available (e.g., Stampe, 1993; Engbert & Kliegl, 2003; Hansen & Pece, 2005; Nyström & Holmqvist, 2010; Hooge & Camps, 2013; Lao et al., 2017; Rousselet et al., 2017; Niehorster et al., 2020a; Niehorster et al., 2023). In this article, we will introduce many eye tracking tools that tackle a wide range of problems. This includes tools for data recording and cleaning, classification of fixations and other events, and to generate areas-of-interest and conduct analyses with them. We will not deliver a ready-to-use pipeline for conducting eye tracking research in this article, since we do not believe there is a one-size-fits-all solution. Nor do we aim to provide an exhaustive list of tools.

The aim of this article is to improve the level of individual eye tracking researchers and their operation. As such, in this article we share our knowledge and experience creating our own tools and using tools created by others. We do not only adopt the perspective of researchers using eye tracking, but also that of supervisors of students using eye tracking, of teachers of eye tracking courses, of collaborators in research projects where we handled the eye tracking, and that of reviewers for eye tracking studies and papers describing eye tracking tools, importantly in journals such as Behavior Research Methods (BRM) and conference venues such as Eye Tracking Research and Applications (ETRA). To meet the goal of improving the level of individual eye tracking researchers and their operation, we will discuss eye tracking tools from multiple perspectives. First, to provide insight into the kinds of eye tracking tools out there, we have developed a taxonomy of tools. Second, we will discuss how to approach recognizing good tools for adoption into one’s eye tracking study. Furthermore, to help researchers find tools that suit their problem, we will provide a survey of tools for a wide range of problems in eye tracking research. This non-exhaustive list is intended to serve as a hub from which researchers can start their search for tools they need. Finally, we provide a discussion of the human side of tool use. This includes both a discussion of choosing tools that fit, and advice for what a researcher can do to expand their solution space when they do not like where they currently find themselves on the spectrum, and, for instance, want to move more towards the DIY side by gaining useful programming skills. While one can choose to read this article all the way through, this article is written in such a way that the reader can also directly skip to the section(s) of their interest.

Taxonomy of tools

There are many different kinds of tools for conducting eye tracking research. First, there are software tools. Software tools for eye tracking research can be subdivided into at least two categories. One category consists of complete ready-to-use software suites for stimulus presentation that allow recording, visualizing and analyzing eye tracking data, such as Tobii Pro Lab, the SR Research Experiment Builder and Data Viewer bundle, iMotions Lab and EyeTrace (Kübler et al., 2015; Otto et al., 2018).

In contrast to software suites that aim to provide a one-stop solution to all one’s research needs stand dedicated software tools that aim to do only one thing and do that well. Many such tools for solving more specific problems in eye tracking research are made available by authors of articles in journals dedicated to research methods, such as Behavior Research Methods and the Journal of Neuroscience Methods, and also more broad journals such as the Journal of Eye Movement Research. Examples of such tools are tools for fixation classification (Nyström & Holmqvist, 2010; Hessels et al., 2017; van Renswoude et al., 2018; see Andersson et al., 2017; and Hooge et al., 2022a, for further algorithms), Area of Interest analyses (Berger et al., 2012; Hessels et al., 2016, 2018a) and the processing of head-worn eye tracker data (Reimer & Sodhi, 2006; Benjamins et al., 2018; Niehorster et al., 2020b, 2023).

There are also tools that have been made available in the form of written recipes. Recipes are basically higher-level descriptions (for instance in the form of pseudo-code) of how a problem is solved and can be used as a plan for programmers to implement their own software (for example recipes, see, e.g., Van der Steen & Bruno, 1995, pp. 3461–3462; Goldberg & Kotval, 1999, pp. 637, 639; Hooge & Camps, 2013, pp. 3–4; Niehorster et al., 2015, pp. 13–14; Hessels et al., 2020a, pp. 21–23). While recipes have the potential downside that the tool cannot directly be downloaded and used on one’s gaze data, they have the benefit that they can be integrated into any existing eye tracking data processing pipeline. Another benefit is that the researcher who implements the tool themselves will have a much better understanding of how it works than if they would have just copied a piece of code and used it without delving into its details.

Finally, there are also hardware tools such as chinrests (NIMH-NIF, 2019; MetisVidere, 2020), self-built eye trackers (Barsingerhorn et al., 2018; Hosp et al., 2020), and a button box for use with an eye tracker (Niehorster et al., 2020b).

Recognizing good tools

When building a study workflow, various considerations crop up. These considerations can be both about the tool per se (e.g., has it been validated and is it well documented?) and about one’s context (e.g., does the team possess the needed math or Python programming skills?). Here we will discuss potential indicators one may use to recognize good tools, whereas considerations one may have in choosing tools that fit one’s context will be discussed below when we consider the human side of tool use. Note that while we discuss these two aspects separately, we recognize that considerations about a tool should take one’s context into account. For instance, well-organized and well-documented code and a good readme may be of less importance to an experienced programmer. We also expect that these two sets of considerations will be used in parallel when choosing tools. For instance, regardless of the presence of good example code, a tool may be immediately judged as unsuitable because one does not have the required skills to understand or use it. Finally, one may explicitly decide not to give weight to some considerations. For instance, one may strategically decide to adopt a tool that one currently does not yet have the skills to understand or use because one deems learning to use the tool and associated skills to be a worthwhile long-term investment. Especially at the beginning of an extended research program such as a PhD project, such long-term investments to expand one’s skills might trump short-term opportunism.

Imagine searching for a tool to process eye tracking data, for instance to classify which parts of a recording are fixations. Searching the Internet for a tool that does just that, one finds multiple tools that promise to provide fixations (and other eye movement events) when fed with some gaze position signals. One would probably even find some journal articles that present a tool in detail and that make the software available (e.g., Komogortsev et al., 2010; Nyström & Holmqvist, 2010; Hein & Zangemeister, 2017; Hessels et al., 2017; van Renswoude et al., 2018). However, at first glance, many of these tools may appear to be hard to use. Furthermore, we have had mixed experiences with trying to use tools we found online. For some tools it was easy to get them to do what they promised, while we never managed to get other tools to deliver what they promised. This might have been because the tools did not run, or worse did run but checks on their output revealed that they did not deliver what they claimed. This was, for instance, due to bugs in the tool or due to assumptions made by the tool that did not apply in our intended use case. That raises the question, when examining a tool found on the Internet, how does one judge the quality of such a tool? Here we present some aspects that may inform one’s evaluation of a tool.

When assessing the quality of a tool, one should of course test whether it performs as advertised. Testing may require sufficient openness in the tool to be able to inspect its methods, and a sufficient understanding of the tool’s methods to be able to judge their appropriateness for one’s data and research question. See the section “Choosing tools that fit” below for considerations about the required skills.

Before starting the labor-intensive tool testing phase, one may wish to check for several other indicators of good tools to make a quick selection from the many search results. The following is a list of such indicators that we ourselves use:2

  • Validation: Has the tool been validated in some way, for instance in an accompanying publication in a journal, or in a text on a blog post or in the tool’s manual? Is a procedure implemented to validate updated versions of the tool, or is it validated only sporadically as part of a publication? Has the tool been validated by third parties? Is the validation replicable?

  • Examples: Does the tool come with example data on which it can be tested, and with example scripts that show how to use it on the example data?

  • Open source: Is the source code of the tool available?

  • Code documentation: Is the source code well organized and well documented? For example, are function inputs and outputs clearly described?

  • User documentation: Did the makers of the tool provide a good readme or manual? Does it have a tutorial or instruction videos? For examples, see Santini et al. (2017a), Chen et al. (2023), and Niehorster et al. (2023).

  • Active maintenance: Is the tool of a kind that needs to be actively maintained? If so, when was it last updated? Were these updates backwards-compatible or did the updates break existing workflows? Is there only a single maintainer or is the tool maintained by a team? Keep in mind that some tools may require little to no maintenance, while others may need frequent updates. For instance, the GUI of a tool may no longer work correctly on newer operating system versions, and a Python tool may crash because a package on which it depends has changed.

  • Popularity: Does the tool appear to be used a lot? Tools that are used a lot are not only well understood and accepted, but the likelihood that bugs have been spotted in the tool is also larger. It may be hard to judge whether a tool is used a lot, but a possible indicator may be the number of citations a tool receives in the scientific literature. It should be noted that the number of citations does not necessarily reflect how much a tool is used. We, for instance, suspect that Nyström and Holmqvist (2010) and Hayes and Petrov (2016) are cited more frequently than the presented tools are used. The large number of citations but infrequent use we observe in the literature may come about because the terminology used and the principle discussed in these papers are also relevant to cite on their own.

  • Support: Is there a good infrastructure around the software? For instance, is there an open forum where users can ask questions and exchange tips? Is there a place where bugs can be reported and are these reports responded to by the software’s developers in a timely fashion?

  • Purpose: What is the use case or purpose of the tool? Which eye tracker or which data formats does the tool support? Who are the intended or prioritized audience of the tool? Such information may indicate whether a particular use case will stay covered in the future or might no longer be maintained or might even be dropped. It also might provide an indication of whether the tool could become burdened with irrelevant functionality in the future.

Problem domains and available tools

In this section, we will provide a list of tools that address a wide range of problems that may be encountered in studies using eye tracking. We do not aim to be complete in this list, but instead provide a list of tools that we expect to be useful for the majority of readers. We are also aware that any such list quickly becomes out of date. Instead, this section is meant to serve as a hub, providing a sample of what tools are available and thereby a starting point for researchers to find the tools they need, or to understand the lay of the land when they endeavor to develop their own. The tools listed have been categorized by stage in the experimental workflow. That is, we first discuss tools that can be used for recording gaze data (which may include functionality for building experiments), then tools for data visualization and finally tools for data processing and analysis. Besides tools that have functionality specific to a certain stage of the experimental workflow, there are also software suites whose functionality covers multiple stages. We discuss these in a separate section.

We did not test all tools that we refer to, nor did we only include the most widely used tools because we also think it is important to expose the reader to the breadth of tools that is available and thereby help them find tools for themselves, e.g., through knowing what keywords to use when searching. Even though the vast majority of current eye trackers are video-based, many of the techniques used in the below tools are likely agnostic as to the recording technique used to generate the eye movement data. As such, many tools may work equally well for data recorded with a video-based eye tracker as with data from, for instance, an electro-oculography (EOG) or scleral search coil setup. Furthermore, in searching the literature, we found a preponderance of tools for remote eye trackers and less for wearable eye tracking. Since the discussion in this section is limited to available tools, this entails that there will be less focus on wearable eye tracking tools, despite wearable eye tracking being an area that undergoes rapid development and that has its own unique tool needs (Fu et al., 2024). In the following section, we further limit our discussion to tools that are aimed at processing eye tracking data. More general techniques that can solve parts of analysis problems encountered in eye tracking research, such as object recognition and segmentation techniques (e.g., Cheng et al., 2024a; Ravi et al., 2024; Wang & Liao, 2024) or scene reconstruction techniques (e.g., Rublee et al., 2011; Häne et al., 2013; Wang et al., 2023; Avetisyan et al., 2024) but are not aimed at the analysis of eye tracking data per se will not be discussed. Finally, in the overview below, we will only focus on tools. Adjacent topics that may be of interest to eye tracking researchers are discussed in the other parts of the current article series, such as the operationalization of study concepts using eye tracking measurements in part 2 (Hooge et al., in press) and practical considerations such as data quality, form factor, and calibration requirements in part 3 (Nyström et al., in press).

Finally, we will only discuss tools but not provide recommendations regarding which tools the reader should use because we think this is neither possible nor would it be doing the reader a favor. First, we will not provide judgments of whether the tools listed in this section are any good. Determining whether a tool is good is the responsibility of the individual researcher. The section “Recognizing good tools” discusses some signposts for good tools. Second, determining whether a tool is of use for the reader is not possible for us as authors of this article, since we lack knowledge about the reader’s context (e.g., their research question, budget, knowledge and skills, and their interests). As discussed in the section “Choosing tools that fit”, these are all factors that the reader should take into account themselves when deciding what tools to adopt into their research project. Third, the reader, and not the authors of this article, is responsible for their tool choices and for ensuring that they have sufficient skill and understanding of both their problem and potential solutions to make appropriate tool choices. If we were to make specific recommendations in this paper, we would run the risk that we shortcut the reader’s decision process. This could lead them down the wrong path or preclude their discovery that they should invest in their skills to be able to properly address their research problem.

Software suites

In this section, we discuss software suites – comprehensive tools that often encompass multiple stages of the research process involving eye trackers. Table 1 gives an overview of various software suites for recording and/or analyzing eye tracking data. It is divided into four sections: Software suites by eye tracker manufacturers; other software suites from commercial companies; open-source and commercial suites that are often used for stimulus presentation and data recording; and suites that focus on eye tracking data analysis.

Table 1.

Software suites and their functionality

Suite – Interface

Tools from eye tracker manufacturers:
Argus Science – GUI
GazePoint – GUI
ISCAN – GUI
Pupil Labs – GUI
SMI – GUI
SR Research – GUI & Python
Tobii – GUI & Websocket

Third-party commercial tools:
Blickshift – GUI & Python, R
EyeWorks – GUI & SDK
iMotions – GUI & UDP/TCP
Mangold – GUI
NYAN – GUI
Okazolab – GUI & Python, C#, Visual Basic

Stimulus presentation suites with support for recording eye tracking data:
Cedrus – GUI
E-Prime – GUI & E-Basic
OpenSesame – GUI & Python
Presentation – GUI & PCL, Python
PsychoPy – GUI & Python
PsychToolbox – MATLAB
ZING – GUI & C#

Suites for analyzing eye tracking data:
ETRAN – R
EyeFeatures – Python
EyeTrace – GUI
EyeTrack – GUI
eyetrackingR – R
ILAB – GUI & MATLAB
Gazealytics – GUI
GazeAlyze – GUI & MATLAB
GazeR – R
GraFIX – GUI
OGAMA – GUI
PerceptionToolkit – Python
pymovements – Python
PyTrack – Python

The columns indicate the following: Suite: Short name of the tool. Tools are divided into four sections (tools from eye tracker manufacturers, third party commercial tools, stimulus presentation suites with support for recording eye tracking data, suites for analyzing eye tracking data); Recording: Has the ability to record eye tracking data in the program, possibly by means of plugins; Importing: Can import recorded data; Visualizing: Can create eye tracking data visualizations; Processing/analysis: Can process eye tracking data, such as performing fixation classification and AOI analysis or calculation of metrics such as average fixation duration; Statistical analysis: Can perform statistical analysis on outcome metrics; Multi-brand: has support for eye trackers from more than one brand; Open source: Is open source; Interface: Interface through which user interacts with the program. Can be a Graphical User Interface (GUI) and/or one or multiple programming languages or communication protocols by means of which functionality of the program can be accessed or extended, or by means of which the program can be remotely controlled.

The short names used in the table refer to the following: Argus Science – ETVision and ETAnalysis (https://www.argusscience.com); Gazepoint – Gazepoint Analysis (https://www.gazept.com); ISCAN – ISCAN DQW, PRZ & GMT (https://www.iscaninc.com); Pupil Labs – Pupil-Labs Pupil Invisible/Neon Companion & Pupil Cloud (https://pupil-labs.com); SMI – SMI Experiment Center & BeGaze; SR Research – SR Research Experiment Builder, Data Viewer & WebLink (https://www.sr-research.com); Tobii – Tobii Pro Lab (https://www.tobii.com);

Blickshift – Blickshift Recorder & Blickshift Analytics (https://www.blickshift.com); EyeWorks (https://www.eyetracking.com); iMotions – iMotions Lab (https://imotions.com); Mangold – Mangold Vision (https://www.mangold-international.com); NYAN – Interactive Minds NYAN (https://www.interactive-minds.com); Okazolab – Okazolab EventIDE (https://www.okazolab.com);

Cedrus – Cedrus SuperLab (https://cedrus.com); E-Prime – Psychology Software Tools E-Prime (https://pstnet.com); OpenSesame – Mathôt et al. (2012) (https://osdoc.cogsci.nl); Presentation – Neurobehavioral Systems Presentation (https://www.neurobs.com); PsychoPy – Peirce (2007); Peirce et al. (2019) (https://www.psychopy.org); PsychToolbox – Brainard (1997); Pelli (1997); Kleiner et al. (2007) (http://psychtoolbox.org); ZING – ZING (Hosp & Wahl, 2023b) & ZERO (Hosp & Wahl, 2023a) (only for virtual reality);

ETRAN – Zhegallo and Marmalyuk (2015, https://github.com/PMarmalyuk/ETRAN/); EyeFeatures – https://github.com/hse-scila/EyeFeatures; EyeTrace – Kübler et al. (2015), including the experimenter plugin (Otto et al., 2018); EyeTrack – EyeTrack, EyeDoctor & EyeDry (Abbott, 2011, see https://websites.umass.edu/eyelab/software/) (only for reading studies); eyetrackingR – Dink and Ferguson (2015); ILAB – Gitelman (2002); Gazealytics – Chen et al. (2023); GazeAlyze – Berger et al. (2012); GazeR – Geller et al. (2020); GraFIX – Saez de Urabain et al. (2015); OGAMA – Voßkühler et al. (2008) (http://www.ogama.net); PerceptionToolkit – Perception engineer’s toolkit (Kübler, 2020); pymovements – Krakowczyk et al. (2023); Jakobi et al. (2024); PyTrack – Ghose et al. (2020)

Perusing Table 1, one can make several interesting points. First, users of suites from the first two sections often have a tool on their hands in which they can perform most steps of working with an eye tracker, from data recording to the processing of their data into eye tracking measures. This stands in contrast to users of tools from the third section, which mostly provide only data recording functionality. Users of such tools will have to look to other tools or self-developed scripts for the visualization and processing of eye tracking data. While it is possible to create such scripts in, for instance, MATLAB and Python, and one can thus integrate them with stimulus presentation and data acquisition scripts created using toolboxes such as PsychToolbox (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007) and PsychoPy (Peirce, 2007; Peirce et al., 2019), these toolboxes do not provide such functionality out of the box.

Second, with the exception of PyTrack (Ghose et al., 2020) and eyetrackingR (Dink & Ferguson, 2015), none of the tools listed in the table incorporate functionality for statistical analysis of eye tracking data, such as a comparison of means between two groups or conditions. For such analyses, the user will normally have to export their measures of interest from the software suite and perform further analysis in statistical environments such as SPSS and R. It should be noted that a significant number of tools advertise being able to create export files that can be directly imported into such statistical analysis environments.

Third, not unexpectedly, software suites developed by eye tracker manufacturers only have support for eye trackers from a single brand, whereas the other software suites predominantly have support for eye trackers from multiple manufacturers or their data.

Data recording

In this section, we will discuss software tools for eye tracking data acquisition.

Plugins for stimulus presentation programs

Imagine a face perception researcher who has already set up a successful research line and in that process amassed a range of experiments that were built using a GUI in a stimulus presentation tool like PsychoPy. Now, for a next study, they want to use an experiment paradigm similar to a previous study, but add eye tracking. An obvious route to creating this new experiment is to take a previous experiment’s implementation and integrate eye tracking into it. Is this possible? Luckily for our face perception researcher, the answer is that they will likely be able to integrate eye tracking into their existing experiment. The stimulus presentation tool they chose, PsychoPy, comes with multiple plugins that enable communication between PsychoPy and a host of eye trackers (PyGaze, Dalmaijer et al., 2014; and the ioHub component of PsychoPy, Peirce et al., 2019). PsychoPy can furthermore be extended to communicate with more eye trackers by using eye tracker control libraries (a collection of functions to control an eye tracker, see below). Researchers who have built their experiments using other tools will likely also be able to record eye tracking data using plugins. Similarly to PsychoPy, OpenSesame (Mathôt et al., 2012) comes with plugins for integrating eye tracking and can be extended using eye tracker control libraries. PsychToolbox comes with the EyelinkToolbox (Cornelissen et al., 2002) and can be extended with several other eye tracker control libraries (e.g., Gibaldi et al., 2017; Niehorster et al., 2020a; Niehorster & Nyström, 2020). The Opticka experiment GUI for PsychToolbox (Andolina, 2024), for instance, has support for several other eye trackers built in. The situation is similar for users of tools such as E-Prime (Psychology Software Tools) and Presentation (Neurobehavioral Systems) in that eye tracker extensions are available for both and that eye tracker control libraries can be used.
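
To make the plugin route concrete, the sketch below records gaze in a PsychoPy script via its ioHub interface. It is a minimal sketch, not an official recipe: it assumes a reasonably recent PsychoPy installation, uses the built-in mouse-simulated eye tracker so that it runs without hardware, and the exact configuration key and settings for a real device (e.g., 'eyetracker.hw.tobii.EyeTracker') depend on the plugin for one’s eye tracker and the PsychoPy version.

```python
# Minimal sketch: recording gaze via PsychoPy's ioHub common eye tracker interface.
# The mouse-simulated tracker is used here so no eye tracking hardware is needed.
from psychopy import core, visual
from psychopy.iohub import launchHubServer

win = visual.Window((1280, 720), units='pix', fullscr=False)

# For a real device, replace the key with the manufacturer's plugin
# (e.g., 'eyetracker.hw.tobii.EyeTracker') and add its settings.
iohub_config = {'eyetracker.hw.mouse.EyeTracker': {}}
io = launchHubServer(window=win, **iohub_config)
tracker = io.devices.tracker

tracker.runSetupProcedure()            # run the tracker's calibration routine
tracker.setRecordingState(True)        # start streaming gaze samples

fixation = visual.TextStim(win, text='+')
clock = core.Clock()
while clock.getTime() < 2.0:           # present a stimulus for 2 s while recording
    fixation.draw()
    win.flip()
    gaze = tracker.getLastGazePosition()   # latest (x, y) in window units, if available

tracker.setRecordingState(False)
io.quit()
win.close()
```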

When support for a specific brand or model of eye tracker is not available in one’s stimulus presentation program, an extension may be available from the eye tracker manufacturer, or have been posted online by another user of the stimulus presentation program. Using such extensions from sources other than the stimulus presentation program or extending their functionality using eye tracker control libraries may require some programming skills. The required skills may be no more than being able to take example code and adapt it, or to copy-paste code for controlling the eye tracker into some custom code element at the right location in one’s existing experiment.

Data recording programs

It is possible to record eye tracking data without presenting a stimulus. Some programs, often those that come with wearable eye trackers such as Tobii Pro Glasses 2 Controller and Pupil Recorder, are made for the sole purpose of recording data. They may run on a standard computer but also on a mobile phone (e.g., the Pupil Neon and Invisible Companion apps, the SMI ETG 2w smart recorder and the Argus Science ETPhone app). Besides such dedicated data recording programs, the stimulus presentation programs mentioned in the previous section as well as manufacturer software can also be used just to record eye tracking data. This may be desirable, for instance, when doing eye tracking studies using world-bound eye trackers without a screen (see, e.g., Gredebäck et al., 2010; Nyström et al., 2017; Thorup et al., 2018; and Valtakari et al., 2021, for a discussion). There are also dedicated tools such as Blickshift Recorder that can be used for this purpose to record from an eye tracker, a scene video and other data streams. Finally, there are tools, such as the lab streaming layer ecosystem (Kothe et al., 2024), for recording and directly streaming eye tracking data over a network connection.
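
As an illustration of the streaming route, the sketch below publishes gaze samples as a lab streaming layer stream using pylsl, the Python interface to lab streaming layer; the stream name, channel layout, and the fake 60-Hz gaze samples are illustrative choices of our own, and a recorder elsewhere on the network (e.g., LabRecorder) would pick the stream up and save it.

```python
# Minimal sketch: publish gaze samples as a lab streaming layer (LSL) stream.
# Stream name, type, channel count and the fake data are illustrative choices.
import random
import time

from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name='MyGaze', type='Gaze', channel_count=2,
                  nominal_srate=60, channel_format='float32',
                  source_id='my_eyetracker_001')
outlet = StreamOutlet(info)

for _ in range(600):                         # ~10 s of fake 60 Hz gaze data
    x, y = random.random(), random.random()  # replace with real gaze coordinates
    outlet.push_sample([x, y])
    time.sleep(1 / 60)
```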

Eye tracker control libraries

An eye tracker control library is a collection of programming functions that enable the control of an eye tracker. Controlling an eye tracker entails actions such as performing a calibration and starting or stopping data acquisition. Many eye tracker manufacturers provide such eye tracker control libraries for their products in the form of a software development kit (SDK). These support one or sometimes several popular programming languages, such as C, C#, Python and MATLAB. Readers with programming skills may be able to use SDKs for integrating the eye tracker in the stimulus presentation programs mentioned above, or to develop their own tools for controlling their eye tracker.
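
As an example of what working directly with a manufacturer SDK looks like, the sketch below uses the Python binding of the Tobii Pro SDK (the tobii_research package) to subscribe to gaze samples. It is a minimal sketch; other manufacturers’ SDKs differ in naming, but many follow a similar find-device-then-subscribe pattern.

```python
# Minimal sketch: receiving gaze samples via a manufacturer SDK
# (here, the Python binding of the Tobii Pro SDK).
import time

import tobii_research as tr

def on_gaze(gaze_data):
    # gaze_data is a dict; the gaze points on the display area are given
    # in normalized [0, 1] coordinates for the left and right eye.
    print(gaze_data['left_gaze_point_on_display_area'],
          gaze_data['right_gaze_point_on_display_area'])

eyetrackers = tr.find_all_eyetrackers()
if not eyetrackers:
    raise RuntimeError('No eye tracker found')
tracker = eyetrackers[0]

tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, on_gaze, as_dictionary=True)
time.sleep(5)                                        # record for 5 seconds
tracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, on_gaze)
```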

The SDKs have also been used to develop convenience packages for using an eye tracker in stimulus presentation environments. Some of these are specific to eye trackers from a given manufacturer (SMI: Niehorster & Nyström, 2020; SR Research: Cornelissen et al., 2002; Tobii: Niehorster et al., 2020a) or even a specific commercial (Tobii Pro Glasses 2: De Tommaso and Wykowska, 2019) or open-source (Sogo, 2017) eye tracker. Other tools are generic interfaces that allow controlling eye tracker systems from several manufacturers using a single interface (Dalmaijer et al., 2014; and the ioHub component in Peirce et al., 2019, for screen-based eye trackers; and Hosp & Wahl, 2023a, for VR). By providing a single uniform interface to a range of eye trackers, such generic tools may not expose all the capabilities of an individual eye tracker to the researcher.

Data visualization and exploration

Data visualization may be an important part of the research process. Visualizations of the recorded eye tracking data can be useful for, for instance, quality checks, verifying that processing steps in the study analysis pipeline (e.g., fixation classification) function as required, providing insight into the analysis pipeline and making it more explainable, exploring participant behavior, or providing intuitive insight into a study’s findings (see, e.g., Koch et al., 2023). The visualization of eye tracking data is a vast field where many purpose-built techniques (Kurzhals et al., 2017a) have been developed (see Blascheck et al., 2014, 2017, for overviews as well as example visualizations; and Sundstedt & Garro, 2022, for an overview of tools specific to 3D environments) and evaluated (see, e.g., Claus et al., 2023). Here we will briefly consider tools for common visualizations related to eye tracking. These can be roughly divided into two categories, visualizations that only show the spatial distribution of gaze such as heatmaps (Wooding, 2002b), and visualizations that also contain information about the temporal component of looking behavior, such as scanpaths and scarf plots (Camilli et al., 2008; Wu & Munzner, 2015).

Some visualization tools are dedicated to producing a specific visualization. Examples of dedicated heatmap tools are tools for offline (Wooding, 2002a, b; Špakov & Miniotas, 2007; Stellmach et al., 2010a) or real-time (Duchowski et al., 2012; Pfeiffer & Memili, 2016) generation of heatmaps, eyeScrollR (Larigaldie et al., 2024) for making heatmaps for gaze data recorded on scrolling webpages, tools for statistical analysis of such maps (Caldara & Miellet, 2011; Lao et al., 2017), and the GazeAlyze toolbox (Berger et al., 2012) for generating heatmaps in MATLAB. While a standard heatmap visualization only shows the spatial distribution of gaze, the Heatmap Explorer tool (Tula et al., 2016; see also Wooding, 2002a) allows visualizing the temporal component of gaze data by making it possible to generate heatmaps for specific temporal intervals. Other visualizations that contain information about the temporal component of looking behavior and that can be generated by dedicated tools include scanpaths (Camilli et al., 2008; D’Angelo et al., 2019), time plots (Räihä et al., 2005), arrow plots (Hooge & Camps, 2013), scarf plots (Wu and Munzner, 2015; https://gazeplotter.com), techniques for producing scale-filtered scanpaths (Rodrigues et al., 2018), and a directional map-based technique (Peysakhovich & Hurter, 2018a) that emphasizes saccade directions in rendered scanpaths.
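
To give a flavor of what such heatmap tools compute under the hood, the sketch below builds a basic Gaussian-smoothed heatmap from raw gaze positions with NumPy and SciPy. The screen size, the simulated gaze data, and the 40-pixel kernel width are illustrative assumptions; dedicated tools typically scale the kernel to, for example, 1–2° of visual angle for the setup at hand.

```python
# Minimal sketch: a Gaussian-smoothed gaze heatmap over a stimulus of known
# pixel size. Screen size, gaze data and kernel width are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

screen_w, screen_h = 1920, 1080
rng = np.random.default_rng(0)
gaze_x = rng.normal(960, 200, 5000)      # replace with recorded gaze positions (px)
gaze_y = rng.normal(540, 150, 5000)

counts, _, _ = np.histogram2d(gaze_y, gaze_x,
                              bins=(screen_h, screen_w),
                              range=[[0, screen_h], [0, screen_w]])
heatmap = gaussian_filter(counts, sigma=40)   # smooth with a 40-px Gaussian kernel

plt.imshow(heatmap, cmap='hot', origin='upper')
plt.axis('off')
plt.show()
```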

Instead of being dedicated to a single visualization, most visualization tools support producing a range of different visualizations. Here we will give some examples for different application areas to guide the reader to tools of potential interest. The reader who does not find what they are looking for among these examples or is looking for a more complete overview is referred to Blascheck et al. (2014, 2017), Kurzhals et al. (2017a), and Sundstedt and Garro (2022). Example visualization tools are VERP Explorer (Demiralp et al., 2017) and Gazealytics (Chen et al., 2023) which focus on gaze sequence analysis, frameworks for analysis of eye tracking data captured in VR (Ugwitz et al., 2022) and AR (Pathmanathan et al., 2023; David et al., 2024) environments, the work by Burch et al. (2019, 2021) focused on cartography, by Tang et al. (2012) for reading research, and by Menges et al. (2020) for web stimuli.

Data processing and analysis

There are GUI applications (e.g., Voßkühler et al., 2008; Kübler et al., 2015) as well as program libraries written in Python (Ghose et al., 2020; Krakowczyk et al., 2023), R (Geller et al., 2020) and MATLAB (Gitelman, 2002) that can perform many of the below processing steps, such as data filtering, fixation classification and AOI analysis. These will not be mentioned repeatedly in the below sections but are discussed in the “Software suites” section above.

Data filtering

Many operations that one may apply to eye tracking data may act as a filter. A filter is any manipulation of an (eye tracker) signal that may alter the frequency content of the signal. This article is not about filter theory; for more on that topic, see, for instance, Proakis and Manolakis (1996). An example of an operation that has filter properties is downsampling. Downsampling reduces the bandwidth of the signal, i.e., it reduces and removes high-frequency aspects of the signal. Filters, depending on their properties, may affect the eye movement properties derived from the signal, such as saccade peak velocity (Juhola, 1986; Mack et al., 2017).

Filtering may be applied at multiple stages during eye tracking data analysis and for different purposes. Here we discuss example motivations to filter eye tracking data, along with references that provide further discussion of these use cases and tools to address them. Besides the aforementioned downsampling, filters are for instance often used to remove unwanted components from the eye tracker signal. Like any signal, eye tracking signals contain not only the behavior of interest but also unwanted variability. Sources of this variability, often called “noise”, include human behavior and measurement noise of the eye tracker (Mack et al., 2017; Niehorster et al., 2020d, 2021). For various uses it is desirable to filter out this noise. For instance, filtering is often used for interactive applications where gaze data is used online (Špakov, 2012; Feit et al., 2017). Removing unwanted components from eye tracking signals requires great care because “denoising [...] is challenging since eye movements are usually non-repetitive making the [...] signal unpredictable.” (Pettersson et al., 2013, p. 2). Many filters have been designed for denoising (or smoothing) the gaze position signal. Some have been designed for online use (e.g., Stampe, 1993; Olsson, 2007; Chartier & Renaud 2008; Toivanen, 2016), and others for offline use (Engelken et al., 1990; Juhola, 1991; Pekkanen & Lappi, 2017; Blignaut, 2019). There are also methods that are specifically aimed at removing sudden large jumps in the gaze signal (Stampe, 1993; Abdulin et al., 2017; see also Chartier & Renaud, 2008).
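
As a generic illustration of offline denoising (deliberately simpler than the cited filters), the sketch below applies a moving-median filter followed by a Savitzky–Golay filter to a synthetic noisy gaze position signal. The sampling rate and window lengths are illustrative assumptions, and naive smoothing like this can distort saccade dynamics if the windows are chosen too long.

```python
# Minimal sketch: offline smoothing of a gaze position signal.
# Sampling rate and window lengths are illustrative, not recommendations.
import numpy as np
from scipy.signal import medfilt, savgol_filter

fs = 300                                     # sampling frequency in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
x = np.where(t < 0.5, 0.0, 5.0)              # a 5-degree position step ("saccade")
x = x + rng.normal(0, 0.2, t.size)           # plus measurement noise
x[100] = 25.0                                # and a single-sample spike

x_med = medfilt(x, kernel_size=3)            # median filter removes the spike
x_smooth = savgol_filter(x_med, window_length=11, polyorder=2)   # then smooth
```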

Besides smoothing data, another common use for filters is to produce velocity and acceleration signals. Velocity signals may, for instance, be used to study saccade dynamics (Tabernero & Artal, 2014; Hooge et al., 2015) or be used as part of an event classification algorithm (e.g., Van der Steen & Bruno, 1995; Engbert & Kliegl, 2003; Nyström & Holmqvist, 2010). The simplest way to compute a velocity signal is to subtract adjacent gaze positions from each other and divide by the inter-sample interval. This works well if the signal has very low noise (high precision), but not if the signal is noisy and measured at a high frequency. In this case, this so-called two-point central difference method has problematic characteristics in that the high-frequency noise in the signal is amplified (e.g., Bahill et al., 1982), potentially obscuring the actual eye movement. The filter properties of different ways to calculate velocity signals and their impact on the resulting estimated eye movement dynamics have received much research attention (e.g., Bahill et al., 1981; Bahill et al., 1982; Bahill & McDonald, 1983; Jantti et al., 1983; Inchingolo & Spanio, 1985; Juhola, 1986; Mack et al., 2017). Many filters have been designed to suppress the unwanted noise when calculating derivatives of eye tracker signals (Engelken et al., 1982; Inchingolo & Spanio, 1985; Engelken & Stevens, 1990; Das et al., 1996; Nyström & Holmqvist, 2010).
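
The difference between the naive two-point approach and a differentiating filter can be illustrated in a few lines. In the sketch below, a Savitzky–Golay differentiation filter stands in for the dedicated filters cited above; the simulated saccade, sampling rate, and window settings are illustrative assumptions rather than recommendations.

```python
# Minimal sketch: computing gaze velocity from a position signal.
# The simulated saccade and the Savitzky-Golay settings are illustrative.
import numpy as np
from scipy.signal import savgol_filter

fs = 300                                     # sampling frequency in Hz (assumed)
dt = 1 / fs
t = np.arange(0, 1, dt)
rng = np.random.default_rng(2)
x = 8 / (1 + np.exp(-(t - 0.5) * 200))       # a smooth, roughly 8-degree "saccade"
x = x + rng.normal(0, 0.1, t.size)           # plus measurement noise

vel_naive = np.gradient(x, dt)               # two-point central difference (noisy)
vel_sg = savgol_filter(x, window_length=11, polyorder=2, deriv=1, delta=dt)
print(vel_naive.max(), vel_sg.max())         # compare peak velocities of both methods
```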

Finally, data loss may occur in eye tracking signals, presenting as gaps in the eye tracking data. Gucciardi et al. (2022) present a filter for the filling of gaps in eye tracking signals, but it should be noted that many event classification algorithms also include procedures for such gap filling (e.g., Hessels et al., 2017; van Renswoude et al., 2018).

Converting pixels and millimeters to degrees

Eye tracking measures such as saccade amplitude and gaze direction may be reported by the eye tracker or the tools one uses in various units, such as pixels and millimeters, but are normally reported in degrees (e.g., following the guidelines of Dunn et al., 2023, for reporting eye- and vision-related quantities). If the angle in question is below 10°, then converting from pixels or millimeters to degrees is simple. In this situation, the small angle approximation (Wikipedia, 2024) can be used and the conversion can be done with a constant scaling factor (e.g., the number of pixels per degree) computed for one’s setup. Doing so incurs an error of less than 1% for angles under 10°, and thus works fine if one, for instance, works with a remote eye tracker on a small screen. When working with visual stimuli, it may further be useful to come armed with the rule of thumb that the nail of one’s thumb subtends about 1.5° at arm’s length (O’Shea, 1991), and with the knowledge that 1cm on the screen subtends 1° at an eye-to-screen distance of 57cm. The small angle method applies to converting distances (or sizes) centered on the screen to angles. Gaze positions on a screen are, however, often provided in pixels with an origin in the top-left of the screen. If one wishes to convert such gaze positions to gaze directions in degrees, one additionally needs to make the reported gaze position relative to the center of the screen to be able to apply the small angle method.
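
Under the small angle approximation, the conversion reduces to a single pixels-per-degree factor computed from the physical screen size, the resolution, and the viewing distance. The sketch below uses made-up setup values that should be replaced with one’s own measurements.

```python
# Minimal sketch: pixels-to-degrees conversion using the small angle
# approximation. Setup values are made up; use your own measurements.
import math

screen_width_cm = 53.0        # physical width of the display
screen_width_px = 1920        # horizontal resolution
viewing_distance_cm = 65.0    # eye-to-screen distance

cm_per_deg = viewing_distance_cm * math.tan(math.radians(1))
px_per_deg = cm_per_deg / screen_width_cm * screen_width_px

amplitude_px = 240
amplitude_deg = amplitude_px / px_per_deg
print(f'{amplitude_px} px is about {amplitude_deg:.2f} deg at this distance')
```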

When working with extents exceeding 10°, such as on bigger screens, or even with wearable eye trackers where gaze shifts of 145° may well be observed (Hooge et al., 2024), the small angle method would lead to huge errors. In this case, proper methods such as trigonometry should be used for converting distances to angles. Hutton (2019) provides an online tool that does the heavy lifting. This tool can also handle converting distances that are not centered on the screen. One complication of working on a flat screen is that the viewing distance to various locations on the screen differs. This is taken into account when using trigonometry, whether through this tool or otherwise. Using the tool by Hutton (2019), however, still leaves room for some errors, in that it calculates the horizontal and vertical extent of objects separately. When working with angles instead of Cartesian coordinates, the horizontal and vertical dimensions are no longer independent, which means that the order of the rotations matters. For a full treatment of this problem in eye tracking and correct solutions, the reader is referred to Haslwanter (1995), or to a set of conversion functions that ship with PsychoPy (https://psychopy.org/api/tools/monitorunittools.html).
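
For completeness, the sketch below converts an on-screen gaze position (in pixels, origin in the top-left) to horizontal and vertical gaze angles using plain trigonometry. It is a simplified illustration that assumes the eye sits directly in front of the screen center and adopts one particular rotation order (azimuth before elevation); as noted above, see Haslwanter (1995) or the PsychoPy conversion functions for a full treatment. The setup values are made up.

```python
# Minimal sketch: converting an on-screen gaze position (pixels, origin top-left)
# to gaze angles with trigonometry. Assumes the eye is directly in front of the
# screen center; setup values are made up.
import math

screen_w_px, screen_h_px = 1920, 1080
screen_w_cm, screen_h_cm = 53.0, 30.0
distance_cm = 65.0                         # eye-to-screen-center distance

def pixels_to_angles(x_px, y_px):
    # shift the origin to the screen center and convert to centimeters
    x_cm = (x_px - screen_w_px / 2) * screen_w_cm / screen_w_px
    y_cm = (screen_h_px / 2 - y_px) * screen_h_cm / screen_h_px  # up is positive
    azimuth = math.degrees(math.atan2(x_cm, distance_cm))
    elevation = math.degrees(math.atan2(y_cm, math.hypot(distance_cm, x_cm)))
    return azimuth, elevation

print(pixels_to_angles(1920, 0))           # top-right corner of the screen
```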

Event classification

Depending on their research question, many researchers are interested in fixations, saccades, blinks and perhaps other events. However, this is not what most eye trackers deliver. Instead, the eye tracking signal consists of a regularly sampled sequence of eye orientations or gaze positions indicating where the participant looked, often at a high sampling frequency. As such, a processing step is needed that applies labels to parts of the eye tracker signal, segmenting the signal into episodes that are meaningful and usable for the researcher. Such methods that process the eye tracking signal into meaningful units such as fixations and saccades are called event classifiers (or event detectors). Classification can, in principle, be performed manually, but this is rarely a good option if algorithmic approaches are available (Hooge et al., 2018). There are a lot of different methods for labeling fixations, saccades and other events in an eye tracker signal (Hein & Zangemeister, 2017, provide a comprehensive overview of methods suitable for data from video-based eye trackers, but also EOG and other eye tracking techniques). These different methods can show marked differences in terms of the number of classified fixations and saccades and their properties (e.g., Andersson et al., 2017; Hessels et al., 2017), although Hooge et al. (2022a) have recently argued these differences may be significantly reduced once one considers adequate post-processing of the classified fixations or saccades. In this section, we will provide a brief overview of various methods for the classification of fixations, saccades and other events in eye tracking data.

Fixation and saccade classification

The operationalization of fixation may depend on the frame of reference in which the eye movement data is expressed, and thereby on whether the eye tracker is fixed to the world or to the head. For instance, when an observer sporting a wearable eye tracker walks over a path and fixates a flower along the side of the path, they produce a gaze fixation that is stable in the world with respect to the flower, but an eye orientation that continuously changes with respect to the head as they walk past the flower. Lappi (2016), Hessels et al. (2018b), and Nyström et al. (in press) provide extensive discussions of this point.

We will first consider the classification of fixations and saccades in world-referenced signals, such as gaze position on a screen. Such classifiers for fixations and saccades can be divided into two categories, so-called fixation pickers and saccade pickers (Karn, 2000). Which type of classifier one may wish to use depends on whether one is interested primarily in fixations, or in saccades. Fixation pickers focus on labeling fixations, and employ explicit rules for determining which segments of an eye tracker signal meet the criteria of a fixation (e.g., the recorded gaze positions are within a certain distance from each other for a certain minimum time). Saccade pickers on the other hand have explicit rules for labeling saccades (e.g., eye velocity exceeds a certain minimum speed). Methods for classifying saccades in eye tracking signals have been around for a long time (e.g., Baloh et al., 1980; Juhola et al., 1986; Van der Steen & Bruno, 1995), including even methods that are locally adaptive to the noise in the eye tracking signal (Tole & Young, 1981; Juhola et al., 1985; see also, e.g., Engbert & Kliegl, 2003; Nyström & Holmqvist, 2010; Mould et al., 2012). There are saccade classifiers that focus on specific use cases, such as detecting saccades during smooth pursuit (Sauter et al., 1991; Liston et al., 2013; Daye & Optican, 2014; Niehorster et al., 2015), for online detection of saccades (Han et al., 2013; Schweitzer & Rolfs, 2020; Elmadjian et al., 2023) and methods specialized for microsaccades (Engbert & Kliegl, 2003; Bettenbühl et al., 2010; Mihali et al., 2017), or for VR environments (Diaz et al., 2013). Various fixation classifiers have also been introduced (e.g., Wijnen & Groot, 1984; Salvucci & Goldberg, 2000; Blignaut, 2009; Komogortsev et al., 2010; Olsen, 2012; Krassanakis et al., 2014), including some that are designed to provide robust classification of low quality data such as observed in infant participants (Hessels et al., 2017; van Renswoude et al., 2018), or that are specialized for data recorded in VR environments (Weber et al., 2018; Llanes-Jurado et al., 2020). Implementations of a selection of fixation and saccade classification algorithms and common post-processing routines are available from Hooge et al. (2022a).
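
To make concrete what a simple saccade picker does, the sketch below implements a bare-bones fixed-velocity-threshold scheme (in the spirit of I-VT) and collects the remaining stretches as fixation candidates. The 30°/s threshold and 60-ms minimum fixation duration are illustrative assumptions, and the published algorithms cited above are considerably more sophisticated (e.g., adaptive thresholds, noise handling, and post-processing of the classified events).

```python
# Minimal sketch of a fixed-threshold, velocity-based classifier (I-VT style).
# Threshold and minimum fixation duration are illustrative, not recommendations.
import numpy as np

def classify_ivt(x_deg, y_deg, fs, vel_threshold=30.0, min_fix_dur=0.060):
    """Label each sample as saccade (True) and collect fixation candidates."""
    dt = 1 / fs
    vx = np.gradient(x_deg, dt)
    vy = np.gradient(y_deg, dt)
    speed = np.hypot(vx, vy)                   # gaze speed in deg/s
    is_saccade = speed > vel_threshold

    fixations = []                             # (onset, offset) in seconds
    start = None
    for i, sac in enumerate(np.append(is_saccade, True)):   # sentinel closes last run
        if not sac and start is None:
            start = i
        elif sac and start is not None:
            if (i - start) * dt >= min_fix_dur:
                fixations.append((start * dt, i * dt))
            start = None
    return is_saccade, fixations

# usage: is_sac, fixations = classify_ivt(x, y, fs=300)
```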

There are also dedicated algorithms for classifying eye movements in data from wearable eye trackers. Examples of such algorithms are those of Reimer and Sodhi (2006), Larsson et al. (2016), Hessels et al. (2020b), and Kothari et al. (2020). This is an area of active research (e.g., Kinsman et al., 2012; Drews & Dierkes, 2024).

Classification of other events

Besides algorithms focused on classifying fixation or saccades, there are also algorithms that classify other types of events, such as smooth pursuit and post-saccadic oscillations (PSO). These algorithms often classify these events alongside fixations and saccades. For instance, Komogortsev and Karpov (2013), Larsson et al. (2015), Pekkanen and Lappi (2017), Startsev et al. (2019), and Dar et al. (2021) additionally classify pursuit, and Larsson et al. (2013) and Zemblys et al. (2018, 2019) PSO. Santini et al. (2016) and Elmadjian et al. (2023) provide online classifiers for fixations, saccades and pursuit.

Blink classification

Besides the above events that describe movement of the eye or lack thereof, one may also wish to classify blinks, i.e., motions of the eyelid. Blink classification can for instance be used when blinks themselves are of interest (see Nyström et al., 2024) or used to separate data loss due to blinks from other causes. There are various methods for doing so, based on a video of the eye (e.g., Sanchis-Jurado et al., 2020), on the pupil signal of an eye tracker (Pedrotti et al., 2011; Mathôt, 2013; Hershman et al., 2018), and on an eye openness signal provided by some eye trackers (Nyström et al., 2024).
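
A crude version of the pupil-signal approach can be sketched as follows: find stretches of missing pupil data of blink-like duration and pad them with a safety margin. The duration bounds and margin below are illustrative assumptions, and the cited methods additionally handle the distorted pupil samples around blink on- and offsets, which this sketch ignores.

```python
# Minimal sketch: labeling blinks as gaps of blink-like duration in the
# pupil-size signal, padded with a margin. Thresholds are illustrative.
import numpy as np

def detect_blinks(pupil, fs, min_dur=0.030, max_dur=0.500, margin=0.020):
    """Return (onset, offset) sample indices of putative blinks."""
    missing = np.isnan(pupil)
    padded = np.concatenate(([False], missing, [False]))
    onsets = np.flatnonzero(~padded[:-1] & padded[1:])
    offsets = np.flatnonzero(padded[:-1] & ~padded[1:])

    blinks = []
    m = int(round(margin * fs))
    for on, off in zip(onsets, offsets):
        dur = (off - on) / fs
        if min_dur <= dur <= max_dur:   # skip single-sample dropouts and long data loss
            blinks.append((max(on - m, 0), min(off + m, len(pupil))))
    return blinks
```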

Using an algorithm

What technical problems may one encounter when trying to use a classifier found online? One problem is that the data provided by the eye tracker does not match the expected input format for the classifier. In this case, a data input function has to be written to solve issues such as ensuring that timestamps have the right unit, that gaze data columns have the expected names and that gaze data are converted to degrees (see the section “Converting pixels and millimeters to degrees”) if the classifier requires that but the gaze data provided by the eye tracker is expressed in pixels on a computer screen. Another consideration might be how to evaluate the performance of event classification algorithms. Startsev and Zemblys (2023) provide a discussion of this problem.
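
In practice, such a data input function is often only a handful of lines, for instance with pandas. The column names, the tab-separated format, the microsecond timestamps, and the pixels-per-degree factor in the sketch below are made-up examples of what an eye tracker might export and should be adapted to one’s own data.

```python
# Minimal sketch of a data-input function that adapts an eye tracker's export
# to what a classifier expects. Column names and units are made-up examples.
import pandas as pd

PX_PER_DEG = 41.0                        # from one's own setup, see previous section

def load_for_classifier(path):
    df = pd.read_csv(path, sep='\t')
    df = df.rename(columns={'gaze_x_px': 'x', 'gaze_y_px': 'y',
                            'device_time_us': 't'})
    df['t'] = (df['t'] - df['t'].iloc[0]) / 1e6   # microseconds -> seconds from start
    df['x'] = df['x'] / PX_PER_DEG                # pixels -> degrees (small-angle scaling)
    df['y'] = df['y'] / PX_PER_DEG
    return df[['t', 'x', 'y']]
```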

Area of interest analysis

One common use of eye trackers is to answer questions about what participants look at. The analysis approach for answering such a question often involves areas of interest (AOIs, also known as regions of interest [ROIs], or interest areas [IAs]). AOIs are regions in the visual stimulus delineating where, e.g., objects of interest are located. When performing an AOI analysis, gaze data are tested for whether they fall on the AOI (AOI hits). This enables determining what objects or areas are looked at, but also other aspects of spatiotemporal looking behavior such as when objects are looked at and for how long (Goldberg & Helfman, 2010a).

What methods and tools are there for creating AOIs and using AOIs to analyze gaze data of participants looking at static stimuli (e.g., images)? Many of the software suites discussed above support AOI analysis based on hand-drawn AOIs, for instance using shapes such as rectangles, ovals and polygons. The Titta toolbox for Tobii eye trackers (Niehorster et al., 2020a) provides examples in MATLAB and Python for performing AOI analysis using binary AOI mask images that can be created in any photo editing software. Besides these approaches, tools have been published for creating AOIs that are robust to offsets in eye tracking data (Hessels et al., 2016), and for creating AOIs based on salience modeling of visual stimuli (Privitera & Stark, 2000; Fuhl et al., 2018b) or on the spatial distribution of gaze data (Santella & DeCarlo, 2004; Fuhl et al., 2018a). Nuthmann et al. (2017) provide a method and toolbox for using gridded AOIs, which also provides for statistical evaluation of such data. Finally, Rim et al. (2021) introduced an alternative method that avoids having to delineate AOIs but instead uses points of interest.
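
The core of a basic AOI analysis with rectangular AOIs is a point-in-rectangle test per fixation followed by an aggregation step, as in the sketch below. The AOI coordinates and the fixation list are made-up examples, and real analyses would add, for example, per-trial bookkeeping and measures such as time to first AOI hit.

```python
# Minimal sketch: rectangular AOI hit testing and total dwell time per AOI.
# AOI coordinates and fixations are made-up examples.
aois = {                                   # name: (left, top, right, bottom) in px
    'face':   (700, 200, 1100, 700),
    'object': (1300, 500, 1700, 900),
}

fixations = [                              # (x, y, duration in s) from a classifier
    (850, 400, 0.310),
    (1500, 650, 0.180),
    (300, 950, 0.220),
]

def aoi_hit(x, y):
    for name, (l, t, r, b) in aois.items():
        if l <= x <= r and t <= y <= b:
            return name
    return 'white space'                   # gaze outside all AOIs

dwell = {}
for x, y, dur in fixations:
    name = aoi_hit(x, y)
    dwell[name] = dwell.get(name, 0.0) + dur
print(dwell)    # e.g., {'face': 0.31, 'object': 0.18, 'white space': 0.22}
```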

What to do with dynamic stimuli (e.g., videos and animations) where AOIs are not stationary over time? Here we discuss the case for screen-based eye tracking studies. AOI tools for a specific type of dynamic stimulus, those created by the participant themselves with the scene camera of a wearable eye tracker, are discussed in the wearable eye tracking section below. Various software suites support AOI analyses on videos and other dynamic stimuli. A common method implemented in such software is for researchers to define AOIs on key frames, after which the software determines the AOI position and shape on intermediate frames using interpolation. There are also dedicated tools for AOI analysis on dynamic stimuli. Papenmeier and Huff (2010) provide an automated method for AOI analysis in dynamic computer-generated scenes. Their method can, however, be adapted for use with real-world videos if a 3D reconstruction of the scene depicted in the video is available. Several other methods are available for determining what object is looked at in computer-generated 3D (Holmberg, 2007; Stellmach et al., 2010b; Sundstedt et al., 2013; Bernhard et al., 2014) or 2D (Alam & Jianu, 2017; Jianu & Alam, 2018) environments, for automatic construction of AOIs for faces in eye tracking setups for interpersonal interaction (Hessels et al., 2018a; Vehlen et al., 2022), and for tracking manually annotated (Bonikowski et al., 2021) or automatically detected (Kang et al., 2016) AOIs across video frames. Finally, a method has been developed for visually performing AOI analyses using space-time cubes (Kurzhals et al., 2014).
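
A minimal sketch of the key-frame idea is given below: a rectangular AOI defined on two key frames is linearly interpolated to an intermediate frame. The frame numbers and rectangles are hypothetical, and published tools typically offer more sophisticated interpolation of both position and shape.

import numpy as np

def interpolate_aoi(key_frames, key_rects, frame):
    # Linearly interpolate a rectangular AOI (x, y, width, height) that was
    # drawn on key frames to an intermediate video frame.
    key_rects = np.asarray(key_rects, dtype=float)
    return np.array([np.interp(frame, key_frames, key_rects[:, i]) for i in range(4)])

# AOI drawn on frames 0 and 100; where is it on frame 25?
print(interpolate_aoi([0, 100], [(100, 200, 50, 50), (300, 220, 50, 50)], 25))
# [150. 205.  50.  50.]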

Higher-order measures

From the output of an AOI analysis or fixation classification, higher-order descriptions of spatiotemporal looking behavior can be determined. One common representation of looking behavior is scanpaths, the sequential pattern of fixations. Scanpaths provide information about, for instance, the visual strategy when performing a task (Noton & Stark, 1971; Ballard et al., 1995) and are frequently used for comparing or classifying strategies used by participants or participant groups.

To perform such analyses, there are various scanpath analysis techniques (Eraslan et al., 2015; see Anderson et al., 2015; French et al., 2017; Dewhurst et al., 2018; Fahimi & Bruce, 2021, for evaluations), such as scanpath comparison methods (e.g., Privitera & Stark, 2000; Josephson & Holmes, 2002; West et al., 2006; Cristino et al., 2010; Duchowski et al., 2010; Dewhurst et al., 2012; de Bruin et al., 2013; Le Meur & Baccino, 2013; Dolezalova & Popelka, 2016; Peysakhovich & Hurter, 2018b; Newport et al., 2022; and an R implementation of Dewhurst et al., 2012, https://github.com/bbuchsbaum/eyesim), including methods for dynamic stimuli (Dorr et al., 2010; Salas & Levin, 2022); methods for simplifying scanpaths or identifying average scanpaths (Goldberg & Helfman, 2010b; Grindinger et al., 2010; Eraslan et al., 2016, 2018); and methods for classifying scanpaths (e.g., Haass et al., 2016; Kübler et al., 2017; Coutrot et al., 2018; Fuhl et al., 2019a; b; Kucharský et al., 2020), including methods for dynamic stimuli (Grindinger et al., 2011) and specific to reading (von der Malsburg & Vasishth, 2011; Ma et al., 2023).
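
To give a flavor of the simplest of these methods, the sketch below compares two scanpaths coded as strings of AOI labels using the Levenshtein (string-edit) distance, in the spirit of the string-edit approaches cited above; it is an illustration, not the implementation of any particular published method.

def edit_distance(a, b):
    # Levenshtein distance between two scanpaths coded as strings of AOI labels.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

# similarity normalized by the length of the longer scanpath
a, b = 'ABCCD', 'ABCD'
print(1 - edit_distance(a, b) / max(len(a), len(b)))  # 0.8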

Gaze behavior can also be represented as a sequence of transitions between different AOIs. For some analyses, the transitions themselves are of interest. There are tools that focus specifically on transition matrices (Goldberg & Kotval, 1999; Hooge & Camps, 2013; Chen et al., 2023). Measures of the degree of orderliness of gaze behavior can be computed from the scanpath or the transition matrix (Ellis & Stark, 1986; Weiss et al., 1989; Krejtz et al., 2014, 2015; see, e.g., Allsop & Gray, 2014; Niehorster et al., 2019, for applications).
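
As an illustration, the following sketch computes a row-normalized transition matrix from a sequence of AOI hits. Whether transitions within the same AOI are counted is a design choice that differs between studies; here they are ignored.

import numpy as np

def transition_matrix(aoi_sequence, aois):
    # Cell (i, j) gives the probability that AOI j is looked at next,
    # given that AOI i is currently looked at.
    index = {a: i for i, a in enumerate(aois)}
    counts = np.zeros((len(aois), len(aois)))
    for cur, nxt in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        if cur != nxt:  # dwells within the same AOI are not counted here
            counts[index[cur], index[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

print(transition_matrix(list('ABACAB'), aois=['A', 'B', 'C']))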

Whereas scanpath measures emphasize the spatial aspects of looking behavior, recurrence quantification analysis instead more directly taps into the temporal aspects of gaze behavior, such as the proportions and temporal dynamics of refixations. Anderson et al. (2013) provide a tool that can perform recurrence quantification analysis (see also Vaidyanathan et al., 2014; and see Gurtner et al., 2019, for an application). Such analysis techniques can also be employed to investigate the interplay in gaze behavior between two participants (e.g., Richardson & Dale, 2005; Jermann et al., 2011; Schneider et al., 2016) through cross recurrence analysis (Dale et al., 2011; Coco & Dale, 2014). Topological tools have also been proposed for these purposes (Hein & Zangemeister, 2017; see also Kasprowski & Harezlak, 2017).
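
The core quantity in such analyses can be illustrated with a few lines of code: the sketch below computes the recurrence rate of a set of fixations, i.e., the proportion of fixation pairs that lie within a given radius of each other. This is a simplified illustration in the spirit of Anderson et al. (2013), not their implementation.

import numpy as np

def recurrence_rate(fixations, radius):
    # Proportion of fixation pairs that lie within `radius` of each other
    # (radius is in the same units as the fixation coordinates).
    f = np.asarray(fixations, dtype=float)                 # shape (n, 2)
    dists = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    pairs = dists[np.triu_indices(len(f), k=1)]            # each pair counted once
    return np.mean(pairs < radius)

fixations = [(100, 100), (105, 102), (400, 300), (101, 98)]
print(recurrence_rate(fixations, radius=20))  # 0.5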

Special topics

In this section we will discuss specialized tools that are designed for the study of a specific topic using eye tracking (e.g., reading) or that support a processing step or the use of a specific technique.

Gaze estimation, calibration, and offset correction

Gaze estimation is the process of mapping raw acquired measures (such as the raw image or the positions of features in the eye image) to eye orientations in the head or gaze positions in the world (e.g., on a screen). To ensure the accuracy of this mapping, it is necessary to perform a user calibration,3 through which it is possible to optimize the parameters of the gaze estimation model to the individual. Several articles discuss options for calibrating eye trackers and provide advice on how to perform the calibration, such as how many calibration points to use, how to instruct the participant and how to deal with special populations (Stampe, 1993; Sasson & Elison, 2012; Karl et al., 2020; Hopper et al., 2021; Leppänen et al., 2022; Park et al., 2023; Fu et al., 2024; Niehorster et al., 2024; Zeng et al., 2024; Nyström et al., in press). The reader is referred to these articles for further discussion of these topics.

There are a wide range of approaches to performing gaze estimation based on data acquired with a video camera, see for instance Hansen and Ji (2010) and Liu et al. (2022) for overviews of methods, although there are also methods to, for instance, acquire gaze signals free of baseline drift with EOG eye trackers (Barbara et al., 2024). Gaze estimation can for instance be performed using linear interpolation (Kliegl & Olson, 1981; McConkie, 1981) or through the fitting of various polynomials (e.g., Wijnen & Groot, 1984; Cerrolaza et al., 2012; Blignaut, 2013; Blignaut & Wium, 2013; Lara-Alvarez & Gonzalez-Herrera, 2020; Narcizo et al., 2021), sometimes with additional corner correction factors (e.g., Sheena & Borah, 1981; Stampe, 1993) or through geometric transformations (e.g., Yoo & Chung, 2005; Hansen et al., 2010; Ma et al., 2015; Morimoto et al., 2020). Gaze estimation may also involve geometric eye models that can be used in eye tracking setups using multiple corneal reflections and/or cameras (e.g., Shih et al., 2000; Coutinho & Morimoto, 2006; Guestrin & Eizenman, 2006; Villanueva & Cabeza, 2007; Barsingerhorn et al., 2017) or methods for recovering the visual axis from images of the pupil (e.g., Świrski & Dodgson, 2013; Dierkes et al., 2018; Santini et al., 2019; Su et al., 2020).
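
To illustrate the polynomial approach, the sketch below fits a generic second-order polynomial that maps an eye feature (e.g., the pupil–CR vector) measured at calibration points to known screen coordinates using least squares, and then applies that mapping to new samples. It is a simplified example rather than any of the specific published methods; at least six calibration points are needed to constrain the six coefficients per screen axis.

import numpy as np

def _design(feature_xy):
    x, y = np.asarray(feature_xy, dtype=float).T
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_mapping(feature_xy, screen_xy):
    # Least-squares fit of a second-order polynomial from an eye feature
    # (e.g., the pupil-CR vector) to screen coordinates; needs >= 6 points.
    coefs, *_ = np.linalg.lstsq(_design(feature_xy),
                                np.asarray(screen_xy, dtype=float), rcond=None)
    return coefs  # shape (6, 2): one column per screen axis

def apply_mapping(coefs, feature_xy):
    return _design(feature_xy) @ coefs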

Calibration of such gaze estimation methods normally involves asking participants to look at one or multiple fixation targets. There are also various specialist tools, such as methods for collecting data for calibrating head-worn eye trackers based on the VOR (Santini et al., 2017b) or on looking at fingertips (Bâce et al., 2018), smooth pursuit-based techniques (O’Regan, 1978; Pfeuffer et al., 2013; Blignaut, 2017; Drewes et al., 2019; Hassoumi et al., 2019), methods for calibrating nystagmus patients (Rosengren et al., 2020), several methods exploiting scene salience or visual task structure for implicit calibration (see Zhang et al., 2024, for an example and overview), and elaborate schemes for calibrating the eye tracker for robust gaze estimation at different pupil sizes (Drewes et al., 2012, 2014; Hooge et al., in press).

Besides tools for the calibration of eye tracking data, there are also tools for reducing offsets in already calibrated data. Some of these are general methods (e.g., Zhang & Hornof, 2011; Blignaut et al., 2014; Zhang & Hornof, 2014; Vadillo et al., 2015; Blignaut, 2016; Lander et al., 2016; Ramirez Gomez & Gellersen, 2018), while others are designed for specific problems, such as assigning gaze data to text lines in reading research (e.g., Špakov et al., 2019; Carr et al., 2022; Mercier et al., 2024; see the section on reading for more detail) or improving the accuracy of determining the binocular gaze point in 3D (Essig et al., 2006).

Reading

There is a long history of using eye trackers for studying reading (see, e.g., Rayner, 1998; Radach & Kennedy, 2004; Liversedge et al., 2022, for overviews). As such, over the years, specific tools have been developed to support reading research using eye trackers, targeting both the creation of experiments and data recording, and the analysis of eye tracking data during reading. Besides these specific tools, manufacturer software such as that provided by SMI, SR Research and Tobii has specific support for the analysis of eye tracking data collected from participants reading sentences or paragraphs, and the ability to compute gaze metrics that are specific to the reading field (see Inhoff & Radach, 1998). These software suites can, for instance, automatically create AOIs for the words in the text. It is our impression that especially the SR Research Experiment Builder and Data Viewer software bundle is a popular tool among reading researchers. EyeMap (Tang et al., 2012; see also Hegarty-Kelly, 2020) is an open-source tool for the analysis of reading data, and the EyeTrack, EyeDoctor and EyeDry tools from the University of Massachusetts Amherst (Abbott, 2011, see https://websites.umass.edu/eyelab/software/) offer an environment for conducting measurements and for visually inspecting and analyzing reading data. Furthermore, there is the popEye toolbox for the R environment (Schroeder, 2019, 2022) and eyekit for Python (Carr, 2023). The cleaning of fixation data for reading analyses, such as removing fixations that are too short, is examined by Eskenazi (2024). Tools for comparing scanpaths during reading are available (von der Malsburg & Vasishth, 2011; Ma et al., 2023).

A common problem encountered in reading studies is that the gaze data is offset from the text stimulus due to inaccuracy of the eye tracking data. Several tools (e.g., EyeMap, Tang et al., 2012; and EyeDoctor, Abbott, 2011) enable manually moving the classified fixations to correct for these offsets (performed by, e.g., Scherr et al., 2016; Vasilev et al., 2021; Adedeji et al., 2024; according to Mercier et al., 2024). However, since the manual correction process is both time-consuming and subjective, it may be preferable to use automated tools instead (Cohen, 2013). Several such tools are available (e.g., Cohen, 2013; Špakov et al., 2019; Glandorf & Schroeder, 2021; Carr et al., 2022; Mercier et al., 2024). The alignment methods by Glandorf and Schroeder (2021) and Carr et al. (2022) are also implemented in the popEye (Schroeder, 2019, 2022) and eyekit (Carr, 2023) toolboxes.
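
For illustration, the sketch below shows a naive vertical correction that simply snaps each fixation to the nearest text line. The cited alignment methods are considerably more sophisticated (handling, e.g., return sweeps and skipped lines), so this is only meant to convey the basic idea; the line positions are hypothetical.

import numpy as np

def snap_to_lines(fixation_y, line_y):
    # Assign each fixation to the text line with the nearest vertical position
    # and return both the corrected y positions and the line indices.
    fixation_y = np.asarray(fixation_y, dtype=float)
    line_y = np.asarray(line_y, dtype=float)
    nearest = np.argmin(np.abs(fixation_y[:, None] - line_y[None, :]), axis=1)
    return line_y[nearest], nearest

corrected_y, line_index = snap_to_lines([102, 99, 151, 210], line_y=[100, 150, 200])
print(line_index)  # [0 0 1 2]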

Besides tools for studying the reading of normal sentences and paragraphs using an eye tracker, there are also tools for studying the reading of computer source code (Guarnera et al., 2018; Behler et al., 2023a, b; Saranpää et al., 2023; Behler et al., 2024; Stolp et al., 2024; Tang et al., 2024), the production of text (Alamargot et al., 2006; Andersson et al., 2006; Wengelin et al., 2009; Chukharev-Hudilainen et al., 2019; Wengelin et al., 2019, 2024), and the translation of text (Carl, 2012; Jakobsen, 2019).

Wearable eye tracking

Studies using wearable eye tracking face unique challenges; see for instance Fu et al. (2024) for an extensive discussion and practical advice for setting up studies using wearable eye tracking (see also Nyström et al., in press, for considerations when deciding whether to adopt wearable eye tracking in a study). As discussed in several articles (Lappi, 2016; Hessels et al., 2018b; Nyström et al., in press), head-worn eye trackers record eye movements with respect to the head of the participant, not gaze with respect to locations in the world. Researchers, on the other hand, are most often interested in what participants look at, or where in the world they look. As such, eye movement data collected with a wearable eye tracker needs to be transformed to gaze in the world. Such processing of the recorded data can be done manually (see, e.g., Gidlöf et al., 2013; Häggström et al., 2015; Vansteenkiste et al., 2015; Gidlöf et al., 2017), but this is a subjective and very time-consuming process.

There are many tools and methods that use the gaze data from a mobile eye tracker along with the scene video (a video acquired with a camera that is attached to the participant’s head) to enable determining what is looked at or where in the world the participant looks. These methods include ones based on 3D reconstructions of the scene that gaze can be mapped to directly (Paletta et al., 2013; Booth et al., 2014; Jensen et al., 2017; Singh et al., 2018; Jogeshwar & Pelz, 2021; Stein et al., 2023), or where the reconstructions are annotated by hand (Kopácsi et al., 2023); object recognition and segmentation of the scene video (Wolf et al., 2018; Panetta et al., 2020; Deane et al., 2023; Alinaghi et al., 2024) including methods focused on bodies and faces (Callemein et al., 2019; Hessels et al., 2020a; Jongerius et al., 2021); manual annotation of key frames (Essig et al., 2011; Meyer et al., 2021) or creation of AOI template images (Weibel et al., 2012; Ryabinin et al., 2022; Zhang et al., 2022) which are then tracked throughout the recording; clustering (Kurzhals et al., 2017b), matching (Brône et al., 2011; Toyama et al., 2012), or classification (Panetta et al., 2019; Barz et al., 2023) based on image content around the fixation point in the scene video; or AOI placement based on fiducial markers visible in the scene video (Weibel et al., 2012; Kiefer et al., 2014; Pfeiffer et al., 2016; Duchowski et al., 2020; Tabuchi & Hirotomi, 2022). There are also methods that specialize in mapping gaze to one or more screens in the environment, either using fiducial markers (Faraji et al., 2023) or based on only the scene camera video (Mardanbegi & Hansen, 2011; Turner et al., 2012; Paletta et al., 2014a, b; Lander et al., 2015; Batliner et al., 2020). Lastly, there is a tool using a poster with fiducial markers for automated data quality (accuracy and precision) assessment for mobile eye tracking data (Niehorster et al., 2023).
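
To illustrate the fiducial-marker approach, the sketch below maps a gaze point from scene-camera pixel coordinates to coordinates on a planar surface (e.g., a poster or screen), given at least four reference points, such as marker corners, detected in the frame together with their known positions on that surface. It uses OpenCV's homography functions; the detection of the markers themselves is not shown, and the example coordinates are hypothetical.

import numpy as np
import cv2

def gaze_to_plane(gaze_px, reference_px, reference_plane):
    # Map a gaze point from scene-camera pixels to coordinates on a planar
    # surface, given >= 4 points detected in the frame (reference_px) whose
    # positions on the surface (reference_plane) are known.
    H, _ = cv2.findHomography(np.asarray(reference_px, dtype=np.float32),
                              np.asarray(reference_plane, dtype=np.float32))
    point = np.asarray(gaze_px, dtype=np.float32).reshape(1, 1, 2)
    return cv2.perspectiveTransform(point, H).ravel()

# hypothetical example: the four corners of a 40 x 30 cm poster seen obliquely
frame_pts = [(100, 80), (520, 95), (515, 400), (110, 390)]
plane_pts = [(0, 0), (40, 0), (40, 30), (0, 30)]  # centimeters on the poster
print(gaze_to_plane((310, 240), frame_pts, plane_pts))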

Other methods for mapping gaze to the world use additional sensors, such as motion capture (Allison et al., 1996; Johnson et al., 2007; Ronsse et al., 2007; Herholz et al., 2008; Essig et al., 2012; Pfeiffer, 2012; Cesqui et al., 2013; Burger et al., 2018; Langstrand et al., 2018; Lavoie et al., 2018; Narasappa, 2022; Stone et al., 2024), fiducial marker-based head tracking systems (Hooge et al., 2024) or inertial measurement units (IMUs, Larsson et al., 2016; Tomasi et al., 2016; Matthis et al., 2018).

Finally, there are some open tools for the control and data acquisition of wearable eye trackers (De Tommaso & Wykowska, 2019; Nasrabadi & Alonso, 2022), and for the analysis of such data (Benjamins et al., 2018; Niehorster et al., 2020b).

Gaze-contingent displays

Gaze-contingent manipulations are techniques where the stimuli depend on data from an eye tracker. They may be as simple as starting the trial when the participant looks at a fixation cross, can involve altering what stimuli are shown to the observer based on where the observer is looking, and may even involve changing the content of the display during saccades. Gaze-contingent techniques have been used to effect several manipulations of visual stimuli, and have notably been used in research where only part of the display near where one looks is visible during reading (McConkie & Rayner, 1975; Rayner, 2014) and scene viewing (van Diepen et al., 1998; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022), research using artificial scotomas where foveal vision is blocked (Rayner & Bertera, 1979; Cornelissen et al., 2005; Biebl et al., 2022) or degraded (Jordan et al., 2012; Cajar et al., 2016), research into multiresolution display techniques such as foveated rendering where the rendered level of detail depends on where the observer looks (see Mohanto et al., 2022, for a review), studies using gaze-contingent displays for gaze guidance (e.g., Barth et al., 2006; Bailey et al., 2009; Sridharan et al., 2015) and studies where the content of the screen is manipulated while saccades are in flight to study, for instance, motor learning (McLaughlin, 1967; Frens & van Opstal, 1994; Havermann et al., 2011). Furthermore, there are even studies with young children that have used techniques where the behavior of an on-screen avatar depends on where the children look to study social perception (Vernetti et al., 2017, 2018; Wang et al., 2020) and learning (Tsuji et al., 2021), studies where gaze-contingent displays are used for rehabilitation of brain disorders (see Carelli et al., 2022, for a review) or other therapeutic efforts (Machner et al., 2020) and studies where the display content was contingent on the behavior of one or multiple other participants (Brennan et al., 2008; Niehorster et al., 2019; Siirtola et al., 2019). Gaze-contingent displays where stimuli are manipulated based on the participant’s eye movements often have strict requirements regarding timing, which requires an eye tracker that can deliver gaze position samples with low latency, fast software for updating the stimulus and a fast graphics display system. Studies such as Loschky and Wolverton (2007) examine the timing requirements for gaze-contingent manipulations.

We have found that many studies used their own custom implementation to create the gaze-contingent manipulation. There are, however, some more general tools, including a hardware tool for low-latency updating of stimulus displays using 60 Hz projectors (Richlan et al., 2013), two combined hardware and software platforms (Reder, 1973; Santini et al., 2007; see McConkie, 1997, for context on Reder’s work), a self-contained stimulus presentation tool designed for presenting gaze-contingently masked stimuli (Orlov & Bednarik, 2016), and two tools for gaze-contingent localized blurring of stimuli (Perry & Geisler, 2002; Malkin et al., 2020). Furthermore, SR Research’s Experiment Builder provides the ability to create gaze-contingent displays, and other software suites such as SMI Experiment Center and Tobii Pro Lab support starting or stopping trials based on the participant’s gaze position. Several other software tools come with gaze-contingent example code for MATLAB (Cornelissen et al., 2002; Niehorster et al., 2020a) and Python (Peirce, 2007; Dalmaijer et al., 2014; Nyström et al., 2017; Niehorster et al., 2020a). Finally, McConkie et al. (1984) and Aguilar and Castet (2011) provide advice for implementing gaze-contingent display paradigms.
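
To give an impression of what the core of a gaze-contingent program looks like, the minimal sketch below uses PsychoPy to draw a scene image through a circular aperture that is recentered on gaze every frame. The function get_gaze() is a placeholder for whatever eye tracker interface is used, the stimulus file name is hypothetical, and a real implementation would also need to address the latency requirements discussed above.

from psychopy import core, visual

win = visual.Window(fullscr=True, units='pix', allowStencil=True)
aperture = visual.Aperture(win, size=200)          # circular window of 200 px
scene = visual.ImageStim(win, image='scene.png')   # hypothetical stimulus file

def get_gaze():
    # Placeholder: return the latest gaze sample (x, y) in window pixel
    # coordinates from whatever eye tracker API is being used.
    return (0, 0)

clock = core.Clock()
while clock.getTime() < 5.0:                       # a 5-s moving-window trial
    aperture.pos = get_gaze()                      # recenter the window on gaze
    scene.draw()                                   # only the aperture is visible
    win.flip()                                     # one display update per frame
win.close()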

Hardware

In this section, we will discuss various hardware tools, ranging from self-built eye trackers and tools for validating eye trackers, to important aspects of an eye tracker setup that often do not receive sufficient consideration, such as the furniture and design of a lab room.

Self-built and webcam eye trackers

Many different self-built or open-source eye trackers can be found in the literature. Here we provide a small sample. Firstly, there are many complete desktop eye tracking systems that are designed to use the pupil and/or corneal reflection(s) (e.g., Ebisawa & Fukumoto, 2013; Sogo, 2013; Balthasar et al., 2016; Wyder & Cattin, 2016; Zimmermann et al., 2016, see https://www.myeyetracker.com; Matsuda et al., 2017; Barsingerhorn et al., 2018; Hosp et al., 2020; Ivanchenko et al., 2021) or the dual Purkinje principle (Chamberlain, 1996; Wu et al., 2023) for gaze estimation. There are also open head-worn eye tracker designs and systems (e.g., Babcock & Pelz, 2004; Li et al., 2006; Rantanen et al., 2011; Mantiuk et al., 2012; Lukander et al., 2013; Kassner et al., 2014; Lanata et al., 2015; Eivazi et al., 2018; Lander et al., 2018; Krohn et al., 2020; Nourrit et al., 2021; Yang et al., 2023). Furthermore, there are system designs focused on pupillometry (Zandi et al., 2021; Martin et al., 2022; Barry & Wang, 2023), VR (https://github.com/EyeTrackVR/EyeTrackVR) and even eye trackers that can be embedded in surgical microscopes (Charlier et al., 1991; Eivazi et al., 2016; Eivazi & Maurer, 2018) and gun scopes (Hansen et al., 2019).

Besides such complete systems, there are also software libraries that provide various implementations of eye tracking algorithms for desktop (Agustin et al., 2010; Casas & Chandrasekaran, 2019; Sadeghi et al., 2024) and head-worn (Santini et al., 2017a) eye tracking, pupillometry (Mazziotti et al., 2021), as well as even software libraries and methods for eye tracking from retinal images (Mulligan, 1997; Stevenson et al., 2010; Bedggood & Metha, 2017; Agaoglu et al., 2018; Zhang et al., 2021).

Although webcam eye tracking has a long history (e.g., Hansen et al., 2002; Hansen & Pece, 2005), in the last decade, the use of standard webcams and smartphone cameras for eye tracking has increased rapidly. This development is boosted by researchers looking to bring experiments to people at home instead of bringing people to dedicated lab facilities. As such, significant research has been done to evaluate the performance of webcam and smartphone-based eye tracking (e.g., Semmelmann & Weigelt, 2018; Yang & Krajbich, 2021; Bánki et al., 2022; Kaduk et al., 2023; Saxena et al., 2023; Van der Cruyssen et al., 2023; Bogdan et al., 2024; Falch & Lohan, 2024; Hagihara et al., 2024; Prystauka et al., 2024; Steffan et al., 2024; Valtakari et al., 2024). Some example tools for eye tracking using a webcam or smartphone are Papoutsaki et al. (2016), Valliappan et al. (2020), Erel et al. (2023), Werchan et al. (2023), https://github.com/NativeSensors/EyeGestures, as well as online tools such as GazeRecorder (https://gazerecorder.com), iMotions WebET (https://imotions.com), SeeSo (https://visual.camp) and LabVanced (https://www.labvanced.com, Finger et al., 2017), but the reader is recommended to consult the overviews of Heck et al. (2023) and Cheng et al. (2024b) if such techniques and tools are of interest.

Synchronization and setup validation

Eye trackers may be used in conjunction with other sensors, such as motion capture systems and EEG. Analyzing such multimodal data requires synchronization of data streams from the various sensors. How can this be achieved? Many software suites from eye tracker manufacturers, as well as other companies (e.g., Blickshift, iMotions, Neurobehavioral Systems Presentation) advertise support for synchronized co-recording from an eye tracker and other sensors. There are also hardware solutions for synchronizing displays and various other sensors, such as offered by the VPixx ecosystem, the Cedrus Stimtracker and the Cambridge Research Systems Bits#. Besides these commercial products, there are general open-source solutions that offer support for a wide array of sensors, such as OpenSync (Razavi et al., 2022) and other solutions built upon the LabStreamingLayer (Kothe et al., 2024) infrastructure (see Table 1 in Razavi et al., 2022, for an overview).
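
As an illustration of the LabStreamingLayer approach, the sketch below uses the pylsl package to publish gaze samples as an LSL stream that recording software can pick up and time-align with streams from other sensors. The stream name, sampling rate and the source of the gaze samples are assumptions for the example.

from pylsl import StreamInfo, StreamOutlet, local_clock

# Describe a two-channel gaze stream; recording software such as LabRecorder
# can then save it time-aligned with streams from other sensors (e.g., EEG).
info = StreamInfo(name='GazeStream', type='Gaze', channel_count=2,
                  nominal_srate=250, channel_format='float32',
                  source_id='example_gaze_stream')
outlet = StreamOutlet(info)

def push_gaze_sample(x, y, timestamp=None):
    # Push one gaze sample; the timestamp defaults to the local LSL clock.
    outlet.push_sample([x, y], timestamp if timestamp is not None else local_clock())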

Furthermore, there are many tools that enable the synchronization of eye tracking data with data collected from a specific sensor, such as for EEG (Dimigen et al., 2011; Bœkgaard et al., 2014; Ionescu et al., 2022), including in virtual reality setups (Larsen et al., 2024; Salehi et al., 2024); motion capture (Huang et al., 2004; Essig et al., 2012; Cesqui et al., 2013; Burger et al., 2018; Lavoie et al., 2018; Narasappa, 2022; Stone et al., 2024); MRI (Lawrence et al., 2011; Hanke et al., 2020); and audio (Arslan Aydin et al., 2018; Boulay et al., 2023). Other techniques are designed for ensuring or verifying synchronization of the eye tracking data to stimulus presentation systems (Saunders & Woods, 2014; Brooks et al., 2019; Watson et al., 2019), including VR and simulator systems (Medenica & Kun, 2012; Diaz et al., 2013; Loeb et al., 2016). Several products are available to verify the timing of such setups, such as the Blackbox Toolkit and the Neurobehavioral Systems LabStreamer.

It may also be of interest to synchronize multiple eye trackers with each other, whether for studying human behavior (e.g., Hessels et al., 2019; Niehorster et al., 2019; Hessels et al., 2023; Saxena & Fink, 2023) or for the evaluation of eye tracker performance by comparing eye trackers using simultaneous co-recording (e.g., van der Geest & Frens, 2002; Houben et al., 2006; McCamy et al., 2013; McCamy et al., 2015; Titz et al., 2018; Ehinger et al., 2019; Holmqvist et al., 2020; Stein et al., 2021; Aziz et al., 2022) or comparing eye trackers and webcams (e.g., Kaduk et al., 2023; Falch & Lohan, 2024).

It may also be desirable to validate the performance of the eye tracker itself (not to be confused with validation of the eye tracker’s user-calibration). One method for evaluating aspects of the performance of eye trackers is to use artificial eyes (e.g., Crane & Steele, 1985; Hayes & Petrov, 2016; Wang et al., 2017; Ooms & Krassanakis, 2018; Wyder & Cattin, 2018; Holmqvist & Blignaut, 2020; Holmqvist et al., 2021; Niehorster et al., 2021). Many different devices have been designed that are able to move artificial eyes in precisely known ways, so that eye tracker data can be evaluated with a known input (e.g., Abramov & Harris, 1984; Biamino et al., 2005; Bassett et al., 2010; Reingold, 2014; Tannfelt Wu, 2018; Wyder & Cattin, 2018; Holmqvist & Blignaut, 2020; Felßberg & Strazdas, 2022; Sueishi et al., 2022; Lotze et al., 2024).

The forgotten parts of an eye tracking setup

When building an eye tracking setup, the focus may be on the eye tracker, the screen and possibly the computer that they are connected to. In our experience, other critical components such as the chin rest, the table, the chair and room lighting might not be considered at all, or at best be an afterthought. We are not aware of literature that systematically investigates how a stable table and a chair that keeps the participant from moving affect the quality of the recorded eye movement data. From personal experience, we know that these are important aspects of an eye tracking setup that require careful attention.

Chinrests and tables

When using an eye tracker that is fixed in the world (see Valtakari et al., 2021; Nyström et al., in press), one may want to consider how to keep participants from moving too much. Even for eye trackers that can ostensibly deal with significant movement of the participant, such movement may lead to large offsets in the gaze data or significant data loss (Blignaut & Wium, 2014; Hessels et al., 2015; Niehorster et al., 2018, for world-fixed eye trackers; see Niehorster et al., 2020c; Hooge et al., 2022b; Onkhar et al., 2023; Velisar & Shanidze, 2024, for wearable eye trackers). If lower data quality is problematic for a study, we recommend that participants are placed on a chin- and forehead rest when possible (see NIMH-NIF, 2019; MetisVidere, 2020, for open-source chinrest designs; and Nyström et al., in press, for considerations regarding the use of chinrests). Furthermore, placing the eye tracker and the chinrest on a sturdy table (i.e., ideally a table with four stable legs) helps to minimize movement of the participant with respect to the eye tracker. Real perfectionists may consider putting the chinrest, mouse, keyboard and other parts of the setup that are in contact with the participant on a different table than the one holding the eye tracker, so that the participant’s actions cannot cause the eye tracker to move. Besides physical chinrests, virtual chinrests that keep the participant in place based on visual feedback have been developed (Li et al., 2020) and are available in online experiment environments such as LabVanced (see Kaduk et al., 2023) and jsPsych.

Chairs

What happens when participants are put in a chair that can roll, swivel, and pivot, such as a standard desk chair? Participants may use the mobility they are afforded, and such movement may negatively impact the quality of the eye tracker signal. Thus, it makes sense to use chairs that do not afford more freedom of movement than needed for the setup. For instance, instead of using height-adjustable chairs, which often have embedded springs, one can use a fixed chair together with a height-adjustable table (e.g., Niehorster et al., 2024). If participants may be very large or small, having extra chairs to accommodate their size may be useful. The freedom to roll, pivot or swivel the chair is often not needed, and a fixed chair reduces the ability of the participant to move around; standard desk chairs provide such extra freedom of movement and are therefore not suitable. Besides stable chairs, sometimes neckrests or pillows are used to further fixate participants or increase their comfort (e.g., Trojano et al., 2012; Sprenger et al., 2013; Choe et al., 2016). We know of a further interesting solution to encourage participants to sit still. Although they did not report it in the paper, Holleman et al. (2023) had children (8–10-year-olds) wear a vest with Velcro attached to the back and to the backrest of the chair. With this method, children felt resistance on their back when they moved, encouraging them to sit still. For younger participants (infants and toddlers), setups using car seats have been developed with the aim of affording them as little movement as possible (see, e.g., Hessels & Hooge, 2019, Figure 2 and associated text for an in-depth discussion).

Climate control

If one has equipment that produces significant heat in a small experiment room, such as one or multiple powerful computers, this room may get uncomfortably warm (e.g., Niehorster et al., 2024). This may especially be a problem in the summer months depending on the climate where the lab room is situated. If this is a potential concern, it should be considered whether the lab room has adequate climate control.

Room lighting

Finally, depending on the eye tracker and the research question, adequate control over the lighting conditions in the room may be crucial. For some eye trackers, one may wish to avoid measuring in the dark because pupils may become too large for successful eye tracking (Holmqvist et al., 2011), and because participants may experience strong afterimages if bright visual stimuli are used. Conversely, some other eye trackers (e.g., Crane & Steele, 1985) require large pupils and may thus work better in light-controlled dark rooms. Large changes in pupil size due to changing lighting conditions may also lead to large offsets in the recorded eye tracking data (see, e.g., Wyatt, 2010; Drewes et al., 2012; Choe et al., 2016; Hooge et al., 2021; Hooge et al., in press). One may also wish to avoid incandescent light and other sources of infrared light as these may interfere with the operation of some eye trackers. In our experience, fluorescent and LED lighting are unlikely to interfere with the eye tracker.

Depending on the need for controlled room lighting, it may not be advisable to use a room with windows as an eye tracking lab. Light from outdoors varies enormously in both intensity and wavelength spectrum throughout the day and throughout the year. Curtains, blinds, and other methods for covering windows may not provide sufficient control over the illumination of the room. Instead, we think that a well thought out setup for an eye tracking lab uses indirect light, ideally provided by movable lamps that are aimed at the walls or ceiling. If, conversely, recordings are made in outdoor conditions, special measures such as visors blocking infrared light (e.g., Matthis et al., 2018) may be used to reduce data loss and improve recording quality. This and other considerations regarding working with uncontrolled lighting conditions during wearable eye tracking recordings are discussed in Fu et al. (2024).

Lastly, depending on one’s needs, one may wish to consider the color of walls and tables in the lab. For instance, light walls and tables may be too reflective if one wishes to work in the dark. For experiments where a completely dark environment is required, special light-proofed rooms can be used (Heywood, 1972; Bennett & Barnes, 2004), in which all surfaces may additionally be painted in flat optical black to prevent any reflections (Shaffer et al., 2003).

The human factor: Skills and interests

This article has so far considered the tools per se, without taking into account the researcher’s context, such as team competencies and personal interests. Here we examine this human factor in tool choice, and provide advice for researchers looking to expand their horizons by investing in, for instance, computer or programming skills.

Choosing tools that fit

An important question a researcher may ask before deciding to adopt a tool in their workflow is whether the tool is suitable for them. That is, is the tool needed to address the research question at hand, does it fit within the project’s budget and does the tool fit in one’s workflow and match one’s skills? Let us consider an example eye tracking problem that may be solved with a tool. A researcher, Bobby, is examining gaze behavior while playing a table-top game using a head-worn eye tracker. When drawing up an analysis plan, Bobby runs into the problem that a wearable eye tracker delivers eye movement data referenced to the participant’s head, while for their analysis they need gaze data referenced to the game’s board (cf. Nyström et al., in press). After searching the Internet, Bobby finds a tool that can perform this transformation from head-referenced to world-referenced gaze data for them and an article describing the mathematics implemented by the tool along with a rigorous validation. This looks like just the tool Bobby needs. But can they use it in their project?

To answer this question, it is important to have a clear view of the competencies present in the research team, which tools and techniques the members of the team understand, and their motivation. Such an understanding of the team’s skills, limitations and aspirations delineates the solution space available to the team and shows what routes are available for enlarging the solution space if needed. It also allows the team to make strategic decisions about what tools to adopt in the research project. When deciding to use a method or tool in a study, one has a responsibility to have a sufficient understanding of the tool to be able to apply it correctly and explain what one did when reporting the study. In the case of Bobby, who runs their research project by themselves, sadly the tool did not come with a user-friendly GUI. Even though example scripts were provided, Bobby did not know how to read or change such scripts, nor had an interest in learning to do so, and was therefore not able to adopt the tool in their project.

More concretely, the answer to the question of whether a research team can use a specific tool depends on both the tool and the skills present in the researcher’s team. Some tools are packaged in a convenient graphical user interface and require only minimal computer skills (but may still require sufficient mathematics and physics competence to understand the methods implemented by the tool). Other tools may instead come in the form of a loose collection of files with programming code that the researcher has to adapt for themselves. There are also tools that fall somewhere in between these two extremes, such as tools with well-thought-out programming interfaces and good documentation and examples, but which nonetheless still require some computer or programming skills to be able to adopt them in one’s project. If one finds that the required computer skills, programming skills, or domain knowledge needed to understand and use the tool and method are lacking in the research team, one is left with some options to widen the solution space of the team. These include, for instance,

  • Enlisting the help of support staff with the needed technical skills.

  • Identifying a willing member of the research team who has the time and interest to invest in the required technical skills.

  • Adding an extra co-author to the research team who can bring in the needed skills.

Overall, we think it is important to be realistic when deciding which tools to adopt in an experiment setup and data processing and analysis pipeline. It is easy to underestimate the effort involved in getting a tool or technique to work, especially when the required skills are not available yet. It is also important to be realistic about one’s level of interest in technology. We can imagine that researchers with an interesting research topic may prioritize that research topic over spending a lot of time on technical details such as parsing eye tracker signals or the above-mentioned reference frame transformation for wearable eye tracking data. Instead, the researcher may prefer that some software package takes care of these technical problems without them having to think about it. When roadblocks in terms of insufficient skills or interest are encountered, it may be worth rethinking the study design so that tools that are better suited for the team can be used instead.

Expanding the solution space: Computer skills and programming

We aim for this article to provide readers with the means to improve the efficiency and quality of the research they perform with eye trackers. This article has hopefully provided the reader with an entry ticket to the vast world of eye tracking tools, but that is only part of the story. Technical skills are another important part of performing effective, high-quality research.

The authors of this article would like to encourage the reader to try to always reserve a little bit of time in the week for improving the technical and other skills (e.g., math, physics, statistics, experimental design, computer skills and programming) that are needed or useful when doing eye-tracking research. In this section we go into examples of how to get started.

Perhaps the ultimate tool of all for a researcher is the computer. For a suitable task and when properly instructed, it can do one’s work, and even the work of an army of research assistants, sometimes in just minutes. This may mean that problems that are impossible to solve manually become possible with a computer. As such, an investment that may improve the effectiveness of one’s research is to invest in computer skills.

For instance, skills using graphical design programs such as Adobe Photoshop and Illustrator, but also The GIMP and Microsoft PowerPoint, may be needed to produce the visual stimuli for a study. Skills with Microsoft Excel, SPSS or JASP (JASP Team, 2024) will likely be helpful when organizing and statistically analyzing the eye movement measures produced by tools. For more complicated research projects, further computer skills may be required, such as video editing, and perhaps programming skills to enable different tools to work together and to automate mind-numbing repetitive manual tasks. Some studies require specific skills, such as programming in languages like Python and MATLAB using toolboxes like PsychoPy (Peirce, 2007; Peirce et al., 2019) and PsychToolbox (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007) to implement custom (and possibly interactive) stimulus presentation routines, or computer vision skills to support the analysis of wearable eye tracking data. Maybe most importantly of all, investing in computer skills allows expanding the solution space of possible study designs, thereby opening new research vistas that are simply not available to researchers who do not have the required skills.

When doing jobs such as building experiments with tens or hundreds of trials, or analyzing data from an eye-tracker that records a thousand samples per second, it is often not an effective use of time to approach this work manually. For these and many other jobs, a good piece of code can vastly improve the effectiveness and efficiency of one’s research compared to performing them manually and also reduces the potential for mistakes to sneak in. Our strong recommendation therefore is for the reader to consider learning to program. What do we mean when we say “learn how to program”? Do we want everybody to become software developers? Of course not. Even only some basic programming skills can already be very useful, allowing a researcher to become more self-reliant and produce more reliable and reproducible results in less time. The authors have time and again seen participants of their introduction to programming courses make such progress.

The level of programming skill that is perhaps easiest to obtain is the ability to take code from the Internet and set up an environment to get it to run. Also achievable is the ability to read code written by others and understand enough to be able to explain what the code does when writing an article. Perhaps one is also able to make small modifications to adapt the code to one’s own research project, or to write small code snippets and insert them in an experiment built with the GUI of tools such as E-Prime or PsychoPy to, for instance, randomize the position of a fixation target. Each of these very attainable steps in the direction of becoming a proficient programmer may already open up a whole world of new tools and research designs, thereby making new research problems tractable. After all, as shown in this article, many tools in one way or another involve programming. For instance, transforming or converting recorded data to the input format required by a tool almost always requires a tailor-made solution. Often one or two example functions for reading in data are provided with the tool, and only some small changes to one of these functions are enough to enable the tool to read one’s data files, and thereby include the tool in one’s solution space.

While a whole world of new tools already becomes accessible with minimal programming skills, one should consider whether it is worth it to develop programming skills beyond the ability to read and run code written by others and make small modifications. Once one finds out that reading and producing computer code is not as hard as one thought and perhaps even starts to develop a taste for programming, one may start seeing many more possibilities enabled by programming. For instance, it may become enticing to design and write more complicated scripts that can transform one’s data into an analyzable format, and to build a pipeline that processes the data in order with several tools written by others. Let’s take an example manual data analysis workflow. It consists of a whole series of steps that need to be performed for each recording separately: 1) manually opening each data file in Excel and making a number of edits and transformations of columns; 2) importing it in some analysis tool and running that tool; 3) again opening and editing its output in Excel; and so on, until finally a file is produced with the relevant eye tracking measures for all participants. A script may reduce this hell of manual data analysis to simply putting the files of recorded eye tracking data in a folder and starting the script (a minimal sketch of such a script is shown after the list below). All data analysis steps are automated by the script, and eventually it delivers an output file that is ready for statistical analysis. This level of programming brings many advantages over a manual data analysis workflow, such as:

  1. Writing a script forces one to determine exactly what data processing steps should be performed. It forces one to design an explicit plan and the script itself provides an exact record of what was done, making it easier to report in scientific publications.

  2. Once the script is written, it is simple to make a change at any step in the data processing pipeline. Doing so simply means one has to rerun the script instead of manually performing all the analysis steps again.

  3. A script provides a verifiable procedure that enables testing each individual step, and enables examining the impact on a study’s conclusions of changing a parameter or other choice made at any of these steps.
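
A minimal sketch of such a pipeline script could look as follows. The file layout, column names and processing steps are hypothetical and stand in for whatever exports and measures a given study uses.

from pathlib import Path
import pandas as pd

def process_recording(path):
    # Hypothetical per-recording processing: read the exported data, keep
    # fixations that hit the 'target' AOI and summarize them.
    data = pd.read_csv(path, sep='\t')
    fix = data[(data['event'] == 'fixation') & (data['aoi'] == 'target')]
    return {'participant': path.stem,
            'n_fixations': len(fix),
            'total_dwell_ms': fix['duration_ms'].sum()}

# process every recording in the data folder and write one file for statistics
results = [process_recording(p) for p in sorted(Path('data').glob('*.tsv'))]
pd.DataFrame(results).to_csv('eye_tracking_measures.csv', index=False)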

Finally, we arrive at the terrain of developing one’s own tools, the terrain of software engineers. Developing one’s own tools may well be a relatively niche occupation for scientists using eye tracking. While new tools are sometimes required to be able to address new questions or to make novel discoveries, the majority of scientific discovery likely does not require new tools, but rather the ability to use existing tools in novel ways. Developing one’s own tools, however, does not mean that one is no longer dependent on the tools and platforms of others. For instance, when converting gaze data from pixels on a screen to degrees, one might use existing tools instead of developing one’s own. Building upon the solutions of others and not reinventing everything oneself makes it possible to finish one’s tool in a reasonable amount of time. However, it also introduces a maintenance burden, because new releases of one of the building blocks may introduce a change in behavior (or a bug). One’s own tool would then suddenly no longer work, and new investment is needed to fix the situation and restore functionality.

In short, we recommend that researchers who run their own experiments or analyze data consider learning a programming language like Python, R, or MATLAB. For instance, https://software-carpentry.org/lessons/ and https://datacarpentry.org/lessons/ provide relevant courses for learning these languages as well as adjacent best-practice tools and skills (e.g., version control with Git). If you are affiliated with a university, you may also be able to find a local introduction-to-programming course aimed at non-computer scientists. Many beginners do not invest in learning to program because they, or their supervisors, think they do not have the time for it. We recommend disregarding this short-term argument and finding the time to invest in these skills, because it can save you a lot of time later in your research or project. If you heed our advice, the programming language that we would strongly recommend learning is Python. While there are many great choices, such as MATLAB and its PsychToolbox, in our opinion Python may provide a more flexible and widely applicable choice of programming language for researchers and engineers. Python is one of the most popular programming languages. It is freely available, relatively simple compared to languages like C or Java, and can be a valuable addition to a CV for careers outside academia.

Conclusion

The goal of this article was to improve the efficiency and quality of research performed with eye trackers by advancing knowledge about all kinds of useful tools. To meet this goal, we have firstly provided an extensive overview of tools that address a wide range of problems in research projects. The aim of this non-exhaustive list was to provide the reader with a sense of the breadth of problems for which others have already developed solutions, and to serve as a starting point for finding tools that are suitable for them.

Secondly, we have provided a discussion of how to approach choosing the tools for a research project. A first important factor in choosing the right tool is recognizing good tools, and we have discussed what we view as the characteristics of good tools. Once a set of potentially useful (and usable!) tools has been established, one then has to choose which tools to adopt in the research project. This is something each researcher has to do for their own project, because the right choice depends on many factors that we as authors of this article do not know. For instance, what tools are appropriate for a research project depends on the research question, the budget, the available equipment, the skills of the members of the project team, how much time is available, and also whether members of the research team even have an interest in spending time on custom tools at all. We have provided a discussion of how to navigate this process and make suitable trade-offs.

A goal of this article was to increase the solution space for researchers, regardless of their level of technical interest and skill, by pointing them to how others have solved problems and helping them gain access to useful tools. We do hope that we have been able to awaken in the reader a little interest in the technical aspects of the work of a researcher. We think there are at least two benefits to having such interests. First, since many researchers are limited in what research questions they can explore by the software and other tools they can use, having an interest in exploring new tools, or even developing one’s own, would broaden one’s research horizons. Second, we think that a research tradition that adopts a single tool as the gold standard may be fragile, as it is at the mercy of mistakes, “smart” tricks and any potential bias in that tool. We think it is useful to look around and explore other tools to see if the main conclusions of one’s work survive when data is collected and/or processed using a different tool.

Funding

Open access funding provided by Lund University. The authors have no funding to declare.

Availability of data and materials

Not applicable.

Code availability

Not applicable.

Declarations

Competing interests

RA has been an employee of Tobii AB since 2017. The other authors declare no conflicts of interest.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Open Practices Statement

There are no data or code associated with the present article.

Footnotes

1

While SMI Experiment Center has not been available since 2017, we suspect it is still widely used.

2

We also keep this list in mind when deciding to publish a tool, since doing so entails going the extra mile and, for instance, spending time and effort to document the code well, provide good examples, and write a detailed readme.

3

It should be noted that “calibration” is technically a misnomer, instead “adjustment” would be the appropriate term for this action (BIPM et al., 2012, p. 28). However, since the term calibration has been in common use among researchers using eye tracking, we will continue using this term throughout this section.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Abbott, M. J. (2011). A guide to reading experiments using the umass eyetracking lab software suite (tech. rep.). https://people.umass.edu/eyelab/eyelab%20manual.pdf
  2. Abdulin, E., Friedman, L., & Komogortsev, O. V. (2017). Method to detect eye position noise from video-oculography when detection of pupil or corneal reflection position fails. arXiv:1709.02700
  3. Abramov, I., & Harris, C. M. (1984). Artificial eye for assessing corneal-reflection eye trackers. Behavior Research Methods, Instruments, & Computers,16(5), 437–438. 10.3758/BF03202479
  4. Adedeji, V. I., Kirkby, J. A., Vasilev, M. R., & Slattery, T. J. (2024). Children’s reading of sublexical units in years three to five: A combined analysis of eye-movements and voice recording. Scientific Studies of Reading,28(2), 214–233. 10.1080/10888438.2023.2259522
  5. Agaoglu, M. N., Sit, M., Wan, D., & Chung, S. T. L. (2018). Revas: An open-source tool for eye motion extraction from retinal videos obtained with scanning laser ophthalmoscopy. Investigative Ophthalmology & Visual Science,59(9), 2161.
  6. Aguilar, C., & Castet, E. (2011). Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies. Vision Research,51(9), 997–1012. 10.1016/j.visres.2011.02.010
  7. Agustin, J. S., Skovsgaard, H. H. T., Møllenbach, E., Barret, M., Tall, M., Hansen, D. W., & Hansen, J. P. (2010). Evaluation of a low-cost open-source gaze tracker. Proceedings of the 2010 symposium on eye-tracking research & applications, ETRA 2010, Austin, Texas, USA, march 22-24, 2010 (pp. 77–80). http://doi.acm.org/10.1145/1743666.1743685
  8. Alam, S. S., & Jianu, R. (2017). Analyzing eye-tracking information in visualization and data space: From where on the screen to what on the screen. IEEE Transactions on Visualization and Computer Graphics,23(5), 1492–1505. 10.1109/TVCG.2016.2535340
  9. Alamargot, D., Chesnet, D., Dansac, C., & Ros, C. (2006). Eye and pen: A new device for studying reading during writing. Behavior Research Methods,38(2), 287–299. 10.3758/BF03192780
  10. Alinaghi, N., Hollendonner, S., & Giannopoulos, I. (2024). MYFix: Automated fixation annotation of eye-tracking videos. Sensors,24(9). 10.3390/s24092666
  11. Allison, R., Eizenman, M., & Cheung, B. (1996). Combined head and eye tracking system for dynamic testing of the vestibular system. IEEE Transactions on Biomedical Engineering,43(11), 1073–1082. 10.1109/10.541249
  12. Allsop, J., & Gray, R. (2014). Flying under pressure: Effects of anxiety on attention and gaze behavior in aviation. Journal of Applied Research in Memory and Cognition,3(2), 63–71. 10.1016/j.jarmac.2014.04.010
  13. Anderson, N. C., Bischof, W. F., Laidlaw, K. E., Risko, E. F., & Kingstone, A. (2013). Recurrence quantification analysis of eye movements. Behavior Research Methods,45(3), 842–856. 10.3758/s13428-012-0299-5
  14. Anderson, N. C., Anderson, F., Kingstone, A., & Bischof, W. F. (2015). A comparison of scanpath comparison methods. Behavior Research Methods,47(4), 1377–1392. 10.3758/s13428-014-0550-3
  15. Andersson, B., Dahl, J., Holmqvist, K., Holsanova, J., Johansson, V., Karlsson, H. ... Wengelin, Å. (2006). Combining keystroke logging with eye-tracking. In L. van Waes, M. Leijten, & C. Neuwirth (Eds.), Writing and digital media (pp. 166–172). Leiden, The Netherlands: Brill. 10.1163/9781849508209_014
  16. Andersson, R., Larsson, L., Holmqvist, K., Stridh, M., & Nyström, M. (2017). One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms. Behavior Research Methods,49(2), 616–637. 10.3758/s13428-016-0738-9
  17. Andolina, I. M. (2024). Opticka: Psychophysics-toolbox based experiment manager. 10.5281/zenodo.592253
  18. Arslan Aydin, Ü., Kalkan, S., & Acarturk, C. (2018). Magic: A multimodal framework for analysing gaze in dyadic communication. Journal of Eye Movement Research,11(6). 10.16910/jemr.11.6.2
  19. Avetisyan, A., Xie, C., Howard-Jenkins, H., Yang, T.- Y., Aroudj, S., Patra, S. ... Balntas, V. (2024). SceneScript: Reconstructing scenes with an autoregressive structured language model. arXiv:2403.13064
  20. Aziz, S., Lohr, D. J., & Komogortsev, O. (2022). Synchroneyes: A novel, paired data set of eye movements recorded simultaneously with remote and wearable eye-tracking devices. 2022 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3517031.3532522
  21. Babcock, J. S., & Pelz, J. B. (2004). Building a lightweight eyetracking headgear. Proceedings of the eye tracking research & application symposium, ETRA 2004, San antonio, Texas, USA, march 22-24, 2004 (pp. 109–114). http://doi.acm.org/10.1145/968363.968386
  22. Bâce, M., Staal, S., & Sörös, G. (2018). Wearable eye tracker calibration at your fingertips. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery.
  23. Bahill, A. T., & McDonald, J. D. (1983). Frequency limitations and optimal step size for the two-point central difference derivative algorithm with applications to human eye movement data. IEEE Transactions on Biomedical Engineering,BME–30(3), 191–194. 10.1109/TBME.1983.325108 [DOI] [PubMed]
  24. Bahill, A. T., Brockenbrough, A., & Troost, B. T. (1981). Variability and development of a normative data base for saccadic eye movements. Investigative Ophthalmology & Visual Science,21(1), 116–125. [PubMed] [Google Scholar]
  25. Bahill, A. T., Kallman, J. S., & Lieberman, J. E. (1982). Frequency limitations of the two-point central difference differentiation algorithm. Biological Cybernetics,45(1), 1–4. 10.1007/BF00387207 [DOI] [PubMed] [Google Scholar]
26. Bailey, R., McNamara, A., Sudarsanam, N., & Grimm, C. (2009). Subtle gaze direction. ACM Transactions on Graphics,28(4). 10.1145/1559755.1559757
  27. Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience,7(1), 66–80. 10.1162/jocn.1995.7.1.66 [DOI] [PubMed] [Google Scholar]
  28. Baloh, R. W., Langhofer, L., Honrubia, V., & Yee, R. D. (1980). On-line analysis of eye movements using a digital computer. Aviation, Space, and Environmental Medicine,51(6), 563–567. [PubMed] [Google Scholar]
  29. Balthasar, S., Martin, M., van de Camp, F., Hild, J., & Beyerer, J. (2016). Combining low-cost eye trackers for dual monitor eye tracking. M. Kurosu (Ed.), Human-computer interaction. interaction platforms and techniques (pp. 3–12). Cham: Springer International Publishing. 10.1007/978-3-319-39516-6_1
30. Bánki, A., de Eccher, M., Falschlehner, L., Hoehl, S., & Markova, G. (2022). Comparing online webcam- and laboratory-based eye-tracking for the assessment of infants’ audio-visual synchrony perception. Frontiers in Psychology,12. 10.3389/fpsyg.2021.733933 [DOI] [PMC free article] [PubMed]
31. Barbara, N., Camilleri, T. A., & Camilleri, K. P. (2024). Real-time continuous EOG-based gaze angle estimation with baseline drift compensation under non-stationary head conditions. Biomedical Signal Processing and Control,90, 105868. 10.1016/j.bspc.2023.105868 [Google Scholar]
  32. Barry, C., & Wang, E. (2023). Racially fair pupillometry measurements for RGB smartphone cameras using the far red spectrum. Scientific Reports,13(1), 13841. 10.1038/s41598-023-40796-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Barsingerhorn, A. D., Boonstra, F. N., & Goossens, H. H. L. M. (2017). Optics of the human cornea influence the accuracy of stereo eye-tracking methods: A simulation study. Biomedical Optics Express,8(2), 712–725. 10.1364/BOE.8.000712 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Barsingerhorn, A. D., Boonstra, F. N., & Goossens, J. (2018). Development and validation of a high-speed stereoscopic eyetracker. Behavior Research Methods,50(6), 2480–2497. 10.3758/s13428-018-1026-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Barth, E., Dorr, M., Böhme, M., Gegenfurtner, K., & Martinetz, T. (2006). Guiding eye movements for better communication and augmented vision. E. André, L. Dybkjær, W. Minker, H. Neumann, & M. Weber (Eds.), Perception and interactive technologies (pp. 1–8). Berlin, Heidelberg: Springer Berlin Heidelberg. 10.1007/11768029_1
  36. Barz, M., Bhatti, O. S., Alam, H. M. T., Nguyen, D. M. H., & Sonntag, D. (2023). Interactive fixation-to-AOI mapping for mobile eye tracking data based on few-shot image classification. Companion proceedings of the 28th international conference on intelligent user interfaces (pp. 175–178). New York, NY, USA: Association for Computing Machinery. 10.1145/3581754.3584179
  37. Bassett, K., Hammond, M., & Smoot, L. (2010). A fluid-suspension, electromagnetically driven eye with video capability for animatronic applications. Acm siggraph 2010 emerging technologies. New York, NY, USA: Association for Computing Machinery. 10.1145/1836821.1836824
  38. Batliner, M., Hess, S., Ehrlich-Adám, C., Lohmeyer, Q., & Meboldt, M. (2020). Automated areas of interest analysis for usability studies of tangible screen-based user interfaces using mobile eye tracking. Artificial Intelligence for Engineering Design, Analysis and Manufacturing,34(4), 505–514. 10.1017/S0890060420000372 [Google Scholar]
  39. Bedggood, P., & Metha, A. (2017). De-warping of images and improved eye tracking for the scanning laser ophthalmoscope. PLoS One,12(4), 1–10. 10.1371/journal.pone.0174617 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Behler, J., Weston, P., Guarnera, D. T., Sharif, B., & Maletic, J. I. (2023a). iTrace-Toolkit: A pipeline for analyzing eye-tracking data of software engineering studies. Proceedings of the 45th international conference on software engineering: Companion proceedings (pp. 46–50). IEEE Press. 10.1109/ICSECompanion58688.2023.00022
  41. Behler, J., Chiudioni, G., Ely, A., Pangonis, J., Sharif, B., & Maletic, J. I. (2023b). iTrace-Visualize: Visualizing eye-tracking data for software engineering studies. 2023 IEEE working conference on software visualization (VISSOFT) (pp. 100-104). Los Alamitos, CA, USA: IEEE Computer Society. 10.1109/VISSOFT60811.2023.00021
  42. Behler, J., Villalobos, G., Pangonis, J., Sharif, B., & Maletic, J. I. (2024). Extending iTrace-Visualize to support token-based heatmaps and region of interest scarf plots for source code. 2024 IEEE working conference on software visualization (VISSOFT). Los Alamitos, CA, USA: IEEE Computer Society.
  43. Benjamins, J. S., Hessels, R. S., & Hooge, I. T. C. (2018). GazeCode: Open-source software for manual mapping of mobile eye-tracking data. Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. 10.1145/3204493.3204568
  44. Bennett, S. J., & Barnes, G. R. (2004). Predictive smooth ocular pursuit during the transient disappearance of a visual target. Journal of Neurophysiology,92(1), 578–590. 10.1152/jn.01188.2003 [DOI] [PubMed] [Google Scholar]
  45. Berger, C., Winkels, M., Lischke, A., & Höppner, J. (2012). GazeAlyze: A MATLAB toolbox for the analysis of eye movement data. Behavior Research Methods,44(2), 404–419. 10.3758/s13428-011-0149-x [DOI] [PubMed] [Google Scholar]
  46. Bernhard, M., Stavrakis, E., Hecher, M., & Wimmer, M. (2014). Gaze-to-object mapping during visual search in 3D virtual environments. ACM Transactions on Applied Perception,11(3). 10.1145/2644812
  47. Bettenbühl, M., Paladini, C., Mergenthaler, K., Kliegl, R., Engbert, R., & Holschneider, M. (2010). Microsaccade characterization using the continuous wavelet transform and principal component analysis. Journal of Eye Movement Research,3(5). 10.16910/jemr.3.5.1
  48. Biamino, D., Cannata, G., Maggiali, M., & Piazza, A. (2005). MAC-EYE: A tendon driven fully embedded robot eye. 5th IEEE-RAS International Conference on Humanoid Robots, 2005 (pp. 62–67). 10.1109/ICHR.2005.1573546
49. Biebl, B., Arcidiacono, E., Kacianka, S., Rieger, J. W., & Bengler, K. (2022). Opportunities and limitations of a gaze-contingent display to simulate visual field loss in driving simulator studies. Frontiers in Neuroergonomics,3. 10.3389/fnrgo.2022.916169 [DOI] [PMC free article] [PubMed]
  50. BIPM, IEC, IFCC, ILAC, IUPAC, IUPAP, ... OIML (2012). The international vocabulary of metrology–basic and general concepts and associated terms (VIM) (3rd ed., tech. rep. No. JCGM 200:2012). https://www.bipm.org/vim
  51. Blascheck, T., Kurzhals, K., Raschke, M., Burch, M., Weiskopf, D., & Ertl, T. (2014). State-of-the-art of visualization for eye tracking data. R. Borgo, R. Maciejewski, & I. Viola (Eds.), EuroVis - STARs. The Eurographics Association. 10.2312/eurovisstar.20141173
  52. Blascheck, T., Kurzhals, K., Raschke, M., Burch, M., Weiskopf, D., & Ertl, T. (2017). Visualization of eye tracking data: A taxonomy and survey. Computer Graphics Forum,36(8), 260–284. 10.1111/cgf.13079 [Google Scholar]
  53. Blignaut, P. (2013). Mapping the pupil-glint vector to gaze coordinates in a simple video-based eye tracker. Journal of Eye Movement Research,7(1). 10.16910/jemr.7.1.4
  54. Blignaut, P. (2016). Idiosyncratic feature-based gaze mapping. Journal of Eye Movement Research,9(3). 10.16910/jemr.9.3.2
  55. Blignaut, P. (2017). Using smooth pursuit calibration for difficult-to-calibrate participants. Journal of Eye Movement Research,10(4). 10.16910/jemr.10.4.1 [DOI] [PMC free article] [PubMed]
  56. Blignaut, P. (2019). A cost function to determine the optimum filter and parameters for stabilizing gaze data. Journal of Eye Movement Research,12(2). 10.16910/jemr.12.2.3 [DOI] [PMC free article] [PubMed]
  57. Blignaut, P., & Wium, D. (2013). The effect of mapping function on the accuracy of a video-based eye tracker. Proceedings of the 2013 conference on eye tracking south africa (pp. 39-46). New York, NY, USA: Association for Computing Machinery. 10.1145/2509315.2509321
  58. Blignaut, P., Holmqvist, K., Nyström, M., & Dewhurst, R. (2014). Improving the accuracy of video-based eye tracking in real time through post-calibration regression. M. Horsley, N. Toon, B. A. Knight, & R. Reilly (Eds.), Current trends in eye tracking research (pp. 77–100). Switzerland: Springer. 10.1007/978-3-319-02868-2_5
  59. Blignaut, P. (2009). Fixation identification: The optimum threshold for a dispersion algorithm. Attention, Perception, & Psychophysics,71(4), 881–895. 10.3758/app.71.4.881 [DOI] [PubMed] [Google Scholar]
  60. Blignaut, P., & Wium, D. (2014). Eye-tracking data quality as affected by ethnicity and experimental design. Behavior Research Methods,46(1), 67–80. 10.3758/s13428-013-0343-0 [DOI] [PubMed] [Google Scholar]
61. Bækgaard, P., Petersen, M. K., & Larsen, J. E. (2014). In the twinkling of an eye: Synchronization of EEG and eye tracking based on blink signatures. 2014 4th International Workshop on Cognitive Information Processing (CIP) (pp. 1–6). 10.1109/CIP.2014.6844504
  62. Bogdan, P. C., Dolcos, S., Buetti, S., Lleras, A., & Dolcos, F. (2024). Investigating the suitability of online eye tracking for psychological research: Evidence from comparisons with in-person data using emotion-attention interaction tasks. Behavior Research Methods,56(3), 2213–2226. 10.3758/s13428-023-02143-z [DOI] [PubMed] [Google Scholar]
  63. Bonikowski, L., Gruszczyński, D., & Matulewski, J. (2021). Open-source software for determining the dynamic areas of interest for eye tracking data analysis. Procedia Computer Science,192, 2568–2575. 10.1016/j.procs.2021.09.026 [Google Scholar]
  64. Booth, T., Sridharan, S., Bethamcherla, V., & Bailey, R. (2014). Gaze3D: Framework for gaze analysis on 3D reconstructed scenes. Proceedings of the ACM Symposium on Applied Perception (pp. 67–70). New York, NY, USA: Association for Computing Machinery. 10.1145/2628257.2628274
65. Boulay, E., Wallace, B., Fraser, K. C., Kunz, M., Goubran, R., Knoefel, F., & Thomas, N. (2023). Design and validation of a system to synchronize speech recognition and eye-tracking measurements. 2023 IEEE sensors applications symposium (SAS) (pp. 1–6). 10.1109/SAS58821.2023.10254132
  66. Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision,10(4), 433–436. 10.1163/156856897X00357 [PubMed] [Google Scholar]
  67. Brennan, S. E., Chen, X., Dickinson, C. A., Neider, M. B., & Zelinsky, G. J. (2008). Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition,106(3), 1465–1477. 10.1016/j.cognition.2007.05.012 [DOI] [PubMed] [Google Scholar]
  68. Brône, G., Oben, B., & Goedemé, T. (2011). Towards a more effective method for analyzing mobile eye-tracking data: Integrating gaze data with object recognition algorithms. Proceedings of the 1st international workshop on pervasive eye tracking & mobile eye-based interaction (pp. 53-56). New York, NY, USA: Association for Computing Machinery. 10.1145/2029956.2029971
  69. Brooks, J. S., Smith, W. J., Webb, B. M., Heath, M. D., & Dickey, J. P. (2019). Development and validation of a high-speed video system for measuring saccadic eye movement. Behavior Research Methods,51(5), 2302–2309. 10.3758/s13428-019-01197-2 [DOI] [PubMed] [Google Scholar]
  70. Burch, M., Veneri, A., & Sun, B. (2019). Eyeclouds: A visualization and analysis tool for exploring eye movement data. Proceedings of the 12th international symposium on visual information communication and interaction. New York, NY, USA: Association for Computing Machinery. 10.1145/3356422.3356423
  71. Burch, M., Wallner, G., Broeks, N., Piree, L., Boonstra, N., Vlaswinkel, P. ... van Wijk, V. (2021). The power of linked eye movement data visualizations. ACM Symposium on Eye Tracking Research and Applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3448017.3457377
72. Burger, B., Puupponen, A., & Jantunen, T. (2018). Synchronizing eye tracking and optical motion capture: How to bring them together. Journal of Eye Movement Research,11(2). 10.16910/jemr.11.2.5 [DOI] [PMC free article] [PubMed]
  73. Cajar, A., Engbert, R., & Laubrock, J. (2016). Spatial frequency processing in the central and peripheral visual field during scene viewing. Vision Research,127, 186–197. 10.1016/j.visres.2016.05.008 [DOI] [PubMed] [Google Scholar]
  74. Caldara, R., & Miellet, S. (2011). iMap: A novel method for statistical fixation mapping of eye movement data. Behavior Research Methods,43(3), 864–878. 10.3758/s13428-011-0092-x [DOI] [PubMed] [Google Scholar]
  75. Callemein, T., Van Beeck, K., Brône, G., & Goedemé, T. (2019). Automated analysis of eye-tracker-based human-human interaction studies. K. J. Kim, & N. Baek (Eds.), Information science and applications 2018 (pp. 499–509). Singapore: Springer Singapore. 10.1007/978-981-13-1056-0_50
  76. Camilli, M., Nacchia, R., Terenzi, M., & Di Nocera, F. (2008). ASTEF: A simple tool for examining fixations. Behavior Research Methods,40(2), 373–382. 10.3758/BRM.40.2.373 [DOI] [PubMed] [Google Scholar]
77. Carelli, L., Solca, F., Tagini, S., Torre, S., Verde, F., Ticozzi, N. ... Poletti, B. (2022). Gaze-contingent eye-tracking training in brain disorders: A systematic review. Brain Sciences,12(7). 10.3390/brainsci12070931 [DOI] [PMC free article] [PubMed]
78. Carl, M. (2012). Translog-II: A program for recording user activity data for empirical reading and writing research. N. Calzolari et al. (Eds.), Proceedings of the eighth international conference on language resources and evaluation (LREC’12). Istanbul, Turkey: European Language Resources Association (ELRA).
  79. Carr, J. W. (2023). eyekit: A lightweight python package for doing open, transparent, reproducible science on reading behavior. Retrieved 11 June 2024, from https://github.com/jwcarr/eyekit
  80. Carr, J. W., Pescuma, V. N., Furlan, M., Ktori, M., & Crepaldi, D. (2022). Algorithms for the automated correction of vertical drift in eye-tracking data. Behavior Research Methods,54(1), 287–310. 10.3758/s13428-021-01554-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Casas, J. P., & Chandrasekaran, C. (2019). openEyeTrack - a high speed multi-threaded eye tracker for head-fixed applications. Journal of Open Source Software,4(42), 1631. 10.21105/joss.01631 [DOI] [PMC free article] [PubMed]
  82. Cerrolaza, J. J., Villanueva, A., & Cabeza, R. (2012). Study of polynomial mapping functions in video-oculography eye trackers. ACM Transactions on Computer-Human Interaction,19(2), 1–25. 10.1145/2240156.2240158 [Google Scholar]
  83. Cesqui, B., de Langenberg, R. V., Lacquaniti, F., & d’Avella, A. (2013). A novel method for measuring gaze orientation in space in unrestrained head conditions. Journal of Vision,13(8), 28. 10.1167/13.8.28 [DOI] [PubMed] [Google Scholar]
  84. Chamberlain, A. C. (1996). Dual purkinje-image eyetracker (USNA Trident Scholar report No. 238). United States Naval Academy: Annapolis, MD. 10.21236/ADA375792
  85. Charlier, J., Sourdille, P., Behague, M., & Buquet, C. (1991). Eye-controlled microscope for surgical applications. P. Sourdille (Ed.), Evolution of microsurgery: Meeting of the international ophthalmic microsurgery study group (IOMSG), la baule, september 1990 (Vol. 22, pp. 154–158). S. Karger AG. 10.1159/000419923 [DOI] [PubMed]
  86. Chartier, S., & Renaud, P. (2008). An online noise filter for eye-tracker data recorded in a virtual environment. Proceedings of the 2008 symposium on eye tracking research & applications (pp. 153-156). New York, NY, USA: Association for Computing Machinery. 10.1145/1344471.1344511
  87. Chen, K. -T., Prouzeau, A., Langmead, J., Whitelock-Jones, R. T., Lawrence, L., Dwyer, T., & Goodwin, S. (2023). Gazealytics: A unified and flexible visual toolkit for exploratory and comparative gaze analysis. Proceedings of the 2023 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3588015.3589844
  88. Cheng, T., Song, L., Ge, Y., Liu, W., Wang, X., & Shan, Y. (2024a). YOLO-World: Real-time open-vocabulary object detection. arXiv:2401.17270
  89. Cheng, Y., Wang, H., Bao, Y., & Lu, F. (2024b). Appearance-based gaze estimation with deep learning: A review and benchmark. arXiv:2104.12668 [DOI] [PubMed]
  90. Choe, K. W., Blake, R., & Lee, S.-H. (2016). Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation. Vision Research,118, 48–59. 10.1016/j.visres.2014.12.018 [DOI] [PubMed] [Google Scholar]
  91. Chukharev-Hudilainen, E., Saricaoglu, A., Torrance, M., & Feng, H.-H. (2019). Combined deployable keystroke logging and eyetracking for investigating L2 writing fluency. Studies in Second Language Acquisition,41(3), 583–604. 10.1017/S027226311900007X [Google Scholar]
  92. Claus, M., Hermens, F., & Bromuri, S. (2023). A user study of visualisations of spatio-temporal eye tracking data. arXiv:2309.15731
93. Coco, M. I., & Dale, R. (2014). Cross-recurrence quantification analysis of categorical and continuous time series: An R package. Frontiers in Psychology,5. 10.3389/fpsyg.2014.00510 [DOI] [PMC free article] [PubMed]
  94. Cohen, A. L. (2013). Software for the automatic correction of recorded eye fixation locations in reading experiments. Behavior Research Methods,45(3), 679–683. 10.3758/s13428-012-0280-3 [DOI] [PubMed] [Google Scholar]
  95. Cornelissen, F. W., Peters, E. M., & Palmer, J. (2002). The eyelink toolbox: Eye tracking with matlab and the psychophysics toolbox. Behavior Research Methods, Instruments, & Computers,34(4), 613–617. 10.3758/BF03195489 [DOI] [PubMed] [Google Scholar]
  96. Cornelissen, F. W., Bruin, K. J., & Kooijman, A. C. (2005). The influence of artificial scotomas on eye movements during visual search. Optometry and Vision Science,82(1), 27–35. 10.1097/01.OPX.0000150250.14720.C5 [PubMed] [Google Scholar]
97. Coutinho, F. L., & Morimoto, C. H. (2006). Free head motion eye gaze tracking using a single camera and multiple light sources. 2006 19th Brazilian symposium on computer graphics and image processing (pp. 171–178). 10.1109/SIBGRAPI.2006.21
98. Coutrot, A., Hsiao, J. H., & Chan, A. B. (2018). Scanpath modeling and classification with hidden Markov models. Behavior Research Methods,50(1), 362–379. 10.3758/s13428-017-0876-8 [DOI] [PMC free article] [PubMed]
  99. Crane, H. D., & Steele, C. M. (1985). Generation-V dual-purkinje-image eyetracker. Applied Optics,24(4), 527–537. 10.1364/AO.24.000527 [DOI] [PubMed] [Google Scholar]
  100. Cristino, F., Mathôt, S., Theeuwes, J., & Gilchrist, I. D. (2010). ScanMatch: A novel method for comparing fixation sequences. Behavior Research Methods,42(3), 692–700. 10.3758/BRM.42.3.692 [DOI] [PubMed] [Google Scholar]
  101. Dale, R., Warlaumont, A. S., & Richardson, D. C. (2011). Nominal cross recurrence as a generalized lag sequential analysis for behavioral streams. International Journal of Bifurcation and Chaos,21(04), 1153–1161. 10.1142/S0218127411028970 [Google Scholar]
  102. Dalmaijer, E. S., Mathôt, S., & Van der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods,46(4), 913–921. 10.3758/s13428-013-0422-2 [DOI] [PubMed] [Google Scholar]
  103. D’Angelo, S., Brewer, J., & Gergle, D. (2019). Iris: A tool for designing contextually relevant gaze visualizations. Proceedings of the 11th acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3317958.3318228
  104. Dar, A. H., Wagner, A. S., & Hanke, M. (2021). REMoDNaV: Robust eye-movement classification for dynamic stimulation. Behavior Research Methods,53(1), 399–414. 10.3758/s13428-020-01428-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Das, V. E., Thomas, C. W., Zivotofsky, A. Z., & Leigh, R. J. (1996). Measuring eye movements during locomotion: Filtering techniques for obtaining velocity signals from a video-based eye monitor. Journal of Vestibular Research,6(6), 455–461. 10.3233/VES-1996-6606 [PubMed] [Google Scholar]
  106. David, E., Gutiérrez, J., Võ, M.L.-H., Coutrot, A., Perreira Da Silva, M., & Le Callet, P. (2024). The Salient360! toolbox: Handling gaze data in 3D made easy. Computers & Graphics,119, 103890. 10.1016/j.cag.2024.103890 [Google Scholar]
  107. Daye, P. M., & Optican, L. M. (2014). Saccade detection using a particle filter. Journal of Neuroscience Methods,235, 157–168. 10.1016/j.jneumeth.2014.06.020 [DOI] [PubMed] [Google Scholar]
  108. de Bruin, J. A., Malan, K. M., & Eloff, J. H. P. (2013). Saccade deviation indicators for automated eye tracking analysis. Proceedings of the 2013 conference on eye tracking south africa (pp. 47–54). 10.1145/2509315.2509324
109. De Tommaso, D., & Wykowska, A. (2019). TobiiGlassesPySuite: An open-source suite for using the Tobii Pro Glasses 2 in eye-tracking studies. Proceedings of the 11th acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3314111.3319828
  110. Deane, O., Toth, E., & Yeo, S.-H. (2023). Deep-saga: A deep-learning-based system for automatic gaze annotation from eye-tracking data. Behavior Research Methods,55(3), 1372–1391. 10.3758/s13428-022-01833-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Demiralp, Ç., Cirimele, J., Heer, J., & Card, S. K. (2017). The VERP explorer: A tool for exploring eye movements of visual-cognitive tasks using recurrence plots. M. Burch, L. Chuang, B. Fisher, A. Schmidt, & D. Weiskopf (Eds.), Eye tracking and visualization (pp. 41–55). Cham: Springer International Publishing. 10.1007/978-3-319-47024-5_3
  112. Dewhurst, R., Nyström, M., Jarodzka, H., Foulsham, T., Johansson, R., & Holmqvist, K. (2012). It depends on how you look at it: Scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach. Behavior Research Methods,44(4), 1079–1100. 10.3758/s13428-012-0212-2 [DOI] [PubMed] [Google Scholar]
  113. Dewhurst, R., Foulsham, T., Jarodzka, H., Johansson, R., Holmqvist, K., & Nyström, M. (2018). How task demands influence scanpath similarity in a sequential number-search task. Vision Research,149, 9–23. 10.1016/j.visres.2018.05.006 [DOI] [PubMed] [Google Scholar]
  114. Diaz, G., Cooper, J., Kit, D., & Hayhoe, M. (2013). Real-time recording and classification of eye movements in an immersive virtual environment. Journal of Vision,13(12), 5–5. 10.1167/13.12.5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Dierkes, K., Kassner, M., & Bulling, A. (2018). A novel approach to single camera, glint-free 3d eye model fitting including corneal refraction. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3204493.3204525
  116. Dimigen, O., Sommer, W., Hohlfeld, A., Jacobs, A. M., & Kliegl, R. (2011). Coregistration of eye movements and EEG in natural reading: Analyses and review. Journal of Experimental Psychology: General,140(4), 552–572. 10.1037/a0023885 [DOI] [PubMed] [Google Scholar]
  117. Dink, J., & Ferguson, B. (2015). eyetrackingR: An R library for eye-tracking data analysis. Retrieved 13 May 2024, from http://www.eyetrackingr.com
118. Dolezalova, J., & Popelka, S. (2016). ScanGraph: A novel scanpath comparison method using visualisation of graph cliques. Journal of Eye Movement Research,9(4). 10.16910/jemr.9.4.5
  119. Dorr, M., Martinetz, T., Gegenfurtner, K. R., & Barth, E. (2010). Variability of eye movements when viewing dynamic natural scenes. Journal of Vision,10(10), 28–28. 10.1167/10.10.28 [DOI] [PubMed] [Google Scholar]
  120. Drewes, J., Masson, G. S., & Montagnini, A. (2012). Shifts in reported gaze position due to changes in pupil size: Ground truth and compensation. Proceedings of the symposium on eye tracking research and applications (pp. 209-212). New York, NY, USA: Association for Computing Machinery. 10.1145/2168556.2168596
  121. Drewes, H., Pfeuffer, K., & Alt, F. (2019). Time- and space-efficient eye tracker calibration. Proceedings of the 11th acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3314111.3319818
  122. Drewes, J., Zhu, W., Hu, Y., & Hu, X. (2014). Smaller is better: Drift in gaze measurements due to pupil dynamics. PLoS One,9(10), e111197. 10.1371/journal.pone.0111197 [DOI] [PMC free article] [PubMed] [Google Scholar]
  123. Drews, M., & Dierkes, K. (2024). Strategies for enhancing automatic fixation detection in head-mounted eye tracking. Behavior Research Methods. 10.3758/s13428-024-02360-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
124. Duchowski, A. T. (2007). Eye tracking methodology: Theory and practice (2nd ed.). London: Springer.
  125. Duchowski, A. T., Driver, J., Jolaoso, S., Tan, W., Ramey, B. N., & Robbins, A. (2010). Scanpath comparison revisited. Proceedings of the symposium on eye-tracking research & applications (pp. 219–226). 10.1145/1743666.1743719
  126. Duchowski, A. T., Price, M. M., Meyer, M., & Orero, P. (2012). Aggregate gaze visualization with real-time heatmaps. Proceedings of the symposium on eye tracking research and applications (pp. 13–20). New York, NY, USA: Association for Computing Machinery. 10.1145/2168556.2168558
  127. Duchowski, A. T., Peysakhovich, V., & Krejtz, K. (2020). Using pose estimation to map gaze to detected fiducial markers. Procedia Computer Science,176, 3771–3779. 10.1016/j.procs.2020.09.010 [Google Scholar]
128. Dunn, M. J., Alexander, R. G., Amiebenomo, O. M., Arblaster, G., Atan, D., Erichsen, J. T. ... Sprenger, A. (2023). Minimal reporting guideline for research involving eye tracking (2023 edition). Behavior Research Methods. 10.3758/s13428-023-02187-1 [DOI] [PMC free article] [PubMed]
  129. Ebisawa, Y., & Fukumoto, K. (2013). Head-free, remote eye-gaze detection system based on pupil-corneal reflection method with easy calibration using two stereo-calibrated video cameras. IEEE Transactions on Biomedical Engineering,60(10), 2952–2960. 10.1109/TBME.2013.2266478 [DOI] [PubMed] [Google Scholar]
  130. Ehinger, B. V., Groß, K., Ibs, I., & König, P. (2019). A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000. PeerJ,7, e7086. 10.7717/peerj.7086 [DOI] [PMC free article] [PubMed] [Google Scholar]
  131. Eivazi, S., & Maurer, M. (2018). Eyemic: An eye tracker for surgical microscope. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3204493.3208342
  132. Eivazi, S., Kübler, T. C., Santini, T., & Kasneci, E. (2018). An inconspicuous and modular head-mounted eye tracker. Proceedings of the 2018 acm symposium on eye tracking research & applications (pp. 106:1–106:2). New York, NY, USA: ACM. http://doi.acm.org/10.1145/3204493.3208345
133. Eivazi, S., Bednarik, R., Leinonen, V., von und zu Fraunberg, M., & Jääskeläinen, J. E. (2016). Embedding an eye tracker into a surgical microscope: Requirements, design, and implementation. IEEE Sensors Journal,16(7), 2070–2078. 10.1109/JSEN.2015.2501237
  134. Ellis, S. R., & Stark, L. (1986). Statistical dependency in visual scanning. Human Factors,28(4), 421–438. 10.1177/001872088602800405 [DOI] [PubMed] [Google Scholar]
  135. Elmadjian, C., Gonzales, C., Costa, R. L. D., & Morimoto, C. H. (2023). Online eye-movement classification with temporal convolutional networks. Behavior Research Methods,55(7), 3602–3620. 10.3758/s13428-022-01978-2 [DOI] [PubMed] [Google Scholar]
  136. Engbert, R., & Kliegl, R. (2003). Binocular coordination in microsaccades. J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of oculomotor research (pp. 103–117). Elsevier. 10.1016/B978-044451020-4/50007-4
  137. Engelken, E. J., & Stevens, K. W. (1990). A new approach to the analysis of nystagmus: An application for order-statistic filters. Aviation, Space, and Environmental Medicine,61(9), 859–864. [PubMed] [Google Scholar]
  138. Engelken, E. J., Stevens, K. W., & Wolfe, J. W. (1982). Application of digital filters in the processing of eye movement data. Behavior Research Methods & Instrumentation,14(3), 314–319. 10.3758/BF03203222 [Google Scholar]
  139. Engelken, E. J., Stevens, K. W., & Enderle, J. D. (1990). Development of a non-linear smoothing filter for the processing of eye-movement signals. Biomedical Sciences Instrumentation,26, 5–10. [PubMed] [Google Scholar]
  140. Eraslan, Ş., Karabulut, S., Atalay, M. C., & Yeşilada, Y. (2018). ViSTA: Visualisation of scanpath trend analysis (STA). Proceedings of the 12th Turkish national symposium on software engineering.
  141. Eraslan, Ş., Yeşilada, Y., & Harper, S. (2015). Eye tracking scanpath analysis techniques on web pages: A survey, evaluation and comparison. Journal of Eye Movement Research,9(1). 10.16910/jemr.9.1.2
  142. Eraslan, Ş., Yeşilada, Y., & Harper, S. (2016). Scanpath trend analysis on web pages: Clustering eye tracking scanpaths. ACM Transactions on the Web,10(4). 10.1145/2970818
143. Erel, Y., Shannon, K. A., Chu, J., Scott, K., Struhl, M. K., Cao, P. ... Liu, S. (2023). iCatcher+: Robust and automated annotation of infants’ and young children’s gaze behavior from videos collected in laboratory, field, and online studies. Advances in Methods and Practices in Psychological Science,6(2), 25152459221147250. 10.1177/25152459221147250 [DOI] [PMC free article] [PubMed]
  144. Eskenazi, M. A. (2024). Best practices for cleaning eye movement data in reading research. Behavior Research Methods,56(3), 2083–2093. 10.3758/s13428-023-02137-x [DOI] [PubMed] [Google Scholar]
  145. Essig, K., Dornbusch, D., Prinzhorn, D., Ritter, H., Maycock, J., & Schack, T. (2012). Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems. Proceedings of the symposium on eye tracking research and applications (pp. 37–44). New York, NY, USA: Association for Computing Machinery. 10.1145/2168556.2168561
146. Essig, K., Frank, S., Sand, N., Jörn, K., Pfeiffer, T., Ritter, H., & Schack, T. (2011). JVideoGazer - towards an automatic annotation of gaze videos from natural scenes. Proceedings of the world congress on engineering and technology (CET).
  147. Essig, K., Pomplun, M., & Ritter, H. J. (2006). A neural network for 3D gaze recording with binocular eye trackers. International Journal of Parallel, Emergent and Distributed Systems,21(2), 79–95. 10.1080/17445760500354440 [Google Scholar]
  148. Fahimi, R., & Bruce, N. D. B. (2021). On metrics for measuring scanpath similarity. Behavior Research Methods,53(2), 609–628. 10.3758/s13428-020-01441-0 [DOI] [PubMed] [Google Scholar]
149. Falch, L., & Lohan, K. S. (2024). Webcam-based gaze estimation for computer screen interaction. Frontiers in Robotics and AI, 11. 10.3389/frobt.2024.1369566 [DOI] [PMC free article] [PubMed] [Google Scholar]
  150. Faraji, Y., van Rijn, J. W., van Nispen, R. M. A., van Rens, G. H. M. B., Melis-Dankers, B. J. M., Koopman, J., & van Rijn, L. J. (2023). A toolkit for wide-screen dynamic area of interest measurements using the pupil labs core eye tracker. Behavior Research Methods,55(7), 3820–3830. 10.3758/s13428-022-01991-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  151. Feit, A. M., Williams, S., Toledo, A., Paradiso, A., Kulkarni, H., Kane, S., & Morris, M. R. (2017). Toward everyday gaze input: Accuracy and precision of eye tracking and implications for design. Proceedings of the 2017 chi conference on human factors in computing systems (pp. 1118–1130). New York, NY, USA: Association for Computing Machinery. 10.1145/3025453.3025599
  152. Felßberg, A.-M., & Strazdas, D. (2022). RELAY: Robotic eyelink analysis of the eyelink 1000 using an artificial eye. arXiv:2206.01327
153. Finger, H., Goeke, C., Diekamp, D., Standvoß, K., & König, P. (2017). LabVanced: A unified JavaScript framework for online studies. International conference on computational social science (IC2S2) (pp. 1-3). Cologne, Germany.
  154. French, R. M., Glady, Y., & Thibaut, J.-P. (2017). An evaluation of scanpath-comparison and machine-learning classification algorithms used to study the dynamics of analogy making. Behavior Research Methods,49(4), 1291–1302. 10.3758/s13428-016-0788-z [DOI] [PubMed] [Google Scholar]
  155. Frens, M. A., & van Opstal, A. J. (1994). Transfer of short-term adaptation in human saccadic eye movements. Experimental Brain Research,100(2), 293–306. 10.1007/BF00227199 [DOI] [PubMed] [Google Scholar]
  156. Fu, X., Franchak, J. M., MacNeill, L. A., Gunther, K. E., Borjon, J. I., Yurkovic-Harding, J., & Pérez-Edgar, K. E. (2024). Implementing mobile eye tracking in psychological research: A practical guide. Behavior Research Methods. 10.3758/s13428-024-02473-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  157. Fuhl, W., Bozkir, E., Hosp, B., Castner, N., Geisler, D., Santini, T. C., & Kasneci, E. (2019a). Encodji: Encoding gaze data into emoji space for an amusing scanpath classification approach. In Proceedings of the 11th acm symposium on eye tracking research & applications. 10.1145/3314111.3323074
  158. Fuhl, W., Castner, N., Kübler, T., Lotz, A., Rosenstiel, W., & Kasneci, E. (2019b). Ferns for area of interest free scanpath classification. In Proceedings of the 11th acm symposium on eye tracking research & applications. 10.1145/3314111.3319826
  159. Fuhl, W., Kuebler, T., Brinkmann, H., Rosenberg, R., Rosenstiel, W., & Kasneci, E. (2018a). Region of interest generation algorithms for eye tracking data. Proceedings of the 3rd workshop on eye tracking and visualization. New York, NY, USA: Association for Computing Machinery. 10.1145/3205929.3205937
  160. Fuhl, W., Kuebler, T., Santini, T., & Kasneci, E. (2018b). Automatic generation of saliency-based areas of interest for the visualization and analysis of eye-tracking data. Proceedings of the conference on vision, modeling, and visualization (pp. 47–54). Goslar, Germany: Eurographics Association. 10.2312/vmv.20181252
  161. Geller, J., Winn, M. B., Mahr, T., & Mirman, D. (2020). GazeR: A package for processing gaze position and pupil size data. Behavior Research Methods,52(5), 2232–2255. 10.3758/s13428-020-01374-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  162. Ghose, U., Srinivasan, A. A., Boyce, W. P., Xu, H., & Chng, E. S. (2020). PyTrack: An end-to-end analysis toolkit for eye tracking. Behavior Research Methods,52(6), 2588–2603. 10.3758/s13428-020-01392-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  163. Gibaldi, A., Vanegas, M., Bex, P. J., & Maiello, G. (2017). Evaluation of the Tobii EyeX eye tracking controller and matlab toolkit for research. Behavior Research Methods,49(3), 923–946. 10.3758/s13428-016-0762-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
164. Gidlöf, K., Anikin, A., Lingonblad, M., & Wallin, A. (2017). Looking is buying. How visual attention and choice are affected by consumer preferences and properties of the supermarket shelf. Appetite,116, 29–38. 10.1016/j.appet.2017.04.020 [DOI] [PubMed]
  165. Gidlöf, K., Wallin, A., Dewhurst, R., & Holmqvist, K. (2013). Using eye tracking to trace a cognitive process: Gaze behaviour during decision making in a natural environment. Journal of Eye Movement Research,6(1). 10.16910/jemr.6.1.3
  166. Gitelman, D. R. (2002). ILAB: A program for postexperimental eye movement analysis. Behavior Research Methods, Instruments, & Computers,34(4), 605–612. 10.3758/BF03195488 [DOI] [PubMed] [Google Scholar]
  167. Glandorf, D., & Schroeder, S. (2021). Slice: An algorithm to assign fixations in multi-line texts. Procedia Computer Science,192, 2971–2979. 10.1016/j.procs.2021.09.069 [Google Scholar]
  168. Goldberg, J. H., & Helfman, J. I. (2010a). Comparing information graphics: A critical look at eye tracking. Proceedings of the 3rd beliv’10 workshop: Beyond time and errors: Novel evaluation methods for information visualization (pp. 71–78). New York, NY, USA: Association for Computing Machinery. 10.1145/2110192.2110203
  169. Goldberg, J. H., & Helfman, J. I. (2010b). Scanpath clustering and aggregation. Proceedings of the 2010 symposium on eye-tracking research & applications (pp. 227–234). New York, NY, USA: Association for Computing Machinery. 10.1145/1743666.1743721
  170. Goldberg, J. H., & Kotval, X. P. (1999). Computer interface evaluation using eye movements: Methods and constructs. International Journal of Industrial Ergonomics,24(6), 437–442. 10.1016/S0169-8141(98)00068-7 [Google Scholar]
  171. Gredebäck, G., Fikke, L., & Melinder, A. (2010). The development of joint visual attention: A longitudinal study of gaze following during interactions with mothers and strangers. Developmental Science,13(6), 839–848. 10.1111/j.1467-7687.2009.00945.x [DOI] [PubMed] [Google Scholar]
  172. Grindinger, T. J., Duchowski, A. T., & Sawyer, M. (2010). Group-wise similarity and classification of aggregate scanpaths. Proceedings of the 2010 symposium on eye-tracking research & applications (pp. 101–104). New York, NY, USA: Association for Computing Machinery. 10.1145/1743666.1743691
  173. Grindinger, T. J., Murali, V. N., Tetreault, S., Duchowski, A. T., Birchfield, S. T., & Orero, P. (2011). Algorithm for discriminating aggregate gaze points: Comparison with salient regions-of-interest. R. Koch, & F. Huang (Eds.), Computer vision – ACCV 2010 workshops (pp. 390–399). Berlin, Heidelberg: Springer Berlin Heidelberg. 10.1007/978-3-642-22822-3_39
174. Guarnera, D. T., Bryant, C. A., Mishra, A., Maletic, J. I., & Sharif, B. (2018). iTrace: Eye tracking infrastructure for development environments. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3204493.3208343
175. Gucciardi, A., Crotti, M., Ben Itzhak, N., Mailleux, L., Ortibus, E., Michelucci, U., & Sadikov, A. (2022). A new median filter application to deal with large windows of missing data in eye-gaze measurements. CEUR workshop proceedings: Neurodevelopmental impairments in preterm children - computational advancements.
  176. Guestrin, E. D., & Eizenman, M. (2006). General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Transactions on Biomedical Engineering,53(6), 1124–1133. 10.1109/tbme.2005.863952 [DOI] [PubMed] [Google Scholar]
  177. Gurtner, L. M., Bischof, W. F., & Mast, F. W. (2019). Recurrence quantification analysis of eye movements during mental imagery. Journal of Vision,19(1), 17–17. 10.1167/19.1.17 [DOI] [PubMed] [Google Scholar]
  178. Haass, M. J., Matzen, L. E., Butler, K. M., & Armenta, M. (2016). A new method for categorizing scanpaths from eye tracking data. Proceedings of the ninth biennial acm symposium on eye tracking research & applications (pp. 35–38). New York, NY, USA: Association for Computing Machinery. 10.1145/2857491.2857503
  179. Häggström, C., Englund, M., & Lindroos, O. (2015). Examining the gaze behaviors of harvester operators: An eye-tracking study. International Journal of Forest Engineering,26(2), 96–113. 10.1080/14942119.2015.1075793 [Google Scholar]
  180. Hagihara, H., Zaadnoordijk, L., Cusack, R., Kimura, N., & Tsuji, S. (2024). Exploration of factors affecting webcam-based automated gaze coding. Behavior Research Methods. 10.3758/s13428-024-02424-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  181. Han, P., Saunders, D. R., Woods, R. L., & Luo, G. (2013). Trajectory prediction of saccadic eye movements using a compressed exponential model. Journal of Vision,13(8), 27–27. 10.1167/13.8.27 [DOI] [PMC free article] [PubMed] [Google Scholar]
182. Häne, C., Zach, C., Cohen, A., Angst, R., & Pollefeys, M. (2013). Joint 3D scene reconstruction and class segmentation. In 2013 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 97–104). 10.1109/CVPR.2013.20
  183. Hanke, M., Mathôt, S., Ort, E., Peitek, N., Stadler, J., & Wagner, A. (2020). A practical guide to functional magnetic resonance imaging with simultaneous eye tracking for cognitive neuroimaging research. In S. Pollmann (Ed.), Spatial learning and attention guidance (pp. 291–305). New York, NY: Springer US. 10.1007/7657_2019_31
184. Hansen, D. W., Agustin, J. S., & Villanueva, A. (2010). Homography normalization for robust gaze estimation in uncalibrated setups. Proceedings of the 2010 symposium on eye-tracking research & applications (pp. 13–20). New York, NY, USA: Association for Computing Machinery. 10.1145/1743666.1743670
185. Hansen, D. W., Hansen, J. P., Nielsen, M., Johansen, A. S., & Stegmann, M. B. (2002). Eye typing using Markov and active appearance models. Proceedings of the sixth IEEE workshop on applications of computer vision, 2002. (WACV 2002) (pp. 132–136).
  186. Hansen, D. W., Heinrich, A., & Cañal Bruland, R. (2019). Aiming for the quiet eye in biathlon. Proceedings of the 11th acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3314111.3319850
  187. Hansen, D. W., & Ji, Q. (2010). In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence,32(3), 478–500. 10.1109/TPAMI.2009.30 [DOI] [PubMed] [Google Scholar]
  188. Hansen, D. W., & Pece, A. E. (2005). Eye tracking in the wild. Computer Vision and Image Understanding,98(1), 155–181. 10.1016/j.cviu.2004.07.013 [Google Scholar]
  189. Haslwanter, T. (1995). Mathematics of three-dimensional eye rotations. Vision Research,35(12), 1727–1739. 10.1016/0042-6989(94)00257-m [DOI] [PubMed] [Google Scholar]
  190. Hassoumi, A., Peysakhovich, V., & Hurter, C. (2019). Improving eye-tracking calibration accuracy using symbolic regression. PLoS One,14(3), 1–22. 10.1371/journal.pone.0213675 [DOI] [PMC free article] [PubMed] [Google Scholar]
  191. Havermann, K., Zimmermann, E., & Lappe, M. (2011). Eye position effects in saccadic adaptation. Journal of Neurophysiology,106(5), 2536–2545. 10.1152/jn.00023.2011 [DOI] [PubMed] [Google Scholar]
  192. Hayes, T. R., & Petrov, A. A. (2016). Mapping and correcting the influence of gaze position on pupil size measurements. Behavior Research Methods,48(2), 510–527. 10.3758/s13428-015-0588-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  193. Heck, M., Becker, C., & Deutscher, V. (2023). Webcam eye tracking for desktop and mobile devices: A systematic review. Proceedings of the 56th hawaii international conference on system sciences (pp. 6820–6829). https://hdl.handle.net/10125/103459
  194. Hegarty-Kelly, E. (2020). The development of eyemap 2.0 [Master’s thesis, National University of Ireland]. https://mural.maynoothuniversity.ie/14854/
195. Hein, O., & Zangemeister, W. H. (2017). Topology for gaze analyses - raw data segmentation. Journal of Eye Movement Research,10(1). 10.16910/jemr.10.1.1 [DOI] [PMC free article] [PubMed]
  196. Herholz, S., Chuang, L., Tanner, T., Bülthoff, H., & Fleming, R. (2008). LibGaze: Real-time gaze-tracking of freely moving observers for wall-sized displays. O. Deussen, & D. Keim (Eds.), Vision, modeling, and visualization (pp. 101-100). Heidelberg, Germany: Akademische Verlags-Gesellschaft AKA.
  197. Hershman, R., Henik, A., & Cohen, N. (2018). A novel blink detection method based on pupillometry noise. Behavior Research Methods,50(1), 107–114. 10.3758/s13428-017-1008-1 [DOI] [PubMed] [Google Scholar]
198. Hessels, R. S., Benjamins, J. S., Cornelissen, T. H. W., & Hooge, I. T. C. (2018). A validation of automatically generated areas-of-interest in videos of a face for eye-tracking research. Frontiers in Psychology,9. 10.3389/fpsyg.2018.01367 [DOI] [PMC free article] [PubMed]
  199. Hessels, R. S., Benjamins, J. S., van Doorn, A. J., Koenderink, J. J., Holleman, G. A., & Hooge, I. T. C. (2020). Looking behavior and potential human interactions during locomotion. Journal of Vision,20(10), 1–25. 10.1167/jov.20.10.5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  200. Hessels, R. S., Niehorster, D. C., Nyström, M., Andersson, R., & Hooge, I. T. C. (2018). Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. Royal Society Open Science,5(8), 180502. 10.1098/rsos.180502 [DOI] [PMC free article] [PubMed] [Google Scholar]
  201. Hessels, R. S., Nuthmann, A., Nyström, M., Andersson, R., Niehorster, D. C., & Hooge, I. T. C. (2025). The fundamentals of eye tracking part 1: The link between theory and research question. Behavior Research Methods,57, 16. 10.3758/s13428-024-02544-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  202. Hessels, R. S., van Doorn, A. J., Benjamins, J. S., Holleman, G. A., & Hooge, I. T. C. (2020). Task-related gaze control in human crowd navigation. Attention, Perception, & Psychophysics,82(6), 2482–2501. 10.3758/s13414-019-01952-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  203. Hessels, R. S., & Hooge, I. T. C. (2019). Eye tracking in developmental cognitive neuroscience-The good, the bad and the ugly. Developmental Cognitive Neuroscience,40, 100710. 10.1016/j.dcn.2019.100710 [DOI] [PMC free article] [PubMed] [Google Scholar]
  204. Hessels, R. S., Andersson, R., Hooge, I. T. C., Nyström, M., & Kemner, C. (2015). Consequences of eye color, positioning, and head movement for eye-tracking data quality in infant research. Infancy,20(6), 601–633. 10.1111/infa.12093 [Google Scholar]
  205. Hessels, R. S., Kemner, C., van den Boomen, C., & Hooge, I. T. C. (2016). The area-of-interest problem in eyetracking research: A noise-robust solution for face and sparse stimuli. Behavior Research Methods,48(4), 1694–1712. 10.3758/s13428-015-0676-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  206. Hessels, R. S., Niehorster, D. C., Kemner, C., & Hooge, I. T. C. (2017). Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC). Behavior Research Methods,49(5), 1802–1823. 10.3758/s13428-016-0822-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  207. Hessels, R. S., Holleman, G. A., Kingstone, A., Hooge, I. T., & Kemner, C. (2019). Gaze allocation in face-to-face communication is affected primarily by task structure and social context, not stimulus-driven factors. Cognition,184, 28–43. 10.1016/j.cognition.2018.12.005 [DOI] [PubMed] [Google Scholar]
  208. Hessels, R. S., Teunisse, M. K., Niehorster, D. C., Nyström, M., Benjamins, J. S., Senju, A., & Hooge, I. T. C. (2023). Task-related gaze behaviour in face-to-face dyadic collaboration: Toward an interactive theory? Visual Cognition,31(4), 291–313. 10.1080/13506285.2023.2250507 [Google Scholar]
  209. Heywood, S. (1972). Voluntary control of smooth eye movements and their velocity. Nature,238(5364), 408–410. 10.1038/238408a0 [DOI] [PubMed] [Google Scholar]
  210. Holleman, G. A., Hooge, I. T. C., Huijding, J., Deković, M., Kemner, C., & Hessels, R. S. (2023). Gaze and speech behavior in parent-child interactions: The role of conflict and cooperation. Current Psychology,42(14), 12129–12150. 10.1007/s12144-021-02532-7 [Google Scholar]
  211. Holmberg, A. (2007). Eye tracking and gaming: Eye movements in Quake III: Arena. [Master’s thesis, Lund University, Sweden].
212. Holmqvist, K., Lee Örbom, S., & Zemblys, R. (2021). Small head movements increase and colour noise in data from five video-based P-CR eye trackers. Behavior Research Methods, 1–16. 10.3758/s13428-021-01648-9 [DOI] [PMC free article] [PubMed]
  213. Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and measures. Oxford University Press.
  214. Holmqvist, K., Örbom, S. L., Miller, M., Kashchenevsky, A., Shovman, M., & Greenlee, M. W. (2020). Validation of a prototype hybrid eye-tracker against the DPI and the Tobii Spectrum. ACM symposium on eye tracking research and applications (pp. 1–9). 10.1145/3379155.3391330
  215. Holmqvist, K., & Blignaut, P. (2020). Small eye movements cannot be reliably measured by video-based P-CR eye-trackers. Behavior Research Methods,52(5), 2098–2121. 10.3758/s13428-020-01363-x [DOI] [PMC free article] [PubMed] [Google Scholar]
216. Hooge, I. T. C., Hessels, R. S., Niehorster, D. C., Andersson, R., Skrok, M. K., Konklewski-Pilewicz, R. ... Nyström, M. (in press). Eye tracker calibration: How well can humans refixate a target? Behavior Research Methods. 10.3758/s13428-024-02564-4 [DOI] [PMC free article] [PubMed]
217. Hooge, I. T. C., Niehorster, D. C., Hessels, R. S., Benjamins, J. S., & Nyström, M. (2022). How robust are wearable eye trackers to slow and fast head and body movements? Behavior Research Methods, 1–15. 10.3758/s13428-022-02010-3 [DOI] [PMC free article] [PubMed]
  218. Hooge, I. T. C., Niehorster, D. C., Nyström, M., Andersson, R., & Hessels, R. S. (2022). Fixation classification: How to merge and select fixation candidates. Behavior Research Methods,54(6), 2765–2776. 10.3758/s13428-021-01723-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
219. Hooge, I. T. C., Nuthmann, A., Nyström, M., Niehorster, D. C., Holleman, G. A., Andersson, R., & Hessels, R. S. (in press). The fundamentals of eye tracking part 2: From research question to operationalisation. Behavior Research Methods. [DOI] [PMC free article] [PubMed]
  220. Hooge, I. T. C., & Camps, G. (2013). Scan path entropy and arrow plots: Capturing scanning behavior of multiple observers. Frontiers in Psychology,4, 996. 10.3389/fpsyg.2013.00996 [DOI] [PMC free article] [PubMed] [Google Scholar]
  221. Hooge, I. T. C., Nyström, M., Cornelissen, T., & Holmqvist, K. (2015). The art of braking: Post saccadic oscillations in the eye tracker signal decrease with increasing saccade size. Vision Research,112, 55–67. 10.1016/j.visres.2015.03.015 [DOI] [PubMed] [Google Scholar]
  222. Hooge, I. T. C., Niehorster, D. C., Nyström, M., Andersson, R., & Hessels, R. S. (2018). Is human classification by experienced untrained observers a gold standard in fixation detection? Behavior Research Methods,50(5), 1864–1881. 10.3758/s13428-017-0955-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  223. Hooge, I. T. C., Niehorster, D. C., Hessels, R. S., Cleveland, D., & Nyström, M. (2021). The pupil-size artefact (PSA) across time, viewing direction, and different eye trackers. Behavior Research Methods. 10.3758/s13428-020-01512-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  224. Hooge, I. T. C., Niehorster, D. C., Nyström, M., & Hessels, R. S. (2024). Large eye-head gaze shifts measured with a wearable eye tracker and an industrial camera. Behavior Research Methods. 10.3758/s13428-023-02316-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  225. Hopper, L. M., Gulli, R. A., Howard, L. H., Kano, F., Krupenye, C., Ryan, A. M., & Paukner, A. (2021). The application of noninvasive, restraint-free eyetracking methods for use with nonhuman primates. Behavior Research Methods,53(3), 1003–1030. 10.3758/s13428-020-01465-6 [DOI] [PubMed] [Google Scholar]
  226. Hosp, B., & Wahl, S. (2023a). ZERO: A generic open-source extended reality eye-tracking controller interface for scientists. Proceedings of the 2023 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3588015.3589203
  227. Hosp, B., & Wahl, S. (2023b). ZING: An eye-tracking experiment software for organization and presentation of omnidirectional stimuli in virtual reality. Proceedings of the 2023 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3588015.3589201
  228. Hosp, B., Eivazi, S., Maurer, M., Fuhl, W., Geisler, D., & Kasneci, E. (2020). RemoteEye: An open-source high-speed remote eye tracker. Behavior Research Methods,52(3), 1387–1401. 10.3758/s13428-019-01305-2 [DOI] [PubMed] [Google Scholar]
  229. Houben, M., Goumans, J., & van der Steen, J. (2006). Recording three-dimensional eye movements: Scleral search coils versus video oculography. Investigative Ophthalmology & Visual Science,47(1), 179–187. 10.1167/iovs.05-0234 [DOI] [PubMed] [Google Scholar]
  230. Huang, H., Allison, R. S., & Jenkin, M. (2004). Combined head-eye tracking for immersive virtual reality. Icat’2004 14th international conference on artificial reality and telexistance Seoul, Korea.
  231. Hutton, S. B. (2019). Visual angle calculator. Retrieved 15 Apr 2024, from https://www.sr-research.com/visual-angle-calculator/
  232. Inchingolo, P., & Spanio, M. (1985). On the identification and analysis of saccadic eye movements-a quantitative study of the processing procedures. IEEE Transactions on Biomedical Engineering,BME–32(9), 683–695. 10.1109/TBME.1985.325586 [DOI] [PubMed]
  233. Inhoff, A. W., & Radach, R. (1998). Definition and computation of oculomotor measures in the study of cognitive processes. G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 29–53). Amsterdam: Elsevier Science Ltd. 10.1016/B978-008043361-5/50003-1
  234. Ionescu, G., Frey, A., Guyader, N., Kristensen, E., Andreev, A., & Guérin-Dugué, A. (2022). Synchronization of acquisition devices in neuroimaging: An application using co-registration of eye movements and electroencephalography. Behavior Research Methods,54(5), 2545–2564. 10.3758/s13428-021-01756-6 [DOI] [PubMed] [Google Scholar]
235. Ivanchenko, D., Rifai, K., Hafed, Z. M., & Schaeffel, F. (2021). A low-cost, high-performance video-based binocular eye tracker for psychophysical research. Journal of Eye Movement Research,14(3). 10.16910/jemr.14.3.3 [DOI] [PMC free article] [PubMed]
  236. Jakobi, D. N., Krakowczyk, D. G., & Jäger, L. A. (2024). Reporting eye-tracking data quality: Towards a new standard. Proceedings of the 2024 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3649902.3655658
  237. Jakobsen, A. L. (2019). Translation technology research with eye tracking. In M. O’Hagan (Ed.), The routledge handbook of translation and technology (pp. 398-416). London, UK: Routledge. 10.4324/9781315311258-28
  238. Jantti, V., Pyykkö, I., Juhola, M., Ignatius, J., Hansson, G. -Å., & Henriksson, N.-G. (1983). Effect of filtering in the computer analysis of saccades. Acta Oto-Laryngologica,96(sup406), 231–234. 10.3109/00016488309123040 [DOI] [PubMed] [Google Scholar]
239. JASP Team (2024). JASP (Version 0.18.3) [Computer software]. https://jasp-stats.org/
  240. Jensen, R. R., Stets, J. D., Suurmets, S., Clement, J., & Aanæs, H. (2017). Wearable gaze trackers: Mapping visual attention in 3D. P. Sharma, & F. M. Bianchi (Eds.), Image analysis (pp. 66–76). Cham: Springer International Publishing. 10.1007/978-3-319-59126-1_6
  241. Jermann, P., Mullins, D., Nüssli, M.-A., & Dillenbourg, P. (2011). Collaborative gaze footprints: Correlates of interaction quality. H. Spada, G. Stahl, N. Miyake, and N. Law (Eds.), Connecting computer-supported collaborative learning to policy and practice: CSCL2011 conference proceedings (Vol. 1, pp. 184–191). International Society of the Learning Sciences.10.22318/cscl2011.184
  242. Jianu, R., & Alam, S. S. (2018). A data model and task space for data of interest (DOI) eye-tracking analyses. IEEE Transactions on Visualization and Computer Graphics,24(3), 1232–1245. 10.1109/TVCG.2017.2665498 [DOI] [PubMed] [Google Scholar]
  243. Jogeshwar, A. K., & Pelz, J. B. (2021). GazeEnViz4D: 4-D gaze-in-environment visualization pipeline. Procedia Computer Science,192, 2952–2961. 10.1016/j.procs.2021.09.067 [Google Scholar]
  244. Johnson, J. S., Liu, L., Thomas, G., & Spencer, J. P. (2007). Calibration algorithm for eyetracking with unrestricted head movement. Behavior Research Methods,39(1), 123–132. 10.3758/BF03192850 [DOI] [PubMed] [Google Scholar]
  245. Jongerius, C., Callemein, T., Goedemé, T., Van Beeck, K., Romijn, J. A., Smets, E. M. A., & Hillen, M. A. (2021). Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest. Behavior Research Methods,53(5), 2037–2048. 10.3758/s13428-021-01544-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  246. Jordan, T. R., McGowan, V. A., & Paterson, K. B. (2012). Reading with a filtered fovea: The influence of visual quality at the point of fixation during reading. Psychonomic Bulletin & Review,19(6), 1078–1084. 10.3758/s13423-012-0307-x [DOI] [PubMed] [Google Scholar]
  247. Josephson, S., & Holmes, M. E. (2002). Attention to repeated images on the world-wide web: Another look at scanpath theory. Behavior Research Methods, Instruments, & Computers,34(4), 539–548. 10.3758/BF03195483 [DOI] [PubMed] [Google Scholar]
  248. Juhola, M. (1986). The effect of digital lowpass filters on the maximum velocity of saccadic eye movements. Computers in Biology and Medicine,16(5), 361–370. 10.1016/0010-4825(86)90003-X [DOI] [PubMed] [Google Scholar]
  249. Juhola, M. (1991). Median filtering is appropriate to signals of saccadic eye movements. Computers in Biology and Medicine,21(1), 43–49. 10.1016/0010-4825(91)90034-7 [DOI] [PubMed] [Google Scholar]
  250. Juhola, M., Jäntti, V., Pyykkö, I., Magnusson, M., Schalén, L., & Åkesson, M. (1985). Detection of saccadic eye movements using a non-recursive adaptive digital filter. Computer Methods and Programs in Biomedicine,21(2), 81–88. 10.1016/0169-2607(85)90066-5 [DOI] [PubMed] [Google Scholar]
  251. Juhola, M., Jäntti, V., & Aantaa, E. (1986). Analysis of saccadic eye movements with a microcomputer. Journal of Biomedical Engineering,8(3), 262–267. 10.1016/0141-5425(86)90093-2 [DOI] [PubMed] [Google Scholar]
  252. Kaduk, T., Goeke, C., Finger, H., & König, P. (2023). Webcam eye tracking close to laboratory standards: Comparing a new webcam-based system and the EyeLink 1000. Behavior Research Methods. 10.3758/s13428-023-02237-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  253. Kang, Z., Mandal, S., Crutchfield, J., Millan, A., & McClung, S. N. (2016). Designs and algorithms to map eye tracking data with dynamic multielement moving objects. Computational Intelligence and Neuroscience,2016, 9354760. 10.1155/2016/9354760 [DOI] [PMC free article] [PubMed] [Google Scholar]
  254. Karl, S., Boch, M., Virányi, Z., Lamm, C., & Huber, L. (2020). Training pet dogs for eye-tracking and awake fMRI. Behavior Research Methods,52(2), 838–856. 10.3758/s13428-019-01281-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
255. Karn, K. S. (2000). “Saccade pickers” vs. “fixation pickers”: The effect of eye tracking instrumentation on research. Proceedings of the 2000 symposium on eye tracking research & applications (pp. 87-88). New York, NY, USA: Association for Computing Machinery. 10.1145/355017.355030
256. Kasprowski, P., & Harezlak, K. (2017). Gaze self-similarity plot - a new visualization technique. Journal of Eye Movement Research,10(5). 10.16910/jemr.10.5.3 [DOI] [PMC free article] [PubMed]
257. Kassner, M., Patera, W., & Bulling, A. (2014). Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. Proceedings of the 2014 acm international joint conference on pervasive and ubiquitous computing: Adjunct publication (pp. 1151–1160). New York, NY, USA: ACM. http://doi.acm.org/10.1145/2638728.2641695
  258. Kiefer, P., Giannopoulos, I., Kremer, D., Schlieder, C., & Raubal, M. (2014). Starting to get bored: An outdoor eye tracking study of tourists exploring a city panorama. Proceedings of the symposium on eye tracking research and applications (pp. 315-318). New York, NY, USA: Association for Computing Machinery. 10.1145/2578153.2578216
  259. Kinsman, T., Evans, K., Sweeney, G., Keane, T., & Pelz, J. (2012). Ego-motion compensation improves fixation detection in wearable eye tracking. Proceedings of the symposium on eye tracking research and applications (pp. 221-224). New York, NY, USA: Association for Computing Machinery. 10.1145/2168556.2168599
  260. Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., & Broussard, C. (2007). What’s new in Psychtoolbox-3. Perception,36(14), 1. [Google Scholar]
  261. Kliegl, R., & Olson, R. K. (1981). Reduction and calibration of eye monitor data. Behavior Research Methods & Instrumentation,13(2), 107–111. 10.3758/BF03207917 [Google Scholar]
  262. Koch, M., Kurzhals, K., Burch, M., & Weiskopf, D. (2023). Visualization psychology for eye tracking evaluation. In D. Albers Szafir, R. Borgo, M. Chen, D. J. Edwards, B. Fisher, & L. Padilla (Eds.), Visualization psychology (pp. 243–260). Cham: Springer International Publishing. 10.1007/978-3-031-34738-2_10
  263. Komogortsev, O. V., & Karpov, A. (2013). Automated classification and scoring of smooth pursuit eye movements in the presence of fixations and saccades. Behavior Research Methods,45(1), 203–215. 10.3758/s13428-012-0234-9 [DOI] [PubMed] [Google Scholar]
  264. Komogortsev, O. V., Gobert, D. V., Jayarathna, S., Koh, D. H., & Gowda, S. M. (2010). Standardization of automated analyses of oculomotor fixation and saccadic behaviors. IEEE Transactions on Biomedical Engineering,57(11), 2635–2645. 10.1109/TBME.2010.2057429 [DOI] [PubMed] [Google Scholar]
  265. Kopácsi, L., Barz, M., Bhatti, O. S., & Sonntag, D. (2023). IMETA: An interactive mobile eye tracking annotation method for semi-automatic fixation-to-AOI mapping. Companion proceedings of the 28th international conference on intelligent user interfaces (pp. 33–36). New York, NY, USA: Association for Computing Machinery. 10.1145/3581754.3584125
  266. Kothari, R., Yang, Z., Kanan, C., Bailey, R., Pelz, J. B., & Diaz, G. J. (2020). Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities. Scientific Reports,10(1), 1–18. 10.1038/s41598-020-59251-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  267. Kothe, C., Shirazi, S. Y., Stenner, T., Medine, D., Boulay, C., Grivich, M. I. ... Makeig, S. (2024). The lab streaming layer for synchronized multimodal recording. bioRxiv. 10.1101/2024.02.13.580071
  268. Krakowczyk, D. G., Reich, D. R., Chwastek, J., Jakobi, D. N., Prasse, P., Süss, A. ... Jäger, L. A. (2023). pymovements: A Python package for eye movement data processing. Proceedings of the 2023 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3588015.3590134
  269. Krassanakis, V., Filippakopoulou, V., & Nakos, B. (2014). EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification. Journal of Eye Movement Research,7(1). 10.16910/jemr.7.1.1
  270. Krejtz, K., Szmidt, T., Duchowski, A. T., & Krejtz, I. (2014). Entropy-based statistical analysis of eye movement transitions. Proceedings of the symposium on eye tracking research and applications (pp. 159-166). New York, NY, USA: Association for Computing Machinery.
  271. Krejtz, K., Duchowski, A., Szmidt, T., Krejtz, I., González Perilli, F., Pires, A., & Villalobos, N. (2015). Gaze transition entropy. ACM Transactions on Applied Perception,13(1), 1–20. 10.1145/2834121 [Google Scholar]
  272. Krohn, O. A. N., Varankian, V., Lind, P. G., & Moreno e Mello, G. B. (2020). Construction of an inexpensive eye tracker for social inclusion and education. In M. Antona, & C. Stephanidis (Eds.), Universal access in human-computer interaction. design approaches and supporting technologies (pp. 60–78). Cham: Springer International Publishing.
  273. Kübler, T. C. (2020). The perception engineer’s toolkit for eye-tracking data analysis. Acm symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. 10.1145/3379156.3391366
  274. Kübler, T. C., Sippel, K., Fuhl, W., Schievelbein, G., Aufreiter, J., Rosenberg, R. ... Kasneci, E. (2015). Analysis of eye movements with eyetrace. A. Fred, H. Gamboa, & D. Elias (Eds.), Biomedical engineering systems and technologies (pp. 458–471). Cham: Springer International Publishing.
  275. Kübler, T. C., Rothe, C., Schiefer, U., Rosenstiel, W., & Kasneci, E. (2017). SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies. Behavior Research Methods,49(3), 1048–1064. 10.3758/s13428-016-0765-6 [DOI] [PubMed]
276. Kucharský, Š., Visser, I., Truțescu, G.-O., Laurence, P. G., Zaharieva, M., & Raijmakers, M. E. J. (2020). Cognitive strategies revealed by clustering eye movement transitions. Journal of Eye Movement Research, 13. 10.16910/jemr.13.1.1 [DOI] [PMC free article] [PubMed]
  277. Kurzhals, K., Burch, M., Blascheck, T., Andrienko, G., Andrienko, N., & Weiskopf, D. (2017). A task-based view on the visual analysis of eye-tracking data. In M. Burch, L. Chuang, B. Fisher, A. Schmidt, & D. Weiskopf (Eds.), Eye tracking and visualization (pp. 3–22). Cham: Springer International Publishing. [Google Scholar]
  278. Kurzhals, K., Heimerl, F., & Weiskopf, D. (2014). ISeeCube: Visual analysis of gaze data for video. Proceedings of the symposium on eye tracking research and applications (pp. 43-50). New York, NY, USA: Association for Computing Machinery.
  279. Kurzhals, K., Hlawatsch, M., Seeger, C., & Weiskopf, D. (2017). Visual analytics for mobile eye tracking. IEEE Transactions on Visualization and Computer Graphics,23(1), 301–310. 10.1109/TVCG.2016.2598695 [DOI] [PubMed] [Google Scholar]
  280. Lanata, A., Valenza, G., Greco, A., & Scilingo, E. P. (2015). Robust head mounted wearable eye tracking system for dynamical calibration. Journal of Eye Movement Research,8(5). 10.16910/jemr.8.5.2
281. Lander, C., Gehring, S., Krüger, A., Boring, S., & Bulling, A. (2015). GazeProjector: Accurate gaze estimation and seamless gaze interaction across multiple displays. Proceedings of the 28th annual acm symposium on user interface software & technology (pp. 395–404).
  282. Lander, C., Kerber, F., Rauber, T., & Krüger, A. (2016). A time-efficient re-calibration algorithm for improved long-term accuracy of head-worn eye trackers. Proceedings of the ninth biennial ACM symposium on eye tracking research & applications, ETRA 2016, Charleston, SC, USA, March 14-17, 2016 (pp. 213–216). http://doi.acm.org/10.1145/2857491.2857513
283. Lander, C., Löchtefeld, M., & Krüger, A. (2018). hEYEbrid: A hybrid approach for mobile calibration-free gaze estimation. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,1(4). 10.1145/3161166
  284. Langstrand, J.-P., Nguyen, H. T., & Hildebrandt, M. (2018). Synopticon: Sensor fusion for real-time gaze detection and analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting,62(1), 311–315. 10.1177/1541931218621072 [Google Scholar]
  285. Lao, J., Miellet, S., Pernet, C., Sokhn, N., & Caldara, R. (2017). iMap4: An open source toolbox for the statistical fixation mapping of eye movement data with linear mixed modeling. Behavior Research Methods,49, 559–575. 10.3758/s13428-016-0737-x [DOI] [PubMed] [Google Scholar]
  286. Lappi, O. (2016). Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference. Neuroscience & Biobehavioral Reviews,69, 49–68. 10.1016/j.neubiorev.2016.06.006 [DOI] [PubMed] [Google Scholar]
  287. Lara-Alvarez, C., & Gonzalez-Herrera, F. (2020). Testing multiple polynomial models for eye-tracker calibration. Behavior Research Methods,52(6), 2506–2514. 10.3758/s13428-020-01371-x [DOI] [PubMed] [Google Scholar]
  288. Larigaldie, N., Dreneva, A., & Orquin, J. (2024). eyeScrollR: A software method for reproducible mapping of eye-tracking data from scrollable web pages. Behavior Research Methods. 10.3758/s13428-024-02343-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
289. Larsen, O. F. P., Tresselt, W. G., Lorenz, E. A., Holt, T., Sandstrak, G., Hansen, T. I., & Holt, A. (2024). A method for synchronized use of EEG and eye tracking in fully immersive VR. Frontiers in Human Neuroscience,18. 10.3389/fnhum.2024.1347974 [DOI] [PMC free article] [PubMed]
  290. Larsson, L., Nyström, M., & Stridh, M. (2013). Detection of saccades and postsaccadic oscillations in the presence of smooth pursuit. IEEE Transactions on Biomedical Engineering,60(9), 2484–2493. 10.1109/tbme.2013.2258918 [DOI] [PubMed] [Google Scholar]
  291. Larsson, L., Nyström, M., Andersson, R., & Stridh, M. (2015). Detection of fixations and smooth pursuit movements in high-speed eye-tracking data. Biomedical Signal Processing and Control,18, 145–152. 10.1016/j.bspc.2014.12.008 [DOI] [PubMed] [Google Scholar]
  292. Larsson, L., Schwaller, A., Nyström, M., & Stridh, M. (2016). Head movement compensation and multi-modal event detection in eye-tracking data for unconstrained head movements. Journal of Neuroscience Methods,274, 13–26. 10.1016/j.jneumeth.2016.09.005 [DOI] [PubMed] [Google Scholar]
  293. Lavoie, E. B., Valevicius, A. M., Boser, Q. A., Kovic, O., Vette, A. H., Pilarski, P. M., & Chapman, C. S. (2018). Using synchronized eye and motion tracking to determine high-precision eye-movement patterns during object-interaction tasks. Journal of Vision,18(6), 18. 10.1167/18.6.18 [DOI] [PubMed] [Google Scholar]
  294. Lawrence, J. M., Abhari, K., Prime, S. L., Meek, B. P., Desanghere, L., Baugh, L. A., & Marotta, J. J. (2011). A novel integrative method for analyzing eye and hand behaviour during reaching and grasping in an MRI environment. Behavior Research Methods,43(2), 399–408. 10.3758/s13428-011-0067-y [DOI] [PubMed] [Google Scholar]
  295. Le Meur, O., & Baccino, T. (2013). Methods for comparing scanpaths and saliency maps: Strengths and weaknesses. Behavior Research Methods,45(1), 251–266. 10.3758/s13428-012-0226-9 [DOI] [PubMed] [Google Scholar]
  296. Leigh, R. J., & Zee, D. S. (2015). The neurology of eye movements. Oxford University Press. [Google Scholar]
  297. Leppänen, J. M., Butcher, J. W., Godbout, C., Stephenson, K., Hendrixson, D. T., Griswold, S., & Manary, M. J. (2022). Assessing infant cognition in field settings using eye-tracking: A pilot cohort trial in Sierra Leone. BMJ Open,12(2), e049783. 10.1136/bmjopen-2021-049783 [DOI] [PMC free article] [PubMed] [Google Scholar]
  298. Li, D., Babcock, J. S., & Parkhurst, D. J. (2006). openEyes: A low-cost head-mounted eye-tracking solution. Proceedings of the eye tracking research & application symposium, ETRA 2006, San Diego, California, USA, March 27-29, 2006 (pp. 95–100). http://doi.acm.org/10.1145/1117309.1117350
  299. Li, Q., Joo, S. J., Yeatman, J. D., & Reinecke, K. (2020). Controlling for participants’ viewing distance in large-scale, psychophysical online experiments using a virtual chinrest. Scientific Reports,10(1), 904. 10.1038/s41598-019-57204-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  300. Liston, D. B., Krukowski, A. E., & Stone, L. S. (2013). Saccade detection during smooth tracking. Displays,34(2), 171–176. 10.1016/j.displa.2012.10.002 [Google Scholar]
  301. Liu, J., Chi, J., Yang, H., & Yin, X. (2022). In the eye of the beholder: A survey of gaze tracking techniques. Pattern Recognition,132, 108944. 10.1016/j.patcog.2022.108944 [Google Scholar]
  302. Liversedge, S. P., Zang, C., & Liang, F. (2022). Reading comprehension II. In The science of reading (pp. 261–279). John Wiley & Sons, Ltd.
303. Llanes-Jurado, J., Marín-Morales, J., Guixeres, J., & Alcañiz, M. (2020). Development and calibration of an eye-tracking fixation identification algorithm for immersive virtual reality. Sensors,20(17). 10.3390/s20174956 [DOI] [PMC free article] [PubMed]
  304. Loeb, H., Chamberlain, S., & Lee, Y. (2016). EyeSync - real time integration of an eye tracker in a driving simulator environment (tech. rep. No. SAE Technical Paper 2016-01-1419). 10.4271/2016-01-1419
  305. Loschky, L. C., & Wolverton, G. S. (2007). How late can you update gaze-contingent multiresolutional displays without detection? ACM Transactions on Multimedia Computing, Communications, and Applications,3(4), 1–10. 10.1145/1314303.1314310 [Google Scholar]
  306. Lotze, A., Love, K., Velisar, A., & Shanidze, N. M. (2024). A low-cost robotic oculomotor simulator for assessing eye tracking accuracy in health and disease. Behavior Research Methods,56(1), 80–92. 10.3758/s13428-022-01938-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  307. Lukander, K., Jagadeesan, S., Chi, H., & Müller, K. (2013). OMG!: A new robust, wearable and affordable open source mobile gaze tracker. Proceedings of the 15th international conference on human-computer interaction with mobile devices and services (pp. 408–411).
308. Ma, C., Choi, K.-A., Choi, B.-D., & Ko, S.-J. (2015). Robust remote gaze estimation method based on multiple geometric transforms. Optical Engineering,54(8), 083103. 10.1117/1.OE.54.8.083103
  309. Ma, X., Liu, Y., Clariana, R., Gu, C., & Li, P. (2023). From eye movements to scanpath networks: A method for studying individual differences in expository text reading. Behavior Research Methods,55(2), 730–750. 10.3758/s13428-022-01842-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
310. Machner, B., Lencer, M. C., Möller, L., von der Gablentz, J., Heide, W., Helmchen, C., & Sprenger, A. (2020). Unbalancing the attentional priority map via gaze-contingent displays induces neglect-like visual exploration. Frontiers in Human Neuroscience,14. 10.3389/fnhum.2020.00041 [DOI] [PMC free article] [PubMed]
  311. Mack, D. J., Belfanti, S., & Schwarz, U. (2017). The effect of sampling rate and lowpass filters on saccades - a modeling approach. Behavior Research Methods,49(6), 2146–2162. 10.3758/s13428-016-0848-4 [DOI] [PubMed] [Google Scholar]
312. Malkin, E., Deza, A., & Poggio, T. A. (2020). CUDA-optimized real-time rendering of a foveated visual system. 2nd workshop on shared visual representations in human and machine intelligence (SVRHM), NeurIPS 2020. https://openreview.net/forum?id=ZMsqkUadtZ7
  313. Mantiuk, R., Kowalik, M., Nowosielski, A., & Bazyluk, B. (2012). Do-it-yourself eye tracker: Low-cost pupil-based eye tracker for computer graphics applications. International conference on multimedia modeling (pp. 115–125).
  314. Mardanbegi, D., & Hansen, D. W. (2011). Mobile gaze-based screen interaction in 3D environments. Proceedings of the 1st conference on novel gaze-controlled applications. New York, NY, USA: Association for Computing Machinery.
  315. Martin, J. T., Pinto, J., Bulte, D., & Spitschan, M. (2022). PyPlr: A versatile, integrated system of hardware and software for researching the human pupillary light reflex. Behavior Research Methods,54(6), 2720–2739. 10.3758/s13428-021-01759-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
316. Mathôt, S. (2013). A simple way to reconstruct pupil size during eye blinks. 10.6084/m9.figshare.688001.v1 [Google Scholar]
317. Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods,44(2), 314–324. 10.3758/s13428-011-0168-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  318. Matsuda, K., Nagami, T., Sugase, Y., Takemura, A., & Kawano, K. (2017). A widely applicable real-time mono/binocular eye tracking system using a high frame-rate digital camera. M. Kurosu (Ed.), Human-computer interaction. user interface design, development and multimodality (pp. 593–608). Cham: Springer International Publishing.
  319. Matthis, J. S., Yates, J. L., & Hayhoe, M. M. (2018). Gaze and the control of foot placement when walking in natural terrain. Current Biology,28(8), 1224-1233.e5. 10.1016/j.cub.2018.03.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
320. Mazziotti, R., Carrara, F., Viglione, A., Lupori, L., Verde, L. L., Benedetto, A. ... Pizzorusso, T. (2021). MEYE: Web app for translational and real-time pupillometry. eNeuro,8(5). 10.1523/ENEURO.0122-21.2021 [DOI] [PMC free article] [PubMed]
  321. McCamy, M. B., Collins, N., Otero-Millan, J., Al-Kalbani, M., Macknik, S. L., Coakley, D., et al. (2013). Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system. PeerJ,1, e14. 10.7717/peerj.14 [DOI] [PMC free article] [PubMed] [Google Scholar]
  322. McCamy, M. B., Otero-Millan, J., Leigh, R. J., King, S. A., Schneider, R. M., Macknik, S. L., & Martinez-Conde, S. (2015). Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique. PLoS One,10(6), e0128428. 10.1371/journal.pone.0128428 [DOI] [PMC free article] [PubMed] [Google Scholar]
  323. McConkie, G. W., Wolverton, G. S., & Zola, D. (1984). Instrumentation considerations in research involving eye-movement contingent stimulus control. A. G. Gale, & F. Johnson (Eds.), Theoretical and applied aspects of eye movement research (Vol. 22, pp. 39-47). North-Holland.
  324. McConkie, G. W. (1981). Evaluating and reporting data quality in eye movement research. Behavior Research Methods,13(2), 97–106. 10.3758/bf03207916 [Google Scholar]
  325. McConkie, G. W. (1997). Eye movement contingent display control: Personal reflections and comments. Scientific Studies of Reading,1(4), 303–316. 10.1207/s1532799xssr0104_1 [Google Scholar]
  326. McConkie, G. W., & Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics,17(6), 578–586. 10.3758/BF03203972 [Google Scholar]
  327. McLaughlin, S. C. (1967). Parametric adjustment in saccadic eye movements. Perception & Psychophysics,2(8), 359–362. 10.3758/bf03210071 [Google Scholar]
  328. Medenica, Z., & Kun, A. L. (2012). Data synchronization for cognitive load estimation in driving simulator-based experiments. Adjunct proceedings of the 4th international conference on automotive user interfaces and interactive vehicular applications (pp. 92–94).
  329. Menges, R., Kramer, S., Hill, S., Nisslmueller, M., Kumar, C., & Staab, S. (2020). A visualization tool for eye tracking data analysis in the web. Acm symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery. [Google Scholar]
  330. Mercier, T. M., Budka, M., Vasilev, M. R., Kirkby, J. A., Angele, B., & Slattery, T. J. (2024). Dual input stream transformer for vertical drift correction in eye-tracking reading data. Retrieved from arXiv:2311.06095 [DOI] [PubMed]
  331. MetisVidere (2020). Chinrest [github repository]. Retrieved 8 Mar 2024, from https://github.com/MetisVidere/ChinRest
332. Meyer, L., Josefsson, B., Vrotsou, K., Westin, C., & Lundberg, J. (2021). Evaluation of an AOI mapping and analysis tool for the identification of visual scan pattern. 2021 IEEE/AIAA 40th digital avionics systems conference (DASC) (pp. 1-8).
  333. Mihali, A., van Opheusden, B., & Ma, W. J. (2017). Bayesian microsaccade detection. Journal of Vision,17(1), 13–13. 10.1167/17.1.13 [DOI] [PMC free article] [PubMed] [Google Scholar]
  334. Mohanto, B., Islam, A. T., Gobbetti, E., & Staadt, O. (2022). An integrative view of foveated rendering. Computers & Graphics,102, 474–501. 10.1016/j.cag.2021.10.010 [Google Scholar]
  335. Morimoto, C. H., Coutinho, F. L., & Hansen, D. W. (2020). Screen-light decomposition framework for point-of-gaze estimation using a single uncalibrated camera and multiple light sources. Journal of Mathematical Imaging and Vision,62(4), 585–605. 10.1007/s10851-020-00947-8 [Google Scholar]
  336. Mould, M. S., Foster, D. H., Amano, K., & Oakley, J. P. (2012). A simple nonparametric method for classifying eye fixations. Vision Research,57, 18–25. 10.1016/j.visres.2011.12.006 [DOI] [PubMed] [Google Scholar]
337. Mulligan, J. B. (1997). Recovery of motion parameters from distortions in scanned images. Proceedings of the NASA image registration workshop (IRW97).
  338. Narasappa, D. (2022). Integration of eye tracking device and 3D motion capture for simultaneous gaze and body movement analysis [Master’s thesis, KTH, Stockholm, Sweden]. https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-316590
339. Narcizo, F. B., dos Santos, F. E. D., & Hansen, D. W. (2021). High-accuracy gaze estimation for interpolation-based eye-tracking methods. Vision,5(3). 10.3390/vision5030041 [DOI] [PMC free article] [PubMed]
340. Nasrabadi, H. R., & Alonso, J.-M. (2022). Modular streaming pipeline of eye/head tracking data using Tobii Pro Glasses 3. bioRxiv. 10.1101/2022.09.02.506255
341. Newport, R. A., Russo, C., Liu, S., Suman, A. A., & Di Ieva, A. (2022). SoftMatch: Comparing scanpaths using combinatorial spatio-temporal sequences with fractal curves. Sensors,22(19). 10.3390/s22197438 [DOI] [PMC free article] [PubMed]
  342. Niehorster, D. C., Andersson, R., & Nyström, M. (2020). Titta: A toolbox for creating PsychToolbox and Psychopy experiments with Tobii eye trackers. Behavior Research Methods,52(2), 1970–1979. 10.3758/s13428-020-01358-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  343. Niehorster, D. C., Hessels, R. S., & Benjamins, J. S. (2020). GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker. Behavior Research Methods,52(3), 1244–1253. 10.3758/s13428-019-01314-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  344. Niehorster, D. C., Santini, T., Hessels, R. S., Hooge, I. T. C., Kasneci, E., & Nyström, M. (2020). The impact of slippage on the data quality of head-worn eye trackers. Behavior Research Methods,52(3), 1140–1160. 10.3758/s13428-019-01307-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  345. Niehorster, D. C., Whitham, W., Lake, B. R., Schapiro, S. J., Andolina, I. M., & Yorzinski, J. L. (2024). Enhancing eye tracking for nonhuman primates and other subjects unable to follow instructions: Adaptive calibration and validation of tobii eye trackers with the Titta toolbox. Behavior Research Methods,57, 4. 10.3758/s13428-024-02540-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  346. Niehorster, D. C., Zemblys, R., Beelders, T., & Holmqvist, K. (2020). Characterizing gaze position signals and synthesizing noise during fixations in eye-tracking data. Behavior Research Methods,52(6), 2515–2534. 10.3758/s13428-020-01400-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  347. Niehorster, D. C., & Nyström, M. (2020). SMITE: A toolbox for creating Psychophysics Toolbox and PsychoPy experiments with SMI eye trackers. Behavior Research Methods,52(1), 295–304. 10.3758/s13428-019-01226-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  348. Niehorster, D. C., Siu, W. W. F., & Li, L. (2015). Manual tracking enhances smooth pursuit eye movements. Journal of Vision,15(15), 11. 10.1167/15.15.11 [DOI] [PMC free article] [PubMed] [Google Scholar]
  349. Niehorster, D. C., Cornelissen, T. H. W., Holmqvist, K., Hooge, I. T. C., & Hessels, R. S. (2018). What to expect from your remote eye-tracker when participants are unrestrained. Behavior Research Methods,50(1), 213–227. 10.3758/s13428-017-0863-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  350. Niehorster, D. C., Cornelissen, T., Holmqvist, K., & Hooge, I. T. C. (2019). Searching with and against each other: Spatiotemporal coordination of visual search behavior in collaborative and competitive settings. Attention, Perception, & Psychophysics,81(3), 666–683. 10.3758/s13414-018-01640-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  351. Niehorster, D. C., Zemblys, R., & Holmqvist, K. (2021). Is apparent fixational drift in eye-tracking data due to filters or eyeball rotation? Behavior Research Methods,53(1), 311–324. 10.3758/s13428-020-01414-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  352. Niehorster, D. C., Hessels, R. S., Benjamins, J. S., Nyström, M., & Hooge, I. T. C. (2023). GlassesValidator: A data quality tool for eye tracking glasses. Behavior Research Methods. 10.3758/s13428-023-02105-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  353. Niehorster, D. C., Gullberg, M., & Nyström, M. (2024). Behavioral science labs: How to solve the multi-user problem. Behavior Research Methods. 10.3758/s13428-024-02467-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  354. NIMH-NIF (2019). RestEasy: An open source chin rest for human psychophysics experiments [github repository wiki]. Retrieved 8 Mar 2024, from https://github.com/nimh-nif/SCNI_Toolbar/wiki/RestEasy:-An-open-source-chin-rest-for-human-psychophysics-experiments
  355. Noton, D., & Stark, L. (1971). Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research,11(9), 929–942. 10.1016/0042-6989(71)90213-6 [DOI] [PubMed] [Google Scholar]
  356. Nourrit, V., Poilane, R., & de Bougrenet de La Tocnaye, J.-L. (2021). Custom on-axis head-mounted eye tracker for 3D active glasses. Electronic Imaging,33(2), 55–1. 10.2352/ISSN.2470-1173.2021.2.SDA-055
357. Nuthmann, A., Einhäuser, W., & Schütz, I. (2017). How well can saliency models predict fixation selection in scenes beyond central bias? A new approach to model evaluation using generalized linear mixed models. Frontiers in Human Neuroscience,11. 10.3389/fnhum.2017.00491 [DOI] [PMC free article] [PubMed]
  358. Nuthmann, A. (2014). How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance,40(1), 342. 10.1037/a0033854 [DOI] [PubMed] [Google Scholar]
  359. Nuthmann, A., & Canas-Bajo, T. (2022). Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. Journal of Vision,22(1), 10. 10.1167/jov.22.1.10 [DOI] [PMC free article] [PubMed] [Google Scholar]
360. Nyström, M., Andersson, R., Niehorster, D. C., Hessels, R. S., & Hooge, I. T. C. (2024). What is a blink? Classifying and characterizing blinks in eye openness signals. Behavior Research Methods. 10.3758/s13428-023-02333-9 [DOI] [PMC free article] [PubMed]
  361. Nyström, P., Bölte, S., Falck-Ytter, T., The EASE Team, Achermann, S., Andersson Konke, L. ... Zander, E. (2017). Responding to other people’s direct gaze: Alterations in gaze behavior in infants at risk for autism occur on very short timescales. Journal of Autism and Developmental Disorders,47(11), 3498–3509. 10.1007/s10803-017-3253-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  362. Nyström, M., Hooge, I. T. C., Hessels, R. S., Andersson, R., Hansen, D. W., Johansson, R., & Niehorster, D. C. (in press). The fundamentals of eye tracking part 3: Choosing an eye tracker (setup). Behavior Research Methods [DOI] [PMC free article] [PubMed]
  363. Nyström, M., & Holmqvist, K. (2010). An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data. Behavior Research Methods,42(1), 188–204. 10.3758/BRM.42.1.188 [DOI] [PubMed] [Google Scholar]
  364. Nyström, M., Niehorster, D. C., Cornelissen, T., & Garde, H. (2017). Real-time sharing of gaze data between multiple eye trackers-evaluation, tools, and advice. Behavior Research Methods,49(4), 1310–1322. 10.3758/s13428-016-0806-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  365. Olsen, A. (2012). The Tobii I-VT fixation filter - algorithm description (tech. rep.). Tobii AB.
  366. Olsson, P. (2007). Real-time and offline filters for eye tracking [Master’s thesis, Royal Institute of Technology]. https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106244
  367. Onkhar, V., Dodou, D., & de Winter, J. C. F. (2023). Evaluating the Tobii Pro Glasses 2 and 3 in static and dynamic conditions. Behavior Research Methods. 10.3758/s13428-023-02173-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  368. Ooms, K., & Krassanakis, V. (2018). Measuring the spatial noise of a low-cost eye tracker to enhance fixation detection. Journal of Imaging,4(8), 96. 10.3390/jimaging4080096 [Google Scholar]
369. O’Regan, K. (1978). A new horizontal eye movement calibration method: Subject-controlled “smooth pursuit” and “zero drift”. Behavior Research Methods & Instrumentation,10(3), 393–397. 10.3758/BF03205159 [Google Scholar]
  370. Orlov, P. A., & Bednarik, R. (2016). Screenmasker: An open-source gaze-contingent screen masking environment. Behavior Research Methods,48(3), 1145–1153. 10.3758/s13428-015-0635-7 [DOI] [PubMed] [Google Scholar]
  371. O’Shea, R. P. (1991). Thumb’s rule tested: Visual angle of thumb’s width is about 2 deg. Perception,20(3), 415–418. 10.1068/p200415 [DOI] [PubMed] [Google Scholar]
  372. Otto, K., Castner, N., Geisler, D., & Kasneci, E. (2018). Development and evaluation of a gaze feedback system integrated into eyetrace. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery.
373. Paletta, L., Neuschmied, H., Schwarz, M., Lodron, G., Pszeida, M., Ladstätter, S., & Luley, P. (2014a). Smartphone eye tracking toolbox: Accurate gaze recovery on mobile displays. Proceedings of the symposium on eye tracking research and applications (pp. 367-368). New York, NY, USA: Association for Computing Machinery.
374. Paletta, L., Neuschmied, H., Schwarz, M., Lodron, G., Pszeida, M., Luley, P. ... Tscheligi, M. (2014b). Attention in mobile interactions: Gaze recovery for large scale studies. CHI ’14 extended abstracts on human factors in computing systems (pp. 1717-1722). New York, NY, USA: Association for Computing Machinery.
375. Paletta, L., Santner, K., Fritz, G., Mayer, H., & Schrammel, J. (2013). 3D attention: Measurement of visual saliency using eye tracking glasses. CHI ’13 extended abstracts on human factors in computing systems (pp. 199-204). New York, NY, USA: Association for Computing Machinery.
376. Panetta, K., Wan, Q., Rajeev, S., Kaszowska, A., Gardony, A. L., Naranjo, K., & Agaian, S. (2020). ISeeColor: Method for advanced visual analytics of eye tracking data. IEEE Access,8, 52278–52287. 10.1109/ACCESS.2020.2980901
  377. Panetta, K., Wan, Q., Kaszowska, A., Taylor, H. A., & Agaian, S. (2019). Software architecture for automating cognitive science eye-tracking data analysis and object annotation. IEEE Transactions on Human-Machine Systems,49(3), 268–277. 10.1109/THMS.2019.2892919 [Google Scholar]
  378. Papenmeier, F., & Huff, M. (2010). DynAOI: A tool for matching eye-movement data with dynamic areas of interest in animations and movies. Behavior Research Methods,42(1), 179–187. 10.3758/BRM.42.1.179 [DOI] [PubMed] [Google Scholar]
  379. Papoutsaki, A., Sangkloy, P., Laskey, J., Daskalova, N., Huang, J., & Hays, J. (2016). Webgazer: scalable webcam eye tracking using user interactions. Proceedings of the twenty-fifth international joint conference on artificial intelligence (pp. 3839-3845). AAAI Press.
  380. Park, S. Y., Holmqvist, K., Niehorster, D. C., Huber, L., & Virányi, Z. (2023). How to improve data quality in dog eye tracking. Behavior Research Methods,55(4), 1513–1536. 10.3758/s13428-022-01788-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
381. Pathmanathan, N., Öney, S., Becher, M., Sedlmair, M., Weiskopf, D., & Kurzhals, K. (2023). Been there, seen that: Visualization of movement and 3D eye tracking data from real-world environments. Computer Graphics Forum,42(3), 385–396. 10.1111/cgf.14838
  382. Pedrotti, M., Lei, S., Dzaack, J., & Rötting, M. (2011). A data-driven algorithm for offline pupil signal preprocessing and eyeblink detection in low-speed eye-tracking protocols. Behavior Research Methods,43(2), 372–383. 10.3758/s13428-010-0055-7 [DOI] [PubMed] [Google Scholar]
383. Peirce, J. W., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H. ... Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods,51(1), 195–203. 10.3758/s13428-018-01193-y [DOI] [PMC free article] [PubMed]
384. Peirce, J. W. (2007). PsychoPy - psychophysics software in Python. Journal of Neuroscience Methods,162(1), 8–13. 10.1016/j.jneumeth.2006.11.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  385. Pekkanen, J., & Lappi, O. (2017). A new and general approach to signal denoising and eye movement classification based on segmented linear regression. Scientific Reports,7(1), 17726. 10.1038/s41598-017-17983-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  386. Pelli, D. G. (1997). The videotoolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision,10(4), 437–442. 10.1163/156856897X00366 [PubMed] [Google Scholar]
387. Perry, J. S., & Geisler, W. S. (2002). Gaze-contingent real-time simulation of arbitrary visual fields. B. E. Rogowitz, & T. N. Pappas (Eds.), Human vision and electronic imaging VII (Vol. 4662, pp. 57–69). SPIE.
  388. Pettersson, K., Jagadeesan, S., Lukander, K., Henelius, A., Hæggström, E., & Müller, K. (2013). Algorithm for automatic analysis of electro-oculographic data. BioMedical Engineering OnLine,12(1), 110. 10.1186/1475-925X-12-110 [DOI] [PMC free article] [PubMed] [Google Scholar]
  389. Peysakhovich, V., & Hurter, C. (2018a). Intuitive visualization technique to support eye tracking data analysis: A user-study. Proceedings of the 3rd workshop on eye tracking and visualization. New York, NY, USA: Association for Computing Machinery.
390. Peysakhovich, V., & Hurter, C. (2018b). Scan path visualization and comparison using visual aggregation techniques. Journal of Eye Movement Research,10(5). 10.16910/jemr.10.5.9 [DOI] [PMC free article] [PubMed]
  391. Pfeiffer, T. (2012). Measuring and visualizing attention in space with 3D attention volumes. Proceedings of the symposium on eye tracking research and applications (pp. 29–36). New York, NY, USA: Association for Computing Machinery.
  392. Pfeiffer, T., & Memili, C. (2016). Model-based real-time visualization of realistic three-dimensional heat maps for mobile eye tracking and eye tracking in virtual reality. Proceedings of the ninth biennial acm symposium on eye tracking research & applications (pp. 95–102). New York, NY, USA: Association for Computing Machinery.
  393. Pfeiffer, T., Renner, P., & Pfeiffer-Leßmann, N. (2016). EyeSee3D 2.0: Model-based real-time analysis of mobile eye-tracking in static and dynamic three-dimensional scenes. Proceedings of the ninth biennial acm symposium on eye tracking research & applications (pp. 189-196). New York, NY, USA: Association for Computing Machinery.
394. Pfeuffer, K., Vidal, M., Turner, J., Bulling, A., & Gellersen, H. (2013). Pursuit calibration: Making gaze calibration less tedious and more flexible. The 26th annual ACM symposium on user interface software and technology, UIST ’13, St. Andrews, United Kingdom, October 8-11, 2013 (pp. 261–270). http://doi.acm.org/10.1145/2501988.2501998
  395. Privitera, C., & Stark, L. (2000). Algorithms for defining visual regions-of-interest: Comparison with eye fixations. IEEE Transactions on Pattern Analysis and Machine Intelligence,22(9), 970–982. 10.1109/34.877520 [Google Scholar]
  396. Proakis, J. G., & Manolakis, D. G. (1996). Digital signal processing: Principles, algorithms, and applications (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall. [Google Scholar]
  397. Prystauka, Y., Altmann, G. T. M., & Rothman, J. (2024). Online eye tracking and real-time sentence processing: On opportunities and efficacy for capturing psycholinguistic effects of different magnitudes and diversity. Behavior Research Methods,56(4), 3504–3522. 10.3758/s13428-023-02176-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  398. Radach, R., & Kennedy, A. (2004). Theoretical perspectives on eye movements in reading: Past controversies, current issues, and an agenda for future research. European Journal of Cognitive Psychology,16(1–2), 3–26. 10.1080/09541440340000295 [Google Scholar]
399. Räihä, K.-J., Aula, A., Majaranta, P., Rantala, H., & Koivunen, K. (2005). Static visualization of temporal eye-tracking data. M. F. Costabile, & F. Paternò (Eds.), Human-computer interaction - INTERACT 2005 (pp. 946–949). Berlin, Heidelberg: Springer Berlin Heidelberg.
  400. Ramirez Gomez, A., & Gellersen, H. (2018). Smooth-i: Smart re-calibration using smooth pursuit eye movements. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery.
401. Rantanen, V., Vanhala, T., Tuisku, O., Niemenlehto, P.-H., Verho, J., Surakka, V. ... Lekkala, J. (2011). A wearable, wireless gaze tracker with integrated selection command source for human-computer interaction. IEEE Transactions on Information Technology in Biomedicine,15(5), 795–801. 10.1109/TITB.2011.2158321 [DOI] [PubMed]
402. Ravi, N., Gabeur, V., Hu, Y.-T., Hu, R., Ryali, C., Ma, T. ... Feichtenhofer, C. (2024). SAM 2: Segment anything in images and videos. arXiv:2408.00714
403. Rayner, K. (2014). The gaze-contingent moving window in reading: Development and review. Visual Cognition,22(3-4), 242–258. 10.1080/13506285.2013.879084
  404. Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin,124(3), 372. 10.1037/0033-2909.124.3.372 [DOI] [PubMed] [Google Scholar]
  405. Rayner, K., & Bertera, J. H. (1979). Reading without a fovea. Science,206(4417), 468–469. 10.1126/science.504987 [DOI] [PubMed] [Google Scholar]
  406. Razavi, M., Janfaza, V., Yamauchi, T., Leontyev, A., Longmire-Monford, S., & Orr, J. (2022). OpenSync: An open-source platform for synchronizing multiple measures in neuroscience experiments. Journal of Neuroscience Methods,369, 109458. 10.1016/j.jneumeth.2021.109458 [DOI] [PubMed] [Google Scholar]
  407. Reder, S. M. (1973). On-line monitoring of eye-position signals in contingent and noncontingent paradigms. Behavior Research Methods & Instrumentation,5(2), 218–228. 10.3758/BF03200168 [Google Scholar]
  408. Reimer, B., & Sodhi, M. (2006). Detecting eye movements in dynamic environments. Behavior Research Methods,38(4), 667–682. 10.3758/BF03193900 [DOI] [PubMed] [Google Scholar]
  409. Reingold, E. M. (2014). Eye tracking research and technology: Towards objective measurement of data quality. Visual Cognition,22(3), 635–652. 10.1080/13506285.2013.876481 [DOI] [PMC free article] [PubMed] [Google Scholar]
  410. Richardson, D. C., & Dale, R. (2005). Looking to understand: The coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension. Cognitive Science,29(6), 1045–1060. 10.1207/s15516709cog0000_29 [DOI] [PubMed] [Google Scholar]
411. Richlan, F., Gagl, B., Schuster, S., Hawelka, S., Humenberger, J., & Hutzler, F. (2013). A new high-speed visual stimulation method for gaze contingent eye movement and brain activity studies. Frontiers in Systems Neuroscience,7. 10.3389/fnsys.2013.00024 [DOI] [PMC free article] [PubMed]
  412. Rim, N. W., Choe, K. W., Scrivner, C., & Berman, M. G. (2021). Introducing point-of-interest as an alternative to area-of-interest for fixation duration analysis. PLoS One,16(5), 1–18. 10.1371/journal.pone.0250170 [DOI] [PMC free article] [PubMed] [Google Scholar]
  413. Rodrigues, N., Netzel, R., Spalink, J., & Weiskopf, D. (2018). Multiscale scanpath visualization and filtering. Proceedings of the 3rd workshop on eye tracking and visualization. New York, NY, USA: Association for Computing Machinery.
  414. Ronsse, R., White, O., & Lefèvre, P. (2007). Computation of gaze orientation under unrestrained head movements. Journal of Neuroscience Methods,159(1), 158–169. 10.1016/j.jneumeth.2006.06.016 [DOI] [PubMed] [Google Scholar]
  415. Rosengren, W., Nyström, M., Hammar, B., & Stridh, M. (2020). A robust method for calibration of eye tracking data recorded during nystagmus. Behavior Research Methods,52(1), 36–50. 10.3758/s13428-019-01199-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  416. Rousselet, G. A., Pernet, C. R., & Wilcox, R. R. (2017). Beyond differences in means: Robust graphical methods to compare two groups in neuroscience. European Journal of Neuroscience,46(2), 1738–1748. 10.1111/ejn.13610 [DOI] [PubMed] [Google Scholar]
  417. Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. 2011 international conference on computer vision (ICCV 2011) (pp. 2564-2571).
  418. Ryabinin, K., Alexeeva, S., & Petrova, T. (2022). Proceedings of the international conference on computer graphics and vision “Graphicon” (19-21 September 2022, Ryazan) (Vol. 32, pp. 228–239). Keldysh Institute of Applied Mathematics.
  419. Sadeghi, R., Ressmeyer, R., Yates, J., & Otero-Millan, J. (2024). Open Iris - an open source framework for video-based eye-tracking research and development. Proceedings of the 2024 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery.
  420. Saez de Urabain, I. R., Johnson, M. H., & Smith, T. J. (2015). GraFIX: A semiautomatic approach for parsing low- and high-quality eye-tracking data. Behavior Research Methods,47(1), 53–72. 10.3758/s13428-014-0456-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  421. Salas, J. A., & Levin, D. T. (2022). Efficient calculations of NSS-based gaze similarity for time-dependent stimuli. Behavior Research Methods,54(1), 94–116. 10.3758/s13428-021-01562-0 [DOI] [PubMed] [Google Scholar]
  422. Salehi, F., Razavi, M., Smith, M., & Dixit, M. (2024). Integrated eye-tracking and EEG data collection and synchronization for virtual reality-based spatial ability assessments. T. Ahram, W. Karwowski, D. Russo, & G. D. Bucchianico (Eds.), Intelligent human systems integration (IHSI 2024): Integrating people and intelligent systems (Vol. 119, pp. 1-6).
  423. Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. Proceedings of the 2000 symposium on eye tracking research & applications (pp. 71–78). New York, NY, USA: ACM. http://doi.acm.org/10.1145/355017.355028
  424. Sanchis-Jurado, V., Talens-Estarelles, C., Esteve-Taboada, J. J., Pons, Á. M., & García-Lázaro, S. (2020). Non-invasive high-speed blinking kinematics characterization. Graefe’s Archive for Clinical and Experimental Ophthalmology,258(12), 2701–2714. 10.1007/s00417-020-04782-w [DOI] [PubMed] [Google Scholar]
  425. Santella, A., & DeCarlo, D. (2004). Robust clustering of eye movement recordings for quantification of visual interest. Proceedings of the 2004 symposium on eye tracking research & applications (pp. 27-34). New York, NY, USA: Association for Computing Machinery.
  426. Santini, T., Fuhl, W., & Kasneci, E. (2017b). CalibMe: Fast and unsupervised eye tracker calibration for gaze-based pervasive human-computer interaction. Proceedings of the 2017 chi conference on human factors in computing systems (pp. 2594–2605).
  427. Santini, T., Fuhl, W., Geisler, D., & Kasneci, E. (2017a). EyeRecToo: Open-source software for real-time pervasive head-mounted eye tracking. Proceedings of the 12th international joint conference on computer vision, imaging and computer graphics theory and applications - volume 6: VISAPP, (VISIGRAPP 2017) (pp. 96-101). SciTePress.
  428. Santini, T., Fuhl, W., Kübler, T., & Kasneci, E. (2016). Bayesian identification of fixations, saccades, and smooth pursuits. Proceedings of the ninth biennial acm symposium on eye tracking research & applications (pp. 163–170). New York, NY, USA: ACM. http://doi.acm.org/10.1145/2857491.2857512
  429. Santini, T., Niehorster, D. C., & Kasneci, E. (2019). Get a grip: Slippage-robust and glint-free gaze estimation for real-time pervasive head-mounted eye tracking. Proceedings of the 11th acm symposium on eye tracking research & applications (pp. 17:1–17:10). New York, NY, USA: ACM. http://doi.acm.org/10.1145/3314111.3319835
  430. Santini, F., Redner, G., Iovin, R., & Rucci, M. (2007). EyeRIS: A general-purpose system for eye-movement contingent display control. Behavior Research Methods,39(3), 350–364. 10.3758/BF03193003 [DOI] [PubMed] [Google Scholar]
  431. Saranpää, W., Apell Skjutar, F., Heander, J., Söderberg, E., Niehorster, D. C., Mattsson, O. ... Church, L. (2023). Gander: A platform for exploration of gaze-driven assistance in code review. Proceedings of the 2023 symposium on eye tracking research and applications. New York, NY, USA: Association for Computing Machinery.
  432. Sasson, N. J., & Elison, J. T. (2012). Eye tracking young children with autism. Journal of Visualized Experiments,61, e3675. 10.3791/3675 [DOI] [PMC free article] [PubMed] [Google Scholar]
  433. Saunders, D. R., & Woods, R. L. (2014). Direct measurement of the system latency of gaze-contingent displays. Behavior Research Methods,46(2), 439–447. 10.3758/s13428-013-0375-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  434. Sauter, D., Martin, B., Di Renzo, N., & Vomscheid, C. (1991). Analysis of eye tracking movements using innovations generated by a Kalman filter. Medical and Biological Engineering and Computing,29(1), 63–69. [DOI] [PubMed] [Google Scholar]
  435. Saxena, S., & Fink, L. (2023). Synchronized multi-person eye-tracking in dynamic scenes. 19th annual neuromusic conference. https://www.neuromusic.ca/posters-2023/synchronized-multi-person-eye-tracking-in-dynamic-scenes/
  436. Saxena, S., Fink, L. K., & Lange, E. B. (2023). Deep learning models for webcam eye tracking in online experiments. Behavior Research Methods. 10.3758/s13428-023-02190-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  437. Scherr, K. C., Agauas, S. J., & Ashby, J. (2016). The text matters: Eye movements reflect the cognitive processing of interrogation rights. Applied Cognitive Psychology,30(2), 234–241. 10.1002/acp.3195 [Google Scholar]
  438. Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2016). Detecting collaborative dynamics using mobile eye-trackers. C. K. Looi, J. L. Polman, U. Cress, & P. Reimann (Eds.), Transforming learning, empowering learners: The international conference of the learning sciences (Vol. 1, pp. 522–529). International Society of the Learning Sciences.
  439. Schroeder, S. (2019). popEye - an R package to analyse eye movement data from reading experiments Retrieved 13 May 2024, from https://github.com/sascha2schroeder/popEye
  440. Schroeder, S. (2022). What’s up popEye? [Abstract]. Proceedings of ECEM 2022.
  441. Schweitzer, R., & Rolfs, M. (2020). An adaptive algorithm for fast and reliable online saccade detection. Behavior Research Methods,52(3), 1122–1139. 10.3758/s13428-019-01304-3 [DOI] [PubMed] [Google Scholar]
  442. Semmelmann, K., & Weigelt, S. (2018). Online webcam-based eye tracking in cognitive science: A first look. Behavior Research Methods,50(2), 451–465. 10.3758/s13428-017-0913-7 [DOI] [PubMed] [Google Scholar]
443. Shaffer, D. M., Krisky, C. M., & Sweeney, J. A. (2003). Frequency and metrics of square-wave jerks: Influences of task-demand characteristics. Investigative Ophthalmology & Visual Science,44(3), 1082–1087. 10.1167/iovs.02-0356 [DOI] [PubMed] [Google Scholar]
  444. Sheena, D., & Borah, J. (1981). Compensation for some second order effects to improve eye position measurements. In D. F. Fisher, R. A. Monty, & J. W. Senders (Eds.), Eye movements: Cognition and visual perception (pp. 257–268). Hillsdale, N.J.: Lawrence Erlbaum Associates. [Google Scholar]
445. Shih, S.-W., Wu, Y.-T., & Liu, J. (2000). A calibration-free gaze tracking technique. Proceedings of the 15th international conference on pattern recognition (ICPR-2000) (Vol. 4, pp. 201–204).
  446. Siirtola, H., Špakov, O., Istance, H., & Räihä, K.-J. (2019). Shared gaze in collaborative visual search. International Journal of Human-Computer Interaction,35(18), 1693–1705. 10.1080/10447318.2019.1565746 [Google Scholar]
  447. Singh, K., Kalash, M., & Bruce, N. (2018). Capturing real-world gaze behaviour: Live and unplugged. Proceedings of the 2018 acm symposium on eye tracking research & applications. New York, NY, USA: Association for Computing Machinery.
  448. Sogo, H. (2013). GazeParser: An open-source and multiplatform library for low-cost eye tracking and analysis. Behavior Research Methods,45(3), 684–695. 10.3758/s13428-012-0286-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  449. Sogo, H. (2017). Sgttoolbox: Utility for controlling SimpleGazeTracker from Psychtoolbox. Behavior Research Methods,49(4), 1323–1332. 10.3758/s13428-016-0791-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  450. Špakov, O. (2012). Comparison of eye movement filters used in HCI. Proceedings of the symposium on eye tracking research and applications (pp. 281-284). New York, NY, USA: Association for Computing Machinery.
451. Špakov, O., Istance, H., Hyrskykari, A., Siirtola, H., & Räihä, K.-J. (2019). Improving the performance of eye trackers with limited spatial accuracy and low sampling rates for reading analysis by heuristic fixation-to-word mapping. Behavior Research Methods,51(6), 2661–2687. 10.3758/s13428-018-1120-x [DOI] [PubMed]
  452. Špakov, O., & Miniotas, D. (2007). Visualization of eye gaze data using heat maps. Elektronika ir Elektrotechnika,74(2), 55–58. [Google Scholar]
  453. Sprenger, A., Trillenberg, P., Nagel, M., Sweeney, J. A., & Lencer, R. (2013). Enhanced top-down control during pursuit eye tracking in schizophrenia. European Archives of Psychiatry and Clinical Neuroscience,263(3), 223–231. 10.1007/s00406-012-0332-9 [DOI] [PubMed] [Google Scholar]
454. Sridharan, S., Pieszala, J., & Bailey, R. (2015). Depth-based subtle gaze guidance in virtual reality environments. Proceedings of the ACM SIGGRAPH symposium on applied perception (p. 132). New York, NY, USA: Association for Computing Machinery.
  455. Stampe, D. M. (1993). Heuristic filtering and reliable calibration methods for video-based pupil-tracking systems. Behavior Research Methods, Instruments, & Computers,25(2), 137–142. 10.3758/BF03204486 [Google Scholar]
  456. Startsev, M., & Zemblys, R. (2023). Evaluating eye movement event detection: A review of the state of the art. Behavior Research Methods,55(4), 1653–1714. 10.3758/s13428-021-01763-7 [DOI] [PubMed] [Google Scholar]
  457. Startsev, M., Agtzidis, I., & Dorr, M. (2019). 1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits. Behavior Research Methods,51(2), 556–572. 10.3758/s13428-018-1144-2 [DOI] [PubMed] [Google Scholar]
458. Steffan, A., Zimmer, L., Arias-Trejo, N., Bohn, M., Dal Ben, R., Flores-Coronado, M. A. ... Schuwerk, T. (2024). Validation of an open source, remote web-based eye-tracking method (WebGazer) for research in early childhood. Infancy,29(1), 31–55. 10.1111/infa.12564 [DOI] [PMC free article] [PubMed]
459. Stein, I., Jossberger, H., & Gruber, H. (2023). MAP3D: An explorative approach for automatic mapping of real-world eye-tracking data on a virtual 3D model. Journal of Eye Movement Research,15(3). 10.16910/jemr.15.3.8 [DOI] [PMC free article] [PubMed]
  460. Stein, N., Niehorster, D. C., Watson, T., Steinicke, F., Rifai, K., Wahl, S., & Lappe, M. (2021). A comparison of eye tracking latencies among several commercial head-mounted displays. i-Perception,12(1), 1–16, 10.1177/2041669520983338 [DOI] [PMC free article] [PubMed]
  461. Stellmach, S., Nacke, L., & Dachselt, R. (2010a). 3D attentional maps: Aggregated gaze visualizations in three-dimensional virtual environments. Proceedings of the international conference on advanced visual interfaces (pp. 345-348). New York, NY, USA: Association for Computing Machinery.
  462. Stellmach, S., Nacke, L., & Dachselt, R. (2010b). Advanced gaze visualizations for three-dimensional virtual environments. Proceedings of the 2010 symposium on eye-tracking research & applications (pp. 109-112). New York, NY, USA: Association for Computing Machinery.
  463. Stevenson, S. B., Roorda, A., & Kumar, G. (2010). Eye tracking with the adaptive optics scanning laser ophthalmoscope. Proceedings of the 2010 symposium on eye-tracking research & applications (pp. 195-198). New York, NY, USA: Association for Computing Machinery.
  464. Stolp, F., Stellmacher, M., & Arnrich, B. (2024). CognitIDE: An IDE plugin for mapping physiological measurements to source code. Companion proceedings of the 32nd ACM international conference on the foundations of software engineering (pp. 592-596).
  465. Stone, S. A., Boser, Q. A., Dawson, T. R., Vette, A. H., Hebert, J. S., Pilarski, P. M., & Chapman, C. S. (2024). Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture. Behavior Research Methods,56(1), 18–31. 10.3758/s13428-022-01958-6 [DOI] [PubMed] [Google Scholar]
466. Su, D., Li, Y.-F., & Chen, H. (2020). Cross-validated locally polynomial modeling for 2-D/3-D gaze tracking with head-worn devices. IEEE Transactions on Industrial Informatics,16(1), 510–521. 10.1109/TII.2019.2933481
467. Sueishi, T., Matsumura, S., Yachida, S., & Ishikawa, M. (2022). Optical and control design of bright-pupil microsaccadic artificial eye. 2022 IEEE/SICE international symposium on system integration (SII) (pp. 760-765).
468. Sundstedt, V., & Garro, V. (2022). A systematic review of visualization techniques and analysis tools for eye-tracking in 3D environments. Frontiers in Neuroergonomics,3. 10.3389/fnrgo.2022.910019 [DOI] [PMC free article] [PubMed]
  469. Sundstedt, V., Bernhard, M., Stavrakis, E., Reinhard, E., & Wimmer, M. (2013). Visual attention and gaze behavior in games: An object-based approach. In M. Seif El-Nasr, A. Drachen, & A. Canossa (Eds.), Game analytics: Maximizing the value of player data (pp. 543–583). London: Springer London.
  470. Świrski, L., & Dodgson, N. A. (2013). A fully-automatic, temporal approach to single camera, glint-free 3D eye model fitting [abstract]. Proceedings of ECEM 2013.
  471. Tabernero, J., & Artal, P. (2014). Lens oscillations in the human eye. Implications for post-saccadic suppression of vision. PLoS One, 9(4), 1–6. 10.1371/journal.pone.0095764 [DOI] [PMC free article] [PubMed]
  472. Tabuchi, M., & Hirotomi, T. (2022). Using fiducial marker for analyzing wearable eye-tracker gaze data measured while cooking. In M. Kurosu, S. Yamamoto, H. Mori, D. D. Schmorrow, C. M. Fidopiastis, N. A. Streitz, & S. Konomi (Eds.), HCI International 2022 - Late breaking papers. Multimodality in advanced interaction environments (pp. 192–204). Cham: Springer Nature Switzerland. 10.1007/978-3-031-17618-0_15
  473. Tang, N., An, J., Chen, M., Bansal, A., Huang, Y., McMillan, C., & Li, T. J.-J. (2024). CodeGRITS: A research toolkit for developer behavior and eye tracking in IDE. Proceedings of the 2024 IEEE/ACM 46th international conference on software engineering: Companion proceedings (pp. 119–123). New York, NY, USA: Association for Computing Machinery.
  474. Tang, S., Reilly, R. G., & Vorstius, C. (2012). EyeMap: A software system for visualizing and analyzing eye movement data in reading. Behavior Research Methods,44(2), 420–438. 10.3758/s13428-011-0156-y [DOI] [PubMed] [Google Scholar]
  475. Tannfelt Wu, J. (2018). Robot mimicking human eye movements to test eye tracking devices (Master’s thesis, KTH, Stockholm, Sweden). https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-245066
  476. Thorup, E., Nyström, P., Gredebäck, G., Bölte, S., Falck-Ytter, T., & the EASE Team (2018). Reduced alternating gaze during social interaction in infancy is associated with elevated symptoms of autism in toddlerhood. Journal of Abnormal Child Psychology,46(7), 1547–1561. 10.1007/s10802-017-0388-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  477. Titz, J., Scholz, A., & Sedlmeier, P. (2018). Comparing eye trackers by correlating their eye-metric data. Behavior Research Methods,50(5), 1853–1863. 10.3758/s13428-017-0954-y [DOI] [PubMed] [Google Scholar]
  478. Toivanen, M. (2016). An advanced Kalman filter for gaze tracking signal. Biomedical Signal Processing and Control,25, 150–158. 10.1016/j.bspc.2015.11.009 [Google Scholar]
  479. Tole, J. R., & Young, L. R. (1981). Digital filters for saccade and fixation detection. In D. F. Fisher, R. A. Monty, & J. W. Senders (Eds.), Eye movements: Cognition and visual perception (pp. 247–256). Hillsdale, N.J.: Lawrence Erlbaum Associates. [Google Scholar]
  480. Tomasi, M., Pundlik, S., Bowers, A. R., Peli, E., & Luo, G. (2016). Mobile gaze tracking system for outdoor walking behavioral studies. Journal of Vision,16(3), 27–27. 10.1167/16.3.27 [DOI] [PMC free article] [PubMed] [Google Scholar]
  481. Toyama, T., Kieninger, T., Shafait, F., & Dengel, A. (2012). Gaze guided object recognition using a head-mounted eye tracker. Proceedings of the symposium on eye tracking research and applications (pp. 91–98). New York, NY, USA: Association for Computing Machinery.
  482. Trojano, L., Moretta, P., Loreto, V., Cozzolino, A., Santoro, L., & Estraneo, A. (2012). Quantitative assessment of visual behavior in disorders of consciousness. Journal of Neurology,259(9), 1888–1895. 10.1007/s00415-012-6435-4 [DOI] [PubMed] [Google Scholar]
  483. Tsuji, S., Fiévét, A.-C., & Cristia, A. (2021). Toddler word learning from contingent screens with and without human presence. Infant Behavior and Development,63, 101553. 10.1016/j.infbeh.2021.101553 [DOI] [PubMed]
  484. Tula, A. D., Kurauchi, A., Coutinho, F., & Morimoto, C. (2016). Heatmap explorer: An interactive gaze data visualization tool for the evaluation of computer interfaces. Proceedings of the 15th Brazilian symposium on human factors in computing systems. New York, NY, USA: Association for Computing Machinery.
  485. Turner, J., Bulling, A., & Gellersen, H. (2012). Extending the visual field of a head-mounted eye tracker for pervasive eye-based interaction. Proceedings of the symposium on eye tracking research and applications (pp. 269-272). New York, NY, USA: Association for Computing Machinery.
  486. Ugwitz, P., Kvarda, O., Juříková, Z., Šašinka, E., & Tamm, S. (2022). Eye-tracking in interactive virtual environments: Implementation and evaluation. Applied Sciences,12(3). 10.3390/app12031027
  487. Vadillo, M. A., Street, C. N. H., Beesley, T., & Shanks, D. R. (2015). A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation. Behavior Research Methods,47(4), 1365–1376. 10.3758/s13428-014-0544-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  488. Vaidyanathan, P., Pelz, J., Alm, C., Shi, P., & Haake, A. (2014). Recurrence quantification analysis reveals eye-movement behavior differences between experts and novices. Proceedings of the symposium on eye tracking research and applications (pp. 303-306). New York, NY, USA: Association for Computing Machinery.
  489. Valliappan, N., Dai, N., Steinberg, E., He, J., Rogers, K., Ramachandran, V., & Navalpakkam, V. (2020). Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nature Communications,11(1), 4553. 10.1038/s41467-020-18360-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  490. Valtakari, N. V., Hessels, R. S., Niehorster, D. C., Viktorsson, C., Nyström, P., Falck-Ytter, T., & Hooge, I. T. C. (2024). A field test of computer-vision-based gaze estimation in psychology. Behavior Research Methods,56(3), 1900–1915. 10.3758/s13428-023-02125-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  491. Valtakari, N. V., Hooge, I. T. C., Viktorsson, C., Nyström, P., Falck-Ytter, T., & Hessels, R. S. (2021). Eye tracking in human interaction: Possibilities and limitations. Behavior Research Methods,1–17. 10.3758/s13428-020-01517-x [DOI] [PMC free article] [PubMed]
  492. Van der Cruyssen, I., Ben-Shakhar, G., Pertzov, Y., Guy, N., Cabooter, Q., Gunschera, L. J., & Verschuere, B. (2023). The validation of online webcam-based eye-tracking: The replication of the cascade effect, the novelty preference, and the visual world paradigm. Behavior Research Methods. 10.3758/s13428-023-02221-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  493. van der Geest, J. N., & Frens, M. A. (2002). Recording eye movements with video-oculography and scleral search coils: A direct comparison of two methods. Journal of Neuroscience Methods,114(2), 185–195. 10.1016/S0165-0270(01)00527-1 [DOI] [PubMed] [Google Scholar]
  494. Van der Steen, J., & Bruno, P. (1995). Unequal amplitude saccades produced by aniseikonic patterns: effects of viewing distance. Vision Research,35(23–24), 3459–3471. 10.1016/0042-6989(95)00138-5 [DOI] [PubMed] [Google Scholar]
  495. van Diepen, P. M., Wampers, M., & d’Ydewalle, G. (1998). Functional division of the visual field: Moving masks and moving windows. In G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 337-355). Amsterdam: Elsevier Science Ltd.
  496. van Renswoude, D. R., Raijmakers, M. E., Koornneef, A., Johnson, S. P., Hunnius, S., & Visser, I. (2018). Gazepath: An eye-tracking analysis tool that accounts for individual differences and data quality. Behavior Research Methods,50(2), 834–852. 10.3758/s13428-017-0909-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  497. Vansteenkiste, P., Cardon, G., Philippaerts, R., & Lenoir, M. (2015). Measuring dwell time percentage from head-mounted eye-tracking data - comparison of a frame-by-frame and a fixation-by-fixation analysis. Ergonomics,58(5), 712–721. 10.1080/00140139.2014.990524 [DOI] [PubMed] [Google Scholar]
  498. Vasilev, M. R., Adedeji, V. I., Laursen, C., Budka, M., & Slattery, T. J. (2021). Do readers use character information when programming return-sweep saccades? Vision Research,183, 30–40. 10.1016/j.visres.2021.01.003 [DOI] [PubMed] [Google Scholar]
  499. Vehlen, A., Standard, W., & Domes, G. (2022). How to choose the size of facial areas of interest in interactive eye tracking. PLoS One,17(2), 1–13. 10.1371/journal.pone.0263594 [DOI] [PMC free article] [PubMed] [Google Scholar]
  500. Velisar, A., & Shanidze, N. M. (2024). Noise estimation for head-mounted 3D binocular eye tracking using Pupil Core eye-tracking goggles. Behavior Research Methods,56(1), 53–79. 10.3758/s13428-023-02150-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  501. Vernetti, A., Smith, T. J., & Senju, A. (2017). Gaze-contingent reinforcement learning reveals incentive value of social signals in young children and adults. Proceedings of the Royal Society B: Biological Sciences,284(1850), 20162747. 10.1098/rspb.2016.2747 [DOI] [PMC free article] [PubMed] [Google Scholar]
  502. Vernetti, A., Senju, A., Charman, T., Johnson, M. H., & Gliga, T. (2018). Simulating interaction: Using gaze-contingent eye-tracking to measure the reward value of social signals in toddlers with and without autism. Developmental Cognitive Neuroscience,29, 21–29. 10.1016/j.dcn.2017.08.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  503. Villanueva, A., & Cabeza, R. (2007). Models for gaze tracking systems. EURASIP Journal on Image and Video Processing,2007(3), 4. 10.1155/2007/23570 [Google Scholar]
  504. von der Malsburg, T., & Vasishth, S. (2011). What is the scanpath signature of syntactic reanalysis? Journal of Memory and Language,65(2), 109–127. 10.1016/j.jml.2011.02.004 [Google Scholar]
  505. Voßkühler, A., Nordmeier, V., Kuchinke, L., & Jacobs, A. M. (2008). OGAMA (open gaze and mouse analyzer): Open-source software designed to analyze eye and mouse movements in slideshow study designs. Behavior Research Methods,40(4), 1150–1162. 10.3758/BRM.40.4.1150 [DOI] [PubMed] [Google Scholar]
  506. Wang, C.-Y., & Liao, H.-Y. M. (2024). YOLOv9: Learning what you want to learn using programmable gradient information. arXiv:2402.13616
  507. Wang, Y., Han, Q., Habermann, M., Daniilidis, K., Theobalt, C., & Liu, L. (2023). NeuS2: Fast learning of neural implicit surfaces for multi-view reconstruction. 2023 IEEE/CVF international conference on computer vision (ICCV) (pp. 3272-3283).
  508. Wang, D., Mulvey, F. B., Pelz, J. B., & Holmqvist, K. (2017). A study of artificial eyes for the measurement of precision in eye-trackers. Behavior Research Methods,49(3), 947–959. 10.3758/s13428-016-0755-8 [DOI] [PubMed] [Google Scholar]
  509. Wang, Q., Wall, C. A., Barney, E. C., Bradshaw, J. L., Macari, S. L., Chawarska, K., & Shic, F. (2020). Promoting social attention in 3-year-olds with ASD through gaze-contingent eye tracking. Autism Research,13(1), 61–73. 10.1002/aur.2199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  510. Watson, M. R., Voloh, B., Thomas, C., Hasan, A., & Womelsdorf, T. (2019). USE: An integrative suite for temporally-precise psychophysical experiments in virtual environments for human, nonhuman, and artificially intelligent agents. Journal of Neuroscience Methods,326, 108374. 10.1016/j.jneumeth.2019.108374 [DOI] [PubMed] [Google Scholar]
  511. Weber, S., Schubert, R. S., Vogt, S., Velichkovsky, B. M., & Pannasch, S. (2018). Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume. Behavior Research Methods,50(5), 2004–2015. 10.3758/s13428-017-0969-4 [DOI] [PubMed] [Google Scholar]
  512. Weibel, N., Fouse, A., Emmenegger, C., Kimmich, S., & Hutchins, E. (2012). Let’s look at the cockpit: exploring mobile eye-tracking for observational research on the flight deck. Proceedings of the symposium on eye tracking research and applications (pp. 107-114).
  513. Weiss, R. S., Remington, R., & Ellis, S. R. (1989). Sampling distributions of the entropy in visual scanning. Behavior Research Methods, Instruments, & Computers,21(3), 348–352. 10.3758/BF03202796 [Google Scholar]
  514. Wengelin, Å., Frid, J., Johansson, R., & Johansson, V. (2019). Combining keystroke logging with other methods: Towards an experimental environment for writing process research. In E. Lindgren, & K. Sullivan (Eds.), Observing writing: Insights from keystroke logging and handwriting (pp. 30-49). Leiden, The Netherlands: Brill. https://brill.com/view/book/edcoll/9789004392526/BP000002.xml
  515. Wengelin, Å., Torrance, M., Holmqvist, K., Simpson, S., Galbraith, D., Johansson, V., & Johansson, R. (2009). Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production. Behavior Research Methods,41(2), 337–351. 10.3758/BRM.41.2.337 [DOI] [PubMed] [Google Scholar]
  516. Wengelin, Å., Johansson, R., Frid, J., & Johansson, V. (2024). Capturing writers’ typing while visually attending the emerging text: A methodological approach. Reading and Writing,37(2), 265–289. 10.1007/s11145-022-10397-w [Google Scholar]
  517. Werchan, D. M., Thomason, M. E., & Brito, N. H. (2023). OWLET: An automated, open-source method for infant gaze tracking using smartphone and webcam recordings. Behavior Research Methods,55(6), 3149–3163. 10.3758/s13428-022-01962-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  518. West, J. M., Haake, A. R., Rozanski, E. P., & Karn, K. S. (2006). eyePatterns: Software for identifying patterns and similarities across fixation sequences. Proceedings of the 2006 symposium on eye tracking research & applications (pp. 149-154). New York, NY, USA: Association for Computing Machinery.
  519. Wijnen, J. L. C., & Groot, C. J. (1984). An eye movement analysis system (EMAS) for the identification of cognitive processes on figural tasks. Behavior Research Methods, Instruments, & Computers,16(3), 277–281. 10.3758/BF03202402 [Google Scholar]
  520. Wikipedia (2024). Small-angle approximation. Retrieved 15 Apr 2024, from https://en.wikipedia.org/wiki/Small-angle_approximation
  521. Wolf, J., Hess, S., Bachmann, D., Lohmeyer, Q., & Meboldt, M. (2018). Automating areas of interest analysis in mobile eye tracking experiments based on machine learning. Journal of Eye Movement Research,11(6). 10.16910/jemr.11.6.6 [DOI] [PMC free article] [PubMed]
  522. Wooding, D. S. (2002b). Fixation maps: quantifying eye-movement traces. Proceedings of the 2002 symposium on eye tracking research & applications (pp. 31-36). New York, NY, USA: Association for Computing Machinery.
  523. Wooding, D. S. (2002a). Eye movements of large populations: II. Deriving regions of interest, coverage, and similarity using fixation maps. Behavior Research Methods, Instruments, & Computers,34(4), 518–528. 10.3758/BF03195481 [DOI] [PubMed]
  524. Wu, M. M. A., & Munzner, T. (2015). SEQIT: Visualizing sequences of interest in eye tracking data. Proc. IEEE conference on information visualization (InfoVis).
  525. Wu, R.-J., Clark, A. M., Cox, M. A., Intoy, J., Jolly, P. C., Zhao, Z., & Rucci, M. (2023). High-resolution eye-tracking via digital imaging of Purkinje reflections. Journal of Vision,23(5), 4–4. 10.1167/jov.23.5.4 [DOI] [PMC free article] [PubMed]
  526. Wyatt, H. J. (2010). The human pupil and the use of video-based eyetrackers. Vision Research,50(19), 1982–1988. 10.1016/j.visres.2010.07.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  527. Wyder, S., & Cattin, P. C. (2016). Stereo eye tracking with a single camera for ocular tumor therapy. Proceedings of the ophthalmic medical image analysis international workshop (Vol. 3, pp. 81–88).
  528. Wyder, S., & Cattin, P. C. (2018). Eye tracker accuracy: Quantitative evaluation of the invisible eye center location. International Journal of Computer Assisted Radiology and Surgery,13(10), 1651–1660. 10.1007/s11548-018-1808-5 [DOI] [PubMed] [Google Scholar]
  529. Yang, X., & Krajbich, I. (2021). Webcam-based online eye-tracking for behavioral research. Judgment and Decision Making,16(6), 1485–1505. 10.1017/S1930297500008512 [Google Scholar]
  530. Yang, M., Gao, Y., Tang, L., Hou, J., & Hu, B. (2023). Wearable eye-tracking system for synchronized multimodal data acquisition. IEEE Transactions on Circuits and Systems for Video Technology. 10.1109/TCSVT.2023.3332814 [Google Scholar]
  531. Yoo, D. H., & Chung, M. J. (2005). A novel non-intrusive eye gaze estimation using cross-ratio under large head motion. Computer Vision and Image Understanding,98(1), 25–51. 10.1016/j.cviu.2004.07.011 [Google Scholar]
  532. Zandi, B., Lode, M., Herzog, A., Sakas, G., & Khanh, T. Q. (2021). PupilEXT: Flexible open-source platform for high-resolution pupillometry in vision research. Frontiers in Neuroscience,15. 10.3389/fnins.2021.676220 [DOI] [PMC free article] [PubMed]
  533. Zemblys, R., Niehorster, D. C., Komogortsev, O., & Holmqvist, K. (2018). Using machine learning to detect events in eye-tracking data. Behavior Research Methods,50(1), 160–181. 10.3758/s13428-017-0860-3 [DOI] [PubMed] [Google Scholar]
  534. Zemblys, R., Niehorster, D. C., & Holmqvist, K. (2019). gazeNet: End-to-end eye-movement event detection with deep neural networks. Behavior Research Methods,51(2), 840–864. 10.3758/s13428-018-1133-5 [DOI] [PubMed] [Google Scholar]
  535. Zeng, G., Simpson, E. A., & Paukner, A. (2024). Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest. Behavior Research Methods,56(2), 881–907. 10.3758/s13428-022-02056-3 [DOI] [PubMed] [Google Scholar]
  536. Zhang, Y., & Hornof, A. J. (2014). Easy post-hoc spatial recalibration of eye tracking data. Proceedings of the symposium on eye tracking research and applications (pp. 95-98). New York, NY, USA: Association for Computing Machinery.
  537. Zhang, M., Gofas-Salas, E., Leonard, B. T., Rui, Y., Snyder, V. C., Reecher, H. M., & Rossi, E. A. (2021). Strip-based digital image registration for distortion minimization and robust eye motion measurement from scanned ophthalmic imaging systems. Biomedical Optics Express,12(4), 2353–2372. 10.1364/BOE.418070 [DOI] [PMC free article] [PubMed] [Google Scholar]
  538. Zhang, Y., & Hornof, A. J. (2011). Mode-of-disparities error correction of eye-tracking data. Behavior Research Methods,43(3), 834–842. 10.3758/s13428-011-0073-0 [DOI] [PubMed] [Google Scholar]
  539. Zhang, L., Liu, X., Chen, Q., Zhou, Y., & Xu, T. (2022). EyeBox: A toolbox based on Python3 for eye movement analysis. Procedia Computer Science,201, 166–173. 10.1016/j.procs.2022.03.024 [Google Scholar]
  540. Zhang, H., Wu, S., Chen, W., Gao, Z., & Wan, Z. (2024). Self-calibrating gaze estimation with optical axes projection for head-mounted eye tracking. IEEE Transactions on Industrial Informatics,20(2), 1397–1407. 10.1109/TII.2023.3276322 [Google Scholar]
  541. Zhegallo, A. V., & Marmalyuk, P. A. (2015). ETRAN-R extension package for eye tracking results analysis. Perception,44(8–9), 1129–1135. 10.1177/0301006615594944 [DOI] [PubMed] [Google Scholar]
  542. Zimmermann, J., Vazquez, Y., Glimcher, P. W., Pesaran, B., & Louie, K. (2016). Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates. Journal of Neuroscience Methods,270, 138–146. 10.1016/j.jneumeth.2016.06.016 [DOI] [PMC free article] [PubMed] [Google Scholar]


Data Availability Statement

Not applicable.


