Abstract
A timely early diagnosis of Alzheimer's disease (AD) would help to manage symptoms and improve the quality of life of AD patients. Research studies have identified early manifestations of AD that occur years before diagnosis. For instance, the eye movements of people with AD in different tasks differ from those of control subjects. In this review, we summarize the evolution of research approaches that use eye tracking technology and computational analysis to measure and compare eye movements under different tasks and experiments. Furthermore, the review focuses on the feasibility of pioneering work on developing computational tools and techniques to analyze eye movements in naturalistic scenarios. We describe the progress in technology that can enhance the analysis of eye movements wherever subjects perform their daily activities, and we give future research directions for developing tools to support early AD diagnosis through the analysis of eye movements.
1. Introduction
Neurodegenerative diseases are a group of disorders characterized by the progressive degeneration of neurons of the central or peripheral nervous system. The degeneration affects neuronal synapses or produces neuron death [1]. The most frequent neurodegenerative diseases are Alzheimer's disease (AD) and Parkinson's disease (PD) [2, 3]. According to the Alzheimer's Association (https://www.alz.org/), there are currently 5.7 million people living with AD in the US alone, and this number is expected to increase to 13.8 million by 2050 [4]. Although there is no cure for AD [5], several treatments have been tested [6], for example, currently approved drugs such as donepezil, galantamine, and rivastigmine [7] and nonpharmacologic therapies [4].
Alzheimer's disease is frequently diagnosed at late stages, when symptoms have become evident, which occurs after a process of months or years of neuronal degeneration [6]. However, when the disease is diagnosed at early stages, treatment helps to manage the symptoms, improves the quality of life [2, 6], and offers caregivers the opportunity to adapt to and prepare for the characteristic changes of dementia [8]. Early diagnosis would also allow testing the administration of more aggressive therapies to prevent AD development [9]. Despite many efforts, the noninvasive diagnosis of AD at early stages remains unsolved [5, 10].
Recent literature reviews have outlined robust findings demonstrating that eye movement abnormalities are a sign of cognitive decline [11, 12] and can eventually be used to assess AD progression. Furthermore, current technology provides noninvasive equipment and methods to assess visual deficits ubiquitously and objectively in naturalistic scenarios [13]. An example is the use of eye trackers, which are devices that measure gaze fixations and saccadic eye motions. Eye trackers have been used in experiments on oculomotor performance related to AD diagnosis [14]. However, to date eye trackers have been used mainly in controlled laboratory settings. To analyze eye movements in naturalistic scenarios, such as during activities of daily living (ADL), understanding the scene is required in addition to the gaze fixation points. This understanding can be achieved through the analysis of recorded video with computer vision techniques. The computational analysis of the video, supported by the areas of psychology and neurology, allows identifying the items in a scene that attract the attention of the viewer [15, 16]. This information can be used to compare the areas of interest of people with AD (PwAD) and control groups (people without AD) when observing natural scenes, with a potential use in early detection.
In this paper, we first describe the technological tools and methods that have been used to gather eye movement data. Then, we review existing research that has found relationships between eye movements and AD. This section also describes the evolution of research on eye movements and AD, from the earliest studies toward naturalistic scenarios more suitable for early detection. Section 4 describes computational techniques that are useful for complementing the analysis of eye movements and AD in naturalistic scenarios. Finally, Section 5 presents the conclusions and future directions.
2. Data Collection
To collect data, an important step is choosing a proper eye tracker device according to the planned research study. Eye trackers are devices that measure the point of gaze or the eye movements of an individual [17]. The availability of eye movement recordings allows researchers to gather and analyze ground truth data about visual exploration [18]. This feature makes eye trackers a useful tool to study changes in cognition through eye movement analysis [19, 20]. Cognitive processes are not directly measured with eye trackers; instead, we can manipulate independent variables according to the experimental design and measure the behavioral response of participants through eye movement measures [21].
Eye tracking provides a noninvasive tool, without contraindications, suitable for potential screening and tracking of AD [22]. Eye trackers provide data sensitivity that makes them suitable for analyzing oculomotor abnormalities in AD. However, there are different technological approaches for the construction of eye trackers [21]. For this reason, it is important to choose an eye tracker whose features fit the study.
Eye trackers can be static or provide mobility. For example, there are screen-based eye trackers, such as the one used in [23]. These eye trackers are desktop mounted and collect fixation points only from gaze directed toward the content displayed on a screen. Another type of eye tracker is head mounted, such as the “ExpressEye” used in [19]. In this case, the device allows capturing gaze fixations not limited to a specific screen; however, the user cannot move freely because the apparatus is cumbersome. Nowadays, commercial mobile eye trackers are available, such as Tobii© (https://www.tobiipro.com) or SMI© (https://www.smivision.com), that provide continuous, remote, and pervasive capture capabilities. These capabilities are desirable for analyzing the eye movements of PwAD in naturalistic scenarios.
Despite the capabilities provided by eye trackers, there are concerns about their use when participants are unrestrained [24]. The concerns relate to the reliability of gaze recordings when participants adopt a nonoptimal pose. The latter might represent a challenge in naturalistic experimental setups.
To gather data from participants in AD studies, researchers divide participants into groups. Usually, the groups reported in the literature are young controls, elderly controls, and PwAD. However, some studies also include a group of people with Mild Cognitive Impairment (MCI) to differentiate stages of cognitive decline. The cognitive status of participants is usually evaluated through neuropsychological tests, such as the Mini-Mental State Examination (MMSE) [12]. However, some studies have used other techniques, such as thyroid function tests and magnetic resonance imaging [11], among others.
The participants perform instructed oculomotor tasks while observing visual stimuli, such as images or video. The fixation points of the participants are collected using the chosen eye tracking device. Then, statistical tests and other modeling techniques are applied for data analysis. Finally, results are presented by correlating outcome measures with cognitive status and by showing differences between control groups and PwAD when present. In the next section, approaches that have found relationships between eye movements and AD under different experimental setups are described.
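To make the preceding pipeline more concrete, the following minimal sketch shows one common way to turn raw gaze samples into fixations, a dispersion-threshold (I-DT) detector. It is a sketch in Python with illustrative parameter values (the sampling rate, dispersion threshold, and minimum duration are assumptions, not values taken from the studies reviewed here) and is not tied to any particular eye tracker.

```python
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    x, y : gaze coordinates (e.g., degrees of visual angle)
    t    : sample timestamps in seconds
    Returns a list of (start_time, end_time, centroid_x, centroid_y).
    """
    fixations = []
    start = 0
    n = len(t)
    while start < n:
        # Grow a window until it spans at least min_duration.
        end = start
        while end < n and t[end] - t[start] < min_duration:
            end += 1
        if end >= n:
            break
        # Dispersion = (max(x) - min(x)) + (max(y) - min(y)) over the window.
        def dispersion(s, e):
            return (x[s:e].max() - x[s:e].min()) + (y[s:e].max() - y[s:e].min())
        if dispersion(start, end + 1) <= max_dispersion:
            # Extend the window while the dispersion stays below the threshold.
            while end + 1 < n and dispersion(start, end + 2) <= max_dispersion:
                end += 1
            fixations.append((t[start], t[end],
                              float(x[start:end + 1].mean()),
                              float(y[start:end + 1].mean())))
            start = end + 1
        else:
            start += 1
    return fixations

# Synthetic example: 500 Hz samples with two stable gaze positions.
t = np.arange(0, 1.0, 0.002)
x = np.where(t < 0.5, 0.0, 5.0) + np.random.normal(0, 0.1, t.size)
y = np.zeros_like(t) + np.random.normal(0, 0.1, t.size)
print(detect_fixations(x, y, t))
```

Fixation lists of this kind are the usual input to the statistical comparisons between PwAD and control groups described above.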
3. Eye Movements and Alzheimer's Disease
Several studies on predementia have reported manifestations of visual symptoms produced by senile plaques and tangles located in the visual regions of the brain [38, 39]. The pathological changes in the visual system caused by neurodegenerative diseases are reviewed in [40–43]. Examples of these pathological changes are changes in visual acuity, atypical pupillary responses, and alterations in oculomotor performance [44].
Eye movements involve a complex oculomotor control system formed by extensive cerebral regions [45]. Post mortem studies provide evidence that pathologies associated with AD affect the oculomotor brain regions [46, 47]. Altered eye movement patterns reflect the resulting underlying visuospatial and executive function impairments. Thus, movement patterns are related to higher cognitive control processes [48]. That is why, for example, eye movements allow exploring the cognitive processes underlying visual search, providing information about how people forage and plan when performing visual search tasks [49].
Typically, studies dedicated to exploring relationships between eye movements and AD compare certain outcome measures between persons with AD (PwAD) and control groups when they perform specific oculomotor tasks. Examples of these tasks are fixating on a given point [19] or watching a picture [12]. An example of a dependent variable in these studies is the reaction time to perform the given task.
To examine the research involving eye movements and Alzheimer's disease, we searched the PubMed© (https://www.ncbi.nlm.nih.gov/pubmed) database with the keywords “Alzheimer's disease” and “eye movements”. The search returned 165 articles from 1979 to 2017. Of the 165 articles, 14 are review articles and 80 are not related to AD, focus on atypical subsets of AD, or do not include eye movements as part of the study. The remaining 71 articles are relevant to the research on the oculomotor performance of AD patients. Of these relevant articles, 19.7% (14 articles) were published in the last 3 years. This shows that the analysis of eye movements is gaining importance in AD studies. Furthermore, recent research shows that the analysis of oculomotor deficits is useful in the early detection of AD and also has the potential to be used to assess disease progression [43, 50].
Table 1 shows a summary of the research related to eye movement analysis and AD indexed in PubMed in the last 5 years. The summary includes the references and years, the methods used by the researchers, the main findings of the studies, and information about the participants and apparatus when available. In the following sections, we describe and categorize the conducted research, showing an evolution from early attempts toward more naturalistic scenarios.
Table 1.
Research on eye movements and Alzheimer's disease indexed in PubMed since 2013.

Cite | Methods | Findings | Participants/Apparatus
---|---|---|---
2016 [25] | Subjects responded to targets presented on a hemispherical screen at diverse eccentricities. | PwAD recognized fewer targets in the center; no difference from CG was found for the peripheral targets. | AD: 18; CG: 20. Apparatus: hemispherical screen Octopus 900 with a camera used for eye tracking.
2017 [26] | The King-Devick test (with saccadic and other movements) was applied to subjects. | The King-Devick test may be a tool to detect cognitive impairment associated with AD. | AD: 32; CG: 135; MCI: 39. Apparatus: N/A.
2016 [27] | Subjects looked at a series of slides, each containing four images with different emotional themes. | PwAD with apathy had a diminished attentional bias toward social-themed stimuli. | AD: 36 (apathy: 17; no apathy: 19). Apparatus: binocular eye tracking system developed by EL-MAR Inc.
2016 [11] | Eye movements were examined while subjects read regular and highly predictable sentences. | PwAD gaze duration was longer than that of CG. CG decreased gaze duration on highly predictable sentences, suggesting reading enhancement using stored information. | AD: 35; CG: 35. Apparatus: EyeLink 1000; chinrest to restrict head movements.
2015 [28] | Subjects performed a variety of locomotion tasks: walking, going up and down stairs, and crossing a room with and without obstacles. | The Posterior Cortical Atrophy (PCA) patient had longer mean fixation durations than the PwAD and CG. Mean fixation duration was similar between the PwAD and CG. | AD: 1; CG: 1; PCA: 1. Apparatus: SMI mobile eye tracker.
2015 [29] | Eye movements were examined while subjects read sentences. | PwAD made more fixations on regular and highly predictable sentences and spent more time reading each sentence. CG made less frequent second-pass fixations over sentences. | AD: 35; CG (elderly): 35. Apparatus: EyeLink 1000; chinrest to restrict head movements.
2015 [19] | Longitudinal study with gap and overlap paradigms. | PwAD had slower reaction times than CG. Prosaccades did not deteriorate over the 12-month longitudinal study in AD. | AD: 11; CG (elderly): 25. Apparatus: ExpressEye.
2015 [30] | Subjects made saccadic movements to photographs of instructed target scenes (natural vs. urban, indoor vs. outdoor). | Differences between controls and PwAD were found in accuracy but not in saccadic latency. | AD: 24; CG (age-matched): 28; CG (young): 26. Apparatus: eye tracker (RED-m, SensoMotoric Instruments).
2015 [23] | Eye movements were examined while subjects read proverbs. | PwAD showed lower word predictability than CG. | AD: 20; CG: 40. Apparatus: EyeLink 1000; chinrest to restrict head movements.
2014 [31] | Eye movements were examined while subjects read low- and highly predictable sentences. | CG had shorter gaze durations on highly predictable sentences. PwAD had similar gaze durations on both low- and highly predictable sentences, and their gaze duration was longer than that of CG. | AD: 20; CG (age-matched): 40. Apparatus: EyeLink 1000; chinrest to restrict head movements.
2014 [32] | Eye movements were examined while subjects read sentences. | PwAD showed altered visual exploration and an absence of contextual predictability. | AD: 18; HC (age-matched): 40. Apparatus: EyeLink 2K; chinrest to restrict head movements.
2013 [33] | Eye movements were examined while subjects read sentences. | PwAD showed marked alterations in eye movement behavior during reading. | AD: 20; CG (age-matched): 25. Apparatus: EyeLink 1000; chinrest to restrict head movements.
2014 [12] | Subjects were asked to spot an animal target in colored photographs shown along with distractor items. | PwAD were significantly less accurate than elderly controls, and elderly controls were less accurate than young controls. | AD: 17 (mild AD); CG (elderly): 23; CG (young): 24. Apparatus: eye tracker (SensoMotoric Instruments).
2014 [34] | Subjects were required to look at a small fixation cross at the center of a screen for 20 seconds. | CG and PwAD showed significant differences in microsaccade direction. | AD: 18; MCI: 15; CG (age-matched): 21. Apparatus: EyeSeeCam.
2013 [35] | Visual targets were presented to subjects in a dim room; prosaccade and antisaccade trials. | Antisaccade task performance serves as a measure of executive function in PwAD. | AD: 28; MCI: 36; CG (elderly): 118. Apparatus: Dual Purkinje Image tracker; heads stabilized on a chinrest.
2013 [36] | Prosaccade and antisaccade tasks; gap and overlap paradigms. | PwAD made an excessive proportion of uncorrected errors in the antisaccade test. | AD: 18; Parkinson's disease: 25; CG (young): 17; CG (elderly): 18. Apparatus: head-mounted ExpressEye eye tracker.
2013 [37] | Horizontal and vertical saccades; gap and overlap paradigms on a black computer screen. | A link was found between MMSE score and saccade latency. | AD: 25; amnestic MCI: 18; CG (elderly): 30. Apparatus: head-mounted EyeSeeCam.

CG: Control Group; MCI: Mild Cognitive Impairment; MMSE: Mini-Mental State Examination.
3.1. Saccadic Eye Movements and Alzheimer's Disease
Traditional studies using saccadic eye movement (SEM) tasks have reported differences between PwAD and control groups. A saccade is a rapid motion of the eye, typically taking between 30 and 80 ms to complete [21]. Examples of these studies include prosaccade and antisaccade analyses [19, 51]. To study prosaccades, a participant has to saccade from an initial fixation point to an appearing peripheral target. The reaction time, or latency, for the subject to fixate on the presented peripheral target is then measured. The research described in [37, 52, 53] reports increased saccade latencies in PwAD compared to control groups, which can be associated with cognitive processes. On the other hand, to study antisaccades, the participant must look in the direction opposite to a presented peripheral target [54]. As the participants have to inhibit the automatic saccade toward the stimulus, the antisaccade task requires additional executive processing [55]. The nature of antisaccades can therefore be associated with executive attention, and research results indicate that patients with AD show more antisaccade errors with fewer corrections than control groups [56]. The papers [14, 44, 48, 50] review work conducted on eye movements and their relationship with AD.
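As an illustration of how prosaccade latency can be derived from a recorded gaze trace, the sketch below applies a simple velocity threshold to the horizontal gaze signal and measures the delay between target onset and saccade onset. The trial layout, the threshold value, and the absence of filtering are simplifying assumptions for illustration; the cited studies use their own calibrated procedures.

```python
import numpy as np

def prosaccade_latency(t, x, target_onset, velocity_threshold=100.0):
    """Return saccade latency (s) as the delay between target onset and the
    first sample after onset whose gaze velocity exceeds the threshold.

    t : timestamps in seconds, x : horizontal gaze position in degrees.
    velocity_threshold : illustrative onset threshold in deg/s.
    """
    velocity = np.abs(np.gradient(x, t))           # instantaneous speed in deg/s
    moving = (t >= target_onset) & (velocity > velocity_threshold)
    if not moving.any():
        return None                                # no saccade detected
    saccade_onset = t[np.argmax(moving)]           # first sample above threshold
    return saccade_onset - target_onset

# Synthetic trial sampled at 1 kHz: fixation, then a 10-degree saccade
# starting 200 ms after the target appears at t = 1.0 s.
t = np.arange(0, 2.0, 0.001)
x = np.zeros_like(t)
x[t >= 1.2] = 10.0                                 # instantaneous step for simplicity
x += np.random.normal(0, 0.02, t.size)
print(prosaccade_latency(t, x, target_onset=1.0))  # ~0.2 s expected
```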
There are different challenges regarding eye movement studies and AD. The first challenge arises because oculomotor abnormalities are not exclusive to AD, and it is important to develop techniques that properly distinguish AD from other diseases. For example, SEM abnormalities such as slower prosaccades and increased antisaccade errors have been found in Multiple System Atrophy [57].
Furthermore, SEM abnormalities might be related to aging. For example, it is reported in [58] that antisaccade latencies, uncorrected errors, and the time needed to correct errors increase with aging. Older adults appear to have greater difficulty ignoring distractions during day-to-day activities than younger adults. It seems that any variable that reduces the strength of the top-down neural signal to produce a voluntary saccade, or that increases saccade speed, will increase the likelihood that a reflexive saccade to a stimulus with an abrupt onset will occur [59]. So, what is the effect of “normal” aging on eye saccade speed? It has been shown [60] that performance on the Digit Symbol Substitution Test can be altered as far as 20 years before AD becomes manifest in older individuals with a high level of education. Performance on this test is related to the speed of eye saccades. This decline in performance speed and executive functions might be a nonspecific prodromal sign of Alzheimer's disease, but it could also characterize a state of cerebral vulnerability in which the illness progresses more easily. Although the relationship between age-related cognitive decline and saccadic eye movement (SEM) deficits has been outlined, the specific cognitive alterations underlying age-related changes in saccadic performance remain unclear. The nature of aging effects on SEMs has only rarely been approached. The progressive age-related decline of processing speed and executive attention is associated with, and can be highlighted through, saccadic movement deficits in both prosaccade and antisaccade tasks.
As can be seen from Table 1, research from five years ago focused mainly on studying prosaccades and antisaccades. Indeed, the study of SEM has been dominant since the earliest approaches dedicated to analyzing eye movements and their relation to AD [52, 61–64]. As described, in SEM experiments the participants must fixate on a target. Although SEM studies have reported significant differences between persons with AD and control groups, there is still a research gap to fill before SEM analysis can be used as a marker for AD. Differentiating SEM abnormalities caused by AD from those caused by “normal” aging and other conditions is among the main challenges of SEM analysis.
Prosaccade and antisaccade tasks have been popular in research studies due to their simplicity [58]. These tasks require a controlled scenario to conduct the evaluations. As research has evolved, more complex tasks have been studied with the aim of associating eye movement deficits with AD diagnosis. Section 3.2 describes studies involving the execution of tasks more complex than attending to single target points. However, this research still falls in the category of controlled scenarios.
3.2. Eye Movements Analysis in Controlled Scenarios
In recent years, research studies have moved toward other types of experiments aiming to identify eye movement abnormalities related to AD. For example, in [12] the participants performed a task more complex than merely attending to a target point: detecting and categorizing a specific object within a natural scene. The participants observed two visual stimuli on a monitor, one an image containing an animal and the other a distractor image. The participants were asked to saccade to the image that contained the animal, and their success in fixating on the animal and on the correct image was measured. The results of this study show that persons with AD, even at a mild stage of the disease, have difficulties selecting the relevant targets compared with control groups.
Another example is given by the work in [11], which focuses on the analysis of the reading behavior of PwAD. Reading is an ADL that involves the use of working memory and memory retrieval functions. Thus, the experiments in [11] involve the analysis of a more complex task than usual SEM studies. The experiment consisted of a comparison of the eye gaze positions of PwAD and control participants when reading sentences. The findings of [11] show that PwAD have a longer gaze duration than controls. Additionally, they found that the degree of predictability of the sentences is exploited by control subjects but not by PwAD. This suggests that PwAD have impairments in their working memory and memory retrieval functions. Although the work in [11] moves toward the analysis of eye movements in ADL, the experiments are still conducted in controlled scenarios, in the sense that they use screen-based eye trackers and even a chinrest to constrain head movements. Another study analyzes attention to repeated and novel stimuli [65], which is related to cognition and attention. The experiments consist of presenting to mild-to-moderate PwAD slides that contain novel and repeated images. The researchers report that fixations on the images serve to evaluate attention to repeated and novel content, with the potential to be used to measure disease progression.
Another study analyzes the effects of AD on visual exploration [25]. The study focuses on visual search performance for target detection in the far periphery. The participants, AD patients and control subjects, explored a hemispherical screen and responded to presented targets. The results of this study show differences between AD patients and control subjects when identifying targets at different eccentricities on the screen. The researchers also report differences in target detection times and number of fixations. The work in [66] uses eye movement analysis during video watching to infer people's cognitive function. The researchers defined 13 features from fixations and found correlations between the features and memory capability. Examples of these features include mean fixation duration, fixation count, and mean saccade amplitude. Unlike other specific laboratory tasks, in these experiments the participants freely watch videos of different scenarios while features are extracted from their eye movements.
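The feature set of [66] is not available as code, but features of this kind are straightforward to compute once fixations have been extracted (for example, with a detector such as the one sketched in Section 2). The function below is an illustrative sketch that computes three of the named feature types, mean fixation duration, fixation count, and mean saccade amplitude; the input format is an assumption.

```python
import numpy as np

def gaze_features(fixations):
    """Compute simple gaze features from a list of fixations.

    fixations : list of (start_time, end_time, x, y) tuples, in temporal order,
                with x, y in degrees of visual angle.
    """
    starts, ends, xs, ys = map(np.asarray, zip(*fixations))
    durations = ends - starts
    # Saccade amplitude approximated as the distance between consecutive
    # fixation centroids.
    amplitudes = np.hypot(np.diff(xs), np.diff(ys))
    return {
        "fixation_count": len(fixations),
        "mean_fixation_duration": float(durations.mean()),
        "mean_saccade_amplitude": float(amplitudes.mean()) if amplitudes.size else 0.0,
    }

# Example with three fixations.
print(gaze_features([(0.0, 0.3, 0.0, 0.0),
                     (0.35, 0.6, 4.0, 3.0),
                     (0.68, 1.0, 4.0, -2.0)]))
```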
A pilot study in France, LYLO [67], focused on measuring the rigidity and lack of curiosity of PwAD. In this study, the patients were screened in laboratory settings while static images were displayed, and statistical parameters computed from recordings of saccades and fixations were compared. A step forward in measuring visual impairments in naturalistic situations consists of using everyday visual content, such as color video, for patient screening. The lack of curiosity can then be inferred from gaze recordings on intentionally degraded natural video content. The baseline model for the automatic prediction of attention of normal control subjects was developed in [67, 68]. In contrast to [69], where the first step toward making the observation “natural” was taken by using video in free-viewing conditions, in [68, 69] the video was intentionally degraded.
While studying visual deficits through laboratory tasks has been productive for AD assessment, its application requires cooperative scenarios. That is, subjects must consciously cooperate by performing the oculomotor tasks in order to be assessed. Thus, this type of assessment is adequate when there is already an evident manifestation of dementia that requires evaluation. In addition, the manifestation must not be so severe that the subject is unable to cooperate. To achieve early diagnosis, it is necessary to have techniques that allow the evaluation of eye movement abnormalities in scenarios that permit naturalistic assessment without the explicit cooperation of subjects, for example, to analyze how AD affects eye movements in ADL such as cooking or gardening. In the following section we describe the work related to the analysis of eye movements in naturalistic scenarios.
3.3. Eye Movements Analysis in Naturalistic Tasks
When performing daily activities such as cooking or gardening, subjects interact with several objects, for example, knives, pans, or remote controls. During these interactions, a succession of different actions is involved, for example, cutting a vegetable or watering a plant. While executing these actions, humans use their vision to locate the objects and to manipulate them [70]. Indeed, eye movements during everyday tasks provide relevant information about complex cognitive processes related to object identification, place memory, task execution, and monitoring [71].
Studies attempt to understand the relation between activity execution and eye movements by investigating gaze patterns and eye-hand coordination during actions [72, 73]. For example, results indicate that people shift their gaze to target sites in anticipation of actions [74]. Also, results indicate that subjects rarely fixate on objects irrelevant to the performed action [73]. In fact, almost all eye movements during activities are targeted at task-relevant objects, suggesting that visual attention can be modelled as “top-down”, with little influence from the “intrinsic saliency” of the scene [72]. Top-down modelling refers to aspects of attention and gaze that are under executive control and may be influenced by task directives and working memory [75]. Thus, top-down models require prior knowledge of the context [15]. On the other hand, bottom-up modelling refers to attention that is driven by properties of the visual stimulus, independent of the task or semantics [75].
The findings described so far have arisen from experiments with healthy subjects. While these results are important for understanding eye movements in ADL, experiments with AD patients are scarce. The latter are critical to support diagnosis at early stages and to monitor changes in the disease. Healthy subjects have shown different results from PwAD when performing specific oculomotor tasks, and we expect the same in naturalistic scenarios.
In the work by Forde et al. [76], the eye movements in an ADL task are analyzed for a patient with action disorganization syndrome (ADS), a PwAD, and control subjects. The results show differences in the visual behavior of the participants while they were preparing a cup of tea. For example, the ADS patient made no glances to objects anticipating their use and had an increased number of fixations on irrelevant objects during the task. This differs from the results stated in [73, 74]. The AD patient showed fewer fixations overall than the control subjects and the ADS patient. In addition, the PwAD showed a lower proportion of relevant fixations compared to the control subjects. In the work by Suzuki et al. [28], eye movements are investigated during locomotion. One AD patient, one Posterior Cortical Atrophy (PCA) patient, and one healthy subject used an eye tracking device while performing locomotion activities (walking along corridors, up and down stairs, and across a room with or without obstacles). The results show that the PCA patient was the slowest in performing the locomotion activities. Also, the PCA patient had longer fixation durations than the PwAD and the healthy subject. The PwAD required prompting during task completion, showing memory impairment. Both studies show important findings toward understanding the eye movements of PwAD; however, more experiments with more participants are required.
A research goal is to understand the eye movement abnormalities during ADL that serve to identify early signs of AD and to alert about a possible development of the disease. However, several challenges must be addressed first. For example, despite the differences found between PwAD and control groups, several visual abnormalities are not unique to AD but are also present in other pathologies. In addition, it is important to find a visual marker that can be used to measure the progression of the disease; that is, longitudinal studies must be conducted. Additionally, the clinical and personal history of each patient must be considered. PwAD might have differences in their visual behavior due to their physiological and personal context, for example, if they have sensory impairments or have experienced a determining event in their lives. For instance, a manual worker might have a different behavior than a white-collar worker.
The analysis of visual behavior during ADL has the potential to become a tool for AD assessment and for monitoring progression. Its success strongly relies on the development of technology able to measure eye movements. To be a pervasive tool, it has to measure eye movements easily and in a noninvasive way, allowing subjects to perform their activities in a natural manner.
Currently, there are clinical trial registrations describing ongoing research with the objective of analyzing eye movements in naturalistic tasks. For example, a search with the terms “Alzheimer's disease” and “eye movements” in the ClinicalTrials.gov database, provided by the US National Library of Medicine, returns 10 studies currently recruiting participants. We identified 2 of them as relevant ongoing studies of eye movements in naturalistic scenarios for AD. In [77], researchers aim to analyze eye movements when reading sentences. In [78], researchers aim to analyze deficits in visual exploration during ADL.
Researchers in [78] expect to find that persons with AD are less able to use scene semantics when locating objects. In this sense, the understanding of the scene is paramount. In the next section, we describe how computational techniques can leverage the analysis of eye movements and AD through scene understanding.
4. Towards Early Detection Leveraging on Computational Attention Modelling
As mentioned before, identifying eye movement abnormalities during the performance of ADL to support the early diagnosis or progression assessment of an eventual dementia in the elderly population is a real scientific challenge. In this sense, some interesting approaches propose to predict human visual attention by emulating the performance of the Human Visual System [79]. Indeed, computational visual saliency models (CVSM) attempt to explain and describe the process of perceptual behavior and are compared with ground truth measured by eye trackers in psychovisual experiments [80–82]. Several CVSM, such as visual saliency techniques on egocentric video, are useful in naturalistic scenarios to estimate the areas of the video that are more likely to become the focus of human visual attention [15]. Egocentric video provides a first-person view from the individual who “wears” an egocentric camera, giving visual information about objects, locations, and interactions.
Visual saliency techniques have already been combined with eye tracking in the field of Autism Spectrum Disorder (ASD) to screen differences between people with ASD and controls [83]. Researchers compare the eye movements of both groups when freely viewing natural scene images. The analysis considers fixations toward visually salient regions defined by features such as color, intensity, orientation, objects, and faces. To the best of our knowledge, research combining data from eye trackers and visual saliency modeling to analyze eye movement abnormalities in PwAD is scarce. As shown earlier in Section 3.2, the work in [67] approaches this through the analysis of fixations on degraded video content, but more research is needed.
Mobile eye trackers [20], such as Tobii© or SMI©, have been identified as suitable devices for monitoring ADLs with egocentric vision capabilities. Additionally, egocentric cameras such as GoPro, Samsung Glass, and Microsoft SenseCam [84] would allow scene understanding. They record egocentric video giving a first-person view, in other words, what the camera wearer sees [85]. This captured information can be useful to analyze or predict some or all of the visual attentive behavior through visual saliency computation. In the remainder of this section, the main characteristics of visual saliency research are described, along with the techniques used and how this research field can be applied in the context of eye movement analysis.
(1) Computational Visual Saliency Models. The research field that analyzes video computationally to estimate the image regions that attract visual attention is called visual saliency detection [86, 87]. This research field lies at the intersection of neuroscience, psychology, and computer vision [16].
Early work on visual saliency modelling uses handcrafted low-level features such as contrast [88], color [89], edges [90], and orientation. It is founded on the feature integration theory of Treisman and Gelade [91]. In addition, there is work that extracts higher-level features such as objects [92] and faces [93] in order to incorporate semantic elements of the observed scenes into the low-level features. Moreover, since the boom of deep learning, there have been different proposals for configuring and arranging these supervised classification tools, such as convolutional neural networks [94–96], that report improved results for saliency estimation tasks. However, as deep learning requires huge amounts of data, more annotated information on diverse scenarios is still required. The works in [15, 97, 98] review techniques used for visual saliency detection.
According to the method used for modeling attention, there are two main categories in visual saliency research: bottom-up modelling and top-down modelling. Bottom-up methods use information such as color, contrast, orientation, and texture [99]; they predict stimulus-driven attention. Top-down models, in contrast, require prior knowledge of the visual search task and the context [15]. Currently, most of the work on visual saliency falls into the category of bottom-up methods.
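As a concrete example of the bottom-up family, the following sketch implements a spectral residual saliency map, a well-known bottom-up model that uses only the Fourier amplitude spectrum of a grayscale image and no task knowledge. It is shown here as one simple representative of bottom-up modelling, not as the method used in the works cited above; the smoothing parameter and toy image are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_residual_saliency(image, sigma=3.0):
    """Bottom-up saliency from the spectral residual of a grayscale image."""
    spectrum = np.fft.fft2(image)
    log_amplitude = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # The residual is the log amplitude minus its locally averaged version.
    residual = log_amplitude - gaussian_filter(log_amplitude, sigma)
    saliency = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma)      # smooth the raw map
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-12)

# Toy image: uniform background with one conspicuous bright square.
img = np.zeros((128, 128))
img[60:70, 60:70] = 1.0
smap = spectral_residual_saliency(img)
# The maximum of the map falls on or near the bright square.
print(smap.shape, np.unravel_index(int(smap.argmax()), smap.shape))
```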
Visual saliency modelling outputs a saliency map S, which is a two-dimensional topographically arranged map that encodes the stimulus conspicuity of the visual scene [100]. The pixel values in the map indicate the saliency degree of the corresponding regions in the visual scene [15]. The maps are compared against ground truth maps built from gaze fixations recorded by eye trackers while subjects perform visual tasks. The ground truth might include synthetic stimuli or come from natural scenes, including still images and video [101].
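A common way to quantify this comparison is the Normalized Scanpath Saliency (NSS): the predicted map is standardized to zero mean and unit variance and then sampled at the recorded fixation locations. The snippet below is a minimal sketch of this metric; it assumes that the fixation coordinates have already been mapped onto the pixel grid of the saliency map.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency of a predicted map given fixation points.

    saliency_map : 2D array of predicted saliency values.
    fixations    : iterable of (row, col) pixel coordinates of recorded fixations.
    Higher values mean fixations fall on locations the model considers salient;
    chance level is about 0.
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    rows, cols = zip(*fixations)
    return float(s[np.asarray(rows), np.asarray(cols)].mean())

# Example: a map peaked at the image center, with fixations near that peak.
yy, xx = np.mgrid[0:100, 0:100]
smap = np.exp(-((yy - 50) ** 2 + (xx - 50) ** 2) / (2 * 10.0 ** 2))
print(nss(smap, [(50, 52), (48, 49), (55, 45)]))   # clearly above 0
```

Metrics of this family (NSS, AUC, correlation coefficient) are what would allow quantifying how well a given model explains the fixations of control subjects versus those of PwAD.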
Pioneering research on visual saliency studied saliency from a third-person perspective, for instance, using still images [102] or video [103] coming from nonegocentric cameras. In the scenario of scene understanding for AD studies, however, the interest is in saliency from the point of view of the subject performing the activities. Hence, saliency maps need to be built on egocentric video content, from the point of view of the subject wearing the recording device. However, the analysis of egocentric video brings new challenges. For instance, motion cues are significant in third-person video, but camera motion is inherent to egocentric recordings [104]. Therefore, the residual motion in the image plane of egocentric video has to be computed after the motion of the camera wearer has been compensated [105]. Egocentric video allows better introduction of contextual knowledge about the subject because it follows the field of view of the subject's actions. In this sense, this paradigm can support top-down attention modeling [106] in naturalistic scenarios such as ADL execution.
Besides the bottom-up image cues, egocentric video provides context about the manipulation of objects [107, 108], hand positions [106, 109], ego-motion [110], actions [111, 112], and activities [113, 114]. Egocentric video supports top-down attention modelling by combining these diverse sources of contextual information. For example, a model may assume that gaze is directed toward an object currently held by the subject's hands. In addition, the ego-motion that occurs during visual exploration aimed at locating a specific object required for a given activity also serves for gaze estimation [115].
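To make this idea concrete, the following sketch combines a bottom-up saliency map with a hand-centered spatial prior, reflecting the assumption stated above that gaze tends toward the object manipulated near the hands. The Gaussian prior and the multiplicative fusion are illustrative modelling choices, not the method of the cited works; the hand position is assumed to come from a separate hand detector.

```python
import numpy as np

def hand_prior(shape, hand_pos, sigma=15.0):
    """Gaussian prior centered on the detected hand position (row, col)."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (rows - hand_pos[0]) ** 2 + (cols - hand_pos[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def combine_top_down(bottom_up_map, hand_pos, weight=0.7):
    """Blend bottom-up saliency with a top-down hand-position prior.

    weight : contribution of the top-down prior (0 = pure bottom-up).
    """
    prior = hand_prior(bottom_up_map.shape, hand_pos)
    combined = (1 - weight) * bottom_up_map + weight * bottom_up_map * prior
    return combined / (combined.max() + 1e-12)

# Toy example: uniform bottom-up map; the prior concentrates predicted
# attention around a hand detected at pixel (40, 60).
bu = np.ones((120, 160))
gaze_map = combine_top_down(bu, hand_pos=(40, 60))
print(gaze_map.shape, np.unravel_index(int(gaze_map.argmax()), gaze_map.shape))  # near (40, 60)
```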
Top-down and bottom-up computational visual attention models show correlation between human fixations and predicted saliency maps. However, most of the results arise from experiments with healthy participants, while few studies involve PwAD. In this review, we focus on the scenario of early AD detection. Thus, we address the relevant work relating computational attention modelling to AD in the next section.
(2) Computational Visual Saliency Models and Diagnosis. Progress in the prediction of visual saliency, including in egocentric content, makes it possible to build robust models that predict the regions where the fixations of a test subject are most expected. Therefore, if a subject executing an ADL does not properly fixate on the predicted areas, it can be supposed that this subject should undergo further tests to confirm or rule out AD. For healthy subjects, visual saliency techniques assume that the subject will execute the activity fixating on relevant objects or with coherent visual exploration. However, as mentioned in the work by Forde et al. [76], the participant with AD had a lower proportion of relevant fixations compared to healthy subjects in ADL settings.
Another relevant aspect of gaze measurement in patients with AD is sensitivity. For example, diagnosis based on SEM tasks requires saccadic sensitivity. However, current visual saliency techniques addressing saccadic estimation are at an early stage [116].
Although top-down mechanisms dominate attention modelling for healthy subjects, it has been suggested that the visual behavior of persons with cognitive problems might rely more on bottom-up, saliency-driven mechanisms, with fewer fixations on objects relevant to the task [71]. The work in [117] suggests that the visual attention problems of AD patients are more noticeable when the target item is not salient and shares common features with the background. The study in [118] analyzes the visual search performance of AD patients by conducting experiments under salient and nonsalient search conditions. The researchers measure the reaction times when PwAD and control participants search for target elements. The PwAD show longer reaction times than control participants; however, the gap between the two groups is larger when searching for nonsalient target items. This suggests that salient elements attract the attention of PwAD.
The research in [105] on egocentric video acknowledges the potential of visual saliency techniques for developing a tool for medical practitioners in realistic ADL scenarios. The researchers compare the gaze of an actor and a viewer. The actor is a person performing an activity (potentially a PwAD), while the viewer is a person watching egocentric video recordings from the actor (the medical practitioner). The paper suggests a relation between the gaze of the actor and that of the viewer. This relation consists of a time shift between their points of attention. In other words, the viewer looks at the same place in the visual scene as the (healthy) actor but a few milliseconds later. The potential of this tool relies on the ability of a system to determine whether the gaze of the actor is normal or abnormal from the perspective of a medical practitioner. Additionally, the settings of the tool, such as the use of an egocentric camera, allow the use of computational attention-modelling techniques.
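One simple way to quantify such a time shift would be to cross-correlate the actor's and the viewer's gaze traces after mapping both into the same egocentric frame. The sketch below estimates the lag that maximizes the correlation of a single gaze coordinate; using one coordinate, a fixed sampling rate, and perfectly aligned traces are simplifying assumptions, and this is not the procedure of [105].

```python
import numpy as np

def gaze_lag(actor_x, viewer_x, dt, max_lag=0.5):
    """Estimate the delay (s) of the viewer's gaze relative to the actor's.

    actor_x, viewer_x : equal-length horizontal gaze traces in a common frame.
    dt                : sampling period in seconds.
    """
    a = (actor_x - actor_x.mean()) / (actor_x.std() + 1e-12)
    v = (viewer_x - viewer_x.mean()) / (viewer_x.std() + 1e-12)
    max_shift = int(max_lag / dt)
    lags = np.arange(0, max_shift + 1)
    # Correlate the viewer trace with the actor trace shifted into the past.
    corr = [np.mean(a[:len(a) - k] * v[k:]) for k in lags]
    return lags[int(np.argmax(corr))] * dt

# Synthetic traces at 50 Hz: the viewer follows the actor with a 120 ms delay.
dt = 0.02
t = np.arange(0, 10, dt)
actor = np.sin(2 * np.pi * 0.4 * t) + np.random.normal(0, 0.05, t.size)
viewer = np.roll(actor, 6)                 # 6 samples * 20 ms = 120 ms delay
print(gaze_lag(actor, viewer, dt))         # ~0.12 expected
```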
Computational attention-modelling techniques can be complemented with other contextual information in order to learn more about the subject's oculomotor behavior. For example, the amount of time that a participant takes to complete an activity can be explored; as the literature shows, people with cognitive deficits take longer to complete tasks.
5. Conclusions and Future Directions
Neurodegenerative diseases, specifically Alzheimer's disease, are a problem that affects the population worldwide. Currently, AD has no cure, but it has been demonstrated that treatment helps to delay its progression and improve quality of life. Diagnosis frequently occurs at late stages of the disease, when symptoms are evident. Nevertheless, research has found that AD is present up to 20 years before the disease becomes manifest. To obtain better treatment outcomes, it is desirable to have an early diagnosis. Understanding contextual differences that might influence the course of the disease would also be helpful. Among current diagnostic techniques, the analysis of visual behavior has the potential to become useful at early stages and to be a pervasive tool. Several investigations have explored the relations of eye movements with AD through specific oculomotor tasks, demonstrating visual features that can be used for early diagnosis and progression measurement. However, more experiments are necessary in naturalistic scenarios to develop a tool that can be used at early stages. Nevertheless, changes occurring in older individuals without cognitive impairment must also be taken into consideration and eventually approached through further research on normal individuals. Eye movement abnormalities have been measured mostly using eye tracking technology. Nevertheless, computer vision techniques, such as visual saliency and object detection in ADL performance settings, could be a good means to measure the visual attention of PwAD and to diagnose based on its difference from the attention of normal controls in naturalistic scenarios while performing ADLs. Several challenges must be addressed, such as estimating gaze under top-down driven mechanisms and relating bottom-up mechanisms to the activities. Also, it is important to conduct experiments with persons with different cognitive problems in order to learn the features that differentiate among healthy subjects, people with other diseases, and persons with AD.
Acknowledgments
This review was supported by the projects CATEDRAS CONACYT numbers 672 and IPN-SIP2018.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
- 1.Díaz G., Romero E., Hernández-Tamames J. A., Molina V., Malpica N. Automatic classification of structural MRI for diagnosis of neurodegenerative diseases. Acta Biologica Colombiana. 2010;15(3):165–180. [Google Scholar]
- 2.Koikkalainen J., Rhodius-Meester H., Tolonen A., et al. Differential diagnosis of neurodegenerative diseases using structural MRI data. NeuroImage: Clinical. 2016;11:435–449. doi: 10.1016/j.nicl.2016.02.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Stoessl A. J. Neuroimaging in the early diagnosis of neurodegenerative disease. Translational Neurodegeneration. 2012;1, article no. 5 doi: 10.1186/2047-9158-1-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Alzheimer's Association. 2018 Alzheimer's disease facts and figures. Alzheimer’s & Dementia. 2018;14(3):367–429. doi: 10.1016/j.jalz.2018.02.001. [DOI] [Google Scholar]
- 5.Dauwels J., Kannan S. Diagnosis of alzheimer's disease using electric signals of the brain—a grand challenge. Asia-Pacific Biotech News. 2012;16(10n11):22–38. doi: 10.1142/S0219030312000651. [DOI] [Google Scholar]
- 6.Nordberg A., Rinne J. O., Kadir A., Långström B. The use of PET in Alzheimer disease. Nature Reviews Neurology. 2010;6(2):78–87. doi: 10.1038/nrneurol.2009.217. [DOI] [PubMed] [Google Scholar]
- 7.Graham W. V., Bonito-Oliva A., Sakmar T. P. Update on alzheimer's disease therapy and prevention strategies. Annual Review of Medicine. 2017;68(1):413–430. doi: 10.1146/annurev-med-042915-103753. [DOI] [PubMed] [Google Scholar]
- 8.De Vugt M. E., Verhey F. R. J. The impact of early dementia diagnosis and intervention on informal caregivers. Progress in Neurobiology. 2013;110:54–62. doi: 10.1016/j.pneurobio.2013.04.005. [DOI] [PubMed] [Google Scholar]
- 9.Pietrzak K., Czarnecka K., Mikiciuk-Olasik E., Szymanski P. New Perspectives of Alzheimer Disease Diagnosis – the Most Popular and Future Methods. Medicinal Chemistry. 2017 doi: 10.2174/1573406413666171002120847. [DOI] [PubMed] [Google Scholar]
- 10.Weuve J., Proust-Lima C., Power M. C., et al. Guidelines for reporting methodological challenges and evaluating potential bias in dementia research. Alzheimer’s & Dementia. 2015;11(9, article no. 2048):1098–1109. doi: 10.1016/j.jalz.2015.06.1885. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Fernández G., Manes F., Politi L. E., et al. Patients with Mild Alzheimer's Disease Fail When Using Their Working Memory: Evidence from the Eye Tracking Technique. Journal of Alzheimer's Disease. 2016;50(3):827–838. doi: 10.3233/JAD-150265. [DOI] [PubMed] [Google Scholar]
- 12.Boucart M., Bubbico G., Szaffarczyk S., Pasquier F. Animal spotting in Alzheimer's disease: An eye tracking study of object categorization. Journal of Alzheimer's Disease. 2014;39(1):181–189. doi: 10.3233/JAD-131331. [DOI] [PubMed] [Google Scholar]
- 13.König A., Sacco G., Bensadoun G., et al. The role of information and communication technologies in clinical trials with patients with Alzheimer's disease and related disorders. Frontiers in Aging Neuroscience. 2015;7:p. 110. doi: 10.3389/fnagi.2015.00110. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Molitor R. J., Ko P. C., Ally B. A. Eye movements in alzheimers disease. Journal of Alzheimers Disease. 2015;44(1):1–12. doi: 10.3233/JAD-141173. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Runxin M., Yu Y., Yue X. Survey on Image Saliency Detection Methods. Proceedings of the 7th International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, CyberC 2015; September 2015; pp. 329–338. [DOI] [Google Scholar]
- 16.Huo L., Jiao L., Wang S., Yang S. Object-level saliency detection with color attributes. Pattern Recognition. 2016;49:162–173. doi: 10.1016/j.patcog.2015.07.005. [DOI] [Google Scholar]
- 17.Muthumanickam P. K., Forsell C., Vrotsou K., Johansson J., Cooper M. Supporting exploration of eye tracking data: Identifying changing behaviour over long durations. Proceedings of the 6th Workshop Beyond Time and Errors on Novel Evaluation Methods for Visualization, BELIV 2016; pp. 70–77. [DOI] [Google Scholar]
- 18.Pallarés V., Hernández M., Dempere-Marco L. Eye-Tracking Data in Visual Search Tasks: A, Hallmark of Cognitive Function. Biosystems and Biorobotics. 2017;15:873–877. doi: 10.1007/978-3-319-46669-9_142. [DOI] [Google Scholar]
- 19.Crawford T. J., Devereaux A., Higham S., Kelly C. The disengagement of visual attention in Alzheimer's disease: A longitudinal eye-tracking study. Frontiers in Aging Neuroscience. 2015;7, article no. 118 doi: 10.3389/fnagi.2015.00118. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Itti L. New Eye-Tracking Techniques May Revolutionize Mental Health Screening. Neuron. 2015;88(3):442–444. doi: 10.1016/j.neuron.2015.10.033. [DOI] [PubMed] [Google Scholar]
- 21.Holmqvist K., Nyström M., Andersson R., et al. Eye tracking: A comprehensive guide to methods and measures. OUP Oxford; 2011. [Google Scholar]
- 22.Pavisic I. M., Firth N. C., Parsons S., et al. Eyetracking metrics in young onset alzheimer's disease: a window into cognitive visual functions. Frontiers in Neurology. 2017;8, article 377 doi: 10.3389/fneur.2017.00377. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Fernández G., Castro L. R., Schumacher M., Agamennoni O. E. Diagnosis of mild Alzheimer disease through the analysis of eye movements during reading. Journal of integrative neuroscience. 2015;14(1):121–133. doi: 10.1142/S0219635215500090. [DOI] [PubMed] [Google Scholar]
- 24.Niehorster D. C., Cornelissen T. H. W., Holmqvist K., Hooge I. T. C., Hessels R. S. What to expect from your remote eye-tracker when participants are unrestrained. Behavior Research Methods. 2017:1–15. doi: 10.3758/s13428-017-0863-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Vallejo V., Cazzoli D., Rampa L., et al. Effects of Alzheimer's disease on visual target detection: A "peripheral bias". Frontiers in Aging Neuroscience. 2016;8, article no. 200 doi: 10.3389/fnagi.2016.00200. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Galetta K. M., Chapman K. R., Essis M. D., et al. Screening Utility of the King-Devick Test in Mild Cognitive Impairment and Alzheimer Disease Dementia. Alzheimer Disease & Associated Disorders. 2017;31(2):152–158. doi: 10.1097/WAD.0000000000000157. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Chau S. A., Chung J., Herrmann N., Eizenman M., Lanctôt K. L. Apathy and Attentional Biases in Alzheimer's Disease. Journal of Alzheimer's Disease. 2016;51(3):837–846. doi: 10.3233/JAD-151026. [DOI] [PubMed] [Google Scholar]
- 28.Suzuki T., Yong K., Yang B., et al. Locomotion and eye behaviour under controlled environment in individuals with Alzheimer's disease. Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2015; August 2015; pp. 6594–6597. [DOI] [PubMed] [Google Scholar]
- 29.Fernández G., Schumacher M., Castro L., Orozco D., Agamennoni O. Patients with mild Alzheimer's disease produced shorter outgoing saccades when reading sentences. Psychiatry Research. 2015;229(1-2):470–478. doi: 10.1016/j.psychres.2015.06.028. [DOI] [PubMed] [Google Scholar]
- 30.Lenoble Q., Bubbico G., Szaffarczyk S., Pasquier F., Boucart M. Scene categorization in Alzheimer's disease: A saccadic choice task. Dementia and Geriatric Cognitive Disorders Extra. 2015;5(1):1–12. doi: 10.1159/000366054. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Fernández G., Manes F., Rotstein N. P., et al. Lack of contextual-word predictability during reading in patients with mild Alzheimer disease. Neuropsychologia. 2014;62(1):143–151. doi: 10.1016/j.neuropsychologia.2014.07.023. [DOI] [PubMed] [Google Scholar]
- 32.Fernández G., Laubrock J., Mandolesi P., Colombo O., Agamennoni O. Registering eye movements during reading in Alzheimers disease: Difficulties in predicting upcoming words. Journal of Clinical and Experimental Neuropsychology. 2014;36(3):302–316. doi: 10.1080/13803395.2014.892060. [DOI] [PubMed] [Google Scholar]
- 33.Fernández G., Mandolesi P., Rotstein N. P., Colombo O., Agamennoni O., Politi L. E. Eye movement alterations during reading in patients with early Alzheimer disease. Investigative ophthalmology & visual science. 2013;54(13):8345–8352. doi: 10.1167/iovs.13-12877. [DOI] [PubMed] [Google Scholar]
- 34.Kapoula Z., Yang Q., Otero-Millan J., et al. Distinctive features of microsaccades in Alzheimer's disease and in mild cognitive impairment. AGE. 2014;36(2):535–543. doi: 10.1007/s11357-013-9582-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Heuer H. W., Mirsky J. B., Kong E. L., et al. Antisaccade task reflects cortical involvement in mild cognitive impairment. Neurology. 2013;81(14):1235–1243. doi: 10.1212/WNL.0b013e3182a6cbfe. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Crawford T. J., Higham S., Mayes J., Dale M., Shaunak S., Lekwuwa G. The role of working memory and attentional disengagement on inhibitory control: Effects of aging and Alzheimer's disease. AGE. 2013;35(5):1637–1650. doi: 10.1007/s11357-012-9466-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Yang Q., Wang T., Su N., Xiao S., Kapoula Z. Specific saccade deficits in patients with Alzheimer's disease at mild to moderate stage and in patients with amnestic mild cognitive impairment. AGE. 2013;35(4):1287–1298. doi: 10.1007/s11357-012-9420-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.McKee A. C., Au R., Cabral H. J., et al. Visual association pathology in preclinical Alzheimer disease. Journal of Neuropathology & Experimental Neurology. 2006;65(6):621–630. doi: 10.1097/00005072-200606000-00010. [DOI] [PubMed] [Google Scholar]
- 39.Brewer A. A., Barton B. Visual cortex in aging and Alzheimer's disease: Changes in visual field maps and population receptive fields. Frontiers in Psychology. 2014;5 doi: 10.3389/fpsyg.2014.00074.Article 74 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Kusne Y., Wolf A. B., Townley K., Conway M., Peyman G. A. Visual system manifestations of Alzheimer's disease. Acta Ophthalmologica. 2016 doi: 10.1111/aos.13319. [DOI] [PubMed] [Google Scholar]
- 41.Lim J. K. H., Li Q.-X., He Z., et al. The eye as a biomarker for Alzheimer's disease. Frontiers in Neuroscience. 2016;10, article no. 536 doi: 10.3389/fnins.2016.00536. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Armstrong R. A. Alzheimer's disease and the eye. Journal of Optometry. 2009;2(3):103–111. doi: 10.3921/joptom.2009.103. [DOI] [Google Scholar]
- 43.Javaid F. Z., Brenton J., Guo L., Cordeiro M. F. Visual and ocular manifestations of Alzheimer's disease and their use as biomarkers for diagnosis and progression. Frontiers in Neurology. 2016;7, article no. 55 doi: 10.3389/fneur.2016.00055. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.MacAskill M. R., Anderson T. J. Eye movements in neurodegenerative diseases. Current Opinion in Neurology. 2016;29(1):61–68. doi: 10.1097/WCO.0000000000000274. [DOI] [PubMed] [Google Scholar]
- 45.Tzekov R., Mullan M. Vision function abnormalities in Alzheimer disease. Survey of Ophthalmology. 2014;59(4):414–433. doi: 10.1016/j.survophthal.2013.10.002. [DOI] [PubMed] [Google Scholar]
- 46.Rüb U., Del Tredici K., Schultz C., Büttner-Ennever J. A., Braak H. The premotor region essential for rapid vertical eye movements shows early involvement in Alzheimer's disease-related cytoskeletal pathology. Vision Research. 2001;41(16):2149–2156. doi: 10.1016/S0042-6989(01)00090-6. [DOI] [PubMed] [Google Scholar]
- 47.Boxer A. L., Garbutt S., Seeley W. W., et al. Saccade abnormalities in autopsy-confirmed frontotemporal lobar degeneration and alzheimer disease. JAMA Neurology. 2012;69(4):509–517. doi: 10.1001/archneurol.2011.1021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Freitas Pereira M. L. G., von Zuben A Camargo M., Aprahamian I., Forlenza O. V. Eye movement analysis and cognitive processing: Detecting indicators of conversion to Alzheimer's disease. Neuropsychiatric Disease and Treatment. 2014;10:1273–1285. doi: 10.2147/NDT.S55371. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Amor T. A., Reis S. D. S., Campos D., Herrmann H. J., Andrade J. S. Persistence in eye movement during visual search. Scientific Reports. 2016;6 doi: 10.1038/srep20815.20815 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Coubard O. A. What do we know about eye movements in Alzheimer's disease? The past 37 years and future directions. Biomarkers in Medicine. 2016;10(7):677–680. doi: 10.2217/bmm-2016-0095.
- 51.Crawford T. J., Higham S. Distinguishing between impairments of working memory and inhibitory control in cases of early dementia. Neuropsychologia. 2016;81:61–67. doi: 10.1016/j.neuropsychologia.2015.12.007.
- 52.Yang Q., Wang T., Su N., Liu Y., Xiao S., Kapoula Z. Long Latency and High Variability in Accuracy-Speed of Prosaccades in Alzheimer's Disease at Mild to Moderate Stage. Dementia and Geriatric Cognitive Disorders Extra. 2011;1(1):318–329. doi: 10.1159/000333080.
- 53.Garbutt S., Matlin A., Hellmuth J., et al. Oculomotor function in frontotemporal lobar degeneration, related disorders and Alzheimer's disease. Brain. 2008;131(5):1268–1281. doi: 10.1093/brain/awn047.
- 54.Kaufman L. D., Pratt J., Levine B., Black S. E. Antisaccades: A probe into the dorsolateral prefrontal cortex in Alzheimer's disease. A critical review. Journal of Alzheimer's Disease. 2010;19(3):781–793. doi: 10.3233/JAD-2010-1275.
- 55.Peltsch A., Hemraj A., Garcia A., Munoz D. P. Saccade deficits in amnestic mild cognitive impairment resemble mild Alzheimer's disease. European Journal of Neuroscience. 2014;39(11):2000–2013. doi: 10.1111/ejn.12617.
- 56.Kaufman L. D., Pratt J., Levine B., Black S. E. Executive deficits detected in mild Alzheimer's disease using the antisaccade task. Brain and Behavior. 2012;2(1):15–21. doi: 10.1002/brb3.28.
- 57.Brooks S. H., Klier E. M., Red S. D., et al. Slowed prosaccades and increased antisaccade errors as a potential behavioral biomarker of multiple system atrophy. Frontiers in Neurology. 2017;8, article no. 261. doi: 10.3389/fneur.2017.00261.
- 58.Noiret N., Carvalho N., Laurent É., et al. Saccadic Eye Movements and Attentional Control in Alzheimer's Disease. Archives of Clinical Neuropsychology. 2018;33(1):1–13. doi: 10.1093/arclin/acx044.
- 59.Bowling A. C., Lindsay P., Smith B. G., Storok K. Saccadic eye movements as indicators of cognitive function in older adults. Aging, Neuropsychology, and Cognition. 2015;22(2):201–219. doi: 10.1080/13825585.2014.901290.
- 60.Amieva H., Mokri H., Le Goff M., et al. Compensatory mechanisms in higher-educated subjects with Alzheimer's disease: A study of 20 years of cognitive decline. Brain. 2014;137(4):1167–1175. doi: 10.1093/brain/awu035.
- 61.Bylsma F. W., Rasmusson D. X., Rebok G. W., Keyl P. M., Tune L., Brandt J. Changes in visual fixation and saccadic eye movements in Alzheimer's disease. International Journal of Psychophysiology. 1995;19(1):33–40. doi: 10.1016/0167-8760(94)00060-R.
- 62.Hershey L. A., Whicker L., Abel L. A., Dell'Osso L. F., Traccis S., Grossniklaus D. Saccadic Latency Measurements in Dementia. JAMA Neurology. 1983;40(9):592–593. doi: 10.1001/archneur.1983.04050080092023.
- 63.Fletcher W. A., Sharpe J. A. Saccadic eye movement dysfunction in Alzheimer's disease. Annals of Neurology. 1986;20(4):464–471. doi: 10.1002/ana.410200405.
- 64.Pirozzolo F. J., Hansch E. C. Oculomotor reaction time in dementia reflects degree of cerebral dysfunction. Science. 1981;214(4518):349–351. doi: 10.1126/science.7280699.
- 65.Chau S. A., Herrmann N., Sherman C., et al. Visual Selective Attention Toward Novel Stimuli Predicts Cognitive Decline in Alzheimer's Disease Patients. Journal of Alzheimer's Disease. 2017;55(4):1–11. doi: 10.3233/JAD-160641.
- 66.Zhang Y., Wilcockson T., Kim K. I., Crawford T., Gellersen H., Sawyer P. Monitoring dementia with automatic eye movements analysis. Smart Innovation, Systems and Technologies. 2016;57:299–309. doi: 10.1007/978-3-319-39627-9_26.
- 67.Chaabouni S., Benois-Pineau J., Tison F., Ben Amar C., Zemmari A. Prediction of visual attention with deep CNN on artificially degraded videos for studies of attention of patients with dementia. Multimedia Tools and Applications. 2017;76(21):1–20. doi: 10.1007/s11042-017-4796-5.
- 68.Chaabouni S., Tison F., Benois-Pineau J., Ben Amar C. Prediction of visual attention with deep CNN for studies of neurodegenerative diseases. Proceedings of the 14th International Workshop on Content-Based Multimedia Indexing, CBMI 2016; June 2016; pp. 1–6.
- 69.Tseng P.-H., Cameron I. G. M., Pari G., Reynolds J. N., Munoz D. P., Itti L. High-throughput classification of clinical populations from natural viewing eye movements. Journal of Neurology. 2013;260(1):275–284. doi: 10.1007/s00415-012-6631-2.
- 70.Land M., Mennie N., Rusted J. The roles of vision and eye movements in the control of activities of daily living. Perception. 1999;28(11):1311–1328. doi: 10.1068/p2935.
- 71.Seligman S. C., Giovannetti T. The Potential Utility of Eye Movements in the Detection and Characterization of Everyday Functional Difficulties in Mild Cognitive Impairment. Neuropsychology Review. 2015;25(2):199–215. doi: 10.1007/s11065-015-9283-z.
- 72.Land M. F., Hayhoe M. In what ways do eye movements contribute to everyday activities? Vision Research. 2001;41(25-26):3559–3565. doi: 10.1016/S0042-6989(01)00102-X.
- 73.Land M. F. Eye movements and the control of actions in everyday life. Progress in Retinal and Eye Research. 2006;25(3):296–324. doi: 10.1016/j.preteyeres.2006.01.002.
- 74.Donnarumma F., Costantini M., Ambrosini E., Friston K., Pezzulo G. Action perception as hypothesis testing. Cortex. 2017;89:45–60. doi: 10.1016/j.cortex.2017.01.016.
- 75.Boisvert J. F. G., Bruce N. D. B. Predicting task from eye movements: On the importance of spatial distribution, dynamics, and image features. Neurocomputing. 2016;207:653–668. doi: 10.1016/j.neucom.2016.05.047.
- 76.Forde E. M. E., Rusted J., Mennie N., Land M., Humphreys G. W. The eyes have it: An exploration of eye movements in action disorganisation syndrome. Neuropsychologia. 2010;48(7):1895–1900. doi: 10.1016/j.neuropsychologia.2010.01.024.
- 77. ClinicalTrials.gov [Internet]. National Library of Medicine (US); Centre Hospitalier Universitaire de Nice. Identifier NCT02557464, “Identification of early markers of Alzheimer's disease by using eye tracking in reading (ADAL),” 2015; this study is currently recruiting participants. Available: https://clinicaltrials.gov/ct2/show/NCT02557464?term=eye&cond=Alzheimer+Disease&cntry=FR&rank=1.
- 78. ClinicalTrials.gov [Internet]. National Library of Medicine (US); Centre Hospitalier Universitaire de Nice. Identifier NCT02941289, “Visuospatial attention, eye movements and instrumental activities of daily living (IADLs) in Alzheimer's disease (ARVA-MA),” 2016; this study is currently recruiting participants. Available: https://clinicaltrials.gov/ct2/show/NCT02941289?term=Eye+movements&recrs=ab&cond=Alzheimer+Disease&rank=1.
- 79.Mancas M., Ferrera V. P., Riche N., Taylor J. G. From Human Attention to Computational Attention: A Multidisciplinary Approach. Vol. 10. Springer; 2016.
- 80.Frintrop S. Computational visual attention. In: Computer Analysis of Human Behavior. Springer; 2011. pp. 69–101.
- 81.Itti L., Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience. 2001;2(3):194–203. doi: 10.1038/35058500.
- 82.Tsotsos J. K., Rothenstein A. Computational models of visual attention. Scholarpedia. 2011;6(1), article no. 6201. doi: 10.4249/scholarpedia.6201.
- 83.Wang S., Jiang M., Duchesne X. M., et al. Atypical Visual Saliency in Autism Spectrum Disorder Quantified through Model-Based Eye Tracking. Neuron. 2015;88(3):604–616. doi: 10.1016/j.neuron.2015.09.042.
- 84.Singh S., Arora C., Jawahar C. V. First person action recognition using deep learned descriptors. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016; July 2016; pp. 2620–2628.
- 85.Betancourt A., Morerio P., Regazzoni C. S., Rauterberg M. The evolution of first person vision methods: A survey. IEEE Transactions on Circuits and Systems for Video Technology. 2015;25(5):744–760. doi: 10.1109/TCSVT.2015.2409731.
- 86.Pan J., Canton-Ferrer C., McGuinness K., et al. SalGAN: Visual saliency prediction with generative adversarial networks. CoRR. https://arxiv.org/abs/1701.01081v2.
- 87.Cornia M., Baraldi L., Serra G., Cucchiara R. Predicting human eye fixations via an LSTM-based saliency attentive model. CoRR. doi: 10.1109/TIP.2018.2851672. https://arxiv.org/abs/1611.09571v3.
- 88.Reinagel P., Zador A. M. Natural scene statistics at the centre of gaze. Network: Computation in Neural Systems. 1999;10(4). doi: 10.1088/0954-898X/10/4/304.
- 89.Jost T., Ouerhani N., Wartburg R. V., Müri R., Hügli H. Assessing the contribution of color in visual attention. Computer Vision and Image Understanding. 2005;100(1-2):107–123. doi: 10.1016/j.cviu.2004.10.009.
- 90.Baddeley R. J., Tatler B. W. High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis. Vision Research. 2006;46(18):2824–2833. doi: 10.1016/j.visres.2006.02.024.
- 91.Treisman A. M., Gelade G. A feature-integration theory of attention. Cognitive Psychology. 1980;12(1):97–136. doi: 10.1016/0010-0285(80)90005-5.
- 92.Borji A., Cheng M., Jiang H., Li J. Salient object detection: a survey. CoRR. https://arxiv.org/abs/1411.5878.
- 93.Cerf M., Frady E. P., Koch C. Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision. 2009;9(12), article no. 10. doi: 10.1167/9.12.10.
- 94.Kruthiventi S. S. S., Gudisa V., Dholakiya J. H., Babu R. V. Saliency unified: A deep architecture for simultaneous eye fixation prediction and salient object segmentation. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016; July 2016; pp. 5781–5790.
- 95.Li G., Yu Y. Deep contrast learning for salient object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016; July 2016; pp. 478–487.
- 96.Liu N., Han J. DHSNet: Deep hierarchical saliency network for salient object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016; July 2016; pp. 678–686.
- 97.Zhao Q., Koch C. Learning saliency-based visual attention: A review. Signal Processing. 2013;93(6):1401–1407. doi: 10.1016/j.sigpro.2012.06.014.
- 98.Borji A., Itti L. State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2013;35(1):185–207. doi: 10.1109/TPAMI.2012.89.
- 99.Kuen J., Wang Z., Wang G. Recurrent attentional networks for saliency detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016; July 2016; pp. 3668–3677.
- 100.Veale R., Hafed Z. M., Yoshida M. How is visual salience computed in the brain? Insights from behaviour, neurobiology and modeling. Philosophical Transactions of the Royal Society B: Biological Sciences. 2017;372(1714), article no. 20160113. doi: 10.1098/rstb.2016.0113.
- 101.Borji A., Sihite D. N., Itti L. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Transactions on Image Processing. 2013;22(1):55–69. doi: 10.1109/TIP.2012.2210727.
- 102.Duan L., Gu J., Yang Z., Miao J., Ma W., Wu C. Bio-inspired Visual Attention Model and Saliency Guided Object Segmentation. In: Genetic and Evolutionary Computing. Vol. 238 of Advances in Intelligent Systems and Computing. Springer; 2014. pp. 291–298.
- 103.Li W.-T., Chang H.-S., Lien K.-C., Chang H.-T., Wang Y.-C. F. Exploring visual and motion saliency for automatic video object extraction. IEEE Transactions on Image Processing. 2013;22(7):2600–2610. doi: 10.1109/TIP.2013.2253483.
- 104.Su Y.-C., Grauman K. Detecting engagement in egocentric video. Proceedings of the European Conference on Computer Vision; 2016; Springer; pp. 454–471.
- 105.Boujut H., Buso V., Benois-Pineau J., et al. Visual saliency maps for studies of behavior of patients with neurodegenerative diseases: Observer's versus actor's points of view. In: Innovation in Medicine & Healthcare. KES; 2013.
- 106.Buso V., González-Díaz I., Benois-Pineau J. Goal-oriented top-down probabilistic visual attention model for recognition of manipulated objects in egocentric videos. Signal Processing: Image Communication. 2015;39:418–431. doi: 10.1016/j.image.2015.05.006.
- 107.Fathi A., Ren X., Rehg J. M. Learning to recognize objects in egocentric activities. Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011; June 2011; pp. 3281–3288.
- 108.Ren X., Philipose M. Egocentric recognition of handled objects: Benchmark and analysis. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009; 2009; IEEE; pp. 1–8.
- 109.Li Y., Fathi A., Rehg J. M. Learning to predict gaze in egocentric video. Proceedings of the 2013 14th IEEE International Conference on Computer Vision, ICCV 2013; December 2013; pp. 3216–3223.
- 110.Matsuo K., Yamada K., Ueno S., Naito S. An attention-based activity recognition for egocentric video. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2014; June 2014; pp. 551–556.
- 111.Le Bek P. Learning to recognise actions in egocentric video [MSc thesis]. University of Glasgow, School of Computing Science; 2014.
- 112.Singh S., Arora C., Jawahar C. V. Trajectory aligned features for first person action recognition. Pattern Recognition. 2017;62:45–55. doi: 10.1016/j.patcog.2016.07.031.
- 113.Ma M., Fan H., Kitani K. M. Going deeper into first-person activity recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016; July 2016; pp. 1894–1903.
- 114.Nguyen T.-H., Nebel J.-C., Florez-Revuelta F. Recognition of activities of daily living with egocentric vision: A review. Sensors. 2016;16(1), article no. 72. doi: 10.3390/s16010072.
- 115.Yamada K., Sugano Y., Okabe T., et al. Attention prediction in egocentric video using motion and visual saliency. Advances in Image and Video Technology. 2012:277–288.
- 116.Sun X., Yao H., Ji R., Liu X.-M. Toward statistical modeling of saccadic eye-movement and visual saliency. IEEE Transactions on Image Processing. 2014;23(11):4649–4662. doi: 10.1109/TIP.2014.2337758.
- 117.Foster J. K., Behrmann M., Stuss D. T. Visual attention deficits in Alzheimer's disease: Simple versus conjoined feature search. Neuropsychology. 1999;13(2):223–245. doi: 10.1037/0894-4105.13.2.223.
- 118.Tales A., Muir J., Jones R., Bayer A., Snowden R. J. The effects of saliency and task difficulty on visual search performance in ageing and Alzheimer's disease. Neuropsychologia. 2004;42(3):335–345. doi: 10.1016/j.neuropsychologia.2003.08.002.