Abstract
Strategically shaping patterns of eye movements through training has manifold promising applications, with the potential to improve the speed and efficiency of visual search, improve the ability of humans to extract information from complex displays, and help correct disordered eye movement patterns. However, training how a person moves their eyes when viewing an image or scene is notoriously difficult, with typical approaches relying on explicit instruction and strategy, which have notable limitations. The present study introduces a novel approach to eye movement training using aversive conditioning with near-real-time feedback. Participants viewed indoor scenes (eight scenes presented over 48 trials) with the goal of remembering those scenes for a later memory test. During viewing, saccades meeting specific amplitude and direction criteria probabilistically triggered an aversive electric shock, which was felt within 50 ms after the eliciting eye movement, allowing for a close temporal coupling between an oculomotor behavior and the feedback intended to shape it. Results demonstrate a bias against performing an initial saccade in the direction paired with shock (Experiment 1) or generally of the amplitude paired with shock (Experiment 2), an effect that operates without apparent awareness of the relationship between shocks and saccades, persists into extinction, and generalizes to the viewing of novel images. The present study serves as a proof-of-concept concerning the implementation of near-real-time feedback in eye movement training.
Keywords: aversive conditioning, saccades, training
Eye movements are a ubiquitous part of everyday life for individuals with a fully-functioning visual system. A typical person makes several saccadic eye movements per second when scanning the natural world (Hayhoe & Ballard, 2005; Henderson, 2003), and such eye movements are necessary for representing different aspects of the visual world with the detail and precision afforded by the high-acuity fovea (Curcio et al., 1990; Jacobs, 1979). It is therefore unsurprising that eye movement patterns are associated with the efficiency and accuracy of visual search performance (Najemnik & Geisler, 2005; Neider & Zelinsky, 2006; Zelinsky et al., 1997) and with later memory for visual displays (Hollingworth & Henderson, 2002; Voss et al., 2017; Zelinsky & Loschky, 2005). How a person moves their eyes when scanning the visual world is directly related to their perceptual experience, with far-reaching implications for health, safety, and well-being.
The ability to strategically shape how a person moves their eyes when scanning a scene has manifold potential benefits. There are many situations and professions in which efficient visual search is critical, including radiologists scanning an image for cancer (Aizenman et al., 2017; Brennan et al., 2018; Reingold & Sheridan, 2011), Transportation Security Administration officers scanning baggage for contraband (Halbherr et al., 2013; Mitroff et al., 2018), air traffic control officers monitoring flight patterns for safety (Ahlstrom & Friedman-Berg, 2006), and military personnel or emergency responders searching for a potential threat (Crundall et al., 2003). Efficiently extracting visual information through eye movements also has implications for optimizing the effectiveness of education and instruction practices, which often rely on visual media to communicate complex concepts (Yuan et al., 2019). Eye movements can be abnormal for a wide variety of reasons, from visual neglect following stroke (Kinsbourne, 1987), to amblyopia (Ciuffreda et al., 1991), to atypical eye movement patterns linked to schizophrenia (Holzman et al., 1973). In these contexts, effective methods for curbing abnormal eye movement patterns have potential clinical utility (Levi & Li, 2009; Malhotra et al., 2013; Russell et al., 2008). In these and other ways, training more optimal eye movement patterns could have broad benefits for maximizing human ability, performance, and learning.
Unfortunately, training eye movement patterns in visual search is notoriously difficult. Individuals often have limited awareness of how they move their eyes (Chun & Jiang, 1998; Horowitz & Wolfe, 1998; Reingold & Sheridan, 2011; Vo et al., 2016), limiting the effectiveness of verbal or pictorial instruction concerning eye movements. Providing pictures showing people where and how to look while searching can have some utility (Litchfield et al., 2008, 2010; Vitak et al., 2012), but such an approach to training is typically limited to simple rules for orienting and requires the active engagement of an explicit strategy that is accessible to awareness (Auffermann et al., 2015a, 2015b; Carroll et al., 2013; Chapman et al., 2002; Koenig et al., 1998; Kok et al., 2016; Litchfield et al., 2008, 2010; Nickles et al., 1998, 2003; Pradhan et al., 2009; Vitak et al., 2012). This limits the scope, flexibility, and generalizability of training (Drew & Williams, 2017; Kok et al., 2016; Kramer et al., 2019; Peltier & Becker, 2017).
Conditioning via reinforcement has been robustly applied to object-based (Anderson et al., 2011, 2014; Della Libera & Chelazzi, 2009; Donohue et al., 2016; Hickey & Peelen, 2015; Shomstein & Johnson, 2014; Kim & Anderson, 2019) and space-based orienting (Anderson, 2015; Anderson & Kim, 2018a, 2018b; Chelazzi et al., 2014; Hickey et al., 2014), curbing the frequency of eye movements targeting particular objects and locations. However, such an approach has not been applied to directional eye movements per se outside of the context of an oculomotor decision-making task (Liao & Anderson, 2020), owing in part to the challenge of linking eye movements themselves to outcomes with the temporal precision necessary to distinguish which eye movement triggered a given outcome during naturalistic viewing. The ability to effectively train eye movement patterns through outcome-based learning would have the potential to shape visual scanning behavior in a way that does not depend on explicit strategy (and can even operate without awareness), can be applied to a variety of complex eye movement patterns that are not easily communicated through explicit instruction, and generalizes to untrained contexts and visual search tasks. This last point—generalizability—is particularly important if eye movement training is going to have a meaningful impact in everyday life, beyond the often narrow and artificial context experienced during training in a laboratory or clinic.
In the present study, I introduce a novel approach to the shaping of eye movements through training using aversive conditioning with near-real-time feedback. Participants view images of scenes and try to commit the details of each scene to memory, and are informed that they will periodically receive a mild electric shock during the task. Unbeknownst to the participant, the shocks are related to how they move their eyes. Eye movements exceeding a minimal amplitude in a particular direction (Experiment 1) or of a particular amplitude regardless of direction (Experiment 2) probabilistically result in an electric shock, which is delivered very rapidly after the eye movement is made. Results demonstrate that this feedback manipulation shapes the frequency of leftward compared to rightward eye movements (Experiment 1) and the frequency of low- compared to high-amplitude saccades (Experiment 2), depending on which is paired with shock, in a manner that (a) proceeds without awareness of the relationship between shocks and saccades, (b) persists into extinction, and (c) generalizes to the viewing of novel images.
Methods
Experiment 1
Participants.
Thirty participants were recruited from the Texas A&M University community. Participants were compensated with course credit. All reported normal or corrected-to-normal visual acuity and normal color vision. All procedures were approved by the Texas A&M University Institutional Review Board and conformed with the principles outlined in the Declaration of Helsinki. Given the novelty of the experimental paradigm, there was no clear basis for estimating sample size; the chosen sample size would yield power (1-β) > 0.9 using the effect size for the influence of reward learning on saccades during free viewing of images (d = 1.65 and 0.75 in Anderson & Kim, 2018a, 2018b, respectively), which seemed like the closest analogue.
Apparatus.
A Dell OptiPlex equipped with Matlab software and Psychophysics Toolbox extensions (Brainard, 1997) was used to present the stimuli on a Dell P2717H monitor. The participants viewed the monitor from a distance of approximately 70 cm in a dimly lit room. Eye position was monitored using an EyeLink 1000-plus desktop mount eye tracker (SR Research). Head position was maintained using an adjustable chin and forehead rest (SR Research). Electric shocks were delivered through an isolated linear stimulator under the constant current setting (STMISOLA, BioPac Systems) using paired electrodes (EL500, BioPac Systems).
Delivery of Shock and Calibration of Shock Intensity.
Electric shocks were delivered via two electrodes attached to participants’ left forearm. Prior to completion of the training phase, shock intensity was individually calibrated by gradually increasing it to a level that participants self-reported as “uncomfortable but not painful” (as in, e.g., Gregoire et al., in press; Kim & Anderson, in press a, in press b; Kim et al., in press). The resulting intensity of electrical stimulation was used for the training phase. Each shock, during both calibration and the training phase, consisted of an electrical pulse 2 ms in duration.
Training Phase.
Each trial consisted of a fixation display followed by the presentation of a whole-screen image of an indoor scene. The fixation display, which consisted of a central white plus-sign presented against a black background, remained on screen until eye position was registered within 1.4° of the center of the fixation cross for a continuous period of 500 ms (as in, e.g., Anderson & Kim, 2019a, 2019b). This was to ensure that (a) eye position was properly calibrated and (b) each scene was presented with eye position beginning in the middle of the scene. Each scene (46.4° × 27.3° visual angle) was presented for 12 sec. Participants were instructed to look over the scenes carefully in order to commit the details of each scene to memory for a later memory test and were told that they would have multiple opportunities to view each scene. They were also informed that they would periodically receive a mild electric shock, but they were not informed of any relationship between shock and eye movements (see Figure 1). The training phase consisted of two blocks of 24 trials each. Eight different indoor scenes were used, taken from the CB Database (Sareen, Ehinger, & Wolfe, 2016); each scene was therefore presented three times during each block.
Figure 1.

Example trial with eye position over time indicated by black arrows (directional saccades) and blue circles (areas of fixation). In this example, leftward saccades exceeding 11.6° in amplitude, indicated with a lightning bolt, are probabilistically followed by an electric shock immediately upon detection.
Test Phase.
The test phase was identical to the training phase, with the following exceptions. Participants were informed that shocks would no longer be delivered and were disconnected from the linear isolated stimulator. Twelve different scenes were now used, four new scenes taken from the same database in addition to the same eight scenes used during training. The duration of scene presentation on each trial was shortened from 12 sec to 6 sec, and the number of trials-per-block was increased to 36. Participants again completed two blocks of the task.
Memory Test.
Participants were presented with images that matched those that were viewed during the experiment in addition to left-right mirror-reversed versions of those same scenes. Participants were instructed to press the “m” key if the scene matched one that they had previously viewed and the “z” key if it was a mirror reversal. Text reminding participants of the button-to-response mapping remained at the bottom of the screen throughout the memory test. Each scene and its mirror reversal were presented twice during the memory test, for a total of 48 trials. Trials were untimed. The memory test was not designed to be difficult and was merely included as a check that participants were attentive during the training and test phases.
Awareness Assessment.
At the end of the experiment, participants were asked to write down a short summary (five sentences or less) describing what they thought the purpose of the experiment was. The specific prompt was: “In 5 sentences or less, describe what you think the purpose of this experiment was? What do you think the researchers are testing? Please provide your best guess, even if you really feel like you have no idea.” Participants entered their responses into a word processor. Responses were assessed for any indication that the participant thought that shocks were related to how they looked at the images.
Procedure.
Participants completed shock calibration, two blocks of the training phase, two blocks of the test phase, the memory test, and the awareness assessment in that order. For half of participants, rightward saccades could result in an electric shock, whereas for the other half of participants, leftward saccades could result in shock (see below for additional details concerning the specific criteria for delivery of shock). The entire experiment took approximately 50 minutes to complete.
Measurement of Eye Position and Delivery of Electric Shock.
Eye position was calibrated prior to each block of the task using a 9-point array (3 × 3 grid), and was manually drift-corrected by the experimenter as necessary during the fixation display (as described above, a trial could not begin until 500 ms of continuous fixation at the center of the screen was acquired). Saccades were defined as occurring when velocity exceeded 35°/s and acceleration exceeded 9500°/s2, consistent with prior research on the effects of reward learning on eye movements during scene viewing (Anderson & Kim, 2018a, 2018b). Using these parameters, saccade events from the parsed EyeLink data were read into the Matlab script controlling stimulus presentation using the EyeLink functions for the Psychophysics Toolbox.
Naturally, there is a brief delay between the end of a saccade and its detection, as samples must accumulate before the end of the saccade can be identified. Even with this limitation, timestamps generated during testing of the paradigm indicate that the end of a saccade is detected by the computer controlling stimulus presentation within 35 ms of the moment the saccade was determined to have ended by the EyeLink event parser (which back-dates the event to the actual estimated endpoint once a sufficient number of samples have been measured to indicate it). This testing was accomplished by having the computer controlling stimulus presentation send an event marker to the computer controlling the eye tracker immediately upon registration of a saccade and comparing the time of the event marker to the time at which the saccade was logged as having ended by the EyeLink event parser. Note that this is a conservative estimate, as it includes the additional time required to send an event marker back to the computer controlling the eye tracker, which occurred only during testing. In the experiment, the endpoint of the measured saccade was compared to its starting point (both available in the end-of-saccade event in the parsed data), and if the amplitude of the saccade exceeded 11.6° (25% of the image extent) in one of two directions (left or right, counterbalanced across participants), a shock was delivered with 33% probability immediately upon detection (see Figure 1). Pilot data collected without shock suggested that this criterion would result in approximately 50 shocks per participant over the course of the entire study (mean number of shocks delivered in the actual experiment: 52.7).
Shock was chosen as the unconditioned stimulus used to shape behavior because it is felt almost immediately upon delivery. Given the speed with which electricity travels and the speed with which the Matlab trigger for the linear isolated stimulator can be generated upon the command to do so (measured between 9 and 12 ms over 25 tests using the present apparatus), the electrical pulse should reach the participant within 50 ms of the eliciting saccade, safely before the next saccade can be programmed and executed, especially under naturalistic viewing conditions involving scene stimuli in which fixations are predominantly goal-directed (see, e.g., Hayhoe & Ballard, 2005; Henderson, 2003; Henderson & Choi, 2015). To avoid blinks triggering spurious saccade events and to exclude saccades unrelated to task performance, saccades with start and/or end points falling outside the area within which the images were presented (i.e., beyond the computer monitor) were ignored.
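For concreteness, the Experiment 1 shock contingency described above can be sketched in Python. This is only an illustration: the actual implementation used Matlab with the EyeLink functions for the Psychophysics Toolbox, the function and parameter names here are hypothetical, and the use of horizontal displacement for the amplitude criterion is an assumption.

```python
import random

# Image extent in degrees of visual angle (46.4 x 27.3; see Training Phase).
IMAGE_W, IMAGE_H = 46.4, 27.3
AMP_THRESHOLD = 11.6   # 25% of the horizontal image extent
SHOCK_PROB = 0.33      # probability of shock for a qualifying saccade

def should_shock(start, end, trained_dir, rng=random.random):
    """Decide whether a just-ended saccade triggers a shock (sketch).

    start, end: (x, y) eye positions in degrees, screen center at (0, 0).
    trained_dir: 'left' or 'right' (counterbalanced across participants).
    """
    # Ignore blink-triggered events and saccades starting/ending off-screen.
    for x, y in (start, end):
        if abs(x) > IMAGE_W / 2 or abs(y) > IMAGE_H / 2:
            return False
    dx = end[0] - start[0]  # leftward saccades have negative dx
    qualifies = dx < -AMP_THRESHOLD if trained_dir == 'left' else dx > AMP_THRESHOLD
    return qualifies and rng() < SHOCK_PROB
```

Passing a deterministic `rng` makes the contingency testable; in the experiment, the shock trigger was issued immediately upon detection of the end-of-saccade event.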
Analysis of Eye Movements.
Saccades exceeding 25% of the image extent in amplitude in any direction during the period that an image was on the screen were recorded by the computer controlling stimulus presentation. The direction of each such saccade was coded based on which of the cardinal directions (up, down, left, right) had the larger amplitude component in the x,y position of the eye (i.e., using a 45° cutoff). Of particular interest was the first saccade for each image presentation that exceeded 25% of the image extent in any direction (i.e., that was of a high enough amplitude to elicit a shock if in the trained direction), given that (a) aversive conditioning tends to most strongly affect early saccades (Anderson & Britton, in press; Britton & Anderson, in press; Kim & Anderson, in press b; Nissens et al., 2017) and (b) a high-amplitude saccade in one direction makes it much more likely that the next high-amplitude saccade will be in the opposite direction given the task of viewing the entire image (as each such saccade necessarily brings eye position closer to the edge of the screen in that direction), making the total frequency of saccades in different directions likely to be less informative. From a translational standpoint, early saccades are especially important in situations in which search time is limited and a target must be found quickly, and so the ability to shape the direction of early saccades specifically has clear utility.
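The direction-coding rule (larger cardinal component, i.e., a 45° cutoff) can be illustrated with a minimal Python sketch; the function name is hypothetical, and the tie-break at exactly 45° is an assumption, as the text does not specify one.

```python
def code_direction(start, end):
    """Code a saccade as 'left', 'right', 'up', or 'down' based on which
    cardinal component of its displacement is larger (a 45-degree cutoff).

    start, end: (x, y) positions, with x increasing rightward and
    y increasing downward (typical screen coordinates).
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if abs(dx) >= abs(dy):  # horizontal component dominates (ties go horizontal)
        return 'right' if dx > 0 else 'left'
    return 'down' if dy > 0 else 'up'
```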
Once coded as described above, the frequency (proportion) of the first saccade exceeding 25% of the image extent in amplitude was determined for each of the four cardinal directions for each participant. Then, the difference between leftward and rightward saccades was computed and compared between the two training conditions (leftward vs. rightward saccades associated with shock) using an independent samples t-test. In addition, the landing point of each initial saccade exceeding 25% of the image extent in amplitude in any direction was plotted (with the starting point anchored to the center of the screen) to visualize the results of training on the spatial distribution of eye movements.
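The between-condition comparison amounts to an independent-samples t-test on a per-participant difference score (leftward minus rightward proportion). A pure-Python sketch follows, standing in for the statistics package actually used; the group values are illustrative, not real data.

```python
import math

def ttest_ind(a, b):
    """Independent-samples t statistic, assuming equal variances
    (a minimal stand-in for, e.g., scipy.stats.ttest_ind)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Pooled variance with na + nb - 2 degrees of freedom.
    ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    pooled_var = ss / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled_var * (1 / na + 1 / nb))

# Per-participant difference scores: proportion of initial high-amplitude
# saccades going left minus going right (illustrative values, not real data).
left_shock_group = [-0.40, -0.35, -0.30, -0.45, -0.38]
right_shock_group = [-0.10, -0.18, -0.12, -0.20, -0.15]

# A negative t here indicates the left-shock group shows the larger
# bias away from leftward saccades.
t = ttest_ind(left_shock_group, right_shock_group)
```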
Experiment 2
Participants.
Forty new participants were recruited from the Texas A&M University community. Participants were compensated with course credit. All reported normal or corrected-to-normal visual acuity and normal color vision. All procedures were approved by the Texas A&M University Institutional Review Board and conformed with the principles outlined in the Declaration of Helsinki.
Apparatus and Experiment Task.
The apparatus, delivery and calibration of shock intensity, training phase, test phase, memory test, awareness assessment, and procedure were identical to Experiment 1 with the exception of how shock was determined from eye movements as described below.
Measurement and Analysis of Eye Position and Delivery of Electric Shock.
The same procedure was used for reading in eye position data and on-line coding of saccades, except that in this experiment, saccades were coded for amplitude regardless of direction. For half of participants, high-amplitude saccades could trigger shock, whereas for the other half, shocks only followed low-amplitude saccades. If the amplitude of a saccade exceeded 15.5° in the x-dimension or 9.1° in the y-dimension (33% of the image extent) in any direction, it was coded as a high-amplitude saccade. If the amplitude of a saccade fell below 5°, it was coded as a low-amplitude saccade (saccades falling between these amplitude thresholds were not coded and were ignored). Based on pilot data, the probability of shock following a high-amplitude saccade for participants in the high-amplitude shock condition was set to 25% (mean number of shocks delivered in the actual experiment: 37.4). Because low-amplitude saccades were expected to be much more frequent than high-amplitude saccades given the nature of the scene viewing task, participants for whom low-amplitude saccades were associated with shock were yoked to participants in the high-amplitude condition. Specifically, each participant in the low-amplitude condition received the same number of shocks per trial as a yoked participant in the high-amplitude condition, delivered following low-amplitude saccades as defined above. The difference between the frequency of low- and high-amplitude saccades was compared across training conditions.
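The amplitude-coding rule for Experiment 2 can likewise be sketched in Python. The function name is hypothetical, and applying the 5° low-amplitude cutoff to the overall (Euclidean) amplitude is an assumption, as the text specifies dimensions only for the high-amplitude criterion.

```python
import math

def code_amplitude(start, end):
    """Code a saccade as 'high', 'low', or None (uncoded), per Experiment 2.

    High: exceeds 15.5 deg in the x-dimension or 9.1 deg in the y-dimension
    (33% of the image extent), in any direction. Low: overall amplitude
    under 5 deg. Saccades between the thresholds are ignored.
    """
    dx = abs(end[0] - start[0])
    dy = abs(end[1] - start[1])
    if dx > 15.5 or dy > 9.1:
        return 'high'
    if math.hypot(dx, dy) < 5:
        return 'low'
    return None
```

Yoking in the low-amplitude condition (matching the number of shocks per trial to a high-amplitude partner) would then be handled outside this coding step.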
Results
Experiment 1
Awareness Assessment and Memory Test.
Only one participant made any statement relating shock to how they moved their eyes in the task. Most participants thought the experiment was about the effects of electric shock on memory for the scenes and/or whether they better remembered the parts of the scenes they looked at. Subsequent analyses focus on the 29 participants who did not give any indication of awareness concerning the relationship between shock and eye movements. Mean accuracy in the memory test was 97.5%, suggesting that participants viewed the images attentively.
Eye Movements.
In the training phase, the balance between leftward and rightward saccades differed by training condition (see Figure 2), with the frequency of saccades in the trained direction being reduced, t(27) = 3.29, p = 0.003, d = 1.23 (see Figure 3). The same pattern was observed in the test phase, both computed over all trials, t(27) = 3.49, p = 0.002, d = 1.29, and when restricting analyses to eye movements made during the viewing of novel images, t(27) = 4.17, p < 0.001, d = 1.55 (see Table 1).
Figure 2.

Heat map depicting the landing point of the first saccade exceeding 11.6° in amplitude (the amplitude required to elicit shock) in any direction in the training phase of Experiment 1, rotated 180° for the participants in the right-saccade training condition such that the shock-associated direction is always to the left. The fixation cross represents the starting point of the saccade (which was not necessarily the center of the screen).
Figure 3.

Frequency (proportion) of the first saccade going in each direction as a function of training condition in Experiment 1. Error bars represent the standard error of the mean (SEM).
Table 1.
Frequency (proportion) of leftward and rightward saccades in the test phase of Experiment 1 as a function of the direction previously associated with shock. Standard deviations are in parentheses.
| | Leftward shock condition: left | Leftward shock condition: right | Rightward shock condition: left | Rightward shock condition: right |
|---|---|---|---|---|
| All scenes | 0.196 (0.078) | 0.562 (0.108) | 0.338 (0.077) | 0.483 (0.078) |
| Novel scenes only | 0.260 (0.053) | 0.494 (0.070) | 0.351 (0.066) | 0.434 (0.082) |
Experiment 2
Awareness Assessment and Memory Test.
No participant made any statement relating shock to how they moved their eyes in the task. Once again, most participants thought the experiment was about the effects of electric shock on memory for the scenes and/or whether they better remembered the parts of the scenes they looked at. Mean accuracy in the memory test was 96.9%, suggesting that participants viewed the images attentively.
Eye Movements.
In the training phase, the balance between low- and high-amplitude saccades differed by training condition, with the frequency of saccades of the trained amplitude being reduced, t(38) = 2.70, p = 0.010, d = 0.86 (see Figure 4). A similar pattern was observed in the test phase, being marginally significant when computed over all trials, t(38) = 1.89, p = 0.066, d = 0.60, and significant when restricting analyses to eye movements made during the viewing of novel images, t(38) = 2.08, p = 0.044, d = 0.66 (see Table 2).
Figure 4.

Frequency (mean total) of low- and high-amplitude saccades as a function of the amplitude associated with shock during training in Experiment 2. Error bars represent the SEM.
Table 2.
Frequency (mean total) of low- and high-amplitude saccades in the test phase of Experiment 2 as a function of the amplitude previously associated with shock. Standard deviations are in parentheses.
| | Low-amplitude shock condition: low | Low-amplitude shock condition: high | High-amplitude shock condition: low | High-amplitude shock condition: high |
|---|---|---|---|---|
| All scenes | 941.2 (142.3) | 129 (41) | 1039.4 (181.3) | 125.2 (34.5) |
| Novel scenes only | 320 (46.6) | 43.5 (12.7) | 353.6 (61.6) | 38.4 (12.2) |
Discussion
By utilizing electric shock, an unconditioned, naturally aversive stimulus that can be triggered and delivered to a participant very rapidly, it is possible to provide near-real-time feedback with respect to eye movements. With software that quickly identifies and outputs information about saccades (the commercial software running on the EyeLink computer) and software that quickly reads in this information and triggers shock on the basis of the direction and amplitude of saccades (implemented in Matlab using the EyeLink functions for the Psychophysics Toolbox), aversive feedback can be applied as a consequence of a particular saccade before a subsequent saccade can be generated. This is important, as it allows for a tight temporal coupling between the aversive outcome and the specific behavior responsible for eliciting it, which is known to facilitate efficient learning (e.g., Cohen, 1968; Lerman & Vorndran, 2002; Jones, 1962).
Using this novel oculomotor conditioning procedure, it is possible to shape eye movements in a manner that does not seem to depend on awareness of shock contingencies or explicit strategy, persists into extinction, and generalizes to the viewing of stimuli that were never paired with the aversive outcome. In these respects, aversive conditioning with near-real-time feedback offers advantages over alternative approaches to eye movement training that rely more on explicit instruction and strategy (Auffermann et al., 2015a, 2015b; Carroll et al., 2013; Chapman et al., 2002; Koenig et al., 1998; Kok et al., 2016; Litchfield et al., 2008, 2010; Nickles et al., 1998, 2003; Pradhan et al., 2009; Vitak et al., 2012). The magnitude of the bias was modest in the present study, amounting to a change in the frequency of leftward and rightward saccades of less than 10% in each direction (Experiment 1) and a change in the frequency of high- and low-amplitude saccades of less than 15% each (Experiment 2).
A number of prior studies have paired electric shock with particular objects, which produces a bias to orient toward shock-associated objects, thought to reflect an automatic attentional bias that facilitates threat detection (e.g., Anderson & Britton, in press; Kim & Anderson, in press b; Nissens et al., 2017; Schmidt et al., 2015). Such object-specific biases can affect early saccades (Nissens et al., 2017). The oculomotor biases observed in the present study cannot be explained by object-specific orienting given that they transfer to the viewing of novel scenes. With regard to this distinction, it is also noteworthy that shock reduced the frequency of associated saccades, which contrasts with the aforementioned studies in which training increases the frequency of eye movements toward a shock-associated object. This is consistent with the distinction between associative learning and instrumental conditioning that has been suggested in the reward and attention literature, with orienting toward reward-predictive objects being driven by the former (Bucker & Theeuwes, 2017; Kim & Anderson, 2019; Le Pelley et al., 2015; Sali et al., 2014) and orienting toward reward-predictive regions of space (Anderson & Kim, 2018a, 2018b) or in reward-predictive directions (Liao et al., 2020) being driven by the latter; in the present study, aversive conditioning appears to shape orienting behavior as a punishable action, mirroring how reward can potentiate orienting behavior that is spatial in nature.
Given how the saccade–shock contingencies were defined in the present study, being independent of the starting position of the eyes in Experiment 1 and being entirely independent of direction in Experiment 2, coupled with the fact that the trained bias was evident during the viewing of novel images, it is unlikely that the observed biases can be explained by a tendency to avoid saccading to specific spatial locations. Instead, it would seem that oculomotor plans per se were modulated by punishment learning. Similarly, given these same considerations, it would seem that training exerted its influence on saccades targeting a location defined in retinotopic space rather than a location defined spatiotopically or in allocentric space, consistent with how habit learning driven by target location probability influences spatial orienting (Jiang & Swallow, 2013). As shifts of covert attention typically precede eye movements (e.g., Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995), the reported approach to eye movement training might also result in a shift in the spatial distribution of covert attention when the eyes are fixated, although understanding any influence on covert attention would require dedicated experimentation.
The approach to eye movement training reported in the present study offers advantages unique to conditioning techniques. Most prominently, it shapes oculomotor behavior in a manner that can persist without the need to consciously engage a particular top-down attentional strategy. This has important translational implications. Conscious strategies can themselves be cognitively effortful and attentionally demanding (e.g., Botvinick & Braver, 2015; Irons et al., 2017; Walton et al., 2003; Weissman et al., 2006), potentially taking focus off of the very task that the strategy is intended to facilitate, and such strategies are also difficult to teach effectively given that individuals have limited awareness of how they move their eyes and therefore of how effectively they are engaging the strategies (Chun & Jiang, 1998; Horowitz & Wolfe, 1998; Reingold & Sheridan, 2011; Vo et al., 2016). The fact that the trained bias can generalize to the viewing of novel images indicates some promise in generating a bias that will extend beyond the narrow confines within which it is learned.
The present study serves as a proof-of-concept concerning a novel approach to the training of eye movements, and many important questions remain to be answered. How long can the trained bias persist into extinction, and how far can it generalize? Are there modifications to the training technique that could maximize persistence and generalizability? Can more complex aspects of oculomotor behavior be trained, such as sequences of eye movements? It is unlikely that training something as broad as a left-right directional bias would have much translational potential; training specific sequences of eye movements would be necessary to maximize the translational potential of this technique with respect to the real-world issues outlined in the Introduction. For example, when scanning a radiological image for cancer or an image of a bag for contraband, punishing saccades returning to a previously fixated location, at least during the early stages of search, might facilitate more complete visual search and result in fewer search misses. Similarly, initially encouraging high-amplitude saccades during search followed by shorter-amplitude saccades later in search, by punishing the converse, could facilitate an initial search of global image characteristics followed by more detailed search of areas of identified interest—an approach to search that could have benefits over a search pattern that more frequently switches back and forth between the two levels. Taking a more data-driven approach, it might be possible to take the eye movement patterns of individuals who exhibit exemplary search performance and train the eye movements of others to more closely conform to this pattern by punishing deviations from the pattern of a certain magnitude.
Also important to establishing the translational potential of this training approach would be demonstrating a situation in which behavioral performance in a visual search task improves as a result of training. That is, a further-refined approach would need to be tested in the specific context and/or for the specific population it is intended to benefit, with measurable gains in the speed or accuracy of search. The delivery of electrical stimulation for the purposes of conditioning also has somewhat limited translational appeal, as it requires a device that cannot be easily implemented in home or school settings by individuals without appropriate training. Future research could explore ways of implementing the near-real-time concept using reward feedback, although logistical challenges remain concerning how to communicate the reward in a manner that can be sufficiently processed before a subsequent saccade is generated. Electrical stimulation was chosen for the present study because it can be delivered and perceived so rapidly, and it is unclear how much slower the processing of feedback could be before the conditioning procedure loses its effectiveness.
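The near-real-time gating at the heart of this approach, in which a saccade meeting amplitude and direction criteria probabilistically triggers stimulation, can be sketched in a few lines. The following is an illustrative simplification: the function names, the amplitude threshold, the punished direction range (here, broadly leftward), and the shock probability are hypothetical stand-ins rather than the study's actual parameters:

```python
import math
import random

def saccade_metrics(start, end):
    """Compute saccade amplitude (degrees of visual angle) and
    direction (0-360 deg, counterclockwise from rightward) from
    start and end gaze positions in degrees."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    amplitude = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx)) % 360.0
    return amplitude, direction

def should_trigger(start, end, min_amp=4.0, punished_dirs=(135.0, 225.0),
                   p_shock=0.5, rng=random.random):
    """Return True if the saccade meets the (hypothetical) punished
    amplitude/direction criteria and the probabilistic gate passes."""
    amp, direction = saccade_metrics(start, end)
    in_punished_range = punished_dirs[0] <= direction <= punished_dirs[1]
    return amp >= min_amp and in_punished_range and rng() < p_shock
```

The critical engineering constraint is latency: for the feedback to arrive within roughly 50 ms of the eliciting saccade, this check must run on each gaze sample as it streams from the eye tracker, not after the fact.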
Conclusion
Using a technique involving electrical stimulation, the present study offers a demonstrably promising solution to the problem of how to effectively condition eye movements during naturalistic visual search. By integrating the on-line analysis of eye position with the generation of an electrical pulse, it is possible to achieve feedback rapid enough to support the learning of associations between eye movements and aversive outcomes. Much remains to be studied concerning the ultimate reach of this technique in influencing eye movements and resultant behavioral performance in visual search tasks, as discussed above. The present study offers an important proof-of-concept with respect to the use of near-real-time feedback to train oculomotor behavior, which holds promise in overcoming multiple limitations of other approaches to eye movement training.
Open Practices Statement
The reported experiments were not preregistered. Raw data are available from the author upon reasonable request.
Acknowledgements
This study was supported by grants from the Brain & Behavior Research Foundation [NARSAD 26008] and NIH [R01-DA046410] to the author.
Footnotes
Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record published in the journal is kept up to date, and so may differ from this version.
Conflict of Interest Statement
The author declares no conflict of interest.
References
- Ahlstrom U, & Friedman-Berg FJ (2006). Eye movement activity as a correlate of cognitive workload. International Journal of Industrial Ergonomics, 36, 623–636. [Google Scholar]
- Aizenman A, Drew T, Ehinger KA, Georgian-Smith D, & Wolfe JM (2017). Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: an eye tracking study. Journal of Medical Imaging, 4(4), 045501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson BA (2015). Value-driven attentional capture is modulated by spatial context. Visual Cognition, 23, 67–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson BA, & Britton MK (in press). On the automaticity of attentional orienting to threatening stimuli. Emotion. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson BA, & Kim H (2018a). Mechanisms of value-learning in the guidance of spatial attention. Cognition, 178, 26–36. [DOI] [PubMed] [Google Scholar]
- Anderson BA, & Kim H (2018b). On the representational nature of value-driven spatial attentional biases. Journal of Neurophysiology, 120, 2654–2658. [DOI] [PubMed] [Google Scholar]
- Anderson BA, & Kim H (2019a). On the relationship between value-driven and stimulus-driven attentional capture. Attention, Perception, and Psychophysics, 81, 607–613. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson BA, & Kim H (2019b). Test-retest reliability of value-driven attentional capture. Behavior Research Methods, 51, 720–726. [DOI] [PubMed] [Google Scholar]
- Anderson BA, Laurent PA, & Yantis S (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, USA, 108, 10367–10371. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson BA, Laurent PA, & Yantis S (2014). Value-driven attentional priority signals in human basal ganglia and visual cortex. Brain Research, 1587, 88–96. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Auffermann WF, Henry TS, Little BP, Tigges S, & Tridandapani S (2015a). Simulation for teaching and assessment of nodule perception on chest radiography in nonradiology health care trainees. Journal of the American College of Radiology, 12, 1215–1222. [DOI] [PubMed] [Google Scholar]
- Auffermann WF, Little BP, & Tridandapani S (2015b). Teaching search patterns to medical trainees in an educational laboratory to improve perception of pulmonary nodules. Journal of Medical Imaging, 3, 011006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Botvinick M, & Braver T (2015). Motivation and cognitive control: From behavior to neural mechanism. Annual Review of Psychology, 66, 83–113. [DOI] [PubMed] [Google Scholar]
- Brennan PC, Gandomkar Z, Ekpo EU, Tapia K, Trieu PD, Lewis SJ, Wolfe JM & Evans KK (2018). Radiologists can detect the ‘gist’ of breast cancer before any overt signs of cancer appear. Scientific Reports, 8(1), 8717. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Britton MK, & Anderson BA (in press). Attentional avoidance of threatening stimuli. Psychological Research. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bucker B, & Theeuwes J (2017). Pavlovian reward learning underlies value driven attentional capture. Attention, Perception, and Psychophysics, 79, 415–428. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carroll M, Kokini C, & Moss J (2013). Training effectiveness of eye tracking-based feedback at improving visual search skills. International Journal of Learning Technology, 8, 147–168. [Google Scholar]
- Chapman P, Underwood G, & Roberts K (2002). Visual search patterns in trained and untrained novice drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 5, 157–167. [Google Scholar]
- Chelazzi L, Estocinova J, Calletti R, Lo Gerfo E, Sani I, Della Libera C, & Santandrea E (2014). Altering spatial priority maps via reward-based learning. Journal of Neuroscience, 34, 8594–8604. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chun MM, & Jiang Y (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36, 28–71. [DOI] [PubMed] [Google Scholar]
- Cohen PS (1968). Punishment: The interactive effects of delay and intensity of shock. Journal of the Experimental Analysis of Behavior, 11, 789–799. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Crundall D, Chapman P, Phelps N & Underwood G (2003). Eye movements and hazard perception in police pursuit and emergency response driving. Journal of Experimental Psychology: Applied, 9, 163–174. [DOI] [PubMed] [Google Scholar]
- Curcio CA, Sloan KR, Kalina RE, & Hendrickson AE (1990). Human photoreceptor topography. Journal of Comparative Neurology, 292, 497–523. [DOI] [PubMed] [Google Scholar]
- Della Libera C, & Chelazzi L (2009). Learning to attend and to ignore is a matter of gains and losses. Psychological Science, 20, 778–784. [DOI] [PubMed] [Google Scholar]
- Deubel H & Schneider WX (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36, 1827–1837. [DOI] [PubMed] [Google Scholar]
- Donohue SE, Hopf JM, Bartsch MV, Schoenfeld MA, Heinze HJ, & Woldorff MG (2016). The rapid capture of attention by rewarded objects. Journal of Cognitive Neuroscience, 28, 529–541. [DOI] [PubMed] [Google Scholar]
- Drew T, & Williams LH (2017). Simple eye-movement feedback during visual search is not helpful. Cognitive Research: Principles and Implications, 2, 44. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gregoire L, Britton MK, & Anderson BA (in press). Motivated suppression of value- and threat-modulated attentional capture. Emotion. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Halbherr T, Schwaninger A, Budgell GR, & Wales A (2013). Airport security screener competency: A cross-sectional and longitudinal analysis. International Journal of Aviation Psychology, 23, 113–129. [Google Scholar]
- Hayhoe M, & Ballard D (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9, 188–194. [DOI] [PubMed] [Google Scholar]
- Henderson JM (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7, 498–504. [DOI] [PubMed] [Google Scholar]
- Henderson JM & Choi W (2015). Neural correlates of fixation duration during real-world scene viewing: Evidence from fixation-related (FIRE) fMRI. Journal of Cognitive Neuroscience, 27, 1137–1145. [DOI] [PubMed] [Google Scholar]
- Hickey C, Chelazzi L, & Theeuwes J (2014). Reward priming of location in visual search. PLOS ONE, 9(7), e103372. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hickey C, & Peelen MV (2015). Neural mechanisms of incentive salience in naturalistic human vision. Neuron, 85, 512–518. [DOI] [PubMed] [Google Scholar]
- Hoffman JE, & Subramaniam B (1995). The role of visual attention in saccadic eye movements. Perception and Psychophysics, 57, 787–795. [DOI] [PubMed] [Google Scholar]
- Hollingworth A, & Henderson JM (2002). Accurate visual memory for previously attended objects in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 28, 113–136. [Google Scholar]
- Horowitz TS, & Wolfe JM (1998). Visual search has no memory. Nature, 394, 575–577. [DOI] [PubMed] [Google Scholar]
- Irons JL, Jeon M, & Leber AB (2017). Pre-stimulus pupil dilation and the preparatory control of attention. PLoS ONE, 12 (12), e0188787. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jacobs RJ (1979). Visual resolution and contour interaction in the fovea and periphery. Vision Research, 19, 1187–1195. [DOI] [PubMed] [Google Scholar]
- Jiang YV, & Swallow KM (2013). Spatial reference frame of incidentally learned attention. Cognition, 126, 378–390. [DOI] [PubMed] [Google Scholar]
- Jones JE (1962). Contiguity and reinforcement in relation to CS-UCS intervals in classical aversive conditioning. Psychological Review, 69, 176–185. [DOI] [PubMed] [Google Scholar]
- Kim AJ, & Anderson BA (in press a). Threat reduces value-driven but not salience-driven attentional capture. Emotion. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim AJ, Lee DS, & Anderson BA (in press). The influence of threat on the efficiency of goal-directed attentional control. Psychological Research. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim H, & Anderson BA (2019). Dissociable components of experience-driven attention. Current Biology, 29, 841–845. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim H, & Anderson BA (in press b). How does the attention system learn from aversive outcomes? Emotion. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Koenig SC, Liebhold GMY, & Gramopadhye AK (1998). Training for systematic search using a job aid. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 42, 1457–1461. [Google Scholar]
- Kok EM, Jarodzka H, de Bruin ABH, BinAmir HAN, Robben SGF, & van Merrienboer JJG (2016). Systematic viewing in radiology: seeing more, missing less? Advances in Health Science Education: Theory and Practice, 21, 189–205. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kramer MR, Porfido CL, & Mitroff SR (2019). Evaluation of strategies to train visual search performance in professional populations. Current Opinion in Psychology, 29, 113–118. [DOI] [PubMed] [Google Scholar]
- Le Pelley ME, Pearson D, Griffiths O, & Beesley T (2015). When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. Journal of Experimental Psychology: General, 144, 158–171. [DOI] [PubMed] [Google Scholar]
- Lerman DC, & Vorndran CM (2002). On the status of knowledge for using punishment: Implications for treating behavior disorders. Journal of Applied Behavior Analysis, 35, 431–464. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liao M-R, & Anderson BA (2020). Reward learning biases the direction of saccades. Cognition, 196,104145. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Litchfield D, Ball LJ, Donovan T, Manning DJ, & Crawford T (2008). Learning from others: Effects of viewing another person’s eye movements while searching for chest nodules. In Sahiner B & Manning DJ (Eds.), Medical Imaging 2008: Image perception, observer performance, and technology assessment, Vol. 6917 (pp. 691715–691715-9). Bellingham: SPIE. [Google Scholar]
- Litchfield D, Ball LJ, Donovan T, Manning DJ, & Crawford T (2010). Viewing another person’s eye movements improves identification of pulmonary nodules in chest x-ray inspection. Journal of Experimental Psychology: Applied, 16, 251–262. [DOI] [PubMed] [Google Scholar]
- Mitroff SR, Ericson JM, & Sharpe B (2018). Predicting airport screening officers’ visual search competency with a rapid assessment. Human Factors, 60, 201–211. [DOI] [PubMed] [Google Scholar]
- Najemnik J, & Geisler WS (2005). Optimal eye movement strategies in visual search. Nature, 434, 387–391. [DOI] [PubMed] [Google Scholar]
- Neider MB, & Zelinsky GJ (2006). Scene context guides eye movements during visual search. Vision Research, 46, 614–621. [DOI] [PubMed] [Google Scholar]
- Nickles GM, Melloy BJ, & Gramopadhye AK (2003). A comparison of three levels of training designed to promote systematic search behavior in visual inspection. International Journal of Industrial Ergonomics, 32, 331–339. [Google Scholar]
- Nickles GM, Sacrez V, & Gramopadhye AK (1998). Can we train humans to be systematic inspectors? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 42, 1165–1169. [Google Scholar]
- Nissens T, Failing M, & Theeuwes J (2017). People look at the object they fear: oculomotor capture by stimuli that signal threat. Cognition and Emotion, 31, 1707–1714. [DOI] [PubMed] [Google Scholar]
- Peltier C, & Becker MW (2017). Eye movement feedback fails to improve visual search performance. Cognitive Research: Principles and Implications, 2, 47. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pradhan AK, Pollatsek A, Knodler M, & Fisher DL (2009). Can younger drivers be trained to scan for information that will reduce their risk in roadway traffic scenarios that are hard to identify as hazardous? Ergonomics, 52, 657–673. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reingold EM, & Sheridan H (2011). Eye movements and visual expertise in chess and medicine. In Liversedge SP, Gilchrist ID, & Everling S, (Eds.) Oxford Handbook on Eye Movements (pp. 528–550). Oxford: Oxford University Press. [Google Scholar]
- Sali AW, Anderson BA, & Yantis S (2014). The role of reward prediction in the control of attention. Journal of Experimental Psychology: Human Perception and Performance, 40, 1654–1664. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sareen P, Ehinger KA, & Wolfe JM (2016). CB Database: A change blindness database for objects in natural indoor scenes. Behavior Research Methods, 48, 1343–1348. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schmidt LJ, Belopolsky AV, & Theeuwes J (2015). Attentional capture by signals of threat. Cognition and Emotion, 29, 687–694. [DOI] [PubMed] [Google Scholar]
- Shomstein S, & Johnson J (2013). Shaping attention with reward: Effects of reward on space-and object-based selection. Psychological Science, 24, 2369–2378. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vitak SA, Ingram JE, Duchowski AT, Ellis S, & Gramopadhye AK (2012). Gaze-augmented think-aloud as an aid to learning. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 2991–3000. [Google Scholar]
- Vo ML-H, Aizenman AM, & Wolfe JM (2016). You think you know where you looked? You better look again. Journal of Experimental Psychology: Human Perception and Performance, 42, 1477–1481. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Voss JL, Bridge DJ, Cohen NJ, & Walker JA (2017). A closer look at the hippocampus and memory. Trends in Cognitive Sciences, 21, 577–588. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Walton ME, Bannerman DM, Alterescu K, & Rushworth MF (2003). Functional specialization within medial frontal cortex of the anterior cingulate for evaluating effort-related decisions. Journal of Neuroscience, 23, 6475–6479. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Weissman DH, Roberts KC, Visscher KM, & Woldorff MG (2006). The neural bases of momentary lapses in attention. Nature Neuroscience, 9, 971–978. [DOI] [PubMed] [Google Scholar]
- Yuan L, Haroz S, & Franconeri S (2019). Perceptual proxies for extracting averages in data visualizations. Psychonomic Bulletin and Review, 26, 669–676. [DOI] [PubMed] [Google Scholar]
- Zelinsky GJ, & Loschky LC (2005). Eye movements serialize memory for objects in scenes. Perception and Psychophysics, 67, 676–690. [DOI] [PubMed] [Google Scholar]
- Zelinsky GJ, Rao RPN, Hayhoe MM, & Ballard DH (1997). Eye movements reveal the spatiotemporal dynamics of visual search. Psychological Science, 8, 448–453. [Google Scholar]
