Author manuscript; available in PMC: 2016 Apr 1.
Published in final edited form as: Trends Neurosci. 2015 Feb 16;38(4):195–206. doi: 10.1016/j.tins.2015.01.005

The Unsteady Eye: an Information Processing Stage, not a Bug

Michele Rucci 1,2, Jonathan D Victor 3
PMCID: PMC4385455  NIHMSID: NIHMS666197  PMID: 25698649

Abstract

How is space represented in the visual system? At first glance, the answer to this fundamental question appears straightforward: spatial information is directly encoded in the locations of neurons within maps. This concept has long dominated visual neuroscience, leading to mainstream theories of how neurons encode information. However, an accumulation of evidence indicates that this purely spatial view is incomplete, and that even for static images, the representation is fundamentally spatiotemporal. The evidence for this new understanding centers on recent experimental findings concerning the functional role of fixational eye movements, the tiny movements humans and other species continually perform, even when attending to a single point. Here, we review some of these findings and discuss their functional implications.

Keywords: Vision, eye movements, retina, neural encoding, ocular drift, microsaccades

1 The unsteady eye

Sensory perception and motor behavior are closely coupled. Most species are not passively exposed to the incoming flow of sensory data, but actively seek useful information by coordinating sensory processing with motor activity. A clear example of this interaction is given by the role of eye movements in visual perception. In humans, as in many other species, acuity is not uniform throughout the visual field, but rapidly declines with increasing distance from the foveola (see Glossary), the small rod-free region of the retina covering a visual area the size of the full moon in the sky. As a consequence, humans acquire visual information during brief periods of “fixation” separated by saccades, rapid gaze shifts that enable inspection of the objects of interest with the high-acuity region (Fig. 1A).

Figure 1.

Vision and eye movements. (A) As an observer looks at a static scene, rapid eye movements (saccades) separate the periods of “fixation” in which visual information is acquired. (B) High-resolution recording of oculomotor activity reveals that the eye also moves during these periods. The line of sight continually wanders with a seemingly random trajectory (ocular drift), occasionally interrupted by saccades with small amplitudes (microsaccades; arrow). (C–F) Characteristics of ocular drift when the head is immobilized, a standard practice for measuring small eye movements. Subjects freely observed natural scenes [27]. (C) Power spectrum. The dashed line represents the eye-tracker noise level. (D) Distribution of mean instantaneous speed. (E) Length of the inter-saccadic trajectory. (F) Probability (in natural logarithmic scale) that the retinal image shifted by a given distance (horizontal scale) after a given time (vertical scale). (G) As in F, but during normal head-free viewing [28]. (H) Same data as in G, but with head movements artificially eliminated in the reconstruction process. See [28] for more accurate reconstructions of retinal image motion that also consider the optics of the eye.

This coupling between visual sensation and eye movements goes beyond the fixational sequence enabled by saccades. Close examination of gaze position in the intervals between saccades reveals that the term “fixation” is misleading: the eyes are never at rest. Tiny eye movements—known as fixational eye movements—occur incessantly during these periods (see Fig. 1B). These movements are often labeled as microscopic, but they shift the projection of the stimulus over many receptors on the retina. It is remarkable that we are normally not aware of them, as they yield motion signals with speeds that would be immediately visible had they originated from objects in the scene rather than our own eyes [1].

Fixational eye movements have been observed in a wide variety of species [2] including the owl [3], a predator commonly believed not to move its eyes. Yet they are often ignored by theoreticians and regarded as a nuisance by experimentalists. When they are taken into account, they are frequently regarded as a problem that the visual system has to overcome in order to establish fine spatial representations [4] and avoid perceptual blurring of the image [5].

All eye movements transform a static scene into a spatiotemporal input signal to the retina. This article builds upon the proposal that this transformation constitutes a fundamental step for visual perception: the visual system takes advantage of the resulting temporal modulations to encode spatial information in the joint space-time domain. Within this context, here we focus on the role of fixational eye movements, but similar principles and considerations extend to other types of eye movements, saccades in particular. Furthermore, we restrict our focus to the consequences of fixational eye movements for visual encoding and only marginally touch upon decoding mechanisms. For larger movements, a vast body of evidence indicates that oculomotor signals are taken into account in the interpretation of the retinal input [6, 7], and similar strategies may also hold at the scale of fixational eye movements (see Box 2).

Box 2. Outstanding questions.

  • How do modulations from different types of eye movements contribute to encoding space?

    Under natural viewing conditions, fixational periods of smooth retinal motion alternate with saccadic transients. What are the visual consequences of this natural alternation of input modulations? And how are spatial representations updated during the course of post-saccadic fixation? One possibility is that this recurring sequence of transients shapes the dynamics of spatial vision within each fixation. To investigate this hypothesis, time-frequency analyses of the spatiotemporal input to the retina need to be coupled with measurements of contrast sensitivity at various times during post-saccadic fixation. Gaze-contingent display techniques will be needed to precisely control retinal stimulation.

  • How is spatial information from oculomotor transients decoded?

    During ocular drift, both synchronous and non-synchronous modulations in the responses of retinal ganglion cells are likely to contain spatial information. Does the brain only use synchronous modulations, or does it use non-synchronous temporal modulations as well? Furthermore, for larger eye movements it is well established that spatial representations are updated on the basis of neural copies of motor commands. Do similar strategies extend to fixational eye movements? Recent results suggest the presence of extraretinal signals both for microsaccades [23, 40, 74] and ocular drift [75], but it remains unknown how these signals participate in the decoding of spatial information during normal fixation. Decoupling of retinal and extra-retinal signals is necessary to examine this question.

  • How are fixational eye movements controlled?

    A surprising level of control has been found both in microsaccades [7, 41] and ocular drift [28]. How is this control exerted? Neurophysiological investigations have started to unveil the neural mechanisms responsible for microsaccades [42], but little is presently known about the control strategies and mechanisms of the inter-saccadic fixational motion. Can ocular drift be adjusted to tune the range of spatial frequency amplification in the retinal input in a task-dependent manner? And how is the fixational head/eye compensation accomplished?

  • To what extent do disturbances in fixational eye movements contribute to dysfunction in disease?

    The encoding of space in time implies that some visual impairments may have unrecognized motor origins, and, conversely, that motor disturbances may have unsuspected sensory consequences. For example, larger than normal ocular drift yields a reduced range of whitening in the retinal input, which may lead to a reduction in visual acuity. Consistent with this idea, models have suggested that normal fixational eye movements are necessary for refining cortical selectivity during visual development [76, 77]. Analysis of fixational eye movements in clinical populations is needed to investigate this hypothesis.

1.1 What is the function of fixational eye movements?

As with most scientific questions, the question of why the eyes move incessantly admits several kinds of answers, some more informative than others. Consider, for example, an unrelated fundamental question: “what is the function of breathing?”. A possible answer could be that breathing prevents death by suffocation, but few scientists would find this superficial level of explanation satisfactory. Indeed, if we had stopped at this level, we would never have learned that breathing oxygenates the blood, nor uncovered the associated chain of events. Yet, as explained below, a similarly plain answer seems to have satisfied vision researchers for over half a century.

In the 1950s, it became possible for the first time to counteract the physiological motion of the eye so as to immobilize the image on the retina, a laboratory procedure known as retinal stabilization. The outcome was striking: the image would progressively lose contrast and eventually fade away [8, 9]. Similar results were observed in later studies in which the eye muscles were paralyzed and subjects even underwent total paralysis [10]. These findings have provided an explanation for the existence of fixational eye movements that has survived to this day: fixational eye movements serve the purpose of “refreshing” neuronal responses so as to prevent the fading of a stationary scene experienced when all motion is eliminated on the retina. Although popular among scientists and often cited in textbooks of vision, this answer is dangerously simplistic. It only provides an empirical description of the consequences of eliminating retinal image motion. It explains neither why vision stops functioning nor the mechanisms by which fixational eye movements make visual perception possible.

Although less well-known to most neuroscientists, theories at deeper levels of explanation have circulated for almost a century. Since the luminance modulations resulting from fixational eye movements contain spatial information at individual retinal locations, it has long been postulated that fixational eye movements may be a critical component of a dynamic strategy for encoding space, an approach that converts spatial information into temporal structure [11, 12, 13, 14, 15, 16, 17, 18]. While these theories differ in the specific mechanisms they propose, they share the common hypothesis that fixational instability plays a central role in structuring—rather than just refreshing—neural activity for establishing spatial representations.

Early theories lost momentum after experiments that eliminated retinal image motion found little or no change in visual acuity [19, 20]. However, these pioneering experiments also suffered from a number of technological and methodological limitations, which cast serious doubt on extrapolating their conclusions to more natural viewing conditions [17, 21, 22]. Furthermore, the proposal of an involvement of fixational eye movements in spatial perception has recently found new support from multiple sources, including neurophysiological [16, 23, 24] and behavioral investigations [22, 25, 26], statistical examinations of retinal input signals [27, 28], and theoretical analyses of the impact of a continually moving retinal input on neural responses [15, 17, 18, 29, 30].

In this article, we examine the perceptual, computational, and neural consequences of using fixational eye movements to represent space through time and review the mounting evidence supporting this idea. This emerging body of evidence indicates that fixational eye movements constitute a critical information processing stage, not a bug, and challenges established views of early visual processing at a fundamental level. Again, we remind the reader that many of these ideas also apply to larger eye movements. Although saccades generally serve to bring the high-acuity foveola onto the focus of attention, in doing so they generate transients in the visual input to the retina, thereby also converting purely spatial information into a spatiotemporal format.

2 Types of fixational eye movements

That the eye continues to move even during visual “fixation” has been known for a long time. One of the first scientific reports goes back to 1786, when Robert Darwin (Charles Darwin’s father) noticed that color after-effects appear to move because of fixational instability [31]. It is now known that fixational eye movements come in different varieties [2, 31, 32] (Fig. 1B).

Most studies have focused on microsaccades—miniature replicas of the rapid movements by which humans voluntarily shift gaze—which, among fixational eye movements, are the easiest to detect and measure. It has long been suggested that microsaccades serve a special role in preventing perceptual fading [8, 33], but this popular hypothesis has remained controversial among researchers [1, 31, 32, 34, 35, 36], and visual impairments similar to those occurring with larger saccades have been observed at the time of microsaccades [37, 38, 39, 40]. Recent studies have shown that when observers are not required to maintain fixation—an unnatural laboratory condition—but are left free to normally move their eyes, microsaccades precisely shift gaze toward nearby interesting locations [41]. This strategy appears to take advantage of a very small preferred retinal locus of fixation that enhances performance in high-acuity tasks [7]. Together with a substantial body of emerging evidence [23, 39, 40, 42, 43, 44, 45], these results suggest that little difference exists between microsaccades and larger saccades in terms of both control and function (for a recent review, see [46]).

In this article, we focus on the motion of the eye in the periods in between saccades and microsaccades. The dynamics of this activity is well delineated by spectral analysis (Fig. 1C) and can be subdivided into two main components. One component, which corresponds to a slow, meandering motion, occupies the frequency range from 0 to 40 Hz, and is most prominent at low temporal frequencies. A second component, with a smaller amplitude and a spectral peak in the range 40–100 Hz, is known as ocular tremor. Here, we consider these components together and use the term ocular drift (or, for brevity, just drift) to refer to the inter-saccadic motion of the eye.

The eye is traditionally believed to move very little and at very slow velocity in the periods in between saccades. Classical studies typically reported amplitude values ranging from about 1.5′ to 4′ (the symbol ′ represents minutes of arc; see Glossary), with median velocities around 4′/s [8]. However, these numbers were measured from highly experienced observers while they attempted to maintain steady fixation, moving as little as possible. They also represent estimates obtained over relatively long intervals, e.g., the average amplitude of the overall eye displacement between two successive saccades [8]. But the eye moves more during normal inter-saccadic fixation (especially immediately after a saccade), and changes direction very frequently, so that parameters measured in this way severely underestimate the real drift displacement and speed. Fig. 1D shows the mean distribution of the instantaneous drift speed as subjects freely examined natural scenes. The resulting speed (mean ~ 50′/s) is more than one order of magnitude larger than the coarse estimates previously reported in the literature, and the corresponding trajectory covers a considerable length in visual space (Fig. 1E).

Studies that examined ocular drift with the subject’s head immobilized—a standard procedure for resolving very small eye movements—have long observed that the eye drifts in an erratic fashion [31, 32, 47]. Fig. 1F shows the probability distribution that the eye moves by any given amount in any given interval. The gaze position becomes progressively more dispersed as time goes by, and the variance of the spatial distribution increases approximately linearly with time, a behavior that is characteristic of Brownian motion. Since the subject’s head was immobilized, this distribution also approximates the motion on the retina of a fixated point. A consequence of this behavior is that the standard deviation of the eye position increases more slowly than linearly (only as √t), and the target’s projection remains within a relatively narrow retinal region during the naturally brief periods of intersaccadic fixation.

The notion that ocular drift resembles Brownian motion should not be taken to imply a lack of oculomotor control. This motion could still be centrally generated, and control could be exerted in several ways; for example, by changing the diffusion coefficient, the parameter that regulates the speed of the Brownian process. Indeed, it has long been known that: (a) ocular drift becomes much faster (by a factor of four or more) when the subject no longer clenches a bite-bar; and (b) drift seems to partially compensate for tiny head movements that occur during fixation, thus revealing a form of slow control [28].
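The Brownian description above can be made concrete with a short simulation. The sketch below is purely illustrative (the diffusion coefficient and durations are hypothetical values, not measurements from the paper): it generates many random-walk drift trajectories and verifies that the variance of gaze position grows linearly with time, so that the spread of gaze position grows only as √t.

```python
import numpy as np

# Illustrative sketch of drift as 2-D Brownian motion (hypothetical diffusion
# coefficient, not a value from the paper).
rng = np.random.default_rng(0)

D = 100.0        # diffusion coefficient, arcmin^2/s (assumed)
dt = 0.001       # time step, s
n_steps = 300    # ~300 ms, a typical inter-saccadic fixation
n_trials = 5000  # number of simulated fixations

# Each step is Gaussian with variance 2*D*dt per axis (standard Brownian motion).
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_trials, n_steps, 2))
traj = np.cumsum(steps, axis=1)               # gaze position over time

# Mean squared displacement grows linearly with time, <r^2(t)> = 4*D*t,
# so the standard deviation of gaze position grows only as sqrt(t).
t = dt * np.arange(1, n_steps + 1)
msd = (traj ** 2).sum(axis=2).mean(axis=0)
print(np.allclose(msd, 4 * D * t, rtol=0.1))  # True: variance is linear in t
```

Changing `D` in this sketch mimics the kind of control discussed above: a larger diffusion coefficient produces faster drift with the same Brownian statistics.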

Fig. 1G shows the probability distribution of the motion on the retina of a fixated target during normal head-free fixation. These data were obtained from one subject by means of the Revolving Field Monitor [34], to our knowledge the only eye-tracker with demonstrated precision for resolving ocular drift during normal head movements. Note the similarity between the distributions in Fig. 1F and G. This similarity is striking given that drift speed is significantly higher and that fixational head movements also contribute to retinal image motion in G. For comparison, Fig. 1H shows the distribution that would be obtained by ocular drift alone, i.e., without considering head movements in the reconstruction of retinal image motion. These data indicate that ocular drift is under motor control and is designed to yield retinal image motion with specific characteristics. Furthermore, the compensation for head movements suggests that one source of this control is the vestibulo-ocular reflex [48, 49].

3 Transforming space into time

The idea that the temporal modulations resulting from eye movements might do more than simply “refresh” the responses of retinal neurons follows immediately from examination of the visual input impinging onto retinal receptors during natural fixation. We will explain the main ideas by considering the visual input signals experienced by a linear array of retinal receptors—the visual world of the inhabitants of Abbott’s Flatland (Fig. 2)—but our considerations are general and extend directly to two-dimensional visual space.

Figure 2.

Temporal modulations resulting from fixational drift. (A–B) Input signals experienced by a one-dimensional array of retinal receptors. The two panels in B show the spatiotemporal input during a perfectly steady fixation (top) and a normal fixation in which the eye moves (bottom). The luminance fluctuations impinging onto three separate points of the retinal surface (triangles) are shown on the right. (C) Ocular drift enhances high spatial frequencies. Luminance modulations (right) experienced by three retinal receptors (circles) during exposure to stimuli at three different spatial frequencies (left). The amplitude of the modulation increases with the spatial frequency. (D) Mean amplification resulting from ocular drift as a function of spatial frequency. Data represent averages across N=5 observers [78]. (E) Same as in C after adjusting contrasts to match the structure of natural images. Fixational modulations now possess similar amplitudes. (F) Comparison between the power of a set of natural images and the temporal power (the sum over all nonzero temporal frequencies) in the modulations caused by ocular drift (red) and Brownian motion with a matched diffusion constant (dashed line). Normal drift equalizes spatial power over a broad frequency range during viewing of natural images [27].

In natural scenes, most objects are stationary. In the absence of eye movements and any other motor activity, the visual signals from these objects would change little on the retina: each retinal receptor would continue to be exposed to a similar level of luminance, as illustrated in the top panel of Fig. 2B. This visual input signal contains a great deal of information at low temporal frequencies. But neurons in the retina and the early visual system are relatively insensitive to an unchanging input; they preferentially respond to a much higher band of temporal frequencies, above 1–2 Hz. Thus, it is not surprising that the visual system functions poorly when all motion signals are eliminated on the retina, as approximated by the condition of retinal stabilization.

Luckily, the scenario of Fig. 2B does not occur in real life. Even during fixation of a stationary scene, the physiological motion of the eye causes retinal receptors to experience continually varying input signals (Fig. 2B, bottom panel). The resulting temporal modulations redistribute spatial information into the temporal domain. As explained below, the specific temporal frequencies resulting from this motion depend on both the objects being observed and the characteristics of the eye movements. But regardless of these specifics, a first important consequence of eye movements is to shift the low temporal frequency power of a static scene into a range that the retina can signal sensitively.

Critically, this redistribution of input power to higher temporal frequencies cannot be regarded as a simple “refreshing” of the image. The amplitude of the fixational modulations is the result of an interaction between how the eye moves and the spatial characteristics of the scene. This redistribution is a linear process in space, so we can analyze it by examining the effects on individual sinusoidal components. Fig. 2C shows an example of the modulations given by the same drift trajectory when looking at stimuli at three different spatial frequencies. The temporal modulations resulting from moving the retina over an image are larger for high spatial frequencies than for low spatial frequencies. An intuitive understanding of why this happens can be gained by considering the luminance change experienced by a retinal receptor during an infinitesimally brief interval: in this period, ocular drift can be regarded as uniform motion (a constant-speed translation), and the amplitude of the modulation is determined by the spatial gradient of the image. For a sinusoidal pattern of luminance, like the ones shown in Fig. 2C, the gradient is proportional to the spatial frequency of the stimulus.

For longer periods of time, ocular drift can no longer be approximated by uniform motion, and its Brownian-like character needs to be taken into account. The main implication is that amplification will only occur at spatial frequencies for which drift covers a small fraction of the period. For sufficiently high spatial frequencies—for which the eyes move over a larger fraction of the period—there will instead be attenuation. Fig. 2D shows the actual spatial-frequency amplification resulting from ocular drift, averaged over several observers. Up to approximately 15 cycles/deg, the amount of power that spreads into the temporal domain tends to increase in proportion to the square of the spatial frequency (equivalently, the amplitude of the modulations grows linearly with frequency). Above this range the effect starts to be attenuated, but an enhancement of high spatial frequencies can be observed up to 30 cycles/deg.

Thus, ocular drift can be regarded as an operator that transforms space into time. Rather than just “refreshing” the retinal image, this transformation restructures spatial information by emphasizing high spatial frequencies. Interestingly, the cut-off frequency above which amplification no longer occurs approximately matches the spatial resolution limit of the photoreceptor array. Note that our analysis applies to the temporally fluctuating components of the signals incident on the retina, not to the mean level. This is in keeping with the overall dynamics of retinal processing from photoreceptor to ganglion cell: typical ganglion cells are severalfold more sensitive to changes in temporal contrast than to the average DC luminance.
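The gradient argument above can be checked numerically. In the sketch below (illustrative parameters, not values from the paper), a single receptor watches sinusoidal gratings of different spatial frequencies drift past at a constant speed; over a brief interval, the temporal modulation it experiences grows roughly in proportion to the spatial frequency of the grating.

```python
import numpy as np

# Numerical check of the gradient argument (illustrative parameters):
# a grating of spatial frequency k (cycles/deg) drifts past a single
# receptor at constant speed v during a brief interval.
v = 50.0 / 60.0                   # drift speed, deg/s (~50 arcmin/s, as in Fig. 1D)
t = np.linspace(0.0, 0.005, 500)  # a brief 5 ms interval

def modulation_amplitude(k):
    """Peak-to-peak luminance change seen by a receptor at x = 0 while a
    sinusoidal grating of spatial frequency k drifts past at speed v."""
    lum = np.sin(2.0 * np.pi * k * v * t)
    return lum.max() - lum.min()

# Doubling the spatial frequency roughly doubles the modulation, because the
# spatial gradient of a sinusoid is proportional to its frequency.
a5, a10 = modulation_amplitude(5.0), modulation_amplitude(10.0)
print(a10 / a5)   # close to 2 while drift covers a small fraction of the period
```

Over longer intervals, or at much higher spatial frequencies, the displacement covers a larger fraction of the grating period and the linear relation breaks down, consistent with the attenuation described in the text.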

The enhancement of high spatial frequencies shown in Fig. 2D may appear to contrast with the small effects reported by classical retinal stabilization studies with brief stimulus exposures [19, 20]. However, as observed above, these studies suffered from multiple limitations, which prevented analysis of the consequences of normal fixational instability. Note that imperfect stabilization (i.e., partial elimination of retinal image motion) leads to a smaller amplification in Fig. 2D, but one that extends to higher spatial frequencies, complicating interpretation of experimental results. Furthermore, to investigate temporal mechanisms of spatial encoding, special attention needs to be paid to the transients introduced merely by the onset and offset of the stimulus. The data summarized in Box 1 provide evidence that the human visual system is sensitive to drift modulations and that the functional effect of this sensitivity is as predicted above.

Box 1. Are humans sensitive to fixational modulations?

The spatiotemporal reformatting resulting from ocular drift (Fig. 2D) suggests that fixational eye movements enhance vision of high spatial frequencies. To better examine this prediction, we developed a new method of retinal stabilization that enables selective isolation of the motion of the retinal image present during natural fixation. This method relies on a hardware/software system specifically designed to process eye-movement signals in real time and update the display according to the experimenter’s specifications [73]. By moving the stimulus on the monitor to compensate for the subject’s eye movements (Fig. IA), this system accurately stabilizes the stimulus on the retina with video-frame resolution up to 200 Hz, while offering a degree of experimental flexibility far greater than that of previous methods.

Fig. I shows the results of an experiment in which we examined the consequences of eliminating fixational modulations of luminance during examination of fine spatial detail. In this task, subjects reported whether a noisy grating displayed after a saccade was tilted by 45° clockwise or counter-clockwise. Based on the way ocular drift transforms spatial information into temporal modulations, we examined the effect of stabilizing two different types of stimuli: one in which the target (the grating) was at a higher spatial frequency than the noise, and one in which the target was at a lower spatial frequency. These two stimuli yield different predictions: fixational modulations should enhance visibility of the high-spatial-frequency target, but not the low one.

Fig. IC compares the average percentages of correct discrimination measured in the presence of the normal fixational modulations to those reported when they were stabilized on the retina by counteracting the effects of eye movements. Results confirm the predictions of Fig. 2D: eliminating retinal image motion drastically impaired discrimination of high-frequency gratings, but had little effect on the low spatial frequency gratings. These findings conflict with traditional views of the influence of fixational eye movements on vision. The fading prevention hypothesis would have predicted a stronger impact of retinal stabilization with the low-frequency gratings, since low spatial frequencies are the range in which fading is most pronounced (i.e., contrast sensitivity most attenuated) when stimuli are presented for unnaturally long periods of time [1, 22].

The bottom row of Fig. I shows results from a separate experiment, in which retinal stabilization was restricted to a single axis. In this experiment, we selectively compensated for eye movements on a given axis, either parallel or orthogonal to the grating, leaving normal motion on the perpendicular axis (Fig. ID). In this way, fixational modulations were only driven by the pattern of noise during motion parallel to the grating, but provided information about the grating when motion occurred on the orthogonal axis. Performance reflected the information content of fixational modulations. Discrimination was impaired when retinal image motion was restricted to the axis parallel to the grating but was instead normal when motion occurred on the orthogonal axis (Fig. IE). These results support the proposal that humans take advantage of the input luminance modulations caused by fixational eye movements.

3.1 Interactions with natural images

Natural scenes do not yield random images on the retina. Statistical regularities are present at multiple scales in natural environments, and it has long been argued that sensory systems are tuned to these regularities and exploit them in establishing neural representations [50, 51, 52, 53, 54]. As we describe below, this tuning begins even before neural processing.

One of the most evident characteristics of a natural image is its very specific spectral distribution: the power spectrum of natural scenes declines with spatial frequency, approximately in inverse proportion to the square of the spatial frequency [55] (see Fig. 2F). A considerable amount of work has focused on the consequences of this input spectrum for early visual representations [51, 56]. However, since the eye is always in motion, a full account of the impact of this statistical regularity inevitably needs to take into consideration not just the image, but the actual spatiotemporal input resulting from the interaction between the image and eye movements. Based on our previous discussion, we can expect that oculomotor activity will transform a stationary natural scene into temporal modulations on the retina, effectively redistributing the power of the image into the joint space-time domain.

Fig. 2E analyzes a fixation in the “natural world”. The same three stimuli of Fig. 2C (three sinusoids at different spatial frequencies) are again observed by a drifting eye, but their contrasts are now adjusted according to the power spectrum of natural scenes. The well-known spectral distribution of a natural image [55] implies that the contrast of each Fourier component decreases as its frequency increases. For this reason, the high-frequency stimulus in Fig. 2E is displayed at a lower contrast than the low-frequency stimulus. This attenuation of contrast affects fixational modulations in a direction opposite to the effect described in Fig. 2C–D. Whereas natural images emphasize low spatial frequencies, the motion of the eye enhances high spatial frequencies. Remarkably, the two effects counterbalance each other. As illustrated in Fig. 2E, after scaling contrast to replicate natural images, looking at the three frequency components with the same drift trajectory gives modulations of approximately equal amplitudes.

The net effect of the interaction between normal inter-saccadic eye drift and natural images is summarized in Fig. 2F. This graph compares the power spectrum of a set of images to the average temporal power made available by ocular drift in the form of modulations, as humans freely looked at them. As shown by these data, fixational instability yields temporal modulations with uniform spectral density over a broad range of spatial frequencies [27]. Very similar results were also obtained during head-free viewing, a condition in which eye and head movements combine to form a retinal stimulus with almost identical characteristics [28]. Thus, the signals impinging onto retinal receptors differ sharply from the images presented on the display. Yet behavioral and neurophysiological investigations commonly take these images as the input to the visual system.

In sum, ocular drift causes a very specific spatiotemporal reformatting of the retinal input when humans look at natural scenes. Within the range of peak temporal sensitivity of retinal neurons, the normal fixational fluctuations of luminance possess equalized power across spatial frequencies. This equalization of power—a transformation known as “spectral whitening”—depends on three factors: the spectral density of natural scenes, the statistics of normal ocular drift, and the temporally bandpass nature of retinal processing from photoreceptor to ganglion cell. It thus reveals a form of matching between the characteristics of the natural world, normal eye movements, and retinal network dynamics.

4 Consequences for neural encoding

Since the fixational modulations of luminance cover temporal frequencies within the range of peak sensitivity of neurons in the retina and thalamus [57], they are likely to profoundly influence neural responses. Many neurons in the visual system are primarily sensitive to time-varying stimuli. During fixation on a natural scene, these neurons will effectively be driven by the equalized signal in Fig. 2F. Other neurons with more sustained responses may also be sensitive to the average pattern of luminance covered by their receptive fields—i.e., what is left of the low-frequency power of the image at 0 Hz. In these neurons, eye movements are expected to induce phasic modulations superimposed on a tonic level of response. In both cases, the resulting synchronous modulations of activity are likely to elicit strong responses in downstream neurons in the cortex [58, 59].

What are the implications of the fixational input reformatting for the mechanisms of neural encoding? The first consideration is that, although this spatiotemporal transformation is linear in space, it is not linear in time. Eye movements create power at temporal frequencies which are not present in the external image: even a static visual environment results in temporal modulations of light on the retina, simply because the retina moves. In the temporal domain, this nonlinearity enables a match between the low temporal frequency power of natural scenes and the higher-frequency sensitivity of retinal neurons. In space, however, eye movements can be regarded as a filtering stage. Because of the spatial processing described in Fig. 2D, presentation of visual stimulation in the absence of eye movements is likely to misgauge neuronal sensitivity during natural vision. As shown in Fig. 3A–B, during normally active fixation, models of retinal ganglion cells respond less to low spatial frequencies and more to high spatial frequencies than their contrast sensitivity functions measured with immobile retinas would suggest.

Figure 3.

Figure 3

Predicted consequences of normal active fixation for retinal responses. (A–B) Changes in the spatial sensitivities of parvocellular (A) and magnocellular (B) ganglion cells. The contrast sensitivity functions of model neurons were measured with (Active) and without fixational eye movements (Passive). The latter functions replicate experimental data collected with immobile retinas [57, 79]. (C–G) Modeling the responses of retinal ganglion cells during natural stimulation [27]. (C) The receptive fields of an array of ON-center parvocellular neurons (six cells shown here; circles) moved according to recorded traces of ocular drift (arrow). (D) Response models were linear filters with the spatial and temporal contrast sensitivities of a cell recorded in the macaque [80]. (E) Grayscale representation of activity in the cell array at a given moment in time. The mean instantaneous firing rate of every simulated neuron is plotted at the center location of the cell’s receptive field. Note the enhancement of edges. (F) Time-course of the responses of the six neurons shown in C. (G) Same as in E during static presentation of the image without fixational eye movements.

Further important consequences for neural representations follow from the interaction between eye movements and natural images shown in Fig. 2F. It has long been suggested that sensory systems remove what is predictable from the general statistics of the environment (i.e., redundant), so that they can focus their resources on what is actually informative [50, 60], a strategy that enables efficient transmission of visual information. The center-surround organization of the receptive fields of ganglion cells is commonly held responsible for discarding redundancy [61, 62] and eliminating the strong broad correlations present in natural scenes [56]. However, experimental evidence in support of these theories has been scarce, and broad correlations have been reported with presentation of natural images in the absence of eye movements [63, 64].

These previous proposals did not take into account the incessant motion of the retinal image, and they face a fundamental problem raised by the fixational reformatting of the visual input. Since the power spectrum is, by definition, the Fourier transform of the autocorrelation function, an equalization of power across spatial frequencies is equivalent to a removal of correlations in space (the autocorrelation becomes localized to a point). In other words, the spectral distribution in Fig. 2F implies that pairs of retinal receptors will experience uncorrelated fluctuations in luminance during a normally active fixation. That is, because of eye movements, the decorrelation believed to be the result of retinal circuitry has already occurred even before light is transduced into chemical signals.
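The equivalence invoked here is the Wiener–Khinchin theorem: the autocorrelation is the inverse Fourier transform of the power spectrum, so a flat spectrum implies correlations confined to zero lag. A minimal numerical check (signal length and tolerance are arbitrary choices of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
# A whitened (flat-spectrum) signal: i.i.d. samples
white = rng.normal(0, 1, 65536)

# Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum
power = np.abs(np.fft.fft(white)) ** 2
acorr = np.fft.ifft(power).real / len(white)
acorr /= acorr[0]  # normalize so the zero-lag correlation is 1

print(acorr[0])                  # 1 at zero lag
print(np.abs(acorr[1:100]).max())  # near 0: correlations localized to a point
```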

If decorrelation is already accomplished by eye movements, the subsequent spatial filtering carried out by the retinal network must have some other function. To infer this function, we first recall the reason why decorrelation is useful for efficient coding: under specific constraints, a decorrelated representation enables transmission of an image through a limited-capacity channel (the optic nerve) in a way that maximizes transmitted information [65]. Note that this well-known result from information theory treats all aspects of the image equally, regardless of their behavioral value. But intuition suggests that this is an oversimplification: contours (features such as edges and lines) are particularly important in extracting meaning from the image, as they tend to form object boundaries. The further amplification of high spatial frequencies performed by ganglion cells—beyond what is necessary to achieve decorrelation—supports the notion that not all information is treated equally by the retina: the luminance discontinuities present at contours seem to be emphasized. That is, the process of feature extraction, commonly assumed to take place at higher stages of a passive visual system, appears to begin in the retina during normal active fixation.

This feature enhancement is conveyed by the temporal structure of neural responses in several ways. First, it is a consequence of the specific filtering characteristics of neurons in the retina. Ganglion cells are tuned to non-zero temporal frequencies, a range in which fixational eye movements have already equalized spatial power. Thus, temporal changes in the response of each individual cell emphasize the high spatial frequency content of the scene. Second, luminance discontinuities are also encoded in the temporal structure of cell responses, particularly in the way pairs of neurons respond together. Synchronous responses during drift are likely to encode contours. Third, because of the non-linearity inherent in spike generation, a robust and noise-insensitive neural code may emerge, which uses the motion of the eye as a form of stochastic resonance. The net result at the population level is a code that takes advantage of the efficacy of temporally synchronous responses in propagating contours through neural networks.

To clarify some of these ideas, Fig. 3 shows an example of activity in a neural model. When the photoreceptor array moves, response modulations are synchronous in ganglion cells with receptive fields aligned with a contour (neuronal population A in Fig. 3C). In contrast, ganglion cells with receptive fields over more uniform regions exhibit uncorrelated responses (neuronal population B). This edge enhancement occurs even though the model neurons are circularly symmetric and possess no preference for oriented stimuli. It is lost in the absence of fixational eye movements (Fig. 3G). Thus, eye movements encode luminance discontinuities in correlated modulations, which are amplified by ganglion cells. Such synchronous activity is likely to be highly effective in driving downstream neurons, but it could easily be mistaken for noise in neurophysiological recordings unless fixational eye movements are carefully measured [23].
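A stripped-down, one-dimensional analog of this idea (the edge profile, jitter magnitude, and receptor positions are our own choices, not the model of Fig. 3) shows the same principle: jitter shared across the retina produces synchronous modulations in receptors straddling a contour but almost none over uniform regions:

```python
import numpy as np

rng = np.random.default_rng(5)
# Common Brownian jitter shared by all receptors (magnitude is an assumption)
jitter = np.cumsum(rng.normal(0, 0.002, 20_000))

def edge(x):
    # a blurred luminance step (contour) centered at x = 0
    return np.tanh(x / 0.05)

# Two receptors straddling the contour, and one over a uniform region
a = edge(-0.02 + jitter)
b = edge(0.02 + jitter)
u = edge(2.0 + jitter)

corr_ab = np.corrcoef(a, b)[0, 1]
print(corr_ab)   # near 1: synchronous modulations at the edge
print(u.std())   # near 0: no modulation over uniform luminance
```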

5 Seeing with an active eye

Fixational eye movements are commonly regarded in a negative light: a source of uncertainty that needs to be overcome to establish unambiguous spatial maps, a way to avoid image fading, and an experimental nuisance difficult to control. These negative views do not do justice to the importance of eye movements in fine spatial vision. Fixational movements are not a “bug” but a “feature”: an integral and critical stage of information processing, which enables the retina to represent space in a temporal fashion and to begin the process of feature extraction.

We have argued that both the amount and the consequences of the incessant inter-saccadic motion of the eye have been severely underestimated. This motion profoundly reshapes the spatiotemporal input impinging onto retinal receptors, matching the characteristics of the natural world to those of retinal neurons. It yields an effective input signal for driving neural responses that discards predictable input correlations, an outcome long advocated as an important goal of early visual processing [50, 60], but until now attributed to neural processing. This redundancy reduction occurs in the temporal domain. Simultaneously, the reshaping of the input pattern generates synchronous modulations that convey image-specific informative features, thus facilitating their extraction and encoding by the neural circuitry [16, 27] and enhancing vision of fine spatial detail [22].

The phenomena described in this paper likely represent only some of the more evident mechanisms of a general sensorimotor strategy for representing space. While we have focused on fixational drift, many of our considerations are general and extend to larger eye movements. Indeed, oculomotor activity always transforms spatial information into temporal structure, but movements as distinct as saccades and ocular drift yield highly different spatiotemporal distributions in the retinal input, suggesting complementary roles in temporally reformatting space [66].

Our proposal that the visual system uses oculomotor behavior to represent space in time may appear at odds with the observation that some aspects of the visual scene can be extracted even with extremely brief stimulus exposures [67, 68, 69, 70]. But in these experiments, the stimulus presentation itself produces very sharp transients—an extreme amplification of the modulations normally caused by saccades—which effectively redistribute spatial information across temporal frequencies. A visual system designed to operate in the joint space-time domain can take advantage of these rich transients in laboratory settings, but has to rely on the modulations caused by eye movements, both large and small, under natural viewing. It is well known that visual functions are severely impaired in the absence of temporal transients [8, 9] and that even major changes in the scene are not perceived if they occur at sufficiently low temporal frequencies [71].

Our proposal that fixational eye movements are part of a strategy for representing space through time challenges current views on the mechanisms of early visual processing at the most fundamental level. It implies that widely accepted encoding theories need to be revised to incorporate the consequences of retinal image motion. It suggests that, in a continually moving eye, the process of edge extraction starts in the retina rather than in the cortex, and that a critical function of early visual processing is to begin to extract features, not just discard redundancy. It implies different encoding/decoding mechanisms for spatial information than the ones commonly postulated, mechanisms reminiscent of somatosensation [15, 72]. More generally, it replaces the traditional notion of the retina as a passive encoding stage that optimizes overall information transmission with that of an active system for feature extraction, whose mechanisms are intrinsically sensorimotor. Given the growing body of evidence indicating that fixational eye movements are under central control [28, 41, 42, 43], our proposal also raises the hypothesis that representations can be flexibly adapted to the task by means of behavior. These considerations suggest that, rather than seeking to understand visual processing within the framework of information theory and a single optimal code, it may be more fruitful to adopt a broader framework that explicitly recognizes the importance of task, dynamics, and flexible codes.

Critically, this shift of view has a number of practical consequences. It implies that eye movements are in part responsible for fundamental properties of spatial vision that, at present, are attributed solely to neural mechanisms. It raises the hypothesis that spatial vision impairments present in neurologic disorders may have an unrecognized motor component. Furthermore, it argues that the consequences of oculomotor activity need to be considered in the development of effective prostheses, especially those that bypass normal eye movements. Many questions (some of which are listed in Box 2) need to be answered to understand the mechanisms of active vision. But it is time to abandon the simplistic idea that fixational eye movements merely serve to prevent fading and start asking how these movements enable us to see.

Figure I.

Figure I

(Box 1). Consequences of eliminating fixational modulations of luminance. (A) A modern retinal stabilization method. The position of the stimulus on a fast display is continually updated according to the subject’s eye movements so as to eliminate retinal image motion. (B–E) Results of experiments in which subjects judged the orientation (±45°) of gratings embedded within noise fields with naturalistic spectral distributions [22]. (B) The spatial frequency of the grating was either higher or lower than the band of the noise. (C) Comparison of performance during normal fixational instability and under retinal stabilization. To isolate the normal inter-saccadic motion of the eye, stimuli were displayed at the onset of fixation after the subject performed a saccade toward a randomly cued location. Removal of fixational modulations via retinal stabilization selectively impaired high spatial frequency vision. Error bars represent 95% confidence intervals. (D) Partial stabilization restricting movement of the stimulus to a single axis, either parallel or orthogonal to the grating. Fixational modulations experienced by a retinal receptor (circle) convey information about the grating when motion is restricted to the orthogonal, but not the parallel, axis. (E) Mean percentages of correct discrimination ± s.e.m. for two subjects. High-frequency discrimination is impaired when motion is restricted to the axis parallel to the grating and normal when motion is restricted to the orthogonal axis. (*) and (**) indicate significant differences from complete retinal stabilization (Stab.) and from normal retinal image motion (Normal), respectively (p < 0.05; one-tailed z-tests).

Highlights.

Small eye movements are always present during natural visual fixation.

Fixational movements do not merely prevent the image from fading, they reformat it.

Fixational modulations eliminate correlations prior to neural processing.

Fixational modulations enhance high spatial frequencies and begin edge extraction.

Spatial representations are intrinsically sensorimotor starting from the retina.

Acknowledgements

The authors thank Murat Aytekin and Martina Poletti for many helpful comments on the manuscript. This work was supported by National Institutes of Health grants EY18363 (MR) and EY07977 (JV) and National Science Foundation grants 1127216, 1420212, and 0843304 (MR).

Glossary

Brownian motion

a random-walk process that mimics the jiggling motion of a particle in a fluid. Brownian motion provides a good approximation for the inter-saccadic motion of the retinal image.

Diffusion coefficient

a parameter describing the speed of Brownian motion. The diffusion coefficient gives the ratio between the expected value of the squared distance moved and the time elapsed since the onset of the motion.
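This ratio can be checked with a simulated random walk (the step size and walk counts below are arbitrary choices): mean squared displacement grows linearly with time, so the ratio stays constant.

```python
import numpy as np

rng = np.random.default_rng(3)
# 2000 independent 2-D Brownian walks of 1000 steps each
# (per-axis step std of 0.5 is an arbitrary choice)
steps = rng.normal(0, 0.5, (2000, 1000, 2))
pos = np.cumsum(steps, axis=1)
msd = (pos ** 2).sum(axis=2).mean(axis=0)  # mean squared displacement vs. time

# E[d^2] / t is constant: 2 axes * 0.25 variance per step per axis = 0.5
print(msd[99] / 100, msd[999] / 1000)  # both roughly 0.5
```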

Fixation

the period in between saccades, in which visual information is acquired and the image on the retina moves relatively little.

Fixational eye movements

small eye movements that incessantly occur during fixation. They include occasional microsaccades, ocular drift, and tremor.

Retinal stabilization

a laboratory procedure that completely eliminates the physiological motion of the retinal image.

Image (or perceptual or Troxler) fading

the progressive disappearance of the visual percept experienced under retinal stabilization.

Fovea centralis (fovea, in brief)

a depression in the surface of the retina with diameter ~1.5 mm used for high-acuity vision.

Foveola

the central region of the fovea (~0.2 mm in diameter, approximately 1° in visual angle) without rod photoreceptors and where cones are most densely packed.

Power spectrum

a representation of how the power of a random signal is distributed across the various frequencies composing the signal.

Minute of arc (or arcmin, or minarc)

a measurement unit of angle, corresponding to one-sixtieth of one degree. It is usually indicated by the symbol′.

Microsaccade

a very small saccade, traditionally defined as having an amplitude of 30′ or less, that keeps the attended stimulus within the foveola.

Ocular drift

the relatively slow incessant motion of the eye during the inter-saccadic interval. Here we use this term to also include tremor, a superimposed very small high-frequency motion.

Retinal ganglion cells

the neurons in the output stage of the retina, which relay information to the thalamus and other subcortical areas.

Saccade

a very rapid eye movement normally used to bring the retinal projection of the object of interest onto the high-acuity fovea. Saccades typically occur 2–3 times per second.

Whitening

a signal-processing operation that equalizes the power across all frequencies. After whitening a signal, its power spectrum is flat. This process removes pairwise correlations in the signal.
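A compact numeric illustration of whitening (the signal construction is arbitrary): dividing each Fourier coefficient by its magnitude flattens the power spectrum while preserving phase.

```python
import numpy as np

rng = np.random.default_rng(4)
# A correlated signal: white noise smoothed by a moving average
raw = np.convolve(rng.normal(0, 1, 4096), np.ones(8) / 8, mode="same")

# Whitening: divide each Fourier component by its magnitude,
# leaving phase (and hence structure) intact but making power flat
spec = np.fft.fft(raw)
white = np.fft.ifft(spec / np.abs(spec)).real

power = np.abs(np.fft.fft(white)) ** 2
print(power.std() / power.mean())  # near 0: flat power spectrum
```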


References

  • 1.Kowler E. Eye movements: The past 25 years. Vision Res. 2011;51(13):1457–1483. doi: 10.1016/j.visres.2010.12.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Martinez-Conde S, Macknik SL. Fixational eye movements across vertebrates: comparative dynamics, physiology, and perception. J. Vis. 2008;8(14):1–16. doi: 10.1167/8.14.28. [DOI] [PubMed] [Google Scholar]
  • 3.Steinbach MJ. Owls’ eyes move. Br. J. Ophthalmol. 2004;88(8):1103. doi: 10.1136/bjo.2004.042291. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Burak Y, Rokni U, Meister M, Sompolinsky H. Bayesian model of dynamic image stabilization in the visual system. Proc. Natl. Acad. Sci. USA. 2010;107:19525–19530. doi: 10.1073/pnas.1006076107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Packer O, Williams DR. Blurring by fixational eye movements. Vision Res. 1992;32:1931–1939. doi: 10.1016/0042-6989(92)90052-k. [DOI] [PubMed] [Google Scholar]
  • 6.Wurtz RH, Joiner WM, Berman RA. Neuronal mechanisms for visual stability: progress and problems. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2011;366(1564):492–503. doi: 10.1098/rstb.2010.0186. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Poletti M, Listorti C, Rucci M. Microscopic eye movements compensate for nonhomogeneous vision within the fovea. Curr. Biol. 2013;23(17):1691–1695. doi: 10.1016/j.cub.2013.07.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Ditchburn RW. Eye Movements and Visual Perception. Oxford: Clarendon Press; 1973. [Google Scholar]
  • 9.Yarbus AL. Eye Movements and Vision. New York: Plenum Press; 1967. [Google Scholar]
  • 10.Stevens JK, Emerson RC, Gerstein GL, Kallos T, Neufeld GR, Nichols CW, Rosenquist AC. Paralysis of the awake human: Visual perceptions. Vision Res. 1976;16(1):93–98. doi: 10.1016/0042-6989(76)90082-1. [DOI] [PubMed] [Google Scholar]
  • 11.Averill HI, Weymouth FW. Visual perception and the retinal mosaic. II. The influence of eye movements on the displacement threshold. J. Comp. Psychol. 1925;5:147–176. [Google Scholar]
  • 12.Marshall WH, Talbot SA. Biological Symposia—Visual Mechanisms. Vol. 7. Lancaster, PA: Cattel.; 1942. Recent evidence for neural mechanisms in vision leading to a general theory of sensory acuity. In H. Kluver, editor; pp. 117–164. [Google Scholar]
  • 13.Arend LE. Spatial differential and integral operations in human vision: implications of stabilized retinal image fading. Psychol. Rev. 1973;80(5):374–395. doi: 10.1037/h0020072. [DOI] [PubMed] [Google Scholar]
  • 14.Rucci M, Edelman GM, Wray J. Modeling LGN responses during free-viewing: A possible role of microscopic eye movements in the refinement of cortical orientation selectivity. J. Neurosci. 2000;20(12):4708–4720. doi: 10.1523/JNEUROSCI.20-12-04708.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Ahissar E, Arieli A. Figuring space by time. Neuron. 2001;32:185–201. doi: 10.1016/s0896-6273(01)00466-4. [DOI] [PubMed] [Google Scholar]
  • 16.Greschner M, Bongard M, Rujan P, Ammermuller J. Retinal ganglion cell synchronization by fixational eye movements improves feature estimation. Nat. Neurosci. 2002;5(4):341–347. doi: 10.1038/nn821. [DOI] [PubMed] [Google Scholar]
  • 17.Rucci M. Fixational eye movements, natural image statistics, and fine spatial vision. Network: Comp. Neural Sys. 2008;19(4):253–285. doi: 10.1080/09548980802520992. [DOI] [PubMed] [Google Scholar]
  • 18.Ahissar E, Arieli A. Seeing via miniature eye movements: A dynamic hypothesis for vision. Front. Comput. Neurosci. 2012;6:1–27. doi: 10.3389/fncom.2012.00089. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Riggs LA, Ratliff F, Cornsweet JC, Cornsweet TN. The disappearance of steadily fixated visual test objects. J. Opt. Soc. Am. 1953;43(6):495–501. doi: 10.1364/josa.43.000495. [DOI] [PubMed] [Google Scholar]
  • 20.Tulunay-Keesey U, Jones RM. The effect of micromovements of the eye and exposure duration on contrast sensitivity. Vision Res. 1976;16(5):481–488. doi: 10.1016/0042-6989(76)90026-2. [DOI] [PubMed] [Google Scholar]
  • 21.Steinman RM, Levinson JZ. The role of eye movements in the detection of contrast and spatial detail.chapter 3. In: Kowler E, editor. Eye movements and their role in visual and cognitive processes. Elsevier Science Publishers BV; 1990. pp. 115–212. [PubMed] [Google Scholar]
  • 22.Rucci M, Iovin R, Poletti M, Santini F. Miniature eye movements enhance fine spatial detail. Nature. 2007;447(7146):852–855. doi: 10.1038/nature05866. [DOI] [PubMed] [Google Scholar]
  • 23.Kagan I, Gur M, Snodderly DM. Saccades and drifts differentially modulate neuronal activity in V1: Effects of retinal image motion, position, and extraretinal influences. J. Vis. 2008;8(14):1–25. doi: 10.1167/8.14.19. [DOI] [PubMed] [Google Scholar]
  • 24.Ennis R, Cao D, Lee BB, Zaidi Q. Eye movements and the neural basis of context effects on visual sensitivity. J. Neurosci. 2014;34(24):8119–29. doi: 10.1523/JNEUROSCI.1048-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Rucci M, Desbordes G. Contributions of fixational eye movements to the discrimination of briefly presented stimuli. J. Vis. 2003;3(11):852–864. doi: 10.1167/3.11.18. [DOI] [PubMed] [Google Scholar]
  • 26.Chen CY, Hafed ZM. Post-microsaccadic enhancement of slow eye movements. J. Neurosci. 2013;33(12):5375–5386. doi: 10.1523/JNEUROSCI.3703-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Kuang X, Poletti M, Victor JD, Rucci M. Temporal encoding of spatial information during active visual fixation. Curr. Biol. 2012;20(6):510–514. doi: 10.1016/j.cub.2012.01.050. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Aytekin M, Victor JD, Rucci M. The visual input to the retina during natural head-free fixation. J. Neurosci. 2014;34(38):12701–12715. doi: 10.1523/JNEUROSCI.0229-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Henning MH, Worgotter F. Eye micro-movements improve stimulus detection beyond the Nyquist limit in the peripheral retina. In: Thrun S, Saul LK, Schölkopf B, editors. NIPS. MIT Press; 2004. [Google Scholar]
  • 30.Rucci M, Casile A. Fixational instability and natural image statistics: Implications for early visual representations. Network: Comp. Neural Sys. 2005;16(2–3):121–138. doi: 10.1080/09548980500300507. [DOI] [PubMed] [Google Scholar]
  • 31.Rolfs M. Microsaccades: Small steps on a long way. Vision Res. 2009;49(20):2415–2441. doi: 10.1016/j.visres.2009.08.010. [DOI] [PubMed] [Google Scholar]
  • 32.Collewijn H, Kowler E. The significance of microsaccades for vision and oculomotor control. J. Vis. 2008;8(14):1–21. doi: 10.1167/8.14.20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.McCamy MB, Otero-Millan J, Macknik SL, Yang Y, Troncoso XG, Baer SM, Crook SM, Martinez-Conde S. Microsaccadic efficacy and contribution to foveal and peripheral vision. J. Neurosci. 2012;32(27):9194–9204. doi: 10.1523/JNEUROSCI.0515-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Steinman RM. Gaze control under natural conditions. In: Chalupa LM, Werner JS, editors. The Visual Neurosciences. Cambridge: MIT Press; 2003. [Google Scholar]
  • 35.Poletti M, Rucci M. Fixational eye movements under various conditions of image fading. J. Vis. 2010;10(3)(6):1–18. doi: 10.1167/10.3.6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Kagan I. Microsaccades and image fading during natural vision. Electronic response to McCamy et al. Microsaccadic efficacy and contribution to foveal and peripheral vision. J. Neurosci. 2012;32(27):9194–9204. doi: 10.1523/JNEUROSCI.0515-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Zuber BL, Crider A, Stark L. Saccadic suppression associated with microsaccades. Quarterly Progress Report. 1964;74:244–249. [Google Scholar]
  • 38.Herrington TM, Masse NY, Hachmeh KJ, Smith JE, Assad JA, Cook EP. The effect of microsaccades on the correlation between neural activity and behavior in middle temporal, ventral intraparietal, and lateral intraparietal areas. J. Neurosci. 2009;29(18):5793–57805. doi: 10.1523/JNEUROSCI.4412-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Hass CA, Horwitz GD. Effects of microsaccades on contrast detection and V1 responses in macaques. J. Vis. 2011;11(3):1–17. doi: 10.1167/11.3.3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Hafed ZM. Alteration of visual perception prior to microsaccades. Neuron. 2013;77(4):775–786. doi: 10.1016/j.neuron.2012.12.014. [DOI] [PubMed] [Google Scholar]
  • 41.Ko H-K, Poletti M, Rucci M. Microsaccades precisely relocate gaze in a high visual acuity task. Nat. Neurosci. 2010;13(12):1549–1553. doi: 10.1038/nn.2663. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Hafed ZM, Goffart L, Krauzlis RJ. A neural mechanism for microsaccade generation in the primate superior colliculus. Science. 2009;323(5916):940–943. doi: 10.1126/science.1166112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Goffart L, Hafed ZM, Krauzlis RJ. Visual fixation as equilibrium: Evidence from superior colliculus inactivation. J. Neurosci. 2012;32(31):10627–10636. doi: 10.1523/JNEUROSCI.0696-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Van Horn MR, Cullen KE. Coding of microsaccades in three-dimensional space by premotor saccadic neurons. J. Neurosci. 2012;32(6):1974–80. doi: 10.1523/JNEUROSCI.5054-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Munoz DP, Corneil BD. Overt responses during covert orienting. Neuron. 2014;82(6):1230–43. doi: 10.1016/j.neuron.2014.05.040. [DOI] [PubMed] [Google Scholar]
  • 46.Poletti M, Rucci M. A compact field guide to the study of microsaccades: Challenges and functions. Vision Res. 2015 doi: 10.1016/j.visres.2015.01.018. in press. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Engbert R, Mergenthaler K, Sinn P, Pikovsky A. An integrated model of fixational eye movements and microsaccades. Proc. Natl. Acad. Sci. USA. 2011;108:765–770. doi: 10.1073/pnas.1102730108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Steinman RM, Collewijn H. Binocular retinal image motion during active head rotation. Vision Res. 1980;20(5):415–429. doi: 10.1016/0042-6989(80)90032-2. [DOI] [PubMed] [Google Scholar]
  • 49.Angelaki DE, Cullen KE. Vestibular system: The many facets of a multimodal sense. Annu. Rev. Neurosci. 2008;31:125–150. doi: 10.1146/annurev.neuro.31.060407.125555. [DOI] [PubMed] [Google Scholar]
  • 50.Barlow HB. Possible principles underlying the transformations of sensory messages. In: Rosenblith WA, editor. Sensory Communication. Cambridge, MA: MIT Press; 1961. pp. 217–234. [Google Scholar]
  • 51.Simoncelli E, Olshausen B. Natural image statistics and neural representation. Annu. Rev. Neurosci. 2001;24:1193–1216. doi: 10.1146/annurev.neuro.24.1.1193. [DOI] [PubMed] [Google Scholar]
  • 52.Hyvärinen A, Hurri J, Hoyer PO. Natural Image Statistics: A Probabilistic Approach to Early Computational Vision. New York: Springer-Verlag; 2009. [Google Scholar]
  • 53.Tkačik G, Prentice J, Victor JD, Balasubramanian V. Local statistics in natural scenes predict the saliency of synthetic textures. Proc. Natl. Acad. Sci. USA. 2010;107:18149–18154. doi: 10.1073/pnas.0914916107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Nitzany EI, Victor JD. The statistics of local motion signals in naturalistic movies. J. Vis. 2010;14:1–15. doi: 10.1167/14.4.10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Field DJ. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A. 1987;4(12):2379–2394. doi: 10.1364/josaa.4.002379. [DOI] [PubMed] [Google Scholar]
  • 56.Atick JJ, Redlich A. What does the retina know about natural scenes? Neural Comput. 1992;4:196–210. [Google Scholar]
  • 57.Kaplan E, Benardete E. The dynamics of primate retinal ganglion cells. Prog. Brain Res. 2001;134:17–34. doi: 10.1016/s0079-6123(01)34003-7. [DOI] [PubMed] [Google Scholar]
  • 58.Dan Y, Alonso JM, Usrey WM, Reid RC. Coding of visual information by precisely correlated spikes in the lateral geniculate nucleus. Nat. Neurosci. 1998;6:501–507. doi: 10.1038/2217. [DOI] [PubMed] [Google Scholar]
  • 59. Bruno RM, Sakmann B. Cortex is driven by weak but synchronously active thalamocortical synapses. Science. 2006;312:1622–1627. doi: 10.1126/science.1124593.
  • 60. Attneave F. Some informational aspects of visual perception. Psychol. Rev. 1954;61:183–193. doi: 10.1037/h0054663.
  • 61. Srinivasan MV, Laughlin SB, Dubs A. Predictive coding: A fresh view of inhibition in the retina. Proc. R. Soc. Lond. B. 1982;216:427–459. doi: 10.1098/rspb.1982.0085.
  • 62. van Hateren JH. A theory of maximizing sensory information. Biol. Cybern. 1992;68:23–29. doi: 10.1007/BF00203134.
  • 63. Puchalla JL, Schneidman E, Harris RA, Berry MJ. Redundancy in the population code of the retina. Neuron. 2005;46:493–504. doi: 10.1016/j.neuron.2005.03.026.
  • 64. Ohshiro T, Weliky M. Simple fall-off pattern of correlated neural activity in the developing lateral geniculate nucleus. Nat. Neurosci. 2006;9:1541–1548. doi: 10.1038/nn1799.
  • 65. Dimitrov AG, Lazar AA, Victor JD. Information theory in neuroscience. J. Comput. Neurosci. 2011;30:1–5. doi: 10.1007/s10827-011-0314-3.
  • 66. Desbordes G, Rucci M. A model of the dynamics of retinal activity during natural visual fixation. Visual Neurosci. 2007;24(2):217–230. doi: 10.1017/S0952523807070460.
  • 67. Bacon-Macé N, Macé M, Fabre-Thorpe M, Thorpe SJ. The time course of visual processing: Backward masking and natural scene categorisation. Vision Res. 2005;45:1459–1469. doi: 10.1016/j.visres.2005.01.004.
  • 68. Greene MR, Oliva A. The briefest of glances: The time course of natural scene understanding. Psychol. Sci. 2009;20(4):464–472. doi: 10.1111/j.1467-9280.2009.02316.x.
  • 69. Thurgood C, Whitfield TWA, Patterson J. Towards a visual recognition threshold: New instrument shows humans identify animals with only 1 ms of visual exposure. Vision Res. 2011;51:1966–1971. doi: 10.1016/j.visres.2011.07.008.
  • 70. Greene E, Ogden RT. Evaluating the contribution of shape attributes to recognition using the minimal transient discrete cue protocol. Behav. Brain Funct. 2012;8(53):1–14. doi: 10.1186/1744-9081-8-53.
  • 71. Simons DJ, Franconeri SL, Reimer RL. Change blindness in the absence of a visual disruption. Perception. 2000;29(10):1143–1154. doi: 10.1068/p3104.
  • 72. Diamond ME, von Heimendahl M, Knutsen PM, Kleinfeld D, Ahissar E. 'Where' and 'what' in the whisker sensorimotor system. Nat. Rev. Neurosci. 2008;9(8):601–612. doi: 10.1038/nrn2411.
  • 73. Santini F, Redner G, Iovin R, Rucci M. EyeRIS: A general-purpose system for eye movement contingent display control. Behav. Res. Methods. 2007;39(3):350–364. doi: 10.3758/bf03193003.
  • 74. Havermann K, Cherici C, Rucci M, Lappe M. Fine-scale plasticity of microscopic saccades. J. Neurosci. 2014;34(35):11665–11672. doi: 10.1523/JNEUROSCI.5277-13.2014.
  • 75. Arathorn DW, Stevenson SB, Yang Q, Tiruveedhula P, Roorda A. How the unstable eye sees a stable and moving world. J. Vis. 2013;13(10):1–19. doi: 10.1167/13.10.22.
  • 76. Rucci M, Casile A. Decorrelation of neural activity during fixational instability: Possible implications for the refinement of V1 receptive fields. Visual Neurosci. 2004;21(5):725–738. doi: 10.1017/S0952523804215073.
  • 77. Casile A, Rucci M. A theory of the influence of eye movements on the refinement of direction selectivity in the cat's primary visual cortex. Network: Comput. Neural Syst. 2009;20(4):197–232. doi: 10.3109/09548980903314204.
  • 78. Mostofi N, Boi M, Rucci M. Influence of microsaccades on contrast sensitivity: Theoretical analysis and experimental results. J. Vis. 2014;14(10):109.
  • 79. Croner LJ, Kaplan E. Receptive fields of P and M ganglion cells across the primate retina. Vision Res. 1995;35:7–24. doi: 10.1016/0042-6989(94)e0066-t.
  • 80. Derrington AM, Lennie P. Spatial and temporal contrast sensitivities of neurons in lateral geniculate nucleus of macaque. J. Physiol. 1984;357:219–240. doi: 10.1113/jphysiol.1984.sp015498.
