Abstract
We accurately perceive the visual scene despite moving our eyes ~3 times per second, an ability that requires the incorporation of eye position and retinal information. In this study, we assessed how this neural computation unfolds across three interconnected structures: frontal eye fields (FEF), intraparietal cortex (LIP/MIP), and the superior colliculus (SC). Single-unit activity was recorded in head-restrained monkeys performing visually guided saccades from different initial fixations. As previously shown, the receptive fields of most LIP/MIP neurons shifted to novel positions on the retina for each eye position, and these locations were not clearly related to each other in either eye- or head-centered coordinates (defined as hybrid coordinates). In contrast, the receptive fields of most SC neurons were stable in eye-centered coordinates. In FEF, visual signals were intermediate between those patterns: around 60% were eye-centered, whereas the remainder showed changes in receptive field location, boundaries, or responsiveness that rendered the response patterns hybrid or occasionally head-centered. These results suggest that FEF may act as a transitional step in an evolution of coordinates between LIP/MIP and SC. The persistence across cortical areas of mixed representations that do not provide unequivocal location labels in a consistent reference frame has implications for how these representations must be read out.
NEW & NOTEWORTHY How we perceive the world as stable using mobile retinas is poorly understood. We compared the stability of visual receptive fields across different fixation positions in three visuomotor regions. Irregular changes in receptive field position were ubiquitous in intraparietal cortex, evident but less common in the frontal eye fields, and negligible in the superior colliculus (SC), where receptive fields shifted reliably across fixations. Only the SC provides a stable labeled-line code for stimuli across saccades.
Keywords: coordinate transformation, frontal eye field, intraparietal cortex, superior colliculus, visual saccade
INTRODUCTION
The perceived locations of visual stimuli are derived from a combination of the location of retinal activation and information about the direction of eye gaze. How these signals are combined to synthesize a representation of visual space as the eyes move is unknown. The process is computationally complex, because any site on the retina could correspond to any given site in the visual scene, but only the correct correspondence for a particular eye gaze position is operative at any given time. Coordinate transformations are therefore both flexible and precise, and it has been suggested that they unfold as a gradual process across multiple brain regions.
Which visual areas are truly retinotopic, or eye-centered, and which employ higher order representations incorporating information about eye movements is uncertain. The retina is thought to provide the brain with an eye-centered map of the locations of visual stimuli, but after that, the recurrent interconnectivity of the brain in principle permits adjustment of reference frame. Several studies have indicated effects of eye position or movement on visual responses as early as the lateral geniculate nucleus (Lal and Friedlander 1989, 1990a, 1990b). In V1, some studies have found evidence that eye position modifies visual signals (Trotter and Celebrini 1999), and some have not (Gur and Snodderly 1997; Motter and Poggio 1990). Later visual structures exhibit more extensive sensitivity to eye position/movements (e.g., Bremmer 2000; Bremmer et al. 1997; Squatrito and Maioli 1996).
We focus in this report on a quantitative comparison of the reference frames employed in three interconnected brain regions involved in guiding saccadic eye movements to visual targets (Fig. 1): the lateral and medial banks of the intraparietal sulcus (LIP/MIP), the superior colliculus (SC), and the frontal eye fields (FEF) (e.g., Bisley and Goldberg 2003; Bruce and Goldberg 1985; Huerta et al. 1986; Freedman and Sparks 1997; Lee et al. 1988; Schall 1991; Schall and Hanes 1993; Scudder et al. 2002; Sparks et al. 1976; Stanton et al. 1988; Steenrod et al. 2013; Waitzman et al. 1988; Wurtz and Goldberg 1971; Wurtz et al. 2001). Electrical stimulation in the SC and FEF evokes short-latency saccades at low thresholds (Bruce et al. 1985; Robinson 1972; Robinson and Fuchs 1969; Schiller and Stryker 1972). Stimulation in parietal cortex can also evoke saccades (Constantin et al. 2007, 2009; Thier and Andersen 1996, 1998). Lesions of both the SC and FEF together eliminate saccades, suggesting that these two structures are necessary for saccade generation, whereas lesions of parietal cortex can produce a more general hemineglect (Bisiach and Luzzatti 1978; Dias and Segraves 1999; Schiller et al. 1980, 1987; Sommer and Tehovnik 1997).
Fig. 1.

Schematic of the connections between the areas LIP/MIP, SC, and FEF, their visual inputs, and their projections to the brain stem saccade generator. The LIP/MIP and FEF are highly interconnected and send excitatory projections to the intermediate and deep layers of the SC (continuous arrows indicate direct projections). The FEF also sends inhibitory indirect projections to the SC through the caudate and the substantia nigra pars reticulata (dotted arrows indicate indirect projections). Both the SC and the FEF directly project to the various areas of the brain stem saccade generator system. The LIP/MIP and the FEF receive visual inputs from extrastriate visual areas. The SC receives visual inputs mainly in its superficial layer from the primary and secondary visual cortices and the FEF, and also directly from the retina (for reviews, see Blatt et al. 1990; Schall et al. 1995; Sparks and Hartwich-Young 1989). Connections between oculomotor areas are shown in gray and visual inputs in black.
Parietal cortex was one of the first brain regions in which eye movements and visual signals were shown to interact (Andersen et al. 1993). These response patterns were initially characterized as “gain fields,” in which receptive fields were stable in eye-centered location but exhibited a response modulation with different eye positions (Andersen et al. 1985). Subsequent studies involving complete sampling of receptive field positions as the eyes moved suggested that receptive fields could also adopt new positions on the retina at different eye positions (Mullette-Gillman et al. 2005, 2009). These changes in receptive field position produced a code that varies across neurons and ranges from predominantly eye-centered to predominantly head-centered, with most neurons exhibiting “hybrid” response patterns that could not be categorized as either of these options. In contrast, the SC, though exhibiting gain fields (Van Opstal et al. 1995), employs a predominantly eye-centered code when tested and analyzed the same way as LIP/MIP (Lee and Groh 2012; see also DeSouza et al. 2011; Klier et al. 2001; Sadeh et al. 2015).
Considerable interest has focused recently on eye position gain fields in the FEF (Cassanello and Ferrera 2007) and on how receptive fields in the FEF change transiently at the time of the eye movement (Sommer and Wurtz 2006; Zirnsak and Moore 2014; Zirnsak et al. 2014). In addition, a detailed quantitative assessment of torsional, eye-in-head, and head-on-body components of the FEF’s reference frame has been conducted in a paradigm in which both the head and eyes were free to move with respect to each other and to the body (Keith et al. 2009; Sajad et al. 2015, 2016). However, a quantitative assessment of the reference frame during steady fixation with the head restrained (important for our larger purpose of comparing visual and auditory coding) has not, to our knowledge, been conducted.
In this article we report that the reference frame of visual signals in the FEF is intermediate between the SC and LIP/MIP. The results support the view that reference frames evolve along brain pathways involved in controlling visually guided behavior, becoming a plausible labeled line for eye-centered stimulus location only at the level of the SC.
MATERIALS AND METHODS
The task and recording conditions for the FEF data set have been explained previously (Caruso et al. 2016). We briefly report them below. All experimental procedures conformed to NIH guidelines (National Research Council 2011) and were approved by the Institutional Animal Care and Use Committee of Duke University. Two adult rhesus monkeys (Macaca mulatta) were implanted with a head holder to immobilize the head, a scleral search coil to track eye movements, and a recording cylinder over the left or right FEF. Similar procedures were used to prepare for recordings in LIP/MIP and SC, as reported previously (Lee and Groh 2012, 2014; Mullette-Gillman et al. 2005, 2009).
During data collection, the monkeys were seated 150 cm from a light-emitting diode (LED) display board with their heads restrained. The experiments took place in a dark (monkey F, male) or dimly illuminated room (monkey N, female); the dim illumination limited normal dark-induced nystagmus (Mulch and Lewitzki 1977) but did not provide any useful visual cues. Indeed, saccade accuracy was comparable between the two monkeys (Caruso et al. 2016) and to that in previous studies conducted in complete darkness (Metzger et al. 2004; Mullette-Gillman et al. 2005, 2009).
All data were recorded while the monkeys performed visually or aurally guided saccades, randomly interleaved, in an overlap saccade paradigm. Only visual trials were analyzed in this study. In each trial, a target was presented while the monkey fixated a visual fixation stimulus (Fig. 2, A and B; all visual stimuli were produced by green LEDs). The monkey withheld the saccade for 600–900 ms until the offset of the fixation target, permitting the dissociation of sensory-related activity from motor-related activity. The targets were located in front of the monkeys at 0° elevation and between −24° and +24° horizontally (9 locations separated by 6°, Fig. 2A). In each session, all saccades started from three initial fixation locations at −12°, 0°, +12° along the horizontal direction and at an elevation chosen to best sample the receptive field of the neuron under study.
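The geometry of this design can be made concrete with a short sketch (Python here for illustration; the study's own analyses were in MATLAB). For a fixation at f and a head-centered target at t, the eye-centered target location is simply t − f, and only 5 of the 9 target locations are sampled from every fixation in both frames:

```python
# Illustrative sketch (not the study's analysis code): the task's target
# grid and initial fixations, and the conversion between head- and
# eye-centered target locations.

TARGETS_HEAD = list(range(-24, 25, 6))   # 9 targets, 6 deg apart, 0 deg elevation
FIXATIONS = [-12, 0, 12]                 # initial fixation positions (horizontal)

def eye_centered(target_head_deg, fixation_deg):
    """Eye-centered location = head-centered location - fixation position."""
    return target_head_deg - fixation_deg

# Head-centered locations whose eye-centered counterparts are sampled from
# every fixation; these are the 5 locations usable in both frames.
common = [t for t in TARGETS_HEAD
          if all(eye_centered(t, f) in TARGETS_HEAD for f in FIXATIONS)]
print(common)  # the 5 shared locations: -12, -6, 0, 6, 12 deg
```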
Fig. 2.
Stimuli, task, and classification system of receptive fields. A: locations of stimuli and initial fixations. Varying the initial fixation permits the separation of eye- and head-centered reference frames by measuring the relative alignment of the responses in head- and eye-centered coordinates. B: task. Each trial starts with the appearance of a fixation light, which the monkey is required to fixate. A target then appears, but the monkey needs to wait until the fixation goes out before making a saccade to the target. C: predominantly eye-centered response patterns. The 3 tuning curves obtained for the 3 initial fixation locations align best in eye-centered coordinates (perfect alignment, Reye ≈ 1; right), whereas in head-centered coordinates (left), they are shifted by the distance between the initial eye positions (i.e., steps of 12° in the present task, resulting in Rhead < 1). C1 and C2 depict closed and open receptive fields, respectively. The classification metric is applicable in both cases. D: predominantly head-centered responses. The pattern is the opposite of that in C: the 3 tuning curves are aligned in head-centered coordinates and separated by 12° in eye-centered coordinates. D1 and D2 show that the classification is appropriate for both closed and open receptive fields. E: hybrid-partial shift response pattern. The 3 tuning curves are not well aligned in either head- or eye-centered coordinates: as the initial eye direction shifts left or right (red or blue, respectively), the tuning curves only partially move apart, by less than 12°. E1 and E2 show that both closed and open receptive fields are classified as hybrid-partial shift when the shift is less than the distance between initial fixations. F: hybrid-complex coordinates. The initial eye location affects the shape, gain, and/or alignment of the tuning curves in unpredictable ways that have no obvious relationship in either eye- or head-centered coordinates. G–J: schematics of population analysis. 
When Rhead is plotted vs. Reye, the data points should lie below the line of slope = 1 if the reference frame is predominantly eye-centered (G), above the line of slope = 1 if head-centered (H), along the line of slope = 1, but at positive values, if hybrid-partial shift (I), and randomly if hybrid-complex (J).
The behavioral paradigm, the acquisition of eye trajectory, and the recordings of single-cell activity were controlled using the Beethoven program (Ryklin Software). Eye gaze was sampled at 500 Hz. Single-neuron extracellular activity was acquired using a Plexon system (Sort Client software; Plexon) through tungsten microelectrodes (FHC; 0.7 to 2.5 MΩ at 1 kHz).
Analysis
All analyses were conducted with custom-made routines in MATLAB (The MathWorks). Only correct trials were included.
Spatial selectivity analysis.
This analysis has been described in detail by Caruso et al. (2016), Mullette-Gillman et al. (2005), and Lee and Groh (2012). Briefly, we defined a baseline period, comprising the 0–500 ms of fixation before the target onset, and a sensory window as the period 0–500 ms after target onset. The sensory window captured both the transient and sustained responses to visual targets. We selected the motor window differently in different areas, to better capture the saccade-related burst, which has different temporal characteristics across regions. The motor window was defined to start before the saccade onset (20 ms before saccade onset for the SC data, 50 ms before onset for the FEF data, and 150 ms before onset for the LIP data) and to end at saccade offset. Saccade onset and offset were defined as the moment, at 2-ms resolution, that the instantaneous speed of the eye movement exceeded or dropped below a threshold of 25°/s. (In addition to these fixed analysis windows, we also analyzed sliding 100-ms windows throughout the interval from target onset to the saccade, detailed below).
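The saccade-marking rule can be sketched as follows (an illustrative Python version rather than the original MATLAB routines; the speed trace below is synthetic): onset is the first sample where instantaneous speed exceeds 25°/s, and offset is the first subsequent sample where it drops back below that threshold, at the 2-ms resolution of the 500-Hz eye tracking.

```python
# Illustrative sketch of the saccade onset/offset criterion described above.
# Samples are 2 ms apart (500-Hz sampling); the speed trace is synthetic.

THRESHOLD = 25.0  # deg/s
DT_MS = 2         # sampling interval in ms

def detect_saccade(speed):
    """Return (onset_index, offset_index) or None if no saccade is found."""
    onset = next((i for i, v in enumerate(speed) if v > THRESHOLD), None)
    if onset is None:
        return None
    offset = next((i for i in range(onset, len(speed))
                   if speed[i] < THRESHOLD), len(speed) - 1)
    return onset, offset

# Synthetic trace: fixation noise, a ~40-ms saccade, then fixation again.
speed = [5.0] * 10 + [80.0, 300.0, 450.0, 300.0, 80.0] * 4 + [5.0] * 10
onset, offset = detect_saccade(speed)
print(onset * DT_MS, offset * DT_MS)  # onset and offset times in ms
```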
Neurons were considered responsive in the sensory/motor intervals if a two-tailed t-test comparing baseline activity with activity in the relevant response period was significant at P < 0.05. Spatial selectivity of responses (in the sensory or motor period) was assessed in both head- and eye-centered reference frames, using two two-way ANOVAs. Each ANOVA involved the three levels of initial eye position (−12°, 0°, +12°) as well as five levels of target location (−12° to +12° in 6° increments), defined in head-centered coordinates for the first ANOVA and in eye-centered coordinates for the second ANOVA. Cells were classified as spatially selective if either of the two ANOVAs yielded a significant main effect for target location or a significant interaction between the target and fixation locations (Lee and Groh 2012, 2014; Mullette-Gillman et al. 2005, 2009). In all tests, statistical significance was defined as a P value <0.05 (Table 1). To be consistent with our previous analyses, and because these tests were used as inclusion criteria rather than for hypothesis testing, we did not apply Bonferroni correction.
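The selectivity test can be sketched for a balanced design (3 fixation levels × 5 target levels × n repetitions). The code below is an illustrative Python/NumPy implementation of a standard balanced two-way ANOVA applied to simulated firing rates, not the study's MATLAB routines:

```python
# Illustrative balanced two-way ANOVA for the spatial-selectivity test:
# factor A = fixation (3 levels), factor B = target location (5 levels),
# n repetitions per condition. Firing rates are simulated, not real data.
import numpy as np
from scipy import stats

def two_way_anova(y):
    """y has shape (a, b, n). Returns p-values for A, B, and the A x B interaction."""
    a, b, n = y.shape
    grand = y.mean()
    mA = y.mean(axis=(1, 2))            # fixation means
    mB = y.mean(axis=(0, 2))            # target means
    mAB = y.mean(axis=2)                # per-condition (cell) means
    ss_a = b * n * ((mA - grand) ** 2).sum()
    ss_b = a * n * ((mB - grand) ** 2).sum()
    ss_ab = n * ((mAB - mA[:, None] - mB[None, :] + grand) ** 2).sum()
    ss_e = ((y - mAB[:, :, None]) ** 2).sum()
    df = [a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)]
    ms_e = ss_e / df[3]
    return [stats.f.sf((ss / d) / ms_e, d, df[3])
            for ss, d in zip((ss_a, ss_b, ss_ab), df[:3])]

rng = np.random.default_rng(0)
tuning = np.array([2.0, 10.0, 30.0, 10.0, 2.0])          # target tuning (spikes/s)
y = tuning[None, :, None] + rng.normal(0, 2, (3, 5, 8))  # 8 trials per condition
p_fix, p_target, p_interaction = two_way_anova(y)
# A cell counts as spatially selective if p_target < 0.05 or p_interaction < 0.05.
```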
Table 1.
Spatially selective populations in LIP/MIP, FEF, and SC
| Area | No. of Recorded Cells | Spatially Selective: Sensory Period | Spatially Selective: Motor Period | Spatially Selective: Sensory and Motor Periods |
|---|---|---|---|---|
| LIP/MIP | 275 | 125 (45%) | 121 (44%) | 69 (25%) |
| FEF | 324 | 174 (54%) | 160 (49%) | 117 (36%) |
| SC | 179 | 162 (91%) | 161 (90%) | 153 (85%) |
For each area (LIP/MIP, FEF, and SC) and each time window (sensory and motor periods; see materials and methods), the number of spatially selective cells is indicated, with percentages in parentheses. A cell was considered spatially selective if its response was modulated by target location in either head- or eye-centered coordinates, according to two 2-way ANOVAs with target location and fixation position as the two factors (one 2-way ANOVA for target locations defined with respect to the eye and one 2-way ANOVA for target locations defined with respect to the head; target location main effects, P < 0.05; interaction terms, P < 0.05).
Reference frame analysis.
To distinguish eye-centered and head-centered reference frames, we quantified the degree of alignment between eye-centered and head-centered tuning curves obtained from trials with initial eye positions at −12°, 0°, +12° along the horizontal axis. This analysis was applied to single cells during different time windows throughout the trials. In particular, for each time window considered, we constructed the three response tuning curves for the three fixation locations with target locations defined in head- or eye-centered coordinates (schematized in Fig. 2) and quantified their relative shift with an index akin to an average correlation coefficient (Eq. 1; Mullette-Gillman et al. 2005). We call it “reference frame index,” and for each response we calculate two indexes, in head-centered and in eye-centered coordinates, according to the formula:
$$R = \frac{1}{2}\sum_{F \in \{L,R\}} \frac{\sum_i \left(r_{F,i} - \bar{r}_F\right)\left(r_{C,i} - \bar{r}_C\right)}{\sqrt{\sum_i \left(r_{F,i} - \bar{r}_F\right)^2}\,\sqrt{\sum_i \left(r_{C,i} - \bar{r}_C\right)^2}} \tag{1}$$
where $r_{L,i}$, $r_{C,i}$, and $r_{R,i}$ are the average responses of the neuron to a target at location $i$ when the monkey’s eyes were fixated at the left (L), center (C), or right (R) position, and $\bar{r}_L$, $\bar{r}_C$, and $\bar{r}_R$ are the corresponding means across target locations. Only the target locations that were present for all three fixation positions in both head- and eye-centered frames of reference were included (5 locations: −12°, −6°, 0°, 6°, and 12°). The reference frame index is primarily sensitive to the relative translation of the three tuning curves and is comparatively insensitive to possible gain differences between them, provided the sampling includes some inflection point in the response curve. This can occur either by sampling from both sides of the receptive field center or by sampling from locations that are both inside and outside of the receptive field. The reference frame index values range from −1 to 1, where 1 indicates perfect alignment, 0 indicates no alignment, and −1 indicates perfect negative correlation (Lee and Groh 2012, 2014; Mullette-Gillman et al. 2005, 2009; Porter and Groh 2006). We calculated the 95% confidence intervals of the index with a bootstrap analysis (1,000 iterations of 80% of the data for each target-fixation combination).
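The computation can be sketched as follows, under our reading of Eq. 1 as the mean of the Pearson correlations of the left- and right-fixation tuning curves with the center-fixation curve (illustrative Python/NumPy, not the study's MATLAB code; the tuning curve below is simulated):

```python
# Sketch of the reference frame index: the mean Pearson correlation of the
# left- and right-fixation tuning curves with the center-fixation curve,
# computed over the 5 shared target locations. Curves here are simulated.
import numpy as np

def ref_frame_index(r_left, r_center, r_right):
    """Average correlation of the peripheral-fixation curves with the center one."""
    c_l = np.corrcoef(r_left, r_center)[0, 1]
    c_r = np.corrcoef(r_right, r_center)[0, 1]
    return (c_l + c_r) / 2.0

def _resampled_curve(trials, rng, frac):
    """Mean tuning curve after resampling trials within each target condition."""
    n_trials, n_targets = trials.shape
    k = max(1, int(frac * n_trials))
    return np.array([trials[rng.choice(n_trials, k, replace=True), j].mean()
                     for j in range(n_targets)])

def bootstrap_ci(trials_l, trials_c, trials_r, n_boot=1000, frac=0.8, seed=0):
    """95% confidence interval of the index (each trials_* is n_trials x n_targets)."""
    rng = np.random.default_rng(seed)
    vals = [ref_frame_index(_resampled_curve(trials_l, rng, frac),
                            _resampled_curve(trials_c, rng, frac),
                            _resampled_curve(trials_r, rng, frac))
            for _ in range(n_boot)]
    return np.percentile(vals, [2.5, 97.5])

# Curves identical across fixations in a given frame give R = 1 in that frame;
# curves shifted apart by 2 target steps (12 deg) give a much lower R.
curve = np.array([5.0, 20.0, 40.0, 20.0, 5.0])
```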
Each set of responses was classified, based on the quantitative comparison between the eye- and head-centered indexes, as 1) eye-centered if the eye-centered reference index was statistically higher than the head-centered reference index (that is, if the 95% confidence interval of the eye-centered reference index was positive and larger than the 95% confidence interval of the head-centered reference index); 2) head-centered if the opposite pattern was found; 3) hybrid-partial shift if the two indexes were not statistically different from each other but at least one of them was statistically different from zero (that is, if the 95% confidence intervals of the eye- and head-centered indexes overlapped with each other and at least one did not include 0); or 4) hybrid-complex if the two indexes were not statistically different from each other or from zero (that is, if the 95% confidence intervals of the eye- and head-centered indexes overlapped with each other and with 0; see Fig. 4). These latter two categories were combined in our previous reference frame analyses of activity patterns in the SC and LIP/MIP (Lee and Groh 2012; Mullette-Gillman et al. 2005, 2009). In this study, we consider them both separately and together as appropriate to provide a more comprehensive comparison of the reference frames across regions. We conducted the reference frame analysis for each cell during the sensory and motor periods (see Figs. 4 and 5, A and B) and across time in 100-ms windows sliding with steps of 50 ms from target onset and from saccade onset (Figs. 4 and 5, C and D).
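The four-way classification can be written compactly (an illustrative Python sketch, not the study's MATLAB code; `ci_eye` and `ci_head` stand for the (low, high) bounds of the bootstrapped 95% confidence intervals):

```python
# Illustrative classification of a response from the 95% confidence
# intervals (lo, hi) of its eye- and head-centered reference frame indexes:
# eye-centered if the eye-centered CI is positive and above the
# (non-overlapping) head-centered CI; head-centered for the opposite;
# hybrid-partial shift if the CIs overlap but at least one excludes 0;
# hybrid-complex if the CIs overlap and both include 0.

def classify(ci_eye, ci_head):
    overlap = ci_eye[0] <= ci_head[1] and ci_head[0] <= ci_eye[1]
    eye_nonzero = ci_eye[0] > 0 or ci_eye[1] < 0
    head_nonzero = ci_head[0] > 0 or ci_head[1] < 0
    if not overlap:
        if ci_eye[0] > ci_head[1] and ci_eye[0] > 0:
            return "eye-centered"
        if ci_head[0] > ci_eye[1] and ci_head[0] > 0:
            return "head-centered"
    if eye_nonzero or head_nonzero:
        return "hybrid-partial shift"
    return "hybrid-complex"

print(classify((0.7, 0.95), (-0.1, 0.3)))   # eye-centered
print(classify((0.5, 0.8), (0.4, 0.7)))     # hybrid-partial shift
print(classify((-0.2, 0.2), (-0.1, 0.25)))  # hybrid-complex
```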
Fig. 4.
Reference frames in the FEF population. A and B: the reference frame indexes in head-centered and eye-centered coordinates are plotted for spatially selective cells in each time window: visual sensory (A) and visual motor (B). Responses are classified as eye-centered if the 95% confidence interval of the eye-centered coefficient was positive, larger than, and nonoverlapping with the 95% confidence interval of the head-centered coefficient (bootstrap analysis; see materials and methods); these responses are indicated in orange. Responses were classified as head-centered with the opposite pattern (blue). Finally, hybrid-partial shift reference frames (dark gray) have non-zero overlapping 95% confidence intervals, whereas hybrid-complex responses (light gray) have reference frame indexes not statistically different from zero. The pie charts summarize the proportions of responses classified as eye-centered, head-centered, and the subtypes of hybrid for each time window. C and D: time course of the average eye-centered (black) and head-centered (gray) reference frame indexes for the FEF population of spatially selective responses (see materials and methods). The indexes were calculated in bins of 100 ms, sliding with a step of 50 ms and averaged across the population. Trials were aligned at target onset (C) and at saccade onset (D). Filled circles indicate that the difference between the 2 average indexes is statistically significant (t-test for each bin, P < 0.05). The time frames displayed in C and D overlap and include the variable time to the offset of the fixation light as well as saccade reaction time (averaging ~200 ms) as indicated by the shaded boxes.
Fig. 5.
Reference frame indexes during sensory and motor periods in LIP/MIP, FEF, and SC. A and B: the percentage of cells classified as eye-centered (orange), head-centered (blue), and hybrid (dark gray, hybrid-partial shift; light gray, hybrid-complex) are shown for all spatially selective cells in LIP/MIP, FEF, and SC in the sensory period (A) and in the motor period (B). For the sensory response, the time window is the 500 ms immediately after target onset. For the motor response, the time window changes with the saccade duration, ranging from −20 (SC), −50 (FEF), or −100 ms (LIP/MIP) from saccade onset to saccade offset (see materials and methods). For the FEF, the data are the same as in Fig. 4, A and B. C and D: time course of the eye-centered reference frame indexes for the populations of spatially selective cells (C) and for all recorded cells (D) in LIP/MIP (tan), FEF (green), and SC (dark red). Filled circles indicate that the eye-centered correlation coefficient was significantly larger than the head-centered one (2-tailed t-test for each bin, P < 0.05). Gray boxes indicate approximate timing of fixation light offset as well as saccade onset, as in Fig. 4, C and D. The FEF data are the same as in Fig. 4, C and D. The LIP/MIP and SC data were collected in studies by Lee and Groh (2012, 2014) and Mullette-Gillman et al. (2005, 2009).
LIP/MIP and SC Data Sets
Figure 5 includes data from area LIP/MIP and SC that we have previously collected and described (Lee and Groh 2012, 2014; Mullette-Gillman et al. 2005, 2009). The tasks and recording techniques were the same as in the present study, but each data set was recorded from different monkeys (2 monkeys per brain area). The SC data set consists of a total of 179 single cells recorded in the left and right SC of 2 monkeys. The LIP/MIP data set consists of a total of 275 single cells recorded in the left and right LIP/MIP of 2 monkeys. For these two additional data sets, responsiveness, spatial selectivity, and reference frame were assessed the same way as for the FEF data set. Some of the data in Fig. 5 were reanalyzed in different time windows than in the original studies to allow for a fair comparison across brain areas. However, changing the analysis windows did not change the overall results of the previous studies.
RESULTS
Overview
We first describe our new data concerning the visual reference frame in FEF before quantitatively comparing FEF with our previously reported results in LIP/MIP and the SC (Lee and Groh 2012, 2014; Mullette-Gillman et al. 2005, 2009). We recorded single-cell response profiles while the monkeys performed visually guided saccades, interleaved with aurally guided saccades that were not analyzed in this study (see materials and methods). The task and the location of the stimuli are described in Fig. 2, A and B.
We tested whether the receptive fields of FEF neurons (N = 324) shifted with the eyes or stayed fixed relative to the head by defining two reference frame indexes, Reye and Rhead, in eye- and head-centered coordinates. These indexes are akin to the average correlation coefficient between the tuning curves obtained from the three initial fixations (see materials and methods). Thus they measure the relative alignment of the tuning curves in eye-centered or head-centered coordinates, largely independent of changes in overall response magnitude, which contribute little to this measure. Figure 2 schematically shows our classification system based on the quantitative comparison between Reye and Rhead. Response patterns were considered either “eye-centered” (Fig. 2C; the tuning curves aligned in eye-coordinates better than in head-coordinates), “head-centered” (Fig. 2D; opposite pattern), or “hybrid” if the response patterns were not well described in either coordinate system. A hybrid reference frame refers to circumstances such as when the retinal positions of receptive fields differ across eye positions but not in the way consistent with head-centered coordinates, either because the receptive fields shift by only part of the difference in fixation position (Fig. 2E; we refer to this scenario as “hybrid-partial shift”) or because the receptive fields change across fixations in unpredictable ways that appear random (Fig. 2F, “hybrid-complex”). Note that the classification depends uniquely on the alignment, and not on the shape of the receptive fields (Fig. 2, C1, C2, D1, D2, E1, and E2). Figure 2, G–J, shows the patterns expected when Rhead is plotted against Reye in each of these scenarios: data points would cluster below (G), above (H), or along the diagonal (I and J) depending on the observed coordinate system.
It should be emphasized that this classification scheme is merely a tool to capture features of the neural population as a whole; there is little evidence that these labels represent functionally distinct categories. Rather, they likely reflect points along a continuum of coding properties.
A final note is that in these experiments the head, body, and world were immobile with respect to each other; it is therefore not possible to distinguish head-, body-, and world-centered reference frames from one another.
Example Neurons in the FEF
Figure 3 shows the responses from nine individual cells during the sensory (Fig. 3, A, C, E, G, and I) and motor periods (Fig. 3, B, D, F, H, and J). During both the sensory period and the motor burst, the most common pattern was eye-centered (Fig. 3, A–D) with an unambiguous difference between high reference frame indexes in eye-centered coordinates (close to 1) and very low reference frame indexes in head-centered coordinates (the examples shown in Fig. 3, A–D, had, respectively: Reye = 0.97, 0.93, 0.82, and 0.97 vs. Rhead = −0.17, 0.18, 0.37, and 0.35).
Fig. 3.
Examples of responses in the FEF. A–J: each panel shows the tuning curves for various example cells during the sensory or motor period (see materials and methods). The tuning curves are plotted in both head-centered (left) and eye-centered coordinates (right), and the reference frame indexes Rhead and Reye are indicated. A–D show examples of eye-centered responses (Reye statistically higher than Rhead) during the sensory period (A and C) and during the motor burst (B and D). The two responses in A and B (from the same cell at different time windows) show a complete sampling of the receptive field, whereas the examples in C and D show a partial sampling of the receptive fields. E: head-centered responses (Reye statistically smaller than Rhead) during the sensory period. F: head-centered responses during the motor burst. G: hybrid-complex responses (Reye not statistically different from Rhead and both not statistically different from zero) during the sensory period. H: hybrid-partial shift responses (Reye not statistically different from Rhead but at least one of them statistically higher than zero) during the motor burst. I: untuned responses during the sensory period. J: untuned responses during the motor burst. The reference frame index R was not calculated for the responses not significantly modulated by the target location as shown in I and J. In A–J, the thin, colored horizontal lines represent the average baselines for the 3 different fixation locations. Ipsi, ipsilateral; Contra, contralateral.
Some neurons were classified as head-centered on the basis of the quantitative comparison of the reference frame indexes (Rhead > Reye), but as can be seen in Fig. 3, E and F, these response patterns were not as strongly head-centered as the eye-centered neurons were eye-centered. The values of Rhead for these two examples were 0.68 and 0.58, whereas the values of Reye were 0.31 and 0.28.
There were also neurons that exhibited hybrid response patterns. The example in Fig. 3G shows hybrid-complex tuning curves: two of the curves are well aligned in head-centered coordinates, representing the receptive fields computed from the left (red) and central (green) fixation locations, and two are well aligned in eye-centered coordinates, representing the receptive fields computed from the center (green) and right (blue) fixation locations. The example in Fig. 3H shows hybrid-partial shift tuning curves: the curves clearly shift laterally when the eyes move to different initial fixation locations, but not by the correct amount. In both of these examples, Reye and Rhead are about equal to each other (Fig. 3G: Reye = Rhead = 0.34; Fig. 3H: Rhead = 0.81, Reye = 0.80).
A final response pattern involved responses that were only weakly modulated by target location in any reference frame. Figure 3, I and J, illustrates examples in which responses exceeded baseline for all locations tested but for which there was little evidence of a circumscribed receptive field among the locations tested. Target-evoked responses even occurred for locations well into the ipsilateral field of space. Note that these responses were not necessarily identical for all locations and could vary with eye position (e.g., the curves in Fig. 3, I and J, show an overall gain sensitivity to eye position). However, the lack of spatial sensitivity in these neurons makes it impossible to evaluate their reference frame. Accordingly, we tested for spatial sensitivity using an ANOVA involving the 5 targets at −12°, −6°, 0°, 6°, and 12° (Table 1; see materials and methods and results, Overview). Neurons that failed to show spatial sensitivity in either head- or eye-centered coordinates made up about half of the sample in FEF (as well as LIP/MIP), and these neurons were excluded from the population analyses described in the next section.
Population Results in the FEF
Eye-centered responses in the FEF are prevalent in both sensory and motor periods, corresponding to ~60% of the population. Figure 4, A and B, shows this quantitatively. Like Fig. 2, G–J, these graphs plot Reye vs. Rhead. The cross hairs indicate the 95% confidence intervals on those values (see materials and methods). Data points are color coded orange if the 95% confidence intervals suggest Reye > Rhead (the 95% range of Rhead is lower than the 95% range of Reye), blue if Rhead > Reye, and gray if Reye and Rhead are approximately equal; gray shading corresponds to Reye ≈ Rhead and either Reye > 0 or Rhead > 0 (dark) and Reye ≈ Rhead ≈ 0 (light). The pie charts in Fig. 4, A and B, indicate the percentages of cells classified as eye-centered (orange), head-centered (blue), or the various subtypes of hybrid coding (light and dark gray).
The pattern of reference frames was similar in the sensory vs. motor periods, and there was little evidence of a systematic change in the representation when timescale was investigated more closely. Figure 4, C and D, shows the average Reye and Rhead values in 100-ms time bins sliding in 50-ms increments from target onset (Fig. 4C) and from saccade onset (Fig. 4D). Reye averages ~0.4, whereas Rhead averages ~0.1, with little change except for slight upticks at target onset and saccade onset. Although Reye is consistently significantly greater than Rhead (filled symbols; t-test on each time bin, P < 0.05), its value is not very high in absolute terms. This is because only ~60% of FEF neurons are classifiable as having eye-centered responses, and among these, many Reye values were not very high. On the whole, the reference frame in the FEF is more eye-centered than it is head-centered, but it is not fully eye-centered.
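The windowing used for these time courses (100-ms bins advanced in 50-ms steps) can be sketched for a single spike train as follows; `sliding_bin_counts` is an illustrative helper, not the authors' code, and the spike times are fabricated.

```python
import numpy as np

def sliding_bin_counts(spike_times, t_start, t_stop, width=0.100, step=0.050):
    """Spike counts in 100-ms windows advanced in 50-ms steps.

    spike_times: spike times (s) for one trial, aligned to target (or
    saccade) onset.  At the population level, the same windows would be
    used to average R_eye and R_head across cells, as in Fig. 4, C and D.
    Returns the window centers and the count in each window.
    """
    starts = np.arange(t_start, t_stop - width + 1e-9, step)
    counts = np.array([np.sum((spike_times >= s) & (spike_times < s + width))
                       for s in starts])
    return starts + width / 2, counts

centers, counts = sliding_bin_counts(np.array([0.01, 0.04, 0.06, 0.12]), 0.0, 0.2)
print(counts.tolist())  # [3, 2, 1]: overlapping windows share spikes
```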
Comparison with LIP/MIP and the SC
We next asked how the degree of eye-centeredness in the representation in the FEF compares with the representations in parietal cortex and the SC. Figure 5 presents a comparison of the signals observed in each brain region under identical experimental conditions [see materials and methods and Lee and Groh (2012, 2014) and Mullette-Gillman et al. (2005, 2009) for descriptions of the 2 additional data sets recorded in different monkeys with the same technique and during the same task]. As in the analysis of the FEF, we focused on two measures. First, we compared the number of cells classified as eye-centered, head-centered, or hybrid across areas (Fig. 5, A and B). Second, we compared the time course of the population-averaged eye-centered reference frame in the LIP/MIP, FEF, and SC (Fig. 5, C and D).
The results indicate a continuum of reference frame across these brain areas, with LIP/MIP predominantly hybrid, SC predominantly eye-centered, and FEF intermediate between the two. In both sensory and motor periods, the proportion of eye-centered responses remains a minority in the LIP/MIP (increasing from ~20% to ~40% in time) and a majority in the SC (~80% consistently in the 2 time periods). The FEF is between the two trends, with ~60% eye-centered responses in both time periods. Conversely, hybrid responses dominate in LIP/MIP (from ~60% to ~40%) and fall to <40% in the FEF and to <20% in the SC. Head-centered responses are a minority in all three areas, although a gradient is evident in both periods: around one-fifth of responses in LIP/MIP are head-centered, fewer than 10% in the FEF, and almost none in the SC (Fig. 5, A and B). The proportion of neurons classifiable as eye-centered across areas differed significantly according to a χ2 test in both sensory and motor periods (Table 2; P < 0.05). Post hoc analyses comparing FEF to LIP/MIP and FEF to SC confirmed that during both sensory and motor periods, response patterns classifiable as eye-centered are more prevalent in FEF than in LIP/MIP and less prevalent in FEF than in SC (Table 2).
Table 2.
Comparison of eye-centered prevalence across areas
| | LIP/MIP | FEF | SC | Chi-Square Test | Post hoc LIP/MIP vs. FEF | Post hoc FEF vs. SC |
|---|---|---|---|---|---|---|
| Sensory responses | 28/125 | 112/174 | 140/162 | χ2 = 122.81, df = 2, P < 1 × 10−15 | χ2 = 51.46, df = 1, P = 7 × 10−13 | χ2 = 21.76, df = 1, P = 3 × 10−6 |
| Motor responses | 20/121 | 94/160 | 121/161 | χ2 = 98.50, df = 2, P < 1 × 10−15 | χ2 = 50.94, df = 1, P = 1 × 10−12 | χ2 = 9.76, df = 1, P = 0.0018 |

Entries in the area columns give eye-centered cells/cells tested.
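As a consistency check, the sensory-period counts in Table 2 can be fed to a standard Pearson chi-square test. The sketch below uses scipy and assumes no Yates continuity correction on the 2 × 2 post hoc tests; under that assumption it reproduces the reported statistics.

```python
from scipy.stats import chi2_contingency

# Sensory-period counts from Table 2: eye-centered cells / cells tested
eye = {"LIP/MIP": 28, "FEF": 112, "SC": 140}
n   = {"LIP/MIP": 125, "FEF": 174, "SC": 162}

def chi2_eye_centered(areas):
    """Pearson chi-square on eye-centered vs. not, for the given areas."""
    table = [[eye[a] for a in areas],
             [n[a] - eye[a] for a in areas]]
    # correction=False: plain Pearson chi-square, no continuity correction
    # (an assumption; it matches the 2x2 post hoc values in Table 2)
    chi2, p, df, _ = chi2_contingency(table, correction=False)
    return chi2, p, df

chi2_all, p_all, df_all = chi2_eye_centered(["LIP/MIP", "FEF", "SC"])
chi2_lf, _, _ = chi2_eye_centered(["LIP/MIP", "FEF"])
chi2_fs, _, _ = chi2_eye_centered(["FEF", "SC"])
print(round(chi2_all, 2), round(chi2_lf, 2), round(chi2_fs, 2))
# → 122.81 51.46 21.76, matching the sensory row of Table 2
```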
A similar picture emerges when the average values of the reference frame indexes are considered across time. In Fig. 5, C and D, the average value of Reye for each brain area is plotted across time. Because the proportion of spatially selective cells varied across the three areas, we repeated the analysis twice: for the subpopulation of spatially selective cells (Fig. 5C) and for all recorded cells (Fig. 5D). In both populations, the visual responses are significantly more eye-centered than head-centered in all areas, but with a clear gradient from the SC being the most eye-centered (average index around 0.6–0.8 during the trial) to the FEF (average index around 0.2–0.4, occasionally reaching 0.5) to LIP/MIP (average index consistently below 0.2).
DISCUSSION
Our quantitative assessment of the reference frame for visual space representation in the FEF shows that visual signals were mostly but not completely eye-centered. Around 60% of individual neurons were classified as eye-centered, whereas one-third of neurons had hybrid reference frames and a minority had weakly head-centered reference frames. At the population level, the eye-centered reference frame index averaged ~0.2–0.4 and was largely stable across time. Compared with those in the SC and LIP/MIP, the pattern of reference frame in the FEF was less eye-centered than in the SC but more eye-centered than in LIP/MIP, which is predominantly hybrid in its coding format.
There is controversy in the literature about the reference frames employed in the FEF and LIP/MIP. Caution is particularly warranted because terms such as “retinotopic” are sometimes used to describe visual organization when eye position is fixed or when visual sampling is shifted for each fixation position; such studies do not directly assess reference frames. The most apt comparisons with our experiments are studies that 1) systematically varied initial eye position, 2) sampled receptive fields along the same dimension as the variation in eye position, and 3) quantified the results at the population level. For a more detailed discussion of how the assessment of reference frame can go wrong when these conditions are not met, see Figs. 10 and 11 of Mullette-Gillman et al. (2009).
With these caveats in mind, our FEF results are, on the whole, consistent with the most relevant previous studies. The closest related studies involved visual memory-guided saccades in head-unrestrained monkeys (Sajad et al. 2015, 2016). These studies allowed natural variation in initial eye and head position and used a different analysis method to assess a variety of reference frames distinguished in part on the basis of torsion (Keith et al. 2009). An eye-centered reference frame was the most common best match, although most other reference frames could often not be statistically ruled out. Several studies involving receptive field mapping at the time of eye movements have provided proof of principle that many FEF receptive fields do not maintain a fixed position on the retina across eye movements (Sommer and Wurtz 2006; Umeno and Goldberg 1997, 2001; Zirnsak et al. 2014).
Studies employing electrical stimulation have also shown mixed reference frame information. Although stimulation does not typically drive the eyes to a fixed position in the orbits, the direction and amplitude of the stimulation-evoked saccade can vary with initial eye position (Bruce et al. 1985; Dassonville et al. 1992; Monteon et al. 2013; Robinson and Fuchs 1969; Russo and Bruce 1993). Monteon et al. (2013) reported that ~71% of sites yield saccades of a stable vector across different eye positions (i.e., eye-centered) and ~29% yield saccades that vary considerably, more consistent with a head- or body-centered coordinate system.
There is also a previous study of auditory signals in the FEF and how they are affected by eye position (Russo and Bruce 1994); we plan to consider this question further in a future paper.
The range of reference frames observed in the FEF provides a context for interpreting those observed in both the SC and LIP/MIP. The SC has been characterized as an eye-centered structure (Klier et al. 2001; Lee and Groh 2012; Schiller and Stryker 1972), whereas the LIP/MIP is not. Mullette-Gillman et al. (2005, 2009) were the first to describe the reference frame of LIP/MIP as predominantly hybrid. Andersen and colleagues (e.g., Andersen and Mountcastle 1983; Andersen et al. 1985, 1990) had already reported an interaction between visual inputs and eye position in the LIP/MIP, which they interpreted as reflecting eye-centered gain fields. However, their sampling of receptive fields sometimes focused on the dimension orthogonal to the change in fixation position, which made their results difficult to interpret. Adopting a more appropriate sampling technique across brain areas, we have demonstrated not only strong eye-centered signals (the vast majority in the SC) but also signals along a continuum between eye-centered and head-centered coordinates (in different proportions, in all 3 areas). Thus the hybrid reference frames that we have characterized are unlikely to be due to methodological shortfalls. Furthermore, the identification, under the same conditions, of a gradual shift in the strength of eye-centered representations from LIP/MIP to FEF to SC supports the view that there is a genuine transition in coding between these different brain areas. This shift in coding was evident even though the different brain areas were assessed in different individual monkeys. How this observation may interact with potential differences in the breadth of receptive fields across these structures remains to be determined. In general, although these structures are reciprocally interconnected, it appears that there is a sequential element in how signals are processed from LIP/MIP to FEF to SC.
Why the brain uses reference frames that are impure, and why this should vary across brain areas, is unclear. The receptive field as a labeled line for stimulus location is a concept that dates back to the discovery of receptive fields. However, if receptive fields can vary in their position and do not show stability with respect to either eye- or head-centered coordinates, such neurons on their own cannot provide a labeled line for stimulus location. Rather, the activity of a population of such neurons would be required to disambiguate the spatial signals. Although this is very possible (e.g., Pouget and Sejnowski 1997), why it is desirable from a coding perspective has not been shown. Models of coordinate transformations show that the brain should be capable of transforming reference frames in a single step without the use of intermediate stages (Groh and Sparks 1992).
An understanding of how coordinate transformations unfold in a sensorimotor context has implications more broadly, because this is an example of a many-to-many mapping that occurs in numerous other contexts. For example, numerous types of perceptual constancy involve physical stimuli that vary but produce similar percepts (space constancy, size constancy, color constancy, the same musical melody in different keys, etc.). Such constancy can require a mapping from nearly any possible range within a physical stimulus dimension to a common perceptual dimension. Similarly, categorization requires mapping different types of physical stimuli that can be grouped into a category, and associative learning involves relating two different physical stimuli in a potentially arbitrary fashion. The underlying computation supporting these abilities may be very similar to that involved with spatial coordinate transformations.
GRANTS
This work was supported by National Institute of Neurological Disorders and Stroke Grant R01 NS50942-05.
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the authors.
AUTHOR CONTRIBUTIONS
V.C.C. and J.M.G. conceived and designed research; V.C.C. and D.S.P. performed experiments; V.C.C. analyzed data; V.C.C. and J.M.G. interpreted results of experiments; V.C.C. and J.M.G. prepared figures; V.C.C. and J.M.G. drafted manuscript; V.C.C., D.S.P., M.A.S., and J.M.G. edited and revised manuscript; V.C.C., D.S.P., M.A.S., and J.M.G. approved final version of manuscript.
ACKNOWLEDGMENTS
We thank Jessi Cruger, Karen Waterstradt, Christie Holmes, and Stephanie Schlebusch for animal care and Tom Heil for technical assistance. We have benefitted from thoughtful discussions with JungAh Lee, Kurtis Gruters, Shawn M. Willett, Jeffrey M. Mohl, David Murphy, Bryce Gessell, and James Wahlberg.
REFERENCES
- Andersen RA, Bracewell RM, Barash S, Gnadt JW, Fogassi L. Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. J Neurosci 10: 1176–1196, 1990. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Andersen RA, Essick GK, Siegel RM. Encoding of spatial location by posterior parietal neurons. Science 230: 456–458, 1985. doi: 10.1126/science.4048942. [DOI] [PubMed] [Google Scholar]
- Andersen RA, Mountcastle VB. The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci 3: 532–548, 1983. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Andersen RA, Snyder LH, Li CS, Stricanne B. Coordinate transformations in the representation of spatial information. Curr Opin Neurobiol 3: 171–176, 1993. doi: 10.1016/0959-4388(93)90206-E. [DOI] [PubMed] [Google Scholar]
- Bisiach E, Luzzatti C. Unilateral neglect of representational space. Cortex 14: 129–133, 1978. doi: 10.1016/S0010-9452(78)80016-1. [DOI] [PubMed] [Google Scholar]
- Bisley JW, Goldberg ME. The role of the parietal cortex in the neural processing of saccadic eye movements. Adv Neurol 93: 141–157, 2003. [PubMed] [Google Scholar]
- Blatt GJ, Andersen RA, Stoner GR. Visual receptive field organization and cortico-cortical connections of the lateral intraparietal area (area LIP) in the macaque. J Comp Neurol 299: 421–445, 1990. doi: 10.1002/cne.902990404. [DOI] [PubMed] [Google Scholar]
- Bremmer F. Eye position effects in macaque area V4. Neuroreport 11: 1277–1283, 2000. doi: 10.1097/00001756-200004270-00027. [DOI] [PubMed] [Google Scholar]
- Bremmer F, Ilg UJ, Thiele A, Distler C, Hoffmann KP. Eye position effects in monkey cortex. I. Visual and pursuit-related activity in extrastriate areas MT and MST. J Neurophysiol 77: 944–961, 1997. doi: 10.1152/jn.1997.77.2.944. [DOI] [PubMed] [Google Scholar]
- Bruce CJ, Goldberg ME. Primate frontal eye fields. I. Single neurons discharging before saccades. J Neurophysiol 53: 603–635, 1985. doi: 10.1152/jn.1985.53.3.603. [DOI] [PubMed] [Google Scholar]
- Bruce CJ, Goldberg ME, Bushnell MC, Stanton GB. Primate frontal eye fields. II. Physiological and anatomical correlates of electrically evoked eye movements. J Neurophysiol 54: 714–734, 1985. doi: 10.1152/jn.1985.54.3.714. [DOI] [PubMed] [Google Scholar]
- Caruso VC, Pages DS, Sommer MA, Groh JM. Similar prevalence and magnitude of auditory-evoked and visually evoked activity in the frontal eye fields: implications for multisensory motor control. J Neurophysiol 115: 3162–3173, 2016. doi: 10.1152/jn.00935.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cassanello CR, Ferrera VP. Computing vector differences using a gain field-like mechanism in monkey frontal eye field. J Physiol 582: 647–664, 2007. doi: 10.1113/jphysiol.2007.128801. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Constantin AG, Wang H, Martinez-Trujillo JC, Crawford JD. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex. J Neurophysiol 98: 696–709, 2007. doi: 10.1152/jn.00206.2007. [DOI] [PubMed] [Google Scholar]
- Constantin AG, Wang H, Monteon JA, Martinez-Trujillo JC, Crawford JD. 3-Dimensional eye-head coordination in gaze shifts evoked during stimulation of the lateral intraparietal cortex. Neuroscience 164: 1284–1302, 2009. doi: 10.1016/j.neuroscience.2009.08.066. [DOI] [PubMed] [Google Scholar]
- Dassonville P, Schlag J, Schlag-Rey M. The frontal eye field provides the goal of saccadic eye movement. Exp Brain Res 89: 300–310, 1992. doi: 10.1007/BF00228246. [DOI] [PubMed] [Google Scholar]
- DeSouza JF, Keith GP, Yan X, Blohm G, Wang H, Crawford JD. Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 31: 18313–18326, 2011. doi: 10.1523/JNEUROSCI.0990-11.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dias EC, Segraves MA. Muscimol-induced inactivation of monkey frontal eye field: effects on visually and memory-guided saccades. J Neurophysiol 81: 2191–2214, 1999. doi: 10.1152/jn.1999.81.5.2191. [DOI] [PubMed] [Google Scholar]
- Freedman EG, Sparks DL. Activity of cells in the deeper layers of the superior colliculus of the rhesus monkey: evidence for a gaze displacement command. J Neurophysiol 78: 1669–1690, 1997. doi: 10.1152/jn.1997.78.3.1669. [DOI] [PubMed] [Google Scholar]
- Groh JM, Sparks DL. Two models for transforming auditory signals from head-centered to eye-centered coordinates. Biol Cybern 67: 291–302, 1992. doi: 10.1007/BF02414885. [DOI] [PubMed] [Google Scholar]
- Gur M, Snodderly DM. Visual receptive fields of neurons in primary visual cortex (V1) move in space with the eye movements of fixation. Vision Res 37: 257–265, 1997. doi: 10.1016/S0042-6989(96)00182-4. [DOI] [PubMed] [Google Scholar]
- Huerta MF, Krubitzer LA, Kaas JH. Frontal eye field as defined by intracortical microstimulation in squirrel monkeys, owl monkeys, and macaque monkeys: I. Subcortical connections. J Comp Neurol 253: 415–439, 1986. doi: 10.1002/cne.902530402. [DOI] [PubMed] [Google Scholar]
- Keith GP, DeSouza JF, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 180: 171–184, 2009. doi: 10.1016/j.jneumeth.2009.03.004. [DOI] [PubMed] [Google Scholar]
- Klier EM, Wang H, Crawford JD. The superior colliculus encodes gaze commands in retinal coordinates. Nat Neurosci 4: 627–632, 2001. doi: 10.1038/88450. [DOI] [PubMed] [Google Scholar]
- Lal R, Friedlander MJ. Gating of retinal transmission by afferent eye position and movement signals. Science 243: 93–96, 1989. doi: 10.1126/science.2911723. [DOI] [PubMed] [Google Scholar]
- Lal R, Friedlander MJ. Effect of passive eye position changes on retinogeniculate transmission in the cat. J Neurophysiol 63: 502–522, 1990a. doi: 10.1152/jn.1990.63.3.502. [DOI] [PubMed] [Google Scholar]
- Lal R, Friedlander MJ. Effect of passive eye movement on retinogeniculate transmission in the cat. J Neurophysiol 63: 523–538, 1990b. doi: 10.1152/jn.1990.63.3.523. [DOI] [PubMed] [Google Scholar]
- Lee C, Rohrer WH, Sparks DL. Population coding of saccadic eye movements by neurons in the superior colliculus. Nature 332: 357–360, 1988. doi: 10.1038/332357a0. [DOI] [PubMed] [Google Scholar]
- Lee J, Groh JM. Auditory signals evolve from hybrid- to eye-centered coordinates in the primate superior colliculus. J Neurophysiol 108: 227–242, 2012. doi: 10.1152/jn.00706.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lee J, Groh JM. Different stimuli, different spatial codes: a visual map and an auditory rate code for oculomotor space in the primate superior colliculus. PLoS One 9: e85017, 2014. doi: 10.1371/journal.pone.0085017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Metzger RR, Mullette-Gillman OA, Underhill AM, Cohen YE, Groh JM. Auditory saccades from different eye positions in the monkey: implications for coordinate transformations. J Neurophysiol 92: 2622–2627, 2004. doi: 10.1152/jn.00326.2004. [DOI] [PubMed] [Google Scholar]
- Monteon JA, Wang H, Martinez-Trujillo J, Crawford JD. Frames of reference for eye-head gaze shifts evoked during frontal eye field stimulation. Eur J Neurosci 37: 1754–1765, 2013. doi: 10.1111/ejn.12175. [DOI] [PubMed] [Google Scholar]
- Motter BC, Poggio GF. Dynamic stabilization of receptive fields of cortical neurons (VI) during fixation of gaze in the macaque. Exp Brain Res 83: 37–43, 1990. doi: 10.1007/BF00232191. [DOI] [PubMed] [Google Scholar]
- Mulch G, Lewitzki W. Spontaneous and positional nystagmus in healthy persons demonstrated only by electronystagmography: physiological spontaneous nystagmus or “functional scar”? Arch Otorhinolaryngol 215: 135–145, 1977. doi: 10.1007/BF00455860. [DOI] [PubMed] [Google Scholar]
- Mullette-Gillman OA, Cohen YE, Groh JM. Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J Neurophysiol 94: 2331–2352, 2005. doi: 10.1152/jn.00021.2005. [DOI] [PubMed] [Google Scholar]
- Mullette-Gillman OA, Cohen YE, Groh JM. Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cereb Cortex 19: 1761–1775, 2009. doi: 10.1093/cercor/bhn207. [DOI] [PMC free article] [PubMed] [Google Scholar]
- National Research Council Guide for the Care and Use of Laboratory Animals (8th ed.). Washington, DC: National Academies Press, 2011. [Google Scholar]
- Porter KK, Groh JM. The “other” transformation required for visual-auditory integration: representational format. Prog Brain Res 155: 313–323, 2006. doi: 10.1016/S0079-6123(06)55018-6. [DOI] [PubMed] [Google Scholar]
- Pouget A, Sejnowski TJ. Spatial transformations in the parietal cortex using basis functions. J Cogn Neurosci 9: 222–237, 1997. doi: 10.1162/jocn.1997.9.2.222. [DOI] [PubMed] [Google Scholar]
- Robinson DA. Eye movements evoked by collicular stimulation in the alert monkey. Vision Res 12: 1795–1808, 1972. doi: 10.1016/0042-6989(72)90070-3. [DOI] [PubMed] [Google Scholar]
- Robinson DA, Fuchs AF. Eye movements evoked by stimulation of frontal eye fields. J Neurophysiol 32: 637–648, 1969. doi: 10.1152/jn.1969.32.5.637. [DOI] [PubMed] [Google Scholar]
- Russo GS, Bruce CJ. Effect of eye position within the orbit on electrically elicited saccadic eye movements: a comparison of the macaque monkey’s frontal and supplementary eye fields. J Neurophysiol 69: 800–818, 1993. doi: 10.1152/jn.1993.69.3.800. [DOI] [PubMed] [Google Scholar]
- Russo GS, Bruce CJ. Frontal eye field activity preceding aurally guided saccades. J Neurophysiol 71: 1250–1253, 1994. doi: 10.1152/jn.1994.71.3.1250. [DOI] [PubMed] [Google Scholar]
- Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 42: 2934–2951, 2015. doi: 10.1111/ejn.13093. [DOI] [PubMed] [Google Scholar]
- Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-motor transformations within frontal eye fields during head-unrestrained gaze shifts in the monkey. Cereb Cortex 25: 3932–3952, 2015. doi: 10.1093/cercor/bhu279. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sajad A, Sadeh M, Yan X, Wang H, Crawford JD. Transition from target to gaze coding in primate frontal eye field during memory delay and memory-motor transformation. eNeuro 3: ENEURO.0040-16.2016, 2016. doi: 10.1523/ENEURO.0040-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schall JD. Neuronal activity related to visually guided saccades in the frontal eye fields of rhesus monkeys: comparison with supplementary eye fields. J Neurophysiol 66: 559–579, 1991. doi: 10.1152/jn.1991.66.2.559. [DOI] [PubMed] [Google Scholar]
- Schall JD, Hanes DP. Neural basis of saccade target selection in frontal eye field during visual search. Nature 366: 467–469, 1993. doi: 10.1038/366467a0. [DOI] [PubMed] [Google Scholar]
- Schall JD, Morel A, King DJ, Bullier J. Topography of visual cortex connections with frontal eye field in macaque: convergence and segregation of processing streams. J Neurosci 15: 4464–4487, 1995. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schiller PH, Sandell JH, Maunsell JH. The effect of frontal eye field and superior colliculus lesions on saccadic latencies in the rhesus monkey. J Neurophysiol 57: 1033–1049, 1987. doi: 10.1152/jn.1987.57.4.1033. [DOI] [PubMed] [Google Scholar]
- Schiller PH, Stryker M. Single-unit recording and stimulation in superior colliculus of the alert rhesus monkey. J Neurophysiol 35: 915–924, 1972. doi: 10.1152/jn.1972.35.6.915. [DOI] [PubMed] [Google Scholar]
- Schiller PH, True SD, Conway JL. Deficits in eye movements following frontal eye-field and superior colliculus ablations. J Neurophysiol 44: 1175–1189, 1980. doi: 10.1152/jn.1980.44.6.1175. [DOI] [PubMed] [Google Scholar]
- Scudder CA, Kaneko CS, Fuchs AF. The brainstem burst generator for saccadic eye movements: a modern synthesis. Exp Brain Res 142: 439–462, 2002. doi: 10.1007/s00221-001-0912-9. [DOI] [PubMed] [Google Scholar]
- Sommer MA, Tehovnik EJ. Reversible inactivation of macaque frontal eye field. Exp Brain Res 116: 229–249, 1997. doi: 10.1007/PL00005752. [DOI] [PubMed] [Google Scholar]
- Sommer MA, Wurtz RH. Influence of the thalamus on spatial visual processing in frontal cortex. Nature 444: 374–377, 2006. doi: 10.1038/nature05279. [DOI] [PubMed] [Google Scholar]
- Sparks DL, Hartwich-Young R. The deep layers of the superior colliculus. In: The Neurobiology of Saccadic Eye Movements, edited by Wurtz RH and Goldberg ME. New York: Elsevier, 1989, p. 213–255. [PubMed] [Google Scholar]
- Sparks DL, Holland R, Guthrie BL. Size and distribution of movement fields in the monkey superior colliculus. Brain Res 113: 21–34, 1976. doi: 10.1016/0006-8993(76)90003-2. [DOI] [PubMed] [Google Scholar]
- Squatrito S, Maioli MG. Gaze field properties of eye position neurones in areas MST and 7a of the macaque monkey. Vis Neurosci 13: 385–398, 1996. doi: 10.1017/S0952523800007628. [DOI] [PubMed] [Google Scholar]
- Stanton GB, Goldberg ME, Bruce CJ. Frontal eye field efferents in the macaque monkey: II. Topography of terminal fields in midbrain and pons. J Comp Neurol 271: 493–506, 1988. doi: 10.1002/cne.902710403. [DOI] [PubMed] [Google Scholar]
- Steenrod SC, Phillips MH, Goldberg ME. The lateral intraparietal area codes the location of saccade targets and not the dimension of the saccades that will be made to acquire them. J Neurophysiol 109: 2596–2605, 2013. doi: 10.1152/jn.00349.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thier P, Andersen RA. Electrical microstimulation suggests two different forms of representation of head-centered space in the intraparietal sulcus of rhesus monkeys. Proc Natl Acad Sci USA 93: 4962–4967, 1996. doi: 10.1073/pnas.93.10.4962. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thier P, Andersen RA. Electrical microstimulation distinguishes distinct saccade-related areas in the posterior parietal cortex. J Neurophysiol 80: 1713–1735, 1998. doi: 10.1152/jn.1998.80.4.1713. [DOI] [PubMed] [Google Scholar]
- Trotter Y, Celebrini S. Gaze direction controls response gain in primary visual-cortex neurons. Nature 398: 239–242, 1999. doi: 10.1038/18444. [DOI] [PubMed] [Google Scholar]
- Umeno MM, Goldberg ME. Spatial processing in the monkey frontal eye field. I. Predictive visual responses. J Neurophysiol 78: 1373–1383, 1997. doi: 10.1152/jn.1997.78.3.1373. [DOI] [PubMed] [Google Scholar]
- Umeno MM, Goldberg ME. Spatial processing in the monkey frontal eye field. II. Memory responses. J Neurophysiol 86: 2344–2352, 2001. doi: 10.1152/jn.2001.86.5.2344. [DOI] [PubMed] [Google Scholar]
- Van Opstal AJ, Hepp K, Suzuki Y, Henn V. Influence of eye position on activity in monkey superior colliculus. J Neurophysiol 74: 1593–1610, 1995. doi: 10.1152/jn.1995.74.4.1593. [DOI] [PubMed] [Google Scholar]
- Waitzman DM, Ma TP, Optican LM, Wurtz RH. Superior colliculus neurons provide the saccadic motor error signal. Exp Brain Res 72: 649–652, 1988. doi: 10.1007/BF00250610. [DOI] [PubMed] [Google Scholar]
- Wurtz RH, Goldberg ME. Superior colliculus cell responses related to eye movements in awake monkeys. Science 171: 82–84, 1971. doi: 10.1126/science.171.3966.82. [DOI] [PubMed] [Google Scholar]
- Wurtz RH, Sommer MA, Paré M, Ferraina S. Signal transformations from cerebral cortex to superior colliculus for the generation of saccades. Vision Res 41: 3399–3412, 2001. doi: 10.1016/S0042-6989(01)00066-9. [DOI] [PubMed] [Google Scholar]
- Zirnsak M, Moore T. Saccades and shifting receptive fields: anticipating consequences or selecting targets? Trends Cogn Sci 18: 621–628, 2014. doi: 10.1016/j.tics.2014.10.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zirnsak M, Steinmetz NA, Noudoost B, Xu KZ, Moore T. Visual space is compressed in prefrontal cortex before eye movements. Nature 507: 504–507, 2014. doi: 10.1038/nature13149. [DOI] [PMC free article] [PubMed] [Google Scholar]




