The Journal of Neuroscience. 2011 Jan 19;31(3):913–921. doi: 10.1523/JNEUROSCI.4417-10.2011

Distinct Representations of a Perceptual Decision and the Associated Oculomotor Plan in the Monkey Lateral Intraparietal Area

Sharath Bennur, Joshua I Gold
PMCID: PMC3380543  NIHMSID: NIHMS382311  PMID: 21248116

Abstract

Perceptual decisions that are used to select particular actions can appear to be formed in an intentional framework, in which sensory evidence is converted directly into a plan to act. However, because the relationship between perceptual decision-making and action selection has been tested primarily under conditions in which the two could not be dissociated, it is not known whether this intentional framework plays a general role in forming perceptual decisions or only reflects certain task conditions. To dissociate decision and motor processing in the brain, we recorded from individual neurons in the lateral intraparietal area of monkeys performing a task that included a flexible association between a decision about the direction of random-dot motion and the direction of the appropriate eye-movement response. We targeted neurons that responded selectively in anticipation of a particular eye-movement response. We found that these neurons encoded the perceptual decision in a manner that was distinct from how they encoded the associated response. These decision-related signals were evident regardless of whether the appropriate decision–response association was indicated before, during, or after decision formation. The results suggest that perceptual decision-making and action selection are different brain processes that only appear to be inseparable under particular behavioral contexts.

Introduction

A perceptual decision is a deliberative process that converts sensory information into a categorical judgment. Our understanding of how and where in the brain this process is implemented has benefited from a focus on motor intention: when a decision is used to select a particular action, brain regions that contribute to selecting that action also represent the associated decision process (Gold and Shadlen, 2007). However, the implications of these findings remain unclear. One view is that these findings represent a form of embodiment, which casts decision-making and other aspects of higher brain function primarily in behavioral terms (Clark, 1998; O'Regan and Noe, 2001; Cisek, 2006). Alternatively, these findings might be specific to certain task designs, in which perceptual decisions are explicitly linked to real or potential motor plans. Our goal was to distinguish between these alternatives and clarify the relationship between perceptual decision-making and action selection in the brain.

We trained monkeys to decide the direction of random-dot motion and indicate their decision with an eye movement to a visual response target. When the targets are located at predictable spatial locations, neurons in several brain regions, including the lateral intraparietal area (LIP), the superior colliculus (SC), and the frontal eye field, that encode the choice of a particular response target also encode the process of converting incoming motion evidence into that choice (Horwitz and Newsome, 1999; Kim and Shadlen, 1999; Shadlen and Newsome, 2001; Roitman and Shadlen, 2002). This decision-related activity, particularly in area LIP, is consistent with the idea of a “priority map” in which different forms of evidence, including diverse sensory cues or cognitive variables such as value expectation, are interpreted in terms of the behavioral relevance of a given spatial location (Platt and Glimcher, 1999; Roitman and Shadlen, 2002; Sugrue et al., 2004; Yang and Shadlen, 2007; Bisley and Goldberg, 2010).

Other results suggest that LIP might play a role in perceptual decision-making that extends beyond this spatial framework. Certain LIP neurons can exhibit selectivity for nonspatial features of visual stimuli, including color, shape, and motion direction (Sereno and Maunsell, 1998; Toth and Assad, 2002; Freedman and Assad, 2006; Fanini and Assad, 2009). This kind of selectivity does not require an overt saccade, can extend to stimuli placed outside of the response field (RF) of the neuron, and can reflect the subject's perceptual report (Williams et al., 2003; Freedman and Assad, 2009). Accordingly, the role of LIP in decision-making might not necessarily be tied to the role of a given neuron in saccadic or spatial processing but rather its selectivity for a particular visual feature.

Given that these spatial and nonspatial forms of selectivity coexist and can have overlapping functions in terms of sensory processing, a key, unresolved question is if and how their relative contributions to perceptual decision-making differ under different behavioral conditions. Does the brain typically interpret sensory evidence in terms of motor plans, with the plans themselves becoming more abstract (e.g., less tied to a specific spatial location) when necessary? Or does the brain typically form perceptual decisions and plan movements separately and only appear to link the two under certain conditions? Here we support the latter interpretation by showing that individual LIP neurons encode a visual perceptual decision in a manner that is distinct from how they encode the subsequent oculomotor response.

Materials and Methods

All training, surgical, and experimental procedures were performed in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the University of Pennsylvania Institutional Animal Care and Use Committee. We used two rhesus monkeys, one male (At) and one female (Av). Both monkeys had been trained extensively on a prosaccade version of the direction-discrimination task, in which the two choice targets were placed at known locations along the axis of motion (Connolly et al., 2009), before being trained on the colored-target version of the task used in this study.

Behavioral task.

The colored-target task required the monkeys to decide the direction of random-dot motion and indicate their decision with an eye movement to one of two equiluminant targets of different colors: red for rightward motion and green for leftward motion. The motion stimulus, described in detail previously (Gold et al., 2008), was presented in a 5° diameter circular aperture centered on the fixation point for 800 ms. The percentage of coherently moving dots (99.9, 25.6, or 6.4%) and motion direction (one of two possible directions, separated by 180°) were interleaved randomly from trial to trial. One target was placed in the RF of a given neuron at a distance of 9° from the fixation point; the other was placed 180° opposite, at the same eccentricity. The targets were initially shown in a neutral color (blue). We used three versions of the task that differed in terms of when the color of the targets changed from neutral to red/green: task 1, 200 ms before motion onset; task 2, 400 ms after motion onset; and task 3, 300 ms after motion offset. During motion viewing, the monkey maintained fixation within ±2° (there was no systematic relationship between small, horizontal eye movements made within this window and the direction of motion on correct trials across tasks for either monkey; Wilcoxon's test for H0: median difference in eye velocity on trials with rightward vs leftward motion, p = 0.55 for At, 0.11 for Av). After fixation-point offset, the monkey was rewarded for making a saccadic eye movement within 800 ms to foveate the target of the appropriate color. The assignment of red and green to the two target locations was chosen randomly on each trial. In each session, the monkey performed each of the three tasks (Fig. 1a) in blocks.

Figure 1.

Task and recording locations. a, Task design. After the monkey fixates, two blue targets appear, one of which is placed in the neural RF (shaded). A random-dot stimulus is shown for 800 ms, followed by a 700 ms delay period and then fixation-point offset. At 200 ms before (task 1), 400 ms after the start of (task 2), or 300 ms after the end of (task 3) motion viewing, the neutral targets change color, one red and the other green. The monkey was rewarded for making a saccadic eye movement to the red target for rightward motion or the green target for leftward motion. b, Magnetic resonance images showing a projection of the recording cylinder (green circle) onto the surface of each monkey's brain (Kalwani et al., 2009). The black squares show locations of recording sites within the cylinder.

Electrophysiology.

Each monkey was surgically implanted with an eye coil, head-holding device, and recording cylinder. Area LIP was targeted using stereotaxic coordinates and magnetic resonance imaging (Fig. 1b) (Kalwani et al., 2009). A sterile guide tube inserted through a plastic grid (Crist Instruments) was used to position a glass-coated tungsten electrode at the dural surface. The electrode was then advanced using a NAN microdrive (Plexon). Spike waveforms were stored and sorted offline (Plexon). We searched for LIP neurons using a memory-saccade task and selected neurons with spatially selective activity during the delay period (Roitman and Shadlen, 2002). This spatial tuning defined the RF of the neuron, which we used to place one of the two choice targets on the discrimination task.

Analysis of behavioral data.

We fit behavioral data describing F_right, the fraction of rightward (red) choices, as a function of S_COH, the signed motion coherence (negative coherence for leftward motion, positive coherence for rightward motion), to a logistic function of the following form:

F_right = λ + (1 − 2λ) / {1 + exp[−(β0 + β1 · S_COH)]}

where λ, β0, and β1 are fit parameters. λ is the lapse rate, corresponding to the fraction of incorrect choices at the highest motion strength. β0 is a measure of choice bias, with positive (negative) values implying a tendency to choose the red (green) target. β1 reflects perceptual sensitivity, with higher values implying higher sensitivity. Fit parameters and their uncertainty (SEM) were determined using maximum-likelihood methods (Watson, 1979; Meeker and Escobar, 1995). Threshold (the motion strength corresponding to d′ = 1, or 76.02% correct for an unbiased observer) was computed as 1.151/β1.
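For readers who want to reproduce this analysis, the following is a minimal sketch (not the authors' code) of a maximum-likelihood fit of the logistic-plus-lapse function above using SciPy. The function names, simulated trial data, starting values, and the use of coherence as a proportion rather than a percentage are illustrative assumptions; β1 and the threshold scale with whatever coherence units are used.

```python
# Hypothetical sketch: maximum-likelihood fit of
# F_right = lam + (1 - 2*lam) / (1 + exp(-(b0 + b1 * scoh)))
# to trial-by-trial choice data. Names and data are illustrative.
import numpy as np
from scipy.optimize import minimize

def p_right(params, scoh):
    lam, b0, b1 = params
    return lam + (1.0 - 2.0 * lam) / (1.0 + np.exp(-(b0 + b1 * scoh)))

def neg_log_likelihood(params, scoh, chose_right):
    p = np.clip(p_right(params, scoh), 1e-6, 1.0 - 1e-6)
    return -np.sum(chose_right * np.log(p) + (1 - chose_right) * np.log(1 - p))

# Simulated trials: signed coherence (proportion) and 0/1 rightward (red) choices.
scoh = np.array([-0.999, -0.256, -0.064, 0.064, 0.256, 0.999] * 50)
chose_right = (np.random.rand(scoh.size) < p_right((0.05, 0.1, 8.0), scoh)).astype(int)

fit = minimize(neg_log_likelihood, x0=[0.05, 0.0, 5.0],
               args=(scoh, chose_right),
               bounds=[(0.0, 0.45), (-5.0, 5.0), (0.01, 100.0)])
lam_hat, b0_hat, b1_hat = fit.x
threshold = 1.151 / b1_hat  # coherence at d' = 1 (~76% correct for an unbiased, lapse-free observer)
print(lam_hat, b0_hat, b1_hat, threshold)
```

In practice, parameter uncertainty (SEM) would be obtained from the curvature of the likelihood surface, as in the maximum-likelihood methods cited above.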

Analysis of neural data.

Neuronal selectivity for target color, motion direction, and saccadic choice was quantified using both a multiple ANOVA and a receiver operating characteristic (ROC)-based index that describes the ability of an ideal observer to predict the value of the given variable based solely on the neural responses (Parker and Newsome, 1998). Both were computed in 200 ms bins, offset by 50 ms. Peak selectivity was measured 200–900 ms after motion onset for direction selectivity, 100–300 ms after the target-color change for color selectivity, and from 100 ms before until 100 ms after fixation-point offset for choice selectivity. We found no qualitative difference in the distributions of selectivity indices for the two monkeys and therefore combined data for all neural analyses.
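As a concrete illustration of the ROC-based index, the sketch below computes the area under the ROC curve from single-neuron spike counts in one time bin (Hanley and McNeil, 1982). This is a hypothetical implementation consistent with the description above, not the original analysis code; the example counts are made up, and in the actual analysis the index would be computed in running 200 ms bins stepped by 50 ms.

```python
# Hypothetical sketch of an ROC-based selectivity index for one neuron and one bin.
import numpy as np

def roc_index(counts_pref, counts_null):
    """Probability that a randomly drawn count from counts_pref exceeds one from
    counts_null (ties count as 0.5); equivalent to the area under the ROC curve."""
    a = np.asarray(counts_pref, float)[:, None]
    b = np.asarray(counts_null, float)[None, :]
    return np.mean(a > b) + 0.5 * np.mean(a == b)

# Example: spike counts on rightward- vs. leftward-motion trials in one 200 ms bin.
right_trials = np.array([12, 15, 9, 14, 11, 16])
left_trials = np.array([8, 10, 7, 9, 12, 6])
print(roc_index(right_trials, left_trials))  # >0.5 implies larger responses for rightward motion
```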

Results

We trained two monkeys to decide the direction of coherent motion in a random-dot stimulus and indicate their decision with an eye movement to a target of a particular color (red for rightward motion, green for leftward motion). In a given session, the two targets always appeared at known locations, but the color shown at each location was not predictable until the colored targets appeared. To prevent the monkeys from using previously formed associations between motion direction and target location (Connolly et al., 2009), the targets were typically placed approximately perpendicular to the horizontal axis of motion (see Fig. 3d). We used three versions of the task that differed in terms of when the colored targets appeared: either before (task 1), during (task 2), or after (task 3) motion viewing (Fig. 1a). This design allowed us to control the time when the decision was formed relative to when the decision was associated with a specific eye-movement response. We examined how these manipulations affected the representation of sensory, decision, and motor activity in area LIP.

Figure 3.

Properties of the neural population on a memory-saccade task. a, Spatial tuning of an example neuron. Of eight possible spatial locations (all at 9° eccentricity), the one that yielded the maximum neural response was defined as the RF, or Tin (in this case 90°), and the 180° opposite location (in this case, 270°) was designated Tout. b, Spiking activity of the example neuron during Tin and Tout trials. Symbols (+) indicate the timing of target onset (light blue), target offset (dark blue), fixation offset (red), and saccade onset (green). c, Summary of neuronal selectivity (computed as an ROC index) for the Tin versus Tout location on correct trials during visual, memory, and saccadic periods of the task. Symbols are projections of each three-dimensional data point onto one of three planes, thereby showing the three pairwise relationships for each neuron. Filled symbols indicate Wilcoxon's test for H0: difference in median Tin versus Tout responses = 0, p < 0.05. d, Circular histograms of the angular location of the spatial RF measured during the memory period of the memory-saccade task for monkeys Av (left) and At (right). We targeted neurons with RFs located approximately vertically relative to fixation, thus requiring eye-movement responses that were approximately perpendicular to the horizontal axis of motion.

Behavior

Both monkeys used the color, not the location, of the targets to govern their choices (Fig. 2). For all three tasks, the target of the appropriate color was chosen on 72% of trials by monkey At and 69% by monkey Av (Fig. 2, circles). By comparison, in the 56 of 102 total behavioral sessions in which the targets were not directly perpendicular to the axis of motion (Fig. 3d), the target in the direction of motion was chosen at chance levels (49% of trials for monkey At, 50% for monkey Av) (Fig. 2, crosses). Moreover, performance depended systematically on the strength of the motion stimulus. For high-coherence stimuli, error (“lapse”) rates were 6–18% across the three tasks and two monkeys, indicating that, for easily perceptible stimuli, the monkeys performed well above chance but not perfectly on these difficult tasks. Performance degraded at lower coherences but without systematic choice biases (best-fitting logistic functions from the equation in Materials and Methods, parameterized by terms describing the choice bias, β0, and coherence dependence, β1, are shown in Fig. 2).

Figure 2.

Behavioral performance and psychometric functions. Psychometric functions for tasks 1–3 (columns) for monkeys Av and At (rows), showing the fraction of rightward choices (encoded correctly as the red target, shown as circles; or incorrectly as the target placed in the direction of presented motion, shown as crosses) as a function of signed motion strength (negative for leftward motion, positive for rightward motion). Black lines are logistic fits to color-encoded choices; insets show best-fitting values and SEM of the bias (β0), sensitivity (β1), and lapse (λ) parameters from the equation in Materials and Methods, and threshold (T, in units of unsigned percentage coherence corresponding to d′ = 1) from these fits. Gray lines in the middle column are fits to data from two alternative conditions, in which the targets changed color 200 and 600 ms after motion onset.

Each monkey performed similarly on the three tasks, suggesting that their strategies did not differ substantially when the oculomotor mapping was indicated before, during, or after decision formation (despite quantitative differences in the best-fitting parameters of the task-specific fits in Fig. 2, which were applied to data pooled across sessions, likelihood-ratio tests comparing session-by-session fits to data from the three tasks considered separately versus together yielded p < 0.01, implying differences across tasks, for only 15 of 52 sessions for monkey At and 4 of 27 sessions for Av). For task 2, in which the colored targets appeared during motion viewing, changing the time at which the colored targets appeared also had minimal effect on performance (fits for tasks in which the targets appeared either 200, 400, or 600 ms after motion onset differed in 1 of 52 sessions for At and 0 of 19 sessions for Av; likelihood-ratio test, p < 0.01) (Fig. 2). These results imply that the monkeys were not simply attending to the stimulus only before or after the targets appeared. Thus, the three tasks seemed to require similar perceptual decision-making processes that differed primarily in terms of when the appropriate action could be selected. We tested how this difference in the timing of the signal indicating the sensorimotor mapping affected the representation of the decision process in area LIP.

LIP selectivity for motion direction, target color, and saccadic choice

We recorded from 84 individual LIP neurons (n = 51 from At, 33 from Av) (Fig. 1b) while the monkeys performed the tasks. We selected neurons with spatially selective responses during the delay period of a memory-saccade task [initially assessed qualitatively online, then quantified offline (Fig. 3)], as in previous studies of decision-related activity in LIP (Shadlen and Newsome, 2001; Roitman and Shadlen, 2002). We used the memory-period selectivity to define an RF in which we subsequently placed one of the colored choice targets on the direction-discrimination task (Tin, as opposed to Tout). We typically searched for neurons with RFs located below (monkey Av, found 4000–8500 μm below the cortical surface along two separate electrode trajectories, as shown in Fig. 1b) or above (monkey At, found 4000–8500 μm below the cortical surface along three separate electrode trajectories) fixation, consistent with the task geometry. Of the 84 neurons we found, 71 (84.5%) had responses that were modulated between motion viewing and the saccadic response on the discrimination task, as described below.

Individual LIP neurons were selective for different combinations of motion direction, target color, and saccadic choice. For example, the neuron shown in Figure 4a tended to respond more strongly when the target in the RF of the neuron changed from neutral to red, as opposed to green, and for Tin versus Tout choices, which is consistent with the definition of the RF from the memory-saccade task. In contrast, the neuron shown in Figure 4b tended to respond more strongly to leftward versus rightward motion during motion viewing and the subsequent delay period and then became selective for Tin versus Tout choices around the time of the saccadic response. The direction-selective responses of this neuron were evident when the colored targets appeared before (task 1), during (task 2), or after (task 3) motion viewing and regardless of the direction of the subsequent saccadic choice.

Figure 4.

Representative LIP neuronal responses on the colored-target tasks. a, Spike–density functions (spike trains convolved with a Gaussian kernel with an SD of 100 ms) of activity of an example neuron that was selective for target color and saccadic choice. Lines are mean responses (solid for rightward motion, dashed for leftward motion), and colored ribbons are SEM (red/green correspond to the color of the target in the RF). Vertical lines indicate the timing of task events, as shown in the leftmost panel. b, Spike–density functions of activity for another neuron that was selective for dot direction and saccadic choice. Same conventions as in a.
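The spike-density functions described in this caption can be approximated as in the following sketch, which assumes spike times binned at 1 ms before convolution with a Gaussian kernel (SD = 100 ms). The binning resolution, function names, and example spike times are illustrative assumptions rather than details taken from the original analysis.

```python
# Hypothetical sketch of a spike-density function: 1 ms binning of spike times
# followed by convolution with a Gaussian kernel (SD = 100 ms).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def spike_density(spike_times_ms, t_start_ms, t_stop_ms, sd_ms=100.0):
    """Return (time axis in ms, smoothed firing rate in spikes/s) for one trial."""
    edges = np.arange(t_start_ms, t_stop_ms + 1)          # 1 ms bins
    counts, _ = np.histogram(spike_times_ms, bins=edges)  # spikes per 1 ms bin
    rate = gaussian_filter1d(counts.astype(float), sigma=sd_ms) * 1000.0  # convert to spikes/s
    return edges[:-1], rate

# Example trial with four illustrative spike times (ms relative to some event).
t, r = spike_density(np.array([120.0, 450.0, 455.0, 900.0]), 0, 2000)
```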

The population of recorded LIP neurons exhibited selectivity for motion direction, target color, and saccadic choice, each of which evolved differently as a function of time for the three tasks. We quantified these different forms of selectivity using a multiple ANOVA applied to time-binned spike-count data from individual trials, with motion direction, target color, and their interaction (i.e., selectivity for saccadic choice: for correct trials, a red target in the RF of a neuron and rightward motion implied a Tin choice, whereas a green target and leftward motion implied a Tout choice) as factors (Fig. 5a). For all three tasks, selectivity for motion direction appeared soon after motion onset, peaked midway through motion viewing, then declined steadily through the end of motion viewing and the delay period preceding the saccadic choice (24.5% of all responses from individual neurons considered separately for all time bins and tasks shown in Fig. 5a were selective for motion direction; of these, 65.1% were selective for rightward motion and 34.9% for leftward motion). In contrast, selectivity for target color appeared just after the targets changed color and then declined over the remainder of the trial (34.8% of all responses in Fig. 5a, of which 81.1% were selective for red and 18.9% for green). Selectivity for saccadic choice also appeared after the target-color change that indicated the sensorimotor mapping but tended to increase over the course of the trial, until the choice was made (34.1% of all responses in Fig. 5a, of which 92.2% were selective for Tin and 7.8% for Tout choices).
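One way to implement the per-bin multiple ANOVA described above is sketched below for a single neuron and a single 200 ms bin, using a trial-by-trial table of spike counts with motion direction, target color, and their interaction as factors. The statsmodels-based implementation, column names, and toy data are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: two-factor ANOVA (direction x target color) on binned
# spike counts from one neuron and one 200 ms bin. The interaction term
# corresponds to saccadic choice (Tin vs. Tout on correct trials).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def bin_anova(df):
    """df columns: 'spikes' (count in this bin), 'direction' ('L'/'R'),
    'color' ('red'/'green' target in the RF). Returns p-values for the
    two main effects and their interaction (Residual row is NaN)."""
    model = smf.ols('spikes ~ C(direction) * C(color)', data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    return table['PR(>F)']

# Toy trial-by-trial table for one neuron and one time bin.
df = pd.DataFrame({
    'spikes': [12, 9, 15, 11, 7, 6, 14, 13],
    'direction': ['R', 'R', 'R', 'R', 'L', 'L', 'L', 'L'],
    'color': ['red', 'green', 'red', 'green', 'red', 'green', 'red', 'green'],
})
print(bin_anova(df))
```

Repeating this test in 200 ms bins stepped by 50 ms, as in Figure 5a, yields the time course of the proportion of significantly selective neurons for each factor.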

Figure 5.

Selectivity of LIP population responses. a, Proportion of recorded neurons that were selective for motion direction (purple), target color (orange), or saccadic choice (blue), computed using an ANOVA in 200 ms bins stepped in 50 ms increments (p < 0.05). Vertical lines indicate the timing of task events, as shown in the leftmost panel. b, Selectivity for each parameter quantified for each neuron using an ROC-based index (purple, motion direction: 0, selective for leftward; 1, selective for rightward; orange, target color: 0, selective for green; 1, selective for red; blue, saccadic choice: 0, selective for Tout; 1, selective for Tin). Selectivities were measured for the time intervals shown at the top of a. Symbols are projections of each three-dimensional data point onto one of three planes, thereby showing the three pairwise relationships for each neuron. Filled symbols indicate significant selectivity (H0: index of 0.5) for the given color-coded parameter. c, d, ROC-based selectivity index for motion direction computed separately for correct trials in which the target in the RF of a given neuron was red (abscissa) or green (ordinate) either 400–700 ms (c) or 1500–1800 ms (d) after motion onset. Points are data from individual neurons with at least 10 trials per condition. Red/green points indicate significant selectivity for motion when the given target color was in the RF (Mann–Whitney test for H0: median difference in responses for the two directions was 0, p < 0.05). Lines are linear fits to all points (insets show Pearson's r and associated p value for H0: r = 0).

We further quantified these forms of selectivity using an ROC-based index that describes the ability of an ideal observer to distinguish the given task parameter using only spike-rate data from individual neurons (Hanley and McNeil, 1982; Parker and Newsome, 1998). We computed this index for each neuron with respect to motion direction (a value >0.5 implies larger responses for rightward vs leftward motion, whereas a value <0.5 implies larger responses for leftward vs rightward motion), target color (>0.5 for red, <0.5 for green), and saccadic choice (>0.5 for Tin, <0.5 for Tout).

Individual LIP neurons exhibited combinations of selectivity for the three parameters during each of the three tasks (Fig. 5b). For task 1, 23 of the 71 recorded neurons showed significant selectivity for all three parameters (H0: index of 0.5, p < 0.05, measured around the time of peak selectivity as shown in Fig. 5a), 30 showed selectivity for two of the three parameters (3 for motion direction and target color, 5 for motion direction and saccadic choice, and 22 for target color and saccadic choice), and 9 showed selectivity for just one of the three parameters (1 for motion direction, 4 for target color, and 4 for saccadic choice). For task 2, 18 neurons showed significant selectivity for all three parameters, 28 showed selectivity for two of the three parameters (5 for motion direction and target color, 4 for motion direction and saccadic choice, and 19 for target color and saccadic choice), and 21 showed selectivity for just one of the three parameters (3 for motion direction, 5 for target color, and 13 for saccadic choice). For task 3, 12 neurons showed significant selectivity for all three parameters, 22 showed selectivity for two of the three parameters (3 for motion direction and target color, 7 for motion direction and saccadic choice, and 12 for target color and saccadic choice), and 26 showed selectivity for just one of the three parameters (3 for motion direction, 10 for target color, and 13 for saccadic choice). Thus, individual LIP neurons exhibited a range of response properties, including selectivity for different combinations of key task variables.

To better interpret the relationship between motion and saccade selectivity, we computed the value of the selectivity index for motion direction separately for trials in which the red or green target was in the RF of the neuron. If the index had matching values when either the red or green target was shown in the RF, then the responses were selective for motion direction independent of the saccadic choice. Conversely, if the index corresponded to opposite direction selectivities for the different target colors, then the responses were selective for saccadic choice.
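The logic of this comparison can be made concrete with the following sketch, which computes the motion-direction ROC index separately for trials with the red versus the green target in the RF. All data and names are illustrative, and the interpretation rule in the comments simply restates the reasoning above.

```python
# Hypothetical sketch: motion-direction ROC index computed separately for
# red-in-RF and green-in-RF trials of one neuron. All data are illustrative.
import numpy as np

def roc_index(a, b):
    # Area under the ROC curve: P(a > b) + 0.5 * P(a == b) over all trial pairs.
    a = np.asarray(a, float)[:, None]
    b = np.asarray(b, float)[None, :]
    return np.mean(a > b) + 0.5 * np.mean(a == b)

# Toy single-neuron data: spike counts during motion viewing, the true motion
# direction, and the color of the target in the RF on each trial.
counts = np.array([14, 13, 15, 12, 8, 7, 9, 6, 13, 14, 12, 15, 7, 8, 6, 9])
direction = np.array(['R'] * 4 + ['L'] * 4 + ['R'] * 4 + ['L'] * 4)
color_in_rf = np.array(['red'] * 8 + ['green'] * 8)

idx_red = roc_index(counts[(color_in_rf == 'red') & (direction == 'R')],
                    counts[(color_in_rf == 'red') & (direction == 'L')])
idx_green = roc_index(counts[(color_in_rf == 'green') & (direction == 'R')],
                      counts[(color_in_rf == 'green') & (direction == 'L')])

# Matching indices (e.g., both > 0.5) imply selectivity for motion direction per se;
# opposite indices (one > 0.5, one < 0.5) imply selectivity for the saccadic choice
# (Tin vs. Tout), because the direction-to-target mapping reverses with target color.
print(idx_red, idx_green)
```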

For all three tasks, the population of recorded LIP neurons tended to exhibit selectivity for motion direction that was essentially independent of saccadic choice during motion viewing but then selectivity for saccadic choice that was essentially independent of motion direction around the time of the saccade. During motion viewing, the population of selectivity indices included values that were both greater than and less than 0.5, implying selectivity for both directions (the median [5th, 95th percentiles] index values across tasks were 0.59 [0.39, 0.83] and 0.55 [0.33, 0.71] when the red or green target was in the RF, respectively). Moreover, these values were positively correlated when comparing trials in which either a red or green target was shown in the RF of a given neuron (Fig. 5c). In contrast, around the time of the saccadic choice, the same neurons tended to be selective for rightward motion when the red target was in the RF but leftward motion when the green target was in the RF (index values of 0.78 [0.47, 0.97] and 0.20 [0.05, 0.42] when the red or green target was in the RF, respectively), which is equivalent to selectivity for Tin choices (Fig. 5d). These findings were similar across the three tasks, with a strong, positive correlation between the value of the selectivity index measured on one task versus another (Spearman's ρ had values between 0.51 and 0.87 for each comparison; H0: ρ = 0, p < 0.001 in all cases).

There was also no clear relationship between the selectivity for motion direction of a given neuron and the location of the RF of that neuron. Of the 71 recorded neurons, 37 had RFs that were not located directly along the vertical meridian (Fig. 3d; all but one of these were located slightly to the left). For task 1, 12 of these 37 neurons had significant direction selectivity during motion viewing (H0: selectivity index of 0.5, p < 0.05), of which six preferred motion in the same direction (relative to the vertical meridian) as the RF of the neuron and six preferred the opposite direction. For task 2, 11 of these neurons had significant direction selectivity, of which four preferred motion in the same direction as the RF of the neuron and seven preferred the opposite direction. For task 3, 15 of these neurons had significant direction selectivity, of which three preferred motion in the same direction as the RF of the neuron and 12 preferred the opposite direction. Thus, selectivity for the spatial location of a saccade target could not account for the direction preferences we measured in the context of the colored-target task.

Moreover, the timing of selectivity for motion direction, unlike the timing of selectivity for saccadic choice, did not depend on the time at which the colored targets were shown (Fig. 6). Selectivity for motion direction tended to appear ∼200 ms after the onset of the motion stimulus for all tasks. In contrast, selectivity for saccadic choice tended to occur, on average, after selectivity for motion direction was established (paired Wilcoxon's test for H0: median difference in selectivity onset of 0, p < 0.001) and after the target color change. Thus, the appearance of the colored targets affected the onset of choice-selective responses but not motion direction-selective responses.

Figure 6.

Effect of task timing on neural selectivity for motion direction (thin) and saccadic choice (thick). Curves are cumulative histograms of the timing of the onset of peak selectivity for either motion direction or saccadic choice, computed separately for each neuron sampled. Onset time was quantified as the time at which the selectivity index, measured in running 200 ms bins stepped by 50 ms, became >5% of the peak value and stayed above that value until reaching the peak. Vertical lines indicate the timing of task events, as shown in a. Panels correspond to different times of colored-target onset relative to motion onset: a, −200 ms; b, +200 ms; c, +400 ms; d, +600 ms; e, +1100 ms. Arrows indicate population medians. Task timing affected the onset of selectivity for saccadic choice (ANOVA, p < 0.001) but not motion direction (p = 0.73).
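The onset-time rule in this caption can be implemented as in the following sketch, which finds the first bin of the contiguous run of selectivity that exceeds 5% of the peak and persists until the peak. Treating "selectivity" as the index's distance from chance (0.5) and the toy ramping time course are assumptions for illustration, not details from the original analysis.

```python
# Hypothetical sketch of the onset-time rule: first bin at which selectivity
# exceeds 5% of its peak and remains above that criterion until the peak.
import numpy as np

def selectivity_onset(times_ms, selectivity_index):
    sel = np.asarray(selectivity_index, float) - 0.5  # assumed: distance from chance (index = 0.5)
    i_peak = int(np.argmax(sel))
    criterion = 0.05 * sel[i_peak]
    # Walk backward from the peak to find the start of the run above criterion.
    i = i_peak
    while i > 0 and sel[i - 1] > criterion:
        i -= 1
    return times_ms[i]

times = np.arange(0, 2000, 50)                              # bin centers, ms after motion onset
toy_index = 0.5 + 0.3 / (1 + np.exp(-(times - 600) / 100))  # toy ramping selectivity index
print(selectivity_onset(times, toy_index))
```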

Representation of the perceptual decision

We further analyzed these patterns of selectivity in LIP with respect to two key features of perceptual decision-making: first, selectivity for not just the categorical judgment but also the sensory evidence used to arrive at that judgment; and second, selectivity on correct versus error trials, to relate the responses more directly to the perceptual report (Shadlen and Newsome, 2001; Roitman and Shadlen, 2002).

For the colored-target tasks, the strength of the sensory evidence was reflected in neuronal selectivity for motion direction but not for target color or saccadic choice (Fig. 7). This coherence dependence was computed using a similar ROC-based index as in Figure 5, b and c, but encoded for each neuron with respect to its preferred value, computed around the time of peak selectivity (see Materials and Methods), for motion direction, target color, or saccadic choice. Therefore, increasing values of this index above 0.5 imply increasingly selective responses of the neuron for the preferred versus anti-preferred value of the given property. The neural responses were increasingly selective for motion direction as a function of increasing coherence, starting early in motion viewing and lasting into the delay period preceding the saccadic response, regardless of whether the targets changed color before, during, or after motion viewing. In contrast, there was no systematic coherence dependence with respect to the selectivity of the neurons for saccadic choice or target color. These results are consistent with the idea that LIP activity represents the process of converting motion information into a categorical direction judgment that, in turn, instructs the saccadic choice.

Figure 7.

Effect of motion strength on neuronal selectivity. Each panel shows the value of an ROC-based predictive index that quantifies neural selectivity for each of the three task variables (a, motion direction; b, target color; c, saccadic choice), computed separately for the three different motion strengths, as indicated, in 200 ms bins stepped by 50 ms. Values >0.5 indicate selectivity for the preferred value of the given neuron measured at the time of average peak selectivity (see Materials and Methods) for that variable and task. Curves are mean values for the population of all recorded neurons. Vertical lines indicate the timing of task events, as shown in the leftmost panel. Dots at the top of each panel (evident only in a) indicate a significant effect of coherence on selectivity across the population (ANOVA, p < 0.05) and monotonically increasing mean selectivity as a function of coherence in the given time bin.

The time course of LIP selectivity also differed for motion direction and saccadic choice. As noted above, even for task 1, when the sensorimotor mapping was specified in advance, selectivity for motion direction tended to be established before selectivity for saccadic choice (Fig. 6a). Once established, the temporal dynamics of these different forms of selectivity differed considerably. After motion onset and the target-color change, selectivity for saccadic choice tended to build up slowly, reaching a peak around the time of the saccade (Fig. 7c). These temporal dynamics are reminiscent of those described in LIP for a reaction-time (RT) version of the pro-saccade task, in which the monkey initiated the saccadic response as soon as it formed the decision (Roitman and Shadlen, 2002). In contrast, selectivity for motion direction tended to build up quickly, starting soon after motion onset, reaching a peak after ∼500 ms of motion viewing, and then declining into the delay period (Fig. 7a). The relatively brief rising phase of this selectivity is reminiscent of the temporal dynamics in LIP on a pro-saccade version of the task in which the stimulus was presented for a fixed duration, as in our study (Shadlen and Newsome, 2001; Roitman and Shadlen, 2002). Together, these results imply that different selection processes represented in LIP can have different temporal dynamics, which might be difficult to distinguish when perceptual and oculomotor decisions have a fixed relationship, as in the pro-saccade task.

A comparison of responses on correct and error trials further supports the idea that direction-selective responses in LIP were related to the monkeys' perceptual judgments about motion direction and not simply the stimulus itself. For all three tasks, individual neurons tended to have similar selectivity on correct and error trials for either target color or saccadic choice, implying that the errors did not arise from mis-encoding of either variable (Fig. 8d,e). In contrast, selectivity for motion direction tended to be negatively correlated for low-coherence (6.4%) stimuli for all three tasks and for middle-coherence (25.6%) stimuli for task 1 but slightly positively correlated (for task 1) or uncorrelated (for tasks 2 and 3) for high-coherence (99.9%) stimuli (Fig. 8a–c). These results are consistent with the idea that the errors arose from two different sources. The first is an inappropriate direction–color mapping. Assuming that these mapping errors are the primary source of the nonzero lapse rates, it follows that motion direction is encoded in a similar manner on correct and error trials with high-coherence stimuli (Fig. 8c). The second source of error is perceptual processing, which is expected to be more prevalent for weaker stimuli. Accordingly, the negative correlation in selectivity for lower coherences implies that these neurons encode the perceived, not actual, direction of motion (Fig. 8a,b).

Figure 8.

Summary of neuronal selectivity on error versus correct trials. Columns are tasks, as indicated. a–e, Rows show selectivity for the actual direction of motion (a, low coherence; b, middle coherence; c, high coherence), target color (d), and saccadic choice (e). Leftmost panels indicate how the values of the indices relate to each parameter. Lines are linear fits (insets show Pearson's r and associated p value for H0: r = 0).

Discussion

Previous studies showed that, in monkeys trained to indicate a decision about the direction of random-dot motion with a saccadic eye movement to a visual target at a predictable location in the same direction, neurons in LIP encode the process of converting incoming visual information into the saccadic choice (Shadlen and Newsome, 2001; Roitman and Shadlen, 2002). However, because that task design explicitly linked the perceptual decision with a specific oculomotor response, it was impossible to dissociate the decision about the direction of motion from the selection of the appropriate action. To overcome this limitation, we used a task in which the association between the direction decision and the saccadic choice was based on the color, not the location, of the visual target. We identified a neural correlate of the decision process in LIP, which was present regardless of whether the appropriate decision–response association was indicated before, during, or after the decision was formed. This activity, which included not just selectivity for the given stimulus feature but also sensitivity to the input, timing, and outcome of the decision process, was found in the same neurons that subsequently encoded the saccadic response. These results imply that LIP can play multiple roles in perceptual and saccadic processing.

We do not know the limits of these roles. One possibility is that the decision-related activity represents purely perceptual processing and is thus independent of potential or actual actions that follow. This idea might be further tested using tasks in which the decision is formed before the monkey is informed about whether a response is needed at all or about which modality (say, eye or arm movements) to use to indicate the response. However, a challenge with such designs is that it can be difficult to rule out the possibility that the response not used was also not planned. Another possibility is that our results in LIP reflected particular aspects of the task design, such as the fact that we always showed a visual target in the RF of a given neuron or that we always required an approximately vertical eye-movement response. This idea implies that the role of LIP in perceptual decision-making, and its relationship to saccade planning, depends on the task context, including not just the spatial configuration and sensorimotor association but also other factors known to be encoded in LIP, such as reward expectation (Platt and Glimcher, 1999; Dorris and Glimcher, 2004; Sugrue et al., 2004). Additional studies are needed to characterize how all of these spatial and nonspatial factors affect the representation of perceptual decision-making across the population of neurons in LIP.

Nevertheless, either interpretation implies a flexible relationship between perceptual decision-making and spatial processing in LIP. In particular, our results seem inconsistent with the idea that a given neuron represents a perceptual decision only insofar as the decision is used to direct attention or intention toward or away from the RF of that neuron. Because selectivity for motion direction did not correspond to selectivity for target color or saccadic choice, motion-driven responses were not predictive of a particular target color or choice to a given spatial location. Moreover, a previous study using a version of the colored-target task similar to task 3 found no evidence for spatially organized saccade plans that corresponded to a particular direction decision (Gold and Shadlen, 2003). Thus, even if the direction-selective responses we found in LIP represent a sort of temporary plan to generate a particular eye movement or focus of attention either toward or away from a given target (Gnadt and Andersen, 1988; Barash et al., 1991; Colby and Goldberg, 1999; Snyder et al., 2000; Zhang and Barash, 2000), this plan is not organized with respect to the same spatial map defined by the RF of the neurons measured on the memory-saccade task.

Experience likely played an important role in establishing and shaping these flexible, task-relevant responses in LIP (Freedman and Assad, 2006; Law and Gold, 2008, 2009). The monkeys used in this study were previously trained on a pro-saccade version of the direction-discrimination task (Connolly et al., 2009). That task used only red targets, which might help to explain the preponderance of red-selective neurons we found in this study. Moreover, training on that task gives rise to responses that encode the strength and direction of the moving visual stimulus. These responses are found in the same subpopulation of neurons that we sampled here, identified by spatially selective activity during the delay period of the memory-saccade task (Shadlen and Newsome, 2001; Roitman and Shadlen, 2002; Law and Gold, 2008). However, when measured in the context of the pro-saccade task, these responses are strongly spatial, reflecting both the direction decision and the impending oculomotor response (Gold and Shadlen, 2000; Gold and Shadlen, 2003). In contrast, after training on the colored-target task, we found that the same subpopulation of LIP neurons can encode the direction decision and oculomotor plan separately. Together, these results suggest that experience plays an ongoing role in shaping LIP response properties to be appropriate for the task at hand (Freedman and Assad, 2006; Law and Gold, 2008).

We also do not know whether the decision-related signals we measured originated in LIP or were computed elsewhere and sent as copies to LIP. In principle, these signals could arise from numerous brain regions that provide direct or indirect input to LIP and are thought to be involved in decision-making, including in the prefrontal cortex and basal ganglia (Kim and Shadlen, 1999; Heekeren et al., 2004; Balleine et al., 2007). However, none of these brain regions has been examined using the kind of task we present here. Another possibility is the SC, which has been shown to include a small subset of neurons with direction-selective activity that is not strongly tied to a given saccadic response (Horwitz et al., 2004). However, those results were obtained using a task in which the spatial configuration of the choice targets always included a component in the direction of motion, leaving open the possibility that the neural activity was selective for that spatial component of the saccadic response and not the perceptual decision.

Conversely, LIP receives direct and indirect input from the middle temporal area (MT) of extrastriate visual cortex that could be used directly to form the direction decision (Blatt et al., 1990). On an RT version of the pro-saccade task, electrical microstimulation in LIP affects RTs in a manner consistent with a causal role in the decision process (Hanks et al., 2006). It would be interesting to design an RT version of the colored-target task to more effectively analyze the time course of the perceptual decision and to test for similar causality when the decision is not explicitly linked to a specific eye-movement response.

Footnotes

This work was supported by the McKnight Endowment Fund for Neuroscience, the Burroughs Wellcome Fund, and National Institutes of Health Grants R01-EY015260 and R03-MH087798. We thank M. Shadlen and M. Nassar for helpful comments on this manuscript and J. Zweigle for expert technical assistance.

References

1. Balleine BW, Delgado MR, Hikosaka O. The role of the dorsal striatum in reward and decision-making. J Neurosci. 2007;27:8161–8165. doi: 10.1523/JNEUROSCI.1554-07.2007.
2. Barash S, Bracewell RM, Fogassi L, Gnadt JW, Andersen RA. Saccade-related activity in the lateral intraparietal area. II. Spatial properties. J Neurophysiol. 1991;66:1109–1124. doi: 10.1152/jn.1991.66.3.1109.
3. Bisley JW, Goldberg ME. Attention, intention, and priority in the parietal lobe. Annu Rev Neurosci. 2010;33:1–21. doi: 10.1146/annurev-neuro-060909-152823.
4. Blatt GJ, Andersen RA, Stoner GR. Visual receptive field organization and cortico-cortical connections of the lateral intraparietal area (area LIP) in the macaque. J Comp Neurol. 1990;299:421–445. doi: 10.1002/cne.902990404.
5. Cisek P. Integrated neural processes for defining potential actions and deciding between them: a computational model. J Neurosci. 2006;26:9761–9770. doi: 10.1523/JNEUROSCI.5605-05.2006.
6. Clark A. Being there: putting brain, body, and world together again. Cambridge, MA: Massachusetts Institute of Technology; 1998.
7. Colby CL, Goldberg ME. Space and attention in parietal cortex. Annu Rev Neurosci. 1999;22:319–349. doi: 10.1146/annurev.neuro.22.1.319.
8. Connolly PM, Bennur S, Gold JI. Correlates of perceptual learning in an oculomotor decision variable. J Neurosci. 2009;29:2136–2150. doi: 10.1523/JNEUROSCI.3962-08.2009.
9. Dorris MC, Glimcher PW. Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron. 2004;44:365–378. doi: 10.1016/j.neuron.2004.09.009.
10. Fanini A, Assad JA. Direction selectivity of neurons in the macaque lateral intraparietal area. J Neurophysiol. 2009;101:289–305. doi: 10.1152/jn.00400.2007.
11. Freedman DJ, Assad JA. Experience-dependent representation of visual categories in parietal cortex. Nature. 2006;443:85–88. doi: 10.1038/nature05078.
12. Freedman DJ, Assad JA. Distinct encoding of spatial and nonspatial visual information in parietal cortex. J Neurosci. 2009;29:5671–5680. doi: 10.1523/JNEUROSCI.2878-08.2009.
13. Gnadt JW, Andersen RA. Memory related motor planning activity in posterior parietal cortex of macaque. Exp Brain Res. 1988;70:216–220. doi: 10.1007/BF00271862.
14. Gold JI, Shadlen MN. Representation of a perceptual decision in developing oculomotor commands. Nature. 2000;404:390–394. doi: 10.1038/35006062.
15. Gold JI, Shadlen MN. The influence of behavioral context on the representation of a perceptual decision in developing oculomotor commands. J Neurosci. 2003;23:632–651. doi: 10.1523/JNEUROSCI.23-02-00632.2003.
16. Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038.
17. Gold JI, Law CT, Connolly P, Bennur S. The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning. J Neurophysiol. 2008;100:2653–2668. doi: 10.1152/jn.90629.2008.
18. Hanks TD, Ditterich J, Shadlen MN. Microstimulation of macaque area LIP affects decision-making in a motion discrimination task. Nat Neurosci. 2006;9:682–689. doi: 10.1038/nn1683.
19. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143:29–36. doi: 10.1148/radiology.143.1.7063747.
20. Heekeren HR, Marrett S, Bandettini PA, Ungerleider LG. A general mechanism for perceptual decision-making in the human brain. Nature. 2004;431:859–862. doi: 10.1038/nature02966.
21. Horwitz GD, Newsome WT. Separate signals for target selection and movement specification in the superior colliculus. Science. 1999;284:1158–1161. doi: 10.1126/science.284.5417.1158.
22. Horwitz GD, Batista AP, Newsome WT. Representation of an abstract perceptual decision in macaque superior colliculus. J Neurophysiol. 2004;91:2281–2296. doi: 10.1152/jn.00872.2003.
23. Kalwani RM, Bloy L, Elliott MA, Gold JI. A method for localizing microelectrode trajectories in the macaque brain using MRI. J Neurosci Methods. 2009;176:104–111. doi: 10.1016/j.jneumeth.2008.08.034.
24. Kim JN, Shadlen MN. Neural correlates of a decision in the dorsolateral prefrontal cortex of the macaque. Nat Neurosci. 1999;2:176–185. doi: 10.1038/5739.
25. Law CT, Gold JI. Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area. Nat Neurosci. 2008;11:505–513. doi: 10.1038/nn2070.
26. Law CT, Gold JI. Reinforcement learning can account for associative and perceptual learning on a visual-decision task. Nat Neurosci. 2009;12:655–663. doi: 10.1038/nn.2304.
27. Meeker WQ, Escobar LA. Teaching about approximate confidence regions based on maximum likelihood estimation. Am Stat. 1995;49:48–53.
28. O'Regan JK, Noe A. A sensorimotor account of vision and visual consciousness. Behav Brain Sci. 2001;24:939–973. doi: 10.1017/s0140525x01000115.
29. Parker AJ, Newsome WT. Sense and the single neuron: probing the physiology of perception. Annu Rev Neurosci. 1998;21:227–277. doi: 10.1146/annurev.neuro.21.1.227.
30. Platt ML, Glimcher PW. Neural correlates of decision variables in parietal cortex. Nature. 1999;400:233–238. doi: 10.1038/22268.
31. Roitman JD, Shadlen MN. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci. 2002;22:9475–9489. doi: 10.1523/JNEUROSCI.22-21-09475.2002.
32. Sereno AB, Maunsell JH. Shape selectivity in primate lateral intraparietal cortex. Nature. 1998;395:500–503. doi: 10.1038/26752.
33. Shadlen MN, Newsome WT. Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol. 2001;86:1916–1936. doi: 10.1152/jn.2001.86.4.1916.
34. Snyder LH, Batista AP, Andersen RA. Intention-related activity in the posterior parietal cortex: a review. Vision Res. 2000;40:1433–1441. doi: 10.1016/s0042-6989(00)00052-3.
35. Sugrue LP, Corrado GS, Newsome WT. Matching behavior and the representation of value in the parietal cortex. Science. 2004;304:1782–1787. doi: 10.1126/science.1094765.
36. Toth LJ, Assad JA. Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature. 2002;415:165–168. doi: 10.1038/415165a.
37. Watson AB. Probability summation over time. Vision Res. 1979;19:515–522. doi: 10.1016/0042-6989(79)90136-6.
38. Williams ZM, Elfar JC, Eskandar EN, Toth LJ, Assad JA. Parietal activity and the perceived direction of ambiguous apparent motion. Nat Neurosci. 2003;6:616–623. doi: 10.1038/nn1055.
39. Yang T, Shadlen MN. Probabilistic reasoning by neurons. Nature. 2007;447:1075–1080. doi: 10.1038/nature05852.
40. Zhang M, Barash S. Neuronal switching of sensorimotor transformations for antisaccades. Nature. 2000;408:971–975. doi: 10.1038/35050097.
