Summary
Although the parietal cortex is traditionally associated with spatial attention and sensorimotor integration, recent evidence also implicates it in higher-order cognitive functions. We review relevant results from neuron recording studies showing that inferior parietal neurons integrate information regarding target location with a variety of non-spatial signals. Some of these signals are modulatory and alter a stimulus-evoked response according to the action, category or reward associated with the stimulus. Other non-spatial inputs act independently, encoding the context or rules of a task even before the presentation of a specific target. Despite the ubiquity of non-spatial information in individual neurons, reversible inactivation of the parietal lobe affects only spatial orienting of attention and gaze, but not non-spatial aspects of performance. This suggests that non-spatial signals contribute to an underlying spatial computation, possibly allowing the brain to determine which targets are worthy of attention or action in a given task context.
Introduction
Practically all behaviors require spatial and non-spatial computations. The simplest of acts, such as picking up an apple, begins with global processes that generate the desire and motivation to reach for the apple in the first place. This drive must then be integrated with visuo-motor computations that recognize the apple, locate it in space and ultimately move the eyes, body and limbs toward it. Physiological recordings show that neurons in many brain areas – in particular higher-order association areas – typically combine information about multiple task-related variables, including those in the spatial and non-spatial domains. Even though such combinatorial coding is ubiquitous, we have little understanding of its unique significance or properties in different brain areas. In this review we discuss this topic in the context of one model system, the attention-oculomotor system of the parietal lobe. We discuss relevant experiments carried out in the lateral intraparietal area (LIP) and, to some extent, in area 7a, which are located, respectively, in the lateral bank and gyral surface of the intraparietal sulcus (Fig. 1a). Both areas are implicated in orienting attention and gaze, but recent evidence shows that they receive multiple non-spatial signals that shape their spatial responses. We review the most relevant recent findings on this integration, with a view toward detecting common features and generating testable hypotheses for future research.
Figure 1. Differentiation of spatial responses according to non-spatial factors.
a. Lateral view of the monkey brain indicating the approximate location of individual parietal areas. Our discussion focuses on areas LIP and 7a, which are located laterally to the intraparietal sulcus and belong to the inferior (or posterior) parietal cortex. Area 5, located medially and dorsally relative to the IPS, is part of the functionally distinct superior parietal lobe. The inferior parietal lobe includes areas 7a, LIP (the lateral intraparietal area, ventral and dorsal divisions) and VIP (the ventral intraparietal area). Produced using Caret, http://www.nitrc.org/projects/caret. b. Modulation by manual release. Monkeys maintained gaze at the center of a display containing several letter-like shapes (panels, central dot). One shape, a right- or left-facing “E”, appeared at a variable location, which could fall inside the neuron’s RF (left column, gray shading) or at a non-RF location (right column). Monkeys were rewarded for maintaining fixation and reporting the orientation of the “E” – right or left facing – by releasing a bar held, respectively, in the right or left paw. The bars themselves were outside of the field of view. Rightward-facing cues could appear on the left and vice versa, so the laterality of the motor response was independent of the laterality of the visual cue. The lower panels show activity of a neuron with dual sensitivity to “E” location and manual release. The neuron responded only if the “E” appeared in the RF but was silent if a distractor did (left vs. right column). In addition, when an “E” appeared in its RF, the cell was more active if the monkey released the left bar than the right bar (blue vs. red traces). Raster plots in the top panels show individual trials. Each dot represents the time of an action potential aligned on cue onset, and the black dots show the time of manual release. Trials are sorted offline in order of manual reaction time.
The bottom panel shows the corresponding averaged spike density histograms (smoothed with a Gaussian kernel, sigma 10 ms). Adapted, with permission, from [17]. c. Modulation by stimulus category. The top left panel illustrates the behavioral task. Monkeys viewed a sample stimulus containing random dot motion in one of 8 possible directions. After a delay period (650 to 1650 ms) a test motion stimulus appeared and monkeys had to release a bar if the test stimulus matched the category of motion of the sample, but continue to hold the bar otherwise. Monkeys were initially trained to categorize directions according to one category boundary (black dotted line) and then retrained to use a different boundary (green dashed line). The top right panel shows a representative LIP neuron that had visual and delay-period activity following presentation of a sample inside its RF, as well as sensitivity to sample category. Firing rates were much more strongly modulated by changes in direction across, relative to within, a category boundary, dissociating this modulation from simple selectivity for motion direction. Modified, with permission, from [24]. d. Non-spatial modulations differentiate the neuronal population encoding location. Each blue dot represents one parietal neuron and the dot location corresponds to the spatial region represented by the neuron’s RF. The colored dots show the population of neurons that might be activated by an attended stimulus in the upper right quadrant. Some of these neurons will respond similarly regardless of the associations of the attended stimulus (gray); others will respond more strongly if the stimulus is associated with a particular category or motor response (dark blue and red). Thus, different overlapping neuronal subsets (gray plus blue or gray plus red) will encode a particular stimulus depending on the category or motor response with which it is associated.
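The Gaussian-kernel smoothing used for the spike density histograms in Figure 1 is a standard computation and can be sketched in a few lines. The sketch below is illustrative only: the spike times, the time grid and the function name are hypothetical, while the 10 ms sigma follows the figure caption.

```python
import numpy as np

def spike_density(spike_times, t_grid, sigma=0.010):
    """Estimate a spike density function (spikes/s) by summing a
    Gaussian kernel (width sigma, in seconds) centered on each spike."""
    sdf = np.zeros_like(t_grid, dtype=float)
    for t in spike_times:
        sdf += np.exp(-0.5 * ((t_grid - t) / sigma) ** 2)
    # Normalize the kernel so the curve integrates to the spike count
    return sdf / (sigma * np.sqrt(2 * np.pi))

# Hypothetical example: a burst of spikes shortly after cue onset (t = 0)
t_grid = np.linspace(-0.2, 0.5, 701)           # 1 ms resolution, seconds
spikes = [0.05, 0.06, 0.07, 0.08, 0.12, 0.15]  # spike times, seconds
rate = spike_density(np.array(spikes), t_grid)
```

Averaging such traces across trials, as in Figure 1b, yields the smooth population histograms shown in the figure.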
Parietal areas provide spatially organized priority maps and receive non-spatial information
LIP and 7a are high-order association areas with connections to nearby parietal areas as well as to sensory, motor and cognitive systems. LIP is defined by its strong links with the visual and oculomotor system. It receives strong visual input from multiple visual areas (including V2, V3, V3A, V4 and the middle temporal area (MT)), is reciprocally connected with the frontal eye field (FEF) and sends descending projections to the superior colliculus [1,2]. While area 7a shares many of the same connections as LIP, it has somewhat weaker direct links to early visual areas and somewhat stronger connections with some parts of prefrontal cortex (e.g., area 45), cingulate and parahippocampal cortex [3,4]. LIP and 7a are also distinguished by their thalamic inputs, with LIP receiving input from both medial and lateral pulvinar but 7a receiving exclusively medial pulvinar input [5].
Consistent with a role in visuo-spatial processing, a large fraction of neurons in areas LIP and 7a have visual receptive fields (RFs), which, for LIP, are typically confined to a single contralateral quadrant and, for 7a, may cover the entire contralateral hemifield [1,6-8]. LIP, but not 7a, is retinotopically mapped. The topography in LIP is weak and inconsistent at the level of single units [1,6,7] but becomes clear when local metabolic or optical imaging is used, although there remain unexpected discrepancies across techniques [8-10]. Visual RFs in LIP and 7a have been classically described as retinotopic – that is, moving relative to head-centered space with each shift of gaze – with additional extraretinal signals that could support computations in head- or body-centered coordinates [9,10]. A number of more recent studies have reported world-centered or object-centered coding in 7a [10-12] and explicit head- or body-centered coding in LIP [9]. Taken together, these findings suggest that LIP and 7a code space primarily in a retinotopic frame but can also support spatially accurate computations that compensate for the observer’s own movements.
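The arithmetic behind such gaze-invariant computations is simple: combining the retinal position of a target with an eye-position (extraretinal) signal recovers the target's location relative to the head. The sketch below illustrates only this geometry; the values and function name are hypothetical and it makes no claim about how parietal neurons implement the transform.

```python
import numpy as np

def head_centered(retinal_pos, eye_pos):
    """Recover target position relative to the head (degrees):
    head-centered = retinal position + eye-in-head position."""
    return np.asarray(retinal_pos) + np.asarray(eye_pos)

target_head = np.array([10.0, 5.0])  # fixed target, head-centered frame
for eye_pos in ([0.0, 0.0], [-8.0, 3.0], [12.0, -4.0]):
    # A retinotopic RF moves with the eyes: the same target lands at
    # different retinal positions depending on gaze direction.
    retinal = target_head - np.asarray(eye_pos)
    recovered = head_centered(retinal, eye_pos)
    # recovered equals target_head for every gaze direction
```

The point of the sketch is that a purely retinotopic code, supplemented by an eye-position signal, carries enough information for spatially accurate behavior across gaze shifts.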
A hallmark of parietal visual responses is that, rather than responding to any object entering their RF, neurons respond selectively to task-relevant or physically salient stimuli. Neurons in LIP and 7a respond robustly to stimuli that pop out by virtue of an abrupt onset or contrasting color [13-15], and neurons in LIP also encode the top-down selection of informative but inconspicuous cues [13,16,17]. Prior studies have linked the selective activity in LIP to both overt saccades [18-20] and covert (internal) shifts of attention [14,16,17], and it was recently suggested that these two functions may be differentially represented in, respectively, the dorsal and ventral subdivisions of this area [21]. In this review we do not draw a strong distinction between mechanisms of spatial attention and eye movements, but we ask how these functions – both of which require visuo-spatial selection – interface with non-spatial aspects of a task.
Converging evidence shows that both LIP and 7a receive information about non-spatial variables, and these signals fall roughly into two categories. One category includes modulatory signals that do not act on their own but modify the spatially coded response to a visual stimulus or motor target. A second category includes independent signals that affect neuronal activity prior to, and independently of, specific visual or motor information. We first describe the evidence for these two types of signals in neural recordings, followed by a discussion of inactivation experiments that pinpoint the functional significance of the non-spatial activity for the behavioral output.
Non-spatial signals modulate visually evoked responses
A central function of attention and eye movements is to orient towards stimuli that are relevant or informative in a given task. In many circumstances, the relevance of a stimulus is determined not by its physical salience but by its predictive power – i.e., its ability to predict other variables such as an action or an outcome. For example, a street sign in a busy intersection may not be physically conspicuous but will be attended because of its association with an action that leads to a desired goal (e.g., walking to a specific destination). Consistent with these considerations, a number of studies show that visuo-spatial responses in LIP are not stereotyped but depend on the task-related associations of the selected input.
Limb selection
A study by Oristaglio et al. has shown that the LIP responses to a visual cue are shaped by the motor action instructed by the cue [17]. The authors used a covert visual search task where monkeys were required to maintain gaze straight ahead and discriminate a peripheral cue (an “E” like shape) embedded among distractors (Fig. 1b). The cue could face to the right or to the left, and monkeys were rewarded for reporting its orientation by releasing a bar held, respectively, in the right or left paw. The task therefore had a spatial and a non-spatial component: monkeys had to select the relevant cue using visuo-spatial attention but report on this cue using a non-spatial (non-targeting) manual response.
Most LIP neurons responded more strongly when the cue, rather than a distractor, fell in the RF, providing the expected signal of visuo-spatial selection [16,17,22]. However, in about half of the cells this response was modulated by the manual release. Some cells had stronger responses to the cue if the cue instructed a left rather than a right bar release (Fig. 1b, left column, red vs. blue). Other cells had the complementary preference, responding best if the cue instructed a right bar release. Control experiments ruled out alternative explanations based on the shape of the cue or the spatial location of the limb, thus linking the selectivity with the effector itself. Importantly, however, the limb modulation vanished if a distractor was in the RF and attention was directed out of the RF, even though the manual release remained constant (Fig. 1b, right panels, blue vs. red). Thus, the primary response conveyed by LIP neurons was one of visual selection, but this response was modulated by the choice of the active limb.
The presence of limb information in LIP is consistent with previous reports that this area receives effector information [18,23]. These earlier studies found that neurons responded more strongly when monkeys were preparing to make a saccade rather than a reach movement [18], and in some cells this preference appeared early in the trial, even before monkeys were shown the specific motor target [23]. This suggests that effector information may serve a number of different functions in LIP. One function may be to link a relevant stimulus with its associated response; another may be to select the effector system (eye or limb) that should be active in a given task.
Categorization
An experiment by Freedman and Assad showed that LIP neurons can also reflect more abstract associations – namely, the categorical membership of an imperative cue [24]. Monkeys were shown a sample cue containing one of several motion directions that were arbitrarily assigned to one of two categories (Fig. 1c, left). After a delay period, monkeys were shown a second (test) stimulus and were rewarded for releasing a bar if the motion direction in the test stimulus matched the category of the sample cue. Category modulations were found during the sample and delay periods, before monkeys could decide on a manual release. If the sample appeared in the RF, neurons had robust visual and sustained responses to this stimulus; these responses were modulated by stimulus category, with neurons responding more strongly to a stimulus if it belonged to one or the other category (Fig. 1c, right). Sensitivity to category was distinct from selectivity for motion direction, and changed to reflect a new categorization scheme after monkeys were retrained to use a new category boundary. Similar to the limb effect of Oristaglio et al., category coding was modulatory, as it was strongest when the sample stimulus appeared in the RF but much weaker if the stimulus appeared at the opposite location [25].
Both the results of Oristaglio et al. and those of Freedman and Assad suggest that visuo-spatial responses in LIP reflect not only the location of an attended object but also the associations of that object in a given task. This response pattern may be represented as in Fig. 1d, where each LIP cell (blue dot) is shown as encoding a particular location. An attended stimulus at a given location (e.g., up and to the right) is encoded by one subset of neurons if it signals one alternative and by a different subset of cells if it signals another alternative in the task (e.g., red and dark blue dots). A third population, shown in gray, encodes the locus of attention regardless of specific stimulus associations. Consistent with this interpretation, two studies have shown that some parietal neurons encode target selection in a context-independent manner, while others do so selectively, only for targets associated with specific attributes or feature dimensions [26,27]. Thus, selecting a particular location is not a stereotyped process but seems to be accomplished by different neuronal populations depending on the non-spatial demands of the task.
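The overlapping-subsets scheme of Fig. 1d can be made concrete with a toy population model. In the sketch below, every parameter (population size, RF radius, proportions of tuned cells, the "left"/"right" association labels) is hypothetical; the sketch only formalizes the idea that an attended location recruits untuned cells plus whichever tuned cells match the current association.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population in the spirit of Fig. 1d: each neuron has a preferred
# location; some are additionally tuned to a non-spatial association
# ("left" vs. "right" bar release), others are untuned ("none").
n = 300
pref_loc = rng.uniform(-10, 10, size=(n, 2))   # degrees, hypothetical
assoc_pref = rng.choice(["none", "left", "right"], size=n,
                        p=[0.5, 0.25, 0.25])

def active_subset(stim_loc, assoc, radius=3.0):
    """Neurons recruited by an attended stimulus: the RF must cover the
    stimulus, and association-tuned neurons must match the current
    association."""
    in_rf = np.linalg.norm(pref_loc - stim_loc, axis=1) < radius
    matches = (assoc_pref == "none") | (assoc_pref == assoc)
    return in_rf & matches

# Same attended location, two different associations
left = active_subset(np.array([5.0, 5.0]), "left")
right = active_subset(np.array([5.0, 5.0]), "right")
shared = left & right   # the untuned cells active in both cases
```

By construction, the two active subsets overlap in the untuned (gray) cells but differ in their association-tuned members, mirroring the "gray plus blue" versus "gray plus red" populations of Fig. 1d.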
Reward
In the limb and category experiments described above, each of the two alternatives signaled by the cue (the two effectors or categories) was behaviorally equivalent and had a comparable representation in LIP. However, some behavioral associations may be asymmetric, with one alternative being clearly more desirable than the other. A salient example is expected reward, where animals clearly prefer stimuli associated with higher rather than lower gains. Consistent with this, LIP neurons are modulated by expected reward in an asymmetric fashion, with most cells increasing and only very few decreasing their activity as a function of expected reward [28-30].
A recent study has shown that reward associations modify not only the representation of an upcoming saccade but also the bottom-up salience of a visual stimulus independently of a motor output [30]. Each trial began with presentation of a conditioned stimulus (CS), an abstract pattern that indicated whether the trial would result in a reward (CS+) or no reward (CS−). The CS appeared randomly either inside or opposite the RF, and was followed by a delay period and by presentation of a second target that appeared unpredictably at the same or at the opposite location relative to the CS. Thus the location of the reward predictor (the CS) was statistically dissociated from that of the motor target, so that reward learning could affect either the visual or the motor response.
The visual response in LIP was strongly modulated by reward valence independently of the motor response. As shown in Fig. 2a, when the CS appeared in the RF, neurons had a fast transient visual response followed by sustained activity that persisted during the delay period. For a CS+, the visual response was strong and was followed by a sustained delay-period activation. In contrast, for a CS−, the initial visual response was weaker and was followed by sustained suppression during the delay period. These modulations appeared specifically at the CS location, producing a neuronal bias that was attractive toward a CS+ location but repulsive away from a CS− location (Fig. 2a, right panels, black vs. gray). The monkey’s saccades reflected the spatial biases in LIP. Following a CS+, saccades were slightly facilitated if they happened to be directed toward rather than away from the CS+ (Fig. 2b, blue bars). In contrast, following a CS−, saccades were strongly impaired if they happened to be directed toward rather than away from the CS− location (Fig. 2b, red bars). Both the neural and behavioral biases were subject to learning and were stronger for familiar (over-learned) relative to novel (newly learned) CS (Fig. 2b).
Figure 2. Modulation of spatial responses by expected reward.
a. Population firing rates (normalized) following presentation of a CS+ (blue) or CS− (red) in the RF. Each CS category contained several individual, abstract patterns that were initially novel to the monkey and were equated for size and luminance. CS presentation (300 ms, thick horizontal bar) was followed by a 600 ms delay period during which monkeys maintained fixation. The stars show time bins when firing rates were significantly modulated by reward. The right panels show, for each CS category, a comparison of activity when the CS appeared inside the RF (black) or opposite the RF (gray). Cartoons indicate trial configurations, with the dashed oval showing the RF and the magenta star, the CS location. The y axis is truncated to highlight delay-period activity. After a CS+ (top panel) or CS− (bottom panel), sustained activity was, respectively, higher or lower at the CS location relative to the opposite, non-stimulated location, indicating a spatial bias toward or away from the CS. b. After the end of the delay period a saccade target appeared unpredictably at either location and monkeys made a visually guided saccade to the target. Bars show saccade accuracy for each configuration (mean and standard error, defined as the normalized angular distance between the target and the saccade endpoint). Accuracy was impaired specifically on CS− trials in which the target happened to coincide with the CS− location, indicating a repulsion from the location of the CS−. This impairment increased with training, being stronger following an over-learned relative to a newly learned CS. c. The insets show saccade endpoints on CS−-congruent trials in a representative session. Each point represents one saccade, and coordinates are rotated so that the target appears on the right horizontal. Saccades show a large degree of scatter, especially after an over-learned CS−. This is remarkable given that the target remains lit and clearly visible until the end of the movement.
Modified, with permission, from [30].
As mentioned above, the reward effects modulated the visual response and, strikingly, they persisted even though they interfered with the optimal action. This is most clearly seen on CS− trials when the saccade target was spatially congruent with a preceding CS− (Fig. 2b,c). Many saccades on these trials were dysmetric and failed to reach the designated target (Fig. 2c), thus being scored as errors and triggering the immediate repetition of the same (unrewarded) trial. However, despite its detrimental effect, the repulsion evoked by the CS− increased rather than decreased with training. As shown in Fig. 2b,c (right vs. left panels), the dysmetria became worse for a familiar relative to a newly learned CS−. In sum, therefore, reward modulations modify spatial representations in an asymmetric manner – with high and low expected gain producing, respectively, attraction and repulsion – and they act on visual representations, modifying the bottom-up salience of stimuli independently of a required action.
Non-spatial signals encode context and rules
While the modulations described above acted on a stimulus-evoked response, two studies show that parietal neurons can also encode behavioral context or rule as independent changes in their firing rates, before the specification of a spatial locus of attention or action.
Rules
Rule related activity was found in LIP and 7a in a task in which, on each trial, a particular rule was cued and was followed, after a short delay, by presentation of a stimulus [31] (Fig. 3a). The rule cue instructed animals to evaluate either the color or the orientation of the stimulus, and to respond by reaching to a button on the right or left. Thus the correct response could not be known in advance but depended on a conjunction of the task rule and the properties of the stimulus bar. Moreover, the rule cue appeared outside the RF and did not itself activate the cells.
Figure 3. Rule and contextual modulations that precede spatial orienting.
a. Task schematic. On each trial, a cue (upright or inverted triangle) first appeared and instructed the rule for that trial. After a delay, a stimulus appeared, and animals responded by pressing a button on the right or left, with the correct response being determined by the combination of rule and stimulus. We asked if cells encode the task rule, and how activity is temporally related to response execution. b. Population firing rate sorted by rule type and response direction. Cells initially coded the (non-spatial) rule, then switched to coding the (spatial) response direction. Modified, with permission, from [31]. c. To highlight the relationship of cell activity to movement in cells with and without rule responsivity, the data were collapsed across tasks, non-preferred direction responses were subtracted from preferred direction responses, and traces were aligned on response onset. Cells that did not encode the task rule (task−, top) showed a consistent temporal relationship of activity to movement onset, as might be expected of a cell that contributes to movement execution. In contrast, cells that encoded task rules (task+, bottom) responded earlier on easier compared to more difficult trials (dark versus light traces), relative to the onset of movement. Modified, with permission, from [32]. d. Contextual modulation according to the potential significance of a salient stimulus (perturbation). Population responses on the “E” search task shown in Fig. 1b, when a transient visual perturbation appeared 200 ms before the search display. Baseline and perturbation-evoked responses were enhanced in a relevant context, when the perturbation validly cued the target’s location, relative to an irrelevant context, when the location of the perturbation and target were statistically independent. The ratio of firing rates was constant during the pre-perturbation and perturbation response, suggesting a multiplicative gain. Modified, with permission, from [33].
Many cells responded after the rule was cued, but before the bar appeared, by increasing their firing for one rule (e.g., evaluate color) or the other (e.g., evaluate orientation) (Fig. 3b). Control experiments established that these modulations were truly rule-specific and not dependent on the visual properties of the cue. Once the stimulus bar appeared, these cells ceased to code the rule and began to encode the motor response, e.g., higher firing for a rightward or a leftward reach (Fig. 3b). Like the limb and category effects described in the previous sections, the rule-related activity effectively divided parietal neurons into two groups, those preferring one rule versus those preferring the other. However, here this segregation occurred before the monkey received spatial information. Interestingly, cells that encoded a rule (task+) showed a different pattern of response to movement compared with cells lacking a rule effect (task−; Fig. 3c) [32]. In task− cells, activity predictive of the reach direction was synchronized to movement onset across all trials. In task+ cells, the predictive activity preceded the movement by a larger amount on easier trials. Thus the predictive and spatially selective activity in these cells seems to reflect computations that help select the response, yet are temporally uncoupled from response onset.
Context
Using a variant of the manual release task described in Fig. 1b, Balan and Gottlieb showed that neurons encode the current task context, defined by the statistical relationship between two stimuli – a relevant target and a salient perturbation [33]. Monkeys performed the manual release task described in Fig. 1b, but before seeing the search display they were presented with a “visual perturbation” – a brief change in one of the display elements. The perturbation was shown in one of two contexts (trial blocks). In the “relevant” context the perturbation appeared at the same location as the search target, thereby effectively cueing the locus of attention. In the “irrelevant” context the perturbation appeared at a random location, thereby providing no information relevant to the search. Thus, while monkeys could not predict the actual stimulus location within a trial, they were aware of the current context, defined by the statistical relationship between the perturbation and target locations.
Monkeys were sensitive to this contextual manipulation and learned to use or ignore the perturbation as appropriate for the current task. LIP neurons reflected this contextual sensitivity (Fig. 3d). During the fixation period before the perturbation appeared, neurons showed elevated responses in the “relevant” relative to the “irrelevant” context. When the perturbation appeared it evoked a spatially specific visual response that was likewise enhanced in the relevant context. The ratio of activity in the two contexts remained constant in the fixation and perturbation epochs, suggesting a multiplicative effect. The contextual modulation shown by Balan and Gottlieb was therefore of a hybrid type, weighting the response to the perturbation itself but also modulating pre-stimulus firing rates according to the known relationship between two stimuli.
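The constant ratio across epochs is the signature of a multiplicative gain: if context scales the entire response, the relevant/irrelevant ratio is the same for baseline and stimulus-evoked activity, whereas an additive offset would inflate the baseline ratio more than the evoked one. The sketch below illustrates this distinction with purely hypothetical firing rates and gain values; it is not a model of the recorded data.

```python
# Multiplicative gain model (all numbers hypothetical, spikes/s)
def response(baseline, stim_drive, gain):
    """Context scales the whole response, baseline plus stimulus drive."""
    return gain * (baseline + stim_drive)

gain_relevant, gain_irrelevant = 1.4, 1.0
baseline, stim_drive = 10.0, 30.0

pre_rel = response(baseline, 0.0, gain_relevant)         # fixation epoch
pre_irr = response(baseline, 0.0, gain_irrelevant)
evk_rel = response(baseline, stim_drive, gain_relevant)  # perturbation epoch
evk_irr = response(baseline, stim_drive, gain_irrelevant)

ratio_pre = pre_rel / pre_irr     # relevant/irrelevant, before stimulus
ratio_evk = evk_rel / evk_irr     # relevant/irrelevant, evoked response
# Under multiplicative gain the two ratios are identical.
```

Under an additive model (response = baseline + stim_drive + offset), the same offset would produce a larger proportional change in the low-rate fixation epoch than in the evoked epoch, so the ratios would diverge.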
Non-spatial signals serve spatial orienting
The findings reviewed above raise a conundrum. Does the plethora of non-spatial activity in LIP and 7a reflect a role of these areas in a range of unrelated computations, or do these signals contribute to spatial orienting through attention or gaze? To answer this question Balan and Gottlieb tested the effects of unilateral inactivations (using the GABA-A agonist, muscimol) on three tasks that had been shown to evoke spatial and non-spatial responses in LIP neurons [22].
In all three tasks, inactivation affected only the spatial, but not the non-spatial, aspects of performance. One paradigm was the “E” search task described above, where neurons encoded a signal of visual selection modulated by the manual release. After muscimol inactivation, performance was impaired only if the cue was in the contralesional field but not if it was in the ipsilesional field, suggesting a role in visuo-spatial selection (Fig. 4a). In contrast, the inactivation did not cause global or limb-specific deficits in manual release, suggesting that LIP is not critical for limb motor planning (Fig. 4a). A second task tested the significance of temporal anticipation (temporal hazard rate), which had also been shown to modulate saccade-related activity in LIP [34]. Monkeys made simple visually guided saccades, but the distribution of delays prior to the go signal was bimodal, being either short or long on the majority of trials. Saccade reaction times were highest at the intermediate (least likely) delays and lowest at the most frequent (shortest and longest) delays, suggesting that monkeys used temporal information to modulate their motor plan [34]. Inactivation had a spatial effect, producing a uniform increase in reaction times for contralesional saccades (Fig. 4b). However, it had no effect on the sensitivity to delay, suggesting that LIP does not contribute strongly to computing temporal anticipation (Fig. 4b). Finally, in a reward task monkeys chose between a red and a green target that were associated with unequal rewards. Inactivation produced a mild ipsilesional bias, slightly reducing choices of the contralesional targets. However, there was no effect on non-spatial aspects of the decisions, including sensitivity to reward or the ability to switch preference upon reversal of reward contingencies (Fig. 4c).
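The temporal hazard rate invoked here has a simple definition: the probability that the go signal occurs now, given that it has not yet occurred. With a bimodal delay distribution the hazard is high at the two common delays and low in between, mirroring the triangular reaction-time pattern. The sketch below computes it for a hypothetical discrete delay schedule loosely patterned on the one described above; the specific delays and probabilities are illustrative, not the published values.

```python
import numpy as np

# Hypothetical bimodal go-signal delay schedule (ms and probabilities)
delays = np.array([300, 700, 950, 1200, 1600])
probs  = np.array([0.45, 0.03, 0.04, 0.03, 0.45])

# Survival function: probability the go signal has not yet occurred
# by each delay (i.e., 1 minus the cumulative probability so far).
survival = 1.0 - np.concatenate(([0.0], np.cumsum(probs)[:-1]))

# Hazard rate: P(go signal now | not yet occurred)
hazard = probs / survival
# hazard is high at 300 ms, low at the intermediate delays, and
# reaches 1.0 at 1600 ms (the go signal is then certain).
```

The inverted-U reaction-time pattern follows directly: anticipation (and hence readiness) tracks the hazard, so responses are fastest at the high-hazard delays and slowest at the rare intermediate ones.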
These findings suggest that the non-spatial responses in LIP do not indicate direct involvement in non-targeting manual actions; instead, they appear to be feedback signals related to the selection of a relevant location.
Figure 4. Unilateral inactivation of LIP impairs spatial but not non-spatial aspects of performance on three distinct tasks.
a. Performance on the “E” search task shown in Fig. 1b, segregated according to the hemifield of the cue (left vs. right column) and the active limb (white vs. black symbols). Symbols show mean and standard error. Reaction times are normalized by subtracting the session mean. Inactivation (24-26 µg muscimol at 8 mg/ml) lowered accuracy (top left) and elevated reaction times (bottom left) if the cue was in the hemifield contralateral to the inactivated hemisphere (left column) but not if the cue was in the ipsilesional hemifield. b. Performance on a visually guided saccade task where the saccade go signal was given, on 90% of trials, after a bimodally distributed delay (modes 300 and 1600 ms), and on 10% of trials at intermediate delays. Reaction times peaked at the intermediate (least frequent) delays. If the target was contralesional (top row) there was a significant increase in reaction time following inactivation, consistent with a lateralized saccadic deficit. However, inactivation had no effect on the sensitivity to delay – i.e., the triangular reaction-time pattern. Each gray trace shows a single experiment and the black traces show the overall mean. c. Performance on a reward-based choice task where either the contralesional or the ipsilesional target received the higher reward. The location of the more highly rewarded target reversed without warning every 60-80 trials. The data points show a running average of the fraction of choices of the contralesional target, around the time of transition between low-contralateral and low-ipsilateral blocks, aligned on the reversal trial (trial 0). Although there was a slight decrement in the fraction of contralesional choices, there was no impairment in the flexibility or speed of the transition or in other aspects of the decision. Modified, with permission, from [22].
Summary
Parietal neurons encode spatial information, but they also encode non-spatial, executive and motivational aspects of complex tasks. Reversible inactivation of the parietal lobe affects primarily spatial functions, suggesting that the non-spatial signals represent feedback that informs the orienting of attention and gaze [22].
Many questions remain regarding the precise mechanisms of this interaction. Computational models suggest that an efficient solution to identifying task-relevant targets is to use feedback from an output layer to assign credit to those stimuli that are associated with successful actions [35,36]. This type of feedback has been proposed to account for categorization responses in ventral stream neurons [35], as well as for perceptual biases reflecting optimal feature gain settings during visual search [36]. We propose that the non-spatial feedback found in the parietal lobe may reflect, in part, this type of computation, through which the brain identifies stimuli that are associated with, and thus can predict, other variables of interest such as an action, rule or expected reward [37,38]. Determining whether this is indeed the case, how precisely such learning occurs, and how this may relate to observations on the human parietal lobe (see Box 1) are exciting questions for future research.
Box 1 Non-spatial functions of human parietal lobe.
Much evidence also suggests that human posterior parietal cortex operates at the interface of spatial and non-spatial cognition. Among the non-spatial functions associated with human PPC is the ability to recruit attention in time, with specific involvement in sustained attention, the attentional blink, and the movement of attention associated with apparent motion [39,40]. In addition, functional imaging studies implicate the posterior parietal cortex in episodic memory retrieval [41] and in mathematical reasoning and certain forms of time perception [42]. The neuronal correlates of these functions remain largely unknown, although modulations by numerosity and time have been reported in LIP [34,43]. An important issue is the homology between human and monkey parietal areas. In humans, non-spatial deficits in temporal attention (which can be demonstrated at a fixed, central visual location) are most closely associated with the ventral (inferior) rather than the superior parietal lobule [39]. Memory retrieval activation seems to appear at loci distinct from those activated by attention, and these retrieval-related loci have no known correlate in the monkey [41]. Finally, the human parietal lobe has a clear hemispheric asymmetry that seems to be absent in monkeys, whereby the right hemisphere has a predominant and possibly non-lateralized role in directing attention [39]. Thus, understanding the non-spatial functions of the parietal cortex requires a better understanding of the single-neuron correlates of higher cognitive functions and of the comparative functional anatomy of humans and monkeys. Nevertheless, the picture emerging from studies in both species is that it is a mistake to view the parietal lobe as devoted narrowly to orienting and motor planning. At the very least, this lobe encodes the underlying computations through which spatial orienting is coordinated with non-spatial aspects of a task.
More broadly speaking, it serves as a cognitive interface that is implicated in multiple aspects of cognition and intelligence both in the spatial and non-spatial domains [44].
Acknowledgements
This research was supported by The National Alliance for Research on Schizophrenia and Depression and the Gatsby Charitable Foundation (JG) and the National Eye Institute (JG and LS).
Disclosure statement:
The authors declare that they have no conflict of interest.
References
- 1.Blatt GJ, Andersen RA, Stoner GR. Visual receptive field organization and cortico-cortical connections of the lateral intraparietal area (area LIP) in the macaque. J. Comp. Neurol. 1990;299:421–445. doi: 10.1002/cne.902990404. [DOI] [PubMed] [Google Scholar]
- 2.Lewis JW, Van Essen DC. Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey. J Comp Neurol. 2000;428:112–137. doi: 10.1002/1096-9861(20001204)428:1<112::aid-cne8>3.0.co;2-9. [DOI] [PubMed] [Google Scholar]
- 3.Cavada C, Goldman-Rakic PS. Posterior parietal cortex in rhesus monkey: II. Evidence for segregated corticocortical networks linking sensory and limbic areas with the frontal lobe. J. Comp. Neurol. 1989;287:422–445. doi: 10.1002/cne.902870403. [DOI] [PubMed] [Google Scholar]
- 4.Cavada C, Goldman-Rakic PS. Posterior parietal cortex in rhesus monkey: I. Parcellation of areas based on distinctive limbic and sensory corticocortical connections. J. Comp. Neurol. 1989;287:393–421. doi: 10.1002/cne.902870402. [DOI] [PubMed] [Google Scholar]
- 5.Hardy SG, Lynch JC. The spatial distribution of pulvinar neurons that project to two subregions of the inferior parietal lobule in the macaque. Cereb Cortex. 1992;2:217–230. doi: 10.1093/cercor/2.3.217. [DOI] [PubMed] [Google Scholar]
- 6.Ben Hamed S, Duhamel JR, Bremmer F, Graf W. Representation of the visual field in the lateral intraparietal area of macaque monkeys: a quantitative receptive field analysis. Exp Brain Res. 2001;140:127–144. doi: 10.1007/s002210100785. [DOI] [PubMed] [Google Scholar]
- 7.Platt ML, Glimcher PW. Response fields of intraparietal neurons quantified with multiple saccadic targets. Exp Brain Res. 1998;121:65–75. doi: 10.1007/s002210050438. [DOI] [PubMed] [Google Scholar]
- 8.Barash S, Bracewell RM, Fogassi L, Gnadt JW, Andersen RA. Saccade-related activity in the lateral intraparietal area. II. Spatial properties. J. Neurophysiol. 1991;66:1109–1124. doi: 10.1152/jn.1991.66.3.1109. [DOI] [PubMed] [Google Scholar]
- 9.Mullette-Gillman OA, Cohen YE, Groh JM. Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J Neurophysiol. 2005;94:2331–2352. doi: 10.1152/jn.00021.2005. [DOI] [PubMed] [Google Scholar]
- 10.Snyder LH, Grieve KL, Brotchie P, Andersen RA. Separate body- and world-referenced representations of visual space in parietal cortex. Nature. 1998;394:887–891. doi: 10.1038/29777. [DOI] [PubMed] [Google Scholar]
- 11.Crowe DA, Averbeck BB, Chafee MV. Neural ensemble decoding reveals a correlate of viewer- to object-centered spatial transformation in monkey parietal cortex. J Neurosci. 2008;28:5218–5228. doi: 10.1523/JNEUROSCI.5105-07.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Rawley JB, Constantinidis C. Effects of task and coordinate frame of attention in area 7a of the primate posterior parietal cortex. J Vis. 2010;10(12):11–16. doi: 10.1167/10.1.12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Gottlieb JP, Kusunoki M, Goldberg ME. The representation of visual salience in monkey parietal cortex. Nature. 1998;391:481–484. doi: 10.1038/35135. [DOI] [PubMed] [Google Scholar]
- 14.Bisley JW, Goldberg ME. Neuronal activity in the lateral intraparietal area and spatial attention. Science. 2003;299:81–86. doi: 10.1126/science.1077395. [DOI] [PubMed] [Google Scholar]
- 15.Constantinidis C, Steinmetz MA. Posterior parietal cortex automatically encodes the location of salient stimuli. J Neurosci. 2005;25:233–238. doi: 10.1523/JNEUROSCI.3379-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Balan PF, Oristaglio J, Schneider DM, Gottlieb J. Neuronal correlates of the set-size effect in monkey lateral intraparietal area. PLoS Biol. 2008;6:e158. doi: 10.1371/journal.pbio.0060158. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.** Oristaglio J, Schneider DM, Balan PF, Gottlieb J. Integration of visuospatial and effector information during symbolically cued limb movements in monkey lateral intraparietal area. J Neurosci. 2006;26:8310–8319. doi: 10.1523/JNEUROSCI.1779-06.2006. During performance of a covert visual search task using a non-targeting manual report, LIP neurons encoded the location of the cue, consistent with a role in covert top-down attention. Surprisingly, responses to the cue were modulated by the effector the monkeys used to report cue orientation, with some neurons preferring the right limb and others the left. This unexpected result shows that LIP is not a saccade-specific area, but that it encodes visual selection in a manner that is sensitive to limb motor planning.
- 18.Snyder LH, Batista AP, Andersen RA. Coding of intention in the posterior parietal cortex. Nature. 1997;386:167–170. doi: 10.1038/386167a0. [DOI] [PubMed] [Google Scholar]
- 19.Mazzoni P, Bracewell RM, Barash S, Andersen RA. Motor intention activity in the macaque’s lateral intraparietal area. I. Dissociation of motor plan from sensory memory. J. Neurophysiol. 1996;76:1439–1456. doi: 10.1152/jn.1996.76.3.1439. [DOI] [PubMed] [Google Scholar]
- 20.Ipata AE, Gee AL, Bisley JW, Goldberg ME. Neurons in the lateral intraparietal area create a priority map by the combination of disparate signals. Exp Brain Res. 2009;192:479–488. doi: 10.1007/s00221-008-1557-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.*Liu Y, Yttri EA, Snyder LH. Intention and attention: different functional roles for LIPd and LIPv. Nat Neurosci. 2010;13:495–500. doi: 10.1038/nn.2496. Muscimol inactivations restricted to the dorsal portion of LIP (LIPd) impair simple visually guided saccades but not performance on a saccade-based search task. In contrast, inactivations restricted to the ventral portion of LIP (LIPv) impair both visual search and simple saccades. This suggests that LIPd may have a more restricted role in saccades, whereas LIPv may be more broadly implicated in attention.
- 22.Balan PF, Gottlieb J. Functional significance of nonspatial information in monkey lateral intraparietal area. J Neurosci. 2009;29:8166–8176. doi: 10.1523/JNEUROSCI.0243-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Dickinson AR, Calton JL, Snyder LH. Nonspatial saccade-specific activation in area LIP of monkey parietal cortex. J Neurophysiol. 2003;90:2460–2464. doi: 10.1152/jn.00788.2002. [DOI] [PubMed] [Google Scholar]
- 24.Freedman DJ, Assad JA. Experience-dependent representation of visual categories in parietal cortex. Nature. 2006;443:85–88. doi: 10.1038/nature05078. [DOI] [PubMed] [Google Scholar]
- 25.** Freedman DJ, Assad JA. Distinct encoding of spatial and nonspatial visual information in parietal cortex. J Neurosci. 2009;29:5671–5680. doi: 10.1523/JNEUROSCI.2878-08.2009. Information about stimulus category modulates visual responses in area LIP, showing that neurons receive abstract non-spatial information.
- 26.*Ogawa T, Komatsu H. Condition-dependent and condition-independent target selection in the macaque posterior parietal cortex. J Neurophysiol. 2009;101:721–736. doi: 10.1152/jn.90817.2008. An important extension of non-spatial modulations to visual search, showing that target selection activity in the parietal lobe is not stereotyped but depends on the visual properties (features and dimension) of the target, with subpopulations of neurons showing context-independent target selection activity and others showing target selection in some contexts but not others.
- 27.Sereno AB, Amador SC. Attention and memory-related responses of neurons in the lateral intraparietal area during spatial and shape-delayed match-to-sample tasks. J Neurophysiol. 2006;95:1078–1098. doi: 10.1152/jn.00431.2005. [DOI] [PubMed] [Google Scholar]
- 28.Sugrue LP, Corrado GS, Newsome WT. Matching behavior and the representation of value in the parietal cortex. Science. 2004;304:1782–1787. doi: 10.1126/science.1094765. [DOI] [PubMed] [Google Scholar]
- 29.Platt ML, Glimcher PW. Neural correlates of decision variables in parietal cortex. Nature. 1999;400:233–238. doi: 10.1038/22268. [DOI] [PubMed] [Google Scholar]
- 30.** Peck CJ, Jangraw DC, Suzuki M, Efem R, Gottlieb J. Reward modulates attention independently of action value in posterior parietal cortex. J Neurosci. 2009;29:11182–11191. doi: 10.1523/JNEUROSCI.1929-09.2009. Reward learning modifies bottom-up salience and visually evoked activity in LIP. Stimuli conditioned to predict reward evoked an enhanced visual response followed by sustained excitatory activity at their location. Stimuli conditioned as predicting no reward evoked a weaker visual response followed by sustained inhibition at their location. These responses were associated with automatic spatially attractive or repulsive biases produced, respectively, by the two stimulus types on subsequent saccades. After long-term training salience became automatic, transferring to a novel context in which the stimuli no longer signaled reward. The results show that reward learning in LIP affects a visual rather than a motor representation and is powerful even when it interferes with an optimal action.
- 31.** Stoet G, Snyder LH. Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron. 2004;42:1003–1012. doi: 10.1016/j.neuron.2004.06.003. When a cue specifies a rule guiding the motor response to a subsequent stimulus, some neurons in areas LIP and 7a encode the current rule in their pre-stimulus activity. Parietal activity is sensitive to abstract rules in the absence of visual or motor stimulation.
- 32.Stoet G, Snyder LH. Correlates of stimulus-response congruence in the posterior parietal cortex. J Cogn Neurosci. 2007;19:194–203. doi: 10.1162/jocn.2007.19.2.194. [DOI] [PubMed] [Google Scholar]
- 33.** Balan PF, Gottlieb J. Integration of exogenous input into a dynamic salience map revealed by perturbing attention. J Neurosci. 2006;26:9239–9249. doi: 10.1523/JNEUROSCI.1898-06.2006. LIP neurons show global changes in firing rates reflecting the context of a salient stimulus. Baseline responses are enhanced when the stimulus is potentially relevant for the task, and are followed by an enhanced response to the stimulus itself. The results suggest that the attentional enhancement in the parietal cortex may reflect a multiplicative scaling of visually evoked activity driven by top-down knowledge of task context.
- 34.Janssen P, Shadlen MN. A representation of the hazard rate of elapsed time in macaque area LIP. Nat Neurosci. 2005 doi: 10.1038/nn1386. [DOI] [PubMed] [Google Scholar]
- 35.** Roelfsema PR, van Ooyen A. Attention-gated reinforcement learning of internal representations for classification. Neural Comput. 2005;17:2176–2214. doi: 10.1162/0899766054615699. The paper presents a computational model with a three-layer architecture that reproduces, among other findings, the selective encoding of diagnostic features by high-level visual neurons. The model uses a global reward signal that gates Hebbian plasticity, along with feedback from the output layer to the intermediate layers, which further restricts plasticity to those synapses responsible for the rewarded action. Although it does not directly model attention, the model provides an excellent account of the role of feedback in solving the spatial credit assignment problem.
- 36.Navalpakkam V, Itti L. Search goal tunes visual features optimally. Neuron. 2007;53:605–617. doi: 10.1016/j.neuron.2007.01.018. [DOI] [PubMed] [Google Scholar]
- 37.*Gottlieb J, Balan PF. Attention as a decision in information space. Trends Cogn Sci. 2010;14(6):240–248. doi: 10.1016/j.tics.2010.03.001. A thorough review of the differences between the decision-based and attention-based interpretations of LIP. The review highlights experimental observations that are not accounted for by current decision models and suggests an alternative interpretation of LIP as encoding an intermediate decision related to acquiring information rather than directly gathering a reward.
- 38.Gottlieb J, Balan P, Oristaglio J, Suzuki M. Parietal control of attentional guidance: the significance of sensory, motivational and motor factors. Neurobiol Learn Mem. 2009;91:121–128. doi: 10.1016/j.nlm.2008.09.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.*Husain M, Nachev P. Space and the parietal cortex. Trends Cogn Sci. 2007;11:30–36. doi: 10.1016/j.tics.2006.10.011. Comprehensive review of neuropsychological studies of the parietal cortex, detailing deficits in spatial and non-spatial functions and open issues regarding the homology of parietal areas in human and non-human primates.
- 40.Battelli L, Cavanagh P, Thornton IM. Perception of biological motion in parietal patients. Neuropsychologia. 2003;41:1808–1816. doi: 10.1016/s0028-3932(03)00182-9. [DOI] [PubMed] [Google Scholar]
- 41.Hutchinson JB, Uncapher MR, Wagner AD. Posterior parietal cortex and episodic retrieval: convergent and divergent effects of attention and memory. Learn Mem. 2009;16:343–356. doi: 10.1101/lm.919109. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Nieder A, Dehaene S. Representation of number in the brain. Annu Rev Neurosci. 2009;32:185–208. doi: 10.1146/annurev.neuro.051508.135550. [DOI] [PubMed] [Google Scholar]
- 43.Roitman JD, Brannon EM, Platt ML. Monotonic coding of numerosity in macaque lateral intraparietal area. PLoS Biol. 2007;5:e208. doi: 10.1371/journal.pbio.0050208. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.*Duncan J. The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends Cogn Sci. 2010;14:172–179. doi: 10.1016/j.tics.2010.01.004. Highlights the role of the parietal cortex, along with the prefrontal cortex, in complex aspects of behavior, including sequence planning and adaptation to novel contexts.