Author manuscript; available in PMC: 2011 Jun 1.
Published in final edited form as: Trends Cogn Sci. 2010 Apr 17;14(6):240–248. doi: 10.1016/j.tics.2010.03.001

Attention as a decision in information space

Jacqueline Gottlieb 1,2, Puiu Balan 1
PMCID: PMC2908203  NIHMSID: NIHMS187696  PMID: 20399701

Abstract

Decision formation and attention are two fundamental processes through which we select, respectively, appropriate actions or sources of information. While both functions have been studied in the oculomotor system, we lack a unified view explaining both forms of selection. We review evidence showing that parietal neurons encoding saccade motor decisions also carry signals of attention (perceptual selection) that are independent of the metrics, modality and even reward of an action. We propose that attention implements a specialized form of decision based on the utility of information. Thus, oculomotor control depends on two interacting but distinct processes, attentional decisions that assign value to sources of information and motor decisions that flexibly link information with action.

Eye movements serve visual exploration

To successfully negotiate our world we are constantly called upon to make decisions, ranging from the simple (look right or left when crossing the street) to the very complex (choose a mate or a career path). Decision making occupies most of our cognitive capacity, and its failure results in devastating behavioral and psychiatric disorders. Thus, understanding the neuronal mechanisms of decision formation is a central goal of cognitive neuroscience.

In recent years, significant progress in this direction was made possible by the development of behavioral tasks suitable for use in experimental animals. In these tasks animals are trained to make simple decisions based on sensory evidence or rewards, and express these decisions through specific actions [1,2]. This strategy has been particularly fruitful in the oculomotor system, where monkeys report decisions by making a rapid eye movement (saccade). Most studies of oculomotor decisions have focused on the lateral intraparietal area (LIP), a cortical area located at the interface between visual input and oculomotor output [1,2]. LIP has been an attractive target of investigation because its neurons encode the direction of an upcoming saccade in a manner that depends on the evidence for that saccade, suggesting that they provide a window into the mechanisms of decision formation [1–3].

In this article we examine the decision literature from the perspective of a second line of investigation that has also been prominent in LIP - studies of visuo-spatial attention. Experiments examining the role of LIP in attention show that, in addition to saccade-related activity, LIP neurons also have robust responses to salient or task-relevant stimuli that are not saccade targets [4]. These attentional responses are similar to saccade-related activity in that they encode a form of visual orienting; however, they diverge from the predictions of decision models because they can be dissociated from the metrics, modality, probability, and even reward of an action. We propose that responses to attention represent a distinct form of decision that assigns value to sources of information rather than to specific actions. Thus, we propose that oculomotor control entails two distinct types of decisions – attentional decisions that select sources of information, and motor decisions that select an appropriate action. In the following sections we survey the key findings emerging from the decision and attention literatures in LIP, and end by proposing a synthesis that incorporates findings from both lines of investigation.

Parietal neurons encode saccade decisions

The control of rapid eye movements (saccades) relies on a distributed network encompassing subcortical and cortical areas [5]. In the neocortex, two areas that are particularly important for oculomotor control are the LIP and the frontal eye field (FEF) [6,7]. Neurons in both areas have visual receptive fields (RF) and selectively encode the locations of salient or task-relevant objects [8–10]. These neurons are thought to provide spatially organized “priority representations” – sparse maps of the visual world that highlight selected locations and mediate orienting toward those locations (ibid).

Studies of oculomotor decisions have focused on one node in this distributed network, area LIP. The central insight from these studies is that LIP neurons reflect not merely the final outcome of a decision – the metrics of an upcoming saccade – but also the accumulation of evidence leading to that decision. In one well-known paradigm saccade decisions are based on a sensory cue. On each trial monkeys see a patch containing random dot motion directed toward one of several saccade targets and are rewarded for selecting the target whose location is congruent with the motion direction [11] (Fig. 1a). As expected, LIP neurons encode the direction of the upcoming saccade, responding more if the monkey plans a saccade toward the RF than toward the opposite direction (Fig. 1a). Importantly, these responses depend on the strength (signal-to-noise ratio) of the sensory evidence supporting the saccade (Fig. 1b). Activity ramps up slowly if the discriminandum contains low-coherence motion but faster for high-coherence cues, and converges to a common, coherence-independent level just before the saccade [12–14, 15]. In other studies saccade decisions are based not on sensory evidence but on the reward history of the alternative targets [9, 16–18]. Monkeys learn to track the targets’ histories and typically choose the optimal (higher-reward) target; again, neural responses are not stereotyped but increase as a function of the subjective desirability of the selected action [2,3].
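The ramping-to-bound dynamic described above is commonly formalized as bounded evidence accumulation. The sketch below is an illustration only – the random-walk form and all parameter values are our assumptions, not the fitted models of the cited studies:

```python
import random

def ddm_trial(coherence, drift_gain=0.2, noise=1.0, bound=30.0,
              max_steps=2000, seed=None):
    """Accumulate noisy evidence until a decision bound is crossed.

    Returns (choice, decision time). Parameter values are illustrative,
    not fitted to data.
    """
    rng = random.Random(seed)
    x = 0.0  # accumulated evidence; its sign encodes the favored target
    for t in range(1, max_steps + 1):
        x += drift_gain * coherence + rng.gauss(0.0, noise)
        if abs(x) >= bound:  # common, coherence-independent threshold
            return (1 if x > 0 else -1), t
    return (1 if x > 0 else -1), max_steps

def mean_rt(coherence, n=500):
    """Average decision time across simulated trials."""
    return sum(ddm_trial(coherence, seed=i)[1] for i in range(n)) / n
```

Stronger (higher-coherence) evidence produces faster ramping and earlier bound crossing, reproducing the qualitative pattern in Fig. 1b: activity rises at a coherence-dependent rate but terminates at a common level.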

Figure 1.

Figure 1

Overlap of visual and saccade responses in LIP. (a,b) Encoding of saccade direction on the random-dot discrimination task. (a) Task layout (top) and single-neuron response (bottom). Saccade targets (black dots) are placed in and opposite the RF (pale oval). The motion cue appears near the center of gaze, outside the RF. The bottom panels show responses of a representative neuron encoding saccade direction. Activity is aligned to saccade onset (vertical bar) and carets show the time of cue onset. The histograms show average firing rate in spikes per second. Reproduced with permission from [12]. (b) Population responses. Activity preceding a saccade to the RF increases at a rate proportional to motion coherence, but converges to a common level just before the saccade. Reproduced with permission from [12]. (c,d) Encoding of saccade direction and cue location in a match-to-sample decision task. (c) Two task layouts, with the cue outside (top) or inside (bottom) the RF. Monkeys viewed a circular array containing 8 peripheral saccade targets (only two are depicted here for clarity of presentation). On each trial a cue was flashed for 200 ms and, after a 600–800 ms delay, monkeys were rewarded for making a saccade to the target matching the cue. Rasters show responses of a representative neuron. If the cue was outside the RF (top) the neuron encoded saccade direction. If the cue was in the RF (bottom) the neuron responded first to the cue and later encoded saccade direction. Modified with permission from [25]. (d) Population responses when the cue appeared in the RF. The 8 traces correspond to the 8 saccade directions (bold traces mark the 3 best directions). The saccade-direction response was preceded by a strong saccade-independent response to the cue. Modified with permission from [25].

Taken together, these findings led to a view of LIP as a sort of “final path” for saccade motor decisions (Fig. 4a). LIP is thought to integrate multiple sources of evidence and to encode the accumulation of that evidence in an evolving motor plan, culminating in the final selection of an action. This idea has been implemented in a number of computational models, including models based on probabilistic decision variables [19], attractor networks [20] or saccade probability in a Bayesian framework [21].

Figure 4.

Figure 4

(a) Computational stages involved in decision formation. In the decision literature, an internal decision layer (proposed to be encoded in LIP) selects among alternative actions based on sensory and reward information. (b) Proposed expansion of the internal decision layer to include a stage of visual selection (attention) that is distinct from action selection. The visual selection stage can highlight several objects or locations, for example, those that are cues or targets for action. This stage communicates with downstream ocular or skeletal motor areas that select an appropriate action. Feedback from action selection layers modulates the dynamics of visual selection.

Neurons also encode covert attention

Studies of the role of LIP in attention have targeted a similar population of neurons to that studied in decision experiments – neurons with spatially tuned visual, sustained and pre-saccadic activity [22–24]. However, these studies have examined a different aspect of the neural response. Rather than focusing solely on responses to saccade targets, studies of attention ask how neurons encode target or non-target stimuli.

A consistent outcome of these studies is that LIP neurons with saccade-related activity also have strong motor-independent visual responses. A particularly clear example came from an early experiment in which, as in the motion discrimination task, monkeys chose a saccade target based on a visual cue [8, 25]. One difference was that the saccade decision in this task was based not on visual motion but on a match between cue and target properties (Fig. 1c). The key difference, however, was that we placed not only the target but also the cue in the RF, allowing us to examine neural responses to both stimuli. If the target was in the RF, neurons had directionally tuned delay and pre-saccadic activity similar to that described in decision paradigms (compare Fig. 1a and 1c, top). However, if the cue appeared in the RF, neurons had an early cue-evoked response that was independent of the subsequent saccade (Fig. 1c (bottom), d).

As we describe in more detail below, the cue-evoked responses reflect a broader role of LIP in attention. However, these responses are problematic for models of decision formation, because they break the presumed relation between LIP activity and saccade probability. Rather than scaling with saccade probability as suggested by the results in Fig. 1b, in our task activity was strongest in response to the cue – a stimulus that was practically never targeted with a saccade (cue-directed saccades were penalized in the context of the task). Given the similarities in other aspects of the data and in particular the strong saccade-related activity found in both studies, it is unlikely that this apparent discrepancy was due to sampling of different neuronal populations. The most likely explanation is that in the motion discrimination task cues were always placed outside of the RF (near the center of gaze), so that investigators simply did not observe the response to the cue (Fig. 1a). However, our findings show that cue-evoked responses were readily elicited by simply placing the cue in the RF.

This dual, visual and saccadic nature of the LIP response significantly complicates its readout by downstream areas. The common assumption of decision models is that the readout involves a simple comparison to a threshold, whereby a saccade is triggered whenever activity in some portion of the LIP map reaches or exceeds a certain level [19, 20]. However, given the strong visual responses shown in Fig. 1c–d, this simple scheme would trigger more erroneous saccades to non-target stimuli than correct saccades to a target. One possibility is that saccade thresholds are variable, for example being set high throughout a delay period and lowered upon receipt of the “go” signal (extinction of the fixation point). To address this possibility we carried out another experiment in which monkeys were required to make immediate saccades opposite a salient cue (antisaccades) [26]. After achieving central fixation monkeys saw a cue that flashed either inside the RF or at the opposite location. The fixation point disappeared simultaneously with cue onset, and monkeys were rewarded for making a saccade opposite the cue as soon as possible after cue onset. In this condition of acute visuo-motor conflict, neurons encoded only visual, not motor, selection: they responded whenever the cue appeared in their RF but did not encode saccade direction even as monkeys were executing the saccade itself. This suggests that the signal consistently encoded in LIP is one of visual rather than motor selection. To choose the appropriate action (in this case, an antisaccade) the brain requires additional mechanisms that can contravene or supplement the visual signal from LIP. In some circumstances the final motor decision is not faithfully encoded in LIP, suggesting that it is computed in downstream areas.
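The readout problem can be made concrete with a toy sketch. The firing-rate traces and threshold values below are invented for illustration; the point is only that a fixed threshold fires on a strong visual transient, whereas a threshold gated by the “go” signal does not:

```python
# Toy firing-rate traces (spikes/s) at one location of the LIP map.
# All numbers are invented for illustration.
cue_transient = [5, 40, 70, 55, 30, 15, 10]  # visual burst to a non-target cue in the RF
presaccadic   = [5, 10, 20, 35, 50, 65, 80]  # build-up before a planned saccade into the RF

def fixed_threshold_readout(trace, threshold=60.0):
    """Trigger a saccade the first time activity crosses a fixed threshold."""
    for t, rate in enumerate(trace):
        if rate >= threshold:
            return t
    return None

def gated_readout(trace, go_time, threshold=60.0):
    """Hold the threshold prohibitively high until the 'go' signal arrives."""
    for t, rate in enumerate(trace):
        if t >= go_time and rate >= threshold:
            return t
    return None
```

Under this sketch, the fixed threshold fires on the cue transient (an erroneous saccade in the match-to-sample task), while a go-gated threshold ignores it yet still permits the planned saccade once the fixation point is extinguished.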

In a subsequent series of experiments we further examined the role of LIP in visual selection and, specifically, in effortful, top-down attention [4, 27]. To examine top-down attention we trained monkeys on a task in which the relevant cue (an “E”-like shape) was not physically conspicuous but appeared in the visual periphery among similar distractors [22, 28, 29]. The cue appeared at an unpredictable location and could face to the right or to the left (Fig. 2a). While holding central fixation, monkeys were rewarded for reporting cue orientation by releasing a bar held in their right or left hand. Performance declined as a function of the number of distractors, revealing a set-size effect that is diagnostic of attentionally demanding visual search [29]. In addition, because monkeys used a non-targeting manual response, the task separated the metrics of attention (linked to the visual display) from those of motor selection (linked to the bar release).

Figure 2.

Figure 2

LIP encodes top-down attention. (a) Monkeys reported an orientation discrimination with a manual report. A display containing several figure-8 placeholders remained stably on the screen. A trial began when the monkeys achieved central fixation and grasped two response bars, leading to presentation of several distractors and a cue (a letter “E”). Monkeys reported the “E” orientation by releasing the right or left bar. (b) Population responses during search with displays of 2, 4 or 6 elements. After presentation of the search display (time 0) firing rates strongly increased if the “E” was in the RF (solid traces) but not if a distractor was in the RF (dashed traces). Increasing set size lowered firing rates. Reproduced with permission from [29]. (c) Effects of reversible inactivation during covert search. If the cue is in the hemifield contralateral to the inactivation site, inactivation lowers accuracy (top left) and elevates reaction time (bottom left). Deficits are not seen for ipsilateral cues (right panels) and do not depend on the limb used to indicate the decision (black vs. white symbols). Modified with permission from [28]. (d) A neuron selective for cue location and bar release. The neuron increased its activity if the “E” was in its RF (left column) but not if a distractor was in the RF (right column). The neuron also responded more if the monkey released the left relative to the right bar (blue vs. red). Trials are sorted offline in order of manual latency. Reproduced with permission from [22]. (e) Population activity encoding bottom-up shifts of attention. Monkeys kept in mind the location of a briefly presented target for a later saccade; a salient distractor was presented at an unpredictable location during the delay period. When the target was in the RF (blue) LIP neurons maintained sustained activity encoding the target’s location. If the distractor appeared in the RF (red), the distractor-evoked response far surpassed the target-related activity for a brief period before returning to baseline. Reproduced with permission from [23].

Consistent with a role in attention, over 80% of LIP neurons encoded the location of the cue in the visual array, responding strongly if the “E” appeared in their RF but much less if a distractor was in the RF [22, 29]. These responses declined as a function of the number of distractors, correlating with the behavioral set-size effect [29] (Fig. 2b). The findings suggest that LIP activity, like that in other visual areas [30], is subject to competitive normalization that reduces activity in multi-element displays. These findings are consistent with prior reports that transient unilateral inactivation of LIP with muscimol (a GABA-A receptor agonist) impairs performance of attentionally demanding visual search [31, 32], a finding confirmed by a subsequent experiment in our laboratory [28] (Fig. 2c).
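Divisive normalization of this kind – each location’s response divided by a pool of activity across the display – can be sketched in a few lines. The drive values and the semi-saturation constant below are invented for illustration, not a fit to the recorded data:

```python
def normalized_response(drive, other_drives, sigma=1.0):
    """Divisive normalization: each input is divided by a pooled sum of all inputs.

    'drive' is the feedforward input at the recorded neuron's RF location;
    'other_drives' are the inputs at the other display locations.
    All values are in arbitrary units, chosen for illustration.
    """
    pool = sigma + drive + sum(other_drives)
    return drive / pool

cue_drive, distractor_drive = 10.0, 4.0

# Response to the cue in the RF falls as distractors are added to the display,
# mirroring the behavioral set-size effect.
responses = [normalized_response(cue_drive, [distractor_drive] * n) for n in (1, 3, 5)]
```

Because the normalization pool grows with every added element, the cue response declines monotonically with set size while still exceeding the response to a distractor in the same display, matching the ordering of the traces in Fig. 2b.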

The presence of a strong signal of visuo-spatial selection in the “E” search task shows that this signal is not exclusively linked to saccades or even to an oriented skeletal response (such as a reach). Instead, visual selection is independent of motricity and arises even for a non-targeting action such as a bar release. Unexpectedly, however, we found that the visual selection signal in LIP was modulated by the manual action [22]. This effect is shown for a representative neuron in Fig. 2d. The neuron responded more if the cue than if a distractor appeared in its RF, providing the expected signal of cue location. However, its response to a cue in the RF was stronger when the monkey released the left limb than when she released the right limb. While for this neuron the preferred limb was congruent with the hemifield of the RF, other neurons preferred the RF-incongruent limb (the limb ipsilateral to the recording hemisphere), with the relative proportion of contralateral and ipsilateral limb preferences varying across subjects. Control experiments showed that this selectivity reflected the active limb rather than the orientation of the cue, as it persisted when the monkey discriminated a second set of shapes. In addition, selectivity was linked to the effector and not to the effector’s location in space, as it was unchanged when monkeys performed the task with their limbs crossed over the body midline [22].

Taken together, these findings suggest a different view of LIP activity from that suggested by studies of decision formation. First, the findings show that LIP encodes visual selection in conjunction with ocular or skeletal actions. Second, they show that LIP receives feedback from multiple motor modalities, including ocular and skeletal, non-targeting actions. Finally, the results suggest that motor feedback is expressed in variable fashion, gated by visual and attentional demands. Earlier we noted that saccade activity is found in LIP in some cases (e.g., before visually guided or delayed saccades, Fig. 1) but not in others (e.g., before an antisaccade to an unmarked location [26]). Likewise, sensitivity to limb movement in the “E” search task appeared only if attention was deployed to the RF (Fig. 2d, left panels) but not if attention was deployed outside the RF (right panels), even though the manual release was equivalent in both cases. Moreover, the deficits produced by muscimol inactivation were fully explained by the location of the cue and were equivalent regardless of the active limb, suggesting that LIP was not critical for the manual response [28] (Fig. 2c). Based on these findings we propose that LIP encodes a stage of visual selection that communicates with, but is distinct from, a stage of motor selection (Fig. 4b). Although neurons receive feedback about the selected action, they express this feedback variably, depending on visual and attentional demands.

Attention is based on the utility of information

Perhaps the most important tenet of the decision literature, and one that has the broadest appeal, is that any decision requires an estimate of the utility of the available options [2, 3]. The attention-related responses described above encode a form of selection and, as such, may also require the weighting of alternative options. However, while in decision studies utility is determined by a physical reward – the food or drink that can be harvested by an action – responses to attention do not seem to have a simple relation to a physical reward. As we discuss in this section, attention seems to be based on a more abstract metric related to sources of information.

Consider, for example, the cue selection response in the “E” search task (Fig. 2b). As we described above, this response encodes a shift of attention to the cue, which, if appropriate, may be accompanied by an overt saccade to the cue. However, whether overt or covert, such orienting is not the decision in the task: monkeys were rewarded for releasing a bar and would have been penalized had they directed gaze to the “E”. Had LIP simply reported the desirability of an action, its cue response might have been expected to diminish and perhaps disappear with prolonged training, as monkeys learn that it is not desirable to make a saccade to the “E”. One possibility is that in this task the cue had indirect reward value by virtue of its ability to inform the later manual action. However, the evidence we discuss below shows that attention can be even more starkly dissociated from a physical reward. When driven by physical conspicuity or emotional significance, attention is automatically allocated to salient or predictive stimuli even when this consistently interferes with an optimal action [23, 33, 34].

As we mentioned above, LIP neurons have strong responses to abruptly appearing stimuli (Fig. 1c,d), and several experiments have linked these responses with reflexive, bottom-up attention. Neurons respond selectively to abruptly changing relative to stable, inconspicuous stimuli, and the strength of the visual response correlates with the distracting effects of such stimuli on a current task [8, 23, 33, 35]. The dynamics of bottom-up attention were examined in one experiment in which monkeys remembered the location of a target for a later saccade while viewing a salient distractor presented during the memory interval [23]. Neurons responded both to the target and to the distractor (Fig. 2e). Attention (measured at several time points with a secondary contrast sensitivity task) was allocated to the locus of highest activity in LIP, switching first from the target to the distractor and back to the location of the target as the distractor response waned in LIP. Notably, attention and LIP activity were inexorably driven by the distractor even though this stimulus could only impair, never improve, the odds of harvesting a reward.
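The transient capture and recovery in Fig. 2e can be mimicked with two leaky integrators, one per map location. The time constants and input amplitudes below are invented; the sketch reproduces only the qualitative dynamics (brief distractor dominance followed by decay back below the sustained target signal):

```python
def priority_dynamics(steps=60, tau=5.0, target_input=1.0,
                      distractor_onset=20, distractor_dur=3, distractor_amp=6.0):
    """Leaky-integrator sketch of two locations on a priority map.

    The target location receives sustained input; the distractor location
    receives a brief, strong transient. All parameters are illustrative only.
    Returns the activity traces of the two locations.
    """
    target, distractor = 0.0, 0.0
    t_trace, d_trace = [], []
    for t in range(steps):
        d_in = distractor_amp if distractor_onset <= t < distractor_onset + distractor_dur else 0.0
        target += (-target + target_input) / tau        # sustained target signal
        distractor += (-distractor + d_in) / tau        # transient distractor burst
        t_trace.append(target)
        d_trace.append(distractor)
    return t_trace, d_trace
```

If attention is read out as the locus of highest map activity, this toy model shifts attention to the distractor at its onset and returns it to the target as the distractor response decays, as observed behaviorally.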

An additional source of automatic influence on attention is biological significance – the Pavlovian or emotional associations of external objects. In human subjects, emotional stimuli automatically attract attention [36] and bias activity in primary visual cortex independently of their relevance to the task [37, 38]. However, little is known about the single-neuron substrates of Pavlovian attention in the monkey brain.

To address this question we tested LIP responses to reward predictors – cues that signaled an upcoming outcome but not the required (optimal) action [34]. Each trial began with a brief conditioned stimulus (CS), an abstract pattern announcing whether the trial would end in a reward (CS+) or no reward (CS−). The CS appeared randomly either inside or opposite the neuron’s RF, but its location was irrelevant to the task. To complete a trial, monkeys had to make a saccade to a second target that appeared after extinction of the CS and was located unpredictably at the same or the opposite location relative to the CS.

Despite their lack of operant significance, the CSs automatically biased attention based on their salience and Pavlovian associations. Immediately upon onset, both the CS+ and the CS− produced a strong visual response, suggesting that they transiently attracted attention to their location. At longer delays, after disappearance of the CS, this gave way to a sustained bias that depended on CS valence. In the wake of a CS+, neurons had slightly higher delay activity if the CS+ had appeared in the RF relative to the opposite location, suggesting that the CS+ produced an attractive bias toward its location (Fig. 3b, left). In contrast, after an over-learned CS−, neurons had lower activity at the CS− location relative to the opposite location (Fig. 3a, right and 3b, right), suggesting that a CS− produced an inhibitory bias at its location. Consistent with this, saccades were slightly facilitated if they were congruent with the location of a recent CS+ (Fig. 3c,d, left) but were impaired (in both accuracy and reaction time) if they were directed to the location of a recent CS− (Fig. 3c,d, right). Thus, following its initial bottom-up effect, a CS+ automatically attracted attention while a CS− automatically repelled attention from its location.

Figure 3.

Figure 3

Pavlovian cues bias attention and LIP activity. (a) Neurons have valence-selective activity that grows with training. Normalized population firing rates to the CS+ (blue) or CS− (red). Over-learned stimuli had been trained for several months before recordings began. Newly-learned stimuli were trained at the beginning of each session until monkeys learned their valence, as indicated by anticipatory licking. Shading represents SEM. Stars show time bins with significant differences. The CS was present for 300 ms (black bar), followed by a 600 ms delay period during which two identical placeholders were visible at each location. (b) CS-evoked responses are spatially specific. Traces show population responses for overtrained CSs appearing inside (black) or opposite (gray) the RF. The vertical axis is truncated to highlight delay-period activity. (c) Saccade accuracy is impaired for CS−-congruent saccades. Landing positions of saccades in a representative session, when the saccade target was congruent with the location of an overtrained CS+ (left) or CS− (right). Each gray dot shows one saccade. Axes show horizontal and vertical eye position, normalized so that the target appeared at position (1,0). Saccades on CS− trials were often dysmetric, falling outside the allowed accuracy window. (d) Saccade impairment is spatially specific. Panels show mean and SEM of saccade accuracy across all sessions. On CS+ trials accuracy is slightly but significantly higher for saccades that are spatially congruent with the CS. On CS− trials, accuracy is much lower for CS−-congruent relative to incongruent saccades, and this difference increases with training. Modified with permission from [34].

The fact that CS-evoked responses were modulated by expected reward is consistent with prior reports of strong reward influences in this area [9, 16]. However, the specific form of these effects is inconsistent with an interpretation in terms of reward-based action selection. The strong CS−-evoked repulsion may be understood as an instinctive tendency on the part of the monkey to avoid a source of bad news. However, this tendency was maladaptive, as it interfered with the monkey’s ability to make the required saccade to the target. Thus, reinforcement models predict that it should subside with training, as subjects learn the value of the optimal action [9, 39]. And yet, the experiment yielded precisely the opposite result: saccade accuracy was worse on trials with an overlearned than with a newly-learned CS−, so that learning only exacerbated an initial maladaptive effect (Fig. 3a,d). This, we propose, is a very strong result. It suggests that learning in the attention system was not based on the expected reward of an action. Instead, it was based on the predictive value of information, increasing as monkeys had more opportunity to learn the stimulus–reward associations.
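One way to capture this dissociation is a Rescorla-Wagner-style sketch in which the attentional bias at the CS location tracks the cue’s learned predictive value rather than the value of any action. This is our illustrative reading of the result, not the model used in the experiment; the outcome coding and learning rate are assumptions:

```python
def train_cs(outcomes, alpha=0.1):
    """Rescorla-Wagner update of a CS's predictive value V.

    outcomes: +1 for rewarded trials, -1 for omission trials (coding
    outcomes relative to the session average is our assumption).
    Returns the trajectory of V across trials.
    """
    v, traj = 0.0, []
    for r in outcomes:
        v += alpha * (r - v)  # prediction-error update
        traj.append(v)
    return traj

# Attentional bias at the CS location is read out from V itself:
# positive V attracts attention, negative V repels it.
cs_plus = train_cs([+1] * 100)   # over-learned CS+
cs_minus = train_cs([-1] * 100)  # over-learned CS-
```

Because the update is driven by prediction error about the outcome, the repulsive bias of the CS− grows in magnitude with training, as observed, even though no action toward or away from the CS is ever reinforced.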

The need for autonomous visual and motor selection

A central achievement of the decision literature is its ability to suggest a conceptual framework for formally describing goal-directed behaviors [1–3]. The above discussion highlights both the importance of this approach and the need to expand its current instantiation. We reviewed a substantial body of evidence that, in addition to its saccade-related activity, LIP encodes signals of visual selection that are not captured by current action-based decision models, as they are independent of the metrics, reward or even occurrence of an action. This suggests that LIP is not near the end point of a motor decision stage, as suggested by the current decision literature (Fig. 4a). Instead, it represents an earlier stage encoding visual or attentional selection (Fig. 4b). This stage can provide top-down feedback to sensory systems, as proposed in computational models of attention [40], and also interact with downstream areas forging motor decisions.

The evidence we reviewed also addresses a longstanding but still open question regarding the purpose of such an explicit representation of attention. Given that eye movements are often congruent with the locus of attention, why would the brain have a stage of visual selection that is distinct from saccade motor selection? We suggest that the purpose of such a representation is twofold. First, this stage may be required for behavioral flexibility – the ability to variably link selected information with actions. Second, it is needed to implement a distinct form of decision – decisions that identify and assign priority to sources of information. We discuss each in turn.

To succeed in complex environments we must act in a flexible manner, as appropriate for a given task. The need for flexibility is clear in the case of skeletal actions. We are not limited to reaching toward an attended object but can perform actions that are unrelated to the locus of attention, such as shifting gears while looking at the traffic light, running away from an attended ball, or playing the piano while reading a score. Although less obvious, flexibility is also required for oculomotor control. In complex environments we look at some stimuli while covertly attending to others [41–43], and we separate attention from gaze in social situations.

Such flexible coordination may be difficult to achieve with a single stage that equates visual and motor selection. In contrast, a two-stage model of the type shown in Fig. 4b allows for both autonomy and coordination between visual and motor selection. By virtue of its feedforward projections, the visual selection signal in LIP may bias downstream areas to generate a saccade to the locus of attention. However, these biases do not amount to a motor command, and downstream areas may block or supplement this descending signal as required by a given task. Likely sites for saccade motor decisions are visuomovement and movement neurons in the FEF and the superior colliculus, along with related basal ganglia circuitry [44–47]. In addition to this feedforward link, LIP receives feedback about motor selection, which provides another possible avenue for coordination between attention and actions.
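The division of labor in Fig. 4b can be caricatured in a few lines: a visual stage picks the highest-priority location, and a separate motor stage maps that location onto a task-appropriate action, identically for a prosaccade but inverted for an antisaccade. This is a schematic of the proposal, not an implementation of LIP circuitry; all values and the two-task repertoire are invented:

```python
def visual_selection(salience):
    """Attentional stage: return the index of the highest-priority location."""
    return max(range(len(salience)), key=lambda i: salience[i])

def motor_selection(attended, n_locations, task):
    """Motor stage: flexibly map the attended location onto an action.

    'pro' targets the attended location; 'anti' targets the diametrically
    opposite one (a simplification assuming an even number of locations).
    """
    if task == "pro":
        return attended
    if task == "anti":
        return (attended + n_locations // 2) % n_locations
    raise ValueError(f"unknown task: {task}")

salience = [0.1, 0.9, 0.2, 0.1]  # made-up priority map: a salient cue at location 1
attended = visual_selection(salience)
```

The same attentional selection thus supports opposite actions depending on task context, which a single stage equating visual and motor selection could not do.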

A second critical function of an explicit attentional representation is, we propose, to assign priority to sources of information. Studies of attention have largely focused on the mechanisms by which the brain enhances the representation of an attended stimulus relative to distractors [30]. However, these studies tell us little about how the brain identifies the relevant stimulus to begin with – how it decides which stimulus is worthy of attention. In Box 1 we outline some of the most obvious questions regarding the metric, or metrics, that may guide an attentional decision. Briefly, the above discussion suggests three possible answers. One metric guiding attention may be the task relevance or information provided by a stimulus – the ability of a stimulus to enhance the reward of a later action (Fig. 2) [48, 49]. Another metric may be based on biological significance (Fig. 3). Yet other metrics may be related to the conspicuity of stimuli or their ability to change an existing belief, which may be important for learning or exploration [40, 50, 51].

Box 1 Questions for future research

Our view of attention as involving decisions about the value of information opens a number of fundamental questions that have not been addressed in the literature. Below we outline several broad classes of questions.

  1. Identifying valuable sources of information is one of the most difficult problems faced by the brain, and it is unlikely that it involves only LIP. What is the relation between the associative coding we find in LIP and the frontal lobe, which is thought to provide memory-based, top-down control of attention52, 53?

  2. How is attention related to a reward or, more generally, to the goal of a task? We have seen that attending to task-relevant cues can increase the reward of a later action (Fig. 2). Is the “relevance” of a cue related to the strength of its action associations? If so, do animals track cue-action associations even independently of a physical reward? How is a metric of information based on the validity of a cue (the strength of its predictive associations) related to a metric based on its sensory strength (signal to noise ratio)54?

  3. How is attention related to learning and exploration? Is more attention allocated to more reliable cues, or is attention allocated to more uncertain stimuli in order to facilitate learning about them? How do other factors that influence saccadic exploration, such as visual conspicuity or surprise, relate to an information metric50, 51? A recent study in adaptive systems suggested that a metric of acquired information could be used to motivate exploration, guiding it toward situations that increase learning55. Can this framework be fruitfully applied to neuroscientific studies of attention and learning, and can it provide a bridge for cross-talk between studies in neuroscience and robotics?

  4. What are the mechanisms that motivate attention? A recent study suggests that dopamine neurons encoding prediction error for physical rewards also encode a cognitive reward signal related to the anticipation of information56. Is this reward signal critical for motivating attention? If so, how is it computed and how does it affect attention? Is attention deficit disorder related to deficits in processing delayed or cognitive rewards?

  5. What is the relation between the phenomenology described by decision and attention experiments? Does the coherence-dependent rise of activity in the motion discrimination task (Fig. 1b) truly reflect the “accumulation of evidence” or does it reflect competitive interactions between the saccade target and the attentionally demanding cue29? If information accumulation does occur, does it apply to covert attention (when it is not necessary to cross a motor threshold) and if so, how?

  6. Is the task-related feedback we find in LIP transmitted to sensory areas (e.g., Moores et al., 2003) and from there to perception? Is attention directed differently depending on the information content of stimuli, i.e., whether the stimuli describe actions, emotionally significant outcomes, or abstract categorical variables57? Does the perception of a stimulus depend on the action or category associated with that stimulus?

  7. The attentional maps in LIP and FEF inform both ocular and skeletal motor output22, 58. Are these maps important for allowing flexible selection and coordination across motor modalities, and if so, how?

A moment’s consideration shows that, just as action selection requires the learning of action-reward associations, attention may need to rely on associative learning about visual stimuli. A visual stimulus is not valuable in and of itself but only becomes so if it predicts other variables of interest, such as an action or an expected reward. Thus, the “E” in our search task is relevant only because it predicts an optimal action, and a CS in the Pavlovian task is relevant only because it predicts a specific reward. One possibility is that the reward and motor modulations found in LIP are part of a broader mechanism by which the brain learns to identify predictive stimuli based on their task-specific associations. Thus the feedback in systems of attention may not serve solely visuo-motor coordination; it may be part of a broader mechanism whereby motor, emotional and cognitive factors define which stimulus should be attended in the context of a task.
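The associative-learning idea above can be made concrete with a minimal sketch. This is our own illustration under standard reinforcement-learning assumptions (a delta-rule update, as in Sutton & Barto39), not the authors' model, and the cue names and parameters are hypothetical: a cue has no value in itself, but its learned value comes to track the reward it predicts, and attention could then be allocated to the cue with the higher learned value.

```python
import random

# Toy sketch (illustrative assumptions): a stimulus acquires attentional
# value through cue-reward associations, via a standard delta-rule
# (prediction-error) update.

def learn_cue_values(cues, reward_prob, n_trials=2000, alpha=0.1, seed=0):
    """Learn a value for each cue from stochastic cue-reward pairings."""
    rng = random.Random(seed)
    value = {c: 0.0 for c in cues}
    for _ in range(n_trials):
        cue = rng.choice(cues)
        reward = 1.0 if rng.random() < reward_prob[cue] else 0.0
        value[cue] += alpha * (reward - value[cue])  # prediction-error update
    return value

# The "E" reliably predicts reward; the distractor does not.
v = learn_cue_values(["E", "distractor"], {"E": 0.9, "distractor": 0.1})

# An attentional decision could then select the cue with higher learned value.
print(max(v, key=v.get))
```

Under this sketch, the learned values converge toward the cues' reward probabilities, so the predictive cue wins the attentional competition even though neither stimulus is intrinsically rewarding.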

Concluding remarks

We highlighted a few critical open questions emerging from the study of attention and oculomotor decisions in the parietal cortex. Our discussion suggests that a synthesis between these lines of research can significantly expand the outlook provided by each camp alone. On one hand, the presence of attention-related activity suggests the need to expand the current study of decision formation to consider decisions regarding sources of information. On the other hand, these studies also call for a more principled account of the control of attention. Rather than intuitively defining attention as a form of “focusing” we can attempt to understand it as a distinct form of cognitive decision. By using tasks in which monkeys choose between alternative cues we can understand the metric, or metrics that guide this decision. This will provide important insights into the control of attention and, in so doing, into broader processes of learning, exploration and the active search for information.

Acknowledgments

This research was supported by The National Eye Institute, The Keck Foundation, the McKnight Fund for Neuroscience, The Klingenstein Fund for Neuroscience, the Sloan Foundation, the National Alliance for Research on Schizophrenia and Depression, the Human Frontiers Program, the Swiss National Science Foundation and the Gatsby Charitable Foundation.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Disclosure statement:

The authors declare that they have no conflict of interest.

References

  • 1.Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038. [DOI] [PubMed] [Google Scholar]
  • 2.Sugrue LP, et al. Choosing the greater of two goods: neural currencies for valuation and decision making. Nat Rev Neurosci. 2005;6:363–375. doi: 10.1038/nrn1666. [DOI] [PubMed] [Google Scholar]
  • 3.Kable JW, Glimcher PW. The neurobiology of decision: consensus and controversy. Neuron. 2009;63:733–745. doi: 10.1016/j.neuron.2009.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Gottlieb J. From thought to action: the parietal cortex as a bridge between perception, action, and cognition. Neuron. 2007;53:9–16. doi: 10.1016/j.neuron.2006.12.009. [DOI] [PubMed] [Google Scholar]
  • 5.Kandel E, Schwartz J. Principles of Neural Science. McGraw-Hill; 2000. [Google Scholar]
  • 6.Schall JD. On the role of frontal eye field in guiding attention and saccades. Vision Res. 2004;44:1453–1467. doi: 10.1016/j.visres.2003.10.025. [DOI] [PubMed] [Google Scholar]
  • 7.Goldberg ME, et al. Saccades, salience and attention: the role of the lateral intraparietal area in visual behavior. Prog Brain Res. 2006;155:157–175. doi: 10.1016/S0079-6123(06)55010-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Gottlieb J, et al. The representation of visual salience in monkey parietal cortex. Nature. 1998;391:481–484. doi: 10.1038/35135. [DOI] [PubMed] [Google Scholar]
  • 9.Sugrue LP, et al. Matching behavior and the representation of value in the parietal cortex. Science. 2004;304:1782–1787. doi: 10.1126/science.1094765. [DOI] [PubMed] [Google Scholar]
  • 10.Thompson KG, Bichot NP. A visual salience map in the primate frontal eye field. Prog Brain Res. 2005;147:251–262. doi: 10.1016/S0079-6123(04)47019-8. [DOI] [PubMed] [Google Scholar]
  • 11.Britten KH, et al. The analysis of visual motion: a comparison of neuronal and psychophysical performance. J Neurosci. 1992;12:4745–4765. doi: 10.1523/JNEUROSCI.12-12-04745.1992. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Roitman JD, Shadlen MN. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci. 2002;22:9475–9489. doi: 10.1523/JNEUROSCI.22-21-09475.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Kiani R, et al. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. J Neurosci. 2008;28:3017–3029. doi: 10.1523/JNEUROSCI.4761-07.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Huk AC, Shadlen MN. Neural activity in macaque parietal cortex reflects temporal integration of visual motion signals during perceptual decision making. J Neurosci. 2005;25:10420–10436. doi: 10.1523/JNEUROSCI.4684-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Churchland AK, et al. Decision-making with multiple alternatives. Nat Neurosci. 2008;11:693–702. doi: 10.1038/nn.2123. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Platt ML, Glimcher PW. Neural correlates of decision variables in parietal cortex. Nature. 1999;400:233–238. doi: 10.1038/22268. [DOI] [PubMed] [Google Scholar]
  • 17.Dorris MC, Glimcher PW. Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron. 2004;44:365–378. doi: 10.1016/j.neuron.2004.09.009. [DOI] [PubMed] [Google Scholar]
  • 18.Yang T, Shadlen MN. Probabilistic reasoning by neurons. Nature. 2007;447:1075–1080. doi: 10.1038/nature05852. [DOI] [PubMed] [Google Scholar]
  • 19.Mazurek ME, et al. A role for neural integrators in perceptual decision making. Cereb Cortex. 2003;13:1257–1269. doi: 10.1093/cercor/bhg097. [DOI] [PubMed] [Google Scholar]
  • 20.Wang XJ. Decision making in recurrent neuronal circuits. Neuron. 2008;60:215–234. doi: 10.1016/j.neuron.2008.09.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Beck JM, et al. Probabilistic population codes for Bayesian decision making. Neuron. 2008;60:1142–1152. doi: 10.1016/j.neuron.2008.09.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Oristaglio J, et al. Integration of visuospatial and effector information during symbolically cued limb movements in monkey lateral intraparietal area. J Neurosci. 2006;26:8310–8319. doi: 10.1523/JNEUROSCI.1779-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Bisley JW, Goldberg ME. Neuronal activity in the lateral intraparietal area and spatial attention. Science. 2003;299:81–86. doi: 10.1126/science.1077395. [DOI] [PubMed] [Google Scholar]
  • 24.Ganguli S, et al. One-dimensional dynamics of attention and decision making in LIP. Neuron. 2008;58:15–25. doi: 10.1016/j.neuron.2008.01.038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Gottlieb J, et al. Simultaneous representation of saccade targets and visual onsets in monkey lateral intraparietal area. Cereb Cortex. 2005;15:1198–1206. doi: 10.1093/cercor/bhi002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Gottlieb J, Goldberg ME. Activity of neurons in the lateral intraparietal area of the monkey during an antisaccade task. Nature Neurosci. 1999;2:906–912. doi: 10.1038/13209. [DOI] [PubMed] [Google Scholar]
  • 27.Bisley JW, et al. The lateral intraparietal area: a priority map in posterior parietal cortex. In: Jenkin M, Harris L, editors. Cortical Mechanisms of Vision. Cambridge University Press; 2009. pp. 5–30. [Google Scholar]
  • 28.Balan PF, Gottlieb J. Functional significance of nonspatial information in monkey lateral intraparietal area. J Neurosci. 2009;29:8166–8176. doi: 10.1523/JNEUROSCI.0243-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Balan PF, et al. Neuronal correlates of the set-size effect in monkey lateral intraparietal area. PLoS Biol. 2008;6:e158. doi: 10.1371/journal.pbio.0060158. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Reynolds JH, Heeger DJ. The normalization model of attention. Neuron. 2009;61:168–185. doi: 10.1016/j.neuron.2009.01.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Wardak C, et al. A deficit in covert attention after parietal cortex inactivation in the monkey. Neuron. 2004;42:501–508. doi: 10.1016/s0896-6273(04)00185-0. [DOI] [PubMed] [Google Scholar]
  • 32.Wardak C, et al. Saccadic target selection deficits after lateral intraparietal area inactivation in monkeys. J Neurosci. 2002;22:9877–9884. doi: 10.1523/JNEUROSCI.22-22-09877.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Balan PF, Gottlieb J. Integration of exogenous input into a dynamic salience map revealed by perturbing attention. J Neurosci. 2006;26:9239–9249. doi: 10.1523/JNEUROSCI.1898-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Peck CJ, et al. Reward modulates attention independently of action value in posterior parietal cortex. J Neurosci. 2009;29:11182–11191. doi: 10.1523/JNEUROSCI.1929-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Ipata AE, et al. LIP responses to a popout stimulus are reduced if it is overtly ignored. Nat Neurosci. 2006;9:1071–1076. doi: 10.1038/nn1734. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Phelps EA, et al. Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychol Sci. 2006;17:292–299. doi: 10.1111/j.1467-9280.2006.01701.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Padmala S, Pessoa L. Affective learning enhances visual detection and responses in primary visual cortex. J Neurosci. 2008;28:6202–6210. doi: 10.1523/JNEUROSCI.1233-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Lim SL, et al. Affective learning modulates spatial competition during low-load attentional conditions. Neuropsychologia. 2008;46:1267–1278. doi: 10.1016/j.neuropsychologia.2007.12.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Sutton RS, Barto AG. Reinforcement learning: an introduction. MIT Press; 1998. [Google Scholar]
  • 40.Itti L, Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience. 2001;2:194–203. doi: 10.1038/35058500. [DOI] [PubMed] [Google Scholar]
  • 41.Egly R, et al. Shifting visual attention between objects and locations: evidence from normal and parietal lesion subjects. J Exp Psychol Gen. 1994;123:161–177. doi: 10.1037//0096-3445.123.2.161. [DOI] [PubMed] [Google Scholar]
  • 42.Bichot NP, et al. Parallel and serial neural mechanisms for visual search in macaque area V4. Science. 2005;308:529–534. doi: 10.1126/science.1109676. [DOI] [PubMed] [Google Scholar]
  • 43.Maunsell JH, Treue S. Feature-based attention in visual cortex. Trends Neurosci. 2006;29:317–322. doi: 10.1016/j.tins.2006.04.001. [DOI] [PubMed] [Google Scholar]
  • 44.Brown JW, et al. Relation of frontal eye field activity to saccade initiation during a countermanding task. Exp Brain Res. 2008;190:135–151. doi: 10.1007/s00221-008-1455-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Juan CH, et al. Dissociation of spatial attention and saccade preparation. Proc Natl Acad Sci U S A. 2004;101:15541–15544. doi: 10.1073/pnas.0403507101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Murthy A, et al. Neural control of visual search by frontal eye field: effects of unexpected target displacement on visual selection and saccade preparation. J Neurophysiol. 2009;101:2485–2506. doi: 10.1152/jn.90824.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Liu P, Basso MA. Substantia nigra stimulation influences monkey superior colliculus neuronal activity bilaterally. J Neurophysiol. 2008;100:1098–1112. doi: 10.1152/jn.01043.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Rothkopf CA, et al. Task and context determine where you look. J Vis. 2007;7(16):11–20. doi: 10.1167/7.14.16. [DOI] [PubMed] [Google Scholar]
  • 49.Najemnik J, Geisler WS. Eye movement statistics in humans are consistent with an optimal search strategy. J Vis. 2008;8(4):1–14. doi: 10.1167/8.3.4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Einhauser W, et al. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition. J Vis. 2007;7(6):1–13. doi: 10.1167/7.10.6. [DOI] [PubMed] [Google Scholar]
  • 51.Itti L, Baldi P. Bayesian surprise attracts human attention. Vision Res. 2009;49:1295–1306. doi: 10.1016/j.visres.2008.09.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Navalpakkam V, Itti L. Modeling the influence of task on attention. Vision Res. 2005;45:205–231. doi: 10.1016/j.visres.2004.07.042. [DOI] [PubMed] [Google Scholar]
  • 53.Desimone R, Duncan J. Neural mechanisms of selective visual attention. Ann. Rev. Neurosci. 1995;18:183–222. doi: 10.1146/annurev.ne.18.030195.001205. [DOI] [PubMed] [Google Scholar]
  • 54.Angelaki DE, et al. Multisensory integration: psychophysics, neurophysiology, and computation. Curr Opin Neurobiol. 2009;19:452–458. doi: 10.1016/j.conb.2009.06.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Oudeyer PY, et al. Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation. 2007;11:265–286. [Google Scholar]
  • 56.Bromberg-Martin ES, Hikosaka O. Midbrain dopamine neurons signal preference for advance information about upcoming rewards. Neuron. 2009;63:119–126. doi: 10.1016/j.neuron.2009.06.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Freedman DJ, Assad JA. Experience-dependent representation of visual categories in parietal cortex. Nature. 2006;443:85–88. doi: 10.1038/nature05078. [DOI] [PubMed] [Google Scholar]
  • 58.Thompson KG, et al. Neuronal basis of covert spatial attention in the frontal eye field. J Neurosci. 2005;25:9479–9487. doi: 10.1523/JNEUROSCI.0741-05.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
