Author manuscript; available in PMC: 2016 Mar 1.
Published in final edited form as: Ann N Y Acad Sci. 2015 Jan 7;1339(1):154–164. doi: 10.1111/nyas.12606

The what, where, and why of priority maps and their interactions with visual working memory

Gregory J Zelinsky 1,2, James W Bisley 2,3,4
PMCID: PMC4376606  NIHMSID: NIHMS641138  PMID: 25581477

Abstract

Priority maps are winner-take-all neural mechanisms thought to guide the allocation of covert and overt attention. Here, we go beyond this standard definition and argue that priority maps play a much broader role in controlling goal-directed behavior. We start by defining what priority maps are and where they might be found in the brain; we then ask why they exist—the function that they serve. We propose that this function is to communicate a goal state to the different effector systems, thereby guiding behavior. Within this framework, we speculate on how priority maps interact with visual working memory and introduce our common source hypothesis, the suggestion that this goal state is maintained in visual working memory and used to construct all of the priority maps controlling the various motor systems. Finally, we look ahead and suggest questions about priority maps that should be asked next.

Keywords: saliency map, prioritization, salience

What is this paper about?

Despite its widespread adoption, the term priority map is often used as a simple conceptual shorthand for some mechanism responsible for the allocation of attention over visual space. Our aim in this paper is to delve deeper into the theory of priority maps, clarifying not only what priority maps are and where they likely exist in the brain, but also asking why they exist at all—what function they serve. We will argue that this function is largely one of a filter that selects goal-relevant information in visual space for the purpose of controlling and coordinating activity across different effector systems. This paper will not consider how priority maps work, the competitive process of selecting a winner. We also speculate about the relationship between priority maps and visual working memory (VWM), suggesting that neither concept can be fully understood in isolation from the other.

What is a priority map?

Models of visual attention and search have long appealed to the notion that locations in space are prioritized for some upcoming behavior (e.g., an eye movement) or process (e.g., recognition). This prioritization of location, however, has been referred to by many names, with the chosen term often reflecting the chosen task, or lack thereof. For example, in free viewing, there is no explicit task, so activity on this map is mostly dominated by perceptually salient contrasts between visual features—hence the term saliency map, a purely bottom-up prioritization of incoming visual information.1–4 Contrast this with visual search, a task having a well-defined goal state—the features composing the target pattern.a The efficient detection of most targets requires the top-down prioritization of visual input with respect to this goal state. This is accomplished by neurons coding for the locations offering the most evidence for the target becoming more active than those offering less evidence, with the totality of this target evidence across visual space sometimes referred to as a target map.7–9 The peak-to-peak navigation of this activation landscape results in search being guided efficiently to the target goal.10 The idea of a target map has recently been extended to targets that can be any member of an object category.
Physiological work has shown that some neurons in the lateral intraparietal area (LIP), a putative priority map, preferentially respond to stimuli in one over-trained category compared to another.11 Although this has been interpreted as evidence against the LIP being a priority map, recent computational work has shown that a categorical target map can be constructed and used to guide search to a categorically defined target in much the same way as a target map derived for a specific target goal.12 The implication of this is that the representation of a goal state, even in the straightforward context of a search task, need not be as simple as a static template of features, but rather may be a dynamic neural weighting of those features forming a classification boundary between a goal state (the target category) and a non-goal state (all non-target categories).

Given the importance of a task or goal in prioritizing information, we encourage the adoption of the following naming convention. Task-specific terms like saliency maps and target maps are used when conclusions are to be limited to the contribution of specific operations in the assignment of priority—for example, feature contrast in the case of saliency maps and comparison to a search target in the case of a target map.b However, it is important to also refer to these maps in the collective to highlight their shared function, and for this we suggest using the term priority map.13,14 We define a priority map as a neural representation of a topographic space in which activity codes for the priority of locations in that space irrespective of the bottom-up or top-down factors contributing to this prioritization. Individual priority signals may originate from various sources, not only feature contrast in the sensory input and goal states corresponding to planned behaviors, but also top-down factors best described as expectations or predictions based on past experience.15,16 This general and source-neutral term therefore subsumes saliency maps, target maps, and whatever other task-specific maps have been described in the literature, allowing a focus on the prioritization itself rather than the individual operations leading up to this prioritization. We should note that some studies use the term salience map in the same way that we use the term priority map,17,18 referring to any factor that attracts gaze as salience, regardless of whether that factor is bottom-up or purely endogenous in origin.19 This usage, however, is inconsistent with studies using the term salience to refer to purely bottom-up feature contrast,3,20–22 creating the potential for confusion. Adopting our suggested naming convention would avoid this potential problem.

Distinguishing between general priority signals and those arising from bottom-up salience also aids in the clear comparison of the factors affecting priority. Because we assume that a priority map reflects some combination of priority signals, it is appropriate to talk about the relative contributions of salience and top-down goals (as in the goal of searching for a particular target) in the context of a priority map, but it would not be appropriate to make these comparisons in the context of a task- or source-specific map. Similarly, the existence of multiple contributing priority signals creates the potential for these signals to be in conflict—the target of a person’s search need not also be the most salient object, and typically is not. In the context of a well-specified goal, as in the case of a target preview in a search task, performance is known to be strongly dominated by the goal state,23 but even in these cases there may be some minimal competing influence of object salience owing to the concurrent contribution of this priority signal. This is particularly true in the case of sudden onsets or object motion, which produce strong responses in putative priority maps in the brain.24,25 It has also been argued that feature pop-out in static displays produces similarly strong responses in the LIP, the frontal eye field (FEF), and the superior colliculus,26–28 but the majority of these studies rewarded animals for looking at the pop-out stimulus, thereby introducing a confound with the goal state. When this confound was removed, pop-out alone was found to increase responses in the LIP by only about 10%.29 Of course, as a goal becomes less well defined, as in the case of a free viewing task or even categorical search, object salience may play a larger role in driving activation on a priority map (for additional discussion of object salience, see Refs. 1 and 30–32), but regardless of the specific mixture of priority signals, priority maps are probably the more appropriate construct for predicting behavior.
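The combination of priority signals described here can be made concrete with a toy sketch. Nothing below reflects actual neural circuitry; the maps, the linear combination rule, and the `goal_weight` parameter are all assumptions made for illustration. The sketch shows how a single priority map summing a bottom-up salience signal and a top-down target signal predicts that a well-specified goal dominates selection, while a salient sudden onset can still capture priority when the goal signal is weak.

```python
def combine_priority(salience, target_evidence, goal_weight=0.9):
    """Toy priority map: a weighted sum of a bottom-up salience map and
    a top-down target-evidence map (the weights are illustrative)."""
    return [goal_weight * t + (1.0 - goal_weight) * s
            for s, t in zip(salience, target_evidence)]

def peak(priority_map):
    """Index of the winning (most prioritized) location."""
    return max(range(len(priority_map)), key=priority_map.__getitem__)

# A one-dimensional 'scene' with five locations: location 1 holds a
# highly salient sudden onset, location 3 holds the search target.
salience        = [0.1, 1.0, 0.1, 0.2, 0.1]
target_evidence = [0.0, 0.1, 0.0, 1.0, 0.0]

# With a well-specified goal, the target location wins priority ...
assert peak(combine_priority(salience, target_evidence, goal_weight=0.9)) == 3
# ... but with a weak goal (as in free viewing), the onset wins instead.
assert peak(combine_priority(salience, target_evidence, goal_weight=0.1)) == 1
```

The single `goal_weight` knob stands in for the relative contribution of goal and salience signals; in the brain this mixture is presumably task dependent rather than a fixed parameter.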

What is the mechanism by which a goal state becomes instantiated on a priority map? Converging evidence from behavioral, computational, and neurophysiological studies suggests that this is accomplished through a selective weighting of the low-level features matching the features representing a goal state8,10,33–37 (see Ref. 38 for a general framework). We also subscribe to this view. This could be implemented at the physiological level by changing synaptic weights on the feed-forward pathways such that the features corresponding to the goal are given greater weight. If one is looking for a red square among objects of other colors and shapes, weighting the contributions of red and square features across low-level retinotopic feature maps will give priority to the locations of these features in downstream priority maps. Consequently, target locations will be disproportionately weighted on these maps relative to non-targets having neither of these goal features, resulting in near effortless parallel search. Conversely, to the extent that non-targets share these goal features, their locations will also be prioritized, resulting in a lower signal-to-noise ratio and less efficient search. In the extreme, the priorities of target and non-target locations may not be discriminable, producing an unguided sequence of overt and covert attentional movements often described as serial search. Regardless of how a feature bias is implemented in the brain, as synaptic weighting or as a property of the larger network, the point here is that parallel and serial search behaviors may both be products of a common mechanism, one in which the locations of goal-modulated feature information are competing on a priority map.
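As a toy illustration of this feature-weighting account (the scenes, feature labels, and scoring rule below are invented for the example, not a model of any real circuit), each location carries a set of low-level features; summing the evidence for the goal features "red" and "square" yields a target map whose target/distractor margin shrinks as distractors share more goal features, mimicking the shift from efficient to inefficient search.

```python
def target_map(scene, goal_features):
    """Sum the evidence for each goal feature at each location.
    `scene` maps location -> set of features present there."""
    return {loc: sum(f in feats for f in goal_features)
            for loc, feats in scene.items()}

GOAL = {"red", "square"}

# Easy search: distractors share no goal features -> high signal-to-noise.
easy = {0: {"red", "square"}, 1: {"green", "circle"}, 2: {"blue", "circle"}}
# Hard search: each distractor shares one goal feature -> low signal-to-noise.
hard = {0: {"red", "square"}, 1: {"red", "circle"}, 2: {"green", "square"}}

easy_map, hard_map = target_map(easy, GOAL), target_map(hard, GOAL)
# The target (location 0) always carries the most evidence ...
assert max(easy_map, key=easy_map.get) == 0
# ... but the target/distractor margin collapses in the hard display.
assert easy_map[0] - max(easy_map[1], easy_map[2]) == 2
assert hard_map[0] - max(hard_map[1], hard_map[2]) == 1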

The existence of priority maps also has implications for the distinction between feature-based and spatial attention. Feature-based attention is often described as being non-spatial, but this characterization overlooks a fundamental fact: the weighting of features in visual space exists in the context of a spatial organization. Exemplifying this problem, physiologists have tended to describe activity on a priority map in terms of behavioral relevance, focusing on whether a given location in space is important for some ongoing or upcoming behavior.39 However, this view neglects the possibility that this activity may simply reflect evidence for the features of a goal across retinotopic space. Rather than neural activity indicating just the location of a behavioral goal, such as an upcoming saccade,40,41 we contend that this activity actually indicates the degree that patterns falling within the receptive fields of retinotopically organized neurons match the features of the goal state. Features and locations are therefore inextricably bound, with the neural instantiation of this binding being activity over a priority map. Under this view, the binding problem may be solved, not in the sensory system, but in the motor system, with features being bound in order to achieve some behavioral goal.

Where are the priority maps?

Having considered what priority maps are, we now shift our attention to where they might be found in the brain. Map-based representations of space are ubiquitous throughout the brain, particularly in the sensory42,43 and motor44,45 systems. These representations constitute a sort of anatomical prerequisite for a priority map; although not every topographic neural representation need serve as a priority map, all priority maps must have some topographic representation. Here we focus on two questions: what areas in the primate brain may function as priority maps, and is it likely that any one area is the priority map for a given effector system? We will first address these questions for the visual attention/oculomotor system, and then ask whether our answers might generalize to other effectors.

The oculomotor system is unique, not only in its speed, but also in the simplicity of its visuomotor transformation. Both the visual inputs and the motor outputs (saccades) share the same retinotopic reference frame, at least to the level of the superior colliculus. This is largely a consequence of oculomotor programming following Listing’s law,46 allowing for a straightforward spatial registration between the visual and motor maps. If an eye movement is to be made to a visual stimulus at a given location on the retina, this same location can be used to coordinate the motor maps used by the oculomotor system, enabling the quick and largely effortless generation of saccades. Oculomotor programming is therefore best understood as a network of brain areas, starting with visual areas in which receptive fields grow with each new level along the visual pathway and ending with motor areas and ultimately the oculomotor plant in the brain stem. Between these input and output areas are several association areas, such as the LIP, the FEF, and the superior colliculus, each of which has been proposed to act as a priority map.47–49

Do the LIP, the FEF, and the superior colliculus have similar roles, acting as three redundant priority maps, or do they work as a network to prioritize overt and covert movements of attention? Most studies of these areas have used single-unit responses in simple tasks: visually guided saccades, memory-guided saccades, and pop-out visual search. Under these conditions, neuronal responses seem fairly similar across the areas, with the primary difference being the inclusion of movement neurons in the FEF and the superior colliculus that project directly down to brain stem oculomotor areas. However, under more complex stimulus conditions, differences do appear. Activity in the LIP shows strong set-size modulation (responses decrease as set size increases50–52) that is absent from subsets of FEF neurons,53 suggesting that the FEF might represent postprocessed signals from the LIP. We speculate that this processing step performs a normalization function,51 effectively taking activity in varying response ranges in the parietal cortex, which would represent relative priority, to a common response range in the prefrontal cortex, which would represent absolute priority. Once in a common response range, actions can be triggered when activity reaches an absolute threshold, as seen in the superior colliculus. Although this step may seem small, it suggests that each area along this pathway performs a slightly different function leading up to the behavioral response, and that they are not just redundantly coding information.
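The normalization step we speculate about can be sketched as follows. This is a cartoon, not a claim about actual FEF computation: the min-max rescaling rule, the response values, and the threshold are all assumptions. The point is only that parietal responses whose range shrinks with set size (relative priority) can be mapped onto a common response range (absolute priority) in which a single fixed threshold can trigger an action.

```python
def to_absolute(parietal):
    """Rescale a vector of parietal responses (whose overall range
    shrinks as set size grows) onto a common 0-1 range -- a cartoon
    of the relative-to-absolute priority transformation."""
    lo, hi = min(parietal), max(parietal)
    return [(r - lo) / (hi - lo) for r in parietal]

def action_triggered(absolute, threshold=0.9):
    """An action fires once some location crosses a fixed threshold
    in the common response range."""
    return any(r >= threshold for r in absolute)

# Responses to the same target embedded in set sizes 2 and 6:
# the raw peak shrinks with set size (set-size modulation) ...
small = [0.9, 0.2]
large = [0.5, 0.2, 0.15, 0.2, 0.1, 0.15]
assert max(small) > max(large)

# ... but after rescaling, both peaks sit at 1.0 in a common range,
# so one fixed threshold triggers the movement in either display.
assert max(to_absolute(small)) == 1.0 == max(to_absolute(large))
assert action_triggered(to_absolute(small)) and action_triggered(to_absolute(large))
```

A divisive normalization over the summed population response would serve the same illustrative purpose; min-max rescaling was chosen only because it makes the common-range idea easiest to see.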

As already noted, the oculomotor system is unique in that the afferent (sensory) and effector systems share the same reference frame; would the same relationship between priority maps apply to less straightforward visuomotor systems? We think so. As a general principle, we believe that there is a systematic prioritization of information, starting in the posterior parietal cortex, extending into the premotor areas of the frontal lobe, and ultimately resulting in prioritized activity in the motor cortex and movement. Supporting this view, within the intraparietal sulcus there are a number of cortical areas that transform afferent signals (primarily, but not exclusively, visual) into maps that appear to be important for different effectors. For example, neurons in the anterior intraparietal area (AIP) respond similarly to those in the LIP, but for grasping actions rather than for eye movements.54 The AIP projects to F5 (or PMVr),55 a premotor area associated with visual and visuomotor responses to grasping actions.56 Likewise, neurons in the ventral intraparietal area (VIP) respond to stimuli in peri-personal space and have congruent multi-modal tactile and visual receptive fields.57,58 The VIP projects to F4 (or PMVc),55 a premotor area that is also responsive to stimuli and actions in peri-personal space.59 And the medial intraparietal area (MIP), often described as the parietal reach region (PRR), has neurons that respond to stimuli that will become the targets of reaches.60 It projects to F2 (or PMDc),61 a premotor area that is responsive during reaches and reach planning.62

From just these few examples, it is clear that there are multiple maps that start out in a retinotopic reference frame in the parietal cortex but transform to a motor reference frame as they project to the frontal lobe. We suggest that each of these paired areas acts as a single functional priority map, with one end existing in afferent space and the other end in effector space. Crucially, a single prioritization is maintained from the parietal cortex to the motor cortex. Indeed, this is similar to what is believed to be happening from the LIP to the FEF, but because those areas use a common reference frame, we are able to see more easily the subtle changes occurring to bring activity into a common response range. We predict that, if tested in this way, responses in the premotor cortices would show a similar transformation, not only from afferent to efferent space, but also in the adoption of a common response range for the selection of the appropriate next movement.

Why are there priority maps?

There are many ways to address the question of why structures exist in the brain. Here we will focus on function: why is it useful to have a map-based prioritization of information?

One function of a priority map may be to serve as a sort of filter on the world, telling all downstream brain areas which bits of incoming information are important for a given task. This suggestion is not entirely new. The concept of a filter has a long history in the study of attention,63 with its suggested raison d’être being a need to reduce the amount of incoming sensory information that must be processed by higher-level mechanisms. Note that this function of a filter is essentially one of priority control: too much information is arriving from the world, so some of it must be filtered out or attenuated,64 thereby giving priority to the information allowed through. On what dimensions this information is filtered, and especially at what stage in information processing this filtering occurs (e.g., whether it is early, before object recognition; or late, after object recognition) have been topics of intense debate6568 (see Ref. 69 for an extended discussion). Less debated is the fact that filters can be set in both feature spaces (feature-based attention) and topographic spaces (spatial attention). This dual application of a filter dovetails nicely with our conception of a priority map. Under this view, a priority map is a neural weighting of feature importance at each location in a topographic space, and priority control is the competitive process by which the most important locations can be selected.

Another function of priority maps may be to coordinate the different effector systems so as to achieve some goal. Rather than serving as passive filters on the world, this view has priority maps being more active participants in controlling actionable behavior, actually interfacing with motor systems. Efficient task performance commonly requires the fluid coordination of multiple actions. Reaching for a coffee mug can be accomplished with just an arm movement to the object, but the task can be performed more efficiently and accurately if the eyes are also directed to that goal. From the reviewed neuroanatomical evidence, priority maps seem well suited to accomplish this coordination task. The features of a goal, such as the handle of a coffee mug, are identified in visual space, and these prioritized locations are propagated via multiple visual-to-motor projections to the effector systems that enable various interactions with that object.

But are these filtering and action selection/coordination functions really different? Filters are a means of prioritizing information for more efficient subsequent processing, one that involves the reduction or exclusion of information such that the postfiltered space is smaller than the prefiltered space. Selection is the process of choosing one thing, or a set of things, from a larger set of things. Both characterizations apply to action selection. There are an uncountable number of locations in space to which the eyes or the arms can be moved, and each of these would require a different (in some cases, only slightly different) setting of parameters to make that specific movement possible. In order to mediate a goal-directed action, the nervous system must therefore select from a space of potential motor-system configurations the specific settings needed for a specific action, and filter out all the rest. Filtering and selection are therefore opposite sides of the same coin; where there is one you will find evidence for the other.70 Priority maps are in some sense the metaphorical coins, where a filtering of feature-based information over a topographic space selects a point or region in this space for the purpose of mediating an action.

For some simple tasks (or subtasks), a goal can be reduced to just a single point in a space. In addition to the coordination and action selection functions discussed above, an even more basic function of priority maps may be to orient motor systems to specific locations. Arms and heads and eyes can only be oriented to one place at a time, and out of all these potential positions, the one corresponding to the goal must be selected. This involves a sort of competition between the different motor vectors within a given effector system, with the arm movement to the mug and the eye movement to the handle occurring only after this competition is resolved in the respective priority maps. It is the resolution of this competition that results in the activation of the specific muscle groups needed to acquire a specific location in space. Using dynamic neural field modeling,71 this competition-resolution process has been implemented as a laterally interconnected neural network that converges over time on a single state,72–74 with the outcome of this selection process being the goal location to which a particular effector is directed (see also Ref. 8, which uses spatiotemporal pruning of a target map to select a saccade goal). In this sense, the metaphor of “shifting attention” is literal: navigating the priority map is equivalent to shifting or moving effector systems to the next most prioritized point.
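A minimal discrete sketch of this competition-resolution dynamic, loosely inspired by dynamic neural field models but not taken from Ref. 71 (the update rule, gains, and initial activities are illustrative assumptions): each location excites itself and inhibits all competitors, and the iteration converges on a single surviving peak, the selected goal location.

```python
def winner_take_all(activity, self_gain=1.2, inhibition=0.3, steps=50):
    """Iterate self-excitation plus global lateral inhibition until the
    map settles on a single winning location. A toy winner-take-all
    network: the winner's activity grows without a saturating term,
    which a real neural field model would include."""
    a = list(activity)
    for _ in range(steps):
        total = sum(a)
        # Each unit is boosted by its own activity and suppressed by
        # the summed activity of its competitors (rectified at zero).
        a = [max(0.0, self_gain * x - inhibition * (total - x)) for x in a]
    return a

# Two candidate goals compete; the slightly stronger one (location 1)
# wins, and the rest of the map is suppressed to zero.
final = winner_take_all([0.50, 0.55, 0.20])
assert final[1] > 0.0
assert final[0] == 0.0 and final[2] == 0.0
```

Note that the initial margin between the two strong candidates is small (0.50 vs. 0.55); the lateral competition amplifies it until only one movement vector remains active.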

Under this view, attention would be our ability to prioritize information, and an attentional state would be the prioritized pattern of neuronal firing at a given moment in time. We suggest that this prioritized pattern is expressed in multiple motor systems. Eye movements are the expression of this prioritized pattern by the oculomotor system; reaching and grasping and full-body orientation would be expressions by others. However, because saccades are made with great frequency, due in part to their straightforward visuomotor transformation and their low energy cost, the oculomotor system is better able to sample, and therefore allow us to see, what this prioritization function actually looks like. It is this spatiotemporal resolution afforded by eye movements that likely results in the near perfect agreement between fixations of gaze and what may be called “attention.” If attention is a prioritized state, and saccades are the motor behavior best able to sample this state, then it follows that changes in fixation would be our best estimate of a distribution of attention in an explicitly observable motor behavior. It is this relatively high resolution that has likely caused the debate over how attention and eye movements are related75,76 (for a review, see Ref. 77); they are related because the oculomotor priority maps contain our best guess of what this distribution of attention—this prioritized landscape of neural activity—looks like in visual space. Understanding this visual prioritization is of singular importance because it is this pattern that may be propagated to all the effector systems, making it quite literally responsible for driving our behavior.

The implication of this common source hypothesis is that there may be only one prioritization of a goal, with the different priority maps throughout the brain existing mainly to interact with different effector systems.48 As we will argue in the next section, we believe that working memory plays a pivotal role in this goal prioritization. For visually guided behaviors, a top-down goal in VWM weights feature information on parietal priority maps, perhaps through the synaptic modulation of low-level feature channels in early visual areas, with this bias then feeding forward to downstream priority maps interfacing with all the effector systems.

Priority maps and visual working memory

Research on VWM does not often appeal to priority maps, but we believe that a relationship exists between the two that is central to both concepts. This relationship is clearest in the case of the large and growing literature on feature templates that are maintained in VWM and used to guide behavior, and the brain mechanisms engaged in this maintenance.78–80 These templates are thought to be neural representations of goal states that can be used to mediate a variety of tasks, with change detection and visual search being among the most commonly studied.5,81–83 In the case of search, a VWM representation of a search target is compared to incoming visual information for the purpose of creating a target map. The activation landscape of this map can then be navigated, typically by changing gaze, so as to make better and more confident search decisions.8 In change-detection tasks, the goal state is often a specific configuration of colored boxes, with the VWM template being the spatial configuration of this feature information. As in the case of search, this prechange template is compared to a postchange template in order to generate a priority map signaling the change84 (see also Ref. 85, which explicitly shows this in the LIP). Framed in this way, one relationship between VWM and priority maps is clear: VWM embodies the goal state from which priority maps are built, at least for the vast majority of endogenously driven behaviors that rely on VWM. The VWM template changes the filter settings on the feed-forward information entering the priority maps, effectively biasing responses on these maps and weighting those features that are important for satisfying the goal state. From this perspective, the VWM template is the common source that we hypothesize to exist—the prioritized weighting of features from which all priority maps are constructed (see also the idea of a “task buffer” in Ref. 86).

The above-described relationship emphasized the role of VWM in constructing a priority map, but we speculate that VWM is also important in sustaining the goal state so as to keep active an ongoing goal-directed behavior. Central to this suggestion is the observation that behaviors happen over time. Take, for example, the disproportionately studied task of making a cup of tea.87,88 This task can take several minutes, and requires the orderly execution of several subtasks, from the initial reach for the handle of the kettle to the final stirring movement to mix the tea and milk. During this time, there are untold opportunities for disruption—telephones ring, doors are knocked, and conversations are had, all while the tea-making task is unfolding. Each of these disruptions might cause the eye or head or body to veer off task momentarily—the arm may reach out towards the phone or gaze may dart off to meet a partner’s eyes during a poignant point in a conversation. These disruptions are behavioral evidence for some fleeting task (e.g., answering a phone) interrupting the current goal state and, consequently, seizing control of the ongoing behavior by changing the priority map. At these moments, the tea-making task technically stops, and only resumes when that goal state in working memory (WM) can reconstitute the proper sequence of priority settings needed to configure the body to again make a cup of tea. We hypothesize that a fundamental function of WM is to not only reinstate a goal state following an interrupt, but to prevent such interrupts from gaining priority in the first place. Absent this goal-maintenance function of WM, new and salient patterns in the continuous sensory input would frequently win priority and seize behavior, making the fluid execution of any goal-directed behavior next to impossible. The battlefield in this war between goals and interrupts, be they top-down or bottom-up in origin, is the priority map. 
To the extent that a goal state in WM is well specified (perhaps through extensive practice) and is used continuously to impose prioritized activation on the maps controlling the effectors, the tea-making task will unfold efficiently and without disruption from minor distractions—the eyes will remain on the kettle spout as the water is pouring instead of drifting to the television.

Visual search offers some of the clearest examples of the problems that arise when a goal state in VWM (a target representation) is not correctly reflected on a priority map owing to goal degradation or interference. When people are asked to search for two or more targets, the VWM goal state must either be broadened to include features from both targets, which would be expected to lower the signal-to-noise ratio on the priority map (roughly half of the represented features would not match whichever target appears in the search display), or narrowed to just one of the targets. The latter is a gamble, yielding a correctly prioritized map and strong search guidance when the represented target is the one that appears, but weak guidance when the gamble does not pay off. We suspect that these different strategies for target representation and prioritization might explain, in part, the discrepant results that have been reported using such multi-target search tasks.80,89 A related problem exists in dual-task situations requiring the simultaneous maintenance of search and memory goals.90 One such paradigm asked people to maintain an object in VWM (in anticipation of a memory test) while performing a search task during the retention interval.91 These situations create the opportunity for a form of cross-talk, where the memory goal might be used to prioritize visual information for the search task, resulting in the expected cost to search efficiency when a feature of the memory goal matches a search distractor.92 Such cross-talk might occur when these goal states recruit the same populations of neurons93 or when neuronal populations are modified in an attempt to keep the goal states distinct (a form of neural reorganization in VWM94). However, to the extent that different goal states can be well specified in VWM while remaining distinct, task-appropriate priority maps might be created and used to control behavior, and such interference may be minimized (see Ref. 95 for a review of efficient dual-task performance in non-search contexts, and Ref. 96 for evidence suggesting a nearly simultaneous allocation of attention to a saccade target and a reach target despite these targets appearing in different spatial locations).

Looking forward

This review attempted to highlight the profoundly important role that priority maps likely play in controlling goal-directed behavior. However, as this perspective on priority maps broadens beyond its competitive winner-take-all mechanism, new questions arise that cannot be addressed given the available data. Here, we focus on four such questions that should be prioritized in future work.

What neurocomputations are used to construct priority maps? Fundamental to the concept of a priority map is that there is a goal state and a state of the world and that the two are compared. In this paper, we used terms such as filtering, weighting, and selection to describe the consequences of this comparison, but the computation itself was not specified. This comparison may also depend on the goal being prioritized. In a visual-search task, the signals on a priority map code the similarity between these two states, whereas in a change-detection task these priority signals code dissimilarity. Given that similarity and dissimilarity are essentially the same computation with a reversed sign, is it the case that priority maps are limited to this one operation, or is there a repertoire of elemental operations that can be used to construct a priority map? Future work should continue to study priority maps using many types of tasks so as to better understand the computation of these priority signals.
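The claim that similarity and dissimilarity are the same computation with a reversed sign can be sketched directly; the feature values and the one-dimensional comparison rule below are hypothetical, chosen only to make the sign flip explicit:

```python
import numpy as np

def compare(goal, scene):
    # One comparison per location: how well the remembered feature value
    # matches the feature value currently at that location (1 = perfect match).
    return 1.0 - np.abs(goal - scene)

goal  = np.array([0.8, 0.2, 0.6, 0.4])  # remembered features at four locations
scene = np.array([0.8, 0.5, 0.1, 0.3])  # location 2 has changed the most

match = compare(goal, scene)            # [1.0, 0.7, 0.5, 0.9]

search_priority = match        # visual search: prioritize the best match
change_priority = 1.0 - match  # change detection: same comparison, sign flipped

print(int(search_priority.argmax()))   # -> 0, the location matching the goal
print(int(change_priority.argmax()))   # -> 2, the location that changed
```

A single goal-to-scene comparison thus yields a search-appropriate priority map when read directly and a change-detection priority map when its sign is reversed; whether biological priority maps admit operations beyond this one is exactly the open question.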

Are the different priority maps in the brain simultaneously coding different prioritization landscapes, or are they the same? Common source theory suggests that the same prioritization of information should exist across all of these maps because they all originate from a common source. The existence of a common source might also explain why damage to a priority map often induces temporary97 or minimal98 behavioral effects: information about the prioritized goal is not lost because it originates from the common source, requiring only the rerouting of connections to a given effector system. We propose that this common source is a goal state maintained in working memory. An important direction for future work will be to test this proposal. Also, if a goal state is the common source, where does it exist in the brain (the prefrontal cortex?) and how would damage to this brain region be reflected in all the priority maps relying on this goal? Answers to these questions will tell us whether priority maps are best understood as single and relatively independent entities or as networks of interconnected priority landscapes all functionally (and perhaps, anatomically) linked by a common goal.

What signals are competing in each of the different priority maps? Even if priority maps are specific to different effector systems, understanding the signals being coded by each priority map is essential to understanding how the brain accomplishes coordinated goal-directed behavior. Models of the superior colliculus assume that this competition is between saccade movement vectors,99 but other systems are not this straightforward.100 Arm movements and body movements are far more complex than saccades, and are coded in qualitatively different ways.101–103 These movements require the specification of many more parameters; a successful reaching movement requires the fluid coordination of elbow, wrist, and finger joints. Does the goal specification on this priority map capture this coordination, or does coordination happen downstream of the priority map? The answer to this question will tell us the degree to which higher-level action schemas compete on a priority map, as has been suggested by early work linking attention and action.104

Finally, is there an essential difference between priority maps and other systems for competitive goal selection? Concepts compete for retrieval from memory and, ultimately, awareness—should this competition be characterized as a priority map and, if not, why? The information represented by priority maps is, by definition, organized into a map of some space. Is there something about this map-based organization that requires a qualitatively different competitive process than one that may exist between the nodes of a highly distributed neural network?86 At issue here is the importance of a topology to the concept of a priority map, and the decisions that must be made in topographic spaces (do I run left or right?). Are priority maps simply instances where this competition occurs across a topology and can therefore be more readily discovered in neural tissue? Other competitive mechanisms may be more diffusely distributed throughout the brain, and therefore more difficult to observe, but this hardly seems a defining distinction. Is there something unique to topological competition that results in priority maps filling an essential neurocomputational niche? If not for priority maps, would we have a sense of attention shifting from one place in space to another, and would our ability to navigate through and orient to our environment be as efficient and fluid? How closely tied are priority maps to attention and action? Answers to these more philosophical questions will determine how the definition of a priority map should evolve moving forward.

Acknowledgments

GJZ is supported by a grant from the National Institute of Mental Health (R01-MH063748) and by National Science Foundation Grants IIS-1111047 and IIS-1161876. JWB is supported by the National Eye Institute (R01 EY019273-01) and the Defense Advanced Research Projects Agency (DARPA-14-08-RAM-PA-010).

Footnotes

a

We will use the term goal state to refer to the pattern of spiking activity across some population of neurons coding a desired or planned behavior. By this definition, all goals, whether they are complex reaching movements or the features defining a target, are states. Moreover, we assume that all of these states are dynamic, albeit some more so than others. Complex actions certainly emerge from highly dynamic systems, but even seemingly static target goals likely emerge from the dynamic weighting of features allowing expected targets to be discriminated from expected distractors. In this sense, the target of a search task is not a static entity or “template,” but rather a goal state that emerges over time.5,6

b

By this definition, both salience maps and target maps become theoretical abstractions that would rarely, if ever, exist in a pure form. Even in a free viewing task there is likely some minimal, and typically unknown, goal injecting a top-down influence into what may be predominately a saliency map. Similarly, bottom-up feature contrast may exert some small influence on the search for a specific target, thereby diluting the purely top-down prioritization with respect to a search target ostensibly captured by a target map.

Conflict of interest

The authors report no conflicts of interest.

References

  • 1. Borji A, Sihite DN, Itti L. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Trans Image Process. 2013;22:55–69. doi: 10.1109/TIP.2012.2210727.
  • 2. Itti L, Koch C. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Res. 2000;40:1489–1506. doi: 10.1016/s0042-6989(99)00163-7.
  • 3. Itti L, Koch C. Computational modelling of visual attention. Nat Rev Neurosci. 2001;2:194–203. doi: 10.1038/35058500.
  • 4. Koch C, Ullman S. Shifts in selective visual attention: towards the underlying neural circuitry. Hum Neurobiol. 1985;4:219–227.
  • 5. Schmidt J, et al. More target features in visual working memory leads to poorer search guidance: evidence from contralateral delay activity. J Vis. 2014;14:8. doi: 10.1167/14.3.8.
  • 6. Schmidt J, Zelinsky GJ. Visual search guidance is best after a short delay. Vision Res. 2011;51:535–545. doi: 10.1016/j.visres.2011.01.013.
  • 7. Rutishauser U, Koch C. Probabilistic modeling of eye movement data during conjunction search via feature-based attention. J Vis. 2007;7:5. doi: 10.1167/7.6.5.
  • 8. Zelinsky GJ. A theory of eye movements during target acquisition. Psychol Rev. 2008;115:787–835. doi: 10.1037/a0013118.
  • 9. Zelinsky GJ. TAM: Explaining off-object fixations and central fixation tendencies as effects of population averaging during search. Vis Cogn. 2012;20:515–545. doi: 10.1080/13506285.2012.666577.
  • 10. Wolfe JM. Guided Search 2.0: A revised model of visual search. Psychon Bull Rev. 1994;1:202–238. doi: 10.3758/BF03200774.
  • 11. Freedman DJ, Assad JA. Experience-dependent representation of visual categories in parietal cortex. Nature. 2006;443:85–88. doi: 10.1038/nature05078.
  • 12. Zelinsky GJ, et al. Modelling eye movements in a categorical search task. Philos Trans R Soc Lond B Biol Sci. 2013;368:20130058. doi: 10.1098/rstb.2013.0058.
  • 13. Fecteau JH, Munoz DP. Salience, relevance, and firing: a priority map for target selection. Trends Cogn Sci. 2006;10:382–390. doi: 10.1016/j.tics.2006.06.011.
  • 14. Serences JT, Yantis S. Selective visual attention and perceptual coherence. Trends Cogn Sci. 2006;10:38–45. doi: 10.1016/j.tics.2005.11.008.
  • 15. Herwig A, Horstmann G. Action-effect associations revealed by eye movements. Psychon Bull Rev. 2011;18:531–537. doi: 10.3758/s13423-011-0063-3.
  • 16. Herwig A, Schneider WX. Predicting object features across saccades: Evidence from object recognition and visual search. J Exp Psychol Gen. 2014;143:1903–1922. doi: 10.1037/a0036781.
  • 17. Foley NC, et al. Novelty enhances visual salience independently of reward in the parietal lobe. J Neurosci. 2014;34:7947–7957. doi: 10.1523/JNEUROSCI.4171-13.2014.
  • 18. Purcell BA, et al. From salience to saccades: multiple-alternative gated stochastic accumulator model of visual search. J Neurosci. 2012;32:3433–3446. doi: 10.1523/JNEUROSCI.4622-11.2012.
  • 19. Heitz RP, Schall JD. Neural chronometry and coherency across speed-accuracy demands reveal lack of homomorphism between computational and neural mechanisms of evidence accumulation. Philos Trans R Soc Lond B Biol Sci. 2013;368:20130071. doi: 10.1098/rstb.2013.0071.
  • 20. Treisman AM, Gelade G. A feature-integration theory of attention. Cogn Psychol. 1980;12:97–136. doi: 10.1016/0010-0285(80)90005-5.
  • 21. Wolfe JM. Visual search. In: Pashler H, editor. Attention. London: UCL Press; 1998. pp. 13–74.
  • 22. Findlay JM, Walker R. A model of saccade generation based on parallel processing and competitive inhibition. Behav Brain Sci. 1999;22:661–674, discussion 674–721. doi: 10.1017/s0140525x99002150.
  • 23. Chen X, Zelinsky GJ. Real-world visual search is dominated by top-down guidance. Vision Res. 2006;46:4118–4133. doi: 10.1016/j.visres.2006.08.008.
  • 24. Bisley JW, Krishna BS, Goldberg ME. A rapid and precise on-response in posterior parietal cortex. J Neurosci. 2004;24:1833–1838. doi: 10.1523/JNEUROSCI.5007-03.2004.
  • 25. Kusunoki M, Gottlieb J, Goldberg ME. The lateral intraparietal area as a salience map: the representation of abrupt onset, stimulus motion, and task relevance. Vision Res. 2000;40:1459–1468. doi: 10.1016/s0042-6989(99)00212-6.
  • 26. Thomas NW, Pare M. Temporal processing of saccade targets in parietal cortex area LIP during visual search. J Neurophysiol. 2007;97:942–947. doi: 10.1152/jn.00413.2006.
  • 27. McPeek RM, Keller EL. Saccade target selection in the superior colliculus during a visual search task. J Neurophysiol. 2002;88:2019–2034. doi: 10.1152/jn.2002.88.4.2019.
  • 28. Thompson KG, et al. Perceptual and motor processing stages identified in the activity of macaque frontal eye field neurons during visual search. J Neurophysiol. 1996;76:4040–4055. doi: 10.1152/jn.1996.76.6.4040.
  • 29. Arcizet F, Mirpour K, Bisley JW. A pure salience response in posterior parietal cortex. Cereb Cortex. 2011;21:2498–2506. doi: 10.1093/cercor/bhr035.
  • 30. Einhauser W, Spain M, Perona P. Objects predict fixations better than early saliency. J Vis. 2008;8:18, 11–26. doi: 10.1167/8.14.18.
  • 31. Elazary L, Itti L. Interesting objects are visually salient. J Vis. 2008;8:3, 1–15. doi: 10.1167/8.3.3.
  • 32. Nuthmann A, Henderson JM. Object-based attentional selection in scene viewing. J Vis. 2010;10:20. doi: 10.1167/10.8.20.
  • 33. Blaser E, Sperling G, Lu ZL. Measuring the amplification of attention. Proc Natl Acad Sci U S A. 1999;96:11681–11686. doi: 10.1073/pnas.96.20.11681.
  • 34. Chelazzi L, et al. Responses of neurons in inferior temporal cortex during memory-guided visual search. J Neurophysiol. 1998;80:2918–2940. doi: 10.1152/jn.1998.80.6.2918.
  • 35. Motter BC. Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. J Neurophysiol. 1993;70:909–919. doi: 10.1152/jn.1993.70.3.909.
  • 36. Navalpakkam V, Itti L. Modeling the influence of task on attention. Vision Res. 2005;45:205–231. doi: 10.1016/j.visres.2004.07.042.
  • 37. Pomplun M. Saccadic selectivity in complex visual search displays. Vision Res. 2006;46:1886–1900. doi: 10.1016/j.visres.2005.12.003.
  • 38. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu Rev Neurosci. 1995;18:193–222. doi: 10.1146/annurev.ne.18.030195.001205.
  • 39. Gottlieb J. Parietal mechanisms of target representation. Curr Opin Neurobiol. 2002;12:134–140. doi: 10.1016/s0959-4388(02)00312-4.
  • 40. Gottlieb JP, Kusunoki M, Goldberg ME. The representation of visual salience in monkey parietal cortex. Nature. 1998;391:481–484. doi: 10.1038/35135.
  • 41. Ipata AE, et al. Activity in the lateral intraparietal area predicts the goal and latency of saccades in a free-viewing visual search task. J Neurosci. 2006;26:3656–3661. doi: 10.1523/JNEUROSCI.5074-05.2006.
  • 42. Kaas JH, et al. Multiple representations of the body within the primary somatosensory cortex of primates. Science. 1979;204:521–523. doi: 10.1126/science.107591.
  • 43. Allman JM, Kaas JH. Representation of the visual field in striate and adjoining cortex of the owl monkey (Aotus trivirgatus). Brain Res. 1971;35:89–106. doi: 10.1016/0006-8993(71)90596-8.
  • 44. Penfield W, Boldrey E. Somatic motor and sensory representation in the cerebral cortex of man studied by electrical stimulation. Brain. 1937;60:389–443.
  • 45. Robinson DA. Eye movements evoked by collicular stimulation in the alert monkey. Vision Res. 1972;12:1795–1808. doi: 10.1016/0042-6989(72)90070-3.
  • 46. Tweed D, Cadera W, Vilis T. Computing three-dimensional eye position quaternions and eye velocity from search coil signals. Vision Res. 1990;30:97–110. doi: 10.1016/0042-6989(90)90130-d.
  • 47. Thompson KG, Bichot NP. A visual salience map in the primate frontal eye field. Prog Brain Res. 2005;147:251–262. doi: 10.1016/S0079-6123(04)47019-8.
  • 48. Bisley JW, Goldberg ME. Attention, intention, and priority in the parietal lobe. Annu Rev Neurosci. 2010;33:1–21. doi: 10.1146/annurev-neuro-060909-152823.
  • 49. Krauzlis RJ, Lovejoy LP, Zenon A. Superior colliculus and visual spatial attention. Annu Rev Neurosci. 2013;36:165–182. doi: 10.1146/annurev-neuro-062012-170249.
  • 50. Balan PF, et al. Neuronal correlates of the set-size effect in monkey lateral intraparietal area. PLoS Biol. 2008;6:e158. doi: 10.1371/journal.pbio.0060158.
  • 51. Mirpour K, Bisley JW. Dissociating activity in the lateral intraparietal area from value using a visual foraging task. Proc Natl Acad Sci U S A. 2012;109:10083–10088. doi: 10.1073/pnas.1120763109.
  • 52. Churchland AK, Kiani R, Shadlen MN. Decision-making with multiple alternatives. Nat Neurosci. 2008;11:693–702. doi: 10.1038/nn.2123.
  • 53. Cohen JY, et al. Neural basis of the set-size effect in frontal eye field: timing of attention during visual search. J Neurophysiol. 2009;101:1699–1704. doi: 10.1152/jn.00035.2009.
  • 54. Murata A, et al. Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J Neurophysiol. 2000;83:2580–2601. doi: 10.1152/jn.2000.83.5.2580.
  • 55. Luppino G, et al. Largely segregated parietofrontal connections linking rostral intraparietal cortex (areas AIP and VIP) and the ventral premotor cortex (areas F5 and F4). Exp Brain Res. 1999;128:181–187. doi: 10.1007/s002210050833.
  • 56. Rizzolatti G, et al. Functional organization of inferior area 6 in the macaque monkey. II. Area F5 and the control of distal movements. Exp Brain Res. 1988;71:491–507. doi: 10.1007/BF00248742.
  • 57. Colby CL, Duhamel JR, Goldberg ME. Ventral intraparietal area of the macaque: anatomic location and visual response properties. J Neurophysiol. 1993;69:902–914. doi: 10.1152/jn.1993.69.3.902.
  • 58. Duhamel JR, Colby CL, Goldberg ME. Ventral intraparietal area of the macaque: congruent visual and somatic response properties. J Neurophysiol. 1998;79:126–136. doi: 10.1152/jn.1998.79.1.126.
  • 59. Gentilucci M, et al. Functional organization of inferior area 6 in the macaque monkey. I. Somatotopy and the control of proximal movements. Exp Brain Res. 1988;71:475–490. doi: 10.1007/BF00248741.
  • 60. Ferraina S, et al. Visual control of hand-reaching movement: activity in parietal area 7m. Eur J Neurosci. 1997;9:1090–1095. doi: 10.1111/j.1460-9568.1997.tb01460.x.
  • 61. Matelli M, et al. Superior area 6 afferents from the superior parietal lobule in the macaque monkey. J Comp Neurol. 1998;402:327–352.
  • 62. Kurata K. Information processing for motor control in primate premotor cortex. Behav Brain Res. 1994;61:135–142. doi: 10.1016/0166-4328(94)90154-6.
  • 63. Broadbent DE. Perception and Communication. London: Pergamon; 1958.
  • 64. Treisman A. Contextual cues in selective listening. Q J Exp Psychol. 1960;12:242–248.
  • 65. Broadbent DE. A mechanical model for human attention and immediate memory. Psychol Rev. 1957;64:205–215. doi: 10.1037/h0047313.
  • 66. Corteen RS, Dunn D. Shock associated words in a nonattended message: A test for momentary awareness. J Exp Psychol. 1973;102:1143–1144.
  • 67. Deutsch JA, Deutsch D. Attention: Some theoretical considerations. Psychol Rev. 1963;70:80–90. doi: 10.1037/h0039515.
  • 68. Gray J, Wedderburn A. Grouping strategies with simultaneous stimuli. Q J Exp Psychol. 1960;12:180–184.
  • 69. Pashler H. The Psychology of Attention. Cambridge, MA: MIT Press; 1998.
  • 70. Neumann O. A functional view of attention. In: Heuer H, Sanders AF, editors. Perspectives on Perception and Action. Hillsdale, NJ: Lawrence Erlbaum Associates; 1987. pp. 361–394.
  • 71. Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87. doi: 10.1007/BF00337259.
  • 72. Kopecz K, Schoner G. Saccadic motor planning by integrating visual information and pre-information on neural dynamic fields. Biol Cybern. 1995;73:49–60. doi: 10.1007/BF00199055.
  • 73. Meeter M, Van der Stigchel S, Theeuwes J. A competitive integration model of exogenous and endogenous eye movements. Biol Cybern. 2010;102:271–291. doi: 10.1007/s00422-010-0365-y.
  • 74. Wilimzig C, Schneider S, Schoner G. The time course of saccadic decision making: dynamic field theory. Neural Netw. 2006;19:1059–1074. doi: 10.1016/j.neunet.2006.03.003.
  • 75. Deubel H, Schneider WX. Saccade target selection and object recognition: evidence for a common attentional mechanism. Vision Res. 1996;36:1827–1837. doi: 10.1016/0042-6989(95)00294-4.
  • 76. Klein RM, Pontefract A. Does oculomotor readiness mediate cognitive control of visual attention? Revisited! In: Umilta C, Moscovitch M, editors. Attention and Performance XV. Cambridge, MA: MIT Press; 1994. pp. 333–350.
  • 77. Findlay JM, Gilchrist ID. Active Vision. New York: Oxford University Press; 2003.
  • 78. Brady TF, Konkle T, Alvarez GA. A review of visual memory capacity: Beyond individual items and toward structured representations. J Vis. 2011;11:4. doi: 10.1167/11.5.4.
  • 79. Luck SJ, Vogel EK. Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends Cogn Sci. 2013;17:391–400. doi: 10.1016/j.tics.2013.06.006.
  • 80. Olivers CN, et al. Different states in visual working memory: when it guides attention and when it does not. Trends Cogn Sci. 2011;15:327–334. doi: 10.1016/j.tics.2011.05.004.
  • 81. Carlisle NB, et al. Attentional templates in visual working memory. J Neurosci. 2011;31:9315–9322. doi: 10.1523/JNEUROSCI.1097-11.2011.
  • 82. Kyllingsbaek S, Bundesen C. Changing change detection: improving the reliability of measures of visual short-term memory capacity. Psychon Bull Rev. 2009;16:1000–1010. doi: 10.3758/PBR.16.6.1000.
  • 83. Luck SJ, Vogel EK. The capacity of visual working memory for features and conjunctions. Nature. 1997;390:279–281. doi: 10.1038/36846.
  • 84. Zelinsky GJ. Detecting changes between real-world objects using spatiochromatic filters. Psychon Bull Rev. 2003;10:533–555. doi: 10.3758/bf03196516.
  • 85. Arcizet F, et al. Decision making and attention in parietal cortex during a change detection task. Submitted.
  • 86. Franconeri SL, Alvarez GA, Cavanagh P. Flexible cognitive resources: competitive content maps for attention and memory. Trends Cogn Sci. 2013;17:134–141. doi: 10.1016/j.tics.2013.01.010.
  • 87. Hayhoe M, Ballard D. Eye movements in natural behavior. Trends Cogn Sci. 2005;9:188–194. doi: 10.1016/j.tics.2005.02.009.
  • 88. Land MF, Hayhoe M. In what ways do eye movements contribute to everyday activities? Vision Res. 2001;41:3559–3565. doi: 10.1016/s0042-6989(01)00102-x.
  • 89. Beck VM, Hollingworth A, Luck SJ. Simultaneous control of attention by multiple working memory representations. Psychol Sci. 2012;23:887–898. doi: 10.1177/0956797612439068.
  • 90. Woodman GF, Luck SJ, Schall JD. The role of working memory representations in the control of attention. Cereb Cortex. 2007;17(Suppl 1):i118–124. doi: 10.1093/cercor/bhm065.
  • 91. Woodman GF, Luck SJ. Do the contents of visual working memory automatically influence attentional selection during visual search? J Exp Psychol Hum Percept Perform. 2007;33:363–377. doi: 10.1037/0096-1523.33.2.363.
  • 92. Soto D, et al. Early, involuntary top-down guidance of attention from working memory. J Exp Psychol Hum Percept Perform. 2005;31:248–261. doi: 10.1037/0096-1523.31.2.248.
  • 93. Genovesio A, Brasted PJ, Wise SP. Representation of future and previous spatial goals by separate neural populations in prefrontal cortex. J Neurosci. 2006;26:7305–7316. doi: 10.1523/JNEUROSCI.0699-06.2006.
  • 94. Warden MR, Miller EK. Task-dependent changes in short-term memory in the prefrontal cortex. J Neurosci. 2010;30:15801–15810. doi: 10.1523/JNEUROSCI.1569-10.2010.
  • 95. Allport DA. Attention and performance. In: Claxton G, editor. Cognitive Psychology. London: Routledge & Kegan Paul; 1980. pp. 112–153.
  • 96. Jonikaitis D, Deubel H. Independent allocation of attention to eye and hand targets in coordinated eye-hand movements. Psychol Sci. 2011;22:339–347. doi: 10.1177/0956797610397666.
  • 97. Hier DB, Mondlock J, Caplan LR. Recovery of behavioral abnormalities after right hemisphere stroke. Neurology. 1983;33:345–350. doi: 10.1212/wnl.33.3.345.
  • 98. Schiller PH, True SD, Conway JL. Effects of frontal eye field and superior colliculus ablations on eye movements. Science. 1979;206:590–592. doi: 10.1126/science.115091.
  • 99. Van Opstal AJ, Van Gisbergen JA. A nonlinear model for collicular spatial interactions underlying the metrical properties of electrically elicited saccades. Biol Cybern. 1989;60:171–183. doi: 10.1007/BF00207285.
  • 100. Graziano MS, Taylor CS, Moore T. Complex movements evoked by microstimulation of precentral cortex. Neuron. 2002;34:841–851. doi: 10.1016/s0896-6273(02)00698-0.
  • 101. Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. J Neurophysiol. 1968;31:14–27. doi: 10.1152/jn.1968.31.1.14.
  • 102. Georgopoulos AP, et al. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci. 1982;2:1527–1537. doi: 10.1523/JNEUROSCI.02-11-01527.1982.
  • 103. Moran DW, Schwartz AB. Motor cortical representation of speed and direction during reaching. J Neurophysiol. 1999;82:2676–2692. doi: 10.1152/jn.1999.82.5.2676.
  • 104. Norman D, Shallice T. Attention to action: Willed and automatic control of behaviour. In: Davidson R, Schwartz G, Shapiro D, editors. Consciousness and Self-Regulation: Advances in Research and Theory. New York: Plenum; 1986. pp. 1–18.
