Author manuscript; available in PMC: 2008 Aug 22.
Published in final edited form as: Cogn Affect Behav Neurosci. 2007 Dec;7(4):413–422. doi: 10.3758/cabn.7.4.413

PROBING HUMAN AND MONKEY ANTERIOR CINGULATE CORTEX IN VARIABLE ENVIRONMENTS

Mark E Walton 1, Rogier B Mars 2,3
PMCID: PMC2519031  EMSID: UKMS2231  PMID: 18189014

Abstract

Previous research has identified the anterior cingulate cortex (ACC) as an important node in the neural network underlying decision making in primates. Decision making can, however, be studied under a large variety of circumstances, ranging from the standard, well-controlled laboratory situation to more natural, stochastic settings in which multiple agents interact. Here, we illustrate how the particular variety of decision making studied can influence theories of ACC function in monkeys. Converging evidence from unit recording and lesion studies now suggests that the ACC is important for interpreting outcome information according to the current task context in order to guide future action selection. We then apply this framework to the study of human ACC function and discuss its potential implications.

Keywords: Anterior cingulate cortex, ACC, decision making, action selection, reward, monkey, human, foraging theory


Human and animal decision making comprises the combined cognitive processes leading to the selection of a course of action among alternatives. In order to complete these processes successfully, a decision maker (agent) needs to identify the current environmental state, its own behavioral goals given this environmental state and its own internal state, compute the relative contribution of each action toward obtaining this goal, and finally select and execute the most appropriate action.

A large network of regions in the frontal cortex and basal ganglia of the primate brain has been shown to be involved in the selection of goal-directed actions (Passingham, 1993), but the precise role of each node in this network remains a topic of ongoing debate. A central role in decision making has been attributed to the anterior cingulate cortex (ACC), which has been suggested to be involved in executive attention, supervisory attentional control, selection for action, conflict detection, and several varieties of performance monitoring (e.g., Posner, Petersen, Fox, & Raichle, 1988; Posner & DiGirolamo, 1998; Botvinick, Braver, Barch, Carter, & Cohen, 2001; Ridderinkhof, Nieuwenhuis, Crone, & Ullsperger, 2004). Rather than incorporating all these functions within a single framework by attributing to the ACC the role of all-powerful homunculus presiding over action selection, the challenge is to find a common denominator of these functions, which accurately describes the role of the ACC in decision making.

In this paper, we examine how this challenge has been taken up in the study of decision making in primates. Our main thesis is that there is a large variety of circumstances in which decision making can be studied, and that the specific variety studied has a strong influence on the conclusions that can be drawn. We will concentrate specifically and separately on the ideas to emerge from work on non-human and human primates, and illustrate the differences in approaches in studying these different species. We will start by illustrating the various types of decision making that can be identified and briefly describe the anatomical underpinnings of ACC function. Following this, we will illustrate the paradigms employed in monkey ACC research within this decision making framework and discuss the conclusions that can be drawn. Finally, we will apply this knowledge to the study of human ACC function.

Varieties of decision making

Figure 1 illustrates some of the varieties of decision making referred to in this paper. In its simplest form, decision making is concerned with a single agent, acting in a stable environment and having complete information (Figure 1A). In these circumstances decision making, although not trivial, is relatively straightforward. Given a particular stimulus, the agent knows that a certain action will lead to a certain reward, whereas the alternative action will not. Sometimes the properties of the environment are not known and have to be obtained through learning. Given the stability of the environment, however, this learning can be relatively straightforward, such as simple trial-and-error learning based on external feedback.
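Such trial-and-error learning in a stable environment can be captured by a simple delta-rule value learner. The sketch below is purely illustrative; the reward probabilities, learning rate, and softmax temperature are hypothetical choices, not parameters drawn from any study discussed here.

```python
import math
import random

def softmax_choice(values, beta, rng):
    """Pick an action with probability proportional to exp(beta * value)."""
    weights = [math.exp(beta * v) for v in values]
    r = rng.random() * sum(weights)
    for action, w in enumerate(weights):
        r -= w
        if r <= 0.0:
            return action
    return len(values) - 1

def learn_stable(p_reward=(0.9, 0.1), alpha=0.2, beta=5.0, n_trials=500, seed=1):
    """Learn the values of two actions with fixed reward probabilities.

    Each outcome updates the chosen action's value estimate by a fraction
    (alpha) of the prediction error -- the simplest form of trial-and-error
    learning from external feedback in a stable environment.
    """
    rng = random.Random(seed)
    values = [0.0, 0.0]
    for _ in range(n_trials):
        action = softmax_choice(values, beta, rng)
        reward = 1.0 if rng.random() < p_reward[action] else 0.0
        values[action] += alpha * (reward - values[action])
    return values

values = learn_stable()
```

Because the pay-offs never change, the value estimates converge on the true reward probabilities and the agent comes to choose the better action almost exclusively.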

Figure 1. Pay-off matrices associated with various decision making environments.

Figure 1

For each environment, the pay-off matrix for each of two actions in a single trial is given. In a stable environment (A) the decision maker (agent) potentially has full information about which action will yield the highest reward, and for any particular stimulus this is always the same action; in a changing environment (B) the decision maker can be in either of two task environments, the pay-offs of which are fully known to the actor; in a stochastic environment (C) the reward associated with any particular action changes with a certain probability; in an interactive environment (D) multiple decision makers are present (here denoted as ‘decision maker 1’ and ‘decision maker 2’; the pay-offs in each cell are for decision makers 1 and 2, respectively), and the performance of any action influences the environment (i.e., the other decision maker) and changes the action-reward associations.

Primate decision making is, however, usually not that simple. We inhabit an uncertain and continuously changing environment in which we interact with other agents (cf. Glimcher, 2003). Studying decision making in a more ‘natural’ setting thus requires the introduction of a changeable environment, as has been studied using, for instance, the task-switching paradigm (Figure 1B; as with the stable environment, the agent can either be informed of the task context or may have to learn it), or a stochastic environment in which actions are rewarded only some of the time (Figure 1C). The most complex setting, and the one which comes closest to imitating the natural everyday world, is one containing multiple interacting agents (Figure 1D). In this setting the actions of one agent can influence both the likelihood of a particular outcome occurring and the choices subsequently made by the other agents, who are also working to maximize their own reward. Such a scenario thus places particular emphasis on the history of recent actions and their outcomes to guide action selection. This type of setting has been captured in a number of mathematical models which have been used extensively to describe behavior in situations ranging from animals competing for food (e.g., Stephens & Krebs, 1986) to professional tennis players (Walker & Wooders, 2001).
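The interactive case (Figure 1D) can be made concrete with the simplest competitive game of this kind, a matching-pennies-style pay-off matrix, which is also the structure underlying the tennis-serve analyses of Walker and Wooders (2001). The sketch below is a hedged illustration: the pay-off values are hypothetical, and the closed-form solution shown applies only to 2x2 zero-sum games.

```python
def mixed_equilibrium_2x2(payoff):
    """Equilibrium probability with which the row player should take the first
    action in a 2x2 zero-sum game, derived from the indifference condition:
    the mix must leave the opponent indifferent between both replies."""
    (a, b), (c, d) = payoff
    return (d - c) / (a - b - c + d)

# Row player's pay-offs in a matching-pennies-style interaction; the column
# player receives the negative of each entry.
pennies = ((1.0, -1.0), (-1.0, 1.0))
p_first = mixed_equilibrium_2x2(pennies)  # 0.5: randomize evenly
```

Because each player's best action depends on the other's, no fixed choice is stable; the only equilibrium is to randomize. This is why, in such environments, the recent history of actions and outcomes, rather than any fixed stimulus-response mapping, becomes the relevant guide to behavior.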

Anterior cingulate cortex

As described eloquently by Paus (2001), the ACC is positioned at the interface of the frontal cortex, the motor system, and subcortical structures, putting it in an ideal position to integrate information for the control of behavior. The ventral part of the ACC, consisting of the cingulate gyrus (containing Brodmann Areas 24a, 24b, 25), is traditionally considered part of the limbic system, while the dorsal part, which is mostly buried in the cingulate sulcus (ACS) (containing Brodmann Areas 24c and 32' or 32ac), is termed the paralimbic part.

Studies investigating the role of the primate ACC in decision making have tended to focus on dorsal, paralimbic ACC, particularly around the cingulate motor areas, which, in monkeys, are known to be buried within the ACS. Based on anatomical connections and functional properties, three cingulate motor areas have been identified in the monkey brain: the rostral cingulate motor area (CMAr, located in area 24c) and the dorsal and ventral cingulate motor areas (CMAd and CMAv) located ventral to the pre-SMA/SMA (Picard & Strick, 1996) (Figure 2A). Tentative homologues of these areas in humans have been described as the rostral cingulate zone anterior (RCZa) and posterior (RCZp) and the caudal cingulate zone (CCZ), respectively (Picard & Strick, 1996) (Figure 2B). The cingulate motor areas, particularly CMAr, receive a convergence of inputs from adjacent motor structures and the prefrontal cortex (the latter possibly coming indirectly from the tissue in the rostral cingulate sulcus: see Hatanaka et al., 2003), as well as having access to information from the limbic system and receiving, indirectly, projections from the striatum (Hatanaka et al., 2003; Bates & Goldman-Rakic, 1993; Van Hoesen, Morecraft, & Vogt, 1993; Morecraft & Van Hoesen, 1998). Each of these cingulate motor areas has direct connections to the primary motor cortex and the spinal cord, allowing them to influence movement directly (He, Dum, & Strick, 1995; Wang, Shima, Sawamura, & Tanji, 2001). Area 24c (probably including CMAr) also sends efferents to orbital and lateral prefrontal regions and projects strongly to parts of the rostral striatum, particularly around the striatal cell bridges and ventrolateral putamen (Takada et al., 2001; Kunishio & Haber, 1994).
The CMAs are recipients of dense monoamine innervations, with there being many dopamine, serotonin and noradrenaline terminals in these regions, all of which are likely to be crucial for modulating information processing here (Berger, Trottier, Verney, Gasper, & Alvarez, 1988; Williams & Goldman-Rakic, 1998).

Figure 2. Comparative anatomy of monkey and human ACC.

Figure 2

(A) Depiction of monkey ACC with a section of the mid-dorsal cingulate enlarged in the lower panel to illustrate the position of the cingulate motor areas (CMAs). Redrawn and adapted, based on Dum and Strick (1993). (B) Depiction of human ACC with a section of medial frontal cortex enlarged in the lower panel to illustrate putative homologies of the CMAs – the anterior and posterior rostral cingulate zone (RCZa and RCZp) and caudal cingulate zone (CCZ) (as proposed by Picard & Strick, 1996).

It is important to point out that all of the single unit recording studies in monkeys on the role of the ACC in decision making have focused on the ACS, either around the CMAr or in the tissue rostral to it. The majority, too, have recorded from the dorsal bank of the ACS. Given the large anatomical variability in ACC structure between subjects and the limited resolution of functional imaging methods, many studies of the human ACC nowadays refer to a rostral cingulate zone, comprising both the RCZa and the RCZp (Ridderinkhof et al., 2004). As the purpose of this review is to attempt a convergence between studies in monkeys and humans, the focus of our discussion will center on this region. However, it is worth noting that the ACS is densely interconnected with all other ACC subregions (Morecraft & Van Hoesen, 1998; Van Hoesen et al., 1993; Hatanaka et al., 2003), meaning that similar principles may apply to the ACC as a whole.

Decision making in monkey ACC

Decision making in a stable environment

Animals are usually trained to work for reward. Whether one is studying learning, memory, sensory discrimination, or motor behavior, the animal has to be given some kind of incentive after each appropriate response both to reinforce this behavior and to motivate it to continue to perform the task. On one level, this can be seen as a potential confound of such experiments. Many aspects of human memory, for instance, do not seem to require such encouragement. In terms of the study of the ACC, however, it can be argued that it is precisely this feature that has been invaluable in informing our understanding.

A large number of studies in monkeys have focused on performance in a ‘stable’ environment, i.e., the monkey was trained extensively on stimulus-response-reward mappings prior to either electrophysiological recording or receiving a lesion, and the task contingencies remained constant during all subsequent experimental sessions. In the terminology of decision theory, this situation concerns an individual agent equipped with complete information about the current context (e.g., Figure 1A). Such scenarios have long been favored by experimental psychologists for the control they afford over the parameters of a task, separate from issues of current motivation. Several single unit recording studies during well-taught, static tasks in which monkeys use stimuli to guide action selection have reported cells in the ACS that respond either to the anticipation or receipt of reward or to the recognition of an error (Niki & Watanabe, 1976; Niki & Watanabe, 1979; Nishijo, Yamamoto, Ono, Uwano, Yamashita, & Yamashima, 1997; Ito, Stuphorn, Brown, & Schall, 2003; Nakamura, Roesch, & Olson, 2005). However, extensive lesions to the ACC generally produce little if any change in performance when comparable paradigms, such as delayed response and delayed alternation, have been taught pre-operatively, which raises the question of the functional importance of the information encoded by ACC neurons (Pribram & Fulton, 1954; Murray, Davidson, Gaffan, Olton, & Suomi, 1989; Meunier, Bachevalier, & Mishkin, 1997; Rushworth, Hadland, Gaffan, & Passingham, 2003). The one clear deficit that has been observed in monkeys with ACC lesions in such a stable task is in using reward-related information to guide action selection, even though the animals were still able to select appropriate actions in a visual discrimination task (Hadland, Rushworth, Gaffan, & Passingham, 2003).

A first variation on the above class of paradigm is to study the situation in which the agent – in this case, the monkey – does not yet have complete information, but is instead required to learn the relative values of actions in a certain, stable environment. To the best of our knowledge, this situation has not yet been explicitly tested in experiments focusing on the ACC.

Beyond simple mappings

A number of studies have fruitfully investigated the role of the ACC in choice behavior when the task environment is experimentally changed, meaning that the animals have to adapt to the new circumstances. Here, the consequences of a particular choice are not fixed, although, depending on the task, the appropriate response in each context may either be well known or need to be found. By systematically varying the relationship between two stimuli, two responses, and two outcomes, it was shown that cells in the ACS that encode particular response-reward associations are much more common than cells that encode stimulus-reward associations (Matsumoto, Suzuki, & Tanaka, 2003), supporting the emphasis on actions and outcomes in the ACC indicated by the study by Hadland and colleagues (Hadland et al., 2003). This study used varying stimulus-response-outcome combinations in order to isolate the effects of each of these factors on neuronal firing, and did not attempt to investigate how changing the task environment affects cell activity. By contrast, Procyk and colleagues (Procyk, Tanaka, & Joseph, 2000) recorded from the ACS during a simple trial-and-error learning task in which the monkey had to work out and repeat the correct order of a changing three-movement sequence. As well as again finding particular neurons sensitive to rewards and errors, they discovered that a large number of cells showed greater activity when an animal was learning, and not just reproducing, the sequence.

ACS cells also respond to reductions in reward that signal a requirement to change behavior (Shima & Tanji, 1998). Nonetheless, merely presenting a monkey with a task in which it has to flexibly change its behavior is not always sufficient to engage the ACC. Little cell activity has been reported in the ACS when monkeys were taught to switch between well-learned, categorical responses on the basis of a sensory cue (Shima & Tanji, 1998) and animals with large ACC lesions were no more impaired at response selection at the time when a stimulus instructs a switch between one of two response sets than when selecting an action with the response rule well-established (Rushworth et al., 2003).

The evidence presented above from tasks which switch the contingencies between actions and outcomes might appear to implicate the ACC as being important for linking actions with outcomes and using alterations in feedback information to drive changes in response selection. However, the results of a recent study indicate that this may be only a partial description of this region's function. Kennerley and colleagues (Kennerley, Walton, Behrens, Buckley, & Rushworth, 2006) tested monkeys on a reinforcement-guided reversal task in which the animals chose between one of two joystick responses, only one of which was rewarded at any one time. The contingency between the actions and reward also switched periodically. Following lesions to the ACS, monkeys were still able to update their choices in a manner comparable to controls. Instead, the lesioned animals displayed difficulties in sustaining the correct response in this changeable environment.

Given the emphasis on guiding and updating choices based on action-outcome associations that had emerged from previous studies, this might initially seem a puzzling result. One interesting aspect of this task was that the control animals, in spite of being highly trained on the task, only gradually became more likely to change their behavior after an imposed switch over the succeeding trials, as they acquired more evidence that the response-reward contingencies had altered. To them, the receipt or absence of reward was not equivalent to feedback demonstrating whether or not a choice was correct; instead, they appeared to treat the reinforcement as pieces of information to be weighed up in the context of the outcomes of previous choices. Such behavior may appear surprising in the context of the way humans should perform such a task. However, to a monkey deciding where it might be best to forage, outcomes are seldom categorically right or wrong; the availability or lack of food following a particular course of action has to be weighed in the context of the previous history of what such choices yielded. Close analysis of the pattern of impairments in the ACS-lesioned animals implied that whereas the normal monkeys would consider outcomes from several trials into the past, the lesion group behaved myopically, only taking into account the outcome of the immediately preceding trial.

Integrating past rewards in a stochastic world

In the studies described above, while the task environment may have been changeable, the outcome of each individual decision was deterministic, either always resulting in a reward or never doing so. An optimal strategy in Kennerley and colleagues' (2006) switching experiment, for instance, would have been to use a win-stay, lose-switch rule to guide action selection. The fact that the monkeys seemed spontaneously to adopt a policy of basing their choices on the discounted history of actions and outcomes stretching back across several trials suggests that their brains have evolved to operate in a stochastic world. The question therefore arises as to what role the ACC plays in learning about and representing the value of choices in an environment where the outcome of each option is determined probabilistically.
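The distinction between a one-trial rule and an integrated outcome history can be sketched as two update schemes. This is an illustrative reconstruction rather than a model fitted in any of the studies discussed, and the learning rate is an arbitrary choice.

```python
def win_stay_lose_shift(choice, rewarded):
    """One-trial rule: repeat a rewarded choice, switch after a failure."""
    return choice if rewarded else 1 - choice

def discounted_update(values, choice, reward, alpha=0.3):
    """Leaky integration: each action's value is an exponentially discounted
    average of its past outcomes, so a single trial only nudges the estimate."""
    updated = list(values)
    updated[choice] += alpha * (reward - updated[choice])
    return updated
```

Under the one-trial rule, a single unrewarded trial forces a switch; under leaky integration, an option with a rich history (say, a value of 0.9) still dominates after one failure (its value drops only to 0.63 with alpha = 0.3), reproducing the gradual adjustment shown by the control animals.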

A recent study by Amiez and colleagues (Amiez, Joseph, & Procyk, 2006) investigated exactly this issue. They recorded from cells in the ACS while monkeys chose between two options which delivered either a large or a small amount of juice with unequal probabilities (one gave the large reward on 70% of trials, whereas the other did so on only 30%), and compared activity to that in a “no-choice” condition in which the reward size was fixed and indicated by a visual cue. In the choice task, therefore, the optimal stimulus could not be determined from a single outcome, but instead had to be ascertained by monitoring the history of choices and payoffs across a number of trials. While some neurons signaled the received reward quantity, others appeared to encode the average probabilistic value of the likely available rewards. Furthermore, injections of the GABA agonist muscimol into this region caused impairments in locating the optimal stimulus. This suggests that the ACC can be important for acquiring and using stimulus-reward associations when the link between the two factors cannot be determined from the outcome of a single trial.

Decision making in a dynamically changing environment

While the study by Amiez and colleagues (2006) introduced probabilistic outcomes into the experimental set-up, it can be considered a form of static stochasticity, whereby the animal can learn the value of each option with increasing precision over time. In the natural environment, however, animals' choices influence the likelihood of a particular outcome being forthcoming in the future. For instance, a monkey that is successfully foraging in a fruit tree is simultaneously acquiring food and depleting the immediately available resource; at some point, it has to decide when it would be worth moving on to explore other potential patches of food. Similarly, if the richest source of food is already overrun with other conspecifics, then it is likely to be more fruitful to move to an alternative location with fewer competitors, even if this patch contains fewer resources than the other. These, and similar questions, have long been considered within behavioral ecology: the question of when to switch between modes of behavior, for instance, is captured by the marginal value theorem (Charnov, 1976; Stephens & Krebs, 1986), and that of how animals distribute themselves amongst available food sources by the theory of Ideal Free Distribution (Fretwell, 1972). If the ACC is crucial for guiding decision making based on the history of choices and payoffs, then it should be seen to reflect some of these parameters in an interactive environment, allowing an animal to explore whether the value of available options is dynamically changing and to take into consideration the behavior of other animals.
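The marginal value theorem's patch-leaving rule can be illustrated numerically: an animal should leave a depleting patch at the residence time that maximizes its overall intake rate, i.e., gain divided by residence time plus travel time. The saturating gain function and all parameter values below are hypothetical, chosen only to exhibit the qualitative prediction.

```python
import math

def optimal_residence(travel_time, a=10.0, tau=2.0, t_max=20.0, grid=10000):
    """Residence time t maximizing overall intake rate g(t) / (t + travel),
    with patch gain g(t) = a * (1 - exp(-t / tau)), found by grid search."""
    best_t, best_rate = 0.0, 0.0
    for i in range(1, grid + 1):
        t = t_max * i / grid
        rate = a * (1.0 - math.exp(-t / tau)) / (t + travel_time)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# The classic prediction: longer travel between patches means staying longer
# in the current patch before it pays to move on.
t_short_travel = optimal_residence(travel_time=1.0)
t_long_travel = optimal_residence(travel_time=4.0)
```

With these assumed parameters, the optimal residence time under long travel exceeds that under short travel, the standard marginal value theorem prediction for patchy environments.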

To examine this question, Kennerley and colleagues (2006) trained monkeys on a dynamic matching task, originally designed by Herrnstein (1997), in which the animals could choose between two responses that were rewarded with unequal probabilities. However, this paradigm differs from the static stochastic task of Amiez and colleagues in that rewards were assigned independently to each option on every trial and remained available until that response was chosen. Therefore, optimal behavior did not simply involve integrating the history of choices and outcomes to discover the response with the richest yield and then persisting with it; the gradual accumulation of reward on the unselected option meant that there would always come a point when the less profitable response, if ignored, would become more likely to produce a reward than the continually chosen high probability option. To perform the task efficiently, therefore, the monkeys not only had to learn the probabilistic value of each response but also to interpret received outcomes within the context of the task, in which both action value and choice history are important components, in order to switch appropriately between the options. While monkeys with ACS lesions behaved comparably to controls when the outcomes of the two responses were deterministic (always or never rewarded), as soon as the task required knowledge of the history of payoffs by presenting probabilistic outcomes, the lesioned animals were significantly slower to reach the optimal response allocation.
Furthermore, whereas normal animals appeared to consider each individual outcome in the context of their extended reward history (being more likely to switch away from the less profitable option than from the more profitable one when the previous trial was rewarded and, conversely, more likely to sustain behavior on the more profitable option than on the less profitable one when the previous trial was unrewarded), such patterns of choices were not observed consistently in animals with ACS lesions.
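The key property of this baited schedule, namely that a reward assigned to an unchosen option persists until it is collected, can be sketched as follows. This is a reconstruction of the task structure from its description, with hypothetical probabilities, not the original experimental code.

```python
import random

class BaitedSchedule:
    """Concurrent schedule: each option is independently baited with a fixed
    probability per trial, and a bait remains until that option is chosen."""

    def __init__(self, bait_probs, seed=0):
        self.bait_probs = bait_probs
        self.baited = [False] * len(bait_probs)
        self.rng = random.Random(seed)

    def step(self, choice):
        """Bait each empty option, then collect (and clear) any bait on the
        chosen option. Returns True if the choice was rewarded."""
        for i, p in enumerate(self.bait_probs):
            if not self.baited[i] and self.rng.random() < p:
                self.baited[i] = True
        reward = self.baited[choice]
        self.baited[choice] = False
        return reward

def availability_if_ignored(p, n_trials):
    """Chance that an option baited with per-trial probability p holds a
    reward after being ignored for n_trials trials: 1 - (1 - p) ** n."""
    return 1.0 - (1.0 - p) ** n_trials
```

Even a lean option (p = 0.1) is almost certain to hold a reward after fifty ignored trials (probability about 0.995), which is why exclusively exploiting the richer option is suboptimal and why each outcome must be read against the history of recent choices.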

By investigating the ACC using a task which mimics several of the features faced by a foraging animal, this result implies that, more than simply learning and representing an extended history of choices and payoffs, the ACC – or, at least, the ACS – might be important for interpreting outcome information according to the current task context and biasing action selection accordingly. If an animal is in a region, or at a time of year, of sparse food resources, then it might have to make do with anything it can discover; infrequent reward information may come to be particularly valuable in guiding behavior. By contrast, if there are rich pickings to be had, then the same animal may be more willing to forgo what it would previously have taken in order to hunt for more plentiful sources of food; here, the lack of reward may very rapidly cause an adjustment in behavioral strategy. There have been few studies to date that have examined how the ACC responds when the task environment is systematically manipulated. However, a small population of cells in the ACS is known to encode the proximity of a reward in a multi-step task, which indicates that this region may have the capacity to encode the outcome of choices in a manner dependent on the current context (Shidara & Richmond, 2002). How the statistics of the local environment might be determined – including, for instance, its likely average yield, its overall stability, the number of potential competitors, and the variance in the estimates of any of these factors – and whether these are coded separately from, or as intrinsic to, the overall values of each available option remain open questions.

Decision making in human ACC

Error processing

Studies of human ACC in decision making have focused predominantly on error processing. The discovery of the error-related negativity (ERN; Falkenstein, Hohnsbein, Hoormann, & Blanke, 1990; Gehring, Goss, Coles, Meyer, & Donchin, 1993), a potential believed to be generated in the dorsal ACC (Ullsperger & Von Cramon, 2001; Holroyd et al., 2004b), has generated a surge of interest in the processes underlying the optimizing of behavior in humans. Similar to early research in monkeys, studies into the processes underlying the ERN focused strongly on a single agent with complete information, acting mostly in the context of over-trained speeded-response time tasks, such as the Eriksen flanker task and the Stroop task. The discovery of a similar potential related to negative performance feedback (Miltner, Braun, & Coles, 1997) opened the possibility of studying the development of ACC activity during learning of the stimulus-response-reward contingencies in a stable environment.

The strong focus on error processing in humans is at odds with the work on monkeys, which has focused on both positive rewards and negative outcomes (i.e., punishments or errors). Although several authors have suggested that the human ACC should also be activated following positive reinforcement under some circumstances (e.g., Ridderinkhof et al., 2004; Mars et al., 2005), to our knowledge there is only one study showing this effect. In an fMRI study on task switching, Walton and colleagues (Walton, Devlin, & Rushworth, 2004) asked participants to switch to one of two different stimulus-response mappings. Participants were not explicitly informed of which mapping they were switching to, but had to uncover this themselves using performance feedback. On the first trial following the switch, participants' ACC was equally active following positive and negative feedback, suggesting that it might not be the error information per se that is responsible for the ACC activity. A possible explanation for the discrepancy between this result and those of earlier studies is that, in this study, the informational value of positive and negative feedback on the first trial after a switch was equal: after either type of feedback, participants knew exactly which mapping they should be using. In contrast, in the more standard speeded-response tasks, correct trials contain no information on how to adjust behavior. Similarly, most learning studies (e.g., Holroyd & Coles, 2002; Mars et al., 2005) contain only a few ‘first correct’ trials; most correct trials in these experiments occur after learning is complete and thus contain no new information.

Response-outcome associations and human ACC

Although the results obtained in humans mimic some of those obtained in monkeys, they do not provide unambiguous evidence on whether the ACC is involved merely in monitoring the outcomes of actions depending on the context, or is actively involved in the selection of appropriate actions, as the results obtained in monkeys would suggest. Evaluative models of the ERN interpreted the ACC as a comparator of representations of the correct response and the actually executed response (see Coles, Scheffers, & Holroyd, 2001) or as a monitor for simultaneous response activations (Botvinick et al., 2001). However, these accounts are seemingly at odds with early positron emission tomography studies of human action selection, which often reported involvement of the ACC in conditions of learning on the basis of feedback, as well as in conditions in which participants were asked to freely select responses or simply had to attend to the next movement in a pre-learned sequence (Jueptner, Stephan, Frith, Brooks, Frackowiak, & Passingham, 1997; Jueptner, Frith, Brooks, Frackowiak, & Passingham, 1997).

A number of research lines have recently started to integrate the two viewpoints. In a recent fMRI study, Walton and colleagues (Walton et al., 2004) addressed the relationship between selection and monitoring within the ACC. They compared conditions in which participants had to execute an instructed response (FIXED), had to monitor the outcome of a previously instructed movement (FIXED+MONITOR), had to freely select a movement (GENERATE), or had to freely select a movement and monitor its outcome in order to adjust their behavior on the next trial (GENERATE+MONITOR). The ACC was more active in the GENERATE than in the FIXED+MONITOR condition, suggesting a role for the ACC in selection even when no performance monitoring is required. Importantly, the ACC was activated most strongly in the GENERATE+MONITOR condition, when participants additionally had to monitor the outcome of the selected action. These results point to an interaction of selection and monitoring processes within the human ACC and open the possibility that, just as in monkeys, the human ACC has a role in action selection based on the expected outcome of an action (Holroyd, Nieuwenhuis, Mars, & Coles, 2004b; Rushworth, Walton, Kennerley, & Bannerman, 2004).

Further evidence for the involvement of human ACC in action selection comes from studies on learning. The learning studies discussed above showed a strong modulation of error-related ACC activity by learning (Holroyd & Coles, 2002; Nieuwenhuis et al., 2002; Mars et al., 2005), which was predicted by a computational model of action selection in the ACC based on evaluative information from the dopamine system (Holroyd & Coles, 2002). However, these studies did not show a direct correlation between ACC activity and the decision on the next trial. A recent study by Hester and colleagues (Hester, Silk, & Mattingly, 2006) addressed this issue directly. They asked participants to play a memory game in which they had to indicate the location of certain two-digit numbers on a visual display. Following each trial, they were given performance feedback. Negative performance feedback elicited strong ACC activation, and more so when participants learned from their error and responded correctly on the next trial. Again, these results point to an interaction between evaluation and selection within the human ACC.

Towards human decision making in variable environments

The results described above illustrate how the gap between the monkey and human literatures may be bridged by employing slightly more complicated designs, such as task switching and learning. The human results then tend to point to conclusions similar to those reached in monkeys: that the ACC is involved in biasing action selection based on the predicted outcome of an action (Holroyd et al., 2004b; Rushworth et al., 2004).

This approach might also shed some light on human lesion studies. Although there is ample evidence of error-related responses in the human ACC in a number of speeded response-time tasks (see above), a recent assessment of cognitive control function, using Stroop and go-nogo tasks, in 12 patients with ACC lesions did not reveal any impairment compared to healthy controls (Fellows & Farah, 2005). Similar results have been obtained by Baird and colleagues (Baird, Dewar, Critchley, Gilbert, Dolan, & Cipolotti, 2006), although some case studies suggesting stronger deficits have been reported (Turken & Swick, 1999; Swick & Jovanovic, 2002). These results suggest that, under these circumstances, error-related activity, although prominent, may not play an essential role in guiding behavior. As illustrated above, ACC lesions in monkeys also hardly affect performance in well-learned, relatively static tasks (Rushworth et al., 2003), even though error- and reward-related signals are found during performance of these tasks (e.g., Niki & Watanabe, 1976; 1979). Conversely, a fully functional ACC is clearly necessary for optimal behavior in tasks in which the contingencies of the environment are not fully known, such as learning tasks, or in which the environment is unpredictable or stochastic (Kennerley et al., 2006). Studying ACC patients with similar paradigms might provide insights into the specific cognitive deficits of these patients. To date, we are not aware of any studies assessing the necessity of the ACC for this type of behavior.

In summary, we suggest that the approach advocated for monkeys, with its focus on more ‘natural’, dynamic environments, might also be applied successfully to the study of human ACC function. This approach might shed light on some outstanding controversies, such as whether the ACC deals purely with errors and why some lesion studies have provided conflicting results. A challenge for the future is to investigate human ACC function in fully dynamic environments, in which multiple agents interact and which place more emphasis on reward history. Although studies on the human ACC in environments with multiple agents have mainly focused on relatively artificial settings (Miltner, Brauer, Hecht, Trippe, & Coles, 2004; Van Schie, Mars, Coles, & Bekkering, 2004), this challenge is starting to be taken up by several groups (e.g., Cohen & Ranganath, 2007; Hewig, Trippe, Hecht, Coles, Holroyd, & Miltner, 2007; Holroyd & Coles, 2007).

Monkeys and humans

We have shown that by taking into account more ‘natural’ experimental settings, we can gain new insights into the role of the monkey ACC in decision making. In doing so, we have borrowed heavily from decision theory and optimal foraging theory, both disciplines aimed at providing formal models of animal behavior. We have also suggested that, to a certain extent, a similar approach may be applied to research on human ACC function and may provide a way of reconciling some of the controversies in the current human decision making literature. However, there are some questions that need to be addressed before one can safely attempt to fully integrate the two literatures.

Ecological niches

One important question that immediately arises is, of course, how far this analogy can be extended. In this review, we have used arguments related to the natural environment of the monkey to guide the development of appropriate experimental paradigms, which have, in turn, resulted in novel perspectives on ACC function. However, while the monkey is essentially a foraging animal, one can wonder whether this qualification equally applies to humans. Even though our ancestors were hunter-gatherers, it is evident that our everyday lives now do not need to be so directly focused on competing with others in a search for nutrition and shelter. Nevertheless, it is important to keep in mind the individual ecological niche in which a species evolved (cf. Seth, in press). Indeed, most of the principles discussed in this article – of living in an uncertain, changeable world where our actions influence both the environment and others within it – remain pertinent to all mammalian decision making and as such need to be incorporated into our psychological theories.

What constitutes an error?

Since the approaches we have discussed here hinge strongly on the use of rewards and errors (or the lack of reinforcement) to guide action selection, it is also important to consider whether these concepts translate easily between humans and monkeys. One of the essential components that needs to be identified in any decision model is how the animal calculates the utility of any possible action; in other words, the currency used to compute the satisfaction of the reward gained given the particular situation in which it was obtained.

A first issue in this calculation is that the value of a rewarding stimulus is highly context dependent. To a hungry animal, for instance, locating a plentiful source of food may be the most important goal to be achieved, yet to a thirsty one it may come much lower down in the hierarchy of priorities. Furthermore, in some studies of response switching in monkeys (Shima & Tanji, 1998; Kennerley et al., 2006) and in comparable experiments in humans (Bush et al., 2002; Williams, Bush, Rauch, Cosgrove, & Eskandar, 2004), a requirement to alter the manner of responding need not be signaled by a complete failure to receive reward or by negative feedback, but can instead be indicated by a reduction in the magnitude of the expected outcome or an increased delay to its occurrence. Equally, in any task where the consequences of a choice are probabilistically determined, the absence of reward or positive feedback on any single trial need not imply that the chosen response was not optimal for the situation. Moreover, the value of a reward can be altered by the response costs, such as travel time and energy expenditure, that may have needed to be overcome to achieve that outcome (Walton, Kennerley, Bannerman, Phillips, & Rushworth, 2006; Platt, 2002; Kacelnik, 1997). It is therefore important to realize that what constitutes an error (or reward) can vary almost endlessly, as long as it has behavioral relevance to the task at hand.
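These ingredients of outcome valuation (motivational state, delay, and response cost) can be made concrete in a toy calculation. The Python function below is purely illustrative; the hyperbolic form of delay discounting and the particular parameters (`k`, `need`) are our assumptions for the sketch, not a model taken from any of the cited studies.

```python
def subjective_value(magnitude, delay=0.0, effort=0.0, k=0.5, need=1.0):
    """Illustrative utility: reward magnitude scaled by motivational
    state ('need'), hyperbolically discounted by delay, minus the
    effort cost of obtaining it. All parameters are hypothetical."""
    return need * magnitude / (1.0 + k * delay) - effort

# The same food reward can rank differently depending on context:
hungry = subjective_value(10, delay=2, effort=3, need=1.0)  # 1.0*10/2 - 3 = 2.0
sated = subjective_value(10, delay=2, effort=3, need=0.3)   # 0.3*10/2 - 3 = -1.5
```

With identical reward magnitude, delay, and effort, the "hungry" agent assigns the food source a positive net value while the "sated" agent assigns it a negative one, so the very same outcome can count as a success or as an error depending on context.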

Consistent with this notion, reward-related processing in both the monkey and the human ACC appears to be highly context-dependent, indicating that rewards and action-reward associations are represented in a manner suited to the current task circumstances (cf. Nieuwenhuis, Heslenfeld, Alting van Geusau, Mars, Holroyd, & Yeung, 2005). For instance, error-related activity in the human ACC is modulated by the importance of an error in a speeded-response task (Gehring et al., 1993), the frequency of errors (Holroyd, Larsen, & Cohen, 2004a), and the relevant stimulus dimensions (Nieuwenhuis, Yeung, Holroyd, Schurger, & Cohen, 2004).

A related question is whether it is appropriate to use the same terminology to describe the rewards and errors in both species. In all the monkey studies described here, the inappropriateness of behavior was signaled by a decrease or absence of food reward. Conversely, studies on humans have only rarely used primary reinforcers, preferring to employ secondary reinforcers such as money (e.g., Bush et al., 1998; Holroyd et al., 2003; Nieuwenhuis et al., 2004) or simply stimuli signaling ‘correct’ or ‘incorrect’ (e.g., Holroyd & Coles, 2002; Mars et al., 2005). Although it has been suggested that the ACC uses reward-related information across a wide variety of levels of information processing (Ullsperger, Volz, & Von Cramon, 2004), it is difficult to establish whether the different types of reinforcers are processed equally across species. For example, even though regions of the orbitofrontal cortex are known to respond comparably to both primary and secondary rewards (O'Doherty, Kringelbach, Rolls, Hornak, & Andrews, 2001; O'Doherty, Deichmann, Critchley, & Dolan, 2002), there are examples of different categories of reinforcers (e.g., juice versus monetary tokens) resulting in different patterns of brain activation during decision making (McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007).

Furthermore, this does not speak to the issue of how action-reward representations are established. The manner in which experimental subjects come to know these associations and rules differs greatly between monkey research, where the animal is usually taught the task over weeks or months of training by trial and error, and humans, who can rely on simple verbal instruction. Such differences in methodology, both in instruction and presentation of a task, can cause difficulties in interpretation and controversies when separate strands of research do not necessarily lead to consistent answers (cf. Kacelnik, 2003).

Different signals, different loci

As indicated when describing ACC anatomy, the monkey studies described in this paper either record from the ACS, predominantly within or rostral to CMAr, or study animals with lesions or inactivations of the ACS. The imaging studies we have described typically claim to image the human equivalent of this area, although this is hard to verify because of both the spatial smoothing applied to fMRI data and anatomical variability between individual subjects. Source localization of ERPs is even less accurate, although fMRI studies using the same paradigms as those employed in ERP studies have yielded activations in the expected areas (e.g., Ullsperger & Von Cramon, 2001; Mars et al., 2005).

A related point is that, by necessity, the signals recorded from the human ACC are the aggregate of the activity of many neurons (over several seconds, in the case of fMRI), whereas recordings in monkeys track the activity of individual neurons with millisecond time resolution. The latter variety of research has shown that multiple types of signals can be represented by ACC neurons simultaneously and separately during the course of a trial (e.g., Ito et al., 2003; Matsumoto, Matsumoto, Abe, & Tanaka, 2007). Although this does not mean that the conclusions drawn from the coarser signals recorded in humans are invalid, it does warrant caution when interpreting fMRI and ERP data, as some of the complexities of the response may be masked by temporal and spatial summation.

Finally, although monkey recording and lesion studies often focus on the ACC in isolation, the ACC is of course a node in a larger network of brain areas. As previously discussed, one factor that makes the ACC a prime candidate for integrating multiple types of information and biasing action selection is its extensive anatomical connectivity with prefrontal, limbic and motor regions, along with its prominent monoaminergic innervation. Cells in many interconnected regions appear to encode outcome information, whether in terms of the reward received or the difference between the expected and actual reinforcement, and it is clearly imperative for future research to resolve the role of the ACC within these extended networks. Indeed, we would contend that perhaps the best way of determining such functions will be within the framework we have proposed. The concept of ‘outcome value’ becomes increasingly rich in tasks in which animals have to weigh up the costs and benefits of a variety of alternatives, choose actions based on an extended history of reinforcement, the environmental context, and their internal motivational state, and consider choice strategies involving complex social interactions. This richness has already begun to provide important clues about the particular roles of the ACC and other regions in aspects of decision making (e.g., Sugrue, Corrado, & Newsome, 2004; Glimcher, 2003; Lee & Seo, in press).

Conclusion

We have argued that, in spite of some caveats and cautions, much insight into the role of the ACC in decision making may be gained by moving away from a reliance on static experimental environments to ones that mimic more closely the situations faced in the everyday world, where action selection is guided by context, outcomes are seldom certain and are liable to change over time, and choices are influenced by the expected behavior of other people. Humans are not monkeys, but there is an enormous amount of common evolutionary history between humans and monkeys that has shaped the primate brain. Moreover, there is a large degree of overlap between the issues considered by behavioral ecologists studying foraging decisions in animals and behavioral scientists researching human decision making. By finding these areas of common interest, it may be possible to formalize the questions of importance, leading to a common language that can inform experiments in both monkeys and humans.

The evidence from the history of research into monkey ACC is that both cell activity and lesion deficits can sometimes be slightly misleading markers of function and that moving towards using changeable, stochastic, and dynamic paradigms can result in more sophisticated descriptions of the ACC's role in decision making. Whether the same will be seen to be true for the human literature is an empirical question, but the ingredients are certainly comparable: for instance, the presence of ACC activity, as indexed by the ERN, in a variety of simple, static tasks seems to be somewhat at odds with evidence from patients with lesions to this region. It will be fascinating to observe over the coming years how our understanding of the ACC will evolve if we move towards designing our experiments to reflect the environment in which our brains have evolved.

Acknowledgements

M.E.W. is supported by the M.R.C. and the Wellcome Trust. R.B.M. is supported by the Wellcome Trust.

References

  1. Amiez C, Joseph JP, Procyk E. Reward encoding in the monkey anterior cingulate cortex. Cerebral Cortex. 2006;16:1040–1055. doi: 10.1093/cercor/bhj046. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Baird A, Dewar BK, Critchley H, Gilbert SJ, Dolan RJ, Cipolotti L. Cognitive function after medial frontal lobe damage including the anterior cingulate cortex: A preliminary investigation. Brain and Cognition. 2006;60:166–175. doi: 10.1016/j.bandc.2005.11.003. [DOI] [PubMed] [Google Scholar]
  3. Bates JF, Goldman-Rakic PS. Prefrontal connections of medial motor areas in the rhesus monkey. Journal of Comparative Neurology. 1993;336:211–228. doi: 10.1002/cne.903360205. [DOI] [PubMed] [Google Scholar]
  4. Berger B, Trottier S, Verney C, Gaspar P, Alvarez C. Regional and laminar distribution of the dopamine and serotonin innervation in the macaque cerebral cortex: A radioautographic study. Journal of Comparative Neurology. 1988;273:99–119. doi: 10.1002/cne.902730109. [DOI] [PubMed] [Google Scholar]
  5. Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. Conflict monitoring and cognitive control. Psychological Review. 2001;108:624–652. doi: 10.1037/0033-295x.108.3.624. [DOI] [PubMed] [Google Scholar]
  6. Bush G, Vogt BA, Holmes J, Dale AM, Greve D, Jenike MA, Rosen BR. Dorsal anterior cingulate cortex: A role in reward-based decision making. Proceedings of the National Academy of Sciences USA. 2002;99:523–528. doi: 10.1073/pnas.012470999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Charnov EL. Optimal foraging: The marginal value theorem. Theoretical Population Biology. 1976;9:129–136. doi: 10.1016/0040-5809(76)90040-x. [DOI] [PubMed] [Google Scholar]
  8. Cohen MX, Ranganath C. Reinforcement learning signals predict future decisions. Journal of Neuroscience. 2007;27:371–378. doi: 10.1523/JNEUROSCI.4421-06.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Coles MGH, Scheffers MK, Holroyd CB. Why is there an ERN/Ne on correct trials? Response representations, stimulus-related components, and the theory of error processing. Biological Psychology. 2001;56:173–189. doi: 10.1016/s0301-0511(01)00076-x. [DOI] [PubMed] [Google Scholar]
  10. Dum RP, Strick PL. Cingulate motor areas. In: Vogt BA, Gabriel M, editors. Neurobiology of cingulate cortex and limbic thalamus: A comprehensive handbook. Boston: Birkhauser; 1993. pp. 415–441. [Google Scholar]
  11. Falkenstein M, Hohnsbein J, Hoormann J, Blanke L. Effects of errors in choice reaction tasks on the ERP under focused and divided attention. In: Brunia CHM, Gaillard AWK, Kok A, editors. Psychophysiological brain research. Tilburg: Tilburg University Press; 1990. pp. 192–195. [Google Scholar]
  12. Fellows LK, Farah MJ. Is anterior cingulate cortex necessary for cognitive control? Brain. 2005;128:788–796. doi: 10.1093/brain/awh405. [DOI] [PubMed] [Google Scholar]
  13. Fretwell SD. Populations in a seasonal environment. Princeton: Princeton University Press; 1972. [PubMed] [Google Scholar]
  14. Gehring WJ, Goss B, Coles MGH, Meyer DE, Donchin E. A neural system for error detection and compensation. Psychological Science. 1993;4:385–390. [Google Scholar]
  15. Glimcher PW. Decisions, uncertainty, and the brain. The science of neuroeconomics. Cambridge: MIT Press; 2003. [Google Scholar]
  16. Hadland KA, Rushworth MFS, Gaffan D, Passingham RE. The anterior cingulate and reward-guided selection of actions. Journal of Neurophysiology. 2003;89:1161–1164. doi: 10.1152/jn.00634.2002. [DOI] [PubMed] [Google Scholar]
  17. Hatanaka N, Tokuno H, Hamada I, Inase M, Ito Y, Imanishi M, Hasegawa N, Akazawa T, Nambu A, Takada M. Thalamocortical and intracortical connections of monkey cingulate motor areas. Journal of Comparative Neurology. 2003;462:121–138. doi: 10.1002/cne.10720. [DOI] [PubMed] [Google Scholar]
  18. He SQ, Dum RP, Strick PL. Topographic organization of corticospinal projections from the frontal lobe: Motor areas on the medial surface of the hemisphere. Journal of Neuroscience. 1995;15:3284–3306. doi: 10.1523/JNEUROSCI.15-05-03284.1995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Herrnstein RJ. In: The Matching Law: Papers in psychology and economics. Rachlin H, Laibson DI, editors. Cambridge: Harvard University Press; 1997. [Google Scholar]
  20. Hester R, Silk TJ, Mattingley JB. Learning from errors: An event-related design comparing error-related neural activity prior to performance adaptation or continued failure (abstract) NeuroImage. 2006;31:S83. [Google Scholar]
  21. Hewig J, Trippe R, Hecht H, Coles MGH, Holroyd CB, Miltner WHR. Decision-making in blackjack: An electrophysiological analysis. Cerebral Cortex. 2007;17:865–877. doi: 10.1093/cercor/bhk040. [DOI] [PubMed] [Google Scholar]
  22. Holroyd CB, Coles MGH. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review. 2002;109:679–709. doi: 10.1037/0033-295X.109.4.679. [DOI] [PubMed] [Google Scholar]
  23. Holroyd CB, Coles MGH. Dorsal anterior cingulate cortex integrates reward history to guide voluntary behavior. 2007 doi: 10.1016/j.cortex.2007.08.013. Manuscript submitted for publication. [DOI] [PubMed] [Google Scholar]
  24. Holroyd CB, Larsen JT, Cohen JD. Context dependence of the event-related brain potential associated with reward and punishment. Psychophysiology. 2004a;41:45–53. doi: 10.1111/j.1469-8986.2004.00152.x. [DOI] [PubMed] [Google Scholar]
  25. Holroyd CB, Nieuwenhuis S, Mars RB, Coles MGH. Anterior cingulate cortex, selection for action, and error processing. In: Posner MI, editor. Cognitive neuroscience of attention. New York: Guildford Press; 2004b. pp. 219–231. [Google Scholar]
  26. Holroyd CB, Nieuwenhuis S, Yeung N, Nystrom L, Mars RB, Coles MGH, Cohen JD. Dorsal anterior cingulate cortex responds to internal and external error processing. Nature Neuroscience. 2004c;7:747–749. doi: 10.1038/nn1238. [DOI] [PubMed] [Google Scholar]
  27. Ito S, Stuphorn V, Brown JW, Schall JD. Performance monitoring by the anterior cingulate cortex during saccade countermanding. Science. 2003;302:120–122. doi: 10.1126/science.1087847. [DOI] [PubMed] [Google Scholar]
  28. Jueptner M, Frith CD, Brooks DJ, Frackowiak RSJ, Passingham RE. Anatomy of motor learning. II. Subcortical structures and learning by trial and error. Journal of Neurophysiology. 1997;77:1325–1337. doi: 10.1152/jn.1997.77.3.1325. [DOI] [PubMed] [Google Scholar]
  29. Jueptner M, Stephan KM, Frith CD, Brooks DJ, Frackowiak RSJ, Passingham RE. Anatomy of motor learning. I. Frontal cortex and attention to action. Journal of Neurophysiology. 1997;77:1313–1324. doi: 10.1152/jn.1997.77.3.1313. [DOI] [PubMed] [Google Scholar]
  30. Kacelnik A. Normative and descriptive models of decision making: Time discounting and risk sensitivity. Ciba Foundation Symposium. 1997;208:51–67. doi: 10.1002/9780470515372.ch5. discussion 67-70. [DOI] [PubMed] [Google Scholar]
  31. Kacelnik A. The evolution of patience. In: Loewenstein G, Read D, Baumeister R, editors. Time and decision: Economic and psychological perspectives on intertemporal choice. New York: Russell Sage Foundation; 2003. pp. 115–138. [Google Scholar]
  32. Kennerley SW, Walton ME, Behrens TEJ, Buckley MJ, Rushworth MFS. Optimal decision making and the anterior cingulate cortex. Nature Neuroscience. 2006;9:940–947. doi: 10.1038/nn1724. [DOI] [PubMed] [Google Scholar]
  33. Kunishio K, Haber SN. Primate cingulostriatal projection: Limbic striatal versus sensorimotor striatal input. Journal of Comparative Neurology. 1994;350:337–356. doi: 10.1002/cne.903500302. [DOI] [PubMed] [Google Scholar]
  34. Lee D, Seo H. Mechanisms of reinforcement learning and decision making in the primate dorsolateral prefrontal cortex. Annals of the New York Academy of Sciences. 2007 doi: 10.1196/annals.1390.007. [DOI] [PubMed] [Google Scholar]
  35. Mars RB, Coles MGH, Grol MJ, Holroyd CB, Nieuwenhuis S, Hulstijn W, Toni I. Neural dynamics of error processing in medial frontal cortex. NeuroImage. 2005;28:1007–1013. doi: 10.1016/j.neuroimage.2005.06.041. [DOI] [PubMed] [Google Scholar]
  36. Matsumoto M, Matsumoto K, Abe H, Tanaka K. Medial prefrontal cell activity signaling prediction errors of action values. Nature Neuroscience. 2007;10:647–656. doi: 10.1038/nn1890. [DOI] [PubMed] [Google Scholar]
  37. Matsumoto K, Suzuki W, Tanaka K. Neuronal correlates of goal-based motor selection in the prefrontal cortex. Science. 2003;301:229–232. doi: 10.1126/science.1084204. [DOI] [PubMed] [Google Scholar]
  38. McClure SM, Ericson KM, Laibson DI, Loewenstein G, Cohen JD. Time discounting for primary rewards. Journal of Neuroscience. 2007;27:5796–5804. doi: 10.1523/JNEUROSCI.4246-06.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Meunier M, Bachevalier J, Mishkin M. Effects of orbital frontal and anterior cingulate lesions on object and spatial memory in rhesus monkeys. Neuropsychologia. 1997;35:999–1015. doi: 10.1016/s0028-3932(97)00027-4. [DOI] [PubMed] [Google Scholar]
  40. Miltner WHR, Brauer J, Hecht H, Trippe R, Coles MGH. Parallel brain activity for self-generated and observed errors. In: Ullsperger M, Falkenstein M, editors. Errors, conflicts, and the brain. Current opinions on performance monitoring. Leipzig: MPI of Cognitive Neuroscience; 2004. pp. 124–129. [Google Scholar]
  41. Miltner WHR, Braun CH, Coles MGH. Event-related brain potentials following incorrect feedback in a time-estimation task: Evidence for a “generic” neural system for error detection. Journal of Cognitive Neuroscience. 1997;9:788–798. doi: 10.1162/jocn.1997.9.6.788. [DOI] [PubMed] [Google Scholar]
  42. Morecraft RJ, Van Hoesen GW. Convergence of limbic input to the cingulate motor cortex in the rhesus monkey. Brain Research Bulletin. 1998;45:209–232. doi: 10.1016/s0361-9230(97)00344-4. [DOI] [PubMed] [Google Scholar]
  43. Murray EA, Davidson M, Gaffan D, Olton DS, Suomi S. Effects of fornix transection and cingulate cortical ablation on spatial memory in rhesus monkeys. Experimental Brain Research. 1989;74:192–186. doi: 10.1007/BF00248291. [DOI] [PubMed] [Google Scholar]
  44. Nakamura K, Roesch MR, Olson CR. Neuronal activity in macaque SEF and ACC during performance of tasks involving conflict. Journal of Neurophysiology. 2005;93:884–908. doi: 10.1152/jn.00305.2004. [DOI] [PubMed] [Google Scholar]
  45. Nieuwenhuis S, Heslenfeld DJ, Alting van Geusau NJ, Mars RB, Holroyd CB, Yeung N. Activity in human reward-sensitive brain areas is strongly context-dependent. NeuroImage. 2005;25:1302–1309. doi: 10.1016/j.neuroimage.2004.12.043. [DOI] [PubMed] [Google Scholar]
  46. Nieuwenhuis S, Ridderinkhof KR, Talsma D, Coles MGH, Holroyd CB, Kok A, Van der Molen MW. A computational account of altered error processing in older age: Dopamine and the error-related negativity. Cognitive, Affective, and Behavioral Neuroscience. 2002;2:19–36. doi: 10.3758/cabn.2.1.19. [DOI] [PubMed] [Google Scholar]
  47. Nieuwenhuis S, Yeung N, Holroyd CB, Schurger A, Cohen JD. Sensitivity of electrophysiological activity to utilitarian and performance feedback. Cerebral Cortex. 2004;14:741–747. doi: 10.1093/cercor/bhh034. [DOI] [PubMed] [Google Scholar]
  48. Niki H, Watanabe M. Cingulate unit activity and delayed response. Brain Research. 1976;110:381–381. doi: 10.1016/0006-8993(76)90412-1. [DOI] [PubMed] [Google Scholar]
  49. Niki H, Watanabe M. Prefrontal and cingulate unit activity during timing behavior in the monkey. Brain Research. 1979;171:213–224. doi: 10.1016/0006-8993(79)90328-7. [DOI] [PubMed] [Google Scholar]
  50. Nishijo H, Yamamoto Y, Ono T, Uwano T, Yamashita J, Yamashima T. Single neuron responses in the monkey anterior cingulate cortex during visual discrimination. Neuroscience Letters. 1997;227:79–82. doi: 10.1016/s0304-3940(97)00310-8. [DOI] [PubMed] [Google Scholar]
  51. O'Doherty JP, Deichmann R, Critchley HD, Dolan RJ. Neural responses during anticipation of a primary taste reward. Neuron. 2002;33:815–826. doi: 10.1016/s0896-6273(02)00603-7. [DOI] [PubMed] [Google Scholar]
  52. O'Doherty JP, Kringelbach ML, Rolls ET, Hornak J, Andrews C. Abstract reward and punishment representations in the human orbitofrontal cortex. Nature Neuroscience. 2001;4:95–102. doi: 10.1038/82959. [DOI] [PubMed] [Google Scholar]
  53. Passingham RE. The frontal lobes and voluntary action. Oxford: Oxford University Press; 1993. [Google Scholar]
  54. Paus T. Primate anterior cingulate cortex: Where motor control, drive and cognition interface. Nature Reviews Neuroscience. 2001;2:417–424. doi: 10.1038/35077500. [DOI] [PubMed] [Google Scholar]
  55. Picard N, Strick PL. Motor areas of the medial wall: A review of their location and functional significance. Cerebral Cortex. 1996;6:342–353. doi: 10.1093/cercor/6.3.342. [DOI] [PubMed] [Google Scholar]
  56. Platt ML. Neural correlates of decisions. Current Opinion in Neurobiology. 2002;12:141–148. doi: 10.1016/s0959-4388(02)00302-1. [DOI] [PubMed] [Google Scholar]
  57. Posner MI, DiGirolamo GJ. Executive attention: Conflict, target detection, and cognitive control. In: Parasuraman R, editor. The attentive brain. Cambridge: MIT Press; 1998. pp. 401–423. [Google Scholar]
  58. Posner MI, Petersen SE, Fox PT, Raichle ME. Localization of cognitive operations in the human brain. Science. 1988;240:1627–1631. doi: 10.1126/science.3289116. [DOI] [PubMed] [Google Scholar]
  59. Pribram KH, Fulton JF. An experimental critique of the effects of anterior cingulate ablations in monkey. Brain. 1954;77:34–44. doi: 10.1093/brain/77.1.34. [DOI] [PubMed] [Google Scholar]
  60. Procyk E, Tanaka YL, Joseph JP. Anterior cingulate activity during routine and non-routine sequential behaviors in monkeys. Nature Neuroscience. 2000;3:502–508. doi: 10.1038/74880. [DOI] [PubMed] [Google Scholar]
  61. Ridderinkhof KR, Nieuwenhuis S, Crone EA, Ullsperger M. The role of the medial frontal cortex in cognitive control. Science. 2004;306:443–447. doi: 10.1126/science.1100301. [DOI] [PubMed] [Google Scholar]
  62. Rushworth MFS, Hadland KA, Gaffan D, Passingham RE. The effect of cingulate cortex lesions on task switching and working memory. Journal of Cognitive Neuroscience. 2003;15:338–353. doi: 10.1162/089892903321593072. [DOI] [PubMed] [Google Scholar]
  63. Rushworth MFS, Walton ME, Kennerley SW, Bannerman DM. Action sets and decisions in medial frontal cortex. Trends in Cognitive Sciences. 2004;8:410–417. doi: 10.1016/j.tics.2004.07.009. [DOI] [PubMed] [Google Scholar]
  64. Seth AK. The ecology of action selection: Insights from artificial life. Philosophical Transactions of the Royal Society B. doi: 10.1098/rstb.2007.2052. in press. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Shidara M, Richmond BJ. Anterior cingulate: Single neuronal signals related to degree of reward expectancy. Science. 2002;296:1709–1711. doi: 10.1126/science.1069504. [DOI] [PubMed] [Google Scholar]
  66. Shima K, Tanji J. Role for cingulate motor area cells in voluntary movement selection based on reward. Science. 1998;282:1335–1338. doi: 10.1126/science.282.5392.1335. [DOI] [PubMed] [Google Scholar]
  67. Stephens DW, Krebs JR. Foraging theory. Princeton: Princeton University Press; 1986. [Google Scholar]
  68. Sugrue LP, Corrado GS, Newsome WT. Matching behavior and the representation of value in the parietal cortex. Science. 2004;304:1782–1787. doi: 10.1126/science.1094765. [DOI] [PubMed] [Google Scholar]
  69. Swick D, Jovanovic J. Anterior cingulate cortex and the Stroop task: Neuropsychological evidence for topographic specificity. Neuropsychologia. 2002;40:1240–1253. doi: 10.1016/s0028-3932(01)00226-3. [DOI] [PubMed] [Google Scholar]
  70. Takada M, Tokuno H, Hamada I, Inase M, Ito Y, Imanishi M, Hasegawa N, Akazawa T, Hatanaka N, Nambu A. Organization of inputs from cingulate motor areas to basal ganglia in macaque monkey. European Journal of Neuroscience. 2001;14:1633–1650. doi: 10.1046/j.0953-816x.2001.01789.x. [DOI] [PubMed] [Google Scholar]
  71. Turken AU, Swick D. Response selection in the human anterior cingulate cortex. Nature Neuroscience. 1999;2:920–924. doi: 10.1038/13224. [DOI] [PubMed] [Google Scholar]
  72. Ullsperger M, Von Cramon DY. Subprocesses of performance monitoring: A dissociation of error processing and response competition revealed by event-related fMRI and ERPs. NeuroImage. 2001;14:1387–1401. doi: 10.1006/nimg.2001.0935. [DOI] [PubMed] [Google Scholar]
  73. Ullsperger M, Volz KG, Von Cramon DY. A common neural system signaling the need for behavioral changes. Trends in Cognitive Sciences. 2004;8:445–446. doi: 10.1016/j.tics.2004.08.013. [DOI] [PubMed] [Google Scholar]
  74. Van Hoesen GW, Morecraft RJ, Vogt BA. Connections of the monkey cingulate cortex. In: Vogt BA, Gabriel M, editors. Neurobiology of cingulate cortex and limic thalamus: A comprehensive handbook. Boston: Birkhauser; 1993. pp. 249–283. [Google Scholar]
  75. Van Schie HT, Mars RB, Coles MGH, Bekkering H. Modulation of activity in medial frontal and motor cortices during error observation. Nature Neuroscience. 2004;7:549–554. doi: 10.1038/nn1239. [DOI] [PubMed] [Google Scholar]
  76. Walker M, Wooders J. Minimax play at Wimbledon. American Economic Review. 2001;91:1521–1538. [Google Scholar]
  77. Walton ME, Devlin JT, Rushworth MFS. Interactions between decision making and performance monitoring within prefrontal cortex. Nature Neuroscience. 2004;7:1259–1265. doi: 10.1038/nn1339. [DOI] [PubMed] [Google Scholar]
  78. Walton ME, Kennerley SW, Bannerman DM, Phillips PEM, Rushworth MFS. Weighing up the benefits of work: Behavioral and neural analyses of effort-related decision making. Neural Networks. 2006;19:1302–1314. doi: 10.1016/j.neunet.2006.03.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Wang Y, Shima K, Sawamura H, Tanji J. Spatial distribution of cingulate cells projecting to the primary, supplementary, and pre-supplementary motor areas: A retrograde multiple labeling study in the macaque monkey. Neuroscience Research. 2001;39:39–49. doi: 10.1016/s0168-0102(00)00198-x. [DOI] [PubMed] [Google Scholar]
  80. Williams SM, Goldman-Rakic PS. Widespread origin of the primate mesofrontal dopamine system. Cerebral Cortex. 1998;8:321–345. doi: 10.1093/cercor/8.4.321. [DOI] [PubMed] [Google Scholar]
  81. Williams ZM, Bush G, Rauch SL, Cosgrove GR, Eskandar EN. Human anterior cingulate neurons and the integration of monetary reward with motor responses. Nature Neuroscience. 2004;7:1370–1375. doi: 10.1038/nn1354. [DOI] [PubMed] [Google Scholar]
