Author manuscript; available in PMC: 2016 Feb 23.
Published in final edited form as: J Physiol Paris. 2015 Feb 23;109(0):118–128. doi: 10.1016/j.jphysparis.2015.02.002

The role of supplementary eye field in goal-directed behavior

Veit Stuphorn 1
PMCID: PMC4441541  NIHMSID: NIHMS666698  PMID: 25720602

Abstract

The medial frontal cortex has been suggested to play a role in the control, monitoring, and selection of behavior. The supplementary eye field (SEF) is a cortical area within medial frontal cortex that is involved in the regulation of eye movements. Neurophysiological studies in the SEF of macaque monkeys have systematically investigated the role of SEF in various behavioral control and monitoring functions. Inhibitory control studies indicate that SEF neurons do not directly participate in the initiation of eye movements. Instead, recent value-based decision making studies suggest that the SEF participates in the control of eye movements by representing the context-dependent action values of all currently possible oculomotor behaviors. These action value signals in SEF would be useful in directing the activity distribution in more primary oculomotor areas to guide decisions towards behaviorally optimal choices. SEF also does not participate in the fast, inhibitory control of eye movements in response to sudden changes in the task requirements. Instead, it participates in the long-term regulation of oculomotor excitability to adjust the speed-accuracy tradeoff. The context-dependent control signals found in SEF (including the action value signals) have to be learned and continuously adjusted in response to changes in the environment. This is likely the function of the large number of different response monitoring and evaluation signals in SEF. In conclusion, the overall function of SEF in goal-directed behavior seems to be the learning of context-dependent rules that allow the agent to predict the likely consequences of different eye movements. This map of action value signals could be used to select the eye movements that best fulfill the agent's current long-term goals.

Keywords: frontal cortex, decision making, control, evaluation, primate

1. Introduction

Voluntary or goal-directed behavior requires the ability to flexibly choose a behavioral response that fits the current overall goal of the organism and the current state of the environment. However, the state of the environment can change and actions might have an unanticipated effect. Accordingly, the organism has to constantly monitor the outcome of each action. If the selected responses do not match the requirements of the environment or the overall behavioral goal, the selection process needs to be adjusted appropriately. Thus, goal-directed behavior requires the ability to freely select behavior, to monitor and evaluate behavior, and to control behavior. All three of these functions are necessary for successful goal-directed behavior.

In primates, a large neuronal network is involved in generating goal-directed behavior, including frontal and parietal cortical regions, as well as a number of subcortical nuclei that are connected with these cortical areas. Among these different brain regions, the medial frontal cortex (MFC) has long been suggested to be of central importance for response selection, evaluation, and control. MFC comprises a number of different areas, including a group of cortical areas that have long been recognized as playing a role in higher-order motor control. These areas are the anterior cingulate cortex (ACC), the pre-supplementary motor area (pre-SMA), the supplementary motor area (SMA), and the supplementary eye field (SEF). These motor-related areas contain neurons that are active during movements of various body parts. Some of these areas, such as the ACC and the pre-SMA, seem to be involved in the control of many different types of motor responses (Sumner et al., 2007). In contrast, the SMA seems to be specialized in the control of skeletomotor movements, such as movements of the arm and the hand (Fujii et al., 2002). The pre-SMA and SMA, which are reciprocally connected, differ in their connectivity, with pre-SMA connected to prefrontal cortex but not motor regions, and SMA to motor regions but not prefrontal cortex (Johansen-Berg et al., 2004; Luppino et al., 1991; Tanji, 1996).

The SEF is a region adjacent to the SMA that can be seen as an oculomotor extension of the SMA. SEF has been suggested to be involved in the supervisory control of eye movements (Stuphorn and Schall, 2002). SEF has appropriate connections for such a role, since it receives input from areas that represent value, such as the OFC and the amygdala, and from regions that represent contextual cognitive signals, such as dorsolateral prefrontal cortex (Ghashghaei et al., 2007; Huerta and Kaas, 1990). Through these inputs, SEF receives information about the state of the environment, including the presence of possible behavioral goals, and the set of behavioral rules that currently determine the relationship between actions and their outcomes. SEF projects in turn to oculomotor areas, such as frontal eye field (FEF), lateral intraparietal (LIP) cortex, and superior colliculus (SC) (Huerta and Kaas, 1990). This set of anatomical connections indicates that SEF is in a unique position to control and regulate the selection and generation of oculomotor behavior.

2. Experimental evidence for the function of SEF

The principles governing goal-directed behavior seem to be similar across different effector systems (Stuphorn and Schall, 2002). Understanding the role of SEF in the control of goal-directed eye movements might therefore allow general insights in the role of MFC in the control of more complex behavior. With this goal in mind, a series of experiments was performed to study the function of SEF in the selection, evaluation, and control of saccadic eye movements. In the following, we will first describe the results of these different experiments, before we discuss the relationship between these different functions and the overall role of SEF (and MFC in general) in the guidance of behavior.

2.1. The role of SEF in the selection of behavior

Decision-making consists of selecting a behavioral response that will lead to the best possible outcome, given the current context, the state of the environment, and the momentary goal of the decision-maker. Our understanding of the neural processes underlying this selection process is most advanced in the case of perceptual decisions, which are driven by external sensory stimuli (Gold and Shadlen, 2007). In contrast, value-based decisions, which are based on internal subjective value estimations, are less well understood.

Value-based decision making is the process of selecting an action among several alternatives based on the value of their expected outcomes. This requires the brain to first estimate the value of the outcome of each possible response, and then to select one of them on the basis of those value estimates (Balleine and Dickinson, 1998; Daw et al., 2005; Rangel et al., 2008; Yin and Knowlton, 2006). A number of recent studies have found neural responses that are correlated with some form of value signal in a variety of brain regions (Kable and Glimcher, 2009; Padoa-Schioppa, 2011; Rangel et al., 2008; Vickery et al., 2011). Neurons in the orbitofrontal cortex (OFC) and amygdala encode the value of different goals (Bermudez and Schultz, 2010; Grabenhorst et al., 2012; Padoa-Schioppa, 2011; Plassmann et al., 2007; Plassmann et al., 2010). These value signals respond to the presence of particular sensory stimuli (typically abstract visual cues) that are associated with particular outcomes. These ‘option value’ signals represent predictions of possible future states of the world, but they typically do not represent the actions that would lead to these future states. To select an appropriate response, it is therefore necessary that the option value signals are combined with the specific motor signals necessary to obtain the option. This type of value signal is known as ‘action value’ and encodes the reward amount that is expected to follow from a particular behavioral response (which could include no overt motor response at all) (Samejima et al., 2005; Lau and Glimcher, 2008). Action value signals underlie the selection of the response that is expected to lead to the highest momentary available value. Where and how this translation from option to action value signals is achieved is still debated (Cisek, 2012). 
As described earlier, anatomical evidence suggests that the supplementary eye field (SEF) might participate in the process of value-based decision making in the case of eye movements (Lau and Glimcher, 2008; Shook et al., 1991). In this review, the use of the terms ‘option value’ and ‘action value’ stresses the functional difference between value signals that refer to objects or goods by themselves and value signals that refer to the expected outcome of an action. It is important to keep in mind that both of these types of signals can be further distinguished along a second functional dimension. During decision making one can distinguish signals that encode the value of available options or possible actions, independent of whether they are chosen or not. These signals can be thought of as the input signals into the decision process. On the other hand, there are signals that encode the outcome of the decision process, the chosen option or action. Signals encoding the value of all available options, as well as the value of the chosen option, have been found in OFC (Padoa-Schioppa and Assad, 2006). Signals encoding the value of the available and of the chosen actions have been found in the caudate (Lau and Glimcher, 2008).

We have begun to examine the role of SEF in a common form of value-based decision making, namely decisions under risk (So and Stuphorn, 2010). In such risky decisions the consequences of an action are not certain and can lead to outcomes of high or low value. The estimation of the overall value of an action therefore depends on the integration of the anticipated value of the possible outcomes weighted by their probability. Thus, for a risk-neutral decision-maker the subjective value of a gamble is equal to its average payoff, known as its expected value. However, the subjective attitude of a decision maker to risk can greatly modulate the subjective value of a gamble. For example, fear of an unfavorable outcome can lead to risk-averse decisions, so that the subjective value of a gamble is lower than its expected value. On the other hand, excitement about a favorable outcome can lead to risk-seeking behavior, so that the subjective value of a gamble is higher than its expected value.
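The relationship between expected value and subjective value described above can be sketched numerically. The power-law utility function below is an illustrative assumption, not the model fit in So and Stuphorn (2010); the gamble parameters only mirror the task's reward range.

```python
# Expected vs. subjective value of a gamble, following the certainty-
# equivalent logic used behaviorally: find the sure amount judged equal
# to the gamble. The utility u(x) = x**risk_coeff is a toy assumption.

def expected_value(outcomes, probs):
    """Average payoff: outcomes weighted by their probabilities."""
    return sum(o * p for o, p in zip(outcomes, probs))

def certainty_equivalent(outcomes, probs, risk_coeff=1.0):
    """Sure amount equivalent to the gamble under u(x) = x**risk_coeff.
    risk_coeff > 1 (convex utility) makes the gamble worth more than its
    expected value, i.e. risk-seeking behavior; risk_coeff < 1 yields
    risk aversion; risk_coeff = 1 is risk-neutral."""
    eu = sum((o ** risk_coeff) * p for o, p in zip(outcomes, probs))
    return eu ** (1.0 / risk_coeff)

# Gamble: 50% chance of 7 units of water, 50% chance of 1 unit (1 unit = 30 ul)
ev = expected_value([7, 1], [0.5, 0.5])                       # 4.0 units
ce = certainty_equivalent([7, 1], [0.5, 0.5], risk_coeff=1.5)
# ce > ev: the risk-seeking pattern observed in the monkeys
```

With a convex utility the certainty equivalent exceeds the expected value, reproducing the finding that the monkeys' subjective values of the gambles were higher than their expected values.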

In our gambling task, monkeys choose between a variety of visual target cues associated with different reward values, due to different combinations of reward amounts and probabilities (Figure 1A). In choice trials, we always presented a target that was associated with a gamble option and a target associated with a sure option. The sure options resulted in a certain reward amount, while the gamble options led with varying probabilities to one of two possible reward amounts. The monkeys chose between the gamble and sure option by shifting their gaze to the preferred target. We compared each gamble option systematically with four different sure options that ranged in value from the minimum to the maximum outcome of the gamble and found the amount of sure value that was equivalent to the value of the gamble. This allowed us to estimate the subjective value of gambles. In all monkeys tested with our gambling task, the subjective value of the gambles was higher than their expected value, i.e., the monkeys behaved in a risk-seeking fashion (So and Stuphorn, 2010). This is in agreement with the behavioral findings in other studies that tested monkeys’ preferences with respect to uncertain reward options (Hayden et al., 2008; McCoy and Platt, 2005; O’Neill and Schultz, 2010).

Figure 1. The role of SEF in the selection of behavior.

Figure 1

(A) The oculomotor gambling task. (left) The sequence of events during choice and no-choice trials in the gambling task. (right) Visual cues used in the gambling task: left, sure options; right, gamble options. We designed a system of color cues to explicitly indicate to the monkeys the reward amounts and probabilities associated with a particular target (Figure 1A, right). Seven different colors indicated seven reward amounts (increasing from 1 to 7 units of water, where 1 unit equaled 30 μl). Targets indicating a sure option consisted of only one color. Targets indicating a gamble option consisted of two colors corresponding to the two possible reward amounts. The portion of a color within the target corresponded to the probability of receiving that reward amount. (B) Three SEF neurons with different degrees of value and direction-selectivity. The spike density histograms show activity during saccades in four directions to sure option targets, which yielded small rewards (30–60 μl; light grey), medium-sized rewards (90–150 μl; dark grey), and large rewards (180–210 μl; black). (top): A neuron representing exclusively value (i.e., option value signal). (middle): A neuron representing both direction and value (i.e., action value signal). (bottom): A neuron representing exclusively direction (i.e., motor signal). (C) Temporal dynamics of the mean information about value and direction carried by different groups of SEF neurons. The red line represents value, and the blue line represents direction information. Dotted lines indicate onset of information accumulation. Please note that value information is computed based on both sure and gamble targets. (Modified after So and Stuphorn, 2010)

Next, we investigated the effect of subjective value and saccade direction on neuronal activity in the SEF. We concentrated first on activity in no-choice trials. This simplified the analysis, because here the neural activity reflected only the influence of a single target. However, since no choice was necessary or possible on these trials, we cannot distinguish signals that encode the value of a possible action from signals encoding the value of a chosen action. Accordingly, the data presented in So and Stuphorn (2010) are most relevant with respect to the transformation between option and action values. However, preliminary data in further studies of these SEF neurons indicate that they encode early in the trial the value of possible actions and then later in the trial the value of the chosen action (Chen et al., 2012).

We found that SEF neurons represent three different types of signals (So and Stuphorn, 2010). The first group of neurons encoded the value of reward options, independent of their location in the visual field and the saccade direction necessary to obtain it (Figure 1B, top). Such option value signals are similar to signals found in the OFC. The second group of neurons combined information about the value of an option with information about the direction of the saccade necessary to obtain the reward (Figure 1B, middle). Such action value signals are of central importance in selecting an appropriate action during value-based decision-making (Kable and Glimcher, 2009; Rangel et al., 2008; Uchida et al., 2007) and have also been reported in the human SEF and SMA (Wunderlich et al., 2009). Action value signals encode the value of the consequences that should result from acting in a particular fashion. They represent therefore the amount of evidence or the reason for choosing a specific action. By comparing the strength of action value signals, the action can be selected that is supported by the strongest expectation of a rewarding outcome. Option and action value signals both represented the subjective value of the reward options, not their expected value (So and Stuphorn, 2010). The third group of neurons only carried eye movement related signals (Figure 1B, bottom).

Information about saccade direction in SEF developed later than the information about value (with a delay of about 60 ms) (Figure 1C). This temporal succession of value- to motor-related signals suggests that SEF serves as a bridge between the value and the motor systems. The SEF is part of a larger network of brain areas that is active during value-based decision making (Kable and Glimcher, 2009). Shortly after a cue indicates an available reward, neuronal signals in the OFC, ACC, and amygdala reflect the value of the cue (Matsumoto et al., 2003; Padoa-Schioppa and Assad, 2006; Paton et al., 2006; Roesch and Olson, 2004; Wallis and Miller, 2003). The option value signal in the SEF likely reflects input from these areas, which all project to the SEF. In this context, it is noteworthy that during the decision period the SEF neurons reflected the monkeys’ risk-attitude in their value signal, but we did not find neurons that explicitly reflected expected reward amounts or risk. This indicates that the integration of information about possible outcomes and their probabilities likely occurs outside of SEF, for example in OFC (O’Neill and Schultz, 2010), while SEF reflects the value estimate, the outcome of this process.

The transformation from option to action value signals requires the combination of option value signals with spatial sensory information about the location of the cues that indicate the available reward. The brain areas that encode option value are connected to a number of other brain regions, besides SEF. SEF is therefore likely not the only cortical area that encodes the action value of eye movements. However, neurons in SEF become active before value-based saccades, much earlier than neurons in FEF and LIP (Coe et al., 2002).

These findings suggested that SEF participates in value-based decision making by computing an action value map of the existing saccade options. Competition within this map could select the action associated with the highest value, which in turn could be used to guide the selection and execution of the appropriate eye movement. However, the activity of saccade-related neurons in SEF is insufficient to trigger saccades; most continued to reflect value even at the moment of saccade initiation. This was equally true for neurons carrying motor signals as for those carrying either option value or action value signals. Thus, SEF neurons are not well-suited to control the final commitment to perform a specific saccade. This finding fits with other experiments using a stop signal task (Stuphorn et al., 2010), as described later. Together, these findings support the notion that there is a fundamental difference in the functional role of SEF and other oculomotor areas in motor control.

In sum, our findings support the hypothesis that SEF participates in value-based decision-making, not by initiating the final saccade motor command, but by transforming subjective value assessments of reward options in the environment (i.e. option value signals) into the motivational drive to act on the reward option (i.e. action value signals) (So and Stuphorn, 2010). These different action value signals in SEF could compete among each other and bias the saccade selection in other oculomotor areas that can initiate the final motor command.

2.2. The role of SEF in the evaluation of behavior

In most real-life decisions, the outcomes of actions are uncertain or can change over time. The values assigned to particular options or actions are therefore only approximations and need to be continually updated in an ongoing learning process. This updating process requires a system that can evaluate the outcome of actions that were taken by comparing the expected with the actual outcome. SEF neurons are known to carry signals related to response evaluation. For example, they respond to the anticipation and delivery of reward (Amador et al., 2000; Stuphorn et al., 2000), as well as to errors and response conflict (Nakamura et al., 2005; Stuphorn et al., 2000). Thus, the SEF contains in the same cortical area both action value signals and evaluative signals that could be used to update the action value signals.

To better understand the role of SEF neurons in evaluating value-based decision-making, we studied their responses after a choice in the oculomotor gambling task (Figure 1A). The evaluation process is often described in terms of reinforcement learning models (Pearce and Hall, 1980; Rescorla and Wagner, 1972). A key element in many of these models is the ‘reward prediction error’, i.e. the difference between the expected and the actual reward outcome (Sutton and Barto, 1998). The reward prediction error reflects both the direction (valence) and magnitude (salience) of the mismatch between expected and actual outcome. Although many studies reported reward prediction error-related signals in various brain areas, it is still unclear where these evaluative signals originate. To compute a reward prediction error signal, it is necessary to represent both its precursor signals, i.e., the expected and the actual value of the outcome. The oculomotor gambling task allowed us to differentiate between these two signals by analyzing neural activity in two different time periods following the choice. During the initial delay period, when the outcome of the choice is still unknown, the expected value of the chosen action should be represented. After the outcome of the choice has been revealed, the actual value should be represented and possibly the comparison of the two value signals.

We found indeed that SEF neurons carry various monitoring signals throughout the delay and result period (So and Stuphorn, 2012). In particular, SEF neurons represent the expected value of the chosen option throughout the delay period (Figure 2A) and the actual reward during the result period (Figure 2B, top). During the result period, we also observed SEF neurons that represented a reward prediction error signal, i.e., the comparison of the other two value signals (Figure 2C). The signed reward prediction error (RPES) signal reflected both valence and salience of the outcome. Such a RPES signal is equivalent to the teaching signal that is predicted in the Rescorla-Wagner model of reinforcement learning (Rescorla and Wagner, 1972), and is similar to the well-known signal carried by midbrain dopamine and habenular neurons (Matsumoto and Hikosaka, 2007; Schultz et al., 1997). Thus, our findings suggest that SEF could compute a reward prediction error signal using locally represented signals about expected and actual reward.
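The Rescorla-Wagner-style computation referenced above can be sketched as a minimal update rule. The learning rate and value numbers below are illustrative assumptions, not parameters estimated from the SEF data.

```python
# Minimal Rescorla-Wagner-style value update driven by a signed reward
# prediction error (RPE). The RPE's sign carries the valence of the
# outcome; its magnitude carries the salience.

def rw_update(value, reward, alpha=0.1):
    """Return (new_value, rpe) after one outcome.
    rpe = reward - value; the value moves a fraction alpha toward
    the observed reward, as in the Rescorla-Wagner model."""
    rpe = reward - value
    return value + alpha * rpe, rpe

v = 4.0                                   # expected value of the chosen gamble
v, rpe_win = rw_update(v, reward=7.0)     # better than expected: positive RPE
v, rpe_loss = rw_update(v, reward=1.0)    # worse than expected: negative RPE
```

In this scheme the signed RPE is exactly the teaching signal of the model: a string of wins drives the stored action value up, a string of losses drives it down, keeping the value map calibrated to the environment.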

Figure 2. The role of SEF in the evaluation of behavior.

Figure 2

(A) An example neuron carrying a value signal during both the early (left) and late (right) delay period. In the spike density histograms, trials are sorted into three groups based on their chosen option value (black: high value, dark gray: medium value, light gray: low value). Upper row represents the neuronal activities in gamble option trials, and lower row represents the neuronal activities in sure option trials. The regression plots to the right of each histogram display the best models (lines) and the mean neuronal activities (dots) during the time periods indicated by the shaded areas in the histograms. Red lines and dots represent gamble option trials, while blue lines and dots describe sure option trials. Error bars represent SEM. (B) Single neuron examples representing reward amount during result period. Neuronal activities are sorted by the absolute reward amount (1 (left), 4 (center), and 7 (right) units of reward) and by context (black for loss; red for win). Examples from three different types of reward amount-representing signals are shown. Reward signal reflects the absolute reward amount (top), while Win (middle) and Loss signal (bottom) reflect the different context of reward, winning and losing, respectively. Best regression models, along with the mean neuronal activities, are plotted against the absolute reward amount for each cell, to the right side of the spike density histograms. Error bars represent SEM. (C) Example neurons carrying different types of reward prediction error (RPE) signals, representing signed (RPES), unsigned (RPEUS), win-exclusive (RPEW), and loss-exclusive RPE (RPEL). Best regression models for each neuron (lines), along with the mean neuronal activities (circles for low value gambles; triangles for high value gambles), are plotted against the reward prediction error. The best-fitting model of some neurons includes other variables, such as the reward amount, in addition to the RPE indicated on the x-axis. 
In case the best regression model identified an additional modulation by reward amount, a dotted line represents either winning (red dotted line) or losing (black dotted line) for low value gambles, while a solid line represents high value gamble results. When there is no additional modulation by reward amount, a single solid line describes both cases. Error bars represent SEM. (Modified after So and Stuphorn, 2012)

We concentrated here on the neural activity in SEF that is relevant for the computation of reward prediction error signals, but we found many other monitoring signals as well. During the delay period, one group of SEF neurons encoded the type of reward option (gamble/sure) in a binary fashion, while another group encoded expected value contingent on reward option type (i.e., chosen value exclusively for gamble or sure choices). During the result period, some SEF neurons represented the outcome in relative (Win or Loss) rather than in absolute terms (Figure 2B, middle & bottom). This neuronal activity reflects only the valence of the outcome, but not its salience. Such a signal also represents an outcome evaluation signal, but it clearly contains less information than the RPES signal.

In addition to the RPES signal, we also found a number of other reward prediction error-like signals during the result period (Figure 2C). The unsigned reward prediction error (RPEUS) signal represents only the salience of a prediction error, without regard to its valence. In the Pearce-Hall model of reinforcement learning, the salience signal controls the amount of attention that is paid to a task event and thus indirectly the amount of learning (Pearce and Hall, 1980). The role of the SEF in attention is not very well understood, but SEF is known to be active in attentional tasks (Kastner et al., 1999; Purcell et al., 2012). It is therefore possible that the valence-independent salience signals in the SEF serve to guide attention towards motivationally important events. In addition, many SEF neurons encoded reward prediction error selectively for outcomes of a specific valence (i.e., win or loss: RPEW and RPEL signals). These valence-dependent RPE representations might act in an opponent manner and could be used to directly adjust action value representations in SEF (Seo and Lee, 2009; So and Stuphorn, 2010).
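The four RPE-type signals described in this section (RPES, RPEUS, RPEW, RPEL) can all be derived from a single signed prediction error, as this illustrative sketch shows:

```python
# Decomposition of one signed reward prediction error into the four
# signal types reported in SEF; the decomposition itself is a sketch,
# not a claim about how SEF neurons actually compute these signals.

def rpe_signals(expected, actual):
    """Return (signed, unsigned, win-exclusive, loss-exclusive) RPEs."""
    rpe_s = actual - expected      # RPES: valence (sign) plus salience (size)
    rpe_us = abs(rpe_s)            # RPEUS: salience only, valence discarded
    rpe_w = max(rpe_s, 0.0)        # RPEW: responds only to wins
    rpe_l = max(-rpe_s, 0.0)       # RPEL: responds only to losses
    return rpe_s, rpe_us, rpe_w, rpe_l

win = rpe_signals(4.0, 7.0)    # (3.0, 3.0, 3.0, 0.0)
loss = rpe_signals(4.0, 1.0)   # (-3.0, 3.0, 0.0, 3.0)
```

Note that the signed RPE is the difference of the two valence-exclusive channels and the unsigned RPE is their sum, which is one way the opponent RPEW/RPEL representations could jointly support both learning and attention.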

Outcome-related neuronal activities such as the ones we have found in SEF are also found in other brain structures, such as the dopaminergic midbrain neurons (Matsumoto and Hikosaka, 2009) and in the amygdala (Belova et al., 2007). It is possible that the reward prediction error signals in SEF simply reflect input coming from these areas (Holroyd and Coles, 2002). However, the fact that SEF also represented the precursor signals necessary to compute reward prediction error suggests instead an alternative possibility. SEF could compute the reward prediction error signals locally without input from other structures and send them to the dopaminergic midbrain nuclei and the habenula via connections through the basal ganglia (Calzavara et al., 2007; Hong and Hikosaka, 2008). In addition, local computation of reward prediction error could occur also in other cortical areas, as suggested by recent studies in OFC (Sul et al., 2010) and ACC (Seo and Lee, 2007). All of these local computations are likely to be context and effector-dependent. For example, SEF would be expected to compute reward prediction error signals only in the context of eye movements. More general signals might be generated through converging inputs from multiple specialized evaluation systems onto a central, general node, such as the midbrain dopamine and the habenula neurons.

In conclusion, our results show that SEF neurons carry various monitoring and evaluative signals in a task that requires decision-making under uncertainty. The SEF contains all signals necessary to compute reward prediction error signals, and is therefore in a position to independently evaluate the outcome of saccadic behavior. The evaluative signals in SEF represent both valence-sensitive and valence-insensitive reward prediction error signals. This finding matches recent results in other brain regions and suggests the usefulness of reinforcement learning models that incorporate both types of signals.

2.3. The role of SEF in the control of behavior

Adaptive behavior requires the ability to flexibly control actions. This can occur either proactively to anticipate task requirements, or reactively in response to sudden changes (Braver et al., 2007). Proactive control is a form of early selection in which goal-relevant information is actively maintained in a sustained manner, before the occurrence of cognitively demanding events, to optimally bias attention, perception, and action systems in a goal-driven manner (Miller and Cohen, 2001). Reactive control is recruited as a late correction mechanism that is mobilized only as needed, in a just-in-time manner, such as after a high interference event is detected (Jacoby, 1998). Thus, proactive control relies upon the anticipation and prevention of interference before it occurs, whereas reactive control relies upon the detection and resolution of interference after its onset.

The stop signal or countermanding task has been used to investigate the neural control of movement initiation and inhibition in rats, awake behaving monkeys, and human subjects (Aron et al., 2007; Curtis et al., 2005; Logan, 1994; Schall et al., 2002). A network of brain areas in the frontal cortex and the basal ganglia has been implicated in behavioral control (Aron, 2007; Floden and Stuss, 2006; Picton et al., 2007), and specifically in the stop signal paradigm (Aron et al., 2007; Aron and Poldrack, 2006; Curtis et al., 2005; Li et al., 2006). A critical component of this network is the medial frontal cortex, in particular the supplementary eye field (SEF), pre-supplementary motor area (pre-SMA), and adjacent supplementary motor area (SMA). The task probes the control of an ongoing action by requiring subjects to inhibit a planned movement in response to an infrequent stop signal, which they do with variable success depending on the delay of the stop signal. Stop signal task performance can be accounted for by a race between a process that initiates the movement (GO process) and one that inhibits the movement (STOP process). This race model provides an estimate of the stop signal reaction time (SSRT), which is the time required to inhibit the planned movement. The rationale and approach for the analysis of the neural stop signal data have been described previously (Hanes et al., 1998). Briefly, the chief virtue of the stop signal paradigm is that one can determine whether a neural signal (e.g., single units, local field potentials (LFPs), event-related potentials (ERPs)) is sufficient to control the initiation of movements. First, the neural signal must be different when a movement is initiated versus when it is inhibited. Second, and most important, this difference in activity must evolve before the SSRT elapses. Signals sufficient to control movement initiation are reactive control signals that are exerted in response to the sudden occurrence of a stop signal.
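The race model logic can be sketched as a simple simulation. The finishing-time distribution and the parameter values below (GO latency, SSRT) are assumptions chosen for illustration, not the published analysis (Hanes et al., 1998):

```python
import random

# Independent race between a GO and a STOP process (Logan's race model).
# The movement is cancelled iff the STOP process finishes first.

def stop_trial(ssd, go_mean=250.0, go_sd=50.0, ssrt=100.0):
    """Simulate one stop trial; return True if the movement is inhibited.
    ssd: stop signal delay (ms). GO finish time is drawn from an assumed
    Gaussian; STOP finishes a fixed SSRT after the stop signal appears."""
    go_finish = random.gauss(go_mean, go_sd)
    stop_finish = ssd + ssrt
    return stop_finish < go_finish

random.seed(1)
n = 10000
# Probability of successful inhibition falls as the stop signal delay grows:
p_short = sum(stop_trial(ssd=50) for _ in range(n)) / n   # early stop signal
p_long = sum(stop_trial(ssd=200) for _ in range(n)) / n   # late stop signal
```

Running the race at several delays traces out the inhibition function; fitting the observed probability of stopping against SSD is what yields the SSRT estimate used to test whether a neural signal changes early enough to control movement initiation.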

Importantly, the stop signal task evokes both reactive and proactive forms of control. The dual mechanisms of control account provides strong predictions about the temporal dynamics of brain activity related to proactive versus reactive control. Proactive control should be associated with sustained and/or anticipatory activation, which reflects the active maintenance of task goals. This activity may serve as a source of top-down bias that can facilitate processing of expected upcoming events. By contrast, reactive control should be reflected in transient activation subsequent to unexpected events.

In the context of the stop signal task, reactive control is recruited as a late correction mechanism that is mobilized only as needed, in a just-in-time manner, such as the instant a stop signal is perceived. Because this control mechanism is engaged only at short notice, it requires the ability to generate control signals at high speed that are capable of influencing ongoing motor activity even at a late stage of the movement preparation. This form of behavioral control is therefore likely to be found within and interacting with the primary motor systems that directly control the relevant effectors.

The FEF, located in the rostral bank of the arcuate sulcus in macaque monkeys, participates in the transformation of visual signals into saccade motor commands (Schall, 1997). Two of the functional subpopulations of neurons that have been observed in the FEF during gaze shifts are movement and fixation neurons. Movement neurons in the FEF exhibit increased discharge before and during saccades (Bruce and Goldberg, 1985; Hanes and Schall, 1996; Schall, 1991) while fixation neurons are active during fixation and exhibit decreased discharge preceding saccades (Hanes et al., 1998; Sommer and Wurtz, 2000). FEF neurons innervate the superior colliculus (Segraves and Goldberg, 1987; Sommer and Wurtz, 2000) and the neural circuit in the brainstem that generates saccades (Segraves, 1992).

Movement and fixation neurons in FEF generate signals sufficient to control the production of gaze shifts (Hanes et al., 1998). Saccades were initiated if and only if the activity of FEF movement neurons reached a specific and constant threshold activation level that is independent of the response time (Brown et al., 2008; Hanes and Schall, 1996). The activity of movement neurons, which increased as saccades were prepared, decayed in response to the stop signal before the SSRT elapsed. Fixation cells, whose firing decreased before saccades, exhibited elevated activity in response to the stop signal before the SSRT elapsed. Similar results were observed for movement and fixation neurons in the superior colliculus (Pare and Hanes, 2003).
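The fixed-threshold account of FEF movement neurons can be illustrated with a minimal rise-to-threshold sketch (Python; the threshold, baseline, and rate parameters are hypothetical): activity climbs toward a constant criterion at a rate that varies from trial to trial, so response time varies while the crossing level does not.

```python
import random

THRESHOLD = 100.0  # fixed activation criterion; saccade fires when crossed

def trial_rt(rate_mean=1.0, rate_sd=0.3, baseline=20.0, dt=1.0, rng=random):
    """Rise-to-threshold model of a movement neuron: activity climbs at a
    rate drawn fresh each trial; the response time is the moment the fixed
    threshold is crossed. Returns (rt, activity_at_crossing)."""
    rate = max(1e-6, rng.gauss(rate_mean, rate_sd))
    activity, t = baseline, 0.0
    while activity < THRESHOLD:
        activity += rate * dt
        t += dt
    return t, activity

# The crossing level is the same on fast and slow trials: response-time
# variability comes from the growth rate, not from a moving threshold.
```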

The activity of SEF neurons is very similar to that of FEF neurons. For example, many SEF neurons show an activity increase as saccades are prepared. We therefore examined single unit activity in SEF during the stop signal task (Stuphorn et al., 2010; Stuphorn et al., 2000). However, unlike their counterparts in the FEF, SEF movement neurons do not exhibit a reliable threshold, and vanishingly few neurons in the SEF generate signals that are sufficient to control gaze (Figure 3A) (Stuphorn et al., 2010; Stuphorn et al., 2000). These findings are similar to the results obtained during the oculomotor gambling task, where SEF neurons also showed no consistent level of activity at the moment of saccade initiation (So and Stuphorn, 2010). Studies of arm movement control in pre-SMA and SMA suggest that this is a general property of the medial frontal cortex (Scangos and Stuphorn, 2010). Importantly, in the current context, no neurons were observed that showed enhanced activity on trials in which the monkey successfully canceled saccade generation (Stuphorn et al., 2000). Thus, SEF does not seem to carry reactive control signals.

Figure 3. The role of SEF in the control of behavior.

Figure 3

(A) Activity of representative SEF neuron with presaccadic activity in the countermanding task. Activity in canceled stop signal trials (thick line) with SSDs of 269 ms (left) and 369 ms (right) is compared with activity in latency-matched no stop signal trials (thin line). The activity difference is indicated by the red line. Stop signal delay (SSD) indicated by solid vertical line. Stop signal reaction time (SSRT) indicated by dotted vertical line. Solid horizontal line indicates the mean difference between the spike density functions in the 600 ms time interval preceding the target onset; dashed horizontal lines mark two standard deviations above and below this average. Red arrow marks the first time at which the difference in activity exceeds the criterion difference of two standard deviations. Note that the difference in discharge rate arises after SSRT. (B) Relationship of saccade response time to SEF activity. The activity of four representative neurons is illustrated aligned on target presentation (left) and on saccade initiation (right). All trials with no stop signal in which the target was presented in the neuron’s receptive field were divided into three groups according to saccade response time: fastest (thin line), intermediate (middle line), and slowest (thick line). Discharge rate was measured in three intervals (indicated by gray background): the 100 ms before target onset (baseline), 100–200 ms following target onset (target onset), and the 100 ms before saccade initiation (movement generation). The saccade response time is plotted against activity on that trial, with the linear regression indicated by a red line if it was significant (p < 0.05). (C) The effects of microstimulation in SEF. (Left) Representative site, where subthreshold microstimulation enhanced canceling of both contraversive and ipsiversive saccades (combined shift = 102 ms; D = 87.16; p < 0.0001; R2 = 0.82). 
The inhibition function plots the proportion of errant non-canceled saccades as a function of stop signal delay. Closed circles plot performance on control trials without stimulation; open circles, performance with stimulation for contraversive or ipsiversive saccades. Best-fitting logistic regression is plotted for control trials (solid) and stimulation data (dashed). (Right) Magnitudes of inhibition function shift produced by SEF stimulation for contraversive and ipsiversive saccades. Black bars indicate significantly improved countermanding of contra- and ipsiversive saccades (such as in the example on the left), gray bars indicate significantly improved countermanding of ipsiversive saccades but worse countermanding of contraversive saccades, while hatched bars indicate significantly improved countermanding of contraversive saccades but worse countermanding of ipsiversive saccades. Open bars plot the remainder of the cases in which no significant change was measured. (Modified after Stuphorn et al., 2010 and Stuphorn and Schall, 2006)
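The inhibition-function shift summarized in this figure can be quantified roughly as follows. This sketch is not the authors' fitting procedure: for simplicity it fits only the midpoint of a fixed-slope logistic by grid search, rather than a full logistic regression. The stimulation-induced shift is then the difference between the midpoints fitted to control and stimulation data.

```python
import math

def logistic(ssd, midpoint, slope):
    """P(non-canceled saccade) as a logistic function of stop-signal delay."""
    return 1.0 / (1.0 + math.exp(-(ssd - midpoint) / slope))

def fit_midpoint(ssds, p_obs, slope=40.0):
    """Grid-search the midpoint (the SSD of 50% failure) minimizing squared
    error; the slope is held fixed for simplicity. (Illustrative method.)"""
    best, best_err = None, float("inf")
    for m in range(0, 501):
        err = sum((logistic(s, m, slope) - p) ** 2
                  for s, p in zip(ssds, p_obs))
        if err < best_err:
            best, best_err = m, err
    return best

# The stimulation-induced shift would then be:
# shift = fit_midpoint(ssds, p_stim) - fit_midpoint(ssds, p_control)
```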

Proactive control adjusts the response selection and preparation process in anticipation of known task demands. Proactive control is guided by endogenous signals, instead of external triggers, and is constantly present throughout response selection and preparation. It can reflect a variety of factors such as the incentives for choosing different responses, and the frequency of task-relevant events. Task performance in the stop signal task is clearly influenced by factors that are independent of the presence of an actual stop signal (Verbruggen and Logan, 2009). Behavioral studies in monkeys and humans show that the mean response time during no stop signal trials is delayed relative to a situation when no stop signal is expected (Claffey et al., 2010; Stuphorn and Schall, 2006; Verbruggen et al., 2004). Short-term changes in stop signal frequency lead to behavioral adjustments (Chen et al., 2010; Emeric et al., 2007; Mirabella et al., 2006; Nelson et al., 2010). These systematic modulations in the mean reaction time indicate the presence of proactive control. In the context of the stop signal task, proactive control is mostly related to a regulation of the level of excitability of the motor system. By adjusting the level of excitation and inhibition of the motor system, the proactive control system sets the threshold for initiating a response. In making these adjustments the proactive system has to negotiate the tradeoff between speed (reaction time) and accuracy (cancelation likelihood) (Bogacz et al., 2010).
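How a proactive adjustment of motor excitability trades speed against cancelation likelihood can be sketched by combining a rise-to-threshold GO process with the race model (Python; parameter values are illustrative, not fitted to data): raising the baseline shortens the distance to threshold, speeding responses but letting more of them escape the STOP process.

```python
import random

def trial(baseline, ssd=150, ssrt=100, threshold=100.0,
          rate_mean=0.5, rate_sd=0.1, rng=random):
    """One trial: a GO process rises from `baseline` to `threshold` at a
    rate drawn fresh each trial, racing a STOP process that takes effect
    at ssd + ssrt. (All parameter values are hypothetical.)"""
    rate = max(1e-6, rng.gauss(rate_mean, rate_sd))
    rt = (threshold - baseline) / rate   # time to reach threshold
    canceled = rt > ssd + ssrt           # STOP wins if GO is slow enough
    return rt, canceled

def summary(baseline, n=5000, **kw):
    """Mean response time and cancelation probability at a given baseline."""
    trials = [trial(baseline, **kw) for _ in range(n)]
    mean_rt = sum(rt for rt, _ in trials) / n
    p_cancel = sum(c for _, c in trials) / n
    return mean_rt, p_cancel
```

In this sketch, a higher baseline yields faster mean response times but a lower probability of canceling when a stop signal occurs, which is the speed-accuracy tradeoff the proactive system must negotiate.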

Human neuroimaging experiments show that activity levels in and around the pre-SMA increase when response speed is emphasized during speed-accuracy tradeoff experiments (Forstmann et al., 2008; Ivanoff et al., 2008; van Veen et al., 2008). This suggests that dorsomedial frontal cortex, including the SEF, might be the source of the proactive control signal that modulates baseline motor activity. Indeed, the activity of many SEF neurons was correlated with response time and varied with sequential adjustments in response latency (Figure 3B) (Stuphorn et al., 2010). Stop signal trials in which monkeys inhibited versus produced a saccade were distinguished by significant differences in the discharge rate of these SEF neurons before stop signal or target presentation. Similar results were observed in the SMA during stop signal tasks using arm movements (Chen et al., 2010).

Further support for a role of SEF in proactive regulation of motor excitation levels comes from microstimulation experiments during the stop signal task (Stuphorn and Schall, 2006). Microstimulation of the SEF resulted in better control over the generation of saccades during the stop signal task (Figure 3C). Our data indicate that microstimulation of the SEF exerted a context-dependent influence on saccade generation. If no stop signal occurred, and thus no executive control was necessary, stimulation of the SEF resulted in faster saccade latencies. However, if stop signals occurred, calling for executive control, stimulation of the SEF delayed saccade generation. This is adaptive because saccades generated later have a greater chance of being canceled if a stop signal is presented than saccades generated earlier. Thus, the inhibitory effect of SEF microstimulation is context dependent, which is consistent with a previous report that the effect of microstimulation in the SEF depends on the behavioral state of the animal (Fujii et al., 1995).

In conclusion, our findings indicate that neurons in the SEF, in contrast to FEF/SC movement and fixation cells, do not contribute directly and immediately to the initiation of visually guided saccades. SEF neurons also do not contribute to fast, reactive control of behavior. However, the SEF may proactively regulate movement initiation by adjusting the level of excitation and inhibition of the oculomotor and skeletomotor systems based on prior performance and anticipated task requirements.

3. Overall function of SEF

Our research in SEF has demonstrated that this brain region participates in the selection, monitoring, and control of oculomotor behavior. The fact that this one cortical area in the dorsomedial frontal cortex is involved in such a wide range of different functions leads to the question of their relationship, and whether they can all be explained as components of one overarching function.

An important difference between SEF and other oculomotor areas is the fact that SEF neurons do not have the ability to control whether or not eye movements are generated. This means that SEF cannot play a direct role in the selection of oculomotor behavior. The final decision to generate a particular saccade and not another equally possible one is controlled by primary oculomotor areas, such as FEF and SC (Brown et al., 2008; Hanes and Schall, 1996; Pare and Hanes, 2003). Accordingly, SEF can influence the selection of behavior only indirectly, through modulation of this primary selection process in the motor structures.

Anatomical studies show that SEF could influence neuronal activity in oculomotor areas, such as FEF and SC (Huerta and Kaas, 1990) that do directly control gaze (Hanes et al., 1998; Pare and Hanes, 2003). Interestingly, FEF neurons do not reflect the value of saccadic targets (Leon and Shadlen, 1999). SEF’s influence could be exerted either directly through connections with FEF and SC, or indirectly through projections to the caudate nucleus, with which it forms a cortico-basal ganglia loop. Caudate neurons carry action value signals (Lau and Glimcher, 2008) and are thought to create an action bias that favors a motor response associated with a high reward (Lauwereyns et al., 2002a; Lauwereyns et al., 2002b).

While SEF only indirectly controls the selection of behavior, it nevertheless can have a profound influence on behavior. Voluntary behavior is characterized by the motivation to act in order to obtain a particular goal. Decision making and action selection is likely a distributed process that takes place within a larger network of areas, including SEF (Hernandez et al., 2010; Ledberg et al., 2007). The affordance competition hypothesis suggests that action selection (including value-based decision-making) is based on a competition within an interconnected network of parietal and frontal areas, such as FEF and LIP (Cisek, 2007; Cisek and Kalaska, 2010; Shadlen et al., 2008). The competition among the potential motor responses is strongly influenced and controlled by executive signals from the prefrontal cortex, including the SEF (Isoda and Hikosaka, 2007; Johnston et al., 2007; Stuphorn and Schall, 2006).

The action value signals we found in the SEF are an instance of such executive control signals. They guide the action selection in cases, such as in the gambling task, where external stimuli are not sufficient to select the best possible action. Instead, option value estimates are necessary to determine the degree of motivation to act on a particular reward option. Only if the motivation to act (i.e. the action value) is sufficiently larger than the motivation to respond in some other fashion (including not to act at all), should and will that particular action be executed.
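The selection rule described here, acting only when the motivation to act sufficiently exceeds every alternative including not acting, can be written as a few lines of Python. This is a minimal sketch; the value scale and the `margin` parameter are assumptions for illustration, not quantities from the recordings.

```python
def select_action(action_values, no_act_value=0.0, margin=0.1):
    """Pick the highest-valued action only if it exceeds every alternative,
    including the option of not acting, by at least `margin`; otherwise
    withhold the response. (Sketch; `margin` is a hypothetical parameter.)"""
    options = dict(action_values, no_go=no_act_value)
    best = max(options, key=options.get)
    others = [v for k, v in options.items() if k != best]
    if best != "no_go" and options[best] - max(others) >= margin:
        return best
    return "no_go"
```

For example, a clearly dominant saccade target is selected, while closely matched values or a uniformly low motivation to act both result in withholding the movement.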

As a whole, the neurons in the medial frontal cortex could represent a map of action values for all possible eye- and body movements (including sequences of such actions, and no-go responses). This interpretation is supported by findings that electrical stimulation of pre-SMA and SMA in humans generates the urge to perform particular actions (Fried et al., 1991). Furthermore, lesions of the pre-SMA and SMA may lead to apathy, because the motivational drive that normally links reward expectation with specific actions is absent. However, since the motor system is still functional, external stimuli may still trigger automatic or habitual movements. This is, in fact, what is observed for SMA lesions in monkeys (Thaler et al., 1995; Thaler et al., 1988) and humans (Levy and Dubois, 2006; Schmidt et al., 2008).

There exists a close relationship between these motivational control signals found in the oculomotor gambling task and the proactive control signals found in the stop signal task. In the stop signal task, there are two mutually exclusive motivations that compete with each other. On the one hand, there is a motivation to GO resulting from the very frequent link between movement execution and reward delivery. On the other hand, there is a motivation to WAIT (not to stop per se) generated by the awareness that on any given trial a stop signal might be given. These two motivations (or action values for GO and WAIT) vary in strength according to the most recent reward and trial history. The relative strength of these motivations determines the level of excitability and the momentary speed-accuracy tradeoff of the subject at any moment in the task. This changing modulation of the level of excitability of the motor system is exactly the proactive control mechanism discussed earlier.
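One way to picture how the GO and WAIT motivations could track reward and trial history is a simple delta-rule update (a sketch only; the learning rate and update targets are assumptions): rewarded movements on no-stop trials strengthen GO, successful cancelations on stop trials strengthen WAIT, and their relative strength sets the excitability setpoint.

```python
def update_values(go, wait, trial_type, rewarded, alpha=0.2):
    """Delta-rule update of the GO and WAIT action values from the last
    trial: rewarded movement execution on no-stop trials drives GO toward 1,
    and rewarded canceling on stop trials drives WAIT toward 1.
    (Sketch; alpha and the update targets are assumptions.)"""
    if trial_type == "no_stop":
        go += alpha * ((1.0 if rewarded else 0.0) - go)
    elif trial_type == "stop":
        wait += alpha * ((1.0 if rewarded else 0.0) - wait)
    return go, wait

def excitability(go, wait):
    """Relative GO strength sets the motor excitability, i.e. the momentary
    speed-accuracy setpoint."""
    return go / (go + wait) if (go + wait) > 0 else 0.5
```

A run of rewarded no-stop trials raises the GO value and hence excitability (faster, riskier responding); encountering stop signals raises WAIT and lowers it again.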

Thus, we can understand SEF activity as a context-dependent action value map that controls oculomotor behavior by selectively enhancing or inhibiting motor neurons that produce desired or undesired eye movements, respectively. However, in the case of the stop signal paradigm, the monkeys initially did not respond to the reappearance of the fixation light, or at least not necessarily by inhibiting their saccade preparation. This response was acquired during training. Even after training, the monkeys did not show saccade inhibition when outside of the task setting, or at the end of the recording session when their motivation was low. Likewise, the monkeys have to learn the distribution of outcomes associated with the different colored targets in the oculomotor gambling task. Thus, there is clearly a task set that the monkeys learn during training and that guides their behavior as long as they are motivated to do so (Sakai, 2008).

The presence of the monitoring and evaluation signals in SEF is most likely related to the need to learn the context-specific task sets that guide the relationships between actions and the various outcomes. SEF, but not FEF, shows systematic changes in activity during learning of new stimulus-response associations (Chen and Wise, 1995a, b). Even after the task-set has been learned, monitoring of behavior is necessary to catch changes in the environment, or possible mistakes due to response conflict or inadequate attention to the task requirements (Botvinick et al., 2001; Coles et al., 1995; Holroyd and Coles, 2002).

4. Conclusions

The medial frontal cortex has been suggested to play a role in the control, monitoring, and selection of behavior. Recording studies in the SEF of macaque monkeys have systematically investigated the role of SEF in these various suggested behavioral functions. SEF neurons participate in the selection of eye movements by representing the context-dependent action value of various possible oculomotor behaviors (So and Stuphorn, 2010). SEF does not have the ability to directly select eye movements, but the action value signals in SEF could influence the activity distribution in more primary oculomotor areas, so that they guide decisions, especially when external environmental factors instruct more than one possible response (Stuphorn et al., 2010). SEF also does not participate in the fast, inhibitory control of eye movements in response to sudden changes in the task requirements. Instead, it might participate in the long-term regulation of oculomotor excitability to adjust the speed-accuracy tradeoff (Stuphorn et al., 2010). The context-dependent control signals found in SEF (including the action value signals) have to be learned and continuously adjusted in response to changes in the environment. This is likely the function of the large number of different response monitoring and evaluation signals in SEF (So and Stuphorn, 2012; Stuphorn et al., 2000). In conclusion, the likely overall function of SEF in goal-directed behavior is the acquisition of context-dependent rules that allow predicting the consequences of different eye movements. These action value signals are likely used to influence the oculomotor regions so that the eye movement is selected that best fulfills the long-term goal of the organism.

Highlights.

  • Different recording studies in the SEF of macaque monkeys have systematically investigated the role of SEF in the selection, control and monitoring of oculomotor behavior.

  • SEF neurons participate in the selection of eye movements by representing the context-dependent action value of various possible oculomotor behaviors.

  • SEF does not have the ability to directly initiate eye movements, but the action value signals in SEF can influence the activity distribution in more primary oculomotor areas, so that they support the selection of behaviorally optimal eye movements.

  • The function of the large number of different response monitoring and evaluation signals in SEF is the learning and updating of the contextual control signals.

Acknowledgments

This work was supported by the National Eye Institute through grant R01-EY019039 to VS.


References

  1. Amador N, Schlag-Rey M, Schlag J. Reward-predicting and reward-detecting neuronal activity in the primate supplementary eye field. J Neurophysiol. 2000;84(4):2166–2170. doi: 10.1152/jn.2000.84.4.2166.
  2. Aron AR. The neural basis of inhibition in cognitive control. Neuroscientist. 2007;13(3):214–228. doi: 10.1177/1073858407299288.
  3. Aron AR, Durston S, Eagle DM, Logan GD, Stinear CM, Stuphorn V. Converging evidence for a fronto-basal-ganglia network for inhibitory control of action and cognition. J Neurosci. 2007;27(44):11860–11864. doi: 10.1523/JNEUROSCI.3644-07.2007.
  4. Aron AR, Poldrack RA. Cortical and subcortical contributions to Stop signal response inhibition: role of the subthalamic nucleus. J Neurosci. 2006;26(9):2424–2433. doi: 10.1523/JNEUROSCI.4682-05.2006.
  5. Balleine BW, Dickinson A. Goal-directed instrumental action: contingency and incentive learning and their cortical substrates. Neuropharmacology. 1998;37(4–5):407–419. doi: 10.1016/s0028-3908(98)00033-1.
  6. Belova MA, Paton JJ, Morrison SE, Salzman CD. Expectation modulates neural responses to pleasant and aversive stimuli in primate amygdala. Neuron. 2007;55(6):970–984. doi: 10.1016/j.neuron.2007.08.004.
  7. Bermudez MA, Schultz W. Reward magnitude coding in primate amygdala neurons. J Neurophysiol. 2010;104(6):3424–3432. doi: 10.1152/jn.00540.2010.
  8. Bogacz R, Wagenmakers EJ, Forstmann BU, Nieuwenhuis S. The neural basis of the speed-accuracy tradeoff. Trends Neurosci. 2010;33(1):10–16. doi: 10.1016/j.tins.2009.09.002.
  9. Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. Conflict monitoring and cognitive control. Psychol Rev. 2001;108(3):624–652. doi: 10.1037/0033-295x.108.3.624.
  10. Braver TS, Gray JR, Burgess GC. Explaining the Many Varieties of Working Memory Variation: Dual Mechanisms of Cognitive Control. In: Conway ARA, Jarrold C, Kane MJ, Miyake A, Towse JN, editors. Variation in Working Memory. Oxford University Press; Oxford: 2007.
  11. Brown JW, Hanes DP, Schall JD, Stuphorn V. Relation of frontal eye field activity to saccade initiation during a countermanding task. Exp Brain Res. 2008;190(2):135–151. doi: 10.1007/s00221-008-1455-0.
  12. Bruce CJ, Goldberg ME. Primate frontal eye fields. I. Single neurons discharging before saccades. J Neurophysiol. 1985;53(3):603–635. doi: 10.1152/jn.1985.53.3.603.
  13. Calzavara R, Mailly P, Haber SN. Relationship between the corticostriatal terminals from areas 9 and 46, and those from area 8A, dorsal and rostral premotor cortex and area 24c: an anatomical substrate for cognition to action. Eur J Neurosci. 2007;26(7):2005–2024. doi: 10.1111/j.1460-9568.2007.05825.x.
  14. Chen LL, Wise SP. Neuronal activity in the supplementary eye field during acquisition of conditional oculomotor associations. J Neurophysiol. 1995a;73(3):1101. doi: 10.1152/jn.1995.73.3.1101.
  15. Chen LL, Wise SP. Supplementary eye field contrasted with the frontal eye field during acquisition of conditional oculomotor associations. J Neurophysiol. 1995b;73(3):1122. doi: 10.1152/jn.1995.73.3.1122.
  16. Chen X, Scangos KW, Stuphorn V. Supplementary motor area exerts proactive and reactive control of arm movements. J Neurosci. 2010;30(44):14657–14675. doi: 10.1523/JNEUROSCI.2669-10.2010.
  17. Chen X, Mihalas S, Stuphorn V. Competition between different action value signals in supplementary eye field during value-based decision making. Program No. 499.11. Neuroscience Meeting Planner. New Orleans, LA: Society for Neuroscience; 2012. Online.
  18. Cisek P. Cortical mechanisms of action selection: the affordance competition hypothesis. Philos Trans R Soc Lond B Biol Sci. 2007;362(1485):1585–1599. doi: 10.1098/rstb.2007.2054.
  19. Cisek P. Making decisions through a distributed consensus. Curr Opin Neurobiol. 2012. doi: 10.1016/j.conb.2012.05.007.
  20. Cisek P, Kalaska JF. Neural mechanisms for interacting with a world full of action choices. Annu Rev Neurosci. 2010;33:269–298. doi: 10.1146/annurev.neuro.051508.135409.
  21. Claffey MP, Sheldon S, Stinear CM, Verbruggen F, Aron AR. Having a goal to stop action is associated with advance control of specific motor representations. Neuropsychologia. 2010;48(2):541–548. doi: 10.1016/j.neuropsychologia.2009.10.015.
  22. Coe B, Tomihara K, Matsuzawa M, Hikosaka O. Visual and anticipatory bias in three cortical eye fields of the monkey during an adaptive decision-making task. J Neurosci. 2002;22(12):5081–5090. doi: 10.1523/JNEUROSCI.22-12-05081.2002.
  23. Coles MG, Scheffers MK, Fournier L. Where did you go wrong? Errors, partial errors, and the nature of human information processing. Acta Psychol (Amst). 1995;90(1–3):129–144. doi: 10.1016/0001-6918(95)00020-u.
  24. Curtis CE, Cole MW, Rao VY, D’Esposito M. Canceling planned action: an fMRI study of countermanding saccades. Cereb Cortex. 2005;15(9):1281–1289. doi: 10.1093/cercor/bhi011.
  25. Daw ND, Niv Y, Dayan P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci. 2005;8(12):1704–1711. doi: 10.1038/nn1560.
  26. Emeric EE, Brown JW, Boucher L, Carpenter RH, Hanes DP, Harris R, Logan GD, Mashru RN, Pare M, Pouget P, Stuphorn V, Taylor TL, Schall JD. Influence of history on saccade countermanding performance in humans and macaque monkeys. Vision Res. 2007;47(1):35–49. doi: 10.1016/j.visres.2006.08.03.
  27. Floden D, Stuss DT. Inhibitory control is slowed in patients with right superior medial frontal damage. J Cogn Neurosci. 2006;18(11):1843–1849. doi: 10.1162/jocn.2006.18.11.1843.
  28. Forstmann BU, Dutilh G, Brown S, Neumann J, von Cramon DY, Ridderinkhof KR, Wagenmakers EJ. Striatum and pre-SMA facilitate decision-making under time pressure. Proc Natl Acad Sci U S A. 2008;105(45):17538–17542. doi: 10.1073/pnas.0805903105.
  29. Fried I, Katz A, McCarthy G, Sass KJ, Williamson P, Spencer SS, Spencer DD. Functional organization of human supplementary motor cortex studied by electrical stimulation. J Neurosci. 1991;11(11):3656–3666. doi: 10.1523/JNEUROSCI.11-11-03656.1991.
  30. Fujii N, Mushiake H, Tamai M, Tanji J. Microstimulation of the supplementary eye field during saccade preparation. Neuroreport. 1995;6(18):2565–2568. doi: 10.1097/00001756-199512150-00028.
  31. Fujii N, Mushiake H, Tanji J. Distribution of eye- and arm-movement-related neuronal activity in the SEF and in the SMA and Pre-SMA of monkeys. J Neurophysiol. 2002;87(4):2158–2166. doi: 10.1152/jn.00867.2001.
  32. Ghashghaei HT, Hilgetag CC, Barbas H. Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala. Neuroimage. 2007;34(3):905–923. doi: 10.1016/j.neuroimage.2006.09.046.
  33. Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038.
  34. Grabenhorst F, Hernadi I, Schultz W. Prediction of economic choice by primate amygdala neurons. Proc Natl Acad Sci U S A. 2012;109(46):18950–18955. doi: 10.1073/pnas.1212706109.
  35. Hanes DP, Patterson WF 2nd, Schall JD. Role of frontal eye fields in countermanding saccades: visual, movement, and fixation activity. J Neurophysiol. 1998;79(2):817–834. doi: 10.1152/jn.1998.79.2.817.
  36. Hanes DP, Schall JD. Neural control of voluntary movement initiation. Science. 1996;274(5286):427–430. doi: 10.1126/science.274.5286.427.
  37. Hayden BY, Heilbronner SR, Nair AC, Platt ML. Cognitive influences on risk-seeking by rhesus macaques. Judgm Decis Mak. 2008;3(5):389–395.
  38. Hernandez A, Nacher V, Luna R, Zainos A, Lemus L, Alvarez M, Vazquez Y, Camarillo L, Romo R. Decoding a perceptual decision process across cortex. Neuron. 2010;66(2):300–314. doi: 10.1016/j.neuron.2010.03.031.
  39. Holroyd CB, Coles MG. The neural basis of human error processing: reinforcement learning, dopamine, and the error-related negativity. Psychol Rev. 2002;109(4):679–709. doi: 10.1037/0033-295X.109.4.679.
  40. Hong S, Hikosaka O. The globus pallidus sends reward-related signals to the lateral habenula. Neuron. 2008;60(4):720–729. doi: 10.1016/j.neuron.2008.09.035.
  41. Huerta MF, Kaas JH. Supplementary eye field as defined by intracortical microstimulation: connections in macaques. J Comp Neurol. 1990;293:299–330. doi: 10.1002/cne.902930211.
  42. Isoda M, Hikosaka O. Switching from automatic to controlled action by monkey medial frontal cortex. Nat Neurosci. 2007;10(2):240–248. doi: 10.1038/nn1830.
  43. Ivanoff J, Branning P, Marois R. fMRI evidence for a dual process account of the speed-accuracy tradeoff in decision-making. PLoS One. 2008;3(7):e2635. doi: 10.1371/journal.pone.0002635.
  44. Jacoby LL. The role of cognitive control: early selection vs. late correction. Psychologia. 1998;41(4):288–288.
  45. Johansen-Berg H, Behrens TE, Robson MD, Drobnjak I, Rushworth MF, Brady JM, Smith SM, Higham DJ, Matthews PM. Changes in connectivity profiles define functionally distinct regions in human medial frontal cortex. Proc Natl Acad Sci U S A. 2004;101(36):13335–13340. doi: 10.1073/pnas.0403743101.
  46. Johnston K, Levin HM, Koval MJ, Everling S. Top-down control-signal dynamics in anterior cingulate and prefrontal cortex neurons following task switching. Neuron. 2007;53(3):453–462. doi: 10.1016/j.neuron.2006.12.023.
  47. Kable JW, Glimcher PW. The neurobiology of decision: consensus and controversy. Neuron. 2009;63(6):733–745. doi: 10.1016/j.neuron.2009.09.003.
  48. Kastner S, Pinsk MA, De Weerd P, Desimone R, Ungerleider LG. Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron. 1999;22(4):751–761. doi: 10.1016/s0896-6273(00)80734-5.
  49. Lau B, Glimcher PW. Value representations in the primate striatum during matching behavior. Neuron. 2008;58(3):451–463. doi: 10.1016/j.neuron.2008.02.021.
  50. Lauwereyns J, Takikawa Y, Kawagoe R, Kobayashi S, Koizumi M, Coe B, Sakagami M, Hikosaka O. Feature-based anticipation of cues that predict reward in monkey caudate nucleus. Neuron. 2002a;33(3):463–473. doi: 10.1016/s0896-6273(02)00571-8.
  51. Lauwereyns J, Watanabe K, Coe B, Hikosaka O. A neural correlate of response bias in monkey caudate nucleus. Nature. 2002b;418(6896):413–417. doi: 10.1038/nature00892.
  52. Ledberg A, Bressler SL, Ding M, Coppola R, Nakamura R. Large-scale visuomotor integration in the cerebral cortex. Cereb Cortex. 2007;17(1):44–62. doi: 10.1093/cercor/bhj123.
  53. Leon MI, Shadlen MN. Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron. 1999;24(2):415–425. doi: 10.1016/s0896-6273(00)80854-5.
  54. Levy R, Dubois B. Apathy and the functional anatomy of the prefrontal cortex-basal ganglia circuits. Cereb Cortex. 2006;16(7):916–928. doi: 10.1093/cercor/bhj043.
  55. Li CS, Huang C, Constable RT, Sinha R. Imaging response inhibition in a stop-signal task: neural correlates independent of signal monitoring and post-response processing. J Neurosci. 2006;26(1):186–192. doi: 10.1523/JNEUROSCI.3741-05.2006.
  56. Logan GD. On the ability to inhibit thought and action: A users’ guide to the stop signal paradigm. In: Dagenbach D, Carr TH, editors. Inhibitory Processes in Attention, Memory and Language. Academic Press; San Diego: 1994. pp. 189–239.
  57. Luppino G, Matelli M, Camarda RM, Gallese V, Rizzolatti G. Multiple representations of body movements in mesial area 6 and the adjacent cingulate cortex: an intracortical microstimulation study in the macaque monkey. J Comp Neurol. 1991;311(4):463–482. doi: 10.1002/cne.903110403. [DOI] [PubMed] [Google Scholar]
  58. Matsumoto K, Suzuki W, Tanaka K. Neuronal correlates of goal-based motor selection in the prefrontal cortex. Science. 2003;301(5630):229–232. doi: 10.1126/science.1084204. [DOI] [PubMed] [Google Scholar]
  59. Matsumoto M, Hikosaka O. Lateral habenula as a source of negative reward signals in dopamine neurons. Nature. 2007;447(7148):1111–1115. doi: 10.1038/nature05860. [DOI] [PubMed] [Google Scholar]
  60. Matsumoto M, Hikosaka O. Two types of dopamine neuron distinctly convey positive and negative motivational signals. Nature. 2009;459(7248):837–841. doi: 10.1038/nature08028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. McCoy AN, Platt ML. Risk-sensitive neurons in macaque posterior cingulate cortex. Nat Neurosci. 2005;8(9):1220–1227. doi: 10.1038/nn1523. [DOI] [PubMed] [Google Scholar]
  62. Miller EK, Cohen JD. An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 2001;24:167–202. doi: 10.1146/annurev.neuro.24.1.167. [DOI] [PubMed] [Google Scholar]
  63. Mirabella G, Pani P, Pare M, Ferraina S. Inhibitory control of reaching movements in humans. Exp Brain Res. 2006;174(2):240–255. doi: 10.1007/s00221-006-0456-0. [DOI] [PubMed] [Google Scholar]
  64. Nakamura K, Roesch MR, Olson CR. Neuronal activity in macaque SEF and ACC during performance of tasks involving conflict. J Neurophysiol. 2005;93(2):884–908. doi: 10.1152/jn.00305.2004. [DOI] [PubMed] [Google Scholar]
  65. Nelson MJ, Boucher L, Logan GD, Palmeri TJ, Schall JD. Nonindependent and nonstationary response times in stopping and stepping saccade tasks. Atten Percept Psychophys. 2010;72(7):1913–1929. doi: 10.3758/APP.72.7.1913. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. O’Neill M, Schultz W. Coding of Reward Risk by Orbitofrontal Neurons Is Mostly Distinct from Coding of Reward Value. Neuron. 2010;68(4):789–800. doi: 10.1016/j.neuron.2010.09.031. [DOI] [PubMed] [Google Scholar]
  67. Padoa-Schioppa C. Neurobiology of economic choice: a good-based model. Annu Rev Neurosci. 2011;34:333–359. doi: 10.1146/annurev-neuro-061010-113648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Padoa-Schioppa C, Assad JA. Neurons in the orbitofrontal cortex encode economic value. Nature. 2006;441(7090):223–226. doi: 10.1038/nature04676. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Pare M, Hanes DP. Controlled movement processing: superior colliculus activity associated with countermanded saccades. J Neurosci. 2003;23(16):6480–6489. doi: 10.1523/JNEUROSCI.23-16-06480.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Paton JJ, Belova MA, Morrison SE, Salzman CD. The primate amygdala represents the positive and negative value of visual stimuli during learning. Nature. 2006;439(7078):865–870. doi: 10.1038/nature04490. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Pearce JM, Hall G. A model for Pavlovian learning: variations in the effectiveness of conditioned but not unconditioned stimuli. Psychol Rev. 1980;87:532–552. [PubMed] [Google Scholar]
  72. Picton TW, Stuss DT, Alexander MP, Shallice T, Binns MA, Gillingham S. Effects of focal frontal lesions on response inhibition. Cereb Cortex. 2007;17(4):826–838. doi: 10.1093/cercor/bhk031. [DOI] [PubMed] [Google Scholar]
  73. Plassmann H, O’Doherty J, Rangel A. Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J Neurosci. 2007;27(37):9984–9988. doi: 10.1523/JNEUROSCI.2131-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Plassmann H, O’Doherty JP, Rangel A. Appetitive and aversive goal values are encoded in the medial orbitofrontal cortex at the time of decision making. The Journal of neuroscience : the official journal of the Society for Neuroscience. 2010;30(32):10799–10808. doi: 10.1523/JNEUROSCI.0788-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Purcell BA, Weigand PK, Schall JD. Supplementary eye field during visual search: salience, cognitive control, and performance monitoring. J Neurosci. 2012;32(30):10273–10285. doi: 10.1523/JNEUROSCI.6386-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Rangel A, Camerer C, Montague PR. A framework for studying the neurobiology of value-based decision making. Nat Rev Neurosci. 2008;9(7):545–556. doi: 10.1038/nrn2357. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical Conditioning II: Current Research and Theory. Appleton-Century-Crofts; New York: 1972. pp. 64–99. [Google Scholar]
  78. Roesch MR, Olson CR. Neuronal activity related to reward value and motivation in primate frontal cortex. Science. 2004;304(5668):307–310. doi: 10.1126/science.1093223. [DOI] [PubMed] [Google Scholar]
  79. Sakai K. Task set and prefrontal cortex. Annual review of neuroscience. 2008;31:219–245. doi: 10.1146/annurev.neuro.31.060407.125642. [DOI] [PubMed] [Google Scholar]
  80. Samejima K, Ueda Y, Doya K, Kimura M. Representation of action-specific reward values in the striatum. Science. 2005;310(5752):1337–1340. doi: 10.1126/science.1115270. [DOI] [PubMed] [Google Scholar]
  81. Scangos KW, Stuphorn V. Medial Frontal Cortex Motivates but does not Control Movement Initiation in the Countermanding task. J Neurosci. 2010;30(5):1968–1982. doi: 10.1523/JNEUROSCI.4509-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Schall JD. Neuronal activity related to visually guided saccades in the frontal eye fields of rhesus monkeys: comparison with supplementary eye fields. J Neurophysiol. 1991;66(2):559–579. doi: 10.1152/jn.1991.66.2.559. [DOI] [PubMed] [Google Scholar]
  83. Schall JD. Visuomotor areas of the frontal lobe. In: Rockland K, Peters A, Kaas JH, editors. Cerebral Cortex. Plenum; New York: 1997. pp. 527–638. [Google Scholar]
  84. Schall JD, Stuphorn V, Brown JW. Monitoring and control of action by the frontal lobes. Neuron. 2002;36(2):309–322. doi: 10.1016/s0896-6273(02)00964-9. [DOI] [PubMed] [Google Scholar]
  85. Schmidt L, d’Arc BF, Lafargue G, Galanaud D, Czernecki V, Grabli D, Schupbach M, Hartmann A, Levy R, Dubois B, Pessiglione M. Disconnecting force from money: effects of basal ganglia damage on incentive motivation. Brain. 2008;131(Pt 5):1303–1310. doi: 10.1093/brain/awn045. [DOI] [PubMed] [Google Scholar]
  86. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275(5306):1593–1599. doi: 10.1126/science.275.5306.1593. [DOI] [PubMed] [Google Scholar]
  87. Segraves MA. Activity of monkey frontal eye field neurons projecting to oculomotor regions of the pons. J Neurophysiol. 1992;68(6):1967–1985. doi: 10.1152/jn.1992.68.6.1967. [DOI] [PubMed] [Google Scholar]
  88. Segraves MA, Goldberg ME. Functional properties of corticotectal neurons in the monkey’s frontal eye field. J Neurophysiol. 1987;58(6):1387–1419. doi: 10.1152/jn.1987.58.6.1387. [DOI] [PubMed] [Google Scholar]
  89. Seo H, Lee D. Temporal filtering of reward signals in the dorsal anterior cingulate cortex during a mixed-strategy game. The Journal of neuroscience : the official journal of the Society for Neuroscience. 2007;27(31):8366–8377. doi: 10.1523/JNEUROSCI.2369-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Seo H, Lee D. Behavioral and neural changes after gains and losses of conditioned reinforcers. J Neurosci. 2009;29(11):3627–3641. doi: 10.1523/JNEUROSCI.4726-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Shadlen MN, Kiani R, Hanks TD, Churchland AK. Neurobiology of Decision Making, An Intentional Framework. In: Engel C, Singer W, editors. Better Than Conscious? MIT Press; Cambridge, MA: 2008. pp. 71–101. [Google Scholar]
  92. Shook BL, Schlag-Rey M, Schlag J. Primate supplementary eye field. II. Comparative aspects of connections with the thalamus, corpus striatum, and related forebrain nuclei. J Comp Neurol. 1991;307(4):562–583. doi: 10.1002/cne.903070405. [DOI] [PubMed] [Google Scholar]
  93. So NY, Stuphorn V. Supplementary eye field encodes option and action value for saccades with variable reward. Journal of Neurophysiology. 2010;104(5):2634–2653. doi: 10.1152/jn.00430.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. So NY, Stuphorn V. Supplementary eye field encodes reward prediction error. J Neurosci. 2012;32(9):2950–2963. doi: 10.1523/JNEUROSCI.4419-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Sommer MA, Wurtz RH. Composition and topographic organization of signals sent from the frontal eye field to the superior colliculus. J Neurophysiol. 2000;83(4):1979–2001. doi: 10.1152/jn.2000.83.4.1979. [DOI] [PubMed] [Google Scholar]
  96. Stuphorn V, Brown JW, Schall JD. Role of supplementary eye field in saccade initiation: executive, not direct, control. Journal of Neurophysiology. 2010;103(2):801–816. doi: 10.1152/jn.00221.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Stuphorn V, Schall JD. Neuronal control and monitoring of initiation of movements. Muscle Nerve. 2002;26(3):326–339. doi: 10.1002/mus.10158. [DOI] [PubMed] [Google Scholar]
  98. Stuphorn V, Schall JD. Executive control of countermanding saccades by the supplementary eye field. Nat Neurosci. 2006;9(7):925–931. doi: 10.1038/nn1714. [DOI] [PubMed] [Google Scholar]
  99. Stuphorn V, Taylor TL, Schall JD. Performance monitoring by the supplementary eye field. Nature. 2000;408(6814):857–860. doi: 10.1038/35048576. [DOI] [PubMed] [Google Scholar]
  100. Sul JH, Kim H, Huh N, Lee D, Jung MW. Distinct roles of rodent orbitofrontal and medial prefrontal cortex in decision making. Neuron. 2010;66(3):449–460. doi: 10.1016/j.neuron.2010.03.033. [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Sumner P, Nachev P, Morris P, Peters AM, Jackson SR, Kennard C, Husain M. Human medial frontal cortex mediates unconscious inhibition of voluntary action. Neuron. 2007;54(5):697–711. doi: 10.1016/j.neuron.2007.05.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Sutton RS, Barto AG. Reinforcement Learning. The MIT Press; Cambridge, Massachusetts: 1998. [Google Scholar]
  103. Tanji J. New concepts of the supplementary motor area. Curr Opin Neurobiol. 1996;6(6):782–787. doi: 10.1016/s0959-4388(96)80028-6. [DOI] [PubMed] [Google Scholar]
  104. Thaler D, Chen YC, Nixon PD, Stern CE, Passingham RE. The functions of the medial premotor cortex. I. Simple learned movements. Exp Brain Res. 1995;102(3):445–460. doi: 10.1007/BF00230649. [DOI] [PubMed] [Google Scholar]
  105. Thaler DE, Rolls ET, Passingham RE. Neuronal activity of the supplementary motor area (SMA) during internally and externally triggered wrist movements. Neurosci Lett. 1988;93(2–3):264–269. doi: 10.1016/0304-3940(88)90093-6. [DOI] [PubMed] [Google Scholar]
  106. Uchida Y, Lu X, Ohmae S, Takahashi T, Kitazawa S. Neuronal activity related to reward size and rewarded target position in primate supplementary eye field. The Journal of neuroscience : the official journal of the Society for Neuroscience. 2007;27(50):13750–13755. doi: 10.1523/JNEUROSCI.2693-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. van Veen V, Krug MK, Carter CS. The neural and computational basis of controlled speed-accuracy tradeoff during task performance. J Cogn Neurosci. 2008;20(11):1952–1965. doi: 10.1162/jocn.2008.20146. [DOI] [PubMed] [Google Scholar]
  108. Verbruggen F, Liefooghe B, Vandierendonck A. The interaction between stop signal inhibition and distractor interference in the flanker and Stroop task. Acta Psychol (Amst) 2004;116(1):21–37. doi: 10.1016/j.actpsy.2003.12.011. [DOI] [PubMed] [Google Scholar]
  109. Verbruggen F, Logan GD. Proactive adjustments of response strategies in the stop-signal paradigm. J Exp Psychol Hum Percept Perform. 2009;35(3):835–854. doi: 10.1037/a0012726. [DOI] [PMC free article] [PubMed] [Google Scholar]
  110. Vickery TJ, Chun MM, Lee D. Ubiquity and specificity of reinforcement signals throughout the human brain. Neuron. 2011;72(1):166–177. doi: 10.1016/j.neuron.2011.08.011. [DOI] [PubMed] [Google Scholar]
  111. Wallis JD, Miller EK. Neuronal activity in primate dorsolateral and orbital prefrontal cortex during performance of a reward preference task. Eur J Neurosci. 2003;18(7):2069–2081. doi: 10.1046/j.1460-9568.2003.02922.x. [DOI] [PubMed] [Google Scholar]
  112. Wunderlich K, Rangel A, O’Doherty JP. Neural computations underlying action-based decision making in the human brain. Proc Natl Acad Sci U S A. 2009;106(40):17199–17204. doi: 10.1073/pnas.0901077106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  113. Yin HH, Knowlton BJ. The role of the basal ganglia in habit formation. Nat Rev Neurosci. 2006;7(6):464–476. doi: 10.1038/nrn1919. [DOI] [PubMed] [Google Scholar]
