The Journal of Neuroscience
. 2009 Mar 11;29(10):3259–3270. doi: 10.1523/JNEUROSCI.5353-08.2009

Reward-Dependent Modulation of Working Memory in Lateral Prefrontal Cortex

Steven W. Kennerley and Jonathan D. Wallis
PMCID: PMC2685205  NIHMSID: NIHMS101113  PMID: 19279263

Abstract

Although research implicates lateral prefrontal cortex (PFC) in executive control and goal-directed behavior, it remains unclear how goals influence executive processes. One possibility is that goal-relevant information, such as expected rewards, could modulate the representation of information relating to executive control, thereby ensuring the efficient allocation of cognitive resources. To investigate this, we examined how reward modulated spatial working memory. Past studies investigating spatial working memory have focused on dorsolateral PFC, but this area only weakly connects with areas processing reward. Ventrolateral PFC has better connections in this regard. Thus, we contrasted the functional properties of single neurons in ventrolateral and dorsolateral PFC as two subjects performed a task that required them to hold spatial information in working memory under different expectancies of reward for correct performance. We balanced the order of presentation of spatial and reward information so we could assess the neuronal encoding of the two pieces of information independently and conjointly. Neurons in ventrolateral PFC encoded both spatial and reward information earlier, more strongly, and in a more sustained manner than neurons in dorsolateral PFC. Within ventrolateral PFC, spatial selectivity was more prevalent on the inferior convexity than within the principal sulcus. Finally, when reward increased spatial selectivity, behavioral performance improved, whereas when reward decreased spatial selectivity, behavioral performance deteriorated. These results suggest that ventrolateral PFC may be a locus at which information about expected rewards can modulate information held in working memory. The pattern of results is consistent with a role for ventrolateral PFC in attentional control.

Introduction

The activity of neurons in lateral prefrontal cortex (LPFC) encodes much information relevant to executive control, including behavior-guiding rules (White and Wise, 1999; Hoshi et al., 2000; Wallis et al., 2001), task context (Asaad et al., 2000), strategies (Genovesio et al., 2005) and sensory working memory (Goldman-Rakic, 1987; Rao et al., 1997). One important feature of high-level cognition is that it is goal-directed. LPFC neurons encode information important for establishing goals, such as the expected reward for a particular behavior (Leon and Shadlen, 1999; Kobayashi et al., 2002; Matsumoto et al., 2003; Roesch and Olson, 2003; Wallis and Miller, 2003b; Amemori and Sawaguchi, 2006). Moreover, damage to LPFC impairs the ability to organize behavior toward goals effectively (Owen et al., 1990), an impairment termed “goal neglect” (Duncan et al., 1996). Thus, a complete understanding of LPFC function should account for how information relevant to goal-directed behavior, such as expected rewards, can modulate information pertaining to executive control.

One possibility is that increasing the expected reward of a behavior enhances the representation of executive control processes, an effect which may underlie the tendency for behavioral performance to improve as the expected reward increases (Roesch and Olson, 2007). With this aim, a useful executive process to study is spatial working memory, since it has a long association with LPFC function (Jacobsen, 1935; Fuster and Alexander, 1971; Kubota and Niki, 1971; Funahashi et al., 1989) and it is easily parameterized, unlike other forms of visual working memory or executive information. Consequently, it provides a sensitive measure against which to measure the effects of goal-relevant information, such as reward. Thus, we sought to characterize better how LPFC neurons integrate reward information with spatial working memory. We did this by comparing neuronal activity in LPFC when either spatial or reward information alone was available to neuronal activity when both types of information were present. In terms of the anatomy underlying these processes, cortex in ventral LPFC (VLPFC) may be better connected than dorsal LPFC (DLPFC). VLPFC strongly connects with both sensory areas (Cavada and Goldman-Rakic, 1989; Preuss and Goldman-Rakic, 1989; Carmichael and Price, 1995b; Petrides and Pandya, 2002) and areas processing reward, such as the amygdala and orbitofrontal cortex (Barbas and Pandya, 1989; Carmichael and Price, 1995a; Petrides and Pandya, 2002). In contrast, although DLPFC receives input from parietal cortex, it has relatively weak connections to other sensory and reward areas (Barbas and Mesulam, 1985; Cavada and Goldman-Rakic, 1989; Petrides and Pandya, 1999). Our hypothesis was that the integration of spatial and reward information would be strongest in VLPFC. In addition, we predicted that reward should increase spatial selectivity in LPFC neurons. 
Such sharpening of spatial information in LPFC could provide a top-down signal to posterior sensory cortex to sharpen the neuronal selectivity related to processing the sensory information at that location and improve behavioral performance (Spitzer et al., 1988; Connor et al., 1997; Womelsdorf et al., 2006). This may provide a mechanism by which LPFC neurons could allocate executive resources in a goal-directed manner.

Materials and Methods

Subjects and neurophysiological procedures.

Subjects were two male rhesus monkeys (Macaca mulatta) that were 5–6 years of age and weighed 8–11 kg at the time of recording. We regulated the daily fluid intake of our subjects to maintain motivation on the task. Our methods for neurophysiological recording have been reported in detail previously (Wallis and Miller, 2003a). Briefly, we implanted both subjects with a head positioner for restraint, and two recording chambers, the positions of which were determined using a 1.5 T magnetic resonance imaging (MRI) scanner. We recorded simultaneously from DLPFC and VLPFC, as well as anterior cingulate cortex and orbitofrontal cortex, using arrays of 10–24 tungsten microelectrodes (FHC Instruments). The current study focuses on the results from DLPFC and VLPFC. In subject A, we recorded from DLPFC in both hemispheres and VLPFC in the right hemisphere. In subject B, we recorded from VLPFC in the left hemisphere and DLPFC in the right hemisphere. We recorded DLPFC neurons from the dorsal bank of the principal sulcus and 6 mm of cortex dorsal to the principal sulcus. We recorded VLPFC neurons from the ventral bank of the principal sulcus and 6 mm of cortex ventral to the principal sulcus. All recordings were anterior to the superior and inferior limbs of the arcuate sulcus (Fig. 1). Based on the position of our recording locations relative to sulcal landmarks, we estimate that our recordings in DLPFC were largely from areas 9 and 9/46d, while our recordings from VLPFC were largely from areas 9/46v and 45A, although potentially also including area 47/12. However, there is considerable interindividual variability in the position of sulcal landmarks, and so these estimates should serve only as guidelines.

Figure 1.

A, In the RS task, the subject sees two cues separated by a delay, the first of which indicates the amount of juice to expect for successful performance of the task, and the second of which he must maintain in spatial working memory to saccade to its location 1 s later. The fixation cue changes to yellow to tell the subject to initiate his saccade. The SR task is identical except the cues appear in the opposite order. There are five different reward amounts, each predicted by one of two cues, and 24 spatial locations. The inset lateral view of the macaque brain indicates the position of the recording chamber. B, Flattened reconstructions of the cortex indicating the locations of recorded neurons in DLPFC (red circles) and VLPFC (blue circles). The size of the circles indicates the number of neurons recorded at that location. We measured the anterior–posterior position from the interaural line (x-axis), and the dorsoventral position relative to the genu of the ventral bank of the principal sulcus and the lateral surface of the inferior convexity (0 point on y-axis). Gray shading indicates unfolded sulci. See Materials and Methods for details regarding the reconstruction of the recording locations. SA, Superior arcuate sulcus; IA, inferior arcuate sulcus; P, principal sulcus.

We determined the approximate distance to lower the electrodes from the MRI images and advanced the electrodes using custom-built, manual microdrives until they were located just above the cell layer. We then slowly lowered the electrodes into the cell layer until we obtained a neuronal waveform. We randomly sampled neurons and did not attempt to select neurons based on responsiveness, to reduce bias in the comparison of neuronal properties between the different brain regions. Waveforms were digitized and analyzed off-line (Plexon Instruments). All procedures were in accord with the National Institutes of Health guidelines and the recommendations of the University of California at Berkeley Animal Care and Use Committee.

We reconstructed our recording locations by measuring the position of the recording chambers using stereotactic methods. We plotted the positions onto the MRI sections using commercial graphics software (Adobe Illustrator). We confirmed the correspondence between the MRI sections and our recording chambers by mapping the position of sulci and gray and white matter boundaries using neurophysiological recordings. We traced and measured the distance of each recording location along the cortical surface from the genu of the ventral bank of the principal sulcus and the lateral surface of the inferior convexity. We also measured the positions of the other sulci relative to the principal sulcus in this way, allowing the construction of the unfolded cortical maps shown in Figure 1.

Behavioral task.

We used NIMH Cortex (http://www.cortex.salk.edu) to control the presentation of the stimuli and the task contingencies. We monitored eye position and pupil dilation using an infrared system with a sampling rate of 125 Hz (ISCAN). Each trial began with the subject fixating a central square cue 0.3° in width (Fig. 1). The subject had to maintain fixation within ±2° of the fixation cue throughout the trial until the fixation cue changed color, indicating that the subject could indicate his response. Failure to maintain fixation resulted in a 5 s "timeout" and the trial being aborted. After acquisition of fixation, two cues appeared sequentially, separated by a delay: one was a spatial location that the subject had to hold in working memory (the mnemonic stimulus), and one indicated to the subject how much reward he would receive for performing the task correctly (the reward-predictive cue). After a second delay, the fixation spot changed color, which indicated that the subject could make a saccade to the location of the mnemonic stimulus. Once the subject initiated his eye movement to indicate his response (the eye left the ±2° fixation window), he had 400 ms in which to saccade into the target location (within ±3° of the target location). If the target location was acquired within this 400 ms period, the subject then had to fixate the target for 150 ms. If the subject failed to acquire the target within 400 ms or failed to maintain fixation of the target (within ±3° of the target location) for 150 ms after target acquisition, we recorded an inaccurate response and terminated the trial without delivery of reward. We tested 24 spatial locations that formed a 5 × 5 matrix centered at fixation (the central position was occupied by the fixation cue), with each location separated by 4.5°. There were five different sizes of reward.
We signaled each reward amount with one of two pictures, which allowed us to distinguish neuronal responses to the visual properties of the picture from neuronal responses that encoded the reward predicted by the picture. We therefore had two picture sets, each with five pictures associated with five different amounts of reward. In the RS task, the first cue was the reward-predictive cue and the second cue was the spatial mnemonic stimulus, while the cues occurred in the opposite order in the SR task. We fully counterbalanced all experimental factors, and the different trial types, including RS and SR trials, were randomly intermingled. Subjects completed ∼600 correct trials per day.

Statistical methods.

We conducted all statistical analyses using MATLAB (MathWorks). We began by analyzing the subjects' behavior. For each recording session, we calculated the subjects' mean accuracy for each experimental condition. We defined this measure as the number of accurate responses relative to the total number of accurate and inaccurate responses. In addition, for each recording session we also calculated the proportion of trials on which the subject failed to maintain fixation relative to the total number of trials in the session. Finally, we calculated the subjects' reaction times, defined as the time from the fixation cue changing color to the time at which the subject's eye first left the fixation window. To analyze pupil dilation, we first determined the baseline pupil diameter by calculating the mean pupil diameter during the fixation period across all conditions. We then plotted the time course of the mean pupil diameter across various experimental conditions expressed as a percentage of its baseline diameter.
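The behavioral measures above can be summarized compactly. The original analyses were performed in MATLAB; the following is an illustrative Python sketch (function and variable names are our own, not the authors' code), assuming each trial is logged with an outcome label:

```python
import numpy as np

def session_behavior(outcomes, pupil_fixation, pupil_trace):
    """Summarize one session's behavior as described in the text.

    outcomes: per-trial labels, 'accurate' | 'inaccurate' | 'break_fixation'.
    pupil_fixation: pupil diameter samples from the fixation period (baseline).
    pupil_trace: pupil diameter samples from the period of interest.
    """
    outcomes = np.asarray(outcomes)
    n_accurate = np.sum(outcomes == "accurate")
    n_inaccurate = np.sum(outcomes == "inaccurate")
    n_break = np.sum(outcomes == "break_fixation")
    # Accuracy: accurate responses relative to accurate + inaccurate responses
    accuracy = n_accurate / (n_accurate + n_inaccurate)
    # Break fixations: relative to the total number of trials in the session
    break_rate = n_break / len(outcomes)
    # Pupil diameter expressed as a percentage of its fixation-period baseline
    baseline = np.mean(pupil_fixation)
    pupil_pct = 100.0 * np.asarray(pupil_trace, dtype=float) / baseline
    return accuracy, break_rate, pupil_pct
```

Note that break-fixation trials are excluded from the accuracy denominator but included in the break-fixation denominator, matching the definitions in the text.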

Our first step in analyzing the neuronal data was to visualize it by constructing spike density histograms. We calculated the mean firing rate of the neuron across the appropriate experimental conditions using a sliding window of 100 ms. We then analyzed neuronal activity in six predefined epochs, corresponding to the presentation of the two cues and the first and second half of each of the delays. We chose the epochs to ensure each was of equivalent size. For each neuron, we calculated its mean firing rate on each trial during each epoch. We used this information to visualize the mnemonic field of the neuron during each epoch. For each spatial location, we calculated the neuron's standardized firing rate by subtracting the mean firing rate of the neuron across all spatial locations and dividing by its SD. We then performed five iterations of a two-dimensional linear interpolation and plotted the resulting matrix on a pseudocolor plot.
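The two visualization steps above, sliding-window rate estimation and per-location standardization, can be sketched as follows (a Python illustration with our own names, not the authors' MATLAB code):

```python
import numpy as np

def spike_density(spike_times_s, t_grid_s, window_s=0.100):
    """Mean firing rate (Hz) in a sliding 100 ms window centered on each
    requested time point."""
    spikes = np.asarray(spike_times_s, dtype=float)
    return np.array([np.sum(np.abs(spikes - t) <= window_s / 2) / window_s
                     for t in t_grid_s])

def mnemonic_field(mean_rates_by_location):
    """Standardized rate per spatial location: subtract the mean across all
    locations and divide by the SD, as used for the pseudocolor plots."""
    r = np.asarray(mean_rates_by_location, dtype=float)
    return (r - r.mean()) / r.std()
```

The standardized field would then be upsampled by repeated two-dimensional linear interpolation before plotting.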

To determine whether a neuron encoded an experimental factor, we used linear regression to quantify how well the experimental manipulation predicted the neuron's firing rate. Before conducting the regression, we standardized our dependent variable by subtracting the mean from each data point, and dividing each data point by the SD of the distribution, and we centered the independent variables. We evaluated the significance of selectivity at the single neuron level using an α level of 0.01. We performed these analyses for each neuron and each epoch in turn. We also included an estimate of the prevalence of neurons that showed selectivity in any of the epochs (see Table 1). This potentially overestimates the true prevalence of such neurons, since it requires multiple statistical comparisons (one statistical test for each of the epochs under consideration). However, it does serve as a useful estimate of the likely upper limit of the prevalence of selectivity in our neuronal populations.
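For the single-predictor case, the selectivity test described above reduces to an ordinary linear regression on standardized firing rates with a centered predictor. A minimal Python sketch (names are ours; the authors used MATLAB):

```python
import numpy as np
from scipy import stats

def is_selective(firing_rates, predictor, alpha=0.01):
    """Single-predictor sketch of the selectivity test: standardize the
    firing rates, center the predictor, and evaluate the regression at
    alpha = 0.01.  Returns (selective, r_squared)."""
    y = np.asarray(firing_rates, dtype=float)
    y = (y - y.mean()) / y.std(ddof=1)   # standardize dependent variable
    x = np.asarray(predictor, dtype=float)
    x = x - x.mean()                     # center independent variable
    fit = stats.linregress(x, y)
    return fit.pvalue < alpha, fit.rvalue ** 2
```

Standardizing and centering change the regression coefficients but not the significance test, so the same p-value criterion applies per neuron and per epoch.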

Table 1.

Percentage of neurons in VLPFC and DLPFC that encode different experimental parameters during the first cue and delay epochs

                      Cue 1        Delay 1a     Delay 1b     Any epoch
                      D     V      D     V      D     V      D     V
    Reward            8     26     16    31     16    30     30    60
    Space             14    30     13    31     11    26     21    47
    Reward and space  0     11     2     11     1     8      2     19

Every proportion is significantly greater in VLPFC (V) than DLPFC (D) (χ2 test, p < 0.05).

We first examined how neurons encoded reward information by analyzing neuronal activity during the first cue and delay epochs of the RS task. To quantify whether a neuron encoded reward information, we performed a linear regression on the neuron's mean firing rate (F) during the first cue and delay epochs of the RS condition (i.e., when the subject only had reward information available) using the size of the reward predicted by the cue as the predictive variable (P1). We classified a neuron as reward selective if the equation F = b0 + b1P1 significantly predicted the neuron's firing rate. To examine the time course of reward selectivity we performed a “sliding” regression analysis to calculate at each time point whether the expected reward size significantly predicted the neuron's firing rate. We fit the regression equation to neuronal firing for overlapping 200 ms windows, beginning with the first 200 ms of the fixation period and then incrementing the window in 10 ms steps until we had analyzed the entire trial. If the equation significantly predicted the neuron's firing rate for three consecutive time bins (evaluated at p < 0.005), then we took the first of these time bins as the neuron's latency. We used the r2 value to calculate the percentage of the variance (PEV) in the neuron's firing rate that was explained by the size of the expected reward (PEVreward).
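The sliding-regression latency criterion above (overlapping windows, three consecutive bins at p < 0.005, PEV from r²) can be sketched in Python as follows (our own names; an illustration, not the authors' code):

```python
import numpy as np
from scipy import stats

def selectivity_latency(window_rates, predictor, n_consecutive=3, alpha=0.005):
    """Sliding-regression sketch.  window_rates is (n_windows, n_trials): one
    row of mean firing rates per overlapping 200 ms window, stepped by 10 ms.
    Returns (index of the first of n_consecutive significant windows, PEV at
    that window), or (None, None) if the criterion is never met."""
    x = np.asarray(predictor, dtype=float)
    pvalues, pevs = [], []
    for y in np.asarray(window_rates, dtype=float):
        fit = stats.linregress(x, y)
        pvalues.append(fit.pvalue)
        pevs.append(fit.rvalue ** 2)   # r^2 = proportion of explained variance
    significant = np.asarray(pvalues) < alpha
    for i in range(len(significant) - n_consecutive + 1):
        if significant[i:i + n_consecutive].all():
            return i, pevs[i]
    return None, None
```

The returned index maps back to trial time via the 10 ms step size and the window onset.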

To examine how neurons encoded spatial information, we analyzed neuronal activity during the first cue and delay epochs of the SR task. To quantify whether a neuron encoded spatial information we performed a linear regression using the neuron's mean firing rate (F) during the first cue and delay epochs of the SR condition as the dependent variable and the X and Y coordinates of the spatial cue as two predictive variables (P1 and P2, respectively). Thus, we classified a neuron as spatially selective if the equation F = b0 + b1P1 + b2P2 significantly predicted the neuron's firing rate. Two types of neuronal selectivity were evident in our population. In many neurons it took the form that one typically expects of a selective neuron, that is, a low firing rate to the majority of the locations and a high firing rate to a specific location. We refer to such neurons as exhibiting “standard selectivity.” However, in some neurons it consisted of the opposite pattern, that is, a high firing rate to the majority of the locations and a low firing rate to a specific location. We refer to these neurons as showing “inverse selectivity.” For every spatially selective neuron, we determined which of these forms of selectivity it exhibited by calculating the percentage of locations that the neuron's firing rate exceeded its mean firing rate across all locations. To examine the time course of spatial selectivity we performed a sliding regression analysis analogous to that used to calculate the time course of reward selectivity. We used the r2 value to calculate the percentage of the variance in the neuron's firing rate that was explained by the spatial location of the mnemonic cue (PEVspace). This metric was independent of the two encoding schemes by which the neurons encoded spatial information.
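The classification of spatially selective neurons into the two encoding schemes rests on a simple majority rule over locations, which can be sketched as (illustrative Python, our own names):

```python
import numpy as np

def selectivity_type(mean_rates_by_location):
    """Classify a spatially selective neuron: 'standard' if its firing rate
    exceeds the across-location mean at a minority of locations (a preferred
    location against a low baseline), 'inverse' if it exceeds the mean at a
    majority of locations (a suppressed location against a high baseline)."""
    r = np.asarray(mean_rates_by_location, dtype=float)
    fraction_above_mean = np.mean(r > r.mean())
    return "inverse" if fraction_above_mean > 0.5 else "standard"
```

Because PEV is computed from r² regardless of the sign of the regression weights, it is insensitive to which of the two schemes a neuron uses.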

For the time course analyses, we determined our criterion by examining how many neurons reached the criterion during a baseline period consisting of a 500 ms epoch centered in the middle of the fixation period preceding the onset of the first cue. At this stage of the task, the subject has no information about the upcoming trial and so any neurons that reached criterion must have done so by chance. Consequently, we can use this information to determine the false alarm rate of our criterion. For both the reward time course and spatial time course we calculated the proportion of neurons during the baseline period where the significance of the regression equation exceeded p < 0.005 for three consecutive time bins. We repeated this using significance levels of p < 0.01 and p < 0.05. For the encoding of reward information, 2% of neurons reached the criterion using p < 0.005. If we evaluated our criterion at p < 0.01 our false alarm rate increased to 3.7%, while a criterion of p < 0.05 yielded a false alarm rate of 20%. We obtained similar values for the encoding of spatial information (p < 0.005 = 3.2%, p < 0.01 = 5.2%, p < 0.05 = 21%). Thus, we used p < 0.005 as our criterion, since this yielded a reasonable false alarm rate that was <5% for both the spatial and reward time course.
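The false-alarm estimate above simply counts how many neurons reach the consecutive-bin criterion during the baseline, when no selectivity can be genuine. A Python sketch (our own names; an illustration of the logic, not the authors' code):

```python
import numpy as np

def false_alarm_rate(baseline_pvalues, alpha, n_consecutive=3):
    """Fraction of neurons reaching the selectivity criterion (n_consecutive
    windows with p < alpha) during the baseline fixation period, when no trial
    information is available -- i.e., purely by chance."""
    hits = 0
    for pvals in np.asarray(baseline_pvalues, dtype=float):
        significant = pvals < alpha
        if any(significant[i:i + n_consecutive].all()
               for i in range(len(significant) - n_consecutive + 1)):
            hits += 1
    return hits / len(baseline_pvalues)
```

Sweeping alpha over candidate thresholds (0.005, 0.01, 0.05) and choosing the largest value that keeps this rate under 5% reproduces the selection procedure described above.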

To determine the exact manner by which spatial and reward information interacted, we focused on the second cue and delay epochs. For each task and each epoch we calculated the strength of spatial selectivity when the expected reward was small (i.e., when the two smallest rewards were expected) and contrasted this to when the expected reward was large (i.e., when the two largest rewards were expected). We grouped the trials in this way to ensure that there were sufficient trials for each spatial location to permit an analysis with sufficient statistical power. We calculated our measure of spatial selectivity (PEVspace) for each of the two groups of trials by performing a linear regression using the neuron's mean firing rate (F) as the dependent variable and the X and Y coordinates of the spatial cue as two predictive variables (P1 and P2, respectively), using the equation F = b0 + b1P1 + b2P2. We then compared this value when the subject expected a large reward to when they expected a small reward. Using PEVspace as a measure of neuronal selectivity is particularly useful when comparing neurons in different brain areas since it is independent of the neuron's firing rate and dynamic firing range (Pasupathy and Miller, 2005; Sugase-Miyamoto and Richmond, 2005).
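The contrast above, spatial PEV on small-reward versus large-reward trials, can be sketched as follows (illustrative Python with our own names, assuming per-trial firing rates, cue coordinates, and reward sizes):

```python
import numpy as np

def pev_space(firing_rates, cue_xy):
    """r^2 from regressing firing rate on the X and Y coordinates of the cue."""
    y = np.asarray(firing_rates, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(cue_xy, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

def reward_modulation(firing_rates, cue_xy, reward_sizes):
    """Contrast spatial PEV on small-reward trials (two smallest reward sizes)
    with large-reward trials (two largest reward sizes)."""
    rates = np.asarray(firing_rates, dtype=float)
    xy = np.asarray(cue_xy, dtype=float)
    rewards = np.asarray(reward_sizes)
    sizes = np.sort(np.unique(rewards))
    small = np.isin(rewards, sizes[:2])
    large = np.isin(rewards, sizes[-2:])
    return pev_space(rates[small], xy[small]), pev_space(rates[large], xy[large])
```

Because PEV is a proportion of variance rather than a raw rate difference, the small-reward and large-reward values are directly comparable across neurons with different firing rates and dynamic ranges.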

Results

We recorded the activity of 200 neurons from DLPFC (142 from subject A and 58 from subject B) and 201 neurons from VLPFC (141 from subject A and 60 from subject B). We collected the data across 33 recording sessions for subject A and 12 recording sessions for subject B.

Behavior

We examined how our behavioral measures varied across trial types using a three-way ANOVA with the factors of Reward size, Set (which picture set the reward-predictive cues were from), and Task (RS or SR). Figure 2A shows the mean accuracy of subjects A and B. Subject A was 84% accurate across 33 recording sessions, while subject B was 88% accurate across 12 recording sessions. Both subjects showed a significant interaction between Reward and Task (A: F(4,640) = 3.2, p < 0.05; B: F(4,220) = 2.9, p < 0.05). An analysis of the simple effects showed that in both subjects increasing reward significantly improved accuracy on the RS task (A: F(4,640) = 13, p < 5 × 10−6; B: F(4,220) = 5.2, p < 0.001) but not the SR task (A: F(4,640) = 1.1, p > 0.1; B: F(4,220) = 1.2, p > 0.1). Furthermore, accuracy was consistently better on the RS task compared with the SR task (A: F(1,640) > 25, p < 5 × 10−6 for all five reward amounts; B: F(1,220) > 5, p < 0.01 for all but the second lowest reward amount). Figure 2B shows the mean percentage of trials where the subject failed to maintain fixation for the duration of the trial. This occurred on 16% of the trials for subject A and 12% of the trials for subject B. Both subjects showed a significant interaction between Reward size and Task (A: F(4,640) = 4.5, p < 0.01; B: F(4,220) = 4.2, p < 0.01). An analysis of the simple effects revealed that for both subjects, increasing reward size significantly decreased the number of break fixations for both tasks (A: F(4,640) > 28, p < 1 × 10−16 for both tasks; B: F(4,220) > 12, p < 5 × 10−7), although the effect was consistently stronger for the RS task than the SR task. In summary, for both subjects there was evidence that increasing the size of expected reward led to an improvement in behavioral performance by decreasing break fixations (both tasks) or increasing accuracy (RS task only).

Figure 2.

The behavioral performance (mean ± SEM) of the two subjects as a function of the size of the expected reward. A–C, The darker lines indicate trials where the reward-predictive cue was from picture set A, while the lighter lines indicate trials where the reward-predictive cue was from picture set B.

The effect of reward on reaction times showed more variability between the subjects (Fig. 2C). Subject A showed a significant interaction between Reward size and Task (F(4,16316) = 67.5, p < 1 × 10−15). An analysis of the simple effects showed that increasing reward size led to progressively faster reaction times on the RS task, but slower reaction times on the SR task. For subject B there was a main effect of Reward (F(4,5535) = 20, p < 1 × 10−15). A post hoc trend analysis revealed that the relationship was nonlinear, with a significant quadratic effect (F(1,5535) = 54, p < 1 × 10−12) and a nonsignificant linear effect (F(1,5535) = 2.8, p > 0.05). Subject B also showed a main effect of Task (F(1,5535) = 64, p < 1 × 10−14), reflecting significantly faster reaction times on the SR task. In summary, both subjects showed systematic effects of reward size and task on reaction times, although the nature of these effects differed between the two subjects.

As a further measure of the effect of reward size on our subjects, we monitored pupil dilation across the course of the trial. The sympathetic nervous system controls pupil dilation and consequently it can be used as a measure of the level of arousal in the autonomic nervous system (Hess and Polt, 1960). Figure 3 shows the time course of pupil diameter changes across the course of the trial for the different tasks and subjects. We have plotted the data grouped according to the magnitude of reward predicted by the reward cue. During the presentation of the reward-predictive cue itself, there is little relationship between pupil diameter and the size of the predicted reward. Instead, the luminance of the cue appears to control pupil diameter, as evidenced by the lack of consistency between the two picture sets (the pictures were not luminance matched). However, once the cues disappear, a consistent relationship emerges for both picture sets: there is a positive relationship between the size of the reward that the pictures predict and the diameter of the pupil. This suggests that the size of the expected reward is driving pupil dilation, rather than the visual properties of the pictures, since it is the size of the expected reward that is consistent across both picture sets.

Figure 3.

Percentage change in pupil diameter (relative to the mean value during the baseline fixation period) for each subject, task order and picture set. The gray bars indicate the presentation of the first and second cue. During the initial presentation of the reward-predictive cues, there is no relationship between pupil diameter and the size of reward predicted by the cue. After the pictures, however, a monotonic relationship develops, with increasing reward producing greater pupil dilation, consistent with an arousing effect on the autonomic nervous system.

To quantify these effects we performed a two-way ANOVA on the mean pupil diameter during defined epochs of the trial, with factors of Reward (the size of the predicted reward) and Set (which picture set the reward-predictive cue was from). We focused on two epochs. The first epoch extended from 100 ms after the onset of the reward-predictive cue until 100 ms after its offset. This epoch captured the presentation of the reward-predictive cue while allowing for the latency of the pupil response. The second epoch consisted of the last 500 ms of the second delay, which ensured that the pupil diameter had recovered from luminance-induced changes. We performed this analysis for each task and each subject in turn. During the presentation of the reward-predictive cues, both subjects showed a significant interaction between Reward and Set for both the RS (A: F(4,9443) = 82, p < 1 × 10−15; B: F(4,3382) = 11, p < 1 × 10−8) and SR (A: F(4,8292) = 61, p < 1 × 10−15; B: F(4,3090) = 4.2, p < 0.01) task orders. This pattern of results indicated that the visual properties of specific cues controlled the pupil diameter, consistent with a luminance-induced change. A different pattern emerged by the end of the second delay. Subject B showed a significant main effect of Reward, with no other main effects or interactions, for both the RS (F(4,3382) = 2.9, p < 0.05) and SR (F(4,3090) = 17, p < 1 × 10−12) tasks. Such a pattern is consistent with an increase in arousal caused by the expectancy of receiving a reward. The specific identity of the pictures no longer affects pupil size; rather, pupil diameter is explained solely by the size of the expected reward. Thus, our analysis of subject B's pupil diameter appears to confirm that he knew the size of reward predicted by the pictures.

The data from subject A were less clear, since we saw a significant Task × Set interaction for the RS (F(4,9443) = 6.3, p < 0.0001) and SR (F(4,8292) = 2.8, p < 0.01) task orders. However, closer inspection of Figure 3 reveals that this was largely due to a single modest anomaly: subject A appeared to transpose the two most valuable rewards in the first set of cues. To verify this, we flipped these cues in the analysis by recoding conditions from the first set of pictures so that the cue predicting the fourth largest reward was recoded as if it predicted the largest reward and vice versa. There was now a highly significant main effect of reward for the RS (F(4,9443) = 68, p < 1 × 10−15) and SR (F(4,8292) = 84, p < 1 × 10−15) tasks, with no other main effects or interactions. Thus, with the exception of a single transposition, our analysis of subject A's pupil diameter also appears to confirm that he knew the size of reward predicted by the pictures, although we acknowledge that there is a certain degree of circularity inherent in our explanation of this anomaly.

Initial encoding of spatial and reward information

We examined the ability of neurons to encode spatial or reward information independently by focusing on the first cue and delay epochs of the SR and RS tasks, respectively. Figure 4A illustrates a VLPFC neuron that encodes the size of the reward in the RS task, showing an increase in firing rate as the size of the predicted reward increases, but shows no selectivity to the mnemonic cue. We determined the proportion of reward-selective neurons in the first three epochs of the RS task: the first cue epoch (cue 1) and the two delay epochs corresponding to each half of the first delay (delay 1a and delay 1b). In every epoch, there were significantly more reward-selective neurons in VLPFC than DLPFC (Table 1). In total, 121 of 201 (60%) VLPFC neurons exhibited reward selectivity in at least one of the three epochs compared with 59 of 200 (30%) DLPFC neurons (χ2 = 37, p < 5 × 10−8). In VLPFC, approximately equal numbers of neurons showed a positive relationship and a negative relationship between firing rate and reward value (binomial test, p > 0.05 in all three epochs). In DLPFC, however, during both the cue epoch and the second half of the delay, significantly more of the reward-selective neurons had a negative relationship between firing rate and expected reward as opposed to a positive relationship (cue 1: 76 vs 24%, binomial test, p < 0.05; delay 1b: 73 vs 27%, binomial test, p < 0.01).

Figure 4.

A, Spike density histogram from a VLPFC neuron that encodes reward information during the RS task, but does not encode spatial information during the SR task. The spike density histogram for the RS task shows the neuron's activity sorted by reward size, whereas the spike density histogram for the SR task shows the same neuron's activity sorted by the location of the mnemonic cue. To enable clear visualization we have collapsed the spatial data into four groups. Each group consists of 6 of the 24 locations as indicated by the spatial key. The inset indicates the mean standardized firing rate of the neuron across the 24 spatial locations from the epoch that elicited the maximum spatial selectivity (cue 1, delay 1a, or delay 1b). The gray bars indicate the presentation of the first and second cue. B, A VLPFC neuron that encodes spatial information during the SR task, but does not encode reward information during the RS task. C, A VLPFC neuron that encodes reward information during the cue epoch of the RS task, and spatial information during the cue epoch of the SR task. D, A VLPFC neuron that encodes reward information during the delay epoch of the RS task and spatial information during the delay epoch of the SR task.

We also investigated the effect of a more complete regression model that included the picture set from which the reward-predictive cue was drawn as a second dummy predictive variable (P2) in addition to reward size (P1). We used the linear equation F = b0 + b1P1 + b2P2 + b3P1P2. On average, just 3% of the neurons showed a significant predictive relationship between their firing rate and either the picture set or the interaction between picture set and reward size (cue 1: 4%, delay 1a: 3%, delay 1b: 4%). This indicates that the firing rate of most neurons was driven by the size of the reward indicated by the reward-predictive cue, rather than the visual properties or identity of the reward-predictive cue.
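This model can be fit per neuron with ordinary least squares. The sketch below is an illustrative reconstruction, not the authors' analysis code: `firing_rate`, `reward_size`, and `picture_set` are hypothetical trial-level arrays standing in for the epoch firing rate F and the predictors P1 and P2.

```python
import numpy as np

def fit_reward_model(firing_rate, reward_size, picture_set):
    """Fit F = b0 + b1*P1 + b2*P2 + b3*P1*P2 by ordinary least squares.

    firing_rate : (n_trials,) firing rate in the epoch of interest
    reward_size : (n_trials,) P1, size of the predicted reward
    picture_set : (n_trials,) P2, dummy variable coding the picture set
    Returns the coefficient vector [b0, b1, b2, b3].
    """
    P1 = np.asarray(reward_size, dtype=float)
    P2 = np.asarray(picture_set, dtype=float)
    X = np.column_stack([np.ones_like(P1), P1, P2, P1 * P2])  # design matrix
    y = np.asarray(firing_rate, dtype=float)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Toy neuron driven only by reward size, so b2 and b3 should be near zero
rng = np.random.default_rng(0)
reward = rng.integers(1, 6, size=200)        # five reward sizes
pics = rng.integers(0, 2, size=200)          # two picture sets
rate = 5.0 + 2.0 * reward + rng.normal(0.0, 0.5, size=200)
b0, b1, b2, b3 = fit_reward_model(rate, reward, pics)
```

Testing the fitted b2 and b3 against zero is what separates coding of reward size from coding of the cue's visual identity.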

To investigate the latency at which neurons encoded reward information, we used the sliding regression analysis to calculate at each time-point the percentage of variance in the neuron's firing rate that we could attribute to the size of the expected reward (PEVreward). We focused on those neurons that reached criterion during the first cue epoch (Fig. 5A). The selective VLPFC neurons tended to encode reward information earlier in the cue epoch than the selective DLPFC neurons (VLPFC: median 250 ms, interquartile range 165 ms; DLPFC: median 310 ms, interquartile range 220 ms, Wilcoxon's rank-sum test, p < 0.05). In addition, VLPFC neurons tended to maintain reward information more strongly once the first cue disappeared (Fig. 5B).
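A sliding-window version of this analysis might look like the following sketch. It is illustrative only, not the paper's code: the window count, the 0.1 criterion, and the toy data are placeholder assumptions, and PEV is taken here as the R² of windowed firing rate regressed on reward size, with latency defined as the first window exceeding the criterion.

```python
import numpy as np

def sliding_pev(spike_counts, reward_size, criterion=0.1):
    """PEV (here, R^2) of reward size in each sliding time window.

    spike_counts : (n_trials, n_windows) windowed firing rates
    reward_size  : (n_trials,) expected reward on each trial
    Returns (pev, latency): PEV per window and the index of the first
    window where PEV exceeds `criterion` (None if it never does).
    """
    x = np.asarray(reward_size, dtype=float)
    x = x - x.mean()
    n_windows = spike_counts.shape[1]
    pev = np.zeros(n_windows)
    for w in range(n_windows):
        y = spike_counts[:, w] - spike_counts[:, w].mean()
        denom = (x ** 2).sum() * (y ** 2).sum()
        if denom > 0:
            pev[w] = (x @ y) ** 2 / denom  # squared correlation = R^2
    above = np.flatnonzero(pev > criterion)
    return pev, (int(above[0]) if above.size else None)

# Toy neuron whose reward coding switches on at window 10
rng = np.random.default_rng(1)
reward = rng.integers(1, 6, size=120)                # five reward sizes
counts = rng.normal(10.0, 1.0, size=(120, 30))       # baseline noise
counts[:, 10:] += 1.5 * reward[:, None]              # reward signal from window 10 on
pev, latency = sliding_pev(counts, reward)
```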

Figure 5.

A, Histogram of neuronal latencies for encoding reward information in DLPFC (top) and VLPFC (middle). Bottom panel indicates the cumulative percentage of the selective neurons that have reached the criterion for encoding reward information. Asterisks indicate that the proportions in the two areas were significantly different from one another (χ2 test, p < 0.05). The selective VLPFC neurons tended to exhibit reward selectivity earlier in the cue epoch than the selective DLPFC neurons. B, Time course of reward selectivity averaged across the DLPFC and VLPFC neuronal populations. We performed a t test at each time point comparing DLPFC and VLPFC selectivity. The asterisk and horizontal line indicate those time points where VLPFC selectivity was significantly stronger than DLPFC selectivity (p < 0.05). VLPFC maintains reward information across the delay relative to DLPFC. C, As Figure 5A, except for spatial information. The selective VLPFC neurons tended to exhibit spatial selectivity earlier in the cue epoch than the selective DLPFC neurons. D, As Figure 5B, except for spatial information. VLPFC encodes spatial information more strongly than DLPFC in both the cue and delay epochs.

Figure 4B illustrates a VLPFC neuron that encodes spatial location during the SR task, showing a higher firing rate through the delay epoch when the subject had to remember cues presented in the lower left quadrant of the screen, but little selectivity to the reward-predictive cue. We determined the proportion of spatially selective neurons in the first three epochs of the SR task. In every epoch, there were significantly more spatially selective neurons in VLPFC than DLPFC (Table 1). In total, 94/201 (47%) of VLPFC neurons exhibited spatial selectivity in at least one of the three epochs compared with 43/200 (21%) of DLPFC neurons (χ2 = 27, p < 5 × 10−6). The majority of the neurons (cue 1: 89%, delay 1a: 90%, delay 1b: 84%) showed standard selectivity, while the remainder showed inverse selectivity (see Materials and Methods). There was no difference between the two areas in the prevalence of these two encoding schemes (χ2<1, p > 0.1 for all three epochs).

We investigated the latency at which neurons encoded spatial information using the sliding regression analysis to calculate PEVspace and focusing on those neurons that reached criterion during the first cue epoch (Fig. 5C). The selective VLPFC neurons tended to encode spatial information earlier in the cue epoch than the selective DLPFC neurons (VLPFC: median 180 ms, interquartile range 113 ms; DLPFC: median 240 ms, interquartile range 185 ms, Wilcoxon's rank-sum test, p < 0.05). VLPFC neurons encoded spatial information more strongly than DLPFC neurons during the presentation of the cue as well as the majority of the subsequent delay (Fig. 5D). Note that the latency for encoding spatial information was earlier than the latency for reward information, a difference that was significant in VLPFC (Wilcoxon's rank-sum test, p < 0.001) but not DLPFC (Wilcoxon's rank-sum test, p > 0.1). This time difference could reflect the time necessary to identify the picture and recall the size of reward that the picture predicts.

Figure 4, C and D, illustrate two VLPFC neurons that exhibit both reward selectivity in the RS task and spatial selectivity in the SR task. The neuron in Figure 4C exhibits selectivity during the first cue epoch, showing an increased firing rate to large rewards in the RS task and spatial cues in the lower left of the screen in the SR task. The neuron in Figure 4D exhibits selectivity during the first delay epoch, showing an increased firing rate to large rewards in the RS task and spatial cues in the lower right of the screen in the SR task. The proportion of such neurons was significantly higher in VLPFC than DLPFC (Table 1). While the low proportions in Table 1 might appear to suggest that separate populations of neurons encode reward and space, they lie within the range that one would expect based on the incidence of reward and space encoding independently. Thus, in VLPFC in any given epoch approximately a third of the neurons encode reward and a third encode space, and we find ∼11% encode both pieces of information (0.33 × 0.33 = 0.11). In DLPFC, where ∼12% encode reward and 12% encode space, we find ∼1% encode both reward and space. Thus, there was no evidence for either separate or specialized neuronal populations for encoding reward and space.
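The independence expectation in the preceding paragraph is simple arithmetic: if reward coding and space coding are assigned independently, the expected proportion of neurons carrying both is the product of the marginal proportions. A minimal check, using the approximate marginals quoted in the text:

```python
# Expected joint incidence under independence: P(both) = P(reward) * P(space)
p_reward_vl, p_space_vl = 0.33, 0.33   # VLPFC marginals from the text
p_reward_dl, p_space_dl = 0.12, 0.12   # DLPFC marginals from the text

expected_vl = p_reward_vl * p_space_vl  # close to the ~11% observed in VLPFC
expected_dl = p_reward_dl * p_space_dl  # close to the ~1% observed in DLPFC
```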

To summarize, we observed neurons that encoded the reward-predictive cue and the mnemonic cue in both DLPFC and VLPFC. However, such neurons were more prevalent in VLPFC, and the encoding in VLPFC was stronger, earlier, and sustained even in the absence of the mnemonic or reward-predictive cue, indicative of VLPFC participating in both spatial and reward working memory.

Reward modulation of spatial selectivity

To determine how neurons in the two brain areas integrated spatial and reward information, we first focused on the second cue and delay epochs in the RS task. We compared the strength of spatial encoding when the expected reward was small (the two smallest rewards were expected) with when the expected reward was large (the two largest rewards were expected). We grouped the trials in this way to ensure that there were sufficient trials for each spatial location to enable us to calculate spatial selectivity. Figure 6, A and B, shows two examples of neurons whose activity was influenced by both the spatial location of the cue and the amount of predicted reward in the RS task. The neuron in Figure 6A shows stronger spatial selectivity during the presentation of the spatial cue when the subject expects a large reward than when the subject expects a small reward. The neuron showed inverse selectivity, with a low firing rate when the spatial cue appeared in the lower left of the screen compared with higher firing rates when the spatial cue appeared at other locations. The neuron in Figure 6B shows strong spatial selectivity during the presentation of the spatial cue when the subject expects a large reward, but virtually no spatial selectivity when the subject expects a small reward. The selectivity consisted of a higher firing rate when the spatial cue appeared in the top right of the screen relative to other locations.

Figure 6.

A, Spike density histogram from a VLPFC neuron during the RS task for different spatial locations of the mnemonic cue when the subject expects one of the two smallest rewards (top) or one of the two largest rewards (bottom). Conventions are otherwise the same as Figure 4. When the subject expects one of the two larger rewards, this neuron shows a lower firing rate when the mnemonic cue is in the lower left of the screen compared with other locations. The neuron's firing rate shows less discrimination between the different locations when the subject expects one of the two smaller rewards. B, The activity of a VLPFC neuron during the RS task. When the subject expects one of the two larger rewards, this neuron shows a high firing rate when the mnemonic cue appears in the top left of the screen relative to the other locations. The neuron's firing rate shows little discrimination between the different locations when the subject expects one of the two smaller rewards. C, The activity of a VLPFC neuron during the SR task. When the subject expects one of the two larger rewards the neuron shows a higher firing rate when the mnemonic cue appears in the top left of the screen and a lower firing rate when the mnemonic cue appears in the bottom right of the screen. However, when the subject expects one of the two smaller rewards, the neuron's firing rate shows little difference across the different locations. D, The activity of a VLPFC neuron during the SR task. When the subject expects one of the two smaller rewards, this neuron shows a high firing rate when the mnemonic cue appears in the lower left of the screen. This response is much reduced when the subject expects one of the two larger rewards.

To determine how reward information affected spatial selectivity across the neuronal population during the RS task, we calculated each neuron's spatial selectivity (PEVspace) when the subject expected the two smallest rewards, and compared this to the spatial selectivity when the subject expected the two largest rewards. We did this for each of the three epochs during the second cue and delay. We focused only on those neurons where the spatial location significantly predicted the neuronal firing rate for at least one of the two reward conditions (large or small), evaluated at p < 0.01. There was consistently stronger spatial selectivity in both VLPFC and DLPFC neurons when the subject expected one of the two larger rewards as opposed to one of the two smaller rewards (Fig. 7, top). We note that this effect could have arisen due to an increase in spatial selectivity for larger rewards, a decrease in spatial selectivity for smaller rewards or a combination of both of these effects.
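The population comparison described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's analysis code: spatial PEV is computed here as the R² of a one-way model of firing rate on cue location, separately for small-reward and large-reward trials, and the per-neuron PEV differences would then be compared with a t test. The function names and toy data are hypothetical.

```python
import numpy as np

def spatial_pev(rate, location):
    """R^2 of a one-way (categorical) model of firing rate on cue location."""
    rate = np.asarray(rate, dtype=float)
    location = np.asarray(location)
    grand = rate.mean()
    ss_total = ((rate - grand) ** 2).sum()
    ss_between = 0.0
    for loc in np.unique(location):
        group = rate[location == loc]
        ss_between += len(group) * (group.mean() - grand) ** 2
    return 0.0 if ss_total == 0 else ss_between / ss_total

def paired_t(diffs):
    """t statistic for a paired comparison of per-neuron PEV values vs zero."""
    d = np.asarray(diffs, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Toy neuron: spatial tuning is steeper on large-reward trials
rng = np.random.default_rng(2)
loc = rng.integers(0, 4, size=100)                      # four grouped locations
rate_large = 2.0 * loc + rng.normal(0.0, 1.0, size=100)  # large-reward trials
rate_small = 0.5 * loc + rng.normal(0.0, 1.0, size=100)  # small-reward trials
pev_large = spatial_pev(rate_large, loc)
pev_small = spatial_pev(rate_small, loc)
```

Across a population, `paired_t` applied to the vector of `pev_large - pev_small` differences would indicate whether reward reliably modulates spatial selectivity.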

Figure 7.

Mean spatial selectivity in VLPFC and DLPFC for the RS task (top) and SR task (bottom) when the subject expects one of the two smallest or two largest rewards. Asterisks indicate that the difference between the two reward conditions is significant, evaluated using a t test at p < 0.05. The number underneath the bars indicates the number of neurons included in the analysis.

We next examined how neurons in the two brain areas integrated spatial and reward information during the second cue and delay epochs in the SR task. Figure 6, C and D, illustrates two examples of neurons that encoded spatial and reward information in the SR task. The neuron in Figure 6C encodes locations in the top left of the screen throughout the first delay. However, during the second cue and delay epochs, the spatial selectivity depends on the size of reward predicted by the cue. The neuron maintains spatial information when the cue predicts one of the two larger rewards, but shows reduced selectivity when the cue predicts one of the two smaller rewards. Unlike the RS task, where spatial selectivity generally increased with larger rewards, some neurons in the SR task clearly demonstrated increased spatial selectivity with smaller rewards. The neuron in Figure 6D illustrates one such example. It encodes the bottom left location during the first delay epoch. However, this spatial selectivity dramatically increases during the second cue epoch, but only if the cue predicts one of the two smaller rewards. We performed the same analysis comparing spatial selectivity across the neuronal populations when the subjects expected either a small or a large reward (Fig. 7, bottom). In general, spatial selectivity was larger when the subject expected a smaller reward, and this difference reached significance in the first half of the delay for the DLPFC neurons, and during the second half of the delay for the VLPFC neurons.

Difference in neuronal selectivity within VLPFC

We examined whether there was evidence for a dissociation of functional properties within VLPFC. In particular, we contrasted activity in and around the ventral bank of the principal sulcus (VS), with more ventral regions that constitute the inferior convexity (VIC). We wanted to see whether there would be more spatially selective neurons in VS since investigators frequently consider this area of cortex as part of DLPFC. We defined VS neurons as those in the ventral bank of the principal sulcus or within 2 mm of the ventral bank (comprising 91 neurons), and VIC neurons as those that were >2 mm ventral to the ventral bank (comprising 110 neurons). We determined the proportion of neurons encoding spatial or reward information during the first cue and delay epochs of the SR and RS tasks (Table 2). We found no significant differences between the areas in the proportion of neurons encoding the reward, but spatially selective neurons were more prevalent in VIC than VS.

Table 2.

Percentage of neurons in VS and VIC that encode different experimental parameters during the first cue and delay epochs

                     Cue 1       Delay 1a    Delay 1b    Any epoch
                     VS   VIC    VS   VIC    VS   VIC    VS   VIC
Reward               20   32     27   33     30   30     55   65
Space                20   38     24   36     19   31     35   56
Reward and space      5   15      8   13      7    9     12   24

The numbers in bold indicate that the proportion in VIC is significantly greater than in VS (χ2 test, p < 0.05).

Discussion

We found stronger encoding of spatial and reward information in the cortex ventral to the principal sulcus (VLPFC) relative to cortex dorsal to the principal sulcus (DLPFC). The initial encoding of information occurred earlier in VLPFC and a greater proportion of the neurons encoded the information. Furthermore, VLPFC neurons maintained space and reward information in working memory across the delay more strongly than DLPFC neurons. Once both pieces of information were available, VLPFC appeared to integrate them. In the RS task, anticipation of a larger reward increased spatial selectivity during the cue period and at the beginning of the delay, while in the SR task anticipation of a larger reward decreased spatial selectivity at the end of the delay.

Functional properties of VLPFC and DLPFC

Previous studies emphasized the role of DLPFC in encoding spatial information in working memory (Goldman-Rakic, 1996). In part, this was driven by the domain-specific working memory hypothesis regarding the functional organization of prefrontal cortex, which argued that different prefrontal areas maintain different modalities of information depending on their connections with sensory cortex. Since DLPFC has strong connections with dorsal visual pathways, including parietal cortex, that are important for spatial processing (Cavada and Goldman-Rakic, 1989), its neurons should maintain spatial information in working memory. In contrast, VLPFC, which connects with high-level visual areas in the temporal lobe (Barbas, 1988; Carmichael and Price, 1995b; Petrides and Pandya, 1999), should maintain information about pictures and objects (Wilson et al., 1993). However, findings from neuropsychology (Petrides, 1995, 1996), neuroimaging (Owen, 1997, 1999; Postle et al., 2000) and neurophysiology (Rao et al., 1997) failed to support this dissociation (Rushworth and Owen, 1998). Consistent with this, we found neurons that encoded spatial information present throughout LPFC. However, fewer neurons encoded spatial information in regions within and dorsal to the principal sulcus, while the incidence of spatial selectivity progressively increased as we moved ventrally from the principal sulcus toward area 45A of VLPFC. While this may appear at odds with previous results, which stressed the importance of DLPFC in encoding spatial information (Funahashi et al., 1989; Wilson et al., 1993; Leon and Shadlen, 1999; Sawaguchi and Yamane, 1999), it is compatible with the anatomy. 
Although not often emphasized, strong connections exist between parietal cortex and VLPFC (Petrides and Pandya, 1984; Cavada and Goldman-Rakic, 1989; Schall et al., 1995) and there are connections between VLPFC and dorsal and medial regions of PFC that could also provide VLPFC with spatial information (Petrides and Pandya, 2002). In addition, recent findings in humans have also emphasized the role of VLPFC in spatial encoding (Rizzuto et al., 2005; Kastner et al., 2007; Chase et al., 2008). This has led some researchers to suggest there is a difference in the organization of LPFC between monkeys and humans, with the map found in the monkey DLPFC having shifted to VLPFC in the human (Kastner et al., 2007). However, given our findings, the postulation of such a species difference may be unnecessary.

Our results extend the findings of Miller and colleagues with regard to the role of VLPFC in integrating task-relevant information (Rao et al., 1997; Rainer et al., 1998). These authors found that many neurons in VLPFC encode both spatial and object information. In their studies, reward was not manipulated; the identification of the object and its location were the behaviorally relevant information. In our task, identification of the object was not a requirement but informed the subject how rewarding a trial was. Under these circumstances, we found very few neurons encoded object identity, but instead encoded how much juice the picture predicted. This suggests that the role of VLPFC in using object information to guide goal-directed behavior is not restricted to encoding “what” and “where” information, and may be an important locus for allowing goal-related information to modulate sensory information. Our account of the interaction between reward and space in VLPFC is also compatible with recent findings in humans. Dopamine signals, which provide LPFC with reward information, correlate with activations of VLPFC induced by increasing working memory load (Landau et al., 2008).

VLPFC and attentional control

Recently, there has been concern regarding the interpretation of the functional role of neurons that encode expected rewards. Such neurons might reflect attentional processes, rather than reward per se, since one pays more attention when a larger reward is expected (Maunsell, 2004; Bendiksby and Platt, 2006). Our results from VLPFC neurons that show a reward-dependent modulation of spatial selectivity seem more compatible with an attentional account than with a primary role in encoding reward, since a larger expected reward led to increased spatial selectivity in the RS task, but decreased spatial selectivity in the SR task. A potential explanation for this finding lies in the behavioral data: subjects clearly found the SR task more difficult than the RS task. One possibility is that in the SR task the centrally presented reward cue competes for attentional resources, interfering with ongoing working memory or spatial attention processes that are attempting to maintain the location of the peripherally presented mnemonic cue (Awh and Jonides, 2001). Reduced spatial selectivity to larger expected rewards may result if the degree of attentional diversion increases as the amount of reward predicted by the cue increases. Thus, the reward-predictive cue in the SR task could compete with the mnemonic cue for representation by a limited-capacity neuronal mechanism (Duncan, 2001). Such a mechanism is plausible, as emotional or arousing stimuli preferentially attract attentional resources (Dolan, 2002; Vuilleumier, 2005). This interpretation of our results is also compatible with recent results from neurophysiology (Lebedev et al., 2004), neuropsychology (Rushworth et al., 2005), and neuroimaging (Brass and von Cramon, 2004; Roth et al., 2006) studies that suggest a primary role for VLPFC in attentional control. Increasing the attentional demands of a conditional visuomotor task markedly impaired performance of animals with lesions of VLPFC (Rushworth et al., 2005).
In a task that required subjects to remember one spatial location while attending to an alternative location, VLPFC neurons encoded solely the attentional locus, while DLPFC neurons encoded both the attended and the remembered location (Lebedev et al., 2004).

We note that although accuracy differed between the RS and SR tasks, in other respects behavior was similar. Increasing reward led to a decrease in breaks of fixation and an increase in pupil dilation in both tasks. This raises the possibility that the effects of reward on attentional processes are behaviorally dissociable from more nonspecific arousal or motivational effects. These behavioral dissociations must be considered when interpreting neuronal effects. For example, the encoding of expected reward increases within the frontal lobe as one moves away from the prefrontal cortex and toward the motor system, but this may relate more to the effects of reward on motivational rather than attentional processes (Roesch and Olson, 2003; Roesch et al., 2007).

Our results are consistent with recent theoretical frameworks that have differentiated the functional roles of DLPFC and VLPFC. For example, the two areas are proposed to play differential roles in attentional control (Corbetta et al., 2002). A dorsal network, comprising dorsal parietal areas and the frontal eye fields, is responsible for directing attention based on current goals and pre-existing information about contingencies. A ventral network, comprising ventral parietal areas and VLPFC, is responsible for detecting behaviorally relevant events and, in conjunction with the dorsal network, redirecting attention appropriately. Applying this framework to our task, the overarching goal is to remember the location of the spatial stimulus, but the reward-predictive cue serves as an additional behaviorally relevant event. This may explain why VLPFC neurons encoded both the spatial location and the reward-predictive cue. More generally, our results are also consistent with the theoretical framework of Petrides, which argues that VLPFC is important for the maintenance of task-relevant information and the filtering of irrelevant information, while DLPFC is more important for the monitoring and manipulation of that information (Owen et al., 1996, 1999; Petrides, 1996). The current task simply required the subject to maintain information, and consequently we saw greater neuronal selectivity in VLPFC than DLPFC. Our previous findings are also compatible with this framework. We found that VLPFC neurons preferentially encoded the maintenance of pictures in working memory, while DLPFC neurons encoded the abstract rule that needed to be applied to those pictures (Wallis et al., 2001).

Conclusion

In a task that required the maintenance of information in spatial working memory under different amounts of expected reward, neurons in VLPFC showed selectivity that is consistent with a role in attentional control. When reward increased the spatial selectivity of VLPFC neurons, behavioral performance improved, but when reward decreased their spatial selectivity, behavioral performance deteriorated. In contrast, DLPFC neurons showed less involvement in the task. Our results provide further neurophysiological evidence that the cortex above and below the principal sulcus of the macaque is functionally distinct, and they are consistent with the notion that VLPFC serves as a sensory gateway into the prefrontal cortex, ensuring the maintenance of task-relevant information across delays.

Footnotes

This work was supported by National Institute on Drug Abuse Grant R01DA19028 and National Institute of Neurological Disorders and Stroke Grant P01NS040813 to J.D.W. and National Institute of Mental Health Training Grant F32MH081521 to S.W.K. S.W.K. contributed to all aspects of this project, and J.D.W. contributed to experimental design, data analysis, writing of this manuscript, and supervision of this project. We thank Aspandiar Dahmubed for preliminary analysis of the dataset.

The authors declare no competing financial interests.

References

  • Amemori K, Sawaguchi T. Contrasting effects of reward expectation on sensory and motor memories in primate prefrontal neurons. Cereb Cortex. 2006;16:1002–1015. doi: 10.1093/cercor/bhj042.
  • Asaad WF, Rainer G, Miller EK. Task-specific neural activity in the primate prefrontal cortex. J Neurophysiol. 2000;84:451–459. doi: 10.1152/jn.2000.84.1.451.
  • Awh E, Jonides J. Overlapping mechanisms of attention and spatial working memory. Trends Cogn Sci. 2001;5:119–126. doi: 10.1016/s1364-6613(00)01593-x.
  • Barbas H. Anatomic organization of basoventral and mediodorsal visual recipient prefrontal regions in the rhesus monkey. J Comp Neurol. 1988;276:313–342. doi: 10.1002/cne.902760302.
  • Barbas H, Mesulam MM. Cortical afferent input to the principalis region of the rhesus monkey. Neuroscience. 1985;15:619–637. doi: 10.1016/0306-4522(85)90064-8.
  • Barbas H, Pandya DN. Architecture and intrinsic connections of the prefrontal cortex in the rhesus monkey. J Comp Neurol. 1989;286:353–375. doi: 10.1002/cne.902860306.
  • Bendiksby MS, Platt ML. Neural correlates of reward and attention in macaque area LIP. Neuropsychologia. 2006;44:2411–2420. doi: 10.1016/j.neuropsychologia.2006.04.011.
  • Brass M, von Cramon DY. Selection for cognitive control: a functional magnetic resonance imaging study on the selection of task-relevant information. J Neurosci. 2004;24:8847–8852. doi: 10.1523/JNEUROSCI.2513-04.2004.
  • Carmichael ST, Price JL. Limbic connections of the orbital and medial prefrontal cortex in macaque monkeys. J Comp Neurol. 1995a;363:615–641. doi: 10.1002/cne.903630408.
  • Carmichael ST, Price JL. Sensory and premotor connections of the orbital and medial prefrontal cortex of macaque monkeys. J Comp Neurol. 1995b;363:642–664. doi: 10.1002/cne.903630409.
  • Cavada C, Goldman-Rakic PS. Posterior parietal cortex in rhesus monkey: II. Evidence for segregated corticocortical networks linking sensory and limbic areas with the frontal lobe. J Comp Neurol. 1989;287:422–445. doi: 10.1002/cne.902870403.
  • Chase HW, Clark L, Sahakian BJ, Bullmore ET, Robbins TW. Dissociable roles of prefrontal subregions in self-ordered working memory performance. Neuropsychologia. 2008;46:2650–2661. doi: 10.1016/j.neuropsychologia.2008.04.021.
  • Connor CE, Preddie DC, Gallant JL, Van Essen DC. Spatial attention effects in macaque area V4. J Neurosci. 1997;17:3201–3214. doi: 10.1523/JNEUROSCI.17-09-03201.1997.
  • Corbetta M, Kincade JM, Shulman GL. Neural systems for visual orienting and their relationships to spatial working memory. J Cogn Neurosci. 2002;14:508–523. doi: 10.1162/089892902317362029.
  • Dolan RJ. Emotion, cognition, and behavior. Science. 2002;298:1191–1194. doi: 10.1126/science.1076358.
  • Duncan J. An adaptive coding model of neural function in prefrontal cortex. Nat Rev Neurosci. 2001;2:820–829. doi: 10.1038/35097575.
  • Duncan J, Emslie H, Williams P, Johnson R, Freer C. Intelligence and the frontal lobe: the organization of goal-directed behavior. Cogn Psychol. 1996;30:257–303. doi: 10.1006/cogp.1996.0008.
  • Funahashi S, Bruce CJ, Goldman-Rakic PS. Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J Neurophysiol. 1989;61:331–349. doi: 10.1152/jn.1989.61.2.331.
  • Fuster JM, Alexander GE. Neuron activity related to short-term memory. Science. 1971;173:652–654. doi: 10.1126/science.173.3997.652.
  • Genovesio A, Brasted PJ, Mitz AR, Wise SP. Prefrontal cortex activity related to abstract response strategies. Neuron. 2005;47:307–320. doi: 10.1016/j.neuron.2005.06.006.
  • Goldman-Rakic PS. Circuitry of primate prefrontal cortex and regulation of behavior by representational memory. In: Plum F, editor. Handbook of physiology, the nervous system, higher functions of the brain. Bethesda, MD: American Physiological Society; 1987. pp. 373–417.
  • Goldman-Rakic PS. Regional and cellular fractionation of working memory. Proc Natl Acad Sci U S A. 1996;93:13473–13480. doi: 10.1073/pnas.93.24.13473.
  • Hess EH, Polt JM. Pupil size as related to interest value of visual stimuli. Science. 1960;132:349–350. doi: 10.1126/science.132.3423.349.
  • Hoshi E, Shima K, Tanji J. Neuronal activity in the primate prefrontal cortex in the process of motor selection based on two behavioral rules. J Neurophysiol. 2000;83:2355–2373. doi: 10.1152/jn.2000.83.4.2355.
  • Jacobsen CF. Functions of the frontal association area in primates. Arch Neurol Psych. 1935;33:558–569.
  • Kastner S, DeSimone K, Konen CS, Szczepanski SM, Weiner KS, Schneider KA. Topographic maps in human frontal cortex revealed in memory-guided saccade and spatial working-memory tasks. J Neurophysiol. 2007;97:3494–3507. doi: 10.1152/jn.00010.2007.
  • Kobayashi S, Lauwereyns J, Koizumi M, Sakagami M, Hikosaka O. Influence of reward expectation on visuospatial processing in macaque lateral prefrontal cortex. J Neurophysiol. 2002;87:1488–1498. doi: 10.1152/jn.00472.2001.
  • Kubota K, Niki H. Prefrontal cortical unit activity and delayed alternation performance in monkeys. J Neurophysiol. 1971;34:337–347. doi: 10.1152/jn.1971.34.3.337.
  • Landau SM, Lal R, O'Neil JP, Baker S, Jagust WJ. Striatal dopamine and working memory. Cereb Cortex. 2008;19:445–454. doi: 10.1093/cercor/bhn095.
  • Lebedev MA, Messinger A, Kralik JD, Wise SP. Representation of attended versus remembered locations in prefrontal cortex. PLoS Biol. 2004;2:e365. doi: 10.1371/journal.pbio.0020365.
  • Leon MI, Shadlen MN. Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron. 1999;24:415–425. doi: 10.1016/s0896-6273(00)80854-5.
  • Matsumoto K, Suzuki W, Tanaka K. Neuronal correlates of goal-based motor selection in the prefrontal cortex. Science. 2003;301:229–232. doi: 10.1126/science.1084204.
  • Maunsell JH. Neuronal representations of cognitive state: reward or attention? Trends Cogn Sci. 2004;8:261–265. doi: 10.1016/j.tics.2004.04.003.
  • Owen AM. The functional organization of working memory processes within human lateral frontal cortex: the contribution of functional neuroimaging. Eur J Neurosci. 1997;9:1329–1339. doi: 10.1111/j.1460-9568.1997.tb01487.x.
  • Owen AM, Downes JJ, Sahakian BJ, Polkey CE, Robbins TW. Planning and spatial working memory following frontal lobe lesions in man. Neuropsychologia. 1990;28:1021–1034. doi: 10.1016/0028-3932(90)90137-d.
  • Owen AM, Evans AC, Petrides M. Evidence for a two-stage model of spatial working memory processing within the lateral frontal cortex: a positron emission tomography study. Cereb Cortex. 1996;6:31–38. doi: 10.1093/cercor/6.1.31.
  • Owen AM, Herrod NJ, Menon DK, Clark JC, Downey SP, Carpenter TA, Minhas PS, Turkheimer FE, Williams EJ, Robbins TW, Sahakian BJ, Petrides M, Pickard JD. Redefining the functional organization of working memory processes within human lateral prefrontal cortex. Eur J Neurosci. 1999;11:567–574. doi: 10.1046/j.1460-9568.1999.00449.x.
  • Pasupathy A, Miller EK. Different time courses of learning-related activity in the prefrontal cortex and striatum. Nature. 2005;433:873–876. doi: 10.1038/nature03287.
  • Petrides M. Impairments on nonspatial self-ordered and externally ordered working memory tasks after lesions of the mid-dorsal part of the lateral frontal cortex in the monkey. J Neurosci. 1995;15:359–375. doi: 10.1523/JNEUROSCI.15-01-00359.1995.
  • Petrides M. Specialized systems for the processing of mnemonic information within the primate frontal cortex. Philos Trans R Soc Lond B Biol Sci. 1996;351:1455–1461; discussion 1461–1462. doi: 10.1098/rstb.1996.0130.
  • Petrides and Pandya, 1984.Petrides M, Pandya DN. Projections to the frontal cortex from the posterior parietal region in the rhesus monkey. J Comp Neurol. 1984;228:105–116. doi: 10.1002/cne.902280110. [DOI] [PubMed] [Google Scholar]
  • Petrides and Pandya, 1999.Petrides M, Pandya DN. Dorsolateral prefrontal cortex: comparative cytoarchitectonic analysis in the human and the macaque brain and corticocortical connection patterns. Eur J Neurosci. 1999;11:1011–1036. doi: 10.1046/j.1460-9568.1999.00518.x. [DOI] [PubMed] [Google Scholar]
  • Petrides and Pandya, 2002.Petrides M, Pandya DN. Comparative cytoarchitectonic analysis of the human and the macaque ventrolateral prefrontal cortex and corticocortical connection patterns in the monkey. Eur J Neurosci. 2002;16:291–310. doi: 10.1046/j.1460-9568.2001.02090.x. [DOI] [PubMed] [Google Scholar]
  • Postle et al., 2000.Postle BR, Stern CE, Rosen BR, Corkin S. An fMRI investigation of cortical contributions to spatial and nonspatial visual working memory. Neuroimage. 2000;11:409–423. doi: 10.1006/nimg.2000.0570. [DOI] [PubMed] [Google Scholar]
  • Preuss and Goldman-Rakic, 1989.Preuss TM, Goldman-Rakic PS. Connections of the ventral granular frontal cortex of macaques with perisylvian premotor and somatosensory areas: anatomical evidence for somatic representation in primate frontal association cortex. J Comp Neurol. 1989;282:293–316. doi: 10.1002/cne.902820210. [DOI] [PubMed] [Google Scholar]
  • Rainer et al., 1998.Rainer G, Asaad WF, Miller EK. Selective representation of relevant information by neurons in the primate prefrontal cortex. Nature. 1998;393:577–579. doi: 10.1038/31235. [DOI] [PubMed] [Google Scholar]
  • Rao et al., 1997.Rao SC, Rainer G, Miller EK. Integration of what and where in the primate prefrontal cortex. Science. 1997;276:821–824. doi: 10.1126/science.276.5313.821. [DOI] [PubMed] [Google Scholar]
  • Rizzuto et al., 2005.Rizzuto DS, Mamelak AN, Sutherling WW, Fineman I, Andersen RA. Spatial selectivity in human ventrolateral prefrontal cortex. Nat Neurosci. 2005;8:415–417. doi: 10.1038/nn1424. [DOI] [PubMed] [Google Scholar]
  • Roesch and Olson, 2003.Roesch MR, Olson CR. Impact of expected reward on neuronal activity in prefrontal cortex, frontal and supplementary eye fields and premotor cortex. J Neurophysiol. 2003;90:1766–1789. doi: 10.1152/jn.00019.2003. [DOI] [PubMed] [Google Scholar]
  • Roesch and Olson, 2007.Roesch MR, Olson CR. Neuronal activity related to anticipated reward in frontal cortex: does it represent value or reflect motivation? Ann N Y Acad Sci. 2007;1121:431–446. doi: 10.1196/annals.1401.004. [DOI] [PubMed] [Google Scholar]
  • Roesch et al., 2007.Roesch MR, Calu DJ, Schoenbaum G. Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nat Neurosci. 2007;10:1615–1624. doi: 10.1038/nn2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • Roth et al., 2006.Roth JK, Serences JT, Courtney SM. Neural system for controlling the contents of object working memory in humans. Cereb Cortex. 2006;16:1595–1603. doi: 10.1093/cercor/bhj096. [DOI] [PubMed] [Google Scholar]
  • Rushworth and Owen, 1998.Rushworth MFS, Owen AM. The functional organization of the lateral frontal cortex: conjecture or conjuncture in the electrophysiology literature. Trends Cogn Sci. 1998;2:46–53. doi: 10.1016/s1364-6613(98)01127-9. [DOI] [PubMed] [Google Scholar]
  • Rushworth et al., 2005.Rushworth MFS, Buckley MJ, Gough PM, Alexander IH, Kyriazis D, McDonald KR, Passingham RE. Attentional selection and action selection in the ventral and orbital prefrontal cortex. J Neurosci. 2005;25:11628–11636. doi: 10.1523/JNEUROSCI.2765-05.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • Sawaguchi and Yamane, 1999.Sawaguchi T, Yamane I. Properties of delay-period neuronal activity in the monkey dorsolateral prefrontal cortex during a spatial delayed matching-to-sample task. J Neurophysiol. 1999;82:2070–2080. doi: 10.1152/jn.1999.82.5.2070. [DOI] [PubMed] [Google Scholar]
  • Schall et al., 1995.Schall JD, Morel A, King DJ, Bullier J. Topography of visual cortex connections with frontal eye field in macaque: convergence and segregation of processing streams. J Neurosci. 1995;15:4464–4487. doi: 10.1523/JNEUROSCI.15-06-04464.1995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • Spitzer et al., 1988.Spitzer H, Desimone R, Moran J. Increased attention enhances both behavioral and neuronal performance. Science. 1988;240:338–340. doi: 10.1126/science.3353728. [DOI] [PubMed] [Google Scholar]
  • Sugase-Miyamoto and Richmond, 2005.Sugase-Miyamoto Y, Richmond BJ. Neuronal signals in the monkey basolateral amygdala during reward schedules. J Neurosci. 2005;25:11071–11083. doi: 10.1523/JNEUROSCI.1796-05.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • Vuilleumier, 2005.Vuilleumier P. How brains beware: neural mechanisms of emotional attention. Trends Cogn Sci. 2005;9:585–594. doi: 10.1016/j.tics.2005.10.011. [DOI] [PubMed] [Google Scholar]
  • Wallis and Miller, 2003a.Wallis JD, Miller EK. From rule to response: neuronal processes in the premotor and prefrontal cortex. J Neurophysiol. 2003a;90:1790–1806. doi: 10.1152/jn.00086.2003. [DOI] [PubMed] [Google Scholar]
  • Wallis and Miller, 2003b.Wallis JD, Miller EK. Neuronal activity in primate dorsolateral and orbital prefrontal cortex during performance of a reward preference task. Eur J Neurosci. 2003b;18:2069–2081. doi: 10.1046/j.1460-9568.2003.02922.x. [DOI] [PubMed] [Google Scholar]
  • Wallis et al., 2001.Wallis JD, Anderson KC, Miller EK. Single neurons in prefrontal cortex encode abstract rules. Nature. 2001;411:953–956. doi: 10.1038/35082081. [DOI] [PubMed] [Google Scholar]
  • White and Wise, 1999.White IM, Wise SP. Rule-dependent neuronal activity in the prefrontal cortex. Exp Brain Res. 1999;126:315–335. doi: 10.1007/s002210050740. [DOI] [PubMed] [Google Scholar]
  • Wilson et al., 1993.Wilson FA, Scalaidhe SP, Goldman-Rakic PS. Dissociation of object and spatial processing domains in primate prefrontal cortex. Science. 1993;260:1955–1958. doi: 10.1126/science.8316836. [DOI] [PubMed] [Google Scholar]
  • Womelsdorf et al., 2006.Womelsdorf T, Anton-Erxleben K, Pieper F, Treue S. Dynamic shifts of visual receptive fields in cortical area MT by spatial attention. Nat Neurosci. 2006;9:1156–1160. doi: 10.1038/nn1748. [DOI] [PubMed] [Google Scholar]

Articles from The Journal of Neuroscience are provided here courtesy of Society for Neuroscience