Frontiers in Human Neuroscience. 2009 Sep 11;3:23. doi: 10.3389/neuro.09.023.2009

Neural Substrates of Contingency Learning and Executive Control: Dissociating Physical, Valuative, and Behavioral Changes

O'Dhaniel A Mullette-Gillman 1,2,3,*, Scott A Huettel 1,2,3
PMCID: PMC2759373  PMID: 19826625

Abstract

Contingency learning is fundamental to cognition. Knowledge about environmental contingencies allows behavioral flexibility, as executive control processes accommodate the demands of novel or changing environments. Studies of experiential learning have focused on the relationship between actions and the values of associated outcomes. However, outcome values have often been confounded with the physical changes in the outcomes themselves. Here, we dissociated contingency learning into valuative and non-valuative forms, using a novel version of the two-alternative choice task, while measuring the neural effects of contingency changes using functional magnetic resonance imaging (fMRI). Changes in value-relevant contingencies evoked activation in the lateral prefrontal cortex (LPFC), posterior parietal cortex (PPC), and dorsomedial prefrontal cortex (DMPFC) consistent with prior results (e.g., reversal-learning paradigms). Changes in physical contingencies unrelated to value or to action produced similar activations within the LPFC, indicating that LPFC may engage in generalized contingency learning that is not specific to valuation. In contrast, contingency changes that required behavioral shifts evoked activation localized to the DMPFC, supplementary motor, and precentral cortices, suggesting that these regions play more specific roles within the executive control of behavior.

Keywords: executive control, decision making, prefrontal cortex, anterior cingulate cortex, fMRI, dorsomedial prefrontal cortex, cognitive control

Introduction

Contingency learning is a fundamental component of cognition. By identifying the relationships between actions and events, humans and other animals can produce goal-directed and flexible behavior that accounts for changes in their environments. Models of goal-directed behavior, such as those of reinforcement learning, have traditionally examined the contingencies between actions and their rewarding or punishing outcomes (Thorndike, 1898; Pavlov, 1928; Skinner, 1938; Herrnstein, 1970). Such models can account for simple reward-seeking behaviors (e.g., foraging), and even describe quite complex aspects of behavior and decision making (Sutton and Barto, 1998). However, effective behavior requires learning not only about relations between actions and received rewards, but also about environmental contingencies that do not themselves influence reward outcomes (Tolman, 1932), such as information about the available options, state space, or stimulus-stimulus contingencies.

Studies of the neural basis of contingency learning – often using reversal tasks in which reward contingencies change unexpectedly – have identified a host of involved brain areas (Cools et al., 2002; O'Doherty et al., 2003; Remijnse et al., 2005; Xue et al., 2008). Collectively, many of these regions have been described as constituting the dorsal executive network (Duncan and Owen, 2000) or central executive (Goldman-Rakic, 1996). This network may contribute to contingency learning in other, non-rewarding contexts as well. For example, regions of lateral and medial prefrontal cortex exhibit increased activation to events that violate local temporal patterns of stimuli, even when those patterns arise by random chance and even in the absence of explicit awareness (Squires et al., 1976; Huettel et al., 2002; Fan et al., 2007). Such findings fit the broad theory that the dorsal executive network detects environmental changes and implements the cognitive processes necessary for modifying behavior appropriately (Wise et al., 1996; Botvinick et al., 2001; Miller and Cohen, 2001; Ridderinkhof et al., 2004; Walton et al., 2004; Mansouri et al., 2009). The resulting behavioral changes are postulated to reflect biasing signals directed at other brain systems (Botvinick et al., 2001; Miller and Cohen, 2001). However, most prior studies of executive control have used tasks in which contingency changes and the engagement of executive control mechanisms necessarily co-occur.

Here, we adapted the classic two-option reversal-learning paradigm to dissociate multiple forms of contingency learning. Across a series of rapidly presented trials, the environmental contingencies changed in three independent ways: (1) changes in outcome valence, which resulted in a behavioral change on the next trial; (2) changes in outcome magnitude, which affected obtained rewards but did not produce a behavioral change; and (3) changes in the physical effect of an action, through new visual feedback that was completely unrelated to rewards or required actions. We hypothesized that brain areas previously implicated in learning and responding to changes in action-outcome contingencies also process contingency changes that are behaviorally and motivationally irrelevant. If obtained, such results would indicate that these regions support the generalized updating of models about the current environment, including but not limited to the changes in the anticipated value of actions, with other regions contributing to the executive control of action.

Materials and Methods

Participants

Fourteen healthy, neurologically normal young-adult volunteers (six female; age range: 18–29 years; mean age: 22.4 years) participated in a single fMRI session. All participants acclimated to the testing environment using a mock MRI scanner. Two participants were removed from the analyses, one because of technical issues with stimulus presentation and the other because of excessive head motion. All participants gave written informed consent as part of a protocol approved by the Institutional Review Board of Duke University Medical Center. Subjects’ payment was contingent on the choices made during the experiment (mean payment: $49 out of a possible $50).

Task

Participants engaged in a modified two-alternative choice task (Figure 1A). As a framing story, each participant was told to act as an investment broker who selects between two factories in which to invest. On each trial, one of the two choice options resulted in a monetary gain, while the other resulted in a loss of equal magnitude (range: ±18 to 93¢). The outcome comprised two simultaneously presented visual components: the received value and an abstract visual object (described as the product of the factory). A total of eight different objects were presented across all trials, constructed through the factorial combination of two values for each of three dimensions (shape, color, and orientation of a diagonal slash). Participants were given explicit instructions that the abstract objects were predictive neither of future outcomes nor of changes in the value of those outcomes; moreover, they were told that outcomes could change after as little as a single trial, but that on average the outcomes would remain stable for several trials at a time. Participants were instructed that their commission (i.e., payment) was proportional to the total amount earned across all decisions.

Figure 1.

Task. (A) Example trial structure. Following an initial stimulus display, the onset of the trial was signaled by a change in color of the response circles. When the participant indicated the choice for that trial, the selected option changed color, whereupon the outcome of the trial was indicated by a visual object and a monetary reward. Trials were each 2.8 s, with 1.4 s for response and choice presentation and 1.4 s for outcome presentation. Intertrial intervals ranged between 0.1 and 0.5 s, to prevent subjects from predicting the onset of each trial. (B) Possible outcomes derived from different changes in contingencies. On Standard trials, the outcome was maintained from the prior trial. In Reversal Changes, the action-outcome contingencies switch, and thus the participant loses the amount of money they were expecting to gain. In Value Changes, the participant gains money, but either more or less than they expected. Finally, in Effect Changes, the participant receives the amount of money they were expecting, but the visual stimulus changes.

Value and effect contingencies were altered independently, across four possible trial types (see Figure 1B). The first type comprised the Standard trials (76% of total), in which no stimuli, rewards, or actions changed from the previous trial. The second was a Reversal Change (6% of trials), in which the associated values of the two options switched sign (without changing magnitude); this trial type is equivalent to the critical events in the canonical reversal-learning task. Given the complementary relationship between the valences of the two options, Reversal Changes should guide participants to select the other option on the next trial. The third type was a Value Change (6% of trials), in which the absolute magnitude of the current value changed, either up or down, without a change of sign. Because participants still received a positive reward (albeit not the expected amount), they should continue to select the same option on succeeding trials. The fourth type, called an Effect Change (12% of trials), involved a behaviorally and motivationally irrelevant contingency change in the visual object associated with each action (i.e., the object shown on the screen). This change could involve, with equal probability, either one dimension of the object (e.g., color alone) or two dimensions (e.g., both color and shape). Because the reward value remained constant, the participant should continue selecting the same option on future trials.

Participants fixated on a cross at the center of the display throughout the experiment. Failure to respond in the allotted time (1.4 s after the trial start) resulted in a monetary loss equivalent to the value of the worse option on the current trial. To ensure comprehension of instructions and to provide experience with task contingencies, all participants performed a 7-min behavioral training session before the fMRI experiment.

Participants carried out four runs of this task while in the scanner, each consisting of 150 trials, including 10 reversal changes, 10 value changes, 10 one-dimensional effect changes, and 10 two-dimensional effect changes. We used a constant hazard probability for contingency shifts of p ∼ 0.28. Each run also included four non-task pauses, in which the inter-trial interval was extended to 10–20 s. We predetermined the timing and order of contingency shifts to maximize the statistical dissociation between the hypothesized hemodynamic responses evoked by each contingency shift type. The order of runs was randomized across participants. Stimulus presentation and behavioral data collection were carried out using the Psychophysics Toolbox for Matlab (Brainard, 1997), with stimuli presented through MR-compatible LCD goggles and responses made with the first two fingers of the right hand on an MR-compatible response box.
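The per-run trial bookkeeping described above can be sketched as follows. This is an illustrative Python fragment only (the actual experiment used predetermined Matlab scripts, and the function and label names here are hypothetical); it builds one run's sequence from the stated per-run counts, which approximate the 76%/6%/6%/12% proportions.

```python
import random

def make_run(n_trials=150, n_per_change=10, seed=None):
    """Build one run's trial sequence: 10 each of reversal, value,
    one-dimensional effect, and two-dimensional effect changes, with the
    remaining trials Standard. The study predetermined order and timing
    to optimize the fMRI design; random shuffling here is illustrative."""
    rng = random.Random(seed)
    changes = (["reversal"] * n_per_change + ["value"] * n_per_change +
               ["effect_1d"] * n_per_change + ["effect_2d"] * n_per_change)
    trials = changes + ["standard"] * (n_trials - len(changes))
    rng.shuffle(trials)
    return trials

run = make_run(seed=1)
assert len(run) == 150 and run.count("standard") == 110
```

With these counts, 110 of 150 trials per run (73%) are Standard, close to the 76% figure quoted for the full experiment.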

fMRI data collection and analysis

We acquired data with a 4T GE scanner using an inverse-spiral pulse sequence with standard imaging parameters [TR = 2000 ms; TE = 30 ms; 34 axial slices parallel to the AC-PC plane; voxel size of 3.75 × 3.75 × 3.8 mm]. High-resolution 3D full-brain SPGR anatomical images were acquired and used for normalizing and co-registering individual participants' data.

Analyses were performed using FEAT (FMRI Expert Analysis Tool) Version 5.92, part of the FSL package (Smith et al., 2004; Woolrich et al., 2009). The following pre-statistics processing steps were applied: motion correction using MCFLIRT, slice-timing correction, removal of non-brain voxels using BET (Smith, 2002), spatial smoothing with a Gaussian kernel of 8-mm full-width half-maximum, and high-pass temporal filtering with a 50-s cutoff. Registration to high-resolution and standard images was carried out using FLIRT (Jenkinson and Smith, 2001).

Our first-level FEAT model contained five factors: three coding the onsets of contingency changes (i.e., Reversal Changes, Value Changes, and Effect Changes), a fourth nuisance variable coding the infrequent missed trials, and a fifth coding the non-task pauses by their duration with a unit weight. An impulse of unit duration and unit weight was used for each contingency-change and missed-trial event. Each of these factors was then convolved with a double-gamma hemodynamic response function to create the final regressors within our design matrix. Of note, this model uses the Standard trials as a task-related baseline. Thus, activations associated with the performance of Standard trials, such as stimulus presentation and motor response execution, are controlled for by comparison to the baseline fMRI signal.
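The construction of an event regressor of this kind can be sketched as follows. This minimal Python illustration assumes canonical double-gamma parameters (positive peak near 5–6 s, undershoot near 15 s, undershoot ratio 1/6), which FEAT sets internally; the onset times and run length are hypothetical.

```python
import math
import numpy as np

TR = 2.0  # s, from the acquisition parameters above

def gamma_pdf(t, shape):
    # Gamma density with unit scale, evaluated on a nonnegative time grid
    return t ** (shape - 1) * np.exp(-t) / math.gamma(shape)

def double_gamma_hrf(t):
    # Canonical double-gamma shape (assumed defaults, not from the paper)
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

def event_regressor(onsets_s, n_vols, tr=TR, dt=0.1):
    """Unit-duration, unit-weight impulses at each event onset, convolved
    with the double-gamma HRF and sampled once per volume."""
    n_hi = int(round(n_vols * tr / dt))
    stick = np.zeros(n_hi)
    for onset in onsets_s:
        stick[int(round(onset / dt))] = 1.0
    hrf = double_gamma_hrf(np.arange(0.0, 32.0, dt))
    conv = np.convolve(stick, hrf)[:n_hi]
    vol_idx = np.round(np.arange(n_vols) * tr / dt).astype(int)
    return conv[vol_idx]

# hypothetical Reversal Change onsets within a 300-volume (600 s) run
reversal_reg = event_regressor([28.0, 70.0, 133.0], n_vols=300)
assert reversal_reg.shape == (300,) and reversal_reg[0] == 0.0
```

One such column per factor, stacked side by side, yields the design matrix against which the voxelwise time series are fit.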

Second-level FEAT analyses to combine runs within participants used a fixed-effects model, while third-level, across-participants analyses used FLAME (stages 1 and 2) random-effects analysis with automatic outlier de-weighting (Woolrich, 2008). All statistical inferences, including data visualization, used whole-brain-corrected cluster-significance thresholds of p < 0.05 (z > 2.3). Finally, to quantify the percent change in activation across different contingency change types, we created spheres with 8-mm radii around the centroids of our functionally defined regions-of-interest (ROIs) using MRICRON (Rorden et al., 2007).
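The spherical ROI construction can be sketched as follows (a Python illustration; the study used MRICRON rather than this code, and the grid dimensions and centroid coordinates are hypothetical, assuming the 2-mm standard-space grid of the tables below).

```python
import numpy as np

def sphere_roi(vol_shape, center_vox, radius_mm=8.0, voxel_mm=2.0):
    """Boolean mask of all voxels within radius_mm of a centroid,
    mirroring the 8-mm spheres drawn around functional ROI centroids."""
    grids = np.indices(vol_shape)  # voxel-index grids along x, y, z
    dist2_mm = sum((g - c) ** 2 for g, c in zip(grids, center_vox)) * voxel_mm ** 2
    return dist2_mm <= radius_mm ** 2

# e.g., a sphere around a hypothetical centroid in a 91 x 109 x 91 grid
mask = sphere_roi((91, 109, 91), center_vox=(45, 54, 45))
```

Percent signal change for a condition would then be summarized as the mean over `data[mask]` for each participant.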

Activation cluster peaks presented in Tables 1, 2, and 4 were produced using FSL. For each table, 30 peaks were determined for each cluster and labeled with their Harvard-Oxford designations using FSLView; only the peak voxel for each anatomical designation is listed. Table 3 was produced using MRICRON to calculate the centroid of each cluster.

Table 1.

Regions exhibiting main effects of Reversal Changes.

z-stat X (mm) Y (mm) Z (mm) Hemisphere Region
3.63 0 16 38 Mid Anterior cingulate gyrus (*DMPFC)
4.2 40 8 30 Right Precentral gyrus
3.68 18 4 50 Right Superior frontal gyrus
3.58 40 4 52 Right Middle frontal gyrus (*LPFC)
3.57 −10 2 50 Left Supplementary motor cortex
4.1 −24 0 58 Left Superior frontal gyrus
4.29 −32 −2 58 Left Middle frontal gyrus (*LPFC)
3.93 −36 −4 52 Left Precentral gyrus
3.72 2 −12 −2 Right Thalamus
3.52 −10 −20 −2 Left Thalamus
3.37 −24 −26 −8 Left Hippocampus
4.36 −42 −30 46 Left Postcentral gyrus
3.44 10 −30 −6 Right Parahippocampal gyrus
4.02 −36 −32 34 Left Supramarginal gyrus, anterior division
3.35 −10 −32 −10 Left Parahippocampal gyrus
3.71 −46 −42 40 Left Supramarginal gyrus, posterior division
3.68 −32 −42 48 Left Superior parietal lobule (*PPC)
4 34 −46 40 Right Superior parietal lobule (*PPC)
3.59 −38 −46 38 Left Supramarginal gyrus, posterior division
3.46 42 −52 −18 Right Temporal occipital fusiform cortex
3.6 28 −74 −12 Right Occipital fusiform gyrus

Coordinates indicate the peak voxel (i.e., maximal z score) within each anatomical region. *Indicates functional label used in text.

Table 2.

Regions exhibiting main effects of Effect Changes.

z-stat X (mm) Y (mm) Z (mm) Hemisphere Region
3.38 −26 40 16 Left Frontal pole
3.41 48 36 26 Right Frontal pole
3.23 −28 36 20 Left Middle frontal gyrus
3.44 −30 26 −4 Left Frontal orbital cortex
3.24 −34 22 8 Left Frontal operculum cortex
3.42 28 20 −2 Right Insular cortex
3.32 −32 18 8 Left Insular cortex (*aINS)
3.55 36 14 38 Right Middle frontal gyrus
3.98 50 10 4 Right Inferior frontal gyrus (*LPFC)
3.35 50 8 8 Right Precentral gyrus
3.54 62 4 22 Right Precentral gyrus
3.29 −20 −2 8 Left Putamen
3.25 −20 −4 0 Left Pallidum
3.74 16 −8 12 Right Thalamus
3.27 −10 −10 6 Left Thalamus
3.36 −52 −44 30 Left Supramarginal gyrus
3.87 58 −46 −16 Right Inferior temporal gyrus
3.51 −26 −50 44 Left Superior parietal lobule (*PPC)
3.43 −60 −54 16 Left Angular gyrus
3.88 64 −56 4 Right Middle temporal gyrus
4.54 26 −68 −14 Right Occipital fusiform cortex
3.82 8 −74 12 Right Intracalcarine sulcus
4.14 12 −76 60 Right Lateral occipital cortex
3.65 −12 −80 4 Left Intracalcarine sulcus
4.13 −4 −82 −2 Left Lingual gyrus

Coordinates indicate the peak voxel (i.e., maximal z score) within each anatomical region. *Indicates functional label used in text.

Table 4.

Regions exhibiting significant differences between Reversal Changes and Effect Changes.

z-stat X (mm) Y (mm) Z (mm) Hemisphere Region
EFFECT CHANGE > REVERSAL CHANGE
3.33 −38 −68 −26 Left Occipital fusiform gyrus
3.37 56 −70 −20 Right Lateral occipital cortex
3.65 −30 −88 −26 Left Lateral occipital cortex
3.64 −24 −92 −34 Left Cerebellum
3.83 ±20 −96 −30 Bilateral Occipital pole
REVERSAL CHANGE > EFFECT CHANGE
3.56 4 −2 56 Right Supplementary motor cortex
3.6 −14 −2 66 Left Superior frontal gyrus
4.25 −28 −8 60 Left Precentral gyrus
3.81 −4 −12 48 Left Supplementary motor cortex
4.4 −40 −30 46 Left Postcentral gyrus
3.69 −42 −38 42 Left Supramarginal gyrus
3.56 −34 −44 54 Left Superior parietal lobule

Coordinates indicate the peak voxel (i.e., maximal z score) within each anatomical region.

Table 3.

Results of a conjunction analysis across Reversal Changes and Effect Changes.

# voxels X (mm) Y (mm) Z (mm) Hemisphere Region
308 34 22 −2 Right Anterior insula (*aINS)
440 42 12 26 Right Inferior frontal gyrus (*LPFC)
502 38 2 56 Right Middle frontal gyrus (*LPFC)
315 −32 −52 46 Left Superior parietal lobule (*PPC)
836 32 −54 48 Right Superior parietal lobule (*PPC)
1893 30 −66 −10 Right Occipital fusiform gyrus
1077 −34 −72 −8 Left Occipital fusiform gyrus

Coordinates indicate the centroid of the activation within each anatomical region. Voxels are 2 × 2 × 2 mm. *Indicates functional label used in text.

Results

Behavioral data

Following a random guess on the first trial, optimal behavior in this task was to select the option that was rewarded on the previous trial (i.e., follow a win-stay/lose-shift strategy, WSLS). A feature of this task is that subjects should engage in one-trial learning, which minimizes the problems of temporal credit assignment that occur within probabilistic reversal learning tasks. Note that given the low likelihood and unpredictability of the reversal change trials, attempts to predict such shifts in value contingencies would reduce overall payment. Thus, we measured behavioral performance in reference to an optimal WSLS strategy. On average, participants performed the WSLS strategy on 99.4% of trials, with no individual run for any participant below 95% performance. All participants described (in their own words) following a WSLS strategy in a post-study questionnaire. In addition, very few trials were missed due to slow responses (mean: 0.7%), with only one participant missing over 2% of trials in any individual run.
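The WSLS strategy against which performance was scored can be sketched as follows (an illustrative Python fragment; the 0/1 option coding and signed-cents outcome interface are assumptions for illustration).

```python
import random

def wsls_choice(prev_choice=None, prev_outcome=None, rng=random):
    """Win-stay/lose-shift: repeat the previous choice after a gain,
    switch after a loss; guess randomly on the first trial. Options
    are coded 0 and 1; outcomes are signed amounts in cents."""
    if prev_choice is None:           # first trial: random guess
        return rng.choice([0, 1])
    return prev_choice if prev_outcome > 0 else 1 - prev_choice

assert wsls_choice(0, -45) == 1   # lose-shift
assert wsls_choice(1, 45) == 1    # win-stay
```

Observed choices were scored as optimal when they matched the output of this rule applied to the previous trial's outcome.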

To examine the effects of contingency changes upon subsequent behavior, we evaluated whether participants’ response times were slowed on the trial following each type of contingency change. Following standard trials, the mean response time across participants was 396 ms (SD: 144 ms). Response time increased significantly following each type of contingency change (repeated measures ANOVA, main effect p < 0.05): for reversal changes, 418 ms; for value changes, 411 ms; for one-dimensional effect changes, 410 ms; and for two-dimensional effect changes, 425 ms. Thus, our behavioral data indicate that participants performed at a nearly optimal level throughout the experiment, but were nevertheless influenced by each sort of contingency change.

fMRI data

We first examined the main effect of Reversal Change, as a contrast between this event regressor and the Standard trials baseline. Significant areas of activation included posterior parietal cortex (PPC), anterior insula cortex (aINS), lateral prefrontal cortex (LPFC), dorsomedial prefrontal cortex (DMPFC), precuneus, supplementary motor cortex (SMC), and precentral cortex (Figure 2A and Table 1). This set of regions replicates that described by previous studies in which contingency changes involved behavioral shifts (Cools et al., 2002; O'Doherty et al., 2003; Remijnse et al., 2005; Xue et al., 2008). For example, Cools et al. (2002) found greater activation in the LPFC for reversal errors relative to probabilistic errors.

Figure 2.

Neural activation in response to contingency changes. (A) On Reversal Change trials, the contingencies between actions and their rewarding outcomes switch, so that the participant should select a different action on the subsequent trial. Such changes evoke activation in a dorsal executive network comprising regions of medial and lateral prefrontal cortex, parietal cortex, and insular cortex, among other regions. (B) On Effect Change trials, only the visual effect of the action changes; the outcome has the same value, and the participant does not need to switch behavioral responses. Activation was again observed in lateral prefrontal and parietal cortices, along with regions of visual cortex.

Next, we examined the main effect of Value Change, again as a contrast between this task-related regressor and the Standard trial baseline. No brain regions survived our standard statistical criterion. This absence of activation could be due to the specific bimodal distribution of value changes in our task, which contained both large negative events (i.e., Reversal changes) and small positive and negative changes (i.e., Value changes).

Finally, we identified the main effect of Effect Change, contrasting this regressor with the Standard trial baseline. We found significant activations in the striatum, PPC, LPFC, aINS, and temporal cortices (Figure 2B and Table 2). This pattern of activation contains many of the same regions as observed for Reversal Change trials, consistent with the interpretation that similar networks process each sort of contingency change.

To determine the regions responsive to both Reversal Changes (i.e., valuative contingency shifts) and Effect Changes (i.e., non-valuative contingency shifts), we computed the intersection of the voxels whose activation increased significantly for each type of change, thresholded independently (voxelwise z > 2.3, with whole-brain cluster correction of p < 0.05). This conjunction analysis revealed activations in the aINS, PPC, and LPFC (Figure 3 and Table 3). ROI analyses compared the levels of activation produced by reversal and effect changes; no significant differences between the two change types were found within these co-activated regions (Figures 3B–D).
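The logic of this conjunction can be sketched as a voxelwise intersection of two independently thresholded maps (a Python illustration on toy data; the actual analysis operated on FSL's cluster-corrected z-maps, and the cluster-level correction is omitted here).

```python
import numpy as np

def conjunction_mask(zmap_a, zmap_b, z_thresh=2.3):
    """Voxels exceeding the voxelwise threshold in BOTH maps
    (a minimum-statistic conjunction over two contrasts)."""
    return (zmap_a > z_thresh) & (zmap_b > z_thresh)

# toy illustration on random z-maps
rng = np.random.default_rng(0)
z_reversal, z_effect = rng.normal(size=(2, 10, 10, 10))
both = conjunction_mask(z_reversal, z_effect)
assert both.shape == (10, 10, 10)
```

By construction, the conjunction mask is a subset of each single-contrast suprathreshold map.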

Figure 3.

Conjunction of activations for Reversal Changes and Effect Changes. (A) Shown are regions that exhibited significant activation in both the Reversal Change and Effect Change conditions. Subsequent functional ROI analyses extracted the relative signal change evoked for each trial type – Reversal Changes, Value Changes, and Effect Changes – within the (B) right anterior insula cortex, (C) right posterior parietal cortex (PPC), and (D) right middle frontal gyrus (MFG). Horizontal lines indicate pairs of conditions with significant differences in activation amplitude (p < 0.05).

To identify regions whose activation differed based on the nature of the contingency change, we examined the contrasts of Effect Change > Reversal Change and Reversal Change > Effect Change (Figure 4 and Table 4). Effect Changes produced significantly greater activations only in a small number of posterior regions (including the middle temporal gyrus) that have been previously implicated in object processing (Martin et al., 1996; Weisberg et al., 2007). Reversal Changes produced significantly greater activations in the DMPFC, SMC, and precentral gyrus, regions previously implicated in the selection of actions and the production of motor responses. ROI analyses of these regions are presented in Figures 4B–D.

Figure 4.

Contrasts between Reversal Changes and Effect Changes. (A) Shown are regions that exhibited significant differences in activation between the Reversal Change and Effect Change conditions. Functional ROI analyses extracted the relative signal change evoked for each trial type – Reversal Changes, Value Changes, and Effect Changes – within the (B) right middle temporal gyrus (MTG), (C) right precentral gyrus (PCG), and (D) dorsomedial prefrontal cortex (DMPFC). Horizontal lines indicate pairs of conditions with significant differences in activation amplitude (p < 0.05).

Discussion

Humans, like many other animals, possess a high degree of behavioral flexibility. We can learn the values of different actions and can select new courses of behavior when those values change. However, human learning extends well beyond action-value contingencies to include learning about the physical effects of our actions, which can be generalized to novel situations with new valuative contingencies. Although numerous studies have explored the neural basis of valuative contingency learning, many have confounded the change in the reward with the physical change in the rewarding stimulus. Here, we show that, when compared within the same task, the brain regions associated with learning reward-action contingencies are also engaged by behaviorally- and reward-irrelevant contingency changes.

Valuative vs. non-valuative contingency processing

Our data suggest that the prefrontal and parietal cortex activations associated with value learning could be attributable to the co-occurring physical changes in the rewarding stimulus. This strong interpretation is supported by the differential effects of prefrontal lesions across a range of species. Lesions in the ventromedial prefrontal cortex (including orbitofrontal areas) appear to remove the motivation or ability to learn values and respond appropriately, such as in the reversal-learning task (Doar et al., 1987; Bechara et al., 1994; Dias et al., 1996; Hornak et al., 2004; Izquierdo et al., 2004; Rudebeck and Murray, 2008). In contrast, LPFC lesions disrupt the learning or accessing of contingency information, such as in an extradimensional set shifting task (Owen et al., 1991; Dias et al., 1996; Hornak et al., 2004; see also Barcelo et al., 2007). Alternatively, LPFC could contribute to both sorts of contingencies: the physical effect of our action and the valuation of that effect. These two interpretations cannot be distinguished when examining value learning as a categorical change, as in the reversal learning task, as new value contingencies also reflect newly rewarded stimuli.

The key to determining which of these interpretations is correct is to parametrically dissociate the change in value from the changes in the physical stimulus. Studies that have examined the processing of parametric value signals using fMRI and physiological recording techniques have identified several brain regions that appear to encode such signals, including the medial prefrontal cortex, orbitofrontal cortex, amygdala, dorsal and ventral striatum, nucleus accumbens, and posterior parietal cortex (PPC) (O'Doherty et al., 2001, 2003; Delgado et al., 2003; Dorris and Glimcher, 2004; Knutson et al., 2005; Kable and Glimcher, 2007; Lau and Glimcher, 2007; Plassmann et al., 2007, 2008; Hare et al., 2008; Schiller et al., 2008). Notably absent from this list of value-encoding regions are the MFG, IFG, and aINS, in which activation was evoked during both the Reversal Change and Effect Change trials. This suggests that these lateral prefrontal and insular cortices encode the non-valuative contingency information related to learning about the environment.

Of the brain areas we identified as processing multiple sorts of contingency information, only the PPC has previously been implicated in processing parametric value information (Platt and Glimcher, 1999; Dorris and Glimcher, 2004). This suggests that the PPC may play a role in the integration of actions with both valuative and non-valuative outcomes (Assad, 2003). This agrees with the hypothesis that the PPC acts as a decision map, relating actions to the expected value of their effects (Platt and Glimcher, 1999; Beck et al., 2008; Churchland et al., 2008). Such an integrative role could provide a unitary framework for the myriad functions supported by PPC subregions, such as in multi-modal integration of sensory input (Cohen and Andersen, 2000; Toth and Assad, 2002; Cohen et al., 2004; Mullette-Gillman et al., 2005), the attentional-intentional processes relating to motor planning (Colby and Goldberg, 1999; Snyder et al., 2000), working memory (Stoet and Snyder, 2004; Vingerhoets, 2008), and visuomotor learning (Grafton et al., 2008).

Contingency vs. control processing

An important question is how contingency processing interacts with the executive control processes responsible for producing behavioral changes when necessitated by a contingency change. The LPFC and aINS have long been hypothesized to play a critical role in working memory and other executive functions, based upon converging evidence from single-unit (Goldman-Rakic, 1996; Chafee and Goldman-Rakic, 1998), lesion (Doar et al., 1987; Hornak et al., 2004), and fMRI studies (Elliott et al., 1997; Rowe et al., 2000). Our LPFC and aINS activations during both reversal changes and effect changes agree with the interpretation that the prefrontal cortex is involved in forming, updating, and accessing models that relate stimuli to actions (Passingham, 1975; Cohen and Servan-Schreiber, 1992; Grafman et al., 1994; Wise et al., 1996; Miller and Cohen, 2001).

Co-occurring LPFC, aINS, and DMPFC activity has also been observed across a large number of studies examining executive control processes, including auditory detection, pattern detection, working memory, and response selection (Goldman-Rakic, 1996; McCarthy et al., 1997; Duncan and Owen, 2000; Miller and Cohen, 2001; Huettel et al., 2002; Robbins, 2007; Hyafil et al., 2009). Models of executive control processing have suggested that the DMPFC (referred to as anterior cingulate cortex, ACC) detects changes in contingencies and then activates the LPFC, which exerts executive control over behavior by biasing activity in other brain areas (Botvinick et al., 2001; Miller and Cohen, 2001; Ridderinkhof et al., 2004; Walton et al., 2004; Behrens et al., 2007; Mansouri et al., 2009). However, a recent study using Granger causality analysis found that the aINS exerted a causal influence on both the LPFC and the DMPFC (referred to as ACC, and anterior to our specific DMPFC activation) (Sridharan et al., 2008), an inversion of the directionality of influence suggested by the aforementioned executive-control models (see also Markela-Lerenc et al., 2004).

We suggest that these discrepancies reflect, at least in part, the conflation of contingency detection and control processes in many paradigms. For example, although activations of the LPFC during an oddball task are often described in terms of behavioral control or inhibition, these activations have been shown to be produced by contingency changes in the mapping of stimuli to responses, independently of changes in the specific motor response (Huettel and McCarthy, 2004). Similarly, Carter et al. (2006) found that activity in the LPFC (specifically, the MFG) correlated with the trial-by-trial level of explicit contingency knowledge during a classical conditioning paradigm in which there was no rewarded, or 'correct', response. These studies show that the LPFC processes contingency information in the absence of engaged control processes.

Our task allowed the dissociation of contingency and executive control processing within the same paradigm, so that we could examine the functional roles of these brain areas. The contrast of Reversal Change > Effect Change allowed us to determine which brain areas are significantly more activated during the engagement of control and concurrent valuation processes, while controlling for contingency changes. We found increased activations in the posterior dorsomedial prefrontal cortex (DMPFC, including dorsal anterior cingulate), supplementary motor cortex (SMC), and precentral cortex during reversal changes contrasted with effect changes. As contingency change detection occurs in both Effect and Reversal Changes, this suggests that the aINS and LPFC process the contingency change (and possibly exert the required cognitive control), with activation of the DMPFC only when control processes are required to produce a change in behavioral response. This is consistent with the directionality found by recent studies (Markela-Lerenc et al., 2004; Sridharan et al., 2008), and suggests dissociable functional roles for these brain areas that are an inversion of previous models.

Conclusions

We examined the potentially distinct neural mechanisms underlying valuative, behaviorally relevant contingency learning and non-valuative, behaviorally irrelevant contingency learning. We found that brain areas previously implicated in valuative contingency learning also contribute to the processing of behaviorally and motivationally irrelevant contingency changes. This suggests two key conclusions. First, the processing of value information may co-opt a more general executive system for contingency learning. Second, because non-valuative contingency changes are behaviorally irrelevant, the executive system may play an informational rather than a control role in many tasks.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank McKell Carter, John Clithero, Yale Cohen, Brandi Newell, David Smith, and Lihong Wang for comments on data analysis and the manuscript. This research was supported by the US National Institute of Mental Health (NIMH-70685) and by the US National Institute of Neurological Disease and Stroke (NINDS-41328). SAH is supported by an Incubator Award from the Duke Institute for Brain Sciences.

References

  1. Assad J. A. (2003). Neural coding of behavioral relevance in parietal cortex. Curr. Opin. Neurobiol. 13, 194–197. doi: 10.1016/S0959-4388(03)00045-X
  2. Barcelo F., Perianez J. A., Nyhus E. (2007). An information theoretical approach to task-switching: evidence from cognitive brain potentials in humans. Front. Hum. Neurosci. 1, 13. doi: 10.3389/neuro.09.013.2007
  3. Bechara A., Damasio A. R., Damasio H., Anderson S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition 50, 7–15. doi: 10.1016/0010-0277(94)90018-3
  4. Beck J. M., Ma W. J., Kiani R., Hanks T., Churchland A. K., Roitman J., Shadlen M. N., Latham P. E., Pouget A. (2008). Probabilistic population codes for Bayesian decision making. Neuron 60, 1142–1152. doi: 10.1016/j.neuron.2008.09.021
  5. Behrens T. E., Woolrich M. W., Walton M. E., Rushworth M. F. (2007). Learning the value of information in an uncertain world. Nat. Neurosci. 10, 1214–1221. doi: 10.1038/nn1954
  6. Botvinick M. M., Braver T. S., Barch D. M., Carter C. S., Cohen J. D. (2001). Conflict monitoring and cognitive control. Psychol. Rev. 108, 624–652. doi: 10.1037/0033-295X.108.3.624
  7. Brainard D. H. (1997). The psychophysics toolbox. Spat. Vis. 10, 433–436. doi: 10.1163/156856897X00357
  8. Carter R. M., O'Doherty J. P., Seymour B., Koch C., Dolan R. J. (2006). Contingency awareness in human aversive conditioning involves the middle frontal gyrus. Neuroimage 29, 1007–1012. doi: 10.1016/j.neuroimage.2005.09.011
  9. Chafee M. V., Goldman-Rakic P. S. (1998). Matching patterns of activity in primate prefrontal area 8a and parietal area 7ip neurons during a spatial working memory task. J. Neurophysiol. 79, 2919–2940.
  10. Churchland A. K., Kiani R., Shadlen M. N. (2008). Decision-making with multiple alternatives. Nat. Neurosci. 11, 693–702. doi: 10.1038/nn.2123
  11. Cohen J. D., Servan-Schreiber D. (1992). Context, cortex, and dopamine: a connectionist approach to behavior and biology in schizophrenia. Psychol. Rev. 99, 45–77. doi: 10.1037/0033-295X.99.1.45
  12. Cohen Y. E., Andersen R. A. (2000). Reaches to sounds encoded in an eye-centered reference frame. Neuron 27, 647–652. doi: 10.1016/S0896-6273(00)00073-8
  13. Cohen Y. E., Cohen I. S., Gifford G. W. 3rd (2004). Modulation of LIP activity by predictive auditory and visual cues. Cereb. Cortex 14, 1287–1301. doi: 10.1093/cercor/bhh090
  14. Colby C. L., Goldberg M. E. (1999). Space and attention in parietal cortex. Annu. Rev. Neurosci. 22, 319–349. doi: 10.1146/annurev.neuro.22.1.319
  15. Cools R., Clark L., Owen A. M., Robbins T. W. (2002). Defining the neural mechanisms of probabilistic reversal learning using event-related functional magnetic resonance imaging. J. Neurosci. 22, 4563–4567.
  16. Delgado M. R., Locke H. M., Stenger V. A., Fiez J. A. (2003). Dorsal striatum responses to reward and punishment: effects of valence and magnitude manipulations. Cogn. Affect. Behav. Neurosci. 3, 27–38. doi: 10.3758/CABN.3.1.27
  17. Dias R., Robbins T. W., Roberts A. C. (1996). Dissociation in prefrontal cortex of affective and attentional shifts. Nature 380, 69–72. doi: 10.1038/380069a0
  18. Doar B., Finger S., Almli C. R. (1987). Tactile-visual acquisition and reversal learning deficits in rats with prefrontal cortical lesions. Exp. Brain Res. 66, 432–434. doi: 10.1007/BF00243317
  19. Dorris M. C., Glimcher P. W. (2004). Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron 44, 365–378. doi: 10.1016/j.neuron.2004.09.009
  20. Duncan J., Owen A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends Neurosci. 23, 475–483. doi: 10.1016/S0166-2236(00)01633-7
  21. Elliott R., Frith C. D., Dolan R. J. (1997). Differential neural response to positive and negative feedback in planning and guessing tasks. Neuropsychologia 35, 1395–1404. doi: 10.1016/S0028-3932(97)00055-9
  22. Fan J., Kolster R., Ghajar J., Suh M., Knight R. T., Sarkar R., McCandliss B. D. (2007). Response anticipation and response conflict: an event-related potential and functional magnetic resonance imaging study. J. Neurosci. 27, 2272–2282. doi: 10.1523/JNEUROSCI.3470-06.2007
  23. Goldman-Rakic P. S. (1996). The prefrontal landscape: implications of functional architecture for understanding human mentation and the central executive. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 351, 1445–1453. doi: 10.1098/rstb.1996.0129
  24. Grafman J., Pascual-Leone A., Alway D., Nichelli P., Gomez-Tortosa E., Hallett M. (1994). Induction of a recall deficit by rapid-rate transcranial magnetic stimulation. Neuroreport 5, 1157–1160. doi: 10.1097/00001756-199405000-00034
  25. Grafton S. T., Schmitt P., Van Horn J., Diedrichsen J. (2008). Neural substrates of visuomotor learning based on improved feedback control and prediction. Neuroimage 39, 1383–1395. doi: 10.1016/j.neuroimage.2007.09.062
  26. Hare T. A., O'Doherty J., Camerer C. F., Schultz W., Rangel A. (2008). Dissociating the role of the orbitofrontal cortex and the striatum in the computation of goal values and prediction errors. J. Neurosci. 28, 5623–5630. doi: 10.1523/JNEUROSCI.1309-08.2008
  27. Herrnstein R. J. (1970). On the law of effect. J. Exp. Anal. Behav. 13, 243–266. doi: 10.1901/jeab.1970.13-243
  28. Hornak J., O'Doherty J., Bramham J., Rolls E. T., Morris R. G., Bullock P. R., Polkey C. E. (2004). Reward-related reversal learning after surgical excisions in orbito-frontal or dorsolateral prefrontal cortex in humans. J. Cogn. Neurosci. 16, 463–478. doi: 10.1162/089892904322926791
  29. Huettel S. A., Mack P. B., McCarthy G. (2002). Perceiving patterns in random series: dynamic processing of sequence in prefrontal cortex. Nat. Neurosci. 5, 485–490.
  30. Huettel S. A., McCarthy G. (2004). What is odd in the oddball task? Prefrontal cortex is activated by dynamic changes in response strategy. Neuropsychologia 42, 379–386. doi: 10.1016/j.neuropsychologia.2003.07.009
  31. Hyafil A., Summerfield C., Koechlin E. (2009). Two mechanisms for task switching in the prefrontal cortex. J. Neurosci. 29, 5135–5142. doi: 10.1523/JNEUROSCI.2828-08.2009
  32. Izquierdo A., Suda R. K., Murray E. A. (2004). Bilateral orbital prefrontal cortex lesions in rhesus monkeys disrupt choices guided by both reward value and reward contingency. J. Neurosci. 24, 7540–7548. doi: 10.1523/JNEUROSCI.1921-04.2004
  33. Jenkinson M., Smith S. (2001). A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5, 143–156. doi: 10.1016/S1361-8415(01)00036-6
  34. Kable J. W., Glimcher P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625–1633. doi: 10.1038/nn2007
  35. Knutson B., Taylor J., Kaufman M., Peterson R., Glover G. (2005). Distributed neural representation of expected value. J. Neurosci. 25, 4806–4812. doi: 10.1523/JNEUROSCI.0642-05.2005
  36. Lau B., Glimcher P. W. (2007). Action and outcome encoding in the primate caudate nucleus. J. Neurosci. 27, 14502–14514. doi: 10.1523/JNEUROSCI.3060-07.2007
  37. Mansouri F. A., Tanaka K., Buckley M. J. (2009). Conflict-induced behavioural adjustment: a clue to the executive functions of the prefrontal cortex. Nat. Rev. Neurosci. 10, 141–152. doi: 10.1038/nrn2538
  38. Markela-Lerenc J., Ille N., Kaiser S., Fiedler P., Mundt C., Weisbrod M. (2004). Prefrontal-cingulate activation during executive control: which comes first? Brain Res. Cogn. Brain Res. 18, 278–287. doi: 10.1016/j.cogbrainres.2003.10.013
  39. Martin A., Wiggs C. L., Ungerleider L. G., Haxby J. V. (1996). Neural correlates of category-specific knowledge. Nature 379, 649–652. doi: 10.1038/379649a0
  40. McCarthy G., Luby M., Gore J., Goldman-Rakic P. (1997). Infrequent events transiently activate human prefrontal and parietal cortex as measured by functional MRI. J. Neurophysiol. 77, 1630–1634.
  41. Miller E. K., Cohen J. D. (2001). An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 24, 167–202. doi: 10.1146/annurev.neuro.24.1.167
  42. Mullette-Gillman O. A., Cohen Y. E., Groh J. M. (2005). Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J. Neurophysiol. 94, 2331–2352. doi: 10.1152/jn.00021.2005
  43. O'Doherty J., Critchley H., Deichmann R., Dolan R. J. (2003). Dissociating valence of outcome from behavioral control in human orbital and ventral prefrontal cortices. J. Neurosci. 23, 7931–7939.
  44. O'Doherty J., Kringelbach M. L., Rolls E. T., Hornak J., Andrews C. (2001). Abstract reward and punishment representations in the human orbitofrontal cortex. Nat. Neurosci. 4, 95–102. doi: 10.1038/82959
  45. Owen A. M., Roberts A. C., Polkey C. E., Sahakian B. J., Robbins T. W. (1991). Extra-dimensional versus intra-dimensional set shifting performance following frontal lobe excisions, temporal lobe excisions or amygdalo-hippocampectomy in man. Neuropsychologia 29, 993–1006. doi: 10.1016/0028-3932(91)90063-E
  46. Passingham R. (1975). Delayed matching after selective prefrontal lesions in monkeys (Macaca mulatta). Brain Res. 92, 89–102. doi: 10.1016/0006-8993(75)90529-6
  47. Pavlov I. P. (1928). Lectures on Conditioned Reflexes. New York: Liveright Publishing Corp.
  48. Plassmann H., O'Doherty J., Rangel A. (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J. Neurosci. 27, 9984–9988. doi: 10.1523/JNEUROSCI.2131-07.2007
  49. Plassmann H., O'Doherty J., Shiv B., Rangel A. (2008). Marketing actions can modulate neural representations of experienced pleasantness. Proc. Natl. Acad. Sci. U.S.A. 105, 1050–1054. doi: 10.1073/pnas.0706929105
  50. Platt M. L., Glimcher P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature 400, 233–238. doi: 10.1038/22268
  51. Remijnse P. L., Nielen M. M., Uylings H. B., Veltman D. J. (2005). Neural correlates of a reversal learning task with an affectively neutral baseline: an event-related fMRI study. Neuroimage 26, 609–618. doi: 10.1016/j.neuroimage.2005.02.009
  52. Ridderinkhof K. R., Ullsperger M., Crone E. A., Nieuwenhuis S. (2004). The role of the medial frontal cortex in cognitive control. Science 306, 443–447. doi: 10.1126/science.1100301
  53. Robbins T. W. (2007). Shifting and stopping: fronto-striatal substrates, neurochemical modulation and clinical implications. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 362, 917–932. doi: 10.1098/rstb.2007.2097
  54. Rorden C., Karnath H. O., Bonilha L. (2007). Improving lesion-symptom mapping. J. Cogn. Neurosci. 19, 1081–1088. doi: 10.1162/jocn.2007.19.7.1081
  55. Rowe J. B., Toni I., Josephs O., Frackowiak R. S., Passingham R. E. (2000). The prefrontal cortex: response selection or maintenance within working memory? Science 288, 1656–1660. doi: 10.1126/science.288.5471.1656
  56. Rudebeck P. H., Murray E. A. (2008). Amygdala and orbitofrontal cortex lesions differentially influence choices during object reversal learning. J. Neurosci. 28, 8338–8343. doi: 10.1523/JNEUROSCI.2272-08.2008
  57. Schiller D., Levy I., Niv Y., LeDoux J. E., Phelps E. A. (2008). From fear to safety and back: reversal of fear in the human brain. J. Neurosci. 28, 11517–11525. doi: 10.1523/JNEUROSCI.2265-08.2008
  58. Skinner B. F. (1938). The Behavior of Organisms. New York: Appleton-Century-Crofts.
  59. Smith S. M. (2002). Fast robust automated brain extraction. Hum. Brain Mapp. 17, 143–155. doi: 10.1002/hbm.10062
  60. Smith S. M., Jenkinson M., Woolrich M. W., Beckmann C. F., Behrens T. E., Johansen-Berg H., Bannister P. R., De Luca M., Drobnjak I., Flitney D. E., Niazy R. K., Saunders J., Vickers J., Zhang Y., De Stefano N., Brady J. M., Matthews P. M. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Suppl. 1), S208–S219. doi: 10.1016/j.neuroimage.2004.07.051
  61. Snyder L. H., Batista A. P., Andersen R. A. (2000). Intention-related activity in the posterior parietal cortex: a review. Vision Res. 40, 1433–1441. doi: 10.1016/S0042-6989(00)00052-3
  62. Squires K. C., Wickens C., Squires N. K., Donchin E. (1976). The effect of stimulus sequence on the waveform of the cortical event-related potential. Science 193, 1142–1146. doi: 10.1126/science.959831
  63. Sridharan D., Levitin D. J., Menon V. (2008). A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proc. Natl. Acad. Sci. U.S.A. 105, 12569–12574. doi: 10.1073/pnas.0800005105
  64. Stoet G., Snyder L. H. (2004). Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron 42, 1003–1012. doi: 10.1016/j.neuron.2004.06.003
  65. Sutton R. S., Barto A. G. (1998). Reinforcement learning: an introduction. IEEE Trans. Neural Netw. 9, 1054. doi: 10.1109/TNN.1998.712192
  66. Thorndike E. (1898). Some experiments on animal intelligence. Science 7, 818–824. doi: 10.1126/science.7.181.818
  67. Tolman E. C. (1932). Purposive Behavior in Animals and Men. New York: The Century Co.
  68. Toth L. J., Assad J. A. (2002). Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature 415, 165–168. doi: 10.1038/415165a
  69. Vingerhoets G. (2008). Knowing about tools: neural correlates of tool familiarity and experience. Neuroimage 40, 1380–1391. doi: 10.1016/j.neuroimage.2007.12.058
  70. Walton M. E., Devlin J. T., Rushworth M. F. (2004). Interactions between decision making and performance monitoring within prefrontal cortex. Nat. Neurosci. 7, 1259–1265. doi: 10.1038/nn1339
  71. Weisberg J., van Turennout M., Martin A. (2007). A neural system for learning about object function. Cereb. Cortex 17, 513–521. doi: 10.1093/cercor/bhj176
  72. Wise S. P., Murray E. A., Gerfen C. R. (1996). The frontal cortex-basal ganglia system in primates. Crit. Rev. Neurobiol. 10, 317–356.
  73. Woolrich M. (2008). Robust group analysis using outlier inference. Neuroimage 41, 286–301. doi: 10.1016/j.neuroimage.2008.02.042
  74. Woolrich M. W., Jbabdi S., Patenaude B., Chappell M., Makni S., Behrens T., Beckmann C., Jenkinson M., Smith S. M. (2009). Bayesian analysis of neuroimaging data in FSL. Neuroimage 45, S173–S186. doi: 10.1016/j.neuroimage.2008.10.055
  75. Xue G., Ghahremani D. G., Poldrack R. A. (2008). Neural substrates for reversing stimulus-outcome and stimulus-response associations. J. Neurosci. 28, 11196–11204. doi: 10.1523/JNEUROSCI.4001-08.2008

Articles from Frontiers in Human Neuroscience are provided here courtesy of Frontiers Media SA