Zahn et al. 10.1073/pnas.0607061104.
Fig. 3. Response times for related and unrelated (according to >50% of participants' decisions) word pairs during fMRI. Error bars represent 95% confidence intervals (C.I.). One-way ANOVA [effect of stimulus type on RT, trials with response only: F(2, 222) = 16.61; P = 0.0000002]. The number of subjects judging a concept pair as related (χ² = 1.65; P = 0.44) and the number of subjects with omissions (χ² = 1.16; P = 0.56) were equal across all conditions during fMRI (Kruskal-Wallis test, df = 2, asymptotic). Follow-up pairwise comparisons showed significantly faster responses for related than for unrelated word pairs of each stimulus type [two-sided t tests; animal function, t(72) = 4.25, P = 0.0001; positive social, t(70) = 4.82, P = 0.0001; negative social, t(71) = 2.77, P = 0.007].
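As a hedged illustration only (not the authors' analysis scripts), the RT statistics above could be reproduced from an item-level table along the following lines; the DataFrame name `trials` and its columns `rt`, `stim_type`, and `related` are hypothetical.

```python
# Minimal sketch of the RT analyses reported in Fig. 3, assuming a pandas
# DataFrame `trials` with hypothetical columns: "rt" (response time),
# "stim_type" ("animal", "pos_social", "neg_social"), and "related" (bool).
import pandas as pd
from scipy import stats

def rt_stats(trials: pd.DataFrame) -> None:
    # One-way ANOVA: effect of stimulus type on RT (responded trials only).
    groups = [g["rt"].values for _, g in trials.groupby("stim_type")]
    f, p = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f:.2f}, P = {p:.7f}")

    # Two-sided t tests: related vs. unrelated pairs within each stimulus type.
    for stim, g in trials.groupby("stim_type"):
        rel = g.loc[g["related"], "rt"]
        unrel = g.loc[~g["related"], "rt"]
        t, p = stats.ttest_ind(rel, unrel)
        print(f"{stim}: t({len(rel) + len(unrel) - 2}) = {t:.2f}, P = {p:.4f}")
```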
Fig. 4. Illustrated are the differences in activity for positive vs. negative social concepts in the anterior medial prefrontal cortex, as opposed to the equal activity for positive and negative social concepts observed at the right superior anterior temporal and medial prefrontal peak coordinates. Parameter estimates and SEs for animal function concepts vs. fixation, positive social concepts vs. fixation, and negative social concepts vs. fixation are displayed. A repeated-measures ANOVA (SPSS 11: http://www.spss.com) was used to test the interaction between Region (sup aTL, med PFC, ant mPFC) and Condition (ANI vs. FIX, POS SOC vs. FIX, NEG SOC vs. FIX). Main effects of Region (n = 26, df = 2, F = 4.83, P = 0.01) and Condition (F = 7.964, P = 0.001) as well as their interaction (Region × Condition, n = 26, df = 4, F = 3.451, P = 0.01) were significant. Positive [T(25) = 4.23, P = 0.0001] but not negative social concepts [T(25) = -0.451, P = 0.66] increased activity in the anterior medial PFC, and this difference was significant [T(25) = 3.17, P = 0.004]. The two types of social concepts showed no difference in activation of the superior anterior temporal [T(25) = -0.41, P = 0.68] or the more posterior medial prefrontal cortex [T(25) = 0.80, P = 0.43]. For the pairwise comparisons of social vs. animal function concepts in these regions, see SI Table 2 and Fig. 2d.
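A minimal sketch of the Region × Condition repeated-measures ANOVA and the follow-up paired comparison, assuming a long-format table of extracted parameter estimates; the DataFrame `pe` and its column and level names are hypothetical (the original analysis used SPSS 11).

```python
# Sketch of the repeated-measures ANOVA in Fig. 4, assuming a long-format
# DataFrame `pe` with hypothetical columns: "subject", "region"
# ("sup_aTL", "med_PFC", "ant_mPFC"), "condition" ("ANI", "POS_SOC",
# "NEG_SOC"), and "beta" (parameter estimate vs. fixation).
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats

def region_by_condition(pe: pd.DataFrame) -> None:
    aov = AnovaRM(pe, depvar="beta", subject="subject",
                  within=["region", "condition"]).fit()
    print(aov)  # main effects and the Region x Condition interaction

    # Follow-up paired comparison in anterior medial PFC:
    # positive vs. negative social concepts (betas vs. fixation).
    ant = pe[pe["region"] == "ant_mPFC"].pivot(index="subject",
                                               columns="condition",
                                               values="beta")
    t, p = stats.ttest_rel(ant["POS_SOC"], ant["NEG_SOC"])
    print(f"ant mPFC, POS vs. NEG: t({len(ant) - 1}) = {t:.2f}, P = {p:.3f}")
```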
Fig. 5. The fMRI paradigm. Participants saw word pairs or a visual pattern during fMRI. Each stimulus was displayed for 2.5 sec, then a fixation asterisk appeared on the screen for a mean of 4.6 sec (jittered interval). Subjects indicated whether each word pair was related or unrelated in meaning by pressing a key during the display period.
Fig. 6. Coronal and axial slices through the right inferior anterior temporal lobe of a normalized echoplanar image from a representative participant show full coverage of the anterior temporal lobes.
SI Methods
Stimuli and Prestudies. Prestudy 1.
In our first normative prestudy, we selected positive (e.g., "honor") and negative (e.g., "tactless") social concept terms [with kind permission of the authors (1)] for which American social desirability Z-score norms were available. Positive and negative concepts were defined as those above or below the mean social desirability score over all words (Z > 0 → positive; Z < 0 → negative). Animal function concepts were derived from published semantic feature norms [with kind permission of the authors (2)], such that nonsensory feature descriptions (e.g., "is trainable") were shortened to one word conveying the main information (e.g., "trainable"). Words were selected for which important psycholinguistic variables on the concept could be obtained from the MRC psycholinguistic database (3): word familiarity, Kucera-Francis word frequency, imageability, and concreteness. A total of 295 words was split into two lists. Forty (20 male and 20 female) healthy participants per list (list A, age = 31.2 ± 8.9 years, years of education = 17.3 ± 2.2; list B, age = 29.7 ± 7.2 years, years of education = 17.4 ± 2.2) performed Likert-scale (1-7) ratings in single 2-h sessions on a computerized seven-step visual analogue scale. The instructions [adapted from John et al. (4)] asked participants to mark how well each word described a detailed specific set of social behaviors of persons (social concepts) or biological behaviors of animals (animal concepts), as a measure of behavior descriptiveness. Furthermore, as a measure of category breadth [adapted from Hampson et al. (1)], they had to rate how many different kinds of social behaviors of persons (social concepts) or animal behaviors (animal concepts) the word could apply to. Only good raters (list A, n = 29, 16 male; list B, n = 36, 19 male), i.e., participants for whom at least 75% of rated words deviated by less than three raw-score points from the group mode (most frequent value) for that word, were included to compute normative data. Mean raw scores over all good raters for a list were transformed to Z-scores to derive a descriptiveness and a category-breadth value for each word in relation to all other words in the same list, including all three stimulus types. Category breadth and descriptiveness were highly inversely correlated (Pearson R = -0.98, P = 0.0001).
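The good-rater criterion and the Z-score norms could be computed along the following lines; this is a minimal sketch assuming a raters-by-words matrix of raw Likert scores, and the variable names are hypothetical.

```python
# Sketch of the "good rater" filter and normative Z-scores, assuming a
# DataFrame `ratings` (rows = raters, columns = words, values = raw 1-7
# Likert scores) for one list and one rated dimension.
import pandas as pd

def normative_zscores(ratings: pd.DataFrame, min_prop: float = 0.75) -> pd.Series:
    # Group mode (most frequent raw score) per word.
    modes = ratings.mode(axis=0).iloc[0]
    # A rater counts as "good" if at least 75% of their rated words lie
    # within less than three raw-score points of the group mode.
    close = ratings.sub(modes, axis=1).abs() < 3
    good = close.mean(axis=1) >= min_prop
    # Mean raw score over good raters, Z-transformed across all words of
    # the list (all three stimulus types pooled).
    means = ratings.loc[good].mean(axis=0)
    return (means - means.mean()) / means.std(ddof=1)
```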
Prestudy 2.
For the second normative prestudy, we formed 274 word pairs out of words from the first prestudy to derive semantically related and unrelated pairs for each condition (positive-positive social concept pair, e.g., generous-charitable; negative-negative social concept pair, e.g., tactless-impolite; animal function-animal function concept pair, e.g., caught-eaten). The exact meaning relatedness of these 274 word pairs, split into two lists, was then determined by asking 38 (19 male and 19 female) healthy participants per list (list A, age = 32.1 ± 9.5 years, years of education = 17.4 ± 2.4; list B, age = 30.4 ± 8.8 years, years of education = 17.1 ± 2.2) to rate how related in meaning the two words were (on a Likert scale from 1-7). Three additional experimental measures of relatedness were obtained to gain further information on the nature of meaning relations: (i) what proportion of persons (social concepts) or animals (animal concepts) both words can be applied to ("persons/animals in common"), (ii) what proportion of social or animal behaviors both words can be applied to ("social/animal behaviors in common"), and (iii) how similar the descriptions of behaviors given by the two words are ("similar in description"). Mean raw scores over all participants from one list were transformed to Z-scores over all word pairs from that list, including all three stimulus types. The traditional measure of meaning relatedness (i.e., semantic relatedness) was highly positively correlated (Pearson R > 0.90) with all of the other measures of relatedness, suggesting a contribution of all these different kinds of meaning relations to the overall measure of meaning relatedness used as a predictor in the image analysis. In addition, word associativity was determined for each single word occurring in the set of word pairs by asking participants to write down the first word that came to mind. To this end, a list of single words was given to each participant in the second prestudy before they were exposed to the word pairs. From the set of 274 word pairs, we selected 75 word pairs for each of the three conditions (n = 225 in total) of the final fMRI experiment, such that conditions were matched on as many psycholinguistic variables as possible.
In all prestudies, the sequence of stimuli was individually randomized for each participant, and the sequence of scales to rate was counterbalanced across participants. All normative data and a complete list of the stimuli can be obtained from the authors.
Psycholinguistic Stimulus Properties.
We compared the three stimulus conditions using one-way ANOVAs for each of the 20 psycholinguistic variables [F(2,222) = 5.5, P = 0.005, significance level for each comparison, equivalent to an approximate Bonferroni-corrected threshold of P = 0.10, which minimized Type II errors (i.e., overlooking true psycholinguistic differences between the stimulus conditions)]. No difference emerged for first- and second-word familiarity or for Kucera-Francis word frequency. Neither the difference in category-breadth Z-scores within each word pair nor the social desirability difference within individual positive and negative social concept pairs differed across conditions. There were also no differences in associativity, behaviors in common, similarity of description, or the differences of both measures per word pair. Significant differences (F > 8.8, P < 0.0001) across stimulus types emerged for first- and second-word number of syllables (social vs. animal), imageability and concreteness (animal vs. social), descriptiveness (animal and negative social vs. positive), and category breadth (positive vs. animal and negative social), which had to be accounted for in the SPM model (see below).
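A minimal sketch of this per-variable matching check, assuming one row per word pair with a condition label and one column per psycholinguistic variable; the DataFrame and column names are hypothetical.

```python
# Sketch of the condition-matching check: one-way ANOVA per psycholinguistic
# variable, with the per-comparison threshold used above (P = 0.005,
# ~Bonferroni-corrected P = 0.10 over 20 variables).
import pandas as pd
from scipy import stats

ALPHA = 0.005

def check_matching(pairs: pd.DataFrame, variables: list[str]) -> None:
    for var in variables:
        groups = [g[var].dropna().values for _, g in pairs.groupby("condition")]
        f, p = stats.f_oneway(*groups)
        df2 = sum(map(len, groups)) - len(groups)
        flag = "DIFFERS" if p < ALPHA else "matched"
        print(f"{var:30s} F(2,{df2}) = {f:5.1f}  P = {p:.4f}  {flag}")
```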
Differences in the distribution of word classes were examined using cross-tabulations and contingency coefficients (CC). There were no significant differences between positive and negative social concepts (approximate significance: second word, P = 0.110, n = 150, CC = 0.197; first word, P = 0.362, CC = 0.116). However, there were highly significant differences between each class of social concepts and the animal function words (CC > 0.55, P < 0.0001), with adjectives being more frequent for social concepts and nouns and verbs more frequent for animal concepts.
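The contingency coefficient for such a cross-tabulation can be obtained from the chi-square statistic; a minimal sketch, with hypothetical input Series names:

```python
# Pearson's contingency coefficient CC = sqrt(chi2 / (chi2 + n)) from a
# word-class-by-condition cross-tabulation.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def contingency_coefficient(word_class: pd.Series, condition: pd.Series):
    table = pd.crosstab(word_class, condition)   # e.g., adjective/noun/verb x condition
    chi2, p, dof, _ = chi2_contingency(table)
    n = table.values.sum()
    cc = np.sqrt(chi2 / (chi2 + n))
    return cc, p
```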
fMRI Paradigm.
Participants saw word pairs or a visual pattern on a screen via a mirror system attached to the head coil. Stimuli (18-point font) were back-projected onto a translucent screen placed at the feet of the participant with a magnetically shielded LCD video projector. The stimuli were presented in an event-related design with a pseudorandom order of the different stimulus types (positive social concept pairs, negative social concept pairs, animal function concept pairs, fixation) within each fMRI run and across the three runs. Visual stimulus presentation was controlled by the Experimental Run Time System (Berisoft Cooperation, Germany, http://www.erts.de). Each stimulus was displayed for 2.5 sec; then a fixation asterisk appeared in the center of the screen for a mean of 4.6 sec (jittered from 2.6 to 6.6 sec in 500-msec steps; see also SI Fig. 5). Participants indicated whether each word pair was related or unrelated in meaning by pressing a key with the index or middle finger of their right hand. The order of runs and the key/finger assignments to the related and unrelated decisions were counterbalanced across participants. Participants received a 5-min practice session on the actual task with a different set of stimuli to become familiar with the experiment. Response times and responses were recorded for each trial.
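The jittered fixation intervals (2.6-6.6 sec in 500-msec steps, mean 4.6 sec) could be generated as sketched below; the exact randomization scheme used in the original paradigm is an assumption here.

```python
# Sketch of jittered inter-stimulus intervals: 9 possible durations from
# 2.6 to 6.6 s in 0.5-s steps; uniform sampling gives a mean of 4.6 s.
import numpy as np

def jittered_isis(n_trials: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    steps = np.linspace(2.6, 6.6, 9)   # 2.6, 3.1, ..., 6.6 s
    return rng.choice(steps, size=n_trials)
```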
Image Analysis.
Image analyses were performed using SPM5 (http://www.fil.ion.ucl.ac.uk/spm/software/spm5). The following preprocessing steps were applied: realignment and unwarping, slice-timing correction, normalization (3 × 3 × 3 mm voxel size), and smoothing with a small kernel (FWHM = 6 mm) to preserve anatomical precision. Estimated translation and rotation parameters were inspected, and one participant was excluded because of motion exceeding 3 mm of translation and 2° of rotation. As a basis function, we used the canonical hemodynamic response function (double-gamma HRF) with time and dispersion derivatives. On the first level of analysis, we specified a general linear model (5) for each participant, which included the four trial types (positive social concepts, negative social concepts, animal function concepts, and fixation). Mean descriptiveness and meaning relatedness as parameters of interest, as well as all variables that were not matched between the stimulus conditions except for word class, were entered as separate parametric regressors convolved with the stimulus-specific HRF [mean value for first and second word: imageability (Z-score transformed), number of syllables, and social concepts' desirability per stimulus]. Concreteness was not entered into the model because it was highly correlated with imageability (Pearson R = 0.92). Category breadth was also not modeled separately because of its high inverse correlation with descriptiveness (Pearson R = -0.98; regarding the fallacies of including highly correlated variables as predictors in multiple regression models, see ref. 6).
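For illustration only, a parametric regressor of the kind described above can be built by placing mean-centered modulation values at the stimulus onsets and convolving with a canonical double-gamma HRF. This is a minimal sketch, not SPM5's implementation; the HRF parameters follow the commonly used SPM-style defaults (peak ~6 s, undershoot ~16 s, undershoot ratio 1/6), and the function names are hypothetical.

```python
# Sketch of a parametric regressor convolved with a double-gamma HRF.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr: float, duration: float = 32.0) -> np.ndarray:
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # peak minus undershoot
    return hrf / hrf.sum()

def parametric_regressor(onsets, values, n_scans, tr):
    """Onsets (s) of one condition and its per-trial parametric values."""
    x = np.zeros(n_scans)
    values = np.asarray(values, dtype=float)
    values -= values.mean()                            # mean-center the modulation
    scans = np.round(np.asarray(onsets) / tr).astype(int)
    x[scans] = values                                  # stick functions (one event per scan assumed)
    return np.convolve(x, canonical_hrf(tr))[:n_scans]
```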
A measure of multivariate distance (Mahalanobis) of word pairs on all of these variables (except for syllables) was computed to exclude multivariate outliers. Only 1 of the 225 stimuli exceeded 14.86 [which is below the expected frequency of values greater than the 99.5th percentile in an n = 225 sample (6)].
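A minimal sketch of this outlier check, assuming `X` is an array with one row per word pair and one column per stimulus variable entering the model; the 99.5th-percentile chi-square cutoff for four variables is ≈14.86, matching the value cited above.

```python
# Mahalanobis-distance outlier screen against a chi-square cutoff.
import numpy as np
from scipy.stats import chi2
from scipy.spatial.distance import mahalanobis

def mahalanobis_outliers(X: np.ndarray, q: float = 0.995) -> np.ndarray:
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.array([mahalanobis(row, mu, cov_inv) ** 2 for row in X])
    return d2 > chi2.ppf(q, df=X.shape[1])   # boolean mask of multivariate outliers
```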
A separate model was set up including all above variables with the addition of response time for each stimulus condition to test whether domain-specific effects were due to response time effects. Note that by default all reported effects are partial regression effects adjusted for the effect of all the other predictors in the model, which are covaried out.
Categorical contrasts were formed by summing up all effects of interest per condition: (i) condition-specific HRF, (ii) effect of behavior descriptiveness convolved with HRF, and (iii) effect of meaning relatedness convolved with HRF. Reported statistics were performed on the second level using a random-effects model. A comparison of positive with negative social concepts including all effects of interest revealed no differences in the temporal lobe (even at P = 0.05 uncorrected). We therefore proceeded with main comparisons of both social conditions vs. the animal condition.
To identify domain-specific regions, we contrasted social concepts (sum of effects of interest) with animal function concepts (sum of effects of interest) and vice versa. To rule out pseudoactivation caused by deactivation in the subtracted condition, we then inclusively masked each of these categorical contrasts with a contrast against the low-level control condition [e.g., social (all effects of interest) vs. fixation]. Because there was no region specific to animal function concepts, we looked at the stimulus-specific HRF compared to the low-level baseline (fixation).
For the conjunction null analysis (7), which investigated whether there was a brain region in which the domain-specific effect (social vs. animal), the descriptiveness of social behavior, and the meaning relatedness of social concepts were detectable in conjunction, we set up a separate factorial model at the second level. The factorial model included the following contrasts: (i) condition-specific HRF compared with fixation HRF, (ii) the effect of behavior descriptiveness, and (iii) the meaning relatedness of each condition, each convolved with the respective HRF.
All analyses were performed at the whole-brain level and within a predefined anterior temporal lobe region of interest (aTL ROI) according to our anatomical hypotheses (8). The ROI was created using bilateral Brodmann area (BA 38, 22, and 21) maps from the WFU PickAtlas (9) integrated in SPM5. The original maps were cropped to exclude tissue posterior to MNI y coordinate = -10. Furthermore, to enhance temporal lobe specificity, we created insula and orbitofrontal masks using MRIcro (10) (http://www.sph.sc.edu/comd/rorden/mricro.html) and excluded the parts of the original masks that fell onto adjacent insula and orbitofrontal cortex on the standard brain template.
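The cropping step (excluding voxels posterior to MNI y = -10) could be scripted as sketched below with nibabel; the atlas and output file names are hypothetical, and the original work used the WFU PickAtlas and MRIcro interactively.

```python
# Sketch: crop an atlas-derived binary mask to voxels at or anterior to MNI y = -10.
import nibabel as nib
import numpy as np
from nibabel.affines import apply_affine

img = nib.load("aTL_BA38_22_21_mask.nii")          # hypothetical mask file
data = np.asarray(img.dataobj).copy()
ijk = np.array(list(np.ndindex(data.shape[:3])))    # all voxel indices
xyz = apply_affine(img.affine, ijk)                 # voxel indices -> MNI mm
posterior = xyz[:, 1] < -10                         # more negative y = more posterior
data[tuple(ijk[posterior].T)] = 0                   # zero out posterior voxels
nib.save(nib.Nifti1Image(data, img.affine, img.header), "aTL_cropped.nii")
```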
Individual participant analyses (n = 26) were performed by using the coordinates of significant voxels at P = 0.05 (anterior temporal ROI analysis) nearest to the group activation peak for the contrasts (i) social vs. animal function concepts and (ii) animal function concepts vs. fixation, separately in the right and left temporal lobe. Individual activation coordinates were classified as (i) superior anterior temporal [BA38 as well as anterior BA22 (anterior to y = -11)], including the superior temporal sulcus and gyrus; (ii) middle anterior temporal [BA38 inferior to the superior temporal sulcus and anterior BA21 (anterior to y = -11)]; or (iii) other (another temporal region or no significant temporal lobe voxel). The association between individual localization (superior vs. middle anterior temporal cortex) and contrast (social vs. animal function concepts; animal function concepts vs. fixation) was analyzed by cross-tab statistics and Fisher's exact test.
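A minimal sketch of this association test; scipy's `fisher_exact` handles 2 × 2 tables, so the "other" category is assumed to be dropped here, and the DataFrame and column names are hypothetical.

```python
# Sketch: cross-tabulate individual peak localization by contrast and run
# Fisher's exact test on the resulting 2 x 2 table.
import pandas as pd
from scipy.stats import fisher_exact

def localization_association(df: pd.DataFrame):
    # df: one row per participant x contrast peak, with hypothetical columns
    # "localization" ("superior", "middle") and "contrast"
    # ("social_vs_animal", "animal_vs_fixation").
    table = pd.crosstab(df["localization"], df["contrast"])
    odds, p = fisher_exact(table.values)
    return table, p
```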
Parameter estimates and SEs for the parametric effects of descriptiveness of social behavior and meaning relatedness convolved with the HRF for social concepts were extracted at peak coordinates of the conjunction analysis (right anterior superior temporal cortex, BA38, MNI: 51, 15, -12) and the contrast social vs. animal function concepts (right lateral orbitofrontal/inferior frontal cortex, BA47/45, MNI: 54, 33, 6; left dorsomedial prefrontal cortex, BA8, MNI: -6, 21, 54) as well as at peak coordinates of the contrast animal function concepts vs. fixation (ROI analysis: right anterior middle temporal cortex, BA21, MNI: 57, -3, -21 and left anterior inferior/middle temporal cortex, BA20, MNI: -45, 9, -33). In SI Fig. 4, we additionally displayed parameter estimates within the anterior medial prefrontal (BA 10/32) cortex peak (MNI: -12, 54, 18) for positive vs. negative social concepts. Cohen's effect sizes (11) were calculated using GPOWER software (12).
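For the paired contrasts of extracted parameter estimates, a Cohen's-d-type effect size can be computed as sketched below (GPOWER was used in the original analysis; this is simply the standard d_z formula for paired data, shown here as an assumption about the effect-size definition).

```python
# Sketch: Cohen's d_z for a paired comparison of extracted parameter estimates.
import numpy as np

def cohens_dz(x, y) -> float:
    diff = np.asarray(x) - np.asarray(y)
    return diff.mean() / diff.std(ddof=1)
```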
Localization of areas was determined by using an anatomical atlas (13) and looking at activations in original MNI space projected onto a standard MNI template. In addition, Talairach transformed coordinates (using Matthew Brett's formula, http://www.mrc-cbu.cam.ac.uk/Imaging/Common/mnispace.shtml) were used to identify corresponding Brodmann's areas on the Talairach atlas (14).
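Matthew Brett's MNI-to-Talairach approximation can be written out as below; the coefficients are those commonly cited on the MRC CBU page linked above, reproduced here from memory and therefore an assumption to verify against that source.

```python
# Sketch of the mni2tal approximation (separate affine above/below the AC).
import numpy as np

def mni2tal(x: float, y: float, z: float) -> np.ndarray:
    tx = 0.9900 * x
    if z >= 0:   # above the anterior commissure
        ty = 0.9688 * y + 0.0460 * z
        tz = -0.0485 * y + 0.9189 * z
    else:        # below the anterior commissure
        ty = 0.9688 * y + 0.0420 * z
        tz = -0.0485 * y + 0.8390 * z
    return np.array([tx, ty, tz])

# e.g., the right superior anterior temporal peak (MNI 51, 15, -12)
print(mni2tal(51, 15, -12))
```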
Whole-brain analyses were based on uncorrected voxel-level thresholds of P = 0.005 (minimum cluster size = 10 voxels) in a priori predicted regions known from the social and semantic neuroscience literature (8, 15, 16) (medial prefrontal cortex, lateral orbitofrontal cortex, anterior temporal lobes, posterior superior temporal sulcus, parietotemporal junction, fusiform gyrus). Results from the anterior temporal lobe ROI analyses (ROI: BA21 and BA22 anterior to MNI y coordinate = -11, and BA38) were displayed at an uncorrected P = 0.05 voxel-level threshold (minimum cluster size = 10 voxels) to show the extent of activation and to corroborate regional specificity.
1. Hampson S, Goldberg L, John O (1987) Eur J Pers 1:241-258.
2. McRae K, Cree GS, Seidenberg MS, McNorgan C (2005) Behav Res Methods 37:547-559.
3. Martin A, Weisberg J (2003) Cogn Neuropsychol 20:575-587.
4. John OP, Hampson SE, Goldberg LR (1991) J Pers Soc Psychol 60:348-361.
5. Friston KJ, Frith CD, Turner R, Frackowiak RS (1995) Neuroimage 2:157-165.
6. Stevens J (1996) Applied Multivariate Statistics for the Social Sciences (Lawrence Erlbaum Associates, Mahwah, NJ).
7. Friston KJ, Penny WD, Glaser DE (2005) Neuroimage 25:661-667.
8. Moll J, Zahn R, de Oliveira-Souza R, Krueger F, Grafman J (2005) Nat Rev Neurosci 6:799-809.
9. Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH (2003) Neuroimage 19:1233-1239.
10. Rorden C, Brett M (2000) Behav Neurol 12:191-200.
11. Cohen J (1992) Psychol Bull 112:155-159.
12. Erdfelder E, Faul F, Buchner A (1996) Behav Res Methods Instrum Comput 28:1-11.
13. Mai J, Assheuer J, Paxinos G (2004) Atlas of the Human Brain (Elsevier Academic, Amsterdam/Boston).
14. Talairach J, Tournoux P (1988) Co-Planar Stereotaxic Atlas of the Human Brain (Thieme Medical Publishers, New York).
15. Caramazza A, Mahon BZ (2003) Trends Cogn Sci 7:354-361.
16. Blakemore SJ, Winston J, Frith U (2004) Trends Cogn Sci 8:216-222.