Author manuscript; available in PMC: 2007 Nov 26.
Published in final edited form as: Biol Psychol. 2005 Aug 29;71(3):326–340. doi: 10.1016/j.biopsycho.2005.07.003

The neural organization of semantic memory: Electrophysiological activity suggests feature-based segregation

Tatiana Sitnikova a,b,d,*, W Caroline West a,b,d, Gina R Kuperberg b,c,d, Phillip J Holcomb e
PMCID: PMC2094699  NIHMSID: NIHMS17110  PMID: 16129544

Abstract

Despite decades of research, it remains controversial whether semantic knowledge is anatomically segregated in the human brain. To address this question, we recorded event-related potentials (ERPs) while participants viewed pictures of animals and tools. Within the 200–600-ms epoch after stimulus presentation, animals (relative to tools) elicited an increased anterior negativity that, based on previous ERP studies, we interpret as associated with semantic processing of visual object attributes. In contrast, tools (relative to animals) evoked an enhanced posterior left-lateralized negativity that, according to prior research, might reflect accessing knowledge of characteristic motion and/or more general functional properties of objects. These results support the hypothesis that semantic knowledge is neuroanatomically organized at the level of object features: the observed neurophysiological activity was modulated by the features that were most salient for object recognition. The high temporal resolution of ERPs allowed us to demonstrate that differences in processing animals and tools occurred specifically within the time-window encompassing semantic analysis.

Keywords: Semantics, Human, Brain mapping, Cerebral cortex, Anatomy, Physiology, Visual perception, Evoked potentials, Electroencephalography

1. Introduction

In a lifetime, we acquire knowledge about numerous objects in our environment. This knowledge includes their names, their properties (visual, acoustic, motor, olfactory, etc.), our own bodily movements involved in interactions with them, as well as more abstract general knowledge (e.g., a giraffe is an African herbivore). How these diverse semantic memories are organized in the brain has been of intense interest to cognitive neuroscientists for several decades (Chang, 1996; Thompson-Schill, 2003). However, while much has been learned about the nature of these representations, there are still competing theoretical perspectives. According to one account, different types of knowledge are stored within different brain regions. In contrast, the alternative framework posits that all semantic information is coded within a unitary neural system. Below we outline each of these theoretical approaches in more detail in relation to the neuropsychological literature, and then briefly review the existing functional neuroimaging evidence for and against them. Finally, we describe how we used electrophysiological recordings of brain activity to distinguish between these theories.

2. The accounts of semantic memory organization: neuropsychological evidence

It has long been known that some patients with focal brain damage show selective deficits in knowledge about particular object categories (e.g., animals, plants, tools; Basso et al., 1988; Damasio et al., 1996; De Renzi and Lucchelli, 1994; Farah and McClelland, 1991; Farah et al., 1991; Hart and Gordon, 1992; Humphreys and Forde, 2001; Laiacona et al., 1993, 1997; McCarthy and Warrington, 1988; Pietrini et al., 1988; for review see Saffran and Schwartz, 1994). This finding has been often cited as evidence in favor of the neuroanatomical segregation of semantic representations. For example, Caramazza and Shelton (1998) have suggested that semantic memories are organized in the brain at the level of whole objects. They speculated that different object categories (e.g., animals, plants, tools) might be supported by distinct brain regions because each category has played a different role in our survival in the course of evolution.

Warrington and McCarthy (1987), Warrington and Shallice (1984) were among the first to notice that the patterns of cognitive impairment observed in different patients could be classified according to the type of attributes that are particularly important for the identification of objects from the affected categories. For example, some patients were less proficient with items whose visual features are most salient for object recognition (e.g., animals and plants). In contrast, other patients had more difficulties with objects that are best defined by their functional properties (e.g., manipulatable man-made objects and body parts). Based on these observations, Warrington and colleagues proposed a currently popular feature-based account of semantic memory organization (Chao et al., 1999; Chao and Martin, 1999, 2000; Holcomb et al., 1999; Holcomb and McPherson, 1994; Martin and Chao, 2001; Martin et al., 1995, 1996; McPherson and Holcomb, 1999; Paivio, 1971, 1986, 1991; Sitnikova et al., 2003; Warrington and McCarthy, 1987; Warrington and Shallice, 1984; West and Holcomb, 2002). According to this view, different types of object features (e.g., visual, auditory, motor, olfactory, abstract/verbal) are stored in distinct brain regions, and category-specific deficits are a byproduct of selective damage to these feature-specific neurocognitive mechanisms.

The alternative account of semantic memory representation posits that all types of knowledge are supported by a unitary neural system with no correspondence between locations in the brain and the content of stored semantic information (Anderson and Bower, 1973; Devlin et al., 2002; Gernsbacher, 1985; Kroll and Potter, 1984; Pylyshyn, 1980; Tyler and Moss, 2001; Tyler et al., 2000, 2003b). Proponents of this model have argued that it is not inconsistent with the category-specific cognitive deficits observed in neuropsychological patients. For example, Moss and Tyler (2000), Moss et al. (1998), Tyler and Moss (2001), Tyler et al. (2000, 2003b) pointed out that selective recognition impairments for animals frequently occur with less severe brain damage and involve an inability to identify individual animals while knowledge of the broad category to which the objects belong remains spared. In contrast, selective recognition impairments for tools are more common with relatively severe brain damage and are characterized by major difficulties in object recognition with only some knowledge of the animal world being preserved. These authors argued that this pattern of deficits can be explained by such factors as the increased number of shared, inter-correlated (i.e., consistently co-occurring in individual category members) features and the reduced number of distinctive features associated with animals relative to tools. As a result, milder brain damage may be more likely to impair patients’ ability to use distinctive features to discriminate between different animals than between different tools, but patients with severe brain injury may be more likely to correctly classify animals while being unable to comprehend tools at all.

3. Functional neuroimaging evidence

Recently, positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) methods have been applied to the study of the neural basis of semantic knowledge. Many of these studies were designed to test the feature-based organization model by examining brain activity elicited by living things (mainly animals) and man-made manipulable objects (mainly tools) in healthy participants (for a review, see Martin, 2001; Martin and Chao, 2001; Martin et al., 2000). In some of these investigations, living things evoked more activity than man-made objects in the fusiform gyrus (Perani et al., 1995, 1999; Thompson-Schill et al., 1999), an area that is part of the ventral object-processing stream and is thought to mediate access to representations of visual features of objects¹. On the other hand, man-made objects evoked category-specific activity in a separate network of brain regions including the left posterior middle temporal gyrus (Chao et al., 1999, 2002; Martin et al., 1996; Moore and Price, 1999; Mummery et al., 1998; Perani et al., 1999), which has been reported to be activated during the generation of action words (Fiez et al., 1996; Martin et al., 1995; Wise et al., 1991) and by moving images of tools (Beauchamp et al., 2002, 2003). Man-made objects have also been reported to produce increased activity in left premotor and left posterior parietal cortical regions (Chao and Martin, 2000; Chao et al., 2002; Grabowski et al., 1998; Grafton et al., 1997; Martin et al., 1996) that have previously been linked to motor control and motor imagery (Binkofski et al., 1998; Decety et al., 1994; Grafton et al., 1996; Stephan et al., 1995).

The above studies have been interpreted as supporting feature-based organization of semantic memory. However, Devlin et al. (2002), Tyler and Moss (2001) have noted that the precise locations of the category-specific activations are not entirely consistent across studies, and frequently do not converge with the brain-damage deficit data. In addition, their studies have failed to replicate the category-specific effects reported above (Devlin et al., 2002; Pilgrim et al., 2002; Tyler et al., 2003a). Therefore, these authors have argued that a single semantic system account is a more parsimonious explanation for the extant data.

4. Event-related potentials and the current study

In the present investigation, we aimed to shed further light on how semantic knowledge is organized in the brain by using event-related potentials (ERPs)—a technique that records electrophysiological brain potentials time-locked to the stimuli of interest. Unlike PET and fMRI, ERPs have a temporal resolution of milliseconds, which can be used to detect processing within the specific time-window that is known to encompass online semantic processing (Barrett and Rugg, 1990; Coles and Rugg, 1995; Holcomb and McPherson, 1994; McPherson and Holcomb, 1999). Moreover, the scalp topography of ERPs can be employed to dissociate neurocognitive processes that overlap in time (Holcomb et al., 1999; Kutas, 1993). As a result of this multidimensional nature, ERP data can help to distinguish subcomponents of semantic analysis (see Barrett and Rugg, 1990; Federmeier and Kutas, 2001; Ganis et al., 1996; Hamm et al., 2002; Holcomb et al., 1999; Holcomb and McPherson, 1994; Kellenbach et al., 2002; Kounios and Holcomb, 1994; McPherson and Holcomb, 1999; Sitnikova, 2003; Sitnikova et al., 2003; West and Holcomb, 2000, 2002).

In the present study, we used ERPs to determine whether non-identical neurocognitive mechanisms mediate the semantic analysis of objects from animal and tool categories. We focused on the subprocesses reflected by the N400 ERP waveform—a negativity evoked between 200 and 600 ms after presentation of a meaningful stimulus. This waveform was initially characterized as being sensitive to semantic variables in the language domain (Bentin et al., 1985; Holcomb, 1988; Kutas and Hillyard, 1980, 1984) but has also been described in association with processing pictorial (Barrett and Rugg, 1990; Federmeier and Kutas, 2001; Ganis et al., 1996; Hamm et al., 2002; Holcomb and McPherson, 1994; McPherson and Holcomb, 1999; Sitnikova et al., 2003; West and Holcomb, 2002) and other types of stimuli (Van Petten and Rheinfelder, 1995). Perhaps the most widely accepted account of the N400 posits that this ERP component reflects mental effort involved in the analysis of meaning (Brown and Hagoort, 1993; Holcomb, 1993).

Previous studies have documented that the N400 evoked by verbal stimuli is characterized by a parietal–occipital scalp topography (Friederici et al., 1993; Hagoort and Brown, 2000; Holcomb et al., 1999; Kutas and Van Petten, 1994; van Berkum et al., 1999), while the negativities elicited by pictures (Barrett and Rugg, 1990; Hamm et al., 2002; Holcomb and McPherson, 1994; McPherson and Holcomb, 1999; West and Holcomb, 2002) and silent videos (Sitnikova, 2003; Sitnikova et al., 2003) are typically distributed over more anterior electrode sites. Moreover, the N400 elicited by concrete words has a more anterior scalp topography than that evoked by abstract words (Holcomb et al., 1999; Kellenbach et al., 2002; Kounios and Holcomb, 1994; West and Holcomb, 2000), whereas words denoting manipulable objects and human actions elicit a posterior, slightly left-lateralized N400 (Kellenbach et al., 2002). Differences in scalp topography between ERP components are generally interpreted as indicating non-identical underlying neural sources (e.g., Holcomb et al., 1999; Kutas, 1993). Therefore, these results suggest that the N400 is comprised of several separable late negativities that may reflect processing within distinct feature-specific semantic neural networks (see Holcomb et al., 1999; Holcomb and McPherson, 1994; Kellenbach et al., 2002; McPherson and Holcomb, 1999; Sitnikova et al., 2003; West and Holcomb, 2002). In particular, the more anterior negativities elicited by concrete words and visual images may reflect activation of semantic representations of objects’ visual attributes². In contrast, the posterior N400 may be associated with activation of brain regions selectively devoted to processing representations of verbal and possibly other types of knowledge.

It is noteworthy that in several studies using picture stimuli, the ERP negativity observed in the earlier semantic-processing epoch (between approximately 200–350 ms) had a relatively focal scalp topography limited to more anterior electrode sites. This ERP component has been labeled the N300 (Barrett and Rugg, 1990; Hamm et al., 2002; Holcomb and McPherson, 1994; McPherson and Holcomb, 1999). One interpretation of the functional significance of the N300 is that it reflects the immediate, direct access to visual-feature semantic representations of objects (e.g., McPherson and Holcomb, 1999).

This ERP literature informs our current hypotheses. If, as predicted by the feature-based organization model, identification of animals (relative to tools) involves increased activity in the visual-feature semantic system (Martin and Chao, 2001; Warrington and McCarthy, 1987), animals might be expected to evoke a relatively greater anterior N400 effect than tools. Furthermore, if identification of tools relies predominantly on the retrieval of their functional representations (e.g., coding their typical associated motion—Martin and Chao, 2001), then they might be expected to evoke greater posterior N400 activity than animals.

In the current study, participants viewed pictures of animals and tools. Therefore, we expected that the anterior category-related ERP differences due to processing of visual object properties, predicted by the feature-based organization model, should occur in the earlier, N300 time-window. On the other hand, the posterior negativity effect due to processing functional properties was hypothesized to develop somewhat later, as pictures presumably do not directly access non-visual, functional semantic representations (at the posterior electrode sites, the late negativity to pictures was previously reported to peak at around 450 ms after stimulus onset, and was labeled the N450, see Barrett and Rugg, 1990; McPherson and Holcomb, 1999; Paivio, 1986).

5. Methods

5.1. Construction of materials and normative studies

Stimuli in the present study were full-color drawings of animals and tools obtained from the ‘Art Explosion’ CD-ROM package (Nova Development Corporation). The animal category included mammals, reptiles, marine creatures, and insects. The tool category included a variety of man-made objects used to accomplish a specific task (e.g., a comb, a broom, an axe). Prior to the ERP experiment, we conducted two normative studies (see Table 1 for results) to ensure that the object-naming task would be of comparable difficulty for the animals and tools selected for our ERP study.

Table 1.

Parameters that were matched between animal and tool pictures

Parameter                                                      Category   Mean     S.D.    t-value
Naming accuracy (%)                                            Animals     88.00   14.50   −0.424
                                                               Tools       86.67   16.84
Name-verification accuracy (%)                                 Animals     98.00    3.20   −1.201
                                                               Tools       97.13    4.03
Name-verification reaction time (ms)                           Animals    568.34   43.84    1.009
                                                               Tools      577.99   49.79
Picture familiarity rating (0 = unfamiliar, 3 = familiar)      Animals      2.72    0.54    0.807
                                                               Tools        2.80    0.45
Name printed-word familiarity (1 = unfamiliar, 7 = familiar)   Animals      5.06    0.57   −0.064
                                                               Tools        5.05    0.36
Name word frequency                                            Animals     11.03   22.43    0.092
                                                               Tools       11.49   19.01
Number of letters in the name                                  Animals      6.16    2.11    1.595
                                                               Tools        6.92    2.62
Number of syllables in the name                                Animals      2.10    0.91   −0.552
                                                               Tools        2.00    0.90

Note: in all comparisons, degrees of freedom = 98; p > 0.1. Naming accuracy and picture familiarity data were obtained from a different sample of participants than name-verification accuracy and reaction time data. Name printed-word familiarity was available from the online MRC psycholinguistic database (http://www.itd.clrc.ac.uk) for only 25 animals and 25 tools. Similarly, name word frequency norms were obtained from Kučera and Francis (1967) and the MRC psycholinguistic database for only 35 animals and 35 tools. All ERP and behavioral analyses were repeated for these subsets of stimuli, and the results were essentially the same as those for the full stimulus set.
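To make the matching procedure concrete, the sketch below (in Python; the per-item values are simulated, not the study’s actual norms) runs the same kind of independent-samples t-test that Table 1 summarizes, with df = 98 for 50 items per category:

```python
# Sketch of the stimulus-matching check on simulated per-item norms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated naming-accuracy norms (%) for 50 animal and 50 tool pictures.
animals = rng.normal(88.0, 14.5, size=50)
tools = rng.normal(86.7, 16.8, size=50)

t, p = stats.ttest_ind(animals, tools)   # df = 50 + 50 - 2 = 98
print(f"t(98) = {t:.3f}, p = {p:.3f}")   # the categories count as matched if p > 0.1
```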

5.1.1. Normative study 1: naming accuracy and picture familiarity

Twelve Tufts University undergraduates (six females, six males; all right-handed native English speakers) who did not participate in the ERP experiment named pictures of 90 animals and 90 tools. The presentation procedure was identical to that of the ERP study (see below). Participants were told to provide the name that they would use for the object in real life. An experimenter recorded each response on-line, noting whether the name was produced immediately. Off-line, we determined the most frequently given name for each object (e.g., butterfly, dog, mallet, paintbrush) and then scored each trial as accurate if this name was produced immediately, and as a miss if it was not produced or was given only after hesitation. The naming accuracy for each picture was computed as the percentage of correct responses across all participants. The only difference from the ERP study was that in this pretest, after naming each picture, participants also rated the item’s familiarity on a 0–3 scale (in answer to the question: ‘How familiar are you with the item?’). The picture familiarity rating for each item was computed as the average of the ratings given by all participants.

5.1.2. Normative study 2: name-verification

A new group of sixteen Tufts University undergraduates (eight females, eight males; all right-handed native English speakers) who did not participate in the ERP experiment performed a speeded name-verification task. Pictures of animals and tools were preceded, at a stimulus-onset asynchrony (SOA) of 600 ms, either by a matching object name (chosen on the basis of Normative study 1) or by a mismatching name from the same object category (e.g., ‘spider’–‘elephant’, or ‘rake’–‘screwdriver’). Participants were instructed to decide whether each object matched the preceding name and to signal their decision as quickly as possible using a response box. For each picture, the name-verification accuracy was computed as the percentage of correct responses across all participants, and the name-verification reaction time was computed as the average of reaction times across all participants.
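A minimal sketch of the per-picture aggregation described above, using hypothetical trial records (the data structure is our own construction; the paper does not publish its scoring code):

```python
# Aggregate hypothetical single-trial records into per-picture norms.
from collections import defaultdict

# (picture name, response correct?, reaction time in ms) -- toy records only.
trials = [("rake", True, 545.0), ("rake", True, 602.0), ("rake", False, 710.0),
          ("spider", True, 530.0), ("spider", True, 588.0)]

records = defaultdict(list)
for item, correct, rt in trials:
    records[item].append((correct, rt))

for item, recs in sorted(records.items()):
    accuracy = 100.0 * sum(c for c, _ in recs) / len(recs)
    mean_rt = sum(rt for _, rt in recs) / len(recs)
    print(f"{item}: name-verification accuracy {accuracy:.1f}%, mean RT {mean_rt:.0f} ms")
```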

5.1.3. Selected materials

Based on these normative studies, we selected 50 animal and 50 tool pictures (see Fig. 1 for examples, Appendix A for a list of object names, and http://neurocog.psy.tufts.edu/anitool1 for the full set of object pictures) that were matched for naming accuracy, name-verification accuracy, name-verification reaction time, and picture familiarity. Moreover, the selected animals and tools had names of similar length (number of letters and number of syllables). Table 1 shows the means and standard deviations for the matched parameters in this final set of stimuli. Finally, we also considered the name word frequency and printed-name familiarity of the selected items (see the Note below Table 1).

Fig. 1. Examples of animal (A) and tool (B) pictures.

5.2. Participants

Fifteen (seven female, eight male) right-handed undergraduate students from Tufts University aged 18–23 (mean age 19.5) took part in this study. All participants were native speakers of English and had normal or corrected-to-normal vision.

5.3. Procedure

Each participant sat in a comfortable chair in a room equipped with a video camera and a microphone connected via a closed circuit to a TV set placed in the experimenters’ room. The stimuli were presented on a computer monitor, centered on a white background. Fifty animal pictures and 50 tool pictures were presented in pseudo-random order (strongly associated items, such as a spoon and a fork, were separated by at least 20 trials). Each trial began with a small green circle (subtending ~1° of visual angle), and participants pressed a “GO” button to trigger, 900 ms later, the presentation of an animal or tool picture (subtending ~5° of visual angle). After being displayed for 500 ms, the picture was replaced by a blank screen for 700 ms, and then the green circle re-appeared on the screen. Participants were instructed to name each item out loud as quickly as possible after seeing the green circle reappear on the monitor. In this way, the naming response was delayed until after the ERP recording epoch to avoid muscle-contraction artifacts resulting from speech articulation. Participants were instructed to avoid eye-movements and to keep their eyes on the center of the monitor throughout each trial. Participants proceeded from trial to trial at their own pace. Each person was given 12 practice trials prior to the ERP experiment.
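One simple way to implement the pseudo-randomization constraint mentioned above is rejection sampling; the sketch below is our own construction (item labels and the associated pair are hypothetical), not the authors’ script:

```python
# Shuffle the 100 pictures while keeping strongly associated items
# (e.g., a spoon and a fork) at least 20 trials apart.
import random

def constrained_shuffle(items, associated_pairs, min_gap=20, max_tries=10000):
    for _ in range(max_tries):
        order = items[:]
        random.shuffle(order)
        pos = {item: i for i, item in enumerate(order)}
        # Accept the ordering only if every associated pair is far enough apart.
        if all(abs(pos[a] - pos[b]) >= min_gap for a, b in associated_pairs):
            return order
    raise RuntimeError("no valid ordering found; relax the constraints")

items = [f"animal_{i}" for i in range(50)] + [f"tool_{i}" for i in range(50)]
pairs = [("tool_3", "tool_7")]   # hypothetical labels for an associated pair
trial_order = constrained_shuffle(items, pairs)
```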

Overt naming was used so that we could subsequently exclude from the analyses ERPs to items that were not correctly and rapidly named. This was achieved via on-line coding of participants’ naming accuracy (a response was scored as accurate if the correct name – chosen on the basis of Normative study 1 described above – was given without hesitation, immediately after the re-appearance of the green circle).

5.4. Electrophysiological recording

The electroencephalogram (bandpass, 0.01–40 Hz, 6 dB cutoffs; sampling rate, 200 Hz) was recorded from 61 tin electrodes held in place on the scalp by an elastic cap (Electro-Cap International, Eaton, OH), from infra-ocular electrodes located below each eye (IO1/IO2), and from an electrode positioned over the right mastoid bone. All of these active electrodes were referenced to an electrode placed on the left mastoid. The scalp sites (see diagram in Fig. 2) included 17 standard International 10–20 System locations: FP1, FP2, FPz, F7, F8, Fz, C3, C4, Cz, T3, T4, T5, T6, Pz, O1, O2, and Oz. Another 24 sites were placed at extended 10–20 system locations: AF7, AF8, FT7, FT8, TP7, TP8, PO7, PO8, FC5, FC3, FC1, FC2, FC4, FC6, C5, C1, C2, C6, CP5, CP3, CP1, CP2, CP4, and CP6. Finally, 20 additional locations included: AF3, AF1, AF2, and AF4 (placed at increments of 20% of the distance between AF7 and AF8); F5, F1, F2, and F6 (20% increments of the F7–F8 distance); P5, P1, P2, and P6 (20% increments of the T5–T6 distance); PO3, PO1, PO2, and PO4 (20% increments of the PO7–PO8 distance); F9 and F10 (at the outer canthi of the eyes); and T9 and T10 (at the upper mastoid bones).

Fig. 2. Schematic locations of parasagittal columns of scalp electrodes: (1) midline, (2) inner-medial, (3) outer-medial, (4) inner-lateral, and (5) outer-lateral (inferior columns, including IO1/IO2, F9/F10, and T9/T10 sites, are not shown).

5.5. Data analysis

For each participant, mean ERPs (epoch length = 100 ms before picture presentation to 1187 ms after picture presentation) were formed off-line by selectively averaging across trials from each condition. Following this, the mean ERPs for each participant were re-referenced to an average of the left and right mastoids, and the group average ERPs were created.
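The following sketch illustrates this epoching, selective-averaging, and mastoid re-referencing sequence on synthetic data (array shapes, channel indices, and event times are assumptions; only the 200-Hz sampling rate and the 100-ms-before to 1187-ms-after epoch follow the paper):

```python
# Rough sketch of the averaging pipeline on synthetic data (voltages in uV).
import numpy as np

FS = 200                                   # sampling rate (Hz), as in the paper
PRE, POST = 0.100, 1.187                   # epoch bounds around picture onset (s)
N_SAMP = int(round((PRE + POST) * FS)) + 1

rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 5.0, size=(64, 120 * FS))   # fake continuous recording
onsets = np.arange(5 * FS, 110 * FS, 3 * FS)      # fake picture-onset samples
is_animal = rng.random(onsets.size) < 0.5         # fake condition labels

def epoch(data, onset):
    start = onset - int(PRE * FS)
    return data[:, start:start + N_SAMP]

# Selective averaging: one mean ERP per condition.
animal_erp = np.mean([epoch(eeg, o) for o, a in zip(onsets, is_animal) if a], axis=0)
tool_erp = np.mean([epoch(eeg, o) for o, a in zip(onsets, is_animal) if not a], axis=0)

# Re-reference from the left mastoid to the mastoid average: with a left-mastoid
# reference, subtracting half of the recorded right-mastoid channel is
# algebraically equivalent to referencing to (left + right) / 2.
RM = 1                                     # assumed right-mastoid channel index
animal_erp = animal_erp - animal_erp[RM] / 2.0
tool_erp = tool_erp - tool_erp[RM] / 2.0
```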

We were careful to include only trials free of ocular artifacts. Trials with activity exceeding 60 μV below the eyes, above the eyes, or at the eye canthi were excluded, as were trials in which the voltage difference between the channels below and above either eye exceeded 40 μV. In addition, each ERP trial of each participant was visually inspected to ensure that there were no signs of ocular artifacts (i.e., no evidence of a reversal in ERP polarity between the electrode sites positioned immediately below and above each eye). The percentage of excluded trials across all participants was 7.47% for animals and 6.27% for tools (t = 0.908, p > 0.1). Furthermore, trials were included only if the participant named the object immediately after the speaking prompt (the green circle) appeared on the screen. All in all, the group average ERP waveforms were based on 77.33% of trials for animals and 78.40% of trials for tools (t = −0.491, p > 0.1).
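As a rough illustration, the stated amplitude thresholds could be applied to a single epoch as follows (channel indices are assumptions; the authors’ screening also included visual inspection, which this sketch does not replace):

```python
# Sketch of the stated ocular-artifact rejection thresholds (voltages in uV).
import numpy as np

def has_ocular_artifact(trial, below_eye, above_eye, canthi,
                        amp_thresh=60.0, diff_thresh=40.0):
    """trial: (n_channels, n_samples) epoch; the index lists are assumed."""
    eye_chans = below_eye + above_eye + canthi
    if np.any(np.abs(trial[eye_chans]) > amp_thresh):
        return True                        # >60 uV at any eye channel
    for lo, hi in zip(below_eye, above_eye):
        if np.any(np.abs(trial[lo] - trial[hi]) > diff_thresh):
            return True                    # >40 uV below-vs-above-eye difference
    return False

# e.g.: keep = [t for t in trials
#               if not has_ocular_artifact(t, [60, 61], [0, 1], [62, 63])]
```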

To determine whether there were differences in early sensory processing between animal and tool pictures, we compared the early (P1/N1) ERP waveforms. Latencies of these sensory potentials were quantified at the PO7 and PO8 sites as the timing of the largest positive peak within 150 ms after picture presentation (P1) and of the largest negative peak between 150 and 200 ms after picture presentation (posterior N1), and at the F7 and F8 sites as the timing of the largest negative peak within 150 ms after picture presentation (anterior N1). These latency data, as well as average ERPs at these electrodes and time-windows (measured relative to the 100-ms baseline prior to picture presentation), were entered into two three-way repeated-measures analyses of variance (ANOVAs) examining category-related differences in the early potentials’ time-course and amplitude, respectively. Each ANOVA included factors of Object Category (animals and tools), ERP Potential (P1, posterior N1, and anterior N1), and Hemisphere (left and right).
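A sketch of the peak-latency measurement under the paper’s stated windows (the helper below is our own; erp is assumed to be a single-channel epoch sampled at 200 Hz, starting 100 ms before picture onset):

```python
# Sketch of P1/N1 peak-latency quantification on one channel's waveform.
import numpy as np

FS = 200
OFFSET = int(0.100 * FS)                   # baseline samples before onset

def peak_latency_ms(erp, t0_ms, t1_ms, polarity):
    """Latency (ms) of the largest positive (polarity=+1) or negative
    (polarity=-1) deflection within [t0_ms, t1_ms] after picture onset."""
    i0 = OFFSET + int(t0_ms * FS / 1000)
    i1 = OFFSET + int(t1_ms * FS / 1000)
    seg = polarity * erp[i0:i1 + 1]
    return t0_ms + int(np.argmax(seg)) * 1000 / FS

# P1 at PO7/PO8:        peak_latency_ms(erp_po7, 0, 150, +1)
# posterior N1:         peak_latency_ms(erp_po7, 150, 200, -1)
# anterior N1 at F7/F8: peak_latency_ms(erp_f7, 0, 150, -1)
```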

To examine semantic processing of animal and tool pictures, we calculated the mean ERP amplitudes (relative to the 100-ms baseline prior to picture presentation) within the 200–300 ms and 300–600 ms time-windows after picture presentation. These time-windows roughly correspond to the time-windows previously used to quantify the earlier anterior negativity (the N300) and the later, more posterior negativity (the N450) evoked by picture stimuli. For each of these time-windows, six repeated-measures ANOVAs were conducted in order to examine parasagittal columns of scalp electrodes along the anterior–posterior axis of the head (see Fig. 2). All analyses had an Object Category factor (animals and tools), and all but the midline analysis had a Hemisphere factor (left and right). The midline analysis had five levels of Electrode Site (FPz, Fz, Cz, Pz, Oz). The inner-medial analysis had five levels of Electrode Site (AF1/AF2, FC1/FC2, C1/C2, CP1/CP2, PO1/PO2). The outer-medial analysis had seven levels of Electrode Site (FP1/FP2, F1/F2, FC3/FC4, C3/C4, CP3/CP4, P1/P2, O1/O2). The inner-lateral analysis had seven levels of Electrode Site (AF3/AF4, F5/F6, FC5/FC6, C5/C6, CP5/CP6, P5/P6, PO3/PO4). The outer-lateral analysis had seven levels of Electrode Site (AF7/AF8, F7/F8, FT7/FT8, T3/T4, TP7/TP8, T5/T6, PO7/PO8). The inferior analysis had three levels of Electrode Site (IO1/IO2, F9/F10, T9/T10). The Geisser–Greenhouse correction was applied to all repeated measures with more than one degree of freedom (Geisser and Greenhouse, 1959).
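For illustration, a repeated-measures ANOVA of this design could be set up in Python as follows (the long-format data frame is our construction with synthetic amplitudes; note that statsmodels’ AnovaRM does not itself apply the Geisser–Greenhouse correction, which would need to be computed separately):

```python
# Sketch of one parasagittal-column analysis (e.g., inner-medial) on
# synthetic mean amplitudes; factor names mirror the design in the text.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
rows = [
    {"subject": s, "category": c, "hemisphere": h, "site": e,
     "mean_amp": rng.normal(0.0, 1.0)}           # synthetic amplitude (uV)
    for s in range(15)                           # 15 participants
    for c in ("animal", "tool")                  # Object Category
    for h in ("left", "right")                   # Hemisphere
    for e in ("AF", "FC", "C", "CP", "PO")       # five Electrode Site levels
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="mean_amp", subject="subject",
              within=["category", "hemisphere", "site"]).fit()
print(res)   # inspect the Category x Hemisphere x Site interaction; a
             # Geisser-Greenhouse correction would be applied separately
```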

It is controversial whether data normalization improves our ability to distinguish between underlying neural generators of ERPs based on differences in scalp topography (see McCarthy and Wood, 1985; Urbach and Kutas, 2002). Therefore, we report all instances where the raw data showed significant interactions between the Object Category factor and any of the topographic variables (i.e., Hemisphere and/or Electrode Site). In addition, we also report the instances where the latter interactions remained significant after voltage values were normalized (using z-scores) within each level of the Object Category variable.
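One plausible implementation of such a z-score normalization is sketched below (the exact grouping the authors used is not spelled out in the text; normalizing each subject’s electrode profile within a category is one common choice):

```python
# Sketch: z-score scalp distributions within one Object Category level so
# that only the *shape* of the profile, not its overall size, enters the
# topographic comparison.
import numpy as np

def zscore_profile(amps):
    """amps: (n_subjects, n_sites) mean amplitudes for ONE category.
    Returns each subject's electrode profile scaled to mean 0, SD 1."""
    mu = amps.mean(axis=1, keepdims=True)
    sd = amps.std(axis=1, ddof=1, keepdims=True)
    return (amps - mu) / sd
```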

To determine whether the scalp topography of the ERP effect between animals and tools changed over time (from the N300 to the N450 epoch), we conducted six additional repeated-measures ANOVAs. In these ANOVAs, the dependent variable was the difference in voltage between ERPs elicited by animals and tools, averaged across 200–300 ms (for the N300 epoch) and across 400–500 ms (for the N450 epoch)³. The independent variables were as described above, but, in place of the Object Category factor, we included a Time-Window factor (N300 and N450).
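A sketch of how these matched-window difference measures could be computed (building on the synthetic animal_erp/tool_erp arrays from the earlier sketch; the window bounds follow the paper):

```python
# Animal-minus-tool difference averaged over matched 100-ms windows.
import numpy as np

FS = 200
OFFSET = int(0.100 * FS)                   # baseline samples before onset

def window_mean(erp, t0_ms, t1_ms):
    i0 = OFFSET + int(t0_ms * FS / 1000)
    i1 = OFFSET + int(t1_ms * FS / 1000)
    return erp[..., i0:i1].mean(axis=-1)

def category_difference(animal_erp, tool_erp):
    diff = animal_erp - tool_erp
    d_n300 = window_mean(diff, 200, 300)   # N300 epoch
    d_n450 = window_mean(diff, 400, 500)   # matched-length N450 window
    return d_n300, d_n450                  # dependent measures for the
                                           # Time-Window ANOVAs
```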

6. Results

6.1. Behavioral data

Participants were able to name the pictures rather accurately. The average rate of correct naming responses that were given immediately after the speaking prompt was 83.47% for animals and 83.20% for tools. The accuracy rates were not significantly different between animals and tools (t = 0.111, p > 0.1).

6.2. Event-related potential data

For pictures that were named correctly immediately after the speaking prompt, ERPs averaged across participants are shown in Fig. 3. In addition, Fig. 4A displays enlarged plots of these ERPs at two representative channels. All pictures elicited clear sensory/perceptual components. At more anterior sites, a negative-going potential peaking at around 120 ms (N1) was followed by a positivity that was maximal at approximately 190 ms (P2). At more posterior sites, the configuration was somewhat different, and included a positivity with a peak at about 110 ms (P1), a negativity peaking at around 160 ms (N1), and a positivity with a peak at approximately 220 ms (P2). This series of early ERPs was followed by negative-going late components peaking at approximately 250 and 450 ms (N300 and N450, respectively). Particularly at more posterior regions, these late negative-going potentials overlapped with a prominent late positivity, with a maximum at about 325 ms.

Fig. 3. ERPs time-locked to the presentation of animal and tool pictures averaged across all participants.

Fig. 4. ERPs time-locked to the presentation of pictures shown at two representative electrode sites (A), and the corresponding difference waves obtained by subtracting the ERPs to tool pictures from the ERPs to animal pictures (B). The head diagram in the center shows the approximate locations of the electrode sites shown.

Within the N300/N450 time-window, the ERPs were more negative to animals than to tools at frontal–central (e.g., FC1 and FC2) and anterior–inferior (e.g., IO1 and IO2) electrode sites. In contrast, the effect of an opposite polarity with the ERPs being more negative to tools than to animals was evident primarily at more posterior electrodes (e.g., PO3 and PO4). Fig. 4B demonstrates these category-related differences at the representative frontal–central and posterior electrode sites. In addition, Fig. 5 illustrates scalp topography of these category-related differences.

Fig. 5. Voltage maps (created using the EMSE Data Editor software; Source Signal Imaging, San Diego, CA) of the category-related ERP differences within the semantic-processing epoch. These maps were derived from the difference waves obtained by subtracting the ERPs to tool pictures from the ERPs to animal pictures: the maps show the data averaged across four consecutive 100-ms-long time-windows. Black contours demarcate a change in voltage of 0.2 μV. Scalp regions where animals evoked more negative ERPs than tools are shown in blue with dotted contours, and scalp regions where tools evoked more negative ERPs than animals are shown in red with solid contours.

6.2.1. 0–150-ms and 150–200-ms (P1/N1) epochs

There were no statistically significant differences between animals and tools in either latency or amplitude of the ERP potentials in these time-windows (in all comparisons, F < 2.000, p > 0.1).

6.2.2. 200–300-ms (N300) epoch

During this epoch, ERPs were more negative to animals than tools over frontal–central and anterior–inferior scalp regions. A reversed pattern, with ERPs to tools being more negative than ERPs to animals, was observed over occipital and posterior temporal–parietal areas, where the effect was strongly left-lateralized (see Figs. 3–5). This was reflected in significant Object Category by Electrode Site interactions in the midline, inner-lateral, and inferior analyses and Object Category by Hemisphere by Electrode Site interactions in the inner-medial, outer-medial, and outer-lateral analyses (see Table 2). With normalized data, all analyses except that at the inner-lateral electrode column yielded the same findings as with the raw data (see Table 2, last column).

Table 2.

Higher-order interactions obtained in the ANOVAs examining parasagittal columns of scalp electrodes

Analysis          Interaction   df      F-value     F-value (z-scores)
200–300 ms (N300)
 Midline          C × E         4,60     7.417**     3.695*
 Inner-medial     C × H × E     4,60     7.278**     7.541**
 Outer-medial     C × H × E     6,90     3.877*      4.438*
 Inner-lateral    C × E         6,90     8.045**     n.s.
 Outer-lateral    C × H × E     6,90     7.007**     6.114**
 Inferior         C × E         2,30    43.070**    26.601**
300–600 ms (N450)
 Inner-medial     C × H × E     4,60     5.321**     4.861**
 Outer-medial     C × H × E     6,90     3.465*      3.385*
 Inner-lateral    C × H         6,90     5.292*      4.772*
 Outer-lateral    C × H × E     6,90     5.852**     4.400*
 Inferior         C × H × E     2,30     9.762**     3.500*
200–600 ms (entire semantic epoch)
 Midline          C × E         4,60     3.940*      n.s.
 Inner-medial     C × H × E     4,60     6.854**     5.398**
 Outer-medial     C × H × E     6,90     4.086*      4.012*
 Inner-lateral    C × H × E     6,90     3.398*      n.s.
 Outer-lateral    C × H × E     6,90     6.791**     5.306**
 Inferior         C × H × E     2,30     8.771**     3.812*

C: Object Category factor; E: Electrode Site factor; H: Hemisphere factor. n.s.: the interaction did not remain significant in the normalized data.
* p < 0.05; ** p < 0.01.

Planned comparisons at each electrode site showed that the increase in the anterior negativity to animals relative to tools was significant at frontal (F1, F2, & Fz), frontal–central (FC1, FC2, FC3, FC4, FC5, & FC6), frontal–temporal (FT7 & FT8), central (C1, C2, & Cz), and anterior–inferior sites (F9, IO1, & IO2). The posterior increase in negativity to tools relative to animals was significant at occipital (O1 & O2), parietal–occipital (PO1, PO3, PO4, PO7, & PO8), and temporal (T5) sites.

6.2.3. 300–600-ms (N450) epoch

In this time-window, the increased negativity to animals compared to tools was present only at anterior–inferior electrode sites and became slightly left-lateralized (see Fig. 3), as indicated by a significant Object Category by Hemisphere by Electrode Site interaction in the inferior analysis (see Table 2). In contrast, the enhanced negativity to tools relative to animals became more widespread, extending to more anterior parietal sites; this effect also peaked over the left hemisphere (see Figs. 3 and 5; also see Fig. 4, which shows this effect at the representative posterior electrode site). Significant interactions were obtained between the Object Category, Hemisphere, and Electrode Site factors in the inner-medial, outer-medial, and outer-lateral analyses, and between the Object Category and Hemisphere factors in the inner-lateral analysis (see Table 2). All of the interactions found with the raw data were replicated in the analyses on the normalized data (see Table 2, last column).

Planned comparisons at each electrode site demonstrated that the anterior–inferior negativity effect to animals relative to tools was significant at the infra-ocular electrodes (IO1 & IO2). The increase in the posterior negativity to tools compared to animals was significant at occipital (O1 & O2), parietal–occipital (PO1, PO2, PO3, PO4, & PO7), parietal (P1, P2, P5 & P6), central-parietal (CP5), and temporal (T5, T9, & TP7) sites.

6.2.4. Comparison between the N300 and N450 epochs

The scalp topography of the ERP differences between animals and tools changed from the earlier (N300) to later (N450) epoch of semantic processing (see Fig. 5), as was determined by a direct comparison between the category-related ERP differences in these epochs. Significant Time-Window by Electrode Site and/or Time-Window by Hemisphere by Electrode Site interactions were obtained in all analyses that used the raw data (see Table 3). Time-Window by Electrode Site interactions remained significant after the data were normalized in the midline, inner-medial, outer-medial, and inferior analyses (see Table 3, last column).

Table 3.

Higher-order interactions obtained in the ANOVAs examining differences in parasagittal columns of scalp electrodes between 200–300 ms and 400–500 ms time-windows

Analysis          Interaction   df      F-value     F-value (z-scores)
Midline           T × E         4,60    53.999**     4.664*
Inner-medial      T × E         4,60     5.359**     5.348*
Inner-medial      T × H × E     4,60    20.651**     n.s.
Outer-medial      T × E         6,90    55.400**     5.050*
Outer-medial      T × H × E     6,90    21.826**     n.s.
Inner-lateral     T × H × E     6,90    17.412**     n.s.
Outer-lateral     T × H × E     6,90    18.856**     n.s.
Inferior          T × E         2,30    37.492**     4.059**
Inferior          T × H × E     2,30    55.992**     n.s.

H: Hemisphere factor; E: Electrode Site factor; T: Time-Window factor. Note: these analyses were performed on voltage differences between animals and tools.

n.s.: the interaction was not significant in the normalized data.
* p < 0.05; ** p < 0.01.

7. Discussion

The ERPs elicited by pictures of animals and tools had distinct spatial distributions across the scalp within a time-window known to index semantic processing (Barrett and Rugg, 1990; Coles and Rugg, 1995; Holcomb and McPherson, 1994; McPherson and Holcomb, 1999). At the earlier part of this time-window (200–300 ms: the N300 epoch), animals elicited a more negative waveform than tools over frontal–central and anterior–inferior electrode sites. In contrast, tools elicited a more negative waveform than animals over occipital, posterior–temporal and posterior–parietal sites, primarily over the left hemisphere. At the later part of the semantic-processing time-window (300–600 ms: the N450 epoch), ERPs were more negative to animals than to tools only at the anterior–inferior sites and the increased posterior negativity elicited to tools, compared to animals, spread toward more anterior parietal scalp areas.

This combination of category-related topographic and time-course differences suggests that semantic processing of animals and tools does not occur in identical brain regions, providing evidence against a unitary semantic system theory (Anderson and Bower, 1973; Devlin et al., 2002; Gernsbacher, 1985; Kroll and Potter, 1984; Pylyshyn, 1980; Tyler and Moss, 2001; Tyler et al., 2000, 2003b). It seems unlikely that these results could be accounted for by category-related differences in lower-level perceptual features of the stimuli (e.g., visual complexity) because differences such as these would probably have affected early ERP components such as the P1 or N1. We found no significant differences between pictures of animals and tools prior to 200 ms. Below we consider our findings in further detail, showing how they support a feature-based model of neuroanatomical organization of semantic knowledge.

7.1. Topographic differences between ERPs to animals and tools

7.1.1. The anterior negativity to animals (versus tools)

The frontal–central distribution of the N300 to animals (relative to tools) resembled a negativity effect that has previously been documented in association with pictures relative to words (Federmeier and Kutas, 2001; Ganis et al., 1996) and in association with concrete relative to abstract words (Holcomb et al., 1999; Kellenbach et al., 2002; Kounios and Holcomb, 1994; West and Holcomb, 2000). Topographic similarities between ERP effects are usually interpreted as suggesting that their neural generators are either identical or located in close proximity to one another (Holcomb et al., 1999; Kutas, 1993). It seems unlikely that separate brain regions, each specializing in the processing of animals, pictures, or concrete words, would be located next to each other simply by chance. Therefore, we take this result as evidence against a neuroanatomical organization at the level of whole objects (as was proposed by Caramazza and Shelton, 1998). A hypothesis that the relative increase in frontal–central negativity reflects processing of visual object characteristics is more plausible, given the particular importance of visual features in the identification of all of these stimuli in comparison with their respective controls: pictures in comparison with words, concrete in comparison with abstract words, and animals in comparison with tools (see Farah and McClelland, 1991; Holcomb et al., 1999; Martin et al., 2000; Paivio, 1986; Warrington and McCarthy, 1987). Similar logic also discounts the possibility that the frontal–central negativity simply reflects increased difficulty in identifying animals relative to tools due to a larger number of shared, inter-correlated features associated with animals (e.g., Moss et al., 1998; Tyler et al., 2003b). A similar negativity was previously observed to pictures relative to words even when the pictures and words referred to the same identical set of concepts (e.g., Federmeier and Kutas, 2001⁴; Ganis et al., 1996). Thus, the finding that the frontal–central N300 was larger to animals than tools fits well with a feature-based model of semantic memory organization in the brain.

It is intriguing that the increased negativity to animals (compared to tools) was also observed between 200 and 600 ms at anterior–inferior scalp regions, peaking over the infra-ocular electrode sites. Notwithstanding their relatively remote location from the brain, these infra-ocular electrodes were sensitive to electroencephalographic activity, because the data at these electrodes (as at all other active leads) were collected with the mastoid reference. One possibility is that the anterior–inferior effect between animals and tools was generated in anterior temporal and/or ventral prefrontal cortical regions⁵. In the neuroimaging literature, these brain areas have been reported to display increased activity to living relative to non-living items (Leube et al., 2001; Moore and Price, 1999; Mummery et al., 1996) and have been proposed to be involved in the retrieval of knowledge about the affective valence associated with animal concepts (Leube et al., 2001) and/or in the selection of specific object identity (McRae et al., 1997; Moore and Price, 1999; Moss et al., 2005; Tyler et al., 2004; selecting a unique object identity is necessary for naming, and this process might be more demanding for animals than tools because animals have more shared, inter-correlated visual/semantic features than tools).

7.1.2. The posterior negativity to tools (versus animals)

Tools (in comparison with animals) elicited a negativity between 200 and 600 ms that had a posterior scalp distribution. Of note, however, this negativity was markedly different in its topography from the posterior N400 evoked by verbal stimuli, abstract words in particular. The verbal N400 is characterized by a widespread parietal–occipital distribution with a dorsal maximum (Holcomb et al., 1999; Kutas and Van Petten, 1994). In contrast, the present effect to tools was restricted to more posterior parietal–occipital–temporal regions and peaked over left-lateralized inferior electrodes.

One explanation for these topographic dissimilarities could be that the current negativity and the verbal effect have distinct neural sources. The posterior left-lateralized distribution of the negativity to tools (relative to animals) is consistent with a neural generator in the lateral portion of the left posterior temporal cortex, which may mediate representations of object motion (Chao et al., 1999; Chao and Martin, 2000; Martin, 2001; Martin and Chao, 2001; Martin et al., 2000, 1996). Thus, we tentatively propose that this effect may reflect access to knowledge about objects’ typical motion – functional information that is critical for tool identification. This interpretation is further supported by a previous ERP study that reported an enhanced posterior late negativity peaking over the left hemisphere to words referring to manipulable objects or human actions (relative to words not associated with human actions; Kellenbach et al., 2002).

On the other hand, the topography of the present negativity to tools (compared to animals) is not inconsistent with the activation of the same neural generator as in the verbal paradigms. The ERP negativity resulting from such activation could be cancelled out at the parietal electrodes by the overlapping anterior negativity effect in the opposite direction⁶. Thus, we cannot exclude the possibility that identifying tools was more taxing on the verbal functional system than identifying animals (see Riddoch and Humphreys, 1987; Tyler and Moss, 1997).

Regardless of whether the posterior effect to tools (versus animals) indexed differences in verbal processing demands or was mediated primarily by accessing the knowledge of object motion, this finding is consistent with a feature-based model of neuroanatomical organization of semantic memory. It suggests that tools, primarily identifiable by their functional properties, activated brain areas storing semantic representations of these object features more than animals.

7.2. Time-course differences in processing animals and tools

The topography of category-based ERP effects changed from the earlier to later stages of semantic processing, providing additional evidence against the “whole object” organization model proposed by Caramazza and Shelton (1998). The latter framework predicts only spatial differences in the processing of animals and tools (but no distinctions in the time-course of these category-specific processes), which should have led to ERP effects with a scalp distribution that is constant over time. In contrast, a feature-based organization model can explain such temporal changes in topography as arising from differences in the accessibility of different feature-specific semantic systems (McPherson and Holcomb, 1999; Paivio, 1986; Thompson-Schill et al., 1999). In previous studies, the relatively early appearance of an anterior negativity to pictures of individual objects (the N300) has been interpreted as reflecting a direct activation of visual-feature semantic representations⁷, while the slightly later appearance of a posterior negativity (the N400 or N450) has been hypothesized to reflect the extra time needed for the activation to spread from visual onto other types of representations (Barrett and Rugg, 1990; Holcomb and McPherson, 1994; McPherson and Holcomb, 1999). In the present study, the increased frontal–central negativity to animals (compared to tools) was observed during the earlier (N300) epoch, suggesting that direct activation of the semantic system storing objects’ visual features was enhanced in response to pictures of animals.

Interestingly, the posterior negativity effect to tools (relative to animals) also started in the N300 epoch. One interpretation of this early posterior effect could be that tools might be different from many other object categories in that their form directly (and quickly) accesses functional semantic representations, possibly due to over-learned associations between these two types of attributes. While animals and certain other objects can be identified exclusively based on an invariant relationship between their visual form and identity, the recognition of tools might rely on an invariant association between their function and identity (as there is no clear correspondence between tools’ visual features and identity; see Farah and McClelland, 1991; Martin et al., 2000). This explanation is supported by our observation that the times to recognize the tool and animal stimuli used in the current study were very similar (see Section 5.1.3). The shift in the scalp distribution of the posterior negativity effect from left-lateralized inferior sites in the N300 epoch toward more dorsal parietal sites in the later (N450) time-window could be due to the sequential activation of different types of functional representations. Tools might have first accessed representations of objects’ typical motion, argued to be stored within the left posterior temporal cortex (Martin, 2001; Martin and Chao, 2001; Martin et al., 2000), and later activated verbal representations, associated with the dorsal parietal–occipital ERP negativity (Holcomb et al., 1999; Kutas and Van Petten, 1994).

Alternatively, the posterior category-related effect in the ERPs between 200 and 300 ms might reflect differences in perceptual processing between animal and tool pictures. A similar effect has been previously reported in a subset of the ERP studies that used picture stimuli (Federmeier and Kutas, 2001; Hamm et al., 2002; McPherson and Holcomb, 1999). This effect has been argued to originate from a modulation of a posterior P2 positivity that is sensitive to factors such as perceptual familiarity but not to changes in semantic context⁸ (Federmeier and Kutas, 2001; McPherson and Holcomb, 1999). In the present study, animals might have evoked a larger posterior P2 than tools because they have more shared, inter-correlated visual features (McRae et al., 1997; Moore and Price, 1999), and therefore identification of animals might require more intense perceptual analysis than identification of tools.

8. Conclusions

To summarize, our findings suggest that the semantic processing of animal and tool pictures is mediated by non-identical brain regions, and they provide further evidence for a feature-based organization of semantic knowledge in the brain. The high temporal resolution of ERPs allowed us to demonstrate that differences in neural processing between animals and tools occurred within a time-window of semantic analysis and to obtain finer-grained information about the precise time-course of the category-related modulation of this neurocognitive processing.

Acknowledgments

This research was supported by grant HD25889 to PJH and in part by the Institute for Mental Illness and Neuroscience Discovery (MIND). We thank Jacob Bender, Lauren Dennis, Kristi Kiyonaga, and Sonya Jairaj for their assistance in preparing stimuli and collecting data.

Appendix A. Names of objects used in the study

Animals Tools
Alligator Anvil
Armadillo Axe
Bear Binoculars
Beaver Broom
Bee Chopsticks
Boar Clamp
Butterfly Comb
Camel Compass
Cat Corkscrew
Cheetah Divider
Cow Drill
Crow Dropper
Deer Dustpan
Dog File
Dolphin Flashlight
Donkey Fork
Dragonfly Gavel
Duck Hairbrush
Eagle Hairdryer
Eel Hammer
Elephant Hoe
Flamingo Iron
Fly Knife
Giraffe Ladder
Grasshopper Ladle
Horse Mallet
Kangaroo Microscope
Koala Mixer
Llama Mower
Lobster Paintbrush
Millipede Pencil Sharpener
Moose Pitchfork
Octopus Pliers
Ostrich Plunger
Owl Pocket knife
Parakeet Pump
Pelican Rake
Penguin Ruler
Platypus Scales
Raccoon Scissors
Rhinoceros Screwdriver
Scorpion Shovel
Sheep Spoon
Snake Stapler
Spider Stethoscope
Squid Tweezers
Squirrel Typewriter
Stingray Weeder
Tiger Wheelbarrow
Walrus Wrench

Footnotes

¹ Pictures of animals activated the fusiform gyrus more than pictures of tools in PET studies that, due to the relatively low spatial resolution of this technique, compared the brain response collapsed across this entire brain region (Perani et al., 1995, 1999). It is important to note, however, that the fMRI technique, with its higher spatial resolution, revealed two separate regions within the fusiform gyrus that show category-related activity modulation: a lateral area that is activated more by animals than tools and a medial area that is activated more by tools than animals (Chao et al., 1999, 2002). Taken together, these findings are consistent with the notion that even though animals and tools engage somewhat different fusiform areas, overall processing in the fusiform gyrus is increased to animals relative to tools. Moreover, in an fMRI study that asked participants to read object names while answering questions about their non-visual properties, the fusiform gyrus was activated by animals but not tools (Thompson-Schill et al., 1999), consistent with an argument about the importance of visual semantic knowledge for animal concepts.

² ERPs recorded at the scalp, taken alone, are ambiguous with regard to the precise location of their underlying neural sources (Dale and Sereno, 1993). Therefore, an anterior-dorsal distribution of an ERP effect is not inconsistent with a neural source in inferior-temporal brain regions. Cf. even though concrete words usually evoke a more anterior N400 than abstract words (e.g., Holcomb et al., 1999), recent neuroimaging evidence suggests that increased processing of concrete (relative to abstract) words is localized in the left inferior-temporal cortex, whereas increased processing of abstract (relative to concrete) words is localized in the left inferior frontal gyrus (Fiebach and Friederici, 2004).

³ In these analyses, we matched the time-window length between the N300 and N450 epochs to ensure comparability of data between the epochs.

⁴ Each picture was normed to ensure naming agreement with the corresponding word.

⁵ The focal distribution of this effect, observed primarily at the infra-ocular but not other electrode sites, suggests nearby neural generators. Also note that we carefully confirmed that this effect was not due to eye-movement artifact (see Section 5.5 above). If this effect were due to increased eye-movements during viewing pictures of animals, a similar-size effect, but of an opposite polarity, would be expected at the electrodes positioned immediately above the eyes (i.e., FP1 & FP2). Electrodes below and above each eye are known to register electrophysiological potentials of opposite polarity in response to any vertical or diagonal eye movements. However, no significant differences of any polarity were observed at the FP1 & FP2 electrodes between animals and tools in our study.

⁶ Even though the anterior negativity that is typically elicited by pictures (versus words) and concrete (versus abstract) words peaks over frontal–central electrodes, it usually extends across many parietal sites (e.g., Holcomb et al., 1999; McPherson and Holcomb, 1999). This anterior component to pictures continues throughout the 200–600 ms epoch (e.g., McPherson and Holcomb, 1999). The dorsal parietal–occipital N400 to words, on the other hand, was obtained to contextually inappropriate relative to contextually appropriate items in a paradigm that did not generate any overlapping anterior effect in the opposite direction.

⁷ A more specific hypothesis about the nature of visual semantic processing reflected by the N300 (Hamm et al., 2002; also see Laeng et al., 2003; Large et al., 2004) proposes that the N300 might index categorization of objects into basic perceptual categories (e.g., a dog versus a cat) that precedes object identification as a more specific exemplar (e.g., a poodle). This interpretation, however, warrants further research, as under some experimental conditions the N300 was found to be sensitive to object differences within such basic categories (Federmeier and Kutas, 2001). Furthermore, other evidence associates perceptual categorization with the 100–200 ms time-window after picture presentation (Boshyan et al., 2005; Schmid et al., 2005).

⁸ Note that Hamm et al. (2002) reported ERPs referenced to the average across all active electrode channels, which makes it difficult to compare their scalp topography findings with those of studies that used a mastoid reference.

References

1. Anderson JR, Bower GH. Human Associative Memory. Wiley; New York: 1973.
2. Barrett SE, Rugg MD. Event-related potentials and the semantic matching of pictures. Brain and Cognition. 1990;14:201–212. doi: 10.1016/0278-2626(90)90029-n.
3. Basso A, Capitani E, Laiacona M. Progressive language impairment without dementia: a case with isolated category specific semantic defect. Journal of Neurology, Neurosurgery and Psychiatry. 1988;51:1201–1207. doi: 10.1136/jnnp.51.9.1201.
4. Beauchamp MS, Lee KE, Haxby JV, Martin A. Parallel visual motion processing streams for manipulable objects and human movements. Neuron. 2002;34:149–159. doi: 10.1016/s0896-6273(02)00642-6.
5. Beauchamp MS, Lee KE, Haxby JV, Martin A. FMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience. 2003;15:991–1001. doi: 10.1162/089892903770007380.
6. Bentin S, McCarthy G, Wood CC. Event-related potentials, lexical decision and semantic priming. Electroencephalography and Clinical Neurophysiology. 1985;60:343–355. doi: 10.1016/0013-4694(85)90008-2.
7. Binkofski F, Dohle C, Posse S, Stephan KM, Hefter H, Seitz RJ, Freund HJ. Human anterior intraparietal area subserves prehension: a combined lesion and functional MRI activation study. Neurology. 1998;50:1253–1259. doi: 10.1212/wnl.50.5.1253.
8. Boshyan J, Kassam KS, Schmid AM, Bar M. Low spatial frequencies trigger early top-down facilitation of visual object recognition. A Supplement of the Journal of Cognitive Neuroscience. 2005:148.
9. Brown C, Hagoort P. The processing nature of the N400: evidence from masked priming. Journal of Cognitive Neuroscience. 1993;5:34–44. doi: 10.1162/jocn.1993.5.1.34.
10. Caramazza A, Shelton JR. Domain-specific knowledge systems in the brain: the animate–inanimate distinction. Journal of Cognitive Neuroscience. 1998;10:1–34. doi: 10.1162/089892998563752.
11. Chang T. Semantic memory: facts and models. Psychological Bulletin. 1996;99:199–220.
12. Chao LL, Haxby JV, Martin A. Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience. 1999;2:913–919. doi: 10.1038/13217.
13. Chao LL, Martin A. Cortical regions associated with perceiving, naming, and knowing about colors. Journal of Cognitive Neuroscience. 1999;11:25–35. doi: 10.1162/089892999563229.
14. Chao LL, Martin A. Representation of manipulable man-made objects in the dorsal stream. Neuroimage. 2000;12:478–484. doi: 10.1006/nimg.2000.0635.
15. Chao LL, Weisberg J, Martin A. Experience-dependent modulation of category-related cortical activity. Cerebral Cortex. 2002;12:545–551. doi: 10.1093/cercor/12.5.545.
16. Coles MGH, Rugg MD. Event-related potentials: an introduction. In: Rugg MD, Coles MGH, editors. Electrophysiology of Mind. Oxford University Press; New York: 1995.
17. Dale AM, Sereno MI. Improved localization of cortical activity by combining EEG and MEG with MRI cortical surface reconstruction: a linear approach. Journal of Cognitive Neuroscience. 1993;5:162–176. doi: 10.1162/jocn.1993.5.2.162.
18. Damasio H, Grabowski TJ, Tranel D, Hichwa RD, Damasio AR. A neural basis for lexical retrieval. Nature. 1996;380:499–505. doi: 10.1038/380499a0.
19. De Renzi E, Lucchelli F. Are semantic systems separately represented in the brain? The case of living category impairment. Cortex. 1994;30:3–25. doi: 10.1016/s0010-9452(13)80322-x.
20. Decety J, Perani D, Jeannerod M, Bettinardi V, Tadary B, Woods R, Mazziotta JC, Fazio F. Mapping motor representations with positron emission tomography. Nature. 1994;371:600–602. doi: 10.1038/371600a0.
21. Devlin JT, Russell RP, Davis MH, Price CJ, Moss HE, Fadili MJ, Tyler LK. Is there an anatomical basis for category-specificity? Semantic memory studies in PET and fMRI. Neuropsychologia. 2002;40:54–75. doi: 10.1016/s0028-3932(01)00066-5.
22. Farah MJ, McClelland JL. A computational model of semantic memory impairment: modality specificity and emergent category specificity. Journal of Experimental Psychology: General. 1991;120:339–357.
23. Farah MJ, McMullen PA, Meyer MM. Can recognition of living things be selectively impaired? Neuropsychologia. 1991;29:185–193. doi: 10.1016/0028-3932(91)90020-9.
24. Federmeier KD, Kutas M. Meaning and modality: influences of context, semantic memory organization, and perceptual predictability on picture processing. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2001;27:202–224.
25. Fiebach CJ, Friederici AD. Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia. 2004;42:62–70. doi: 10.1016/s0028-3932(03)00145-3.
26. Fiez JA, Raichle ME, Balota DA, Tallal P, Petersen SE. PET activation of posterior temporal regions during auditory word presentation and verb generation. Cerebral Cortex. 1996;6:1–10. doi: 10.1093/cercor/6.1.1.
27. Friederici AD, Pfeifer E, Hahne A. Event-related brain potentials during natural speech processing: effects of semantic, morphological and syntactic violations. Cognitive Brain Research. 1993;1:183–192. doi: 10.1016/0926-6410(93)90026-2.
28. Ganis G, Kutas M, Sereno MI. The search for “common sense”: an electrophysiological study of the comprehension of words and pictures in reading. Journal of Cognitive Neuroscience. 1996;8:89–106. doi: 10.1162/jocn.1996.8.2.89.
29. Geisser S, Greenhouse S. On methods in the analysis of profile data. Psychometrika. 1959;24:95–112.
30. Gernsbacher MA. Surface information loss in comprehension. Cognitive Psychology. 1985;17:324–363. doi: 10.1016/0010-0285(85)90012-X.
31. Grabowski TJ, Damasio H, Damasio AR. Premotor and prefrontal correlates of category-related lexical retrieval. Neuroimage. 1998;7:232–243. doi: 10.1006/nimg.1998.0324.
32. Grafton ST, Arbib MA, Fadiga L, Rizzolatti G. Localization of grasp representations in humans by positron emission tomography. 2. Observation compared with imagination. Experimental Brain Research. 1996;112:103–111. doi: 10.1007/BF00227183.
33. Grafton ST, Fadiga L, Arbib MA, Rizzolatti G. Premotor cortex activation during observation and naming of familiar tools. Neuroimage. 1997;6:231–236. doi: 10.1006/nimg.1997.0293.
34. Hagoort P, Brown CM. ERP effects of listening to speech: semantic ERP effects. Neuropsychologia. 2000;38:1518–1530. doi: 10.1016/s0028-3932(00)00052-x.
35. Hamm JP, Johnson BW, Kirk IJ. Comparison of the N300 and N400 ERPs to picture stimuli in congruent and incongruent contexts. Clinical Neurophysiology. 2002;113:1339–1350. doi: 10.1016/s1388-2457(02)00161-x.
36. Hart J Jr, Gordon B. Neural subsystems for object knowledge. Nature. 1992;359:60–64. doi: 10.1038/359060a0.
37. Holcomb PJ. Automatic and attentional processing: an event-related brain potential analysis of semantic priming. Brain and Language. 1988;35:66–85. doi: 10.1016/0093-934x(88)90101-0.
38. Holcomb PJ. Semantic priming and stimulus degradation: implications for the role of the N400 in language processing. Psychophysiology. 1993;30:47–61. doi: 10.1111/j.1469-8986.1993.tb03204.x.
39. Holcomb PJ, Kounios J, Anderson JE, West WC. Dual-coding, context-availability, and concreteness effects in sentence comprehension: an electrophysiological investigation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1999;25:721–742. doi: 10.1037//0278-7393.25.3.721.
40. Holcomb PJ, McPherson WB. Event-related brain potentials reflect semantic priming in an object decision task. Brain and Cognition. 1994;24:259–276. doi: 10.1006/brcg.1994.1014.
41. Humphreys GW, Forde EM. Hierarchies, similarity, and interactivity in object recognition: “category-specific” neuropsychological deficits. Behavioral and Brain Sciences. 2001;24:453–509.
42. Kellenbach ML, Wijers AA, Hovius M, Mulder J, Mulder G. Neural differentiation of lexico-syntactic categories or semantic features? Event-related potential evidence for both. Journal of Cognitive Neuroscience. 2002;14:561–577. doi: 10.1162/08989290260045819.
43. Kounios J, Holcomb PJ. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1994;20:804–823. doi: 10.1037//0278-7393.20.4.804.
44. Kroll JF, Potter MC. Recognizing words, pictures, and concepts: a comparison of lexical, object, and reality decisions. Journal of Verbal Learning and Verbal Behavior. 1984;23:39–66.
45. Kučera H, Francis WN. Computational Analysis of Present-Day American English. Brown University Press; Providence, RI: 1967.
46. Kutas M. In the company of other words: electrophysiological evidence for single-word and sentence context effects. Language & Cognitive Processes. 1993;8:533–572.
47. Kutas M, Hillyard SA. Reading senseless sentences: brain potentials reflect semantic incongruity. Science. 1980;207:203–205. doi: 10.1126/science.7350657.
48. Kutas M, Hillyard SA. Brain potentials during reading reflect word expectancy and semantic association. Nature. 1984;307:161–163. doi: 10.1038/307161a0.
49. Kutas M, Van Petten CK. Psycholinguistics electrified: event-related brain potential investigations. In: Gernsbacher MA, editor. Handbook of Psycholinguistics. Academic Press; San Diego, CA: 1994. pp. 83–143.
50. Laeng B, Zarrinpar A, Kosslyn SM. Do separate processes identify objects as exemplars versus members of basic-level categories? Evidence from hemispheric specialization. Brain and Cognition. 2003;53:15–27. doi: 10.1016/s0278-2626(03)00184-2.
51. Laiacona M, Barbarotto R, Capitani E. Perceptual and associative knowledge in category specific impairment of semantic memory: a study of two cases. Cortex. 1993;29:727–740. doi: 10.1016/s0010-9452(13)80293-6.
52. Laiacona M, Capitani E, Barbarotto R. Semantic category dissociations: a longitudinal study of two cases. Cortex. 1997;33:441–461. doi: 10.1016/s0010-9452(08)70229-6.
53. Large ME, Kiss I, McMullen PA. Electrophysiological correlates of object categorization: back to basics. Cognitive Brain Research. 2004;20:415–426. doi: 10.1016/j.cogbrainres.2004.03.013.
54. Leube DT, Erb M, Grodd W, Bartels M, Kircher TT. Activation of right fronto-temporal cortex characterizes the ‘living’ category in semantic processing. Cognitive Brain Research. 2001;12:425–430. doi: 10.1016/s0926-6410(01)00068-4.
55. Martin A. Functional neuroimaging of semantic memory. In: Cabeza R, Kingstone A, editors. Handbook of Functional Neuroimaging of Cognition. MIT Press; Cambridge, MA: 2001. pp. 153–186.
56. Martin A, Chao LL. Semantic memory and the brain: structure and processes. Current Opinion in Neurobiology. 2001;11:194–201. doi: 10.1016/s0959-4388(00)00196-3.
57. Martin A, Haxby JV, Lalonde FM, Wiggs CL, Ungerleider LG. Discrete cortical regions associated with knowledge of color and knowledge of action. Science. 1995;270:102–105. doi: 10.1126/science.270.5233.102.
58. Martin A, Ungerleider LG, Haxby JV. Category-specificity and the brain: the sensory-motor model of semantic representations of objects. In: Gazzaniga MS, editor. The Cognitive Neurosciences. 2nd ed. MIT Press; Cambridge, MA: 2000. pp. 1023–1036.
59. Martin A, Wiggs CL, Ungerleider LG, Haxby JV. Neural correlates of category-specific knowledge. Nature. 1996;379:649–652. doi: 10.1038/379649a0.
60. McCarthy G, Wood CC. Scalp distributions of event-related potentials: an ambiguity associated with analysis of variance models. Electroencephalography and Clinical Neurophysiology. 1985;62:203–208. doi: 10.1016/0168-5597(85)90015-2.
61. McCarthy RA, Warrington EK. Evidence for modality-specific meaning systems in the brain. Nature. 1988;334:428–430. doi: 10.1038/334428a0.
62. McPherson WB, Holcomb PJ. An electrophysiological investigation of semantic priming with pictures of real objects. Psychophysiology. 1999;36:53–65. doi: 10.1017/s0048577299971196.
63. McRae K, de Sa VR, Seidenberg MS. On the nature and scope of featural representations of word meaning. Journal of Experimental Psychology: General. 1997;126:99–130. doi: 10.1037//0096-3445.126.2.99.
64. Moore CJ, Price CJ. A functional neuroimaging study of the variables that generate category-specific object processing differences. Brain. 1999;122:943–962. doi: 10.1093/brain/122.5.943.
65. Moss HE, Rodd JM, Stamatakis EA, Bright P, Tyler LK. Anteromedial temporal cortex supports fine-grained differentiation among objects. Cerebral Cortex. 2005;15:616–627. doi: 10.1093/cercor/bhh163.
66. Moss HE, Tyler LK. A progressive category-specific semantic deficit for non-living things. Neuropsychologia. 2000;38:60–82. doi: 10.1016/s0028-3932(99)00044-5.
67. Moss HE, Tyler LK, Durrant-Peatfield M, Bunn EM. ‘Two eyes of a see-through’: impaired and intact semantic knowledge in a case of selective deficit for living things. Neurocase. 1998;4:291–310.
68. Mummery CJ, Patterson K, Hodges JR, Price CJ. Functional neuroanatomy of the semantic system: divisible by what? Journal of Cognitive Neuroscience. 1998;10:766–777. doi: 10.1162/089892998563059.
69. Mummery CJ, Patterson K, Hodges JR, Wise RJ. Generating ‘tiger’ as an animal name or a word beginning with T: differences in brain activation. Proceedings of the Royal Society of London B: Biological Sciences. 1996;263:989–995.
70. Paivio A. Imagery and Verbal Processes. Holt, Rinehart & Winston; New York: 1971.
71. Paivio A. Mental Representations: A Dual Coding Approach. Oxford University Press; New York: 1986.
72. Paivio A. Dual coding theory: retrospect and current status. Canadian Journal of Psychology. 1991;45:255–287.
73. Perani D, Cappa SF, Bettinardi V, Bressi S, Gorno-Tempini M, Matarrese M, Fazio F. Different neural systems for the recognition of animals and man-made tools. Neuroreport. 1995;6:1637–1641. doi: 10.1097/00001756-199508000-00012.
74. Perani D, Schnur T, Tettamanti M, Gorno-Tempini M, Cappa SF, Fazio F. Word and picture matching: a PET study of semantic category effects. Neuropsychologia. 1999;37:293–306. doi: 10.1016/s0028-3932(98)00073-6.
75. Pietrini V, Nertempi P, Vaglia A, Revello MG, Pinna V, Ferro-Milone F. Recovery from herpes simplex encephalitis: selective impairment of specific semantic categories with neuroradiological correlation. Journal of Neurology, Neurosurgery and Psychiatry. 1988;51:1284–1293. doi: 10.1136/jnnp.51.10.1284.
76. Pilgrim LK, Fadili J, Fletcher P, Tyler LK. Overcoming confounds of stimulus blocking: an event-related fMRI design of semantic processing. Neuroimage. 2002;16:713–723. doi: 10.1006/nimg.2002.1105.
77. Pylyshyn ZW. Computation and cognition: issues in the foundations of cognitive science. Behavioral and Brain Sciences. 1980;3:111–132.
78. Riddoch MJ, Humphreys GW. Visual object processing in optic aphasia: a case of semantic access agnosia. Cognitive Neuropsychology. 1987;4:131–185.
79. Saffran EM, Schwartz MF. Of cabbages and things: semantic memory from a neuropsychological perspective: a tutorial review. In: Umilta C, Moscovitch M, editors. Attention and Performance 15: Conscious and Nonconscious Information Processing. MIT Press; Cambridge, MA: 1994. pp. 507–536.
80. Schmid AM, Eddy M, Holcomb PJ. Integration of bottom-up and top-down processes in visual object recognition. A Supplement of the Journal of Cognitive Neuroscience. 2005:145.
81. Sitnikova T. Comprehension of Videos of Real-World Events: Electrophysiological Evidence. Doctoral dissertation. Tufts University; Medford, MA: 2003.
82. Sitnikova T, Kuperberg G, Holcomb PJ. Semantic integration in videos of real-world events: an electrophysiological investigation. Psychophysiology. 2003;40:160–164. doi: 10.1111/1469-8986.00016.
83. Stephan KM, Fink GR, Passingham RE, Silbersweig D, Ceballos-Baumann AO, Frith CD, Frackowiak RS. Functional anatomy of the mental representation of upper extremity movements in healthy subjects. Journal of Neurophysiology. 1995;73:373–386. doi: 10.1152/jn.1995.73.1.373.
84. Thompson-Schill SL. Neuroimaging studies of semantic memory: inferring “how” from “where”. Neuropsychologia. 2003;41:280–292. doi: 10.1016/s0028-3932(02)00161-6.
85. Thompson-Schill SL, Aguirre GK, D’Esposito M, Farah MJ. A neural basis for category and modality specificity of semantic knowledge. Neuropsychologia. 1999;37:671–676. doi: 10.1016/s0028-3932(98)00126-2.
86. Tyler LK, Bright P, Dick E, Tavares P, Pilgrim L, Fletcher P, Greer M, Moss H. Do semantic categories activate distinct cortical regions? Evidence for a distributed neural semantic system. Cognitive Neuropsychology. 2003a;20:541–559. doi: 10.1080/02643290244000211.
87. Tyler LK, Stamatakis EA, Dick E, Bright P, Fletcher P, Moss H. Objects and their actions: evidence for a neurally distributed semantic system. Neuroimage. 2003b;18:542–557. doi: 10.1016/s1053-8119(02)00047-2.
88. Tyler LK, Moss HE. Functional properties of concepts: studies of normal and brain-damaged patients. Cognitive Neuropsychology. 1997;14:511–545.
89. Tyler LK, Moss HE. Towards a distributed account of conceptual knowledge. Trends in Cognitive Sciences. 2001;5:244–252. doi: 10.1016/s1364-6613(00)01651-x.
90. Tyler LK, Moss HE, Durrant-Peatfield MR, Levy JP. Conceptual structure and the structure of concepts: a distributed account of category-specific deficits. Brain and Language. 2000;75:195–231. doi: 10.1006/brln.2000.2353.
91. Tyler LK, Stamatakis EA, Bright P, Acres K, Abdallah S, Rodd JM, Moss HE. Processing objects at different levels of specificity. Journal of Cognitive Neuroscience. 2004;16:351–362. doi: 10.1162/089892904322926692.
92. Urbach TP, Kutas M. The intractability of scaling scalp distributions to infer neuroelectric sources. Psychophysiology. 2002;39:791–808. doi: 10.1111/1469-8986.3960791.
93. van Berkum JJ, Hagoort P, Brown CM. Semantic integration in sentences and discourse: evidence from the N400. Journal of Cognitive Neuroscience. 1999;11:657–671. doi: 10.1162/089892999563724.
94. Van Petten C, Rheinfelder H. Conceptual relationships between spoken words and environmental sounds: event-related brain potential measures. Neuropsychologia. 1995;33:485–508. doi: 10.1016/0028-3932(94)00133-a.
95. Warrington EK, McCarthy RA. Categories of knowledge: further fractionations and an attempted integration. Brain. 1987;110:1273–1296. doi: 10.1093/brain/110.5.1273.
96. Warrington EK, Shallice T. Category specific semantic impairments. Brain. 1984;107:829–854. doi: 10.1093/brain/107.3.829.
97. West WC, Holcomb PJ. Imaginal, semantic, and surface-level processing of concrete and abstract words: an electrophysiological investigation. Journal of Cognitive Neuroscience. 2000;12:1024–1037. doi: 10.1162/08989290051137558.
98. West WC, Holcomb PJ. Event-related potentials during discourse-level semantic integration of complex pictures. Cognitive Brain Research. 2002;13:363–375. doi: 10.1016/s0926-6410(01)00129-x.
99. Wise R, Chollet F, Hadar U, Friston K, Hoffner E, Frackowiak R. Distribution of cortical neural networks involved in word comprehension and word retrieval. Brain. 1991;114(Pt 4):1803–1817. doi: 10.1093/brain/114.4.1803.
