Human Brain Mapping. 2008 Apr 15;30(3):976–989. doi: 10.1002/hbm.20561

Accessing newly learned names and meanings in the native language

Annika Hultén 1,2, Minna Vihla 1, Matti Laine 2, Riitta Salmelin 1
PMCID: PMC6870721  PMID: 18412130

Abstract

Ten healthy adults encountered pictures of unfamiliar archaic tools and successfully learned their name, a verbal definition of their usage, or both. Neural representation of the newly acquired information was probed with magnetoencephalography in an overt picture‐naming task before and after learning, and in two categorization tasks after learning. Within 400 ms, activation proceeded from occipital through parietal to left temporal cortex, inferior frontal cortex (naming) and right temporal cortex (categorization). Comparison of naming of newly learned versus familiar pictures indicated that acquisition and maintenance of word forms are supported by the same neural network. Explicit access to newly learned phonology, when such information was known, strongly enhanced left temporal activation. By contrast, access to newly learned semantics had no comparable, direct neural effects. Both the behavioral learning pattern and the neurophysiological results point to fundamentally different implementation of, and access to, phonological versus semantic features in processing pictured objects. Hum Brain Mapp, 2009. © 2008 Wiley‐Liss, Inc.

Keywords: lexical learning, magnetoencephalography, picture naming, categorization, phonology, semantics

INTRODUCTION

Our internal repository of words, the mental lexicon, is a dynamic system that not only maintains the representations of and explicit access to known words, but also continuously updates itself through acquisition of new words. Our aim was to study these two aspects of the mental lexicon by tracking brain responses during explicit access to newly learned words and comparing the spatiotemporal sequence of activation to that obtained with known words. Here, we focused on two major aspects of word processing, namely access to word sounds (phonology) and to word meanings (semantics).

There is extensive research on the neural substrates of known words [for a recent review, see Laine and Martin, 2006], but corresponding studies on word learning are scarce. It has been argued that the acquisition, representation, and use of words depends strongly on declarative memory [Ullman, 2004], but a recent fMRI study [Breitenstein et al., 2005] on learning of the phonological word form showed successful association of new word sounds to familiar objects through short‐term implicit associative learning. This would indicate that both explicit (declarative) and implicit memory systems are involved in word learning. The fMRI data showed that modulation of activity in the left hippocampus and ipsilateral posterior associative cortex during the learning phase predicted successful acquisition of the novel vocabulary.

Concerning explicit word learning in a second language, Raboyeau et al. [2004] conducted a 1‐month study with French adults acquiring English names of a set of pictures. Their pre‐ versus post‐training positron emission tomography (PET) naming results deviated considerably for overlearned (French) versus recently learned (English) words. Naming in French elicited a left‐sided activation pattern (the fusiform, inferior and middle temporal gyri, and middle frontal regions) commonly reported for naming of familiar objects [Moore and Price, 1999; Murtha et al., 1999; Salmelin et al., 1994], whereas learning to name in English elicited a partly bilateral increase of fronto‐temporo‐cerebellar activation. In another recent PET study on phonological word learning, Grönholm et al. [2005] explicitly taught novel names for previously unknown objects (ancient farming equipment) to healthy elderly participants. Their neuroimaging data, collected only after training, likewise showed increased left inferior frontal, left anterior‐superior temporal, and cerebellar activation for naming of novel trained versus familiar control objects.

The spatiotemporal dynamics of cortical functioning during learning to name objects were tracked in two earlier magnetoencephalography (MEG) experiments. Cornelissen et al. [2003] trained three anomic aphasics to regain access to names of familiar objects. In another study, a small group of young healthy individuals learned to name ancient farming tools [Cornelissen et al., 2004]. These two studies suggested that increased long‐latency activation (300–600 ms poststimulus) of the left inferior parietal cortex was associated with successful learning of new names. This finding was tentatively linked with the storage component of phonological working memory, which has been suggested to be important in new word learning [Baddeley et al., 1998].

As with learning of word sounds, learning of word meanings has been linked with both increased hippocampal activity [Heckers et al., 2002], and with left fronto‐temporo‐parietal activation increases [Weisberg et al., 2007]. In the latter study, the participants were explicitly taught to use novel tool‐like objects and were scanned by fMRI before and after training while performing a picture‐matching task with the tool‐like objects. The authors suggested that the training created memory representations for motion (increased left middle temporal gyrus activation) and for manipulation (increased left premotor cortex and left intraparietal sulcus activation) for the novel objects, and that the visual identification of objects is accompanied by the widespread activation of object properties beyond mere visuoperceptual features.

Differences may also emerge when neural effects of phonological training are compared with semantic training. An fMRI study found differential activation patterns for phonological and semantic training in pseudoword reading [Sandak et al., 2004]. Semantically trained items elicited increased activation of the left superior and middle temporal gyri, whereas phonologically trained items were accompanied by activation decreases in the left inferior frontal gyrus, supramarginal gyrus, and occipitotemporal areas. Note, however, that in the two studies reviewed earlier where the participants learned the names of ancient farming equipment [Cornelissen et al., 2004; Grönholm et al., 2005], provision of semantic information in the form of verbal definition did not affect the brain activation patterns of naming. Cornelissen et al. [ 2004] speculated that the lack of semantic learning effects may have been partly due to the fact that the participants were explicitly instructed to just learn the names and not the definitions.

The short review above shows considerable variation in the brain activation patterns associated with word learning. This may be due to a number of reasons, including differences in word learning tasks (e.g., emphasis on explicit versus implicit, semantic versus phonological information), measurement sessions (during encoding versus during retrieval), imaging methods, and subject populations. Although it may be unrealistic to expect that a single dedicated neural system would be related to word learning, the imaging studies by Cornelissen et al. [2003, 2004; see also Breitenstein et al., 2005] provided support for the idea that the phonological working memory system/left inferior parietal lobe activity plays a key role in new word learning [Baddeley et al., 1998]. We thus set out to explicitly teach names and/or definitions of a set of unfamiliar objects used by Cornelissen et al. [2004] to a group of normal subjects, and to track training‐related changes in brain activity during naming. However, important changes were introduced to the protocol. First, our subjects were instructed to learn both the names and the definitions, not just the names. This should presumably lead to a more natural learning process instead of a focus on mere picture‐letter string associations. Second, MEG measurement during oral naming of the novel pictures before and after training was complemented by phonological and semantic picture categorization tasks administered after the training. These tasks ensured that our findings would not be specific to the naming task alone. Moreover, MEG data collected during picture categorization can provide additional information on the neural correlates of phonological versus semantic access to freshly acquired lexical knowledge. Third, our group of 10 participants was twice as large as that of Cornelissen et al. [2004].

Picture naming, a key task in this study, is particularly suitable for experimental purposes as it is a simple and natural task that involves all the core components of word production, proceeding from object recognition, semantic access, and word sound access to articulation [Glaser, 1992; Indefrey and Levelt, 2004]. Based on a review of a large number of studies, Indefrey and Levelt [ 2004] suggested distinct neural correlates and time windows for these main stages. Specifically, conceptually driven (semantic) word selection was associated with activation of the left middle temporal gyrus region at ∼175–250 ms poststimulus onset, and access to the phonological form of the word with activation around 200–400 ms in the left posterior middle and superior temporal gyrus. The final stages of syllabification and self‐monitoring are thought to occur somewhat in parallel, roughly from 300 ms onward, being related to the activity of the left inferior frontal region and the superior temporal cortex, respectively. These spatiotemporal coordinates, albeit tentative, serve as a valuable framework in interpreting our results on picture naming and categorization for the newly learned objects, as MEG provides both the location of the task‐relevant cortical generators and the temporal unfolding of their activity at a millisecond level.

As noted earlier, picture categorization was chosen to complement the naming task as it is closely related to picture naming but depending on the instructions should specifically enhance one of the two major stages of word production, i.e., either semantic or phonological retrieval. In semantic categorization of pictures, a decision can be made (and the processing terminated) as soon as semantic information has been accessed, whereas in phonological categorization the process will have to continue further to name retrieval and analysis of its phonological structure [Humphreys et al., 1999; Indefrey and Levelt, 2004]. To maximize reliance on semantic information and minimize the potential for categorization merely based on visual features, we focused on a subtle semantic distinction among different types of tools (“detect tools used for fishing”). Our phonological categorization task (“detect names starting with the phoneme /r/”), in turn, required access to and decomposition of the phonological form of the word.

We asked (i) how integration of new lexical knowledge in the mental lexicon is reflected in cortical activation, when naming of newly acquired items is compared with naming of familiar items, and (ii) how the neural responses to naming and semantic versus phonological categorization of novel items are affected by the type of information (semantic versus phonological) provided during learning.

MATERIALS AND METHODS

This study proceeded as follows: (i) one pretraining MEG session with picture naming, (ii) training phase outside the scanner (lasting 3–6 days), (iii) one post‐training MEG measurement with picture naming, and (iv) one post‐training MEG session with picture categorization, including separate runs on semantic categorization and phonological categorization. In the following sections, we will first detail the stimuli and then explain the training procedure. The MEG measurements will be described separately for the naming task and the categorization tasks. The same participants were included throughout the study, the same stimuli were used both in training and in the MEG measurements, and the same MEG recording and analysis procedures were applied both before and after training and in the naming and categorization experiments.

Subjects and Stimuli

Ten healthy native Finnish‐speaking volunteers (5 males, 5 females; 21–30 years, mean 26 years), all right‐handed [lateralization quotient 80–100%, mean 93%, Edinburgh Handedness Scale; Oldfield, 1971] university students or graduates, gave their informed written consent to participate in the experiment that was approved by the local ethics committee.

The stimuli were 250 black‐and‐white drawings of objects (see Fig. 1 for examples); 200 pictures represented previously unknown (real but old or rarely used) tools/utensils and 50 represented modern familiar objects (Fam). The previously unknown objects were divided into four categories, 50 pictures in each, depending on which type of information was presented during training: name and brief definition describing the use of the object (NameDef); only name (Name); only definition (Def); no linguistic information (unfamiliar, Unfam). The Fam items were also presented without linguistic information. Three pictures in the Unfam category were slightly edited (rotated, a new part added, a part replaced) to reduce the possibility for associations to known items. The names were 4–10 letters long (average length 6 letters) and they were the real Finnish names of the actual objects, most of which were farming or nautical tools/utensils. The definitions, in Finnish, were 3 to 6 words long.

Figure 1.


Experimental design. (A) Training procedure. In daily computerized sessions the participants were shown pictures of initially unfamiliar, real objects. The pictures included the item name (category Name), a brief definition of its usage (Def), both types of information (NameDef), or no information at all (unfamiliar category, Unfam). A control category consisting of familiar items (Fam) was also included in the training but presented without name or definition. (B) Procedure in the MEG recording. MEG data during direct confrontation naming were collected both before and after the training phase. MEG during item categorization was recorded only after training, and the Fam items were not included. In one session, items were categorized based on their semantic properties (Semcat: Is the item related to fishing?) and in another session based on their phonological properties (Phoncat: Does the name begin with /r/?). The empty screen appearing after the object presentation was the prompt for giving the item name (or the generic name “object” when the name was not known), or responding “yes” when the object fell in the predefined semantic or phonological category.

There was no difference in word length between the stimulus categories [number of letters, one‐way ANOVA F(2, 98) < 1, n.s.]; this also applies to the number of phonemes because of the practically one‐to‐one grapheme‐to‐phoneme correspondence in the Finnish language. The item names provided for the Name, NameDef, and Fam categories had a similar phonological/orthographic neighborhood size (i.e., the number of real words that differed from the stimulus names only by one phoneme/letter): mean ± SD value for NameDef 3.0 ± 4.1; Name 3.7 ± 4.5; Fam 3.8 ± 3.9.
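The one-substitution neighborhood count described above can be sketched as follows; the function name and the toy lexicon are hypothetical illustrations, not taken from the authors' materials:

```python
def neighborhood_size(word, lexicon):
    """Count lexicon words that differ from `word` by exactly one
    letter/phoneme substitution (same length) -- the phonological/
    orthographic neighborhood size described in the text."""
    return sum(
        len(candidate) == len(word)
        and sum(a != b for a, b in zip(candidate, word)) == 1
        for candidate in lexicon
    )

# Toy example with made-up Finnish-like strings (hypothetical):
# "pala" and "kalu" each differ from "kala" by one letter.
print(neighborhood_size("kala", ["pala", "kala", "kalu", "kissa"]))  # 2
```

In practice the lexicon would be a full word list of the target language; the count excludes the word itself (zero substitutions) and any candidate of a different length.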

The visual complexity was similar among stimulus categories. Eight Finnish‐speaking naive participants (none of them participated in the MEG recording) attempted to name and/or define the usage of the pictures and rated them with respect to the visual complexity of the drawing (on a scale from 1 to 5 ranging from low to high complexity). One‐way ANOVA showed that the visual complexity ratings did not differ among the stimulus categories [F(4, 245) = 1.6, P = 0.2, n.s.; mean ± SD values: NameDef 2.8 ± 0.9; Name 2.8 ± 0.9; Def 2.9 ± 0.8; Unfam 2.8 ± 0.8; Fam 2.5 ± 0.7]. This pretest also demonstrated that the items were unfamiliar to modern‐day people. None of the names for the objects were known to the evaluators and though they stated that they knew the usage for 11.8% (SD 6.2%) of the items, only 1.6% (SD 1.8%) of their definitions turned out to be correct.

The unfamiliar items were impossible to categorize semantically without some description of their usage. Four naive participants (none of them participated in the MEG recording) were asked to detect items related to fishing based solely on their visual features. All participants performed at chance level.

Training

The training period consisted of at least three daily 40‐min computerized sessions (see Fig. 1A), until the participant had learned 98% of the names. Before training, the participants were instructed to learn both the names and the definitions. During training sessions, each of the 250 pictures was shown for 8 s in a random order, followed by a 2‐s pause before the next picture. The name and the definition, when either or both were given, were simultaneously presented on the screen above the picture. After each training session, the acquisition of names and meanings was monitored by a questionnaire.

We chose to train the participants until a specific criterion was reached (98% of the names mastered), as opposed to setting a fixed number of training sessions across participants. Our aim was to equalize the goal, i.e., vocabulary acquisition, and to minimize possible differences in the consolidation of the new words that might arise from natural variation in individual learning ability and use of strategies. By allowing for the possibility of individual differences in the learning strategy as well as emphasizing the learning of both phonological and semantic information (when provided), we also sought to avoid unrealistic language learning in the form of simple stimulus pairing but instead tap into more natural mechanisms of vocabulary growth.

MEG Experiment

During the MEG recording, the stimuli appeared on a back‐projection screen set at 1 m in front of the participant, fitting in an area of 3° × 3° of the central visual field. The stimulus was shown for 150 ms, followed by a blank screen for 850 ms (see Fig. 1B). A question mark then appeared for 1 s, prompting the participant to respond to a target stimulus (see later). The next trial started after a 1‐s interval. The stimulus onset asynchrony (SOA) was 3 s.

The picture‐naming task was performed twice, the first time 1 day before the onset of training and the second time within 1 day of reaching the criterion level (in one case after 2 days due to a weekend). The participants' task was to name the object or to say “object” (“esine” in Finnish) if they did not know or remember the name. The participants were instructed to respond during the question mark that followed the stimulus. A brief practice session preceded the recording. A delayed naming task (as opposed to immediate naming, which would provide reaction time data) was chosen to equalize, as well as possible, the process of word retrieval prior to actual vocalization between the familiar and unfamiliar/newly learned objects and thus to facilitate direct comparison. This setup also helped to reduce mouth‐movement artifacts. Stimuli were presented in sequences containing all 250 stimuli (all five stimulus categories) in a random order and lasting ∼13 min. Three or four sequences were presented to each participant, depending on the number of blinks and eye movements (see “MEG Data Analysis” section for rejection of trials).

The categorization experiment was performed in a separate session within 2 days of the first post‐training recording. There were two parts, a phonological categorization task (Phoncat) and a semantic categorization task (Semcat), whose order within the session was counterbalanced across participants. In the Phoncat condition, the targets were objects with names starting with the phoneme/letter “r” (note that in the Finnish language, there is a practically complete one‐to‐one correspondence between graphemes and phonemes). In the Semcat condition, the targets were objects associated with fishing. The participants were instructed to respond to the targets (six/seven items in the Phoncat/Semcat task) by saying “yes” (“joo” in colloquial Finnish) during the question mark. Only the nontargets (no response) were used in the analysis; the targets and the immediately following trial were excluded. The 200 initially unfamiliar stimuli (NameDef, Name, Def, and Unfam) were presented in a pseudorandomized order. One sequence lasted 10 min. Three or four sequences, depending on the number of blinks and eye movements, were required to collect a sufficient number of trials per stimulus category. The Fam items were not included in the categorization tasks to keep the total measurement time acceptable.

The MEG recordings took place in a magnetically shielded room, using a helmet‐shaped 306‐channel whole‐head neuromagnetometer (Vectorview™, Neuromag, Helsinki, Finland). The system contains 102 triple sensor elements composed of two orthogonal planar gradiometers and one magnetometer. The planar gradiometers, used in the data analyses in this study, detect the maximum signal directly above an active cortical area. The band‐pass was 0.03–200 Hz and the sampling rate 600 Hz. Eye movements and blinking artifacts were monitored with electrodes attached vertically and horizontally around the eyes. Mouth movements were monitored with two electrodes attached to the upper and diagonally lower corner of the mouth. The position of the participant's head within the magnetometer was determined by measuring the location of four coils, attached to the participant's head, first in the head coordinate system (set by the nasion and points in front of the ear canals) by using a three‐dimensional digitizer and second, in the magnetometer coordinate system, by energizing the coils briefly before each recording session.

MEG Data Analysis

The MEG signals were low‐pass filtered at 40 Hz and averaged off‐line over a 1200‐ms interval, from 200 ms before to 1000 ms after stimulus onset. The 200‐ms prestimulus interval was used as a baseline. Epochs contaminated by blinks or eye movements were rejected. The typical rejection level was 150 μV but was adjusted individually when needed. Epochs during which an MEG sensor signal exceeded the level of 3,000 fT/cm (reflecting an external disturbance) were also rejected. Behavioral data from the first MEG recording (picture naming) ensured that the stimuli were novel to the participants. In the rare event that a participant was familiar with any object (max five items/participant), those objects were excluded from the analysis for the participant in question. In the picture‐naming task, epochs during which the participant answered incorrectly or did not respond at all were excluded. In the categorization task, analysis was performed on the nontargets, excluding the trials immediately following the targets. The very rare trials during which the participant had erroneously responded to nontargets were excluded as well. The mean number of off‐line averages was 119 (SD 18) for the naming task and 119 (SD 17) for the categorization task.
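The rejection criteria above amount to simple peak-amplitude thresholding per epoch. A minimal sketch with NumPy, using the thresholds quoted in the text (150 μV for EOG, 3,000 fT/cm for MEG); the array shapes and function name are illustrative assumptions, not the authors' analysis code:

```python
import numpy as np

def select_clean_epochs(meg, eog, meg_limit=3000.0, eog_limit=150.0):
    """Return indices of epochs passing both artifact criteria.

    meg: (n_epochs, n_channels, n_times) gradiometer signals in fT/cm
    eog: (n_epochs, n_times) EOG signal in microvolts
    Default limits follow the text; shapes and names are hypothetical.
    """
    meg_ok = np.abs(meg).max(axis=(1, 2)) < meg_limit  # no sensor-level jump
    eog_ok = np.abs(eog).max(axis=1) < eog_limit       # no blink/eye movement
    return np.flatnonzero(meg_ok & eog_ok)
```

Epochs with incorrect or missing behavioral responses would then be dropped from the surviving indices before averaging.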

To obtain an initial overview of the data, areal mean signals (AMS) were calculated over seven areas of interest: left and right frontal, temporal, and parietal areas, and the occipital area. First, vector sums of each gradiometer pair were computed by squaring the two MEG signals, summing them together, and calculating the square root of this sum. The areal mean signals were obtained by averaging these vector sums for each area of interest, individually for each subject. Finally, the areal mean signals were averaged across subjects. Because of the way the sensor‐level areal mean signals are calculated (square root of sum of squared signals) they always have a positive value (>0).
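The vector-sum and areal-averaging steps can be sketched as follows; array shapes and names are illustrative assumptions, not the authors' analysis code:

```python
import numpy as np

def areal_mean_signal(grad_pairs):
    """Areal mean signal (AMS) over one region of interest.

    grad_pairs: (n_pairs, 2, n_times) array holding the two orthogonal
    planar gradiometer signals of each sensor pair in the region.
    """
    # Vector sum per pair: sqrt(g1**2 + g2**2) at each time point
    vector_sums = np.sqrt((grad_pairs ** 2).sum(axis=1))
    # Average the vector sums over the pairs in the region; the result
    # is non-negative by construction, as noted in the text.
    return vector_sums.mean(axis=0)
```

Averaging across subjects is then simply another mean over the per-subject AMS traces for that region.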

The neuronal sources generating the MEG signals were modeled as Equivalent Current Dipoles (ECD) [Hämäläinen et al., 1993], following established analysis procedures [e.g. Lounasmaa et al., 1996; Salmelin, 2007; Salmelin et al., 2000; Vihla et al., 2006]. An ECD estimates the mean location, orientation, and strength of the cortical current flow from the distribution of the magnetic field. The data were scanned visually to find dipolar field patterns, signaling local synchronous neural activation. The ECDs were determined one‐by‐one, each by selecting a subset of sensors at the time point showing the clearest dipolar field pattern. The ECDs were accepted only if the goodness‐of‐fit value, which indicates how much of the measured field is accounted for by the ECD, exceeded 80%. To obtain the time course of activation in the different source areas, the ECDs were included simultaneously in a multidipole model. The locations and orientations of the ECDs were kept fixed, whereas their strengths were allowed to vary to best account for the signals detected by all MEG sensors over the entire analysis interval.
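The goodness-of-fit criterion is the fraction of the measured field variance explained by the dipole forward model. A sketch of the acceptance test under that standard definition (variable names hypothetical):

```python
import numpy as np

def goodness_of_fit(measured, modeled):
    """g = 1 - ||b_measured - b_modeled||^2 / ||b_measured||^2, i.e., the
    proportion of the measured field pattern accounted for by the ECD."""
    residual = measured - modeled
    return 1.0 - (residual ** 2).sum() / (measured ** 2).sum()

def accept_ecd(measured, modeled, threshold=0.80):
    """A fitted dipole is kept only if it explains > 80% of the field."""
    return goodness_of_fit(measured, modeled) > threshold
```

Here `measured` and `modeled` would be the field values over the sensor subset selected for the fit at the chosen time point.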

To locate the sources anatomically, ECDs were displayed on individual magnetic resonance images (MRI). For group level visualization, all sources were displayed on one participant's brain surface, using an elastic transformation [Schormann et al., 1996; Woods et al., 1998].

Statistical Tests

For group‐level statistical analysis, the individual sources were grouped on the basis of spatial proximity and similarity of their time behavior. Regions in which six or more participants showed activation were included in statistical comparisons. The time courses of activation (source waveforms) in the individual participants were characterized by the maximum level of activation and the time at which it was reached (“peak strength”, “peak latency”). If the source strength did not exceed baseline variation, the peak strength was marked as zero and no latency value was defined. The mean level of activation was calculated as well, using a fixed time window per area that encapsulated the onset, peak, and decline of the response across all participants, i.e., the onset of the time window was defined by the participant with the earliest onset and the offset by the participant in whom the activation persisted longest. The same time window was then used for all participants. For slowly changing, sustained activation, the build‐up and decline of the response were additionally described by the times at which activation had reached or diminished to 50% of its maximum. In the picture‐naming task, these values were tested statistically using a repeated‐measures ANOVA with a 2 × 5 (pre/post‐training × stimulus category) factorial design. In the categorization task, a 2 × 4 repeated‐measures ANOVA was used with task (Phoncat and Semcat) and stimulus category (NameDef, Name, Def, and Unfam) as within‐subject factors. Greenhouse‐Geisser correction was applied when necessary. Note that grand average waveforms were only used for visualization; activation strengths and latencies were collected from the individual source waveforms.
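The per-participant waveform measures (peak strength, peak latency, mean level, and 50%-of-maximum rise/decline times) can be sketched as below. The zero-peak criterion of "not exceeding baseline variation" is operationalized here, purely as an assumption, as a 2-SD threshold on the prestimulus baseline:

```python
import numpy as np

def characterize_source(wave, times, baseline_sd, window):
    """Summary measures for one source waveform in one participant.

    wave: source strength time course (e.g., in nAm)
    times: matching time axis in ms
    window: (start, end) analysis window in ms, fixed per source area
    baseline_sd: SD of the prestimulus baseline; the 2*SD threshold is
    an assumption, the paper says only 'did not exceed baseline variation'
    """
    sel = (times >= window[0]) & (times <= window[1])
    w, t = wave[sel], times[sel]
    peak = float(w.max())
    if peak <= 2 * baseline_sd:
        return {"peak": 0.0, "latency": None}  # no latency defined
    i = int(w.argmax())
    half = peak / 2
    rise50 = float(t[: i + 1][w[: i + 1] >= half][0])  # first reaches 50%
    after = t[i:][w[i:] <= half]
    fall50 = float(after[0]) if after.size else None   # declines to 50%
    return {"peak": peak, "latency": float(t[i]), "mean": float(w.mean()),
            "rise50": rise50, "fall50": fall50}
```

These per-participant values, not the grand-average waveforms, would then feed the repeated-measures ANOVAs described above.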

RESULTS

Behavioral Data

The subjects learned 98% of the names over the course of three to six training sessions (see Fig. 2). During these sessions, they simultaneously learned 93–100% of the definitions. When interviewed after the MEG recordings, the participants reported that the definitions were easier to learn than the names. This observation is reflected in the learning curves in Figure 2, with significantly higher learning scores for the meanings than the names after only one training session [t(8) = 9.0, P < 0.01]. The number of training sessions required to reach the criterion level of 98% of item names was not affected by whether the names were presented alone (Name) or accompanied by definitions of usage (NameDef) [t(8) = −1.0, P = 0.35, n.s.; mean ± SD values: Name 4.3 ± 1.2; NameDef 4.2 ± 1.1]. There was a nearly significant tendency to learn the definitions within fewer training sessions when they were presented alone (Def) than when accompanied by the name (NameDef) [t(8) = 2.3, P = 0.051; mean ± SD values: Def 3.0 ± 1.1; NameDef 3.6 ± 1.3]. Variation between participants (vertical bars in Fig. 2) was greater for learning names than for learning definitions. Most participants reported that during the training period they associated new items with similar familiar items. One participant found the computerized training sessions unmotivating and was allowed to read his own written notes in addition to participating in five computerized training sessions (though not daily, unlike the other participants). His learning rate was monitored in the same way as the other participants', and he needed ∼1 month to reach the same level as the others. His MEG responses were comparable with those of the other participants.

Figure 2.


Average learning rates. Left: Learning of item names when only names (Name) or both names and definitions (NameDef) were provided. The vertical bars denote the range of values from minimum to maximum. Right: Learning of definitions of item usage when only definitions (Def) or both names and definitions (NameDef) were provided. Nine participants participated in at least three computerized training sessions daily until they had learned 98% of the names. The 10th participant did not participate in computerized sessions daily, and his data are excluded from the curves.

In the MEG experiment, the actual vocal responses were delayed until 1 s after picture onset in an attempt to equate the search and preparation phase between stimulus categories. Error rates were thus considered more meaningful measures than reaction times for evaluating differences between stimulus types. The mean number of omissions or other errors per session varied significantly between the stimulus categories [one‐way ANOVA F(4, 36) = 23.8, P < 0.01, mean ± SD values: NameDef 3.9 ± 1.7; Name 4.1 ± 2.1; Def 0.3 ± 0.4; Unfam 0.5 ± 0.6; Fam 1.5 ± 0.4]. As expected, planned comparisons showed that retrieving an actual name instead of saying “object” produced more errors [Fam > Unfam t(9) = 2.3, P < 0.05; Fam > Def t(9) = 3.4, P < 0.01; NameDef > Unfam t(9) = 6.1, P < 0.01; Name > Unfam t(9) = 5.4, P < 0.01]. The participants also made more errors when naming a newly acquired item than a familiar object [NameDef > Fam t(9) = 3.8, P < 0.01; Name > Fam t(9) = 4.9, P < 0.01]. There was no difference between the NameDef and Name categories, indicating that availability of semantic knowledge did not affect the accuracy of naming.

The participants were interviewed after the categorization experiment. About half of them rated the Phoncat task as easy and the Semcat task as more demanding, while the other half rated the tasks the other way around. Nevertheless, all subjects performed at ceiling level on both tasks; they missed or identified wrong objects as targets on average only 0.4 times/session in the Phoncat task and 0.8 times/session in the Semcat task.

Areal Mean Signals

Figure 3 displays the average MEG signal strength as a function of time over seven regions of interest in the naming task (before and after training) and in the categorization tasks (phonological and semantic, after training). A subset of the stimulus categories, the NameDef and Unfam items, as well as the Fam items in the naming task, exemplifies the patterns of activation. At the sensor level, the overall spatiotemporal sequence was remarkably similar for all tasks, with an early posterior response at around 150 ms, followed by sustained signals over the temporal and frontal lobes, in addition to the occipital and parietal areas. Both the training effects in the naming task and the differences between stimulus types in the categorization tasks were concentrated in the sustained responses over the temporal and frontal areas, starting at about 300 ms after stimulus onset.

Figure 3.


Grand areal mean signals of all subjects over seven areas of interest. The to‐be‐learned item types are exemplified by the NameDef category and contrasted with unfamiliar items for which no information was provided (Unfam) and with familiar items (Fam). Top: Naming task before and after training. Bottom: Phonological (Phoncat) and semantic (Semcat) categorization tasks performed after training.

Picture Naming: Source Analysis

In each participant, the same set of source areas (6–10 in total), represented by Equivalent Current Dipoles (ECDs), accounted for the MEG signals in both the pre‐ and post‐training recordings and for all stimulus categories. The activation pattern was largely similar across the participants. As depicted in Figure 4, cortical activation proceeded bilaterally from occipital (<200 ms) through parietal (200–400 ms) to left temporal and bilateral frontal cortex (>300 ms). Occipital sources were found in all participants, with activity peaking on average, across measurements, at 98 ± 13 ms (mean ± SD) post stimulus. Parietal activity was detected in seven participants in the left hemisphere (LH), peaking at 298 ± 93 ms, and in all 10 participants in the right hemisphere (RH), peaking at 294 ± 81 ms. Left posterior temporal activation, found in eight participants, peaked at 417 ± 163 ms. Frontal sources were observed in nine participants in the LH and in seven participants in the RH, with LH activation peaking at 606 ± 188 ms and RH activation at 727 ± 180 ms. Although the left temporal sources were anatomically divided into superior and inferior/middle groups (see Fig. 4), the time courses of activation were similar. We detected either a superior or an inferior/middle left temporal source in any one participant, never both.

Figure 4.

Group‐level results in the naming task. Left: Clustering of sources in six cortical regions of interest. Each dot represents the center of an active cortical patch in one individual. For each cluster, the number of participants with a source in that area is given in parentheses. The figure and analyses include one source per region per participant. Middle: Grand average time courses of activation in the depicted source clusters before and after training. The different stimulus categories are indicated with different line types. Right: Peak amplitudes (mean + SEM) for the different stimulus categories (NameDef = ND; Name = N; Def = D; Unfam = Uf; Fam = F). Asterisks above the bars indicate significant difference from the Unfam category (within a session); asterisks on the bars indicate significant between‐session difference in activation (per each stimulus category).

Note that the grand average waveforms of activation before and after learning displayed in Figure 4 were used only for visualization; all statistical analyses are based on individual data. The following results are based on the peak activation strengths; the mean strengths across the response duration generally showed the same pattern. The Unfam items, for which no phonological or semantic information was provided during training, served as the critical control condition for relevant changes in neuronal activation. Learning effects should emerge, after training, as differences between the activations elicited by the NameDef/Name/Def items and the Unfam items.

In the left parietal region, a repeated measures ANOVA (pre/post‐training × stimulus category) of the peak amplitudes showed a significant main effect of stimulus category [F(4, 24) = 2.8, P < 0.05], which was due to larger amplitudes for Fam than Unfam items [t(7) = 3.0, P < 0.05]. There were no significant changes in the peak activation in the right parietal region.

Clear learning effects were evident in the left temporal and bilateral frontal regions (see Fig. 4). In the left temporal region, the peak strength showed a main effect of pre/post‐training sessions [F(4, 28) = 6.5, P < 0.01] and an interaction between pre/post‐training sessions and stimulus category [F(4, 28) = 4.9, P < 0.01]. Similar effects were found in both the left and right frontal areas [pre/post‐training effect: left F(1, 8) = 5.6, P < 0.05, right F(1,6) = 11.5, P < 0.05; pre/post‐training × stimulus interaction: left F(4, 32) = 9.9, P < 0.01, ϵ = 0.4, right n.s.] together with a main effect of stimulus category [left F(4, 32) = 7.2, P < 0.05, ϵ = 0.3]. Pairwise comparisons showed that the activation was significantly stronger to the Fam than Unfam items before training in the left temporal [t(7) = 4.4, P < 0.01] and left frontal [t(5) = 2.0, P < 0.05] regions. The activation elicited by the Name and NameDef items was significantly increased from pre to post‐training in the left temporal and bilateral frontal regions [Name items: left temporal t(7) = 2.0, P = 0.05, left frontal t(8) = 3.9, P < 0.05, right frontal t(6) = 3.4, P = 0.01; NameDef items: left temporal t(7) = 2.0, P = 0.05, left frontal t(8) = 3.0, P < 0.05, right frontal t(6) = 3.2, P < 0.05]. A similar increase for the Def items was observed only in the frontal regions [left frontal t(8) = 2.0, P < 0.05, right frontal t(6) = 2.0, P < 0.05]. The interaction between pre/post‐training and stimulus category is additionally explained by the fact that the Name and NameDef items evoked a significantly stronger response than Unfam items in the post‐training measurement [Name items: left temporal t(7) = 2.6, P < 0.05, left frontal t(8) = 3.3, P = 0.01, right frontal t(6) = 3.3, P < 0.05; NameDef items: left temporal t(7) = 2.1, P < 0.05, left frontal t(8) = 3.1, P = 0.01, right frontal t(6) = 3.1, P < 0.05], whereas they did not differ from each other in the pre‐training measurement. 
In the frontal areas, the Fam items evoked stronger activation than the Unfam items after training as well [left frontal t(8) = 2.4, P < 0.05, right frontal t(6) = 2.7, P < 0.05].
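The 2 × 5 repeated‐measures design used for the peak amplitudes (pre/post‐training × stimulus category, both within‐subject) can be sketched as follows. The amplitudes are synthetic, the post‐training "boost" for named items is an assumption built in purely to produce an interaction like the one reported, and `AnovaRM` from statsmodels is one possible implementation, not the authors' analysis code:

```python
# Illustrative sketch only: repeated-measures ANOVA on peak amplitudes
# with within-subject factors session (pre/post) and stimulus category.
# Amplitudes are synthetic (arbitrary units); the post-training boost
# for items with learned names is an assumption for illustration.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
categories = ["NameDef", "Name", "Def", "Unfam", "Fam"]

rows = []
for subject in range(8):
    for session in ["pre", "post"]:
        for cat in categories:
            # Hypothetical learning effect: trained-name items gain
            # amplitude after training.
            boost = 5.0 if session == "post" and cat in ("NameDef", "Name") else 0.0
            rows.append({"subject": subject, "session": session,
                         "category": cat,
                         "amplitude": 20.0 + boost + rng.normal(0, 2)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="amplitude", subject="subject",
              within=["session", "category"]).fit()
print(res.anova_table)  # F values for session, category, interaction
```

A session × category interaction of this kind, followed by pairwise comparisons, is the pattern used in the text to argue for a genuine learning effect rather than a global amplitude change.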

Semantic versus Phonological Categorization: Source Analysis

As in picture naming, one set of ECDs (six to nine in total) accounted for the MEG signals for all stimulus types in both categorization tasks. Although the source distribution was quite similar to that in the naming task, it was not fully identical; hence, ECD sets were constructed separately for the naming and categorization data. In the categorization tasks, the activation proceeded from the occipital cortex (<200 ms) through bilateral parietal regions (200–400 ms) to the left and right temporal cortex (>300 ms). The cortical dynamics were largely similar across participants (see Fig. 5). All participants had at least one source in the occipital cluster (peak latency 108 ± 14 ms). Parietal activation at 200–400 ms was detected in 8 of 10 participants (bilaterally in six; peak latency 306 ± 86 ms in the LH and 289 ± 79 ms in the RH). Clusters of sources in the temporal cortex, active at 300–800 ms, were observed in 7 participants in the LH and in 6 participants in the RH (bilaterally in four; peak latency 467 ± 124 ms in the LH and 436 ± 115 ms in the RH). Left frontal sources at 300–1,000 ms were identified in 6 participants (peak latency 387 ± 128 ms).

Figure 5.

Group‐level results in the categorization tasks. Left: Source clusters in the different cortical areas. For each cluster, the number of participants with a source in that area is given in parentheses. The figure and analyses include one source per region per participant. Middle: Grand average time courses of activation in the depicted source clusters for the Phoncat and Semcat tasks. The different stimulus categories are indicated with different line types. Right: Peak amplitudes (mean + SEM) for the different stimulus categories (NameDef = ND; Name = N; Def = D; Unfam = Uf). Asterisks above the bars indicate significant difference from the Unfam category.

Significant group‐level statistical effects of stimulus and/or task were detected only in the temporal lobes. As in the naming task, the statistical analysis is based on the peak activation strengths; the mean activation strength generally conveyed the same information. In the left temporal lobe, a two‐way ANOVA (stimulus category × task) showed a significant main effect of stimulus category [F(3,18) = 6.0, P < 0.01] and an interaction between stimulus category and task [F(3,18) = 4.3, P < 0.05]. Pairwise comparisons showed that in the Phoncat task activation was stronger to the NameDef than Unfam stimuli [t(6) = 2.7, P < 0.05], and that the difference between the Name and Unfam stimuli approached significance [t(6) = 2.4, P = 0.057]. The Def stimuli, in contrast, did not differ significantly from the Unfam category. In the Semcat task, no significant differences in activation strength were detected. Regarding response duration, quantified as the time point at which the activation had decreased to 50% of its maximum on the descending slope, there were main effects of category [F(3,18) = 11.1, P < 0.01, ϵ = 0.6] and task [F(1,6) = 15.9, P < 0.01]. The activation persisted longer in the Phoncat than in the Semcat task (50% latency on the descending slope: Phoncat 720 ± 192 ms, Semcat 638 ± 135 ms). Pairwise comparisons showed that responses to NameDef and Name stimuli lasted longer than those to Unfam stimuli in both the Phoncat [t(6) > 3.1, P < 0.05] and Semcat tasks [t(6) > 2.5, P < 0.05].
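The duration measure used above, the latency at which a response has decayed to 50% of its peak on the descending slope, can be computed as in this sketch; the waveform here is a toy triangular response, not measured data:

```python
# Illustrative sketch only: latency at which a response falls to 50%
# of its peak on the descending slope. The waveform is a synthetic
# triangular response (peak 40 a.u. at 450 ms, decaying over 500 ms).
import numpy as np

t_ms = np.arange(0, 1000)                 # time axis in milliseconds
peak_t, peak_amp = 450, 40.0
wave = np.where(t_ms <= peak_t,
                peak_amp * t_ms / peak_t,                            # rise
                np.maximum(0.0, peak_amp * (1 - (t_ms - peak_t) / 500)))  # decay

peak_idx = int(np.argmax(wave))
half = wave[peak_idx] / 2
# First post-peak sample at or below half the maximum.
desc = np.where(wave[peak_idx:] <= half)[0]
latency_50 = int(t_ms[peak_idx + desc[0]])
print(f"50% latency on the descending slope: {latency_50} ms")  # → 700 ms
```

On real MEG source waveforms one would typically restrict the search to the analysis window and interpolate between samples, but the principle is the same.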

In the right temporal lobe, the peak activation strength displayed a significant main effect of stimulus category [F(3,15) = 12.7, P < 0.01], and interaction between stimulus category and task approached significance [F(3,15) = 3.1, P = 0.06]. In the Phoncat task, the activation was significantly stronger to the NameDef, Name, and Def stimuli than to the Unfam stimuli [t(5) = 4.4, P < 0.01; t(5) = 8.6, P < 0.01; t(5) = 3.1, P < 0.05, respectively]. No significant stimulus effects emerged in the Semcat task.

DISCUSSION

The neural implementation of phonological and semantic processing of new lexical entries was probed by picture naming and semantic and phonological categorization tasks. In the picture‐naming task, we asked how the access to a newly learned item is influenced by knowledge of its name, usage, or both types of information. Manipulating these attributes taps into different stages of the picture‐naming process, as it advances from visual through semantic to phonological analysis [Indefrey and Levelt, 2004]. To gain a more dynamic view of lexical processing, we used picture‐categorization tasks to evaluate the combined effect of available knowledge (name, usage, or both) and task requirements (decision based on either name or usage). Theoretically, categorization of pictured items should proceed largely similarly to that of naming the items, except that categorization by item usage should be possible as soon as semantic representation is accessed and the process could, in principle, thereafter be terminated. For categorization based on the item name, analysis would need to proceed to the next level where phonological information is accessed [Humphreys et al., 1999; Indefrey and Levelt, 2004].

The spatiotemporal sequence of cortical activation measured by MEG was remarkably similar for picture naming and both categorization tasks, progressing from occipital through parietal and temporal to frontal regions within 500 ms poststimulus. This pattern of activation is well in line with earlier MEG reports on picture naming [Levelt et al., 1998; Salmelin et al., 1994; Sörös et al., 2003] and categorization of familiar everyday items [Vihla et al., 2006]. The overall spatiotemporal sequence of activation was the same for familiar and initially unfamiliar items and remained unchanged when new linguistic knowledge was gained, indicating similar processing mechanisms for new and well‐established items.

In this study, the sequence of activation between the naming and categorization tasks diverged in the frontal lobe. Picture naming was accompanied by bilateral activation of the inferior frontal cortex, stronger to items with known names (Fam, Name, NameDef) than to items referred to with the generic name “object” (Unfam, Def). Categorization of newly learned items, however, showed more dorsal left frontal activation that did not differentiate between stimulus types. These findings are in line with the proposed role of the inferior frontal cortex in phonological access and motor preparation for articulation [Bookheimer, 2002; Kuriki et al., 1999; Salmelin et al., 1994; Vigneau et al., 2006]. In picture naming, a decision about knowledge of the item name, followed by verbal output, had to be made for every stimulus. It is reasonable that phonological encoding and motor preparation for production of a newly learned name taxed the processing in this area more than generation of the relatively stereotyped response “object”. In the categorization task, however, no response was required for the nontargets (the targets themselves were not included in the analysis). The lack of inferior frontal activation in the categorization part of this study suggests that processing of newly learned items does not automatically proceed to postlexical processing stages such as syllabification. For familiar items it apparently does, as suggested by an earlier MEG study that showed an effect of phonological categorization in the inferior frontal cortex [Vihla et al., 2006]. The more dorsal sources in the categorization than naming task in this study could reflect decision‐specific processes found in some phonological decision tasks [Bookheimer, 2002].

Across all tasks, the most consistent effects of phonological processing (Name and NameDef items) were detected in the left temporal cortex after about 300 ms poststimulus. Naming and phonological categorization enhanced this activation for items with newly learned names when compared with the unfamiliar items, but no comparable enhancement was observed in the semantic categorization. A similar increase of the sustained left temporal activation has also been reported in phonological versus semantic categorization of familiar everyday items [Vihla et al., 2006]. The strong left‐hemisphere effect of phonological information and active access to phonology is in line with the results of an extensive meta‐analysis of behavioral and neuroimaging studies on word production [Indefrey and Levelt, 2004], suggesting that the left posterior temporal lobe is associated with retrieval of the lexically stored phonological code starting at about 250 ms. In another meta‐analysis focusing on fMRI studies [Vigneau et al., 2006], posterior and middle parts of the temporal lobe have similarly been linked to phonological processing (both as part of an auditory‐motor loop and as an interface between semantics and phonology). A recent study, seeking to distinguish the effect of word frequency from that of familiarity and word length in picture naming, suggested that increased activation of the left posterior superior temporal gyrus reflects increased phonological processing cost required for low‐frequency words [Graves et al., 2007].

Knowledge of verbal semantics (usage), in contrast, had no direct effect on cortical activation in either naming or semantic categorization. One might argue that confrontation naming does not necessarily tap verbal semantics very strongly. However, high‐level semantic categorization of tools and utensils that are all visually fairly similar should do so. Yet no direct neural effect was detected, in agreement with an earlier MEG study on semantic categorization of familiar items [Vihla et al., 2006]. Categorization studies have typically focused on the distinction between living and nonliving items [Caramazza and Shelton, 1998; Devlin et al., 2002; Löw et al., 2003; Martin and Chao, 2001; Perani et al., 1999; Rahman et al., 2003; Tranel et al., 2005]. In this study, categorization at a subordinate level, separating fishing tools from other tools, may have led our participants to spontaneously apply some type of semantic categorization even to the items for which no semantic information was explicitly provided. It is also possible that object‐like pictures automatically elicit semantic processing [Boucart and Humphreys, 1992; Grill‐Spector and Kanwisher, 2005; Pins et al., 2004]. In more general terms, this issue relates to a long‐standing dispute in neuropsychology concerning the separability of object recognition systems from semantics [Grill‐Spector and Kanwisher, 2005; Laine and Martin, 2006].

Assuming that the search process for potential meaning was evoked similarly for each item, it should involve the same general neuronal network for all stimulus types, both before and after learning. Activation of the left temporal cortex, detected in all experimental conditions, is a likely candidate for reflecting semantic processing, in addition to its salient role in phonological processing. An earlier MEG study on picture naming reported that left temporal activation at about 200 ms poststimulus was different for pictures presented in the context of items from the same versus different semantic categories, suggesting that the left temporal cortex was involved in the semantic interference effect and thus also in lexical semantic retrieval [Maess et al., 2002]. Activation of the posterior temporal lobe has been reported for naming at a superordinate level [Tyler et al., 2004], and the middle temporal lobe has been suggested to store semantic features typical to man‐made objects [Devlin et al., 2002; Perani et al., 1999; Vitali et al., 2005].

In the categorization tasks, the items for which the participants lacked the task‐relevant information but nonetheless had some knowledge (Name items in the Semcat task, Def items in the Phoncat task) evoked activation that differed from that to Unfam items. In the left temporal region, the Name items elicited longer‐lasting responses than Unfam items in the semantic categorization task. In the phonological categorization task, in contrast, the effect of processing meaning‐only items (Def) emerged in the right temporal cortex, as stronger activation to Def than Unfam items. The data thus imply that the neural effects are related both to the specific task and to the type of knowledge provided, and that the neural implementation of phonological and semantic analysis is fundamentally different in nature. Behaviorally, the phonological and semantic learning processes were clearly different as well: item usage was mastered significantly more rapidly than names, regardless of whether one or both types of information were provided for an item.

The left‐hemisphere effect to Name items in semantic categorization appeared to be a diluted version of that observed in the phonological categorization task. Therefore, a likely interpretation is that the search for item name was not completely prevented in semantic categorization. Activation of the right temporal cortex was enhanced in phonological categorization when either semantic or phonological information was available for the new item. An apparent explanation would be that the definitions of the Def items doubled as “names” on which the subjects might have tried to perform phonological categorization. In that case, however, one would have expected marked activation to Def items (> Unfam) also in the left hemisphere where name learning (Name, NameDef) had a strong effect. Furthermore, behavioral data indicated that Def items were never identified as targets in the phonological categorization task. Thus, it may be worthwhile to consider alternative explanations.

The right hemisphere is thought to become markedly involved only when a language task is particularly challenging, such as in bilingual language processing [Evans et al., 2002; Stowe et al., 2005] or when analyzing ambiguous semantic information [Jung‐Beeman, 2005; Stowe et al., 2005]. Our data thus suggest that, in contrast to analysis of phonological information with robust neural correlates, access to semantic knowledge may not be readily detectable as an isolated phenomenon but neural signatures reflecting the influence of semantic information may appear when the task becomes particularly demanding, such as when accessing and evaluating novel lexical entries. It is possible that the right‐hemisphere effect reflects an unspecific response to any item to which some new information has been attached, representing activation that would be actively inhibited in normal, effortless language processing. Inhibition of the right hemisphere by the language‐dominant left hemisphere is likely to occur quite rapidly for the Semcat task but, because of the additional processing steps required for phonological categorization, it should take longer for the Phoncat task, thus allowing more time for build‐up of right‐hemisphere activation; the present findings support this possible interpretation.

In this study, name learning effects were concentrated in the temporal cortex, whereas two earlier MEG studies on lexical learning that used a similar training procedure and picture naming in healthy [Cornelissen et al., 2004] and anomic participants [Cornelissen et al., 2003] found learning effects in the left inferior parietal cortex instead. The apparent discrepancy probably stems from details of the training procedure. In those earlier studies, participants were specifically instructed to learn the item names and only knowledge of the names was tested. In this study, the participants were explicitly required to learn both the names and the definitions, with acquisition of both types of information tested behaviorally, thus encouraging a broader and more realistic learning experience. Indeed, the proposed functional roles of the sustained activation of the temporal cortex and the combined influence of task and type of available information suggest that, in this study, the new phonological forms were accessed through verbal semantics. The setup of Cornelissen et al. [2003, 2004], in contrast, encouraged direct visual‐phonological associations that would have strongly engaged the phonological loop and, thus, the left inferior parietal cortex thought to serve as a phonological store [Awh et al., 1996; Baddeley et al., 1998]. Nor can the differences between the results be accounted for by the phonological/orthographic neighborhood sizes of the recently learned names, as these did not differ between our study and that of Cornelissen et al. [2004] (mean values per stimulus category ranging between 3.0 and 3.7, with no significant main effect of experiment or stimulus category, and no interaction, in a two‐way experiment × stimulus category ANOVA).

CONCLUSION

This study suggests the following: (i) New and well‐established lexical items are processed in a very similar manner at the cortical level. (ii) Left temporal long‐latency activation, found in both naming and categorization, reflects access to the phonological code, possibly via verbal semantics. (iii) Accessing newly learned semantic information had no direct effect on the brain activation patterns in the present setup. This pattern would be expected if some meaning was attached during the training phase to all object‐like pictures, or if processing of any object‐like picture always elicits some level of semantic processing. (iv) Both the behavioral learning pattern and the neurophysiological results indicate that semantic and phonological features can be implemented and accessed in fundamentally different ways.

REFERENCES

  1. Awh E,Jonides J,Smith EE,Schumacher EH,Koeppe RA,Katz S ( 1996): Dissociation of storage and rehearsal in verbal working memory: Evidence from positron emission tomography. Psychol Sci 7: 25–31. [Google Scholar]
  2. Baddeley A,Gathercole S,Papagno C ( 1998): The phonological loop as a language learning device. Psychol Rev 105: 158–173. [DOI] [PubMed] [Google Scholar]
  3. Bookheimer S ( 2002): Functional MRI of language: New approaches to understanding the cortical organization of semantic processing. Annu Rev Neurosci 25: 151–188. [DOI] [PubMed] [Google Scholar]
  4. Boucart M,Humphreys GW ( 1992): Global shape cannot be attended without object identification. J Exp Psychol Hum Percept Perform 18: 785–806. [DOI] [PubMed] [Google Scholar]
  5. Breitenstein C,Jansen A,Deppe M,Foerster A‐F,Sommer J,Wolbers T,Knecht S ( 2005): Hippocampus activity differentiates good from poor learners of a novel lexicon. Neuroimage 25: 958–968. [DOI] [PubMed] [Google Scholar]
  6. Caramazza A,Shelton JR ( 1998): Domain‐specific knowledge systems in the brain: The animate‐inanimate distinction. J Cogn Neurosci 10: 1–34. [DOI] [PubMed] [Google Scholar]
  7. Cornelissen K,Laine M,Tarkiainen A,Järvensivu T,Martin N,Salmelin R ( 2003): Adult brain plasticity elicited by anomia treatment. J Cogn Neurosci 15: 444–461. [DOI] [PubMed] [Google Scholar]
  8. Cornelissen K,Laine M,Renvall K,Saarinen T,Martin N,Salmelin R ( 2004): Learning new names for new objects: Cortical effects as measured by magnetoencephalography. Brain Lang 89: 617–622. [DOI] [PubMed] [Google Scholar]
  9. Devlin JT,Moore CJ,Mummery CJ,Gorno‐Tempini ML,Phillips JA,Noppeney U,Frackowiak RSJ,Friston KJ,Price CJ ( 2002): Anatomic constraints on cognitive theories of category specificity. Neuroimage 15: 675–685. [DOI] [PubMed] [Google Scholar]
  10. Evans J,Workman L,Mayer P,Crowley K ( 2002): Differential bilingual laterality: mythical monster found in Wales. Brain Lang 83: 291–299. [DOI] [PubMed] [Google Scholar]
  11. Glaser WR ( 1992): Picture naming. Cognition 42: 61–105. [DOI] [PubMed] [Google Scholar]
  12. Graves WW,Grabowski TJ,Mehta S,Gordon JK ( 2007): A neural signature of phonological access: Distinguishing the effects of word frequency from familiarity and length in overt picture naming. J Cogn Neurosci 19: 617–631. [DOI] [PubMed] [Google Scholar]
  13. Grill‐Spector K,Kanwisher N ( 2005): Visual recognition: As soon as you know it is there, you know what it is. Psychol Sci 16: 152–160. [DOI] [PubMed] [Google Scholar]
  14. Grönholm P,Rinne JO,Vorobyev V,Laine M ( 2005): Naming of newly learned objects: A PET activation study. Cogn Brain Res 25: 359–371. [DOI] [PubMed] [Google Scholar]
  15. Hämäläinen M,Hari R,Ilmoniemi RJ,Knuutila J,Lounasmaa OV ( 1993): Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 65: 413–497. [Google Scholar]
  16. Heckers S,Weiss AP,Alpert NM,Schacter DL ( 2002): Hippocampal and brain stem activation during word retrieval after repeated and semantic encoding. Cereb Cortex 12: 900–907. [DOI] [PubMed] [Google Scholar]
  17. Humphreys GW,Price CJ,Riddoch MJ ( 1999): From objects to names: A cognitive neuroscience approach. Psychol Res 62: 118–130. [DOI] [PubMed] [Google Scholar]
  18. Indefrey P,Levelt WJM ( 2004): The spatial and temporal signatures of word production components. Cognition 92: 101–144. [DOI] [PubMed] [Google Scholar]
  19. Jung‐Beeman M ( 2005): Bilateral brain processes for comprehending natural language. Trends Cogn Sci 9: 512–518. [DOI] [PubMed] [Google Scholar]
  20. Kuriki S,Mori T,Hirata Y ( 1999): Motor planning center for speech articulation in the normal human brain. Neuroreport 10: 765–769. [DOI] [PubMed] [Google Scholar]
  21. Laine M,Martin A ( 2006): Anomia: Theoretical and Clinical Aspects. Hove, UK: Psychology Press. [Google Scholar]
  22. Levelt WJ,Praamstra P,Meyer AS,Helenius P,Salmelin R ( 1998): An MEG study of picture naming. J Cogn Neurosci 10: 553–567. [DOI] [PubMed] [Google Scholar]
  23. Lounasmaa OV,Hämäläinen M,Hari R,Salmelin R ( 1996): Information processing in the human brain: magnetoencephalographic approach. Proc Natl Acad Sci USA 93: 8809–8815. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Löw A,Bentin S,Rockstroh B,Silberman Y,Gomolla A,Cohen R,Elbert T ( 2003): Semantic categorization in the human brain: Spatiotemporal dynamics revealed by magnetoencephalography. Psychol Sci 14: 367–372. [DOI] [PubMed] [Google Scholar]
  25. Maess B,Friederici AD,Damian M,Meyer AS,Levelt WJM ( 2002): Semantic category interference in overt picture naming: sharpening current density localization by PCA. J Cogn Neurosci 14: 455–462. [DOI] [PubMed] [Google Scholar]
  26. Martin A,Chao LL ( 2001): Semantic memory and the brain: Structure and processes. Curr Opin Neurobiol 11: 194–201. [DOI] [PubMed] [Google Scholar]
  27. Moore CJ,Price CJ ( 1999): Three distinct ventral occipitotemporal regions for reading and object naming. Neuroimage 10: 181–192. [DOI] [PubMed] [Google Scholar]
  28. Murtha S,Chertkow H,Beauregard M,Evans A ( 1999): The neural substrate of picture naming. J Cogn Neurosci 11: 399–423. [DOI] [PubMed] [Google Scholar]
  29. Oldfield RC ( 1971): The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 9: 97–113. [DOI] [PubMed] [Google Scholar]
  30. Perani D,Schnur T,Tettamanti M,Cappa SF,Fazio F ( 1999): Word and picture matching: A PET study of semantic category effects. Neuropsychologia 37: 293–306. [DOI] [PubMed] [Google Scholar]
  31. Pins D,Meyer ME,Foucher J,Humphreys G,Boucart M ( 2004): Neural correlates of implicit object identification. Neuropsychologia 42: 1247–1259. [DOI] [PubMed] [Google Scholar]
  32. Raboyeau G,Marie N,Balduyck S,Gros H,Demonet J‐F,Cardebat D ( 2004): Lexical learning of the English language: A PET study in healthy French subjects. Neuroimage 22: 1808–1818. [DOI] [PubMed] [Google Scholar]
  33. Rahman RA,van Turennout M,Levelt WJM ( 2003): Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. J Exp Psychol Learn Mem Cogn 29: 850–860. [DOI] [PubMed] [Google Scholar]
  34. Salmelin R ( 2007): Clinical neurophysiology of language: The MEG approach. Clin Neurophysiol 118: 237–254. [DOI] [PubMed] [Google Scholar]
  35. Salmelin R,Hari R,Lounasmaa OV,Sams M ( 1994): Dynamics of brain activation during picture naming. Nature 368: 463–465. [DOI] [PubMed] [Google Scholar]
  36. Salmelin R,Schnitzler A,Schmitz F,Freund HJ ( 2000): Single word reading in developmental stutterers and fluent speakers. Brain 123: 1184–1202. [DOI] [PubMed] [Google Scholar]
  37. Sandak R,Mencl WE,Frost SJ,Rueckl JG,Katz L,Moore DL,Mason SA,Fulbright RK,Constable RT,Pugh KR ( 2004): The neurobiology of adaptive learning in reading: A contrast of different training conditions. Cogn Affect Behav Neurosci 4: 67–88. [DOI] [PubMed] [Google Scholar]
  38. Schormann T,Henn S,Zilles K ( 1996): A new approach to fast elastic alignment with applications to human brains. Lecture Notes Comput Sci 1131: 337–342. [Google Scholar]
  39. Sörös P,Cornelissen K,Laine M,Salmelin R ( 2003): Naming actions and objects: Cortical dynamics in healthy adults and in an anomic patient with a dissociation in action/object naming. Neuroimage 19: 1787–1801. [DOI] [PubMed] [Google Scholar]
  40. Stowe LA,Haverkort M,Zwarts F ( 2005): Rethinking the neurological basis of language. Lingua 115: 997–1042. [Google Scholar]
  41. Tranel D,Grabowski TJ,Lyon J,Damasio H ( 2005): Naming the same entities from visual or from auditory stimulation engages similar regions of left Inferotemporal cortices. J Cogn Neurosci 17: 1293–1305. [DOI] [PubMed] [Google Scholar]
  42. Tyler LK,Stamatakis EA,Bright P,Acres K,Abdallah S,Rodd JM,Moss HE ( 2004): Processing objects at different levels of specificity. J Cogn Neurosci 16: 351–362. [DOI] [PubMed] [Google Scholar]
  43. Ullman MT ( 2004): Contributions of memory circuits to language: The declarative/procedural model. Cognition 92: 231–270. [DOI] [PubMed] [Google Scholar]
  44. Vigneau M,Beaucousin V,Herve PY,Duffau H,Crivello F,Houde O,Mazoyer B,Tzourio‐Mazoyer N ( 2006): Meta‐analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. Neuroimage 30: 1414–1432. [DOI] [PubMed] [Google Scholar]
  45. Vihla M,Laine M,Salmelin R ( 2006): Cortical dynamics of visual/semantic vs. phonological analysis in picture confrontation. Neuroimage 33: 732–738. [DOI] [PubMed] [Google Scholar]
  46. Vitali P,Abutalebi J,Tettamanti M,Rowe J,Scifo P,Fazio F,Cappa SF,Perani D ( 2005): Generating animal and tool names: An fMRI study of effective connectivity. Brain Lang 93: 32–45. [DOI] [PubMed] [Google Scholar]
  47. Weisberg J,van Turennout M,Martin A ( 2007): A neural system for learning about object function. Cereb Cortex 17: 513–521. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Woods RP,Grafton ST,Watson JDG,Sicotte NL,Mazziotta JC ( 1998): Automated image registration. II. Intersubject validation of linear and nonlinear models. J Comput‐Assisted Tomogr 22: 153–165. [DOI] [PubMed] [Google Scholar]
