Proceedings of the National Academy of Sciences of the United States of America
1998 Mar 3;95(5):2703–2708. doi: 10.1073/pnas.95.5.2703

Neural correlates of the episodic encoding of pictures and words

Cheryl L Grady 1,*, Anthony R McIntosh 1, M Natasha Rajah 1, Fergus I M Craik 1
PMCID: PMC19469  PMID: 9482951

Abstract

A striking characteristic of human memory is that pictures are remembered better than words. We examined the neural correlates of memory for pictures and words in the context of episodic memory encoding to determine material-specific differences in brain activity patterns. To do this, we used positron emission tomography to map the brain regions active during encoding of words and pictures of objects. Encoding was carried out by using three different strategies to explore possible interactions between material specificity and types of processing. Encoding of pictures resulted in greater activity in bilateral visual and medial temporal cortices than did encoding of words, whereas encoding of words was associated with increased activity in prefrontal and temporoparietal regions related to language function. Each encoding strategy was characterized by a distinctive activity pattern, but these patterns were largely the same for pictures and words. Thus, superior overall memory for pictures may be mediated by more effective and automatic engagement of areas important for visual memory, including medial temporal cortex, whereas the mechanisms underlying specific encoding strategies appear to operate similarly on pictures and words.


Humans have a remarkable ability to remember pictures. It was shown several decades ago that people can remember more than 2,000 pictures with at least 90% accuracy in recognition tests over a period of several days, even with short presentation times during learning (1). This excellent memory for pictures consistently exceeds our ability to remember words (2, 3). In addition, various manipulations that affect memory performance do so differentially for pictures and words. One such manipulation is the levels of processing effect, which is the advantage for later retrieval of more elaborate or semantic processing of stimuli during encoding (4, 5). This levels effect is greater for words than for pictures because of superior picture memory even after shallow or nonsemantic encoding (6). One theory of the mechanism underlying superior picture memory is that pictures automatically engage multiple representations and associations with other knowledge about the world, thus encouraging a more elaborate encoding than occurs with words (2, 5, 7). This theory implies that there are qualitative differences between the ways words and pictures are processed during memory.

From an evolutionary perspective, the ability to remember various aspects of one’s visual environment must be vital for survival, so it is not surprising that memory for pictorial material is particularly well developed. However, the brain mechanisms underlying this phenomenon are not well understood. Neuroimaging experiments using verbal or nonverbal materials as stimuli have suggested that there are differences in the brain areas participating in the processing of these two kinds of stimulus. For example, previous neuroimaging experiments have shown medial temporal activation during encoding of faces and other nonverbal visual stimuli (8–13), but not consistently during encoding of words (14–16). Conversely, activation of medial temporal areas has been found during word retrieval (17, 18), but not consistently during retrieval of nonverbal material (10, 11, 19, 20). A comparison of recall for words and pictures failed to find any difference between them, but because recall of the name corresponding to the picture also was required, differences between the two conditions may have been reduced (21). These results suggest differences between the functional neuroanatomy for word and picture memory, but sufficient direct comparisons are lacking. We examined the neural correlates of memory for pictures and words in the context of memory encoding to determine whether material-specific brain networks for memory could be identified. In addition, encoding was carried out under three different sets of instructions to see whether material specificity is a general property of memory or is dependent on how the material is processed.

MATERIALS AND METHODS

Twelve young right-handed subjects (six males, six females, mean age ± SD = 23.0 ± 3.5 years) participated in the experiment. An additional 12 subjects participated in a pilot experiment, and their data have been included in the behavioral analysis. The stimuli used in the experiment were concrete, high-frequency words or line drawings of familiar objects (22). All stimuli were presented on a computer monitor in black with a white background. There were three encoding tasks for both words and pictures, requiring three lists of pictures and three lists of words. All lists were matched for word frequency, word length, familiarity, and complexity of the picture regardless of whether the list was presented as words or pictures. For two of the encoding conditions, subjects were instructed to make certain decisions about the stimuli, but were not explicitly asked to remember them; memory for items presented during these conditions therefore was incidental. One incidental condition involved nonsemantic or shallow processing of the stimuli (size of picture or case of letters), and the other required semantic or deep processing of the stimuli (living/nonliving decision). These two conditions were chosen because previous work has shown that information that has been processed during deep encoding, i.e., with greater elaboration or by relating it via semantic associations to other knowledge, is remembered better than information processed in a shallow fashion, e.g., on a purely perceptual basis (4, 5). During the third condition, intentional learning, subjects were instructed to memorize the pictures or words and were told that they would be tested on these items. After the scans, subjects completed two recognition memory tasks, one for stimuli encoded as words and one for stimuli encoded as pictures. These tasks consisted of 10 targets from each of the three encoding conditions for words or pictures and 30 distracters (i.e., 60 items total). All stimuli in the recognition tasks were presented as words, regardless of whether they originally were presented as words or pictures, to prevent ceiling effects for picture recognition.
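The composition of the recognition tests can be made concrete with a short sketch; the item names below are placeholders, not the actual stimuli:

```python
import random

# Hypothetical item pools: 10 targets from each of the three encoding
# conditions plus 30 distracters, for 60 test items in total.
targets = {cond: [f"{cond}_item{i}" for i in range(10)]
           for cond in ("nonsemantic", "semantic", "intentional")}
distracters = [f"new_item{i}" for i in range(30)]

test_list = [item for items in targets.values() for item in items] + distracters
random.shuffle(test_list)  # all items shown as words, in random order
assert len(test_list) == 60
```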

Six positron emission tomography scans, with injections of 40 mCi of H₂¹⁵O each and separated by 11 min, were performed on all subjects while they were encoding the stimuli described above. Scans were performed on a GEMS PC2048-15B tomograph, which has a reconstructed resolution of 6.5 mm in both transverse and axial planes. This tomograph allows 15 planes, separated by 6.5 mm (center to center), to be acquired simultaneously. Emission data were corrected for attenuation by means of a transmission scan obtained at the same levels as the emission scans. Head movement during the scans was minimized by using a thermoplastic mask that was molded to each subject’s head and attached to the scanner bed. Each task started 20 sec before isotope injection and continued throughout the 1-min scanning period.

For the six scans, the three lists were assigned to the three encoding conditions in a counterbalanced fashion, and the order of conditions also was counterbalanced across subjects. During all scans subjects pressed a button with the right index or middle finger to either indicate their decisions about the stimulus or, during the intentional learning condition, to simply make a motor response.
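One hypothetical counterbalancing scheme consistent with this description is sketched below (the paper does not specify the exact rotation; list and condition names are placeholders):

```python
from itertools import permutations

lists = ["list_A", "list_B", "list_C"]
conditions = ["nonsemantic", "semantic", "intentional"]

# Cycle through all six orderings of list-to-condition assignments across
# the 12 subjects, so each ordering is used twice.
orders = list(permutations(lists))
for subject in range(12):
    assignment = dict(zip(conditions, orders[subject % len(orders)]))
    print(f"subject {subject + 1}: {assignment}")
```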

Behavioral data were analyzed by using a repeated measures ANOVA with stimulus type and encoding condition as the repeated measures. Positron emission tomography scans were registered by using AIR (23), and spatially normalized (to the Talairach and Tournoux atlas coordinate system, ref. 24) and smoothed (to 10 mm) by using SPM95 (25). Ratios of regional cerebral blood flow (rCBF) to global cerebral blood flow (CBF) within each scan for each subject were computed and analyzed by using partial least squares (PLS) (26) to identify spatially distributed patterns of brain activity related to the different task conditions. PLS is a multivariate analysis that operates on the covariance between brain voxels and the experimental design to identify a new set of variables (so-called latent variables or LVs) that optimally relate the two sets of measurements. We used PLS to analyze the covariance of brain voxel values with orthonormal contrasts coding for the experimental design. The outcome is a set of mutually independent spatial activity patterns depicting the brain regions that, as a whole, show the strongest relation to (i.e., are covariant with) the contrasts. These patterns are displayed as singular images (Fig. 1) that show the brain areas covarying with the contrast or contrasts that contribute to each LV. Each brain voxel has a weight, known as a salience, that is proportional to these covariances; multiplying the rCBF value in each voxel for each subject by that voxel’s salience and summing across all voxels gives a score for each subject on a given LV. The significance of each LV as a whole was assessed by using a permutation test (26, 27). Five LVs were identified in this experiment, all of which were significant by permutation test (P < 0.001). The first three LVs identified brain regions associated with the main effects of stimulus type and encoding condition, and the fourth and fifth LVs identified interactions between stimulus type and encoding condition. Because saliences are derived in a single analytic step, no correction for multiple comparisons of the sort done for univariate image analyses is required.
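As a rough illustration of this analysis, the following NumPy sketch reconstructs the core PLS computation from the description above (it is an assumed reading of ref. 26, not the authors’ code): X holds the rCBF/global-CBF ratios (one row per scan, one column per voxel), C holds the orthonormal design contrasts, the LVs come from a singular value decomposition of their covariance, and significance is assessed by permuting scans across design rows.

```python
import numpy as np

def pls_design(X, C, n_perm=1000, rng=None):
    """PLS of brain activity against design contrasts (sketch).

    X : (n_scans, n_voxels) rCBF/global-CBF ratios.
    C : (n_scans, k) orthonormal contrasts coding the design.
    """
    rng = np.random.default_rng(rng)
    Xc = X - X.mean(axis=0)                 # center each voxel across scans
    U, s, Vt = np.linalg.svd(C.T @ Xc, full_matrices=False)
    saliences = Vt                          # one spatial pattern (LV) per row
    scores = Xc @ Vt.T                      # per-scan score on each LV
    # Permutation test: randomly reassign scans to design rows and count how
    # often the permuted singular values reach the observed ones.
    exceed = np.zeros_like(s)
    for _ in range(n_perm):
        perm = rng.permutation(X.shape[0])
        s_perm = np.linalg.svd(C.T @ Xc[perm], compute_uv=False)
        exceed += s_perm >= s
    p_values = (exceed + 1) / (n_perm + 1)
    return saliences, s, scores, p_values
```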

Figure 1.

Voxels shown in color are those that best characterize the patterns of activity identified by LVs 1–3 from the PLS analysis (see Materials and Methods). Areas are displayed on a standard magnetic resonance image from −28 mm to +48 mm relative to the anterior commissure-posterior commissure (AC-PC) line (in 4-mm increments). Numbers shown on the left indicate the level in mm of the leftmost image in each row relative to the AC-PC line. The right side of the image represents the right side of the brain. (A) Brain areas with increased rCBF during encoding of pictures are shown in yellow and red, and areas with increased activity during encoding of words are shown in blue (LV1). (B) Brain areas with increased rCBF during semantic encoding, compared with the other two conditions (LV2), are shown in red. (C) Brain areas with increased rCBF during intentional learning, compared with the other two conditions (LV3), are shown in red. Selected maxima from these regions are shown in Table 2.

In addition to the permutation test, a second and independent step in PLS analysis is to determine the stability of the saliences for the brain voxels characterizing each pattern identified by the LVs. To do this, all saliences were submitted to a bootstrap estimation of the standard errors (28, 29). This estimation involves randomly resampling subjects, with replacement, and computing the standard error of the saliences after a sufficient number of bootstrap samples. Peak voxels with a salience/SE ratio ≥ 2.0 were considered stable. Local maxima for the brain areas with stable saliences on each LV were defined as the voxel with a salience/SE ratio higher than any other voxel in a 2-cm cube centered on that voxel. Locations of these maxima are reported in terms of brain region, or gyrus, and Brodmann area (BA) as defined in the Talairach and Tournoux atlas. Selected local maxima are shown in Tables 2 and 3, with the results of corresponding contrasts from SPM95 (i.e., main effects and interactions) as a comparison. Univariate tests were performed on selected maxima as an adjunct to the PLS analysis to aid in the interpretation of interaction effects, not as a test of significance. The inferential component of our analysis comes from the permutation test and the reliability assessed through the bootstrap estimates.
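A companion sketch of the bootstrap step, under the same assumptions as the PLS sketch above (with a subject label per scan so that whole subjects are resampled together); the sign of each resampled pattern is aligned with the observed one, since the sign of a singular vector is arbitrary:

```python
import numpy as np

def salience_stability(X, C, subject_ids, n_boot=500, rng=None):
    """Bootstrap salience/SE ratios for the first LV (sketch)."""
    rng = np.random.default_rng(rng)
    subjects = np.unique(subject_ids)

    def first_lv(rows):
        Xc = X[rows] - X[rows].mean(axis=0)
        return np.linalg.svd(C[rows].T @ Xc, full_matrices=False)[2][0]

    observed = first_lv(np.arange(X.shape[0]))
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        # Resample subjects with replacement; each subject's scans move together.
        sample = rng.choice(subjects, size=subjects.size, replace=True)
        rows = np.concatenate([np.flatnonzero(subject_ids == s) for s in sample])
        sal = first_lv(rows)
        if sal @ observed < 0:   # align arbitrary SVD sign with observed pattern
            sal = -sal
        boots[b] = sal
    se = boots.std(axis=0, ddof=1)
    return observed / se         # voxels with |ratio| >= 2.0 were called stable
```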

Table 2.

Selected cortical areas with differential activity during encoding: Main effects

Region, gyrus  Hem  BA  PLS (X, Y, Z)  SPM (X, Y, Z)  Z score
Pictures > words (LV 1)
Extrastriate, GL Right 18 10 −82 −12 10 −90 −12 5.4
Extrastriate, GOm Right 19 30 −82 12 34 −84 4 6.2
Temporal, GH Right 36 28 −22 −28 36 −30 −24 4.6
Temporal, GH Left 36 −16 −16 −24 −16 −14 −28 3.0
Words > pictures (LV 1)
Prefrontal, GFm Right 8/9 26 38 32
Prefrontal, GFm Left 9 −16 48 16 −14 48 20 3.3
Temporal, GTs Right 41 46 −12 4 46 −12 0 4.5
Temporal, GTm Left 21 −52 −38 4 −48 −38 4 3.7
Parietal, LPi Left 39/40 −40 −50 24 −52 −52 16 3.3
Semantic > nonsemantic and intentional learning (LV 2)
Prefrontal, GFd/GC Left 10/32 −8 44 −4 −6 36 −4 4.2
Prefrontal, GFs Left 9 −10 56 32 −10 48 36 4.1
Insula/GH Left −30 −20 −4 −36 −24 −8 4.0
Extrastriate, GF Left 37 −36 −60 0 −50 −46 −8 3.8
Extrastriate, GL Right 18 10 −70 0 10 −76 0 2.6
Intentional learning > nonsemantic and semantic (LV 3)
Prefrontal, GFm Left 10 −30 54 16 −30 54 16 3.0
Prefrontal, GFm Left 45 −40 32 20 −30 38 28 4.2
Premotor, GPrC Left 6 −34 −2 40 −34 −6 40 4.6
Extrastriate, GF Right 37 32 −58 −20

Coordinates and Brodmann’s areas from Talairach and Tournoux (24). X (right/left), negative values are in the left hemisphere; Y (anterior/posterior), negative values are posterior to the zero point (located at the anterior commissure); Z (superior/inferior), negative values are inferior to the plane defined by the anterior and posterior commissures. Maxima from the PLS analysis and from SPM contrasts corresponding to the effect identified on the LV are presented (Z scores are from SPM). Hem, hemisphere; BA, Brodmann’s area; GF, fusiform gyrus; GL, lingual gyrus; GOm, middle occipital gyrus; GH, parahippocampal gyrus; GF(s,m,i,d), frontal gyrus (superior, middle, inferior, medial); GOb, orbitofrontal gyrus; GC, cingulate gyrus; GPrC, precentral gyrus; GPoC, postcentral gyrus; GT(s,m,i), temporal gyrus (superior, middle, inferior); GTT, transverse temporal gyrus; LPi, inferior parietal. 
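The coordinate conventions in this footnote amount to a simple sign test, shown here purely for illustration:

```python
def describe_talairach(x, y, z):
    """Translate Talairach coordinate signs per the table footnote."""
    hem = "left" if x < 0 else "right"
    ap = "posterior" if y < 0 else "anterior"
    si = "inferior" if z < 0 else "superior"
    return f"{hem} hemisphere, {ap} to the anterior commissure, {si} to the AC-PC plane"

print(describe_talairach(10, -82, -12))  # right extrastriate maximum from Table 2
```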

Table 3.

Selected cortical areas with differential activity during encoding: Interactions

Region, gyrus  Hem  BA  PLS (X, Y, Z)  SPM (X, Y, Z)  Z score
Words NS > words LN, opposite effect in pictures (LV 4)
Extrastriate, GL Right 18 10 −74 −8 14 −76 −8 2.5
Extrastriate, GL Left 18 −20 −82 0 −38 −84 4 3.2
Extrastriate, GTi Left 37 −54 −68 0 −48 −68 −8 2.6
Temporal, GTs Right 22 40 −26 8 48 −46 4 2.8
Temporal, GH Right 28 −22 −8
Words LN > words NS, opposite effect in pictures (LV 4)
Prefrontal, GFi Right 45 32 18 4 28 16 0 4.5
Prefrontal, GFm Left 9 −40 16 28
Midbrain/GH Left 36 −12 −30 −20 −18 −26 −12 3.3
Temporal, GTT Right 41 28 −32 12 32 −36 12 2.8
Words NS & LN > words SM, opposite effect in pictures (LV 5)
Prefrontal, GFs Right 9 20 52 28 14 30 24 3.1
Prefrontal, GFm Left 8/9 −32 32 36 −26 30 32 3.0
Premotor, GPrC Right 6 42 4 16 36 −6 4 2.6
Motor, GPrC Left 4 −56 0 12 −56 4 8 3.1
Words SM > words NS & LN, opposite effect in pictures (LV 5)
Prefrontal, GOb Left 11 −16 48 −8 −10 28 −12 4.0
Prefrontal, GFs Left 9 −8 50 36 −12 52 16 2.6
Prefrontal, GFs Left 8 −18 38 44 −6 32 48 2.6

NS, nonsemantic; LN, intentional learning; SM, semantic. Maxima and Z scores from SPM95 are from contrasts denoting stimulus × encoding interactions. Other abbreviations are the same as in Table 2.

RESULTS

Pictures were remembered better than words overall (Table 1), and both semantic processing and intentional learning resulted in better recognition than nonsemantic encoding. In addition, there was a significant interaction of stimulus type and encoding strategy on recognition performance, caused by a larger difference between memory for pictures and words during the nonsemantic condition.

Table 1.

Recognition performance for pictures and words

Encoding condition Pictures Words
Incidental nonsemantic 64.8 ± 3.5 46.1 ± 4.9
Incidental semantic 73.0 ± 3.4 73.5 ± 4.6
Intentional learning 83.9 ± 2.6 76.9 ± 4.6

Values are percent of “old” items correctly identified (i.e., proportion of hits) expressed as mean ± SE. N = 23 [12 subjects from pilot study and 11 from positron emission tomography study (one positron emission tomography subject had missing data)]. There was a significant main effect of stimulus type (F = 10.1, P < 0.004), a significant main effect of encoding condition (F = 39.4, P < 0.0001), and a significant interaction of stimulus type and encoding (F = 4.2, P < 0.025). 
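An analysis of this kind could be reproduced with a two-way repeated measures ANOVA in statsmodels; the sketch below assumes a hypothetical long-format file with one row per subject, stimulus type, and encoding condition:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical columns: subject, stimulus (picture/word),
# encoding (nonsemantic/semantic/intentional), hits (% correct).
df = pd.read_csv("recognition_hits.csv")

# Stimulus type and encoding condition as within-subject factors,
# as described in Materials and Methods.
result = AnovaRM(df, depvar="hits", subject="subject",
                 within=["stimulus", "encoding"]).fit()
print(result)  # F and P for the two main effects and their interaction
```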

Three patterns of rCBF activity predominantly related to the main effects of stimulus type and encoding condition were identified. One pattern distinguished encoding of pictures from that of words, one distinguished semantic encoding from nonsemantic processing and intentional learning, and a third dissociated intentional learning from the other two conditions. There was greater activation during encoding of pictures, compared with words, in a widespread area of bilateral ventral and dorsal extrastriate cortex, and in bilateral medial temporal cortex, particularly the ventral portion (Fig. 1A and Table 2). In both of these regions the increase in rCBF was more extensive in the right hemisphere. In extrastriate cortex, rCBF was increased during picture encoding over word encoding equally across all three encoding strategy conditions, whereas in medial temporal cortex this stimulus-specific difference was greater during the nonsemantic processing condition (Fig. 2 A and C). Encoding of words, on the other hand, was associated with greater rCBF across all conditions in bilateral prefrontal cortex and anterior portions of middle temporal cortex (Fig. 1A and Table 2). In contrast to the rCBF increases during picture encoding, the increases in prefrontal and temporal cortices during word encoding were more extensive in the left hemisphere. Increased rCBF also was found in left parietal cortex during encoding of words.

Figure 2.

Ratios of rCBF to whole brain CBF in areas of the brain that showed interactions between stimulus type and encoding condition. The medial temporal regions from LV1 (A and C, coordinates shown in parentheses) showed greater rCBF during picture encoding compared with word encoding (P < 0.001 for the right hemisphere and P < 0.02 for the left). These regions also had condition × stimulus interactions by univariate test (both P < 0.05), indicating a larger difference between pictures and words in the nonsemantic condition. B and D show medial temporal regions from LV4 that showed stimulus × encoding interactions involving the nonsemantic and intentional learning conditions (univariate interaction for right hemisphere P = 0.02; left hemisphere P = 0.07). E and F show regions from LV5 with stimulus × encoding interactions involving nonsemantic and semantic conditions (univariate interaction for left motor region, P = 0.01; interaction for left orbitofrontal region, P = 0.006). Additional regions with stimulus × encoding interactions are shown in Table 3. nonsem, nonsemantic encoding; sem, semantic encoding; learn, intentional learning.

The brain regions with increased activity during the semantic encoding condition, compared with the other two conditions, were mainly in the left hemisphere. These regions included ventral and dorsal portions of medial prefrontal cortex, and an area that included both the medial temporal region and the posterior portion of the insula (Fig. 1B and Table 2). Semantic encoding also led to an increase of rCBF in bilateral posterior extrastriate cortex. This pattern of rCBF increase during semantic encoding was found for both pictures and words. Increased rCBF during intentional learning, compared with both incidental encoding conditions, also was seen in left prefrontal cortex, but in left ventrolateral prefrontal cortex, in contrast to the medial and anterior areas activated during semantic encoding (Fig. 1C and Table 2). In addition, increased rCBF was found in left premotor cortex and caudate nucleus, and in bilateral ventral extrastriate cortex during intentional learning. As was the case with semantic encoding, the rCBF pattern seen in these regions during intentional learning characterized both pictures and words.

There were a few brain regions that showed an interaction between stimulus type and encoding condition (Table 3), particularly the medial temporal regions. In addition to the difference already noted in these areas during nonsemantic encoding, there was another region in right medial temporal cortex that showed an interaction involving the nonsemantic and intentional learning conditions (identified on LV4). This interaction was caused by sustained activity in this region across the picture encoding conditions, with a reduction in activity during intentional learning of words compared with the nonsemantic condition (Fig. 2B). There also was an area in the left medial temporal cortex that showed the opposite interaction, consisting of a larger increase in activity during learning of words, compared with the nonsemantic condition (Fig. 2D). Finally, there was an interaction in left motor cortex (identified on LV5) caused by an increase in activity in the semantic condition for pictures, compared with the nonsemantic condition, with the opposite pattern for words (Fig. 2E). Conversely, there was an increase in activity during semantic encoding in left orbitofrontal cortex, but only for words (Fig. 2F).

DISCUSSION

The results of this experiment address three questions about the neurobiology of memory, the first of which is why pictures are remembered better than words. The behavioral results showed a general difference in recognition accuracy between pictures and words that was greatest on those items that had been processed via nonsemantic encoding. The brain activity measures identified regions that showed a general pattern of differences between pictures and words, as well as regions that had differences mainly during nonsemantic processing. Increased rCBF during the picture-encoding conditions was found in bilateral extrastriate and ventral medial temporal cortices. Extrastriate cortex is activated during the visual perception of both verbal and nonverbal material (30–33) and may have been more active during picture encoding because the pictures, although simple line drawings, were probably more visually complex than the words. This difference in visual characteristics could have influenced medial temporal activity as well. On the other hand, medial temporal cortex has long been known from lesion experiments to be important for episodic memory (34–38) and may be particularly important for encoding new information (39). The greater activity in medial temporal cortex during encoding of pictures compared with words suggests that pictures more directly or effectively engage these memory-related regions in the brain, thereby resulting in superior recollection of these items. This effect may be related in part to distinctiveness or novelty, which has been shown to activate medial temporal cortex (13), considering that the pictures, even though they were of familiar objects, might be more novel than familiar words. In addition, because better memory for pictures and activation of medial temporal cortex both were more evident in the nonsemantic encoding condition, engagement of memory networks by pictures may be automatic and result in more durable memory traces (40). Therefore, this type of information is apparently better represented and more readily accessible to retrieval mechanisms, regardless of the ostensible encoding task. Words, on the other hand, activate left hemisphere regions previously shown to be involved in language tasks, including left frontal, temporal, and parietal regions (30, 41, 42). This result implies that encoding of words primarily invokes a distributed system of regions involved in linguistic processing that is less able to support later retrieval from episodic memory. It also should be noted that, in addition to any advantages afforded to pictures during the initial processing, material specificity also is likely to be found during retrieval. That is, in real-world situations, part of the reason for superior picture memory probably lies in the specificity of the match between internal representations of the picture and the picture itself when it is re-encountered and recognized.

The second question is whether different encoding strategies lead to the participation of different brain areas. Performance on the recognition tests showed essentially equivalent memory for pictures and words after either semantic processing or intentional learning. However, the brain activity patterns during these two conditions were quite different, showing differential activity primarily in prefrontal and extrastriate cortices. Previous neuroimaging experiments have shown left prefrontal activation during both semantic processing and intentional learning that is distinct from right prefrontal activation during memory retrieval, leading to the development of the HERA (hemispheric encoding/retrieval asymmetry) model (43, 44). In our experiment, semantic processing was accompanied by increased activity in ventromedial and dorsomedial regions of left prefrontal cortex that have shown increased activity during semantic or language processing in other experiments (45–49). Intentional learning showed increased rCBF in different parts of left prefrontal cortex, primarily in ventrolateral regions previously noted to be active during intentional learning (15, 16) and episodic retrieval (13, 50). Thus, although both semantic processing and intentional learning undoubtedly involve some sort of elaborative processing that preferentially engages left prefrontal cortex, our results show a dissociation between the parts of left prefrontal cortex that are involved in these two strategies. Extrastriate cortex also showed differential activity during semantic and intentional encoding. Semantic encoding activated posterior extrastriate areas similar to regions activated during silent naming of stimuli like the ones used here (51). In contrast, intentional learning activated more ventral portions of extrastriate cortex, consistent with a study that reported activation of left ventral occipitotemporal cortex during intentional learning of faces (10). Thus, there is now converging evidence to support a differential response of both prefrontal and extrastriate cortices during encoding, depending on the specific encoding strategy that is used. This finding, together with the behavioral evidence, shows that different brain mechanisms underlying different encoding strategies can provide equally effective support for memory processing.

A final issue addressed by this experiment is whether there is an interaction between the type of stimulus that is encoded and the strategy used for encoding, i.e., are the brain areas active during the different encoding conditions the same or different for pictures and words? The behavioral results show a clear interaction in that the performance differences are largest during nonsemantic processing. The brain activity patterns show something of this interaction because there are ventral medial temporal areas where the rCBF difference is also largest during the nonsemantic condition (discussed above). However, during semantic encoding and intentional learning, many brain areas show a similar encoding-related change in activity for pictures and words, indicating that in these areas, these two encoding mechanisms may be operating in the same way regardless of the nature of the incoming stimulus. This pattern of brain activity is reflected in the recognition results, which are similar for pictures and words during semantic encoding and intentional learning. Nevertheless, the patterns are not identical. Activity in medial temporal cortex appears to be particularly sensitive to both stimulus type and encoding condition. The right hemisphere showed sustained activity for pictures and more variable activity for words (depending on the encoding condition), whereas the left hemisphere had increasing activity with deeper processing of words and a more variable pattern for encoding of pictures. This asymmetry is consistent with accounts of the differential effects of right vs. left hemisphere lesions in medial temporal cortex on nonverbal and verbal memory, respectively (e.g., refs. 52 and 53). It also is consistent with activation of left medial temporal structures during semantic encoding of words (14, 54) or retrieval of semantically encoded words (17), and activation of right medial temporal cortex during encoding of faces (10). In addition, although left medial prefrontal cortex is active during semantic processing of both pictures and words, the ventral portion of this area is involved to a greater extent during word encoding. This finding supports other studies that reported involvement of left ventral prefrontal cortex in language processing (42) and verbal retrieval (50).

Our ability to remember pictures better than words, particularly in situations that provide less than adequate support for later retrieval, thus appears to be mediated by medial temporal and extrastriate cortices, which have strong interconnections with one another (55, 56). Exactly what benefit this activation of visual memory areas provides to pictures is unclear. The theory mentioned above suggests that pictures induce a more elaborate or associative encoding than occurs with words. If one assumes that this process of making associations in a certain context is carried out by medial temporal cortex (57, 58), then our results would provide support for this hypothesis. Regardless of the specific mechanism, our results indicate which brain regions may be critical for superior picture memory and provide direction for future research on which aspect of pictures is necessary and sufficient for preferential engagement of these memory-related areas.

Acknowledgments

We thank the staff of the PET Centre at the Clarke Institute of Psychiatry for their technical assistance in conducting this experiment. This work was supported by a grant from the Ontario Mental Health Foundation.

ABBREVIATIONS

rCBF, regional cerebral blood flow
CBF, cerebral blood flow
PLS, partial least squares
LV, latent variable

References

1. Standing L, Conezio J, Haber R N. Psychon Sci. 1970;19:73–74.
2. Paivio A. Imagery and Verbal Processes. New York: Holt, Rinehart, and Winston; 1971.
3. Shepard R N. J Verb Learn Verb Behav. 1967;6:156–163.
4. Craik F I M, Lockhart R S. J Verb Learn Verb Behav. 1972;11:671–684.
5. Craik F I M, Tulving E. J Exp Psychol Gen. 1975;104:268–294.
6. Baddeley A D. Psychol Rev. 1978;85:139–152.
7. Nelson D L. In: Levels of Processing in Human Memory. Cermak L S, Craik F I M, editors. Hillsdale, NJ: Lawrence Erlbaum; 1979. pp. 45–76.
8. Gabrieli J D E, Brewer J B, Desmond J E, Glover G H. Science. 1997;276:264–266. doi: 10.1126/science.276.5310.264.
9. Grady C L, McIntosh A R, Horwitz B, Maisog J M, Ungerleider L G, Mentis M J, Pietrini P, Schapiro M B, Haxby J V. Science. 1995;269:218–221. doi: 10.1126/science.7618082.
10. Haxby J V, Ungerleider L G, Horwitz B, Maisog J M, Rapoport S I, Grady C L. Proc Natl Acad Sci USA. 1996;93:922–927. doi: 10.1073/pnas.93.2.922.
11. Roland P E, Gulyas B. Cereb Cortex. 1995;5:79–93. doi: 10.1093/cercor/5.1.79.
12. Stern C E, Corkin S, Gonzalez R G, Guimaraes A R, Baker J R, Jennings P J, Carr C A, Sugiura R M, Vedantham V, Rosen B R. Proc Natl Acad Sci USA. 1996;93:8660–8665. doi: 10.1073/pnas.93.16.8660.
13. Tulving E, Markowitsch H J, Craik F I M, Habib R, Houle S. Cereb Cortex. 1996;6:71–79. doi: 10.1093/cercor/6.1.71.
14. Binder J R, Bellgowan P S, Frost J A, Hammeke T A, Springer J A, Rao S M, Prieto T, O’Reilly W, Cox R W. NeuroImage. 1996;3:S530 (abstr.).
15. Shallice T, Fletcher P, Frith C D, Grasby P, Frackowiak R S J, Dolan R J. Nature (London). 1994;368:633–635. doi: 10.1038/368633a0.
16. Kapur S, Craik F I M, Cabeza R, Jones C, Houle S, McIntosh A R, Tulving E. Cognit Brain Res. 1996;4:243–249. doi: 10.1016/s0926-6410(96)00058-4.
17. Nyberg L, McIntosh A R, Houle S, Nilsson L-G, Tulving E. Nature (London). 1996;380:715–717. doi: 10.1038/380715a0.
18. Schacter D L, Alpert N M, Savage C R, Rauch S L, Albert M S. Proc Natl Acad Sci USA. 1996;93:321–325. doi: 10.1073/pnas.93.1.321.
19. Klingberg T, Roland P E, Kawashima R. NeuroReport. 1994;6:57–60. doi: 10.1097/00001756-199412300-00016.
20. Schacter D L, Reiman E, Uecker A, Polster M R, Yun L S, Cooper L A. Nature (London). 1995;376:587–590. doi: 10.1038/376587a0.
21. Buckner R L, Raichle M E, Miezin F M, Petersen S E. J Neurosci. 1996;16:6219–6235. doi: 10.1523/JNEUROSCI.16-19-06219.1996.
22. Snodgrass J G, Vanderwart M. J Exp Psychol Hum Learn Mem. 1980;6:174–215. doi: 10.1037//0278-7393.6.2.174.
23. Woods R P, Mazziotta J C, Cherry S R. J Comput Assist Tomogr. 1993;17:536–546. doi: 10.1097/00004728-199307000-00004.
24. Talairach J, Tournoux P. Co-Planar Stereotaxic Atlas of the Human Brain. New York: Thieme; 1988.
25. Frackowiak R S, Friston K J. J Anat. 1994;184:211–225.
26. McIntosh A R, Bookstein F L, Haxby J V, Grady C L. NeuroImage. 1996;3:143–157. doi: 10.1006/nimg.1996.0016.
27. Edgington E S. Randomization Tests. New York: Dekker; 1980.
28. Efron B, Tibshirani R. Stat Sci. 1986;1:54–77.
29. Sampson P D, Streissguth A P, Barr H M, Bookstein F L. Neurotox Teratol. 1989;11:477–491. doi: 10.1016/0892-0362(89)90025-1.
30. Petersen S E, Fox P T, Snyder A Z, Raichle M E. Science. 1990;249:1041–1044. doi: 10.1126/science.2396097.
31. Haxby J V, Grady C L, Horwitz B, Ungerleider L G, Mishkin M, Carson R E, Herscovitch P, Schapiro M B, Rapoport S I. Proc Natl Acad Sci USA. 1991;88:1621–1625. doi: 10.1073/pnas.88.5.1621.
32. Zeki S, Watson J D G, Lueck C J, Friston K J, Kennard C, Frackowiak R S J. J Neurosci. 1991;11:641–649. doi: 10.1523/JNEUROSCI.11-03-00641.1991.
33. Sergent J, Ohta S, MacDonald B. Brain. 1992;115:15–36. doi: 10.1093/brain/115.1.15.
34. Scoville W B, Milner B. J Neurol Neurosurg Psychiatry. 1957;20:11–21. doi: 10.1136/jnnp.20.1.11.
35. Zola-Morgan S, Squire L R, Amaral D G. J Neurosci. 1986;6:2950–2967. doi: 10.1523/JNEUROSCI.06-10-02950.1986.
36. Mishkin M. Nature (London). 1978;273:297–298. doi: 10.1038/273297a0.
37. Aggleton J P, Hunt P R, Rawlins J N P. Behav Brain Res. 1986;19:133–146. doi: 10.1016/0166-4328(86)90011-2.
38. Sutherland R W, McDonald R J. Behav Brain Res. 1990;37:57–79. doi: 10.1016/0166-4328(90)90072-m.
39. Squire L R. Psychol Rev. 1992;99:195–231. doi: 10.1037/0033-295x.99.2.195.
40. Moscovitch M. J Cognit Neurosci. 1992;4:257–267. doi: 10.1162/jocn.1992.4.3.257.
41. Bookheimer S Y, Zeffiro T A, Blaxton T, Gaillard W, Theodore W. Hum Brain Map. 1995;3:93–106.
42. Price C J, Wise R J S, Watson J D G, Patterson K, Howard D, Frackowiak R S J. Brain. 1994;117:1255–1269. doi: 10.1093/brain/117.6.1255.
43. Nyberg L, Cabeza R, Tulving E. Psychonom Bull Rev. 1996;3:135–148. doi: 10.3758/BF03212412.
44. Tulving E, Kapur S, Craik F I M, Moscovitch M, Houle S. Proc Natl Acad Sci USA. 1994;91:2016–2020. doi: 10.1073/pnas.91.6.2016.
45. Bottini G, Corcoran R, Sterzi R, Paulesu E, Schenone P, Scarpa P, Frackowiak R S J, Frith C D. Brain. 1994;117:1241–1253. doi: 10.1093/brain/117.6.1241.
46. Buckner R L, Raichle M E, Petersen S E. J Neurophysiol. 1995;74:2163–2173. doi: 10.1152/jn.1995.74.5.2163.
47. Jennings J M, McIntosh A R, Kapur S, Tulving E, Houle S. NeuroImage. 1997;5:229–239. doi: 10.1006/nimg.1997.0257.
48. Kapur S, Rose R, Liddle P F, Zipursky R B, Brown G M, Stuss D, Houle S, Tulving E. NeuroReport. 1994;5:2193–2196. doi: 10.1097/00001756-199410270-00051.
49. Martin A, Haxby J V, Lalonde F M, Wiggs C L, Ungerleider L G. Science. 1995;270:102–105. doi: 10.1126/science.270.5233.102.
50. Blaxton T A, Bookheimer S Y, Zeffiro T A, Figlozzi C M, Gaillard W D, Theodore W H. Can J Exp Psychol. 1996;50:42–56. doi: 10.1037/1196-1961.50.1.42.
51. Martin A, Wiggs C L, Ungerleider L G, Haxby J V. Nature (London). 1996;379:649–652. doi: 10.1038/379649a0.
52. Milner B. In: Nerve Cells, Transmitters and Behavior. Levi-Montalcini R, editor. Vatican City: Academia Scientarium; 1980. pp. 601–625.
53. Ojemann G, Dodrill C. J Neurosurg. 1985;62:101–107. doi: 10.3171/jns.1985.62.1.0101.
54. Mayes A R, Gooding P, Gregory L, Hunkin N M, Nunn J A, Van Eijk R, Williams S C R, Brammer M, Bullmore E. NeuroImage. 1997;5:S624 (abstr.).
55. Ungerleider L G, Mishkin M. In: Analysis of Visual Behavior. Ingle D J, Goodale M A, Mansfield R J W, editors. Cambridge, MA: MIT Press; 1982. pp. 549–586.
56. Suzuki W A, Amaral D G. J Comp Neurol. 1994;350:497–533. doi: 10.1002/cne.903500402.
57. Eichenbaum H, Otto T, Cohen N J. Behav Brain Sci. 1994;17:449–518.
58. Brown M W. In: Learning and Computational Neuroscience: Foundations of Adaptive Networks. Gabriel M, Moore J, editors. Cambridge, MA: MIT Press; 1990. pp. 233–282.
