Abstract
We demonstrate that multivoxel pattern analysis can be used to decode place‐related information in fMRI. Subjects performed a working‐memory version of the Morris water maze task in a virtual environment with a single wall cue. The voxel data corresponding to the moments when subjects were located at the goal were extracted for seven regions implicated in spatial navigation, and then used to train a pattern classifier based on partial least squares. Using a leave‐one‐out (LOO) test procedure, goal locations at the E, W, and N positions (relative to the cue at S) were predicted significantly better than by a naïve classifier for voxels in medial prefrontal cortex, hippocampus, and inferior parietal cortex. Prediction with voxels from other regions involved in navigation was also better than the naïve classifier, which raises the possibility that goal‐location information is widely disseminated among the navigation network. Notably, the predictive capability of all regions combined decreases significantly, relative to no change, only when voxel data from the hippocampus are left out. This implies that the hippocampus contains some unique information that identifies goal locations, whereas other regions contain information that also identifies goal locations but is more redundant. Classification of goal locations is an important step toward decoding a variety of place‐related information in spatial navigation with fMRI. Hum Brain Mapp, 2010. © 2009 Wiley‐Liss, Inc.
Keywords: hippocampus, prefrontal cortex, multivoxel pattern analysis, place coding, cognitive map
INTRODUCTION
fMRI studies of spatial navigation using virtual environments have consistently found a network of activated regions that is compatible with regions identified in the animal literature [e.g. Aguirre et al.,1996; Hartley et al.,2003; Iaria et al.,2003; Maguire et al.,1998; Shipman and Astur,2008; Spiers and Maguire,2006; Voermans et al.,2004; Wolbers and Buchel,2005,2007]. This network includes posterior/inferior parietal regions, retrosplenial cortex, and medial temporal regions, including hippocampus and parahippocampal regions.
Multivoxel pattern analysis has shown that fMRI activity may contain predictive information for a variety of sensory and cognitive processes, such as for object categories, stimulus orientation, and intentions [Hanson et al.,2004; Haxby et al.,2001; Haynes et al.,2007; Kamitani and Tong,2005; Polyn et al.,2005]. Given the growing fMRI literature on spatial navigation, and the well‐studied properties of neurons in animal spatial navigation, the ability to predict place‐related information would also be significant. Here we use a visually austere virtual environment to test if goal locations can also be predicted from a pattern of fMRI activity.
Single‐cell recordings in rodents show that information about goal locations is prominently represented by neurons in the hippocampus and prefrontal cortex. For example, one study found an accumulation of place fields at goal locations in the hippocampus, even when those locations were unmarked [Hollup et al.,2001; see also Hok et al.,2007]. Another study found place fields in prefrontal cortical regions of rats that prominently represented goal locations distinct from reward sites [Hok et al.,2005]. In a study with preoperative humans, goal information was found primarily in prefrontal cortex, hippocampus, and parahippocampus [Ekstrom et al.,2003]. It is also the case that hippocampal place fields do not show a topographic arrangement, and that the reconstruction of location depends on an ensemble of neurons [Redish et al.,2001; Zhang et al.,1998]. On the basis of these studies, we hypothesized that goal locations can be discriminated from the distributed pattern of voxel activity in medial temporal lobe and prefrontal cortex. We also test other regions in the spatial network, because encoding goal locations is only one aspect of route planning and execution in goal‐oriented navigation [Poucet et al.,2004].
The task we used is a version of the Morris water maze that relies on spatial working memory [Morris et al.,1986]. Subjects freely navigated with a MRI‐compatible joystick in a circular arena with one small red square on the wall providing the sole landmark. In an encoding phase, starting from a random location and orientation, subjects ‘picked up a coin’ by moving to a yellow disk on the arena floor. After a short delay with a blank screen, subjects were randomly repositioned and reoriented, and they then retrieved the now invisible coin by returning to the vicinity of its prior location. Successful retrieval requires integration of joystick motion, optic‐flow cues, wall views, and allocentric spatial memory.
Our results show that goal locations can be predicted significantly better than a baseline model for voxels in medial prefrontal cortex, hippocampus, and inferior parietal cortex. We also found the best performance when data from all regions were combined. We then removed voxel data one region at a time from the combined dataset and found that prediction performance dropped significantly below zero change only when voxel data from the hippocampus were removed. Thus, it seems that goal‐location information may be distributed or broadcast throughout many regions of the navigation network, but that the hippocampus contributes something more unique or specialized. These results are a first step toward decoding a variety of place‐related information in spatial navigation with fMRI.
MATERIALS AND METHODS
Spatial Environment and Trial Detail
The virtual environment was coded in C++ with the OpenGL library (http://www.opengl.org) for rendering 3D graphics. The environment consisted of a circular arena with a radius of 100 arbitrary units (U) and a wall 16 units high (see Fig. 1). A fixation point indicating view and heading was placed mid‐height on the wall, and two parallel horizontal lines were drawn on the wall to aid depth perspective. Dots were randomly placed on the floor, and shuffled between trial phases, to provide optic‐flow cues.
Figure 1.

Snapshots of the virtual environment are shown for the encoding phase. In the retrieval test phase (not shown) the coin is not visible (see also Supp. Info. Fig. S1). [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]
Subjects moved around freely with a joystick; in the scanner this was an MRI‐compatible joystick (Current Designs: http://www.curdes.com). Movement consisted of forward, backward, left, and right steps. To simulate natural movement, forward motion was relatively quick and slightly graded (speed range of about 50–75 U/s), whereas the other directions were limited to 10 U in any one continuous movement at a constant speed (20 U/s). The left/right buttons of the joystick controlled heading: a single button press turned 1°, and holding the button turned at 90°/s. If subjects turned while simultaneously moving forward, the rotation was slowed so that they veered slightly as they moved.
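The movement parameters above can be summarized in a simplified kinematic sketch. The discrete-step update below is our illustration, not the actual C++/OpenGL implementation; the coordinate convention (0° = North, increasing clockwise) is an assumption.

```python
import math

def step(x, y, heading_deg, forward_u, turn_deg):
    """One simplified movement update in arena coordinates.

    heading 0 deg points North and increases clockwise; turning while
    moving forward makes the path veer, as described in the text.
    """
    heading_deg = (heading_deg + turn_deg) % 360.0
    t = math.radians(heading_deg)
    return (x + forward_u * math.sin(t),  # East component
            y + forward_u * math.cos(t),  # North component
            heading_deg)

# One second of roughly full-speed forward motion (~50 U) while facing East:
x, y, h = step(0.0, 0.0, 90.0, 50.0, 0.0)
```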
Each trial consisted of an encoding and a test phase. In the encoding phase, the subject started at a random orientation and position and had 15 s to move to a visible yellow coin located on the floor (see Fig. 1). When the subject came within 15 U of the coin, movement was frozen and the word “found” appeared centrally on the screen for 1.5 s. The screen was then blanked, except for a fixation cross, for a jittered delay period (6 s mean, range 4–10 s). The test phase started with the subject again randomly repositioned and reoriented. The coin was not visible, and the subject had to move to the location where the coin had been. A time limit of 20 s in the test phase restricted searching. The subject had to remain near the correct location (within 35 U) for at least 2 s for a successful retrieval. If the coin was retrieved (not retrieved), the word “recalled” (“not recalled”) appeared on the screen for 1.5 s; the screen was then blanked and replaced by a fixation cross for another 4–10 s jittered delay before the next trial.
There was one red square on the wall (12 U wide) that covered only about 7° of the wall. The circular arena with one wall cue is patterned after Muller et al. [1987]. Subjects were pretrained to retrieve coins all over the arena, but during fMRI scanning the coin was always placed East, West, or North (E, W, N) relative to the red square (South), and 10 U in front of the wall. The main reason we limited goals to three locations was so that we could perform neural decoding of place information with a sufficient number of trials at each position. Starting locations for each trial phase were limited to seven of the eight compass positions along the wall circumference (i.e. E, NE, etc., but not S), and they were counterbalanced across phases and goal locations.
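The goal geometry above (goals 10 U in front of a 100 U-radius wall, at compass positions relative to the cue at South) can be made concrete with a short sketch. The coordinate convention and function name are ours, for illustration only.

```python
import math

def goal_position(compass_deg, wall_radius=100.0, inset=10.0):
    """Arena coordinates of a goal placed `inset` U in front of the wall.

    Convention (an assumption, not from the paper): 0 deg = North,
    90 = East, 180 = South (the red cue), 270 = West; +x East, +y North.
    """
    r = wall_radius - inset
    t = math.radians(compass_deg)
    return (r * math.sin(t), r * math.cos(t))

# The three goal locations used during scanning, relative to the cue at S:
goals = {"E": goal_position(90.0), "W": goal_position(270.0),
         "N": goal_position(0.0)}
```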
There was also a cue‐place control condition in which there were eight unique cues of various colors and polygon shapes evenly spaced out on the wall. The coin was always located next to one of the cues. The cues were shuffled between trial phases to prohibit allocentric encoding (see Supp. Info.: Spatial Environment and Trial Detail).
Subjects
The study was approved by the UC Irvine Institutional Review Board, and all subjects gave written informed consent and were compensated for their participation in a training ($10/h) and a scanning session ($25/h). Eleven subjects (6 female, 5 male; 24–39 years of age; mean (SD) age 29.8 (5.15); all with post‐secondary education) were recruited directly or by referral. All subjects were self‐reported right‐handed and none had any known neurological disorders.
fMRI Acquisition and Analysis
All subjects were given 1 h of training 1 or 2 days prior to scanning and an additional 5 min of practice while lying in the scanner, just before scanning. fMRI data were acquired with a Philips 3T Intera Achieva: SENSE factor 2, 37 interleaved 3 mm‐thick slices, TR = 2 s, no gap, flip angle 70°, TE = 30 ms, FOV = 240 × 240, 3 mm × 3 mm in‐plane resolution. Slices were taken perpendicular to the long axis of the hippocampus, an orientation that has been shown to increase signal in the temporal lobe, accompanied by a decrease in posterior orbital frontal cortex and rostral anterior cingulate [Weiskopf et al.,2005]. The most anterior slice reached the genu of the corpus callosum and covered Brodmann area 9, but cut off orbital frontal cortex.
In the scanning session, each subject completed three scanning runs. Each run contained 12 trials with one wall cue and four trials with the cue‐place control condition. Data were analyzed with SPM2 (http://www.fil.ion.ucl.ac.uk). Images were realigned, normalized to the EPI template in MNI (Montreal Neurological Institute) space, and smoothed with an 8 mm FWHM Gaussian kernel. We applied the general linear model to whiten the data, with regressors for each cell of the 2 × 2 design for successful trials (allocentric vs. cue‐place × encoding vs. test), as well as for the allocentric and cue‐place delay periods. Regressors were convolved with the canonical hemodynamic response. Unsuccessful trials were coded separately.
Voxel Extraction and Dataset Construction
Overall, an average of 67.4% (SD 15.0) of trials were successful. We created separate datasets for each of the seven (four female) of the 11 subjects who had at least seven correct N trials, which provided a nearly balanced number of E/W/N trials. The average numbers of successful trials for these seven subjects at the E, W, and N locations were 9.36 (SD 1.18), 8.57 (SD 1.9), and 8.71 (SD 1.6), respectively. Because each trial contributes two observations to the dataset (one each for the encoding and test phases), the average total number of observations per subject was 53.3 (SD 7.0).
In postscan interviews, subjects described locations as generally left/right of the cue, except for three subjects who described the arena as a clock face. No subject noticed that coin locations were repeated exactly at the E, W, N (or 3, 6, 9 o'clock) positions. (As mentioned in the Trial Detail section, goal locations were spread out all over the arena during training but were limited to three positions during scanning.) Those who projected a clock face thought locations were scattered around 3, 6, or 9 o'clock. It is worth pointing out that the three subjects who used clock‐face descriptions were the 2nd, 4th, and 6th best‐performing subjects, which implies they were above average but not outliers. Moreover, only two of those three subjects had at least seven successful N trials and were included in the neural decoding test.
Each data point in each dataset consisted of a row vector of individual voxel activity. The activity in each row was the average, at each voxel, over the third and fourth volumes after the end of successful trial phases. These volumes correspond to the end of the trial phase plus a 5–9 s lag, which covers the peak of the canonical hemodynamic response to the moment subjects were located at the goal. We included both encoding and test phases in one dataset so that neural decoding represents location information irrespective of differences in phase‐specific processing.
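The construction of one observation can be sketched as follows. The function name, the trial-end timing input, and the exact volume indexing are our assumptions for illustration; the study's actual extraction was done with SPM2.

```python
import numpy as np

def goal_observation(bold, trial_end_s, tr=2.0):
    """Row vector for one trial: mean of the 3rd and 4th volumes after
    the trial-phase end (roughly a 5-9 s hemodynamic lag at TR = 2 s).

    bold: (n_volumes, n_voxels) array of preprocessed voxel time courses.
    """
    end_vol = int(trial_end_s // tr)          # volume containing trial end
    lagged = bold[end_vol + 3 : end_vol + 5]  # third and fourth volumes after
    return lagged.mean(axis=0)                # one row per observation
```

Stacking these rows over all successful encoding and test phases yields the (observations × voxels) data matrix used for classification.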
Voxels were extracted using the SPM2 extraction tool to whiten the data and regress out motion parameters. Voxels were extracted from seven bilateral regions implicated in spatial navigation [e.g. Maguire et al.,1998]. The regions were defined anatomically by predefined templates in the WFU PickAtlas [Maldjian et al.,2003] or by spheres centered at functionally defined coordinates, as detailed in Table I (see also Supp. Info.: fMRI Results and Data Extraction). In addition, all seven regions were combined into another data matrix, giving eight datasets for each subject.
Table I.
Datasets were created from seven regions defined functionally or from the WFU Pickatlas, as well as their combination
| Region | Whole region or center/radius | Number of voxels |
|---|---|---|
| Hippocampus | Whole | 523 |
| Parahippocampus | Whole | 585 |
| Inferior parietal | Whole | 1,065 |
| Superior parietal | Whole | 977 |
| Occipito‐parietal/ retrosplenial | −15, −66, 18; 16 mm | 2,342 |
| Striatum | −12, 10, 0; 12 mm | 531 |
| Medial PFC | Whole | 1,616 |
| Combined | — | 7,639 |
Neural Decoding
We applied linear partial least squares (PLS) regression, combined with taking the maximum output, to classify locations (E, W, or N). Briefly, PLS uses singular value decomposition to find components with maximum covariance with the targets, and then projects the data onto those components prior to regression [e.g. Shawe‐Taylor and Cristianini,2004]. Projecting data onto components reduces the dimensionality of the input space and alleviates the need to select voxels, thereby making it feasible to operate over all voxels in a regional analysis. This is especially relevant for spatial navigation because cellular recordings show that an ensemble of place cells is most important for place reconstruction [Redish et al.,2001; Zhang et al.,1998].
In PLS regression, the decomposition is applied to the covariance of the data with the target vectors, in contrast to the better‐known principal component analysis (PCA), which decomposes the covariance of the data itself. Using the covariance with the target vectors means that components are directly related to the target. This is especially important for fMRI data, which is known to have many noise sources that could dominate a PCA application. Moreover, in PLS regression the decomposition of the covariance matrix can be iterated with the residuals as a new target, as detailed below. Iterating over the residuals makes this a very different PLS procedure from the one used for statistical inference in fMRI [McIntosh et al.,1996], and one that is more appropriate for prediction. PLS also compares quite favorably to support vector machines on a variety of benchmark datasets [Bennett and Embrechts,2003] and was previously shown to perform well with fMRI data [Rodriguez,2006].
In detail, the procedure starts with a standard multivariate regression equation, Y = XW, where in our case Y is a t × 3 matrix of target vectors for t observations, with entries in each row of Y set to 1 or 0 to indicate one of the three goal locations. X is a mean‐centered data matrix of size t × v, where the v columns are voxels, and W is a matrix of weights. In the first step, singular value decomposition is used to find the one component U1 that best accounts for the variance of X′Y. The data matrix X in the regression equation is then replaced by XU1, giving Y = XU1W, and the weights are determined by solving the normal equations ([XU1]′[XU1])W = [XU1]′Y. Next, U1 is removed from the space of X (i.e. the residual of the equation X = bU1 is taken, where b is another weight vector), producing a new matrix X2 that is orthogonal to U1. Likewise, Y is replaced by the residual Y2 = Y − XU1W1, and the process is repeated using the covariance X2′Y2. After iterating K times, the regression equation becomes Y = [XU1 X2U2 X3U3 … XKUK]W, where W = [W1 W2 … WK]′. For making a prediction, a test data vector is used in place of X [for details, see Bennett and Embrechts,2003; Shawe‐Taylor and Cristianini,2004]. In pilot testing, K ≈ 30 was found to be optimal, and for 27 ≤ K ≤ 31 there were only small changes in classification performance (less than 0.5%) that did not affect the significance of P‐values (see Supp. Info.: Neural Decoding).
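One reading of the iterative procedure can be sketched in code. This is a simplified illustration, not the implementation used in the study: the variable names, the small-norm stopping guard, and the least-squares solver are ours.

```python
import numpy as np

def pls_train(X, Y, K=30):
    """Iterative PLS: SVD of the X'Y covariance, regress, deflate, repeat.

    X: (t, v) mean-centered voxel data; Y: (t, c) 0/1 target matrix.
    Returns projection directions, regression weights, and deflation loadings.
    """
    Xk, Yk = X.astype(float).copy(), Y.astype(float).copy()
    comps, weights, deflations = [], [], []
    for _ in range(K):
        # Leading left singular vector of X'Y: max-covariance direction.
        U, _, _ = np.linalg.svd(Xk.T @ Yk, full_matrices=False)
        u = U[:, :1]                                # (v, 1)
        s = Xk @ u                                  # component scores (t, 1)
        if np.linalg.norm(s) < 1e-8:
            break                                   # nothing left to model
        W = np.linalg.lstsq(s, Yk, rcond=None)[0]   # normal-equation solve
        b = np.linalg.lstsq(s, Xk, rcond=None)[0]   # loadings for deflation
        comps.append(u); weights.append(W); deflations.append(b)
        Xk = Xk - s @ b                             # remove component from X
        Yk = Yk - s @ W                             # iterate on the residual
    return comps, weights, deflations

def pls_predict(x, comps, weights, deflations):
    """Classify one observation: sum the per-component fits, take the argmax."""
    xk = np.asarray(x, float).reshape(1, -1)
    y = np.zeros(weights[0].shape[1])
    for u, W, b in zip(comps, weights, deflations):
        s = xk @ u
        y += (s @ W).ravel()
        xk = xk - s @ b                             # deflate the test vector too
    return int(np.argmax(y))
```

On a toy dataset with three well-separated classes, training with a few components and predicting by argmax recovers the class labels.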
Leave‐one‐out cross‐validation was used to calculate the percentage of correct predictions for each of the eight datasets, for each of the seven subjects. This performance was then compared to a naïve classifier as a baseline null model. The naïve classifier ignores the voxel data and merely selects the most likely class according to each subject's prior class probabilities. It performs slightly above the 33% chance level because subjects do not have equal numbers of E/W/N successful trials. Significant performance for each region was inferred by a within‐subject, paired (PLS vs. naïve classifier), right‐tailed t‐test using a P‐value threshold of 0.0063, which represents a P‐value of 0.05 with a conservative Bonferroni correction for eight datasets.
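The cross-validation and baseline logic can be expressed generically. This sketch takes the classifier as `fit`/`predict` callables so it is not tied to any particular model; the function names are ours.

```python
import numpy as np

def loo_accuracy(X, labels, fit, predict):
    """Leave-one-out: train on all but one observation, test the held-out one."""
    hits = 0
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i
        model = fit(X[keep], labels[keep])
        hits += int(predict(model, X[i]) == labels[i])
    return hits / len(labels)

def naive_accuracy(labels):
    """Baseline that ignores voxels and always guesses the most frequent class.

    Sits slightly above 1/3 when E/W/N trial counts are unbalanced.
    """
    counts = np.bincount(labels)
    return counts.max() / counts.sum()
```

Per subject, the PLS percent correct from `loo_accuracy` would then be paired against the naïve baseline across the seven subjects for the right-tailed t-test.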
RESULTS
The average percent correct across subjects was significantly better than the naïve classifier for medial frontal gyrus, hippocampus, and inferior parietal regions (51.4%, P ∼ 0.0006; 47.7%, P ∼ 0.0056; 49.2%, P ∼ 0.0026, respectively; Fig. 2a). Performance for all other regions was better than the naïve classifier at P < 0.03. The maximum performance of 60.7% correct (P < 0.0001) was attained using the combined regions dataset. Figure 3 shows the percent correct for each region across time windows. There is a clear peak for the time window that corresponds to the peak of a canonical hemodynamic response for when the subject was located at the goal.
Figure 2.

Prediction performance across regions for the PLS procedure are well above that of a naïve classifier. (a) Percent correct prediction performance across seven bilateral regions and the combination of those regions. (HPC, hippocampus; PHPC, parahippocampus; InfPar, inferior parietal; SupPar, superior parietal; OP/Ret, occipito‐parietal/retrosplenial cortex; m/vStr, medial/ventral striatum; mPFC, medial prefrontal cortex; Comb, regions combined; * indicates significance at P‐values <0.0063, which is 0.05 corrected for multiple comparisons of 8 regions). (b) Prediction performance for each subject for the combined regions dataset.
Figure 3.

Prediction performance for each region peaks around the same time as the peak of a canonical hemodynamic response to the moment when subjects are located at the goal (O, combined regions; –, m/vStr regions; all other regions are unmarked because their performance is highly similar).
As a confirmation that classification pertains to navigational processes, we compared prediction performance with other test datasets (Supp. Info: Control Tests). In one test, we split classification performance by goal location to see whether any single location had a large impact on the total percent correct. One might, for example, suppose that N goal locations are classified very well because they are more difficult or perhaps more salient in some regard. We tallied the percent correct classification for each goal location separately for each subject, and then compared locations by paired t‐tests of N minus E, N minus W, etc. No region showed significantly greater performance for one location over both of the other two (Supp. Info. Fig. S5). Thus, there is no evidence that neural decoding picks up on only one class when making predictions about goal locations. Instead, each goal location is predicted correctly with similar percentages.
We also considered data from the test phase of unsuccessful trials, which likely represent navigational errors. Given the small number of unsuccessful trials and the lack of consistency in subjects' actual locations at the end of the trial, we used the desired goal location as the class label. Performance was poor for all regions (range 29.3–42.5%; Supp. Info. Table S3), which shows that error‐trial data are unclassifiable, as one would expect. In another control test, we found that data from the cue‐place control condition were not classified as belonging to any goal location. These tests help rule out the possibility that classification performance in successful trials is based on some nonspatial factor.
The fact that many regions of the spatial navigation network carry predictive information suggests that specific goal identity is disseminated throughout the network, and/or that each region carries some distinguishing information about goal locations. Indeed, many regions of the navigation network have cells with a coarse representation of location, which likely contributes to goal‐oriented, allocentric navigation [Knierim,2006]. To address this issue, we performed an exploratory analysis to test which regions have the biggest impact on performance of the combined‐regions dataset. For this we applied a paired t‐test (within subject, uncorrected) to the difference between percent correct with the combined dataset and percent correct when data from each region were taken out separately. Only when removing the hippocampus data did we find a significant decrease in performance relative to the null hypothesis of zero change (60.7% for all seven regions combined vs. 58.4% for all regions except hippocampus; within‐subject mean difference 0.021, SD 0.027, P ∼ 0.039; all other regions had mean differences in the range −0.003 to 0.029, and P ∼ 0.086–0.67). Consequently, it seems that voxel data from the hippocampus add unique information to the combined‐regions dataset, whereas information in other regions is more redundant. The presence of unique information does not guarantee the best prediction of goal locations, because other factors, such as MRI signal noise or noise related to other hippocampal coding functions, could interfere with classification. Nonetheless, this finding is consistent with cellular recordings showing that the hippocampus is specialized for place representations, while other regions code coarser aspects of goal locations as part of route planning and execution [e.g. Knierim,2006; Poucet et al.,2004].
Although speculative at the spatial resolution of fMRI, our data suggest that those other representations are somehow more shared among regions of the spatial navigation network.
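The region-removal analysis above has a simple generic shape, sketched below. The function and the toy `score` in the usage example are illustrative assumptions; in the study, the score would be LOO percent correct from the PLS classifier.

```python
import numpy as np

def region_removal(region_data, labels, score):
    """Score the combined dataset, then rescore with each region left out.

    region_data: dict of region name -> (t, v_region) voxel matrix.
    score: function(X, labels) -> percent correct (e.g. LOO accuracy).
    Returns the performance drop attributable to each region.
    """
    names = list(region_data)
    full = score(np.hstack([region_data[n] for n in names]), labels)
    drops = {}
    for leave_out in names:
        X = np.hstack([region_data[n] for n in names if n != leave_out])
        drops[leave_out] = full - score(X, labels)
    return drops
```

In a toy case where only one "region" carries the label signal, dropping that region is the only removal that costs accuracy, mirroring the hippocampus result.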
DISCUSSION
In summary, we have demonstrated that goal locations in virtual navigation can be decoded from the pattern of voxels that corresponds to the moment subjects were located there. Prediction performance was most significant in the medial prefrontal cortex, hippocampus, and inferior parietal cortex. Moreover, we used pattern analysis with all regions combined to show that information about goals seems to be disseminated in the navigation network, except that hippocampus may carry unique information.
The work reported here supports another recent fMRI study on neural decoding of place‐related information by Hassabis et al. [2007]. In that study, subjects navigated to one of four corners of a room, which were identified by wall cues between the corners. In a searchlight procedure limited to the hippocampus and parahippocampus, neural decoding could discriminate activation patterns corresponding to the four corners for voxels in both regions. However, significantly more voxels in the hippocampus than in the parahippocampus could discriminate above chance. Thus, our results and those of Hassabis et al. show that goal location can be decoded and that the hippocampus may be more specialized. An important difference is that the fMRI acquisition in that study used a high‐resolution protocol and only covered slices encompassing the hippocampal axis, which precludes a complete comparison with the work here. Nonetheless, both studies raise the issue of whether neural decoding of voxel patterns actually reflects place‐cell activity or some other, as yet undetermined, spatial representation. Clearly, more studies are needed to fully establish a connection to place cells.
It is also worthwhile to point out other fMRI studies related to goal processing. In a recent spatial navigation study, a region of medial prefrontal cortex was found to correlate with the proximity to goal locations [Spiers and Maguire,2007]. In that study, goals were destinations within a virtual depiction of the city of London. The correlation pertained to all destinations and did not involve neural decoding of specific locations. Thus, it is not clear if voxel activity would also track proximity to specific goal locations. In our case, where there are no barriers or corners that have to be negotiated, the proximity to the goal is confounded with the time to a trial end. Future work could try to separate this more clearly by using obstacles to force the subject to be close to the goal in virtual space, but still far in time from reaching it.
Prefrontal cortex has also been implicated in nonspatial goal‐processing functions, such as behavioral planning, decision making/intentions, motor planning, and task‐set maintenance. Haynes et al. [2007] have shown that neural decoding can dissociate motor execution/planning from the intentional aspects of behavior. They found that voxel activity predicted motor execution/planning best in a dorsal‐medial frontal region, whereas intentions were predicted in more anterior and lateral frontal regions. In our task setup, we randomized and counterbalanced starting positions and orientations with respect to goal locations, so we do not expect motor movements to be correlated with specific goals. However, we do not claim to separate goal locations from all motor‐related activity, because navigational planning may use heading vectors that are egocentrically defined motion plans [e.g. Knierim,2006].
We also do not make strong claims about what kind of task‐set information subjects must maintain during navigation. One possibility is that the goal location as defined by a cognitive map is maintained and used in navigation. However, as partially shown in Figure 2b, we did not find strong support for any region that maintains information about specific goal locations throughout the navigation process. Another possibility is that once a subject is oriented and a route is planned, a heading vector is all that is maintained [e.g. Kubie and Fenton,2009; Pearce et al.,1998].
One potential caveat is that visual input could drive prediction performance if subjects keep the wall cue in view while searching for the coin. However, subjects had the wall cue in view during the last 4 s in only 7.7% (SD 6.9) of trial‐phase endings, which implies that direct views of the cue have limited influence. Moreover, classification performance with voxels from early visual cortex (39.1%) showed no evidence of prediction greater than the naïve classifier (P ∼ 0.339) or chance level (P ∼ 0.096; Supp. Info: Control Tests). Another potential caveat is that attention, working memory, and/or verbal processes about broadly defined goal locations drive performance, as opposed to navigational representations related to place. However, if that were the case, we would not expect the well‐defined peak in Figure 3, because subjects determine their general destination well before reaching the goal location. This argument could be further supported by neural decoding of more varied goal locations or other kinds of spatial information.
A potential limitation of the study is that during the test phase subjects might wander around the goal location and retrieve the coin only by chance. However, subjects were required to be in the goal vicinity for at least 2 s, so they could not achieve good performance merely by a random walk; they must have had at least a general idea of where the coin was located. As shown in Supplementary Information Table S1, subjects on average took about 2 s longer to find a coin during retrieval than during encoding, which implies they wandered over at most a few spots near the goal location. Moreover, if subjects were actually retrieving coins by random search on occasion, one would expect that to impair classification performance. Nonetheless, a follow‐up study could vary the goal locations around some general areas and require subjects to indicate the one spot where they believe the coin to be in the retrieval phase.
Future work could also extend the findings here by examining a variety of place‐related information with multivoxel pattern analysis, in ways that reflect, and potentially extend, data from cellular recordings. For example, one could consider salient non‐goal locations, such as decision points in a T‐maze, or heading vectors and head directions. A primary concern is having enough data points. As we show in the control test with smaller samples (e.g. Supp. Info. Table S5), a sufficient number of data points is important for good classification performance. Exactly how many is difficult to say, but for a freely navigated environment there are trade‐offs between the speed of navigation, the ability to acquire fMRI volumes while the subject is at a given location, the availability of landmarks and/or visual cues, the total number of discernible locations, and, of course, the total number of trials.
Supporting information
Additional Supporting Information may be found in the online version of this article.
Supplemental Information
REFERENCES
- Aguirre GK, Detre JA, Alsop DC, D'Esposito M (1996): The parahippocampus subserves topographical learning in man. Cereb Cortex 6: 823–829.
- Bennett KP, Embrechts MJ (2003): An optimization perspective on kernel partial least squares regression. In: Suykens JAK, Horvath G, Basu S, Micchelli C, Vandewalle J, editors. Advances in Learning Theory: Methods, Models, and Applications. Amsterdam: IOS Press. pp 227–250.
- Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, Fried I (2003): Cellular networks underlying human spatial navigation. Nature 425: 184–188.
- Hanson SJ, Matsuka T, Haxby JV (2004): Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: Is there a “face” area? Neuroimage 23: 156–166.
- Hartley T, Maguire EA, Spiers HJ, Burgess N (2003): The well‐worn route and the path less traveled: Distinct neural bases of wayfinding in humans. Neuron 37: 877–888.
- Hassabis D, Chu C, Rees G, Weiskopf N, Molyneux P, Maguire EA (2007): Decoding neuronal ensembles in the human hippocampus. Curr Biol 19: 546–554.
- Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001): Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293: 2425–2430.
- Haynes JD, Sakai K, Rees G, Gilbert S, Frith C, Passingham RE (2007): Reading hidden intentions in the human brain. Curr Biol 17: 323–328.
- Hok V, Save E, Lenck‐Santini PP, Poucet B (2005): Coding for spatial goals in the prelimbic/infralimbic area of the rat frontal cortex. Proc Natl Acad Sci USA 102: 4602–4607.
- Hok V, Lenck‐Santini PP, Roux S, Save E, Muller RU, Poucet B (2007): Goal‐related activity in hippocampal place cells. J Neurosci 27: 472–482.
- Hollup SA, Molden S, Donnett JG, Moser MB, Moser EI (2001): Accumulation of hippocampal place fields at the goal location in an annular watermaze task. J Neurosci 21: 1635–1644.
- Iaria G, Petrides M, Dagher A, Pike B, Bohbot VD (2003): Cognitive strategies dependent on the hippocampus and caudate nucleus in human navigation: Variability and change. J Neurosci 23: 5945–5952.
- Kamitani Y, Tong F (2005): Decoding the visual and subjective contents of the human brain. Nat Neurosci 8: 679–685.
- Knierim JJ (2006): Neural representations of location outside the hippocampus. Learn Mem 13: 405–415.
- Kubie JL, Fenton AA (2009): Heading‐vector navigation based on head‐direction cells and path integration. Hippocampus 19: 456–479.
- Maguire EA, Burgess N, Donnett JG, Frackowiak RS, Frith CD, O'Keefe J (1998): Knowing where and getting there: A human navigation network. Science 280: 921–924.
- Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH (2003): An automated method for neuroanatomic and cytoarchitectonic atlas‐based interrogation of fMRI data sets. Neuroimage 19: 1233–1239.
- McIntosh AR, Bookstein FL, Haxby JV, Grady CL (1996): Spatial pattern analysis using partial least squares. Neuroimage 3: 143–157.
- Morris RG, Hagan JJ, Rawlins JN (1986): Allocentric spatial learning by hippocampectomised rats: A further test of the “spatial mapping” and “working memory” theories of hippocampal function. Q J Exp Psychol B 38: 365–395.
- Muller RU, Kubie JL, Ranck JB Jr. (1987): Spatial firing patterns of hippocampal complex‐spike cells in a fixed environment. J Neurosci 7: 1935–1950.
- Pearce JM, Roberts AD, Good M (1998): Hippocampal lesions disrupt navigation based on cognitive maps but not heading vectors. Nature 396: 75–77.
- Polyn SM, Natu VS, Cohen JD, Norman KA (2005): Category‐specific cortical activity precedes retrieval during memory search. Science 310: 1963–1966.
- Poucet B, Lenck‐Santini PP, Hok V, Save E, Banquet JP, Gaussier P, Muller RU (2004): Spatial navigation and hippocampal place cell firing: The problem of goal encoding. Rev Neurosci 15: 89–107.
- Redish AD, Battaglia FP, Chawla MK, Ekstrom AD, Gerrard JL, Lipa P, Rosenzweig ES, Worley PF, Guzowski JF, McNaughton BL, Barnes CA (2001): Independence of firing correlates of anatomically proximate hippocampal pyramidal cells. J Neurosci 21: RC134.
- Rodriguez PF (2006): Fast nonlinear prediction with large number of voxels using kernel partial least squares. Brain Activity Interpretation Competition, Honorable Mention for Best Prediction of Subjective Assessment (www.ebc.pitt.edu/PBAIC.html). Human Brain Mapping Conference, Florence, Italy.
- Rodriguez PF (2007): The fMRI neural correlates of a cognitive map in a virtual water maze task. Program No. 203.16. 2007 Abstract Viewer/Itinerary Planner. Washington, DC: Society for Neuroscience. Online.
- Shawe‐Taylor J, Cristianini N (2004): Kernel Methods for Pattern Analysis. Cambridge: Cambridge University Press.
- Shipman SL, Astur RS (2008): Factors affecting the hippocampal BOLD response during spatial memory. Behav Brain Res 187: 433–441.
- Spiers HJ, Maguire EA (2006): Thoughts, behaviour, and brain dynamics during navigation in the real world. Neuroimage 31: 1826–1840.
- Spiers HJ, Maguire EA (2007): A navigational guidance system in the human brain. Hippocampus 17: 618–626.
- Voermans NC, Petersson KM, Daudey L, Weber B, van Spaendonck KP, Kremer HPH, Fernandez G (2004): Interaction between the human hippocampus and the caudate nucleus during route recognition. Neuron 43: 427–435.
- Weiskopf N, Hutton C, Josephs O, Deichmann R (2005): Optimal EPI parameters for reduction of susceptibility‐induced BOLD sensitivity losses: A whole‐brain analysis at 3 T and 1.5 T. Neuroimage 33: 493–504.
- Wolbers T, Buchel C (2005): Dissociable retrosplenial and hippocampal contributions to successful formation of survey representations. J Neurosci 25: 3333–3340.
- Wolbers T, Wiener JM, Mallot HA, Buchel C (2007): Differential recruitment of the hippocampus, medial prefrontal cortex, and the human motion complex during path integration in humans. J Neurosci 27: 9408–9416.
- Zhang K, Ginzburg I, McNaughton BL, Sejnowski TJ (1998): Interpreting neuronal population activity by reconstruction: Unified framework with application to hippocampal place cells. J Neurophysiol 79: 1017–1044.