Abstract
An important but as yet unresolved issue is whether spatial knowledge acquired during navigation differs significantly from that acquired by studying a cartographic map. This, in turn, is relevant to understanding the generalizability of the concept of a “cognitive map,” which is often likened to a cartographic map. Based on previous theoretical proposals, we hypothesized that route and cartographic map learning would produce differences in the dynamics of acquisition of landmark-referenced (allocentric) knowledge compared to view-referenced (egocentric) knowledge. We compared this model with competing predictions from two other models linked to route vs. map learning. To test these ideas, participants repeatedly performed a judgment of relative direction (JRD) and a scene and orientation-dependent pointing (SOP) task while undergoing route and cartographic map learning of virtual spatial environments. In Experiment 1, we found that map learning led to significantly faster improvements in JRD pointing accuracy compared to route learning. In Experiment 2, in contrast, we found that route learning led to more immediate and greater overall improvements in SOP accuracy compared to map learning. Comparing Experiments 1 and 2, we found a significant 3-way interaction, indicating that improvements in performance differed for the JRD vs. SOP task as a function of route vs. map learning. We interpret these findings to suggest that learning modality differentially affects the dynamics of how we utilize primarily landmark-referenced vs. view-referenced knowledge, pointing to potential differences in how we utilize spatial representations acquired from routes vs. cartographic maps.
Keywords: spatial navigation, spatial memory, route learning, map learning, spatial knowledge
Introduction
How we acquire spatial knowledge of our surrounding environment is critical to understanding how we navigate and orient ourselves within the world. During navigation, we often employ routes that we have learned during previous exploration of an environment to arrive at our intended goal, referred to as “route knowledge.” We may also employ a more “map”-like representation of the environment, often referred to as a “cognitive map,” allowing us to make inferences about directions and distances of landmarks within the environment (O’Keefe & Nadel, 1978; Siegel & White, 1975; Tolman, 1948). We may acquire map-like knowledge either through extensive navigation of an environment or by studying a cartographic map, the result of which is often referred to as “survey knowledge” (Appleyard, 1970; Siegel & White, 1975; Thorndyke & Hayes-Roth, 1982; Wolbers & Buchel, 2005). This dichotomy between route and survey knowledge forms the underpinning of a number of influential theoretical models in spatial cognition (O’Keefe & Nadel, 1978; Siegel & White, 1975) and in cognitive neuroscience (Hartley, Maguire, Spiers, & Burgess, 2003; Iaria, Petrides, Dagher, Pike, & Bohbot, 2003; Packard & McGaugh, 1996).
While there is generally little debate that we can acquire spatial knowledge via multiple modalities (Montello, Waller, Hegarty, & Richardson, 2004), including via navigation and cartographic maps, whether and how knowledge acquired from these two different modalities differs remains unresolved (Chrastil, 2012; Evans & Pezdek, 1980; Montello et al., 2004; Shelton & Gabrieli, 2002; Shelton & McNamara, 2004; H. A. Taylor & Tversky, 1992; Wolbers & Buchel, 2005; Zhang, Copara, & Ekstrom, 2012). Several behavioral studies suggest that map learning may provide more direct access to survey knowledge than navigation (Ruddle, Payne, & Jones, 1997; Thorndyke & Hayes-Roth, 1982). These findings also support the idea that the survey representation is the final element in a hierarchy of acquisition of spatial knowledge (Siegel & White, 1975). The Siegel and White model postulates three fundamental, hierarchical steps in spatial learning: 1) representation of landmarks, 2) representation of routes linked to landmarks, and 3) representation of configurations of landmarks and routes with accompanying directional and metric information, referred to as the “survey representation.” However, it is unclear whether knowledge acquired from cartographic maps passes through the same hierarchy as that acquired from routes or whether maps can provide more immediate access to survey knowledge than routes1.
The partially independent model: Route and cartographic map learning lead to different forms of spatial knowledge
A modification to the Siegel and White model is that map learning may involve more immediate access to configural knowledge, bypassing the need for representation of individual routes, while route learning provides more immediate representation of paths through an environment with less configural knowledge (Moeser, 1988; Noordzij, Zuidhoek, & Postma, 2006; Shemyakin, 1962; H. A. Taylor, Naylor, & Chechile, 1999; Thorndyke & Hayes-Roth, 1982; Zhang et al., 2012). After sufficient exposure, however, spatial knowledge gained from the two modalities is generally comparable (Siegel & White, 1975; Thorndyke & Hayes-Roth, 1982). In one study supporting the partially independent model, Taylor et al. (1999) had participants study maps of, or navigate, an unfamiliar campus building for 10–20 minutes. Participants who learned the environment by route navigation showed better estimates of route distances and worse estimates of Euclidean distances, while the opposite pattern emerged for map learners, who showed better estimates of Euclidean distances and worse estimates of route distances (see also: Rossano, West, Robertson, Wayne, & Chase, 1999). The apparent double dissociations reported by Taylor et al., as well as those obtained by others (e.g., Noordzij, Zuidhoek, & Postma, 2006), suggest that cartographic maps may provide more immediate access to survey knowledge than navigation and, thus, that knowledge acquired via cartographic map learning may be more anchored to landmark configurations than knowledge acquired following route learning. The opposite may apply to routes, which may depend to a greater extent on memory for individual trajectories that are not particularly well integrated, even with extensive exposure to the environment (Hirtle & Hudson, 1991; Moeser, 1988). We term this model the partially independent model.
The overlapping model: Route and cartographic map learning result in the same forms of spatial knowledge
It has also been suggested that both route and map learning involve the same fundamental, visually-based representational system (Kosslyn, 1996; Lee & Tversky, 2001; H. A. Taylor & Tversky, 1992; Tversky, 1992). While early studies suggested that cartographic maps may involve preferential access to the original learned orientation compared to navigation (e.g., Evans & Pezdek, 1980), later studies suggested that both modes of learning involve orientation-dependent forms of representation (Roskos-Ewoldsen, McNamara, Shelton, & Carr, 1998; Shelton & McNamara, 2004). This alignment dependence, based on the angle from which the layout was originally encoded, has often been taken as evidence for their fundamental similarity (Montello et al., 2004; Shelton & McNamara, 2004). Any costs observed in switching from a route to a map perspective (and vice versa) could be taken as a cost associated with changing the orientation of a single representation rather than switching between two representations per se (c.f., Lee & Tversky, 2001). Finally, several fMRI studies have reported largely overlapping brain areas following route and survey learning, with survey learning recruiting a subset of the brain regions involved in route learning (Latini-Corazzini et al., 2010; Shelton & Gabrieli, 2002; Shelton & Pippitt, 2007; but see: Zhang et al., 2012). Thus, these studies suggest that route and cartographic map learning do not differ substantially in terms of their underlying representational structure, suggesting they result in comparable “cognitive maps.” We term this the overlapping model.
The encoding specificity hypothesis and relationship with the above models
A third possibility is that previously observed differences between route and map learning relate to encoding specificity. Specifically, the majority of behavioral studies on route and map learning have employed route distance and Euclidean distance estimation as measures of route and holistic knowledge, respectively (Ruddle, Payne, & Jones, 1999; H. A. Taylor et al., 1999; Thorndyke & Hayes-Roth, 1982), performance on which may be selectively enhanced because of how the environment was encoded rather than because of differences in the underlying representation (e.g., encoding specificity: Tulving & Thomson, 1973). For example, map learning, in particular, allows the participant to visualize distances between remote landmarks directly and may thus provide an advantage on tests of spatial memory that provide a similar encoding perspective (Shelton & McNamara, 2004). Similarly, in cognitive neuroscience studies in which participants viewed information from either a route or survey perspective, any differences in activations may be influenced by the specific viewpoint encoded rather than by differences in the underlying representational structure of routes vs. maps (Latini-Corazzini et al., 2010; Shelton & Gabrieli, 2002; Shelton & Pippitt, 2007). One way of dealing with this potential confound is to employ measures not affected exclusively or primarily by how the original layout was encoded in the first place. We argue here that one way to do this involves comparing measures that access primarily scene and orientation-dependent (egocentric) vs. landmark-referenced (allocentric) knowledge.
Dual systems for spatial representation
In some manuscripts, “allocentric” is assumed to mean the same thing as a survey representation and “egocentric” the same as route knowledge (Noordzij et al., 2006). It is not clear, however, that these representations are, in fact, equivalent. For example, Igloi et al. (2009) showed that when participants navigated a virtual environment, they employed both landmark referencing (allocentric) and memory for the number of turns (sequential egocentric), often using both in parallel or switching between the two forms of knowledge (Igloi, Zaoui, Berthoz, & Rondi-Reig, 2009). This finding suggests that route learning facilitates development of both egocentric and allocentric knowledge, sometimes in parallel. Also, while learning a map would seem to involve coding landmarks relative to each other, and thus treating them in a largely allocentric fashion, utilizing a map to navigate requires some conversion into egocentric-based coding schemes to allow alignment to the configuration of landmarks relative to the navigator (Sholl, 1987). Although some theoretical models have suggested ways in which route and map learning might relate to egocentric and allocentric knowledge (Poucet, 1993), how and in what manner they might relate empirically remains generally unaddressed. Thus, we sought to assess the relative effects of route and cartographic map learning on view-referenced vs. landmark-referenced knowledge using two tasks employed in past literature to assess them: a scene and orientation-dependent pointing (SOP) task and judgments of relative direction (JRD), respectively (Mou, McNamara, Rump, & Xiao, 2006; Mou, McNamara, Valiquette, & Rump, 2004; Waller & Hodgson, 2006). To our knowledge, our paper is the first to investigate how and in what manner route and map learning differentially affect performance in these two pointing tasks.
Assessing utilization of egocentric and allocentric knowledge
To address the effects of route and map learning on the two forms of spatial knowledge, we employed a paradigm recently developed by Waller and Hodgson (2006) that provides a means of dissociating view-referenced and landmark-referenced representational systems. Specifically, we adapted two conditions from a paradigm used by Waller and Hodgson (2006) and others (Holmes & Sholl, 2005; Mou et al., 2004; Rieser, 1989) to differentially measure scene and orientation-dependent (view-referenced) memory and landmark-referenced memory, respectively. The first task, which we term the scene and orientation-dependent pointing (SOP) task, was primarily dependent on one’s position and orientation in the environment based on the perceptual details of the scene, employing the same questions as used in the Waller and Hodgson study (e.g., “Point to the Camera Store”). To ensure participant orientation, before the pointing judgments began, participants navigated to a position in the city at which they felt oriented. To further ensure their orientation, participants viewed background information during the pointing task (i.e., everything but the targets)2. The second task, which we term the judgment of relative direction (JRD) of landmarks task (consistent with earlier terminology), was dependent primarily on knowledge of the relative positions of landmarks to each other and not on being oriented in the immediate environment with scene information. The lack of any orienting information on the screen in our version discouraged participants from updating their position from trial to trial and from using information in the city that might conflict with their imagined position3.
In the current study, the JRD task involves estimating spatial relations between targets, which taps primarily into knowledge of the angular and positional relationships of landmarks to each other. Thus, the JRD task, although involving some degree of initial self-orientation, assays participants’ knowledge of the relative positions of landmarks within the spatial configuration. The SOP task, in contrast, depends on knowledge of the relationship between the target and one’s standing position and orientation in the virtual environment, which is more likely to be view-referenced and involve egocentric knowledge. In contrast to transient types of short-term representations of recent views (Waller & Hodgson, 2006), because participants choose their pointing position at the beginning of the SOP task, this task is more likely to involve enduring egocentric forms of knowledge (see: Wang & Spelke, 2002). While both tasks could be conducted with reference to egocentric or allocentric representations or both (Igloi et al., 2009), both route and map learning likely also involve the two forms of representation (Moeser, 1988; Noordzij, Zuidhoek, & Postma, 2006; Shemyakin, 1962; H. A. Taylor, Naylor, & Chechile, 1999; Thorndyke & Hayes-Roth, 1982; Zhang et al., 2012). Given that the JRD task involves mapping onto landmark interrelations in a way that the SOP task does not, while the SOP task depends on position/orientation information, testing improvements in pointing accuracy during route vs. map learning will provide some insight into the relative development of view/orientation-referenced knowledge via the SOP task and knowledge of landmark interrelations via the JRD task.
We set up our experiments such that participants were repeatedly tested on either JRD pointing accuracy (Experiment 1) or SOP accuracy (Experiment 2) during route and map learning. This allowed us to test two novel and hitherto untested hypotheses: 1) that the dynamics of how SOP and JRD pointing accuracy improve differ during route and map learning, and 2) that SOP and JRD pointing accuracy differ following route and map learning. According to the partially independent model, and based on the logic above, we would generally expect faster improvements in pointing accuracy for the JRD task after map learning compared to route learning. Conversely, we would expect faster improvements in SOP pointing accuracy following route learning compared to map learning. For the overlapping model, in contrast, we would expect no difference in pointing accuracy for the SOP vs. JRD task, since the two learning modalities would utilize the same knowledge. For the encoding specificity model, we might predict overall better performance in the SOP task during route compared to map learning, with possibly the opposite for the JRD task, although this model would not appear to explicitly predict differences in the dynamics of improvement for the SOP vs. JRD task.
General Methods
Stimuli
Two virtual cities, measuring 304 × 304 and 238 × 309 meters, respectively, were displayed separately on a 21-inch computer screen and rendered with an ATI Radeon HD 2400 graphics accelerator. In each virtual city, a set of rectangular target stores (approximately 9.5 × 6.5 × 5.8 meters, length × width × height) was placed along the virtual streets. In addition, we enriched the environment with additional buildings, trees, grass, trash cans, benches, fences, walls, sky, and clouds. The target stores were arranged such that participants could view only one store at a time from the route view (Figure 1A, right panel). The buildings served as background information to aid in localization and never served as targets. The 3D models of all the items in the virtual cities were first constructed using Maya2008 (Autodesk Inc.) and presented using Panda3D software (Entertainment Technology Center, Carnegie Mellon University) and the panda_epl software (http://memory.psych.upenn.edu/PandaEPL). During testing, participants were seated in front of the computer screen; the distance between the participants and the screen was approximately 80 cm.
Figure 1.
General experimental design
A. Right panel, ground-level (route) view of the environment. Left panel, aerial view of the environment.
B. Left panel, JRD pointing task. Right panel, SOP (egocentric pointing) task.
Maps of the city were constructed by taking an aerial-view snapshot from 300 meters above the center of the virtual city (Figure 1A, left panel). The resultant image was shown in the application Preview on the computer monitor. Each target store was clearly and visibly labeled with its name on the roof (e.g., “Coffee Shop”).
Encoding via Route or Cartographic Map Learning
During route learning, participants were instructed to search for one of eight target stores. On each trial, a prompt in the upper left corner of the screen indicated which store was the target (e.g., ‘Find the Fast Food Restaurant’; Figure 1A, right panel). Participants then freely navigated until they located the target. When participants arrived at the store, another prompt appeared (e.g., ‘Thank you. You found the Fast Food Restaurant’). Participants then pressed the enter key and the target name for the next trial appeared in the top left corner of the screen. Each round of searching for the eight target stores constituted one block of route learning. The sequence of target stores within a block was fully randomized. While navigating, participants were encouraged to take as short a route as possible to the target store and could not travel through obstacles. Participants navigated using the arrow keys on the keyboard, which allowed forward and backward movement and left and right turns. The full linear speed was 20 meters/sec and the turning speed was 40 deg/sec. The corresponding accelerations were 37.5 meters/sec² and 25 deg/sec².
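The block structure just described amounts to a shuffle-and-search loop. The following is a minimal sketch, not the actual task code: the store names beyond those mentioned in the text are invented placeholders, and `navigate_to` stands in for the Panda3D navigation routine.

```python
import random

# Illustrative target-store names; only some of these appear in the text,
# the rest are invented placeholders to complete the set of eight.
TARGET_STORES = [
    "Fast Food Restaurant", "Camera Store", "Coffee Shop", "Costume Shop",
    "Gym", "Bakery", "Book Store", "Pet Store",
]

def route_learning_block(stores, navigate_to):
    """One block of route learning: search for each of the eight target
    stores exactly once, in a freshly randomized order."""
    order = random.sample(stores, len(stores))  # fully randomized per block
    for store in order:
        # A prompt such as 'Find the Fast Food Restaurant' would appear here,
        # after which the participant freely navigates to the store.
        navigate_to(store)
    return order
```

Because the order is re-randomized on every block, participants cannot rely on a fixed delivery sequence and must locate each store anew.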
During map learning, participants viewed and drew the layout of the city from an aerial view (see Stimuli). The city was rendered identically to when it was seen during navigation, with the exception that it was viewed from an aerial, or overview, perspective. Relevant details (target stores, store names, roads, buildings, etc.) were visible from the aerial perspective as well as the route perspective. Participants learned the map by viewing the map of the city on the computer screen and drawing a rendition of what they saw on a piece of paper.
SOP Task
Participants began the SOP task by being placed at a random location within the environment. Participants then positioned themselves in the environment by navigating to a place at which they felt oriented (Figure 1B, right panel). This was to ensure that participants were oriented within the visual scene when performing the task. All target stores were removed in this version of the environment, requiring participants to use other landmarks (buildings, roads, or other stimuli) to orient themselves. Following self-positioning, participants began the SOP task. On each trial, participants saw the name of the target store in the upper left corner of the screen (e.g., ‘Please point to the Camera Store’). A virtual compass was present on the bottom half of the screen, and participants pressed the leftward and rightward arrow keys to rotate the compass until it pointed in the direction of the target.
JRD pointing task
A trial began with participants viewing the ground and the sky and no other stimuli (Figure 1B, left panel). We did this to avoid showing any stimuli that participants could use to orient themselves relative to the previous trial while still maintaining orientation in the up-down direction. For each JRD pointing trial, instructions appeared in the top half of the screen (e.g., “Imagine you are standing at the Costume Shop, facing the Gym. Please point to the Camera Store”). These instructions required participants to imagine themselves in a comparatively novel position within the environment to better assay their knowledge of the configuration of stores.
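The correct response on a JRD trial follows directly from the stores’ 2-D map coordinates: it is the bearing from the imagined standing position to the pointing target, relative to the bearing toward the faced store. A minimal sketch of this computation, with illustrative coordinates rather than the actual store layout:

```python
import math

def bearing(from_xy, to_xy):
    """Compass-style bearing (degrees) from one 2-D map position to another,
    measured clockwise from north (+y)."""
    dx = to_xy[0] - from_xy[0]
    dy = to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def jrd_answer(standing, facing, target):
    """Correct JRD pointing angle: direction of the target relative to the
    imagined heading, wrapped to [-180, 180); positive = to the right."""
    rel = bearing(standing, target) - bearing(standing, facing)
    return (rel + 180.0) % 360.0 - 180.0

# Illustrative coordinates: imagine standing at the Costume Shop, facing the
# Gym due north; the Camera Store lies due east, i.e., 90 degrees rightward.
costume_shop, gym, camera_store = (0.0, 0.0), (0.0, 50.0), (50.0, 0.0)
print(jrd_answer(costume_shop, gym, camera_store))  # → 90.0
```

A participant’s pointing error on a trial is then the angular difference between their compass response and this correct angle.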
During both pointing tasks, the full turning speed of the virtual compass was 117.6 deg/sec, with a precision of 1 degree. The turning acceleration was 60 deg/sec2.
Participants
All participants in the study were recruited from the general population in the Davis, CA area and gave written informed consent to participate in the study, which was approved by the Institutional Review Board (IRB) at the University of California, Davis. All participants were between the ages of 18 and 35.
Experiment 1
Here, we sought to understand how JRD pointing accuracy changes as a function of exposure to either direct navigation (route learning) or a map (cartographic map learning). This experiment provides a test of the three models by determining how and in what manner JRD pointing accuracy changes as a function of route vs. map learning. One difficulty in testing how route and map learning affect spatial knowledge is that it is challenging to equate exposure between the two different learning modalities (e.g., Leiser, Tzelgov, & Henik, 1987). Repeated testing, although not a complete solution to this problem, provides insight into whether the representations might evolve differentially following the two learning modalities, thus revealing when performance might diverge or converge. This approach also allows us to potentially better equate exposure to the two modalities initially by providing minimal exposure on the first learning block and then seeing whether performance in the JRD task differs in later trials.
Method
Participants
31 participants (16 female and 15 male) were recruited for this experiment; none had participated in the study previously.
Materials
The layouts are shown in Figure 1. Each JRD pointing block consisted of 10 trials randomly selected from a pool of 336 possible questions. The JRD pointing task was otherwise identical to that described in the General Methods.
Experimental design and procedure
After receiving practice on the JRD pointing task, participants were assigned to encode one of the layouts by either route or map learning. During route learning, participants navigated to each of the 8 target stores once in a random order, completing a full round of deliveries in each encoding block. For map learning, participants viewed the map of the layout for 30 seconds. Following a block of encoding, participants performed a block of the JRD pointing task. We selected 30 seconds of map learning because this was the minimal amount of time participants needed to study and acquire the contents of the map. A pilot experiment employing a longer exposure (60 seconds) resulted in substantial differences in pointing accuracy between route and map learning on the first block of the JRD task, with map learners performing significantly better. Thirty seconds of exposure thus allowed us to better balance initial performance on the first block and thus better investigate whether and how differences between the two forms of learning emerged.
For both route and map learning, the encoding/test procedure was repeated 5 times. Participants then studied the second layout, performing either route or map learning; the procedure for the second layout was identical to that of the first except for the learning method. This allowed us to employ a within-participants design.
The sequence of the two layouts and the encoding method for two layouts were counterbalanced across participants. Participants were encouraged to respond as accurately as possible.
Results and discussion
Pointing error was calculated for each trial and then averaged for each testing block and participant, excluding trials with pointing error 2 standard deviations above the mean. Using this criterion, 5.6% of trials were rejected. The mean pointing error for each testing block was then analyzed in a 2 × 5 encoding (route vs. map learning) × testing block (5 testing blocks) ANOVA. We did not find any significant differences in pointing latency. We also did not find any significant effect of gender on pointing error (F(1,29) = 1.48, p = .23). Thus, we focus on significant differences in pointing error collapsed across gender.
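As a sketch of this trimming procedure (the exact implementation in our analysis pipeline may differ), per-trial absolute angular error can be computed with circular wrapping, and the block mean taken after excluding trials whose error exceeds the block mean by more than 2 standard deviations:

```python
import statistics

def absolute_angular_error(response_deg, correct_deg):
    """Absolute angular difference in degrees, wrapped to [0, 180]."""
    diff = abs(response_deg - correct_deg) % 360.0
    return min(diff, 360.0 - diff)

def trimmed_block_error(errors):
    """Mean pointing error for one testing block, excluding trials whose
    error lies more than 2 standard deviations above the block mean."""
    m = statistics.mean(errors)
    sd = statistics.stdev(errors)
    kept = [e for e in errors if e <= m + 2 * sd]
    return statistics.mean(kept)

# A response of 350 degrees to a correct answer of 10 degrees is only 20
# degrees off once wrapped around the circle.
print(absolute_angular_error(350, 10))  # → 20.0
```

Note that with very few trials per block a single outlier inflates the standard deviation enough to survive a 2 SD criterion; the exclusion only bites with a reasonable number of trials per block.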
Mean pointing error is plotted as a function of testing block and encoding method in Figure 2. We found a main effect of learning block (F(4,120) = 18.14, p < .001) and a significant interaction effect between encoding method and learning block (F(4,120) = 3.94, p < .005). Planned comparisons of JRD performance on all testing blocks showed that performance after map learning was better than after route learning for the 2nd, 3rd, and 4th testing blocks (t(30) = 4.38, 2.12, and 2.86; p < .001, .05, and .01) but that there were no significant differences for the 1st or 5th block between route and map learning. These findings indicate that although pointing accuracy in the JRD task started at comparable levels following route and map learning on the first block, it diverged for blocks 2–4 and then converged again at block 5. We also compared the first block of learning against later blocks. For route learning, JRD pointing accuracy was significantly different between the first vs. third block (t(30) = 1.65, p < .05, one-tailed), the first vs. fourth block (t(30) = 2.81, p < .01), and the first vs. fifth block (t(30) = 3.98, p < .001). There was no difference in pointing accuracy for route learning between the first and second block. For map learning, pointing accuracy was significantly different for the first vs. all subsequent blocks (ts(30) = 4.79, 6.69, 6.59; ps < .001). We also performed paired t-tests between adjacent blocks. For route learning, JRD pointing accuracy was significantly better for the third compared with the second block (t(30) = 2.29, p = .029). For map learning, JRD pointing accuracy was better for the second compared with the first block (t(30) = 5.81, p < .001) and for the fourth compared with the third block (t(30) = 2.26, p = .031). These results show that map learning produced a significant improvement in pointing accuracy by the second learning block, while this improvement manifested later (after the third learning block) for route learning.
Figure 2.
Results from Experiment 1
Mean JRD pointing accuracy as a function of testing block. Solid lines indicate participant data; dotted lines indicate best-fitting models (error bars correspond to ±1 standard error of the mean, as estimated from the analysis of variance).
We then performed a function-fitting analysis to determine whether route and map learning were better described by similar or fundamentally different dynamics. Specifically, we might predict that if map learning provided more immediate access to landmark-referenced knowledge, we would see fast, non-linear improvements in JRD pointing accuracy. In contrast, for route learning, assuming acquisition of landmark-referenced knowledge is more gradual and takes time to accumulate, we might expect a more linear improvement in JRD pointing accuracy over blocks. Using least-squares fitting, we found a difference in the functions that described changes in pointing accuracy over blocks for route vs. map learning. Improvements in pointing accuracy resulting from route learning were better described by a linear (r2 = .90) than a power function (r2 = .76). In contrast, improvements in pointing accuracy resulting from map learning were better described by a power function (r2 = .90) than a linear function (r2 = .76). The best-fitting linear function for route learning was f(x) = −2.9x + 50.5; map learning was best fit by a power function. Other function fits (power functions with positive exponents, quadratic functions, and exponential functions) did not provide comparably significant fits of our data for either condition. These findings suggest that learning in the two conditions could be better described by different functions, further suggesting differences in the dynamics of acquisition of spatial knowledge. Route learning was best described by a gradual, linear process while map learning was best described by a faster, non-linear process. Some caution is necessary with this interpretation, however, because exposure to route and map learning was difficult to equate and not necessarily matched in our experimental design. We consider this issue in more depth in our subsequent analysis of participant time spent navigating.
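The fitting logic can be sketched as follows. This is an illustration with synthetic data, not our actual analysis code, and it assumes a simple two-parameter power function fit by linear regression in log-log space; the exact functional forms and fitting procedure used in the analysis may differ.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination for observed vs. fitted values."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_linear(x, y):
    """Least-squares linear fit y = m*x + c; returns r-squared."""
    m, c = np.polyfit(x, y, 1)
    return r_squared(y, m * x + c)

def fit_power(x, y):
    """Fit y = a * x**b via linear regression in log-log space; returns r-squared."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return r_squared(y, np.exp(log_a) * x ** b)

blocks = np.arange(1, 6, dtype=float)
# Synthetic block-wise pointing errors (illustrative, not actual data):
route_like = 50.5 - 2.9 * blocks       # steady linear decline
map_like = 48.0 * blocks ** -0.35      # fast early drop, then flattening

print(fit_linear(blocks, route_like))                          # ≈ 1.0
print(fit_power(blocks, map_like) > fit_linear(blocks, map_like))
```

The comparison of interest is simply which functional form yields the higher r2 for each learning condition.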
In addition to improvements in JRD pointing accuracy over blocks during route learning, an important issue to address is whether participants also showed improvements in the accuracy of their paths during route learning. For example, it could be the case that improvements in JRD pointing accuracy were task-specific; in other words, they did not mirror actual improvements or utilization of these representations during active navigation. It could also be that participants attempted to gain additional knowledge during route learning by spending additional time navigating on later learning blocks, providing an advantage compared to map learning. To address this issue, for each block of virtual navigation, we calculated the total time spent navigating and the total excess path length, a measure of deviation from the ideal path on trips from one store to another (Newman et al., 2007). A 1 × 5 repeated-measures ANOVA revealed a main effect of block on both total time spent navigating (F(4,120) = 36.9, p < .001) and excess path (F(4,120) = 58.4, p < .001), indicating that both decreased significantly over blocks (Figure 3). These results show that route learners, in addition to showing improvements in JRD pointing accuracy, also improved in the speed and accuracy of their paths while navigating the environment, possibly because they could effectively utilize primarily landmark-referenced knowledge to guide their paths during navigation. These results also suggest that participants were not compensating for the increased difficulty involved in route learning by spending significantly greater amounts of time navigating in later blocks.
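Excess path length reduces to the length of the traversed trajectory minus the length of the ideal (shortest) path between the same start and goal stores. A minimal sketch with illustrative coordinates (the helper names are ours, not from the cited implementation):

```python
import math

def path_length(points):
    """Total length of a polyline through a sequence of (x, y) positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def excess_path(travelled, ideal):
    """Excess path length: distance actually travelled minus the length of
    the ideal (shortest) path between the same start and goal."""
    return path_length(travelled) - path_length(ideal)

# Illustrative trip: the participant detours via (30, 40) instead of
# travelling straight from (0, 0) to (60, 0).
travelled = [(0.0, 0.0), (30.0, 40.0), (60.0, 0.0)]
ideal = [(0.0, 0.0), (60.0, 0.0)]
print(excess_path(travelled, ideal))  # → 40.0
```

An excess path of zero thus indicates a perfectly efficient trip, and block-wise decreases in this measure index increasingly direct navigation.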
Figure 3.
Results from Experiment 1 for time spent and excess path per round of deliveries during route learning
Experiment 2
Overall, the faster improvement of JRD performance after map learning compared to route learning, and specifically the significant modality by learning block interaction effect observed in Experiment 1, would appear to provide primary support for the partially independent model. The overlapping model generally would not predict any difference between the two learning modalities, a prediction not consistent with the findings here. Together, these results provide tentative support for the partially independent model, although stronger support, and comparison with the encoding specificity model, awaits comparison with the same set-up using the SOP task.
While Experiment 1 addressed the issue of changes in JRD pointing accuracy as a function of route or map learning, it did not address how SOP error changed as a function of encoding method. This, and the interaction between Experiments 1 and 2, are critical tests of the models, as we outline below. Thus, Experiment 2 explored how the two different modes of learning the environment (route vs. map learning) affected SOP task performance using the exact same design as Experiment 1 but with the SOP rather than the JRD pointing task.
Method
Participants
Thirty-six participants (18 female, 18 male) were recruited for this experiment; none of them participated in the other experiments.
Materials
The layouts were the same as those in Experiment 1. Each SOP block consisted of 24 trials, comprising 3 rounds of pointing to each of the 8 target stores. The SOP task was otherwise identical to that described in Experiment 1.
Experimental design and procedure
The procedure was identical to Experiment 1 except that, following a mini-block of encoding, participants performed a mini-block of the SOP task. The sequence of the two layouts and the encoding method for the two layouts were counterbalanced across participants. As in previous experiments, participants were encouraged to respond as accurately as possible. We maintained the same presentation procedure for route and map learning (1 round of deliveries and 30 seconds of map viewing on each block) to allow direct comparison between Experiments 1 and 2.
Results and Discussion
Pointing error was calculated for each trial and then averaged for each testing block and participant, excluding trials with pointing error more than 2 standard deviations above the mean. Using this criterion, 4% of trials were rejected. The mean pointing error for each testing block was then analyzed in a 2 × 5 encoding (route vs. map learning) × testing block (5 testing blocks) ANOVA. We did not find any significant effects on pointing latency. We also did not find any significant effect of gender on pointing error (F(1,34) = 1.74, p = .20). Thus, we focus on significant differences in pointing error collapsed across gender.
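The trial-exclusion step (dropping trials whose pointing error exceeds the mean by more than 2 standard deviations before averaging) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sample errors are invented, and applying the criterion within a single array of trials is an assumption here.

```python
import numpy as np

def trimmed_mean_error(errors):
    """Mean pointing error after dropping trials more than 2 standard
    deviations above the mean (the criterion described in the text).
    Returns the trimmed mean and the number of rejected trials."""
    errors = np.asarray(errors, dtype=float)
    cutoff = errors.mean() + 2 * errors.std(ddof=1)
    kept = errors[errors <= cutoff]
    return kept.mean(), len(errors) - len(kept)

# Toy block of pointing errors (degrees), with one wild trial
block = [12, 15, 9, 14, 11, 13, 10, 95]
mean_err, n_rejected = trimmed_mean_error(block)
print(mean_err, n_rejected)  # → 12.0 1
```

The trimmed means per participant and block would then feed the 2 × 5 encoding × testing block ANOVA reported above.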
Mean pointing error is plotted as a function of testing block and encoding method in Figure 4. We found a main effect of encoding method (F(1,35) = 7.51, p < 0.01), indicating that SOP accuracy following the route learning condition was better overall than following the map learning condition. We also found a main effect of learning block (F(4,140) = 25.56, p < 0.001), indicating that SOP accuracy improved as a function of learning in both conditions. Additionally, we found a significant interaction effect between encoding method and learning block (F(4,140) = 2.66, p < .05), indicating a differential improvement in SOP accuracy following route compared to map learning. Planned comparisons showed that performance on the SOP task following route learning was better than following map learning for the 1st and 2nd testing blocks (t(35) = 3.66, p < .001, and t(35) = 2.49, p < .05, respectively), although there was no difference between route and map learning for the 3rd, 4th, and 5th blocks. We also performed paired t-tests between adjacent blocks. For route learning, SOP pointing accuracy was significantly better for the second block compared with the first block (t(35) = 3.70, p < .001). Similarly, for map learning, SOP pointing accuracy was significantly better for the second block compared with the first block (t(35) = 3.98, p < .001) and also significantly better for the third block compared with the second block (t(35) = 3.45, p = .001). Thus, comparing adjacent blocks indicated that SOP accuracy stayed stable after the second learning block during route learning, while it continued to improve until the third block during map learning.
Figure 4.
Results from Experiment 2
Mean SOP accuracy as a function of testing block (error bars correspond to ±1 standard error of the mean, as estimated from the analysis of variance).
Similar to our analyses in Experiment 1, we performed a function-fitting analysis to determine whether changes in the SOP task following route and map learning were better described by similar or fundamentally different dynamics. Using least squares fitting, we found no difference in the functions that described changes in pointing accuracy over blocks for route vs. map learning. Improvements in pointing accuracy resulting from both route and map learning were better described by a power function (r2 = .89 and .90, respectively) than a linear function (r2 = .70 and .72). Other function fits (exponential, quadratic) did not provide better accounts of the data.
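The power-vs-linear comparison can be illustrated with ordinary least squares. This is a sketch under stated assumptions, not the authors' fitting procedure: the block-wise error values are hypothetical, and the power function y = a·x^b is fit here by linear regression in log-log space, with r² computed on the original scale so the two fits are comparable.

```python
import numpy as np

def r_squared(y, y_hat):
    """Proportion of variance in y explained by the fitted values."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def fit_linear(x, y):
    b, a = np.polyfit(x, y, 1)          # y ≈ a + b*x
    return r_squared(y, a + b * x)

def fit_power(x, y):
    # y ≈ a * x**b  →  log y ≈ log a + b * log x
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return r_squared(y, np.exp(log_a) * x ** b)

# Hypothetical mean pointing errors (degrees) over five testing blocks,
# declining steeply at first and then leveling off
blocks = np.arange(1, 6, dtype=float)
errors = np.array([40.0, 28.0, 23.0, 20.0, 18.5])
print(fit_power(blocks, errors) > fit_linear(blocks, errors))  # → True
```

For learning curves of this shape, the power fit captures the early steep drop and later plateau that a straight line misses, which is why its r² comes out higher.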
The better SOP performance after route learning compared to map learning would appear to provide primary support for the partially independent model. The overlapping model, again, would not predict any difference between the two learning modalities, and, as in Experiment 1, was not consistent with the findings here. The encoding specificity hypothesis might also argue for better SOP pointing accuracy following route compared to map learning; after the third learning block, however, there was no difference between route and map learning on the SOP task.
To better compare how route and map learning differentially affected SOP and JRD performance, we conducted a between-experiment comparison of Experiments 1 and 2. We thus compared mean pointing error for both SOP and JRD tasks in a 2 × 5 × 2 pointing task (SOP vs. JRD) × learning block (5 blocks) × encoding method (route vs. map learning) ANOVA. All terms except pointing task, which was between-participant, were within-participant. This comparison is displayed graphically in Figure 5.
Figure 5.
Mean JRD and SOP accuracy during both route and map learning. Note scaling of the y-axis to accommodate plotting of all four functions.
We found a main effect of pointing task (F(1,65) = 41.69, p < 0.001), suggesting that performance on the SOP task was better than performance on the JRD task. We also found a main effect of learning block (F(4,260) = 38.76, p < 0.001), indicating that participants improved in pointing accuracy on both pointing tasks over learning blocks. We also found an interaction effect between encoding method and pointing task (F(1,65) = 7.69, p < .01), suggesting that route and map learning affected the SOP and JRD tasks in different ways. Additionally, we found an interaction between pointing task and learning block (F(4,260) = 3.93, p < 0.005), suggesting that SOP and JRD pointing accuracy changed differently across learning blocks, and an encoding method by learning block interaction effect (F(4,260) = 5.42, p < 0.001), suggesting that the two modes of learning resulted in different improvements in pointing accuracy over learning blocks. Finally, and perhaps most importantly, the three-way interaction effect was also significant (F(4,260) = 2.62, p < 0.05), suggesting that the interaction effect between pointing task and learning block was affected differently by route compared to map learning. No other effects were significant.
As can be seen in Figure 5, the three-way interaction effect was driven, in part, by the fact that route learning resulted in clear differences in the dynamics of improvement for the JRD task compared to the SOP task. In contrast, map learning, while resulting in significant differences in JRD vs. SOP pointing overall (note the y-axes are scaled), displayed comparable dynamics of improvement over learning blocks. The faster improvement in SOP relative to JRD pointing accuracy following route learning, compared with map learning, provides primary support for the partially independent model. Overall, the three-way interaction effect is generally not congruent with the overlapping model, which would predict no difference on the SOP task after route and map learning. Because the encoding specificity hypothesis would not appear to predict differences in the dynamics of performance improvements for the SOP vs. JRD task during route vs. map learning, the 3-way interaction effect we found would also not appear consistent with this model. We return to these issues in the Discussion.
Summary and Concluding Discussion
In this paper, we focused on how the dynamics of route and map learning affect two pointing tasks commonly used in the human spatial navigation literature: the SOP and JRD tasks (Mou, Fan, McNamara, & Owen, 2008; Mou et al., 2006; Waller & Hodgson, 2006). To better frame our ideas of how the two modalities might differentially affect the two representational systems, we tested the predictions of three different models of route and map learning. The partially independent model, articulated in various forms in past literature (Moeser, 1988; Noordzij et al., 2006; Shemyakin, 1962; H. A. Taylor et al., 1999; Thorndyke & Hayes-Roth, 1982; Zhang et al., 2012), argues that map learning provides more immediate access to knowledge of landmark inter-relations compared to route learning, while route learning provides more access to viewpoint/orientation-specific knowledge than map learning. The overlapping model predicts that both route and map learning tap primarily into the same underlying form of spatial knowledge (Lee & Tversky, 2001; Shelton & Gabrieli, 2002; Shelton & McNamara, 2004; Shelton & Pippitt, 2007; H.A. Taylor & Tversky, 1992). The encoding specificity hypothesis predicts that encoding modality affects subsequent ease of retrieval such that items encoded from a route perspective are easier to remember from the same perspective, and vice versa for maps (Lee & Tversky, 2001; Tulving & Thomson, 1973).
To attempt to test the three models, participants learned spatial environments by either route or map learning and then performed two different pointing tasks to assess their knowledge of the spatial layouts. The SOP task involved participants selecting a specific location in the city and pointing to each of the targets from this location. By selecting their position and maintaining scene information (in the absence of the targets), this task involved participants utilizing primarily a scene and orientation-referenced (egocentric) form of knowledge. In contrast, during the JRD task, participants viewed a screen devoid of any objects other than the sky and the ground and were asked to imagine themselves at one target, facing another target, and to point to a third. In comparison to the SOP task, this task required participants to use their knowledge of landmark interrelations while providing little or no orientation or positional information from trial to trial. The different models make contrasting predictions about performance on the two tasks following route and map learning, particularly when SOP and JRD testing is interleaved with the two encoding methods, as was the case in Experiment 1 and 2.
We employed interleaved JRD and SOP pointing tasks with repeated exposure (equated across the two experiments) to route and map learning. We found that initial exposure to the route (1 round of deliveries) and map (30 seconds of viewing and drawing) led to comparable JRD pointing accuracy. More extensive exposure led to a divergence between these two functions by the 2nd, 3rd, and 4th blocks. Route learning resulted in slower, linear changes in JRD pointing accuracy, while map learning resulted in nonlinear, more dramatic changes. By the fifth block, though, JRD pointing accuracy was equal between the two modalities. The most natural fit for these data would appear to be the partially independent representation model. This model allows for differences in the development of knowledge of landmark inter-relations between route and map learning by suggesting that map learning provides more immediate access to this information than routes. Yet the data also suggest that, as postulated in the Siegel and White hierarchical model, the two modalities do eventually converge in terms of (measurable) spatial knowledge.
In contrast, improvements in pointing accuracy in the SOP task initially showed the opposite pattern to that in the JRD task, despite equivalent exposure between the two experiments. SOP pointing accuracy was higher initially following route compared to map learning, although pointing accuracy was statistically indistinguishable between route and map learning by the 3rd block. This initial difference for SOP pointing, but not JRD pointing, may relate in part to the fact that SOP pointing accuracy was generally higher than JRD pointing accuracy. We speculate that this likely relates to the fact that the view- and position-referenced knowledge tested here was probably more readily acquired and utilized than the primarily landmark-referenced knowledge tested in the JRD task (see also: Waller & Hodgson, 2006). Nevertheless, we also found a significant interaction effect for SOP accuracy improvements during route vs. map learning, indicating that map learning took longer to result in improvements in SOP pointing accuracy compared to route learning. These data support the idea that route learning results in faster improvements in view- and orientation-referenced knowledge than map learning. Contrasting the two experiments against each other, the three-way interaction supported the idea that the two modalities differentially affected landmark- vs. view-referenced knowledge. While the encoding specificity hypothesis supports differences in the two tasks following route and map learning, it would not appear to predict the differences in the dynamics of performance improvement we observed. Together, these data would seem to provide the best support for the partially independent model.
In addition to their relevance to understanding cognitive models of route vs. map learning, we believe our data suggest additional constraints on how route and map learning affect underlying representational systems. One idea, discussed in previous work (Burgess, 2006; Byrne, Becker, & Burgess, 2007), is that both route and map learning involve not only egocentric and allocentric knowledge but also the conversion of egocentric to allocentric, and allocentric to egocentric, knowledge (see also: Chen, Byrne, & Crawford, 2011). Specifically, according to this conceptualization, route learning initially favors representation of the environment via scene- and orientation-dependent trajectories, likely largely egocentric in nature. As one repeatedly experiences different paths to the same locations, one can infer the locations of different objects relative to each other and build an allocentric representation of the environment (Ekstrom, 2010). Our data, and those of others (Igloi et al., 2009; Ishikawa & Montello, 2006), suggest that while knowledge of landmark interrelations develops early during navigation, it develops at a relatively gradual rate with experience. This could be reflective, in part, of integrating different trajectories between different landmarks, based on experience, to build a more holistic representation of the environment. In contrast, viewing a map may allow one to skip some of this integration process, providing more immediate access to the geometric configuration of the layout.
Using a map to derive scene- and orientation-dependent knowledge likely involves somewhat of the converse process. Participants must utilize knowledge primarily about the geometrical relationships of the landmarks to each other, align this to their current bearing, and then derive their offset from the target (Sholl, 1987). This process is likely dependent on the participant experiencing sufficient numbers of views within the environment in order to "lock in" their bearing. Thus, this process of converting primarily landmark-referenced to view-dependent knowledge may occur in a more nonlinear, instantaneous manner than the egocentric-to-allocentric conversion following route learning, as supported by our function-fitting findings from the SOP task. Further data are needed to fully test this idea, including situations that require higher demands for egocentric-to-allocentric conversion and vice versa.
One issue with our conclusions regards the extent to which results obtained from virtual navigation extend to real-world navigation. Using a virtual version of the same layout used by Thorndyke and Hayes-Roth (1982), Ruddle et al. found that virtual navigators developed comparable levels of accuracy during route and Euclidean distance estimations as found by Thorndyke and Hayes-Roth for navigators of the real building (Ruddle, Payne, & Jones, 1997). Richardson et al. found that training with a virtual version of a real-world environment improved subsequent learning of the locations of landmarks within that environment. These findings argue that spatial representations acquired during virtual reality are generally comparable, and obey similar properties, to those acquired during real-world navigation (see also: Valtchanov, Barton, & Ellard, 2010). Given differences in vestibular and proprioceptive input for egocentric representation in real vs. virtual reality (Bakker, Werkhoven, & Passenier, 1999), we may expect even greater differences in acquisition of egocentric representation for route vs. map learning in “real” environments.
In summary, our findings provide support for models of route and map learning emphasizing these as partially dissociable. Thus, our findings emphasize that route and map learning involve different ways of utilizing spatial knowledge about the layout of objects within a spatial environment. These findings help place an important debate in a potentially new light: the extent to which learning a spatial layout by directly navigating vs. studying it from a cartographic map involves similar or different underlying cognitive systems. Our results overall emphasize differences in how the two types of knowledge emerge during the two different modes of learning.
Acknowledgments
The authors wish to thank John Beck and Evan Layher for assistance with data. The authors also thank the Kahana lab for generously sharing software and Colin Kyle for technical assistance. We also thank Weimin Mou and the UC-Davis Memory Group for helpful comments on an earlier draft of this manuscript.
Footnotes
Although Siegel and White are not explicit about whether routes and maps go through the same hierarchical steps, they state on page 43: “Survey maps appear as coordinations of routes within an objective frame of reference…[and are] possible only after both routes and an object frame of reference exist.” This statement would seem to imply that both modes of learning involve the same steps.
Although kinesthetic and vestibular inputs contribute to egocentric representation and are degraded in virtual reality (Bakker, Werkhoven, & Passenier, 1999), because view and orientation are considered primary influences on egocentric representation (Klatzky, 1998), we consider the SOP task a measure primarily of egocentric representation. Certainly, allocentric representation may, in some instances, aid in solving the egocentric task. This should primarily be the case, though, during disorientation, as suggested by Waller and Hodgson. By ensuring participants' orientation during the SOP task, we reasoned that participants would primarily employ egocentric representations during the task.
As pointed out by Mou et al. (2004), the JRD pointing task has both egocentric and allocentric components (Mou et al., 2004). This is because inter-object relations still must be mapped onto an egocentric frame (one’s current position) while simultaneously, one must use external coordinates (the other landmarks) to make judgments about the relative positions of the three objects in the environment. Particularly in the absence of orienting information and compared with the SOP task, however, the JRD task provides insight primarily into landmark-referenced knowledge.
References
- Appleyard D. Styles and Methods of Structuring a City. Environment and Behavior. 1970;2:100–117. [Google Scholar]
- Bakker NH, Werkhoven PJ, Passenier PO. The effects of proprioceptive and visual feedback on geographical orientation in virtual environments. Presence-Teleoperators and Virtual Environments. 1999;8(1):36–53. [Google Scholar]
- Burgess N. Spatial memory: how egocentric and allocentric combine. Trends in Cognitive Sciences. 2006;10(12):551–557. doi: 10.1016/j.tics.2006.10.005. [DOI] [PubMed] [Google Scholar]
- Byrne P, Becker S, Burgess N. Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychological review. 2007;114(2):340–375. doi: 10.1037/0033-295X.114.2.340. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia. 2011;49(1):49–60. doi: 10.1016/j.neuropsychologia.2010.10.031. [DOI] [PubMed] [Google Scholar]
- Chrastil ER. Neural evidence supports a novel framework for spatial navigation. Psychonomic Bulletin & Review. 2012 doi: 10.3758/s13423-012-0351-6. [DOI] [PubMed] [Google Scholar]
- Ekstrom AD. Navigation in Virtual Space: Psychological and Neural Aspects. In: Koob G, Thomspon R, LeMoal M, editors. Encyclopedia of Behavioral Neuroscience. London: Elsevier Ltd; 2010. [Google Scholar]
- Evans GW, Pezdek K. Cognitive mapping: knowledge of real-world distance and location information. J Exp Psychol Hum Learn. 1980;6(1):13–24. [PubMed] [Google Scholar]
- Hartley T, Maguire EA, Spiers HJ, Burgess N. The well-worn route and the path less traveled: distinct neural bases of route following and wayfinding in humans. Neuron. 2003;37(5):877–888. doi: 10.1016/s0896-6273(03)00095-3. [DOI] [PubMed] [Google Scholar]
- Hirtle SC, Hudson J. Acquisition of spatial knowledge for routes. Journal of Environmental Psychology. 1991;11:335–345. [Google Scholar]
- Holmes MC, Sholl MJ. Allocentric coding of object-to-object relations in overlearned and novel environments. J Exp Psychol Learn Mem Cogn. 2005;31(5):1069–1087. doi: 10.1037/0278-7393.31.5.1069. [DOI] [PubMed] [Google Scholar]
- Iaria G, Petrides M, Dagher A, Pike B, Bohbot VD. Cognitive strategies dependent on the hippocampus and caudate nucleus in human navigation: variability and change with practice. J Neurosci. 2003;23(13):5945–5952. doi: 10.1523/JNEUROSCI.23-13-05945.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Igloi K, Zaoui M, Berthoz A, Rondi-Reig L. Sequential egocentric strategy is acquired as early as allocentric strategy: Parallel acquisition of these two navigation strategies. Hippocampus. 2009;19(12):1199–1211. doi: 10.1002/hipo.20595. [DOI] [PubMed] [Google Scholar]
- Ishikawa T, Montello DR. Spatial knowledge acquisition from direct experience in the environment: Individual differences in the development of metric knowledge and the integration of separately learned places. Cognitive Psychology. 2006;52(2):93–129. doi: 10.1016/J.Cogpsych.2005.08.003. [DOI] [PubMed] [Google Scholar]
- Klatzky R. Allocentric and egocentric spatial representations: Definitions, Distinctions, and Interconnections. In: Freksa C, Habel C, Wender CF, editors. Spatial cognition: An interdisciplinary approach to representation and processing of spatial knowledge. Vol. 1404. Berlin: Springer-Verlag; 1998. pp. 1–17. [Google Scholar]
- Kosslyn SM. Image and brain: The resolution of the imagery debate. MIT press; 1996. [Google Scholar]
- Latini-Corazzini L, Nesa MP, Ceccaldi M, Guedj E, Thinus-Blanc C, Cauda F, Peruch P. Route and survey processing of topographical memory during navigation. Psychological research. 2010;74(6):545–559. doi: 10.1007/s00426-010-0276-5. [DOI] [PubMed] [Google Scholar]
- Lee PU, Tversky B. Costs of Switching Perspectives in Route and Survey Descriptions. Proceedings of the twenty-third conference of the cognitive science society.2001. [Google Scholar]
- Leiser D, Tzelgov J, Henik A. A Comparison of Map Study Methods - Simulated Travel Vs Conventional Study. Cahiers De Psychologie Cognitive-Current Psychology of Cognition. 1987;7(4):317–334. [Google Scholar]
- Moeser S. Cognitive Mapping in a Complex Building. Environment and Behavior. 1988;20:21–48. [Google Scholar]
- Montello DR, Waller D, Hegarty M, Richardson AE. Spatial memory of real environment, virtual environment, and maps. In: Allen GL, editor. Human spatial memory: Remembering where. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. [Google Scholar]
- Mou W, Fan Y, McNamara TP, Owen CB. Intrinsic frames of reference and egocentric viewpoints in scene recognition. Cognition. 2008;106(2):750–769. doi: 10.1016/j.cognition.2007.04.009. S0010-0277(07)00116-3 [pii] [DOI] [PubMed] [Google Scholar]
- Mou W, McNamara TP, Rump B, Xiao C. Roles of egocentric and allocentric spatial representations in locomotion and reorientation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32(6):1274–1290. doi: 10.1037/0278-7393.32.6.1274. [DOI] [PubMed] [Google Scholar]
- Mou W, McNamara TP, Valiquette CM, Rump B. Allocentric and egocentric updating of spatial memories. J Exp Psychol Learn Mem Cogn. 2004;30(1):142–157. doi: 10.1037/0278-7393.30.1.1422003-10949-012. [pii] [DOI] [PubMed] [Google Scholar]
- Noordzij ML, Zuidhoek S, Postma A. The influence of visual experience on the ability to form spatial mental models based on route and survey descriptions. Cognition. 2006;100(2):321–342. doi: 10.1016/j.cognition.2005.05.006. [DOI] [PubMed] [Google Scholar]
- O’Keefe J, Nadel L. The Hippocampus as a Cognitive Map. Oxford: Clarendon Press; 1978. [Google Scholar]
- Packard MG, McGaugh JL. Inactivation of hippocampus or caudate nucleus with lidocaine differentially affects expression of place and response learning. Neurobiology of learning and memory. 1996;65(1):65–72. doi: 10.1006/nlme.1996.0007. [DOI] [PubMed] [Google Scholar]
- Poucet B. Spatial cognitive maps in animals: new hypotheses on their structure and neural mechanisms. Psychol Rev. 1993;100(2):163–182. doi: 10.1037/0033-295x.100.2.163. [DOI] [PubMed] [Google Scholar]
- Rieser JJ. Access to knowledge of spatial structure at novel points of observation. J Exp Psychol Learn Mem Cogn. 1989;15(6):1157–1165. doi: 10.1037//0278-7393.15.6.1157. [DOI] [PubMed] [Google Scholar]
- Roskos-Ewoldsen B, McNamara TP, Shelton AL, Carr W. Mental representations of large and small spatial layouts are orientation dependent. Journal of experimental psychology. Learning, memory, and cognition. 1998;24(1):215–226. doi: 10.1037//0278-7393.24.1.215. [DOI] [PubMed] [Google Scholar]
- Rossano MJ, West SO, Robertson TJ, Wayne MC, Chase RB. The acquisition of route and survey knowledge from computer models. Journal of Environmental Psychology. 1999;19(2):101–115. [Google Scholar]
- Ruddle RA, Payne SJ, Jones DM. Navigating buildings in “desk-top” virtual environments: Experimental investigations using extended navigational experience. Journal of Experimental Psychology-Applied. 1997;3(2):143–159. [Google Scholar]
- Ruddle RA, Payne SJ, Jones DM. The effects of maps on navigation and search strategies in very-large-scale virtual environments. Journal of Experimental Psychology: Applied. 1999;5(1):54–75. doi: 10.1037/1076-898x.5.1.54. [DOI] [Google Scholar]
- Shelton AL, Gabrieli JD. Neural correlates of encoding space from route and survey perspectives. J Neurosci. 2002;22(7):2711–2717. doi: 10.1523/JNEUROSCI.22-07-02711.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shelton AL, McNamara TP. Orientation and perspective dependence in route and survey learning. J Exp Psychol Learn Mem Cogn. 2004;30(1):158–170. doi: 10.1037/0278-7393.30.1.158. [DOI] [PubMed] [Google Scholar]
- Shelton AL, Pippitt HA. Fixed versus dynamic orientations in environmental learning from ground-level and aerial perspectives. Psychol Res. 2007;71(3):333–346. doi: 10.1007/s00426-006-0088-9. [DOI] [PubMed] [Google Scholar]
- Shemyakin FN. Orientation in space. In: A, et al., editors. Psychological science in the USSR. Vol. 1. Washington, D.C: U.S. Office of Technical Reports; 1962. [Google Scholar]
- Sholl MJ. Cognitive maps as orienting schemata. J Exp Psychol Learn Mem Cogn. 1987;13(4):615–628. doi: 10.1037//0278-7393.13.4.615. [DOI] [PubMed] [Google Scholar]
- Siegel AW, White SH. The development of spatial representations of large-scale environments. In: Reese HW, editor. Advances in child development and behavior. New York: Academic; 1975. [DOI] [PubMed] [Google Scholar]
- Taylor HA, Naylor SJ, Chechile NA. Goal-specific influences on the representation of spatial perspective. Memory & Cognition. 1999;27(2):309–319. doi: 10.3758/bf03211414. [DOI] [PubMed] [Google Scholar]
- Taylor HA, Tversky B. Spatial Mental Models Derived from Survey and Route Descriptions. Journal of Memory and Language. 1992;31:261–292. [Google Scholar]
- Thorndyke PW, Hayes-Roth B. Differences in spatial knowledge acquired from maps and navigation. Cogn Psychol. 1982;14(4):560–589. doi: 10.1016/0010-0285(82)90019-6. [DOI] [PubMed] [Google Scholar]
- Tolman EC. Cognitive Maps in Rats and Men. Psychol Rev. 1948;55:189–208. doi: 10.1037/h0061626. [DOI] [PubMed] [Google Scholar]
- Tulving E, Thomson DM. Encoding Specificity and Retrieval Processes in Episodic Memory. Psychological Review. 1973;80(5):352–373. [Google Scholar]
- Tversky B. Distortions in cognitive maps. Geoforum. 1992;2:131–138. [Google Scholar]
- Valtchanov D, Barton KR, Ellard C. Restorative effects of virtual nature settings. Cyberpsychol Behav Soc Netw. 2010;13(5):503–512. doi: 10.1089/cyber.2009.0308. [DOI] [PubMed] [Google Scholar]
- Waller D, Hodgson E. Transient and enduring spatial representations under disorientation and self-rotation. J Exp Psychol Learn Mem Cogn. 2006;32(4):867–882. doi: 10.1037/0278-7393.32.4.867. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang R, Spelke E. Human spatial representation: insights from animals. Trends Cogn Sci. 2002;6(9):376. doi: 10.1016/s1364-6613(02)01961-7. [DOI] [PubMed] [Google Scholar]
- Wolbers T, Buchel C. Dissociable retrosplenial and hippocampal contributions to successful formation of survey representations. J Neurosci. 2005;25(13):3333–3340. doi: 10.1523/JNEUROSCI.4705-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang H, Copara MS, Ekstrom AD. Differential Recruitment of Brain Networks following Route and Cartographic Map Learning of Spatial Environments. PLoS One. 2012;7(9):e44886. doi: 10.1371/journal.pone.0044886. [DOI] [PMC free article] [PubMed] [Google Scholar]