Hippocampus. 2021 Jan 4;31(7):717–736. doi: 10.1002/hipo.23295

Goal‐directed interaction of stimulus and task demand in the parahippocampal region

Su‐Min Lee 1, Seung‐Woo Jin 1, Seong‐Beom Park 1, Eun‐Hye Park 1, Choong‐Hee Lee 1,5, Hyun‐Woo Lee 1, Heung‐Yeol Lim 1, Seung‐Woo Yoo 2, Jae Rong Ahn 3, Jhoseph Shin 1, Sang Ah Lee 4, Inah Lee 1,
PMCID: PMC8359334  PMID: 33394547

Abstract

The hippocampus and parahippocampal region are essential for representing episodic memories involving various spatial locations and objects, and for using those memories for future adaptive behavior. The “dual‐stream model” was initially formulated based on anatomical characteristics of the medial temporal lobe, dividing the parahippocampal region into two streams that separately process and relay spatial and nonspatial information to the hippocampus. Despite its significance, the dual‐stream model in its original form cannot explain recent experimental results, and many researchers have recognized the need for a modification of the model. Here, we argue that dividing the parahippocampal region into spatial and nonspatial streams a priori may be too simplistic, particularly in light of ambiguous situations in which a sensory cue alone (e.g., visual scene) may not allow such a definitive categorization. Upon reviewing evidence, including our own, that reveals the importance of goal‐directed behavioral responses in determining the relative involvement of the parahippocampal processing streams, we propose the Goal‐directed Interaction of Stimulus and Task‐demand (GIST) model. In the GIST model, input stimuli such as visual scenes and objects are first processed by both the postrhinal and perirhinal cortices—the postrhinal cortex more heavily involved with visual scenes and perirhinal cortex with objects—with relatively little dependence on behavioral task demand. However, once perceptual ambiguities are resolved and the scenes and objects are identified and recognized, the information is then processed through the medial or lateral entorhinal cortex, depending on whether it is used to fulfill navigational or non‐navigational goals, respectively. 
As complex sensory stimuli are utilized for both navigational and non‐navigational purposes in an intermixed fashion in naturalistic settings, the hippocampus may be required to then put together these experiences into a coherent map to allow flexible cognitive operations for adaptive behavior to occur.

Keywords: entorhinal cortex, episodic memory, hippocampus, object recognition, parahippocampal region, perirhinal cortex, postrhinal cortex, scene perception, spatial navigation

1. INFORMATION PROCESSING IN THE HIPPOCAMPAL MEMORY NETWORK AND THE DUAL‐STREAM MODEL

As an animal goes about its day, it engages in various activities ranging from sleeping and socializing to hunting and feeding. These behaviors can be initiated by the internal goal state of the animal (e.g., hunger, fear) but can also be changed or modified by external stimuli that it encounters along the way (e.g., the scent of food, the view of a predator's den). Upon encountering perceptual cues from the environment, the animal must correctly recognize them and, at the same time, promptly respond to them (including ignoring them). Hippocampal episodic memory plays an essential role in allowing an animal to store its experience into memory and to plan or adjust its goal‐directed behavior adaptively based on its past experience. For instance, a rat may find a piece of food while foraging and, depending on the circumstances, may eat it on the spot or take it back to its nest (Figure 1). Moreover, these goal‐directed behaviors will be strongly influenced by the animal's episodic memories involving similar contexts and objects.

FIGURE 1.


The background visual scene (or context) plays a key role in determining different behavioral responses to the same object. In the above example, depending on whether the rat finds the pizza in a kitchen (top) occupied by humans or in the sewers (bottom), it may take the pizza back to its nest (navigational response) or eat it on the spot (non‐navigational response)

For the past two decades, a “dual‐stream model” of the medial temporal lobe (MTL) has been used as a theoretical guide to understanding how the hippocampus processes information (Figure 2a). Originally inspired by neuroanatomical connectivity (Burwell, 2000), the dual stream model posits the existence of two distinct pathways in the parahippocampal cortical areas of the MTL whose relative dominance depends on whether a stimulus is spatial or nonspatial by nature (Canto, Wouterlood, & Witter, 2008; Chen, Vieweg, & Wolbers, 2019; Insausti, Herrero, & Witter, 1997; Kerr, Agster, Furtak, & Burwell, 2007; Knierim et al., 2014; Knierim, Lee, & Hargreaves, 2006; Nilssen, Doan, Nigro, Ohara, & Witter, 2019; van Strien, Cappaert, & Witter, 2009; Witter et al., 2000). Here, spatial information includes positional and directional signals that can be used directly for spatial navigation, while nonspatial information refers to objects and other items that may not directly provide spatial information for navigation (i.e., “what” component of episodic memory) (Clayton & Dickinson, 1998; Eichenbaum, 2006; Eichenbaum & Fortin, 2005). According to the dual‐stream model, spatial information is processed in the postrhinal cortex (POR, or parahippocampal cortex [PHC] in primates) and then the medial entorhinal cortex (MEC), supported by the spatially selective and navigation‐related cell types found in these regions such as grid cells, head direction cells, border cells, and speed cells (Hafting, Fyhn, Molden, Moser, & Moser, 2005; Kropff, Carmichael, Moser, & Moser, 2015; Savelli, Yoganarasimha, & Knierim, 2008; Solstad, Boccara, Kropff, Moser, & Moser, 2008). On the other hand, nonspatial information is processed in the perirhinal cortex (PER) and then in the lateral entorhinal cortex (LEC), supported by object‐selective neurons in these regions (Ahn & Lee, 2015, 2017; Burke et al., 2012; Deshmukh, Johnson, & Knierim, 2012; Deshmukh & Knierim, 2011; Wang et al., 2018). 
The dual‐stream model claims that the hippocampus associates the spatial and nonspatial inputs from the MEC and LEC, respectively, to form a cohesive episodic memory.

FIGURE 2.


Models of information processing in the hippocampal memory systems. (a) Dual‐stream model. A schematic view of the current status of the dual‐stream model, modified from Knierim, Neunuebel, and Deshmukh (2014) and Doan et al. (2019). (b) The Goal‐directed Interaction of Stimulus and Task‐demand (GIST) model. Our proposed modification of the dual‐stream model incorporates the behavioral task (rather than just stimulus characteristics) in dissociating the relative roles of the MEC and LEC. ADN, anterodorsal thalamic nuclei; HP, hippocampus; LEC, lateral entorhinal cortex; MEC, medial entorhinal cortex; ParaS, parasubiculum; PER, perirhinal cortex; PFC, prefrontal cortex; PHC, parahippocampal cortex; POR, postrhinal cortex; PPC, posterior parietal cortex; PreS, presubiculum; RSC, retrosplenial cortex; SUB, subiculum

Despite the model's popularity, a growing number of studies have begun to call for modifications of the original dual‐stream model based on experimental results that are not explainable through a simplistic cue‐based criterion (i.e., spatial vs. nonspatial) for dissociating the functions of the parahippocampal subregions (e.g., PER, POR, LEC, MEC). For instance, neurons in the so‐called "nonspatial" stream, particularly in the LEC, also show spatially specific firing around objects (Deshmukh et al., 2012; Deshmukh & Knierim, 2011; Keene et al., 2016; Tsao, Moser, & Moser, 2013). Moreover, neurons in the MEC (the "spatial" stream) also represent objects occupying particular locations (Hoydal, Skytoen, Andersson, Moser, & Moser, 2019; Keene et al., 2016). In addition to the physiological evidence, studies testing behavioral performance with localized lesions in specific subregions provide converging evidence supporting some involvement of the PER and LEC in processing spatial information (Kuruvilla & Ainge, 2017; Ramos, 2017). More recently, some researchers have also reported evidence suggesting that cells in the LEC encode egocentric spatial information in relation to objects and that those in the MEC represent allocentric spatial information (Wang et al., 2018), which may also be stated as evidence that the LEC and MEC process spatial information in local and global reference frames, respectively (Knierim et al., 2014).

These experimental results challenge the current dual‐stream model, and one crucial issue that needs to be clarified is whether the spatial (or nonspatial) information‐processing stream is determined by the input characteristics, the task demand, or both. We argue that whether a cue is spatial or nonspatial is determined not only by some inherent property of the cue itself but by whether the cue will be used spatially (i.e., for navigation) or not (e.g., manipulated on the spot). In particular, physiological correlates of visual scene information in primates are manifested as view‐specific neurons in the hippocampus, such as the "spatial view cells" (Rolls & O'Mara, 1995), and this may also be true to some extent in rodents (Acharya, Aghajan, Vuong, Moore, & Mehta, 2016; Aronov & Tank, 2014; Chen, King, Burgess, & O'Keefe, 2013). However, a visual scene can be used to guide not only spatial behavior but also non‐navigational behavior, such as deciding what to do with an object in that context (Figure 1).

The dual‐stream model does not predict a priori which stream may process visual scene information dominantly because, unlike some of the other more obvious spatial cues such as vestibular and self‐motion signals, the visual scene itself does not allow us to precategorize whether the stimulus is spatial or nonspatial. The implementation of virtual reality (VR) environments in the last 10 years has clearly demonstrated that visual scenes play an essential role in rodent behavior (Aronov & Tank, 2014; Cohen, Bolstad, & Lee, 2017; Dombeck, Harvey, Tian, Looger, & Tank, 2010; Harvey, Coen, & Tank, 2012; Ravassard et al., 2013). Nevertheless, unlike theories of hippocampal function in nonhuman primates and humans, in which visual scenes have been emphasized as a major source of information for both spatial (Epstein, Patai, Julian, & Spiers, 2017; Hassabis & Maguire, 2007, 2009; Rolls & Wirth, 2018; Silson et al., 2019) and semantic/gist memory (Poppenk, Evensmoen, Moscovitch, & Nadel, 2013; Robin & Moscovitch, 2017), it remains unclear how visual scenes—which can be either spatial or nonspatial depending on what computations are performed (i.e., navigational vs. non‐navigational)—fit into the hippocampal information processing streams in the rodent according to the classical dual‐stream model. A revised dual‐stream model (Knierim et al., 2014) suggested that visual scenes are processed by the POR (or PHC in primates including humans) (Epstein & Kanwisher, 1998; Epstein, Parker, & Feiler, 2007), but it does not fully address the unique contributions of the POR to processing visual scenes (in contrast with the well‐known contributions of the PER to object recognition).

2. INTERACTION OF THE STIMULUS TYPE AND RESPONSE DEMAND IN THE PARAHIPPOCAMPAL REGION

In our investigation of hippocampal‐dependent behavior in rats, we have shown not only that visual scenes serve as powerful cues in goal‐directed memory tasks (Ahn & Lee, 2014; Kim, Lee, & Lee, 2012; Lee & Lee, 2020; Lee, Lee, & Lee, 2018; Lee, Park, & Lee, 2014; Park, Ahn, & Lee, 2017; Yoo & Lee, 2017), but that the relative involvement of the different areas of the parahippocampal region, particularly the MEC and LEC, further depends on the type of goal‐directed behavior (e.g., navigational vs. non‐navigational) (Figure 3). In fact, task‐related ambiguity does not only apply to scenes, as it is sometimes difficult to definitively characterize even a given object as spatial or nonspatial until one sees what the animal does with the object. For example, an object may be defined as a spatial cue if the animal uses it to compute its position and path, but as a nonspatial object if the animal simply manipulates it. Based on recent experimental findings reviewed here, we suggest that the dual‐stream model must be updated further to incorporate both the influence of visual scenes and the type of responses required in the task (Figure 2b). These two factors are important in explaining the function of episodic memory and its role in guiding an animal's behavior in accordance with its goals, emphasizing that the specialized function of the MTL is for an experience‐dependent adaptive response in the future, as opposed to simply providing an accurate recollection of the past.

FIGURE 3.


Summary table of the results from our own empirical studies for the roles of different areas of the parahippocampal region and the hippocampus in goal‐directed tasks with different cueing stimuli (i.e., scene and object) and response type (i.e., navigational and non‐navigational tasks). Scene stimuli were patterned visual stimuli projected on an array of three LCD panels, and object stimuli were small toy objects. O and X stand for the involvement and noninvolvement of a given area in each stimulus–response condition, respectively. Question marks denote the unavailability of experimental data. POR's involvement in the object‐cued non‐navigational task is observed only when the rat is allowed to sample the object visually (i.e., object stimulus behind the transparent acrylic screen), but not when the object is sampled multimodally

To accommodate these changes, we propose a revised dual‐stream model called the Goal‐directed Interaction of Stimulus and Task‐demand (GIST) model to illustrate the importance of the intended behavior of the animal in determining the flow of perceptual information through the parahippocampal region (Figure 2). In the GIST model, visual scenes and objects are recognized at the level of the POR (or PHC in primates) and PER, respectively, perhaps with some functional overlap during the initial phase of identifying an object. At this recognition stage, the goal‐directed response (i.e., task demand) minimally affects information processing in the POR and PER. However, the nature of the intended behavior (e.g., navigational or non‐navigational) does determine whether the recognized entity (scene or object) is processed through the MEC or LEC. Such task demand may be imposed through interactions with areas such as the prefrontal cortex (PFC), posterior parietal cortex (PPC), and retrosplenial cortex (RSC), which are implicated in goal‐directed decision making for action (Calton & Taube, 2009; Hernandez et al., 2017; Kesner, Farnsworth, & DiMattia, 1989; Kim, Delcasso, & Lee, 2011; Kolb, Buhrmann, McDonald, & Sutherland, 1994; Lee & Shin, 2012; Lee & Solivan, 2008; Miller, Mau, & Smith, 2019; Mushiake, Saito, Sakamoto, Itoyama, & Tanji, 2006; Sakata, Taira, Murata, & Mine, 1995; Vann & Aggleton, 2002; Wise, Boussaoud, Johnson, & Caminiti, 1997). Finally, as in the traditional dual‐stream model, the representations of the MEC and LEC are hypothesized to be merged in the hippocampus to allow more flexible cognitive functions using the environmental stimuli, which might, in turn, influence the information processing in its upstream regions through its cortical output structure, the subiculum (Figure 2b).

Before we continue further, it is worth mentioning how we define objects and scenes in the current review. Dissociating objects from scenes has been a tricky problem in cognitive science and psychology for decades (Bar, 2004; Biederman, 1987; Epstein & Kanwisher, 1998; Greene & Oliva, 2009; Grill‐Spector, Kourtzi, & Kanwisher, 2001; Henderson & Hollingworth, 1999). Based on our understanding of the literature, the following overall consensus has been reached so far. In studies of visual perception in humans (and nonhuman primates), objects and scenes have been repeatedly shown to be dissociable in regions of the cortex (e.g., the inferior temporal [IT] areas). Such dissociations are thought to be driven not only by semantic differences between objects and scenes, but also by lower‐level visual properties such as scale (Epstein & Baker, 2019; Groen, Silson, & Baker, 2017). For instance, while objects have bounded contours defining their shapes, a scene's contour encompasses the entire visual field and is only meaningful with respect to the terrain structure. The functions of objects and scenes are also drastically different. While object recognition allows animals to pick out relevant items to interact with, such as food and nesting material, scene processing allows animals to recognize their environment and plan a path through space. In other words, objects are perceptually compact things that animals perform actions on, while scenes are perceptually distributed representations that one acts in (Epstein, 2005). Objects are clearly embedded in scenes. However, there is ample evidence, from rodents to human infants, that the relative movement or visual contours of "foreground" objects against the "background" scenes are sufficient for the dissociation of objects from scenes (Schnabel et al., 2018; Spelke, 1990; Zoccolan, Oertelt, DiCarlo, & Cox, 2009).

Although it is challenging to find universally agreed‐upon definitions of objects and scenes, an object in our review typically refers to a thing that can be manipulated by a subject (mostly by rodents in this review) (Lee & Lee, 2013b). That is, an object may be pushed away, knocked over, or chewed on. In primates, most object manipulations may be carried out by hands. Such manipulations of objects may involve no goal‐directed intention and may be the expression of pure curiosity. However, the GIST model proposed here focuses more on goal‐directed behavior. Objects in goal‐directed tasks are often manipulated to produce desirable outcomes, usually in the form of reward. What is implied in our definition is that whether something is considered an object depends more on the actions that can potentially be directed toward the stimulus than on the characteristics of the stimulus itself. Naturally, the size and movability of the stimulus matter when defining something as an object in terms of manipulability, but the actions associated with the stimulus may be more critical. For example, in a spontaneous object exploration (SOE) paradigm in which rodents explore an open field with some objects placed in it, those objects are actively touched and explored by the animals. These stimuli are thus considered objects by our definition. However, when the same stimuli are hung on the walls of the same environment so that the animals cannot physically interact with them, those stimuli might not qualify as objects. It would be fair to call them potential landmarks instead of objects. In contrast, a scene is not something that can be instantly "manipulated", but is a semantically coherent background usually characterized by its large scale and immovability (Henderson & Hollingworth, 1999).
A scene may be composed of or include objects (or landmarks to be more accurate) in it, but animals usually do not rely on single objects in the scene to recognize it. Instead, the particular spatial configuration of individual components of the scene is recognized as a whole pattern. This does not necessarily mean that the animal cannot focus on one of the objects (or landmarks) in the scene if it wants to, but such focused attention to a component of the scene is not necessarily required for scene recognition. In addition, unlike objects, a scene may not be easily manipulated through physical actions and is often used as contextual information for both navigational and non‐navigational behaviors. Although one may define objects and scenes operationally, as described here, in natural settings a scene usually contains an object or multiple objects (or landmarks). The results from prior studies, including our own, have demonstrated that there is a dynamic interaction in the hippocampal memory system between objects and their associated scenes for contextual object recognition and other context‐dependent behaviors (Bar, 2004; Jo & Lee, 2010; Kim et al., 2011, 2012; Komorowski, Manns, & Eichenbaum, 2009; Lee & Park, 2013; Lee & Solivan, 2010; Yoon, Graham, & Kim, 2011).

To examine the validity of our model, we provide here a critical review of the studies on the parahippocampal region by examining the interaction between the stimulus type (scene or object) and response demand (navigational or non‐navigational) in goal‐directed tasks. For this purpose, we first analyze the experimental environments in prior studies of rodent hippocampal memory, distinguishing those that used visual scenes from those that used objects. A widely used example of a visual scene is the backdrop of distal cues surrounding the apparatus in a behavioral testing room (Kesner, Farnsworth, & Kametani, 1991; Lee & Solivan, 2008; E. Moser, Moser, & Andersen, 1993; Olton & Samuelson, 1976; Poucet & Herrmann, 1990); more recently developed paradigms also use visual scenes presented as natural or artificial visual patterns projected on a screen (Aggleton, Albasser, Aggleton, Poirier, & Pearce, 2010; Davies, Machin, Sanderson, Pearce, & Aggleton, 2007; Kim et al., 2012; Prusky, Douglas, Nelson, Shabanpoor, & Sutherland, 2004). Similarly, studies involving objects use both real three‐dimensional objects (Ahn, Lee, & Lee, 2019; Deshmukh & Knierim, 2011; Jo & Lee, 2010; Rodo, Sargolini, & Save, 2017; Winters & Bussey, 2005) and visual images of objects displayed on a screen (Ahn & Lee, 2015; Clark, Reinagel, Broadbent, Flister, & Squire, 2011; Gaffan, Healey, & Eacott, 2004; Winters, Bartko, Saksida, & Bussey, 2010).

In addition to comparing studies based on the cueing stimuli, we further organized the studies by the nature of the required behavioral response. In our review, behavioral tasks were considered to be navigational if the task required animals to move through space to reach a goal location defined either by the visual landscape of the environment (i.e., scenes; scene‐cued navigational task) (Martig & Mizumori, 2011; Morris, Garrud, Rawlins, & O'Keefe, 1982; M. B. Moser, Moser, Forrest, Andersen, & Morris, 1995; Olton & Wolf, 1981; Packard & McGaugh, 1996; Steffenach, Witter, Moser, & Moser, 2005; Yoo & Lee, 2017) or by an individual object (object‐cued navigational task) (Park et al., 2017; Robinson et al., 2017). If a task required animals to respond to a stimulus without moving through space (e.g., pushing a lever or digging in a sand‐filled jar), it was considered to be non‐navigational in the present review (scene‐cued or object‐cued non‐navigational task) (Fortin, Agster, & Eichenbaum, 2002; Park et al., 2017; Yoo & Lee, 2017). We argue in this article that navigational versus non‐navigational is a more objective and concrete criterion than spatial versus nonspatial. If one had to categorize a behavior as spatial whenever the animal moved its body or body parts up or down, left or right, such a criterion would be too broad to be investigated in relation to the hippocampal memory system. The most important difference between navigational and non‐navigational behavior is whether an animal moves its body through space. Researchers have extensively studied navigational behavior and the involvement of the hippocampal system in such behavior.
However, non‐navigational behavior has not been studied as extensively as navigational behavior, and we would like to argue in this review that the hippocampus is important in remembering visual scenes and the associated goal‐directed response even when navigation is not required. More importantly, the upstream structures of the hippocampus (i.e., extrahippocampal cortices in the parahippocampal region) may be differentially recruited in a behavioral task depending on whether the task requires navigation or not. Our intention is to provide a different perspective to the field that reinterprets and extends the dual‐stream theory's well‐known spatial versus nonspatial distinction.

When using objects as cues, most prior studies implemented tasks in which animals respond to the objects directly (non‐navigational task demand) instead of using single objects as cues for spatial navigation. Among those, the SOE paradigm has been widely used (Albasser, Davies, Futter, & Aggleton, 2009; Balderas, Rodriguez‐Ortiz, & Bermudez‐Rattoni, 2013; Barker et al., 2006; Bussey, Duck, Muir, & Aggleton, 2000; Davies et al., 2007; Mumby, Glenn, Nesbitt, & Kyriazis, 2002; Norman & Eacott, 2004; Rodo et al., 2017) to test object memory based on a novelty preference. However, our review does not include prior studies using SOE tasks, mainly for two reasons: the SOE paradigm is not a goal‐directed task, in that no reward is given for a particular action, and the array of multiple objects involved in such tasks provides input as both a visual scene and 3D objects, making it difficult to dissociate visual scenes from objects under such circumstances.

3. MEC AND LEC INVOLVEMENT IS MODULATED BY THE NATURE OF THE GOAL‐DIRECTED BEHAVIOR

The findings from our laboratory suggest that the involvement of the LEC and MEC depends on both stimulus and response types (Yoo & Lee, 2017). Specifically, the effect of reversible inactivation of the LEC or MEC with muscimol, a GABA‐A receptor agonist, was tested in various behavioral tasks using different types of stimuli (scene, object, and tactile cue) and responses (navigational and non‐navigational) (Figures 3 and 4). When visual scenes were used as cues, inactivation of the MEC impaired performance when the rat was required to make a left or right turn on a T‐maze based on the visual scenes on the surrounding LCD monitors. In contrast, when using the same scene cues but requiring the rat to make non‐navigational responses (i.e., either dig or push the sand‐filled jar), inactivation of the LEC, but not the MEC, impaired performance (Figure 4a,c). Interestingly, however, inactivation of neither the MEC nor the LEC produced an impairment when the object cue attached to the jar required the rat to either dig or push the jar, suggesting that there might be an upstream (perhaps PER‐originated) direct information‐processing channel that bypasses the entorhinal cortical areas for some object‐cued non‐navigational tasks (Yoo & Lee, 2017).

FIGURE 4.


Roles of the MEC and LEC in the scene‐ and object‐cued tasks. (a) Behavioral paradigms using scene stimuli in prior studies (Park et al., 2017; Yoo & Lee, 2017). Rats were required to associate a specific scene stimulus with a left/right turn (i.e., navigational task) or digging/pushing (i.e., non‐navigational) to obtain the reward. (b) Behavioral paradigms using object stimuli as cues to test the involvement in navigational (Park et al., 2017) and non‐navigational (Park et al., 2017; Yoo & Lee, 2017) tasks for the LEC and MEC (Yoo & Lee, 2017) and for the PER and POR (Park et al., 2017). Animals learned the same navigational and non‐navigational tasks as above but in response to objects. The PER or POR was reversibly inactivated in the same rats (Park et al., 2017). (c) Inactivating either the LEC or MEC revealed functional dissociations between the two regions based on task‐demand or response type. The LEC, but not the MEC, was required to produce correct non‐navigational responses (digging/pushing) associated with scene stimuli; the MEC, but not the LEC, was needed for navigational responses (left/right) to scene stimuli (Yoo & Lee, 2017). (d) Neither the LEC nor MEC was needed in the object‐based non‐navigational tasks. EC, entorhinal cortex; aCSF, artificial cerebrospinal fluid; MUS, muscimol

3.1. Roles of the MEC and LEC in processing visual scenes for navigation

In reviewing prior studies that used visual scenes (such as a set of distal visual landmarks) as cues, we found that the majority of those requiring spatial navigation were dependent on the MEC (Hales et al., 2014; Hales et al., 2018; O'Reilly, Alarcon, & Ferbinteanu, 2014; Sabariego et al., 2019; Steffenach et al., 2005; Van Cauter et al., 2013; Wahlstrom et al., 2018; Yoo & Lee, 2017). Generally, lesions or inactivations of the MEC impaired performance in the Morris water‐maze, in which the animal must navigate to a hidden platform location using distal cues around the maze (Hales et al., 2014; Hales et al., 2018; Van Cauter et al., 2013). In a study by Van Cauter et al. (2013), electrolytic lesions of the MEC impaired rats' ability to find the hidden platform in the water‐maze, presumably localized using room cues: the lesioned rats took longer routes to the hidden platform and spent less time in the target quadrant than both sham and LEC‐lesioned rats. Similarly, NMDA‐induced lesions of the MEC in rats resulted in impaired performance in the water‐maze (Hales et al., 2014). The MEC‐lesioned rats were impaired as severely as rats with hippocampal lesions or with combined lesions of the MEC and hippocampus. The same research group also showed previously that MEC‐lesioned rats were impaired in the water‐maze task regardless of whether the training had been conducted recently or remotely in the past (Hales et al., 2018).

The involvement of the MEC in making correct spatial choices in scene‐cued navigational tasks held true in other types of experimental environments and settings, including the T‐maze, plus maze, and Barnes maze (O'Reilly et al., 2014; Sabariego et al., 2019; Wahlstrom et al., 2018; Yoo & Lee, 2017). For example, in a spatial navigation task conducted in a plus maze, O'Reilly et al. (2014) tested rats with NMDA‐induced MEC lesions. The maze was located in a room with multiple visual cues, and rats were released from either the north or south arm to find the goal arm (e.g., east arm). The goal arm was switched to the opposite arm after 20 trials, and MEC‐lesioned rats were impaired in finding the goal arm compared to controls. In another study, optogenetic inhibition of the MEC neurons receiving projections from the basolateral amygdala resulted in performance deficits when the rat was required to find the escape port among 18 equally spaced holes along the perimeter of a circular arena (i.e., Barnes maze) (Wahlstrom et al., 2018). In Sabariego et al. (2019), rats with MEC lesions were tested on a spatial working memory task using a continuous T‐maze. In this task, rats were placed on a figure‐8‐shaped maze in an open environment with allocentric visual cues and were required to visit the two arms in an alternating fashion. NMDA‐induced MEC lesions resulted in impaired performance when a delay was imposed between trials. In addition to studies using lesion or inactivation techniques, electrophysiological studies have reported that spatial representations in the MEC can predict behavioral responses (Lipton, White, & Eichenbaum, 2007; O'Neill, Boccara, Stella, Schoenenberger, & Csicsvari, 2017; Weiss et al., 2017). In one study, neurons in the MEC recorded in a spatial alternation task on a continuous T‐maze showed differential firing rates on the stem of the maze, depending on whether the rat made a left or right turn (Lipton et al., 2007).
In addition, the ensemble activity of the MEC neurons in a spatial alternation task showed the replay of trajectories while on the maze and in post‐sleep sessions, independently of the hippocampal replay events (O'Neill et al., 2017).

Using VR experimental setups in rodents, recent studies have also shown that neural activity in the MEC, such as the typical grid cell firing pattern, can be induced by visual stimuli depicting a virtual space (Campbell et al., 2018; Doeller, Barry, & Burgess, 2010; Domnisoru, Kinkhabwala, & Tank, 2013; Low, Gu, & Tank, 2014; Schmidt‐Hieber & Hausser, 2013). VR systems have been particularly useful in dissociating idiothetic self‐motion cues from allothetic visual cues, mainly by adjusting the relationship between visual optic flow and the movement of the animal (Campbell et al., 2018; Tennant et al., 2018). Importantly, recent studies have reported that grid cells are not simply bound to self‐generated motor inputs and that visual cues exert a powerful influence on their firing patterns (Campbell et al., 2018; Casali, Shipley, Dowell, Hayman, & Barry, 2018; Kinkhabwala, Gu, Aronov, & Tank, 2020). For example, one study recorded changes in grid cell firing patterns in head‐fixed mice while manipulating the gain between locomotion and optic flow: at a gain of ×0.5, the mouse had to run twice the distance, compared to the real world, to travel a given distance in the VR environment, whereas at a gain of ×1.5, it had to run only two‐thirds of that distance. Grid cells changed their grid‐like firing patterns asymmetrically according to the gain manipulation, suggesting that locomotion, optic flow, and landmark‐based visual scene information interact in the MEC (Campbell et al., 2018). In addition, a recent study has shown that neural activity in the MEC is bound to the position of visual landmarks when the number, position, and direction of the landmarks are manipulated on a linear track (Kinkhabwala et al., 2020).
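The gain manipulation described above amounts to a simple linear scaling between treadmill (real) displacement and optic‐flow (virtual) displacement. As an illustrative sketch of the arithmetic only (the function name and distances are ours, not taken from Campbell et al., 2018):

```python
def virtual_displacement(real_cm: float, gain: float) -> float:
    """Map real (treadmill) displacement to virtual (optic-flow) displacement."""
    return gain * real_cm

# Gain x0.5: the mouse must run 200 cm on the treadmill
# to traverse 100 cm of the virtual track (twice the real-world distance).
assert virtual_displacement(200.0, 0.5) == 100.0

# Gain x1.5: only two-thirds of the real-world distance (~66.7 cm)
# is needed to cover the same 100 virtual cm.
assert abs(virtual_displacement(100.0 * 2 / 3, 1.5) - 100.0) < 1e-9
```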
Taken together, these empirical studies clearly demonstrate that the MEC computes spatial responses during goal‐directed navigation in accordance with visual information.

Compared to the MEC, fewer empirical studies have reported that LEC lesions impair performance in scene‐based navigation tasks. In one study, Steffenach et al. (2005) trained rats to search for a hidden platform location in a water‐maze surrounded by distal visual landmarks and then lesioned the LEC. The LEC‐lesioned rats spent less time in the target quadrant than controls, suggesting that the LEC might be involved in retrieving spatial memory. However, when the goal platform was moved to a new location, the lesioned group showed longer escape latencies during the early part of learning but caught up to the control animals after 4 days of training. As far as we know, this is the only study that has reported performance deficits with lesions in the LEC; it is possible that the LEC may be partially involved in learning to use a particular distal landmark (in this case, perhaps, associating the goal with a specific local portion of the scene). Other studies testing LEC‐lesioned rats in the water‐maze did not report similar deficits (Liu & Bilkey, 1998a; Van Cauter et al., 2013). Moreover, evidence suggests that the LEC is not involved in spatial navigation using path integration, whereas the MEC is needed in the same situation (Van Cauter et al., 2013).

3.2. The roles of the MEC and LEC in processing visual scenes for non‐navigational tasks

Although it is common to test rodents in navigational tasks using distal visual cues in the testing room, as described above, inactivation studies in which visual scenes cue a non‐navigational response (e.g., object manipulation) are difficult to come by in the literature. As described earlier, this was tested in our laboratory by training rats either to push a sand‐filled jar or to dig in it, depending on the visual scene presented on the surrounding LCD panels (Figure 4a) (Yoo & Lee, 2017). Performance in this scene‐cued non‐navigational task was unaffected by inactivation of the MEC but was impaired when muscimol was injected into the LEC (Figure 4c). Interestingly, rats with intracranial cannula tips located in the deep layers of the LEC were more impaired than rats with cannula tips in the superficial layers. This suggests that top‐down information regarding the task demand for a non‐navigational response (i.e., object manipulation) could be fed to the deep layers of the LEC, for example, from the hippocampal formation (i.e., the hippocampus and subiculum) or the mPFC (Sesack, Deutch, Roth, & Bunney, 1989). Such top‐down information may also influence the MEC in scene‐cued navigational tasks.

3.3. Involvement of the MEC and LEC in processing objects in navigational tasks

In contrast to studies that have used scenes as cues, very few studies have directly investigated the role of the LEC or MEC by adopting an object‐cued navigation paradigm. In a related study by Robinson et al. (2017), rats were trained on a triangular linear track to sample an object before entering a treadmill area in which they experienced a delay period. Upon exiting the treadmill zone, depending on the object cue sampled before the delay, rats were required either to dig in a sand‐filled pot placed immediately next to the exit of the treadmill to receive the food reward there or to ignore the pot and move on to receive a reward from another location. Although an object (i.e., a sand‐filled ceramic pot) was used in this study, choosing the correct spatial location for reward was the main goal of the experimental task. In that sense, the behavioral paradigm used in the Robinson et al. study qualifies as an object‐cued navigational paradigm. When the MEC was optogenetically inhibited during the delay period on the treadmill, the rats' performance was impaired. Interestingly, inhibition of the MEC during the object‐sampling period did not produce such a deficit, suggesting that the MEC is not involved in object recognition per se. Instead, it is possible that the object‐cued navigational information needs to be retrieved (and maintained if there is a delay) by the MEC.

4. THE ROLES OF THE PER AND POR IN GOAL‐DIRECTED RESPONSES TO SCENES AND OBJECTS

Disruption of the PER and POR, the upstream regions of the LEC and MEC, also results in impairment in behavioral paradigms using visual scenes and objects as cues in association with navigational and non‐navigational response types. However, compared to the MEC or LEC, the behavioral deficits are generally milder with inactivation of the PER or POR, and the strong response‐based double dissociation found between the MEC and LEC is not observed between the PER and POR (Park et al., 2017). Specifically, when scenes were used as cues, inactivating either the PER or POR with muscimol significantly disrupted performance in both navigational and object‐manipulation (or non‐navigational) tasks (Figure 5a). When objects were used as cues, similar disruptive effects were observed for both types of tasks with inactivation of the PER (Figure 5b) (Ahn & Lee, 2015; Park et al., 2017), whereas the effects of POR inactivation were more mixed. That is, for the navigational task, POR inactivation caused a performance deficit regardless of the cue, as expected. In the non‐navigational task, significant disruptions occurred for both scenes and visual objects (with objects encased in transparent acrylic; Figure 5b inset) but not for manipulable 3D objects, which may be attributed to the availability of other sensory cues such as odors (Park et al., 2017). These results from our laboratory collectively suggest that both the PER and POR play some role in recognizing objects and scenes and that there is little interaction with the navigational (or non‐navigational) nature of the intended behavior (Figures 2 and 3). The deficit caused by inactivation of the PER was more severe for objects (both visual and 3D objects), whereas the deficit caused by inactivation of the POR was more across‐the‐board and visual in nature.

FIGURE 5.

Involvement of the perirhinal cortex (PER) and postrhinal cortex (POR) in the scene‐ and object‐based navigational and non‐navigational tasks. (a) Inactivating the PER or POR in the scene‐based tasks resulted in similar performance deficits, regardless of the response type. (b) Inactivating the PER produced performance deficits in object recognition tasks irrespective of response type; inactivating the POR resulted in overall deficits, especially when visual cues were involved, but not when multimodal experience with objects was available in the non‐navigational task

4.1. General but varied involvement of PER and POR in processing visual scenes for navigation

The results from our laboratory largely converge upon those from past studies of PER and POR function. However, most tasks in the literature using visual scenes involve searching for specific goal locations in a water‐maze or radial‐arm maze, and it is difficult to come across a study in which scene‐based non‐navigational responses were required. In this section, we first review existing studies that have tested the role of PER and POR in scene‐based navigation tasks.

Lesions in the PER after learning have been reported to disrupt the retrieval of memory in scene‐cued navigational tasks (Abe, Ishida, Nonaka, & Iwasaki, 2009; Mumby & Glenn, 2000; Ramos, 2013b; Ramos & Vaquero, 2005). However, the results are sometimes mixed, partially due to the lack of clarity on whether the involvement of the PER is primarily in the acquisition or retrieval phase of learning, given that some studies report deficits in performance when lesions were made in the PER before learning (Abe et al., 2009; Liu & Bilkey, 1998a, 1998b, 1999, 2001; Wiig & Bilkey, 1994) while other studies do not (Aggleton et al., 2010; Davies et al., 2007; Glenn & Mumby, 1998; Liu & Bilkey, 2001; Machin, Vann, Muir, & Aggleton, 2002; Moran & Dalrymple‐Alford, 2003; Mumby & Glenn, 2000; Prusky et al., 2004; Ramos, 2002, 2013a). If visual scene‐based information processing can occur in other areas (such as the POR) in parallel, it is possible that lesions in the PER could produce different results depending on the timing of the lesions and the learning stage of the animal at the time of lesioning.

Another source of variability is the length of the delay implemented across studies, with some requiring long delays and others short delays in delayed nonmatching‐to‐place (DNMP) tasks (Liu & Bilkey, 1998a, 1998b, 1999, 2001; Prusky et al., 2004). One study reported that PER‐lesioned rats exhibited delay‐dependent performance deficits in a water‐maze when tested in a probe trial (Liu & Bilkey, 1998b). However, other studies have reported no significant deficits in working memory performance caused by PER lesions (Glenn & Mumby, 1998; Machin et al., 2002; Moran & Dalrymple‐Alford, 2003). For example, Glenn and Mumby (1998) tested the effect of temporal delay on PER‐lesioned rats in a DNMP paradigm in the water maze, and the PER‐lesioned group showed no deficits at any delay duration (4–300 s). Why PER lesions have produced such mixed results in prior studies remains a matter of speculation at this point. In addition to important differences among the studies, such as the timing of lesioning with respect to learning or the length of the delay, variability in the extent of PER lesions across studies might be another source of variance in the behavioral results. Furthermore, rat strain differences among studies might also have produced discrepant outcomes (Aggleton, Kyd, & Bilkey, 2004). Specifically, most of the studies that showed impairment with PER lesions used albino Sprague–Dawley rats (Abe et al., 2009; Liu & Bilkey, 1998a, 1998b, 1999, 2001; Wiig & Bilkey, 1994). On the other hand, studies reporting no impairment with PER lesions used pigmented strains, such as Long–Evans rats (Glenn & Mumby, 1998; Mumby & Glenn, 2000; Prusky et al., 2004), hooded rats (Aggleton et al., 2010; Moran & Dalrymple‐Alford, 2003), and dark agouti rats (Davies et al., 2007; Machin et al., 2002).
It has been demonstrated psychophysically, using vertical grating stimuli, that albino strains show significantly lower visual acuity than pigmented rats (Prusky, Harker, Douglas, & Whishaw, 2002). Such differences in basic visual functions (Harker & Whishaw, 2002) might have interacted with the experimental designs in prior studies.

By contrast, although only a few studies have tested the role of the POR in scene‐cued navigational tasks (Liu & Bilkey, 2002; Ramos, 2013b), the results so far are quite clear. Liu and Bilkey (2002) conducted three types of scene‐cued spatial tasks in which rats were required to choose the correct place using distal visual cues on the walls of the experimental room. The POR‐lesioned group showed significant deficits during the acquisition phase and also in the working memory version of the task. The POR‐lesioned rats were also significantly impaired, compared to controls, in a DNMP task in a T‐maze. Similar effects were found in a standard spatial memory task in a radial‐arm maze. In another study using a plus‐maze surrounded by visual cues, Ramos (2013b) trained rats to choose the west arm to obtain the reward. The starting arm was randomized to force the rats to use visual scene cues and to prevent an egocentric strategy. In this task, the POR‐lesioned group showed a significant deficit in the retention test. These results suggest that the POR may play key roles in scene‐cued navigation. Physiological findings that single units in the POR exhibit positional correlates in a spatial working memory task conducted in a plus‐maze support these behavioral results (Burwell & Hafeman, 2003).

An electrophysiological study from our laboratory revealed that neurons in the PER and POR show neural firing correlates for both scene and object stimuli (Ahn & Lee, 2017). We trained rats to approach a response box and either dig in it or push it (thus requiring different motor responses), depending on either visual scenes presented on the surrounding LCD panels in some trials or object cues directly attached to the response box in other trials. The scene‐ and object‐cued trials were pseudorandomly interleaved throughout a session. In this study, we found that the firing patterns of single units were correlated with both scene and object cues in both the PER and POR. However, more cells in the PER showed object‐related responses than in the POR, and the time courses with which cells fired for objects and scenes differed between the PER (object correlates appeared earlier than scene correlates) and the POR (object and scene correlates appeared at approximately the same time).

Overall, experiments so far suggest that the PER and POR are both generally involved in scene‐based navigation tasks (Figure 3). In scene‐based non‐navigational tasks, despite the lack of past studies in the literature, the experimental results from our laboratory suggest that the two areas also play some varied roles, with the PER more heavily attuned to objects (than scenes) and the POR more attuned to the visual modality (rather than stimulus category).

4.2. Involvement of the PER and POR in object recognition

We saw above that the POR is important mostly for using visual scenes (and also for visually recognizing objects) to make spatial choices in navigational tasks. The PER is also involved in scene‐based tasks to some degree, but with a more varied set of results. When it comes to object processing, however, the functional importance of the PER is more clear‐cut. For example, in a typical goal‐directed object discrimination task in which one of the presented objects is designated as the rewarding stimulus (i.e., with a food well underneath the object), multiple studies report performance deficits in PER‐lesioned groups (Abe et al., 2009; Mumby & Glenn, 2000; Myhrer, 2000; Wiig & Bilkey, 1995; Winters et al., 2010), with very few exceptions (Clark et al., 2011). For example, in a delayed nonmatch‐to‐sample task, rats first experienced a sample object by displacing it to find a food pellet. After the sampling phase, rats encountered both the sampled object and a novel one, and choosing the novel object resulted in a reward. Lesions in the PER resulted in performance deficits in this task (Wiig & Bilkey, 1995).

It is worth mentioning that there has been a debate over whether the role of the PER in object recognition is dedicated to the mnemonic domain (e.g., during a delay period in the absence of the object) or whether it can also participate in object perception (Baxter, 2009; Squire, Stark, & Clark, 2004; Suzuki, 2009; Suzuki & Naya, 2014). Researchers arguing for the perceptual functions of the PER have proposed that the PER is essential for object perception when there exists feature ambiguity (Bussey & Saksida, 2002, 2007). According to this hypothesis, when an object is composed of complex features that can be easily confused with similar ones, the PER may resolve such feature ambiguity during the perceptual stage. Results from behavioral studies that support both perceptual and mnemonic arguments can be found in the literature (Bartko, Winters, Cowell, Saksida, & Bussey, 2007; Buckley, Booth, Rolls, & Gaffan, 2001; Bussey, Saksida, & Murray, 2002; Eacott, Gaffan, & Murray, 1994). Discussing this perceptual versus mnemonic debate in detail is beyond the scope of this review mainly because the debate is about how the PER might carry out object recognition, not necessarily whether it does. The results of the debate therefore do not change our working model (i.e., the GIST model). Nonetheless, our laboratory has shown that single units in the PER exhibit the neural correlates for both perceptual and mnemonic variables when rats are required to make discrete choices in response to morphed, ambiguous objects (Ahn & Lee, 2017). Specifically, in Ahn and Lee's study, rats were initially trained to recognize only two objects and their associated responses (touching left vs. right disc on a touchscreen panel). At the time of recording single units from the PER, rats had to recognize one of the morphed versions of the original object. 
The firing rates of some single units in the PER changed as a function of the level of perceptual morphing of the original objects, whereas those of other single units changed in a stepwise fashion according to the behavioral choices associated with the object cues. Although these results may be consistent with the argument that the PER participates in both perception and memory of objects, the object stimulus in our study was presented against a black background on a touchscreen monitor. In naturalistic settings, however, an object rarely appears without a background visual scene, and animals, including humans, may need to cognitively bring the object of interest to the foreground, potentially by theta rhythm‐based pruning of noise in neuronal spiking (Ahn et al., 2019). Therefore, it remains to be seen whether the question of perceptual versus mnemonic functions of the PER is valid in natural settings as opposed to artificially controlled experimental settings.

Compared to PER‐related studies, fewer studies have tested the roles of the POR in object‐cued behavioral tasks. In one study, Myhrer (2000) tested rats in an object discrimination task in which rats had to approach a black cylindrical object, while ignoring two gray cylindrical objects, to obtain a water reward directly from the correct object. The POR‐lesioned group showed a significant impairment in performance during the acquisition of the task, but not during the retention stage (13 days after acquisition). In contrast, the PER‐lesioned group showed deficits in both acquisition and retention periods in the same task. The results from the Myhrer study indicate that there may be an early phase of task acquisition in which both the PER and POR are recruited, as the animal learns to effectively identify the objects from the background scene. Once this is achieved, object recognition mostly requires the PER. This possibility will be further discussed toward the end of the current review; overall, the involvement of the PER in object‐cued tasks seems essential, whereas the role of the POR is limited (Park et al., 2017). Importantly, whether the task‐related response is navigational does not exert great influence on information processing in the PER and POR, contrary to what we have found in the LEC and MEC, as reported above.

5. TASK‐DEPENDENT PROCESSING OF SCENES IN PARAHIPPOCAMPAL REGIONS, AND ITS POTENTIAL GENERALIZABILITY ACROSS SPECIES

Our review of the hippocampal literature has revealed several noteworthy points about the existing research. First, from the viewpoint of investigating the dependence of hippocampal activation on task demand, research in this field has disproportionately used spatial navigational paradigms as opposed to non‐navigational ones. This trend is most pronounced in rodent studies, which have deep historical roots in cognitive map theory and have traditionally tested spatial behavior in mazes (Barnes, 1979; Morris et al., 1982; Olton & Samuelson, 1976; Tolman, 1948). In addition, remarkable discoveries of spatially specific cell types in the hippocampal formation, such as place cells and grid cells (Fyhn, Molden, Witter, Moser, & Moser, 2004; O'Keefe & Dostrovsky, 1971), have also contributed to the field's general focus on navigation. In comparison, the involvement of hippocampal memory systems in object‐related information processing has not been as thoroughly investigated, perhaps as a consequence of the popular characterization of the hippocampus as a cognitive map.

Second, with respect to stimulus type, traditional tests of hippocampal function have not extensively explored the nature of the environmental representations that might be used by the animals. For example, it has been a long‐standing assumption in many studies that animals use visual objects and landmarks available around the testing room as allocentric cues to compute their location and heading. Although physiological evidence of place cells following a distal cue, in addition to behavioral evidence, has largely supported this assumption, it provides only limited support because of both the simplicity of cues in impoverished environments (e.g., a cylinder or square arena with a single polarizing visual cue) and the difficulty of isolating and controlling perceptual and idiothetic cues in a typical experimental setting. This trend has been changing in the last decade, as controlled visual experiments and VR studies have revealed that rodent behavior is more visually guided than previously thought (Cushman et al., 2013; Holscher, Schnee, Dahmen, Setia, & Mallot, 2005; Sato et al., 2017; Youngstrom & Strowbridge, 2012). Although some key inputs, such as vestibular information, are minimal when rodents are tested in a VR setting, a growing number of studies are converging on the finding that visual information alone can drive the hippocampal system's spatial information processing to a sufficient level of precision (Acharya et al., 2016; Aronov & Tank, 2014; G. Chen et al., 2013; Cohen et al., 2017; Ravassard et al., 2013).

Third, the conventional dual‐stream model has treated information processing in the hippocampal memory systems as somewhat passive, largely predetermined by the characteristics of sensory inputs (i.e., spatial vs. nonspatial). In experimental conditions such as random foraging in an open space, this might be true and, perhaps, even adaptive (Deshmukh et al., 2012; Hafting et al., 2005; Hargreaves, Rao, Lee, & Knierim, 2005; Savelli et al., 2008; Solstad et al., 2008). However, animals, including humans, actively process various stimuli in a goal‐directed fashion in natural settings, with intentions and behaviors that change flexibly as they experience different events.

With the abovementioned points in mind, it is encouraging that developments in visual scene‐based information processing in the rodent hippocampal network have raised new potential for bridging the gap between rodent research and primate/human hippocampal memory research (Rolls & Wirth, 2018). Traditionally, visual scenes have been used extensively in studies of primates but have rarely been manipulated directly as target stimuli in rodent studies, presumably because of the assumption that visual information processing is less important in nocturnal animals. An intriguing review by Rolls and Wirth (2018) suggests that visual information is equally important in primates and rodents. However, because of the differences in visual systems between the two species, primates and rodents may use visual scenes in qualitatively different ways. For example, a large part of primate visual scene processing is based on foveal vision and where the eyes fixate in the scene, whereas the lack of a fovea in rodents and the lateral placement of their eyes allow them to use a panoramic view covering almost 300° of the surrounding environment (de Araujo, Rolls, & Stringer, 2001). Consequently, rodents may need to move around in a space constantly to compute precise location information based on largely overlapping scenes, presumably with the help of idiothetic spatial signals and other sensory information such as olfactory and auditory cues. By contrast, primates may be able to create a cognitive map more efficiently by sampling different views of the environment with eye movements alone, without necessarily moving their bodies through space.

The critical differences in visual systems between primates and rodents described above for scenes may also influence recognizing an object against its background scenes in the two species differently. In primates, single neuronal activity in the IT cortex, the area that includes the PER and the area TE, is associated not only with the object in the fovea but also with the objects in the perifoveal visual field (Aggelopoulos & Rolls, 2005). Therefore, it is possible that a population of IT cortical neurons may convey scene information to some extent if one accepts the idea that a scene is normally composed of multiple objects. Interestingly, the presence of complex visual scenes in the background of objects makes the receptive field of IT cortical neurons tighter (Rolls, Aggelopoulos, & Zheng, 2003), which in turn may help primates to dissociate the object of interest from the background scene. That is, there might be an initial recognition stage in which scene information is more dominantly processed until the object is singled out from its background, especially in a cluttered or novel environment. If that is the case, the object recognition system involving the IT cortex and PER may participate in scene information processing to some extent during the initial phase of object recognition in natural settings. In primates, this stage may last only a short period of time because of their excellent foveal and stereoscopic vision.

By contrast, rodents lack foveal vision, and it is still unknown whether PER neurons (or those in the temporal associational cortical regions in general) also dynamically reduce the sizes of their receptive fields as in primates. Unlike primates, the lack of foveal and stereoscopic vision may make it difficult for rodents to single out an object stimulus from its background from a distance based on visual information alone. Instead, rodents may need to first spot the approximate area where the object is likely to be found by using visual scene information to approach the object of interest. Then, rodents may need to physically explore the object, including sniffing, touching, and chewing it, for multimodal recognition. If this is the case, the early visual recognition system in rodents should process both scene and object information up to the point where the object is cognitively separated from its background. Based on our experimental data, the PER and POR, positioned relatively early in MTL information processing, may both play some role in the earlier processing of scenes before discrete objects are picked out and recognized from the background. However, once an object (or multiple objects) is detected, the identity of the object is recognized by the PER regardless of the nature of the response (e.g., spatial navigation, object manipulation). In the PER, the number of spikes of an object‐selective neuron decreases, in relation to theta rhythm, as the object is repeatedly encountered by an animal (so‐called repetition suppression; Brown, Wilson, & Riches, 1987; Fahy, Riches, & Brown, 1993; Li, Miller, & Desimone, 1993; Miller, Li, & Desimone, 1993), and this "pruning" of noise might constitute physiological evidence of the PER's role in object recognition in a cue‐rich environment (Ahn et al., 2019).

The aforementioned importance of visual information processing may call for a modification of the traditional dual‐stream model, mainly because the functions of the different parahippocampal regions are currently assigned based on a somewhat abstract categorization of whether a stimulus would give rise to spatial cognition, without specifying the nature of the input representation, in this case, the visual perception of the external stimulus. This becomes a problem when the model must predict the involvement of a brain region in a task in which it is not obvious a priori whether the stimulus is spatial or nonspatial. For example, a relatively distal object in an otherwise cue‐free environment may serve as an important landmark to the rat and thus be processed as a spatial stimulus that informs the animal of its location, whereas the same object found in a natural cue‐rich environment (and thus embedded in a more complex panoramic image to the rat) may not be used in the same way. Similarly, visual scenes may well be used for spatial localization, but they may also inform the animal more generally of the current context. In our previous experiment reporting the task‐dependent double dissociation between the MEC and LEC (Yoo & Lee, 2017), the same visual scene stimuli were sometimes used to guide rats spatially to certain goal locations and, at other times, to guide the animal to the correct action to perform on the target objects. In the latter case, visual scenes functioned more like a context or "occasion setter" in Pavlovian conditioning (Hirsh, 1974; Yoon et al., 2011). Thus, our results suggest that the type of stimulus (e.g., visual scene) alone may not predict which information pathway is activated within the entorhinal cortex and that task demand plays a key role in guiding sensory information through the LEC or MEC en route to the hippocampus.

We have investigated the relative contributions of the major cortical areas in the parahippocampal region using the same behavioral paradigms, but we have not tested the roles of the MEC and LEC in the object‐based navigational task (Figure 3). It is also still unknown whether the hippocampus is necessary for the object‐based non‐navigational task (Figure 3). Although these are empirical questions to be answered by future studies, we may speculate on some possible scenarios. With respect to the involvement of the MEC and LEC in the object‐cued navigational task, it is possible that the same dissociation seen between the MEC and LEC in the scene‐cued navigational task might also hold in the object‐cued navigational task. That is, the MEC, but not the LEC, might be required in the navigational task, whether it is scene‐ or object‐cued. Such results would strongly suggest that the spatial response requirement of a task is the key to recruiting the MEC, but not the LEC. The opposite pattern of results (i.e., the LEC, but not the MEC, is involved in the object‐cued navigational task) is unlikely because it would require the assumption that the LEC is engaged whenever an object is used as the cue, which was not the case in our previous study (Figure 3; the object‐cued non‐navigational task in Park et al., 2017). It is also possible that neither region is involved in our object‐cued behavioral tasks in general. Such results would be surprising because the upstream regions, the PER and POR, are both important in those tasks. Nonetheless, if true, they would imply that the MEC and LEC are driven by scenes (or scenes together with objects) but not by an object alone. Finally, if both the MEC and LEC are engaged in the object‐cued navigational task, that would imply that there are no task demand‐based interactions between stimulus and response types for object‐cued tasks.
However, the overall pattern of results of our studies for other conditions (Figure 3) makes this last scenario unlikely.

6. GIST AS A WORKING MODEL FOR INFORMATION PROCESSING IN THE PARAHIPPOCAMPAL REGION

Our current working GIST model posits a hypothetical information flow in the parahippocampal cortical areas when an animal uses a scene or object as a cue to make a goal‐directed behavioral response (Figure 2b). The GIST model was developed as a modification of the traditional dual‐stream model to illustrate how scene and object stimuli may be processed, based on the experimental studies reviewed here. According to the model, an external stimulus should first be recognized at the level of the PER and POR (PHC in primates); initially, both the PER and POR may process the visual cues (e.g., during the acquisition phase of the task) until the animal effectively picks out the object from the background scene. Once the system learns to identify the target object, the POR may no longer be needed unless the task requires the animal to continue to represent the visual scene in the object's background. Because an object and its associated background may be treated as a coherent visual scene, the POR may still play a role when the object is identified only visually (i.e., without further interaction with the object) (Park et al., 2017).

Next, once an object or visual scene is recognized, its representation is fed to either the MEC or the LEC depending on task demand (i.e., a goal‐related navigational or non‐navigational behavioral response). In laboratory tests, the task demand is largely set by the reward‐associated behaviors that must be learned during acquisition. Regions such as the PFC, PPC, and RSC may interact with the MEC and LEC at this stage to bias the information‐processing channels according to reward‐based decision‐making or the planning of specific actions (Calton & Taube, 2009; Hernandez et al., 2017; Kesner et al., 1989; Kim et al., 2011; Kolb et al., 1994; Lee & Lee, 2013a; Lee & Shin, 2012; Lee & Solivan, 2008; Miller et al., 2019; Mushiake et al., 2006; Sakata et al., 1995; Vann & Aggleton, 2002; Wise et al., 1997). If the task demand involves navigating through space to reach a target location, the MEC, but not the LEC, is predominantly engaged, and computing spatial navigational information from visual cues (scenes and objects) takes precedence. In contrast, a non‐navigational demand involving scene‐based action requires the LEC, but not the MEC, to use visual scenes as cues for performing the action, whereas the same response depends on neither the LEC nor the MEC when an object alone is used as the cue. In natural settings, these navigational and non‐navigational responses would occur dynamically across time and space as an animal engages in various types of tasks with multiple goals, which may explain the heavy anatomical interconnections between the MEC and LEC. Receiving these inputs from the MEC and LEC, the hippocampus may then organize navigational and non‐navigational events into episodic memory representations across time.
The resulting hippocampal representations should allow more flexible cognitive operations than those in the parahippocampal region: they are not necessarily bound to particular views of scenes or objects and can tolerate some degradation and modification of the stimuli. Crucially, this episodic organization of memories allows the hippocampus to simulate future actions and make adaptive responses based on the animal's past experiences; this process, in turn, may ultimately influence upstream information processing in areas including the MEC and LEC via the subiculum.
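As a purely illustrative summary (not part of any published model specification), the hypothesized flow described above can be sketched in a few lines of Python. The function names, string labels, and routing rules below are our simplified, hypothetical restatement of the proposal: both the PER and POR participate in the recognition stage, and goal‐directed task demand then selects the entorhinal channel.

```python
# Illustrative sketch of the GIST routing hypothesis (all names hypothetical).
# Stage 1: PER/POR recognition; Stage 2: task demand selects MEC vs. LEC.

def perirhinal_postrhinal_stage(stimulus):
    """Both PER and POR process the cue; the dominant area depends on
    whether the stimulus is a scene (POR-weighted) or object (PER-weighted)."""
    return ["POR", "PER"] if stimulus == "scene" else ["PER", "POR"]

def entorhinal_stage(stimulus, task_demand):
    """Channel selection by goal-directed task demand."""
    if task_demand == "navigational":
        # Spatial route computation engages the MEC, not the LEC.
        # (MEC involvement in the object-cued navigational task is untested.)
        return ["MEC"]
    if stimulus == "scene":
        # Scene-cued non-navigational action requires the LEC, not the MEC.
        return ["LEC"]
    # Object-cued non-navigational action required neither region
    # (Park et al., 2017).
    return []

def gist_route(stimulus, task_demand):
    """Trace a stimulus through the hypothesized parahippocampal stages."""
    return {
        "recognition": perirhinal_postrhinal_stage(stimulus),
        "entorhinal": entorhinal_stage(stimulus, task_demand),
        "target": "hippocampus",
    }

# Example: a visual scene cueing a non-navigational action engages
# the LEC channel in this sketch.
print(gist_route("scene", "non-navigational")["entorhinal"])  # -> ['LEC']
```

The sketch makes explicit that the pathway is a function of both arguments: the stimulus type alone does not determine the channel, which is the central claim distinguishing the GIST model from the traditional dual‐stream account.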

7. THE HIPPOCAMPAL MEMORY SYSTEM AS A GENERAL COGNITIVE SPACE FOR ORGANIZED INTERACTION WITH THE ENVIRONMENT

Almost all events that animals experience occur in association with a particular space and time. Decades of research confirm that the hippocampal and parahippocampal memory systems are vital to spatial information processing (Boccara et al., 2010; Fyhn et al., 2004; Hargreaves et al., 2005; LaChance, Todd, & Taube, 2019; Muller & Kubie, 1987; O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Solstad et al., 2008; Taube, Muller, & Ranck Jr., 1990) and that the hippocampal formation automatically maps out an environment while remembering past locations and paths. However, such spatiotemporal aspects of an animal's behavior are driven by multiple underlying goals that can change dynamically with the external stimuli. That is, in natural settings, animals, including humans, often navigate space with goals in mind, and the task demand that must be met to achieve those goals may exert a powerful influence on how animals selectively process and react to the environmental stimuli they encounter along the way. In this review, we have argued that the traditional dual‐stream model may be improved by incorporating such task demands when the model tries to explain how a stimulus in the environment is processed through a particular information‐processing channel within the parahippocampal network. We have discussed here only how visual scenes and objects may be channeled differently according to task demand in the parahippocampal region, but this principle may also apply to other types of stimuli (e.g., social conspecifics) that activate the hippocampal memory systems (Rao, von Heimendahl, Bahr, & Brecht, 2019).

In his influential work, Howard Eichenbaum argued that the hippocampus is more a map for episodic memory than a purely spatial map that solely represents the animal's location (Eichenbaum & Cohen, 2014; Eichenbaum, Dudchenko, Wood, Shapiro, & Tanila, 1999). This idea was foreshadowed in the original thesis of O'Keefe and Nadel, in which the concept of the hippocampus as a cognitive map was defined (O'Keefe & Nadel, 1978), and it is consistent with even older findings of the indispensable role of the hippocampus in human episodic memory, starting with the famous case of patient H.M. (Scoville & Milner, 1957). Other research groups have also recently argued that the hippocampus may function as a general cognitive space in which the relationships among individual components of various dimensions are formed into a map‐like structure (Aronov, Nevers, & Tank, 2017; MacDonald, Lepage, Eden, & Eichenbaum, 2011; Pastalkova, Itskov, Amarasingham, & Buzsaki, 2008). According to this idea, the low‐level dimensions fed to the hippocampus may go beyond space, including abstract and conceptual dimensions such as sound and semantics (Aronov et al., 2017; Solomon, Lega, Sperling, & Kahana, 2019). In this sense, grouping all these diverse attributes as “nonspatial” has become too simplistic an approach and needs to be reconsidered. Once a map representing the relationships among these attributes is formed in the hippocampus, various cognitive operations become possible based on the information that can be computed from the map. The system's adaptive behavior improves accordingly, as its capability of interpreting and predicting events in a dynamically changing environment (as opposed to relying on individual static stimuli) may significantly increase.

Our current review has emphasized that goal‐based task demand, here divided largely into navigational and non‐navigational categories, must be taken into account to understand the interactive network of information processing in the parahippocampal region, especially in the LEC and MEC. In particular, we have used the example of a visual scene to demonstrate that, depending on the goal‐directed behavior, a scene can be processed either “spatially,” to compute a path cued by the scene, or “nonspatially,” to perform an object‐directed behavior associated with the scene. Upstream of the entorhinal cortex, the PER and POR so far appear to be less influenced by task demand, but further studies are needed to ascertain their functions with respect to memory and cue specificity. These two areas may be critical in recognizing objects and scenes, sometimes separately and at other times as a whole, perhaps depending on the goal‐defined task demand. Further examination of the functional roles of these areas upstream of the hippocampus, across goal‐directed tasks involving various types of environmental cues, will allow a more complete and accurate model of information processing, particularly in the currently unexplored nonspatial or non‐navigational domains.

In conclusion, categorizing input stimuli to the hippocampal system as spatial or nonspatial has been fruitful both in dissociating different cortical areas in the parahippocampal region and in guiding experimental research on how such information is processed in different subfields of the hippocampus (Deshmukh et al., 2012; Deshmukh & Knierim, 2011; Hargreaves et al., 2005; Henriksen et al., 2010; Keene et al., 2016; Lu, Igarashi, Witter, Moser, & Moser, 2015; Oliva, Fernandez‐Ruiz, Buzsaki, & Berenyi, 2016). We have introduced the GIST model to update the traditional dual‐stream model so that it can accommodate both the recent discoveries of hippocampal dependence on visual scenes and the long‐held view of task‐based modulation of MTL information processing (Kim et al., 2012; Lee et al., 2018; Lee et al., 2014; Park et al., 2017; Yoo & Lee, 2017). Recent technical advances in monitoring large populations of neurons and manipulating their activity with temporal precision in animals raise hopes of testing such theoretical models (Boyden, Zhang, Bamberg, Nagel, & Deisseroth, 2005; Chen et al., 2013; Cushman et al., 2013; Dombeck et al., 2010; Low et al., 2014). Yet what may be more crucial, as we have realized through this review, is to conduct experiments in which we objectively and explicitly define both the environmental stimuli and the goal‐directed task that drive information processing and, ultimately, behavior (Krakauer, Ghazanfar, Gomez‐Marin, MacIver, & Poeppel, 2017).

ACKNOWLEDGMENTS

The authors thank Hyeri Hwang, Sohee Park, Bona Lee, and Jae‐Min Seol for their help in preparing the manuscript. This study was supported by basic research grants (NRF—2017M3C7A1029661, 2018R1A4A1025616, 2019R1A2C2088799) from the National Research Foundation of Korea and the BK21 FOUR Program.

Lee S‐M, Jin S‐W, Park S‐B, et al. Goal‐directed interaction of stimulus and task demand in the parahippocampal region. Hippocampus. 2021;31:717–736. 10.1002/hipo.23295

Funding information BK21 FOUR Program; National Research Foundation of Korea, Grant/Award Numbers: 2019R1A2C2088799, 2018R1A4A1025616, 2017M3C7A1029661

DATA AVAILABILITY STATEMENT

The data of the current study are available from the corresponding author upon reasonable request.

REFERENCES

  1. Abe, H., Ishida, Y., Nonaka, H., & Iwasaki, T. (2009). Functional difference between rat perirhinal cortex and hippocampus in object and place discrimination tasks. Behavioural Brain Research, 197(2), 388–397. 10.1016/j.bbr.2008.10.012 [DOI] [PubMed] [Google Scholar]
  2. Acharya, L., Aghajan, Z. M., Vuong, C., Moore, J. J., & Mehta, M. R. (2016). Causal influence of visual cues on hippocampal directional selectivity. Cell, 164(1–2), 197–207. 10.1016/j.cell.2015.12.015 [DOI] [PubMed] [Google Scholar]
  3. Aggelopoulos, N. C., & Rolls, E. T. (2005). Scene perception: Inferior temporal cortex neurons encode the positions of different objects in the scene. The European Journal of Neuroscience, 22(11), 2903–2916. 10.1111/j.1460-9568.2005.04487.x [DOI] [PubMed] [Google Scholar]
  4. Aggleton, J. P., Albasser, M. M., Aggleton, D. J., Poirier, G. L., & Pearce, J. M. (2010). Lesions of the rat perirhinal cortex spare the acquisition of a complex configural visual discrimination yet impair object recognition. Behavioral Neuroscience, 124(1), 55–68. 10.1037/a0018320 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Aggleton, J. P., Kyd, R. J., & Bilkey, D. K. (2004). When is the perirhinal cortex necessary for the performance of spatial memory tasks? Neuroscience and Biobehavioral Reviews, 28(6), 611–624. 10.1016/j.neubiorev.2004.08.007 [DOI] [PubMed] [Google Scholar]
  6. Ahn, J. R., Lee, H. W., & Lee, I. (2019). Rhythmic pruning of perceptual noise for object representation in the hippocampus and perirhinal cortex in rats. Cell Reports, 26(9), 2362–2376 e2364. 10.1016/j.celrep.2019.02.010 [DOI] [PubMed] [Google Scholar]
  7. Ahn, J. R., & Lee, I. (2014). Intact CA3 in the hippocampus is only sufficient for contextual behavior based on well‐learned and unaltered visual background. Hippocampus, 24(9), 1081–1093. 10.1002/hipo.22292 [DOI] [PubMed] [Google Scholar]
  8. Ahn, J. R., & Lee, I. (2015). Neural correlates of object‐associated choice behavior in the perirhinal cortex of rats. Journal of Neuroscience, 35(4), 1692–1705. 10.1523/JNEUROSCI.3160-14.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Ahn, J. R., & Lee, I. (2017). Neural correlates of both perception and memory for objects in the rodent perirhinal cortex. Cerebral Cortex, 27(7), 3856–3868. 10.1093/cercor/bhx093 [DOI] [PubMed] [Google Scholar]
  10. Albasser, M. M., Davies, M., Futter, J. E., & Aggleton, J. P. (2009). Magnitude of the object recognition deficit associated with perirhinal cortex damage in rats: Effects of varying the lesion extent and the duration of the sample period. Behavioral Neuroscience, 123(1), 115–124. 10.1037/a0013829 [DOI] [PubMed] [Google Scholar]
  11. Aronov, D., Nevers, R., & Tank, D. W. (2017). Mapping of a non‐spatial dimension by the hippocampal‐entorhinal circuit. Nature, 543(7647), 719–722. 10.1038/nature21692 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Aronov, D., & Tank, D. W. (2014). Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron, 84(2), 442–456. 10.1016/j.neuron.2014.08.042 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Balderas, I., Rodriguez‐Ortiz, C. J., & Bermudez‐Rattoni, F. (2013). Retrieval and reconsolidation of object recognition memory are independent processes in the perirhinal cortex. Neuroscience, 253, 398–405. 10.1016/j.neuroscience.2013.09.001 [DOI] [PubMed] [Google Scholar]
  14. Bar, M. (2004). Visual objects in context. Nature Reviews. Neuroscience, 5(8), 617–629. 10.1038/nrn1476 [DOI] [PubMed] [Google Scholar]
  15. Barker, G. R., Warburton, E. C., Koder, T., Dolman, N. P., More, J. C., Aggleton, J. P., … Brown, M. W. (2006). The different effects on recognition memory of perirhinal kainate and NMDA glutamate receptor antagonism: Implications for underlying plasticity mechanisms. Journal of Neuroscience, 26(13), 3561–3566. 10.1523/JNEUROSCI.3154-05.2006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Barnes, C. A. (1979). Memory deficits associated with senescence: A neurophysiological and behavioral study in the rat. Journal of Comparative and Physiological Psychology, 93(1), 74–104. 10.1037/h0077579 [DOI] [PubMed] [Google Scholar]
  17. Bartko, S. J., Winters, B. D., Cowell, R. A., Saksida, L. M., & Bussey, T. J. (2007). Perceptual functions of perirhinal cortex in rats: Zero‐delay object recognition and simultaneous oddity discriminations. The Journal of Neuroscience, 27(10), 2548–2559. 10.1523/JNEUROSCI.5171-06.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Baxter, M. G. (2009). Involvement of medial temporal lobe structures in memory and perception. Neuron, 61(5), 667–677. 10.1016/j.neuron.2009.02.007 [DOI] [PubMed] [Google Scholar]
  19. Biederman, I. (1987). Recognition‐by‐components: A theory of human image understanding. Psychological Review, 94(2), 115–147. 10.1037/0033-295X.94.2.115 [DOI] [PubMed] [Google Scholar]
  20. Boccara, C. N., Sargolini, F., Thoresen, V. H., Solstad, T., Witter, M. P., Moser, E. I., & Moser, M. B. (2010). Grid cells in pre‐ and parasubiculum. Nature Neuroscience, 13(8), 987–994. 10.1038/nn.2602 [DOI] [PubMed] [Google Scholar]
  21. Boyden, E. S., Zhang, F., Bamberg, E., Nagel, G., & Deisseroth, K. (2005). Millisecond‐timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8(9), 1263–1268. 10.1038/nn1525 [DOI] [PubMed] [Google Scholar]
  22. Brown, M. W., Wilson, F. A. W., & Riches, I. P. (1987). Neuronal evidence that inferomedial temporal cortex is more important than hippocampus in certain processes underlying recognition memory. Brain Research, 409(1), 158–162. 10.1016/0006-8993(87)90753-0 [DOI] [PubMed] [Google Scholar]
  23. Buckley, M. J., Booth, M. C., Rolls, E. T., & Gaffan, D. (2001). Selective perceptual impairments after perirhinal cortex ablation. The Journal of Neuroscience, 21(24), 9824–9836. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Burke, S. N., Maurer, A. P., Hartzell, A. L., Nematollahi, S., Uprety, A., Wallace, J. L., & Barnes, C. A. (2012). Representation of three‐dimensional objects by the rat perirhinal cortex. Hippocampus, 22(10), 2032–2044. 10.1002/hipo.22060 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Burwell, R. D. (2000). The parahippocampal region: Corticocortical connectivity. Annals of the New York Academy of Sciences, 911, 25–42. 10.1111/j.1749-6632.2000.tb06717.x [DOI] [PubMed] [Google Scholar]
  26. Burwell, R. D., & Hafeman, D. M. (2003). Positional firing properties of postrhinal cortex neurons. Neuroscience, 119(2), 577–588. 10.1016/s0306-4522(03)00160-x [DOI] [PubMed] [Google Scholar]
  27. Bussey, T. J., Duck, J., Muir, J. L., & Aggleton, J. P. (2000). Distinct patterns of behavioural impairments resulting from fornix transection or neurotoxic lesions of the perirhinal and postrhinal cortices in the rat. Behavioural Brain Research, 111(1–2), 187–202. 10.1016/s0166-4328(00)00155-8 [DOI] [PubMed] [Google Scholar]
  28. Bussey, T. J., & Saksida, L. M. (2002). The organization of visual object representations: A connectionist model of effects of lesions in perirhinal cortex. The European Journal of Neuroscience, 15(2), 355–364. 10.1046/j.0953-816x.2001.01850.x [DOI] [PubMed] [Google Scholar]
  29. Bussey, T. J., & Saksida, L. M. (2007). Memory, perception, and the ventral visual‐perirhinal‐hippocampal stream: Thinking outside of the boxes. Hippocampus, 17(9), 898–908. 10.1002/hipo.20320 [DOI] [PubMed] [Google Scholar]
  30. Bussey, T. J., Saksida, L. M., & Murray, E. A. (2002). Perirhinal cortex resolves feature ambiguity in complex visual discriminations. The European Journal of Neuroscience, 15(2), 365–374. 10.1046/j.0953-816x.2001.01851.x [DOI] [PubMed] [Google Scholar]
  31. Calton, J. L., & Taube, J. S. (2009). Where am I and how will I get there from here? A role for posterior parietal cortex in the integration of spatial information and route planning. Neurobiology of Learning and Memory, 91(2), 186–196. 10.1016/j.nlm.2008.09.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Campbell, M. G., Ocko, S. A., Mallory, C. S., Low, I. I. C., Ganguli, S., & Giocomo, L. M. (2018). Principles governing the integration of landmark and self‐motion cues in entorhinal cortical codes for navigation. Nature Neuroscience, 21(8), 1096–1106. 10.1038/s41593-018-0189-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Canto, C. B., Wouterlood, F. G., & Witter, M. P. (2008). What does the anatomical organization of the entorhinal cortex tell us? Neural Plasticity, 2008, 381243. 10.1155/2008/381243 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Casali, G., Shipley, S., Dowell, C., Hayman, R., & Barry, C. (2018). Entorhinal neurons exhibit cue locking in rodent VR. Frontiers in Cellular Neuroscience, 12(512), 512. 10.3389/fncel.2018.00512 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Chen, G., King, J. A., Burgess, N., & O'Keefe, J. (2013). How vision and movement combine in the hippocampal place code. Proceedings of the National Academy of Sciences of the United States of America, 110(1), 378–383. 10.1073/pnas.1215834110 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Chen, X., Vieweg, P., & Wolbers, T. (2019). Computing distance information from landmarks and self‐motion cues—Differential contributions of anterior‐lateral vs. posterior‐medial entorhinal cortex in humans. NeuroImage, 202, 116074. 10.1016/j.neuroimage.2019.116074 [DOI] [PubMed] [Google Scholar]
  37. Clark, R. E., Reinagel, P., Broadbent, N. J., Flister, E. D., & Squire, L. R. (2011). Intact performance on feature‐ambiguous discriminations in rats with lesions of the perirhinal cortex. Neuron, 70(1), 132–140. 10.1016/j.neuron.2011.03.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Clayton, N. S., & Dickinson, A. (1998). Episodic‐like memory during cache recovery by scrub jays. Nature, 395(6699), 272–274. 10.1038/26216 [DOI] [PubMed] [Google Scholar]
  39. Cohen, J. D., Bolstad, M., & Lee, A. K. (2017). Experience‐dependent shaping of hippocampal CA1 intracellular activity in novel and familiar environments. eLife, 6. 10.7554/eLife.23040 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Cushman, J. D., Aharoni, D. B., Willers, B., Ravassard, P., Kees, A., Vuong, C., … Mehta, M. R. (2013). Multisensory control of multimodal behavior: Do the legs know what the tongue is doing? PLoS One, 8(11), e80465. 10.1371/journal.pone.0080465 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Davies, M., Machin, P. E., Sanderson, D. J., Pearce, J. M., & Aggleton, J. P. (2007). Neurotoxic lesions of the rat perirhinal and postrhinal cortices and their impact on biconditional visual discrimination tasks. Behavioural Brain Research, 176(2), 274–283. 10.1016/j.bbr.2006.10.005 [DOI] [PubMed] [Google Scholar]
  42. de Araujo, I. E., Rolls, E. T., & Stringer, S. M. (2001). A view model which accounts for the spatial fields of hippocampal primate spatial view cells and rat place cells. Hippocampus, 11(6), 699–706. 10.1002/hipo.1085 [DOI] [PubMed] [Google Scholar]
  43. Deshmukh, S. S., Johnson, J. L., & Knierim, J. J. (2012). Perirhinal cortex represents nonspatial, but not spatial, information in rats foraging in the presence of objects: Comparison with lateral entorhinal cortex. Hippocampus, 22(10), 2045–2058. 10.1002/hipo.22046 [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Deshmukh, S. S., & Knierim, J. J. (2011). Representation of non‐spatial and spatial information in the lateral entorhinal cortex. Frontiers in Behavioral Neuroscience, 5, 69. 10.3389/fnbeh.2011.00069 [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Doan, T. P., Lagartos‐Donate, M. J., Nilssen, E. S., Ohara, S., & Witter, M. P. (2019). Convergent projections from perirhinal and postrhinal cortices suggest a multisensory nature of lateral, but not medial, entorhinal cortex. Cell Reports, 29(3), 617–627. 10.1016/j.celrep.2019.09.005 [DOI] [PubMed] [Google Scholar]
  46. Doeller, C. F., Barry, C., & Burgess, N. (2010). Evidence for grid cells in a human memory network. Nature, 463(7281), 657–661. 10.1038/nature08704 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Dombeck, D. A., Harvey, C. D., Tian, L., Looger, L. L., & Tank, D. W. (2010). Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nature Neuroscience, 13(11), 1433–1440. 10.1038/nn.2648 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Domnisoru, C., Kinkhabwala, A. A., & Tank, D. W. (2013). Membrane potential dynamics of grid cells. Nature, 495(7440), 199–204. 10.1038/nature11973 [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Eacott, M. J., Gaffan, D., & Murray, E. A. (1994). Preserved recognition memory for small sets, and impaired stimulus identification for large sets, following rhinal cortex ablations in monkeys. The European Journal of Neuroscience, 6(9), 1466–1478. 10.1111/j.1460-9568.1994.tb01008.x [DOI] [PubMed] [Google Scholar]
  50. Eichenbaum, H. (2006). Remembering: Functional organization of the declarative memory system. Current Biology, 16(16), R643–R645. 10.1016/j.cub.2006.07.026 [DOI] [PubMed] [Google Scholar]
  51. Eichenbaum, H., & Cohen, N. J. (2014). Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron, 83(4), 764–770. 10.1016/j.neuron.2014.07.032 [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Eichenbaum, H., Dudchenko, P., Wood, E., Shapiro, M., & Tanila, H. (1999). The hippocampus, memory, and place cells: Is it spatial memory or a memory space? Neuron, 23(2), 209–226. 10.1016/s0896-6273(00)80773-4 [DOI] [PubMed] [Google Scholar]
  53. Eichenbaum, H., & Fortin, N. J. (2005). Bridging the gap between brain and behavior: Cognitive and neural mechanisms of episodic memory. Journal of the Experimental Analysis of Behavior, 84(3), 619–629. 10.1901/jeab.2005.80-04 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Epstein, R. (2005). The cortical basis of visual scene processing. Visual Cognition, 12(6), 954–978. 10.1080/13506280444000607 [DOI] [Google Scholar]
  55. Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392(6676), 598–601. 10.1038/33402 [DOI] [PubMed] [Google Scholar]
  56. Epstein, R. A., & Baker, C. I. (2019). Scene perception in the human brain. Annual Review of Vision Science, 5, 373–397. 10.1146/annurev-vision-091718-014809 [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Epstein, R. A., Parker, W. E., & Feiler, A. M. (2007). Where am I now? Distinct roles for parahippocampal and retrosplenial cortices in place recognition. Journal of Neuroscience, 27(23), 6141–6149. 10.1523/JNEUROSCI.0799-07.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Epstein, R. A., Patai, E. Z., Julian, J. B., & Spiers, H. J. (2017). The cognitive map in humans: Spatial navigation and beyond. Nature Neuroscience, 20(11), 1504–1513. 10.1038/nn.4656 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Fahy, F. L., Riches, I. P., & Brown, M. W. (1993). Neuronal activity related to visual recognition memory: Long‐term memory and the encoding of recency and familiarity information in the primate anterior and medial inferior temporal and rhinal cortex. Experimental Brain Research, 96(3), 457–472. 10.1007/BF00234113 [DOI] [PubMed] [Google Scholar]
  60. Fortin, N. J., Agster, K. L., & Eichenbaum, H. B. (2002). Critical role of the hippocampus in memory for sequences of events. Nature Neuroscience, 5(5), 458–462. 10.1038/nn834 [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Fyhn, M., Molden, S., Witter, M. P., Moser, E. I., & Moser, M. B. (2004). Spatial representation in the entorhinal cortex. Science, 305(5688), 1258–1264. 10.1126/science.1099901 [DOI] [PubMed] [Google Scholar]
  62. Gaffan, E. A., Healey, A. N., & Eacott, M. J. (2004). Objects and positions in visual scenes: Effects of perirhinal and postrhinal cortex lesions in the rat. Behavioral Neuroscience, 118(5), 992–1010. 10.1037/0735-7044.118.5.992 [DOI] [PubMed] [Google Scholar]
  63. Glenn, M. J., & Mumby, D. G. (1998). Place memory is intact in rats with perirhinal cortex lesions. Behavioral Neuroscience, 112(6), 1353–1365. 10.1037//0735-7044.112.6.1353 [DOI] [PubMed] [Google Scholar]
  64. Greene, M. R., & Oliva, A. (2009). Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognitive Psychology, 58(2), 137–176. 10.1016/j.cogpsych.2008.06.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Grill‐Spector, K., Kourtzi, Z., & Kanwisher, N. (2001). The lateral occipital complex and its role in object recognition. Vision Research, 41(10–11), 1409–1422. 10.1016/s0042-6989(01)00073-6 [DOI] [PubMed] [Google Scholar]
  66. Groen, I. I. A., Silson, E. H., & Baker, C. I. (2017). Contributions of low‐ and high‐level properties to neural processing of visual scenes in the human brain. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1714), 20160102. 10.1098/rstb.2016.0102 [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Hafting, T., Fyhn, M., Molden, S., Moser, M. B., & Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), 801–806. 10.1038/nature03721 [DOI] [PubMed] [Google Scholar]
  68. Hales, J. B., Schlesiger, M. I., Leutgeb, J. K., Squire, L. R., Leutgeb, S., & Clark, R. E. (2014). Medial entorhinal cortex lesions only partially disrupt hippocampal place cells and hippocampus‐dependent place memory. Cell Reports, 9(3), 893–901. 10.1016/j.celrep.2014.10.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Hales, J. B., Vincze, J. L., Reitz, N. T., Ocampo, A. C., Leutgeb, S., & Clark, R. E. (2018). Recent and remote retrograde memory deficit in rats with medial entorhinal cortex lesions. Neurobiology of Learning and Memory, 155, 157–163. 10.1016/j.nlm.2018.07.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Hargreaves, E. L., Rao, G., Lee, I., & Knierim, J. J. (2005). Major dissociation between medial and lateral entorhinal input to dorsal hippocampus. Science, 308(5729), 1792–1794. 10.1126/science.1110449 [DOI] [PubMed] [Google Scholar]
  71. Harker, K. T., & Whishaw, I. Q. (2002). Place and matching‐to‐place spatial learning affected by rat inbreeding (Dark‐Agouti, Fischer 344) and albinism (Wistar, Sprague‐Dawley) but not domestication (wild rat vs. Long‐Evans, Fischer‐Norway). Behavioural Brain Research, 134(1–2), 467–477. 10.1016/s0166-4328(02)00083-9 [DOI] [PubMed] [Google Scholar]
  72. Harvey, C. D., Coen, P., & Tank, D. W. (2012). Choice‐specific sequences in parietal cortex during a virtual‐navigation decision task. Nature, 484(7392), 62–68. 10.1038/nature10918 [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Hassabis, D., & Maguire, E. A. (2007). Deconstructing episodic memory with construction. Trends in Cognitive Sciences, 11(7), 299–306. 10.1016/j.tics.2007.05.001
  74. Hassabis, D., & Maguire, E. A. (2009). The construction system of the brain. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364(1521), 1263–1271. 10.1098/rstb.2008.0296
  75. Henderson, J. M., & Hollingworth, A. (1999). High‐level scene perception. Annual Review of Psychology, 50, 243–271. 10.1146/annurev.psych.50.1.243
  76. Henriksen, E. J., Colgin, L. L., Barnes, C. A., Witter, M. P., Moser, M. B., & Moser, E. I. (2010). Spatial representation along the proximodistal axis of CA1. Neuron, 68(1), 127–137. 10.1016/j.neuron.2010.08.042
  77. Hernandez, A. R., Reasor, J. E., Truckenbrod, L. M., Lubke, K. N., Johnson, S. A., Bizon, J. L., … Burke, S. N. (2017). Medial prefrontal‐perirhinal cortical communication is necessary for flexible response selection. Neurobiology of Learning and Memory, 137, 36–47. 10.1016/j.nlm.2016.10.012
  78. Hirsh, R. (1974). The hippocampus and contextual retrieval of information from memory: A theory. Behavioral Biology, 12(4), 421–444. 10.1016/s0091-6773(74)92231-7
  79. Holscher, C., Schnee, A., Dahmen, H., Setia, L., & Mallot, H. A. (2005). Rats are able to navigate in virtual environments. The Journal of Experimental Biology, 208(Pt 3), 561–569. 10.1242/jeb.01371
  80. Hoydal, O. A., Skytoen, E. R., Andersson, S. O., Moser, M. B., & Moser, E. I. (2019). Object‐vector coding in the medial entorhinal cortex. Nature, 568(7752), 400–404. 10.1038/s41586-019-1077-7
  81. Insausti, R., Herrero, M. T., & Witter, M. P. (1997). Entorhinal cortex of the rat: Cytoarchitectonic subdivisions and the origin and distribution of cortical efferents. Hippocampus, 7(2), 146–183.
  82. Jo, Y. S., & Lee, I. (2010). Disconnection of the hippocampal‐perirhinal cortical circuits severely disrupts object‐place paired associative memory. Journal of Neuroscience, 30(29), 9850–9858. 10.1523/JNEUROSCI.1580-10.2010
  83. Keene, C. S., Bladon, J., McKenzie, S., Liu, C. D., O'Keefe, J., & Eichenbaum, H. (2016). Complementary functional organization of neuronal activity patterns in the perirhinal, lateral entorhinal, and medial entorhinal cortices. Journal of Neuroscience, 36(13), 3660–3675. 10.1523/JNEUROSCI.4368-15.2016
  84. Kerr, K. M., Agster, K. L., Furtak, S. C., & Burwell, R. D. (2007). Functional neuroanatomy of the parahippocampal region: The lateral and medial entorhinal areas. Hippocampus, 17(9), 697–708. 10.1002/hipo.20315
  85. Kesner, R. P., Farnsworth, G., & DiMattia, B. V. (1989). Double dissociation of egocentric and allocentric space following medial prefrontal and parietal cortex lesions in the rat. Behavioral Neuroscience, 103(5), 956–961. 10.1037//0735-7044.103.5.956
  86. Kesner, R. P., Farnsworth, G., & Kametani, H. (1991). Role of parietal cortex and hippocampus in representing spatial information. Cerebral Cortex, 1(5), 367–373. 10.1093/cercor/1.5.367
  87. Kim, J., Delcasso, S., & Lee, I. (2011). Neural correlates of object‐in‐place learning in hippocampus and prefrontal cortex. Journal of Neuroscience, 31(47), 16991–17006. 10.1523/JNEUROSCI.2859-11.2011
  88. Kim, S., Lee, J., & Lee, I. (2012). The hippocampus is required for visually cued contextual response selection, but not for visual discrimination of contexts. Frontiers in Behavioral Neuroscience, 6, 66. 10.3389/fnbeh.2012.00066
  89. Kinkhabwala, A. A., Gu, Y., Aronov, D., & Tank, D. W. (2020). Visual cue‐related activity of cells in the medial entorhinal cortex during navigation in virtual reality. eLife, 9, e43140. 10.7554/eLife.43140
  90. Knierim, J. J., Lee, I., & Hargreaves, E. L. (2006). Hippocampal place cells: Parallel input streams, subregional processing, and implications for episodic memory. Hippocampus, 16(9), 755–764. 10.1002/hipo.20203
  91. Knierim, J. J., Neunuebel, J. P., & Deshmukh, S. S. (2014). Functional correlates of the lateral and medial entorhinal cortex: Objects, path integration and local‐global reference frames. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 369(1635), 20130369. 10.1098/rstb.2013.0369
  92. Kolb, B., Buhrmann, K., McDonald, R., & Sutherland, R. J. (1994). Dissociation of the medial prefrontal, posterior parietal, and posterior temporal cortex for spatial navigation and recognition memory in the rat. Cerebral Cortex, 4(6), 664–680. 10.1093/cercor/4.6.664
  93. Komorowski, R. W., Manns, J. R., & Eichenbaum, H. (2009). Robust conjunctive item‐place coding by hippocampal neurons parallels learning what happens where. The Journal of Neuroscience, 29(31), 9918–9929. 10.1523/JNEUROSCI.1378-09.2009
  94. Krakauer, J. W., Ghazanfar, A. A., Gomez‐Marin, A., MacIver, M. A., & Poeppel, D. (2017). Neuroscience needs behavior: Correcting a reductionist bias. Neuron, 93(3), 480–490. 10.1016/j.neuron.2016.12.041
  95. Kropff, E., Carmichael, J. E., Moser, M. B., & Moser, E. I. (2015). Speed cells in the medial entorhinal cortex. Nature, 523(7561), 419–424. 10.1038/nature14622
  96. Kuruvilla, M. V., & Ainge, J. A. (2017). Lateral entorhinal cortex lesions impair local spatial frameworks. Frontiers in Systems Neuroscience, 11, 30. 10.3389/fnsys.2017.00030
  97. LaChance, P. A., Todd, T. P., & Taube, J. S. (2019). A sense of space in postrhinal cortex. Science, 365(6449), eaax4192. 10.1126/science.aax4192
  98. Lee, C. H., & Lee, I. (2020). Impairment of pattern separation of ambiguous scenes by single units in the CA3 in the absence of the dentate gyrus. Journal of Neuroscience, 40(18), 3576–3590. 10.1523/JNEUROSCI.2596-19.2020
  99. Lee, H. W., Lee, S. M., & Lee, I. (2018). Neural firing patterns are more schematic and less sensitive to changes in background visual scenes in the subiculum than in the hippocampus. Journal of Neuroscience, 38(34), 7392–7408. 10.1523/JNEUROSCI.0156-18.2018
  100. Lee, I., & Lee, C. H. (2013a). Contextual behavior and neural circuits. Frontiers in Neural Circuits, 7, 84. 10.3389/fncir.2013.00084
  101. Lee, I., & Lee, S. H. (2013b). Putting an object in context and acting on it: Neural mechanisms of goal‐directed response to contextual object. Reviews in the Neurosciences, 24(1), 27–49. 10.1515/revneuro-2012-0073
  102. Lee, I., & Park, S. B. (2013). Perirhinal cortical inactivation impairs object‐in‐place memory and disrupts task‐dependent firing in hippocampal CA1, but not in CA3. Frontiers in Neural Circuits, 7, 134. 10.3389/fncir.2013.00134
  103. Lee, I., & Shin, J. Y. (2012). Medial prefrontal cortex is selectively involved in response selection using visual context in the background. Learning & Memory, 19(6), 247–250. 10.1101/lm.025890.112
  104. Lee, I., & Solivan, F. (2008). The roles of the medial prefrontal cortex and hippocampus in a spatial paired‐association task. Learning & Memory, 15(5), 357–367. 10.1101/lm.902708
  105. Lee, I., & Solivan, F. (2010). Dentate gyrus is necessary for disambiguating similar object‐place representations. Learning & Memory, 17(5), 252–258. 10.1101/lm.1678210
  106. Lee, K. J., Park, S. B., & Lee, I. (2014). Elemental or contextual? It depends: Individual difference in the hippocampal dependence of associative learning for a simple sensory stimulus. Frontiers in Behavioral Neuroscience, 8, 217. 10.3389/fnbeh.2014.00217
  107. Li, L., Miller, E. K., & Desimone, R. (1993). The representation of stimulus familiarity in anterior inferior temporal cortex. Journal of Neurophysiology, 69(6), 1918–1929. 10.1152/jn.1993.69.6.1918
  108. Lipton, P. A., White, J. A., & Eichenbaum, H. (2007). Disambiguation of overlapping experiences by neurons in the medial entorhinal cortex. Journal of Neuroscience, 27(21), 5787–5795. 10.1523/JNEUROSCI.1063-07.2007
  109. Liu, P., & Bilkey, D. K. (1998a). Lesions of perirhinal cortex produce spatial memory deficits in the radial maze. Hippocampus, 8(2), 114–121.
  110. Liu, P., & Bilkey, D. K. (1998b). Perirhinal cortex contributions to performance in the Morris water maze. Behavioral Neuroscience, 112(2), 304–315. 10.1037//0735-7044.112.2.304
  111. Liu, P., & Bilkey, D. K. (1999). The effect of excitotoxic lesions centered on the perirhinal cortex in two versions of the radial arm maze task. Behavioral Neuroscience, 113(4), 672–682. 10.1037//0735-7044.113.4.672
  112. Liu, P., & Bilkey, D. K. (2001). The effect of excitotoxic lesions centered on the hippocampus or perirhinal cortex in object recognition and spatial memory tasks. Behavioral Neuroscience, 115(1), 94–111. 10.1037/0735-7044.115.1.94
  113. Liu, P., & Bilkey, D. K. (2002). The effects of NMDA lesions centered on the postrhinal cortex on spatial memory tasks in the rat. Behavioral Neuroscience, 116(5), 860–873. 10.1037//0735-7044.116.5.860
  114. Low, R. J., Gu, Y., & Tank, D. W. (2014). Cellular resolution optical access to brain regions in fissures: Imaging medial prefrontal cortex and grid cells in entorhinal cortex. Proceedings of the National Academy of Sciences of the United States of America, 111(52), 18739–18744. 10.1073/pnas.1421753111
  115. Lu, L., Igarashi, K. M., Witter, M. P., Moser, E. I., & Moser, M. B. (2015). Topography of place maps along the CA3‐to‐CA2 axis of the hippocampus. Neuron, 87(5), 1078–1092. 10.1016/j.neuron.2015.07.007
  116. MacDonald, C. J., Lepage, K. Q., Eden, U. T., & Eichenbaum, H. (2011). Hippocampal "time cells" bridge the gap in memory for discontiguous events. Neuron, 71(4), 737–749. 10.1016/j.neuron.2011.07.012
  117. Machin, P., Vann, S. D., Muir, J. L., & Aggleton, J. P. (2002). Neurotoxic lesions of the rat perirhinal cortex fail to disrupt the acquisition or performance of tests of allocentric spatial memory. Behavioral Neuroscience, 116(2), 232–240.
  118. Martig, A. K., & Mizumori, S. J. (2011). Ventral tegmental area disruption selectively affects CA1/CA2 but not CA3 place fields during a differential reward working memory task. Hippocampus, 21(2), 172–184. 10.1002/hipo.20734
  119. Miller, A. M. P., Mau, W., & Smith, D. M. (2019). Retrosplenial cortical representations of space and future goal locations develop with learning. Current Biology, 29(12), 2083–2090 e2084. 10.1016/j.cub.2019.05.034
  120. Miller, E. K., Li, L., & Desimone, R. (1993). Activity of neurons in anterior inferior temporal cortex during a short‐term memory task. Journal of Neuroscience, 13(4), 1460–1478. 10.1523/jneurosci.13-04-01460.1993
  121. Moran, J. P., & Dalrymple‐Alford, J. C. (2003). Perirhinal cortex and anterior thalamic lesions: Comparative effects on learning and memory. Behavioral Neuroscience, 117(6), 1326–1341. 10.1037/0735-7044.117.6.1326
  122. Morris, R. G., Garrud, P., Rawlins, J. N., & O'Keefe, J. (1982). Place navigation impaired in rats with hippocampal lesions. Nature, 297(5868), 681–683. 10.1038/297681a0
  123. Moser, E., Moser, M. B., & Andersen, P. (1993). Spatial‐learning impairment parallels the magnitude of dorsal hippocampal‐lesions, but is hardly present following ventral lesions. Journal of Neuroscience, 13(9), 3916–3925.
  124. Moser, M. B., Moser, E. I., Forrest, E., Andersen, P., & Morris, R. G. (1995). Spatial learning with a minislab in the dorsal hippocampus. Proceedings of the National Academy of Sciences of the United States of America, 92(21), 9697–9701. 10.1073/pnas.92.21.9697
  125. Muller, R. U., & Kubie, J. L. (1987). The effects of changes in the environment on the spatial firing of hippocampal complex‐spike cells. Journal of Neuroscience, 7(7), 1951–1968.
  126. Mumby, D. G., & Glenn, M. J. (2000). Anterograde and retrograde memory for object discriminations and places in rats with perirhinal cortex lesions. Behavioural Brain Research, 114(1–2), 119–134. 10.1016/s0166-4328(00)00217-5
  127. Mumby, D. G., Glenn, M. J., Nesbitt, C., & Kyriazis, D. A. (2002). Dissociation in retrograde memory for object discriminations and object recognition in rats with perirhinal cortex damage. Behavioural Brain Research, 132(2), 215–226. 10.1016/s0166-4328(01)00444-2
  128. Mushiake, H., Saito, N., Sakamoto, K., Itoyama, Y., & Tanji, J. (2006). Activity in the lateral prefrontal cortex reflects multiple steps of future events in action plans. Neuron, 50(4), 631–641. 10.1016/j.neuron.2006.03.045
  129. Myhrer, T. (2000). Effects of selective perirhinal and postrhinal lesions on acquisition and retention of a visual discrimination task in rats. Neurobiology of Learning and Memory, 73(1), 68–78. 10.1006/nlme.1999.3918
  130. Nilssen, E. S., Doan, T. P., Nigro, M. J., Ohara, S., & Witter, M. P. (2019). Neurons and networks in the entorhinal cortex: A reappraisal of the lateral and medial entorhinal subdivisions mediating parallel cortical pathways. Hippocampus, 29(12), 1238–1254. 10.1002/hipo.23145
  131. Norman, G., & Eacott, M. J. (2004). Impaired object recognition with increasing levels of feature ambiguity in rats with perirhinal cortex lesions. Behavioural Brain Research, 148(1–2), 79–91. 10.1016/s0166-4328(03)00176-1
  132. O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely‐moving rat. Brain Research, 34(1), 171–175. 10.1016/0006-8993(71)90358-1
  133. O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford, United Kingdom: Oxford University Press.
  134. Oliva, A., Fernandez‐Ruiz, A., Buzsaki, G., & Berenyi, A. (2016). Spatial coding and physiological properties of hippocampal neurons in the cornu ammonis subregions. Hippocampus, 26(12), 1593–1607. 10.1002/hipo.22659
  135. Olton, D. S., & Samuelson, R. J. (1976). Remembrance of places passed: Spatial memory in rats. Journal of Experimental Psychology: Animal Behavior Processes, 2(2), 97–116. 10.1037/0097-7403.2.2.97
  136. Olton, D. S., & Wolf, W. A. (1981). Hippocampal seizures produce retrograde amnesia without a temporal gradient when they reset working memory. Behavioral and Neural Biology, 33(4), 437–452. 10.1016/s0163-1047(81)91797-0
  137. O'Neill, J., Boccara, C. N., Stella, F., Schoenenberger, P., & Csicsvari, J. (2017). Superficial layers of the medial entorhinal cortex replay independently of the hippocampus. Science, 355(6321), 184–188. 10.1126/science.aag2787
  138. O'Reilly, K. C., Alarcon, J. M., & Ferbinteanu, J. (2014). Relative contributions of CA3 and medial entorhinal cortex to memory in rats. Frontiers in Behavioral Neuroscience, 8, 292. 10.3389/fnbeh.2014.00292
  139. Packard, M. G., & McGaugh, J. L. (1996). Inactivation of hippocampus or caudate nucleus with lidocaine differentially affects expression of place and response learning. Neurobiology of Learning and Memory, 65(1), 65–72. 10.1006/nlme.1996.0007
  140. Park, E. H., Ahn, J. R., & Lee, I. (2017). Interactions between stimulus and response types are more strongly represented in the entorhinal cortex than in its upstream regions in rats. eLife, 6, e32657. 10.7554/eLife.32657
  141. Pastalkova, E., Itskov, V., Amarasingham, A., & Buzsaki, G. (2008). Internally generated cell assembly sequences in the rat hippocampus. Science, 321(5894), 1322–1327. 10.1126/science.1159775
  142. Poppenk, J., Evensmoen, H. R., Moscovitch, M., & Nadel, L. (2013). Long‐axis specialization of the human hippocampus. Trends in Cognitive Sciences, 17(5), 230–240. 10.1016/j.tics.2013.03.005
  143. Poucet, B., & Herrmann, T. (1990). Septum and medial frontal cortex contribution to spatial problem‐solving. Behavioural Brain Research, 37(3), 269–280. 10.1016/0166-4328(90)90139-6
  144. Prusky, G. T., Douglas, R. M., Nelson, L., Shabanpoor, A., & Sutherland, R. J. (2004). Visual memory task for rats reveals an essential role for hippocampus and perirhinal cortex. Proceedings of the National Academy of Sciences of the United States of America, 101(14), 5064–5068. 10.1073/pnas.0308528101
  145. Prusky, G. T., Harker, K. T., Douglas, R. M., & Whishaw, I. Q. (2002). Variation in visual acuity within pigmented, and between pigmented and albino rat strains. Behavioural Brain Research, 136(2), 339–348. 10.1016/s0166-4328(02)00126-2
  146. Ramos, J. M. (2002). The perirhinal cortex and long‐term spatial memory in rats. Brain Research, 947(2), 294–298. 10.1016/s0006-8993(02)03044-5
  147. Ramos, J. M. (2013a). Differential contribution of hippocampus, perirhinal cortex and postrhinal cortex to allocentric spatial memory in the radial maze. Behavioural Brain Research, 247, 59–64. 10.1016/j.bbr.2013.03.017
  148. Ramos, J. M. (2013b). Perirhinal cortex lesions produce retrograde but not anterograde amnesia for allocentric spatial information: Within‐subjects examination. Behavioural Brain Research, 238, 154–159. 10.1016/j.bbr.2012.10.033
  149. Ramos, J. M., & Vaquero, J. M. (2005). The perirhinal cortex of the rat is necessary for spatial memory retention long after but not soon after learning. Physiology & Behavior, 86(1–2), 118–127. 10.1016/j.physbeh.2005.07.004
  150. Ramos, J. M. J. (2017). Perirhinal cortex involvement in allocentric spatial learning in the rat: Evidence from doubly marked tasks. Hippocampus, 27(5), 507–517. 10.1002/hipo.22707
  151. Rao, R. P., von Heimendahl, M., Bahr, V., & Brecht, M. (2019). Neuronal responses to conspecifics in the ventral CA1. Cell Reports, 27(12), 3460–3472 e3463. 10.1016/j.celrep.2019.05.081
  152. Ravassard, P., Kees, A., Willers, B., Ho, D., Aharoni, D. A., Cushman, J., … Mehta, M. R. (2013). Multisensory control of hippocampal spatiotemporal selectivity. Science, 340(6138), 1342–1346. 10.1126/science.1232655
  153. Robin, J., & Moscovitch, M. (2017). Details, gist and schema: Hippocampal‐neocortical interactions underlying recent and remote episodic and spatial memory. Current Opinion in Behavioral Sciences, 17, 114–123. 10.1016/j.cobeha.2017.07.016
  154. Robinson, N. T. M., Priestley, J. B., Rueckemann, J. W., Garcia, A. D., Smeglin, V. A., Marino, F. A., & Eichenbaum, H. (2017). Medial entorhinal cortex selectively supports temporal coding by hippocampal neurons. Neuron, 94(3), 677–688 e676. 10.1016/j.neuron.2017.04.003
  155. Rodo, C., Sargolini, F., & Save, E. (2017). Processing of spatial and non‐spatial information in rats with lesions of the medial and lateral entorhinal cortex: Environmental complexity matters. Behavioural Brain Research, 320, 200–209. 10.1016/j.bbr.2016.12.009
  156. Rolls, E. T., Aggelopoulos, N. C., & Zheng, F. (2003). The receptive fields of inferior temporal cortex neurons in natural scenes. Journal of Neuroscience, 23(1), 339–348. 10.1523/jneurosci.23-01-00339.2003
  157. Rolls, E. T., & O'Mara, S. M. (1995). View‐responsive neurons in the primate hippocampal complex. Hippocampus, 5(5), 409–424. 10.1002/hipo.450050504
  158. Rolls, E. T., & Wirth, S. (2018). Spatial representations in the primate hippocampus, and their functions in memory and navigation. Progress in Neurobiology, 171, 90–113. 10.1016/j.pneurobio.2018.09.004
  159. Sabariego, M., Schonwald, A., Boublil, B. L., Zimmerman, D. T., Ahmadi, S., Gonzalez, N., … Leutgeb, S. (2019). Time cells in the hippocampus are neither dependent on medial entorhinal cortex inputs nor necessary for spatial working memory. Neuron, 102(6), 1235–1248 e1235. 10.1016/j.neuron.2019.04.005
  160. Sakata, H., Taira, M., Murata, A., & Mine, S. (1995). Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cerebral Cortex, 5(5), 429–438. 10.1093/cercor/5.5.429
  161. Sato, M., Kawano, M., Mizuta, K., Islam, T., Lee, M. G., & Hayashi, Y. (2017). Hippocampus‐dependent goal localization by head‐fixed mice in virtual reality. eNeuro, 4(3), ENEURO.0369-16.2017. 10.1523/ENEURO.0369-16.2017
  162. Savelli, F., Yoganarasimha, D., & Knierim, J. J. (2008). Influence of boundary removal on the spatial representations of the medial entorhinal cortex. Hippocampus, 18(12), 1270–1282. 10.1002/hipo.20511
  163. Schmidt‐Hieber, C., & Hausser, M. (2013). Cellular mechanisms of spatial navigation in the medial entorhinal cortex. Nature Neuroscience, 16(3), 325–331. 10.1038/nn.3340
  164. Schnabel, U. H., Bossens, C., Lorteije, J. A. M., Self, M. W., op de Beeck, H., & Roelfsema, P. R. (2018). Figure‐ground perception in the awake mouse and neuronal activity elicited by figure‐ground stimuli in primary visual cortex. Scientific Reports, 8(1), 17800. 10.1038/s41598-018-36087-8
  165. Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery, and Psychiatry, 20(1), 11–21. 10.1136/jnnp.20.1.11
  166. Sesack, S. R., Deutch, A. Y., Roth, R. H., & Bunney, B. S. (1989). Topographical organization of the efferent projections of the medial prefrontal cortex in the rat: An anterograde tract‐tracing study with Phaseolus vulgaris leucoagglutinin. The Journal of Comparative Neurology, 290(2), 213–242. 10.1002/cne.902900205
  167. Silson, E. H., Gilmore, A. W., Kalinowski, S. E., Steel, A., Kidder, A., Martin, A., & Baker, C. I. (2019). A posterior‐anterior distinction between scene perception and scene construction in human medial parietal cortex. Journal of Neuroscience, 39(4), 705–717. 10.1523/JNEUROSCI.1219-18.2018
  168. Solomon, E. A., Lega, B. C., Sperling, M. R., & Kahana, M. J. (2019). Hippocampal theta codes for distances in semantic and temporal spaces. Proceedings of the National Academy of Sciences of the United States of America, 116(48), 24343–24352. 10.1073/pnas.1906729116
  169. Solstad, T., Boccara, C. N., Kropff, E., Moser, M. B., & Moser, E. I. (2008). Representation of geometric borders in the entorhinal cortex. Science, 322(5909), 1865–1868. 10.1126/science.1166466
  170. Spelke, E. S. (1990). Principles of object perception. Cognitive Science, 14(1), 29–56. 10.1207/s15516709cog1401_3
  171. Squire, L. R., Stark, C. E., & Clark, R. E. (2004). The medial temporal lobe. Annual Review of Neuroscience, 27, 279–306. 10.1146/annurev.neuro.27.070203.144130
  172. Steffenach, H. A., Witter, M., Moser, M. B., & Moser, E. I. (2005). Spatial memory in the rat requires the dorsolateral band of the entorhinal cortex. Neuron, 45(2), 301–313. 10.1016/j.neuron.2004.12.044
  173. Suzuki, W. A. (2009). Perception and the medial temporal lobe: Evaluating the current evidence. Neuron, 61(5), 657–666. 10.1016/j.neuron.2009.02.008
  174. Suzuki, W. A., & Naya, Y. (2014). The perirhinal cortex. Annual Review of Neuroscience, 37, 39–53. 10.1146/annurev-neuro-071013-014207
  175. Taube, J. S., Muller, R. U., & Ranck, J. B., Jr. (1990). Head‐direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience, 10(2), 420–435.
  176. Tennant, S. A., Fischer, L., Garden, D. L. F., Gerlei, K. Z., Martinez‐Gonzalez, C., McClure, C., … Nolan, M. F. (2018). Stellate cells in the medial entorhinal cortex are required for spatial learning. Cell Reports, 22(5), 1313–1324. 10.1016/j.celrep.2018.01.005
  177. Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208. 10.1037/h0061626
  178. Tsao, A., Moser, M. B., & Moser, E. I. (2013). Traces of experience in the lateral entorhinal cortex. Current Biology, 23(5), 399–405. 10.1016/j.cub.2013.01.036
  179. van Cauter, T., Camon, J., Alvernhe, A., Elduayen, C., Sargolini, F., & Save, E. (2013). Distinct roles of medial and lateral entorhinal cortex in spatial cognition. Cerebral Cortex, 23(2), 451–459. 10.1093/cercor/bhs033
  180. van Strien, N. M., Cappaert, N. L., & Witter, M. P. (2009). The anatomy of memory: An interactive overview of the parahippocampal‐hippocampal network. Nature Reviews. Neuroscience, 10(4), 272–282. 10.1038/nrn2614
  181. Vann, S. D., & Aggleton, J. P. (2002). Extensive cytotoxic lesions of the rat retrosplenial cortex reveal consistent deficits on tasks that tax allocentric spatial memory. Behavioral Neuroscience, 116(1), 85–94.
  182. Wahlstrom, K. L., Huff, M. L., Emmons, E. B., Freeman, J. H., Narayanan, N. S., McIntyre, C. K., & LaLumiere, R. T. (2018). Basolateral amygdala inputs to the medial entorhinal cortex selectively modulate the consolidation of spatial and contextual learning. Journal of Neuroscience, 38(11), 2698–2712. 10.1523/JNEUROSCI.2848-17.2018
  183. Wang, C., Chen, X., Lee, H., Deshmukh, S. S., Yoganarasimha, D., Savelli, F., & Knierim, J. J. (2018). Egocentric coding of external items in the lateral entorhinal cortex. Science, 362(6417), 945–949. 10.1126/science.aau4940
  184. Weiss, S., Talhami, G., Gofman‐Regev, X., Rapoport, S., Eilam, D., & Derdikman, D. (2017). Consistency of spatial representations in rat entorhinal cortex predicts performance in a reorientation task. Current Biology, 27(23), 3658–3665 e3654. 10.1016/j.cub.2017.10.015
  185. Wiig, K. A., & Bilkey, D. K. (1994). The effects of perirhinal cortical lesions on spatial reference memory in the rat. Behavioural Brain Research, 63(1), 101–109. 10.1016/0166-4328(94)90055-8
  186. Wiig, K. A., & Bilkey, D. K. (1995). Lesions of rat perirhinal cortex exacerbate the memory deficit observed following damage to the fimbria‐fornix. Behavioral Neuroscience, 109(4), 620–630. 10.1037//0735-7044.109.4.620
  187. Winters, B. D., Bartko, S. J., Saksida, L. M., & Bussey, T. J. (2010). Muscimol, AP5, or scopolamine infused into perirhinal cortex impairs two‐choice visual discrimination learning in rats. Neurobiology of Learning and Memory, 93(2), 221–228. 10.1016/j.nlm.2009.10.002
  188. Winters, B. D., & Bussey, T. J. (2005). Transient inactivation of perirhinal cortex disrupts encoding, retrieval, and consolidation of object recognition memory. Journal of Neuroscience, 25(1), 52–61. 10.1523/JNEUROSCI.3827-04.2005
  189. Wise, S. P., Boussaoud, D., Johnson, P. B., & Caminiti, R. (1997). Premotor and parietal cortex: Corticocortical connectivity and combinatorial computations. Annual Review of Neuroscience, 20, 25–42. 10.1146/annurev.neuro.20.1.25
  190. Witter, M. P., Naber, P. A., van Haeften, T., Machielsen, W. C., Rombouts, S. A., Barkhof, F., … Lopes da Silva, F. H. (2000). Cortico‐hippocampal communication by way of parallel parahippocampal‐subicular pathways. Hippocampus, 10(4), 398–410. 10.1002/1098-1063
  191. Yoo, S. W., & Lee, I. (2017). Functional double dissociation within the entorhinal cortex for visual scene‐dependent choice behavior. eLife, 6, e21543. 10.7554/eLife.21543
  192. Yoon, T., Graham, L. K., & Kim, J. J. (2011). Hippocampal lesion effects on occasion setting by contextual and discrete stimuli. Neurobiology of Learning and Memory, 95(2), 176–184. 10.1016/j.nlm.2010.07.001
  193. Youngstrom, I. A., & Strowbridge, B. W. (2012). Visual landmarks facilitate rodent spatial navigation in virtual reality environments. Learning & Memory, 19(3), 84–90. 10.1101/lm.023523.111
  194. Zoccolan, D., Oertelt, N., DiCarlo, J. J., & Cox, D. D. (2009). A rodent model for the study of invariant visual object recognition. Proceedings of the National Academy of Sciences of the United States of America, 106(21), 8748–8753. 10.1073/pnas.0811583106


Data Availability Statement

The data of the current study are available from the corresponding author upon reasonable request.


Articles from Hippocampus are provided here courtesy of Wiley
