eLife. 2020 Mar 9;9:e43140. doi: 10.7554/eLife.43140

Visual cue-related activity of cells in the medial entorhinal cortex during navigation in virtual reality

Amina A Kinkhabwala 1,2,3,†,‡, Yi Gu 1,2,3,†,§, Dmitriy Aronov 1,2,3,#, David W Tank 1,2,3
Editors: Sachin Deshmukh4, Laura L Colgin5
PMCID: PMC7089758  PMID: 32149601

Abstract

During spatial navigation, animals use self-motion to estimate positions through path integration. However, estimation errors accumulate over time and it is unclear how they are corrected. Here we report a new cell class (‘cue cell’) encoding visual cues that could be used to correct errors in path integration in mouse medial entorhinal cortex (MEC). During virtual navigation, individual cue cells exhibited firing fields only near visual cues and their population response formed sequences repeated at each cue. These cells consistently responded to cues across multiple environments. On a track with cues on left and right sides, most cue cells only responded to cues on one side. During navigation in a real arena, they showed spatially stable activity and accounted for 32% of unidentified, spatially stable MEC cells. These cue cell properties demonstrate that the MEC contains a code representing spatial landmarks, which could be important for error correction during path integration.

Research organism: Mouse

Introduction

To estimate spatial location, animals navigate using landmarks: objects or features that provide sensory cues. When sensory cues defining position are either absent or unreliable during navigation, many animals can use self-motion to update internal representations of location through path integration (Mittelstaedt, 1982; Tsoar et al., 2011). A set of interacting brain regions, including the entorhinal cortex, parietal cortex, and the hippocampus (Brun et al., 2008; Bush et al., 2015; Calton et al., 2003; Calton et al., 2008; Clark et al., 2010; Clark et al., 2013; Clark et al., 2009; Clark and Taube, 2009; Frohardt et al., 2006; Geva-Sagiv et al., 2015; Golob and Taube, 1999; Golob et al., 1998; Hollup et al., 2001; Moser et al., 1993; Parron et al., 2004; Parron and Save, 2004; Taube et al., 1992; Whitlock et al., 2008), participates in this process.

The MEC is of particular interest in path integration. Grid cells in the MEC have multiple firing fields arrayed in a triangular lattice that tile an environment (Hafting et al., 2005). This firing pattern is observed across different environments with the grid cell population activity coherently shifting during locomotion (Fyhn et al., 2007). These observations have led to the hypothesis that grid cells form a spatial metric used by a path integrator. Given this, theoretical studies have demonstrated how velocity-encoding inputs to grid cell circuits could shift grid cell firing patterns, as expected of a path integrator (Barry and Burgess, 2014; Burak and Fiete, 2009; Fuhs and Touretzky, 2006; McNaughton et al., 2006). Cells encoding the speed of locomotion have been identified in this region (Kropff et al., 2015), providing evidence of velocity-encoding inputs and further support for the role of MEC in path integration.

A general problem with path integration is the accumulation of errors over time. A solution to this problem is to use reliable spatial cues to correct estimates of position (Evans et al., 2016; Hardcastle et al., 2015; Pollock et al., 2018). Many recent experimental studies have shown that altering spatial cues, including landmarks and environmental boundaries, profoundly impairs grid cell activity. For example, the absence of visual landmarks significantly disrupted grid cell firing patterns (Chen et al., 2016; Pérez-Escobar et al., 2016). Similarly, manipulating nonmetric visual cues while maintaining the boundaries of a one-dimensional environment caused rate changes in grid cells (Pérez-Escobar et al., 2016). Decoupling an animal’s self-motion from the visual scene also altered grid cell firing patterns (Campbell et al., 2018). Finally, many studies have shown that grid cell firing patterns were influenced by nearby boundaries (Carpenter et al., 2015; Derdikman et al., 2009; Giocomo, 2016; Hardcastle et al., 2015; Krupic et al., 2015; Krupic et al., 2018; Stensola et al., 2015; Yamahachi et al., 2013).

Border cells in the MEC, with firing fields extending across environmental boundaries (Solstad et al., 2008), are good candidates for supplying information for error correction near the perimeter of simple arenas (Pollock et al., 2018). This role of border cells is supported by the fact that an animal’s interactions with boundaries yielded direction-dependent error correction (Hardcastle et al., 2015). However, grid cell firing fields are maintained throughout open arenas in locations where border cells are not active and thus cannot participate in error correction. It is possible that cue cells, like border cells, could provide a mechanism for error correction.

Also, natural navigation involves moving through landmark-rich environments with higher complexity than arenas with simple boundaries. How information from a landmark-rich environment is represented within the MEC is unknown. If there were cells in the MEC that encoded sensory information of landmarks, then more robust path integration and error correction of grid cells would be possible using circuitry self-contained within this brain area. In the MEC, while border cells have been shown to respond to landmarks in virtual reality (Campbell et al., 2018), increasing evidence suggests that unclassified cells also contain information about spatial environments (Diehl et al., 2017; Hardcastle et al., 2017; Høydal et al., 2019). It would be useful to further determine whether these unclassified cells represent spatial cues (other than borders) that could be used in error correction.

Here we address this question by recording from populations of cells in the MEC during virtual navigation along landmark-rich linear tracks using electrophysiological and two-photon imaging approaches. Virtual reality (VR) allows for complete control over the spatial information of the environment, including the presence of spatial cues along the track. The animal’s orientation within the environment was also controlled, simplifying analysis. We report that a significant fraction of previously unclassified cells in the MEC responded reliably to prominent spatial cues. As a population, these cells fired in a sequence as each spatial cue was passed. When some cues were removed, cue cell firing fields near those cues were no longer present. When cues were presented on either the left or right side of the track, these cells subdivided into left- and right-side-preferring cue cells. During navigation along different virtual tracks, cue cells largely maintained the same cue-aligned firing patterns. These cells could provide the position information necessary in local MEC circuits for error correction during path integration in sensory-rich environments, which are regularly found in nature.

Results

Cue-responsive cells in virtual reality

Mice were trained to unidirectionally navigate along linear tracks in virtual reality to receive water rewards. Virtual tracks were 8 meters long and had a similar general organization and appearance: tracks began with a set of black walls, followed by a short segment with patterned walls, and finally the majority of the track consisted of a long corridor with a simple wall pattern (Figure 1A). Different environments were defined by distinct pairs of identical visual cues (tower-like structures) symmetrically present along both sides of the corridor. These cues were non-uniformly spaced along the track and the last cue was always associated with a water reward.

Figure 1. Cells respond to cues in a virtual linear environment.

(A) Examples of cells with cue-related activity recorded during navigation along virtual tracks. At the top of each example are views of each cue from the animal’s perspective inside the track at that location. Side views of the track are shown below, with the start location to the left. The raster plot for a single cell’s spatial activity pattern across multiple traversals of the track is plotted with the average firing rate (Hz) as a function of track position (spatial firing rate) below. Spatial firing fields for the cell are indicated with horizontal red bars. (B) Calculation of cue score. The Pearson correlation between the cue template and the cell’s spatial firing rate was calculated and the spatial shift was defined as the local maximum closest to zero. The cue template was translated by this shift and the correlations of this shifted cue template and spatial firing rate at each cue were individually calculated. The cue score was defined to be the mean of these correlations (Materials and Methods). (C) The distribution of cue scores of recorded cells is shown at the top with the distribution of cue scores calculated on shuffled data shown below. The threshold was chosen as the value that 95% of the shuffled scores did not exceed (vertical black line). Cells exceeding this threshold were termed ‘cue cells’ and are shown in red in the top plot. (D) Distribution of spatial firing fields of all cue cells in two environments.

Figure 1—source data 1. Cue score and shuffle distributions.
Figure 1—source data 2. Cue cell population field distributions across the virtual tracks.

Figure 1.

Figure 1—figure supplement 1. Histology of tetrode tracks and tetrode cell type summary.

Figure 1—figure supplement 1.

All images were sharpened and recolored to emphasize tetrode tracks and lesions. Scale bar = 1 mm. Tables show summaries of cell types for each animal for the database used in the main figures and for the database shown in the figure supplements (tetrode lowering database).
Figure 1—figure supplement 2. Velocity profiles of navigation along virtual tracks.

Figure 1—figure supplement 2.

Examples of the mean ± standard deviation velocity profiles of animals running along 3 different virtual tracks.
Figure 1—figure supplement 3. Cue cells in tetrode lowering database.

Figure 1—figure supplement 3.

Summary plots for a database including only dates in which tetrodes were lowered (tetrode lowering database). Cue cells accounted for 19% of cells in this database. The mean firing field fraction for spatial bins of the firing field distribution in cue regions (0.40 ± 0.14) was higher than that for bins outside of cue regions (0.26 ± 0.18) (paired one-tailed t-test: cell fraction in cue regions > cell fraction outside cue regions, N = 52, p=4 × 10−7).

We used tetrodes to record 789 units in the MEC of three mice (Materials and methods and Figure 1—figure supplement 1). Activity of a subpopulation of these units exhibited a striking pattern, with spiking occurring only near cue locations along the virtual linear tracks (Figure 1A). On each run along the track, clusters of spikes were present at cue locations, forming a vertical band of spikes at each cue in the run-by-run raster plot. Spatial firing rates were calculated by averaging this spiking activity across all runs along the track. Prominent peaks in the spatial firing rate were present at cue locations. In order to classify these peaks, we defined spatial firing fields as the locations along the track where the spatial firing rate exceeded 70% of the shuffled data (Materials and Methods) and observed that the spatial firing fields were preferentially located near cue locations (fields indicated by red lines in Figure 1A).

To quantify this feature of the spatial firing rate, we developed a ‘cue score’ that measures the relationship between a cell’s spatial firing rate and the visual cues of the environment (Figure 1B and Materials and Methods). The cue score was based on the correlation of the cell’s spatial tuning with a spatial cue template that had value one at each cue, and zero elsewhere. Cells with cue scores above the threshold (95th percentile of shuffled data, Materials and Methods) represented ~13% of all recorded cells (Figure 1C). In the remainder of the paper, we refer to these cells as ‘cue cells’.
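To make the cue-score computation concrete, a minimal sketch in Python/NumPy is given below. This is an illustration of the procedure described above, not the authors' code: the 5 cm binning, the circular shift used to lag the template, the ±300 cm lag range, and the per-cue correlation windows (cue_windows) are assumptions based on the text and Materials and Methods.

import numpy as np

def cue_score(rate, template, cue_windows, max_lag_bins=60):
    """Cue score sketch: correlate a spatial firing rate with a binary cue template,
    shift the template to the local correlation maximum closest to zero lag, then
    average the per-cue correlations of the shifted template with the firing rate.

    rate        : spatial firing rate, one value per 5 cm bin
    template    : binary cue template (1 in bins near a cue, 0 elsewhere)
    cue_windows : list of (start_bin, stop_bin) spans, one per cue
    """
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    corrs = np.array([np.corrcoef(rate, np.roll(template, lag))[0, 1] for lag in lags])

    # local maxima of the lagged correlation; pick the one closest to zero shift
    interior = np.arange(1, len(corrs) - 1)
    peaks = interior[(corrs[interior] > corrs[interior - 1]) &
                     (corrs[interior] > corrs[interior + 1])]
    if peaks.size == 0:
        return np.nan, 0
    shift = lags[peaks[np.argmin(np.abs(lags[peaks]))]]

    # correlate the shifted template with the rate around each cue, then average
    shifted = np.roll(template, shift)
    per_cue = [np.corrcoef(rate[a:b], shifted[a:b])[0, 1] for a, b in cue_windows]
    return float(np.nanmean(per_cue)), int(shift)

The classification threshold would then be taken as the 95th percentile of the same score computed on shuffled data, as in Figure 1C.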

We next quantified the distribution of spatial firing fields of all cue cells along the track by calculating, for each 5 cm bin, the fraction of cue cells with a spatial firing field in that bin (Materials and methods). We defined the plot of this fraction versus location as the firing field distribution for all cue cells. This distribution had peaks in locations where salient information about the environment was present (Figure 1D), and some fields were correlated with the beginning of the track where wall patterns changed. The mean firing field fraction for spatial bins of the firing field distribution in cue regions (0.4 ± 0.1) was higher than that for bins outside of cue regions (0.2 ± 0.1) (paired one-tailed t-test: fraction of cue cell population with fields located in cue region bins > fraction of cell population with fields in bins outside cue regions, N = 100, 3 animals, p=1.2 × 10−11). Thus, the cue score identifies a subpopulation of MEC cells with spatial firing fields correlated with prominent spatial landmarks.
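A minimal sketch of this field-distribution comparison is shown below, assuming each cell's firing fields are available as a boolean mask over 5 cm bins and that the cue-region mask comes from the track layout; the variable names are hypothetical and the test follows the paired one-tailed t-test described above.

import numpy as np
from scipy import stats

def field_distribution(field_masks):
    """Fraction of cue cells with a spatial firing field in each 5 cm bin.

    field_masks : (n_cells, n_bins) boolean array, True where a cell has a field.
    """
    return field_masks.mean(axis=0)

def cue_vs_noncue_field_fraction(field_masks, cue_region_mask):
    """Paired one-tailed t-test: per-cell field fraction inside vs. outside cue regions."""
    in_cue = field_masks[:, cue_region_mask].mean(axis=1)
    out_cue = field_masks[:, ~cue_region_mask].mean(axis=1)
    t, p_two_sided = stats.ttest_rel(in_cue, out_cue)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return in_cue.mean(), out_cue.mean(), p_one_sided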

Cue cell responses to environment perturbations

Is the activity of cue cells truly driven by the visual cues of the environment? To address this question, we designed related pairs of virtual tracks. One track had all cues present (with-cues track) and, in the second track, the last three cues were removed (missing-cues track). Tetrode recordings were performed as mice ran along both types of track in blocks of trials within the same session. Water rewards were delivered in the same location on each track regardless of the cue location differences.

At the beginning of both with-cues and missing-cues tracks, where the tracks were identical, the spatial firing rates of cue cells were similar across tracks. Vertical bands of spikes were present in the run-by-run raster plots of both tracks and formed peaks in the spatial firing rate. The bands were also identified as spatial firing fields, which generally aligned to features of the environment (spatial cues/changes in wall patterns) present on both tracks (Figure 2A, red lines indicate field locations). However, the firing patterns changed dramatically from the point along the track where the environments began to differ. Spatial firing fields were prominent at cue locations along the entire remaining part of the with-cues track (Figure 2A, top) but were not present on the same part of the missing-cues track (Figure 2A, bottom). To quantify this difference, we performed two calculations using an equal number of runs for both the with-cues and missing-cues tracks (Figure 2B and D). In both cases, the data were split into two regions along the track (top of Figure 2B and D): the start region where cues were present for both tracks (black bar marks this region, Region A - same) and the rest of the track where cues were either present or absent (green bar, Region B - different). We first calculated the Pearson correlation between each cell’s spatial firing rate and the cue template with lags varying from −300 to 300 cm in 5 cm steps. The correlation between the firing rate and template was defined as the value of this lagged correlation at the peak located closest to zero shift. This correlation was lower for the firing rates on the missing-cues track in Region B but not in Region A (Figure 2B, paired one-tailed t-test: correlation in Region B on with-cues track > correlation in Region B on missing-cues track, N = 65, 3 animals, for Region A: p=0.56; for Region B: p=1×10−14). We also compared changes in the spatial firing field distribution of the cue cell population (Figure 2C) across the two tracks and, for each cue cell, the fraction of the track region containing spatial firing fields (Figure 2D). In Figure 2C, for each 5 cm bin along the track, the fraction of the cue cell population with a field in that bin is plotted for the with-cues and missing-cues tracks for two environments. In Region A of both tracks, cue cells showed a similar fraction of the region with fields. Cue cells had spatial firing fields clustered in each region where a cue was located on both tracks, but these fields were not present when cues were removed in Region B of the missing-cues track (Figure 2C). In Region B, the fraction of the region with spatial firing fields was lower on the missing-cues track compared to the with-cues track (Figure 2D, paired one-tailed t-test: field fraction on with-cues track > field fraction on missing-cues track, N = 65, 3 animals, for Region A: p=1.00, for Region B: p=0.0002). These results were consistent both for cue cells with varying spatial shifts relative to the cues (cue cells separated into two categories: those with fields either near cues or far from cues, Figure 2—figure supplement 1) and for a database consisting of dates in which tetrodes were lowered (Figure 2—figure supplement 2). These results demonstrate that cue cells are more coherently active in regions of an environment where cues are located.

Figure 2. Cells respond to cue changes in an environment.

(A) Examples of the spatial firing rates of cells during cue perturbation experiments. For each example, the top and bottom panels are from the same cell in blocks of trials in which the animal either ran down a virtual track with all cues present (with-cues track, top) or a track where some cues were missing in the later part of the track (missing-cues track, bottom). The environment and cue template for both environments are shown with the corresponding raster plots and spatial firing rates below. (B) Cue correlations of the firing rates along the with-cues and missing-cues tracks within Regions A and B were calculated. The correlation of firing rate to cue was lower for the firing rates on the missing-cues track in Region B (paired one-tailed t-test: cue correlation on with-cues track > cue correlation on missing-cues track, N = 65, 3 animals, for Region A, p=0.56, for Region B, p=1×10−14). On the right, the means with standard deviations are shown for Regions A and B on each track. (C) Population field distribution for the entire population of cue cells along with-cues and missing-cues tracks for two environments. (D) Comparison of firing fields of all cue cells between runs in the initial region that is the same for both tracks (Region A) and the later region (Region B) on the with-cues and missing-cues tracks. The fraction of each region that had spatial firing fields (number of field bins/total bins in that region) is plotted for each cue cell. The field fraction was larger in Region B on the with-cues track in comparison to Region B on the missing-cues track (paired one-tailed t-test: field fraction on with-cues track > field fraction on missing-cues track, N = 65, 3 animals, for Region A: p=1.00, for Region B, p=0.0002). On the right, the means with standard deviations are shown for Regions A and B on each track. ***p≤0.001. Student’s t-test.

Figure 2—source data 1. Cue cell firing field fractions.
Figure 2—source data 2. Correlations of firing rates along tracks to cue template.

Figure 2.

Figure 2—figure supplement 1. Cue-removal responses of cue cells with fields near or far from cues.

Figure 2—figure supplement 1.

Data shown in Figure 2 are separated into subgroups based on the spatial shift of the firing rate of each cue cell relative to the cue template. Plots from Figure 2B–D are shown for each group. Cells with spatial shifts less than and more than 25 cm in either direction were categorized as cells with fields near and far from cues, respectively. Top panel stats: Cue cells with fields near cues. For cue correlations: Region A, p=0.95; Region B, p=6×10−14. For fractions of regions with fields: Region A, p=0.99; Region B, p=0.004. Bottom panel stats: Cue cells with fields far from cues. For cue correlations: Region A, p=0.16; Region B, p=0.004. For fractions of regions with fields: Region A, p=0.98; Region B, p=0.004. ***p≤0.001. **p≤0.005. Student’s t-test.
Figure 2—figure supplement 2. Cue cell responses to cue changes for tetrode lowering database.

Figure 2—figure supplement 2.

Statistics for Figure 2 plots using the tetrode lowering database. For cue correlations: Region A, p=0.66; Region B, p=4×10−10. For fractions of regions with fields: Region A, p=1.00; Region B, p=0.03. ***p≤0.001. *p≤0.05. Student’s t-test.

Relationship to previously defined cell classes

To relate our population of cells recorded along linear tracks in virtual reality to previously characterized cell types in the MEC, the same cells were also recorded as the animal foraged for chocolate chunks in a real two-dimensional (2D) environment (0.5 m × 0.5 m). From the recordings performed in the real arena, we calculated grid, border, and head direction scores for all cells (i.e. both cue and non-cue cells, Materials and Methods). We plotted the values of these spatial scores against the cue scores, which were calculated for the same cell during VR navigation, to determine the relationship between cue cells and previously defined cell classes (Figure 3A). We found that a small percentage of cue cells were conjunctive with border (11%) or grid (28%) cell types, and some cue cells had a significant head direction score (35%) (since the head direction score is based on orientation tuning, we do not consider the head direction cell type to be a spatial cell type). The total percentage of cue cells (13%) in the dataset was comparable to that of grid and border cells (Figure 3B).

Figure 3. Cue cell activity during foraging in a real arena.

(A) Relative distributions of cue scores compared to border, grid, and head direction scores. Thresholds were calculated as the value that exceeds 95% of the shuffled scores. The solid line indicates the threshold for each score that was used to determine the corresponding cell type (Materials and Methods). Cells are color-coded for whether they are cue (red), grid (green), border (blue), or head direction cells (black). The percentage of the cue cell population that was conjunctive for border, grid, and head direction is shown in each plot. (B) Percentage of each cell type in the dataset. (C) Examples of the spatial stability of the spatial firing rates of cue cells in a real arena. The recording of each cell was divided in half. The spatial firing rates of the first and second halves are shown for each cell in the left and right columns. Within each column: top: plots of spike locations (red dots) and trajectory (gray lines); middle: the 2D spatial firing rate (represented in a heat map with the maximum firing rate indicated above); bottom: head direction firing rate. The stability was calculated as the correlation of these two firing rates and shown at the top for each cell. (D) Histograms of the spatial and head direction stability of the 2D real environment firing rates by cell type. (E) Percentage of 2D real environment stable cells that are of a certain type. Cell types are color-coded: red = cue cell, green = grid cell, blue = border cell, black = head direction cell.

Figure 3—source data 1. Cue, border, grid and head direction scores.
Figure 3—source data 2. Spatial and head direction stability values by cell type.

Figure 3.

Figure 3—figure supplement 1. Cue cell activity in real arenas.

Figure 3—figure supplement 1.

Top and middle panels: the spatial and head direction firing rates of cue cells in a real arena are sorted based on the spatial shifts of their spatial firing fields to the cue template in virtual reality (bottom panel). No clear patterns of changes in number, size and location of firing fields or the mean vector length of head direction firing rates were observed. In most cases, the cue card was located on the right wall of the environment.
Figure 3—figure supplement 2. Real arena navigation for tetrode lowering database.

Figure 3—figure supplement 2.

Summary of cell type percentages and distributions for database including days when tetrodes were lowered.

Since most cue cells (63%) were not conjunctive with a previously known spatial cell type, we next examined their spatial activity patterns during navigation in the real arena. As expected from their scores, most of these cells had irregular activity patterns in the arena and were not classified as any previously identified spatial cell type (Figure 3C and Figure 3—figure supplement 1).

One striking feature of the spatial firing patterns of cue cells observed in real environments was the spatial stability of these complex and irregular patterns (Materials and methods). The spatial firing rates in the real arena from the first and the second halves of the recording for 10 cue cells are shown in Figure 3C. We calculated the spatial stability as the correlation between these two halves and found that the spatial firing patterns were irregular but surprisingly stable. To further quantify this observation, we calculated the distributions of the stability of both the spatial and head direction firing rates for all cell types (Materials and methods) (Boccara et al., 2010). We found that the distributions of stability for unclassified cells (not classified as cue, grid, border, or head direction cells, and labeled as ‘Other’) were generally shifted towards lower values compared to all the currently classified cells, indicating that a large fraction of the remaining unclassified cells do not stably encode spatial and head direction information in the real arena (Figure 3D). In contrast, our newly classified cue cells showed spatial stability comparable to that of the other classified cell types, supporting their ability to encode spatial information during navigation. Figure 3E shows the fractions of all cells with significant stability scores for their spatial firing rates in the real arena, classified by cell type. While some cue cells were conjunctive with border or grid cells, a large percentage (63%) of cue cells had not previously been classified as a particular spatial cell type. Cue cells accounted for 7% of the population of spatially stable cells, and for 14% of the previously unclassified spatially stable cells.
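The split-half stability measure can be sketched as follows. This is a minimal illustration, assuming a 50 cm square arena binned at 2.5 cm and treating occupancy as a count of position samples (i.e., a uniform sampling rate); it omits the smoothing and occupancy weighting presumably used in the actual analysis.

import numpy as np

def rate_map(pos, spike_pos, arena_cm=50.0, bin_cm=2.5):
    """Occupancy-normalized 2D firing-rate map (spike counts / position-sample counts per bin)."""
    edges = np.arange(0.0, arena_cm + bin_cm, bin_cm)
    occ, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    spk, _, _ = np.histogram2d(spike_pos[:, 0], spike_pos[:, 1], bins=[edges, edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return spk / occ

def split_half_stability(pos, t, spike_pos, spike_t):
    """Pearson correlation between rate maps built from the two halves of a session."""
    t_mid = t[len(t) // 2]
    map1 = rate_map(pos[t < t_mid], spike_pos[spike_t < t_mid])
    map2 = rate_map(pos[t >= t_mid], spike_pos[spike_t >= t_mid])
    valid = np.isfinite(map1) & np.isfinite(map2)   # ignore unvisited bins
    return float(np.corrcoef(map1[valid], map2[valid])[0, 1])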

Cue cells form a sequence at cue locations

For many cue cells, we observed that their firing fields had varying spatial shifts relative to the cue locations on virtual tracks. We took all cue cells identified for each virtual track and ordered their spatial firing rates and fields by the values of their spatial shift relative to the cue template, which was the smallest displacement of the cue template that best aligned it with the firing rate (Figure 4A; Materials and Methods). We found a striking pattern in which cue cells formed a sequence of spatial firing fields that was repeated at each cue. To examine whether this pattern was produced by the concentration of neural firing around cues, rather than by the alignment and ordering of the data alone, we compared it to field-shuffled data, which were created by permuting spatial fields within each spatial firing rate and then ordering these new field-shuffled spatial firing rates in the same manner as the original spatial firing rates (Materials and Methods). The field-shuffled data did not exhibit an obvious sequence (Figure 4A, bottom). This difference between the cue cells and shuffled data was further quantified by a ridge-to-background ratio (Materials and Methods), which was computed as the mean firing rate in a band centered on the sequential spatial firing rates of the cue cell population divided by the mean background rate outside of this band. We note that although ordering the spatial firing rates of the cells by their spatial shift was expected to create a ridge of firing rate along the diagonal, the mean ridge/background ratio for cue cells (1.8) was higher than that for the field-shuffled data (1.2 ± 0.35, p=9.8 × 10−7; N = 100 cue cells, Figure 4B). Most cue cells had small spatial shifts (Figure 4C). Thus, the sequence represents sequential neural activity preferentially located near cue locations, rather than an artifact of ordering the data.
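One plausible reading of the ridge-to-background calculation is sketched below, assuming 5 cm bins, per-cell spatial shifts expressed in bins, and a fixed-width band centered on each cue location displaced by the cell's shift; the exact band definition in Materials and Methods may differ.

import numpy as np

def ridge_to_background(norm_rates, shifts, cue_centers, halfwidth=5):
    """Mean rate in a band around each cell's expected (shifted) field locations,
    divided by the mean rate outside that band, averaged over cells.

    norm_rates  : (n_cells, n_bins) normalized spatial firing rates
    shifts      : per-cell spatial shift relative to the cue template, in bins
    cue_centers : bin index of each cue center
    halfwidth   : half-width of the ridge band, in bins
    """
    n_cells, n_bins = norm_rates.shape
    ratios = []
    for rate, shift in zip(norm_rates, shifts):
        band = np.zeros(n_bins, dtype=bool)
        for c in cue_centers:
            center = int(c + shift)
            band[max(0, center - halfwidth):min(n_bins, center + halfwidth + 1)] = True
        ratios.append(rate[band].mean() / rate[~band].mean())
    return float(np.mean(ratios))

The field-shuffled control would permute each cell's firing fields along the track and recompute the same ratio.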

Figure 4. Cue cells form a sequence aligned at each cue.

(A) Cue cell sequences shown for two environments. The top two rows show a side view of the virtual track and the corresponding cue template below it. Just below this, the spatial firing rates are shown, with the normalized firing rate of each cue cell plotted along a row. The cells are sorted based on their spatial shifts calculated for alignment of spatial firing rates to the cue template (Materials and Methods). The corresponding spatial firing fields of the same cells are shown in the middle panel, with the firing fields ordered in the same sequence as the firing rates. At the bottom is an example of the sorted field-shuffled spatial firing rates, generated by shuffling the firing fields of each cell and then sorting based on their spatial shifts to the cue templates. (B) The ridge to background ratio for the data and shuffles of environment 1. (C) Distribution of spatial shifts for all cue cells.

Figure 4—source data 1. Cue cell sequences.

Figure 4.

Figure 4—figure supplement 1. Cue cell sequences in tetrode lowering database.

Figure 4—figure supplement 1.

For this database, the mean ridge/background ratio for cue cells (2.0) was higher than that for spatial field shuffled data (1.2 ± 0.4, p=7 × 10−7; N = 52 cue cells, Figure 4A). Most cue cells had small spatial shifts (Figure 4B).

Side-preference of cue cells in superficial layers of the MEC

Since cues were bilaterally present along all the tracks studied thus far, it was unclear whether cue cells primarily responded to cues on only one side. To address this, we designed a 10-meter virtual track with asymmetric cues on the left and right sides (Figure 5A) and performed cellular-resolution two-photon imaging in order to simultaneously record the responses of a large number of cue cells in the MEC (Low et al., 2014). Calcium dynamics of layer 2 cells were specifically measured using the genetically encoded calcium indicator GCaMP6f, which was stably expressed in layer 2 excitatory neurons of the MEC in GP5.3 transgenic mice (Figure 5—figure supplement 1A; Chen et al., 2013; Dana et al., 2014; Gu et al., 2018).

Figure 5. Cue cell responses to side-specific cues in layer 2 of the MEC.

(A) A 1000 cm (10 meter) long virtual linear track for imaging experiments. ‘L’ and ‘R’ indicate cues on the left and right sides of the track, respectively. (B) Left and right cue templates with cues on the left and right sides of the track. (C) Examples of individual cue cells responding to the left- (top) or right-side cues (bottom) in layer 2 of the MEC. For each cell: top: ΔF/F versus linear track position for a set of sequential traversals. Middle: mean ΔF/F versus linear track position. Bottom: overlay of the cue template and aligned mean ΔF/F (black) according to the spatial shift, which gave the highest correlation between them (Materials and Methods). (D) Left and right cue cell sequences aligned to left- (top) and right-side cues (bottom), respectively. In each row the mean ΔF/F of a single cell along the track, normalized by its maximum, is plotted. The cells are sorted by the spatial shifts identified from the correlation of their mean ΔF/F to the cue template. (E) Bilateral scores of all left and right cells in D. Left: bilateral scores of individual cells (dots). Right: kernel density distribution of bilateral scores. Note that the bilateral scores show a strong bimodal distribution. (F) Percentages of cue cells among all cells active during virtual navigation (active cells were determined as cells identified using independent component analysis, Materials and Methods). Left bar: all left and right cue cells. Middle bar: left cue cells. Right bar: right cue cells. Individual data points that were pooled for this are the percentages of cue cells in 12 FOVs in layer 2, p=6.90 × 10−4. (G) Comparison of cue scores of left and right cue cells in layer 2. Individual data points are cue scores of cells in D, p=1.67 × 10−5. All data were generated using layer 2 cue cells in 12 FOVs in four mice. ***p≤0.001. Student’s t-test.

Figure 5—source data 1. Cue scores, bilateral scores and percentages of left and right cue cells.

Figure 5.

Figure 5—figure supplement 1. Expression of GCaMP6f in layers 2 and 3 of the mouse MEC.

Figure 5—figure supplement 1.

(A) Expression of GCaMP6f in layer 2 of the MEC, shown with an image of a sagittal brain slice of a GP5.3 mouse at 5 months of age. Left: blue epifluorescence image showing cell bodies in layers 2 and 3 (separately delineated by white dotted curves) of the MEC labeled by fluorescent Nissl staining. Right: green epifluorescence image of the same slice on the left showing GCaMP6f expression of layer 2 neurons in the MEC. Scale bar: 200 µm. (B) Expression of GCaMP6f in layer 3 of the MEC, shown with an image of a sagittal brain slice of a wild type mouse 13 days after the injection of AAV1-hSyn-GCaMP6f virus in the MEC. Left: blue epifluorescence image showing cell bodies in layers 2 and 3 (boundaries shown by white dotted curves) of the MEC labeled by fluorescent Nissl staining. Right: green epifluorescence image of the same slice on the left showing GCaMP6f expression of dorsal layer 3 neurons in the MEC. Scale bar: 200 µm. The results of the imaged layer 3 cells are shown in Figure 5—figure supplement 4.
Figure 5—figure supplement 2. Cue cells preferentially represent cues on a single side, rather than both sides of the track.

Figure 5—figure supplement 2.

(A) Left and right cue scores in layer 2 of the MEC. The locations of each circle/dot on the x and y axes represent the right and left cue scores for a cell, respectively. The solid line indicates the threshold for each type of score that was used to determine the corresponding cue cell type. Cells are color-coded according to whether they were right cue cells (magenta circles), left cue cells (green dots), or cells with cue scores that exceeded both right and left cue score thresholds (green dots with magenta outline). The scores of cells below all thresholds are shown as gray circles. The distributions of right and left scores are shown on the top and left of the plots with corresponding colors indicating cue scores above thresholds. (B) Spatial shifts on left and right templates for the 14 layer 2 cue cells (from 12 FOVs in four mice) that passed the thresholds of both templates. Each dot represents one cell. The bigger dot contains two data points with identical x and y coordinates. The gray dotted line indicates x = y. (C–G) Cue cells classified using the both-side cue template, which included cues on both left and right sides of the track. We classified cells using a threshold specific to the both-side template (B). However, we concluded that the cues on both sides were not well represented by the classified cells based on the following three reasons: 1.) The cue scores of both-side cue cells were significantly lower than those of left and right cue cells (D). Since the cue score is defined to be the mean correlation of a cell's response to individual cues, independent of the number of cues on a template, the low cue scores indicate that the responses of both-side cue cells did not correlate well to cues on both sides of the track. 2.) 64% of both-side cue cells were also classified as left or right cue cells, which only strongly responded to cues on one side (E, cell examples in F and G, the first and second panels). 3.) The rest of both-side cue cells (36%) only weakly correlated to the both-side template (E, cell examples in F and G, the third panels), as reflected by their lower cue scores (D). Cue cell sequences aligned to both-side cue template (top). Each row is mean ΔF/F of a single cell along the track, normalized by its maximum. The cells are sorted by the spatial shifts of their mean ΔF/F to the both-side cue template. (D) Comparison of cue scores. From left to right: scores of cells that passed the threshold of left, right, and both-side templates. Among the both-side cue cells, from left to right: all both-side cue cells; both-side cue cells that also independently had left and right cue scores that exceeded the thresholds for those scores; both-side cue cells that were not classified as left and right cue cells (non-left/non-right cells). p value: column 1 to 3: 4.35 × 10−37. Column 2 to 3: 4.08 × 10−63. Column 4 to 5: 7.16 × 10−4. (E) Pie chart showing the percentage of both-side cue cells that were also classified as left and right cue cells (white) and non-left and non-right cue cells (gray). (F) Three examples of both-side cue cells. Left: a both-side cue cell that is also identified as a left cue cell; middle: same but for a right cue cell; Right: a cell uniquely identified as a both-side cue cell (non-left/non-right cue cells). For each cell: top: ΔF/F versus linear track position for a set of sequential traversals. Middle: mean ΔF/F versus linear track position. Bottom: overlay of the cue template and aligned mean ΔF/F (black) according to the spatial shift. 
The left (green) and right cues (magenta) in the both-side cue templates are also shown in corresponding colors. (G) Cue cell sequences of both-side cue cells. From left to right: both-side cue cells also identified as left cue cells, right cue cells, and cells only identified as both-side cue cells (non-left/non-right cue cells). In each row the mean ΔF/F of a single cell along the track, normalized by its maximum, is plotted. The cells are sorted by the spatial shifts calculated from the correlation of each cell's mean ΔF/F to the cue template. (H) Calculation of the bilateral score. In the two cases shown, cartoon illustrations of activity patterns show examples of cells with responses to cues only on one side (case 1) or to cues on both sides (case 2).
Figure 5—figure supplement 2—source data 1. Cue scores, spatial shifts and both-side cue template.
Figure 5—figure supplement 3. Comparison of the percentage of cue cells identified using original and randomized cue templates.

Figure 5—figure supplement 3.

(A) Comparison of the percentage of right cue cells identified using the right cue template to cells identified using randomized versions of the original cue template. The red curve shows the distribution of the percentages of cue cells identified using the current cue template (12 FOVs from four mice). The gray curve shows the distribution of the percentages of cue cells identified using random cue templates. The curve represents data from all 12 FOVs with 200 random cue templates generated for each FOV. The distributions for real and shuffled templates were estimated using a kernel smoothing function with the same bandwidth of the smoothing window. The maximal distribution probabilities of all real and shuffled data were normalized to 1. p value: 4.45 × 10−9. (B) Similar to A but for left cue cells. p value: 0.0168.
Figure 5—figure supplement 4. Cue cell properties in layers 2 and 3 of the MEC across different environments.

Figure 5—figure supplement 4.

(A) An 1800 cm (18 meter) long virtual track for imaging experiments. ‘L’, ‘R’ and ‘L/R’ indicate cues on the left, right and both sides of the track, respectively. (B) Left, right, and both-side cue templates of the track in A. (C–G) Results for layer 2 cells, which were generated from 40 FOVs in six mice. (C) Comparison of cue scores. From left to right: scores of cells that passed the threshold of left, right, and both-side templates. Among the both-side cells, from left to right: all both-side cells; both-side cells that also passed thresholds of left and right templates; both-side cells that were not classified as left and right cells (non-left/non-right cells). Statistics: column 1 to 3: p=6.33 × 10−39, Column 2 to 3: p=6.90 × 10−52. (D) Pie chart showing the percentage of both-side cells that were also classified as left and right cells (white) and non-left and non-right cells (gray). (E) Cue cell sequences aligned to left- and right-side cues. Each row shows the mean ΔF/F along the track of a single cell, normalized by its maximum. The cells are sorted by the spatial shifts calculated from the correlation of the mean ΔF/F to the cue template. (F) Bilateral scores of left and right cells shown in C. Left: bilateral scores of individual cells (dots). Right: kernel density distribution of bilateral scores. Note that the distribution of bilateral scores is bimodal. (G) Percentages of cue cells. Left bar: all left and right cue cells. Middle bar: left cue cells. Right bar: right cue cells. Individual data points that were pooled for summary plots are cue cells from 40 FOVs in layer 2, p=7.12 × 10−6. (H–L) Similar to C–G but for layer 3 cells, from 37 FOVs in two mice. (H) Statistics: column 1 to 3: p=4.54 × 10−28. Column 2 to 3: p=5.30 × 10−32. (L) Percentages of cue cells. Individual data points that were pooled for summary plots are cue cells from 37 FOVs in layer 3, p=0.041. *p≤0.05. ***p≤0.001. Student’s t-test.
Figure 5—figure supplement 4—source data 1. Cue scores, bilateral scores and percentages of cue cells in layers 2 and 3 on a 18-meter virtual linear track.
Figure 5—figure supplement 5. Spatial shifts of cells with cue-correlated activity patterns.

Figure 5—figure supplement 5.

Method: The goal of this analysis is to investigate whether cells with cue-correlated activity patterns show consistently shifted responses to individual cues. Since cue cells were largely chosen based on the correlation of their activity patterns to a specific cue template (Figure 1B), this procedure could artificially select cells with activity patterns consistently shifted from individual cues and thus having high correlations to the template (that is, high cue scores). Consequently, when these selected cue cells were ordered based on their spatial shifts, their responses were very likely to form consistent sequences at individual cues (as in Figures 4A and 5D). To avoid this artifact, here we classified cells with cue-correlated activity using a different approach in order to investigate whether having responses with consistent spatial shifts to individual cues is a true feature of cue cells. This analysis was performed on data collected in layers 2 and 3 of the MEC when mice navigated along an 18-meter track (Figure 5—figure supplement 4A and B). The track contained a large number of cues (10 cues), which allowed a more precise classification of cells with cue-correlated activity even when choosing only half the number of cues (method described below). Each cue template was split into two half-templates (templates 1 and 2), each containing half of the cues of the original template. The cues on the two templates were non-overlapping. Cue cells were first classified using one of the half-templates (i.e., template 1). The spatial shifts found from the correlation to template 1 were compared to those found for the other half-template (template 2), which was not used to classify the cells. The hypothesis was that if the response of a cue cell was shifted by the same distance from each cue, then the spatial shifts would be similar between these two half-templates. An example with two half-templates is shown in A. R1 and R2 are two half-templates with cues on the right side of the track. We calculated the percentage of cells that maintained similar spatial shifts across the two half-templates (the difference of the spatial shifts on R1 and R2 is less than 25 cm). We found that a large fraction of cue cells (76.9% and 80.3% for cells identified on R1 and R2, respectively) had very similar shifts on the two half-templates. A similar example for left-side cues is shown in (B). To further confirm that this high percentage of cells with consistent spatial shifts to cues was not found only by using a particular set of half-templates, we repeated this analysis for cells in both layers 2 and 3 using multiple sets of half-templates comprised of various combinations of cues from the original templates. These stricter analyses of the spatial shifts of cue cells together indicate that cue cells respond to individual cues with consistent spatial shifts.
Figure details: (A) Spatial shifts of cue cells classified using two half-templates of the right cue template (R template). The R template was split into two half-templates: R1 and R2. Spatial shifts of cue cells classified using R1 and R2 are shown in a1 and a2, respectively. a1: From top to bottom: (1) Classification of cue cells (R1 cue cells) using R1 template. (2) Ordered R1 cell responses based on their spatial shifts to R1 (R1 shifts). (3) Ordered R1 cell responses based on their spatial shifts to R2 (R2 shifts). Note that the activity patterns in both (2) and (3) consistently shift from individual cues, indicating that R1 cue cells generally had similar spatial shifts on R1 and R2. (4) Difference in spatial shifts of R1 cells on R1 and R2. Each dot is one cell. The fraction of cue cells (76.9%) with very similar spatial shifts on R1 and R2 (less than 25 cm absolute shift differences) is marked by the red parenthesis. a2: similar to a1 but for cue cells (R2 cue cells) classified using R2 template. (B) Similar to A but for left cue template (L template). (C) Summary of the percentages of cells in layers 2 and 3 with very similar spatial shifts (less than 25 cm absolute shift differences) on multiple pairs of half-templates (5 and 2 pairs for layers 2 and 3, respectively) comprised of different combinations of cues on the left and right cue templates (L1, L2, R1, R2). All analyses showed similar results. Red lines indicate the mean of each group.

Side-specific cue templates were defined for this environment (Figure 5B for left and right cue templates). We classified side-specific cue cells by using the threshold for each template, which was the 95th percentile of shuffled scores obtained using templates with randomly arranged cues (Materials and Methods, Figure 5C and D). We found that in layer 2 of the left MEC most cue cells only passed the threshold for either the left or the right template: 8.1% and 21.9% of cells were uniquely identified as left and right cue cells, respectively. Very few cells passed the thresholds of both templates (14 out of the total population of 281 left and right cells (5%), and 1.6% of the population of all cells, N = 4 animals, Figure 5—figure supplement 2A) and their responses correlated to the two single-side cue templates under different spatial shifts (Figure 5—figure supplement 2B), indicating that they did not simultaneously respond to both left and right cues. Moreover, the cells identified using the template containing both left and right cues had significantly lower cue scores than the left and right cue cells, further suggesting that their responses correlated less well with cues on both sides than with cues on one side (Figure 5—figure supplement 2C-2G). Therefore, we only focused on the left and right cue cells in the following analyses.

To directly test whether these left and right cue cells preferentially responded to cues on one side of the track, we developed a bilateral score that tested against the null hypothesis that cue cells responded equally to cues on both sides. The bilateral score was defined as the difference between the left and right cue scores of a cue cell when its response was best aligned to its preferred template (Figure 5—figure supplement 2H). A large absolute value of the bilateral score (large difference between left and right cue scores) indicated the cell’s preferential response to cues on a single side (left or right), whereas a bilateral score around zero (small difference between left and right cue scores) indicated a comparable response to cues on both sides. We found that the distribution of bilateral scores of left and right cells together was bimodal with two major peaks at large absolute values (Figure 5E), suggesting that the left and right cue cells indeed responded to cues on one side of the track.
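Under these definitions, the bilateral score can be sketched as below. The per-cue windows, the sign convention of the shift, and the evaluation of both templates at the preferred-template shift are assumptions; ΔF/F responses stand in for firing rates.

import numpy as np

def template_score_at_shift(response, template, cue_windows, shift):
    """Mean per-cue correlation between a cell's response and a template shifted by a fixed lag."""
    shifted = np.roll(template, shift)
    per_cue = [np.corrcoef(response[a:b], shifted[a:b])[0, 1] for a, b in cue_windows]
    return float(np.nanmean(per_cue))

def bilateral_score(response, left_template, right_template,
                    left_windows, right_windows, preferred_shift):
    """Difference between left and right cue scores, both evaluated at the shift that
    best aligns the cell to its preferred template. Values near zero suggest comparable
    responses to cues on both sides; large absolute values suggest a single-side preference."""
    left = template_score_at_shift(response, left_template, left_windows, preferred_shift)
    right = template_score_at_shift(response, right_template, right_windows, preferred_shift)
    return left - right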

We next asked whether the locations of left and right cues were preferentially represented in the MEC in comparison to other locations of the track. We compared the percentages of left and right cue cells identified using the current cue templates to those using random templates, which were created by shuffling cue locations along the track. We reasoned that an unbiased representation of all locations along the track should lead to the classification of similar numbers of cue cells regardless of templates. In contrast, we found that the percentages of left and right cue cells corresponding to the current templates were significantly higher than those to random templates (Figure 5—figure supplement 3), indicating that the locations of left and right cues were preferentially represented by cue cells. For this reason, cue cells are a unique population with spatial fields clustered at cue locations, rather than a subpopulation among cells with spatial fields uniformly spanning the environment.
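This template-randomization control can be sketched as follows. Here score_fn (for example, a wrapper around the cue-score sketch above that returns only the score) and make_random_template (which returns a template with shuffled cue locations together with its cue windows) are hypothetical, user-supplied placeholders rather than functions from the paper.

import numpy as np

def cue_cell_fraction(responses, template, cue_windows, threshold, score_fn):
    """Fraction of cells whose cue score (from score_fn) exceeds the classification threshold."""
    scores = np.array([score_fn(r, template, cue_windows) for r in responses])
    return float(np.mean(scores > threshold))

def random_template_comparison(responses, template, cue_windows, threshold,
                               score_fn, make_random_template, n_random=200, seed=0):
    """Compare the cue-cell fraction for the real template against fractions obtained
    from templates with randomized cue locations."""
    rng = np.random.default_rng(seed)
    real_frac = cue_cell_fraction(responses, template, cue_windows, threshold, score_fn)
    rand_fracs = np.array([
        cue_cell_fraction(responses, *make_random_template(rng), threshold, score_fn)
        for _ in range(n_random)
    ])
    return real_frac, rand_fracs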

As observed for tetrode-recorded cue cells (Figure 4A), the calcium responses of imaged left and right cue cells also had consistent spatial shifts relative to individual left and right cues, respectively, and these responses together formed sequences (Figure 5D). In the left MEC, there were more right cue cells than left cue cells (Figure 5F) and the cue scores of right cue cells were higher than those of left cue cells (Figure 5G).

The above results for cells in layer 2, including the preferred representation of single-side cues, the consistently shifted responses to individual cues, and the greater fraction of right cue cells, were also observed for cells in layer 3 of the left MEC when mice navigated on an 18-meter track (Figure 5—figure supplement 1B for the specific labeling of layer 3 cells using virus and Figure 5—figure supplement 4 for features of cue cells). The 18-meter track, which contained a larger number of cues, also allowed us to further validate that responding to individual cues at consistent spatial shifts was a feature of most cells with cue-correlated activity, rather than an artifact of the cue cell selection procedure (see Figure 5—figure supplement 5 for details, N = 6 animals for layer 2 data, N = 2 animals for layer 3 data).

Together, these data indicate that in the superficial layers of the left MEC, cue cells largely responded to cues on one side, with right cues preferentially represented over left cues. The responses of both left and right cue cells formed consistent sequences around individual cues on their preferred side.

Cue cells represent visual cues in different environments

We next asked whether cue cells are a specialized functional cell type that represents cue locations across environments. To investigate this, we measured calcium responses of the same neurons in layer 2 of the left MEC during navigation along two different virtual tracks. We found that many cue cells maintained cue-correlated responses along both tracks (Figure 6A, N = 5 animals). In general, the percentages of cue cells and non-cue cells that remained as the same cell types on two different tracks were significantly higher than chance (Figure 6B). Finally, cue cells showed highly correlated spatial shifts relative to cue templates across different tracks (Figure 6C). These observations suggest that cue cells are a functional cell type representing visual cue information across different environments.

Figure 6. Cue cells respond to cues in different environments.

Figure 6.

(A) Examples of two cue cells. Each cell was imaged on two tracks. For each cell: black: mean ΔF/F versus linear track position. Red: cue template. The spatial shift is shown above each plot (right and left shifts of the template relative to the mean ΔF/F curve are defined as negative and positive values, respectively). (B) Percentage of cue cells (black) and non-cue cells (gray) that remained as the same cell types in two tracks. Each dot represents one FOV (N = 14 FOVs in 5 mice, which formed 7 groups (two FOVs/group) to determine common cue cells for two tracks). For each cell type, the area between two gray lines represents mean ± STD of data on 50 random datasets, in which cue cells in two tracks were randomly assigned. Real data vs. random data: cue cell: p=1.46 × 10−24; non-cue cell: p=5.13 × 10−14. (C) Spatial shifts of the same cue cells on two tracks. Each circle shows the shifts of one cell. As in A, right and left shifts of the template relative to the mean ΔF/F curve are defined as negative and positive values, respectively. Red dotted line represents x = y. p=2.27 × 10−5. ***p≤0.001. Student’s t-test.

Figure 6—source data 1. Percentages of common cue and non-cue cells, and spatial shifts of common cue cells in different environments.

Discussion

We have described a novel class of cells in the MEC, termed cue cells, that were defined by a spatial firing pattern consisting of spatial firing fields located near prominent visual landmarks. When navigating a cue-rich virtual reality linear track, the population of cue cells formed a sequence of neural activity that was repeated at every landmark. When cues were removed, these cells no longer exhibited a sequence of spatial firing fields. This work shows that visual inputs drive cue cells, as additionally indicated by the preferred representation of contralateral cues during navigation along asymmetric tracks. Cue cells maintained cue-related spatial firing patterns across multiple environments, further supporting the hypothesis that cue cells are a distinct functional cell type in the MEC. These properties suggest that cue cells could provide a source of spatial information to local MEC circuits for error correction in landmark-rich environments.

Cue cells and previously identified cell types

By recording during foraging in real arenas, we were able to determine how cue cells were related to previously identified MEC cell types (grid, head direction, border), all of which are defined by their activity in bounded 2D environments. We found that most cue cells were not grid or border cells, yet they did have noticeably stable, and somewhat irregular, spatial firing patterns in real arenas. Cue cell activity patterns cannot be explained by a relationship to the speed of the animal, since animals tended to slow down only at the last cue in the sequence, which was associated with a water reward (Figure 1—figure supplement 2). Cue cells account for a significant fraction (~22%) of the previously unexplained spatially-stable cells in MEC. While most cue cells were not grid or border cells, a substantial fraction of them (35%) had some orientation tuning. The prevalence of head direction tuning among these cells suggests either that cue cells may receive inputs from traditional head direction cells, or that a head direction preference is present for cue cells because of the location of particular features of the real arena that drive their activation. Further work is required to determine the circuit mechanisms of this orientation tuning preference.

Do cue cells resemble spatially modulated cells in other brain regions, such as place cells or boundary vector cells? Place cells typically have only a single firing field during navigation along short linear tracks, even in virtual reality environments with prominent visual cues along the tracks (Dombeck et al., 2010). This is distinctly different from cue cells, in which the number of spatial firing fields scales with the number of cues. Boundary vector cells, which were found in the subiculum, encode distance to a boundary (Lever et al., 2009; Stewart et al., 2014). An identified boundary vector cell must have a spatial firing field that is uniformly displaced from a particular region of the boundary. The width of the spatial firing field is proportional to the distance from the boundary, meaning that cells shifted significantly from the border would have very wide spatial firing fields, which could cover a large majority of the environment (Lever et al., 2009; Stewart et al., 2014). Border cells are a special case of boundary vector cells. To determine whether cue cells might be boundary vector cells, we sorted their spatial responses in the real 2D arena based on the shifts of their spatial firing rates from the visual cue pattern on a virtual track (Figure 3—figure supplement 1). We found no obvious trends in the spatial firing patterns of these cue cells in the real arena. In addition, many cells had multiple fields or fields that were not uniformly displaced from the border of the environment. These spatial firing field features of cue cells are inconsistent with those of boundary vector cells. Thus, cue cells have properties distinct from both place cells and boundary vector cells. It is possible that cue cells could have properties similar to landmark vector cells found in the hippocampus (Deshmukh and Knierim, 2013). The spatial firing responses of cue cells were distributed throughout the arena despite only a single white cue card being visible, indicating that these cells perform more complex computations in real arenas, where spatial cues take many forms compared to the cues along a virtual track. Previous studies have also found similarly complex responses of non-grid cells in the MEC encoding features of real environments (Diehl et al., 2017; Hardcastle et al., 2017).

Some recent papers describe cell types with features similar to those of cue cells (Høydal et al., 2019; Madisen et al., 2015; Wang et al., 2018). In particular, a recent paper described a new cell type, termed object-vector cells (Høydal et al., 2019). We believe these cells belong to a population similar to our cue cells, given several comparable findings: similar percentages of cue/object-vector cells; neither population is predominantly conjunctive with other spatial cell types; and both maintain cue-related firing across environments (Figure 6). Relative to that paper, our study provides additional information about these cue/object-vector cells. Along with describing the sequential nature of their activity, we specifically studied cue cells in layers 2 and 3 of the MEC and discovered their side preference. The fact that these cells mostly responded to cues located on one side, and that right-side cues were predominantly represented in the left MEC, strongly supports a visual input-based mechanism driving cue cell responses.

Cue cells and path integration

It has been hypothesized that the MEC is a central component of a path integrator that uses self-motion information to update a spatial metric encoded by the population of grid cells (Burak and Fiete, 2009; Fuhs and Touretzky, 2006; McNaughton et al., 2006). Grid cells are grouped into modules based on each cell’s grid spacing. Each module maintains its own orientation, to which all of its grid cells align, and cells within a module are related to one another by a two-dimensional spatial phase. Grid cells in a given module maintain their relative spatial phase offsets across different environments (Fyhn et al., 2007), including linear tracks (Yoon et al., 2016), indicating that the population of grid cells forms a consistent spatial metric largely defined by the two-dimensional phase alone. This supports the idea that grid cell dynamics are constrained to a two-dimensional attractor manifold (Yoon et al., 2013). In a manifold-based path integrator, spatial location is represented as the location of the grid cell population activity on the attractor manifold. Self-motion signals, such as running speed in a particular direction, move the population activity along the manifold, such that changes in location are proportional to the integral of the velocity over time. Path integration is inherently a noisy process that requires calibration and error correction for more accurate estimates of position.

In the context of continuous attractor models for path integration, it is interesting to consider the potential functional roles of cue cells. One role could be to act as external error-correction inputs to the path integrator network that tend to drive the neural activity pattern toward manifold locations appropriate for each landmark. An analogous role was proposed for border cells, which may contribute to error correction near boundaries (Hardcastle et al., 2015; Pollock et al., 2018). If cue cells perform this role independently of other cells, then the cue cell population would also need to distinguish individual cues on its own; one way to do this would be for cue cells to vary their firing rates across distinct cues. More work is needed to further characterize the nature of this precise coding of unique landmarks and, with new models, to determine how effectively it might be used to drive an attractor network to the appropriate spatial locations when interacting with a noisy path integrator.

An alternative, or additional, role for cue cells in path integration would be to produce a continuous adjustment of location. The sequence of activity produced across the cue cell population as individual landmarks are passed could drive the network activity continuously along the manifold, in essence acting as a velocity input that is quite different from those traditionally considered, such as running speed. This use, as an effective velocity, is analogous to the recent demonstration (Hopfield, 2015; Ocko et al., 2018) that the collective state of a line attractor can be moved continuously along the manifold by an appropriately learned sequence of external inputs. In essence, the set of inputs at each time point moves the location of the activity on the attractor by a slight amount, and this is repeated continuously to produce smooth motion, without requiring the asymmetric synaptic connectivity and velocity-encoding signals of previous path integrator models (Burak and Fiete, 2009; Fuhs and Touretzky, 2006; McNaughton et al., 2006; Ocko et al., 2018). In principle, both path integration and error correction could be combined through this process.

Cue cells in real and virtual environments

Why cue cells had such stable and easily classified activity patterns in virtual reality, but not in the real arena, remains an open question. Navigating along a virtual track differs greatly from foraging in a complex real arena. In virtual reality, the animal encounters a single cue or a pair of cues at a time, so the visual representation of location comes primarily from a limited range of orientations relative to individual cues or cue pairs along the track. It is possible that forming a representation of location in the real arena requires triangulation from many cues located at various angles and distances. Despite the simple design of our real arena, with a single cue card on one wall, there could be multimodal features from the floor, the walls, or distal cues outside of the arena. Navigation along simple virtual environments composed of only visual cues provides an ideal experimental paradigm for further understanding the activity of these cells. Future experiments could probe other features of these cells with additional perturbations of the virtual environment.

Cell classes in MEC

Although our analysis and discussion of cue cells have largely followed the traditional approach of describing MEC cells according to discrete classes, it is interesting to note that cue scores, like grid, head direction, and border scores, form a continuum, and that a significant fraction of cells in the MEC are conjunctive for more than one class (Figure 3). Conjunctive coding in MEC neural firing was also demonstrated by a recent study (Hardcastle et al., 2017) and is conceptually analogous to the ‘mixed selectivity’ in neural codes that has been increasingly recognized in cognitive, sensory, and motor systems (Finkelstein et al., 2015; Fusi et al., 2016; Rigotti et al., 2013; Rubin et al., 2014). Mixed selectivity has recently been shown to be computationally useful in evidence integration and decision-making, by allowing the selection of specific integrating modes for accumulating evidence to guide future behavior (Mante et al., 2013; Ulanovsky and Moss, 2011). Reframing this in the context of path integration, it will be useful to determine how navigation systems might use mixed selectivity and context-specific integrating modes to weigh different accumulating information (different velocity and position inputs) according to the reliability of that information during navigation in complex, feature-rich environments.

Materials and methods

Key resources table.

Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information
Strain, strain background (adeno-associated virus) | AAV1.hSyn.GCaMP6f.WPRE.SV40 | Penn Vector Core/Addgene | Cat#: 100837-AAV1
Genetic reagent (M. musculus) | C57BL/6J | Jackson Laboratory | Stock No: 000664 (Black 6)
Genetic reagent (M. musculus) | Thy1-GCaMP6f transgenic line (GP5.3) | Janelia Research Campus; PMID:25250714 | N/A | Male and female
Commercial assay | NeuroTrace | Thermo Fisher Scientific | Cat#: N21479
Software, algorithm | MATLAB | MathWorks | https://www.mathworks.com
Software, algorithm | ImageJ | National Institutes of Health | https://imagej.nih.gov/ij/
Software, algorithm | ScanImage 5 | Vidrio Technologies | http://scanimage.vidriotechnologies.com/display/SI5/ScanImage+5
Software, algorithm | ViRMEn (Virtual Reality Mouse Engine) | PMID:25374363 | https://pni.princeton.edu/pni-software-tools/virmen
Software, algorithm | Motion correction (CaImAn) | PMID:30652683 | https://github.com/flatironinstitute/CaImAn

Animals

All procedures were approved by the Princeton University Institutional Animal Care and Use Committee (IACUC protocol# 1910–15) and were in compliance with the Guide for the Care and Use of Laboratory Animals (https://www.nap.edu/openbook.php?record_id=12910). Three C57BL/6J male mice (Stock No: 000664, Black 6), 3–6 months old, were used for electrophysiological experiments. Two 10-week-old male mice were used for two-photon imaging of layer 3 neurons. Mice used for two-photon imaging of layer 2 neurons were six 10- to 12-week-old GP5.3 males and four 10- to 12-week-old GP5.3 females, which were heterozygous for the Thy1-GCaMP6f-WPRE transgene driving the expression of GCaMP6f (Dana et al., 2014).

Experimental design and statistical analysis

All data are presented as mean ± standard deviation (SD). A Student’s t-test was always used to evaluate whether the difference between two groups of values was statistically significant. Significance was defined using a p-value threshold of 0.05 (*p<0.05, **p<0.01, ***p<0.001). All analysis was performed using custom Matlab (MathWorks) software and built-in toolboxes. All correlations were Pearson correlations unless otherwise specified.

Real arena for tetrode recording

Experiments were performed as described previously (Domnisoru et al., 2013). The real arena consisted of a 0.5 m × 0.5 m square enclosure with black walls at least 30 cm high and a single white cue card on one wall. Animals foraged for small pieces of chocolate (Hershey’s milk chocolate) scattered throughout the arena at random times. Trials lasted 10–20 min. On each recording day, real arena experiments were always performed before virtual reality experiments. Video tracking was performed as described previously (Domnisoru et al., 2013) using a Neuralynx acquisition system (Digital Lynx). Digital timing signals, which were sent and acquired using NI-DAQ cards and controlled using ViRMEn software in Matlab (Aronov and Tank, 2014), were used to synchronize all computers.

Virtual reality (VR)

The virtual reality system was similar to those described previously (Dombeck et al., 2010; Domnisoru et al., 2013; Gauthier and Tank, 2018; Gu et al., 2018; Harvey et al., 2012; Harvey et al., 2009; Low et al., 2014). ViRMEn software (Aronov and Tank, 2014) was used to design the linear VR environment, control the projection of the virtual world onto the toroidal screen, deliver water rewards (4 µl) through the control of a solenoid valve, and monitor running velocity of the mice. Upon running to the end of the track, mice were teleported back to the beginning of the track.

VR for tetrode recording

The animal ran on a cylindrical treadmill, and the rotational velocity of the treadmill, which was proportional to mouse velocity, was measured using sequential sampling of an angular encoder (US Digital) on each ViRMEn iteration (~60 iterations per second). The tracks were 8 meters long with identical cues on both sides of the tracks.

VR for imaging

Mice ran on an air-supported spherical treadmill, which only rotated in the forward/backward direction. Their heads were held fixed under a two-photon microscope (Gu et al., 2018; Low et al., 2014). The motion of the ball was measured using an optical motion sensor (ADNS3080; red LED illumination) controlled with an Arduino Due. The VR environment was rendered in blue and projected through a blue filter (Edmund Optics 54–462). The tracks in Figure 5 and Figure 5—figure supplement 3 were 10 and 18 meters long, respectively, with asymmetric cues on both sides of the track. Water rewards (4 µl) were delivered at fixed locations along the track (as indicated in the figures). The data in Figure 6 were collected on three tracks: two tracks with asymmetric cues (10 and 18 meters) and one track with symmetric cues (10 meters). Water rewards (4 µl) were delivered at the beginning and end of the three tracks.

Microdrives and electrode recording system

Custom microdrives and the electrophysiology recording system were similar to those described previously (Aronov and Tank, 2014; Domnisoru et al., 2013; Kloosterman et al., 2009). Tetrodes were made of PtIr (18 micron, California Fine Wire) and plated using Platinum Black (Neuralynx) to 100–150 kΩ at 1 kHz. A reference wire (0.004″ coated PtIr, 0.002″ uncoated 300 µm tip) was inserted into the brain medial to the MEC on each side, and a ground screw or wire was implanted near the midline over the cerebellum.

The headstage design was identical to the one used previously (Aronov and Tank, 2014), with the addition of solder pads to power two LEDs used for tracking animal location and head orientation. Custom electrode interface boards (EIBs) were also designed to fit within miniature custom microdrives. A lightweight 9-wire cable (Omnetics) connected the headstage to an interface board. The cable was long enough (~3 m) to allow the animal to be moved between the real arena and the virtual reality system without disconnection.

Surgery

Tetrode recording

Surgery was performed using aseptic techniques, similar to those described previously (Domnisoru et al., 2013). The headplate and microdrive were implanted in a single surgery that lasted no longer than 3 hr. Bilateral craniotomies were performed with a dental drill at 3.2 mm lateral of the midline and just rostral to the lambdoid suture. After the microdrive implantation, 4–6 turns were slowly made on each drive screw, lowering the tetrodes at least 1 mm into the brain. Animals woke up within ~10 min after the anesthesia was removed and were then able to move around and lift their heads.

Imaging

The surgical procedures were similar to those described previously (Low et al., 2014). A microprism implant was composed of a right angle microprism (1.5 mm side length, BK7 glass, hypotenuse coated with aluminum; Optosigma), a circular coverslip (3.0 mm diameter, #1 thickness, BK7 glass; Warner Instruments) and a thin metal cylinder (304 stainless steel, 0.8 mm height, 3.0 mm outer diameter, 2.8 mm inner diameter; MicroGroup) bonded together using UV-curing optical adhesive (Norland #81). The microprism implantation was always performed in the left hemisphere (Gu et al., 2018; Low et al., 2014). A circular craniotomy (3 mm diameter) was centered 3.4 mm lateral to the midline and 0.75 mm posterior to the center of the transverse sinus (at 3.4 mm lateral). The dura over the cerebellum was removed. The microprism assembly was manually implanted, with the prism inserted into the subdural space within the transverse fissure. The implant was bonded to the skull using Vetbond (3M) and Metabond (Parkell). A titanium headplate with a single flange was bonded to the skull on the side opposite to the craniotomy using Metabond. For imaging layer 3 neurons in the MEC, AAV1.hSyn.GCaMP6f.WPRE.SV40 (Penn Vector Core, Cat#: 100837-AAV1) virus was diluted 1:4 in a solution of 20% (w/v) mannitol in PBS and pressure injected at two sites (200 nl/site): (1) ML 3.00 mm, AP 0.77 mm, depth 1.79 mm; (2) ML 3.36 mm, AP 0.60 mm, depth 1.42 mm.

Two-photon imaging during virtual navigation

Imaging was performed using a custom-built, VR-compatible two-photon microscope (Low et al., 2014) with a rotatable objective. The 920 nm excitation light was delivered by a mode-locked Ti:sapphire laser (Chameleon Ultra II, Coherent; 140 fs pulses at 80 MHz). Laser scanning for imaging layer 2 neurons of the MEC was achieved with a resonant scanning mirror (Cambridge Tech.), and laser scanning for imaging layer 3 neurons was achieved with a galvanometer XY scanner (Cambridge Tech.). Fluorescence of GCaMP6f was isolated using a bandpass emission filter (542/50 nm, Semrock) and detected using GaAsP photomultiplier tubes (H10770PA-40, Hamamatsu). The two objectives used for imaging layers 2 and 3 were an Olympus 40×, 0.8 NA (water) and an Olympus LUCPLFLN 40×, 0.6 NA (air), respectively. Ultrasound transmission gel (Sonigel, refractive index: 1.3359 [Larson et al., 2011]; Mettler Electronics) was used as the immersion medium for the water immersion objective used for layer 2 imaging. The optical axes of the microscope objective and microprism were aligned at the beginning of each experiment as described previously (Low et al., 2014). Microscope control and image acquisition were performed using ScanImage software (layer 2 imaging: v5; layer 3 imaging: v3.8; Vidrio Technologies; Pologruto et al., 2003). Images were acquired at 30 Hz at a resolution of 512 × 512 pixels (~410 × 410 µm FOV) for layer 2 imaging, and at 13 Hz at a resolution of 64 × 256 pixels (~100 × 360 µm FOV) for layer 3 imaging. Imaging and behavioral data were synchronized by simultaneously recording the voltage command signal to the galvanometer together with behavioral data from the VR system at a sampling rate of 1 kHz, using a Digidata/Clampex acquisition system (Molecular Devices).

Histology

For tetrode recording

To identify tetrode locations, small lesions were made by passing anodal current (15 µA, 1 s) through one wire on each tetrode. Animals were then given an overdose of Ketamine (200 mg/kg)/Xylazine (20 mg/kg) and perfused transcardially with 4% formaldehyde in 1X PBS. At the end of perfusion, the microdrive/headplate assembly was carefully detached from the animal. The brain was harvested and placed in 4% formaldehyde in 1X PBS for a day and then transferred to 1X PBS. To locate tetrode tracks and lesion sites, the brain was embedded in 4% agarose and sliced in 80 µm thick sagittal sections. Slices were stained with a fluorescent Nissl stain (NeuroTrace, Thermo Fisher Scientific, Cat#: N21479), and images were acquired on an epifluorescence microscope (Leica) and later compared with the mouse brain atlas (Paxinos). To identify which tetrode track belonged to each tetrode of the microdrive, the microdrive/headplate assembly was observed with a microscope to determine the location of each tetrode in the cannula, the relative lengths of the tetrodes, and whether the tetrodes were parallel or twisted. If tetrodes were twisted, then recordings were used only if grid cells were found on the tetrode.

For imaging

For verifying the layer-specific expression of GCaMP6f in the MEC, animals were transcardially perfused, as described above, and their brains were sliced in 100 µm thick sagittal sections. A fluorescent Nissl stain was performed as described above.

General data processing for tetrode recording

Data analysis was performed offline using custom Matlab code. Electrophysiology data were first demultiplexed and filtered (500 Hz highpass). Spikes were then detected using a negative threshold set to three times the standard deviation of the signal averaged across electrodes on the same tetrodes. Waveforms were extracted and features were then calculated. These features included the baseline-to-peak amplitudes of the waveforms on each of the tetrode wires as well as the top three principal components calculated from a concatenation of the waveforms from all wires.
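
As a minimal Matlab sketch (not the authors' analysis code), the threshold-crossing step could look like the following; the variable name filtSig and the choice of averaging the per-channel standard deviations are assumptions made here for illustration.

```matlab
% Hedged sketch of threshold-crossing detection for one tetrode.
% filtSig: nSamples x 4 high-pass-filtered voltages (assumed variable name).
thresh   = -3 * mean(std(filtSig, 0, 1));        % negative threshold from channel-averaged SD
crossing = any(filtSig < thresh, 2);             % samples where any channel is below threshold
spikeOn  = find(diff([0; double(crossing)]) == 1);   % onsets of threshold crossings (candidate spikes)
```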

Cluster separation

Features of the waveforms were plotted with a custom Matlab GUI. Criteria for eliminating clusters from the dataset were: fewer than 100 spikes (in the real arena or on virtual tracks), a minimum spatial firing rate along the virtual track >10 Hz, or a maximum firing rate >50 Hz. After this, 2825 clusters remained. Since clusters were cut with two different methods (using all four electrodes, and using three electrodes with the fourth subtracted as a reference), repeats needed to be removed from the overall dataset. Repeats were found using a combination of three measures: the Pearson correlation of the real arena spatial firing rates, the Pearson correlation of the virtual track spatial firing rates, and the interspike interval (ISI) distribution of the spikes merged from the two clusters. If the sum of these scores exceeded 2.25, the clusters were considered to be from the same cell; the cluster with the larger number of spikes was kept, and the other cluster was discarded.

Recordings were performed in three animals over two months. There were 1081 clusters that passed our cluster quality criteria and were recorded on tetrodes histologically confirmed to be in the MEC during the recording. The grid scores of these cells were calculated, and a grid score threshold was determined using shuffled permutations of these cells (Aronov and Tank, 2014; Domnisoru et al., 2013). Any tetrode with a grid cell on a particular day was then added to the database from that date onward (2107 clusters). Duplicate units recorded across multiple days were removed using mean-subtracted correlations of both their real arena and virtual track spatial firing rates. The cell with the larger maximum spatial firing rate in the arena was kept if the correlation between the arena spatial firing rates was ≥ 0.8 or if the correlation between the virtual track firing rates was ≥ 0.75. There were 789 clusters remaining after duplicates were removed.

Spatial firing rates of tetrode data

Position data (including head orientation in real 2D arenas) were subsampled at 50 Hz and spikes were assigned into the corresponding 0.02 s bins. Velocity was calculated by smoothing the instantaneous velocity with a moving boxcar window of 1 s. Only data in which the animal’s smoothed velocity exceeded 1 cm/sec were used for further analyses of firing rates or scores.
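
A minimal Matlab sketch of this speed filter, assuming pos holds the 50 Hz position samples in cm (variable names are illustrative, not taken from the original code):

```matlab
% Hedged sketch of the running-speed filter.
dt        = 0.02;                                         % 50 Hz sampling (s)
instSpeed = [0; sqrt(sum(diff(pos, 1, 1).^2, 2)) / dt];   % instantaneous speed (cm/s)
box       = ones(round(1/dt), 1) / round(1/dt);           % 1 s boxcar window
speed     = conv(instSpeed, box, 'same');                 % smoothed speed
keep      = speed > 1;                                    % samples used for rates and scores
```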

Real arena spatial firing rate

2D arenas were divided into 2.5 × 2.5 cm bins. Spike counts and the total amount of time spent in these bins were convolved with a Gaussian window (5 × 5 bins, σ = 1 bin). Firing rate was not defined for spatial bins visited for a total of less than 0.3 s.
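
A minimal Matlab sketch of this rate-map computation, assuming binXY holds integer bin subscripts per 0.02 s sample and spk holds per-sample spike counts (assumed names; the published binning code may differ):

```matlab
% Hedged sketch of the real-arena rate map.
nBins   = 20;                                             % 50 cm arena / 2.5 cm bins
occ     = accumarray(binXY, 0.02, [nBins nBins]);         % occupancy (s)
cnt     = accumarray(binXY, spk,  [nBins nBins]);         % spike counts
[gx,gy] = meshgrid(-2:2, -2:2);
g       = exp(-(gx.^2 + gy.^2)/2);  g = g / sum(g(:));    % 5 x 5 Gaussian, sigma = 1 bin
rateMap = conv2(cnt, g, 'same') ./ conv2(occ, g, 'same'); % smoothed firing rate (Hz)
rateMap(occ < 0.3) = NaN;                                 % undefined for < 0.3 s occupancy
```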

Real arena, head direction

The animal’s head direction was binned in 3-degree intervals. For each angle bin, the spike count and the total amount of time spent (occupancy) were calculated. These values were separately smoothed with a 15-degree (5-bin) boxcar window, and the firing rate was computed as the ratio of the smoothed spike count to the smoothed occupancy.
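
A minimal Matlab sketch, assuming hd (head direction in degrees) and spk (spikes) are column vectors per 0.02 s sample; treating the boxcar smoothing as circular is an assumption made here:

```matlab
% Hedged sketch of the head direction tuning curve.
binIdx = min(floor(mod(hd, 360) / 3) + 1, 120);           % 3-degree bins (1..120)
occ    = accumarray(binIdx, 0.02, [120 1]);               % occupancy (s)
cnt    = accumarray(binIdx, spk,  [120 1]);               % spike counts
box    = ones(5,1) / 5;                                   % 15-degree (5-bin) boxcar
occPad = [occ(end-1:end); occ; occ(1:2)];                 % pad for circular smoothing
cntPad = [cnt(end-1:end); cnt; cnt(1:2)];
occS   = conv(occPad, box, 'same');  occS = occS(3:end-2);
cntS   = conv(cntPad, box, 'same');  cntS = cntS(3:end-2);
hdRate = cntS ./ occS;                                    % firing rate per 3-degree bin (Hz)
```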

Virtual track spatial firing rate

Virtual tracks were divided into 5 cm bins. Spike counts and the amount of time spent in these bins were smoothed independently with a Gaussian window (3 point, σ = 1). The smoothed firing rate was calculated as the smoothed spike position distribution divided by the smoothed overall position distribution.
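
A minimal Matlab sketch, assuming trackBin (5 cm bin index) and spk (spikes) per 0.02 s sample and nBins bins along the track (assumed names):

```matlab
% Hedged sketch of the virtual track firing rate.
occ  = accumarray(trackBin, 0.02, [nBins 1]);             % occupancy (s)
cnt  = accumarray(trackBin, spk,  [nBins 1]);             % spike counts
g    = exp(-((-1:1)'.^2) / 2);  g = g / sum(g);           % 3-point Gaussian, sigma = 1 bin
rate = conv(cnt, g, 'same') ./ conv(occ, g, 'same');      % spatial firing rate (Hz)
```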

Spatial firing fields

To calculate spatial firing fields we created time arrays for position and for number of spikes of a cell. Time bins were 100 msec. For position, we calculated the average position within each chunk of time (5 data points since data were interpolated to 20 msec sampling intervals). We then divided the spatial track into 5 cm bins and determined in which bin the average position was located. For spikes, we counted the number of spikes in that 100 msec interval. This generated two arrays in time (sampled at 100 msec), one with spike count and one with spatial bin location along the track. We then circularly permuted the spike count array by a random time interval between 0.05 x recording length and 0.95 x recording length. We then calculated the smoothed firing rate of this shuffled spike time array with the spatial bin location array. This was repeated 100 times, and the shuffled spatial firing rate was calculated for each permutation. The p-value was defined for each spatial bin along the track as the fraction of permutations on which the firing rate in that bin was above the actual firing rate. Any bin in which the p-value was less than 0.3 was considered part of a firing field.
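
A minimal Matlab sketch of this shuffle test, using the 100 ms arrays described above (spkArr and binArr are assumed names; smoothing of the rates is omitted here for brevity):

```matlab
% Hedged sketch of field detection by circular-shift shuffling.
nShuf = 100;
T     = numel(spkArr);
occ   = accumarray(binArr, 0.1, [nBins 1]);               % time per 5 cm bin (s)
rate  = accumarray(binArr, spkArr, [nBins 1]) ./ occ;     % actual firing rate (Hz)
rateS = zeros(nBins, nShuf);
for s = 1:nShuf
    shift      = randi([round(0.05*T), round(0.95*T)]);   % random circular offset
    rateS(:,s) = accumarray(binArr, circshift(spkArr, shift), [nBins 1]) ./ occ;
end
pBin    = mean(rateS > rate, 2);                          % fraction of shuffles above the data
inField = pBin < 0.3;                                     % bins counted as part of a firing field
```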

Firing field distributions

For each cue cell, we defined an array (5 cm bins) that is 1 when there is a firing field and 0 otherwise. To look at the distribution of firing fields for the population of cells, we summed the values for each bin across all cue cells and divided by the number of cells. This gave the fraction of cue cells with firing fields at each location. The plot of this fraction versus location was defined as the population firing field distribution.
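
A minimal Matlab sketch, assuming fieldMask is an nCells × nBins logical matrix marking firing-field bins (an assumed name):

```matlab
% Hedged sketch of the population firing field distribution.
fieldFrac  = mean(fieldMask, 1);                          % fraction of cue cells with a field per bin
binCenters = (1:size(fieldMask, 2)) * 5 - 2.5;            % bin centers along the track (cm)
plot(binCenters, fieldFrac);                              % population firing field distribution
```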

Scores for cells in tetrode data

For the cue score, spatial firing rates remained the same and the cue score was shuffled with cues randomly redistributed along the track. For the ridge/background ratio, spatial field shuffles were performed. For all other scores, the shuffle was performed with spike times circularly permuted by a random amount of time chosen between 0.5 × recording length and 0.95 × recording length, a standard method for determining score thresholds (Domnisoru et al., 2013). Shuffled distributions from all units combined were used to calculate a threshold at the 95th percentile.
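
A minimal Matlab sketch of the pooled-shuffle threshold (shufScores and cellScores are assumed names):

```matlab
% Hedged sketch of the 95th-percentile shuffle threshold.
shufSorted = sort(shufScores(:));                         % pooled shuffle scores from all units
threshold  = shufSorted(ceil(0.95 * numel(shufSorted)));  % 95th percentile of shuffles
passes     = cellScores > threshold;                      % cells exceeding the threshold
```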

Grid score

The unbiased autocorrelation of the 2D firing rate in the real arena was first calculated (Hafting et al., 2005). Starting from the center of the 2D autocorrelation function, an inner radius was defined as the smallest of three values: the radius of the local minimum of the radial autocorrelation, the radius at which the autocorrelation became negative, or 10 cm. Multiple outer radii were used, from the inner radius + 4 bins to the size of the autocorrelation − 4 bins, in steps of 1 bin. For each of these outer radii, an annulus was defined between the inner and outer radii. We then computed the Pearson correlation between each of these annuli and its rotation in 30-degree intervals from 30 to 150 degrees. For each annulus we then calculated the difference between the maximum of the 60 and 120 degree rotation correlations and the minimum of the 30, 90, and 150 degree correlations. The grid score was defined as the maximum of these values across all annuli. 100 shuffles for each cell were performed and pooled.

Head direction score

The head direction score was defined to be the mean vector length of the head direction firing rate (Giocomo et al., 2014). The head direction angle was defined to be the orientation of the mean vector of the head direction firing rate. 100 shuffles for each cell were performed and pooled.
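
A minimal Matlab sketch of the mean vector length, assuming hdRate is the 3-degree-binned tuning curve computed above:

```matlab
% Hedged sketch of the head direction score and preferred angle.
theta   = ((1.5:3:358.5)') * pi/180;                      % bin centers (radians)
R       = sum(hdRate .* exp(1i*theta)) / sum(hdRate);     % rate-weighted resultant vector
hdScore = abs(R);                                         % head direction score (mean vector length)
hdPref  = angle(R);                                       % preferred head direction (radians)
```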

Border score

Border scores were calculated as described in the original publication describing this cell type (Solstad et al., 2008). 100 shuffles for each cell were performed and pooled.

Spatial/head direction stability

This was calculated as described previously (Boccara et al., 2010). Recording sessions were divided into two halves, the firing rate was calculated for each, and the spatial stability was defined as the Pearson correlation between the two halves. 100 shuffles for each cell were performed and pooled.
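
A minimal Matlab sketch, assuming map1 and map2 are rate maps computed separately from the two halves of a session (assumed names):

```matlab
% Hedged sketch of the split-half stability score.
valid     = ~isnan(map1) & ~isnan(map2);                  % bins defined in both halves
c         = corrcoef(map1(valid), map2(valid));           % Pearson correlation
stability = c(1, 2);
```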

Cue score

The cue score was developed to measure the correlation of the spatial firing rate to the visual cues of the environment. A ‘cue template’ was defined in 5 cm bins with value equal to 1 for bins that included the area between the front and back edges of each cue and 0 elsewhere. The cross correlation between the cue template and the firing rate was first calculated (relative shift ≤300 cm). The peak in the cross correlation with the smallest absolute shift from zero was chosen as the best correlation of the firing rate to the cue template. The spatial shift at which this peak occurred was then used to displace the cue template to best align with the firing rate. The correlation was then calculated locally for each cue. The local window included the cue and regions on either side extending by half of the cue width. The mean of local correlation values across all cues was calculated and defined as the ‘cue score’. An illustration of this method is shown in Figure 1B. This score effectively distinguished grid cells from cue cells, because grid cells generally did not have peaks at consistent locations relative to all the cues. The small number of grid cells that passed the cue score shuffle test also tended to have activity in other locations, where cues were not present. 100 shuffles for each cell were performed and pooled.
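
A minimal Matlab sketch of this template-alignment and local-correlation procedure; rate and tmpl (1 × nBins row vectors) are assumed names, and circular shifts are used here to keep the edge handling simple, which may differ from the published analysis:

```matlab
% Hedged sketch of the cue score.
maxLag = 60;                                              % 300 cm / 5 cm bins
lags   = -maxLag:maxLag;
xc     = zeros(size(lags));
for i = 1:numel(lags)
    c     = corrcoef(rate, circshift(tmpl, lags(i)));     % correlation at this relative shift
    xc(i) = c(1, 2);
end
isPeak    = [false, xc(2:end-1) > xc(1:end-2) & xc(2:end-1) > xc(3:end), false];
pkLags    = lags(isPeak);
[~, k]    = min(abs(pkLags));                             % peak closest to zero shift
tmplAl    = circshift(tmpl, pkLags(k));                   % template aligned to the firing rate
cueStart  = find(diff([0 tmplAl]) == 1);                  % leading edge of each cue
cueEnd    = find(diff([tmplAl 0]) == -1);                 % trailing edge of each cue
localCorr = zeros(size(cueStart));
for j = 1:numel(cueStart)
    halfW = ceil((cueEnd(j) - cueStart(j) + 1) / 2);      % half of the cue width, in bins
    win   = max(1, cueStart(j)-halfW) : min(numel(rate), cueEnd(j)+halfW);
    c     = corrcoef(rate(win), tmplAl(win));             % local correlation around this cue
    localCorr(j) = c(1, 2);
end
cueScore = mean(localCorr);                               % mean of the local correlations
```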

Ridge/background ratio

The ridge/background ratio was calculated on the smoothed spatial firing rate at each cue location. The spatial firing rate of each cell was shifted to maximally align with the cue template, as was done to calculate the cue score. The 5 bins (25 cm) at the center of each cue location were defined as the ridge bins. Background bins were all bins outside of cue locations, displaced in both directions by [cue half-width + 20] to [cue half-width + 30]. For each cue, the ridge/background ratio was calculated as the mean firing rate in the ridge bins divided by the mean firing rate in the background bins. The ridge/background ratio for the cell was defined as the mean of these individual ridge/background ratios. We performed 1000 shuffles of the data, as described above, and calculated the mean ridge/background ratio for each shuffle. The p-value was calculated as the number of shuffled mean values larger than the observed mean ridge/background ratio divided by the number of shuffled mean values smaller than it.
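
A minimal Matlab sketch, continuing from the aligned template in the previous sketch; the exact definition of the background bins (here taken as 20–30 cm beyond the cue half-width, in 5 cm bins) is an approximation:

```matlab
% Hedged sketch of the ridge/background ratio.
cueStart = find(diff([0 tmplAl]) == 1);
cueEnd   = find(diff([tmplAl 0]) == -1);
rb       = zeros(size(cueStart));
for j = 1:numel(cueStart)
    ctr   = round((cueStart(j) + cueEnd(j)) / 2);          % cue center bin
    halfW = ceil((cueEnd(j) - cueStart(j) + 1) / 2);
    ridge = ctr + (-2:2);                                   % 5 bins (25 cm) at the cue center
    bkgd  = ctr + [-(halfW+4:halfW+6), halfW+4:halfW+6];    % bins beyond the cue half-width
    ridge = ridge(ridge >= 1 & ridge <= numel(rate));
    bkgd  = bkgd(bkgd  >= 1 & bkgd  <= numel(rate));
    rb(j) = mean(rate(ridge)) / mean(rate(bkgd));
end
ridgeBackgroundRatio = mean(rb);                            % mean over all cues
```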

General imaging data processing

All imaging data were motion corrected using a whole-frame, cross-correlation-based method (Dombeck et al., 2010) and were then used to identify regions of interest (ROIs) with fluorescence changes occurring during virtual navigation using an independent component analysis (ICA)-based algorithm (Mukamel et al., 2009) (for each layer 3 field of view (FOV): mu = 1, 150 principal components, 150 independent components, s.d. threshold = 3; for each layer 2 FOV, which was evenly split into nine blocks before ICA: mu = 0.7, 30 principal components, 150 independent components, s.d. threshold = 3). Fluorescence time series of these ROIs were extracted from all motion-corrected stacks. The fractional change in fluorescence with respect to baseline (ΔF/F) was calculated as (F(t) – F0(t)) / F0(t), similar to what was described previously (Gu et al., 2018; Low et al., 2014). Significant calcium transients were identified as those that exceeded cell-specific amplitude/duration thresholds, so that artefactual fluctuations were expected to account for less than 1% of detected transients (Dombeck et al., 2007). Mean ΔF/F of the whole imaging session or of individual traversals was calculated as a function of position along the virtual track for non-overlapping 5 cm bins. Only data points during which the mouse’s running speed met or exceeded 1 cm/s were used for the calculation.

Identifying cue cells in imaging data

Selection of cells

Candidates for cue cells were restricted to cells that contained at least one in-field period and one out-of-field period based on a p-value analysis of their calcium responses (Domnisoru et al., 2013; Gu et al., 2018; Heys et al., 2014; Yoon et al., 2016). Similar to identifying spatial fields for tetrode-recorded cells, in- and out-of-field periods were defined by comparing the mean ΔF/F value in each 5 cm bin to that of a random distribution created by 1000 bootstrapped shuffled responses, which were generated by rotating the ΔF/F trace starting from random sample numbers between 0.05 × Nsamples and 0.95 × Nsamples (Nsamples: number of samples in the ΔF/F trace). For each bin, the p-value equaled the percent of shuffled mean ΔF/F values that were above the real mean ΔF/F. In-field periods were defined as three or more adjacent bins (except at the beginning and end of the track, where two adjacent bins were sufficient) whose p-value was ≤ 0.2 and for which at least 10% of the runs contained significant calcium transients within the period. Out-of-field periods were defined as two or more bins whose p-value was ≥ 0.75.

Calculating cue scores to left and right cue templates and defining left and right cue cells

Left and right cue templates were generated using the locations of cues on the left and right sides of the track, respectively. Left and right cue scores for each cell to the left and right templates were calculated as described above (Scores for cells in tetrode data, Cue score). Cue cells were defined as those with cue scores above the threshold, which was the 95th percentile of shuffle scores of all cells. For each cue cell, 200 shuffle scores were computed as its cue scores on 200 shuffled templates, which contained cues identical to the original template but arranged at random locations of the track.

Using the above method, we assigned cells that uniquely passed the left or right template threshold as left or right cue cells, respectively. Moreover, since the bimodal distribution of bilateral scores (explained below) of the left and right cue cell populations (Figure 5E) indicated that they primarily responded to single-side cues, we assigned cells that passed the thresholds of both the left and right templates to the side with the higher cue score.

Bilateral scores

Bilateral score was defined as the difference between the left and right cue scores (left cue score – right cue score) when a cell response was aligned to its preferred template (Figure 5—figure supplement 2B). For a right cue cell with right cue score R1 under a spatial shift S1, its left cue score L1 was calculated when the cell’s response was aligned to the left cue template under the same spatial shift S1. Its bilateral score = L1 – R1. For a left cue cell with left cue score L2 under a spatial shift S2, its right cue score R2 was calculated when the cell’s response was aligned to the right cue template under the same spatial shift S2. Its bilateral score = L2 – R2. The bilateral score of a cell preferentially responding to left or right side cues should have a large absolute value, whereas the bilateral score of a cell equally responding to left and right side cues should have a small absolute value near zero.

Acknowledgements

We thank current and former members of the Tank lab, Ila Fiete, and Anika Kinkhabwala for helpful discussions, and Jeffrey Santner and Alexander Riordan for comments on the manuscript. This work was supported by NINDS Grant 5R37NS081242 (DWT), NIMH Grant 5R01MH083686 (DWT), and NIH Postdoctoral Fellowship Grant F32NS070514-01A1 (AAK).

Funding Statement

The funders had no role in the experiments or analysis in this publication.

Contributor Information

Amina A Kinkhabwala, Email: amina.kinkhabwala@gmail.com.

Yi Gu, Email: guyi.thu@gmail.com.

David W Tank, Email: dwtank@princeton.edu.

Sachin Deshmukh, Indian Institute of Science Bangalore, India.

Laura L Colgin, University of Texas at Austin, United States.

Funding Information

This paper was supported by the following grants:

  • National Institute of Neurological Disorders and Stroke 5R37NS081242 to David W Tank.

  • National Institute of Mental Health 5R01MH083686 to David W Tank.

  • National Institutes of Health F32NS070514-01A1 to Amina A Kinkhabwala.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Resources, Data curation, Software, Formal analysis, Funding acquisition, Validation, Investigation, Visualization, Methodology.

Data curation, Formal analysis, Investigation, Visualization.

Conceptualization, Data curation, Software, Formal analysis, Methodology.

Conceptualization, Resources, Data curation, Software, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Methodology.

Ethics

Animal experimentation: All procedures were approved by the Princeton University Institutional Animal Care and Use Committee (IACUC protocol# 1910-15) and were in compliance with the Guide for the Care and Use of Laboratory Animals.

Additional files

Transparent reporting form

Data availability

All data generated or analyzed during this study are included in the manuscript and supporting files.

References

  1. Aronov D, Tank DW. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron. 2014;84:442–456. doi: 10.1016/j.neuron.2014.08.042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Barry C, Burgess N. Neural mechanisms of self-location. Current Biology. 2014;24:R330–R339. doi: 10.1016/j.cub.2014.02.049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Boccara CN, Sargolini F, Thoresen VH, Solstad T, Witter MP, Moser EI, Moser MB. Grid cells in pre- and parasubiculum. Nature Neuroscience. 2010;13:987–994. doi: 10.1038/nn.2602. [DOI] [PubMed] [Google Scholar]
  4. Brun VH, Leutgeb S, Wu HQ, Schwarcz R, Witter MP, Moser EI, Moser MB. Impaired spatial representation in CA1 after lesion of direct input from entorhinal cortex. Neuron. 2008;57:290–302. doi: 10.1016/j.neuron.2007.11.034. [DOI] [PubMed] [Google Scholar]
  5. Burak Y, Fiete IR. Accurate path integration in continuous attractor network models of grid cells. PLOS Computational Biology. 2009;5:e1000291. doi: 10.1371/journal.pcbi.1000291. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bush D, Barry C, Manson D, Burgess N. Using Grid Cells for Navigation. Neuron. 2015;87:507–520. doi: 10.1016/j.neuron.2015.07.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Calton JL, Stackman RW, Goodridge JP, Archey WB, Dudchenko PA, Taube JS. Hippocampal place cell instability after lesions of the head direction cell network. The Journal of Neuroscience. 2003;23:9719–9731. doi: 10.1523/JNEUROSCI.23-30-09719.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Calton JL, Turner CS, Cyrenne DL, Lee BR, Taube JS. Landmark control and updating of self-movement cues are largely maintained in head direction cells after lesions of the posterior parietal cortex. Behavioral Neuroscience. 2008;122:827–840. doi: 10.1037/0735-7044.122.4.827. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Campbell MG, Ocko SA, Mallory CS, Low IIC, Ganguli S, Giocomo LM. Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience. 2018;21:1096–1106. doi: 10.1038/s41593-018-0189-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Carpenter F, Manson D, Jeffery K, Burgess N, Barry C. Grid cells form a global representation of connected environments. Current Biology. 2015;25:1176–1182. doi: 10.1016/j.cub.2015.02.037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Chen TW, Wardill TJ, Sun Y, Pulver SR, Renninger SL, Baohan A, Schreiter ER, Kerr RA, Orger MB, Jayaraman V, Looger LL, Svoboda K, Kim DS. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature. 2013;499:295–300. doi: 10.1038/nature12354. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Chen G, Manson D, Cacucci F, Wills TJ. Absence of visual input results in the disruption of grid cell firing in the mouse. Current Biology. 2016;26:2335–2342. doi: 10.1016/j.cub.2016.06.043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Clark BJ, Sarma A, Taube JS. Head direction cell instability in the anterior dorsal thalamus after lesions of the interpeduncular nucleus. Journal of Neuroscience. 2009;29:493–507. doi: 10.1523/JNEUROSCI.2811-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Clark BJ, Bassett JP, Wang SS, Taube JS. Impaired head direction cell representation in the anterodorsal thalamus after lesions of the retrosplenial cortex. Journal of Neuroscience. 2010;30:5289–5302. doi: 10.1523/JNEUROSCI.3380-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Clark BJ, Rice JP, Akers KG, Candelaria-Cook FT, Taube JS, Hamilton DA. Lesions of the dorsal tegmental nuclei disrupt control of navigation by distal landmarks in cued, directional, and place variants of the morris water task. Behavioral Neuroscience. 2013;127:566–581. doi: 10.1037/a0033087. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Clark BJ, Taube JS. Deficits in landmark navigation and path integration after lesions of the interpeduncular nucleus. Behavioral Neuroscience. 2009;123:490–503. doi: 10.1037/a0015477. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Dana H, Chen TW, Hu A, Shields BC, Guo C, Looger LL, Kim DS, Svoboda K. Thy1-GCaMP6 transgenic mice for neuronal population imaging in vivo. PLOS ONE. 2014;9:e108697. doi: 10.1371/journal.pone.0108697. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Derdikman D, Whitlock JR, Tsao A, Fyhn M, Hafting T, Moser MB, Moser EI. Fragmentation of grid cell maps in a multicompartment environment. Nature Neuroscience. 2009;12:1325–1332. doi: 10.1038/nn.2396. [DOI] [PubMed] [Google Scholar]
  19. Deshmukh SS, Knierim JJ. Influence of local objects on hippocampal representations: Landmark vectors and memory. Hippocampus. 2013;23:253–267. doi: 10.1002/hipo.22101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Diehl GW, Hon OJ, Leutgeb S, Leutgeb JK. Grid and nongrid cells in medial entorhinal cortex represent spatial location and environmental features with complementary coding schemes. Neuron. 2017;94:83–92. doi: 10.1016/j.neuron.2017.03.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Dombeck DA, Khabbaz AN, Collman F, Adelman TL, Tank DW. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron. 2007;56:43–57. doi: 10.1016/j.neuron.2007.08.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Dombeck DA, Harvey CD, Tian L, Looger LL, Tank DW. Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nature Neuroscience. 2010;13:1433–1440. doi: 10.1038/nn.2648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Domnisoru C, Kinkhabwala AA, Tank DW. Membrane potential dynamics of grid cells. Nature. 2013;495:199–204. doi: 10.1038/nature11973. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Evans T, Bicanski A, Bush D, Burgess N. How environment and self-motion combine in neural representations of space. The Journal of Physiology. 2016;594:6535–6546. doi: 10.1113/JP270666. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Finkelstein A, Derdikman D, Rubin A, Foerster JN, Las L, Ulanovsky N. Three-dimensional head-direction coding in the bat brain. Nature. 2015;517:159–164. doi: 10.1038/nature14031. [DOI] [PubMed] [Google Scholar]
  26. Frohardt RJ, Bassett JP, Taube JS. Path integration and lesions within the head direction cell circuit: comparison between the roles of the anterodorsal thalamus and dorsal tegmental nucleus. Behavioral Neuroscience. 2006;120:135–149. doi: 10.1037/0735-7044.120.1.135. [DOI] [PubMed] [Google Scholar]
  27. Fuhs MC, Touretzky DS. A spin glass model of path integration in rat medial entorhinal cortex. Journal of Neuroscience. 2006;26:4266–4276. doi: 10.1523/JNEUROSCI.4353-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Fusi S, Miller EK, Rigotti M. Why neurons mix: high dimensionality for higher cognition. Current Opinion in Neurobiology. 2016;37:66–74. doi: 10.1016/j.conb.2016.01.010. [DOI] [PubMed] [Google Scholar]
  29. Fyhn M, Hafting T, Treves A, Moser MB, Moser EI. Hippocampal remapping and grid realignment in entorhinal cortex. Nature. 2007;446:190–194. doi: 10.1038/nature05601. [DOI] [PubMed] [Google Scholar]
  30. Gauthier JL, Tank DW. A dedicated population for reward coding in the Hippocampus. Neuron. 2018;99:179–193. doi: 10.1016/j.neuron.2018.06.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Geva-Sagiv M, Las L, Yovel Y, Ulanovsky N. Spatial cognition in bats and rats: from sensory acquisition to multiscale maps and navigation. Nature Reviews Neuroscience. 2015;16:94–108. doi: 10.1038/nrn3888. [DOI] [PubMed] [Google Scholar]
  32. Giocomo LM, Stensola T, Bonnevie T, Van Cauter T, Moser MB, Moser EI. Topography of head direction cells in medial entorhinal cortex. Current Biology. 2014;24:252–262. doi: 10.1016/j.cub.2013.12.002. [DOI] [PubMed] [Google Scholar]
  33. Giocomo LM. Environmental boundaries as a mechanism for correcting and anchoring spatial maps. The Journal of Physiology. 2016;594:6501–6511. doi: 10.1113/JP270624. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Golob EJ, Wolk DA, Taube JS. Recordings of postsubiculum head direction cells following lesions of the laterodorsal thalamic nucleus. Brain Research. 1998;780:9–19. doi: 10.1016/S0006-8993(97)01076-7. [DOI] [PubMed] [Google Scholar]
  35. Golob EJ, Taube JS. Head direction cells in rats with hippocampal or overlying neocortical lesions: evidence for impaired angular path integration. The Journal of Neuroscience. 1999;19:7198–7211. doi: 10.1523/JNEUROSCI.19-16-07198.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Gu Y, Lewallen S, Kinkhabwala AA, Domnisoru C, Yoon K, Gauthier JL, Fiete IR, Tank DW. A Map-like Micro-Organization of grid cells in the medial entorhinal cortex. Cell. 2018;175:736–750. doi: 10.1016/j.cell.2018.08.066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI. Microstructure of a spatial map in the entorhinal cortex. Nature. 2005;436:801–806. doi: 10.1038/nature03721. [DOI] [PubMed] [Google Scholar]
  38. Hardcastle K, Ganguli S, Giocomo LM. Environmental boundaries as an error correction mechanism for grid cells. Neuron. 2015;86:827–839. doi: 10.1016/j.neuron.2015.03.039. [DOI] [PubMed] [Google Scholar]
  39. Hardcastle K, Maheswaranathan N, Ganguli S, Giocomo LM. A multiplexed, heterogeneous, and adaptive code for navigation in medial entorhinal cortex. Neuron. 2017;94:375–387. doi: 10.1016/j.neuron.2017.03.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Harvey CD, Collman F, Dombeck DA, Tank DW. Intracellular dynamics of hippocampal place cells during virtual navigation. Nature. 2009;461:941–946. doi: 10.1038/nature08499. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Harvey CD, Coen P, Tank DW. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature. 2012;484:62–68. doi: 10.1038/nature10918. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Heys JG, Rangarajan KV, Dombeck DA. The functional micro-organization of grid cells revealed by cellular-resolution imaging. Neuron. 2014;84:1079–1090. doi: 10.1016/j.neuron.2014.10.048. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Hollup SA, Kjelstrup KG, Hoff J, Moser MB, Moser EI. Impaired recognition of the goal location during spatial navigation in rats with hippocampal lesions. The Journal of Neuroscience. 2001;21:4505–4513. doi: 10.1523/JNEUROSCI.21-12-04505.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Hopfield JJ. Understanding emergent dynamics: using a collective activity coordinate of a neural network to recognize Time-Varying patterns. Neural Computation. 2015;27:2011–2038. doi: 10.1162/NECO_a_00768. [DOI] [PubMed] [Google Scholar]
  45. Høydal ØA, Skytøen ER, Andersson SO, Moser MB, Moser EI. Object-vector coding in the medial entorhinal cortex. Nature. 2019;568:400–404. doi: 10.1038/s41586-019-1077-7. [DOI] [PubMed] [Google Scholar]
  46. Kloosterman F, Davidson TJ, Gomperts SN, Layton SP, Hale G, Nguyen DP, Wilson MA. Micro-drive array for chronic in vivo recording: drive fabrication. Journal of Visualized Experiments. 2009;20:1094. doi: 10.3791/1094. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Kropff E, Carmichael JE, Moser MB, Moser EI. Speed cells in the medial entorhinal cortex. Nature. 2015;523:419–424. doi: 10.1038/nature14622. [DOI] [PubMed] [Google Scholar]
  48. Krupic J, Bauza M, Burton S, Barry C, O'Keefe J. Grid cell symmetry is shaped by environmental geometry. Nature. 2015;518:232–235. doi: 10.1038/nature14153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Krupic J, Bauza M, Burton S, O'Keefe J. Local transformations of the hippocampal cognitive map. Science. 2018;359:1143–1146. doi: 10.1126/science.aao4960. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Larson B, Abeytunge S, Rajadhyaksha M. Performance of full-pupil line-scanning reflectance confocal microscopy in human skin and oral mucosa in vivo. Biomedical Optics Express. 2011;2:2055–2067. doi: 10.1364/BOE.2.002055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Lever C, Burton S, Jeewajee A, O'Keefe J, Burgess N. Boundary vector cells in the subiculum of the hippocampal formation. Journal of Neuroscience. 2009;29:9771–9777. doi: 10.1523/JNEUROSCI.1319-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Low RJ, Gu Y, Tank DW. Cellular resolution optical access to brain regions in fissures: imaging medial prefrontal cortex and grid cells in entorhinal cortex. PNAS. 2014;111:18739–18744. doi: 10.1073/pnas.1421753111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Madisen L, Garner AR, Shimaoka D, Chuong AS, Klapoetke NC, Li L, van der Bourg A, Niino Y, Egolf L, Monetti C, Gu H, Mills M, Cheng A, Tasic B, Nguyen TN, Sunkin SM, Benucci A, Nagy A, Miyawaki A, Helmchen F, Empson RM, Knöpfel T, Boyden ES, Reid RC, Carandini M, Zeng H. Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance. Neuron. 2015;85:942–958. doi: 10.1016/j.neuron.2015.02.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Mante V, Sussillo D, Shenoy KV, Newsome WT. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature. 2013;503:78–84. doi: 10.1038/nature12742. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. McNaughton BL, Battaglia FP, Jensen O, Moser EI, Moser MB. Path integration and the neural basis of the 'cognitive map'. Nature Reviews Neuroscience. 2006;7:663–678. doi: 10.1038/nrn1932. [DOI] [PubMed] [Google Scholar]
  56. Mittelstaedt H. Homing by Path Integration.  Berlin, Heidelberg: Springer; 1982. [DOI] [Google Scholar]
  57. Moser E, Moser MB, Andersen P. Spatial learning impairment parallels the magnitude of dorsal hippocampal lesions, but is hardly present following ventral lesions. The Journal of Neuroscience. 1993;13:3916–3925. doi: 10.1523/JNEUROSCI.13-09-03916.1993. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Mukamel EA, Nimmerjahn A, Schnitzer MJ. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron. 2009;63:747–760. doi: 10.1016/j.neuron.2009.08.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Ocko SH, Giocomo K, Ganguli S. Emergent elasticity in the neural code for space. bioRxiv. 2018 doi: 10.1073/pnas.1805959115. [DOI] [PMC free article] [PubMed]
  60. Parron C, Poucet B, Save E. Entorhinal cortex lesions impair the use of distal but not proximal landmarks during place navigation in the rat. Behavioural Brain Research. 2004;154:345–352. doi: 10.1016/j.bbr.2004.03.006. [DOI] [PubMed] [Google Scholar]
  61. Parron C, Save E. Comparison of the effects of entorhinal and retrosplenial cortical lesions on habituation, reaction to spatial and non-spatial changes during object exploration in the rat. Neurobiology of Learning and Memory. 2004;82:1–11. doi: 10.1016/j.nlm.2004.03.004. [DOI] [PubMed] [Google Scholar]
  62. Pérez-Escobar JA, Kornienko O, Latuske P, Kohler L, Allen K. Visual landmarks sharpen grid cell metric and confer context specificity to neurons of the medial entorhinal cortex. eLife. 2016;5:e16937. doi: 10.7554/eLife.16937. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Pollock ED, Wei N, Balasubramanian X. Dynamic self-organized error-correction of grid cells by border cells. bioRxiv. 2018 https://arxiv.org/abs/1808.01503
  64. Pologruto TA, Sabatini BL, Svoboda K. ScanImage: flexible software for operating laser scanning microscopes. Biomedical Engineering Online. 2003;2:13. doi: 10.1186/1475-925X-2-13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Rigotti M, Barak O, Warden MR, Wang XJ, Daw ND, Miller EK, Fusi S. The importance of mixed selectivity in complex cognitive tasks. Nature. 2013;497:585–590. doi: 10.1038/nature12160. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Rubin A, Yartsev MM, Ulanovsky N. Encoding of head direction by hippocampal place cells in bats. The Journal of Neuroscience. 2014;34:1067–1080. doi: 10.1523/JNEUROSCI.5393-12.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Solstad T, Boccara CN, Kropff E, Moser MB, Moser EI. Representation of geometric borders in the entorhinal cortex. Science. 2008;322:1865–1868. doi: 10.1126/science.1166466. [DOI] [PubMed] [Google Scholar]
  68. Stensola T, Stensola H, Moser MB, Moser EI. Shearing-induced asymmetry in entorhinal grid cells. Nature. 2015;518:207–212. doi: 10.1038/nature14151. [DOI] [PubMed] [Google Scholar]
  69. Stewart S, Jeewajee A, Wills TJ, Burgess N, Lever C. Boundary coding in the rat subiculum. Philosophical Transactions of the Royal Society B: Biological Sciences. 2014;369:20120514. doi: 10.1098/rstb.2012.0514. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Taube JS, Kesslak JP, Cotman CW. Lesions of the rat postsubiculum impair performance on spatial tasks. Behavioral and Neural Biology. 1992;57:131–143. doi: 10.1016/0163-1047(92)90629-I. [DOI] [PubMed] [Google Scholar]
  71. Tsoar A, Nathan R, Bartan Y, Vyssotski A, Dell'Omo G, Ulanovsky N. Large-scale navigational map in a mammal. PNAS. 2011;108:E718–E724. doi: 10.1073/pnas.1107365108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Ulanovsky N, Moss CF. Dynamics of hippocampal spatial representation in echolocating bats. Hippocampus. 2011;21:150–161. doi: 10.1002/hipo.20731. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Wang C, Chen X, Lee H, Deshmukh SS, Yoganarasimha D, Savelli F, Knierim JJ. Egocentric coding of external items in the lateral entorhinal cortex. Science. 2018;362:945–949. doi: 10.1126/science.aau4940. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Whitlock JR, Sutherland RJ, Witter MP, Moser MB, Moser EI. Navigating from hippocampus to parietal cortex. PNAS. 2008;105:14755–14762. doi: 10.1073/pnas.0804216105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Yamahachi H, Moser MB, Moser EI. Map fragmentation in two- and three-dimensional environments. Behavioral and Brain Sciences. 2013;36:569–570. doi: 10.1017/S0140525X13000605. [DOI] [PubMed] [Google Scholar]
  76. Yoon K, Buice MA, Barry C, Hayman R, Burgess N, Fiete IR. Specific evidence of low-dimensional continuous attractor dynamics in grid cells. Nature Neuroscience. 2013;16:1077–1084. doi: 10.1038/nn.3450. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Yoon K, Lewallen S, Kinkhabwala AA, Tank DW, Fiete IR. Grid cell responses in 1D environments assessed as slices through a 2D lattice. Neuron. 2016;89:1086–1099. doi: 10.1016/j.neuron.2016.01.039. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision letter

Editor: Sachin Deshmukh1

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

It is important to understand how path integration errors are corrected in MEC. Your demonstration of cue cells in MEC suggests an interesting mechanism complementing border cells in performing this task.

Decision letter after peer review:

Thank you for submitting your article "Visual cue-related activity of cells in the medial entorhinal cortex during navigation in virtual reality" for consideration by eLife. Your article has been reviewed by three peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Laura Colgin as the Senior Editor. The reviewers have opted to remain anonymous.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

The authors use a combination of tetrode recording and calcium imaging while mice ran on virtual linear tracks containing visual cues (towers) on either side to describe a novel cell type in the superficial layers of the medial entorhinal cortex (MEC), which they call "cue cells." Correlation-based measures using the spatial firing rate of the cells and a landmark cue template were used to classify cells as "cue cells" that respond by firing repeatedly at every landmark. Recordings in two-dimensional open-field environments revealed that the cue cells also conjunctively encoded other, previously characterized, features of cells in the MEC, including the presence of borders (border cells), firing in a regular triangular spatial lattice pattern (grid cells), and the animal's heading direction (head direction cells; ~50% of cue cells had some orientation tuning). The results were viewed by all reviewers as novel and significant.

However, there are multiple significant concerns, as the manuscript stands, listed below, which the authors should be able to address in the allotted time. The major issues concern the shuffling procedure used to generate control distributions for cue scores, missing details regarding statistics, and other analyses that need to be modified to better control for spatially selective cells that are not anchored to cues.

Essential revisions:

1) The circular permutation procedure undertaken to form a control data set is inadequate for many of the analyses it is used in. Circular permutation destroys spatial selectivity. When the question being addressed is whether the observed spatial selectivity is correlated with the landmark locations (rather than being random), the control distribution of correlation coefficients needs to be obtained using a randomization procedure maintaining spatial selectivity while randomizing the relative positions of the place fields and the objects. A control distribution of correlation coefficients without spatial selectivity, like the one used here, is likely to be more tightly clustered around zero than a control distribution of correlation coefficients obtained from spatially selective (but randomized) data, thus allowing more false positives. One easy way to do this is to randomize landmark locations while keeping the ratemap unchanged. This will work for detection of cue cells, but for some analyses (e.g. the sequence analysis), shuffling the place fields within a ratemap would be a more suitable shuffle.

2) Some of the findings appear to follow tautologically from the definition of the cue score, which correlates firing patterns to a template that matches the locations of the visual cues. a) The distribution of this score appears uni-modal and the authors then pick out one end of the distribution. Are the cue cells really then a discrete population? Following this, are the cue locations really special, or does the template just pick out cells that fire near the cues from amongst a population that uniformly spans the environment? Can you compare the number of cells you would identify with a randomized cue template to the number of cells picked up by the cue template? The cue-score method picks out cells with positive correlations to the cue template. Are there cells with significant negative correlations? ("anti-cue cells")

b) Do cells that fire near the visual cues respond more to the removal of visual cues than cells that fire away from the visual cues, or do all cells lose their spatial tuning in the cue-removed condition?

c) Can the fact that the sequence is repeated at each cue be explained by the fact that the cue template looks for cells with a fixed spatial offset from each cue? One way to control for this would be to identify cells based on one cue only, and test whether the sequence repeats at the other cues. Alternatively, a cue score could be developed that allows the cue template to move independently at each cue. This would be a more convincing test that cells really do have a fixed offset from each spatial cue.

d) What is the distribution of Left-Cue scores for Right-Cue cells and vice versa? Is it really "either/or", or is there a continuum of cells that respond to combinations of left and right cues?

e) "For each environment, we found the activity of all cue cells was best aligned to the center of the cue rather than the start or end of the cues". If the place field size is proportional to the cue size, this will automatically follow. Start and end would be displaced differentially with respect to place field center while the cue center would be the average of the two. The method used for generating cue scores (subsection “Scores for cells in tetrode data”) generates higher scores for cells with field sizes matching cue sizes (over cue cells that have cue independent field sizes), making this analysis circular. Why not use the peak correlation between the cue template and the firing rate map with the smallest absolute shift from zero as the cue score, instead? That will eliminate this confound.

3) Detailed statistics need to be provided at multiple places. For example, the subsection “Cue cell pairwise activity patterns” mentions Pearson correlation coefficients of 0.3 and 0.13. The authors argue that "This suggests that the spike timing relationship between cue cell pairs is present only when cues are present and thus when these cells are driven to be active in a sequential manner by locomotion past the cue." This will hold true if the two coefficients are significantly different from one another, and the coefficient of 0.3 is statistically significantly different from 0. 2. "*p ≤0.05. **p ≤0.01. ***p ≤0.001. n.s. p > 0.5. Student's t-test. Error bars: mean ± SEM.": detailed statistics including sample size, p values, t-statistics, means and STDs need to be reported in the main text. The journal guidelines state "Report exact p-values wherever possible alongside the summary statistics and 95% confidence intervals. These should be reported for all key questions and not only when the p-value is less than 0.05."

Have the authors corrected for multiple comparisons wherever required?

4) The authors report recording up to 301 cells from a single tetrode (Figure 1—figure supplement 1; including 88 cue cells and 93 grid cells; "Recordings were performed on four animals over two months."). Were repeat recordings from the same neurons on consecutive/multiple days identified and eliminated? How? If they were not eliminated, all the reported statistics suffer from inflation of degrees of freedom. Can the authors comment on this?

5) What are the running speed profiles of the mice? Did they tend to slow down near the visual cues?

6) "In layers 2 and 3, we consistently observed that anatomically adjacent cue cells (physical distances around 30 μm) showed more similar spatial shifts, whereas the relationship was more varied if cue cells were further apart (Figure 7G-N). The similar cue responses of adjacent cue cells suggest that they may share similar inputs or be connected."

Do the cross correlations of neighbouring cells (on the same tetrode) maintain the peak at 0ms in B if they had peak at 0ms in A? If not, the two observations with tetrodes and imaging would contradict one another. In general, the claims made in G-N are rather weak. Authors should consider excluding them.

The 'micro-organisation' relating physical separation to spatial shifts in responses relative to cue location is seen restricted to anatomically adjacent cue cells: is there any danger that this reflects contamination/poor localisation/diffusion of light from neighboring sources?

7) One reason for the more specific apparent correlate of firing in the VR track versus the open field might be that the viewing angle is important and this is not systematically sampled in the open field. Do the mice run in both directions on the VR – if so, do the cue cells fire at a similar cue-angle? Does this relate to the observation that place cell firing becomes more directionally modulated in VR than in real open fields (presumably because of the greater influence of vision in visually-generated VR; Acharya, Aghajan et al., 2016; Chen, King et al., 2018). In the comparison to known cell types, might these cue cells be related to landmark-vector cells (Deshmukh and Knierim, 2013) or object-vector cells (Hoydal et al., 2019), or egocentric responses recently reported in lEC (Wang et al., 2018)? Do the cue cell responses depend on the wider context – need the mouse be running (vs. passive viewing) to see firing? To what extent do cue cells fire similarly across VR environments or do they 'remap'? (this is not entirely clear from Figure 1).

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "Visual cue-related activity of cells in the medial entorhinal cortex during navigation in virtual reality" for consideration by eLife. Your article has been reviewed by two peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Laura Colgin as the Senior Editor. The reviewers have opted to remain anonymous.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

The reviewers remain positive about the overall importance of these results.

However, although some of the concerns were adequately addressed during the first revision, some major concerns that were raised in the first round of review remain. Moreover, a number of new concerns arose from the revisions. Still, reviewers are confident that the authors can easily address these remaining concerns.

Essential revisions:

The duplicates removed database in Author response image 7, which is used throughout the paper, uses spatial firing rate correlations > 0.95 as a threshold for discarding cells as duplicates. This threshold is unreasonably high, as even stable place cells recorded in consecutive sessions in the hippocampus often have substantially lower correlation coefficients, especially in mice (e.g. Kentros et al., 1998, Figure 3). This means that the duplicates removed database is likely to still have an unreasonably high number of duplicates.

The authors should show the tetrode lowering database figure shown in Author response image 7 at least as supplementary data. They must also include the tetrode lowering database stats and aggregate figures for other analyses using tetrode data, including responses to environmental perturbations, sequences, pairwise correlations etc. to convince the reader that the significance of the patterns reported is not grossly overestimated by inflation of degrees of freedom caused by the inclusion of duplicates in their dataset.

In the revised manuscript, it is no longer clear how many animals the data were collected from, and how many neurons of different types were contributed by each animal. The animal- and tetrode-wise breakdown of neurons in the tables included in the previous version is essential. Tables showing the number of units of different kinds recorded from each tetrode in each animal have been eliminated from Figure 1—figure supplement 1. They should be put back, with numbers for both the duplicates removed database and the tetrode lowering database.

Related to this, it is not clear how many animals contributed to the new data shown in the new Figure 7. Hence, it is impossible to figure out if the reported results are reproducible across animals. Please mention number of animals included for each analysis/figure in the Results.

"In Region A, there was a spread in the temporal shifts for pairs of cue cells and these temporal shifts were correlated for the two tracks (Figure 5C left, Pearson correlation = 0.52, p=9×10-5). However, the temporal shifts in Region B of the two tracks were less correlated: while a similar spread of temporal shifts was observed when cue cells were recorded on the with cues track (plotted along the x-axis of the bottom right panel in Figure 5C), most cue cell pairs did not have a correlated phase in the relative spike timing when cues were missing (plotted along the y-axis of the bottom right panel in Figure 5C right, “correlation not significant”). The fact that the spike timing relationship between cue cell pairs is maintained only when cues are present suggests that these cells are driven to be active in a sequential manner by locomotion past cues."

Correlation coefficient and p value for region B needs to be included, as requested in the previous review – stating "correlation not significant" is not sufficient. Furthermore, to make the claim that their data suggests that "cells are driven to be active in a sequential manner by locomotion past cues", the authors should demonstrate that the slopes in region A and B shown in Figure 5C are significantly different from one another.

eLife. 2020 Mar 9;9:e43140. doi: 10.7554/eLife.43140.sa2

Author response


Essential revisions:

1) The circular permutation procedure undertaken to form a control data set is inadequate for many of the analyses it is used in. Circular permutation destroys spatial selectivity. When the question being addressed is whether the observed spatial selectivity is correlated with the landmark locations (rather than being random), the control distribution of correlation coefficients needs to be obtained using a randomization procedure maintaining spatial selectivity while randomizing the relative positions of the place fields and the objects. A control distribution of correlation coefficients without spatial selectivity, like the one used here, is likely to be more tightly clustered around zero than a control distribution of correlation coefficients obtained from spatially selective (but randomized) data, thus allowing more false positives. One easy way to do this is to randomize landmark locations while keeping the ratemap unchanged. This will work for detection of cue cells, but for some analyses (e.g. the sequence analysis), shuffling the place fields within a ratemap would be a more suitable shuffle.

We have changed our shuffling methods based upon this comment. All major results remain the same in both tetrode and imaging data.

1) For sequence analysis in Figure 4 of the manuscript, shuffling of spatial fields was used for the ridge/background calculations as suggested.

2) To calculate a cue score threshold for identifying cue cells, instead of circularly permuting spike times or fluorescence signals to calculate shuffled distributions, we tried both methods suggested above: shuffling the cue positions on the template and shuffling the spatial fields of the spatial firing rate map. Because there was concern about the shape and clustering of the shuffle distributions, we have plotted them in Author response image 1. We found no significant differences in the distributions or threshold values for any of the three methods (Author response image 1). Since shuffling cues on the template generally gave higher thresholds, we used this method to identify cue cells; a minimal code sketch of this thresholding is given after this numbered list. All major conclusions remained the same.

Author response image 1. Score thresholds calculated by different shuffling methods.

Author response image 1.

(A) The distribution of shuffled cue scores and thresholds using three shuffling methods: circular permutation (circular shuffle), spatial field shuffle (field shuffle), and cue template shuffle (template shuffle). The threshold values (95th percentile of shuffles) are indicated by dark blue, red and light blue lines, respectively. Left: distribution of real and shuffled cue scores for tetrode and imaging data. Right: distribution of real and shuffled cue scores for imaging data. Thresholds for left, right and both-side templates were separately calculated. B) Comparison of the thresholds generated by all three shuffling methods for tetrode and imaging data. In the current manuscript, we used the template shuffling method (highlighted in yellow).

3) In addition to the threshold changes, for imaging data, we also introduced template-specific thresholds to address Essential Revision 2d. Only the data of left and right cue cells are included in the current manuscript and the details are explained in Essential Revision 2d.
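
For concreteness, the sketch below (Python/NumPy) illustrates the template-shuffle thresholding described in point 2. It simplifies the cue score to the peak Pearson correlation between a cell’s spatial firing rate map and a binary cue template across circular spatial shifts (in the manuscript the cue score is the mean correlation of the response to individual cues), and the function names, the 5 cm bin size, and the 10 cm cue half-width are illustrative assumptions rather than the actual analysis code.

import numpy as np

rng = np.random.default_rng(0)

def cue_template(track_len_cm, cue_centers_cm, cue_half_width_cm=10, bin_cm=5):
    # Binary cue template: 1 within a window around each cue center, 0 elsewhere.
    n_bins = int(track_len_cm / bin_cm)
    bin_centers = (np.arange(n_bins) + 0.5) * bin_cm
    template = np.zeros(n_bins)
    for c in cue_centers_cm:
        template[np.abs(bin_centers - c) <= cue_half_width_cm] = 1.0
    return template

def cue_score(rate_map, template):
    # Simplified cue score: peak Pearson correlation over circular spatial shifts.
    corrs = [np.corrcoef(np.roll(template, s), rate_map)[0, 1]
             for s in range(len(template))]
    return np.nanmax(corrs)

def template_shuffle_threshold(rate_map, track_len_cm, n_cues,
                               n_shuffles=1000, percentile=95, bin_cm=5):
    # Threshold = 95th percentile of scores against templates with randomized cue positions.
    null_scores = []
    for _ in range(n_shuffles):
        random_centers = rng.uniform(0, track_len_cm, size=n_cues)
        shuffled = cue_template(track_len_cm, random_centers, bin_cm=bin_cm)
        null_scores.append(cue_score(rate_map, shuffled))
    return np.percentile(null_scores, percentile)

In this sketch, a cell would be classified as a cue cell if its score against the real template exceeds the threshold obtained from its own shuffled templates.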

2) Some of the findings appear to follow tautologically from the definition of the cue score, which correlates firing patterns to a template that matches the locations of the visual cues. a) The distribution of this score appears uni-modal and the authors then pick out one end of the distribution. Are the cue cells really then a discrete population? Following this, are the cue locations really special, or does the template just pick out cells that fire near the cues from amongst a population that uniformly spans the environment? Can you compare the number of cells you would identify with a randomized cue template to the number of cells picked up by the cue template? The cue-score method picks out cells with positive correlations to the cue template. Are there cells with significant negative correlations? ("anti-cue cells")

If we gave the impression that we think they are a discrete population, we want to clarify that they are not. We have gone through the paper to make sure our wording is consistent with this. As described in a recent paper from the Giocomo lab, we agree that the MEC is comprised of cell types with some degree of mixed selectivity (Hardcastle et al., 2017).

Based on the reviewer’s suggestion, we further investigated whether there is a preferred representation of the cues in the environment by comparing the percentage of cue cells identified using the cue templates of the current environment to the percentage identified using templates of random environments. As shown in Figure 5—figure supplement 3, we found that the percentage of cue cells identified in the current environment was significantly higher than that in random environments. This indicates that cue cells are a unique population within the MEC rather than a subpopulation picked out from a larger population of cells with spatial fields distributed across the track. This result also shows that environmental cues are preferentially represented by cue cells.

b) Do cells that fire near the visual cues respond more to the removal of visual cues than cells that fire away from the visual cues, or do all cells lose their spatial tuning in the cue-removed condition?

We compared the spatial firing patterns of cue cells with fields near cues (within a 25 cm spatial shift between the spatial firing rate and cue template) to those with fields far from cues (greater than 25 cm spatial shift). In Author response image 2, the spatial firing fields for all cells with fields near and far from cues are shown on the left (Author response image 2A1) and right (Author response image 2A2), respectively. In each case, the top plots show fields of all cells and the population distribution (cell fraction with fields at each 5 cm bin along the track) for the track in which all cues are present. The bottom two panels show the fields for the track with cues missing along the latter part of the track. In both cases, fields are present where cues are present, and are missing in places where cues have been removed. In Author response image 2B, the fraction of 5 cm bins that have a spatial field (field bins) is plotted for these two conditions (fields near cues and far from cues). For cue cells with fields near and far from cues, there is a decrease in the fraction of field bins when cues are removed. Both of these results are statistically significant, but with different p values: paired one-tailed t-tests: for cue cells with fields near cues in Region B: with cues field bin fraction > missing cues field bin fraction, N = 77, p = 8×10-11; for cue cells with fields far from cues in Region B: with cues field bin fraction > missing cue field bin fraction, N = 20, p = 4×10-5. There was no significant difference in the responses in Region A for either category of cells. Therefore, the spatial firing patterns of cue cells with fields near cues and far from cues are both similarly affected by the removal of cues.

Author response image 2. Cue-removal responses of cue cells with fields near or far from cues.

Author response image 2.

(A) Cue cells were sorted into two groups based on the spatial shift of their spatial firing rate relative to the cue template. Cells with spatial shifts less and more than 25 cm in either direction were categorized as cells with fields near and far from cues, respectively. a1: Top: spatial firing fields of all cue cells with fields near cues and the fraction of cells within each 5 cm bin with firing fields. Bottom: spatial firing fields and distribution of firing fields for the population of cells with fields near cues is shown for the track with the cues missing along the latter part of the track. a2: Similar to the plots in a1 but for cells with spatial firing fields far from cues. B) Summary plots: the fraction of bins with fields (field bins) for each cell is plotted for cells with fields near and far from cues on different track regions. b1: summary plots for cells with fields near cues. Left: the fraction of field bins for each cell in the early region of the track in which cues are present for both the “With cues” and “Missing cues” tracks (Region A). Right: the same type of plot for the latter part of the track where cues are present on the “With cues” track and missing on the “Missing cues” track. b2: summary plots for cells with fields far from cues. The dotted lines are drawn diagonally (slope=1).
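
To make the near/far comparison above concrete, the following sketch (Python with NumPy/SciPy) groups cells by the magnitude of their spatial shift and runs the paired one-tailed t-test of with-cues versus missing-cues field-bin fractions. The array names (shifts_cm, frac_with_cues, frac_missing_cues) and the helper name are placeholders standing in for the per-cell quantities described above, not the code used for Author response image 2.

import numpy as np
from scipy import stats

def near_vs_far_cue_removal_test(shifts_cm, frac_with_cues, frac_missing_cues,
                                 near_tol_cm=25):
    # Split cells into "near" (|shift| < 25 cm) and "far" groups, then run a paired
    # one-tailed t-test of with-cues field-bin fraction > missing-cues field-bin fraction.
    shifts_cm = np.asarray(shifts_cm, dtype=float)
    frac_with_cues = np.asarray(frac_with_cues, dtype=float)
    frac_missing_cues = np.asarray(frac_missing_cues, dtype=float)
    near = np.abs(shifts_cm) < near_tol_cm
    results = {}
    for label, mask in (("near", near), ("far", ~near)):
        t, p_two_tailed = stats.ttest_rel(frac_with_cues[mask], frac_missing_cues[mask])
        p_one_tailed = p_two_tailed / 2 if t > 0 else 1 - p_two_tailed / 2
        results[label] = {"N": int(mask.sum()), "t": float(t), "p": float(p_one_tailed)}
    return results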

c) Can the fact that the sequence is repeated at each cue be explained by the fact that the cue template looks for cells with a fixed spatial offset from each cue? One way to control for this would be to identify cells based on one cue only, and test whether the sequence repeats at the other cues. Alternatively, a cue score could be developed that allows the cue template to move independently at each cue. This would be a more convincing test that cells really do have a fixed offset from each spatial cue.

We agree with the reviewer’s concern that cells with consistent spatial shifts from individual cues could be artificially selected based on the classification criteria of cue cells. To avoid this artifact, the ideal analysis would be to classify cells using a single cue and measure their spatial shifts relative to the other cues, as the reviewer suggested. However, we found that a template comprised of only one cue picked up a large number of other cell types, including grid cells, whose activity did not correlate with the other cues along the track. Based on our observations of the data collected on the current tracks, three to four cues on a track are generally required to specifically select cells with cue-correlated activity.

We performed a modified version of the suggested analysis on calcium imaging data since the environments used in the previous version of the paper were comprised of more cues (eight) than the tetrode data environments (Figure 5—figure supplement 5). In general, for a given cue template, we classified cells with cue-correlated activity using a half-template containing five cues (template 1), and then compared each cell’s spatial shift relative to template 1 to its shift relative to the other half-template, comprised of the remaining five cues (template 2). The hypothesis is that the spatial shift will be similar for these two half-templates if cue cell responses are similarly shifted from all cues. An example with two half-templates is shown in Figure 5—figure supplement 5A. R1 and R2 are two half-templates with cues on the right side of the track. We calculated the percentage of cells that maintained similar spatial shifts across the two half-templates (where the difference of the spatial shifts on R1 and R2 is less than 25 cm). We found that a large fraction of cue cells (76.9% and 80.3% for cells identified on R1 and R2, respectively) had very similar shifts on the two half-templates.

To further confirm that this high percentage of cells with consistent spatial shifts was not created by using a particular set of half-templates, we repeated this analysis for cells in both layers 2 and 3 using multiple sets of half-templates comprised of various combinations of cues from the original templates. We also performed the analysis for left-side cues (Figure 5—figure supplement 5B). All analyses showed similar results (Figure 5—figure supplement 5C). These data together indicate that responding to individual cues at consistent spatial shifts is a feature of most cells with cue-correlated activity. We can include this result in a supplementary figure if necessary.
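
The half-template comparison can be summarized by the sketch below (Python/NumPy): for each cell, the best spatial shift of its firing rate map is computed against each half-template, and the fraction of cells whose two shifts agree to within 25 cm is reported. The helper names and the assumed 5 cm bin size are illustrative; this is a sketch of the logic rather than the code behind Figure 5—figure supplement 5.

import numpy as np

def best_shift_cm(rate_map, template, bin_cm=5):
    # Signed spatial shift (in cm) at which the cue template best matches the rate map.
    corrs = [np.corrcoef(np.roll(template, s), rate_map)[0, 1]
             for s in range(len(template))]
    shift_bins = int(np.nanargmax(corrs))
    if shift_bins > len(template) // 2:  # wrap the circular shift to a signed offset
        shift_bins -= len(template)
    return shift_bins * bin_cm

def consistent_shift_fraction(rate_maps, half_template_1, half_template_2,
                              tol_cm=25, bin_cm=5):
    # Fraction of cells whose best shifts on the two half-templates differ by less than tol_cm.
    diffs = [abs(best_shift_cm(rm, half_template_1, bin_cm) -
                 best_shift_cm(rm, half_template_2, bin_cm)) for rm in rate_maps]
    return float(np.mean(np.array(diffs) < tol_cm))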

d) What is the distribution of Left-Cue scores for Right-Cue cells and vice versa? Is it really "either/or", or is there a continuum of cells that respond to combinations of left and right cues?

In Figure 5—figure supplement 2A, we plotted the distributions of left and right cue scores of cells in the new environment and found that they both exhibited unimodal, continuous distributions. Most cells had cue scores above the cue score threshold for left or right cues but not both. Only a small fraction of cells (~5% of all cells classified using the left and right cue templates, and ~1.6% of all cells) passed the thresholds of both left and right cue templates (top right corner). As shown in Figure 6—figure supplement 2A, their responses correlated with the two templates at different spatial shifts, indicating that they did not simultaneously respond to both left and right cues. Furthermore, we developed a “bilateral score” to specifically determine whether a cell response was encoding cues on one side or both sides of the environment (Figure 6—figure supplement 2B). Bilateral scores of the left and right cue cells together showed a bimodal distribution (Figure 6E), suggesting that these cells responded to either left or right cues, but not both.

To address conjunctive left-cue/right-cue cells more directly, we also previously used a template with cues on both sides. We classified cells using a threshold specific to the both-side template of the new environment (different from the method in the previous version of the manuscript, where we used a single threshold for all three types of template, Figure 5—figure supplement 2C). However, upon closer inspection, we chose to remove the both-side cue cells from the main paper for the following reasons:

1) The cue scores of both-side cue cells were significantly lower than those of left and right cue cells (Figure 5—figure supplement 2D). Since the cue score was the mean correlation of a cell’s response to individual cues, independent of the number of cues on a template, the low cue scores indicated that the responses of both-side cue cells did not correlate well with cues on both sides of the track.

2) 64% of both-side cue cells were also classified as left or right cue cells, which only strongly responded to cues on one side (see the sequence plots in Figure 5—figure supplement 2G).

3) The remaining both-side cue cells (36%) only weakly correlated with the both-side template (Figure 5—figure supplement 2E-G), as reflected by their lower cue scores (Figure 5—figure supplement 2D).

These conclusions were consistently obtained in cells in layers 2 and 3 of the MEC imaged in the previous environment (Figure 6—figure supplement 3 and Figure 5—figure supplement 4C-D and H-I). Since we believe that cue cells preferentially respond to cues on either the left or the right side, but not both, we chose to focus on left and right cue cells in the current manuscript.

e) "For each environment, we found the activity of all cue cells was best aligned to the center of the cue rather than the start or end of the cues". If the place field size is proportional to the cue size, this will automatically follow. Start and end would be displaced differentially with respect to place field center while the cue center would be the average of the two. The method used for generating cue scores (subsection “Scores for cells in tetrode data”) generates higher scores for cells with field sizes matching cue sizes (over cue cells that have cue independent field sizes), making this analysis circular. Why not use the peak correlation between the cue template and the firing rate map with the smallest absolute shift from zero as the cue score, instead? That will eliminate this confound.

We agree with the reviewer and have removed Figure 4C-F and the sentence that was quoted.

3) Detailed statistics need to be provided at multiple places. For example, the subsection “Cue cell pairwise activity patterns” mentions Pearson correlation coefficients of 0.3 and 0.13. The authors argue that "This suggests that the spike timing relationship between cue cell pairs is present only when cues are present and thus when these cells are driven to be active in a sequential manner by locomotion past the cue." This will hold true if the two coefficients are significantly different from one another, and the coefficient of 0.3 is statistically significantly different from 0. 2. "*p ≤0.05. **p ≤0.01. ***p ≤0.001. n.s. p > 0.5. Student's t-test. Error bars: mean ± SEM.": detailed statistics including sample size, p values, t-statistics, means and STDs need to be reported in the main text. The journal guidelines state "Report exact p-values wherever possible alongside the summary statistics and 95% confidence intervals. These should be reported for all key questions and not only when the p-value is less than 0.05."

Have the authors corrected for multiple comparisons wherever required?

We have now reported exact p values throughout the paper. We used appropriate statistical tests for each comparison; because all comparisons were made between two samples, corrections for multiple comparisons were not required.

4) The authors report recording up to 301 cells from a single tetrode (Figure 1—figure supplement 1; including 88 cue cells and 93 grid cells; "Recordings were performed on four animals over two months."). Were repeat recordings from the same neurons on consecutive/multiple days identified and eliminated? How? If they were not eliminated, all the reported statistics suffer from inflation of degrees of freedom. Can the authors comment on this?

In Author response image 3, we have compared the original dataset to two others: 1) a dataset in which duplicate recordings were removed using cross-correlations of the real-arena spatial firing rates (when the correlation of the spatial firing rates in the real arena was >= 0.95, only the cell with the higher maximum firing rate was kept), and 2) a dataset including only cells recorded on the day when tetrodes were lowered. We found that both datasets showed results consistent with those in the original paper. We have updated all figures and text in the paper using the dataset with duplicates removed across days (middle panel in Author response image 3).

Author response image 3. Trimming the tetrode database.

Author response image 3.

Cue cell sequences and scores for three different subsets of the tetrode database. Left: the original database from the original manuscript is shown. Middle: the new trimmed database with duplicates across days removed. Right: the database in which only cells recorded on the days when a given tetrode was lowered are kept. In each of the three panels, the sequences of cue cells found in the database for environments 1 and 7 are shown on the left. To the right, the cue scores versus the border, grid, and head direction scores are shown, as well as the overall distribution of cell types.

5) What are the running speed profiles of the mice? Did they tend to slow down near the visual cues?

Animals tended to slow down only in the region of the last cue, which was associated with a water reward. Examples of the animals’ running speeds along the track on the last day for three tracks are shown in Figure 1—figure supplement 2.

6) "In layers 2 and 3, we consistently observed that anatomically adjacent cue cells (physical distances around 30 μm) showed more similar spatial shifts, whereas the relationship was more varied if cue cells were further apart (Figure 7G-N). The similar cue responses of adjacent cue cells suggest that they may share similar inputs or be connected."

Do the cross correlations of neighbouring cells (on the same tetrode) maintain the peak at 0ms in B if they had peak at 0ms in A? If not, the two observations with tetrodes and imaging would contradict one another. In general, the claims made in G-N are rather weak. Authors should consider excluding them.

The 'micro-organisation' relating physical separation to spatial shifts in responses relative to cue location is seen restricted to anatomically adjacent cue cells: is there any danger that this reflects contamination/poor localisation/diffusion of light from neighboring sources?

1) To clarify the first point about 0 millisecond correlations of tetrode data: First, we previously used a larger dataset that included some duplicate clusters from the same day and that required the removal of data at 0 msec. We have corrected for this in the paper and there are no longer any cells with a peak in the cross correlation at 0 msec. For this reason, we do not think there is a contradiction between tetrode and imaging data.

2) To clarify the second point about contamination in calcium imaging data: we did not observe contamination of the calcium responses of neighboring cells. In Author response image 4, we show that while anatomically adjacent cells had similar spatial shifts (the same cells previously shown in Figure 7G-N), the shapes of their calcium transients were quite different, even for transients occurring at almost the same time. These examples indicate that the similar spatial shifts of adjacent cue cells cannot be explained by contamination of their calcium signals. Upon the suggestion of the reviewers, Figure 7G-N has been removed. In addition, we also removed the original Figure 7A-F (anatomical clustering of cue cells in layers 2 and 3), because we do not consider the conclusion presented in that figure essential for the current manuscript.

Author response image 4. Calcium transients of anatomically adjacent cells with similar spatial shifts.

Author response image 4.

A and B are figure panels in the previous Figure 7G and H. C) Calcium responses of cell 3 (black) and cell 6 (red). Top: mean ΔF/F. Second to bottom: three examples of calcium traces (ΔF/F) showing that the shapes of calcium transients (large peaks) and the patterns of baseline activity of the two cells are different. D) Similar to C but for cell 5 and cell 2.

7) One reason for the more specific apparent correlate of firing in the VR track versus the open field might be that the viewing angle is important and this is not systematically sampled in the open field. Do the mice run in both directions on the VR – if so, do the cue cells fire at a similar cue-angle? Does this relate to the observation that place cell firing becomes more directionally modulated in VR than in real open fields (presumably because of the greater influence of vision in visually-generated VR; Acharya, Aghajan et al., 2016; Chen, King et al., 2018). In the comparison to known cell types, might these cue cells be related to landmark-vector cells (Deshmukh and Knierim, 2013) or object-vector cells (Hoydal et al., 2019), or egocentric responses recently reported in lEC (Wang et al., 2018)? Do the cue cell responses depend on the wider context – need the mouse be running (vs. passive viewing) to see firing? To what extent do cue cells fire similarly across VR environments or do they 'remap'? (this is not entirely clear from Figure 1).

We do not have data from mice running in different directions or during passive viewing; these questions will need to be addressed in future studies.

To address whether cue cells maintain their spatial cell type identity or remap across environments, we measured calcium responses of neurons in layer 2 of the MEC during the navigation of two different virtual tracks. We found that many cue cells showed cue-correlated responses on both tracks (Figure 6A). In general, the percentages of cue cells and non-cue cells that remained as the same cell types in two different tracks were significantly higher than chance (Figure 6B). Finally, cue cells on two tracks also showed highly correlated spatial shifts relative to cue templates (Figures 6A and C). These observations strongly suggest that the cue cell population represents cues in multiple environments in a consistent manner.

We have included a discussion of the papers that were published at the time of submission within the main paper. In this section we discuss in greater detail the papers mentioned for this reviewer comment as well as one additional paper. We have compared our results to the papers mentioned. We can include this additional discussion in the main text if necessary.

In comparison to “Egocentric coding of external items in the lateral entorhinal cortex”, (Wang et al., 2018): This paper shows a division in object representation for LEC and MEC cells. The central result of this paper is the egocentric representation of objects for cells in LEC. In this paper, the examples of “spatial non-grid cells” showed head direction preference. Some of the cells with boundary bearing sensitivity appeared to be border cells. We think cells from either of these populations could align with our cue cells, which also have spatially selective firing and head direction preference in an open arena. While it is unclear how the population of cue cells will respond to multiple cues in a complex environment, a recent study (Hoydal et al., 2019, discussed below) suggested that cue cells (object vector cells in the paper) may exhibit spatial fields around individual cues. We think the allocentric representation of items found in this paper is consistent with the known cell types within MEC. The object vector cells in the MEC also showed an allocentric vectorial representation of objects. We have not specifically investigated the allocentric or egocentric representation of our cue cells, which can be done in the future by having animals travel through the same track in different directions.

In comparison to “Influence of Local Objects on Hippocampal Representations: Landmark Vectors and Memory”, (Deshmukh and Knierim, 2013): This paper describes cells within CA1 in the hippocampus that encode spatial relationships to objects. It is possible that the landmark vector cells in this paper would have a similar response to cues in our virtual environment. However, this paper only shows responses of these cells in environments with multiple objects distributed throughout the environment. It is unclear what activity pattern these cells will exhibit in an object free, bounded arena, where we recorded our cue cells. This paper also conjectures that landmark vector cells are similar to boundary vector cells. We have shown that our cue cells do not have an obvious direct relationship to boundary vector cells. Therefore, without knowing how landmark vectors cells in CA1 respond in an object- free arena, it is unclear how related these two cell types are.

In comparison to “Entorhinal Neurons Exhibit Cue Locking in Rodent VR” (Casali et al., 2019): This paper describes cue locking of activity of cells during navigation of virtual environments. The authors note that if cues are regularly spaced then these cue locking cells will have regularly spaced firing fields that could resemble grid cell activity. They showed that these cells are not grid cells by performing experiments in real arenas. We think these cells are the same as the ones we describe in our paper.

In comparison to the unpublished paper on object-vector cells (Hoydal et al., 2019): We believe these cells belong to a similar cell population as our cue cells do since this paper and our current manuscript have made comparable findings on this cue-related cell type:

1) Similar percentages of cue/object-vector cells, with the percentage of cue cells exceeding the percentage of grid cells

2) These cells are not predominately conjunctive with other spatial cell types

3) These cells maintain cue-related firing across environments (this result has been added as Figure 6 in the current manuscript based on reviewer suggestions).

This paper did have some complementary findings/differences that our paper does not address:

1) They found that some of these cells have multiple firing fields at the location of a single object. It is possible that their objects are larger or the navigation in two dimensions changes the spatial firing field shape/number.

2) They found that these cells maintain an allocentric representation of objects, not an egocentric representation as has been observed in LEC (Deshmukh and Knierim, 2013).

In comparison to this paper, our paper provided additional information about these cue/object vector cells. We addressed the precise spike timing between simultaneously recorded cells. We also specifically studied cue cells in layers 2 and 3 of the MEC and discovered the side-preference of these cells. The fact that these cells mostly responded to cues on a single side, and that right-side cues were predominantly represented in the left MEC, strongly supports a visual input-based mechanism driving the cue cell response. For this reason, both papers provide essential information about this new cell type.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Essential revisions:

The duplicates removed database in Author response image 3, which is used throughout the paper, uses spatial firing rate correlations > 0.95 as a threshold for discarding cells as duplicates. This threshold is unreasonably high, as even stable place cells recorded in consecutive sessions in the hippocampus often have substantially lower correlation coefficients, especially in mice (e.g. Kentros et al., 1998, Figure 3). This means that the duplicates removed database is likely to still have an unreasonably high number of duplicates.

The authors should show the tetrode lowering database figure shown in Author response image 3 at least as supplementary data. They must also include the tetrode lowering database stats and aggregate figures for other analyses using tetrode data, including responses to environmental perturbations, sequences, pairwise correlations etc. to convince the reader that the significance of the patterns reported is not grossly overestimated by inflation of degrees of freedom caused by the inclusion of duplicates in their dataset.

We have added figure supplements showing results for Figures 1-4 with the tetrode lowering database. We also modified the database to better eliminate duplicates by removing duplicate cells whose firing rates in either the real or virtual environments were correlated (using the Pearson correlation, with a threshold of 0.8 for the real-arena firing rates and 0.75 for the virtual firing rates). If either threshold is exceeded, the cell with the lower maximum firing rate in the real arena is removed from the database. This reduced the database from 2107 clusters to 789 clusters; this database is now used for Figures 1-4. All results from the original paper for Figures 1-4 remained statistically significant. One figure panel, Figure 4C, which had only been added in the previous revision (first resubmission), was not statistically significant with the reduced number of cells in the tetrode lowering database and has been removed.
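
A minimal sketch of this duplicate-removal rule is given below (Python/NumPy). It assumes that each cluster’s real-arena and virtual-track firing rate maps have been flattened to 1D vectors and, for simplicity, compares all cluster pairs (in practice the comparison would be restricted to clusters recorded on the same tetrode across days); the function name and greedy pairwise strategy are illustrative rather than the exact procedure.

import numpy as np

def remove_duplicate_clusters(real_maps, virt_maps, real_peak_rates,
                              real_thresh=0.8, virt_thresh=0.75):
    # If a pair of clusters exceeds either correlation threshold (real arena or
    # virtual track), drop the one with the lower peak firing rate in the real arena.
    n = len(real_maps)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        for j in range(i + 1, n):
            if not keep[j]:
                continue
            r_real = np.corrcoef(real_maps[i], real_maps[j])[0, 1]
            r_virt = np.corrcoef(virt_maps[i], virt_maps[j])[0, 1]
            if r_real >= real_thresh or r_virt >= virt_thresh:
                drop = i if real_peak_rates[i] < real_peak_rates[j] else j
                keep[drop] = False
                if drop == i:
                    break  # cluster i was removed; stop comparing it further
    return keep  # boolean mask of clusters retained in the deduplicated database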

In the revised manuscript, it is no longer clear how many animals the data were collected from, and how many neurons of different types were contributed by each animal. The animal- and tetrode-wise breakdown of neurons in the tables included in the previous version is essential. Tables showing the number of units of different kinds recorded from each tetrode in each animal have been eliminated from Figure 1—figure supplement 1. They should be put back, with numbers for both the duplicates removed database and the tetrode lowering database.

These changes have been made to Figure 1—figure supplement 1.

Related to this, it is not clear how many animals contributed to the new data shown in the new Figure 7. Hence, it is impossible to figure out if the reported results are reproducible across animals. Please mention number of animals included for each analysis/figure in the Results.

The information about the number of animals for each result has been included for all figures.

"In Region A, there was a spread in the temporal shifts for pairs of cue cells and these temporal shifts were correlated for the two tracks (Figure 5C left, Pearson correlation = 0.52, p=9×10-5). However, the temporal shifts in Region B of the two tracks were less correlated: while a similar spread of temporal shifts was observed when cue cells were recorded on the with cues track (plotted along the x-axis of the bottom right panel in Figure 5C), most cue cell pairs did not have a correlated phase in the relative spike timing when cues were missing (plotted along the y-axis of the bottom right panel in Figure 5C right, “correlation not significant”). The fact that the spike timing relationship between cue cell pairs is maintained only when cues are present suggests that these cells are driven to be active in a sequential manner by locomotion past cues."

Correlation coefficient and p value for region B needs to be included, as requested in the previous review – stating "correlation not significant" is not sufficient. Furthermore, to make the claim that their data suggests that "cells are driven to be active in a sequential manner by locomotion past cues", the authors should demonstrate that the slopes in region A and B shown in Figure 5C are significantly different from one another.

We decided to remove this figure since it does not add to the main finding of this paper that cue firing fields are no longer present when cues are removed. The sequential nature of the averaged activity of the population of cells is evident in Figures 4 and 5 by direct inspection.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Figure 1—source data 1. Cue score and shuffle distributions.
    Figure 1—source data 2. Cue cell population field distributions across the virtual tracks.
    Figure 2—source data 1. Cue cell firing field fractions.
    Figure 2—source data 2. Correlations of firing rates along tracks to cue template.
    Figure 3—source data 1. Cue, border, grid and head direction scores.
    Figure 3—source data 2. Spatial and head direction stability values by cell type.
    Figure 4—source data 1. Cue cell sequences.
    Figure 5—source data 1. Cue scores, bilateral scores and percentages of left and right cue cells.
    Figure 5—figure supplement 2—source data 1. Cue scores, spatial shifts and both-side cue template.
    Figure 5—figure supplement 4—source data 1. Cue scores, bilateral scores and percentages of cue cells in layers 2 and 3 on an 18-meter virtual linear track.
    Figure 6—source data 1. Percentages of common cue and non-cue cells, and spatial shifts of common cue cells in different environments.
    Transparent reporting form

    Data Availability Statement

    All data generated or analyzed during this study are included in the manuscript and supporting files.

