eLife. 2019 Nov 18;8:e46351. doi: 10.7554/eLife.46351

Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network

Louis Kang 1,2, Michael R DeWeese 1,2
Editors: Sachin Deshmukh3, Laura L Colgin4
PMCID: PMC6901334  PMID: 31736462

Abstract

Grid cells fire in sequences that represent rapid trajectories in space. During locomotion, theta sequences encode sweeps in position starting slightly behind the animal and ending ahead of it. During quiescence and slow wave sleep, bouts of synchronized activity represent long trajectories called replays, which are well-established in place cells and have been recently reported in grid cells. Theta sequences and replay are hypothesized to facilitate many cognitive functions, but their underlying mechanisms are unknown. One mechanism proposed for grid cell formation is the continuous attractor network. We demonstrate that this established architecture naturally produces theta sequences and replay as distinct consequences of modulating external input. Driving inhibitory interneurons at the theta frequency causes attractor bumps to oscillate in speed and size, which gives rise to theta sequences and phase precession, respectively. Decreasing input drive to all neurons produces traveling wavefronts of activity that are decoded as replays.

Research organism: None

Introduction

The hippocampal region contains spatially tuned cells that generally encode an animal’s position in its spatial environment. Place cells in the hippocampal formation fire at one or a few locations (O'Keefe and Dostrovsky, 1971), and grid cells in the medial entorhinal cortex (MEC) fire at many locations that form a triangular lattice in space (Hafting et al., 2005). However, at short timescales, these neurons fire in coordinated sequences that represent trajectories away from the current animal position.

The first type of sequence occurs when the animal is moving and the local field potential (LFP) of the hippocampal region is dominated by an oscillation of 5–11 Hz, the theta range (Vanderwolf, 1969; O'Keefe, 1976; Vinogradova, 1995). During every cycle of this oscillation, neurons corresponding to locations slightly behind the animal fire first, followed by those corresponding to the current location and finally locations ahead of the animal (Skaggs et al., 1996; O'Neill et al., 2017). These so-called theta sequences involving many neurons are related to a single-neuron phenomenon called phase precession (O'Keefe and Recce, 1993; Hafting et al., 2008). When an animal first enters the firing field of a place or grid cell, the neuron spikes late in the theta cycle. As the animal moves through the field, subsequent spikes tend to arrive at smaller theta phase, or earlier within a cycle. Thus, activity within each theta cycle starts with neurons whose peak firing occurs behind the animal and ends with neurons whose peak firing occurs ahead of the animal, which is a theta sequence (Skaggs et al., 1996).

The second type of sequence occurs when the animal is idle and not moving, a state called quiet wakefulness or quiescence; it also occurs during slow wave sleep, which will not concern us here. During idle periods, the LFP of the hippocampal region loses theta oscillations but is instead intermittently punctuated by sharp-wave ripples with power across 140–200 Hz (Buzsáki, 1986; Chrobak and Buzsáki, 1996). During these events, spatially tuned neurons fire in coordinated bursts that represent rapid, long-distance trajectories. These replays are well-established in place cells (Lee and Wilson, 2002; Foster and Wilson, 2006) and have recently been observed in grid cells within superficial layers of the MEC (O'Neill et al., 2017). Note that replays can also involve grid cells within deep layers of the MEC (Ólafsdóttir et al., 2016); these findings, which have been disputed (Trimper et al., 2017), will not be addressed by this work.

Hippocampal experiments on theta sequences and replay have established a rich phenomenology, which includes their properties near branch points of a maze (Wu and Foster, 2014; Belchior et al., 2014), their modulation by reward history (Johnson and Redish, 2007; Wikenheiser and Redish, 2015a; Ambrose et al., 2016; Wu et al., 2017), their ability to predict future actions (Pfeiffer and Foster, 2013; Singer et al., 2013; Wikenheiser and Redish, 2015b; Xu et al., 2019), and their potential role in disease pathogenesis (Middleton et al., 2018). These findings suggest that hippocampal sequences facilitate many cognitive functions, such as memory consolidation, credit assignment, and action planning (Foster, 2017; Ólafsdóttir et al., 2018; Zielinski et al., 2018). However, the mechanisms that produce theta sequences and replay among place and grid cells are still unclear.

Meanwhile, a mechanism for producing grid cells themselves, the continuous attractor model, has been accumulating experimental support (Couey et al., 2013; Yoon et al., 2013; Heys et al., 2014; Dunn et al., 2015; Fuchs et al., 2016; Zutshi et al., 2018; Gu et al., 2018). This class of networks posits particular configurations of synapses within the MEC that are local and symmetric (Burak and Fiete, 2009; Widloski and Fiete, 2014). By incorporating key biological features such as fully spiking neural dynamics, we find that new phenomena emerge in this recurrent architecture: grid cells will exhibit either theta sequences or replays depending on the external input provided by other brain regions. These inputs correspond to experimentally measured changes between active and inactive behavior (Steriade et al., 2001; Simon et al., 2006) and do not involve changes in connectivity. From this perspective, the same architecture that produces grid cells causes them to participate in sequences that may be leveraged by other brain regions, such as the hippocampus, to pursue desired cognitive goals.

Results

Our fully spiking continuous attractor model produces grid cells

We first describe our implementation of a continuous attractor network and demonstrate that it reproduces classic grid responses when a simulated animal explores a 2D open field. Model details are provided in Materials and methods and Table 1. We simulate a single grid module—that is, grid cells of one scale and orientation (Stensola et al., 2012)—with a continuous attractor network based on Burak and Fiete (2009) and Widloski and Fiete (2014). Neurons are assembled in a 2D sheet with five overlapping populations, of which four represent excitatory principal cells and one represents inhibitory interneurons (Figure 1A). Each excitatory neuron excites its neighbors across all populations, and each inhibitory neuron inhibits only excitatory neurons located a certain distance away. We will address the purpose of having four slightly different excitatory populations shortly.
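As a rough sketch of this connectivity, the output weight profiles can be written as local excitation, shifted along each population's preferred sheet direction by the synaptic shift ξ, plus ring-like inhibition centered at the inhibitory synaptic distance. The Gaussian forms and the ring width below are our illustrative assumptions (the exact kernels are specified in Materials and methods); the parameter values are those listed in Table 1, which follows.

```python
import numpy as np

# Parameters from Table 1; the Gaussian kernel shapes and the inhibitory ring
# width are illustrative assumptions, not the paper's exact kernels.
w_mag_exc, r_w_exc, xi = 0.2, 6.0, 3.0   # exc. strength, spread, directional shift
w_mag_inh, r_w_inh = 2.8, 12.0           # inh. strength, preferred inhibition distance

# Preferred sheet direction of each excitatory population.
pref = {"N": (0, 1), "W": (-1, 0), "S": (0, -1), "E": (1, 0)}

def exc_weight(dx, dy, population):
    """Excitatory output weight onto a target offset by (dx, dy) neurons, with
    the output profile shifted by xi along the population's preferred direction."""
    ex, ey = pref[population]
    d2 = (dx - xi * ex) ** 2 + (dy - xi * ey) ** 2
    return w_mag_exc * np.exp(-d2 / (2 * r_w_exc ** 2))

def inh_weight(dx, dy, ring_width=3.0):
    """Inhibitory output weight, strongest onto excitatory targets near the
    preferred distance r_w_inh (the ring_width value is an assumption)."""
    d = np.hypot(dx, dy)
    return -w_mag_inh * np.exp(-((d - r_w_inh) ** 2) / (2 * ring_width ** 2))
```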

Table 1. Main model parameters and their values unless otherwise noted.

Values that change between runs, idle periods, and allocentric corrections are indicated accordingly.

Parameter                      Variable         Value
Neurons per population         n×n              232×232
Simulation timestep            Δt               1 ms
Exc. membrane time constant    τ_m^+            40 ms
Inh. membrane time constant    τ_m^-            20 ms
Exc.-to-exc. synaptic delay    τ_s^++           5 ms
Exc.-to-inh. synaptic delay    τ_s^-+           2 ms
Inh. synaptic delay            τ_s^-            2 ms
Exc. drive maximum             a_max^+          2.0 (runs), 1.6 (idle), 2.0 (allo.)
Exc. drive minimum             a_min^+          0.8
Exc. drive scaled spread       ρ_a^+            1.2 (runs), 0.9 (idle)
Inh. drive magnitude           a_mag^-          0.72 (runs), 0.0 (idle), 0.72 (allo.)
Inh. drive theta amplitude     a_th^-           0.2 (runs), 0 (idle), 0 (allo.)
Inh. drive theta frequency     f                8 Hz
Exc. synaptic strength         w_mag^+          0.2
Exc. synaptic spread           r_w^+            6
Inh. synaptic strength         w_mag^-          2.8
Inh. synaptic distance         r_w^-            12
Exc. synaptic shift            ξ                3
Exc. velocity gain             α                0.25 s/m
Exc. noise magnitude           Var[ζ^P(r,t)]    0.0022
Inh. noise magnitude           Var[ζ(r,t)]      0.0022

Figure 1. Model architecture and generation of 2D grid cells.

(A) Our model consists of a neural sheet with five overlapping populations, four of them excitatory—N, W, S, and E—and one inhibitory. Each density plot depicts the synaptic outputs in the sheet of a neuron at the origin. (B) Each neuron is driven to a particular membrane potential, which exceeds the spiking threshold for excitatory neurons at the center of the sheet and oscillates at 8 Hz for inhibitory neurons while the animal is running. (C) Snapshot of neural activity showing S and inhibitory populations separately; other excitatory populations have activity patterns similar to that of the S population. Each pixel is a neuron and dark colors indicate recent spiking. (D) Left, segment of a 2D open field trajectory. Right, neural activity over the course of the segment with each neuron colored according to the position at which it attained its maximum firing rate. Each attractor bump moves in synchrony with animal motion. (E) Left, two sample grid cells with spikes shown as colored dots superimposed on the animal’s trajectory. Each neuron’s location in the sheet is indicated by a circle of corresponding color in D. Right, autocorrelation of rate maps calculated from spikes at left. Black scale bars, 50 neurons. White scale bars, 50 cm.

Figure 1—figure supplement 1. 2D firing maps for many grid cells.

Spatial firing maps for a 2D open field trajectory. Twenty grid cells are selected from the simulation depicted in Figure 1D,E across all four excitatory populations. For each grid cell, spikes superimposed on the animal’s trajectory (left) and autocorrelation of rate maps calculated from spikes (right).
Figure 1—video 1. Emergence of a grid-like pattern on the neural sheet from randomly initialized membrane potentials.
Neural activities of the W and inhibitory populations over the first 240 ms of simulation setup. Each pixel is a neuron, with black corresponding to current spikes and lightest gray corresponding to spikes 40 ms ago. Defects in the grid are then removed as described in the Model details subsection of Appendix 1 before the main simulation begins.
Figure 1—video 2. Path integration over an open field trajectory produces 2D grid cells.
Left, neural activity of the W population. Each pixel is a neuron, with black corresponding to current spikes and lightest gray corresponding to spikes 40 ms ago. Red circle indicates the location of a single recorded neuron. Right, open field trajectory in an enclosure of diameter 1.8 m with a black square indicating the current animal position. Red dots indicate animal positions during spikes recorded from the neuron.

To accurately simulate rapid sequences of spikes, we implement fully spiking grid cells that obey leaky integrate-and-fire dynamics. Each neuron has a membrane potential that tends toward a steady-state value called the drive, which represents a combination of resting potential and broad input from other brain regions (Figure 1B). If the potential exceeds a threshold of 1 in arbitrary units, a spike is emitted, the potential is reset to 0, and the neuron’s targets experience a postsynaptic jump in potential after a brief synaptic delay. During locomotion, excitatory principal cells are driven to fire by various cortical and subcortical inputs (Kerr et al., 2007; Bonnevie et al., 2013). Thus, their drive exceeds threshold at the center of the neural sheet; it decays towards the edges to avoid edge effects that disrupt continuous attractor dynamics (Burak and Fiete, 2009). Meanwhile, inhibitory interneurons have subthreshold drive that oscillates at a theta frequency of 8 Hz due to input from the medial septum (Brandon et al., 2011; Koenig et al., 2011; Gonzalez-Sulser et al., 2014; Unal et al., 2015). With this architecture and random initial membrane potentials, the neural sheet naturally develops local regions of activity, called bumps, that are arranged in a triangular lattice and are coherent across populations (Figure 1C and Figure 1—video 1). This self-organized grid is an attractor state of the network.
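As a minimal sketch of these dynamics (the exact equations are given in Materials and methods), one Euler step of the leaky integrate-and-fire update can be written as below, with the timestep, excitatory membrane time constant, and synaptic delay taken from Table 1. Handling delays with a short queue of past synaptic inputs is our own illustrative choice, not necessarily the paper's implementation.

```python
import numpy as np
from collections import deque

dt = 0.001            # simulation timestep, 1 ms (Table 1)
tau_m_exc = 0.040     # excitatory membrane time constant, 40 ms (Table 1)
delay_steps = 5       # exc.-to-exc. synaptic delay of 5 ms, in timesteps

def lif_step(V, drive, delayed_syn_input):
    """One Euler step of leaky integrate-and-fire dynamics: delayed synaptic
    inputs arrive as jumps in potential, V relaxes toward its drive, and any
    neuron crossing the threshold of 1 spikes and resets to 0."""
    V = V + delayed_syn_input                 # postsynaptic jumps after the delay
    V = V + (dt / tau_m_exc) * (drive - V)    # leak toward the steady-state drive
    spikes = V >= 1.0                         # threshold of 1 in arbitrary units
    V = np.where(spikes, 0.0, V)
    return V, spikes

# A short queue holds spike-driven inputs until their synaptic delay elapses.
V = np.random.rand(232 * 232)                 # random initial membrane potentials
input_queue = deque([np.zeros_like(V)] * delay_steps, maxlen=delay_steps)
```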

How do we produce neurons with grid-like spatial tuning from the grid-like activity pattern on the neural sheet? This transformation is performed by the four excitatory populations, each of which corresponds to a preferred direction along the neural sheet and a preferred direction in the spatial environment. Excitatory neurons have output synapses biased in their preferred sheet direction (Figure 1A), and they receive more input when the animal moves along their preferred spatial direction. When the animal moves along, say, the 'North' direction, neurons in the N population have increased drive; since their outputs are biased in the 'up' direction on the neural sheet, the activity pattern moves up. In this way, the activity pattern on the sheet moves synchronously with the animal’s 2D trajectory (Figure 1D), and the grid pattern is projected onto single neuron spatial responses (Figure 1E and Figure 1—video 2). Each active neuron in the sheet is a grid cell with the same scale and orientation (Figure 1—figure supplement 1).
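A hedged sketch of this velocity coupling, assuming a simple multiplicative modulation of each population's drive by the velocity component along its preferred spatial direction (the exact form used in the model is specified in Materials and methods), with the velocity gain α from Table 1:

```python
import numpy as np

alpha = 0.25   # velocity gain in s/m (Table 1)
pref_dir = {"N": np.array([0.0, 1.0]), "W": np.array([-1.0, 0.0]),
            "S": np.array([0.0, -1.0]), "E": np.array([1.0, 0.0])}

def velocity_modulated_drive(base_drive, velocity):
    """Scale each excitatory population's drive by (1 + alpha * v_parallel), where
    v_parallel is the animal's velocity component along the population's preferred
    spatial direction. The multiplicative form is an illustrative assumption."""
    return {pop: base_drive * (1.0 + alpha * float(np.dot(velocity, d)))
            for pop, d in pref_dir.items()}

# Running 'North' at 0.5 m/s boosts the N population's drive and reduces the S one's.
drives = velocity_modulated_drive(base_drive=2.0, velocity=np.array([0.0, 0.5]))
```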

Grid cells are spatially tuned and theta-modulated along a linear track

For the rest of the paper, we simulate 1D trajectories consisting of runs along a linear track separated by idle periods at either end (Figure 2A). As in the 2D case, attractor bumps on the neural sheet move synchronously with animal motion. But over the course of a simulation, the grid-like pattern on the neural sheet drifts (Burak and Fiete, 2009) such that for a given track position, the corresponding bump locations on the neural sheet slowly change. This drift introduces errors in path-integration (Hardcastle et al., 2015), which are believed to be corrected by allocentric input from border cells (Ocko et al., 2018; Keinath et al., 2018), boundary vector cells (Evans et al., 2016), or landmark cells (Raudies and Hasselmo, 2015; Savelli et al., 2017). Thus, we implement brief allocentric corrections in our model. Attractor bump locations on the neural sheet that correspond to either end of the track are learned during simulation setup. They are periodically reintroduced during the main simulation as excitatory input between idle periods and runs (Figure 2A,B; see Materials and methods). In Appendix 2, we present a version of our model without allocentric corrections that still demonstrates many of our results, though to a weaker degree.

Figure 2. Grid cells along a 1D track.

(A) Trajectory consisting of runs along a track separated by idle periods at either end. Between the end of an idle period and the start of a run, the network receives brief allocentric input. (B) Allocentric input corrects the location of attractor bumps on the neural sheet (Materials and methods). (C) Left, track diagram. Right, neural activity over runs with each neuron colored according to the track position at which it attained maximum firing rate. Red circles indicate regions of recording. Scale bar, 50 neurons. (D) Firing fields of recorded grid cells sorted by position of maximum rate. (E) Multiunit activity of neurons in D averaged over theta cycles, which span from one trough of the oscillating inhibitory drive to the next. Data is repeated over two cycles for clarity.

Figure 2—figure supplement 1. Firing fields for multiple simulations.

Firing fields of recorded grid cells along a 1D track. Fields are sorted by position of maximum rate. (A) Representative simulations exhibiting one field. (B) Representative simulations exhibiting two fields.

Grid cells at different locations in the sheet generally correspond to different positions along the track (Figure 2C). From four regions in the sheet chosen for recording, we select up to 150 excitatory neurons whose spatial firing fields are stable from lap to lap and collectively tile the track (Figure 2D; Materials and methods). Over replicate simulations with different random initial membrane potentials, many simulations contain only grid cells with single fields as in Figure 2D. In other replicates, the attractor grid self-organizes on the neural sheet with an orientation such that two attractor bumps pass over recorded neurons and produce two fields (Figure 2—figure supplement 1; Appendix 1). All of our single-simulation examples presented below in the main text correspond to the simulation depicted in Figure 2D that exhibits single fields. Similarly, the grid cells in O'Neill et al. (2017), with which we will primarily compare our results, often possess a single dominant field along an arm of a T-maze. However, both types of simulations contribute to our results, and we will provide examples for simulations exhibiting two fields in our figure supplements.

At the theta timescale, grid cell activity is highest when the oscillating inhibitory drive is lowest and excitatory cells are most disinhibited (Figure 2E). This observation allows us to identify theta phases in our simulations with those defined through LFP measurements. Experimentally, grid cells (Hafting et al., 2008) and place cells (Buzsáki, 2002; Feng et al., 2015) are most active at troughs of the theta-filtered LFP, which can be explained in terms of extracellular currents generated by neural activity (Buzsáki et al., 2012). O'Neill et al. (2017) define theta cycles to span from trough to trough of the LFP. Accordingly, we align our theta cycles to start with phase 0° at troughs of the inhibitory drive (Figure 2E).
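For readers who want to reproduce this convention, a hypothetical helper that assigns each spike a theta phase from the trough times of the inhibitory drive might look like the following; it assumes trough times are already known and is not taken from the paper's code.

```python
import numpy as np

def spike_theta_phase(spike_times, trough_times):
    """Assign each spike a theta phase in [0, 360), with 0 deg at troughs of the
    oscillating inhibitory drive and phase increasing linearly until the next
    trough. Spikes before the first or after the last trough should be discarded
    beforehand (hypothetical helper, assuming trough times are known)."""
    spike_times = np.asarray(spike_times)
    trough_times = np.asarray(trough_times)
    idx = np.searchsorted(trough_times, spike_times, side="right") - 1
    cycle_start = trough_times[idx]
    cycle_len = trough_times[idx + 1] - cycle_start    # roughly 1/8 s at 8 Hz
    return 360.0 * (spike_times - cycle_start) / cycle_len
```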

Theta phase precession arises from oscillations in attractor bump size

Phase precession results

During runs, activity passes through recorded neurons as attractor bumps move along the neural sheet (Figure 3A, Figure 3—figure supplement 1, and Video 1). However, spike trains from single neurons exhibit finer temporal organization with respect to the theta oscillation in a variety of ways (Figure 3B, Figure 3—figure supplement 2, and Figure 3—figure supplement 3). Many neurons are phase-independent and fire throughout the theta cycle. Some are phase-locking and strongly prefer to fire at 360° (equivalently, 0° or 720°). Finally, some exhibit phase precession with a preferred phase that starts around 360° when the animal enters a grid field but decreases as the animal progresses through it.

Figure 3. During runs, certain neurons exhibit theta phase locking and precession.

(A) Grid cell spike rasters for four laps along the track. Vertical blue lines indicate theta cycle boundaries. (B) Relationship between animal position along the track and theta phase for representative neurons. Dots represent spikes during runs in the directions indicated by arrows, with each spike repeated at two equivalent phases for clarity. Lines indicate fit by circular-linear regression. Numbers in each panel from top left to top right indicate magnitudes of correlation coefficients, regression fit scores, and regression slopes. (C–E) Data across all replicate simulations. (C) Magnitudes of circular-linear correlation coefficients. Mean ± s.d.: 0.17 ± 0.11. (D) Fit scores for circular-linear regression. (E) Regression slopes in units of field size for neurons with fit score >0.4. The predominance of negative values for rightward runs and positive for leftward runs indicates decreasing phase as the animal traverses grid fields in either direction. (F) Spike densities for different subgroups. An animal enters a grid field at progress 0% and exits it at 100%.

Figure 3—figure supplement 1. Spike rasters over multiple runs.

Vertical blue lines indicate theta cycle boundaries. (A) Representative simulation exhibiting one field. (B) Representative simulation exhibiting two fields.
Figure 3—figure supplement 2. Position-phase relationships during rightward runs for many neurons.

Relationship between animal position along the track and theta phase during rightward runs. Dots represent spikes during runs in the directions indicated by arrows, with each spike repeated at two equivalent phases for clarity. Lines indicate fit by circular-linear regression. Numbers in each panel from top left to top right indicate magnitudes of correlation coefficients, regression fit scores, and precession ranges. (A) Representative neurons from simulations exhibiting one field. (B) Representative neurons from simulations exhibiting two fields.
Figure 3—figure supplement 3. Position-phase relationships during leftward runs for many neurons.

Relationship between animal position along the track and theta phase during leftward runs. Panels same as in Figure 3—figure supplement 2.
Figure 3—figure supplement 4. Further analysis of subgroups with different phase relationships.

Further analysis of subgroups of phase-independent, phase-locking, and phase-precessing neurons. (A,B) Distributions of regression slopes using a fit score cutoff different from 0.4, which was used in Figure 3. (C) Neural sheet locations of neurons belonging to each subgroup for the simulation described in Figure 3A,B. Data encompassing recording areas shown in Figure 2C and recording areas offset by 45° on the neural sheet. Left, track diagram. Right, neural activity over runs with each neuron colored according to the track position at which it attained maximum firing rate. Scale bar, 50 neurons. (D–G) Distribution of neurons in each subgroup with respect to various properties. Medians for each subgroup indicated by dashed lines. Medians for phase-locking and phase-precessing subgroups compared by the Mann-Whitney U test with reported p-value. (D) Mean firing rate. (E) Distance from the center of the neural sheet divided by half the neural sheet size n. (F) Angle on the neural sheet relative to the right direction, or +x̂. (G) Location relative to attractor bumps in units of relative firing rate. A value of 1 means that the center of a bump passes over the neuron, and a value of 0 means that no part of a bump passes over the neuron (Appendix 1).
Figure 3—figure supplement 5. Dependence of phase relationships on excitatory population and number of fields.

Box-whisker plots with lines at medians, boxes between first to third quartiles, and whiskers between the entire data range excluding outliers lying more than 1.5 times the interquartile distance beyond the first or third quartile. Medians compared by the Mann-Whitney U test with reported p-values. (A–F) Analysis of phase relationships with respect to excitatory population. For these simulations, the direction of animal velocity is 0°, which means that runs are aligned to the E and W directions. The ‘along’ category corresponds to both the E population when the animal is running towards E and the W population when the animal is running towards W. The ‘against’ category corresponds to the E and W populations during the opposite running directions. The ‘perp.’ category corresponds to the N and S populations. (A) Number of valid recorded neurons. Note that there are very few ‘against’ neurons due to low firing rate, and this category is excluded from further analysis. (B) Number of spikes per neuron. (C) Magnitude of correlation coefficient. (D) Fraction of phase-locking neurons. (E) Fraction of phase-precessing neurons. (F) Phase precession range. (G–J) Analysis of phase relationships with respect to number of fields exhibited by a simulation. (G) Magnitude of correlation coefficient. (H) Fraction of phase-locking neurons. (I) Fraction of phase-precessing neurons. (J) Phase precession range.
Figure 3—figure supplement 6. Phase relationship properties over different simulation parameters.

Correlation coefficient magnitude, precession range, fraction of phase-locking neurons, and fraction of precessing neurons as functions of various simulation parameters. Points at mean values with bars indicating s.d. over replicate simulations. Small horizontal offsets added for clarity. (A) Animal speed and track orientation relative to the E direction, or +X̂. (B) Excitatory drive and recording distance from the center of the neural sheet divided by half the neural sheet size.
Figure 3—figure supplement 7. Phase relationship properties over different simulation parameters, continued.

Same as Figure 3—figure supplement 6, except as functions of different parameters. (A) Inhibitory theta amplitude and inhibitory drive. (B) Inhibitory and excitatory membrane time constants.
Video 1. Neural activity during runs, idle periods, and allocentric corrections.

Left, position of the animal (black square) along a 1D track. Right, neural activity of the E population. Each pixel is a neuron, with black corresponding to current spikes and lightest gray corresponding to spikes 40 ms ago. Red circles indicate regions of recording.

We first quantify these relationships using circular-linear correlation (Equation 13), which indicates whether theta phase changes with position (Kempter et al., 2012). Our correlation magnitudes 0.17 ± 0.11 (mean ± s.d.; Figure 3C) are low compared to experiments, which report grid cells with mean ≈0.3 and place cells with mean ≈0.4 (O'Neill et al., 2017). This difference arises from two sources. Figure S10 of O'Neill et al. (2017) shows some highly correlated neurons with magnitudes up to 0.8; these are absent from our simulations. It also shows fewer neurons with correlation value close to 0, which corresponds to either phase-independent or phase-locking neurons. Since both subgroups lack a preferred theta phase that consistently changes with position, circular-linear correlation cannot distinguish between them (Figure 3B, top two rows).

To further characterize phase behavior, we use circular-linear regression (Kempter et al., 2012), which can differentiate between all three subgroups of phase relationships presented in Figure 3B. Phase-independent neurons are defined to have a regression fit score less than a cutoff of 0.4 (Figure 3D; Equation 11). Neurons with high fit score are then distinguished as either phase-locking or precessing by the absolute value of their regression slope in units of field size, which is called the phase precession range (Figure 3E). We use a range of 60° as the cutoff. Experimentally, phase-locking and precessing neurons are found in Layers III and II (LIII and LII), respectively (Hafting et al., 2008). A wide variety of phase precession ranges are reported across subtypes of LII neurons with medians spanning from 50° for pyramidal cells (Ebbesen et al., 2016) to 170° for stellate cells (Reifenstein et al., 2016). Our cutoff is close to the lower limit of this span. Yet, regardless of the cutoff value, regression slopes are biased towards negative values for rightward laps and positive values for leftward laps, which means that the preferred firing phase tends to decrease as an animal moves through a grid field. This directional tendency is a key biological feature of phase precession seen experimentally (Hafting et al., 2008), and it is also maintained for other cutoff values applied to the fit score (Figure 3—figure supplement 4A,B).
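The regression itself can be sketched as a slope scan in the spirit of Kempter et al. (2012); the mean resultant length returned below is only a stand-in for the fit score of Equation 11, which we do not reproduce here.

```python
import numpy as np

def circular_linear_fit(x, phase_deg, slope_range=(-2.0, 2.0), n_slopes=2001):
    """Circular-linear regression by slope scan: choose the slope a (in cycles per
    unit of x) that maximizes the mean resultant length of the residual phases
    phase - 360*a*x, then report the offset and that resultant length as a rough
    goodness-of-fit proxy (not the paper's Equation 11 fit score)."""
    x = np.asarray(x)
    phase = np.deg2rad(np.asarray(phase_deg))
    slopes = np.linspace(*slope_range, n_slopes)
    R = [np.abs(np.mean(np.exp(1j * (phase - 2 * np.pi * a * x)))) for a in slopes]
    best = int(np.argmax(R))
    a = slopes[best]
    phi0 = np.angle(np.mean(np.exp(1j * (phase - 2 * np.pi * a * x))))
    return 360.0 * a, np.rad2deg(phi0) % 360.0, R[best]   # slope (deg per x), offset, fit proxy
```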

We also construct spike density plots using all neurons in each subgroup (Figure 3F). These plots show that phase-locking neurons have a preferred phase around 360° throughout the field and that phase-precessing neurons have a preferred phase also around 360° at field entry. This value corresponds to theta troughs when excitatory neurons are most disinhibited (Figure 2E) and matches measurements for LIII and LII neurons (Hafting et al., 2008; note that their phase convention differs from ours by 180°). More detailed features also seem to agree between phase-precessing neurons in our simulations and LII neurons in experiments. First, the peak density decreases by ≈75° over the first 50% of field progress, and second, there is a smaller density at around 60% field progress and 140° theta phase (Figure 3F; Hafting et al., 2008). This second density has been separately related to bursting (Climer et al., 2013) and a second source of neural input (Yamaguchi et al., 2002; Chance, 2012); further investigation is required to reconcile these assertions with its origin in our model.

The dependence of phase behavior on simulation features and parameters is explored in detail in the figure supplements of Figure 3. We will briefly list a few key results interspersed with experimental findings that support them. Phase precession statistics do not depend on the direction of animal motion (Figure 3—figure supplement 5E,F); in experiments, Climer et al. (2013) report omnidirectional precession in a 2D environment. Precession range and correlation magnitude increase with animal speed (Figure 3—figure supplement 6A, p = 0.002 and 0.0006 for precession range and correlation magnitude, respectively, by pooling data across track orientations and applying the Kruskal-Wallis H test); in experiments, Jeewajee et al. (2014) report that faster motion is associated with steeper precession slopes and higher correlation magnitudes (their Figure 3a, results for pdcd, which corresponds to field progress defined in this paper).

To summarize our phase precession results, our simulation produces a subgroup of phase-locking neurons akin to MEC LIII neurons and a subgroup of phase-precessing neurons akin to MEC LII neurons. Further consideration of correlation coefficients and precession ranges, with connections to measurements, will be provided in the Discussion.

Attractor bump oscillations explain phase precession

We can understand the emergence of phase precession through the effect of oscillating inhibitory drive on attractor bumps. Higher inhibitory drive suppresses the activity of the excitatory grid cells and decreases the size of bumps (Figure 4A,B). Imagine we record from a single neuron as a bump of activity moves through it on the neural sheet. As an attractor bump moves, its size oscillates with inhibitory drive (Figure 4C, top, and Figure 4—video 1). The activity bump will most likely first reach the grid cell when it is large and growing in size, which corresponds to the end of the theta cycle when the inhibitory drive is decreasing towards its lowest point. Thus, as an animal enters a grid field, the first spike will likely have large theta phase slightly less than 360°. During the next theta cycle, the center of the bump has moved closer to the neuron, so the edge of the bump will reach the neuron when it is growing in size but not as large. Thus, the next spike will have slightly smaller theta phase. This is the origin of phase precession in our model. We continue generating spikes with simplified dynamics in which the neuron fires when it is contained within the bump, but there is a 40 ms refractory period corresponding to membrane potential building up towards threshold after it is reset. This procedure represents one lap along the track at constant velocity.
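The conceptual model of Figure 4C can be sketched directly; the bump speed, mean radius, and oscillation amplitude below are illustrative values rather than the paper's, while the 8 Hz oscillation, the 40 ms refractory period, and the "fire when inside the bump" rule follow the description above.

```python
import numpy as np

dt, f_theta, refractory = 0.001, 8.0, 0.040   # timestep (s), theta frequency (Hz), refractory (s)
bump_speed = 30.0          # neurons/s, constant component of bump motion (assumed)
r_mean, r_amp = 6.0, 3.0   # mean bump radius and oscillation amplitude, in neurons (assumed)

def one_lap(initial_phase_deg=0.0, duration=0.6):
    """One pass of the oscillating bump over a neuron at sheet location 0.
    The neuron spikes whenever it lies inside the bump, subject to a 40 ms
    refractory period; returns (time, theta phase) pairs of the spikes."""
    t = np.arange(0.0, duration, dt)
    phase = (360.0 * f_theta * t + initial_phase_deg) % 360.0
    center = bump_speed * (t - duration / 2)              # bump center sweeps past 0
    radius = r_mean + r_amp * np.cos(np.deg2rad(phase))   # largest at phase 0/360 (drive trough)
    spikes, last = [], -np.inf
    for ti, phi, inside in zip(t, phase, np.abs(center) <= radius):
        if inside and ti - last >= refractory:
            spikes.append((ti, phi))
            last = ti
    return spikes

# Accumulating laps with random initial phases reproduces the spirit of Figure 4C (bottom).
laps = [one_lap(initial_phase_deg=p) for p in np.random.uniform(0, 360, 20)]
```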

Figure 4. Bump size oscillations explain phase precession in models with simplified dynamics.

(A,B) Data from simulations with fixed inhibitory drive and constant animal velocity. (A) Snapshots of neural activity. Scale bar, 50 neurons. (B) The diameter of bumps on the neural sheet decreases linearly with inhibitory drive (linear regression R² = 0.90, ANOVA p < 10⁻⁹). (C) Phase precession in a conceptual model with bump size oscillations. We imagine an attractor bump, with size oscillations described by B, passing through a recorded grid cell. Top, a single lap. The recorded neuron is at location 0 and fires a spike (black dot) whenever contained within the bump (gray area), subject to a 40 ms refractory period. Bottom, relationship between theta phase and time across multiple laps with different initial phases. Spikes occur around 360° (equivalently, 0°) at the start of the field, and their phase generally decreases with time within the grid field. (D) Attractor bump shape at different theta phases averaged over theta cycles and individual bumps (Appendix 1). Grays scaled separately for each theta phase. Scale bar, three neurons. (E,F) Phase behavior in a simplified model using average bump dynamics. (E) We imagine the average attractor bump passing through a recorded grid cell. Top, a single lap. The recorded neuron is at location 0 and stochastically fires a spike (black dot) with rate proportional to bump activity, subject to a 40 ms refractory period. Bottom, relationship between time and theta phase across multiple laps with different initial phases. (F) Relationship between time and theta phase using average bumps whose activity has been rescaled to different maximum values: 40, 50, and 100 spikes/s. Dots represent spikes generated according to E. Lines indicate fit by circular-linear regression. Numbers in each panel from top left to top right indicate magnitude of correlation coefficient, regression fit score, and regression slope.

Figure 4—figure supplement 1. Simplified model using average bump dynamics produces different phase behaviors for different levels of rescaled activity.

Simplified model of phase precession using average bump dynamics. Bump activity is rescaled to different maximum values. (A–D) Neurons exhibit phase locking under low bump activity with maximum value 40 spikes/s. Statistics from 1000 neurons simulated according to Figure 4E. (A) Relationship between animal position along the track and theta phase for representative neurons. Dots represent spikes. Lines indicate fit by circular-linear regression. Numbers in each panel from top left to top right indicate magnitude of correlation coefficient, regression fit score, and regression slope. (B) Magnitudes of circular-linear correlation coefficients. (C) Fit scores for circular-linear regression. (D) Regression slopes for neurons with fit score >0.4. (E–H) Neurons exhibit phase precession under medium bump activity with maximum value 50 spikes/s. Panels same as A–D. (I–K) Neurons exhibit phase independence under high bump activity with maximum value 100 spikes/s. Panels same as A–C.
Figure 4—video 1. Phase precession in a simplified conceptual model.
An idealized attractor bump, with constant speed and oscillating size, passes through a recorded grid cell multiple times with different initial theta phases. This represents an animal running multiple laps in the same direction at constant speed. Top, the recorded neuron is at location 0 and fires a spike (black dot) whenever it is contained within the bump (gray area), subject to a 40 ms refractory period. Bottom, relationship between theta phase and time for spikes accumulated across multiple laps. Theta phase generally decreases with time within the grid field. Since the simulated animal moves at constant speed, time is proportional to position, so theta phase decreases with position within the grid field.
Figure 4—video 2. Average attractor bump shape as a function of theta phase.
Bump activity is averaged over multiple bumps and theta cycles. Darker areas indicate higher activity, scaled separately for each theta phase. The constant velocity component of bump motion is removed.

We represent additional laps by performing the same simplified procedure as above, but with different shifts in theta phase, since the animal does not always enter the grid field at the same initial phase. Spikes accumulated across laps show phase precession (Figure 4C, bottom), though the effect is strongest at the beginning of the field. In the middle of the field, spikes still precess, but they cluster around values of both 360° and 180°. At the end of the field, a few spikes even precess in the opposite direction, increasing in phase with time. Thus, bump oscillations alone can explain the direction of phase precession and the preferred phase at field entry, but spike phases do not decrease perfectly linearly with progress through the field.

To analyze attractor bump oscillations more precisely, we calculate the average bump shape as a function of theta phase (Figure 4D and Figure 4—video 2; Appendix 1). We now imagine that this average bump passes multiple times through a recorded neuron whose spiking probability is proportional to bump activity (Figure 4E). Each instance of this stochastic process produces a phase-locking, phase-precessing, or phase-independent neuron. By explicitly rescaling the activity of the average bump without otherwise changing its time-varying shape, we can generate neurons enriched in one of these phase relationships (Figure 4F and Figure 4—figure supplement 1). Under high activity, neurons are driven to fire across all theta phases and exhibit phase independence. Under low activity, neurons only fire during the most permissive theta phase and exhibit phase locking at 360°. Under intermediate levels of activity, neurons can respond to oscillations in bump size as demonstrated in Figure 4C and exhibit phase precession.
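The stochastic variant of Figure 4E,F differs only in how spikes are drawn: with probability proportional to the (rescaled) bump activity at the neuron's location, again with a 40 ms refractory period. A sketch, where `bump_activity` stands in for the averaged bump profile of Figure 4D sampled at the neuron's location:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, refractory = 0.001, 0.040   # timestep and refractory period (s)

def stochastic_spikes(bump_activity, t, max_rate):
    """Draw spikes with probability rate*dt per step, where the rate is the bump
    activity at the neuron's location rescaled to a chosen maximum (e.g. 40, 50,
    or 100 spikes/s, as in Figure 4F)."""
    rate = max_rate * np.asarray(bump_activity) / np.max(bump_activity)
    last, spikes = -np.inf, []
    for ti, r in zip(t, rate):
        if ti - last >= refractory and rng.random() < r * dt:
            spikes.append(ti)
            last = ti
    return np.array(spikes)
```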

This finding on how activity level influences phase behavior is supported by the experimental report that higher spike counts are associated with steeper precession slopes (Jeewajee et al., 2014, Figure 3b, results for pdcd, which corresponds to field progress defined in this paper). Our main model also reflects this finding: neurons with low, medium, and high firing rates have increased likelihoods for phase locking, precession, and independence, respectively (Figure 3—figure supplement 4D). Thus, the heterogeneous phase behaviors observed in our main simulations arise, at least in part, from heterogeneous levels of bump activity. Three factors that determine the level of activity experienced by a neuron are its distance to the center of the neural sheet, its sheet location relative to attractor bumps, and its preferred firing direction relative to animal motion. Accordingly, the prevalence of different phase behaviors varies with these factors (Figure 3—figure supplement 4E,G and Figure 3—figure supplement 5B,D).

Theta sequences arise from oscillations in attractor bump speed

Next, we use the firing fields illustrated in Figure 2D to decode the animal’s position from the population activity presented in Figure 3A (Materials and methods). The decoded position generally matches the animal’s actual position (Figure 5A and Figure 5—figure supplement 1), but at the theta timescale, there are deviations whose regularities are revealed by averaging over theta cycles.
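We do not reproduce the paper's decoder here (see Materials and methods); as a sketch, a standard Poisson maximum-likelihood decoder over the recorded neurons' firing fields could take the following form, with the 20 ms decoding window being an assumed value.

```python
import numpy as np

def decode_position(spike_counts, fields, window=0.02):
    """Poisson maximum-likelihood decoding over track positions.
    spike_counts: (n_neurons,) spikes observed in the time window.
    fields: (n_neurons, n_positions) firing-rate maps in spikes/s.
    Returns the normalized likelihood over positions and the ML position index."""
    expected = np.clip(fields * window, 1e-12, None)           # expected counts per bin
    log_like = spike_counts @ np.log(expected) - expected.sum(axis=0)
    log_like -= log_like.max()                                 # for numerical stability
    like = np.exp(log_like)
    return like / like.sum(), int(np.argmax(like))
```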

Figure 5. During runs, the decoded position exhibits theta sequences.

(A) Decoded position for four laps along the track corresponding to Figure 3A. Vertical blue lines indicate theta cycle boundaries. (B) Decoded position shifted by actual position and averaged over theta cycles. The thick black fit line indicates the time window with the steepest increase in decoded position. Vertical thin black lines indicate theta cycle boundaries. (C) Mean actual run speed (error bars indicate s.d. over time) and mean theta sequence speed, as indicated by the slope of the thick black line in B (error bars indicate s.d. over replicate simulations). (D) Difference between the maximum likelihood decoded position from B and the actual animal position as a function of time. Within each theta cycle, the segment from the smallest to the largest value is emphasized as a theta sequence.

Figure 5—figure supplement 1. Decoded position over many runs.

Decoded position over multiple runs corresponding to Figure 3—figure supplement 1. Vertical blue lines indicate theta cycle boundaries. Orange line indicates the actual animal trajectory. (A) Representative simulation exhibiting one field. (B) Representative simulation exhibiting two fields.
Figure 5—figure supplement 2. Decoded position shifted by actual position and averaged over theta cycles for many simulations.

Vertical black lines indicate theta cycle boundaries. (A) Representative simulations exhibiting one field. (B) Representative simulations exhibiting two fields.
Figure 5—figure supplement 3. Dependence of theta sequence speed on number of fields and simulation parameters.

Theta sequence speed divided by mean actual speed for different numbers of fields and parameter values. (A) Dependence of theta speed on number of fields. Box-whisker plot with line at medians, box between first to third quartiles, and whiskers between the entire data range excluding outliers lying more than 1.5 times the interquartile distance beyond the first or third quartile. p-value calculated by the Mann-Whitney U test. (B–E) Theta sequence speed divided by mean actual speed as a function of various simulation parameters. Points at mean values with bars indicating s.d. over replicate simulations. (B) Animal speed and track orientation relative to the E direction, or +X̂. (C) Excitatory drive and recording distance from the center of the neural sheet divided by half the neural sheet size. (D) Inhibitory theta amplitude and inhibitory drive. (E) Inhibitory and excitatory membrane time constants.

To do so, we take quadruplets of consecutive theta cycles, ignoring those whose decoded positions are close to the ends of the track and reversing those corresponding to right-to-left motion (Materials and methods). We align the decoded positions with respect to the actual position of the animal midway through each theta quadruplet, and we average these shifted decoded positions over theta quadruplets. This average decoded trajectory generally increases across the quadruplet, corresponding to the animal’s actual forward motion (Figure 5B and Figure 5—figure supplement 2). However, within each theta cycle, the decoded position increases rapidly, before retreating at cycle boundaries. We identify these forward sweeps as theta sequences, which represent motion at 0.88 ± 0.15 m/s, approximately twice the actual speed of 0.50 ± 0.05 m/s (mean ± s.d.; Figure 5C). This matches experimental observations of average theta sequence speed ≈1 m/s compared to the animal’s speed of ≈0.4–0.5 m/s (O'Neill et al., 2017). We also illustrate these sequences by comparing the maximum likelihood decoded position and the actual position as a function of time (Figure 5D). Averaged over theta cycles, the decoded position lags behind the actual position at cycle boundaries and then advances ahead of it midcycle.
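A sketch of this averaging-and-fitting procedure is given below; the bookkeeping of quadruplet boundaries and the fit-window length are our assumptions, and the paper's exact procedure is specified in Materials and methods.

```python
import numpy as np

def theta_sequence_speed(decoded, actual, quad_bounds, dt, fit_len):
    """Align each theta quadruplet of decoded positions by the actual position at
    its midpoint, average across quadruplets (assumed to span equal numbers of
    samples), and fit a line to the steepest-rising window of fit_len samples;
    the resulting slope estimates the theta sequence speed."""
    segs = [decoded[s:e] - actual[(s + e) // 2] for s, e in quad_bounds]
    mean_seg = np.mean(np.stack(segs), axis=0)
    times = np.arange(fit_len) * dt
    slopes = [np.polyfit(times, mean_seg[i:i + fit_len], 1)[0]
              for i in range(len(mean_seg) - fit_len + 1)]
    return max(slopes)
```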

Theta sequences, like phase precession, arise naturally in our model from an effect of oscillating inhibitory drive on attractor bumps. In addition to determining bump size, inhibitory drive also affects the speed at which bumps move along the neural sheet—higher inhibitory drive produces faster bump motion for a given animal speed (Figure 6A,B). At the middle of theta cycles, inhibitory drive is highest, so attractor bumps move quickly and represent fast animal trajectories. These theta sequences are faster than the animal’s actual speed, which is approximately constant within a theta cycle and thus encoded as the average bump speed over the entire cycle. We can also see the origin of theta sequences directly from the average dynamics of attractor bumps (Figure 4D, Figure 6C, and Figure 4—video 2). The location of peak activity moves with a sawtooth-like pattern that, over a theta cycle, lags behind the overall motion of the bump and advances ahead of it (Figure 6D). These forward surges produce the sequences in decoded position (Figure 5D).

Figure 6. Bump speed oscillations explain theta sequences.

(A,B) Data from simulations with fixed inhibitory drive and constant animal velocity. (A) Snapshots of neural activity. Scale bar, 50 neurons. (B) The speed of bumps on the neural sheet increases linearly with inhibitory drive (linear regression R² = 0.91, ANOVA p < 10⁻⁹). Thus, the average decoded position in Figure 5B advances most rapidly around the middle of the theta cycle. (C) Activity of the average attractor bump depicted in Figure 4D along its central axis. (D) Top, peak activity location from C as a function of time. Bottom, peak activity position relative to constant bump motion as a function of time. Within each theta cycle, the segment from the smallest to the largest value is emphasized and corresponds to a theta sequence in Figure 5D.

Figure 6—figure supplement 1. Relationship between attractor bump size and speed.

Attractor bump size and speed are not generally correlated over changes in multiple simulation parameters, but both are negatively correlated with the amount of inhibition produced by inhibitory neurons. (A) Legend for simulations with fixed inhibitory drive. Each simulation is performed at different values of inhibitory drive a_mag^-, inhibitory synaptic strength w_mag^-, and inhibitory membrane time constant τ_m^-. Other parameters are set at the values used for main simulation runs (Table 1). (B–F) Relationships between pairs of attractor bump properties across parameters displayed in A. Rank correlation computed by Spearman’s ρ with p-values indicated. Attractor bump properties are diameter, speed, the multiunit activity of inhibitory neurons (inhibitory MUA), and a metric for the amount of inhibition produced by inhibitory neurons (inhibitory MUA × w_mag^-). Note that B indicates no overall correlation between bump size and speed, but increasing a_mag^- (following blue–cyan–green) increases bump speed and decreases bump diameter, as seen in Figure 4B and Figure 6B.

Our model’s mechanism for theta sequences can explain why their speed is roughly twice the actual speed, as observed experimentally (O'Neill et al., 2017). Suppose the theta oscillation spends roughly the same amount of time at its peaks, when attractor bumps move at maximal speed, and at its troughs, when bumps do not move at all. Then, the average bump speed, which encodes the actual animal speed, is half the maximal bump speed, which encodes the theta sequence speed. Indeed, simulations whose grid cells contain two fields, whose track is oriented differently, whose trajectory is faster or slower, or whose parameters are chosen differently all have theta sequence speeds between 1.2 and 2.5 times the actual speed (Figure 5—figure supplement 3).
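A back-of-the-envelope version of this argument:

```latex
v_{\mathrm{animal}} \approx \langle v_{\mathrm{bump}} \rangle
  \approx \tfrac{1}{2}\left(v_{\max} + 0\right) = \tfrac{1}{2} v_{\max}
\quad\Longrightarrow\quad
v_{\mathrm{theta\ seq}} \approx v_{\max} \approx 2\, v_{\mathrm{animal}},
```

which is consistent with the measured 0.88 m/s theta sequence speed versus the 0.50 m/s run speed reported above.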

Replay arises from traveling wavefronts of activity

We now simulate idle periods at either end of the track. We would like to know how external inputs to the MEC change at transitions between active and quiescent states, but these measurements have not been performed to our knowledge. Thus, we instead adapt experimental findings taken during slow wave sleep, which resembles the quiescent state in many behavioral and electrophysiological ways (Buzsáki, 1996). The neocortex, which activates MEC principal cells, is broadly less active during slow wave sleep (Steriade et al., 2001), and the medial septum, which inhibits MEC interneurons, increases its firing rate while losing most of its oscillatory nature (Simon et al., 2006). We model these effects respectively as decreased excitatory drive and decreased inhibitory drive without oscillations (Figure 7A). The network rapidly switches between these inputs and those of Figure 1B, which represent the active state, as the animal transitions between quiescence and runs.
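In terms of the parameters in Table 1, this state switch amounts to changing the drive values. A minimal sketch follows; the exact waveform of the oscillating inhibitory drive is an assumption (only its magnitude, theta amplitude, and frequency are taken from Table 1), written so that t = 0 falls at a trough to match the phase convention above.

```python
import numpy as np

def external_drives(state, t, f_theta=8.0):
    """Drive parameters for the two behavioral states, following Table 1: runs use
    a higher excitatory drive and an 8 Hz oscillating inhibitory drive; idle periods
    lower the excitatory drive and remove both the inhibitory drive and its oscillation."""
    if state == "run":
        a_exc_max = 2.0
        a_inh = 0.72 - 0.2 * np.cos(2 * np.pi * f_theta * t)   # trough at t = 0 (assumed waveform)
    else:  # "idle" (quiescence)
        a_exc_max = 1.6
        a_inh = 0.0
    return a_exc_max, a_inh
```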

Figure 7. Lower drive during quiescence produces traveling wavefronts of activity that are decoded as replays.

(A) During idle periods at either end of the track, excitatory and inhibitory drives decrease, and the latter no longer oscillates. (B) Snapshot of neural activity showing S and inhibitory populations separately. Red circles indicate regions of recording. Scale bar, 50 neurons. (C) Grid cell spike rasters for four idle periods. (D) Decoded position corresponding to C. Rapid trajectory sweeps arise from traveling wavefronts as depicted by the S population in B. (E) Replays from one simulation with cyan fit lines. (F) Number of replays across simulations. (G) Mean actual run speed (error bars indicate s.d. over time) and mean replay speed, as indicated by the slopes of cyan lines in E (error bars indicate s.d. over replays).

Figure 7—figure supplement 1. Spike rasters over multiple idle periods.

(A) Representative simulation exhibiting one field. (B) Representative simulation exhibiting two fields.
Figure 7—figure supplement 2. Decoded position over many idle periods.

Decoded position over multiple idle periods corresponding to Figure 7—figure supplement 1. (A) Representative simulation exhibiting one field. (B) Representative simulation exhibiting two fields.
Figure 7—figure supplement 3. Many replays.

Replays with cyan fit lines. (A) Representative replays from simulations exhibiting one field. (B) Representative replays from simulations exhibiting two fields. Note that two or three parallel lines participate in the fit (Appendix 1).
Figure 7—figure supplement 4. Dependence of replay count and speed on number of fields and simulation parameters.

(A–D) Dependence of replay count and speed on number of fields. Box-whisker plot with line at medians, box between first to third quartiles, and whiskers between the entire data range excluding outliers lying more than 1.5 times the interquartile distance beyond the first or third quartile. p-value calculated by the Mann-Whitney U test. (A) Replay count. (B) Replay speed divided by mean actual speed. (C,D) Same as A,B, but we detect replays in simulations exhibiting two fields with the same method used for single fields. (E–H) Replay count and replay speed divided by mean actual speed as functions of various simulation parameters. Points at mean values with bars indicating s.d. over replicate simulations. (E) Velocity gain and animal speed. (F) Excitatory and inhibitory drive. (G) Excitatory-to-excitatory synaptic delay and noise magnitude for both excitatory and inhibitory neurons. (H) Inhibitory and excitatory membrane time constants.

During quiescence, attractor bumps still form a lattice at the center of the neural sheet, where excitatory drive still exceeds threshold (Figure 7B and Video 1). However, these bumps vanish toward the edge of the sheet, where they are replaced by traveling wavefronts of activity. These wavefronts cannot be solely sustained by excitatory drive, which is subthreshold in this outer region; instead, they must be nucleated and then self-sustained through excitatory-to-excitatory connections. In our model, nucleation occurs at activity bumps toward the edge of the sheet (Video 1). Once nucleated, the wavefront propagates freely through the outer region because inhibitory neurons, with low drive, are not activated by the wavefront. In contrast, wavefronts cannot propagate in the center region, because excitatory neurons there receive enough drive to spike vigorously and activate nearby inhibitory neurons (Figure 7B, inhibitory population). These interneurons, with their surround distribution of synaptic outputs (Figure 1A), constrain neural activity to localized bumps and prevent wavefront propagation.

When a wavefront passes through a region of recorded neurons, it can appear as a sequence of spikes (Figure 7C and Figure 7—figure supplement 1) that is decoded as a rapid trajectory (Figure 7D and Figure 7—figure supplement 2). We identify these events as replays, which traverse much of the track over ~100 ms (Figure 7E and Figure 7—figure supplement 3). Due to the stochastic nature of wavefront generation, the number of detected replays can vary considerably across laps (Figure 7—figure supplement 2) and across replicate simulations (Figure 7F). However, replays have a characteristic speed determined by that of wavefront propagation; it is 7.0 ± 2.6 m/s, which is about 14 times faster than the actual speed of 0.50 ± 0.05 m/s (mean ± s.d.; Figure 7G). This agrees with experiments, which measure replays at ≈4–6 m/s and actual motion at ≈0.4–0.5 m/s (O'Neill et al., 2017). Replay speed depends on the number of grid fields and the values of simulation parameters (Figure 7—figure supplement 4). In fact, grid cells with two fields encode two different track positions, so replays among them can have their distribution of decoded position split between parallel lines (Figure 7—figure supplement 3). We have implemented two different detection methods that yield slightly different replay speeds: one that is identical to the single-field case, and another in which multiple lines participate in the fit (Figure 7—figure supplement 4A–D; Appendix 1). Nevertheless, across all simulation parameters and detection methods, replays are always much faster than both the actual animal speed and the theta sequence speed, as observed experimentally (O'Neill et al., 2017). This happens because traveling wavefronts lack inhibition and should be the network phenomenon that propagates fastest.
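A sketch of line-fitting replay detection in this spirit, with illustrative thresholds (the actual detection criteria are given in Appendix 1):

```python
import numpy as np

def detect_replay(times, decoded_pos, track_len, min_span=0.5, min_r2=0.5):
    """Fit decoded position against time within a candidate event and accept it as
    a replay if the fit is good and the decoded trajectory covers a large fraction
    of the track. Thresholds are illustrative, not the paper's criteria.
    Returns (is_replay, speed in m/s)."""
    times, decoded_pos = np.asarray(times), np.asarray(decoded_pos)
    slope, intercept = np.polyfit(times, decoded_pos, 1)
    residuals = decoded_pos - (slope * times + intercept)
    r2 = 1.0 - residuals.var() / (decoded_pos.var() + 1e-12)
    span = decoded_pos.max() - decoded_pos.min()
    return bool(r2 >= min_r2 and span >= min_span * track_len), abs(slope)
```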

Replay properties under different network configurations

Lower velocity gain produces replays with directional bias

Under the main parameter values in Table 1, attractor bumps travel long distances on the neural sheet when the animal moves from one end of the track to another (Figure 2C). These distances are longer than the grid scale, so different positions on the track can correspond to similar locations on the neural sheet, preventing a consistent correspondence between neural sheet location and track position (yellow and purple areas in Figure 2C). With lower velocity gain, attractor bumps travel shorter distances, and such a correspondence is apparent (yellow and purple areas in Figure 8A).

Figure 8. Replays acquire additional properties in networks with different configurations of attractor bumps.

(A–C) With lower velocity gain, neural sheet locations have a clear one-to-one correspondence with track positions, which allows replays to exhibit a directional bias. (A) Left, track diagram with animal idle at 0 cm. Right, corresponding snapshot of neural activity superimposed on a background colored according to the position at which each neuron attains its maximum firing rate (cf. Figure 2C, right panel). Red circles indicate regions of recording. Attractor bumps still represent the animal position, and a wavefront can be seen emanating from one bump at the top right. (B) Direction of replay propagation across simulations. For each actual position and replay direction, width of gray bars indicates the number of simulations with a particular replay count. Means indicated with lines. p-values from the Mann-Whitney U test show that a significantly greater number of replays propagate away from either actual position. (C) Distances between actual position and decoded position at the start or end of replays. Paired medians compared by the Wilcoxon rank-sum test with indicated p-value. (D–G) With periodic boundary conditions and uniform excitatory drive, attractor bumps and activity wavefronts appear throughout the neural sheet. (D,E) Excitatory drive during runs and idle periods. (F,G) Snapshots of neural activity during runs and idle periods showing S and inhibitory populations. Each pixel is a neuron and dark colors indicate recent spiking. Red circles indicate regions of recording. Scale bars, 50 neurons.

Figure 8—figure supplement 1. Replay properties in simulations with lower velocity gain.

(A) Grid cell spike rasters for four idle periods. (B) Decoded position corresponding to A. (C) Replays across multiple simulations with cyan fit lines. (D) Number of replays across simulations. (E) Actual run speed 0.50 ± 0.05 m/s (mean ± s.d. over time) and replay speed 7.4 ± 2.3 m/s (mean ± s.d. over replays).
Figure 8—figure supplement 2. Replay properties in simulations with periodic boundary conditions and uniform excitatory drive.

Same as Figure 8—figure supplement 1, but for simulations with periodic boundary conditions and uniform excitatory drive. Replay speed is 8.0 ± 2.2 m/s (mean ± s.d. over replays).
Figure 8—figure supplement 3. Phase precession and theta sequence properties in simulations with periodic boundary conditions and uniform excitatory drive.

(A) Grid cell spike rasters for four runs. Vertical blue lines indicate theta cycle boundaries. (B) Decoded position corresponding to A. (C) Relationship between animal position and theta phase for representative neurons. Dots represent spikes during runs in the directions indicated by arrows. Lines indicate fit by circular-linear regression. Numbers in each panel from top left to top right indicate magnitudes of correlation coefficients, regression fit scores, and regression slopes. (D–F) Data across all replicate simulations. (D) Magnitudes of circular-linear correlation coefficients. Mean ± s.d.: 0.14 ± 0.09. (E) Fit scores for circular-linear regression. (F) Regression slopes for neurons with fit score >0.4. (G) Decoded position averaged over theta cycles. Thick black line fit to theta sequence. (H) Actual run speed 0.50 ± 0.05 m/s (mean ± s.d. over time) and theta sequence speed 0.90 ± 0.11 m/s (mean ± s.d. over replicate simulations).
Figure 8—video 1. Replays preferentially radiate from attractor bumps encoding the current animal position.
The activity snapshot in Figure 8A corresponds to t = 41.14 s. Left, position of the animal (black square) at 0 cm along a 1D track, whose positions are colored with a gradient. Right, neural activity of the E population superimposed upon a spatial firing map. Each neuron is colored according to the position at which it attains maximum firing rate. Each black pixel represents a spike within the last 5 ms. A wavefront travels through the red recording region located at the top right of the neural sheet. It emanates from an attractor bump sustained by yellow neurons that represent the current position of 0 cm, and it rapidly sweeps through neurons representing progressively farther track positions.
Figure 8—video 2. Neural activity during runs, idle periods, and allocentric corrections for a simulation with periodic boundary conditions and uniform excitatory drive.
Left, position of the animal (black square) along a 1D track. Right, neural activity of the E population. Each pixel is a neuron, with black corresponding to current spikes and lightest gray corresponding to spikes 40 ms ago. Red circles indicate regions of recording.

When an animal is idle at either end of the track, attractor bumps are still located at neurons that represent the current position (Figure 8A and Figure 8—video 1). Recall that replays are nucleated at attractor bumps (Video 1 and Figure 8A). As a wavefront radiates from a bump, it will likely first activate neurons whose firing fields represent nearby positions, followed by those with increasingly distant fields, due to the correspondence between neural sheet locations and track positions (Figure 8—video 1). Thus, the decoded position tends to start near the actual position of the animal and travel away from it (Figure 8B). Accordingly, the decoded starting positions of replays are closer to the actual position than the decoded ending positions (Figure 8C).

However, replays are not all detected immediately after wavefront nucleation; wavefronts traveling for some time may pass through a region of recorded neurons from any direction. Thus, despite the predominance of replays directed away from the actual animal position, some start at the opposite side of the track and travel toward the actual position (Figure 8B). This directional bias has not been directly assessed for grid cells, but it appears in experiments on hippocampal replay (Diba and Buzsáki, 2007; Davidson et al., 2009).

Periodic boundary conditions and uniform drive produce replays throughout the neural sheet

Our main model has nonperiodic boundary conditions, which require excitatory drive to decrease towards the edge of the neural sheet to prevent boundary effects that disrupt path integration (Burak and Fiete, 2009). During idle periods, the persistence of attractor bumps at the center of the sheet maintains a representation of the animal’s position (Figure 7B). With periodic boundaries that eliminate edges, we can implement uniform excitatory drive during both runs and idle periods (Figure 8D,E). This allows the entire neural sheet to exhibit attractor bumps during runs and activity wavefronts during idle periods (Figure 8F,G and Figure 8—video 2). Thus, replays can be recorded from anywhere on the neural sheet. Note that allocentric input at the end of each idle period is essential in this case because positional information in the form of attractor bumps completely disappears. Simulations with periodic boundary conditions and uniform drive produce all the essential replay and theta-related phenomena observed in our main model (Figure 8—figure supplement 2 and Figure 8—figure supplement 3).

Discussion

Summary of our results

Decades of hippocampal experiments have established theta phase precession, theta sequences, and replay as central features of place cell dynamics (O'Keefe, 1976; Buzsáki, 1986; Skaggs et al., 1996; Lee and Wilson, 2002; Foster and Wilson, 2006). Evidence has been accumulating that grid cells, discovered more recently, also exhibit these phenomena (Hafting et al., 2008; Ólafsdóttir et al., 2016; O'Neill et al., 2017). These three phenomena appear to be similar to one another in many ways, but experiments have also demonstrated important discordances. Theta sequences have been understood as the direct consequence of individually phase-precessing neurons (O'Keefe and Recce, 1993; Skaggs et al., 1996). However, place cell experiments show that phase precession can appear without theta sequences (Feng et al., 2015). Theta sequences and replays both represent temporally compressed trajectories, and both seem important for memory consolidation and planning (Foster and Knierim, 2012; Zielinski et al., 2018). Yet, replay trajectories are significantly faster than theta sequences, and theta sequences only propagate in the direction of motion whereas replays can propagate in any direction relative to the resting animal (Davidson et al., 2009; Feng et al., 2015; Zheng et al., 2016; O'Neill et al., 2017).

We unify phase precession, theta sequences, and replay under a single model while maintaining distinctions among them. Phase precession and theta sequences arise naturally from oscillations in attractor bump size and speed, respectively, which are both consequences of medial septum input. However, bump size and speed are not generally correlated across changes in other model parameters (Figure 6—figure supplement 1). Thus, it is conceivable that one theta-related phenomenon can exist without the other under certain conditions. Replays in our model arise from the disappearance of attractor bumps and their replacement by traveling wavefronts of activity. Wavefront propagation involves dynamical processes that are fundamentally different from attractor bump motion, and thus leads to different decoded speeds and directions for replays and theta sequences.

Crucially, we produce these phenomena in a continuous attractor network that can still generate 2D grid cells through path integration. Our implementation shares similarities with an early version of the model (Burak and Fiete, 2009) and a version whose connectivity can be learned (Widloski and Fiete, 2014). However, there are three key differences. First, we use fully spiking neurons, which are rarely used in grid cell attractor models—we have only found them utilized in English (2017) and Bush and Burgess (2014), the latter of which combines an attractor network with the oscillatory interference model for grid cells (Burgess et al., 2007; Hasselmo et al., 2007). Typically, neural dynamics are founded on firing rates, which can be used to generate spike trains through a memoryless Poisson process (Burak and Fiete, 2009; Widloski and Fiete, 2014); these models do not capture the strong short-time anticorrelation in spiking due to the reset of membrane potential. Second, we include excitatory-to-excitatory connections in accordance with their recent experimental discovery (Fuchs et al., 2016; Winterer et al., 2017; Schmidt et al., 2017; Zutshi et al., 2018). These connections allow wavefronts to propagate among excitatory neurons while nearby inhibitory neurons remain silent. Third, we vary the external input to the network between active and quiescent behavioral states in accordance with observations (Steriade et al., 2001; Simon et al., 2006). Consequently, the network can instantly switch between operating regimes that exhibit either theta-related phenomena or replays.

Our model presents a distinct conception of entorhinal replay that can be tested experimentally. Direct observation of propagating wavefronts can be pursued by calcium imaging of the MEC (Heys et al., 2014; Diehl et al., 2019). Otherwise, tetrode recordings of grid cell replay in open field environments may be analyzed for wavefront dynamics. In two dimensions, wavefronts would not represent a single trajectory, but instead a manifold of positions that sweep out a plane through space. New decoding techniques that permit the grid cell population to simultaneously represent inconsistent locations would be required to distinguish between these two possible representations of motion.

Relationships to experiments and other models

We chose to build a continuous attractor model that can produce theta-related behavior and replay with as simple an architecture as possible, but we imagine that more complex network geometries are possible. For example, during quiescence, our model maintains activity bumps at the center of the network and only exhibits replays at the edges; with alternative implementations of excitatory drive, multiple areas may contain persistent attractor bumps that nucleate wavefronts. We have also ignored other known components of the grid system. We do not model multiple grid modules (Stensola et al., 2012; Urdapilleta et al., 2017; Kang and Balasubramanian, 2019). During runs, spike frequency adaptation (Yoshida et al., 2013) would reduce the number of spikes late in the grid field, which could enhance phase precession according to the conceptual model presented in Figure 4 and increase correlation magnitude. During quiescence, the MEC exhibits Up and Down states marked by high and low levels of activity; perhaps transitions between these states, which are associated with CA1 ripples (Hahn et al., 2012), can nucleate multiple wavefronts that propagate in synchrony. Moreover, simulation features used to study gamma oscillations in grid cells (Nolan, 2018; Chrobak and Buzsáki, 1998; Colgin et al., 2009) can be introduced to elucidate the role of gamma during theta-related phenomena and replays (Pfeiffer and Foster, 2015; Ramirez-Villegas et al., 2018).

Our model does not explicitly implement biophysical distinctions between LII and LIII neurons or between stellate and pyramidal cells within LII. These neural subpopulations have different characteristic ranges of phase precession that even vary between experimental reports. Ebbesen et al. (2016) found the median precession range of LII stellate and pyramidal cells to be ≈130° and ≈50°, respectively, compared with the values of ≈110° and ≈170° found by Reifenstein et al. (2016). Meanwhile, LIII grid cells, which often possess conjunctive head direction tuning (Sargolini et al., 2006) like the neurons in our model, predominantly exhibit phase-locking with phase precession range close to 0° according to Hafting et al. (2008). However, Reifenstein et al. (2014) reports small but significant precession in LIII, and single-lap analyses by Ebbesen et al. (2016) find that LIII neurons precess even more than LII pyramidal cells. O'Neill et al. (2017) does not distinguish between these cell types and observes all of these phase behaviors; this heterogeneity is dynamically captured by the excitatory principal cells in our model. Simulations implementing multiple types of principal cells would be required to precisely assign the distinctions in theta phase properties to the appropriate cell type.

Other models have been proposed for theta phase precession (Tsodyks et al., 1996; Thurley et al., 2008; Navratilova et al., 2012; Thurley et al., 2013) and replay (Ponulak and Hopfield, 2013; Azizi et al., 2013; Bayati et al., 2015; Chenkov et al., 2017; Gönner et al., 2017; Haga and Fukai, 2018). For example, one grid cell model uses after-spike depolarization within a 1D continuous attractor network to generate phase precession and theta sequences (Navratilova et al., 2012). This experimentally observed feature pulls attractor bumps back to recently active locations every theta cycle. A place cell model combines an oscillating membrane potential, as found in our model, with synaptic facilitation (Thurley et al., 2008). These features could be added to our model to strengthen theta-related phenomena and potentially increase correlation magnitude, but they are not required. Previously proposed replay models were all intended to address place cells, not grid cells. Several of them encode replay trajectories into synaptic weights, either through hard-wiring or a learning mechanism (Chenkov et al., 2017; Gönner et al., 2017; Haga and Fukai, 2018). Two models have suggested that replays originate from wavefronts of activity propagating through networks of place cells (Ponulak and Hopfield, 2013; Bayati et al., 2015; Palmer et al., 2017). These wavefronts then enhance connections through the network via spike-timing-dependent plasticity rules, which could account for the ability of reward to modulate hippocampal replay (Pfeiffer and Foster, 2013; Ambrose et al., 2016; Palmer and Gong, 2013).

Possible implications

Although phase precession, theta sequences, and replay arise without learning in our model, they are not necessarily epiphenomenal in the sense that they serve no cognitive purpose. Grid cells are thought to provide a spatial representation that is immediately available upon entering an environment and that remains stable over days of experience (Hafting et al., 2005; Leutgeb et al., 2007; Diehl et al., 2019). They provide spatial information to the hippocampus (Cheng and Frank, 2011; Hales et al., 2014), whose place cells associate location with environmental context (Wilson and McNaughton, 1993; Anderson and Jeffery, 2003) and reward (Hollup et al., 2001; Mamad et al., 2017) to aid the animal in pursuing its goals. In this vein, we envision that superficial MEC layers act as a pattern generator for phase precession, theta sequences, and replay, especially in new environments. These phenomena are then presented to the hippocampus to be refined by experience and modulated by reward.

The concept that sequences in hippocampus arise from superficial MEC layers has been experimentally substantiated for theta phenomena: lesioning the MEC disrupts phase precession and theta sequences in place cells (Schlesiger et al., 2015), but grid cell phase precession is not affected by hippocampal inactivation (Hafting et al., 2008). Driving place cell replays with grid cell replays would require temporal coordination between the two, for which experimental support is mixed. O'Neill et al. (2017) does not find coordination between grid cell replays in superficial MEC layers and sharp-wave-ripples (SWRs) in CA1, during which place cell replays appear. However, Ólafsdóttir et al. (2016) and Ólafsdóttir et al. (2017) report that hippocampal replays represent trajectories coherently with grid cell activity in deep MEC layers, and Yamamoto and Tonegawa (2017) reports that SWRs in CA1 are preceded by SWRs in superficial MEC layers. Further investigation would help to elucidate the temporal relationships between hippocampus and MEC. Also, note that we do not exclude the possibility that place cell replays can also be initiated within the hippocampus or by subcortical nuclei (Stark et al., 2014; Schlingloff et al., 2014; Oliva et al., 2016; Angulo-Garcia et al., 2018; Sasaki et al., 2018).

Place cell activity appears to contain an abundance of certain short sequences that represent contiguous locations in novel environments and participate frequently in replays (Liu et al., 2018). This finding is related to the disputed (Silva et al., 2015) observation of hippocampal preplay (Dragoi and Tonegawa, 2011; Grosmark and Buzsáki, 2016; Liu et al., 2019). From the perspective of our model, such preexisting sequences may arise from wavefronts generated intrinsically by grid cells, which then drive corresponding place cells. After experience, hippocampal plasticity supplements the sequences with additional place cells to enhance their representation of meaningful trajectories (Grosmark and Buzsáki, 2016; Drieu et al., 2018). In this way, wavefronts in the MEC can accelerate learning by providing a scaffold for the hippocampus to modify and improve. Meanwhile, the MEC maintains a symmetric representation of space, so the next experience, possibly with different spatial properties, can be learned equally well.

Yet, we emphasize that our model focuses on the mechanistic origins of grid cell sequences and its results hold regardless of whether the suggested relationships between MEC and hippocampus are borne out by future experiments.

Materials and methods

This section provides an overview of our simulation methods. Full details are provided in Appendix 1.

Model architecture and dynamics

Our architecture is inspired by Burak and Fiete (2009) and Widloski and Fiete (2014). A 2D neural sheet contains five overlapping populations, each with neurons at 𝐫 = (x, y), where x = 1, …, n and y = 1, …, n. There are four excitatory populations: N, S, W, and E; we index them by P and use the symbol + for parameters common to all of them. There is one inhibitory population, −.

Each excitatory neuron is described by a membrane potential ϕ_P(𝐫, t) and a spike indicator s_P(𝐫, t), which equals 1 if neuron 𝐫 in population P spiked at time t and 0 otherwise. When the potential exceeds a threshold of 1 in arbitrary units, the neuron fires a spike and its potential is reset to 0:

\phi_P(\mathbf{r},t)\to 0 \quad\text{and}\quad s_P(\mathbf{r},t)\to 1. (1)

We prevent ϕ_P(𝐫, t) from decreasing past −1 as a limit on hyperpolarization. The same definitions and dynamics apply to the inhibitory population −.

Membrane potentials follow leaky integrator dynamics. The excitatory neurons obey

\tau_m^+\,\frac{\phi_P(\mathbf{r},t+\Delta t)-\phi_P(\mathbf{r},t)}{\Delta t}+\phi_P(\mathbf{r},t)=\sum_{P',\mathbf{r}'}w^+(\mathbf{r}-\mathbf{r}'-\xi\hat{\mathbf{e}}_{P'})\,s_{P'}(\mathbf{r}',t-\tau_s^{++})+\sum_{\mathbf{r}'}w^-(\mathbf{r}-\mathbf{r}')\,s^-(\mathbf{r}',t-\tau_s^{+-})+a^+(\mathbf{r})\left[1+\alpha\,\hat{\mathbf{E}}_P\cdot\mathbf{V}(t)\right]+\zeta_P(\mathbf{r},t). (2)

And the inhibitory neurons obey

\tau_m^-\,\frac{\phi^-(\mathbf{r},t+\Delta t)-\phi^-(\mathbf{r},t)}{\Delta t}+\phi^-(\mathbf{r},t)=\sum_{P',\mathbf{r}'}w^+(\mathbf{r}-\mathbf{r}'-\xi\hat{\mathbf{e}}_{P'})\,s_{P'}(\mathbf{r}',t-\tau_s^{-+})+a^-(t)+\zeta^-(\mathbf{r},t). (3)

Neural drive is given by a, which is modulated by the animal’s velocity 𝐕(t) for excitatory neurons. Each excitatory population P prefers a particular direction in space Ê_P, which we call North, South, West, and East for the populations N, S, W, and E. Neural connectivity is given by w. Each excitatory population P has its synaptic outputs shifted by a small amount ξ in a preferred direction ê_P on the neural sheet, which we call up, down, left, and right for the populations N, S, W, and E. Spikes produce an instantaneous change in postsynaptic membrane potential after a synaptic delay τ_s, which is longer for excitatory-to-excitatory connections (τ_s^{++}) than for excitatory-to-inhibitory connections (τ_s^{−+}), based on axon morphology (Schmidt et al., 2017). Independent, zero-mean, normally distributed noise ζ is added to each neuron at each timestep. See Table 1 for complete variable definitions and values.
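As a concrete illustration of these dynamics, the C sketch below advances Equation 2 by one forward-Euler step for a single excitatory neuron, with the delayed recurrent sums, drive, and noise collapsed into a single synaptic input argument; the function, timestep, and membrane time constant are our own illustrative choices rather than an excerpt from the released source code or the Table 1 values.

```c
#include <stdio.h>

/* One forward-Euler step of the leaky-integrator dynamics (Equation 2) for a
 * single excitatory neuron.  `syn_input` stands in for the delayed recurrent
 * sums plus the velocity-modulated drive and noise; computing those terms
 * over the full neural sheet is omitted here. */
static double step_neuron(double phi, double syn_input,
                          double tau_m, double dt, int *spike)
{
    /* leaky integration: tau_m * dphi/dt + phi = syn_input */
    phi += (dt / tau_m) * (syn_input - phi);

    if (phi < -1.0) phi = -1.0;      /* hyperpolarization floor at -1      */

    if (phi > 1.0) {                 /* threshold crossing (Equation 1):   */
        phi = 0.0;                   /* reset the potential ...            */
        *spike = 1;                  /* ... and emit a spike               */
    } else {
        *spike = 0;
    }
    return phi;
}

int main(void)
{
    double phi = 0.0, tau_m = 0.010, dt = 0.0005;   /* illustrative values */
    int spike;
    for (int t = 0; t < 100; t++) {
        phi = step_neuron(phi, 1.5, tau_m, dt, &spike);  /* constant drive */
        if (spike) printf("spike at step %d\n", t);
    }
    return 0;
}
```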

Drive and connectivity

The drive to excitatory populations is

a^+(\mathbf{r})=\begin{cases}a_{\min}^+ + \left(a_{\max}^+ - a_{\min}^+\right)\dfrac{1+\cos(\pi\rho/\rho_a^+)}{2} & \rho<\rho_a^+\\[4pt] a_{\min}^+ & \rho\geq\rho_a^+,\end{cases} (4)

where ρ = √[(x − (n+1)/2)² + (y − (n+1)/2)²] / (n/2) is a scaled radial distance for the neuron at 𝐫 = (x, y). It equals 0 at the center of the neural sheet and approximately one at the midpoint of an edge. The drive to inhibitory subpopulations is

a^-(t)=a_{\mathrm{mag}}^- - a_{\mathrm{th}}^-\cos(2\pi f t+\psi_0), (5)

where ψ0 is a phase offset chosen randomly at the start of each lap.

The synaptic connectivity from excitatory neurons to all neurons is based on the symmetric function

w^+(r)=\begin{cases}w_{\mathrm{mag}}^+\,\dfrac{1+\cos(\pi r/r_w^+)}{2} & r<r_w^+\\[4pt] 0 & r\geq r_w^+,\end{cases} (6)

where r=|𝐫|. The synaptic connectivity from inhibitory neurons to excitatory neurons is

w^-(r)=\begin{cases}-\,w_{\mathrm{mag}}^-\,\dfrac{1-\cos(\pi r/r_w^-)}{2} & r<2r_w^-\\[4pt] 0 & r\geq 2r_w^-.\end{cases} (7)

There is no connectivity from inhibitory neurons to other inhibitory neurons.

See Table 1 for complete variable definitions and values.
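For reference, the piecewise profiles of Equations 4–7 translate directly into short C functions. The parameter values below are placeholders rather than the Table 1 values, and the leading minus sign on w⁻ reflects our reading that inhibitory output must hyperpolarize its targets in Equation 2; this is an illustrative sketch, not an excerpt from the released source code.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Excitatory drive as a function of scaled radial distance rho (Equation 4). */
double a_plus(double rho, double a_min, double a_max, double rho_a)
{
    if (rho >= rho_a) return a_min;
    return a_min + (a_max - a_min) * (1.0 + cos(M_PI * rho / rho_a)) / 2.0;
}

/* Theta-modulated inhibitory drive (Equation 5). */
double a_minus(double t, double a_mag, double a_th, double f, double psi0)
{
    return a_mag - a_th * cos(2.0 * M_PI * f * t + psi0);
}

/* Connectivity from excitatory neurons (Equation 6): local and positive. */
double w_plus(double r, double w_mag, double r_w)
{
    if (r >= r_w) return 0.0;
    return w_mag * (1.0 + cos(M_PI * r / r_w)) / 2.0;
}

/* Connectivity from inhibitory neurons (Equation 7): a nonpositive ring that
 * peaks in magnitude at r = r_w and vanishes at r = 0 and r = 2 r_w. */
double w_minus(double r, double w_mag, double r_w)
{
    if (r >= 2.0 * r_w) return 0.0;
    return -w_mag * (1.0 - cos(M_PI * r / r_w)) / 2.0;
}

int main(void)
{
    for (double r = 0.0; r <= 20.0; r += 5.0)    /* placeholder parameters */
        printf("r = %4.1f  w+ = %6.3f  w- = %6.3f\n",
               r, w_plus(r, 1.0, 8.0), w_minus(r, 1.0, 8.0));
    return 0;
}
```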

Brief allocentric correction

The grid-like pattern of excitatory drive during brief allocentric corrections (Figure 2B) represents inputs from neurons with allocentric responses, such as border, boundary vector, or landmark cells (Evans et al., 2016; Raudies and Hasselmo, 2015; Savelli et al., 2017). Interactions between allocentric neurons and grid cells have also been implemented in other continuous attractor models (Ocko et al., 2018; Keinath et al., 2018). In our model, allocentric input is learned in an accelerated manner during simulation setup. Immediately before the main simulation phase, we simulate one run in each track direction followed by a learning period at either end of the track. The locations of attractor bumps during learning periods are stored by tallying the number of spikes produced at each neural sheet location. These tallies are linearly rescaled to serve as the allocentric drive to excitatory populations.

Single neuron recording and decoding

We simulate our model 10 times with slightly different run trajectories randomly generated by fractional Brownian motion processes, followed by 10 more times with the same trajectories but reversed velocities. These simulations produce spike trains for all neurons in the network. To match multiunit tetrode recordings, we select two sets of up to 150 excitatory neurons in each simulation. Each of these sets is an independent recording to be analyzed separately, which we call a ‘simulation’ in our results.

We select neurons for recording as follows. For the first set, we choose four points on the neural sheet whose distance from the center is 95 neurons and whose angular positions, or clock positions, are random but equally spaced. From circular areas of radius 12 centered at these points, we randomly select up to 150 excitatory neurons whose firing fields are stable across laps. For the second set, we choose the four circular areas with the same distance from the center of the sheet and with angular positions offset 45° from the first four areas. We again select up to 150 neurons with stable firing fields. All single-simulation results reported in the main text come from the same simulation.

We use grid cell firing fields accrued from all laps in both directions to decode the animal’s position from the population activity of its recorded neurons. We use the standard Bayesian decoding procedure that assumes independent, Poisson-distributed spike counts and a uniform prior (Zhang et al., 1998; O'Neill et al., 2017).

Acknowledgements

The authors thank John Widloski, David Foster, Fritz Sommer, Zengyi Li, Guy Isely, Kate Jeffery, Hugo Spiers, and Srdjan Ostojic for stimulating discussions and helpful ideas. Part of this work was performed at the Simons Institute for the Theory of Computing.

Appendix 1

Additional methods

Model details

Setup

We initialize each neuron on the neural sheet to a random membrane potential between 0 and 1. We evolve the network according to Equation 2 and Equation 3 for 500 timesteps without velocity input to generate a rough grid-like pattern of activity. To make the grid more regular and to anneal grid defects, we perform three evolutions with constant velocity input V of magnitude 0.5 m/s and angles π/2 − π/5, 2π/5, and π/4 successively. We then evolve the network for 4 laps without idle periods.

Finally, we learn allocentric inputs as follows. We simulate 2 additional laps with 1000 timesteps of idle periods at each end of the track. During this time, both the excitatory and inhibitory drives are maintained at their run values without theta oscillations. Over the final 500 timesteps at each end, the number of spikes produced at each neural sheet location across the 4 excitatory populations is tallied. These values are linearly rescaled to serve as allocentric inputs to excitatory neurons, with the highest spike count corresponding to a_max^+ and anything below the 5th percentile of nonzero activity corresponding to a_min^+. Then the main simulation phase begins.
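A minimal sketch of the final rescaling step is given below, assuming the spike tallies have already been collected into an array; the function names and the linear mapping between the 5th-percentile floor and the maximum count are our interpretation of the procedure, not the released code.

```c
#include <stdio.h>
#include <stdlib.h>

/* Rescale spike tallies into allocentric drive values: counts at or above the
 * 5th percentile of nonzero tallies map linearly onto [a_min, a_max], with the
 * highest count mapping to a_max; anything below that floor maps to a_min. */

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

void rescale_tallies(const double *count, double *drive, int n,
                     double a_min, double a_max)
{
    double *nz = malloc(n * sizeof *nz);
    int m = 0;
    for (int i = 0; i < n; i++)
        if (count[i] > 0.0) nz[m++] = count[i];
    qsort(nz, m, sizeof *nz, cmp_double);
    double lo = (m > 0) ? nz[(int)(0.05 * (m - 1))] : 0.0;  /* 5th percentile */
    free(nz);

    double hi = 0.0;                                        /* highest count  */
    for (int i = 0; i < n; i++)
        if (count[i] > hi) hi = count[i];

    for (int i = 0; i < n; i++) {
        if (count[i] <= lo || hi <= lo)
            drive[i] = a_min;
        else
            drive[i] = a_min + (a_max - a_min) * (count[i] - lo) / (hi - lo);
    }
}

int main(void)
{
    double count[] = {0, 2, 5, 9, 1, 0, 7};      /* toy tallies              */
    double drive[7];
    rescale_tallies(count, drive, 7, 0.2, 1.4);  /* placeholder drive limits */
    for (int i = 0; i < 7; i++) printf("%.2f ", drive[i]);
    printf("\n");
    return 0;
}
```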

Simulations with periodic boundary conditions

Since annealing grid defects is difficult with periodic boundary conditions, we initialize membrane potentials with a grid-like configuration on the neural sheet as in Burak and Fiete (2009). The grid scale is chosen to be similar to that self-organized without periodic boundary conditions. The rest of simulation setup proceeds similarly as above, except the three evolutions with constant velocity input last only 500 timesteps each.

Parameter values that differ from those in Table 1 are n = 192, a_mag^+ = 1.2 during runs and 0.92 when idle for the uniform excitatory drive, a_max^+ = 1.4 during allocentric corrections, a_mag^- = 0.75 during runs and −0.1 when idle, and a_th^- = 0.15 during runs. Results are shown in Figure 8F,G.

Trajectory

For 2D open field simulations, we use velocity inputs extracted by Burak and Fiete (2009) from a real animal trajectory reported in Hafting et al. (2005).

For 1D track simulations, we generate trajectories as follows. Each trajectory consists of 36 runs alternating in direction, each lasting 1.5 s. Each run is followed by an idle period of 1.5 s and an allocentric correction period of 0.5 s. The velocity angle is π/5, chosen to avoid directions of symmetry in the neural sheet.

In each run, the speed linearly increases from 0 m/s to 0.5 m/s over the first 0.3 s, remains at 0.5 m/s over the next 0.9 s, and linearly decreases from 0.5 m/s to 0 m/s over the last 0.3 s. To the middle 0.9 s of constant speed, we add noise. The noise is generated by a fractional Brownian motion process with drift 0, volatility 1, and Hurst index 0.8. We then add an overall constant at each timestep to make the noise sum to 0 (and thus, to leading order, contribute 0 additional displacement) and introduce an overall scale factor to make its maximum magnitude 0.1 m/s. We add this processed noise to the constant velocity of 0.5 m/s. The initial theta phase of each run is chosen randomly, and the phase increases during runs at a constant rate to span 360° over 125 ms.
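The noise post-processing can be summarized by the short C sketch below, which assumes the raw fractional Brownian motion samples have been generated elsewhere; the function name and toy values are ours, not part of the released code.

```c
#include <math.h>
#include <stdio.h>

/* Post-process the velocity noise: subtract the mean so the noise contributes
 * no net displacement, rescale so its largest magnitude equals max_mag
 * (0.1 m/s in the text), and add it to the constant run speed. */
void add_speed_noise(double *speed, const double *raw_noise, int n,
                     double base_speed, double max_mag)
{
    double mean = 0.0, peak = 0.0;

    for (int i = 0; i < n; i++) mean += raw_noise[i];
    mean /= n;

    for (int i = 0; i < n; i++) {
        double d = fabs(raw_noise[i] - mean);
        if (d > peak) peak = d;
    }

    for (int i = 0; i < n; i++) {
        double zero_mean = raw_noise[i] - mean;
        double scaled = (peak > 0.0) ? max_mag * zero_mean / peak : 0.0;
        speed[i] = base_speed + scaled;          /* e.g. 0.5 m/s plus noise */
    }
}

int main(void)
{
    double raw[] = {0.3, -0.1, 0.2, 0.6, -0.4};  /* stand-in fBm samples    */
    double speed[5];
    add_speed_noise(speed, raw, 5, 0.5, 0.1);
    for (int i = 0; i < 5; i++) printf("%.3f ", speed[i]);
    printf("\n");
    return 0;
}
```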

Excitatory and inhibitory drives change between runs and idle periods according to Table 1. They do so by linearly interpolating between corresponding values over the first 0.3 s of each idle period.

Single neuron recording and position decoding

Selection of neurons for recording

As described in Materials and methods, we select up to 150 neurons total for recording from four equally spaced circular areas on the neural sheet (up to 200 neurons total for simulations with periodic boundary conditions). For the main simulation, these areas are centered 95 neurons from the center of the sheet; for the simulations with lower velocity gain in Figure 8A–C, the distance is 75 neurons; for the simulations with periodic boundary conditions in Figure 8D–G, the distance is 60 neurons. In each area, we randomly choose 30–50 neurons from each excitatory population and compute their firing fields for each run as follows. We partition space into 5 cm bins and compute the firing rate of each bin as the number of spikes divided by the occupancy time; these raw fields are smoothed by a Gaussian filter of width 4 bins. Firing fields for leftward and rightward runs are considered separately. In at least one direction, recorded neurons must have maximum firing rate above 0.5 Hz for all runs, and the normalized correlation, defined as 𝐮 · 𝐯 / (|𝐮| |𝐯|) for two vectors 𝐮 and 𝐯, must be above 0.6 for all pairs of runs.

Firing fields

For 2D open field simulations, we partition space into bins indexed by R = (X, Y) and compute the firing rate of each bin as the number of spikes divided by the occupancy time. These raw fields are smoothed by a Gaussian filter of width 5 bins to produce the field map F(R). We define the normalized autocorrelation map as

C(\mathbf{R}')=\frac{\dfrac{1}{N(\mathbf{R}')}\sum_{\mathbf{R}}F(\mathbf{R})\,F(\mathbf{R}+\mathbf{R}')}{\dfrac{1}{N(\mathbf{0})}\sum_{\mathbf{R}}F(\mathbf{R})\,F(\mathbf{R})}, (8)

where N(𝐑′) is the number of pairs of positions separated by 𝐑′. Autocorrelation maps are shown in Figure 1E.
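A direct transcription of Equation 8 for a small field map is sketched below in C; bin pairs contribute only when both positions fall inside the map, and the toy map and displacement are for illustration only, not part of the released analysis code.

```c
#include <stdio.h>

#define NX 4
#define NY 4

/* Normalized spatial autocorrelation (Equation 8) of a field map F at a
 * displacement (dx, dy), in bins.  N counts the bin pairs that contribute. */
double autocorr(double F[NX][NY], int dx, int dy)
{
    double num = 0.0, denom = 0.0;
    int n_num = 0, n_denom = 0;

    for (int x = 0; x < NX; x++) {
        for (int y = 0; y < NY; y++) {
            denom += F[x][y] * F[x][y];          /* zero-displacement term  */
            n_denom++;
            int xs = x + dx, ys = y + dy;
            if (xs >= 0 && xs < NX && ys >= 0 && ys < NY) {
                num += F[x][y] * F[xs][ys];
                n_num++;
            }
        }
    }
    if (n_num == 0 || denom == 0.0) return 0.0;
    return (num / n_num) / (denom / n_denom);
}

int main(void)
{
    double F[NX][NY] = {{1,0,1,0},{0,1,0,1},{1,0,1,0},{0,1,0,1}};  /* toy map */
    printf("C(1,1) = %.3f\n", autocorr(F, 1, 1));
    return 0;
}
```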

For 1D track simulations, we partition space into 2 cm bins indexed by X and compute the firing rate of each bin as the number of spikes divided by the occupancy time over all laps. These raw fields are smoothed by a Gaussian filter of width 1 bin to produce the field map F(X). Field maps are shown in Figure 2D.

To categorize a simulation as exhibiting 'one field' or 'two fields,' we first order all neurons by the track position at which they are most active. We exclude the middle 40% of all neurons, since we do not expect neurons with central field positions to exhibit two fields. We then shift each field such that its position of peak activity is at 0 cm and calculate the fraction of field activity located more than 18 cm away. If this fraction is greater than 0.1, we consider the simulation as exhibiting two fields. The center of mass of this distant activity, using the absolute value of position, is the distance between fields for the simulation.

Position decoding

We use the standard Bayesian techniques to decode position from the population activity of grid cells (Zhang et al., 1998; O'Neill et al., 2017). Let s_k(t) count the number of spikes fired by the recorded neuron k within a time window of ΔT around t. Each neuron has a field map F_k(X) as described above. For every time t, we wish to calculate the probability that the animal is at position X given the population activity 𝐬(t). We do so through Bayes’s rule, assuming a flat prior and a spiking likelihood that obeys independent Poisson statistics:

p(X\,|\,\mathbf{s}(t))\propto p(\mathbf{s}(t)\,|\,X)=\prod_k p(s_k(t)\,|\,X)\propto\left(\prod_k F_k(X)^{s_k(t)}\right)\exp\!\left(-\Delta T\sum_k F_k(X)\right). (9)

We increment time in steps of 5 ms. For theta-related analysis, we use a sliding decoding window of width ΔT = 20 ms. For analysis of replays, whose timescales are shorter and spikes are more concentrated, we use ΔT = 10 ms and require each window to contain at least two spikes.
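The decoding rule of Equation 9 reduces to a few lines of C when evaluated in log space for numerical stability; the field maps, spike counts, and window length below are made-up examples rather than values from our simulations, and the code is an illustrative sketch, not the released analysis code.

```c
#include <math.h>
#include <stdio.h>

#define N_CELLS 3
#define N_POS   5

/* Bayesian position decoding (Equation 9): with a flat prior and independent
 * Poisson spiking, the posterior over position bins is proportional to
 * prod_k F_k(X)^{s_k} * exp(-dT * sum_k F_k(X)). */
void decode(double F[N_CELLS][N_POS], const int s[N_CELLS],
            double dT, double posterior[N_POS])
{
    double logp[N_POS], maxlog = -1e300, norm = 0.0;

    for (int x = 0; x < N_POS; x++) {
        logp[x] = 0.0;
        for (int k = 0; k < N_CELLS; k++) {
            logp[x] += s[k] * log(F[k][x] + 1e-12);  /* floor avoids log(0) */
            logp[x] -= dT * F[k][x];
        }
        if (logp[x] > maxlog) maxlog = logp[x];
    }
    for (int x = 0; x < N_POS; x++) {               /* normalize to sum 1   */
        posterior[x] = exp(logp[x] - maxlog);
        norm += posterior[x];
    }
    for (int x = 0; x < N_POS; x++) posterior[x] /= norm;
}

int main(void)
{
    double F[N_CELLS][N_POS] = {     /* toy firing-rate maps, Hz */
        {8.0, 4.0, 1.0, 0.5, 0.2},
        {1.0, 6.0, 8.0, 3.0, 0.5},
        {0.2, 1.0, 4.0, 7.0, 9.0}
    };
    int s[N_CELLS] = {0, 1, 2};      /* spike counts in the decoding window */
    double post[N_POS];
    decode(F, s, 0.010, post);       /* 10 ms window */
    for (int x = 0; x < N_POS; x++) printf("%.3f ", post[x]);
    printf("\n");
    return 0;
}
```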

Theta-related analysis

Multiunit activity

To calculate the multiunit activity averaged over theta cycles, we sum spike trains over recorded neurons during each run, extract the multiunit activity of each complete theta cycle, and average these extracts. This theta-modulated multiunit activity is shown in Figure 2E.

Phase precession

We use circular-linear regression and correlation as described in Kempter et al. (2012) to quantify the relationship between theta phase and position within a grid field. For each neuron, we combine spikes across all rightward runs and across all leftward runs, considering each direction separately. For each direction, we identify a separation between grid fields as a 10 cm span without spikes. For each neuron and each direction, we only consider the most central field. We exclude fields that spike within 3 cm from the track ends to avoid edge effects, and also exclude fields containing fewer than 30 spikes or spanning less than 12 cm.

For each field, we record the position X_j and theta phase ψ_j at which each of its spikes occurred. We hypothesize that the phase of each spike is linearly influenced by its position:

\psi_j = 2\pi q X_j + \psi_0, (10)

where q is the phase precession slope and ψ_0 is the phase offset. We wish to fit these parameters by minimizing the total circular distance between true phases and corresponding predicted phases. The circular distance between two phases ψ and θ is defined to be 2[1 − cos(ψ − θ)]. As explained in Kempter et al. (2012), we can transform the minimization over q and ψ_0 into the following maximization:

\hat{q}=\underset{q}{\operatorname{arg\,max}}\sqrt{\left[\frac{1}{S}\sum_j\cos(\psi_j-2\pi qX_j)\right]^2+\left[\frac{1}{S}\sum_j\sin(\psi_j-2\pi qX_j)\right]^2}, (11)

where S is the number of spikes. We perform the maximization numerically over q. The maximum value is called the mean resultant length (or vector strength), and it is used as our fit score. The minimizing value of ψ_0 can then be calculated directly:

\hat{\psi}_0=\arctan\frac{\sum_j\sin(\psi_j-2\pi\hat{q}X_j)}{\sum_j\cos(\psi_j-2\pi\hat{q}X_j)}. (12)

Regression results are shown in Figure 3D, E.

We can also estimate the circular-linear correlation coefficient, which is analogous to the Pearson correlation coefficient for two linear variables. Note that since the marginal distribution of Θ may be uniform, we use the following expression (Kempter et al., 2012):

\hat{\rho}=\frac{\left|\sum_j e^{i(\psi_j-\Theta_j)}\right|-\left|\sum_j e^{i(\psi_j+\Theta_j)}\right|}{2\sqrt{\sum_j\sin^2(\psi_j-\bar{\psi})\,\sum_j\sin^2(\Theta_j-\bar{\Theta})}}, (13)

where Θ_j = 2π|q̂|X_j is a circular variable related to X_j, and ψ̄ = arctan(Σ_j sin ψ_j / Σ_j cos ψ_j) and Θ̄ = arctan(Σ_j sin Θ_j / Σ_j cos Θ_j) are the respective circular means. This correlation coefficient is shown in Figure 3C.
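For readers who wish to reproduce this analysis, the C sketch below implements Equations 11–13 with a brute-force grid search over the slope q; the search range, step size, and synthetic spike data are arbitrary choices of ours rather than the settings used for the analyses in this paper.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Mean resultant length for a candidate slope q (the quantity maximized in
 * Equation 11); psi are spike phases in radians, X spike positions in m. */
static double resultant(const double *X, const double *psi, int S, double q)
{
    double c = 0.0, s = 0.0;
    for (int j = 0; j < S; j++) {
        c += cos(psi[j] - 2.0 * M_PI * q * X[j]);
        s += sin(psi[j] - 2.0 * M_PI * q * X[j]);
    }
    return sqrt(c * c + s * s) / S;
}

int main(void)
{
    /* synthetic field: phase falls linearly across a 30 cm field */
    double X[8] = {0.02, 0.06, 0.10, 0.14, 0.18, 0.22, 0.26, 0.30};
    double psi[8];
    int S = 8;
    for (int j = 0; j < S; j++) psi[j] = 5.5 - 10.0 * X[j];

    /* Equation 11: grid search for the best slope (cycles per meter) */
    double q_hat = 0.0, best = -1.0;
    for (double q = -5.0; q <= 5.0; q += 0.001) {
        double R = resultant(X, psi, S, q);
        if (R > best) { best = R; q_hat = q; }
    }

    /* Equation 12: phase offset (atan2 handles the quadrant) */
    double cs = 0.0, sn = 0.0;
    for (int j = 0; j < S; j++) {
        cs += cos(psi[j] - 2.0 * M_PI * q_hat * X[j]);
        sn += sin(psi[j] - 2.0 * M_PI * q_hat * X[j]);
    }
    double psi0_hat = atan2(sn, cs);

    /* Equation 13: circular-linear correlation with Theta_j = 2*pi*|q_hat|*X_j */
    double re_m = 0, im_m = 0, re_p = 0, im_p = 0;
    double cpsi = 0, spsi = 0, cth = 0, sth = 0;
    for (int j = 0; j < S; j++) {
        double th = 2.0 * M_PI * fabs(q_hat) * X[j];
        re_m += cos(psi[j] - th);  im_m += sin(psi[j] - th);
        re_p += cos(psi[j] + th);  im_p += sin(psi[j] + th);
        cpsi += cos(psi[j]);  spsi += sin(psi[j]);
        cth  += cos(th);      sth  += sin(th);
    }
    double psi_bar = atan2(spsi, cpsi), th_bar = atan2(sth, cth);
    double s1 = 0.0, s2 = 0.0;
    for (int j = 0; j < S; j++) {
        double th = 2.0 * M_PI * fabs(q_hat) * X[j];
        s1 += sin(psi[j] - psi_bar) * sin(psi[j] - psi_bar);
        s2 += sin(th - th_bar) * sin(th - th_bar);
    }
    double rho = (sqrt(re_m * re_m + im_m * im_m) - sqrt(re_p * re_p + im_p * im_p))
                 / (2.0 * sqrt(s1 * s2));

    printf("q_hat = %.3f cycles/m, fit score = %.3f\n", q_hat, best);
    printf("psi0_hat = %.3f rad, rho = %.3f\n", psi0_hat, rho);
    return 0;
}
```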

Normalized bump coordinate

Over the course of a simulation, attractor bumps trace out trails of activity on the neural sheet (Figure 2C). For a neuron at a given sheet location, we quantify how centrally an attractor bump passes through it with a metric called the normalized bump coordinate. We first tally the total number of spikes occurring at each location on the neural sheet across all excitatory populations. This yields an overall firing rate for each sheet location. For a neuron at a given location, we look for the central axis of the nearest attractor bump trail as follows. We find the highest overall firing rate in a window of length 10 neurons and width 4 neurons centered at the given location and oriented with long side perpendicular to the track orientation. The normalized bump coordinate is the ratio between the given location’s overall firing rate and this highest overall firing rate, which should correspond to an attractor bump’s central axis. Bump coordinate results are shown in Figure 3—figure supplement 4G.

Simplified phase precession models

For our conceptual model with bump size oscillations, we imagine an attractor bump moving through a recorded neuron. Although the bump naturally oscillates in speed, we advance the bump with constant speed in the neural sheet; similar results (not shown) are obtained if the speed oscillations depicted in Figure 6B are included. We stipulate that the middle of the bump passes through the recorded neuron at position 0 and time 0. Thus, the 1D profile of the bump is given by the region between

b_\pm(t)=\bar{v}t\pm\left[\bar{b}+\Delta b\cos(2\pi ft-\psi_0)\right], (14)

where b̄ − Δb and b̄ + Δb are the smallest and largest radii of the bump, v̄ is the constant bump velocity, f = 8 Hz, and the phase offset ψ_0 is incremented by π/5 for every new lap. We choose v̄ = 17 neurons/s, b̄ = 3.3 neurons, and Δb = 1.4 neurons, which are close to the values presented in Figure 4B and Figure 6B. Whenever b_−(t) ≤ 0 ≤ b_+(t), the neuron fires a spike, subject to a refractory period of 40 ms. Results are shown in Figure 4C.
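The conceptual model amounts to only a few lines of code. The C sketch below steps Equation 14 forward in time for a single pass of the bump; the parameter values follow the text, while the timestep, loop structure, and phase bookkeeping are our own illustrative choices.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Conceptual phase-precession model (Equation 14): a bump of oscillating
 * radius moves at constant speed past a neuron at sheet position 0, which
 * spikes whenever it lies inside the bump, subject to a refractory period. */
int main(void)
{
    const double v_bar  = 17.0;   /* bump speed, neurons/s           */
    const double b_bar  = 3.3;    /* mean bump radius, neurons       */
    const double db     = 1.4;    /* radius oscillation, neurons     */
    const double f      = 8.0;    /* theta frequency, Hz             */
    const double dt     = 0.001;  /* 1 ms timestep (our choice)      */
    const double refrac = 0.040;  /* refractory period, s            */

    double psi0 = 0.0;            /* incremented by pi/5 each lap in the text */
    double last_spike = -1.0;

    for (double t = -0.5; t <= 0.5; t += dt) {
        double radius = b_bar + db * cos(2.0 * M_PI * f * t - psi0);
        double b_lo = v_bar * t - radius;
        double b_hi = v_bar * t + radius;

        if (b_lo <= 0.0 && 0.0 <= b_hi && t - last_spike >= refrac) {
            double theta_phase = fmod(360.0 * f * t + 3600.0, 360.0);
            printf("spike at t = %+.3f s, theta phase = %6.1f deg\n",
                   t, theta_phase);
            last_spike = t;
        }
    }
    return 0;
}
```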

For our simplified model with average bump dynamics, we first determine the average bump shape as a function of theta phase as described later. As in the conceptual model described immediately above, we imagine the central axis of this average bump moving through a recorded neuron. We simulate 18 laps with evenly spaced initial theta phases. The recorded neuron’s spiking rate is proportional to bump activity. We can linearly rescale bump activity to a desired maximum level across all theta phases; that is, the bump would still exhibit different levels of maximum activity at each theta phase. Spikes are generated with sub-Poisson statistics using the decimation procedure described in Burak and Fiete (2009); we choose the coefficient of variation to be 1/8. We also use a refractory period of 40 ms. Results are shown in Figure 4E, F.

Theta sequences

We visualize theta sequences by averaging the decoded position over theta cycles. From all runs along the track, we take overlapping quadruplets of consecutive theta cycles, discarding those during which the animal comes close to the track ends, to avoid edge effects. We assign time 0 to the middle of each theta quadruplet. We then shift the decoded positions of each quadruplet by the actual position of the animal at time 0, such that the origin corresponds to the actual animal position at the middle of the theta quadruplet. Before averaging, we proceed as in O'Neill et al. (2017) and divide the probability at each timestep by its maximum value. We then separately average each timestep of the quadruplet, ignoring cycles without spikes within that timestep.

To find the slope of the fastest decoded motion within the theta cycle, we use a similar approach to O'Neill et al. (2017). For each time window whose midpoint lies within the third cycle of the averaged theta quadruplet, we find the fit line that captures the most decoded probability. In O'Neill et al. (2017), this involves summing all the probability that lies within a certain number of position bins away from each possible line. Instead of setting such a hard cutoff, we pass the averaged probability distribution at each timestep through a Gaussian filter over position bins. For each candidate line, we sum over the filtered probability of the closest position bin at each timestep; this process is analogous to weighting each bin according to a Gaussian function of its distance to the line and summing the unfiltered probability over all bins. The fit line is the one with the largest total probability. Finally, across all time windows, we choose the fit line with the highest slope to represent the averaged theta sequence. These sequences and the Gaussian-filtered, averaged probability distribution are shown in Figure 5B.

Replay analysis

Replay detection

As in O'Neill et al. (2017), we look for replays within highly synchronous events (HSEs). To detect HSEs, we first smooth the multiunit activity during idle periods with a Gaussian filter, and we remove all zero-valued timesteps. HSEs are time intervals of at least a minimum duration during which the processed multiunit activity exceeds its 20th percentile and contains at least one timestep that exceeds its 80th percentile (or 60th percentile for simulations with periodic boundary conditions).

For each HSE, we look for linear replays by finding the fit line that captures the most decoded probability, as we did for theta sequences. We first pass the probability distribution at each timestep during the HSE through a Gaussian filter whose maximum value is normalized to 1. For each candidate line, we sum over the filtered probability of the closest position bin at each timestep and divide by the number of timesteps that the line spends within track limits. The result is an average probability captured by the line, which we call a replay score. For each HSE, the fit line is the one with the highest replay score. Finally, we exclude HSEs whose fit lines have a replay score less than 0.6 or spend less than 30 ms or 30 cm within track limits. The remaining fit lines are replays. Extracted replays are shown in Figure 7E.

For simulations whose neurons exhibit two fields, the probability distribution of the decoded position may be spread over both fields. Thus, we supplement each candidate line with two more lines of the same slope. These three lines are separated by the distance between grid fields. We sum over the filtered probability of the closest position bins to each of these lines.
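A skeletal version of the line-fitting step is sketched below in C, assuming the Gaussian-filtered, max-normalized posterior has already been computed; the candidate grid is deliberately coarse, the toy posterior is synthetic, and the two-field supplement described above is omitted.

```c
#include <math.h>
#include <stdio.h>

#define T_STEPS 20   /* 5 ms decoding timesteps within the HSE */
#define N_BINS  30   /* 2 cm position bins along the track     */

/* Score candidate replay lines against a filtered posterior P[t][x]: for each
 * line, sum the probability of the closest position bin at each timestep and
 * divide by the number of timesteps the line spends within track limits. */
double best_line(double P[T_STEPS][N_BINS], double *best_x0, double *best_slope)
{
    double best = -1.0;
    for (double x0 = 0.0; x0 < N_BINS; x0 += 0.5) {             /* start bin */
        for (double slope = -3.0; slope <= 3.0; slope += 0.1) { /* bins/step */
            double sum = 0.0;
            int inside = 0;
            for (int t = 0; t < T_STEPS; t++) {
                int x = (int)lround(x0 + slope * t);
                if (x < 0 || x >= N_BINS) continue;             /* off track */
                sum += P[t][x];
                inside++;
            }
            if (inside == 0) continue;
            double score = sum / inside;                     /* replay score */
            if (score > best) { best = score; *best_x0 = x0; *best_slope = slope; }
        }
    }
    return best;
}

int main(void)
{
    /* toy posterior: a ridge advancing one bin per timestep */
    double P[T_STEPS][N_BINS] = {{0}};
    for (int t = 0; t < T_STEPS; t++)
        P[t][t + 5] = 1.0;

    double x0, slope;
    double score = best_line(P, &x0, &slope);
    printf("score = %.2f, start bin = %.1f, slope = %.1f bins/step\n",
           score, x0, slope);
    return 0;
}
```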

Replay properties

The speed of a replay is the magnitude of the slope of its fit line, as reported in Figure 7G.

The start position of a replay is the fit line position at the beginning of its HSE, which is clipped to 0 cm or 60 cm if it lies outside track limits. Similarly, the end position of a replay is the fit line position at the end of its HSE, which is similarly clipped. These positions are reported in Figure 8C.

Simulations with fixed inhibitory drive and constant velocity

Setup and trajectory

These simulations are structured similarly to the main simulations on a 1D track. However, we choose a constant inhibitory drive a_mag^- and eliminate theta oscillations by setting a_th^- = 0. During setup, instead of evolving the network with an animal trajectory, we evolve 1000 timesteps with constant velocity of magnitude 0.5 m/s and angle π/5. For the main simulation, we evolve another 1000 timesteps with the same velocity.

Attractor bump speed

To extract the speed of attractor bumps, we first divide time into 40 ms bins. Within each bin, we sum the spikes at each position on the neural sheet across all excitatory populations, and we pass the result through a Gaussian filter. We thus obtain smoothed activity maps at 40 ms intervals.

We find the seven activity peaks that remain closest to the center of the neural sheet throughout the simulation. For each time bin, we take the average position of all seven peaks, and fit this average motion with a linear model. The norm of the resulting velocity is the bump speed, as reported in Figure 6B.

Attractor bump size

To extract the size of attractor bumps, we return to the raw spike trains, which we register across the course of the simulation by compensating for the overall bump motion. That is, to each spike emitted by an excitatory neuron, we assign a shifted neural sheet position equal to the true position of the neuron minus the bump velocity obtained above multiplied by its spiking time.

Around each of the seven activity peaks identified above, which are stationary after registration, we draw a circle of radius 12 neurons and calculate the center of mass of all enclosed spikes over the entire simulation. This point is the center of the registered activity bump. We then calculate the registered distance of each spike to the center; the 90th percentile is taken to be the radius of the activity bump. Bump diameters averaged over the seven activity peaks are reported in Figure 4B.

Attractor bump shape

To determine the average attractor bump shape during runs, we use periodic simulations with uniform excitatory drive to generate a set of attractor bumps that experience the same drive. We reintroduce oscillations into the inhibitory drive. These simulations use a_mag^+ = 1.2, a_mag^- = 0.7, and a_th^- = 0.15.

We first perform registration, as described above, for 12 attractor bumps. We count the number of spikes across all bumps at each registered location on the neural sheet (using 0.2 neuron bins) as a function of time within the theta cycle (using 1 ms bins). We then apply Gaussian filters of width 3 time bins and 5 position bins to obtain average bump shape as a function of theta phase, as shown in Figure 4D and Figure 4—video 2.

Appendix 2

Simulations without brief allocentric correction

Model details

Simulation setup proceeds similarly to simulations with allocentric correction, except we evolve the network for 6 laps including idle periods after the three evolutions with constant velocity input. We also do not learn allocentric connections.

Without allocentric correction, the main simulation can only be run for 12 laps instead of 36. Each lap consists of a 1.5 s run and a 2.5 s idle period. Excitatory and inhibitory drives linearly interpolate between run and idle values over the first and last 0.3 s of each idle period.

We perform 20 replicate simulations in each direction instead of 10. Parameter values that differ from those in Table 1 are a_mag^- = 0.65 during runs and 0.1 when idle, ξ = 2, α = 0.12 s/m, and Var[ζ_P(𝐫, t)] = Var[ζ^-(𝐫, t)] = 0.0122.

Analysis details

With fewer laps compared to simulations with allocentric correction, we use more lenient cutoffs for both theta-related and replay analysis. During phase relationship analysis, we require a minimum of 15 spikes, instead of 30. We also use a regression fit score cutoff of 0.3, instead of 0.4, below which neurons are considered phase-independent. During replay detection, replays must span a minimum of 20 cm, instead of 30 cm.

Results

In simulations without brief allocentric corrections, many theta-related and replay properties are present at a weaker level. We can only run the simulations for 12 laps, and we must use lower velocity gain so that the attractor bumps do not travel far on the neural sheet (Appendix 2—figure 1A and Appendix 2—video 1). Otherwise, attractor drift will cause the firing fields to change significantly between the start and the end of the simulation.

Appendix 2—figure 1. Theta-related and replay properties in simulations without brief allocentric corrections.


(A) Left, track diagram. Right, neural activity over runs with each neuron colored according to the track position at which it attained maximum firing rate. Red circles indicate regions of recording. (B) Relationship between animal position and theta phase for representative neurons. Clockwise from top left, relationships depicted are phase independence, phase locking, phase precession, and phase precession. Dots represent spikes during runs in the directions indicated by arrows. Lines indicate fit by circular-linear regression. Numbers in each panel from top left to top right indicate magnitudes of correlation coefficients, regression fit scores, and precession ranges. (C–E) Data across all replicate simulations. (C) Magnitudes of circular-linear correlation coefficients. Mean ± s.d.: 0.15 ± 0.11. (D) Fit scores for circular-linear regression. (E) Regression slopes for neurons with fit score >0.3. (F) Decoded position averaged over theta cycles. Thick black line fit to theta sequence. (G) Replays with cyan fit lines. (H) Mean actual run speed (error bars indicate s.d. over time), mean theta sequence speed (error bars indicate s.d. over replicate simulations), and mean replay speed (error bars indicate s.d. over replays). (I) Direction of replay propagation across simulations. Means indicated with lines. p-values from the Mann-Whitney U test show that a significantly greater number of replays propagate away from either actual position. (J) Distances between actual position and decoded position at the start or end of replays. Paired medians compared by the Wilcoxon signed-rank test with indicated p-value. Scale bars, 50 neurons.

Appendix 2—video 1. Neural activity during runs and idle periods during a simulation without allocentric corrections.

Left, position of the animal (black square) along a 1D track. Right, neural activity of the E population. Each pixel is a neuron, with black corresponding to current spikes and lightest gray corresponding to spikes 40 ms ago. Red circles indicate regions of recording.

During runs, our neurons can still be divided into subgroups whose activity exhibits different phase relationships (Appendix 2—figure 1B). The magnitude of circular-linear correlation (Appendix 2—figure 1C; 0.15 ± 0.11, mean ± s.d.) is slightly lower than it is with allocentric correction (Figure 3C; 0.17 ± 0.11, mean ± s.d.). Excluding phase-independent neurons with fit score <0.3 (Appendix 2—figure 1D), we see a large population of phase-locking neurons and a small population of phase-precessing neurons (Appendix 2—figure 1E). If we use a high phase range cutoff of 120° for phase precession, we find that preferred phase predominantly decreases with progress through the grid field, which is the biologically observed direction. By decoding position from spike trains and averaging over theta cycles, we see a sawtooth pattern containing theta sequences (Appendix 2—figure 1F), though the pattern is less organized than it is with allocentric correction (Figure 5B).

During idle periods, traveling wavefronts produce replays (Appendix 2—figure 1G and Appendix 2—video 1). The speeds of theta sequences and replays are 1.1 ± 0.3 m/s and 7.0 ± 2.6 m/s respectively (mean ± s.d.; Appendix 2—figure 1H), which agree well with corresponding speeds in simulations with allocentric correction (0.9 ± 0.2 m/s and 7.0 ± 2.6 m/s), and which also agree well with experiments (O'Neill et al., 2017). As in Figure 8A–C, the lower velocity gain produces a bias in replay direction away from the current animal position (Appendix 2—figure 1I,J).

In summary, after removing allocentric corrections, our model maintains its features involving theta sequences and replays. It still produces some degree of phase precession, but for fewer neurons and larger precession ranges. 

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Louis Kang, Email: louis.kang@berkeley.edu.

Sachin Deshmukh, Indian Institute of Science Bangalore, India.

Laura L Colgin, University of Texas at Austin, United States.

Funding Information

This paper was supported by the following grants:

  • Adolph C. and Mary Sprague Miller Institute for Basic Research in Science, University of California Berkeley Postdoctoral fellowship to Louis Kang.

  • Army Research Office W911NF-13-1-0390 to Michael R DeWeese.

Additional information

Competing interests

No competing interests declared.

Author contributions

Louis Kang, Conceptualization, Software, Investigation, Visualization, Methodology, Writing—original draft, Writing—review and editing.

Michael R DeWeese, Resources, Supervision, Funding acquisition, Visualization, Writing—original draft, Writing—review and editing.

Additional files

Source code 1. Source code for the main simulation written in C.
elife-46351-code1.zip (584.7KB, zip)
Transparent reporting form

Data availability

Source code for the simulations has been included as supporting files.

References

  1. Ambrose RE, Pfeiffer BE, Foster DJ. Reverse replay of hippocampal place cells is uniquely modulated by changing reward. Neuron. 2016;91:1124–1136. doi: 10.1016/j.neuron.2016.07.047.
  2. Anderson MI, Jeffery KJ. Heterogeneous modulation of place cell firing by changes in context. The Journal of Neuroscience. 2003;23:8827–8835. doi: 10.1523/JNEUROSCI.23-26-08827.2003.
  3. Angulo-Garcia D, Ferraris M, Ghestem A. Spatio-temporal organization of cell assemblies in nucleus reuniens during slow oscillations. bioRxiv. 2018. doi: 10.1101/474973.
  4. Azizi AH, Wiskott L, Cheng S. A computational model for preplay in the Hippocampus. Frontiers in Computational Neuroscience. 2013;7:161. doi: 10.3389/fncom.2013.00161.
  5. Bayati M, Valizadeh A, Abbassian A, Cheng S. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation. Frontiers in Computational Neuroscience. 2015;9:e00069. doi: 10.3389/fncom.2015.00069.
  6. Belchior H, Lopes-Dos-Santos V, Tort AB, Ribeiro S. Increase in hippocampal theta oscillations during spatial decision making. Hippocampus. 2014;24:693–702. doi: 10.1002/hipo.22260.
  7. Bonnevie T, Dunn B, Fyhn M, Hafting T, Derdikman D, Kubie JL, Roudi Y, Moser EI, Moser MB. Grid cells require excitatory drive from the Hippocampus. Nature Neuroscience. 2013;16:309–317. doi: 10.1038/nn.3311.
  8. Brandon MP, Bogaard AR, Libby CP, Connerney MA, Gupta K, Hasselmo ME. Reduction of theta rhythm dissociates grid cell spatial periodicity from directional tuning. Science. 2011;332:595–599. doi: 10.1126/science.1201652.
  9. Burak Y, Fiete IR. Accurate path integration in continuous attractor network models of grid cells. PLOS Computational Biology. 2009;5:e1000291. doi: 10.1371/journal.pcbi.1000291.
  10. Burgess N, Barry C, O'Keefe J. An oscillatory interference model of grid cell firing. Hippocampus. 2007;17:801–812. doi: 10.1002/hipo.20327.
  11. Bush D, Burgess N. A hybrid oscillatory interference/continuous attractor network model of grid cell firing. The Journal of Neuroscience. 2014;34:5065–5079. doi: 10.1523/JNEUROSCI.4017-13.2014.
  12. Buzsáki G. Hippocampal sharp waves: their origin and significance. Brain Research. 1986;398:242–252. doi: 10.1016/0006-8993(86)91483-6.
  13. Buzsáki G. The hippocampo-neocortical dialogue. Cerebral Cortex. 1996;6:81–92. doi: 10.1093/cercor/6.2.81.
  14. Buzsáki G. Theta oscillations in the Hippocampus. Neuron. 2002;33:325–340. doi: 10.1016/S0896-6273(02)00586-X.
  15. Buzsáki G, Anastassiou CA, Koch C. The origin of extracellular fields and currents – EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience. 2012;13:407–420. doi: 10.1038/nrn3241.
  16. Chance FS. Hippocampal phase precession from dual input components. Journal of Neuroscience. 2012;32:16693–16703. doi: 10.1523/JNEUROSCI.2786-12.2012.
  17. Cheng S, Frank LM. The structure of networks that produce the transformation from grid cells to place cells. Neuroscience. 2011;197:293–306. doi: 10.1016/j.neuroscience.2011.09.002.
  18. Chenkov N, Sprekeler H, Kempter R. Memory replay in balanced recurrent networks. PLOS Computational Biology. 2017;13:e1005359. doi: 10.1371/journal.pcbi.1005359.
  19. Chrobak JJ, Buzsáki G. High-frequency oscillations in the output networks of the hippocampal-entorhinal Axis of the freely behaving rat. The Journal of Neuroscience. 1996;16:3056–3066. doi: 10.1523/JNEUROSCI.16-09-03056.1996.
  20. Chrobak JJ, Buzsáki G. Gamma oscillations in the entorhinal cortex of the freely behaving rat. The Journal of Neuroscience. 1998;18:388–398. doi: 10.1523/JNEUROSCI.18-01-00388.1998.
  21. Climer JR, Newman EL, Hasselmo ME. Phase coding by grid cells in unconstrained environments: two-dimensional phase precession. European Journal of Neuroscience. 2013;38:2526–2541. doi: 10.1111/ejn.12256.
  22. Colgin LL, Denninger T, Fyhn M, Hafting T, Bonnevie T, Jensen O, Moser MB, Moser EI. Frequency of gamma oscillations routes flow of information in the Hippocampus. Nature. 2009;462:353–357. doi: 10.1038/nature08573.
  23. Couey JJ, Witoelar A, Zhang SJ, Zheng K, Ye J, Dunn B, Czajkowski R, Moser MB, Moser EI, Roudi Y, Witter MP. Recurrent inhibitory circuitry as a mechanism for grid formation. Nature Neuroscience. 2013;16:318–324. doi: 10.1038/nn.3310.
  24. Davidson TJ, Kloosterman F, Wilson MA. Hippocampal replay of extended experience. Neuron. 2009;63:497–507. doi: 10.1016/j.neuron.2009.07.027.
  25. Diba K, Buzsáki G. Forward and reverse hippocampal place-cell sequences during ripples. Nature Neuroscience. 2007;10:1241–1242. doi: 10.1038/nn1961.
  26. Diehl GW, Hon OJ, Leutgeb S, Leutgeb JK. Stability of medial entorhinal cortex representations over time. Hippocampus. 2019;29:284–302. doi: 10.1002/hipo.23017.
  27. Dragoi G, Tonegawa S. Preplay of future place cell sequences by hippocampal cellular assemblies. Nature. 2011;469:397–401. doi: 10.1038/nature09633.
  28. Drieu C, Todorova R, Zugaro M. Nested sequences of hippocampal assemblies during behavior support subsequent sleep replay. Science. 2018;362:675–679. doi: 10.1126/science.aat2952.
  29. Dunn B, Mørreaunet M, Roudi Y. Correlations and functional connections in a population of grid cells. PLOS Computational Biology. 2015;11:e1004052. doi: 10.1371/journal.pcbi.1004052.
  30. Ebbesen CL, Reifenstein ET, Tang Q, Burgalossi A, Ray S, Schreiber S, Kempter R, Brecht M. Cell Type-Specific differences in spike timing and spike shape in the rat parasubiculum and superficial medial entorhinal cortex. Cell Reports. 2016;16:1005–1015. doi: 10.1016/j.celrep.2016.06.057.
  31. English G. Switzerland: ETH Zurich and University of Zurich; 2017. Spiking Neural Network Models for the Emergence of Patterned Activity in Grid Cell Populations.
  32. Evans T, Bicanski A, Bush D, Burgess N. How environment and self-motion combine in neural representations of space. The Journal of Physiology. 2016;594:6535–6546. doi: 10.1113/JP270666.
  33. Feng T, Silva D, Foster DJ. Dissociation between the experience-dependent development of hippocampal theta sequences and single-trial phase precession. Journal of Neuroscience. 2015;35:4890–4902. doi: 10.1523/JNEUROSCI.2614-14.2015.
  34. Foster DJ. Replay comes of age. Annual Review of Neuroscience. 2017;40:581–602. doi: 10.1146/annurev-neuro-072116-031538.
  35. Foster DJ, Knierim JJ. Sequence learning and the role of the Hippocampus in rodent navigation. Current Opinion in Neurobiology. 2012;22:294–300. doi: 10.1016/j.conb.2011.12.005.
  36. Foster DJ, Wilson MA. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature. 2006;440:680–683. doi: 10.1038/nature04587.
  37. Fuchs EC, Neitz A, Pinna R, Melzer S, Caputi A, Monyer H. Local and distant input controlling excitation in layer II of the medial entorhinal cortex. Neuron. 2016;89:194–208. doi: 10.1016/j.neuron.2015.11.029.
  38. Gönner L, Vitay J, Hamker FH. Predictive Place-Cell sequences for Goal-Finding emerge from goal memory and the cognitive map: a computational model. Frontiers in Computational Neuroscience. 2017;11:84. doi: 10.3389/fncom.2017.00084.
  39. Gonzalez-Sulser A, Parthier D, Candela A, McClure C, Pastoll H, Garden D, Sürmeli G, Nolan MF. GABAergic projections from the medial septum selectively inhibit interneurons in the medial entorhinal cortex. Journal of Neuroscience. 2014;34:16739–16743. doi: 10.1523/JNEUROSCI.1612-14.2014.
  40. Grosmark AD, Buzsáki G. Diversity in neural firing dynamics supports both rigid and learned hippocampal sequences. Science. 2016;351:1440–1443. doi: 10.1126/science.aad1935.
  41. Gu Y, Lewallen S, Kinkhabwala AA, Domnisoru C, Yoon K, Gauthier JL, Fiete IR, Tank DW. A Map-like Micro-Organization of grid cells in the medial entorhinal cortex. Cell. 2018;175:736–750. doi: 10.1016/j.cell.2018.08.066.
  42. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI. Microstructure of a spatial map in the entorhinal cortex. Nature. 2005;436:801–806. doi: 10.1038/nature03721.
  43. Hafting T, Fyhn M, Bonnevie T, Moser MB, Moser EI. Hippocampus-independent phase precession in entorhinal grid cells. Nature. 2008;453:1248–1252. doi: 10.1038/nature06957.
  44. Haga T, Fukai T. Recurrent network model for learning goal-directed sequences through reverse replay. eLife. 2018;7:e34171. doi: 10.7554/eLife.34171.
  45. Hahn TT, McFarland JM, Berberich S, Sakmann B, Mehta MR. Spontaneous persistent activity in entorhinal cortex modulates cortico-hippocampal interaction in vivo. Nature Neuroscience. 2012;15:1531–1538. doi: 10.1038/nn.3236.
  46. Hales JB, Schlesiger MI, Leutgeb JK, Squire LR, Leutgeb S, Clark RE. Medial entorhinal cortex lesions only partially disrupt hippocampal place cells and hippocampus-dependent place memory. Cell Reports. 2014;9:893–901. doi: 10.1016/j.celrep.2014.10.009.
  47. Hardcastle K, Ganguli S, Giocomo LM. Environmental boundaries as an error correction mechanism for grid cells. Neuron. 2015;86:827–839. doi: 10.1016/j.neuron.2015.03.039.
  48. Hasselmo ME, Giocomo LM, Zilli EA. Grid cell firing may arise from interference of theta frequency membrane potential oscillations in single neurons. Hippocampus. 2007;17:1252–1271. doi: 10.1002/hipo.20374.
  49. Heys JG, Rangarajan KV, Dombeck DA. The functional micro-organization of grid cells revealed by cellular-resolution imaging. Neuron. 2014;84:1079–1090. doi: 10.1016/j.neuron.2014.10.048.
  50. Hollup SA, Molden S, Donnett JG, Moser MB, Moser EI. Accumulation of hippocampal place fields at the goal location in an annular watermaze task. The Journal of Neuroscience. 2001;21:1635–1644. doi: 10.1523/JNEUROSCI.21-05-01635.2001.
  51. Jeewajee A, Barry C, Douchamps V, Manson D, Lever C, Burgess N. Theta phase precession of grid and place cell firing in open environments. Philosophical Transactions of the Royal Society B: Biological Sciences. 2014;369:20120532. doi: 10.1098/rstb.2012.0532.
  52. Johnson A, Redish AD. Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. Journal of Neuroscience. 2007;27:12176–12189. doi: 10.1523/JNEUROSCI.3761-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Kang L, Balasubramanian V. A geometric attractor mechanism for self-organization of entorhinal grid modules. eLife. 2019;8:e46687. doi: 10.7554/eLife.46687. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Keinath AT, Epstein RA, Balasubramanian V. Environmental deformations dynamically shift the grid cell spatial metric. eLife. 2018;7:e38169. doi: 10.7554/eLife.38169. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Kempter R, Leibold C, Buzsáki G, Diba K, Schmidt R. Quantifying circular-linear associations: hippocampal phase precession. Journal of Neuroscience Methods. 2012;207:113–124. doi: 10.1016/j.jneumeth.2012.03.007. [DOI] [PubMed] [Google Scholar]
  56. Kerr KM, Agster KL, Furtak SC, Burwell RD. Functional neuroanatomy of the parahippocampal region: the lateral and medial entorhinal Areas. Hippocampus. 2007;17:697–708. doi: 10.1002/hipo.20315. [DOI] [PubMed] [Google Scholar]
  57. Koenig J, Linder AN, Leutgeb JK, Leutgeb S. The spatial periodicity of grid cells is not sustained during reduced theta oscillations. Science. 2011;332:592–595. doi: 10.1126/science.1201685. [DOI] [PubMed] [Google Scholar]
  58. Lee AK, Wilson MA. Memory of sequential experience in the Hippocampus during slow wave sleep. Neuron. 2002;36:1183–1194. doi: 10.1016/S0896-6273(02)01096-6. [DOI] [PubMed] [Google Scholar]
  59. Leutgeb JK, Leutgeb S, Moser MB, Moser EI. Pattern separation in the dentate gyrus and CA3 of the Hippocampus. Science. 2007;315:961–966. doi: 10.1126/science.1135801. [DOI] [PubMed] [Google Scholar]
  60. Liu K, Sibille J, Dragoi G. Generative predictive codes by multiplexed hippocampal neuronal tuplets. Neuron. 2018;99:1329–1341. doi: 10.1016/j.neuron.2018.07.047. [DOI] [PubMed] [Google Scholar]
  61. Liu K, Sibille J, Dragoi G. Preconfigured patterns are the primary driver of offline multi-neuronal sequence replay. Hippocampus. 2019;29:275–283. doi: 10.1002/hipo.23034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Mamad O, Stumpp L, McNamara HM, Ramakrishnan C, Deisseroth K, Reilly RB, Tsanov M. Place field assembly distribution encodes preferred locations. PLOS Biology. 2017;15:e2002365. doi: 10.1371/journal.pbio.2002365. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Middleton SJ, Kneller EM, Chen S, Ogiwara I, Montal M, Yamakawa K, McHugh TJ. Altered hippocampal replay is associated with memory impairment in mice heterozygous for the Scn2a gene. Nature Neuroscience. 2018;21:996–1003. doi: 10.1038/s41593-018-0163-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Navratilova Z, Giocomo LM, Fellous JM, Hasselmo ME, McNaughton BL. Phase precession and variable spatial scaling in a periodic attractor map model of medial entorhinal grid cells with realistic after-spike dynamics. Hippocampus. 2012;22:772–789. doi: 10.1002/hipo.20939. [DOI] [PubMed] [Google Scholar]
  65. Nolan M. A Model for Grid Firing and Theta-Nested Gamma Oscillations in Layer 2 of the Medial Entorhinal Cortex. In: Hippocampal Microcircuits. Springer; 2018. pp. 567–584. [DOI] [Google Scholar]
  66. O'Keefe J. Place units in the Hippocampus of the freely moving rat. Experimental Neurology. 1976;51:78–109. doi: 10.1016/0014-4886(76)90055-8. [DOI] [PubMed] [Google Scholar]
  67. O'Keefe J, Dostrovsky J. The Hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research. 1971;34:171–175. doi: 10.1016/0006-8993(71)90358-1. [DOI] [PubMed] [Google Scholar]
  68. O'Keefe J, Recce ML. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus. 1993;3:317–330. doi: 10.1002/hipo.450030307. [DOI] [PubMed] [Google Scholar]
  69. O'Neill J, Boccara CN, Stella F, Schoenenberger P, Csicsvari J. Superficial layers of the medial entorhinal cortex replay independently of the Hippocampus. Science. 2017;355:184–188. doi: 10.1126/science.aag2787. [DOI] [PubMed] [Google Scholar]
  70. Ocko SA, Hardcastle K, Giocomo LM, Ganguli S. Emergent elasticity in the neural code for space. PNAS. 2018;115:E11798. doi: 10.1073/pnas.1805959115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Ólafsdóttir HF, Carpenter F, Barry C. Coordinated grid and place cell replay during rest. Nature Neuroscience. 2016;19:792–794. doi: 10.1038/nn.4291. [DOI] [PubMed] [Google Scholar]
  72. Ólafsdóttir HF, Carpenter F, Barry C. Task demands predict a dynamic switch in the content of awake hippocampal replay. Neuron. 2017;96:925–935. doi: 10.1016/j.neuron.2017.09.035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Ólafsdóttir HF, Bush D, Barry C. The role of hippocampal replay in memory and Planning. Current Biology. 2018;28:R37–R50. doi: 10.1016/j.cub.2017.10.073. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Oliva A, Fernández-Ruiz A, Buzsáki G, Berényi A. Role of hippocampal CA2 region in triggering Sharp-Wave ripples. Neuron. 2016;91:1342–1355. doi: 10.1016/j.neuron.2016.08.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Palmer J, Keane A, Gong P. Learning and executing goal-directed choices by internally generated sequences in spiking neural circuits. PLOS Computational Biology. 2017;13:e1005669. doi: 10.1371/journal.pcbi.1005669. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Palmer JHC, Gong P. Formation and regulation of dynamic patterns in Two-Dimensional spiking neural circuits with Spike-Timing-Dependent plasticity. Neural Computation. 2013;25:2833–2857. doi: 10.1162/NECO_a_00511. [DOI] [PubMed] [Google Scholar]
  77. Pfeiffer BE, Foster DJ. Hippocampal place-cell sequences depict future paths to remembered goals. Nature. 2013;497:74–79. doi: 10.1038/nature12112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Pfeiffer BE, Foster DJ. Autoassociative dynamics in the generation of sequences of hippocampal place cells. Science. 2015;349:180–183. doi: 10.1126/science.aaa9633. [DOI] [PubMed] [Google Scholar]
  79. Ponulak F, Hopfield JJ. Rapid, parallel path planning by propagating wavefronts of spiking neural activity. Frontiers in Computational Neuroscience. 2013;7:98. doi: 10.3389/fncom.2013.00098. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Ramirez-Villegas JF, Willeke KF, Logothetis NK, Besserve M. Dissecting the synapse- and Frequency-Dependent network mechanisms of in Vivo Hippocampal Sharp Wave-Ripples. Neuron. 2018;100:1224–1240. doi: 10.1016/j.neuron.2018.09.041. [DOI] [PubMed] [Google Scholar]
  81. Raudies F, Hasselmo ME. Differences in Visual-Spatial input may underlie different compression properties of firing fields for grid cell modules in medial entorhinal cortex. PLOS Computational Biology. 2015;11:e1004596. doi: 10.1371/journal.pcbi.1004596. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Reifenstein E, Stemmler M, Herz AV, Kempter R, Schreiber S. Movement dependence and layer specificity of entorhinal phase precession in two-dimensional environments. PLOS ONE. 2014;9:e100638. doi: 10.1371/journal.pone.0100638. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Reifenstein ET, Ebbesen CL, Tang Q, Brecht M, Schreiber S, Kempter R. Cell-Type specific phase precession in layer II of the medial entorhinal cortex. The Journal of Neuroscience. 2016;36:2283–2288. doi: 10.1523/JNEUROSCI.2986-15.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Sargolini F, Fyhn M, Hafting T, McNaughton BL, Witter MP, Moser MB, Moser EI. Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science. 2006;312:758–762. doi: 10.1126/science.1125572. [DOI] [PubMed] [Google Scholar]
  85. Sasaki T, Piatti VC, Hwaun E, Ahmadi S, Lisman JE, Leutgeb S, Leutgeb JK. Dentate network activity is necessary for spatial working memory by supporting CA3 sharp-wave ripple generation and prospective firing of CA3 neurons. Nature Neuroscience. 2018;21:258–269. doi: 10.1038/s41593-017-0061-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Savelli F, Luck JD, Knierim JJ. Framing of grid cells within and beyond navigation boundaries. eLife. 2017;6:e21354. doi: 10.7554/eLife.21354. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Schlesiger MI, Cannova CC, Boublil BL, Hales JB, Mankin EA, Brandon MP, Leutgeb JK, Leibold C, Leutgeb S. The medial entorhinal cortex is necessary for temporal organization of hippocampal neuronal activity. Nature Neuroscience. 2015;18:1123–1132. doi: 10.1038/nn.4056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Schlingloff D, Káli S, Freund TF, Hájos N, Gulyás AI. Mechanisms of sharp wave initiation and ripple generation. Journal of Neuroscience. 2014;34:11385–11398. doi: 10.1523/JNEUROSCI.0867-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Schmidt H, Gour A, Straehle J, Boergens KM, Brecht M, Helmstaedter M. Axonal synapse sorting in medial entorhinal cortex. Nature. 2017;549:469–475. doi: 10.1038/nature24005. [DOI] [PubMed] [Google Scholar]
  90. Silva D, Feng T, Foster DJ. Trajectory events across hippocampal place cells require previous experience. Nature Neuroscience. 2015;18:1772–1779. doi: 10.1038/nn.4151. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Simon AP, Poindessous-Jazat F, Dutar P, Epelbaum J, Bassant MH. Firing properties of anatomically identified neurons in the medial septum of anesthetized and unanesthetized restrained rats. Journal of Neuroscience. 2006;26:9038–9046. doi: 10.1523/JNEUROSCI.1401-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Singer AC, Carr MF, Karlsson MP, Frank LM. Hippocampal SWR activity predicts correct decisions during the initial learning of an alternation task. Neuron. 2013;77:1163–1173. doi: 10.1016/j.neuron.2013.01.027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Skaggs WE, McNaughton BL, Wilson MA, Barnes CA. Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus. 1996;6:149–172. doi: 10.1002/(SICI)1098-1063(1996)6:2<149::AID-HIPO6>3.0.CO;2-K. [DOI] [PubMed] [Google Scholar]
  94. Stark E, Roux L, Eichler R, Senzai Y, Royer S, Buzsáki G. Pyramidal cell-interneuron interactions underlie hippocampal ripple oscillations. Neuron. 2014;83:467–480. doi: 10.1016/j.neuron.2014.06.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Stensola H, Stensola T, Solstad T, Frøland K, Moser MB, Moser EI. The entorhinal grid map is discretized. Nature. 2012;492:72–78. doi: 10.1038/nature11649. [DOI] [PubMed] [Google Scholar]
  96. Steriade M, Timofeev I, Grenier F. Natural waking and sleep states: a view from inside neocortical neurons. Journal of Neurophysiology. 2001;85:1969–1985. doi: 10.1152/jn.2001.85.5.1969. [DOI] [PubMed] [Google Scholar]
  97. Thurley K, Leibold C, Gundlfinger A, Schmitz D, Kempter R. Phase precession through synaptic facilitation. Neural Computation. 2008;20:1285–1324. doi: 10.1162/neco.2008.07-06-292. [DOI] [PubMed] [Google Scholar]
  98. Thurley K, Hellmundt F, Leibold C. Phase precession of grid cells in a network model without external pacemaker. Hippocampus. 2013;23:786–796. doi: 10.1002/hipo.22133. [DOI] [PubMed] [Google Scholar]
  99. Trimper JB, Trettel SG, Hwaun E, Colgin LL. Methodological caveats in the detection of coordinated replay between place cells and grid cells. Frontiers in Systems Neuroscience. 2017;11:57. doi: 10.3389/fnsys.2017.00057. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Tsodyks MV, Skaggs WE, Sejnowski TJ, McNaughton BL. Population dynamics and theta rhythm phase precession of hippocampal place cell firing: a spiking neuron model. Hippocampus. 1996;6:271–280. doi: 10.1002/(SICI)1098-1063(1996)6:3<271::AID-HIPO5>3.0.CO;2-Q. [DOI] [PubMed] [Google Scholar]
  101. Unal G, Joshi A, Viney TJ, Kis V, Somogyi P. Synaptic targets of medial septal projections in the Hippocampus and extrahippocampal cortices of the mouse. Journal of Neuroscience. 2015;35:15812–15826. doi: 10.1523/JNEUROSCI.2639-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Urdapilleta E, Si B, Treves A. Selforganization of modular activity of grid cells. Hippocampus. 2017;27:1204–1213. doi: 10.1002/hipo.22765. [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. Vanderwolf CH. Hippocampal electrical activity and voluntary movement in the rat. Electroencephalography and Clinical Neurophysiology. 1969;26:407–418. doi: 10.1016/0013-4694(69)90092-3. [DOI] [PubMed] [Google Scholar]
  104. Vinogradova OS. Expression, control, and probable functional significance of the neuronal theta-rhythm. Progress in Neurobiology. 1995;45:523–583. doi: 10.1016/0301-0082(94)00051-I. [DOI] [PubMed] [Google Scholar]
  105. Widloski J, Fiete IR. A model of grid cell development through spatial exploration and spike time-dependent plasticity. Neuron. 2014;83:481–495. doi: 10.1016/j.neuron.2014.06.018. [DOI] [PubMed] [Google Scholar]
  106. Wikenheiser AM, Redish AD. Hippocampal theta sequences reflect current goals. Nature Neuroscience. 2015a;18:289–294. doi: 10.1038/nn.3909. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Wikenheiser AM, Redish AD. Decoding the cognitive map: ensemble hippocampal sequences and decision making. Current Opinion in Neurobiology. 2015b;32:8–15. doi: 10.1016/j.conb.2014.10.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Wilson MA, McNaughton BL. Dynamics of the hippocampal ensemble code for space. Science. 1993;261:1055–1058. doi: 10.1126/science.8351520. [DOI] [PubMed] [Google Scholar]
  109. Winterer J, Maier N, Wozny C, Beed P, Breustedt J, Evangelista R, Peng Y, D'Albis T, Kempter R, Schmitz D. Excitatory microcircuits within superficial layers of the medial entorhinal cortex. Cell Reports. 2017;19:1110–1116. doi: 10.1016/j.celrep.2017.04.041. [DOI] [PubMed] [Google Scholar]
  110. Wu C-T, Haggerty D, Kemere C, Ji D. Hippocampal awake replay in fear memory retrieval. Nature Neuroscience. 2017;20:571–580. doi: 10.1038/nn.4507. [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Wu X, Foster DJ. Hippocampal replay captures the unique topological structure of a novel environment. Journal of Neuroscience. 2014;34:6459–6469. doi: 10.1523/JNEUROSCI.3414-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. Xu H, Baracskay P, O'Neill J, Csicsvari J. Assembly responses of hippocampal CA1 place cells predict learned behavior in Goal-Directed spatial tasks on the radial Eight-Arm maze. Neuron. 2019;101:119–132. doi: 10.1016/j.neuron.2018.11.015. [DOI] [PubMed] [Google Scholar]
  113. Yamaguchi Y, Aota Y, McNaughton BL, Lipa P. Bimodality of theta phase precession in hippocampal place cells in freely running rats. Journal of Neurophysiology. 2002;87:2629–2642. doi: 10.1152/jn.2002.87.6.2629. [DOI] [PubMed] [Google Scholar]
  114. Yamamoto J, Tonegawa S. Direct medial entorhinal cortex input to hippocampal CA1 is crucial for extended quiet awake replay. Neuron. 2017;96:217–227. doi: 10.1016/j.neuron.2017.09.017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Yoon K, Buice MA, Barry C, Hayman R, Burgess N, Fiete IR. Specific evidence of low-dimensional continuous attractor dynamics in grid cells. Nature Neuroscience. 2013;16:1077–1084. doi: 10.1038/nn.3450. [DOI] [PMC free article] [PubMed] [Google Scholar]
  116. Yoshida M, Jochems A, Hasselmo ME. Comparison of properties of medial entorhinal cortex layer II neurons in two anatomical dimensions with and without cholinergic activation. PLOS ONE. 2013;8:e73904. doi: 10.1371/journal.pone.0073904. [DOI] [PMC free article] [PubMed] [Google Scholar]
  117. Zhang K, Ginzburg I, McNaughton BL, Sejnowski TJ. Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells. Journal of Neurophysiology. 1998;79:1017–1044. doi: 10.1152/jn.1998.79.2.1017. [DOI] [PubMed] [Google Scholar]
  118. Zheng C, Bieri KW, Hsiao YT, Colgin LL. Spatial sequence coding differs during slow and fast gamma rhythms in the Hippocampus. Neuron. 2016;89:398–408. doi: 10.1016/j.neuron.2015.12.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  119. Zielinski MC, Tang W, Jadhav SP. The role of replay and theta sequences in mediating hippocampal-prefrontal interactions for memory and cognition. Hippocampus. 2018;91:1–13. doi: 10.1002/hipo.22821. [DOI] [PMC free article] [PubMed] [Google Scholar]
  120. Zutshi I, Fu ML, Lilascharoen V, Leutgeb JK, Lim BK, Leutgeb S. Recurrent circuits within medial entorhinal cortex superficial layers support grid cell firing. Nature Communications. 2018;9:3701. doi: 10.1038/s41467-018-06104-5. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision letter

Editor: Sachin Deshmukh1
Reviewed by: Sachin Deshmukh2, Tad Blair3

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

Continuous attractor networks have been used to simulate grid cells. This paper shows that the continuous attractors can be easily adapted to show other properties of the entorhinal-hippocampal network, namely phase precession, theta sequences during behavior and replay sequences during rest. This detailed and comprehensive modelling study shows that when the continuous attractor network is coupled with theta-modulated inhibition, the attractor bumps expand and contract, leading to theta phase precession and theta sequences. Reduction of external drive leads to spontaneous replay-like sequences. These results are of significant interest to both theoreticians and experimentalists interested in temporal phenomena in the grid cell network.

Decision letter after peer review:

Thank you for sending your article entitled "Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network" for peer review at eLife. Your article is being evaluated by three peer reviewers, and the evaluation is being overseen by a Reviewing Editor and Laura Colgin as the Senior Editor.

Given the list of essential revisions, including major changes to the model and extensive new analyses, the editors and reviewers invite you to respond within the next two weeks with an action plan and timetable for the completion of the additional work. The action plan should contain a similar amount of detail as a traditional rebuttal letter. We plan to share your responses with the reviewers and then issue a binding recommendation.

Essential revisions:

1) The non-periodic boundary conditions and non-uniform driving inputs seem like rather odd design choices. This raises some concern that there may be unexplained reasons for these choices (that is, the model's behavior may depend upon these choices in ways that are not spelled out). The model should be re-implemented using periodic topology, either across the entire manuscript or in additional comparison figures.

2) Phase precession in this model may differ from experimentally observed phase precession in some fundamental ways. The Hafting et al., 2008, study shows that layer II MEC phase precession is highly stereotyped, with typical entry/exit phases of 222º/60º, corresponding to the total ~160º precession. Further, Hafting reported typical slopes of about -3º/cm for 30-70 cm grid spacings; "correcting" this value by mean([30,70])/120 for the larger ~120 cm spacing of the modeled grids (Figure 1E) suggests modeled phase slopes should be on the order of -1.25º/cm. But the authors show the slope ranging up to -30º/cm; this range seems clearly too high. It could be that the trajectory-dependent starting phase produces phase trajectories that yield essentially meaningless circular-linear regression slopes as possibly indicated by Figure 3B. Or, the largest bar in Figure 3D could be correctly showing this smaller slope (instead of phase locking as interpreted by the authors), and the interpretation of the larger slopes as precession was simply incorrect. Regardless, the authors must revise the phase precession mechanism and/or its interpretation to at least qualitatively recapitulate the limited range, grid-scale appropriate slopes, and stereotyped phase trajectories in published data.

3) It is unclear whether the 1D phase precession shown in the paper results are specific to a chosen trajectory through the 2D sheet, or whether these results generalize across all directions. The justification for the track choice to improve "interpretability" is weak given that the choice reduces the robustness of the results. The results should be directionally generalized, and the effects of multiple fields on the track (possibly using larger tracks commensurate with grid scale) should be demonstrated and interpreted. Furthermore, the authors need to demonstrate generalization across a range of speeds.

4) A parametric analysis of the conditions which give rise to phase precession and wavefront propagation is rather lacking here, and should be included.

5) Theta sequences: The O'Neill paper (Supplementary Figure 10E) shows the clear sawtooth pattern in which MEC sequences start from behind and end up ahead of the current position. The stalling/speeding dynamics are qualitatively different from both hippocampal observations, and the O'Neill MEC data. The authors should ensure the validity of theta sequence output with regards to (at the minimum) their stated benchmark.

Reviewer #1:

Continuous attractor networks have been used to simulate grid cells. This paper shows that the continuous attractors can be easily adapted to show other properties of the entorhinal-hippocampal network, namely phase precession, theta sequences during behavior and replay sequences during rest.

While the idea of getting these cellular properties out of a network that hasn't been specifically designed to generate these is very attractive, I suspect that some of these observations may arise specifically out of biologically unrealistic idiosyncrasies of the model.

1) "During locomotion, excitatory principal cells are driven to fire by various neocortical and thalamic inputs [Kerr et al., 2007]. Thus, their drive exceeds threshold at the center of the neural sheet; it decays towards the edges to avoid edge effects that disrupt continuous attractor dynamics [Burak and Fiete, 2009]." And "Once nucleated, the wavefront propagates freely through the outer region because inhibitory neurons, with low drive, are not activated by the wavefront. In contrast, wavefronts cannot propagate in the center region, because excitatory neurons there receive enough drive to spike vigorously and activate nearby inhibitory neurons."

During quiescent periods, the excitatory drive reduces, but still maintains its shape (Figure 5). The replay events are seen only at the periphery and not at the center of the sheet. I'm concerned that 1) This excludes center of the sheet from replays (which is not supported by any observed data to the best of my knowledge) and 2) The replays could simply be a side effect of the simulation architecture (which may not be replicated in reality – no one has demonstrated grid cells which maintain location specific firing during quiescent periods), and may not be seen if the boundary effects are countered by using a toroid instead of a sheet with gaussian excitatory drive peaking at the center of the sheet. The authors do discuss a possible enhanced model with landmark and border inputs sustaining their activity through the quiescent period (Discussion, fifth paragraph) that they claim can deal with this issue, but that is insufficient. The authors should implement the attractor as a toroid instead of as a sheet along with external drive not limited to a part of the network ("center" in case of the sheet) which can persist (at a lower level) during the quiescent period, rather than base a main result on the specific architecture that is unrealistic.

This change will also deal with the need for the external drive to be limited to the center during locomotion.

Theta phase precession and sequences as well as replay sequences arising out of such a model would be much more convincing.

2) "Unlike the 2D case in which grid cells have multiple firing fields, we choose parameters such that they have single fields to improve the interpretability of our results."

The authors need to show that their results are not unique to the selected direction, even if it’s just by showing that the results are at least qualitatively similar in other directions (even if the multiple vertices create redundancy). One possible concern is that the replays can "jump" between redundant vertices when a neuron with multiple fields on the track is active (i.e. instead of going in the order 1 2 3 N 4 5 6, redundant fields of neuron N can give rise to a sequence 1 2 3 N 25 26 27). This can degrade the replay.

3) Figure 3B, G. Theta phase precession shown here is unlike the observed precession, in the sense that there appear to be double/multiple precessing lines running within a field (this can be alternatively interpreted as precessing through >> 360°). This could be an artefact of the chosen refractory period, and the precession might disappear when using a shorter refractory period. A 40ms refractory period is rather long for grid cells. Does having a shorter refractory period drastically alter the results? Cells showing 10-15°/cm seem to have more realistic precession, but even that might get affected by having a shorter refractory period.

4)." Wavefront propagation involves dynamical processes that are fundamentally different from attractor bump motion, and thus leads to different decoded speeds and directions for replays and theta sequences."

How dependent is the wavefront propagation speed on the specific time constants used in the model? Were the time constants tweaked to achieve the desired speed?

Reviewer #2:

In this paper, Kang and DeWeese present a spiking model of the entorhinal grid cell network. The paper's main contributions are rooted in two novel ideas: 1) explaining phase precession as a consequence of theta-synchronized expansion and contraction of attractor bumps, and 2) showing that replay sequences can occur during propagating waves that emerge in regions of the network where excitatory drive is low. There are several issues that I think should be addressed in revision:

1) The idea that phase precession arises from expansion/contraction of attractor bumps is quite innovative, but I am concerned that under this mechanism, the phase at which grid cells fire during entry / exit from their fields may not fully accord with experimental findings. Two issues should be addressed here:

1a) What model parameters determine whether a particular grid cell has a fixed versus variable phase? Does this depend upon movement velocity (speed or direction), the neuron's position in the sheet, the level of excitatory drive, etc.? It would be helpful if the authors could perform a more systematic analysis of the drive / speed / direction regimes within which different phase precession behaviors occur. One of the paper's main strengths is its novel theory for phase precession, but without a clearer understanding of how this phase precession behavior arises from the network dynamics, it is difficult to understand what this theory entails, and what it might predict for biology.

1b) While explaining phase precession by expansion and contraction of the attractor bumps is quite innovative, I am concerned that phase precession arising from this mechanism may differ in a fundamental way from experimentally observed phase precession. Before explaining this concern, it is necessary to point out that the authors have adopted a convention for measuring theta phase that is opposite from what many experimental authors have previously used. Here, the authors define 180° as the phase at which grid cell firing rates are at their minimum. By contrast, classical phase precession papers by O'Keefe, the Mosers, and others defined 180° as the phase at which firing rates of place/grid cells are at their maximum. This reversal of convention must be carefully tracked when assessing whether the model agrees with evidence from prior papers. To avoid any confusion arising from this discrepancy, let me state my concern in a way that is independent of which phase convention is used. In experimental studies, place and grid cells usually show a specific preferred firing phase as the animal enters the spatial field, which is ~150-170° later than the phase at which the cell fires when the animal is in the center of its field. However, in the present model, it appears that this is not the case. For example, in Figure 3G, it appears that as the animal enters a grid cell's field, the cell prefers to fire at a phase which is a few degrees earlier than the phase at which the cell fires when the animal is in the center of its field. I don't think this is in good accordance with experimental findings for grid cells or place cells. This issue should be explicitly addressed in the paper.

2) The authors have chosen to build their network with non-periodic boundary conditions (rectangular sheets of neurons) and a radial gradient of excitatory drive which peaks in the center of the sheet. It is unclear how essential behaviors of the model (grid cells with diverse phase precession profiles, propagating waves) depend upon the non-uniform excitatory drive, or whether these behavior could also be obtained under uniform drive.

3) The influence (if any) of movement velocity upon phase precession is not clear. In a 2D attractor network, velocity has two components: speed and direction. Does the phase precession behavior of the model depend upon one or both of these velocity components? Concerning speed, it is not entirely clear whether simulation results were obtained using fixed or variable running speeds (for example, in Figure 4C, why is there a nonzero error bar for "actual" running speed? It does not appear from the trajectory plot in Figure 2A that there is much noise in the running speed during simulated track traversals). Concerning direction, was the 1D track simulated as some specific linear trajectory through the 2D grid sheet, and if so, what was the orientation alignment of the 1D trajectory on the 2D grid sheet? Would different alignments yield fundamentally different phase precession results?

4) If excitatory drive is the main determinant of wave propagation, then it would be nice if the authors could perform a more systematic analysis of the drive regimes in which wave propagation occurs. Can the entire sheet be switched into wave propagation behavior by applying an appropriately low level of uniform excitatory drive? Or is there something about the non-uniformity of the drive level that is essential for the wave propagation to occur?

Reviewer #3:

Kang and DeWeese present a detailed and comprehensive modeling study of theta-rhythmic modulation of the activity bump on a continuous attractor map to explain phase precession, theta sequences, and, by reducing subcortical input levels, the switch to the "irregular" state in which sharp-wave/ripple 'replay' reactivations are observed. These are well-studied hippocampal phenomena that may be crucial to the spatial, mnemonic, and decision-making roles of the hippocampus; the manuscript addresses how they may arise in medial entorhinal (MEC) grid cell networks, which is a critical question for the field. The attractor-based mechanism is interesting and potentially contributes a compelling unification of these temporal phenomena; however, there are several gaps in logic and disconnections from the literature in the way the results are currently presented.

• The claims regarding phase precession are largely undercut by the low correlations in Figure 3C and zero-dominated histogram in Figure 3D showing that most of the simulated grid cells do not precess. The authors note that additional features may increase the correlations, but the bias of the distribution toward 0 is the problem, not the average value. Hafting et al., 2008, showed that layer II principal cells all generally precess; while the authors mention that phase locking is observed in layer III (cf. the Hafting paper) as justification, their model is not a layer III model. If the bump modulation is the shared cause of precession and sequences, then the two effects should be equivalently strong in the model.

• Additionally, arguments regarding hippocampal phase precession have related to whether it is a cause or effect of temporal compression during theta sequences, or even whether it is epiphenomenal. In MEC, if both effects are generated by the same underlying mechanism as in this model, then the critical question concerns whether phase precession has any function in this setting. Some theories posit that precession helps construct sequences, but others claim that it is simply a by-product of temporal compression (e.g., G. Dragoi and G. Buzsáki. Neuron, 50(1):145-157, 2006.; G. Dragoi. Front Syst Neurosci, 7:46, 2013). It's not clear what the authors are claiming in this manuscript, but the results argue for precession as epiphenomenon because it is unnecessary to either sequences or replay, and the starting phase at the edge of a grid field is apparently random, indicating that it plays no role in the spatiotemporal ordering of activity. The description of the origin of precession in the model (subsection “Theta phase precession arises from oscillations in attractor bump size”, third paragraph) emphasizes that it results from transient changes in spread of activity, instead of external input, network, or cell-assembly modulation as in hippocampal models. The authors should broadly reconsider and clarify the theoretical context and implications of their model design and results regarding precession and theta sequences.

• The wavefront replay mechanism suffers from a similar causative discrepancy in which the replay sequences do not have any particular relationship with recently expressed sequences. Experiments (e.g., Drieu, Todorova, and Zugaro, 2018) are beginning to corroborate earlier theories (such as Dragoi, 2013, mentioned above) that online sequences drive plasticity necessary for subsequent replay. However, grid-cell weights in the model are hard-wired, thus precluding any direct contribution to the learning or memory consolidation roles that may be central to the replay phenomenon in general. The Discussion argues that hard-wired replay might play an initiation role for hippocampal replays, but that incorrectly assumes grid cells directly activate place cells (as in the old grid-to-place models) and ignores sharp-wave initiation studies implicating dentate gyrus, CA2, and/or ascending inputs (Sasaki, et al., 2018; Oliva, et al., 2016; Angulo-Garcia, et al., 2018). The potential roles of grid cell wavefronts may be more limited than discussed, which should be addressed.

• The model is based on the aperiodic network with feedforward Gaussian envelope function described by Burak and Fiete, 2009, but they also described a periodic solution. The periodic solution has problems with rotation, but some of the results (especially the generation of replays) are affected by or depend on the topology. The authors should clarify the reliance of their results on network topology and whether other formulations may also support their conclusions.

eLife. 2019 Nov 18;8:e46351. doi: 10.7554/eLife.46351.sa2

Author response


Essential revisions:

1) The non-periodic boundary conditions and non-uniform driving inputs seem like rather odd design choices. This raises some concern that there may be unexplained reasons for these choices (that is, the model's behavior may depend upon these choices in ways that are not spelled out). The model should be re-implemented using periodic topology, either across the entire manuscript or in additional comparison figures.

This is a good point. Following the Editor’s and reviewers’ suggestion, we have added comparison figures featuring a model with periodic topology and uniform drive (Figure 8D–G). It reproduces all essential replay and theta-related behaviors of the non-periodic implementation (Figure 8—figure supplement 2 and Figure 8—figure supplement 3). As in the non-periodic case, it transitions between attractor bumps arranged in a triangular grid and traveling wavefronts in response to changes in input drive. The attractor bumps are capable of path-integration during runs.

During quiescent periods, traveling wavefronts propagate across the entire network. This erases information about the animal’s position, so grid fields would not be consistent from lap to lap. To maintain positional information, we have added a simple implementation of brief allocentric input from border, boundary vector, or landmark cells (Ocko et al., 2018; Keinath et al., 2018; and others). This input is active only for 0.5 s between idle periods and runs to nucleate the activity bumps in their appropriate conditions. Inputs to the model are strictly uniform during the 1.5 s-long runs and the 2.0 s-long periods of quiescence that contain traveling wavefronts.
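To make the timing of this protocol concrete, the sketch below writes the schedule as a small function. It is illustrative only: the drive levels and the exact placement of the 0.5 s allocentric window within each block are placeholders, not values taken from our simulations.

```python
def input_schedule(t_in_block_s):
    """One behavioral block of the periodic-topology simulations:
    0.5 s of allocentric nucleation, a 1.5 s run, then 2.0 s of quiescence.
    Drive levels are arbitrary placeholders for illustration."""
    if t_in_block_s < 0.5:
        # brief allocentric (border/landmark-like) input nucleates the bumps
        return {"uniform_drive": 1.0, "allocentric_input": True, "velocity_input": False}
    elif t_in_block_s < 2.0:
        # run: uniform drive plus velocity input; bumps path-integrate
        return {"uniform_drive": 1.0, "allocentric_input": False, "velocity_input": True}
    else:
        # quiescence: reduced uniform drive; traveling wavefronts emerge
        return {"uniform_drive": 0.6, "allocentric_input": False, "velocity_input": False}
```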

For most of the paper, we still use non-periodic boundary conditions, which require non-uniform driving inputs (Burak and Fiete, 2009). Although periodic boundary conditions are useful for representing a large area of cortex in which edges are deemed irrelevant, we find non-periodic boundary conditions more biologically plausible. Periodic boundary conditions obeying a toroidal topology require a precise configuration of long-range connections that we believe is unlikely to exist in the brain. On the other hand, continuous attractor networks with non-periodic boundary conditions can arise from anatomically local connections, whose precise symmetry can be learned through synaptic plasticity rules (Widloski and Fiete, 2014).
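As a concrete illustration of how the two topologies differ at the level of recurrent connectivity, the sketch below (not our simulation code; the kernel width and function names are illustrative) computes neuron-to-neuron distances with or without wrap-around. On a torus every neuron has a full complement of neighbors, whereas the flat sheet has edges and therefore needs the tapered drive envelope.

```python
import numpy as np

def pairwise_distances(pos, sheet_size, periodic):
    """Distances between neuron positions `pos` (N x 2, in sheet units).
    periodic=True treats the sheet as a torus: coordinate differences wrap
    around, so left- and right-edge neurons are neighbors. periodic=False
    leaves hard edges, as in the non-periodic implementation."""
    diff = np.abs(pos[:, None, :] - pos[None, :, :])   # N x N x 2
    if periodic:
        diff = np.minimum(diff, sheet_size - diff)     # wrap-around
    return np.linalg.norm(diff, axis=-1)

# Illustrative center-surround inhibitory kernel built on either topology.
def inhibitory_weights(dist, width=8.0):
    return -np.exp(-dist**2 / (2.0 * width**2))
```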

2) Phase precession in this model may differ from experimentally observed phase precession in some fundamental ways. The Hafting et al., 2008, study shows that layer II MEC phase precession is highly stereotyped, with typical entry/exit phases of 222º/60º, corresponding to the total ~160º precession. Further, Hafting reported typical slopes of about -3º/cm for 30-70 cm grid spacings; "correcting" this value by mean([30,70])/120 for the larger ~120 cm spacing of the modeled grids (Figure 1E) suggests modeled phase slopes should be on the order of -1.25º/cm. But the authors show the slope ranging up to -30º/cm; this range seems clearly too high. It could be that the trajectory-dependent starting phase produces phase trajectories that yield essentially meaningless circular-linear regression slopes as possibly indicated by Figure 3B. Or, the largest bar in Figure 3D could be correctly showing this smaller slope (instead of phase locking as interpreted by the authors), and the interpretation of the larger slopes as precession was simply incorrect. Regardless, the authors must revise the phase precession mechanism and/or its interpretation to at least qualitatively recapitulate the limited range, grid-scale appropriate slopes, and stereotyped phase trajectories in published data.

We agree with the Editor’s and reviewers’ suggestions and have revised our manuscript accordingly, as we will now describe. We will structure our response by successively addressing stereotyped phase trajectories, grid-scale appropriate slopes, and limited range, as listed in the last sentence of the Concern.

First, we discuss stereotyped phase trajectories. Our new implementation of brief allocentric corrections has increased the number of phase-precessing neurons in our model. However, we do not believe that the experimental evidence requires our model to exhibit the same degree of precession across all neurons. These two points are elaborated in turn below.

a) Brief allocentric corrections have made phase behavior more stereotyped in our model. Our original results were based on simulated data from only 12 laps along the track, 6 in each direction, which provided many fewer spikes per neuron than were acquired in phase precession experiments. Our lap count was limited by attractor drift, which arises from noise accumulation and destabilizes grid fields. Allocentric input is believed to correct errors in path-integration (Hardcastle et al., 2015; Ocko et al., 2018; Keinath et al., 2018; and others; see also Essential revision 1) and allows us to run simulations for many more laps, providing more spikes per neuron with more stable firing fields. It also allows us to use stronger velocity input, which would otherwise produce more attractor drift because the attractor bumps move farther along the neural sheet during each lap.

Compared to our original simulations, more neurons exhibit phase precession between 60º and 360º and a smaller proportion of neurons exhibit phase-position correlations close to 0 (Figure 3). Although correlation values are still lower than those in Supplementary Figure 10C of O’Neill et al., 2017, the peak of the distribution is no longer at 0, in agreement with experiment. In the Discussion, we mention how these correlation values can be further increased with additional simulation features that may reduce the number of phase-independent neurons that fire throughout the theta cycle.

b) Although our neurons do not all precess in a stereotyped fashion, this heterogeneity matches the variety of phase behaviors observed in superficial MEC layers. We believe that at most, the experimental literature indicates that only a subpopulation of neurons in superficial MEC layers exhibits highly stereotyped phase precession. Hafting et al., 2008 report that LII neurons consistently precess, whereas LIII neurons exhibit phase locking instead. However, LII pyramidal cells precess less than LII stellate cells do (Ebbesen et al., 2016; Reifenstein et al., 2016) and even less than LIII neurons do at the single-lap level (Ebbesen et al., 2016).

It is not certain that our grid cells should correspond to one subpopulation of MEC neurons in particular. Grid cells in continuous attractor models, like our model, increase their activity when the animal moves along a preferred direction and can thus be considered grid × head direction conjunctive cells (Zilli, 2012). These conjunctive cells are more common in LIII and deeper layers (Sargolini et al., 2006). It is true that other features of continuous attractor models, such as recurrent inhibition (Couey et al., 2013), may be better established in LII. Additionally, some researchers do propose that continuous attractor models describe LII neurons with directional inputs from deeper MEC layers (Bush and Burgess, 2014; and others). Nevertheless, we find it too restrictive to claim that our model must describe only MEC LII neurons (and potentially only its strongly precessing stellate cells). The O’Neill et al., 2017 results that our model addresses were obtained from superficial MEC layers without distinguishing between LII and LIII; they contain a mixture of phase behaviors.

Next, we discuss grid-scale appropriate slopes. In short, to properly make the important experimental comparisons mentioned by the Editor and reviewers, we have decided to convert our slopes to units of degrees per field instead of degrees per cm. In our manuscript, we focused on matching theta behavior and replay as measured by O’Neill et al., 2017. Their grid cells have scale ~150 cm, and their example of phase precession exhibits slope ~10º/cm (Supplementary Figure 10A in O’Neill et al., 2017). It is true that Hafting et al., 2008 provides a more comprehensive investigation of phase precession, and both their grid scales (30–70 cm) and phase precession slopes (~3º/cm) are much smaller. Other papers report precession slopes in units of degrees per second (Ebbesen et al., 2016; Reifenstein et al., 2016). In light of these differences in slope values even among experimental reports and their dependence on grid scale, we find that a more consistent metric for precession is its phase range, which equals the absolute value of precession slope in units of degrees per field. We will discuss experimental comparisons of precession range in the next paragraph.
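Before doing so, here is the simple arithmetic behind these unit conversions, written out as a sketch (the function names are ours, and the example reproduces the Editors’ rescaling of Hafting-type slopes quoted above):

```python
def precession_range_deg_per_field(slope_deg_per_cm, field_size_cm):
    """Precession range = |slope in deg/cm| x field width in cm,
    i.e., the slope expressed in degrees per field."""
    return abs(slope_deg_per_cm) * field_size_cm

def rescale_slope(slope_deg_per_cm, scale_from_cm, scale_to_cm):
    """If the range per field is conserved, slope scales inversely with grid scale."""
    return slope_deg_per_cm * scale_from_cm / scale_to_cm

# Rescaling a -3 deg/cm slope from a mean 50 cm grid scale to the ~120 cm
# scale of the modeled grids gives roughly -1.25 deg/cm.
print(rescale_slope(-3.0, (30 + 70) / 2, 120))
```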

Finally, we discuss the range of phase precession. The Editor and reviewers point out that our previous results exhibit large phase precession slopes of magnitude ~30º/cm, which corresponds to a precession range of greater than 360º. Our new simulations incorporate brief allocentric input and better align with precession ranges reported experimentally, which are less than 360º. We present two analyses of precession range in our revised manuscript:

a) Figure 3E shows our analysis of phase precession slope in units of degrees per field, whose absolute value is the precession range. We agree with the Editor’s suggestion that some slope values may be meaningless due to poor fit. To address this issue, we modified our slope analysis by including only neurons whose fit score, which is the mean resultant length (Kempter et al., 2012), exceeds a certain value. In Figure 3E, neurons still preferentially show negative slopes for rightward runs and positive slopes for leftward runs, which corresponds to decreasing phase as the animal moves through the field. Moreover, many neurons responsible for this bias, corresponding to regions in which the “left” and “right” histograms do not overlap, precess with range 60º–360º. This compares favorably with the median precession ranges measured experimentally in MEC LII: ~160º in cells of unspecified type (Hafting et al., 2008), ~170º and ~110º in stellate and pyramidal cells respectively (Reifenstein et al., 2016), and ~130º and ~50º in stellate and pyramidal cells respectively (Ebbesen et al., 2016). Our simulations still produce many cells with precession range close to 0º, which may correspond to MEC LIII cells observed to lack strong precession (Hafting et al., 2008).

b) Following Hafting et al., 2008, we present in Figure 3F spike density plots averaged over different subgroups of neurons. The rightmost density plot corroborates our results in Figure 3E: for neurons with high fit scores and large phase ranges >60º, the preferred phase decreases as the animal moves through grid fields. This preferred phase is ~360º (or equivalently 0º) at the beginning of the field, and the magnitude of its decrease is ~75º. For neurons with small phase ranges <60º, the preferred phase remains constant at ~360º. The former population appears to be consistent with LII cells and the latter with LIII cells.

In summary, our implementation of brief allocentric corrections leads to more phase-precessing neurons whose precession range lies within the experimentally reported span for LII neurons. In the Discussion, we cite experimental measurements to explain why we do not believe our model should necessarily produce only grid cells with stereotyped precession.
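For concreteness, the fit score mentioned in point (a) is the mean resultant length of the circular residuals after removing a candidate linear phase–position trend; a minimal sketch in the spirit of Kempter et al., 2012, is given below (grid-search bounds and variable names are illustrative, and this is not our exact analysis code).

```python
import numpy as np

def circular_linear_fit(pos_cm, phase_deg, candidate_slopes_deg_per_cm):
    """Fit spike phase (deg) against position (cm) by maximizing the mean
    resultant length R over candidate slopes. Returns (best_slope, best_R);
    R lies in [0, 1] and serves as the fit score."""
    phase = np.deg2rad(np.asarray(phase_deg))
    pos = np.asarray(pos_cm)
    best_slope, best_R = 0.0, -1.0
    for a in candidate_slopes_deg_per_cm:
        resid = phase - np.deg2rad(a) * pos       # circular residuals
        R = np.abs(np.mean(np.exp(1j * resid)))   # mean resultant length
        if R > best_R:
            best_slope, best_R = a, R
    return best_slope, best_R

# Usage sketch:
# slope, score = circular_linear_fit(pos, phase, np.linspace(-30, 30, 601))
```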

3) It is unclear whether the 1D phase precession shown in the paper results are specific to a chosen trajectory through the 2D sheet, or whether these results generalize across all directions. The justification for the track choice to improve "interpretability" is weak given that the choice reduces the robustness of the results. The results should be directionally generalized, and the effects of multiple fields on the track (possibly using larger tracks commensurate with grid scale) should be demonstrated and interpreted. Furthermore, the authors need to demonstrate generalization across a range of speeds.

This is a good point. We have now generalized our results to a variety of track directions relative to the neural sheet and across a range of speeds (Figure 3—figure supplement 6). We have also compared phase behavior for simulations exhibiting single fields and multiple fields (Figure 3—figure supplement 5). We have found that precession range and correlation magnitude increase with animal speed, as observed experimentally (Jeewajee et al., 2014). However, track orientation and the number of grid fields do not affect phase behavior.

4) A parametric analysis of the conditions which give rise to phase precession and wavefront propagation is rather lacking here, and should be included.

We agree that a parametric analysis is a good idea. We have now measured various phase precession, theta sequence, and replay properties across a wide range of parameters. The results are shown in Figure 3—figure supplement 6, Figure 3—figure supplement 7, Figure 5—figure supplement 3, and Figure 7—figure supplement 4. Across all parameter values, theta sequence speed remains roughly 2 times the actual speed, and replay speed remains roughly 14 times the actual speed. These values are consistent with our original findings and with experimental measurements (O’Neill et al., 2017).

5) Theta sequences: The O'Neill paper (Supplementary Figure 10E) shows the clear sawtooth pattern in which MEC sequences start from behind and end up ahead of the current position. The stalling/speeding dynamics are qualitatively different from both hippocampal observations, and the O'Neill MEC data. The authors should ensure the validity of theta sequence output with regards to (at the minimum) their stated benchmark.

The Editor and the reviewers make a good point; in our new simulations with brief allocentric corrections (see also Essential revision 1), theta sequence plots now exhibit a prominent sawtooth pattern (Figure 5B). These sequences start behind the animal’s actual position and end ahead of it (Figure 5D), a feature that arises from activity bump dynamics (Figure 6D). Allocentric corrections allow for the combination of more simulation laps, more stable firing fields, and stronger velocity input, which together cause this sawtooth pattern to emerge.

Reviewer #1:

Continuous attractor networks have been used to simulate grid cells. This paper shows that the continuous attractors can be easily adapted to show other properties of the entorhinal-hippocampal network, namely phase precession, theta sequences during behavior and replay sequences during rest.

While the idea of getting these cellular properties out of a network that hasn't been specifically designed to generate these is very attractive, I suspect that some of these observations may arise specifically out of biologically unrealistic idiosyncrasies of the model.

1) "During locomotion, excitatory principal cells are driven to fire by various neocortical and thalamic inputs [Kerr et al., 2007]. Thus, their drive exceeds threshold at the center of the neural sheet; it decays towards the edges to avoid edge effects that disrupt continuous attractor dynamics [Burak and Fiete, 2009]." And "Once nucleated, the wavefront propagates freely through the outer region because inhibitory neurons, with low drive, are not activated by the wavefront. In contrast, wavefronts cannot propagate in the center region, because excitatory neurons there receive enough drive to spike vigorously and activate nearby inhibitory neurons."

During quiescent periods, the excitatory drive reduces, but still maintains its shape (Figure 5). The replay events are seen only at the periphery and not at the center of the sheet. I'm concerned that 1) This excludes center of the sheet from replays (which is not supported by any observed data to the best of my knowledge) and 2) The replays could simply be a side effect of the simulation architecture (which may not be replicated in reality – no one has demonstrated grid cells which maintain location specific firing during quiescent periods), and may not be seen if the boundary effects are countered by using a toroid instead of a sheet with gaussian excitatory drive peaking at the center of the sheet. The authors do discuss a possible enhanced model with landmark and border inputs sustaining their activity through the quiescent period (Discussion, fifth paragraph) that they claim can deal with this issue, but that is insufficient. The authors should implement the attractor as a toroid instead of as a sheet along with external drive not limited to a part of the network ("center" in case of the sheet) which can persist (at a lower level) during the quiescent period, rather than base a main result on the specific architecture that is unrealistic.

This change will also deal with the need for the external drive to be limited to the center during locomotion.

Theta phase precession and sequences as well as replay sequences arising out of such a model would be much more convincing.

The reviewer raises an interesting point here. As we have described in our response to Essential revision 1, we have now implemented periodic boundary conditions with uniform drive, and our simulations show attractor bumps and traveling wavefronts during runs and idle periods, respectively (Figure 8). These grid cells no longer maintain location-specific firing during quiescent periods. However, we do not believe there is convincing experimental evidence for or against location-specific firing during quiescent periods. Few papers report the properties of grid cells during quiescence; grid cells during this state do have much lower firing rates, but there appears to be some residual activity outside of highly synchronized events (Figure 2A in O’Neill et al., 2017). To our knowledge, it is unknown whether this activity maintains a representation of an animal’s location. Nevertheless, we expect our results to hold in either case; the animal’s spatial representation can be either maintained by an active region of the grid network (whether in the center or close to an edge) or reset by brief allocentric inputs.

2) "Unlike the 2D case in which grid cells have multiple firing fields, we choose parameters such that they have single fields to improve the interpretability of our results."

The authors need to show that their results are not unique to the selected direction, even if it’s just by showing that the results are at least qualitatively similar in other directions (even if the multiple vertices create redundancy). One possible concern is that the replays can "jump" between redundant vertices when a neuron with multiple fields on the track is active (i.e. instead of going in the order 1 2 3 N 4 5 6, redundant fields of neuron N can give rise to a sequence 1 2 3 N 25 26 27). This can degrade the replay.

This is a good idea. As described in our response to Essential revision 3, we agree that running simulations with varied track orientations and multiple grid fields would test the robustness of our model. We have included these results throughout our paper. Changing track orientation does not affect phase behavior (Figure 3—figure supplement 6). Simulations with one or two grid fields exhibit the same phase behavior and theta sequence speed (Figure 3—figure supplement 5, Figure 5—figure supplement 3). During replays, simulations with two grid fields can have their decoded probability split between multiple track positions that form parallel lines (Figure 7—figure supplement 3). Their detected speed is slightly lower than that of replays among grid cells with one field, but the different detection method required to handle split probability distributions may contribute to this discrepancy (Figure 7—figure supplement 4).

3) Figure 3B, G. Theta phase precession shown here is unlike the observed precession, in the sense that there appear to be double/multiple precessing lines running within a field (this can be alternatively interpreted as precessing through >> 360°). This could be an artefact of the chosen refractory period, and the precession might disappear when using a shorter refractory period. A 40ms refractory period is rather long for grid cells. Does having a shorter refractory period drastically alter the results? Cells showing 10-15°/cm seem to have more realistic precession, but even that might get affected by having a shorter refractory period.

As described in our response to Essential revision 2, our simulations with brief allocentric inputs, which allow us to obtain more spikes per neuron with more stable firing fields, demonstrate more neurons whose range of precession is less than 360°. We do not implement a refractory period in our main simulations; we have only considered conceptual models in Figure 4 that use a refractory period. In our main model, we do use a 40 ms membrane time constant for excitatory neurons. As described in our response to Essential revision 4, we have characterized how the behavior of the model depends on the membrane time constants and other parameter values (Figure 3—figure supplement 6 and Figure 3—figure supplement 7). Phase precession properties do not significantly vary with membrane time constants over the range we tested.

4) " Wavefront propagation involves dynamical processes that are fundamentally different from attractor bump motion, and thus leads to different decoded speeds and directions for replays and theta sequences."

How dependent is the wavefront propagation speed on the specific time constants used in the model? Were the time constants tweaked to achieve the desired speed?

Wavefronts, which arise directly from activity propagation from one excitatory neuron to the next, generally travel much faster than activity bumps, whose motion arises from smaller imbalances among excitatory populations with different preferred directions. Thus, replays are generally much faster than the animal trajectory. Without fine tuning, our choice of parameters makes replays ~14 times faster. As described in our response to Essential revision 4, we have characterized how replay speed depends on the values of time constants and other parameters (Figure 7—figure supplement 4). In general, replays are 8–20 times faster than the speed of the animal trajectory, which agrees well with experimental measurements of a ~9–15-fold speed-up (O’Neill et al., 2017).
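
To illustrate how such a speed-up factor can be computed, the short sketch below (not our analysis code; the decoded positions and running speed are placeholder values chosen to give a roughly 14-fold ratio) fits a line to decoded replay position versus time and divides the slope by the mean running speed.

    /* Illustrative sketch of estimating a replay speed-up factor: fit a line
     * to decoded position versus time within a replay event and divide the
     * slope by the mean running speed. The numbers below are placeholders,
     * not data from the paper. */
    #include <stdio.h>

    /* Least-squares slope of y against x over n samples. */
    static double slope(const double *x, const double *y, int n)
    {
        double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);
    }

    int main(void)
    {
        /* Hypothetical decoded track positions (m) at successive decoding
         * windows (s) during a single replay event. */
        double t[] = {0.00, 0.01, 0.02, 0.03, 0.04, 0.05};
        double pos[] = {0.10, 0.17, 0.24, 0.31, 0.38, 0.45};
        int n = (int)(sizeof t / sizeof t[0]);

        double replay_speed = slope(t, pos, n); /* decoded speed, m/s */
        double run_speed = 0.5;                 /* mean trajectory speed, m/s */

        printf("replay speed:   %.1f m/s\n", replay_speed);
        printf("speed-up ratio: %.1f x\n", replay_speed / run_speed);
        return 0;
    }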

Reviewer #2:

In this paper, Kang and DeWeese present a spiking model of the entorhinal grid cell network. The paper's main contributions are rooted in two novel ideas: 1) explaining phase precession as a consequence of theta-synchronized expansion and contraction of attractor bumps, and 2) showing that replay sequences can occur during propagating waves that emerge in regions of the network where excitatory drive is low. There are several issues that I think should be addressed in revision:

1) The idea that phase precession arises from expansion/contraction of attractor bumps is quite innovative, but I am concerned that under this mechanism, the phase at which grid cells fire during entry / exit from their fields may not fully accord with experimental findings. Two issues should be addressed here:

1a) What model parameters determine whether a particular grid cell has a fixed versus variable phase? Does this depend upon movement velocity (speed or direction), the neuron's position in the sheet, the level of excitatory drive, etc.? It would be helpful if the authors could perform a more systematic analysis of the drive / speed / direction regimes within which different phase precession behaviors occur. One of the paper's main strengths is its novel theory for phase precession, but without a clearer understanding of how this phase precession behavior arises from the network dynamics, it is difficult to understand what this theory entails, and what it might predict for biology.

We appreciate the reviewer’s comment that our model is based on an innovative mechanism. The reviewer makes a good suggestion that we determine differences between grid cells that exhibit phase locking and phase precession, and we have now performed a comprehensive analysis. To constrain spiking to preferred theta phases, both phase-locking and phase-precessing neurons require low excitatory drive and tend to be located towards the edge of the neural sheet, where drive is lower (Figure 3—figure supplement 6). In particular, phase-locking neurons must have spikes constrained to only the most permissive part of the theta cycle; thus, they have lower firing rates than phase-precessing neurons (Figure 3—figure supplement 4). Two factors contribute to these lower firing rates: phase-locking neurons are slightly less prevalent towards the centers of attractor bumps (Figure 3—figure supplement 4) and are more prevalent in excitatory populations whose preferred direction is perpendicular to the track orientation (Figure 3—figure supplement 5). This relationship between precession range and firing rate was observed experimentally by Jeewajee et al., 2014, who reported that grid cells with more spikes have steeper precession slopes.

Furthermore, with a new simplified model of phase precession, we demonstrate that the level of neural activity can directly influence phase behavior (Figure 4E, F and Figure 4—figure supplement 1). In this model, we determine the time-varying shape of the average attractor bump and rescale its activity to different levels. We find that low, medium, and high activity levels increase the prevalence of phase-locking, precessing, and independent neurons, respectively.
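
The toy calculation below (an illustration of this intuition only, not our simplified model; the modulation shape, threshold, and activity scales are placeholders) shows why the overall activity level matters: rescaling a theta-modulated activity trace changes the fraction of the theta cycle during which it clears a fixed spike threshold, with a narrow window favoring phase locking, a wider window leaving room for precession, and a fully open window removing phase dependence.

    /* Toy illustration (not the authors' simplified model): rescale a
     * theta-modulated activity trace and measure the fraction of the theta
     * cycle during which it exceeds a fixed spike threshold. Baseline,
     * modulation depth, threshold, and scales are placeholders. */
    #include <math.h>
    #include <stdio.h>

    #define TWO_PI 6.283185307179586

    /* Theta-modulated activity at phase phi (radians), following the paper's
     * convention that 180 degrees corresponds to minimum activity. */
    static double activity(double scale, double phi)
    {
        return scale * (0.6 + 0.4 * cos(phi));
    }

    /* Fraction of the theta cycle during which activity exceeds threshold. */
    static double open_fraction(double scale, double threshold)
    {
        const int steps = 3600;
        int open = 0;
        for (int i = 0; i < steps; i++) {
            double phi = TWO_PI * i / steps;
            if (activity(scale, phi) > threshold)
                open++;
        }
        return (double)open / steps;
    }

    int main(void)
    {
        double threshold = 0.4;            /* arbitrary spike threshold */
        double scales[] = {0.5, 1.0, 4.0}; /* low, medium, high activity */
        const char *labels[] = {"low (locking-like)",
                                "medium (precession-like)",
                                "high (phase-independent)"};
        for (int i = 0; i < 3; i++)
            printf("scale %.1f, %-25s: %.0f%% of cycle above threshold\n",
                   scales[i], labels[i],
                   100.0 * open_fraction(scales[i], threshold));
        return 0;
    }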

Phase precession properties are similar between populations whose preferred direction is either aligned with or perpendicular to the track orientation (Figure 3—figure supplement 5), which is related to omnidirectional precession observed experimentally (Climer et al., 2013). Precession is also independent of track orientation (see also Essential revision 3), but its range increases with animal speed (Figure 3—figure supplement 6), as observed experimentally (Jeewajee et al., 2014). In our parametric analysis (see also Essential revision 4), we report how phase precession properties vary in a complex fashion with simulation parameters (Figure 3—figure supplement 6 and Figure 3—figure supplement 7).

1b) While explaining phase precession by expansion and contraction of the attractor bumps is quite innovative, I am concerned that phase precession arising from this mechanism may differ in a fundamental way from experimentally observed phase precession. Before explaining this concern, it is necessary to point out that the authors have adopted a convention for measuring theta phase that is opposite from what many experimental authors have previously used. Here, the authors define 180° as the phase at which grid cell firing rates are at their minimum. By contrast, classical phase precession papers by O'Keefe, the Mosers, and others defined 180° as the phase at which firing rates of place/grid cells are at their maximum. This reversal of convention must be carefully tracked when assessing whether the model agrees with evidence from prior papers. To avoid any confusion arising from this discrepancy, let me state my concern in a way that is independent of which phase convention is used. In experimental studies, place and grid cells usually show a specific preferred firing phase as the animal enters the spatial field, which is ~150-170° later than the phase at which the cell fires when the animal is in the center of its field. However, in the present model, it appears that this is not the case. For example, in Figure 3G, it appears that as the animal enters a grid cell's field, the cell prefers to fire at a phase which is a few degrees earlier than the phase at which the cell fires when the animal is in the center of its field. I don't think this is in good accordance with experimental findings for grid cells or place cells. This issue should be explicitly addressed in the paper.

As described in our response to Essential revision 2, our new simulations with brief allocentric corrections demonstrate more neurons whose range of precession is less than 360°. These neurons have a preferred firing phase that is higher (later) at the beginning of a grid field than at its center.

Figure 4A–C of our revision (corresponding to Figure 3G in our first submission) presents a simplified conceptual model that readily explains two features of phase precession in our model: preferred phases at the beginning of a grid field are close to 0°/360° (under our convention) and preferred phases tend to decrease, not increase, through the field. The simplified conceptual model cannot explain why the range of precession is less than 360°. We have made this point clearer in our revision.

As for the convention for theta phase, we agree that the presence of opposite definitions is confusing. Although classic phase precession papers identify 180° with maximum grid cell activity, other papers, including O’Neill et al., 2017 and Feng et al., 2015, identify 180° with minimum activity. Since our focus is to capture the results of O’Neill et al., 2017, we use the latter convention.
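
For readers comparing across papers, the minimal helper below (an illustration, not code from either study) converts between the two conventions; assuming the two reference points are half a theta cycle apart, shifting by 180° maps one onto the other.

    /* Minimal helper (illustrative only) for moving between the two theta
     * phase conventions discussed above, assuming the reference points of the
     * two conventions are half a theta cycle apart: a 180-degree shift maps a
     * phase measured with 180 deg = maximum firing onto the convention with
     * 180 deg = minimum firing, and vice versa. */
    #include <math.h>
    #include <stdio.h>

    static double convert_phase(double deg)
    {
        return fmod(deg + 180.0, 360.0); /* result in [0, 360) for deg >= -180 */
    }

    int main(void)
    {
        /* 180 deg under one convention corresponds to 0 deg under the other. */
        printf("180 -> %.0f\n", convert_phase(180.0));
        printf(" 90 -> %.0f\n", convert_phase(90.0));
        return 0;
    }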

2) The authors have chosen to build their network with non-periodic boundary conditions (rectangular sheets of neurons) and a radial gradient of excitatory drive which peaks in the center of the sheet. It is unclear how essential behaviors of the model (grid cells with diverse phase precession profiles, propagating waves) depend upon the non-uniform excitatory drive, or whether these behaviors could also be obtained under uniform drive.

As described in our response to Essential revision 1, our simulations with periodic boundary conditions and uniform drive show attractor bumps and traveling wavefronts across the entire neural sheet during runs and idle periods, respectively (Figure 8). This implementation can produce all the essential behaviors found in our original model (Figure 8—figure supplement 2 and Figure 8—figure supplement 3).

3) The influence (if any) of movement velocity upon phase precession is not clear. In a 2D attractor network, velocity has two components: speed and direction. Does the phase precession behavior of the model depend upon one or both of these velocity components? Concerning speed, it is not entirely clear whether simulation results were obtained using fixed or variable running speeds (for example, in Figure 4C, why is there a nonzero error bar for "actual" running speed? It does not appear from the trajectory plot in Figure 2A that there is much noise in the running speed during simulated track traversals). Concerning direction, was the 1D track simulated as some specific linear trajectory through the 2D grid sheet, and if so, what was the orientation alignment of the 1D trajectory on the 2D grid sheet? Would different alignments yield fundamentally different phase precession results?

As described in our response to Essential revision 3, we agree that running simulations with varied track orientations would test the robustness of our model. Yes, the 1D track corresponds to a fixed orientation of 36° on the neural sheet (Appendix 1), chosen to avoid cardinal directions, which are preferred by our neurons. We have now tested different orientations of 0°, 22.5°, and 45°, and found no dependence between orientation and phase precession properties (Figure 3—figure supplement 6). We note that we do not fix the orientation of the grid-like activity pattern on the neural sheet; it varies from simulation to simulation.
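
The sketch below illustrates this mapping (placeholder values only; it is not our trajectory-generation code): a 1D position along the track is projected onto the 2D space represented by the network at the fixed 36° orientation, and the corresponding 2D velocity provides the input that moves the attractor bumps.

    /* Illustrative sketch of mapping a 1D track onto the 2D space represented
     * by the network at a fixed orientation (here 36 degrees, as in the main
     * simulations). Speed and time step are placeholder values. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double theta = 36.0 * PI / 180.0; /* track orientation (radians) */
        double speed = 0.5;               /* running speed along the track, m/s */
        double dt = 0.01;                 /* time step, s */

        double s = 0.0;                   /* 1D position along the track, m */
        for (int i = 0; i < 5; i++) {
            double x = s * cos(theta);      /* 2D position fed to the network */
            double y = s * sin(theta);
            double vx = speed * cos(theta); /* 2D velocity input driving bumps */
            double vy = speed * sin(theta);
            printf("s=%.3f m -> (x, y)=(%.3f, %.3f) m, (vx, vy)=(%.3f, %.3f) m/s\n",
                   s, x, y, vx, vy);
            s += speed * dt;
        }
        return 0;
    }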

We do include variations in velocity in our simulated trajectory such that the speed varies between 0.4 and 0.6 m/s (Appendix 1). As part of our parametric analysis (Essential revision 4), we have also tested different mean running speeds of 0.3 and 0.75 m/s. The range of phase precession increases with animal speed (Figure 3—figure supplement 6), as observed experimentally (Jeewajee et al., 2014).

4) If excitatory drive is the main determinant of wave propagation, then it would be nice if the authors could perform a more systematic analysis of the drive regimes in which wave propagation occurs. Can the entire sheet be switched into wave propagation behavior by applying an appropriately low level of uniform excitatory drive? Or is there something about the non-uniformity of the drive level that is essential for the wave propagation to occur?

Yes, simulations with periodic boundary conditions and uniform drive demonstrate wavefronts that propagate throughout the entire neural sheet (Essential revision 1; Figure 8). As part of our parametric analysis (Essential revision 4), we found that replays are roughly independent of excitatory drive; their number is reduced with higher inhibitory drive, since inhibition disrupts wavefront propagation (Figure 7—figure supplement 4).

Reviewer #3:

Kang and DeWeese present a detailed and comprehensive modeling study of theta-rhythmic modulation of the activity bump on a continuous attractor map to explain phase precession, theta sequences, and, by reducing subcortical input levels, the switch to the "irregular" state in which sharp-wave/ripple 'replay' reactivations are observed. These are well-studied hippocampal phenomena that may be crucial to the spatial, mnemonic, and decision-making roles of the hippocampus; the manuscript addresses how they may arise in medial entorhinal (MEC) grid cell networks, which is a critical question for the field. The attractor-based mechanism is interesting and potentially contributes a compelling unification of these temporal phenomena; however, there are several gaps in logic and disconnections from the literature in the way the results are currently presented.

We thank the reviewer for stating that we are addressing a critical question for the field and that our proposed mechanism is interesting and potentially contributes a compelling unification of temporal phenomena. We have made many changes to our manuscript that we feel provide more connections to the literature and better logical flow.

• The claims regarding phase precession are largely undercut by the low correlations in Figure 3C and zero-dominated histogram in Figure 3D showing that most of the simulated grid cells do not precess. The authors note that additional features may increase the correlations, but the bias of the distribution toward 0 is the problem, not the average value. Hafting et al., 2008, showed that layer II principal cells all generally precess; while the authors mention that phase locking is observed in layer III (cf. the Hafting paper) as justification, their model is not a layer III model. If the bump modulation is the shared cause of precession and sequences, then the two effects should be equivalently strong in the model.

The reviewer raises a subtle and important point. As described in our response to Essential revision 2, we believe our model does not necessarily describe only neurons with stereotyped phase precession. Our neurons with grid-like firing maps and directional preferences can be considered conjunctive cells, which are more common in MEC LIII than in LII (Sargolini et al., 2006). These LIII cells exhibit phase locking more than phase precession (Hafting et al., 2008). Moreover, Ebbesen et al., 2016 found that the range and slope of phase precession in LII pyramidal cells are even smaller than those of LIII cells. We do not exclude the possibility that our model may capture LII neurons whose directional selectivity is influenced by conjunctive cells from deeper MEC layers (as suggested by Bush & Burgess, 2014, and others), but we do not agree that biological plausibility requires most of our cells to precess. We thank Reviewer 3 for pointing out that we did not fully discuss this in our manuscript, and our justification is now provided in the Discussion.

Moreover, incorporating brief allocentric corrections leads to a shift in the peak of our phase-position correlation distribution away from 0 (Figure 3C). In the Discussion, we suggest other biophysical features that may be added to our model to increase its correlation further.

• Additionally, arguments regarding hippocampal phase precession have related to whether it is a cause or effect of temporal compression during theta sequences, or even whether it is epiphenomenal. In MEC, if both effects are generated by the same underlying mechanism as in this model, then the critical question concerns whether phase precession has any function in this setting. Some theories posit that precession helps construct sequences, but others claim that it is simply a by-product of temporal compression (e.g., G. Dragoi and G. Buzsáki. Neuron, 50(1):145-157, 2006.; G. Dragoi. Front Syst Neurosci, 7:46, 2013). It's not clear what the authors are claiming in this manuscript, but the results argue for precession as epiphenomenon because it is unnecessary to either sequences or replay, and the starting phase at the edge of a grid field is apparently random, indicating that it plays no role in the spatiotemporal ordering of activity. The description of the origin of precession in the model (subsection “Theta phase precession arises from oscillations in attractor bump size”, third paragraph) emphasizes that it results from transient changes in spread of activity, instead of external input, network, or cell-assembly modulation as in hippocampal models. The authors should broadly reconsider and clarify the theoretical context and implications of their model design and results regarding precession and theta sequences.

The reviewer raises important questions regarding the interpretation of both published data and our modeling results. While our work focuses on the mechanistic origins of phase precession and theta sequences, we appreciate the suggestion to further consider their relationship to each other and to broader cognitive theories. Accordingly, we have expanded upon our comments on this topic in our Discussion.

As reviewer 3 mentions, in hippocampus, some work indicates that phase precession may appear before theta sequences (Feng et al., 2015), which is consistent with the possibility that phase precession gives rise to theta sequences. Our work suggests that such a dependence is not necessary in grid cells and that both can arise from the same underlying cause—namely, oscillations in inhibitory drive. (Similarly, Navratilova et al., 2012, also suggest that both phenomena are consequences of a single factor, after-spike depolarization, in a grid cell model.) Note that this is not inconsistent with a difference in the onset times of these two phenomena in MEC, which has not been observed to our knowledge but would be analogous to what Feng et al., 2015 report in hippocampus. In our model, phase precession arises from oscillations in attractor bump size and theta sequences arise from oscillations in bump velocity. Oscillating inhibitory drive causes both types of bump oscillations, but their relative prominence may differ from one parameter regime to another (Figure 6—figure supplement 1), which may cause phase precession to be detected without theta sequences, or vice versa, at certain times.
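
As a concrete illustration of the kind of drive modulation we have in mind (all parameter values below are placeholders, not those used in our simulations), the sketch shows a baseline input to inhibitory interneurons that oscillates at the theta frequency during locomotion and drops to a lower, unmodulated level during quiescence.

    /* Illustrative sketch (placeholder parameters) of theta-modulated drive to
     * inhibitory interneurons during locomotion and reduced, unmodulated drive
     * during quiescence. */
    #include <math.h>
    #include <stdio.h>

    #define TWO_PI 6.283185307179586

    /* Drive to inhibitory interneurons at time t (seconds). */
    static double inhibitory_drive(double t, int locomoting)
    {
        double baseline = 1.0;        /* placeholder baseline drive */
        double depth = 0.5;           /* placeholder theta modulation depth */
        double f_theta = 8.0;         /* theta frequency, Hz */
        double quiescent_level = 0.4; /* placeholder reduced drive when idle */

        if (locomoting)
            return baseline * (1.0 + depth * cos(TWO_PI * f_theta * t));
        return quiescent_level;
    }

    int main(void)
    {
        for (int i = 0; i <= 4; i++) {
            double t = i * 0.03125; /* quarters of a 125 ms theta cycle */
            printf("t=%.3f s  run: %.2f  idle: %.2f\n",
                   t, inhibitory_drive(t, 1), inhibitory_drive(t, 0));
        }
        return 0;
    }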

One possible implication of our model is that these theta-related phenomena in MEC, which here depend only on intrinsic circuitry and medial septum inputs, may drive the corresponding phenomena in hippocampus, where such particular intrinsic circuitry may not exist. In support of this idea are reports that grid cell phase precession occurs independently of hippocampus (Hafting et al., 2008), but place cell phase precession and theta sequences require the MEC (Schlesiger et al., 2015). Plasticity in the hippocampus can then help the animal leverage these phenomena during movement, just as it can help leverage replays during quiescence, to pursue cognitive goals. In this sense, phase precession would not be an epiphenomenon that serves no purpose. For example, phase precession may enable phase coding of position, which allows more accurate assignment of rewards, and theta sequences may help the animal decide between possible trajectories (Kay et al., 2019) based on previous experience. However, our results hold independent of whether these ideas about the relationship between MEC and hippocampus are borne out by future experiments.

Finally, we note that spike phases at the beginning of grid fields are consistently close to 0°/360° in our model, and not random (Figure 3F), which aligns well with experimental data (Hafting et al., 2008; O’Neill et al., 2017). The simplified conceptual model presented in Figure 4 of our manuscript provides an intuitive explanation for this feature. We thank the reviewer for the opportunity to emphasize this point in our revision.

• The wavefront replay mechanism suffers from a similar causative discrepancy in which the replay sequences do not have any particular relationship with recently expressed sequences. Experiments (e.g., Drieu, Todorova and Zugaro, 2018) are beginning to corroborate earlier theories (such as Dragoi, 2013, mentioned above) that online sequences drive plasticity necessary for subsequent replay. However, grid-cell weights in the model are hard-wired, thus precluding any direct contribution to the learning or memory consolidation roles that may be central to the replay phenomenon in general. The Discussion argues that hard-wired replay might play an initiation role for hippocampal replays, but that incorrectly assumes grid cells directly activate place cells (as in the old grid-to-place models) and ignores sharp-wave initiation studies implicating dentate gyrus, CA2, and/or ascending inputs (Sasaki, et al., 2018; Oliva, et al., 2016; Angulo-Garcia et al., 2018). The potential roles of grid cell wavefronts may be more limited than discussed, which should be addressed.

We acknowledge that our discussion of a potential role for grid cell replay, the driving of hippocampal replay, was limited, and we thank the reviewer for bringing this point and the broader literature on sharp-wave initiation to our attention. There can be advantages to having replays driven by intrinsically generated sequences, like a central pattern generator, without synaptic plasticity. These replays would be available immediately and maintain an unbiased spatial distribution. However, it is completely possible that there are multiple sites contributing to sequence initiation, including the ones mentioned by reviewer 3. Although the main results of our paper do not depend on the cognitive roles assigned to grid cell replays, we have revised our Discussion to emphasize the broader context of other potential drivers of hippocampal replay.

The potential role for grid cell replay to drive hippocampal replay does assume that grid cells activate place cells. This activation does not have to be direct; the flow of activity from grid cells in MEC LII to classical place cells in CA1, for example, can travel through dentate gyrus and CA3, although grid cells or conjunctive cells in MEC LIII may also directly activate CA1 through the temporoammonic pathway (Brun et al., 2008). The idea that grid cells influence place cells is supported by classical studies (see Bush et al., 2014 for a review) and more recent work (Hales et al., 2014; Kanter et al., 2017; Mallory et al., 2018). However, we are not asserting that grid cells are the sole drivers of place cell activity (Bush et al., 2014), and recent work also suggests that grid cells can be influenced by (Ólafsdóttir et al., 2016; Boccara et al., 2019; Butler et al., 2019) or depend upon (Chen et al., 2016) external spatial inputs, such as those from place cells in hippocampus. The origin and flow of spatial information within the hippocampal region appears to be quite complicated, and we believe the possibility that grid cells activate place cells cannot be eliminated.

• The model is based on the aperiodic network with feedforward Gaussian envelope function described by Burak and Fiete, 2009, but they also described a periodic solution. The periodic solution has problems with rotation, but some of the results (especially the generation of replays) are affected by or depend on the topology. The authors should clarify the reliance of their results on network topology and whether other formulations may also support their conclusions.

As described in our response to Essential revision 1, our new simulations with periodic boundary conditions and uniform drive demonstrate wavefronts that propagate throughout the entire neural sheet.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Source code 1. Source code for the main simulation written in C.
    elife-46351-code1.zip (584.7KB, zip)
    Transparent reporting form

    Data Availability Statement

    Source code for the simulations has been included as a supporting file.

