2020 Feb 17;30(4):396–421. doi: 10.1002/hipo.23197

An uncertainty principle for neural coding: Conjugate representations of position and velocity are mapped onto firing rates and co‐firing rates of neural spike trains

Ryan Grgurich 1, Hugh T Blair 1
PMCID: PMC7154697  PMID: 32065487

Abstract

The hippocampal system contains neural populations that encode an animal's position and velocity as it navigates through space. Here, we show that such populations can embed two codes within their spike trains: a firing rate code (R) conveyed by within‐cell spike intervals, and a co‐firing rate code (R˙) conveyed by between‐cell spike intervals. These two codes behave as conjugates of one another, obeying an analog of the uncertainty principle from physics: information conveyed in R comes at the expense of information in R˙, and vice versa. An exception to this trade‐off occurs when spike trains encode a pair of conjugate variables, such as position and velocity, which do not compete for capacity across R and R˙. To illustrate this, we describe two biologically inspired methods for decoding R and R˙, referred to as sigma and sigma‐chi decoding, respectively. Simulations of head direction and grid cells show that if firing rates are tuned for position (but not velocity), then position is recovered by sigma decoding, whereas velocity is recovered by sigma‐chi decoding. Conversely, simulations of oscillatory interference among theta‐modulated “speed cells” show that if co‐firing rates are tuned for position (but not velocity), then position is recovered by sigma‐chi decoding, whereas velocity is recovered by sigma decoding. Between these two extremes, information about both variables can be distributed across both channels, and partially recovered by both decoders. These results suggest that populations with different spatial and temporal tuning properties—such as speed versus grid cells—might not encode different information, but rather, distribute similar information about position and velocity in different ways across R and R˙. 
Such conjugate coding of position and velocity may influence how hippocampal populations are interconnected to form functional circuits, and how biological neurons integrate their inputs to decode information from firing rates and spike correlations.

Keywords: differentiation, head direction cells, integration, rhythm, speed cells, theta

1. INTRODUCTION

The rodent hippocampal system contains populations of neurons that encode an animal's position and velocity as it navigates through space. In the literature, these populations are named for variables that modulate their firing rates: “head direction” (HD) cell firing rates are tuned for the angular position of the head (Taube, Muller, & Ranck, 1990), “grid” cell firing rates are periodically tuned for the animal's spatial position (Hafting, Fyhn, Molden, Moser, & Moser, 2005), “place” cell firing rates are non‐periodically tuned for the animal's spatial position (O'Keefe & Dostrovsky, 1971), “border” (or “boundary”) cell firing rates increase (or decrease) near environmental boundaries (Lever, Burton, Jeewajee, O'Keefe, & Burgess, 2009; Savelli, Yoganarasimha, & Knierim, 2008; Solstad, Boccara, Kropff, Moser, & Moser, 2008), “speed” cell firing rates increase (or decrease) in proportion with the animal's running speed (Góis & Tort, 2018; Hinman, Brandon, Climer, Chapman, & Hasselmo, 2016; Kropff, Carmichael, Moser, & Moser, 2015), and “theta” cell firing rates are temporally modulated by 4–12 Hz theta oscillations. Although these monikers accurately describe how the firing rates of individual neurons are tuned, they may not fully describe the information that such neurons encode at the population level, because populations of spiking neurons can encode information not only in their firing rates, but in other ways as well.

Position‐tuned neurons (such as place and grid cells) exhibit oscillatory modulation of their spike trains by theta rhythm. Information about the animal's position can be decoded not only from the firing rates of these neurons (Wilson & McNaughton, 1993), but also from the phases at which spikes occur relative to theta rhythm in the local field potential (Climer, Newman, & Hasselmo, 2014; Hafting, Fyhn, Bonnevie, Moser, & Moser, 2008; Jeewajee et al., 2014; Jensen & Lisman, 2000; O'Keefe & Recce, 1993; Skaggs, McNaughton, Wilson, & Barnes, 1996; Tingley & Buzsaki, 2018). Hence, spatially tuned neurons can encode the animal's position in two different ways: in their firing rates and firing phases. These two codes map the animal's position into different representational spaces. The firing rate code maps position into a space of “population vectors” where each dimension measures time intervals between pairs of spikes that are both fired by the same neuron (because a neuron's firing rate is simply the inverse of its mean interspike interval [ISI]). By contrast, the phase code maps position into a representational space where each dimension measures time intervals (normalized by the theta cycle period) between pairs of spikes that are fired by two different neurons (e.g., between a place or grid cell and a neuron that spikes in synchrony with the theta LFP). If neural populations can simultaneously encode information in different ways, then how should information about the world be distributed among different coding channels?

Here, we shall propose how neural populations divide information between two coding channels, which shall be referred to as firing rates versus co‐firing rates. It is worthwhile to further explain what is meant here by the term “co‐firing rate.” A traditional firing rate code may be considered an example of a within‐cell spike code that maps information onto intervals between pairs of spikes fired by the same neuron. By contrast, a phase code may be considered an example of a between‐cell spike code that maps information onto intervals between pairs of spikes fired by different neurons. This distinction between within‐ versus between‐cell spike codes is orthogonal to the distinction between “rate” and “time” codes. In addition to being a within‐cell spike code, a traditional firing rate code is also a “rate” code (hence the name, firing “rate”), because firing rates are derived by averaging spike intervals over time, and this discards information about precise temporal sequences of spikes. Alternatively, consider a code that represents information using precise temporal sequences of spikes fired by a single neuron; this would be an example of a within‐cell spike code that is also a “time” code. Hence, standard firing rate codes are a particular subset of within‐cell spike codes, namely, those which map information onto time‐averaged within‐cell spike intervals (rather than onto temporal patterns of within‐cell spike intervals). We may analogously define “co‐firing rate codes” to be the subset of between‐cell spike codes that map information onto time‐averaged between‐cell spike intervals, rather than onto temporal patterns of between‐cell spike intervals. 
Note that standard firing rates are an unsigned quantity (there is no such thing as a negative firing rate), but co‐firing rates are a signed quantity, because a distinction can be drawn between the mean interval at which a spike from neuron A follows a spike from neuron B (positive co‐firing rate) and the mean interval at which a spike from neuron B follows a spike from neuron A (negative co‐firing rate). In summary, we define a “rate code” broadly to be any code that maps information onto time‐averaged spike intervals, rather than onto temporal patterns of spike intervals. The term “firing rate” shall henceforth refer to time‐averaged measurements of within‐cell spike intervals, and the term “co‐firing rate” shall refer to time‐averaged measurements of between‐cell spike intervals.

We propose below that firing rates and co‐firing rates behave as conjugate coding channels, and consequently, their information content is regulated by principles that mirror the well‐known “uncertainty principle” from physics. The uncertainty principle states that the more accurately we know a particle's position, the less accurately we can know its momentum (which is identical to velocity, after normalizing for mass), and vice versa. This trade‐off arises because position and velocity are conjugate variables, related to one another via the Fourier transform. Simulations presented below show that a neural population can simultaneously embed two codes within its spike trains: a firing rate code (R) derived from within‐cell spike intervals, and a co‐firing rate code (R˙) derived from between‐cell spike intervals. It is postulated that R and R˙ behave as conjugates of one another, and thus obey an uncertainty principle for neural coding: the more information is encoded by R, the less information can be encoded by R˙ (and vice versa). Hence, the firing rate and co‐firing rate channels together cannot convey more information than either channel alone. But a special case arises when neurons simultaneously encode a variable, q, and its conjugate, q˙=dq/dt (e.g., position and velocity). In such cases, conjugacy between R and R˙ complements conjugacy between q and q˙, and states of the phase space q×q˙ can be encoded simultaneously in firing rates and co‐firing rates, without competition for capacity across channels. The conjugate relationship between R and R˙ thus confers limitations as well as advantages for packaging information about the world into neural spike trains. We shall express these constraints as a set of formal mathematical postulates (yet to be proven), and discuss how these constraints might influence neural representations of position and velocity in the hippocampal system. 
We also discuss how conjugate coding might impact the computations that biological neurons perform upon their inputs, by making it possible to extract useful information from both firing rates and co‐firing rates of neural spike trains.

2. RESULTS

All simulations presented here use artificial spike trains generated from real behavioral data (no spike trains recorded from biological neurons are used). A single neuron's spike train shall be represented by a neural response function stored as a series of binary values sampled synchronously at discrete time points,

s_n = {s_n[t_0], s_n[t_1], …, s_n[t_k], …, s_n[t_K]}, (1)

where sn[tk] = 1 if neuron n fires a spike at time tk, and sn[tk] = 0 otherwise. A fixed sampling rate of 1 kHz shall be used for all simulations. Spike trains generated by a population of N neurons are stored in a spike response matrix:

S = {s_1, s_2, …, s_N}^T = {S_0, S_1, …, S_k, …, S_K}, (2)

where each row, sn, contains the neural response function of a different neuron, and each column, Sk = {s1[tk], s2[tk], …, sN[tk]}T, contains a population vector of binary values flagging which neurons fired a spike at time tk.
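The spike response matrix of Equations (1) and (2) can be sketched in NumPy as a binary N × K array (a minimal illustration; the helper name and example spike times are our own, not from the paper):

```python
import numpy as np

def spike_response_matrix(spike_times_per_neuron, duration_s, fs=1000):
    """Build the binary N x K spike response matrix of Equation (2).

    Row n is the neural response function s_n of Equation (1), sampled
    at fs Hz (1 kHz in the paper); S[n, k] = 1 iff neuron n fired
    a spike in time bin t_k.
    """
    K = int(duration_s * fs)
    N = len(spike_times_per_neuron)
    S = np.zeros((N, K), dtype=np.uint8)
    for n, times in enumerate(spike_times_per_neuron):
        bins = np.round(np.asarray(times) * fs).astype(int)
        S[n, bins[bins < K]] = 1
    return S

# Two neurons over 10 ms: neuron 0 spikes at 2 ms and 5 ms, neuron 1 at 3 ms
S = spike_response_matrix([[0.002, 0.005], [0.003]], duration_s=0.01)
```

Each column of `S` is then the population vector S_k flagging which neurons spiked in bin t_k.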

2.1. Computing angular velocity by differentiating HD cell spike trains

In this section, spike trains of HD cells are simulated to encode an animal's angular head position—but not angular head velocity—in their firing rates. It is shown that head angle can be recovered from the HD cell population's firing rates by a process referred to as sigma decoding. It is further shown that even if individual neurons do not encode information about angular velocity in their firing rates, velocity information can nonetheless be recovered from the HD cell population's co‐firing rates by a process referred to as sigma‐chi decoding.

2.1.1. Simulation of HD cell spike trains

Simulated HD cell spike trains were generated from head angle data obtained while a behaving rat foraged freely for food pellets in a cylindrical arena (see Section 4). Tracking data was upsampled to 1 kHz to match the spike train simulation rate, yielding a time series of head azimuth angles, q = {q0, q1, …, qk, …, qK}. HD cell tuning curves were simulated by a Von Mises distribution, so that the probability of HD cell n firing a spike at time tk was given by:

p_n[t_k] = R_max e^(κ cos(q_k − q̂_n)) / (2π I_0(κ)) dt, (3)

where q̂_n is the preferred firing direction for neuron n, Rmax is the peak firing rate of the HD cell in Hz, κ is a concentration parameter which regulates the HD tuning width, I0(κ) is the modified Bessel function of order 0, and dt = 1 ms is the time bin resolution. Note that parameters Rmax and κ are not indexed by n, because all HD cells shared the same values of these parameters. Figure 1a shows HD tuning curves for N = 12 simulated HD cells with Rmax = 100 Hz and κ = 0.5. At each time step, the value of pn[tk] was used as a threshold on the output of a pseudorandom number generator, to stochastically determine whether HD cell n fired a spike at time tk. The binary values generated by this stochastic process filled a spike response matrix (Equation (2)) for the HD cell population. Figure 1b shows spike rasters for 12 simulated HD cells during a 10 s segment of head angle data.
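The stochastic spike generation described above can be sketched as follows (a minimal NumPy illustration of Equation (3); the function name, random seed, and constant-velocity example trajectory are our own):

```python
import numpy as np

def simulate_hd_spikes(q, n_cells=12, r_max=100.0, kappa=0.5, dt=1e-3, seed=0):
    """Stochastically generate HD cell spike trains from a head-angle
    time series q (radians), following the Von Mises tuning of Equation (3)."""
    rng = np.random.default_rng(seed)
    q_hat = 2 * np.pi * np.arange(n_cells) / n_cells   # preferred directions
    # p[n, k]: probability that cell n fires a spike in time bin k
    p = r_max * np.exp(kappa * np.cos(q[None, :] - q_hat[:, None])) \
        / (2 * np.pi * np.i0(kappa)) * dt
    # compare p against uniform pseudorandom draws to emit binary spikes
    return (rng.random(p.shape) < p).astype(np.uint8)

# 10 s of head angle sweeping at a constant rate, sampled at 1 kHz
q = np.linspace(0.0, 4 * np.pi, 10_000) % (2 * np.pi)
S = simulate_hd_spikes(q)
```

`np.i0` is NumPy's modified Bessel function of order 0, matching the normalization term in Equation (3).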

Figure 1. Sigma decoder recovers head angle from HD cell firing rates. (a) Von Mises tuning functions were used to simulate spike trains of 12 HD cells. (b) Black line shows 10 s sample of head angle data, q; rasters show spike trains of 12 HD cells. (c) Head angle data (black line) superimposed over colored lines showing HD cell firing rates derived by Equation (5) (τu = 200 ms). (d) Sigma neurons sum simulated HD cell firing rates to decode sine and cosine components of the head angle. (e) Arctangent function recovers the estimated head angle, q∑, from its sine and cosine components; head angle cannot be recovered by the sigma‐chi decoder, q∑χ.

2.1.2. Sigma decoding: recovering head angle from HD cell firing rates

The head angle can be recovered from simulated HD cell spike trains using a standard vector summation method for decoding neural firing rates. This method—henceforth referred to as sigma decoding—involves three steps, which can be written as a sequence of mapping functions: S → R → Y → q∑. The first step, S → R, converts spike trains into firing rates via a process that mimics temporal integration of spike trains at synapses. The second step, R → Y, computes weighted sums of firing rates to obtain estimates for the sine and cosine components of the head angle. The third step, Y → q∑, applies the arctangent function to estimate the head angle from its sine and cosine components. We shall write q∑ to denote the time series of estimated head angles generated by the sigma decoder.

Step 1: Converting spike trains into firing rates

HD cell spike trains were converted into firing rates via a process intended to mimic temporal integration at synapses. The input to this process is a spike response matrix S (Equation (2)), and the output is a firing rate response matrix:

R = {r_1, r_2, …, r_N}^T = {R_0, R_1, …, R_k, …, R_K}. (4)

Each column of this matrix, Rk = {r1[tk], r2[tk], …, rN[tk]}T, is a population vector containing instantaneous HD cell firing rates measured at time tk. Each row, rn = {rn[t0], rn[t1], …, rn[tK]}, is a time series of firing rate estimates for a single HD cell obtained by convolving its spike train with an exponential decay kernel,

r_n = s_n * u. (5)

Individual elements of the decay kernel, u = {u_0, u_1, …, u_l, …}, are given by

u_l = e^(−l/τ_u), (6)

where τu is the decay constant, and l indexes time bins at positive offsets from the current time bin, tk (kernel weights were uniformly zero for negative time offsets). Figure 1c illustrates examples of simulated HD cell firing rates with τu = 200 ms.
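Step 1 can be sketched as a causal exponential convolution (Equations (5) and (6)); truncating the kernel at five time constants is our own simplification:

```python
import numpy as np

def firing_rates(S, tau_u=0.2, dt=1e-3):
    """Leaky integration of spike trains (Equations (5) and (6)).

    Convolves each row of S with a causal exponential kernel
    u_l = exp(-l * dt / tau_u); weights are zero at negative offsets,
    so the rate estimate at t_k depends only on past spikes."""
    L = int(5 * tau_u / dt)                # truncate kernel at 5 time constants
    u = np.exp(-np.arange(L) * dt / tau_u)
    return np.array([np.convolve(s, u)[:S.shape[1]] for s in S])

# One spike train with spikes at 100 ms and 500 ms: the rate estimate
# jumps by 1 at each spike, then decays with time constant tau_u = 200 ms
S = np.zeros((1, 1000), dtype=np.uint8)
S[0, [100, 500]] = 1
R = firing_rates(S)
```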

Step 2: Converting firing rates into angle components

Firing rates were converted into sine and cosine components of the head angle via spatial integration of synaptic inputs to a pair of sigma units (Figure 1d) that decoded sin(q) and cos(q). Activation of these two sigma units can be represented by a response matrix, Y = {y_sin, y_cos}^T = {Y_0, Y_1, …, Y_k, …, Y_K}, where the two rows y_sin and y_cos are time series of decoded estimates for sin(q) and cos(q), and each column Y_k = {sin_k, cos_k}^T contains the estimated sine and cosine of the decoded head angle at time tk. The output from decoder neuron m was computed as a weighted sum of its firing rate inputs:

y_m[t_k] = Σ_{n=1}^{N} w_{m,n} r_n[t_k], (7)

where wm,n is a synaptic weight assigned to the nth firing rate input received by decoder neuron m. In most simulations, the input weights were wm,n = sin(2πn/N) or wm,n = cos(2πn/N) for the sine versus cosine decoder neurons, respectively. But in some simulations (see Section 2.1.4), the weight vector was instead derived via the pseudoinverse method to obtain an optimal fit between the decoded and actual head angle data.

Step 3: Recovering head angle via the arctangent function

The final decoding step, Y → q∑, was performed by taking the arctangent of decoder neuron outputs at each time step:

q∑[t_k] = atan2(sin_k, cos_k). (8)

Figure 1e shows that q∑ (red line) provides an accurate estimate of q (black line), except that q∑ is delayed in time with respect to q by a latency approximately equal to the exponential decay constant (τu = 200 ms for the simulation shown). The dynamics of the animal's head turning set an upper bound on τu, because the decay constant must not be set so large that q can undergo large changes within the span of the integration time window. Below this upper bound, there is a trade‐off between accuracy and latency of the decoded position signal: increasing τu to integrate over longer time periods improves the accuracy of q∑ (since a larger time window contains more spikes and thus more information), at the cost of delaying the decoded signal in time (Figure 2a). This type of latency shift can be eliminated if inputs to the decoder are modulated by velocity as well as position (Eliasmith, 2005), but this trick will not be utilized here, because our purpose is to demonstrate how velocity information can be recovered from co‐firing rates in the case where individual firing rates exclusively encode position, and contain no information about velocity.
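Steps 2 and 3 together can be sketched as follows, using the sin(2πn/N) and cos(2πn/N) weights described in the text; the noiseless tuning-curve check at the end is our own illustration, not one of the paper's simulations:

```python
import numpy as np

def sigma_decode(R):
    """Sigma decoding steps R -> Y -> q (Equations (7) and (8)).

    Sine and cosine sigma units compute weighted sums of the N firing
    rates with weights sin(2*pi*n/N) and cos(2*pi*n/N); atan2 then
    recovers the decoded head angle at each time bin."""
    N = R.shape[0]
    n = np.arange(1, N + 1)
    y_sin = np.sin(2 * np.pi * n / N) @ R   # decoded sin(q) per time bin
    y_cos = np.cos(2 * np.pi * n / N) @ R   # decoded cos(q) per time bin
    return np.arctan2(y_sin, y_cos)

# Noiseless check: feed Von Mises tuning-curve responses for two head
# angles directly into the decoder; preferred directions 2*pi*n/N match
# the decoder weights above
q_true = np.array([0.5, 2.0])
q_hat = 2 * np.pi * np.arange(1, 13) / 12
R = np.exp(0.5 * np.cos(q_true[None, :] - q_hat[:, None]))
q_est = sigma_decode(R)
```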

Figure 2. Accuracy‐latency tradeoff for spike train decoding. (a) Mean‐squared error (MSE; y‐axis) for decoding angular head position from simulated HD cell firing rates at varying latencies (x‐axis); larger decay constants (τ) improve decoding accuracy at a cost of increased latency. (b) MSE for decoding angular velocity from simulated HD cell co‐firing rates at varying latencies; optimal latencies are approximately doubled from those in (a) because the sigma‐chi decoder of co‐firing rates uses two temporal integration steps.

2.1.3. Sigma‐chi decoding: recovering angular velocity from HD cell co‐firing rates

Simulated HD cell firing rates were modulated only by the angular head position, q, and not by the angular head velocity, q˙ (Equation (3)). Nonetheless, q˙ can be decoded from HD cell spike trains using a process we shall refer to as sigma‐chi decoding, which involves a sequence of three steps: S → X → R˙ → q˙∑χ. The first step, S → X, performs nonlinear integration on pairs of spike trains to derive a quantity we call the chi rate. The second step, X → R˙, pools chi rates to extract a vector of co‐firing rates that form a frequency‐domain representation of angular velocity, encoded by a population of neurons called sigma‐chi units. The third step, R˙ → q˙∑χ, decodes angular head velocity from sigma‐chi units via vector summation of co‐firing rates. We write q˙∑χ to denote the time series of angular head velocities recovered by the sigma‐chi decoder.

Step 1: Deriving chi rates from spike train pairs

The firing rate of a single spike train is inversely proportional to the ISIs between pairs of spikes in the train, and can thus be measured via leaky integration of the spike train (Equation (5)). We analogously define the chi rate for a pair of spike trains, i and j, to be a quantity that is inversely proportional to ISIs between pairs of spikes where one spike comes from neuron i, and the other from neuron j:

χ_{i,j} = (s_i ∘ r_j) * v, (9)

where χi,j denotes the chi rate between spike trains i and j, ∘ denotes the element‐wise product of same‐length vectors, and v is an exponential decay kernel (exactly as in Equation (5), but with a different decay constant, τv). The term si ∘ rj may be regarded as a "carbon copy" of si in which spikes no longer have unit amplitude, but instead have a real‐valued amplitude that is scaled by rj. After the spike amplitudes are scaled, they are temporally integrated by the exponential decay kernel, v. Hence, the chi rate is derived by sequential integration with two decay constants: τu (Equation (5)) and τv (Equation (9)). For simplicity, we set τu = τv for all simulations presented here. However, setting these decay constants to different values may sometimes help to improve the accuracy of decoding information from co‐firing rates. Possible biological substrates for computing the chi rate shall be considered in the Discussion.
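Equation (9) can be sketched as follows; the two-spike example illustrating the order sensitivity of χi,j versus χj,i is our own:

```python
import numpy as np

def chi_rate(s_i, r_j, tau_v=0.2, dt=1e-3):
    """Chi rate for an ordered pair of spike trains (Equation (9)).

    Each spike of neuron i is scaled by neuron j's instantaneous firing
    rate (element-wise product s_i * r_j), and the scaled train is then
    leaky-integrated with a second causal exponential kernel v."""
    L = int(5 * tau_v / dt)
    v = np.exp(-np.arange(L) * dt / tau_v)
    return np.convolve(s_i * r_j, v)[:len(s_i)]

# Neuron j fires 50 ms before neuron i, so r_j is still high when i
# spikes (chi_ij grows large), while r_i is zero when j spikes (chi_ji
# stays at zero) -- the signed, order-sensitive behavior in Figure 3
u = np.exp(-np.arange(1000) * 1e-3 / 0.2)   # rate kernel, tau_u = 200 ms
s_i = np.zeros(1000)
s_i[150] = 1.0
s_j = np.zeros(1000)
s_j[100] = 1.0
r_i = np.convolve(s_i, u)[:1000]
r_j = np.convolve(s_j, u)[:1000]
chi_ij = chi_rate(s_i, r_j)
chi_ji = chi_rate(s_j, r_i)
```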

Figure 3 illustrates how χi,j is influenced by q and q˙ for the case where neurons i and j are HD cells with nearby preferred directions, q̂i = 0° and q̂j = 60°. These two HD cells only generate spikes when the head angle is in the neighborhood of both q̂i and q̂j, so clearly, χi,j depends upon q. However, χi,j is also sensitive to q˙. To see why this is so, consider what happens during the two shaded periods in Figure 3, when the head angle passes through q̂i and q̂j in the clockwise (q˙<0) versus counterclockwise (q˙>0) directions. During the first period where q˙<0, we see that χi,j (Figure 3b, blue) grows large, but χj,i (Figure 3c, yellow) does not. This is because neuron j spikes prior to neuron i, so that rj is nonzero when si starts firing and si ∘ rj grows large (Figure 3b), but ri is near zero when sj starts firing so sj ∘ ri does not grow large (Figure 3c). During the second shaded period where q˙>0, exactly the opposite occurs, and therefore, χj,i grows large but χi,j does not. Hence, χi,j for neighboring HD cells depends not only upon q, but also upon q˙. To derive a signal that is dependent purely upon q˙ and not upon q, we may pool χi,j values across HD cell pairs that share the same angle of separation, Δq̂i,j.

Figure 3. Chi rates for a pair of HD cells. (a) Black line shows 10 s sample of the head angle q, red line shows angular head velocity q˙ during the same period; colored lines show preferred firing directions for simulated HD cells i and j (which are the same as HD cells 1 and 3, respectively, from Figure 1). (b,c) Dependency of chi rates χi,j (b) and χj,i (c) upon angular velocity during two time periods when q˙<0 and q˙>0 (gray shading; see main text for further explanation).

Step 2: Pooling chi rates to derive co‐firing rates

Given a population of N spiking neurons, there are N² unique ordered pairings between the neurons, including pairings of each neuron with itself (it is necessary to consider ordered rather than unordered pairings, since χi,j ≠ χj,i; as outlined in the introduction, co‐firing rates are signed quantities, so χi,j and χj,i may be regarded as separate channels for conveying positive versus negative values of the co‐firing rate between neurons i and j). This forms an N × N matrix of chi rates. For HD cells, the rows and columns of this matrix can be sorted by preferred directions, q̂1, q̂2, …, q̂N. If a time series of chi rates, χi,j, is placed in each matrix entry, then the combined entries form a time series of 2D matrices, henceforth denoted as X (Figure 4a). At each time slice tk, X contains a 2D matrix, Xk, in which each entry is the instantaneous chi rate, χi,j[tk], between a pair of HD cells, i and j. The main diagonal of Xk contains chi rates of HD cells paired with themselves; flanking the main diagonal there are N − 1 diagonal bands composed of entries containing chi rates between pairs of HD cells that share the same angle of separation, Δq̂i,j = q̂j − q̂i, between their preferred directions. These diagonal bands wrap around at the matrix edges, since Δq̂i,j is a circular variable. We shall index these diagonal bands by m = 0, 1, 2, …, N − 1, such that the main diagonal (i = j) is indexed by m = 0, and each nonzero index m references a band containing chi rates between pairs of HD cells whose preferred directions are separated by a common angle, Δq̂m = m × 360°/N. The activation of sigma‐chi unit m can then be expressed as a co‐firing rate, which is the sum of all chi rates along a single band (Figure 4b):

r˙_m[t_k] = Σ_{i=1}^{N} χ_{i, mod(i+m, N)}[t_k], (10)

where i indexes each of the matrix elements along diagonal m. By summing up chi rates in this way, sigma‐chi units acquire direction‐independent sensitivity to angular velocity.
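The band summation of Equation (10) can be sketched with 0-based indexing (our convention; the paper indexes neurons from 1):

```python
import numpy as np

def cofiring_rates(X_k):
    """Sum an N x N chi-rate matrix along its wrapped diagonals
    (Equation (10), 0-based indexing).

    Band m collects every ordered pair (i, (i + m) mod N), i.e., all
    HD cell pairs separated by m * 360 / N degrees; its sum is the
    co-firing rate of sigma-chi unit m at this time step."""
    N = X_k.shape[0]
    i = np.arange(N)
    return np.array([X_k[i, (i + m) % N].sum() for m in range(N)])

# The identity matrix has chi rate only on the main diagonal, so all
# of its mass lands in band m = 0
r_dot = cofiring_rates(np.eye(3))
```

The modular column index makes the diagonal bands wrap around at the matrix edges, as required for a circular variable.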

Figure 4. Sigma‐chi units compute co‐firing rates that encode angular head velocity. (a) Chi rate matrix contains diagonal bands along which simulated HD cell pairs share the same angle of separation, Δq̂ = |q̂j − q̂i|. (b) Sigma‐chi units compute co‐firing rates by summing chi rates along each diagonal band. (c) Each sigma‐chi unit's co‐firing rate exhibits selective tuning for angular head velocity.

The sensitivity of co‐firing rates to angular velocity depends upon the time constants of temporal integration, τu and τv, and also upon the spatial separation, Δq̂m, between the preferred directions of HD cell pairs along the diagonal indexed by m. To see why this is so, it is helpful to think of r˙m as a measurement of the change in head angle, Δq, over a specific time span, Δt. The duration of Δt is determined by τu and τv, so these decay constants should be chosen to maximize sensitivity within a representative range of angular velocities that occurs in the behavior data. In simulations presented here, we varied τu and τv to study their influence upon decoding accuracy (Figure 2), but we did not permit different sigma‐chi neurons to have different time constants (hence, in any given simulation, all of the sigma‐chi neurons integrated Δq across the same span of time). However, since each sigma‐chi unit has its own value of Δq̂m, different units measure Δq with a different sized "yardstick." This endows each unit with its own preferred frequency of head rotation. The angular separations Δq̂0, Δq̂1, …, Δq̂N−1 thus act like a set of spatial frequencies for representing angular velocity in the Fourier domain. This co‐firing rate code for angular head velocity is conjugate and orthogonal to the firing rate code for angular head position. Sigma‐chi units indexed by m = 1, 2, …, (N/2) − 1 have positively sloped tuning functions (Figure 4c, solid lines), and thus generate co‐firing rates that are analogous to positive frequency components. Sigma‐chi units indexed by m = (N/2) + {1, 2, …, (N/2) − 1} have negatively sloped tuning functions (Figure 4c, dashed lines), and thus generate co‐firing rates that are analogous to negative frequency components. The unit indexed by m = 0 sums over the chi rates of each HD cell with itself, and symmetrically decreases its activity for turning in either direction.
The unit indexed by m = N/2 (assuming even‐valued N) sums over chi rates of HD cells that have 180° opposing preferred directions, and symmetrically increases its activity for turning in either direction. Figure 4c shows that activation of sigma‐chi neurons is modulated by head turning in a manner that is quite similar to “angular velocity cells” that have been reported in the head direction system (Sharp, 1996; Stackman & Taube, 1998).

Step 3: Decoding velocity from sigma‐chi units

Across time, the sigma‐chi rates defined by Equation (10) form a co‐firing rate response matrix:

R˙ = {r˙_0, r˙_1, …, r˙_{N−1}}^T = {R˙[t_0], R˙[t_1], …, R˙[t_K]}. (11)

Each row of this matrix, r˙_m = {r˙_m[t_0], r˙_m[t_1], …, r˙_m[t_K]}, stores a time series of co‐firing rates generated by sigma‐chi unit m (Figure 5a). Each column, R˙[t_k] = {r˙_0[t_k], r˙_1[t_k], …, r˙_{N−1}[t_k]}^T, stores the instantaneous population vector of co‐firing rates at time step k. A mapping from co‐firing rates onto an estimate of the angular velocity, R˙ → q˙∑χ, can be accomplished by computing a weighted sum of co‐firing rates (Figure 5b):

q˙∑χ[t_k] = Σ_{m=0}^{N−1} w˙_m r˙_m[t_k], (12)

where w˙m is a synaptic weight assigned to the mth co‐firing rate. To derive the synaptic weights in Equation (12), the pseudoinverse method was used to find a weight vector that minimized the error between q˙ and q˙∑χ for a training set of HD cell spike trains. Decoding accuracy was then tested on an independent test set of spike trains, derived from different behavior data. Figure 5c shows q˙ and q˙∑χ for a 10 s example simulation where τu = τv = 200 ms. Weight fitting was repeated at different time offsets between q˙ and q˙∑χ (since a time delay is introduced by the decay kernels in the model), and the best fits were obtained when q˙∑χ was delayed from q˙ by approximately twice the value of the integration time constant (Figure 2b). This is to be expected, since R˙ is obtained from two sequential integration steps (using decay constants τu and τv), and it illustrates an important asymmetry that we shall return to in the discussion: information coded in the co‐firing rate channel is generally time delayed with respect to information coded in the firing rate channel.
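The pseudoinverse weight fit can be sketched as an ordinary least-squares solve; the toy exact-recovery example below is our own, not behavioral data:

```python
import numpy as np

def fit_decoder_weights(responses, target):
    """Pseudoinverse fit of decoder weights, as used for Equation (12).

    Finds w minimizing ||w @ responses - target||^2, where `responses`
    is an N x K response matrix (firing or co-firing rates) and `target`
    is the K-sample signal to decode (e.g., angular velocity)."""
    return target @ np.linalg.pinv(responses)

# Toy check: when the target is an exact linear combination of the
# response rows, the pseudoinverse recovers the mixing weights
rng = np.random.default_rng(1)
responses = rng.random((4, 200))
w_true = np.array([0.5, -1.0, 2.0, 0.25])
w_hat = fit_decoder_weights(responses, w_true @ responses)
```

In the paper's setting the weights would be fit on a training set of spike trains and evaluated on held-out behavior data, optionally at several time offsets to account for the decay-kernel delay.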

Figure 5. Sigma‐chi decoder recovers angular velocity from HD cell co‐firing rates. (a) Colored lines show 10 s sample of co‐firing rates computed by 12 sigma‐chi units; gray line shows angular velocity during the simulation. (b) True angular head velocity data (black) plotted alongside angular velocity signals recovered from co‐firing rates via the sigma‐chi decoder (q˙∑χ), and from firing rates via the sigma decoder (q˙∑).

Sigma‐chi units perform an operation of neural differentiation to derive q˙ from q. This differentiation process may be regarded as an inversion of the standard path integration process by which HD cells are proposed to derive q from q˙ in attractor‐integrator network models (Zhang, 1996). Since the velocity signal encoded by R˙ is delayed in time, it shall be referred to as a post‐positional velocity signal, to distinguish it from pre‐positional velocity signals (such as vestibular outputs) that normally would be used for angular path integration.

2.1.4. Conjugacy of firing rates and co‐firing rates

We have seen that q can be recovered from HD cell firing rates using the sigma decoder (Figures 1 and 2a), and q˙ can be recovered from HD cell co‐firing rates using the sigma‐chi decoder (Figures 2b and 5b). What happens if we swap the decoders, and thereby attempt to recover q from co‐firing rates with the sigma‐chi decoder, and recover q˙ from firing rates with the sigma decoder? We shall write q∑χ to denote position recovered from co‐firing rates by the sigma‐chi decoder, and q˙∑ to denote velocity recovered from firing rates by the sigma decoder.

To derive q∑χ, HD cell co‐firing rates R˙ (instead of firing rates) were delivered as input to the sine and cosine decoder neurons (Figure 6a). The pseudoinverse method was then used to find a weight vector that minimized error between q∑χ and q. Decoding accuracy was tested on a novel set of spike trains derived from head angle data that was independent from the data used to derive the weights. Figure 1e shows that q∑χ (blue line) generated chance‐level estimates of the true head angle for the 10 s example dataset. To further test whether co‐firing rates contained information about q, we used the pseudoinverse method to fit a weighted sum of both R and R˙ to q, thereby obtaining a prediction q∑ + ∑χ of the head angle which was derived simultaneously from both firing rates and co‐firing rates. Over 10 independent simulations with different head angle data (Figure 6b), we found that q∑ accurately predicted q regardless of the integration time constant value, but q∑χ could not accurately predict q at any value of the time constant, and q∑ + ∑χ was never a more accurate predictor of q than q∑. Hence, decoding head angle from firing rates and co‐firing rates together was not more accurate than decoding from firing rates alone. From this, we conclude that co‐firing rates of simulated HD cell spike trains conveyed no information about q.

Figure 6.

Figure 6

Conjugate representations of position and velocity. (a) Spike trains from a ring of 12 simulated HD cells are fed to a sigma or sigma‐chi decoder. (b) Decoding error (MSE, y‐axis) was computed over a range of integration time constants: τ = .4, .2, .1, .05, .025, .01 s (x‐axis, plotted on a log10 scale); head angle was accurately decoded from HD cell firing rates (q∑) for all τ, but was near chance levels when decoded from co‐firing rates (q∑χ), and decoding from both channels (q∑ + ∑χ) did not improve upon decoding from firing rates alone. (c) Angular velocity was accurately decoded from simulated HD cell co‐firing rates (q˙∑χ) using sufficiently large integration time constants (τ > .1), but could not be decoded from firing rates (q˙∑) at any τ; decoding from both channels (q˙∑ + ∑χ) did not improve upon decoding from co‐firing rates alone

To derive q˙∑, the HD cell firing rates R (instead of co‐firing rates) were delivered as input to a sigma neuron (Figure 6a), which computed their weighted sum (Equation (7)). The pseudoinverse method was used to find a weight vector that minimized error between q˙∑ and q˙. Decoding accuracy was tested on a novel set of spike trains derived from independent head angle data. Figure 5c shows that the angular velocity recovered from the sigma decoder, q˙∑ (red line), did not accurately estimate the true angular velocity for the 10 s example dataset. To further test whether firing rates contained information about q˙, we fit a weighted sum of both R and R˙ to q˙, thereby obtaining a prediction q˙∑ + ∑χ of the angular velocity which was obtained simultaneously from both firing rates and co‐firing rates. Over 10 independent simulations with different head angle data (Figure 6c), we found that q˙∑χ accurately predicted q˙ for sufficiently large values of the integration time constant, but q˙∑ could not accurately predict q˙ at any value of the integration time constant, and q˙∑ + ∑χ was never a more accurate predictor of q˙ than q˙∑χ. Hence, decoding angular velocity from firing rates and co‐firing rates together was not more accurate than decoding from co‐firing rates alone (Figure 5c). From this, we conclude that firing rates of simulated HD cell spike trains conveyed information only about q, and not about q˙.

These simulations show that when simulated HD cell spike trains encode only the head angle (but not angular velocity) in their firing rates, the firing rate and co‐firing rate channels convey orthogonal representations of position and velocity. Firing rates exclusively encode information about angular position (and not angular velocity), whereas co‐firing rates exclusively encode information about angular velocity (and not angular position).

2.2. Computing speed by differentiating grid cell spike trains

In much the same way that HD cells encode a periodic representation of an animal's angular head position, grid cells encode a periodic representation of an animal's translational position within a spatial environment. In this section, it is shown that when grid cell spike trains are fed as input to the sigma‐chi decoder, sigma‐chi neurons behave similarly to “speed cells” that have been reported in the hippocampus and entorhinal cortex (Góis & Tort, 2018; Hinman et al., 2016; Kropff et al., 2015).

2.2.1. Simulation of grid cell spike trains

When a rat runs on a circular track, grid cells fire periodically as a function of the distance travelled around the track's circumference (Jacob et al., 2019). To simulate this periodic firing, grid cell spike trains were generated from position data obtained as a rat ran laps on a 1.5 m diameter circular track while harnessed to a boom arm (Jayakumar et al., 2019). The rat's position on the track was sampled at 1 kHz by the angle of the boom arm, yielding a time series of angles measured in radians, ϕ = {ϕ0, ϕ1, …, ϕk, …, ϕK}. For simplicity, the vertex spacing of simulated grid cells was assigned to be exactly one third of the distance around the track, Λ = C/3, where C = 471 cm is the track circumference. Each grid cell thus fired at three stable preferred locations during every lap around the track (Figure 7a). Grid cell tuning curves were simulated using the same Von Mises distribution described above for simulating HD cell tuning curves (Equation (3)), except that qk = 3ϕk now represents the rat's angular position within the grid spacing interval, Λ, rather than the head azimuth angle. For grid cell simulations, Von Mises tuning parameters were Rmax = 100 Hz and κ = 0.25; the spatial phases of the grids, q̂1, q̂2, …, q̂N, were evenly spaced over Λ. As described above for HD cell simulations, Equation (3) was used to threshold a pseudorandom number generator and stochastically determine whether grid cell n fired a spike at time tk. Binary values generated by this stochastic process filled a spike response matrix (Equation (2)), in which all spike trains exhibited Poisson rate statistics. In rodents, grid cell firing is typically modulated by theta rhythm, so real grid cells do not generate Poisson spike trains; here, however, theta modulation of grid cells is disregarded for simplicity.
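The spike-generation step described above can be sketched as follows. The exact form of the Von Mises tuning curve (Equation (3)) is defined earlier in the paper, so the peak-normalized parameterization used here is an assumption, as is the simple position trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                 # 1 ms time step, matching the 1 kHz sampling
Rmax, kappa = 100.0, 0.25  # tuning parameters from the grid cell simulations
K = 20000                  # 20 s of data

# Toy trajectory: two traversals of the grid spacing interval.
q = np.linspace(0, 4 * np.pi, K) % (2 * np.pi)
q_pref = np.pi             # spatial phase of one simulated grid cell

# Von Mises tuning curve (peak normalized to Rmax; an assumed normalization).
rate = Rmax * np.exp(kappa * (np.cos(q - q_pref) - 1.0))

# Spike probability per bin thresholds a pseudorandom number generator,
# yielding a binary spike train with Poisson-like rate statistics.
spikes = (rng.random(K) < rate * dt).astype(int)
print(spikes.sum())
```

With κ = 0.25 the tuning is broad, so the cell fires at a substantial rate even far from its preferred phase, as in the paper's simulations.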

Figure 7.

Figure 7

Decoding position and running speed from grid cells. (a) Von Mises tuning functions were used to simulate periodic spatial tuning of grid cells on a circular track. (b) Black line shows 15 s sample of position data, q; rasters show spike trains of 12 simulated grid cells. (c) True position (black line) superimposed over position decoded from firing rates (q∑) and co‐firing rates (q∑χ). (d) True running speed (black line) superimposed over running speed decoded from firing rates (q˙∑) and co‐firing rates (q˙∑χ). (e) Speed tuning of co‐firing rates generated by different sigma‐chi neurons (indexed by m). (f) Latency of speed tuning is proportional to the value of integration time constants

2.2.2. Decoding position and speed from grid cell spike trains

Figure 7b shows spike rasters for 12 simulated grid cells over a 15 s time period during which the rat ran almost 2/3 lap on the track (which is nearly 2 complete traversals of the spacing interval, Λ). Figure 7c shows that the sigma decoder accurately recovers q (but fails to recover running speed) from grid cell firing rates (τu = τv = 200 ms in these simulations). Conversely, Figure 7d shows that the sigma‐chi decoder accurately recovers running speed (but fails to recover q) from grid cell co‐firing rates. When grid cell spike trains are used as inputs to the sigma‐chi decoder, the sigma‐chi units exhibit tuning for running speed (Figure 7e) that is qualitatively similar to the tuning of “speed cells” in the rodent entorhinal cortex and hippocampus (Góis & Tort, 2018; Hinman et al., 2016; Kropff et al., 2015). The speed signal encoded by co‐firing rates is delayed in time by an amount proportional to the integration time constants τu and τv. Hence, the sigma‐chi units are best tuned for the animal's past running speed (Figure 7f), rather than present or future running speed. Sigma‐chi units thus behave similarly to a subset of speed cells in the rodent brain which have been observed to lag the animal's running speed in time (Kropff et al., 2015).

2.3. Computing position by integrating theta cell spike trains

Simulations presented above show that if a population of spiking neurons (HD cells or grid cells) encodes a position signal in their firing rates, then they can simultaneously encode a velocity signal in their co‐firing rates. Suppose now that we wish to invert this coding scheme, so that velocity is encoded by firing rates, and position is encoded by co‐firing rates. How could this be done? This may seem like a counterintuitive exercise for those accustomed to thinking about population vector codes composed from neural firing rates, because we seek to construct a neural code for position in which individual neurons exhibit no tuning whatsoever of their firing rates for position. As it turns out, there exists a class of spatial coding models called “oscillatory interference models” that are based upon precisely this idea (Blair, Gupta, & Zhang, 2008; Burgess, Barry, & O'Keefe, 2007; Giocomo, Zilli, Fransén, & Hasselmo, 2007). In such models, oscillatory modulation of spike trains occurs in such a way that position information is conveyed by between‐cell spike intervals (i.e., spike correlations), rather than within‐cell spike intervals (i.e., firing rates). Recent evidence suggests that neural populations in the lateral septum may utilize this type of oscillatory interference code for the rat's position (Tingley & Buzsaki, 2018). Simulations presented below show that when an oscillatory interference code is constructed from simulated theta cells, position information can be recovered from theta cell co‐firing rates using a sigma‐chi decoding process similar to that used above for recovering velocity information from HD cell or grid cell co‐firing rates, and velocity information can be recovered from theta cell firing rates using the same sigma decoding process that was used above to recover position information from HD cell or grid cell firing rates. 
Hence, an oscillatory interference code for position may be viewed as the conjugate inverse of a population vector code for position, because information about position and velocity is perfectly swapped out between the firing rate and co‐firing rate channels.

2.3.1. Simulation of theta cell spike trains

To simulate phase coding of position on a circular track by theta cells, we assume the existence of a reference oscillator against which theta cells shift their phases as a function of the animal's position. To mimic the rhythmicity of theta cell spike trains, all simulations used a reference oscillator with a constant frequency of 7 Hz (14π radians/s). The time‐varying phase of a theta cell can then be expressed as an offset, θn, from this reference oscillator,

θn[tk] = ψn[tk] − ψ0[tk], (13)

where ψ0 is the reference phase and ψn is the instantaneous phase of theta cell n. Here, all of the simulated theta cells shifted their phases against the reference oscillator (and none remained in fixed synchrony with it), so the reference oscillator was not explicitly represented by any of the theta cell spike trains. We may now define a phase offset matrix,

Θ = {θ1, θ2, …, θN}T = {Θ0, Θ1, …, Θk, …, ΘK}, (14)

where each column, Θk = {θ1[tk], θ2[tk], …, θN[tk]}T, gives the instantaneous phase offsets for N theta cells at time tk, and each row, θn = {θn[t0], θn[t1], …, θn[tK]}, is a time series of phase offsets for theta cell n. Since θn is an angle, it can encode the animal's position on a spatially periodic interval. The animal's position within this interval shall be denoted qn, defined as follows:

qn[tk] = 2πx[tk]/λn, (15)

where x[tk] is the total integrated distance that the animal has travelled around the track at time tk, and λn is the distance over which θn shifts by one full theta cycle against the reference oscillator. The dependence of θn upon angular position may then be written

θn[tk] = θ̂n + qn[tk], (16)

where θ^n is a phase offset parameter for theta cell n. Equation (16) forces theta cells to behave as velocity controlled oscillators (VCOs) like those used in oscillatory interference models of spatial coding (Burgess et al., 2007; Hasselmo, Giocomo, & Zilli, 2007). As in these prior models, the burst frequency, ωn, of each theta cell varies as a function of the animal's running speed,

ωn[tk] = ω0 + 2πv[tk]/λn, (17)

where v[tk] is the running speed at time tk, and ω0 is the frequency of the reference oscillator in radians/s. In simulations presented below, we shall permit λn to take either positive or negative values. If λn > 0, then ωn increases with running speed, and θn precesses in phase against the reference oscillator. By contrast, if λn < 0, then ωn decreases with running speed, and the reference oscillator precesses in phase against θn.
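Equations (15)–(17) amount to integrating a speed-dependent frequency and comparing the result against the reference phase. A minimal discretized sketch (our discretization and parameter choices, not the paper's code):

```python
import numpy as np

dt = 0.001                       # 1 ms time step
omega0 = 2 * np.pi * 7.0         # 7 Hz reference oscillator (radians/s)
lam = 41.0                       # phase slope: one full cycle per 41 cm travelled

K = 5000
v = np.full(K, 20.5)             # constant running speed, 20.5 cm/s (toy input)

# Integrate reference and VCO phases; their difference tracks distance.
psi0 = np.cumsum(np.full(K, omega0) * dt)      # reference phase
omega = omega0 + 2 * np.pi * v / lam           # Equation (17)
psi = np.cumsum(omega * dt)                    # VCO phase
theta = psi - psi0                             # phase offset, Equation (13)

# After the animal travels lam cm, theta should have advanced by 2*pi.
x = np.cumsum(v * dt)            # integrated distance travelled
k = np.searchsorted(x, lam)
print(theta[k])
```

With λ > 0 the offset grows with distance (precession against the reference); flipping the sign of `lam` makes the reference precess against the VCO instead, as the text describes.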

To compute the probability of spiking at each time step, we first created a “seed spike train” for each theta cell, denoted s~n. This is a binary spike response function (Equation (1)) containing a single spike at every time step where theta cell n passes through perfect phase synchrony with the reference oscillator. The probability of spiking for theta cell n at each time step was then computed by convolving the seed spike train with a Gaussian kernel:

pn[tk] = Rmax(s̃n ∗ g)[tk], (18)

where Rmax sets the maximum spike probability within a single time step, and g is a Gaussian kernel of unit amplitude. Here we used σ = 25 ms as the width of the Gaussian kernel in all simulations. To prevent the spike probability from exceeding Rmax in any time bin, convolution was performed separately on even and odd numbered spikes within s~n (see Section 4).
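The convolution in Equation (18) can be sketched as below. For simplicity, this version skips the even/odd spike separation described in Section 4 and instead clips the probability at Rmax; the seed spike spacing and Rmax value are illustrative choices of ours.

```python
import numpy as np

dt = 0.001
sigma = 0.025                    # 25 ms Gaussian kernel width
Rmax = 0.1                       # max spike probability per 1 ms bin (assumed value)

# Seed spike train: one spike wherever the cell passes through phase
# synchrony with the reference oscillator (~7 Hz at 1 kHz sampling).
K = 2000
seed = np.zeros(K)
seed[::143] = 1.0

# Unit-amplitude Gaussian kernel, truncated at +/- 4 sigma.
t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
g = np.exp(-t**2 / (2 * sigma**2))

# Spike probability per bin (clipped so overlapping kernels cannot exceed Rmax).
p = np.minimum(Rmax * np.convolve(seed, g, mode="same"), Rmax)
print(p.max())
```

Each seed spike thus becomes a 25-ms-wide envelope of elevated spike probability, producing the theta-modulated bursting seen in the autocorrelograms of Figure 8b.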

2.3.2. Ring oscillators

To construct a phase code for position, we shall group theta cells into subpopulations that form ring oscillators (Blair, Wu, & Cong, 2014). Each ring oscillator is a circular array of theta cells through which a "bump" of activity circulates at a rate of once per theta cycle. To simulate bump circulation, phase offset parameters of theta cells within a ring are staggered at even spacings throughout the cycle by assigning θ̂p = 2π(p − 1)/P, where p indexes theta cells within the ring, and P is the total number of theta cells within a ring (P = 12 for all simulations presented here). Hence, theta cells exhibit preferred phases that are evenly spaced around the theta cycle, in much the same way that the preferred directions of HD cells were evenly spaced around the circle in simulations above.

Our present objective shall be to construct a phase code that perfectly segregates information about position and velocity into the co‐firing rate and firing rate channels, respectively. That is, the co‐firing rate channel shall only convey information about position (but not velocity), and the firing rate channel shall only convey information about velocity (but not position). To achieve this perfect segregation, it is necessary to define pairs of ring oscillators, a and b, composed from theta cells with phase slopes of the same magnitude but opposing sign (Figure 8a). Hence, if the phase slope for theta cells in ring a is denoted by λa, then the slope for cells in ring b must be λb =  − λa. Assuming λa > 0, then theta cells in ring a precess (shift backward in phase) against the reference oscillator by one cycle per traversal of the distance λa, whereas cells in ring b advance (shift forward in phase) against the reference oscillator by one cycle over the same distance. Figure 8b shows example autocorrelograms for theta cells residing in a pair of ring oscillators with λa = C/1.5 = 314 cm and λb =  − C/1.5 =  − 314 cm. Raster plots of spike trains for these theta cells are shown in Figure 8c.

Figure 8.

Figure 8

Sigma‐chi decoder recovers position from theta cell co‐firing rates. (a) Two sets of 12 simulated theta cells resided in ring oscillators with phase slopes of equal and opposite sign. (b) Autocorrelograms for theta cells in each ring. (c) Rasters show 2 s sample of theta‐modulated spike trains in the two rings. (d) Sigma‐chi decoder converts simulated theta cell spike trains into chi rates (X) and then into co‐firing rates encoded by activity of sigma‐chi neurons (inset shows autocorrelogram for a sigma‐chi neuron). (e) Sigma‐chi neurons behave like grid cells with periodic spatial tuning on the circular track. (f) Sine and cosine components of angular position on the interval Λ are decoded via vector summation. (g) The atan2 function converts output from sine and cosine decoder neurons into an accurate prediction, q∑χ, of position on the track

2.3.3. Decoding position from theta cell co‐firing rates

Since theta cell phases are dependent upon qn (Equation (16)), the rat's angular position on the track can be decoded from co‐firing rates via the sigma‐chi decoding process, which in this case has four steps: S → X → R˙ → Y → q∑χ. In this section, we shall write q∑χ to denote the time series of angular track positions recovered from co‐firing rates via the sigma‐chi decoder, and q to denote the time series of true positions.

Step 1: Deriving chi rates from spike trains

The first decoding step, S → X, converts pairs of theta cell spike trains into chi rates, χi, j, using Equation (9). To recover position information from χi, j, it is necessary for theta cells i and j to reside in different ring oscillators, so that the two cells will shift phase against one another as a function of the animal's position. Moreover, to obtain co‐firing rates that exclusively encode position and not velocity (which is the goal of our current didactic exercise, but is by no means a requirement for neural coding of position and velocity in the brain), it is necessary for the rings containing cells i and j to have phase slopes of identical magnitude but opposing sign (i.e., λi =  − λj). A pair of theta cells meeting these requirements will shift through one cycle of phase against each other each time the animal traverses a distance equal to Λ = |λ|/2, where |λ| is the shared magnitude of the phase slope for cells i and j. Figure 8 shows an example with |λ| = 314 cm, so that Λ = 157 cm.
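Equation (9) is defined earlier in the paper, outside this excerpt. As a purely illustrative stand-in, the sketch below forms a chi rate by multiplying leaky-integrated versions of the two spike trains (with time constants τu and τv), which captures the co-firing detection idea but is an assumption of ours, not guaranteed to match the paper's exact definition.

```python
import numpy as np

def leaky_integrate(spikes, tau, dt=0.001):
    """Exponentially filter a binary spike train with time constant tau (s)."""
    out = np.zeros(len(spikes))
    decay = np.exp(-dt / tau)
    acc = 0.0
    for k, s in enumerate(spikes):
        acc = acc * decay + s
        out[k] = acc
    return out

def chi_rate(si, sj, tau_u=0.1, tau_v=0.1):
    """Chi rate as a product of leaky-integrated spike trains (assumed form):
    large when the two cells fire close together in time."""
    return leaky_integrate(si, tau_u) * leaky_integrate(sj, tau_v)

# Two toy comparisons: cell b firing ~5 ms after cell a, versus far from it.
K = 1000
a = np.zeros(K); a[[100, 500, 900]] = 1
b_near = np.zeros(K); b_near[[105, 505, 905]] = 1
b_far = np.zeros(K); b_far[[300, 700]] = 1

print(chi_rate(a, b_near).mean(), chi_rate(a, b_far).mean())
```

Whatever its exact form, the key property is the one this toy reproduces: the chi rate grows with the temporal proximity of spikes from the two cells, so it tracks their relative phase.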

Step 2: Pooling chi rates to derive co‐firing rates

The second decoding step, X → R˙, pools chi rates from multiple theta cell pairs to derive a vector of co‐firing rates. Given two ring oscillators that each contain P theta cells, there are P² unique ordered pairings between theta cells that reside in different rings, forming a P × P matrix of chi values (Figure 8d). The rows and columns of this matrix can be sorted by the preferred phases, θ̂1, θ̂2, …, θ̂P, of theta cells in each ring. At each time step, tk, we may define a 2D matrix, Xk, in which each entry contains the instantaneous chi rate, χi, j[tk], for theta cells i and j. This is exactly the same matrix structure that was defined for pairs of HD cells in Figure 4a, except that here, the preferred firing phase of theta cells is substituted for the preferred head angle of HD cells. Hence, the main diagonal of Xk contains chi rates for pairs of theta cells that share the same phase offset parameter, θ̂n, within their respective ring oscillators. Flanking the main diagonal, there are P − 1 diagonal bands composed from entries containing chi rates for pairs of theta cells that share the same difference between their phase offset parameters, Δθ̂i, j = θ̂j − θ̂i. As illustrated above (Figure 4a), we may index these diagonal bands by m = 0, 1, 2, …, P − 1, such that the main diagonal (i = j) is indexed by m = 0, and each nonzero index m references a band containing chi rates between pairs of theta cells with preferred phases separated by a common offset, Δθ̂m = m × 2π/P. Activation of sigma‐chi unit m can then be expressed as a co‐firing rate, r˙m, obtained by summing all chi rates along a single band, as in Equation (10). Since sigma‐chi units pool inputs across all phases of theta, the time series r˙m is not itself modulated by theta (Figure 8d, inset). Figure 8e shows that each co‐firing rate r˙m exhibits spatially periodic positional tuning, and therefore, each sigma‐chi unit behaves like a grid cell with vertex spacing equal to Λ and spatial phase equal to Δθ̂m/2π.
It is thus possible to decode angular position, ϕ, from the vector of co‐firing rates, in much the same way that position was decoded above from firing rates of simulated grid cell spike trains (Figure 7c).
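The band-pooling step above can be sketched directly: each co-firing rate sums one wrapped diagonal of the P × P chi matrix. The matrix contents below are random placeholders standing in for instantaneous chi rates at one time step.

```python
import numpy as np

def cofiring_rates(X):
    """Sum each wrapped diagonal band m of a P x P chi matrix (Equation (10)).

    Band m collects entries (i, j) whose preferred-phase offsets differ
    by m * 2*pi / P, i.e. entries with (j - i) mod P == m; the main
    diagonal (i == j) is band m = 0.
    """
    P = X.shape[0]
    i, j = np.indices(X.shape)
    return np.array([X[(j - i) % P == m].sum() for m in range(P)])

P = 12
rng = np.random.default_rng(2)
X = rng.random((P, P))          # placeholder chi rates at one time step
r_dot = cofiring_rates(X)
print(r_dot.shape)
```

Because the P bands partition the matrix, the co-firing rates jointly account for every ordered cell pair exactly once.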

Step 3: Converting co‐firing rates into angle components

The third decoding step, R˙ → Y, converts co‐firing rates into sine and cosine components of the animal's periodic angular position, ϕ, within the interval Λ. A pair of decoder neurons extracts the sine and cosine components of ϕ (Figure 8f). The pseudoinverse method was used to assign weights minimizing the error for estimating sin(ϕ) and cos(ϕ).

Step 4: Converting angle components into periodic position

The fourth decoding step, Y → q∑χ, converts the sine and cosine components of ϕ into a prediction of the animal's position within the interval Λ. The atan2 function was used to obtain ϕ in radians from sin(ϕ) and cos(ϕ), and radians were then converted to distance by q∑χ = Λ × ϕ/2π (Figure 8g).
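Steps 3 and 4 together reduce to a short angle-recovery computation. A minimal sketch, with the decoded sine and cosine components taken as given:

```python
import numpy as np

def decode_periodic_position(sin_hat, cos_hat, Lambda):
    """Convert decoded sine/cosine components into a position on [0, Lambda)."""
    phi = np.arctan2(sin_hat, cos_hat)        # angle in (-pi, pi]
    return (Lambda * phi / (2 * np.pi)) % Lambda

Lambda = 157.0                                # interval length from Figure 8
# A position three-quarters of the way through the interval...
q_true = 0.75 * Lambda
phi_true = 2 * np.pi * q_true / Lambda
# ...is recovered exactly from its sine and cosine components.
q_hat = decode_periodic_position(np.sin(phi_true), np.cos(phi_true), Lambda)
print(q_hat)
```

Using atan2 on both components, rather than inverting sine or cosine alone, resolves the quadrant ambiguity and keeps the estimate well defined everywhere on the interval.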

2.3.4. Decoding velocity from theta cell firing rates

We now show that the animal's running speed can be decoded from theta cell firing rates via the sigma decoding process, which in this case has two steps: S → R → q˙∑. In this section, we shall write q˙∑ to denote the time series of running speeds recovered from the sigma decoder, and q˙ to denote the time series of true running speeds.

Step 1: Deriving theta cell firing rates from their spike trains

The first decoding step, S → R, converts theta cell spike trains into firing rates. For simulations presented here, theta cell spike trains were converted into firing rates using Equation (5). As explained above, each theta cell's burst frequency varies with running speed in accordance with Equation (17). However, we define "firing rate" to mean the rate at which individual spikes occur—not the rate at which theta bursts occur—so it does not automatically follow from Equation (17) that theta cell firing rates must vary with running speed. Nonetheless, since a theta cell's spike probability oscillates with its burst frequency (Equation (18)), it turns out that theta cells do indeed behave as "speed cells" with firing rates that are modulated by running speed. Firing rates of theta cells residing in paired rings a and b were modulated by running speed with a steepness that was inversely proportional to their phase slopes, λa and λb (Figure 9a). Theta cells with positive phase slopes (λa > 0) increased their firing rates with running speed, and those with negative phase slopes (λb < 0) decreased their firing rates with running speed. Figure 9b shows autocorrelograms for simulated theta cells over a range of different phase slopes.

Figure 9.

Figure 9

Sigma decoder recovers velocity from theta cell firing rates. (a) Graphs plot dependence of simulated theta cell firing rates (y‐axis) upon running speed (x‐axis) for different values of the phase slope parameter, λ; theta cells with positive and negative phase slopes are plotted blue and brown, respectively. (b) Autocorrelograms for theta cells plotted in "a." (c) Actual running speed q˙ (black) superimposed with decoded speed q˙∑ (red) during a 60 s period of running on the circular track; top four graphs show q˙∑ decoded from 24 theta cells with phase slopes (12 positive, 12 negative) in "a"; bottom graph shows running speed decoded from all theta cells in the top four graphs combined (96 theta cells in total, 48 with positive and 48 with negative phase slopes)

Step 2: Decoding running speed from firing rates

The second decoding step, R → q˙∑, converts theta cell firing rates into an estimate of the animal's running speed. We have already seen that theta cell firing rates are modulated by running speed, so in principle, it should be easy to decode the animal's running speed from theta cell firing rates. However, when we used the pseudoinverse method to derive weights that minimized the error between q˙∑ and q˙ for theta cells with |λ| = 314 cm, the fit was not good when the decoder was tested on data independent from the training set (Figure 9c, top panel). This is because the accuracy of speed decoding is proportional to the slope with which velocity modulates theta cell firing rates, which in turn is inversely proportional to |λ|, and setting |λ| = 314 cm yields a slope that is too shallow for accurate speed decoding. This problem is easily remedied by using theta cells with steeper phase slopes, which increases the slope of velocity modulation (and also decreases the vertex spacing, Λ, of grid cells formed via the sigma‐chi decoding process; see Figure 10a). Figure 9c shows that velocity decoding progressively improved with steeper slopes of velocity modulation, becoming quite accurate for |λ| = 41 cm, which yields a vertex spacing of Λ = 20.5 cm for grid cells simulated by sigma‐chi neurons; this is within the range of experimentally observed vertex spacings for grid cells (Hafting et al., 2005).
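The dependence of decoding accuracy on modulation slope can be illustrated with a toy linear model. This is our construction, not the paper's simulation: firing rates are modeled as baseline plus a speed term whose sign alternates across cells, mimicking the positive and negative phase slopes.

```python
import numpy as np

rng = np.random.default_rng(3)

def speed_decoding_error(slope, n_cells=24, K=4000, noise=1.0):
    """Fit running speed from noisy, linearly speed-modulated firing rates
    with a pseudoinverse-derived weight vector; return held-out MSE."""
    v = rng.uniform(0, 50, K)                                 # running speed (cm/s)
    base = rng.uniform(20, 40, n_cells)                       # baseline rates (Hz)
    signs = np.where(np.arange(n_cells) % 2 == 0, 1.0, -1.0)  # +/- phase slopes
    R = base + slope * signs * v[:, None] + rng.normal(0, noise, (K, n_cells))
    W = np.linalg.pinv(R[: K // 2]) @ v[: K // 2]             # train on first half
    v_hat = R[K // 2 :] @ W                                   # test on second half
    return np.mean((v_hat - v[K // 2 :]) ** 2)

# Shallow speed modulation (large |lambda|) decodes poorly;
# steep modulation (small |lambda|) decodes well.
err_shallow = speed_decoding_error(0.05)
err_steep = speed_decoding_error(0.5)
print(err_shallow, err_steep)
```

The tenfold steeper slope raises the signal relative to the fixed rate noise, which is the same trade-off the text describes when moving from |λ| = 314 cm to |λ| = 41 cm.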

Figure 10.

Figure 10

Segregation of position and velocity into co‐firing and firing rate channels. (a) Four complementary pairs of ring oscillators contain a total of 96 simulated theta cells; firing rates are fed to a sigma decoder, and co‐firing rates are fed to a sigma‐chi decoder. (b) Angular position (q, y‐axis) is accurately decoded from co‐firing rates (q∑χ) but not firing rates (q∑) of theta cells for all spatial periods (Λ) and integration time constants (τ = .4, .2, .1, .05, .025, or .01 s; x‐axis plots τ on a log scale). (c) Running speed (q˙, y‐axis) is accurately decoded from firing rates (q˙∑) but not co‐firing rates (q˙∑χ) of theta cells for sufficiently large integration time constants (x‐axis)

There is no biological requirement for all ring oscillators to share the same value of |λ|; in fact, different values of |λ| yield sigma‐chi neurons that mimic grid cells with different vertex spacings. Thus, we simulated multiple ring oscillators with different phase slopes. Four pairs of ring oscillators (8 rings in all, with 12 theta cells in each ring) containing a total of 96 theta cells were assigned phase slopes λ1 =  ± 314 cm, λ2 =  ± 188 cm, λ3 =  ± 72 cm, and λ4 =  ± 41 cm. Figure 10b shows that when spike trains from these four ring pairs were fed into a sigma‐chi decoder (configured so that chi rates were derived within but not between ring pairs), it was possible to accurately recover the animal's position q on intervals of length Λ1 = 157 cm, Λ2 = 94 cm, Λ3 = 36 cm, and Λ4 = 20.5 cm, which are the vertex spacings of grid cells simulated by sigma‐chi neurons formed from each ring pair. A sigma neuron summing inputs from all 96 theta cells (using weights assigned via the pseudoinverse method on an independent training dataset) accurately recovered the animal's running speed from theta cell firing rates (Figure 10c).

2.3.5. Conjugate coding of position and velocity

Results shown in Figures 8, 9, 10 demonstrate that theta cells can simultaneously encode position information in their co‐firing rates, while also encoding velocity (running speed) information in their firing rates. As explained above, the theta cell population was constructed to contain pairs of ring oscillators with equal and opposite phase slopes (Figure 9a), so that q was decoded from each ring pair on a spatially periodic interval of length Λ (which was determined by the phase slopes for that ring pair). We now show that if co‐firing rates are computed exclusively within (and not between) these complementary ring pairs, then position (q) and running speed (q˙) information become perfectly segregated into the co‐firing and firing rate channels, respectively. This can be demonstrated by swapping the decoders, and thereby attempting to recover q˙ from co‐firing rates and q from firing rates. We shall write q∑ to denote position recovered from firing rates, and q˙∑χ to denote velocity recovered from co‐firing rates.

To derive q˙∑χ, a weighted sum of theta cell co‐firing rates {r˙1, r˙2, …, r˙N}T was computed, retaining the constraint that co‐firing rates were only computed within (and not between) complementary ring pairs. The pseudoinverse method was then used to find a weight vector that minimized error between q˙∑χ and q˙. Decoding accuracy was tested on a novel set of spike trains derived from independent behavior data. Figure 10c shows that q˙∑χ did not accurately estimate the true running speed. To further test whether co‐firing rates contained information about q˙, we attempted to recover running speed from a weighted sum of both firing rates and co‐firing rates, q˙∑ + ∑χ, and used the pseudoinverse method to minimize error between q˙∑ + ∑χ and q˙. Over 10 independent simulations with different behavior data, we found that decoding q˙ from theta cell firing rates and co‐firing rates together was not more accurate than decoding from firing rates alone (Figure 10c). That is, the accuracy of q˙∑ + ∑χ was not better than the accuracy of q˙∑. From this, we conclude that co‐firing rates of simulated theta cells conveyed information only about q, and not about q˙.

To derive q∑, a weighted sum of theta cell firing rates {r1, r2, …, rN}T was computed, and the pseudoinverse method was used to find a weight vector that minimized error between q∑ and q. Decoding accuracy was then tested on a novel set of spike trains derived from independent behavior data. Figure 10b shows that q∑ did not accurately estimate the animal's true position on the interval Λ. To further test whether firing rates contained information about q, we attempted to recover position from a weighted sum of both firing rates and co‐firing rates, q∑ + ∑χ, and used the pseudoinverse method to minimize error between q∑ + ∑χ and q. Over 10 independent simulations with different behavior data, we found that decoding q from theta cell firing rates and co‐firing rates together was no more accurate than decoding from co‐firing rates alone (Figure 10b). From this, we conclude that firing rates of simulated theta cells conveyed information only about q˙, and not about q. Hence, when co‐firing rates are computed from complementary pairs of theta ring oscillators, position and velocity information are perfectly segregated between the co‐firing rate and firing rate channels.

2.3.6. Co‐firing rates of noncomplementary ring oscillator pairs

Given a set of N ring oscillators, there are (N² − N)/2 unique non‐ordered pairings between different rings. In the simulations of Figure 10, we used 8 ring oscillators (4 pairs of complementary rings), which yields a total of 28 non‐ordered pairings. However, we only computed co‐firing rates from 4 of these 28 possible pairings because, as explained above, the 4 pairings between complementary rings have the special property of yielding co‐firing rates which contain only position (and not velocity) information.

It is certainly possible to derive co‐firing rates from noncomplementary ring pairs, but co‐firing rates computed in this way will contain information about both position and velocity, rather than just position. To demonstrate this, Figure 11 shows results from simulations of 4 ring oscillators, each containing 12 theta cells (48 theta cells in all), which were all assigned positive phase slopes: λ1 =  + 314 cm, λ2 =  + 188 cm, λ3 =  + 55 cm, and λ4 =  + 39 cm. There are six unique non‐ordered pairings among these four rings, and since all rings have positive phase slopes, there are no complementary pairs. Figure 11b shows that position can still only be decoded from co‐firing rates (but not firing rates) of theta cells residing in noncomplementary ring pairs, a result similar to that shown above for complementary ring pairs. However, Figure 11c shows that running speed can now be recovered from either firing rates or co‐firing rates of theta cells residing in noncomplementary ring pairs, and the most accurate decoding is obtained when running speed is recovered from both together. This is in marked contrast with simulations in Figure 10, where running speed was not decodable from co‐firing rates of complementary ring pairs. We shall now add positional modulation of firing rates to the noncomplementary ring model of Figure 11, to demonstrate how the same information can be encoded by a population of speed cells versus grid cells, despite their different firing rate tuning.

Figure 11.

Figure 11

Position and velocity coding with non‐complementary ring oscillators. (a) Co‐firing rates are derived from six unique pairings among four ring oscillators, all with positive phase slopes. (b) Angular position can be decoded from co‐firing rates on six different length intervals (Λ = 22.5, 25, 34, 39, 67, 235.5 cm); decoding error (MSE, y‐axis) is low when angular position is decoded from theta cell co‐firing rates (q∑χ) but near chance levels when decoded from firing rates (q∑); x‐axis plots integration time constant (τ = .4, .2, .1, .05, .025, .01 s) on a log10 scale. (c) Running speed (q˙, y‐axis) can be decoded from either firing rates (q˙∑) or co‐firing rates (q˙∑χ) of simulated theta cells, and is most accurate when decoded from both together (q˙∑ + ∑χ)

2.3.7. Speed and grid cells can encode the same information

Figure 12 compares results from simulations of speed cells versus grid cells; for simplicity, all simulations used τu = τv = 100 ms. Speed cells were simulated using the same model as in Figure 11, as a population of theta neurons residing in four noncomplementary ring oscillators. Hence, speed cells exhibited strong theta modulation of their spike trains (Figure 12a, top), linear modulation of their firing rates by running speed (Figure 12a, bottom), and no modulation of their firing rates by position on the track (Figure 12a, middle). Grid cell spike trains were simulated by probabilistically re‐sorting the speed cell spike trains in time (see Section 4), so that their firing rates became periodically tuned for position with a period of Λ = 157 cm (spatial phases of the 12 cells in each ring were evenly spaced across Λ). Importantly, this re‐sorting was done in such a way that the total number of spikes fired by each neuron was preserved, so that no position or velocity information was added or subtracted merely by increasing or decreasing the total number of spikes. But since the re‐sorting was probabilistic, some (but not all) of the original theta and speed modulation in the pre‐sorted spike trains was preserved after sorting. Re‐sorting spike trains to convert speed cells into grid cells thus endowed the neurons with periodic spatial tuning (Figure 12b, middle), while also reducing the depth of theta modulation (Figure 12b, top) and the slope of speed modulation (Figure 12b, bottom).
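The paper's actual re‐sorting procedure is specified in Section 4 (not reproduced in this excerpt). As a rough illustration of the idea, the sketch below redistributes a fixed budget of spikes across time bins with probabilities weighted jointly by the original temporal profile and a periodic spatial tuning curve; the function name, weighting scheme, and parameter values are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def resort_spikes(spike_bins, pos, wavelength=157.0, phase=0.0):
    """Redistribute a fixed budget of spikes across time bins so that
    firing becomes periodically tuned for position (period `wavelength`),
    while preserving the total spike count. Illustrative only: the
    weighting scheme is an assumption, not the paper's Section 4 method."""
    spike_bins = np.asarray(spike_bins)
    n_spikes = int(spike_bins.sum())                  # budget to preserve
    # grid-like periodic spatial tuning evaluated along the trajectory
    spatial = 0.5 * (1.0 + np.cos(2*np.pi*(pos - phase)/wavelength))
    # weight each bin by its original count (plus a floor) times the target
    # tuning, so some original temporal modulation survives the re-sort
    w = (spike_bins + 0.1) * spatial
    return rng.multinomial(n_spikes, w / w.sum())
```

Because the draw is probabilistic, the re‐sorted train acquires spatial periodicity while retaining a diluted trace of whatever temporal structure the original bins carried, qualitatively mirroring the speed‐cell‐to‐grid‐cell conversion described above.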

Figure 12. Adding position information to the firing rate channel comes at the expense of information in other channels. (a,b) Graphs show autocorrelograms (top), position tuning (middle), and speed tuning (bottom) for simulated speed cells (a) and grid cells (b). (c,d) Mean position error at length scale Λ = 157 cm (c) and speed error (d) for the sigma, sigma‐chi, and combined decoders. (e) Mean position error at the length scales encoded by the six ring pairs for the sigma‐chi (top), sigma (middle), and combined (bottom) decoders. All simulations were conducted using τ = 100 ms.

We then analyzed how adding position information to the firing rate channel (by re‐sorting spikes) affected the amount of information about position and velocity that was conveyed by firing rates versus co‐firing rates. Figure 12c (red bars) shows that when the rat's position within the interval Λ = 157 cm was decoded from firing rates, the decoding error was near chance levels for speed cells, but near zero for grid cells. Hence, as expected, position information was conveyed by the firing rates of grid cells but not speed cells. Figure 12c (blue bars) shows that when the rat's position within the interval Λ = 157 cm was decoded from co‐firing rates, the decoding error was near chance levels for both speed and grid cells, because no pair of ring oscillators encoded the rat's position on a length scale of Λ = 157 cm in their co‐firing rates. By contrast, when the rat's position was decoded from co‐firing rates on any of the length scales that were encoded by one of the six ring oscillator pairs, the decoding error was very small for speed cells (as already shown above in Figure 11) and significantly larger (though still far below chance levels) for grid cells (Figure 12e, blue bars). This demonstrates that when speed cells were converted to grid cells, adding position information to the firing rate channel came at the expense of position information in the co‐firing rate channel, in accordance with an uncertainty trade‐off between firing rates and co‐firing rates: the more information the firing rate channel conveys about a given variable (in this case, position), the less information about that variable can be conveyed in the conjugate co‐firing rate channel, and vice versa. 
Note that when position was decoded from firing rates on the length scales encoded by co‐firing rates (Figure 12e, red bars), the decoding error was near chance levels for both speed and grid cells; for this reason, the error when decoding from both firing and co‐firing rates together was nearly identical to the error when decoding from co‐firing rates alone (Figure 12e, black bars).

Figure 12d (black bars) shows that when the rat's running speed was decoded from firing rates and co‐firing rates together, the decoding error was nearly identical for speed cells and grid cells. Hence, adding position information to the firing rate channel did not come at the expense of the total velocity information; together, the firing and co‐firing rate channels conveyed just as much information about running speed before versus after position information was added to the firing rate channel. But importantly, information about running speed was re‐distributed between the firing rate and co‐firing rate channels. Figure 12d shows that speed cells had lower sigma than sigma‐chi decoding error for running speed, and thus conveyed more information about velocity in their firing rates than co‐firing rates; since speed cells were more strongly modulated by theta than grid cells, this is fully consistent with simulations from Figure 9, showing that theta cells preferentially encode velocity information in their firing rates. By contrast, grid cells had lower sigma‐chi than sigma decoding error for running speed, and thus conveyed more information about velocity in their co‐firing rates than firing rates; this is fully consistent with simulations in Figures 5, 6, 7, showing that neurons with position‐tuned firing rates (such as HD cells and grid cells) preferentially encode velocity information in their co‐firing rates. But importantly, the total amount of velocity information conveyed by firing rates and co‐firing rates together was not different for speed and grid cells. What this suggests is that position and velocity information only compete with one another for coding capacity within a single coding channel (firing rates or co‐firing rates), but do not compete with one another for capacity across coding channels. We take this as evidence for an uncertainty principle that governs neural coding of conjugate variables by conjugate coding channels. 
We shall now articulate this hypothesized uncertainty principle in a formal and falsifiable way.

2.4. An uncertainty principle for neural coding of conjugate variables

In quantum physics, the uncertainty principle states that the more information we have about a particle's position, the less we can know about its momentum (or velocity), and vice versa. A standard mathematical formulation of the uncertainty principle is written σxσp ≥ ℏ/2 (where ℏ is the reduced Planck constant), placing a lower bound on the product of the standard deviations of position, σx, and momentum, σp (Kennard, 1927). Based on the results of simulations above, we shall now formally hypothesize (but not conclusively prove) a similar set of uncertainty principles for neural coding of conjugate variables. We shall do this by stating these principles using mathematical expressions that have a form similar to the uncertainty principle from physics.

2.4.1. Between‐channel uncertainty trade‐off

Assume that a population of neurons generates a set of spike trains S, containing some fixed total number of spikes, |S|. Let us first consider the case where the population encodes information about a single unidimensional variable, q. As we have seen, information about q can be encoded by two channels embedded in S: firing rates (R) and co‐firing rates (R˙). Hence, S simultaneously implements two neural codes, q → R and q → R˙. We posit that these two codes are conjugates of one another, and thus, there is an uncertainty trade‐off between them: the more information one code conveys about q, the less information the other code can convey about q. Each channel thus has its own limited capacity to convey information about q, which is directly proportional to |S| and inversely proportional to the amount of information that the other channel conveys about q. Using brackets 〈 〉 to denote taking the average, let us write var q_∑ = 〈(q^∑ − q)²〉 and var q_∑χ = 〈(q^∑χ − q)²〉 to denote the variances of firing rate (sigma) and co‐firing rate (sigma‐chi) decoding errors for q, respectively (note that superscripts are used to denote raw estimates for q generated by the decoders, whereas subscripts are introduced to denote the decoding errors of these estimates). We postulate that the trade‐off in decoding error between the two channels obeys a relation similar to the uncertainty principle:

var q_∑ · var q_∑χ ≥ C_q, (19)

where C_q denotes a Cramer‐Rao bound for estimating q from both decoders together. This equation expresses our postulate that a single stimulus q cannot be more accurately encoded by firing rates and co‐firing rates together than by either firing rates or co‐firing rates alone. Evidence for this postulate is provided in simulations of Figure 12, where it was shown that adding position information to the firing rate channel caused a reduction in the amount of position information conveyed by the co‐firing rate channel, as Equation (19) would predict.
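In code, the decoding‐error variances appearing in Equation (19) are simply mean squared errors of the decoder estimates. A minimal sketch follows; the circular‐wrapping option is an added assumption for position variables decoded on a periodic length scale Λ, as in the simulations above.

```python
import numpy as np

def decoding_error_variance(q_true, q_est, wavelength=None):
    """var q = <(q_est - q_true)^2>, as defined in the text. When a
    position variable is decoded on a periodic length scale (an
    assumption for circular variables), errors are first wrapped
    into the interval [-wavelength/2, +wavelength/2)."""
    err = np.asarray(q_est, dtype=float) - np.asarray(q_true, dtype=float)
    if wavelength is not None:
        err = (err + wavelength/2) % wavelength - wavelength/2
    return float(np.mean(err**2))
```

Computing this quantity for the sigma and sigma‐chi estimates separately yields the two factors whose product Equation (19) bounds from below.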

2.4.2. Uncertainty trade‐off within coding channels

Let us now consider cases where S encodes information not only about q, but also about its conjugate, q˙ = dq/dt (e.g., position and velocity). Since q and q˙ are conjugate variables, we postulate that the more information either of the two coding channels conveys about q, the less information that same channel can convey about q˙, and vice versa. For the firing rate channel, this postulated uncertainty trade‐off may be written as

var q_∑ · var q̇_∑ ≥ C_∑, (20)

where C_∑ denotes a Cramer‐Rao bound for the sigma decoder. Evidence for this postulate is provided by the simulations of Figure 12, where it was shown that adding position information to the firing rate channel caused a reduction in the amount of velocity information conveyed by the firing rate channel, as predicted by Equation (20). For the co‐firing rate channel, the same within‐channel trade‐off may be written as

var q_∑χ · var q̇_∑χ ≥ C_∑χ, (21)

where C_∑χ denotes a Cramer‐Rao bound for the sigma‐chi decoder. Evidence for this postulate is again provided by simulations of Figure 12, where it was shown that a reduction in the amount of position information in the co‐firing rate channel was accompanied by an increase in the amount of velocity information conveyed by the co‐firing rate channel, consistent with Equation (21).

2.4.3. Conjugate coding of conjugate variables

We have proposed that if one coding channel conveys information about q, then this must come at the expense of conveying information about q in the other channel (Equation (19)), and must also come at the expense of conveying information about q˙ within the same channel (Equations (20) and (21)). But does conveying information about q within one channel come at the expense of conveying information about q˙ in the other channel? We postulate that the answer is no, because the conjugacy between R and R˙ mirrors the conjugacy between q and q˙. Consequently, conveying information about q in one channel does not consume the other channel's capacity to convey information about q˙. In summary, two conjugate variables only compete with one another for coding capacity within conjugate coding channels, but do not compete with one another for capacity between conjugate coding channels. In Section 3, we shall explore how this conjugate coding principle might yield new insights into how position and velocity information are represented by neural populations in the hippocampus, how real biological neurons extract information from their spike train inputs, and what role oscillations may play in neural computations that process information about conjugate variables.

3. DISCUSSION

A neural code maps states of the world onto states of the brain, and thus implements a function of the form W → B, where W is a domain of world states, and B is a range of brain states. In simulations above, we showed that a population of spiking neurons can simultaneously convey information via two coding channels: a firing rate code (R) conveyed by within‐cell spike intervals, and a co‐firing rate code (R˙) conveyed by between‐cell spike intervals (i.e., correlations among spike trains). Together, these two coding channels define a representational space, R × R˙ ⊆ B, that contains a larger number of states than R or R˙ alone, which in turn suggests that firing rates and spike correlations might together possess a greater capacity to convey information than either channel alone. Consistent with this idea, empirical evidence shows that decoding a rodent's position from spike rates and phases of hippocampal place cells is more accurate than decoding from rate or phase alone (Jensen & Lisman, 2000), and HD can be more accurately decoded from both the firing rates and spike correlations of HD cells than from either source alone (Peyrache, Lacroix, Petersen, & Buzsáki, 2015). However, R and R˙ are not independent coding channels, since almost any change to a spike train's firing rate will also change its correlation with other spike trains, and thus affect co‐firing rates as well. While empirical evidence may show that information can be simultaneously conveyed via firing rates as well as spike correlations, this does not tell us whether the information conveyed via one channel has come at the expense of information that might otherwise have been conveyed in the other channel. Firing rates and spike correlations are “entangled” with one another, and therefore, it is not obvious whether more information can be encoded by both channels together than by either channel alone.

3.1. An uncertainty principle for neural coding

Here, we have proposed an uncertainty principle for neural coding which postulates precise conditions under which more information can (or cannot) be encoded by firing rates and co‐firing rates together than by either alone. This principle states that R and R˙ behave as conjugate coding channels, and therefore, information conveyed via firing rates normally comes at the expense of conveying information via co‐firing rates, and vice versa (Equation (19)). If spike trains are used to encode a single variable, q ∈ W, or to encode multiple variables that are not conjugates of one another, q1 × q2 × ⋯ × qN ∈ W, then firing rates and co‐firing rates together generally cannot convey more information than firing rates or co‐firing rates alone. But an exception occurs when R and R˙ encode a pair of conjugate variables, q and q˙ = dq/dt. In such cases, the conjugacy between R and R˙ mirrors the conjugacy between q and q˙, and the full capacity of R × R˙ becomes “liberated” to encode states within the phase space, q × q˙ ∈ W.

We thus postulated that the full capacity of R and R˙ becomes available when spike trains encode information about a pair of conjugate variables. Such cases are of broad interest, because almost any computational task that requires the brain to encode a dynamically changing state variable will also require encoding that same variable's time derivative (i.e., its conjugate). Here, we have focused attention upon dual coding of position and velocity in hippocampal systems to support spatial navigation. But position and velocity are encoded by many other sensory and motor systems as well. Other examples of conjugate variable pairs that are encoded by neural circuits include velocity and acceleration, expected reward and prediction error, phase and frequency, and so on. The conjugate coding principle suggests that whenever and wherever populations of spiking neurons encode pairs of conjugate variables in the brain, a single population of neurons can simultaneously distribute information about both variables across firing rates and co‐firing rates. As discussed below, some computational problems may become easier and more efficient to solve under this flexibility to choose between alternative representations for q and q˙. Recognizing these advantages may help us to better understand why identified neural populations generate specific patterns of neural spiking (such as peak‐shaped tuning functions and neural oscillations), how different populations are interconnected to form functional circuits, and how individual neurons integrate spike train inputs to decode information from both firing rates and spike correlations.

3.1.1. The dual coding spectrum

Schemes for dual coding of conjugate variables reside along a spectrum that spans two extreme endpoints. At one endpoint are cases where q is encoded exclusively by firing rates (q → R) and q˙ is encoded exclusively by co‐firing rates (q˙ → R˙). Peak‐shaped tuning of firing rates for q is the core mechanism by which this is achieved. For example, in simulations of HD and grid cells with Poisson firing rates, we showed that peak‐shaped tuning of firing rates for position (q) automatically embeds information about velocity (q˙) into co‐firing rates (see Figures 5 and 7). A similar coding scheme could be used to convey information about other pairs of conjugate variables, such as velocity and acceleration, or expected reward and prediction error. For example, encoding acceleration (q˙) in co‐firing rates could be achieved if firing rates exhibit peak‐shaped tuning for velocity (q), whereas encoding prediction error (q˙) in co‐firing rates would require firing rates to exhibit peak‐shaped tuning for expected reward (q).
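This mechanism can be illustrated without any of the paper's decoding machinery: for two cells with peak‐shaped tuning at offset positions, the temporal asymmetry of their cross‐correlation reverses sign with the direction of movement, so between‐cell spike timing carries velocity information that neither cell's rate conveys alone. The sketch below uses deterministic Gaussian rate traces rather than Poisson spike trains; all names and parameter values are illustrative.

```python
import numpy as np

def rate(pos, center, width=10.0):
    """Deterministic peak-shaped (Gaussian) positional tuning curve."""
    return np.exp(-0.5 * ((pos - center) / width)**2)

def xcorr_asymmetry(a, b, max_lag=20):
    """Forward minus backward cross-correlation mass between two rate
    traces: positive when `a` tends to lead `b` in time."""
    fwd = sum(np.dot(a[:-k], b[k:]) for k in range(1, max_lag + 1))
    bwd = sum(np.dot(a[k:], b[:-k]) for k in range(1, max_lag + 1))
    return fwd - bwd
```

On a rightward run past tuning peaks at 40 and 60 cm, the 40‐cm cell leads and the asymmetry is positive; on a leftward run the sign flips, so the sign of the asymmetry recovers movement direction from between‐cell timing alone.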

At the other extreme endpoint of the dual coding spectrum are cases where q˙ is encoded exclusively by firing rates (q˙ → R) and q is encoded exclusively by co‐firing rates (q → R˙). The core mechanism for constructing such a code is for firing rates of individual neurons to exhibit linear tuning for q˙ in conjunction with peak‐shaped tuning for oscillatory phase, which causes information about q and q˙ to become packaged into co‐firing rates and firing rates, respectively. The role of oscillatory modulation in constructing such codes is not to impose a precise pattern of temporal firing on the spikes (co‐firing rate codes are not time codes, as explained in Section 1), but rather to endow co‐firing rates with the correct sign (positive or negative) under the coding scheme that they implement (because unlike firing rates, co‐firing rates are signed quantities). The period of oscillatory modulation thus sets a time scale for sign reversal of co‐firing rates. Coding of position by co‐firing rates was demonstrated in simulations of theta‐modulated speed cells (Figures 8, 9, 10), but here again, a similar coding scheme could be used to convey information about other pairs of conjugate variables, such as velocity and acceleration, or expected reward and prediction error. For example, encoding velocity (q) in co‐firing rates would require acceleration (q˙) to linearly modulate firing rate differences between oscillatory neurons, and encoding expected reward (q) in co‐firing rates would require prediction error (q˙) to linearly modulate firing rate differences between oscillatory neurons.
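The sign‐setting role of oscillations can be sketched in a few lines: a reference theta oscillator and a velocity‐controlled oscillator whose frequency shifts with running speed develop a phase difference proportional to integrated distance, so the windowed average of the product of their mean‐subtracted rates (a crude co‐firing rate) tracks the cosine of that phase difference, i.e., position modulo the wavelength 1/β. All parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative parameters: 8-Hz theta, phase slope beta in cycles/cm,
# giving a spatial wavelength of 1/beta = 100 cm.
f0, beta, dt = 8.0, 0.01, 0.001
t = np.arange(0.0, 6.0, dt)
speed = np.full_like(t, 20.0)              # constant 20 cm/s run
pos = np.cumsum(speed) * dt                # integrated distance travelled

phi_ref = 2*np.pi*f0*t                     # reference theta oscillator
phi_vco = phi_ref + 2*np.pi*beta*pos       # velocity-controlled oscillator
r1, r2 = np.cos(phi_ref), np.cos(phi_vco)  # mean-subtracted theta "rates"

def cofiring(center_idx, win=125):
    """Crude co-firing rate: the product of the two mean-subtracted rates,
    averaged over ~one theta cycle; approximately 0.5*cos(phase difference),
    and hence ~0.5*cos(2*pi*pos/100) at the window's center."""
    half = win // 2
    s = slice(center_idx - half, center_idx + half)
    return float(np.mean(r1[s] * r2[s]))
```

The co‐firing rate is positive when the oscillators are in phase (position near a multiple of the wavelength) and negative when they are in anti‐phase, illustrating how the theta period sets the time scale for sign reversal.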

In between the two extremes are intermediate cases of dual coding where both R and R˙ convey partial information about both variables, q and q˙. In such cases, whatever leftover capacity in R is not used to encode q can be used to encode q˙, and likewise for R˙. So spike trains simultaneously implement two mappings, q × q˙ → R and q × q˙ → R˙, as demonstrated by simulations of theta‐modulated grid cells (Figure 12). This kind of intermediate coding for position and velocity requires neurons that simultaneously exhibit peak‐shaped tuning for position (to map q → R and q˙ → R˙) in conjunction with oscillatory modulation and linear velocity dependence (to map q˙ → R and q → R˙). Neural populations in the hippocampal system often exhibit all three of these influences at the same time—position tuning, velocity sensitivity, and oscillatory modulation—suggesting that they do not occupy extreme endpoints of the dual coding spectrum, but rather lie somewhere in between. Some neural populations might lie nearer to one end of the spectrum, such as place and grid cells that are strongly modulated by position but weakly modulated by theta oscillations, while other populations might lie near the opposite end of the spectrum, such as theta cells that are weakly modulated by position and more strongly modulated by theta oscillations and running speed. But rather than encoding different variables like position versus speed, as is commonly assumed, these different populations may encode exactly the same pair of conjugate variables (position and velocity) in different ways, by differentially distributing information across firing rates and spike correlations. Why should different populations redundantly encode the same information in different ways? Perhaps because different computations can be performed more efficiently at different points along the dual coding spectrum, so that the flexibility to represent the same information at different points along the spectrum allows a wider range of computations to be optimized (see below).

3.1.2. Single‐neuron computation

To model how information can be extracted from R and R˙, we introduced two biologically inspired decoding methods, sigma and sigma‐chi decoding, which extract information from within‐ versus between‐cell spike intervals, respectively. Sigma decoding is similar to computations performed by model neurons in standard artificial neural networks, which are typically composed of simplified linear units that compute weighted sums of their firing rate inputs, and thus derive their outputs as a function of R (but not of R˙). By contrast, sigma‐chi decoding requires nonlinear computations to detect correlations among different spike trains, and such computations are not easily reducible to simple weighted linear summation of firing rate inputs (for review, see Mel, 2007). Unlike simple linear neurons in artificial networks, biological neurons have large dendritic trees with detailed branching patterns, endowing them with complex anatomical and electrotonic structure. It has been shown that the dendrites of hippocampal pyramidal cells (Makara & Magee, 2013) and entorhinal stellate cells (Schmidt‐Hieber et al., 2017) can perform nonlinear integration of their spike train inputs, which might allow dendrites to perform operations similar to the derivation of chi rates (Equation (9)) that underlie sigma‐chi decoding in our simulations. Thus, a single neuron might compute different chi rates in each of its dendrites, and then pool multiple chi rates together at the soma (Equation (10)) to derive its output as a function of R˙ instead of (or in addition to) R. A network composed from such sigma‐chi neurons would have flexibility to perform computations that operate on representations in R˙ as well as R. At the circuit level, this flexibility might facilitate novel, efficient solutions to certain computational problems that are unavailable in standard network models composed from linear “sigma” neurons that only operate upon representations in R.

3.1.3. Circuit‐level computations

A wide range of theoretical models have been proposed to explain how neural populations in the hippocampal system—such as place cells, grid cells, border cells, HD cells, speed cells, and theta cells—might be functionally interconnected with one another to solve navigational problems such as cognitive mapping, self‐localization, and trajectory planning (for review, see Hinman, Dannenberg, Alexander, & Hasselmo, 2018). One type of problem solved by such networks is trajectory planning, that is, finding the best path through an environment from a start location to a goal. This problem bears resemblance to calculating a “path of least action” in classical mechanics, since it involves finding the trajectory that an agent should follow as it is influenced by multiple “forces” (goal seeking, obstacle avoidance, effort minimization, etc.). To solve the equations of motion that guide an object along a complex trajectory, physicists rely upon convenient mathematical representations—such as the Lagrangian or Hamiltonian—that cleanly separate the influences of conjugate variables (such as position and momentum) into distinct mathematical terms of a single equation. It is interesting to speculate that neural circuits might employ a similar strategy for simplifying complex trajectory‐finding problems, by representing position and velocity in distinct coding channels (firing rates vs. co‐firing rates) of a single set of neural spike trains. This would be an interesting avenue for future research on dual coding. But for now, we shall focus further discussion upon two other fundamental computations that are commonly performed by neural systems: differentiation, which converts position into velocity (q → q˙), and integration, which converts velocity into position (q˙ → q).

3.2. Neural differentiation

We showed that if neurons with peak‐shaped positional tuning of their firing rates (such as HD and grid cells) are simulated using Poisson spike trains (Figures 1, 2, 3, 4, 5, 6, 7), then position is encoded exclusively by firing rates (q → R), and velocity is encoded exclusively by co‐firing rates (q˙ → R˙). At this extreme end of the dual coding spectrum, position information is recovered via sigma decoding, whereas velocity information is recovered via sigma‐chi decoding. Under these circumstances, sigma‐chi decoding can be viewed as tantamount to a process of neural differentiation that recovers q˙ from q (e.g., velocity from position). It makes somewhat less sense to conversely view sigma decoding as a process of neural integration that recovers q from q˙, because at this end of the dual coding spectrum, the estimate of q˙ generated by the sigma‐chi decoder is delayed in time with respect to the estimate of q generated by the sigma decoder (see Figure 2). Therefore, under the reasonable assumption that a computation's input precedes its output in time, it makes more sense to view q as an input and q˙ as an output when q precedes q˙, and thus to regard sigma‐chi decoding as a neural differentiation process. As will be seen below (see Section 3.3), the temporal order of q and q˙ reverses at the opposite end of the dual coding spectrum, so that sigma‐chi decoding then becomes tantamount to integration rather than differentiation.

3.2.1. Pre‐ versus post‐positional velocity signals

We use the term “post‐positional velocity signals” to describe speed or velocity signals that are derived by differentiating position signals (e.g., via sigma‐chi decoding). This is to draw a distinction with “pre‐positional” velocity signals that could be derived from the motor or vestibular systems, independently of any positional code. There is evidence that pre‐ and post‐positional velocity signals coexist in the hippocampal system: some speed cells are prospectively correlated with the animal's future running speed, while others are retrospectively correlated with the past running speed (Kropff et al., 2015). If post‐positional speed cells derive their speed tuning via differentiation of position signals, then they should lose their speed modulation following any disruption of the position signal that they differentiate. For example, if retrospective speed cells derive their tuning by differentiating inputs from grid cells (as in simulations of Figure 7e), then disrupting grid cells should disrupt these retrospective speed cells. Medial septum inactivation has been shown to impair grid cell firing (Brandon et al., 2011; Koenig, Linder, Leutgeb, & Leutgeb, 2011) while sparing speed cell firing (Hinman et al., 2016) in entorhinal cortex; however, entorhinal speed cells are biased toward prospective coding (Kropff et al., 2015), and may thus not derive their speed tuning by differentiating position signals. By contrast, hippocampal speed cells tend to show a bias for retrospective speed coding (Kropff et al., 2015), so these cells might be more likely to lose their speed tuning after place or grid cells are disrupted. To the best of our knowledge, this has not yet been experimentally tested, but one testable prediction of the conjugate coding hypothesis is that some retrospective speed cells should show impaired speed tuning after disruption of the position signals that they differentiate.

Why should there be separate populations of prospective and retrospective speed cells? Some neural computations in the hippocampal system might depend upon both pre‐ and post‐positional velocity signals. For example, it has recently been shown from place cell recordings that the gain of hippocampal path integration can be modified by prolonged exposure to a cue conflict between inertial and non‐inertial self‐motion cues (Jayakumar et al., 2019). Such cue conflicts may generate error signals that recalibrate the gain of the path integrator. One way to compute such error signals would be to calculate the difference between pre‐ versus post‐positional velocity signals, and adjust the gain in proportion to the mismatch. Hence, deriving post‐positional velocity signals via sigma‐chi decoding might be essential for calibrating the gain of neural integrators in the hippocampal system and other brain systems.

3.2.2. Differentiation by cells versus circuits

The fact that velocity information can be extracted from between‐cell spike intervals of position‐tuned neurons is not a new idea. A similar idea has long been exploited in classical models of visual motion sensitivity, where neurons tuned for the direction of visual motion can be modeled by circuits that differentiate inputs from position‐tuned neurons with receptive fields that are arranged sequentially along the preferred path of motion (Baccus, Ölveczky, Manu, & Meister, 2008; Hubel & Wiesel, 1962). Such models bear a strong resemblance to our simulations of angular velocity cells (Figure 4c) and speed cells (Figure 7e), in which sigma‐chi neurons derive velocity signals from position‐tuned neurons. But in classical models of visual motion detection, directional tuning is typically derived from circuit‐level mechanisms (such as asymmetric lateral inhibition), whereas in our simulations, velocity tuning was instead achieved by the chi operation (Equation (9)), which could be performed at the cellular level (rather than the circuit level) in dendrites of biological neurons (see below). Indeed, calcium imaging experiments have revealed that in visual cortex, most neurons that receive input from orientation‐tuned cells pool their inputs across the entire 360° range of edge orientations, and exhibit “hot spots” for specific orientations in different parts of their dendritic tree (Jia, Rochefort, Chen, & Konnerth, 2010). This arrangement is similar to what would be expected if visual cortex neurons were computing co‐firing rates by pooling their inputs across multiple orientation phases, in a manner similar to the way that sigma‐chi neurons pool inputs across multiple phases of head angle (Figure 4) or theta phase (Figure 10) in our simulations.

3.3. Neural integration

Neural integration plays important roles in spatial coding. Spatially tuned neurons are thought to compute an animal's position in two ways: by measuring the animal's displacement from fixed landmarks in the surrounding environment, and by measuring the animal's current position relative to its prior position (McNaughton et al., 1996). The latter process requires integrating the animal's movement velocity over time to compute its position, a procedure known as path integration. Extracting position information via sigma‐chi decoding (as in simulations of Figures 8, 9, 10) is tantamount to path integration, since it is a process that derives position (q) from velocity (q˙). Three types of models have been proposed to explain how hippocampal networks might perform path integration: attractor networks, reservoir computing models, and oscillatory interference models. It is worthwhile to consider how dual coding of conjugate variables might impact mechanisms of path integration in each of these three classes of models.
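In its simplest discrete form, the integration that recovers position from velocity is just a cumulative sum, q(t) = q(0) + ∫ q̇ dt; the one‐function sketch below (illustrative, not the paper's sigma‐chi implementation) makes the q˙ → q direction of the computation concrete.

```python
import numpy as np

def path_integrate(velocity, dt, q0=0.0):
    """Discrete path integration: q[t] = q0 + cumulative sum of the
    velocity signal scaled by the time step. Recovers position from
    velocity, i.e., the q-dot -> q direction of the computation."""
    return q0 + np.cumsum(np.asarray(velocity, dtype=float)) * dt
```

Any neural mechanism performing path integration must implement something equivalent to this accumulation, whether through attractor dynamics, reservoir states, or oscillatory phase (the three model classes considered below).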

3.3.1. Attractor networks

Attractor networks have been used to simulate path integration by HD cells (Song & Wang, 2005; Zhang, 1996), place cells (Samsonovich & McNaughton, 1997; Eliasmith, 2005; Hedrick & Zhang, 2016), and grid cells (Burak & Fiete, 2009; Fuhs & Touretzky, 2006; Guanella, Kiper, & Verschure, 2011). Although these models differ in their implementation details, they all share two core features of attractor networks (see Knierim & Zhang, 2012). First, neurons within a population are reciprocally interconnected with one another via lateral inhibition, causing a localized “activity bump” to form as a stable attractor state of the network that endows individual neurons (such as HD, place, or grid cells) with peak‐shaped positional tuning of their firing rates. Second, the symmetry of lateral connections among position‐tuned neurons is controlled by neurons with velocity‐tuned firing rates, which can push the activity bump through the network along trajectories that mirror the animal's trajectory through space.

Analysis of spike trains from pairs of simultaneously recorded HD cells (Butler & Taube, 2017; Peyrache et al., 2015) and grid cells (Yoon et al., 2013) has revealed evidence for attractor dynamics in these networks, and experimentalists have reported connections in the entorhinal grid cell network that resemble connectivity patterns predicted by standard attractor network models (Couey et al., 2013; Fuchs et al., 2016). However, attractor networks implicitly assume that information about position and velocity is encoded only by firing rates, and not by spike correlations. Consequently, simulated attractor networks are typically composed from linear neurons (either nonspiking units or linear integrate‐and‐fire units) that derive their outputs exclusively from R, and not from R˙, much like the sigma decoder in our simulations. But what if individual neurons behave more like sigma‐chi decoders than sigma decoders? This is a real possibility, as evidenced by the fact that dendrites of hippocampal pyramidal cells and entorhinal stellate cells can perform nonlinear integration of their inputs, and may thus derive their outputs from correlations among their input spike trains rather than linear summation of firing rate inputs (Makara & Magee, 2013; Schmidt‐Hieber et al., 2017). The connectivity patterns and neural firing properties predicted by attractor models are founded upon the assumption that individual neurons behave like sigma decoders, but if they instead behave like sigma‐chi decoders, then connectivity and firing properties within the hippocampal system could look very different from what standard attractor models predict.

For example, consider a standard attractor network composed from linear units, in which velocity signals that push the activity bump are extracted from firing rates of velocity‐tuned neurons. Such a model predicts feedforward connections from velocity‐tuned neurons to position‐tuned neurons. But under conjugate coding, velocity signals could be extracted from the co‐firing rates of position‐tuned neurons, so feedforward velocity inputs predicted by standard attractor networks might be minimal or nonexistent. Instead, they could be replaced (or augmented) by functionally equivalent connections from one population of position‐tuned neurons to another (e.g., from entorhinal grid cells to hippocampal place cells, or vice versa). Since velocity signals encoded by co‐firing rates tend to be delayed in time by the integration time constant (see Figure 2), a time lag would be introduced into velocity signals extracted from co‐firing rates of position‐tuned neurons. However, position signals encoded by the firing rates of HD cells and grid cells are often seen to prospectively lead the animal's true position (Blair & Sharp, 1995; De Almeida, Idiart, Villavicencio, & Lisman, 2012; Mehta, Barnes, & McNaughton, 1997; Muller & Kubie, 1989), which could help to compensate for time lag if velocity signals were derived from the co‐firing rates of these neurons for the purpose of path integration. Standard attractor networks also predict lateral inputs to position‐tuned neurons from neighboring position‐tuned neurons, but under conjugate coding, these connections too could be replaced or augmented by functionally equivalent alternatives, such as inputs to position‐tuned neurons (e.g., place cells or grid cells) from velocity‐tuned neurons (e.g., theta cells) that encode position in their co‐firing rates rather than their firing rates.
In summary, the conjugate coding principle suggests that biological and artificial attractor networks may be subject to radically different constraints upon their connectivity, and therefore, attractor models composed purely from linear neurons may be quite limited in their ability to generate accurate predictions about connectivity in biological path integration networks.

Dual coding of position and velocity might also help to explain why some populations of neurons that are predicted to exist by standard attractor models have never been observed. In two‐dimensional open‐field environments, place cells and grid cells exhibit robust positional tuning of their firing rates in two dimensions. Path integration in two dimensions requires a two‐dimensional representation of movement velocity, but to the best of our knowledge, no experiment has ever observed neurons with firing rates that are tuned for velocity in two dimensions. Speed cells may contribute to encoding the nondirectional component of velocity in two dimensions, but an additional component for movement direction would also be required. The entorhinal cortex does contain neurons that are tuned for head direction (Giocomo et al., 2014), and many grid cells are selective not only for the animal's position but also for the direction in which its head is facing (Sargolini et al., 2006; Wills, Barry, & Cacucci, 2012). However, these entorhinal populations appear to be tuned for the head's azimuthal position, rather than for azimuthal movement direction. Accurate two‐dimensional path integration requires an azimuthal velocity signal, rather than an azimuthal position signal (Raudies, Brandon, Chapman, & Hasselmo, 2015). If velocity is encoded in spike correlations as well as (or instead of) firing rates, then a firing rate code for azimuthal movement velocity would not need to exist, because neurons that encode a two‐dimensional position signal in their firing rates (such as place or grid cells) would automatically also encode a two‐dimensional velocity in their co‐firing rates. Hence, the co‐firing rates of grid cells could serve as the substrate for encoding velocity signals that support two‐dimensional path integration, even in the absence of any neural population that encodes two‐dimensional velocity signals in their firing rates.
A similar idea has been proposed by Zutshi, Leutgeb, and Leutgeb (2017), who argued that theta sequences generated by grid cells may provide a substrate for two‐dimensional velocity coding in entorhinal cortex. However, our simulations suggest that while oscillatory modulation (which gives rise to theta sequences) is essential for encoding position signals in spike correlations, it is not essential for encoding velocity signals in spike correlations, as demonstrated by the fact that velocity can be recovered via sigma‐chi decoding from simulated HD cells (Figure 5) and grid cells (Figure 7) even when they generate Poisson spike trains with no oscillatory modulation at all.

3.3.2. Reservoir computing networks

Reservoir computing models of path integration are recurrent neural networks trained from example data (via gradient descent methods) to convert time‐varying velocity signals into time‐varying position signals (Abbott, DePasquale, & Memmesheimer, 2016; Denève & Machens, 2016). It has recently been shown that neurons with periodic spatial tuning—similar to entorhinal grid cells—can emerge spontaneously in a nonspiking recurrent network trained to perform spatial path integration (Banino et al., 2018). Like attractor models, most reservoir computing models of path integration use recurrent networks composed from linear neurons, so the inputs and outputs to these networks encode velocity and position signals solely as vectors of neural firing rates (not spike train correlations). However, unlike standard attractor models, recurrent connections within a reservoir computing network are sculpted by learning. It is thus possible that when a recurrent network of spiking neurons is trained to perform path integration, error‐driven learning could cause the formation of neural microcircuits that extract information about position or velocity from correlations among spike trains, and not just from firing rates. This would be more likely to occur if the network were constructed not from linear neurons, but from spiking neurons that perform nonlinear integration of their inputs, similar to the sigma‐chi neurons in our simulations. Training a recurrent network of sigma‐chi neurons to perform path integration would require differentiating Equation (9) to derive a cost function that supports convergent learning. If this could be achieved, then it would be intriguing to investigate what kinds of coding mechanisms naturally emerge when a recurrent network of sigma‐chi neurons is trained to perform path integration. 
Phenomena such as oscillatory rhythms that support dual coding might emerge spontaneously from the training of such a network, in a manner similar to the way that grid cells emerge spontaneously when a network of linear neurons is trained to perform path integration (Banino et al., 2018). The emergent firing properties and connectivity patterns in a trained network of recurrently connected sigma‐chi neurons might bear closer resemblance to real hippocampal and entorhinal networks than those that emerge from networks of linear neurons.

3.3.3. Oscillatory interference models

Oscillatory interference models are explicitly designed to encode position information using correlated neural activity, rather than firing rates. These models propose that the brain contains VCOs that shift phase against one another at a rate that depends upon the animal's movement velocity (Burgess et al., 2007; Geisler, Robbe, Zugaro, Sirota, & Buzsáki, 2007). Consequently, phase offsets between oscillators depend upon the animal's position in space. If VCOs are implemented by spike trains of neurons that burst rhythmically at the theta frequency, then the VCOs map the animal's position into a representational space where each dimension measures time intervals (normalized by the theta cycle period) between pairs of spikes that are fired by different neurons (i.e., cells synchronized to different VCOs). This is in marked contrast with attractor and reservoir computing models, which map the animal's position into a representational space where each dimension measures time intervals between pairs of spikes that are fired by the same neuron (i.e., firing rates).
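The core relationship exploited by these models can be sketched in a few lines: a velocity‐controlled oscillator (VCO) running at a baseline frequency plus a velocity‐dependent term accumulates a phase offset, relative to a fixed reference oscillator, that is proportional to integrated velocity, that is, to position. This is our own illustrative sketch; the symbols f0 (theta baseline, in Hz) and beta (frequency gain, in Hz per cm/s) are assumed names, not parameter values from the models cited.

```python
import numpy as np

def vco_phase_offset(velocity, dt, f0=8.0, beta=0.05):
    """Phase offset (radians) between a velocity-controlled oscillator and a
    fixed reference oscillator.

    The VCO runs at f0 + beta * v(t), so its phase offset from the f0
    reference grows as 2*pi*beta * integral(v dt), i.e., in proportion to
    the distance traveled.
    """
    steps = np.arange(1, len(velocity) + 1)
    phase_ref = 2 * np.pi * f0 * dt * steps
    phase_vco = 2 * np.pi * np.cumsum(f0 + beta * np.asarray(velocity)) * dt
    return phase_vco - phase_ref

# Example: running at a constant 20 cm/s for 1 s covers 20 cm, and that
# position can be read back out of the accumulated phase offset
dt, beta = 0.001, 0.05
v = np.full(1000, 20.0)                    # cm/s
offset = vco_phase_offset(v, dt, beta=beta)
position = offset / (2 * np.pi * beta)     # invert the phase-position map
```

Reading position out of `offset` rather than out of either oscillator's rate is what distinguishes this representational scheme from the firing‐rate codes of attractor and reservoir models.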

Early oscillatory interference models demonstrated how individual grid cells (Burgess et al., 2007; Giocomo et al., 2007) or place cells (Blair et al., 2008) could derive their position‐tuned firing rates by detecting location‐specific synchrony among inputs from theta VCOs. This mechanism can account not only for the spatial firing properties of place and grid cells, but also for temporal firing properties such as theta rhythmicity, phase precession (Hafting et al., 2008; O'Keefe & Recce, 1993), and modulation of theta oscillations by running speed (Geisler et al., 2007; Jeewajee et al., 2014; Welday, Shlifer, Bloom, Zhang, & Blair, 2011). Some of these early oscillatory interference models (Burgess et al., 2007; Giocomo et al., 2007) simulated nonlinear multiplicative interactions among VCO inputs to grid cells, while others were based upon linear summation of VCO inputs (Welday et al., 2011). However, these models were focused upon mimicking the firing properties of individual grid or place cells, rather than modeling path integration or other network‐level computations. Later models proposed novel “hybrid” architectures for performing path integration through a combination of both attractor dynamics and oscillatory interference (Bush & Burgess, 2014; Hasselmo & Shay, 2014). Despite incorporating oscillatory interference mechanisms rooted in spike correlations, these models were simulated by networks of linear neurons that derived their outputs by computing weighted sums of their inputs, and then converted these weighted sums into spike trains using LIF or Izhikevich spike dynamics. The connectivity patterns predicted by these models might change—and perhaps become more biologically accurate—if they were reformulated using model neurons that perform nonlinear integration of their inputs to take full advantage of dual coding by firing rates as well as spike correlations.

3.4. Summary and conclusions

Neuroscience researchers are sometimes prone to “ratism”—a bias to regard the firing rates of spiking neurons as the primary coordinate basis of the neural code. Under the influence of this bias, it is reflexively assumed that if individual neurons are tuned for some particular variable (such as head angle, spatial position, or running speed), then a population of such neurons exists mainly to encode distributed representations of that same variable using vectors of neural firing rates. While it is certainly true that information can be encoded by (and decoded from) population vectors of neural firing rates, it does not follow that the space of firing rate vectors is the only representational space into which information is mapped by neural spike trains. It is necessary to recognize and appreciate that the same population of neurons can encode different information when viewed through the “lenses” of different decoders.

Here, we have shown that there is a conjugate relationship between firing rate codes and spike correlation codes, which mirrors the uncertainty principle from physics. A firing rate code for position (q ⟶ R) can coexist with a co‐firing rate code for velocity (q˙ ⟶ R˙), or a co‐firing rate code for position (q ⟶ R˙) can coexist with a firing rate code for velocity (q˙ ⟶ R). More generally, a firing rate code for any time‐varying stimulus can coexist with a co‐firing rate code for the time derivative of that stimulus, and vice versa. Hence, the conjugate coding principle described here may be useful for understanding neural coding not only in the hippocampal system, but in other brain systems as well. For example, the visual system encodes information about the position as well as the velocity of objects in the visual field. Motor and proprioceptive systems process information about the position and velocity of moving limbs. And neural circuits for reinforcement learning encode information about expected value and prediction error, which bear a relationship to one another that is similar to the relationship between position and velocity (prediction error is the time derivative of expected value, and expected value is the time integral of prediction error). In all of these systems, the conjugate coding principle may be at work to embed orthogonal representations of stimuli and their time derivatives within the firing rates and co‐firing rates of neural spike trains. Further theoretical and empirical analysis may help to elucidate how biological neurons extract information (at the level of single cells) from both firing rates and co‐firing rates to efficiently perform useful computations.

4. METHODS

All simulations were carried out using MATLAB (source code available upon request).

4.1. Position tracking

Simulated HD, grid, and theta cells were generated from position tracking data. Head direction data from the open field was obtained while a rat foraged freely for food pellets in a circular arena (diameter = 80 cm). Rats wore a pair of red and green light‐emitting diodes (LEDs) spaced 11.25 cm apart from one another, and an overhead video camera sampled LED positions at R = 30 Hz with a resolution of p = 4.7 pixels/cm. Each LED's position was smoothed using a boxcar window 15 samples (0.5 s) wide, and the rat's head direction at each time step was estimated as arctan(Δyi/Δxi), where Δxi and Δyi denote the difference between the x and y coordinates of the red and green LEDs, respectively, at the ith position sample. Position data from the circular track was obtained while rats ran laps on a 1.5 m diameter circular track enclosed within a planetarium‐style dome, where an array of three visual landmarks was projected onto the interior surface to create an augmented reality environment (Jayakumar et al., 2019). The rat was attached to a boom arm that rotated around a joint in the center of the track, and the rat's position on the track was sampled at 100 Hz from an optical encoder of the boom angle in the center joint.
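The head direction estimate described above can be sketched as follows. This is our own Python reimplementation of the described procedure (the original analyses used MATLAB); the (N, 2) array layout is our convention, and we use arctan2 in place of arctan so that headings resolve over the full 360° range.

```python
import numpy as np

def head_direction(red_xy, green_xy, win=15):
    """Estimate head direction (radians) from red/green LED coordinates.

    Each LED trace is smoothed with a boxcar window (15 samples = 0.5 s at
    30 Hz), and heading is taken from the smoothed coordinate differences.
    """
    kernel = np.ones(win) / win

    def smooth(a):
        # Boxcar-smooth each coordinate column independently
        return np.column_stack([np.convolve(a[:, i], kernel, mode="same")
                                for i in (0, 1)])

    r = smooth(np.asarray(red_xy, float))
    g = smooth(np.asarray(green_xy, float))
    dx, dy = r[:, 0] - g[:, 0], r[:, 1] - g[:, 1]
    return np.arctan2(dy, dx)

# Example: red LED fixed northeast of the green LED -> constant 45° heading
red = np.ones((100, 2))
green = np.zeros((100, 2))
hd = head_direction(red, green)
```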

4.2. Generation of simulated spike trains

To simulate spike trains of HD cells or grid cells, we computed the probability of spiking at each time step using Equation (3). The computed probability was then used as a threshold for a random number generator that output a real value between 0 and 1; a spike was placed in a given time step if the random value fell below the calculated probability threshold, yielding a binary spike response function (Equation (1)). For simulations of theta cell spike trains, the spike response function was generated in two steps. First, we created a “seed spike train” for each theta cell, containing a single spike at every time step where the theta cell's phase (computed using Equation (16)) passed through perfect phase synchrony with the reference oscillator. The probability of spiking for theta cell n was then computed by convolving the seed spike train with a Gaussian kernel (Equation (18)). A problem with this method was that when seed spikes were spaced closer together than the width of the Gaussian kernel, it was possible for the convolution procedure to yield inappropriately large spike probabilities. To prevent this, the convolution was performed separately on even and odd numbered spikes within the seed spike train, so that, within each convolution, no two seed spikes would be closer together than the width of the Gaussian kernel. The two convolution results were then merged by taking the maximum probability from either of the two results at each time step. The probabilities in the merged time series were then used as the probability threshold on the random number generator for outputting the theta cell's spike response function.
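The thresholding step and the even/odd convolution merge can be sketched as follows. This is an illustrative Python reconstruction of the described procedure, not the original MATLAB code; the kernel width and RNG seed are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)   # seed chosen for reproducibility

def spikes_from_probability(p):
    """Binary spike train: a spike is placed wherever a uniform random draw
    falls below the computed spike probability (the thresholding step)."""
    return (rng.random(len(p)) < p).astype(int)

def theta_spike_probability(seed_spikes, kernel):
    """Convolve a seed spike train with a Gaussian kernel, splitting even-
    and odd-numbered seed spikes into separate convolutions and merging the
    results with a pointwise maximum, so that closely spaced seeds cannot
    sum to probabilities above the kernel's peak."""
    idx = np.flatnonzero(seed_spikes)
    even = np.zeros(len(seed_spikes))
    odd = np.zeros(len(seed_spikes))
    even[idx[0::2]] = 1.0
    odd[idx[1::2]] = 1.0
    conv = lambda s: np.convolve(s, kernel, mode="same")
    return np.maximum(conv(even), conv(odd))

# Example: two seed spikes closer together than the kernel width
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)   # Gaussian, peak = 1
seed = np.zeros(30)
seed[10] = seed[11] = 1
p = theta_spike_probability(seed, kernel)
spk = spikes_from_probability(p)
```

With a plain convolution, the two overlapping kernels would sum above 1 between the seeds; the even/odd split with a maximum merge keeps every probability bounded by the kernel's peak.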

4.3. Conversion of speed cells to grid cells

Simulated speed cell spike trains were converted to simulated grid cell spike trains (Figure 12) in the following way. Each speed cell resided at 1 of 12 positions in a ring oscillator, so we may index speed cells by the variable i = 1, 2, …, 12. The conversion process introduced a spatially periodic firing rate modulation into the speed cell's spike train, converting it into a grid cell. The probability of position‐driven spiking under this modulatory influence was given by ρi(q) = cos(3q + 2πi/12), where q is the animal's angular position on the track, 3 cycles per lap is the spatial frequency of the periodic modulation, and i shifts the spatial phase of the grid field. At all time steps where speed cell i fired a spike and ρi < 0, a random number ri(t) ∈ [0, 1] was generated and the spike was deleted if ri(t) < |ρi|. At all time steps where speed cell i did not fire a spike and ρi > 0, a random number ri(t) ∈ [0, 1] was generated and a spike was added if ri(t) < ϑ|ρi|, where ϑ = .0475 was a coefficient that was empirically determined to leave the mean number of spikes fired by the speed cell unchanged after it had been converted to a grid cell, using the speed cell simulation parameters from the simulations in Figure 11.
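The deletion/addition procedure can be sketched as follows. This is an illustrative Python reconstruction of the steps described above, not the original MATLAB code; the array conventions and RNG seed are ours.

```python
import numpy as np

rng = np.random.default_rng(1)   # seed chosen for reproducibility

def speed_to_grid(spikes, q, i, theta=0.0475):
    """Impose the spatially periodic modulation described in the text on a
    speed cell's spike train, yielding a simulated grid cell.

    spikes : binary spike train of speed cell i (integer numpy array)
    q      : angular position on the track (radians) at each time step
    i      : ring-oscillator index, 1..12, which sets the spatial phase
    theta  : coefficient that preserves the mean spike count
    """
    rho = np.cos(3 * q + 2 * np.pi * i / 12)    # 3 grid cycles per lap
    r = rng.random(len(spikes))
    out = spikes.copy()
    # Delete spikes where modulation is negative, with probability |rho|
    out[(spikes == 1) & (rho < 0) & (r < np.abs(rho))] = 0
    # Add spikes where modulation is positive, with probability theta*|rho|
    out[(spikes == 0) & (rho > 0) & (r < theta * np.abs(rho))] = 1
    return out

# Example: sparse spikes over one lap around the track
spikes = (rng.random(1000) < 0.2).astype(int)
q = np.linspace(0, 2 * np.pi, 1000)
grid_spikes = speed_to_grid(spikes, q, i=3)
```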

ACKNOWLEDGMENTS

We thank Kechen Zhang for valuable discussions about how to mathematically formulate our ideas about the uncertainty principle and dual coding of conjugate variables. Manu Madhav and Ravi Jayakumar provided position tracking data from the circular track, and Rose DeGuzman assisted in collecting the head direction data from the open field. We also thank Joseph Monaco for helpful feedback and discussion. This work was supported by NIH Initiative for Maximizing Student Development (IMSD) grant #GM055052 to R.G.

Grgurich R, Blair HT. An uncertainty principle for neural coding: Conjugate representations of position and velocity are mapped onto firing rates and co‐firing rates of neural spike trains. Hippocampus. 2020;30:396–421. 10.1002/hipo.23197

Funding information NIH, Grant/Award Number: GM055052

DATA AVAILABILITY STATEMENT

All simulations were carried out using MATLAB (source code available upon request).

REFERENCES

1. Abbott, L. F., DePasquale, B., & Memmesheimer, R.‐M. (2016). Building functional networks of spiking model neurons. Nature Neuroscience, 19(3), 350–355. 10.1038/nn.4241
2. Baccus, S. A., Ölveczky, B. P., Manu, M., & Meister, M. (2008). A retinal circuit that computes object motion. Journal of Neuroscience, 28(27), 6807–6817. 10.1523/JNEUROSCI.4206-07.2008
3. Banino, A., Barry, C., Uria, B., Blundell, C., Lillicrap, T., Mirowski, P., … Kumaran, D. (2018). Vector‐based navigation using grid‐like representations in artificial agents. Nature, 557, 429–433. 10.1038/s41586-018-0102-6
4. Blair, H. T., & Sharp, P. E. (1995). Anticipatory head direction signals in anterior thalamus: Evidence for a thalamocortical circuit that integrates angular head motion to compute head direction. Journal of Neuroscience, 15(9), 6260–6270. 10.1523/JNEUROSCI.15-09-06260.1995
5. Blair, H. T., Gupta, K., & Zhang, K. (2008). Conversion of a phase‐ to a rate‐coded position signal by a three‐stage model of theta cells, grid cells, and place cells. Hippocampus, 18(12), 1239–1255. 10.1002/hipo.20509
6. Blair, H. T., Wu, A., & Cong, J. (2014). Oscillatory neurocomputing with ring attractors: A network architecture for mapping locations in space onto patterns of neural synchrony. Philosophical Transactions of the Royal Society B: Biological Sciences, 369, 20120526. 10.1098/rstb.2012.0526
7. Brandon, M. P., Bogaard, A. R., Libby, C. P., Connerney, M. A., Gupta, K., & Hasselmo, M. E. (2011). Reduction of theta rhythm dissociates grid cell spatial periodicity from directional tuning. Science, 332(6029), 595–599. 10.1126/science.1201652
8. Burak, Y., & Fiete, I. R. (2009). Accurate path integration in continuous attractor network models of grid cells. PLoS Computational Biology, 5(2), e1000291. 10.1371/journal.pcbi.1000291
9. Burgess, N., Barry, C., & O'Keefe, J. (2007). An oscillatory interference model of grid cell firing. Hippocampus, 17(9), 801–812. 10.1002/hipo.20327
10. Bush, D., & Burgess, N. (2014). A hybrid oscillatory interference/continuous attractor network model of grid cell firing. Journal of Neuroscience, 34(14), 5065–5079. 10.1523/JNEUROSCI.4017-13.2014
11. Butler, W. N., & Taube, J. S. (2017). Oscillatory synchrony between head direction cells recorded bilaterally in the anterodorsal thalamic nuclei. Journal of Neurophysiology, 117, 1847–1852. 10.1152/jn.00881.2016
12. Climer, J. R., Newman, E. L., & Hasselmo, M. E. (2014). Phase coding by grid cells in unconstrained environments: Two‐dimensional phase precession. European Journal of Neuroscience, 38(4), 2526–2541. 10.1111/ejn.12256
13. Couey, J. J., Witoelar, A., Zhang, S.‐J., Zheng, K., Ye, J., Dunn, B., … Witter, M. P. (2013). Recurrent inhibitory circuitry as a mechanism for grid formation. Nature Neuroscience, 16(3), 318–324. 10.1038/nn.3310
14. De Almeida, L., Idiart, M., Villavicencio, A., & Lisman, J. (2012). Alternating predictive and short‐term memory modes of entorhinal grid cells. Hippocampus, 22(8), 1647–1651. 10.1002/hipo.22030
15. Denève, S., & Machens, C. K. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375–382. 10.1038/nn.4243
16. Eliasmith, C. (2005). A unified approach to building and controlling spiking attractor networks. Neural Computation, 17(6), 1276–1314. 10.1162/0899766053630332
17. Fuchs, E. C., Neitz, A., Pinna, R., Melzer, S., Caputi, A., & Monyer, H. (2016). Local and distant input controlling excitation in layer II of the medial entorhinal cortex. Neuron, 89(1), 194–208. 10.1016/j.neuron.2015.11.029
18. Fuhs, M. C., & Touretzky, D. S. (2006). A spin glass model of path integration in rat medial entorhinal cortex. Journal of Neuroscience, 26(16), 4266–4276. 10.1523/JNEUROSCI.4353-05.2006
19. Geisler, C., Robbe, D., Zugaro, M., Sirota, A., & Buzsáki, G. (2007). Hippocampal place cell assemblies are speed‐controlled oscillators. Proceedings of the National Academy of Sciences of the United States of America, 104(19), 8149–8154. 10.1073/pnas.0610121104
20. Giocomo, L. M., Stensola, T., Bonnevie, T., Van Cauter, T., Moser, M.‐B., & Moser, E. I. (2014). Topography of head direction cells in medial entorhinal cortex. Current Biology, 24(3), 252–262. 10.1016/j.cub.2013.12.002
21. Giocomo, L. M., Zilli, E. A., Fransén, E., & Hasselmo, M. E. (2007). Temporal frequency of subthreshold oscillations scales with entorhinal grid cell field spacing. Science, 315(5819), 1719–1722. 10.1126/science.1139207
22. Góis, Z. H. T. D., & Tort, A. B. L. (2018). Characterizing speed cells in the rat hippocampus. Cell Reports, 25(7), 1872–1884.e4. 10.1016/j.celrep.2018.10.054
23. Guanella, A., Kiper, D., & Verschure, P. (2011). A model of grid cells based on a twisted torus topology. International Journal of Neural Systems, 17(04), 231–240. 10.1142/S0129065707001093
24. Hafting, T., Fyhn, M., Bonnevie, T., Moser, M.‐B., & Moser, E. I. (2008). Hippocampus‐independent phase precession in entorhinal grid cells. Nature, 453(7199), 1248–1252. 10.1038/nature06957
25. Hafting, T., Fyhn, M., Molden, S., Moser, M.‐B., & Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), 801–806. 10.1038/nature03721
26. Hasselmo, M. E., & Shay, C. F. (2014). Grid cell firing patterns may arise from feedback interaction between intrinsic rebound spiking and transverse traveling waves with multiple heading angles. Frontiers in Systems Neuroscience, 8. 10.3389/fnsys.2014.00201
27. Hasselmo, M. E., Giocomo, L. M., & Zilli, E. A. (2007). Grid cell firing may arise from interference of theta frequency membrane potential oscillations in single neurons. Hippocampus, 17(12), 1252–1271. 10.1002/hipo.20374
28. Hedrick, K. R., & Zhang, K. (2016). Megamap: Flexible representation of a large space embedded with nonspatial information by a hippocampal attractor network. Journal of Neurophysiology, 116, 868–891. 10.1152/jn.00856.2015
29. Hinman, J. R., Brandon, M. P., Climer, J. R., Chapman, G. W., & Hasselmo, M. E. (2016). Multiple running speed signals in medial entorhinal cortex. Neuron, 91(3), 666–679. 10.1016/j.neuron.2016.06.027
30. Hinman, J. R., Dannenberg, H., Alexander, A. S., & Hasselmo, M. E. (2018). Neural mechanisms of navigation involving interactions of cortical and subcortical structures. Journal of Neurophysiology, 119, 2007–2029. 10.1152/jn.00498.2017
31. Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106–154. 10.1113/jphysiol.1962.sp006837
32. Jacob, P.‐Y., Capitano, F., Poucet, B., Save, E., & Sargolini, F. (2019). Path integration maintains spatial periodicity of grid cell firing in a 1D circular track. Nature Communications, 10(1), 840. 10.1038/s41467-019-08795-w
33. Jayakumar, R. P., Madhav, M. S., Savelli, F., Blair, H. T., Cowan, N. J., & Knierim, J. J. (2019). Recalibration of path integration in hippocampal place cells. Nature, 566(7745), 533–537. 10.1038/s41586-019-0939-3
34. Jeewajee, A., Barry, C., Douchamps, V., Manson, D., Lever, C., & Burgess, N. (2014). Theta phase precession of grid and place cell firing in open environments. Philosophical Transactions of the Royal Society B: Biological Sciences, 369. 10.1098/rstb.2012.0532
35. Jensen, O., & Lisman, J. E. (2000). Position reconstruction from an ensemble of hippocampal place cells: Contribution of theta phase coding. Journal of Neurophysiology. 10.1152/jn.2000.83.5.2602
36. Jia, H., Rochefort, N. L., Chen, X., & Konnerth, A. (2010). Dendritic organization of sensory input to cortical neurons in vivo. Nature, 464(7293), 1307–1312. 10.1038/nature08947
37. Kennard, E. H. (1927). Zur Quantenmechanik einfacher Bewegungtypen. Zeitschrift für Physik, 44, 326–352.
38. Knierim, J. J., & Zhang, K. (2012). Attractor dynamics of spatially correlated neural activity in the limbic system. Annual Review of Neuroscience, 35, 267–285. 10.1146/annurev-neuro-062111-150351
39. Koenig, J., Linder, A. N., Leutgeb, J. K., & Leutgeb, S. (2011). The spatial periodicity of grid cells is not sustained during reduced theta oscillations. Science, 332(6029), 592–595. 10.1126/science.1202333
40. Kropff, E., Carmichael, J. E., Moser, M.‐B., & Moser, E. I. (2015). Speed cells in the medial entorhinal cortex. Nature, 523(7561), 419–424. 10.1038/nature14622
41. Lever, C., Burton, S., Jeewajee, A., O'Keefe, J., & Burgess, N. (2009). Boundary vector cells in the subiculum of the hippocampal formation. Journal of Neuroscience, 29(31), 9771–9777. 10.1523/JNEUROSCI.1319-09.2009
42. Makara, J. K., & Magee, J. C. (2013). Variable dendritic integration in hippocampal CA3 pyramidal neurons. Neuron, 80(6), 1438–1450. 10.1016/j.neuron.2013.10.033
43. McNaughton, B. L., Barnes, C. A., Gerrard, J. L., Gothard, K., Jung, M. W., Knierim, J. J., … Weaver, K. L. (1996). Deciphering the hippocampal polyglot: The hippocampus as a path integration system. Journal of Experimental Biology, 199(1), 173–185.
44. Mehta, M. R., Barnes, C. A., & McNaughton, B. L. (1997). Experience‐dependent, asymmetric expansion of hippocampal place fields. Proceedings of the National Academy of Sciences, 94(16), 8918–8921. 10.1073/pnas.94.16.8918
45. Mel, B. W. (2007). Why have dendrites? A computational perspective. In G. Stuart (Ed.), Dendrites (2nd ed., pp. 421–440). Oxford: Oxford University Press. 10.1093/acprof:oso/9780198566564.003.0016
46. Muller, R. U., & Kubie, J. L. (1989). The firing of hippocampal place cells predicts the future position of freely moving rats. Journal of Neuroscience, 9(12), 4101–4110. 10.1523/JNEUROSCI.09-12-04101.1989
47. O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely‐moving rat. Brain Research, 34(1), 171–175. 10.1016/0006-8993(71)90358-1
48. O'Keefe, J., & Recce, M. L. (1993). Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus, 3(3), 317–330. 10.1002/hipo.450030307
49. Peyrache, A., Lacroix, M. M., Petersen, P. C., & Buzsáki, G. (2015). Internally organized mechanisms of the head direction sense. Nature Neuroscience, 18(4), 569–575. 10.1038/nn.3968
50. Raudies, F., Brandon, M. P., Chapman, G. W., & Hasselmo, M. E. (2015). Head direction is coded more strongly than movement direction in a population of entorhinal neurons. Brain Research, 1621, 355–367. 10.1016/j.brainres.2014.10.053
51. Samsonovich, A., & McNaughton, B. L. (1997). Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience, 17(15), 5900–5920. 10.1523/JNEUROSCI.17-15-05900.1997
  51. Samsonovich, A. , & McNaughton, B. L. (1997). Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience, 17(15), 5900–5920. 10.1523/JNEUROSCI.17-15-05900.1997 [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Sargolini, F. , Fyhn, M. , Hafting, T. , McNaughton, B. L. , Witter, M. P. , Moser, M.‐B. , & Moser, E. I. (2006). Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science, 312(5774), 758–762. 10.1126/science.1125572 [DOI] [PubMed] [Google Scholar]
  53. Savelli, F. , Yoganarasimha, D. , & Knierim, J. J. (2008). Influence of boundary removal on the spatial representations of the medial entorhinal cortex. Hippocampus, 18(12), 1270–1282. 10.1002/hipo.20511 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Schmidt‐Hieber, C. , Toleikyte, G. , Aitchison, L. , Roth, A. , Clark, B. A. , Branco, T. , & Häusser, M. (2017). Active dendritic integration as a mechanism for robust and precise grid cell firing. Nature Neuroscience, 20(8), 1114–1121. 10.1038/nn.4582 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Sharp, P. (1996). Multiple spatial/behavioral correlates for cells in the rat postsubiculum: Multiple regression analysis and comparison to other hippocampal areas. Cerebral Cortex, 6(2), 238–259. 10.1093/cercor/6.2.238 [DOI] [PubMed] [Google Scholar]
  56. Skaggs, W. E. , McNaughton, B. L. , Wilson, M. A. , & Barnes, C. A. (1996). Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus, 6(2), 149–172. [DOI] [PubMed] [Google Scholar]
  57. Solstad, T. , Boccara, C. N. , Kropff, E. , Moser, M.‐B. , & Moser, E. I. (2008). Representation of geometric borders in the entorhinal cortex. Science, 322(5909), 1865–1868. 10.1126/science.1166466 [DOI] [PubMed] [Google Scholar]
  58. Song, P. , & Wang, X.‐J. (2005). Angular path integration by moving “hill of activity”: A spiking neuron model without recurrent excitation of the head‐direction system. Journal of Neuroscience, 25(4), 1002–1014. 10.1523/JNEUROSCI.4172-04.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Stackman, R. W. , & Taube, J. S. (1998). Firing properties of rat lateral mammillary single units: Head direction, head pitch, and angular head velocity. Journal of Neuroscience, 18(21), 9020–9037. 10.1523/JNEUROSCI.18-21-09020.1998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Taube, J. S. , Muller, R. U. , & Ranck, J. B. (1990). Head‐direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience, 10(2), 420–435. 10.1523/JNEUROSCI.10-02-00420.1990 [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Tingley, D. , & Buzsáki, G. (2018). Transformation of a spatial map across the hippocampal‐lateral septal circuit. Neuron, 98(6), 1229–1242.e5. 10.1016/j.neuron.2018.04.028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Welday, A. C. , Shlifer, I. G. , Bloom, M. L. , Zhang, K. , & Blair, H. T. (2011). Cosine directional tuning of theta cell burst frequencies: Evidence for spatial coding by oscillatory interference. Journal of Neuroscience, 31(45), 16157–16176. 10.1523/JNEUROSCI.0712-11.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Wills, T. , Barry, C. , & Cacucci, F. (2012). The abrupt development of adult‐like grid cell firing in the medial entorhinal cortex. Frontiers in Neural Circuits, 6, 21 10.3389/fncir.2012.00021 [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Wilson, M. A. , & McNaughton, B. L. (1993). Dynamics of the hippocampal ensemble code for space. Science, 261(5124), 1055–1058. [DOI] [PubMed] [Google Scholar]
  65. Yoon, K. , Buice, M. A. , Barry, C. , Hayman, R. , Burgess, N. , & Fiete, I. R. (2013). Specific evidence of low‐dimensional continuous attractor dynamics in grid cells. Nature Neuroscience, 16(8), 1077–1084. 10.1038/nn.3450 [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Zhang, K. (1996). Representation of spatial orientation by the intrinsic dynamics of the head‐direction cell ensemble: A theory. Journal of Neuroscience, 16(6), 2112–2126. 10.1523/JNEUROSCI.16-06-02112.1996 [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Zutshi, I. , Leutgeb, J. K. , & Leutgeb, S. (2017). Theta sequences of grid cell populations can provide a movement‐direction signal. Current Opinion in Behavioral Sciences, 17, 147–154. 10.1016/j.cobeha.2017.08.012 [DOI] [PMC free article] [PubMed] [Google Scholar]

Associated Data


Data Availability Statement

All simulations were carried out using MATLAB (source code available upon request).


Articles from Hippocampus are provided here courtesy of Wiley
