Abstract
Representations of continuous variables are crucial for creating internal models of the external world. A prevailing model of how the brain maintains these representations is the continuous bump attractor network (CBAN), which has been applied to a broad range of brain functions across different areas, such as spatial navigation in hippocampal/entorhinal circuits and working memory in prefrontal cortex. Through recurrent connections, a CBAN maintains a persistent activity bump, whose peak location can vary along a neural space, corresponding to different values of a continuous variable. To track the value of a continuous variable changing over time, a CBAN updates the location of its activity bump based on inputs that encode the changes in the continuous variable (e.g., movement velocity in the case of spatial navigation)—a process akin to mathematical integration. This integration process is not perfect and accumulates error over time. For error correction, CBANs can use additional inputs providing ground-truth information about the continuous variable’s correct value (e.g., visual landmarks for spatial navigation). These inputs enable the network dynamics to automatically correct any representation error. Recent experimental work on hippocampal place cells has shown that, beyond correcting errors, ground-truth inputs also fine-tune the gain of the integration process, a crucial factor that links the change in the continuous variable to the updating of the activity bump’s location. However, existing CBAN models lack this plasticity, offering no insights into the neural mechanisms and representations involved in the recalibration of the integration gain. In this paper, we explore this gap by using a ring attractor network, a specific type of CBAN, to model the experimental conditions that demonstrated gain recalibration in hippocampal place cells. Our analysis reveals the necessary conditions for neural mechanisms behind gain recalibration within a CBAN.
Unlike error correction, which occurs through network dynamics based on ground-truth inputs, gain recalibration requires an additional neural signal that explicitly encodes the error in the network’s representation via a rate code. Finally, we propose a modified ring attractor network as an example CBAN model that verifies our theoretical findings. Combining an error-rate code with Hebbian synaptic plasticity, this model achieves recalibration of integration gain in a CBAN, ensuring accurate representation for continuous variables.
1. Introduction
The brain’s ability to represent and process continuous variables, such as location, time, and sensory information, is fundamental to our understanding and interaction with the external world. A compelling theoretical framework for how the brain constructs these representations is provided by continuous bump attractor networks (CBANs) in a diverse range of brain functions, such as orientation tuning in visual cortex [1], working memory [2, 3], evidence accumulation and decision-making [4–7], and spatial navigation [8–11].
The CBAN is a class of recurrent neural networks that maintain persistent patterns of population activity through interactions among their neurons. This persistent activity typically takes the shape of a bell curve, or ‘bump’, when visualized on an appropriate topological arrangement of neurons (known as a low-dimensional manifold), such as a plane, circle, or torus [12]. Although the shape of the activity bump is constrained by network dynamics, its center location can vary along this low-dimensional manifold, corresponding to different values of the encoded continuous variable. Neural activity consistent with these key properties of CBANs, namely, the activity bump and the low-dimensional manifold, has been observed in recordings from various regions of the mammalian brain that encode continuous variables [13–15]. More conclusive and direct evidence for CBANs has been found in the central complex of the fly brain, where a biological CBAN encoding the fly’s heading angle, a continuous variable, has been identified based on the connectome and a combination of techniques, such as calcium imaging and optogenetics [16–18]. While these experimental findings support the idea of brain circuits employing CBANs to represent continuous variables, the neural mechanisms that enable CBANs to accurately update their representations in response to changes in continuous variables remain incompletely understood.
CBANs update their representations of a continuous variable based on one or both of two distinct types of inputs. The first type provides ‘absolute’ information, namely, the true value of a continuous variable, such as spatial location relative to visual landmarks or the item to be held in working memory. When this absolute information is available, it provides localized input on the CBAN’s low-dimensional manifold at a location that is associated with the true value of the continuous variable. In response to this localized input, the internal dynamics of the CBAN create a ‘basin’ on its low-dimensional manifold toward which the activity bump gravitates, resulting in nearly perfect encoding of the continuous variable [19–21]. Strong experimental evidence for this phenomenon has been observed in the fly brain, where optogenetic excitation of the central complex, a putative biological CBAN, at a specific anatomical location mimicking an absolute information source caused the CBAN’s activity bump, which normally changes its location to encode the fly’s heading, to be pulled to the excitation location [16]. As more indirect evidence, neural recordings from the hippocampus and entorhinal cortex, two regions of the mammalian brain modeled as CBANs, showed that their representations of the animal’s location are anchored to visual landmarks even when the landmarks are rotated around an open arena [22–26].
In contrast to the first type of input providing absolute information to the CBAN, the second type provides ‘differential’ information, namely, the changes in the continuous variable. Sources of such inputs may be, for instance, self-generated movements providing velocity information to be integrated in the context of spatial navigation, or sensory cues serving as pieces of evidence to be accumulated in the context of decision-making. In response to these inputs, the internal dynamics of the CBAN shift the activity bump along the low-dimensional manifold—in a process akin to mathematical integration—such that the bump’s location reflects the value of the continuous variable. However, compared to encoding based on absolute information, the encoding accuracy of this integration process depends critically on an additional factor, namely, the integration gain of the network, which relates the cumulative change in the continuous variable to the updating of the bump location in a proportional manner [27, 28]. If this gain factor is miscalibrated, the result of the CBAN’s integration drifts away from the true value of the continuous variable; that is, it accumulates error. In the presence of absolute information sources, such as visual landmarks for spatial localization, the aforementioned ‘basin’ mechanism continuously corrects error, preventing it from accumulating. However, without such absolute information, error accumulation continues, which may cause, for example, a CBAN integrating evidence to reach a decision threshold too soon or too late, or a CBAN integrating an animal’s angular head velocity to over- or underestimate the correct head direction. Thus, a finely tuned integration gain is crucial for a CBAN to accurately encode a continuous variable based on inputs carrying only differential information.
Present CBAN models of path integration treat the integration gain as a constant that is perfectly set via carefully chosen, hard-wired model parameters (e.g., synaptic weights) [11, 29–31] (but see [32]). However, recent data from time cells and place cells of the rodent hippocampal formation, hypothesized to rely on CBANs [33], showed that the integration gain is actually a plastic variable whose value is adjusted based on the feedback from absolute information sources [34, 35]. In the first study that demonstrated this phenomenon on place cells [35], the virtual visual landmarks, which provided the absolute information, were moved as a function of the animal’s movement. This movement created persistent error between the encoded location based on path integration and the actual location relative to the landmarks. Prolonged exposure to this conflict led to recalibration of the system’s integration gain, altering it in a direction and by an amount that reduced the positional encoding error. This recalibration was most evident after the landmarks were extinguished (i.e., when the absolute information to the putative CBAN was abolished); the space encoded by hippocampal cells during pure path integration either expanded or contracted, depending on the direction of the preceding landmark manipulation. Therefore, an open question is how CBANs adjust their integration gain based on error feedback from absolute information sources.
In the present paper, we aim to address this open question and generate testable physiological predictions about the neural mechanisms of gain recalibration in brain circuits that encode continuous variables. As a representative problem, we focus on hippocampal place coding and theoretically examine how visual landmarks, as the absolute information source, recalibrate the integration gain of a CBAN that encodes the animal’s position on a circular track, as demonstrated experimentally in [35]. To this end, we first derive analytical expressions of the integration gain, along with a simple dynamical model of the activity bump’s location in the ring attractor network, a specific type of CBAN for circular position encoding. In contrast to previous work, which implicitly assumes the network’s integration gain to be a constant, global parameter independent of the bump location on the low-dimensional manifold, our analysis reveals that the integration gain of a CBAN is a spatially distributed, possibly inhomogeneous, parameter. We then employ control theory techniques to dissect the necessary algorithmic conditions for accurate recalibration of this spatially distributed integration gain via feedback from the absolute information sources. Mapping these conditions from the algorithmic level to the mechanistic level uncovers a key mechanistic requirement for gain recalibration. We found that, unlike correction of encoding errors, which happens automatically and implicitly through network dynamics when feedback from an absolute information source is available, the process of learning/recalibrating the integration gain through Hebbian plasticity requires an additional neural signal that explicitly encodes the error in the network’s representation.
In other words, while prior work demonstrating error correction in a CBAN requires no explicit error encoding, our work shows that correcting the integration process itself (i.e., recalibration) does require an explicit representation of error. This error signal must be carried by changes in the firing rates of some neurons and must encode one of two quantities—the instantaneous error or the time-integral of the error—for recalibration of the integration gain. Finally, we propose a modified ring attractor network as an example CBAN model that instantiates our theoretical findings. Combining an error-rate code with Hebbian plasticity, this model achieves recalibration of the integration gain in a CBAN, ensuring accurate representation of continuous variables.
2. Model Setup: Ring Attractor Network
A continuous bump attractor network is a recurrently connected neural network in which neighboring neurons excite one another and inhibit distant neurons according to a connectivity pattern known as local excitation and global inhibition [1, 38]. This connectivity gives rise to a persistent bump of activity as a stable equilibrium state of the system. Invariance of the connectivity across the network leads to a continuum of such equilibrium states, called attractor states. The arrangement of neurons and the exact pattern of the recurrent connectivity determine the topology of this attractor. In the case of a ring attractor, neurons are arranged conceptually as a topological ring [11]. By sustaining an activity bump whose location can be shifted along the ring based on external inputs (i.e., differential and absolute information sources), a ring attractor network is well suited to represent a variable on a closed curve (e.g., the angular location of an animal on a circular track). Extending the arrangement of neurons to a two-dimensional plane results in a plane attractor whose activity bump is well suited to represent two variables, for example, the x and y coordinates of location in a room [37].
The activities of place and grid cells in 2D environments have been traditionally modeled using a plane attractor [12, 31, 37]. However, in the present investigation of gain recalibration based on location encoding, originally demonstrated in place cells from rats running laps on a 1D circular track, we chose the ring attractor as the basis of our model because of its analytical tractability.
The ring attractor that we analyzed is a network model consisting of three groups of neurons ordered in a ring arrangement: a central ring, a clockwise (CW) rotation ring, and a counter-clockwise (CCW) rotation ring [8, 11], as depicted in Fig. 1A. Neurons in this network receive synaptic input from other neurons in the network via intrinsic connections, as well as extrinsic input from upstream neurons carrying velocity information (i.e., a ‘differential’ type of input) and positional feedback from visual landmarks (i.e., an ‘absolute’ type of input).
Figure 1:
Ring attractor network model [10, 31, 36, 37]. (A) Schematic representation of the model. The central ring forms the main body of the model based on its recurrent connections (labeled with ➀). Its reciprocal offset connections with the CW and CCW rotation rings (labeled with ➁ and ➂) create a push-pull mechanism that modulates the intrinsically controlled neural activity of the central ring based on external inputs from the CW and CCW velocity neurons (labeled with ➃). An additional external input is provided to the central ring from the visual ring (labeled with ➄), corresponding to a set of sensory neurons that are tuned to visual landmarks. (B) Synaptic weight function that describes the recurrent connections within the central ring according to the well-known local excitation and global inhibition pattern. (C) Numerical demonstration of how recurrent connectivity within the central ring can autonomously maintain a persistent activity bump. Simulation of the central ring neurons was started with pseudo-randomly assigned initial conditions (light green line). Within 100 milliseconds, a bump of activity emerges (medium green line). Eventually, the firing rates converge to an equilibrium, forming a persistent bump of activity (dark green line). (D) Tuning curves of CCW and CW velocity neurons shown with blue-dashed and red-dashed lines, respectively.
We can model the dynamics of the ring attractor network using a set of equations that describe the firing rate of a continuum of neurons in response to their synaptic inputs. If we parameterize a neuron by its angle $\theta$ in the circular neural space, the model of the central ring neurons takes the form
$$\tau \,\frac{\partial r(\theta, t)}{\partial t} \;=\; -\,r(\theta, t) \;+\; f\!\big( (W_{\mathrm{rec}} \circledast r)(\theta, t) \;+\; I_{\mathrm{ext}}(\theta, t) \big) \qquad (1)$$
where $r(\theta, t)$ denotes the firing rate of the central ring neuron at angle $\theta$ and time $t$, $\tau$ denotes the synaptic time constant of central ring neurons, $\circledast$ denotes the circular convolution operation, $f$ denotes an activation function (chosen as the rectified linear unit (ReLU) in our current study), $I_{\mathrm{ext}}(\theta, t)$ denotes external synaptic inputs to the central ring, and $W_{\mathrm{rec}}$ denotes a rotationally invariant synaptic weight function that describes the recurrent connections (➀ in Fig. 1A) according to the pattern known as local excitation and global inhibition (Fig. 1B). Being a hallmark of CBANs, this recurrent connectivity pattern leads to stabilization of a persistent “bump” of activity within the network [38–40] (Fig. 1C). While the shape of the emergent activity bump is determined by the shape of the recurrent connectivity pattern, the location of the bump can be controlled by external synaptic inputs to the central ring.
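As a concrete illustration of these dynamics, the following sketch simulates a discretized version of Eq. (1) with a cosine-shaped local-excitation/global-inhibition weight profile and a uniform background drive; the weight shape and all parameter values are illustrative assumptions, not parameters from the paper. Starting from pseudo-random firing rates, a single persistent bump emerges, as in Fig. 1C.

```python
import numpy as np

# Discretized central ring: N neurons parameterized by angle theta on [0, 2*pi).
# All parameter values below are illustrative assumptions.
N, tau, dt = 128, 0.01, 0.001                 # neurons, time constant (s), step (s)
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Rotationally invariant weights: cosine-shaped local excitation, global inhibition.
J0, J1, I0 = -2.0, 3.0, 2.0                   # inhibition, excitation, uniform drive
w = J0 + J1 * np.cos(theta)

relu = lambda x: np.maximum(x, 0.0)           # activation function f

def circ_conv(w, r):
    """Circular convolution (W ⊛ r), normalized as a mean over the ring."""
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(r))) / N

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, N)                  # pseudo-random initial firing rates
for _ in range(1000):                         # simulate 1 s of Eq. (1)
    r = r + dt / tau * (-r + relu(circ_conv(w, r) + I0))

# A persistent bump: a contiguous active arc with the rest of the ring silent.
print(r.max(), (r < 1e-6).sum())
```

With these parameters, the uniform state is unstable to the first cosine mode of the weights (J1 > 2), so the network settles into a single bump whose shape is set by the connectivity, while its final location depends only on the initial conditions.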
These external inputs are provided by neurons of the rotation rings and the visual ring. The rotation rings conjoin the self-movement velocity information with the positional information by receiving inputs from two different afferent sources: By changing their firing rates in opposite directions (Fig. 1D), CW and CCW ‘velocity’ neurons signal the animal’s velocity to their respective rotation rings through synaptic weight functions $W_{\mathrm{vel}}$ (➃ in Fig. 1A), whereas the central ring provides the positional information, represented by the bump location, to both rotation rings through synaptic weight functions $W_{\mathrm{cr}}$ (➁ in Fig. 1A). The resulting conjunctive codes of velocity and position within the rotation rings are then transmitted back to the central ring through offset synaptic weight functions $W_{\mathrm{rc}}$ (➂ in Fig. 1A), leading to a shift of the persistent activity bump within the central ring proportional to the animal’s velocity, a process known as path integration (PI). In contrast to the rotation rings, the visual ring does not explicitly receive any inputs; instead, its neurons are presumed to autonomously fire at specific locations of the animal relative to landmarks, capturing the absolute positional information available at each position (modeling how egocentric visual processes can calculate position from landmarks is beyond the scope of this paper). Through the synaptic weight function $W_{\mathrm{vis}}$ (➄ in Fig. 1A), this firing of the visual ring provides to the central ring a bump-like synaptic input encoding the animal’s “true” position relative to landmarks. This bump-like input pulls the activity bump of the central ring, thereby correcting positional errors in the ring attractor’s representation.
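The shift mechanism can be illustrated without simulating the rotation rings explicitly. In the sketch below (our simplification, not the paper’s full three-ring model), the net effect of the push-pull imbalance is approximated by an extra input proportional to the animal’s velocity times the spatial derivative of the bump, which adds excitation “ahead of” the bump and drags it around the ring; the ring parameters are the same illustrative assumptions as before.

```python
import numpy as np

# Illustrative central-ring model as in Eq. (1); parameter values are assumptions.
N, tau, dt = 128, 0.01, 0.001
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dtheta = theta[1] - theta[0]
J0, J1, I0 = -2.0, 3.0, 2.0
w = J0 + J1 * np.cos(theta)
relu = lambda x: np.maximum(x, 0.0)

def circ_conv(w, r):
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(r))) / N

def circ_grad(r):
    return (np.roll(r, -1) - np.roll(r, 1)) / (2.0 * dtheta)  # periodic derivative

r = relu(np.cos(theta - np.pi))               # seed a bump at theta = pi
for _ in range(2000):                         # let the bump settle (no movement)
    r = r + dt / tau * (-r + relu(circ_conv(w, r) + I0))
peak0 = theta[np.argmax(r)]

v, eps = 1.0, 0.005                           # speed and coupling (assumed values)
for _ in range(1000):                         # 1 s of movement in the +theta direction
    asym = -v * circ_grad(r)                  # net push-pull imbalance, prop. to v
    r = r + dt / tau * (-r + relu(circ_conv(w, r) + I0 + eps * asym))
peak1 = theta[np.argmax(r)]

shift = (peak1 - peak0 + np.pi) % (2.0 * np.pi) - np.pi
print(shift)                                  # positive: bump moved with the movement
```

Doubling v doubles the imbalance and hence the bump’s drift speed; the proportionality constant between the animal’s velocity and the drift speed is precisely the PI gain analyzed in Section 3.1.2.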
Before concluding this section, we clarify an important distinction between traditional models of the ring attractor network and our model in the present paper. In traditional models, the synaptic weights $W_{\mathrm{vel}}$ and $W_{\mathrm{rc}}$ are treated as constants, each constrained to take a uniform value across the entire neural space. However, in our model, we intentionally relax this constraint. Instead, we treat these weights as functions that can vary throughout the neural space, possibly taking nonuniform values. While this approach may be less mathematically convenient, it becomes necessary when we later explore the possibility of gain recalibration through plasticity of spatially distributed synapses in the ring attractor network. As will be evident in the subsequent sections, our approach has broader implications, especially for the spatial metric of the ring attractor network. The complete mathematical model of the ring attractor, including the functional synaptic weights and the dynamics of the rotation and visual rings, is given in Appendix 6.1.
3. Algorithmic and Mechanistic Requirements for Gain Recalibration
In this section, we analyze the complex dynamics of our unconstrained ring attractor model to garner insight into how the network’s integration gain, hereafter referred to as the PI gain, can be recalibrated by visual landmarks. Our analysis begins with reduction of complex ring-attractor dynamics into a simple, one-dimensional differential equation model, including an analytical expression of the PI gain, in Section 3.1. Leveraging the analytical tractability of this simple model, we then identify algorithmic conditions for gain recalibration in Section 3.2. Finally, we use the analytical expression of the PI gain to map the algorithmic conditions within the simple model to mechanistic prerequisites for gain recalibration within the high-dimensional, complex model of the ring attractor network, in Section 3.3.
3.1. Dimensionality reduction reveals computational principles of the network
To derive a simple model for how the location of the attractor’s activity bump is controlled by external velocity and visual inputs, we follow the dimensionality reduction protocol described in [41]. Briefly, the protocol exploits the fact that ring-attractor dynamics constrain the population activity to form a bump whose overall shape stays invariant while its center location can vary across the central ring. Although we do not know the exact solution to the network dynamics that can describe this activity pattern, we can ‘guess’ a solution form that describes its general properties without relying on a specific function. This guess, termed an ansatz solution, makes the analysis mathematically tractable by reducing the complex network dynamics to a one-dimensional differential equation that tracks the temporal change in the location of the activity bump as a function of external inputs.
Using this protocol, prior work derived simple models for traditional ring attractors, constraining the synaptic weights $W_{\mathrm{vel}}$ and $W_{\mathrm{rc}}$ to be constant [41, 42]. Here, we extend this approach to our unconstrained ring attractor. We develop the differential equation models progressively, starting from the simplest case, in which the animal is stationary in the absence of landmarks. Next, we add movement inputs and show that the network employs a spatially distributed PI gain. Finally, we extend the model to include landmark-based correction and show that the network combines the spatially distributed integration with landmarks in a computation that resembles a Kalman filter [43]. The resulting model forms the basis for our search for the algorithmic conditions of gain recalibration.
3.1.1. Ansatz solution to network dynamics
When the central ring’s recurrent weight function $W_{\mathrm{rec}}$ is symmetric with local excitation and global inhibition, the firing rates of its neurons converge to a symmetric, persistent activity bump, taking nonzero values in a limited range and featuring a single peak corresponding to the internal representation of the animal’s position (Fig. 1C). Although specific functions such as thresholded Gaussians or sinusoids possess these characteristics and are often employed to explain neurophysiological data [11, 19, 44–47], we do not restrict ourselves to such a specific structure in the present analysis of the ring attractor. Instead, we assume that the firing rates of the central ring neurons can be represented by a general ansatz solution
$$r(\theta, t) \;=\; U\!\big(\theta - \varphi(t)\big) \qquad (2)$$
where $U$ is a function that describes the shape of the aforementioned persistent activity, and $\varphi(t)$ denotes the location of its single peak, corresponding to the internal representation of the animal’s position. This moderate generality allows our analysis to be valid for a broad range of ring attractor models with various particular ansatzes (including, but not limited to, the commonly used ones mentioned above). See Appendix 6.2.1 for a formal description of the properties of the ansatz function $U$.
The persistent activity in the central ring also spreads to the CW and CCW rotation rings through synaptic connections, resulting in the following ansatz solutions to their firing rates $r_{\mathrm{cw}}(\theta, t)$ and $r_{\mathrm{ccw}}(\theta, t)$:
$$r_{i}(\theta, t) \;=\; U_{i}\!\big(\theta - \varphi(t)\big), \qquad i \in \{\mathrm{cw}, \mathrm{ccw}\} \qquad (3)$$
where $U_{\mathrm{cw}}$ and $U_{\mathrm{ccw}}$ denote the assumed forms of the ansatz solutions, $\rho_{\mathrm{cw}}$ and $\rho_{\mathrm{ccw}}$ denote the baseline firing rates of the CW and CCW velocity neurons (during the animal’s immobility), $\beta_{\mathrm{cw}}$ and $\beta_{\mathrm{ccw}}$ denote the absolute values of the slopes of the velocity neurons’ tuning curves (i.e., the absolute change in their firing rates per unit velocity of the animal), and $c_{0}$ denotes global inhibition. With a properly set inhibition $c_{0}$, the solutions take the shape of a bump, similar to the ansatz for the central ring (see Appendix 6.2 for further details).
Collectively, Eqs. (2) and (3) constitute a solution to the entire ring attractor network. That is, for a given velocity of the animal, the firing rates of all neurons in a given network can be computed using the ansatz $U$, which describes the shape of the persistent activity bump within the central ring, and $\varphi(t)$, which denotes the location of the bump. While the exact form of $U$ is determined by the profile of the recurrent weight function $W_{\mathrm{rec}}$, the bump location $\varphi(t)$ is controlled by external inputs to the central ring. Therefore, assuming that the ansatz persists at all times, as in previous work [41, 42], we can reduce the high-dimensional ring-attractor dynamics to a one-dimensional differential equation that models the position representation $\varphi(t)$ as a function of the external inputs. As derived in Appendix 6.2, if the central ring receives balanced (i.e., symmetric) inputs from the rotation rings during the animal’s immobility in the absence of landmarks (a classical assumption in CBAN models [3, 11, 31, 48, 49]), the position representation remains invariant. This invariance can be simply modeled as
$$\frac{d\varphi(t)}{dt} \;=\; 0 \qquad (4)$$
3.1.2. Reduced-order path integration model reveals spatially distributed gain
In the present section, we examine how the position representation $\varphi(t)$, decoded from the bump location, varies as the animal moves on a circular track in the absence of visual landmarks. By triggering differential changes in the firing of the CW and CCW velocity neurons (Fig. 1D), such movement differentially modulates the persistent activity bumps of the CW and CCW rotation rings. Synaptic connections from the rotation rings to the central ring then translate these changes in the activities of the rotation rings into a change in the synaptic inputs to the central ring. As a result, the central ring no longer receives balanced excitation about its activity bump during the animal’s movement. With its magnitude proportional to the animal’s speed, this movement-based imbalance shifts the activity bump across the central ring, instantiating PI.
To obtain a model for the temporal dynamics of the position representation during this process, we apply the dimensionality reduction protocol described in [41]. Under mild assumptions detailed in Appendix 6.2.4, this reduction protocol leads to an ordinary differential equation
$$\frac{d\varphi(t)}{dt} \;=\; g\big(\varphi(t)\big)\, v(t) \qquad (5)$$
showing a linear relationship between the temporal change in $\varphi(t)$ and the animal’s velocity $v(t)$ via a factor $g(\varphi)$. This factor quantifies the ring attractor network’s PI gain, and its analytical expression takes the form
$$g(\varphi) \;=\; \frac{\displaystyle\sum_{i\,\in\,\{\mathrm{cw},\,\mathrm{ccw}\}} \beta_{i} \int_{0}^{2\pi} U'(\theta)\, W_{\mathrm{vel},i}(\varphi + \theta)\, \big(W_{\mathrm{rc},i} \circledast U_{i}\big)(\theta + \Delta_{i})\, d\theta}{\tau \displaystyle\int_{0}^{2\pi} U'(\theta)^{2}\, d\theta} \qquad (6)$$
where $i$ denotes the index of the summation, representing either the CW or CCW rotation ring, $\beta_{i}$ denotes the absolute value of the slope of the velocity neurons’ tuning curves, and $\Delta_{i}$ denotes the value of the offset in the connections between the rotation rings and the central ring. As shown by this equation, the PI gain is a parameter determined by the network’s functional properties (i.e., profiles of the activity bumps and the tuning slopes of the velocity neurons) and structural properties (i.e., the time constant $\tau$ of the neurons and the synaptic weights of the velocity-to-rotation ring and rotation-to-central ring connections), thereby capturing the complex interaction between the network’s internally generated persistent activity and the external inputs from the velocity neurons. A comprehensive examination of how these network properties relate to the PI gain is provided later in Section 3.3 when we explore the mechanisms of gain recalibration.
As mentioned previously, in our model, we do not adopt the assumption of the traditional models that constrains the synaptic weights of the velocity-to-rotation ring and rotation-to-central ring connections ($W_{\mathrm{vel}}$ and $W_{\mathrm{rc}}$, respectively) to be constants, each taking a uniform value in the entire neural space. Rather, we relax this constraint and treat these weights as functions that can vary along the neural space of the attractor network model, a heterogeneity likely to exist in biological networks. This functional treatment of the synaptic weights in our unconstrained model reveals an important characteristic of integration in CBANs: Unlike traditional treatments that implicitly assume a single, spatially global integration gain, CBANs employ a distributed, possibly inhomogeneous, gain factor that can vary as the position representation varies (Equation 6 and top row in Fig. 2B). For the ring attractor network, this implies that, by employing an inhomogeneous PI gain, the network can adjust its spatial resolution locally, which would result in ‘overrepresentation’ or ‘underrepresentation’ of certain locations (bottom row in Fig. 2B), as is seen under various conditions in hippocampal place cell recordings [50, 51].
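To make the consequence of a spatially distributed gain concrete, the sketch below integrates the reduced model of Equation (5) with an assumed sinusoidal gain profile spanning roughly the 0.6–1.4 range illustrated in Fig. 2B; with no landmark feedback, the encoded position drifts away from the true position.

```python
import numpy as np

# Reduced PI model, Eq. (5): dphi/dt = g(phi) * v(t).
# The sinusoidal gain profile is an illustrative assumption (cf. Fig. 2B).
def g(phi):
    return 1.0 + 0.4 * np.sin(phi)            # spatially inhomogeneous PI gain

dt, v = 0.001, 0.5                            # step (s), constant speed (rad/s)
phi, x = 0.0, 0.0                             # encoded and true (unwrapped) angles
for _ in range(14000):                        # 14 s of running, no landmarks
    phi += g(phi) * v * dt                    # forward-Euler update of Eq. (5)
    x += v * dt

print(phi, x)                                 # phi has drifted away from x
```

Within each lap the encoded position alternately runs ahead of and behind the animal, depending on whether the local gain is above or below 1, and the mismatch in lap periods accumulates across laps; only the landmark feedback introduced in Section 3.1.3 can prevent this drift.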
Figure 2:
Models of the ring attractor’s position representation. (A) The position representation $\varphi(t)$, decoded from the peak location of an example ansatz solution to the central ring’s firing rates. (B) Model of path integration. Top left: Circular track with uniformly spaced points. Top right: internal representation of this track with a spatially inhomogeneous path-integration (PI) gain $g(\varphi)$ that ranges between 0.6 and 1.4 across the track. The position representation is visualized here by the pale brown rat. As the rat moves through physical space at velocity $v$, the representation moves through neural space at velocity $g(\varphi)\,v$. Bottom: Firing rate of uniformly distributed cells in the neural space as a function of the animal’s position in physical space. Left shows a ‘traditional’ network model, including a single, global PI gain of 1. Right shows our unconstrained network model with the spatially inhomogeneous PI gain in the top row. (C) Stabilizing visual feedback. Top: The central ring’s activity bump (green) and the bump-shaped synaptic input to the central ring from the visual ring (pink). The activities of both rings are aligned with respect to the same neural space, so in this example, the visual ring bump is “ahead of” the central ring bump. Bottom: The function $F$ captures the stabilizing feedback from visual landmarks. Note that $F$ operates on the difference $\varphi^{*} - \varphi$, so here, the horizontal axis is a dummy variable. (D) Model of path integration with visual feedback. The pale brown rat symbolizes the internal representation of the animal’s position as in (B), while the medium brown rat symbolizes the animal’s actual location as represented by the visual drive. Left: the temporal change in the position representation is visualized by two arrows acting on the pale brown rat, one corresponding to updating by the path integration term $g(\varphi)\,v$ and the other corresponding to updating by the visual feedback term $F(\varphi^{*} - \varphi)$. Note that in this position, the PI gain is “low” and thus PI underestimates position relative to the landmarks. Right: Same as Left but PI overestimates position relative to the landmarks due to “high” PI gain in this position.
3.1.3. Landmark correction to path integration resembles a Kalman filter
Lastly, we examine how the position representation $\varphi(t)$ varies in the most general case, in which the animal moves on the circular track in the presence of landmarks. To derive this one-dimensional model of $\varphi(t)$, we employ the same dimensionality reduction protocol [41], this time taking into account the visual neurons that provide feedback from landmarks. The resulting model will be crucial later when searching for the algorithmic conditions of gain recalibration.
It is known that feedback from landmarks anchors the internal representation of position in a stable manner [22–25]. In classical ring attractor models, this stabilizing feedback is achieved with an allocentrically anchored, bump-like synaptic current applied onto the central ring from the visual ring, encoding the animal’s location relative to the landmarks [52, 53]. We adopt the same approach in our unconstrained ring attractor model and incorporate the following synaptic current from the visual ring onto the central ring:
$$I_{\mathrm{vis}}(\theta, t) \;=\; V\!\big(\theta - \varphi^{*}(t)\big) \qquad (7)$$
where $V$ denotes a function describing the bump shape of this current, and $\varphi^{*}(t)$ denotes the location of the peak current, corresponding to the animal’s current position relative to landmarks. Note that, throughout the paper, we use the superscript $*$ to distinguish a true value of a quantity (measured relative to the external world) from its internal value (e.g., $\varphi^{*}$ vs $\varphi$).
With Equation (7) in mind, applying the dimensionality reduction protocol [41] leads to a differential equation model for how the position representation $\hat{\theta}$ is controlled by PI and the landmarks:
\[ \dot{\hat{\theta}}(t) = g(\hat{\theta})\, v(t) + F\big(\theta^{*}(t) - \hat{\theta}(t)\big) \tag{8} \]
Here, $F$ is a function that, locally, takes the same sign as its argument (Fig. 2C). This sign property, a form of negative feedback, is crucial for stable control of the position representation $\hat{\theta}$; for example, in the absence of movement ($v = 0$), the sign property of $F$ ensures $\hat{\theta} \to \theta^{*}$. In the more general case, when the animal’s velocity is nonzero, the position representation is changed by a combination of the stabilizing visual feedback and the feedforward PI-related term $g(\hat{\theta})\,v$, as given in Equation (8). While the PI-related term shifts the position representation proportionally with the animal’s velocity, the function $F$ acts in an additive manner to bring $\hat{\theta}$ toward $\theta^{*}$, the animal’s true position relative to visual landmarks. This fusion of the feedforward PI-related term and feedback correction from landmarks is illustrated in Fig. 2D. A properly balanced combination stabilizes $\hat{\theta}$ around $\theta^{*}$, with only minor deviations due to PI error. As a result, the ring attractor effectively “estimates” the position of the animal, a computation that bears striking similarity to a Kalman filter in engineering. Here, the state being estimated by the network is the animal’s true position, $\theta^{*}$, and the estimate is the bump location, $\hat{\theta}$.
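To make this fusion concrete, the following sketch integrates the one-dimensional model of Equation (8) with a forward-Euler scheme. The names (`theta_hat` for the bump location, `theta_star` for the position relative to landmarks, `g` for the PI gain) and the tanh-shaped feedback `F` are assumptions of this sketch, not the paper’s exact parameterization:

```python
import math

# Sketch of the reduced one-dimensional model (Eq. 8). F is a tanh-shaped
# landmark feedback with the local sign property (same sign as its
# argument near zero); all parameter values are illustrative.
def F(x, strength=2.0):
    return strength * math.tanh(x)

def simulate(g=0.9, v=0.2, T=50.0, dt=0.01):
    theta_hat, theta_star = 0.0, 0.0
    errs = []
    for _ in range(int(T / dt)):
        theta_star += v * dt                                    # true position
        theta_hat += (g * v + F(theta_star - theta_hat)) * dt   # Eq. (8)
        errs.append(theta_star - theta_hat)
    return errs

errs = simulate()
# With g < 1, PI alone would accumulate error without bound; the feedback
# instead clamps the positional error at a small steady-state offset.
print(round(errs[-1], 3))   # 0.01
```

The small residual error, rather than unbounded drift, is the signature of the Kalman-filter-like fusion described above.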
3.2. Control Theory Reveals Algorithmic Conditions for Gain Recalibration
Experiments showed that the PI gain is a plastic variable that can be recalibrated accurately by visual landmarks [35]. In this section, we seek necessary conditions for this recalibration using Equation (8), the simple model presented in the previous section.
We begin by recalling the experimental conditions that brought about the recalibration of the PI gain [35]: An animal moved on a circular track while an array of visual landmarks was rotated around the track as a function of the animal’s velocity and an experimentally controlled visual gain factor. When the gain was below one, the landmarks moved in the same direction as the animal, decreasing the animal’s speed relative to the landmarks; when the gain was above one, the landmarks moved in the opposite direction to the animal, increasing the animal’s speed relative to the landmarks; when the gain equaled one (veridical condition), the landmarks were stationary. Neural recordings showed that persistent exposure to these visual conditions recalibrated the animal’s PI gain such that a tight correlation was observed between the average value of the PI gain measured over many laps after the landmarks were extinguished and the final value of the visual gain before the landmarks were extinguished. Here, we analyze this recalibration using our simplified ring attractor model.
The experimental conditions leading to the gain recalibration can be simulated in a ring attractor model with a visual synaptic drive revolving around the central ring at a rate equal to the animal’s velocity times the visual gain $g_{\mathrm{vis}}$. Extending Equation (8), the simple model for the position representation $\hat{\theta}$, with an additional equation modeling the changes in the peak location $\theta^{*}$ of such a visual drive, we obtain
\[ \dot{\theta}^{*}(t) = g_{\mathrm{vis}}\, v(t), \qquad \dot{\hat{\theta}}(t) = g(\hat{\theta})\, v(t) + F\big(\theta^{*}(t) - \hat{\theta}(t)\big) \tag{9} \]
where the first equation models the relation between the visual gain $g_{\mathrm{vis}}$, the animal’s velocity $v$, and the resulting temporal change in the visual drive’s bump location $\theta^{*}$, and the second equation models the change in the attractor’s bump location based on the external velocity and visual inputs. Recall that, as we previously showed, the PI gain of a ring attractor network is a spatially distributed parameter that may take different values as the network’s position representation varies during the animal’s movement in the environment. However, in recalibration experiments [35], average neural activity over many laps was used to estimate the average value of the PI gain, providing no information about whether the PI gain took different values across the environment as predicted by the model. Therefore, from a theoretical perspective, the experimental result that the PI gain was recalibrated to the visual gain [35] only suggests that its spatial average converges to the visual gain (i.e., $\bar{g} \to g_{\mathrm{vis}}$), where
\[ \bar{g}(t) = \frac{1}{2\pi} \int_{0}^{2\pi} g(\phi, t)\, d\phi \tag{10} \]
This convergence of $\bar{g}$ toward $g_{\mathrm{vis}}$ necessitates that $\bar{g}$ be updated over time during recalibration. The firing rate of neurons in the network is a biologically plausible candidate for controlling these updates. Therefore, to garner mathematical insight into this process, we searched for a general equation that could represent the updating of $\bar{g}$ based on firing rates within the ring attractor, assuming an environment with spatially homogeneous feedback from visual landmarks. Under mild assumptions described in Appendix 6.3.1, this search led to a surprisingly simple equation
\[ \dot{\bar{g}}(t) = h\big(\bar{g}(t),\, v(t),\, \theta^{*}(t) - \hat{\theta}(t)\big) \tag{11} \]
where $h$ denotes an analytic function that instantiates the instantaneous change in $\bar{g}$ based on three variables: the current gain $\bar{g}$, the animal’s velocity $v$, and the difference $\theta^{*} - \hat{\theta}$ between the visual drive’s position representation and the ring attractor’s position representation, without directly depending on either $\theta^{*}$ or $\hat{\theta}$ alone (since the visual feedback is assumed to be uniform across the environment).
3.2.1. Necessity of matching the sign in gain change with the product of error and velocity
The update rule could be any function fitting the form in Equation (11); however, some such functions may fail to result in gain recalibration, i.e., the PI gain’s spatial average would not converge to the visual gain. What are the necessary properties of the gain update rule for convergence of $\bar{g}$ to $g_{\mathrm{vis}}$?
To seek these properties, we revisit Equation (9), the simple model of the central ring’s and visual ring’s position representations $\hat{\theta}$ and $\theta^{*}$, now taking into account that the PI gain’s spatial average $\bar{g}$ is time varying according to the gain update rule in Equation (11). Perfect convergence of $\bar{g}$ to $g_{\mathrm{vis}}$ through this update rule would imply that the error between these two gains, namely $e_{g} = g_{\mathrm{vis}} - \bar{g}$, approaches zero. When this gain error becomes zero, it is intuitively expected that the error in the attractor’s position representation relative to the visual drive, namely $e_{\theta} = \theta^{*} - \hat{\theta}$, also approaches zero. Hence, analyzing the temporal progression of these error terms, namely the gain error $e_{g}$ and the positional error $e_{\theta}$, provides an opportunity to garner insight into the algorithmic underpinnings of the gain recalibration process.
To this end, let us assume a constant visual gain, implying $\dot{g}_{\mathrm{vis}} = 0$. To track how the PI gain’s spatial average $\bar{g}$ recalibrates to this constant value, we subtract the second row of Equation (9) from its first row and Equation (11) from $\dot{g}_{\mathrm{vis}} = 0$. This yields the so-called error dynamics
\[ \dot{e}_{\theta} = \big(e_{g} - \tilde{g}(\hat{\theta})\big)\, v - F(e_{\theta}), \qquad \dot{e}_{g} = -\,h\big(\bar{g},\, v,\, e_{\theta}\big) \tag{12} \]
where $e_{\theta} = \theta^{*} - \hat{\theta}$ and $e_{g} = g_{\mathrm{vis}} - \bar{g}$ denote the positional and gain errors, respectively, and $\tilde{g}(\hat{\theta}) = g(\hat{\theta}) - \bar{g}$ captures the spatial variation of the PI gain from its spatial average $\bar{g}$. Analyzing these error dynamics with tools from feedback control theory, we then identify the necessary conditions for complete recalibration of $\bar{g}$ to $g_{\mathrm{vis}}$.
First, we find that, if the ring attractor achieves and maintains zero gain error ($e_{g} = 0$), as required for complete recalibration, then it also maintains zero positional error ($e_{\theta} = 0$), rendering the origin $(e_{g}, e_{\theta}) = (0, 0)$ an equilibrium point of the error dynamics. Convergence to this equilibrium point, however, requires that the animal be moving; otherwise, no gain update rule can achieve it. This is an unsurprising result: were the animal stationary, the visual landmarks would correct all the positional error, making the gain error imperceptible to the animal. Assuming accordingly that the animal is always moving, our analysis then reveals a sign requirement that must be satisfied by any gain update rule $h$. For stable convergence of the gain and positional errors to the equilibrium point at the origin, the gain’s spatial average must be updated in the same direction as the product of the animal’s velocity and the ring attractor’s positional error in some neighborhood of the origin:
\[ \operatorname{sign}\big(\dot{\bar{g}}\big) = \operatorname{sign}\big(v\, e_{\theta}\big) \tag{13} \]
See Appendix 6.3.2 for formal statements and proofs of these findings.
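The sign requirement in Equation (13) can be checked numerically for candidate update rules of the form h(g_bar, v, err) from Equation (11). The two rules below are illustrative assumptions of this sketch, not drawn from the paper:

```python
import itertools

# Numeric check of the sign requirement (Eq. 13) for two candidate
# update rules; names and parameter values are assumptions.
def h_linear(g_bar, v, err, k=0.1):
    return k * v * err          # always matches sign(v * err)

def h_bad(g_bar, v, err, k=0.1):
    return k * err              # ignores velocity: fails when v < 0

def satisfies_sign_condition(h, samples):
    sign = lambda x: (x > 0) - (x < 0)
    return all(sign(h(1.0, v, e)) == sign(v * e) for v, e in samples)

samples = list(itertools.product([-0.3, -0.1, 0.1, 0.3], [-0.05, 0.05]))
print(satisfies_sign_condition(h_linear, samples))  # True
print(satisfies_sign_condition(h_bad, samples))     # False
```

The second rule updates the gain purely from the error, so whenever the animal travels in the negative direction it pushes the gain the wrong way, violating Equation (13).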
What if the biological system can be recalibrated only partially? That is, at steady state, the PI gain’s spatial average converges to a value biased toward, but not necessarily equal to, the visual gain. Under the simplifying assumption that the animal’s velocity is constant, we can generalize our findings for complete recalibration to a case that covers both complete and partial recalibration. To this end, we re-analyze the error dynamics (Equation 12). First, we find that convergence of the gain error to some steady-state value $e_{g}^{ss}$ after recalibration results in convergence of the ring attractor’s positional error to a steady-state value $e_{\theta}^{ss}$ as well. If the gain recalibration is complete (i.e., $e_{g}^{ss} = 0$), then this steady-state positional error is zero; if the recalibration is partial (i.e., $e_{g}^{ss} \neq 0$), however, $e_{\theta}^{ss}$ is nonzero, proportional to the product of the steady-state gain error and the animal’s velocity. Our analysis then reveals a generalized sign requirement that must be satisfied by any gain update rule: For gain recalibration, the gain’s spatial average must be updated in the same direction as the product of the animal’s velocity and the deviation of the current positional error relative to its steady-state value, in some neighborhood of the steady state:
\[ \operatorname{sign}\big(\dot{\bar{g}}\big) = \operatorname{sign}\big(v\,(e_{\theta} - e_{\theta}^{ss})\big) \tag{14} \]
This generalized sign requirement captures both complete and partial recalibration; for example, it reduces to Equation (13), the requirement for complete gain recalibration, if the steady-state positional error is zero. See Appendix 6.3.3 for formal statements and proofs of these findings.
3.2.2. Sufficiency of positive gain change with respect to the product of error and velocity
We next analyze whether a gain update rule satisfying Equations (13) and (14), the necessary algorithmic conditions for recalibration, achieves gain recalibration, i.e., is it sufficient?
To address this question, we further analyzed the error dynamics in Equation (12). This analysis reveals a sufficient condition for gain recalibration. According to this condition, a gain update rule $h$ is guaranteed to achieve gain recalibration (whether partial or complete) for any visual gain and a given velocity of the animal if it has a positive slope with respect to the product of velocity and positional error at its zero value (i.e., where $h = 0$), namely,
\[ \left.\frac{\partial h}{\partial (v\, e_{\theta})}\right|_{h = 0} > 0 \tag{15} \]
This sufficient condition is a small extension of the generalized necessary condition for recalibration: it guarantees recalibration if a gain update rule satisfies the generalized sign condition in Equation (14) together with the mild additional requirement that the update rule also has a nonzero derivative with respect to the product of $v$ and $e_{\theta}$. For example, a linear update rule $\dot{\bar{g}} = k\, v\, e_{\theta}$ with $k > 0$ satisfies the sufficient condition, but a cubic rule $\dot{\bar{g}} = k\,(v\, e_{\theta})^{3}$, whose slope vanishes at zero, only satisfies the necessary condition. We now demonstrate how this sufficiency leads to gain recalibration via two example update rules. Although both rules satisfy the sufficient condition, Example 1 achieves complete recalibration, while Example 2 achieves only partial recalibration.
Example 1. The simplest gain update rule that satisfies Equation (15), the slope condition guaranteeing gain recalibration, takes the form
\[ \dot{\bar{g}}(t) = k\, v(t)\, e_{\theta}(t) \]
where $k > 0$ denotes a positive learning rate. Furthermore, the update rule takes the same sign as the product $v\, e_{\theta}$, thus also satisfying the necessary condition for complete recalibration (Fig. 3B).
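A minimal simulation of the error dynamics under Example 1’s linear rule (gain updated in proportion to velocity times positional error) illustrates the complete recalibration shown in Fig. 3B. The parameter values and tanh-shaped feedback are assumptions of this sketch, and the spatial variation of the gain is neglected:

```python
import math

# Sketch: error dynamics of Eq. (12) under the linear update rule of
# Example 1. Notation and parameters (g_vis, g_bar, k, feedback F) are
# illustrative; the spatial variation of the gain is set to zero.
def F(x):
    return 2.0 * math.tanh(x)

def recalibrate(g_vis=0.7, g_bar=1.0, v=0.2, k=2.0, T=400.0, dt=0.01):
    err = 0.0                                # positional error
    for _ in range(int(T / dt)):
        e_g = g_vis - g_bar                  # gain error
        err += (e_g * v - F(err)) * dt       # first row of Eq. (12)
        g_bar += k * v * err * dt            # Example 1: update ∝ v * err
    return g_vis - g_bar, err

e_g_final, e_pos_final = recalibrate()
# Both errors converge toward zero: complete recalibration.
print(abs(e_g_final) < 1e-3, abs(e_pos_final) < 1e-3)   # True True
```

Here the gain starts too high (1.0 vs. a visual gain of 0.7), so the bump initially outruns the visual drive, producing a negative positional error whose product with the velocity drives the gain back down.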
Figure 3:
Numerical simulation of the two example gain update rules. For both simulations, we chose the same initial conditions and parameters. (A) Temporal progression of the smoothed animal’s velocity from an experiment in [35]. (B) Simulated error trajectories under example gain update rule 1. As soon as the animal begins its movement, the positional error (black line relative to the left y axis) quickly increases because of the nonzero gain error (purple line relative to the right y axis). As the animal recalibrates its gain, the gain error gradually converges to zero, accompanied by the positional error gradually converging to zero as well. In addition to these gradual convergent trends, the error trajectories include many fast, transitory changes. As can be seen from the black line, the instantaneous value of the positional error is also correlated with the animal’s velocity. For example, when the animal slows down, the positional error decreases, becoming zero when the velocity is zero. This is a reflection of the relatively increased landmark stabilization when path integration inputs are decreased. On the other hand, the temporal changes in the gain are correlated with the product of the positional error and the animal’s velocity, as determined by the gain update rule. When the animal pauses temporarily around minutes 5 and 20, the positional error is completely corrected by landmarks, causing the gain updates to pause as well. As the animal continues moving, the positional error and the velocity fine-tune the gain until the gain error converges to zero, demonstrating that the system can achieve complete gain recalibration. (C) Simulated error trajectories under example gain update rule 2. The convention is the same as panel B.
The error trajectories exhibit similar trends to those in panel B, except that their final values do not converge to zero, demonstrating that the system can achieve only partial gain recalibration.
Example 2. This example is inspired by the modified ring attractor network that we propose in Section 4 as a model that achieves gain recalibration. Let $k$ denote a positive learning rate as before and $c$ denote a constant. Consider the gain update rule
Since its partial derivative is positive (i.e., ), this update rule results in gain recalibration like Example 1. However, unlike Example 1, the recalibration can be complete or partial, depending on the value of . If , the update rule takes the same sign as the product , thus satisfying the necessary condition for complete recalibration. Otherwise, it only satisfies the necessary condition for partial recalibration by taking the same sign as for (Fig. 3C).
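The following simulation uses a hypothetical stand-in for Example 2’s rule (its exact form derives from the modified network of Section 4): an update with a positive slope in the product of velocity and error but a constant offset c. Under these assumed parameters, the updates stall at a nonzero steady-state error, illustrating partial recalibration as in Fig. 3C:

```python
import math

# Hypothetical stand-in rule: h = k * (v * err + c). The offset c is an
# assumption of this sketch; it preserves the positive slope in v * err
# (Eq. 15) but shifts the point where updates stop.
def F(x):
    return 2.0 * math.tanh(x)

def recalibrate(g_vis=0.7, g_bar=1.0, v=0.2, k=2.0, c=0.004,
                T=400.0, dt=0.01):
    err = 0.0
    for _ in range(int(T / dt)):
        e_g = g_vis - g_bar
        err += (e_g * v - F(err)) * dt
        g_bar += k * (v * err + c) * dt      # offset stalls the updates
    return g_vis - g_bar, err

e_g_final, e_pos_final = recalibrate()
# Updates stop where v * err = -c, so err settles near -c/v = -0.02 and
# a residual gain error of about F(-0.02)/v ≈ -0.2 remains.
print(round(e_pos_final, 3), round(e_g_final, 2))   # -0.02 -0.2
```

The gain thus moves from 1.0 only to about 0.9 rather than all the way to 0.7: recalibration is partial, with a steady-state positional error set by the offset.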
3.3. Mechanistic Constraints Reveal Instrumental Role of Positional Error Codes
What are the mechanistic prerequisites for meeting the necessary algorithmic condition derived in the previous section for gain recalibration? To address this question, we investigate an analytical expression of the PI gain’s spatial average, which can be obtained simply by averaging the PI gain in Equation (6) over the neural space as follows:
| (16) |
The terms in this expression identify a number of possible loci or mechanisms for updating that satisfy the algorithmic requirement for gain recalibration in Equation (13). These terms include: (i) the offset in the central-to-rotation ring connections, (ii) the synaptic time constant , (iii) the slope parameters quantifying the absolute value of the tuning slopes of velocity neurons, (iv) the synaptic weight functions of the velocity-to-rotation ring connections, (v) the synaptic weight functions of the rotation-to-central ring connections, (vi) the function describing the central ring’s persistent activity bump, and (vii) the functions describing solutions to the rotation ring’s persistent activity bump. Note that we treat the CW and CCW components of the same term as an inseparable pair. Out of the seven terms, we consider the last five terms (iii-vii) as candidates driving the gain recalibration within the ring attractor model via temporal changes, implicitly assuming that the first two terms, the offset and the synaptic time constant, are “hardwired” (i.e., time-invariant).
The rationale behind excluding the first two terms arises, in part, from limitations of our modeling framework. First, the rate-based approach, upon which we describe the network dynamics, does not include any cellular or receptor details to capture possible temporal changes in the synaptic time constant. Instead, our model includes the time constant as a “lumped parameter” reduction of complex phenomena that govern the changes in membrane potential with ion flux through receptors; future work could use chemical kinetics modeling to investigate how changes in this time constant could contribute to gain recalibration, but that is beyond the scope of the present study. Second, our model employs a simplified one-to-one connectivity between the rotation rings and the central ring such that one neuron in a rotation ring connects to only one neuron in the central ring with a fixed offset, rather than the one-to-all connectivity required to capture plasticity in the offset through gradual modulation of weights along the neural space. Therefore, excluding these two terms from further consideration, we focused our analysis on the remaining five terms as the drivers of gain recalibration.
By analyzing the relation between the temporal change in each of these candidate terms and the resulting temporal change in , we find that rate-based encoding of the positional error is required to satisfy the necessary algorithmic condition for the gain recalibration regardless of which term drives the changes in PI gain. However, the specific nature of the error code depends on the driver term (Fig. 4A) as demonstrated in the next subsections.
Figure 4:
Visualization of mechanistic constraints for a numerical simulation based on a hypothetical gain update rule. Except for the animal’s velocity profile, we chose the parameters and initial conditions the same as Fig. 3B. For color coding, we use Fig. 1A as the reference, where red and blue denote the CW and CCW rotation rings, and green denotes the central ring. (A) Top graph shows the simulated velocity of the animal (purple line) on the right y axis and the temporal progression of the positional error (black line) on the left y axis. Notice the synchronous fluctuations in the positional error and the animal’s velocity. As explained in Fig. 3B, these synchronous fluctuations occur because the positional error is correlated with the animal’s velocity. Bottom graph shows the PI and visual gains with solid and dashed purple lines, respectively, on the right y axis and the time-integral of the positional error with the black line on the left y axis. Notice that as the PI gain gradually converges to the visual gain, the temporal progression of the time-integral of the positional error follows a very similar trajectory. This similarity indicates that the integration gain reflects the past accumulation of positional representation errors, thus opening up the possibility for the network to track the time-integral of the error as a proxy signal to encode the integration gain. (B) The mechanistic constraint for recalibration through plasticity of the velocity-to-rotation ring connections. Top graph shows the mean firing rates of the CCW and CW rotation rings over time with blue and red lines. Notice that they are similar to the trajectory of the positional error in panel A, except that the changes in the CW rotation ring’s mean firing rate (red line) are the negative of those in the CCW rotation ring’s mean firing rate. Bottom graph shows the direct relationship between mean firing rates and positional error in the attractor’s representation.
(C) The mechanistic constraint for recalibration through plasticity of the rotation-to-central ring connections. Top graph shows the mean firing rates of either the rotation rings or the central ring over time with the orange line. Notice that the changes in these firing rates follow a similar trend as the temporal progression of the positional error. Bottom graph shows this relationship directly (the positive correlation is chosen arbitrarily as our analysis does not provide a conclusive insight into the required direction). (D) The mechanistic constraint for recalibration through changes in the velocity neurons’ slopes. Top graph shows the mean firing rate of the CCW and CW rotation rings, the same quantities as panel B. However, unlike panel B where the mean firing rates were similar to the instantaneous positional error, the mean firing rates in this panel are similar to the time-integral of the error. Bottom graph shows this relationship between the mean firing rates and the time-integral of the error directly. (E) The mechanistic constraint for recalibration through changes in the rotation rings’ activity bumps. Top graph shows the bump width of both rotation rings over time. Similar to how the mean firing rates of the rotation rings encode the time-integral of the positional error in panel D, the bump widths encode the time-integral of the error in this panel. Bottom graph shows this relationship directly. (F) The mechanistic constraint for recalibration through changes in the central ring’s activity bump. Top graph shows the temporal progression of the mean firing rate of the central ring, which is tightly but negatively correlated with the temporal progression of the time-integral of the positional error. Bottom graph shows this relationship directly.
3.3.1. Plasticity in the velocity pathway requires a rate code of the instantaneous error
We first consider the scenario that the temporal change in the PI gain is driven by temporal changes in either set of the synaptic weights along the pathway from velocity neurons to the central ring. These sets include the pair , describing the strength of velocity-to-rotation ring connections (➃ in Fig. 1A), and the pair , describing the strength of rotation-to-central ring connections (➂ in Fig. 1A).
According to Equation (16), the CW and CCW components of these weight pairs additively influence the PI gain . This additive influence suggests a possibility where CW and CCW components vary independently of one another during recalibration to tune the PI gain’s spatial average . However, this possibility is limited by the requirement that the central ring must receive balanced (i.e., symmetric) inputs from the CW and CCW rotation rings to keep the activity bump stationary during the animal’s immobility as we show in Appendix 6.4. Thus, we assume hereafter that the CW and CCW components vary in a coordinated fashion to ensure that their individual contribution to is symmetric. This symmetry assumption implies that if the overall value of changes as per the necessary algorithmic condition in (13), then the individual contribution to this change from both CW and CCW components must be in the direction of the product of the animal’s velocity and the positional error . To identify the mechanistic underpinnings of such symmetric gain recalibration, we revisit Equation (16). By differentiating this equation with respect to time and considering Hebbian plasticity as the mechanism underlying the changes in the weight pairs or , we find that the algorithmic condition translates into a mechanistic constraint as follows:
Hebbian plasticity of the velocity-to-rotation ring connections :
A change in the weights leads to a commensurate change in the speed at which the network’s activity bump is shifted along the ring for a given speed of the animal. These commensurate changes suggest a positively correlated relationship between the weights and the PI gain. Assuming the symmetry between CW and CCW components, we indeed show in Appendix 6.4.1 that Equation (16), relating the weights to the PI gain via positively-weighted integrals, can be reformulated as
\[ \bar{g} \;\propto\; \bar{w}_{cw} \;\propto\; \bar{w}_{ccw} \]
where $\bar{w}_{cw}$ and $\bar{w}_{ccw}$ denote the average strengths of the CW and CCW velocity-to-rotation ring synapses, and the symbol $\propto$ denotes the existence of a positively-sloped, proportional relationship. Because of these positive correlations, satisfying the algorithmic condition for recalibration implies that the average strength of both CW and CCW velocity-to-rotation ring synapses is modified in the direction of the product of the animal’s velocity and the network’s positional error, namely,
\[ \operatorname{sign}\big(\dot{\bar{w}}_{cw}\big) = \operatorname{sign}\big(\dot{\bar{w}}_{ccw}\big) = \operatorname{sign}\big(v\, e_{\theta}\big) \tag{17} \]
Here, the dots appearing over the weights denote the temporal change in the weight. Recall that Hebbian plasticity of a synapse is driven by the joint activity of pre- and post-synaptic neurons. In the specific case of velocity-to-rotation ring synapses, the pre-synaptic side is composed of the velocity neurons, designated to solely encode the animal’s velocity with tuning curves that have a negative slope for the CW velocity neuron and a positive slope for the CCW velocity neuron (previously shown in Fig. 1D). Therefore, in a manner matching these differential signs of -encoding on the presynaptic side, the CW and CCW rotation rings on the post-synaptic side must monotonically decrease and increase their average firing rates with the instantaneous positional error to satisfy the equality in Equation (17) (Fig. 4B). Mathematical details are provided in Appendix 6.4.1.
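A small numeric sketch illustrates why this works: if the presynaptic velocity neurons have opposite tuning slopes and the postsynaptic rotation rings carry an error code of opposite signs, a covariance-style Hebbian update moves both average weights in the direction of the product of velocity and error. All tuning functions, baselines, and gains below are illustrative assumptions, not the paper’s exact model:

```python
# Sketch of Hebbian plasticity in the velocity-to-rotation pathway,
# showing why a rate code of the instantaneous positional error is
# needed on the postsynaptic side.

def velocity_rates(v, r0=10.0, slope=20.0):
    """Presynaptic velocity neurons: CW negative slope, CCW positive."""
    return r0 - slope * v, r0 + slope * v            # (cw, ccw)

def rotation_rates(err, r0=10.0, err_gain=30.0):
    """Postsynaptic mean rates: CW decreases, CCW increases with the
    error (velocity modulation omitted for clarity; cf. Fig. 4B)."""
    return r0 - err_gain * err, r0 + err_gain * err  # (cw, ccw)

def hebbian_dw(v, err, lr=1e-4, r0=10.0):
    """Covariance-style Hebb: (pre - baseline) * (post - baseline)."""
    pre_cw, pre_ccw = velocity_rates(v)
    post_cw, post_ccw = rotation_rates(err)
    return (lr * (pre_cw - r0) * (post_cw - r0),
            lr * (pre_ccw - r0) * (post_ccw - r0))

# Both CW and CCW weight changes share the sign of v * err (Eq. 17).
sign = lambda x: (x > 0) - (x < 0)
checks = [sign(dw) == sign(v * err)
          for v in (-0.3, 0.3) for err in (-0.05, 0.05)
          for dw in hebbian_dw(v, err)]
print(all(checks))  # True
```

The opposite presynaptic slopes and opposite postsynaptic error codes multiply to give the same signed update on both sides, preserving the CW/CCW symmetry required to keep the bump stationary during immobility.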
Hebbian plasticity of the rotation-to-central ring connections ():
Like the velocity-to-rotation ring connections discussed above, the rotation-to-central ring synaptic weight functions enter linearly in the calculation of the PI gain in Equation (16). Therefore, like the previous case, the algorithmic condition for recalibration requires Hebbian plasticity to modify their average strength in the direction of the product of the animal’s velocity and the network’s positional error, namely,
\[ \operatorname{sign}\big(\dot{\bar{W}}_{cw}\big) = \operatorname{sign}\big(\dot{\bar{W}}_{ccw}\big) = \operatorname{sign}\big(v\, e_{\theta}\big) \tag{18} \]
where $\bar{W}_{cw}$ and $\bar{W}_{ccw}$ denote the average strengths of the CW and CCW rotation-to-central ring synapses.
In the previous case, the mechanistic prerequisite for meeting a similar sign requirement was error encoding on the postsynaptic side, since the presynaptic neurons were assumed to solely encode the velocity. In the present case, however, it is feasible to encode the error on either the pre- or postsynaptic side, since neither side is subject to such a limitation. Therefore, when the animal is traveling in one direction (as was the case in the experiments that originally demonstrated the gain recalibration [35]), satisfying the equality in Equation (18) requires that the mean firing rate of either the rotation rings or the central ring vary monotonically with the network’s instantaneous positional error (Fig. 4C). However, unlike the previous case, our analysis of the present case does not provide conclusive information about the direction of these monotonic relations. Mathematical details are provided in Appendix 6.4.2.
Collectively, these findings show that Hebbian plasticity in the pathway carrying the external velocity information to the central ring requires a rate code of the network’s instantaneous positional error to update the synaptic weights in the direction of the product of the animal’s velocity and the error as per the algorithmic condition for gain recalibration in Equation (13).
3.3.2. Plasticity elsewhere requires a rate code of the time-integral of the error
We next consider the scenario that the synaptic weights along the pathway from velocity neurons to the central ring are hardwired (i.e., constant). This scenario implies that the gain recalibration is instead driven by temporal changes in one of the three firing-rate related terms, including the slope parameters quantifying the absolute value of the slopes of the CW, CCW velocity neurons’ tuning curves, the ansatz functions describing the persistent activity bumps of the rotation rings, or the ansatz of the persistent activity bump of the central ring. Independent of which of these terms undergoes temporal changes, the algorithmic condition for gain recalibration translates into a mechanistic constraint that, while still a rate code of error, differs in its fundamental characteristics as detailed below:
Changes in the slopes of velocity neurons’ tuning curves ():
As shown previously in Fig. 1D, the CW and CCW velocity neurons are tuned to the animal’s velocity with slopes of opposite signs. Assuming these differential signs to be hardwired (i.e., constant), we examine in the present case that the absolute values of the slopes, denoted by the parameters $\beta_{cw}$ and $\beta_{ccw}$, undergo temporal changes. A change in these parameters leads to a commensurate change in the speed at which the network’s activity bump is shifted along the central ring for a given speed of the animal. As in the previous section, this implies a positively correlated relationship between the slope parameters $\beta_{cw}$ and $\beta_{ccw}$ and the PI gain. Indeed, this relationship is explicitly seen in Equation (16). Thus, as in the previous section, satisfying the algorithmic condition for recalibration (Equation (13)) requires $\beta_{cw}$ and $\beta_{ccw}$ to change in the direction of the product of the animal’s velocity and the network’s positional error, namely,
\[ \operatorname{sign}\big(\dot{\beta}_{cw}\big) = \operatorname{sign}\big(\dot{\beta}_{ccw}\big) = \operatorname{sign}\big(v\, e_{\theta}\big) \tag{19} \]
An implication of this requirement is that, when the animal is traveling in one direction, the change in the slope parameters is monotonically related to the positional error, reflecting its value on a moment-to-moment basis with a sign additionally depending on the sign of the velocity. When these changes are integrated over time, the current value of the slope parameters reflects the accumulation of positional errors through the past, depending monotonically on the time-integral of the positional error in a direction that depends on the animal’s velocity. Through connections between the velocity neurons and the rotation rings, these monotonic relationships are also translated to the mean firing rate of rotation rings. The direction of the monotonic relationships between the time-integral of the error and the mean firing rate of the rotation rings, however, depends additionally on the sign of the velocity neurons’ tuning slopes. For example, when the animal is traveling in one direction (say CCW) like the original recalibration experiments [35], the mean firing rate of the CCW rotation ring increases monotonically with the time-integral of the error due to the positive slope of the CCW velocity neuron. Conversely, the mean firing rate of the CW rotation ring decreases monotonically with the time-integral of the error due to the negative slope of the CW velocity neuron (Fig. 4D). If the animal travels in the other direction, the direction of these monotonic relationships is also reversed. Mathematical details are provided in Appendix 6.4.3.
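The accumulation argument above can be sketched directly: if a slope parameter is updated moment to moment in proportion to the product of a constant velocity and the error, its running value is exactly proportional to the time-integral of the error. The synthetic error trace and all names below are assumptions of this sketch:

```python
import numpy as np

# Sketch: a slope parameter updated in proportion to v * err (Eq. 19)
# tracks the time-integral of the error for one-directional travel at
# constant velocity. The error trace is synthetic.
rng = np.random.default_rng(0)
dt, v, lr = 0.01, 0.25, 0.5
err = np.cumsum(rng.normal(0.0, 0.01, 5000))   # synthetic error trace

beta = 1.0
betas = []
for e in err:
    beta += lr * v * e * dt                    # slope update ∝ v * err
    betas.append(beta)

# The slope's deviation from its initial value is proportional to the
# running time-integral of the error (correlation 1 for constant v).
integral = np.cumsum(err) * dt
corr = np.corrcoef(np.array(betas) - 1.0, integral)[0, 1]
print(round(corr, 6))   # 1.0
```

For a constant velocity the proportionality is exact; with a varying velocity the relationship would be a velocity-weighted integral, consistent with the sign reversals described in the text.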
Changes in the persistent activity bump of the rotation rings :
Velocity information is transmitted to the central ring by the rotation rings whose firing rates are modulated by the animal’s velocity. In this transmission, an increase in the number of actively firing rotation ring neurons (i.e., larger width of the bumps ) would result in a commensurate increase in the number of central ring neurons that receive the velocity information. As a result, the network’s activity bump shifts faster along the central ring even when the animal’s velocity is unchanged. This implies a positively correlated relationship between the widths of the rotation rings’ activity bumps and the PI gain , reminiscent of the relationship in the previously investigated case of . Thus, like the previous case, satisfying the algorithmic condition requires the rotation rings’ activity widths to change monotonically with the product of the animal’s velocity and the positional error. Consequently, when the animal is traveling in one direction (say positive), the widths of the rotation rings’ activity bumps must increase monotonically with the time-integral of the error (Fig. 4E). If the animal’s travel direction is negative, this relationship turns into a monotonically decreasing function. Mathematical details are provided in Appendix 6.4.4.
Changes in the persistent activity bump of the central ring :
Consider as an example two networks with the same Gaussian bump profile, one of which has a higher peak firing rate. In this case, all else being equal, the network with the higher firing rate requires stronger velocity inputs to shift its activity bump from point A to point B in the same time as the other network. This need for stronger velocity inputs indicates an inversely correlated relationship between the magnitude of the central ring’s activity bump and the PI gain, which can also be verified from Equation (16), wherein the denominator includes a term proportional to the bump magnitude: the squared norm of the activity bump’s gradient. Thus, satisfying the algorithmic condition for gain recalibration is subject to a mechanistic constraint that is similar in spirit to the previous case but slightly different due to the inverse effect: When the animal is traveling in one direction (say positive), the mean firing rate of the central ring must decrease monotonically with the time-integral of the error (Fig. 4F). If the animal’s travel direction is negative, the direction of this monotonic relationship is reversed. Note that, for deriving this result, we assume the general shape of the central ring’s activity bump to be invariant, unlike its magnitude. Mathematical details are provided in Appendix 6.4.5.
Collectively, these findings show that gain recalibration might be possible without synaptic plasticity in the pathway carrying the external velocity information to the central ring, provided that there is a rate code of the time-integral of the positional error as opposed to the error itself. Our analysis does not provide any insights into the mechanisms necessary for such a rate code and the temporal changes in the terms associated with it (e.g., slopes of velocity neurons). However, it may be the case that plasticity somewhere other than the velocity pathway is required. Independent of the underlying mechanism, however, an absence of plasticity in the velocity pathway would lead to the conclusion that the PI gain is no longer encoded in the synaptic weights: instead, it is encoded in the firing rates that track the time-integral of the error, a proxy of the PI gain (bottom row in Fig. 4A).
4. Implementing Gain Recalibration in a Ring Attractor
In this section, we propose a modified ring attractor model that can achieve gain recalibration through a mechanism devised based on the theoretical insights we have garnered so far. Briefly, the model relies on synaptic plasticity in the velocity-to-rotation ring connections and its mechanistic prerequisite (a rate code for the positional error instantiated in the rotation rings as shown in Fig. 4B) to achieve gain recalibration. We chose this mechanism, instead of other theoretical candidate mechanisms examined in the previous section, partly because we found it relatively easy to implement compared to others, but we conjecture that some of the other candidate mechanisms also have biologically feasible implementations. Thus, our model should be viewed as an example that demonstrates the effectiveness of our theoretical analysis rather than the only network model that can achieve gain recalibration. In addition to gain recalibration, the model can also reproduce two other important aspects of biological path integration: (1) flexible association of visual landmarks to different positions in the neural space and (2) correction of accumulated PI errors by visual landmarks. In the next subsections, we describe the model in detail and explain the mechanisms by which it achieves these aspects.
4.1. A connectivity pattern yielding a rate code for error
We begin by proposing a connectivity pattern that causes a neuron population to vary its firing rate monotonically with the difference in the bump locations of two other populations. As will be evident, this connectivity pattern plays a crucial role by providing the means to achieve a rate code for the positional error, with the error being the difference in the bump locations of the visual drive and the attractor activity.
The connectivity pattern can be described by considering three distinct neuron populations (X1–3 in Fig. 5A), each arranged on a circle like the populations in the ring attractor network. Suppose that populations X1 and X2 consist of excitatory and inhibitory neurons, respectively, and maintain their own activity bumps. The population X3 derives its activity based on the inputs from X1 and X2. The excitatory inputs from X1 to X3 are routed through topographic connections that wire together the neurons at the same angular location in the neural space. The inhibitory inputs from X2 to X3, however, are routed through CCW offset connections that wire together the neurons at different angular locations. To understand how this connectivity causes X3 to vary its firing rate as a monotonic function of the difference in the bump locations of X1 and X2, we can track the flow of neural activity as follows:
Figure 5:
A modified ring attractor network model. (A) A proposed connectivity pattern that varies the firing rate of population X3 as a monotonic function of the difference in the activity-bump locations of populations X1 and X2. Arrow and circle terminals denote excitatory and inhibitory connections, respectively. (B) Schematic diagrams depicting the computation within the proposed connectivity in panel (A). The first row shows the activity of the X2 population (yellow) relative to the activity of the X1 population (green) for different conditions: CW difference (left column), no difference (middle column), and CCW difference (right column). The second row shows the synaptic inputs to the X3 population from the excitatory X1 (green line) and inhibitory X2 (yellow line) populations. The fourth row shows the resulting firing rates of the X3 population. (C) Schematic representation of the model. Solid and dashed lines denote hardwired and plastic connections, respectively. The labels M1, M2, and M3 correspond to the three modifications made to the classical ring attractor: M1 is the association ring, M2 refers to the plasticity of the velocity-to-rotation-ring connections, and M3 refers to the hardwired, offset association-to-rotation connections. (D) Numerical simulation demonstrating the association of the visual ring’s activity with the central ring’s activity through Hebbian plasticity. Top and bottom rows show the initial and final values of the simulated variables. The left column shows the firing rates of the central (green) and visual (pink) rings. The middle column visualizes the weight matrix describing the visual-to-association ring connections. The right column shows the firing rates of the association ring (yellow) and the synaptic inputs from the visual ring (pink). (E) Tuning curves depicting the relationships between the rotation rings’ mean and peak firing rates vs. the error (left and right graphs, respectively). The color coding is the same as in panel C.
(F) Numerical simulations of the gain recalibration within the proposed model. The top shows the recalibration of the PI gain (green) toward the visual gain (pink) in a selected simulation. The middle shows the progression of the weights of the velocity-to-rotation ring connections (four samples normalized to their initial values, with opacity from the lightest to the darkest green indicating chronological order). The bottom shows the final values of the PI gain for various visual gains (green line) and the hypothetical perfect recalibration (dashed black line). (G) Numerical simulation demonstrating how visual landmarks correct positional error. The top panel shows the progression of the bump locations of the visual (pink) and central rings (green). The bottom panel shows the mean firing rate of the CW (red) and CCW (blue) rotation rings over time.
Let θ1 and θ2 denote the locations of X1’s and X2’s activity bumps on the circular neural space, respectively. When θ1 = θ2, these activities are aligned, but the synaptic inputs to the population X3 from X1 and X2 are misaligned due to the CCW offset in X2-to-X3 connections (the second column of Fig. 5B). When θ1 ≠ θ2, however, X1’s and X2’s activity bumps are misaligned with a CW or CCW difference in their locations. Consider first the CW-difference case (the first column of Fig. 5B). In this case, the bump location of X2 is shifted in the CW direction compared to that of X1. Because of the CCW offset in the X2-to-X3 connections, this misalignment between the activity bumps decreases at the level of synaptic inputs, bringing the inhibition from X2 closer to the active neurons of X3, thereby decreasing the firing rate of X3 compared to the no-difference case. Consider next the CCW-difference case (the third column of Fig. 5B). In this case, the CCW offset in the X2-to-X3 connections redirects the inhibition from X2 further away from the active neurons of X3, thus increasing the firing rate of X3. Through this mechanism, the proposed connectivity pattern causes population X3 to vary its firing rate as a monotonically increasing function of the difference in the bump locations of X1 and X2. The direction of this monotonic relationship can be easily reversed if the offset in the X2-to-X3 connections is reversed from CCW to CW.
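This mechanism can be illustrated with a small numerical sketch (not the simulation code behind Fig. 5; the ring size, bump width, offset of 0.8 rad, and inhibition scale are all arbitrary choices): rectified X3 rates are computed from a topographic excitatory projection and a CCW-offset inhibitory projection, and the mean X3 rate is compared across CW, aligned, and CCW bump differences.

```python
import numpy as np

N = 100                                          # neurons per ring (assumed)
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

def bump(center, width=0.5):
    """Gaussian activity bump on the circular neural space."""
    d = np.angle(np.exp(1j * (theta - center)))  # wrapped angular distance
    return np.exp(-d**2 / (2 * width**2))

def ring_weights(offset, width=0.5):
    """Connects a presynaptic neuron at angle a to postsynaptic neurons
    near angle a + offset (offset = 0 gives topographic wiring)."""
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :] - offset)))
    return np.exp(-d**2 / (2 * width**2)) / N

W_exc = ring_weights(0.0)                        # topographic X1 -> X3 (excitatory)
W_inh = ring_weights(0.8)                        # CCW-offset X2 -> X3 (inhibitory)

def x3_mean_rate(diff):
    """Mean X3 rate when X2's bump is shifted by `diff` (CCW positive)
    relative to X1's bump."""
    drive = W_exc @ bump(0.0) - 2.0 * (W_inh @ bump(diff))
    return np.maximum(0.0, drive).mean()         # rectified firing rates

# CW difference < no difference < CCW difference, as in Fig. 5B.
r_cw, r_0, r_ccw = x3_mean_rate(-0.5), x3_mean_rate(0.0), x3_mean_rate(0.5)
```

Reversing the sign of the offset in `ring_weights` reverses the direction of the monotonic relationship, as noted above.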
How can we make use of this connectivity pattern in our modified ring attractor network that will rely on plastic velocity-to-rotation ring connections for gain recalibration? The connectivity lends itself naturally to this recalibration mechanism, as it requires the rotation rings (X3) to vary their firing rates monotonically with the positional error, a quantity equal to the difference in the bump locations of the central ring (X1) and the visual drive (X2). Despite this suitability, however, employing the connectivity pattern in the ring attractor network requires an additional modification for a reason which will be clear in the next section.
4.2. Flexible Association of Landmarks to Positions through Hebbian Plasticity
In traditional ring attractor models, feedback from landmarks is incorporated into the network via direct synaptic connections from the visual ring [20, 46, 54]. In these connections, synaptic weights between coactive neuron pairs encoding the same position of the animal are potentiated through Hebbian plasticity, resulting in a flexible associative mapping between the visual ring and the attractor network [20, 55].
However, this approach, relying on direct plastic connections from the visual ring onto the attractor network, is incompatible with the connectivity pattern proposed in the previous section for the error-rate code. That is, the error code’s connectivity pattern requires neurons representing different positions to be wired together with an offset, whereas Hebbian plasticity wires together neurons representing the same position. To resolve this incompatibility, our modified ring attractor network model makes a small change to the approach in traditional models by placing the plastic associative mapping outside the network, which makes it possible to include the error code’s connectivity pattern inside the network.
The modification is as follows: We first remove the visual-to-central ring connections (➄ in Fig. 1A) and introduce an intermediate ring of neurons, which we call an association ring, that associates the activity in the visual ring with that in the central ring by receiving inputs from both (see M1 in Fig. 5C). The afferent connections to this ring from the central ring are hardwired and weak, while those from the visual ring are plastic, hence capable of becoming strong. In a novel environment, where the plastic visual connections are initially untuned and random, the spatial selectivity of the visual ring’s activity is not conveyed in its synaptic inputs to the association ring. In contrast, inputs from the central ring to the association ring always preserve the spatial selectivity of the central ring’s activity because of the hardwired, topographic connections. This combination of malleable inputs from the visual ring and weak but hardwired inputs from the central ring biases the visual-to-association ring connections near the central ring’s activity bump to be selectively potentiated through Hebbian plasticity (top row in Fig. 5D). Eventually, synaptic inputs from the visual ring become sufficiently strong and aligned with the inputs from the central ring, making the association ring’s activity strongly visually driven such that it implicitly represents a flexible associative mapping between the representations in the visual ring and the attractor network (bottom row in Fig. 5D).
Thus, by serving as an intermediary, the association ring promises to act in our modified ring attractor network model as a proxy visual drive that can circumvent the previously noted incompatibility issues with the error-rate code’s connectivity pattern. In the next section, we describe how the association ring can be combined with this connectivity pattern to implement gain recalibration.
4.3. Gain Recalibration by Landmarks through Hebbian Plasticity
As noted earlier, our ring attractor network model relies on plasticity in the velocity-to-rotation ring connections for gain recalibration (M2 in Fig. 5C). However, this recalibration mechanism requires as its mechanistic prerequisite (described in Section 3.3) that CCW and CW rotation rings respectively increase and decrease their firing rates with the positional error in the central ring’s representation relative to the visual landmarks. To obtain these error-rate codes, we make use of the visual information in the association ring by connecting it to the CCW and CW rotation rings with CCW and CW offset, respectively (M3 in Fig. 5C). Combined with the topographic central-to-rotation ring connections, these offset connections from the association ring implement the connectivity pattern described in Section 4.1, thereby achieving the required error-rate codes (Fig. 5E). With these error-rate codes in the rotation rings and the plasticity in the velocity-to-rotation ring connection, our ring attractor model now includes all the necessary ingredients for gain recalibration.
To test if these ingredients are also sufficient to achieve gain recalibration, we first take an analytical approach based on Equation (15), which states a sufficient condition for gain recalibration. According to this condition, gain recalibration is guaranteed if the temporal change in the gain’s spatial average has a positive slope with respect to the product of the animal’s velocity and the attractor’s positional error. As discussed in Section 3.3.1, the gain’s spatial average is positively correlated with the average strength of velocity-to-rotation ring connections. Therefore, the sufficient condition can be rephrased as follows: gain recalibration is guaranteed if the temporal change in the average strength of velocity-to-rotation ring connections has a positive slope with respect to the product of the velocity and the error. The presynaptic side of these plastic connections has velocity neurons encoding the animal’s velocity, while the postsynaptic side has the rotation ring neurons encoding the positional error. Since both of these rate codes have the same slopes on the CW and CCW parts of the network (negative on the CW part, positive on the CCW part), their correlated activity, as the driver of Hebbian plasticity, modifies the strength of velocity-to-rotation ring connections with a positive slope relative to the product of the velocity and the error, satisfying the sufficient condition in (15). However, as in Example 2 in Section 3.2.2, gain recalibration is expected to be imperfect due to imperfect encoding of the positional error in the rotation rings, whose firing rates are additionally modulated by the animal’s velocity. As a result, our model is expected to recalibrate its average PI gain to steady-state values that are close to, but not the same as, the visual gain.
During this recalibration, the model is expected to go through a transitory stage with a spatially non-homogeneous PI gain, as Hebbian plasticity modifies the spatially distributed velocity-to-rotation ring connections non-uniformly because of the non-uniform firing rates of the rotation rings’ neurons across the neural space at any moment in time. However, as the animal runs in the environment, these non-uniform effects are washed out, resulting in a spatially homogeneous PI gain. Mathematical details of these analytical findings are given in Appendix 6.5.
We next verify these analytical findings by performing numerical simulations of our modified ring attractor model for a simulated rat running on a circular track while visual landmarks were moved according to the visual gain. The model demonstrated imperfect yet stable gain recalibration for a range of visual-gain values (Fig. 5F).
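The logic of this recalibration can be condensed into a one-dimensional sketch (an illustrative reduction, not the full network simulation; the correction gain, learning rate, and running speed are arbitrary): the PI gain is proxied by a scalar velocity-to-rotation weight, the positional error is rate-coded exactly, and a Hebbian-style update moves the weight with the product of velocity and error, as required by the sufficient condition.

```python
dt, eta, k = 0.01, 0.5, 5.0   # time step, learning rate, error-correction gain
g_vis = 0.8                   # experimentally imposed visual gain
w = 1.0                       # average velocity-to-rotation weight (PI gain ~ w)
x_net = x_true = 0.0          # bump position and the animal's true position

for _ in range(20000):
    v = 1.0                               # constant running speed
    x_true += v * dt                      # animal's position on the track
    x_vis = g_vis * x_true                # landmark-defined position
    x_net += w * v * dt                   # path integration with gain w
    err = x_vis - x_net                   # rate-coded positional error
    x_net += k * err * dt                 # error correction (virtual velocity)
    w += eta * v * err * dt               # Hebbian: pre (v) x post (err) activity
```

Because the error code here is exact (not additionally modulated by velocity as in the full model), this sketch recalibrates perfectly to the visual gain; the full network’s recalibration is imperfect for the reasons given above.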
4.4. Correction of Positional Errors by Landmarks through a Rate Code of Error
Next, we test if visual landmarks can correct PI errors in our model. The PI error manifests itself as a misalignment between the peak locations of the path-integration driven activity bump in the central ring and the strongly visually driven activity bump in the association ring. Traditional models correct for this misalignment by providing visual drive directly onto the central ring (as in Fig. 1A), toward which its activity bump gravitates by means of attractor dynamics. In our model, however, we have removed such direct connections in Section 4.2 because of their incompatibility with the connectivity pattern needed for the error-rate codes. Instead, we have connected the association ring to the rotation rings with some offset. The question then arises: are these offset connections sufficient for PI error correction or do we need to reinstate direct connections onto the central ring from the visual drive?
As explained in the previous section, the offset connections cause the CCW and CW rotation rings to increase and decrease their firing rates monotonically with the positional error, respectively. This differential modulation of the rotation rings’ firing rates by the error is similar to their differential modulation by the velocity; when the animal is moving, the firing rate of one rotation ring increases and that of the other decreases, which in turn shifts the activity bump along the central ring. Therefore, by employing rate codes of the positional error, our model effectively transforms the positional error into a virtual velocity signal, shifting the activity bump along the central ring in a manner that decreases this error, a manifestation of error correction provided by visual landmarks. We verified this error correction mechanism in a numerical simulation of our model. Following a positional error introduced abruptly between the activity bumps of the ring attractor and the association ring, the differential changes in the rotation rings’ firing rates eliminated the error by re-aligning the central ring’s activity bump with that of the association ring (Fig. 5G).
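The virtual-velocity account of error correction reduces to a few lines (the baseline rate and error sensitivity are illustrative choices, not the model’s actual parameters): the CCW and CW rotation rings fire at a common baseline modulated up and down by the rate-coded error, and their rate difference shifts the bump until the misalignment decays away.

```python
dt, base, k = 0.01, 1.0, 0.5   # time step, baseline rate, error sensitivity
bump_pos, vis_pos = 0.0, 1.0   # abrupt 1-radian positional error

for _ in range(2000):
    err = vis_pos - bump_pos             # misalignment (CCW positive)
    r_ccw = max(0.0, base + k * err)     # CCW ring: rate rises with error
    r_cw = max(0.0, base - k * err)      # CW ring: rate falls with error
    bump_pos += (r_ccw - r_cw) * dt      # differential rates act as virtual velocity
```

With these dynamics the error decays roughly exponentially, mirroring the re-alignment seen in the simulation of the full model.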
5. Discussion
Fine-tuning of a neural integration computation is crucial to maintain accurate representations of continuous variables since the relationship between the sensing of the relative change in a continuous variable and its actual value can fluctuate on both developmental (e.g., changes in body size that can affect location coding) and behavioral timescales (e.g., swimming versus walking in the case of location coding) and even due to dynamic biological processes, such as circadian rhythm, that can alter synaptic transmission and intrinsic electrical properties of neurons. Building upon previous behavioral work on perceptual plasticity of human locomotion [56], physiological evidence for such fine-tuning was first observed in hippocampal place cells [35], where persistent conflict between self-motion and external visual cues recalibrated the integrator gain. In the present paper, we give the first theoretical examination of this phenomenon in continuous bump attractor networks (CBANs), a prevailing model for representations of continuous variables.
Our examination unveiled the algorithmic and mechanistic requirements for gain recalibration in a ring attractor network, a representative CBAN model used for circular continuous variables. In CBAN models, when the integration gain is inaccurate, an internal representation of a continuous variable slightly drifts relative to its actual value, resulting in encoding errors. Absolute ‘ground-truth’ information, such as feedback from visual landmarks for location coding, corrects these errors through the internal dynamics of the network, without the need for an explicit rate-based representation of the error. In contrast to this automatic error correction through network dynamics, we found that fine-tuning the integration gain based on errors requires an explicit error signal, i.e., the firing rates of some neurons must encode the error in the CBAN’s representation of a continuous variable relative to its actual value. Building upon this insight, we also proposed a ring attractor network model that shows how a CBAN can recalibrate its integration gain through biologically known plasticity mechanisms. Although the ring attractor is specialized for integration of 1D circular continuous variables (e.g., an animal’s location on a circular track), our findings can be readily extended to higher dimensions and other types of continuous variables. Overall, our findings suggest that a rate code for the error in the internal representation of a continuous variable is a core component of the bump attractor-type neural integrators, and that such a rate code plays an essential role in their gain recalibration.
5.1. The Bump Attractor Network as an Adaptive Kalman Filter
To identify algorithmic requirements for recalibration of the integration gain, we simplified dynamics of the ring attractor through a dimensionality reduction protocol described in [41]. Similar approaches have been successfully applied in recent years to explore the neural dynamics capturing how high-dimensional neural data evolves within low-dimensional topological structures [6, 14, 57]. In our specific case, the dimensionality reduction led to a simplified 1D model of the ring attractor, capturing the dynamics of its representation as a function of external inputs that provide differential (e.g., animal’s velocity) and absolute (e.g., positional feedback from visual landmarks) information [41, 42, 58].
Previous research showed that, when two external cues are presented as inputs, the ring attractor network fuses them optimally in the Bayesian sense [20, 59–61]. Furthermore, if one of the cues provides only differential information like the animal’s velocity that is integrated over time to compute the overall change in the continuous variable, the ring attractor network performs the Bayesian fusion recursively for each step of the integration [41]. This recursive computation is known as Kalman filtering and has been proposed as a model of cue integration in the entorhinal cortex of the mammalian brain [21]. Consistent with this prior work, we found that the ring attractor network operates as a Kalman filter updating the representation by a combination of integrated relative information (the internal model component of the Kalman filter) and the instantaneous feedback from absolute information (the measurement model component).
In engineered systems, the accuracy of a Kalman filter relies on precise knowledge of its internal model parameters; to address this issue, adaptive Kalman filters that fine-tune their own parameters have been proposed [62, 63]. Like adaptive Kalman filters, we showed that a ring attractor network can fine-tune itself through gain recalibration. We also elucidated the algorithmic requirement for this recalibration, showing that the integration gain must change in the same direction as the product of the animal’s velocity and the error in the attractor’s representation relative to the absolute ‘ground-truth’ information. Interestingly, this requirement resembles characteristics of a classical algorithm known as the MIT rule in adaptive control systems and Kalman filtering [64]. Thus, a ring attractor with gain recalibration effectively operates as an adaptive Kalman filter, updating its representation accurately through a finely tuned integration gain.
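For concreteness, the MIT rule for a single adjustable gain updates the gain in proportion to the product of the output error and the input, which parallels the velocity-times-error requirement derived here. A minimal sketch (our own illustrative example with arbitrary constants, not taken from [64]):

```python
import numpy as np

dt, gamma = 0.01, 0.2
g_ref, g = 0.7, 1.0              # reference gain and adjustable gain
rng = np.random.default_rng(1)

for _ in range(50000):
    u = rng.normal()             # persistently exciting input (cf. velocity)
    e = g_ref * u - g * u        # error between reference and adjustable output
    g += gamma * e * u * dt      # MIT rule: d(g)/dt = gamma * e * u
```

As long as the input keeps exciting the system, the adjustable gain converges to the reference gain, just as the PI gain converges toward the gain implied by the landmark feedback.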
5.2. Necessity of a Rate-Based Explicit Error Signal in the Bump Attractor Networks
Satisfying the algorithmic requirement for gain recalibration is subject to certain mechanistic constraints, which we discovered by analyzing the network dynamics. In essence, gain recalibration requires that some neurons vary their firing rates monotonically with the instantaneous value or the time-integral of the error in the representation of the continuous variable relative to its true value. Without such error-rate codes, the network does not have a teaching signal that can guide tuning of its gain for recalibration. This shows that, for CBAN networks, learning from representational errors to recalibrate the integration gain is a very different neural process than correcting the errors. In the case of error correction, input signals from absolute ‘ground-truth’ information sources, such as visual landmarks for a CBAN encoding location, are sufficient to trigger an error correction response from network dynamics. In contrast, recalibrating the integration gain based on representational error additionally requires a neural signal that explicitly encodes this error via a rate code.
This hypothesized error signal resembles reward and sensory prediction error signals within the mammalian brain. Dopamine neurons in the midbrain of mammals encode error in the internal predictions of reward via monotonic changes in their firing rates [65, 66]; they exhibit elevated activity with more reward than predicted, remain at baseline activity for fully predicted rewards, and exhibit depressed activity with less reward than predicted. Climbing fiber inputs to Purkinje cells of the mammalian cerebellum encode errors in the predicted sensory consequences of motor commands relative to the actual sensory feedback via changes in the rate and duration of complex spikes [67, 68]. Both of these prediction error codes are thought to act as a teaching signal that fine-tunes the internal models, mapping the stimulus to reward prediction in the dopamine system and the motor commands to sensory prediction in the cerebellum, through plasticity, just like how error coding can act as a teaching signal that recalibrates the integration gain of a CBAN.
To instantiate this idea computationally, we presented a modified ring attractor model that can recalibrate its integrator gain based on a rate code of the instantaneous error via Hebbian plasticity. Relevant to this model, a previous study hypothesized a currently unknown plasticity rule as a mechanism for gain recalibration within the ring attractor network [32]; the hypothesized plasticity rule modifies synaptic weights of each neuron according to an implicit positional error signal, computed locally within each neuron through comparison of the synaptic inputs at its basal and apical dendrites receiving, respectively, the absolute ‘ground-truth’ information and the network’s current representation information. While it is unclear whether such a plasticity rule exists in the brain, our model demonstrates a biologically plausible alternative: Hebbian plasticity, combined with an explicit rate-based representation of the error, is sufficient to achieve gain recalibration. It remains future work to experimentally test whether such an error signal exists in the brain circuits that are thought to employ CBANs for encoding continuous variables.
5.3. Implications of a Distributed, Inhomogeneous Integration Gain
Prior CBAN models implicitly assumed the integration gain to be a single, global parameter of the network, independent of the value of the encoded continuous variable [11, 31]. Although the idea of different, hard-wired integration gains has previously been suggested in the context of location coding to explain the changes in the spatial scale of place coding along the dorsal-ventral axis of the hippocampus [27], it was assumed that the integration gain is constant at all locations within an environment. In contrast, we showed that the integration gain within a single CBAN, specifically the ring-attractor network, is a distributed parameter instantiated in the network’s array of synaptic weights, implying that the network can adopt unique integration gains for different values of the encoded continuous variable. This would be, for example, a CBAN changing its integration gain depending on the location of the animal in the context of spatial navigation or depending on the amount of accumulated evidence in the context of decision-making. In the latter case, the CBAN may look like it unevenly weights early or late evidence, which is a well-established phenomenon known as primacy and recency effects in the decision-making literature [69–71]. Compared to a network with a single, global integration gain, a network with a distributed, possibly inhomogeneous, gain can adjust its representation metric for the continuous variable locally, hence providing greater flexibility in representing different values of the continuous variable with uneven resolutions, depending on, for instance, their behavioral significance [72].
How might gain inhomogeneity arise? Theoretically, it can be a product of the recalibration process if the teaching signal (e.g., feedback from absolute ‘ground-truth’ information sources) is available unevenly across the values of the continuous variable. In the context of location coding, for example, such differences may occur between when the animal is near the boundaries of the environment, a relatively rich area in terms of external ground-truth information, versus when it is near the center of the arena. We speculate that such spatially distributed recalibration of the integration gain may offer a mechanistic explanation for some of the experimental findings about local distortions and deformations in the activity patterns of entorhinal grid cells and hippocampal place cells, encoding the animal’s location, during environmental manipulations [73–80]. According to this speculation, grid patterns might get distorted near environmental boundaries through changes in the local integration gain of the network; through the same mechanism, place cells might represent locations near landmarks and boundaries with a greater spatial resolution (also known as overrepresentation). Overall, an inhomogeneous integration gain in CBANs offers a potential explanation for an array of seemingly complex responses in spatial navigation as well as other brain functions.
Acknowledgments
This work was supported by National Institutes of Health grants R01 NS102537 (N.J.C., J.J.K.), U01 NS131438 (N.J.C.), and R01 MH118926 (J.J.K., N.J.C.). We thank Ravikrishnan Jayakumar, Kathryn Hedrick, Bharath Krishnan, Manu Madhav, Francesco Savelli, and Kechen Zhang.
References
- [1].Ben-Yishai R., Bar-Or R. L., and Sompolinsky H., “Theory of orientation tuning in visual cortex,” Proceedings of the National Academy of Sciences, vol. 92, no. 9, pp. 3844–3848, 1995.
- [2].Wimmer K., Nykamp D. Q., Constantinidis C., and Compte A., “Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory,” Nature Neuroscience, vol. 17, no. 3, pp. 431–439, 2014.
- [3].Compte A., Brunel N., Goldman-Rakic P. S., and Wang X.-J., “Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model,” Cerebral Cortex, vol. 10, no. 9, pp. 910–923, 2000.
- [4].Esnaola-Acebes J. M., Roxin A., and Wimmer K., “Flexible integration of continuous sensory evidence in perceptual estimation tasks,” Proceedings of the National Academy of Sciences, vol. 119, no. 45, p. e2214441119, 2022.
- [5].Furman M. and Wang X.-J., “Similarity effect and optimal control of multiple-choice decision making,” Neuron, vol. 60, no. 6, pp. 1153–1168, 2008.
- [6].Nieh E. H., Schottdorf M., Freeman N. W., Low R. J., Lewallen S., Koay S. A., Pinto L., Gauthier J. L., Brody C. D., and Tank D. W., “Geometry of abstract learned knowledge in the hippocampus,” Nature, vol. 595, no. 7865, pp. 80–84, 2021.
- [7].Wojtak W., Coombes S., Avitabile D., Bicho E., and Erlhagen W., “A dynamic neural field model of continuous input integration,” Biological Cybernetics, vol. 115, no. 5, pp. 451–471, 2021.
- [8].Skaggs W., Knierim J., Kudrimoti H., and McNaughton B., “A model of the neural basis of the rat’s sense of direction,” Advances in Neural Information Processing Systems, vol. 7, 1994.
- [9].Blair H., “Simulation of a thalamocortical circuit for computing directional heading in the rat,” Advances in Neural Information Processing Systems, vol. 8, 1995.
- [10].Redish A. D., Elga A. N., and Touretzky D. S., “A coupled attractor model of the rodent head direction system,” Network: Computation in Neural Systems, vol. 7, no. 4, p. 671, 1996.
- [11].Zhang K., “Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory,” Journal of Neuroscience, vol. 16, no. 6, pp. 2112–2126, 1996.
- [12].Knierim J. J. and Zhang K., “Attractor dynamics of spatially correlated neural activity in the limbic system,” Annual Review of Neuroscience, vol. 35, pp. 267–285, 2012.
- [13].Yoon K., Buice M. A., Barry C., Hayman R., Burgess N., and Fiete I. R., “Specific evidence of low-dimensional continuous attractor dynamics in grid cells,” Nature Neuroscience, vol. 16, no. 8, pp. 1077–1084, 2013.
- [14].Gardner R. J., Hermansen E., Pachitariu M., Burak Y., Baas N. A., Dunn B. A., Moser M.-B., and Moser E. I., “Toroidal topology of population activity in grid cells,” Nature, vol. 602, no. 7895, pp. 123–128, 2022.
- [15].Ajabi Z., Keinath A. T., Wei X.-X., and Brandon M. P., “Population dynamics of head-direction neurons during drift and reorientation,” Nature, vol. 615, no. 7954, pp. 892–899, 2023.
- [16].Kim S. S., Rouault H., Druckmann S., and Jayaraman V., “Ring attractor dynamics in the Drosophila central brain,” Science, vol. 356, no. 6340, pp. 849–853, 2017.
- [17].Turner-Evans D., Wegener S., Rouault H., Franconville R., Wolff T., Seelig J. D., Druckmann S., and Jayaraman V., “Angular velocity integration in a fly heading circuit,” eLife, vol. 6, p. e23496, 2017.
- [18].Turner-Evans D. B., Jensen K. T., Ali S., Paterson T., Sheridan A., Ray R. P., Wolff T., Lauritzen J. S., Rubin G. M., Bock D. D., et al., “The neuroanatomical ultrastructure and function of a biological ring attractor,” Neuron, vol. 108, no. 1, pp. 145–163, 2020.
- [19].Bicanski A. and Burgess N., “Environmental anchoring of head direction in a computational model of retrosplenial cortex,” Journal of Neuroscience, vol. 36, no. 46, pp. 11601–11618, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [20].Jeffery K. J., Page H. J., and Stringer S. M., “Optimal cue combination and landmark-stability learning in the head direction system,” The Journal of physiology, vol. 594, no. 22, pp. 6527–6534, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [21].Campbell M. G., Ocko S. A., Mallory C. S., Low I. I., Ganguli S., and Giocomo L. M., “Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation,” Nature neuroscience, vol. 21, no. 8, pp. 1096–1106, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [22].Muller R. U. and Kubie J. L., “The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells,” Journal of Neuroscience, vol. 7, no. 7, pp. 1951–1968, 1987. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [23].Taube J. S., Muller R. U., and Ranck J. B., “Head-direction cells recorded from the postsubiculum in freely moving rats. ii. effects of environmental manipulations,” Journal of Neuroscience, vol. 10, no. 2, pp. 436–447, 1990. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [24].Goodridge J. P., Dudchenko P. A., Worboys K. A., Golob E. J., and Taube J. S., “Cue control and head direction cells.,” Behavioral neuroscience, vol. 112, no. 4, p. 749, 1998. [DOI] [PubMed] [Google Scholar]
- [25].Knierim J. J., Kudrimoti H. S., and McNaughton B. L., “Interactions between idiothetic cues and external landmarks in the control of place cells and head direction cells,” Journal of neurophysiology, vol. 80, no. 1, pp. 425–446, 1998. [DOI] [PubMed] [Google Scholar]
- [26].Savelli F., Luck J., and Knierim J. J., “Framing of grid cells within and beyond navigation boundaries,” Elife, vol. 6, p. e21354, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [27].Terrazas A., Krause M., Lipa P., Gothard K. M., Barnes C. A., and McNaughton B. L., “Self-motion and the hippocampal spatial metric,” Journal of Neuroscience, vol. 25, no. 35, pp. 8085–8096, 2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [28].Tcheang L., Bülthoff H. H., and Burgess N., “Visual influence on path integration in darkness indicates a multimodal representation of large-scale space,” Proceedings of the National Academy of Sciences, vol. 108, no. 3, pp. 1152–1157, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [29].Fuhs M. C. and Touretzky D. S., “A spin glass model of path integration in rat medial entorhinal cortex,” Journal of Neuroscience, vol. 26, no. 16, pp. 4266–4276, 2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [30].Conklin J. and Eliasmith C., “A controlled attractor network model of path integration in the rat,” Journal of computational neuroscience, vol. 18, pp. 183–203, 2005. [DOI] [PubMed] [Google Scholar]
- [31].Burak Y. and Fiete I. R., “Accurate path integration in continuous attractor network models of grid cells,” PLoS computational biology, vol. 5, no. 2, p. e1000291, 2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [32].Vafidis P., Owald D., D’Albis T., and Kempter R., “Learning accurate path integration in ring attractor models of the head direction system,” Elife, vol. 11, p. e69841, 2022. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [33].McNaughton B. L., Battaglia F. P., Jensen O., Moser E. I., and Moser M.-B., “Path integration and the neural basis of the’cognitive map’,” Nature Reviews Neuroscience, vol. 7, no. 8, pp. 663–678, 2006. [DOI] [PubMed] [Google Scholar]
- [34].Shimbo A., Izawa E.-I., and Fujisawa S., “Scalable representation of time in the hippocampus,” Science Advances, vol. 7, no. 6, p. eabd7013, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [35].Jayakumar R. P., Madhav M. S., Savelli F., Blair H. T., Cowan N. J., and Knierim J. J., “Recalibration of path integration in hippocampal place cells,” Nature, vol. 566, no. 7745, pp. 533–537, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [36].Goodridge J. P. and Touretzky D. S., “Modeling attractor deformation in the rodent head-direction system,” Journal of neurophysiology, vol. 83, no. 6, pp. 3402–3410, 2000. [DOI] [PubMed] [Google Scholar]
- [37].Samsonovich A. and McNaughton B. L., “Path integration and cognitive mapping in a continuous attractor neural network model,” Journal of Neuroscience, vol. 17, no. 15, pp. 5900–5920, 1997. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [38].Amari S.-i., “Dynamics of pattern formation in lateral-inhibition type neural fields,” Biological cybernetics, vol. 27, no. 2, pp. 77–87, 1977. [DOI] [PubMed] [Google Scholar]
- [39].Wilson H. R. and Cowan J. D., “A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue,” Kybernetik, vol. 13, no. 2, pp. 55–80, 1973. [DOI] [PubMed] [Google Scholar]
- [40].Ermentrout G. B. and Cowan J. D., “A mathematical theory of visual hallucination patterns,” Biological cybernetics, vol. 34, no. 3, pp. 137–150, 1979. [DOI] [PubMed] [Google Scholar]
- [41].Ocko S. A., Hardcastle K., Giocomo L. M., and Ganguli S., “Emergent elasticity in the neural code for space,” Proceedings of the National Academy of Sciences, vol. 115, no. 50, pp. E11798–E11806, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [42].Fung C. A., Wong K. M., and Wu S., “A moving bump in a continuous manifold: a comprehensive study of the tracking dynamics of continuous attractor neural networks,” Neural Computation, vol. 22, no. 3, pp. 752–792, 2010. [DOI] [PubMed] [Google Scholar]
- [43].Simon D., Optimal state estimation: Kalman, H infinity, and nonlinear approaches. John Wiley & Sons, 2006. [Google Scholar]
- [44].Salinas E. and Abbott L. F., “A model of multiplicative neural responses in parietal cortex,” Proceedings of the national academy of sciences, vol. 93, no. 21, pp. 11956–11961, 1996. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [45].Solstad T., Moser E. I., and Einevoll G. T., “From grid cells to place cells: a mathematical model,” Hippocampus, vol. 16, no. 12, pp. 1026–1031, 2006. [DOI] [PubMed] [Google Scholar]
- [46].Yan Y., Burgess N., and Bicanski A., “A model of head direction and landmark coding in complex environments,” PLoS computational biology, vol. 17, no. 9, p. e1009434, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [47].Lyu C., Abbott L., and Maimon G., “Building an allocentric travelling direction signal via vector computation,” Nature, vol. 601, no. 7891, pp. 92–97, 2022. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [48].Stringer S., Trappenberg T., Rolls E., and Araujo I., “Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells,” Network: Computation in Neural Systems, vol. 13, no. 2, pp. 217–242, 2002. [PubMed] [Google Scholar]
- [49].Brody C. D., Romo R., and Kepecs A., “Basic mechanisms for graded persistent activity: discrete attractors, continuous attractors, and dynamic representations,” Current opinion in neurobiology, vol. 13, no. 2, pp. 204–211, 2003. [DOI] [PubMed] [Google Scholar]
- [50].Hollup S. A., Molden S., Donnett J. G., Moser M.-B., and Moser E. I., “Accumulation of hippocampal place fields at the goal location in an annular watermaze task,” Journal of Neuroscience, vol. 21, no. 5, pp. 1635–1644, 2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [51].Bourboulou R., Marti G., Michon F.-X., El Feghaly E., Nouguier M., Robbe D., Koenig J., and Epsztein J., “Dynamic control of hippocampal spatial coding resolution by local visual cues,” Elife, vol. 8, p. e44487, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [52].McNaughton B., Chen L., and Markus E., ““dead reckoning,” landmark learning, and the sense of direction: a neurophysiological and computational hypothesis,” Journal of Cognitive Neuroscience, vol. 3, no. 2, pp. 190–202, 1991. [DOI] [PubMed] [Google Scholar]
- [53].McNaughton B. L., Knierim J. J., and Wilson M. A., Vector encoding and the vestibular foundations of spatial cognition: neurophysiological and computational mechanisms, p. 585–595. The MIT Press, 2020. [Google Scholar]
- [54].McNaughton B. L., Barnes C. A., Gerrard J. L., Gothard K., Jung M. W., Knierim J. J., Kudrimoti H., Qin Y., Skaggs W., Suster M., et al. , “Deciphering the hippocampal polyglot: the hippocampus as a path integration system,” Journal of Experimental Biology, vol. 199, no. 1, pp. 173–185, 1996. [DOI] [PubMed] [Google Scholar]
- [55].Milford M. J., Wyeth G. F., and Prasser D., “Ratslam: a hippocampal model for simultaneous localization and mapping,” in IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04. 2004, vol. 1, pp. 403–408, 2004. [Google Scholar]
- [56].Rieser J. J., Pick H. L., Ashmead D. H., and Garing A. E., “Calibration of human locomotion and models of perceptual-motor organization.,” Journal of Experimental Psychology: Human Perception and Performance, vol. 21, no. 3, p. 480, 1995. [DOI] [PubMed] [Google Scholar]
- [57].Low I. I., Williams A. H., Campbell M. G., Linderman S. W., and Giocomo L. M., “Dynamic and reversible remapping of network representations in an unchanging environment,” Neuron, vol. 109, no. 18, pp. 2967–2980, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [58].Wu S., Hamaguchi K., and Amari S.-i., “Dynamics and computation of continuous attractors,” Neural computation, vol. 20, no. 4, pp. 994–1025, 2008. [DOI] [PubMed] [Google Scholar]
- [59].Kutschireiter A., Basnak M. A., Wilson R. I., and Drugowitsch J., “Bayesian inference in ring attractor networks,” Proceedings of the National Academy of Sciences, vol. 120, no. 9, p. e2210622120, 2023. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [60].Pouget A., Zhang K., Deneve S., and Latham P. E., “Statistically efficient estimation using population coding,” Neural computation, vol. 10, no. 2, pp. 373–401, 1998. [DOI] [PubMed] [Google Scholar]
- [61].Zhang W.-H., Chen A., Rasch M. J., and Wu S., “Decentralized multisensory information integration in neural systems,” Journal of Neuroscience, vol. 36, no. 2, pp. 532–547, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [62].Mehra R., “On the identification of variances and adaptive kalman filtering,” IEEE Transactions on automatic control, vol. 15, no. 2, pp. 175–184, 1970. [Google Scholar]
- [63].Mohamed A. and Schwarz K., “Adaptive kalman filtering for ins/gps,” Journal of geodesy, vol. 73, pp. 193–203, 1999. [Google Scholar]
- [64].Åström K. J. and Wittenmark B., Adaptive control. Courier Corporation, 2013. [Google Scholar]
- [65].Schultz W., “Predictive reward signal of dopamine neurons,” Journal of neurophysiology, 1998. [DOI] [PubMed] [Google Scholar]
- [66].Hollerman J. R. and Schultz W., “Dopamine neurons report an error in the temporal prediction of reward during learning,” Nature neuroscience, vol. 1, no. 4, pp. 304–309, 1998. [DOI] [PubMed] [Google Scholar]
- [67].Medina J. F. and Lisberger S. G., “Links from complex spikes to local plasticity and motor learning in the cerebellum of awake-behaving monkeys,” Nature neuroscience, vol. 11, no. 10, pp. 1185–1192, 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [68].Kitazawa S., Kimura T., and Yin P.-B., “Cerebellar complex spikes encode both destinations and errors in arm movements,” Nature, vol. 392, no. 6675, pp. 494–497, 1998. [DOI] [PubMed] [Google Scholar]
- [69].Cheadle S., Wyart V., Tsetsos K., Myers N., De Gardelle V., Castañón S. H., and Summerfield C., “Adaptive gain control during human perceptual choice,” Neuron, vol. 81, no. 6, pp. 1429–1441, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [70].Keung W., Hagen T. A., and Wilson R. C., “A divisive model of evidence accumulation explains uneven weighting of evidence over time,” Nature Communications, vol. 11, no. 1, p. 2160, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [71].Brunton B. W., Botvinick M. M., and Brody C. D., “Rats and humans can optimally accumulate evidence for decision-making,” Science, vol. 340, no. 6128, pp. 95–98, 2013. [DOI] [PubMed] [Google Scholar]
- [72].Ginosar G., Aljadeff J., Las L., Derdikman D., and Ulanovsky N., “Are grid cells used for navigation? on local metrics, subjective spaces, and black holes,” Neuron, 2023. [DOI] [PubMed] [Google Scholar]
- [73].Krupic J., Bauza M., Burton S., Barry C., and O’Keefe J., “Grid cell symmetry is shaped by environmental geometry,” Nature, vol. 518, no. 7538, pp. 232–235, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [74].Stensola H., Stensola T., Solstad T., Frøland K., Moser M.-B., and Moser E. I., “The entorhinal grid map is discretized,” Nature, vol. 492, no. 7427, pp. 72–78, 2012. [DOI] [PubMed] [Google Scholar]
- [75].Barry C., Hayman R., Burgess N., and Jeffery K. J., “Experience-dependent rescaling of entorhinal grids,” Nature neuroscience, vol. 10, no. 6, pp. 682–684, 2007. [DOI] [PubMed] [Google Scholar]
- [76].Barry C., Ginzberg L. L., O’Keefe J., and Burgess N., “Grid cell firing patterns signal environmental novelty by expansion,” Proceedings of the National Academy of Sciences, vol. 109, no. 43, pp. 17687–17692, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [77].Keinath A. T., Epstein R. A., and Balasubramanian V., “Environmental deformations dynamically shift the grid cell spatial metric,” Elife, vol. 7, p. e38169, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [78].Hetherington P. A. and Shapiro M. L., “Hippocampal place fields are altered by the removal of single visual cues in a distance-dependent manner.,” Behavioral neuroscience, vol. 111, no. 1, p. 20, 1997. [DOI] [PubMed] [Google Scholar]
- [79].Eichenbaum H., Wiener S. I., Shapiro M., and Cohen N., “The organization of spatial coding in the hippocampus: a study of neural ensemble activity,” Journal of Neuroscience, vol. 9, no. 8, pp. 2764–2775, 1989. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [80].Sato M., Mizuta K., Islam T., Kawano M., Sekine Y., Takekawa T., Gomez-Dominguez D., Schmidt A., Wolf F., Kim K., et al. , “Distinct mechanisms of over-representation of landmarks and rewards in the hippocampus,” Cell reports, vol. 32, no. 1, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [81].Noorman M., Hulse B. K., Jayaraman V., Romani S., and Hermundstad A. M., “Accurate angular integration with only a handful of neurons,” bioRxiv, pp. 2022–05, 2022. [Google Scholar]
- [82].Dayan P. and Abbott L. F., Theoretical neuroscience: computational and mathematical modeling of neural systems. MIT press, 2005. [Google Scholar]
- [83].Khalil H., Nonlinear Systems. Pearson Education, Prentice Hall, 2002. [Google Scholar]
- [84].Guckenheimer J. and Holmes P., Nonlinear oscillations, dynamical systems, and bifurcations of vector fields, vol. 42. Springer Science & Business Media, 2013. [Google Scholar]





