Proceedings of the National Academy of Sciences of the United States of America. 2019 Oct 21;116(45):22783–22794. doi: 10.1073/pnas.1911633116

Oscillatory recurrent gated neural integrator circuits (ORGaNICs), a unifying theoretical framework for neural dynamics

David J Heeger a,b,1, Wayne E Mackey a,b
PMCID: PMC6842604  PMID: 31636212

Significance

Oscillatory recurrent gated neural integrator circuits (ORGaNICs) are a family of recurrent neural circuits that can simulate a wide range of neurobiological phenomena, many of which have previously been explained by separate models. This theoretical framework can be used to simulate neural activity with complex dynamics, including sequential activity and traveling waves of activity. When used to model cognitive processing in the brain, these circuits can both maintain and manipulate information during a working memory task. When used to model motor control, these circuits convert spatial patterns of premotor activity to temporal profiles of motor control activity. ORGaNICs offer a conceptual framework; rethinking cortical computation in these terms should have widespread implications, motivating a variety of experiments.

Keywords: computational neuroscience, recurrent neural network, normalization, working memory, motor control

Abstract

Working memory is an example of a cognitive and neural process that is not static but evolves dynamically with changing sensory inputs; another example is motor preparation and execution. We introduce a theoretical framework for neural dynamics, based on oscillatory recurrent gated neural integrator circuits (ORGaNICs), and apply it to simulate key phenomena of working memory and motor control. The model circuits simulate neural activity with complex dynamics, including sequential activity and traveling waves of activity, that manipulate (as well as maintain) information during working memory. The same circuits convert spatial patterns of premotor activity to temporal profiles of motor control activity and manipulate (e.g., time warp) the dynamics. Derivative-like recurrent connectivity, in particular, serves to manipulate and update internal models, an essential feature of working memory and motor execution. In addition, these circuits incorporate recurrent normalization, to ensure stability over time and robustness with respect to perturbations of synaptic weights.


Neuroscience research on working memory has largely focused on sustained delay-period activity (1–4). A large body of experimental research has measured sustained activity in prefrontal cortex (PFC) and/or parietal cortex during delay periods of memory-guided saccade tasks (5–9) and delayed-discrimination and delayed match-to-sample tasks (10–13). Most models of working memory, based on neural integrators (see SI Appendix, Figs. S1–S3 for a primer on neural integrators), aim to explain sustained delay-period activity or to explain behavioral phenomena associated with sustained activity (14, 15).

Working memory, however, involves much more than simply holding a piece of information online. In cognitive psychology, the idea of working memory includes manipulating online information dynamically in the context of new sensory input. For example, understanding a complex utterance (with multiple phrases) often involves disambiguating the syntax and/or semantics of the beginning of the utterance based on information at the end of the sentence. Doing so necessitates representing and manipulating long-term dependencies, that is, maintaining a representation of the ambiguous information, and then changing that representation when the ambiguity is resolved. In addition, there are a variety of experimental results that are difficult to reconcile with sustained activity and neural integrator models. Some (if not the majority of) neurons either exhibit sequential activity such that activity is handed off from one neuron to the next during a delay period with each individual neuron being active only transiently (16–21) or they exhibit complex dynamics during delay periods (21–27). Complex dynamics (including oscillations) are evident also in the combined activity (e.g., as measured with local field potentials) of populations of neurons (28, 29). We hypothesize that these complex dynamics serve a purpose, to manipulate working memory representations.

Models of perceptual decision making, like working memory models, are also based on simple neural integrators. Specifically, perceptual decision making has been proposed to involve integration of noisy sensory information (30–34), a simple form of manipulation, in which neurons literally sum sensory-evoked activity over a period of time. However, a more general theoretical framework for representing and manipulating long-term dependencies is lacking.

Motor preparation and execution, analogous to working memory, involves maintaining a neural representation of a motor plan while manipulating that representation to generate the desired movement dynamics. Neural circuits and systems subserving motor preparation and execution exhibit analogous sustained and sequential activity phenomena (35–39), and there are analogous challenges reconciling these phenomena with neural integrator models.

Long short-term memory units (LSTMs) are machine-learning (ML) algorithms that represent and manipulate long-term dependencies (40). LSTMs are a class of recurrent neural networks. A number of variants of the basic LSTM architecture have been developed and tested for ML applications including language modeling, translation, and speech recognition (41–45). In these and other tasks, the input stimuli contain information across multiple timescales, but the ongoing presentation of stimuli makes it difficult to correctly combine that information over time. This is analogous to the problem of representing and manipulating long-term dependencies mentioned above in working memory, decision making, and motor control. An LSTM handles this problem by updating its internal state over time with a pair of gates: The update gate selects which part(s) of the current input to process, and the reset gate selectively deletes part(s) of the current output. The gates are computed at each time step from the current inputs and outputs. This enables LSTMs to maintain a representation of some of the inputs, until needed, and then to manipulate that representation based on inputs that come later in time.
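The gating arithmetic described above can be made concrete with a few lines of code. This is a GRU-flavored toy sketch (not the authors' model, and simplified relative to a full LSTM): an update gate admits part of the current input, and a reset gate attenuates part of the previous state. The weights, sizes, and inputs are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n = 4
# Hypothetical gate and candidate weight matrices (small random values).
Wu, Wr, Wc = (0.1 * rng.standard_normal((n, n)) for _ in range(3))

def step(state, x):
    u = sigmoid(Wu @ x)              # update gate: gates the new input
    r = sigmoid(Wr @ state)          # reset gate: gates the old state
    candidate = np.tanh(Wc @ x)      # candidate update from the current input
    return r * state + u * candidate # gated combination of old and new

state = np.zeros(n)
for _ in range(5):
    state = step(state, rng.standard_normal(n))
```

Because the gates are themselves computed from the current inputs and outputs, the state can hold some information untouched for many steps and then overwrite it when later input arrives.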

Here, we introduce a theoretical framework for neural dynamics that is a generalization of and a biophysically plausible implementation of LSTMs. We show that these circuits simulate key phenomena of working memory, including both maintenance and manipulation, and both sequential and sustained activity. We also show that the exact same circuits (with the same synaptic weights) simulate key phenomena of motor control. Preliminary versions of this work, along with further details and mathematical derivations, were posted on preprint servers (46, 47). MATLAB code for recreating the simulation results is available at https://archive.nyu.edu/handle/2451/60439 (48).

Results

ORGaNICs.

We begin by describing the basic architecture of oscillatory recurrent gated neural integrator circuits (ORGaNICs). The following subsections elaborate this basic architecture and demonstrate that this architecture can subserve a variety of functions including working memory and motor control.

An example ORGaNICs circuit is depicted in Fig. 1. The neural responses of a population of neurons are modeled as dynamical processes that evolve over time. The output responses depend on an input drive (a weighted sum of the responses of a population of input neurons) and a recurrent drive (a recurrent weighted sum of their own responses). The time-varying output responses are represented by a vector y = (y1, y2,…, yj,…, yN), where the subscript j indexes different neurons in the population (boldface lowercase letters denote vectors and boldface uppercase letters denote matrices.) The time-varying inputs are represented by another vector x = (x1, x2,…, xj,…, xM). The output responses are also modulated by 2 populations of time-varying modulators, recurrent modulators a and input modulators b. (We use the term “modulator” to mean a multiplicative computation regardless of whether or not it is implemented with neuromodulators.) The recurrent and input modulators are analogous, respectively, to the reset and input gates in LSTMs. The modulators depend on the inputs and outputs. So, there are 2 nested recurrent circuits: 1) recurrent drive: the output responses depend on the recurrent drive, which depends on a weighted sum of their own responses, and 2) multiplicative modulators: the output responses are modulated (multiplicatively) by the responses of 2 other populations of neurons (the modulators), which also depend on the output responses.

Fig. 1.

ORGaNICs architecture. (A) Diagram of connections in an example ORGaNIC. Solid lines/curves are excitatory (positive weights) and dashed curves are inhibitory (negative weights). Gray scale represents strength of connections (weight magnitude). Only a few of the input-drive connections and recurrent-drive connections are shown to minimize clutter. (B) Oculomotor delayed response task. Black cross-hair, fixation point. Black circle, eye position at the beginning of a trial. Blue circles, possible target locations, each of which evokes an input.

Specifically, neural responses are modeled by the following dynamical systems equation:

$$\tau_y \frac{dy_j}{dt} = -y_j + \left(\frac{b_j^+}{1+b_j^+}\right) z_j + \left(\frac{1}{1+a_j^+}\right) \hat{y}_j, \quad [1]$$
$$\mathbf{z} = \mathbf{W}_{zx}\,\mathbf{x},$$
$$\hat{\mathbf{y}} = \mathbf{W}_{\hat{y}y}\,\mathbf{y},$$
$$\mathbf{r} = \mathbf{W}_{ry}\,\mathbf{y},$$
$$a_j^+ \ge 0 \quad \text{and} \quad b_j^+ \ge 0.$$

Eq. 1 can be implemented with a simplified biophysical (equivalent electrical circuit) model of pyramidal cells (see SI Appendix and ref. 46 for details). The variables (y, ŷ, x, z, a, b, and r) are each functions of time, for example y(t), but we drop the explicit dependence on t to simplify the notation. The responses y depend on an input drive z, which is computed as a weighted sum of inputs x. The encoding weight matrix (also called the embedding matrix) Wzx is an N × M matrix of weights where N is the number of neurons in the circuit and M is the number of inputs to the circuit. The rows of Wzx are the response fields of the neurons. The responses y also depend on a recurrent drive ŷ, which is computed as a weighted sum of the responses y. The recurrent weight matrix Wŷy is an N × N matrix. For the example circuit depicted in Fig. 1, the recurrent weights have a center-surround architecture in which the closest recurrent connections are excitatory and the more distant ones are inhibitory, and the circuit exhibits sustained activity (discussed below). For other choices of the recurrent weight matrix, the circuit can exhibit stable, ongoing oscillations, sequential activity, or traveling waves of activity (discussed below). The recurrent drive and input drive are modulated, respectively, by 2 other populations of neurons: the recurrent modulators a and the input modulators b. The superscript + is a rectifying output nonlinearity. Half-wave rectification is the simplest form of this rectifying nonlinearity, but other output nonlinearities could be substituted, for example sigmoid, exponentiation, half-squaring (49), normalization (50, 51), and so on. The value of τy is the intrinsic time constant of the neurons. Finally, the output responses are multiplied by a readout matrix Wry, where r is the readout (not depicted in the figure).
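A minimal numeric sketch of Eq. 1 for a single neuron, assuming identity recurrent weights (ŷ = y) and constant modulators; rectification is dropped because a, b, and z stay nonnegative here. The values of τ_y, dt, and z are hypothetical choices, not the paper's parameters.

```python
def simulate_eq1(z, a, b, tau_y=0.01, dt=5e-4, t_end=0.2):
    """Euler-integrate tau_y dy/dt = -y + (b/(1+b)) z + (1/(1+a)) y."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        y += dt * (-y + (b / (1.0 + b)) * z + (1.0 / (1.0 + a)) * y) / tau_y
    return y

# Setting dy/dt = 0 in Eq. 1 gives y * a/(1+a) = z * b/(1+b), so whenever
# a = b the steady-state response equals the input drive:
y_ss = simulate_eq1(z=1.0, a=1.0, b=1.0)
```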

The time-varying values of the modulators a and b determine the state of the circuit by controlling the recurrent gain and effective time constant. During periods of time when both aj and bj are large (e.g., aj = bj ≫ 1), the response time courses are dominated by the input drive, so the responses exhibit a short effective time constant. When both aj and bj are small (∼0), the responses are dominated by the recurrent drive, so the responses exhibit a long effective time constant. When aj is large and bj is small, the recurrent drive is shut down (like the reset gate in an LSTM). A leaky neural integrator corresponds to a special case in which aj = bj is constant over time (see SI Appendix for a primer on neural integrators).
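The effective-time-constant claim can be checked numerically. Assuming identity recurrent weights and modulators clamped at a_j = b_j = λ, Eq. 1 reduces to a leaky integrator with effective time constant τ_y(1 + λ)/λ: large modulators give a short effective time constant, small modulators a long one. The values of τ_y, λ, and dt below are hypothetical.

```python
import numpy as np

tau_y = 0.01
lam = 2.0                                  # hypothetical modulator value, a = b = lam
tau_eff = tau_y * (1.0 + lam) / lam        # analytic effective time constant (0.015 s)

# Euler-integrate the decay from y = 1 with no input (z = 0); after one
# effective time constant the response should have fallen to ~1/e.
dt, y, t = 1e-5, 1.0, 0.0
while t < tau_eff:
    y += dt * (-y + y / (1.0 + lam)) / tau_y
    t += dt
```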

The modulators are themselves dynamical systems that depend on the inputs and outputs:

$$\tau_a \frac{d\mathbf{a}}{dt} = -\mathbf{a} + \mathbf{W}_{ax}\,\mathbf{x} + f(\mathbf{y}), \quad [2]$$
$$\tau_b \frac{d\mathbf{b}}{dt} = -\mathbf{b} + \mathbf{W}_{bx}\,\mathbf{x}.$$

The values of τa and τb are the intrinsic time constants of the modulator neurons. The recurrent modulator a depends on a function of the output responses, f(y), to incorporate recurrent normalization (Robustness via Normalization and SI Appendix). ORGaNICs (by analogy with LSTMs) use the modulators to encode a dynamically changing state. The modulators depend on the current inputs and the current outputs, which in turn depend on past inputs and outputs, so the state depends on the current inputs and past context. The modulators can be controlled separately for each neuron so that each neuron can have a different state (different values for aj and bj) at each instant in time. In the example that follows, however, all of the neurons in the circuit shared the same state, but that state changed over time.

ORGaNICs are inherently a nonlinear dynamical system because the input drive and the recurrent drive are each multiplied by nonlinear functions of the modulators (Eq. 1) and because the recurrent modulator depends nonlinearly on the output responses (Eq. 2). However, there are circumstances when these equations can be analyzed as a linear system, specifically when the modulators are constant over time, because the only remaining nonlinearity is due to the normalization which simply acts to rescale the responses.

There is considerable flexibility in the formulation of ORGaNICs, with different variants corresponding to different hypothesized neural circuits (SI Appendix). In one such variant, each of the modulators can depend on both the inputs and the outputs, unlike Eq. 2 in which only a depends on the output responses. In another variant, the 2 modulators have analogous effects such that larger values of a increase the gain of the recurrent drive, unlike Eq. 1 in which larger values of a decrease the gain of the recurrent drive. In yet another variant, the 2 modulators are coordinated to govern balance between input drive and recurrent drive.

The following subsections describe some examples of ORGaNICs. We begin with a simplified example of a sustained activity circuit, then modify the recurrent weights to simulate sequential activity and traveling waves, and then add multiple recurrent terms for manipulation. Simulated neural responses shown in the figures are intended to exhibit qualitative aspects of neurophysiological phenomena, that is, the models have not (yet) been optimized to replicate published data by tuning or fitting the model parameters. The weights in the various weight matrices were prespecified (not learned) for each of the simulations in this paper (although ORGaNICs are compatible with modified versions of ML algorithms; see SI Appendix).

Sustained Activity.

We used ORGaNICs to simulate sustained activity during a memory-guided saccade task (Fig. 2), using the circuit depicted in Fig. 1A. In this task, a target is flashed briefly while a subject is fixating the center of a screen (Fig. 1B). After a delay period of several seconds, the fixation point disappears, cueing the subject to make an eye movement to the remembered location of the target.

Fig. 2.

Sustained activity. (A) Encoding matrix (Wzx), each row of which corresponds to a neuron’s response field. Graph, response field corresponding to the middle row of the matrix. (B) Recurrent weight matrix (Wŷy), each row of which corresponds to the recurrent synaptic weights from other neurons in the population. Graph, recurrent weights corresponding to the middle row of the matrix. (C) Input stimulus and reconstructed stimulus. Blue, input stimulus (x) corresponding to target location. Orange, reconstructed stimulus, computed as a weighted sum of the reconstructed input drive (D). (D) Input drive and reconstructed input drive. Blue, input drive (z) to each neuron as a function of that neuron’s preferred target location. Orange, reconstructed input drive, computed as a weighted sum of the readout (H). (E) Input drive (z) over time. Each color corresponds to a different neuron. (F) Modulator responses. Top row, a. Bottom row, b. (G) Output responses (y). Each color corresponds to a different neuron. (H) Readout (r). Each color corresponds to a different component of the readout.

The modulators in the simulation were constant during each successive phase of the behavioral task. Many experimental protocols in behavioral neuroscience comprise a sequence of distinct phases (including the oculomotor delayed response task; see the figures below for more examples). The behavioral cues built into the experimental protocol set the state of the modulators via Wax and Wbx in Eq. 2, and the state changed from one phase to the next. During each phase, the modulators were constant and the circuit reduced to a linear dynamical system, making it mathematically tractable.

Each neuron in the simulation responded selectively to target location, each with a different preferred polar angle (i.e., saccade direction) in the visual field (Figs. 1B and 2A), all with the same preferred radial position (i.e., saccade amplitude). We ignored saccade amplitude for this simulation, but it would be straightforward to replicate the circuit for each of several saccade amplitudes. The input drive z to each neuron, consequently, depended on target location and the time course of the target presentation (Fig. 2 D and E). The recurrent weights Wŷy were chosen to have a center-surround architecture; each row of Wŷy had a large positive value along the diagonal (self-excitation), flanked by smaller positive values, and surrounded by small negative values (Fig. 2B). All neurons in the circuit shared the same pair of modulators (aj = a and bj = b), that is, all of the neurons had the same state at any given point in time. The input to the circuit comprised not only the target presentation but also the time courses of 2 cues, one of which indicated the beginning of the trial (at time 0 ms) and the other of which indicated the end of the delay period (at time 3,500 ms). The response time courses of the modulators followed the 2 cues (Fig. 2F), by setting appropriate values in the weight matrices Wax and Wbx.

This circuit was capable of maintaining a representation of target location during the delay period with sustained activity (Fig. 2G). The responses followed the input drive initially (compare Fig. 2 E and G) because the value of the input modulator was set to b = 1 (via Wbx in Eq. 2) by the cue indicating the beginning of the trial. The value of b then switched to be small (= 0, corresponding to a long effective time constant) before the target was extinguished, so the output responses exhibited sustained activity (Fig. 2G). Finally, the value of the recurrent modulator was set to a ≈ 1 (via Wax in Eq. 2) by the cue indicating the end of the trial, causing the output responses to be extinguished.

The dynamics of the responses, during the delay period, depended on the eigenvalues and eigenvectors of the recurrent weight matrix Wŷy. In this particular example circuit, the recurrent weight matrix (Fig. 2B) was a symmetric 36 × 36 matrix (N = 36 was the number of neurons in the circuit, that is, each of y and z was a 36-dimensional vector). For this particular recurrent weight matrix, 19 of the eigenvalues were equal to 1, and the others had values less than 1. There is, of course, nothing special about these numbers; the circuit could include any number of neurons with any number of eigenvalues equal to 1, but providing these details makes it easier to visualize and understand. The critical issue is that the weight matrix was scaled so that the largest eigenvalues were equal to 1. (It is of course unrealistic for a biological circuit to have such precisely tuned synaptic weights, but we show below that the circuit is robust with respect to the precise tuning because of the built-in normalization.) The corresponding eigenvectors defined an orthonormal coordinate system (or basis) for the responses. The responses during the delay period (when b = 0) were determined entirely by the projection of the initial values (the responses at the very beginning of the delay period) onto the eigenvectors. Eigenvectors with corresponding eigenvalues equal to 1 were sustained throughout the delay period. Those with eigenvalues less than 1 decayed to zero (smaller eigenvalues decayed more quickly). Those with eigenvalues greater than 1 would have been unstable, growing without bound (which is why the weight matrix was scaled so that the largest eigenvalues = 1). This example circuit had a representational dimensionality d = 19, because the recurrent weight matrix had 19 eigenvalues = 1. The neural activity in this circuit was a 19-dimensional continuous attractor during the delay period. It could, in principle, maintain the locations and contrasts of up to 19 targets, or it could maintain a 19-dimensional pattern of inputs.
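The eigen-analysis above can be reproduced with a hypothetical reconstruction of a center-surround recurrent weight matrix in the spirit of Fig. 2B: self-excitation, flanking excitation, and broad surround inhibition on a ring of N = 36 neurons. The profile values are illustrative, not the paper's; with this toy profile only 2 eigenvalues reach 1 (the paper's matrix had 19), but the logic of scaling and persistence is the same.

```python
import numpy as np

N = 36
profile = np.zeros(N)
profile[0] = 1.0                 # self-excitation
profile[[1, -1]] = 0.5           # flanking excitation
profile[[2, -2]] = 0.1
profile[3:-2] = -0.05            # broad surround inhibition
W = np.stack([np.roll(profile, j) for j in range(N)])  # symmetric circulant
W /= np.linalg.eigvalsh(W).max() # rescale so the largest eigenvalue is 1

# Delay-period dynamics (b = 0): components of the initial response along
# eigenvectors with eigenvalue 1 persist; eigenvalue < 1 components decay.
evals = np.linalg.eigvalsh(W)
n_persistent = int(np.sum(np.isclose(evals, 1.0)))
```

The rescaling step is the "precise tuning" discussed in the text; Robustness via Normalization shows why the circuit tolerates errors in it.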

The input drive and target location were reconstructed from the responses, at any time during the delay period (Fig. 2 C and D). To do so, the responses were first multiplied by a readout matrix. The readout matrix Wry = Vᵀ was a 19 × 36 matrix, where the rows of Vᵀ were computed from the eigenvectors of the recurrent weight matrix Wŷy. Specifically, V was an orthonormal basis for the 19-dimensional subspace spanned by the eigenvectors of Wŷy with corresponding eigenvalues = 1. The resulting readout (Fig. 2H), at any time point, was then multiplied by a decoding (or reconstruction) matrix (SI Appendix). The result was a perfect reconstruction of the input drive (Fig. 2D, orange) up to a scale factor (because of normalization), and an approximate reconstruction of the input stimulus (Fig. 2C, orange) with a peak at the target location. The reconstruction of the input stimulus was imperfect because the response fields were broadly tuned for polar angle. Regardless, we do not mean to imply that the brain attempts to reconstruct the stimulus from the responses. The reconstruction merely demonstrates that the responses and readout implicitly represent the target location. The encoding matrix Wzx was a 36 × 360 matrix (M = 360 was the number of polar angle samples in the input stimulus). The response fields (i.e., the rows of the encoding weight matrix Wzx) were designed based on the same eigenvectors. Doing so guaranteed that the input drive was reconstructed perfectly from the responses at any time during the delay period (Fig. 2D; see SI Appendix for derivation).

Robustness via Normalization.

The sustained activity circuit, as described above, depended on precisely tuned synaptic weights. The recurrent weight matrices were scaled so that the eigenvalues were no greater than 1. For a linear recurrent circuit with eigenvalues greater than 1, the responses are unstable, growing without bound during a delay period. This is a well-known problem for recurrent neural networks (52–54).

ORGaNICs solve this problem by incorporating normalization. The normalization model was initially developed to explain stimulus-evoked responses of neurons in primary visual cortex (V1) (50) but has since been applied to explain neural activity in a wide variety of neural systems (51). The model’s defining characteristic is that the response of each neuron is divided by a factor that includes a weighted sum of activity of a pool of neurons. The model predicts and explains many well-documented physiological phenomena, as well as their behavioral and perceptual analogs.

The simulated neural circuits used the recurrent modulator a to provide normalization via feedback. The recurrent modulator determined the amount of recurrent gain; it was a particular nonlinear function of the responses: f(y) in Eq. 2 (see SI Appendix for details). For an input drive z that was constant for a period of time, the output responses achieved a stable state in which they were normalized (see SI Appendix for derivation):

$$|y_j|^2 = \frac{|z_j|^2}{\sigma^2 + \sum_k |z_k|^2}. \quad [3]$$

The responses were proportional to the input drive when the amplitude of the input drive was small (i.e., when the sum of the squared input drives was ≪ σ2). The responses saturated (i.e., leveled off) when the amplitude of the input drive was large (≫ σ2). The value of σ (the semisaturation constant) determined the input drive amplitude that achieved half the maximum response. Despite saturation, the relative responses were maintained (see SI Appendix for derivation):

$$\frac{|y_j|^2}{|y_{j'}|^2} = \frac{|z_j|^2}{|z_{j'}|^2}. \quad [4]$$

That is, the normalized responses represented a ratio between the input drive to an individual neuron and the amplitude of the input drive summed across all of the neurons. Consequently, the responses of all neurons saturated together (at the same input drive amplitude) even though some neurons responded strongly to the input whereas others did not.
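The normalization fixed point can be checked numerically. Following the text, the denominator sums the squared input drives across the whole population; the values of σ and z below are hypothetical.

```python
import numpy as np

def normalized_responses(z, sigma):
    """Steady-state responses: |y_j|^2 = |z_j|^2 / (sigma^2 + sum_k |z_k|^2)."""
    return np.sqrt(z ** 2 / (sigma ** 2 + np.sum(z ** 2)))

sigma = 0.5
z_weak = np.array([0.10, 0.05, 0.02])   # weak drive: y roughly proportional to z
z_strong = 100.0 * z_weak               # strong drive: responses saturate

y_weak = normalized_responses(z_weak, sigma)
y_strong = normalized_responses(z_strong, sigma)

# Eq. 4: response ratios between neurons survive saturation.
ratio_y = (y_strong[0] / y_strong[1]) ** 2
ratio_z = (z_strong[0] / z_strong[1]) ** 2
```

With the strong input, the summed squared response approaches (but never exceeds) 1, while the ratios between neurons match the input-drive ratios exactly.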

Recurrent normalization made the circuit robust with respect to imperfections in the recurrent weight matrix (Fig. 3). Without normalization, responses depended critically on fine tuning. For example, we used the sustained activity circuit (Figs. 1 and 2), but with f(y) = 0 so that normalization was disabled, and we scaled the recurrent weight matrix by a factor of 1.02. The responses were unstable, growing without bound (Fig. 3A). Including normalization automatically stabilized the activity of the circuit (Fig. 3B). The increases in activity evoked by the recurrent weight matrix (with largest eigenvalues = 1.02) were countered by normalization such that the total activity in the circuit was roughly constant over time (||y||2 ∼ 1). The ratios of the responses were maintained (Eq. 4), enabling an accurate readout, throughout the delay period. Analogous results were obtained with the other example circuits described below, including those that exhibited oscillatory and sequential dynamics, because the normalization depends on the squared norm of the responses, which was constant over time during the delay period for each of these example circuits. The stability of the normalized responses did not depend on fine-tuning any of the other synaptic weights in the circuit; perturbing those synaptic weights by random values within ±5% yielded virtually identical simulated responses and the responses were stable even when those synaptic weights were perturbed by random values ranging from 0.5× to 2× (see SI Appendix for details). We have also implemented a generalization of this recurrent normalization circuit in which each neuron’s response can be normalized by an arbitrary (nonnegative) weighted sum of the other neurons in the circuit.
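A one-neuron reduction illustrates the robustness argument. The recurrent gain w = 1.02 is mis-tuned above 1, as in Fig. 3. Without normalization the response grows without bound; tying the recurrent modulator to the response via a = f(y) = y² (a hypothetical stand-in for the paper's normalization feedback; τ and dt are also illustrative) gives a stable fixed point at y = sqrt(w − 1).

```python
def simulate(w, normalize, y0=0.01, tau=0.01, dt=1e-4, t_end=5.0):
    """Euler-integrate tau dy/dt = -y + w*y/(1 + a), with a = y**2 or 0."""
    y = y0
    for _ in range(int(t_end / dt)):
        a = y ** 2 if normalize else 0.0
        y += dt * (-y + w * y / (1.0 + a)) / tau
    return y

y_unstable = simulate(w=1.02, normalize=False)  # grows exponentially
y_stable = simulate(w=1.02, normalize=True)     # settles near sqrt(0.02)
```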

Fig. 3.

Normalization. (A) Output responses (y), corresponding to the sustained activity circuit depicted in Figs. 1 and 2, but with the recurrent weight matrix scaled by a factor of 1.02. Each color corresponds to a different neuron. (Inset) Full range of responses on an expanded (240×) ordinate. (B) Output responses with normalization. Dashed oval, high frequency, coherent, synchronized oscillations following target onset.

The normalized responses exhibited high-frequency oscillations following target onset that were synchronized across all of the neurons in the circuit (Fig. 3B, dashed oval). There are 2 nested recurrent circuits in ORGaNICs: 1) the recurrent drive and 2) the multiplicative modulators. The high-frequency oscillations emerged because of the inherent delay in the second of these recurrent circuits (i.e., because of the multiplicative modulator underlying normalization). The oscillation frequency depended on the membrane time constants. For the time constants used for Fig. 3, the responses exhibited oscillations in the gamma frequency range. Different intrinsic time constants yielded different oscillation frequencies. The oscillation frequency would have depended also on axon length if we were to include conduction delays.

The responses exhibited lower-frequency oscillations during the delay period (Fig. 3B). These lower-frequency oscillations emerged because of the recurrent drive in combination with normalization; the recurrent weight matrix was scaled to have eigenvalues greater than 1, which drove the responses to increase over time, but this increase was countered by normalization. These oscillations were synchronized so the ratios of the responses were maintained (Eq. 4), enabling an accurate readout, despite the oscillations.

Sequential Activity.

ORGaNICs can be used to generate delay-period activity with complex dynamics, including sequential activity and traveling waves of activity, in addition to sustained activity, and the same theoretical framework was used to analyze them. The key idea is that the recurrent weight matrix can have complex-valued eigenvectors and eigenvalues. One way for this to happen is when the recurrent weights and output responses are complex-valued (SI Appendix, Fig. S4). The complex-number notation is just a notational convenience (SI Appendix). Another way to generate complex dynamics is for the recurrent weight matrix to be real-valued but asymmetric, such that the responses are real-valued but the eigenvectors and eigenvalues are complex-valued.

One such example circuit was designed to generate sequential activity (Fig. 4). In this example circuit, there were again 36 neurons with the same response fields as in the preceding example (Fig. 2A). The modulators were also the same as in the preceding example, including recurrent normalization. The recurrent weight matrix was real-valued but asymmetric (Fig. 4A). Because of the asymmetry, the eigenvectors and eigenvalues of the recurrent weight matrix were complex-valued, and the output responses exhibited oscillatory dynamics (Fig. 4B). The recurrent weight matrix was designed so that the recurrent connectivity depended on the spatial derivative of the neural activity (55), that is, the difference in activity between nearby neurons (SI Appendix). Consequently, the activity was handed off from one neuron to the next during the delay period, analogous to a synfire chain (56–59), but with activity that continuously tiled time (60).

Fig. 4.

Sequential activity. (A) Recurrent weight matrix (Wŷy). Graph, recurrent weights corresponding to the middle row of the matrix. (B) Output responses (y). Each color corresponds to a different neuron. Successive rows, responses of a few example neurons. (C) Readout (r+). Each color corresponds to a different component of the readout.

Despite the complex dynamics, the readout was constant over time (Fig. 4C). The readout matrix was again, as for the preceding sustained activity circuit (Fig. 2), computed as a unitary basis for the subspace spanned by the eigenvectors of Wŷy with corresponding eigenvalues that had real parts = 1. However, the readout was computed as r+ = |Wry y|, that is, the modulus (square root of the sum of squares of real and imaginary parts) of a weighted sum of the responses. Consequently, this circuit was capable of maintaining some (but not all) information about the input during the delay period. Unlike the preceding example, it was not possible to reconstruct the input drive from the readout at arbitrary points in time during the delay period. A linear reconstruction (like that used for the preceding example) generated a copy of the input drive that shifted over time like a traveling wave (SI Appendix, Fig. S5). That is, the information maintained during the delay period was sufficient for discriminating some inputs (e.g., 2 targets with different contrasts or 2 pairs of targets with different spacings) but incapable of discriminating between other inputs (e.g., a single target of the same contrast presented at 2 different locations).
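A sketch of derivative-like recurrent connectivity makes the asymmetry argument concrete: on a hypothetical ring of N = 36 neurons, each neuron's recurrent drive is its own response plus a scaled difference between its two neighbors (a discrete spatial derivative). The coupling value 0.5 is an illustrative choice, not the paper's matrix. The matrix is asymmetric, so its eigenvalues are complex, and here their real parts are exactly 1, matching the persistent-but-oscillatory regime described in the text.

```python
import numpy as np

N = 36
W = np.eye(N)
for j in range(N):
    W[j, (j + 1) % N] += 0.5   # difference between neighbors:
    W[j, (j - 1) % N] -= 0.5   # asymmetric, derivative-like coupling

evals = np.linalg.eigvals(W)   # eigenvalues 1 + i*sin(2*pi*m/N)

# Delay-period dynamics tau dy/dt = -y + W y: the -y cancels the identity
# part of W, leaving a norm-preserving rotation that passes activity around
# the ring rather than letting it sit on one neuron.
tau, dt = 0.01, 1e-4
y = np.zeros(N)
y[0] = 1.0                     # activity starts concentrated on one neuron
for _ in range(1000):          # 0.1 s of delay-period dynamics
    y += dt * (-y + W @ y) / tau
```

After the simulated delay the initial pulse has dispersed across many neurons while the total response magnitude is (approximately) conserved, which is why a modulus-style readout can stay constant despite the complex dynamics.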

Motor Preparation and Motor Control.

ORGaNICs are also capable of generating signals, like those needed to execute a complex sequence of movements (e.g., speech, bird song, backside double McTwist 1260 on a snowboard out of the halfpipe). Some actions are ballistic (open loop), meaning that they are executed with no sensory feedback during the movement. Others are closed loop, meaning that the movements are adjusted on the fly based on sensory feedback. ORGaNICs evoke patterns of activity over time that may underlie the execution of both open- and closed-loop movements.

An example of open-loop control (Fig. 5) was implemented using the sequential activity circuit described above, but with a different readout. The encoding matrix and the recurrent matrix were identical to those in the sequential activity circuit. The modulators were also the same as in the preceding examples, including recurrent normalization. The readout was different, simply summing the components, rΣ = Σ Re(Wry y). Different spatial patterns of inputs led to different temporal dynamics of the responses. When the input was chosen to drive a particular eigenvector (i.e., because the input drive was orthogonal to the other eigenvectors), then the readout during the period of motor execution (same as the delay period in the preceding example circuits) was a 1-Hz sinusoid (Fig. 5A). When the input was chosen to drive another eigenvector, then the readout was an 8-Hz sinusoid (Fig. 5C). A linear sum of these inputs evoked a readout that was proportional (because of normalization) to the linear sum of the readouts (Fig. 5D).

Fig. 5.

Motor preparation and motor control. (A) Input drive and readout corresponding to input that drives only the 1-Hz component of the recurrent weight matrix. (A, Left) Input drive (z), spatial pattern of activity across the 36 neurons during the premotor time period (250 to 500 ms). (A, Right) Readout (rΣ) over time. Vertical dashed lines, times corresponding to curves in B. (B) Responses exhibit an oscillating traveling wave of activity. Different colors correspond to different time points, indicated in A. (C) Input drive and readout corresponding to the 8-Hz component of the recurrent weight matrix. Same format as A. (D) Summing the inputs from A and C evokes the sum of the responses. (E) Input drive from A is shifted in space, generating a readout that is shifted in time.

How are these temporal profiles of activity generated? Each eigenvector of the recurrent weight matrix is associated with a basis function, a pattern of activity across the population of neurons and over time. Each basis function is a complex exponential (i.e., comprising sine and cosine), the frequency of which is specified by the imaginary part of the corresponding eigenvalue:

ωi = (1,000 / (2π τy)) Im(λi). [5]

Here, λi is the ith eigenvalue of the recurrent weight matrix, Im(λi) is its imaginary part, and ωi is the corresponding oscillation frequency (in hertz). The factor of 1,000 is needed because the time constant τy is presumed to be specified in milliseconds but the oscillation frequency is specified in hertz (cycles per second). The responses exhibit an oscillating traveling wave (Fig. 5B); the response of any individual neuron oscillates over time and the entire pattern of activity across the population of neurons shifts over time (Fig. 5B, orange – yellow – purple – green – cyan – red). For inputs corresponding to different eigenvectors, the responses oscillate at correspondingly different frequencies (Fig. 5C). The frequencies of the various components corresponding to each of the eigenvalues, for this particular recurrent weight matrix, included a number of other frequencies in addition to the 1- and 8-Hz components shown in the figure. Motor control signals with any arbitrary phase, for each of the frequency components, can be generated by shifting the input drive (Fig. 5E). That way, all combinations of amplitudes, frequencies, and phases can be generated just by changing the spatial pattern of premotor activity, with a fixed, linear readout. This dovetails with experimental evidence demonstrating that the function of motor preparation is to set the initial conditions that generate the desired movement (61–63), and that complex movements are based on a library of motor primitives (64, 65).
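Eq. 5 can be checked numerically (a sketch assuming τy = 10 ms, a value chosen here only for illustration): pick an eigenvalue whose imaginary part should yield 8 Hz, simulate that single eigenmode for 1 s, and count oscillation cycles.

```python
import numpy as np

tau_y = 10.0                                 # ms (assumed)

def freq_hz(lam, tau_y=tau_y):
    """Oscillation frequency in hertz from an eigenvalue (Eq. 5)."""
    return (1000.0 / (2 * np.pi * tau_y)) * lam.imag

# An eigenvalue with real part 1 (no decay) chosen to give 8 Hz:
lam = 1.0 + 1j * 8.0 * (2 * np.pi * tau_y / 1000.0)

# Simulate that eigenmode of tau_y * dy/dt = -y + lam * y for 1 s and count
# downward zero crossings of the real-valued response (one per cycle):
dt = 0.01                                    # ms
y, prev, crossings = 1.0 + 0j, 1.0, 0
for _ in range(int(1000.0 / dt)):
    y = y + (dt / tau_y) * (-y + lam * y)
    if prev > 0 >= y.real:
        crossings += 1
    prev = y.real
# crossings == 8, matching freq_hz(lam) == 8.0
```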

The readout for open-loop control is, in general, a linear sum of the responses rΣ. The readout matrix for short-term memory, in the preceding sustained activity circuit (Fig. 2), comprised eigenvectors of the recurrent weight matrix to ensure that the input was recovered during the delay period. However, recovering the input is not the goal for open-loop control. Rather, a sum of the (co)sinusoidal basis functions was used to generate motor control signals for ballistic (open-loop) movements.

ORGaNICs may also generate more complicated control signals. The basis functions are damped oscillators when the modulators are greater than 0 but equal to one another (a = b) and constant over time, and when the input is constant over time. If the input is time-varying, then the responses depend on a linear combination of the inputs and the basis functions, and the responses may be used for closed-loop control. If the modulators are also time-varying, and different for each neuron, then the responses may exhibit a wide range of dynamics, with the capability (by analogy with LSTMs) of solving relatively sophisticated tasks (see Introduction for references).
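The damped-oscillator regime can be illustrated with a single eigenmode (a sketch with assumed values for τ, a, and the eigenvalue): with zero input and constant, equal modulators a = b > 0, the recurrent gain 1/(1 + a) < 1 pulls the effective eigenvalue inside the unit circle, so the mode oscillates while decaying.

```python
import numpy as np

tau, a, dt = 10.0, 0.5, 0.01                 # ms, unitless, ms (assumed)
lam = 1.0 + 0.3j                             # eigenvalue of W_yhat_y, Re = 1

# One eigenmode with zero input: tau * dy/dt = -y + lam * y / (1 + a).
y = 1.0 + 0j
amps = []
for t in range(100_000):                     # 1 s of simulated time
    y = y + (dt / tau) * (-y + lam * y / (1 + a))
    if t % 10_000 == 0:
        amps.append(abs(y))                  # envelope sampled every 100 ms
# amps decreases monotonically toward zero: a damped oscillation
```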

Manipulation: Spatial Updating.

A simulation of the double-step saccade task illustrates how ORGaNICs can both maintain and manipulate information over time (Fig. 6). In this task, 2 targets are shown while a subject is fixating the center of a screen (Fig. 6 A, Upper). A pair of eye movements are then made in sequence to each of the 2 targets. Eye movements are represented in the brain using retinotopic, that is, eye-centered, coordinates (Fig. 6 A, Upper, red lines). Consequently, after making the first eye movement, the plan for the second eye movement must be updated (Fig. 6 A, Lower; the solid red line copied from the upper panel no longer points to the second target). This is done by combining a representation of the target location with a copy of the neural signals that control the eye muscles (i.e., corollary discharge) to update the planned eye movement (Fig. 6 A, Lower, dashed red line).

Fig. 6.

Spatial updating. (A) Double-step saccade task. (A, Top) Targets presented. (A, Bottom) After eye movement to target 1. White dots, targets. Black cross-hairs, eye position. Solid red lines, planned eye movements without updating. Dashed red line, planned eye movement after updating. (B) Recurrent weight matrices. (Top) Recurrent weight matrix corresponding to modulator a1 for maintaining a representation of the target locations. (Middle and Bottom) Recurrent weight matrices corresponding to modulators a2 and a3 for updating the representation with leftward and rightward eye movements, respectively. (C) Input stimulus and reconstructed stimulus. Blue, input stimulus (x) corresponding to the 2 target positions. Orange, reconstructed stimulus, computed as a weighted sum of the readout (F). (Top) Before eye movement to target 1. (Bottom) After eye movement to target 1. (D) Modulator responses. (Top) a1. (Middle) a2 (blue) and a3 (orange). (Bottom) b. (E) Output responses (y). (Top) Time course of activity, with different colors corresponding to different neurons. (Bottom) Responses for each of several time points (different colors correspond to different time points) while updating the neural representation of the target locations. (F) Readout (r). Dashed vertical lines in D–F correspond to the snapshots in A.

The example circuit in Fig. 6 received 2 types of inputs: 1) the target locations at the beginning of the trial (Fig. 6 C, Top, blue) and 2) a corollary discharge of the impending eye movement. The targets were assumed to be along the horizontal meridian of the visual field. There were again 36 neurons, but unlike the preceding examples, each neuron responded selectively to a different eccentricity along the horizontal meridian of the visual field (i.e., degrees of visual angle away from fixation), not different polar angles around fixation at a fixed eccentricity. The encoding matrix Wzx was analogous to that in the preceding examples, but the neurons were selective for target eccentricity instead of polar angle. Readout and reconstruction were the same as for the sustained activity circuit (Fig. 2).

What distinguishes this example circuit from the preceding examples is that there were 3 recurrent weight matrices (Fig. 6B), the first for maintaining a representation of the target locations (Fig. 6 B, Top), the second for changing the representation with leftward eye movements (Fig. 6 B, Middle), and the third for changing the representation with rightward eye movements (Fig. 6 B, Bottom). As in the preceding examples, the modulators were the same for each neuron in the circuit. Consequently, we can modify Eq. 1:

τy dy/dt = −y + (b+/(1 + b+)) z + (1/(1 + a1+)) ŷ1 + (a2+/(1 + a2+)) ŷ2 + (a3+/(1 + a3+)) ŷ3, [6]
ŷk = Wŷky y,

where the subscript k indexes over the 3 recurrent weight matrices. The first recurrent weight matrix was identical to that in the sustained activity circuit (Fig. 2B). The second recurrent weight matrix was a discrete approximation to the derivative of the responses (SI Appendix), and the third was the negative derivative matrix (i.e., the second and third recurrent matrices differed from one another by a factor of −1). To accommodate 2 dimensions of eye movements, the input drive would depend on 2-dimensional response fields tiling the visual field, and the recurrent drive would depend on 5 recurrent weight matrices, one to maintain the current eye position, a pair for the horizontal component of movements, and another pair for the vertical component (or likewise a pair for the polar angle component of movements and another pair for the radial component).

The modulators were used to encode and update a representation of the target locations (Fig. 6D). As in the preceding examples, the responses followed the input drive at the beginning of the simulated trial because the input modulator was set to b = 1 (via Wbx in Eq. 2) by the cue indicating the beginning of the trial. The value of b then switched to be small (= 0) before the targets were extinguished, so the output responses exhibited sustained activity that represented the original target locations (Fig. 6 C, Top, orange). The modulator a1 was responsible for recurrent normalization, as in the preceding example circuits. The modulator a3 was nonzero for a period of time beginning just prior to the eye movement (Fig. 6 D, Middle, orange). The amplitude of a3 and duration of time during which it was nonzero determined the magnitude of updating, that is, corresponding to the amplitude of the impending saccade (for an eye movement in the opposite direction, the amplitude of a2, instead of a3, would have been nonzero). Finally, the value of the recurrent modulator was set to a1 ≈ 1 (via Wax in Eq. 2) by the cue indicating the end of the trial, causing the output responses to be extinguished.

The output responses exhibited a traveling wave of activity across the topographic map of target locations during the period of time when the neural representation of the targets was updated (Fig. 6E). The readout (Fig. 6F) encoded the 2 target locations, both before and after updating. The readout and decoding matrices were identical to those in the sustained activity circuit (Fig. 2). Preceding the eye movement, the original target locations were reconstructed from the readout (Fig. 6 C, Top, orange curve). After the eye movement, the updated target locations were reconstructed (Fig. 6 C, Bottom).
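The gated updating in Eq. 6 can be reduced to a few lines (a simplified Python sketch with assumed parameters, holding b and a1 at 0, i.e., a delay period with no input drive and no normalization/reset; the full circuit is in SI Appendix). W1 is the identity (maintain), and W3 is a discrete derivative matrix that shifts the stored bump when its modulator a3 is switched on by the corollary discharge.

```python
import numpy as np

N, tau, dt = 36, 10.0, 0.1                   # assumed values

W1 = np.eye(N)                               # maintain the representation
W3 = np.zeros((N, N))                        # discrete derivative on a ring
for i in range(N):
    W3[i, (i - 1) % N] = 1.0
    W3[i, (i + 1) % N] = -1.0
# (W2, for updating in the opposite direction, would be -W3.)

def step(y, a3):
    """One Euler step of Eq. 6 with b = 0 and a1 = 0."""
    yhat = W1 @ y + (a3 / (1 + a3)) * (W3 @ y)
    return y + (dt / tau) * (-y + yhat)

y = np.exp(-0.5 * ((np.arange(N) - 10) / 2.0) ** 2)   # stored target location
start = np.argmax(y)

for _ in range(2000):                        # maintenance: a3 = 0, bump is static
    y = step(y, 0.0)
held = np.argmax(y)

for _ in range(500):                         # updating: a3 = 1, bump travels
    y = step(y, 1.0)
updated = np.argmax(y)
# held == start, while updated is shifted by a few neurons
```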

Manipulation: Time Warping and Time Reversal.

A challenge for models of motor control is to generate movements at different speeds, for example playing a piece of piano music, generating speech (66), or generating birdsong (67) at different tempos. Likewise, a challenge for models of sensory processing is that perception must be tolerant with respect to compression or dilation of temporal signals, for example listening to fast vs. slow speech (68). A possible mechanism for time warping is to scale the time constants of the neurons (69), all by the same factor, which scales the oscillation frequencies by the inverse of that scale factor (Eq. 5). A fixed value for the scale factor would handle linear time rescaling in which the entire input (and/or output) signal is compressed or dilated accordingly. A neural circuit might compute a time-varying value for the scale factor, based on the inputs and/or outputs, to handle time-varying time warping.

Here, we offer a different mechanism for time warping (also time reversal), making use of the modulators. An example open-loop motor control circuit was implemented that enabled time warping and time reversal (Fig. 7). The encoding matrix and the recurrent matrix were identical to those in the spatial updating example (Fig. 6). The a1 and b modulators were also the same as in the spatial updating example, but the time courses of the other 2 modulators a2 and a3 were different (Fig. 7A). The readout was the same as that in the motor control circuit (Fig. 5), summing across the components rΣ. The input was chosen to drive all of the eigenvectors with randomly chosen amplitudes and phases. Different values of the a2 and a3 modulators generated control signals that were time-warped and/or time-reversed. Increasing the modulator response from 1 to 5/3 caused the readout to increase in tempo by 25% (compare Fig. 7 B and C); tempo was proportional to a2/(1 + a2). A time-varying modulator generated time-varying time warping. The circuit exhibited these phenomena because the responses exhibited oscillating traveling waves (Fig. 5B). The readout was a sum of these traveling waves, and the speed of the traveling waves was controlled by the modulators (SI Appendix). When a3 (instead of a2) was nonzero, the readout was time reversed (compare Fig. 7 B and D) because the traveling waves of activity moved in the opposite direction.
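The 25% figure follows directly from the stated proportionality (a small check, assuming only that tempo is proportional to a2/(1 + a2)):

```python
from fractions import Fraction

def tempo(a):
    """Speed of the traveling-wave readout, proportional to a2/(1 + a2)."""
    return Fraction(a) / (1 + Fraction(a))

speedup = tempo(Fraction(5, 3)) / tempo(1)
# speedup == Fraction(5, 4): a 25% increase in tempo, as in Fig. 7 B and C
```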

Fig. 7.

Time warping and time reversal. (A) Modulator responses. (B) Readout for a2 = 1 and a3 = 0. (C) Time-warped readout for a2 = 5/3 and a3 = 0. (D) Time-reversed readout for a2 = 0 and a3 = 1.

Discussion

We developed a theoretical framework for neural dynamics called ORGaNICs and applied it to simulate key phenomena of working memory and motor control. We demonstrated the following results. 1) Working memory: ORGaNICs can simulate delay-period activity with complex dynamics, including sequential activity and traveling waves of activity, to maintain and manipulate information over time. Derivative-like recurrent connectivity, in particular, generated traveling waves of activity. We propose that these traveling waves play a role in circuit function to manipulate and update internal models. 2) Motor control: The exact same circuits (with the same synaptic weights) were used to generate signals with complex motor dynamics, by converting spatial patterns of premotor activity to temporal profiles of motor control activity. Different spatial patterns of premotor activity evoked different motor control dynamics. These circuits were controlled to manipulate (e.g., time-warp) the motor dynamics. 3) Normalization: Recurrent normalization, via the recurrent modulator, ensured stability over time and robustness with respect to perturbations of synaptic weights. 4) Mechanism: ORGaNICs can be implemented with a simplified biophysical (equivalent electrical circuit) model of pyramidal cells (see SI Appendix and ref. 46). There is considerable flexibility in the formulation of ORGaNICs, with different variants corresponding to different hypothesized neural circuits (SI Appendix). We demonstrated all of the above results with 2 circuits; the first circuit generated the simulation results in Figs. 2–5 and the second one generated Figs. 6 and 7, noting that the first circuit is equivalent to a special case of the second one.

Because they are generalizations of LSTMs, ORGaNICs can solve tasks that are much more sophisticated than the typical delayed-response tasks used in most cognitive psychology and neuroscience experiments. Indeed, although this is not an ML paper, we note that ORGaNICs may offer computational advantages compared to varieties of LSTMs that are commonly used in ML applications (see SI Appendix and ref. 46).

This theoretical framework, of course, includes components previously proposed in the computational/theoretical neuroscience literature, and the ML literature, that have achieved some of the same goals (70–87). However, with ORGaNICs we show that a single unified circuit architecture captures key neurophysiological phenomena associated with sensory, cognitive, and motor functions, each of which has been modeled separately in the previously published literature. Unlike linear recurrent neural networks, the modulators in ORGaNICs introduce nonlinearities (analogous to the gates in LSTMs) that can perform multiple functions including handling long-term dependencies and providing robustness via normalization (discussed below). Unlike most nonlinear recurrent neural nets, ORGaNICs are mathematically tractable, making it possible to derive concrete, quantitative predictions that can be fit to experimental measurements. The theory is tractable when the modulators are constant, that is, during each successive phase of a behavioral task. In addition, the responses of the normalization circuit follow the normalization equation (Eq. 3) exactly, so that this circuit makes predictions that are identical to those of the normalization model, thereby preserving all of the desirable features of that model, which has been fit to hundreds of experimental datasets. In classic work on neural fields (88–90), by contrast, diverse patterns of activity are accomplished by biasing a nonlinear network to different operating points, each having a different solution that can be approximated by local linearization. Here, we start with a linear dynamical system that is fully tractable, characterized by the eigenvalues and eigenvectors of the linear system, but also limited to only those patterns of activity that can be expressed as linear sums of the eigenvectors.
We circumvent this limitation with the modulators that shape solutions to dynamically change the eigenstructure of the linear system; for each choice of values for the modulators, we have a different linear system. Unlike black-box ML approaches, ORGaNICs provide insight; for example, we understand exactly when and how it is possible to reconstruct an input by reading out the responses during the delay period of a working memory task and how to generate motor control signals with complex dynamics (see SI Appendix for derivations). ML algorithms are particularly useful for computing solutions to optimization problems (e.g., model fitting via gradient descent), and we plan to use ML implementations of ORGaNICs to fit experimental data. ML approaches can also provide inspiration for neuroscience theories (and vice versa), like the links presented here between ORGaNICs and LSTMs. Left open in the current paper is how the weights in the various weight matrices emerge through development and/or learning. We engineered the weights to demonstrate the computational capabilities of this theoretical framework and to illustrate that the theory can reproduce neurobiological phenomena (although ORGaNICs are compatible with modified versions of ML algorithms; see SI Appendix). Some of the previously published literature (cited above) focuses on learning. However, having the right circuit architecture is a prerequisite for developing an accurate model of learning.

We propose that ORGaNICs can serve as a unifying theoretical framework for neural dynamics, a canonical computational motif based on recurrent amplification, gated integration, reset, and controlling the effective time constant. Rethinking cortical computation in these terms should have widespread implications, some of which are elucidated in the paragraphs that follow (see also SI Appendix).

Sustained delay-period activity and sequential activity are opposite sides of the same coin. ORGaNICs, a straightforward extension of leaky neural integrators and neural oscillators, provide a unified theoretical framework for sustained activity (Fig. 2), oscillatory activity (SI Appendix, Fig. S4), and sequential activity (Fig. 4), just by changing the recurrent weight matrix. Indeed, ORGaNICs can switch between these different behaviors. The spatial updating circuit, for example, exhibits sustained activity during the delay periods and sequential activity coincident with the eye movement (Fig. 6). The modulators a2 and a3 do the job of toggling between sustained and sequential. We assert that complicated dynamics is the norm, to support manipulation as well as maintenance (e.g., Fig. 6).

ORGaNICs can be used to generate motor control signals, with the very same circuits used to model working memory, just by changing the readout. The circuits convert spatial patterns of input (premotor) activity to temporal profiles of output (motor control) activity. Different spatial patterns of premotor activity evoke motor control outputs with different temporal response dynamics (e.g., as in Figs. 5 and 7), and the modulators provide a means for manipulating (time warping and time reversal) the dynamics (Fig. 7).

ORGaNICs are applicable also to models of sensory integration (e.g., integrating corollary discharge in Fig. 6) and sensory processing (e.g., with normalization as in Fig. 3). ORGaNICs may be stacked in layers such that the inputs to one ORGaNIC are the outputs from one or more other ORGaNICs. Particular stacked architectures encompass convolutional neural nets (i.e., deep nets) as a special case: specifically, when the encoding/embedding weight matrices are convolutional and when the modulators are large (aj = bj ≫ 0) such that the output responses from each layer are dominated by the input drive to that layer. Consequently, working memory, motor control, sensory processing (including prediction over time; see SI Appendix and ref. 46), and possibly other cognitive functions (in addition to working memory, such as cognitive control, for example controlling attention) may all share a common canonical computational foundation.

Derivative-like recurrent connectivity (55) simulates sequential activity and traveling waves of activity (Figs. 4–7), and we propose that these traveling waves play a particular role in circuit function. Weight matrices with derivative-like weights are a mainstay of feed-forward models of sensory processing (91, 92), but the contribution of derivative-like weights in recurrent connectivity has been underappreciated. Traveling waves are ubiquitous in cortical activity, but their functional role has remained a mystery (93). We used recurrent weight matrices based on derivatives (i.e., the difference in activity between nearby neurons) to evoke traveling waves of activity that functioned to support manipulation. The traveling waves served to transform spatial patterns of premotor activity to temporal patterns of motor control activity (Figs. 5 and 7) or to update internal models (working memory representations) whether or not there was an overt movement (Fig. 6).

Why do some neural circuits exhibit sustained activity while others exhibit sequential activity, and what are the relative advantages or disadvantages of each? Sustained activity circuits are useful for short-term memory (i.e., maintenance), but not for other cognitive functions that require manipulation and control. For sustained-activity circuits, a simple linear readout of the responses can be used to reconstruct the input drive (and to approximately reconstruct the input stimulus), at any point in time during a delay period (Fig. 2). In addition, sustained-activity circuits are likely to be more robust than sequential-activity circuits, because all of the components share the same dynamics. Sequential-activity circuits, on the other hand, offer much more flexibility. The same circuit, with the same fixed recurrent weight matrix and the same fixed encoding matrix, can support multiple different functions just by changing the readout. For example, the sequential-activity circuit (Fig. 4) and the motor-control circuit (Fig. 5) were identical except for the readout. For the sequential-activity circuit (Fig. 4), a (nonlinear) modulus readout generated an output that was constant over time (i.e., to support maintenance). For the motor-control circuit (Fig. 5), a linear readout was used to generate control signals as sums of (co)sinusoidal basis functions with various different frequencies and phases. Likewise, the spatial-updating circuit (Fig. 6) and the time-warping/time-reversal circuit (Fig. 7) were identical. This circuit can be used to perform working memory (maintenance and manipulation), and the same circuit (without changing the encoding or recurrent weights) can be used to execute movements with complex dynamics. One way to implement this, for example, would be to have 2 different brain areas with stereotypical intrinsic circuitry (i.e., identical recurrent weights) that support 2 different functions with different readouts. 
Indeed, there is experimental evidence that different brain areas support different functions with similar circuits, for example parietal areas underlying working memory maintenance and PFC areas underlying motor planning (94). Alternatively, the output from a single circuit could innervate 2 different brain areas, one of which performs the first readout and the other of which performs the second readout, or a single brain area might switch between 2 different readouts (e.g., using a gating mechanism analogous to the modulators in ORGaNICs), corresponding to different behavioral states, without changing the intrinsic connectivity within the circuit. This makes biological sense. Rather than having to change everything (the encoding matrix, the recurrent matrix, the modulators, and the readout), you need only change one thing (the readout matrix) to enable a wide variety of functions. This is not possible with recurrent weight matrices that exhibit sustained activity, simply because there is only a single mode of dynamics (constant over time).

The modulators perform multiple functions and can be implemented with a variety of circuit, cellular, and synaptic mechanisms. The time-varying values of the modulators determine the state of the circuit by controlling the recurrent gain and effective time constant of each neuron in the circuit. The multiple functions of the modulators include normalization (Fig. 3), maintenance (Figs. 2–7), controlling pattern generators (Figs. 5 and 7), gated integration/updating (Fig. 6), time warping and time reversal (Fig. 7), reset (Figs. 2–7), controlling the effective time constant (SI Appendix, Fig. S1), controlling the relative contributions of bottom-up versus top-down connections (95), representing and weighting the reliability of sensory evidence (likelihood) and internal model (prior, expectation) for inference, prediction over time, and multisensory integration (95). ORGaNICs may have multiple recurrent weight matrices, each multiplied by different recurrent modulators, to perform combinations of these functions (Eq. 6 and Figs. 6 and 7). Some of the modulator functions need to be fast and selective (e.g., normalization), likely implemented in local circuits. A variety of mechanisms have been hypothesized for adjusting the gain of local circuits (96–98). Some modulator functions might depend on thalamocortical loops (20, 99–101). Other modulator functions are relatively nonselective and evolve relatively slowly over time and may be implemented with neuromodulators (102–105).

Recurrent normalization, as implemented with ORGaNICs (Fig. 3), is consonant with the idea that normalization operates via recurrent amplification, that is, that weak inputs are strongly amplified but that strong inputs are only weakly amplified. Several hypotheses for the recurrent circuits underlying normalization have been proposed (50, 51, 96, 106–108), but most of them are inconsistent with experimental observations suggesting that normalization is implemented via recurrent amplification (109–114). ORGaNICs offer a family of dynamical systems models of normalization, each of which comprises coupled neural integrators to implement normalization via recurrent amplification (SI Appendix). When the input drive is constant over time, the circuit achieves an asymptotic stable state in which the output responses follow the normalization equation exactly (Eq. 3).
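One minimal dynamical sketch of this idea (not the paper's exact circuit; the precise form of Eq. 3 and the derivations are in SI Appendix) pools the squared input drives into a modulator that divisively gates the responses. At steady state the responses follow the standard divisive normalization equation, yi = zi²/(σ² + Σj zj²), which is assumed here to correspond to Eq. 3.

```python
import numpy as np

tau, dt, sigma = 10.0, 0.1, 0.5              # assumed values
z = np.array([0.2, 1.0, 3.0])                # input drives (assumed)

y, a = np.zeros_like(z), 0.0
for _ in range(20_000):                      # 2 s, ample time to converge
    a = a + (dt / tau) * (-a + sigma**2 + np.sum(z**2))   # pooled modulator
    y = y + (dt / tau) * (-y + z**2 / a)                  # divisively gated drive

target = z**2 / (sigma**2 + np.sum(z**2))    # normalization equation
# y converges to target: weak inputs get a large effective gain, strong
# inputs a small one, because the denominator grows with total drive
```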

There is a critical need for developing behavioral tasks that animal models are capable of learning, and that involve both maintaining and manipulating information over time. ORGaNICs (and LSTMs) manage long-term dependencies between sensory inputs at different times, using a combination of gated integration and reset. Typical delayed-response tasks like the memory-guided saccade task are appropriate for studying what psychologists call “short-term memory,” but they are weak probes for studying working memory (115–118), because those tasks do not involve manipulation of information over time. Behavioral tasks that are popular in studies of decision making involve integration of noisy sensory information (30, 32) or integration of probabilistic cues (119). Variants of these tasks (31, 34) might be used to test the gated integration and reset functionality of ORGaNICs. The antisaccade task (120–123) and the double-step saccade task (124–126) might also be used, with delay periods, to test the theory and to characterize how cortical circuits manage long-term dependencies.

Finally, the theory motivates a variety of experiments, some examples of which are as follows. First, the theory predicts that the modulators change the effective time constant and recurrent gain of a PFC neuron. Experimental evidence suggests that the modulatory responses are computed in the thalamus (2, 20, 99). Consequently, manipulating the responses of these thalamic neurons (e.g., via optogenetics) should have a particular impact on both the time constant and recurrent gain of cortical neurons. Second, the specific biophysical implementation (SI Appendix, Fig. S6) predicts that the soma and basal dendrites share input drive, but with opposite sign. This would, of course, have to be implemented with inhibitory interneurons. Third, the theory predicts that neural activity underlying motor control and working memory is normalized. Normalization might be measured in motor cortex by comparing activity when making each of 2 simple movements vs. the combination of those movements simultaneously, or by comparing activity in one subpopulation of neurons with and without optogenetic stimulation of a separate subpopulation of neurons. Normalization might be measured in working memory circuits by comparing activity when maintaining one item versus multiple items during a delay period (127–129). Fourth, following previous research (130), a model based on ORGaNICs may be fit to behavioral and neurophysiological measurements of working memory. Trial-to-trial variability of behavioral performance during a working memory task has been shown to be linked with trial-to-trial variability in delay-period activity. These data might be fit by adding noise to the responses and/or synaptic weights, leading to drift in activity ratios during a delay period.
Fifth, as noted above, variants of sensory integration tasks might be used to test the gated integration and reset functionality of ORGaNICs, and variants of the antisaccade and double-step saccade tasks might also be used, with delay periods, to characterize how cortical circuits manage long-term dependencies.

Supplementary Material

Supplementary File

Acknowledgments

We thank Mike Landy, Eero Simoncelli, Charlie Burlingham, Gyuri Buzsaki, Mike Long, Kenway Louie, Roozbeh Kiani, E. J. Chichilnisky, and Lindsey Glickfeld for comments and discussion.

Footnotes

The authors declare no competing interest.

Data deposition: The MATLAB code for recreating the simulation results is available at https://archive.nyu.edu/handle/2451/60439.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1911633116/-/DCSupplemental.

References

1. Jacobsen C. F., Functions of frontal association area in primates. Arch. Neurol. Psychiatry 33, 558–569 (1935).
2. Fuster J. M., Alexander G. E., Neuron activity related to short-term memory. Science 173, 652–654 (1971).
3. Fuster J. M., Unit activity in prefrontal cortex during delayed-response performance: Neuronal correlates of transient memory. J. Neurophysiol. 36, 61–78 (1973).
4. Constantinidis C., et al., Persistent spiking activity underlies working memory. J. Neurosci. 38, 7020–7028 (2018).
5. Gnadt J. W., Andersen R. A., Memory related motor planning activity in posterior parietal cortex of macaque. Exp. Brain Res. 70, 216–220 (1988).
6. Funahashi S., Bruce C. J., Goldman-Rakic P. S., Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. J. Neurophysiol. 61, 331–349 (1989).
7. Goldman-Rakic P. S., Cellular basis of working memory. Neuron 14, 477–485 (1995).
8. Constantinidis C., Williams G. V., Goldman-Rakic P. S., A role for inhibition in shaping the temporal flow of information in prefrontal cortex. Nat. Neurosci. 5, 175–180 (2002).
9. Schluppeck D., Curtis C. E., Glimcher P. W., Heeger D. J., Sustained activity in topographic areas of human posterior parietal cortex during memory-guided saccades. J. Neurosci. 26, 5098–5108 (2006).
10. Miller E. K., Erickson C. A., Desimone R., Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J. Neurosci. 16, 5154–5167 (1996).
11. Romo R., Brody C. D., Hernández A., Lemus L., Neuronal correlates of parametric working memory in the prefrontal cortex. Nature 399, 470–473 (1999).
12. Hussar C. R., Pasternak T., Memory-guided sensory comparisons in the prefrontal cortex: Contribution of putative pyramidal cells and interneurons. J. Neurosci. 32, 2747–2761 (2012).
13. Goard M. J., Pho G. N., Woodson J., Sur M., Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions. eLife 5, e13764 (2016).
14. Wang X. J., Synaptic basis of cortical persistent activity: The importance of NMDA receptors to working memory. J. Neurosci. 19, 9587–9603 (1999).
15. Compte A., Brunel N., Goldman-Rakic P. S., Wang X. J., Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cortex 10, 910–923 (2000).
16. Baeg E. H., et al., Dynamics of population code for working memory in the prefrontal cortex. Neuron 40, 177–188 (2003).
17. Fujisawa S., Amarasingham A., Harrison M. T., Buzsáki G., Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nat. Neurosci. 11, 823–833 (2008).
18. Pastalkova E., Itskov V., Amarasingham A., Buzsáki G., Internally generated cell assembly sequences in the rat hippocampus. Science 321, 1322–1327 (2008).
19. Harvey C. D., Coen P., Tank D. W., Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).
20. Schmitt L. I., et al., Thalamic amplification of cortical connectivity sustains attentional control. Nature 545, 219–223 (2017).
21. Lundqvist M., Herman P., Miller E. K., Working memory: Delay activity, yes! Persistent activity? Maybe not. J. Neurosci. 38, 7013–7019 (2018).
22. Brody C. D., Hernández A., Zainos A., Romo R., Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex. Cereb. Cortex 13, 1196–1207 (2003).
23. Machens C. K., Romo R., Brody C. D., Flexible control of mutual inhibition: A neural model of two-interval discrimination. Science 307, 1121–1124 (2005).
24. Shafi M., et al., Variability in neuronal activity in primate cortex during working memory tasks. Neuroscience 146, 1082–1108 (2007).
25. Markowitz D. A., Curtis C. E., Pesaran B., Multiple component networks support working memory in prefrontal cortex. Proc. Natl. Acad. Sci. U.S.A. 112, 11084–11089 (2015).
26. Kobak D., et al., Demixed principal component analysis of neural population data. eLife 5, e10989 (2016).
27. Murray J. D., et al., Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex. Proc. Natl. Acad. Sci. U.S.A. 114, 394–399 (2017).
28. Pesaran B., Pezaris J. S., Sahani M., Mitra P. P., Andersen R. A., Temporal structure in neuronal activity during working memory in macaque parietal cortex. Nat. Neurosci. 5, 805–811 (2002).
29. Lundqvist M., et al., Gamma and beta bursts underlie working memory. Neuron 90, 152–164 (2016).
30. Shadlen M. N., Newsome W. T., Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J. Neurophysiol. 86, 1916–1936 (2001).
31. Gold J. I., Shadlen M. N., The influence of behavioral context on the representation of a perceptual decision in developing oculomotor commands. J. Neurosci. 23, 632–651 (2003).
32. Brunton B. W., Botvinick M. M., Brody C. D., Rats and humans can optimally accumulate evidence for decision-making. Science 340, 95–98 (2013).
33. Kiani R., Churchland A. K., Shadlen M. N., Integration of direction cues is invariant to the temporal gap between them. J. Neurosci. 33, 16483–16489 (2013).
34. Purcell B. A., Kiani R., Hierarchical decision processes that operate over distinct timescales underlie choice and changes in strategy. Proc. Natl. Acad. Sci. U.S.A. 113, E4531–E4540 (2016).
35. Aksay E., Gamkrelidze G., Seung H. S., Baker R., Tank D. W., In vivo intracellular recording and perturbation of persistent activity in a neural integrator. Nat. Neurosci. 4, 184–193 (2001).
36. Hahnloser R. H., Kozhevnikov A. A., Fee M. S., An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature 419, 65–70 (2002).
37. Kozhevnikov A. A., Fee M. S., Singing-related activity of identified HVC neurons in the zebra finch. J. Neurophysiol. 97, 4271–4283 (2007).
38. Jin D. Z., Fujii N., Graybiel A. M., Neural representation of time in cortico-basal ganglia circuits. Proc. Natl. Acad. Sci. U.S.A. 106, 19156–19161 (2009).
39. Long M. A., Jin D. Z., Fee M. S., Support for a synaptic chain model of neuronal sequence generation. Nature 468, 394–399 (2010).
40. Hochreiter S., Schmidhuber J., Long short-term memory. Neural Comput. 9, 1735–1780 (1997).
41. Graves A., Generating sequences with recurrent neural networks. arXiv:1308.0850 (4 August 2013).
42. Graves A., Mohamed A.-r., Hinton G., “Speech recognition with deep recurrent neural networks” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2013), pp. 6645–6649.
43. Cho K., et al., Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078 (3 June 2014).
44. Graves A., Wayne G., Danihelka I., Neural Turing machines. arXiv:1410.5401 (20 October 2014).
45. Sutskever I., Vinyals O., Le Q. V., “Sequence to sequence learning with neural networks” in Advances in Neural Information Processing Systems (MIT Press, 2014), vol. 2, pp. 3104–3112.
46. Heeger D. J., Mackey W. E., ORGaNICs: A theory of working memory in brains and machines. arXiv:1803.06288 (16 March 2018).
47. Heeger D. J., Mackey W. E., ORGaNICs: A canonical neural circuit computation. bioRxiv:10.1101/506337 (26 December 2018).
48. Heeger D. J., Supplemental Material for “Oscillatory recurrent gated neural integrator circuits (ORGaNICs), a unifying theoretical framework for neural dynamics.” NYU Faculty Digital Archive. https://archive.nyu.edu/handle/2451/60439. Deposited 10 October 2019.
49. Heeger D. J., Half-squaring in responses of cat striate cells. Vis. Neurosci. 9, 427–443 (1992).
50. Heeger D. J., Normalization of cell responses in cat striate cortex. Vis. Neurosci. 9, 181–197 (1992).
51. Carandini M., Heeger D. J., Normalization as a canonical neural computation. Nat. Rev. Neurosci. 13, 51–62 (2011).
52. Seung H. S., How the brain keeps the eyes still. Proc. Natl. Acad. Sci. U.S.A. 93, 13339–13344 (1996).
53. Koulakov A. A., Raghavachari S., Kepecs A., Lisman J. E., Model for a robust neural integrator. Nat. Neurosci. 5, 775–782 (2002).
54. Brody C. D., Romo R., Kepecs A., Basic mechanisms for graded persistent activity: Discrete attractors, continuous attractors, and dynamic representations. Curr. Opin. Neurobiol. 13, 204–211 (2003).
55. Zhang K., Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. J. Neurosci. 16, 2112–2126 (1996).
56. Abeles M., Corticonics: Neural Circuits of the Cerebral Cortex (Cambridge University Press, 1991).
57. Abeles M., Bergman H., Margalit E., Vaadia E., Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. J. Neurophysiol. 70, 1629–1638 (1993).
58. Bienenstock E., A model of neocortex. Network 6, 179–224 (1995).
59. Herrmann M., Hertz J., Prügel-Bennett A., Analysis of synfire chains. Network 6, 403–414 (1995).
60. Izhikevich E. M., Polychronization: Computation with spikes. Neural Comput. 18, 245–282 (2006).
61. Churchland M. M., Cunningham J. P., Kaufman M. T., Ryu S. I., Shenoy K. V., Cortical preparatory activity: Representation of movement or first cog in a dynamical machine? Neuron 68, 387–400 (2010).
62. Shenoy K. V., Sahani M., Churchland M. M., Cortical control of arm movements: A dynamical systems perspective. Annu. Rev. Neurosci. 36, 337–359 (2013).
63. Russo A. A., et al., Motor cortex embeds muscle-like commands in an untangled population response. Neuron 97, 953–966.e8 (2018).
64. Thoroughman K. A., Shadmehr R., Learning of action through adaptive combination of motor primitives. Nature 407, 742–747 (2000).
65. Giszter S. F., Motor primitives–New data and future questions. Curr. Opin. Neurobiol. 33, 156–165 (2015).
66. Long M. A., et al., Functional segregation of cortical regions underlying speech timing and articulation. Neuron 89, 1187–1193 (2016).
67. Long M. A., Fee M. S., Using temperature to analyse temporal dynamics in the songbird motor pathway. Nature 456, 189–194 (2008).
68. Lerner Y., Honey C. J., Katkov M., Hasson U., Temporal scaling of neural responses to compressed and dilated natural speech. J. Neurophysiol. 111, 2433–2444 (2014).
69. Gütig R., Sompolinsky H., Time-warp-invariant neuronal processing. PLoS Biol. 7, e1000141 (2009).
70. Grossberg S., Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Netw. 1, 17–61 (1988).
71. Olshausen B. A., Anderson C. H., Van Essen D. C., A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neurosci. 13, 4700–4719 (1993).
72. O’Reilly R. C., Frank M. J., Making working memory work: A computational model of learning in the prefrontal cortex and basal ganglia. Neural Comput. 18, 283–328 (2006).
73. Mongillo G., Barak O., Tsodyks M., Synaptic theory of working memory. Science 319, 1543–1546 (2008).
74. Goldman M. S., Memory without feedback in a neural network. Neuron 61, 621–634 (2009).
75. Lundqvist M., Compte A., Lansner A., Bistable, irregular firing and population oscillations in a modular attractor memory network. PLoS Comput. Biol. 6, e1000803 (2010).
76. Lundqvist M., Herman P., Lansner A., Theta and gamma power increases and alpha/beta power decreases with memory load in an attractor network model. J. Cogn. Neurosci. 23, 3008–3020 (2011).
77. Druckmann S., Chklovskii D. B., Neuronal circuits underlying persistent representations despite time varying activity. Curr. Biol. 22, 2095–2103 (2012).
78. Hennequin G., Vogels T. P., Gerstner W., Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 82, 1394–1406 (2014).
79. Sussillo D., Churchland M. M., Kaufman M. T., Shenoy K. V., A neural network that finds a naturalistic solution for the production of muscle activity. Nat. Neurosci. 18, 1025–1033 (2015).
80. Rajan K., Harvey C. D., Tank D. W., Recurrent network models of sequence generation and memory. Neuron 90, 128–142 (2016).
81. Costa R., Assael I. A., Shillingford B., de Freitas N., Vogels T., “Cortical microcircuits as gated-recurrent neural networks” in 31st Annual Conference on Neural Information Processing Systems, von Luxburg U., et al., Eds. (NIPS, La Jolla, CA, 2017), vol. 1, pp. 272–283.
82. Murray J. M., Escola G. S., Learning multiple variable-speed sequences in striatum via cortical tutoring. eLife 6, e26084 (2017).
83. Goudar V., Buonomano D. V., Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks. eLife 7, e31134 (2018).
84. Kraynyukova N., Tchumatchenko T., Stabilized supralinear network can give rise to bistable, oscillatory, and persistent activity. Proc. Natl. Acad. Sci. U.S.A. 115, 3464–3469 (2018).
85. Orhan E., Ma W. J., A diverse range of factors affect the nature of neural representations underlying short-term memory. bioRxiv:10.1101/244707 (13 October 2018).
86. Stroud J. P., Porter M. A., Hennequin G., Vogels T. P., Motor primitives in space and time via targeted gain modulation in cortical networks. Nat. Neurosci. 21, 1774–1783 (2018).
87. Tallec C., Ollivier Y., Can recurrent neural networks warp time? arXiv:1804.11188 (23 March 2018).
88. Amari S., Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 27, 77–87 (1977).
89. Ermentrout B., Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys. 61, 353–430 (1998).
90. Bressloff P. C., Spatiotemporal dynamics of continuum neural fields. J. Phys. A Math. Theor. 45, 033001 (2011).
91. Adelson E. H., Bergen J. R., “The plenoptic function and the elements of early vision” in Computational Models of Visual Processing, Landy M. S., Movshon J. A., Eds. (MIT Press, 1991), pp. 3–20.
92. Simoncelli E. P., Heeger D. J., A model of neuronal responses in visual area MT. Vision Res. 38, 743–761 (1998).
93. Muller L., Chavane F., Reynolds J., Sejnowski T. J., Cortical travelling waves: Mechanisms and computational principles. Nat. Rev. Neurosci. 19, 255–268 (2018).
94. Mackey W. E., Curtis C. E., Distinct contributions by frontal and parietal cortices support working memory. Sci. Rep. 7, 6188 (2017).
95. Heeger D. J., Theory of cortical function. Proc. Natl. Acad. Sci. U.S.A. 114, 1773–1782 (2017).
96. Carandini M., Heeger D. J., Summation and division by neurons in primate visual cortex. Science 264, 1333–1336 (1994).
97. Carandini M., Heeger D. J., Senn W., A synaptic explanation of suppression in visual cortex. J. Neurosci. 22, 10053–10065 (2002).
98. Chance F. S., Abbott L. F., Reyes A. D., Gain modulation from background synaptic input. Neuron 35, 773–782 (2002).
99. Guo Z. V., et al., Maintenance of persistent activity in a frontal thalamocortical loop. Nature 545, 181–186 (2017).
100. Rikhye R. V., Gilra A., Halassa M. M., Thalamic regulation of switching between cortical representations enables cognitive flexibility. Nat. Neurosci. 21, 1753–1763 (2018).
101. Rikhye R. V., Wimmer R. D., Halassa M. M., Toward an integrative theory of thalamic function. Annu. Rev. Neurosci. 41, 163–183 (2018).
102. Thurley K., Senn W., Lüscher H. R., Dopamine increases the gain of the input-output response of rat prefrontal pyramidal neurons. J. Neurophysiol. 99, 2985–2997 (2008).
103. Varga V., et al., Fast synaptic subcortical control of hippocampal circuits. Science 326, 449–453 (2009).
104. Marder E., Neuromodulation of neuronal circuits: Back to the future. Neuron 76, 1–11 (2012).
105. Wei K., et al., Serotonin affects movement gain control in the spinal cord. J. Neurosci. 34, 12690–12700 (2014).
106. Ozeki H., Finn I. M., Schaffer E. S., Miller K. D., Ferster D., Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron 62, 578–592 (2009).
107. Louie K., LoFaro T., Webb R., Glimcher P. W., Dynamic divisive normalization predicts time-varying value coding in decision-related circuits. J. Neurosci. 34, 16046–16057 (2014).
108. Rubin D. B., Van Hooser S. D., Miller K. D., The stabilized supralinear network: A unifying circuit motif underlying multi-input integration in sensory cortex. Neuron 85, 402–417 (2015).
109. Adesnik H., Scanziani M., Lateral competition for cortical space by layer-specific horizontal circuits. Nature 464, 1155–1160 (2010).
110. Huang X., Elyada Y. M., Bosking W. H., Walker T., Fitzpatrick D., Optogenetic assessment of horizontal interactions in primary visual cortex. J. Neurosci. 34, 4976–4990 (2014).
111. Nassi J. J., Avery M. C., Cetin A. H., Roe A. W., Reynolds J. H., Optogenetic activation of normalization in alert macaque visual cortex. Neuron 86, 1504–1517 (2015).
112. Sato T. K., Haider B., Häusser M., Carandini M., An excitatory basis for divisive normalization in visual cortex. Nat. Neurosci. 19, 568–570 (2016).
113. Adesnik H., Synaptic mechanisms of feature coding in the visual cortex of awake mice. Neuron 95, 1147–1159.e4 (2017).
114. Bolding K. A., Franks K. M., Recurrent cortical circuits implement concentration-invariant odor coding. Science 361, eaat6904 (2018).
115. Atkinson R. C., Shiffrin R. M., “Human memory: A proposed system and its control processes” in Psychology of Learning and Motivation, Spence K. W., Spence J. T., Eds. (Elsevier, 1968), vol. 2, pp. 89–195.
116. Cowan N., Attention and Memory: An Integrated Framework (Oxford University Press, 1998).
117. Cowan N., What are the differences between long-term, short-term, and working memory? Prog. Brain Res. 169, 323–338 (2008).
118. Postle B. R., The cognitive neuroscience of visual short-term memory. Curr. Opin. Behav. Sci. 1, 40–46 (2015).
119. Yang T., Shadlen M. N., Probabilistic reasoning by neurons. Nature 447, 1075–1080 (2007).
120. Hallett P. E., Primary and secondary saccades to goals defined by instructions. Vision Res. 18, 1279–1296 (1978).
121. Funahashi S., Chafee M. V., Goldman-Rakic P. S., Prefrontal neuronal activity in rhesus monkeys performing a delayed anti-saccade task. Nature 365, 753–756 (1993).
122. Munoz D. P., Everling S., Look away: The anti-saccade task and the voluntary control of eye movement. Nat. Rev. Neurosci. 5, 218–228 (2004).
123. Johnston K., Everling S., Neural activity in monkey prefrontal cortex is modulated by task context and behavioral instruction during delayed-match-to-sample and conditional prosaccade-antisaccade tasks. J. Cogn. Neurosci. 18, 749–765 (2006).
124. Westheimer G., Eye movement responses to a horizontally moving visual stimulus. AMA Arch. Opthalmol. 52, 932–941 (1954).
125. Becker W., Jürgens R., An analysis of the saccadic system by means of double step stimuli. Vision Res. 19, 967–983 (1979).
126. Goldberg M. E., Bruce C. J., Primate frontal eye fields. III. Maintenance of a spatially accurate saccade signal. J. Neurophysiol. 64, 489–508 (1990).
127. Ma W. J., Huang W., No capacity limit in attentional tracking: Evidence for probabilistic inference under a resource constraint. J. Vis. 9, 3.1–30 (2009).
128. Wei Z., Wang X. J., Wang D. H., From distributed resources to limited slots in multiple-item working memory: A spiking network model with normalization. J. Neurosci. 32, 11228–11240 (2012).
129. Keshvari S., van den Berg R., Ma W. J., No evidence for an item limit in change detection. PLoS Comput. Biol. 9, e1002927 (2013).
130. Wimmer K., Nykamp D. Q., Constantinidis C., Compte A., Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nat. Neurosci. 17, 431–439 (2014).
