Abstract
Spike-timing-dependent plasticity (STDP) is a biological process of synaptic modification driven by the order and timing of firing between neurons. One of the neurodynamical roles of STDP is to form a macroscopic geometric structure in the neuronal state space in response to a periodic input (Susman et al., Nat. Commun. 10, 2019; Yoon & Kim, arXiv:2107.02429v2, 2021). In this work, we propose a practical STDP-based memory model that can store and retrieve high-dimensional associative data. The model combines STDP dynamics with an encoding scheme for distributed representations and can handle multiple composite data in a continuous manner. In an auto-associative memory task in which a group of images is continuously streamed to the model, the images are successfully retrieved from an oscillating neural state whenever a proper cue is given. In a second task dealing with semantic memories embedded from sentences, the results show that words can recall multiple sentences simultaneously or one exclusively, depending on their grammatical relations.
Subject terms: Learning algorithms, Neural encoding, Computational science
Introduction
Spike-timing-dependent plasticity (STDP) is a biological process of synaptic modification according to the order of pre- and post-synaptic spiking within a critical time window3–5, and is considered critical for understanding cognitive mechanisms such as temporal coding6–8 and the formation of associative memory9,10. In our companion work2, we analyzed an STDP-based neural model and showed that the model can associate multiple high-dimensional memories with a geometric structure in the neural state space which we call a memory plane. When exposed to repeatedly occurring spatio-temporal input patterns, the STDP-driven neural activity transforms the patterns into the corresponding memory plane. Further, the stored memories can be dynamically revived through macroscopic neural oscillations around the memory plane when perturbed by a similar stimulus.
The presence and function of such a memory plane in neural networks drew attention in Ref.1, where it was proposed that STDP can store transient inputs as imaginary-coded memories. In this work, we further emphasize a practical aspect of the memory plane, showing that it can play a central role in storing, retrieving, and manipulating structured information. Building on the theoretical work in Ref.2, we integrate an analytic-level and an implementation-level description of the neural memory process based on the memory plane, one that is capable of handling high-dimensional associative data.
In this work, we propose that an STDP-based memory model, combined with a proper encoding scheme, can store and retrieve a group of structured information in the neuronal state space. Among the schemes for encoding compositional structure that have been proposed over the years, we adopt the Tensor Product Representation (TPR)11. TPR is a general method for generating vector-space embeddings of internal representations and operations, which can encode a variety of structural information such as lists of paired items, sequences, and networks.
We show that the STDP-based memory model with TPR can naturally provide a mechanism for segmenting continuous streams of sensory input into representations of associative bindings of items. First, we demonstrate an auto-associative memory task with a group of images. While the images are sequentially streamed into the system for storage, the corresponding information is internally stored in the connectivity matrix. The whole group of images can then be dynamically retrieved from the oscillating neural state when the system is perturbed by a memory cue similar to any of the original images. In the second task, on semantic manipulation, we use multiple semantic vectors to represent a sentence as a composite of words. Once several sentences are stored in the system via such semantic vectors, a single word can recall multiple sentences simultaneously or one exclusively, depending on their grammatical relations. This implies that the proposed method provides an alternative bio-inspired approach to processing multiple groups of associative data with composite structure.
Methods
STDP-based memory model
Our work follows the framework of standard firing-rate models1,12. We set the differential equation for the neural state as

$$\dot{x}(t) = \phi\big(W(t)\,x(t)\big) + I(t), \tag{1}$$

where $x(t)\in\mathbb{R}^N$ is the state of $N$ neuronal nodes and $W(t)\in\mathbb{R}^{N\times N}$ is a connectivity matrix with $W_{ij}$ corresponding to the strength of the synaptic connection from node $j$ to node $i$. Here $\phi$ is a regularizing transfer function and $I(t)$ is a memory input.
The plasticity model that we propose is based on Ref.13:

$$\dot{W}(t) = -\epsilon\,W(t) + \gamma \int K(s)\,x(t)\,x(t-s)^{T}\,ds, \tag{2}$$
where $K$ is a temporal kernel and $\gamma$ is the learning rate. The parameter $\epsilon$ is the decay rate of homeostatic plasticity. While the synapses are repeatedly excited by the external input, they must also decay so that obsolete and irrelevant information is gradually forgotten. For analytic simplicity, we use a saturating transfer function $\phi = \tanh$ and a simplified Dirac-delta-type temporal kernel $K(s)$ defined as
$$K(s) = \delta(s - d) - \delta(s + d), \tag{3}$$
with $d > 0$. Using this kernel instead of the conventional exponential one has two main consequences: first, it concentrates the synaptic changes at a specific interspike timing near the origin, which has been observed in related works. Second, through the convolution in Eq. (2) it yields a simple delayed term, which is relatively feasible for analysis2.
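As a short check (a sketch of the standard reduction; the precise treatment is given in Ref.2), substituting this kernel into the convolution of Eq. (2) collapses the integral onto the two pair lags $s = \pm d$:

$$\int K(s)\,x(t)\,x(t-s)^{T}\,ds = x(t)\,x(t-d)^{T} - x(t)\,x(t+d)^{T}.$$

In the accumulated synaptic update, the acausal pairing $x(t)\,x(t+d)^{T}$ contributes the same as the time-shifted causal term $x(t-d)\,x(t)^{T}$, so the rule reduces to the antisymmetric delayed form that appears in Eq. (4) below.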
After these simplifications, the main model becomes

$$\begin{aligned} \dot{x}(t) &= \phi\big(W(t)\,x(t)\big) + I(t),\\ \dot{W}(t) &= -\epsilon\,W(t) + \gamma\big(x(t)\,x_d(t)^{T} - x_d(t)\,x(t)^{T}\big), \end{aligned} \tag{4}$$

where $x_d(t) = x(t-d)$ stands for the delayed synaptic response.
We use the memory input in the form of a sequential harmonic pulse,

$$I(t) = \sum_{k=1}^{m} u_k \cos\big(\omega(t - t_k)\big), \tag{5}$$

where $u_1,\dots,u_m$ are the memory representations to be stored. Here $\omega$ stands for the frequency of neural oscillations and $t_k$, $k=1,\dots,m$, stands for the sampling time for each component. The trajectory of the memory input in (5) is periodic and embedded in a 2-dimensional plane: indeed, $I(t) = \mu\cos(\omega t) + \nu\sin(\omega t)$ with $\mu = \sum_k \cos(\omega t_k)\,u_k$ and $\nu = \sum_k \sin(\omega t_k)\,u_k$. We call this plane $S$ a memory plane with respect to the memory representations $u_1,\dots,u_m$. While the memory representations are distributed in the high-dimensional neural state space, the memory plane $S$ tends to be located in close proximity to the memory representations under suitable conditions2. Further, the system (4) has an asymptotically stable solution that consists of a periodic solution on the memory plane $S$ and a constant connectivity matrix $W^{*} = \beta\big(\mu\nu^{T} - \nu\mu^{T}\big)$ for some scalar $\beta$ and vectors $\mu$ and $\nu$ in $S$. We will see below that the matrix $W^{*}$ contains the essential information to retrieve the memory representations. This implies that the convergence of $x(t)$ to a certain periodic oscillation is a sign that the memories are stored in $W^{*}$ in a distributed way.
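To make the storage dynamics concrete, the following minimal Python sketch integrates the delay system (4) under the input (5) with a forward Euler scheme. It is an illustration only: the dimensions, the parameter values, and the choice $\phi = \tanh$ are placeholder assumptions, not the settings used for the figures below.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and parameters (placeholders, not the paper's values).
N, m   = 40, 5        # number of nodes, number of stored representations
omega  = 2 * np.pi    # oscillation frequency (period T = 1)
d      = 0.25         # delay in the kernel K(s) = delta(s - d) - delta(s + d)
gamma  = 0.02         # learning rate
eps    = 0.01         # homeostatic decay rate
dt     = 1e-3         # integration step size
T_end  = 40.0         # storage duration

U  = rng.standard_normal((m, N))                             # representations u_1..u_m
tk = np.linspace(0.0, 2 * np.pi / omega, m, endpoint=False)  # sampling times t_k

def I(t):
    # Eq. (5): I(t) = sum_k u_k cos(omega (t - t_k))
    return np.cos(omega * (t - tk)) @ U

steps, lag = int(T_end / dt), int(d / dt)
xs = np.zeros((steps + 1, N))    # trajectory; also serves as the history buffer
W  = np.zeros((N, N))
for n in range(steps):
    x   = xs[n]
    x_d = xs[n - lag] if n >= lag else np.zeros(N)           # delayed state x(t - d)
    xs[n + 1] = x + dt * (np.tanh(W @ x) + I(n * dt))
    W += dt * (-eps * W + gamma * (np.outer(x, x_d) - np.outer(x_d, x)))

print(np.linalg.norm(W + W.T))   # ~0: W remains antisymmetric throughout learning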
In the retrieval phase, we set $\gamma = 0$ in Eq. (4) and use the fixed connectivity matrix $W^{*}$, so that

$$\dot{x}(t) = \phi\big(W^{*}x(t)\big) + I_r(t), \tag{6}$$

where

$$I_r(t) = u_r \cos(\omega t). \tag{7}$$

Here $u_r$ is the representation of the memory cue. The retrieval system (6) also has an asymptotically stable periodic solution, say $x_r(t)$. One can show that the trajectory of this periodic solution is located close to $S$ if the memory cue is relevant to any of the memory representations $u_1,\dots,u_m$. Hence, as a neural state is attracted to $x_r(t)$, it is also bound to circle around the memory representations. Figure 1 shows that the neural oscillation occurs near the memory plane, and therefore in proximity to all the memory representations, when $u_r$ is given close to one of them.
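A corresponding retrieval sketch, under the same placeholder assumptions as the storage sketch above, freezes the learned connectivity and drives the system with the cue input of Eq. (7). The function name retrieve and its default values are ours; the output can be fed to the decoding step described in the next subsection.

import numpy as np

def retrieve(W_star, u_r, omega=2 * np.pi, dt=1e-3, T_end=15.0):
    # Integrate the retrieval system (6), x' = phi(W* x) + u_r cos(omega t),
    # from a small arbitrary initial state; the trajectory settles onto a
    # periodic orbit near the memory plane S when the cue u_r is relevant.
    N  = W_star.shape[0]
    x  = 1e-3 * np.random.default_rng(1).standard_normal(N)
    xs = np.empty((int(T_end / dt), N))
    for n in range(xs.shape[0]):
        x = x + dt * (np.tanh(W_star @ x) + u_r * np.cos(omega * n * dt))
        xs[n] = x
    return xs

For example, with W and U from the storage sketch, retrieve(W, U[0]) drives the system with an exact copy of the first stored representation as the cue.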
Figure 1.
Graphical illustration of the memory representations $u_1,\dots,u_m$ and the corresponding memory plane $S$. The memory plane $S$ is located in the subspace spanned by the memory representations and lies close to all of them. A periodic orbit close to $S$ can be used for efficient memory retrieval.
Encoding/decoding associative data with tag vectors
Before storing and retrieving specific associative data with (4) and (6), respectively, we first need to properly encode them into memory representations. It is reasonable to assume that cognitive systems do not simply receive external inputs passively, but rather actively position them in the neural state space upon acceptance. Suppose $v_1,\dots,v_m$ is a series of external inputs from the environment containing high-dimensional associative information. We use a set of internal tags to mark which classes the corresponding external inputs belong to. They may indicate the order of events in a sequence (the first, the second, ..., the last) if the input is streamed through sequential observations, the type of sensor (visual, auditory, olfactory, tactile) if the input is a combination of senses, or the sentence element (subject, predicate, object, modifier) if the input is a sentence composed of words. Such internal tags can be formulated as low-dimensional orthonormal vectors.
Following the vector embedding method for structured information11, we use the tensor product to encode a raw datum $v_i$ into a memory representation $u_i$ as

$$u_i = v_i \otimes w_i, \tag{8}$$

where $w_i$ is the tag vector corresponding to the raw datum $v_i$. If $w_i$ is of unit length, the original data can be exactly decoded from the memory representation by applying the right dot product of the tag vector as

$$u_i \cdot w_i = (v_i \otimes w_i)\,w_i = v_i. \tag{9}$$
We say a neural state $x$ is retrievable if $x$ is a linear combination of the memory representations, $x = \sum_k c_k u_k$. For a retrievable state $x$, a selective recovery of the original $v_i$ is possible by applying the right dot product with the corresponding tag vector:

$$x \cdot w_i = \Big(\sum_{k} c_k\, v_k \otimes w_k\Big)\, w_i = c_i\, v_i. \tag{10}$$
It can be shown that all points on the memory plane $S$ with respect to $u_1,\dots,u_m$ are retrievable.
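The encoding/decoding scheme of Eqs. (8)–(10) can be illustrated in a few lines of Python. The dimensions below are arbitrary, and the orthonormal tags are generated by a QR factorization, which is one convenient choice among many.

import numpy as np

rng = np.random.default_rng(2)
p, q, m = 64, 8, 5                      # raw-data dim, tag dim, number of items

V = rng.standard_normal((m, p))         # raw data v_1..v_m
Q, _ = np.linalg.qr(rng.standard_normal((q, q)))
Wtag = Q[:m]                            # m orthonormal tag vectors w_1..w_m

# Eq. (8): u_i = v_i (x) w_i, realized as the outer-product matrix.
U = np.einsum('ip,iq->ipq', V, Wtag)

# Eq. (9): exact decoding by the right dot product with the unit tag vector.
assert np.allclose(U[0] @ Wtag[0], V[0])

# Eq. (10): selective recovery from a retrievable state x = sum_k c_k u_k;
# the orthonormality of the tags isolates the chosen component.
c = rng.standard_normal(m)
x = np.tensordot(c, U, axes=1)
assert np.allclose(x @ Wtag[2], c[2] * V[2])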
Let us describe how the encoding/decoding scheme is combined with the above memory models. In the storage phase, we encode the data $v_i$ into the memory representations $u_i$ using the tag vectors in Eq. (8), and run the system (4) with the memory input in Eq. (5). To retrieve the original data, we run the retrieval system (6) with a certain cue $u_r$. We wait until $x(t)$ converges to an oscillating trajectory $x_r(t)$, then evaluate $x_r(t)\cdot w_i$ for retrieval. It has been shown2 that $x_r(t)$ always has intersections with $S$, which are therefore retrievable. However, since the neural state space is high-dimensional, an irrelevant choice of $u_r$ leads to a trivial intersection near the origin, which yields no meaningful retrieval. On the contrary, if $u_r$ is close to any of $u_1,\dots,u_m$, then $x(t)$ converges to a periodic solution that is almost embedded in $S$, as illustrated in Fig. 1. Indeed, one can show that if $u_r$ is a scalar multiple of one of the memory representations, the corresponding $x_r(t)$ is completely embedded in $S$ and is therefore retrievable for all $t$.
In the following section, we show through numerical tests of storage and retrieval that the STDP-based model (4) with the tensor product encoding can naturally provide a neural mechanism for segmenting continuous streams of sensory input into representations of associative bindings of items.
Results
Retrieval of grouped images
We first demonstrate an auto-associative memory task that involves a group of images. This task uses the five grayscale images of classical orchestral instruments in Fig. 2. The images are translated into external input vectors $v_1,\dots,v_5$ and combined into the memory representations as $u_i = v_i \otimes w_i$, $i=1,\dots,5$. Here the tag vectors $w_1,\dots,w_5$ are orthonormal and used as placeholders for each image.
Figure 2.
Grayscale images displaying classical orchestral instruments that are used for the memory input vectors $v_1,\dots,v_5$. All images were taken from Pixabay.
For the numerical simulations in this article, the modified Euler's method for delay equations was used throughout. In the first memory task, each image is translated to a vector as follows: every pixel is mapped to a value in $[-1,1]$ depending on its brightness (pure black to $-1$ and pure white to $1$, linearly). The resulting matrix is then flattened to a vector $v_i$. In the storage phase, the input magnitude was scaled to maintain $x$ and $W$ at an appropriate level. Reconstructing the image from the vector is done by performing the procedure in reverse order. The storage phase was run for 40 seconds with a fixed integration step size, with the sampling times $t_k$ chosen as 5 evenly sequenced points on $[0, 2\pi/\omega]$. The model parameters were chosen based on the analysis in Ref.2 to achieve the convergence of $W(t)$ while simultaneously guaranteeing a sufficiently large magnitude of $x(t)$. As expected, stable convergence of the connectivity to a certain $W^{*}$ is well achieved when the arbitrary initial conditions are appropriately small. The retrieval phase was run for 15 seconds from an appropriately small arbitrary initial state. In Figs. 3, 4, and 5, the brightness threshold was adjusted to 0.002 for clear visibility, since the magnitudes of the retrieved images are relatively small compared to the original ones; any element of the retrieved vector with a value outside this range is thus rendered as a pixel of pure black or pure white.
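A minimal sketch of the pixel mapping and the thresholded rendering described above. The image size and the helper names are hypothetical; the paper's actual pixel dimensions are not restated here.

import numpy as np

def image_to_vector(img):
    # Pixel brightness in [0, 255] mapped linearly to [-1, 1]
    # (pure black -> -1, pure white -> 1), then flattened.
    return (img.astype(float) / 255.0 * 2.0 - 1.0).ravel()

def vector_to_image(v, shape, theta=0.002):
    # Reverse mapping with the display threshold described above: values
    # outside [-theta, theta] saturate to pure black or pure white.
    scaled = np.clip(v.reshape(shape), -theta, theta) / theta
    return ((scaled + 1.0) / 2.0 * 255.0).round().astype(np.uint8)

# Round trip on a random 'image'; theta = 1 gives the plain inverse mapping.
img = np.random.default_rng(3).integers(0, 256, size=(64, 64))
assert np.array_equal(vector_to_image(image_to_vector(img), img.shape, theta=1.0), img)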
Figure 3.
Auto-associative memory retrieval from a contaminated cue. (a) The noisy cue is generated by adding Gaussian noise to one of the original image vectors and to its tag vector. (b) Snapshot of the retrieved images at the farthest point (red dot) of the orbit from the memory plane $S$. (c) Snapshot of the retrieved images at the intersection (green dot) of the orbit and the memory plane $S$. The timing of the intersection can be determined analytically2. Here, the contrast between the main object and the background is not reproduced as well as in the original images, since the brightness threshold is used for drawing the retrieved images. One can obtain an original (either identical or exactly inverted) image by using an optimal threshold value for each retrieved image.
Figure 4.
Comparison of retrieval quality according to the noise level in the cue. (a) The less noisy cue, generated in the same way as in Fig. 3 but with smaller noise variances. (b) The severely contaminated cue, with larger noise variances. (c) Snapshots of the retrieved images from the less noisy cue in (a), taken at the farthest point from $S$ (top row) and at the intersection (bottom row). (d) Snapshots of the retrieved images from the more noisy cue in (b), taken at the farthest point from $S$ (top row) and at the intersection (bottom row).
Figure 5.
(a) Result of retrieval from a partially obstructed cue. Snapshots of the retrieved images are taken at the farthest point from $S$ (top row) and at the intersection of the orbit and the memory plane (bottom row). (b) Result of retrieval from an irrelevant cue. Snapshots of the retrieved images are taken at the farthest point from $S$ (top row) and at the intersection of the orbit and the memory plane (bottom row). In both (a) and (b), we used the noisy tag vector from Fig. 3 for retrieval.
Figure 3 depicts the numerical simulation of the retrieval phase. For a better understanding of the process, a graphical illustration of the memory plane and the initial memory cue is given with the actual data. When the neural state is continually perturbed by a noisy copy of one of the original images (violin), it approaches the memory plane $S$. Once $x(t)$ converges to a limit cycle around $S$, as shown in Ref.2, the external input can be reproduced by applying the tag vectors to $x(t)$. In Fig. 3, we display two snapshots of the retrieved images obtained at two points on the orbit: Fig. 3b is taken at the point farthest from $S$ and Fig. 3c at the intersection. It is notable that the retrieved images continuously oscillate, developing weak/strong and positive/negative images in turn. Such flashing patterns generally differ from image to image and are affected by the sequential order of the memory representations in Eq. (5) during the storage phase. Furthermore, due to the orthogonality of the tag vectors, perfect images are acquired at the time instants when $x(t)$ penetrates $S$.
Figure 4 shows that the quality of the retrieved images depends on how close the memory cue is to the original image input. The cue with low-level noise in Fig. 4a leads to an orbit (blue) close to the memory plane $S$, producing images of decent quality in Fig. 4c. However, if the cue is more contaminated with noise, as in Fig. 4b, the orbit approaches $S$ at a relatively larger angle, forming a stretched, narrower elliptical orbit (red) that periodically strays far from $S$. Although the orbit from the severely contaminated cue still passes through the memory plane, it does so only near the origin, providing relatively feeble images for a short time.
The retrieval can also be performed with an incomplete cue. In Fig. 5a, the images are recalled from a partially obstructed cue. The original images can be recovered at a decent level, especially when $x(t)$ passes through the memory plane $S$. Figure 5b shows that an irrelevant cue (forest) fails to retrieve the original memory inputs. Indeed, it can be shown that a completely irrelevant cue results in a one-dimensional periodic orbit that keeps penetrating the memory plane back and forth exactly at the origin.
To quantitatively verify the performance, we introduce the scaled cosine similarity $\langle v_i,\, x(t)\cdot w_i\rangle / \|v_i\|^2$ between the original image $v_i$ and the retrieved image $x(t)\cdot w_i$ at time $t$. When the retrieved image has the same magnitude as the original, this metric reduces to the usual cosine similarity between the two images; in general it reflects the cosine similarity and 'the memory intensity' together. Let $p(t)$ denote the average of the scaled cosine similarity over the images at time $t$, and let $\bar{p}$ denote the temporal average of $p(t)$. One can see that these two measures successfully capture the quality of the retrieved images depending on the given cues. See the captions of Figs. 4 and 5 for more detail.
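A sketch of the two measures, assuming the scaled-similarity formula stated above (our reconstruction of the elided definition):

import numpy as np

def scaled_cosine_similarity(v, v_ret):
    # <v, v_ret> / ||v||^2: the usual cosine similarity weighted by the
    # relative magnitude of the retrieved image ('the memory intensity').
    return float(np.dot(v, v_ret)) / float(np.dot(v, v))

def p_of_t(V, retrieved):
    # p(t): average scaled cosine similarity over the images at one instant,
    # where retrieved[i] = x(t) . w_i is the i-th unbound image.
    return np.mean([scaled_cosine_similarity(v, r) for v, r in zip(V, retrieved)])

# The temporal average, p-bar, is then the mean of p_of_t over the orbit frames.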
The memory capacity of the model, measured by the time-averaged performance $\bar{p}$ as a function of the number of memorized patterns, is illustrated in Fig. 6. The performance is observed to decrease with the number of patterns at least at the rate expected from Ref.2.
Figure 6.
Memory capacity measured by the time-averaged performance: the memory representations were constructed from randomly generated 200-dimensional pattern vectors and 20-dimensional tag vectors.
Multiple groups of memory with composite structure
This section deals with applications of the model to more complex associative memory. Suppose we have multiple groups of memory representations and have stored each group in the form of a memory plane using the system in Eq. (4). We are especially interested in the case where some memory representations belong to multiple groups. The following questions naturally arise: (1) Can the common memory component retrieve the corresponding multiple groups together? (2) Can a single memory group be selected by adding a further memory component to the cue? These questions are potentially related to high-level inference on memory.
We also focus on the compositional structure of memory representations created by the tag vectors. The memory inputs in this section are words, provided collectively in the form of a sentence. We assume that each tag vector stands for a sentence element (subject, predicate, object, modifier) and is naturally bound to a word according to the role of that word in the sentence. Activated by such a sequential stream of words, the system in Eq. (4) forms the memory plane, which can be regarded as the encoding of the sentence.
For the simulation of semantic memory, we use three sentences composed from a vocabulary of 8 words. Every word appearing in a sentence plays one of 4 roles (sentence elements). The vocabulary of words and roles is listed in Fig. 7a. We simply use arbitrarily chosen orthonormal sets for the words and for the roles, respectively. Figure 7b shows a couple of examples of memory representations, each of which is a binding of a word and a role; the subindex on the right-hand side expresses the corresponding role of the word. Our goal is to store the semantic information of the sentences through Eq. (4) with the memory input in Eq. (5). The three sentences listed in Fig. 7c are used as the memory input in the simulation. Note that the word John appears three times in the sentences, once as an object and twice as a subject. Similarly, the words Mary and garden occur twice, in different contexts.
Figure 7.
(a) List of words and roles used for the external inputs and the tags. (b) Descriptive explanation of constructing memory representations. (c) Three sentences generated by grouping memory representations. Note that the words John, Mary and garden appear several times in different contexts.
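The construction in Fig. 7a,b can be sketched as follows. Apart from John, Mary and garden, the vocabulary below is invented for illustration, and the orthonormal sets are drawn at random, as in the text.

import numpy as np

rng = np.random.default_rng(4)

# Stand-in vocabulary and roles for Fig. 7a (mostly hypothetical words).
words = ['John', 'Mary', 'garden', 'walk', 'like', 'see', 'slowly', 'in']
roles = ['subject', 'predicate', 'object', 'modifier']

Qw, _ = np.linalg.qr(rng.standard_normal((len(words), len(words))))
Qr, _ = np.linalg.qr(rng.standard_normal((len(roles), len(roles))))
word = dict(zip(words, Qw))        # arbitrary orthonormal word vectors
role = dict(zip(roles, Qr))        # arbitrary orthonormal role vectors

def bind(w, r):
    # A memory representation as in Fig. 7b: the word-role tensor product
    # of Eq. (8), flattened into the neural state space.
    return np.outer(word[w], role[r]).ravel()

# One sentence is a group of such bindings stored on a single memory plane.
sentence = [bind('Mary', 'subject'), bind('like', 'predicate'), bind('garden', 'object')]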
The memory connectivities for the three sentences are obtained from separate single-group learning on each sentence. We then set the combined memory connectivity for the collective retrieval phase to the sum of the three. Here, we adopt the function
$$f_{v,w}(t) = \int_0^{t} \big|\big\langle\, x(s)\cdot w,\; v \,\big\rangle\big|\, ds \tag{11}$$

to measure how close the retrieved quantity is to the word $v$ in the role $w$.
In the following semantics tasks, the retrieval was run for 30 seconds from an appropriately small arbitrary initial state. Multiple cues, such as in Fig. 9d, are implemented by assigning each cue to its original sampling time through a harmonic pulse. In other words, the combined cue is implemented as $I_r(t) = \sum_{j} u_{r_j}\cos\big(\omega(t - t_{r_j})\big)$, where $t_{r_j}$ is the sampling time originally assigned to the cue component $u_{r_j}$.
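A sketch of the cumulative fitness of Eq. (11) and of the combined-cue construction; the exact form of the fitness integrand is our labeled assumption, reconstructed from the description above.

import numpy as np

def fitness(orbit, dt, v, w):
    # Cumulative fitness along a retrieval orbit: each state x(t) is unbound
    # with the role vector w and compared with the word vector v; a steep,
    # steadily increasing curve marks a dominantly retrieved binding.
    p, q = len(v), len(w)
    scores = [abs(np.dot(x.reshape(p, q) @ w, v)) for x in orbit]
    return dt * np.cumsum(scores)

def combined_cue(components, omega):
    # Multiple cues assigned to their original sampling times t_j through a
    # harmonic pulse: I_r(t) = sum_j u_j cos(omega (t - t_j)).
    # 'components' is a list of (u_j, t_j) pairs.
    return lambda t: sum(u * np.cos(omega * (t - tj)) for u, tj in components)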
Figure 9.
(a) Expected result of retrieval by a single-component cue. (b) Expected result of retrieval by multiple-component cues. (c) Numerical result of retrieval by the single-component cue. (d) Numerical result of retrieval by the multiple-component cues.
In the first task of multiple composite memories, the binding of Mary as a subject is given as the cue. Since Mary occurs as a subject in only one of the three sentences, one can expect the retrieved result to be that sentence, as in Fig. 8a. The numerical simulation of the retrieval process turned out to agree well with this expectation. Figure 8b compares the fitness of the words. The values of $f_{v,w}$ in Eq. (11) are evaluated while $x(t)$ is oscillating along a convergent orbit of Eq. (6). If $f_{v,w}$ keeps increasing with a large slope, the corresponding memory component can be identified as a dominantly retrieved one. The graphs in Fig. 8b show that the dominantly retrieved representations are exactly the word-role bindings of the expected sentence.
Figure 8.
(a) Expected result of retrieval when the cue is given. (b) The numerical result of retrieval by the cue. Dominantly increasing values of $f_{v,w}$ are colored red; they turned out to correspond to the bindings of the expected sentence.
The second task deals with the case of an ambiguous memory cue. Suppose that the binding of John as a subject is given as the cue. Since it occurs in two of the sentences, it is reasonable that the retrieval result should involve all the memory representations of both sentences, as in Fig. 9a. This may be understood in light of one of the fundamental capabilities of the brain: to examine all possible memories that contain a common cue, especially when the given cue is insufficient. However, this ambiguity can be eliminated by adding further cues. For example, if a second component that occurs in only one of the two sentences is added, as in Fig. 9b, the retrieval result should be narrowed down to that sentence due to the extra constraint.
It turned out that the numerical simulations successfully capture the expected features of the memory retrieval process mentioned above. Figure 9c shows the results from the single ambiguous memory cue. It is notable that two curves in the second graph increase simultaneously with almost the same slope, indicating that the corresponding components are equally dominant memory representations in retrieval. This is even clearer when compared to another memory component whose curve is steady and negligible. The same pattern appears in the third graph, where both dominant curves again correspond to the two recalled sentences. The numerical results in Fig. 9d also reflect the retrieval tendency with an additional memory cue. We provide the system in Eq. (6) with the extended memory input in Eq. (7) consisting of two memory representations. Since the newly added cue confines the retrieval result to a single sentence, as in Fig. 9b, the memory representations that belong only to the other sentence should be suppressed in retrieval. The second and third graphs in Fig. 9d show that, while the curves of the unselected sentence's components still increase (due to the common cue), their slopes are smaller than those of the bindings in the selected sentence. This implies that the dominantly retrieved representations are exactly those of the selected sentence.
Discussion
Substantial evidence has now accumulated that neural oscillations are related to memory encoding, attention, and the integration of visual patterns14–16. In Ref.1, the idea was proposed that memories constitute stable dynamical trajectories on a two-dimensional plane, in which an incoming stimulus is encoded as a pair of imaginary eigenvalues of the connectivity matrix. We extended this idea further through a specific memory system that can process groups of high-dimensional associative data, using the exact analytic relation between the inputs and the corresponding synaptic changes shown in Ref.2.
While the Hopfield network17 retrieves a single static datum as a fixed point, the proposed model explores multiple data sets through neural oscillations. Compared to other neural models capable of explicit and perfect retrieval of grouped data, such as those in Refs.18–20, the model proposed in this work has the novelty of using a simple, continuous framework to reconstruct a group of multiple data in general vector form. Moreover, the model produces its output through neural oscillations in reaction to an external cue, showing a potential link to the real memory processes occurring in the brain.
We encode the input data with tag vectors based on the tensor representation, which has been proposed as a robust and flexible distributed memory representation11,21–24. This preprocessing enables us to efficiently retrieve the stored data and, in addition, to deal with composite structure in the data set. The ability to process multiple associative data sets with composite structure is essential in natural language understanding and reasoning. It has been shown that the proposed model can handle multiple sentences that describe distinct situations and can selectively allow the recall cue to arouse a group of associative memories according to its semantic relevance.
From a practical perspective, our results suggest an alternative approach to a memory device. The conventional von Neumann architecture is non-scalable, and its performance is limited by the so-called von Neumann bottleneck between nonvolatile memories and microprocessors. Operating on data with artificial synapses, on the other hand, benefits from parallel information processing that consumes a small amount of energy per synapse. Moreover, conventional digital memory systems convert the inputs to a binary code and save it in a separate storage device, likely destroying correlation information through such physical isolation. The proposed model is based on continuous dynamical systems and provides a simple and robust approach to dealing with a sequence of associative high-dimensional data. Processing data in a continuous and distributed system results in the plastic storage of the correlated information in the synaptic connections.
In this work, we used a continuous model based on the average of local spiking activities of neurons. The model can be seen as a continuous approximation of a Spiking Neural Network (SNN), which has been recognized as a promising architecture for bio-inspired neuromorphic hardware. Several studies have shown the dynamical correspondence between SNNs and their firing-rate approximations, including the Wilson–Cowan model25–27. However, reproducing the memory process in this paper with an SNN remains a substantial yet interesting practical challenge, considering that it requires establishing a proper conversion between a series of high-dimensional data and spiking patterns.
Acknowledgements
P. Kim was supported by National Research Foundation of Korea (2017R1D1A1B04032921). P. Kim was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A4A1032924).
Author contributions
H.Y. developed the theoretical formalism, performed the analytic calculations and performed the numerical simulations. P.K. devised the main conceptual ideas and wrote the main manuscript text.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Susman, L., Brenner, N. & Barak, O. Stable memory with unstable synapses. Nat. Commun. 10(1), 1–9 (2019). doi: 10.1038/s41467-019-12306-2
2. Yoon, H.-G. & Kim, P. STDP-based associative memory formation and retrieval. arXiv:2107.02429v2 (2021).
3. Bliss, T. V. & Collingridge, G. L. A synaptic model of memory: Long-term potentiation in the hippocampus. Nature 361(6407), 31–39 (1993). doi: 10.1038/361031a0
4. Bi, G.-Q. & Poo, M.-M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18(24), 10464–10472 (1998). doi: 10.1523/JNEUROSCI.18-24-10464.1998
5. Caporale, N. & Dan, Y. Spike timing-dependent plasticity: A Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46 (2008). doi: 10.1146/annurev.neuro.31.060407.125639
6. Blum, K. I. & Abbott, L. F. A model of spatial map formation in the hippocampus of the rat. Neural Comput. 8(1), 85–93 (1996). doi: 10.1162/neco.1996.8.1.85
7. Rao, R. P. & Sejnowski, T. J. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Comput. 13(10), 2221–2237 (2001). doi: 10.1162/089976601750541787
8. Gerstner, W., Kempter, R., Van Hemmen, J. L. & Wagner, H. A neuronal learning rule for sub-millisecond temporal coding. Nature 383(6595), 76–78 (1996). doi: 10.1038/383076a0
9. Tsodyks, M. Spike-timing-dependent synaptic plasticity—the long road towards understanding neuronal mechanisms of learning and memory. Trends Neurosci. 25(12), 599–600 (2002). doi: 10.1016/S0166-2236(02)02294-4
10. Szatmáry, B. & Izhikevich, E. M. Spike-timing theory of working memory. PLoS Comput. Biol. 6(8), e1000879 (2010). doi: 10.1371/journal.pcbi.1000879
11. Smolensky, P. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artif. Intell. 46(1–2), 159–216 (1990). doi: 10.1016/0004-3702(90)90007-M
12. Dayan, P. & Abbott, L. F. Theoretical neuroscience: Computational and mathematical modeling of neural systems. J. Cogn. Neurosci. 15(1), 154–155 (2003). doi: 10.1162/089892903321107891
13. Kempter, R., Gerstner, W. & Van Hemmen, J. L. Hebbian learning and spiking neurons. Phys. Rev. E 59(4), 4498 (1999). doi: 10.1103/PhysRevE.59.4498
14. Singer, W. & Gray, C. M. Visual feature integration and the temporal correlation hypothesis. Annu. Rev. Neurosci. 18(1), 555–586 (1995). doi: 10.1146/annurev.ne.18.030195.003011
15. Gupta, N., Singh, S. S. & Stopfer, M. Oscillatory integration windows in neurons. Nat. Commun. 7(1), 1–10 (2016). doi: 10.1038/ncomms13808
16. Rutishauser, U., Ross, I. B., Mamelak, A. N. & Schuman, E. M. Human memory strength is predicted by theta-frequency phase-locking of single neurons. Nature 464(7290), 903–907 (2010). doi: 10.1038/nature08860
17. Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79(8), 2554–2558 (1982). doi: 10.1073/pnas.79.8.2554
18. Yáñez-Márquez, C., Sánchez-Fernández, L. P. & López-Yáñez, I. Alpha-beta associative memories for gray level patterns. In International Symposium on Neural Networks, 818–823 (Springer, 2006).
19. Yáñez-Márquez, C., Cruz-Meza, M. E., Sánchez-Garfias, F. A. & López-Yáñez, I. Using alpha-beta associative memories to learn and recall RGB images. In International Symposium on Neural Networks, 828–833 (Springer, 2007).
20. Yáñez-Márquez, C., López-Yáñez, I., Aldape-Pérez, M., Camacho-Nieto, O., Argüelles-Cruz, A. J. & Villuendas-Rey, Y. Theoretical foundations for the alpha-beta associative memories: 10 years of derived extensions, models, and applications. Neural Process. Lett. 48(2), 811–847 (2018). doi: 10.1007/s11063-017-9768-2
21. Hintzman, D. L. MINERVA 2: A simulation model of human memory. Behav. Res. Methods Instrum. Comput. 16(2), 96–101 (1984). doi: 10.3758/BF03202365
22. Kanerva, P. Sparse Distributed Memory (MIT Press, 1988).
23. Humphreys, M. S., Bain, J. D. & Pike, R. Different ways to cue a coherent memory system: A theory for episodic, semantic, and procedural tasks. Psychol. Rev. 96(2), 208 (1989). doi: 10.1037/0033-295X.96.2.208
24. Pollack, J. B. Recursive distributed representations. Artif. Intell. 46(1–2), 77–105 (1990). doi: 10.1016/0004-3702(90)90005-K
25. Brette, R. Philosophy of the spike: Rate-based vs. spike-based theories of the brain. Front. Syst. Neurosci. 9, 151 (2015). doi: 10.3389/fnsys.2015.00151
26. Chow, C. C. & Karimipanah, Y. Before and beyond the Wilson–Cowan equations. J. Neurophysiol. 123(5), 1645–1656 (2020). doi: 10.1152/jn.00404.2019
27. Wallace, E., Benayoun, M., Van Drongelen, W. & Cowan, J. D. Emergent oscillations in networks of stochastic spiking neurons. PLoS ONE 6(5), e14804 (2011). doi: 10.1371/journal.pone.0014804