Scientific Reports. 2022 Mar 18;12:4666. doi: 10.1038/s41598-022-08469-6

An STDP-based encoding method for associative and composite data

Hong-Gyu Yoon 1, Pilwon Kim 1
PMCID: PMC8933433  PMID: 35304537

Abstract

Spike-timing-dependent plasticity (STDP) is a biological process of synaptic modification driven by the relative order and timing of firing between neurons. One of the neurodynamical roles of STDP is to form a macroscopic geometric structure in the neuronal state space in response to a periodic input (Susman et al., Nat. Commun. 10, 2019; Yoon & Kim, arXiv:2107.02429v2, 2021). In this work, we propose a practical memory model based on STDP which can store and retrieve high-dimensional associative data. The model combines STDP dynamics with an encoding scheme for distributed representations and is able to handle multiple composite data in a continuous manner. In the auto-associative memory task, where a group of images is continuously streamed to the model, the images are successfully retrieved from an oscillating neural state whenever a proper cue is given. In the second task, which deals with semantic memories embedded from sentences, the results show that words can recall multiple sentences simultaneously or one exclusively, depending on their grammatical relations.

Subject terms: Learning algorithms, Neural encoding, Computational science

Introduction

Spike-timing-dependent plasticity (STDP) is a biological process of synaptic modification according to the order of pre- and post-synaptic spiking within a critical time window3–5, and is considered central to cognitive mechanisms such as temporal coding6–8 and the formation of associative memory9,10. In our separate work2, we analyzed an STDP-based neural model and showed that the model can associate multiple high-dimensional memories with a geometric structure in the neural state space, which we call a memory plane. When exposed to repeatedly occurring spatio-temporal input patterns, the STDP-based neural activity transforms the patterns into the corresponding memory plane. Further, the stored memories can be dynamically revived through macroscopic neural oscillations around the memory plane when perturbed by a similar stimulus.

The presence and function of the memory plane in neural networks first drew attention in Ref.1, where it was proposed that STDP can store transient inputs as imaginary-coded memories. In this work, we further emphasize a practical aspect of the memory plane, showing that it can play a central role in storing, retrieving, and manipulating structured information. Building on the theoretical results in Ref.2, we integrate analytic and implementation-level descriptions of a neural memory process based on the memory plane that is capable of handling high-dimensional associative data.

In this work, we propose that an STDP-based memory model, combined with a proper encoding scheme, can store and retrieve grouped structured information in the neuronal state space. Among the schemes for encoding compositional structure that have been proposed, we adopt the Tensor Product Representation (TPR)11. TPR is a general method for generating vector-space embeddings of internal representations and operations, which can capture a variety of structural information such as lists of paired items, sequences, and networks.

We show that the STDP-based memory model with TPR can naturally provide a mechanism for segmenting continuous streams of sensory input into representations of associative bindings of items. First, we demonstrate an auto-associative memory task with a group of images. While the images are sequentially streamed into the system for storage, the corresponding information is internally stored in the connectivity matrix. Then the whole group of images can be dynamically retrieved from the oscillating neural state when the system is perturbed by a memory cue similar to any of the original images. In the second task, for semantic manipulation, we use multiple semantic vectors to represent a sentence as a composite of words. Once several sentences are stored in the system via such semantic vectors, a single word can recall multiple sentences simultaneously or one exclusively, depending on their grammatical relations. This implies that the proposed method provides an alternative bio-inspired approach to processing multiple groups of associative data with composite structure.

Methods

STDP-based memory model

Our work follows the framework of standard firing-rate models1,12. We set the differential equation for the neural state as

$$\dot{x} = -x + W\,\phi(x) + b(t), \tag{1}$$

where $x = (x_1, \ldots, x_N)^\top \in \mathbb{R}^N$ is the state of the $N$ neuronal nodes and $W = (W_{ij}) \in \mathbb{R}^{N \times N}$ is a connectivity matrix, with $W_{ij}$ corresponding to the strength of the synaptic connection from node $j$ to node $i$. Here $\phi$ is a regularizing transfer function and $b(t)$ is a memory input.

The plasticity model that we adopt is based on Ref.13:

$$\dot{W}_{ij}(t) = -\gamma W_{ij}(t) + \rho\left[\underbrace{\int_0^{\infty} K(s)\,\phi\bigl(x_j(t-s)\bigr)\,\phi\bigl(x_i(t)\bigr)\,ds}_{\text{pre- to post-firing}} + \underbrace{\int_0^{\infty} K(-s)\,\phi\bigl(x_j(t)\bigr)\,\phi\bigl(x_i(t-s)\bigr)\,ds}_{\text{post- to pre-firing}}\right], \tag{2}$$

where $K$ is a temporal kernel and $\rho$ is the learning rate. The parameter $\gamma$ is the decay rate of homeostatic plasticity: while the synapses are repeatedly excited by the external input, they also need to decay so that obsolete and irrelevant information is gradually forgotten. For analytic simplicity, we use $\phi(x) = x$ and a simplified Dirac-delta-type temporal kernel $K(s)$ defined as

$$K(s) := \begin{cases} \delta(s-\tau) & s > 0 \\ -\delta(s+\tau) & s \le 0 \end{cases} \tag{3}$$

with $\tau > 0$. Using this kernel instead of the conventional exponential one has two main consequences: first, it models a specific interspike timing in which the synaptic changes are maximally concentrated near the origin, as observed in related works. Second, it yields a simple delayed term through the convolution in Eq. (2), which is relatively amenable to analysis2.

Now after simplifications, the main model becomes

$$\begin{aligned} \dot{x} &= -x + Wx + b(t), \\ \dot{W} &= -\gamma W + \rho\left(x\,x_\tau^\top - x_\tau\,x^\top\right), \end{aligned} \tag{4}$$

where $x_\tau = x(t-\tau)$ stands for the delayed synaptic response.

We use the memory input in the form of a sequential harmonic pulse as

$$b(t) = \sum_{i=1}^{n} \sin(\omega t - \xi_i)\, m_i, \tag{5}$$

where $m_1, \ldots, m_n \in \mathbb{R}^N$ are the memory representations to be stored. Here $\omega$ stands for the frequency of neural oscillations and $\xi_i$, $i = 1, \ldots, n$, for the sampling time of each component. The trajectory of the memory input $b(t)$ in (5) is periodic and embedded in a two-dimensional plane $S \subset \mathbb{R}^N$, which we call the memory plane with respect to the memory representations $m_1, \ldots, m_n$. While the memory representations are distributed in the high-dimensional neural state space $\mathbb{R}^N$, the memory plane $S$ tends to be located in close proximity to them under a suitable condition2. Further, the system (4) has an asymptotically stable solution $(x^*, W^*)$ consisting of a periodic solution $x^*(t)$ on the memory plane $S$ and a constant connectivity matrix $W(t) \equiv W^* = \alpha(v u^\top - u v^\top)$ for some vectors $u$ and $v$ in $S$. We will see below that the matrix $W^*$ contains the essential information for retrieving the memory representations $m_1, \ldots, m_n$. This implies that convergence of $x(t)$ to a certain periodic oscillation $x^*(t)$ is a sign that the memories are stored in $W^*$ in a distributed way.
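To make the storage dynamics concrete, the following is a minimal numerical sketch of Eqs. (4)–(5). It assumes a plain explicit Euler step with a history buffer for the delayed state $x(t-\tau)$ (the simulations reported below use a modified Euler method for delay equations); the function name, default parameters, and the even spacing of the sampling times $\xi_i$ are illustrative choices, not the authors' code.

```python
import numpy as np

def storage_phase(memories, omega=1.5, gamma=0.5, rho=0.5, tau=np.pi/3,
                  T=40.0, dt=0.1):
    """Sketch of the storage system (4) driven by the harmonic input (5).

    `memories` is a list of memory representations m_1, ..., m_n in R^N.
    An explicit Euler step with a history buffer for x(t - tau) is used
    here purely for illustration.
    """
    ms = np.stack(memories)                       # n x N array of m_i
    n, N = ms.shape
    xis = np.pi / n * np.arange(n)                # assumed spacing of xi_i
    delay_steps = int(round(tau / dt))

    x = 1e-3 * np.random.randn(N)                 # small arbitrary initial state
    W = 1e-3 * np.random.randn(N, N)
    history = [x.copy()] * (delay_steps + 1)      # buffer holding x(t - tau)

    for step in range(int(T / dt)):
        t = step * dt
        b = (np.sin(omega * t - xis)[:, None] * ms).sum(axis=0)   # input (5)
        x_tau = history[0]                                         # delayed state
        dx = -x + W @ x + b                                        # Eq. (4), state
        dW = -gamma * W + rho * (np.outer(x, x_tau) - np.outer(x_tau, x))  # STDP term
        x = x + dt * dx
        W = W + dt * dW
        history.pop(0)
        history.append(x.copy())
    return W        # approximates the converged connectivity W*
```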

In the retrieval phase, we set $\gamma = \rho = 0$ in Eq. (4) and use the connectivity matrix $W = W^*$ as

$$\dot{x} = -x + W^* x + b(t), \tag{6}$$

where

$$b(t) = \sin(\omega t)\, m_c, \qquad m_c \in \mathbb{R}^N. \tag{7}$$

Here $m_c$ is the representation of the memory cue. The retrieval system (6) also has an asymptotically stable periodic solution, say $x_c^*(t)$. One can show that the trajectory of $x_c^*(t)$ lies close to $S$ if the memory cue $m_c$ is relevant to any of the memory representations $m_1, \ldots, m_n$. Hence, as a neural state $x(t)$ is attracted to $x_c^*(t)$, it is also bound to circle around the memory representations $m_1, \ldots, m_n$. Figure 1 shows that the neural oscillation occurs near the memory plane, and therefore in proximity to all the memory representations, when $m_c$ is given close to one of them.
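A corresponding sketch of the retrieval phase, Eqs. (6)–(7), under the same simplifying assumptions (explicit Euler integration; function name and defaults are ours):

```python
import numpy as np

def retrieval_phase(W_star, m_cue, omega=1.5, T=15.0, dt=0.01):
    """Sketch of the retrieval system (6) with the single-cue input (7).

    Plasticity is frozen (gamma = rho = 0), so only the neural state evolves.
    The trajectory is returned so intersections with the memory plane can be
    inspected afterwards.
    """
    N = W_star.shape[0]
    x = 1e-3 * np.random.randn(N)        # small arbitrary initial state
    traj = []
    for step in range(int(T / dt)):
        t = step * dt
        b = np.sin(omega * t) * m_cue    # cue input (7)
        x = x + dt * (-x + W_star @ x + b)   # Eq. (6)
        traj.append(x.copy())
    return np.array(traj)
```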

Figure 1.

Figure 1

Graphical illustration of the memory representations $m_1, \ldots, m_n$ and the corresponding memory plane $S$. The memory plane $S$ is located in the subspace spanned by the memory representations and is shown to be close enough to them. A periodic orbit close to $S$ can be used for efficient memory retrieval.

Encoding/decoding associative data with tag vectors

Before storing and retrieving specific associative data with (4) and (6), respectively, we first need to encode them properly into memory representations $m_1, \ldots, m_n$. It is reasonable to assume that cognitive systems do not simply receive external inputs passively, but rather actively position them in the neural state space upon acceptance. Suppose $f_1, \ldots, f_n \in \mathbb{R}^D$ are a series of external inputs from the environment containing high-dimensional associative information. We use a set of internal tags $r_1, \ldots, r_m \in \mathbb{R}^K$ to mark which classes the corresponding external inputs belong to. They may indicate the order of events (the first, the second, ..., the last) if the input is streamed through sequential observations, the type of sensor (visual, auditory, olfactory, tactile) if the input is a combination of senses, or the sentence elements (subject, predicate, object, modifier) if the input is a sentence composed of words. Such internal tags $r_1, \ldots, r_m$ can be formulated as low-dimensional orthonormal vectors.

Following the vector embedding method for structured information11, we use the tensor product to encode a raw datum $f_i$ into a memory representation $m_i$ as

$$m_i = f_i \otimes r, \tag{8}$$

where $r$ is the tag vector corresponding to the raw data $f_i$. If $r$ is of unit length, the original data can be exactly decoded from the memory representation by taking the right dot product with the tag vector:

$$m_i \cdot r = (f_i \otimes r) \cdot r = f_i\,(r \cdot r) = f_i. \tag{9}$$

We say a neural state $x \in \mathbb{R}^N$ is retrievable if $x$ is a linear combination of the memory representations $m_1, \ldots, m_n$. For a retrievable state $x(t) = \sum_{j=1}^{n} c_j m_j$, a selective recovery of the original $f_i$ is possible by applying the right dot product with the corresponding tag vector:

$$x \cdot r_i = \sum_{j=1}^{n} c_j\, m_j \cdot r_i = c_i f_i. \tag{10}$$

It can be shown that all the points on the memory plane $S$ with respect to $m_1, \ldots, m_n$ are retrievable.
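As an illustration of the encoding and decoding in Eqs. (8)–(10), the sketch below identifies the tensor product $f_i \otimes r$ with a flattened outer product in $\mathbb{R}^{DK}$; this identification, the helper names, and the toy dimensions are our assumptions, consistent with how the dimensions are used in the tasks below.

```python
import numpy as np

def encode(f, r):
    """Bind a raw data vector f (R^D) to a unit tag vector r (R^K), as in Eq. (8).

    The outer product f r^T is flattened so the memory representation lives
    in R^N with N = D*K, matching the state dimension of the model."""
    return np.outer(f, r).ravel()

def decode(x, r, D, K):
    """Recover the component tagged by r from a neural state x, as in Eq. (10)."""
    return x.reshape(D, K) @ r

# toy check of Eq. (9): exact recovery when the tag vectors are orthonormal
D, K = 6, 3
tags = np.linalg.qr(np.random.randn(K, K))[0]          # orthonormal tag vectors (columns)
f1, f2 = np.random.randn(D), np.random.randn(D)
x = encode(f1, tags[:, 0]) + encode(f2, tags[:, 1])    # a retrievable state
assert np.allclose(decode(x, tags[:, 0], D, K), f1)
```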

Let us describe how the encoding/decoding scheme is combined with the above memory models. In the storage phase, we encode the data $f_1, \ldots, f_n$ into the memory representations $m_1, \ldots, m_n$ using the tag vectors as in Eq. (8), and run the system (4) with the memory input $b(t)$ in Eq. (5). To retrieve the original data, we run the retrieval system (6) with a certain cue $m_c$. We wait until $x(t)$ converges to an oscillating trajectory $x_c^*(t)$, and then evaluate $x(t) \cdot r$ for retrieval. It has been shown2 that $x_c^*(t)$ always has intersections with $S$, which are therefore retrievable. However, since the neural state space $\mathbb{R}^N$ is high dimensional, an irrelevant choice of $m_c$ leads to a trivial intersection near the origin, which yields no meaningful retrieval. On the contrary, if $m_c$ is close to any of $m_1, \ldots, m_n$, then $x(t)$ converges to a periodic solution $x_c^*(t)$ that is almost embedded in $S$, as illustrated in Fig. 1. Indeed, one can show that if $m_c$ is a scalar multiple of one of the memory representations, the corresponding $x_c^*(t)$ is completely embedded in $S$ and is therefore retrievable for all $t$.

In the following section, we show through numerical tests for storage and retrieval that the STDP-based model (4) with the tensor product encoding can naturally provide a neural mechanism for segmenting continuous streams of sensory input into representations of associative bindings of items.

Results

Retrieval of grouped images

We first demonstrate an auto-associative memory task that involves a group of images. This task uses the five $64 \times 64$ grayscale images of classical orchestral instruments in Fig. 2. The images are translated into external input vectors $f_i$, $i = 1, \ldots, 5$, in $\mathbb{R}^{64^2}$ and are combined into the memory representations as $m_i = f_i \otimes r_i$, $i = 1, \ldots, 5$. Here the tag vectors $r_i$, $i = 1, \ldots, 5$, are orthonormal in $\mathbb{R}^5$ and serve as a placeholder for each image.

Figure 2.

Figure 2

Grayscale images of $64 \times 64$ pixels displaying classical orchestral instruments, used for the memory input vectors $f_1, \ldots, f_5$. All images were taken from Pixabay.

For the numerical simulations in this article, a modified Euler method for delay equations has been used throughout. In the first memory task, each $64 \times 64$ image is translated into a vector $f_i$ as follows: every pixel is mapped to a value in $[-\sigma, \sigma]$, $\sigma > 0$, depending on its brightness (pure black to $-\sigma$ and pure white to $\sigma$, linearly in between). The resulting $64 \times 64$ matrix is then flattened into a vector $f_i$. In the storage phase, $\sigma = 0.02$ was used to keep the magnitudes of $f_i$ and $m_i$ at an appropriate level. Reconstructing an image from a vector is done by performing the procedure in reverse order. The storage phase was run for 40 seconds with the integration step size $\Delta t = 0.1$ and $\xi_i = \frac{\pi}{5}(i-1)$ (5 evenly spaced points in $[0, \pi)$). The model parameters are chosen as $\omega = 1.5$, $\gamma = \rho = 0.5$, and $\tau = \frac{\pi}{2\omega} = \frac{\pi}{3}$. These specific choices were made based on the analysis in Ref.2, so as to achieve convergence of $W(t) \to W^*$ while guaranteeing a sufficiently large magnitude of $W^*$. As expected, stable convergence of the connectivity to a certain $W^*$ is achieved when the arbitrary initial conditions $x(0)$ and $W(0)$ are appropriately small. The retrieval phase was run for 15 seconds with $\Delta t = 0.01$ for an appropriately small arbitrary $x(0)$. In Figs. 3, 4, and 5, the brightness threshold $\sigma$ was adjusted to 0.002 for clear visibility, since the magnitudes of the retrieved images are relatively small compared to the originals. Thus, any element of $x \cdot r_i$ with a value outside $[-0.002, 0.002]$ is rendered as a pixel of pure black or white.
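A possible implementation of this pixel mapping is sketched below, assuming 8-bit grayscale images with values in $[0, 255]$ (the exact input range is not stated in the text); the function names are illustrative.

```python
import numpy as np

def image_to_vector(img, sigma=0.02):
    """Map a 64x64 grayscale image (assumed values in [0, 255]) to f in R^{64^2}.

    Pure black maps to -sigma and pure white to +sigma, linearly in between,
    and the matrix is then flattened, following the description in the text."""
    return (img.astype(float) / 255.0 * 2.0 - 1.0).ravel() * sigma

def vector_to_image(f, sigma=0.002, shape=(64, 64)):
    """Reverse the mapping for display; values outside [-sigma, sigma] saturate
    to pure black or white."""
    g = np.clip(f.reshape(shape) / sigma, -1.0, 1.0)
    return ((g + 1.0) / 2.0 * 255.0).astype(np.uint8)
```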

Figure 3.

Figure 3

Auto-associative memory retrieval from a contaminated cue. (a) The noisy cue $m_c = \tilde{f}_1 \otimes \tilde{r}_1$ is generated from $\tilde{f}_1 = \sqrt{1-\alpha^2}\, f_1 + \alpha \zeta$ and $\tilde{r}_1 = \sqrt{1-\beta^2}\, r_1 + \beta \eta$, where $\zeta$ and $\eta$ are Gaussian noise following $\mathcal{N}_{\mathbb{R}^{64^2}}(0, \|f_1\|)$ and $\mathcal{N}_{\mathbb{R}^5}(0, 1)$, respectively. The parameters are $\alpha = 0.25$ and $\beta = 0.2$. (b) Snapshot of the retrieved images at the farthest point (red dot) from the memory plane $S$. (c) Snapshot of the retrieved images at the intersection (green dot) of the orbit and the memory plane $S$. The timing of intersection $t = t^* > 0$ can be analytically determined as $t^* = (\tan^{-1}\omega + n\pi)/\omega$, $n \in \mathbb{Z}$2. Here, the contrast between the main object and the background is not reproduced as well as in the original images, since the brightness threshold $\sigma = 0.005$ is used for drawing the retrieved images. One can obtain an original (either identical or exactly inverted) image by using a certain optimal value of $\sigma$ for each retrieved image.

Figure 4.

Figure 4

Comparison of retrieval quality according to the noise level in the cue. (a) The less noisy cue. The cue is generated in the same way as in Fig. 3, except using $f_3$ and $r_3$ instead of $f_1$ and $r_1$. The Gaussian noise terms follow $\mathcal{N}_{\mathbb{R}^{64^2}}(0, \|f_3\|)$ and $\mathcal{N}_{\mathbb{R}^5}(0, 1)$, respectively. The parameters are $\alpha = 0.1$ and $\beta = 0.2$. (b) The severely contaminated cue with $\alpha = 0.7$. (c) Snapshots of the retrieved images from the less noisy cue in (a), taken at the farthest point from $S$ (top row) and at the intersection (bottom row). Here, $p(t^*) \approx 0.0271$ and $\bar{p} \approx 0.0899$. (d) Snapshots of the retrieved images from the more noisy cue in (b), taken at the farthest point from $S$ (top row) and at the intersection (bottom row). Here, $p(t^*) \approx 0.0194$ and $\bar{p} \approx 0.0688$.

Figure 5.

Figure 5

(a) Result of retrieval from a partially obstructed cue. Snapshots of the retrieved images are taken at the farthest point from $S$ (top row) and at the intersection of the orbit and the memory plane (bottom row). Here, $p(t^*) \approx 0.0273$ and $\bar{p} \approx 0.1092$. (b) Result of retrieval from an irrelevant cue. Snapshots of the retrieved images are taken at the farthest point from $S$ (top row) and at the intersection of the orbit and the memory plane (bottom row). Here, $p(t^*) \approx 0.0000$ and $\bar{p} \approx 0.0015$. In both (a) and (b), we used the noisy tag vector $\tilde{r}_1$ from Fig. 3 for retrieval.

Figure 3 depicts the numerical simulation of the retrieval phase. For a better understanding of the process, a graphical illustration of the memory plane and the initial memory cue is given alongside the actual data. When the neural state $x(t)$ of the retrieval system (6) is continually perturbed by a noisy copy of one of the original images (violin), it approaches the memory plane $S$. Once $x(t)$ converges to a limit cycle around $S$, as stated in Theorem 2, the external inputs $f_1, \ldots, f_5$ can be reproduced by applying the tag vectors to $x(t)$. In Fig. 3, we display two snapshots of the retrieved images obtained at two points on the orbit: Fig. 3b is taken at the point farthest from $S$ and Fig. 3c at the intersection. It is notable that the retrieved images continuously oscillate, developing weak/strong and positive/negative images in turns. Such flashing patterns generally differ from image to image and are affected by the sequential order of the memory representations in Eq. (5) during the storage phase. Furthermore, due to the orthogonality of the tag vectors, the perfect images are acquired at the time instants when $x(t)$ penetrates $S$.

Figure 4 shows that the quality of the retrieved images depends on how close the memory cue is to the original image input. The cue with low-level noise in Fig. 4a leads to an orbit (blue) close to the memory plane $S$, producing images of decent quality in Fig. 4c. However, if the cue is more contaminated with noise, as in Fig. 4b, $x(t)$ approaches $S$ at a relatively larger angle, making a stretched, narrower elliptical orbit (red) that periodically moves far from $S$. Although the orbit from the severely contaminated cue still passes through the memory plane, it does so only near the origin, providing relatively feeble images for a short time.

Retrieval can also be performed with an incomplete cue. In Fig. 5a, the images are recalled from a partially obstructed cue. The original images can be recovered at a decent level, especially when $x(t)$ passes through the memory plane $S$. Figure 5b shows that an irrelevant cue (forest) fails to retrieve the original memory inputs. Indeed, it can be shown that a completely irrelevant cue results in a one-dimensional periodic orbit that repeatedly crosses the memory plane only at the origin.

To quantitatively verify the performance, we introduce the scaled cosine similarity $f_i \cdot g_i(t)/\|f_i\|^2$ between the original image $f_i$ and the retrieved image $g_i(t)$ at time $t$. Note that, if $g_i(t) = O(f_i)$, this metric reflects both the usual cosine similarity between the two images and the 'memory intensity' $\|g_i(t)\|/\|f_i\|$. Let $p(t)$ denote the average of the scaled cosine similarity over the images $f_i$, $i = 1, \ldots, n$, at time $t$, and let $\bar{p}$ denote the temporal average of $p(t)$. One can see that these two measures successfully capture the quality of the retrieved images depending on the given cues. See the captions of Figs. 4 and 5 for more detail.
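The two measures can be computed, for instance, as follows (hypothetical helper names; here $g_i(t)$ denotes the decoded image $x(t) \cdot r_i$):

```python
import numpy as np

def scaled_similarity(f, g):
    """Scaled cosine similarity f . g / ||f||^2 between an original image f
    and a retrieved image g (both given as flattened vectors)."""
    return float(f @ g) / float(f @ f)

def p_of_t(originals, retrieved):
    """p(t): average scaled similarity over all stored images at one instant."""
    return np.mean([scaled_similarity(f, g) for f, g in zip(originals, retrieved)])

def p_bar(originals, retrieved_series):
    """Temporal average of p(t) over a sequence of retrieval snapshots."""
    return np.mean([p_of_t(originals, gs) for gs in retrieved_series])
```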

The memory capacity of the model, measured by the time-averaged performance $\bar{p}$ as a function of the number of memorized patterns, is illustrated in Fig. 6. The performance $\bar{p}$ is observed to decrease at a rate of at least $n^{-\frac{1}{2}}$, as expected from Ref.2.

Figure 6.

Figure 6

Memory capacity measured by the time-averaged performance: the memory representations were constructed with randomly generated 200-dimensional pattern vectors and 20-dimensional tag vectors.

Multiple groups of memory with composite structure

This section deals with applications of the model to more complex associative memory. Suppose we have multiple groups of memory representations and have stored each group in the form of a memory plane using the system in Eq. (4). We are especially interested in the case where some memory representations belong to multiple groups. The following questions naturally arise: (1) Can the common memory component retrieve the corresponding multiple groups together? (2) Can a single memory group be selected by adding a further memory component to the cue? These questions are potentially related to high-level inference on memory.

We also focus on the compositional structure of memory representations created by the tag vectors. The memory inputs in this section are words, collectively provided in the form of a sentence. We assume that each tag vector stands for a sentence element (subject, predicate, object, modifier) and is naturally bound to a word according to the role of that word in the sentence. Activated by such a sequential stream of words, the system in Eq. (4) forms the memory plane, which can be regarded as the encoding of the sentence.

For the simulation of semantic memory, we use three sentences composed from a vocabulary of 8 words. Every word appearing in a sentence takes one of 4 roles (sentence elements). The words and the roles are listed in Fig. 7a. We simply use arbitrarily chosen orthonormal sets $\{f_i\}_{i=1}^{8}$ for the words and $\{r_j\}_{j=1}^{4}$ for the roles. Figure 7b shows a couple of examples of memory representations, each of which is a binding of a word and a role. Here the subscript on the right-hand side expresses the role of the corresponding word. Our goal is to store the semantic information of the sentences through Eq. (4) with the memory input $b(t)$ in Eq. (5). The three sentences $S_1$, $S_2$, and $S_3$ listed in Fig. 7c are used as the memory input in the simulation. Note that the word John appears three times in the sentences: once in $S_1$ as an object, and twice, in $S_2$ and $S_3$, as a subject. Similarly, the words Mary and garden each occur twice in different contexts.

Figure 7.

Figure 7

(a) List of words and roles used for the external inputs $f_i$ and the tags $r_j$. (b) Illustration of how memory representations are constructed. (c) Three sentences $S_1$, $S_2$, and $S_3$ generated by grouping memory representations. Note that the words John, Mary, and garden appear several times in different contexts.

The memory connectivities $W_k^*$, $k = 1, 2, 3$, are obtained from separate single-group learning on the sentences $S_k$, $k = 1, 2, 3$, respectively. We then set the combined memory connectivity for the three sentences to $W^* = W_1^* + W_2^* + W_3^*$ for the collective retrieval phase. Here, we adopt the function

$$P_j^i(t) := \int_{t_0}^{t} f_i \cdot \bigl(x(s) \cdot r_j\bigr)\, ds, \qquad i = 1, \ldots, 8, \quad j = 1, \ldots, 4, \tag{11}$$

to measure how close the retrieved quantity is to the word $f_i$ in the role $r_j$.
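A sketch of how $P_j^i(t)$ could be accumulated along a retrieval trajectory, approximating the integral in Eq. (11) by a Riemann sum; the array layout and function name are our own choices, not the authors' code.

```python
import numpy as np

def fitness_curves(traj, dt, words, roles, t0_index=0):
    """Running integrals P_j^i(t) of Eq. (11) along a retrieval trajectory.

    `traj` is the array of neural states x(t) from a retrieval run, `words`
    the vectors f_i, and `roles` the tag vectors r_j. Returns an array of
    shape (len(words), len(roles), time) with one curve per (word, role) pair.
    """
    D, K = len(words[0]), len(roles[0])
    P = np.zeros((len(words), len(roles), traj.shape[0]))
    acc = np.zeros((len(words), len(roles)))
    for t_idx in range(t0_index, traj.shape[0]):
        decoded = traj[t_idx].reshape(D, K)           # x(s) viewed as a D x K array
        for j, r in enumerate(roles):
            comp = decoded @ r                         # x(s) . r_j
            for i, f in enumerate(words):
                acc[i, j] += (f @ comp) * dt           # f_i . (x(s) . r_j) ds
        P[:, :, t_idx] = acc
    return P
```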

In the following semantic tasks, the retrieval phase was run for 30 seconds with $\Delta t = 0.01$ and an appropriately small $x(0)$. Multiple cues, such as John$_S$ + Mary$_O$ in Fig. 9d, are implemented by assigning each cue component to its original sampling time through a harmonic pulse. In other words, the combined cue John$_S$ + Mary$_O$ is implemented as $b_c(t) = \sin(\omega t - \xi_1)\,(f_2 \otimes r_1) + \sin(\omega t - \xi_3)\,(f_1 \otimes r_3)$.
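For illustration, such a combined cue could be assembled as below (hypothetical helper; the (m, ξ) pairs are the cue representations together with their original sampling times):

```python
import numpy as np

def combined_cue(t, cues, omega=1.5):
    """Multiple-component cue input: each (m, xi) pair contributes
    sin(omega*t - xi) * m, as in the combined John_S + Mary_O cue above."""
    return sum(np.sin(omega * t - xi) * m for m, xi in cues)
```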

Figure 9.

Figure 9

(a) Expected result of retrieval by the single-component cue John$_S$. (b) Expected result of retrieval by the multiple-component cues John$_S$ and Mary$_O$. (c) Numerical result of retrieval by the single-component cue John$_S$. (d) Numerical result of retrieval by the multiple-component cues John$_S$ and Mary$_O$.

In the first task with multiple composite memories, Mary$_S$ is given as the cue. Since Mary occurs as a subject only in $S_1$, one can expect the retrieved result to be $S_1$, as in Fig. 8a. The numerical simulation of the retrieval process turned out to agree well with this expectation. Figure 8b compares the fitness of the words. The values of $P_j^i(t)$ in Eq. (11) are evaluated while $x(t)$ is oscillating along a convergent orbit of Eq. (6). If $P_j^i(t)$ keeps increasing with a large slope, the corresponding memory component $f_i \otimes r_j$ can be identified as a dominantly retrieved one. The graphs in Fig. 8b show that such representations are Mary$_S$, calling$_P$, John$_O$, and livingroom$_M$, which match $S_1$ well.

Figure 8.

Figure 8

(a) Expected result of retrieval when the cue Mary$_S$ is given. (b) Numerical result of retrieval by the cue Mary$_S$. Dominantly increasing values of $P_j^i(t)$ are colored red; they turned out to correspond to $S_1$.

The second task deals with the case of an ambiguous memory cue. Suppose that the memory component John$_S$ is given as the cue. Since it occurs in both sentences $S_2$ and $S_3$, it is reasonable that the retrieval result should involve all the memory representations in both sentences, as in Fig. 9a. This can be understood in that one of the fundamental capabilities of the brain is to examine all possible memories that contain the common cue, especially when the given cue is insufficient. However, this ambiguity can be eliminated by adding further cues. For example, if Mary$_O$ is added as in Fig. 9b, the retrieval result should be narrowed down to $S_3$ due to the extra constraint.

It turned out that the numerical simulations successfully capture the expected features of the memory retrieval process mentioned above. Figure 9c shows the results from the memory cue John$_S$. It is notable that chasing$_P$ and looking$_P$ in the second graph increase simultaneously with almost the same slope, indicating that they are equally dominant memory representations in retrieval. This is even clearer when compared to another memory component, calling$_P$, which remains steady and negligible. The same pattern appears with Mary$_O$ and dog$_O$ in the third graph, both of which are dominant retrieval representations. The numerical results in Fig. 9d also reflect the retrieval tendency with an additional memory cue. We provide the system in Eq. (6) with the extended memory input in Eq. (7) consisting of the two memory representations John$_S$ and Mary$_O$. Since the newly added cue Mary$_O$ confines the retrieval result to the sentence $S_3$, as in Fig. 9b, the memory representations in $S_2$, chasing$_P$ and dog$_O$, should be suppressed in retrieval. The second and third graphs in Fig. 9d show that, while chasing$_P$ and dog$_O$ increase (due to the common cue John$_S$), their slopes are smaller than those of looking$_P$ and Mary$_O$ in $S_3$, respectively. This implies that the dominantly retrieved representations are John$_S$, looking$_P$, Mary$_O$, and garden$_M$, which match $S_3$.

Discussion

Substantial evidence has now accumulated that neural oscillations are related to memory encoding, attention, and the integration of visual patterns14–16. In Ref.1, the idea was proposed that memories constitute stable dynamical trajectories on a two-dimensional plane, in which an incoming stimulus is encoded as a pair of imaginary eigenvalues of the connectivity matrix. We extended this idea further through a specific memory system that can process groups of high-dimensional associative data sets, using the exact analytic relation between the inputs and the corresponding synaptic changes shown in Ref.2.

While the Hopfield network17 retrieves a single static datum as a fixed point, the proposed model explores multiple data sets through neural oscillations. Compared to other neural models capable of explicit and perfect retrieval of grouped data, such as Refs.18–20, the model proposed in this work has the novelty of using a simple and continuous framework to reconstruct a group of multiple data in general vector form. Moreover, the model produces its output through neural oscillations in reaction to an external cue, showing a potential link to real memory processes in the brain.

We encode the input data with tag vectors based on the tensor representation, which has been proposed as a robust and flexible distributed memory representation11,21–24. This preprocessing enables us to efficiently retrieve the stored data and, in addition, to deal with the composite structure of the data set. The ability to process multiple associative data sets with composite structure is essential in natural language understanding and reasoning. We have shown that the proposed model can handle multiple sentences that describe distinct situations and can selectively allow the recall cue to arouse a group of associative memories according to its semantic relevance.

From a practical perspective, our results suggest an alternative approach to memory devices. The conventional von Neumann architecture scales poorly, and its performance is limited by the so-called von Neumann bottleneck between nonvolatile memories and microprocessors. On the other hand, operating on data with artificial synapses benefits from parallel information processing that consumes a small amount of energy per synapse. Moreover, conventional digital memory systems convert inputs to a binary code and save it in a separate storage device, likely destroying correlation information through such physical isolation. The proposed model is based on continuous dynamical systems and provides a simple and robust approach to dealing with a sequence of associative high-dimensional data. Processing data in a continuous and distributed system results in plastic storage of the correlated information in the synaptic connections.

In this work, we used a continuous model based on the average local spiking activity of neurons. The model can be seen as a continuous approximation of a spiking neural network (SNN), which has been recognized as a promising architecture for bio-inspired neuromorphic hardware. Several studies have shown the dynamical correspondence between SNNs and their firing-rate approximations, including the Wilson–Cowan model25–27. However, reproducing the memory process in this paper with an SNN remains a substantial yet interesting practical challenge, considering that it requires establishing a proper conversion between a series of high-dimensional data and spiking patterns.

Acknowledgements

P. Kim was supported by National Research Foundation of Korea (2017R1D1A1B04032921). P. Kim was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A4A1032924).

Author contributions

H.Y. developed the theoretical formalism, performed the analytic calculations and performed the numerical simulations. P.K. devised the main conceptual ideas and wrote the main manuscript text.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Susman, L., Brenner, N. & Barak, O. Stable memory with unstable synapses. Nat. Commun. 10(1), 1–9 (2019). doi: 10.1038/s41467-019-12306-2.
2. Yoon, H.-G. & Kim, P. STDP-based associative memory formation and retrieval. arXiv:2107.02429v2 (2021).
3. Bliss, T. V. & Collingridge, G. L. A synaptic model of memory: Long-term potentiation in the hippocampus. Nature 361(6407), 31–39 (1993). doi: 10.1038/361031a0.
4. Bi, G.-Q. & Poo, M.-M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18(24), 10464–10472 (1998). doi: 10.1523/JNEUROSCI.18-24-10464.1998.
5. Caporale, N. & Dan, Y. Spike timing-dependent plasticity: A Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46 (2008). doi: 10.1146/annurev.neuro.31.060407.125639.
6. Blum, K. I. & Abbott, L. F. A model of spatial map formation in the hippocampus of the rat. Neural Comput. 8(1), 85–93 (1996). doi: 10.1162/neco.1996.8.1.85.
7. Rao, R. P. & Sejnowski, T. J. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Comput. 13(10), 2221–2237 (2001). doi: 10.1162/089976601750541787.
8. Gerstner, W., Kempter, R., Van Hemmen, J. L. & Wagner, H. A neuronal learning rule for sub-millisecond temporal coding. Nature 383(6595), 76–78 (1996). doi: 10.1038/383076a0.
9. Tsodyks, M. Spike-timing-dependent synaptic plasticity - the long road towards understanding neuronal mechanisms of learning and memory. Trends Neurosci. 25(12), 599–600 (2002). doi: 10.1016/S0166-2236(02)02294-4.
10. Szatmáry, B. & Izhikevich, E. M. Spike-timing theory of working memory. PLoS Comput. Biol. 6(8), e1000879 (2010). doi: 10.1371/journal.pcbi.1000879.
11. Smolensky, P. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artif. Intell. 46(1–2), 159–216 (1990). doi: 10.1016/0004-3702(90)90007-M.
12. Dayan, P. & Abbott, L. F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. J. Cogn. Neurosci. 15(1), 154–155 (2003). doi: 10.1162/089892903321107891.
13. Kempter, R., Gerstner, W. & Van Hemmen, J. L. Hebbian learning and spiking neurons. Phys. Rev. E 59(4), 4498 (1999). doi: 10.1103/PhysRevE.59.4498.
14. Singer, W. & Gray, C. M. Visual feature integration and the temporal correlation hypothesis. Annu. Rev. Neurosci. 18(1), 555–586 (1995). doi: 10.1146/annurev.ne.18.030195.003011.
15. Gupta, N., Singh, S. S. & Stopfer, M. Oscillatory integration windows in neurons. Nat. Commun. 7(1), 1–10 (2016). doi: 10.1038/ncomms13808.
16. Rutishauser, U., Ross, I. B., Mamelak, A. N. & Schuman, E. M. Human memory strength is predicted by theta-frequency phase-locking of single neurons. Nature 464(7290), 903–907 (2010). doi: 10.1038/nature08860.
17. Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79(8), 2554–2558 (1982). doi: 10.1073/pnas.79.8.2554.
18. Yáñez-Márquez, C., Sánchez-Fernández, L. P. & López-Yáñez, I. Alpha-beta associative memories for gray level patterns. In International Symposium on Neural Networks, 818–823 (Springer, 2006).
19. Yáñez-Márquez, C., Cruz-Meza, M. E., Sánchez-Garfias, F. A. & López-Yáñez, I. Using alpha-beta associative memories to learn and recall RGB images. In International Symposium on Neural Networks, 828–833 (Springer, 2007).
20. Yáñez-Márquez, C., López-Yáñez, I., Aldape-Pérez, M., Camacho-Nieto, O., Argüelles-Cruz, A. J. & Villuendas-Rey, Y. Theoretical foundations for the alpha-beta associative memories: 10 years of derived extensions, models, and applications. Neural Process. Lett. 48(2), 811–847 (2018). doi: 10.1007/s11063-017-9768-2.
21. Hintzman, D. L. MINERVA 2: A simulation model of human memory. Behav. Res. Methods Instrum. Comput. 16(2), 96–101 (1984). doi: 10.3758/BF03202365.
22. Kanerva, P. Sparse Distributed Memory (MIT Press, 1988).
23. Humphreys, M. S., Bain, J. D. & Pike, R. Different ways to cue a coherent memory system: A theory for episodic, semantic, and procedural tasks. Psychol. Rev. 96(2), 208 (1989). doi: 10.1037/0033-295X.96.2.208.
24. Pollack, J. B. Recursive distributed representations. Artif. Intell. 46(1–2), 77–105 (1990). doi: 10.1016/0004-3702(90)90005-K.
25. Brette, R. Philosophy of the spike: Rate-based vs. spike-based theories of the brain. Front. Syst. Neurosci. 9, 151 (2015). doi: 10.3389/fnsys.2015.00151.
26. Chow, C. C. & Karimipanah, Y. Before and beyond the Wilson–Cowan equations. J. Neurophysiol. 123(5), 1645–1656 (2020). doi: 10.1152/jn.00404.2019.
27. Wallace, E., Benayoun, M., Van Drongelen, W. & Cowan, J. D. Emergent oscillations in networks of stochastic spiking neurons. PLoS ONE 6(5), e14804 (2011). doi: 10.1371/journal.pone.0014804.
