Abstract
Neuromorphic computing research is being actively pursued to address the challenges posed by the need for energy-efficient processing of big data. One promising approach is the hardware implementation of spiking neural networks (SNNs) with bio-plausible learning rules. Numerous research works have implemented SNN hardware with different synaptic plasticity rules to emulate human brain operations. While a standard spike-timing-dependent plasticity (STDP) rule has been emulated in many SNN hardware implementations, the various STDP rules found in the biological brain have rarely been implemented in hardware. This study proposes a CMOS-memristor hybrid synapse circuit for the hardware implementation of a Ca ion-based plasticity model to emulate the various STDP curves. The memristor was adopted as the memory device in the CMOS synapse circuit because memristors have been identified as promising candidates for analog non-volatile memory in terms of energy efficiency and scalability. The circuit design was divided into four sub-blocks based on biological behavior, exploiting the non-volatile and analog state properties of memristors. The circuit varies weights using an H-bridge structure and pulse-width modulation (PWM). The various STDP curves were emulated in a single CMOS-memristor hybrid circuit, and a simple neural network operation was furthermore demonstrated for associative learning such as Pavlovian conditioning. The proposed circuit is expected to facilitate large-scale neuromorphic computing operations when scaled up.
Subject terms: Electrical and electronic engineering, Electronic devices
Introduction
The fast growth in data volumes across the globe has necessitated advances in computing systems. The constraints of the von Neumann architecture, including memory transfer bottlenecks, result in limitations on operating speed and power efficiency. To overcome these limitations1, numerous research groups and chip manufacturers have implemented CMOS-based neuro-inspired computing hardware systems2–6. The focus of these efforts has been on the development of machine learning hardware7, with the aim of achieving high performance in first-generation artificial neural networks (ANN)8, second-generation deep neural networks (DNN)8 and convolutional neural networks (CNN)9, and progressing towards the development of third-generation spiking neural networks (SNN)10,11. The main advantages of SNNs include the ability to use sparse input data, resulting in reduced power consumption12,13. The characteristics of neuromorphic computing and SNNs are illustrated in Supplementary Fig. S1. Current state-of-the-art technologies are mainly CMOS-based and include Loihi2,3, TrueNorth4, Tianjic5 and SpiNNaker6, with reports of new or improved products emerging in the field14. However, CMOS-based systems utilize SRAM for artificial synapses, which requires at least 6 transistors per cell, leading to area constraints. Additionally, there are significant energy consumption issues in peripheral circuits such as ADCs and in synapse memory15. Therefore, research utilizing memristors, which allow for a reduction in the number of transistors in unit synapses and simpler designs for peripheral circuits, has been reported16–19. Memristor-based arrays with 8 Mbit integration have surpassed the computational efficiency of conventional CPU architectures, achieving 1.2 TOPS/mm2, and challenging the 1,000 TOPS/W target of emerging device-based SNNs with a result of 195 TOPS/W20,21. This compares favorably with traditional computing in terms of both computation and integration. Beyond pursuing peak performance, numerous studies are underway to fabricate arrays of various scales and to quantify power and area efficiency22–25. In addition, many studies on the hardware implementation of SNNs and analogue computing have been reported26–28.
Numerous studies have utilized memristor synapses to mimic brain functions such as associative learning and inference29,30, along with endeavors towards biologically plausible approaches like SNNs31–33. The synaptic weight can be modulated by various learning rules, one of which is the Hebbian rule34, where learning is performed based on the temporal activity relationship between two neurons. A well-known example is spike-timing-dependent plasticity (STDP), where the weight is changed based on the time difference between pre- and post-synaptic spikes. In addition, the Bienenstock, Cooper, and Munro (BCM)35 theory proposes that synaptic weight changes according to the frequency of neuron activation. Furthermore, homeostatic plasticity mechanisms organically adjust synaptic weights to prevent neurons from becoming either overactive or inactive. As stated above, the rules of biological learning are very diverse. However, most research works on memristors have focused mainly on the implementation of standard STDP operations36–38, although diverse learning rules beyond standard plasticity have been found in mammalian brains and are considered to play important roles in learning and memory. Therefore, it is necessary to implement a variety of learning rules, not only Hebbian learning rules based on long-term potentiation (LTP), long-term depression (LTD) and STDP, but also synaptic modifications based on BCM theory, such as spike-rate-dependent plasticity (SRDP). Furthermore, these learning rules do not operate in isolation but interact through multifactor STDP, which is both frequency- and timing-dependent.
Recently, a calcium-based plasticity learning model has been proposed in which synaptic plasticity depends on the concentration of calcium in the postsynaptic terminal39. This Ca ion-based plasticity model is capable of reproducing various behaviors such as multifactor STDP, SRDP, LTD, and LTP. The Ca ion concentration in this model takes different forms depending on spike patterns and rates, and has been formulated through mathematical expressions40,41. Synaptic weight is modulated by regulating the number of ion channels through which ions pass when a presynaptic spike occurs and the same amount of neurotransmitter is released. This regulation depends on the concentration of calcium ions: the activity of CaMKII (Ca2+/calmodulin-dependent protein kinase II) leads to an increase in AMPA channels in the dendrites of postsynaptic neurons, whereas CaN (calcineurin) functions to decrease AMPA channels in the dendrites of postsynaptic neurons42,43. Ongoing efforts to implement the Ca ion-based plasticity model using CMOS hardware have been reported44,45, but they face the limitations of capacitor-based synapses, such as the inability to maintain stable states due to volatility, the difficulty of clearly defining analogue states, and the need for pre-charging to implement long-term depression (LTD). In other words, limitations persist due to the formulation of the equations or because behaviors are restricted by the volatile nature of the synapses. Consequently, there is a growing need for research aimed at overcoming these limitations through the application of new devices. One proposed method is to use a memristor as analog memory in a CMOS-memristor hybrid structure; the aim is to use this structure to overcome these problems and mimic various SNN behaviors. Applying memristors to the Ca ion-based plasticity model not only addresses these issues but also enables the implementation of Pavlovian learning, an associative learning paradigm previously achievable only through supervised learning, using unsupervised STDP.
Memristors, as two-terminal devices, exhibit a characteristic where their resistance state changes in response to the applied voltage, making them suitable for artificial synapse applications. In this study, a bipolar memristor, distinguished by bias polarity, was utilized46. A set operation, in which the resistance decreases, is performed by applying a positive bias to one electrode while connecting the opposite electrode to ground. Conversely, a reset operation, which increases the resistance, is executed by reversing the connection of the positive bias and ground between the two electrodes. The read operation deduces resistance and conductance from the current flowing through the memristor and must therefore avoid switching phenomena that alter the device's state; consequently, it is performed at voltages below the set voltage47–50. To use memristors effectively, several challenges must be overcome. Firstly, the set, reset and read operations of a memristor must be performed correctly within a circuit. Unlike the approach taken in the existing literature, where synaptic weights are emulated by the voltage across a capacitor and only charge and discharge actions are needed, memristors require a change in polarity for operation, which demands a more complex structure. Secondly, programming techniques such as pulse amplitude modulation (PAM) and pulse width modulation (PWM) are required to obtain multiple conductance states smoothly.
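To make the required behaviour concrete, the following sketch abstracts a bipolar memristor programmed with PWM pulses: writes occur only when the pulse amplitude reaches the set or reset voltage (with the polarity selecting the direction), the magnitude of the update scales with the pulse width, and a small read voltage below the switching thresholds leaves the state untouched. All names and numerical values here are illustrative assumptions rather than the device or model used in this work.

```python
# Minimal behavioral sketch (not the compact model used later): a bipolar
# memristor programmed by pulse-width-modulated (PWM) voltage pulses. The
# conductance G rises only for V >= V_SET, falls only for V <= -V_RESET, and a
# small read voltage below both thresholds leaves the state untouched.
# All numerical values are illustrative assumptions.

G_MIN, G_MAX = 10e-6, 200e-6       # conductance bounds (S)
V_SET, V_RESET, V_READ = 2.0, 2.0, 0.3
RATE = 5e-3                        # conductance change per second of pulse width (S/s)

def apply_pulse(G, voltage, width_s):
    """Update conductance for one rectangular pulse of given amplitude and width."""
    if voltage >= V_SET:                     # set: conductance (weight) increases
        return min(G_MAX, G + RATE * width_s)
    if voltage <= -V_RESET:                  # reset: conductance decreases
        return max(G_MIN, G - RATE * width_s)
    return G                                 # sub-threshold (e.g. read) pulse: no change

def read_current(G):
    return G * V_READ                        # non-destructive read below the set voltage

G = 50e-6
for _ in range(5):                           # five potentiation pulses, 8 ms wide
    G = apply_pulse(G, +3.0, 8e-3)
for _ in range(5):                           # five depression pulses, 14 ms wide
    G = apply_pulse(G, -3.0, 14e-3)
print(f"final conductance {G*1e6:.1f} uS, read current {read_current(G)*1e6:.2f} uA")
```

In the actual circuit, the polarity reversal implied by this sketch is handled by the H-bridge described in the synapse block below.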
In this study, we validated a calcium-based plasticity learning model using memristor-integrated circuits through simulations in LTspice. The calcium-based plasticity circuit consists of four sub-blocks, named parts I, II, III and IV. In part I, the Ca ion concentration block converts pre-spike and post-spike events into changes in a Ca ion concentration current. In part II, the potentiation & depression block, a winner-take-all (WTA) circuit was designed so that only the dominant pathway operates: depending on the magnitude of the Ca ion concentration current, either CaMKII or CaN dominates. In part III, the synapse block, potentiation and depression pulses are transmitted through an H-bridge to modulate the memristor's conductance. If the memristor's conductance is above a threshold and a pre-spike signal occurs, part IV, the post neuron block, operates, resulting in the generation of a post-spike. Using the proposed memristive CMOS circuit, we carried out validation within a single circuit block, in particular of the Ca model operation, which facilitates the implementation of multi-factor behaviors. We applied an H-bridge circuit that allows the set, reset and read operations to be performed smoothly and ensures that the memristor operates within the desired voltage range, thus addressing the first constraint. In addition, we exploited the change in the memristor's conductance in response to PWM signals, enabling the memristor to operate within the desired range by varying the applied voltage signal in a PWM manner. This approach confirmed that the conductance changes observed in individual memristors carry over to memristors within the circuit. We successfully implemented the functionalities of long-term potentiation (LTP) and long-term depression (LTD), as well as standard spike-timing-dependent plasticity (STDP), spike-rate-dependent plasticity (SRDP) and frequency-dependent STDP, thereby realizing various synaptic plasticity mechanisms in spiking neural networks (SNNs). Finally, using the standard STDP model, we successfully implemented the associative update, conditioning and extinction operations of Pavlovian learning, a form of associative learning, which are difficult to achieve using previous Ca-based plasticity circuits.
Methods
The phenomenon of synaptic weight changes induced by the spike signals of pre-neurons and post-neurons is widely recognized and described by the Hebbian rule. One of the models explaining this is the calcium plasticity learning model. This model entails that spike activity alters the calcium ion concentration in post-neurons, which in turn modulates the synaptic weight; this relationship has been formulated mathematically40,41 as
$$\tau \frac{d\rho}{dt} = -\rho(1-\rho)(\rho^{*}-\rho) + \gamma_{p}(1-\rho)\,\Theta\!\left[c(t)-\theta_{p}\right] - \gamma_{d}\,\rho\,\Theta\!\left[c(t)-\theta_{d}\right] \tag{1}$$
In Eq. (1), τ represents the time constant, and ρ denotes the normalized synaptic weight ranging from 0 to 1. γp is the potentiation constant, and γd is the depression constant. ρ* indicates the boundary point between potentiation and depression. The Heaviside function Θ ensures that the weight changes only when the calcium ion concentration c(t) exceeds the potentiation threshold θp or the depression threshold θd. Additionally, the term −ρ(1 − ρ)(ρ* − ρ) incorporates a bistable function, enabling ρ to transition towards either 0 or 1. This differential-equation-based model dynamically responds to spike patterns and rates, allowing for the implementation of complex synaptic plasticity. According to this formula, when a spike occurs, the concentration of Ca ions changes in an analog manner, and by adjusting the parameters it is possible to obtain different results even with the same spike signal. The diversity of STDP curves arising from this formula was presented in a paper by Michael Graupner40, and its implementation in a circuit using CMOS transistors was proposed, based on this formula, in a paper by Frank L. Maldonado Huayaney41.
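As a reference for the circuit blocks that follow, the sketch below integrates Eq. (1) numerically with a simple calcium trace that jumps at pre- and post-spikes and decays exponentially. The time constants, jump amplitudes and thresholds are illustrative assumptions (they are not the values realized in the circuit), and different choices reproduce the different STDP curve families discussed in refs. 40 and 41.

```python
import numpy as np

# Numerical sketch of Eq. (1): the synaptic efficacy rho is driven by a calcium
# trace c(t) that jumps at pre-/post-spikes and decays exponentially. All
# parameter values below are illustrative assumptions.

dt        = 1e-4    # integration step (s)
tau       = 20.0    # synaptic time constant tau (s)
tau_ca    = 0.05    # calcium decay constant (s)
C_pre, C_post     = 0.5, 1.0    # calcium jumps per pre-/post-spike
theta_d, theta_p  = 0.6, 1.1    # depression / potentiation thresholds
gamma_d, gamma_p  = 150.0, 600.0
rho_star  = 0.5     # unstable point of the bistable term

def run(delta_t, n_pairs=5, period=0.5, rho0=0.3):
    """Final rho after n_pairs pre/post pairings separated by delta_t (s)."""
    pre  = [0.1 + k * period for k in range(n_pairs)]
    post = [s + delta_t for s in pre]
    rho, c, t = rho0, 0.0, 0.0
    T = pre[-1] + abs(delta_t) + 0.3
    while t < T:
        c *= np.exp(-dt / tau_ca)                            # calcium decay
        c += C_pre  * any(abs(t - s) < dt / 2 for s in pre)  # pre-spike influx
        c += C_post * any(abs(t - s) < dt / 2 for s in post) # post-spike influx
        drho = (-rho * (1 - rho) * (rho_star - rho)
                + gamma_p * (1 - rho) * (c >= theta_p)       # Heaviside terms
                - gamma_d * rho       * (c >= theta_d))
        rho = float(np.clip(rho + dt / tau * drho, 0.0, 1.0))
        t += dt
    return rho

# Compare a causal pairing (pre 30 ms before post) with an anti-causal one
print("dt = +30 ms:", run(+0.03), "  dt = -40 ms:", run(-0.04))
```

In the hardware described below, c(t) corresponds to the Ca ion concentration current of the part I block, the two Heaviside comparisons to the CaMKII and CaN thresholds of part II, and ρ to the memristor conductance controlled by part III.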
To effectively mimic the Ca ion-based plasticity model, it is crucial to understand its operation. Figure 1a depicts the axon terminal of the pre-neuron and the dendrite of the post-neuron. Upon the occurrence of a pre-spike, vesicles are released from the axon terminal of the pre-neuron, along with the neurotransmitters contained within them. These neurotransmitters transmit the potential, or spike signal, as a chemical signal. The neurotransmitters activate ligand-gated AMPA channels39,40 on the dendrite of the post-neuron. When the AMPA channels open, K+ and Na+ ions flow in from the extracellular space. This influx of ions leads to an increase in the potential of the post-neuron. In this scenario, the neurotransmitters also open voltage- and ligand-gated NMDA channels42,43. Upon opening, Ca2+ ions flow in from the extracellular space through the NMDA channels. Changes in Ca ion concentration lead to alterations in the activity of CaMKII and CaN42,43, ultimately modulating the addition or removal of AMPA channels, which is reflected in the synaptic weights. Ultimately, depending on the concentration of Ca ions, the system may either remain inactive or induce potentiation or depression.
Figure 1.
(a) The fluctuation in calcium ion concentration in the post-neuron is observed in response to the action potential (pre-spike) of the presynaptic neuron, and synaptic potentiation and depression are mediated by the magnitude of this calcium ion concentration. (b) The overall structure of the neuromorphic circuit based on Ca ion concentration. Part I: the section responsible for converting spike signals from pre or post neurons into calcium ion concentration. Part II: performs comparative analysis based on Ca ion concentration and generates potentiation and depression pulses. Part III: implementation of weights using an H-bridge circuit and a memristor compact model. Part IV: the post-neuron section operates based on ion current.
The calcium plasticity learning model was designed to effectively mimic biological behaviors by dividing the operational aspects into four circuit blocks, as illustrated in Fig. 1b. Part I: the Ca ion concentration block implements the variation in Ca ion concentration in response to input spikes (both pre- and post-spikes). Part II: the potentiation & depression block compares the Ca ion concentration with the bio-inspired thresholds θp and θd, corresponding to the CaMKII threshold and the CaN threshold, respectively, and generates potentiation and depression pulses to alter the weights accordingly. Part III: the synapse block, responsible for managing the weights, applies a memristor compact model representing the synapse and utilizes an H-bridge structure for smooth operation. Finally, as the ion current level changes with the memristor weight, part IV: the post neuron block detects this and triggers a post-spike when the weight exceeds a certain threshold.
A compact model was used to simulate the behavior of the memristors that adjust the weights in response to the applied voltage signals. The Yakopcic model, proposed by Chris Yakopcic in 2011, was adopted51. This model is composed of three core equations.
$$\frac{dx(t)}{dt} = g\!\left(V(t)\right)\, f\!\left(x(t)\right) \tag{2}$$

$$I(t) = \begin{cases} a_{1}\, x(t)\, \sinh\!\left(b\, V(t)\right), & V(t) \geq 0 \\ a_{2}\, x(t)\, \sinh\!\left(b\, V(t)\right), & V(t) < 0 \end{cases} \tag{3}$$
Equation (2) governs the evolution of the state of the memristor over time. The term g(V(t)) ensures that changes in the state occur only when voltages equal to or exceed the set and reset voltages, with the magnitude of the change regulated by the function f(x(t)), which represents the state variable motion. Equation (2) dictates state changes based on applied voltage, and ultimately, Eq. (3) is employed to model nonlinear current variations based on the current state and the applied voltage, V(t). The parameters a1, a2, and b play a role in adjusting the magnitude of the change. Detailed threshold functions, state variable motion functions, and parameters are represented in supplementary information section-2 in the form of LTspice subcircuits.
Through careful parameter tuning, the compact model was designed to replicate the I–V curves characteristic of memristors. In addition, pulse–conductance curves were obtained by applying 50 potentiation pulses and 50 depression pulses, with pulse widths ranging from 500 μs to 30 ms, while maintaining a fixed voltage of ± 3 V across the terminals of the compact model. The results showed linear changes in conductance for pulse widths of 500 μs and 1 ms. Greater widths, however, resulted in abrupt changes in the curve, with the magnitude of the transition becoming more pronounced as the width increased. The results of the compact model closely resembled those observed in the experimental measurements, as shown by the curves in Fig. 2.
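For reference, a behavioural sketch of Eqs. (2) and (3) is given below. The threshold function g(V) and the window function f(x) are simplified stand-ins (the full expressions and fitted parameters are those of ref. 51 and the supplementary information), and all numerical values are illustrative assumptions; the sketch only illustrates how pulse trains move the state variable and how conductance is read out below the set voltage.

```python
import numpy as np

# Behavioural sketch of the Yakopcic-style compact model of Eqs. (2)-(3).
# g(V) and f(x) below are simplified stand-ins for the detailed threshold and
# window functions of ref. 51; all numerical values are illustrative.

a1, a2, b = 1e-3, 1e-3, 0.05   # current-equation parameters of Eq. (3)
Vp, Vn    = 1.5, 1.5           # set / reset threshold voltages (V)
Ap, An    = 10.0, 10.0         # state-change rate factors

def g(V):
    """State changes only at or beyond the set/reset voltages (Eq. 2)."""
    if V >= Vp:
        return Ap * (np.exp(V) - np.exp(Vp))
    if V <= -Vn:
        return -An * (np.exp(-V) - np.exp(Vn))
    return 0.0

def f(x):
    """Simplified window that slows the state variable near its bounds."""
    return x * (1.0 - x) + 1e-4

def current(x, V):
    """Nonlinear I-V of Eq. (3); a1/a2 select the polarity branch."""
    return (a1 if V >= 0 else a2) * x * np.sinh(b * V)

def step(x, V, dt):
    return float(np.clip(x + g(V) * f(x) * dt, 0.0, 1.0))

# Apply 50 potentiation then 50 depression pulses (+/-3 V, 1 ms wide) and read
# the conductance at a sub-threshold voltage after each train.
dt, width, V_read = 1e-5, 1e-3, 0.3
x = 0.1
for n_pulses, V in [(50, +3.0), (50, -3.0)]:
    for _ in range(n_pulses):
        for _ in range(int(width / dt)):        # one rectangular pulse
            x = step(x, V, dt)
    G = current(x, V_read) / V_read             # read does not disturb the state
    print(f"after {n_pulses} pulses at {V:+.0f} V: G = {G*1e6:.2f} uS")
```

Longer pulse widths simply integrate Eq. (2) for longer per pulse, which is the trend explored with the fitted model in Fig. 2b.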
Figure 2.
Results using the Yakopcic compact model (a) IV characteristics with an applied voltage of ± 3 V between TE and BE. (b) Conductance variation due to 50 potentiation and 50 depression pulses, under a pulse voltage of ± 3 V, between TE and BE (simulations performed with varying pulse widths).
For the circuit simulation we used LTspice (XVII, 17.0.37), a SPICE simulation package, to configure components and specify variables for dynamic simulations. Each circuit block was validated and tested by sweeping various parameters, primarily through transient simulation, which allowed us to input spikes and observe how the circuit behaved over time. For capacitors and resistors, we used the built-in ideal models. Transistors were modelled using CMOS transistor models based on 160 nm commercial Si foundry technology from TSMC in Taiwan. The transistor's width and length were set to 160 nm and 350 nm, respectively, with NMOS devices denoted MNx and PMOS devices MPx in the circuit diagrams. The detailed specifications of all components, including transistors, thyristors, buffers, and op-amps, are described in Supplementary information section-3. Parts I to III consisted of 20 n-MOSFETs and 11 p-MOSFETs, totaling 31 transistors, along with 4 op-amps, 4 NOT gates, and 2 buffers. Additionally, for simulating Pavlovian learning in part IV, 2 op-amps, 4 transistors, and 1 buffer were required for the neuron circuitry and connections.
SPICE circuit designs
Part I: Ca ion concentration block
The first block, part I: the calcium ion concentration block, is shown in Fig. 3a. This circuit was designed based on the previously reported DPI circuit52, with modifications to certain parameters. When pre-spike and post-spike input signals are applied to the gates of MN5 and MN6, the transistors turn on. Consequently, current flows, causing the discharge of capacitor C1. As the voltage across this capacitor decreases, MP2 turns on, allowing the Ca ion concentration current (ICa) to flow. Several parts are included to fine-tune the overall operation. The threshold part, for instance, ensures that C1 remains undischarged when the voltage of the input spike (Vpre-spike, Vpost-spike) is below the gate voltage of MN1 (Vth,spikes). The number-of-vesicles part regulates vesicle release, even for the same spike input: circuit-wise, the gate voltages of MN3 and MN4 (Vpre,vesicles, Vpost,vesicles) are adjusted to control the total current. In other words, even with the same input spike, C1 discharges more or less, thereby regulating the magnitude of the Ca ion concentration current. The decay rate of the Ca ion concentration current can be set by adjusting the size of C1 in the decay part and the gate voltage of MP1 (Vdecay-time). Validation of the operation of the respective blocks began with an examination of the changes in the Ca ion concentration current in response to pre- and post-spikes. The results of simulations with varied parameters are shown in Supplementary Fig. S4.
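A behavioural abstraction of this block (not a transistor-level model) is sketched below: each spike whose amplitude exceeds the threshold voltage adds to the Ca ion concentration current by an amount set by the corresponding vesicle gain, and the current then relaxes with a decay time that the hardware sets through C1 and Vdecay-time. All numerical values are illustrative assumptions.

```python
import numpy as np

# Behavioural abstraction of the part I block of Fig. 3a (not a transistor-level
# model). Spikes above the threshold add to I_Ca with a vesicle-dependent gain;
# I_Ca then decays exponentially. All values are illustrative assumptions.

dt        = 1e-4                        # time step (s)
tau_decay = 0.05                        # decay of I_Ca (set by C1 and Vdecay-time)
V_th      = 1.0                         # spike threshold (Vth,spikes)
gain_pre, gain_post = 0.4e-9, 0.9e-9    # I_Ca added per pre-/post-spike (A)

def ca_current(t_axis, pre_spikes, post_spikes, v_spike=2.0):
    """Return the Ca ion concentration current over t_axis."""
    i_ca = np.zeros_like(t_axis)
    for k in range(1, len(t_axis)):
        i_ca[k] = i_ca[k - 1] * np.exp(-dt / tau_decay)        # leak (decay part)
        if v_spike > V_th:                                      # threshold part
            if any(abs(t_axis[k] - s) < dt / 2 for s in pre_spikes):
                i_ca[k] += gain_pre                             # vesicle-scaled influx
            if any(abs(t_axis[k] - s) < dt / 2 for s in post_spikes):
                i_ca[k] += gain_post
    return i_ca

# Pre-spike at 50 ms and post-spike at 200 ms, as in the verification of Fig. 5a
t = np.arange(0.0, 0.4, dt)
i_ca = ca_current(t, pre_spikes=[0.05], post_spikes=[0.2])
print(f"peak I_Ca = {i_ca.max()*1e9:.2f} nA at t = {t[i_ca.argmax()]*1e3:.0f} ms")
```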
Figure 3.
(a) Part I: a circuit for converting pre-spikes and post-spikes into the calcium ion concentration current. The variables include the threshold voltage, the decay time, and the number of vesicles. (b) Part II: within the winner-take-all (WTA) circuit, a comparison between the Ca ion concentration, CaMKII, and CaN determines which calcium-dependent protein predominates and subsequently sends a signal to facilitate its operation. The comparison levels of CaMKII and CaN can be adjusted through the threshold component; by adjusting the current limit, the strength of the operation signal can be controlled. (c) Part II: using a differential amplifier, potentiation and depression pulses are generated based on the dominant signal between the CaMKII operation signal and the CaN operation signal.
Part II: potentiation & depression block
The potentiation & depression block can be further divided into the WTA section and the PWM section. WTA stands for winner-take-all53: of two compared branches, only the circuit on the side with the higher applied voltage operates. Attempts have been made to improve this structure and apply it to the Ca ion model circuit; previous research suggested that this structure could replace the role of synapses41. In our study, however, the circuit was modified to generate operation signals, supplying PWM signals to memristors by altering parameters within this structure. Here, the currents of CaMKII and CaN are compared. In biological terms, when the calcium concentration is high, CaMKII is activated; otherwise, CaN operates, and below the CaN threshold no action occurs. To emulate this, the CaMKII threshold is set higher than the CaN threshold, leading to a winner-take-all (WTA) circuit operation.
In part I, the voltage variation of C1 induces the Ca ion concentration current to flow through MP2 and MP2*. By comparing the Ca ion concentration current with the CaMKII threshold current and the CaN (calcineurin) threshold current through MP5 and MP6, the current flowing through MN15 and MN16 is limited (each gate voltage parameter regulates the corresponding current).
As the final output current is constrained, it is essential to ascertain which of the two currents, the CaMKII threshold current flowing from MP5 to ground or the Ca2+ ion concentration current flowing from MP2 to ground, conducts a greater current. The aforementioned threshold currents are indirectly compared through the relationship Iref,CaMKII + Iref,Ca = Itotal,left. Consequently, the VDS of the MN7 transistor is altered, and this signal is employed as the operational signal. Similarly, the right calcineurin threshold part operates under the same logic, with Iref,calcineurin + Iref,Ca = Itotal,right. The VDS of the MN8 transistor is used to derive the operation signal.
Furthermore, the current limits of each threshold can be adjusted through the gate voltages of MP5 and MP6 (Vth,CaMKII, Vth,calcineurin). The circuit structure is depicted in Fig. 3b. Initially, the validation of the block's operation was conducted, and the generation of CaMKII and CaN operation signals based on the Ca ion concentration current signal was verified.
The operation signals of potentiation and depression derived from the WTA circuit are relatively small in magnitude, necessitating clear distinction of their operational ranges. To address this, part II: potentiation & depression block (PWM) involves the selection of the dominant operation signal through a differential amplifier. These distinctly separated signals are then transformed into PWM-based signals using an inverting amplifier and a buffer. In the event that CaN is the dominant factor, a depression pulse is generated. Conversely, if CaMKII is the dominant factor, a potentiation pulse is generated. The structure is illustrated in Fig. 3c. The results of simulations with varied parameters are presented in Supplementary Fig. S5.
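Functionally, the combination of the WTA comparison and the PWM stage can be summarised as follows: the Ca ion concentration current is compared against the two threshold currents, at most one of the two operation signals wins, and the width of the resulting potentiation or depression pulse corresponds to the time the concentration spends in the winning region. The sketch below captures only this behaviour; the threshold currents are illustrative assumptions, and in the circuit the comparison is carried out with the mirrored currents and the differential amplifier described above.

```python
import numpy as np

# Behavioural view of the part II block: a WTA comparison of the Ca ion
# concentration current against the CaMKII and CaN thresholds, followed by a
# PWM stage whose pulse width equals the time spent in the winning region.
# Threshold currents are illustrative assumptions.

I_th_CaN    = 0.3e-9   # lower (depression) threshold current (A)
I_th_CaMKII = 0.8e-9   # higher (potentiation) threshold current (A)

def wta_pwm(i_ca, dt):
    """Return potentiation/depression pulse trains and their total widths."""
    pot = i_ca >= I_th_CaMKII                 # CaMKII dominates -> potentiation
    dep = (i_ca >= I_th_CaN) & ~pot           # only CaN exceeded -> depression
    return pot, dep, pot.sum() * dt, dep.sum() * dt

# Example with a synthetic decaying Ca ion concentration current
dt = 1e-4
t = np.arange(0.0, 0.3, dt)
i_ca = 1.2e-9 * np.exp(-t / 0.05)             # peaks above both thresholds
pot, dep, w_pot, w_dep = wta_pwm(i_ca, dt)
print(f"potentiation width {w_pot*1e3:.1f} ms, depression width {w_dep*1e3:.1f} ms")
```

As in the circuit, the two pulse trains never overlap: the higher CaMKII threshold claims the peak of the trace, and the CaN region claims only the interval between the two thresholds.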
Part III: synapse block
The two-terminal artificial synapse memristor requires the application of +VSET, −VRESET, and VREAD to its terminals for conductance variation and current-state detection. To ensure the proper differentiation of each operation, the part III: synapse block was employed, with the structure depicted in Fig. 4. The H-bridge circuit uses transmission gates, each formed by an NMOS and a PMOS transistor connected in parallel, to swap the connections of the TE and BE of the memristor. The H-bridge itself only steers the signal path; therefore, separate +VSET and −VRESET levels must be supplied to the memristor.
Figure 4.
Through the H-bridge, the set and reset operations of the memristor compact model are performed in response to potentiation and depression pulses.
Upon the occurrence of an input spike in the pre-neuron, the CaMKII operation signal and the CaN operation signal are generated, resulting in the generation of potentiation and depression pulses. These pulse signals are applied to the gates of the H-bridge, thereby controlling the switching of the transistors. Consequently, when these signals are applied, the values of VSET and VRESET must be chosen appropriately, as they determine how the state of the memristor is altered.
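The role of the H-bridge can be summarised with a small truth-table-like sketch: the potentiation and depression pulses select which transmission-gate pair conducts, and the memristor therefore sees +VSET, −VRESET, or only a small read voltage. The mapping below is a behavioural abstraction with assumed voltage levels, not the transistor-level switch network.

```python
# Behavioural abstraction of the part III H-bridge: the potentiation and
# depression pulses steer complementary transmission-gate pairs so that the
# memristor sees +V_SET, -V_RESET or a small read voltage. Voltage levels are
# illustrative assumptions.

V_SET, V_RESET, V_READ = 3.0, 3.0, 0.3

def hbridge_voltage(pot_pulse: bool, dep_pulse: bool) -> float:
    """Voltage presented from TE to BE of the memristor during one time step."""
    if pot_pulse and not dep_pulse:
        return +V_SET        # TE driven high, BE grounded -> set (conductance up)
    if dep_pulse and not pot_pulse:
        return -V_RESET      # opposite gate pair conducts -> polarity reversed, reset
    return V_READ            # idle: sub-threshold read voltage, state undisturbed

# A potentiation pulse of 8 ms followed, after a 2 ms gap, by a 14 ms depression
# pulse, sampled every 0.1 ms
timeline = [(True, False)] * 80 + [(False, False)] * 20 + [(False, True)] * 140
voltages = [hbridge_voltage(p, d) for p, d in timeline]
print(voltages[0], voltages[85], voltages[-1])   # +3.0, 0.3, -3.0
```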
Part IV: post neuron block
When an input spike occurs, a post-spike is generated only if the weight, represented by the current of the memristor, exceeds the threshold. To achieve this, the LIF neuron model was employed. The model, proposed by Rozenberg in 201954, employs a two-transistor and silicon-controlled rectifier (SCR) configuration. When an input signal is present, integration is performed on a capacitor, and the post-neuron fires when the threshold is exceeded. Furthermore, a frequency tuning component allows the firing frequency to be adjusted. To ensure clear operation verification, the frequency of post-spikes was deliberately set at a low level, as the post-spikes are designed to influence the Ca2+ ion concentration block. The circuit diagram and the results of varying the LIF neuron parameters are provided in Supplementary Fig. S6.
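A behavioural stand-in for this block is sketched below: the synaptic (memristor) current is integrated on a capacitor with a small leak, and a post-spike is emitted and the membrane reset when the threshold is crossed. The real block uses the two-transistor/SCR circuit of ref. 54; the values here are illustrative assumptions.

```python
import numpy as np

# Behavioural leaky integrate-and-fire stand-in for the part IV block (the real
# block uses the two-transistor/SCR circuit of ref. 54). All values are
# illustrative assumptions.

dt, C, R = 1e-4, 1e-6, 1e6          # step (s), membrane capacitance (F), leak (ohm)
V_th, V_reset = 0.8, 0.0            # firing threshold and reset potential (V)

def lif(i_syn):
    """Return post-spike times (s) for an array of synaptic current samples."""
    v, spikes = 0.0, []
    for k, i in enumerate(i_syn):
        v += dt * (i / C - v / (R * C))     # integrate input, subtract leak
        if v >= V_th:
            spikes.append(round(k * dt, 4))
            v = V_reset                     # fire and reset
    return spikes

# A larger memristor conductance (weight) drives the neuron across threshold;
# a smaller one does not, so no post-spike is produced.
t = np.arange(0.0, 0.2, dt)
for G in (20e-6, 80e-6):
    i_syn = G * 0.3 * (t < 0.05)            # read voltage applied for 50 ms
    print(f"G = {G*1e6:.0f} uS -> post-spikes at {lif(i_syn)} s")
```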
Simulation results
Verification of 4 sub-blocks operation
Figure 5 presents the SPICE simulation results of the four sub-blocks. In Fig. 5a, the pre-spike and post-spike were triggered at 50 ms and 200 ms, respectively, with a voltage of 2 V in part I. This resulted in a change in the Ca ion concentration current. It can be observed that the concentration of Ca ions changed more significantly following the post-spike, owing to the larger number of vesicles released. Furthermore, the relationship between the Ca ion concentration level and the CaMKII and CaN thresholds can be observed. When the Ca ion concentration exceeds both thresholds, as with the post-spike, the CaMKII operation signal and the CaN operation signal are generated through the WTA circuit in part II, as shown in Fig. 5b. These operation signals are then transformed into potentiation and depression pulses through the PWM circuit in part II. The two signals do not overlap, and they alter the weight of the memristor through variations in their width.
Figure 5.
(a) Part I: occurrences of pre-spike and post-spike events revealed changes in the calcium ion concentration current. Furthermore, distinctions in operation based on the magnitude of the calcium ion concentration were delineated within the winner-take-all (WTA) circuit; these distinctions are represented by the CaN threshold and the CaMKII threshold. (b) Part II: conversion of the operation signals, produced by the WTA circuit's comparison of the Ca ion concentration current, into potentiation and depression pulses. (c) Part III: to confirm state transitions, signals of 5 potentiation pulses at 3 V for 8 ms and 5 depression pulses at −3 V for 14 ms were applied to the memristor within the H-bridge. (d) The variation of the memristor within the H-bridge is confirmed by the gradual increase and decrease of the ion current, induced by the ±3 V signals lasting 8 ms and 14 ms, respectively.
For part III, the potentiation and depression pulses were set as shown in Fig. 5c: 3 V for 8 ms, five times, and −3 V for 14 ms, five times, respectively. When these signals are applied to the transmission gates of the H-bridge circuit, the memristor undergoes set and reset operations due to the +3 V and −3 V. Consequently, the weight of the memristor is altered. As illustrated in Fig. 5d, the weight of the memristor exhibits an increase and a subsequent decrease over time. The larger depression pulse width causes a more rapid decrease in conductance.
Standard STDP learning
Spike-timing-dependent plasticity (STDP) is a form of synaptic plasticity that arises from the temporal difference between pre-spike and post-spike events52. When the pre-spike precedes the post-spike (Δt > 0 ms), potentiation occurs, evidenced by an increase in synaptic strength. Conversely, when the post-spike precedes the pre-spike (Δt < 0 ms), depression occurs, leading to a decrease in synaptic weight. The ability to effectively mimic such SNN learning rules was tested. To confirm potentiation, an input signal was generated in which the pre-spike occurs first, followed by a post-spike 30 ms later. Similarly, to verify depression, an input signal was composed in which the post-spike precedes the pre-spike by 40 ms. The configuration of these input signals is illustrated in Fig. 6a,d. The corresponding variations in the Ca ion concentration current due to these input signals are depicted in Fig. 6b,e. When a post-spike follows a pre-spike, the peak of the Ca ion concentration current due to the post-spike increases, extending the region beyond the CaMKII threshold and thereby inducing potentiation. The duration of the potentiation pulse varies with the Δt between the pre- and post-spikes. Conversely, when a post-spike precedes a pre-spike, the prolonged decay of the Ca ion concentration current results in an extended region exceeding the CaN threshold but falling below the CaMKII threshold, leading to depression. The variations in ion current due to the input pulse pairs specified earlier, at +30 ms and −40 ms, are depicted in Fig. 6c,f, representing the changes in ion current (≈ synaptic weight) over time. The application of potentiation pulses reduces the resistance of the memristor, which in turn increases the current. Conversely, depression pulses increase the memristor resistance, which in turn decreases the current.
Figure 6.
When Δt = + 30 ms, and a pair of pre-spike and post-spike events occur five times, the following aspects are examined: (a) the structure of a single spike pair, (b) the Ca ion concentration current, CaMKII threshold, and CaN threshold, and (c) the potentiation of synapse weight expressed as ion current. When Δt = − 40 ms, and a pair of pre-spike and post-spike events also occur five times, the following aspects are examined: (d) the structure of a single spike pair, (e) the Ca ion concentration current, CaMKII threshold, and CaN threshold, and (f) the depression of synapse weight expressed as ion current.
In a single Δt interval, effective potentiation and depression were observed. To examine the variations in resistance, i.e., weight changes, of the memristor in the potentiation and depression regions, the range of Δt was extended from 5 to 100 ms in increments of 2 ms. The results demonstrated the standard behaviour of STDP when a pair of pre- and post-spikes was present, as illustrated in Fig. 7a. Using the standard STDP, it was observed that weight changes differ when Δt is positive compared to when it is negative. This resembles the results of biological STDP55.
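The full chain from spike timing to weight change can be summarised in a compact behavioural sketch: the Ca trace of part I is thresholded into potentiation and depression pulse widths as in part II, and those widths move the conductance at fixed rates as in part III. All constants are illustrative assumptions chosen only so that causal pairings raise and anti-causal pairings lower the weight; the quantitative curve of Fig. 7a comes from the full circuit. Sweeping the pairing frequency instead of Δt in the same sketch gives the rate-dependent behaviour discussed in the next section.

```python
import numpy as np

# End-to-end behavioural sketch of how one Delta-t maps to a weight change:
# Ca trace (part I) -> threshold comparison / pulse widths (part II) ->
# conductance update (part III). All constants are illustrative assumptions.

dt_step   = 1e-4
tau_ca    = 0.05                  # Ca decay (s)
C_pre, C_post = 0.5, 1.0          # Ca jumps per pre-/post-spike
th_CaN, th_CaMKII = 0.6, 1.1      # depression / potentiation thresholds
k_pot, k_dep = 8e-4, 1e-4         # conductance change per second of pulse width (S/s)

def weight_change(delta_t, n_pairs=5, period=0.5):
    """Conductance change for n_pairs pre/post pairs separated by delta_t (s)."""
    pre  = [0.1 + k * period for k in range(n_pairs)]
    post = [s + delta_t for s in pre]
    T = pre[-1] + abs(delta_t) + 0.3
    c, dG, t = 0.0, 0.0, 0.0
    while t < T:
        c *= np.exp(-dt_step / tau_ca)
        c += C_pre  * any(abs(t - s) < dt_step / 2 for s in pre)
        c += C_post * any(abs(t - s) < dt_step / 2 for s in post)
        if c >= th_CaMKII:                 # potentiation pulse active (PWM high)
            dG += k_pot * dt_step
        elif c >= th_CaN:                  # depression pulse active
            dG -= k_dep * dt_step
        t += dt_step
    return dG

for d in (-0.04, -0.02, +0.01, +0.03):     # sample a few Delta-t values
    print(f"dt = {d*1e3:+.0f} ms : dG = {weight_change(d)*1e6:+.2f} uS")
```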
Figure 7.
(a) Standard STDP curves as a function of Δt for pre-spike and post-spike. (b) Multifactor STDP curves as a function of Δt and frequency for three pairs of pre-spike and post-spike. (c) The SRDP curve showing depression and potentiation as a function of frequency when Δt is 135 ms.
Frequency dependent STDP, SRDP
Given the diverse and intricate learning rules inherent in human biology, it can be seen that factors such as input spike frequency also play a significant role alongside STDP. Mathematical modelling of Ca-based plasticity indicates that various STDP behaviours can be observed depending on frequency, including DPD and DPD'40. Indeed, we varied the frequency from 2 to 100 Hz and examined the Δt range from 0 to ± 150 ms, as depicted in Fig. 7b. Furthermore, when the time interval between stimuli (Δt) is fixed at 135 ms, a gradual transition from a region dominated by depression to an area where potentiation becomes stronger with increasing frequency can be observed. This transition is known as spike-rate-dependent plasticity (SRDP) behaviour. The results of SRDP behaviour are presented in Fig. 7c.
Associative learning
The learning mechanisms in humans and animals are known to involve the creation of associations between unique patterns of stimuli, which then control conditioned responses. Difficulty in learning arises when such associative learning fails to occur56. Implementing associative learning, a critical function of the human brain, in neuromorphic computing can serve as a means to verify its efficacy and to enhance understanding of processes such as memory formation and forgetting in a biologically plausible manner. As Pavlovian conditioning is a simple form of associative learning, we constructed a simple network consisting of two input neurons and one output neuron connected through a conditioned synapse and an unconditioned synapse57,58. The structure was configured as depicted in Fig. 8a, where neuron I and synapse I were conditioned to elicit a salivation response from neuron III upon the occurrence of a food signal. Conversely, neuron II and synapse II were not conditioned, and through repeated exposure to both food and bell signals, Pavlovian learning was attempted to induce salivation solely in response to the bell sound. When a food signal is presented, it activates the output neuron via the unconditioned synapse after a delay of 620 ms. The firing of the output neuron not only triggers the salivation response in the forward direction but also influences synaptic weight changes through its role as a post-spike. The relationship between tsalivation and tbell defines Δt. By applying the STDP rule, it was determined that conditioning occurred sufficiently after one trial at Δt = 11 ms, two trials at 16 ms, three trials at 26 ms, and four trials at 31 ms. The specific ion current values through the memristors are illustrated in Fig. 8b. Figure 8c,d demonstrate that after three trials at Δt = 31 ms (the results for other values of Δt are included in Supplementary Fig. S7), the response was insufficient to elicit a reaction from the bell ringing alone. However, after four trials, the response occurred even with just the bell sound. Following this conditioning, if only the bell sound continues without the food signal, extinction of the learned response should occur. When the bell signal is repeatedly presented alone, the post-spike activity causes changes in the Ca2+ concentration at the synapse, leading to LTD and gradual extinction. Although the extinction behavior does not follow the STDP rule, presenting the bell signal after the salivation response (tsalivation − tbell < 0) can induce depression according to the STDP rule. Therefore, simulations were conducted to observe the extinction response when only the bell sound was repeatedly presented after three trials, indicating completion of the learning process. The extinction response to the bell sound is depicted in Fig. 8e.
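A toy version of this experiment, with a fixed unconditioned synapse and a plastic conditioned synapse updated by a pair-based STDP rule, is sketched below. The weights, firing threshold and STDP constants are illustrative assumptions, hand-tuned so that conditioning completes after a few food-plus-bell pairings; the extinction path additionally relies on the Ca-based LTD mechanism described above and is not reproduced here.

```python
import numpy as np

# Toy sketch of the Pavlovian network of Fig. 8a: a fixed "unconditioned"
# synapse (food -> output) and a plastic "conditioned" synapse (bell -> output)
# updated by a pair-based STDP rule. All constants are illustrative assumptions.

w_food, w_bell = 1.0, 0.1          # unconditioned (fixed) and conditioned (plastic) weights
fire_th        = 0.8               # output neuron fires if its summed drive reaches this
a_plus, a_minus, tau_stdp = 0.4, 0.1, 0.05   # STDP amplitudes and time constant (s)

def output_fires(food, bell):
    return w_food * food + w_bell * bell >= fire_th

def stdp(dt_post_minus_pre):
    """Weight update for one bell(pre)/salivation(post) pairing."""
    if dt_post_minus_pre > 0:      # bell before salivation -> potentiation
        return a_plus * np.exp(-dt_post_minus_pre / tau_stdp)
    return -a_minus * np.exp(dt_post_minus_pre / tau_stdp)   # reversed order -> depression

# Conditioning: food and bell are presented together; food alone already makes
# the output fire, and each pairing strengthens the bell synapse.
for trial in range(1, 6):
    assert output_fires(food=1, bell=1)                   # salivation driven by food
    w_bell = float(np.clip(w_bell + stdp(+0.031), 0, 1))  # t_salivation - t_bell = +31 ms
    print(f"trial {trial}: w_bell = {w_bell:.2f}, bell alone fires: {output_fires(food=0, bell=1)}")
```

With these hand-picked constants the bell synapse crosses the firing threshold after the fourth pairing, mirroring the qualitative behaviour of Fig. 8c,d; in the actual network the weight is the memristor conductance and the post-spike timing is produced by the part IV neuron.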
Figure 8.
(a) Schematic of a Pavlovian model displaying conditioned learning and salivation in response to food signals and the bell ring. (b) Variation in the number of epochs required to reach the threshold for Δt-based STDP learning at 11 ms, 16 ms, 26 ms, and 31 ms. (c,d) Changes in response to the bell ring after 3 and 4 repetitions of learning for Δt = 31 ms. (e) Extinction behavior through bell repetition.
Discussion
The Ca ion-based plasticity learning model, which is able to closely mimic biological synaptic behaviour, has prompted studies that demonstrate multi-factor plasticity behaviours when the model is integrated into circuits. However, previous CMOS-based research implemented synaptic weights through capacitor charging and discharging mechanisms, which made it impossible to precisely control weight changes and introduced issues with continuous discharge. Consequently, this approach also makes implementing long-term depression (LTD) behaviour unfeasible unless pre-charging is conducted. To overcome these challenges, a CMOS-memristor hybrid synapse structure was adopted, employing an H-bridge configuration for smooth operation. Furthermore, it was demonstrated that this structure allows for precise control of weight changes. The final hardware implementation yielded results consistent with the hypothesis that well-designed hardware is capable of effectively representing Ca plasticity learning. These results included LTP, LTD, standard spike-timing-dependent plasticity (STDP), spike-rate-dependent plasticity (SRDP), and multifactor STDP. In addition to the reported changes in synaptic weight, an associative learning process through unsupervised learning using standard STDP behaviour was implemented, effectively demonstrating characteristics such as conditioning, extinction, time-difference dependency, and Ca ion-based LTD.
This study effectively emulates Ca model plasticity using CMOS-memristor hybrid synapses, marking a significant advancement towards the development of bio-plausible spiking neural network hardware. Additionally, beyond the results of individual synapses, successful learning in networks suggests the potential development of neural network processors. Although previous studies have demonstrated discrepancies between hardware design and simulation, this study, despite being conducted solely in simulation, employed more realistic memristors and transistors, necessitating subsequent research to demonstrate large-scale network operations on actual hardware.
Conclusion
A memristor-CMOS hybrid hardware design has been employed to successfully implement calcium-based plasticity, inspired by human biological behaviour. This circuit comprises four constituent blocks. Part I, the calcium ion concentration block, converts spike signals into Ca ion concentration currents. Part II, the potentiation & depression block, utilises a WTA circuit to select the dominant of CaMKII and CaN, and a PWM methodology to form potentiation and depression pulses. The pulses thus generated modulate the weight (the memristor's conductance) in part III, the synapse block. Finally, in part IV, the post neuron block, post-spikes are fired based on the weight when spike signals occur. Each operation was defined and validated. Using this designed hardware, we have demonstrated multi-factor plasticity characteristics based on pre-spike and post-spike calcium dynamics. In particular, the features of spike-timing-dependent plasticity (STDP), multifactor STDP, and spike-rate-dependent plasticity (SRDP) have been confirmed. Furthermore, the associative learning observed in Pavlov's dog experiment has been employed to implement conditioning based on STDP and to demonstrate extinction behaviour based on LTD, which showcases the disappearance of the formed conditioning under specific conditions. The designed hardware, structured at the neuron and synapse levels and implemented in a crossbar array architecture, allows for scalability to large-scale neural networks. The realisation and expansion of such a biologically plausible plasticity learning model may pave the way for the manufacture of neuromorphic processors that closely resemble the human brain in the near future.
Acknowledgements
The authors gratefully acknowledge financial support from Korea Institute of Science and Technology (KIST) (Grant No. 2E32961).
Author contributions
J.G. Lim conducted the SPICE simulations and wrote a draft manuscript. S. Park and M.L. Lee collected the data and analyzed the simulation results. Y. Jeong and J. Kim designed the synapse circuits. S. Lee, J. Park, G.W. Hwang, B-K. Ju and K.S. Lee reviewed the manuscript. S. Park and H.J. Jang designed the associative learning neural network. J.K. Park and I. Kim designed the experiment and supervised the entire research work.
Data availability
The datasets generated and analyzed during the current study are available in the GitHub repository: [http://gihub.com/CrossbarArray/Ca-model-circuits].
Competing interests
The authors declare no competing interests.
Footnotes
The original online version of this Article was revised: In the original version of this Article, Byeong-Kwon Ju was omitted as a corresponding author. Full information regarding the correction made can be found in the correction for this Article.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Change history
9/5/2024
A Correction to this paper has been published: 10.1038/s41598-024-70848-y
Contributor Information
Byeong-Kwon Ju, Email: bkju@korea.ac.kr.
Jong Keuk Park, Email: jokepark@kist.re.kr.
Inho Kim, Email: inhok@kist.re.kr.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-024-68359-x.
References
1. Beyond von Neumann. Nat. Nanotechnol. 15, 507 (2020). 10.1038/s41565-020-0738-x
2. Davies, M. et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018). 10.1109/mm.2018.112130359
3. Orchard, G. et al. In 2021 IEEE Workshop on Signal Processing Systems (SiPS) 254–259 (2021).
4. Akopyan, F. et al. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 34, 1537–1557 (2015). 10.1109/tcad.2015.2474396
5. Deng, L. et al. Tianjic: A unified and scalable chip bridging spike-based and continuous neural computation. IEEE J. Solid-State Circuits 55, 2228–2246 (2020). 10.1109/jssc.2020.2970709
6. Furber, S. B., Galluppi, F., Temple, S. & Plana, L. A. The SpiNNaker project. Proc. IEEE 102, 652–665 (2014). 10.1109/jproc.2014.2304638
7. Nassif, A. B., Shahin, I., Attili, I., Azzeh, M. & Shaalan, K. Speech recognition using deep neural networks: A systematic review. IEEE Access 7, 19143–19165 (2019). 10.1109/access.2019.2896880
8. Dastres, R. & Soori, M. Artificial neural network systems. Int. J. Imaging Robot. (IJIR) 21, 13–25 (2021).
9. Yoo, H.-J. Deep convolution neural networks in computer vision: A review. IEIE Trans. Smart Process. Comput. 4, 35–43 (2015). 10.5573/ieiespc.2015.4.1.035
10. Taherkhani, A. et al. A review of learning in biologically plausible spiking neural networks. Neural Netw. 122, 253–272 (2020). 10.1016/j.neunet.2019.09.036
11. Nguyen, D.-A., Tran, X.-T. & Iacopi, F. A review of algorithms and hardware implementations for spiking neural networks. J. Low Power Electron. Appl. (2021). 10.3390/jlpea11020023
12. Davies, P. T. et al. Sparse coding by spiking neural networks: Convergence theory and computational results. ArXiv (2017). 10.48550/arXiv.1705.05475
13. Sengupta, N. & Kasabov, N. Spike-time encoding as a data compression technique for pattern recognition of temporal data. Inf. Sci. 406–407, 133–145 (2017). 10.1016/j.ins.2017.04.017
14. Walter, F., Röhrbein, F. & Knoll, A. Computation by time. Neural Process. Lett. 44, 103–124 (2015). 10.1007/s11063-015-9478-6
15. Zhang, W. et al. Neuro-inspired computing chips. Nat. Electron. 3, 371–382 (2020). 10.1038/s41928-020-0435-7
16. Aguirre, F. et al. Hardware implementation of memristor-based artificial neural networks. Nat. Commun. 15, 1974 (2024). 10.1038/s41467-024-45670-9
17. Chen, J., Li, J., Li, Y. & Miao, X. Multiply accumulate operations in memristor crossbar arrays for analog computing. J. Semicond. (2021). 10.1088/1674-4926/42/1/013104
18. Raqibul Hasan, T. M. T. & Chris, Y. On-chip training of memristor based deep neural networks. Int. Joint Conf. Neural Netw. (2017). 10.1109/IJCNN.2017.7966300
19. Yakopcic, C., Alom, M. Z. & Taha, T. M. In 2016 International Joint Conference on Neural Networks (IJCNN) 963–970 (2016).
20. Hung, J.-M. et al. In 2022 IEEE International Solid-State Circuits Conference (ISSCC) 1–3 (2022).
21. Xue, C.-X. et al. In 2021 IEEE International Solid-State Circuits Conference (ISSCC) 245–247 (2021).
22. Chang, M. et al. In 2022 IEEE International Solid-State Circuits Conference (ISSCC) 1–3 (2022).
23. Correll, J. M. et al. In 2022 IEEE Symposium on VLSI Technology and Circuits 264–265 (2022).
24. Spetalnick, S. D. et al. In 2022 IEEE International Solid-State Circuits Conference (ISSCC) 1–3 (2022).
25. Xue, C. X. et al. In 2020 IEEE International Solid-State Circuits Conference (ISSCC) 244–246 (2020).
26. Tomas, J., Bornat, Y., Saighi, S., Levi, T. & Renaud, S. In 2006 13th IEEE International Conference on Electronics, Circuits and Systems 946–949 (2006).
27. Gautam, A. & Kohno, T. A conductance-based silicon synapse circuit. Biomimetics (2022). 10.3390/biomimetics7040246
28. Sun, J. CMOS and Memristor Technologies for Neuromorphic Computing Applications. Technical Report No. UCB/EECS-2015-218 (2015).
29. Dong, Z. et al. Neuromorphic extreme learning machines with bimodal memristive synapses. Neurocomputing 453, 38–49 (2021). 10.1016/j.neucom.2021.04.049
30. Ji, X., Dong, Z., Lai, C., Zhou, G. & Qi, D. A physics-oriented memristor model with the coexistence of NDR effect and RS memory behavior for bio-inspired computing. Mater. Today Adv. 16, 100293 (2022). 10.1016/j.mtadv.2022.100293
31. Ji, X. et al. A flexible memristor model with electronic resistive switching memory behavior and its application in spiking neural network. IEEE Trans. NanoBiosci. 22, 52–62 (2022). 10.1109/TNB.2022.3152228
32. Ke, S. et al. Efficient spiking neural networks with biologically similar lithium-ion memristor neurons. ACS Appl. Mater. Interfaces 16, 13989–13996 (2024). 10.1021/acsami.3c19261
33. Duan, Q. et al. Spiking neurons with spatiotemporal dynamics and gain modulation for monolithically integrated memristive neural networks. Nat. Commun. 11, 3399 (2020). 10.1038/s41467-020-17215-3
34. Bi, G. & Poo, M. Synaptic modification by correlated activity: Hebb's postulate revisited. Annu. Rev. Neurosci. 24, 139–166 (2001). 10.1146/annurev.neuro.24.1.139
35. Cooper, L. N. & Bear, M. F. The BCM theory of synapse modification at 30: Interaction of theory with experiment. Nat. Rev. Neurosci. 13, 798–810 (2012). 10.1038/nrn3353
36. Guo, Y., Wu, H., Gao, B. & Qian, H. Unsupervised learning on resistive memory array based spiking neural networks. Front. Neurosci. (2019). 10.3389/fnins.2019.00812
37. Wang, W. et al. Computing of temporal information in spiking neural networks with ReRAM synapses. Faraday Discuss. 213, 453–469 (2019). 10.1039/C8FD00097B
38. Covi, E. et al. Analog memristive synapse in spiking networks implementing unsupervised learning. Front. Neurosci. 10, 208311 (2016). 10.3389/fnins.2016.00482
39. Zucker, R. S. Calcium- and activity-dependent synaptic plasticity. Curr. Opin. Neurobiol. 9, 305–313 (1999). 10.1016/s0959-4388(99)80045-2
40. Graupner, M. & Brunel, N. Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location. Proc. Natl. Acad. Sci. USA 109, 3991–3996 (2012). 10.1073/pnas.1109359109
41. Maldonado Huayaney, F. L., Nease, S. & Chicca, E. Learning in silicon beyond STDP: A neuromorphic implementation of multi-factor synaptic plasticity with calcium-based dynamics. IEEE Trans. Circuits Syst. I 63, 2189–2199 (2016). 10.1109/tcsi.2016.2616169
42. Penny, C. J. & Gold, M. G. Mechanisms for localising calcineurin and CaMKII in dendritic spines. Cell Signal. 49, 46–58 (2018). 10.1016/j.cellsig.2018.05.010
43. Li, L., Stefan, M. I. & Le Novere, N. Calcium input frequency, duration and amplitude differentially modulate the relative activation of calcineurin and CaMKII. PLoS ONE 7, e43810 (2012). 10.1371/journal.pone.0043810
44. Rachmuth, G., Shouval, H. Z., Bear, M. F. & Poon, C. S. A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity. Proc. Natl. Acad. Sci. USA 108, E1266–E1274 (2011). 10.1073/pnas.1106161108
45. Rahimi Azghadi, M., Al-Sarawi, S., Abbott, D. & Iannella, N. A neuromorphic VLSI design for spike timing and rate based synaptic plasticity. Neural Netw. 45, 70–82 (2013). 10.1016/j.neunet.2013.03.003
46. Mohammad, B. et al. State of the art of metal oxide memristor devices. Nanotechnol. Rev. 5, 311–329 (2016). 10.1515/ntrev-2015-0029
47. Wang, H. & Yan, X. Overview of resistive random access memory (RRAM): Materials, filament mechanisms, performance optimization, and prospects. Phys. Status Solidi 13, 73 (2019). 10.1002/pssr.201900073
48. Banerjee, W., Kashir, A. & Kamba, S. Hafnium oxide (HfO2): A multifunctional oxide: A review on the prospect and challenges of hafnium oxide in resistive switching and ferroelectric memories. Small 18, e2107575 (2022). 10.1002/smll.202107575
49. Khalid, M. Review on various memristor models, characteristics, potential applications, and future works. Trans. Electr. Electron. Mater. 20, 289–298 (2019). 10.1007/s42341-019-00116-8
50. Li, Y., Wang, Z., Midya, R., Xia, Q. & Yang, J. J. Review of memristor devices in neuromorphic computing: Materials sciences and device challenges. J. Phys. D (2018). 10.1088/1361-6463/aade3f
51. Yakopcic, C., Taha, T. M., Subramanyam, G., Pino, R. E. & Rogers, S. A memristor device model. IEEE Electron Device Lett. 32, 1436–1438 (2011). 10.1109/led.2011.2163292
52. Bartolozzi, C. & Indiveri, G. Synaptic dynamics in analog VLSI. Neural Comput. 19, 2581–2603 (2007). 10.1162/neco.2007.19.10.2581
53. Lazzaro, J., Ryckebusch, S., Mahowald, M. A. & Mead, C. Winner-take-all networks of O(N) complexity. In Neural Information Processing Systems.
54. Rozenberg, M. J., Schneegans, O. & Stoliar, P. An ultra-compact leaky-integrate-and-fire model for building spiking neural networks. Sci. Rep. 9, 11123 (2019). 10.1038/s41598-019-47348-5
55. Bi, G.-Q. & Poo, M.-M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998). 10.1523/JNEUROSCI.18-24-10464.1998
56. Sutherland, R. J. & Rudy, J. W. Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia. Psychobiology 17, 129–144 (1989). 10.3758/BF03337828
57. Pershin, Y. V. & Di Ventra, M. Experimental demonstration of associative memory with memristive neural networks. Neural Netw. 23, 881–886 (2010). 10.1016/j.neunet.2010.05.001
58. Tan, Z.-H. et al. Pavlovian conditioning demonstrated with neuromorphic memristive devices. Sci. Rep. 7, 713 (2017). 10.1038/s41598-017-00849-7