Biomimetics. 2023 Jan 11;8(1):28. doi: 10.3390/biomimetics8010028

The Influence of the Number of Spiking Neurons on Synaptic Plasticity

George-Iulian Uleru 1, Mircea Hulea 1,*, Alexandru Barleanu 1
Editor: Phil Husbands
PMCID: PMC9844446  PMID: 36648814

Abstract

The main advantages of spiking neural networks (SNNs) are their high biological plausibility and their fast response due to spiking behaviour. The response time decreases significantly in hardware implementations of SNNs because the neurons operate in parallel. Compared with traditional computational neural networks, SNNs use fewer neurons, which also reduces their cost. Another critical characteristic of SNNs is their ability to learn by event association, which is determined mainly by postsynaptic mechanisms such as long-term potentiation (LTP). However, in some conditions, presynaptic plasticity determined by post-tetanic potentiation (PTP) occurs due to the fast activation of presynaptic neurons. This violates the Hebbian learning rules that are specific to postsynaptic plasticity. Hebbian learning improves the ability of an SNN to discriminate the neural paths trained by the temporal association of events, which is the key element of learning in the brain. This paper quantifies the efficiency of Hebbian learning as the ratio between the LTP and PTP effects on the synaptic weights. On the basis of this new idea, this work evaluates for the first time the influence of the number of neurons on the LTP/PTP ratio and, consequently, on the Hebbian learning efficiency. The evaluation was performed by simulating a neuron model that was successfully tested in control applications. The results show that the firing rate of the postsynaptic neuron (post) depends on the number of presynaptic neurons (pres), which increases the effect of LTP on synaptic potentiation. When post activates at a requested rate, the learning efficiency varies inversely with the number of pres, reaching its maximum when fewer than two pres are used. In addition, Hebbian learning is more efficient at lower presynaptic firing rates that are divisors of the target frequency of post.
This study concluded that, when the electronic neurons model presynaptic plasticity in addition to LTP, the efficiency of Hebbian learning is higher when fewer neurons are used. This result strengthens the observations of our previous research, where an SNN with a reduced number of neurons successfully learned to control the motion of robotic fingers.

Keywords: spiking neural networks, Hebbian learning efficiency, post-tetanic potentiation, long-term potentiation

1. Introduction

Spiking neural networks (SNNs) benefit from biological plausibility, fast response, high reliability, and low power consumption when implemented in hardware. An SNN operates using spikes, which are the effect of neuronal activation when a given threshold is exceeded; this provides the SNN with sensitivity to event occurrence [1,2]. Thus, one of the critical advantages of SNNs over traditional convolutional neural networks is the introduction of time in information processing. Another characteristic of SNNs is their ability to learn, which is related to time through the relative occurrence of events.

1.1. Long-Term Plasticity

The main mechanism that determines learning is long-term potentiation (LTP), which strengthens the synapses when the presynaptic neuron (pre) activates before the stimulated postsynaptic neuron (post). The reversed order of post and pre activation reduces the synaptic weights via long-term depression (LTD) [3]. The amplitude of the synaptic change due to pre-post and post-pre pairs depends strongly on the temporal difference between the activation of pre and post, respectively [4]. A detailed study of biological neurons in vitro showed that the LTP and LTD windows are asymmetric, causing LTP to dominate LTD [5], implying that the resultant effect is LTP. Indeed, the essential mechanism of learning in the hippocampus is LTP, as stated in neuroscience [6]. LTP is also the basic element of Hebbian learning, which determines the potentiation of weak synapses if these are paired with strong synapses that activate postsynaptic neurons [7]. This implies that Hebb's rules are critical for learning in the human brain, and they are the foundation for the most biologically plausible supervised SNN learning algorithms [8,9].
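The asymmetry between the LTP and LTD windows described above can be illustrated with the standard exponential STDP model. The amplitudes and time constants below are illustrative placeholders, not values from the cited measurements:

```python
import math

def stdp_dw(dt, a_plus=0.8, tau_plus=0.017, a_minus=0.3, tau_minus=0.034):
    """Synaptic weight change for a single pre-post spike pair.

    dt = t_post - t_pre in seconds: dt >= 0 (pre fires before post) gives
    LTP, dt < 0 gives LTD.  The unequal amplitudes and time constants make
    the LTP window dominate the LTD window, so the resultant effect of
    uncorrelated pairings is still net potentiation, as reported in [5].
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # LTP branch
    return -a_minus * math.exp(dt / tau_minus)      # LTD branch
```

With these placeholder parameters, a pre-post pair at +10 ms strengthens the synapse by more than a post-pre pair at −10 ms weakens it, which is the dominance of LTP over LTD invoked in the text.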

1.2. Hebbian Learning in Artificial Systems

Besides the biological importance of Hebbian learning, these rules are used in artificial systems, mainly for training competitive networks [8] and for storing memories in Hopfield neural networks [10,11]. Hebb's rule strengthens neural paths that have temporal correlations between pre and post activation. This implies that each neuron tends to pick out its own cluster of neurons, whose activation is correlated in time, by potentiating the synapses that contribute to the activation of postsynaptic neurons [12,13]. In this case, each neuron competes to respond to a subset of inputs, matching the principles of competitive learning [8]. In addition, recent research showed that Hebbian learning is suitable for training SNNs of high biological plausibility to control robotic fingers using external forces, mimicking the principles of physical guidance [14]. Here, the effect of the strong synapses that are driven by sensors was associated with the effect of weak synapses driven by a command signal [15].

Supervised learning based on gradient descent is more powerful than Hebbian learning [8] in computational applications. However, these error-correcting learning rules are not suitable for bioinspired SNNs because the explicit adjustment of the synaptic weight is not typically feasible. Therefore, for adaptive systems of high biological plausibility, Hebb’s rules are more suitable to train the synapses unsupervised, as they are trained in the brain.

The repetitive activation of pres, independently of post activity, increases synaptic efficacy through presynaptic elements of learning such as post-tetanic potentiation (PTP) [16], which can last from a few seconds to minutes [17]. This synaptic potentiation represents an increase in the quantity of mediator released from the presynaptic membrane during pre activation [16,18]. PTP influences the motor learning specific to Purkinje cells, which play a fundamental role in motor control [19]. Taking into account that this type of presynaptic long-term plasticity occurs in the absence of postsynaptic activity, the Hebbian learning mechanisms are altered by PTP [18].

1.3. The Number of Neurons in SNNs

Each neuron is a complex cell comprising multiple interacting parts and small chambers containing molecules, ions, and proteins. The human brain is composed of on the order of 10^11 neurons connected by about 10^15 synapses. Creating mathematical and computational models would be an efficient route towards understanding the functions of the brain, but even with an exponential increase in computational power, this does not seem achievable in the near future. Even if it could be achieved, the resulting simulation may be as complex as the brain itself. Hence, there is a need for tractable methods that reduce the complexity while preserving the functionality of the neural system. The effect of network size in SNNs has been approached in several ways. Statistical-physics formalism based on the many-body problem was used to derive the fluctuation and correlation effects in finite networks of N neurons as a perturbation expansion in 1/N around the mean-field limit N→∞ [20]. Another method used to optimise the size and resilience of an SNN is empirical analysis with evolutionary algorithms. Thus, smaller networks may be generated by evaluating every network in a population with a multiobjective fitness function that incorporates a penalty for the number of neurons [21].

In addition, research on computational neural networks showed that, for classification problems, SNNs use fewer neurons than the second generation of artificial neural networks (ANNs) does [22]. Moreover, the hardware implementation of SNNs demonstrated their efficacy in modelling conditional reflex formation [23] and in controlling the contraction of artificial muscles composed of shape memory alloy (SMA). In the latter applications, SNNs with only a few excitatory and inhibitory neurons have been able to control the force [24,25] and learn the motion [14,15] of anthropomorphic fingers. Using fewer neurons is also important for reducing the cost and increasing the reliability of the hardware implementation of SNNs.

Analysing the Hebbian learning efficiency of adaptive SNN provides a useful tool for reducing the size of experimental networks and minimising the simulation time while preserving bioinspired features.

1.4. The Goal and Motivation of the Current Research

The presynaptic long-term plasticity determined by PTP reduces the efficiency of the Hebbian learning that is determined by LTP. Hebbian learning is critical for making the neural network respond to concurrent events by potentiating the untrained synapses when they are activated together with the trained neural paths. PTP, in contrast, potentiates synapses in the absence of a postsynaptic response, meaning that causality is broken.

Considering these aspects, the goal of this paper is to determine under which conditions the effect of LTP over PTP is maximised, increasing the efficacy of Hebbian learning. Typically, fewer neurons must fire at a higher rate, or have more strongly potentiated synapses, to activate post above the preset rates. Reducing the number of neurons can thus increase both the firing rate of the pres and the synaptic weights that are necessary to reach the requested frequencies of post.

At certain firing rates and synaptic weights, the ratio between the LTP and PTP rates can be higher, implying that associative learning is more efficient. Considering that dWLTP represents the contribution of LTP and dWPTP the contribution of PTP to the synaptic weight during the training period tL, there is a maximal ratio rWMAX of dWLTP to dWPTP, reached for particular values dWLTPrMAX and dWPTPrMAX. In this work, we consider that the maximal efficiency of Hebbian learning corresponds to rWMAX and, therefore, to dWLTPrMAX. If a target frequency fPOST for the postsynaptic neuron is requested, then a minimal number nUNT of untrained presynaptic neurons with weight dWLTPrMAX can be activated to reach fPOST. Therefore, in the ideal case when LTP is maximal, nUNT depends on the functions that describe the weight variation by PTP and LTP, and on fPOST.
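The proposed efficiency measure reduces to a single ratio, sketched below; the numerical values in the usage example are purely illustrative, not measurements from the paper:

```python
def hebbian_efficiency(dw_ltp, dw_ptp):
    """Hebbian learning efficiency rW = dWLTP / dWPTP.

    dw_ltp and dw_ptp are the weight variations accumulated by LTP and PTP
    over the same training period tL.  rW > 1 means associative (LTP-driven)
    potentiation dominates the purely presynaptic (PTP-driven) one.
    """
    if dw_ptp == 0:
        raise ValueError("dWPTP must be non-zero")
    return dw_ltp / dw_ptp
```

For example, `hebbian_efficiency(8.0, 2.0)` returns 4.0, meaning that LTP contributed four times more weight change than PTP over the training period.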

Starting from these ideas, the contribution of this work is twofold: (i) the quantification of Hebbian learning efficiency as the ratio between the LTP and PTP effects; (ii) the evaluation of the influence of the number of neurons on the efficiency of Hebbian learning, focusing on SNNs with a reduced number of neurons (fewer than 20 per area).

As presented in Section 1.2 and Section 1.3, there are several comprehensive studies related to Hebbian learning or focused on the influence of variation in the number of neurons on the performance of adaptive SNN. However, there are no studies focused on overlapping these two research directions in systems of high biological plausibility.

The rest of the paper is organised as follows: Section 2 presents the general structure of the neural network and the experimental phases focusing on the proposed neuron model, and on the implementation of PTP and LTP mechanisms. The experimental results along with the details for each measured item are presented in Section 3. The paper ends with Section 4, which discusses the results, focusing on the biological plausibility of the used model, and presents some considerations for future research.

2. Materials and Methods

The SNN is based on a neuronal model of high biological plausibility [14]. Although this electronic neuron was implemented and tested in PCB hardware, the analysis presented in this work was based on SPICE simulations of the electronic circuit.

2.1. The Model of the Artificial Neuron

An artificial neuron includes a SOMA and one or more synapses. The electronic SOMA models elements related to information processing, such as the temporal integration of incoming stimuli, the detection of the activation threshold, and a refractory period. Electronic synapses model presynaptic elements of learning such as PTP and the postsynaptic plasticity that determines Hebbian learning via long-term potentiation (LTP). In addition, a synapse stores the synaptic weight using a capacitor that can be charged or discharged in real time using cheap circuits [26,27]. Electronic synapses generate excitatory or inhibitory spikes to feed the corresponding posts.

Figure 1 shows the main elements related to learning that are included in the neuronal schematic detailed in Appendix A [14]. The neuron detects the activation threshold using transistor TM and, during activation, TS generates a spike of variable energy ESPK that depends on the synaptic weight ws. In this work, we refer to ws as the voltage VW read across the capacitor CL shown in Figure 1. The synapse is potentiated by PTP, which is modelled by the discharge of CL when the neuron activates, and by LTP, which alters the charge in the capacitor during the activation of post.

Figure 1. The main elements of the neuronal schematic that are used to model the mechanisms of learning.

Potential VW determines the duration of the spike generated at OUT, modelling the effect of the synaptic weight on postsynaptic activity. Spike duration tSPK is determined by the potential VW because, during SOMA activation, transistor TS is open as long as VU (which is proportional to VW) is below its emitter-base voltage. The variation in VU is given by:

$V_U = V_{U0} + (V_{DD} - V_{U0})\, e^{-t/(R_D C_U)}$ (1)

The initial potential VU0 is calculated using Equation (2) for cut-off and Equation (3) for saturated regimes of transistor TS as follows:

$V_{U0} = V_W \dfrac{R_U}{R_U + R_D}$ (2)
$V_{U0} = \dfrac{V_W/R_U + V_B/R_B}{1/R_D + 1/R_U + 1/R_B}$ (3)

In $V_W = V_{DD} + V_A - V_F$ and $V_B = V_E - V_{EB}$, $V_F$ and $V_{EB}$ are the forward and emitter-base voltages, respectively. Similarly, after SOMA inactivation, $V_U$ is restored to $V_{DD}$ as follows:

$V_U = (V_{DD} - V_{UI})\left(1 - e^{-t/((R_C + R_D) C_U)}\right)$ (4)

where VUI is the initial value of VU when SOMA inactivates and CU starts charging.
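Equations (1)-(4) can be checked numerically with a direct transcription. The component values used in the example below are placeholders, not the parameters of the actual PCB design:

```python
import math

def v_u0_cutoff(v_w, r_u, r_d):
    # Eq. (2): resistive divider while transistor TS is in cut-off
    return v_w * r_u / (r_u + r_d)

def v_u0_saturated(v_w, v_b, r_u, r_d, r_b):
    # Eq. (3): node potential with TS saturated (three parallel branches)
    return (v_w / r_u + v_b / r_b) / (1 / r_d + 1 / r_u + 1 / r_b)

def v_u_activation(t, v_u0, v_dd, r_d, c_u):
    # Eq. (1): during SOMA activation, V_U relaxes from V_DD toward V_U0
    # with time constant R_D * C_U
    return v_u0 + (v_dd - v_u0) * math.exp(-t / (r_d * c_u))

def v_u_recovery(t, v_ui, v_dd, r_c, r_d, c_u):
    # Eq. (4): after SOMA inactivation, V_U recharges toward V_DD
    # through R_C + R_D, starting from the initial value V_UI
    return (v_dd - v_ui) * (1 - math.exp(-t / ((r_c + r_d) * c_u)))
```

For instance, with placeholder values VW = 1.6 V and RU = RD, Equation (2) gives VU0 = 0.8 V, and Equation (1) starts at VDD at t = 0 and decays toward that divider value, which sets the spike duration tSPK.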

2.2. Model for PTP and LTP

The activation of the neuron, which lasts tSPK = 44 µs, reverses the polarity of CL, which is discharged by an amount given by:

$\Delta V_{PTP} = V_{DD}\, e^{-t_{SPK}/((R_D + R_U) C_L)}$ (5)

Equation (5) models the potentiation of the synapse by PTP. To express LTP, we should consider the charge variation in CL when it is discharged through CA, followed by a reset of the charge in CA during post activation.

During neuronal activation, the potential in the capacitor CL varies as follows:

$\Delta V = V_{DD}\left(1 - e^{-t_{SPK}/(R_A C_E)}\right)$ (6)

where the equivalent capacitance is:

$C_E = \dfrac{C_L C_A}{C_L + C_A}$

Considering that ΔVA and ΔVC represent the variation in potential in CA and CL, respectively, we denote the ratio:

$k = \dfrac{\Delta V_A}{\Delta V_C} = \dfrac{C_L}{C_A}$

Thus, variation in the potential in CL that represents the temporary potentiation of the synapse is:

$\Delta V_C = \dfrac{C_E}{C_L}\, \Delta V$ (7)

During the neuronal idle state after its activation, the resultant variation of the potential in CL and CA is:

$\Delta V_S = \Delta V\, e^{-t_W/((R_A + R_M) C_E)}$ (8)

where $t_W = t_{post} - t_{pre}$ is the time window between the moments of neuronal activation. CA discharges in CL until the potentials in both capacitors reach equilibrium. This variation restores the synaptic weight to the value it had before the activation of the presynaptic neuron. If the postsynaptic neuron fires during the restoration of the synaptic weight, capacitor CA is discharged at a significantly higher rate until equilibrium is reached. Taking into account that $R_L \ll R_A + R_M$, the potential variation in CL during post activation is negligible. This implies that the variation in the potential in CL that models the weight variation by LTP is:

$\Delta V_{LTP} = \dfrac{C_E}{C_L}\, \Delta V_S$ (9)

ΔVS decreases according to Equation (8), implying that ΔVLTP depends on the time window tW.
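The PTP and LTP models of Equations (5)-(9) can likewise be transcribed directly. The component values in the example are placeholders chosen only to show that the LTP gain decays with the time window tW, as Equation (8) requires:

```python
import math

T_SPK = 44e-6  # spike duration used in the text (44 us)

def c_equiv(c_l, c_a):
    # series combination C_E = C_L * C_A / (C_L + C_A)
    return c_l * c_a / (c_l + c_a)

def dv_ptp(v_dd, r_d, r_u, c_l, t_spk=T_SPK):
    # Eq. (5): discharge of the weight capacitor C_L during one spike (PTP)
    return v_dd * math.exp(-t_spk / ((r_d + r_u) * c_l))

def dv_ltp(t_w, v_dd, r_a, r_m, c_l, c_a, t_spk=T_SPK):
    # Eqs. (6)-(9): weight gain by LTP for a pre-post window t_w
    c_e = c_equiv(c_l, c_a)
    dv = v_dd * (1 - math.exp(-t_spk / (r_a * c_e)))     # Eq. (6)
    dv_s = dv * math.exp(-t_w / ((r_a + r_m) * c_e))     # Eq. (8)
    return (c_e / c_l) * dv_s                            # Eq. (9)
```

With any positive placeholder values, a pair with a short window (e.g. tW = 1 ms) yields a larger ΔVLTP than one with tW = 10 ms, reproducing the dependence of LTP on the pre-post timing stated above.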

For this neuronal design, ws varies inversely with VW over the range [0.2 V, 1.6 V]. This implies that a lower VW models a higher synaptic potentiation. To simplify the presentation, in this research we refer to the variation dW in the voltage across CL that occurs during potentiation.

Therefore, the experiments presented in the following section focus on evaluating how the number of neurons affects the learning efficiency.

2.3. The Structure of the SNN

The synaptic configuration includes two presynaptic neural areas, preNAT and preNAUNT, which contain nT and nUNT neurons, respectively. The pres in these neural areas connect to a single post, as in Figure 2a. To allow weight variation by LTP, at the beginning of each experiment the synapses ST between preNAT and post were fully potentiated so they could activate post, while the weights of the synapses SUNT driven by preNAUNT were minimal. The SNN included additional neurons PreAUX and PostAUX for evaluating the potentiation by PTP of SAUX, which had the same initial value as SUNT. This allowed us to compare the PTP and LTP effects under similar conditions.

Figure 2. (a) The structure of the SNN that was used to perform the tests; (b) R varied among the presynaptic neurons Pre1, Pre2, …, PreN.

As shown in Figure 2b, the neurons in each presynaptic area were activated by constant potentials V1, …, VN or by pulses, as we detail in the sequel. For modelling the variability of the electronic components, the input resistors R1, R2, …, RN shown in Figure 2b were set within a 10% interval, which slightly varied the firing rates of the pres.

The firing rate of the postsynaptic neurons fPOST, and the variation in the synaptic weights dWLTP and dWPTP due to LTP and PTP, respectively, were determined via measurements on the simulated electronic signals [28,29]. The values of the input voltages for the activation of pres were set to several values to activate the neurons in the range that was used during our previous experiments [14]. In order to highlight the influence of the number of neurons on the Hebbian learning efficiency, the initial synaptic weights were minimal for extending their variation range.

2.4. Experimental Phases

The experiments started with a preliminary phase in which we determined dWLTP and dWPTP for a single spike when fUNT took several values, and variation in fPOST with the number nUNT. Following these preliminary measurements, we evaluated the efficiency of Hebbian learning by calculating the ratio rW=dWLTP/dWPTP during several phases as follows:

Phase 1. The value of dWLTP was determined when nT and nUNT, the numbers of neurons in the areas preNAT and preNAUNT, respectively, varied independently or simultaneously. These results were compared with the effect of PTP when only the neurons in the untrained area preNAUNT were activated. The variation dWLTP in the synaptic weight included the potentiation due to LTP, determined by pre-post pair activation, and the potentiation due to PTP, caused by the pre action potentials.

Typically, the frequency of post can be controlled within certain limits by adjusting the firing rate of the pres, independent of the number of neurons. In order to simplify the SNN structure during Phases 2 and 3, preNAT included one neuron.

Phase 2. Next, we determined the variation in dWLTP and dWPTP when synapses in preNAUNT were trained until they were able to activate post in the absence of preNAT activity.

Phase 3. For the last phase of the experiments, we considered a fixed frequency fM of the output neuron that matched the firing rate of the output neurons actuating the robotic joints in our previous experiments [14]. Thus, the SNN was trained until fPOST reached fM = 100 Hz when stimulated only by preNAUNT, independent of preNAT. In order to extract the contribution of PTP to dWLTP, neuron PreAUX was activated at the same rate as preNAUNT, and dWPTP was measured.

For the untrained pres, we set different frequencies that were not divisors of the firing rate of post, mimicking a less favourable scenario of neuronal activation. In this case, the time interval between pre and post activation varied randomly, increasing the diversity of the weight gain per post action potential. In a favourable scenario, the firing rate of pre is a divisor of the frequency of post, which improves the weight gain via the synchronisation of neuronal activation.
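The divisor scenario can be sketched by listing the pre-to-post time windows over a training run. This assumes ideal periodic spike trains and ignores the analogue jitter of the circuit, so it only illustrates the synchronisation argument:

```python
def pre_post_windows(f_pre, f_post, duration=1.0):
    """Distinct time windows t_post - t_pre between each pre spike and the
    first post spike that follows it, for ideal periodic spike trains."""
    post_times = [k / f_post for k in range(int(duration * f_post) + 1)]
    windows = set()
    for k in range(int(duration * f_pre)):
        t_pre = k / f_pre
        # first post spike at or after this pre spike
        t_post = next(t for t in post_times if t >= t_pre)
        windows.add(round(t_post - t_pre, 6))
    return windows
```

When f_pre = 25 Hz divides f_post = 100 Hz, every pre spike aligns with a post spike and a single window value occurs, so each pair contributes the same weight gain; a non-divisor rate such as 30 Hz spreads the windows over several values, which diversifies the gain per post action potential, as described above.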

3. Results

Here, we present the results obtained during the experimental phases described above.

3.1. Preliminary Phase

In order to assess the influence of the electronics on synaptic potentiation during a single spike, we determined the weight variation by PTP for several values of VIN when pre activated once. As presented in Figure 3a, PTP decreased as fUNT increased. However, over the long term, this variation was compensated for by the number of spikes per time unit, which increased at a higher rate with fUNT. A similar evaluation was performed for LTP when post was activated by trained neurons at t = 0.002 s after the activation of the untrained pre. In this case, the influence of LTP presented in Figure 3b was extracted from the measured dW by subtracting the PTP effect shown in Figure 3a.

Figure 3. Weight variation with fpre during a single spike by (a) PTP and (b) LTP for a time window.

Typically, the output frequency of an SNN depends on the number of pres that stimulate post, as shown in Figure 4a. Starting from this observation, we determined dWLTP after 2 s of training for different numbers of pres in PreNAUNT and PreNAT when fUNT = 75 Hz and fT = 100 Hz. Figure 4b–d show that dWLTP depended on the number of pres, following different patterns for the trained and untrained neurons. In addition, the learning rate by LTP increased with the number of pres, mainly due to the higher values of fPOST determined by the activation of more pres.

Figure 4. (a) The frequency of the output neuron. The synaptic weight variation by LTP after 2 s of training for variation in the number of (b) untrained neurons with fUNT = 75 Hz; (c) trained neurons with fT = 100 Hz; (d) both trained and untrained neurons.

3.2. The Efficiency of Hebbian Learning

Variation in the ratio rW with the synaptic-weight voltage VW is presented in Figure 5a. This represents the ideal case when LTP is maximal, which was obtained by the activation of the postsynaptic neuron shortly after the untrained pre. rW was maximal for a specific weight that was far from the limits of the variation interval.

Figure 5. (a) Variation in the ratio rW with synaptic weight; (b) the ratio of synaptic weight variation between PTP and LTP after 2 s of training.

Weight variation dW for different firing rates of pre was determined for both PTP and LTP when the neurons were activated for a fixed period of t = 2 s. The data plotted in Figure 5b show that the ratio rW decreased significantly when the frequency of the untrained pres was above 50 Hz. Thus, taking into account that rW was almost stable for a single neuron in PreNAUNT, in the next experimental phase we evaluated the influence of the number of neurons on rW for a fixed activation frequency fUNT = 50 Hz.

In this setup, the SNN was trained until the first activation of post by the synapse that was potentiated by LTP. The PTP level for the synapse SAUX was determined via the activation of the auxiliary neuron PreAUX (see the SNN structure in Figure 2a) at the same frequency as that of the untrained pres. In order to determine whether rW had a similar variation for another frequency of the neurons in PreNAUNT, we performed similar measurements for fUNT = 75 Hz. As presented in Figure 6, the variation in rW showed that the best learning efficiency was obtained for nUNT = 1 neuron when fUNT = 50 Hz and for nUNT = 3 neurons when fUNT = 75 Hz. The different numbers of pres indicated that fUNT influenced the optimal number of neurons for the best learning efficiency when the neural paths were trained until the first activation of post by PreNAUNT independent of PreNAT.

Figure 6. The ratio of potentiation levels between LTP and PTP when the SNN was trained until the first activation of post for (a) fUNT = 50 Hz and (b) fUNT = 75 Hz.

The next experimental phase evaluated rW and the duration of the training process tL when the firing rate of the output neurons reached fM=100 Hz, while the untrained pres in the area PreNAUNT activated at several firing rates in a set that included divisors of fM.

The plots in Figure 7a,b show that the weight variations dWPTP and dWLTP decreased when the number of untrained pres nUNT increased. Typically, dWPTP is proportional to tL, implying that the SNN learns faster when more neurons activate post. The improved value of tL was determined by the lower weights that were necessary to activate post at the requested firing rate. In order to compare dWPTP and dWLTP, we determined the weights that were potentiated by PTP when the neuron PreAUX was activated at the same rate fUNT as the neurons in the area PreNAUNT. Figure 8a shows the variation in rW for several firing rates fUNT that were not divisors of fM. In this case, the local maximum rWlocal was obtained for a single neuron per area (nUNT = 1). Taking into account that LTP may be more efficient when fUNT is a divisor of fM due to the synchronisation of the pre-post neurons, we evaluated the weight variation for fUNT = 25, 33.3, and 50 Hz. In order to eliminate the variation in PTP with the continuous input voltage of the neurons, the pres were activated with digital pulses of the same amplitude generated at the rate fUNT. The results presented in Figure 8b show that the maximal rWlocal was obtained for nUNT = 2 neurons. The best learning efficiency rWMAX was obtained when fUNT = 25 Hz and the untrained presynaptic area included two neurons.

Figure 7. Learning until fPOST = 100 Hz when the number of untrained neurons nUNT varied: (a) dWLTP and (b) dWPTP for fUNT = 50 Hz.

Figure 8. Variation in the ratio rW = dWLTP/dWPTP with nUNT when (a) fUNT is not a divisor of fM and (b) fUNT is a divisor of fM.

Typically, the weight variations by LTP and PTP follow different patterns as functions of the training duration tL. The difference between the two functions implies that there is a value tLmax for which the ratio between LTP and PTP is maximal. This value corresponds to a synaptic weight wLTP obtained by LTP and, consequently, to a potential VW. Typically, there is a minimal number nUNT of pres firing at a fixed frequency that are able to activate a post when the weight is wLTP. In our work, the best rW = 4.28 for nUNT = 2 neurons corresponded to the potential VW = 0.7 V in the weight capacitor.

4. Discussion and Conclusions

At the synaptic level, the neural paths in the brain are trained by associative or Hebbian learning, which is based on long-term potentiation, the postsynaptic element of learning. From a biological point of view, presynaptic long-term plasticity violates the Hebbian learning rules that depend on postsynaptic activity. Previous research on SNNs showed that control systems use a reduced number of electronic neurons, while in classification tasks SNNs use fewer neurons than traditional CNNs do. Starting from these ideas, this work focused on evaluating the influence of the number of neurons on the efficiency of Hebbian learning, characterised as the ratio of the LTP and PTP effects on the synaptic weights. A higher ratio increases the effect of LTP and, consequently, the power of the SNN to discriminate the neural paths that are trained by associative learning from the paths where only presynaptic plasticity occurs.

The simulation results showed that, although LTP depends mainly on the frequency of the postsynaptic neurons, the number of neurons affects the Hebbian learning efficiency when the posts must reach a predefined frequency. In this case, the best LTP/PTP ratio was obtained when the frequency of the untrained pres was the lowest divisor of the target frequency of post. The efficiency of Hebbian learning reached a maximum for two pres and decreased as the number of pres increased. Taking into account that the LTP/PTP ratio was better for a certain number of neurons, we can deduce that certain synaptic weights result in better Hebbian learning efficiency. Indeed, the position of the maximal rW inside the variation interval (Figure 8b) matched the variation in rW in the ideal case presented in Figure 5a. This implies that the minimal number of neurons necessary to activate post at the requested firing rate was related to the synaptic weight.

In conclusion, previous research showed that electronic SNNs with a reduced number of neurons are trained efficiently by Hebbian learning, while the current research strengthens this idea by showing that fewer neurons improve associative learning. This could reduce the cost and improve the reliability of hardware implementations of SNNs.

Abbreviations

The following abbreviations are used in this manuscript:

ANN Artificial neural network
LTD Long-term depression
LTP Long-term potentiation
PCB Printed circuit board
PTP Post-tetanic potentiation
SMA Shape memory alloy
SNN Spiking neural network

Appendix A

Figure A1 presents the schematic circuit, including the parametric values, of the electronic neuron that was implemented in PCB hardware [28]. The neuron includes one electronic soma (SOMA) and one or more electronic synapses (SYN). The SOMA detects the neuronal activation threshold using transistor T1 and activates the SYNs. The SOMA of the postsynaptic neurons that are stimulated by excitatory or inhibitory synapses includes an integrator of the input activity. When the SOMA activates the connected SYNs, SOUT generates pulses at their output NOUT, whose energy depends on the charge stored in the weight capacitor CL.

Figure A1. The schematic of (a) the electronic SOMA and (b) the electronic synapse (SYN).

Author Contributions

Conceptualisation, M.H.; methodology, G.-I.U. and M.H.; simulation, G.-I.U.; validation, M.H. and A.B.; formal analysis, G.-I.U.; investigation, G.-I.U. and M.H.; data curation, G.-I.U.; writing—original draft preparation, G.-I.U. and M.H.; writing—review and editing, M.H. and A.B.; supervision, M.H. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research was funded by the Doctoral School of TUIASI.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

