Cognitive Neurodynamics. 2024 Nov 12;18(6):4071–4087. doi: 10.1007/s11571-024-10178-x

Collective behavior of an adapting synapse-based neuronal network with memristive effect and randomness

Vinoth Seralan 1, D Chandrasekhar 2, Sarasu Pakiriswamy 3, Karthikeyan Rajagopal 4,5,
PMCID: PMC11655764  PMID: 39712094

Abstract

This study examines a network of adaptive synapse-based neurons with a small-world topology, coupled through electromagnetic flux and infused with randomness. First, the global multi-stability of a single adaptive synapse-based neuron model with magnetic flux is explored. The non-autonomous neuron model exhibits periodically switchable equilibrium states that are strongly related to the transitions between stable and unstable points within each periodic cycle, leading to the creation of global multi-stability. Various numerical measures, including bifurcation plots, phase plots, and basins of attraction, illustrate the intricate dynamics of diverse coexisting global firing activities. Moreover, the model is extended by coupling two neurons with a memristive synapse. The dynamics of the coupled neuron model are showcased with the help of the largest Lyapunov exponents, and synchronized dynamics are assessed with the help of the mean absolute error. Next, we consider a regular network of neurons connected to their nearest neighbors through the memristive synapse. We then reconstruct it into a small-world network by increasing the randomness of the rewiring links. Consequently, we observe collective behavior influenced by the number of neighborhood connections, the coupling strength, and the rewiring probability. We use spatio-temporal patterns, recurrence plots, and global order parameters to verify the reported results.

Keywords: Stability, Bifurcation, Adaptive synapse neuron, Memristive synapse, Electromagnetic induction, Synchronous dynamics

Introduction

Neural networks, which draw inspiration from the complexities of neurobiology, are conceptualized in terms of two distinct types of variables, neurons and synapses. These variables serve as the fundamental building blocks for understanding the inner workings of neuronal systems. In a seminal work, Dong and Hopfield (1992) demonstrated the effectiveness of an adaptive neuron model that integrates synaptic processes, providing a valuable tool for simulating the formation of synaptic connections within the visual cortex. Additionally, there has been significant research emphasis on exploring the dynamic properties of these neural network models. For instance, Li and Chen (2005) investigated the presence of multiple attractors within a single-neuron model incorporating an adapting feedback synapse. Following this, Sugase-Miyamoto et al. (2008) observed short-term memory traces in the rapidly adapting synapses of the inferior temporal cortex. Furthermore, analog electronic circuit design for these systems was discussed in detail by Chen and Li (2011). Berger et al. (2017) then formulated a neural learning model and investigated how the spatial features of synaptic adaptation affect learning performance. Bao et al. (2020, 2021) presented an adaptive synapse-based neuron model in their works, illustrating the global presence of firing patterns. Within this neuron model, a consistent occurrence of chaotic or periodic firing patterns is observed, alongside two small periodic firing patterns, across a specific range of control parameters.

Numerous neuron models, such as those of Chay (1985), Xu et al. (2020), Hodgkin and Huxley (1990), Gu et al. (2014), Morris and Lecar (1981) and FitzHugh (1961), have been created to date in an effort to understand the firing patterns of biological neurons. Various activation functions, such as the hyperbolic tangent function of Chen and Li (2011) and the sine function of Bao et al. (2022), have been employed in these neuron models. Different coupling mechanisms have also been investigated, such as gap junctions by Pal et al. (2021), linear coupling by Wang et al. (2021), chemical connections by Bahramian et al. (2021), blinking coupling by Parastesh et al. (2022), and flux-controlled memristive coupling by Xu et al. (2023). Substantial progress has been made in understanding the coupling dynamics within these neuron-based small networks. Additionally, novel approaches such as electric field coupling using capacitors by Liu et al. (2019) and electromagnetic coupling employing memristors have been presented by Bao et al. (2019), Bao et al. (2023), Hajian et al. (2023) and Eftekhari and Amirian (2023). For instance, Xu et al. (2023) explored extreme multi-stability and phase synchronization in a heterogeneous bi-neuron Rulkov network with memristive electromagnetic induction. Another study by Ding et al. (2023) focused on a memristive synapse-coupled piece-wise linear simplified Hopfield neural network, providing its dynamical behavior and circuit implementation. Xu et al. (2021) used a flux-controlled memristive synapse to couple a bi-neuron HNN, emulating an electromagnetically induced current with a piece-wise linear activation function. Bao et al. (2022) replaced the compound hyperbolic tangent activation function with a simple sine function to introduce a unique adaptive synapse-based neuron (ASN) model and showed that their model exhibits up to 12 coexisting heterogeneous attractors. Kumarasamy et al. (2023) examined a four-dimensional discrete neuron network incorporating magnetic flux coupling, revealing the appearance of diverse synchronization states, including chimera states and traveling waves. In the realm of discrete nonlinear systems, Ramasamy et al. (2023) adopted a master stability approach to analyze synchronization dynamics.

Synchronization phenomena persistently manifest across diverse domains, encompassing biological systems, natural occurrences, social dynamics, and engineering applications. Extensive exploration of this phenomenon has been conducted within the realm of complex biological networks, building upon the pioneering work of Watts and Strogatz (1998) on small-world networks. Commencing with a regular lattice, studies have demonstrated that introducing a small number of randomly rewired links significantly reduces the distance between nodes. Neural systems also frequently exhibit these small-world effects. By introducing a few random shortcuts onto a regular network, the network synchronizability can be significantly increased (Riecke et al. 2007). For instance, the synchronization of firing patterns in small-world networks of HR neurons has been explored by Wang et al. (2010, 2015), Zhu et al. (2016). Most recently, Joseph et al. (2023) carried out a synchronization study of a random network of HR neurons connected via adaptive memristive synapses. Stochastic resonance in three-neuron FitzHugh–Nagumo (FHN) motifs and in their small-world network with higher-order motif interactions was studied by Li et al. (2024). Notably, work on adaptive synapse-based neurons with an intra-coupled memristive synapse in a randomly connected network through inter-coupled memristors has not been previously reported. To illustrate the electromagnetic induction caused by membrane potential changes in a network of adaptive synapse-based neurons, this study introduces the use of a flux-controlled memristive synapse.

The main contributions of this paper are as follows:

  1. Our research delves into the dynamic interplay of neuron activity, unveiling an adaptive synapse-based model.

  2. Investigating the non-autonomous nature of the model, we uncover periodically switchable equilibrium states, intricately linked to external inputs, resulting in a global multi-stability phenomenon.

  3. Through detailed numerical analyses, we showcase the complex dynamics of coexisting firing activities, employing bifurcation plots, phase portraits, and basin of attraction to illuminate the intricate transitions between different attractors.

  4. Expanding our model to include the coupling of two neurons with a memristive synapse, we explore the dynamics of this interconnected system, identifying its behavior through the largest Lyapunov exponents.

  5. Extending our exploration to small-world networks of neurons, we unveil their collective behavior and synchronization patterns, shedding light on the intricate interdependence of neurons connected in a network through memristive synapse.

The remainder of the paper is structured as follows: In Sect. 2, we study the single neuron model and conduct a stability analysis of its equilibrium points. Using numerical simulations, we explore bifurcation behaviors through bifurcation plots as well as Lyapunov exponent spectra. We investigate the dependence of the dynamical behavior on initial conditions, employing local basins of attraction and phase trajectories. Section 3 provides a detailed description of the coupled model, accompanied by the Lyapunov exponent spectra and the mean absolute error as a synchronization quantifier. Additionally, Sect. 4 delves into network dynamics through spatio-temporal patterns, recurrence plots, and order parameters. Finally, Sect. 5 gives discussions and summarizes the conclusions drawn from the study.

Mathematical model and dynamical behaviors

The neuron, which serves as the fundamental unit of the brain, plays a crucial role as the basic building block of intricate and nonlinear electrical activities. In this section, our main emphasis is to thoroughly explore the bifurcation process within a neuron model that depends on an adaptive synapse.

Adapting synapse-based neuron model

In the neural system, the connections between neurons are facilitated by synapses, and the intensity of the connection from neuron j to neuron i is expressed as Tij. A neuron indexed by i can be conceptualized as a distinct unit with two main components: an input denoted as ui and an output represented as Vi. As a result, the dynamics of neuron activity are governed by the following equation:

$$\frac{du_i}{dt} = -\frac{u_i}{\tau_i} + \sum_{j=1}^{N} T_{ij} V_j + I_i, \qquad i = 1, 2, \ldots, N, \tag{1}$$

where τi signifies the integration time constant, N represents the number of neurons, and Ii represents the input current. The mathematical model described in Eq. (1) is specifically designed for scenarios characterized by constant neuron connections, where the value of Tij remains fixed over time. However, when dealing with adaptive synapses, it becomes essential to introduce additional equations that describe the evolving nature of these synapses over time, as detailed in Dong and Hopfield (1992), Li and Chen (2005). Consequently, a single neuron model featuring an adaptive synapse is given by

$$\dot{u} = A(u)\,A(s) + I_{\mathrm{ext}} - u, \qquad \dot{s} = -\alpha\left(s - A^2(u)\right), \tag{2}$$

where u and s represent the membrane potential and the synapse variable, respectively, and α is a positive constant, as indicated by prior studies (Li and Chen 2005; Chen and Li 2011; Bao et al. 2020). Here Iext = I cos(2πft) is the externally imposed input, where I and f are its amplitude and frequency, respectively. The neuron activation function A for an input x is formulated as

$$A(x) = 2\tanh(gx) - \tanh(gx + 1.5g) - \tanh(gx - 1.5g), \tag{3}$$

here g is a positive parameter. Then, the neuron model (2) featuring an adjustable synapse with magnetic flux is expressed as

$$\begin{aligned} \dot{u} &= A(u)\,A(s) + I_{\mathrm{ext}} - u - k_0 M_\phi u,\\ \dot{s} &= -\alpha\left(s - A^2(u)\right),\\ \dot{\phi} &= k_1 u - k_2 \phi, \end{aligned} \tag{4}$$

where ϕ represents the magnetic flux, and Mϕ characterizes the interaction between ϕ and u. The term -k0Mϕu denotes the induced current generated by electromagnetic induction, with its strength controlled by the parameter k0. According to Bao et al. (2020), Lv et al. (2016), Usha and Subha (2019), the memory inductance Mϕ can be expressed as Mϕ = δ + 3βϕ², where δ and β are positive parameters. Finally, the parameters k1 and k2 are rates that govern the evolution of the intra-connected magnetic flux according to Lv et al. (2016), Usha and Subha (2019), Joseph et al. (2023).
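For concreteness, the activation (3) and the flux-coupled model (4) can be written as an in-place ODE right-hand side in Julia. The sketch below is ours, not the authors' code; the function names (A, asn!) and the parameter ordering in p are arbitrary choices.

# Activation function (3)
A(x, g) = 2*tanh(g*x) - tanh(g*x + 1.5*g) - tanh(g*x - 1.5*g)

# In-place right-hand side of the ASN model (4); state = (u, s, ϕ)
function asn!(du, state, p, t)
    u, s, ϕ = state
    α, I, f, g, β, δ, k0, k1, k2 = p
    Iext = I*cos(2*π*f*t)          # external periodic forcing
    Mϕ   = δ + 3*β*ϕ^2             # flux-controlled memory inductance
    du[1] = A(u, g)*A(s, g) + Iext - u - k0*Mϕ*u
    du[2] = -α*(s - A(u, g)^2)
    du[3] = k1*u - k2*ϕ
    return nothing
end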

The adaptive property of the synapse in the proposed model (4) refers to its ability to alter its strength depending on the correlation between the activities of connected neurons. This adaptation is governed by a variant of the Hebbian learning rule, which asserts that the synaptic connection strengthens when there is a positive correlation between the pre-synaptic and post-synaptic neuron activities, see Dong and Hopfield (1992), Li and Chen (2005). In particular, the computation of synaptic weights varies according to distinct patterns of neuronal activity. The degree to which synaptic weights stabilize or fluctuate during network learning, or in response to different inputs, is indicative of the correlation dynamics within the neural network. Recently, Wu et al. (2023), Hou et al. (2023) explained the self-adaptive property of the synapse, whereby the synaptic intensity and intrinsic parameters can be controlled by the energy level.

Equilibrium and stability

Let S = (u, s, ϕ) be an arbitrary equilibrium state of the model given in (4), where s = A²(u) and ϕ = (k1/k2)u, and u can be obtained by solving

$$h(u) = -u + A(u)\,A\!\left[A^2(u)\right] + I_{\mathrm{ext}} - k_0\!\left(\delta + 3\beta\frac{k_1^2}{k_2^2}u^2\right)u = 0. \tag{5}$$

Solving the above equation yields S = (u, s, ϕ), referred to as an AC equilibrium state (Bao et al. 2020). The stability (Jacobian) matrix at S = (u, s, ϕ) is derived from (4) as follows:

$$J_S = \begin{pmatrix} -1 + A'(u)A(s) - k_0\left(\delta + 3\beta\phi^2\right) & A(u)A'(s) & -6 k_0 \beta u \phi \\ 2\alpha A(u) A'(u) & -\alpha & 0 \\ k_1 & 0 & -k_2 \end{pmatrix}, \tag{6}$$

where

$$A'(x) = -2g\tanh^2(gx) + g\tanh^2(gx + 1.5g) + g\tanh^2(gx - 1.5g).$$

The characteristic equation of matrix (6) is

$$P(\lambda) = \lambda^3 + b_1\lambda^2 + b_2\lambda + b_3 = 0, \tag{7}$$

where

$$\begin{aligned} b_1 &= \alpha + k_2 + c_2 k_0 - A'(u)A(s) + 1,\\ b_2 &= 6 k_0 k_1 \beta u \phi + \alpha\left[k_2 + c_2 k_0 - A'(u)c_1 + 1\right] + k_2\left[c_2 k_0 - A'(u)A(s) + 1\right],\\ b_3 &= \alpha k_2\left[1 - A'(u)c_1 + c_2 k_0\right] + 6 k_0 k_1 \alpha \beta u \phi,\\ c_1 &= A(s) + 2A'(s)A^2(u),\\ c_2 &= \delta + 3\beta\phi^2. \end{aligned}$$

The roots of Eq. (7), denoted as λi for i=1,2,3, correspond to the eigenvalues of the Jacobian matrix (6). By utilizing the given values of b1, b2, and b3, an analysis of equilibrium stabilities is conducted using the Routh–Hurwitz criteria. Specifically, for Eq. (7), it is established that the real parts of the roots are exclusively negative when the conditions b1>0, b3>0, and b1b2-b3>0 are met. Next, we can categorize the time-varying equilibria and their stability characteristics based on the relationships between b1, b2 and b3.

For the ASN model (4) the control parameters are set to

$$\alpha = 3,\; I = 1.2,\; f = 1,\; g = 5,\; \beta = 0.1,\; \delta = 0.1,\; k_0 = 0.1,\; k_1 = 0.01,\; k_2 = 0.1. \tag{8}$$

The initial conditions for all variables are chosen randomly in the interval [-0.5, 0.5]. Over two complete periodic cycles of I(t), t varies between -1 and 1. Using the periodically switchable state S = (u, s, ϕ), the characteristic equation in (7) is used to compute the three eigenvalues. The evolution of the state u is visually depicted in Fig. 1, where the abbreviations HB and LP signify a Hopf bifurcation point and a limit point, respectively. For the external forcing times t = -1, -0.5, 0, 0.5, 1, the resulting fixed points Si, i = 1, ..., 5, and their stability nature are given in Table 1. The DC equilibrium states are obtained by considering the external forcing at t = 0.
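The eigenvalue computations behind Table 1 can be reproduced numerically. The following sketch is our own illustration, reusing A(x, g) from the earlier sketch and taking the parameter values of (8) as keyword defaults; it evaluates the Jacobian (6) at the equilibrium S1 and checks the Routh–Hurwitz conditions for (7).

using LinearAlgebra

# Derivative A'(x) of the activation, cf. the expression below Eq. (6)
dA(x, g) = -2*g*tanh(g*x)^2 + g*tanh(g*x + 1.5*g)^2 + g*tanh(g*x - 1.5*g)^2

# Jacobian (6) of model (4) at an equilibrium (u, s, ϕ); defaults follow (8)
function jacobian(u, s, ϕ; α = 3.0, g = 5.0, β = 0.1, δ = 0.1, k0 = 0.1, k1 = 0.01, k2 = 0.1)
    [ dA(u,g)*A(s,g)-1-k0*(δ+3*β*ϕ^2)   A(u,g)*dA(s,g)   -6*k0*β*u*ϕ ;
      2*α*A(u,g)*dA(u,g)                -α               0.0         ;
      k1                                0.0              -k2          ]
end

J = jacobian(0.141033, 1.47688, 0.0141033)   # equilibrium S1 from Table 1
λ = eigvals(J)                               # eigenvalues of (6)

# Routh–Hurwitz coefficients of the characteristic polynomial (7)
b1 = -tr(J)
b2 = (tr(J)^2 - tr(J*J)) / 2                 # sum of the principal 2×2 minors
b3 = -det(J)
stable = b1 > 0 && b3 > 0 && b1*b2 - b3 > 0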

Fig. 1

AC equilibrium states examined over two complete periodic cycles of the function I(t) using the specified parameter values. The stable and unstable branches of equilibrium states are indicated by the red and black lines, respectively. Furthermore, an inset plot offers a detailed view of the diagram, concentrating on the external forcing time interval t ∈ (0.23, 0.27). The intersections of the vertical dashed lines with the graph correspond to the DC equilibrium states. (Color figure online)

Table 1.

Equilibrium points and their stability for model (4) for different values of t

t | Equilibria | Eigenvalues | Stability
(-1, 0, 1) / (-0.5, 0.5) | S1: (0.141033, 1.47688, 0.0141033) | λ1 = -0.1, λ2,3 = 1.46686 ± 15.9943i | Unstable spiral
| S2: (0.0680174, 0.429168, 0.00680174) | λ1 = -3.61512, λ2 = -0.09999, λ3 = 16.8869 | Unstable node
| S3: (±1.0874, 3.87358, ±0.10874) | λ1 = -3, λ2 = -1.10284, λ3 = -0.100707 | Stable node
| S4: (±1.43961, 1.67207, ±0.143961) | λ1 = -13.6154, λ2 = -0.0999675, λ3 = 8.12176 | Unstable node
| S5: (±1.63125, 0.179893, ±0.163125) | λ1 = -0.100135, λ2,3 = -4.44686 ± 3.93503i | Stable spiral

In the subsequent analysis, numerical investigations were conducted using the Julia software and the DynamicalSystems.jl package by Datseris (2018). The simulations employed the AutoTsit5(Rosenbrock23()) algorithm with a relative tolerance of 10⁻⁴ and a time step of Δt = 0.01.
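A minimal sketch of this simulation setup, reusing asn! from the earlier sketch, could look as follows; the time span is our own choice, and the parameter tuple follows the ordering assumed there.

using DifferentialEquations

p  = (3.0, 1.2, 1.0, 5.0, 0.1, 0.1, 0.1, 0.01, 0.1)   # α, I, f, g, β, δ, k0, k1, k2 from (8)
u0 = rand(3) .- 0.5                                    # random initial state in [-0.5, 0.5]
prob = ODEProblem(asn!, u0, (0.0, 1000.0), p)
sol  = solve(prob, AutoTsit5(Rosenbrock23()), reltol = 1e-4, saveat = 0.01)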

Bifurcation

In the context of model (4), the parameters remain fixed according to (8). We define four distinct initial conditions (u(0), s(0), ϕ(0)) as follows: (-0.2, 5, 0), (0.2, 5, 0), (3, 5, 0), and (-3, 5, 0). Initially, the parameter I serves as the bifurcation parameter, varying within the range I ∈ [0.5, 1.4]. Bifurcation diagrams depicting the maxima (umax) of the variable u are presented in Fig. 2a(i). Orbits represented by red, dark blue, dark green, and brown originate from the initial conditions (-0.2, 5, 0), (0.2, 5, 0), (3, 5, 0), and (-3, 5, 0), respectively. Additionally, a bifurcation diagram is constructed by considering k0 as a bifurcation parameter within the interval k0 ∈ [-5, 5], as illustrated in Fig. 2a(ii).

Fig. 2

a The bifurcation diagrams with respect to the parameters I ∈ (0.5, 1.4) and k0 ∈ (-5, 5) for four distinct sets of initial conditions, denoted as (-0.2, 5, 0), (3, 5, 0), (0.2, 5, 0) and (-3, 5, 0), correspondingly. b The coexisting largest Lyapunov exponents. (Color figure online)

To differentiate the dynamical behaviors of model (4), the largest Lyapunov exponent (LLE) is calculated, which measures the rate of separation of infinitesimally close trajectories in phase space, with a fixed time step of 5×10⁻³. Periodic behaviors with regular attractors are characterized by LLE < 0, quasi-periodic behaviors correspond to LLE = 0, and chaotic behavior corresponds to LLE > 0, see Xu et al. (2023), Kong et al. (2024), Sun et al. (2023). The dynamic behavior of chaotic systems is analyzed using nonlinear dynamic analysis methods; for some references to chaos theory, see Strogatz (2018), Wang et al. (2023), Gao et al. (2023). Figure 2b illustrates the largest Lyapunov exponents (LLEs) for the four sets of initial conditions (-0.2, 5, 0), (0.2, 5, 0), (3, 5, 0), and (-3, 5, 0) as functions of the parameters k0 and I, respectively. The red and green solid lines correspond to the initial conditions (-0.2, 5, 0) and (3, 5, 0), while the blue and brown dashed lines correspond to the initial conditions (0.2, 5, 0) and (-3, 5, 0), respectively. Notably, two observed periodic orbits, colored green and brown, persist in the adjustable parameter region k0 ∈ (-5, 1.5). Similarly, for I ∈ [0.5, 1.4], globally observed periodic orbits appear in green and brown, while chaotic or periodic orbits appear in blue and red. These periodic orbits consistently coexist with the central chaotic or periodic orbit, leading to the emergence of global multi-stability, where multiple firing activities coexist on a global scale. Next, representative dynamical behaviors are presented in the form of phase portraits for the parameter values k0 = -5, -4, -2, 1, 3, 5 in Fig. 3, which shows chaotic and higher-periodic orbits obtained with the initial condition (0.2, 5, 0).
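The reported LLE values were obtained with DynamicalSystems.jl; purely to illustrate the underlying idea, a crude two-trajectory (Benettin-type) estimate with a fixed step can be sketched in plain Julia as follows. The explicit Euler stepping and the step count are our own simplifications, not the procedure used for the figures.

function lle_estimate(f!, u0, p; d0 = 1e-8, dt = 5e-3, nsteps = 200_000)
    u, v = copy(u0), copy(u0)
    v[1] += d0                               # tiny initial separation
    du, dv = similar(u), similar(v)
    acc, t = 0.0, 0.0
    for _ in 1:nsteps
        f!(du, u, p, t); u .+= dt .* du      # crude explicit Euler step
        f!(dv, v, p, t); v .+= dt .* dv
        t += dt
        d = sqrt(sum(abs2, u .- v))
        acc += log(d / d0)
        v .= u .+ (d0 / d) .* (v .- u)       # renormalize the separation vector
    end
    return acc / (nsteps * dt)
end

lle = lle_estimate(asn!, [0.2, 5.0, 0.0], p)  # initial condition (0.2, 5, 0)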

Fig. 3

Phase portrait for the model (4) exhibits chaotic and higher periodic orbits for different values of k0 a k0=-5, b k0=-4, c k0=-2, d k0=1, e k0=3, f k0=5

The depiction of a nonlinear dynamical system’s behavior in the parameter plane is facilitated by its dynamical map, a valuable tool for characterizing the evolution of dynamic behaviors. This mapping involves assigning distinct colors based on the values of the largest Lyapunov exponent (LLE). To enhance comprehension of dynamic distributions in the parameter planes, LLEs are utilized in the I versus k0 plane to visually represent the progression of neuron firing activities in Fig. 4.

Fig. 4

Dynamical maps based on the two-dimensional largest Lyapunov exponent for neuronal firing activities in the (I,k0) parameter plane. (Color figure online)

Bistability

Previously, we demonstrated that model (4) is highly parameter dependent and sensitive to parameter values, illustrating this through bifurcation diagrams and largest Lyapunov exponents (LLEs) showcasing the coexistence of chaotic and periodic orbits. Now, it is important to show the sensitivity of model (4) to initial conditions. The basin of attraction serves as a tool to identify the regions of the phase plane in which multiple attractors coexist. Further, we emphasize the discovery of complex basins of attraction in the ASN model, exhibiting intricate patterns that display high sensitivity to initial conditions. Consequently, the ASN model exhibits entirely different firing behaviors under small changes to the initial conditions. Additionally, a range of theoretical analyses and numerical simulations of oscillations and motions plays a pivotal role in conducting comprehensive investigations into the dynamics of complex systems, especially those involving nonlinear dynamical systems.

The basin of attraction is determined by selecting initial conditions in the intervals u(0) ∈ [-5, 5], s(0) ∈ [-15, 15], and ϕ(0) ∈ [-3, 3], with 400 points in each interval. The basin of attraction for specific choices of ϕ(0), namely ϕ(0) = -2.985 in the u0 versus s0 plane (Fig. 5a(i)), ϕ(0) = -1.5 (Fig. 5b(i)), and ϕ(0) = -0.135 (Fig. 5c(i)), is depicted. The basins of the coexisting attractors are differentiated by various colors, and the corresponding phase portraits are plotted for different choices of initial conditions from the basins in Fig. 5a(ii), b(ii) and c(ii). Remarkably, for specific selections of the initial state of the plane, the phase positions of coexisting attractors, particularly those exhibiting chaotic behavior, display intricate dynamics, contributing to the emergence of substantial complexity. To illustrate the coexistence of various types of attractors, such as one-periodic (1P), two-periodic (2P), and chaotic attractors (CA), phase portraits are plotted for different choices of initial conditions (Fig. 6), and the specific initial conditions are provided in Table 2.
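A basin scan of this kind can be assembled by integrating each grid point past a transient and assigning a label from some attractor signature. The classification rule below (counting distinct peak heights of u) is only a rough stand-in for the attractor identification used for Fig. 5; it reuses asn!, p and the solver setup from the earlier sketches.

function classify(u0, s0, ϕ0, p; ttrans = 500.0, tend = 600.0)
    prob = ODEProblem(asn!, [u0, s0, ϕ0], (0.0, tend), p)
    sol  = solve(prob, AutoTsit5(Rosenbrock23()), reltol = 1e-4, saveat = 0.01)
    u = [x[1] for x in sol.u[sol.t .>= ttrans]]               # discard the transient
    peaks = [u[i] for i in 2:length(u)-1 if u[i] > u[i-1] && u[i] > u[i+1]]
    return length(unique(round.(peaks, digits = 2)))          # 1 ≈ 1P, 2 ≈ 2P, many ≈ chaotic
end

basin = [classify(u0, s0, -1.5, p) for s0 in range(-15, 15, length = 400),
                                       u0 in range(-5, 5, length = 400)]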

Fig. 5

In the initial plane defined by u0 and s0, the basin of attraction is depicted for three different initial values: ϕ0 = -2.985 in a, -1.5 in b and -0.135 in c. Attracting regions with distinct attractors are distinguished by their respective colors. The corresponding numerical phase plane plots illustrate representative coexisting firing activities in the u-s plane for the respective values of ϕ0. (Color figure online)

Fig. 6

Demonstrating the coexistence of various attractors, including two 1-period, two 2-period, and chaotic attractors, for different selections of the initial condition, which is given in Table 2. (Color figure online)

Table 2.

Initial conditions used to obtain coexisting attractors of Fig. 6

Coexisting attractor | Corresponding color | Initial condition
Chaotic attractor | Cyan | (-2, 3.5, 0)
Chaotic attractor | Gray | (2, 6, 0)
Period-2 limit cycle | Blue | (-0.1, 2, 0)
Period-2 limit cycle | Red | (0.1, 3, 0)
Period-1 limit cycle | Green | (-2, 1, 0)
Period-1 limit cycle | Brown | (2, 1, 0)

Dynamics of coupled neurons

Since the structure of even a single neuron is complex, this section is dedicated to exploring the dynamic behavior of a system consisting of two adaptive synapse neurons diffusively interconnected through the magnetic flux; its schematic diagram is presented in Fig. 7. The proposed model takes the following form:

$$\begin{aligned} \dot{u}_1 &= A(u_1)A(s_1) + I_{\mathrm{ext}} - u_1 - k_0 M_{\phi_1} u_1 - g_0 M_\psi (u_2 - u_1),\\ \dot{s}_1 &= -\alpha\left(s_1 - A^2(u_1)\right),\\ \dot{\phi}_1 &= k_1 u_1 - k_2 \phi_1,\\ \dot{u}_2 &= A(u_2)A(s_2) + I_{\mathrm{ext}} - u_2 - k_0 M_{\phi_2} u_2 - g_0 M_\psi (u_1 - u_2),\\ \dot{s}_2 &= -\alpha\left(s_2 - A^2(u_2)\right),\\ \dot{\phi}_2 &= k_1 u_2 - k_2 \phi_2,\\ \dot{\psi} &= k_3 (u_1 - u_2) - k_4 \psi, \end{aligned} \tag{9}$$

where g0 is the coupling coefficient, and Mϕ1 = δ + 3βϕ1² and Mϕ2 = δ + 3βϕ2² represent the intra-memory conductances of the first and second neurons, respectively. Additionally, Mψ represents the inter-memory conductance between the two neurons. The rates governing the evolution of the interconnected magnetic flux, influenced by the membrane potential and leakage, are denoted as k3 and k4. All other parameters have the same meanings as in model (4). The investigation involves the computation of two-parameter maximal Lyapunov exponents and the generation of bifurcation diagrams, which serve to describe the overall dynamics of the coupled neurons. Furthermore, the numerical simulations have been conducted with extended precision for parameters and variables. It is observed that the state variables of the two neurons synchronize for certain choices of the coupling strength.
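For reference, a sketch of the coupled system (9) as an in-place right-hand side follows. The inter-memory conductance is assumed here to take the same flux-controlled form, Mψ = δ + 3βψ², as the intra-neuron terms, which the text does not state explicitly; the state and parameter orderings are our own.

# In-place RHS of the coupled model (9); state = (u1, s1, ϕ1, u2, s2, ϕ2, ψ)
function coupled!(dx, x, p, t)
    u1, s1, ϕ1, u2, s2, ϕ2, ψ = x
    α, I, f, g, β, δ, k0, k1, k2, g0, k3, k4 = p
    Iext = I*cos(2*π*f*t)
    Mϕ1, Mϕ2 = δ + 3*β*ϕ1^2, δ + 3*β*ϕ2^2
    Mψ = δ + 3*β*ψ^2                          # assumed form of the inter-memory conductance
    dx[1] = A(u1, g)*A(s1, g) + Iext - u1 - k0*Mϕ1*u1 - g0*Mψ*(u2 - u1)
    dx[2] = -α*(s1 - A(u1, g)^2)
    dx[3] = k1*u1 - k2*ϕ1
    dx[4] = A(u2, g)*A(s2, g) + Iext - u2 - k0*Mϕ2*u2 - g0*Mψ*(u1 - u2)
    dx[5] = -α*(s2 - A(u2, g)^2)
    dx[6] = k1*u2 - k2*ϕ2
    dx[7] = k3*(u1 - u2) - k4*ψ
    return nothing
end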

Fig. 7

The schematic diagram for two neurons coupled with memristor

This synchronization can be observed by computing the mean absolute error (MAE). For node i it is given by MAE_i = (1/n) Σ_{k=1}^{n} |u1(t_k) - u_i(t_k)|, where n is the number of observations, u1 is the membrane potential of the first node, and u_i, i = 2, ..., N, are the membrane potentials of the other nodes, with N the number of nodes. For the coupled model with two nodes (9), MAE = (1/n) Σ_{k=1}^{n} |u1(t_k) - u2(t_k)|. To show the synchronization behavior of model (9), the time series and phase portraits are depicted in Fig. 8. The dynamical behavior of the coupled neuron model (9) is depicted with the help of the LLE by varying k0 ∈ (-5, 5) at fixed g0 = -1.08 in Fig. 9a. The synchronization behavior is illustrated with the help of the MAE in Fig. 9b. When the MAE is approximately 0, the system is completely coherent; higher MAE values indicate incoherent behavior. Systematically varying two parameters of the interconnected neurons and presenting the maximum Lyapunov exponent is demonstrated in Fig. 10a(i), (ii), and (iii). These visuals depict the dynamics of the system, showcasing periodic attractors, quasi-periodic patterns, and chaotic behaviors, as indicated by the two-parameter largest Lyapunov exponent diagrams. Correspondingly, the mean absolute error is calculated in order to show the coherent behavior in the parameter plane, which is plotted in Fig. 10b(i), (ii), and (iii).
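Given a solution of the coupled model (for instance, one produced with coupled! from the earlier sketch, with the transient discarded), the MAE amounts to a few lines; the state indices 1 and 4 follow the ordering assumed there.

# u1 and u2 extracted from a solution of coupled! (indices 1 and 4 of the state)
u1 = [x[1] for x in sol.u]
u2 = [x[4] for x in sol.u]
mae = sum(abs.(u1 .- u2)) / length(u1)   # ≈ 0 indicates coherent (synchronized) dynamics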

Fig. 8

The time series and phase portrait for the coupled neuron model (9). a Completely incoherent phase portrait for g0=-0.3. b Partially coherent phase portrait for g0=-0.8. c Coherent phase portrait for g0=-1.5

Fig. 9

a The visualization of one parameter LLE for the coupled neuron model (9) for the parameter k0 in the range (-5,5) and the mean absolute error is visualized in b, correspondingly

Fig. 10

Dynamical maps based on the two-dimensional largest Lyapunov exponent for neuronal firing activities in the a(i) g0 versus I parameter plane with k0=-5, (ii) k0 versus g0 parameter plane with I=0.95, (iii) k0 versus I parametric plane with g0=-5. Under specific parameter values, the two neurons exhibit synchronized dynamics. b The mean absolute error is visualized correspondingly

Neuronal network model

The neural network is an intricately structured system, and studying the collective dynamic behavior of neural networks is essential. We investigate the collective dynamical properties of an adapting synapse-based neuron network with a small-world construction of N nodes, considering the effect of rewiring links with a certain probability p and of n nearest-neighbor connections. The network model is described as follows:

$$\begin{aligned} \dot{u}_i &= A(u_i)A(s_i) + I_{\mathrm{ext}} - u_i - k_0 M_{\phi_i} u_i - g_0 M_{\psi_i} \sum_{j=1}^{N} a_{ij}\,(u_j - u_i),\\ \dot{s}_i &= -\alpha\left(s_i - A^2(u_i)\right),\\ \dot{\phi}_i &= k_1 u_i - k_2 \phi_i,\\ \dot{\psi}_i &= k_3 \sum_{j=1}^{N} b_{ij}\,(u_i - u_j) - k_4 \psi_i, \qquad i, j = 1, 2, \ldots, N. \end{aligned} \tag{10}$$

Here, B1 = [a_ij]_{N×N} signifies the adjacency matrix, while B2 = [b_ij]_{N×N} represents an adjacency matrix in which all elements above the diagonal are negative. The connections among the nodes of the network are established through the membrane potential variable. Equation (4) describes the local model of a single neuron, while the coupled model is presented in Eq. (9). The system parameters are set such that the individual system is in a state of chaos. The next aim is to explore the collective dynamical behaviors by changing the coupling strength g0 for N = 100 nodes connected to n = 10 nearest neighbors on each side, i.e., a base degree of q = 2n = 20 connections per node (10 left and 10 right neighbors, with N > q, q ≥ 2, and q even). With probability p, each link is rewired to a randomly selected node, avoiding double edges and self-loops; a sketch of this construction is given after this paragraph. The schematic diagram, based on the method of Watts and Strogatz (1998), involves constructing a small-world network by introducing randomness through link rewiring in a regular lattice with p = 0 (regular), p = 0.5 (small-world) and p = 1 (random), as shown in Fig. 11. The corresponding heatmaps of the matrices B1 and B2, visualizing the connectivity among the 100 nodes, are given in Fig. 12. The schematic diagram of the network topology of N = 100 neurons with electromagnetic flux, connected to one nearest neighbor (two connections) through the memristor with rewiring probability p = 0, is depicted in Fig. 13. Recurrence analysis and the global order parameter are two quantifiers used to assess synchronization in the network; both are discussed below.
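A plain-Julia sketch of this construction is given below; the rewiring loop is one of several equivalent ways to implement the Watts–Strogatz procedure, and the function name is our own.

# Ring of N nodes, each linked to its n nearest neighbors on either side
# (degree q = 2n), followed by Watts–Strogatz rewiring with probability p_rw.
function small_world(N, n, p_rw)
    B = zeros(Int, N, N)
    for i in 1:N, d in 1:n
        j = mod1(i + d, N)
        B[i, j] = B[j, i] = 1
    end
    for i in 1:N, d in 1:n
        j = mod1(i + d, N)
        if B[i, j] == 1 && rand() < p_rw
            candidates = [k for k in 1:N if k != i && B[i, k] == 0]
            isempty(candidates) && continue
            k = rand(candidates)
            B[i, j] = B[j, i] = 0        # remove the regular link...
            B[i, k] = B[k, i] = 1        # ...and rewire it to a random node
        end
    end
    return B
end

B1 = small_world(100, 10, 0.5)           # N = 100, n = 10 neighbors, p = 0.5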

Fig. 11

Creating small-world networks: the approach of Watts and Strogatz (1998) constructs a small-world network by introducing randomness through link rewiring in a regular lattice. Randomness: a p=0, b p=0.5, and c p=1

Fig. 12

Network connectivity heatmap: Visualizing the connections among 100 nodes. a Denotes the matrix B1 and b denotes the matrix B2 with randomness: (i) p=0, (ii) p=0.5, and (iii) p=1

Fig. 13

Schematic diagram of the network topology of N=100 neurons with magnetic flux, connected to one nearest neighbor (two connections) through the memristor with rewiring probability p=0

Recurrence analysis

The recurrence analysis is determined by the calculation of RPij=||ui-uj|| for i,j=1,,N, where N represents the number of nodes, and ||·|| denotes the Euclidean distance, as described in previous works by Marwan et al. (2007), Eckmann et al. (1995).
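As a one-line illustration, with u a vector holding one membrane-potential sample per node (for example, a snapshot at a fixed time):

# RP_ij = ||u_i - u_j|| for a snapshot of the N membrane potentials
recurrence(u) = [abs(u[i] - u[j]) for i in eachindex(u), j in eachindex(u)]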

Global order parameter

To assess the coherence of collective motion, we employ the order parameter denoted as R (Kuramoto 2012; Ivanchenko et al. 2004). The formulation is expressed as follows:

$$R e^{i\chi} = \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j}. \tag{11}$$

Here, χ represents the average phase, and θj signifies the instantaneous phase of the j-th oscillator, computed using θj = arctan(yj/xj) (Ghosh et al. 2022; Sharma 2019). The parameter R (0 ≤ R ≤ 1) serves as an indicator of phase coherence. In instances of complete synchronization, R approaches unity, while incoherent solutions are characterized by R = 0.
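In code, with θ the vector of instantaneous phases (computed from whichever pair of state variables plays the role of (xj, yj) in the text), the order parameter reduces to:

phases(x, y) = atan.(y, x)                                  # θ_j = arctan(y_j / x_j), quadrant-aware
order_parameter(θ) = abs(sum(exp.(im .* θ))) / length(θ)    # R in [0, 1]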

Dynamics of the network

In this subsection, the effect of memristor coupling on the network is explored. The influence of the diffusive coupling intensity through the memristor is worthy of attention. The parameter values are chosen as follows: α=3, I=1, f=1, g=5, β=0.1, δ=0.1, k0=1, k1=0.01, and k2=0.1. Plotting the collected information shows the pattern generation and synchronization characteristics of the neural network at various coupling strengths. Figure 14 presents the spatio-temporal plots (first row) and snapshots (second row) of each node in the network with rewiring probability p=0 (no rewiring) for five different coupling strengths. Similarly, the cases p=0.5 (half of the connections rewired) and p=1 (fully rewired) are plotted in Figs. 15 and 16, correspondingly. Furthermore, the coherent behavior of the network is examined by means of recurrence quantification analysis. Figures 14, 15 and 16 display the recurrence plots for different values of g0 in the third row. Figure 14 presents the synchronized behavior at a coupling strength of g0=-6 for p=0; the corresponding snapshot is obtained at t=2975, and the recurrence plot is also shown. Similarly, for p=0.5 synchronized behavior is shown at the coupling strength g0=-0.8, and for p=1 synchronized behavior is likewise shown at g0=-0.8.

Fig. 14

The dynamics of a synapse-based neuron network with small-world topology, comprising 100 coupled neurons with rewiring probability p=0. a Spatiotemporal plots, b Snapshots at time t=2975 denote the end-state values of the nodes, and c Recurrence plot for five different values of g0: 0, -0.3002, -0.3026, -3.5, and -6, respectively, where yellow color represents incoherent nodes and green color represents coherent nodes. For g0=-6, all nodes of the network are completely synchronized. The parameter values are given in (8) with I=1 and k0=1. (Color figure online)

Fig. 15

The dynamics of a synapse-based neuron network with small-world topology, comprising 100 coupled neurons with rewiring probability p=0.5. a Spatiotemporal plots, b Snapshots at time t=2975 denote the end-state values of the nodes, and c Recurrence plot for five different values of g0: 0, -0.224, -0.2806, -0.65, and -0.8, respectively, where yellow color represents incoherent nodes and green color represents coherent nodes. For g0=-0.8, all nodes of the network are completely synchronized. The parameter values are given in (8) with I=1 and k0=1. (Color figure online)

Fig. 16

The dynamics of a synapse-based neuron network with small-world topology, comprising 100 coupled neurons with rewiring probability p=1. a Spatiotemporal plots, b Snapshots at time t=2975 denote the end-state values of the nodes, and c Recurrence plot for five different values of g0: 0, -0.2115, -0.25, -0.64, and -0.8, respectively, where yellow color represents incoherent nodes and green color represents coherent nodes. For g0=-0.8, all nodes of the network are completely synchronized. The parameter values are given in (8) with I=1 and k0=1. (Color figure online)

The order parameter R characterizes the synchrony of the adaptive synapse-based neuron network consisting of N=100 coupled neurons under different rewiring probabilities: (i) p=0, (ii) p=0.5, and (iii) p=1. As the coupling strength varies, there is a transition from a non-synchronized state to a partially synchronized state, which evolves into complete synchronization. To describe the effect of the coupling strength on the collective behavior of system (10), the order parameter was computed, and the results are displayed in Fig. 17. It is noticed that the global order parameter R gradually tends to 1 as the coupling strength g0 is varied. Figure 17a shows the values of the order parameter R with respect to g0 for the rewiring probability p=0. Similarly, Fig. 17b and c show the values of the order parameter R with respect to g0 for the rewiring probabilities p=0.5 and p=1, respectively. The inset plots in Fig. 17 represent sample time series at the nodes N=20, 40, 60, 80, 100. The variation of the coupling strength over the range g0 ∈ (-7, 0) reflects the slow convergence to complete synchrony. It is observed that, for the rewiring probability p=0, the coupling parameter g0 must cross -6 to reach complete synchronization. Similarly, for p=0.5 and p=1, the coupling strength g0 must cross -0.8 to reach the synchronized state. Further, to show the influence of other parameters on the synchronization of the network, the order parameter values are calculated and plotted in Fig. 18. In the g0 versus I plane in Fig. 18a, the intensity of the dark blue color corresponds to the value of the order parameter R: a darker shade signifies a higher value of R, indicating a greater degree of synchronization within the system, approaching unity. Similarly, this calculation extends to the g0 versus k0 plane in Fig. 18b. This analysis provides insights into how the rewiring probability influences the synchronization behavior of the adaptive neuron network in both the g0 versus I and g0 versus k0 planes.

Fig. 17

Variation of the order parameter R with respect to the coupling strength g0 is presented under different rewiring probabilities: a p=0, b p=0.5, and c p=1. The inset plot provides a detailed view of sample time series at various nodes (N=20,40,60,80,100)

Fig. 18

The figure shows the order parameter value R of a synapse-based neuron network with small-world topology, consisting of 100 coupled neurons, in the g0 versus I plane. The effect of rewiring probability is illustrated for three cases: (i) p=0, (ii) p=0.5, and (iii) p=1. Panel b presents a similar analysis in the g0 versus k0 plane. The color bar on the right represents the order parameter R, where dark blue indicates R1, signifying synchronization, and lighter colors indicate lower values of R, reflecting less coherence in the network’s dynamical behavior. (Color figure online)

Dynamics with respect to randomness and nearest neighbours

Furthermore, Fig. 19 illustrates the impact of the rewiring probability (p) in relation to the coupling strength (g0) on the order parameter. The visualization reveals that as the degree of randomness in the network increases, synchronization occurs at lower values of coupling strength, and conversely, as randomness decreases, higher coupling strength is required for synchronization.

Fig. 19

The figure depicts the order parameter value R of a synapse-based neuron network for coupling coefficient g0 versus the rewiring probability p. (Color figure online)

In Fig. 20, the influence of the number of connections to neighbors (n) on the order parameter (R) is examined with respect to both g0 and n for varying values of p. This analysis depicts the transition from a non-synchronized to a synchronized state, highlighting that a higher coupling strength is necessary for fewer connections to achieve network synchronization. Ultimately, it emphasizes the substantial impact of both the number of nearest neighbor connections and the degree of randomness on the coupling parameter in governing network synchronization.

Fig. 20

The figure depicts the order parameter value R of a synapse-based neuron network in the g0 versus n (number of neighbors) plane for the rewiring probabilities a p=0, b p=0.5, and c p=1. Here, the number of connections per node is q=2n. (Color figure online)

Discussion and conclusion

In this paper, the dynamics of an adaptive synapse neuron model with electromagnetic flux were numerically studied. The globally coexisting multiple firing activities of the non-autonomous adaptive synapse-based neuron model can simulate the synapse dynamics and firing activity of biological neurons. The periodically switchable equilibrium states give rise to global multistability. Numerous numerical plots effectively revealed the complex dynamics of the globally coexisting multiple firing activities. Further, we investigated two interconnected adaptive synapse neurons diffusively coupled through the magnetic flux. The dynamics of the coupled system, including periodic attractors, quasi-periodicity, and chaotic behaviors, were revealed by the two-parameter Lyapunov exponent maps. The synchronization in the coupled neurons was explored with the help of the MAE. This study also extends to the collective dynamical properties of the adapting synapse-based neuron network, a network of 100 coupled neurons, under the effect of the rewiring probability p, where increasing the rewiring probability drives the transition from a small-world network to a random network. We observed that the network becomes synchronized as the coupling strength is varied. The synchronization properties are analyzed with the help of recurrence analysis and global order parameters.

The examination of synchronization patterns indicates a strong correlation between bifurcation scenarios and the shift in the collective firings of 100 neurons from a state of asynchrony to achieving perfect synchrony. This synchronization crucially depends on the coupling strength and rewiring probability. The inherent model and the comprehensive examination of synchronization attributes in this study hold biological significance, expanding our understanding of complex network synchronization. This contributes to a nuanced comprehension of the interplay between internal and external couplings within a neural network of networks. The results may find use in both healthy and pathological situations, particularly those involving imbalances in different types of synchronization. Moreover, the findings offer a theoretical basis for exploring particular details connected to collective neuronal events linked to brain dysfunctions. We conclude by predicting that the theoretical framework will provide a wealth of new directions for future studies in random networks, revealing deeper facets of these complex types of synchronization.

Our findings suggest that the synchronization of the network is influenced by the coupling strength, the number of nearest-neighbor connections, and the randomness in the network. It is important to note that future research could explore the collective dynamics of an interactive, interconnected adaptive synapse neuron network and may consider the impact of factors such as noise and diverse synapse types, including chemical, gap junction (electrical), and electromagnetic synapses. Another limitation lies in the network size, as the present analysis is restricted to 100 coupled neurons but could certainly be expanded further in future work. Exploring this phenomenon in higher spatial dimensions with complex connectivity patterns, inspired by those observed in brain medical experiments, would provide valuable insights. Future investigations extending the adaptive synapse neuron model with magnetic flux to multilayer networks could yield new and valuable results in neural dynamics, such as phase synchronization and the existence of chimera patterns.

Acknowledgements

This article has been written with the joint partial financial support of Center for Nonlinear Systems, Chennai Institute of Technology, India, vide Funding number CIT/CNS/2024/RP-017.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Declarations

Conflict of interest

The authors declare that they have no Conflict of interest.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Berger DL, De Arcangelis L, Herrmann HJ (2017) Spatial features of synaptic adaptation affecting learning performance. Sci Rep 7(1):11016
  2. Bao B, Hou L, Zhu Y, Wu H, Chen M (2020) Bifurcation analysis and circuit implementation for a tabu learning neuron model. AEU-Int J Electron Commun 121:153235
  3. Bao H, Liu W, Hu A (2019) Coexisting multiple firing patterns in two adjacent neurons coupled by memristive electromagnetic induction. Nonlinear Dyn 95:43–56
  4. Bahramian A, Parastesh F, Pham V-T, Kapitaniak T, Jafari S, Perc M (2021) Collective behavior in a two-layer neuronal network with time-varying chemical connections that are controlled by a Petri net. Chaos Interdiscip J Nonlinear Sci 31(3)
  5. Bao H, Yu X, Xu Q, Wu H, Bao B (2023) Three-dimensional memristive Morris–Lecar model with magnetic induction effects and its FPGA implementation. Cogn Neurodyn 17(4):1079–1092
  6. Bao B, Zhu Y, Li C, Bao H, Xu Q (2020) Global multistability and analog circuit implementation of an adapting synapse-based neuron model. Nonlinear Dyn 101(2):1105–1118
  7. Bao B, Zhu Y, Ma J, Bao H, Wu H, Chen M (2021) Memristive neuron model with an adapting synapse and its hardware experiments. Sci China Technol Sci 64(5):1107–1117
  8. Bao H, Zhang J, Wang N, Kuznetsov N, Bao B (2022) Adaptive synapse-based neuron model with heterogeneous multistability and riddled basins. Chaos Interdiscip J Nonlinear Sci 32(12)
  9. Chay TR (1985) Chaos in a three-variable model of an excitable cell. Physica D 16(2):233–242
  10. Chen J, Li C-G (2011) Chaos in a neuron model with adaptive feedback synapse: electronic circuit design. Acta Phys Sin 60(5):050503
  11. Datseris G (2018) DynamicalSystems.jl: a Julia software library for chaos and nonlinear dynamics. J Open Source Softw 3(23):598. 10.21105/joss.00598
  12. Dong DW, Hopfield JJ (1992) Dynamic properties of neural networks with adapting synapses. Netw Comput Neural Syst 3(3):267
  13. Ding S, Wang N, Bao H, Chen B, Wu H, Xu Q (2023) Memristor synapse-coupled piecewise-linear simplified Hopfield neural network: dynamics analysis and circuit implementation. Chaos Soliton Fract 166:112899
  14. Eftekhari L, Amirian MM (2023) Stability analysis of fractional order memristor synapse-coupled Hopfield neural network with ring structure. Cogn Neurodyn 17(4):1045–1059
  15. Eckmann J-P, Kamphorst SO, Ruelle D (1995) Recurrence plots of dynamical systems. World Sci Ser Nonlinear Sci Ser A 16:441–446
  16. FitzHugh R (1961) Impulses and physiological states in theoretical models of nerve membrane. Biophys J 1(6):445–466
  17. Ghosh D, Frasca M, Rizzo A, Majhi S, Rakshit S, Alfaro-Bittner K, Boccaletti S (2022) The synchronized dynamics of time-varying networks. Phys Rep 949:1–63
  18. Gao X, Mou J, Banerjee S, Zhang Y (2023) Color-gray multi-image hybrid compression-encryption scheme based on BP neural network and knight tour. IEEE Trans Cybernet 53(8):5037–5047
  19. Gu H, Pan B, Chen G, Duan L (2014) Biological experimental demonstration of bifurcations from bursting to spiking predicted by theoretical models. Nonlinear Dyn 78(1):391–407
  20. Hodgkin A, Huxley A (1990) A quantitative description of membrane current and its application to conduction and excitation in nerve. Bull Math Biol 52:25–71
  21. Hou B, Hu X, Guo Y, Ma J (2023) Energy flow and stochastic resonance in a memristive neuron. Phys Scr 98(10):105236
  22. Hajian DN, Ramadoss J, Natiq H, Parastesh F, Rajagopal K, Jafari S (2023) Dynamics of Hindmarsh–Rose neurons connected via adaptive memristive synapse. Chin J Phys
  23. Ivanchenko MV, Osipov GV, Shalfeev VD, Kurths J (2004) Phase synchronization in ensembles of bursting oscillators. Phys Rev Lett 93(13):134101
  24. Joseph D, Ramachandran R, Karthikeyan A, Rajagopal K (2023) Synchronization studies of Hindmarsh–Rose neuron networks: unraveling the influence of connection induced memristive synapse. Biosystems 234:105069
  25. Kumarasamy S, Moroz IM, Sampathkumar SK, Karthikeyan A, Rajagopal K (2023) Dynamics and network behavior of a four-dimensional discrete neuron model with magnetic flux coupling. Eur Phys J Plus 138(8):683
  26. Kuramoto Y (2012) Chaos and statistical methods. In: Proceedings of the Sixth Kyoto Summer Institute, Kyoto, Japan, September 12–15, 1983, vol 24. Springer, New York
  27. Kong X, Yu F, Yao W, Cai S, Zhang J, Lin H (2024) Memristor-induced hyperchaos, multiscroll and extreme multistability in fractional-order HNN: image encryption and FPGA implementation. Neural Netw 171:85–103
  28. Li C, Chen G (2005) Coexisting chaotic attractors in a single neuron model with adapting feedback synapse. Chaos Soliton Fract 23(5):1599–1604
  29. Liu Z, Wang C, Jin W, Ma J (2019) Capacitor coupling induces synchronization between neural circuits. Nonlinear Dyn 97:2661–2673
  30. Lv M, Wang C, Ren G, Ma J, Song X (2016) Model of electrical activity in a neuron under magnetic flow effect. Nonlinear Dyn 85:1479–1490
  31. Li T, Yu D, Wu Y, Ding Q, Jia Y (2024) Stochastic resonance in the small-world networks with higher order neural motifs interactions. Eur Phys J Spec Top 1–10
  32. Morris C, Lecar H (1981) Voltage oscillations in the barnacle giant muscle fiber. Biophys J 35(1):193–213
  33. Marwan N, Romano MC, Thiel M, Kurths J (2007) Recurrence plots for the analysis of complex systems. Phys Rep 438(5–6):237–329
  34. Pal K, Ghosh D, Gangopadhyay G (2021) Synchronization and metabolic energy consumption in stochastic Hodgkin–Huxley neurons: patch size and drug blockers. Neurocomputing 422:222–234
  35. Parastesh F, Rajagopal K, Jafari S, Perc M, Schöll E (2022) Blinking coupling enhances network synchronization. Phys Rev E 105(5):054304
  36. Ramasamy M, Kumarasamy S, Sampathkumar SK, Karthikeyan A, Rajagopal K et al (2023) Synchronizability of discrete nonlinear systems: a master stability function approach. Complexity 2023
  37. Riecke H, Roxin A, Madruga S, Solla SA (2007) Multiple attractors, long chaotic transients, and failure in small-world networks of excitable neurons. Chaos Interdiscip J Nonlinear Sci 17(2)
  38. Sharma A (2019) Explosive synchronization through dynamical environment. Phys Lett A 383(17):2051–2055
  39. Sun J, Li C, Wang Z, Wang Y (2023) A memristive fully connect neural network and application of medical image encryption based on central diffusion algorithm. IEEE Trans Ind Inform
  40. Sugase-Miyamoto Y, Liu Z, Wiener MC, Optican LM, Richmond BJ (2008) Short-term memory trace in rapidly adapting synapses of inferior temporal cortex. PLoS Comput Biol 4(5):1000073
  41. Strogatz SH (2018) Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. CRC Press
  42. Usha K, Subha P (2019) Hindmarsh–Rose neuron model with memristors. Biosystems 178:1–9
  43. Wang Z, Chen Z, Wang Y, Sun J (2023) Application of chaotic systems reduced-order observer synchronization based on DNA strand displacement in information encryption of the IoT. IEEE Internet Things J
  44. Wu F, Guo Y, Ma J (2023) Energy flow accounts for the adaptive property of functional synapses. Sci China Technol Sci 66(11):3139–3152
  45. Wang G, Jin W, Wang A (2015) Synchronous firing patterns and transitions in small-world neuronal network. Nonlinear Dyn 81(3):1453–1458
  46. Wang Q, Perc M, Duan Z, Chen G (2010) Impact of delays and rewiring on the dynamics of small-world neuronal networks with two types of coupling. Physica A 389(16):3299–3306
  47. Watts DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature 393(6684):440–442
  48. Wang G, Yu D, Ding Q, Li T, Jia Y (2021) Effects of electric field on multiple vibrational resonances in Hindmarsh–Rose neuronal systems. Chaos Soliton Fract 150:111210
  49. Xu Q, Ding S, Bao H, Chen M, Bao B (2021) Piecewise-linear simplification for adaptive synaptic neuron model. IEEE Trans Circuits Syst II Express Briefs 69(3):1832–1836
  50. Xu Q, Liu T, Ding S, Bao H, Li Z, Chen B (2023) Extreme multistability and phase synchronization in a heterogeneous bi-neuron Rulkov network with memristive electromagnetic induction. Cogn Neurodyn 17(3):755–766
  51. Xu Q, Tan X, Zhu D, Bao H, Hu Y, Bao B (2020) Bifurcations to bursting and spiking in the Chay neuron and their validation in a digital circuit. Chaos Soliton Fract 141:110353
  52. Zhu J, Chen Z, Liu X (2016) Effects of distance-dependent delay on small-world neuronal networks. Phys Rev E 93(4):042417
