Scientific Reports. 2016 Mar 4; 6: 22617. doi: 10.1038/srep22617

Chaotic, informational and synchronous behaviour of multiplex networks

M. S. Baptista, R. M. Szmoski, R. F. Pereira, S. E. de Souza Pinto
PMCID: PMC4778120  PMID: 26939580

Abstract

The understanding of the relationship between topology and behaviour in interconnected networks would allow one to characterise and predict behaviour in many real complex networks, since both are usually not simultaneously known. Most previous studies have focused on the relationship between topology and synchronisation. In this work, we provide analytical formulas that show how topology drives complex behaviour: chaos, information, and weak or strong synchronisation; in multiplex networks with constant Jacobian. We also study this relationship numerically in multiplex networks of Hindmarsh-Rose neurons. Whereas behaviour in the analytically tractable network is a direct but not trivial consequence of the spectra of eigenvalues of the Laplacian matrix, where behaviour may strongly depend on the break of symmetry in the topology of interconnections, in Hindmarsh-Rose neural networks the nonlinear nature of the chemical synapses breaks the elegant mathematical connection between the spectra of eigenvalues of the Laplacian matrix and the behaviour of the network, creating networks whose behaviour strongly depends on the nature (chemical or electrical) of the inter synapses.


Complex networks1,2,3 serve as a model for a broad range of phenomena. The brain4,5, social interactions6, and linguistics7 are all examples of systems represented by complex networks. In general, networks are useful models for studying systems that have a spatial extension: for instance, insect populations whose mutual interaction produces the extinction of one of them8, the interaction between proteins9, and the interaction between gears10. These networks can be represented by a multiplex network of coupled complex subnetworks11,12,13,14,15,16,17,18.

In the case of the brain5, interconnections between complex subnetworks are typically made by chemical synapses, while intraconnections can be formed by both chemical and electric synapses19. For brain research19,20 and brain-based cryptography21, the interest is in understanding the inter and intracouplings for which the units in the complex networks are sufficiently independent (unsynchronous) to achieve independent computations. However, the networks must be sufficiently connected (synchronous) so that information is exchanged between subnetworks and integrated into coherent patterns22.

The academic community has dedicated much attention to elucidating the interplay between topology and behaviour in multiplex networks, in particular to the action of the inter and intracoupling strengths on the synchronisability of optimally evolved multiplex network graphs23, and on the synchronisation of multiplex networks of dynamical oscillators11,24,25,26,27 or neurons19,28,29,30. Authors have shown an intricate interplay between different aspects of the network topology and weak or strong (not full) synchronisation, which was shown to depend on the ratio between interlinks and all the links in networks of phase oscillators27, on the number of interlinks in networks of Rössler oscillators24 and neural networks30, and on the ratio between inter and intra links in networks of heterogeneous maps26. Synchronisation was also shown to depend exclusively or complementarily on the electric or chemical couplings in two coupled neurons31 and in neural networks20,28,30,32. In particular, in the work of ref. 28, it was shown semi-analytically that the stability of the complete synchronous manifold depends on the Laplacian matrix of the electric synapses, the degree of chemical synapses, and the type of chemical synapses (inhibitory or excitatory). The relationship between topology and the diffusive behaviour in multiplex networks composed of two coupled complex networks of ODEs with constant Jacobian was made clear in ref. 11. Analytical results for the stability analysis of the full synchronisation manifold of two equal networks coupled by constant coupling strengths were also presented in refs 33,34.

In this work, we elucidate the interplay between the topological aspects previously described as relevant to the study of synchronisation (i.e., the eigenvalues of the Laplacian, the ratio α between the inter degree and the number of nodes of the subnetworks, and the inter and intra coupling strengths) and complex behaviour in multiplex networks of two undirected coupled equal complex networks. We show analytically how topology drives and is related not only to weak or strong forms of synchronisation, but also to other complex forms of behaviour: chaos and information transmission, thus providing an innovative set of mathematical tools to study how complex behaviour emerges in multiplex networks. This achievement was possible because we were able to calculate analytically, for the first time, one of the most challenging quantities in nonlinear systems, the complete spectrum of Lyapunov exponents, for a class of multiplex networks with constant Jacobian. This intricate relationship was also studied numerically in multiplex neural networks.

Our results show that the ratio α is in fact the determinant factor for the complex behaviour of the network, which also explains why the ratio between inter and intra links, or the number of interlinks, has previously been seen to drive synchronisation24,26,27,30. We also show that synchronisation and information, whose quantifiers depend on the spectral gap of the Laplacian, will depend exclusively or complementarily on the inter and intra coupling strengths, as observed in refs 30,31 and demonstrated in ref. 28. For networks with constant Jacobian, synchronisation and information will depend exclusively on either the intra or the intercoupling strengths if the two networks have symmetric interconnections, and will depend complementarily on both intra and interconnections if the two networks have asymmetric interconnections. For the multiplex neural networks, we find that intra and inter couplings complementarily cooperate to produce complex behaviour if the two neural complex networks are coupled by inter chemical and excitatory synapses. If intercouplings are of an inhibitory nature, behaviour will mainly depend on the intracoupling. Therefore, it is the excitatory chemical synapses that promote integration between intra (local) and inter (global) synapses in neural networks. On the other hand, in the networks with constant Jacobian, integration between inter and intra couplings comes about through the break of symmetry caused by the asymmetric configuration. Moreover, for this configuration, a bottleneck effect appears for an appropriately rescaled intercoupling strength. In this case, an increase in the synchronisation level of the network leads to an increase in the capacity of the network to exchange information.

Methods

The two complex networks connect with each other in one of two ways: by a symmetric or by an asymmetric interlink configuration. For the symmetric case, each node in a subnetwork can have at most one connection with a corresponding node in the other, equal, subnetwork (see Fig. 1). In the general asymmetric configuration, nodes in one network can randomly connect to other nodes in the other network. The considered network configurations are models of extended space-time chaotic systems35,36,37,38 or of chemical chaos39,40. They are also models for two types of structures found in real neural networks41: the one with a stronger community structure (small first eigenvalue of the Laplacian matrix, or strong intracouplings), and the one with a high level of bipartiteness, i.e., two similar complex networks strongly connected by intercouplings (larger last eigenvalue of the Laplacian matrix, or strong intercoupling). From the spectral analysis performed in ref. 20 on the C. elegans and human brains, one can conclude that these systems have communities with similar structure. Therefore, the simplest mathematical model for a multiplex network of many similar intercoupled communities (such as the brain) is to consider networks formed by coupling two topologically equal networks.

Figure 1. Examples of symmetric network topologies with N = 10 and l = 10 considered in this work.


Subnetworks have a ring topology in (A), a star topology in (B), and an all-to-all topology in (C). Black lines represent intra links, and grey (red online) lines represent inter links.

We consider two types of dynamics for the nodes of the network: the shift map (see Sec. “Extension to continuous networks” for networks with continuous-time descriptions), forming a discrete network of diffusively connected nodes, and the Hindmarsh-Rose (HR) neuron42, connected with inter chemical and intra electrical synapses.

Let X represent the state variables of a network with N = 2N1 nodes formed by two equal coupled complex networks, each composed of N1 nodes, that are coupled by l “long-range” inter-connections. The dynamical description of the nodes is given by either the discrete-time shift map $F(x) = 2x \,(\mathrm{mod}\ 1)$ or the continuous-time function f(xi), representing the Hindmarsh-Rose neuron model.

The discrete network of shift maps is described by

$x_{n+1}^{(i)} = 2x_n^{(i)} - \varepsilon\sum_{j=1}^{N}\mathcal{G}_{ij}\,x_n^{(j)} - \gamma\sum_{j=1}^{N}\mathcal{L}_{ij}\,x_n^{(j)} \quad (\mathrm{mod}\ 1), \qquad (1)$

where $\alpha = l/N_1$ represents an effective inter degree of the network. The network can be written in matricial form as $\mathbf{x}_{n+1} = 2\mathbf{x}_n - (\varepsilon\mathcal{G} + \gamma\mathcal{L})\,\mathbf{x}_n \,(\mathrm{mod}\ 1)$, where $\mathcal{G} = \begin{pmatrix} A & 0 \\ 0 & A \end{pmatrix}$ and $\mathcal{L} = \begin{pmatrix} D_1 & -B \\ -B^{T} & D_2 \end{pmatrix}$ are Laplacian matrices and T stands for the transpose. $\mathcal{G}$ represents the Laplacian of the two uncoupled complex networks and their intra links (the Laplacian matrix A), and $\mathcal{L}$ represents the inter-couplings Laplacian matrix between the complex networks. D1 and D2 are the diagonal degree matrices of the adjacency matrices B and BT, respectively, representing the inter couplings. Their components are defined as $[D_1]_{ii} = \sum_j B_{ij}$ and $[D_2]_{ii} = \sum_j [B^{T}]_{ij}$, with null off-diagonal terms. The network can be written in an even more compact form as

$\mathbf{x}_{n+1} = 2\mathbf{x}_n - M\,\mathbf{x}_n \quad (\mathrm{mod}\ 1), \qquad (2)$

where $M = \varepsilon\mathcal{G} + \gamma\mathcal{L}$.
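To make this construction concrete, the following minimal numerical sketch (not part of the original article) builds the matrices $\mathcal{G}$, $\mathcal{L}$ and $M$ for two ring subnetworks with one-to-one interlinks and iterates the compact form of Eq. (2); the topology, coupling values and random initial condition are illustrative assumptions.

```python
import numpy as np

def ring_laplacian(n):
    """Laplacian A of an undirected ring of n nodes (each node has intra degree 2)."""
    A = 2.0 * np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] -= 1.0
        A[i, (i + 1) % n] -= 1.0
    return A

def supra_laplacian(A, B, eps, gamma):
    """M = eps*G + gamma*L for two equal subnetworks with intra Laplacian A and
    inter adjacency B (rows: nodes of subnetwork 1, columns: nodes of subnetwork 2)."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    G = np.block([[A, Z], [Z, A]])
    D1 = np.diag(B.sum(axis=1))        # degrees of B
    D2 = np.diag(B.T.sum(axis=1))      # degrees of B^T
    L = np.block([[D1, -B], [-B.T, D2]])
    return eps * G + gamma * L

def iterate_shift_network(M, x0, steps):
    """Iterate the compact form x_{n+1} = (2*I - M) x_n (mod 1) of Eq. (2)."""
    J = 2.0 * np.eye(M.shape[0]) - M   # constant Jacobian of the network
    x = np.array(x0, dtype=float)
    traj = np.empty((steps + 1, x.size))
    traj[0] = x
    for n in range(steps):
        x = (J @ x) % 1.0
        traj[n + 1] = x
    return traj

N1 = 5
A = ring_laplacian(N1)
B = np.eye(N1)                          # symmetric one-to-one interlinks (l = N1, alpha = 1)
M = supra_laplacian(A, B, eps=0.1, gamma=0.2)
traj = iterate_shift_network(M, np.random.default_rng(0).random(2 * N1), steps=1000)
print(traj.shape)                       # (1001, 10)
```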

The network of HR neurons, coupled through the first coordinate, is described by

$\dot{x}_1^{(i)} = f_1\!\left(\mathbf{x}^{(i)}\right) - \varepsilon\sum_{j=1}^{N} G_{ij}\,x_1^{(j)} + \gamma\left(V_{syn} - x_1^{(i)}\right)\sum_{j=1}^{N} C_{ij}\,S\!\left(x_1^{(j)}\right), \qquad (3)$

where f1 represents the first component of the HR vector flow dynamics, x(i) is a vector with components $(x_1^{(i)}, x_2^{(i)}, x_3^{(i)})$ representing the variables of neuron i, G is the Laplacian for the intra electrical couplings, and C (with components Cij) is an adjacency matrix representing the inter chemical couplings. The chemical synapse function S is modelled by the sigmoidal function $S(x) = \left[1 + e^{-\lambda\left(x - \Theta_{syn}\right)}\right]^{-1}$, with Θsyn = −0.25, λ = 10, and Vsyn = 2.0 for excitatory or Vsyn = −2.0 for inhibitory synapses.
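For readers who want to simulate such a network, a minimal sketch of the right-hand side is given below (not from the original article). The HR parameter values and the sign convention of the chemical coupling term are standard choices assumed here, since the text above only specifies Θsyn, λ and Vsyn.

```python
import numpy as np

# Standard HR parameter values (an assumption; the text fixes only the synapse
# parameters Theta_syn, lambda and V_syn).
I_EXT, R_PAR, S_PAR, X0_PAR = 3.25, 0.005, 4.0, -1.6
THETA_SYN, LAM = -0.25, 10.0
V_SYN = 2.0                      # 2.0 for excitatory, -2.0 for inhibitory synapses

def S(x):
    """Sigmoidal chemical-synapse function S(x) = 1/(1 + exp(-lambda*(x - Theta_syn)))."""
    return 1.0 / (1.0 + np.exp(-LAM * (x - THETA_SYN)))

def hr_network_rhs(t, X, G, C, eps, gamma):
    """Right-hand side of a network of HR neurons coupled in the first variable:
    intra electrical coupling via the Laplacian G, inter chemical coupling via the
    adjacency matrix C, following the description of Eq. (3) above (a sketch)."""
    N = G.shape[0]
    x, y, z = X[:N], X[N:2 * N], X[2 * N:]
    dx = (y + 3.0 * x**2 - x**3 - z + I_EXT
          - eps * (G @ x)
          + gamma * (V_SYN - x) * (C @ S(x)))
    dy = 1.0 - 5.0 * x**2 - y
    dz = R_PAR * (S_PAR * (x - X0_PAR) - z)
    return np.concatenate([dx, dy, dz])

# Example of use with a generic ODE integrator, e.g.:
# from scipy.integrate import solve_ivp
# sol = solve_ivp(hr_network_rhs, (0, 2000), X0, args=(G, C, 0.5, 0.1), rtol=1e-8)
```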

In the brain, short-range connections among neurons occur through electric synapses, due to the potential difference between two neighbouring neuron cell bodies. In this work, the intra electrical synapses mimic this local interaction. Long-range connections are made by chemical synapses, the inter connections in this work. However, to compare results between the HR networks and the discrete networks, the two subnetworks of HR neurons will have equal topologies, a configuration unlikely to be found in the brain, but one that allows analytical insight into small brain circuits.

As a measure of chaos, we consider the sum of the positive Lyapunov exponents of the network, denoted by HKS. As a measure of the ability of the network to exchange information, we consider an upper bound for the Mutual Information Rate (MIR) between any two nodes in the network:

$I_C = \lambda_1 - \lambda_2, \qquad (4)$

in which λ1 and λ2 represent the two largest positive Lyapunov exponents of the network. We assume that these two largest Lyapunov exponents are approximations for the two largest expansion rates (or finite-time finite-resolution Lyapunov exponents) calculated in a bi-dimensional space43 composed of any two nodes of the network. Equation (4) is constructed under the hypothesis that, given two time-series, x1(t) and x2(t), an observer is not able to make an infinite-resolution measurement of a trajectory point, but can only specify the location of a x1 × x2 point within a cell belonging to an order-T Markov partition, and thus the correlation of points decays to approximately zero after T iterations. For dynamical networks such as the ones we are working with, measurements can be done with higher resolution, and it is typical to expect that the expansion rates on any 2D subspace formed by the state variables of two nodes are very good approximations of the two largest Lyapunov exponents of the network. Such a choice implies that IC in Eq. (4) is an invariant of the network and represents the maximal rate of mutual information that can be realised when measurements are made in any two nodes of the network and no time-delay reconstruction is performed. Details about the equivalence between Lyapunov exponents and expansion rates can be seen in ref. 43, and an explicit numerical comparison can be seen in ref. 44. An extension of Eq. (4) to measure upper bounds of the MIR in larger subspaces of a network (composed of groups of nodes or multivariable subspaces) can also be seen in ref. 43.
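As an illustration of how these two quantifiers are obtained from a computed Lyapunov spectrum, a minimal sketch follows (not part of the original article; the convention IC = λ1 when λ2 ≤ 0 is taken from the shift-map results below).

```python
import numpy as np

def hks_and_ic(lyapunov_spectrum):
    """H_KS: sum of the positive Lyapunov exponents.  I_C (Eq. (4)): difference
    between the two largest positive exponents; when the second exponent is not
    positive, I_C = lambda_1 (the convention used for the shift-map networks below)."""
    lam = np.sort(np.asarray(lyapunov_spectrum, dtype=float))[::-1]
    hks = lam[lam > 0].sum()
    ic = lam[0] - lam[1] if len(lam) > 1 and lam[1] > 0 else lam[0]
    return hks, ic

print(hks_and_ic([np.log(2.0), 0.3, -0.1, -0.5]))   # (~0.993, ~0.393)
```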

Synchronisation is detected by various approaches. Linear stability of the synchronous manifold for complete synchronisation in the discrete network will be calculated analytically. For both types of networks, the level of weak synchronisation will be estimated by the value of HKS, since the higher HKS is (and the larger it is with respect to IC), the less synchronous the nodes in the network are. Notice also that if HKS = IC, the network is in generalised synchronisation and possesses only one positive Lyapunov exponent. For the network of Hindmarsh-Rose neurons, we measure synchronisation by calculating the order parameter r and the local order parameter δr as introduced in ref. 45, the order parameter being calculated from the phase differences between all pairs of nodes in the continuous network, as an estimate of the synchrony level of the network. Inter and intra coupling strengths promote global phase synchronisation and cluster phase synchronisation if r and δr are large, respectively. Comparing the results of the following Hindmarsh-Rose networks section with the parameter spaces of Figs 7, 8 and 9 in the Supplementary Material, one concludes that, for the inhibitory networks, the smaller the sum of the LEs, the larger the order parameter r, and the larger IC, the smaller δr. Thus, enhancement of global phase synchronisation (quantified by r) decreases the level of chaos in the network, and local phase synchronisation (quantified by δr) enhances the exchange of information between nodes (IC). The phase ϕi of a node i is calculated using the equation for its derivative $\dot{\phi}_i$ derived in refs 46,47.
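The exact definitions of r and δr follow ref. 45 and are not reproduced in the text above; purely as an illustrative stand-in, the sketch below computes a standard Kuramoto-type global order parameter from instantaneous phases (the time averaging and the Kuramoto definition are assumptions, not the authors' exact quantifier).

```python
import numpy as np

def global_order_parameter(phases):
    """Kuramoto-type global order parameter r = |<exp(i*phi_j)>_j|, averaged over
    time; phases has shape (time_steps, N). Illustrative stand-in only, not the
    exact order parameter of ref. 45."""
    r_t = np.abs(np.exp(1j * phases).mean(axis=1))
    return r_t.mean()

rng = np.random.default_rng(1)
incoherent = rng.uniform(0, 2 * np.pi, size=(1000, 10))
coherent = np.tile(rng.uniform(0, 2 * np.pi, size=(1000, 1)), (1, 10))
print(global_order_parameter(incoherent), global_order_parameter(coherent))  # ~0 vs 1.0
```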

Results

Shift map networks

To calculate the Lyapunov exponents of the discrete network (see Sec. “Extension to continuous networks” for an extension to continuous networks), we recall that, since the network has a constant Jacobian, $2\mathbb{1} - M$, the Lyapunov spectrum of the synchronisation manifold, described by $x^{(1)} = x^{(2)} = \cdots = x^{(N)} = x_s$, is equal to the spectrum of Lyapunov exponents of the network (where typically $x^{(i)} \neq x^{(j)}$). In addition, the Lyapunov exponents of the synchronisation manifold are simply the Lyapunov exponents of the Master Stability Function (MSF)48, the equations that describe the variational equations of Eq. (1) linearly expanded around the synchronisation manifold (assuming $x^{(i)} = x_s$ for all i) and diagonalised, producing N equations, one for each eigenmode m:

$\xi_{n+1}^{(m)} = (2 - \mu_m)\,\xi_n^{(m)}, \qquad (5)$

where μm represents the eigenvalues of M ordered by magnitude, i.e., $\mu_{m+1} \geq \mu_m$, with $\mu_0 = 0$. The ordered Lyapunov exponents are given by the logarithm of the absolute value of the derivative of the MSF in (5), which leads to

$\lambda_{m+1} = \log|2 - \mu_m|. \qquad (6)$
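A minimal numerical sketch of Eq. (6) follows (not from the original article): build M for a small symmetric configuration and obtain the complete Lyapunov spectrum directly from its eigenvalues; the all-to-all topology and coupling values are illustrative assumptions.

```python
import numpy as np

def lyapunov_spectrum_from_M(M):
    """Eq. (6): lambda_{m+1} = log|2 - mu_m|, with mu_m the eigenvalues of M
    (M is symmetric for the undirected couplings considered here)."""
    mu = np.sort(np.linalg.eigvalsh(M))
    return np.log(np.abs(2.0 - mu))

# Two all-to-all subnetworks of N1 = 4 nodes with one-to-one interlinks (alpha = 1).
N1, eps, gamma = 4, 0.05, 0.1
A = N1 * np.eye(N1) - np.ones((N1, N1))        # all-to-all Laplacian (eigenvalues 0 and N1)
G = np.kron(np.eye(2), A)
L = np.kron(np.array([[1.0, -1.0], [-1.0, 1.0]]), np.eye(N1))
lam = lyapunov_spectrum_from_M(eps * G + gamma * L)
print(np.sort(lam)[::-1])                      # largest exponent is log(2), from mu_0 = 0
```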

In this work, we consider two network configurations. Firstly, the symmetric configuration, in which the two networks are connected by l undirected interlinks and each node in a network connects to at most one corresponding node in the other subnetwork. Secondly, the asymmetric configuration, in which the two networks are connected by only one undirected random interlink. In both cases, D1 = D2.

For the symmetric configuration12 (see also ref. 28), we have that

$\mu_{2i} = \varepsilon\omega_i, \qquad \mu_{2i+1} = \varepsilon\omega_i + 2\gamma\alpha, \qquad (7)$

where the μ are the unordered eigenvalues of M (i = 0, 1, 2, …, N1 − 1) and ωi represents the ordered set of eigenvalues of the matrix A (such that ωi+1 ≥ ωi, and ω0 = 0), whose unordered spectrum is given by $\omega_i = 2 - 2\cos(2\pi i/N_1)$ for a closed ring topology; by ω0 = 0, ωi = 1 (for i = 1, …, N1 − 2) and $\omega_{N_1-1} = N_1$ for a star topology; and by ω0 = 0, ωk = N1 (for k = 1, …, N1 − 1) for an all-to-all topology. The inter degree α represents the effective connection that every node in one subnetwork has with the other. If 2γα < εω1, then μ1 = 2γα, otherwise μ1 = εω1. Complete synchronisation of the shift map network is linearly stable if $|2 - \mu_m| < 1$ for all m ≥ 1; notice, however, that our study considers coupling ranges outside of the complete stability region. The second largest eigenvalue, μ1, and therefore IC (and the stability of the synchronous manifold), will only depend on the inter connections if

$2\gamma\alpha < \varepsilon\omega_1, \qquad (8)$

and these quantities will only depend on the intra connections if this inequality is not satisfied.

It is fundamental to mention that the eigenvalues obtained in Eq. (7) using the expansion in ref. 12 provide values that are exact for the topologies considered in this work (demonstration to appear elsewhere). Consequently, the Lyapunov exponents calculated by Eq. (6) are also exact.
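This exactness can be checked numerically for the fully interlinked symmetric case; the sketch below (not from the original article) compares the eigenvalues of M with the prediction of Eq. (7) for ring subnetworks, with all numerical values chosen arbitrarily.

```python
import numpy as np

# Compare the eigenvalues of M with the prediction of Eq. (7) for a symmetric
# configuration of two rings with one-to-one interlinks (alpha = l/N1 = 1).
N1, eps, gamma = 6, 0.3, 0.7
i = np.arange(N1)
A = 2.0 * np.eye(N1)
A[i, (i - 1) % N1] -= 1.0
A[i, (i + 1) % N1] -= 1.0                         # ring Laplacian
omega = 2.0 - 2.0 * np.cos(2.0 * np.pi * i / N1)  # ring Laplacian eigenvalues
alpha = 1.0
M = eps * np.kron(np.eye(2), A) + gamma * np.kron([[1.0, -1.0], [-1.0, 1.0]], np.eye(N1))
predicted = np.sort(np.concatenate([eps * omega, eps * omega + 2.0 * gamma * alpha]))
numerical = np.sort(np.linalg.eigvalsh(M))
print(np.allclose(predicted, numerical))          # True for this configuration
```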

For the symmetric configuration, if inequality (8) is satisfied, $\lambda_2 = \log|2 - 2\gamma\alpha|$, or $\lambda_2 = \log|2 - \varepsilon\omega_1|$ otherwise. Since $\lambda_1 = \log(2)$, the upper bound for the MIR exchanged between any two nodes in this network, assuming $\lambda_2 > 0$, is given by

$I_C = \log(2) - \log|2 - 2\gamma\alpha| \qquad (9)$

if inequality (8) is satisfied, and by

$I_C = \log(2) - \log|2 - \varepsilon\omega_1| \qquad (10)$

otherwise.

Therefore, the upper bound for the MIR will either depend on γ or on ε. If λ2 ≤ 0, then IC = λ1 = log(2).
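Putting Eqs (7)–(10) together, a minimal sketch (not from the original article) evaluates IC for the symmetric configuration and reports which coupling it depends on; the ring topology and parameter values are illustrative assumptions.

```python
import numpy as np

def omega_ring(N1):
    """Laplacian eigenvalues of a closed ring: omega_i = 2 - 2*cos(2*pi*i/N1)."""
    return 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(N1) / N1)

def ic_symmetric(eps, gamma, alpha, omega):
    """I_C for the symmetric configuration, combining Eqs (7)-(10):
    mu_1 = min(2*gamma*alpha, eps*omega_1), lambda_2 = log|2 - mu_1|, and
    I_C = log(2) - lambda_2 when lambda_2 > 0 (I_C = log(2) otherwise)."""
    omega1 = np.sort(omega)[1]                        # smallest nonzero eigenvalue of A
    inter_only = 2.0 * gamma * alpha < eps * omega1   # inequality (8)
    mu1 = 2.0 * gamma * alpha if inter_only else eps * omega1
    lam2 = np.log(abs(2.0 - mu1))
    ic = np.log(2.0) - lam2 if lam2 > 0 else np.log(2.0)
    return ic, inter_only

omega = omega_ring(10)
print(ic_symmetric(eps=0.2, gamma=0.1, alpha=1.0, omega=omega))
```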

For the asymmetric configuration12, we have that

[Eq. (11), giving the eigenvalues μ of M for the asymmetric configuration]

for i = 1, 2, …, N1 − 1; the second largest eigenvalue μ1 takes one of two forms, depending on the relative size of the inter and intra coupling terms. Complete synchronisation is linearly stable if $|2 - \mu_m| < 1$, for all m ≥ 1. If

[Inequality (12)]

the second largest eigenvalue and, therefore, IC (and the stability of the synchronous manifold) will only depend on the interconnection. If this inequality is not satisfied, these quantities will depend mutually on both types of connections. Since α always appears in the second largest eigenvalue, the smaller its value, the larger IC will be. Our analytical results are valid for all asymmetric configurations considered in ref. 12; however, in this paper we focus on the “bottleneck” configuration, where there is only one random interlink.

For the asymmetric bottleneck configuration, $\lambda_2 = \log|2 - \mu_1|$, with μ1 taking one of two forms depending on whether inequality (12) is satisfied. Since λ1 = log(2), the upper bound for the MIR exchanged between any two nodes in this network, assuming λ2 > 0, is given by

[Eq. (13)]

if inequality (12) is satisfied, and by

[Eq. (14)]

otherwise.

Therefore, the upper bound for the MIR will either depend on γ, if inequality (12) is satisfied, or on both couplings if this inequality is not satisfied. If λ2 ≤ 0, then IC = λ1 = log(2).

Figure 2(A–D) are parameter spaces (ε × γ) showing whether inequality (8) (A–C) or inequality (12) (D) is satisfied (white) or not (black). Figure 2(E–H) shows the value of IC. In Fig. 2(E–G) we show results for the symmetric configuration. IC will only depend on the intercoupling γ if inequality (8) is satisfied, and will only depend on the intracoupling ε if this inequality is not satisfied. In Fig. 2(H), for the bottleneck configuration, IC will only depend on the inter coupling if this inequality is satisfied, but will depend on both inter and intra couplings if this inequality is not satisfied. The sum of positive Lyapunov exponents is given by $H_{KS} = \sum_{m \in \mathcal{P}} \log|2 - \mu_m|$, where the sum runs over the P modes with positive Lyapunov exponents and P represents the number of positive Lyapunov exponents of the network. From this equation, it becomes clear that if N1 is increased and the topology considered makes ωi increase proportionally to N1, but the ratio α is maintained (meaning that inter connections grow only proportionally to N1), then the term εωi becomes predominant in HKS, and as a consequence, chaos in the network becomes more dependent on ε than on γ. To illustrate this argument, let us consider the symmetric configuration and assume that ε and γ are sufficiently small such that all Lyapunov exponents are positive. Then, the summation to calculate HKS has N terms and $H_{KS} = \sum_{i=0}^{N_1-1}\left[\log|2 - \varepsilon\omega_i| + \log|2 - \varepsilon\omega_i - 2\gamma\alpha|\right]$. Thus, the term with ε dominates for larger N1. This becomes even more evident if the topology is an all-to-all: $H_{KS} = \log(2) + \log|2 - 2\gamma\alpha| + (N_1 - 1)\left[\log|2 - \varepsilon N_1| + \log|2 - \varepsilon N_1 - 2\gamma\alpha|\right]$. The predominance of the intra coupling in comparison to the inter coupling can be seen in all panels of Fig. 2(I–L), for a network of two coupled ring subnetworks. Similar results for other network configurations can be seen in the Supplementary Material. This analytical illustration gives us a clear view of the behavioural changes as one goes from one network (γ = 0) to two coupled networks (γ > 0): half of the Lyapunov exponents decrease their absolute values. The consequences for HKS and IC depend on the values of ε and the topology being considered, as can be seen in Fig. 2 by following a vertical line for a growing value of γ. Since complete synchronisation is linearly stable if $|2 - \mu_m| < 1$ for all m ≥ 1, the stability of the synchronisation manifold will also depend on the satisfaction of inequality (12).

Figure 2. Results for networks of shift maps, with two coupled rings.


(A–C) White (black) regions indicate values of ε and γ for which inequality (8) is satisfied (not satisfied). (D) Colour code the same as in (A–C), but based on inequality (12). (E–H) Colour code shows the value of IC. (I–L) Sum of positive Lyapunov exponents. N = 10 and l = 5 in (A,E,I), N = 20 and l = 10 in (B,F,J), N = 30 and l = 15 in (C,G,K), N = 10 and l = 1 in (D,H,L). In (L), the maximal value of γ, equal to 2.5, was chosen so that the range of values of the quantity γα is the same in panels (I–L).

Extension to continuous networks

These results can be extended to linear networks of ODEs. As an example, consider a continuous network of 1D coupled linear ODEs described by $\dot{\mathbf{x}} = \alpha\,\mathbf{x} - M\mathbf{x}$. Then, the Lyapunov exponents of this system are equal to the Lyapunov exponents of the synchronisation manifold and its transversal modes, and are therefore equal to λm+1 = α − μm.
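A minimal sketch of this continuous-time version (not from the original article), where alpha denotes the local linear expansion rate of the node dynamics as in the equation above and the toy coupling matrix is an arbitrary assumption:

```python
import numpy as np

def continuous_lyapunov_spectrum(M, alpha):
    """Lyapunov exponents lambda_{m+1} = alpha - mu_m of the linear network
    dx/dt = alpha*x - M x, with mu_m the eigenvalues of the coupling matrix M."""
    return alpha - np.sort(np.linalg.eigvalsh(M))

M = np.array([[1.0, -1.0], [-1.0, 1.0]])             # toy two-node Laplacian coupling
print(continuous_lyapunov_spectrum(M, alpha=0.3))    # [ 0.3 -1.7]
```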

Hindmarsh-Rose networks

The Lyapunov exponents of the HR neural networks are calculated numerically. For symmetric HR neural networks with inhibitory inter connections, HKS is mostly dependent only on the electrical intra coupling, as can be seen from Fig. 3(A,E,I) (for coupled ring complex networks) and from the results shown in the Supplementary Material for other networks. The quantity IC is also mostly dependent on the electrical intra coupling in symmetric configurations with inter inhibitory synapses (see Fig. 3(B,F)), but for the asymmetric and inhibitory configuration (Fig. 3(J)), IC values depend mutually on both inter and intra couplings. Therefore, in most of the cases studied, neural networks formed by complex networks connected with inhibitory connections will have a behaviour (HKS and IC) that mainly depends on the intra electric coupling. If inter connections are excitatory, both HKS and IC are a non-trivial function of the inter and intra couplings, as can be seen in Fig. 3(C,D,G,H,K,L). The inter degree α is also determinant for the similar behaviours observed in symmetric neural networks (both inhibitory and excitatory) of different sizes, as one can check by verifying how similar the parameter spaces of Fig. 3(A,B) are to those in Fig. 3(E,F), or the parameter spaces of Fig. 3(C,D) to those in Fig. 3(G,H).

Figure 3.


N = 10 and l = 5 for (A–D), N = 20 and l = 10 for (E–H), and N = 10 and l = 1 for (I–L). Sum of positive Lyapunov exponents shown in the left column and IC shown in the right column for two coupled rings of Hindmarsh-Rose neurons with inter inhibitory coupling (in (A,B,E,F,I,J)) and inter excitatory coupling (in (C,D,G,H,K,L)).

To understand why different neural networks with equal inter-degree k have similar parameter spaces for HKS and IC, we consider the conjecture of ref. 49 that the Lyapunov exponents and the Lyapunov exponents of the synchronisation manifold (LESM) (defined by x(i) = x(j) = xs) are connected. We then remark that if each neuron in the network has the same inter-degree, k, then $\sum_j C_{ij} = k$ for every node i. This is a necessary condition for obtaining a Master Stability Function (MSF) of the network as derived in ref. 28. The linear stability of this network and the ith LESM will depend on a function Γ whose arguments involve the intra coupling term εωi and the eigenvalues of the Laplacian matrix associated with B. Inhibition or excitation contributes to the stability of the MSF and to the LESM through the chemical coupling term, which involves Vsyn. If the coupling is inhibitory, all the terms in the function Γ will be negative, and they all typically contribute to making the network more stable and to producing smaller values of the LESM. Moreover, the terms involving S and ∂S/∂x can be neglected, since S is nonzero only during a spike and ∂S/∂x is nonzero only at the moment of the beginning of a spike. Therefore, the stability of the synchronisation manifold, as well as the LEs and IC (using ref. 49), will mainly depend on the value of the intra coupling ε (see also Fig. 5 in ref. 28). If, however, the coupling is excitatory, we cannot neglect the chemical coupling term. If two networks with different sizes have the same k for each neuron, then the eigenvalues of B for the larger network will be the same as those for the smaller network, but appearing with multiplicity given by the dimension of the matrix. If the two different networks have the same topology, then some of the smallest eigenvalues of A for the larger network might be similar. These smallest eigenvalues contribute to making the term εωi small, but with a magnitude comparable to the magnitude of the chemical coupling term. Thus, if k is made constant, larger networks might present similar parameter spaces for HKS and IC.

Therefore, the nonlinearity of the coupling function makes a major contribution to behaviour and should be taken into consideration when studying other types of neural networks.

The bottleneck effect

In the bottleneck configuration, the inter-degree α decreases to 1/N1. This results in a smaller value of γα when compared to the symmetric configurations. Consequently, given two networks, one symmetric and one asymmetric, both with the same N1 and the same γ, the value of λ2 for the asymmetric bottleneck configuration will be larger than λ2 for the symmetric configuration, which leads to IC for the asymmetric case being smaller than IC for the symmetric case. However, if we rescale the γ used in the asymmetric bottleneck configuration so as to keep the quantity γα constant in all our simulations, the term εω1 appearing in μ1 will compensate λ2 when inequality (12) is satisfied, finally producing an asymmetric network that has a larger value of IC than the corresponding symmetric one. Regarding the neuronal networks, the bottleneck effect is evident when one compares Fig. 3(L) (asymmetric) with Fig. 3(D,F). No bottleneck effect was verified for inhibitory inter synapses. Concluding, a decrease in synchronisation can increase the capacity of the network to exchange information.

Extension to larger multiplex networks

Knowing that our results in Eq. (7) are exact, it is possible to calculate analytically the eigenvalues of arbitrarily large networks. As an example, consider a subnetwork Ω(0) with N1 nodes and whose eigenvalues of the matrix A are denoted by ωi. Assume we construct a symmetric network, denoted by Ω(1), by coupling two of these equal subnetworks Ω(0) with a given α and γ. If μi (i = 1, …, N) represent the ordered eigenvalues of Ω(1), then we can construct a network Ω(2) formed by two networks Ω(1) coupled by inter connections with the same γ and α parameters as Ω(1), and whose eigenvalues of the matrix M are given by μi and μi + 2γα (i = 1, …, N). One sees that if ε, γ, and α are preserved as the network grows (into a hierarchical network), the action of coupling a subnetwork to another subnetwork is to enlarge the spectral radius of the matrix M of the full network, a direct consequence of the inter-coupling strengths.
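The doubling construction can be checked numerically; the sketch below (not from the original article) builds Ω(1) and Ω(2) from a small ring Ω(0) and prints the growing spectral radius, assuming one-to-one interlinks (α = 1) and arbitrary coupling values.

```python
import numpy as np

def double_network(M_sub, gamma):
    """Couple two copies of a (sub)network with supra-Laplacian M_sub through
    one-to-one interlinks of strength gamma (Omega^(k) -> Omega^(k+1) sketch;
    the one-to-one interlink pattern, i.e. alpha = 1, is an assumption)."""
    n = M_sub.shape[0]
    return (np.kron(np.eye(2), M_sub)
            + gamma * np.kron(np.array([[1.0, -1.0], [-1.0, 1.0]]), np.eye(n)))

# Omega^(0): a 4-node ring with intra coupling eps = 0.1; then double twice.
N1, eps, gamma = 4, 0.1, 0.2
i = np.arange(N1)
A = 2.0 * np.eye(N1)
A[i, (i - 1) % N1] -= 1.0
A[i, (i + 1) % N1] -= 1.0
M = eps * A                                      # Omega^(0)
for _ in range(2):                               # Omega^(1), Omega^(2)
    M = double_network(M, gamma)
    print(np.max(np.linalg.eigvalsh(M)))         # spectral radius grows by 2*gamma each doubling
```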

Discussion

A topic of research that has attracted great attention in multiplex networks is the search for a better understanding of how weak or strong (not full) synchronisation is linked to the various aspects of the network topology. Previous works have provided complementary, but not unified, conclusions regarding this relationship. One of the difficulties in clarifying this matter is that the relationship between the spectrum of eigenvalues of the connecting Laplacian matrix and the synchronous behaviour of the network is poorly understood when the network is in a typical natural state and there is no full synchronisation. Our main contribution in this work was to understand this relationship when a multiplex network is out of full synchronisation, but we have also provided conditions for the stability of the full synchronous state. We went a step further and have also understood how relevant aspects of the network topology are related to chaos and information transmission, thus providing an innovative set of mathematical tools to study how and why higher-level complex behaviour emerges in multiplex networks.

Additional Information

How to cite this article: Baptista, M. S. et al. Chaotic, informational and synchronous behaviour of multiplex networks. Sci. Rep. 6, 22617; doi: 10.1038/srep22617 (2016).

Supplementary Material

Supplementary Information
srep22617-s1.pdf (723.3KB, pdf)

Acknowledgments

MSB acknowledges the Engineering and Physical Sciences Research Council grant Ref. EP/I032606/1. This work was also partially supported by CNPq, CAPES, and Fundação Araucária.

Footnotes

The authors declare no competing financial interests.

Author Contributions All authors, M.S.B., R.M.S., R.F.P. and S.E.S.P. have equally contributed to the conceptualisation of this work, numerical work and its analysis, the creation of all the pictures, and to the writing and revision of the manuscript. Figure 1 was made using the software package igraph, by Gabor Csardi and Tamas Nepusz.

References

1. Watts D. J. & Strogatz S. H. Collective dynamics of 'small-world' networks. Nature 393, 440–442 (1998).
2. Barabási A.-L. & Albert R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
3. Rényi A. & Erdős P. On random graphs. Publ. Math. 6, 5 (1959).
4. Bullmore E. & Sporns O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198 (2009).
5. Hagmann P. et al. Mapping the structural core of human cerebral cortex. PLoS Biol. 6, 1479–1493 (2008).
6. Mac Carron P. & Kenna R. Universal properties of mythological networks. Europhys. Lett. 99, 28002-p1–28002-p6 (2012).
7. Levary D., Eckmann J.-P., Moses E. & Tlusty T. Loops and self-reference in the construction of dictionaries. Phys. Rev. X 2, 031018-1–031018-10 (2012).
8. Pereira R. F., Camargo S., Pinto S. d. S., Lopes S. R. & Viana R. L. Periodic-orbit analysis and scaling laws of intermingled basins of attraction in an ecological dynamical system. Phys. Rev. E 78, 056214-1–056214-10 (2008).
9. Stelzl U. et al. A human protein-protein interaction network: a resource for annotating the proteome. Cell 122, 957–968 (2005).
10. de Souza S. L., Caldas I. L., Viana R. L., Batista A. M. & Kapitaniak T. Noise-induced basin hopping in a gearbox model. Chaos Solitons Fractals 26, 1523–1531 (2005).
11. Gomez S. et al. Diffusion dynamics on multiplex networks. Phys. Rev. Lett. 110, 028701-1–028701-5 (2013).
12. Martín-Hernández J., Wang H., Van Mieghem P. & D'Agostino G. Algebraic connectivity of interdependent networks. Physica A 404, 92–105 (2014).
13. Sole-Ribalta A. et al. Spectral properties of the Laplacian of multiplex networks. Phys. Rev. E 88, 032807-1–032807-6 (2013).
14. De Domenico M. et al. Mathematical formulation of multilayer networks. Phys. Rev. X 3, 041022-1–041022-15 (2013).
15. Granell C., Gómez S. & Arenas A. Dynamical interplay between awareness and epidemic spreading in multiplex networks. Phys. Rev. Lett. 111, 128701-1–128701-5 (2013).
16. Boccaletti S. et al. The structure and dynamics of multilayer networks. Phys. Rep. 544, 1–122 (2014).
17. Asllani M., Busiello D. M., Carletti T., Fanelli D. & Planchon G. Turing patterns in multiplex networks. Phys. Rev. E 90, 042814-1–042814-5 (2014).
18. Kouvaris N. E., Hata S. & Díaz-Guilera A. Pattern formation in multiplex networks. arXiv:1412.2923 (2014).
19. Gallos L. K., Makse H. A. & Sigman M. A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. Proc. Natl. Acad. Sci. USA 109, 2825–2830 (2012).
20. Antonopoulos C. G., Srivastava S., Pinto S. E. d. S. & Baptista M. S. Do brain networks evolve by maximizing their information flow capacity? PLoS Comput. Biol. 11, e1004372-1–e1004372-29 (2015).
21. Szmoski R., Ferrari F., Pinto S. d. S., Baptista M. & Viana R. Secure information transfer based on computing reservoir. Phys. Lett. A 377, 760–765 (2013).
22. Meunier D., Lambiotte R. & Bullmore E. T. Modular and hierarchically modular organization of brain networks. Front. Neurosci. 4, 200-1–200-11 (2010).
23. Dwivedi S. K., Sarkar C. & Jalan S. Optimization of synchronizability in multiplex networks. Europhys. Lett. 111, 10005-p1–10005-p5 (2015).
24. Zhao M., Zhou C., Lü J. & Lai C. H. Competition between intra-community and inter-community synchronization and relevance in brain cortical networks. Phys. Rev. E 84, 016109-1–016109-9 (2011).
25. Asheghan M. M. & Míguez J. Robust global synchronization of two complex dynamical networks. Chaos 23, 023108-1–023108-11 (2013).
26. Lu W., Liu B. & Chen T. Cluster synchronization in networks of distinct groups of maps. Eur. Phys. J. B 77, 257–264 (2010).
27. Guan S., Wang X., Lai Y.-C. & Lai C.-H. Transition to global synchronization in clustered networks. Phys. Rev. E 77, 046211-1–046211-5 (2008).
28. Baptista M., Kakmeni F. M. & Grebogi C. Combined effect of chemical and electrical synapses in Hindmarsh-Rose neural networks on synchronization and the rate of information. Phys. Rev. E 82, 036203-1–036203-12 (2010).
29. Fuchs E., Ayali A., Ben-Jacob E. & Boccaletti S. The formation of synchronization cliques during the development of modular neural networks. Phys. Biol. 6, 036018-1–036018-12 (2009).
30. Sun X., Lei J., Perc M., Kurths J. & Chen G. Burst synchronization transitions in a neuronal network of subnetworks. Chaos 21, 016110-1–016110-10 (2011).
31. Pfeuty B., Mato G., Golomb D. & Hansel D. The combined effects of inhibitory and electrical synapses in synchrony. Neural Comput. 17, 633–670 (2005).
32. Hizanidis J., Kouvaris N. E., Zamora-López G., Díaz-Guilera A. & Antonopoulos C. G. Chimera-like states in modular neural networks. arXiv:1510.00286 (2015).
33. Li C., Sun W. & Kurths J. Synchronization between two coupled complex networks. Phys. Rev. E 76, 046204-1–046204-6 (2007).
34. Li C., Xu C., Sun W., Xu J. & Kurths J. Outer synchronization of coupled discrete-time networks. Chaos 19, 013106-1–013106-7 (2009).
35. Ahlers V. & Pikovsky A. Critical properties of the synchronization transition in space-time chaos. Phys. Rev. Lett. 88, 254101-1–254101-4 (2002).
36. Pikovsky A. S. Local Lyapunov exponents for spatiotemporal chaos. Chaos 3, 225–232 (1993).
37. Cencini M., Tessone C. & Torcini A. Chaotic synchronizations of spatially extended systems as nonequilibrium phase transitions. Chaos 18, 037125-1–037125-11 (2008).
38. Tessone C. J., Cencini M. & Torcini A. Synchronization of extended chaotic systems with long-range interactions: an analogy to Lévy-flight spreading of epidemics. Phys. Rev. Lett. 97, 224101-1–224101-4 (2006).
39. Kiss I. Z., Zhai Y. & Hudson J. L. Collective dynamics of a weakly coupled electrochemical reaction on an array. Ind. Eng. Chem. Res. 41, 6363–6374 (2002).
40. Kiss I. Z., Zhai Y. & Hudson J. L. Collective dynamics of chaotic chemical oscillators and the law of large numbers. Phys. Rev. Lett. 88, 238301-1–238301-4 (2002).
41. Titz C. & Karbach J. Working memory and executive functions: effects of training on academic achievement. Psychol. Res. 78, 852–868 (2014).
42. Hindmarsh J. & Rose R. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. Lond. B 221, 87–102 (1984).
43. Baptista M. S. et al. Mutual information rate and bounds for it. PLoS ONE 7, e46745-1–e46745-10 (2012).
44. Baptista M. et al. Upper and lower bounds for the mutual information in dynamical networks. arXiv:1104.3498v3 (2011).
45. Gómez-Gardeñes J., Campillo M., Floría L. M. & Moreno Y. Dynamical organization of cooperation in complex topologies. Phys. Rev. Lett. 98, 108103-1–108103-4 (2007).
46. Pereira T., Baptista M. & Kurths J. General framework for phase synchronization through localized sets. Phys. Rev. E 75, 026216-1–026216-12 (2007).
47. Pereira T., Baptista M. & Kurths J. Phase and average period of chaotic oscillators. Phys. Lett. A 362, 159–165 (2007).
48. Pecora L. M. & Carroll T. L. Master stability functions for synchronized coupled systems. Phys. Rev. Lett. 80, 2109–2112 (1998).
49. Baptista M., Kakmeni F. M., Del Magno G. & Hussein M. How complex a dynamical network can be? Phys. Lett. A 375, 1309–1318 (2011).
