Scientific Reports. 2023 Apr 5;13:5588. doi: 10.1038/s41598-023-32762-7

On quantification and maximization of information transfer in network dynamical systems

Moirangthem Sailash Singh1, Ramkrishna Pasumarthy1, Umesh Vaidya2, Steffen Leonhardt3
PMCID: PMC10076297  PMID: 37019948

Abstract

Information flow among nodes in a complex network describes the overall cause-effect relationships among the nodes and provides a better understanding of the contributions of these nodes individually or collectively towards the underlying network dynamics. Variations in network topologies result in varying information flows among nodes. We integrate theories from information science with control network theory into a framework that enables us to quantify and control the information flows among the nodes in a complex network. The framework explicates the relationships between the network topology and the functional patterns, such as the information transfers in biological networks, information rerouting in sensor nodes, and influence patterns in social networks. We show that by designing or re-configuring the network topology, we can optimize the information transfer function between two chosen nodes. As a proof of concept, we apply our proposed methods in the context of brain networks, where we reconfigure neural circuits to optimize excitation levels among the excitatory neurons.

Subject terms: Information technology, Computational science, Applied mathematics, Neural circuits

Introduction

Cause-effect relationships between various events or processes, in which one event contributes to the evolution of another event or state, occur in different physical1, biological2,3, financial4, or technological systems and networks5,6. Beyond science, causality has been an important topic in contemporary philosophy and its branches, including metaphysics, ontology, and epistemology. In physical systems, Maxwell's thought experiment (Maxwell's demon)7 revealed the relationship between information and entropy: the restrictions imposed by the second law of thermodynamics can be relaxed by using the information (velocities and positions of the particles) available to the demon. These notions of information and entropy provide a thermodynamical description of information flows in dynamical systems8. In a social network9,10, information is encoded in the network topology and is essential for building reputation, trust, and collaboration, or for finding short chains in an extensive social network. In cell biology2,3, receptor function relies on precise dynamical communication and coordinated information transfer between the cell-surface receptors and the outside world, and within gene networks. In neurological networks11, information transfer happens across synapses through the activity of several neural populations; dendrites transmit information to the cell body, and the axon transmits information away from the cell body.

The pattern of connections between proteins or neurons determines how information flows through gene regulatory networks or neural circuits. During evolution, gene essentiality changes, and the number of connections between essential and non-essential genes depends on the ancestral species; increased interactions among genes can transform non-essential genes into essential ones12. Thus, knowledge of the 'wiring' of these networks helps us understand how collective behaviour contributes to the information flows among cells. The connectome describes the complete structural wiring diagram of the neurons in the nervous system. Studies show that changes in the ability to learn and form memories depend on modifications of synaptic strength through potentiation or depression13,14,15. One approach to modifying synaptic strengths is to reconfigure the wiring by changing the physical connections between neurons16,17. Recent evidence has shown that network rewiring is an essential mechanism in learning and neuroplasticity, defined as the ability of the brain to modify the information flows among neurons in response to intrinsic and extrinsic stimuli18,19.

Most of the literature on complex dynamical networks focuses on the controllability and reachability of nodes and their roles in controlling the network dynamics20,21. A few other works focus on identifying effective connectivity from time series data, for instance via Fourier-based or polynomial-based interpolation22,23. These methods rely on interpolation techniques, and their estimation accuracy depends heavily on the chosen basis functions. Various studies of complex networks focus specifically on complex brain networks24,25,26,27. The methods used in these works to investigate functional connectivity between brain regions include structural equation modelling24,25, dynamic causal modelling26, and Granger causality27. Structural equation modelling24,25 is based on estimating the correlation matrix between brain regions and is intractable for large networks. Dynamic causal modelling26 estimates connectivity by perturbing the brain's dynamic system and measuring the response, and it does not incorporate an information-theoretic measure. Granger causality characterizes the direction of information flow, but it does not quantify the strength of causal inference; therefore, in the event of bidirectional causal inference, it is difficult for Granger causality to differentiate the relative strengths. All of these studies of brain networks focus only on finding the effective connectivity in the brain network. Recently, network scientists have integrated information theory with network theory to study the flow of information in complex networks28,29. These studies focus mainly on estimating information transfers in stationary random processes. In this work, by contrast, we consider complex dynamical networks with intrinsic stochastic nodal dynamics, which can provide accurate estimates of the evolution of information transfers. We model the neurological network using a dynamic model of the brain (the Wilson–Cowan model) and infer the coupling strengths by perturbing the system and finding the phase responses (Phase Response Curve). In this regard, our methods for estimating coupling strengths from neurophysiological time series differ from those in22,23, where there is no designed perturbation and the inputs are treated as unknown. We attempt to answer two crucial questions: (i) Is there a way to quantify the information flows among nodes in complex dynamical networks? (ii) What are the effects of changing the network topology on the information transfers among the nodes? Moreover, assuming we have the authority to configure the network topology, can we maximize the information transfer between two predefined nodes? A major distinctive feature of our work, therefore, lies in integrating theories from information theory, graph theory, and optimization to quantify the flow of information between nodes in complex dynamical networks and to find the optimal topology for maximized information flows.

There are various information-theoretic measures for quantifying information flow, such as time-delayed mutual information30, causation entropy31, and Granger causality32. One limitation of these measures is their inability to determine the cause-effect relation, or the direction, of information flow. Schreiber's transfer entropy33 describes the flow of information between two random processes and gives the information transfer a directional sense. However, evidence34 has shown that transfer entropy may give qualitatively incorrect results, for example under imperfect observations of the states, and as a result it may not always successfully quantify the true information transfers in dynamical systems35. Recently, Liang and Kleeman36,37 formulated the evolution of information transfers in dynamical systems. In our work, we adopt the Liang–Kleeman formalism of information transfer to measure the flow of information in a network. This formalism has been used to understand causal inference from time series data in large-scale networks38 and to identify sources of instability in network power systems39.

To understand the effects of topological changes on the information transfer, we analyze the structural set properties of the information transfer function. Our information transfer function is closely related to mutual information40, defined as the amount of information obtained about one random variable by observing a second random variable. Maximizing mutual information under a constraint on the marginal distribution has been proven to be NP-hard41,42. Maximizing information transfer under edge constraints is a variant of such problems, and we propose algorithms with provable suboptimality bounds for solving it. We split the objective function in our maximization problem into two parts: a first term capturing the network topology and a second term capturing the edge weights. Finding the optimal topology can be divided into three subproblems: (a) Design problem: design a near-optimal topology given the number of nodes and edges; (b) Update problem: add a fixed number of edges to a given network; and (c) Rewiring problem: reconfigure a fixed number of edges to maximize information transfer. The weight of each edge is upper bounded by a positive value, and the total edge weight is bounded by a positive real number. A few questions arise naturally, which we answer in this report. What is the approximation guarantee when the Greedy Algorithm is used to solve these problems? Are there algorithms that perform close to the Greedy Algorithm while reducing the computational cost? As a computationally cheaper alternative to the Greedy Algorithm, we propose a new algorithm, the 'Subgraph Completion Algorithm', that performs close to the Greedy Algorithm while reducing the computational cost roughly threefold. We also propose a new centrality measure, 'Information Transfer Edge Centrality', that quantifies the contributions of edges towards information transfers among nodes in the network. Finally, we apply our proposed algorithms and validate the approximation guarantees on various random networks. We also apply our algorithms to maximize the information transfer between two excitatory neurons in a neurological network.

Results

Quantifying the information transfer

To compute the information transfer, we consider a directed network with linear time-invariant stochastic dynamics given by:

$$dx(t) = A\,x(t)\,dt + B_1\,dw(t), \qquad (1)$$

where $x(t) \in \mathbb{R}^n$ are the nodal states of the network, $w(t) \in \mathbb{R}^m$ is a white noise process with zero mean and unit covariance, and $B_1$ denotes the input noise matrix. The choice of model is motivated by the fact that most real-world oscillatory dynamical networks can be reduced to phase-description models that are well approximated by linear stochastic systems29. The model does not incorporate control nodes, and the problem formulation does not require a controllability constraint to maximize the objective function. We assume that the initial states $x(0)$, denoted $x_0$, are drawn from a Gaussian distribution $\rho$ with initial mean $\mu_0$ and covariance $\Sigma_0$. Additionally, we assume that there are no self-loops in the considered networks. The transpose of the state matrix, $A^T \in \mathbb{R}^{n \times n}$, is the weighted adjacency matrix. The directed graph is denoted $G(V, E_A, w_A)$, with vertices $V = \{1, 2, \ldots, n\}$ given by the $n$ states, edge set $E_A \subseteq \{(i,j) \mid i, j \in V\}$, and weight function $w_A : E_A \to \mathbb{R}^+$. The non-zero entries of $B_1$ define how each node is affected by the white noise. For the linear time-invariant stochastic network model in (1) with $n$ random variables and edges $E_A$, the information transfer from node $j$ to node $i$ at time $t$, for $i, j \in \{1, 2, \ldots, n\}$, denoted $T_{j \to i}^t$, is

$$T_{j \to i}^t(E_A) = -\,\mathbb{E}\!\left[\frac{1}{\rho_i}\int_{\mathbb{R}^{n-2}} \frac{\partial (f_i\,\rho)}{\partial x_i}\right] = a_{ij}\,\frac{\sigma_{ij}^t(E_A)}{\sigma_{ii}^t(E_A)}, \qquad (2)$$

where $\rho$ denotes the joint distribution of $(x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n)$ at time $t$, $\rho_i$ denotes the marginal distribution of state $x_i$, $\rho_{j|i}$ is the conditional probability distribution of $x_j$ given $x_i$ at time $t$, the integral is taken over all states except $x_i$ and $x_j$, and $\sigma_{ij}^t$ denotes the $(i,j)$ element of $\Sigma_t$. The derivations are given in Supplementary Notes 1 and 3. In this work, we consider the case where the network $G$ admits cooperative interactions among the nodes (i.e., $A(i,j) \ge 0$), as negative interactions are not physically meaningful in biological networks and other real-world networks. We drop the explicit dependence of $T_{j \to i}$ on $t$, as maximizing $T_{j \to i}$ for one time instant maximizes it for all other time instants (Corollary 1.1, Supplementary Note 5). Figure 1 shows our framework for maximizing the information transfer from node 3 to node 1 in a given network. In Supplementary Note 2, we show the theoretical relationships between the Liang–Kleeman information transfer, Horowitz's information flow, and Schreiber's transfer entropy.
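To make Eq. (2) concrete, the following minimal sketch (our illustration, not the authors' code) propagates the state covariance of model (1) through the Lyapunov differential equation and evaluates $T_{j \to i}^t$. The negative diagonal of the toy matrix A is damping that we add purely to keep the example covariance bounded; the paper's networks have no self-loops.

```python
import numpy as np

def covariance_at(A, B1, Sigma0, t, dt=1e-3):
    """Propagate Sigma(t) of dx = A x dt + B1 dw via the Lyapunov ODE
    dSigma/dt = A Sigma + Sigma A^T + B1 B1^T (forward-Euler steps)."""
    Sigma, Q = Sigma0.copy(), B1 @ B1.T
    for _ in range(int(t / dt)):
        Sigma = Sigma + dt * (A @ Sigma + Sigma @ A.T + Q)
    return Sigma

def info_transfer(A, Sigma, j, i):
    """Eq. (2): T_{j->i} = a_ij * sigma_ij / sigma_ii (zero when A[i, j] = 0)."""
    return A[i, j] * Sigma[i, j] / Sigma[i, i]

# Toy 3-node chain 3 -> 2 -> 1 (0-indexed: 2 -> 1 -> 0); assumed weights 0.4.
A = np.array([[-1.0, 0.4, 0.0],
              [0.0, -1.0, 0.4],
              [0.0, 0.0, -1.0]])
Sigma_t = covariance_at(A, B1=0.1 * np.eye(3), Sigma0=np.eye(3), t=10.0)
print(info_transfer(A, Sigma_t, j=1, i=0))  # transfer from node 2 to node 1
```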

Figure 1.

Rewiring network topologies to maximize information transfer from node 3 to node 1. The top panel shows a network of 6 nodes and 9 edges, with zero initial mean, initial covariance $\Sigma_0 = I_6$ (where $I_n$ denotes the identity matrix of order $n$), and $B_1 = 0.1 I_6$. The matrix heat map shows the information transfers among the nodes at $t = 10$. The bottom panels show the network topologies that maximize the information transfer under the update and rewiring techniques.

Structural analysis of information transfer function

For the directed network $G(V, E_A, w_A)$ associated with the system in (1), we study the structural properties of $T_{j \to i}$. The domain of $T_{j \to i}(E_A)$ is the subset of edges $E_A \subseteq E$, where $E$ is the set of all possible edges on $|V|$ nodes, and the range is a positive real number. It is easy to see from (2) that $T_{j \to i}(E_A)$ is a function of the two set functions $\sigma_{ij}(E_A)$ and $\sigma_{ii}(E_A)$. To maximize $T_{j \to i}$, we would need to maximize $\sigma_{ij}$ and minimize $\sigma_{ii}$ concurrently. However, this approach is not feasible, as both $\sigma_{ij}$ and $\sigma_{ii}$ are monotone non-decreasing functions of the edges (Lemma 1, Supplementary Note 5). Alternatively, we find the set of edges $E_g \subseteq E$ such that if any edge from $E_g$ is added to $E_A$, the marginal increase in $\sigma_{ij}$ is greater than the marginal increase in $\sigma_{ii}$. We can formally define the set $E_g$ as

$$E_g = \left\{x \,\middle|\, \sigma_{ij}(E_A \cup \{x\}) - \sigma_{ij}(E_A) > \sigma_{ii}(E_A \cup \{x\}) - \sigma_{ii}(E_A);\ x \in E\right\}. \qquad (3)$$

Thus, it is easy to see from (3) that $T_{j \to i}$ is a monotone increasing function of the edges in the set $E_g$. To find the elements of $E_g$, we recall the definition of 'communicability'43 from graph theory. The communicability from node $i$ to node $j$ in $G(V, E)$, $i, j \in V$, denoted $c_{ij}$, is defined as the total number of walks of all lengths from node $i$ to node $j$, with walks of length $k$ weighted by a factor $\frac{1}{k!}$. It quantifies the ability to exchange messages between two nodes and is given by

$$c_{ij} = \left[e^{A_{0,1}}\right]_{ij} = A_{0,1}(i,j) + \frac{(A_{0,1})^2}{2!}(i,j) + \cdots \qquad (4)$$

where $A_{0,1}$ is the structural (binary) interconnection matrix of $G$. A walk of length $k$ is a sequence of nodes $i_1, i_2, \ldots, i_k, i_{k+1}$ such that for all $1 \le l \le k$, $(i_l, i_{l+1}) \in E$. The relationship between $\sigma_{ij}$ and $c_{ij}$ is given by (Theorem 1, Supplementary Note 5) $\sigma_{ij} \sim \sum_{k=1}^{n} c_{ki}\,c_{kj}$ and $\sigma_{ii} \sim \sum_{k=1}^{n} c_{ki}^2$. Thus, a comparison between (4) and $\sigma_{ij}$ reveals that $\sigma_{ij}$ increases with every incoming path of any length to node $j$, with higher contributions from shorter paths to node $j$. Similarly, $\sigma_{ii}$ increases quadratically with incoming paths to node $i$, with the highest contributions from the shortest (direct) paths to node $i$. Therefore, if we fix the in-degree of node $i$ to 1, with the only edge into node $i$ coming from node $j$, then any directed path to node $i$ formed by the remaining edges passes through node $j$. As a result, node $j$ has shorter directed paths than node $i$, and by the definition of communicability, $\sigma_{ij}$ and $\sigma_{ii}$ satisfy the inequality condition in (3). Consequently, if we assume there are no incoming edges to node $i$ except from node $j$, then $T_{j \to i}$ is a monotone non-decreasing function of the edges (Theorem 1, Supplementary Note 5). Now consider the case when a given network has direct edges into node $i$ from nodes other than node $j$. In this case, we avoid adding edges that form directed paths to node $i$ that do not pass through node $j$, for the reasons explained above: such edges significantly increase $\sigma_{ii}$ while contributing minimally to $\sigma_{ij}$. Supplementary Fig. 3 shows the structure of the set $E_g$ in the adjacency matrix. The results in this section reveal the relationship between the network structure and the functional pattern defined by the information transfer function. In the next section, we formally state our problem definitions and propose algorithms to solve the maximization problem. We refer to the set of edges that can be added, given by $E_g$, as the 'Ground Set'; a sketch of the communicability computation and a simplified ground-set construction is given below.
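A hedged sketch of the communicability measure in Eq. (4) and an illustrative ground-set rule. The `ground_set` shown here implements only the simplest exclusion described above (no self-loops; no edges into node i except from node j); the full ground set in the paper (Supplementary Fig. 3) also excludes longer paths into i that bypass j.

```python
import numpy as np
from scipy.linalg import expm

def communicability(A01):
    """Eq. (4): c_uv = [exp(A01)]_uv sums walks of all lengths from u to v,
    with a length-k walk weighted by 1/k!."""
    return expm(A01)

def ground_set(n, i, j):
    """Illustrative candidate edges: exclude self-loops and any edge into
    node i from a node other than j (such edges inflate sigma_ii)."""
    return [(u, v) for u in range(n) for v in range(n)
            if u != v and not (v == i and u != j)]

A01 = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(communicability(A01)[0, 1])  # communicability from node 1 to node 2
print(ground_set(3, i=0, j=1))
```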

Finding the optimal topology

We now propose algorithms for solving our optimization problems, namely the design, update, and rewiring problems. The update problem can be considered a sub-class of the design problem, since it adds $k$ edges to an existing network topology.

Problem 1: design problem

The design problem is to construct a connected network topology with $n$ nodes and $k$ edges that maximizes the information transfer from a predefined node $j$ to another predefined node $i$, where $i, j \in \{1, 2, \ldots, n\}$. The total edge weight is bounded by $w_{max} \in \mathbb{R}^+$, and the weight $w_i$ of each link is upper bounded by $w_{ub}$. Our first objective is to find the topology that maximizes $T_{j \to i}$ by adding the minimum number of edges that keep the network at least weakly connected. This topology is a tree network with $n - 2$ edges into node $j$ from the remaining nodes and an edge from node $j$ to node $i$. We call this the base topology and denote its edge set by $E_b$. The design problem is then to add $k - n + 1$ edges from the ground set $E_g$ to the base topology so as to maximize $T_{j \to i}$. We then find the optimal edge weights for every new edge. The problem can be formulated as

$$\begin{aligned} &\underset{S \subseteq E_g}{\text{maximize}} && T_{j \to i}(S) \\ &\text{subject to} && |S| \le k, \quad 0 \le w_i \le w_{ub}, \quad \sum_{i=1}^{k} w_i \le w_{max}, \end{aligned} \qquad (5)$$

where $|\cdot|$ denotes the cardinality of a set.

Problem 2: rewiring problem

Given a weighted network $G_A(V, E_A, w_A)$, the problem is to maximize the information transfer between two given nodes by reconfiguring at most $k$ existing edges. The modified network is given by $G_A \cup G_{\delta A} = (V, E_A \cup E_{\delta A}, w_A + w_{\delta A})$, where $G_{\delta A}$ denotes the modifications to the existing network. We require the total weight of the modified edges to be bounded by $w_{max}$ and each individual edge weight $w_{\delta A_i}$ to be bounded by $w_{ub}$. The rewiring problem can be formulated as

$$\begin{aligned} &\underset{E_{\delta A} \subseteq E}{\text{maximize}} && T_{j \to i}(E_A \cup E_{\delta A}) \\ &\text{subject to} && |E_{\delta A}| \le k, \quad 0 \le w_{\delta A_i} \le w_{ub}, \quad \sum_{i=1}^{k} w_{\delta A_i} \le w_{max}. \end{aligned} \qquad (6)$$

Below, we propose algorithms that solve Problems 1 and 2. First, we propose the algorithms for adding edges that maximize Tji in Problem 1. Next, to solve Problem 2, we propose an algorithm that removes edges with minimal contribution to the information transfer function. We then use the algorithms for Problem 1 to add new edges.

Algorithms for Network Design (Problem 1): We propose the Subgraph Completion Algorithm, which relies on the communicability centrality measure. From the definition of communicability in (4), shorter paths to node $j$ contribute more to the communicability function. To increase the connectivity with shorter paths to node $j$, we form complete subgraphs between 2 nodes, with $j$ as one of the two nodes, for all possible combinations excluding node $i$. We then form complete subgraphs for all combinations of $3, 4, \ldots, (n-1)$ nodes, with $j$ as one of the nodes. If $|S| < k$ still, we arbitrarily add outgoing edges from node $i$ to the rest of the nodes. We call this the 'Subgraph Completion Algorithm', which is given in Algorithm 1 (Algorithms section) and illustrated in Supplementary Fig. 4. We also use a Greedy Algorithm that computes the contribution of each edge towards $T_{j \to i}$ and selects the edge whose contribution is highest; the iteration continues until the number of added edges equals $k$ (Supplementary Note 5). A sketch of the greedy loop is given below. Other commonly used algorithms include the modular and complementary modular addition techniques (Methods, Algorithms).
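A hedged, self-contained sketch of the greedy edge-addition loop (our illustration, not Algorithm 1 of the paper). Unit edge weights and the stabilizing diagonal damping are assumptions added for the toy evaluation of Eq. (2).

```python
import numpy as np

def transfer_of(edges, n, i, j, t=10.0, dt=1e-3):
    """Build A from unit-weight directed edges (u -> v), propagate the
    covariance through the Lyapunov ODE, and evaluate T_{j->i} via Eq. (2).
    The negative diagonal is damping added only to keep the sketch stable."""
    A = -np.eye(n)
    for (u, v) in edges:
        A[v, u] = 1.0                      # edge u -> v enters row v, column u
    Sigma, Q = np.eye(n), (0.1 ** 2) * np.eye(n)
    for _ in range(int(t / dt)):           # Euler steps of the Lyapunov ODE
        Sigma = Sigma + dt * (A @ Sigma + Sigma @ A.T + Q)
    return A[i, j] * Sigma[i, j] / Sigma[i, i]

def greedy_addition(ground, base_edges, k, n, i, j):
    """Greedy loop: repeatedly add the candidate edge with the largest
    marginal gain in T_{j->i} until k edges have been added."""
    chosen = list(base_edges)
    pool = [e for e in ground if e not in chosen]
    for _ in range(k):
        best = max(pool, key=lambda e: transfer_of(chosen + [e], n, i, j))
        chosen.append(best)
        pool.remove(best)
    return chosen
```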

Algorithms for rewiring edges (Problem 2): To maximize $T_{j \to i}$ for a given weighted network $G_A(V, E_A, w_A)$ associated with the system (1) by rewiring the topology, we remove the existing incoming edges to node $i$ except the edge from node $j$ (Theorem 1, Supplementary Note 5). Let $E_i$ denote this set of edges. If $|E_i| \ge k$, we simply remove any $k$ edges from $E_i$. Otherwise, if $|E_i| < k$, we look for another $k - |E_i|$ edges to remove. Towards this end, we introduce novel centrality measures that quantify (a) the causal inference of a node on the rest of the network (node-to-network influence) and (b) the effects (in terms of information transfer) received by a node from the network (network-to-node influence). Finally, we derive an Information Transfer Edge Centrality (ITEC) measure that quantifies the contributions of edges towards information transfers among nodes in the network. To define the ITEC, we first define the cause and effect node centralities below.

Cause centrality in a complex network (node-to-network influence): Cause centrality, denoted $T_j$, quantifies the contribution of information/causal inference by a node across the network. In other words, it quantifies the ability of a node $j$ to transfer information across the network. For the system in (1) with adjacency matrix $A^T \in \mathbb{R}^{n \times n}$, the cause centrality of a node $j \in \{1, 2, \ldots, n\}$ is given by

$$T_j = T_{j \to 1} + T_{j \to 2} + \cdots + T_{j \to n} = A_{1j}\frac{\sigma_{1j}}{\sigma_{11}} + A_{2j}\frac{\sigma_{2j}}{\sigma_{22}} + \cdots + A_{nj}\frac{\sigma_{nj}}{\sigma_{nn}} = A(:,j)^T M(:,j), \qquad (7)$$

where $M(i,j) = \frac{\Sigma(i,j)}{\Sigma(i,i)}$, $\forall i, j \in \{1, 2, \ldots, n\}$.

Effect centrality in a complex network (network-to-node influence): The effect centrality of a node $j$, $R_j$, is defined as the amount of information received by node $j$ from all other nodes in the network. It measures the ability of a node to receive more 'effects', or gather more information, along the directed paths across the network. For the system in (1) with adjacency matrix $A^T \in \mathbb{R}^{n \times n}$, the effect centrality of a node $j \in \{1, 2, \ldots, n\}$ is given by

$$R_j = A_{j1}\frac{\sigma_{j1}}{\sigma_{jj}} + A_{j2}\frac{\sigma_{j2}}{\sigma_{jj}} + \cdots + A_{jn}\frac{\sigma_{jn}}{\sigma_{jj}} = A(j,:)\,M(j,:)^T. \qquad (8)$$

Information Transfer Edge Centrality: We combine the cause and effect centralities to derive a novel edge centrality measure based on information transfer. Intuitively, the contribution of an edge towards the information transfers across the network is related to the nodes it connects: if an edge connects a node with high cause centrality to a node with high effect centrality, then the edge has more influence on the information transfers across the network. Thus, the information transfer edge centrality of an edge $(i \to j)$, denoted $ec_{ij}$, is $ec_{ij} = T_i \cdot R_j$, $\forall i, j \in \{1, 2, \ldots, n\}$.

To remove the remaining $k - |E_i|$ edges from the given network topology, we use the rankings provided by the edge centrality measures and remove the lowest-ranked edges, as sketched below. We denote this set of edges to be removed by $E_r$. We then use the Greedy Algorithm (Supplementary Note 5) or the Subgraph Completion Algorithm (Algorithm 1, Methods) to add $k$ new edges.
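The following sketch transcribes Eqs. (7)–(8) and the ITEC ranking directly; it assumes $A$ and a covariance $\Sigma$ (e.g., from the earlier Lyapunov propagation) are already available.

```python
import numpy as np

def centralities(A, Sigma):
    """Cause centrality T_j (Eq. 7) and effect centrality R_j (Eq. 8)."""
    n = A.shape[0]
    M = Sigma / np.diag(Sigma)[:, None]     # M[i, j] = Sigma[i, j] / Sigma[i, i]
    T = np.array([A[:, j] @ M[:, j] for j in range(n)])   # node -> network
    R = np.array([A[j, :] @ M[j, :] for j in range(n)])   # network -> node
    return T, R

def itec_removal_order(edges, A, Sigma):
    """Rank directed edges (u -> v) by ec_uv = T_u * R_v, lowest first, so
    the leading entries are the first candidates for removal."""
    T, R = centralities(A, Sigma)
    return sorted(edges, key=lambda e: T[e[0]] * R[e[1]])
```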

Optimal assignment of edge weights

Let $S$ be the set of edges in the optimal topology that maximizes $T_{j \to i}$, with $|S| \le k$. We show that the optimal edge weights lie on the boundary of the feasible weight set (Proposition 1, Supplementary Note 5). Therefore, given the cardinality constraint $k$, the optimal edge set $S$, $w_{max}$, and $w_{ub}$, compute $K_{ub} = \lfloor w_{max}/w_{ub} \rfloor$ and $K_{ubl} = w_{max} - K_{ub}\,w_{ub}$. Assign $w_{ub}$ to the first $K_{ub}$ elements of $S$, assign $K_{ubl}$ to the next element of $S$, and 0 to the remaining edges.
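A minimal sketch of this boundary assignment (the floor in $K_{ub}$ is our reading, implied by assigning $w_{ub}$ to whole edges):

```python
def assign_weights(S, w_max, w_ub):
    """Give w_ub to the first floor(w_max / w_ub) edges of the ordered optimal
    set S, the leftover budget to the next edge, and zero to the rest."""
    K = int(w_max // w_ub)
    weights = [w_ub] * min(K, len(S))
    if len(S) > K:
        weights += [w_max - K * w_ub] + [0.0] * (len(S) - K - 1)
    return dict(zip(S, weights))

# e.g. w_max = 4.2, w_ub = 1 over 5 edges -> weights [1, 1, 1, 1, ~0.2],
# matching the update example reported later in this section.
```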

Approximation guarantee

Due to the NP-hardness of our optimization problems, the solutions given by the algorithms are not guaranteed to be optimal. Finding an optimal solution requires a brute-force search over all $k$-combinations of edges in the network, computing the information transfer for each, which is intractable for moderate to large networks. We examine the structural set properties (submodularity and supermodularity) of the information transfer function to find an approximation guarantee for the Greedy Algorithm. A set function $f : 2^E \to \mathbb{R}$ is called submodular if for all $P \subseteq Q \subseteq E$ and $s \in E \setminus Q$, it holds that $f(P \cup \{s\}) - f(P) \ge f(Q \cup \{s\}) - f(Q)$. If $-f$ is submodular, then $f$ is called supermodular. Theorem 2 (Supplementary Note 6) shows that the information transfer function is neither submodular nor supermodular; therefore, the standard approximation guarantee44 for the Greedy Algorithm does not hold. Some recent works on optimizing set functions that are neither submodular nor supermodular show that the Greedy Algorithm can still provide performance guarantees. For example, in45, the authors employ the submodularity ratio $\gamma$ and the curvature $\alpha$ to obtain an approximation guarantee of at least $\frac{1}{\alpha}(1 - e^{-\alpha\gamma})\,f^\star$, where $f^\star$ denotes the optimal value. For a given non-negative set function $f$ with marginal gain $\Delta_\omega(S) = f(S \cup \{\omega\}) - f(S)$, the submodularity ratio is the largest $\gamma \in \mathbb{R}^+$ such that $\sum_{\omega \in \Omega \setminus S} \Delta_\omega(S) \ge \gamma\, \Delta_\Omega(S)$ for all $\Omega, S \subseteq E$. The curvature is the smallest $\alpha \in \mathbb{R}^+$ such that $\Delta_j\big((S \setminus \{j\}) \cup \Omega\big) \ge (1 - \alpha)\, \Delta_j(S \setminus \{j\})$ for all $\Omega, S \subseteq E$ and $j \in S \setminus \Omega$.

To justify the use of the Greedy Algorithm for solving the problems, we derive a positive lower bound on $\gamma$ and an upper bound on $\alpha$ for our set function in the network topology defined by $A_{0,1}$. On the ground set $E_g$, the bounds on $\gamma$ and $\alpha$ are given by (Theorem 3, Supplementary Note 5)

$$\gamma \ge \frac{T_{j \to i}(\omega_{ij})}{T_{j \to i}(E_g) - T_{j \to i}(\omega_{ij})}, \qquad \alpha \le 1 - \frac{T_{j \to i}(\omega_{ij})}{T_{j \to i}(E_g) - T_{j \to i}(\omega_{ij})}, \qquad (9)$$

where $\omega_{ij} = \{(j, i)\}$.

Examples

Design Problem: We first consider a small network of 6 nodes and analyze the performance of our heuristic algorithms for adding 11–17 edges that maximize $T_{3 \to 1}$. We take all edge weights to be 1. To compare the results of our algorithms with the optimal value, we employ a brute-force search for the optimal $T_{3 \to 1}$ with 11–17 edges. Since this method requires an exhaustive search over edge combinations, we restrict the analysis to 6 nodes. The performance comparison is shown in Fig. 2a. In all the figures, we denote the Subgraph Completion Algorithm by SC, the Greedy Algorithm by Greedy, and the Modular Addition and Complementary Modular Addition techniques by MA and CMA, respectively. We see that the Greedy Algorithm performs best, and the SC Algorithm performs close to it. Next, we look at the performance of the proposed algorithms at each stage of edge addition. Let the number of nodes be $n = 15$, with the objective of maximizing $T_{3 \to 1}$. We take the input noise matrix $B = 0.1 I_{15}$ and the initial covariance $\Sigma_0 = 5 I_{15}$. After fixing the in-degree of node 1 and constructing the base topology, we have $n^2 - 2(n-1) = 197$ possible edges, of which 14 are self-loops. So we need to select $k$ of the remaining 183 edges to maximize $T_{3 \to 1}$. The values of $T_{3 \to 1}$ obtained for different values of $k$ using the algorithms are shown in Fig. 2b. The constraint on the total weight is removed, and all weights are assigned $w_{ub} = 1$.

Figure 2.

(a) Performance of different algorithms relative to the optimum value for $n = 6$, input noise matrix $B = 0.1 I_6$, and initial covariance $\Sigma_0 = 5 I_6$. The evolution of $T_{3 \to i}$ is shown in Supplementary Fig. 5. (b) Performance of different algorithms for maximizing $T_{3 \to 1}$. (c) Performance of different algorithms for maximizing $T_{5 \to 2}$ over 100 random networks. Out of the 100 random networks, the Greedy Algorithm achieves 90–100% of the optimum value for 94 networks and 80–90% of the optimum value for the remaining 6 networks.

Update Problem: In the update problem, we are given a network topology, and the goal is to add $k$ edges that maximize $T_{j \to i}$. To compare performance, we generate 100 randomly connected networks of 6 nodes and 10 edges and use the above algorithms to add 5 new edges, with $w_{max} = 4.2$ and $w_{ub} = 1$, such that $T_{5 \to 2}$ is maximized. Because finding the optimum value of $T_{5 \to 2}$ by brute force (for comparison purposes) is expensive for large networks, we limit the analysis to a small network of 6 nodes. We take $\Sigma_0 = 5 I_6$ and $B = 0.1 I_6$. The performance comparison is shown in Fig. 2c.

Computational Complexities: The Greedy Algorithm is computationally expensive, with a worst-case complexity of $O(n^4 \beta_1 + n^4)$, where $\beta_1$ is the cost of computing the information transfer function and $n$ is the number of edges to be added. The Subgraph Completion Algorithm performs very close to the Greedy Algorithm with a significantly lower computational complexity of $O(n^4)$. A detailed comparison of these algorithms, in terms of computational complexity and information transfer maximization, is given in Supplementary Note 6, along with an illustration of the different topologies generated by the proposed algorithms for a network of 20 nodes.

Approximation Guarantee: From the definitions of the submodularity ratio and curvature (Definitions, Supplementary Note 5), we compute $\gamma$ and $\alpha$ among subsets of $E_g$ and select the largest and smallest values, respectively. We randomly generate 100 different subsets $S$ for a network of 50 nodes and determine the largest and smallest values of $\gamma$ and $\alpha$. The largest value of $\gamma$ averages 0.9, signifying empirical closeness to submodularity. The value of $\alpha$ ranges between 0 and 0.4, with an average of 0.15. Thus, using $\frac{1}{\alpha}(1 - e^{-\alpha\gamma})\,f^\star$, the Greedy Algorithm achieves over 80% of the optimal value, outperforming the worst-case approximation of $(1 - 1/e) \approx 63\%$ for submodular functions.
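Plugging the empirical averages reported above into the guarantee gives the figure directly; a two-line numeric check:

```python
import numpy as np

gamma, alpha = 0.9, 0.15   # empirical averages from the 100 random subsets
guarantee = (1.0 / alpha) * (1.0 - np.exp(-alpha * gamma))
print(f"greedy guarantee >= {guarantee:.1%} of the optimum")   # about 84%
```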

Applications to neurological networks

Information flows in Neurological Networks: We study the information transfers among the excitatory populations of neurological networks. The dynamical interactions among the excitatory and inhibitory populations in a synaptically coupled neuronal network can be approximated by the Wilson–Cowan model of interacting oscillators (Supplementary Note 8). In neurological networks, a single neuron fires repetitively when injected with a constant current; it is therefore reasonable to regard a stimulated neuron as a limit-cycle oscillator, at least over a short window spanning several spikes. We thus assume that each oscillator $i$ has an asymptotically stable periodic solution with frequency $\omega_i$. The couplings among neurons often act only through weak input currents to the membrane potential of the cell, so we assume weak couplings among the oscillators to prevent 'oscillator death'46. Moreover, when the couplings are weak, we can reduce the system of nonlinear equations to a set of equations on a torus using invariant manifold theory46. We then use averaging theory to obtain equations that depend only on the phase differences (Supplementary Note 7):

$$d\phi_i = \Big(\omega_i + \sum_j \gamma_{ij}(\phi_i - \phi_j)\Big)\,dt + \sum_k \varsigma_{ik}\,dw_k, \qquad (10)$$

where $\gamma_{ij}$ denotes the coupling function between nodes $i$ and $j$, and the last term models an external stochastic noise process $\xi_i$ with covariance $\varsigma_i$; $w_k$ is a white Gaussian noise process with zero mean and unit covariance (Supplementary Note 7). Because of the white noise, strong deviations may occur that switch the dynamics to other stable states. When the noise levels are reduced, the expected time for such switching between stable phase-locked states becomes arbitrarily large. In this work, we focus on finding the information transfer from one dynamical state to another; we therefore assume that the noise levels are small enough that no such switching occurs during the time intervals in which the dynamical states communicate. The coupling function is computed by finding the response of the phase difference to electrical synapses via gap-junction potentials. A sensitivity analysis of the coupling function with respect to noise levels, types of noise, and local noise is given in Supplementary Note 9. Information transfer between any two neurons in the network can be defined as one excitatory neuron's influence on the excitation level of the other, and it depends on the level of phase synchronization over the periodic interval. A popular and widely used theory in computing information transfers among neurons holds that effective transmission of information between two oscillating neurons occurs when the pre-synaptic input of the sending neuron reaches the post-synaptic neuron at its maximum excitability phase, thereby amplifying the firing rate of the post-synaptic group. To compute the information transfers, we decompose the dynamics in (10) into a deterministic component and a fluctuating stochastic component. We estimate the stochastic component using linear approximations, yielding a linear continuous stochastic model of the form in (1) (Methods and Supplementary Note 8). We show that changes in the network topology alter the information transfers among neurons and that, by designing the correct topology, we can control the information transfers to modify undesired excitation levels or achieve desired patterns of information transfer. Changes in network topology can be due to endogenous changes promoting physiological or pathological conditions, or to exogenous interventions. We assume the initial state covariance of the fluctuating components is $0.1 I_8$ and the input noise matrix is $0.001 I_8$. We first show in Fig. 3a–d that a change in the interactions among the neurons induces a change in the stable phase-locked states and, eventually, in the coupling strengths and information transfers. Next, we show in Fig. 3h–n how our proposed algorithms can be used to maximize $T_{8 \to 7}$ for the network shown in Fig. 3e. Figure 3f illustrates the oscillatory dynamics of the neurons, and Fig. 3g shows the variations of the phase differences around a stable point.

Figure 3.

(a) The Wilson–Cowan neuronal oscillator, consisting of excitatory (triangle) and inhibitory (circle) neurons with average membrane potentials $v$ and $u$, for two network topologies; the edge weight is 0.1. (b) The coupling function curves for both cases in (a). The dark red and blue curves show the coupling function and its antisymmetric curve for the bottom topology in (a); the two dashed curves correspond to the coupling for the top topology in (a). (c) The transposes of the coupling matrices found by linearizing the coupling functions in (b) around the zero crossings of $\bar{\gamma}$ in both cases. The (2, 1) element of the upper matrix is 0, as the corresponding network has no connection from node 1 to node 2. (d) $T_{j \to i}$ curves for both topologies in (a). The red curve shows $T_{2 \to 1}$ for the upper topology; $T_{1 \to 2}$ is 0 for this topology, as there is no connection from node 1 to node 2. The blue curve and the dotted red curve show $T_{1 \to 2}$ and $T_{2 \to 1}$ for the bottom topology; as the coupling strengths are similar, the two information transfer curves overlap. (e) Excitatory and inhibitory network of 8 nodes with couplings 0.015 and 0.1. (f) Oscillatory behaviours of the neurons. (g) The phase differences fluctuate around a stable phase-locked state (darker lines). (h) Information transfers among the excitatory neurons. (i) Given binary interconnection matrix of 8 nodes. (j) Interconnection matrix after updating with 5 new edges using the Greedy Algorithm. (k) Interconnection matrix after rewiring 7 edges using the Greedy Algorithm and the ITEC centrality measure. (l) Information transfers among the excitatory neurons after updating with 5 edges. (m) Information transfers among the excitatory neurons after rewiring 7 edges. (n) Evolution of $T_{8 \to 7}$ after the update and rewiring processes.

Update Problem: We consider the neural network in Fig. 3e for both the update and rewiring problems. We take the initial state covariance of the states $\phi_i$ as $0.1 I_8$ and the input noise matrix as $0.001 I_8$. The adjacency matrix has entries given by $G_{i,j}$. Note that $G_{i,j}$ depends on the phase difference and the phase response curve, and that $G_{i,j} = 0$ if there is no edge from $i$ to $j$ (see Methods). The update problem is to add 5 edges such that $T_{8 \to 7}$ is maximized. The bounds on the weights are $w_{max} = 0.07$ and $w_{ub} = 0.015$. Note that the coupling matrix in Fig. 3c should not be confused with the weight bounds $w_{max}$ and $w_{ub}$. The edge weights are denoted by the black (0.1) and purple (0.015) arrows in the network in Fig. 3e.

Rewiring Problem: We continue with the neural network in Fig. 3e for the rewiring problem to maximize $T_{8 \to 7}$. We restrict the number of edges that can be reconfigured to 7. Following Algorithm 2, we first remove the three edges sinking into node 7 (excluding the edge from node 8). The remaining 4 edges to remove are identified as the lowest-ranked edges under the ITEC. The bounds on the weights are $w_{max} = 0.1$ and $w_{ub} = 0.015$.

These results validate the postulation that the functional information transfers among the neurons depend on the underlying network topology, which may occasionally change due to physiological or pathological conditions.

Discussion

This report provides a generic mechanism to quantify the information transfers among nodes in complex network systems. For a network system with linear stochastic dynamics, we define information transfer in terms of the difference between marginal entropies. For weakly coupled oscillators with stochastic fluctuations, we show that the information transfer is a function of the state covariance and the coupling strengths among the oscillators. We show that the formulation is consistent with Schreiber's transfer entropy and Horowitz's thermodynamical information flow (Supplementary Note 3). We provide supporting examples showing that information transfer patterns change in response to network topology changes. For networks of weakly coupled oscillators, the theory is based on a linear approximation of the phase dynamics around the stable phase-locked states; the method thus highlights the significance of phase synchronization in the study of weakly coupled oscillators.

The structural analysis of the information transfer function reveals that it is a monotone increasing function under specific conditions. The NP-hardness of the optimization problems forces us to derive an approximation guarantee for the Greedy Algorithm. Moreover, the information transfer function is proven to be neither submodular nor supermodular. These conditions place our study outside the standard submodular or supermodular setting, preventing the use of the standard approximation guarantee of $(1 - 1/e) \approx 63.21\%$ of the optimal value for submodular functions. However, the conditions are also favourable, because the complexity is reduced by restricting the search space to only those edges with positive contributions. We show that the information transfer function enjoys an approximation guarantee of more than 80% under the Greedy Algorithm. For the edge weights, we proved that the optimal weights assigned to the set of new edges lie on the boundary of the feasible weight set.

Information transfer, in the context of neurological networks, is defined by the amount of influence of one node on the excitation levels of a neighbouring node and depends on the level of phase synchronization. We computed the information transfers among the neurons in a Wilson–Cowan model of 8 neurons and, using the proposed algorithms, maximized the information transfer between two prespecified excitatory neurons. While the theory in this report focuses on maximizing information transfers by finding a near-optimal topology, there are other avenues for controlling information transfer. For example, if the system in (1) is controllable, with an input matrix defining the controllable nodes in the network, then we can study the variations in information transfer due to varying inputs. Hybrid control of the topology (passive) and external inputs (active) may provide more flexibility in controlling information transfer.

Methods

Algorithms

[Algorithm 1 (Subgraph Completion Algorithm) and Algorithm 2 (Rewiring Algorithm) appear as image files in the published article: 41598_2023_32762_Figa_HTML.jpg and 41598_2023_32762_Figb_HTML.jpg.]

Modular Addition Technique47: In this approach, we compute $T_{j \to i}$ for each potential edge in the network. The edges are then sorted in decreasing order of their contribution to $T_{j \to i}$, and the first $k$ edges are used for maximizing $T_{j \to i}$.

Complementary Modular Addition Technique47: Given the ground set $E_g$, we compute $f(E_g) - f(E_g \setminus \{i\})$ for each $i \in E_g$, where $f$ is $T_{j \to i}$. The edges are then sorted in descending order, and the first $k$ links are added to the base topology. A sketch of both rankings is given below.
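A hedged sketch of the two rankings. Here `f` is an assumed callable that evaluates $T_{j \to i}$ on a list of edges (e.g., the `transfer_of` function from the earlier greedy sketch, with the nodes fixed); scoring candidates on top of the base topology is our reading of the technique.

```python
def modular_addition(ground, base, k, f):
    """Score each candidate edge in isolation on top of the base topology,
    then keep the k highest-scoring edges."""
    return sorted(ground, key=lambda e: f(base + [e]), reverse=True)[:k]

def complementary_modular_addition(ground, base, k, f):
    """Score each edge by the loss f(E_g) - f(E_g \\ {e}) when it is removed
    from the full ground set, then keep the k highest-scoring edges."""
    total = f(base + ground)
    score = lambda e: total - f(base + [x for x in ground if x != e])
    return sorted(ground, key=score, reverse=True)[:k]
```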

Reducing the phase dynamics into linear stochastic dynamics

We assume that in the unperturbed system ($\varsigma_{ik} = 0$), the phase dynamics in (10) has a stable phase-locked state with constant phase differences $\Delta\phi_{ij} = \phi_i^{ref} - \phi_j^{ref}$ and a collective oscillation frequency $\Omega$, that is, for all $i \in \{1, \ldots, N\}$, $\Omega = \omega_i + \sum_j \gamma_{ij}(\Delta\phi_{i,j})$. We decompose the phase dynamics into a deterministic reference part, $\phi_i^{ref}$, and a fluctuating part, $\phi_i^{fluc}$. The solution of the deterministic dynamics is $\phi_i^{ref}(t) = \Omega t + \Delta\phi_{i,1}^{ref}$. Introducing the new coordinates $\varphi_i = \phi_i - \phi_i^{ref}$, (10) can be written as $d\varphi = f(\varphi)\,dt + \varsigma\,dw$, where $f_i(\varphi) = \omega_i + \sum_j \gamma_{ij}(\varphi_i - \varphi_j + \Delta\phi_{i,j}^{ref}) - \Omega$. Assuming the noise levels $\varsigma_{ik}$ are small and linearizing around the stable phase-locked state, we obtain the linear continuous stochastic model

$$d\varphi = G\,\varphi\,dt + \varsigma\,dw, \qquad \text{where } G_{ij} = \gamma_{ij}'\big(\Delta\phi_{i,j}^{ref}\big).$$
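A minimal sketch of the linearization step, assuming a sinusoidal coupling function (our assumption for illustration; the paper's coupling functions come from the phase response curves). The derivative $\gamma_{ij}'$ evaluated at the phase-locked differences populates $G$; diagonal bookkeeping from the full Jacobian of $f$ is omitted here.

```python
import numpy as np

def linearized_coupling(gamma_prime, dphi_ref, edges, n):
    """G[i, j] = gamma'_ij(dphi_ref[i, j]) on existing edges, 0 elsewhere."""
    G = np.zeros((n, n))
    for (i, j) in edges:
        G[i, j] = gamma_prime(dphi_ref[i, j])
    return G

# Assumed coupling gamma(x) = eps * sin(x), so gamma'(x) = eps * cos(x),
# linearized around zero phase-locked differences for an 8-node sketch.
eps = 0.015
G = linearized_coupling(lambda x: eps * np.cos(x),
                        dphi_ref=np.zeros((8, 8)),
                        edges=[(0, 1), (1, 2), (7, 6)], n=8)
```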


Author contributions

Conceptualization: S.S. and R.P.; methodology: S.S., R.P., U.V., and S.L.; investigation: S.S., and R.P.; writing: S.S. and R.P.; review and editing: S.S., R.P., U.V., and S.L.

Data availability

The codes/data used during the current study are available from the corresponding author upon reasonable request.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1038/s41598-023-32762-7.

References

1. Allahverdyan AE, Janzing D, Mahler G. Thermodynamic efficiency of information and heat flow. J. Stat. Mech: Theory Exp. 2009;2009:P09011. doi:10.1088/1742-5468/2009/09/P09011.
2. Tyson JJ, Chen K, Novak B. Network dynamics and cell physiology. Nat. Rev. Mol. Cell Biol. 2001;2:908–916. doi:10.1038/35103078.
3. Tkačik G, Callan CG Jr, Bialek W. Information flow and optimization in transcriptional regulation. Proc. Natl. Acad. Sci. 2008;105:12265–12270. doi:10.1073/pnas.0806077105.
4. Chen CR, Lung PP, Tay NS. Information flow between the stock and option markets: Where do informed traders trade? Rev. Financ. Econ. 2005;14:1–23. doi:10.1016/j.rfe.2004.03.001.
5. Ay N, Polani D. Information flows in causal networks. Adv. Complex Syst. 2008;11:17–41. doi:10.1142/S0219525908001465.
6. Peruani F, Tabourier L. Directedness of information flow in mobile phone communication networks. PLoS ONE. 2011;6:e28860. doi:10.1371/journal.pone.0028860.
7. Maxwell J. Theory of Heat. Mineola, NY (2001).
8. Cafaro C, Ali SA, Giffin A. Thermodynamic aspects of information transfer in complex dynamical systems. Phys. Rev. E. 2016;93:022114. doi:10.1103/PhysRevE.93.022114.
9. Gonzalez MC, Hidalgo CA, Barabasi A-L. Understanding individual human mobility patterns. Nature. 2008;453:779–782. doi:10.1038/nature06958.
10. Kleinberg JM. Navigation in a small world. Nature. 2000;406:845. doi:10.1038/35022643.
11. Bullmore E, Sporns O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009;10:186–198. doi:10.1038/nrn2575.
12. Kim J, Kim I, Han SK, Bowie JU, Kim S. Network rewiring is an important mechanism of gene essentiality change. Sci. Rep. 2012;2:1–7. doi:10.1038/srep00900.
13. Martin S, Grimwood PD, Morris RG, et al. Synaptic plasticity and memory: An evaluation of the hypothesis. Annu. Rev. Neurosci. 2000;23:649–711. doi:10.1146/annurev.neuro.23.1.649.
14. Nabavi S, et al. Engineering a memory with LTD and LTP. Nature. 2014;511:348–352. doi:10.1038/nature13294.
15. Whitlock JR, Heynen AJ, Shuler MG, Bear MF. Learning induces long-term potentiation in the hippocampus. Science. 2006;313:1093–1097. doi:10.1126/science.1128134.
16. Barnes SJ, Finnerty GT. Sensory experience and cortical rewiring. Neuroscientist. 2010;16:186–198. doi:10.1177/1073858409343961.
17. Chklovskii DB, Mel B, Svoboda K. Cortical rewiring and information storage. Nature. 2004;431:782–788. doi:10.1038/nature03012.
18. Albieri G, et al. Rapid bidirectional reorganization of cortical microcircuits. Cereb. Cortex. 2015;25:3025–3035. doi:10.1093/cercor/bhu098.
19. Barnes SJ, et al. Delayed and temporally imprecise neurotransmission in reorganizing cortical microcircuits. J. Neurosci. 2015;35:9024–9037. doi:10.1523/JNEUROSCI.4583-14.2015.
20. Braun U, et al. From maps to multi-dimensional network mechanisms of mental disorders. Neuron. 2018;97:14–31. doi:10.1016/j.neuron.2017.11.007.
21. Liu Y-Y, Slotine J-J, Barabási A-L. Controllability of complex networks. Nature. 2011;473:167–173. doi:10.1038/nature10011.
22. Bomela W, Wang S, Chou C-A, Li J-S. Real-time inference and detection of disruptive EEG networks for epileptic seizures. Sci. Rep. 2020;10:8653. doi:10.1038/s41598-020-65401-6.
23. Wang S, et al. Inferring dynamic topology for decoding spatiotemporal structures in complex heterogeneous networks. Proc. Natl. Acad. Sci. 2018;115:9300–9305. doi:10.1073/pnas.1721286115.
24. McIntosh A, et al. Network analysis of cortical visual pathways mapped with PET. J. Neurosci. 1994;14:655–666. doi:10.1523/JNEUROSCI.14-02-00655.1994.
25. Bullmore E, et al. How good is good enough in path analysis of fMRI data? Neuroimage. 2000;11:289–301. doi:10.1006/nimg.2000.0544.
26. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003;19:1273–1302. doi:10.1016/S1053-8119(03)00202-7.
27. Brovelli A, et al. Beta oscillations in a large-scale sensorimotor cortical network: Directional influences revealed by Granger causality. Proc. Natl. Acad. Sci. 2004;101:9849–9854. doi:10.1073/pnas.0308538101.
28. Ursino M, Ricci G, Magosso E. Transfer entropy as a measure of brain connectivity: A critical analysis with the help of neural mass models. Front. Comput. Neurosci. 2020;14:45. doi:10.3389/fncom.2020.00045.
29. Kirst C, Timme M, Battaglia D. Dynamic information routing in complex networks. Nat. Commun. 2016;7:1–9. doi:10.1038/ncomms11061.
30. Vastano JA, Swinney HL. Information transport in spatiotemporal systems. Phys. Rev. Lett. 1988;60:1773. doi:10.1103/PhysRevLett.60.1773.
31. Sun J, Bollt EM. Causation entropy identifies indirect influences, dominance of neighbors and anticipatory couplings. Phys. D. 2014;267:49–57. doi:10.1016/j.physd.2013.07.001.
32. Friston KJ, et al. Granger causality revisited. Neuroimage. 2014;101:796–808. doi:10.1016/j.neuroimage.2014.06.062.
33. Schreiber T. Measuring information transfer. Phys. Rev. Lett. 2000;85:461. doi:10.1103/PhysRevLett.85.461.
34. Smirnov DA. Spurious causalities with transfer entropy. Phys. Rev. E. 2013;87:042917. doi:10.1103/PhysRevE.87.042917.
35. Kiwata H. Relationship between Schreiber's transfer entropy and Liang–Kleeman information flow from the perspective of stochastic thermodynamics. Phys. Rev. E. 2022;105:044130. doi:10.1103/PhysRevE.105.044130.
36. Liang XS, Kleeman R. Information transfer between dynamical system components. Phys. Rev. Lett. 2005;95:244101. doi:10.1103/PhysRevLett.95.244101.
37. Sinha S, Vaidya U. Formalism for information transfer in dynamical network. In 2015 54th IEEE Conference on Decision and Control (CDC), 5731–5736 (IEEE, 2015).
38. Sinha S, Vaidya U. On data-driven computation of information transfer for causal inference in discrete-time dynamical systems. J. Nonlinear Sci. 2020;30:1651–1676. doi:10.1007/s00332-020-09620-1.
39. Sinha S, Sharma P, Vaidya U, Ajjarapu V. On information transfer-based characterization of power system stability. IEEE Trans. Power Syst. 2019;34:3804–3812. doi:10.1109/TPWRS.2019.2909723.
40. Jaynes ET. Information theory and statistical mechanics. Phys. Rev. 1957;106:620. doi:10.1103/PhysRev.106.620.
41. Chellappan V, Sivalingam KM, Krithivasan K. A centrality entropy maximization problem in shortest path routing networks. Comput. Netw. 2016;104:1–15. doi:10.1016/j.comnet.2016.04.015.
42. Kovačević M, Stanojević I, Šenk V. On the hardness of entropy minimization and related problems. In 2012 IEEE Information Theory Workshop, 512–516 (IEEE, 2012).
43. Estrada E, Hatano N, Benzi M. The physics of communicability in complex networks. Phys. Rep. 2012;514:89–119. doi:10.1016/j.physrep.2012.01.006.
44. Nemhauser GL, Wolsey LA, Fisher ML. An analysis of approximations for maximizing submodular set functions—I. Math. Program. 1978;14:265–294. doi:10.1007/BF01588971.
45. Chamon LF, Ribeiro A. Near-optimality of greedy set selection in the sampling of graph signals. In 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1265–1269 (IEEE, 2016).
46. Ermentrout GB, Kopell N. Multiple pulse interactions and averaging in systems of coupled neural oscillators. J. Math. Biol. 1991;29:195–217. doi:10.1007/BF00160535.
47. Srighakollapu MV, Kalaimani RK, Pasumarthy R. Optimizing network topology for average controllability. Syst. Control Lett. 2021;158:105061. doi:10.1016/j.sysconle.2021.105061.
