Abstract
Information flow among nodes in a complex network describes the overall cause-effect relationships among the nodes and provides a better understanding of the contributions of these nodes individually or collectively towards the underlying network dynamics. Variations in network topologies result in varying information flows among nodes. We integrate theories from information science with network control theory into a framework that enables us to quantify and control the information flows among the nodes in a complex network. The framework explicates the relationships between the network topology and the functional patterns, such as the information transfers in biological networks, information rerouting in sensor nodes, and influence patterns in social networks. We show that by designing or re-configuring the network topology, we can optimize the information transfer function between two chosen nodes. As a proof of concept, we apply our proposed methods in the context of brain networks, where we reconfigure neural circuits to optimize excitation levels among the excitatory neurons.
Subject terms: Information technology, Computational science, Applied mathematics, Neural circuits
Introduction
Cause-effect relationships between events or processes, in which one event contributes to the evolution of another, occur in physical1, biological2,3, financial4, and technological5,6 systems and networks. Apart from science, causality has been an important topic in contemporary philosophy and its branches, including metaphysics, ontology, and epistemology. In physical systems, Maxwell devised a thought experiment (Maxwell's demon)7 that reveals the relationship between information and entropy: the restrictions imposed by the second law of thermodynamics can be relaxed by using the information (velocities and positions of the particles) available to the demon. These notions of information and entropy provide a thermodynamical description of information flows in dynamical systems8. In a social network9,10, information is encoded in the network topology and is essential for building reputation, trust, and collaboration, or for finding short chains in an extensive social network. In cell biology2,3, receptor function relies on precise dynamical communication and coordinated information transfer between cell-surface receptors and the outside world, as well as within gene networks. In neurological networks11, information transfers occur across synapses through the activity of several neural populations; the dendrites transmit information to the cell body, and the axon transmits information away from the cell body.
The pattern of connections between proteins or neurons determines how information flows through gene regulatory networks or neural circuits. During evolution, gene essentiality changes, and the number of connections between essential and non-essential genes depends on the ancestral species; increased interactions among genes can transform non-essential genes into essential ones12. Thus, knowledge of the 'wiring' of these networks helps us understand how collective behaviour contributes to the information flows among cells. The connectome describes the complete structural wiring diagram of the neurons in the nervous system. Studies show that changes in the ability to learn and form memories depend on modifications of synaptic strength through potentiation or depression13,14,15. One approach to modifying synaptic strengths is to reconfigure the wiring by changing the physical connections between neurons16,17. Recent evidence has shown that network rewiring is an essential mechanism in learning and neuroplasticity, defined as the ability of the brain to modify the information flows among neurons in response to intrinsic and extrinsic stimuli18,19.
Most of the literature on complex dynamical networks focuses on the controllability and reachability of nodes and their roles in controlling the network dynamics20,21. A few other works focus on identifying effective connectivity from time-series data, using, for example, Fourier-based or polynomial-based interpolation22,23. These methods are based on interpolation techniques, and their estimation accuracy depends heavily on the chosen basis functions. Various studies on complex networks focus specifically on the analysis of complex brain networks24,25,26,27. The methods used to investigate functional connectivity between brain regions in these works include connectivity models such as structural equation modelling24,25, dynamic causal modelling26, and Granger causality27. The structural equation modelling method24,25 is based on estimating the correlation matrix between brain regions and is intractable for large networks. Dynamic causal modelling26 estimates connectivity by perturbing the brain's dynamic system and measuring the response, and it does not incorporate an information-theoretic measure. Granger causality characterizes the direction of information flow, but it does not quantify the amount of causal inference; therefore, in the event of bidirectional causal inferences, Granger causality has difficulty differentiating the relative strengths. All these studies of brain networks focus only on finding the effective connectivity of the brain network. Recently, network scientists have integrated information theory with network theory to study the flow of information in complex networks28,29. These studies focus mainly on estimating information transfers in stationary random processes. In this work, by contrast, we consider complex dynamical networks with intrinsic stochastic nodal dynamics, which provide accurate estimates of the evolution of information transfers. We model the neurological network on a dynamic model of the brain (the Wilson–Cowan model) and infer the coupling strengths by perturbing the system and finding the phase responses (phase response curves). In this regard, our methods for estimating coupling strengths from neurophysiological time series differ from those in22,23, where there is no designed perturbation and the inputs are treated as unknown. We attempt to answer two crucial questions: (i) Is there a way to quantify the information flows among nodes in complex dynamical networks? (ii) What are the effects of changing the network topology on the information transfers among the nodes? Moreover, assuming we have the authority to configure the network topology, can we maximize the information transfer between two predefined nodes? A major distinctive feature of our work, therefore, lies in integrating theories from information theory, graph theory, and optimization to quantify the flow of information between nodes in complex dynamical networks and to find the optimal topology for maximized information flow.
There are various information-theoretic measures for quantifying information flow, such as time-delayed mutual information30, causation entropy31, and Granger causality32. One limitation of these measures is that they do not determine the cause-effect relation or the direction of information flow. Schreiber's transfer entropy33 describes the flow of information between two random processes and gives information transfer a directional sense. However, evidence34 has shown that transfer entropy may give qualitatively incorrect results, for example under imperfect observations of the states, and as a result may not always successfully quantify the true information transfers in dynamical systems35. Recently, Liang and Kleeman36,37 formulated the evolution of information transfers in dynamical systems. In our work, we adopt the Liang-Kleeman formalism of information transfer to measure the flow of information in a network. This formalism has been used to understand causal inference from time-series data in large-scale networks38 and to identify sources of instability in power-system networks39.
To understand the effects of topological changes on the information transfer, we analyze the structural set properties of the information transfer function. Our information transfer function is also closely related to mutual information40, defined as the amount of information obtained about one random variable by observing a second random variable. Maximizing mutual information under a constraint on the marginal distribution has been proven to be NP-hard41,42. Maximizing information transfer under edge constraints is a variant of such problems, and we propose algorithms with provable suboptimality bounds for solving them. We split the objective function in our maximization problem into two parts: a first term capturing the network topology and a second term capturing the edge weights. The problem of finding the optimal topology can be divided into three subproblems: (a) Design Problem: design a near-optimal topology given the number of nodes and edges; (b) Update Problem: add a fixed number of edges to a given network; and (c) Rewiring Problem: reconfigure a fixed number of edges to maximize information transfer. The weight of each edge is upper bounded by a positive weight, and the total edge weight is bounded by a positive real number. A few questions arise naturally, which we answer in this report. What is the approximation guarantee when the Greedy Algorithm is used to solve these problems? Are there algorithms that perform close to the Greedy Algorithm while reducing the computational cost? As a computationally cheaper alternative to the Greedy Algorithm, we propose a new algorithm, the 'Subgraph Completion Algorithm', that performs closely to the Greedy Algorithm while reducing the computational cost threefold. We also propose a new centrality measure, named 'Information Transfer Edge Centrality', that quantifies the contributions of edges towards information transfers among nodes in the network. Finally, we apply our proposed algorithms and validate the approximation guarantees on various random networks. We also apply our algorithms to maximize the information transfer between two excitatory neurons in a neurological network.
Results
Quantifying the information transfer
To compute the information transfer, we consider a directed network with linear time-invariant stochastic dynamics given by:
$$\dot{x}(t) \;=\; A\,x(t) + B\,w(t) \qquad (1)$$
where $x(t) \in \mathbb{R}^n$ are the nodal states of the network, $w(t)$ is a white noise with mean zero and unit covariance, and $B$ denotes the input noise matrix. The choice of this model is motivated by the fact that we can reduce most real-world oscillatory dynamical networks to phase-description models that can be approximated by linear stochastic systems29. The model does not incorporate control nodes in the network system, and the problem formulation does not require a controllability constraint to maximize the objective function. We assume that the initial states x(0), denoted as $x_0$, are drawn from a Gaussian distribution $\mathcal{N}(\mu_0, \Sigma_0)$, with initial mean $\mu_0$ and covariance $\Sigma_0$. Additionally, we assume that there are no self-loops in the considered networks. The transpose of the state matrix, $A^\top$, describes the weighted adjacency matrix. The directed graph is denoted by $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{W})$, where the vertex set $\mathcal{V}$ is given by the n states, $\mathcal{E}$ is the edge set, and $\mathcal{W}$ is the weight function. The non-zero entries of $B$ define how each of the nodes is affected by the white noise. For the linear time-invariant stochastic network model in (1) with n random variables and edges $\mathcal{E}$, the information transfer from node j to node i at time t for $i \ne j$, denoted as $T_{j\to i}(t)$, is
$$T_{j\to i}(t) \;=\; -\,\mathbb{E}\!\left[\frac{1}{\rho_i}\,\frac{\partial\big(a_{ij}\,x_j\,\rho_{i\mid j}\big)}{\partial x_i}\right] \;=\; a_{ij}\,\frac{\Sigma_{ij}(t)}{\Sigma_{ii}(t)} \qquad (2)$$
where $\rho(x_i, x_j; t)$ denotes the joint distribution of $(x_i, x_j)$ at time t, $\rho_i$ denotes the marginal distribution of state $x_i$, $\rho_{i\mid j}$ is the conditional probability distribution of $x_i$ given $x_j$ at time t, $\Sigma(t)$ is the covariance matrix of $x(t)$, and $a_{ij}$ denotes the (i, j) element of $A$. The derivations are given in Supplementary Notes 1 and 3. In this work, we consider the case where the network admits cooperative (i.e., $a_{ij} \ge 0$) interactions among the nodes, as negative interactions are not physically meaningful in biological networks and other real-world networks. We shall drop the explicit dependence of $T_{j\to i}$ on t, as maximizing $T_{j\to i}$ for one time instant maximizes it for all other time instants (Corollary 1.1, Supplementary Note 5). Figure 1 shows our framework for maximizing information transfer from node 3 to node 1 in a given network. In Supplementary Note 2, we show the theoretical relationships between Liang-Kleeman's information transfer, Horowitz's information flow, and Schreiber's transfer entropy.
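For concreteness, the following minimal Python sketch propagates the state covariance of the model in (1) through the differential Lyapunov equation $\dot{\Sigma} = A\Sigma + \Sigma A^\top + BB^\top$ and evaluates the transfer expression in (2). The function name, the forward-Euler discretization, and the toy matrices are our own illustrative choices, not the authors' code.

```python
import numpy as np

def information_transfer(A, B, Sigma0, i, j, t_end, dt=1e-3):
    """Propagate Sigma(t) of dx = A x dt + B dw through the differential
    Lyapunov equation dSigma/dt = A Sigma + Sigma A^T + B B^T (forward
    Euler), then evaluate T_{j->i}(t) = a_ij * Sigma_ij(t) / Sigma_ii(t)."""
    Sigma, Q = Sigma0.astype(float).copy(), B @ B.T
    for _ in range(int(t_end / dt)):
        Sigma = Sigma + dt * (A @ Sigma + Sigma @ A.T + Q)
    return A[i, j] * Sigma[i, j] / Sigma[i, i]

# Toy 3-node chain 2 -> 1 -> 0; the diagonal decay is only there to keep
# this example stable and is not an edge of the graph.
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -1.0, 0.5],
              [0.0, 0.0, -1.0]])
B = np.eye(3)
print(information_transfer(A, B, np.eye(3), i=0, j=1, t_end=5.0))
```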
Structural analysis of information transfer function
For the directed network $\mathcal{G}$ associated with the system in (1), we study the structural properties of $T_{j\to i}$. The domain of $T_{j\to i}$ is a subset of edges $\mathcal{E} \subseteq \mathcal{E}_f$, where $\mathcal{E}_f$ is the set of all possible edges on the n nodes, and the range is a positive real number. It is easy to see from (2) that $T_{j\to i}$ is a function of the two set functions $\Sigma_{ij}$ and $\Sigma_{ii}$. To maximize $T_{j\to i}$, we need to maximize $\Sigma_{ij}$ and minimize $\Sigma_{ii}$ concurrently. However, this approach is not feasible, as both $\Sigma_{ij}$ and $\Sigma_{ii}$ are monotone non-decreasing functions of the edges (Lemma 1, Supplementary Note 5). Alternatively, we find the set of edges, $\mathcal{E}_s$, such that if any edge from $\mathcal{E}_s$ is added to $\mathcal{E}$, the marginal increase in $\Sigma_{ij}$ is greater than the marginal increase in $\Sigma_{ii}$. We can formally define the set as
$$\mathcal{E}_s \;=\; \Big\{\, e \in \mathcal{E}_f \setminus \mathcal{E} \;:\; \Sigma_{ij}(\mathcal{E} \cup \{e\}) - \Sigma_{ij}(\mathcal{E}) \;>\; \Sigma_{ii}(\mathcal{E} \cup \{e\}) - \Sigma_{ii}(\mathcal{E}) \,\Big\} \qquad (3)$$
Thus, it is easy to see from (3) that $T_{j\to i}$ is a monotone increasing function of the edges in the set $\mathcal{E}_s$. To find the elements of $\mathcal{E}_s$, we recall the definition of "communicability"43 from graph theory. The communicability from node i to node j in $\mathcal{G}$, denoted as $G_{ij}$, is defined as the total number of walks of all lengths from node i to j, weighting walks of length k by a factor $\frac{1}{k!}$. It quantifies the ability to exchange messages between two nodes and is given by
$$G_{ij} \;=\; \sum_{k=0}^{\infty} \frac{\left(A^k\right)_{ij}}{k!} \;=\; \left(e^{A}\right)_{ij} \qquad (4)$$
where $A$ is the structural interconnection matrix of $\mathcal{G}$. A walk of length k is a sequence of nodes $v_0, v_1, \dots, v_k$ such that $(v_m, v_{m+1}) \in \mathcal{E}$ for all $m = 0, \dots, k-1$. The relationship between $T_{j\to i}$ and the communicability is given in (Theorem 1, Supplementary Note 5). Thus, a comparison between (4) and (2) reveals that $\Sigma_{ij}$ increases for every incoming path of any length to node j, with higher contributions from shorter paths to node j. Similarly, $\Sigma_{ii}$ increases quadratically with incoming paths to node i, with the highest contributions from the shortest (direct) paths to node i. Therefore, if we fix the in-degree of node i to 1, with the only edge to node i coming from node j, then any directed path to node i formed by the remaining edges passes through node j. As a result, node j has shorter directed paths than node i and, by the definition of communicability, any such edge satisfies the inequality condition in (3). Consequently, if we assume there are no incoming edges to node i except from node j, then $T_{j\to i}$ is a monotone non-decreasing function of the edges (Theorem 1, Supplementary Note 5). Now, we consider the case when a given network has direct edges to node i from nodes other than node j. In this case, we avoid adding edges that form directed paths to node i that do not pass through node j, for the reasons explained earlier: these edges significantly increase $\Sigma_{ii}$ while their contribution towards $\Sigma_{ij}$ is minimal. Supplementary Fig. 3 shows the structure of the set $\mathcal{E}_s$ in the adjacency matrix. The results in this section reveal the relationship between the network structure and the functional pattern defined by the information transfer function. In the next section, we formally state our problem definitions and propose algorithms to solve the maximization problem. We define the set of possible edges that can be added as the "Ground Set" $\mathcal{S}$, given by the edges of $\mathcal{E}_s$ excluding incoming edges to node i other than the edge from node j.
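The inequality in (3) can be checked numerically edge by edge. The sketch below computes communicability via the matrix exponential in (4) and builds the candidate set by comparing the marginal increases of $\Sigma_{ij}$ and $\Sigma_{ii}$. Using the steady-state covariance (via SciPy's continuous Lyapunov solver) in place of $\Sigma(t)$, the stabilizing self-decay, and the trial edge weight are our own simplifications for illustration.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def communicability(A_struct):
    # Eq. (4): walks of length k from i to j, each weighted by 1/k!.
    return expm(A_struct)

def candidate_edges(A, B, i, j, w_trial=0.1, decay=1.0):
    """Edges satisfying the inequality in (3): the marginal increase in
    Sigma_ij exceeds the marginal increase in Sigma_ii."""
    n = A.shape[0]
    def steady_cov(Adj):
        # Solve A_s Sigma + Sigma A_s^T = -B B^T; the self-decay keeps A_s stable.
        return solve_continuous_lyapunov(Adj - decay * np.eye(n), -B @ B.T)
    S0, E_s = steady_cov(A), []
    for u in range(n):
        for v in range(n):
            if u == v or A[u, v] != 0:
                continue                    # skip self-loops and existing edges
            A_try = A.copy()
            A_try[u, v] = w_trial           # tentatively add edge v -> u
            S1 = steady_cov(A_try)
            if S1[i, j] - S0[i, j] > S1[i, i] - S0[i, i]:
                E_s.append((u, v))
    return E_s
```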
Finding the optimal topology
We now propose algorithms for solving our optimization problems, namely the design, update, and rewiring problems. The update problem can be considered a sub-class of the design problem, since we are adding k edges to an existing network topology.
Problem 1: design problem
The design problem is to construct a connected network topology with n nodes and k edges that maximizes the information transfer from a predefined node j to another predefined node i, where $i \ne j$. The total edge weight is bounded by $w_T$. Additionally, the weight of each link, $w(e)$, is upper bounded by $\bar{w}$. Our first objective is to find the topology that maximizes $T_{j\to i}$ by adding the minimum number of edges that ensure the network is at least weakly connected. This topology is a tree network with edges into j from the remaining nodes and an edge from node j to i. We call this the base topology and denote its set of edges by $\mathcal{E}_b$. The design problem now is to add k edges from the ground set $\mathcal{S}$ to the base topology so as to maximize $T_{j\to i}$. We then find the optimal edge weights for every new edge. The problem can be formulated as
$$\max_{\mathcal{E}' \subseteq \mathcal{S},\; w} \; T_{j\to i}\!\left(\mathcal{E}_b \cup \mathcal{E}'\right) \quad \text{s.t.} \quad |\mathcal{E}'| \le k, \;\; \sum_{e \in \mathcal{E}'} w(e) \le w_T, \;\; 0 \le w(e) \le \bar{w} \qquad (5)$$
Problem 2: rewiring problem
Given a weighted network $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{W})$, the problem is to maximize the information transfer between two given nodes by reconfiguring at most k existing edges. The modified network is given by $\mathcal{G}' = (\mathcal{V}, \mathcal{E}', \mathcal{W}')$, where $\mathcal{E}' = (\mathcal{E} \setminus \mathcal{E}_r) \cup \mathcal{E}_a$ denotes the modifications on the existing network, with $\mathcal{E}_r$ the removed edges and $\mathcal{E}_a$ the added edges. We require that the total weight of the modified edges be bounded by $w_T$ and that each individual edge weight be bounded by $\bar{w}$. The rewiring problem can be formulated as
$$\max_{\mathcal{E}_r \subseteq \mathcal{E},\; \mathcal{E}_a \subseteq \mathcal{S},\; w} \; T_{j\to i}\!\left((\mathcal{E} \setminus \mathcal{E}_r) \cup \mathcal{E}_a\right) \quad \text{s.t.} \quad |\mathcal{E}_r| = |\mathcal{E}_a| \le k, \;\; \sum_{e \in \mathcal{E}_a} w(e) \le w_T, \;\; 0 \le w(e) \le \bar{w} \qquad (6)$$
Below, we propose algorithms that solve Problems 1 and 2. First, we propose algorithms for adding edges that maximize $T_{j\to i}$ in Problem 1. Next, to solve Problem 2, we propose an algorithm that removes edges with minimal contribution to the information transfer function; we then use the algorithms for Problem 1 to add new edges.
Algorithms for Network Design (Problem 1): We propose the Subgraph Completion Algorithm. This technique relies on the communicability centrality measure. From the definition of communicability in (4), shorter paths to node j contribute more to the communicability function. To increase the connectivity through shorter paths to node j, we form complete subgraphs between pairs of nodes, with j as one of the two nodes, for all possible combinations excluding node i. We then form complete subgraphs over larger combinations of nodes, again with j as one of the nodes. If edges still remain to be added, we arbitrarily add outgoing edges from node i to the rest of the nodes. We call this the "Subgraph Completion Algorithm", which is given in Algorithm 1 (Algorithms section) and illustrated in Supplementary Fig. 4. We also use a Greedy Algorithm that computes the contribution of each candidate edge towards $T_{j\to i}$ and selects the edge whose contribution is highest; the iteration continues until the number of added edges equals k (Supplementary Note 5). A sketch of this greedy loop is given below. Other commonly used algorithms include the modular and complementary modular addition techniques (Methods-Algorithms).
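As a minimal sketch of the greedy loop (our own interface, operating on a NumPy adjacency matrix): `transfer` stands for any routine that evaluates $T_{j\to i}$ for a given adjacency matrix, such as the `information_transfer` sketch above; it is not the paper's Algorithm listing.

```python
def greedy_add(A, ground_set, k, w_bar, transfer):
    """Greedy Algorithm sketch: in each of k rounds, add the ground-set edge
    with the largest marginal gain in T_{j->i}, assigned the weight bound."""
    A = A.copy()
    chosen = []
    for _ in range(k):
        base = transfer(A)
        best_edge, best_gain = None, -float("inf")
        for (u, v) in ground_set:
            if (u, v) in chosen:
                continue
            A_try = A.copy()
            A_try[u, v] = w_bar             # tentative edge v -> u at full weight
            gain = transfer(A_try) - base
            if gain > best_gain:
                best_edge, best_gain = (u, v), gain
        A[best_edge] = w_bar                # commit the highest-contribution edge
        chosen.append(best_edge)
    return A, chosen
```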
Algorithms for rewiring edges (Problem 2): To maximize $T_{j\to i}$ for a given weighted network $\mathcal{G}$ associated with the system (1) by rewiring the topology, we first remove the existing incoming edges to node i, except the edge from node j (Theorem 1, Supplementary Note 5). Let $\mathcal{E}_{in}$ denote this set of edges. If $|\mathcal{E}_{in}| \ge k$, we simply remove any k edges from $\mathcal{E}_{in}$. Else, if $|\mathcal{E}_{in}| < k$, we look for other edges to remove. Towards this end, we introduce novel centrality measures that quantify (a) the causal inference of a node on the rest of the network (node-to-network influence) and (b) the effects (in terms of information transfer) received by a node from the network (network-to-node influence). Finally, we derive an Information Transfer Edge Centrality (ITEC) measure that quantifies the contributions of edges towards information transfers among nodes in the network. To define the ITEC, we first define the cause and effect node centralities below.
Cause centrality in a complex network (node-to-network influence): Cause centrality, denoted by $C_c(j)$, quantifies the contribution of information/causal inferences by a node across the network. In other words, it quantifies the ability of a node j to transfer information across the network. For the system in (1) with adjacency matrix $A^\top$, the cause centrality of a node j is given by
$$C_c(j) \;=\; \sum_{i=1,\, i \ne j}^{n} T_{j\to i} \qquad (7)$$
Effect centrality in a complex network (network-to-node influence): The effect centrality of a node j, denoted by $C_e(j)$, is defined as the amount of information received by node j from all other nodes in the network. It measures the ability of a node to receive more "effects", or gather more information, along the directed paths across the network. For the system in (1) with adjacency matrix $A^\top$, the effect centrality of a node j is given by
$$C_e(j) \;=\; \sum_{i=1,\, i \ne j}^{n} T_{i\to j} \qquad (8)$$
Information Transfer Edge Centrality: We combine the cause and effect centralities to derive a novel edge centrality measure based on information transfer. Intuitively, the contribution of an edge towards the information transfers across the network is related to the nodes it connects: if an edge connects a node with high cause centrality to a node with high effect centrality, then the edge has more influence on the information transfers across the network. Thus, the information transfer edge centrality of an edge (i, j), denoted as $ITEC_{(i,j)}$, combines the cause centrality of its source node with the effect centrality of its target node.
To remove edges from the given network topology, we use the rankings provided by the various edge centrality measures and remove the k lowest-ranked edges. We denote this set of edges to be removed by $\mathcal{E}_r$. We then use the Greedy Algorithm (Algorithm in Supplementary Note 5) or the Subgraph Completion Algorithm (Algorithm 1, Methods) to add k new edges.
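The following sketch ranks existing edges by ITEC from a matrix of pairwise transfers. Reading the centralities as row and column sums of that matrix follows (7) and (8); combining them as a product is our own reading of the definition, which the original display makes precise.

```python
import numpy as np

def itec_ranking(A, T):
    """Rank existing edges by Information Transfer Edge Centrality.
    T[j, i] holds T_{j->i}; cause centrality C_c(j) is the row sum (Eq. (7))
    and effect centrality C_e(j) is the column sum (Eq. (8))."""
    T = T.copy()
    np.fill_diagonal(T, 0.0)
    cause = T.sum(axis=1)                     # C_c(j) = sum_i T_{j->i}
    effect = T.sum(axis=0)                    # C_e(j) = sum_i T_{i->j}
    edges = [(u, v) for u in range(A.shape[0]) for v in range(A.shape[1])
             if u != v and A[u, v] != 0]
    # Entry A[u, v] encodes the edge v -> u: combine the source's cause
    # centrality with the target's effect centrality (product used here).
    score = {(u, v): cause[v] * effect[u] for (u, v) in edges}
    return sorted(edges, key=score.get)       # lowest-ranked first: removal candidates
```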
Optimal assignment of edge weights
Let $\mathcal{E}^*$ be the set of edges in the optimal topology that maximizes $T_{j\to i}$, with $|\mathcal{E}^*| = k$. We show that the optimal edge weights lie on the boundary of the feasible weight set (Proposition 1 in Supplementary Note 5). Therefore, given the cardinality constraint k, the optimal edge set $\mathcal{E}^*$, $w_T$, and $\bar{w}$, compute $m = \lfloor w_T / \bar{w} \rfloor$ and $r = w_T - m\bar{w}$. Assign $\bar{w}$ to the first m elements in $\mathcal{E}^*$, assign r to the next element in $\mathcal{E}^*$, and 0 to the remaining edges.
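A short sketch of this boundary assignment follows; `edges` is assumed (our assumption) to be the optimal edge set already ordered by marginal contribution.

```python
def assign_weights(edges, w_total, w_bar):
    """Assign w_bar to as many edges as the budget w_total allows, the
    remainder to the next edge, and zero to the rest (Proposition 1)."""
    m = int(w_total // w_bar)            # edges receiving the full per-edge bound
    r = w_total - m * w_bar              # leftover budget for one more edge
    return {e: (w_bar if idx < m else r if idx == m else 0.0)
            for idx, e in enumerate(edges)}
```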
Approximation guarantee
Due to the NP-hardness of our optimization problems, the solutions given by the algorithms are not guaranteed to be optimal. Finding an optimal solution requires the brute-force method of evaluating all k-combinations of edges in the network and computing the information transfer for each; this is intractable for moderate to large networks. We therefore examine the structural set properties (submodularity and supermodularity) of our information transfer function to find an approximation guarantee for the Greedy Algorithm. A set function $f: 2^{\mathcal{S}} \to \mathbb{R}$ is called submodular if, for all $A \subseteq B \subseteq \mathcal{S}$ and $e \in \mathcal{S} \setminus B$, it holds that $f(A \cup \{e\}) - f(A) \ge f(B \cup \{e\}) - f(B)$. If $-f$ is a submodular function, then f is called a supermodular function. Theorem 2 (Supplementary Note 6) shows that the information transfer function is neither submodular nor supermodular. Therefore, the standard approximation guarantee44 provided by the Greedy Algorithm does not hold. Some recent works on optimizing set functions that are neither submodular nor supermodular show that the Greedy Algorithm can still provide performance guarantees. For example, in45, the authors employ the submodularity ratio $\gamma$ and the curvature $\alpha$ to define an approximation guarantee greater than $\frac{1}{\alpha}\left(1 - e^{-\gamma\alpha}\right) f(\mathcal{E}^*)$, where $f(\mathcal{E}^*)$ denotes the optimal value. For a given non-negative set function f, the submodularity ratio is the largest $\gamma$ such that $\sum_{e \in B \setminus A} \left[f(A \cup \{e\}) - f(A)\right] \ge \gamma \left[f(A \cup B) - f(A)\right]$ for all $A, B \subseteq \mathcal{S}$. The curvature is the smallest $\alpha$ such that $f(B) - f(B \setminus \{e\}) \ge (1 - \alpha)\left[f(A) - f(A \setminus \{e\})\right]$ for all $A \subseteq B \subseteq \mathcal{S}$ and $e \in A$.
To justify the use of the Greedy Algorithm for solving the problems, we derive a positive lower bound on $\gamma$ and an upper bound on $\alpha$ for our set function on the network topology defined by the base topology. For the ground set $\mathcal{S}$, the bounds on $\gamma$ and $\alpha$ are given in Eq. (9) (Theorem 3, Supplementary Note 5).
Examples
Design Problem: We first consider a small network of 6 nodes and analyze the performance of our heuristic algorithms for adding 11–17 edges that maximize $T_{j\to i}$. We take the edge weights to be 1. To compare the results of our algorithms with the optimal value, we employ a brute-force technique to find the optimal $T_{j\to i}$ with 11–17 edges. Since this method requires an exhaustive search over all edge combinations, we restrict the analysis to 6 nodes. The performance comparison is shown in Fig. 2a. In all the figures, we denote the Subgraph Completion Algorithm by SC, the Greedy Algorithm by Greedy, and the Modular Addition and Complementary Modular Addition techniques by MA and CMA, respectively. We see that the Greedy Algorithm performs better than the rest, and the SC Algorithm performs closely to the Greedy Algorithm. Next, we examine the performance of the proposed algorithms at each stage of edge addition. Let the number of nodes be 15, and let the objective be to maximize $T_{j\to 1}$. We fix the input noise matrix and the initial covariance. After fixing the in-degree of node 1 and constructing the base topology, we have 197 possible edges. Of these, 14 are self-loops, so we need to select k out of 183 edges that maximize $T_{j\to 1}$. The values of $T_{j\to 1}$ obtained for different values of k using the algorithms are shown in Fig. 2b. The constraint on the total weight is removed, and all the weights are assigned the upper bound $\bar{w}$.
Update Problem: In the update problem, we are given a network topology, and the goal is to add k edges that maximize $T_{j\to i}$. To compare performance, we generate 100 randomly connected networks of 6 nodes and 10 edges and use the above algorithms to add 5 new edges, subject to the total weight bound $w_T$ and the per-edge bound $\bar{w}$, such that $T_{j\to i}$ is maximized. Because of the complexity of finding the optimum value of $T_{j\to i}$ (using the brute-force approach for comparison purposes) for large networks, we limit our analysis to a small network of 6 nodes. The performance comparison is shown in Fig. 2c.
Computational Complexities: The Greedy Algorithm is computationally expensive, bearing a worst-case computational complexity of $O(k\,|\mathcal{S}|\,c_T)$, where $c_T$ is the cost of computing the information transfer function and k is the number of edges to be added. The performance of the Subgraph Completion Algorithm is very close to that of the Greedy Algorithm, with significantly lower computational cost. A detailed comparison of these algorithms in terms of computational complexity and the maximization of information transfer is given in Supplementary Note 6, together with an illustration of the different topologies generated by the proposed algorithms for a network of 20 nodes.
Approximation Guarantee: From the definitions of the submodularity ratio and curvature (Definitions, Supplementary Note 5), we compute $\gamma$ and $\alpha$ among subsets of $\mathcal{S}$ and select the largest and the smallest values, respectively. We randomly generate 100 different subsets of $\mathcal{S}$ for a network of 50 nodes and determine the resulting values of $\gamma$ and $\alpha$. The value of $\gamma$ has an average of 0.9, signifying empirical closeness to submodularity. The value of $\alpha$ ranges between 0 and 0.4, with an average value of 0.15. Thus, using the bound $\frac{1}{\alpha}\left(1 - e^{-\gamma\alpha}\right)$ with these average values, the Greedy Algorithm achieves over 0.84 of the optimal value, outperforming the worst-case approximation of $1 - 1/e \approx 0.63$ for submodular functions.
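The empirical estimation of $\gamma$ and $\alpha$ can be sketched as a Monte-Carlo search over random subset pairs; the sampling scheme and the set-function interface `f` (mapping a set of edge indices to a value) are our own choices.

```python
import numpy as np

def empirical_gamma_alpha(f, items, n_samples=100, seed=0):
    """Estimate the submodularity ratio gamma (largest value satisfying its
    defining inequality over the sampled pairs) and the curvature alpha
    (smallest such value) of a set function f over the ground set `items`."""
    rng = np.random.default_rng(seed)
    gamma, alpha = 1.0, 0.0
    for _ in range(n_samples):
        A = set(rng.choice(len(items), rng.integers(1, len(items)), replace=False))
        B = A | set(rng.choice(len(items), rng.integers(1, len(items)), replace=False))
        fA, fB = f(A), f(B)
        # Submodularity ratio: sum of singleton gains vs. the joint gain.
        num = sum(f(A | {e}) - fA for e in B - A)
        if fB - fA > 0:
            gamma = min(gamma, num / (fB - fA))
        # Curvature: marginal gain of an element shrinks inside the larger set.
        for e in A:
            d_small, d_big = fA - f(A - {e}), fB - f(B - {e})
            if d_small > 0:
                alpha = max(alpha, 1.0 - d_big / d_small)
    return gamma, alpha
```

With $\gamma \approx 0.9$ and $\alpha \approx 0.15$, the bound $\frac{1}{\alpha}(1 - e^{-\gamma\alpha})$ evaluates to roughly 0.84, matching the figure quoted above.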
Applications to neurological networks
Information flows in neurological networks: We study the information transfers among the excitatory populations of neurological networks. The dynamical interactions among the excitatory and inhibitory populations in a synaptically coupled neuronal network can be approximated by the Wilson–Cowan model of interacting oscillators (Supplementary Note 8). In neurological networks, a single neuron fires repetitively when injected with a constant current. Therefore, it is reasonable to regard a simulated neuron as a limit-cycle oscillator, at least for a small duration spanning several spikes. We therefore assume that each oscillator i has an asymptotically stable periodic solution with frequency $\omega_i$. The couplings among the neurons act only through weak input currents to the membrane potential of the cell. Thus, we assume weak couplings among the oscillators to prevent "oscillator death"46. Moreover, when the couplings are weak, we can reduce the system of nonlinear equations to a set of equations on a torus using invariant manifold theory46. We then use averaging theory to obtain equations that depend only on the phase differences as (Supplementary Note 7)
$$\dot{\theta}_i(t) \;=\; \omega_i + \sum_{j=1}^{n} \Gamma_{ij}\!\left(\theta_i(t) - \theta_j(t)\right) + \sigma\,\eta_i(t) \qquad (10)$$
where $\Gamma_{ij}$ denotes the coupling function between nodes i and j, and the last term models an external stochastic noise process with covariance $\sigma\sigma^\top$; $\eta(t)$ is a white Gaussian noise process with zero mean and unit covariance (Supplementary Note 7). Because of the white noise, strong deviations may occur that switch the dynamics to other stable states. When the noise levels are reduced, the expected time for such switching between stable phase-locked states becomes arbitrarily large. In this work, we focus on finding the information transfer from one dynamical state to another. Therefore, we assume the noise levels are small enough that no such switching occurs during the relevant time intervals in which the dynamical states communicate. The coupling function is computed by finding the response of the phase difference to an electrical synapse via gap-junction potentials. A sensitivity analysis of the coupling function with respect to noise levels, types of noise, and local noise is given in Supplementary Note 9. Information transfer between any two neurons in the network can be defined as one excitatory neuron's influence on the excitation level of another and depends on the level of phase synchronization over the periodic interval. A popular and widely used theory for computing information transfers among neurons is that effective transmission of information between two oscillating neurons occurs when the pre-synaptic input of the sending neuron reaches the post-synaptic neuron at its maximum excitability phase, thereby amplifying the firing rate of the post-synaptic group. To compute the information transfers, we decompose the dynamics in (10) into a deterministic component and a fluctuating stochastic component. We estimate the stochastic component using linear approximations, yielding a linear continuous stochastic model of the form in (1) (Methods and Supplementary Note 8). We show that changes in the network topology alter the information transfers among neurons and that, by designing the correct topology, we can control the information transfers to modify undesired excitation levels or achieve desired patterns of information transfer. Changes in network topology can be due to endogenous changes promoting physiological or pathological conditions or to exogenous interventions. We fix the initial state covariance of the fluctuating components and the input noise matrix. We first show in Fig. 3a–d that a change in the interactions among the neurons induces a change in the stable phase-locked states and, eventually, in the coupling strengths and information transfers. Next, we show in Fig. 3h–n how the algorithms proposed in the previous section can maximize $T_{8\to 7}$ for the network shown in Fig. 3e. Figure 3f illustrates the oscillatory dynamics of the neurons, and Fig. 3g shows the variations of the phase difference around a stable point.
Update Problem: We consider the neural network in Fig. 3e for both the update and rewiring problems. We fix the initial state covariance of the states and the input noise matrix. The entries $a_{ij}$ of the adjacency matrix are given by the coupling strengths; note that $a_{ij}$ depends on the phase differences and the phase response curve, and $a_{ij} = 0$ if there is no edge from i to j (see Methods). The update problem is to add 5 edges such that $T_{8\to 7}$ is maximized, with the individual edge weights upper bounded by $\bar{w}$. Note that the coupling matrix given in Fig. 3c should not be confused with the weight bounds $\bar{w}$ and $w_T$. The edge weights are denoted by the black (0.1) and purple (0.015) arrows in the network in Fig. 3e.
Rewiring Problem: We continue with the neural network in Fig. 3e for the rewiring problem to maximize $T_{8\to 7}$. We restrict the number of edges that can be reconfigured to 7. Following Algorithm 2, we first remove the three edges sinking in node 7 (excluding the edge from node 8). The remaining 4 edges to be removed are the lowest-ranked edges under the ITEC. The total and per-edge weight bounds $w_T$ and $\bar{w}$ are enforced as in the update problem.
These results validate the postulation that the functional information transfers among the neurons depend on the underlying network topology, which may occasionally change due to physiological or pathological conditions.
Discussion
This report provides a generic mechanism to quantify the information transfers among nodes in complex network systems. For a network system with linear stochastic dynamics, we define information transfer as the difference between the marginal entropies. For weakly coupled oscillators with stochastic fluctuations, we show that the information transfer is a function of the state covariance and the coupling strengths among the oscillators. We show that the formulation is consistent with Schreiber's transfer entropy and Horowitz's thermodynamical information flow (Supplementary Note 3). We provide supporting examples that indicate how information transfer patterns change with changes in network topology. For networks of weakly coupled oscillators, the theory is based on a linear approximation of the phase dynamics around the stable phase-locked states. The method thus highlights the significance of phase synchronization in the study of weakly coupled oscillators.
The structural analysis of the information transfer function reveals that the information transfer is a monotone increasing function under specific conditions. The NP-hardness of the maximization problem forces us to define an approximation guarantee when using the Greedy Algorithm. Moreover, the information transfer function is proven to be neither a submodular nor a supermodular function. These conditions place our study outside the setting of standard submodular or supermodular functions, preventing the use of the standard approximation guarantee of $1 - 1/e$ (of the optimal value) for submodular functions. However, these conditions are favourable because the complexity is reduced by restricting the search space to only those edges with positive contributions. We also show that the information transfer function enjoys an approximation guarantee of more than $1 - 1/e$ when we use the Greedy Algorithm. For assigning the edge weights, we proved that the optimal edge weights for the set of new edges lie on the boundary of the feasible weight set.
Information transfer, in the context of neurological networks, is defined by the amount of influence of one node on the excitation levels of a neighbouring node and depends on the level of phase synchronization. We computed the information transfers among the neurons in a Wilson–Cowan model of 8 neurons. Finally, using the proposed algorithms, we maximized the information transfer between two prespecified excitatory neurons. While the theory in this report focuses on maximizing information transfers by finding a near-optimal topology, there are other possible avenues for controlling information transfer. For example, if the system in (1) is controllable, with an input matrix defining the controllable nodes in the network, then we can study the variations in information transfer due to varying inputs. Hybrid control of the topology (passive) and external control (active) may provide more flexibility in controlling information transfer.
Methods
Algorithms
Modular Addition Technique47: In this approach, we compute the individual contribution of each potential edge to $T_{j\to i}$. The edges are sorted in decreasing order of their contributions, and the first k edges are used for maximizing $T_{j\to i}$.
Complementary Modular Addition Technique47: Given the ground set $\mathcal{S}$, we compute $f(\mathcal{S}) - f(\mathcal{S} \setminus \{e\})$ for each edge e, where f is $T_{j\to i}$. The edges are then sorted in descending order, and the first k links are added to the base topology.
Reducing the phase dynamics into linear stochastic dynamics
We assume that in the unperturbed system ($\sigma = 0$), the phase dynamics in (10) has a stable phase-locked state with constant phase differences and a collective oscillation frequency $\Omega$; that is, $\theta_i^*(t) = \Omega t + \phi_i^*$ for all i. We decompose the phase dynamics into a deterministic reference part, $\theta_i^*(t)$, and a fluctuating part, $\varphi_i(t)$. The solution to the deterministic dynamics is given by $\theta_i^*(t) = \Omega t + \phi_i^*$. Introducing the new coordinates $\varphi_i = \theta_i - \theta_i^*$, (10) can be written in terms of the fluctuations around the phase-locked state. We assume that the noise levels $\sigma$ are small and, linearizing around the stable phase-locked state, we obtain a linear continuous stochastic model of the form in (1).
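As an illustration of this linearization step, the sketch below builds the state matrix of a Kuramoto-type averaged phase model around a phase-locked state; the pairwise coupling convention $\Gamma(\theta_j - \theta_i)$ and all numerical values are assumptions of ours, not the Wilson–Cowan reduction itself.

```python
import numpy as np

def linearized_phase_matrix(K, dGamma, phi_star):
    """State matrix of the fluctuation dynamics d(phi) = J phi dt + sigma dW,
    obtained by linearizing theta_i' = omega + sum_j K_ij Gamma(theta_j - theta_i)
    around the phase-locked state phi_star. dGamma is Gamma's derivative."""
    n = K.shape[0]
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and K[i, j] != 0.0:
                J[i, j] = K[i, j] * dGamma(phi_star[j] - phi_star[i])
    np.fill_diagonal(J, -J.sum(axis=1))   # rows sum to zero: uniform phase shifts are neutral
    return J

# Example: a weakly coupled ring of 4 oscillators with Gamma(x) = sin(x).
K = 0.05 * np.roll(np.eye(4), 1, axis=1)
J = linearized_phase_matrix(K, np.cos, np.zeros(4))
```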
Author contributions
Conceptualization: S.S. and R.P.; methodology: S.S., R.P., U.V., and S.L.; investigation: S.S., and R.P.; writing: S.S. and R.P.; review and editing: S.S., R.P., U.V., and S.L.
Data availability
The codes/data used during the current study are available from the corresponding author upon reasonable request.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-32762-7.
References
- 1. Allahverdyan AE, Janzing D, Mahler G. Thermodynamic efficiency of information and heat flow. J. Stat. Mech: Theory Exp. 2009;2009:P09011. doi:10.1088/1742-5468/2009/09/P09011.
- 2. Tyson JJ, Chen K, Novak B. Network dynamics and cell physiology. Nat. Rev. Mol. Cell Biol. 2001;2:908–916. doi:10.1038/35103078.
- 3. Tkačik G, Callan CG Jr, Bialek W. Information flow and optimization in transcriptional regulation. Proc. Natl. Acad. Sci. 2008;105:12265–12270. doi:10.1073/pnas.0806077105.
- 4. Chen CR, Lung PP, Tay NS. Information flow between the stock and option markets: Where do informed traders trade? Rev. Financ. Econ. 2005;14:1–23. doi:10.1016/j.rfe.2004.03.001.
- 5. Ay N, Polani D. Information flows in causal networks. Adv. Complex Syst. 2008;11:17–41. doi:10.1142/S0219525908001465.
- 6. Peruani F, Tabourier L. Directedness of information flow in mobile phone communication networks. PLoS ONE. 2011;6:e28860. doi:10.1371/journal.pone.0028860.
- 7. Maxwell J. Theory of Heat. Mineola, NY (2001).
- 8. Cafaro C, Ali SA, Giffin A. Thermodynamic aspects of information transfer in complex dynamical systems. Phys. Rev. E. 2016;93:022114. doi:10.1103/PhysRevE.93.022114.
- 9. Gonzalez MC, Hidalgo CA, Barabasi A-L. Understanding individual human mobility patterns. Nature. 2008;453:779–782. doi:10.1038/nature06958.
- 10. Kleinberg JM. Navigation in a small world. Nature. 2000;406:845. doi:10.1038/35022643.
- 11. Bullmore E, Sporns O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009;10:186–198. doi:10.1038/nrn2575.
- 12. Kim J, Kim I, Han SK, Bowie JU, Kim S. Network rewiring is an important mechanism of gene essentiality change. Sci. Rep. 2012;2:1–7. doi:10.1038/srep00900.
- 13. Martin S, Grimwood PD, Morris RG, et al. Synaptic plasticity and memory: An evaluation of the hypothesis. Annu. Rev. Neurosci. 2000;23:649–711. doi:10.1146/annurev.neuro.23.1.649.
- 14. Nabavi S, et al. Engineering a memory with LTD and LTP. Nature. 2014;511:348–352. doi:10.1038/nature13294.
- 15. Whitlock JR, Heynen AJ, Shuler MG, Bear MF. Learning induces long-term potentiation in the hippocampus. Science. 2006;313:1093–1097. doi:10.1126/science.1128134.
- 16. Barnes SJ, Finnerty GT. Sensory experience and cortical rewiring. Neuroscientist. 2010;16:186–198. doi:10.1177/1073858409343961.
- 17. Chklovskii DB, Mel B, Svoboda K. Cortical rewiring and information storage. Nature. 2004;431:782–788. doi:10.1038/nature03012.
- 18. Albieri G, et al. Rapid bidirectional reorganization of cortical microcircuits. Cereb. Cortex. 2015;25:3025–3035. doi:10.1093/cercor/bhu098.
- 19. Barnes SJ, et al. Delayed and temporally imprecise neurotransmission in reorganizing cortical microcircuits. J. Neurosci. 2015;35:9024–9037. doi:10.1523/JNEUROSCI.4583-14.2015.
- 20. Braun U, et al. From maps to multi-dimensional network mechanisms of mental disorders. Neuron. 2018;97:14–31. doi:10.1016/j.neuron.2017.11.007.
- 21. Liu Y-Y, Slotine J-J, Barabási A-L. Controllability of complex networks. Nature. 2011;473:167–173. doi:10.1038/nature10011.
- 22. Bomela W, Wang S, Chou C-A, Li J-S. Real-time inference and detection of disruptive EEG networks for epileptic seizures. Sci. Rep. 2020;10:8653. doi:10.1038/s41598-020-65401-6.
- 23. Wang S, et al. Inferring dynamic topology for decoding spatiotemporal structures in complex heterogeneous networks. Proc. Natl. Acad. Sci. 2018;115:9300–9305. doi:10.1073/pnas.1721286115.
- 24. McIntosh A, et al. Network analysis of cortical visual pathways mapped with PET. J. Neurosci. 1994;14:655–666. doi:10.1523/JNEUROSCI.14-02-00655.1994.
- 25. Bullmore E, et al. How good is good enough in path analysis of fMRI data? Neuroimage. 2000;11:289–301. doi:10.1006/nimg.2000.0544.
- 26. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003;19:1273–1302. doi:10.1016/S1053-8119(03)00202-7.
- 27. Brovelli A, et al. Beta oscillations in a large-scale sensorimotor cortical network: Directional influences revealed by Granger causality. Proc. Natl. Acad. Sci. 2004;101:9849–9854. doi:10.1073/pnas.0308538101.
- 28. Ursino M, Ricci G, Magosso E. Transfer entropy as a measure of brain connectivity: A critical analysis with the help of neural mass models. Front. Comput. Neurosci. 2020;14:45. doi:10.3389/fncom.2020.00045.
- 29. Kirst C, Timme M, Battaglia D. Dynamic information routing in complex networks. Nat. Commun. 2016;7:1–9. doi:10.1038/ncomms11061.
- 30. Vastano JA, Swinney HL. Information transport in spatiotemporal systems. Phys. Rev. Lett. 1988;60:1773. doi:10.1103/PhysRevLett.60.1773.
- 31. Sun J, Bollt EM. Causation entropy identifies indirect influences, dominance of neighbors and anticipatory couplings. Phys. D. 2014;267:49–57. doi:10.1016/j.physd.2013.07.001.
- 32. Friston KJ, et al. Granger causality revisited. Neuroimage. 2014;101:796–808. doi:10.1016/j.neuroimage.2014.06.062.
- 33. Schreiber T. Measuring information transfer. Phys. Rev. Lett. 2000;85:461. doi:10.1103/PhysRevLett.85.461.
- 34. Smirnov DA. Spurious causalities with transfer entropy. Phys. Rev. E. 2013;87:042917. doi:10.1103/PhysRevE.87.042917.
- 35. Kiwata H. Relationship between Schreiber's transfer entropy and Liang–Kleeman information flow from the perspective of stochastic thermodynamics. Phys. Rev. E. 2022;105:044130. doi:10.1103/PhysRevE.105.044130.
- 36. San Liang X, Kleeman R. Information transfer between dynamical system components. Phys. Rev. Lett. 2005;95:244101. doi:10.1103/PhysRevLett.95.244101.
- 37. Sinha S, Vaidya U. Formalism for information transfer in dynamical network. In: 2015 54th IEEE Conference on Decision and Control (CDC), 5731–5736 (IEEE, 2015).
- 38. Sinha S, Vaidya U. On data-driven computation of information transfer for causal inference in discrete-time dynamical systems. J. Nonlinear Sci. 2020;30:1651–1676. doi:10.1007/s00332-020-09620-1.
- 39. Sinha S, Sharma P, Vaidya U, Ajjarapu V. On information transfer-based characterization of power system stability. IEEE Trans. Power Syst. 2019;34:3804–3812. doi:10.1109/TPWRS.2019.2909723.
- 40. Jaynes ET. Information theory and statistical mechanics. Phys. Rev. 1957;106:620. doi:10.1103/PhysRev.106.620.
- 41. Chellappan V, Sivalingam KM, Krithivasan K. A centrality entropy maximization problem in shortest path routing networks. Comput. Netw. 2016;104:1–15. doi:10.1016/j.comnet.2016.04.015.
- 42. Kovačević M, Stanojević I, Šenk V. On the hardness of entropy minimization and related problems. In: 2012 IEEE Information Theory Workshop, 512–516 (IEEE, 2012).
- 43. Estrada E, Hatano N, Benzi M. The physics of communicability in complex networks. Phys. Rep. 2012;514:89–119. doi:10.1016/j.physrep.2012.01.006.
- 44. Nemhauser GL, Wolsey LA, Fisher ML. An analysis of approximations for maximizing submodular set functions-I. Math. Program. 1978;14:265–294. doi:10.1007/BF01588971.
- 45. Chamon LF, Ribeiro A. Near-optimality of greedy set selection in the sampling of graph signals. In: 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1265–1269 (IEEE, 2016).
- 46. Ermentrout GB, Kopell N. Multiple pulse interactions and averaging in systems of coupled neural oscillators. J. Math. Biol. 1991;29:195–217. doi:10.1007/BF00160535.
- 47. Srighakollapu MV, Kalaimani RK, Pasumarthy R. Optimizing network topology for average controllability. Syst. Control Lett. 2021;158:105061. doi:10.1016/j.sysconle.2021.105061.