Abstract
Cooperative dynamics are common in ecology and population dynamics. However, their typically high degree of complexity, with a large number of coupled degrees of freedom, renders them difficult to analyse. Here, we present a graph-theoretical criterion, via a diakoptic (divide-and-conquer) approach, to determine a cooperative system’s stability by decomposing the system’s dependence graph into its strongly connected components (SCCs). In particular, we show that a linear cooperative system is Lyapunov stable if the SCCs of the associated dependence graph all have non-positive dominant eigenvalues, and if no two SCCs with dominant eigenvalue zero are connected by a directed path.
Keywords: cooperative systems, stability analysis, diakoptics, linear systems, population dynamics
1. Introduction
Cooperative systems are a wide class of dynamical systems characterized by a non-negative dependence between components [1]. Common examples are (bio-)chemical reaction networks with mutually activating interactions and compartmental dynamics, where a conserved quantity transits between different compartments or states [2,3]. However, cooperative systems also include non-conserved replicator dynamics, such as (multi-species) population dynamics, where a population of replicators transits between different states/compartments. Examples of the latter are organisms which transit through life cycles or tissue cells (e.g. stem cells) which proliferate, switch between different phenotypes [4] and differentiate during biological development and in renewing tissues. If one considers the dynamics of sub-populations embedded in a larger population, then the equations describing the system are linear: while a population as a whole may be subject to a nonlinear feedback (for example, by a finite carrying capacity), smaller embedded sub-populations compete neutrally with each other without affecting the population as a whole. This renders the dynamics linear.
In this article, we find conditions for the stability of linear cooperative systems, based on graphical criteria of the underlying dependence graph. In an ecological or biological context, stability of populations is required to maintain ecological equilibrium (population of individuals) or a functional biological tissue (population of tissue cells). In particular, instability of a tissue cell population may lead to cancer, thus the study of a cell population’s stability is of high biomedical importance. However, the commonly applied property of asymptotic stability is not viable for linear systems in a biological context, since the only asymptotically stable state is extinction. In these contexts, it is therefore more appropriate to study marginally stable steady states, a form of Lyapunov stability.
While general criteria for a cooperative system’s stability are well established [5], real-world systems can be very complex, with a large number of variables and complex interactions, in which case their analysis is a highly challenging endeavour. Topological features of trajectories, such as compactness, can theoretically be used to determine stability [6,7], but are in practice difficult to apply without explicitly solving the underlying differential equation.
To simplify the analysis of a system, it is useful to represent it as a directed graph in which the dependent variables $x_i(t)$, $i = 1, \dots, n$, are nodes, and links denote dependence relations between those variables. The Jacobian matrix of such a system can be interpreted as an adjacency matrix of an underlying graph representing the mutual dependence of components. Cooperative systems are then defined by non-negativity of the Jacobian’s off-diagonal entries, which corresponds to positive-only weights of links in the network. The corresponding Jacobian matrix is a Metzler matrix and thus methods based on non-negative matrices (and the Perron–Frobenius theorem) can be applied to study them [8–10].
A paradigm to study complex systems is the diakoptic view (divide-and-conquer) [11]: a large interacting system is decomposed into suitable small subsystems, which are studied in isolation, a task which is usually easier to perform. Then a synthesis of subsystems yields the features of the whole system. The analogy between dynamical systems and graphs may be used to apply graph-theoretical tools to perform such a diakoptic decomposition of the system (i.e. graph) into smaller subsystems which can significantly simplify the analysis of stability features of the respective system.
A diakoptic approach, based on the decomposition of the underlying graph into its strongly connected components (SCCs) has been used to determine asymptotic stability of cooperative systems [12]. An SCC is a subset of nodes which are all mutually reachable by directed paths. Simply speaking, a system is asymptotically stable if, and only if, all its SCCs, when decoupled from each other, are asymptotically stable. This holds, since the eigenvalues of the system’s adjacency matrix are the union of the SCCs’ eigenvalues [9,12], which can also easily be checked by evaluating row sums of the dynamical matrix [13]. However, this criterion cannot be straightforwardly generalized to determine marginal stability; conclusions about marginal stability versus instability can in general not be drawn just by considering SCCs in isolation, and a system may be unstable even if no individual SCC is unstable. Only for linear compartmental systems—i.e. cooperative systems that feature a conserved quantity—can the existence of a marginally stable steady state be determined through analysis of SCCs in isolation: if there is at least one singular SCC, a so-called trap, then a non-trivial marginally stable steady state exists [14,15]. However, this criterion cannot be applied to linear cooperative systems in general, when dynamics are not conserved.
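The SCC decomposition at the heart of this diakoptic approach is cheap to compute in practice. As a minimal illustration (the toy matrix and the link convention "j → i whenever A[i, j] ≠ 0" are our own choices for the example, not from the original article), the following Python sketch finds the SCCs of a small dependence graph with scipy:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Toy dependence matrix (Metzler: non-negative off-diagonal entries).
# Convention as in the text: a link j -> i in G(A) whenever A[i, j] != 0.
A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -2.0,  0.0],
              [ 1.0,  0.0, -1.0]])

# csgraph[i, j] != 0 encodes an edge i -> j, so we pass A transposed
# to match the j -> i convention above.
n_scc, labels = connected_components((A.T != 0).astype(float),
                                     directed=True, connection='strong')
print(n_scc, labels)  # nodes 0 and 1 share a label; node 2 is its own SCC
```

Here nodes 0 and 1 form a 2-cycle (one SCC), while node 2 only receives input and forms a singleton SCC.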
Here, we introduce a diakoptic approach to determine the stability of general linear cooperative systems, which is also applicable for non-conserved systems, and allows us to identify conditions for marginal stability. This approach is based on graphical criteria of the underlying dependence graph when decomposed into its SCCs. The stability can then be inferred from (i) the spectrum of the Jacobian matrices of isolated SCCs, and (ii) the hierarchical arrangement of the SCCs. Our main result is theorem 2.9 (illustrated in figure 2) which states that for (marginal) stability to prevail, no SCC may have positive eigenvalues, and any otherwise singular SCCs may not stand in any hierarchical relation to each other, i.e. there may be no (directed) path connecting them. This reflects the principle that the larger and more connected complex systems are, the more likely they are to become unstable [16].
Figure 2.

Illustration of theorem 2.9. Circles are SCCs, according to the condensation mapping as illustrated in figure 1 and coloured according to their type. For a linear cooperative system to be stable, all SCCs must have non-positive eigenvalues (no super-critical SCCs), and any SCCs with dominant eigenvalue zero (critical SCCs) cannot be connected by any directed path. Configurations which allow marginally stable states are shown with a green tick, and those which are unstable with a red cross. For the former, we also mark the trivial blocks. All non-negative, marginally stable states can be determined by setting to zero all the nodes in all the trivial blocks, choosing one 0-eigenvector for each critical (blue) block, and propagating them downstream using equations (2.8) and (2.9).
2. Results
We consider a generic cooperative linear dynamical system of a positive quantity (mass) m on a directed weighted graph with n nodes, whereby we denote mi = mi(t) as the mass on node i = 1, …, n at time t. The state vector of the system is m = (m1, m2, …, mn)T and the system is written as
$\mathrm{d}\mathbf{m}/\mathrm{d}t = A\,\mathbf{m}$,   (2.1)
for an n × n real square matrix
$A = (a_{ij})_{i,j = 1, \dots, n}$.   (2.2)
The condition aij ≥ 0 for i ≠ j defines the system as cooperative, since A is the Jacobian matrix of system (2.1). We note that the system is not necessarily conserved, i.e. the ‘mass’ could replicate, such as a population of biological individuals, cells or viruses.
We consider the underlying directed weighted graph G(A) with transposed adjacency matrix A, that is, the graph with n nodes and a link from j to i, weighted by aij, if and only if aij ≠ 0. This is a finite simple graph with positively weighted edges and arbitrarily weighted self-loops. We wish to relate the stability of the fixed points of (2.1) to the network structure of G(A).
Since A is the Jacobian of (2.1), the stability of a fixed point m*, defined by $A\,\mathbf{m}^* = \mathbf{0}$ (that is, m* is a 0-eigenvector of A), is determined by the spectral properties of A. For the system to be asymptotically stable, all the real parts of the eigenvalues of A must be negative. In this case, however, $\mathbf{m}(t) \to \mathbf{0}$ and the only fixed point is the trivial one, $\mathbf{m}^* = \mathbf{0}$. As we are interested in non-trivial solutions, we focus instead on Lyapunov stable fixed points which are at least marginally stable (also called semi-stable [5]). This is the case if the eigenvalue of A with largest real part is zero and its geometric multiplicity is equal to its algebraic multiplicity [17]. Our main result is a necessary and sufficient condition on the structure of the graph G(A), for the dynamical system to have non-trivial, marginally stable, non-negative solutions. Note that we call a vector m (or, similarly, a matrix) non-negative, written m ≥ 0, if all entries are real and non-negative, and positive, written m > 0, if all entries are real and positive.
First, we decompose G(A) into its strongly connected components, as follows. A (sub-)graph is strongly connected if for any pair of nodes i and j in the graph there is a directed path from i to j and a directed path from j to i, that is, every pair of nodes is mutually reachable. Every directed graph can be partitioned into maximal strongly connected subgraphs, the graph’s strongly connected components (SCCs). The SCCs of a directed graph G form another graph called the condensation of G: in it, each node represents an SCC, and if two SCCs in G are connected by at least one link, then the condensation possesses a link between them, in the same direction as in G (figure 1). The condensation of a directed graph is always a directed acyclic graph and, hence, its nodes (the SCCs of G) admit a topological ordering [18]: an ordering B1, B2, …, Bh (from now on, we will identify the kth connected component of G with its adjacency matrix Bk) such that if there is a link from Bi to Bj then i ≤ j (see figure 1 for an example). We can extend the ordering to the nodes of G so that node u ∈ Bi appears before node v ∈ Bj whenever i ≤ j. With respect to this re-ordering and re-labelling of the nodes of G, the adjacency matrix A of G(A) becomes a lower triangular matrix
$A = \begin{pmatrix} B_1 & 0 & \cdots & 0 \\ C_{21} & B_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ C_{h1} & C_{h2} & \cdots & B_h \end{pmatrix}$,   (2.3)
where h is the number of SCCs of G(A), Bk is the adjacency matrix of the kth SCC (1 ≤ k ≤ h), and Ckl encodes the connectivity from Bl to Bk. This is sometimes called the normal form of a reducible matrix [19]. If there exists a path from k to l (thus k ≤ l), we call Bk upstream of Bl, and Bl downstream of Bk. If Bk is directly connected by a (directed) link to Bl, then we also call Bk immediately upstream of Bl, and Bl immediately downstream of Bk. From now on, we will implicitly assume a topological ordering and notation as above.
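The condensation, its topological ordering, and the resulting normal form can all be obtained mechanically. The following Python sketch (toy matrix and variable names are our own) computes the SCCs with scipy, topologically orders the condensation with Kahn's algorithm, and permutes the matrix into block lower triangular form:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Toy Metzler matrix; link j -> i in G(A) whenever A[i, j] != 0 (i != j).
A = np.array([[-1.0,  0.0,  0.0,  0.0],
              [ 1.0, -2.0,  1.0,  0.0],
              [ 0.0,  1.0, -2.0,  0.0],
              [ 0.0,  0.0,  1.0, -1.0]])
n = A.shape[0]

# SCCs; pass A.T so that csgraph[i, j] != 0 encodes the edge i -> j.
h, labels = connected_components((A.T != 0).astype(float),
                                 directed=True, connection='strong')

# Condensation: one node per SCC, an edge between two SCCs whenever some
# link of G(A) crosses them.  Kahn's algorithm then yields a topological
# ordering B_1, ..., B_h.
edges = {(labels[j], labels[i]) for i in range(n) for j in range(n)
         if i != j and A[i, j] != 0 and labels[i] != labels[j]}
indeg = [0] * h
for u, v in edges:
    indeg[v] += 1
order, queue = [], [u for u in range(h) if indeg[u] == 0]
while queue:
    u = queue.pop(0)
    order.append(u)
    for a, b in edges:
        if a == u:
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)

# Re-order nodes by the topological position of their SCC: the permuted
# matrix is block lower triangular (normal form, eq. (2.3)).
pos = {scc: p for p, scc in enumerate(order)}
perm = sorted(range(n), key=lambda i: pos[labels[i]])
A_norm = A[np.ix_(perm, perm)]
print(A_norm)
```

In the toy example, nodes 1 and 2 form one SCC fed by node 0 and feeding node 3, and the permuted matrix has no non-zero entry above the block diagonal.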
Figure 1.
Decomposition of a directed graph into SCCs and its condensation graph. (a) A directed graph (black dots represent nodes, and arrows directed links) and its SCCs (dashed circles). Note that every node belongs to an SCC, and that an SCC can be a single node. (b) Condensation of the directed graph: black circles represent the SCCs and arrows whenever two SCCs are connected via at least one link (in the direction shown). The condensation of a graph is always a directed acyclic graph and hence admits a topological ordering, shown here as B1 to B8.
Since A, written in the form (2.3), is a lower triangular block matrix, the characteristic polynomial of A, $\chi_A(\lambda) = \det(A - \lambda I)$, is the product of the characteristic polynomials of the Bk’s
$\chi_A(\lambda) = \prod_{k=1}^{h} \chi_{B_k}(\lambda)$.   (2.4)
Thus, the spectrum of A—seen as a multiset—is the union of the spectra of the Bk’s, and the algebraic multiplicity of the eigenvalues is preserved.
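This multiset-union property is easy to verify numerically. A small sketch (the toy blocks and coupling are our own choices):

```python
import numpy as np

# Block lower triangular A with two diagonal blocks B1, B2, as in (2.3).
B1 = np.array([[-2.0,  1.0],
               [ 1.0, -2.0]])    # eigenvalues -1 and -3
B2 = np.array([[-1.0]])          # eigenvalue -1
C21 = np.array([[0.5, 0.0]])     # coupling from B1 into B2
A = np.block([[B1, np.zeros((2, 1))],
              [C21, B2]])

# The spectrum of A is the multiset union of the blocks' spectra,
# independent of the coupling C21.
eig_A = np.sort(np.linalg.eigvals(A).real)
eig_blocks = np.sort(np.concatenate([np.linalg.eigvals(B1).real,
                                     np.linalg.eigvals(B2).real]))
print(eig_A, eig_blocks)  # both approximately [-3, -1, -1]
```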
Since all off-diagonal elements of A (and hence of each Bk) are non-negative, and each Bk is the adjacency matrix of a strongly connected graph, the matrices Bk are irreducible Metzler matrices, to which the Perron–Frobenius theorem applies after a suitable shift: for sufficiently large c > 0, Bk + cI is non-negative and irreducible, and its eigenvalues are those of Bk shifted by c [20]. Therefore, each matrix Bk has a real eigenvalue μk with (strictly) largest real part, which is simple and has a positive eigenvector. We call μk the dominant eigenvalue of the matrix Bk.
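The shift argument can be mirrored directly in code to obtain the dominant eigenvalue and its positive eigenvector. A sketch (the function name and the example matrix are our own assumptions):

```python
import numpy as np

def dominant_eigenpair(B):
    """Dominant eigenvalue and eigenvector of an irreducible Metzler matrix.

    Shifting B by c*I with c large enough makes the matrix non-negative,
    so the Perron-Frobenius theorem applies to B + c*I; shifting back
    gives the real, simple eigenvalue of B with largest real part,
    together with a positive eigenvector.
    """
    c = max(0.0, -B.diagonal().min()) + 1.0
    vals, vecs = np.linalg.eig(B + c * np.eye(B.shape[0]))
    k = np.argmax(vals.real)
    mu = vals[k].real - c
    v = vecs[:, k].real
    return mu, v / v.sum()   # normalize; the Perron vector has one sign

B = np.array([[-2.0,  1.0],
              [ 3.0, -2.0]])
mu, v = dominant_eigenpair(B)
print(mu, v)  # mu = -2 + sqrt(3) < 0, v positive
```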
We now introduce some further terminology. We call each SCC, and equivalently its adjacency matrix Bk, a block of the system (we use the term ‘block’ and the notation Bk for both the matrix and its graph). We call a block critical if its dominant eigenvalue μk = 0, sub-critical if μk < 0 and super-critical if μk > 0. Correspondingly, we define the index subsets of critical, sub-critical and super-critical blocks; in particular, we denote by $I_c = \{k : \mu_k = 0\}$ the set of critical indices. The first things we note are (e.g. [12])
Lemma 2.1. —
If at least one block Bk of A is super-critical, then the system (2.1) is unstable.
Lemma 2.2. —
The system (2.1) is asymptotically stable if and only if all blocks Bk of A are sub-critical.
These lemmas follow immediately from the fact that a system is unstable if at least one eigenvalue of A has positive real part, and it is asymptotically stable if and only if all eigenvalues have negative real parts, together with the property that the spectrum of A is the multi-set union of the spectra of the Bk (note, however, that the ‘if and only if’ statement only holds for lemma 2.2) [12]. In the situation of lemma 2.2, observe that $\det A \neq 0$ and hence $\mathbf{m}^* = \mathbf{0}$ is the only fixed point of the system.
Lemmas 2.1 and 2.2 cover all cases where any super-critical blocks exist, or only sub-critical ones. In these cases, the system is either unstable, or has only a trivial (zero) fixed point. In the following, we will consider only the remaining cases when no super-critical blocks exist, but there is at least one critical block, and investigate the existence of non-trivial, non-negative (so that each node supports a non-negative fraction of the ‘mass’) marginally stable fixed points.
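The block classification itself can be automated by combining the SCC decomposition with the dominant eigenvalues. A hedged sketch (function name, tolerance and toy matrix are our own):

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def classify_blocks(A, tol=1e-9):
    """Label each SCC of G(A) as 'sub', 'critical' or 'super' by the sign
    of its dominant eigenvalue (up to a numerical tolerance)."""
    n = A.shape[0]
    h, labels = connected_components((A.T != 0).astype(float),
                                     directed=True, connection='strong')
    kinds = []
    for k in range(h):
        idx = np.flatnonzero(labels == k)
        Bk = A[np.ix_(idx, idx)]
        mu = np.linalg.eigvals(Bk).real.max()   # dominant eigenvalue
        kinds.append('critical' if abs(mu) < tol
                     else ('sub' if mu < 0 else 'super'))
    return labels, kinds

# Example: a critical 2-cycle feeding a sub-critical node.
A = np.array([[-1.0,  1.0,  0.0],
              [ 1.0, -1.0,  0.0],
              [ 0.5,  0.0, -1.0]])
labels, kinds = classify_blocks(A)
print(labels, kinds)  # one critical block {0, 1}, one sub-critical block {2}
```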
If no super-critical, and at least one critical, block exists, the dominant eigenvalue of A is zero and, according to the Perron–Frobenius theorem, there exist non-trivial eigenvectors m* for the eigenvalue zero. It is assured that all such m* are equilibrium points of system (2.1); however, to be a (Lyapunov) stable equilibrium it is required that the algebraic multiplicity of eigenvalue zero is equal to its geometric one, or equivalently, equal to the dimension of the nullspace of A. We will approach the latter question by explicitly constructing such equilibrium sets.
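The requirement that geometric and algebraic multiplicities agree can fail in the simplest possible setting: two critical one-node SCCs joined by a single link. A toy numerical illustration of this failure mode (our own example, anticipating theorem 2.9):

```python
import numpy as np
from scipy.linalg import expm

# Two critical singleton SCCs connected by a path: node 1 -> node 2.
# Eigenvalue 0 has algebraic multiplicity 2 but geometric multiplicity 1,
# so the system cannot be marginally stable.
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])

geom_mult = A.shape[0] - np.linalg.matrix_rank(A)   # nullity of A
print(geom_mult)   # 1, while eigenvalue 0 has algebraic multiplicity 2

# The flow grows linearly (secular growth): m2(t) = m2(0) + t * m1(0).
m_t = expm(10.0 * A) @ np.array([1.0, 0.0])
print(m_t)         # [1, 10]
```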
Let us first write the equilibrium condition of the dynamical system (2.1), using (2.3), as
$A\,\mathbf{m}^* = \mathbf{0}$, with $A$ in the form (2.3),   (2.5)
i.e. the equilibrium vector m* is decomposed into its projections onto the sub-spaces of the Bk, in the form $\mathbf{m}^* = (\mathbf{m}^*_1, \mathbf{m}^*_2, \dots, \mathbf{m}^*_h)^T$. For simplicity, we call $\mathbf{m}^*_k$ the steady state on Bk. We further call a block Bk trivial if $\mathbf{m}^*_k = \mathbf{0}$ for all non-negative marginally stable fixed points m* of system (2.5), and non-trivial otherwise.1 In other words, a trivial block is one that does not support any positive fraction of the ‘mass’ for any non-negative fixed point.
Our first result is a formula for the steady states on sub-critical blocks Bk. Let us consider the kth row of (2.5),
$B_k\,\mathbf{m}^*_k + \sum_{l<k} C_{kl}\,\mathbf{m}^*_l = \mathbf{0}$,   (2.6)
where Bk is a sub-critical block. Since all eigenvalues of Bk have negative real parts, Bk is invertible, so that we obtain a recursive formula for the steady state:
$\mathbf{m}^*_k = -B_k^{-1} \sum_{l<k} C_{kl}\,\mathbf{m}^*_l$.   (2.7)
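Equation (2.7) is a single linear solve per sub-critical block. A minimal numerical sketch (the toy block and inflow are our own):

```python
import numpy as np

# A sub-critical block B_k receiving input from upstream blocks.  Its
# steady state follows from eq. (2.7): m_k = -B_k^{-1} sum_l C_kl m_l.
Bk = np.array([[-2.0,  1.0],
               [ 1.0, -2.0]])      # dominant eigenvalue -1 < 0
inflow = np.array([1.0, 0.0])      # sum_l C_kl m_l from upstream
mk = np.linalg.solve(-Bk, inflow)
print(mk)  # approximately [0.667, 0.333]: positive, since -Bk^{-1} > 0

# Consistency with the equilibrium condition (2.6):
assert np.allclose(Bk @ mk + inflow, 0.0)
```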
Let If ⊆ Ic denote the indices of the critical SCCs for which there are no other critical SCCs downstream. Let us call them final critical blocks. With this terminology, we have, from the recursion relation above, the following:
Theorem 2.3. —
If Bk is a sub-critical block of A and equation (2.5) holds, then m*k, the steady state on Bk, is uniquely determined by the final critical blocks upstream of Bk, namely
$\mathbf{m}^*_k = \sum_{l \in I_f} S_{kl}\,\mathbf{m}^*_l$,   (2.8)

where

$S_{kl} = \sum_{(l_1, \dots, l_n) \in P_{lk}} \big(-B_{l_n}^{-1} C_{l_n l_{n-1}}\big)\big(-B_{l_{n-1}}^{-1} C_{l_{n-1} l_{n-2}}\big) \cdots \big(-B_{l_2}^{-1} C_{l_2 l_1}\big)$,   (2.9)

and $P_{lk}$ is the set of all paths from Bl (l ∈ If) to Bk, written as a sequence of blocks $(l_1, l_2, \dots, l_n)$ with $l_1 = l$ and $l_n = k$, where n is the number of blocks along the path.
This follows directly if we apply relation (2.7) recursively to all steady states of sub-critical blocks on its right-hand side, using that, when propagating upstream, no critical block can be encountered before a final critical block is encountered.
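For a single path from a final critical block into a sub-critical one, the path product of theorem 2.3 reduces to one factor. A toy check (our own two-block example):

```python
import numpy as np

# Sketch of theorem 2.3 on a two-block chain: a final critical block B1
# feeding a sub-critical block B2 through C21 (a single path).
B1 = np.array([[-1.0,  1.0],
               [ 1.0, -1.0]])     # critical: dominant eigenvalue 0
B2 = np.array([[-1.0]])           # sub-critical
C21 = np.array([[0.5, 0.0]])
A = np.block([[B1, np.zeros((2, 1))],
              [C21, B2]])

m1 = np.array([1.0, 1.0])                 # 0-eigenvector of B1
S21 = -np.linalg.inv(B2) @ C21            # one-step path product
m2 = S21 @ m1                             # propagated steady state
m_star = np.concatenate([m1, m2])
print(m_star)  # [1, 1, 0.5]: a non-negative equilibrium of the full system
```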
Theorem 2.3 assures that the steady state on any sub-critical block is uniquely defined by the steady states on all critical blocks upstream of the former. Furthermore, we can conclude:
Corollary 2.4. —
If Bk is a sub-critical block of A, then Bk is trivial if and only if all Bl immediately upstream of Bk are trivial.
Proof. —
From equation (2.7), it directly follows that if all Bl immediately upstream are trivial ($\mathbf{m}^*_l = \mathbf{0}$), then Bk is trivial ($\mathbf{m}^*_k = \mathbf{0}$). Now let us consider the case that at least one Bl immediately upstream has $\mathbf{m}^*_l \neq \mathbf{0}$. We first note that since Bk is an irreducible Metzler matrix with $\mu_k < 0$, −Bk is a non-singular M-matrix, and its inverse is a positive matrix (shown in [21]). Thus, $\mathbf{m}^*_k$ is positive if at least one $C_{kl}\,\mathbf{m}^*_l \neq \mathbf{0}$ (recall that m* and the Ckl’s are non-negative). Therefore, it follows: if Bk is trivial, i.e. $\mathbf{m}^*_k = \mathbf{0}$, then for all immediately upstream Bl, $\mathbf{m}^*_l = \mathbf{0}$, and hence Bl is trivial. ▪
Now we make a topological characterization of the trivial blocks.
Theorem 2.5. —
A block is trivial if and only if
(i) it is upstream of a critical block, or
(ii) it is a sub-critical block which is not downstream of a critical block.
Thereby all trivial blocks can be easily identified by inspecting the condensed graph and its critical blocks (figure 1).
Proof. —
To prove theorem 2.5, consider an equilibrium point m*. Then equation (2.5) holds, and in particular its kth row, equation (2.6). Let Bk be a critical block: it has an eigenvalue zero (μk = 0 by definition), and thus Bk is not invertible. Let us multiply both sides of equation (2.6) with the matrix exponential $\mathrm{e}^{B_k t}$ to yield

$\mathrm{e}^{B_k t} B_k\,\mathbf{m}^*_k + \mathrm{e}^{B_k t} \sum_{l<k} C_{kl}\,\mathbf{m}^*_l = \mathbf{0}$,   (2.10)

where we used that a square matrix M commutes with its exponential, eM M = MeM. In general, $\mathrm{e}^{Mt}\mathbf{x}$ is a solution of the linear ODE $\dot{\mathbf{x}} = M\mathbf{x}$ and thus converges to a linear combination of dominant eigenvectors (eigenvectors of the dominant eigenvalues) of M. Since Bk is critical, the corresponding dominant eigenvalue is zero and thus $B_k \mathrm{e}^{B_k t}\,\mathbf{m}^*_k \to \mathbf{0}$ for t → ∞. This means that $\mathrm{e}^{B_k t} \sum_{l<k} C_{kl}\,\mathbf{m}^*_l \to \mathbf{0}$ for t → ∞. Let us call $L = \lim_{t\to\infty} \mathrm{e}^{B_k t}$ and $\mathbf{v} = \sum_{l<k} C_{kl}\,\mathbf{m}^*_l$. We then have Lv = 0 and we want to show that v = 0. This is not true for a general vector unless L is invertible, but it does hold for non-negative vectors such as v. In fact, the matrix L is not invertible in general, as all its eigenvalues are zero except a simple eigenvalue 1 with a positive (left) eigenvector u (see lemma 2.7 below). In this case, uL = u and $\mathbf{u}\,\mathbf{v} = \mathbf{u}L\mathbf{v} = 0$, a contradiction, since u is positive and v is non-negative, unless v = 0.
All in all, we conclude that $\sum_{l<k} C_{kl}\,\mathbf{m}^*_l = \mathbf{0}$. Note that all entries of the matrices Ckl and of the vectors $\mathbf{m}^*_l$ are non-negative, so this can only be the case if, for all l < k, Ckl = 0 or $\mathbf{m}^*_l = \mathbf{0}$. Since, for all immediately upstream blocks, we have Ckl ≠ 0, it follows that
Lemma 2.6. —
All blocks immediately upstream of a critical block are trivial.
Crucially, from corollary 2.4 and lemma 2.6, it follows that all blocks Bm immediately upstream of any trivial block Bl are trivial (either Bl is critical, in which case lemma 2.6 applies, or sub-critical and trivial, in which case corollary 2.4 applies). By applying this argument recursively, the first part of theorem 2.5 follows. The second part is an immediate consequence of corollary 2.4. ▪
To complete the proof above, we state and prove the following.
Lemma 2.7. —
Let B be an irreducible Metzler matrix with shifted Perron–Frobenius eigenvalue 0 and positive left eigenvector u. Then the limit matrix $L = \lim_{t\to\infty} \mathrm{e}^{tB}$ exists and satisfies uL = u.
Proof. —
Define $f_t(x) = \mathrm{e}^{tx}$ for t > 0 and write B in Jordan normal form as B = PJP−1. By definition [22], the matrix function $f_t(B) = \mathrm{e}^{tB}$ equals $P f_t(J) P^{-1}$, where $f_t(J)$ is the block diagonal matrix obtained by applying $f_t$ to each diagonal Jordan block, Ji, of J as

$f_t(J_i) = \begin{pmatrix} f_t(\lambda_i) & f_t'(\lambda_i) & \cdots & \frac{f_t^{(s_i-1)}(\lambda_i)}{(s_i-1)!} \\ & f_t(\lambda_i) & \ddots & \vdots \\ & & \ddots & f_t'(\lambda_i) \\ & & & f_t(\lambda_i) \end{pmatrix}$,

where $s_i$ is the size of $J_i$ and $f_t^{(k)}(\lambda) = t^k \mathrm{e}^{t\lambda}$.
The eigenvalue of B with largest real part is 0 (dominant eigenvalue), hence $\lim_{t\to\infty} f_t^{(k)}(\lambda_i) = \lim_{t\to\infty} t^k \mathrm{e}^{t\lambda_i} = 0$ if λi ≠ 0 (since then Re λi < 0), for all k ≥ 0; for λi = 0, which is a simple eigenvalue, the Jordan block is 1 × 1 and $f_t(0) = 1$. All in all, L = PMP−1, where M is the zero matrix except for a single 1 on the diagonal. Its eigenvectors (the columns of P) are the same as the (generalized) eigenvectors of B = PJP−1. Hence the left 0-eigenvector u of B becomes a left 1-eigenvector of L, uL = u. ▪
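Lemma 2.7 can be checked numerically by evaluating the matrix exponential at a large time. A sketch (the toy matrix is our own choice):

```python
import numpy as np
from scipy.linalg import expm

# Irreducible Metzler matrix with dominant eigenvalue 0 (a critical block).
B = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])
u = np.array([1.0, 1.0])       # positive left 0-eigenvector: u @ B = 0

# L = lim_{t->inf} e^{tB}, approximated at a large time: it projects onto
# the dominant (0-)eigenspace and satisfies u L = u (lemma 2.7).
L = expm(50.0 * B)
print(L)   # approximately [[0.5, 0.5], [0.5, 0.5]]
```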
We can also easily conclude from theorem 2.5 and equation (2.10):
Corollary 2.8. —
The steady states on a non-trivial critical block Bk (called a free block) form the one-dimensional family of dominant eigenvectors (of eigenvalue zero) of Bk. We can write these as $\mathbf{m}^*_k = c_k\,\mathbf{v}_k$, where $c_k \geq 0$ is a free parameter, and $\mathbf{v}_k$ is a (normalized) dominant eigenvector of Bk.
Theorems 2.3 and 2.5, and corollary 2.8, allow us to construct the most generic steady state of system (2.1), that is, the nullspace of A. From theorem 2.5, it follows that the set of non-trivial critical SCCs is exactly the set of final critical blocks, as defined before theorem 2.3. Hence If ⊆ Ic is also the index set of non-trivial critical blocks. All in all, a steady-state vector $\mathbf{m}^* = (\mathbf{m}^*_1, \mathbf{m}^*_2, \dots, \mathbf{m}^*_h)^T$ has the form
$\mathbf{m}^*_k = \begin{cases} c_k\,\mathbf{v}_k & \text{if } k \in I_f \text{ (free blocks)}, \\ \text{given by equation (2.8)} & \text{if } B_k \text{ is sub-critical and downstream of a critical block}, \\ \mathbf{0} & \text{otherwise (trivial blocks)}, \end{cases}$   (2.11)

where the $c_k \geq 0$ are free parameters and $\mathbf{v}_k$ is the (normalized) dominant eigenvector of $B_k$.
This can also be written as
$\mathbf{m}^* = \sum_{l \in I_f} c_l\,\mathbf{w}_l$,   (2.12)

where each $\mathbf{w}_l$ is a 0-eigenvector of A which is non-zero only on Bl and on the sub-critical blocks downstream of it, and the $c_l \geq 0$ are free parameters.
Hence, the dimension of the nullspace of A (equivalently, the geometric multiplicity of the eigenvalue zero) is equal to the number of non-trivial critical blocks. The algebraic multiplicity, on the other hand, is the number of all critical blocks, since according to the Perron–Frobenius theorem for Metzler matrices, each critical block has a simple (algebraic multiplicity 1) eigenvalue zero and thus contributes once to the multiset of eigenvalues of A, by equation (2.4). Recall that system (2.1) is marginally stable if and only if the dominant eigenvalue of A is zero and its geometric multiplicity is equal to the algebraic multiplicity; this is thus the case precisely when there is at least one critical block, no super-critical blocks exist, and all critical blocks are non-trivial. According to theorem 2.5, the latter holds if no critical block is upstream of another critical block, or equivalently, if there are no paths between any two critical blocks. We thereby arrive at
Theorem 2.9. —
A dynamical system in the form of equation (2.1) with Jacobian matrix A is marginally stable if, and only if, these conditions hold:
(a) There are no super-critical blocks.
(b) There is at least one critical block.
(c) There are no (directed) paths in G(A) which connect two critical blocks.
Parts (a) and (b) follow from lemmas 2.1 and 2.2, while part (c) follows from equation (2.12) and theorem 2.5, which imply that if there is a path between two critical blocks, one of them must be trivial. This theorem is illustrated in figure 2. To summarize our findings, including the necessary definitions, we can express the stability criteria of cooperative dynamical systems as follows:
Theorem 2.10. —
Let A = [aij], with aij ≥ 0 if i ≠ j, be the Jacobian matrix of the linear cooperative system in equation (2.1), and G(A) its weighted graph, defined by the edge weights aij for all i, j. Let B1, …, Bh be the adjacency matrices of the strongly connected components (SCCs) of G(A). We define an SCC as critical if its dominant eigenvalue is zero, sub-critical, if its dominant eigenvalue is negative, and super-critical if its dominant eigenvalue is positive. Then:
- 1.
The system is asymptotically stable if and only if all SCCs are sub-critical. In that case, the steady state vanishes (is the zero-vector).
- 2.
Otherwise, the system is marginally stable if
(a) there are no super-critical SCCs, and
(b) there are no paths in G(A) which connect two critical SCCs.
- 3.
Otherwise, the system is unstable.
The corresponding equilibrium set of the system is given by equation (2.11).
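Theorem 2.10 translates directly into a short computational test. The following Python sketch (function name, tolerance and the examples are our own, not from the article) classifies a linear cooperative system as asymptotically stable, marginally stable or unstable:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, shortest_path

def stability(A, tol=1e-9):
    """Classify the linear cooperative system dm/dt = A m following
    theorem 2.10: 'asymptotic', 'marginal' or 'unstable'."""
    n = A.shape[0]
    adj = (A.T != 0).astype(float)      # edge i -> j iff A[j, i] != 0
    h, labels = connected_components(adj, directed=True,
                                     connection='strong')
    # Dominant eigenvalue of each SCC (block).
    mu = np.empty(h)
    for k in range(h):
        idx = np.flatnonzero(labels == k)
        mu[k] = np.linalg.eigvals(A[np.ix_(idx, idx)]).real.max()
    if (mu > tol).any():
        return 'unstable'               # a super-critical SCC exists
    critical = np.flatnonzero(np.abs(mu) <= tol)
    if critical.size == 0:
        return 'asymptotic'             # all SCCs sub-critical
    # Condition (c): no directed path may connect two critical SCCs.
    # Reachability between SCCs equals reachability between any of their
    # representative nodes, since nodes within an SCC reach each other.
    dist = shortest_path(adj, directed=True, unweighted=True)
    for a in critical:
        for b in critical:
            if a == b:
                continue
            ia = np.flatnonzero(labels == a)[0]
            ib = np.flatnonzero(labels == b)[0]
            if np.isfinite(dist[ia, ib]):
                return 'unstable'
    return 'marginal'

# Two disconnected critical 2-cycles: marginally stable.
blk = np.array([[-1.0, 1.0], [1.0, -1.0]])
A_ok = np.block([[blk, np.zeros((2, 2))], [np.zeros((2, 2)), blk]])
print(stability(A_ok))   # 'marginal'

# Couple one cycle into the other: a path now joins two critical SCCs.
A_bad = A_ok.copy()
A_bad[2, 0] = 0.5
print(stability(A_bad))  # 'unstable'
```

The coupled example leaves the spectrum unchanged (still {0, −2, 0, −2}) but reduces the geometric multiplicity of the eigenvalue zero to one, which is exactly the failure mode excluded by condition (c).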
3. Conclusion
The conditions stated in theorem 2.10 prescribe a way to simplify the analysis of a high-dimensional linear cooperative system by decomposition into lower dimensional subsystems, the strongly connected components (SCCs) of the dynamical system’s dependence graph. By spectral analysis of these SCCs and checking whether the topological conditions of theorem 2.10 are fulfilled, the system’s stability can be determined. In particular, marginal stability is of importance for linear systems, since marginally stable states represent the only possible non-vanishing stable states—i.e. not identical to the zero-vector—called steady states. Such a steady state features conservation of the quantity of interest ‘on average’, i.e. the mean value stays constant even if the quantity itself is not strictly conserved.2 By contrast, asymptotically stable states are trivially vanishing for linear systems.
Moreover, our analysis revealed that a critical SCC, i.e. one with dominant eigenvalue zero, uniquely determines the steady state of the (necessarily sub-critical) SCCs upstream and downstream of it. In particular, the steady-state configurations of all SCCs downstream of a critical SCC generally do not vanish (theorem 2.3), and are uniquely determined by equations (2.8) and (2.9), while the steady-state configurations of all SCCs upstream of it must vanish (theorem 2.5). This leads to an explicit formula (equations (2.11) and (2.12)) to construct the steady state of the whole system from knowledge of the steady states on the critical SCCs only.
The results, theorems 2.3, 2.5, 2.9 and 2.10, can be seen as a generalization of a similar condition found for linear compartmental systems, i.e. linear cooperative systems where the quantity of interest is strictly conserved (apart from external sources and sinks) [2]. For those systems, it has been found that the existence of at least one singular SCC (having an eigenvalue zero) is sufficient to ensure a non-trivial steady state [14,15]. Notably, due to the conservation law in compartmental systems, no SCCs with positive eigenvalue may exist, meaning that all singular SCCs are critical, and furthermore, no critical SCCs can have any outgoing links. Hence, the existence of a singular SCC in a compartmental system automatically implies the conditions of our theorem 2.9. The stability conditions of theorems 2.9 and 2.10 therefore represent a generalization to cooperative systems where the quantity of interest is not necessarily conserved, and can therefore also be applied to population dynamics where individuals can replicate and transit between different states. Examples are populations of stem cells in animal tissues, which differentiate, thereby changing their cell type. Note that while (cell) populations as a whole are often subject to feedback and thus follow nonlinear dynamics, when considering sub-populations therein, which compete neutrally, the corresponding subsystem is linear.
In general, cooperative systems can be highly complex, with a large number of variables and very complex interactions, hence represented by large and often irregular graphs. The method presented here is a way to significantly simplify the analysis of a wide range of systems, ranging from cooperative (bio-)chemical reactions to complex population dynamics, by decomposing the systems into their strongly connected components. We have shown that a spectral analysis of each SCC, and a simple graphical criterion of the connectivity between SCCs (theorem 2.10, figure 2) completely determine the stability of any linear cooperative system. This provides a unique insight into the possible configurations of cooperative systems and demonstrates the power of graph-theoretic techniques in the analysis of complex dynamical systems.
Supplementary Material
Acknowledgements
We thank David Chillingworth for hinting us towards some literature on cooperative systems.
Footnotes
1. Note that m*k is the kth sub-space component of the global steady state m* of A, but not necessarily the steady state of the isolated subsystem of Bk.
2. We note that since a marginally stable steady state is a right 0-eigenvector, fulfilling $A\,\mathbf{m}^* = \mathbf{0}$, there must also exist a left 0-eigenvector $\mathbf{u}$, fulfilling $\mathbf{u}\,A = \mathbf{0}$. The latter equation defines a generalized conservation law with coefficients $u_i$; however, this conservation law may be non-trivial and cannot, in general, be read off directly from the system (2.1).
Data accessibility
This article has no additional data.
Authors' contributions
P.G. and R.J.S.-G. carried out the mathematical analysis, P.G., B.D.M. and C.P. conceived and designed the project. All authors helped draft the manuscript. All authors gave final approval for publication.
Competing interests
We declare we have no competing interest.
Funding
P.G. was supported by a Medical Research Council New Investigator Research (grant no. MR/R026610/1). C.P. was supported by a Studentship of the Institute of Life Sciences.
References
- 1. Hirsch MW, Smith H. 2006. Monotone dynamical systems. In Handbook of differential equations: ordinary differential equations, Vol. 3 (eds P Drábek, A Fonda), p. 57. Amsterdam, The Netherlands: Elsevier.
- 2. Godfrey K. 1983. Compartmental models and their application. New York, NY: Academic Press.
- 3. Walter G, Contreras M. 1999. Compartmental modeling with networks. Boston, MA: Birkhauser.
- 4. Greulich P, Simons BD. 2016. Dynamic heterogeneity as a strategy of stem cell self-renewal. Proc. Natl Acad. Sci. USA 113, 7509 (10.1073/pnas.1602779113)
- 5. Haddad WM, Chellaboina VS, Hui Q. 2010. Nonnegative and compartmental dynamic systems. Princeton, NJ: Princeton University Press.
- 6. Hirsch MW. 1985. Systems of differential equations which are competitive or cooperative: II. Convergence almost everywhere. SIAM J. Math. Anal. 16, 423–439 (10.1137/0516030)
- 7. Hirsch MW, Smith HL. 2004. Competitive and cooperative systems: a mini-review. In Positive systems (eds L Benvenuti, A De Santis, L Farina), pp. 183–190. New York, NY: Springer.
- 8. Plemmons RJ. 1977. M-matrix characterizations. I-nonsingular M-matrices. Linear Algebra Appl. 18, 175–188. (10.1016/0024-3795(77)90073-8)
- 9. Berman A, Neumann M, Stern RJ. 1989. Nonnegative matrices in dynamic systems. New York, NY: John Wiley & Sons.
- 10. Berman A, Plemmons RJ. 1994. Nonnegative matrices in the mathematical sciences. New York, NY: Academic Press.
- 11. Kron G. 1963. Diakoptics: the piecewise solution of large-scale systems. London, UK: MacDonald.
- 12. Kevorkian AK. 1975. Structural aspects of large dynamic systems. IFAC Proc. Volumes 8, 101–111. (10.1016/S1474-6670(17)67541-4)
- 13. Siljak DD. 1975. When is a complex ecosystem stable? Math. Biosci. 25, 25–50. (10.1016/0025-5564(75)90050-4)
- 14. Foster DM, Jacquez JA. 1975. Multiple zeros for eigenvalues and the multiplicity of traps of a linear compartmental system. Math. Biosci. 26, 89–97. (10.1016/0025-5564(75)90096-6)
- 15. Jacquez JA, Simon CP. 1993. Qualitative theory of compartmental systems. SIAM Rev. 35, 43–79. (10.1137/1035003)
- 16. May RM. 1972. Will a large complex system be stable? Nature 238, 413–414. (10.1038/238413a0)
- 17. Åström KJ, Murray RM. 2008. Feedback systems: an introduction for scientists and engineers. Princeton, NJ: Princeton University Press.
- 18. Cormen TH. 2009. Introduction to algorithms. New York, NY: MIT Press.
- 19. Varga RS. 2000. Matrix iterative analysis. Berlin, Germany: Springer.
- 20. MacCluer CR. 2000. The many proofs and applications of Perron’s theorem. SIAM Rev. 42, 487–498. (10.1137/S0036144599359449)
- 21. Meyer CD, Stadelmeier MW. 1978. Singular M-matrices and inverse positivity. Linear Algebra Appl. 22, 139–156. (10.1016/0024-3795(78)90065-4)
- 22. Higham NJ, Al-Mohy AH. 2010. Computing matrix functions. Acta Numer. 19, 1–57. (10.1017/S0962492910000036)