Abstract
This paper studies the direction of change of steady states to parameter perturbations in chemical reaction networks, and, in particular, to changes in conserved quantities. Theoretical considerations lead to the formulation of a computational procedure that provides a set of possible signs of such sensitivities. The procedure is purely algebraic and combinatorial, only using information on stoichiometry, and is independent of the values of kinetic constants. Three examples of important intracellular signal transduction models are worked out as an illustration. In these examples, the set of signs found is minimal, but there is no general guarantee that the set found will always be minimal in other examples. The paper also briefly discusses the relationship of the sign problem to the question of uniqueness of steady states in stoichiometry classes.
Inspec keywords: chemical reactions, biochemistry, cellular biophysics, algebra, combinatorial mathematics, stoichiometry
Other keywords: steady states, chemical reaction networks, parameter perturbations, algebraic method, combinatorial method, stoichiometry, intracellular signal transduction models
1 Introduction
A key question in the mathematical analysis of chemical reaction networks is the characterisation of sensitivities of steady states to parameter perturbations [1–7]. In the time scale of cellular signalling, assuming no turn‐over due to expression and degradation or dilution, one such parameter could be, for example, the total concentration of a certain enzyme in its various activity states. The value of this parameter might be manipulated experimentally in various forms in order to achieve knock‐downs or up‐regulation. Often, especially in the context of inhibitors for therapeutic purposes, it is desirable to be able to predict the sign of the effect of such perturbations on states, in a manner that depends only on the structure of the network of reactions and not on the actual values of other parameters, such as kinetic constants, which are typically very poorly characterised.
1.1 An example
We introduce the problem to be studied through an example, an enzymatic network consisting of a cascade of two reversible covalent modifications, see Fig. 1.
Fig. 1.

An enzymatic cascade
Specifically, we consider the following reaction network:
E + M0 ⇌ A → E + M1
G + M1 ⇌ B → G + M0
M1 + N0 ⇌ C → M1 + N1
F + N1 ⇌ D → F + N0 | (1) |
Here E is a constitutively active kinase which drives a phosphorylation reaction in which a substrate M0 is converted to an active form M1, which can be dephosphorylated back into the inactive form by a constitutively active phosphatase G. There are two intermediate enzyme–substrate complexes, A = M0·E and B = M1·G, for these enzymatic reactions. The active form M1 is itself a kinase which drives a phosphorylation reaction in which a second substrate N0 is converted to an active form N1, which can be dephosphorylated back into the inactive form by a constitutively active phosphatase F. There are also two intermediate enzyme–substrate complexes, C = N0·M1 and D = N1·F, for these last enzymatic reactions. In cell signalling, one typically views (1) as a cascade of a subsystem described by the first group of reactions, involving the enzyme M in its various forms, and a second subsystem described by N in its various forms, as diagrammed in Fig. 1.
An instance of great biological interest is provided by the proteins from MAPK/ERK pathways. There are several different MAPK (“mitogen‐activated protein kinase”) pathways, in each cell of a given organism, as well as in cells of different organisms, but they all share the same basic architecture, comprising a set of phosphorylation/dephosphorylation covalent modification cycles (sometimes with multiple phosphorylation steps in each subsystem). They are found in all eukaryotes [8–11], and are key participants in the regulation of some of the most important cell processes, from cell division and gene expression to differentiation and apoptosis. The targeting of MAPK/ERK components is the focus of current‐generation drugs to treat advanced melanomas and a wide range of other tumours, including lung and thyroid cancers [12]. Normally, there are three, rather than two, components to MAPK cascades, corresponding to proteins generically called MAPK, MAPKK (“MAPK kinase”), and MAPKKK (“MAPKK kinase”), a typical example being given by Erk, Mek, and Ras respectively. Fig. 2 shows several typical MAPK pathways in mammalian cells. In our example, “M ” and “N ” could be MAPKK and MAPK respectively, or MAPKKK and MAPKK respectively.
Fig. 2.

MAPK pathways in mammalian cells
Image freely licensed from the Creative Commons media file repository
Let us denote the concentrations of the various species in (1) using the corresponding lower case letters
We assume mass action kinetics for each reaction, and an ordinary differential equation (ODE) model. For example, the forward reaction E + M0 → A will proceed at a rate k1 e m0, where k1 is a kinetic constant, so a differential equation for the state component e will have a term “−k1 e m0”. A set of independent conservation laws (corresponding to a basis of the left nullspace of the stoichiometry matrix in the usual sense, see Appendix) for this system is given by:
e + a = c_E | (2) |
g + b = c_G | (3) |
f + d = c_F | (4) |
m0 + m1 + a + b + c = c_M | (5) |
and
n0 + n1 + c + d = c_N | (6) |
We may think of c_E as the total amount of the constitutively active enzyme (free or in complex), c_G and c_F as the total amounts of the two phosphatases, c_M as the total amount of the first substrate in all free and bound forms, and c_N as the total amount of the second substrate in all free and bound forms.
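As a sanity check, the conservation relations above can be verified mechanically from the stoichiometry of (1). The following Python sketch (species and reaction lists transcribed from the text; the variable names are ours) builds the stoichiometry matrix and confirms that each conservation vector lies in its left nullspace:

```python
# Sketch: build the stoichiometry matrix of network (1) and check that the
# five conservation vectors (2)-(6) satisfy rho @ Gamma = 0.

species = ["e", "g", "f", "m0", "m1", "n0", "n1", "a", "b", "c", "d"]
idx = {s: i for i, s in enumerate(species)}

# Each reaction: (reactants, products), unit stoichiometry throughout.
reactions = [
    (["e", "m0"], ["a"]), (["a"], ["e", "m0"]), (["a"], ["e", "m1"]),
    (["g", "m1"], ["b"]), (["b"], ["g", "m1"]), (["b"], ["g", "m0"]),
    (["m1", "n0"], ["c"]), (["c"], ["m1", "n0"]), (["c"], ["m1", "n1"]),
    (["f", "n1"], ["d"]), (["d"], ["f", "n1"]), (["d"], ["f", "n0"]),
]

# Gamma[i][j] = net production of species i in reaction j.
Gamma = [[0] * len(reactions) for _ in species]
for j, (reac, prod) in enumerate(reactions):
    for s in reac:
        Gamma[idx[s]][j] -= 1
    for s in prod:
        Gamma[idx[s]][j] += 1

def support(names):
    """0/1 row vector supported on the given species."""
    return [1 if s in names else 0 for s in species]

conservation_laws = {
    "c_E": support({"e", "a"}),
    "c_G": support({"g", "b"}),
    "c_F": support({"f", "d"}),
    "c_M": support({"m0", "m1", "a", "b", "c"}),
    "c_N": support({"n0", "n1", "c", "d"}),
}

for name, rho in conservation_laws.items():
    row = [sum(rho[i] * Gamma[i][j] for i in range(len(species)))
           for j in range(len(reactions))]
    assert all(x == 0 for x in row), name
print("all five conservation laws satisfy rho @ Gamma = 0")
```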
The question that we wish to study is: how do steady states of the system change upon a change in one of these conserved quantities? Our interest is especially in understanding how, for example, a variation in the total amount of the second phosphatase, c_F, “backwards” affects the steady state of the first subsystem. Specifically, we wish to determine the direction of change (increase or decrease) in individual steady‐state components when such a parameter is perturbed. Moreover, we would like to find information that is robust to the actual values of the kinetic constants of each reaction. Experimental perturbations of quantities such as these conserved totals are implemented, in practice, through genetic, biochemical, and physical methods, including small‐molecule kinase inhibitors, changes in gene expression, repression of transcription by siRNAs, or laser trapping with optical tweezers.
Of course, there are no true “forward” and “backward” directions: the system is tightly connected, and the input/output formalism of control theory is inadequate as a paradigm (a point that was much emphasised by Willems in his work on behavioural foundations of systems theory [13]). Nonetheless, the idea of unidirectional information flow in MAPK and other cascades is well‐established and has biological substance, through the ultimate transfer of information from cell surface receptors to gene expression. This question of “backward propagation” of effects has been the subject of considerable research in the context of modularity of biological systems [14, 15] and, specifically, in the context of the “retroactivity” phenomenon [16–23]. Retroactivity is a fundamental systems‐engineering issue that arises when interconnecting biological subsystems, just as with electrical or mechanical systems: the effect of “loads” on the “output” of a system in effect creates biochemical “impedance” connections that are not obvious from a unidirectional signal‐flow view of information processing.
When we apply the theory to be developed in this paper, we find that perturbations of the total second substrate, , and perturbations of the second phosphatase, , both lead to changes in “upstream” steady states. This is an instance of the retroactivity phenomenon. More interestingly, these two types of perturbations of the “downstream” layer have opposite effects on steady‐state concentrations. This prediction has been tested experimentally and found to be correct [7].
We now turn to a precise problem statement, theoretical developments, and the description of an algorithm that addresses the question of directionality of changes in steady states upon parameter perturbations. We have developed a MATLAB® script, “CRNSESI” (Chemical Reaction Network SEnsitivity SIgns) that implements our procedure. After this, we return to the motivating example and display the signs of state changes for perturbations in each of the conserved quantities, as obtained from the use of CRNSESI. While in this example it turns out that the signs of state variations are unambiguously determined, such is not the case with other examples. To illustrate this lack of uniqueness, we provide a second example, a simple model of a phosphotransfer system, that exhibits ambiguity in one of the state components.
2 Preliminaries: general systems
We start with arbitrary systems of ODEs
dx/dt = f(x) | (7) |
The vectors x are assumed to lie in the positive orthant of ℝ^n, that is, x = (x1, …, xn) with each xi > 0, and f is a differentiable vector field, mapping the positive orthant into ℝ^n. We later specialise to ODEs that describe chemical reaction networks (CRNs), for which the abstract procedure to be described next can be made computationally explicit. In the latter context, we think of the coordinates xi of x as describing the concentrations of various chemical species Xi, i = 1, …, n.
Suppose that x̄(λ) describes a λ‐parametrised smooth curve of steady states for the system (7), where λ is a scalar parameter ranging over some open interval Λ. The steady‐state condition amounts to asking that
f( x̄(λ) ) = 0 | (8) |
for all values of the parameter λ ∈ Λ.
In addition to (8), we also assume that the steady states of interest are constrained by a set of algebraic equations
g( x̄(λ) ) = 0 | (9) |
where the number of constraints is some positive integer (which we take to be zero when there are no additional constraints). We write simply g = (g1, …, gq), where g is a differentiable mapping whose components are the gi's. Some or all gi might be linear functions, representing conserved moieties or stoichiometric constraints, but non‐linear constraints will be useful when treating certain examples, as will be discussed later.
Let us denote by
x̄′ = dx̄/dλ
the derivative of the vector function x̄ with respect to λ, viewed as a function Λ → ℝ^n.
We are interested in answering the following question: what are the possible signs of the components of the derivative x̄′(λ)?
Obviously, the answer to this question will, typically, depend on the chosen value of λ. The computation of the steady state as a function of λ will ordinarily involve the numerical approximate solution of non‐linear algebraic equations, or simulation of differential equations, and has to be repeated for each individual parameter λ. Our aim is, instead, to provide conditions that allow one to put constraints on these signs independently of the specific λ, and even independently of other parameters that might appear in the specification of f and of g, such as kinetic constants, and to do so using only linear algebraic and logical operations, with no recourse to numerical approximations.
Proceeding in complete generality, we take the derivative with respect to λ in (8), so that, by the chain rule, we have that J_f( x̄(λ) ) x̄′(λ) = 0, where J_f(x) denotes the Jacobian matrix of f evaluated at a state x. In other words,
x̄′(λ) ∈ null( J_f( x̄(λ) ) ) | (10) |
where null(A) denotes the nullspace of a matrix A. Similarly, taking derivatives in (9), we have that
x̄′(λ) ∈ null( J_g( x̄(λ) ) ) | (11) |
The reason for introducing separately f and g will become apparent later: we will be asking that each of the entries of the Jacobian matrix of g should not change sign over the state space (which happens, in particular, when g is linear, as is the case with stoichiometric constraints). No similar requirement will be made of f, but instead, we will study the special case in which f represents the dynamics of a CRN.
2.1 Notations for signs of vectors and of subspaces
We use the following sign notations. For any (row or column) vector u with real entries, the vector of signs of entries of u, denoted sign u, is the (row or column) vector with entries in the set {−1, 0, 1} whose i th coordinate satisfies: (sign u)_i = 1 if u_i > 0, (sign u)_i = 0 if u_i = 0, and (sign u)_i = −1 if u_i < 0.
(The function sign is sometimes called the “signature function” when viewed as a map ℝ → {−1, 0, 1}.) More generally, for any subspace S of vectors with real entries, we define sign S = { sign u : u ∈ S }.
Computing amounts to determining which orthants are intersected by . This combinatorial problem is studied in the theory of oriented matroids: given a basis of , the signs of represent the oriented matroid associated to a matrix that lists the basis as its columns, which is the set of “covectors” of this basis. See [24] for details and further theoretical discussion.
We also introduce the positive and negative parts of a vector u, denoted by u⁺ and u⁻ respectively, as follows: (u⁺)_i = max(u_i, 0) and (u⁻)_i = max(−u_i, 0) for each i.
Note that u⁺ ≥ 0, u⁻ ≥ 0, and:
u = u⁺ − u⁻ | (12) |
Suppose that u ∈ ℝ^n and v ∈ ℝ^n, for some positive integer n. The equality:
sign(u · v) = sign( (sign u) · (sign v) ) | (13) |
need not hold for arbitrary vectors; for example, if u = (1, −1) and v = (2, 1) then sign(u · v) = sign(1) = 1, but, on the other hand,
sign( (sign u) · (sign v) ) = sign(1 − 1) = 0
which is not equal to sign(u · v). However, equality (13) is true provided that we assume that (a) u ≥ 0 or u ≤ 0 (i.e. either u_i ≥ 0 for all i, or u_i ≤ 0 for all i, respectively), and also that (b) v ≥ 0 or v ≤ 0. This is proved as follows. Take first the case u ≥ 0 and v ≥ 0. Each term in the sum u · v = Σ_i u_i v_i is non‐negative. Thus, u · v > 0, that is, sign(u · v) = 1, if and only if u_i > 0 and v_i > 0 for some common index i, and sign(u · v) = 0 otherwise. Similarly, as each term in (sign u) · (sign v) is non‐negative, we know that (sign u) · (sign v) > 0, that is,
sign( (sign u) · (sign v) ) = 1
if and only if (sign u)_i = (sign v)_i = 1 for some i, and it equals 0 otherwise. But, (sign u)_i = (sign v)_i = 1 is the same as u_i > 0 and v_i > 0. Thus (13) is true. The case u ≤ 0 and v ≥ 0 can be reduced to u ≥ 0 and v ≥ 0 by considering −u instead of u, since both sides of (13) then simply change sign. Similarly for the remaining two cases.
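The argument above lends itself to brute‐force verification. The sketch below checks, over a small grid of integer vectors, that the identity sign(u · v) = sign((sign u) · (sign v)) holds whenever u and v each have entries of a single sign (conditions (a) and (b)), and exhibits a mixed‐sign counterexample; the helper names are ours:

```python
# Brute-force check of the sign identity: it holds whenever u and v each
# have entries of only one sign, though not for arbitrary vectors.
from itertools import product

def sgn(t):
    return (t > 0) - (t < 0)

vals = [-2, -1, 0, 1, 2]
n = 3
for u in product(vals, repeat=n):
    for v in product(vals, repeat=n):
        one_sign_u = all(x >= 0 for x in u) or all(x <= 0 for x in u)
        one_sign_v = all(x >= 0 for x in v) or all(x <= 0 for x in v)
        lhs = sgn(sum(a * b for a, b in zip(u, v)))
        rhs = sgn(sum(sgn(a) * sgn(b) for a, b in zip(u, v)))
        if one_sign_u and one_sign_v:
            assert lhs == rhs, (u, v)

# A mixed-sign counterexample: u = (1, -1), v = (2, 1).
print(sgn(1 * 2 + (-1) * 1))                    # sign(u . v) = 1
print(sgn(sgn(1) * sgn(2) + sgn(-1) * sgn(1)))  # sign(sign(u) . sign(v)) = 0
```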
2.2 A parameter‐dependent constraint set
Denoting
we have that (10) and (11) imply, in terms of the sign notations just introduced:
Therefore, one could in principle determine the possible values of once that is known. However, in applications one typically does not know explicitly the curve , which makes the problem difficult because the subspace depends on λ, and even computing the steady states is a hard problem. As discussed below, for the special case of ODE systems arising from CRNs, a more systematic procedure is possible. Before turning to CRNs, however, we discuss general facts true for all systems.
For every positive concentration vector x define:
| (14) |
| (15) |
| (16) |
The row vectors are used in order to generate arbitrary linear combinations of the rows of the Jacobian matrices of f and g, a set that is, ideally, rich enough to permit the unique determination of the sign of x̄′.
Since at a steady state , and , we also have that:
| (17) |
for every linear combination and .
We now prove an easy yet key result, which shows that the sign vectors in the set strongly constrain the possible signs of x̄′. For simplicity of notation, we drop λ in x̄(λ) and in x̄′(λ) when λ is clear from the context, and write simply π or ξ, with coordinates π_i and ξ_i, respectively.
To state the result, we use formal logic notations. Let and be the following logical disjunctions:
Recall that the “XNOR(p, q)” binary function has value “true” if and only if p and q are simultaneously true or simultaneously false. Consider the following statement, for any given , and with :
| (18) |
This statement is true if and only if for every it holds that either:
| (19) |
or:
| (20) |
(where i and j range over in all quantifiers). In other words, either all the coordinates of the vector
are zero, or the vector must have both positive and negative entries.
For any σ in the set of constraint sign vectors, let π = sign x̄′. Then (18) is true.
Pick σ and π as above, and suppose that (19) is false. Then, either there is some i such that σ_i π_i > 0 or there is some j such that σ_j π_j < 0. If σ_i π_i > 0 for some i, then the corresponding term in the sum in (17) is strictly positive. As (17) holds, the sum is zero, so that there must exist some other index j whose term is strictly negative, which means that σ_j π_j < 0. Similarly, if there is some j such that σ_j π_j < 0, necessarily there is some i such that σ_i π_i > 0, by the same argument. □
In terms of the original data, Lemma 1 can be rephrased as follows. For each parameter value , and each vector , either for all or there are both positive and negative numbers in this sequence; and similarly for the partial derivatives of g.
The condition (18) given in Lemma 1 is only necessary, not sufficient. It may well be the case that there are sensitivity signs that pass this test, yet are not realisable for a given set of kinetic constants. In our experience, however, and as shown by the worked out examples, (18) is enough to provide a minimal set of signs, and is tight in that sense.
Given any two sign vectors σ, π, testing property (18) is simple in any programming language. For example, in MATLAB® syntax, one may write:
XNOR = ( any(sigma.*pi > 0) == any(sigma.*pi < 0) );
and the variable XNOR will have value 1 if (18) is true and value 0 otherwise.
The basis of our approach will be as follows. We will show how to obtain a state‐independent set which is a subset of for all states x. In particular, for all steady states , we will have:
| (21) |
Compared to the individual sets , which depend on the particular steady state , the elements of this subset are obtained using only linear algebraic operations; the computation of does not entail solving non‐linear equations nor simulating differential equations. Since for all , it follows that
for any subset . Thus, we have:
| (22) |
because of Lemma 1. We will construct such subsets in our procedure, and test, for each potential sign vector π, whether the “orthogonality” property is true or not, with respect to elements of . Our procedure will provide the set . Often, our construction of leads to a that has just three elements, . (Note that is always a solution, and solutions always appear in pairs, since implies .)
To generate the set of possible signs, we carry out a sieve procedure (for a moderate number of species, this is easy and fast): we test, for each π, whether the conjunction in (22) is true; if the test fails, the sign vector π is eliminated from the list. The surviving π's are the possible sign vectors. Of course, since the conjunction in (22) is only a necessary, and not a sufficient, condition, we are not guaranteed to find a minimal set of signs. Observe that even though questions about this set are decidable using propositional logic (there is a finite number of possible sign vectors), they have high computational complexity; for example, deciding basic properties of the set is NP‐hard in the number of species. Good heuristics for such conjunctive normal form (CNF) problems include the Davis–Putnam–Logemann–Loveland (DPLL) algorithm [25]. The high computational complexity of these problems means that, generally speaking, our approach will only work well for relatively small networks.
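The sieve can be sketched in a few lines of Python (the constraint set G0 below is a made‐up illustration, not one of the paper's networks; the function names are ours):

```python
# Minimal sketch of the sieve: enumerate all sign vectors pi in {-1,0,1}^n
# and keep those satisfying the XNOR condition (18) against every constraint
# sign vector sigma in a given set.
from itertools import product

def xnor_test(sigma, pi):
    """Property (18): 'some sigma_i*pi_i > 0' XNOR 'some sigma_j*pi_j < 0'."""
    pos = any(s * p > 0 for s, p in zip(sigma, pi))
    neg = any(s * p < 0 for s, p in zip(sigma, pi))
    return pos == neg

def sieve(constraints, n):
    return [pi for pi in product((-1, 0, 1), repeat=n)
            if all(xnor_test(sigma, pi) for sigma in constraints)]

# Example: two illustrative constraints on n = 3 components.
G0 = [(1, 1, 0), (0, 1, 1)]
survivors = sieve(G0, 3)
print(survivors)
```

On this toy constraint set, exactly three sign vectors survive: the zero vector and a plus/minus pair, mirroring the remark above that the construction often yields just three elements.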
The key issue, then, is to find a way to explicitly generate a state‐independent subset of , and we turn to that problem next.
2.3 Sketch of idea
To provide some intuition, let us consider, for the motivating example, the differential equation for e, which takes the form:
de/dt = −k1 e m0 + (k−1 + k2) a
for some positive constants k1, k−1, and k2. Along a curve of steady states, we must have
−k1 e(λ) m0(λ) + (k−1 + k2) a(λ) = 0
and therefore, taking derivatives with respect to λ,
−k1 m0(λ) e′(λ) − k1 e(λ) m0′(λ) + (k−1 + k2) a′(λ) = 0 | (23) |
Since e(λ) > 0 and m0(λ) > 0, this means that the following triplets of signs for e′, m0′, and a′:
(−, −, +) and (+, +, −)
(together with their variants in which some, but not all, of the entries are replaced by zeroes) can never appear, since they would lead to a contradiction, namely a strictly positive and a strictly negative left‐hand side, respectively, in (23).
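This elimination of triplets can be mechanised. In the sketch below (helper names are ours), the coefficient signs (−, −, +) are those multiplying e′, m0′, and a′ in (23); a sign triplet is impossible exactly when every nonzero term of the left‐hand side has the same strict sign, since the sum then cannot vanish:

```python
# Enumerate the sign triplets ruled out by a single steady-state identity
# with known coefficient signs, as in (23).
from itertools import product

def forbidden(coeff_signs, derivative_signs):
    term_signs = [c * d for c, d in zip(coeff_signs, derivative_signs)]
    nonzero = [t for t in term_signs if t != 0]
    # Impossible iff at least one term is nonzero and all nonzero terms agree.
    return bool(nonzero) and (all(t > 0 for t in nonzero)
                              or all(t < 0 for t in nonzero))

coeffs = (-1, -1, 1)  # signs of the coefficients in (23)
impossible = [trip for trip in product((-1, 0, 1), repeat=3)
              if forbidden(coeffs, trip)]
print(impossible)
```

Of the 27 candidate triplets, 14 are eliminated here (the two fully strict ones listed above plus their partially zero variants), leaving 13 possible.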
We were able to derive this conclusion because the signs of the coefficients of , , and are uniquely determined independently of the value of λ. That fact, in turn, follows from the fact that the gradient of the function (which appears in the right‐hand side of the differential equation) has a constant sign. In contrast, if we had, for example, a differential equation like
then we would derive, arguing in the same manner, the constraint
and here the sign of the coefficient of , , cannot be determined unless the values of and are known.
In general, additional information can be obtained by using linear combinations of right‐hand sides. For example, still for the same example, consider the equation for m0:
dm0/dt = −k1 e m0 + k−1 a + k4 b
where k4 is the kinetic constant of the reaction in which the complex B dissociates into G and M0. Arguing as earlier, this leads to the identity
−k1 m0(λ) e′(λ) − k1 e(λ) m0′(λ) + k−1 a′(λ) + k4 b′(λ) = 0 | (24) |
Subtracting (24) from (23), we have that
k2 a′(λ) − k4 b′(λ) = 0
from which we conclude that a′ and b′ must have the same sign. Thus, we may obtain more information by taking linear combinations, but, again, we must check that the obtained coefficients have constant sign (which, in this case, is clear because k2 and k4 are constants). Our procedure is based on identifying such constant‐sign linear combinations, using only information from stoichiometry.
3 Sensitivities for CRNs
From now on, we assume that we have a system of differential equations associated to a chemical reaction network:
| (25) |
(see Appendix). Observe that the Jacobian of f factors as (∂f/∂x)(x) = Γ J_R(x), where J_R is the Jacobian matrix of R, which is the matrix whose (k, j)th entry is ∂R_k/∂x_j.
We will assume from now on also specified a differentiable mapping
where is some positive integer (possibly zero, to indicate the case where there are no additional constraints), and g has the property that
| (26) |
In other words, the gradients of the components of g, must have signs that do not depend on the state x.
We use g in order to incorporate, in particular, stoichiometric conservation laws, which are linear functions, and thus have constant gradients and therefore gradients whose signs do not depend on x. Recall that stoichiometric constraints are obtained from the matrix Γ as follows: one considers the vectors in the left nullspace of Γ, that is, the row vectors ρ such that ρΓ = 0. The linear functions x ↦ ρx are called conserved moieties or stoichiometric constraints; ρx(t) is constant along solutions of (25), since its time derivative is ρΓR(x(t)) = 0. Without loss of generality, one may take the vector ρ to have rational components or (clearing denominators) integer components, because the matrix Γ is rational. We emphasise that we do not include as components of g all stoichiometric constraints, or even all elements of a basis of the left nullspace of Γ. Indeed, in most examples of chemical reaction networks, this would lead to a unique steady state, or at most a discrete set of states. Our objective is precisely to study how steady states vary when one parameter varies, and hence a continuum of steady states is of interest.
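For concreteness, a basis of the left nullspace of Γ can be computed by exact Gaussian elimination over the rationals; the Python sketch below (with our own helper names) uses the toy network A + B ⇌ C in place of a real example:

```python
# Sketch: basis of the left nullspace of Gamma (candidate conservation laws)
# via exact row reduction over the rationals.
from fractions import Fraction

def left_nullspace(Gamma):
    """Basis of {rho : rho @ Gamma = 0}, i.e. the nullspace of Gamma^T."""
    m, n = len(Gamma), len(Gamma[0])
    # Rows of M are the columns of Gamma (so we row-reduce Gamma^T).
    M = [[Fraction(Gamma[i][j]) for i in range(m)] for j in range(n)]
    pivots, row = [], 0
    for col in range(m):
        piv = next((r for r in range(row, n) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        M[row] = [x / M[row][col] for x in M[row]]
        for r in range(n):
            if r != row and M[r][col] != 0:
                fac = M[r][col]
                M[r] = [a - fac * b for a, b in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    basis = []
    for free in range(m):
        if free in pivots:
            continue
        v = [Fraction(0)] * m
        v[free] = Fraction(1)
        for r, col in enumerate(pivots):
            v[col] = -M[r][free]
        basis.append(v)
    return basis

# Toy network A + B <-> C; species order (a, b, c), columns = two reactions.
Gamma = [[-1, 1],
         [-1, 1],
         [1, -1]]
for rho in left_nullspace(Gamma):
    assert all(sum(rho[i] * Gamma[i][j] for i in range(3)) == 0
               for j in range(2))
    print(rho)
```

The two basis vectors returned span the same space as the familiar conserved totals a + c and b + c of this toy network.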
The example in the introduction, for instance, has five independent constraints, and one may show (see Appendix) that when all constraints are imposed, the steady state (given a specified set of kinetic reaction parameters) is unique. However, if we keep all but one of the conserved quantities fixed but do not impose a constant value on the remaining one, a continuum of steady states exists, as that quantity is allowed to vary.
Observe that a non‐linear function g may sometimes also have the constant sign property. For example, suppose that , , and
where k 1 and k 2 are positive constants. Then the Jacobian matrix (gradient, since ) is:
which has constant sign .
For chemical reaction networks, it is not necessary for the entries of , and much less the entries of the products for vectors ν, to have constant sign. Our next task will be to introduce algebraic conditions that allow one to check if the sign is constant, for any given vector ν.
Before proceeding, however, we give an example of non‐constant sign. Take the following CRN, with and :
| (27) |
which is formally specified, assuming mass‐action kinetics, as follows:
Thus the ODE set corresponding to this CRN has:
Let , where, in general ei is the canonical row vector with a “1” in the i th position and zeroes elsewhere. Observe that does not have constant sign, because its second entry, which is the same as the (1, 2) entry of , is the function , which changes sign depending on whether or . Ruling out vectors ν that lead to such ambiguous signs is the purpose of our algorithm to be described next.
3.1 A first space
Introduce the following space:
Since , the definition (14) of becomes:
when specialised to CRN. Later on, we will explain how Property (26) allows us to obtain sign vectors induced by g (x) that are independent of x. On the other hand, the sign vectors generally depend on the particular x. The following lemma shows that, for vectors with non‐negative entries, the sign of the vector is the same, no matter what the state x is, and moreover, this sign can be explicitly computed using only stoichiometry information. We denote by
the j th column of the transpose Aᵀ, i.e. the transpose of the j th row of A.
For any positive concentration vector x, any non‐negative row vector ρ of size , and any species index :
(28) Thus, also
(29) since the expressions in each side of (28) can only be zero or positive.
We have that
where . Since every , the equality holds if and only if for all . Similarly, from
and we have that if and only if for all . From (47), in the Appendix on CRN, we conclude (28). □
Lemma 2 is valid for all non‐negative . When specialised to , and defining , it says that σ does not depend on x. However, elements of the form will generally not be non‐negative (nor non‐positive), so the lemma cannot be applied to them. Instead, we will apply Lemma 1 to the positive and negative parts of such a vector, but only when such positive and negative parts satisfy a certain “orthogonality” property, as defined by the subset of introduced below.
3.2 A state‐independent subset of Σ
For any , consider the sign vector , whose j th entry is if with , as well as the positive and negative parts of v, and . Define the following set of vectors (“G ” for “good”):
Observe that, if , then, from , it follows that
| (30) |
Consider the following set of sign vectors parametrised by elements of :
| (31) |
The key fact is that this is a subset of , for all x :
For every positive concentration vector x,
A proof is provided in Section 6.
To interpret the set , it is helpful to study the special case in which v is simply a row of , that is, and . Since
and the vectors and have non‐overlapping positive entries (by the non‐autocatalysis assumption), we have that and . Since , asking that this number be positive amounts to asking that
(32) Since , asking that this number is positive amounts to asking that
(33) Thus, if the network in question has the property that (32) and (33) cannot both hold simultaneously for any pair of species i, j, then we cannot have that both and hold. In other words, for all i.
As an illustration, take the CRN and treated in (27). We claim that , which reflects the fact that does not have constant sign. Indeed, in this case we have that, with and , and are reactants in but is also a product of reaction , which has as a reactant. Algebraically, and , so and . This means that , since the property defining would require that at least one of or should vanish. We have re‐derived, in a purely algebraic manner, the fact that changes sign.
Testing whether a given vector , with , belongs to is easy to do. For example, in MATLAB® ‐like syntax, one may write:
and we need to verify that the vectors and have disjoint supports, which can be done with the command
which returns 1 (true) if and only if , in which case we accept v and we may use to test the conditions in Lemma 1.
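The disjoint‐support test is a one‐liner in most languages; the Python sketch below (helper names are ours, and generic, since the exact vectors fed to the MATLAB® command depend on the network at hand) shows the check together with the positive and negative parts it is applied to:

```python
# Generic sketch of the acceptance test: compute positive/negative parts of a
# vector and check that two vectors have disjoint supports.

def pos_part(v):
    return [max(x, 0) for x in v]

def neg_part(v):
    return [max(-x, 0) for x in v]

def disjoint_supports(a, b):
    """True iff no index carries a nonzero entry in both a and b."""
    return all(x * y == 0 for x, y in zip(a, b))

v = [3, 0, -2, 0]
print(disjoint_supports(pos_part(v), neg_part(v)))  # v+ and v- never overlap
w_plus, w_minus = [1, 0, 2], [0, 0, 1]
print(disjoint_supports(w_plus, w_minus))  # index 2 lies in both supports
```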
3.3 Explicit generation of elements of
The set defined in (31) is constructed in such a way as to be independent of states x, which makes it more useful than the sets from a computational standpoint. Yet, in principle, computing this set potentially involves the testing of the conditions “ or ” that define the set , for every , that is, for every possible real‐valued vector (and each j). We describe next a more combinatorial way to generate the elements of .
We introduce the set of signs associated to the row span of :
| (34) |
Denote:
so that the j th column of is .
Pick any , , where . Then, for each :
By (13), applied with and , . By (13) applied with and , . Since, by (12) applied with , and , the conclusion follows.
In analogy to the definition of the set , we define (“G ” for “good”):
Observe that, if , then, since
(35) Consider the following set of sign vectors parametrised by elements of :
(36)
Pick any , , where . Then
and for such s and v,
(37) A proof is provided in Section 6.
.
Pick any element of , , . By Corollary 1, . Moreover, also by Corollary 1, , so we know that . Conversely, take an element . This means that for some . Let be such that . By Corollary 1, , and also . By definition of , this means that . □
We can simplify the definition a bit further, by noticing that the finite subset S can in fact be generated using only integer vectors. The definition in (34) says that:
A proof is provided in Section 6.
3.4 Adding rows to g by linear combinations of linear components
Recall that we made the assumption [Property (26)] that the components of g have gradients of constant sign. This means that the elements in the following subset, for all x:
| (38) |
where denotes the canonical row vector with a “ ” in the i th position and zeroes elsewhere, have constant sign, independently of the particular state x. We will also consider the following subset of , for all x :
| (39) |
where denotes the set of indices of rows of g that are linear functions, and means that is supported in , that is, whenever . Since a linear combination of linear functions is again linear, the elements of also have constant sign. Thus, we will only use elements of in our procedure, instead of arbitrary elements of . As part of our algorithm, we add selected combinations of such constraints as new components of g – ideally the whole sign space of the span of the rows, but in practice just a few sparse linear combinations suffice. If the coefficients of these linear functions are rational numbers (as is the case with coordinates of g that represent stoichiometric constraints), we may, without loss of generality, take integer combinations, as justified in the same manner as Lemma 5.
Let us explain, through an example, why this procedure is necessary. Suppose that the following are two rows of g
which might represent the conservation of two quantities. If is a curve of steady states, and denoting derivatives with respect to λ by primes, we have therefore that
The first of these tells us that the sign vector is either zero or must have two components of opposite signs, and the second one implies that . The conjunction of these two constraints gives the following set of possible signs:
(and the negatives of the last three). However, notice that, if we add to the rows of g also the difference , then we also know that , so that, in fact, we should also have that . Adding this constraint serves to eliminate the last two possibilities (as well as their negatives), giving the unique non‐zero solution [and its negative ]. Thus, adding the linear combination , even if it is redundant from a purely linear‐algebraic point of view, provides additional information when looking for signs.
3.5 Addition of “virtual constraints” to g
We have also found, when working out examples, that the following heuristic is useful. Consider the set consisting of all state‐dependent linear combinations
of the rows of the right‐hand side of the dynamics (25), where denotes the i th row of , and the 's are scalar functions. In abstract algebra terminology, when the reactions 's are polynomials (as with mass action kinetics), and if we restrict to polynomial coefficients , then is the ideal generated by the functions . Take any , and a parametrised set of positive steady states . Since , it follows that also for every . Now, suppose that one is able to find a function h of this form with the property that , where m (x) is a monomial and g (x) has a gradient of constant sign. Then for every , because at all positive x. This means that we may add g to the set of constraints. We call a function g of this form a “virtual constraint.”
Testing for the existence of such elements is in principle a difficult computational algebra problem. However, in many or even most natural examples of CRNs, the reaction functions are either linear or quadratic. If we consider only linear coefficient functions, then the combination elements h obtained by the above construction are at most polynomials of order three. Suppose that we look for factorisations of the form h = m g, where g is a polynomial of order at most two, and is such that the monomials in g all involve different variables. Such a g has a constant‐sign gradient (because the coordinates of its gradient are all either constants or single variables). Testing for such a factorisation, for each fixed choice of the monomial m and any fixed group of monomials for g, becomes a linear algebraic problem on the coefficients of the functions. We do not discuss this further in general, but only mention an example which will be useful when analysing a particular network below.
Suppose that some two rows of are as follows:
where we are denoting the coordinates of x as for reasons that will be clear when we discuss the network where this example appears. Taking and , we have that
where is a monomial, and:
has the gradient:
which has constant sign .
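In code, the factorisation test just sketched can be illustrated as follows. This is a Python toy (our implementation, described later, is a MATLAB script), and the polynomial encoding — a map from exponent tuples to coefficients — together with all function names is our own illustrative assumption, not part of the paper's machinery.

```python
def divide_by_power(h, j, d):
    """Try to factor x_j**d out of every monomial of h.
    h maps exponent tuples to non-zero coefficients.
    Returns the quotient polynomial, or None if x_j**d does not divide h."""
    g = {}
    for exps, c in h.items():
        if exps[j] < d:
            return None
        q = list(exps)
        q[j] -= d
        g[tuple(q)] = c
    return g

def constant_sign_gradient(g):
    """Sufficient condition from the text: every monomial of g is
    multilinear and no variable occurs in two monomials, so each
    partial derivative is a constant or a constant times one variable."""
    seen = set()
    for exps in g:
        if any(e > 1 for e in exps):
            return False
        vars_in = [i for i, e in enumerate(exps) if e > 0]
        if any(i in seen for i in vars_in):
            return False
        seen.update(vars_in)
    return True

def virtual_constraint(h, j, d=1):
    """Return g with constant-sign gradient such that h = x_j**d * g, if any."""
    g = divide_by_power(h, j, d)
    if g is not None and constant_sign_gradient(g):
        return g
    return None
```

For instance, with four variables, h = x0·(x1·x2 − x3) factors through x0 into a quotient whose gradient has constant sign on the positive orthant, whereas h = x0·x1² does not pass the multilinearity check.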
3.6 Remarks on global properties
We do not directly address in this study the issue of uniqueness of steady states in each stoichiometry class. In those examples in which the space of fixed conservation laws has codimension one, as in our example when we fix all except one of the values and so on, it is possible in principle that for each value of the remaining conserved quantity there may exist several equilibria. This is a well‐studied question for CRNs, see for instance [26–35]. A routine argument on CRNs can be used to prove that for our motivating example (1), steady states are unique once all conservation laws are taken into account (see Appendix).
However, in this work our concern has been with the determination of signs of sensitivities, and not their actual values. These are different questions. Indeed, signs might be unique even when values are not: different steady states may well “move” in the same direction upon a perturbation of parameters. For a completely trivial illustration, take any one‐dimensional (1D) differential equation . Even if f has multiple roots, leading to multiple steady states, is either equal to or at each steady state. This means that the signs of the elements in are unique (zero in the first case) or, at worst, unique up to sign reversals (in the second case). Note that any f which has the property that arises from some CRN, . Indeed, a representing CRN for , with , can be obtained as follows. For , we include a reaction with rate constant . For and , we introduce a reaction with rate constant . For and , we introduce a reaction with rate constant . Then , with if and if , and with if and if . (This network has autocatalytic reactions, but adding additional species turns it into one that does not.)
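The representing network outlined above can be checked numerically. The sketch below (Python; all names are ours) uses the standard construction, which we hedge as one realisation consistent with the outline rather than the paper's exact bookkeeping, and which requires the elided condition on f to be f(0) ≥ 0: a monomial c·x^j with c > 0 becomes a reaction jX → (j+1)X with rate constant c, and one with c < 0 (so necessarily j ≥ 1) becomes jX → (j−1)X with rate constant −c; under mass‐action kinetics the resulting scalar ODE is exactly ẋ = f(x).

```python
def crn_from_polynomial(coeffs):
    """Build mass-action reactions realising x' = f(x) for the scalar
    polynomial f(x) = sum_j coeffs[j] * x**j, assuming coeffs[0] >= 0.
    Each reaction is a tuple (a, b, k): a X -> b X with rate k * x**a."""
    reactions = []
    for j, c in enumerate(coeffs):
        if c > 0:
            reactions.append((j, j + 1, c))    # j X -> (j+1) X, rate c
        elif c < 0:
            assert j >= 1, "a negative constant term cannot be realised"
            reactions.append((j, j - 1, -c))   # j X -> (j-1) X, rate -c
    return reactions

def vector_field(reactions, x):
    """Mass-action right-hand side: sum over reactions of (b - a) * k * x**a."""
    return sum((b - a) * k * x**a for a, b, k in reactions)
```

For f(x) = 1 + 2x − 3x², the construction yields the reactions 0X → X, X → 2X, and 2X → X, and the mass‐action vector field coincides with f at every positive x.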
Another example is given by the 2D system that has vector field . At steady states of the form , , and , the first row of the Jacobian matrix is (and the second row is zero), where and , respectively. Thus, the nullspace is the span of , , or . These are three different subspaces, yet they all have the common sign (plus its negative, and zero). In summary, even though the tangent vectors are not unique, in this example signs are.
Suppose that signs of sensitivities are unique up to sign reversals and zero, that is, for some and all parameter values , . Then a global result along any smooth non‐singular ( for all λ) curve connecting steady states follows as a corollary. In other words, the conclusion from infinitesimal perturbations extends to global perturbations. Indeed, suppose that we want to compare the values of the steady‐state concentrations and at two parameter values . We have:
the sign depending on whether or for all λ (no change of sign is possible, by nonsingularity).
4 Summary and implementations
Our procedure for finding the set in (22), which contains all possible signs of derivatives , consists of the following steps:
Construct a subset (see below).
For each element , test the property , which defines . The vectors s that pass this test are collected into a set , which is known to be a subset of .
Take the set of elements of the form , for s in , and add to these the signs of the rows of the Jacobian of g, as well as a subset of combinations of linear components of g (by assumption, these sign vectors are independent of x). Let us call this set .
Optionally, add to sign vectors from “virtual constraints” as explained earlier.
Now apply the sieve procedure, testing the conjunction in (22). The elements π that pass this test are reported as possible signs of derivatives of steady states with respect to the parameter λ, in the sense that they have not been eliminated. These are the elements of .
If a unique (after eliminating 0 as well as one element of each pair ) solution remains, we stop. If there is more than one sign that passed all tests, and if was a proper subset of S, we may generate a larger set , and hence a potentially larger , and repeat the subsequent steps for the larger subset.
The theory guarantees that our procedure will eliminate all impossible sign vectors, thus providing a set of possible sign vectors. As is typically the case with heuristics for computationally intractable problems, there is no a priori guarantee that the set obtained by steps 1–5 should be a minimal such set, and this is why step 6 is included for further search.
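A minimal sketch of the sieve in step 5 is as follows, written in Python (our implementation is a MATLAB script), and assuming that the elimination test in (22) amounts to the standard orthogonality condition for sign vectors from oriented matroid theory [24]: a candidate π survives a constraint sign vector s unless the non‐zero products π_i·s_i are all of one sign. The function names are ours, and the precise conjunction used in the paper may differ in bookkeeping.

```python
def orthogonal(p, s):
    """Sign vectors p and s are orthogonal iff the non-zero products
    p_i * s_i are either absent, or include both a +1 and a -1."""
    prods = {pi * si for pi, si in zip(p, s) if pi * si != 0}
    return prods in (set(), {1, -1})

def sieve(candidates, constraints):
    """Keep only the candidate sign vectors compatible with every constraint."""
    return [p for p in candidates
            if all(orthogonal(p, s) for s in constraints)]
```

For example, against the single constraint (+, +, 0), the candidate (+, −, 0) survives (one agreeing and one disagreeing product) while (+, +, 0) and (+, 0, 0) are eliminated.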
The first step, constructing S, or a large subset of it, can be done in various ways. Since, by Lemma 5, we can generate S using integer vectors, the elements of S have the form where we may assume, without loss of generality, that each entry of is either zero or, if non‐zero, is either or . Thus, testing whether a sign vector s belongs to S amounts to testing the feasibility of a linear program (LP): we need that for those indices i for which , that for those indices i for which , and that for those indices i for which . (These are closed, not strict, conditions, as needed for an LP formulation.) This means that one can check each of the possible sign vectors efficiently.
One can combine the testing of LP feasibility with the search over the possible sign vectors into a mixed integer linear programming (MILP) formulation, by means of the technique called in the MILP field a “big M” approximation [36]. This is a routine reduction: one first fixes a large positive number M, and then formulates the following inequalities:
where the vector is required to be real and the variables , binary (). Given any solution, we have that (so ) for those i for which , (so ) for indices for which , and (i.e. ) when . (This trick will miss any solutions for which but M was not taken large enough that , or but M was not taken large enough that .) The resulting MILP can be solved using relaxation‐based cutting plane methods, branch and bound approaches, or heuristics such as simulated annealing [37, 38]. Such mixed‐integer techniques have been used for the related but very different problem of parameter identification for biochemical networks, see for instance [39].
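To make the “big M” device concrete, the following Python sketch encodes sign indicators for a fixed vector w and verifies by brute force that the only feasible binary assignment recovers sign(w); a MILP solver performs this search implicitly. The exact inequalities of the paper's formulation are not reproduced here — the indicator variables p_i, m_i and the particular constraints are our own illustrative encoding of the same trick.

```python
from itertools import product

def feasible(w, p, m, M):
    """Big-M inequalities linking w to binary indicators p, m:
    p_i = 1 forces w_i >= 1, m_i = 1 forces w_i <= -1,
    p_i = m_i = 0 forces w_i = 0, and at most one of p_i, m_i is set."""
    for wi, pi, mi in zip(w, p, m):
        if pi + mi > 1:
            return False
        if not (wi >= 1 - M * (1 - pi)):
            return False
        if not (wi <= -1 + M * (1 - mi)):
            return False
        if not (-M * (pi + mi) <= wi <= M * (pi + mi)):
            return False
    return True

def recover_sign(w, M=10.0):
    """Brute-force the binary variables and report the encoded sign vectors."""
    n = len(w)
    return [tuple(p[i] - m[i] for i in range(n))
            for p in product((0, 1), repeat=n)
            for m in product((0, 1), repeat=n)
            if feasible(w, p, m, M)]
```

With w = (2, −3, 0) and M = 10, the unique feasible assignment encodes the sign vector (+, −, 0), illustrating how a solver's binary variables read out signs.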
Often, however, simply testing sparse integer vectors in the integer‐generating form in Lemma 5 works well. In practice, we find that taking linear combinations with small coefficients of pairs of canonical basis vectors , and similarly for the appropriate conservation laws, is typically enough to obtain the set of all possible sign vectors π (up to all signs being reversed, and except for the trivial solution ).
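In code, this sparse search can be sketched as follows (Python rather than our MATLAB implementation; the matrix Γ and the generating scheme sign(Γv) over integer v are our notational assumptions based on Lemma 5):

```python
from itertools import combinations, product

def sgn(t):
    return (t > 0) - (t < 0)

def image_sign(gamma, v):
    """Componentwise sign of the matrix-vector product gamma v."""
    n = len(v)
    return tuple(sgn(sum(row[k] * v[k] for k in range(n))) for row in gamma)

def sparse_sign_vectors(gamma, coeffs=(-2, -1, 1, 2)):
    """Collect the sign vectors sign(gamma v) over integer vectors v supported
    on one or two canonical basis directions, with small non-zero coefficients."""
    n = len(gamma[0])
    signs = set()
    for i in range(n):
        for a in coeffs:
            v = [0] * n
            v[i] = a
            signs.add(image_sign(gamma, v))
    for i, j in combinations(range(n), 2):
        for a, b in product(coeffs, repeat=2):
            v = [0] * n
            v[i], v[j] = a, b
            signs.add(image_sign(gamma, v))
    return signs
```

For a toy Γ whose third row is the sum of the first two, the enumeration finds, for instance, (+, −, 0) and (+, +, +), and correctly never produces (+, +, −), which is unattainable.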
We have developed a MATLAB® script, “CRNSESI” (Chemical Reaction Network SEnsitivity SIgns) that implements our procedure. The examples given in the next section were worked out using this software. (Actual output from the program is shown in the Supplementary Materials.)
5 Three worked‐out examples
5.1 Kinase cascade
The example given in the introduction was worked out using CRNSESI. Specifically, we introduced stoichiometric constraints to keep all but one conservation law fixed, and analysed the signs of the resulting sensitivities along any curve, obtaining in each case a unique solution (up to sign reversals or the identically zero solution). The output of CRNSESI, for the concrete example given by reactions (1), can be summarised as shown below. In each case, “−1” or “1” means that the respective component of the state vector changes negatively or positively, respectively, under the corresponding perturbation.
if the first kinase, , decreases (keeping fixed):
if the first substrate, , increases (keeping fixed):
if the first phosphatase, , increases (keeping fixed):
if the second substrate, , decreases (keeping fixed):
if the second phosphatase, , decreases (keeping fixed):
If the opposite change is made on a total amount, then the signs get reversed. For example, if the second substrate, , increases, then we obtain:
Typically, one is also interested in the effect of perturbations on the total concentration of active kinase, free or bound, and the total concentration of product, free or bound, . Experimentally, these quantities are far easier to quantify using Western blots or mass spectrometry techniques [7]. In order to study changes in X and Y, we introduce “virtual” variables x and y and artificial stoichiometric constraints and , and re‐apply our algorithm. Results are as follows (using the same sign conventions as above):
if the first kinase, , decreases:
if the first substrate, , increases:
if the first phosphatase, , increases:
if the second substrate, , decreases:
if the second phosphatase, , decreases:
Notice the following remarkable phenomenon: when the total second substrate, , is perturbed, we see that x and y, the total amounts of active enzymes, both vary in the same direction. A network identification procedure that employs these experimental perturbations will infer a positive correlation between the measured activities of these enzymes. On the other hand, an experiment in which the second phosphatase, , is perturbed will lead to the inference of a “repression” edge in the graph. Indeed, when decreasing the second phosphatase, a “local” perturbation in the second layer, the total amount of active enzyme y increases, as it should, but the effect on the “upstream” layer quantified by x is negative, which suggests a repression of x by y. These issues, including the apparently paradoxical effect of two different perturbations leading to opposite conclusions, are extensively discussed in [7], which conducted an experimental validation of this idea.
In order to obtain the additional information, about total active kinase X and product Y, we proceeded as follows. We first add two artificial variables, x and y, so that the full state is now . The definitions of x and y are incorporated into two new “stoichiometric constraints” corresponding to these vectors in :
respectively. No change is made to the original stoichiometry matrix and original stoichiometric constraints, except for adding zeroes in the positions of x and y. The original algorithm can be run on this extended set. However, when adding artificial variables, such as x and y, which participate neither in the reactions nor in the original set of stoichiometric constraints, it is more efficient to first obtain solutions for the original problem, in which x and y have not yet been added, and only as a second step to add the “stoichiometric constraints” corresponding to the added variables. This typically results in a substantial savings of computing time. With this modified procedure, we obtained the following results.
fixed, so that only the first kinase, , is allowed to vary:
fixed, so that only the first substrate, , is allowed to vary:
fixed, so that only the first phosphatase, , is allowed to vary:
fixed, so that only the second substrate, , is allowed to vary:
fixed, so that only the second phosphatase, , is allowed to vary:
Let us interpret these solutions. Take for example the solution obtained when only the last substrate, , was allowed to vary. Both zero and the negative of this sign vector, namely:
are solutions. This negative version is easier to interpret: since the changes in are all positive and, by the definition (6), , these are the signs of changes in steady states when is experimentally increased. In this second form of the solution, we can read out the changes (positive for x and y, negative for b, and so forth) under such a perturbation.
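The two‐step procedure described above — solve the original problem first, then adjoin the artificial variables — admits a simple sketch. The function below is our own Python illustration, not CRNSESI code: for an artificial variable defined as a sum, with positive coefficients, of a set of species, the sign of its steady‐state change is forced whenever the non‐zero signs of the contributing species agree, and is undetermined otherwise; this mirrors what the sieve yields once the added constraint row is taken into account.

```python
def derived_sign(sign_vec, support):
    """Sign of the change in an artificial variable x = (positive
    combination of the species indexed by `support`), given the signs
    of the individual species: 0 if all contributors are 0, the common
    sign if the non-zero contributors agree, None (undetermined) otherwise."""
    nonzero = {sign_vec[i] for i in support if sign_vec[i] != 0}
    if not nonzero:
        return 0
    if len(nonzero) == 1:
        return nonzero.pop()
    return None
```

For instance, contributors with signs (+, +, 0) force a positive change, all‐zero contributors force zero, and mixed signs (+, −) leave the derived sign undetermined, as with the starred entries reported by CRNSESI.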
5.2 A phosphotransfer model
(We thank Domitilla del Vecchio for suggesting that we study this example.) Consider the two reversible reactions
(we display rate constants because they play a role in the virtual constraints described later). This network can be thought to describe a phosphotransferase Y which, when in active (phosphorylated) form transfers a phosphate group to (and hence becomes inactivated, denoted by , while becomes ), and which when active can also transfer a second phosphate group to (and hence becomes inactivated, while becomes ). We write coordinates of states as . Two conservation laws are as follows:
representing the conservation of total X and total number of phosphate groups.
Two rows of are and , so, as discussed earlier, using the virtual constraint obtained from , we may add to the following sign vector:
We ask now what happens if the total amount of kinase, , is allowed to vary, but keeping and constant.
CRNSESI returns this output:
(all signs could be reversed and that would also be a solution). This means that , , and change in the same direction, but in the opposite direction, and is undetermined (star). Since , an increase in means that both and increase, and thus we conclude that increases and decreases when the kinase amount is up‐regulated.
Is the fact that our theory cannot unambiguously predict the actual change in at steady state, under kinase perturbations, a reflection of an incomplete search by our algorithm, or an intrinsic property of this system? To answer this question, we simulated the system, taking for concreteness all parameters .
First, let us simulate a system in which and we study a 10% up‐regulation from . We start from the following two initial states:
which correspond to and , respectively. The steady states reached from here are as shown in the first and second rows, respectively, of the following matrix:
which means that the sign changes are:
consistent with our theoretical prediction.
Next, let us simulate a system in which , , and we study a 10% up‐regulation from , which is achieved by taking these two initial states:
which correspond to and , respectively. The steady states reached from here are as shown in the first and second rows, respectively, of the following matrix:
which means that the sign changes are now:
again consistent with our theoretical prediction.
These simulations explain why the actual change in at steady state, under kinase perturbations, cannot be unambiguously predicted by our algorithm, which does not take into account the numerical values of the conserved quantities (nor, for that matter, of the kinetic constants k i 's). It is remarkable, however, that the sign of the perturbation in the “active” form can be unambiguously predicted (and perhaps counter‐intuitive that the change is negative).
We also ran CRNSESI on two other scenarios: (1) keeping and constant gives these signs:
and (2) keeping and constant results in:
in which case the signs of perturbations in the variable are not uniquely defined.
5.3 A ligand/receptor/antagonist/trap example
(We thank Gilles Gnacadja for suggesting that we try CRNSESI on this example.) The paper [40] studied a system that models the binding of interleukin‐1 (IL‐1) ligand to IL‐1 type I receptor (IL‐1RI), under competitive binding to the same receptor by human IL‐1 receptor antagonist (IL‐1Ra). IL‐1Ra is used as a therapeutic agent in order to block IL‐1 binding (which causes undesirable physiological responses). In addition, the model included the presence of a decoy (or “trap”) receptor that binds to both IL‐1 and IL‐1Ra. A key question addressed in that paper was the determination of how the equilibrium concentration of the receptor–ligand complex depends on initial concentrations of the various players (reflected in variations in stoichiometrically conserved quantities), and specifically the determination of the direction of the changes in concentrations. We show here how CRNSESI recovers conclusions from that paper, which were obtained there through very ingenious and lengthy ad‐hoc computations.
We will employ the same notations as in [40]: the species , are, respectively, the ligand IL‐1, receptor IL‐1RI, antagonist IL‐1Ra, and trap; and the species , are, respectively, the complexes , , , and . Thus, the reaction network is
We use lower case letters to denote concentrations. There are four independent conservation laws:
We will fix , , and , and ask for the signs of the changes in steady states when is perturbed. The other cases (perturb , etc.) are of course similar.
It is easy to see that , for some positive constants , at all steady states, and this allows one to introduce an additional virtual constraint obtained from , meaning that we may add the following sign vector:
to . Indeed, four rows of the vector field are: , , , (for appropriate positive constants and ). So, at steady states, is a multiple of , and similarly for the other 's, which gives that and are both multiples of . Another way to say this is to note that the linear combination
gives
With this virtual constraint added, CRNSESI returns
for the signs of derivatives with respect to . Note that two variables are undetermined in sign. (To be more precise, CRNSESI also returns the negatives of these signs. However, since , and since all three of change with the same sign, the negative corresponds to the derivative with respect to .) This is exactly what is proved in [40] (see the first columns of the matrices in (10) and (12) in that paper). Notably, CRNSESI gave slightly more, namely that these particular signs of can never appear:
In other words, it cannot be the case that both and increase.
6 Some technical proofs
We collect here some of the longer proofs.
6.1 Proof of Lemma 3
Pick any , where , and fix any positive concentration vector x. We must prove that . As includes all expressions of the form , for , it will suffice to show that, for this same vector v,
(40) for each species index . For each , we will show the following three statements:
(41)
(42) and
(43) Suppose first that . Applying (28) with , we have that . Applying (29) with , we have that . Therefore,
thus proving (41). If, instead, and , a similar argument shows that (42) holds. Finally, suppose that . Then, again by (28), applied to and
and so (43) holds. The desired equality (40) follows from (41)–(43). Indeed, we consider three cases: (a) , (b) , and (c) . In case (a), (30) shows that (because the first and third cases would give a non‐negative value), and therefore , that is, , so (41) gives that is also negative. In case (b), similarly , and so (42) shows (40). Finally, consider case (c), . If it were the case that is non‐zero, then, since , , and therefore (30) gives that , a contradiction; similarly, must also be zero. So, (43) gives that as well. □
6.2 Proof of Proposition 1
Let , , and pick any . We claim that if and only if . Since j is arbitrary, this shows that if and only if . Indeed, suppose that . By Lemma 4, , so . Conversely, if then , for the same reason. Similarly, is equivalent to .
Suppose now that and , and pick any . Assume that . Since, by (35) and (30), and , we have, again by Lemma 4, that
If, instead, (and thus )
As j was arbitrary, and we proved that the j th coordinates of the two vectors in (37) are the same, the vectors must be the same. □
6.3 Proof of Lemma 5
Pick any . Thus , where for some . Consider the set of indices of the coordinates of v that vanish (equivalently, ),
Suppose that . Let denote the canonical column vector with a “ ” in the i th position and zeroes elsewhere, and introduce the matrix . The definition of means that and for all . The matrix has integer, and in particular rational, entries. Thus, the left nullspace of D has a rational basis, that is, there is a set of rational vectors , where q is the dimension of this nullspace, such that and if and only if u is a linear combination of the u i 's. In particular, since , there are real numbers such that . Now pick sequences of rational numbers as and define . This sequence converges to v, and, being combinations of the u i 's, for all k. Let , so we have that as , and for all k. On the other hand, for each , as , for all large enough k, , the j th coordinate of , has the same sign as . In conclusion, for large enough k, . Multiplying the rational vector by the least common denominator of its coordinates, the sign does not change, but now we have an integer vector with the same sign. □
7 Acknowledgments
This research was supported in part by NIH Grant 1R01GM100473, ONR Grant N00014‐13‐1‐0074, and AFOSR Grant FA9550‐14‐1‐0060.
9.1 A review of chemical reaction network terminology
We review here some basic notions about chemical networks. See, for example [41, 42] for more details. We consider a collection of chemical reactions that involves a set of “species”:
The “species” might be ions, atoms, or large molecules, depending on the context. A CRN involving these species is a set of chemical reactions , , represented symbolically as:
(44)
where the and are some non‐negative integers that quantify the number of units of species consumed, respectively, produced, by reaction . Thus, in reaction 1, units of species combine with units of species and so on, to produce units of species , units of species and so on, and similarly for each of the other reactions. (If there is a reverse reaction to (44), with and , one sometimes summarises both by a reversible arrow . However, from a theoretical standpoint, we view each direction as a separate reaction.)
We will assume the following “non‐autocatalysis” condition: no species can appear on both sides of the same reaction. With this assumption, either or for each species and each reaction (both are zero if the species in question is neither consumed nor produced). Note that we are not excluding autocatalysis which occurs through one or more intermediate steps, such as the autocatalysis of in , so this assumption is not as restrictive as it might at first appear.
Suppose that for some (i, k); then we say that species is a reactant of reaction , and by the non‐autocatalysis assumption, for this pair (i, k). If instead , then we say that species is a product of reaction , and again by the non‐autocatalysis assumption, for this pair (i, k).
It is convenient to arrange the a ik 's and b ik 's into two matrices A, B, respectively, and introduce the stoichiometry matrix . In other words,
is defined by:
(45)
The matrix has as many columns as there are reactions. Its k th column shows, for each species (ordered according to their index i), the net “produced–consumed” by reaction . The symbolic information given by the reactions (44) is summarised by the matrix . Observe that if is a reactant of reaction , and if is a product of reaction .
To describe how the state of the network evolves over time, one must provide in addition to a rule for the evolution of the vector:
where the notation means the concentration of the species at time t. We will denote the concentration of simply as and let . Observe that only non‐negative concentrations make physical sense. A zero concentration means that a species is not present at all; we will be interested in positive vectors x of concentrations, those for which for all i, meaning that all species are present.
Another ingredient that we require is a formula for the actual rate at which the individual reactions take place. We denote the algebraic form of the k th reaction by . We postulate the following two axioms that the reaction rates must satisfy:
for each (i, k) such that species is a reactant of , for all (positive) concentration vectors x;
for each (i, k) such that species is not a reactant of , for all (positive) concentration vectors x.
These axioms are natural, and are satisfied by every reasonable model, and specifically by mass‐action kinetics, in which the reaction rate is proportional to the product of the concentrations of all the reactants:
The positive coefficients are called the reaction, or kinetic, constants. By convention, when .
Recall that and if and only if is a reactant of . Therefore the above axioms state that, for every positive x,
(46)
and also
(47)
because the expressions on both sides are either zero or positive.
We arrange reactions into a column vector function :
With these conventions, the system of differential equations associated to the CRN is given as in (25), which we repeat here for convenience:
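In code, this right‐hand side can be assembled directly from the matrices A and B under mass‐action kinetics, as in the Python sketch below (all names are ours, and the network A + B → C used to exercise it is purely a toy illustration):

```python
def mass_action_rhs(A, B, k, x):
    """Right-hand side Gamma * R(x), with stoichiometry Gamma = B - A and
    mass-action rates R_j(x) = k[j] * prod_i x[i]**A[i][j]."""
    n, r = len(A), len(A[0])
    rates = []
    for j in range(r):
        rate = k[j]
        for i in range(n):
            rate *= x[i] ** A[i][j]       # product over reactant concentrations
        rates.append(rate)
    return [sum((B[i][j] - A[i][j]) * rates[j] for j in range(r))
            for i in range(n)]
```

For the single reaction A + B → C with rate constant 2, the consumption of A and B matches the production of C, reflecting the columns of B − A.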
9.2 Existence and uniqueness for steady states in the example
For our motivating example (1), steady states are unique once all conservation laws are taken into account. Existence of steady states follows from the fact that states evolve in a compact convex set, as argued, for example, in [23] (Supplemental Material). Uniqueness is shown as follows. Steady states satisfy that the right‐hand sides of the differential equations:
(where are some positive constants) are set to zero, together with the conservation laws. We argue as follows, using the constraints to first express all variables in terms of e, seen as a parameter, and then pointing out that this forces e to be uniquely determined (“increasing” and “decreasing” functions always means strictly so):
the conservation law for gives that a is a decreasing function of e;
substituting into and solving for gives that is a decreasing function of e;
from , b is an increasing function of a, and therefore b is a decreasing function of e;
substituting into and solving for gives that is an increasing function of b, and thus is a decreasing function of e;
the conservation law for gives that c is a decreasing function of , a, , b, so c is an increasing function of e;
from , d is an increasing function of c, so d is an increasing function of e;
solving for gives that is increasing in c and decreasing in , so is an increasing function of e and an increasing function of c, and thus is an increasing function of e;
substituting into and solving for gives that is an increasing function of d, so is an increasing function of c, and thus is an increasing function of e.
In conclusion, the sum of concentrations is a strictly increasing function of concentration of e. Thus, the constraint provides a unique possible value for e. Substituting back, (unique) values are obtained for all other concentrations.
8 References
- 1. Kholodenko B.N. Kiyatkin A. Bruggeman F. Sontag E.D. Westerhoff H., and Hoek J. ‘Untangling the wires: a novel strategy to trace functional interactions in signalling and gene networks’. Proc. National Academy of Sciences USA, 2002, 99, 12841–12846 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2. de la Fuente A. Brazhnik P., and Mendes P.: ‘Linking the genes: inferring quantitative gene networks from microarray data’, Trends Genet., 2002, 18, (8), 395–398. (doi: 10.1016/S0168-9525(02)02692-6) [DOI] [PubMed] [Google Scholar]
- 3. Gardner T.S. di Bernardo D. Lorenz D., and Collins J.J.: ‘Inferring genetic networks and identifying compound mode of action via expression profiling’, Science, 2003, 301, (5629), pp. 102–105 (doi: 10.1126/science.1081900) [DOI] [PubMed] [Google Scholar]
- 4. Andrec M. Kholodenko B.N. Levy R.M., and Sontag E.D.: ‘Inference of signalling and gene regulatory networks by steady‐state perturbation experiments: structure and accuracy’, J. Theoret. Biol., 2005, 232, (3), pp. 427–441 (doi: 10.1016/j.jtbi.2004.08.022) [DOI] [PubMed] [Google Scholar]
- 5. Santos S.D.M. Verveer P.J., and Bastiaens P.I.H.: ‘Growth factor induced MAPK network topology shapes Erk response determining PC‐12 cell fate’, Nat. Cell Biol., 2007, 9, pp. 324–330 (doi: 10.1038/ncb1543) [DOI] [PubMed] [Google Scholar]
- 6. Kholodenko B. Yaffe M.B., and Kolch W.: ‘Computational approaches for analyzing information flow in biological networks’, Sci. Signal., 2012, 5, (220), re1. (doi: 10.1126/scisignal.2002961) [DOI] [PubMed] [Google Scholar]
- 7. Prabakaran S. Gunawardena J., and Sontag E.D.: ‘Paradoxical results in perturbation‐based signalling network reconstruction’, Biophys. J., 2014, 106, pp. 2720–2728 (doi: 10.1016/j.bpj.2014.04.031) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Pearson G. Robinson F., and Beers Gibson T. et al.: ‘Mitogen‐activated protein (MAP) kinase pathways: regulation and physiological functions’, Endocr. Rev., April 2001, 22, (2), pp. 153–183 [DOI] [PubMed] [Google Scholar]
- 9. Huang C.‐Y.F., and Ferrell J.E.: ‘Ultrasensitivity in the mitogen‐activated protein kinase cascade’, Proc. Natl. Acad. Sci. USA, 1996, 93, pp. 10078–10083 (doi: 10.1073/pnas.93.19.10078) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Shaul Y.D., and Seger R.: ‘The MEK/ERK cascade: from signalling specificity to diverse functions’, Biochim. Biophys. Acta, 2007, 1773, (8), pp. 1213–1226 (doi: 10.1016/j.bbamcr.2006.10.005) [DOI] [PubMed] [Google Scholar]
- 11. Bardwell L. Zou X. Nie Q., and Komarova N.L.: ‘Mathematical models of specificity in cell signalling’, Biophys. J., 2007, 92, (10), pp. 3425–3441 (doi: 10.1529/biophysj.106.090084) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Ledford H.. First‐in‐class cancer drug approved to fight melanoma. http://blogs.nature.com/news/2013/05/first‐in‐class‐cancer‐drug‐approved‐to‐fight‐melanoma.html, May 2013. Nature news blog
- 13. Willems J.C.: ‘Behaviors, latent variables, and interconnections’, Syst., Control Inf., 1999, 43, (9), pp. 453–464 [Google Scholar]
- 14. Lauffenburger D.A.: ‘Cell signalling pathways as control modules: complexity for simplicity? Proc. Natl. Acad. Sci. USA, 2000, 97, pp. 5031–5033 (doi: 10.1073/pnas.97.10.5031) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Hartwell L.H. Hopfield J.J. Leibler S., and Murray A.W.: ‘From molecular to modular cell biology’, Nature, 1999, 402, pp. 47–52 (doi: 10.1038/35011540) [DOI] [PubMed] [Google Scholar]
- 16. Saez‐Rodriguez J. Kremling A., and Gilles E.D.: ‘Dissecting the puzzle of life: modularization of signal transduction networks’, Comput. Chem. Eng., 2005, pp. 619–629 (doi: 10.1016/j.compchemeng.2004.08.035) [DOI] [Google Scholar]
- 17. Del Vecchio D. Ninfa A.J., and Sontag E.D.: ‘Modular cell biology: Retroactivity and insulation’, Nat. Mol. Syst. Biol., 2008, 4, p. 161 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Kim K.H., and Sauro H.M.: ‘Measuring retroactivity from noise in gene regulatory networks’, Biophys. J., March 2011, 100, (5), pp. 1167–1177 (doi: 10.1016/j.bpj.2010.12.3737) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Alexander R.P. Kim P.M. Emonet T., and Gerstein M.B.: ‘Understanding modularity in molecular networks requires dynamics’, Sci. Signal., 2009, 2, (81), p. 44 (doi: 10.1126/scisignal.281pe44) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Jiang P. Ventura A.C. Sontag E.D. Merajver S.D. Ninfa A.J., and Del Vecchio D.: ‘Load‐induced modulation of signal transduction networks’, Sci. Signal., 2011, 4, (194), ra67 (doi: 10.1126/scisignal.2002152) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. Kim Y. Paroush Z. Nairz K. Hafen E. Jimenez G., and Shvartsman S. Y.: ‘Substrate‐dependent control of MAPK phosphorylation in vivo’, Mol. Syst. Biol., 2011, 7, p. 467. (doi: 10.1038/msb.2010.121) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22. Jayanthi S. Nilgiriwala K.S., and Del Vecchio D.: ‘Retroactivity controls the temporal dynamics of gene transcription’, ACS Synth. Biol., August 2013, 2, (8), pp. 431–441 (doi: 10.1021/sb300098w) [DOI] [PubMed] [Google Scholar]
- 23. Barton J., and Sontag E.D.: ‘The energy costs of insulators in biochemical networks’, Biophys. J., 2013, 104, pp. 1390–1380 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Björner A., Las Vergnas M., Sturmfels B., White N., and Ziegler G.M., ‘Oriented matroids’ (Cambridge University Press, 1999, 2nd edn.)
- 25. Harrison J., ‘Handbook of practical logic and automated reasoning’ (Cambridge University Press, New York, NY, USA, 2009, 1st edn.)
- 26. Feinberg M.: ‘Chemical reaction network structure and the stability of complex isothermal reactors – I. The deficiency zero and deficiency one theorems’, Chem. Eng. Sci., 1987, 42, pp. 2229–2268 (doi: 10.1016/0009-2509(87)80099-4)
- 27. Feinberg M.: ‘The existence and uniqueness of steady states for a class of chemical reaction networks’, Arch. Ration. Mech. Anal., 1995, 132, pp. 311–370 (doi: 10.1007/BF00375614)
- 28. Conradi C., Saez‐Rodriguez J., Gilles E.‐D., and Raisch J.: ‘Using chemical reaction network theory to discard a kinetic mechanism hypothesis’, IEE Proc. Syst. Biol., 2005, 152, pp. 243–248 (doi: 10.1049/ip-syb:20050045)
- 29. Sontag E.D.: ‘Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T‐cell receptor signal transduction’, IEEE Trans. Autom. Control, 2001, 46, (7), pp. 1028–1047 (doi: 10.1109/9.935056)
- 30. Craciun G., and Feinberg M.: ‘Multiple equilibria in complex chemical reaction networks: extensions to entrapped species models’, IEE Proc. Syst. Biol., 2006, 153, (4), pp. 179–186 (doi: 10.1049/ip-syb:20050093)
- 31. Craciun G., Tang Y., and Feinberg M.: ‘Understanding bistability in complex enzyme‐driven reaction networks’, PNAS, 2006, 103, (23), pp. 8697–8702 (doi: 10.1073/pnas.0602767103)
- 32. Wang L., and Sontag E.D.: ‘On the number of steady states in a multiple futile cycle’, J. Math. Biol., 2008, 57, pp. 29–52 (doi: 10.1007/s00285-007-0145-z)
- 33. Siegal‐Gaskins D., Grotewold E., and Smith G.D.: ‘The capacity for multistability in small gene regulatory networks’, BMC Syst. Biol., 2009, 3, p. 96 (doi: 10.1186/1752-0509-3-96)
- 34. Wilhelm T.: ‘The smallest chemical reaction system with bistability’, BMC Syst. Biol., 2009, 3, p. 90 (doi: 10.1186/1752-0509-3-90)
- 35. Halasz A.M., Lai H.‐J., McCabe Pryor M., Radhakrishnan K., and Edwards J.S.: ‘Analytical solution of steady‐state equations for chemical reaction networks with bilinear rate laws’, IEEE/ACM Trans. Comput. Biol. Bioinform., 2013, 10, (4), pp. 957–969 (doi: 10.1109/TCBB.2013.41)
- 36. Griva I., Nash S.G., and Sofer A., ‘Linear and nonlinear optimization’ (Society for Industrial Mathematics, Philadelphia, 2008, 2nd edn.)
- 37. Papadimitriou C.H., and Steiglitz K., ‘Combinatorial optimization: algorithms and complexity’ (Prentice‐Hall, Inc., Upper Saddle River, NJ, USA, 1982)
- 38. Williams H.P., ‘Logic and integer programming’ (Springer Publishing Company, Incorporated, 2009, 1st edn.)
- 39. Guillen‐Gosalbez G., Miro A., Alves R., Sorribas A., and Jimenez L.: ‘Identification of regulatory structure and kinetic parameters of biochemical networks via mixed‐integer dynamic optimization’, BMC Syst. Biol., October 2013, 7, (1), p. 113 (doi: 10.1186/1752-0509-7-113)
- 40. Gnacadja G., Shoshitaishvili A., Gresser M.J., et al.: ‘Monotonicity of interleukin‐1 receptor‐ligand binding with respect to antagonist in the presence of decoy receptor’, J. Theor. Biol., February 2007, 244, (3), pp. 478–488 (doi: 10.1016/j.jtbi.2006.07.023)
- 41. Sontag E.D.: ‘Lecture Notes on Mathematical Systems Biology’, online text, http://www.math.rutgers.edu/~sontag/systems_biology_notes.pdf, 2014
- 42. Ingalls B., ‘Mathematical modeling in systems biology’ (MIT Press, 2013)
